
Foundations of Incrementality

Sophie MacIntyre, Ads Research Lead, Marketing Science, Meta

Randomized Controlled Trials (RCTs) are the gold standard for unbiased measurement of incrementality, according to Sophie MacIntyre (Meta). However, RCTs are not always available, so Meta explored other methods to improve the measurement of incrementality. Meta’s researchers wanted to know how close non-experimental methods could come to the experimental result. Even with sophisticated observational methods, the researchers were unable to accurately measure an ad campaign’s effect, and traditional non-experimental models such as propensity score matching and double machine learning were difficult to use and produced large errors. Sophie presented incrementality as a ladder of options that get closer to measuring true business value as the ladder is ascended, with each rung reflecting how well a measurement approach can isolate the effect of a campaign from other factors. This research, undertaken in collaboration with the MMA, analyzed non-incremental models, quasi-experiments with incrementality models and randomized experiments. Meta found that incrementality could be achieved with modeling if the research included some RCTs: using Predictive Incrementality by Experimentation (PIE) estimates for decision making led to results similar to experiment-based decisions. Sophie noted that academic collaborations provide quantitative evidence of the value of incremental methods. Key takeaways:
  • Incrementality matters because it is the foundation of good business decisions and should be the “North Star.”
  • Randomized Controlled Trials (RCTs) are the gold standard for determining incrementality.
  • Using a significant amount of data and complex models can improve the performance of observational methods but does not accurately measure an ad campaign’s effect.
  • Using Predictive Incrementality by Experimentation (PIE) estimates for decision making leads to results similar to experiment-based decisions.
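The calibration idea behind PIE can be sketched in a few lines: take the campaigns where both an RCT and an observational model were run, learn a mapping from observational lift to experimental lift, and apply that mapping to campaigns that have no experiment. This is a minimal illustration of the concept, not Meta's implementation; all function names and numbers below are invented for the example.

```python
# Hedged sketch of the PIE idea: calibrate observational lift estimates
# against a small set of RCT results, then apply the calibration to
# campaigns without an experiment. All data below is illustrative.

def fit_calibration(observational, experimental):
    """Ordinary least squares of RCT lift on observational lift (y = a + b*x)."""
    n = len(observational)
    mean_x = sum(observational) / n
    mean_y = sum(experimental) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(observational, experimental))
    var = sum((x - mean_x) ** 2 for x in observational)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

def pie_estimate(observational_lift, intercept, slope):
    """Predicted experimental lift for a campaign measured only observationally."""
    return intercept + slope * observational_lift

# Campaigns where both an RCT and an observational model were run:
obs = [0.10, 0.25, 0.40, 0.55]   # model-based lift estimates
rct = [0.05, 0.12, 0.21, 0.28]   # experimental (ground-truth) lift

a, b = fit_calibration(obs, rct)
# Apply the calibration to a campaign with no RCT:
print(round(pie_estimate(0.30, a, b), 3))  # → 0.152
```

The sketch makes the point in the takeaway concrete: the observational estimates are not trusted on their own, but once anchored to a handful of experiments they can support decisions similar to experiment-based ones.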


Business Outcomes in Advertising Powered by Machine Learning

Brett Mershmann, Sr. Director, Research & Development (R&D), NCSolutions

Brett Mershmann’s (NCSolutions) discussion focused on quantifying the incremental advantages of contemporary machine learning (ML) frameworks over more traditional incrementality measurements. Brett began with an overview of both traditional modeling techniques and contemporary ML campaign measurements. To understand the differences, he detailed an 11-experiment process using real observational household data intersected with real campaign impression data, but with simulated outcomes generated by a defined outcome function. The experiments measured accuracy, validity and power. The team also compared ML with randomized controlled trials (RCTs), noting that RCTs are the gold standard but are not always feasible: they ran both an RCT and an ML analysis by creating test-control groups on real, limited data, applying the same outcome function, which depended on a larger set of variables, to each. In closing, Brett shared results from these experiments, which supported ML as a powerful method of measurement and a viable alternative to RCTs, and he highlighted the importance of getting the correct data into these models for optimum results. Key takeaways:
  • A survey from the CMO Council indicated that 56% of marketers want to improve their campaign measurement performance in the next 12 months.
  • Traditional campaign measurement techniques use household matching (nearest-neighbor), household matching (propensity) and inverse propensity weighting (IPW), based on simple statistical models applied uniformly. These methods simulate balanced test and control groups to estimate a group-wise counterfactual.
  • The ML measurement technique, using NCSolutions’ measurement methodology, is computationally robust for large, complex data sets, understanding that data is not one-size-fits-all and estimates counterfactual for individual observations.
  • Simple A/B testing does not capture the true effect, while the counterfactual approach uses a "what-if model" approach to estimate the true effect.
  • The experiments comparing ML to traditional methods, measuring accuracy, validity and power showed that:
    • Accuracy: Machine learning was the most accurate method in 55% of scenarios, compared to inverse propensity weighting (9%), propensity match (27%) and nearest-neighbor match (8%).
    • Validity: Measured as the percentage of scenarios with the true effect inside the confidence interval, ML gave valid estimates most often (91%), compared with inverse propensity weighting (82%), propensity match (64%) and nearest-neighbor match (73%).
    • Power: Machine learning is more statistically powerful: the average confidence interval width for ML was 1.48, compared to inverse propensity weighting (1.56), propensity match (1.78) and nearest-neighbor match (1.72).
  • Results from ML vs. RCTs: Both ML and RCT are accurate in campaign measurement, both methods are generally valid, but ML is more powerful.
    • Overall, ML can be an adequate substitute for RCTs, providing meaningful estimates when running an RCT is not possible.
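The individual-level counterfactual ("what-if model") idea in the takeaways above can be illustrated with a toy model. This is a hedged sketch, not NCSolutions' proprietary methodology: a simple least-squares outcome model stands in for the ML model, a single pre-period covariate stands in for the full feature set, and all numbers are invented.

```python
# Hedged sketch of counterfactual ("what-if") measurement: fit an outcome
# model on unexposed households only, predict what each exposed household
# would have done absent the campaign, and read incremental lift as
# actual minus predicted. Data below is illustrative.

def fit_outcome_model(covariate, outcome):
    """Least-squares outcome model y = a + b*x, fit on controls only.
    A production system would use a flexible ML model over many covariates."""
    n = len(covariate)
    mx, my = sum(covariate) / n, sum(outcome) / n
    b = sum((x - mx) * (y - my) for x, y in zip(covariate, outcome)) / \
        sum((x - mx) ** 2 for x in covariate)
    return my - b * mx, b

def incremental_lift(treated, control):
    """treated/control: lists of (pre_period_spend, observed_outcome) pairs."""
    a, b = fit_outcome_model([x for x, _ in control], [y for _, y in control])
    # Individual-level counterfactual for each exposed household:
    gaps = [y - (a + b * x) for x, y in treated]
    return sum(gaps) / len(gaps)

control = [(1.0, 1.1), (2.0, 2.0), (3.0, 3.1), (4.0, 3.9)]   # unexposed
treated = [(1.5, 2.0), (2.5, 3.0), (3.5, 4.1)]               # exposed
print(round(incremental_lift(treated, control), 3))  # → 0.508
```

Unlike a group-wise comparison of test and control means, this approach produces a per-household counterfactual, which is the distinction the takeaways draw between the traditional methods and the ML technique.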


A Clean Room Incrementality Experiment – An Indeed Case Study

Joe Zucker, Senior Manager, Marketing Analytics, Indeed

Clean room experiments are challenging in an online marketplace, such as Indeed’s job site for employers and employees, due to potential online experimentation biases, including activity bias, ad server bias and base rate bias, according to Joe Zucker (Indeed). Control groups can be created in multiple ways, with different degrees of technical setup or, in some cases, external modeling. The five variations of control groups are ghost ads, publisher house ads, PSA ads, propensity score matching and intent to treat. A comparison indicated that each option has pros and cons, including cost and the need for additional data or publisher support; Joe reminded the audience that there is “no free lunch.” Indeed would prefer ghost ads to create the control group; however, this option has high technical set-up requirements, few publisher partners have the capability and it offers low control over the analysis. There are also challenges in interpreting experimental results, including low match/conversion rates and the need to analyze experiments with different control group constructions. As a result of these clean room experiments, Indeed was able to measure aggregate incrementality for their campaign metrics and prove the value of their advertising. Key takeaways:
  • Despite the challenges of clean room experiments, these experiments are critical to the measurement of the incremental impact of advertising on KPIs.
  • Clean room experiments can ensure high quality continuous reporting with actionable analytics and insights while achieving user data privacy compliance.
  • Experimentation enabled Indeed to focus on new customers in a cookie-free, privacy-forward manner with the ability to verify advertiser data.
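One of the control-group designs listed above, intent to treat (ITT), has a particularly simple readout: everyone randomized into the test cell counts as treated, whether or not an ad was actually served. The sketch below is a generic illustration of that readout, not Indeed's analysis; all names and numbers are invented.

```python
# Hedged illustration of an intent-to-treat (ITT) incrementality readout:
# compare conversion rates between the randomized test and holdout cells,
# with a normal-approximation confidence interval. Numbers are illustrative.
import math

def itt_lift(test_users, test_conv, holdout_users, holdout_conv, z=1.96):
    """Absolute incremental conversion rate with an approximate 95% CI."""
    p_t = test_conv / test_users
    p_h = holdout_conv / holdout_users
    lift = p_t - p_h
    se = math.sqrt(p_t * (1 - p_t) / test_users +
                   p_h * (1 - p_h) / holdout_users)
    return lift, (lift - z * se, lift + z * se)

lift, (lo, hi) = itt_lift(test_users=100_000, test_conv=2_300,
                          holdout_users=100_000, holdout_conv=2_000)
print(f"incremental conversion rate: {lift:.4f}, 95% CI ({lo:.4f}, {hi:.4f})")
```

A design caveat worth noting: because not every test-cell user actually sees an ad, ITT dilutes the per-exposed effect, which is one reason the session weighed it against ghost ads and the other control-group options.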


2023 Attribution & Analytics Accelerator

The Attribution & Analytics Accelerator returned for its eighth year as the only event focused exclusively on attribution, marketing mix models, in-market testing and the science of marketing performance measurement. The boldest and brightest minds took the stage to share their latest innovations and case studies. Modelers, marketers, researchers and data scientists gathered in NYC to quicken the pace of innovation, fortify the science and galvanize the industry toward best practices and improved solutions. Content is available to event attendees and ARF members.


FORECASTING 2023: Managing Risk — How Businesses Can Get Better Visibility into the Near and Long-Term Future

Managing business risk involves having a rational, data-driven view of the future while simultaneously being as prepared as possible for external shocks — from a global pandemic and the ensuing supply-chain disruptions, to inflation, data signal losses, war, and great power competition. At our annual Forecasting event, held virtually on July 18, leading experts shared how businesses can adapt forecasting techniques to manage risk.


Prior Attentive Ad Exposures Increase Ad Attention

Tristan Webster and Kenneth Wilbur showcased their most recent collaborative work examining attention and frequency in advertising: the impact of multiple exposures on people’s attention to TV ads. They used CTV data that TVision collected natively in the field to provide insight into the long-examined question, “Is there an optimal frequency for TV ads?”, and, more granularly, “What is happening in the media environment while viewers see ads, and how does that affect their attention?”

MODERATED TRACK DISCUSSIONS: Cross-Platform: Measurement & Identity

Moderator Jorge (TikTok) asked the panelists three key questions:

  • The pros and cons of using mobile phones as meters in a time of such strong privacy concerns;
  • The panelists’ views on measurement of advertising effectiveness; and
  • The most unexpected feedback they received.

Enabling Alternative TV Measurement for Buyers and Sellers

Pete Doe (Xandr) and Caroline Horner (605) provided a case study of their partnership that derived results from alternative currency measurement with buy- and sell-side perspectives. Xandr’s nimble workflow method enabled 605’s shift from advanced targeting to a very specific, custom-built, “persuadable” target audience, with a 2x to 10x increase in outcomes.


Day 4 Panel Discussion & Closing Remarks

Maggie Zhang of NBC Universal invited all the presenters back to a wrap-up session called “Attribution Pivot,” where she asked what challenges marketers are facing and how they are meeting them. Each presenter provided insight into important attribution challenges that they, as marketers, or their clients are facing. Limitations include the inability to do A/B testing, privacy issues and the looming issue of cookie deprecation. It is also difficult to determine long-term lift, such as lifetime value.