Business Outcomes in Advertising Powered by Machine Learning
Brett Mershmann – Sr. Director, Research & Development (R&D), NCSolutions
Brett Mershmann's (NCSolutions) discussion focused on quantifying the incremental advantages of contemporary machine learning (ML) frameworks over traditional incrementality measurement. Brett began with an overview of both traditional modeling techniques and contemporary ML campaign measurement. To compare the two, he detailed an 11-experiment process using real observational household data intersected with real campaign impression data, but with simulated outcomes generated by a defined outcome function. The experiments measured accuracy, validity, and power. The team also compared ML with randomized controlled trials (RCTs), noting that RCTs are the gold standard but are not always feasible; to do so, they ran both an RCT and an ML analysis by creating test and control groups on real but limited data, applying to each the same outcome function, which depended on a larger set of variables. In closing, Brett shared results from these experiments, which supported ML as a powerful measurement method and a viable alternative to RCTs, and he highlighted the importance of getting the correct data into these models for optimal results.

Key takeaways:

- A survey from the CMO Council indicated that 56% of marketers want to improve their campaign measurement performance in the next 12 months.
- Traditional campaign measurement techniques use household matching (nearest-neighbor or propensity-based) and inverse propensity weighting (IPW), simple statistical models applied uniformly. These methods simulate balanced test and control groups to estimate a group-wise counterfactual.
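These traditional estimators are straightforward to sketch. The following minimal simulation (all variable names, numbers, and the single-confounder setup are hypothetical, not from the talk) shows why a naive exposed-vs-unexposed comparison is biased and how inverse propensity weighting corrects for confounded exposure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: one household covariate drives both ad
# exposure and purchasing, confounding the comparison.
x = rng.normal(size=n)                       # household covariate
p = 1 / (1 + np.exp(-x))                     # exposure propensity
t = rng.binomial(1, p)                       # confounded ad exposure
y = 2.0 * x + 1.0 * t + rng.normal(size=n)   # true ad effect = 1.0

# Naive difference-in-means is biased upward, because exposed
# households already purchase more regardless of the ad.
naive = y[t == 1].mean() - y[t == 0].mean()

# IPW (normalized/Hajek form): reweight each household by the
# inverse probability of the exposure it actually received.
# The true propensity p is used here for brevity; in practice it
# is estimated, e.g. with logistic regression on covariates.
w1, w0 = t / p, (1 - t) / (1 - p)
ipw = (w1 * y).sum() / w1.sum() - (w0 * y).sum() / w0.sum()

print(f"naive={naive:.2f}  ipw={ipw:.2f}")
```

The naive estimate lands well above the simulated true effect of 1.0, while the IPW estimate recovers it; this is the sense in which these methods "simulate" a balanced comparison.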
- NCSolutions' ML measurement technique is computationally robust for large, complex data sets; it recognizes that data is not one-size-fits-all and estimates a counterfactual for each individual observation.
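The individual-level counterfactual idea can be illustrated with a generic "two-model" (T-learner) sketch: fit one outcome model per exposure group, then predict both potential outcomes for every household. This is a stand-in for the concept, not NCSolutions' proprietary methodology, and the data is simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(size=n)                       # household covariate (hypothetical)
p = 1 / (1 + np.exp(-x))
t = rng.binomial(1, p)                       # confounded ad exposure
y = 2.0 * x + 1.0 * t + rng.normal(size=n)   # true incremental effect = 1.0

# T-learner sketch: a production system would use a flexible ML
# model; plain least squares on a small feature basis stands in.
def fit(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
b1 = fit(X[t == 1], y[t == 1])   # outcome model under exposure
b0 = fit(X[t == 0], y[t == 0])   # outcome model without exposure

# An individual counterfactual estimate for every household,
# not just a single group-wise contrast.
tau = X @ b1 - X @ b0
ate = tau.mean()
print(f"estimated average effect = {ate:.2f}")
```

Averaging the per-household estimates recovers the simulated effect of 1.0, but the per-household `tau` values are what distinguish this style of estimator from the group-wise methods above.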
- Simple A/B testing does not capture the true effect, while the counterfactual approach uses a "what-if" model to estimate it.
- The experiments comparing ML to traditional methods on accuracy, validity, and power showed that:
- Accuracy: Machine learning outperformed on accuracy in 55% of scenarios, compared with inverse propensity weighting (9%), propensity match (27%) and nearest-neighbor match (8%).
- Validity: Measured as the percentage of scenarios with the true effect inside the confidence interval, ML gave valid estimates most often (91%), compared with inverse propensity weighting (82%), propensity match (64%) and nearest-neighbor match (73%).
- Power: Machine learning is more statistically powerful: its average confidence-interval width was 1.48, compared to inverse propensity weighting (1.56), propensity match (1.78) and nearest-neighbor match (1.72).
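The validity and power metrics above can be reproduced in miniature. The sketch below (simulated data; the numbers it prints are illustrative and unrelated to the study's results) computes confidence-interval coverage as validity and average interval width as the power proxy, across repeated scenarios:

```python
import numpy as np

rng = np.random.default_rng(2)
TRUE_EFFECT = 1.0

def one_scenario(n=2_000):
    # Simulated campaign; exposure is randomized here for brevity.
    x = rng.normal(size=n)
    t = rng.binomial(1, 0.5, size=n)
    y = 2.0 * x + TRUE_EFFECT * t + rng.normal(size=n)
    diff = y[t == 1].mean() - y[t == 0].mean()
    se = np.sqrt(y[t == 1].var(ddof=1) / (t == 1).sum()
                 + y[t == 0].var(ddof=1) / (t == 0).sum())
    return diff - 1.96 * se, diff + 1.96 * se   # 95% CI

cis = [one_scenario() for _ in range(200)]
# Validity: fraction of scenarios whose CI contains the true effect.
validity = np.mean([lo <= TRUE_EFFECT <= hi for lo, hi in cis])
# Power proxy: narrower average CIs mean a more powerful method.
power_proxy = np.mean([hi - lo for lo, hi in cis])

print(f"validity={validity:.2f}  avg CI width={power_proxy:.2f}")
```

Running the same harness with each estimator substituted into `one_scenario` is, in spirit, how the method comparison above is scored.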
- Results from ML vs. RCTs: Both ML and RCTs are accurate in campaign measurement and both are generally valid, but ML is more powerful.
- Overall, ML can be an adequate substitute for RCTs, providing meaningful estimates when running an RCT is not possible.