
Concurrent Track Panel Discussions: NEW METHODS TO VALIDATE AUDIENCE ESTIMATES

This live panel featured the presenters from the New Methods to Validate Audience Estimates track, with moderator Megan Margraff of Oracle following up on key points involving alternate data currencies, data harmonization and normalization, fragmentation challenges and advanced targeting in TV and CPG.

The Exploding Complexity of Programming Research, and How to Measure It, When Content is King

Programming researchers are not getting the data they need to make informed decisions, and Joan FitzGerald (Data ImpacX) used streaming’s complex ecosystem to explain the conundrum facing programmers. Despite an inundation of new forms of data, key insights into monetization and performance remain unsupported, leaving programmers without a comprehensive picture of their audience. Together with Michael McGuire of MSA, Joan outlined a methodology funnel that combines 1st, 2nd and 3rd party data to create equivalized metrics that, once leveraged, could meet critical programming research demands.
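As a rough illustration of the equivalized-metrics idea, here is a minimal sketch with invented source names, coverage weights, and figures; it is not Data ImpacX’s or MSA’s actual methodology, only one plausible way to blend multiple data sources into a single comparable audience number:

```python
# Hypothetical sketch: harmonizing audience measures from 1st-, 2nd- and
# 3rd-party sources into one equivalized unit (average audience), assuming
# each source reports total viewing minutes plus an assumed coverage weight.

def equivalized_audience(sources, duration_minutes):
    """Combine per-source viewing minutes into one average-audience
    estimate, weighting each source by its assumed market coverage."""
    total_minutes = 0.0
    total_weight = 0.0
    for s in sources:
        total_minutes += s["viewing_minutes"] * s["coverage_weight"]
        total_weight += s["coverage_weight"]
    # Average audience = weighted viewing minutes / program duration
    return (total_minutes / total_weight) / duration_minutes

# Illustrative figures only
sources = [
    {"name": "panel", "viewing_minutes": 120_000, "coverage_weight": 0.5},
    {"name": "acr",   "viewing_minutes": 150_000, "coverage_weight": 0.3},
    {"name": "stb",   "viewing_minutes": 135_000, "coverage_weight": 0.2},
]
avg_audience = equivalized_audience(sources, duration_minutes=60)
```

The key design point is that every source is converted to the same unit before blending, so downstream programming-research questions can be answered from one comparable metric.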

FORECASTING 2022: How Can Scenario Planning Improve Agility in Adjusting to Change?

On July 12, 2022, forecasting and product experts shared frameworks and strategies for participants to consider as they plan amid disruptions in the industry. Presenters discussed techniques marketers could use to drive consumer action and advocacy, econometric models for search trends, insights on holistic analytics programs, reflections on gold-standard probability methods, and new forecasting techniques in the wake of the pandemic, among other topics.

Panel Discussion

Carl Mela (Duke University) helmed a panel of the day’s presenters to further review the “vanguard work of MMM in 2022.” Granularity inspired the most debate among the panelists; other topics meriting discussion included causality, the cadence of modeling versus decision-making, false trust in priors, marketing mix modeling’s (MMM) worst mistakes and lack of precision, and methods for long-term ROI and branding.

How Cox Communications Leveraged Next Generation Measurement to Drive Organizational Change and Prepare for Uncertainty

Analytic Partners’ Trent Huxley interviewed client Mallory Fetters of Cox Communications on the telecom’s marketing measurement strategy. Mallory expanded on how constantly evolving challenges, both common across marketing and specific to the industry (data deprecation, shifting consumer media behaviors, demand for faster speeds, and growing consumer choice and competition), rapidly accelerated Cox’s learning curve.

Unlocking the Value of Alternative Linear TV Currencies with Universal Forecasting

Matt Weinman (TelevisaUnivision) and Spencer Lambert (datafuelX) shared the methodology and results from testing TelevisaUnivision’s initiative that, with datafuelX’s technology, enabled their advertising partners to choose their preferred currency in forecasting both long- and short-term audiences for their programming. Implementation required adjusting the business flow to accommodate multiple measurement sources, with each source ingested, validated, and normalized to the tech standard separately.
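The per-source pipeline described above (ingest, validate, normalize to the tech standard) could be sketched as follows; the field names, scale factor, and validation rule are illustrative assumptions, not datafuelX’s implementation:

```python
# Hypothetical sketch of a per-currency ingestion pipeline: each alternative
# currency feed is ingested, validated, and normalized to a common standard
# separately before entering the forecasting flow.

REQUIRED_FIELDS = {"network", "daypart", "audience"}

def validate(record):
    """Reject records missing required fields or reporting negative audiences."""
    return REQUIRED_FIELDS <= record.keys() and record["audience"] >= 0

def normalize(record, scale_to_standard):
    """Rescale a source's audience figure to the common tech standard."""
    out = dict(record)
    out["audience"] = record["audience"] * scale_to_standard
    return out

def ingest(source_records, scale_to_standard):
    return [normalize(r, scale_to_standard) for r in source_records if validate(r)]

# Illustrative feed: one valid record, one missing a required field
feed = [
    {"network": "UNI", "daypart": "prime", "audience": 1_200_000},
    {"network": "UNI", "daypart": "prime"},
]
standardized = ingest(feed, scale_to_standard=0.95)
```

Keeping validation and normalization separate per source is what lets a new currency be added without disturbing the rest of the flow.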

Holistic Cross-Media Measurement

Brendan Kroll of Nielsen and Anne Ori and Daniel Sacks, both of Google, explained that their study aimed to identify potential improvements to marketing mix models by utilizing enhanced prior beliefs (priors) derived from sales lift studies, and to explore how campaign-level sales lift estimates changed once those priors were incorporated.
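One common way to incorporate lift-study priors, shown here purely as an illustrative sketch and not as the Nielsen/Google method, is a precision-weighted (conjugate normal) blend of the prior with the model’s own estimate of a channel coefficient:

```python
# Hypothetical sketch: shrinking an MMM channel coefficient toward a prior
# derived from a sales lift study. All numbers are invented for illustration.

def posterior_coefficient(prior_mean, prior_var, mle_estimate, mle_var):
    """Precision-weighted blend of the lift-study prior and the model's
    own (MLE) estimate of a channel's sales effect."""
    prior_precision = 1.0 / prior_var
    data_precision = 1.0 / mle_var
    post_var = 1.0 / (prior_precision + data_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            data_precision * mle_estimate)
    return post_mean, post_var

# Lift study suggests +0.8 sales units per GRP with low variance;
# the MMM alone estimates +2.0 but with much higher variance.
mean, var = posterior_coefficient(0.8, 0.04, 2.0, 0.36)
```

Because the lift study is more precise than the model’s own estimate in this example, the blended coefficient lands much closer to the prior, which is exactly the shrinkage behavior priors are meant to provide.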

Complexities of Integrating Big Data and Probability Sample People Meter Data

Nielsen compared the implied ratings from ACR data and STB data in homes where they also have meters. The correlation was quite high, though panel adjustments raised the rating levels by about 1%. Big Data are limited in different ways: not all sets in a home provide ACR or STB data, the data lack person-level information, and STBs are often powered on while the TV is off. Nielsen presented how a panel of 40,000 homes can be used to correct those biases. A critical finding was that projecting MVPD data outside of its geographic footprint significantly changed network shares. That said, Big Data can significantly improve local market data, where samples are necessarily much smaller.
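A minimal sketch of the panel-calibration idea, with invented numbers and field names rather than Nielsen’s actual procedure: big-data viewing in metered overlap homes is compared with the panel measurement for the same homes, and the resulting factor is applied to the wider big-data footprint.

```python
# Hypothetical sketch of panel calibration: in homes carrying both a people
# meter and ACR/STB data, compare the two measurements and derive a factor
# that corrects big-data biases (e.g. STB powered on while the TV is off).

def calibration_factor(matched_homes):
    """Ratio of panel-measured viewing to big-data implied viewing
    across the overlap homes."""
    panel_total = sum(h["panel_minutes"] for h in matched_homes)
    bigdata_total = sum(h["bigdata_minutes"] for h in matched_homes)
    return panel_total / bigdata_total

def adjusted_rating(bigdata_rating, matched_homes):
    """Apply the overlap-derived factor to a rating from the full footprint."""
    return bigdata_rating * calibration_factor(matched_homes)

# Illustrative overlap homes
overlap = [
    {"panel_minutes": 55, "bigdata_minutes": 60},  # STB on, TV off part of the time
    {"panel_minutes": 30, "bigdata_minutes": 30},
    {"panel_minutes": 0,  "bigdata_minutes": 10},
]
corrected = adjusted_rating(2.0, overlap)  # deflates the raw big-data rating
```

In practice the correction would be computed within cells (market, device type, daypart) rather than as a single global ratio, but the principle is the same.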

Expanding Spanish Language Audiences

Sergey Fogelson and Edouardo Vitale, both from TelevisaUnivision, outlined their motivations for developing a custom lookalike model (LAM) to expand under-represented Spanish-language audiences:

  • Misidentification: 4 in 10 Hispanics are excluded from 3p datasets.
  • Waste: 70% of impressions targeted at Hispanics are wasted.
  • Scale: The true scale of the Hispanic population within a given brand’s 1p dataset is hard to identify without extensive validation.
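To make the lookalike idea concrete, here is a minimal generic sketch; the features, threshold, and distance rule are illustrative assumptions, not TelevisaUnivision’s model. Unlabeled households are scored by how close their features sit to the centroid of a verified seed audience:

```python
# Hypothetical sketch of a lookalike model (LAM): expand a verified
# Spanish-language seed audience by admitting candidates whose feature
# vectors lie near the seed centroid. All values are invented.
import math

def centroid(rows):
    """Element-wise mean of the seed audience's feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def expand_audience(seed, candidates, max_dist=2.0):
    """Keep candidates whose features are close to the seed centroid."""
    c = centroid(seed)
    return [cand for cand in candidates if distance(cand, c) <= max_dist]

# Features: [spanish_content_hours, hispanic_network_share, bilingual_search_rate]
seed = [[8.0, 0.7, 0.5], [10.0, 0.8, 0.6]]
candidates = [[9.0, 0.75, 0.55],   # close to the seed profile
              [1.0, 0.05, 0.0]]    # far from it
lookalikes = expand_audience(seed, candidates)
```

A production LAM would typically use a trained classifier and validated labels rather than a raw distance rule, which speaks directly to the validation burden noted in the last bullet above.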

A Layman’s Guide to Cross Media Reach & Frequency Measurement Using Virtual IDs

  • By Niraj Patel, Horizon Media, Young Pros Officer

On May 17, the ARF Analytics Council explored the groundbreaking concept of Virtual IDs (VIDs) and their potential to revolutionize cross-media measurement. The essential mechanics of VIDs were explained in a non-technical manner to help professionals across media and advertising better understand them. Panelists shared how VIDs could overcome barriers in calculating cross-media and device reach and frequency.
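A highly simplified sketch of the VID concept, with invented signal fields rather than any actual VID specification: impressions are mapped to stable pseudonymous IDs, which can then be deduplicated across media to yield reach and frequency without a real person-level identifier.

```python
# Hypothetical sketch: assign each impression a Virtual ID (here, a
# deterministic hash of coarse device/household signals), then deduplicate
# across media for reach and frequency. Signal fields are illustrative.
import hashlib
from collections import Counter

def virtual_id(signals):
    """Map an impression's coarse signals to a stable pseudonymous ID."""
    key = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(key.encode()).hexdigest()[:12]

def reach_and_frequency(impressions):
    """Reach = distinct VIDs; frequency = impressions per reached VID."""
    counts = Counter(virtual_id(s) for s in impressions)
    reach = len(counts)
    frequency = sum(counts.values()) / reach
    return reach, frequency

impressions = [
    {"geo": "10001", "device": "ctv",    "hh": "a"},  # same virtual person...
    {"geo": "10001", "device": "ctv",    "hh": "a"},  # ...seen twice
    {"geo": "94105", "device": "mobile", "hh": "b"},
]
reach, freq = reach_and_frequency(impressions)
```

Real VID systems assign IDs probabilistically and calibrate against panels; the deterministic hash here is only to show why a shared pseudonymous ID makes cross-media deduplication possible.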
