The Power of Big Data

Big data’s power can only go so far, cautioned Nielsen’s Kimberly Gilberti in this presentation. Envisioning a future where big data’s integration into measurement is calibrated against panel assets, she addressed big data’s gaps in TV sources and usage, such as CTV, video games, and smart TVs’ native apps. This brief overview of how Nielsen uses big data detailed the enrichment needs for MVPD and ACR data, as well as the power of people-based panels to fill in the missing pieces.

How Truthscores Can Significantly Improve the Accuracy of Addressable Marketing

Truthset was founded 18 months ago to address the concern that the attributes in big datasets used to define target audiences are not always accurate. Aaron reported that one data provider’s female audience segment that Truthset examined turned out to be only 58% female. Truthset receives data on 2.6 billion IDs from 19 data providers each quarter and provides them with “truth scores” for the demographic attributes of each ID. The truth scores, which range from 0 to 1, represent Truthset’s estimate of the probability that an assigned attribute is correct.
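A score-as-probability scheme like this can be used both to estimate a segment’s expected accuracy and to filter it to high-confidence IDs. The sketch below is purely illustrative — the field names, threshold, and data are hypothetical, not Truthset’s actual schema:

```python
# Hypothetical sketch of using 0–1 "truth scores" (the probability an
# assigned attribute is correct) to assess and filter an audience segment.

def expected_accuracy(segment):
    """Mean truth score = expected share of correctly attributed IDs."""
    return sum(r["truth_score"] for r in segment) / len(segment)

def filter_segment(segment, min_score=0.8):
    """Keep only IDs whose assigned attribute is likely correct."""
    return [r for r in segment if r["truth_score"] >= min_score]

# Toy "female" segment with per-ID confidence scores (made-up values).
segment = [
    {"id": "a1", "attribute": "female", "truth_score": 0.95},
    {"id": "b2", "attribute": "female", "truth_score": 0.40},
    {"id": "c3", "attribute": "female", "truth_score": 0.85},
]

print(round(expected_accuracy(segment), 2))                  # 0.73
print(round(expected_accuracy(filter_segment(segment)), 2))  # 0.9
```

Averaging the scores gives the expected share of correctly attributed IDs — the same kind of figure as the 58%-female finding above.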

Assuring Research Integrity in Data-Driven TV

Xandr’s Peter Doe reinforced the omnipresence of bias in TV measurement as he outlined four key areas of bias in assessing DirecTV’s (DTV) set-top box (STB) data for its national data-driven linear TV advertising. Noting that DTV’s relatively small sample (7M STB homes) carries a high level of bias when measuring national TV viewing, Peter provided a top-line overview of Xandr’s viewership data methodology relevant to advertisers and marketers working with big datasets.

Perspectives on the Most Important Consumer and Video Marketplace Changes

The first major challenge to measurement is the vast amount of content to measure and the need to deduplicate audiences across many screens. The second is inclusiveness – showing the face of America – which is also an opportunity. The third is pursuing identity and privacy at the same time.

Big Data and Converging TV — What Role Can Deterministic Panels Play in Unlocking Opportunity?

Return Path Data. Set Top Box Data. Millions of Consumer Devices. Server Logs. In an era where big data is being tapped for decision making, and each source offers a limited and often unrepresentative view, what is the role of a representative panel? What will and should panels look like in the future? After all, TV is converging with digital, the rise of CTV has ushered in content and marketing opportunities for businesses, and consumers have decision-making power unlike ever before. At this ARF Insights Studio, industry leaders Jane Clarke (CIMM), Pete Doe (Xandr), Mainak Mazumdar (Nielsen) and Paul Donato (ARF) discussed where single source panels may fit in the media measurement landscape of the future and how they can work alongside big data to the benefit of both.

The Exploding Complexity of Programming Research, and How to Measure It, When Content is King

Programming researchers are not getting the data they need to make informed decisions, and Joan FitzGerald (Data ImpacX) used streaming’s complex ecosystem to explain the conundrum facing programmers. Despite the inundation of new forms of data, key insights into monetization and performance go unsupported, leaving programmers without a comprehensive picture of their audience. Together with Michael McGuire (MSA), Joan outlined a methodology funnel that combines first-, second- and third-party data to create equivalized metrics that, once leveraged, could meet critical programming research demands.

Identity Resolution Group: Demystifying Data Cleanrooms

Data clean room technology has had a place in the advertising ecosystem for years but has become increasingly prominent in today’s landscape, where major disruptions in data governance and privacy are emerging. Data clean room companies provide environments for two or more companies to share first-party data in a neutral, secure, privacy-compliant manner; the environments are used for activation, media measurement, and insights. At this event, Working Group Chair Sable Mi (VP of Analytics, Epsilon) moderated a powerful discussion with guests Alya Adelman (Director of Product, Blockgraph), Devon DeBlasio (VP of Product Marketing, InfoSum), Matt Karasick (Chief Product Officer, Habu), and Alysia Melisaratos (Head of Solutions Engineering, LiveRamp) to unpack the value that data clean rooms can provide.
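The core clean-room idea — matching two parties’ first-party data without exposing raw identifiers, and releasing only aggregates — can be sketched in a toy form. This is an illustration of the concept only, not any vendor’s implementation; the salted-hash matching and sample data are assumptions:

```python
# Toy illustration of the clean-room matching idea: both parties
# pseudonymize identifiers the same way, intersect the results, and
# only aggregate counts leave the environment.
import hashlib

def hashed_ids(emails, salt="shared-salt"):
    """Pseudonymize identifiers before matching (salt assumed pre-agreed)."""
    return {hashlib.sha256((salt + e).encode()).hexdigest() for e in emails}

advertiser = {"a@x.com", "b@x.com", "c@x.com"}  # advertiser's first-party list
publisher = {"b@x.com", "c@x.com", "d@x.com"}   # publisher's first-party list

overlap = hashed_ids(advertiser) & hashed_ids(publisher)
print(len(overlap))  # 2 matched users; raw emails are never exchanged
```

Real clean rooms add governance layers (query controls, minimum aggregation thresholds, audit trails) on top of this basic matching step.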

Panel Discussion

In this session, Elea McDonnell Feit (Drexel University) led a panel discussion with the day’s speakers on innovations in marketing experiments, referring to these experiments as a “mature part of the measurement system.” Panel members brought up ideas and examples of how to effectively employ randomized controlled trials (RCTs) and the benefits of using experiments for attribution. They examined the lack of consistent patterns in advertising incrementality, crediting this to the changing nature of the consumer journey and to factors unique to each strategy, business life cycle, and product being sold. The panel also explored processes for deploying a successful and effective experiment. Geo-based tests, the cost-effectiveness of running experiments, and the value of failed experiments were also discussed.

Measurement with Large-Scale Experiments: Lessons Learned

In this session, Ross Link (Marketing Attribution) and Jeff Doud (Ocean Spray Cranberries) examined a large-scale experiment conducted with Ocean Spray. They applied randomized controlled trials (RCTs) to approximately 10 million households (30–40 million people), whose participants consumed ads across a variety of devices. Jeff explained that the experiment measured the impact of suppressing certain ads for some participants. An MTA (multi-touch attribution) logit model was then applied, yielding KPIs such as ROI. The MTA-RCT experiment supplied refreshed results monthly, and daily ROI results from the campaign were collected from the MTA modeling. Outcomes centered on retargeting and on recent and lagged buyers. The study also explored creative treatments and platform effectiveness.
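The core RCT calculation behind such a suppression design is simple: the difference in outcome rates between exposed and suppressed (control) households is the incremental effect, which then feeds ROI. The sketch below shows only that arithmetic — the numbers and margin are hypothetical, not Ocean Spray’s results:

```python
# Minimal sketch of the test-vs-control arithmetic in an ad-suppression RCT.
# All figures are made up for illustration.

def incremental_lift(test_buyers, test_size, ctrl_buyers, ctrl_size):
    """Incremental conversion rate attributable to the ads."""
    return test_buyers / test_size - ctrl_buyers / ctrl_size

def roi(incremental_units, margin_per_unit, ad_spend):
    """Return on ad spend from incremental units sold."""
    return incremental_units * margin_per_unit / ad_spend

# 1M exposed homes vs 1M suppressed homes (hypothetical counts).
lift = incremental_lift(52_000, 1_000_000, 45_000, 1_000_000)  # ~0.007
incremental_units = lift * 1_000_000                           # ~7,000 buyers
print(round(roi(incremental_units, 12.0, 60_000.0), 2))        # 1.4
```

An MTA model layered on top of such an experiment allocates that experimentally grounded lift across individual touchpoints, which is what enables daily, channel-level ROI reads.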

Complexities of Integrating Big Data and Probability Sample People Meter Data

Nielsen compared the implied ratings from ACR and STB data in homes where it also has meters. The correlation was quite high, though panel adjustments raised the rating levels by about 1%. Big data are limited in different ways: not all sets in a house provide ACR or STB data, the data lack persons-level information, and STBs are often powered on while the TV is off. Nielsen presented how a panel of 40,000 homes can be used to correct those biases. A critical finding was that projecting MVPD data outside of its geographic footprint significantly changed network shares. That said, big data can significantly improve local market data, where samples are necessarily much smaller.
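The calibration idea described above — using overlap homes that have both a meter and ACR/STB data to derive an adjustment, then applying it to the wider big-data footprint — can be sketched in its simplest ratio form. This is a stylized illustration with assumed numbers, not Nielsen’s actual adjustment methodology:

```python
# Minimal sketch of panel-based calibration of big-data TV ratings:
# in homes with both a meter and ACR/STB data, compute the ratio of the
# panel-measured rating to the big-data-implied rating, then apply that
# factor to big-data ratings outside the overlap. Figures are hypothetical.

def calibration_factor(panel_rating, bigdata_rating_matched):
    """Ratio of panel 'truth' to big-data-implied rating in overlap homes."""
    return panel_rating / bigdata_rating_matched

def adjusted_rating(bigdata_rating, factor):
    """Apply the overlap-derived correction to a big-data rating."""
    return bigdata_rating * factor

factor = calibration_factor(5.2, 4.8)          # panel reads ~8% above big data
print(round(adjusted_rating(6.0, factor), 2))  # 6.5
```

Real implementations correct each bias separately (persons assignment, STB-on/TV-off, set coverage), but all of them rest on this overlap-and-adjust logic.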