
FORECASTING 2022: How Can Scenario Planning Improve Agility in Adjusting to Change?

On July 12, 2022, forecasting and product experts shared frameworks and strategies for participants to consider as they plan amid disruptions in the industry. Presenters discussed techniques marketers can use to drive consumer action and advocacy, econometric models for search trends, insights on holistic analytics programs, reflections on gold-standard probability methods, and new forecasting techniques developed in the wake of the pandemic.


Surveys Don’t Always Predict Behavior

An analysis in The New York Times reminds us that surveys and polls often do not predict behavior. A report by the ARF examines the reasons why, and pollsters are reevaluating their methods.

Warning: Flawed Data May Be Sabotaging Your Targeting Efforts

  • Alice K. Sylvester and Jim Spaeth
  • JOURNAL OF ADVERTISING RESEARCH

Flaws in the accuracy and coverage of data sampling can foil brands’ targeting efforts and ultimately their ROI, which bodes ill for advanced TV advertising. The culprits: crippling media fragmentation and a dearth of research on missing and misidentified consumers when using commercially available target segments for digital campaigns.


ARF Event 11/29/16 – Predicting Election 2016: What Worked, What Didn’t and the Implications for Marketing & Insights

The ARF partnered with GreenBook to assemble a forum on this highly charged topic. Ten industry experts were on hand, with ARF EVP Chris Bacon serving as moderator for the panel discussion. Here are excerpts from the event:

Gary Langer, Langer Research and formerly ABC News:

  • There is no real “sample” of voters, only estimates of likely voters. This adds uncertainty, especially in predicting the electoral college vote
  • Polling is not just about the horse race, as we use it to collect important information on what voters were thinking

Raghavan Mayur, TechnoMetrica Market Intelligence:

  • You can never compensate for bad data

Cliff Young, Ipsos Public Affairs:

  • We are in a political era of uncertainty worldwide. It is anti-establishment. This means we also need new methods (for polling)
  • This was a “disruption” election, a type that globally accounts for only 15% of elections. In these elections, past behavior may not be a useful guide

Matthew Oczkowski, Cambridge Analytica:

  • As with marketing research, it is all about finding the right consumer with the right message at the right time
  • His role is to help clients win elections. Micro-targeting is a key component. He worked as a consultant for the Trump team

Rick Bruner, Viant Inc:

  • We need more randomized controlled trials (to improve polling). We also need more behavioral inputs
  • Underlying values are important, as in marketing. For potential voters, this begins with “do I vote?”

Melanie Courtright, Research Now:

  • Everything was different in 2016!
  • We need samples representing the real population (of voters)
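The point about samples representing the real population is often addressed with post-stratification weighting. The sketch below illustrates the idea with hypothetical numbers (the age groups, shares, and support figures are invented for illustration, not taken from the event):

```python
# Minimal sketch of post-stratification weighting with toy numbers.
# Each stratum is weighted so the sample matches known population shares.

# Hypothetical sample composition vs. census population shares by age group.
sample_share = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Weight for each stratum = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical support for a candidate within each stratum.
support = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}

# The unweighted estimate over-represents the 55+ group.
unweighted = sum(sample_share[g] * support[g] for g in support)

# The weighted estimate re-balances toward the true population mix.
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
# -> unweighted: 0.475, weighted: 0.495
```

Weighting corrects for known demographic skews in who was sampled; it cannot fix skews on variables the pollster does not observe, which is one reason 2016 polls missed.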

Jared Schreiber, InfoScout:

  • The undecided and the indifferent voters matter a lot (swung to Trump)

Dr. Aaron Reid, Sentient:

  • Traditional methods are not accurate enough
  • We need to measure the unconscious – people may not have had conscious access to answer the researcher’s questions

Tom Anderson, Odin Text:

  • Surveyed 3,000 Americans via Google Surveys one week before the election (inexpensive and predictive). The goal was positioning: finding out what candidates stand for via simple text
  • Three issues: non-response bias; the “Shy Trump” voter; voter identification
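Non-response bias and the “shy” voter effect can be made concrete with a small simulation. The sketch below uses invented response rates (not figures from the OdinText study) to show how differential response skews a raw poll:

```python
# Minimal sketch of differential non-response skewing a poll
# (illustrative numbers only). If one candidate's supporters
# respond less often, the raw poll over-counts the other side.
import random

random.seed(0)

TRUE_SUPPORT_A = 0.50            # true share supporting candidate A
RESPONSE_RATE = {"A": 0.9,       # A's supporters answer 90% of the time
                 "B": 0.6}       # B's "shy" supporters answer only 60%

responses = []
for _ in range(100_000):
    voter = "A" if random.random() < TRUE_SUPPORT_A else "B"
    if random.random() < RESPONSE_RATE[voter]:   # does this voter respond?
        responses.append(voter)

polled_support_a = responses.count("A") / len(responses)
print(f"true support for A: {TRUE_SUPPORT_A:.2f}, "
      f"polled support for A: {polled_support_a:.2f}")
```

With these rates the polled share for A lands near 0.9 × 0.5 / (0.9 × 0.5 + 0.6 × 0.5) = 0.60, a 10-point error from a perfectly drawn sample, which is why panelists stressed behavioral inputs over stated responses alone.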

Taylor Schreiner, TubeMogul:

  • How will this (failure) impact CMOs’ opinion of research?
  • We need more experimentation
  • Let’s use this election as a teachable and learnable moment

Testing the collection of TV and radio usage – via InsideRadio

Conducted last fall in two markets by Nielsen and the Council for Research Excellence (CRE), the study demonstrates the potential of creating one process to collect radio, TV and qualitative audience data, rather than relying on separate samples as is currently done.

In the trials, Nielsen radio respondents were asked to complete a TV diary about one month after the week of their original radio diary—and vice-versa for TV respondents. Half the sample started with radio, the other half with TV. Conducted in two markets, the tests coincided with the fall ratings surveys. The tests used some different methodologies than are used in Nielsen’s live currency samples.

Significantly, just over half of participants completed both diary types. Those who received a radio diary first participated at significantly higher levels than those who started with a TV diary.
