Large language models (LLMs) are rapidly becoming a new gateway to online information, potentially disrupting traditional search engines, websites and advertising markets. Using detailed clickstream data from 2022–2023, this study examines how adopting LLM tools changes consumers’ online behavior. The authors find that LLM adoption gradually reduces traditional search activity and the browsing of smaller websites, while also lowering display advertising exposure. These results suggest that generative AI may reshape how users access information online and alter the distribution of attention and advertising revenue across digital platforms.
As generative AI becomes a central tool for producing marketing content, firms increasingly rely on fine-tuning models using engagement data, such as A/B test results. This MSI working paper argues that optimizing only for “what works” risks reward hacking, clickbait and poor generalization. The authors propose a knowledge-guided alignment framework in which large language models (LLMs) generate and validate hypotheses about why content performs well, and then use this knowledge to guide fine-tuning. Using more than 23,000 A/B-tested news headlines, the study shows that knowledge-guided AI produces higher engagement, avoids clickbait and generalizes better—especially in low-data settings.
On January 22, we introduced a fundamentally different paradigm: One-Demand Decision AI powered by Large Causal Models (LCMs) that move enterprises from descriptive insights to prescriptive growth recommendations through counterfactual causal reasoning. Attendees gained a clear understanding of how one-demand causal AI transforms descriptive correlation into prescriptive causation, what it takes to implement unified decision platforms at scale, and why now is the moment to rethink the measurement stack from first principles.
A new ARF Psych of GenAI experiment reveals that large language models apply a rigid, rule-driven logic when evaluating privacy scenarios, even though humans typically shift their reasoning based on framing, emotion and social context. Unlike consumers, who blend intuition, feeling and social perspective into their judgments, GPT-4o relied on a single internal rule across all testing conditions: data use is acceptable only with explicit consent. This consistency offers value for certain analytic tasks but exposes limits for advertising research that depends on emotional nuance and context-sensitive consumer insight.