Data Privacy and Regulation:
As AI becomes more pervasive, so does the scrutiny on data collection practices. Complying with evolving data privacy laws will be paramount.
On April 21, 2021, the European Commission proposed the AI Regulation, the first comprehensive legal framework for AI, intended to encourage investment and innovation while ensuring safety and fundamental rights. The draft regulation proposes harmonized rules with global reach and turnover-based fines, and emphasizes transparency, risk management, and accountability. It covers the entire AI lifecycle and applies to providers, users, distributors, importers, and resellers, including those outside the EU if their AI systems operate within it. The regulation categorizes AI practices into three tiers: unacceptable, high-risk, and low-risk. Prohibited practices include AI for social scoring, large-scale surveillance, and adverse behavioral influencing. High-risk AI systems, such as those used in critical infrastructure or justice, are allowed only under strict controls. Low-risk AI systems are subject to basic transparency requirements. The regulation’s impact on market research is minimal, but it reinforces existing self-regulatory regimes and emphasizes careful management of biometric data and other sensitive areas. On December 9, 2023, Parliament and Council representatives reached a provisional agreement on the Artificial Intelligence Act.
In the US, AI regulation took center stage with President Biden’s October 2023 executive order, which called for increased transparency and new standards in AI. This order laid the foundation for a US-centric AI policy, emphasizing industry-friendly best practices and allowing various agencies to develop their own sector-specific regulations. The 2023 legislative session in the U.S. saw an unprecedented number of state AI laws proposed, eclipsing previous years. Ten states have incorporated AI regulations within broader consumer privacy laws that are already in place or set to take effect this year, and numerous other states have introduced bills with similar objectives. Various states are establishing task forces to scrutinize AI’s role in sectors such as healthcare, insurance, and employment. A notable example is a New York City law, part of broader consumer privacy legislation, that specifically addresses AI in hiring practices and has garnered national interest. Apart from the California AI-ware Act, which governs the use of generative AI in government applications, several other bills aim to address the potential harms of generative AI. These legislative efforts largely concentrate on mitigating issues arising from AI-generated images and videos (Zhu, 2023).
According to the MIT Technology Review (Ryan-Mosley et al., 2024), we can anticipate a risk-based regulatory approach to AI, similar to the EU’s AI Act, in which AI types and applications are assessed based on their risk levels. The National Institute of Standards and Technology (NIST) has proposed such a framework, which is now set to be implemented across various sectors and agencies.
Ethical Concerns:
Beyond data privacy, the ethical implications of AI decisions, biased algorithms, and the potential misuse of hyper-personalized content will be areas of concern.
Market research has long recognized the right of individuals to control their personal data, guided by transparency in data collection, protection of personal data, and ethical behavior. With AI and secondary data use challenging traditional methods, there’s a need for a more outward-focused ethical framework, extending beyond protecting participant and client interests to actively doing good.
This shift, aligning with civil society and legislative expectations, calls for the market research industry to lead in setting behavioral standards. For AI in market research, ESOMAR (Cooke & Passingham, 2022) proposes an ethical framework based on that of Floridi and Cowls (2019), which emphasizes beneficence, non-maleficence, autonomy, justice, and explicability. This framework, influential in AI4People and the European Commission’s Ethics Guidelines for Trustworthy AI, offers a starting point for developing AI guidelines that take diverse perspectives into account. As AI evolves in market research, adapting to this ethical approach is important for guiding industry standards and responding to the global regulatory environment.
Technical Complexities:
As AI models grow more sophisticated, so will the need for advanced expertise and infrastructure.
The dynamic nature of AI technology means that the learning curve in the advertising industry will become steeper and continuous. Professionals will need to stay abreast of the latest AI trends, tools, and methodologies to remain relevant and effective in their roles. To help with this, educational institutions and industry bodies will need to offer targeted training and upskilling programs. These initiatives should focus not only on technical skills related to AI but also on enhancing creative, analytical, and strategic thinking abilities. However, aside from the more technical coding examples needed to evaluate survey data, the case studies presented in Chapter 4 can be explored and experimented with by anyone, even without coding experience. There are few barriers to entry when introducing LLMs to an advertising research project.
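To make the low barrier to entry concrete, a minimal sketch of one such coding task is shown below: asking an LLM to assign a thematic code to each open-ended survey response. The codeframe, the prompt wording, and the `ask_llm` function are all illustrative assumptions; a real project would replace `ask_llm` with a call to whichever LLM provider is being used.

```python
# Illustrative sketch: LLM-assisted thematic coding of open-ended survey
# responses. The codeframe and prompt are hypothetical; `ask_llm` is a
# placeholder to be replaced with a real chat-completion call.

CODEFRAME = ["price", "taste", "packaging", "other"]

def build_prompt(response: str) -> str:
    """Compose a single-response classification prompt."""
    codes = ", ".join(CODEFRAME)
    return (
        "You are coding open-ended survey responses for an ad study.\n"
        f"Allowed codes: {codes}.\n"
        "Reply with exactly one code.\n"
        f'Response: "{response}"'
    )

def ask_llm(prompt: str) -> str:
    # Placeholder so the sketch runs offline; a real implementation
    # would send `prompt` to an LLM API and return its reply.
    return "price"

def code_responses(responses):
    """Return {verbatim: assigned_code}, falling back to 'other'."""
    coded = {}
    for r in responses:
        answer = ask_llm(build_prompt(r)).strip().lower()
        coded[r] = answer if answer in CODEFRAME else "other"
    return coded

verbatims = ["Way too expensive for what you get.", "Loved the new flavor!"]
print(code_responses(verbatims))
```

The point of the sketch is its brevity: the entire workflow is a prompt template, an API call, and a sanity check that the model’s answer is in the codeframe.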
Companies in the advertising sector will need to manage the transition effectively. This includes providing support for employees undergoing re-skilling, fostering a culture of lifelong learning, and redesigning job roles to accommodate the coexistence of AI and human expertise. Since the future of advertising is not about AI replacing humans but rather about humans working in tandem with AI, developing skills to collaborate effectively with AI, understanding its capabilities and limitations, and using AI as a tool to enhance human-driven strategies are all critically important.
Public Perception:
Balancing innovation with concerns about the “creepiness” factor. Brands will need to be cautious not to overstep boundaries and make users feel overly monitored. This balance is crucial for brands that want to leverage advanced technologies to personalize and enhance customer experiences without alienating their audience. The discomfort or unease consumers feel when they perceive that a brand has too much insight into their personal lives or behaviors arises when personalized marketing becomes too intrusive or intimate, giving the impression that the consumer’s privacy has been violated. That said, there are generations that don’t change the channel with a remote but instead talk to their television, and don’t find this creepy, even though their television is an advertising platform harvesting data about their preferences.
Boerman & Smit (2023) conduct a systematic review of 84 articles to reveal three main contexts in which privacy is a key theme in advertising: as part of the ethical and regulatory considerations of advertising, in relation to personal characteristics that vary among consumers, and as a factor influencing how consumers respond to and are affected by advertising (e.g., Lina & Setiyanto, 2021). Looking forward, Boerman and Smit (2023) address the growing use of personalized advertising in public spaces, privacy fatigue (also termed privacy cynicism), and ways to deal with constraints on personalization.
In balancing personalization and privacy, brands should focus on engaging consumers effectively without invading their privacy. They must ethically manage consumer data, complying with regulations like GDPR and CCPA. Building consumer trust is crucial, requiring transparency in data usage and respect for privacy. Brands should avoid overreliance on technology for data analysis and consider ethical implications in marketing strategies. Regular consumer feedback helps in adjusting strategies. The use of AI and analytics must be balanced with privacy concerns. The goal is to enhance consumer experiences innovatively without overstepping privacy boundaries.
Economic Implications:
Potential job displacements in the advertising industry, requiring re-skilling and up-skilling initiatives.
The advent of AI in the advertising industry brings with it a significant shift in the nature of work and the skill sets required. This technological evolution could potentially lead to job displacements, as AI systems and algorithms become capable of performing tasks that were traditionally done by humans. These changes necessitate a focus on re-skilling and up-skilling initiatives to prepare the workforce for the new landscape. Since AI is exceptionally adept at automating routine, repetitive tasks, in advertising this could mean automation in areas like data analysis, customer segmentation, and even some aspects of creative design. As AI takes over certain tasks, new roles and skills will potentially emerge. For instance, there will be a growing need for AI trainers who can teach AI systems how to mimic human-like decisions in advertising contexts. Similarly, roles centered around AI ethics, compliance, and interpretation of AI-driven insights will become crucial. At the same time, while AI can handle data-driven tasks efficiently, it is clear that human creativity and strategic thinking are not easily replicated by machines. Professionals in the advertising industry may need to pivot more towards roles that leverage these uniquely human skills, such as creative direction, strategy development, and emotional engagement in advertising campaigns.
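As a toy illustration of the kind of routine, data-driven work being automated, the sketch below clusters customers into spend-based segments with a tiny one-dimensional k-means. The spend figures are invented, and the pure-Python implementation is only for self-containedness; a real pipeline would use more behavioral features and a library such as scikit-learn.

```python
# Illustrative sketch: segmenting customers by annual spend with a
# minimal 1-D k-means. Data and cluster count are hypothetical.

def kmeans_1d(values, k=2, iters=20):
    """Cluster scalar values into k groups; returns (centroids, labels)."""
    # Seed centroids with evenly spaced points from the sorted data.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # Move each centroid to the mean of its assigned values.
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

annual_spend = [120, 95, 110, 980, 1050, 870]  # toy data
centroids, labels = kmeans_1d(annual_spend, k=2)
print(centroids, labels)  # two segments: low spenders and high spenders
```

Even this crude version separates low and high spenders automatically; the human work shifts from running the clustering to deciding which features matter and what each segment means for strategy.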
While numerous startups have begun to offer AI marketing services, and businesses and universities have started offering AI marketing certifications, major brands have not yet restructured their organizations or created AI-specific leadership roles. Indeed, the absence of roles like VP of AI marketing is noticeable, raising questions about the gap between AI’s perceived impact and its actual implementation in marketing leadership. The Wall Street Journal (Coffee, 2023) noted that as of November 2023, job listings mentioning AI in marketing were 8% lower than the previous year, despite the rise of AI startups like OpenAI. This contrasts with Indeed, which saved $10 million using generative AI for content development in 2023. However, AI’s influence on marketing seems less significant compared to other fields; for instance, sales job listings were almost three times more likely to mention AI than those in marketing.
Some B2B companies and advertising firms like WPP have introduced roles such as chief AI officer and head of AI, focusing mainly on promoting products and services to business clients. Apart from Coca-Cola, which promoted two executives to the newly created roles of global head of generative AI and global head of marketing AI, such titles remain relatively rare in the broader marketing industry.
Additional economic implications include new opportunities for revenue generation in marketing. For instance, AI-driven insights can lead to the development of new products or services, targeted advertising, and dynamic pricing strategies. Additionally, AI’s ability to analyze and predict consumer behavior with high accuracy can lead to more effective allocation of marketing budgets, ensuring higher returns on investment. AI tools can potentially level the playing field, allowing smaller businesses to compete more effectively with larger players by providing insights and automation that were previously available only to those with extensive resources. Korganbekova and Zuber (2023) demonstrate that their probabilistic recognition algorithm can improve visibility and revenue for smaller sellers, effectively countering the disproportionate effects of privacy restrictions on vulnerable consumer groups and smaller sellers. At the same time, with the increased use of AI in advertising, companies will need to invest more in data management and security; ensuring data privacy and complying with regulations like GDPR can entail significant costs.
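The budget-allocation point above can be made concrete with a minimal sketch: splitting a total marketing budget across channels in proportion to each channel’s predicted return. The channel names and ROI figures are invented stand-ins for what a predictive model might output; real allocation would also account for saturation, constraints, and uncertainty.

```python
# Hedged sketch: allocate a marketing budget across channels in
# proportion to predicted ROI. Channels and ROI values are hypothetical
# stand-ins for the output of an AI prediction model.

def allocate_budget(total: float, predicted_roi: dict) -> dict:
    """Split `total` across channels proportionally to predicted ROI."""
    weight_sum = sum(predicted_roi.values())
    return {channel: round(total * roi / weight_sum, 2)
            for channel, roi in predicted_roi.items()}

roi = {"search": 3.2, "social": 2.1, "display": 0.7}  # hypothetical predictions
print(allocate_budget(100_000, roi))
```

Proportional allocation is deliberately the simplest possible rule; the economic value described in the text comes from the quality of the ROI predictions feeding it, not from the arithmetic itself.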