Ethics and Transparency in AI-Driven Advertising Research

Having illustrated the kinds of actions people might take with LLMs, we now turn to the ethical issues raised by these kinds of advertising research functions.

Advancements in Artificial Intelligence (AI) have sparked significant interest within both the media and the public. As AI systems, including robots, chatbots, avatars, and other intelligent agents, transition from mere tools to autonomous entities and collaborators, there is a critical emphasis on researching and comprehending the ethical implications associated with these systems. This shift prompts a series of important inquiries: What constitutes decision-making in the context of AI systems? What ethical, societal, and legal repercussions arise from their actions and decisions? Can AI systems be held accountable for their conduct? How can we exercise control over these systems as their learning capabilities lead them into states that may only tangentially relate to their initial design and setup? Should we permit such autonomous innovation in commercial systems, and if so, how should their use and development be regulated? These questions, along with numerous others, currently occupy the forefront of attention. The way society and our institutions address these inquiries will significantly impact our trust levels, the broader influence of AI in society, and ultimately, the existence of AI itself (Boddington, 2017; Bostrom & Yudkowsky, 2018; Dignum, 2018).

The evolution of business models, sales processes, customer service options, and marketing information systems (Donthu & Gustafsson, 2020) necessitates a careful consideration of ethical issues and data protection concerns (Ameen et al., 2020a; Etzioni & Etzioni, 2017). For instance, the collection of data through speech recognition, including clients’ tone of voice when interacting with voice bots, to enhance marketing strategies must adhere to the General Data Protection Regulation (GDPR) and obtain client consent (Butterworth, 2018; see the discussion of the future of voice and conversational AI in the next chapter). To mitigate consumer skepticism and prevent speciesism towards AI, practitioners should uphold ethical standards (Stone et al., 2020) and prioritize data protection (Kolbjørnsrud et al., 2017). Understanding privacy in this context involves recognizing its varied cultural and regional interpretations. An ethical quandary arises when considering the collection of user data without explicit consent, prompting a need for methods that aggregate and anonymize data to safeguard individual identities. The importance of secure data storage solutions cannot be overstated, with regular audits and updates being crucial to prevent data breaches. In terms of regulations and compliance, businesses must navigate the complexities of global data protection laws like the GDPR and CCPA, understanding that non-compliance can have serious repercussions. Building trust with users is paramount, achieved by informing them about the use of their data and offering options to opt out of or limit data collection (see, e.g., Ipsos, 2023).
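To make the aggregation-and-anonymization idea concrete, the following is a minimal Python sketch of how interaction data might be coarsened and aggregated before analysis. The column names, age bands, and the k-threshold of five respondents are illustrative assumptions, not requirements drawn from any regulation.

```python
# Minimal sketch: aggregating and generalizing interaction data so that
# individual respondents are harder to re-identify. Column names
# ("user_id", "age", "region", "sentiment_score") are hypothetical.
import pandas as pd

K_THRESHOLD = 5  # suppress any group smaller than k respondents (assumption)

def anonymize_interactions(df: pd.DataFrame) -> pd.DataFrame:
    """Drop direct identifiers, coarsen quasi-identifiers, and aggregate."""
    df = df.drop(columns=["user_id"])  # remove the direct identifier
    # Generalize exact age into broad bands (a quasi-identifier).
    df["age_band"] = pd.cut(df["age"], bins=[0, 25, 40, 60, 120],
                            labels=["<25", "25-39", "40-59", "60+"])
    # Report only group-level aggregates, never individual rows.
    grouped = (df.groupby(["age_band", "region"], observed=True)
                 .agg(respondents=("sentiment_score", "size"),
                      mean_sentiment=("sentiment_score", "mean"))
                 .reset_index())
    # Suppress cells with too few respondents to reduce re-identification risk.
    return grouped[grouped["respondents"] >= K_THRESHOLD]

# Usage: report = anonymize_interactions(raw_interactions)
```

The design choice here is to publish only grouped statistics and to suppress small cells, which trades some analytical granularity for a lower re-identification risk.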

Although recent advancements in information technology and AI are facilitating better coordination and integration between humans and technology, developing what has been termed Human-Aware AI, such systems are still far from being able to function as a “team member,” adapting to the cognitive strengths and weaknesses of human collaborators (Korteling et al., 2021). Moreover, while AI models can make highly accurate predictions, they often do so in ways that are difficult for humans to comprehend. This lack of transparency can be problematic in various domains, including healthcare, finance, and autonomous vehicles, where it is crucial to understand why a particular decision was made. The growing complexity of AI and machine learning models, such as deep neural networks, which are often considered “black box” systems, strengthens the need for explainable AI (XAI).

XAI refers to the concept of designing and developing artificial intelligence systems and machine learning models in a way that makes their decisions, predictions, and reasoning processes understandable and interpretable by humans. In other words, XAI seeks to provide insights into how AI systems arrive at their conclusions, allowing users to grasp the rationale behind those decisions.

Explainable AI techniques aim to address this issue by providing transparency, namely visibility into the inner workings of AI models;7 interpretability, namely human-understandable explanations for AI predictions (in the form of visualizations, textual descriptions, or other formats that make it easier for users to grasp the reasoning behind AI decisions); and accountability, namely the ability to identify biases, errors, or ethical concerns in AI models.

Explainable AI is particularly important in applications where trust, fairness, and safety are critical. It can help AI practitioners, regulators, and end-users have confidence in AI systems and ensure that they align with ethical and legal standards. XAI techniques continue to evolve and play a crucial role in the responsible deployment of AI technologies across various industries (Goebel et al., 2018; Gunning et al., 2019; Xu et al., 2019).
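As an illustration of what such a technique can look like in practice, the sketch below applies permutation feature importance, a common model-agnostic interpretability method, to a toy ad-response classifier. The scenario, feature names, and synthetic data are assumptions made purely for the example, not part of any particular XAI framework cited above.

```python
# Minimal sketch of one model-agnostic XAI technique: permutation feature
# importance, which estimates how much each input feature contributes to a
# model's predictions by shuffling it and measuring the accuracy drop.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["ad_exposures", "time_on_page", "prior_purchases", "age"]
X = rng.normal(size=(1000, 4))
# Synthetic "clicked the ad" label driven mostly by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much test accuracy degrades.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

In a real advertising-research setting, such importance scores would be one input to a human-readable explanation of why the model favored certain audiences or creatives, rather than a complete account of its reasoning.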

Another way to mitigate concerns about AI is through systems that can differentiate between content created by AI and content created by humans (GPTZero, Fictitious.ai, and Writer.com are examples). This emerging requirement addresses concerns about authenticity and origin in various digital media, ensuring clarity and credibility in a landscape where AI-generated content is becoming more common and sophisticated (Fried, 2023).
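One heuristic often discussed in this space is that text sampled from a language model tends to look less “surprising” (lower perplexity) to a similar model than human writing does. The sketch below illustrates that idea only; it is not how GPTZero, Fictitious.ai, or Writer.com are actually implemented, and the threshold is an arbitrary placeholder.

```python
# Illustrative sketch: flag text whose perplexity under GPT-2 is unusually
# low, one (imperfect) signal that it may be machine-generated. Real
# detectors combine many signals; the threshold here is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower means more 'model-like')."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

def flag_possible_ai_text(text: str, threshold: float = 40.0) -> bool:
    # Threshold chosen for illustration only; it is not calibrated.
    return perplexity(text) < threshold
```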

The surge of interest in AI capabilities has also given rise to numerous inquiries concerning the societal repercussions, potential misuse, risks, and governance of these innovations, all of which hold paramount significance. One of the major concerns is the potential misuse of AI platforms to spread disinformation (Metz, 2023) and the challenges in identifying AI-generated content (Spitale et al., 2023). Until recently, ChatGPT 4 lacked internet access and possessed only limited knowledge of global developments after 2021 (Stokel-Walker & van Noorden, 2023). Because the sphere of knowledge is continuously evolving, this constraint occasionally led to the delivery of outdated or erroneous responses. At the time of writing this report, ChatGPT 4 uses Microsoft’s Bing search engine to locate relevant information when producing responses to prompts and claims to have been updated to March 2023. To be sure, while it often provides relevant and useful results, its accuracy can vary. A related problem is the tendency to generate seemingly credible but ultimately fictitious citations that lack real-world sources when prompted to incorporate current references (Choi et al., 2023).

An over-reliance on AI may have negative consequences in terms of a decline in higher-order cognitive skills such as creativity, critical thinking, reasoning, and problem-solving (Farrokhnia et al., 2023).

Another concern is AI’s ability to discern between authentic and fabricated content, including distinguishing its own outputs from those made by humans. This issue underlines the potential risks of trusting AI-generated information and highlights the ease with which malicious actors could exploit such tools to generate large volumes of deceptive content (Coldewey & Lardinois, 2023). In the context of advertising and, occasionally, advertising research, the ethical implications of these AI-related issues are significant. The potential decline in cognitive skills like creativity and critical thinking due to AI reliance raises questions about the quality and originality of advertising content. Furthermore, AI’s difficulty in distinguishing between authentic and fabricated content can lead to ethical dilemmas in advertising practices, where discerning the truthfulness of AI-generated information becomes crucial.

Bias and fairness in AI models present another significant challenge. Bias can creep into AI models through various channels, and distinguishing between model bias and data bias is essential. In advertising research, the consequences of bias can lead to the misrepresentation or exclusion of certain groups, fostering negative brand perceptions. Techniques like adversarial testing are employed to detect bias in AI models, with data augmentation and diversification being vital for creating balanced datasets. Continuous monitoring is key to ensuring AI models remain fair, adapting as societal norms and values evolve.
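As a concrete example of the kind of continuous monitoring described above, the sketch below computes group-level selection rates and a disparate-impact ratio for a set of model predictions. The group labels, synthetic data, and the four-fifths threshold are illustrative assumptions; a real fairness audit would involve many more metrics and contextual judgment.

```python
# Minimal sketch of one fairness check: compare a model's positive-prediction
# rates (e.g., "show this ad" or "include in this segment") across groups.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Share of positive predictions per group."""
    return {g: predictions[groups == g].mean() for g in np.unique(groups)}

def disparate_impact_ratio(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Usage with synthetic data (group labels "A" and "B" are hypothetical):
preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
grps = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
ratio = disparate_impact_ratio(preds, grps)
if ratio < 0.8:  # the common "four-fifths rule" heuristic
    print(f"Possible disparity: ratio = {ratio:.2f}")
```

Run repeatedly over time, a check like this can surface drift in how a model treats different audiences as data and societal norms evolve.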

Relatedly, there has been a growing focus on the targeted applications of AI in the realm of social well-being. This domain has attracted a multitude of stakeholders, including charitable organizations such as DataKind (established in 2012), academic initiatives like the Data Science for Social Good (DSSG) program at the University of Chicago (established in 2013), and international entities like the UN Global Pulse Labs. Additionally, there has been a proliferation of AI for Social Good workshops at prominent conferences, including the 2018 and 2019 NeurIPS conferences, the 2019 ICML conference, and the 2019 ICLR conference. Corporate support in this endeavor has been evident through initiatives such as Google AI for Good Grants, Microsoft AI for Humanity, Mastercard Center for Inclusive Growth, and the Rockefeller Foundation’s Data Science for Social Impact, among others (Tomašev et al., 2020). Recent studies have demonstrated the potential benefits of harnessing AI for societal betterment. For instance, Amnesty International and ElementAI demonstrated how AI can assist human moderators in identifying and quantifying online abuse against women on Twitter. Anticipated enhancements in both data infrastructure and AI technology hold the promise of enabling an even wider array of potential applications for AI in the service of societal good.

Conclusion

While AI-driven advertising research offers transformative potential, it also presents ethical challenges. As the industry advances, balancing innovation with ethical considerations becomes paramount. Companies that prioritize privacy, fairness, and transparency not only adhere to regulatory standards but also foster trust with their audiences, paving the way for a more responsible and inclusive AI-driven advertising landscape.

7 Transparency is not only about the AI models but also about data usage, necessitating clear communication about the type of data collected and its purpose. Stakeholder engagement is crucial, involving consumers, regulators, and others in discussions about AI transparency and collaborative efforts to set industry standards. Moreover, educating the public through workshops, webinars, and educational content is essential for demystifying AI in advertising research, fostering a broader understanding and acceptance.
