Today, consumers increasingly encounter artificial intelligence, whether through voice-activated assistants, websites, or customer service experiences. Greater exposure brings greater awareness that AI can and does make mistakes. Research is now showing that different types of errors shape consumers’ perceptions and behavior in different ways, including their willingness to use AI-based technologies.
In this multi-study project, researchers Alexander Mueller, Sabine Kuester, and Sergej von Janda examined consumer responses to AI-induced errors and how, and to what extent, explainable AI (XAI) can mitigate the effects of such errors. Their research focused on how consumers respond to a violation of social norms versus a minor technical error, and on the differing impacts of low versus high error severity.
The study provides a granular perspective on consumer responses to erroneous AI and highlights the importance of AI’s adherence to social norms. In particular, even minor social errors could foster the stigmatization of minorities, suggesting the need for additional safeguards against social norm violations by AI.
In the first study, participants were given scenarios in which they imagined asking an AI-based voice assistant to tell a joke to their friends and family. In the technical failure scenario, low severity meant having to repeat the request three times, while high severity meant the assistant did not respond at all. To test the effect of social errors, the assistant’s joke either made fun of blondes (low severity) or offended people of color (high severity). A second study with a similar design tested the mediating role of cognitive and affective trust at varying levels of error severity.
A third study measured trust before and after the error to further pin down the mechanism underlying consumers’ responses. The researchers found that affective trust was particularly low after technical errors, which in turn negatively affected consumer responses. Social errors weakened affective trust less, leading to more positive consumer responses than those following technical errors.
A final study demonstrated that XAI improved consumer responses to social errors, such as recommending a gift that would offend the recipient. In the case of technical errors, however, XAI had no effect on consumers’ perception of AI competence. The researchers concluded that the negative effects of a technical failure most likely persist because, even with XAI, the voice assistant falls short of consumers’ performance expectations.
Read the full working paper here.