Expl(AI)ned: the impact of explainable artificial intelligence on cognitive processes

This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users’ weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of the confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely influence decisions superficially but genuinely changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, the current regulatory efforts that aim at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information in XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems’ (black box) problems into perspective.


Metadata
Author:Kevin Bauer, Moritz von Zahn, Oliver Hinz
URN:urn:nbn:de:hebis:30:3-591606
DOI:https://doi.org/10.2139/ssrn.3872711
Parent Title (English):SAFE working paper ; No. 315
Series (Serial Number):SAFE working paper series (315)
Publisher:SAFE
Place of publication:Frankfurt am Main
Document Type:Working Paper
Language:English
Year of Completion:2021
Year of first Publication:2021
Publishing Institution:Universitätsbibliothek Johann Christian Senckenberg
Release Date:2021/07/30
Tag:Algorithmic transparency; Belief updating; Explainable machine learning; Information processing; XAI
Issue:June 16, 2021
Page Number:45
Institutes:Wirtschaftswissenschaften / Wirtschaftswissenschaften
Scientific Centres and Coordinated Programmes / House of Finance (HoF)
Scientific Centres and Coordinated Programmes / Center for Financial Studies (CFS)
Scientific Centres and Coordinated Programmes / Sustainable Architecture for Finance in Europe (SAFE)
Dewey Decimal Classification:3 Social sciences / 33 Economics / 330 Economics
Collections:University publications
Licence (German):Deutsches Urheberrecht