TY - UNPD
A1 - Bauer, Kevin
A1 - Zahn, Moritz von
A1 - Hinz, Oliver
T1 - Expl(AI)ned: the impact of explainable artificial intelligence on cognitive processes
T2 - SAFE working paper ; No. 315
N2 - This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study how providing users with explanations of how an AI system weighs inputted information to produce individual predictions (LIME) affects users' weighting of information and their beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information in line with the observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of confirmation bias. Trust in the prediction accuracy plays an important moderating role in XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely influence decisions superficially but actually changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, current regulatory efforts aimed at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information from XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems' (black box) problems into perspective.
T3 - SAFE working paper - 315
KW - XAI
KW - Explainable machine learning
KW - Information processing
KW - Belief updating
KW - Algorithmic transparency
Y1 - 2021
UR - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/59160
UR - https://nbn-resolving.org/urn:nbn:de:hebis:30:3-591606
IS - June 16, 2021
PB - SAFE
CY - Frankfurt am Main
ER -