TY - JOUR
A1 - Lötsch, Jörn
A1 - Kringel, Dario
A1 - Ultsch, Alfred
T1 - Explainable artificial intelligence (XAI) in biomedicine: making AI decisions trustworthy for physicians and patients
T2 - BioMedInformatics
N2 - The use of artificial intelligence (AI) systems in biomedical and clinical settings can disrupt the traditional doctor–patient relationship, which is based on trust and transparency in medical advice and therapeutic decisions. When the diagnosis or selection of a therapy is no longer made solely by the physician, but to a significant extent by a machine using algorithms, decisions become nontransparent. Skill learning is the most common application of machine learning algorithms in clinical decision making. Skill-learning methods are a class of very general algorithms (artificial neural networks, classifiers, etc.) that are tuned on examples to optimize the classification of new, unseen cases. Because the rules such a system learns remain implicit, it is pointless to ask it for an explanation of a decision. A detailed understanding of the mathematics of an AI algorithm may be possible for experts in statistics or computer science. However, when it comes to the fate of human beings, this “developer’s explanation” is not sufficient. The concept of explainable AI (XAI) as a solution to this problem is attracting increasing scientific and regulatory interest. This review focuses on the requirement that an XAI must be able to explain in detail the decisions made by the AI to experts in the field.
KW - data science
KW - artificial intelligence
KW - machine learning
KW - patient–doctor relationship
KW - digital medicine
Y1 - 2021
UR - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/75570
UR - https://nbn-resolving.org/urn:nbn:de:hebis:30:3-755703
SN - 2673-7426
VL - 2
IS - 1
SP - 1
EP - 17
PB - MDPI
CY - Basel
ER -