TY - JOUR
A1 - Zicari, Roberto V.
A1 - Brusseau, James
A1 - Blomberg, Stig Nikolaj
A1 - Christensen, Helle Collatz
A1 - Coffee, Megan
A1 - Ganapini, Marianna B.
A1 - Gerke, Sara
A1 - Gilbert, Thomas Krendl
A1 - Hickman, Eleanore
A1 - Hildt, Elisabeth
A1 - Holm, Sune
A1 - Kühne, Ulrich
A1 - Madai, Vince Istvan
A1 - Osika, Walter
A1 - Spezzatti, Andy
A1 - Schnebel, Eberhard
A1 - Tithi, Jesmin Jahan
A1 - Vetter, Dennis
A1 - Westerlund, Magnus
A1 - Wurth, Renee
A1 - Amann, Julia
A1 - Antun, Vegard
A1 - Beretta, Valentina
A1 - Bruneault, Frédérick
A1 - Campano, Erik
A1 - Düdder, Boris
A1 - Gallucci, Alessio
A1 - Goffi, Emmanuel
A1 - Haase, Christoffer Bjerre
A1 - Hagendorff, Thilo
A1 - Kringen, Pedro
A1 - Möslein, Florian
A1 - Ottenheimer, Davi
A1 - Ozols, Matiss
A1 - Palazzani, Laura
A1 - Petrin, Martin
A1 - Tafur, Karin
A1 - Tørresen, Jim
A1 - Volland, Holger
A1 - Kararigas, Georgios
T1 - On assessing trustworthy AI in healthcare. Machine learning as a supportive tool to recognize cardiac arrest in emergency calls
T2 - Frontiers in Human Dynamics
N2 - Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process to assess trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.
KW - artificial intelligence
KW - cardiac arrest
KW - case study
KW - ethical trade-off
KW - explainable AI
KW - healthcare
KW - trust
KW - trustworthy AI
Y1 - 2021
UR - http://publikationen.ub.uni-frankfurt.de/frontdoor/index/index/docId/62450
UR - https://nbn-resolving.org/urn:nbn:de:hebis:30:3-624508
SN - 2673-2726
N1 - SG was supported by a grant from the Collaborative Research Program for Biomedical Innovation Law, a scientifically independent collaborative research program supported by a Novo Nordisk Foundation grant (NNF17SA0027784). JA received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 777107 (PRECISE4Q). TH was supported by the Cluster of Excellence “Machine Learning—New Perspectives for Science” funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy (Reference Number EXC 2064/1, Project ID 390727645). None of the other authors received any funding, private or public, to conduct this work.
VL - 3
IS - art. 673104
SP - 1
EP - 24
PB - Frontiers Media
CY - Lausanne
ER -