Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with the efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process for assessing trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.
This paper challenges widespread assumptions in trust research according to which trust and conflict are opposing terms or trust is generally seen as a value. Rather, it argues that trust is only valuable if properly justified, and it places such justifications in contexts of social and political conflict. For these purposes, the paper suggests a distinction between a general concept and various conceptions of trust, and it defines the concept as a four-place one. With regard to the justification of trust, a distinction between internal and full justification is introduced, and the justification of trust is linked to relations of justification between trusters and trusted. Finally, trust in conflict(s) emerges where such relations exist among the parties to a conflict, often by way of institutional mediation.
The paper looks at the determinants of fiscal adjustments as reflected in the primary surplus of countries. Our conjecture is that governments will usually find it more attractive to pursue fiscal adjustments in a situation of relatively high growth, but based on a simple stylized model of government behavior, the expectation is that mainly high-trust governments will be in a position to defer consolidation to years with higher growth. Overall, our analysis of a panel of European countries provides support for this expectation. The difference in fiscal policies depending on government trust levels may help explain why better-governed countries have been found to have less severe business cycles. It suggests that trust and credibility play an important role not only in monetary policy, but also in fiscal policy.