Background & Aims: HBV genotype G (HBV/G) is mainly found in co-infections with other HBV genotypes and was identified as an independent risk factor for liver fibrosis. This study aimed to analyse the prevalence of HBV/G co-infections in healthy European HBV carriers and to characterize the crosstalk of HBV/G with other genotypes.
Methods: A total of 560 European HBV carriers were tested via HBV/G-specific PCR for HBV/G co-infections. Quasispecies distribution was analysed via deep sequencing, and the clinical phenotype was characterized regarding qHBsAg-/HBV-DNA levels and frequent mutations. Replicative capacity and expression of HBsAg/core was studied in hepatoma cells co-expressing HBV/G with either HBV/A, HBV/D or HBV/E using bicistronic vectors.
Results: Although no HBV/G co-infection was found by routine genotyping PCR, HBV/G was detected by specific PCR in 4%-8% of patients infected with either HBV/A or HBV/E, but only infrequently in other genotypes. In contrast to HBV/E, HBV/G was found as the quasispecies major variant in co-infections with HBV/A. No differences in the clinical phenotype were observed for HBV/G co-infections. In vitro RNA and DNA levels were comparable among all genotypes, but expression and release of HBsAg were reduced when HBV/G was co-expressed with HBV/E. In co-expression with HBV/A and HBV/E, expression of the HBV/G-specific core protein was enhanced, while core expression from the corresponding genotype was markedly diminished.
Conclusions: HBV/G co-infections are common in European inactive carriers with HBV/A and HBV/E infection, but reliable detection depends strongly on the assay used. HBV/G-regulated core expression may play a critical role in the survival of HBV/G in co-infections.
Co-design of a trustworthy AI system in healthcare: deep learning based skin lesion classifier
(2021)
This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for the early-phase development of other, similar AI tools.
Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with the efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process to assess trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs that arise when considering AI in practice.