Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC) recently published ethics guidelines for what it terms “trustworthy” AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with the efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, and technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process for assessing trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when considering AI in practice.
Co-design of a trustworthy AI system in healthcare: deep learning based skin lesion classifier
(2021)
This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for the early-phase development of similar AI tools.
In this roadmap article, we have focused on the most recent advances in terahertz (THz) imaging with particular attention paid to the optimization and miniaturization of the THz imaging systems. Such systems entail enhanced functionality, reduced power consumption, and increased convenience, thus being geared toward the implementation of THz imaging systems in real operational conditions. The article will touch upon the advanced solid-state-based THz imaging systems, including room temperature THz sensors and arrays, as well as their on-chip integration with diffractive THz optical components. We will cover the current state of compact room temperature THz emission sources, both optoelectronic and electrically driven; particular emphasis is attributed to the beam-forming role in THz imaging, THz holography and spatial filtering, THz nano-imaging, and computational imaging. A number of advanced THz techniques, such as light-field THz imaging, homodyne spectroscopy, phase-sensitive spectrometry, THz modulated continuous wave imaging, room temperature THz frequency combs, and passive THz imaging, as well as the use of artificial intelligence in THz data processing and optics development, will be reviewed. This roadmap presents a structured snapshot of current advances in THz imaging as of 2021 and provides an opinion on contemporary scientific and technological challenges in this field, as well as extrapolations of possible further evolution in THz imaging.
Bayesian inference is ubiquitous in science and widely used in biomedical research such as cell sorting or “omics” approaches, as well as in machine learning (ML), artificial neural networks, and “big data” applications. However, the calculation is not robust in regions of low evidence. In cases where one group has a lower mean but a higher variance than another group, new cases with larger values are implausibly assigned to the group with typically smaller values. An approach for a robust extension of Bayesian inference is proposed that proceeds in two main steps starting from the Bayesian posterior probabilities. First, cases with low evidence are labeled as having “uncertain” class membership. The boundary for low probabilities of class assignment (threshold ε) is calculated using a computed ABC analysis as a data-based technique for item categorization. This leaves a number of cases with uncertain classification (p < ε). Second, cases with uncertain class membership are relabeled based on the distance to neighboring classified cases, using Voronoi cells. The approach is demonstrated on biomedical data typically analyzed with Bayesian statistics, such as flow cytometric data sets or biomarkers used in medical diagnostics, where it increased the class assignment accuracy by 1–10% depending on the data set. The proposed extension of the Bayesian inference of class membership can be used to obtain robust and plausible class assignments even for data at the extremes of the distribution and/or for which evidence is weak.
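The two-step scheme lends itself to a compact illustration. The Python sketch below is a minimal reading of the abstract, not the authors' code: it uses a naive Bayes posterior, a fixed illustrative threshold ε (the paper derives ε from a computed ABC analysis), and nearest-neighbour relabeling, which assigns each uncertain case to the Voronoi cell of its closest confidently classified case.

```python
# Minimal sketch of the two-step robust extension of Bayesian class
# assignment; data, model, and the fixed EPSILON are illustrative.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Group 0: lower mean but higher variance; group 1: higher mean, lower variance.
X = np.concatenate([rng.normal(0, 3, 200), rng.normal(4, 1, 200)]).reshape(-1, 1)
y = np.array([0] * 200 + [1] * 200)

clf = GaussianNB().fit(X, y)
labels = clf.predict(X)
posterior = clf.predict_proba(X).max(axis=1)

EPSILON = 0.75  # illustrative; the paper computes this via a computed ABC analysis
uncertain = posterior < EPSILON  # step 1: flag low-evidence cases as "uncertain"

# Step 2: relabel each uncertain case from its nearest confidently
# classified neighbour, i.e. by the Voronoi cell it falls into.
nn = NearestNeighbors(n_neighbors=1).fit(X[~uncertain])
_, idx = nn.kneighbors(X[uncertain])
labels[uncertain] = labels[~uncertain][idx.ravel()]
```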
Purpose: Artificial intelligence (AI) has accelerated novel discoveries across multiple disciplines including medicine. Clinical medicine suffers from a lack of AI-based applications, potentially due to lack of awareness of AI methodology. Future collaboration between computer scientists and clinicians is critical to maximize the benefits of transformative technology in this field for patients. To illustrate, we describe AI-based advances in the diagnosis and management of gliomas, the most common primary central nervous system (CNS) malignancy.
Methods: We present a succinct description of foundational concepts of AI approaches and their relevance to clinical medicine, geared toward clinicians without computer science backgrounds. We also review novel AI approaches in the diagnosis and management of glioma.
Results: Novel AI approaches in gliomas have been developed to predict the grading and genomics from imaging, automate the diagnosis from histopathology, and provide insight into prognosis.
Conclusion: Novel AI approaches offer acceptable performance in gliomas. Further investigation is necessary to improve the methodology and determine the full clinical utility of these novel approaches.
The human immune system is determined by the functionality of the human lymph node. With the use of high-throughput techniques in clinical diagnostics, large amounts of data are currently being collected. The new data on the spatiotemporal organization of cells offer new possibilities to build a mathematical model of the human lymph node: a virtual lymph node. The virtual lymph node can be applied to simulate drug responses and may be used in clinical diagnosis. Here, we review mathematical models of the human lymph node from the viewpoint of cellular processes. Starting with classical methods, such as systems of differential equations, we discuss the merits of different levels of abstraction and of methods ranging from classical formalisms to artificial intelligence techniques.
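As a point of reference for the classical end of that modelling spectrum, the following sketch integrates a single hypothetical lymph-node T-cell population with influx, antigen-driven proliferation, and egress. All rates, the capacity, and the stimulus window are assumed values for illustration; they are not taken from the reviewed models.

```python
# Toy ODE layer of a "virtual lymph node": one T-cell compartment.
# Every parameter below is an illustrative assumption.
from scipy.integrate import solve_ivp

INFLUX = 1e4      # cells/day entering via high endothelial venules (assumed)
PROLIF = 0.8      # per-day proliferation rate under antigen stimulation (assumed)
EGRESS = 1.0      # per-day egress rate into efferent lymph (assumed)
CAPACITY = 1e7    # crowding limit of the paracortex (assumed)

def dT_dt(t, T):
    stimulus = 1.0 if 2 <= t <= 6 else 0.0  # transient antigen exposure window
    growth = PROLIF * stimulus * T[0] * (1 - T[0] / CAPACITY)
    return [INFLUX + growth - EGRESS * T[0]]

# Integrate over two weeks starting from a small resident population.
sol = solve_ivp(dT_dt, (0, 14), [1e4])
print(f"T cells at day 14: {sol.y[0, -1]:.3e}")
```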
Regulating IP exclusion/inclusion on a global scale: the example of copyright vs. AI training
(2024)
This article builds upon the literature on inclusion/inclusivity in IP law by applying these concepts to the example of the scraping and mining of copyright-protected content for the purpose of training an artificial intelligence (AI) system or model. Which mode of operation dominates in this technological area: exclusion, inclusion or even inclusivity? The features of AI training appear to call for universal and sustainable “inclusivity” instead of a mere voluntary “inclusion” of AI provider bots by copyright holders. As the overview on the copyright status of AI training activities in different jurisdictions and emerging laws on AI safety (such as the EU AI Act) demonstrates, the global regulatory landscape is, however, much too fragmented and dynamic to immediately jump to an inclusive global AI regime. For the time being, legally secure global AI training requires the voluntary cooperation between AI providers and copyright holders, and innovative techno-legal reasoning is needed on how to effectuate this inclusion.
Advanced machine learning has achieved extraordinary success in recent years. For “active” operational risk management that goes beyond the ex post analysis of measured data, machine learning could provide help beyond the regime of traditional statistical analysis when it comes to the “known unknown” or even the “unknown unknown.” While machine learning has been tested successfully in the regime of the “known,” heuristics typically provide better results for active operational risk management (in the sense of forecasting). However, precursors in existing data can open a chance for machine learning to provide early warnings even for the regime of the “unknown unknown.”
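One way to read the early-warning idea is as anomaly detection on operational indicators: learn what routine data looks like, then flag deviations before a loss event materializes. The sketch below uses an isolation forest as a stand-in detector; the model choice, the contamination rate, and the injected deviation are illustrative assumptions, not the authors' method.

```python
# Hedged sketch: flag precursor-like deviations in operational indicators.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
history = rng.normal(0, 1, size=(5000, 8))   # routine operational indicators
detector = IsolationForest(contamination=0.01, random_state=1).fit(history)

incoming = rng.normal(0, 1, size=(10, 8))
incoming[0] += 4.0                           # inject a precursor-like deviation
flags = detector.predict(incoming)           # -1 marks a potential precursor
print(np.where(flags == -1)[0])              # indices raised as early warnings
```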
Feature selection is a common step in data preprocessing that precedes machine learning to reduce the data space and the computational cost of processing or obtaining the data. Filtering out uninformative variables is also important for knowledge discovery. By reducing the data space to only those components that are informative for the class structure, feature selection can simplify models so that they can be more easily interpreted by researchers in the field, reminiscent of explainable artificial intelligence. Knowledge discovery in complex data thus benefits from feature selection that aims to understand feature sets in the thematic context from which the data set originates. However, a single variable selected from a very small number of variables that are technically sufficient for AI training may make little immediate thematic sense, whereas the additional consideration of a variable discarded during feature selection could make scientific discovery very explicit. In this report, we propose an approach to explainable feature selection (XFS) based on a systematic reconsideration of unselected features. The difference between the respective classifications when training the algorithms with the selected features or with the unselected features provides a valid estimate of whether the relevant features in a data set have been selected and uninformative or trivial information has been filtered out. It is shown that revisiting originally unselected variables in multivariate data sets allows for the detection of pathologies and errors in the feature selection that occasionally result in failure to identify the most appropriate variables.
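The core of the XFS check can be stated in a few lines: train the same classifier once on the selected features and once on the discarded ones, and compare the results. The sketch below uses assumed placeholders (a public data set, a univariate selector, and a random forest; the report does not prescribe these). A large accuracy gap suggests the selection captured the class-relevant information, while a small gap flags a possible selection pathology.

```python
# Sketch of the XFS comparison: selected vs. discarded feature sets.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
mask = SelectKBest(f_classif, k=5).fit(X, y).get_support()

# Same model, same folds; only the feature subset changes.
acc_sel = cross_val_score(RandomForestClassifier(random_state=0), X[:, mask], y, cv=5).mean()
acc_uns = cross_val_score(RandomForestClassifier(random_state=0), X[:, ~mask], y, cv=5).mean()
print(f"selected: {acc_sel:.3f}  unselected: {acc_uns:.3f}")
```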
Recent advances in mathematical modelling and artificial intelligence have challenged the use of traditional regression analysis in biomedical research. This study examined artificial and cancer research data using binomial and multinomial logistic regression and compared its performance with that of other machine learning models such as random forests, support vector machines, Bayesian classifiers, k-nearest neighbours and repeated incremental pruning to produce error reduction (RIPPER). The alternative models often outperformed regression in accurately classifying new cases. Logistic regression had a structural problem similar to that of early single-layer neural networks, which limited its ability to identify variables with high statistical significance for reliable class assignment. Therefore, regression is not always the best model for class prediction in biomedical datasets. The study emphasises the importance of validating selected models and suggests that a mixture-of-experts approach may be a more advanced and effective strategy for analysing biomedical datasets.
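A minimal version of such a comparison, assuming scikit-learn implementations and a public stand-in data set (the study's own data and settings are not reproduced here), could look as follows; RIPPER is omitted because scikit-learn provides no implementation of it.

```python
# Benchmark logistic regression against the alternative classifiers
# named in the abstract, on identical cross-validation folds.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```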