004 Data processing; computer science
Unified probabilistic deep continual learning through generative replay and open set recognition
(2022)
Modern deep neural networks are well known to be brittle in the face of unknown data instances, and recognition of the latter remains a challenge. Although it is inevitable for continual-learning systems to encounter such unseen concepts, the corresponding literature nonetheless appears to focus primarily on alleviating catastrophic interference with learned representations. In this work, we introduce a probabilistic approach that connects these perspectives, based on variational inference in a single deep autoencoder model. Specifically, we propose to bound the approximate posterior by fitting regions of high density on the basis of correctly classified data points. These bounds are shown to serve a dual purpose: unseen, unknown out-of-distribution data can be distinguished from already trained known tasks, enabling robust application. Simultaneously, to retain already acquired knowledge, a generative replay process can be narrowed to strictly in-distribution samples, significantly alleviating catastrophic interference.
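The core idea of fitting per-class density bounds in latent space can be sketched as follows (an illustrative NumPy toy, not the authors' implementation; the Gaussian latent codes and the 95th-percentile bound are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent codes for two "known" classes (stand-ins for encoder outputs).
z_class0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(500, 2))
z_class1 = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(500, 2))

def fit_bound(z, percentile=95.0):
    """Per-class bound: the distance from the class mean that covers
    `percentile` of the correctly classified latent points."""
    mean = z.mean(axis=0)
    dists = np.linalg.norm(z - mean, axis=1)
    return mean, np.percentile(dists, percentile)

bounds = [fit_bound(z) for z in (z_class0, z_class1)]

def is_out_of_distribution(z):
    """Flag a latent code that falls outside every class bound."""
    return all(np.linalg.norm(z - mean) > radius for mean, radius in bounds)

# A point near class 0 is accepted; a far-away unknown is rejected.
assert not is_out_of_distribution(np.array([-2.0, 0.1]))
assert is_out_of_distribution(np.array([10.0, 10.0]))
```

The same bounds that reject out-of-distribution inputs can then gate which generated samples are admitted into a replay buffer.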
Assessing communicative accommodation in the context of large language models: a semiotic approach
(2023)
Recently, significant strides have been made in the ability of transformer-based chatbots to hold natural conversations. However, despite growing societal and scientific relevance, there are few frameworks that systematically derive what it means for a chatbot conversation to be natural. The present work approaches this question through the phenomenon of communicative accommodation/interactive alignment. While existing research suggests that humans adapt communicatively to technologies, the aim of this work is to explore the accommodation of AI chatbots to an interlocutor. Its research interest is twofold: Firstly, the structural ability of the transformer architecture to support accommodative behavior is assessed using a frame constructed in accordance with existing accommodation theories. This results in hypotheses to be tested empirically. Secondly, since effective accommodation produces the same outcomes regardless of technical implementation, a behavioral experiment is proposed. Existing quantifications of accommodation are reconciled, extended, and modified to apply them to nonhuman interlocutors. Thus, a measurement scheme is suggested which evaluates textual data from text-only, double-blind interactions between chatbots and humans, chatbots and chatbots, and humans and humans. Using the generated human-to-human convergence data as a reference, the degree of artificial accommodation can be evaluated. Accommodation, as a central facet of artificial interactivity, can thus be evaluated directly against its theoretical paradigm, i.e. human interaction. Should subsequent examinations show that chatbots effectively do not accommodate, a new form of algorithmic bias may emerge from the aggregate accommodation towards chatbots but not towards humans; existing, hegemonic semantics could thus be cemented through chatbot learning. Meanwhile, the ability to effectively accommodate would render chatbots vastly more susceptible to misuse.
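As a purely hypothetical illustration of how lexical convergence between interlocutors might be quantified (not the measurement scheme proposed in this work), consider vocabulary overlap between two speakers' turns:

```python
def lexical_convergence(turns_a, turns_b):
    """Jaccard overlap of the vocabularies used by two interlocutors,
    as a crude proxy for lexical accommodation."""
    vocab_a = {w.lower() for turn in turns_a for w in turn.split()}
    vocab_b = {w.lower() for turn in turns_b for w in turn.split()}
    if not vocab_a and not vocab_b:
        return 0.0
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

# Invented example turns from a human and a chatbot.
human = ["shall we grab lunch", "the usual place works"]
bot = ["lunch works for me", "the usual place it is"]
score = lexical_convergence(human, bot)
assert 0.0 <= score <= 1.0
```

A human-to-human baseline of the same metric would then serve as the reference against which the chatbot's convergence is compared.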
Detailed feedback on exercises helps learners become proficient but is time-consuming for educators and, thus, hardly scalable. This manuscript evaluates how well Generative Artificial Intelligence (AI) provides automated feedback on complex multimodal exercises requiring coding, statistics, and economic reasoning. Besides providing this technology through an easily accessible web application, this article evaluates the technology’s performance by comparing the quantitative feedback (i.e., points achieved) from Generative AI models with human expert feedback for 4,349 solutions to marketing analytics exercises. The results show that automated feedback produced by Generative AI (GPT-4) provides almost unbiased evaluations, correlating highly with human evaluations (r = 0.94) while deviating from them by only 6%. GPT-4 performs best among seven Generative AI models, albeit at the highest cost. Comparing the models’ performance with costs shows that GPT-4, Mistral Large, Claude 3 Opus, and Gemini 1.0 Pro dominate three other Generative AI models (Claude 3 Sonnet, GPT-3.5, and Gemini 1.5 Pro). Expert assessment of the qualitative feedback (i.e., the AI’s textual response) indicates that it is mostly correct, sufficient, and appropriate for learners. A survey of marketing analytics learners shows that they highly recommend the app and its Generative AI feedback. An advantage of the app is its subject-agnosticism: it does not require any subject- or exercise-specific training. Thus, it is immediately usable for new exercises in marketing analytics and other subjects.
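Agreement statistics of this kind can be computed in a few lines (a sketch on invented scores; the 10-point scale and the values are assumptions, not data from the study):

```python
import numpy as np

# Hypothetical scores (0-10 points) for the same five solutions.
human = np.array([8.0, 5.0, 9.0, 6.0, 7.0])
ai = np.array([7.5, 5.5, 9.0, 5.5, 7.5])

# Pearson correlation between AI and human grading.
r = np.corrcoef(human, ai)[0, 1]

# Mean deviation of AI scores, relative to the attainable points.
mean_dev = np.mean(np.abs(ai - human)) / 10.0

assert -1.0 <= r <= 1.0
assert mean_dev >= 0.0
```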
The human immune system is determined by the functionality of the human lymph node. With the use of high-throughput techniques in clinical diagnostics, large amounts of data are currently collected. The new data on the spatiotemporal organization of cells offers new possibilities to build a mathematical model of the human lymph node, a virtual lymph node. The virtual lymph node can be applied to simulate drug responses and may be used in clinical diagnosis. Here, we review mathematical models of the human lymph node from the viewpoint of cellular processes. Starting with classical methods, such as systems of differential equations, we discuss the value of different levels of abstraction and of methods ranging up to artificial-intelligence formalisms.
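A minimal example of the classical differential-equation approach mentioned above (a toy two-compartment model with invented rates, not one of the reviewed models):

```python
from scipy.integrate import solve_ivp

# Toy two-compartment model: naive T cells entering a lymph node (N)
# and activated cells (A) leaving it. All rates are illustrative only.
INFLUX, ACTIVATION, EGRESS = 100.0, 0.05, 0.2  # cells/h, 1/h, 1/h

def rhs(t, y):
    n, a = y
    dn = INFLUX - ACTIVATION * n
    da = ACTIVATION * n - EGRESS * a
    return [dn, da]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0])
n_end, a_end = sol.y[:, -1]

# Both populations approach their steady states N* = INFLUX/ACTIVATION
# and A* = ACTIVATION * N* / EGRESS.
assert abs(n_end - INFLUX / ACTIVATION) / (INFLUX / ACTIVATION) < 0.1
```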
Highlights
• The Munich Procedure, developed for p-XRF data, standardises coefficient corrections.
• It ensures consistent, reproducible data, benefiting specialists in various industries.
• The protocol, documented as an R script, enhances accuracy and transparency of p-XRF data.
• Establishing a common baseline fosters discussion and improves the overall understanding of p-XRF.
Abstract
The Munich Procedure, a protocol presented as R code and initially developed on the basis of archaeometric portable X-ray fluorescence (p-XRF) data, offers adaptability and standardisation to evaluate coefficient corrections. These corrections are derived from linear regressions calculated by comparing p-XRF values with laboratory chemical analyses of the same sample set. The versatility of this procedure allows collaboration and ensures consistent data structure. Not tied to specific instrumentation, this approach helps to universally improve the accuracy of p-XRF data, benefiting specialists in a variety of industries. By providing a common baseline for performance evaluation, it enables discussion across different applications.
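The underlying coefficient correction can be sketched as follows (in Python rather than the original R script, on synthetic data; the slope, intercept, and noise level are assumptions):

```python
import numpy as np

# Synthetic example: lab values relate to p-XRF readings linearly.
rng = np.random.default_rng(42)
pxrf = rng.uniform(10.0, 200.0, size=30)              # readings (ppm)
lab = 1.25 * pxrf + 4.0 + rng.normal(0.0, 1.0, 30)    # reference (ppm)

# Coefficient correction: regress lab analyses on p-XRF readings.
slope, intercept = np.polyfit(pxrf, lab, deg=1)

def correct(reading):
    """Apply the derived linear correction to a raw p-XRF reading."""
    return slope * reading + intercept

# The corrected readings should track the laboratory values closely.
residuals = correct(pxrf) - lab
assert np.mean(np.abs(residuals)) < 2.0
```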
We introduce a novel technique that utilizes a physics-driven deep learning method to reconstruct the dense matter equation of state (EoS) from neutron star observables, particularly the masses and radii. The proposed framework involves two neural networks: one to optimize the EoS using Automatic Differentiation in an unsupervised learning scheme, and a pre-trained network to solve the Tolman–Oppenheimer–Volkoff (TOV) equations. The gradient-based optimization process incorporates a Bayesian picture into the proposed framework. The reconstructed EoS is shown to be consistent with the results from conventional methods. Furthermore, the resulting tidal deformation is in agreement with the limits obtained from the gravitational wave event GW170817.
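For context, a conventional numerical solution of the TOV equations for a simple polytrope (the kind of solver the pre-trained network stands in for; the polytropic constants and central density are illustrative assumptions, in geometrized units):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geometrized units (G = c = 1); lengths and masses in km.
K, GAMMA = 100.0, 2.0   # simple polytropic EoS: P = K * eps**GAMMA

def tov_rhs(r, y):
    """Tolman-Oppenheimer-Volkoff structure equations."""
    P, m = y
    eps = (max(P, 0.0) / K) ** (1.0 / GAMMA)  # invert the EoS
    dP = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dm = 4.0 * np.pi * r**2 * eps
    return [dP, dm]

def solve_star(eps_c):
    """Integrate outward until the pressure vanishes (the surface)."""
    P_c = K * eps_c**GAMMA
    r0 = 1e-6
    y0 = [P_c, (4.0 / 3.0) * np.pi * r0**3 * eps_c]
    surface = lambda r, y: y[0] - 1e-6 * P_c
    surface.terminal = True
    sol = solve_ivp(tov_rhs, (r0, 100.0), y0, events=surface,
                    rtol=1e-8, atol=1e-12)
    return sol.t[-1], sol.y[1][-1]        # radius R [km], mass M [km]

R, M = solve_star(1.28e-3)
assert 5.0 < R < 20.0 and 0.3 < M < 3.5   # plausible neutron-star scale
```

Differentiating the mapping from EoS parameters to the resulting mass-radius curve is what the Automatic-Differentiation scheme described above enables.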
In online video games, toxic interactions are highly prevalent and are often even considered an inherent part of gaming. Most studies analyse toxicity in video games by examining the messages sent during a match, while only a few consider other interactions. We focus specifically on in-game events to identify toxic matches, constructing a framework that takes a list of time-based events and projects them into a graph structure, which we can then analyse with current methods from the field of graph representation learning. Specifically, we use a Graph Neural Network with Principal Neighbourhood Aggregation to analyse the graph structure and predict the toxicity of a match. We also discuss the subjectivity of the term toxicity and why analysing in-game messages alone with current state-of-the-art NLP methods is not capable of inferring whether a match is perceived as toxic.
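The projection of a time-based event list into a graph can be sketched as follows (a NumPy toy with invented events; a single mean-aggregation step stands in for the multi-aggregator Principal Neighbourhood Aggregation scheme):

```python
import numpy as np

# Hypothetical time-based in-game events: (timestamp, player, event_type).
events = [
    (1.0, "p1", "kill"), (2.5, "p2", "death"), (3.0, "p1", "taunt"),
    (4.2, "p2", "chat"), (5.0, "p1", "kill"), (6.1, "p2", "quit"),
]
EVENT_TYPES = ["kill", "death", "taunt", "chat", "quit"]

# Nodes are events (one-hot by type); edges link consecutive events
# globally and consecutive events of the same player.
n = len(events)
x = np.zeros((n, len(EVENT_TYPES)))
for i, (_, _, etype) in enumerate(events):
    x[i, EVENT_TYPES.index(etype)] = 1.0

adj = np.zeros((n, n))
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0    # temporal chain
last_seen = {}
for i, (_, player, _) in enumerate(events):
    if player in last_seen:
        j = last_seen[player]
        adj[i, j] = adj[j, i] = 1.0        # same-player chain
    last_seen[player] = i

# One round of mean-neighbourhood aggregation, then a mean-pool readout
# that a downstream classifier could score for toxicity.
deg = adj.sum(axis=1, keepdims=True)
h = adj @ x / np.maximum(deg, 1.0)
graph_embedding = h.mean(axis=0)
assert graph_embedding.shape == (len(EVENT_TYPES),)
```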
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master’s thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident. The results obtained demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach can be regarded as a valuable methodology, it was not possible to investigate the underlying reasons for potential unfairness across clusters: within the test data, too few instances remained per subgroup for further fairness or feature-relevance analysis.
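A subgroup-wise comparison of prediction performance can be sketched as follows (hypothetical labels, predictions, and subgroups, not MIMIC-III data):

```python
import numpy as np

# Hypothetical predictions, labels, and subgroup assignments.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])

def accuracy_by_subgroup(y_true, y_pred, groups):
    """Prediction accuracy computed separately for each subgroup."""
    return {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}

per_group = accuracy_by_subgroup(y_true, y_pred, groups)
# Large gaps between subgroup accuracies would flag potential unfairness.
assert set(per_group) == {"a", "b"}
```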
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.