004 Data processing; Computer science
Document Type
- Article (251)
- Doctoral Thesis (147)
- Working Paper (122)
- Conference Proceeding (53)
- Bachelor Thesis (50)
- Diploma Thesis (47)
- Preprint (43)
- Part of a Book (42)
- Contribution to a Periodical (38)
- diplomthesis (31)
Keywords
- Lambda-Kalkül (21)
- Inklusion (13)
- Formale Semantik (11)
- Barrierefreiheit (10)
- Digitalisierung (10)
- Operationale Semantik (9)
- data science (9)
- lambda calculus (9)
- machine learning (9)
- Computerlinguistik (8)
Institute
- Informatik (469)
- Informatik und Mathematik (101)
- Präsidium (73)
- Frankfurt Institute for Advanced Studies (FIAS) (51)
- Medizin (51)
- Wirtschaftswissenschaften (44)
- Physik (34)
- Hochschulrechenzentrum (24)
- studiumdigitale (24)
- Extern (12)
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master’s thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident. The results obtained demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of the prediction performance across these subgroups. While this approach can be regarded as a valuable methodology, it was not possible to investigate the underlying reasons for potential unfairness across clusters: within the test data, too few instances remained per subgroup for further fairness or feature relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
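As a rough illustration of the modelling and subgroup-comparison step described above, the hedged sketch below fits an XGBoost classifier and compares held-out AUC per subgroup. The file name, the column names hospital_expire_flag and age_group, and all hyperparameters are illustrative placeholders, not the thesis code.

```python
# Hedged sketch (not the thesis code): train an XGBoost mortality model and
# compare discrimination (AUC) across subgroups on held-out data.
# "stroke_cohort.csv", "hospital_expire_flag" and "age_group" are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

df = pd.read_csv("stroke_cohort.csv")                       # hypothetical pre-filtered MIMIC-III extract
y = df["hospital_expire_flag"]                              # 1 = in-hospital death
groups = df["age_group"]                                    # subgroup label used only for evaluation
X = df.drop(columns=["hospital_expire_flag", "age_group"]).select_dtypes("number")

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, groups, test_size=0.2, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05, eval_metric="auc")
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("overall AUC:", roc_auc_score(y_te, proba))
for group in g_te.unique():
    mask = (g_te == group).to_numpy()
    if y_te[mask].nunique() == 2:                           # AUC needs both classes in the subgroup
        print(f"AUC for age_group={group}: {roc_auc_score(y_te[mask], proba[mask]):.3f}")
```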
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips of visual events across ten human subjects. We use the videos’ extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
We study threshold testing, an elementary probing model with the goal to choose a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once for some threshold t_i, and the test returns binary feedback whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the results for the tests of each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler’s problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each box can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
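The following Monte Carlo sketch illustrates the non-adaptive setting for Exponential(1) variables with a single fixed threshold; the distribution, the threshold choice and the selection rule are illustrative and not taken from the paper.

```python
# Minimal Monte Carlo sketch of non-adaptive threshold testing for i.i.d.
# Exponential(1) variables: every variable is tested once against the same fixed
# threshold t, and given the binary outcomes we pick a variable with the highest
# conditional expectation (by memorylessness, any variable that passed; otherwise
# an arbitrary one). Distribution and threshold choice are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 20, 100_000
t = np.log(n)                      # illustrative threshold: P(X >= t) = 1/n for Exp(1)

algo_values, max_values = [], []
for _ in range(trials):
    x = rng.exponential(size=n)
    passed = x >= t
    pick = np.argmax(passed) if passed.any() else 0
    algo_values.append(x[pick])
    max_values.append(x.max())

ratio = np.mean(algo_values) / np.mean(max_values)
print(f"empirical ratio E[algorithm] / E[max] ~ {ratio:.3f}")
```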
Blockchains in public administration : a RADIUS on blockchain framework for public administration
(2023)
The emergence of blockchain technology has generated a great deal of attention, as reflected in numerous scientific and journalistic articles. However, the implementation of blockchain for public administrations in Germany has encountered a setback owing to unsuccessful initiatives. Initial enthusiasm was followed by disillusionment. Nevertheless, the technology continues to evolve. This paper examines whether the use of a blockchain can still optimize the processes of public administrations. Not only are the failed projects analysed, but also more recent applications of the technology and their potential relevance for the administration, especially in the state of Hesse.
To answer whether blockchains are promising for administrations, a Design Science Research (DSR) approach is chosen. DSR is a research approach that aims to create new and innovative solutions to real-world problems through the development and evaluation of artefacts such as models, methods, or prototypes. For this work, the implementation of a framework to realize an Authentication, Authorization, and Accounting (AAA) system on the blockchain was identified as worthwhile. The framework aims to implement the aforementioned AAA tasks using a blockchain. The Remote Authentication Dial-In User Service (RADIUS) protocol was identified as a suitable protocol for the AAA system. The goal is to create a way to implement the system either entirely on a blockchain or as a hybrid system. Various blockchain technologies are considered. The framework developed for this purpose is named AAA-me.
The development of AAA-me has shown that the desired framework for implementing RADIUS on the blockchain is feasible at various degrees of implementation. Previous work mostly relied on full implementations. Additionally, it has been shown that AAA-me can be used to perform hybrid integration at different implementation levels, which sets AAA-me apart from the few previous hybrid approaches. Furthermore, AAA-me was investigated in different laboratory environments in order to determine its expected resilience against a Single Point of Failure (SPOF). The results of the lab investigation indicated that a RADIUS system on top of a blockchain can provide benefits in terms of security and performance. In the lab environment, the time required to process a series of authorization requests was measured. In addition, it was illustrated how a RADIUS system implemented using a blockchain can protect itself against Man-in-the-Middle (MITM) attacks.
Finally, in collaboration with the Hessian Central Office for Data Processing (German: Hessische Zentrale für Datenverarbeitung) (HZD), another test lab demonstrated how a RADIUS system on the blockchain can integrate with the existing IT systems of the German state of Hesse. Based on these findings, this work reevaluated the applicability of blockchain technology for public administration processes.
The work has thus shown that the use of a blockchain can still be purposeful. However, it has also shown that an implementation can bring many problems with it. The small number of blockchain developers and engineers poses the risk of not finding enough people to develop and maintain such a system. In addition, one faces the problem of choosing an architecture now that will be applied to many projects in the future, while each project can, in turn, have an impact on the choice of architecture. Once this problem is solved and a blockchain infrastructure is available, new services, for example Public Key Infrastructure (PKI) systems, can be established quickly and made more resistant to SPOFs.
AAA-me was only applied in lab and test environments. As a result, no real data was run over the infrastructure, which allowed the necessary flexibility for development. However, system-level properties could appear in real deployments that are not detectable in this way. Furthermore, AAA-me’s development is still in its infancy: many manual adjustments need to be made before it can integrate with an existing RADIUS system. Also, no dedicated system-hardening effort was carried out in the lab environments, so vulnerabilities can quickly open up on web servers due to misconfigurations and missing updates. For these reasons, productive use should be discouraged unless major further developments are carried out.
PolarCAP – A deep learning approach for first motion polarity classification of earthquake waveforms
(2022)
Highlights
• We present PolarCAP, a deep learning model that can classify the polarity of a waveform with a 98% accuracy.
• The first-motion polarity of seismograms is a useful parameter, but its manual determination can be laborious and imprecise.
• We demonstrate that in several cases the model can assign trace polarity more accurately than a human analyst.
Abstract
The polarity of first P-wave arrivals plays a significant role in the effective determination of focal mechanisms, especially for smaller earthquakes. Manual estimation of polarities is not only time-consuming but also prone to human errors. This warrants a need for an automated algorithm for first motion polarity determination. We present a deep learning model, PolarCAP, that uses an autoencoder architecture to identify first-motion polarities of earthquake waveforms. PolarCAP is trained in a supervised fashion using more than 130,000 labelled traces from the Italian seismic dataset (INSTANCE) and is cross-validated on 22,000 traces to choose the optimal set of hyperparameters. We obtain an accuracy of 0.98 on a completely unseen test dataset of almost 33,000 traces. Furthermore, we check the model generalizability by testing it on the datasets provided by previous works and show that our model achieves a higher recall on both positive and negative polarities.
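For readers unfamiliar with waveform classifiers, the sketch below shows a generic 1-D convolutional polarity classifier in PyTorch. It is not the PolarCAP architecture (which is autoencoder-based); the window length, layer sizes and class convention are invented for illustration.

```python
# Schematic 1-D CNN for first-motion polarity classification (up/down) on a
# fixed-length waveform window. NOT the PolarCAP architecture; all sizes are illustrative.
import torch
import torch.nn as nn

class PolarityNet(nn.Module):
    def __init__(self, window_len: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (window_len // 4), 64), nn.ReLU(),
            nn.Linear(64, 2),                     # logits for the two polarity classes
        )

    def forward(self, x):                         # x: (batch, 1, window_len)
        return self.classifier(self.features(x))

model = PolarityNet()
waveforms = torch.randn(8, 1, 128)                # dummy batch of waveform windows
logits = model(waveforms)
predicted_polarity = logits.argmax(dim=1)         # 0 = down, 1 = up (convention is arbitrary)
print(predicted_polarity)
```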
The ubiquitin (Ub) code denotes the complex Ub architectures, including Ub chains of different length, linkage-type and linkage combinations, which enable ubiquitination to control a wide range of protein fates. Although many linkage-specific interactors have been described, how interactors are able to decode more complex architectures is not fully understood. We conducted a Ub interactor screen, in humans and yeast, using Ub chains of varying length, as well as homotypic and heterotypic branched chains of the two most abundant linkage types – K48- and K63-linked Ub. We identified some of the first K48/K63 branch-specific Ub interactors, including histone ADP-ribosyltransferase PARP10/ARTD10, E3 ligase UBR4 and huntingtin-interacting protein HIP1. Furthermore, we revealed the importance of chain length by identifying interactors with a preference for Ub3 over Ub2 chains, including Ub-directed endoprotease DDI2, autophagy receptor CCDC50 and p97-adaptor FAF1. Crucially, we compared datasets collected using two common DUB inhibitors – chloroacetamide and N-ethylmaleimide. This revealed inhibitor-dependent interactors, highlighting the importance of inhibitor consideration during pulldown studies. This dataset is a key resource for understanding how the Ub code is read.
Structural rearrangements play a central role in the organization and function of complex biomolecular systems. In principle, Molecular Dynamics (MD) simulations enable us to investigate these thermally activated processes with an atomic level of resolution. In practice, an exponentially large fraction of computational resources must be invested to simulate thermal fluctuations in metastable states. Path sampling methods focus the computational power on sampling the rare transitions between states. One of their outstanding limitations is to efficiently generate paths that visit significantly different regions of the conformational space. To overcome this issue, we introduce a new algorithm for MD simulations that integrates machine learning and quantum computing. First, using functional integral methods, we derive a rigorous low-resolution spatially coarse-grained representation of the system’s dynamics, based on a small set of molecular configurations explored with machine learning. Then, we use a quantum annealer to sample the transition paths of this low-resolution theory. We provide a proof-of-concept application by simulating a benchmark conformational transition with all-atom resolution on the D-Wave quantum computer. By exploiting the unique features of quantum annealing, we generate uncorrelated trajectories at every iteration, thus addressing one of the challenges of path sampling. Once larger quantum machines become available, the interplay between quantum and classical resources may emerge as a new paradigm of high-performance scientific computing. In this work, we provide a platform to implement this integrated scheme in the field of molecular simulations.
Gradient-consistent enrichment of finite element spaces for the DNS of fluid-particle interaction
(2019)
Highlights
• Monolithic scheme for particulate flows preventing an oscillating pressure along the interface.
• The choice of enriching shape functions is driven by the properties of its gradient instead of its value.
• The choice of enriching shape functions inherits a natural stabilization on small cut elements.
Abstract
We present gradient-consistent enriched finite element spaces for the simulation of free particles in a fluid. This involves forces being exchanged between the particles and the fluid at the interface. In an earlier work [23] we derived a monolithic scheme which includes the interaction forces into the Navier-Stokes equations by means of a fictitious domain like strategy. Due to an inexact approximation of the interface, oscillations of the pressure along the interface were observed. In multiphase flows, oscillations and spurious velocities are a common issue. The surface force term yields a jump in the pressure, and therefore the oscillations are usually resolved by extending the spaces on cut elements in order to resolve the discontinuity. For the construction of the enriched spaces proposed in this paper we exploit the Petrov-Galerkin formulation of the vertex-centered finite volume method (PG-FVM), as already investigated in [23]. From the perspective of the finite volume scheme we argue that wrong discrete normal directions at the interface are the origin of the oscillations. This perspective on normal vectors suggests looking at the gradients rather than the values of the enriching shape functions. The crucial parameter of the enrichment functions is therefore the gradient of the shape functions, especially that of the test space. The distinguishing feature of our construction is thus an enrichment based on the choice of shape functions with consistent gradients. These derivations finally yield a fitted scheme for the immersed interface. We further propose a strategy ensuring a well-conditioned system independent of the location of the interface. The enriched spaces can be used within any existing finite element discretization for the Navier-Stokes equations. Our numerical tests were conducted using the PG-FVM. We demonstrate that the enriched spaces are able to eliminate the oscillations.
Rotational test spaces for a fully-implicit FVM and FEM for the DNS of fluid-particle interaction
(2019)
The paper presents a fully-implicit and stable finite element and finite volume scheme for the simulation of freely moving particles in a fluid. The developed method is based on the Petrov-Galerkin formulation of a vertex-centered finite volume method (PG-FVM) on unstructured grids. Appropriate extension of the ansatz and test spaces leads to a formulation comparable to a fictitious domain formulation. The purpose of this work is to introduce a new concept of numerical modeling that reduces the mathematical overhead which many other methods require. It exploits the identification of the PG-FVM with a corresponding finite element bilinear form. The surface integrals of the finite volume scheme enable a natural incorporation of the interface forces purely based on the original bilinear operator for the fluid. As a result, there is no need to expand the system of equations to a saddle-point problem. As with fictitious domain methods, the extended scheme treats the particles as rigid parts of the fluid. The distinguishing feature compared to most existing fictitious domain methods is that there is no need for an additional Lagrange multiplier or other artificial external forces for the fluid-solid coupling. Consequently, only a single solve of the derived linear system for the fluid together with the particles is necessary, and the proposed method does not require any fractional time stepping scheme to balance the interaction forces between fluid and particles. For the linear Stokes problem we prove the stability of both schemes. Moreover, for the stationary case the conservation of mass and momentum is not violated by the extended scheme, i.e. conservativity is accomplished within the range of the underlying, unconstrained discretization scheme. The scheme is applicable to problems in two and three dimensions.
We investigate the applicability of the well-known multilevel Monte Carlo (MLMC) method to the class of density-driven flow problems, in particular the problem of salinisation of coastal aquifers. As a test case, we solve the uncertain Henry saltwater intrusion problem. Unknown porosity, permeability and recharge parameters are modelled by using random fields. The classical deterministic Henry problem is non-linear and time-dependent, and can easily take several hours of computing time. Uncertain settings require the solution of multiple realisations of the deterministic problem, and the total computational cost increases drastically. Instead of computing hundreds of random realisations, typically only the mean value and the variance are computed. Standard methods such as Monte Carlo or surrogate-based methods are a good choice, but they compute all stochastic realisations on the same, often very fine, mesh. They also do not balance the stochastic and discretisation errors. These facts motivated us to apply the MLMC method. We demonstrate that by solving the Henry problem on multi-level spatial and temporal meshes, the MLMC method reduces the overall computational and storage costs. To reduce the computing cost further, parallelization is performed in both physical and stochastic spaces. To solve each deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
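The following sketch only illustrates the generic structure of an MLMC estimator (a sum over levels of sample means of level differences, with coupled fine/coarse pairs). The placeholder solve() function stands in for one realisation of the forward problem; it is not the ug4-based Henry solver used in the paper.

```python
# Generic multilevel Monte Carlo (MLMC) sketch: E[Q_L] = sum_l E[Q_l - Q_{l-1}],
# estimated with independent samples per level and many cheap coarse samples.
# `solve(level, seed)` is a toy placeholder, not the density-driven flow solver.
import numpy as np

def solve(level: int, seed: int) -> float:
    # Placeholder forward model: finer levels (larger `level`) are "more accurate".
    rng = np.random.default_rng(seed)
    return np.exp(-2.0 ** (-level)) + rng.normal(scale=0.1)

def mlmc_estimate(n_samples_per_level):
    """Sum over levels of the sample means of the coupled level differences."""
    estimate = 0.0
    for level, n in enumerate(n_samples_per_level):
        diffs = []
        for i in range(n):
            seed = 10_000 * level + i
            fine = solve(level, seed)
            coarse = solve(level - 1, seed) if level > 0 else 0.0  # same seed couples the pair
            diffs.append(fine - coarse)
        estimate += np.mean(diffs)
    return estimate

# Many samples on cheap coarse levels, few on expensive fine levels.
print("MLMC estimate:", mlmc_estimate([4000, 400, 40]))
```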
Current deep learning methods are regarded as favorable if they empirically perform well on dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual learning, where consecutively arriving data is investigated. The core challenge is framed as protecting previously acquired representations from being catastrophically forgotten. However, comparison of individual methods is nevertheless performed in isolation from the real world by monitoring accumulated benchmark test set performance. The closed world assumption remains predominant, i.e. models are evaluated on data that is guaranteed to originate from the same distribution as used for training. This poses a massive challenge as neural networks are well known to provide overconfident false predictions on unknown and corrupted instances. In this work we critically survey the literature and argue that notable lessons from open set recognition, identifying unknown examples outside of the observed set, and the adjacent field of active learning, querying data to maximize the expected performance gain, are frequently overlooked in the deep learning era. Hence, we propose a consolidated view to bridge continual learning, active learning and open set recognition in deep neural networks. Finally, the established synergies are supported empirically, showing joint improvement in alleviating catastrophic forgetting, querying data, selecting task orders, while exhibiting robust open world application.
Residual connections have been proposed as an architecture-based inductive bias to mitigate the problem of exploding and vanishing gradients and to increase task performance in both feed-forward and recurrent networks (RNNs) when trained with the backpropagation algorithm. Yet, little is known about how residual connections in RNNs influence their dynamics and fading memory properties. Here, we introduce weakly coupled residual recurrent networks (WCRNNs) in which residual connections result in well-defined Lyapunov exponents and allow for studying properties of fading memory. We investigate how the residual connections of WCRNNs influence their performance, network dynamics, and memory properties on a set of benchmark tasks. We show that several distinct forms of residual connections yield effective inductive biases that result in increased network expressivity. In particular, those are residual connections that (i) result in network dynamics at the proximity of the edge of chaos, (ii) allow networks to capitalize on characteristic spectral properties of the data, and (iii) result in heterogeneous memory properties. In addition, we demonstrate how our results can be extended to non-linear residuals and introduce a weakly coupled residual initialization scheme that can be used for Elman RNNs.
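One plausible reading of a weakly coupled residual Elman-style update is sketched below; the exact parameterization, coupling strength and initialization scheme used for WCRNNs in the paper may differ, so this is only an illustration of the general idea.

```python
# Illustrative residual Elman update (not the paper's exact parameterization):
#   h_t = alpha * h_{t-1} + (1 - alpha) * tanh(W_h h_{t-1} + W_x x_t)
# A small coupling (1 - alpha) keeps the residual (identity) pathway dominant.
import numpy as np

rng = np.random.default_rng(0)
hidden, steps, alpha = 64, 200, 0.9

W_h = rng.normal(scale=1.0 / np.sqrt(hidden), size=(hidden, hidden))
W_x = rng.normal(scale=1.0, size=hidden)
h = np.zeros(hidden)

norms = []
for t in range(steps):
    x_t = rng.normal()                                   # scalar input stream
    h = alpha * h + (1 - alpha) * np.tanh(W_h @ h + W_x * x_t)
    norms.append(np.linalg.norm(h))

print("mean hidden-state norm over the last 50 steps:", np.mean(norms[-50:]))
```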
Recurrent cortical network dynamics plays a crucial role for sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for the understanding of recurrent neural computation, it often requires manual adjustments of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being a mathematical and relatively complex quantity, the spectral radius is not readily accessible to biological neural networks, which generally adhere to the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g. membrane potentials), or transmitted via synaptic connectivity. We present two synaptic scaling rules for echo state networks that solely rely on locally accessible variables. Both rules work online, in the presence of a continuous stream of input signals. The first rule, termed flow control, is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires the separability of external and recurrent input currents. We gained further insight into the adaptation dynamics of flow control by using a mean field approximation on the variances of neural activities that allowed us to describe the interplay between network activity and adaptation as a two-dimensional dynamical system. The second rule that we considered, variance control, directly regulates the variance of neural activities by locally scaling the recurrent synaptic weights. The target set point of this homeostatic mechanism is dynamically determined as a function of the variance of the locally measured external input. This functional relation was derived from the same mean-field approach that was used to describe the approximate dynamics of flow control.
The effectiveness of the presented mechanisms was tested numerically using different external input protocols. The network performance after adaptation was evaluated by training the network to perform a time delayed XOR operation on binary sequences. As our main result, we found that flow control can reliably regulate the spectral radius under different input statistics, but precise tuning is negatively affected by interneural correlations. Furthermore, flow control showed a consistent task performance over a wide range of input strengths/variances. Variance control, on the other hand, did not yield the desired spectral radii with the same precision. Moreover, task performance was less consistent across different input strengths.
Given the better performance and simpler mathematical form of flow control, we concluded that a local control of the spectral radius via an implicit adaptation scheme is a realistic alternative to approaches using classical “set point” homeostatic feedback controls of neural firing.
Author summary How can a neural network control its recurrent synaptic strengths such that network dynamics are optimal for sequential information processing? An important quantity in this respect, the spectral radius of the recurrent synaptic weight matrix, is a non-local quantity. Therefore, a direct calculation of the spectral radius is not feasible for biological networks. However, we show that there exists a local and biologically plausible adaptation mechanism, flow control, which allows controlling the recurrent weight spectral radius while the network is operating under the influence of external inputs. Flow control is based on a theorem of random matrix theory, which is applicable if inter-synaptic correlations are weak. We apply the new adaptation rule to echo-state networks having the task to perform a time-delayed XOR operation on random binary input sequences. We find that flow-controlled networks can adapt to a wide range of input strengths while retaining essentially constant task performance.
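A schematic reading of the flow-control idea is sketched below: each neuron multiplicatively rescales its incoming recurrent weights so that its squared recurrent input tracks a target multiple of its own squared activity. The exact update rule, constants and input protocol in the paper may differ; the sketch only illustrates that the mechanism uses locally accessible quantities.

```python
# Schematic reading of "flow control" for an echo state network (not the paper's
# exact rule): each neuron i rescales its incoming recurrent weights so that
# <(W x)_i^2> approaches R_target^2 * <x_i^2>, using only local quantities.
import numpy as np

rng = np.random.default_rng(1)
N, T, R_target, lr = 200, 5000, 1.0, 0.001
W = rng.normal(scale=1.5 / np.sqrt(N), size=(N, N))   # deliberately mis-scaled start
w_in = rng.normal(size=N)
x = np.zeros(N)

for t in range(T):
    u = rng.normal()                                   # external input stream
    recurrent_input = W @ x
    x = np.tanh(recurrent_input + w_in * u)
    # Stochastic local update: instantaneous squares stand in for the means.
    mismatch = R_target**2 * x**2 - recurrent_input**2
    W *= (1.0 + lr * mismatch)[:, None]                # scale each neuron's incoming weights

print("spectral radius after adaptation:",
      np.max(np.abs(np.linalg.eigvals(W))))
```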
This dissertation is concerned with the task of map-based self-localization, using images of the ground recorded with a downward-facing camera. In this context, map-based (self-)localization is the task of determining the position and orientation of a query image that is to be localized. The map used for this purpose consists of a set of reference images with known positions and orientations in a common coordinate system. For localization, the considered methods determine correspondences between features of the query image and those of the reference images.
In comparison with localization approaches that use images of the surrounding environment, we expect that using images of the ground has the advantage that, unlike the surrounding, the visual appearance of the ground is often long-term stable. Also, by using active lighting of the ground, localization becomes independent of external lighting conditions.
This dissertation includes content of several published contributions, which present research on the development and testing of methods for feature-based localization of ground images. Our first contribution examines methods for the extraction of image features that have not been designed to be used on ground images. This survey shows that, with appropriate parametrization, several of these methods are well suited for the task.
Based on this insight, we develop and examine methods for various subtasks of map-based localization in the following contributions. We examine global localization, where all reference images have to be considered, as well as local localization, where an approximation of the query image position is already known, which allows for disregarding reference images with a large distance to this position.
In our second contribution, we present the first systematic comparison of state-of-the-art methods for ground texture based localization. Furthermore, we present a method, which is characterized by its usage of our novel feature matching technique. This technique is called identity matching, as it matches only those features with identical descriptors, in contrast to the state-of-the-art that also matches features with similar descriptors. We show that our method is well suited for global and local localization, as it has favorable scaling with the number of reference images considered during the localization process. In another contribution, we develop a variant of our localization method that is significantly faster to compute, as it applies a sampling approach to determine the image positions at which local features are extracted, instead of using classical feature detectors.
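The core of the identity matching idea described above can be illustrated with a hash map over raw descriptor bytes, as in the hedged sketch below; descriptor size, image identifiers and data are invented for the example, and the actual implementation in the dissertation may differ.

```python
# Minimal sketch of "identity matching": query features are matched only to
# reference features with byte-identical (e.g. binary) descriptors via a hash
# map, instead of nearest-neighbour search over merely similar descriptors.
import numpy as np
from collections import defaultdict

def build_index(reference_descriptors):
    """reference_descriptors: list of (image_id, ndarray of uint8 descriptors)."""
    index = defaultdict(list)
    for image_id, descriptors in reference_descriptors:
        for row, d in enumerate(descriptors):
            index[d.tobytes()].append((image_id, row))
    return index

def identity_match(query_descriptors, index):
    """Return (query_row, image_id, reference_row) for exactly identical descriptors."""
    matches = []
    for q_row, d in enumerate(query_descriptors):
        for image_id, r_row in index.get(d.tobytes(), []):
            matches.append((q_row, image_id, r_row))
    return matches

# Toy example with 32-byte binary descriptors (ORB-like).
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(100, 32), dtype=np.uint8)
query = np.vstack([ref[10], ref[42], rng.integers(0, 256, size=(1, 32), dtype=np.uint8)])
index = build_index([("ref_image_0", ref)])
print(identity_match(query, index))   # matches expected for query rows 0 and 1 only
```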
Two further contributions are concerned with global localization. The first one introduces a prediction model for the global localization performance, based on an evaluation of the local localization performance. This allows us to quickly evaluate any considered parameter settings of global localization methods. The second contribution introduces a learning-based method that computes compact descriptors of ground images. This descriptor can be used to retrieve the overlapping reference images of a query image from a large set of reference images with little computational effort.
The most recent contribution included in this dissertation presents a new ground image database, which was recorded with a dedicated platform using a downward-facing camera. In addition to the data, we also explain our guidelines for the construction of the platform. In comparison with existing databases, our database contains more images and presents a larger variety of ground textures. Furthermore, this database enables us to perform the first systematic evaluation of how localization performance is affected by the time interval between the point in time at which the reference images are recorded and the point in time at which the query image is recorded. We find that for outdoor areas all ground texture based localization methods have reliability issues if the time interval between the recording of the query and reference images is large, and also if there are different weather conditions. These findings point to remaining challenges in ground texture based localization that should be addressed in future work.
A central concern in genetics is to identify mechanisms of transcriptional regulation. The aim is to unravel the mapping between the DNA sequence and gene expression. However, it turned out that this is extremely complex. Gene regulation is highly cell type-specific and even moderate changes in gene expression can have functional consequences.
Important contributors to gene regulation are transcription factors (TFs), which are able to directly interact with the DNA. Often, a first step in understanding the effect of a TF on a gene's regulation is to identify the genomic regions the TF binds to. Therefore, one needs to be aware of the TF's binding preferences, which are commonly summarized in TF binding motifs. Although for many TFs the binding motif is experimentally validated, there is still a large number of TFs for which no binding motif is known. There exist many tools that link TF binding motifs to TFs. We developed the method Massif that improves the performance of such tools by incorporating a domain score that uses the DNA binding domain of the studied TF as additional information.
TF binding sites are often enriched in regulatory elements (REMs) such as promoters or enhancers, where the latter can be located megabases away from their target gene. However, to understand the regulation of a gene it is crucial to know where the REMs of a gene are located. We introduced the EpiRegio webserver, which holds REMs associated to target genes predicted across many cell types and tissues using STITCHIT, a previously established method. Our publicly available webserver enables queries for REMs associated to genes (gene query) and for REMs overlapping genomic regions (region query). We illustrated the usefulness of EpiRegio by pointing to a TF that is enriched in the REMs of differentially expressed genes in circPLOD2-depleted pericytes. Further, we highlighted genes which are affected by CRISPR-Cas induced mutations in non-coding genomic regions using EpiRegio's region query. Non-coding genetic variants within REMs may alter gene expression by modifying TF binding sites, which can lead to various kinds of traits or diseases. To understand the underlying molecular mechanisms, one aims to evaluate the effect of such genetic variations on TF binding sites. We developed an accurate and fast statistical approach that can assess whether a single nucleotide polymorphism (SNP) is regulatory. Further, we combined this approach with epigenetic data and additional analyses in our Sneep workflow. For instance, it enables the identification of TFs whose binding preferences are affected by the analyzed SNPs, which is illustrated on eQTL datasets for different cell types. Additionally, we used our Sneep workflow to highlight cardiovascular disease genes using regulatory SNPs and REM-gene interactions.
Overall, the described results allow a better understanding of REM-gene interactions and their interplay with TFs on gene regulation.
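As a toy illustration of the kind of question Sneep addresses, the sketch below scores a SNP's effect on a hypothetical TF binding motif by comparing position weight matrix (PWM) log-odds scores of the reference and alternative alleles. The motif, sequences and scoring are purely didactic; they are not the statistical model implemented in Sneep.

```python
# Didactic sketch: does a SNP weaken a putative TF binding site? Compare the
# best PWM log-odds score of the reference sequence with that of the sequence
# carrying the alternative allele. Motif and sequences are invented.
import numpy as np

BASES = "ACGT"

def best_pwm_score(seq: str, pwm: np.ndarray) -> float:
    """Maximum log-odds score of `pwm` (shape 4 x motif_len) over all windows of `seq`."""
    motif_len = pwm.shape[1]
    scores = [sum(pwm[BASES.index(seq[i + j]), j] for j in range(motif_len))
              for i in range(len(seq) - motif_len + 1)]
    return max(scores)

# Illustrative 4 x 4 log-odds PWM for a hypothetical TF preferring "CACG"
# (rows: A, C, G, T; uniform 0.25 background).
pwm = np.log2(np.array([
    [0.05, 0.85, 0.05, 0.05],   # A
    [0.85, 0.05, 0.85, 0.05],   # C
    [0.05, 0.05, 0.05, 0.85],   # G
    [0.05, 0.05, 0.05, 0.05],   # T
]) / 0.25)

ref_seq = "TTCACGTT"
alt_seq = "TTCATGTT"            # SNP C>T inside the putative binding site
delta = best_pwm_score(alt_seq, pwm) - best_pwm_score(ref_seq, pwm)
print(f"score change alt - ref: {delta:.2f}  (negative = binding site weakened)")
```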
The adaptive immune system protects humans against pathogens arising both outside and inside the body, as well as against cancer cells. The functionality of this process is based on the interaction and cooperation of a multitude of different cell types of the body and is predominantly localized within the lymph nodes. If even a single component of this delicate process is disturbed, this can lead to a partial or complete loss of a person's immunological fitness. The aim of this work was therefore to comprehensively detect and define such aberrations of human lymph node tissue by means of digital pathology.
For this purpose, a digital tissue database was first established. It is based on the content management system Digital Tissue Management Suite, which was implemented as part of this work. Furthermore, the software Feature analysis in tissue histomorphometry was developed, which enables the analysis of two-dimensional whole slide images. Methods from computer vision and graph theory are used to characterize morphological and distributional properties of the cell types of the lymph node. In addition, this software contains plug-ins for the visualization and statistical analysis of the data.
Building on this purpose-built digital infrastructure, in combination with the software Imaris, two- and three-dimensionally scanned reactive and neoplastic tissue samples were digitally phenotyped. New mechanical barriers contributing to the compartmentalization of germinal centers were elucidated, and the preservation of the quantitative ratio of individual cell populations within the germinal centers was described. Starting from the reactive phenotypes of the lymph node, pathophysiological aberrations in various lymphatic neoplasms were investigated. It was shown that structural destruction in particular is frequently accompanied by a morphological change of the fibroblastic reticular cells.
In addition to structural changes, cytological changes of the tumor microenvironment were also observed. So-called tumor-associated macrophages play a special role here. In the course of this work, it was shown that macrophages in the tumor microenvironment of diffuse large B-cell lymphoma and chronic lymphocytic leukemia in particular exhibit specific pathophysiological changes. It was also shown that genetic changes of neoplastic B cells are accompanied by a general reduction of CD20 antigen density.
In summary, the results enabled the generation of a comprehensive digital-pathological profile of classical Hodgkin lymphoma. Morphological changes of neoplastic, CD30-positive Hodgkin-Reed-Sternberg cells were validated and described, and pathological changes of the connectome and the tumor microenvironment of these cells were parameterized and quantified. Finally, the diagnostic power of digital-pathological profiles was evaluated and validated using a random forest classifier.
Metahumans is an innovative framework for the Unreal Engine that provides highly realistic digital characters. Metahumans are characterized by a complete control rig, which allows developers to use pre-made animations and to adapt and extend them as needed.
This thesis investigates the use of Metahumans in the virtual environment of Unreal Engine 5. The main goal is to examine the ability of a Metahuman to be controlled and animated via motion tracking with a conventional virtual reality headset. Particular attention is paid to the use of inverse kinematics as a method for generating movements that are as natural as possible. In addition, the aim is to enable interaction between different Metahuman avatars in an online session.
To analyze the influence on the users' sense of immersion, participants are invited to evaluate their user experience. For this purpose, two comparable levels are created: one in the Unreal Engine with Metahumans and the other in Unity with the Meta Avatars from Oculus.
This study aims to gain a comprehensive understanding of the capabilities of Metahumans, especially in comparison with other avatar systems.
The recent COVID-19 pandemic represents an unprecedented worldwide event to study the influence of related news on the financial markets, especially during the early stage of the pandemic when information on the new threat came rapidly and was complex for investors to process. In this paper, we investigate whether the flow of news on COVID-19 had an impact on forming market expectations. We analyze 203,886 online articles dealing with COVID-19 and published on three news platforms (MarketWatch.com, NYTimes.com, and Reuters.com) in the period from January to June 2020. Using machine learning techniques, we extract the news sentiment through a financial market-adapted BERT model that enables recognizing the context of each word in a given item. Our results show that there is a statistically significant and positive relationship between sentiment scores and S&P 500 market returns. Furthermore, we provide evidence that sentiment components and news categories on NYTimes.com were differently related to market returns.
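A minimal sketch of headline-level sentiment scoring is shown below. The study uses its own financial-market-adapted BERT model; the snippet substitutes the publicly available ProsusAI/finbert checkpoint and two invented headlines, and omits the aggregation to daily sentiment scores.

```python
# Hedged sketch of transformer-based sentiment scoring for financial news items.
# The paper's market-adapted BERT is not public; ProsusAI/finbert is a stand-in.
from transformers import pipeline

sentiment = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Stocks rally as investors shrug off new COVID-19 restrictions",
    "Pandemic fears trigger the sharpest weekly sell-off since 2008",
]
for headline, result in zip(headlines, sentiment(headlines)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {headline}")
```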
In this contribution we present algorithms for model checking of analog circuits enabling the specification of time constraints. Furthermore, a methodology for defining time-based specifications is introduced. An already known method for model checking of integrated analog circuits has been extended to take into account time constraints. The method will be presented using three industrial circuits. The results of model checking will be compared to verification by simulation.
With the rise of digitalization and ubiquity of media use, both opportunities and challenges emerge for academic learning. One prevalent challenge is media multitasking, which can become distracting and hinder learning success. This thesis investigates two facets of this issue: the enhancement of data tracking, and the exploration of digital interventions that support self-control.
The first paper focuses on digital tracking of media use, as a comprehensive understanding of digital distractions requires careful data collection to avoid misinterpretations. The paper presents a tracking system where media use is linked to learning activities. An annotation dashboard enabled the enrichment of the log data with self-reports. The efficacy of this system was evaluated in a 14-day online course taken by 177 students, with results confirming the initial assumptions about media tracking.
The second paper tackles the recognition of whether a text was thoroughly read, an issue brought on by the tendency of students to skip lengthy and demanding texts. A method utilizing scroll data and time series classification algorithms is presented and tested, showing promising results for early recognition and intervention.
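One simple baseline in the spirit of this approach is sketched below: summary features of the scroll-depth time series are fed to a random forest. The features, the synthetic sessions and the classifier choice are illustrative only and are not the method evaluated in the paper, which uses dedicated time series classification algorithms.

```python
# Illustrative baseline for "was this text thoroughly read?" from scroll data:
# hand-crafted features of the scroll-depth time series + a random forest.
# Features and synthetic data are invented; not the thesis method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scroll_features(ts: np.ndarray) -> np.ndarray:
    """ts: scroll depth in [0, 1] sampled at regular intervals."""
    speed = np.diff(ts, prepend=ts[0])
    return np.array([
        ts.max(),                        # deepest point reached
        ts.mean(),                       # average depth over the session
        np.abs(speed).mean(),            # average scrolling speed
        (np.abs(speed) < 1e-3).mean(),   # fraction of time spent pausing
    ])

rng = np.random.default_rng(0)
def synthetic_session(read: bool, length=200):
    # "read": steady scrolling to the end; "skimmed": fast scroll, then idle at the bottom
    base = np.linspace(0, 1, length) if read else np.minimum(np.linspace(0, 3, length), 1)
    return np.clip(base + rng.normal(scale=0.02, size=length), 0, 1)

X = np.array([scroll_features(synthetic_session(read)) for read in [True, False] * 100])
y = np.array([1, 0] * 100)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy on synthetic sessions:", clf.score(X, y))
```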
The third paper presents the results of a systematic literature review on the effectiveness of digital self-control tools (DSCTs) in academic learning. The paper identifies gaps in existing research and outlines a roadmap for further research on self-control tools.
The fourth paper shares findings from a survey of 273 students, exploring the practical use and perceived helpfulness of DSCTs. The study highlights the challenge of balancing between too restrictive and too lenient DSCTs, particularly for platforms offering both learning content and entertainment. The results also point to the special role of highly habitual media use.
The fifth paper of this work investigates facets of app-based habit building. In a study over 27 days, 106 school-aged children used the specially developed PROMPT-app. The children carried out one of three digital activities each day, each of which was supposed to promote a deeper or more superficial processing of plans. Significant differences regarding the processing of plans emerged between the three activities, and the results suggest that a child-friendly planning application needs to be personalized to be effective.
Overall, this work offers a comprehensive insight into the complexity and potentials of dealing with distracting media usage and shows ways for future research and interventions in this fascinating and ever more important field.
Background: The technical development of imaging techniques in life sciences has enabled the three-dimensional recording of living samples at increasing temporal resolutions. Dynamic 3D data sets of developing organisms allow for time-resolved quantitative analyses of morphogenetic changes in three dimensions, but require efficient and automatable analysis pipelines to tackle the resulting Terabytes of image data. Particle image velocimetry (PIV) is a robust and segmentation-free technique that is suitable for quantifying collective cellular migration on data sets with different labeling schemes. This paper presents the implementation of an efficient 3D PIV package using the Julia programming language—quickPIV. Our software is focused on optimizing CPU performance and ensuring the robustness of the PIV analyses on biological data.
Results: QuickPIV is three times faster than the Python implementation hosted in openPIV, both in 2D and 3D. Our software is also faster than the fastest 2D PIV package in openPIV, written in C++. The accuracy evaluation of our software on synthetic data agrees with the expected accuracies described in the literature. Additionally, by applying quickPIV to three data sets of the embryogenesis of Tribolium castaneum, we obtained vector fields that recapitulate the migration movements of gastrulation, both in nuclear and actin-labeled embryos. We show normalized squared error cross-correlation to be especially accurate in detecting translations in non-segmentable biological image data.
Conclusions: The presented software addresses the need for a fast and open-source 3D PIV package in biological research. Currently, quickPIV offers efficient 2D and 3D PIV analyses featuring zero-normalized and normalized squared error cross-correlations, sub-pixel/voxel approximation, and multi-pass. Post-processing options include filtering and averaging of the resulting vector fields, extraction of velocity, divergence and collectiveness maps, simulation of pseudo-trajectories, and unit conversion. In addition, our software includes functions to visualize the 3D vector fields in Paraview.
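quickPIV itself is written in Julia; the NumPy/SciPy snippet below only illustrates the elementary PIV step of estimating a displacement vector from the cross-correlation peak of two interrogation windows, using a plain mean-subtracted correlation rather than the zero-normalized or normalized-squared-error variants offered by quickPIV.

```python
# Core PIV step (illustrative, not quickPIV code): estimate the displacement
# between two interrogation windows from the peak of their cross-correlation.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
window = rng.random((64, 64))
shifted = np.roll(window, shift=(5, -3), axis=(0, 1))       # ground-truth displacement (5, -3)

a = window - window.mean()
b = shifted - shifted.mean()
corr = fftconvolve(a, b[::-1, ::-1], mode="full")            # cross-correlation via FFT

peak = np.unravel_index(np.argmax(corr), corr.shape)
displacement = (np.array(window.shape) - 1) - np.array(peak)
print("estimated displacement:", displacement)                # expect [ 5 -3]
```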
Background: Patients with rare diseases (RDs) are often diagnosed too late or not at all. Clinical decision support systems (CDSSs) could support the diagnosis in RDs. The MIRACUM (Medical Informatics in Research and Care in University Medicine) consortium, which is one of four funded consortia in the German Medical Informatics Initiative, will develop a CDSS for RDs based on distributed clinical data from ten university hospitals. This qualitative study aims to investigate (1) the relevant organizational conditions for the operation of a CDSS for RDs when diagnosing patients (e.g. the diagnosis workflow), (2) which data is necessary for decision support, and (3) the appropriate user group for such a CDSS.
Methods: Interviews were carried out with RD experts. Participants were recruited from staff physicians at the Rare Disease Centers (RDCs) at the MIRACUM locations, which offer diagnosis and treatment of RDs.
An interview guide was developed with a category-guided deductive approach. The interviews were recorded on an audio device and then transcribed into written form. We continued data collection until all interviews were completed. Afterwards, data analysis was performed using Mayring’s qualitative content analysis approach.
Results: A total of seven experts were included in the study. The results show that medical center guides and physicians from RDC B-centers (with a focus on different RDs) are involved in the diagnostic process. Furthermore, interdisciplinary case discussions between physicians are conducted.
The experts explained that there are RDs which cannot be fully differentiated, but can rather be described only by their overall symptoms or findings; the diagnosis therefore depends on the disease or disease group. At the end of the diagnostic process, most centers prepare a summary of the patient case. Furthermore, the experts considered both physicians and experts from the B-centers to be potential users of a CDSS. The experts also have differing levels of experience with CDSSs for RDs.
Conclusions: This qualitative study is a first step towards establishing the requirements for the development of a CDSS for RDs. Further research is necessary to create solutions by also including the experts on RDs.
For medicine to fulfill its promise of personalized treatments based on a better understanding of disease biology, computational and statistical tools must exist to analyze the increasing amount of patient data that becomes available. A particular challenge is that several types of data are being measured to cope with the complexity of the underlying systems, enhance predictive modeling and enrich molecular understanding.
Here we review a number of recent approaches that specialize in the analysis of multimodal data in the context of predictive biomedicine. We focus on methods that combine different OMIC measurements with image or genome variation data. Our overview shows the diversity of methods that address analysis challenges and reveals new avenues for novel developments.
AttendAffectNet: emotion prediction of movie viewers using multimodal fusion with self-attention
(2021)
In this paper, we tackle the problem of predicting the affective responses of movie viewers, based on the content of the movies. Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect. Yet, they typically ignore the correlation between multiple modality inputs as well as the correlation between temporal inputs (i.e., sequential features). To explore these correlations, we propose a neural network architecture, AttendAffectNet (AAN), which uses the self-attention mechanism for predicting the emotions of movie viewers from different input modalities. In particular, visual, audio, and text features are considered for predicting emotions (expressed in terms of valence and arousal). We analyze three variants of our proposed AAN: Feature AAN, Temporal AAN, and Mixed AAN. The Feature AAN applies the self-attention mechanism in an innovative way on the features extracted from the different modalities (including video, audio, and movie subtitles) of a whole movie to thereby capture the relationships between them. The Temporal AAN takes the time domain of the movies and the sequential dependency of affective responses into account. In the Temporal AAN, self-attention is applied on the concatenated (multimodal) feature vectors representing different subsequent movie segments. In the Mixed AAN, we combine the strong points of the Feature AAN and the Temporal AAN, by applying self-attention first on vectors of features obtained from different modalities in each movie segment and then on the feature representations of all subsequent (temporal) movie segments. We extensively trained and validated our proposed AAN on both the MediaEval 2016 dataset for the Emotional Impact of Movies Task and the extended COGNIMUSE dataset. Our experiments demonstrate that audio features play a more influential role than those extracted from video and movie subtitles when predicting the emotions of movie viewers on these datasets. The models that use all visual, audio, and text features simultaneously as their inputs performed better than those using features extracted from each modality separately. In addition, the Feature AAN outperformed other AAN variants on the above-mentioned datasets, highlighting the importance of taking different features as context to one another when fusing them. The Feature AAN also performed better than the baseline models when predicting the valence dimension.
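The sketch below mimics the Feature AAN idea in PyTorch: per-modality feature vectors are treated as tokens, self-attention lets them act as context for one another, and a pooled representation predicts valence and arousal. Dimensions, layers and pooling are illustrative, not the published architecture.

```python
# Schematic of self-attention over per-modality feature tokens (not the published AAN).
import torch
import torch.nn as nn

class FeatureSelfAttention(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 2)          # outputs: (valence, arousal)

    def forward(self, modality_tokens):             # (batch, n_modalities, d_model)
        attended, _ = self.attn(modality_tokens, modality_tokens, modality_tokens)
        pooled = attended.mean(dim=1)                # average over modality tokens
        return self.head(pooled)

# Dummy batch: 8 movies, 3 modalities (video, audio, subtitles), 128-dim features;
# in practice each modality would first be projected to the shared dimension.
tokens = torch.randn(8, 3, 128)
model = FeatureSelfAttention()
valence_arousal = model(tokens)
print(valence_arousal.shape)                         # torch.Size([8, 2])
```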
Local climate change risk assessments (LCCRAs) are best supported by a quantitative integration of physical hazards, exposures and vulnerabilities that includes the characterization of uncertainties. We propose to use Bayesian Networks (BNs) for this task and show how to integrate freely-available output of multiple global hydrological models (GHMs) into BNs, in order to probabilistically assess risks for water supply. Projected relative changes in hydrological variables computed by three GHMs driven by the output of four global climate models were processed using MATLAB, taking into account local information on water availability and use. A roadmap to set up BNs and apply probability distributions of risk levels under historic and future climate and water use was co-developed with experts from the Maghreb (Tunisia, Algeria, Morocco) who positively evaluated the BN application for LCCRAs. We conclude that the presented approach is suitable for application in the many LCCRAs necessary for successful adaptation to climate change world-wide.
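A minimal discrete Bayesian network of the hazard-vulnerability-risk type can be sketched with pgmpy as below. The structure, states and probabilities are invented for illustration and do not correspond to the GHM-derived distributions used in the study, which were processed in MATLAB together with local information on water availability and use.

```python
# Toy hazard -> risk <- vulnerability Bayesian network; all numbers are invented.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Hazard", "Risk"), ("Vulnerability", "Risk")])

cpd_h = TabularCPD("Hazard", 2, [[0.7], [0.3]])                 # 0 = low, 1 = high
cpd_v = TabularCPD("Vulnerability", 2, [[0.6], [0.4]])
cpd_r = TabularCPD("Risk", 2,
                   [[0.95, 0.60, 0.50, 0.10],                    # P(Risk=low | H, V)
                    [0.05, 0.40, 0.50, 0.90]],                   # P(Risk=high | H, V)
                   evidence=["Hazard", "Vulnerability"], evidence_card=[2, 2])
model.add_cpds(cpd_h, cpd_v, cpd_r)
assert model.check_model()

infer = VariableElimination(model)
print(infer.query(["Risk"], evidence={"Hazard": 1}))             # risk given high hazard
```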
Large language models have become widely available to the general public, especially due to ChatGPT's release. Consequently, the AI community has invested much effort into recreating language models of the same caliber as ChatGPT, since the latter is still a technical black box. This thesis aims to contribute to that cause by proposing R.O.B.E.R.T., a Robotic Operating Buddy for Efficiency, Research and Teaching. In doing so, it presents a first implementation of a lightweight environment which produces tailor-made, instruction-following language models with a heavy focus on conversational capabilities that instruct themselves into a given domain context. Within this environment, the generation of datasets, the fine-tuning process and finally the inference of a unique R.O.B.E.R.T. instance are all carried out as part of an automated pipeline.
Linking mathematics with reality is not new. It is also not new to use outdoor activities to learn mathematics. What seems to be new is to combine such mathematical outdoor activities with mobile technology, as the geocaching community does when it uses GPS technology to guide its members to special places and points of interest. The use of mobile technologies to learn at any time and any location is known as "mobile learning". This type of learning can be seen as an extension of eLearning. Considering the definition of O'Malley, one notices that this definition does not exactly match the idea of the MathCityMap-Project (MCM), because the learning environment in the MCM-Project is predetermined. Combined with the math trail method, the project enables mobile learning within math trails with the latest technology. In the MCM-Project, students experience mathematics at real places and within real situations in out-of-school activities, with the help of GPS-enabled smartphones and special math problems. In contrast to the paper versions of math trails, we are able to give direct feedback on the solutions by using "mobile devices" such as smartphones or tablets. If the user has difficulties in solving the modeling task, stepped hints can be provided. The teacher is able to use the MCM-Portal to upload tasks developed by himself or by his students and is also able to build a personal math trail for his students.
Background: Rare Diseases (RDs), which are defined as diseases affecting no more than 5 out of 10,000 people, are often severe, chronic and life-threatening. A main problem is the delay in diagnosing RDs. Clinical decision support systems (CDSSs) for RDs are software systems to support clinicians in the diagnosis of patients with RDs. Due to their clinical importance, we conducted a scoping review to determine which CDSSs are available to support the diagnosis of RDs patients, whether the CDSSs are available to be used by clinicians and which functionalities and data are used to provide decision support.
Methods: We searched PubMed for CDSSs in RDs published between December 16, 2008 and December 16, 2018. Only English-language articles, original peer-reviewed journal papers and conference papers describing a clinical prototype or routine use of CDSSs were included. For data charting, we used the data items “Objective and background of the publication/project”, “System or project name”, “Functionality”, “Type of clinical data”, “Rare Diseases covered”, “Development status”, “System availability”, “Data entry and integration”, “Last software update” and “Clinical usage”.
Results: The search identified 636 articles. After title and abstract screening, as well as assessing the eligibility criteria for full-text screening, 22 articles describing 19 different CDSSs were identified. The CDSSs were classified into three types: “analysis or comparison of genetic and phenotypic data”, “machine learning” and “information retrieval”. Twelve of nineteen CDSSs use phenotypic and genetic data, followed by clinical data, literature databases and patient questionnaires. Fourteen of nineteen CDSSs are fully developed systems and therefore publicly available. Data can be entered or uploaded manually in six CDSSs, whereas for four CDSSs no information on data integration was available. Only seven CDSSs allow further ways of data integration. Thirteen CDSSs do not provide information about clinical usage.
Conclusions: Different CDSSs for various purposes are available, yet clinicians have to determine which is best suited for their patient. To allow more precise usage, future research on CDSSs for RDs has to focus on data integration, clinical usage and the updating of clinical knowledge. It remains to be seen which of the CDSSs will be used and maintained in the future.
The single-source shortest-path problem is a fundamental problem in computer science. We consider a generalization of the shortest-path problem, the $k$-shortest path problem. Let $G$ be a directed edge-weighted graph with $n$ nodes and $m$ edges, and let $s,t$ be two fixed nodes. The goal is to compute $k$ paths $P_1,\dots,P_k$ between $s$ and $t$ in non-decreasing order of their length such that all other paths between $s$ and $t$ are at least as long as the $k$-th path $P_k$. We focus on the version of the $k$-shortest path problem in which the paths are not allowed to visit nodes multiple times, sometimes referred to as the $k$-shortest simple path problem.
Probably the best-known $k$-shortest path algorithm is Yen's algorithm. It has a worst-case time complexity of $O(kn\cdot \mathrm{scp}(n,m))$, where $\mathrm{scp}(n,m)$ is the complexity of the single-source shortest-path algorithm used as a subroutine. In the case of Dijkstra's algorithm, $\mathrm{scp}(n,m)$ is $O(m + n\log n)$. One of the more recent improvements of Yen's algorithm is due to Feng.
Even though Feng's algorithm is much faster in practice, it has the same worst-case complexity as Yen's algorithm.
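For orientation, the following is a minimal sketch of Yen's algorithm (illustration only, not the thesis implementation), using networkx's Dijkstra routine as the single-source shortest-path subroutine; the graph format and the "weight" attribute name are assumptions.

```python
# Minimal sketch of Yen's k-shortest simple paths (illustration only).
# Assumes a directed networkx graph with non-negative "weight" attributes.
import heapq
import itertools
import networkx as nx

def yen_k_shortest_paths(G, s, t, k):
    def path_length(p):
        return sum(G[u][v]["weight"] for u, v in zip(p, p[1:]))

    try:
        A = [nx.dijkstra_path(G, s, t, weight="weight")]   # accepted paths P_1, ..., P_i
    except nx.NetworkXNoPath:
        return []
    B, tie = [], itertools.count()                         # candidate heap with tie-breaker
    while len(A) < k:
        prev = A[-1]
        for i in range(len(prev) - 1):
            spur_node, root = prev[i], prev[: i + 1]
            H = G.copy()
            # Block edges that would reproduce an already accepted path
            for p in A:
                if len(p) > i + 1 and p[: i + 1] == root and H.has_edge(p[i], p[i + 1]):
                    H.remove_edge(p[i], p[i + 1])
            # Remove root nodes (except the spur node) so candidate paths stay simple
            H.remove_nodes_from(root[:-1])
            try:
                spur = nx.dijkstra_path(H, spur_node, t, weight="weight")
            except (nx.NetworkXNoPath, nx.NodeNotFound):
                continue
            candidate = root[:-1] + spur
            heapq.heappush(B, (path_length(candidate), next(tie), candidate))
        while B and B[0][2] in A:        # drop duplicates already accepted
            heapq.heappop(B)
        if not B:
            break
        A.append(heapq.heappop(B)[2])
    return A
```

networkx also ships `shortest_simple_paths`, which is documented as being based on Yen's algorithm and can serve as a reference implementation for comparison.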
The main results presented in this thesis are upper bounds on the average-case complexity of Yen's and Feng's algorithms, as well as practical improvements and a parallel implementation of Yen's and Feng's algorithms including these improvements. The implementation is publicly available under the GPLv3 open-source license.
We show in our analysis that Yen's algorithm has an average-case complexity of $O(k \log(n)\cdot \mathrm{scp}(n,m))$ on $G(n,p)$ graphs with at least logarithmic average-degree and random edge weights following a distribution with certain properties.
On $G(n,p)$ graphs with constant to logarithmic average-degree and uniform random edge-weights over $[0;1]$, we show an average-case complexity of $O(k\cdot\frac{\log^2 n}{np}\cdot \mathrm{scp}(n,m))$. Feng's algorithm has an even better average-case complexity of $O(k\cdot \mathrm{scp}(n,m))$ on unweighted $G(n,p)$ graphs with logarithmic average-degree and for constant values of $k$. We further provide evidence that the same holds true for $G(n,p)$ graphs with uniform random edge-weights over $[0;1]$.
On the practical side, we suggest new heuristics to prune even more single-source shortest-path computations than Feng's algorithm and evaluate all presented algorithms on $G(n,p)$ and grid graphs with up to 256 million nodes. We demonstrate speedups by a factor of up to 40 compared to Feng's algorithm.
Finally, we discuss two ways to parallelize the suggested algorithms and evaluate them on grid graphs, showing speedups by a factor of 2 using 4 threads and by a factor of up to 8 using 16 threads, respectively.
Artificial intelligence in heavy-ion collisions: bridging the gap between theory and experiments
(2023)
Artificial Intelligence (AI) methods are employed to study heavy-ion collisions at intermediate collision energies, where high baryon density and moderate temperature QCD matter is produced. The experimental measurements of various conventional observables such as collective flow, particle number fluctuations, etc. are usually compared with expensive model calculations to infer the physics governing the evolution of the matter produced in the collisions. Various experimental effects and processing algorithms can greatly affect the sensitivity of these observables. AI methods are used to bridge this gap between theory and experiments of heavy-ion collisions. The problems with conventional methods of analyzing experimental data are illustrated in a comparative study of the Glauber MC model and the UrQMD transport model. It is found that the centrality determination and the estimated fluctuations of the number of participant nucleons suffer from strong model dependencies for Au-Au collisions at 1.23 AGeV. This can bias the results of the experimental analysis if the number of participant nucleons used is not consistent throughout the analysis and in the final model-to-data comparison. The measurable consequences of this model dependence of the number of participant nucleons are also discussed. In this context, PointNet-based AI models are developed to accurately reconstruct the impact parameter or the number of participant nucleons in a collision event from the hits and/or reconstructed tracks of particles in 10 AGeV Au-Au collisions at the CBM experiment. In the last part of the thesis, different AI methods to study the equation of state (EoS) at high baryon densities are discussed. First, a Bayesian inference is performed to constrain the density dependence of the EoS from the available experimental measurements of elliptic flow and mean transverse kinetic energy of mid-rapidity protons in intermediate energy collisions. The UrQMD model was augmented to include arbitrary potentials (or equivalently the EoSs) in the QMD part to provide a consistent treatment of the EoS throughout the evolution of the system. The experimental data constrain the posterior constructed for the EoS for densities up to four times saturation density. However, beyond three times saturation density, the shape of the posterior depends on the choice of observables used. There is a tension in the measurements at a collision energy of about 4 GeV. This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS. Tighter constraints and fully conclusive statements on the EoS require accurate, high-statistics data in the whole beam energy range of 2-10 GeV, which will hopefully be provided by the beam energy scan programme of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR, and future experiments at HIAF and NICA. Finally, it is shown that the PointNet-based models can also be used to identify the equation of state in the CBM experiment. Despite the uncertainties due to limited detector acceptance and biases in the reconstruction algorithms, the PointNet-based models are able to learn the features that can accurately identify the underlying physics of the collision.
The PointNet-based models are an ideal AI tool to study heavy-ion collisions, not only to identify the geometric event features, such as the impact parameter or the number of participant nucleons, but also to extract abstract physical features, such as the EoS, directly from the detector outputs.
We present a massively parallel framework for computing tropicalizations of algebraic varieties which can make use of symmetries using the workflow management system GPI-Space and the computer algebra system Singular. We determine the tropical Grassmannian TGr0(3,8). Our implementation works efficiently on up to 840 cores, computing the 14763 orbits of maximal cones under the canonical S8-action in about 20 minutes. Relying on our result, we show that the Gröbner structure of TGr0(3,8) refines the 16-dimensional skeleton of the coarsest fan structure of the Dressian Dr(3,8), except for 23 orbits of special cones, for which we construct explicit obstructions to the realizability of their tropical linear spaces. Moreover, we propose algorithms for identifying maximal-dimensional cones which belong to positive tropicalizations of algebraic varieties. We compute the positive Grassmannian TGr+(3,8) and compare it to the cluster complex of the classical Grassmannian Gr(3,8).
Cyber-Physical Systems (CPS) are growing more and more complex due to the availability of cheap hardware, sensors, actuators and communication links. A network of cooperating CPSs (CPN) additionally increases this complexity. This poses challenges but also offers opportunities: the increasing complexity makes it harder to design, operate, optimize and maintain such CPNs. On the other hand, an appropriate use of the increasing resources in computational nodes, sensors and actuators can significantly improve system performance, reliability and flexibility. Therefore, self-X features like self-organization, self-adaptation and self-healing are key principles for such systems.
Additionally, CPNs are often deployed in dynamic, unpredictable environments and safety-critical domains, such as transportation, energy, and healthcare. In such domains, applications of different criticality levels usually coexist. In an automotive environment, for example, the brake has a higher criticality level regarding safety than the infotainment system. As a result of mixed criticality, applications requiring hard real-time guarantees compete with those requiring soft real-time guarantees and with best-effort applications for the given resources within the overall system. This leads to the need to accommodate multiple levels of criticality while ensuring safety and reliability, which increases the already high complexity even more.
This thesis deals with the question of how to conveniently, effectively and efficiently handle the management and complexity of mixed-critical CPNs (MC-CPNs). Since this can no longer be done by the system developer without the assistance of the system itself, it is essential to develop new approaches and techniques to ensure that such systems can operate under a range of conditions while meeting stringent requirements.
Based on five research hypotheses, this thesis introduces a comprehensive adaptive mixed-criticality supporting middleware for Cyber-Physical Networks (Chameleon), which efficiently and autonomously takes care of the management and complexity of CPNs with regard to the mixed-criticality aspect.
Chameleon contributes to the state of the art by introducing and combining the following concepts:
- A comprehensive self-adaptation mechanism on all levels of the system model is provided.
- This mechanism allows a flexible combination of parametric and structural adaptation actions (relocation, scheduling, tuning, ...) to modify the behavior of the system.
- Real-time constraints of mixed-critical applications (hard real-time, soft real-time, best-effort) are considered in all possible adaptation conditions and actions by the use of the importance parameter.
- CPNs are supported by the introduction of different scopes (local, system, global) for the adaptation conditions and actions. This also enables the combination of different scopes for conditions and actions.
- The realization of the adaptation with a MAPE-K loop instantiated by a distributed LCS allows for real-time-capable reasoning about adaptation actions, which also works on resource-constrained systems (a minimal loop skeleton is sketched after this list).
- The developed rule language Rango offers an intuitive way to specify an initial rule set for LCS in the context of CPS/CPNs and supports the system administrators in the process of rule set generation.
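To make the adaptation loop concrete, the following is a minimal, generic MAPE-K skeleton (illustrative only; the sensor values, thresholds, and actions are assumptions and are not taken from Chameleon or Rango):

```python
# Generic MAPE-K loop skeleton (illustrative only; not Chameleon's implementation).
from dataclasses import dataclass, field

@dataclass
class Knowledge:
    cpu_load_limit: float = 0.8            # hypothetical adaptation threshold
    history: list = field(default_factory=list)

def monitor(node):
    return {"cpu_load": node["cpu_load"], "deadline_misses": node["deadline_misses"]}

def analyze(symptoms, k):
    k.history.append(symptoms)
    return symptoms["cpu_load"] > k.cpu_load_limit or symptoms["deadline_misses"] > 0

def plan(symptoms, k):
    # Prefer touching best-effort tasks before soft or hard real-time ones
    if symptoms["deadline_misses"] > 0:
        return [("relocate", "best_effort_task"), ("tune", "scheduler_quantum")]
    return [("tune", "cpu_frequency")]

def execute(actions, node):
    for action, target in actions:
        print(f"applying {action} to {target} on {node['name']}")

def mape_k_step(node, knowledge):
    symptoms = monitor(node)
    if analyze(symptoms, knowledge):
        execute(plan(symptoms, knowledge), node)

mape_k_step({"name": "ecu-1", "cpu_load": 0.93, "deadline_misses": 1}, Knowledge())
```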
Proteins are biological macromolecules playing essential roles in all living organisms.
Proteins often bind with each other forming complexes to fulfill their function. Such protein complexes assemble along an ordered pathway. An assembled protein complex can often be divided into structural and functional modules. Knowing the order of assembly and the modules of a protein complex is important to understand biological processes and treat diseases related to misassembly.
Typical structures in the Protein Data Bank (PDB) contain two to three subunits and a few thousand atoms. Recent developments have led to large protein complexes being resolved. The increasing number and size of protein complexes demand computational assistance for visualization and analysis. One such large protein complex is respiratory complex I, comprising 45 subunits in Homo sapiens.
Complex I is a well-understood protein complex that served as a case study to validate our methods.
Our aim was to analyze time-resolved Molecular Dynamics (MD) simulation data, identify modules of a protein complex and generate hypotheses for the assembly pathway of a protein complex. For that purpose, we abstracted the topology of protein complexes to Complex Graphs of the Protein Topology Graph Library (PTGL). The subunits are represented as vertices, and spatial contacts as edges. The edges are weighted with the number of contacts based on a distance threshold. This allowed us to apply graph-theoretic methods to visualize and analyze protein complexes.
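As a sketch of this abstraction (not the PTGL implementation; the input file, the Cα-based contact definition, and the 8 Å cutoff are assumptions), a weighted Complex Graph can be built along the following lines:

```python
# Sketch of building a "Complex Graph": chains as vertices, edges weighted by the
# number of residue contacts under a distance threshold (illustration only; the
# PDB file name and the Calpha-based contact definition are assumptions).
from itertools import combinations
import networkx as nx
from Bio.PDB import PDBParser

THRESHOLD = 8.0   # Angstrom, hypothetical contact cutoff

structure = PDBParser(QUIET=True).get_structure("complex", "complex.pdb")
model = structure[0]

def calphas(chain):
    return [res["CA"] for res in chain if "CA" in res]

G = nx.Graph()
G.add_nodes_from(chain.id for chain in model)

for chain_a, chain_b in combinations(list(model), 2):
    contacts = sum(
        1
        for ca in calphas(chain_a)
        for cb in calphas(chain_b)
        if ca - cb < THRESHOLD          # Bio.PDB atoms support "-" as distance
    )
    if contacts:
        G.add_edge(chain_a.id, chain_b.id, weight=contacts)

print(G.edges(data=True))
```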
We extended the implementations of two methods to achieve a computation of Complex Graphs in feasible runtimes. The first method skipped checks for contacts using the information on which residues are sequential neighbors. We extended the method to protein complexes and structures containing ligands. The second method introduced spheres encompassing all atoms of a subunit and skipped the check for contacts if the corresponding spheres do not overlap. Both methods combined allowed skipping up to 93 % of the contact checks for sample complexes of 40 subunits, compared to up to 10 % with the previous implementation. We showed that the runtime of the combined method scaled linearly with the number of atoms, compared to the non-linear scaling of the previous implementation. We implemented a third method fixing the assignment of an orientation to secondary structure elements. We placed a three-dimensional vector in each secondary structure element and computed the angle between secondary structure elements to assign an orientation. This method sped up the runtime especially for large structures, such as the capsid of the human immunodeficiency virus, for which the runtime decreased from 43 to less than 9 hours.
The feasible runtimes allowed us to investigate two data sets of MD trajectories of respiratory complex I of Thermus thermophilus that we received. The data sets differ only by whether ubiquinone is bound to the complex. We implemented a pipeline, PTGLdynamics, to compute the contacts and Complex Graphs for all time steps of the trajectories. We investigated different methods to track changes of contacts during the simulation and created a heat map put onto the three-dimensional structure visualizing the changes. We also created line plots to visualize the changes of contacts over the course of the simulation. Both visualizations helped to spot outstandingly flexible or rigid regions of the structure and time points of the simulation in which major dynamics occur.
We introduced normalizations of the edge weights of Complex Graphs for identifying modules and predicting the assembly pathway. The idea is to normalize the number of contacts for the number of residues of a subunit. We defined five different normalizations.
To identify structural and functional modules, we applied the Leiden graph clustering algorithm to the Complex Graphs of respiratory complex I and the respiratory supercomplex. We examined the results for the different normalizations of the weights of the Complex Graphs. The absolute edge weight produced the best result identifying three of four modules that have been defined in the literature for respiratory complex I.
We applied agglomerative hierarchical clustering to the edges of a Complex Graph to create hypotheses of the assembly pathway. The rationale was that subunits with an extensive interface in the final structure assemble early. We tested our method against two existing methods on a data set of 21 proteins with reported assembly pathways. Our prediction outperformed the other methods and ran in feasible runtimes of a few minutes at most.
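A minimal sketch of this idea (illustrative numbers, not the thesis data or code): convert normalized contact counts into distances, apply average-linkage agglomerative clustering, and read the resulting dendrogram as an assembly hypothesis.

```python
# Sketch: hypothesize an assembly order from interface sizes via hierarchical
# clustering (made-up weights; not the thesis data or implementation).
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

subunits = ["A", "B", "C", "D"]
# Hypothetical normalized contact counts between subunits (larger = bigger interface)
contacts = np.array([
    [0.0, 120, 15, 5],
    [120, 0.0, 40, 10],
    [15, 40, 0.0, 80],
    [5, 10, 80, 0.0],
], dtype=float)

# Larger interfaces should merge earlier, so convert weights to distances
dist = 1.0 / (1.0 + contacts)
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")
tree = dendrogram(Z, labels=subunits, no_plot=True)
print(tree["ivl"])   # leaf order of the predicted assembly dendrogram
```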
We also tested our method on respiratory complex I, the respiratory supercomplex and the respiratory megacomplex. We compared the results for the different normalizations with an assembly pathway of respiratory complex I described in the literature. We transformed the assembly pathways to dendrograms and compared the predictions to the reference using the Robinson-Foulds distance and clustering information distance. We analyzed the landscape of the clustering information distance by generating random dendrograms and showed that our result is far better than expected at random. We showed in a detailed analysis that the assembly prediction using one normalization was able to capture key features of the assembly pathway that has been proposed in the literature.
In conclusion, we presented different applications of graph theory to automatically analyze the topology of protein complexes. Our programs run in feasible runtimes even for large complexes. We showed that graph-theoretic modeling of the protein structure can be used to analyze MD simulation data, identify modules of protein complexes and predict assembly pathways.
Pattern recognition approaches, such as the Support Vector Machine (SVM), have been successfully used to classify groups of individuals based on their patterns of brain activity or structure. However, these approaches focus on finding group differences and are not applicable to situations where one is interested in assessing deviations from a specific class or population. In the present work, we propose an application of the one-class SVM (OC-SVM) to investigate whether patterns of fMRI response to sad facial expressions in depressed patients would be classified as outliers in relation to patterns of healthy control subjects. We defined features based on whole-brain voxels and anatomical regions. In both cases we found a significant correlation between the OC-SVM predictions and the patients' Hamilton Rating Scale for Depression (HRSD), i.e. the more depressed the patients were, the more of an outlier they were. In addition, the OC-SVM split the patient group into two subgroups whose membership was associated with future response to treatment. When applied to region-based features, the OC-SVM classified 52% of patients as outliers. However, among the patients classified as outliers, 70% did not respond to treatment, and among those classified as non-outliers, 89% responded to treatment. In addition, 89% of the healthy controls were classified as non-outliers.
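The core of such an analysis can be approximated with scikit-learn's OneClassSVM; the feature matrices and the ν parameter below are placeholders, not the study's data or settings.

```python
# Sketch of one-class SVM novelty detection in the spirit of the study
# (synthetic features; not the fMRI data or the original parameters).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
controls = rng.normal(0.0, 1.0, size=(40, 100))     # healthy-control feature vectors
patients = rng.normal(0.8, 1.2, size=(20, 100))     # patient feature vectors

oc_svm = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(controls)

labels = oc_svm.predict(patients)             # +1 = like controls, -1 = outlier
scores = oc_svm.decision_function(patients)   # more negative = further from controls
print((labels == -1).mean(), scores[:5])
```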
When experienced in person, engagement with art has been associated with positive outcomes in well-being and mental health. However, especially in the last decade, art viewing, cultural engagement, and even ‘trips’ to museums have begun to take place online, via computers, smartphones, tablets, or in virtual reality. Similarly to what has been reported for in-person visits, online art engagements, easily accessible from personal devices, have also been associated with well-being impacts. However, a broader understanding of for whom and how online-delivered art might have well-being impacts is still lacking. In the present study, we used a Monet interactive art exhibition from Google Arts and Culture to deepen our understanding of the role of pleasure, meaning, and individual differences in responsiveness to art. Beyond replicating the previous group-level effects, we confirmed our pre-registered hypothesis that trait-level inter-individual differences in aesthetic responsiveness predict some of the benefits that online art viewing has on well-being, and further that the effect of these trait-level differences was mediated by subjective experiences of pleasure and especially meaningfulness felt during the online-art intervention. The role that participants' experiences play as a possible mechanism during art interventions is discussed in light of recent theoretical models.
Background: Clinical trial registries increase transparency in medical research by making information and results of planned, ongoing, and completed studies publicly available. However, the registration of clinical trials remains a time-consuming manual task complicated by the fact that the same studies often need to be registered in different registries with different data entry requirements and interfaces.
Objective: This study investigates how Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) may be used as a standardized format for exchanging and storing clinical trial records.
Methods: We designed and prototypically implemented an open-source central trial registry containing records from university hospitals, which are automatically exported and updated by local study management systems.
Results: We provided an architecture and implementation of a multisite clinical trials registry based on HL7 FHIR as a data storage and exchange format.
Conclusions: The results show that FHIR resources establish a harmonized view of study information from heterogeneous sources by enabling automated data exchange between trial centers and central study registries.
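To illustrate the exchange format (a sketch, not the authors' system; the endpoint and field values are assumptions), a trial record can be pushed to a FHIR R4 server as a ResearchStudy resource:

```python
# Sketch: register a trial as a FHIR R4 ResearchStudy resource (hypothetical
# endpoint and values; not the registry described in the paper).
import requests

FHIR_BASE = "https://fhir.example.org/fhir"   # placeholder server

research_study = {
    "resourceType": "ResearchStudy",
    "status": "active",
    "title": "Example interventional trial",
    "identifier": [{"system": "https://example.org/trial-ids", "value": "TRIAL-0001"}],
}

response = requests.post(
    f"{FHIR_BASE}/ResearchStudy",
    json=research_study,
    headers={"Content-Type": "application/fhir+json"},
)
response.raise_for_status()
# Server-assigned logical id, if the server echoes the created resource
print(response.json().get("id"))
```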
Goal-Conditioned Reinforcement Learning (GCRL) is a popular framework for training agents to solve multiple tasks in a single environment. It is crucial to train an agent on a diverse set of goals to ensure that it can learn to generalize to unseen downstream goals. Therefore, current algorithms try to learn to reach goals while simultaneously exploring the environment for new ones (Aubret et al., 2021; Mendonca et al., 2021). This creates a form of the prominent exploration-exploitation dilemma. To relieve the pressure of a single agent having to optimize for two competing objectives at once, this thesis proposes the novel algorithm family Goal-Conditioned Reinforcement Learning with Prior Intrinsic Exploration (GC-π), which separates exploration and goal learning into distinct phases. In the first exploration phase, an intrinsically motivated agent explores the environment and collects a rich dataset of states and actions. This dataset is then used to learn a representation space, which acts as the distance metric for the goal-conditioned reward signal. In the final phase, a goal-conditioned policy is trained with the help of the representation space, and its training goals are randomly sampled from the dataset collected during the exploration phase. Multiple variations of these three phases have been extensively evaluated in the classic AntMaze MuJoCo environment (Nachum et al., 2018). The final results show that the proposed algorithms are able to fully explore the environment and solve all downstream goals while using every dimension of the state space for the goal space. This makes the approach more flexible compared to previous GCRL work, which only ever uses a small subset of the dimensions for the goals (S. Li et al., 2021a; Pong et al., 2020).
A deep convolutional neural network (CNN) is developed to study symmetry energy (Esym(ρ)) effects by learning the mapping between the symmetry energy and the two-dimensional (transverse momentum and rapidity) distributions of protons and neutrons in heavy-ion collisions. Supervised training is performed with a labeled dataset from the ultrarelativistic quantum molecular dynamics (UrQMD) model simulation. It is found that, by using proton spectra on an event-by-event basis as input, the accuracy for classifying the soft and stiff Esym(ρ) is about 60% due to large event-by-event fluctuations, while by setting event-summed proton spectra as input, the classification accuracy increases to 98%. The accuracies for the 5-label (5 different Esym(ρ)) classification task are about 58% and 72% by using proton and neutron spectra, respectively. For the regression task, the mean absolute errors (MAE), which measure the average magnitude of the absolute differences between the predicted and actual L (the slope parameter of Esym(ρ)), are about 20.4 and 14.8 MeV by using proton and neutron spectra, respectively. Fingerprints of the density-dependent nuclear symmetry energy on the transverse momentum and rapidity distributions of protons and neutrons can be identified by the convolutional neural network algorithm.
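As a rough sketch of such a classifier (the 32×32 input size and layer configuration are assumptions, not the published architecture), a small CNN mapping a two-dimensional transverse-momentum and rapidity histogram to soft/stiff classes could look as follows:

```python
# Minimal CNN sketch for classifying 2D (pT, rapidity) spectra (hypothetical
# 32x32 input and layer sizes; not the published network).
import torch
import torch.nn as nn

class SpectraCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                       # x: (batch, 1, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SpectraCNN()
spectra = torch.rand(4, 1, 32, 32)              # dummy event-summed spectra
print(model(spectra).shape)                     # torch.Size([4, 2])
```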
The state-of-the-art pattern recognition method in machine learning (a deep convolutional neural network) is used to identify the equation of state (EoS) employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in QCD. The EoS-meter is model independent and insensitive to other simulation inputs, including the initial conditions and shear viscosity of the hydrodynamic simulations. Through this study we demonstrate that there is a traceable encoder of the dynamical information from the phase structure that survives the evolution and exists in the final snapshot of heavy-ion collisions, and that one can exclusively and effectively decode this information from the highly complex final output with machine learning when traditional methods fail. Besides the deep neural network, the performance of traditional machine learning classifiers is also provided.
In this proceeding, we review our recent work using deep convolutional neural network (CNN) to identify the nature of the QCD transition in a hybrid modeling of heavy-ion collisions. Within this hybrid model, a viscous hydrodynamic model is coupled with a hadronic cascade “after-burner”. As a binary classification setup, we employ two different types of equations of state (EoS) of the hot medium in the hydrodynamic evolution. The resulting final-state pion spectra in the transverse momentum and azimuthal angle plane are fed to the neural network as the input data in order to distinguish different EoS. To probe the effects of the fluctuations in the event-by-event spectra, we explore different scenarios for the input data and make a comparison in a systematic way. We observe a clear hierarchy in the predictive power when the network is fed with the event-by-event, cascade-coarse-grained and event-fine-averaged spectra. The carefully-trained neural network can extract high-level features from pion spectra to identify the nature of the QCD transition in a realistic simulation scenario.
Background: Persistent pain in breast cancer survivors is common. Psychological and sleep-related factors modulate perception, interpretation and coping with pain and may contribute to the clinical phenotype. The present analysis pursued the hypothesis that breast cancer survivors form subgroups, based on psychological and sleep-related parameters that are relevant to the impact of pain on the patients’ life.
Methods: We analysed 337 women treated for breast cancer, in whom psychological and sleep-related parameters as well as parameters related to pain intensity and interference had been acquired. Data were analysed by using supervised and unsupervised machine-learning techniques (i) to detect patient subgroups based on the pattern of psychological or sleep-related parameters, (ii) to interpret the detected cluster structure and (iii) to relate this data structure to pain interference and impact on life.
Results: Artificial intelligence-based detection of data structure, implemented as self-organizing neuronal maps, identified two different clusters of patients. A smaller cluster (11.5% of the patients) had comparatively lower resilience, more depressive symptoms and lower extraversion than the other patients. In these patients, life-satisfaction, mood, and life in general were comparatively more impeded by persistent pain.
Conclusions: The results support the initial hypothesis that psychological and sleep-related parameter patterns are meaningful for subgrouping patients with respect to how persistent pain after breast cancer treatments interferes with their life. This indicates that management of pain should address more complex features than just pain intensity. Artificial intelligence is a useful tool in the identification of subgroups of patients based on psychological factors.
We present a hierarchy of polynomial time lattice basis reduction algorithms that stretch from Lenstra, Lenstra, Lovász reduction to Korkine–Zolotareff reduction. Let $\lambda(L)$ be the length of a shortest nonzero element of a lattice $L$. We present an algorithm which for $k \in \mathbb{N}$ finds a nonzero lattice vector $b$ so that $|b|^2 \le (6k^2)^{n/k}\,\lambda(L)^2$. This algorithm uses $O(n^2(k^{k+o(k)} + n^2)\log B)$ arithmetic operations on $O(n \log B)$-bit integers. This holds provided that the given basis vectors $b_1,\dots,b_n \in \mathbb{Z}^n$ are integral and have the length bound $B$. This algorithm successively applies Korkine–Zolotareff reduction to blocks of length $k$ of the lattice basis. We also improve Kannan's algorithm for Korkine–Zolotareff reduction.
Gene therapy (GT) is becoming a realistic treatment option for patients with haemophilia. Outside clinical trials, the complexity and potential complications of GT will pose unprecedented challenges to haemophilia care centres. Aim: To explore the potential use of electronic tools to improve the delivery of GT under real-world conditions. Methods: Considering the hub-and-spoke model, the GTH working group on GT considered the entire patient pathway and reached consensus on requirements for an integrative software tool for securely documenting and sharing information between treaters, pharmacies and patients. Results: Six steps of the gene therapy process were identified, each requiring completion of the previous step as a prerequisite for entry. The responsibilities of GT dosing and follow-up treatment centres, read/write access rules, and the minimum data set were outlined. Data contributed by patients through mobile devices were also considered. Conclusion: Important information needs to be shared between patients and treatment centres in a real-world GT hub-and-spoke model. Collecting and sharing this information in well-organised electronic applications will not only improve patient care but also enable national and international data collection in clinical registries...
Internalin B–mediated activation of the membrane-bound receptor tyrosine kinase MET is accompanied by a change in receptor mobility. Conversely, it should be possible to infer from receptor mobility whether a cell has been treated with internalin B. Here, we propose a method based on hidden Markov modeling and explainable artificial intelligence that machine-learns the key differences in MET mobility between internalin B–treated and –untreated cells from single-particle tracking data. Our method assigns receptor mobility to three diffusion modes (immobile, slow, and fast). It discriminates between internalin B–treated and –untreated cells with a balanced accuracy of >99% and identifies three parameters that are most affected by internalin B treatment: a decrease in the mobility of slow molecules (1) and a depopulation of the fast mode (2) caused by an increased transition of fast molecules to the slow mode (3). Our approach is based entirely on free software and is readily applicable to the analysis of other membrane receptors.
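A rough sketch of the underlying idea (not the published pipeline; the simulated displacements and model settings are assumptions): fit a three-state Gaussian hidden Markov model to per-frame displacement features of a track with hmmlearn.

```python
# Sketch: assign track segments to three diffusion modes with a Gaussian HMM
# (synthetic displacements; not the paper's data or model).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Simulated squared frame-to-frame displacements for one track (immobile/slow/fast)
disp = np.concatenate([
    rng.exponential(0.01, 100),
    rng.exponential(0.1, 100),
    rng.exponential(1.0, 100),
]).reshape(-1, 1)

hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=200, random_state=0)
hmm.fit(disp)
states = hmm.predict(disp)        # per-frame diffusion-mode assignment
print(np.bincount(states), hmm.means_.ravel())
```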
Recent scientific evidence suggests that chronic pain phenotypes are reflected in metabolomic changes. However, problems associated with chronic pain, such as sleep disorders or obesity, may complicate the metabolome pattern. Such a complex phenotype was investigated to identify common metabolomics markers at the interface of persistent pain, sleep, and obesity in 71 men and 122 women undergoing tertiary pain care. They were examined for patterns in d = 97 metabolomic markers that segregated patients with a relatively benign pain phenotype (low and little bothersome pain) from those with more severe clinical symptoms (high pain intensity, more bothersome pain, and co-occurring problems such as sleep disturbance). Two independent lines of data analysis were pursued. First, a data-driven supervised machine learning-based approach was used to identify the most informative metabolic markers for complex phenotype assignment. This pointed primarily at adenosine monophosphate (AMP), asparagine, deoxycytidine, glucuronic acid, and propionylcarnitine, and secondarily at cysteine and nicotinamide adenine dinucleotide (NAD) as informative for assigning patients to clinical pain phenotypes. After this, a hypothesis-driven analysis of metabolic pathways was performed, including sleep and obesity. In both the first and second line of analysis, three metabolic markers (NAD, AMP, and cysteine) were found to be relevant, including metabolic pathway analysis in obesity, associated with changes in amino acid metabolism, and sleep problems, associated with downregulated methionine metabolism. Taken together, present findings provide evidence that metabolomic changes associated with co-occurring problems may play a role in the development of severe pain. Co-occurring problems may influence each other at the metabolomic level. Because the methionine and glutathione metabolic pathways are physiologically linked, sleep problems appear to be associated with the first metabolic pathway, whereas obesity may be associated with the second.
Background: Persistent postsurgical neuropathic pain (PPSNP) can occur after intraoperative damage to somatosensory nerves, with a prevalence of 29–57% in breast cancer surgery. Proteomics is an active research field in neuropathic pain and the first results support its utility for establishing diagnoses or finding therapy strategies. Methods: 57 women (30 non-PPSNP/27 PPSNP) who had experienced a surgeon-verified intercostobrachial nerve injury during breast cancer surgery were examined for patterns in 74 serum proteomic markers that allowed discrimination between subgroups with or without PPSNP. Serum samples were obtained both before and after surgery. Results: Unsupervised data analyses, including principal component analysis and self-organizing maps of artificial neurons, revealed patterns that supported a data structure consistent with pain-related subgroup (non-PPSNP vs. PPSNP) separation. Subsequent supervised machine learning-based analyses revealed 19 proteins (CD244, SIRT2, CCL28, CXCL9, CCL20, CCL3, IL.10RA, MCP.1, TRAIL, CCL25, IL10, uPA, CCL4, DNER, STAMPB, CCL23, CST5, CCL11, FGF.23) that were informative for subgroup separation. In cross-validated training and testing of six different machine-learned algorithms, subgroup assignment was significantly better than chance, whereas this was not possible when training the algorithms with randomly permuted data or with the protein markers not selected. In particular, sirtuin 2 emerged as a key protein, differing both before and after breast cancer treatment between the PPSNP and the non-PPSNP subgroup. Conclusions: The identified proteins play important roles in immune processes such as cell migration, chemotaxis, and cytokine signaling. They also have considerable overlap with currently known targets of approved or investigational drugs. Taken together, several lines of unsupervised and supervised analyses pointed to structures in serum proteomics data, obtained before and after breast cancer surgery, that relate to neuroinflammatory processes associated with the development of neuropathic pain after an intraoperative nerve lesion.
Motivation: Gaussian mixture models (GMMs) are probabilistic models commonly used in biomedical research to detect subgroup structures in data sets with one-dimensional information. Reliable model parameterization requires that the number of modes, i.e., states of the generating process, is known. However, this is rarely the case for empirically measured biomedical data. Several implementations are available that estimate GMM parameters differently. This work aims to provide a comparative evaluation of automated GMM fitting methods.
Results and conclusions: The performance of commonly used algorithms for automatic parameterization and mode number determination was compared with respect to reproducing the ground truth of generated data derived from multiple normal distributions. Four main variants of Gaussian mode number detection algorithms and five variants of GMM parameter estimation methods were tested in a combinatory scenario. The combination of the best performing mode number determination algorithms and GMM parameter estimation methods was then tested on artificial and real-life data sets known to display a GMM structure. None of the tested methods correctly determined the underlying data structure consistently. The likelihood ratio test had the best performance in identifying the mode number associated with the best GMM fit of the data distribution, while the Markov chain Monte Carlo (MCMC) algorithm was best for GMM parameter estimation. The combination of these two methods was consistently among the best and overall outperformed the available implementations.
Implementation: An automated tool for the detection of GMM based structures in (biomedical) datasets was created based on the present results and made freely available in the R library “opGMMassessment” at https://cran.r-project.org/package=opGMMassessment.
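The R package is the authors' tool; as a language-agnostic illustration of the underlying idea only, the sketch below selects the number of Gaussian modes with scikit-learn using BIC (synthetic data; note the study found the likelihood ratio test, not BIC, to perform best).

```python
# Sketch: choose the number of Gaussian modes via an information criterion
# (scikit-learn + BIC; synthetic data, not the study's methods or tool).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
data = np.concatenate([
    rng.normal(0.0, 1.0, 300),
    rng.normal(5.0, 0.7, 200),
]).reshape(-1, 1)

fits = {m: GaussianMixture(n_components=m, n_init=5, random_state=0).fit(data)
        for m in range(1, 6)}
best_m = min(fits, key=lambda m: fits[m].bic(data))
print(best_m, fits[best_m].means_.ravel())
```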
Because diabetes is associated with central nervous changes, and olfactory dysfunction has been reported with increased prevalence among persons with diabetes, this study addressed the question of whether the risk of developing diabetes in the next 10 years is reflected in olfactory symptoms. In a cross-sectional study of 164 individuals seeking medical consultation for possible diabetes, olfactory function was evaluated using a standardized clinical test assessing olfactory threshold, odor discrimination, and odor identification. Metabolomics parameters were assessed via blood concentrations. The individual diabetes risk was quantified according to the validated German version of the “FINDRISK” diabetes risk score. Machine learning algorithms trained with metabolomics patterns predicted low or high diabetes risk with a balanced accuracy of 63–75%. Similarly, olfactory subtest results predicted the olfactory dysfunction category with a balanced accuracy of 85–94%, occasionally reaching 100%. However, olfactory subtest results failed to improve the prediction of diabetes risk based on metabolomics data, and metabolomics data did not improve the prediction of the olfactory dysfunction category based on olfactory subtest results. The results of the present study suggest that olfactory function is not a useful predictor of diabetes.