004 Datenverarbeitung; Informatik
Analysis of machine learning prediction quality for automated subgroups within the MIMIC III dataset
(2023)
The motivation for this master's thesis is to explore the potential of predictive data analytics in the field of medicine. For this, the MIMIC-III dataset offers an extensive foundation for the construction of prediction models, including Random Forest, XGBoost, and deep learning networks. These models were implemented to forecast the mortality of 2,655 stroke patients.
The first part of the thesis involved conducting a comprehensive data analysis of the filtered MIMIC-III dataset.
Subsequently, the effectiveness and fairness of the predictive models were evaluated. Although the performance levels of the developed models did not match those reported in related research, their potential became evident. The results obtained demonstrated promising capabilities and highlighted the effectiveness of the applied methodologies. Moreover, the feature relevance within the XGBoost model was examined to increase model explainability.
Finally, relevant subgroups were identified to perform a comparative analysis of prediction performance across them. While this approach can be regarded as a valuable methodology, it was not possible to investigate the underlying reasons for potential unfairness across clusters: too few instances remained per subgroup in the test data for further fairness or feature-relevance analysis.
In conclusion, the implementation of an alternative use case with a higher patient count is recommended.
The code for this analysis is made available via a GitHub repository and includes a frontend to visualize the results.
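The referenced repository is not reproduced here, but the subgroup comparison described above can be sketched generically. The example below uses entirely synthetic data and a hand-rolled AUC; the subgroup labels, noise levels, and prevalence are illustrative assumptions, not the thesis's MIMIC-III pipeline. It scores a model's risk predictions separately per subgroup:

```python
import numpy as np

def auc(y_true, y_score):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Fraction of (positive, negative) pairs that are ranked correctly.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)      # hypothetical subgroup label
y = rng.binomial(1, 0.3, size=n)            # hypothetical mortality label
# Informative scores, deliberately noisier for subgroup B.
noise = np.where(group == "B", 0.8, 0.4)
score = y + rng.normal(0.0, noise)

for g in ["A", "B"]:
    mask = group == g
    print(g, round(auc(y[mask], score[mask]), 3))
```

A per-subgroup gap in such a metric is exactly the kind of signal the thesis's fairness analysis looks for, although interpreting it requires enough instances per subgroup, as noted above.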
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain neural networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips of visual events across ten human subjects. We use the videos’ extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
We study threshold testing, an elementary probing model whose goal is to choose a large value out of n i.i.d. random variables. An algorithm can test each variable X_i once against some threshold t_i, and the test returns binary feedback on whether X_i ≥ t_i or not. Thresholds can be chosen adaptively or non-adaptively by the algorithm. Given the test results for each variable, we then select the variable with the highest conditional expectation. We compare the expected value obtained by the testing algorithm with the expected maximum of the variables. Threshold testing is a semi-online variant of the gambler’s problem and prophet inequalities. Indeed, the optimal performance of non-adaptive algorithms for threshold testing is governed by the standard i.i.d. prophet inequality of approximately 0.745 + o(1) as n → ∞. We show how adaptive algorithms can significantly improve upon this ratio. Our adaptive testing strategy guarantees a competitive ratio of at least 0.869 - o(1). Moreover, we show that there are distributions that admit only a constant ratio c < 1, even when n → ∞. Finally, when each variable can be tested multiple times (with n tests in total), we design an algorithm that achieves a ratio of 1 - o(1).
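The gap between testing and the prophet benchmark is easy to observe numerically. The sketch below simulates the simplest non-adaptive strategy, a single common threshold for Uniform(0,1) variables chosen ad hoc at t = 0.6; it illustrates the model only and is not the paper's optimal or adaptive strategy. It compares the achieved value with the expected maximum:

```python
import numpy as np

rng = np.random.default_rng(1)

def threshold_test_value(n, t, trials=20000):
    """Non-adaptive single-threshold testing for n i.i.d. Uniform(0,1) values.

    Every variable is tested once against the same threshold t; we then pick
    the variable with the highest conditional expectation and record its
    realised value.  For uniforms, any variable that passed its test has
    conditional mean (1+t)/2, which beats the mean t/2 of a failed one.
    """
    X = rng.random((trials, n))
    passed = X >= t
    picked = np.empty(trials)
    for k in range(trials):
        idx = np.flatnonzero(passed[k])
        choice = idx[0] if idx.size else 0   # all failed: fall back to X_0
        picked[k] = X[k, choice]
    return picked.mean()

n, t = 5, 0.6
alg = threshold_test_value(n, t)
# Benchmark: expected maximum of n uniforms, E[max] = n/(n+1).
opt = np.random.default_rng(2).random((20000, n)).max(axis=1).mean()
print(f"testing value ~ {alg:.3f}, E[max] ~ {opt:.3f}, ratio ~ {alg / opt:.3f}")
```

For n = 5 and t = 0.6 the simulated ratio is visibly below 1, consistent with the constant-factor losses the abstract describes for non-adaptive algorithms.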
Blockchains in public administration : a RADIUS on blockchain framework for public administration
(2023)
The emergence of blockchain technology has generated a great deal of attention, as reflected in numerous scientific and journalistic articles. However, the implementation of blockchain for public administrations in Germany has suffered a setback owing to unsuccessful initiatives: initial enthusiasm was followed by disillusionment. Nevertheless, the technology continues to evolve. This paper examines whether the use of a blockchain can still optimize the processes of public administrations. We analyse not only the failed projects but also more recent applications of the technology and their potential relevance for administration, especially in the state of Hesse.
To answer whether blockchains are promising for administrations, a Design Science Research (DSR) approach is chosen. DSR aims to create new and innovative solutions to real-world problems through the development and evaluation of artefacts such as models, methods, or prototypes. For this work, the implementation of a framework realizing an Authentication, Authorization, and Accounting (AAA) system on the blockchain was identified as worthwhile. The Remote Authentication Dial-In User Service (RADIUS) protocol was identified as a suitable protocol for the AAA system. The goal is to create a way to implement the system either entirely on a blockchain or as a hybrid system, with various blockchain technologies under consideration. The framework developed for this purpose is named AAA-me.
The development of AAA-me has shown that the desired framework for implementing RADIUS on the blockchain is possible in various degrees of implementation. Previous work mostly relied on full development. Additionally, it has been shown that AAA-me can be used to perform hybrid integration at different implementation levels. This makes AAA-me stand out from the few hybrid previous approaches. Furthermore, AAA-me was investigated in different laboratory environments. This was to determine the expected resilience against Single Point of Failure (SPOF). The results of the lab investigation indicated that a RADIUS system on top of a blockchain can provide benefits in terms of security and performance. In the lab environment, times were measured within which a series of authorization requests were processed. In addition, it was illustrated how a RADIUS system implemented using blockchain can protect itself against Man-in-the-Middle (MITM) attacks.
Finally, in collaboration with the Hessian Central Office for Data Processing (German: Hessische Zentrale für Datenverarbeitung) (HZD), another test lab demonstrated how a RADIUS system on the blockchain can integrate with the existing IT systems of the German state of Hesse. Based on these findings, this work reevaluated the applicability of blockchain technology for public administration processes.
The work has thus shown that the use of a blockchain can still be purposeful. However, it has also shown that an implementation can bring many problems with it. The small number of blockchain developers and engineers makes it difficult to find people to develop and maintain such a system. In addition, one faces the problem of committing now to an architecture that will be applied to many future projects, while each project can, in turn, influence the choice of architecture. Once this problem is solved and a blockchain infrastructure is available, it can be established quickly and be more SPOF-resistant, for example for Public Key Infrastructure (PKI) systems.
AAA-me was only applied in lab and test environments, so no real data ran over its own infrastructure. This allowed the necessary flexibility for development, but system-related properties could appear in real deployments that are not detectable in this way. Furthermore, AAA-me's development is still at an early stage: many manual adjustments are needed for it to integrate with an existing RADIUS system. Moreover, no dedicated security hardening was carried out in the lab environments, and vulnerabilities can quickly open up on web servers due to misconfigurations and missing updates. For the above reasons, productive use should be discouraged until substantial further development is carried out.
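The AAA-me framework itself is not reproduced in this excerpt, but the tamper evidence that a blockchain-backed AAA system is meant to provide can be illustrated with a minimal hash-chained ledger. The sketch below is a generic illustration: the record fields and function names are made up, and a real blockchain adds consensus and replication on top of plain hash chaining:

```python
import hashlib
import json

def append_record(chain, record):
    """Append a RADIUS-style accounting record to a hash-chained ledger.

    Each entry commits to its predecessor's hash, so tampering with any
    past record breaks verification of the whole chain.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash in order; return False on any tampering."""
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "record": entry["record"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
append_record(ledger, {"user": "alice", "event": "Access-Accept"})
append_record(ledger, {"user": "bob", "event": "Acct-Start"})
print(verify(ledger))                       # True on an untampered chain
ledger[0]["record"]["user"] = "mallory"
print(verify(ledger))                       # False: stored hash no longer matches
```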
PolarCAP – A deep learning approach for first motion polarity classification of earthquake waveforms
(2022)
Highlights
• We present PolarCAP, a deep learning model that can classify the polarity of a waveform with a 98% accuracy.
• The first-motion polarity of seismograms is a useful parameter, but its manual determination can be laborious and imprecise.
- We demonstrate that in several cases the model can assign trace polarity more accurately than a human analyst.
Abstract
The polarity of first P-wave arrivals plays a significant role in the effective determination of focal mechanisms, especially for smaller earthquakes. Manual estimation of polarities is not only time-consuming but also prone to human error, which warrants an automated algorithm for first-motion polarity determination. We present PolarCAP, a deep learning model that uses an autoencoder architecture to identify first-motion polarities of earthquake waveforms. PolarCAP is trained in a supervised fashion on more than 130,000 labelled traces from the Italian seismic dataset (INSTANCE) and is cross-validated on 22,000 traces to choose an optimal set of hyperparameters. We obtain an accuracy of 0.98 on a completely unseen test dataset of almost 33,000 traces. Furthermore, we check the model's generalizability by testing it on the datasets provided by previous works and show that our model achieves a higher recall on both positive and negative polarities.
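PolarCAP itself is not reproduced here, but the manual rule it automates can be approximated by a simple amplitude-threshold baseline, which is the kind of heuristic a learned model is meant to outperform. In the sketch below the waveform is synthetic, and the pick index, window lengths, and 3-sigma threshold are illustrative assumptions:

```python
import numpy as np

def first_motion_polarity(trace, pick, win=10, noise_win=50):
    """Rule-based baseline for first-motion polarity (not PolarCAP itself).

    Given a waveform and a P-pick sample index, compare the first significant
    excursion after the pick against the pre-pick noise level and return
    'U' (up/positive), 'D' (down/negative), or '?' if ambiguous.
    """
    trace = np.asarray(trace, dtype=float)
    noise = trace[max(0, pick - noise_win):pick]
    threshold = 3.0 * (noise.std() if noise.size else 0.0)
    segment = trace[pick:pick + win] - trace[pick]
    significant = np.flatnonzero(np.abs(segment) > threshold)
    if significant.size == 0:
        return "?"
    return "U" if segment[significant[0]] > 0 else "D"

# Synthetic example: low-level noise followed by a sharp upward first motion.
rng = np.random.default_rng(0)
wave = np.concatenate([rng.normal(0, 0.01, 100), np.linspace(0.0, 1.0, 20)])
print(first_motion_polarity(wave, pick=100))  # → 'U'
```

Such a heuristic fails on emergent arrivals or low signal-to-noise traces, which is exactly where a trained classifier like PolarCAP earns its accuracy.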
The ubiquitin (Ub) code denotes the complex Ub architectures, including Ub chains of different length, linkage-type and linkage combinations, which enable ubiquitination to control a wide range of protein fates. Although many linkage-specific interactors have been described, how interactors are able to decode more complex architectures is not fully understood. We conducted a Ub interactor screen, in humans and yeast, using Ub chains of varying length, as well as, homotypic and heterotypic branched chains of the two most abundant linkage types – K48- and K63-linked Ub. We identified some of the first K48/K63 branch-specific Ub interactors, including histone ADP-ribosyltransferase PARP10/ARTD10, E3 ligase UBR4 and huntingtin-interacting protein HIP1. Furthermore, we revealed the importance of chain length by identifying interactors with a preference for Ub3 over Ub2 chains, including Ub-directed endoprotease DDI2, autophagy receptor CCDC50 and p97-adaptor FAF1. Crucially, we compared datasets collected using two common DUB inhibitors – Chloroacetamide and N-ethylmaleimide. This revealed inhibitor-dependent interactors, highlighting the importance of inhibitor consideration during pulldown studies. This dataset is a key resource for understanding how the Ub code is read.
Structural rearrangements play a central role in the organization and function of complex biomolecular systems. In principle, Molecular Dynamics (MD) simulations enable us to investigate these thermally activated processes with an atomic level of resolution. In practice, an exponentially large fraction of computational resources must be invested to simulate thermal fluctuations in metastable states. Path sampling methods focus the computational power on sampling the rare transitions between states. One of their outstanding limitations is to efficiently generate paths that visit significantly different regions of the conformational space. To overcome this issue, we introduce a new algorithm for MD simulations that integrates machine learning and quantum computing. First, using functional integral methods, we derive a rigorous low-resolution spatially coarse-grained representation of the system’s dynamics, based on a small set of molecular configurations explored with machine learning. Then, we use a quantum annealer to sample the transition paths of this low-resolution theory. We provide a proof-of-concept application by simulating a benchmark conformational transition with all-atom resolution on the D-Wave quantum computer. By exploiting the unique features of quantum annealing, we generate uncorrelated trajectories at every iteration, thus addressing one of the challenges of path sampling. Once larger quantum machines become available, the interplay between quantum and classical resources may emerge as a new paradigm of high-performance scientific computing. In this work, we provide a platform to implement this integrated scheme in the field of molecular simulations.
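The annealing step in such a scheme amounts to sampling low-energy configurations of a quadratic binary objective. As a classical stand-in, the sketch below minimises a tiny made-up QUBO by exhaustive enumeration; a quantum annealer such as D-Wave's addresses objectives of the same form, just at scales where enumeration is hopeless:

```python
import itertools

import numpy as np

def solve_qubo_brute_force(Q):
    """Minimise x^T Q x over binary vectors x by exhaustive enumeration.

    Classical stand-in for the quantum-annealing step: an annealer samples
    low-energy solutions of exactly this objective form.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Tiny made-up objective: negative diagonal rewards switching bits on,
# positive off-diagonal penalises turning on coupled pairs together.
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])
x, e = solve_qubo_brute_force(Q)
print(x, e)  # → [1 0 1] -2.0
```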
Gradient-consistent enrichment of finite element spaces for the DNS of fluid-particle interaction
(2019)
Highlights
• Monolithic scheme for particulate flows preventing an oscillating pressure along the interface.
• The choice of enriching shape functions is driven by the properties of its gradient instead of its value.
• The choice of enriching shape functions inherits a natural stabilization on small cut elements.
Abstract
We present gradient-consistent enriched finite element spaces for the simulation of free particles in a fluid, which involves forces being exchanged between the particles and the fluid at the interface. In an earlier work [23] we derived a monolithic scheme which includes the interaction forces into the Navier-Stokes equations by means of a fictitious-domain-like strategy. Due to an inexact approximation of the interface, oscillations of the pressure along the interface were observed. In multiphase flows, oscillations and spurious velocities are a common issue. The surface force term yields a jump in the pressure, and therefore the oscillations are usually resolved by extending the spaces on cut elements in order to resolve the discontinuity. For the construction of the enriched spaces proposed in this paper we exploit the Petrov-Galerkin formulation of the vertex-centered finite volume method (PG-FVM), as already investigated in [23]. From the perspective of the finite volume scheme we argue that wrong discrete normal directions at the interface are the origin of the oscillations. This perspective suggests looking at gradients rather than values of the enriching shape functions. The crucial parameter of the enrichment functions is therefore the gradient of the shape functions, and especially that of the test space. The distinguishing feature of our construction is thus an enrichment based on the choice of shape functions with consistent gradients. These derivations finally yield a fitted scheme for the immersed interface. We further propose a strategy ensuring a well-conditioned system independent of the location of the interface. The enriched spaces can be used within any existing finite element discretization for the Navier-Stokes equations. Our numerical tests were conducted using the PG-FVM, and we demonstrate that the enriched spaces are able to eliminate the oscillations.
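The role of an interface enrichment can be illustrated in one dimension: a function with a gradient jump at an interface cannot be represented by plain linear shape functions on an uncut element, but adding a single enrichment function whose gradient jumps at the interface recovers it exactly. The sketch below is a generic XFEM-style toy (the interface position, element, and coefficient are made up), not the paper's gradient-consistent construction:

```python
import numpy as np

# Single element [0, 1] with an interface at x = a inside it.
# u(x) = |x - a| has a kink (a gradient jump) at the interface,
# a 1-D analogue of the pressure behaviour discussed in the abstract.
a = 0.4
xs = np.linspace(0.0, 1.0, 201)
u = np.abs(xs - a)

# Standard P1 interpolation from the element's two end nodes only.
u_p1 = u[0] * (1 - xs) + u[-1] * xs

# Enrichment function: the kink shape minus its own P1 interpolant,
# so it vanishes at the nodes and contributes only the missing gradient jump.
phi = np.abs(xs - a) - (a * (1 - xs) + (1 - a) * xs)
u_enr = u_p1 + 1.0 * phi   # enrichment coefficient 1 reproduces u exactly

print(f"P1 error:       {np.abs(u - u_p1).max():.3f}")
print(f"enriched error: {np.abs(u - u_enr).max():.3e}")
```

The P1 error peaks at the interface, while the enriched representation is exact to rounding; conditioning of such enrichments when the interface cuts an element close to a node is exactly the issue the paper's stabilization strategy addresses.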
Rotational test spaces for a fully-implicit FVM and FEM for the DNS of fluid-particle interaction
(2019)
The paper presents a fully-implicit and stable finite element and finite volume scheme for the simulation of freely moving particles in a fluid. The developed method is based on the Petrov-Galerkin formulation of a vertex-centered finite volume method (PG-FVM) on unstructured grids. Appropriate extension of the ansatz and test spaces leads to a formulation comparable to a fictitious domain formulation. The purpose of this work is to introduce a new concept of numerical modeling that reduces the mathematical overhead which many other methods require. It exploits the identification of the PG-FVM with a corresponding finite element bilinear form. The surface integrals of the finite volume scheme enable a natural incorporation of the interface forces purely based on the original bilinear operator for the fluid, so there is no need to expand the system of equations to a saddle-point problem. Like fictitious domain methods, the extended scheme treats the particles as rigid parts of the fluid. The distinguishing feature compared to most existing fictitious domain methods is that no additional Lagrange multiplier or other artificial external forces are needed for the fluid-solid coupling. Consequently, only a single solve of the derived linear system for the fluid together with the particles is necessary, and the proposed method does not require any fractional time-stepping scheme to balance the interaction forces between fluid and particles. For the linear Stokes problem we prove the stability of both schemes. Moreover, for the stationary case the conservation of mass and momentum is not violated by the extended scheme, i.e. conservativity is accomplished within the range of the underlying, unconstrained discretization scheme. The scheme is applicable to problems in two and three dimensions.
We investigate the applicability of the well-known multilevel Monte Carlo (MLMC) method to the class of density-driven flow problems, in particular the salinisation of coastal aquifers. As a test case, we solve the uncertain Henry saltwater intrusion problem. Unknown porosity, permeability and recharge parameters are modelled by random fields. The classical deterministic Henry problem is non-linear and time-dependent, and can easily take several hours of computing time. Uncertain settings require the solution of multiple realisations of the deterministic problem, and the total computational cost increases drastically. Instead of computing hundreds of random realisations individually, one typically estimates the mean value and the variance. Standard methods such as Monte Carlo or surrogate-based methods are a good choice, but they compute all stochastic realisations on the same, often very fine, mesh and do not balance the stochastic and discretisation errors. These facts motivated us to apply the MLMC method. We demonstrate that by solving the Henry problem on multi-level spatial and temporal meshes, the MLMC method reduces the overall computational and storage costs. To reduce the computing cost further, parallelization is performed in both the physical and stochastic spaces. To solve each deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
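The telescoping idea behind MLMC can be sketched on a toy problem. The example below estimates the expected terminal value of a geometric Brownian motion; this is a stand-in, since the paper's quantity of interest is the Henry-problem solution and ug4 replaces the toy Euler solver. Level l uses 2**l time steps, corrections between consecutive levels reuse the same Brownian increments, and most samples go to the cheap coarse level:

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_gbm(z, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """Euler path of dS = mu*S dt + sigma*S dW; z holds one row of increments per path."""
    n = z.shape[1]
    dt = T / n
    s = np.full(z.shape[0], s0)
    for k in range(n):
        s = s * (1 + mu * dt + sigma * np.sqrt(dt) * z[:, k])
    return s

def mlmc_estimate(levels, samples_per_level):
    """Telescoping MLMC estimate of E[S_T]: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].

    Toy illustration of the method only.  Level l uses 2**l time steps; each
    correction term couples fine and coarse paths through the SAME randomness,
    which makes its variance (and hence the needed sample count) small.
    """
    est = 0.0
    for l, m in zip(range(levels + 1), samples_per_level):
        z = rng.standard_normal((m, 2 ** l))
        fine = euler_gbm(z)
        if l == 0:
            est += fine.mean()
        else:
            # Coarse path: merge pairs of fine Brownian increments
            # (dt doubles, so the summed increment is rescaled by 1/sqrt(2)).
            zc = (z[:, 0::2] + z[:, 1::2]) / np.sqrt(2)
            est += (fine - euler_gbm(zc)).mean()
    return est

# Many samples on the cheap coarse level, few on the expensive fine levels.
print(mlmc_estimate(levels=4, samples_per_level=[40000, 20000, 10000, 5000, 2500]))
```

The exact value is exp(0.05), roughly 1.051; the estimator lands close to it while spending most of its samples on the cheapest level, which is the cost balance the abstract exploits on the Henry problem's mesh hierarchy.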