004 Datenverarbeitung; Informatik
Document Type
- Doctoral Thesis (58)
- Bachelor Thesis (18)
- Article (17)
- Master's Thesis (5)
- Conference Proceeding (4)
- Habilitation (2)
- Diploma Thesis (1)
- Preprint (1)
Keywords
- Machine Learning (5)
- NLP (5)
- Annotation (3)
- Text2Scene (3)
- TextAnnotator (3)
- Virtual Reality (3)
- ALICE (2)
- Blockchain (2)
- CBM experiment (2)
- Classification (2)
Institute
- Informatik und Mathematik (106)
Cone photoreceptor cells are wavelength-sensitive neurons in the retinas of vertebrate eyes and are responsible for color vision. The spatial distribution of these nerve cells is commonly referred to as the cone photoreceptor mosaic. By applying the principle of maximum entropy, we demonstrate the universality of retinal cone mosaics in vertebrate eyes by examining various species, namely, rodent, dog, monkey, human, fish, and bird. We introduce a parameter called retinal temperature, which is conserved across the retinas of vertebrates. The virial equation of state for two-dimensional cellular networks, known as Lemaître’s law, is also obtained as a special case of our formalism. We investigate the behavior of several artificially generated networks, as well as the natural network of the retina, with respect to this universal topological law.
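To make the topological law concrete, the following is a minimal sketch, assuming a Voronoi tessellation of a random point pattern as a stand-in for a cone mosaic, of how the fraction of six-sided cells p6 and the variance mu2 of the side distribution can be computed and compared against Lemaître's law; the point pattern, border margin, and all parameters are illustrative and not taken from the study.

```python
# Hedged sketch: estimating Lemaitre's-law quantities on an artificial mosaic.
# Assumption: cell topology is approximated by a Voronoi tessellation of
# random points; only bounded cells away from the border are counted.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(2000, 2))
vor = Voronoi(points)

sides = []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if len(region) == 0 or -1 in region:      # skip unbounded cells
        continue
    verts = vor.vertices[region]
    if verts.min() < 0.05 or verts.max() > 0.95:  # skip cells near the border
        continue
    sides.append(len(region))

sides = np.array(sides)
p6 = np.mean(sides == 6)   # fraction of six-sided cells
mu2 = np.var(sides)        # variance of the number of sides per cell
print(f"p6 = {p6:.3f}, mu2 = {mu2:.3f}, p6^2 * mu2 = {p6**2 * mu2:.3f}")
# Lemaitre's law relates p6 and mu2; in one regime mu2 * p6**2 stays roughly
# constant (about 1/(2*pi)) across many two-dimensional cellular networks.
```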
This bachelor thesis developed a pipeline for the automatic processing of scanned hospital letters: HospLetExtractor. Hospital letters can contain valuable information about potential adverse drug reactions and useful case information relevant to pharmacovigilance. To make this data accessible, this thesis presents a pipeline consisting of image pre-processing, optical character recognition and post-processing. Pre-processing deskews the images, removes lines and rectangles, reduces noise and applies super-resolution. For the post-processing, a spell checking system was set up, including a newly built word-frequency dictionary for German medical terms based on a corpus of German medical texts compiled for this purpose. Furthermore, classical and deep learning models for the classification of hospital letters were compared, with the transformer-based models performing best. In order to train and test the models, a new gold standard was created. By making these medical documents accessible for automatic analysis, this work aims to contribute to expanding the scope of pharmacovigilance.
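As a rough, hedged illustration of the three pipeline stages (pre-processing, OCR, spell-checking post-processing), here is a minimal sketch using OpenCV, Tesseract and SymSpell; the function names, parameters and dictionary path are assumptions for illustration and do not reproduce the thesis' HospLetExtractor components (deskewing, line removal and super-resolution are omitted).

```python
# Minimal sketch of a scanned-letter pipeline: denoise/binarize, OCR, spell-check.
import cv2
import pytesseract
from symspellpy import SymSpell, Verbosity

def preprocess(path):
    """Grayscale, denoise and binarize a scanned letter before OCR."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.fastNlMeansDenoising(img, h=10)
    _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return img

def ocr(img):
    """Run Tesseract with the German language model."""
    return pytesseract.image_to_string(img, lang="deu")

def correct(text, dictionary_path):
    """Token-wise spell checking against a (medical) word-frequency dictionary."""
    sym = SymSpell(max_dictionary_edit_distance=2)
    sym.load_dictionary(dictionary_path, term_index=0, count_index=1)
    tokens = []
    for tok in text.split():
        suggestions = sym.lookup(tok, Verbosity.TOP, max_edit_distance=2)
        tokens.append(suggestions[0].term if suggestions else tok)
    return " ".join(tokens)

# Illustrative usage (paths are placeholders):
# text = correct(ocr(preprocess("letter.png")), "de_medical_frequency.txt")
```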
Background: Prostate cancer is a major health concern in aging men. Paralleling an aging society, prostate cancer prevalence increases, emphasizing the need for efficient diagnostic algorithms.
Methods: Retrospectively, 106 prostate tissue samples from 48 patients (mean age, 66 ± 6.6 years) were included in the study. Patients suffered from prostate cancer (n = 38) or benign prostatic hyperplasia (n = 10) and were treated with radical prostatectomy or Holmium laser enucleation of the prostate, respectively. We constructed tissue microarrays (TMAs) comprising representative malignant (n = 38) and benign (n = 68) tissue cores. TMAs were processed to histological slides, stained, digitized and assessed for the applicability of machine learning strategies and open-source tools in the diagnosis of prostate cancer. We applied the software QuPath to extract features for shape, stain intensity, and texture of TMA cores for three stainings, H&E, ERG, and PIN-4. Three machine learning algorithms, neural network (NN), support vector machine (SVM), and random forest (RF), were trained and cross-validated with 100 Monte Carlo random splits into a 70% training set and a 30% test set. We determined AUC values for single color channels, with and without optimization of hyperparameters by exhaustive grid search. We applied recursive feature elimination to feature sets of multiple color transforms.
Results: Mean AUC was above 0.80. PIN-4 stainings yielded higher AUC than H&E and ERG. For PIN-4 with the color transform saturation, NN, RF, and SVM revealed AUC of 0.93 ± 0.04, 0.91 ± 0.06, and 0.92 ± 0.05, respectively. Optimization of hyperparameters improved the AUC only slightly, by 0.01. For H&E, feature selection resulted in no increase of AUC, but it led to an increase of 0.02–0.06 for ERG and PIN-4.
Conclusions: Automated pipelines may be able to discriminate with high accuracy between malignant and benign tissue. We found PIN-4 staining best suited for classification. Further bioinformatic analysis of larger data sets would be crucial to evaluate the reliability of automated classification methods for clinical practice and to evaluate potential discrimination of cancer aggressiveness, to pave the way to automated precision medicine.
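The cross-validation scheme described in the Methods (100 Monte Carlo random 70/30 splits with an AUC per split) can be sketched as follows; the feature matrix X, label vector y and the random forest settings are placeholder assumptions, not the study's actual configuration.

```python
# Hedged sketch: Monte Carlo cross-validation with 100 random 70/30 splits,
# reporting mean and standard deviation of the AUC for a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def monte_carlo_auc(X, y, n_splits=100, test_size=0.3, seed=0):
    aucs = []
    for i in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed + i)
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        clf.fit(X_tr, y_tr)
        scores = clf.predict_proba(X_te)[:, 1]   # probability of the malignant class
        aucs.append(roc_auc_score(y_te, scores))
    return np.mean(aucs), np.std(aucs)
```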
Unified probabilistic deep continual learning through generative replay and open set recognition
(2022)
Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge. Although it is inevitable for continual-learning systems to encounter such unseen concepts, the corresponding literature appears to nonetheless focus primarily on alleviating catastrophic interference with learned representations. In this work, we introduce a probabilistic approach that connects these perspectives based on variational inference in a single deep autoencoder model. Specifically, we propose to bound the approximate posterior by fitting regions of high density on the basis of correctly classified data points. These bounds are shown to serve a dual purpose: unseen unknown out-of-distribution data can be distinguished from already trained known tasks towards robust application. Simultaneously, to retain already acquired knowledge, a generative replay process can be narrowed to strictly in-distribution samples, in order to significantly alleviate catastrophic interference.
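A much-simplified, hedged sketch of the core idea (bounding the approximate posterior by regions of high density around correctly classified training data, and using the same bound both for out-of-distribution rejection and for filtering generative replay) might look as follows; the per-class latent codes, the Euclidean distance measure and the percentile threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: per-class latent bounds fitted on correctly classified data,
# reused both for open set rejection and for gating generative replay.
import numpy as np

def fit_bounds(latents_per_class, percentile=95.0):
    """For each known class, store the latent mean and a distance bound."""
    bounds = {}
    for cls, z in latents_per_class.items():       # z: (n_samples, latent_dim)
        mean = z.mean(axis=0)
        dists = np.linalg.norm(z - mean, axis=1)
        bounds[cls] = (mean, np.percentile(dists, percentile))
    return bounds

def is_in_distribution(z_query, bounds):
    """A query latent code counts as 'known' if it falls inside any class bound."""
    return any(np.linalg.norm(z_query - mean) <= radius
               for mean, radius in bounds.values())

# The same test can gate generative replay: only generated samples whose latent
# codes satisfy is_in_distribution() are replayed when training on new tasks.
```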
In online video games, toxic interactions are very prevalent and often even considered an integral part of gaming. Most studies analyse toxicity in video games by analysing the messages sent during a match, while only a few focus on other interactions. We focus specifically on in-game events to identify toxic matches, by constructing a framework that takes a list of time-based events and projects them into a graph structure, which we can then analyse with current methods from the field of graph representation learning. Specifically, we use a Graph Neural Network and Principal Neighbourhood Aggregation to analyse the graph structure and predict the toxicity of a match. We also discuss the subjectivity behind the term toxicity and why analysing only in-game messages with current state-of-the-art NLP methods is not capable of inferring whether a match is perceived as toxic or not.
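As a hedged sketch of what such a model could look like, the following uses PNAConv from PyTorch Geometric followed by global pooling and a binary toxicity head; layer sizes, aggregators, scalers and the degree histogram are illustrative assumptions rather than the thesis' actual architecture.

```python
# Hedged sketch: graph-level toxicity classification with Principal
# Neighbourhood Aggregation (PNAConv) and mean pooling over each match graph.
import torch
from torch import nn
from torch_geometric.nn import PNAConv, global_mean_pool

class ToxicityGNN(nn.Module):
    def __init__(self, in_dim, hidden_dim, deg_histogram):
        super().__init__()
        aggregators = ["mean", "min", "max", "std"]
        scalers = ["identity", "amplification", "attenuation"]
        self.conv1 = PNAConv(in_dim, hidden_dim, aggregators, scalers, deg=deg_histogram)
        self.conv2 = PNAConv(hidden_dim, hidden_dim, aggregators, scalers, deg=deg_histogram)
        self.head = nn.Linear(hidden_dim, 1)        # logit for "match is toxic"

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)              # one vector per match graph
        return self.head(x)
```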
Blockchains in public administration : a RADIUS on blockchain framework for public administration
(2023)
The emergence of blockchain technology has generated a great deal of attention, as reflected in numerous scientific and journalistic articles. However, the implementation of blockchain for public administrations in Germany has encountered a setback owing to unsuccessful initiatives. Initial enthusiasm was followed by disillusionment. Nevertheless, the technology continues to evolve. This paper examines whether the use of a blockchain can still optimize the processes of public administrations. Not only the failed projects are analysed, but also more recent applications of the technology and their potential relevance for public administration, especially in the state of Hesse.
To answer whether blockchains are promising for administrations, a Design Science Research (DSR) approach is chosen. DSR is a research method that aims to create new and innovative solutions to real-world problems through the development and evaluation of artefacts such as models, methods, or prototypes. For this work, the implementation of a framework to realize an Authentication, Authorization, and Accounting (AAA) system on the blockchain was identified as a worthwhile artefact. The framework aims to implement the aforementioned AAA tasks using a blockchain. The Remote Authentication Dial-In User Service (RADIUS) protocol has been identified as a suitable protocol for the AAA system. The goal is to create a way to implement the system either entirely on a blockchain or as a hybrid system, with various blockchain technologies being considered. The framework developed for this purpose is named AAA-me.
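To make the on-chain AAA idea tangible, here is a toy, hedged sketch in which the outcome of a RADIUS exchange is hashed and appended to a chained audit log; the in-memory AuditChain class and all names are illustrative stand-ins, not AAA-me or a real blockchain client.

```python
# Toy illustration: AAA events (e.g. RADIUS Access-Request outcomes) are hashed
# and chained so they can be audited later. In-memory stand-in for a blockchain.
import hashlib
import json
import time

class AuditChain:
    def __init__(self):
        self.blocks = [{"index": 0, "prev_hash": "0" * 64,
                        "payload": "genesis", "timestamp": time.time()}]

    @staticmethod
    def _hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def record_event(self, username, outcome):
        """Append an authentication/authorization/accounting event to the chain."""
        prev = self.blocks[-1]
        block = {"index": prev["index"] + 1,
                 "prev_hash": self._hash(prev),
                 "payload": {"user": username, "outcome": outcome},
                 "timestamp": time.time()}
        self.blocks.append(block)
        return block

chain = AuditChain()
chain.record_event("alice", "Access-Accept")
```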
The development of AAA-me has shown that the desired framework for implementing RADIUS on the blockchain is possible at various degrees of implementation. Previous work mostly relied on a full implementation. Additionally, it has been shown that AAA-me can be used to perform hybrid integration at different implementation levels. This makes AAA-me stand out from the few previous hybrid approaches. Furthermore, AAA-me was investigated in different laboratory environments in order to determine the expected resilience against a Single Point of Failure (SPOF). The results of the lab investigation indicated that a RADIUS system on top of a blockchain can provide benefits in terms of security and performance. In the lab environment, the times within which a series of authorization requests was processed were measured. In addition, it was illustrated how a RADIUS system implemented using blockchain can protect itself against Man-in-the-Middle (MITM) attacks.
Finally, in collaboration with the Hessian Central Office for Data Processing (German: Hessische Zentrale für Datenverarbeitung) (HZD), another test lab demonstrated how a RADIUS system on the blockchain can integrate with the existing IT systems of the German state of Hesse. Based on these findings, this work reevaluated the applicability of blockchain technology for public administration processes.
The work has thus shown that the use of a blockchain can still be worthwhile. However, it has also been shown that an implementation can bring many problems with it. The small number of blockchain developers and engineers also poses the risk of not finding enough people to develop and maintain such a system. In addition, one faces the problem of committing now to an architecture that will be applied to many projects in the future, while each project can, in turn, have an impact on the choice of architecture. Once this problem is solved and a blockchain infrastructure is available, it can be set up quickly and be more resistant to SPOFs, for example, for Public Key Infrastructure (PKI) systems.
AAA-me was only applied in lab and test environments. As a result, no real data ran over its own infrastructure, which allowed the necessary flexibility for development. However, system-related properties could appear in real deployments that are not detectable in this way. Furthermore, AAA-me's development is still at an early stage: many manual adjustments need to be made in order to integrate it with an existing RADIUS system. Also, no dedicated effort to secure the system itself was carried out in the lab environments, and vulnerabilities can quickly open up on web servers due to misconfigurations and missing updates. For these reasons, productive use is discouraged until further major development has been carried out.
This dissertation is concerned with the task of map-based self-localization, using images of the ground recorded with a downward-facing camera. In this context, map-based (self-)localization is the task of determining the position and orientation of a query image that is to be localized. The map used for this purpose consists of a set of reference images with known positions and orientations in a common coordinate system. For localization, the considered methods determine correspondences between features of the query image and those of the reference images.
In comparison with localization approaches that use images of the surrounding environment, we expect that using images of the ground has the advantage that, unlike the surroundings, the visual appearance of the ground is often stable in the long term. Also, by using active lighting of the ground, localization becomes independent of external lighting conditions.
This dissertation includes content of several published contributions, which present research on the development and testing of methods for feature-based localization of ground images. Our first contribution examines methods for the extraction of image features that have not been designed to be used on ground images. This survey shows that, with appropriate parametrization, several of these methods are well suited for the task.
Based on this insight, we develop and examine methods for various subtasks of map-based localization in the following contributions. We examine global localization, where all reference images have to be considered, as well as local localization, where an approximation of the query image position is already known, which allows for disregarding reference images with a large distance to this position.
In our second contribution, we present the first systematic comparison of state-of-the-art methods for ground texture based localization. Furthermore, we present a method that is characterized by its use of our novel feature matching technique. This technique is called identity matching, as it matches only those features with identical descriptors, in contrast to the state of the art, which also matches features with similar descriptors. We show that our method is well suited for global and local localization, as it scales favorably with the number of reference images considered during the localization process. In another contribution, we develop a variant of our localization method that is significantly faster to compute, as it applies a sampling approach to determine the image positions at which local features are extracted, instead of using classical feature detectors.
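As a hedged illustration of the identity-matching idea, the following sketch keeps only feature correspondences whose binary descriptors are exactly identical (Hamming distance 0); ORB and brute-force matching are used here as generic stand-ins, not the dissertation's actual feature extraction and lookup scheme.

```python
# Hedged sketch: identity matching keeps only pairs of identical binary
# descriptors, in contrast to nearest-neighbour matching of similar ones.
import cv2

def identity_matches(img_query, img_ref):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_q, des_q = orb.detectAndCompute(img_query, None)
    kp_r, des_r = orb.detectAndCompute(img_ref, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_q, des_r)
    # keep only exact correspondences (identical descriptors)
    return [m for m in matches if m.distance == 0]
```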
Two further contributions are concerned with global localization. The first one introduces a prediction model for the global localization performance, based on an evaluation of the local localization performance. This allows us to quickly evaluate any considered parameter settings of global localization methods. The second contribution introduces a learning-based method that computes compact descriptors of ground images. This descriptor can be used to retrieve the overlapping reference images of a query image from a large set of reference images with little computational effort.
The most recent contribution included in this dissertation presents a new ground image database, which was recorded with a dedicated platform using a downward-facing camera. In addition to the data, we also explain our guidelines for the construction of the platform. In comparison with existing databases, our database contains more images and presents a larger variety of ground textures. Furthermore, this database enables us to perform the first systematic evaluation of how localization performance is affected by the time interval between the point in time at which the reference images are recorded and the point in time at which the query image is recorded. We find that for outdoor areas all ground texture based localization methods have reliability issues if the time interval between the recording of the query and reference images is large, and also if there are different weather conditions. These findings point to remaining challenges in ground texture based localization that should be addressed in future work.
The adaptive immune system protects humans from pathogens occurring outside as well as inside the body and from cancer cells. The functionality of this process relies on the interaction and cooperation of a large number of different cell types of the body and is predominantly localized within the lymph nodes. If even a single component of this sensitive process is disturbed, this can lead to a partial or complete loss of a person's immunological fitness. The aim of this work was therefore to comprehensively detect and define such aberrations of human lymph node tissue by means of digital pathology.
For this purpose, a digital tissue database was first established. It is based on the content management system Digital Tissue Management Suite, implemented as part of this work. Furthermore, the software Feature analysis in tissue histomorphometry was developed, which enables the analysis of two-dimensional whole slide images. Methods from computer vision and graph theory are employed to characterize morphological and distributional properties of the cell types of the lymph node. In addition, this software contains plug-ins for visualization and statistical analysis of the data.
Building on this purpose-built digital infrastructure, in combination with the software Imaris, two- and three-dimensionally scanned reactive and neoplastic tissue samples were digitally phenotyped. In the process, new mechanical barriers for the compartmentalization of germinal centers were elucidated. Furthermore, the preservation of the quantitative ratio of individual cell populations within the germinal centers was described. Starting from the reactive phenotypes of the lymph node, pathophysiological aberrations in various lymphatic neoplasms were investigated. It was shown that structural destruction in particular is frequently accompanied by a morphological change of the fibroblastic reticular cells.
In addition to structural changes, cytological changes of the tumor microenvironment are also observed. So-called tumor-associated macrophages play a particular role here. In the course of this work it was shown that macrophages in the tumor microenvironment of diffuse large B-cell lymphoma and chronic lymphocytic leukemia exhibit specific pathophysiological changes. It was also shown that genetic changes of neoplastic B cells are accompanied by a general reduction of CD20 antigen density.
In summary, the results enabled the generation of a comprehensive digital pathology profile of classical Hodgkin lymphoma. Morphological changes of neoplastic, CD30-positive Hodgkin-Reed-Sternberg cells were validated and described. Pathological changes of the connectome and the tumor microenvironment of these cells were also parameterized and quantified. Finally, the diagnostic power of digital pathology profiles was evaluated and validated using a random forest classifier.
AttendAffectNet: emotion prediction of movie viewers using multimodal fusion with self-attention
(2021)
In this paper, we tackle the problem of predicting the affective responses of movie viewers based on the content of the movies. Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect. Yet, they typically ignore the correlation between multiple modality inputs as well as the correlation between temporal inputs (i.e., sequential features). To explore these correlations, we propose a neural network architecture, AttendAffectNet (AAN), that uses the self-attention mechanism for predicting the emotions of movie viewers from different input modalities. In particular, visual, audio, and text features are considered for predicting emotions, expressed in terms of valence and arousal. We analyze three variants of our proposed AAN: Feature AAN, Temporal AAN, and Mixed AAN. The Feature AAN applies the self-attention mechanism to the features extracted from the different modalities (including video, audio, and movie subtitles) of a whole movie, thereby capturing the relationships between them. The Temporal AAN takes the time domain of the movies and the sequential dependency of affective responses into account: self-attention is applied to the concatenated (multimodal) feature vectors representing subsequent movie segments. The Mixed AAN combines the strong points of the Feature AAN and the Temporal AAN by applying self-attention first to the feature vectors obtained from the different modalities in each movie segment and then to the feature representations of all subsequent (temporal) movie segments. We extensively trained and validated our proposed AAN on both the MediaEval 2016 dataset for the Emotional Impact of Movies Task and the extended COGNIMUSE dataset. Our experiments demonstrate that audio features play a more influential role than those extracted from video and movie subtitles when predicting the emotions of movie viewers on these datasets. The models that use all visual, audio, and text features simultaneously as their inputs performed better than those using features extracted from each modality separately. In addition, the Feature AAN outperformed the other AAN variants on the above-mentioned datasets, highlighting the importance of taking different features as context to one another when fusing them. The Feature AAN also performed better than the baseline models when predicting the valence dimension.
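A hedged, minimal sketch of the Feature AAN idea (one feature vector per modality, projected to a common dimension, fused with self-attention, then regressed to valence and arousal) might look as follows; the layer sizes, number of heads and mean pooling are illustrative assumptions and do not match the paper's exact architecture.

```python
# Hedged sketch: self-attention fusion over per-modality feature vectors,
# followed by a regression head for valence and arousal.
import torch
from torch import nn

class FeatureSelfAttentionFusion(nn.Module):
    def __init__(self, modality_dims, d_model=256, n_heads=4):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in modality_dims)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 2)                 # valence and arousal

    def forward(self, modality_feats):
        # modality_feats: list of tensors, each of shape (batch, dim_i)
        tokens = torch.stack([p(f) for p, f in zip(self.proj, modality_feats)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)      # (batch, n_modalities, d_model)
        return self.head(fused.mean(dim=1))               # (batch, 2)
```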
Linking mathematics with reality is not new. It is also not new to use outdoor activities to learn mathematics. What seems to be new is to combine such mathematical outdoor activities with mobile technology, like the geocaching community, which makes use of GPS technology to guide its members to special places and points of interest. The use of mobile technologies to learn at any time and any location is known as “mobile learning”. This type of learning can be seen as an extension of eLearning. Considering the definition of O’Malley, one notices that this definition does not exactly match the idea of the MathCityMap-Project (MCM), because the learning environment in the MCM-Project is predetermined. Combined with the math trail method, the project enables mobile learning within math trails with the latest technology. In the MCM-Project, students experience mathematics at real places and within real situations in out-of-school activities, with the help of GPS-enabled smartphones and special math problems. In contrast to paper versions of math trails, we are able to give direct feedback on the solutions by using mobile devices such as smartphones or tablets. If the user has difficulties in solving the modeling task, stepped hints can be provided. The teacher is able to use the MCM-Portal to upload tasks developed by himself or by his students, and he is also able to build a personal math trail for his students.