We show how nonlocal boundary conditions of Robin type can be encoded in the pointwise expression of the fractional operator. Notably, the fractional Laplacian of functions satisfying homogeneous nonlocal Neumann conditions can be expressed as a regional operator with a kernel having logarithmic behaviour at the boundary.
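For orientation, one common convention (an assumption here, since the abstract fixes no notation) writes the fractional Laplacian and a homogeneous nonlocal Neumann condition of Dipierro-Ros-Oton-Valdinoci type on a domain Omega as follows; the regional operator mentioned above integrates over Omega only.

```latex
\[
(-\Delta)^s u(x) = c_{n,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^n}
  \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy,
\qquad
\mathcal{N}_s u(x) = c_{n,s}\!\int_{\Omega}
  \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy = 0
\quad \text{for } x \in \mathbb{R}^n \setminus \overline{\Omega},
\]
\[
\text{with } s \in (0,1) \text{ and } c_{n,s} \text{ the usual normalizing constant.}
\]
```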
This Bachelor thesis deals with topic classification of unstructured text. Due to the steadily growing amount of text-based data, automated classification methods are needed and researched in many disciplines. Building on the text2ddc classifier, which was developed at the Text Technology Lab of Goethe University Frankfurt am Main, the effects of enlarging the training corpus by different methods are examined. text2ddc uses the Dewey Decimal Classification (DDC) as its target classification and is trained on Wikipedia articles. After an introduction covering the fundamentals, the classification model of text2ddc is presented, and the problems and resulting tasks are discussed. The update of the existing data is then described, followed by a presentation of the different methods for extending the training corpus. Experiments are run on a total of eleven languages. Finally, the evaluation shows the improvements in classification quality achieved with text2ddc, discusses problematic cases, and offers suggestions for future work.
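To give a rough flavour of the task (a toy stand-in using scikit-learn, not text2ddc's actual implementation), a DDC-style topic classifier maps article text to a DDC class:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for a DDC topic classifier (not text2ddc itself):
# map article text to a top-level DDC class. Texts and labels are invented.
texts = [
    "the parliament passed a new tax law",           # social sciences
    "the algorithm sorts integers in linear time",   # computer science
    "the novel explores themes of memory and loss",  # literature
]
ddc_labels = ["300", "000", "800"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(texts, ddc_labels)
print(clf.predict(["a study of voting behaviour in elections"]))
```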
Machine Learning (ML) is so pervasive in our daily lives that, more often than we expect, we use systems based on it without even realising it. It is also evolving faster than ever before. When deploying ML systems that make decisions on their own, we need to think about their ignorance of our uncertain world. Uncertainty might arise from scarcity of the data, bias in the data, or even a mismatch between the real world and the ML model. Given all these uncertainties, we need to think about how to build systems that are not totally ignorant of them. Bayesian ML can deal with these problems to some extent. Specifying the model using probabilities provides a convenient way to quantify uncertainties, which can then be included in the decision-making process.
In this thesis, we introduce the Bayesian ansatz to modeling and apply Bayesian ML models in finance and economics. In particular, we take a closer look at Gaussian processes (GP) and the Gaussian process latent variable model (GPLVM). Applied to the returns of several assets, the GPLVM provides the covariance structure and also a latent-space embedding of the assets. Several financial applications can be built on the output of the GPLVM. To demonstrate this, we build an automated asset allocation system and a predictor for missing asset prices, and we identify further structure in financial data.
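A minimal numpy sketch of the GP building block (our illustration, not the thesis code): a kernel over latent coordinates induces the covariance of asset returns, which is exactly the structure the GPLVM recovers.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between all rows of X."""
    sq_dists = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))     # latent positions of 5 assets (toy data)
K = rbf_kernel(X)               # induced 5x5 covariance of returns
returns = rng.multivariate_normal(np.zeros(5), K + 1e-6 * np.eye(5))
```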
It turns out that the GPLVM exhibits a rotational symmetry in the latent space, which makes it harder to fit. Our second publication reports how to deal with that symmetry. We propose another parameterization of the model using Householder transformations, by which the symmetry is broken. Bayesian models are changed by a reparameterization if the prior is not changed accordingly. We provide the correct prior distribution of the new parameters, such that the model, i.e. the data density, is unchanged under the reparameterization. After applying the reparameterization to Bayesian PCA, we show that the symmetry of nonlinear models can be broken in the same way.
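The idea behind the Householder parameterization can be sketched in a few lines (an illustration under our own naming, not the published construction verbatim): a product of reflections H = I - 2vv^T/(v^T v) yields an orthogonal matrix, so the rotational degrees of freedom become explicit parameters whose prior can be corrected.

```python
import numpy as np

def householder_orthogonal(vectors):
    """Build an orthogonal matrix as a product of Householder reflections."""
    n = vectors.shape[1]
    Q = np.eye(n)
    for v in vectors:
        v = v.reshape(-1, 1)
        H = np.eye(n) - 2.0 * (v @ v.T) / (v.T @ v).item()
        Q = H @ Q
    return Q

rng = np.random.default_rng(0)
Q = householder_orthogonal(rng.normal(size=(3, 3)))  # three reflections in R^3
assert np.allclose(Q.T @ Q, np.eye(3))               # orthogonality check
```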
In our last project, we propose a new method for matching quantile observations which uses order statistics. Using order statistics as the likelihood, instead of a Gaussian likelihood, has several advantages. We compare the two models and highlight their advantages and disadvantages. To demonstrate our method, we fit quantile data on salaries in several European countries. Given several candidate models for the fit, our method also provides a metric for choosing the best option.
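The abstract does not spell out the likelihood; for reference, the standard density of the k-th order statistic of n i.i.d. draws from a distribution with cdf F and pdf f (the natural candidate for "order statistics as the likelihood") is

```latex
\[
f_{(k)}(x) = \frac{n!}{(k-1)!\,(n-k)!}\,
  F(x)^{\,k-1}\,\bigl(1-F(x)\bigr)^{\,n-k}\, f(x).
\]
```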
We hope that this thesis illustrates some of the benefits of Bayesian modeling (especially Gaussian processes) in finance and economics and its usefulness whenever uncertainties need to be quantified.
This article deals with the solution of linear ill-posed equations in Hilbert spaces. Often, only a corrupted measurement of the right-hand side is available, and the Bakushinskii veto tells us that we cannot solve the equation if we do not know the noise level. In applications, however, it is unrealistic to know the error of a measurement a priori. In practice, the error of a measurement can often be estimated by averaging multiple measurements. We integrated this into our analysis and obtained convergence to the true solution, under the sole assumption that the measurements are unbiased, independent, and identically distributed according to an unknown distribution.
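A toy sketch of the central idea (our illustration, not the article's code): with m unbiased i.i.d. measurements, averaging reduces the noise roughly by a factor 1/sqrt(m), and the sample spread itself estimates the unknown noise level.

```python
import numpy as np

# Toy illustration (not the article's code): the unknown error of an
# averaged measurement can itself be estimated from repeated measurements.
rng = np.random.default_rng(0)
y_exact = 1.7    # unknown exact right-hand side (scalar toy case)
sigma = 0.3      # unknown true noise level
m = 100
samples = y_exact + sigma * rng.standard_normal(m)

y_bar = samples.mean()                           # data handed to the solver
err_estimate = samples.std(ddof=1) / np.sqrt(m)  # estimated error of y_bar
print(y_bar, err_estimate, abs(y_bar - y_exact))
```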
Objectives: To analyze the performance of radiological assessment categories and quantitative computational analysis of apparent diffusion coefficient (ADC) maps using various machine learning algorithms to differentiate clinically significant versus insignificant prostate cancer (PCa). Methods: Retrospectively, 73 patients were included in the study. The patients (mean age, 66.3 ± 7.6 years) were examined with multiparametric MRI (mpMRI) prior to radical prostatectomy (n = 33) or targeted biopsy (n = 40). The index lesion was annotated on the MRI ADC maps and the corresponding histologic slides according to the highest Gleason Grade Group (GrG). Volumes of interest (VOIs) were determined for each lesion and for normal-appearing peripheral zone. VOIs were processed by radiomic analysis. For the classification of lesions according to their clinical significance (GrG ≥ 3), principal component (PC) analysis, univariate analysis (UA) with consecutive support vector machines, neural networks, and random forest analysis were performed. Results: PC analysis discriminated between benign and malignant prostate tissue. PC evaluation yielded no stratification of PCa lesions according to their clinical significance, but UA revealed differences in clinical assessment categories and radiomic features. We trained three classification models with fifteen feature subsets. We identified a subset of shape features which improved the diagnostic accuracy of the clinical assessment categories (maximum increase in diagnostic accuracy ΔAUC = +0.05, p < 0.001), while also identifying combinations of features and models which reduced overall accuracy. Conclusions: The value of radiomic features for differentiating PCa lesions according to their clinical significance remains controversial. It depends on feature selection and the employed machine learning algorithms, and it can result in improvement or reduction of diagnostic performance.
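A schematic of the kind of classification pipeline described, using scikit-learn as a stand-in (toy data and features, not the study's actual code):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data standing in for radiomic feature vectors: 73 lesions, 15 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(73, 15))
y = rng.integers(0, 2, size=73)  # 1 = clinically significant (GrG >= 3), toy labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```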
Density visualization pipeline: a tool for cellular and network density visualization and analysis
(2020)
Neuron classification is an important component in analyzing network structure and quantifying the effect of neuron topology on signal processing. Current quantification and classification approaches rely on projecting the morphology onto lower-dimensional spaces. In this paper, a 3D visualization and quantification tool is presented. The Density Visualization Pipeline (DVP) computes, visualizes, and quantifies the density distribution, i.e., the “mass”, of interneurons. We use the DVP to characterize and classify a set of GABAergic interneurons. Classifying GABAergic interneurons is crucially important for understanding both their various functions and their ubiquitous appearance in the neocortex. 3D density map visualization and projections onto the one-dimensional x, y, and z subspaces show a clear distinction between the studied cells based on these metrics. The DVP can be coupled to computational studies of the behavior of neurons and networks, in which network topology information is derived from DVP output. The DVP reads common neuromorphological file formats, e.g., Neurolucida XML files, NeuroMorpho.org SWC files, and plain ASCII files. Full 3D visualization and projections of the density onto 1D and 2D manifolds are supported. All routines are embedded within VRL-Studio, a visual programming IDE for Java, which allows the definition and rapid modification of analysis workflows.
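The core computation can be sketched in numpy (our illustration of the idea, not the DVP implementation): estimate a 3D density from morphology sample points and project it onto the one-dimensional x, y, z subspaces.

```python
import numpy as np

# Stand-in for points parsed from a morphology file (e.g. an SWC point list).
rng = np.random.default_rng(1)
points = rng.normal(size=(5000, 3))

density, edges = np.histogramdd(points, bins=32)  # 3D density map
proj_x = density.sum(axis=(1, 2))                 # projection onto x
proj_y = density.sum(axis=(0, 2))                 # projection onto y
proj_z = density.sum(axis=(0, 1))                 # projection onto z
```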
A Large Ion Collider Experiment (ALICE) is one of the four large experiments at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). ALICE focuses on the physics of the strong interaction and in particular on the Quark-Gluon Plasma, a state of matter in which quarks are deconfined and which is believed to have existed in the earliest moments of the evolution of the universe. The ALICE detector studies the products of collisions between heavy nuclei, between protons, and between protons and heavy nuclei. The sub-detector closest to the interaction point is the Inner Tracking System (ITS), which is used to measure the momentum and trajectory of the particles generated in the collisions and allows reconstructing primary and secondary interaction vertices. The ITS needs high spatial resolution together with a low material budget, to limit the effect of multiple scattering on low-energy particles and precisely reconstruct their trajectories. During the Long Shutdown 2 (2019-2020) of the LHC, the current ITS will be replaced by a completely redesigned sub-detector, which will improve the readout rate and particle-tracking performance, especially at low momentum.
The ALice PIxel DEtector (ALPIDE) chip was designed to meet the requirements of the upgraded ITS in terms of resolution, material budget, radiation hardness, and readout rate. The ALPIDE chip is a Monolithic Active Pixel Sensor (MAPS) realised in Complementary Metal-Oxide-Semiconductor (CMOS) technology: the sensing element, the analogue front-end, and the digital readout are integrated on the same silicon die. The readout architecture of the new ITS foresees that data is transmitted via a high-speed serial link directly from the ALPIDE to the off-detector electronics. The data is transmitted off-chip by a so-called Data Transmission Unit (DTU), which needs to be tolerant to radiation-induced Single-Event Effects in order to guarantee reliable operation. The ALPIDE chip will operate in a radiation field with a high-energy hadron peak flux of 7.7·10^5 cm^-2 s^-1.
The data are sent by the ALPIDE over copper cables to the readout system, which aggregates them and retransmits them via optical fibres to the counting room. The position of the readout electronics is constrained by the maximum transmission distance reasonably achievable by the ALPIDE Data Transmission Unit and by mechanical constraints of the ALICE experiment. The radiation field at that location has non-negligible effects on electronics: the high-energy hadron flux can reach 10^3 cm^-2 s^-1. Static-RAM-based (SRAM-based) Field Programmable Gate Arrays (FPGAs) are favoured over Application-Specific Integrated Circuits (ASICs) or Radiation-Hardened-by-Design (RHBD) commercial devices because of their cost effectiveness; moreover, SRAM-based FPGAs are reconfigurable and provide the data throughput required by the ITS. The main issue with SRAM-based FPGAs in the intended application is the susceptibility of their Configuration RAM (CRAM) to Single-Event Upsets, since the number of CRAM bits is much higher than the amount of logic they configure. The Total Ionizing Dose (TID) at the intended readout position is still acceptable for Commercial Off-The-Shelf (COTS) components, provided that proper verification is carried out.
This dissertation focuses on two parts of the design of the readout system: the Data Transmission Unit of the ALPIDE chip and fundamental modules for the SRAM-based FPGA of the readout electronics. In the first part, a module of the Data Transmission Unit is designed, optimising the trade-off between power consumption, radiation tolerance, and jitter performance. The design was tested and thoroughly characterised, including tests under irradiation with 30 MeV protons. Furthermore, the performance of the Data Transmission Unit was validated after integration into the first prototypes of ITS modules. In the second part, the problem of developing a radiation-tolerant SRAM-based FPGA design is investigated and a solution is provided. First, a general methodology for designing radiation-tolerant Finite State Machines in SRAM-based FPGAs is analysed, implemented, and verified. Then, the radiation-tolerant FPGA design for the ITS readout is described, together with the radiation-effect mitigation techniques that were selectively applied to the different modules. The design was verified in multiple irradiation tests, and the results are reported.
The main goal of this work was to create a network environment for the Unity Engine project StolperwegeVR, developed by the Text Technology Lab of Goethe University, in which a group of users can annotate one or more documents together. For this, basic network utilities, such as seeing other users or moving objects, had to be implemented in a way that is easy to use and to build on in the future.
This thesis explores a variety of text quantification methods applicable in the field of educational text technology. Besides the cohort of existing linguistic, lexical, syntactic, and semantic text quantification methods, additional methods based on Bidirectional Encoder Representations from Transformers (BERT) are introduced and analysed. The model developed in this thesis is tested on multilingual data composed of task descriptions used in the Test of Understanding in College Economics (TUCE). Quantitative features extracted from the raw textual data are analysed using an array of evaluation methods, with the goal of finding the best predictors of the target variable: the rate of correct student responses in TUCE.
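One plausible way to derive such BERT-based features (a sketch with an assumed checkpoint name, not necessarily the thesis setup) is mean-pooling the final hidden states of a multilingual BERT via the Hugging Face transformers library:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed model; the thesis may use a different checkpoint.
name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

text = "Explain the effect of a price ceiling on market equilibrium."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape (1, n_tokens, 768)
features = hidden.mean(dim=1).squeeze(0)        # one 768-dim feature vector
```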
The goal of this thesis is the realistic development of an interactive 3D city model tailored to local public transport. Based on user input and with the help of a data source, the program automatically generates a three-dimensional visualization of the buildings and integrates the local public transport network. The public transport network of the city of Frankfurt served as the example for this work. The problems of collecting geographic information and of processing such complex data are addressed. It was determined which user groups gain added value from such a 3D visualization and which new potentials for extension and use the model offers.
In particular, the reader is given an insight into the generation of interactive 3D models from raw data alone. The game engine Unity was used as the development environment; it proved to be a very capable and modern development tool for creating functional 3D visualizations. The OpenStreetMap project was used as the data source and is discussed in this work. Finally, for evaluation, the model was made available to various users and assessed by means of a questionnaire.
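To give a flavour of the raw-data side (an illustrative query of our own; the thesis's actual OpenStreetMap processing may differ), building footprints for a small Frankfurt bounding box can be fetched via the public Overpass API:

```python
import requests

# Illustrative Overpass QL query; the bounding box is an arbitrary
# patch of central Frankfurt, not one used in the thesis.
query = """
[out:json];
way["building"](50.110,8.680,50.115,8.690);
out geom;
"""
resp = requests.post("https://overpass-api.de/api/interpreter",
                     data={"data": query})
buildings = resp.json()["elements"]
print(len(buildings), "building footprints")
```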