We consider algorithms for strategic communication with commitment power between two rational parties, each with its own interests. If a party has commitment power, it commits to a strategy, publishes it, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) has the ability to observe it directly. The second party (R), however, makes a decision by choosing one of n actions whose types are unknown to it. The type determines the possibly different, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian Persuasion and Delegated Search.
In Bayesian Persuasion, S has commitment power. Here, S commits to a signaling scheme φ and communicates it to R. The scheme describes which signal S sends in which situation. Only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowing φ allows R to update its beliefs about the state of the world depending on the received signals. S must take this into account when designing φ, since R will not follow recommendations that benefit S at R's expense. We consider the problem from the perspective of S and describe signaling schemes that guarantee S the largest possible utility.
We first consider the offline case. Here, S learns the complete state of the world and then sends a signal to R. We consider a scenario with a bounded number k ≤ n of signals. With only k signals, S can recommend at most k different actions. For various symmetric instances, we describe a polynomial-time algorithm for computing an optimal signaling scheme with k signals.
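For intuition, the offline problem without a signal limit (k = n) can be written as a linear program over direct recommendation schemes with obedience constraints. The sketch below, with a made-up two-state, two-action instance, illustrates this standard LP formulation; it is not the thesis' algorithm for symmetric instances with k < n signals.

```python
# Hedged sketch: optimal direct signaling for a finite Bayesian persuasion
# instance via linear programming; the prior and utilities are illustrative.
import numpy as np
from scipy.optimize import linprog

p = np.array([0.5, 0.5])                 # prior over states (assumption)
uS = np.array([[1.0, 0.0], [1.0, 0.0]])  # sender utility uS[state, action]
uR = np.array([[1.0, 0.0], [0.0, 1.0]])  # receiver utility uR[state, action]
m, n = uS.shape                          # number of states, actions

# Decision variables x[t, i] = P(recommend action i | state t), flattened.
c = -(p[:, None] * uS).ravel()           # maximize expected sender utility

# Obedience: for all i != j, sum_t p[t] x[t,i] (uR[t,i] - uR[t,j]) >= 0.
A_ub, b_ub = [], []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        row = np.zeros((m, n))
        row[:, i] = -p * (uR[:, i] - uR[:, j])   # flipped sign for <= 0 form
        A_ub.append(row.ravel()); b_ub.append(0.0)

# Each state's recommendations form a probability distribution.
A_eq = np.zeros((m, m * n))
for t in range(m):
    A_eq[t, t * n:(t + 1) * n] = 1.0
b_eq = np.ones(m)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(m, n))               # optimal signaling scheme
```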
Furthermore, we consider a subset of instances in which the types are drawn from known, independent distributions. We describe polynomial-time algorithms that compute a signaling scheme with k signals guaranteeing a constant approximation factor relative to the optimal signaling scheme with k signals.
In the online case, the action types are revealed one by one in rounds. After observing the current action, S sends a signal, and R must react immediately by choosing or rejecting the action. The process ends when an action is chosen; otherwise, the next action type is revealed, and previous actions can no longer be chosen. As a benchmark for our online signaling schemes, we use the best offline signaling scheme.
We first consider a scenario with independent distributions. We show how an optimal signaling scheme can be computed in polynomial time. However, there are instances in which, unlike in the offline case, S cannot obtain any positive value online. We then consider a subset of instances for which a simple signaling scheme guarantees a constant approximation factor, and we show its optimality.
In addition, we consider 16 different scenarios with varying levels of information for S and R and varying objective functions for S and R, under the assumption that the action types are a priori unknown but are revealed in uniformly random order. For 14 cases, we describe signaling schemes with constant approximation factors. No such schemes exist for the remaining two cases. Additionally, we show for most cases that the described approximation guarantees are optimal.
In the second part, we consider an online variant of Delegated Search. Here, R has commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ. For each type, φ specifies the probability with which R accepts it. Consequently, S tries to find an action with a type that is good for itself and accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it. Only proposed actions can be chosen by R.
For the offline case, constant approximation factors relative to an action of optimal value for R are known for identically distributed action types. We show that in the online case R can, in general, only achieve a Θ(1/n)-approximation. The benchmark is the expected value of a one-dimensional online search by R.
Since this bound requires an exponential discrepancy in the type values for S, we consider parameterized instances. The parameters bound the values for S or the ratio of the values for R and S. We show (nearly) optimal logarithmic approximation factors with respect to these parameters, guaranteed by efficiently computable schemes.
In our work, we establish the existence of standing waves for a nonlinear Schrödinger equation with inverse-square potential on the half-line. We apply a profile decomposition argument to overcome the difficulty arising from the non-compactness of the setting. We obtain convergent minimizing sequences by comparing the problem to the problem at “infinity” (i.e., the equation without the inverse-square potential). Finally, we establish orbital stability/instability of the standing wave solutions for mass-subcritical and mass-supercritical nonlinearities, respectively.
Machine learning (ML) techniques have evolved rapidly in recent years and have shown impressive capabilities in feature extraction, pattern recognition, and causal inference. There has been increasing attention to applying ML to medical applications, such as medical diagnosis, drug discovery, personalized medicine, and numerous other medical problems. ML-based methods have the advantage of being able to process vast amounts of data.
With the ever-increasing amount of medical data being collected and the large inter-subject variability in such data, automated data processing pipelines are very desirable, since relying solely on human processing is laborious, expensive, and error-prone. ML methods have the potential to uncover interesting patterns, unravel correlations between complex features, learn patient-specific representations, and make accurate predictions. Motivated by these promising aspects, in this thesis I present studies in which I implemented deep neural networks for the early diagnosis of epilepsy based on electroencephalography (EEG) data and for brain tumor detection based on magnetic resonance spectroscopy (MRS) data.
In the project for early diagnosis of epilepsy, we are dealing with one of the most common neurological disorders, epilepsy, which is characterized by recurrent unprovoked seizures. It can be triggered by a variety of initial brain injuries and manifests itself after a time window which is called the latent period. During this period, a cascade of structural and functional brain alterations takes place leading to an increased seizure susceptibility.
The development and extension of brain tissue capable of generating spontaneous seizures is defined as epileptogenesis (EPG).
Detecting the presence of EPG provides a precious opportunity for targeted early medical interventions and can thus slow down or even halt disease progression. In order to study brain signals in this latent window, animal epilepsy models are used to provide valuable data, as it is extremely difficult to obtain such data from human patients. The aim of this study is to discover biomarkers of EPG using animal models and then to find their equivalents and counterparts in human patients' data. However, the EEG features of EPG are not well understood, and there is no sufficiently large amount of annotated data for ML-based algorithms. To approach this problem, I first utilized the timestamp information of the EEG recorded from an animal epilepsy model in which epilepsy is induced by electrical stimulation. The timestamp serves as a form of weak supervision, i.e., before and after the stimulation. Second, I implemented a deep residual neural network and trained it on a binary classification task to distinguish the EEG signals from these two phases. After obtaining a high discriminative ability on the binary classification task, I proposed to further divide the time span after the stimulation for a three-class classification, aiming to detect possible stages of the progression of the latent EPG phase. I have shown that the model can distinguish EEG signals at different stages of EPG with high accuracy and generalization ability. I have also demonstrated that some of the features learned by the network are clinically relevant.
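As an illustration of the classifier family described above, here is a minimal sketch of a 1D residual network for windowed EEG signals. All sizes (channels, kernel widths, window length) and the two-class setup are illustrative assumptions, not the architecture used in the thesis.

```python
# Hedged sketch of a 1D residual block and a tiny EEG phase classifier.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        h = torch.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return torch.relu(h + x)           # residual (skip) connection

model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=7, padding=3),  # single-channel EEG window
    ResBlock1d(32), nn.MaxPool1d(4),
    ResBlock1d(32), nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                      # pre- vs. post-stimulation phase
)
logits = model(torch.randn(8, 1, 2048))    # batch of 8 short EEG windows
```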
In the task of detecting brain tumors based on MRS data, I first proposed to apply a deep neural network to MRS data collected from over 400 patients for a binary classification task. To combat the challenge of noisy labels, I developed a distillation step to filter out relatively "cleanly" labeled samples. A mixing-based data augmentation method was also implemented to expand the size of the training set. All experiments were designed to be conducted with a leave-patient-out scheme to ensure the generalization ability of the model. Averaged across all leave-patient-out cross-validation sets, the proposed method performed on par with human neuroradiologists while outperforming other baseline methods. I have demonstrated the distillation effect on the MNIST data set with manually introduced label noise, and I provide visualizations of the input influence on the final classification through a class activation map method.
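The mixing-based augmentation can be sketched in a few lines; whether the thesis uses exactly this mixup-style convex combination of inputs and soft labels is an assumption.

```python
# Hedged sketch of mixing-based data augmentation (mixup-style blending).
import numpy as np

def mix_batch(x, y_onehot, alpha=0.2, rng=np.random.default_rng(0)):
    lam = rng.beta(alpha, alpha)           # mixing coefficient
    perm = rng.permutation(len(x))         # pair each sample with another
    x_mix = lam * x + (1 - lam) * x[perm]  # blend inputs ...
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # ... and soft labels
    return x_mix, y_mix
```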
Moreover, I have proposed to aggregate information at the subject level, which can provide more information and insights. This is inspired by the concept of multiple instance learning, which does not require instance-level labels and is more tolerant of noisy labels. I have proposed to generate data bags consisting of instances from each patient, together with two modules that ensure permutation invariance: an attention module and a pooling module. I have compared the performance of the network in different settings, i.e., with and without permutation-invariant modules, with and without data augmentation, and single-instance-based versus multiple-instance-based learning, and have shown that neural networks equipped with the proposed attention or pooling modules can outperform human experts.
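A minimal sketch of an attention-based, permutation-invariant pooling module for bags of patient instances might look as follows; the dimensions and the scoring network are illustrative, and the thesis' exact modules may differ.

```python
# Hedged sketch of attention-based MIL pooling: a weighted sum over the
# instances of a bag, which is invariant to instance order.
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, bag):                 # bag: (instances, dim) per patient
        a = torch.softmax(self.score(bag), dim=0)   # attention weights
        return (a * bag).sum(dim=0)         # order-invariant aggregation

bag = torch.randn(12, 128)                  # 12 spectra embeddings, one patient
z = AttentionPooling(128)(bag)              # single bag-level representation
```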
Autonomous steering of an electric bicycle based on sensor fusion using model predictive control
(2019)
In this thesis, a control and steering module for an autonomous bicycle was developed. Based on sensor fusion and model predictive control, the module is able to trace routes autonomously.
The system is developed to run on a Raspberry Pi. An ultrasonic sensor and a 2D Lidar sensor are used for distance measurements. The vehicle’s position is determined using GPS signals. Additionally, a camera captures pictures for roadside detection. In order to recognize the road and the position of the vehicle on it, computer vision techniques are used: the captured images are denoised, Canny edge detection is performed, and a perspective transformation is applied. Thereafter, a sliding-window algorithm selects the edges belonging to the roadside, and a second-order polynomial is fitted to the selected data. Based on this, the road curvature and the lateral position of the vehicle on the road are calculated. The implemented software is thus able to detect straight and curved roads as well as the vehicle’s lateral offset.
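A compact sketch of that pipeline using OpenCV is shown below; the synthetic input frame, warp corner coordinates, Canny thresholds, and the reduction of the sliding-window step to a plain polynomial fit over all edge pixels are illustrative simplifications.

```python
# Hedged sketch: denoise -> Canny -> perspective warp -> 2nd-order polyfit.
import cv2
import numpy as np

# Synthetic stand-in frame: a single bright "roadside" curve on dark ground.
img = np.zeros((720, 1280), dtype=np.uint8)
for y in range(450, 720):
    x = int(600 + 0.0008 * (y - 450) ** 2)      # slightly curved edge
    cv2.circle(img, (x, y), 3, 255, -1)

img = cv2.GaussianBlur(img, (5, 5), 0)          # denoise
edges = cv2.Canny(img, 50, 150)                 # edge detection

src = np.float32([[200, 720], [1080, 720], [700, 450], [580, 450]])
dst = np.float32([[300, 720], [980, 720], [980, 0], [300, 0]])
M = cv2.getPerspectiveTransform(src, dst)       # to bird's-eye view
warped = cv2.warpPerspective(edges, M, (1280, 720))

ys, xs = np.nonzero(warped)                     # edge pixel coordinates
coeffs = np.polyfit(ys, xs, 2)                  # x = a*y^2 + b*y + c
print(coeffs)   # curvature and lateral offset follow from these coefficients
```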
A route planning module was implemented to navigate the vehicle from the start to the destination coordinates. This is done by creating an abstract graph of the roads and using Dijkstra’s algorithm to determine the shortest path.
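A minimal Dijkstra sketch for this step, on a made-up road graph with edge lengths in meters:

```python
# Hedged sketch of the shortest-path step; the graph is an illustrative toy.
import heapq

def dijkstra(graph, start):
    dist, pq = {start: 0.0}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

roads = {"A": [("B", 120.0)], "B": [("C", 80.0)], "C": []}
print(dijkstra(roads, "A"))
```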
Four MPC controllers were implemented to control the movements of the vehicle. They are based on state space equations derived from the linear single-track vehicle model. This relatively straightforward model makes it possible to predict the vehicle behavior and is efficient to compute. Each controller was built with different parameters for different vehicle speeds to account for the non-linearity of the system. The controllers simulate the future states of the system at each time step and select appropriate control signals for steering, throttle and brakes.
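The following sketch illustrates one MPC step over a discretized linear state-space model x_{k+1} = A·x_k + B·u_k, as such a linearized single-track model would provide; the matrices, horizon, weights, and steering limit are placeholders, not the thesis' parameters.

```python
# Hedged sketch of a single linear-MPC solve in CVXPY.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy lateral-offset / heading model
B = np.array([[0.0], [0.1]])
N, x0 = 20, np.array([0.5, 0.0])         # horizon, current deviation

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 0.5]   # steering limit
cp.Problem(cp.Minimize(cost), constr).solve()
steer = u.value[0, 0]                    # apply first input, then re-solve
```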
In this thesis, all the components of the steering and control module were individually validated. It was established that each individual component works as expected, and certain constraints and accuracy limits were identified. Finally, the closed-loop capabilities of the system were assessed using a test vehicle. Despite some limitations imposed by this setup, it was shown that the control module is indeed capable of autonomously navigating a vehicle and avoiding collisions.
When we browse via WiFi on a laptop or mobile phone, we receive data over a noisy channel. The received message may differ from the one that was originally sent. Luckily, it is often possible to reconstruct the original message, but it may take a lot of time: decoding the received message is a complex problem, NP-hard to be exact. As we continue browsing, new information is sent to us at a high frequency. So if lags are to be avoided, and as memory is finite, there is not much time left for decoding. Coding theory tackles this problem by creating models of the channels we use to communicate and tailoring codes to the channel properties. A well-known family of codes are Low-Density Parity-Check (LDPC) codes, which are widely used in standards like WiFi and DVB-T2. In practical settings, the complexity of decoding a received message can be heavily reduced by using LDPC codes and approximate decoding algorithms. This thesis lays out the basic construction of LDPC codes and their decoding using the sum-product algorithm. On this basis, a neural network to improve decoding is introduced: the sum-product algorithm is transformed into a neural network decoder. This approach was first presented by Nachmani et al. and treated in detail by Navneet Agrawal in 2017. To find out how machine learning can improve the codes, the bit error rates of the trained neural network decoder are compared with the bit error rates of the classic sum-product algorithm. Experiments with static and dynamic training datasets of diverse sizes, various signal-to-noise ratios, and both a feed-forward and a recurrent architecture show how to tune the neural network decoder even further. Results of the experiments are used to verify statements made in Agrawal’s work. In addition, corrections and improvements in the area of metrics are presented. To facilitate access for others, an implementation of the neural network will be made publicly available.
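For reference, a compact log-domain sum-product decoder on a toy parity-check matrix might look like the sketch below; H, the channel LLRs, and the iteration count are made up, and the learned edge weights that turn this into a neural network decoder (Nachmani et al.) are omitted.

```python
# Hedged sketch of sum-product (belief propagation) decoding.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])          # toy code: 3 checks, 6 bits
llr = np.array([-1.2, 0.8, -0.3, 2.1, -1.7, 0.9])   # channel LLRs (assumed)

C = np.zeros(H.shape)                        # check-to-variable messages
for _ in range(20):
    # variable-to-check: total belief minus the incoming edge (extrinsic)
    V = np.where(H == 1, llr + C.sum(axis=0) - C, 0.0)
    T = np.where(H == 1, np.tanh(V / 2), 1.0)
    T = np.where(np.abs(T) < 1e-9, 1e-9, T)  # guard against division by zero
    ext = T.prod(axis=1, keepdims=True) / T  # product over the other edges
    C = np.where(H == 1, 2 * np.arctanh(np.clip(ext, -0.999999, 0.999999)), 0.0)
    hard = ((llr + C.sum(axis=0)) < 0).astype(int)
    if not ((H @ hard) % 2).any():           # all parity checks satisfied
        break
print(hard)
```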
The sum of Lyapunov exponents L_f of a semi-stable fibration is the ratio of the degree of the Hodge bundle to the Euler characteristic of the base. This ratio is bounded from above by the Arakelov inequality. Sheng-Li Tan showed that for fiber genus g ≥ 2 the Arakelov equality is never attained. We investigate whether there are sequences of fibrations approaching the Arakelov bound asymptotically. The answer turns out to be no if the fibration is smooth, or non-hyperelliptic, or has small base genus. Moreover, we construct examples of semi-stable fibrations showing that Teichmüller curves do not attain the maximal possible value of L_f.
Digital distractions can interfere with goal attainment and lead to undesirable habits that are hard to get rid of. Various digital self-control interventions promise support to alleviate the negative impact of digital distractions. These interventions use different approaches, such as blocking apps and websites, goal setting, or visualizations of device usage statistics. While many apps and browser extensions make use of these features, little is known about their effectiveness. This systematic review synthesizes the current research to provide insights into the effectiveness of the different kinds of interventions. From a search of the ‘ACM’, ‘Springer Link’, ‘Web of Science’, ‘IEEE Xplore’ and ‘PubMed’ databases, we identified 28 digital self-control interventions. We categorized these interventions according to their features and their outcomes. The interventions showed varying degrees of effectiveness; in particular, interventions that relied purely on increasing the participants’ awareness were barely effective. For those interventions that sanctioned the use of distractions, the current literature indicates that the sanctions have to be sufficiently difficult to overcome, as they will otherwise be quickly dismissed. The overall confidence in the results is low, owing to small sample sizes, short study durations, and unclear study contexts. From these insights, we highlight research gaps and close with suggestions for future research.
We obtain spectral inequalities and asymptotic formulae for the discrete spectrum of the operator (1/2)·log(−Δ) in an open set Ω ⊂ R^d, d ≥ 2, of finite measure with Dirichlet boundary conditions. We also derive some results regarding lower bounds for the eigenvalue Λ_1(Ω) and compare them with previously known inequalities.
In the first part of this thesis, we introduce the concept of prospective strict no-arbitrage for discrete-time financial market models with proportional transaction costs. The prospective strict no-arbitrage condition, which is a variant of strict no-arbitrage, is slightly weaker than the robust no-arbitrage condition. It still implies that the set of portfolios attainable from zero initial endowment is closed in probability. Consequently, prospective strict no-arbitrage implies the existence of consistent prices, which may lie on the boundary of the bid-ask spread. A weak version of prospective strict no-arbitrage turns out to be equivalent to the existence of a consistent price system.
In continuous-time financial market models with proportional transaction costs, efficient friction, i.e., non-vanishing transaction costs, is a standing assumption. Together with robust no free lunch with vanishing risk, it rules out strategies of infinite variation, which usually appear in frictionless financial markets. In the second part of this thesis, we show how models with and without transaction costs can be unified. The bid and the ask price of a risky asset are given by càdlàg processes which are locally bounded from below and may coincide at some points. In a first step, we show that if the bid-ask model satisfies no unbounded profit with bounded risk for simple long-only strategies, then there exists a semimartingale lying between the bid and the ask price process.
In a second step, under the additional assumption that the zeros of the bid-ask spread are either starting points of an excursion away from zero or inner points from the right, we show that for every bounded predictable strategy specifying the amount of risky assets, the semimartingale can be used to construct the corresponding self-financing risk-free position in a consistent way. Finally, the set of most general strategies is introduced, which also provides a new view on the frictionless case.
Our purpose was to analyze the robustness and reproducibility of magnetic resonance imaging (MRI) radiomic features. We constructed a multi-object fruit phantom to perform scan-rescan MRI acquisitions using a 3 Tesla MRI scanner. We applied T2-weighted (T2w) half-Fourier acquisition single-shot turbo spin-echo (HASTE), T2w turbo spin-echo (TSE), T2w fluid-attenuated inversion recovery (FLAIR), T2 map and T1-weighted (T1w) TSE sequences. Images were resampled to isotropic voxels. Fruits were segmented. The workflow was repeated by a second reader and by the first reader after a pause of one month. We applied PyRadiomics to extract 107 radiomic features per fruit and sequence from seven feature classes. We calculated concordance correlation coefficients (CCC) and dynamic range (DR) to obtain measurements of feature robustness. The intraclass correlation coefficient (ICC) was calculated to assess intra- and inter-observer reproducibility. We calculated Gini scores to test the pairwise discriminative power of the features per MRI sequence. We depict Bland–Altman plots of features with top discriminative power (Mann–Whitney U test). Shape features were the most robust feature class. T2 map was the most robust imaging technique (robust features (rf), n = 84). The HASTE sequence yielded the fewest robust features (n = 20). Intra-observer ICC was excellent (≥ 0.75) for nearly all features (max–min; 99.1–97.2%). A deterioration of ICC values was seen in the inter-observer analyses (max–min; 88.7–81.1%). Complete robustness across all sequences was found for 8 features. Shape features and T2 map yielded the highest pairwise discriminative performance. Radiomics validity depends on the MRI sequence and feature class. T2 map seems to be the most promising imaging technique, with the highest feature robustness, high intra-/inter-observer reproducibility and the most promising discriminative power.
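The CCC used as a robustness measure is Lin's concordance correlation coefficient; here is a minimal sketch with made-up scan-rescan values for one feature.

```python
# Hedged sketch of Lin's concordance correlation coefficient (CCC); the
# two vectors are illustrative stand-ins for one radiomic feature measured
# across fruits in scan 1 and scan 2.
import numpy as np

def ccc(x, y):
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

scan1 = np.array([1.02, 0.88, 1.10, 0.95, 1.30])
scan2 = np.array([1.00, 0.90, 1.08, 0.99, 1.27])
print(ccc(scan1, scan2))                 # close to 1 = robust feature
```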
An exploratory latent class analysis of student expectations towards learning analytics services
(2021)
For service implementations to be widely adopted, the expectations of the key stakeholders must be considered. Failure to do so may lead to services reflecting ideological gaps, which will inadvertently create dissatisfaction among their users. Learning analytics research has begun to recognise the importance of understanding the student perspective towards the services that could potentially be offered; however, student engagement remains low. Furthermore, there has been no attempt to explore whether students can be segmented into different groups based on their expectations towards learning analytics services. Doing so would allow a greater understanding of what is and is not expected from learning analytics services within a sample of students. The current exploratory work addresses this limitation by using the three-step approach to latent class analysis to understand whether student expectations of learning analytics services can be clearly segmented, using self-report data obtained from a sample of students at an Open University in the Netherlands. The findings show that student expectations regarding the ethical and privacy elements of a learning analytics service are consistent across all groups, whereas expectations of service features are quite variable. These results are discussed in relation to previous work on student stakeholder perspectives, policy development, and the European General Data Protection Regulation (GDPR).
Automatically generating scenes from texts is an interesting task in computer science. For this task, VANNOTATOR (Mehler and Abrami 2019; Abrami, Spiekermann and Mehler 2019; Spiekermann, Abrami and Mehler 2018) was developed, a framework that enables the description and annotation of VR scenes. To provide the 3D objects required for these scenes, suitable databases are needed. These databases must be extensively annotated so that this task can be accomplished. For VANNOTATOR, the ShapeNetSem database was therefore used (Abrami, Henlein, Kett et al. 2020).
The more detail a scene is rendered in, the more detailed its textual description can be. For this reason, the database is extended by a subset of PartNet (Mo et al. 2019). This adds the option of segmenting objects and thereby extends the annotatable vocabulary. Some of the existing ShapeNetSem objects have the property of also being PartNet objects. This thesis deals with the implementation of replacing such ShapeNetSem objects with their associated PartNet objects. To accomplish this, a panel was designed in which a PartNet object is listed together with its individual segments. These segments can then be selected like ShapeNetSem objects and placed in a scene. This makes 1,881 objects with 34,016 sub-objects available to VANNOTATOR. This enlarged vocabulary helps advance Natural Language Processing even more effectively and precisely.
This thesis deals with two functional limit theorems for scaled line-counting processes of ancestral selection graphs. To this end, two models from mathematical population genetics are considered. We first introduce the Moran model with directional selection and constant population size N in continuous time, together with the line-counting process of the ancestral selection graph (MASP) according to Krone and Neuhauser (Theor. Popul. Biol. 1997). The main result of this thesis states that, in the case of moderate selection, the suitably standardized MASP converges in distribution to an Ornstein-Uhlenbeck process as N tends to infinity. The second model considered is the Cannings model with directional selection in discrete time, introduced following Boenkost, González Casanova, Pokalyuk and Wakolbinger (Electron. J. Probab. 2021). For a subregime of moderately weak selection, it is proven that the rescaled fluctuations of the line-counting process of the ancestral selection graph in the Cannings model likewise converge in distribution to an Ornstein-Uhlenbeck process.
Abstract: The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
Author Summary: Human visual perception is a complex cognitive feat known to be mediated by distinct cortical regions of the brain. However, the exact function of these regions remains unknown, and thus it remains unclear how those regions together orchestrate visual perception. Here, we apply an AI-driven brain mapping approach to reveal visual brain function. This approach integrates multiple artificial deep neural networks trained on a diverse set of functions with functional recordings of the whole human brain. Our results reveal a systematic tiling of visual cortex by mapping regions to particular functions of the deep networks. Together this constitutes a comprehensive account of the functions of the distinct cortical regions of the brain that mediate human visual perception.
The main topic of the present thesis is scene flow estimation in a monocular camera system. Scene flow describes the joint representation of 3D positions and motions of the scene. A special focus is placed on approaches that combine two kinds of information, deep-learning-based single-view depth estimation and model-based multi-view geometry.
The first part addresses single-view depth estimation, focusing on a method that provides single-view depth information in a form advantageous for monocular scene flow estimation methods. A convolutional neural network, called ProbDepthNet, is proposed, which provides pixel-wise, well-calibrated depth distributions. The experiments show that different strategies for quantifying the measurement uncertainty yield overconfident estimates due to overfitting effects. Therefore, a novel recalibration technique is integrated as part of ProbDepthNet, which is validated to improve the calibration of the uncertainty measures. The monocular scene flow methods presented in the subsequent parts confirm that integrating single-view depth information results in the best performance if the neural network provides depth distributions instead of single depth values and includes a recalibration.
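One simple way to recalibrate overconfident predictive distributions is to fit a global variance scale on held-out data by minimizing the Gaussian negative log-likelihood; the sketch below illustrates that idea only and is not necessarily ProbDepthNet's actual recalibration mechanism.

```python
# Hedged sketch: fit a single variance scale factor on held-out depth data.
import numpy as np
from scipy.optimize import minimize_scalar

mu = np.array([10.0, 21.0, 4.8])         # predicted depth means (assumed)
var = np.array([0.05, 0.08, 0.02])       # predicted (overconfident) variances
d = np.array([10.9, 19.6, 5.4])          # ground-truth depths (assumed)

def nll(log_s):
    v = var * np.exp(log_s)              # uniformly rescaled variances
    return np.mean(0.5 * np.log(2 * np.pi * v) + (d - mu) ** 2 / (2 * v))

s = np.exp(minimize_scalar(nll).x)
print(s)                                 # s > 1: variances were underestimated
```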
Three methods for monocular scene flow estimation are presented, each designed to combine multi-view-geometry-based optimization with deep-learning-based single-view depth estimation such as ProbDepthNet. While the first method, SVD-MSfM, performs the motion and depth estimation as two subsequent steps, the second method, Mono-SF, jointly optimizes the motion estimates and the depth structure. Both methods are tailored to scenes where the objects and motions can be represented by a set of rigid bodies; dynamic traffic scenes are one kind of scene that essentially fulfills this characteristic. The third method, Mono-Stixel, uses an even more specialized scene model for traffic scenes, the so-called stixel world, as the underlying scene representation.
The proposed methods set a new state of the art for monocular scene flow estimation, with Mono-SF being the first and leading monocular method on the KITTI scene flow benchmark at the time of submission of the present thesis. The experiments validate that both kinds of information, the multi-view geometric optimization and the single-view depth estimates, contribute to the monocular scene flow estimates and are necessary to achieve the new state-of-the-art accuracy.
Sublinear circuits are generalizations of the affine circuits in matroid theory, and they arise as the convex-combinatorial core underlying constrained non-negativity certificates of exponential sums and of polynomials based on the arithmetic-geometric inequality. Here, we study the polyhedral combinatorics of sublinear circuits for polyhedral constraint sets. We give results on the relation between the sublinear circuits and their supports and provide necessary as well as sufficient criteria for sublinear circuits. Based on these characterizations, we provide some explicit results and enumerations for two prominent polyhedral cases, namely the non-negative orthant and the cube [−1,1]^n.
We derive a shape derivative formula for the family of principal Dirichlet eigenvalues λ_s(Ω) of the fractional Laplacian (−Δ)^s associated with bounded open sets Ω ⊂ R^N of class C^{1,1}. This extends, with the help of a new approach, a result in Dalibard and Gérard-Varet (Calc. Var. 19(4):976–1013, 2013) which was restricted to the case s = 1/2. As an application, we consider the maximization problem for λ_s(Ω) among annular-shaped domains of fixed volume of the type B ∖ B̄′, where B is a fixed ball and B′ is a ball whose position is varied within B. We prove that λ_s(B ∖ B̄′) is maximal when the two balls are concentric. Our approach also allows us to derive similar results for the fractional torsional rigidity. More generally, we characterize one-sided shape derivatives for best constants of a family of subcritical fractional Sobolev embeddings.
Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming
(2021)
Several applications in medical imaging and non-destructive material testing lead to inverse elliptic coefficient problems, where an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly non-linear, ill-posed inverse problem, for which unique reconstructability results, stability estimates and global convergence of numerical methods are very hard to achieve. The aim of this note is to point out a new connection between inverse coefficient problems and semidefinite programming that may help address these challenges. We show that an inverse elliptic Robin transmission problem with finitely many measurements can be equivalently rewritten as a uniquely solvable convex non-linear semidefinite optimization problem. This allows us to explicitly estimate the number of measurements required to achieve a desired resolution, to derive an error estimate for noisy data, and to overcome the problem of local minima that usually appears in optimization-based approaches to inverse coefficient problems.
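For a flavor of the optimization class involved, a generic convex semidefinite program with a positive semidefinite matrix variable and finitely many linear measurements can be set up as follows; the matrices and data are random placeholders and do not encode the actual Robin transmission problem.

```python
# Hedged, generic SDP sketch: recover a PSD matrix from linear measurements.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6
A = [(lambda M: (M + M.T) / 2)(rng.normal(size=(n, n))) for _ in range(m)]
X_true = np.diag(rng.uniform(0.5, 1.5, size=n))      # hidden "coefficient"
b = np.array([np.trace(Ai @ X_true) for Ai in A])    # simulated measurements

X = cp.Variable((n, n), PSD=True)                    # semidefinite constraint
residual = cp.hstack([cp.trace(A[i] @ X) - b[i] for i in range(m)])
prob = cp.Problem(cp.Minimize(cp.sum_squares(residual)))
prob.solve()
print(np.round(X.value, 3))
```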
The following thesis deals with a Text2Scene application implemented in virtual reality (VR). The system enables users to recreate a scene virtually from its textual description. This offers a new kind of interaction with a text that emphasizes the visual component and thus makes a story experienceable in new ways.
To this end, the user can either load a finished text from the server or create their own, which is then processed automatically. The physical objects present in the text are automatically recognized and made available to the user as 3D objects in the virtual environment. These can then be placed manually, thereby creating the scene described in the source text. The goal of the text processing is a description of the objects that is as precise as possible, so that they can be looked up in the object database in a targeted way.
The text processing places particular emphasis on recognizing part-whole relations, so that objects that occur in the text and have a holonym are automatically linked to it. At the same time, the part-whole relation is also examined more closely in the other direction. The text processing should furthermore be able to specify objects more precisely and adapt them to the context of the text. In addition, the Natural Language Processing (NLP) was extended so that the context of the text is recognized and the objects are categorized accordingly. The text processing is implemented with the help of a neural network. The tools used for recognizing part-whole relations, context, and object specifications were evaluated on text inputs with respect to the accuracy of their output.
To make use of the text processing, a virtual scene was developed that enables the creation of custom scenes from previously loaded or entered texts. There, the user can have objects loaded manually or automatically and then place them.
Analysing survival or fixation probabilities for a beneficial allele is a prominent task in the field of theoretical population genetics. Haldane's asymptotics is an approximation for the fixation probability in the case of a single beneficial mutant with small selective advantage in a large population.
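For reference, the classical form of Haldane's asymptotics, as it is commonly stated for a single mutant with small selective advantage s and offspring variance σ²:

```latex
% Classical Haldane asymptotics: a single beneficial mutant with selective
% advantage s -> 0 and offspring variance sigma^2 fixes with probability
\pi(s) \sim \frac{2s}{\sigma^2},
% which reduces to the textbook value 2s for Poisson offspring (sigma^2 = 1).
```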
In this thesis we analyse the interplay between genetic drift and directional selection and prove Haldane's asymptotics in different settings: for the fixation probability in Cannings models with moderate selection and for the survival probability of slightly supercritical branching processes in a random environment.
In Chapter 3 we introduce a class of Cannings models with selection that allow for a forward and backward construction. In particular, a Cannings ancestral selection process can be defined for this class of models, which counts the number of potential parents and is in sampling duality to the forward frequency process. By means of this duality the probability of fixation can be expressed through the expectation of the Cannings ancestral selection process in stationarity. A control of this expectation yields that the fixation probability fulfils Haldane's asymptotics in a regime of moderately weak selection (Thm. 8).
In Chapter 4 we study the fixation probability of Cannings models in a regime of moderately strong selection. Here couplings of the frequency process of beneficial individuals with slightly supercritical Galton-Watson processes imply that the fixation probability is given by Haldane's asymptotics (Thm. 9).
Lastly, in Chapter 5 we consider slightly supercritical branching processes in an independent and identically distributed random environment and study the probability of survival as the expected number of offspring tends to one from above. We show that the random environment has a non-trivial influence on the probability of survival, resulting in a modification of Haldane's asymptotics, only if the variance and the expectation of the random offspring mean are of the same order. Outside of this critical parameter regime, the population either goes extinct or survives with a probability that fulfils Haldane's asymptotics (Thm. 10).
The proof establishes an expression for the survival probability in terms of the shape function of the random offspring generating functions. This expression exhibits similarities to perpetuities known from a financial context. Consequently, we prove a limit theorem for perpetuities with vanishing interest rates (Thm. 11).
This work describes the development of a comprehensive methodology for analyzing vibro-acoustic and wear mechanisms in transmission systems. The thesis addresses certain gaps in the fields of structural dynamics and abrasion mechanisms and opens new areas for further research.
The work attempts to understand new and relatively unexplored challenges such as the influence of wear on the dynamics of the drive train. It also focuses on developing new techniques for analyzing the vibration and acoustic behavior of the drive unit structures and the surrounding fluids, respectively.
The developed methodology meets the requirements of both complete-system and component-level modeling by using a specially identified combination of different simulation techniques. Based on the created template model, a three-stage spur-plus-helical gearbox is constructed and simulated as an application example. In addition to the internal mechanical excitation mechanisms, the transmission model also includes the rotational and translational dynamics of the gears, shafts and bearings. This is followed by an illustration of wear among the rotating components.
Different kinds of static and dynamic analyses are performed and coupled at various levels depending on the mechanical complexities involved. Furthermore, the structural vibration of the housing and the associated sound radiation are mapped into the surrounding fluid. Additionally, the approach for selecting potential parameters for optimization is depicted. The final part focuses on the measurements of different system states used for validating the model. In the end, results obtained from both simulations and experiments are analyzed and assessed for their respective performance.
Machine Learning (ML) is so pervasive in our everyday lives that we often do not even realise that, more often than expected, we are using systems based on it. It is also evolving faster than ever before. When deploying ML systems that make decisions on their own, we need to think about their ignorance of our uncertain world. The uncertainty might arise due to scarcity of data, bias in the data, or even a mismatch between the real world and the ML model. Given all these uncertainties, we need to think about how to build systems that are not totally ignorant thereof. Bayesian ML can deal with these problems to some extent. Specifying the model using probabilities provides a convenient way to quantify uncertainties, which can then be included in the decision-making process.
In this thesis, we introduce the Bayesian ansatz to modeling and apply Bayesian ML models in finance and economics. In particular, we dig deeper into Gaussian processes (GPs) and the Gaussian process latent variable model (GPLVM). Applied to the returns of several assets, the GPLVM provides the covariance structure as well as a latent space embedding thereof. Several financial applications can be built upon the output of the GPLVM. To demonstrate this, we build an automated asset allocation system, a predictor for missing asset prices, and identify further structure in financial data.
It turns out that the GPLVM exhibits a rotational symmetry in the latent space, which makes it harder to fit. Our second publication reports how to deal with this symmetry. We propose another parameterization of the model using Householder transformations, by which the symmetry is broken. Bayesian models are changed by a reparameterization if the prior is not changed accordingly. We provide the correct prior distribution of the new parameters, such that the model, i.e. the data density, is not changed under the reparameterization. After applying the reparameterization to Bayesian PCA, we show that the symmetry of nonlinear models can also be broken in the same way.
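The Householder device can be sketched quickly: an orthogonal matrix is built as a product of reflections, each determined by a unit vector; the dimension and random vectors below are illustrative.

```python
# Hedged sketch: an orthogonal matrix as a product of Householder reflections.
import numpy as np

def householder(v):
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)   # reflection across v's plane

rng = np.random.default_rng(0)
d = 3
Q = np.eye(d)
for _ in range(d):
    Q = Q @ householder(rng.normal(size=d))        # accumulate reflections
print(np.allclose(Q @ Q.T, np.eye(d)))             # True: Q is orthogonal
```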
In our last project, we propose a new method for matching quantile observations which uses order statistics. Using order statistics as the likelihood, instead of a Gaussian likelihood, has several advantages. We compare these two models and highlight their advantages and disadvantages. To demonstrate our method, we fit quantile-reported salary data of several European countries. Given several candidate models for the fit, our method also provides a metric for choosing the best option.
We hope that this thesis illustrates some benefits of Bayesian modeling (especially Gaussian processes) in finance and economics and its usage when uncertainties are to be quantified.
We show that throughout the satisfiable phase the normalized number of satisfying assignments of a random 2-SAT formula converges in probability to an expression predicted by the cavity method from statistical physics. The proof is based on showing that the Belief Propagation algorithm renders the correct marginal probability that a variable is set to “true” under a uniformly random satisfying assignment.
Within the last thirty years, the contraction method has become an important tool for the distributional analysis of random recursive structures. While it was mainly developed to show weak convergence, the contraction approach can additionally be used to obtain bounds on the rate of convergence in an appropriate metric. Based on ideas of the contraction method, we develop a general framework to bound rates of convergence for sequences of random variables as they mainly arise in the analysis of random trees and divide-and-conquer algorithms. The rates of convergence are bounded in the Zolotarev distances. In essence, we present three different versions of convergence theorems: a general version, an improved version for normal limit laws (providing significantly better bounds in some examples with normal limits) and a third version with a relaxed independence condition. Moreover, concrete applications are given which include parameters of random trees, quantities of stochastic geometry as well as complexity measures of recursive algorithms under either a random input or some randomization within the algorithm.
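For completeness, the Zolotarev distance referred to above is usually defined as follows (writing s = m + α with m a non-negative integer and 0 < α ≤ 1):

```latex
\zeta_s(X,Y) = \sup_{f \in \mathcal{F}_s} \bigl| \mathbb{E}\, f(X) - \mathbb{E}\, f(Y) \bigr|,
\qquad
\mathcal{F}_s = \bigl\{ f \in C^m : \; |f^{(m)}(x) - f^{(m)}(y)| \le |x - y|^{\alpha} \bigr\}.
```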
Chatbots are a promising technology with the potential to enhance workplaces and everyday life. In terms of scalability and accessibility, they also offer unique possibilities as communication and information tools for digital learning. In this paper, we present a systematic literature review investigating the areas of education where chatbots have already been applied, exploring their pedagogical roles, their use for mentoring purposes, and their potential to personalize education. We conducted a preliminary analysis of 2,678 publications to perform this literature review, which allowed us to identify 74 relevant publications on chatbots’ application in education. Through this, we address five research questions that, together, allow us to explore the current state of the art of this educational technology. We conclude our systematic review by pointing to three main research challenges: 1) aligning chatbot evaluations with implementation objectives, 2) exploring the potential of chatbots for mentoring students, and 3) exploring and leveraging the adaptation capabilities of chatbots. For all three challenges, we discuss opportunities for future research.
The sketch map tool facilitates the assessment of OpenStreetMap data for participatory mapping
(2021)
A worldwide increase in the number of people and areas affected by disasters has led to more and more approaches that focus on integrating local knowledge into disaster risk reduction processes. The research at hand presents a method for formalizing this local knowledge via sketch maps in the context of flooding. The Sketch Map Tool enables not only the visualization of this local knowledge and analyses of OpenStreetMap data quality but also the communication of the results of these analyses in an understandable way. Since the tool will be open-source and several analyses are performed automatically, it also offers a method for local governments in areas where historic data or financial means for flood mitigation are limited. Example analyses for two cities in Brazil show the functionality of the tool and allow an evaluation of its applicability. The results show that the fitness-for-purpose analysis of the OpenStreetMap data is promising for identifying whether the sketch map approach can be used in a certain area or whether citizens might have problems with marking their flood experiences. In this way, an intrinsic quality analysis is incorporated into a participatory mapping approach. Additionally, the different paper formats offered for printing enable not only individual mapping but also group mapping. Future work will focus on further automating all steps of the tool to allow members of local governments without specific technical knowledge to apply the Sketch Map Tool to their own study areas.
This thesis presents research spanning three conference papers and one manuscript that has not yet been submitted for peer review.
The topic of Paper 1 is the inherent complexity of maintaining perfect height in B-trees. We consider the setting in which a B-tree of optimal height contains n = (1−ϵ)N elements, where N is the number of elements in a full B-tree of the same height (the capacity of the tree). We show that the rebalancing cost when updating the tree, while maintaining optimal height, depends on ϵ. Specifically, our analysis gives a lower bound on the rebalancing cost of Ω(1/(ϵB)). We then describe a rebalancing algorithm whose amortized rebalancing cost has an almost matching upper bound of O(1/(ϵB)⋅log²(min{1/ϵ, B})). We additionally describe a scheme utilizing this algorithm which, given a rebalancing budget f(n), maintains optimal height for decreasing ϵ until the cost exceeds the budget, at which point it maintains optimal height plus one. Given a rebalancing budget of Θ(log n), this scheme maintains optimal height for all but a vanishing fraction of sizes in the intervals between tree capacities.
Manuscript 2 presents an empirical analysis of practical randomized external-memory algorithms for computing the connected components of graphs. The best known theoretical results for this problem are essentially all derived from results for minimum spanning tree (MST) algorithms. Among randomized external-memory MST algorithms, the best asymptotic result has I/O-complexity O(sort(|E|)) in expectation, while an empirically studied practical algorithm has a bound of O(sort(|E|)⋅log(|V|/M)). We implement and evaluate an algorithm for connected components with expected I/O-complexity O(sort(|E|)), a simplification of the MST algorithm with this asymptotic cost, and we show that this approach may also yield good results in practice.
In Paper 3, we present a novel approach to simulating large-scale population protocol models. Naive simulation of N interactions of a population protocol with n agents and m states requires Θ(n log m) bits of memory and Θ(N) time. For very large n, this is prohibitive both in memory consumption and time, as interesting protocols will typically require N > n interactions for convergence. We describe a histogram-based simulation framework which instead requires Θ(m log n) bits of memory, an improvement since typically n ≫ m. We analyze, implement, and compare a number of different data structures to perform correct agent sampling in this regime. For this purpose, we develop dynamic alias tables, which allow sampling an interaction in expected amortized constant time. We then show how to use sampling techniques to process agent interactions in batches, giving a simulation approach which uses subconstant time per interaction under reasonable assumptions.
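A static Walker/Vose alias table, the starting point for the dynamic variant developed in the paper, can be sketched as follows; the state histogram is made up, and the update operations of the dynamic version are omitted.

```python
# Hedged sketch of a static alias table for O(1) sampling from a histogram.
import numpy as np

def build_alias(counts):
    p = counts / counts.sum()
    n = len(p)
    prob, alias = np.zeros(n), np.zeros(n, dtype=int)
    scaled = p * n
    small = [i for i in range(n) if scaled[i] < 1]
    large = [i for i in range(n) if scaled[i] >= 1]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l     # fill column s, donate excess of l
        scaled[l] -= 1 - scaled[s]
        (small if scaled[l] < 1 else large).append(l)
    for i in small + large:
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias, rng):
    i = rng.integers(len(prob))              # pick a column uniformly
    return i if rng.random() < prob[i] else alias[i]

rng = np.random.default_rng(1)
prob, alias = build_alias(np.array([5.0, 1.0, 2.0]))   # toy state histogram
print([sample(prob, alias, rng) for _ in range(10)])
```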
With Paper 4, we introduce the new model of fragile complexity for comparison-based algorithms. Within this model, we analyze classical comparison-based problems such as finding the minimum value of a set, selection (or finding the median), and sorting. We prove a number of lower and upper bounds, and in particular we give several randomized results describing trade-offs not achievable by deterministic algorithms.
To store knowledge in a form that can be processed automatically, ontologies are used, among other things. Ontologies allow new knowledge to be derived through a process called inference. When their content overlaps, ontologies are connected via ontology alignments, which relate entities from the different ontologies to each other. Usually, these alignments are formulated as sets of equivalences describing which concepts from one ontology correspond to concepts from another ontology. Superclass and subclass relations in alignments are also common.
Such ontology alignments are used, for example, in biomedical research databases, since alignments make it possible to merge information from different fields. The manual effort required to create large ontologies and alignments is very high. Accordingly, when ontologies change, it would be desirable not to have to start over and create a new ontology, but to reuse as much as possible of the changed ontology and the alignments affected by it. Automated procedures should therefore be used wherever possible. This work investigates four approaches to automating the adaptation of alignments to changes in ontologies.
The first approach incorporates inferences into the process of predicting alignment changes. To this end, the inferences are computed before and after the change of the ontologies, and a rule-based algorithm determines from the differences how the alignment should change. The second approach, like the remaining ones, does not aim to adapt the alignment directly. Instead, it predicts which parts of the alignment need to be adapted. For this purpose, the ontologies and the alignment are represented as knowledge graph embeddings. These embeddings map nodes of the ontologies into a space with 300 to 1,000 dimensions in such a way that the relations between the entities of the ontologies can also be represented in this space. The embeddings are then used to train various classification algorithms, which predict which parts of the alignment will change. The third approach combines embeddings with a change model that categorizes the changes made to the ontologies. Classification algorithms are then applied to this categorization together with the embedding. The fourth approach uses a neural network architecture designed specifically for knowledge graphs, so-called Graph Convolutional Networks, to predict changes to alignments.
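The embedding-plus-classifier idea of the second approach can be sketched as follows; the embeddings, correspondence pairs, labels, and the choice of a random forest are illustrative stand-ins, not the thesis' actual setup.

```python
# Hedged sketch: predict from precomputed knowledge-graph embeddings whether
# an alignment correspondence must be revised after an ontology change.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 300))             # stand-in 300-d node embeddings
pairs = rng.integers(0, 200, size=(500, 2))   # alignment correspondences
X = np.hstack([emb[pairs[:, 0]], emb[pairs[:, 1]]])
y = rng.integers(0, 2, size=500)              # 1 = correspondence changed

clf = RandomForestClassifier(n_estimators=100).fit(X[:400], y[:400])
print(clf.score(X[400:], y[400:]))            # held-out accuracy
```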
These approaches are examined with respect to their respective advantages and disadvantages on two use cases. The rule-based approach incorporating inferences is examined on an application example from the field of interweaving systems, in which a general method for interweaving systems is applied to enable the self-management of traffic light controls. The approaches based on machine learning are evaluated on an excerpt from the biomedical research database UMLS.
It was found that the considered approaches can, in principle, be used to adapt alignments to ontology changes. The rule-based approach incorporating inferences is primarily applicable to very small data sets in which all regularities of the changes are known in principle; this applicability follows from the design of the problem setting for the first approach. The machine learning approaches are particularly suitable for large data sets and offer the advantage that predictions can be made even without a complete understanding of the change process.
Among the machine learning approaches, incorporating change models showed no advantage over the other approaches. On a somewhat smaller data set, the results of the embedding-based approach and the Relational Graph Convolutional Networks were comparable, while on a larger data set the Graph Convolutional Networks achieved somewhat better results.
Further results of this work are a formalization of the problem of adapting ontology alignments to changes and a formal description of the approaches. Another contribution is the presentation of a use case for ontology alignments from the field of interweaving systems. In addition, the problem of adapting alignments to changes was formulated in such a way that it can be approached with machine learning.
Principles of cognitive maps
(2021)
This thesis analyses the concept of a cognitive map in the research field of geography. Cognitive mapping research is essential, as it investigates the relations between cognitive maps and the external representations of space that people regularly use when acquiring spatial knowledge, such as maps in geographic information systems. Moreover, cognitive maps, when extended to semantic maps, explain the relations between people and things in a non-physical environment, where the considered space is spanned not by distance but by other non-spatial variables. Nevertheless, cognitive maps are often distorted. Although the proper formation of a cognitive map is vital in navigation processes, cognitive distortions are barely investigated in the field of geography. By analyzing the relevant work, especially Tobler’s first law of geography, a new lexical variant of Tobler’s first law could be stated that could presumably describe a specific distortion in the processing of landmarks in cognitive maps.
In 2020, Germany and Spain experienced lockdowns of their school systems. This resulted in a new challenge for learners and teachers: lessons moved from the classroom to the children’s homes. Therefore, teachers had to set rules, implement procedures and make didactical–methodical decisions regarding how to handle this new situation. In this paper, we focus on the roles of mathematics teachers in Germany and Spain. The article first describes how mathematics lessons were conducted using distance learning. Second, problems encountered throughout this process were examined. Third, teachers drew conclusions from their mathematics teaching experiences during distance learning. To address these research interests, a questionnaire was answered by N = 248 teachers (N1 = 171 German teachers; N2 = 77 Spanish teachers). Resulting from a mixed methods approach, differences between the countries can be observed, e.g., German teachers conducted more lessons asynchronously. In contrast, Spanish teachers used synchronous teaching more frequently, but still regard the lack of personal contact as a main challenge. Finally, for both countries, the digitization of mathematics lessons seems to have been normalized by the pandemic.
Deep learning with neural networks seems to have largely replaced traditional design of computer vision systems. Automated methods to learn a plethora of parameters are now used in favor of previously practiced selection of explicit mathematical operators for a specific task. The entailed promise is that practitioners no longer need to take care of every individual step, but rather focus on gathering big amounts of data for neural network training. As a consequence, both a shift in mindset towards a focus on big datasets, as well as a wave of conceivable applications based exclusively on deep learning can be observed.
This PhD dissertation aims to uncover some of the only implicitly mentioned or overlooked deep learning aspects, highlight unmentioned assumptions, and finally introduce methods to address the respective immediate weaknesses. In the author's humble opinion, these prevalent shortcomings can be tied to the fact that the involved steps in the machine learning workflow are frequently decoupled. Success is predominantly measured based on accuracy measures designed for evaluation with static benchmark test sets. Individual machine learning workflow components are assessed in isolation with respect to available data, choice of neural network architecture, and a particular learning algorithm, rather than viewing the machine learning system as a whole in the context of a particular application. Correspondingly, three key challenges have been identified in this dissertation: 1. Choice and flexibility of a neural network architecture. 2. Identification and rejection of unseen unknown data to avoid false predictions. 3. Continual learning without forgetting of already learned information. These challenges have long been crucial topics in older literature, yet seem to require a renaissance in modern deep learning literature. Initially, it may appear that they pose independent research questions; however, the thesis posits that these aspects are intertwined and require a joint perspective in machine learning based systems. In summary, the essential question is thus how to pick a suitable neural network architecture for a specific task, how to recognize which data inputs belong to this context, which ones originate from potential other tasks, and ultimately how to continuously include such identified novel data in neural network training over time without overwriting existing knowledge.
Thus, the central emphasis of this dissertation is to build on top of existing deep learning strengths, yet also acknowledge mentioned weaknesses, in an effort to establish a deeper understanding of interdependencies and synergies towards the development of unified solution mechanisms. For this purpose, the main portion of the thesis is in cumulative form. The respective publications can be grouped according to the three challenges outlined above. Correspondingly, chapter 1 is focused on choice and extendability of neural network architectures, analyzed in context of popular image classification tasks. An algorithm to automatically determine neural network layer width is introduced and is first contrasted with static architectures found in the literature. The importance of neural architecture design is then further showcased on a real-world application of defect detection in concrete bridges. Chapter 2 is comprised of the complementary ensuing questions of how to identify unknown concepts and subsequently incorporate them into continual learning. A joint central mechanism to distinguish unseen concepts from what is known in classification tasks, while enabling consecutive training without forgetting or revisiting older classes, is proposed. Once more, the role of the chosen neural network architecture is quantitatively reassessed. Finally, chapter 3 culminates in an overarching view, where developed parts are connected. Here, an extensive survey further serves the purpose to embed the gained insights in the broader literature landscape and emphasizes the importance of a common frame of thought. The ultimately presented approach thus reflects the overall thesis’ contribution to advance neural network based machine learning towards a unified solution that ties together choice of neural architecture with the ability to learn continually and the capability to automatically separate known from unknown data.
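The identification of unseen unknown data (challenge 2) can be illustrated with a deliberately simple baseline that is not the dissertation's mechanism: reject an input whenever the network's predictive distribution is too close to uniform. A minimal sketch, assuming plain softmax outputs:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def predict_or_reject(logits, entropy_threshold=0.5):
    """Return the predicted class, or None if the predictive
    distribution is too uncertain (treated as unseen/unknown input)."""
    p = softmax(logits)
    entropy = -np.sum(p * np.log(p + 1e-12))
    return int(np.argmax(p)) if entropy < entropy_threshold else None

print(predict_or_reject(np.array([5.0, 0.1, 0.2])))  # confident -> class 0
print(predict_or_reject(np.array([1.0, 1.1, 0.9])))  # near-uniform -> None
```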
We show the existence of additive kinematic formulas for general flag area measures, which generalizes a recent result by Wannerer. Building on previous work by the second named author, we introduce an algebraic framework to compute these formulas explicitly. This is carried out in detail in the case of the incomplete flag manifold consisting of all (p+1)-planes containing a unit vector.
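In schematic form (with assumed notation, for orientation only), such an additive kinematic formula expresses the rotation average of a flag area measure $S$ of a Minkowski sum in terms of the two summands separately,
\[
\int_{SO(n)} S(K + \vartheta L)\, d\vartheta \;=\; \sum_{i,j} c_{ij}\, S_i(K) \otimes S_j(L),
\]
where $K$ and $L$ are convex bodies, $(S_i)$ runs through a basis of the relevant space of flag area measures, and the constants $c_{ij}$ are exactly what the algebraic framework is designed to compute.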
We calculate the Masur–Veech volume of the gothic locus $\mathcal{G}$ in the stratum $\mathcal{H}(2^3)$ of genus 4. Our method is based on the use of the formulae for the Euler characteristics of gothic Teichmüller curves to determine the number of lattice points of given area. We also use this method to recalculate the Masur–Veech volumes of the Prym loci $\mathcal{P}_3 \subset \mathcal{H}(4)$ and $\mathcal{P}_4 \subset \mathcal{H}(6)$ in genus 3 and 4.
Collaboration is an important 21st century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of these works have used the audio modality to detect the quality of CC. The quality of CC can be detected from simple indicators of collaboration, such as total speaking time, or complex indicators, such as synchrony in the rise and fall of the average pitch. Most studies in the past focused on “how group members talk” (i.e., spectral and temporal features of audio such as pitch) and not on “what they talk about”. The “what” of the conversations is more overt than the “how”. Very few studies examined what group members talk about, and these studies were lab-based, showing a representative overview of specific words as topic clusters instead of analysing the richness of the content of the conversations by understanding the linkage between these words. To overcome this, we take a first step in this technical paper, based on field trials, towards prototyping a tool for automatic collaboration analytics. We designed a technical setup to collect, process and visualize audio data automatically. The data collection took place while a board game was played among university staff with pre-assigned roles, to create awareness of the connection between learning analytics and learning design. We not only performed a word-level analysis of the conversations, but also analysed their richness by interactively visualizing the strength of the linkage between words and phrases. In this visualization, we used a network graph to visualize the turn-taking exchange between different roles along with the word-level and phrase-level analysis. We also used centrality measures to understand the network graph further, based on how much hold particular words have over the network of words and how influential certain words are. Finally, we found that this approach had certain limitations in terms of automation in speaker diarization (i.e., who spoke when) and text data pre-processing. We therefore conclude that, even though the technical setup was only partially automated, it is a way forward to understand the richness of conversations between different roles and a significant step towards automatic collaboration analytics.
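As a rough illustration of the word-network analysis (a sketch only; the words, weights, and library choice below are invented, with networkx standing in for the paper's tooling):

```python
import networkx as nx

# Hypothetical word co-occurrence edges (word pairs uttered in the
# same turn), weighted by how often each pair occurred.
edges = [("feedback", "design", 4), ("design", "analytics", 3),
         ("analytics", "data", 5), ("data", "feedback", 2),
         ("design", "data", 1)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Degree centrality: how many distinct words a word is linked to.
# Betweenness centrality: how often a word bridges other word pairs.
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G, weight="weight")
for word in G.nodes:
    print(f"{word}: degree={deg[word]:.2f}, betweenness={btw[word]:.2f}")
```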
Studying large discrete systems is of central interest in discrete mathematics, computer science and statistical physics, among other fields. The study of phase transitions, i.e. points in the evolution of a large random system at which the behaviour of the system changes drastically, became of interest in the classical field of random graphs, in the theory of spin glasses, as well as in the analysis of algorithms [78, 82, 121].
It turns out that ideas from the statistical physics point of view on spin glass systems can be used to study inherently combinatorial problems in discrete mathematics and theoretical computer science (for instance, satisfiability) or to analyse phase transitions occurring in inference problems (like the group testing problem) [68, 135, 168]. A mathematical flaw of this approach is that the physical methods only yield mathematical conjectures, as they are not known to be rigorous.
In this thesis, we will discuss the results of six contributions. For instance, we will explore how the theory of diluted mean-field models for spin glasses helps studying random constraint satisfaction problems through the example of the random 2-SAT problem. We will derive a formula for the number of satisfying assignments that a random 2-SAT formula typically possesses [2].
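For orientation, random 2-SAT is simple enough to experiment with directly: satisfiability is decidable in polynomial time via the strongly connected components of the implication graph. A minimal sketch (not the proof technique of the thesis, which derives the typical number of satisfying assignments analytically):

```python
import random
import networkx as nx

def random_2sat(n_vars, n_clauses, seed=0):
    """Draw a random 2-SAT formula: each clause picks two distinct
    variables uniformly and negates each with probability 1/2."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        x, y = rng.sample(range(1, n_vars + 1), 2)
        clauses.append((x * rng.choice((1, -1)), y * rng.choice((1, -1))))
    return clauses

def satisfiable(clauses):
    """A 2-SAT formula is unsatisfiable iff some literal and its
    negation share a strongly connected component of the implication
    graph."""
    G = nx.DiGraph()
    for a, b in clauses:
        G.add_edge(-a, b)  # (a or b) means: not a implies b
        G.add_edge(-b, a)  # and: not b implies a
    for comp in nx.strongly_connected_components(G):
        if any(-lit in comp for lit in comp):
            return False
    return True

# Around clause density m/n = 1, the satisfiability probability of a
# random 2-SAT formula changes drastically (the classical transition).
print(satisfiable(random_2sat(n_vars=100, n_clauses=90)))
```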
Furthermore, we will discuss how ideas from spin glass models (more precisely, from their planted versions) can be used to facilitate inference in the group testing problem. We will answer all major open questions with respect to non-adaptive group testing if the number of infected individuals scales sublinearly in the population size, and draw a complete picture of the phase transitions with respect to the complexity and solubility of this inference problem [41, 46].
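As a toy illustration of non-adaptive group testing (not the inference algorithm analysed in [41, 46]), the simple COMP decoder declares healthy every individual who appears in at least one negative test; all names and parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 200, 5, 60                 # population, infected, tests
infected = rng.choice(n, size=k, replace=False)

# Random Bernoulli pooling design: each test includes each individual
# independently with a fixed probability.
design = rng.random((m, n)) < (1.0 / k)
outcome = design[:, infected].any(axis=1)   # positive iff an infected individual is included

# COMP decoding: anyone appearing in a negative test must be healthy;
# the remaining individuals form a (w.h.p. small) superset estimate.
declared_healthy = design[~outcome].any(axis=0)
estimate = np.flatnonzero(~declared_healthy)
print(sorted(infected), list(estimate))
```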
Subsequently, we study the group testing problem under sparsity constraints and obtain a phase diagram, not yet fully understood, in which only small regions remain unexplored [88].
In all those cases, we will discover that important results can be achieved by combining the rich theory of the statistical physics approach to spin glasses with inherent combinatorial properties of the underlying random graph.
Furthermore, based on partial results of Coja-Oghlan, Perkins and Skubch [42] and Coja-Oghlan et al. [49], we introduce in [47] a consistent limit theory for discrete probability measures akin to the graph limit theory [31, 32, 128]. This limit theory involves an extensive study of a special variant of the cut-distance, and we obtain a continuous version of a very simple algorithm, the pinning operation, which allows us to decompose the phase space of an underlying system into parts such that a probability measure, restricted to this decomposition, is close to a product measure under the cut-distance. We will see that this pinning lemma can be used to rigorise predictions, at least in some special cases, based on the physical idea of a Bethe state decomposition when applied to the Boltzmann distribution.
Finally, we study sufficient conditions for the existence of perfect matchings, Hamilton cycles and bounded degree trees in randomly perturbed graph models if the underlying deterministic graph is sparse [93].
Network models play an important role in various scientific disciplines and serve, among other things, to describe realistic graphs. They are usually formulated as random graphs and thus represent probability distributions over graphs. The distribution is typically parametrized and arises implicitly, for instance via a randomized construction rule. An early representative is the G(n,p) model, which is defined over all undirected graphs with n nodes and creates each edge independently with probability p. A graph drawn from G(n,p), however, bears little structural resemblance to the graphs typically observed in applications. Popular models are therefore designed to produce desired topological properties with sufficiently high probability. A common goal, for example, is to reproduce the only vaguely defined class of so-called complex networks, to which many social networks are assigned. Among other characteristics, these graphs usually exhibit a heavy-tailed degree distribution, a small diameter, a dominant connected component, and above-average dense subregions, so-called communities.
The applications of network models go far beyond the original goal of explaining observed effects. A common use case is the systematic production of data. Such data enables or supports experimental studies, for instance for the empirical verification of theoretical predictions or for the general evaluation of algorithms and data structures. Especially for large problem instances, this has advantages over observed networks: massive inputs based on real data are often not available in sufficient quantity, are costly to obtain and manage, are subject to legal restrictions, or are of unclear quality.
In this thesis, we therefore consider algorithmic aspects of the generation of massive random graphs. To allow users to reproduce existing studies, we mostly focus on faithful implementations of established network models, such as Preferential Attachment processes, LFR, simple graphs with prescribed degree sequences, or graphs with a hyperbolic (or similar) embedding. To this end, we develop generators that are efficient both in practice and in theory. Each of our algorithms is optimized for a suitable machine model: we design, for instance, classical sequential generators for register machines, algorithms for the external memory model, and parallel approaches for distributed or shared-memory machines on CPUs, GPUs, and other accelerators.
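To make the algorithmic concern concrete, here is a minimal sketch contrasting the naive G(n,p) generator with the classical skipping technique of Batagelj and Brandes, which jumps over non-edges in expected O(n + m) time; this is an illustration, not one of the generators developed in the thesis:

```python
import math
import random

def gnp_naive(n, p, seed=0):
    """Toss a coin for each of the n*(n-1)/2 potential edges: Theta(n^2)."""
    rng = random.Random(seed)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

def gnp_skip(n, p, seed=0):
    """Batagelj-Brandes skipping: sample the geometric gap to the next
    edge directly, expected O(n + m) time. Assumes 0 < p < 1."""
    rng = random.Random(seed)
    edges = []
    lp = math.log(1.0 - p)
    v, w = 1, -1
    while v < n:
        w += 1 + int(math.log(1.0 - rng.random()) / lp)
        while w >= v and v < n:
            w -= v
            v += 1
        if v < n:
            edges.append((v, w))
    return edges

# Both draw from the same distribution; the edge counts are comparable.
print(len(gnp_naive(1000, 0.01)), len(gnp_skip(1000, 0.01)))
```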
This thesis is concerned with linear inverse problems as they arise in a multitude of applications. Such problems are typically ill-posed, above all with respect to stability: even the smallest measurement errors have enormous consequences for the reconstruction of the quantity of interest.
To enable a robust reconstruction, the problem must be regularized, that is, replaced by a whole family of modified, stable approximations. The concrete choice from this family, the so-called parameter choice strategy, then relies on additional ad hoc assumptions about the measurement error. Typically, in the deterministic case this is knowledge of an upper bound on the norm of the data error; in the stochastic case it is knowledge of the distribution of the error, or a restriction to a certain class of distributions, mostly Gaussian ones. This thesis investigates how this information can be obtained under the assumption that the measurement can be repeated. The data are averaged over several measurements that follow an arbitrary, unknown distribution, and the error bound that is indispensable for solving the problem is estimated. A classical regularization method is then applied to the mean and the estimator. The regularization methods considered are mostly filter-based methods, which rely on the spectral decomposition of the problem. As parameter choice strategies, both simple a priori choices and the discrepancy principle as an adaptive method are considered. Convergence is proven for unknown arbitrary error distributions with finite variance as well as for white noise (with respect to general discretizations). Finally, convergence of the discrepancy principle is shown for a stochastic gradient method, providing the first rigorous analysis of an adaptive stopping rule for such a non-filter-based regularization method.
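A schematic numpy sketch of the overall recipe (repeated measurements are averaged, the error level of the mean is estimated from the sample spread, and a filter-based regularization with the discrepancy principle is applied); the forward operator and noise model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Smoothing (hence ill-conditioned) forward operator A.
A = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0) / n
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y_true = A @ x_true

# Repeated measurements with unknown noise distribution; average them
# and estimate the error norm of the mean from the sample variance.
m = 20
Y = y_true[None, :] + 0.01 * rng.standard_normal((m, n))
y_bar = Y.mean(axis=0)
delta = np.sqrt(Y.var(axis=0, ddof=1).sum() / m)

def tikhonov(alpha):
    """Filter-based (Tikhonov) regularized solution for the mean data."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y_bar)

# Discrepancy principle: decrease alpha until the residual matches the
# estimated noise level (up to a safety factor tau > 1).
tau, alpha = 1.2, 1.0
while np.linalg.norm(A @ tikhonov(alpha) - y_bar) > tau * delta and alpha > 1e-12:
    alpha /= 2.0
print(f"alpha = {alpha:.2e}, "
      f"reconstruction error = {np.linalg.norm(tikhonov(alpha) - x_true):.3f}")
```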
This thesis is concerned with the theory-driven development of a digital tool called MathCityMap (MCM) for teaching and learning mathematics outside the classroom.
The project's starting point are so-called math trails: walking trails for discovering mathematical content at real objects in the environment. A didactical, methodical, and psychological analysis attests math trails numerous potentials for the learning process, such as the opportunity to gather first-hand experiences, to increase interest in mathematics, and to make learning active and constructive. Despite these advantages, it becomes clear that preparing and running math trails involves immense effort. A further challenge for learners lies in the open character of math trails, which are usually walked in autonomous small groups. It is known from the literature that weaker learners in particular risk being overwhelmed by the demands of working independently.
As a solution to these problems, this thesis describes the development of a digital tool for math trails. The first research question concerns the theoretical requirements for such a tool:
1. Which requirements must a digital tool meet in order to preserve the benefits of math trails, minimize their effort, and compensate for their risks?
Taking into account the theoretical foundations of digital tools and of mobile learning, possibilities to minimize the preparation effort are identified first. Concretely, automatic data processing, digital collaboration, and the sharing and reuse of digital tasks and trails appear to be theoretically effective components of MCM. Furthermore, didactically proven concepts, such as tiered hints and feedback, are to be employed to support learners in working through math trails on their own.
Against the background of these requirements, the development process and the description of the current state of the MCM system form central parts of this thesis. The system consists of two components for different target groups: the MCM web portal for creating math trails and the MCM app for walking them. The main goals of MCM are minimizing the preparation effort and compensating for the risk of overwhelming learners.
In first field trials, MCM was successfully tested with lower secondary school learners at an early stage. At the same time, however, it became apparent that the implemented feedback system had weaknesses and could be exploited by learners to systematically guess solutions. As a consequence, game elements (gamification), which are credited not only with increasing motivation but also with the potential to influence behaviour, became part of the MCM app. The second research question targets the effects of the gamification integration:
2. What influence do gamification elements have on ninth graders' motivation and on their usage of the digital tool when working on a math trail?
To answer the second research question, an empirical study with 16 school classes (304 students) of the ninth grade was conducted in summer 2017. The results can be summarized as follows: implementing a leaderboard in the MCM app did not lead to higher motivation, but the competition spurred participants on to work on many tasks. Compared to the control group without gamification elements, the experimental group solved significantly more tasks, covered twice the distance, and exploited the feedback system less often to guess solutions. The study empirically demonstrated the desired influence of game elements on the use of a digital tool for out-of-school mathematics learning.
The goals of MCM are evaluated indirectly by analysing the spread of the math trail idea without and with MCM. The third research question is accordingly:
3. What contribution has the digital tool made to the spread of the math trail idea after four years of project runtime?
To answer the third research question, scientific publications on math trails are analysed, distinguishing in particular between publications with and without the keyword "MathCityMap" in order to assess the influence of the MCM project on the scientific discourse. As of August 2020, every third math trail publication already contains a reference to MCM. Furthermore, a comparison is drawn to earlier, similar efforts, namely online participation projects for math trails: between 2000 and 2010, first websites for mathematical walking trails existed in the Anglo-American sphere, offering 131 math trails in total. In comparison, more than 2,500 MCM math trails already exist in 57 countries.
Both the publications and the number of created trails are first indications that MCM has succeeded in realizing a theoretical concept for a digital math trail tool and in spreading the math trail idea.
This thesis explores a variety of methods of text quantification applicable in the field of educational text technology. Besides the cohort of existing linguistic, lexical, syntactic, and semantic text quantification methods, additional methods based on Bidirectional Encoder Representations from Transformers (BERT) are introduced and analysed. The model developed in this thesis is tested on multilingual data composed of task descriptions used in the Test of Understanding in College Economics (TUCE). Quantitative features extracted from the raw textual data are analysed using an array of evaluation methods with the goal of finding the best predictors of the target variable: the rate of correct student responses in TUCE.
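A minimal sketch of how BERT-based features can be extracted from task descriptions with the Hugging Face transformers library; the model name and the mean pooling are illustrative assumptions, not necessarily the configuration used in the thesis:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Multilingual BERT, since the task descriptions span several languages.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def bert_features(texts):
    """Return one fixed-size vector per text (mean-pooled last layer)."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)    # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)

X = bert_features(["Compute the marginal cost.", "Define opportunity cost."])
print(X.shape)  # torch.Size([2, 768]); features for a downstream predictor
```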
In order to address security and privacy problems in practice, it is very important to have a solid elicitation of requirements, before trying to address the problem. In this thesis, specific challenges of the areas of social engineering, security management and privacy enhancing technologies are analyzed:
Social Engineering: An overview of existing tools usable for social engineering is provided and defenses against social engineering are analyzed. Serious games are proposed as a more pleasant way to raise employees’ awareness and to train them.
Security Management: Specific requirements for small and medium sized energy providers are analyzed and a set of tools to support them in assessing security risks and improving their security is proposed. Larger enterprises are supported by a method to collect security key performance indicators for different subsidiaries and with a risk assessment method for apps on mobile devices. Furthermore, a method to select a secure cloud provider – the currently most popular form of outsourcing – is provided.
Privacy Enhancing Technologies: Relevant factors for the users’ adoption of privacy enhancing technologies are identified and economic incentives and hindrances for companies are discussed. Privacy by design is applied to integrate privacy into the use cases e-commerce and internet of things.
Terms are often ambiguous. A "Bank" (in German) can be a financial institution or a bench to sit on, and the city of Frankfurt exists more than once. In many cases, humans can nevertheless distinguish them without difficulty. Computers are not yet able to perform this task with comparable accuracy.
The approach presented in this work builds on fastSense, which already achieves good results for German, and uses a neural network to disambiguate names and terms in English texts with the help of Wikipedia. An accuracy of up to 89.5% was reached on test data.
The developed Python module allows the trained model to be integrated into existing applications. The programs included in the module make it possible to train and test new models.
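A minimal sketch of the underlying framing, disambiguation as supervised classification of a term's context (a simple scikit-learn stand-in, not the fastSense-based network itself; the training sentences and sense labels are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training contexts for the ambiguous term "Frankfurt", labelled
# with the Wikipedia article they refer to.
contexts = [
    "the stock exchange in Frankfurt am Main opened higher",
    "the airport near Frankfurt handles millions of passengers",
    "Frankfurt an der Oder lies on the Polish border",
    "the river Oder passes the university town of Frankfurt",
]
senses = ["Frankfurt_am_Main", "Frankfurt_am_Main",
          "Frankfurt_(Oder)", "Frankfurt_(Oder)"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(contexts, senses)
print(clf.predict(["cheap flights from Frankfurt airport"]))
```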
Nowadays there is a multitude of annotated texts and other media, and likewise various ways of annotating new texts, both manually and automatically. There are systems that transform such annotations into other, visually more appealing media. Among them are the Text2Scene systems, which turn an annotated text into a three-dimensional scene. Some of these Text2Scene systems can also represent persons through models of humans, but so far no system exists that can synthesize avatar models itself.
The focus of this work is both on providing an interface with which avatars can be created with specific parameters, and on the possibility of displaying and editing these avatars in virtual reality. In a virtual scene, the properties of particular body parts can be adjusted and the avatars' clothing can be selected.
The $p$-adic section conjecture predicts that for a smooth, proper, hyperbolic curve $X$ over a $p$-adic field $k$, every section of the map of étale fundamental groups $\pi_1(X) \to G_k$ is induced by a unique $k$-rational point of $X$. While this conjecture is still open, the birational variant in which $X$ is replaced by its generic point is known due to Koenigsmann. Generalising an alternative proof of Pop, we extend this result to certain localisations of $X$ at a set of closed points $S$, an intermediate version in between the full section conjecture and its birational variant. As one application, we prove the section conjecture for $X_S$ whenever $S$ is a countable set of closed points.
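For context (standard background rather than a claim of the paper): for a geometrically connected variety $X$ over $k$, the étale fundamental group sits in the exact sequence
\[
1 \longrightarrow \pi_1(X_{\bar{k}}) \longrightarrow \pi_1(X) \longrightarrow G_k \longrightarrow 1,
\]
a section being a continuous splitting $s \colon G_k \to \pi_1(X)$. Every rational point $x \in X(k)$ induces such a section, well-defined up to conjugation by $\pi_1(X_{\bar{k}})$, and the section conjecture asserts that this assignment is a bijection onto conjugacy classes of sections.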