The emergence of digital networks can be traced back to the constant development and transformation of new information technologies.
This structural change leads to highly complex systems in many different areas of life.
There is therefore a growing need to examine and understand the essential underlying properties of real-world networks.
In this context, network analysis serves as a means of studying networks and represents observed structures using mathematical models.
Here, parameterizable random graphs are typically used to enable a systematic experimental evaluation of algorithms and data structures.
Given the ever-increasing amount of information, many aspects of network analysis are data-driven and rely on efficient algorithms for interpretation.
Algorithmic solutions must therefore carefully account for both the structural properties of the input and the particularities of the underlying machines executing them.
Accordingly, generating and analyzing massive networks is a challenging task in itself.
This thesis therefore offers algorithmic solutions for the generation and analysis of massive graphs.
To this end, we develop algorithms for generating graphs with prescribed vertex degrees, for computing connected components of massive graphs, and for certifying graph recognition on instances that exceed the size of main memory.
Our algorithms and implementations are efficient in practice for various machine models and provide sequential, shared-memory parallel, and/or I/O-efficient solutions.
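The thesis targets massive, possibly external-memory instances; as a small illustration of generating a graph with prescribed vertex degrees, one can sketch the classic Havel-Hakimi construction (a standard textbook technique, not necessarily the algorithm developed in the thesis):

```python
# Illustrative sketch only: the classic Havel-Hakimi construction for a
# graph realizing a prescribed degree sequence. The thesis concerns massive
# instances; this toy version runs entirely in main memory.

def havel_hakimi(degrees):
    """Return an edge list realizing the degree sequence, or None if
    the sequence is not graphical."""
    nodes = list(enumerate(degrees))
    edges = []
    while nodes:
        nodes.sort(key=lambda x: -x[1])     # highest remaining degree first
        v, d = nodes.pop(0)
        if d == 0:
            break                           # all demands satisfied
        if d > len(nodes):
            return None                     # not enough partners available
        for i in range(d):
            u, du = nodes[i]
            if du == 0:
                return None                 # would need an edge to a full node
            edges.append((v, u))
            nodes[i] = (u, du - 1)
    return edges

edges = havel_hakimi([3, 2, 2, 2, 1])
```

For the sequence `[3, 2, 2, 2, 1]` this yields a simple graph whose vertex degrees match the input exactly; an odd-sum or otherwise non-graphical sequence returns `None`.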
Antimicrobial resistant infections arise as a consequence of evolutionary mechanisms within microbes that protect them from the effects of antimicrobials. The frequent occurrence of resistant infections poses a global public health threat, as their control has remained challenging despite many efforts. The dynamics of such infections are driven by processes at multiple levels. For a long time, mathematical models have proved valuable for unravelling complex mechanisms in the dynamics of infections. In this thesis, we focus on mathematical approaches to modelling the development and spread of resistant infections at the between-host (population-wide) and within-host (individual) levels.
Within an individual host, switching between treatments has been identified as one of the methods that can be employed for the gradual eradication of resistant strains in the long term. With this as motivation, we study the problem using dynamical systems and notions from control theory. We present a model based on deterministic logistic differential equations which capture the general dynamics of microbial resistance inside an individual host. Fundamentally, this model describes the spread of resistant infections whilst accounting for evolutionary mutations observed in resistant pathogens and capturing them in mutation matrices. We extend this model to explore the implications of therapy switching from a control theoretic perspective by using switched systems and developing control strategies with the goal of reducing the appearance of drug resistant pathogens within the host.
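A minimal sketch of such a within-host model can be written down as logistic growth of a sensitive and a resistant strain coupled by a mutation matrix, with a per-strain drug kill rate. All parameter values and the simple forward-Euler scheme below are illustrative assumptions, not the system studied in the thesis:

```python
import numpy as np

# Hypothetical toy model: x = (sensitive, resistant) densities growing
# logistically toward a shared carrying capacity (normalized to 1), with a
# mutation matrix M coupling the strains and a drug-specific kill rate.

def simulate(drug_kill, x0=(0.9, 0.01), growth=(1.0, 0.8),
             mutation=1e-3, dt=0.01, steps=2000):
    r = np.array(growth)
    M = np.array([[1 - mutation, mutation],
                  [mutation, 1 - mutation]])
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        total = x.sum()
        # logistic growth with mutation, minus drug-induced death
        dx = (M @ (r * x)) * (1 - total) - np.array(drug_kill) * x
        x = np.maximum(x + dt * dx, 0.0)
    return x

# A drug that kills the sensitive strain strongly but the resistant one weakly:
x_end = simulate(drug_kill=(1.5, 0.05))
```

Under constant therapy the resistant strain takes over, which is exactly the effect that motivates the switching strategies developed in the thesis.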
At the between-host level, we use compartmental models to describe the transmission of infection between multiple individuals in a population. In particular, we make a case study of the evolution and spread of the novel coronavirus (SARS-CoV-2) pandemic. So far, vaccination remains a critical component in the eventual solution to this public health crisis. However, as with many other pathogens, vaccine resistant variants of the virus have been a major concern in control efforts by governments and all stakeholders. Using network theory, we investigate the spread and transmission of the disease on social networks by compartmentalising and studying the progression of the disease in each compartment, considering both the original virus strain and one of its highly transmissible vaccine-resistant mutant strains. We investigate these dynamics in the presence of vaccinations and other interventions. Although vaccinations are of absolute importance during viral outbreaks, resistant variants coupled with population hesitancy towards vaccination can lead to further spread of the virus.
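A hedged sketch of such a two-strain compartmental model (all rates hypothetical; the thesis's network-based model is richer) can be written with a vaccinated compartment that remains susceptible to the vaccine-resistant mutant:

```python
# Illustrative two-strain SIR-type model with vaccination. Compartments:
# S susceptible, V vaccinated, I1 infected with wild type, I2 infected with
# the vaccine-resistant mutant, R recovered. Parameters are assumptions.

def step(state, beta1=0.3, beta2=0.35, gamma=0.1, nu=0.01, dt=1.0):
    S, V, I1, I2, R = state
    new_v = nu * S                     # vaccination flow S -> V
    inf1 = beta1 * S * I1              # wild type infects only S
    inf2 = beta2 * (S + V) * I2        # mutant infects S and V alike
    dS = -inf1 - beta2 * S * I2 - new_v
    dV = new_v - beta2 * V * I2
    dI1 = inf1 - gamma * I1
    dI2 = inf2 - gamma * I2
    dR = gamma * (I1 + I2)
    return tuple(x + dt * d for x, d in zip(state, (dS, dV, dI1, dI2, dR)))

state = (0.98, 0.0, 0.01, 0.01, 0.0)
for _ in range(300):
    state = step(state)
```

Because the mutant spreads through both S and V, vaccination alone does not stop the outbreak in this toy version, mirroring the abstract's point about resistant variants and hesitancy.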
We thoroughly study the properties of conically stable polynomials and imaginary projections. A multivariate complex polynomial is called stable if it is nonzero whenever all coordinates of the respective argument have a positive imaginary part. In this dissertation we consider the generalized notion of K-stability. A multivariate complex polynomial is called K-stable if it is nonzero whenever the imaginary part of the respective argument lies in the relative interior of the cone K. We study connections to various other objects, including imaginary projections as well as preservers and combinatorial criteria for conically stable polynomials.
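Restated in symbols (a paraphrase of the definitions above; here K denotes a proper convex cone in $\mathbb{R}^n$):

```latex
% classical stability: the positive-orthant case
f \in \mathbb{C}[z_1,\dots,z_n] \text{ is \emph{stable}}
  \iff \bigl( \operatorname{Im} z_j > 0 \text{ for all } j
       \implies f(z) \neq 0 \bigr).

% conic generalization, for a proper convex cone K \subseteq \mathbb{R}^n:
f \text{ is \emph{$K$-stable}}
  \iff \bigl( \operatorname{Im} z \in \operatorname{relint} K
       \implies f(z) \neq 0 \bigr).
```

Taking K to be the positive orthant recovers the classical notion of stability as a special case of K-stability.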
People can describe spatial scenes with language and, vice versa, create images based on linguistic descriptions. However, current systems do not even come close to matching the complexity of humans when it comes to reconstructing a scene from a given text. Even the ever-advancing development of better and better Transformer-based models has not been able to achieve this so far. This task, the automatic generation of a 3D scene based on an input text, is called text-to-3D scene generation. The key challenges, and focus of this dissertation, relate to the following topics:
(a) Analyses of how well current language models understand spatial information, how static embeddings compare, and whether they can be improved by anaphora resolution.
(b) Automated resource generation for context expansion and grounding that can help in the creation of realistic scenes.
(c) Creation of a VR-based text-to-3D scene system that can be used as an annotation and active-learning environment, but can also be easily extended in a modular way with additional features to solve more contexts in the future.
(d) Analysis of existing practices and tools for digital and virtual teaching, learning, and collaboration, as well as the conditions and strategies in the context of VR.
In the first part of this work, we could show that static word embeddings do not benefit significantly from pronoun substitution. We explain this result by the loss of contextual information, the reduction in the relative occurrence of rare words, and the absence of pronouns to be substituted. However, we were able to show that both static and contextualizing language models appear to encode object knowledge, but require a sophisticated apparatus to retrieve it. The models themselves, in combination with the measures, differ greatly in the amount of knowledge they allow one to extract.
Classifier-based variants perform significantly better than the unsupervised methods from bias research, but this is also due to overfitting. The resources generated for this evaluation are later also an important component of point three.
In the second part, we present AffordanceUPT, a modularization of UPT trained on the HICO-DET dataset, which we have extended with Gibsonian/telic annotations. We then show that AffordanceUPT can effectively make the Gibsonian/telic distinction and that the model learns other correlations in the data to make such distinctions (e.g., the presence of hands in the image) that have important implications for grounding images to language.
The third part first presents a VR project to support spatial annotation with IsoSpace. The direct spatial visualization and the immediate interaction with the 3D objects are intended to make labeling more intuitive and thus easier. The project is later incorporated as part of the Semantic Scene Builder (SeSB). The project itself in turn relies on the Text2SceneVR presented here for generating spatial hypertext, which in turn is based on the VAnnotatoR. Finally, we introduce the Semantic Scene Builder (SeSB), a VR-based text-to-3D scene framework using the Semantic Annotation Framework (SemAF) as a scheme for annotating semantic relations. It integrates a wide range of tools and resources by utilizing SemAF and UIMA as a unified data structure to generate 3D scenes from textual descriptions, and it also supports annotations. When evaluating SeSB against another state-of-the-art tool, we found that our approach not only performed better but also allowed a wider variety of scenes to be modeled. The final part reviews existing practices and tools for digital and virtual teaching, learning, and collaboration, as well as the conditions and strategies needed to make the most of technological opportunities in the future.
In the human brain, the incoming light to the retina is transformed into meaningful representations that allow us to interact with the world. In a similar vein, the RGB pixel values are transformed by a deep neural network (DNN) into meaningful representations relevant to solving a computer vision task it was trained for. Therefore, in my research, I aim to reveal insights into the visual representations in the human visual cortex and DNNs solving vision tasks.
In the previous decade, DNNs have emerged as the state-of-the-art models for predicting neural responses in the human and monkey visual cortex. Research has shown that training on a task related to a brain region’s function leads to better predictivity than a randomly initialized network. Based on this observation, we proposed that we can use DNNs trained on different computer vision tasks to identify functional mapping of the human visual cortex.
To validate our proposed idea, we first investigate a brain region, the occipital place area (OPA), using DNNs trained on a scene parsing task and a scene classification task. From previous investigations of OPA's functions, we knew that it encodes navigational affordances, which require spatial information about the scene. We therefore hypothesized that OPA's representation should be closer to a scene parsing model than to a scene classification model, as the scene parsing task explicitly requires spatial information about the scene. Our results showed that scene parsing models had representations closer to OPA than scene classification models, thus validating our approach.
We then selected multiple DNNs covering a wide range of computer vision tasks, from low-level tasks such as edge detection, to 3D tasks such as surface normal estimation, to semantic tasks such as semantic segmentation. We compared the representations of these DNNs with all the regions in the visual cortex, thus revealing the functional representations of different regions of the visual cortex. Our results converged strongly with previous investigations of these brain regions, validating the feasibility of the proposed approach for finding functional representations of the human brain. Our results also provided new insights into underinvestigated brain regions that can serve as starting hypotheses and promote further investigation into those brain regions.
We applied the same approach to find representational insights about the DNNs. A DNN usually consists of multiple layers, with each layer performing a computation leading to the final layer that performs prediction for a given task. Training on different tasks could lead to very different representations. Therefore, we first investigate at which stage the representations of DNNs trained on different tasks start to differ. We further investigate whether DNNs trained on similar tasks have similar representations, and whether DNNs trained on dissimilar tasks have more dissimilar representations. We selected the same set of DNNs used in the previous work, trained on the Taskonomy dataset on a diverse range of 2D, 3D and semantic tasks. Then, given a DNN trained on a particular task, we compared the representation of multiple layers to corresponding layers in other DNNs. From this analysis, we aimed to reveal where in the network architecture task-specific representation is prominent. We found that task specificity increases as we go deeper into the DNN architecture and that similar tasks start to cluster in groups. We found that the grouping obtained from representational similarity was highly correlated with grouping based on transfer learning, thus creating an interesting application of the approach to model selection in transfer learning.
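One widely used measure for comparing layer activations of two networks is linear CKA (centered kernel alignment); the thesis may use a different measure, so the following is only an illustrative sketch of the general idea of layer-wise representation comparison:

```python
import numpy as np

# Illustrative sketch: linear CKA, a common similarity measure between two
# activation matrices recorded for the same inputs. It is invariant to
# orthogonal transformations and isotropic scaling of the feature spaces.

def linear_cka(X, Y):
    """X, Y: (samples, features) activation matrices for the same stimuli."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") *
                   np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
acts = rng.standard_normal((100, 64))                       # layer A activations
q = np.linalg.qr(rng.standard_normal((64, 64)))[0]          # random rotation
rotated = acts @ q                                          # same representation, rotated
```

By construction, `linear_cka(acts, rotated)` is 1, while activations of an unrelated layer score much lower; comparing such scores across corresponding layers of task-specific DNNs is the kind of analysis described above.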
During previous works, several new measures were introduced to compare DNN representations. We therefore identified the commonalities between different measures and unified them into a single framework referred to as duality diagram similarity. This work opens up new possibilities for similarity measures to understand DNN representations. While demonstrating a much higher correlation with transfer learning than previous state-of-the-art measures, we extend it to understanding layer-wise representations of models trained on the ImageNet and Places datasets using different tasks and demonstrate its applicability to layer selection for transfer learning.
In all the previous works, we used the task-specific DNN representations to understand the representations in the human visual cortex and other DNNs. We were able to interpret our findings in terms of computer vision tasks such as edge detection, semantic segmentation, depth estimation, etc.; however, we were not able to map the representations to human interpretable concepts. Therefore, in our most recent work, we developed a new method that associates individual artificial neurons with human interpretable concepts.
Overall, the works in this thesis revealed new insights into the representation of the visual cortex and DNNs...
In this thesis, we cover two intimately related objects in combinatorics, namely random constraint satisfaction problems and random matrices. First we solve a classic constraint satisfaction problem, 2-SAT, using the graph structure and a message passing algorithm called Belief Propagation. We also explore another message passing algorithm called Warning Propagation and prove a useful result that can be employed to analyze various types of random graphs. In particular, we use Warning Propagation to study a Bernoulli sparse parity matrix and reveal a unique phase transition regarding replica symmetry. Lastly, we use variational methods and a version of the local limit theorem to prove a sufficient condition for a general random matrix to be of full rank.
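The thesis analyzes random 2-SAT via message passing; for orientation, the graph structure it exploits can be illustrated by the classic implication-graph characterization of 2-SAT (Aspvall, Plass and Tarjan), which is standard material and not the thesis's own algorithm: each clause (a or b) yields implications (not a implies b) and (not b implies a), and the formula is unsatisfiable exactly when some variable shares a strongly connected component with its negation.

```python
# Illustrative sketch: 2-SAT via strongly connected components of the
# implication graph (Kosaraju's algorithm). Literals are nonzero integers;
# -v denotes the negation of variable v.

def two_sat(n, clauses):
    lits = list(range(1, n + 1)) + [-v for v in range(1, n + 1)]
    adj = {l: [] for l in lits}    # implication graph
    radj = {l: [] for l in lits}   # its reverse
    for a, b in clauses:
        adj[-a].append(b); radj[b].append(-a)   # not a  implies  b
        adj[-b].append(a); radj[a].append(-b)   # not b  implies  a

    seen, order = set(), []

    def dfs(u, g, out):            # iterative DFS recording finish order
        stack = [(u, iter(g[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            advanced = False
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g[w])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)

    for l in lits:                 # pass 1: finish order on the graph
        if l not in seen:
            dfs(l, adj, order)
    comp, seen = {}, set()
    for i, l in enumerate(reversed(order)):   # pass 2: SCCs on the reverse
        if l not in seen:
            scc = []
            dfs(l, radj, scc)
            for x in scc:
                comp[x] = i
    # satisfiable iff no variable sits in the same SCC as its negation
    return all(comp[v] != comp[-v] for v in range(1, n + 1))
```

For instance, `two_sat(2, [(1, 2), (-1, 2)])` is satisfiable, while `two_sat(1, [(1, 1), (-1, -1)])` forces both x and not x and is not.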
The starting point of this research is the use of gestures in mathematical interactions between learners. The study examines to what extent gestures are part of the mathematical negotiation process. The central research concern is thus the reconstruction of a potentially subject-specific meaning of gesture use in mathematics learning.
The work is theoretically framed by findings from psychological-linguistic gesture research on the systematic description of gesture in interplay with simultaneously uttered speech (McNeill, 1992; Kendon, 2004). Selected research on gesture in mathematics learning is also examined (Arzarello, 2006; Wille, 2020; Kiesow, 2016). The interaction theory of mathematics education underpins the social-constructivist notion of learning (Krummheuer, 1992). Selected aspects of C. S. Peirce's semiotics provide a theoretical foundation for the notion of the sign and for the core of mathematical activity, understood as diagrammatic work (Peirce, 1931, CP 1.54 and 1932, CP 2.228).
Of particular importance for the present research is Fricke's (2007, 2012) linguistic approach of code integration and code manifestation of speech-accompanying gestures in the language system, in connection with Peirce's notion of the diagram. This perspective provides a theoretical foundation for the initially empirically observable multimodality of learners' modes of expression when doing mathematics together. Peirce's notion of the diagram serves here to reconstruct a systemic relevance of gestures for doing mathematics: certain gestures can be described semiotically as mathematical signs and potentially have a constitutive function for the learners' diagrammatic work. The overarching research focus is: How do primary school pupils use gesture and speech, especially in their interplay, to introduce their mathematical ideas into the interactive negotiation process and to take them up over the course of the interaction, possibly developing them further or discarding them? In its differentiation, the study focuses on the function of the gestures used and on the reconstruction of gestures potentially shared by the interacting parties.
Methodologically, the research belongs to qualitative social research (Bohnsack, 2008) and to interpretative classroom research in mathematics education (Krummheuer & Naujok, 1999). Examples from mathematical interaction situations are analysed in which pairs of second graders work on a mathematical problem from combinatorics and geometry. A transcript score developed specifically in line with the theory is used to process the video data. Two hierarchically interlocking analytical procedures are applied: text-based interaction analysis (Krummheuer, 1992) and graphically oriented semiotic analysis (Schreiber, 2010) in a further development of semiotic process maps (Huth, 2014).
Central research results are 1) the functional and formal flexibility of gesture use in the learners' diagrammatic work, 2) the reconstruction of mode interfaces of gestures with other modes of expression in terms of function, interactional attribution of meaning and chronology, and 3) the frequent use of gestures as the learners' mode of choice in mathematical interactions. Gestures are immediately available without preconditions, exhibit functional and formal flexibility in mathematical engagement, and can (temporarily) take over functions of other modes. A constitutive and subject-specific significance of gesture for the learners' mathematical-diagrammatic activity becomes apparent. From this, the thesis finally develops the double continuum of gestures for mathematics learning. Along the dimension of the function of gesture use and the dimension of the object reference of the gesture form, it shows the diversity of gesture functions in the learners' joint diagrammatic work and provides insight into the gesture forms used.
The research reveals the need to take gestures into account in the didactic planning and design of mathematics lessons and in researching and diagnosing learners' mathematical development. Gestures in mathematical interactions are not mere accessories to the utterance but a subject-relevant mode with respect to mathematics learning. The use of gesture enables diagrams to be created in an instant and opens up prospects for researching their significance for mathematical teaching-learning processes.
The literature cited in this summary can be found in the bibliography of the submitted thesis.
The main task of modern large experiments with heavy ions, such as CBM (FAIR), STAR (BNL) and ALICE (CERN), is a detailed study of the phase diagram of quantum chromodynamics (QCD) in the quark-gluon plasma (QGP), the equation of state of matter at extremely high baryonic densities, and the transition from the hadronic phase of matter to the quark-gluon phase.
In the thesis, the missing mass method is developed for the reconstruction of short-lived particles with neutral particles in their decay products, as well as its implementation in the form of fast algorithms and a set of software for practical application in heavy ion physics experiments. Mathematical procedures implementing the method were developed and implemented within the KF Particle Finder package for the future CBM (FAIR) experiment and subsequently adapted and applied for processing and analysis of real data in the STAR (BNL) experiment.
The KF Particle Finder package is designed to reconstruct most signal particles from the physics program of the CBM experiment, including strange particles, strange resonances, hypernuclei, light vector mesons, charm particles and charmonium. The package includes searches for over a hundred decays of short-lived particles. This makes the KF Particle Finder a universal platform for short-lived particle reconstruction and physics analysis both online and offline.
The missing mass method has been proposed to reconstruct decays of short-lived charged particles when one of the daughter particles is neutral and is not registered in the detector system. The implementation of the missing mass method was integrated into the KF Particle Finder package to search for 18 decays with a neutral daughter particle.
Like all other algorithms of the KF Particle Finder package, the missing mass method is implemented with extensive use of vector (SIMD) instructions and is optimized for parallel operation on modern many-core high performance computer clusters, which can include both processors and coprocessors. A set of algorithms implementing the method was tested on computers with tens of cores and showed high speed and practically linear scalability with respect to the number of cores involved.
It is extremely important, especially for the initial stage of the CBM experiment, which is planned for 2025, to demonstrate already now on real data the reliability of the developed approach, as well as the high efficiency of the current implementation of both the entire KF Particle Finder package and its integral part, the missing mass method. Such an opportunity was provided by the FAIR Phase-0 program, motivating the use in the STAR experiment of software packages originally developed for the CBM experiment.
Application of the method to real data of the STAR experiment shows very good results with a high signal-to-background ratio and a large significance value. The results demonstrate the reliability and high efficiency of the missing mass method in the reconstruction of both charged mother particles and their neutral daughter particles. Being an integral part of the KF Particle Finder package, now the main approach for reconstruction and analysis of short-lived particles in the STAR experiment, the missing mass method will continue to be used for the physics analysis in online and offline modes.
The high quality of the results of the express data analysis has led to their status as preliminary physics results with the right to present them at international physics conferences and meetings on behalf of the STAR Collaboration.
The anan project is a tool for debugging distributed high-performance computers. The novelty of the contribution is that well-known methods, already used successfully for debugging software and hardware, have been transferred to high-performance computing. As part of the present work, a tool named anan was implemented that assists in debugging. It can also be used as a more dynamic form of monitoring. Both use cases have been tested.
The tool consists of two parts:
1. a part named anan, which the user operates interactively,
2. and a part named anand, which automatically collects the requested measurements and executes commands where necessary.
The anand part runs sensors (small pattern-driven algorithms) whose results are merged by anan. To a first approximation, anan can be described as a monitoring system that (1) can be reconfigured quickly and (2) can measure more complex values going beyond correlations of simple time series.
In this thesis we discuss the group Out(Gal_K) of outer automorphisms of the absolute Galois group Gal_K of a p-adic number field K. Using results about the mapping class group of a surface S, as well as a result by Jannsen--Wingberg on the structure of the absolute Galois group Gal_K, we construct a large subgroup of Out(Gal_K) arising as images of certain Dehn twists on S.
This thesis is concerned with the study of symmetry breaking phenomena for several different semilinear partial differential equations. Roughly speaking, this encompasses equations whose symmetries are not necessarily inherited by their solutions, which is particularly interesting for ground state solutions.
Reactive oxygen species are a class of naturally occurring, highly reactive molecules that change the structure and function of macromolecules. This can often lead to irreversible intracellular damage. Conversely, they can also cause reversible changes through post-translational modification of proteins which are utilized in the cell for signaling. Most of these modifications occur on specific cysteines. Which structural and physicochemical features contribute to the sensitivity of cysteines to redox modification is currently unclear. Here, I investigated the influence of protein structural and sequence features on the modifiability of proteins and specific cysteines therein using statistical and machine learning methods. I found several strong structural predictors for redox modification, such as a higher accessibility to the cytosol and a high number of positively charged amino acids in the close vicinity. I detected a high frequency of other post-translational modifications, such as phosphorylation and ubiquitination, near modified cysteines. Distribution of secondary structure elements appears to play a major role in the modifiability of proteins. Utilizing these features, I created models to predict the presence of redox modifiable cysteines in proteins, including human mitochondrial complex I, NKG2E natural killer cell receptors and proximal tubule cell proteins, and compared some of these predictions to earlier experimental results.
This thesis concerns three specific constraint satisfaction problems: the k-SAT problem, random linear equations and the Potts model. We investigate a phenomenon called replica symmetry, its consequences and its limitations. For the $k$-SAT problem, we show that replica symmetry holds up to a threshold $d^{*}$. However, after another critical threshold $d^{**}$, replica symmetry can no longer hold, which enables us to establish the existence of a replica symmetry breaking region. For the random linear problem, a peculiar phenomenon occurs: a more robust version of replica symmetry (strong replica symmetry) holds up to the threshold $d=e$ and ceases to hold thereafter. This phenomenon is linked to the fact that below the threshold $d=e$ the fraction of frozen variables, i.e. variables forced to take the same value in all solutions, is concentrated around a deterministic value, whereas for $d>e$ it vacillates between two values with equal probability. Lastly, for the Potts model, we show that a phenomenon called metastability occurs; it can be understood as a consequence of a trivial replica symmetry breaking scheme. This metastability further yields slow mixing results for two famous Markov chains, the Glauber and the Swendsen-Wang dynamics.
The relevant field of interest in High Energy Physics experiments is shifting to searching for and studying extremely rare particles and phenomena. The search for rare probes requires increasing the available statistics by raising the particle interaction rate. The structure of the events also becomes more complicated, the multiplicity of particles in each event increases, and pileup appears. Due to technical limitations, such a data flow becomes impossible to store fully on available storage devices. The solution to the problem is the correct triggering of events and real-time data processing.
In this work, the issue of accelerating and improving the algorithms for reconstruction of the charged particles' trajectories based on the Cellular Automaton in the STAR experiment is considered, with the aim of deploying them for real-time track reconstruction within the High-Level Trigger. This is an important step in the preparation of the CBM experiment as part of the FAIR Phase-0 program. The study of online data processing methods in real conditions at similar interaction energies allows us to study this process and determine the possible weaknesses of the approach.
Two versions of the Cellular Automaton based track reconstruction are discussed, which are used depending on the features of the detector systems. The HFT~CA Track Finder, similar to the tracking algorithm of the CBM experiment, has been accelerated by several hundred times, using both algorithm optimization and data-level parallelism. The TPC~CA Track Finder has been upgraded to improve the reconstruction quality while maintaining high calculation speed. The algorithm was tuned to work with the new iTPC geometry and extended with an additional module for very low momentum track reconstruction.
The improved track reconstruction algorithm for the TPC detector in the STAR experiment was included in the HLT reconstruction chain and successfully tested in the express production for the online real data analysis. This made it possible to obtain important physical results during the experiment runtime without the full offline data processing. The tracker is also being prepared for integration into a standard offline data processing chain, after which it will become the basic track search algorithm in the STAR experiment.
Monte Carlo methods: barrier option pricing with stable Greeks and multilevel Monte Carlo learning
(2021)
For discretely observed barrier options, there exists no closed-form solution under the Black-Scholes model. Thus, it is often helpful to use Monte Carlo simulations, which are easily adapted to these models. However, as presented above, the discontinuous payoff may lead to instability in the option's sensitivities for Monte Carlo algorithms.
This thesis presents a new Monte Carlo algorithm that can calculate the pathwise sensitivities for discretely monitored barrier options. The idea is based on Glasserman and Staum's one-step survival strategy and the results of Alm et al., with which we can stably determine the option's sensitivities such as Delta and Vega by finite-differences. The basic idea of Glasserman and Staum is to use a truncated normal distribution, which excludes the values above the barrier (e.g.\ for knock-up-out options), instead of sampling from the full normal distribution. This approach avoids the discontinuity generated by any Monte Carlo path crossing the barrier and yields a Lipschitz-continuous payoff function.
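The one-step survival idea can be sketched for a discretely monitored up-and-out call under Black-Scholes: at each monitoring date, weight the path by its one-step survival probability and sample the next log-return from the normal distribution truncated below the barrier. All parameter values below are hypothetical, and this is only a minimal illustration of the Glasserman-Staum mechanism, not the thesis's extended algorithm:

```python
import math
import random
from statistics import NormalDist

def osm_barrier_call(s0, strike, barrier, r, sigma, T, n_obs, n_paths, seed=0):
    """One-step-survival Monte Carlo price of a discretely monitored
    up-and-out call under Black-Scholes (illustrative sketch)."""
    nd = NormalDist()
    rng = random.Random(seed)
    dt = T / n_obs
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, weight = float(s0), 1.0
        for _ in range(n_obs):
            # z* such that s * exp(drift + vol * z) < barrier  iff  z < z*
            z_star = (math.log(barrier / s) - drift) / vol
            p = nd.cdf(z_star)               # one-step survival probability
            weight *= p                      # carry the knock-out probability
            u = max(rng.random(), 1e-12)
            z = nd.inv_cdf(u * p)            # truncated-normal sample, z < z*
            s *= math.exp(drift + vol * z)
        total += weight * max(s - strike, 0.0)
    return math.exp(-r * T) * total / n_paths

price = osm_barrier_call(100.0, 100.0, 120.0, 0.02, 0.2, 1.0, 12, 20000)
```

Because no path ever crosses the barrier, the estimator has no knock-out discontinuity, which is what makes finite-difference Greeks stable.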
The new part is the development of an extended algorithm that estimates the sensitivities directly, without simulation at multiple parameter values as in finite differences.
Consider the local volatility model, which is a generalisation of the Black-Scholes model. Although standard Monte Carlo algorithms work well for the pricing of continuously monitored barrier options within this model, they often do not behave stably with respect to numerical differentiation.
To bypass this problem, one would generally either resort to regularised differentiation schemes or derive an algorithm for precise differentiation. Unfortunately, while the widespread solution of using a Brownian bridge approach leads to accurate first derivatives, they are not Lipschitz-continuous. This leads to instability with respect to numerical differentiation for second-order Greeks.
To alleviate this problem, i.e. to produce Lipschitz-continuous first-order derivatives, and to reduce variance, we generalise the idea of one-step survival to general scalar stochastic differential equations. This approach leads to the new one-step survival Brownian bridge approximation, which allows for stable second-order Greeks calculations.
To demonstrate the new approach's numerical efficiency, we present a corresponding Monte Carlo pathwise sensitivity estimator for first-order Greeks and study different methods to compute second-order Greeks stably. Finally, we develop a one-step survival Brownian bridge multilevel Monte Carlo algorithm to reduce the computational cost in practice.
This thesis proves unbiasedness and variance reduction of the new one-step survival version relative to the classical Brownian bridge approach. Furthermore, we present a new convergence result for the Brownian bridge approach using the Milstein scheme under certain conditions. Together, these properties imply convergence of the new one-step survival Brownian bridge approach.
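For orientation, the multilevel Monte Carlo structure referred to here is Giles's telescoping decomposition E[P_L] = E[P_0] + Σ_l E[P_l - P_{l-1}], with coarse and fine paths coupled through the same Brownian increments. The sketch below prices a plain European call (not the barrier/bridge variant of the thesis); names are illustrative and the per-level sample counts are fixed rather than optimised:

```python
import math
import random

def mlmc_estimate(levels, n_per_level, s0=100.0, strike=100.0,
                  r=0.05, sigma=0.2, T=1.0, seed=0):
    """Telescoping MLMC estimator for a European call under GBM (Euler).
    Level l uses 2**l time steps; the coarse path at level l reuses the
    fine path's normals by summing them pairwise."""
    rng = random.Random(seed)

    def payoff(s):
        return math.exp(-r * T) * max(s - strike, 0.0)

    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l          # fine steps at level l
        dtf = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            sf, sc, zc = s0, s0, 0.0
            for k in range(nf):
                z = rng.gauss(0.0, 1.0)
                sf *= 1.0 + r * dtf + sigma * math.sqrt(dtf) * z
                zc += z
                if l > 0 and k % 2 == 1:
                    # one coarse step per two fine steps, same Brownian increment
                    dtc = 2.0 * dtf
                    sc *= 1.0 + r * dtc + sigma * math.sqrt(dtf) * zc
                    zc = 0.0
            acc += payoff(sf) - (payoff(sc) if l > 0 else 0.0)
        total += acc / n_per_level
    return total
```

In the full MLMC method the number of samples per level is chosen from the observed level variances; the point of the coupling is that Var[P_l - P_{l-1}] decays with l, so higher levels need far fewer samples.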
In recent years, deep learning has become pervasive in various fields. As a family of machine learning methods, it is used in a broad range of applications such as image processing, voice recognition, email filtering, and computer vision. Most modern deep learning algorithms are based on artificial neural networks inspired by the biological neural networks constituting animal brains. Deep learning can also be of use in computational finance: when no closed-form solution is available for an option price, Monte Carlo simulation is essential for its estimation. Instead of repeatedly recomputing prices whenever, for instance, the volatility term is updated, one could replace these computations by evaluating a neural network.
If a suitable neural network is available, its evaluation can lead to substantial savings and be highly efficient: once trained, a neural network saves further expensive estimations. In practice, however, the challenge lies in the training process of the neural network.
We study and compare the computational complexity of two generic neural network training algorithms. We then introduce a new multilevel training algorithm that combines a deep learning algorithm with the idea of multilevel Monte Carlo path simulation: several neural networks are trained on data computed from the so-called level estimators of the multilevel Monte Carlo approach introduced by Giles. We show that the new method can reduce computational complexity by formulating a complexity theorem.
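The multilevel training idea can be illustrated with a toy surrogate: one model per level is fit to samples of the level estimator P_l - P_{l-1} as a function of a model parameter (here the volatility), and the price surrogate is the telescoping sum of the per-level models. In this sketch a lookup table with linear interpolation stands in for each neural network, and all names are my own; the thesis trains actual deep networks.

```python
import math
import random

def sample_level_diff(sigma, level, rng, s0=100.0, strike=100.0, T=1.0):
    """One coupled sample of P_l - P_{l-1} for a European call under
    driftless GBM (Euler scheme; level l uses 2**l time steps)."""
    nf = 2 ** level
    dt = T / nf
    sf, sc, zc = s0, s0, 0.0
    for k in range(nf):
        z = rng.gauss(0.0, 1.0)
        sf *= 1.0 + sigma * math.sqrt(dt) * z
        zc += z
        if level > 0 and k % 2 == 1:  # coarse step reuses the fine normals
            sc *= 1.0 + sigma * math.sqrt(dt) * zc
            zc = 0.0
    fine = max(sf - strike, 0.0)
    coarse = max(sc - strike, 0.0) if level > 0 else 0.0
    return fine - coarse

def train_multilevel_surrogate(sigmas, max_level, n_train, seed=0):
    """'Train' one surrogate per level on level-estimator data; a lookup
    table with linear interpolation stands in for a neural network."""
    rng = random.Random(seed)
    tables = []
    for l in range(max_level + 1):
        n = n_train // 2 ** l or 1  # fewer samples suffice on higher levels
        tables.append({s: sum(sample_level_diff(s, l, rng) for _ in range(n)) / n
                       for s in sigmas})

    def interp(tab, x):
        xs = sorted(tab)
        if x <= xs[0]:
            return tab[xs[0]]
        if x >= xs[-1]:
            return tab[xs[-1]]
        for a, b in zip(xs, xs[1:]):
            if a <= x <= b:
                t = (x - a) / (b - a)
                return (1 - t) * tab[a] + t * tab[b]

    def price(sigma):  # telescoping sum of the per-level surrogates
        return sum(interp(tab, sigma) for tab in tables)
    return price
```

Once the per-level models are fit, evaluating the surrogate at a new volatility costs no further simulation, which is the saving the paragraph above describes.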
The thesis is composed of four Chapters.
In the first Chapter, the boundary expression of the one-sided shape derivative of nonlocal Sobolev best constants is derived. As a simple consequence, we obtain the fractional version of the so-called Hadamard formula for the torsional rigidity and the first Dirichlet eigenvalue. An application to the optimal obstacle placement problem for the torsional rigidity and the first eigenvalue of the fractional Laplacian is given.
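For orientation, the fractional Laplacian referred to throughout is the singular-integral operator below (a standard definition; the normalisation constant follows the usual convention), shown alongside the classical Hadamard formula that the fractional result parallels:

```latex
% standard definition of the fractional Laplacian, s in (0,1)
\[
  (-\Delta)^s u(x)
  = c_{N,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^N}
    \frac{u(x)-u(y)}{|x-y|^{N+2s}}\,dy .
\]
% classical (local) Hadamard formula for the first Dirichlet eigenvalue
% under a deformation field V; the fractional analogue derived in the
% thesis replaces the normal derivative \partial_\nu u by a suitably
% weighted boundary trace of u
\[
  \partial\lambda_1(\Omega)[V]
  = -\int_{\partial\Omega} (\partial_\nu u)^2\, V\cdot\nu \,d\sigma .
\]
```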
In the second Chapter, we introduce and prove a new maximum principle for doubly antisymmetric functions. The latter can be seen as the first step towards studying the optimal obstacle placement problem for the second fractional eigenvalue. Using the new maximum principle we derive new symmetry results for odd solutions to semilinear Dirichlet boundary value problems with Lipschitz nonlinearity.
In the third Chapter, we derive a new integration-by-parts formula for the fractional Laplace operator with a general globally Lipschitz vector field; in particular, we obtain a new Pohozaev-type identity generalizing the one obtained by X. Ros-Oton and J. Serra. As an application, we obtain nonexistence results for semilinear Dirichlet boundary problems in bounded domains that are not necessarily starshaped.
In the last Chapter, we study symmetry properties of second eigenfunctions on annuli. Using results from the first Chapter and the maximum principle from Chapter 2, we extend the result on the optimal obstacle placement problem from the first eigenvalue to the second eigenvalue.
We present new results on nonlocal Dirichlet problems established by means of suitable spectral-theoretic and variational methods, taking care of the nonlocal nature of the operators. We mainly address four topics. First, we estimate the Morse index of radially symmetric sign-changing bounded weak solutions to a semilinear Dirichlet problem involving the fractional Laplacian; in particular, we prove a conjecture due to Bañuelos and Kulczycki on the geometric structure of second Dirichlet eigenfunctions. Secondly, we study a small-order asymptotics with respect to the parameter s of the Dirichlet eigenvalue problem for the fractional Laplacian. Thirdly, we deal with the logarithmic Schrödinger operator; in particular, we provide an alternative derivation of the singular integral representation corresponding to the associated Fourier symbol and introduce tools and a functional analytic framework for variational studies. Finally, we study nonlocal operators of order strictly below one; in particular, we investigate interior regularity properties of weak solutions to the associated Poisson problem depending on the regularity of the right-hand side.
We consider algorithms for strategic communication with commitment power between two rational, self-interested parties. A party with commitment power commits to a strategy, announces it publicly, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) can observe it directly. The second party (R), however, makes a decision by choosing one of n actions whose types are unknown to R. The type determines the possibly different, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian persuasion and delegated search.
In Bayesian persuasion, S has commitment power: S commits to a signaling scheme φ and communicates it to R. The scheme specifies which signal S sends in which situation; only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowing φ allows R to update its beliefs about the state of the world based on the received signals. S must take this into account when designing φ, since R will not follow recommendations that favor S at R's expense. We take the perspective of S and describe signaling schemes that guarantee S the largest possible utility.
We first consider the offline case, in which S learns the complete state of the world and then sends a signal to R. We study a scenario with a bounded number of k ≤ n signals; with only k signals, S can recommend at most k different actions. For various symmetric instances, we give a polynomial-time algorithm that computes an optimal signaling scheme with k signals.
We further consider a subclass of instances in which the types are drawn from known, independent distributions. We give polynomial-time algorithms that compute a signaling scheme with k signals guaranteeing a constant-factor approximation of the optimal signaling scheme with k signals.
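As a warm-up for the obedience constraint that shapes such schemes, consider the classic two-state persuasion example of Kamenica and Gentzkow (not the thesis's k-signal algorithm; the function and variable names below are illustrative): S always wants acceptance, while R accepts only if the posterior probability of the good state is at least 1/2.

```python
def optimal_binary_scheme(mu):
    """Optimal signaling in the classic two-state example: prior mu on
    the good state, sender wants 'accept' always, receiver accepts iff
    the posterior P(good) >= 1/2. Returns the probabilities of sending
    'accept' in the good and bad states, and the sender's utility."""
    if mu >= 0.5:
        return 1.0, 1.0, 1.0  # always recommending 'accept' is obedient
    # send 'accept' with probability 1 in the good state and with
    # probability q in the bad state, chosen so the posterior after
    # 'accept' is exactly the obedience threshold 1/2
    q = mu / (1.0 - mu)
    p_accept = mu * 1.0 + (1.0 - mu) * q  # sender's utility = 2 * mu
    return 1.0, q, p_accept
```

The example shows the design tension the abstract describes: S mixes in just enough honesty that following the recommendation remains in R's interest.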
In the online case, the action types are revealed one by one in rounds. After seeing the current action, S sends a signal, and R must respond immediately by accepting or rejecting the action. The process ends once an action is accepted; otherwise, the next action type is revealed, and previous actions can no longer be chosen. As a benchmark for our online signaling schemes, we use the best offline signaling scheme.
We first consider a scenario with independent distributions and show how to compute an optimal signaling scheme in polynomial time. However, there are examples in which S, unlike in the offline case, cannot achieve any positive value online. We then consider a subclass of instances for which a simple signaling scheme guarantees a constant approximation factor, and we show its optimality.
In addition, we consider 16 different scenarios with different levels of information for S and R and different objective functions for S and R, under the assumption that the action types are a priori unknown but revealed in uniformly random order. For 14 cases, we describe signaling schemes with constant approximation factors; no such schemes exist for the remaining two cases. For most cases, we additionally show that the stated approximation guarantees are optimal.
In the second part, we consider an online variant of delegated search, in which R now holds the commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ, which specifies for each type the probability with which R accepts it. S then tries to find an action whose type is good for S itself and is accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it; only proposed actions can be selected by R.
For the offline case with identically distributed action types, constant approximation factors relative to an action of optimal value for R are known. We show that in the online case, R can in general only achieve a Θ(1/n)-approximation. The benchmark is R's expected value in a one-dimensional online search.
Since this bound requires an exponential discrepancy in the types' values for S, we consider parameterized instances, where the parameters bound the values for S or the ratio of the values for R and S. We show (nearly) optimal logarithmic approximation factors with respect to these parameters, guaranteed by efficiently computable schemes.
Machine learning (ML) techniques have evolved rapidly in recent years and have shown impressive capabilities in feature extraction, pattern recognition, and causal inference. There has been increasing attention to applying ML to medical applications such as medical diagnosis, drug discovery, personalized medicine, and numerous other medical problems. ML-based methods have the advantage of processing vast amounts of data.
With an ever-increasing amount of medical data being collected and large inter-subject variability in that data, automated data processing pipelines are highly desirable, since relying solely on human processing is laborious, expensive, and error-prone. ML methods have the potential to uncover interesting patterns, unravel correlations between complex features, learn patient-specific representations, and make accurate predictions. Motivated by these promising aspects, in this thesis, I present studies in which I implemented deep neural networks for the early diagnosis of epilepsy based on electroencephalography (EEG) data and for brain tumor detection based on magnetic resonance spectroscopy (MRS) data.
In the project on the early diagnosis of epilepsy, we deal with one of the most common neurological disorders, epilepsy, which is characterized by recurrent unprovoked seizures. It can be triggered by a variety of initial brain injuries and manifests itself after a time window called the latent period. During this period, a cascade of structural and functional brain alterations takes place, leading to an increased seizure susceptibility.
The development and extension of brain tissue capable of generating spontaneous seizures is defined as epileptogenesis (EPG).
Detecting the presence of EPG provides a precious opportunity for targeted early medical interventions and can thus slow down or even halt the disease progression. In order to study brain signals in this latent window, animal epilepsy models are used to provide valuable data, as it is extremely difficult to obtain such data from human patients. The aim of this study is to discover biomarkers of EPG using animal models and then to find their counterparts in human patients' data. However, the EEG features of EPG are not well understood, and there is no sufficiently large amount of annotated data for ML-based algorithms. To approach this problem, I first utilized the timestamp information of EEG recorded from an animal epilepsy model in which epilepsy is induced by electrical stimulation; the timestamp serves as a form of weak supervision, i.e., before versus after the stimulation. Secondly, I implemented a deep residual neural network and trained it on a binary classification task to distinguish the EEG signals from these two phases. After obtaining high discriminative ability on the binary classification task, I proposed to further divide the time span after the stimulation for a three-class classification, aiming to detect possible stages of the progression of the latent EPG phase. I have shown that the model can distinguish EEG signals at different stages of EPG with high accuracy and generalization ability. I have also demonstrated that some of the features learned by the network are clinically relevant.
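The key building block of the residual networks mentioned above is the skip connection, which lets each block learn a correction to the identity and keeps deep networks trainable. A minimal single-channel 1-D sketch in plain Python (the thesis model is a learned multi-channel network; these helper names are my own):

```python
def relu(v):
    """Elementwise rectified linear unit."""
    return [x if x > 0.0 else 0.0 for x in v]

def conv1d_same(x, kernel):
    """1-D convolution with zero padding, output length == input length."""
    k = len(kernel)
    pad = k // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j] for j in range(k)) for i in range(len(x))]

def residual_block(x, kernel1, kernel2):
    """One 1-D residual block: out = ReLU(conv2(ReLU(conv1(x))) + x).
    The skip connection adds the input back before the final ReLU, so
    the block learns a residual correction to the identity mapping."""
    h = relu(conv1d_same(x, kernel1))
    h = conv1d_same(h, kernel2)
    return relu([a + b for a, b in zip(h, x)])
```

Stacking many such blocks over raw EEG segments, followed by pooling and a classification head, gives the overall architecture family used for the binary and three-class tasks.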
In the task of detecting brain tumors from MRS data, I first applied a deep neural network to MRS data collected from over 400 patients for a binary classification task. To combat the challenge of noisy labeling, I developed a distillation step that filters out relatively "cleanly" labeled samples. A mixing-based data augmentation method was also implemented to expand the size of the training set. All experiments were conducted with a leave-patient-out scheme to ensure the generalization ability of the model. Averaged across all leave-patient-out cross-validation sets, the proposed method performed on par with human neuroradiologists while outperforming other baseline methods. I demonstrated the distillation effect on the MNIST data set with manually introduced label noise, and I visualized the input's influence on the final classification through a class activation map method.
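Mixing-based augmentation of the kind described above can be sketched as a convex combination of two samples and their label distributions, in the spirit of mixup; the exact scheme in the thesis may differ, and the names below are illustrative:

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2, rng=random):
    """Mixing-based augmentation: convex-combine two feature vectors and
    their one-hot labels with a Beta(alpha, alpha)-distributed weight.
    Returns one synthetic training sample (x, y)."""
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

Because the mixed label stays a probability distribution, the synthetic samples can be fed to the same cross-entropy loss as the real ones, effectively enlarging a small training set.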
Moreover, I proposed to aggregate information at the subject level, which can provide additional information and insight. This is inspired by the concept of multiple instance learning, where instance-level labels are not required and which is more tolerant of noisy labeling. I generate data bags consisting of instances from each patient and propose two modules to ensure permutation invariance: an attention module and a pooling module. I compared the performance of the network in different settings, i.e., with and without permutation-invariant modules, with and without data augmentation, and single-instance-based versus multiple-instance-based learning, and showed that neural networks equipped with the proposed attention or pooling modules can outperform human experts.
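The attention module's role can be illustrated with a minimal permutation-invariant pooling step: per-instance scores are softmax-normalised and used to weight the instance features, so the bag-level representation is independent of instance order. In the thesis the scoring is a learned network; here a fixed scoring vector stands in for it, and all names are my own:

```python
import math

def attention_pool(instances, w_score):
    """Permutation-invariant attention pooling over a patient's bag of
    instances: softmax over per-instance scores, then an attention-
    weighted sum of the instance feature vectors."""
    # one scalar score per instance (dot product with the scoring vector)
    scores = [sum(w * f for w, f in zip(w_score, inst)) for inst in instances]
    # numerically stable softmax over the bag
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    # attention-weighted sum of instance features
    dim = len(instances[0])
    pooled = [sum(a * inst[j] for a, inst in zip(attn, instances))
              for j in range(dim)]
    return pooled, attn
```

Since both the softmax and the weighted sum run over the whole bag, reordering the instances leaves the pooled representation unchanged, which is exactly the permutation invariance the modules are designed to guarantee.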