The results of this thesis lie in the area of convex algebraic geometry, which is the intersection of real algebraic geometry, convex geometry, and optimization.
We study sums of nonnegative circuit polynomials (SONC) and their related cone, both geometrically and in application to polynomial optimization. SONC polynomials are certain sparse polynomials having a special structure in terms of their Newton polytopes and supports, and serve as a certificate of nonnegativity for real polynomials, which is independent of sums of squares.
The first part of this thesis is dedicated to the convex geometric study of the SONC cone. As main results we show that the SONC cone is full-dimensional in the cone of nonnegative polynomials, we exactly determine the number of zeros of a nonnegative circuit polynomial, and we give a complete and explicit characterization of the number of zeros of SONC polynomials and forms. Moreover, we provide a first approach to the study of the exposed faces of the SONC cone and their dimensions.
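The nonnegativity certificate behind circuit polynomials can be made concrete via the circuit number Theta_f = prod_j (c_j / lambda_j)^(lambda_j), where the c_j are the outer (vertex) coefficients and the lambda_j are the barycentric coordinates of the inner exponent: for a negative inner coefficient d, the circuit polynomial is nonnegative if and only if |d| <= Theta_f. A minimal sketch (the Motzkin polynomial is a standard example; the helper name is ours):

```python
import math

def circuit_number(cs, lambdas):
    """Theta_f = prod_j (c_j / lambda_j)**lambda_j for a circuit polynomial
    with outer coefficients cs and barycentric coordinates lambdas of the
    inner exponent."""
    return math.prod((c / lam) ** lam for c, lam in zip(cs, lambdas))

# Motzkin polynomial x^4 y^2 + x^2 y^4 + 1 - 3 x^2 y^2:
# vertices (4,2), (2,4), (0,0); inner exponent (2,2) with lambda = (1/3, 1/3, 1/3)
theta = circuit_number([1.0, 1.0, 1.0], [1 / 3, 1 / 3, 1 / 3])
assert abs(theta - 3.0) < 1e-9
# the inner coefficient d = -3 satisfies |d| <= Theta_f, so Motzkin's
# polynomial is nonnegative -- and it sits exactly on the boundary
assert abs(-3.0) <= theta + 1e-9
```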
In the second part of the thesis we use SONC polynomials to tackle constrained polynomial optimization problems (CPOPs).
As a first step, we derive a lower bound for the optimal value of a CPOP based on SONC polynomials by using a single convex optimization program, which is a geometric program (GP) under certain assumptions. GPs are a special class of convex optimization problems that can be solved in polynomial time. We test the new method experimentally and provide examples comparing our new SONC/GP approach with Lasserre's relaxation, a common approach for tackling CPOPs, which approximates nonnegative polynomials via sums of squares and semidefinite programming (SDP). The new approach comes with the benefit that in practice GPs can be solved significantly faster than SDPs. Furthermore, increasing the degree of a given problem has almost no effect on the runtime of the new program, which is in sharp contrast to SDPs.
As a second step, we establish a hierarchy of efficiently computable lower bounds converging to the optimal value of CPOP based on SONC polynomials. For a given degree each bound is computable by a relative entropy program. This program is also a convex optimization program, which is more general than a geometric program, but still efficiently solvable via interior point methods.
In this thesis we introduce the imaginary projection of (multivariate) polynomials as the projection of their variety onto its imaginary part, I(f) = { Im(z_1, ..., z_n) : f(z_1, ..., z_n) = 0 }. This induces a geometric viewpoint on stability, since a polynomial f is stable if and only if its imaginary projection does not intersect the positive orthant. Accordingly, the thesis is mainly motivated by the theory of stable polynomials.
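In the univariate case the imaginary projection is simply the set of imaginary parts of the roots, so the stability criterion is easy to check numerically. A small sketch under that reading (the numpy-based helpers are ours, not the thesis's machinery):

```python
import numpy as np

def imaginary_projection_1d(coeffs):
    """For univariate f, I(f) is the set of imaginary parts of its roots."""
    return np.imag(np.roots(coeffs))

def is_stable_1d(coeffs, tol=1e-9):
    """f is stable iff I(f) avoids the positive orthant, i.e. no root
    lies in the open upper half-plane."""
    return bool(np.all(imaginary_projection_1d(coeffs) <= tol))

# z^2 - 1 has real roots {+1, -1}: I(f) = {0}, so f is stable
assert is_stable_1d([1, 0, -1])
# z^2 + 1 has roots +i, -i: I(f) = {+1, -1} meets (0, inf), so f is not stable
assert not is_stable_1d([1, 0, 1])
```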
Interested in the number and structure of components of the complement of imaginary projections, we show as a key result that there are only finitely many components which are all convex. This offers a connection to the theory of amoebas and coamoebas as well as to the theory of hyperbolic polynomials.
For hyperbolic polynomials, we show that hyperbolicity cones coincide with components of the complement of imaginary projections, which provides a strong structural relationship between these two sets. Based on this, we prove a tight upper bound for the number of hyperbolicity cones and, respectively, for the number of components of the complement in the case of homogeneous polynomials. Beside this, we investigate various aspects of imaginary projections and compute imaginary projections of several classes explicitly.
Finally, we initiate the study of a conic generalization of stability by considering polynomials whose roots have no imaginary part in the interior of a given real, n-dimensional, proper cone K. This appears to be very natural, since many statements known for univariate and multivariate stable polynomials can be transferred to the conic situation, like the Hermite-Biehler Theorem and the Hermite-Kakeya-Obreschkoff Theorem. When considering K to be the cone of positive semidefinite matrices, we prove a criterion for conic stability of determinantal polynomials.
Antimicrobial resistance has become a serious threat to worldwide public health in this century. A better understanding of the mechanisms by which bacteria infect host cells, and of how the host counteracts the invading pathogens, is an important subject of current research. Intracellular bacteria of the genus Salmonella have frequently been used as a model system for bacterial infections. Salmonella are ingested via contaminated food or water and cause gastroenteritis and typhoid fever in animals and humans. Once inside the gastrointestinal tract, Salmonella can invade intestinal epithelial cells. The host cell can fight intracellular pathogens through a process called xenophagy. For complex systems, such as the processes involved in the bacterial infection of cells, computational systems biology provides approaches to describe mathematically how these intertwined mechanisms function in the cell. Computational systems biology allows the analysis of biological systems at different levels of abstraction; functional dependencies as well as dynamic behavior can be studied. In this thesis, we used the Petri net formalism to gain better insight into bacterial infections and host defense mechanisms and to predict cellular behavior that can be tested experimentally. We also focused on the development of new computational methods.
In this work, the first mathematical model of the xenophagic capturing of Salmonella enterica serovar Typhimurium in epithelial cells was developed. The model, expressed in the Petri net formalism, was constructed in an iterative cycle of modeling and analysis. For model verification, we analyzed the Petri net, including the computational execution of knockout experiments, termed in silico knockouts, established in this work. The in silico knockouts of the proposed Petri net are consistent with published experimental perturbation studies and thus support the biological credibility of the Petri net. In silico knockouts that have not yet been investigated experimentally provide hypotheses for future studies of the pathway.
To study the dynamic behavior of an epithelial cell infected with Salmonella enterica serovar Typhimurium, a stochastic Petri net was constructed. In experimental research, a question such as "Which incubation time is needed to infect half of the epithelial cells with Salmonella?" is usually answered based on experience or practicability. A mathematical model can help to answer such questions and improve experimental design. The stochastic Petri net models the cell at different stages of the Salmonella infection. We parameterized the model with a set of experimental data derived from different literature sources. The kinetic parameters of the stochastic Petri net determine the time evolution of the bacterial infection of a cell. The model captures the stochastic variation and heterogeneity of the intracellular Salmonella population of a single cell over time. The stochastic Petri net is a valuable tool for examining the dynamics of Salmonella infections in epithelial cells and for generating information useful for experimental design.
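The role of the kinetic parameters can be illustrated with a minimal stochastic simulation in the Gillespie style: a toy birth-death model of the intracellular population with replication and xenophagic clearance as the only events (rates and model structure here are illustrative, not the thesis's parameterized net):

```python
import random

def gillespie_infection(n0=1, k_rep=0.3, k_clear=0.1, t_end=10.0, seed=1):
    """Gillespie simulation of a toy intracellular bacterial population:
    replication fires at rate k_rep * n, clearance at rate k_clear * n.
    Returns the sampled trajectory as (time, population) pairs."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while t < t_end and n > 0:
        a_rep, a_clear = k_rep * n, k_clear * n
        # time to next event is exponential with the total propensity
        t += rng.expovariate(a_rep + a_clear)
        # choose the event proportionally to its propensity
        if rng.random() < a_rep / (a_rep + a_clear):
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory

traj = gillespie_infection()
assert traj[0] == (0.0, 1)
assert all(n >= 0 for _, n in traj)
```

Re-running with different seeds exposes exactly the single-cell heterogeneity the abstract describes: some trajectories die out, others grow.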
In the last part of this thesis, a novel theoretical method was introduced to perform knockout experiments in silico. The new concept of in silico knockouts is based on the computation of signal flows at steady state and allows the determination of knockout behavior that is comparable to experimental perturbation behavior. In this context, we established the concept of Manatee invariants and demonstrated their suitability for in silico knockouts by reflecting biological dependencies from signal initiation to response. As a proof of principle, we applied the proposed concept of in silico knockouts to the Petri net of the xenophagic recognition of Salmonella. To make in silico knockouts available to the scientific community, we implemented the novel method in the software isiKnock. isiKnock allows the automated execution and visualization of in silico knockouts in signaling pathways expressed in the Petri net formalism. In conclusion, the knockout analysis provides a valuable method to verify computational models of signaling pathways, to detect inconsistencies in the current knowledge of a pathway, and to predict unknown pathway behavior.
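The core idea of an in silico knockout can be sketched in a few lines: remove one transition from the net and check which output places can still be produced from the inputs. The toy signaling net below is hypothetical and deliberately tiny, not the thesis's Salmonella model or the Manatee-invariant machinery:

```python
# Toy signalling net: each transition has a set of pre-places it consumes
# and a set of post-places it produces.
NET = {
    "t_receptor": ({"signal"}, {"kinase_active"}),
    "t_cascade":  ({"kinase_active"}, {"tf_active"}),
    "t_response": ({"tf_active"}, {"response"}),
}

def producible_places(net, initial, knockout=None):
    """Forward closure of places under the net, with one transition
    optionally knocked out (removed)."""
    places = set(initial)
    changed = True
    while changed:
        changed = False
        for name, (pre, post) in net.items():
            if name == knockout:
                continue
            if pre <= places and not post <= places:
                places |= post
                changed = True
    return places

# intact net: the signal propagates all the way to the response
assert "response" in producible_places(NET, {"signal"})
# knocking out the cascade step cuts the response off
assert "response" not in producible_places(NET, {"signal"}, knockout="t_cascade")
```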
In summary, the main contributions of this thesis are the Petri net of the xenophagic capturing of Salmonella enterica serovar Typhimurium in epithelial cells to study the knockout behavior, and the stochastic Petri net of an epithelial cell infected with Salmonella enterica serovar Typhimurium to analyze the infection dynamics. Moreover, we established a new method for in silico knockouts, including the concept of Manatee invariants and the software isiKnock. The results of these studies contribute to a better understanding of bacterial infections and provide valuable model analysis techniques for the field of computational systems biology.
This thesis is about random constraint satisfaction problems (rCSP): random instances of classical problems in NP. In the literature, the study of rCSP involves identifying and locating phase transition phenomena as well as investigating algorithmic questions.
Recently, some ingenious, though mathematically non-rigorous, theories from statistical physics have given the study of rCSP a new perspective; the so-called Cavity Method makes some very impressive predictions about the most fundamental properties of rCSP.
In this thesis, we investigate the soundness of some of the most basic predictions of the Cavity Method, mainly regarding the structure of the so-called Gibbs distribution in various rCSP models. Furthermore, we study some fundamental algorithmic problems related to rCSP. This includes both analyzing well-known dynamical processes (dynamics), such as the Glauber dynamics and the Metropolis process, and proposing new algorithmic approaches to some natural problems related to rCSP.
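As an illustration of such dynamics, here is a single step of Glauber dynamics for proper q-colorings of a graph, a standard rCSP example (the sketch is the textbook update rule, not a specific algorithm from the thesis):

```python
import random

def glauber_coloring_step(adj, coloring, q, rng):
    """One Glauber update for proper q-colorings: pick a uniformly random
    vertex and resample its color uniformly among the colors not used by
    any neighbour. If no color is allowed, the vertex keeps its color."""
    v = rng.randrange(len(adj))
    forbidden = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(q) if c not in forbidden]
    if allowed:
        coloring[v] = rng.choice(allowed)
    return coloring

# path on 3 vertices, 3 colors: the chain stays on proper colorings
adj = [[1], [0, 2], [1]]
rng = random.Random(0)
coloring = [0, 1, 0]
for _ in range(100):
    coloring = glauber_coloring_step(adj, coloring, 3, rng)
    assert all(coloring[u] != coloring[v] for u in range(3) for v in adj[u])
```

Run long enough (and with enough colors relative to the maximum degree), this chain samples from the uniform — i.e. Gibbs — distribution over proper colorings, which is exactly the object whose structure the Cavity Method makes predictions about.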
This thesis deals with stemmatology, i.e. primarily the reconstruction of the copying history of documents transmitted in manuscript form. The central object of stemmatology is the stemma, a visual representation of the copying history, usually modeled graph-theoretically as a tree or a directed acyclic graph, in which the nodes represent text witnesses (i.e. the text variants) while the edges stand for individual copying processes. At the center of this branch of scholarship stands the question of the author's original (if a single such original ever existed) and the reconstruction of its text. The stemma itself is a means to this main end (Cameron 1987). The original text, increasingly altered by the deviations characteristic of manual copying, is usually not directly preserved. The aim of this thesis is to describe semi-automatic stemmatology comprehensively and to advance it through tools and analytical methods. The first part of the thesis describes the history of computer-assisted stemmatology, including its classical precursors, and culminates in the presentation of a simple tool for the dynamic graphical display of stemmata. An excursus on the key philological phenomenon of lectio difficilior discusses its possible psycholinguistic causes in the faster lexical access to high-frequency lexemes. The second part then examines the most existential of all stemmatological debates, initiated by Joseph Bédier, with mathematical arguments based on a stemmatic model proposed by Paul Maas in 1937. Furthermore, in this chapter the author simulates stemmata in order to estimate the potential influence of the distribution of copying frequencies per manuscript.
In the next part, the author presents a self-compiled corpus in the Persian language, which is examined qualitatively alongside three of the well-known artificial corpora (Parzival, Notre Besoin, Heinrichi). Subsequently, the Multi Modal Distance, a method of stemma generation based on external data of psycholinguistically determined letter-confusion probabilities, is applied. In the final part, the author works with minimum spanning trees for stemma generation; a comparative study combining four methods of distance-matrix generation with four methods of stemma generation is conducted, evaluated, and discussed.
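The minimum-spanning-tree step can be sketched in a few lines: given a pairwise distance matrix between witnesses, Prim's algorithm yields an undirected tree that can serve as a crude stemma hypothesis (the witnesses and distances below are invented for illustration):

```python
def minimum_spanning_tree(dist):
    """Prim's algorithm on a symmetric distance matrix; returns the MST
    as a list of (u, v) edges, with u already in the growing tree."""
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        u, v = min(((u, v) for u in in_tree for v in range(n)
                    if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges

# three witnesses: 0 and 1 are close, 2 is closest to 1
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]
assert minimum_spanning_tree(dist) == [(0, 1), (1, 2)]
```

The resulting tree is unrooted and undirected; turning it into a genuine stemma still requires orienting the edges, which is where the philological methods compared in the thesis come in.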
A lot of software systems today need to make real-time decisions to optimize an objective of interest. This could be maximizing the click-through rate of an ad displayed on a web page or profit for an online trading software. The performance of these systems is crucial for the parties involved. Although great progress has been made over the years in understanding such online systems and devising efficient algorithms, a fine-grained analysis and problem specific solutions are often missing. This dissertation focuses on two such specific problems: bandit learning and pricing in gross-substitutes markets.
Bandit learning problems are a prominent class of sequential learning problems with several real-world applications. The classical algorithms proposed for these problems, although optimal in a theoretical sense, often tend to overlook model-specific properties. With this as our motivation, we explore several sequential learning models and give efficient algorithms for them. Our approaches, inspired by several classical works, incorporate the model-specific properties to derive better performance bounds.
The second part of the thesis investigates an important class of price update strategies in static markets. Specifically, we investigate the effectiveness of these strategies in terms of the total revenue generated by the sellers and the convergence of the resulting dynamics to market equilibrium. We further extend this study to a class of dynamic markets. Interestingly, in contrast to most prior works on this topic, we demonstrate that these price update dynamics may be interpreted as resulting from revenue optimizing actions of the sellers. No such interpretation was known previously. As a part of this investigation, we also study some specialized forms of no-regret dynamics and prediction techniques for supply estimation. These approaches based on learning algorithms are shown to be particularly effective in dynamic markets.
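The flavor of such price-update strategies can be conveyed with a classical multiplicative tatonnement rule: each seller raises the price when demand exceeds supply and lowers it otherwise. The demand function and step size below are illustrative, not the thesis's market model:

```python
def tatonnement(demand, supply, p0, eps=0.05, iters=500):
    """Multiplicative price updates: p_i <- p_i * (1 + eps * excess_i),
    where excess_i is the relative excess demand for good i.
    demand(p) returns the per-good demand at price vector p."""
    p = list(p0)
    for _ in range(iters):
        d = demand(p)
        p = [pi * (1 + eps * (di - si) / max(si, 1e-9))
             for pi, di, si in zip(p, d, supply)]
    return p

# one good with unit-elastic demand 10/p and supply 2:
# the market-clearing price is 10/2 = 5
p = tatonnement(lambda p: [10 / p[0]], [2], [1.0])
assert abs(p[0] - 5.0) < 1e-2
```

For gross-substitutes markets such dynamics are well known to converge to equilibrium; the thesis's contribution includes interpreting updates of this kind as revenue-optimizing actions of the sellers.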
Terms are often ambiguous. The German word "Bank" can denote a financial institution or a bench, and the city of Frankfurt exists more than once. Nevertheless, humans can in many cases distinguish them without difficulty. Computers are not yet able to perform this task with comparable accuracy.
The approach presented in this thesis builds on fastSense, which already achieves good results for German, and uses a neural network to disambiguate names and terms in English texts with the help of Wikipedia. An accuracy of up to 89.5% was achieved on test data.
The developed Python module allows the trained model to be integrated into existing applications. The programs included in the module make it possible to train and test new models.
Deep learning and isolation based security for intrusion detection and prevention in grid computing
(2018)
The use of distributed computational resources for the solution of scientific problems that require highly intensive data processing is a fundamental mechanism for modern scientific collaborations. The Worldwide LHC Computing Grid (WLCG) is one of the most important examples of a distributed infrastructure for scientific projects and one of the pioneering examples of grid computing. The WLCG is the global grid that analyzes data from the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), with 170 sites in 40 countries and more than 600,000 processing cores. The grid service providers grant users access to resources that they can utilize on demand for the execution of custom software applications used for the analysis of data. The code that users can execute is completely flexible, and commonly there are no significant restrictions. This flexibility and the availability of immense computing power increase the security challenges of these environments. Attackers are a concern for grid administrators: they may request the execution of software with malicious code that gives them the possibility of compromising the underlying institutions' infrastructure. Grid systems need security countermeasures that keep user code running without allowing access to critical components, while still retaining flexibility. The administrators of grid systems also need to continuously monitor the activities that the applications are carrying out. An analysis of these activities is necessary to detect possible security issues, to identify ongoing incidents, and to perform autonomous responses. The size and complexity of grid systems make manual security monitoring and response expensive and complicated for human analysts.
Legacy intrusion detection and prevention systems (IDPS) such as Snort and OSSEC are traditionally used for security incident monitoring in the grid, cloud, clusters and standalone systems. However, IDPS are limited due to the use of hardcoded fixed rules that need to be updated continuously to cope with different threats.
This thesis introduces an architecture for improving security in grid computing. The architecture integrates security by isolation, behavior monitoring, and deep learning (DL) for the classification of real-time traces of the running user payloads, also known as grid jobs. The first component of the proposal, Linux containers (LCs), is used to provide isolation between grid jobs and to gather specific traceable information about the behavior of individual jobs. LCs offer a safe environment for the execution of arbitrary user scripts or binaries, protecting the sensitive components of the grid member organizations. Containers are a software sandboxing technique and a lightweight alternative to technologies such as virtual machines (VMs), which usually implement full machine-level emulation and can therefore significantly affect performance. This performance loss is commonly unacceptable in high-throughput computing scenarios. Containers enable the collection of monitoring information from the processes running inside them. The data collected via LC monitoring is used to feed a DL-based IDPS.
DL methods can acquire knowledge from experience, which eliminates the need for operators to formally specify all the knowledge that a system requires. These methods can improve IDPS by building models that are utilized to detect security incidents automatically, having the ability to generalize to new classes of issues. DL can produce lower false positive rates for intrusion detection, but also provides a measure of false negatives, which can be improved with new training data. Convolutional neural networks (CNNs) are utilized for the distinction between regular and malicious job classes. A set of samples is collected from regular production grid jobs from the grid infrastructure of “A Large Ion Collider Experiment” (ALICE) and malicious Linux binaries from a malware research website. The features extracted from these samples are utilized for the training and validation of the machine learning (ML) models. The utilization of a generative approach to enhance the required training data is also proposed. Recurrent neural networks (RNN) are used as generative models for the simulation of training data that complements and improves the real collected dataset. This data augmentation strategy is useful to supplement the lack of training data in ML processes.
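A simple way to turn such behavioral traces into model inputs is a bag-of-n-grams featurization of the system-call sequence. This is a common baseline encoding for trace-based intrusion detection, shown here as an illustration rather than the exact input representation used for the CNNs in this work:

```python
from collections import Counter

def syscall_ngrams(trace, n=2):
    """Count the sliding n-grams of a syscall trace; the resulting sparse
    vector can feed a classifier."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

# hypothetical trace from a benign job
benign = ["open", "read", "read", "close"]
feats = syscall_ngrams(benign)
assert feats[("open", "read")] == 1
assert feats[("read", "read")] == 1
assert feats[("read", "close")] == 1
```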
...