510 Mathematik
Refine
Year of publication
Document Type
- Doctoral Thesis (50)
- Article (35)
- Master's Thesis (5)
- Contribution to a Periodical (2)
- Diploma Thesis (1)
- Habilitation (1)
Has Fulltext
- yes (94)
Is part of the Bibliography
- no (94)
Keywords
- ALICE (2)
- FPGA (2)
- MathCityMap (2)
- Positive polynomials (2)
- Sums of arithmetic-geometric exponentials (2)
- mathematics education (2)
- 11N45 (1)
- 14N10 (secondary) (1)
- 2-SAT (1)
- 30F30 (1)
Institute
- Informatik und Mathematik (94)
We consider a linear ill-posed equation in the Hilbert space setting. Multiple independent unbiased measurements of the right-hand side are available. A natural approach is to take the average of the measurements as an approximation of the right-hand side and to estimate the data error as the inverse of the square root of the number of measurements. We calculate the optimal convergence rate (as the number of measurements tends to infinity) under classical source conditions and introduce a modified discrepancy principle, which asymptotically attains this rate.
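The averaging step described above can be illustrated numerically. The following is a minimal sketch (assuming Gaussian noise; the function name and parameters are illustrative, not taken from the paper) showing that the root-mean-square error of the average of n unbiased measurements decays like 1/√n, which is the data-error estimate driving the modified discrepancy principle:

```python
import random
import statistics

def average_measurement_error(true_value, noise_sd, n_measurements, n_trials=2000, seed=0):
    """Empirical root-mean-square error of the average of n unbiased,
    independent Gaussian measurements of a fixed right-hand side."""
    rng = random.Random(seed)
    sq_errors = []
    for _ in range(n_trials):
        avg = statistics.fmean(rng.gauss(true_value, noise_sd) for _ in range(n_measurements))
        sq_errors.append((avg - true_value) ** 2)
    return statistics.fmean(sq_errors) ** 0.5

# the error level of the averaged data behaves like noise_sd / sqrt(n)
e1 = average_measurement_error(1.0, 0.5, 1)
e100 = average_measurement_error(1.0, 0.5, 100)
```

With 100 measurements the empirical error level drops by roughly a factor of ten, consistent with the inverse-square-root estimate.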
We show explicit formulas for the evaluation of (possibly higher-order) fractional Laplacians (−Δ)^s of some functions supported on ellipsoids. In particular, we derive the explicit expression of the torsion function and give examples of s-harmonic functions. As an application, we infer that the weak maximum principle fails in eccentric ellipsoids for s ∈ (1, √3 + 3/2) in any dimension n ≥ 2. We build a counterexample in terms of the torsion function times a polynomial of degree 2. Using point inversion transformations, it follows that a variety of bounded and unbounded domains do not satisfy positivity preserving properties either, and we give some examples.
A convex body is unconditional if it is symmetric with respect to reflections in all coordinate hyperplanes. We investigate unconditional lattice polytopes with respect to geometric, combinatorial, and algebraic properties. In particular, we characterize unconditional reflexive polytopes in terms of perfect graphs. As a prime example, we study the signed Birkhoff polytope. Moreover, we derive constructions for Gale-dual pairs of polytopes and we explicitly describe Gröbner bases for unconditional reflexive polytopes coming from partially ordered sets.
Adaptive, synchronous, and mobile online education: developing the ASYMPTOTE learning environment
(2022)
The COVID-19-induced distance education was perceived as highly challenging by teachers and students. A cross-national comparison of five European countries identified several challenges that occurred during the distance learning period. On this basis, the article aims to develop a theoretical framework and design requirements for distance and online learning tools. As one example of online learning in mathematics education, the ASYMPTOTE system is introduced. It will be freely available by May 2022. ASYMPTOTE is aimed at the adaptive and synchronous delivery of online education by taking a mobile learning approach. Its core is the so-called digital classroom, which not only allows students to interact with each other or with the teacher but also enables teachers to monitor their students' work progress in real time. With respect to the theoretical framework, this article analyses to what extent the ASYMPTOTE system meets the requirements of online learning. Overall, the digital classroom can be seen as a promising tool for teachers to carry out appropriate formative assessment and, in part, to maintain personal and content-related interaction at a distance. Moreover, we highlight the availability of this tool. Due to its mobile learning approach, almost all students will be able to participate in lessons conducted with ASYMPTOTE.
Matroids are combinatorial objects that generalize linear independence. A matroid can be represented geometrically by its Bergman fan and we compare the symmetries of these two objects. Sometimes, the Bergman fan has additional automorphisms, which are related to Cremona transformations in projective space. Their existence depends on a combinatorial property of the matroid, as has been shown by Shaw and Werner, and we study the consequences for the structure of such matroids. This allows us to gain a better understanding of the so-called Cremona group of a matroid and we apply our results to root system matroids.
FEM–BEM coupling for the thermoelastic wave equation with transparent boundary conditions in 3D
(2022)
We consider the thermoelastic wave equation in three dimensions with transparent boundary conditions on a bounded, not necessarily convex domain. In order to solve this problem numerically, we introduce a coupling of the thermoelastic wave equation in the interior domain with time-dependent boundary integral equations. Here, we want to highlight that this problem differs from other wave-type problems for which FEM–BEM coupling has been studied so far, e.g., the acoustic and the elastic wave equation, since it consists of coupled partial differential equations involving a vector-valued displacement field and a scalar-valued temperature field. This constitutes a nontrivial challenge, which is solved in this paper. Our main focus is on a coercivity property of a Calderón operator for the thermoelastic wave equation in the Laplace domain, which is valid for all complex frequencies in a half-plane. Combining Laplace transform and energy techniques, this coercivity in the frequency domain is used to prove the stability of a fully discrete numerical method in the time domain. The considered numerical method couples finite elements and leapfrog time-stepping in the interior with boundary elements and convolution quadrature on the boundary. Finally, we present error estimates for the semi- and full discretization.
The development of epilepsy (epileptogenesis) involves a complex interplay of neuronal and immune processes. Here, we present a first-of-its-kind mathematical model to better understand the relationships among these processes. Our model describes the interaction between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model reproduces the available data from three animal models. The model successfully describes characteristic features of epileptogenesis such as its paradoxically long timescales (up to decades) despite short and transient injuries or the existence of qualitatively different outcomes for varying injury intensity. In line with the concept of degeneracy, our simulations reveal multiple routes toward epilepsy with neuronal loss as a sufficient but non-necessary component. Finally, we show that our model allows for in silico predictions of therapeutic strategies, revealing injury-specific therapeutic targets and optimal time windows for intervention.
The present paper is concerned with the half-space Dirichlet problem [...] where ℝ^N_+ := {x ∈ ℝ^N : x_N > 0} for some N ≥ 1 and p > 1, c > 0 are constants. We analyse the existence, non-existence and multiplicity of bounded positive solutions to (P_c). We prove that the existence and multiplicity of bounded positive solutions to (P_c) depend in a striking way on the value of c > 0 and also on the dimension N. We find an explicit number c_p ∈ (1, √e), depending only on p, which determines the threshold between existence and non-existence. In particular, in dimensions N ≥ 2, we prove that, for 0 < c < c_p, problem (P_c) admits infinitely many bounded positive solutions, whereas, for c > c_p, there are no bounded positive solutions to (P_c).
We present a symmetry result for solutions of equations involving the fractional Laplacian in a domain with at least two perpendicular symmetries. We show that if the solution is continuous, bounded, and odd in one direction such that it has a fixed sign on one side, then it is symmetric in the perpendicular direction. Moreover, the solution is monotonic in the part where it is of fixed sign. In addition, we also present a class of examples to which our result can be applied.
Motivated by Gröbner basis theory for finite point configurations, we define and study the class of standard complexes associated to a matroid. Standard complexes are certain subcomplexes of the independence complex that are invariant under matroid duality. For the lexicographic term order, the standard complexes satisfy a deletion-contraction-type recurrence. We explicitly determine the lexicographic standard complexes for lattice path matroids using classical bijective combinatorics.
For an abeloid variety A over a complete algebraically closed field extension K of Qp, we construct a p-adic Corlette–Simpson correspondence, namely an equivalence between finite-dimensional continuous K-linear representations of the Tate module and a certain subcategory of the Higgs bundles on A. To do so, our central object of study is the category of vector bundles for the v-topology on the diamond associated to A. We prove that any pro-finite-étale v-vector bundle can be built from pro-finite-étale v-line bundles and unipotent v-bundles. To describe the latter, we extend the theory of universal vector extensions to the v-topology and use this to generalise a result of Brion by relating unipotent v-bundles on abeloids to representations of vector groups.
Through the glasses of didactic reduction, we consider a (periodic) tessellation Δ of either Euclidean or hyperbolic n-space M. By a piecewise isometric rearrangement of Δ we mean the process of cutting M along corank-1 tile-faces into finitely many convex polyhedral pieces, and rearranging the pieces to a new tight covering of the tessellation Δ. Such a rearrangement defines a permutation of the (centers of the) tiles of Δ, and we are interested in the group PI(Δ) of all piecewise isometric rearrangements of Δ. In this paper, we offer (a) an illustration of piecewise isometric rearrangements in the visually attractive hyperbolic plane, (b) an explanation of how this is related to Richard Thompson's groups, (c) a section on the structure of the group pei(ℤ^n) of all piecewise Euclidean rearrangements of the standard cubically tessellated ℝ^n, and (d) results on the finiteness properties of some subgroups of pei(ℤ^n).
Conditional Sums-of-AM/GM-Exponentials (conditional SAGE) is a decomposition method to prove nonnegativity of a signomial or polynomial over some subset X of real space. In this article, we undertake the first structural analysis of conditional SAGE signomials for convex sets X. We introduce the X-circuits of a finite subset A ⊂ ℝ^n, which generalize the simplicial circuits of the affine-linear matroid induced by A to a constrained setting. The X-circuits serve as the main tool in our analysis and exhibit particularly rich combinatorial properties for polyhedral X, in which case the set of X-circuits comprises one-dimensional cones of suitable polyhedral fans. The framework of X-circuits transparently reveals when an X-nonnegative conditional AM/GM-exponential can in fact be further decomposed as a sum of simpler X-nonnegative signomials. We develop a duality theory for X-circuits with connections to the geometry of sets that are convex with respect to the geometric mean. This theory provides an optimal power cone reconstruction of conditional SAGE signomials when X is polyhedral. In conjunction with a notion of reduced X-circuits, the duality theory facilitates a characterization of the extreme rays of conditional SAGE cones. Since signomials under logarithmic variable substitutions give polynomials, our results also have implications for nonnegative polynomials and polynomial optimization.
In this article, we prove the Hodge conjecture for a desingularization of the moduli space of rank 2, semi-stable, torsion-free sheaves with fixed odd degree determinant over a very general irreducible nodal curve of genus at least 2. We also compute the algebraic Poincaré polynomial of the associated cohomology ring.
We provide a Hopf boundary lemma for the regional fractional Laplacian (−Δ)^s_Ω, with Ω ⊂ ℝ^N a bounded open set. More precisely, given u a pointwise or weak super-solution of the equation (−Δ)^s_Ω u = c(x)u in Ω, we show that the ratio u(x)/(dist(x, ∂Ω))^{2s−1} is strictly positive as x approaches the boundary ∂Ω of Ω. We also prove a strong maximum principle for distributional super-solutions.
Antimicrobial-resistant infections arise as a consequence of evolutionary mechanisms within microbes that protect them from the effects of antimicrobials. The frequent occurrence of resistant infections poses a global public health threat, as their control has remained challenging despite many efforts. The dynamics of such infections are driven by processes at multiple levels. For a long time, mathematical models have proved valuable for unravelling complex mechanisms in the dynamics of infections. In this thesis, we focus on mathematical approaches to modelling the development and spread of resistant infections at the between-host (population-wide) and within-host (individual) levels.
Within an individual host, switching between treatments has been identified as one of the methods that can be employed for the gradual eradication of resistant strains in the long term. With this as motivation, we study the problem using dynamical systems and notions from control theory. We present a model based on deterministic logistic differential equations that captures the general dynamics of microbial resistance inside an individual host. Fundamentally, this model describes the spread of resistant infections while accounting for the evolutionary mutations observed in resistant pathogens, capturing them in mutation matrices. We extend this model to explore the implications of therapy switching from a control-theoretic perspective by using switched systems and developing control strategies with the goal of reducing the appearance of drug-resistant pathogens within the host.
At the between-host level, we use compartmental models to describe the transmission of infection between multiple individuals in a population. In particular, we make a case study of the evolution and spread of the novel coronavirus (SARS-CoV-2) pandemic. So far, vaccination remains a critical component in the eventual solution to this public health crisis. However, as with many other pathogens, vaccine resistant variants of the virus have been a major concern in control efforts by governments and all stakeholders. Using network theory, we investigate the spread and transmission of the disease on social networks by compartmentalising and studying the progression of the disease in each compartment, considering both the original virus strain and one of its highly transmissible vaccine-resistant mutant strains. We investigate these dynamics in the presence of vaccinations and other interventions. Although vaccinations are of absolute importance during viral outbreaks, resistant variants coupled with population hesitancy towards vaccination can lead to further spread of the virus.
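The compartmental setting described above can be sketched in a minimal form. The following toy two-strain model (an illustration chosen for exposition, not the network model of the thesis; all parameter names are assumptions) tracks susceptible, vaccinated, wild-type-infected, variant-infected, and recovered fractions, with the vaccine blocking only the original strain:

```python
def two_strain_sir(beta1, beta2, gamma, nu, days, dt=0.1):
    """Forward-Euler integration of a minimal two-strain compartmental model:
    S susceptible, V vaccinated, I1 wild-type infected, I2 infected with a
    vaccine-resistant variant, R recovered. The vaccine blocks strain 1 only."""
    S, V, I1, I2, R = 0.99, 0.0, 0.005, 0.005, 0.0
    for _ in range(int(days / dt)):
        inf1 = beta1 * S * I1 * dt            # new wild-type infections
        inf2_s = beta2 * S * I2 * dt          # variant infections of susceptibles
        inf2_v = beta2 * V * I2 * dt          # variant infections of vaccinated
        vacc = nu * S * dt                    # vaccinations
        rec1, rec2 = gamma * I1 * dt, gamma * I2 * dt
        S += -inf1 - inf2_s - vacc
        V += vacc - inf2_v
        I1 += inf1 - rec1
        I2 += inf2_s + inf2_v - rec2
        R += rec1 + rec2
    return S, V, I1, I2, R

state = two_strain_sir(beta1=0.3, beta2=0.35, gamma=0.1, nu=0.01, days=200)
```

Since every outflow of one compartment is an inflow of another, the total population is conserved by construction, which is a basic sanity check for any such scheme.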
We give theorems about asymptotic normality of general additive functionals on patricia tries, derived from results on tries. These theorems are applied to show asymptotic normality of the distribution of random fringe trees in patricia tries. Formulas for the asymptotic mean and variance are given. The proportion of fringe trees with k keys is asymptotically, ignoring oscillations, given by (1 − ρ(k))/((H + J)k(k − 1)), with the source entropy H, an entropy-like constant J that equals H in the binary case, and an exponentially decreasing function ρ(k). Another application gives asymptotic normality of the independence number and the number of k-protected nodes.
We thoroughly study the properties of conically stable polynomials and imaginary projections. A multivariate complex polynomial is called stable if it is nonzero whenever all coordinates of the respective argument have a positive imaginary part. In this dissertation we consider the generalized notion of K-stability: a multivariate complex polynomial is called K-stable if it is nonzero whenever the imaginary part of the respective argument lies in the relative interior of the cone K. We study connections to various other objects, including imaginary projections, as well as preservers and combinatorial criteria for conically stable polynomials.
The 𝒮-cone provides a common framework for cones of polynomials or exponential sums which establish non-negativity through the arithmetic-geometric inequality, in particular for sums of non-negative circuit polynomials (SONC) and sums of arithmetic-geometric exponentials (SAGE). In this paper, we study the 𝒮-cone and its dual from the viewpoint of second-order representability. Extending results of Averkov and of Wang and Magron on the primal SONC cone, we provide explicit generalized second-order descriptions for rational 𝒮-cones and their duals.
In this thesis, we cover two intimately related objects in combinatorics, namely random constraint satisfaction problems and random matrices. First we solve a classic constraint satisfaction problem, 2-SAT, using the graph structure and a message passing algorithm called Belief Propagation. We also explore another message passing algorithm called Warning Propagation and prove a useful result that can be employed to analyze various types of random graphs. In particular, we use Warning Propagation to study a Bernoulli sparse parity matrix and reveal a unique phase transition regarding replica symmetry. Lastly, we use variational methods and a version of the local limit theorem to prove a sufficient condition for a general random matrix to be of full rank.
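For context, satisfiability of a 2-SAT instance can also be decided deterministically from the same graph structure, via the classical implication-graph criterion of Aspvall, Plass and Tarjan. The following sketch (distinct from the Belief Propagation analysis of the thesis; function names are illustrative) builds the implication graph and checks that no variable shares a strongly connected component with its negation:

```python
def sccs(n, adj):
    """Strongly connected components via Kosaraju's algorithm (iterative)."""
    radj = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            radj[v].append(u)
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            advanced = False
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
            if not advanced:
                order.append(u)
                stack.pop()
    comp, c = [None] * n, 0
    for s in reversed(order):
        if comp[s] is not None:
            continue
        comp[s], todo = c, [s]
        while todo:
            u = todo.pop()
            for v in radj[u]:
                if comp[v] is None:
                    comp[v] = c
                    todo.append(v)
        c += 1
    return comp

def two_sat_satisfiable(n_vars, clauses):
    """Aspvall-Plass-Tarjan criterion: a 2-CNF formula is satisfiable iff no
    variable lies in the same SCC of the implication graph as its negation.
    Literals are 1-based integers; -v denotes the negation of variable v."""
    node = lambda lit: 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)
    n = 2 * n_vars
    adj = [[] for _ in range(n)]
    for a, b in clauses:
        adj[node(a) ^ 1].append(node(b))   # clause (a or b): not a -> b
        adj[node(b) ^ 1].append(node(a))   #                  not b -> a
    comp = sccs(n, adj)
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n_vars))

sat = two_sat_satisfiable(2, [(1, 2), (-1, 2), (1, -2)])                # satisfiable
unsat = two_sat_satisfiable(2, [(1, 2), (1, -2), (-1, 2), (-1, -2)])    # unsatisfiable
```

This runs in linear time in the size of the formula, in contrast to the probabilistic analysis that message passing enables on random instances.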
The starting point of this research is the use of gestures in learners' mathematical interactions. It examines to what extent gestures are part of the mathematical negotiation process. The central research aim is thus the reconstruction of a potentially subject-specific meaning of gesture use in mathematics learning.
The work is theoretically framed by findings from psychological-linguistic gesture research on the systematic description of gestures in interplay with simultaneously uttered speech (McNeill, 1992; Kendon, 2004). Selected studies on gestures in mathematics learning are also examined (Arzarello, 2006; Wille, 2020; Kiesow, 2016). The interactionist theory of mathematics education grounds the social-constructivist notion of learning (Krummheuer, 1992). Selected aspects of C. S. Peirce's semiotics provide a theoretical foundation for the notion of sign and for the core of mathematical activity, understood as diagrammatic work (Peirce, 1931, CP 1.54 and 1932, CP 2.228).
Of particular importance for this research is Fricke's (2007, 2012) linguistic approach of code integration and code manifestation of speech-accompanying gestures in the language system, combined with Peirce's notion of diagram. This perspective provides a theoretical grounding for the initially empirically observable multimodality of learners' modes of expression when doing mathematics together. Peirce's notion of diagram serves here to reconstruct a systemic relevance of gestures for doing mathematics: certain gestures can be described semiotically as mathematical signs and have a potentially constitutive function for the learners' diagrammatic work. The overarching research focus is: How do primary school students use gesture and speech, especially in their interplay, to bring their mathematical ideas into the interactive negotiation process and to take them up over the course of the interaction, possibly developing them further or discarding them? In its differentiation, the focus is on the function of the gestures used and on the reconstruction of gestures potentially shared by the interacting participants.
Methodologically, the study belongs to qualitative social research (Bohnsack, 2008) and to interpretative classroom research in mathematics education (Krummheuer & Naujok, 1999). Examples from mathematical interaction situations are analysed in which pairs of second graders work on a mathematical problem from combinatorics and geometry. A transcript score developed specifically in line with the theory is used to prepare the video data. Two hierarchically structured analysis procedures are employed: the text-based interaction analysis (Krummheuer, 1992) and the graphically organised semiotic analysis (Schreiber, 2010) in a further development of the semiotic process cards (Huth, 2014).
The central research findings are 1) the functional and formational flexibility of gesture use in the learners' diagrammatic work, 2) the reconstruction of mode interfaces between gestures and other modes of expression with respect to function, interactional attribution of meaning, and chronology, and 3) the frequent use of gestures as the learners' mode of choice in mathematical interactions. Gestures exhibit an immediate availability without prerequisites, a functional and formational flexibility in mathematical engagement, and the ability to (temporarily) take over functions of other modes. Gestures prove to have a constitutive and subject-specific significance for the learners' mathematical-diagrammatic activity. From this, the thesis finally develops the double continuum of gestures for mathematics learning. Along the dimension of the function of gesture use and the dimension of the object reference of the gesture form, it shows the diversity of gesture functions in the learners' joint diagrammatic work and provides insight into the gesture forms used.
The research reveals the need to take gestures into account in the subject-didactic planning and design of mathematics teaching and in the research on and diagnostics of learners' mathematical development. In mathematical interactions, gestures are not a mere accessory to the utterance but a subject-relevant mode with respect to mathematics learning. The use of gestures enables diagrams to be created in an instant and opens up future research on their significance for mathematical teaching-learning processes.
The literature cited in this summary can be found in the reference list of the submitted thesis.
Tasks are a key resource in the process of teaching and learning mathematics, which is why task design continues to be one of the main research issues in mathematics education. Different settings can influence the principles underlying the formulation of tasks, and so does the outdoor context. Specifically, a math trail can be a privileged context, known to promote positive attitudes and additional engagement for the learning of mathematics, confronting students with a sequence of real-life tasks related to a particular mathematical theme. Recently, mobile devices and apps, e.g., MathCityMap, have been recognized as an important resource to facilitate the extension of the classroom to the outdoors. The study reported in this paper intends to identify principles of design for mobile theme-based math trails (TBT) that result in rich learning experiences in early algebraic thinking. A design-based research methodology is used, through a qualitative approach, to develop and refine design principles for TBT about Sequences and Patterns. The iterative approach is described by cycles with the intervention of the researchers, pre-service and in-service teachers, and students of the targeted school levels. The results are discussed taking into account previous research and the data collected along the cycles, leading to the development of general design principles for TBT tasks.
Existence of nonradial domains for overdetermined and isoperimetric problems in nonconvex cones
(2022)
In this work we address the question of the existence of nonradial domains inside a nonconvex cone for which a mixed boundary overdetermined problem admits a solution. Our approach is variational, and consists in proving the existence of nonradial minimizers, under a volume constraint, of the associated torsional energy functional. In particular we give a condition on the domain D on the sphere spanning the cone which ensures that the spherical sector is not a minimizer. Similar results are obtained for the relative isoperimetric problem in nonconvex cones.
Statistical shape models learn to capture the most characteristic geometric variations of anatomical structures given samples from their population. Accordingly, shape models have become an essential tool for many medical applications and are used in, for example, shape generation, reconstruction, and classification tasks. However, established statistical shape models require precomputed dense correspondence between shapes, often lack robustness, and ignore the global surface topology. This thesis presents a novel neural flow-based shape model that does not require any precomputed correspondence. The proposed model relies on continuous flows of a neural ordinary differential equation to model shapes as deformations of a template. To increase the expressivity of the neural flow and disentangle global, low-frequency deformations from the generation of local, high-frequency details, we propose to apply a hierarchy of flows. We evaluate the performance of our model on two anatomical structures, the liver and the distal femur. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior, as indicated by its generalization ability and specificity. Moreover, we demonstrate the effectiveness of our shape model on shape reconstruction tasks and find anatomically plausible solutions. Finally, we assess the quality of the emerging shape representation in an unsupervised setting and discriminate healthy from pathological shapes.
In this thesis we discuss the group Out(Gal_K) of outer automorphisms of the absolute Galois group Gal_K of a p-adic number field K. Using results about the mapping class group of a surface S, as well as a result by Jannsen--Wingberg on the structure of the absolute Galois group Gal_K, we construct a large subgroup of Out(Gal_K) arising from the images of certain Dehn twists on S.
This thesis is concerned with the study of symmetry breaking phenomena for several different semilinear partial differential equations. Roughly speaking, this encompasses equations whose symmetries are not necessarily inherited by their solutions, which is particularly interesting for ground state solutions.
We establish weighted Lp-Fourier extension estimates for O(N−k)×O(k)-invariant functions defined on the unit sphere S^{N−1}, allowing for exponents p below the Stein–Tomas critical exponent 2(N+1)/(N−1). Moreover, in the more general setting of an arbitrary closed subgroup G ⊂ O(N) and G-invariant functions, we study the implications of weighted Fourier extension estimates with regard to boundedness and nonvanishing properties of the corresponding weighted Helmholtz resolvent operator. Finally, we use these properties to derive new existence results for G-invariant solutions to the nonlinear Helmholtz equation −Δu − u = Q(x)|u|^{p−2}u, u ∈ W^{2,p}(ℝ^N), where Q is a nonnegative, bounded and G-invariant weight function.
This thesis concerns three specific constraint satisfaction problems: the k-SAT problem, random linear equations and the Potts model. We investigate a phenomenon called replica symmetry, its consequences and its limitations. For the k-SAT problem, we show that replica symmetry holds up to a threshold d*. Beyond another critical threshold d**, however, replica symmetry can no longer hold, which enables us to establish the existence of a replica symmetry breaking region. For the random linear equations, a peculiar phenomenon occurs. A more robust version of replica symmetry (strong replica symmetry) holds up to the threshold d = e and ceases to hold afterwards. This phenomenon is linked to the fact that, for d < e, the fraction of frozen variables, i.e. variables forced to take the same value in all solutions, is concentrated around a deterministic value, whereas it vacillates between two values with equal probability for d > e. Lastly, for the Potts model, we show that a phenomenon called metastability occurs, which can be understood as a consequence of a trivial replica symmetry breaking scheme. This metastability further yields slow mixing results for two famous Markov chains, the Glauber and the Swendsen–Wang dynamics.
In this survey paper, we present a multiscale post-processing method in exploration. Based on a physically relevant mollifier technique involving the elasto-oscillatory Cauchy–Navier equation, we mathematically describe the extractable information within 3D geological models obtained by migration as is commonly used for geophysical exploration purposes. More explicitly, the developed multiscale approach extracts and visualizes structural features inherently available in signature bands of certain geological formations such as aquifers, salt domes etc. by specifying suitable wavelet bands.
Monte Carlo methods : barrier option pricing with stable Greeks and multilevel Monte Carlo learning
(2021)
For discretely observed barrier options, there exists no closed-form solution under the Black-Scholes model. Thus, it is often helpful to use Monte Carlo simulations, which are easily adapted to these models. However, the discontinuous payoff may lead to instability in the option's sensitivities computed by Monte Carlo algorithms.
This thesis presents a new Monte Carlo algorithm that can calculate the pathwise sensitivities of discretely monitored barrier options. The idea is based on Glasserman and Staum's one-step survival strategy and on the results of Alm et al., with which we can stably determine the option's sensitivities, such as Delta and Vega, by finite differences. The basic idea of Glasserman and Staum is to sample from a truncated normal distribution, which excludes the values above the barrier (e.g. for knock-up-out options), instead of from the full normal distribution. This approach avoids the discontinuity generated by Monte Carlo paths crossing the barrier and yields a Lipschitz-continuous payoff function.
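A minimal sketch of the one-step survival idea for an up-and-out call under Black-Scholes (illustrative names and parameters, not the thesis's exact algorithm): at each monitoring date the path weight is multiplied by the one-step survival probability, and the next log-price is drawn from the normal distribution truncated at the barrier, so no path ever crosses it.

```python
import math
import random
from statistics import NormalDist

def one_step_survival_price(s0, barrier, strike, r, sigma, T, n_steps, n_paths, seed=1):
    """One-step survival Monte Carlo sketch for a discretely monitored
    up-and-out call under Black-Scholes."""
    rng = random.Random(seed)
    std = NormalDist()
    dt = T / n_steps
    drift = (r - 0.5 * sigma * sigma) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = math.log(s0)
        w = 1.0                                   # running survival weight
        for _ in range(n_steps):
            b = (math.log(barrier) - x - drift) / vol
            p = std.cdf(b)                        # one-step survival probability
            w *= p
            if p <= 1e-12:                        # path effectively knocked out
                break
            u = p * (rng.random() + 2.0 ** -54)   # uniform in (0, p)
            x = x + drift + vol * std.inv_cdf(u)  # truncated normal step below barrier
        total += w * max(math.exp(x) - strike, 0.0)
    return math.exp(-r * T) * total / n_paths

# illustrative up-and-out call: spot 100, barrier 120, strike 100
price = one_step_survival_price(100.0, 120.0, 100.0, 0.0, 0.2, 1.0, 12, 2000)
```

Because every path survives by construction and carries a smooth weight, the estimator depends smoothly on the inputs, which is what makes finite-difference Greeks stable.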
The new contribution is an extended algorithm that estimates the sensitivities directly, without simulation at multiple parameter values as in finite differences.
Consider the local volatility model, which is a generalisation of the Black-Scholes model. Although standard Monte Carlo algorithms work well for the pricing of continuously monitored barrier options within this model, they often do not behave stably with respect to numerical differentiation.
To bypass this problem, one would generally either resort to regularised differentiation schemes or derive an algorithm for precise differentiation. Unfortunately, while the widespread solution of using a Brownian bridge approach leads to accurate first derivatives, these derivatives are not Lipschitz-continuous. This leads to instability with respect to numerical differentiation for second-order Greeks.
To alleviate this problem, i.e. to produce Lipschitz-continuous first-order derivatives, and to reduce variance, we generalise the idea of one-step survival to general scalar stochastic differential equations. This approach leads to the new one-step survival Brownian bridge approximation, which allows for stable second-order Greeks calculations.
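The Brownian bridge approach rests on the classical crossing-probability formula for a Brownian bridge between two observed points; a minimal sketch for a scalar process with an upper barrier (constant volatility assumed purely for illustration):

```python
import math

def bridge_survival_probability(x0, x1, barrier, sigma, dt):
    """Probability that a Brownian bridge from x0 to x1 over a time step dt,
    with volatility sigma, stays below an upper barrier: the standard
    1 - exp(-2 (b - x0)(b - x1) / (sigma^2 dt)) formula (scalar case)."""
    if x0 >= barrier or x1 >= barrier:
        return 0.0
    return 1.0 - math.exp(-2.0 * (barrier - x0) * (barrier - x1) / (sigma * sigma * dt))

p_far = bridge_survival_probability(0.0, 0.0, 5.0, 1.0, 1.0)    # barrier far away
p_near = bridge_survival_probability(0.0, 0.0, 0.1, 1.0, 1.0)   # barrier close by
```

The factor vanishes as either endpoint approaches the barrier, which illustrates why weighting paths with it is accurate but delicate for higher-order differentiation.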
To show the new approach's numerical efficiency, we present a new respective Monte Carlo pathwise sensitivity estimator for the first-order Greeks and study different methods to compute second-order Greeks stably. Finally, we develop a one-step survival Brownian bridge multilevel Monte Carlo algorithm to reduce the computational cost in practice.
This thesis proves unbiasedness and variance reduction of the new one-step survival version relative to the classical Brownian bridge approach. Furthermore, we present a new convergence result for the Brownian bridge approach using the Milstein scheme under certain conditions. Together, these properties imply convergence of the new one-step survival Brownian bridge approach.
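For a plain Black-Scholes setting, the one-step survival idea described above can be sketched as follows: at every monitoring date the path is forced to survive by sampling the next log-price from the normal distribution truncated below the barrier, while a running weight accumulates the one-step survival probabilities. Function and parameter names are hypothetical; this is a minimal sketch, not the thesis's algorithm.

```python
import numpy as np
from scipy.stats import norm

def osp_up_and_out_call(S0, K, B, r, sigma, T, n_steps, n_paths, rng):
    """One-step survival Monte Carlo sketch for a discretely monitored
    up-and-out call under Black-Scholes dynamics."""
    dt = T / n_steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * np.sqrt(dt)
    logS = np.full(n_paths, np.log(S0))
    weight = np.ones(n_paths)            # running product of survival probabilities
    logB = np.log(B)
    for _ in range(n_steps):
        # probability that the next log-price stays below the barrier
        p_surv = norm.cdf((logB - logS - drift) / vol)
        weight *= p_surv
        # inverse-CDF sampling from the normal truncated to the survival region
        u = rng.uniform(0.0, 1.0, n_paths)
        z = norm.ppf(u * p_surv)
        logS = logS + drift + vol * z
    payoff = weight * np.maximum(np.exp(logS) - K, 0.0)
    return np.exp(-r * T) * payoff.mean()
```

Because every sampled path survives and the discontinuous barrier indicator is replaced by the smooth weight, the estimator is amenable to numerical differentiation for Greeks.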
In recent years, deep learning has become pervasive in various fields. As a family of machine learning methods, it is used in a broad range of applications such as image processing, voice recognition, email filtering, and computer vision. Most modern deep learning algorithms are based on artificial neural networks, inspired by the biological neural networks that constitute animal brains. Deep learning can also be useful in computational finance: when no closed-form solution is available for an option price, Monte Carlo simulations are essential for its estimation. Instead of repeatedly performing new price computations whenever the volatility term is updated, one could replace them by evaluating a neural network.
If such a neural network is available, its evaluation could lead to substantial savings and be highly efficient: once trained, a neural network saves further expensive estimations. In practice, however, the challenge lies in the training process of the neural network.
We study and compare the computational complexity of two generic neural network training algorithms. We then introduce a new multilevel training algorithm that combines a deep learning algorithm with the idea of multilevel Monte Carlo path simulation. The idea is to train several neural networks with training data computed from the so-called level estimators of the multilevel Monte Carlo approach introduced by Giles. We formulate a complexity theorem showing that the new method can reduce computational complexity.
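The level estimators mentioned above can be sketched in a toy setting: coupled Euler discretisations of geometric Brownian motion, where level l uses M^l time steps and the coarse path reuses the summed Brownian increments of the fine path. All names and parameter values here are hypothetical illustrations of Giles's construction, not the thesis's implementation.

```python
import numpy as np

def mlmc_level_estimator(level, n_samples, payoff, M=2, T=1.0,
                         S0=1.0, r=0.05, sigma=0.2, rng=None):
    """Giles-style level estimator: Monte Carlo mean of P_l - P_{l-1} for a
    payoff of geometric Brownian motion, using Euler paths with M**level
    steps; the coarse path is coupled via the summed fine increments."""
    rng = rng or np.random.default_rng()
    nf = M ** level                          # number of fine time steps
    dtf = T / nf
    Sf = np.full(n_samples, S0)              # fine path
    Sc = np.full(n_samples, S0)              # coarse path (level - 1)
    dWc = np.zeros(n_samples)                # accumulated Brownian increment
    for i in range(nf):
        dW = rng.normal(0.0, np.sqrt(dtf), n_samples)
        Sf += r * Sf * dtf + sigma * Sf * dW
        dWc += dW
        if level > 0 and (i + 1) % M == 0:   # one coarse step per M fine steps
            Sc += r * Sc * (M * dtf) + sigma * Sc * dWc
            dWc[:] = 0.0
    if level == 0:
        return payoff(Sf).mean()             # plain estimator on level 0
    return (payoff(Sf) - payoff(Sc)).mean()
```

Summing such estimators over levels 0, …, L yields the multilevel estimate; outputs of the individual level estimators are the kind of training data the multilevel training algorithm consumes.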
We show how nonlocal boundary conditions of Robin type can be encoded in the pointwise expression of the fractional operator. Notably, the fractional Laplacian of functions satisfying homogeneous nonlocal Neumann conditions can be expressed as a regional operator with a kernel having logarithmic behaviour at the boundary.
This article deals with the solution of linear ill-posed equations in Hilbert spaces. Often, only a corrupted measurement of the right-hand side is available, and the Bakushinskii veto tells us that we cannot solve the equation if we do not know the noise level. In applications, however, it is unrealistic to know the error of a measurement a priori. In practice, the error of a measurement may often be estimated by averaging multiple measurements. We integrate this into our analysis and obtain convergence to the true solution under the sole assumption that the measurements are unbiased, independent, and identically distributed according to an unknown distribution.
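A minimal numerical sketch of this strategy: average n repeated measurements, estimate the error level of the mean from the sample scatter (it shrinks like 1/sqrt(n)), and feed both into a standard regulariser, here truncated SVD combined with the discrepancy principle. The function name and the constant tau are hypothetical.

```python
import numpy as np

def averaged_discrepancy_tsvd(A, measurements, tau=1.5):
    """Solve A x = y from repeated noisy measurements of y: average them,
    estimate the data-error level of the mean, and regularise by truncated
    SVD stopped by the discrepancy principle (illustrative sketch)."""
    Y = np.asarray(measurements, dtype=float)   # shape (n, m): n repeats
    n = Y.shape[0]
    ybar = Y.mean(axis=0)
    # estimated error level of the averaged data, ~ 1/sqrt(n)
    delta = np.linalg.norm(Y.std(axis=0, ddof=1)) / np.sqrt(n)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ ybar
    for k in range(1, len(s) + 1):
        xk = Vt[:k].T @ (coeffs[:k] / s[:k])    # rank-k reconstruction
        if np.linalg.norm(A @ xk - ybar) <= tau * delta:
            return xk                           # discrepancy principle satisfied
    return xk
```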
We prove new existence results for a nonlinear Helmholtz equation with sign-changing nonlinearity of the form −Δu − k²u = Q(x)|u|^{p−2}u, u ∈ W^{2,p}(ℝ^N), with k > 0, N ≥ 3, p ∈ [2(N+1)/(N−1), 2N/(N−2)) and Q ∈ L^∞(ℝ^N). Due to the sign changes of Q, our solutions have infinite Morse index in the corresponding dual variational formulation.
The recently introduced Lipschitz–Killing curvature measures on pseudo-Riemannian manifolds satisfy a Weyl principle, i.e. are invariant under isometric embeddings. We show that they are uniquely characterized by this property. We apply this characterization to prove a Künneth-type formula for Lipschitz–Killing curvature measures, and to classify the invariant generalized valuations and curvature measures on all isotropic pseudo-Riemannian space forms.
The thesis is composed of four Chapters.
In the first Chapter, the boundary expression of the one-sided shape derivative of nonlocal Sobolev best constants is derived. As a simple consequence, we obtain the fractional version of the so-called Hadamard formula for the torsional rigidity and the first Dirichlet eigenvalue. An application to the optimal obstacle placement problem for the torsional rigidity and the first eigenvalue of the fractional Laplacian is given.
In the second Chapter, we introduce and prove a new maximum principle for doubly antisymmetric functions. The latter can be seen as the first step towards studying the optimal obstacle placement problem for the second fractional eigenvalue. Using the new maximum principle we derive new symmetry results for odd solutions to semilinear Dirichlet boundary value problems with Lipschitz nonlinearity.
In the third Chapter, we derive a new integration-by-parts formula for the fractional Laplace operator with a general globally Lipschitz vector field; in particular, we obtain a new Pohozaev-type identity generalizing the one obtained by X. Ros-Oton and J. Serra. As an application, we obtain nonexistence results for semilinear Dirichlet boundary problems in bounded domains that are not necessarily star-shaped.
In the last Chapter, we study symmetry properties of second eigenfunctions on annuli. Using results from the first Chapter and the maximum principle from Chapter 2, we extend the result on the optimal obstacle placement problem from the first eigenvalue to the second eigenvalue.
We present new results on nonlocal Dirichlet problems established by means of suitable spectral-theoretic and variational methods, taking care of the nonlocal feature of the operators. We mainly address the following. First, we estimate the Morse index of radially symmetric sign-changing bounded weak solutions to a semilinear Dirichlet problem involving the fractional Laplacian; in particular, we address a conjecture due to Bañuelos and Kulczycki on the geometric structure of second Dirichlet eigenfunctions. Secondly, we study the small-order asymptotics, with respect to the parameter s, of the Dirichlet eigenvalue problem for the fractional Laplacian. Thirdly, we deal with the logarithmic Schrödinger operator; in particular, we provide an alternative derivation of the singular integral representation corresponding to the associated Fourier symbol and introduce tools and a functional analytic framework for variational studies. Finally, we study nonlocal operators of order strictly below one; in particular, we investigate interior regularity properties of weak solutions to the associated Poisson problem depending on the regularity of the right-hand side.
Are nearby places (e.g., cities) described by related words? In this article, we transfer this research question in the field of lexical encoding of geographic information onto the level of intertextuality. To this end, we explore Volunteered Geographic Information (VGI) to model texts addressing places at the level of cities or regions with the help of so-called topic networks. This is done to examine how language encodes and networks geographic information on the aboutness level of texts. Our hypothesis is that the networked thematizations of places are similar, regardless of their distances and the underlying communities of authors. To investigate this, we introduce Multiplex Topic Networks (MTN), which we automatically derive from Linguistic Multilayer Networks (LMN) as a novel model, especially of thematic networking in text corpora. Our study shows a Zipfian organization of the thematic universe in which geographical places (especially cities) are located in online communication. We interpret this finding in the context of cognitive maps, a notion which we extend by so-called thematic maps. According to our interpretation of this finding, the organization of thematic maps as part of cognitive maps results from a tendency of authors to generate shareable content that ensures the continued existence of the underlying media. We test our hypothesis by example of special wikis and extracts of Wikipedia. In this way, we come to the conclusion that geographical places, whether close to each other or not, are located in neighboring semantic places that span similar subnetworks in the topic universe.
In our work, we establish the existence of standing waves for a nonlinear Schrödinger equation with inverse-square potential on the half-line. We apply a profile decomposition argument to overcome the difficulty arising from the non-compactness of the setting. We obtain convergent minimizing sequences by comparing the problem to the problem at “infinity” (i.e., the equation without inverse-square potential). Finally, we establish orbital stability/instability of the standing wave solution for mass-subcritical and mass-supercritical nonlinearities, respectively.
The sum of Lyapunov exponents Lf of a semi-stable fibration is the ratio of the degree of the Hodge bundle to the Euler characteristic of the base. This ratio is bounded from above by the Arakelov inequality. Sheng-Li Tan showed that for fiber genus g≥2 the Arakelov equality is never attained. We investigate whether there are sequences of fibrations asymptotically approaching the Arakelov bound. The answer turns out to be no if the fibration is smooth, non-hyperelliptic, or has small base genus. Moreover, we construct examples of semi-stable fibrations showing that Teichmüller curves do not attain the maximal possible value of Lf.
We obtain spectral inequalities and asymptotic formulae for the discrete spectrum of the operator (1/2)log(−Δ) in an open set Ω ⊂ ℝ^d, d ≥ 2, of finite measure with Dirichlet boundary conditions. We also derive some results regarding lower bounds for the eigenvalue λ₁(Ω) and compare them with previously known inequalities.
In the first part of this thesis, we introduce the concept of prospective strict no-arbitrage for discrete-time financial market models with proportional transaction costs. The prospective strict no-arbitrage condition, which is a variant of strict no-arbitrage, is slightly weaker than the robust no-arbitrage condition. It still implies that the set of portfolios attainable from zero initial endowment is closed in probability. Consequently, prospective strict no-arbitrage implies the existence of consistent prices, which may lie on the boundary of the bid-ask spread. A weak version of prospective strict no-arbitrage turns out to be equivalent to the existence of a consistent price system.
In continuous-time financial market models with proportional transaction costs, efficient friction, i.e., nonvanishing transaction costs, is a standing assumption. Together with robust no free lunch with vanishing risk, it rules out strategies of infinite variation, which usually appear in frictionless financial markets. In the second part of this thesis, we show how models with and without transaction costs can be unified. The bid and the ask price of a risky asset are given by càdlàg processes which are locally bounded from below and may coincide at some points. In a first step, we show that if the bid-ask model satisfies no unbounded profit with bounded risk for simple long-only strategies, then there exists a semimartingale lying between the bid and the ask price process.
In a second step, under the additional assumption that the zeros of the bid-ask spread are either starting points of an excursion away from zero or inner points from the right, we show that for every bounded predictable strategy specifying the amount of risky assets, the semimartingale can be used to construct the corresponding self-financing risk-free position in a consistent way. Finally, the set of most general strategies is introduced, which also provides a new view on the frictionless case.
This thesis deals with two functional limit theorems for scaled line-counting processes of ancestral selection graphs. To this end, two models from mathematical population genetics are considered. We first introduce the Moran model with directional selection and constant population size N in continuous time, together with the line-counting process of the ancestral selection graph (MASP) following Krone and Neuhauser (Theor. Popul. Biol. 1997). The main result of this thesis states that, in the case of moderate selection, the suitably standardized MASP converges in distribution to an Ornstein-Uhlenbeck process as N tends to infinity. The second model considered is the Cannings model with directional selection in discrete time, introduced following Boenkost, González Casanova, Pokalyuk and Wakolbinger (Electron. J. Probab. 2021). For a subregime of moderately weak selection, it is proved that the rescaled fluctuations of the line-counting process of the ancestral selection graph in the Cannings model likewise converge in distribution to an Ornstein-Uhlenbeck process.
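As an illustrative sketch (not the construction used in the thesis), a line-counting process of an ancestral selection graph can be simulated as a birth-death chain: with k lines, a branching event adds a line at rate k·σ and a coalescence merges a pair at rate k(k−1)/2. The function name and the normalisation of the rates are hypothetical and assume σ > 0.

```python
import random

def asg_line_count(sigma, t_max, k0=1, rng=random):
    """Simulate the number of lines in an ancestral-selection-graph-type
    birth-death chain up to time t_max: from k lines, branch (+1) at rate
    k*sigma and coalesce (-1) at rate k*(k-1)/2. Illustrative sketch."""
    t, k = 0.0, k0
    while True:
        rate = k * sigma + k * (k - 1) / 2.0
        t += rng.expovariate(rate)           # waiting time to the next event
        if t > t_max:
            return k
        if rng.random() < (k * sigma) / rate:
            k += 1                           # branching event (selection)
        else:
            k -= 1                           # coalescence event
```

Averaging many such runs for large population-size scalings is one way to visualise the Ornstein-Uhlenbeck-type fluctuations the limit theorems describe.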
Sublinear circuits are generalizations of the affine circuits in matroid theory, and they arise as the convex-combinatorial core underlying constrained non-negativity certificates of exponential sums and of polynomials based on the arithmetic-geometric inequality. Here, we study the polyhedral combinatorics of sublinear circuits for polyhedral constraint sets. We give results on the relation between the sublinear circuits and their supports and provide necessary as well as sufficient criteria for sublinear circuits. Based on these characterizations, we provide some explicit results and enumerations for two prominent polyhedral cases, namely the non-negative orthant and the cube [−1,1]^n.
We derive a shape derivative formula for the family of principal Dirichlet eigenvalues λs(Ω) of the fractional Laplacian (−Δ)s associated with bounded open sets Ω⊂RN of class C1,1. With the help of a new approach, this extends a result in Dalibard and Gérard-Varet (Calc. Var. 19(4):976–1013, 2013), which was restricted to the case s = 1/2. As an application, we consider the maximization problem for λs(Ω) among annular-shaped domains of fixed volume of the type B ∖ B̄′, where B is a fixed ball and B′ is a ball whose position is varied within B. We prove that λs(B ∖ B̄′) is maximal when the two balls are concentric. Our approach also allows us to derive similar results for the fractional torsional rigidity. More generally, we characterize one-sided shape derivatives for best constants of a family of subcritical fractional Sobolev embeddings.
Solving an inverse elliptic coefficient problem by convex non-linear semidefinite programming
(2021)
Several applications in medical imaging and non-destructive material testing lead to inverse elliptic coefficient problems, in which an unknown coefficient function in an elliptic PDE is to be determined from partial knowledge of its solutions. This is usually a highly non-linear, ill-posed inverse problem, for which unique reconstructability results, stability estimates, and global convergence of numerical methods are very hard to achieve. The aim of this note is to point out a new connection between inverse coefficient problems and semidefinite programming that may help to address these challenges. We show that an inverse elliptic Robin transmission problem with finitely many measurements can be equivalently rewritten as a uniquely solvable convex non-linear semidefinite optimization problem. This allows us to explicitly estimate the number of measurements required to achieve a desired resolution, to derive an error estimate for noisy data, and to overcome the problem of local minima that usually appears in optimization-based approaches to inverse coefficient problems.
We show that throughout the satisfiable phase the normalized number of satisfying assignments of a random 2-SAT formula converges in probability to an expression predicted by the cavity method from statistical physics. The proof is based on showing that the Belief Propagation algorithm renders the correct marginal probability that a variable is set to “true” under a uniformly random satisfying assignment.
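The two quantities in this statement, the number of satisfying assignments and the marginal probability that a variable is set to “true” under a uniformly random satisfying assignment, can be brute-forced for small instances. This is a hypothetical reference implementation for sanity checks, not Belief Propagation:

```python
from itertools import product

def sat_count_and_marginals(clauses, n):
    """Brute force over all 2^n assignments of a CNF formula (e.g. 2-SAT):
    return the number of satisfying assignments and, for each variable,
    the probability it is True under a uniform satisfying assignment.
    Literal i > 0 means x_i; literal i < 0 means the negation of x_i."""
    count = 0
    true_counts = [0] * n
    for assign in product([False, True], repeat=n):
        if all(any((assign[abs(l) - 1] if l > 0 else not assign[abs(l) - 1])
                   for l in clause) for clause in clauses):
            count += 1
            for i in range(n):
                true_counts[i] += assign[i]
    marginals = [t / count for t in true_counts] if count else None
    return count, marginals
```

Against such exact counts one can check, on tiny formulas, the marginals that Belief Propagation is claimed to render correctly.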
Within the last thirty years, the contraction method has become an important tool for the distributional analysis of random recursive structures. While it was mainly developed to show weak convergence, the contraction approach can additionally be used to obtain bounds on the rate of convergence in an appropriate metric. Based on ideas of the contraction method, we develop a general framework to bound rates of convergence for sequences of random variables as they mainly arise in the analysis of random trees and divide-and-conquer algorithms. The rates of convergence are bounded in the Zolotarev distances. In essence, we present three different versions of convergence theorems: a general version, an improved version for normal limit laws (providing significantly better bounds in some examples with normal limits) and a third version with a relaxed independence condition. Moreover, concrete applications are given which include parameters of random trees, quantities of stochastic geometry as well as complexity measures of recursive algorithms under either a random input or some randomization within the algorithm.
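A classical quantity covered by such convergence theorems is the comparison count of randomized Quicksort, which satisfies the distributional divide-and-conquer recurrence C_n = n − 1 + C_K + C′_{n−1−K} with K uniform on {0, …, n−1}. A sketch for sampling directly from this recursion (function name hypothetical):

```python
import random

def quicksort_comparisons(n, rng):
    """Sample the number of key comparisons of randomized Quicksort on n
    distinct keys via the recursion C_n = n-1 + C_K + C'_{n-1-K},
    where K is the uniformly random rank of the pivot."""
    if n <= 1:
        return 0
    k = rng.randrange(n)   # rank of the chosen pivot among the n keys
    return (n - 1) + quicksort_comparisons(k, rng) \
                   + quicksort_comparisons(n - 1 - k, rng)
```

The normalized fluctuations of such samples converge to the Quicksort limit distribution, and the contraction method bounds the rate of this convergence in Zolotarev distances.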
In 2020, Germany and Spain experienced lockdowns of their school systems. This resulted in a new challenge for learners and teachers: lessons moved from the classroom to the children’s homes. Therefore, teachers had to set rules, implement procedures and make didactical–methodical decisions regarding how to handle this new situation. In this paper, we focus on the roles of mathematics teachers in Germany and Spain. The article first describes how mathematics lessons were conducted using distance learning. Second, problems encountered throughout this process were examined. Third, teachers drew conclusions from their mathematics teaching experiences during distance learning. To address these research interests, a questionnaire was answered by N = 248 teachers (N1 = 171 German teachers; N2 = 77 Spanish teachers). The mixed-methods analysis reveals differences between the countries: e.g., German teachers conducted more lessons asynchronously. In contrast, Spanish teachers used synchronous teaching more frequently, but still regard the lack of personal contact as a main challenge. Finally, for both countries, the digitization of mathematics lessons seems to have been normalized by the pandemic.
We show the existence of additive kinematic formulas for general flag area measures, which generalizes a recent result by Wannerer. Building on previous work by the second named author, we introduce an algebraic framework to compute these formulas explicitly. This is carried out in detail in the case of the incomplete flag manifold consisting of all (p+1)-planes containing a unit vector.
We calculate the Masur–Veech volume of the gothic locus G in the stratum H(2³) of genus 4. Our method is based on the use of the formulae for the Euler characteristics of gothic Teichmüller curves to determine the number of lattice points of given area. We also use this method to recalculate the Masur–Veech volumes of the Prym loci P3 ⊂ H(4) and P4 ⊂ H(6) in genus 3 and 4.
Studying large discrete systems is of central interest in discrete mathematics, computer science and statistical physics, among other fields. The study of phase transitions, i.e., points in the evolution of a large random system at which the behaviour of the system changes drastically, became of interest in the classical field of random graphs, the theory of spin glasses, as well as in the analysis of algorithms [78, 82, 121].
It turns out that ideas from the statistical physics point of view on spin glass systems can be used to study inherently combinatorial problems in discrete mathematics and theoretical computer science (for instance, satisfiability) or to analyse phase transitions occurring in inference problems (like the group testing problem) [68, 135, 168]. A mathematical flaw of this approach is that the physical methods only yield mathematical conjectures, as they are not known to be rigorous.
In this thesis, we will discuss the results of six contributions. For instance, we will explore how the theory of diluted mean-field models for spin glasses helps to study random constraint satisfaction problems through the example of the random 2-SAT problem. We will derive a formula for the number of satisfying assignments that a random 2-SAT formula typically possesses [2].
Furthermore, we will discuss how ideas from spin glass models (more precisely, from their planted versions) can be used to facilitate inference in the group testing problem. We will answer all major open questions with respect to non-adaptive group testing when the number of infected individuals scales sublinearly in the population size, and draw a complete picture of phase transitions with respect to the complexity and solubility of this inference problem [41, 46].
Subsequently, we study the group testing problem under sparsity constraints and obtain a (not fully understood) phase diagram in which only small regions remain unexplored [88].
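As a concrete instance of non-adaptive group testing inference, the simple COMP decoder declares every item that occurs in some negative pool healthy and all remaining items infected. This is a textbook baseline shown for illustration, not one of the algorithms analysed in the cited contributions:

```python
def comp_decode(pools, outcomes, n):
    """COMP decoder for non-adaptive group testing: any item appearing in
    a negative pool is healthy; everything else is declared infected.
    pools: list of lists of item indices; outcomes: list of booleans
    (True = pool tested positive); n: number of items."""
    healthy = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            healthy.update(pool)   # a negative pool clears all its members
    return [i for i in range(n) if i not in healthy]
```

COMP never misses an infected item but may over-report; the phase transitions in the text concern when such decoders (and optimal ones) succeed as the number of tests varies.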
In all those cases, we will discover that important results can be achieved by combining the rich theory of the statistical physics approach to spin glasses with inherent combinatorial properties of the underlying random graph.
Furthermore, based on partial results of Coja-Oghlan, Perkins and Skubch [42] and Coja-Oghlan et al. [49], we introduce in [47] a consistent limit theory for discrete probability measures akin to graph limit theory [31, 32, 128]. This limit theory involves an extensive study of a special variant of the cut-distance, and we obtain a continuous version of a very simple algorithm, the pinning operation, which allows the phase space of an underlying system to be decomposed into parts such that a probability measure, restricted to this decomposition, is close to a product measure under the cut-distance. We will see that this pinning lemma can be used to rigorise predictions, at least in some special cases, based on the physical idea of a Bethe state decomposition applied to the Boltzmann distribution.
Finally, we study sufficient conditions for the existence of perfect matchings, Hamilton cycles and bounded degree trees in randomly perturbed graph models if the underlying deterministic graph is sparse [93].
Network models play an important role in various scientific disciplines and serve, among other things, to describe realistic graphs. They are frequently formulated as random graphs and thus represent probability distributions over graphs. The distribution is usually parametrised and arises implicitly, for instance via a randomised construction rule. An early representative is the G(n,p) model, which is defined over all undirected graphs with n nodes and generates each edge independently with probability p. A graph drawn from G(n,p), however, bears hardly any structural resemblance to the graphs typically observed in applications. Popular models are therefore designed to produce desired topological properties with sufficiently high probability. For example, a common goal is to reproduce the only loosely defined class of so-called complex networks, to which many social networks are assigned. Among other things, these graphs usually exhibit a heavy-tailed degree distribution, a small diameter, a dominant connected component, and unusually dense subregions, so-called communities.
The applications of network models go far beyond the original goal of explaining observed effects. A common use case is the systematic production of data. Such data enable or support experimental investigations, for instance for the empirical verification of theoretical predictions or for the general evaluation of algorithms and data structures. Advantages over observed networks arise especially for large problem instances: massive inputs based on real data are often not available in sufficient quantity, are costly to obtain and manage, are subject to legal restrictions, or are of unclear quality.
In this thesis, we therefore consider algorithmic aspects of generating massive random graphs. To enable users to reproduce existing studies, we mostly focus on faithful implementations of established network models, such as preferential attachment processes, LFR, simple graphs with prescribed degree sequences, or graphs with hyperbolic (or similar) embeddings. To this end, we develop generators that are efficient both in practice and in theory. Each of our algorithms is optimised for a suitable machine model: we design classical sequential generators for random-access machines, algorithms for the external memory model, and parallel approaches for distributed or shared-memory machines on CPUs, GPUs, and other accelerators.
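As a small example of the kind of generator meant here, the edge set of G(n,p) can be sampled in expected time O(n + |E|) rather than Θ(n²) by skipping geometrically over non-edges, the classical Batagelj-Brandes technique (the function name is hypothetical):

```python
import math
import random

def gnp_edges(n, p, rng=random):
    """Sample the edge set of G(n,p) on nodes 0..n-1 by drawing geometric
    skip lengths over the lexicographic list of candidate pairs (v, w),
    w < v, so the runtime is linear in n plus the number of edges.
    Assumes 0 < p < 1."""
    edges = []
    v, w = 1, -1
    lp = math.log(1.0 - p)
    while v < n:
        r = rng.random()
        w += 1 + int(math.log(1.0 - r) / lp)   # geometric skip to next edge
        while w >= v and v < n:                # carry overflow into next row
            w -= v
            v += 1
        if v < n:
            edges.append((v, w))
    return edges
```

Naively flipping a coin for every pair costs Θ(n²) time even for sparse graphs; for massive instances the skip-based generator is the difference between feasible and infeasible.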
This thesis deals with linear inverse problems as they arise in a multitude of applications. These problems are characterised by being typically ill-posed, which primarily concerns stability: even the smallest measurement errors have enormous consequences for the reconstruction of the quantity to be determined.
To enable a robust reconstruction, the problem must be regularised, that is, replaced by a whole family of modified, stable approximations. The concrete choice from this family, the so-called parameter choice strategy, then relies on additional ad hoc assumptions about the measurement error. Typically, this is the knowledge of an upper bound on the norm of the data error in the deterministic case or, in the stochastic case, knowledge of the error distribution, or the restriction to a certain class of distributions, usually Gaussian ones. The present thesis investigates how this information can be obtained under the assumption that the measurement can be repeated. The data are averaged over multiple measurements, which follow an arbitrary, unknown distribution, and the error bound that is indispensable for solving the problem is estimated. A classical regularisation method is then applied to the mean and the estimator. The regularisations treated are mostly filter-based methods relying on the spectral decomposition of the problem. As parameter choice strategies, both simple a priori choices and the discrepancy principle as an adaptive method are considered. Convergence is proved for unknown, arbitrary error distributions with finite variance as well as for white noise (with respect to general discretisations). Finally, convergence of the discrepancy principle is shown for a stochastic gradient method, as the first rigorous analysis of an adaptive stopping rule for such a non-filter-based regularisation method.
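The interplay of an error bound with an adaptive stopping rule can be sketched with Landweber iteration stopped by the discrepancy principle. The function name and the constant tau are hypothetical, and the example is a deliberately simple finite-dimensional sketch:

```python
import numpy as np

def landweber_discrepancy(A, y, delta, tau=1.2, omega=None, max_iter=10000):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (y - A x_k),
    stopped as soon as the residual drops below tau * delta
    (the discrepancy principle); delta is the data-error bound."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = y - A @ x
        if np.linalg.norm(r) <= tau * delta:      # stop: residual at noise level
            break
        x = x + omega * A.T @ r
    return x
```

With repeated measurements, delta would be replaced by the estimated error bound of the averaged data, which is the setting analysed in the thesis.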
Diese Arbeit beschäftigt sich mit der theoriegeleiteten Entwicklung eines digitalen Werkzeugs namens MathCityMap (MCM) für das außerschulische Lehren und Lernen von Mathematik.
Den Ausgangspunkt des Projekts bilden die sogenannten Mathtrails. Dies sind Wanderpfade zum Entdecken mathematischer Sachverhalte an realen Objekten in der Umwelt. Eine didaktische, methodische sowie lernpsychologische Analyse konstatiert Mathtrails zahlreiche Potentiale für den Lernprozess wie beispielsweise die Möglichkeit, Primärerfahrungen zu sammeln, das Interesse am Fach Mathematik zu steigern sowie das Lernen aktiv und konstruktiv zu gestalten. Trotz der genannten Vorteile wird deutlich, dass die Vorbereitung und Umsetzung der mathematischen Wanderpfade mit einem immensen Aufwand verbunden sind. Eine weitere Herausforderung für Lernende liegt im offenen Charakter der Mathtrails, die in der Regel in autonomen Kleingruppen abgelaufen werden. Aus der Literatur ist bekannt, dass insbesondere für schwächere Lerner die Gefahr besteht, durch die Anforderungen einer selbstständigen Arbeitsweise überfordert zu werden.
Als Lösungsansatz für die zuvor genannten Probleme wird im Rahmen dieser Arbeit die Entwicklung eines digitalen Werkzeugs für Mathtrails erläutert. Die erste Forschungsfrage beschäftigt sich mit den theoretischen Anforderungen an solch ein Tool:
1. Welchen Anforderungen muss ein digitales Werkzeug genügen, um die Vorzüge der Mathtrails zu erhalten, deren Aufwand zu minimieren und die Gefahren zu kompensieren?
Unter Berücksichtigung der theoretischen Grundlagen digitaler Werkzeuge und des „Mobile Learnings“ werden zunächst Möglichkeiten identifiziert, den Vorbereitungsaufwand zu minimieren. Konkret erscheinen die automatische Datenverarbeitung, das digitale Zusammen-arbeiten sowie das Teilen und Wiederverwenden von digitalen Aufgaben und Trails als theoretisch zielführende Bestandteile von MCM. Weiterhin sollen zur Unterstützung der Lerner bei der eigenständigen Bearbeitung von Mathtrails didaktisch bewährte Konzepte – wie gestufte Hilfestellungen und Feedback – eingesetzt werden.
Vor dem Hintergrund der soeben formulierten Anforderungen bilden der Entwicklungsprozess sowie die Beschreibung des aktuellen Ist-Zustandes des MCM-Systems zentrale Bestand-teile dieser Arbeit. Das System setzt sich aus zwei Komponenten für jeweils unterschiedliche Zielgruppen zusammen: das MCM-Webportal zum Erstellen von Mathtrails und die MCM-App zum Ablaufen selbiger. Die Hauptziele von MCM können in der Minimierung des Vorbereitungsaufwands sowie der Kompensation einer Überforderungsgefahr gesehen werden.
In ersten Feldversuchen konnte MCM bereits in einem frühen Stadium erfolgreich mit Lernenden der Sekundarstufe I getestet werden. Gleichzeitig fiel jedoch auf, dass das implementierte Feedback-System Schwächen aufwies und von Lernenden zum systematischen Erraten von Lösungen genutzt werden konnte. In der Folge wurden Spielelemente (Gamification), denen nicht nur eine motivationssteigernde Wirkung nachgesagt wird, sondern auch das Potential das Verhalten zu beeinflussen, Bestandteil der MCM-App. Die zweite Forschungs-frage dieser Arbeit zielt auf die Auswirkungen der Gamification-Integration ab und lautet:
2. Welchen Einfluss haben Gamification-Elemente auf die Motivation sowie auf das Nutzungs-verhalten des digitalen Werkzeugs von Neuntklässlern bei der Bearbeitung eines Mathtrails?
Zur Beantwortung der zweiten Forschungsfrage wurde eine empirische Studie mit 16 Schulklassen (304 Schülerinnen und Schüler) der neunten Jahrgangsstufe im Sommer 2017 durch-geführt. Die Ergebnisse können wie folgt zusammengefasst werden: Die Implementierung einer Rangliste (Leaderboard) in die MCM-App führte zwar nicht zu einer höheren Motivation, jedoch spornte der Wettbewerb die Teilnehmer an, viele Aufgaben zu bearbeiten. Im Ver-gleich zu der Kontrollgruppe ohne Gamification-Elemente löste die Experimentalgruppe signifikant mehr Aufgaben, legte die doppelte Strecke zurück und nutzte das Feedbacks-System seltener aus, um Lösungen zu erraten. Die Studie konnte empirisch den gewünschten Einfluss von Spielelementen auf die Benutzung eines digitalen Werkzeugs für das außerschulische Lernen von Mathematik aufzeigen.
The goals of MCM are evaluated indirectly by analysing the spread of the math trail idea without and with MCM. The third research question is accordingly:
3. What contribution has the digital tool made to the dissemination of the math trail idea after four years of project runtime?
To answer the third research question, scientific publications on math trails are analysed. In particular, publications with and without the keyword "MathCityMap" are distinguished in order to assess the influence of the MCM project on the scientific discourse. As of August 2020, every third math trail publication already contains a reference to MCM. Furthermore, a comparison is drawn with earlier, similar efforts, namely online participatory projects for math trails. Between 2000 and 2010, the first websites for mathematical trails appeared in the Anglo-American world, offering 131 math trails in total. By comparison, more than 2,500 MCM math trails already exist in 57 countries.
Both the publications and the number of trails created are first indications that MCM has succeeded in realising a theoretical concept for a digital math trail tool and in spreading the math trail idea.
The $p$-adic section conjecture predicts that for a smooth, proper, hyperbolic curve $X$ over a $p$-adic field $k$, every section of the map of étale fundamental groups $\pi_1(X) \to G_k$ is induced by a unique $k$-rational point of $X$. While this conjecture is still open, the birational variant in which $X$ is replaced by its generic point is known due to Koenigsmann. Generalising an alternative proof of Pop, we extend this result to certain localisations of $X$ at a set of closed points $S$, an intermediate version in between the full section conjecture and its birational variant. As one application, we prove the section conjecture for $X_S$ whenever $S$ is a countable set of closed points.
This thesis examines three topic areas from the field of external-memory algorithms: approximation algorithms, dynamic algorithms, and real-time queries. Approximation algorithms are treated in both Chapter 3 and Chapter 5.
Chapter 3 presents an algorithm that determines the diameter of a graph heuristically. In the RAM model, a modified breadth-first search is itself a cheap and highly accurate algorithm. This changes in external memory, where main-memory BFS is too expensive due to its O(n + m) unstructured accesses to external storage. In 2008, Meyer presented a method for efficiently approximating the graph diameter in external memory, requiring O(k · scan(n + m) + sort(n + m) + √(n·m/(k·B)) · log(n/k) + MST(n, m)) I/Os with a multiplicative approximation error of O(√k · log k). The implementation presented in this thesis was able, in many practical cases, to reduce the number of I/Os by recursion to O(k · scan(n + m) + sort(n + m) + MST(n, m)) I/Os. Various techniques were examined for choosing the starting points (master nodes) for recursively shrinking the graph so that the error remains as small as possible. Furthermore, an adaptive rule was introduced to select only as many master nodes as needed for the shrunken graph to fit into main memory after as few recursive calls as possible. It is shown that the lower bound for the worst-case error thereby rises to Ω(k^{4/3−ε}) with high probability. The experimental evaluation shows, however, that considerably better results are often achieved in practice.
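The recursive shrinking that drives the heuristic can be illustrated in the RAM model, ignoring all I/O aspects: pick master nodes, assign every vertex to its nearest master by a multi-source BFS, contract each cluster to a single vertex, and read off a diameter estimate on the small graph. A minimal sketch under these simplifications (all function names are ours; this is not the thesis implementation):

```python
import random
from collections import deque

def multi_source_bfs(graph, sources):
    """Assign every vertex to its nearest source (master node)."""
    owner = {s: s for s in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in owner:
                owner[v] = owner[u]
                q.append(v)
    return owner

def contract(graph, owner):
    """Build the shrunken graph whose vertices are the master nodes."""
    g2 = {m: set() for m in set(owner.values())}
    for u, nbrs in graph.items():
        for v in nbrs:
            if owner[u] != owner[v]:
                g2[owner[u]].add(owner[v])
                g2[owner[v]].add(owner[u])
    return g2

def eccentricity(graph, s):
    """Largest BFS distance from s; the diameter is the max over all s."""
    dist, far = {s: 0}, 0
    q = deque([s])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                far = max(far, dist[v])
                q.append(v)
    return far

def diameter_estimate(graph, k):
    """One shrinking round: contract around k random masters, then
    estimate via a BFS on the small graph."""
    masters = random.sample(list(graph), min(k, len(graph)))
    owner = multi_source_bfs(graph, masters)
    return eccentricity(contract(graph, owner), masters[0])
```

Because every cluster has small radius, distances in the contracted graph deviate from the original distances only by factors tied to the cluster radii, which is why the choice of master nodes controls the approximation error.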
Chapter 4 presents an algorithm that, after the insertion of a new edge into a graph, updates the corresponding breadth-first search tree using O(n · (n/B^{2/3} + sort(n) · log B)) I/Os with high probability. For sufficiently large B this is faster than static recomputation. To realise the algorithm, a new deterministic partitioning method was developed in which cluster sizes are balanced and can be changed efficiently. For this, a dendrogram of the graph is computed on a suitable tree representation, such as a spanning tree. Each node thus receives a label derived from its position within the tree representation. Consequently, fast bit operations can truncate the low-order digits of a label to form clusters of size µ = 2^i, with the cluster diameter bounded by µ; this bound is required for the I/O complexity, since the trade-off from MM_BFS between cluster size and hot-pool size is exploited. The experimental evaluation shows that the dynamic BFS often outperforms a static recomputation of the BFS tree on both synthetic and real-world data. Even when it does not, it is only slower than the static MM_BFS implementation by small constant factors.
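The label-truncation step can be mimicked in a few lines: label the vertices by their discovery order on a spanning tree, then shift away the i low-order bits so that 2^i consecutive tree positions collapse into one cluster. A toy sketch (the names are illustrative stand-ins, not the thesis API):

```python
def dfs_labels(tree, root):
    """Label each node by its discovery order on a spanning-tree
    representation of the graph (stand-in for the dendrogram labelling)."""
    labels, stack, seen = {}, [root], {root}
    while stack:
        u = stack.pop()
        labels[u] = len(labels)
        for v in tree[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return labels

def clusters(labels, i):
    """Truncate the i low-order bits of every label: nodes sharing the
    remaining high-order bits form one cluster of size mu = 2**i."""
    out = {}
    for node, lab in labels.items():
        out.setdefault(lab >> i, []).append(node)
    return out
```

Changing the cluster size µ then costs only a different shift amount, which is what makes the partition efficiently adjustable.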
Finally, Chapter 5 presents an approximation algorithm that both contains dynamic components and answers queries in real time. To achieve real-time capability, a query may incur only O(1) I/Os. In the scenario of this thesis, queries for distances between two arbitrary nodes u and v on real-world graph data were answered by means of a distance oracle. An implementation for both mechanical hard disks and SSDs is presented; continuous queries in the online scenario can be answered on SSDs within milliseconds, while a large batch of queries can be processed on both architectures in amortised microseconds per query.
Hierarchical self-organizing systems for task-allocation in large scaled distributed architectures
(2019)
This thesis deals with autonomous, decentralized task allocation in a large-scale multi-core network. The self-organization of such interconnected systems is becoming increasingly important for upcoming developments, as their complexity is expected to become hardly manageable for human users. Self-organization is a research field of the Organic Computing initiative, which aims to find solutions for technical systems by imitating natural systems and their processes. Within this initiative, a system for task allocation in a small-scale multi-core network has already been developed, researched, and published. The system is called the Artificial Hormone System (AHS), since it is inspired by the endocrine system of mammals. The AHS produces a high communication load when the multi-core network is of larger scale.
The contribution of this thesis is two new approaches, both based on the AHS, to cope with large-scale architectures. The key idea of both approaches is to introduce a hierarchy into the AHS in order to reduce the communication load. The first and more thoroughly researched approach is the Hierarchical Artificial Hormone System (HAHS), which groups the processing elements into clusters and adds a communication layer between them. The second approach is the Recursive Artificial Hormone System (RAHS), which also clusters the system's processing elements and arranges the clusters into a topological tree structure for communication.
Both approaches are explained in this thesis by their principal structure as well as some optional methods. Furthermore, this thesis presents estimates of the worst-case timing behavior and the worst-case communication load of the HAHS and RAHS. Finally, the evaluation results of both approaches, especially in comparison with the AHS, are presented and discussed.
Biological signalling pathways form complex networks in order to regulate the cellular response sensitively. Systems biology approaches are used to investigate biological systems by means of computer-based models. Besides capturing the logic of the biological system's regulation, a mathematical model permits system-wide simulation of the dynamic behaviour and analysis of robustness and vulnerability.
The TNFR1-mediated signalling pathway regulates essential cellular processes such as inflammatory responses, proliferation, and cell death. TNFR1 is stimulated by the cytokine TNF-α and subsequently promotes the formation of various macromolecular complexes that initiate different cellular responses, ranging from the activation of the transcription factor NF-κB, which regulates the expression of proliferation-promoting genes, to two forms of cell death, apoptosis and necroptosis. The regulation of these different cellular responses is also referred to as a molecular switch. The exact molecular events that modulate the cellular response have not yet been fully deciphered. Dysregulation of the pathway can cause chronic inflammation or promote tumour development.
In this thesis we have, for the first time, captured and analysed the latest research findings on the TNFR1 signalling pathway in a Petri net model, based on extensive interaction data from the literature. The manually curated model covers the sequential processes of NF-κB activation, apoptosis, and necroptosis, and takes the influence of post-translational modifications into account.
Furthermore, analysis methods for signalling pathway models were developed that take the specific requirements of these biological systems into account and enable a biologically motivated network analysis. Manatee invariants identify signal flows at steady state in models containing cycles and are formed as linear combinations of transition invariants. Ideally, such a signal flow captures a process from receptor stimulation to the cellular response in a pathway model. Determining all possible signal flows in pathway models is a necessary prerequisite for further biologically motivated analyses, such as in silico knockout analysis. We also introduced a new concept for investigating in silico knockouts: the effects of knockouts on individual complexes and processes of the pathway are represented in the in silico knockout matrix. We developed the software application isiKnock, which combines both concepts and supports a systematic knockout analysis of Petri net models.
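The algebraic backbone of these invariants can be shown on a toy net. With incidence matrix C (rows = places, columns = transitions), a transition invariant is a nonnegative vector x with C·x = 0, and nonnegative linear combinations of such vectors, the building blocks of Manatee invariants, are again invariants. A sketch with a three-transition cycle (the net is ours, purely illustrative):

```python
# Incidence matrix of a toy three-transition cycle: rows = places p1..p3,
# columns = transitions t1..t3 (entry = tokens produced minus consumed).
C = [
    [-1,  0,  1],   # p1: consumed by t1, produced by t3
    [ 1, -1,  0],   # p2: produced by t1, consumed by t2
    [ 0,  1, -1],   # p3: produced by t2, consumed by t3
]

def is_t_invariant(C, x):
    """x is a transition invariant iff x >= 0 componentwise and C x = 0:
    firing each transition t_j exactly x[j] times reproduces the marking."""
    if any(v < 0 for v in x):
        return False
    return all(sum(c * v for c, v in zip(row, x)) == 0 for row in C)
```

Here [1, 1, 1] (fire each transition once) is a transition invariant, and so is any nonnegative integer combination such as [5, 5, 5].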
The Petri net model of the TNFR1 pathway was checked for its elementary properties, and established analyses such as place invariants and transition invariants were performed. Here, the transition invariants could not describe complete biological signal flows in all cases. We also applied the newly introduced methods to the Petri net model. Using the Manatee invariants, we identified the coherent signal flows, classified them by their biological outcome, and examined the effects of the feedback loops. We showed that the survival response through NF-κB activation occurs most frequently, followed by apoptosis and then necroptosis. The alternative signal flows in the form of Manatee invariants reflect the robustness of the biological system. We performed an extensive in silico knockout analysis based on the Manatee invariants in order to rank and group the pathway's proteins by their influence. The proteins of complex I showed the greatest influence, led by receptor stimulation and RIP1. We considered and discussed the regulation of the molecular switch based on the knockout analysis of selected proteins and their effect on important complexes in the model. We identified ubiquitination in complex I and NF-κB-dependent gene expression as the key control points of the TNFR1 pathway. In complex II, the regulation of caspase activation is decisive.
The extensive network analysis based on Manatee invariants and systematic in silico knockout analysis verified the Petri net model and allowed the robustness and vulnerability of the system to be investigated. The newly developed methods enable a well-founded, biologically relevant investigation of in silico models of signalling pathways. The systems biology approach supports the elucidation of the regulation and function of the intertwined TNFR1 signalling network.
Digital pathology is a new but steadily growing field in medicine. The continuous development of improved digital scanners today permits the scanning of complete tissue sections, and whole slide images are gaining in importance. The aim of this thesis is to develop methods for analysing whole slide images of classical Hodgkin lymphoma. Hodgkin lymphoma, or Hodgkin's disease, is a tumour of the lymphatic system in which the monoclonal tumour cells are usually derived from precursor-stage B lymphocytes.
Slightly more than 9,000 Hodgkin lymphoma cases are diagnosed annually in the USA. Although the 5-year survival rate of 85.3% is comparatively high, about 1,100 deaths per year are still registered in the USA. At the microscopic level, Hodgkin-Reed-Sternberg cells (HRS cells) are typical of classical Hodgkin lymphoma. HRS cells have one or more strongly enlarged nuclei with a coarse chromatin structure. Immunohistologically, there are characteristic markers for HRS cells; in particular, HRS cells are positive for the activation marker CD30.
In addition to conventional microscopy, scanners enable the digitisation of entire slides (whole slide images). So far, whole slide images are rarely used in routine diagnostics. A major advantage of digitised tissue sections lies in computer-assisted analysis: automated image analysis methods such as cell detection can support pathologists in diagnosis by providing comprehensive statistics on the number and distribution of immunostained cells.
The immunohistological images examined were provided by the Dr. Senckenberg Institute of Pathology at the University Hospital Frankfurt. The tissue sections considered are immunostained against CD30, a membrane receptor expressed in HRS cells and activated lymphocytes. The sections were digitised with an Aperio ScanScope slide scanner at a high resolution of 0.25 μm per pixel. Given the sizes of the tissue sections, this yields images of up to 90,000 x 90,000 pixels.
The examined image data set comprises 35 images of lymph node tissue sections from three diagnoses: mixed-cellularity classical Hodgkin lymphoma, nodular classical Hodgkin lymphoma, and lymphadenitis. The image processing pipeline was partly newly implemented and partly built on established image analysis software and libraries such as CellProfiler and Java Advanced Imaging. CD30-positive cell objects are detected automatically in the tissue sections, and besides the global position in the whole slide image, further morphology descriptors are computed, such as area, Feret diameter, eccentricity, and solidity. The cell detection shows a high precision of 84% and a very good sensitivity of 95%.
It could be shown that lymphadenitis cases contain on average considerably fewer CD30-positive cells than classical Hodgkin lymphoma. While only around 3,000 cells were found on average here, the average for mixed-cellularity classical Hodgkin lymphoma was around 19,000 CD30-positive cells. Whereas the CD30-positive cells in lymphadenitis cases are distributed relatively evenly, in classical Hodgkin lymphoma cases they form cell clusters of higher density.
The computed morphology descriptors offer the possibility of characterising the tissue sections and the course of the disease more closely. Moreover, until now the size and appearance of HRS cells have mainly been determined from manually selected cells. One measure for the extent of a cell is the maximum Feret diameter: for CD30 cells in classical Hodgkin lymphoma it averages 20 μm and is thus considerably larger than the average of 15 μm measured in lymphadenitis.
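The maximum Feret diameter is simply the largest distance between two contour points of a detected cell object; at the stated scan resolution, multiplying pixel distances by 0.25 converts them to micrometres. A naive O(n²) sketch (convex-hull-based rotating calipers would be faster; this is not the pipeline's implementation):

```python
from itertools import combinations
from math import dist

MICRONS_PER_PIXEL = 0.25   # resolution of the Aperio scans described above

def max_feret_um(contour_px):
    """Maximum Feret diameter of a cell contour, in micrometres: the
    largest pairwise distance between contour points (naive O(n^2))."""
    d = max(dist(p, q) for p, q in combinations(contour_px, 2))
    return d * MICRONS_PER_PIXEL
```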
A graph-theoretical approach was chosen to model the CD30-positive cells and their spatial neighbourhood. In CD30 cell graphs of classical Hodgkin lymphoma tissue sections, the average node degree is strongly increased compared to that of lymphadenitis images. A comparison with random graphs shows that the observed degree distributions do not support a random distribution of the cells in the tissue section. Properties and distribution of communities in CD30 cell graphs can additionally be used to characterise classical Hodgkin lymphoma tissue sections more closely.
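A cell graph of this kind can be built by connecting cell centroids that lie within a fixed radius of each other; the mean node degree then quantifies local cell density. A minimal sketch (the fixed radius and the O(n²) neighbour search are our simplifications):

```python
from itertools import combinations
from math import dist

def cell_graph(centroids, radius):
    """Connect every pair of cell centroids lying within `radius` of each
    other (naive O(n^2) neighbour search); returns an adjacency dict."""
    g = {i: set() for i in range(len(centroids))}
    for i, j in combinations(range(len(centroids)), 2):
        if dist(centroids[i], centroids[j]) <= radius:
            g[i].add(j)
            g[j].add(i)
    return g

def mean_degree(g):
    """Average node degree, the statistic compared between classical
    Hodgkin lymphoma and lymphadenitis cell graphs."""
    return sum(len(nbrs) for nbrs in g.values()) / len(g)
```

Comparing `mean_degree` against the same statistic on graphs of uniformly scattered points gives the random-graph baseline mentioned above.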
This thesis shows that the evaluation of whole slide images can support improved diagnosis. The more than 400,000 automatically detected CD30-positive cell objects were described morphologically, and together with their position in the tissue section, important properties of classical Hodgkin lymphoma become accessible. Cell graphs can be extended by further cell types and applied to other diseases.
Precise timing of spikes between different neurons has been found to convey reliable information beyond the spike count. In contrast, the role of small phase delays with high temporal variability, as reported for example in oscillatory activity in the visual cortex, remains largely unclear. This issue becomes particularly important given the high speed of neuronal information processing, which is assumed to rely on only a few milliseconds, or a few oscillation cycles, per processing step.
We investigate the role of small and imprecise phase delays with a stochastic spiking model that is strongly motivated by experimental observations. Within individual oscillation cycles the model contains only two signal parameters describing directly the rate and the phase. We specifically investigate two quantities, the probability of correct stimulus detection and the probability of correct change point detection, as a function of these signal parameters and within short periods of time such as individual oscillation cycles.
Optimal combinations of the signal parameters are derived that maximize these probabilities and enable comparison of pure rate, pure phase and combined codes. In particular, the gain in detection probability when adding imprecise phases to pure rate coding increases with the number of stimuli. More interestingly, imprecise phase delays can considerably improve the process of detecting changes in the stimulus, while also decreasing the probability of false alarms and thus, increasing robustness and speed of change point detection.
The results are applied to parameters extracted from empirical spike train recordings of neurons in the visual cortex in response to a number of visual stimuli. The results suggest that near-optimal combinations of rate and phase parameters can be implemented in the brain, and that phase parameters could particularly increase the quality of change point detection in cases of highly similar stimuli.
This thesis is about random constraint satisfaction problems (rCSP): random instances of classical problems in NP. In the literature, the study of rCSP involves identifying and locating phase transition phenomena as well as investigating algorithmic questions.
Recently, some ingenious, though mathematically non-rigorous, theories from statistical physics have given the study of rCSP a new perspective; the so-called Cavity Method makes very impressive predictions about the most fundamental properties of rCSP.
In this thesis, we investigate the soundness of some of the most basic predictions of the Cavity Method, mainly regarding the structure of the so-called Gibbs distribution in various rCSP models. Furthermore, we study fundamental algorithmic problems related to rCSP. This includes both analysing well-known dynamical processes (dynamics) such as Glauber dynamics and the Metropolis process, and proposing new algorithmic approaches to natural problems related to rCSP.
We live in an age of data ubiquity. Even the most conservative estimates predict exponential growth in produced, transmitted, and stored data. Big data is used to power business analytics as well as to foster scientific discoveries. In many cases, the explosion of produced data exceeds the capabilities of digital storage systems. Scientific high-performance computing environments cope with this problem by utilizing large, distributed storage systems. These complex systems can only provide a high degree of reliability and durability by means of data redundancy. The most straightforward way of achieving this is to replicate the data over different physical devices. However, more elaborate approaches, such as erasure coding, can provide similar data protection while using less storage. Recently, software-defined reliability methods have begun to replace traditional hardware-based solutions. The complicated failure modes of storage system components also warrant checksums to guarantee long-term data integrity. To cope with ever-increasing data volumes, a flexible and efficient software implementation of error-correcting codes is of great importance. This thesis introduces a method for realizing a flexible Reed-Solomon erasure code using the just-in-time compilation technique. By exploiting intrinsic arithmetic redundancy in the algorithm and relying on modern optimizing compilers, we obtain a throughput-efficient erasure code implementation. Additionally, data parallelism is exploited effortlessly by instructing the compiler to produce SIMD code for the desired execution platform. We show results for codes implemented using the SSE and AVX2 SIMD instruction sets on x86 and the NEON instruction set on ARM platforms. Next, we introduce a framework for efficient vectorized RAID-Z redundancy operations in the ZFS file system.
Traditional table-based Galois field multiplication algorithms are replaced with custom SSE and AVX2 parallel methods, providing significantly faster and more efficient parity operations. The implementation of this framework has been publicly available as part of the ZFS on Linux project since version 0.7. Finally, we propose a new erasure scheme for use with existing high-performance parallel file systems. The described reliability middleware (ECCFS) allows the definition of flexible, file-based reliability policies that adapt to customized user needs. By utilizing the block erasure code, ECCFS achieves optimal storage, computation, and network resource utilization while providing a high level of reliability. The distributed nature of the middleware allows greater scalability and more efficient utilization of storage and network resources in order to improve the availability of the system.
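The scalar baseline that the SIMD kernels replace is table-based multiplication in GF(2^8). A sketch using log/exp tables over the primitive polynomial 0x11d (a common choice for Reed-Solomon codes; the exact field and table layout used by ZFS may differ):

```python
# Log/exp tables for GF(2^8) over the primitive polynomial 0x11d
# (x^8 + x^4 + x^3 + x^2 + 1), a common choice for Reed-Solomon codes.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1                 # multiply by the generator alpha = 2
    if x & 0x100:
        x ^= 0x11d          # reduce modulo the field polynomial
for i in range(255, 512):   # duplicate so LOG[a] + LOG[b] never overflows
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    """Table-based multiplication in GF(2^8): two lookups and an addition.
    This is the scalar baseline that SSE/AVX2/NEON kernels parallelize."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```

Each `gf_mul` costs two table lookups plus an index addition; the vectorized parity kernels effectively perform many such lookups per instruction.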
In the first part of the thesis we investigate Lyapunov exponents for general flat vector bundles over Riemann surfaces and describe properties of Lyapunov exponents on special loci of the moduli space of flat vector bundles. In the second part of the thesis we show how knowledge of the Lyapunov exponents over a sporadic Teichmüller curve can be used to compute the algebraic equation of the associated universal family of curves.
This thesis deals with stemmatology, i.e., primarily the reconstruction of the copying history of handwritten documents. The central object of stemmatology is the stemma, a visual representation of the copying history, usually given graph-theoretically as a tree or a directed acyclic graph, where the nodes represent text witnesses (the textual variants) while the edges stand for individual copying processes. At the centre of the discipline stands the question of the author's original (if a single one ever existed) and the reconstruction of its text; the stemma itself is a means to this main end (Cameron 1987). The original text, increasingly altered by the deviations characteristic of manual copying, is usually not directly transmitted. The aim of this thesis is to describe semi-automatic stemmatology comprehensively and to advance it through tools and analytical methods. The first part describes the history of computer-assisted stemmatology, including its classical precursors, and culminates in the presentation of a simple tool for the dynamic graphical display of stemmata. An excursus on the philological guiding principle of lectio difficilior discusses its possible psycholinguistic causes in the faster lexical access to high-frequency lexemes. The second part then examines the most existential of all stemmatological debates, initiated by Joseph Bédier, with mathematical arguments based on a stemmatic model proposed by Paul Maas in 1937. Furthermore, the author simulates stemmata in this chapter to estimate the potential influence of the distribution of copying frequencies per manuscript.
In the next part, the author presents a self-compiled corpus in Persian, which, like three of the well-known artificial corpora (Parzival, Notre Besoin, Heinrichi), is examined qualitatively. Then, with the Multi Modal Distance, a method for stemma generation is applied that relies on external data of psycholinguistically determined letter confusion probabilities. In the final part, the author works with minimum spanning trees for stemma generation, conducting, evaluating, and discussing a comparative study of four methods for distance matrix generation combined with four methods for stemma generation.
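The last step, stemma generation from a distance matrix via a minimum spanning tree, can be sketched with Prim's algorithm (the distance matrix here is a made-up toy; the thesis compares four real distance measures):

```python
import heapq

def minimum_spanning_tree(dist):
    """Prim's algorithm on a full distance matrix between text witnesses.
    The resulting edge set is the (unrooted) minimum-spanning-tree stemma
    hypothesis; rooting it at a presumed archetype is a separate step."""
    n = len(dist)
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(dist[0][j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while len(edges) < n - 1:
        d, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        edges.append((u, v))
        for w in range(n):
            if not in_tree[w] and w != v:
                heapq.heappush(heap, (dist[v][w], v, w))
    return edges
```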
The thesis deals with the analysis and modeling of point processes emerging from different experiments in neuroscience. In particular, the description and detection of different types of variability changes in point processes is of interest.
A non-stationary rate or variance of life times is a well-known problem in the description of point processes such as neuronal spike trains and can affect the results of further analyses requiring stationarity. Moreover, non-stationary parameters may themselves contain important information. The goal of the first part of the thesis is the (further) development of a technique to detect both rate and variance changes that may occur on multiple time scales, separately or simultaneously. A two-step procedure building on the multiple filter test (Messer et al., 2014) is used: it first tests the null hypothesis of rate homogeneity, allowing for an inhomogeneous variance, and estimates change points in the rate if the null hypothesis is rejected. In the second step, the null hypothesis of variance homogeneity is tested and variance change points are estimated, with the rate change points used as input. The main idea is the comparison of estimated variances in adjacent windows of different sizes sliding over the process. To determine the rejection threshold, functionals of Brownian motion are identified as limit processes under the null hypothesis of variance homogeneity. The non-parametric procedure is not restricted to the case of at most one change point. Simulation studies show that the corresponding test keeps the asymptotic significance level for a wide range of parameters and that the test power is remarkable. The practical applicability of the procedure is underlined by the analysis of neuronal spike trains.
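The core of the windowed comparison can be sketched as follows: slide two adjacent windows of size h over the process and take the difference of their empirical variances as the statistic, estimating a change point where its absolute value peaks. This toy version omits the multiple window sizes and the Brownian-motion-based rejection threshold that the actual test derives:

```python
import statistics

def variance_change_statistic(x, h):
    """For each position t, compare the empirical variances of the h values
    to the left and to the right of t; a large absolute value suggests a
    variance change point near t."""
    stats = []
    for t in range(h, len(x) - h):
        left = statistics.variance(x[t - h:t])
        right = statistics.variance(x[t:t + h])
        stats.append(right - left)
    return stats

def estimate_change_point(x, h):
    """Position where the windowed statistic peaks (no significance test)."""
    stats = variance_change_statistic(x, h)
    t = max(range(len(stats)), key=lambda i: abs(stats[i]))
    return t + h   # shift back to the index in x
```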
Point processes resulting from experiments on bistable perception are analyzed in the second part of the thesis. Visual illusions allowing for more than one possible perception lead to unpredictable changes of perception. The thesis uses data from (Schmack et al., 2015), in which a rotating sphere with a switching perceived rotation direction was presented to the participants of the study. The stimulus was presented continuously and intermittently, i.e., with short periods of "blank display" between the presentation periods. There are remarkable differences in the response patterns between the two types of presentation. During continuous presentation, the distribution of dominance times, i.e., the intervals of constant perception, is right-skewed and unimodal with a mean of about five seconds. In contrast, during intermittent presentation one observes very long, stable dominance times of more than one minute alternating with very short, unstable dominance times of less than five seconds, i.e., an increase in variability.
The main goal of the second part is to develop a model for the response patterns to bistable perception that builds a bridge between empirical data analysis and mechanistic modeling. Thus, the model should be able to describe both the response patterns to continuous presentation and to intermittent presentation. Moreover, the model should be fittable to typically short experimental data, and the model should allow for neuronal correlates. Current approaches often use detailed assumptions and large parameter sets, which complicate parameter estimation.
First, a Hidden Markov Model is applied. Second, to allow for neuronal correlates, a Hierarchical Brownian Model (HBM) is introduced, where perception is modeled by the competition of two neuronal populations. The activity difference between these two populations is described by a Brownian motion with drift fluctuating between two borders, where each first hitting time causes a perceptual change. To model the response patterns to intermittent presentation a second layer with competing neuronal populations (coding a stable and an unstable state) is assumed. Again, the data are described very well, and the hypothesis that the relative time in the stable state is identical in a group of patients with schizophrenia and a control group is rejected. To sum up, the HBM intends to link empirical data analysis and mechanistic modeling and provides interesting new hypotheses on potential neuronal mechanisms of cognitive phenomena.
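The first layer of the HBM can be sketched as a drifted random walk between two absorbing borders, where each border crossing triggers a perceptual switch and yields one dominance time. This is a toy Euler discretisation with parameter names of our choosing, not the fitted model:

```python
import random

def dominance_times(mu, sigma, border, n_switches, dt=0.001, seed=0):
    """Simulate the activity difference between two competing neuronal
    populations as a Brownian motion with drift mu and volatility sigma.
    Each time the walk reaches +/-border, perception switches, the elapsed
    dominance time is recorded, and the walk restarts at 0 with reversed
    drift (competition now favours the other percept)."""
    rng = random.Random(seed)
    times = []
    drift, x, t = mu, 0.0, 0.0
    while len(times) < n_switches:
        x += drift * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
        if abs(x) >= border:
            times.append(t)
            drift = -drift
            x, t = 0.0, 0.0
    return times
```

The resulting first-hitting times are right-skewed and unimodal, qualitatively matching the dominance-time distributions under continuous presentation; the second HBM layer (stable vs unstable states) is not sketched here.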
This thesis deals with inverse problems for partial differential equations. Modern solution methods for such inverse problems often have to solve the associated partial differential equation (PDE) very many times; with regard to run time, these repeated PDE solves account for the bulk of the computation. This leads to the basic idea of this thesis: solution methods for inverse problems are to be accelerated by reducing the time required for the forward solution. More precisely, instead of the forward solution, a cheap-to-compute approximation of it is used. To obtain such an inexpensive approximation, the reduced basis method, a model reduction technique, is employed.
The aim of the classical reduced basis method is to construct a global reduced basis space (RB space): a low-dimensional subspace of the solution space of the PDE that yields a good approximation of the PDE solution for every parameter in the parameter space. One exemplary way of constructing such a space is to select parameters judiciously and to use the corresponding PDE solutions as basis vectors of the RB space. The orthogonal projection of the PDE onto this RB space yields the corresponding reduced basis solutions. The particular feature of this thesis is that the PDEs considered have a very high-dimensional and unbounded parameter space, which is known to pose an immense difficulty for the reduced basis method.
Chapter 1 introduces an ill-posed inverse model problem, the reconstruction of the heat conductivity of an object from measurements of its temperature, and presents the nonlinear Landweber method as an iterative regularization scheme for solving this inverse problem. The fundamentals of the reduced basis method are laid out, and it is explained why the classical variant of the method fails in this image reconstruction context. A novel adaptive reduced basis approach is then developed, based on the following steps:
1. Given an RB space, project the solution algorithm of the inverse problem onto this RB space.
2. Generate new iterates with this projected method until either an iterate solves the inverse problem or the RB space has to be enlarged.
3. In the first case the method terminates; in the second case the forward solution belonging to the current iterate is used to improve the RB space, and the procedure continues with the first step.
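The three steps above can be sketched as a generic solver skeleton. Every argument here is a problem-specific placeholder (projected iteration step, stopping rule, enrichment criterion, full PDE solver, enrichment rule), not part of the thesis itself.

```python
def adaptive_rb_solve(solve_reduced_step, is_solved, needs_enrichment,
                      full_forward_solve, enrich_space, rb_space, x0,
                      max_iter=100):
    """Schematic skeleton of an adaptive reduced basis loop; all callbacks
    are placeholders for the problem-specific pieces."""
    x = x0
    for _ in range(max_iter):
        x = solve_reduced_step(rb_space, x)   # iterate in the current RB space
        if is_solved(x):                      # e.g. a discrepancy principle
            return x, rb_space
        if needs_enrichment(rb_space, x):
            snapshot = full_forward_solve(x)  # one expensive full PDE solve
            rb_space = enrich_space(rb_space, snapshot)
    return x, rb_space
```

The point of the design is that the expensive `full_forward_solve` is called only when the reduced space must be enriched, not in every iteration.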
A locally approximating RB space is thus constructed step by step, with the parameters for new basis vectors found by a projected variant of the solution algorithm of the inverse problem. The resulting novel reduced basis Landweber method is the main result of Chapter 1; it is studied numerically in detail and compared with the original Landweber method.
In Chapter 2 of this thesis, the adaptive reduced basis approach developed before is applied to a complex problem of practical relevance. In particular, the resulting new method is analyzed theoretically in detail with regard to convergence. The second part of this thesis is therefore devoted to the problem of magnetic resonance electrical impedance tomography (MREIT).
MREIT is an imaging technique developed over the last three decades. An object with attached electrodes is placed inside an MRI scanner, and the goal of the method is to determine the electrical conductivity of the object. The required data are obtained as follows: applying current to one of the electrodes generates a current flow, which in turn induces a change in the magnetic flux density. This change can be measured with the MRI scanner, so that a full set of interior data is at hand and highly resolved images of the electrical conductivity of the object can be reconstructed.
As a solution algorithm for this practically relevant problem, the well-known harmonic Bz algorithm is presented. The problem and the algorithm are analyzed with regard to convergence, and a convergence result is proved that extends the existing convergence theory to an approximate harmonic Bz algorithm. The result does not depend on which kind of approximation to the forward solution of the underlying PDE is used in the approximate harmonic Bz algorithm, as long as it satisfies a regularity and a quality condition. This yields the second main result of this thesis: the numerical convergence of the harmonic Bz algorithm. It should be emphasized that convergence results in the field of inverse problems (where they exist at all) usually assume knowledge of the exact forward solution, so that no numerical convergence of the corresponding method follows (a numerical implementation always works with an approximation of the forward solution). This convergence result is therefore a step toward the numerical convergence of other solution methods for inverse problems.
Since the theoretical result does not depend on the kind of approximation, the convergence of the novel reduced basis harmonic Bz algorithm, the combination of the adaptive reduced basis approach developed in Chapter 1 with the harmonic Bz algorithm, follows as well. A brief numerical study shows that this reduced basis harmonic Bz algorithm is faster than the harmonic Bz algorithm while the reconstruction quality remains the same. The adaptive reduced basis approach thus also works when applied to the complex, practically relevant inverse problem of MREIT.
The results of this thesis lie in the area of convex algebraic geometry, which is the intersection of real algebraic geometry, convex geometry, and optimization.
We study sums of nonnegative circuit polynomials (SONC) and their related cone, both geometrically and in application to polynomial optimization. SONC polynomials are certain sparse polynomials having a special structure in terms of their Newton polytopes and supports, and serve as a certificate of nonnegativity for real polynomials, which is independent of sums of squares.
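A concrete illustration (not taken from the thesis): the Motzkin polynomial x^4y^2 + x^2y^4 - 3x^2y^2 + 1 is a classical nonnegative circuit polynomial that is not a sum of squares; its nonnegativity follows from the arithmetic-geometric mean inequality applied to its three outer monomials. A quick numerical sanity check on an illustrative grid:

```python
def motzkin(x, y):
    """Motzkin polynomial: nonnegative by AM-GM on its outer monomials,
    (x^4 y^2 + x^2 y^4 + 1) / 3 >= (x^6 y^6)^(1/3) = x^2 y^2,
    yet famously not a sum of squares."""
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# sample on a grid: the minimum value 0 is attained at |x| = |y| = 1
vals = [motzkin(i / 10.0, j / 10.0)
        for i in range(-30, 31) for j in range(-30, 31)]
```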
The first part of this thesis is dedicated to the convex geometric study of the SONC cone. As main results we show that the SONC cone is full-dimensional in the cone of nonnegative polynomials, we exactly determine the number of zeros of a nonnegative circuit polynomial, and we give a complete and explicit characterization of the number of zeros of SONC polynomials and forms. Moreover, we provide a first approach to the study of the exposed faces of the SONC cone and their dimensions.
In the second part of the thesis we use SONC polynomials to tackle constrained polynomial optimization problems (CPOPs).
As a first step, we derive a lower bound for the optimal value of CPOP based on SONC polynomials by using a single convex optimization program, which is a geometric program (GP) under certain assumptions. GPs are a special type of convex optimization problem and can be solved in polynomial time. We test the new method experimentally and provide examples comparing our new SONC/GP approach with Lasserre's relaxation, a common approach for tackling CPOPs, which approximates nonnegative polynomials via sums of squares and semidefinite programming (SDP). The new approach comes with the benefit that in practice GPs can be solved significantly faster than SDPs. Furthermore, increasing the degree of a given problem has almost no effect on the runtime of the new program, which is in sharp contrast to SDPs.
As a second step, we establish a hierarchy of efficiently computable lower bounds converging to the optimal value of CPOP based on SONC polynomials. For a given degree each bound is computable by a relative entropy program. This program is also a convex optimization program, which is more general than a geometric program, but still efficiently solvable via interior point methods.
Powerful environment perception systems are a fundamental prerequisite for the successful deployment of intelligent vehicles, from advanced driver assistance systems to self-driving cars. Arguably the most essential task of such systems is the reliable detection and localization of obstacles in order to avoid collisions. Two particularly challenging scenarios in this context are represented by small, unexpected obstacles on the road ahead, and by potentially dynamic objects observed from a large distance. Both scenarios become exceedingly critical when the ego-vehicle is traveling at high speed. As a consequence, two major requirements placed on environment perception systems are the capability of (a) high-sensitivity generic object detection and (b) high-accuracy obstacle distance estimation. The present thesis addresses both requirements by proposing novel approaches based on stereo vision for spatial perception.
First, this work presents a novel method for the detection of small, generic obstacles and objects at long range directly from stereo imagery. The detection is based on sound statistical tests using local geometric criteria which are applicable to both static and moving objects. The approach is not limited to predefined sets of semantic object classes and does not rely on restrictive assumptions on the environment, such as oversimplified global ground surface models. Free-space and obstacle hypotheses are evaluated based on a statistical model of the input image data in order to avoid a loss of sensitivity through intermediate processing steps. In addition to the detection result, the algorithm simultaneously yields refined estimates of object distances, originating from an implicit optimization of the geometric obstacle hypothesis models. The proposed detection system provides multiple flexible output representations, ranging from 3D obstacle point clouds to compact mid-level obstacle segments to bounding box representations of object instances suitable for model-based tracking. The core algorithm concept lends itself to massive parallelization and can be implemented efficiently on dedicated hardware. Real-time execution is demonstrated on a test vehicle in real-world traffic. For a thorough quantitative evaluation of the detection performance, two dedicated datasets are employed, covering small and hard-to-detect obstacles in urban environments as well as distant dynamic objects in highway driving scenarios. The proposed system is shown to significantly outperform current general purpose obstacle detection approaches in both setups, providing a considerable increase in detection range while reducing the false positive rate at the same time.
Second, this work considers the high-accuracy estimation of object distances from stereo vision, particularly at long range. Several new methods for optimizing the stereo-based distance estimates of detected objects are proposed and compared to state-of-the-art concepts. A comprehensive statistical evaluation is performed on an extensive dedicated dataset, establishing reference values for the accuracy limits actually achievable in practice. Notably, the refined distance estimates implicitly provided by the proposed obstacle detection system are shown to yield highly accurate results, on par with the top-performing dedicated stereo matching algorithms considered in the analysis.
In this thesis we introduce the imaginary projection of (multivariate) polynomials as the projection of their variety onto its imaginary part, I(f) = { Im(z_1, ..., z_n) : f(z_1, ..., z_n) = 0 }. This induces a geometric viewpoint on stability, since a polynomial f is stable if and only if its imaginary projection does not intersect the positive orthant. Accordingly, the thesis is mainly motivated by the theory of stable polynomials.
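For instance, for f(z_1, z_2) = z_1 z_2 - 1 the variety can be parametrized by z_1 = 1/z_2, and since Im(1/z_2) = -Im(z_2)/|z_2|^2, the imaginary projection avoids the open positive orthant, consistent with f being stable. A numerical illustration (the sampling grid is an arbitrary choice, not from the thesis):

```python
def imaginary_projection_samples(n=30):
    """Sample the imaginary projection I(f) for f(z1, z2) = z1*z2 - 1 by
    parametrizing its variety as z1 = 1/z2 and recording (Im z1, Im z2)."""
    pts = []
    for a in range(-n, n + 1):
        for b in range(-n, n + 1):
            z2 = complex(a / 10.0, b / 10.0)
            if z2 == 0:
                continue
            z1 = 1.0 / z2
            pts.append((z1.imag, z2.imag))
    return pts

# f is stable, so its imaginary projection must avoid the open positive orthant
pts = imaginary_projection_samples()
```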
Interested in the number and structure of components of the complement of imaginary projections, we show as a key result that there are only finitely many components which are all convex. This offers a connection to the theory of amoebas and coamoebas as well as to the theory of hyperbolic polynomials.
For hyperbolic polynomials, we show that hyperbolicity cones coincide with components of the complement of imaginary projections, which provides a strong structural relationship between these two sets. Based on this, we prove a tight upper bound for the number of hyperbolicity cones and, respectively, for the number of components of the complement in the case of homogeneous polynomials. Beside this, we investigate various aspects of imaginary projections and compute imaginary projections of several classes explicitly.
Finally, we initiate the study of a conic generalization of stability by considering polynomials whose roots have no imaginary part in the interior of a given real, n-dimensional, proper cone K. This appears to be very natural, since many statements known for univariate and multivariate stable polynomials can be transferred to the conic situation, like the Hermite-Biehler Theorem and the Hermite-Kakeya-Obreschkoff Theorem. When considering K to be the cone of positive semidefinite matrices, we prove a criterion for conic stability of determinantal polynomials.
As an integral part of ALICE, the dedicated heavy ion experiment at CERN’s Large Hadron Collider, the Transition Radiation Detector (TRD) contributes to the experiment’s tracking, triggering and particle identification. Central element in the TRD’s processing chain is its trigger and readout processor, the Global Tracking Unit (GTU). The GTU implements fast triggers on various signatures, which rely on the reconstruction of up to 20 000 particle track segments to global tracks, and performs the buffering and processing of event raw data as part of a complex detector readout tree.
The high data rates the system has to handle and its dual use as trigger and readout processor with shared resources and interwoven processing paths require the GTU to be a unique, high-performance parallel processing system. To achieve high data taking efficiency, all elements of the GTU are optimized for high running stability and low dead time.
The solutions presented in this thesis for the handling of readout data in the GTU, from the initial reception to the final assembly and transmission to the High-Level Trigger computer farm, address all these aspects. The presented concepts employ multi-event buffering, in-stream data processing, extensive embedded diagnostics, and advanced features of modern FPGAs to build a robust high-performance system that can conduct the high-bandwidth readout of the TRD with maximum stability and minimized dead time. The work summarized here covers not only the conceptual layout of the multi-event data handling and segment control, but also its implementation, simulation, verification, operation and commissioning. It also covers the system upgrade for the second data taking period and presents an analysis of the actual system performance.
The presented design of the GTU’s input stage, which is comprised of 90 FPGA-based nodes, is built to support multi-event buffering for the data received from the 18 TRD supermodules on 1080 optical links at the full sender aggregate net bandwidth of 2.16 Tbit/s. With careful design of the control logic and the overall data path, the readout on the 18 concentrator nodes of the supermodule stage can utilize an effective aggregate output bandwidth of initially 3.33 GiB/s and, after the successful readout bandwidth upgrade, 6.50 GiB/s via 18 optical links. The high achievable readout link utilization of more than 99 % and the intermediate buffering of events on the GTU help to keep the dead time associated with local event building and readout typically below 10 %. The GTU has been used for production data taking since the start-up of the experiment and has ever since performed the event buffering, local event building and readout for the TRD in a correct, efficient and highly dependable fashion.
A one-to-one correspondence between a class of leftist trees and extended t-ary trees
(2006)
Leftist trees are a subclass of ordered trees with the property that the shortest path from any internal node to a leaf of the subtree rooted at that node always passes through the leftmost child of that node.
The present paper exhibits a one-to-one correspondence between extended t-ary trees and the class of leftist trees with admissible node degrees 0, t, 2t-1, ..., 1+t(t-1). This correspondence generalizes a result of R. Kemp.
Embedding spanning structures into the random graph G(n,p) is a well-studied problem in random graph theory, but when one turns to the random r-uniform hypergraph H^(r)(n,p) much less is known. In this thesis we will examine this topic from different perspectives, providing insights into various aspects of the theory of random graphs. Our results cover the determination of existence thresholds in two models, as well as an algorithmic approach. For the embeddings, we work with random and pseudorandom structures.
Together with Person we first notice that a general result of Riordan can be adapted from random graphs to hypergraphs and provide sufficient conditions for when H^(r)(n,p) contains a given spanning structure asymptotically almost surely. As applications, we discuss several spanning structures such as cubes, lattices, spheres, and Hamilton cycles in hypergraphs.
Moreover, we study universality, i.e. when does an r-uniform hypergraph contain every hypergraph on n vertices with maximum vertex degree bounded by Δ? For H^(r)(n,p), it is shown with Person that this holds for p = ω((ln n/n)^(1/Δ)) asymptotically almost surely by combining approaches taken by Dellamonica, Kohayakawa, Rödl, and Ruciński, of Ferber, Nenadov, and Peter, and of Kim and Lee.
Any hypergraph that is universal for the family of bounded degree r-uniform hypergraphs has to contain Ω(n^(r-r/Δ)) edges. With Hetterich and Person we exploit constructions of Alon and Capalbo to obtain universal r-uniform hypergraphs with the optimal number of edges O(n^(r-r/Δ)) when r is even, r | Δ, or Δ = 2. Furthermore, we generalise the result of Alon and Asodi about optimal universal graphs for the family of graphs with at most m edges and no isolated vertices to hypergraphs.
In an r-uniform hypergraph on n vertices a tight Hamilton cycle consists of n edges such that there exists a cyclic ordering of the vertices in which the edges correspond to consecutive segments of r vertices. In collaboration with Allen, Koch, and Person we provide a first deterministic polynomial time algorithm which finds, asymptotically almost surely, tight Hamilton cycles in random r-uniform hypergraphs with edge probability at least C log^3 n/n. This result partially answers a question of Nenadov and Skorić and of Dudek and Frieze, who proved that tight Hamilton cycles exist already for p = ω(1/n) for r = 3 and p ≥ (e + o(1))/n for r ≥ 4 using a second moment argument. Moreover, our algorithm is superior to previous results of Allen, Böttcher, Kohayakawa, and Person and of Nenadov and Skorić.
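The defining window condition of a tight Hamilton cycle can be made concrete with a small checker (an illustration only; edges are given as vertex tuples):

```python
def is_tight_hamilton_cycle(n, r, edges, order):
    """Check that the cyclic ordering `order` of the n vertices certifies a
    tight Hamilton cycle in the r-uniform hypergraph with edge list `edges`:
    every cyclic window of r consecutive vertices must form an edge."""
    edge_set = {frozenset(e) for e in edges}
    windows = {frozenset(order[(i + j) % n] for j in range(r)) for i in range(n)}
    return len(windows) == n and windows <= edge_set

# toy example: the tight cycle itself on 5 vertices for r = 3
n, r = 5, 3
edges = [tuple((i + j) % n for j in range(r)) for i in range(n)]
```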
Lastly, we study the model of randomly perturbed dense graphs introduced by Bohman, Frieze and Martin, that is, the union of any n-vertex graph G_α with minimum degree at least αn and G(n,p). For any fixed α > 0, and p = ω(n^(-2/(Δ+1))), we show with Böttcher, Montgomery, and Person that G_α ∪ G(n,p) almost surely contains any single spanning graph with maximum degree Δ, where Δ ≥ 5. As in previous results concerning this model, the bound used for p is lower by a log-term in comparison to the conjectured threshold for the general appearance of such subgraphs in G(n,p) alone. The new techniques we introduce also give simpler proofs of related results in the literature on trees and factors.
Measuring information processing in neural data: The application of transfer entropy in neuroscience
(2017)
It is a common notion in neuroscience research that the brain and neural systems in general "perform computations" to generate their complex, everyday behavior (Schnitzer, 2002). Understanding these computations is thus an important step in understanding neural systems as a whole (Carandini, 2012; Clark, 2013; Schnitzer, 2002; de-Wit, 2016). It has been proposed that one way to analyze these computations is by quantifying basic information processing operations necessary for computation, namely the transfer, storage, and modification of information (Langton, 1990; Mitchell, 2011; Mitchell, 1993; Wibral, 2015). A framework for the analysis of these operations has been emerging (Lizier, 2010), using measures from information theory (Shannon, 1948) to analyze computation in arbitrary information processing systems (e.g., Lizier, 2012b). Of these measures, transfer entropy (TE) (Schreiber, 2000), a measure of information transfer, is the most widely used in neuroscience today (e.g., Vicente, 2011; Wibral, 2011; Gourevitch, 2007; Vakorin, 2010; Besserve, 2010; Lizier, 2011; Richter, 2016; Huang, 2015; Rivolta, 2015; Roux, 2013). Yet, despite this popularity, open theoretical and practical problems in the application of TE remain (e.g., Vicente, 2011; Wibral, 2014a). The present work addresses some of the most prominent of these methodological problems in three studies.
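For discrete data, the quantity at the heart of these studies can be sketched as a simple plug-in estimator with history length 1. This is an illustration only; the studies described here rely on more sophisticated estimators (e.g. nearest-neighbour techniques) and longer, optimized histories.

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """Plug-in estimate of TE(source -> target) for discrete sequences with
    history length 1:
    TE = sum over (y1, y0, x0) of p(y1, y0, x0) * log2( p(y1|y0,x0) / p(y1|y0) )."""
    triples = Counter(zip(target[1:], target[:-1], source[:-1]))
    pairs_yx = Counter(zip(target[:-1], source[:-1]))
    pairs_yy = Counter(zip(target[1:], target[:-1]))
    singles_y = Counter(target[:-1])
    n = len(target) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_full = c / pairs_yx[(y0, x0)]               # p(y1 | y0, x0)
        p_part = pairs_yy[(y1, y0)] / singles_y[y0]   # p(y1 | y0)
        te += (c / n) * log2(p_full / p_part)
    return te

# toy system: the target copies the source with a one-step delay
rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(2000)]
y = [0] + x[:-1]
te_xy = transfer_entropy(x, y)   # close to 1 bit
te_yx = transfer_entropy(y, x)   # close to 0 bits
```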
The first study presents an efficient implementation for the estimation of TE from non-stationary data. The statistical properties of non-stationary data are not invariant over time, so that TE cannot easily be estimated from such observations. Instead, the necessary observations can be collected over an ensemble of data, i.e., observations of physical or temporal replications of the same process (Gomez-Herrero, 2010). The latter approach is computationally more demanding than estimation from observations over time. The present study demonstrates how to handle this increased computational demand by presenting a highly parallel implementation of the estimator using graphics processing units.
The second study addresses the problem of estimating bivariate TE from multivariate data. Neuroscience research often investigates interactions between more than two (sub-)systems. It is common to analyze these interactions by iteratively estimating TE between pairs of variables, because a fully multivariate approach to TE estimation is computationally intractable (Lizier, 2012a; Das, 2008; Welch, 1982). Yet, the estimation of bivariate TE from multivariate data may yield spurious, false-positive results (Lizier, 2012a; Kaminski, 2001; Blinowska, 2004). The present study proposes that such spurious links can be identified by characteristic coupling motifs and the timings of their information transfer delays in networks of bivariate TE estimates. The study presents a graph algorithm that detects these coupling motifs and marks potentially spurious links. The algorithm thus partially corrects for spurious results due to multivariate effects and yields a more conservative approximation of the true network of multivariate information transfer.
The third study investigates the TE between pre-frontal and primary visual cortical areas of two ferrets under different levels of anesthesia. Additionally, the study investigates local information processing in the source and target of the TE by estimating information storage (Lizier, 2012) and signal entropy. The results of this study indicate an alternative explanation for the commonly observed reduction in TE under anesthesia (Imas, 2005; Ku, 2011; Lee, 2013; Jordan, 2013; Untergehrer, 2014), which is often attributed to changes in the underlying coupling between areas. Instead, the present study proposes that reduced TE may be due to a reduction in information generation, measured by signal entropy, in the source of the TE. The study thus demonstrates how interpreting changes in TE as evidence for changes in causal coupling may lead to erroneous conclusions. The study further discusses current best practice in the estimation of TE, namely the use of state-of-the-art estimators over approximative methods and the use of optimization procedures for estimation parameters over ad-hoc choices. It is demonstrated how not following this best practice may lead to over- or underestimation of TE, or to a failure to detect TE altogether.
In summary, the present work proposes an implementation for the efficient estimation of TE from non-stationary data, presents a correction for spurious effects in bivariate TE estimation from multivariate data, and describes current best practice in the estimation and interpretation of TE. Taken together, the work presents solutions to some of the most pressing problems in the estimation of TE in neuroscience, improving the robust estimation of TE as a measure of information transfer in neural systems.
The ALICE High-Level Trigger (HLT) is a large-scale computing farm designed and constructed for the real-time reconstruction of particle interactions (events) inside the ALICE detector. The reconstruction of such events is based on the raw data produced in collisions inside ALICE at the Large Hadron Collider. The online reconstruction in the HLT allows triggering on certain event topologies and a significant data reduction by applying compression algorithms. Moreover, it enables real-time verification of the data quality.
To receive the raw data from the various sub-detectors of ALICE, the HLT is equipped with 226 custom-built FPGA-based PCI-X cards, the H-RORCs. The H-RORC interfaces the detector readout electronics to the nodes of the HLT farm. In addition to transferring raw data, 108 H-RORCs host 216 Fast-Cluster-Finder (FCF) processors for the Time-Projection-Chamber (TPC). The TPC is the main tracking detector of ALICE and contributes, at up to 16 GB/s, over 90 % of the overall data volume. The FCF processor implements the first of two steps in the data reconstruction of the TPC. It calculates the space points and their properties from the charge clouds (clusters) created by charged particles traversing the TPC's gas volume. Those space points are not only the basis for the tracking algorithm, but also allow for a Huffman-based data compression, which reduces the data volume by a factor of 4 to 6.
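The core idea of such a cluster finder can be sketched in one dimension: threshold the pad charges, group consecutive pads above threshold, and take the charge-weighted centroid of each group as the reconstructed space point. This is a toy illustration only; the real FCF operates in hardware on the pad-time plane of the TPC with additional corrections, and all values below are made up.

```python
def find_clusters(adc, threshold=2):
    """Minimal 1D cluster finder sketch: group consecutive pads whose ADC
    value exceeds a threshold and return (centre of gravity, total charge)
    for each group; all values are illustrative."""
    clusters, current = [], []
    for pad, charge in enumerate(adc + [0]):       # sentinel closes the last cluster
        if charge > threshold:
            current.append((pad, charge))
        elif current:
            total = sum(q for _, q in current)
            centroid = sum(p * q for p, q in current) / total
            clusters.append((centroid, total))
            current = []
    return clusters

# a single charge cloud centred between pads 3 and 4:
print(find_clusters([0, 0, 0, 10, 10, 0, 0]))  # → [(3.5, 20)]
```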
The FCF processor is designed to cope with any incoming data rate up to the maximum bandwidth of the incoming optical link (160 MB/s) without creating back-pressure on the detector's readout electronics. A performance comparison with the software implementation of the algorithm shows a speedup factor of about 20 compared with one AMD Opteron 6172 core @ 2.1 GHz, the CPU type used in the HLT during the LHC Run 1 campaign. A comparison with an Intel E5-2690 core @ 3.0 GHz, the CPU type used by the HLT for the LHC Run 2 campaign, yields a speedup factor of 8.5. In total, the 216 FCF processors provide the computing performance of 4255 AMD Opteron cores or 2203 Intel cores of the previously mentioned types. The performance of the reconstruction with respect to the physics analysis is equivalent to or better than that of the official ALICE Offline clusterizer. Therefore, ALICE data taking was switched in 2011 to FCF cluster recording and compression only, discarding the raw data from the TPC. Due to the capability to compress the clusters, the recorded data volume could be increased by a factor of 4 to 6.
For the LHC Run 3 campaign, starting in 2020, the FCF builds the foundation of the ALICE data taking and processing strategy. The raw data volume (before processing) of the upgraded TPC will exceed 3 TB/s. As a consequence, online processing of the raw data and compression of the results before they enter the online computing farms are an essential and crucial part of the computing model.
Within the scope of this thesis, the H-RORC card and the FCF processor were developed and built from scratch. The thesis covers the conceptual design, the optimisation and implementation, as well as the verification, and is completed by performance benchmarks and experiences from real data taking.
Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental and stochastic factors. Although many theories have been established during the last decades, none of them is able to fully describe the complex mechanisms that lead to ageing. Generally, biological processes and environmental factors lead to molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems are at work, including the repair, remodelling and degradation of damaged or impaired components. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic, systems-biological investigations are required. Hence, the processes leading to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To achieve a general overview of the different biochemical processes affected during ageing in P. anserina, an initial comprehensive investigation was carried out, aimed at revealing genes significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Sophisticated and comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and characterise the autophagy pathway, its associated components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology-based comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified which correspond to the different sub-pathways of autophagy. By means of the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, different proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
The presented biological network provides a systems-biological view of autophagy and enables further studies aimed at analysing the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential methods for intervening in the ageing process and for extending the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
Recently, Aumüller and Dietzfelbinger proposed a version of dual-pivot Quicksort, called "Count", which is optimal among dual-pivot versions with respect to the average number of key comparisons required. In this master's thesis we provide a further probabilistic analysis of "Count". We derive an exact formula for the average number of swaps needed by "Count", as well as an asymptotic formula for the variance of the number of swaps and a limit law. For the number of key comparisons, the asymptotic variance and a limit law are likewise identified. Finally, we consider both complexity measures jointly and determine their asymptotic correlation.
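For orientation, an instrumented dual-pivot Quicksort might look as follows. Note that this sketch uses the naive "always compare with the smaller pivot first" classification, not the adaptive "Count" strategy analysed in the thesis, and in this list-based form only the initial pivot swap is counted as a swap.

```python
def dual_pivot_sort(a, stats):
    """Dual-pivot Quicksort with comparison counting. Every element is
    classified against the smaller pivot p first (naive strategy, NOT
    "Count"); only the pivot swap is counted in this list-based sketch."""
    if len(a) < 2:
        return list(a)
    p, q = a[0], a[1]
    stats["cmps"] += 1
    if p > q:
        p, q = q, p
        stats["swaps"] += 1
    small, mid, large = [], [], []
    for x in a[2:]:
        stats["cmps"] += 1              # compare with the smaller pivot p
        if x < p:
            small.append(x)
        else:
            stats["cmps"] += 1          # compare with the larger pivot q
            if x > q:
                large.append(x)
            else:
                mid.append(x)
    return (dual_pivot_sort(small, stats) + [p] +
            dual_pivot_sort(mid, stats) + [q] +
            dual_pivot_sort(large, stats))

stats = {"cmps": 0, "swaps": 0}
result = dual_pivot_sort([5, 3, 8, 1, 9, 2, 7, 5, 0, 6], stats)
```

The point of "Count" is precisely to choose, per element, which pivot to compare with first based on the comparison outcomes seen so far, which lowers the average comparison count below that of the fixed strategy shown here.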
The future heavy-ion experiment CBM (FAIR/GSI, Darmstadt, Germany) will focus on measurements of very rare probes, which require the experiment to operate at extreme interaction rates of up to 10 MHz. Due to the high multiplicity of charged particles in heavy-ion collisions, this will lead to data rates of up to 1 TB/s. In order to match achievable archival rates, this data flow has to be reduced online by more than two orders of magnitude.
The rare observables feature complicated trigger signatures and require full event topology reconstruction to be performed online. The huge data rates, together with the absence of simple hardware triggers, make the traditional latency-limited trigger architectures typical of conventional experiments inapplicable to CBM. Instead, CBM will employ a novel data acquisition concept with autonomous, self-triggered front-end electronics.
While in conventional experiments with event-by-event processing the association of detector hits with the corresponding physical event is known a priori, this is not the case for the CBM experiment, where the reconstruction algorithms must be modified to process data that is not associated with events. At the highest interaction rates, the time difference between hits belonging to the same collision will be larger than the average time difference between two consecutive collisions. Thus, events will overlap in time, and because of this possible overlap one needs to analyze time-slices rather than isolated events.
The time-stamped data will be shipped and collected into a readout buffer in the form of a time-slice of a certain length. The time-slice data will then be delivered to a large computer farm, where the archival decision is made after online reconstruction. In this case, the association of hit information with physical events must be performed in software and requires full online event reconstruction not only in space but also in time: so-called 4-dimensional (4D) track reconstruction.
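Why event association breaks down at high rates can be seen with a toy event builder that groups hits purely by time gaps (the hit times, gap threshold, and grouping rule are illustrative assumptions, not the CBM algorithm):

```python
def build_event_candidates(hit_times, max_gap):
    """Group time-sorted hits into event candidates: a new candidate
    starts whenever the gap to the previous hit exceeds max_gap.
    When events overlap in time, this naive splitting merges them into
    one candidate, which is why full 4D reconstruction is needed."""
    events = []
    current = []
    last = None
    for t in sorted(hit_times):
        if last is not None and t - last > max_gap:
            events.append(current)
            current = []
        current.append(t)
        last = t
    if current:
        events.append(current)
    return events

# Two well-separated collisions are split correctly ...
separated = build_event_candidates([0.0, 0.1, 5.0, 5.2], max_gap=1.0)
# ... but two overlapping collisions merge into a single candidate.
overlapping = build_event_candidates([0.0, 0.1, 0.5, 0.6], max_gap=1.0)
```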
Within the scope of this work, a 4D track finder algorithm for online reconstruction has been developed. The 4D CA track finder reproduces the performance and speed of the traditional event-based algorithm. It is both vectorized (using SIMD instructions) and parallelized (between CPU cores), and shows strong scalability on many-core systems: a speed-up factor of 10.1 has been achieved on a CPU with 10 hyper-threaded physical cores.
The 4D CA track finder algorithm is ready for the time-slice-based reconstruction in the CBM experiment.
Modern mobile devices offer a great variety of data that can be recorded. This broad range of information makes it possible to tailor applications more closely to the needs of a user. Various kinds of context information can be collected, for example about position or movement. Besides the integrated sensors, a broad range of additional sensors is available that can be connected to a mobile device; these offer, for example, the possibility to measure physiological signals of a user. The human body produces a broad range of different signals, which have been used in several studies to infer the state of a user and which allow deeper insight into the user's emotional or mental state. Electrodermal activity gives feedback about the current arousal level of a user. Heart rate and heart rate variability can give an estimate of the valence and mental load of a user. Several models exist for inferring different emotional states from information such as valence and arousal. Russell defined a two-dimensional model that uses valence and arousal to define affective states. Yerkes and Dodson described a curve that expresses the relationship between arousal and the performance of a user. Several examples exist that use physiological signals to determine the user state for tailoring and adapting applications. At the time of this work, most of these examples did not address the usage of physiological signals for user state estimation in mobile applications and mobile scenarios. Mobile scenarios pose several challenges that need to be addressed: factors that influence physiological signals, such as movement, have to be controlled, and the user might be interrupted and influenced by environmental aspects. The combination of physiological data and context information might improve the interpretation of user state in mobile scenarios.
In this work, we present a model that addresses the challenges of usage in mobile scenarios in order to provide an estimate of user state to mobile applications. To address a broad range of mobile applications, affective and cognitive state are provided as output. As input, heart rate and electrodermal activity are used, as well as context information about movement and performance. Electrodermal activity is measured by a simple sensor that can be worn as a wristband; heart rate is measured by a chest strap as used in sports. The input channels are transformed into affective and cognitive state by a fuzzy-rule-based approach. With the help of fuzzy logic, uncertainty can be expressed and the data processed continuously. First, the input channels are fuzzified by defined membership functions. Then a first fuzzy rule set transforms the input signals into values for valence, arousal, and mental load. In a second step, these values and the context information are transformed by another fuzzy rule set into values for affective and cognitive state. The affective state is based on the model of Russell, where valence and arousal determine different emotional states. The model outputs eight different affective states (alarmed, excited, happy, relaxed, tired, bored, sad, and frustrated), each of which can take a high, medium, low, or very low value. The cognitive state is determined from mental load and context information about performance and movement; its output value can be very high, high, medium, or low. The model was implemented as a background service for Android devices. Different applications have been used to evaluate the model: it has been integrated into a multiplayer space shooter game called "Zone of Impulse", which mainly benefits from the affective state, while the cognitive state is addressed more by applications like a simple vocabulary trainer that adapts its difficulty to the user state.
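The two-stage fuzzy transformation described above can be sketched in miniature (the membership functions, thresholds, and the reduction of the eight states to Russell's four quadrants are illustrative assumptions, not the thesis' actual rule sets):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_arousal(eda):
    """Fuzzify a normalised electrodermal-activity level in [0, 1]."""
    return {"low": tri(eda, -0.5, 0.0, 0.6),
            "high": tri(eda, 0.4, 1.0, 1.5)}

def fuzzify_valence(hrv):
    """Fuzzify a normalised heart-rate-variability score in [0, 1]."""
    return {"negative": tri(hrv, -0.5, 0.0, 0.6),
            "positive": tri(hrv, 0.4, 1.0, 1.5)}

def affective_state(eda, hrv):
    """Rule set in the spirit of Russell's circumplex: each
    (valence, arousal) combination activates one state, with rule
    strength equal to the minimum of the two memberships."""
    arousal, valence = fuzzify_arousal(eda), fuzzify_valence(hrv)
    rules = {("positive", "high"): "excited",
             ("positive", "low"): "relaxed",
             ("negative", "high"): "alarmed",
             ("negative", "low"): "bored"}
    scores = {state: min(valence[v], arousal[a])
              for (v, a), state in rules.items()}
    return max(scores, key=scores.get)
```

Because the memberships are continuous, small changes in the physiological inputs shift the rule strengths gradually instead of flipping the state at a hard threshold.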
A study was conducted to evaluate different aspects of the model and to investigate its suitability for mobile scenarios. The game "Zone of Impulse" and the vocabulary trainer were investigated in different configurations: versions with the integrated model were compared to versions of the applications without the model, as well as to versions of the model without context information. In total, 41 participants took part in the study. Some of the participants performed the tasks in a mobile scenario, walking around several streets; the remaining participants performed them in a controlled environment in a sitting position. Different aspects were collected with ratings and questionnaires. Overall, participants reported that they did not feel impaired by the sensors they had to wear. The results showed that the combination of physiological data and context information had an advantage over versions without context information in part of the ratings. A comparison between versions with and without the model showed that the subjective mental load ratings were significantly better for the version with the model. Subjective ratings for aspects like fun, overstrain, and support were mixed. When comparing the application versions in indoor and outdoor scenarios, no significant difference could be found, which suggests that there is no loss of interpretation quality in outdoor scenarios. The results also showed that the model seems to be robust enough to compensate for the loss of an input channel, as there was no significant difference between application versions with the fully integrated model and versions with one channel lost. With the model developed in this work, context information and physiological data were combined to improve user state estimation, and pitfalls of user state estimation in mobile scenarios are overcome by this combination.
However, the model has only been evaluated with a limited number of applications and a limited subset of the situations that mobile scenarios offer.
Data-parallel programming is more important than ever since serial performance is stagnating. All mainstream computing architectures have been and still are enhancing their support for general-purpose computing with explicitly data-parallel execution. For CPUs, data-parallel execution is implemented via SIMD instructions and registers. GPU hardware works very similarly, allowing very efficient parallel processing of wide data streams with a common instruction stream.
These advances in parallel hardware have not been accompanied by the necessary advances in established programming languages, so developers have been unable to state explicitly the data-parallelism inherent in their algorithms. Some approaches from GPU and CPU vendors have introduced new programming languages, language extensions, or dialects enabling explicit data-parallel programming. However, it is arguable whether the programming models introduced by these approaches deliver the best solution; some of them suffer from shortcomings caused by a hardware-specific focus in the language design, and there are several programming problems for which they are not expressive and flexible enough.
This thesis presents a solution tailored to the C++ programming language. The concepts and interfaces are presented specifically for C++, but as abstractly as possible, facilitating adoption by other programming languages as well. The approach builds upon the observation that C++ is very expressive in terms of types. Types communicate intention and semantics to developers as well as compilers; they allow developers to state their intentions clearly and allow compilers to optimize via the explicitly defined semantics of the type system.
Since data-parallelism affects data structures and algorithms, it is not sufficient to enhance the language's expressivity in only one area. The definition of types whose operators express data-parallel execution automatically enhances the possibilities for building data structures. This thesis therefore defines low-level, but fully portable, arithmetic and mask types required to build a flexible and portable abstraction for data-parallel programming. On top of these, it presents higher-level abstractions such as fixed-width vectors and masks, abstractions for interfacing with containers of scalar types, and an approach for automated vectorization of structured types.
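The interplay of arithmetic and mask types can be illustrated in a language-neutral way (a toy Python rendering of the concept only; the real Vc types are C++ classes that map to SIMD registers, and nothing below reflects Vc's actual API):

```python
class Mask:
    """Result of an element-wise comparison: one boolean per lane."""
    def __init__(self, bits):
        self.bits = list(bits)

    def any(self):
        return any(self.bits)

class Vec:
    """Toy fixed-width data-parallel type: arithmetic acts element-wise,
    comparisons yield masks instead of a single bool."""
    width = 4

    def __init__(self, values):
        assert len(values) == self.width
        self.values = list(values)

    def __add__(self, other):
        return Vec([a + b for a, b in zip(self.values, other.values)])

    def __lt__(self, other):
        return Mask([a < b for a, b in zip(self.values, other.values)])

    def where(self, mask, other):
        """Blend: take self where the mask is set, other elsewhere."""
        return Vec([a if m else b
                    for a, m, b in zip(self.values, mask.bits, other.values)])

a = Vec([1, 5, 3, 7])
b = Vec([4, 4, 4, 4])
clamped = a.where(a < b, b)   # element-wise min via a mask blend
```

The key design point mirrored here is that branching on per-element conditions is replaced by mask-controlled blends, which is exactly what SIMD hardware executes efficiently.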
The Vc library is an implementation of these types. I developed the Vc library for researching data-parallel types and as a solution for explicitly data-parallel programming. This thesis discusses a few example applications using the Vc library, showing the real-world relevance of the library. The Vc types enable parallelization of search algorithms and data structures in a way unique to this solution, which demonstrates the importance of using the type system for expressing data-parallelism. Vc has also become an important building block in the high energy physics community; the community's reliance on Vc shows that the library and its interfaces were developed to production quality.
In the first part of the thesis, we show that the payment flow of a linear tax on trading gains from a security with a semimartingale price process can be constructed for all càglàd and adapted trading strategies. It is characterized as the unique continuous extension of the tax payments for elementary strategies with respect to the convergence "uniformly in probability". In this framework, we prove that under quite mild assumptions dividend payoffs almost surely have a negative effect on the investor's after-tax wealth if the riskless interest rate is always positive. In addition, we give an example of tax-efficient strategies for which the tax payment flow can be computed explicitly.
In the second part of the thesis, we investigate the impact of capital gains taxes on optimal investment decisions in a quite simple model. Namely, we consider a risk-neutral investor who owns one risky stock, which she assumes to have a lower expected return than the riskless bank account, and we determine the optimal stopping time at which she sells the stock to invest the proceeds in the bank account up to the maturity date. In the case of linear taxes and a positive riskless interest rate, the problem is nontrivial because at the selling time the investor has to realize book profits, which triggers tax payments. We derive a boundary, continuous and increasing in time and decreasing in the volatility of the stock, such that the investor sells the stock the first time its price is smaller than or equal to this boundary.
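The trade-off at the heart of the stopping problem, realizing a taxed book profit now versus deferring the tax, can be seen in a toy discrete-time computation (the tax rate, interest rate, and the assumption that interest gains are taxed at the same linear rate are illustrative; the thesis works in a continuous-time model with a random stock price):

```python
def after_tax_wealth(sell_now, price, book_value, tax, rate, periods):
    """Wealth at maturity for a stock whose price stays constant at
    `price` (a deliberately degenerate case: with a random price the
    optimal selling boundary of the thesis becomes nontrivial).
    Selling now realizes the gain, pays the linear tax, and earns
    (taxed) interest; selling at maturity defers the tax but earns
    no interest in the meantime."""
    if sell_now:
        wealth = price - tax * (price - book_value)
        for _ in range(periods):
            interest = rate * wealth
            wealth += interest - tax * interest   # interest gains taxed too
        return wealth
    return price - tax * (price - book_value)

early = after_tax_wealth(True, price=120.0, book_value=100.0,
                         tax=0.25, rate=0.03, periods=10)
late = after_tax_wealth(False, price=120.0, book_value=100.0,
                        tax=0.25, rate=0.03, periods=10)
# With a constant price and positive interest, selling immediately
# dominates: the deferred tax cannot offset the forgone interest.
```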
The central objects of this dissertation are translation surfaces. These are Riemann surfaces that arise from polygons embedded in the Euclidean plane by gluing parallel sides of equal length. Two translation surfaces are equal if the polygons can be transformed into one another by "cutting and regluing via translations". The group GL_2(R) acts on the set of translation surfaces via the linear maps on the polygons. The stabiliser of a translation surface X under this action is called the Veech group of X and is denoted SL(X). The Veech group is a discrete subgroup of SL_2(R) and hence a Fuchsian group.
Fuchsian groups are divided into elementary and non-elementary groups according to their limit set. The latter are further subdivided into groups of the first and of the second kind. Fuchsian groups of finite covolume are called lattices and are exactly the finitely generated groups of the first kind. Translation surfaces whose Veech group is a lattice are called Veech surfaces and are of particular interest, since the Veech dichotomy holds for them.
A finer measure of the size of a Fuchsian group is the critical exponent. It is defined as the infimum of all real numbers for which the Poincaré series converges, and for every infinite Fuchsian group it lies between 0 and 1. The main goal of the dissertation is the proof of Theorem 1: There exist translation surfaces for which the critical exponent of the Veech group lies strictly between 1/2 and 1.
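For orientation, the critical exponent of a Fuchsian group Γ acting on the hyperbolic plane can be written via the Poincaré series as

```latex
\delta(\Gamma) \;=\; \inf\Bigl\{\, s \ge 0 \;:\; \sum_{\gamma \in \Gamma} e^{-s\, d(o,\gamma o)} < \infty \Bigr\},
```

where d denotes the hyperbolic distance and o is an arbitrary base point; the value is independent of the choice of o.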
The critical exponent of an elementary group is at most 1/2, so translation surfaces with elementary Veech groups are excluded as candidates for the theorem. The critical exponent of a lattice is 1, so Veech surfaces are ruled out as well.
Until 2003, lattices were the only known non-elementary Veech groups. McMullen classified the Veech surfaces of genus 2 and showed that every such surface with only one singularity lies in the GL_2(R)-orbit of the surface L_D, which arises from an L-shaped polygon with suitable side lengths depending on D.
While even today no translation surface with a Veech group of the second kind is known, McMullen and, independently, Hubert and Schmidt found constructions of infinitely generated Veech groups of the first kind. An estimate of the critical exponent of these groups remained an important open question for ten years; it is now answered by Theorem 1.
Central to the construction of Hubert and Schmidt are special points, the connection points. Hubert and Schmidt construct translation surfaces whose Veech groups are commensurable to the stabiliser SL(X;P) of such a point P and thus have the same critical exponent. For connection points with infinite SL(X)-orbit (such points are called non-periodic), SL(X;P) is infinitely generated and of the first kind.
We prove Theorem 1 by showing that for every D congruent to 0 mod 4 that is not a square, and for every non-periodic connection point P in L_D, the critical exponent of the group SL(L_D;P) lies strictly between 1/2 and 1.
A natural question in this context is the dependence on P: points Q in the SL(L_D)-orbit of P are likewise non-periodic connection points, and the associated groups SL(L_D;P) and SL(L_D;Q) are conjugate to each other. Chapter 4 is therefore devoted to determining the orbits of non-periodic connection points.
The connection points have the form P=(x_r+x_i·w, y_r+y_i·w) with x_r, x_i, y_r, y_i in Q. We show that the common denominator N(P) of these (reduced) fractions is an invariant of the orbit. It follows that:
Theorem 2. There are infinitely many distinct orbits of connection points of L_D.
We know how the horizontal and vertical shears A and B in SL(L_D) act. In the special case D=8 these two elements generate the whole group, and we give one procedure each for finding a lower and an upper bound on the number of orbits of non-periodic connection points P with fixed denominator N(P). With these we show:
Theorem 3. The set of connection points P with fixed value N(P) decomposes into a finite number of SL(L_8)-orbits.
In the proof of Theorem 1 it is necessary to show the non-amenability of a graph. Since we have very little information about its structure in our concrete situation, we develop the following method in Chapter 1:
Theorem 4. Let G be a graph that can be turned, by removing edges, into a forest G′ without leaves in which the supremum of the lengths of connected valence-2 subgraphs of G′ is bounded. Then G is not amenable.
To apply this method, we assign to each vertex P of G a complexity measure s(P) and verify that, under the action of words in powers of A and B, this value "tends to grow" with increasing word length.
Time series of spontaneously occurring topographies of electric fields on the scalp, measured by electroencephalography (EEG), show periods ("EEG microstates") during which the topography is quasi-stable. These EEG microstates are usually analysed by assigning the EEG topographies observed at specific time points to a small number of prototypical topographies ("maps"). This yields a discrete map sequence.
To describe the structure of the transition probabilities in experimental map sequences, these sequences are modelled by a reduced Markov chain with only one parameter per map. The Markov chains can be constructed by means of two particular stochastic processes. One process defines random intervals that are randomly assigned to the different maps. The other process determines random sampling times at which the map of the current interval is read off.
Besides motivating and presenting the Markov chain model, this thesis proposes and discusses estimators for the model parameters and derives their asymptotic variances. Moreover, a goodness-of-fit test is performed and variations of the model are investigated.
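The two-process construction of the map sequence can be sketched as follows (the exponential interval lengths, the uniform map assignment, and the regular sampling grid are illustrative assumptions, not the exact processes of the thesis):

```python
import random

def simulate_map_sequence(maps, mean_interval, sample_step, n_samples, seed=0):
    """One process lays down random intervals, each labelled with a map
    drawn at random; a second process samples at regular steps and reads
    off the label of the interval covering the sampling time."""
    rng = random.Random(seed)
    t_end, label = 0.0, None
    sequence = []
    t_sample = 0.0
    for _ in range(n_samples):
        while t_sample >= t_end:               # advance to the covering interval
            t_end += rng.expovariate(1.0 / mean_interval)
            label = rng.choice(maps)
        sequence.append(label)
        t_sample += sample_step
    return sequence

seq = simulate_map_sequence(["A", "B", "C", "D"], mean_interval=80.0,
                            sample_step=4.0, n_samples=1000)
# Intervals long relative to the sampling step produce runs of repeated
# maps, i.e. the quasi-stable microstates seen in the data.
```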
Ultrarelativistic Quantum Molecular Dynamics (UrQMD) is a physics model describing the transport, collision, scattering, and decay of nuclear particles. The UrQMD framework has been in use for nearly 20 years since its first development. During this period, computing aspects, code design, and computational efficiency were of minor interest. Nowadays an additional issue arises from the fact that the run time of the framework no longer diminishes with new hardware generations.
The current development in computing hardware is mainly focused on parallelism. Especially in scientific applications, a high degree of parallelisation can be achieved thanks to the superposition principle. This thesis shows how modern design criteria and algorithm redesign are applied to physics frameworks. The redesign, with a special emphasis on many-core architectures, allows for significant improvements in execution speed.
The most time-consuming part of UrQMD is a newly introduced relativistic hydrodynamic phase. The algorithm used to simulate the hydrodynamic evolution is SHASTA. As the sequential form of SHASTA is successfully applied in various simulation frameworks for heavy-ion collisions, its possible parallelisation is analysed, and two different implementations of SHASTA are presented.
The first one is an improved sequential implementation. By applying a more concise design and avoiding unnecessary memory copies, the execution time was reduced to half of the FORTRAN version's execution time, and memory usage was reduced by 80% compared to the original version.
The second implementation concentrates fully on many-core architectures and deviates significantly from the classical implementation. Contrary to the sequential implementation, it follows the recalculate-instead-of-look-up paradigm. By this means, execution could be accelerated by a factor of up to 460 on GPUs.
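The recalculate-instead-of-look-up paradigm can be shown on a much simpler stencil than SHASTA (a 1D diffusion toy of my own; only the structural contrast between a shared flux array and per-cell recomputation carries over):

```python
def diffuse_lookup(u, alpha):
    """Classical style: precompute all fluxes into a temporary array,
    then update the cells (extra memory traffic, shared intermediate)."""
    flux = [alpha * (u[i + 1] - u[i]) for i in range(len(u) - 1)]
    return ([u[0] + flux[0]]
            + [u[i] + flux[i] - flux[i - 1] for i in range(1, len(u) - 1)]
            + [u[-1] - flux[-1]])

def diffuse_recompute(u, alpha):
    """Recalculate style: each cell recomputes the fluxes it needs from
    its neighbours. Redundant arithmetic, but no shared temporary, so
    every cell update is independent and maps well onto GPU threads."""
    n = len(u)
    def flux(i):
        return alpha * (u[i + 1] - u[i])
    out = []
    for i in range(n):
        left = flux(i - 1) if i > 0 else 0.0
        right = flux(i) if i < n - 1 else 0.0
        out.append(u[i] + right - left)
    return out

u = [0.0, 0.0, 1.0, 0.0, 0.0]
same = diffuse_lookup(u, 0.1) == diffuse_recompute(u, 0.1)
```

Both variants compute the same result; on GPUs the redundant flux evaluations are cheaper than the memory traffic of the shared array, which is what drives the speed-up reported above.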
Additionally, a stability analysis of the UrQMD model is presented. Applying metaprogramming, UrQMD is compiled and executed in a massively parallel setup. The resulting simulation data of all parallel UrQMD instances were then gathered and analysed. UrQMD was thus shown to be highly stable with respect to the uncertainty of experimental data.
As a further application of modern programming paradigms, a prototypical implementation of the worldline formalism is presented. This formalism allows for a direct calculation of Feynman integrals and therefore constitutes an interesting enhancement of the UrQMD model. Its massively parallel implementation on GPUs is examined.
Spin(9)-invariant valuations
(2013)
The first aim of this thesis is to give a Hadwiger-type theorem for the exceptional Lie group Spin(9). The space of Spin(9)-invariant k-homogeneous valuations is studied through the construction of an exact sequence involving some spaces of differential forms. We then present a description of the spin representation using the properties of the 8-dimensional division algebra of the octonions. Using this description as well as representation-theoretic formulas, we compute the dimensions of the spaces of differential forms appearing in the exact sequence and hence obtain the dimensions of the spaces of k-homogeneous Spin(9)-invariant valuations for k=0,1,...,16.
In the second part of this work, we construct a new element for a basis of one of these spaces. Clearly, the k-th intrinsic volume is also Spin(9)-invariant. The last chapter of this work presents the construction of a new 2-homogeneous Spin(9)-invariant valuation. On a Riemannian manifold (M,g), we construct a valuation by integrating the curvature tensor over the disc bundle. We associate to this valuation on M a family of valuations on the tangent spaces and show that these valuations are even and homogeneous of degree 2. Moreover, since the valuation on M is invariant under the action of the isometry group of M, the induced valuation on the tangent space at a point p in M is invariant under the action of the stabiliser of p, for all p in M. In the special case where M is the octonionic projective plane, this construction yields an even, 2-homogeneous, Spin(9)-invariant valuation whose Klain function is not constant, i.e. which is linearly independent of the second intrinsic volume.
This thesis presents various algorithms which have been developed for on-line event reconstruction in the CBM experiment at GSI, Darmstadt, and the ALICE experiment at CERN, Geneva. Although the experiments are different (CBM is a fixed-target experiment with forward geometry, while ALICE has a typical collider geometry), they share common aspects where reconstruction is concerned.
The thesis describes:
— general modifications to the Kalman filter method, which allow one to accelerate, improve, and simplify existing fit algorithms;
— algorithms for track fitting in the CBM and ALICE experiments, including a new method for track extrapolation in a non-homogeneous magnetic field;
— algorithms for primary and secondary vertex fitting in both experiments, in particular a new method for the reconstruction of decayed particles;
— a parallel algorithm for on-line tracking in the CBM experiment;
— a parallel algorithm for on-line tracking in the High Level Trigger of the ALICE experiment;
— the realisation of the track finders on modern hardware, such as SIMD CPU registers and GPU accelerators.
All the presented methods have been developed by or with the direct participation of the author.
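The Kalman filter at the core of these fit algorithms reduces to a few lines for a straight-line track in one coordinate (a deliberately minimal sketch; the fitters described above handle a non-homogeneous magnetic field, material effects, and SIMD parallelism, none of which appear here):

```python
def kalman_line_fit(measurements, dz, sigma2):
    """Fit a straight-line track x(z) = x0 + t*z to position
    measurements taken at equidistant z-planes with a Kalman filter.
    State: (x, t) at the current plane; sigma2 is the measurement
    variance."""
    x, t = measurements[0], 0.0
    C = [[sigma2, 0.0], [0.0, 10.0]]        # large slope variance: slope unknown
    for m in measurements[1:]:
        # Extrapolate to the next plane, transport matrix F = [[1, dz], [0, 1]].
        x = x + t * dz
        c00 = C[0][0] + 2.0 * dz * C[0][1] + dz * dz * C[1][1]
        c01 = C[0][1] + dz * C[1][1]
        C = [[c00, c01], [c01, C[1][1]]]
        # Update with the measured position m, measurement matrix H = [1, 0].
        S = C[0][0] + sigma2                # innovation variance
        K0, K1 = C[0][0] / S, C[0][1] / S   # Kalman gain
        r = m - x                           # residual
        x += K0 * r
        t += K1 * r
        C = [[(1.0 - K0) * C[0][0], (1.0 - K0) * C[0][1]],
             [C[0][1] - K1 * C[0][0], C[1][1] - K1 * C[0][1]]]
    return x, t

# Noise-free hits on the line x = 1 + 0.5 z, planes at z = 0, 1, ..., 4.
x, t = kalman_line_fit([1.0, 1.5, 2.0, 2.5, 3.0], dz=1.0, sigma2=0.01)
```

Because the filter processes one hit at a time with a fixed-size state, it vectorizes and parallelizes naturally over many track candidates, which is what the SIMD and GPU realisations listed above exploit.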