Mathematics
The Hoppe tree is a randomly growing discrete tree structure whose stochastic dynamics are given by the evolution of the Hoppe urn, as follows: the distinguished ball with which the Hoppe urn starts corresponds to the root of the Hoppe tree. In the Hoppe urn, this ball is drawn with probability proportional to a parameter theta > 0, while every other ball is drawn with probability proportional to 1. Whenever a ball is drawn, it is returned to the urn together with a new ball, which in the tree corresponds to attaching a new child to the node that was drawn. In the special case theta = 1 one obtains a random recursive tree.
The thesis provides expectations, variances and limit theorems for the depth, the height, the path length and the number of leaves.
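The urn dynamics described above can be sketched as a short simulation. This is a minimal illustration under the stated dynamics, not code from the thesis; all function and variable names are my own.

```python
import random

def hoppe_tree(n, theta, rng=None):
    """Grow a Hoppe tree with n non-root nodes.

    Node 0 is the root (the distinguished ball, drawn with weight theta);
    every other node has weight 1. parent[k] is the parent of node k.
    """
    rng = rng or random.Random()
    parent = {}
    for k in range(1, n + 1):
        # total weight: theta for the root plus 1 for each of the k-1 other nodes
        total = theta + (k - 1)
        if rng.random() < theta / total:
            parent[k] = 0                      # the root was drawn
        else:
            parent[k] = rng.randrange(1, k)    # a uniformly chosen non-root node
    return parent

def depth(parent, k):
    d = 0
    while k != 0:
        k = parent[k]
        d += 1
    return d

# For theta = 1 every existing node is equally likely to be drawn,
# so the construction reduces to a random recursive tree.
tree = hoppe_tree(1000, theta=1.0, rng=random.Random(1))
```

Quantities such as the depth of the last inserted node or the number of leaves can then be read off directly from the `parent` map.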
We provide a mathematical framework to model continuous-time trading in limit order markets by a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (holding limit orders with only finitely many different limit prices and rebalancing at most finitely often) can be extended in a suitable way to general continuous-time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and that every element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton's portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset follows a geometric Brownian motion, that the bid-ask spread is proportional, and that the small investor's limit orders are executed at Poisson times, we show that the optimal strategy consists in using market orders to keep the proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
This thesis surveys the current state of research on the Lovász Local Lemma (LLL) and gives an overview of the literature on constructive proofs and applications. Starting from József Beck's work on an algorithmic approach, the work of Moser and Tardos on a constructive proof of the LLL has in recent years triggered renewed intensive interest in the topic and a wealth of improvements.
Chapter 1 gives, as motivation, a short introduction to the probabilistic method. The first and second moment methods are presented as two simple techniques that make the basic idea of this proof principle clear. Pioneered by Paul Erdős, the probabilistic method describes ways of carrying out existence proofs in non-stochastic areas of mathematics by means of stochastic arguments; the Local Lemma is one such argument and originates from this idea.
Chapter 2 presents and proves several forms of the LLL and illustrates, through a number of application examples, how the LLL is typically used.
Chapter 3 describes algorithmic approaches suited to pass from the existence of certain objects (established via the LLL) to their actual construction.
Chapter 4 uses examples from the rich body of recent publications to show the activity that the work of Moser and Tardos has generated. The thesis discusses not only an application-oriented contribution by Haeupler, Saha and Srinivasan, but also a contribution by Terence Tao that views Moser's proof technique from a different angle.
Anaerobic fermentation is the degradation of organic material in the absence of oxygen and consists of four process phases (hydrolysis, acidogenesis, acetogenesis and methanogenesis). In this work, the distribution of these four process phases across the two stages of a two-stage, two-phase biogas reactor was determined precisely. This distribution is of crucial importance for future work, since it determines exactly which substances have to be taken into account in measurements and in modeling.
In 2002 the IWA task group published the ADM1 model, which accounts for all four process phases of anaerobic fermentation. The present work develops a spatially resolved model for anaerobic fermentation in which the ADM1 model is coupled with a flow model. Subsequently, a reduced simulation model for acetoclastic methanogenesis in a two-stage, two-phase biogas reactor is constructed. Using measurement data, it is shown that the simulation model reproduces the degradation of acetic acid to methane within the reactor well.
The validated model is then used to derive rules for an optimal control of the reactor, and the local methane production is used to determine the reactor's effectiveness. The information obtained can be used to optimize the biogas reactor.
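The uptake law underlying acetoclastic methanogenesis in ADM1 is of Monod type. The following is a minimal sketch of such a reduced substrate/biomass model with explicit Euler time stepping; all parameter values are illustrative placeholders, not values from the thesis.

```python
# Minimal sketch of acetoclastic methanogenesis with Monod kinetics.
# All parameter values below are illustrative placeholders.
def simulate(S0=2.0, X0=0.5, k_m=8.0, K_S=0.15, Y=0.05, dt=0.001, t_end=1.0):
    """Explicit-Euler integration of acetate S and biomass X.

    dS/dt = -k_m * S/(K_S + S) * X        (Monod uptake)
    dX/dt =  Y * k_m * S/(K_S + S) * X    (growth with yield Y)
    Methane production is taken proportional to the degraded acetate.
    """
    S, X, ch4 = S0, X0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        rho = k_m * S / (K_S + S) * X      # uptake rate
        S = max(S - rho * dt, 0.0)
        X += Y * rho * dt
        ch4 += (1.0 - Y) * rho * dt        # substrate not used for growth
    return S, X, ch4

S, X, ch4 = simulate()
```

A full ADM1 implementation tracks many more state variables and, in the spatially resolved model of the thesis, is additionally coupled to a flow model; this fragment only illustrates the kinetic building block.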
The neuron reconstruction algorithm NeuRA, developed in 2004 at the IWR Heidelberg, extracts the surface morphology or a feature skeleton of neuron cells recorded as image stacks by confocal or two-photon microscopy. First, the signal-to-noise ratio of the raw data is improved by applying a specially developed inertia-based anisotropic diffusion filter; the image is then segmented using Otsu's statistical method, and finally the surface mesh of the neuron cells is reconstructed with the regularized marching tetrahedra algorithm, or the feature skeleton is extracted with a dedicated thinning method. In related previous work, such reconstructions of neuron cell nuclei showed that these nuclei, contrary to the previously common view, are not necessarily round but can exhibit infoldings, so-called invaginations. The influence of these invaginations on the propagation of calcium ions inside such cell nuclei could be investigated systematically by corresponding numerical simulations.
To make this reconstruction method applicable to high-resolution microscopy images, the methods used in NeuRA were, in the present work, parallelized on modern graphics hardware using Nvidia CUDA, optimized, and reimplemented under the name NeuRA2. Speed-ups of up to a factor of 100 on a high-end graphics card show that modern graphics architectures are particularly well suited for parallelizing image processing operators. In particular the core of the reconstruction algorithm, the computationally very expensive inertia-based anisotropic diffusion filter, was accelerated immensely by a cluster-based implementation that allows an arbitrary number of graphics cards to be used in parallel.
Furthermore, this work generalizes the concept of NeuRA: not only can neuron cells be reconstructed from confocal or two-photon image stacks, but the surface morphology or feature skeletons of general objects can be extracted from arbitrary image stacks. The original pipeline of noise reduction, image segmentation and reconstruction is retained, but a variety of image processing and reconstruction methods is now available for the individual steps, to be selected depending on the nature of the data and the requirements of the reconstruction. Most of these methods were likewise parallelized on modern graphics hardware.
The extended reconstruction methods were used in several applications. On the one hand, surface and volume meshes were generated from confocal image stacks and computed tomography scans, which have been or will be used for various numerical simulations. Furthermore, more than twenty ancient ceramic vessels and fragments of other ancient ceramics were reconstructed. For each object the bulk density was computed, and for the completely preserved vessels also the filling volume. It could be shown that this procedure is more accurate than the methods commonly used in archaeology for determining vessel volumes. Moreover, the bulk density of the reconstructed objects turns out to depend on the respective ceramic type. An analysis of how accurately the curvature of objects can be represented by approximating triangle meshes was also carried out.
In addition, a method was developed for reconstructing the feature skeletons of living neuron cells or parts of neuron cells. In the data reconstructed this way, individual dendritic spines were imaged at high resolution. Based on these reconstructions, the length of dendrites or individual spines, the angle between dendrite branches, and the volume of individual spines can be computed automatically. With these data, the influence of pharmacological agents and of mechanical interventions in the nervous system of living laboratory animals can be investigated systematically.
Thanks to their easy extensibility and flexible usability, the reconstruction methods described can readily be adapted to future applications.
It is possible to represent each of a number of Markov chains as an evolving sequence of connected subsets of a directed acyclic graph that grow in the following way: initially, all vertices of the graph are unoccupied, particles are fed in one-by-one at a distinguished source vertex, successive particles proceed along directed edges according to an appropriate stochastic mechanism, and each particle comes to rest once it encounters an unoccupied vertex. Examples include the binary and digital search tree processes, the random recursive tree process and generalizations of it arising from nested instances of Pitman's two-parameter Chinese restaurant process, tree-growth models associated with Mallows' ϕ model of random permutations and with Schützenberger's non-commutative q-binomial theorem, and a construction due to Luczak and Winkler that grows uniform random binary trees in a Markovian manner. We introduce a framework that encompasses such Markov chains, and we characterize their asymptotic behavior by analyzing in detail their Doob-Martin compactifications, Poisson boundaries and tail σ-fields.
Within the last twenty years, the contraction method has turned out to be a fruitful approach to distributional convergence of sequences of random variables which obey additive recurrences. It was mainly invented for applications in the real-valued framework; however, in recent years, more complex state spaces such as Hilbert spaces have been under consideration. Based upon the family of Zolotarev metrics, which were introduced in the late seventies, we develop the method in the context of Banach spaces and work it out in detail for continuous and càdlàg functions on the unit interval, respectively. We formulate sufficient conditions on both the sequence under consideration and its possible limit, which satisfies a stochastic fixed-point equation, that allow one to deduce functional limit theorems in applications. As a first application we present a new and notably short proof of the classical invariance principle due to Donsker; it is based on a recursive decomposition. Moreover, we apply the method in the analysis of the complexity of partial match queries in two-dimensional search trees such as quadtrees and 2-d trees. These important data structures have been under heavy investigation since their invention in the seventies. Our results answer problems that were left open in the pioneering work of Flajolet et al. in the eighties and nineties. We expect that the functional contraction method will significantly contribute to solutions for similar problems involving additive recursions in the coming years.
One trillion is easy to represent mathematically. It is a one followed by 12 zeros: 1,000,000,000,000, written concisely as 10^12. But representable does not necessarily mean imaginable. When we try to visualize quantities of this size, the resulting images are at times surreal, but memorable.
A stochastic model for the joint evaluation of burstiness and regularity in oscillatory spike trains
(2013)
The thesis provides a stochastic model to quantify and classify neuronal firing patterns of oscillatory spike trains. A spike train is a finite sequence of time points at which a neuron has an electric discharge (spike), recorded over a finite time interval. In this work, these spike times are analyzed with regard to special firing patterns, such as the presence or absence of oscillatory activity and of clusters (so-called bursts). These bursts do not have a clear and unique definition in the literature. They are often fired in response to behaviorally relevant stimuli, e.g., an unexpected reward or a novel stimulus, but may also appear spontaneously. Oscillatory activity has been found to be related to complex information processing such as feature binding or figure-ground segregation in the visual cortex. Thus, in the context of neurophysiology, it is important to quantify and classify these firing patterns and their change under certain experimental conditions like pharmacological treatment or genetic manipulation. In neuroscientific practice, the classification is often done by visual inspection, which does not give reproducible results. Furthermore, descriptive methods are used for the quantification of spike trains without relating the extracted measures to properties of the underlying processes.
For that reason, a doubly stochastic point process model is proposed and termed 'Gaussian Locking to a free Oscillator' - GLO. The model has been developed on the basis of empirical observations in dopaminergic neurons and in cooperation with neurophysiologists. The GLO model uses as a first stage an unobservable oscillatory background rhythm which is represented by a stationary random walk whose increments are normally distributed. Two different model types are used to describe single spike firing or clusters of spikes. For both model types, the distribution of the random number of spikes per beat has different probability distributions (Bernoulli in the single spike case or Poisson in the cluster case). In the second stage, the random spike times are placed around their birth beat according to a normal distribution. These spike times represent the observed point process which has five easily interpretable parameters to describe the regularity and the burstiness of the firing patterns.
It turns out that the point process is stationary, simple and ergodic. It can be characterized as a cluster process and, in the bursty firing mode, as a Cox process. Furthermore, the distribution of the waiting times between spikes can be derived for some parameter combinations. The conditional intensity function of the point process, which is also called the autocorrelation function (ACF) in the neuroscience literature, is derived. This function arises by conditioning on a spike at time zero and measures the intensity of spikes x time units later. The autocorrelation histogram (ACH) is an estimate of the ACF. The parameters of the GLO are estimated by fitting the ACF to the ACH with a nonlinear least squares algorithm. This is a common procedure in neuroscientific practice and has the advantage that the GLO ACF can be computed for all parameter combinations and that its properties are closely related to the burstiness and regularity of the process. The precision of estimation is investigated for different scenarios using Monte Carlo simulations and bootstrap methods.
The GLO provides the neuroscientist with objective and reproducible classification rules for the firing patterns on the basis of the model ACF. These rules are inspired by visual inspection criteria often used in neuroscientific practice and thus support and complement usual analysis of empirical spike trains. When applied to a sample data set, the model is able to detect significant changes in the regularity and burst behavior of the cells and provides confidence intervals for the parameter estimates.
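The two-stage generative scheme described above can be sketched in a few lines. This is my own illustrative simulation of the stated mechanism, not code from the thesis; parameter names and values are assumptions.

```python
import random

def simulate_glo(n_beats=200, mu=0.1, sigma_beat=0.01,
                 spike_dist="poisson", rate=2.0, sigma_spike=0.005,
                 seed=0):
    """Sketch of the two-stage GLO ('Gaussian Locking to a free
    Oscillator') generative scheme. Stage 1: beat times form a random
    walk with N(mu, sigma_beat^2) increments (the hidden oscillation).
    Stage 2: each beat emits a random number of spikes (Poisson in the
    bursty mode, Bernoulli in the single-spike mode), each jittered
    around its beat by a N(0, sigma_spike^2) offset.
    """
    rng = random.Random(seed)
    t, spikes = 0.0, []
    for _ in range(n_beats):
        t += rng.gauss(mu, sigma_beat)          # next (unobserved) beat
        if spike_dist == "poisson":
            # sample Poisson(rate) by counting unit-rate exponential arrivals
            n, acc = 0, rng.expovariate(1.0)
            while acc < rate:
                n += 1
                acc += rng.expovariate(1.0)
        else:                                   # Bernoulli single-spike mode
            n = 1 if rng.random() < rate else 0
        spikes.extend(t + rng.gauss(0.0, sigma_spike) for _ in range(n))
    return sorted(spikes)

train = simulate_glo()
```

Only the sorted spike times in `train` would be observed in an experiment; the beats themselves stay hidden, which is what makes the model doubly stochastic.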
We investigate multivariate Laurent polynomials f \in \C[\mathbf{z}^{\pm 1}] = \C[z_1^{\pm 1},\ldots,z_n^{\pm 1}] with varieties \mathcal{V}(f) restricted to the algebraic torus (\C^*)^n = (\C \setminus \{0\})^n. For such Laurent polynomials f one defines the amoeba \mathcal{A}(f) of f as the image of the variety \mathcal{V}(f) under the \Log-map \Log : (\C^*)^n \to \R^n, (z_1,\ldots,z_n) \mapsto (\log|z_1|, \ldots, \log|z_n|). That is, the amoeba \mathcal{A}(f) is the projection of the variety \mathcal{V}(f) onto the (componentwise logarithmized) absolute values of its points. Amoebas were first defined in 1994 by Gelfand, Kapranov and Zelevinsky. Amoeba theory has been strongly developed since the beginning of the new century. It is related to various mathematical subjects, e.g., complex analysis or real algebraic curves. In particular, amoeba theory can be understood as a natural connection between algebraic and tropical geometry.
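The Log-map can be explored numerically by sampling points of the variety and pushing them through it. The sketch below (my own illustration, not a method from the thesis) does this for the linear polynomial f(z1, z2) = z1 + z2 + 1, whose variety is trivial to parametrize via z2 = -(1 + z1).

```python
import cmath, math

def amoeba_sample(radii, n_angles=200):
    """Crudely approximate the amoeba of f(z1, z2) = z1 + z2 + 1 by
    sampling z1 on circles of the given radii, solving for z2, and
    applying the Log-map (log|z1|, log|z2|)."""
    points = []
    for r in radii:
        for k in range(n_angles):
            z1 = r * cmath.exp(2j * math.pi * k / n_angles)
            z2 = -(1 + z1)
            if z2 != 0:                 # stay inside the torus (C*)^2
                points.append((math.log(abs(z1)), math.log(abs(z2))))
    return points

pts = amoeba_sample(radii=[0.25, 0.5, 1.0, 2.0, 4.0])

# Sanity check: z1 = 1 lies on the circle r = 1, forcing z2 = -2,
# so the Log-image contains the point (0, log 2).
```

Since z1 + z2 + 1 = 0 on the variety, every sampled point (x, y) satisfies the triangle inequalities for e^x, e^y and 1, consistent with the known shape of the amoeba of a line.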
In this thesis we investigate the geometry, topology and methods for the approximation of amoebas.
Let \C^A denote the space of all Laurent polynomials with a given, finite support set A \subset \Z^n and coefficients in \C^*. It is well known that, in general, the existence of specific complement components of the amoebas \mathcal{A}(f) for f \in \C^A depends on the choice of coefficients of f. One prominent key problem is to provide bounds on the coefficients in order to guarantee the existence of certain complement components. A second key problem is the question whether the set U_\alpha^A \subseteq \C^A of all polynomials whose amoeba has a complement component of order \alpha \in \conv(A) \cap \Z^n is always connected.
We prove such (upper and lower) bounds for multivariate Laurent polynomials supported on a circuit. If the support set A \subset \Z^n satisfies some additional barycentric condition, we can even give an exact description of the particular sets U_\alpha^A and, especially, prove that they are path-connected.
For the univariate case of polynomials supported on a circuit, i.e., trinomials f = z^{s+t} + p z^t + q (with p,q \in \C^*), we show that a couple of classical questions from the late 19th / early 20th century regarding the connection between the coefficients and the roots of trinomials can be traced back to questions in amoeba theory. This yields nice geometrical and topological counterparts for classical algebraic results. We show for example that a trinomial has a root of a certain, given modulus if and only if the coefficient p is located on a particular hypotrochoid curve. Furthermore, there exist two roots with the same modulus if and only if the coefficient p is located on a particular 1-fan. This local description of the configuration space \C^A yields in particular that all sets U_\alpha^A for \alpha \in \{0,1,\ldots,s+t\} \setminus \{t\} are connected but not simply connected.
We show that for a given lattice polytope P the set of all configuration spaces \C^A of amoebas with \conv(A) = P is a boolean lattice with respect to an order relation \sqsubseteq induced by the set theoretic order relation \subseteq. This boolean lattice turns out to have some nice structural properties and gives in particular an independent motivation for the conjecture of Passare and Rullgård about solidness of amoebas of maximally sparse polynomials. We prove this conjecture for special instances of support sets.
A further key problem in the theory of amoebas is the description of their boundaries. Obviously, every boundary point \mathbf{w} \in \partial \mathcal{A}(f) is the image of a critical point under the \Log-map (where \mathcal{V}(f) is supposed to be non-singular here). Mikhalkin showed that this is equivalent to the fact that there exists a point in the intersection of the variety \mathcal{V}(f) and the fiber \F_{\mathbf{w}} of \mathbf{w} (w.r.t. the \Log-map), which has a (projective) real image under the logarithmic Gauss map. We strengthen this result by showing that a point \mathbf{w} may only be contained in the boundary of \mathcal{A}(f), if every point in the intersection of \mathcal{V}(f) and \F_{\mathbf{w}} has a (projective) real image under the logarithmic Gauss map.
With respect to the approximation of amoebas one is in particular interested in deciding membership, i.e., whether a given point \mathbf{w} \in \R^n is contained in a given amoeba \mathcal{A}(f). We show that this problem can be traced back to a semidefinite optimization problem (SDP), basically via usage of the Real Nullstellensatz. This SDP can be implemented and solved with standard software (we use SOSTools and SeDuMi here). As main theoretic result we show that, from the complexity point of view, our approach is at least as good as Purbhoo's approximation process (which is state of the art).
In this thesis, the asymptotic behaviour of Pólya urn models is analyzed, using an approach based on the contraction method. For this, a combinatorial discrete time embedding of the evolution of the composition of the urn into random rooted trees is used. The recursive structure of the trees is used to study the asymptotic behavior using ideas from the contraction method.
The approach is applied to a couple of concrete Pólya urns that lead to limit laws with normal distributions, with non-normal limit distributions, or with asymptotic periodic distributional behavior.
Finally, an approach more in the spirit of earlier applications of the contraction method is discussed for one of the examples. A general transfer theorem of the contraction method is extended to cover this example, leading to conditions on the coefficients of the recursion that are not only weaker but also in general easier to check.
The relation between the complexity of a time-switched dynamics and the complexity of its control sequence depends critically on the concept of a non-autonomous pullback attractor. For instance, the switched dynamics associated with scalar dissipative affine maps has a pullback attractor consisting of singleton component sets. This entails that the complexity of the control sequence and of the switched dynamics, as quantified by the topological entropy, coincide. In this paper we extend the previous framework to pullback attractors with nontrivial component sets in order to gain further insight into that relation. This calls, in particular, for distinguishing two distinct contributions to the complexity of the switched dynamics. One proceeds from trajectory segments connecting different component sets of the attractor; the other proceeds from trajectory segments within the component sets. We call them “macroscopic” and “microscopic” complexity, respectively, because only the first one can be measured by our analytical tools. As a result of this picture, we obtain sufficient conditions for a switching system to be more complex than its unswitched subsystems, i.e., a complexity analogue of Parrondo’s paradox.
We study the price-setting problem of market makers under perfect competition in continuous time. We follow the classic Glosten-Milgrom model, which defines bid and ask prices as the expectation of a true value of the asset given the market maker's partial information, which includes the customers' trading decisions. The true value is modeled as a Markov process that can be observed by the customers with some noise at Poisson times.
We analyze the price-setting problem by solving a non-standard filtering problem with an endogenous filtration that depends on the bid and ask price processes quoted by the market maker. Under some conditions we show existence and uniqueness of the price processes; in a different setting we construct a counterexample to uniqueness. Further, we discuss the behavior of the spread via a convergence result and simulations.
[Obituary] Wolfgang Schwarz
(2013)
For balanced, irreducible Pólya urn models, limit theorems for the normalized number of balls of a given color are known. For a particular urn, whose dynamics are known as the "randomized play-the-winner rule", rates of convergence in Wasserstein metrics and in the Kolmogorov metric are derived within these known limit theorems for the case of a non-normally distributed limit.
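A simulation of the underlying urn dynamics can be sketched as follows. This is my own illustrative sketch of the randomized play-the-winner rule as it is classically formulated for two treatments (Wei-Durham), not code from the thesis; the success probabilities are placeholders.

```python
import random

def rpw_urn(n_draws, p_success=(0.7, 0.4), init=(1, 1), seed=42):
    """Sketch of the randomized play-the-winner urn with two colors.

    A ball is drawn with probability proportional to the current counts.
    If treatment i succeeds (probability p_success[i]), a ball of the
    same color is added; on failure a ball of the other color is added.
    The urn is balanced: each draw adds exactly one ball.
    """
    rng = random.Random(seed)
    counts = list(init)
    for _ in range(n_draws):
        i = 0 if rng.random() * sum(counts) < counts[0] else 1
        if rng.random() < p_success[i]:
            counts[i] += 1          # success: reinforce the drawn color
        else:
            counts[1 - i] += 1      # failure: reinforce the other color
    return counts

counts = rpw_urn(10_000)
frac = counts[0] / sum(counts)
# With failure probabilities q_i = 1 - p_success[i], the fraction of
# color-0 balls is classically known to converge to q_1 / (q_0 + q_1),
# here 0.6 / 0.9 = 2/3.
```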
This thesis develops a test procedure for checking homogeneity of the variance of the lifetimes of a renewal process. The procedure is based on the filtered derivative method. To derive the acceptance region, bootstrap permutations are used first, before passing to an asymptotic method; a corresponding functional limit theorem is sketched. Building on the test, a multiple filter algorithm for the precise detection of the variance change points is discussed. Finally, previously detected rate changes are incorporated into the procedure. The test and the algorithm are evaluated in simulation studies, followed by an application to EEG data.
Optimization of phase and rate parameters in a stochastic model of neuronal firing activity
(2014)
In our brain, neurons represent information by emitting spikes. The rate (number of spikes), the phase (temporal shift of the spikes) and synchronous oscillations (rhythmic discharges of neurons in the same cycle) are discussed as important signal components.
This thesis investigates how rate and phase are combined for optimal detection, and quantifies the contribution of the phase depending on the chosen parameter range.
This is studied using a stochastic spike train model that closely resembles empirical spike trains and includes the three signal components mentioned above. The ELO model ("exponential locking to a free oscillator") has two process stages: in the background, a global oscillation process generates independent, normally distributed interval segments (oscillation). At the interval boundaries, independent inhomogeneous Poisson processes start (synchrony) with exponentially decaying firing rate, which is determined by a stimulus-specific rate and phase.
In addition to an analytical determination of the optimal parameters in the case of pure rate or pure phase coding, the joint coding is analyzed by means of simulation studies.
The cones of nonnegative polynomials and sums of squares arise as central objects in convex algebraic geometry and have their origin in the seminal work of Hilbert ([Hil88]). Depending on the number of variables n and the degree d of the polynomials, Hilbert famously characterizes all cases of equality between the cone of nonnegative polynomials and the cone of sums of squares. This equality precisely holds for bivariate forms, quadratic forms and ternary quartics ([Hil88]). Since then, a lot of work has been done in understanding the difference between these two cones, which has major consequences for many practical applications such as polynomial optimization. Roughly speaking, minimizing polynomial functions (constrained as well as unconstrained) can be done efficiently whenever certain nonnegative polynomials can be written as sums of squares (see Section 2.3 for the precise relationship). The underlying reason is the fundamental difference that checking nonnegativity of polynomials is an NP-hard problem whenever the degree is greater than or equal to four ([BCSS98]), whereas checking whether a polynomial can be written as a sum of squares is a semidefinite feasibility problem (see Section 2.2). Although the complexity status of the semidefinite feasibility problem is still an open problem, it is polynomial for fixed number of variables. Hence, understanding the difference between nonnegative polynomials and sums of squares is highly desirable both from a theoretical and a practical viewpoint.
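Outside Hilbert's equality cases the two cones genuinely differ; the classical witness is Motzkin's polynomial, which is nonnegative (by the AM-GM inequality applied to its three positive terms) but not a sum of squares. A quick numeric spot-check, added here as my own illustration:

```python
# Motzkin's classical example of a nonnegative polynomial that is not
# a sum of squares: m(x, y) = x^4*y^2 + x^2*y^4 - 3*x^2*y^2 + 1.
def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# Spot-check nonnegativity on a grid (no substitute for the AM-GM proof);
# the polynomial vanishes exactly at (x, y) = (±1, ±1).
vals = [motzkin(i / 10, j / 10) for i in range(-30, 31) for j in range(-30, 31)]
```

That m is not a sum of squares is a separate classical fact and cannot be checked by point evaluation; certifying it requires, e.g., the semidefinite feasibility formulation mentioned above.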
This work is concerned with two topics at the intersection of convex algebraic geometry and optimization.
We develop a new method for the optimization of polynomials over polytopes. From the point of view of convex algebraic geometry the most common method for the approximation of polynomial optimization problems is to solve semidefinite programming relaxations coming from the application of Positivstellensätze. In optimization, non-linear programming problems are often solved using branch and bound methods. We propose a fused method that uses Positivstellensatz-relaxations as lower bounding methods in a branch and bound scheme. By deriving a new error bound for Handelman's Positivstellensatz, we show convergence of the resulting branch and bound method. Through the application of Positivstellensätze, semidefinite programming has gained importance in polynomial optimization in recent years. While it arises to be a powerful tool, the underlying geometry of the feasibility regions (spectrahedra) is not yet well understood. In this work, we study polyhedral and spectrahedral containment problems, in particular we classify their complexity and introduce sufficient criteria to certify the containment of one spectrahedron in another one.
A multiple filter test for the detection of rate changes in renewal processes with varying variance
(2014)
The thesis provides novel procedures in the statistical field of change point detection in time series.
Motivated by a variety of neuronal spike train patterns, a broad stochastic point process model is introduced. This model features points in time (change points) at which the associated event rate changes. For purposes of change point detection, filtered derivative processes (MOSUM) are studied, and functional limit theorems for these processes are derived. These results are used to support novel procedures for change point detection; in particular, multiple filters (bandwidths) are applied simultaneously in order to detect change points on different time scales.
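The filtered derivative idea compares event counts in adjacent windows of bandwidth h. The following is a schematic sketch of such a statistic on a toy spike train, with an assumed simple normalization; the thesis develops refined versions and their limit theory.

```python
import math, random

def mosum_statistic(events, t, h):
    """Schematic filtered-derivative (MOSUM) statistic at time t with
    bandwidth h: difference of the event counts in the right and left
    windows of length h, normalized by a rough standard-deviation
    estimate. Large values indicate a rate change near t."""
    left = sum(1 for e in events if t - h <= e < t)
    right = sum(1 for e in events if t <= e < t + h)
    scale = math.sqrt(left + right) or 1.0
    return (right - left) / scale

# Toy spike train: rate 10 on [0, 5), rate 40 on [5, 10) -- a rate
# change point at t = 5 that the statistic should flag.
rng = random.Random(3)
events, t = [], 0.0
while t < 10.0:
    t += rng.expovariate(10.0 if t < 5.0 else 40.0)
    if t < 10.0:
        events.append(t)

stat_at_change = mosum_statistic(events, 5.0, 1.0)
stat_far_away = mosum_statistic(events, 2.5, 1.0)
```

Scanning t over the recording and thresholding the statistic, with several bandwidths h in parallel, is the multiple-filter idea described above.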
The work presented in this thesis is devoted to two classes of mathematical population genetics models, namely the Kingman coalescent and the Beta coalescents. Chapters 2, 3 and 4 of the thesis include results concerned with the first model, whereas Chapter 5 presents contributions to the second class of models.
Based on a non-rigorous formalism called the “cavity method”, physicists have made intriguing predictions on phase transitions in discrete structures. One of the most remarkable ones is that in problems such as random k-SAT or random graph k-coloring, very shortly before the threshold for the existence of solutions there occurs another phase transition called condensation [Krzakala et al., PNAS 2007]. The existence of this phase transition seems to be intimately related to the difficulty of proving precise results on, e.g., the k-colorability threshold, as well as to the performance of message passing algorithms. In random graph k-coloring, there is a precise conjecture as to the location of the condensation phase transition in terms of a distributional fixed point problem. In this paper we prove this conjecture, provided that k exceeds a certain constant k0.
We consider versions of the FIND algorithm where the pivot element used is the median of a subset chosen uniformly at random from the data. For the median selection we assume that subsamples of size asymptotic to c·n^α are chosen, where 0 < α ≤ 1/2, c > 0 and n is the size of the data set to be split. We consider the complexity of FIND as a process in the rank to be selected, as measured by the number of key comparisons required. After normalization we show weak convergence of the complexity to a centered Gaussian process as n → ∞, which depends on α. The proof relies on a contraction argument for probability distributions on càdlàg functions. We also identify the covariance function of the Gaussian limit process and discuss path and tail properties.
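A minimal sketch of this FIND (quickselect) variant: the pivot is the median of a uniform random subsample of size roughly c·n^α. The sampling and partitioning details here are illustrative assumptions, not the exact version analysed above.

```python
import random

def find(data, k, c=1.0, alpha=0.5):
    """Return the element of rank k (0-based) in data, via sampled-median pivots."""
    data = list(data)
    while True:
        n = len(data)
        if n <= 1:
            return data[0]
        m = max(1, min(n, round(c * n ** alpha)))   # subsample size ~ c * n**alpha
        if m % 2 == 0:
            m = max(1, m - 1)                       # odd size gives a unique median
        pivot = sorted(random.sample(data, m))[m // 2]
        smaller = [x for x in data if x < pivot]
        equal = [x for x in data if x == pivot]
        if k < len(smaller):
            data = smaller                          # rank lies left of the pivot
        elif k < len(smaller) + len(equal):
            return pivot                            # pivot has the requested rank
        else:
            data = [x for x in data if x > pivot]   # rank lies right of the pivot
            k -= len(smaller) + len(equal)
```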
We consider a class of nonautonomous nonlinear competitive parabolic systems on bounded radial domains under Neumann or Dirichlet boundary conditions. We show that, if the initial profiles satisfy a reflection inequality with respect to a hyperplane, then bounded positive solutions are asymptotically (in time) foliated Schwarz symmetric with respect to antipodal points. Additionally, a related result for positive and sign-changing solutions of scalar equations with Neumann or Dirichlet boundary conditions is given. The asymptotic shape of solutions to cooperative systems is also discussed.
Thought structures of modelling task solutions and their connection to the level of difficulty
(2015)
Although efforts have been made to integrate mathematical modelling into school, studies such as PISA and TIMSS revealed weaknesses, not only among German students, in the field of mathematical modelling. There may be various reasons, ranging from educational policy via curricular issues to practical instructional concerns. Studies show that mathematical modelling has not yet arrived in everyday school classes (Blum & Borromeo Ferri, 2009, p. 47); accordingly, the proportion of mathematical modelling in everyday lessons is low (Jordan et al., 2006). From the teachers' point of view, there are difficulties which may contribute to avoiding modelling tasks in class: developing reasonable modelling tasks, estimating the task space, evaluating the task difficulty and assessing student solutions are all harder than for ordinary mathematics tasks. The project MokiMaS (transl.: modelling competency in mathematics classes of secondary education) aims at providing inter-year modelling tasks whose task space and level of difficulty are known, together with an evaluation scheme. In particular, a theory-based method has been developed to determine the level of difficulty of modelling tasks on the basis of thought structures representing the cognitive load of solution approaches. The current question is whether this method leads to a realistic rating. To pursue this question, an evaluation scheme guided by the daily assessment work of teachers has been developed in order to investigate the relation between task difficulty and student performance.
Mathematics is both a cultural discipline with a long tradition and the driving force behind many modern technologies, and thus a key discipline of the information age. On the one hand, mathematics aims to understand abstract structures and their interrelations; on the other hand, it develops powerful methods for treating questions and problems in numerous scientific disciplines. Modern applications of mathematics include, for example, data security and compression, traffic control, the valuation and optimization of financial instruments, and surgical planning in medicine.
In this brochure we present the profile of mathematics at Frankfurt in research and teaching, and in particular the degree programmes
• Bachelor Mathematik
• Master Mathematik
At the Goethe-Universität it is also possible to study mathematics for a teaching degree (L1, L2, L3, L5). ...
This thesis covers the analysis of radix sort, radix select and the path length of digital trees under a stochastic input assumption known as the Markov model.
The main results are asymptotic expansions of mean and variance as well as a central limit theorem for the complexity of radix sort and the path length of tries, PATRICIA tries and digital search trees.
Concerning radix select, a variety of different models for ranks are discussed including a law of large numbers for the worst case behavior, a limit theorem for the grand averages model and the first order asymptotic of the average complexity in the quantile model.
Some of the results are obtained by moment transfer techniques; the limit laws are based on a novel use of the contraction method suited for systems of stochastic recurrences.
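For orientation, a standard least-significant-digit radix sort over fixed-width binary keys can be sketched as follows; this is the textbook bucketing procedure whose bucket-operation counts analyses of this kind study, not code from the thesis.

```python
def radix_sort(keys, bits=8):
    """Sort non-negative integers with at most `bits` binary digits (LSD radix sort)."""
    for b in range(bits):                           # least significant bit first
        zeros = [k for k in keys if not (k >> b) & 1]
        ones = [k for k in keys if (k >> b) & 1]
        keys = zeros + ones                         # stable partition by bit b
    return keys
```

Stability of the per-bit partition is what makes the pass over successive bits produce a fully sorted output.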
In the qualitative analysis of solutions of partial differential equations, many interesting questions are related to the shape of solutions. In particular, the symmetries of a given solution are of interest. One of the first more general results in this direction was given in 1979 by Gidas, Ni and Nirenberg... The main tool in proving this symmetry and monotonicity result is the moving plane method. This method, which goes back to Alexandrov’s work on constant mean curvature surfaces in 1962, was introduced in 1971 by Serrin in the context of partial differential equations to analyze an overdetermined problem...
This work proposes to employ the (bursty) GLO model from Bingmer et al. (2011) to model the occurrence of tropical cyclones. We develop a Bayesian framework to estimate the parameters of the model and, in particular, employ a Markov chain Monte Carlo algorithm. This also allows us to develop a forecasting framework for future events.
Moreover, we assess the default probability of an insurance company that is exposed to claims that occur according to a GLO process and show that the model is able to substantially improve actuarial risk management if events occur in oscillatory bursts.
Containment problems belong to the classical problems of (convex) geometry. In the proper sense, a containment problem is the task of deciding the set-theoretic inclusion of two given sets, which is hard from both the theoretical and the practical perspective. In a broader sense, this includes, e.g., radii or packing problems, which are even harder. For some classes of convex sets there has been strong interest in containment problems. This includes containment problems of polyhedra and balls, and containment of polyhedra, which were studied in the late 20th century because of their inherent relevance in linear programming and combinatorics.
Since then, there has only been limited progress in understanding containment problems of that type. In recent years, containment problems for spectrahedra, which naturally generalize the class of polyhedra, have attracted great interest. This interest is particularly driven by the intrinsic relevance of spectrahedra and their projections in polynomial optimization and convex algebraic geometry. However, apart from the treatment of special classes or situations, there has been no comprehensive treatment of this kind of problem.
In this thesis, we provide a comprehensive treatment of containment problems concerning polyhedra, spectrahedra, and their projections from the viewpoint of low-degree semialgebraic problems, and study algebraic certificates for containment. This leads to a new and systematic approach to studying containment problems of (projections of) polyhedra and spectrahedra, and provides several new and partially unexpected results.
The main idea, which is by now common in polynomial optimization but whose particular potential for low-degree geometric problems is still a major challenge to understand, can be explained as follows. One point of view on linear programming is as an application of Farkas' Lemma, which characterizes the (non-)solvability of a system of linear inequalities. The affine form of Farkas' Lemma characterizes linear polynomials which are nonnegative on a given polyhedron. By dropping the linearity condition, one arrives at a polynomial nonnegativity question on a semialgebraic set, leading to so-called Positivstellensätze (or, more precisely, Nichtnegativstellensätze). A Positivstellensatz provides a certificate for the positivity of a polynomial function in terms of a polynomial identity. As in the linear case, these Positivstellensätze are the foundation of polynomial optimization and relaxation methods. The transition from positivity to nonnegativity remains a major challenge in real algebraic geometry and polynomial optimization.
With this in mind, several principal questions arise in the context of containment problems: Can the particular containment problem be formulated as a polynomial nonnegativity (or feasibility) problem in a suitable way? If so, how are positivity and nonnegativity related to the containment question in terms of their geometric meaning? Is there a suitable Positivstellensatz for the particular situation, yielding certificates for containment? Concerning the degree of the semialgebraic certificates, which degree is necessary, and which degree is sufficient, to decide containment?
Indeed, (almost) all containment problems studied in this thesis can be formulated as polynomial nonnegativity problems allowing the application of semialgebraic relaxations. Beyond this general result, the answers to the other questions depend highly on the specific containment problem, particularly with regard to its underlying geometry. An important point is whether the hierarchies obtained by increasing the degree in the polynomial relaxations always decide containment in finitely many steps.
We focus on the containment problem of an H-polytope in a V-polytope and of a spectrahedron in a spectrahedron. Moreover, we address containment problems concerning projections of H-polyhedra and spectrahedra. This selection is justified by the fact that the mentioned containment problems are computationally hard and their geometry is not well understood.
Triangles of groups have been introduced by Gersten and Stallings. They are, roughly speaking, a generalization of the amalgamated free product of two groups and occur in the framework of Corson diagrams. First, we prove an intersection theorem for Corson diagrams. Then, we focus on triangles of groups. It has been shown by Howie and Kopteva that the colimit of a hyperbolic triangle of groups contains a non-abelian free subgroup. We give two natural conditions, each of which ensures that the colimit of a non-spherical triangle of groups either contains a non-abelian free subgroup or is virtually solvable.
Many people have little, or only poorly differentiated, confidence in their mathematical and musical abilities. They believe that they are not good at one subject or the other (or both). Moreover, statements such as “I cannot sing” or “I never understood mathematics” are entirely socially acceptable; they need not prevent a successful career, nor will they change the opinions others hold of them.
The project “European Music Portfolio – Sounding Ways into Mathematics” (EMP-Maths) wants to change this understanding. Everyone can sing and make music, and everyone can do mathematics. Both subjects are integral parts of our lives and our society. What needs to change is the image of these two subjects, and the ability of teachers to give learners the opportunity to change this image and to regard both subjects as enriching their lives.
As an example, the workbook presents an activity in which mathematics and music are combined in a single teaching sequence. Further activities for use in school can be found in the teacher's handbook. Many more examples and suggestions are already available (see the project website), and we encourage everyone to use them. The selection in the handbook covers several central fields of mathematics and music: singing, dancing, listening, problem solving, numbers, measuring, space and shape. With this approach we aim to link the project to the core curricula of the participating countries: Germany, Greece, Romania, Slovakia, Spain, Switzerland and the United Kingdom. The examples are documented in a kind of didactic design pattern whose structure has been adapted to the requirements of the project.
The project “Sounding Ways into Mathematics” presents activities with a range of mathematical and musical content in order to offer teachers as broad a spectrum of tools, ideas and examples as possible. These activities are designed to be expandable and adaptable to different contexts as well as to the needs of each teacher and their students. Furthermore, they were developed not only to be carried out instructively by the teacher, but to be used together with the learning group and, where appropriate, modified and developed further together.
The project “Sounding Ways into Mathematics” is related to the EMP-Languages project “A Creative Way into Languages” (http://emportfolio.eu/emp/).
European Music Portfolio (EMP) – Maths: 'Sounding ways into mathematics' : teacher’s handbook
(2016)
Music and mathematics share an odd character: many people believe that they are not good at one or the other (or both). However, ‘I cannot sing’ or ‘I never understood mathematics’ will probably not keep them from having successful careers, and nor will it change the opinions others have about them.
The project ‘European Music Portfolio – Sounding Ways into Mathematics’ (EMP-Maths) aims at a different understanding of this character: everyone can sing and make music, and everyone can do mathematics. Both topics are integral parts of our life and society. What needs to be improved is our ability to give students opportunities to come to like them.
This teacher’s handbook presents activities with different mathematical and musical content in order to offer teachers resources, ideas and examples. These activities are designed to be expandable, adaptable to different contexts, and adjustable to the needs of each teacher and their students. Furthermore, these activities are not just planned to be carried out individually; a teaching unit could be used to make sense of them, or they could even be developed in connection with each other.
Apart from this teacher’s handbook, the project provides a continuing professional development (CPD) course, a webpage (http://maths.emportfolio.eu) from which all materials can be downloaded, and an online collaboration platform. A general overview of related literature and research is available in separate documents. Additional teacher booklets provide related materials and a brief overview of the theoretical background, and are the basis for the CPD courses. The project ‘Sounding Ways into Mathematics’ is related to the EMP-Languages project ‘A Creative Way into Languages’ (http://emportfolio.eu/emp/).
Population genetics studies the influence of random reproduction, recombination, migration, mutation and selection on the genetic structure of a population.
This thesis, entitled "Ancestral lines under mutation and selection", investigates the interplay of random reproduction, directional selection and two-way mutation.
To this end we consider a haploid population in which every individual carries, at every point in time, exactly one of two types from S := {0,1}, where 1 is the neutral and 0 the selectively favoured type. In the diffusion limit of very large populations we model the process of the frequency of type-0 individuals by a Wright-Fisher diffusion X := (X_t) with mutation and directional selection.
At every time s there is exactly one individual whose descendants, from some future time t > s on, will make up the entire population. We call this individual the common ancestor at time s, since all individuals at all times r > t descend from it. Let R_s denote its type at time s. We assume that the process X is in equilibrium at time 0 and define the probability that the common ancestor at time 0 has type 0 by h(x) := P(R_0 = 0 | X_0 = x). A representation of h(x) was found by Fearnhead (2002) and Taylor (2007) and proved there by mainly analytic methods. In Chapter 3 of this thesis we develop a new particle picture, the pruned lookdown ancestral selection graph (pruned LD-ASG), which is of independent interest and yields a new probabilistic interpretation of the representation of h(x).
By extending the particle picture to offspring distributions with heavy tails, and with the help of a Siegmund duality, in Chapter 4 we extend the result for h(x) from classical Wright-Fisher diffusions to Lambda-Wright-Fisher diffusions.
In Chapter 6 we establish a connection between ideas of Taylor (2007), who studied the joint process (X,R), and a process (R,V) considered by Fearnhead (2002), which describes the evolution of the type R of the common ancestor in an environment of V so-called virtual lines. We determine the joint dynamics of the triple (X,R,V). In Chapter 7 we consider a discrete picture with finite population size N and build a bridge to results of Kluth, Hustedt and Baake (2013).
Furthermore, in Chapter 5 we develop an algorithm for simulating the types of a sample of m individuals drawn from a Wright-Fisher population with mutation and selection in equilibrium. Using this algorithm we illustrate the type distribution for various parameter values and sample sizes.
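The Wright-Fisher diffusion with mutation and directional selection that underlies this work can be simulated pathwise. A minimal Euler-Maruyama sketch follows; the parameterization (selection strength s, mutation rates u01 for 0→1 and u10 for 1→0) is an illustrative convention, not necessarily the thesis's exact one.

```python
import math
import random

def simulate_wf(x0, s, u01, u10, t_end, dt=1e-3, rng=random):
    """Euler-Maruyama for dX = (s X(1-X) + u10 (1-X) - u01 X) dt + sqrt(X(1-X)) dB."""
    x = x0
    for _ in range(int(t_end / dt)):
        drift = s * x * (1 - x) + u10 * (1 - x) - u01 * x
        diff = math.sqrt(max(x * (1 - x), 0.0))
        x += drift * dt + diff * rng.gauss(0.0, math.sqrt(dt))
        x = min(1.0, max(0.0, x))            # keep the frequency in [0, 1]
    return x
```

The clamping step is a crude boundary treatment; more careful schemes handle the boundaries 0 and 1 of the frequency space explicitly.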
When we published the call for the 2016 academy on the homepages of BURG FÜRSTENECK and the Schülerakademie in autumn 2015, we did not yet suspect that we could (almost) have spared ourselves the additional advertising via the annual flyer that we send to Hessian Gymnasien and comprehensive schools with a Gymnasium branch at the turn of the year. To our surprise and great delight, we had already counted 58 student registrations by February 2016. The advertising subsequently brought us more than 20 further applications and put us in the unpleasant position of having to turn down (too) many students, or to put them off until next year.
The condensation phase transition and the number of solutions in random graph and hypergraph models
(2016)
This PhD thesis deals with two different types of questions on random graph and random hypergraph structures.
One part is about the proof of the existence and the determination of the location of the condensation phase transition. This transition will be investigated for large values of k in the problem of k-colouring random graphs and in the problem of 2-colouring random k-uniform hypergraphs, where in the latter case we investigate a more general model with finite inverse temperature.
The other part deals with establishing the limiting distribution of the number of solutions in these structures in density regimes below the condensation threshold.
Random constraint satisfaction problems have been on the agenda of various sciences such as discrete mathematics, computer science, statistical physics and a whole series of additional areas of application since the 1990s at least. The objective is to find a state of a system, for instance an assignment of a set of variables, satisfying a collection of constraints. Understanding the computational hardness as well as the underlying random discrete structures of these problems analytically, and developing efficient algorithms that find optimal solutions, has triggered a huge amount of work on random constraint satisfaction problems up to this day. In this context, this thesis presents three results for two random constraint satisfaction problems. ...
Recent decades have brought an enormous growth of knowledge and understanding of the molecular processes of life. This growth was made possible by the development of diverse methods with which, for example, the concentration of individual substances can be measured specifically, or even all metabolites present in a biological system can be recorded. The large-scale application of these methods led to the accumulation of many different kinds of -omics data, such as metabolome, proteome or transcriptome data sets. Systems biology draws on such data to build mathematical models of biological systems, and thereby makes it possible to study biological systems outside the laboratory as well.
For larger biological systems, however, not all information on substance concentrations or reaction rates is usually available for quantitative modelling, i.e. the description of rates of change of continuous variables. In such cases, methods of qualitative modelling are used instead. One of these methods is the Petri net (PN), developed in the 1960s by Carl Adam Petri to describe concurrent processes in technical settings. Since the early 1990s, PN have also been applied in systems biology, for example to model metabolic systems or signal transduction pathways. One advantage of this method is that a model can start out as a qualitative description of the system and be extended with quantitative descriptions over time.
Many applications for modelling and analysing PN already exist. However, since the PN concept was not originally developed for systems biology and is mostly used in technical domains, hardly any applications existed that were developed for use in systems biology, and the analysis methods developed specifically for systems biology cannot be carried out with these applications. The motivation for the first part of this thesis was therefore to create an application intended specifically for PN modelling and analysis in systems biology, i.e. one whose analysis methods and terminology are oriented towards the needs of systems biology. In addition, the application should support the user visually in interpreting the results of the analysis methods by placing them directly in the visual context of the PN; since for more complex PN the number of results produced by the analysis methods grows drastically, such support becomes necessary. Out of this motivation the application MonaLisa emerged, whose implementation and functions are described in the first part of this thesis. Besides the classical analysis methods for PN, such as transition and place invariants, with which basic functional modules within a PN can be found, further analysis methods, mostly developed in systems biology, were implemented. These include, for example, minimal cut sets, maximal common transition sets and knock-out analyses. MonaLisa also allows the simulation of the dynamic behaviour of the modelled biological system; for this purpose, both deterministic and stochastic methods are available, for example Gillespie's algorithm for the simulation of chemical systems. A visual representation of the results is provided for all analysis methods.
In the case of invariants, for example, their elements are coloured in the visualization of the PN. The results of the simulations or of the topological analysis can be evaluated through various plots. To provide an interface to other applications, MonaLisa supports several common file formats of systems biology, e.g. SBML and KGML.
The second part of the thesis deals with the topological analysis of a data set of 2641 whole-genome models from the path2models database. These models were generated automatically from the knowledge contained in the KEGG and MetaCyc databases. Analysing the topological properties of a graph makes it possible to draw basic conclusions about the global properties of the modelled system and its formation process; such an analysis is therefore often the first step towards understanding a complex biological system. To analyse the node degrees of all reactions and metabolites of these models, they were first transformed into PN. The topological properties of metabolic systems are already well described in the literature, although the investigations are usually based on a network of the metabolites or of the reactions. Using PN makes it possible to investigate the topological properties of metabolites and reactions in one common network. The motivation behind these investigations was to check whether the properties already described also hold for a PN representation, and which new properties can be found. The node degree and the clustering coefficient of the models were investigated. It is shown that a few metabolites with very high node degree are responsible for a whole range of effects: for example, the distributions of node degree and clustering coefficient with respect to metabolites are scale-free, and these metabolites are responsible for interconnecting the neighbourhoods of reactions. It is further shown that the size of a model influences its topological properties: the interconnectedness of a metabolite's neighbourhood increases with the number of metabolites present in a biological system, and the same holds for the average node degree of the metabolites.
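Gillespie's algorithm, mentioned above as one of MonaLisa's stochastic simulation methods, can be sketched as follows. The reaction encoding (rate constant plus dictionaries of consumed and produced species counts) and all names are illustrative assumptions, not MonaLisa's API.

```python
import random

def gillespie(state, reactions, t_end, rng=random):
    """Stochastic simulation: reactions are (rate_constant, consumed, produced)."""
    t = 0.0
    state = dict(state)
    while t < t_end:
        # propensity of each reaction under mass-action kinetics
        props = []
        for rate, consumed, _ in reactions:
            a = rate
            for sp, n in consumed.items():
                for i in range(n):
                    a *= max(state[sp] - i, 0)
            props.append(a)
        total = sum(props)
        if total == 0:
            break                            # no reaction can fire any more
        t += rng.expovariate(total)          # exponential waiting time
        if t >= t_end:
            break
        r = rng.uniform(0, total)            # choose a reaction proportionally
        for (rate, consumed, produced), a in zip(reactions, props):
            if r < a:
                for sp, n in consumed.items():
                    state[sp] -= n
                for sp, n in produced.items():
                    state[sp] = state.get(sp, 0) + n
                break
            r -= a
    return state
```

For a Petri net, the transitions play the role of the reactions and the place markings the role of the species counts.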
Random ordinary differential equations (RODEs) are ordinary differential equations (ODEs) which have a stochastic process in their vector field functions. RODEs have been used in a wide range of applications such as biology, medicine, population dynamics and engineering, and play an important role in the theory of random dynamical systems; however, they have long been overshadowed by stochastic differential equations.
Typically, the driving stochastic process has at most Hölder continuous sample paths, and the resulting vector field is thus at most Hölder continuous in time, no matter how smooth the vector field function is in its other variables. The sample paths of the solution are then continuously differentiable, but their derivatives are at most Hölder continuous in time. Consequently, although the classical numerical schemes for ODEs can be applied pathwise to RODEs, they do not achieve their traditional orders of convergence.
Recently, Grüne and Kloeden derived the explicit averaged Euler scheme by taking the average of the noise within the vector field. In addition, new forms of higher order Taylor-like schemes for RODEs have been derived systematically by Jentzen and Kloeden.
However, it remains important to build higher order, computationally less expensive and numerically stable schemes, and this is the motivation of this thesis. The schemes of Grüne and Kloeden and of Jentzen and Kloeden are very general, so we focus on RODEs with special structure, namely RODEs with Itô noise and RODEs with affine structure, and investigate numerical schemes which exploit these structures.
The developed numerical schemes are applied to several mathematical models in biology and medicine. To assess their performance, trajectories of solutions are illustrated, and the errors versus step sizes as well as the computational costs are compared among the newly developed schemes and schemes from the literature.
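The averaged Euler idea of Grüne and Kloeden, taking the average of the noise within the vector field over each step, can be sketched as follows. Brownian motion as the driving noise, the number of subsamples and all names are illustrative assumptions.

```python
import math
import random

def averaged_euler(f, x0, t_end, h, substeps=10, rng=random):
    """Integrate dx/dt = f(x, w(t)) pathwise, averaging the noise over each step."""
    x, w, t = x0, 0.0, 0.0
    while t < t_end - 1e-12:
        dt = h / substeps
        w_sum = 0.0
        for _ in range(substeps):            # sample the Brownian path on a finer grid
            w += rng.gauss(0.0, math.sqrt(dt))
            w_sum += w
        w_avg = w_sum / substeps             # averaged noise over the step
        x += h * f(x, w_avg)                 # one averaged Euler step
        t += h
    return x
```

With a noise-independent right-hand side the scheme reduces to the classical explicit Euler method, which gives a quick sanity check.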
The behaviour of electronic circuits is influenced by ageing effects. Modelling circuit behaviour is a standard approach in the design of faster, smaller, more reliable and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model based purely on the behavioural specification of a system. For a given specification, simulation can reveal whether a system fails to comply with the specification, and thus provide a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify its impact on specification violations, failures and robustness. We study ageing effects in the context of analogue circuits. Here, models must account for infinitely many circuit states. Ageing effects have a cause and an impact, both of which require models, and on both ends the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are invalid in many cases, because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account when modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms, as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible thanks to a simple structure.
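A toy instance of such a decay-process ageing model may help fix ideas: a damage state driven toward saturation by stress and counteracted by relaxation. The exact form, the rates and all names are illustrative assumptions, not the thesis's actual model class.

```python
def age_step(d, stress, dt, k_stress=1.0, k_relax=0.1):
    """Advance a damage state d in [0, 1] over time dt under a given stress level.

    Stress pushes d toward 1 (saturating decay process); relaxation pulls it back.
    """
    rate = k_stress * stress * (1 - d) - k_relax * d
    return min(1.0, max(0.0, d + rate * dt))
```

Under constant stress the state converges to the equilibrium k_stress·stress / (k_stress·stress + k_relax), and with stress switched off it relaxes back toward zero, reproducing the memory and relaxation behaviour named above.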
Modelling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and similarly ageing models predominantly focus on digital applications. In the digital domain, signal levels are either on or off, or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing-effect models for analogue components often require a more fine-grained characterization of the physical processes; here, signal levels can take arbitrary values to begin with. Such fine-grained, physically inspired ageing models do not scale to larger applications and are hard to simulate in reasonable time. To close the gap between physical processes and system-level ageing simulation, we propose a data-based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, a learning algorithm can find a complete model that is consistent with the data set. These ageing-effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel model for the BTI effect, an ageing effect that permits relaxation. Subsequently, a transistor-level ageing model for BTI that targets analogue circuits is proposed.
Similarly, we demonstrate how ageing data from analogue transistor-level circuit models can be lifted to purely behavioural block models. With this, we are the first to present a data-based hierarchical ageing modelling scheme. An ageing simulator for circuit or system-level models computes long-term transients, i.e. solutions of a differential equation. Long-term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long-term simulation becomes practical. We describe an adaptive two-time simulation algorithm that essentially skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through skipped frames. This involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long-term transients by exploiting the structure of the proposed ageing models. These models permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level. This level can be computed efficiently and also gives rise to a dynamic step-control mechanism. Ageing simulation has a wide range of applications. This thesis vastly improves the applicability of ageing simulation for analogue circuits in terms of modelling and efficiency. An ageing-effect model that is part of a circuit component model accounts for parametric drift that is directly related to the operation mode. For example, asymmetric load on a comparator or power stage may lead to offset drift, which is not an empirical effect. Monitor circuits can report such effects during operation, once they become significant. Simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can revert broken components to a functional state.
We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable. Efficient ageing simulation makes it possible to factor in various operation modes and helps refine the selection. Using long-term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties with respect to various use cases. This procedure enables the automatic selection of the most ageing-resilient implementation.
Algorithms for the Maximum Cardinality Matching Problem which greedily add edges to the solution enjoy great popularity. We systematically study strengths and limitations of such algorithms, in particular of those which consider node degree information to select the next edge. Concentrating on nodes of small degree is a promising approach: it was shown, experimentally and analytically, that very good approximate solutions are obtained for restricted classes of random graphs. Results achieved under these idealized conditions, however, remained unsupported by statements which depend on less optimistic assumptions.
The KarpSipser algorithm and 1-2-Greedy, which is a simplified variant of the well-known MinGreedy algorithm, proceed as follows. In each step, if a node of degree one (resp. at most two) exists, then an edge incident with a minimum degree node is picked, otherwise an arbitrary edge is added to the solution.
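A minimal sketch of the KarpSipser rule (break ties arbitrarily, prefer an edge at a degree-1 node, otherwise take any edge) might look as follows; the dictionary-based graph representation is an implementation choice, not taken from the thesis:

```python
def karp_sipser(adj):
    """Greedy matching with the KarpSipser rule: if a degree-1 node
    exists, match its unique edge; otherwise pick an arbitrary edge.
    adj: dict mapping node -> set of neighbours (works on a copy)."""
    adj = {u: set(vs) for u, vs in adj.items()}
    matching = []
    while any(adj.values()):
        # Prefer a node of degree 1, otherwise any node with an edge.
        deg1 = [u for u, vs in adj.items() if len(vs) == 1]
        u = deg1[0] if deg1 else next(u for u, vs in adj.items() if vs)
        v = next(iter(adj[u]))
        matching.append((u, v))
        for w in (u, v):                 # delete both matched endpoints
            for x in adj.pop(w):
                adj[x].discard(w)
    return matching

# On the path a-b-c-d the degree-1 rule yields a perfect matching.
m = karp_sipser({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}})
```

Note that a purely arbitrary greedy algorithm could match b-c first on this path and get stuck with a matching of size 1, which is exactly the 1/2-barrier discussed below.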
We analyze the approximation ratio of both algorithms on graphs of degree at most D. Families of graphs are known for which the expected approximation ratio converges to 1/2 as D grows to infinity, even if randomization against the worst case is used. If randomization is not allowed, then we show the following convergence to 1/2: the 1-2-Greedy algorithm achieves approximation ratio (D-1)/(2D-3); if the graph is bipartite, then the more restricted KarpSipser algorithm achieves the even stronger factor D/(2D-2). These guarantees set both algorithms apart from other famous matching heuristics such as Greedy or MRG: those algorithms depend on randomization to break the 1/2-barrier even for paths with D=2. Moreover, for any D our guarantees are strictly larger than the best known bounds on the expected performance of the randomized variants of Greedy and MRG.
To investigate whether KarpSipser or 1-2-Greedy can be refined to achieve better performance, or be simplified without loss of approximation quality, we systematically study entire classes of deterministic greedy-like algorithms for matching. To this end we employ the adaptive priority algorithm framework by Borodin, Nielsen, and Rackoff: in each round, an adaptive priority algorithm requests one or more edges by formulating their properties---e.g. "is incident with a node of minimum degree"---and adds the received edges to the solution. No constraints on time and space usage are imposed; hence an adaptive priority algorithm is restricted only by its nature of picking edges in a greedy-like fashion. If an adaptive priority algorithm requests edges by processing degree information, then we show that it does not surpass the performance of KarpSipser: our D/(2D-2)-guarantee for bipartite graphs is tight, and KarpSipser is optimal among all such "degree-sensitive" algorithms even though it uses degree information merely to detect degree-1 nodes. Moreover, we show that if the degrees of both nodes of an edge may be processed, as e.g. the Double-MinGreedy algorithm does, then the performance of KarpSipser can only be increased marginally, if at all. Of special interest is the capability of requesting edges not only by specifying the degree of a node but additionally its set of neighbors. This enables an adaptive priority algorithm to "traverse" the input graph. We show that on general degree-bounded graphs no such algorithm can beat factor (D-1)/(2D-3). Hence our bound for 1-2-Greedy is tight, and this algorithm performs optimally even though it ignores neighbor information. Furthermore, we show that an adaptive priority algorithm deteriorates to approximation ratio exactly 1/2 if it does not request small-degree nodes. This tremendous decline of approximation quality happens for graphs on which 1-2-Greedy and KarpSipser perform optimally, namely paths with D=2.
Consequently, requesting small degree nodes is vital to beat factor 1/2.
Summarizing, our results show that 1-2-Greedy and KarpSipser stand out from known and hypothetical algorithms as an intriguing combination of both approximation quality and conceptual simplicity.
Only an institution that is capable of change can endure – and this surely applies to educational institutions in particular. Change, however, can come in many guises. Some changes happen unexpectedly and may cause problems for that reason; others build up so slowly that their effects can seem almost surprising. The introduction of the practical semester in the first, university-based phase of teacher training in Hesse, unexpected in its speed, is such a problematic change for the Hessische Schülerakademie (Oberstufe), because it no longer provides for the academy's established integration into the school-practice components of the student mentors' degree programmes – a circumstance that has kept the academy's management and board of trustees, as well as our cooperation partners at the university and in the Ministry of Education, intensively occupied for more than two years now.
To crack the neural code and read out the information neural spikes convey, it is essential to understand how the information is coded and how much of it is available for decoding. To this end, it is indispensable to derive from first principles a minimal set of spike features containing the complete information content of a neuron. Here we present such a complete set of coding features. We show that temporal pairwise spike correlations fully determine the information conveyed by a single spiking neuron with finite temporal memory and stationary spike statistics. We reveal that interspike interval temporal correlations, which are often neglected, can significantly change the total information. Our findings provide a conceptual link between numerous disparate observations and recommend shifting the focus of future studies from addressing firing rates to addressing pairwise spike correlation functions as the primary determinants of neural information.
Given an Abelian semi-group (A, +), an A-valued curvature measure is a valuation with values in A-valued measures. If A = R, complete classifications of Hausdorff-continuous translation-invariant SO(n)-invariant valuations and curvature measures were obtained by Hadwiger and Schneider, respectively. More recently, characterisation results have been achieved for curvature measures with values in A = Sym^p R^n and A = Sym^2 Λ^q R^n for p, q ≥ 1, under varying assumptions on their invariance properties.
In the present work, we classify all smooth translation-invariant SO(n)-covariant curvature measures with values in any SO(n)-representation in terms of certain differential forms on the sphere bundle S R^n and describe their behaviour under the globalisation map. The latter result also yields a similar classification of all continuous SO(n)-module-valued SO(n)-covariant valuations. Furthermore, a decomposition of the space of smooth translation-invariant scalar-valued curvature measures as an SO(n)-module is obtained. As a corollary, we construct explicit bases of continuous translation-invariant scalar-valued valuations and smooth translation-invariant scalar-valued curvature measures.
Interactional niche in the development of geometrical and spatial thinking in the familial context
(2016)
In the analysis of mathematics education in early childhood it is necessary to consider the familial context, which has a significant influence on development in early childhood. Many reputable international research studies emphasize that the more children experience mathematical situations in their families, the more diverse the emerging forms of participation that enable them to learn mathematics in the early years. In this sense mathematical activities in the familial context are cornerstones of children's mathematical development, which is also affected by the ethnic, cultural, educational and linguistic features of their families. Germany has a population of approximately 82 million, about 7.2 million of whom are immigrants (Statistisches Bundesamt 2009, pp. 28-32). Children in immigrant families grow up with multiculturalism and multilingualism; therefore these children are categorized as a risk group in Germany. "Early Steps in Mathematics Learning – Family Study" (erStMaL-FaSt) is one of the first familial studies in Germany to deal with the impact of familial socialization on mathematics learning. The study enables us to observe children from different ethnic groups with their family members in different mathematical play situations. The family study (erStMaL-FaSt) is empirically performed within the framework of the erStMaL (Early Steps in Mathematics Learning) project, which investigates longitudinal mathematical cognitive development at preschool and early primary-school ages from a socio-constructivist perspective. This study uses two selected mathematical domains, Geometry and Measurement, and four play situations within these two mathematical domains.
My PhD study is situated in erStMaL-FaSt. Therefore, at the beginning of this first chapter, I briefly touch upon the IDeA Centre and the erStMaL project and then elaborate on erStMaL-FaSt. As part of my research concept, I specify two themes of erStMaL-FaSt: family and play. Thereafter I elaborate upon my research interest. The aim of my study is the research and development of theoretical insights into the functioning of familial interactions in the formation of geometrical (spatial) thinking and learning of children of Turkish ethnic background. Therefore, still in Chapter 1, I present some background on the Turkish people who live in Germany and on the spatial development of the children.
This study is designed as a longitudinal study and is constructed from interactionist and socio-constructivist perspectives. From a socio-constructivist perspective, the cognitive development of an individual is constitutively bound to the participation of this individual in a variety of social interactions. In this regard the presence of each family member provides the child with "learning opportunities" that are embedded in the interactive process of negotiating the meaning of mathematical play. In the interactions of such varied mathematical learning situations, different emerging forms of participation and support occur. For the purpose of analysing the spatial development of a child in interaction processes in play situations with family members, various statuses of participation are constructed and theoretically described in terms of the concept of the "interactional niche in the development of mathematical thinking in the familial context" (NMT-Family) (Acar & Krummheuer, 2011), which is adapted to the special needs of familial interaction processes. The concept of the "interactional niche in the development of mathematical thinking" (NMT) comprises three components: the "learning offerings" provided by a group or society, which are specific to its culture and are categorized under the aspect of "allocation"; the situationally emerging performance occurring in the process of meaning negotiation, subsumed under the aspect of the "situation"; and the individual contribution of the particular child, which constitutes the aspect of the "child's contribution" (Krummheuer 2011a, 2011b, 2012, 2014; Krummheuer & Schütte 2014). Thereby NMT-Family is constructed as a subconcept of NMT, which offers the advantage of closer analyses and comparisons between familial mathematical learning occasions in early childhood and at primary-school age.
Within the scope of NMT-Family, a “mathematics learning support system” (MLSS) is an interactional system which may emerge between the child and the family members in the course of the interaction process of concrete situations in play (Krummheuer & Acar Bayraktar, 2011). All these topics are addressed in Chapter 2 as theoretical approaches and in Chapter 3 as the research method of this study. In Chapter 4 the data collection and analysis is clarified in respect of these approaches...
Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. 
At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to potential underlying mechanisms.
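The core idea of the HBM, percept switching driven by a drifting activity difference, can be caricatured in a few lines. The following toy simulation uses invented parameters and an adaptation-style drift away from the current percept (a simplification, not the paper's exact hierarchical mechanism); it produces roughly regular percept changes:

```python
import math
import random

def percept_changes(adapt, sigma, threshold, dt, t_max, seed=1):
    """Toy sketch: the activity difference x between two competing
    populations follows a Brownian motion whose drift (adaptation)
    pushes away from the currently dominant percept; a percept change
    is recorded when x crosses the opposite threshold."""
    rng = random.Random(seed)
    x, t, percept = 0.0, 0.0, 1
    times = []                       # times of percept changes
    while t < t_max:
        x += -percept * adapt * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
        if percept * x < -threshold:  # difference favours the other percept
            percept = -percept
            times.append(t)
            x = 0.0
    return times

times = percept_changes(adapt=1.0, sigma=0.1, threshold=0.5, dt=0.001, t_max=20.0)
```

With small noise the inter-change times cluster around threshold/adapt; increasing sigma makes the response pattern more variable, which is the kind of variability such models are fitted to.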
Solving equations with several unknowns is something pupils already practise in middle school. For some it is an exciting mathematical puzzle, for others more of a torment. Yet very few are aware of how many lives this saves every day. Modern medical imaging rests on solving very many equations for very many unknowns.
Frühe mathematische Bildung – Ziele und Gelingensbedingungen für den Elementar- und Primarbereich
(2017)
As part of the series "Wissenschaftliche Untersuchungen zur Arbeit der Stiftung 'Haus der kleinen Forscher'", scholarly contributions by renowned experts in early education are published at regular intervals. The series serves a professional dialogue between the foundation, academia and practice, with the aim of providing all day-care centres, after-school centres and primary schools in Germany with well-founded support for their early-childhood educational mission.
This eighth volume of the series, with a foreword by Kristina Reiss, focuses on the goals of mathematical education in early-childhood and primary education and on the conditions for achieving them.
In their expert report, Christiane Benz, Meike Grüßing, Jens Holger Lorenz, Christoph Selter and Bernd Wollring specify pedagogical content dimensions of mathematical education goals for children of day-care and primary-school age. In addition to a theoretical foundation for the various goal areas, instruments for measuring them are presented. The authors also discuss the conditions under which effective early mathematical education succeeds in practice, and they give recommendations for the further development of the foundation's offerings and for the scientific monitoring of the foundation's work in mathematics.
The final chapter of the volume describes how these expert recommendations are implemented in the educational offerings of the "Haus der kleinen Forscher" foundation.
Those who like to keep count may have noticed that the twentieth Hessische Schülerakademie took place in the summer of 2017 – thirteen academies for the upper school since 2004, joined by seven for the middle school since 2011. Twenty successful academies are not only a cause for celebration; they also provide a solid foundation for a confident look ahead. Next spring, the Akademie Burg Fürsteneck, together with the Hessian Ministry of Education, will therefore host an interdisciplinary symposium centred on the Hessische Schülerakademie and the KulturSchule programme: under the title "Kulturelle Bildung auf dem Weg", experts from academia and practice will meet at Burg Fürsteneck from 2 to 4 March 2018 to discuss the conditions for quality in cultural education, taking the student academies and the culture schools in Hesse as examples.
The gathering, at the start of the summer holidays, of 60 inquisitive and experimentally minded pupils with an equally inquisitive team of university teachers and artists promised, as always, to be an intense and exciting time. This expectation was fully met, and the academy culminated, on the guests' afternoon with parents, relatives, friends and interested visitors, in a festive and cheerful finale with fascinating and sometimes surprising showcases of the courses' work. A particular highlight was the large-scale artistic rendering of a model of BURG FÜRSTENECK, an interdisciplinary result of the mathematics main course and the model-building elective.
We study exchangeable coalescent trees and the evolving genealogical trees in models for neutral haploid populations.
We show that every exchangeable infinite coalescent tree can be obtained as the genealogical tree of iid samples from a random marked metric measure space when the marks are added to the metric distances. We apply this representation to generalize the tree-valued Fleming-Viot process to include the case with dust in which the genealogical trees have isolated leaves.
Using the Donnelly-Kurtz lookdown approach, we describe all individuals ever alive in the population model by a random complete and separable metric space, the lookdown space, which we endow with a family of sampling measures. This yields a pathwise construction of tree-valued Fleming-Viot processes. In the case of coming down from infinity, we also read off a process whose state space is endowed with the Gromov-Hausdorff-Prohorov topology. This process has additional jumps at the extinction times of parts of the population.
In the case with only binary reproduction events, we construct the lookdown space also from the Aldous continuum random tree by removing the root and the highest leaf, and by deforming the metric in a way that corresponds to the time change that relates the Fleming-Viot process with a Dawson-Watanabe process. The sampling measures on the lookdown space are then image measures of the normalized local time measures.
We also show invariance principles for Markov chains that describe the evolving genealogy in Cannings models. For such Markov chains with values in the space of distance matrix distributions, we show convergence to tree-valued Fleming-Viot processes under the conditions of Möhle and Sagitov for the convergence of the genealogy at a fixed time to a coalescent with simultaneous multiple mergers. For the convergence of Markov chains with values in the space of marked metric measure spaces, an additional assumption is needed in the case with dust.
Can variances of latent variables be scaled in such a way that they correspond to eigenvalues?
(2017)
The paper reports an investigation of whether sums of squared factor loadings obtained in confirmatory factor analysis correspond to eigenvalues of exploratory factor analysis. The sum of squared factor loadings reflects the variance of the corresponding latent variable if the variance parameter of the confirmatory factor model is set equal to one. Hence, the computation of the sum implies a specific type of scaling of the variance. While the investigation of the theoretical foundations suggested the expected correspondence between sums of squared factor loadings and eigenvalues, the necessity of procedural specifications in the application, as for example the estimation method, revealed external influences on the outcome. A simulation study was conducted that demonstrated the possibility of exact correspondence if the same estimation method was applied. However, in the majority of realized specifications the estimates showed similar sizes but no correspondence.
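The mismatch can be seen already in a tiny one-factor example: for standardized variables, the largest eigenvalue of the correlation matrix exceeds the sum of squared loadings by the uniqueness, so the two quantities are similar in size but not equal. A minimal illustration with hypothetical loadings, using the closed-form eigenvalues of a symmetric 2x2 matrix:

```python
import math

def eig2(m):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    (a, b), (_, d) = m
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 + disc, tr / 2 - disc

loading = 0.8                      # hypothetical standardized loading
psi = 1 - loading ** 2             # uniqueness of each variable
R = [[loading ** 2 + psi, loading ** 2],
     [loading ** 2, loading ** 2 + psi]]   # implied correlation matrix
ssl = 2 * loading ** 2             # sum of squared factor loadings
lam1, _ = eig2(R)                  # largest eigenvalue: ssl + psi
```

Here ssl = 1.28 while lam1 = 1.64; the gap is exactly psi, illustrating why estimates of "similar size but no correspondence" are to be expected without matched scaling and estimation choices.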
Strong convergence rates for numerical approximations of stochastic partial differential equations
(2018)
In this thesis and in the research articles which this thesis consists of, respectively, we focus on strong convergence rates for numerical approximations of stochastic partial differential equations (SPDEs). In Part I of this thesis, i.e., Chapter 2 and Chapter 3, we study higher order numerical schemes for SPDEs with multiplicative trace class noise based on suitable Taylor expansions of the Lipschitz continuous coefficients of the SPDEs under consideration. More precisely, Chapter 2 proves strong convergence rates for a linear implicit Euler-Milstein scheme for SPDEs and is based on an unpublished manuscript written by the author of this thesis. This chapter extends an earlier result by slightly lowering the assumptions posed on the diffusion coefficient and by using a different approximation of the semigroup. In Chapter 3 we introduce an exponential Wagner-Platen type numerical scheme for SPDEs and prove that this numerical approximation method converges in the strong sense with order up to 3/2−. Moreover, we illustrate how the (mixed) iterated stochastic-deterministic integrals, which are part of our numerical scheme, can be simulated exactly under suitable assumptions.
The second part of this thesis, i.e. Chapter 4 and Chapter 5, is devoted to strong convergence rates for numerical approximations of SPDEs with superlinearly growing nonlinearities driven by additive space-time white noise. More specifically, in Chapter 4, we prove strong convergence with rate in the time variable for a class of nonlinearity-truncated numerical approximation schemes for SPDEs and provide examples that fit into our abstract setting, such as stochastic Allen-Cahn equations. Finally, in Chapter 5, we extend this result to include spatial approximations and establish strong convergence rates for a class of fully discrete nonlinearity-truncated numerical approximation schemes for SPDEs. Moreover, we apply our strong convergence result to stochastic Allen-Cahn equations and provide lower and upper bounds which show that our strong convergence result can, in general, not essentially be improved.
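For the linear part of such equations, the linear implicit Euler step has a particularly simple form in the spectral (Fourier-Galerkin) picture. The following toy sketch for the stochastic heat equation on (0,1) with additive space-time white noise is only meant to illustrate that step, not the nonlinearity-truncated schemes analysed in the thesis; the truncation level and step sizes are invented:

```python
import math
import random

def implicit_euler_heat_spde(K, dt, steps, seed=0):
    """Linear implicit Euler for the stochastic heat equation on (0,1)
    with additive space-time white noise, truncated to K Fourier modes:
        a_k <- (a_k + sqrt(dt) * xi_k) / (1 + dt * lam_k),
    where lam_k = (k * pi)**2 and xi_k are iid standard normals."""
    rng = random.Random(seed)
    a = [0.0] * K
    for _ in range(steps):
        for k in range(K):
            lam = ((k + 1) * math.pi) ** 2
            a[k] = (a[k] + math.sqrt(dt) * rng.gauss(0, 1)) / (1 + dt * lam)
    return a

modes = implicit_euler_heat_spde(K=16, dt=1e-3, steps=200)
```

Because each mode is damped by 1/(1 + dt*lam_k), the scheme is unconditionally stable in the linear case; the analytical difficulty addressed in the thesis comes from superlinear nonlinearities, which this sketch omits.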
In 1957, Craig Mooney published a set of human face stimuli to study perceptual closure: the formation of a coherent percept on the basis of minimal visual information. Images of this type, now known as “Mooney faces”, are widely used in cognitive psychology and neuroscience because they offer a means of inducing variable perception with constant visuo-spatial characteristics (they are often not perceived as faces if viewed upside down). Mooney’s original set of 40 stimuli has been employed in several studies. However, it is often necessary to use a much larger stimulus set. We created a new set of over 500 Mooney faces and tested them on a cohort of human observers. We present the results of our tests here, and make the stimuli freely available via the internet. Our test results can be used to select subsets of the stimuli that are most suited for a given experimental purpose.
Besides the two student academies for the middle and upper school, the academy year 2018 had a further highlight: the symposium "Kulturelle Bildung auf dem Weg" (2 to 4 March 2018, hosted by Burg Fürsteneck together with the KulturSchule school-development programme of the Hessian Ministry of Education and the continuing-education master's programme Kulturelle Bildung an Schulen at the University of Marburg). It was opened by our patron, Minister of Education Prof. Dr. R. Alexander Lorz, and aimed, among other things, to initiate an expert debate between education researchers and practitioners on the conditions for quality in cultural education, taking the student academies and the culture schools in Hesse as examples.
The experience "…that everything could also be entirely different" is arguably the most important experience in educational processes. The discovery of possibilities, of changes of perspective and of transformative processes of self-formation is central to a successful setting of cultural education. (Birgit Mandel, 2005)
The Hessian student academies for the support of especially committed and gifted young people were deliberately founded as an undertaking of research-based learning, and they remain committed to this guiding idea in the context of cultural education. At first hearing, this sentence sounds good and up to date. But what exactly lies behind it?
In this paper we deal with an implementation as well as numerical experiments for the coupling of interior and exterior problems of the elastodynamic wave equation with transparent boundary conditions in 3D, as described in a previous paper by this author. In more detail, the FEM-BEM coupling as well as the time discretization using the leapfrog method and convolution quadrature are considered. Our aim is to provide an insight into the necessary steps of the implementation. Based on this, we present numerical experiments for a non-convex domain and analyze the errors.
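For readers unfamiliar with the leapfrog method, its structure is easy to see on the scalar 1D wave equation; the following sketch (with invented grid parameters, unrelated to the paper's 3D elastodynamic setting) uses the standard central-difference update in time and space:

```python
import math

def leapfrog_wave(n, steps, c=1.0):
    """Leapfrog scheme for u_tt = c^2 u_xx on (0,1) with fixed ends,
    run at the CFL-stable step dt = dx / c (CFL number 1)."""
    dx = 1.0 / n
    dt = dx / c
    u_prev = [math.sin(math.pi * i * dx) for i in range(n + 1)]
    u = u_prev[:]   # crude start: zero initial velocity
    for _ in range(steps):
        u_next = [0.0] * (n + 1)    # boundary values stay 0
        for i in range(1, n):
            u_next[i] = (2 * u[i] - u_prev[i]
                         + (c * dt / dx) ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
        u_prev, u = u, u_next
    return u

u = leapfrog_wave(50, 100)
```

The scheme is explicit and second-order in time, which is why it pairs naturally with a convolution-quadrature treatment of the transparent boundary: the interior update needs only the two previous time levels.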
In vivo functional diversity of midbrain dopamine neurons within identified axonal projections
(2019)
Functional diversity of midbrain dopamine (DA) neurons ranges across multiple scales, from differences in intrinsic properties and connectivity to selective task engagement in behaving animals. Distinct in vitro biophysical features of DA neurons have been associated with different axonal projection targets. However, it is unknown how this translates to different firing patterns of projection-defined DA subpopulations in the intact brain. We combined retrograde tracing with single-unit recording and labelling in mouse brain to create an in vivo functional topography of the midbrain DA system. We identified differences in burst firing among DA neurons projecting to dorsolateral striatum. Bursting also differentiated DA neurons in the medial substantia nigra (SN) projecting either to dorsal or ventral striatum. We found differences in mean firing rates and pause durations among ventral tegmental area (VTA) DA neurons projecting to lateral or medial shell of nucleus accumbens. Our data establishes a high-resolution functional in vivo landscape of midbrain DA neurons.
Kaum ein Name ist so eng mit dem "Projekt HSAKA" verbunden wie der von Wolf Aßmus: Seit der ersten Hessischen Schülerakademie für die Oberstufe im Jahre 2004 ist er als Leiter des Physik-Kurses dabei; die Gründung der Mittelstufenakademie 2011 wurde von ihm tatkräftig unterstützt und gefördert; einen Sitz im Kuratorium hat er ebenso übernommen wie das Amt des Ersten Vorsitzenden des Trägervereins von Burg Fürsteneck – der inzwischen pensionierte Professor für Festkörperphysik verkörpert geradezu die Idee vom "Un-Ruhestand". Wer mag es ihm da verübeln, wenn Wolf beschließt, im nächsten Sommer mal mehr Zeit mit seinen Enkeln zu verbringen, statt auf die Burg zu fahren? Weil es daher 2020 zum ersten Mal eine Oberstufenakademie ohne Wolf und ohne Physik-Kurs geben wird (stattdessen Philosophie und Informatik), haben wir auf der vergangenen Akademie die Gelegenheit genutzt, Wolf für 15 Jahre Schülerakademie zu danken. Genauer gesagt: für 15 Jahre, 16 Fachkurse in Physik (15 auf der Oberstufenakademie und einer bei der Mittelstufe), 15 kursübergreifende Naturkunde-Angebote, für die Betreuung Dutzender Studierender und weit über 200 Schüler*innen, für unzählige gemeinsame Aha-Erlebnisse und humorvolle Geschichten, für unermüdliches Engagement und geduldigen Beistand – und nicht zuletzt für viele, viele Liter Speiseeis. Unsere Dankbarkeit wollen wir hier mit allen Leser*innen dieser Dokumentation teilen.
We were able to go our own way; each of us ended up with a different result, and none of them was wrong. For me, that is what quality in learning means: that I am given enough room for my own thoughts and that I am taken seriously. [...] This feeling has not faded to this day, and the thought of how things could be helps me to come out of my shell and to motivate others to do the same, so that the conversations around me, too, approach the stimulating ones we had during the academy. (Feedback from a participant of HSAKA-M 2018)
Education through scholarship in the sense of research-based learning is a central theme of school education. It finds a didactic implementation in schools, for example, in the Kultur.Forscher! concept, and the Wissenschaftsrat likewise recommends it as a guiding principle for universities, with the aim of aligning study and teaching more clearly with research.
Uniqueness and Lipschitz stability in electrical impedance tomography with finitely many electrodes
(2019)
For the linearized reconstruction problem in electrical impedance tomography with the complete electrode model, Lechleiter and Rieder (2008 Inverse Problems 24 065009) have shown that a piecewise polynomial conductivity on a fixed partition is uniquely determined if enough electrodes are used. We extend their result to the full non-linear case and show that measurements on a sufficiently high number of electrodes uniquely determine a conductivity in any finite-dimensional subset of piecewise-analytic functions. We also prove Lipschitz stability, and derive analogous results for the continuum model, where finitely many measurements determine a finite-dimensional Galerkin projection of the Neumann-to-Dirichlet operator on a boundary part.
We contribute to the foundations of tropical geometry with a view toward formulating tropical moduli problems, and with the moduli space of curves as our main example. We propose a moduli functor for the moduli space of curves and show that it is representable by a geometric stack over the category of rational polyhedral cones. In this framework, the natural forgetful morphisms between moduli spaces of curves with marked points function as universal curves.
Our approach to tropical geometry permits tropical moduli problems—moduli of curves or otherwise—to be extended to logarithmic schemes. We use this to construct a smooth tropicalization morphism from the moduli space of algebraic curves to the moduli space of tropical curves, and we show that this morphism commutes with all of the tautological morphisms.
We derive a simple criterion that ensures uniqueness, Lipschitz stability and global convergence of Newton's method for the finite dimensional zero-finding problem of a continuously differentiable, pointwise convex and monotonic function. Our criterion merely requires evaluating the directional derivative of the forward function at finitely many evaluation points and in finitely many directions. We then demonstrate that this result can be used to prove uniqueness, stability and global convergence for an inverse coefficient problem with finitely many measurements. We consider the problem of determining an unknown inverse Robin transmission coefficient in an elliptic PDE. Using a relation to monotonicity and localized potentials techniques, we show that a piecewise-constant coefficient on an a-priori known partition with a-priori known bounds is uniquely determined by finitely many boundary measurements and that it can be uniquely and stably reconstructed by a globally convergent Newton iteration. We derive a constructive method to identify these boundary measurements, calculate the stability constant and give a numerical example.
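The one-dimensional shadow of this setting can be sketched in a few lines. The function below is a toy stand-in, not the Robin transmission forward operator from the abstract: for a convex, strictly increasing f the tangent line at any point lies below the graph, so the Newton iterate overshoots the root at most once and then converges monotonically from the right, from any starting point.

```python
import math

def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Plain Newton iteration for a scalar zero-finding problem.

    For a convex, strictly increasing f, the tangent at any x lies
    below the graph, so after one step the iterates stay to the
    right of the root and decrease monotonically towards it.
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy convex, monotone function (an illustrative stand-in only):
def f(x):
    return math.exp(x) + x - 2.0

def df(x):
    return math.exp(x) + 1.0

root = newton(f, df, x0=-5.0)   # converges even from a far-off start
```

The same global-convergence behaviour is what the criterion in the abstract guarantees for the finite-dimensional, pointwise convex and monotonic vector-valued case.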
We show that the metrisability of an oriented projective surface is equivalent to the existence of pseudo-holomorphic curves. A projective structure $p$ and a volume form $\sigma$ on an oriented surface $M$ equip the total space of a certain disk bundle $Z \to M$ with a pair $(J_p, J_{p,\sigma})$ of almost complex structures. A conformal structure on $M$ corresponds to a section of $Z \to M$, and $p$ is metrisable by the metric $g$ if and only if $[g]: M \to Z$ is a pseudo-holomorphic curve with respect to $J_p$ and $J_{p,dA_g}$.
We study continuous dually epi-translation invariant valuations on certain cones of convex functions containing the space of finite-valued convex functions. Using the homogeneous decomposition of this space, we associate a certain distribution to any homogeneous valuation similar to the Goodey-Weil embedding for translation invariant valuations on convex bodies. The support of these distributions induces a corresponding notion of support for the underlying valuations, which imposes certain restrictions on these functionals, and we study the relation between the support of a valuation and its domain. This gives a partial answer to the question of which dually epi-translation invariant valuations on finite-valued convex functions can be extended to larger cones of convex functions.
We also study topological properties of spaces of valuations with support contained in a fixed compact set. As an application of these results, we introduce the class of smooth valuations on convex functions and show that the subspace of smooth dually epi-translation invariant valuations is dense in the space of continuous dually epi-translation invariant valuations on finite-valued convex functions. These smooth valuations are given by integrating certain smooth differential forms over the graph of the differential of a convex function. We use this construction to give a characterization of a dense subspace of all continuous valuations on finite-valued convex functions that are rotation invariant as well as dually epi-translation invariant.
Using results from Alesker's theory of smooth valuations on convex bodies, we also show that any smooth valuation can be written as a convergent sum of mixed Hessian valuations. In particular, mixed Hessian valuations span a dense subspace, which is a version of McMullen’s conjecture for valuations on convex functions.
We use recent results by Bainbridge–Chen–Gendron–Grushevsky–Möller on compactifications of strata of abelian differentials to give a comprehensive solution to the realizability problem for effective tropical canonical divisors in equicharacteristic zero. Given a pair (Γ,D) consisting of a stable tropical curve Γ and a divisor D in the canonical linear system on Γ, we give a purely combinatorial condition to decide whether there is a smooth curve X over a non-Archimedean field whose stable reduction has Γ as its dual tropical curve together with an effective canonical divisor KX that specializes to D.
The specific temporal evolution of bacterial and phage population sizes, in particular bacterial depletion and the emergence of a resistant bacterial population, can be seen as a kinetic fingerprint that depends on the manifold interactions of the specific phage-host pair during the course of infection. We have elaborated such a kinetic fingerprint for a human urinary tract Klebsiella pneumoniae isolate and its phage vB_KpnP_Lessing by a modeling approach based on data from in vitro co-culture. We found a faster depletion of the initially sensitive bacterial population than expected from simple mass action kinetics. A possible explanation for the rapid decline of the bacterial population is a synergistic interaction of phages, which can be a favorable feature for phage therapies. In addition to this interaction characteristic, analysis of the kinetic fingerprint of this bacteria and phage combination revealed several relevant aspects of their population dynamics: a reduction of the bacterial concentration can be achieved only at high multiplicity of infection, whereas bacterial extinction is hardly accomplished. Furthermore, the binding affinity of the phage to bacteria is identified as one of the most crucial parameters for the reduction of the bacterial population size. Thus, kinetic fingerprinting can be used to infer phage-host interactions and to explore emergent dynamics, which facilitates a rational design of phage therapies.
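The simple mass-action baseline that the observed kinetics are compared against can be sketched as a minimal forward-Euler integration of a sensitive/infected/phage model. All parameter values below are invented for illustration; they are not fitted to the Klebsiella pneumoniae / vB_KpnP_Lessing data, and the model omits resistance and the synergistic effects discussed in the abstract.

```python
def simulate(b0, p0, r=0.5, a=1e-8, k=1.0, burst=100.0,
             dt=0.001, t_end=10.0):
    """Forward-Euler integration of a minimal mass-action
    phage-host model. B: sensitive bacteria, I: infected
    bacteria, P: free phage. Illustrative parameters only.
    """
    B, I, P = b0, 0.0, p0
    t = 0.0
    while t < t_end:
        infect = a * B * P          # adsorption (mass action)
        lysis = k * I               # infected cells burst
        B += dt * (r * B - infect)  # growth minus new infections
        I += dt * (infect - lysis)
        P += dt * (burst * lysis - infect)
        t += dt
    return B, I, P
```

For instance, starting at high multiplicity of infection, `simulate(1e6, 1e8)` drives the sensitive population down by orders of magnitude, mirroring the abstract's observation that a reduction of the bacterial concentration requires a high MOI.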
Foundations of geometry
(2020)
In the model of randomly perturbed graphs we consider the union of a deterministic graph G with minimum degree αn and the binomial random graph G(n, p). This model was introduced by Bohman, Frieze, and Martin, and for Hamilton cycles their result bridges the gap between Dirac's theorem and the results by Pósa and Korshunov on the threshold in G(n, p). In this note we extend this result in G ∪ G(n, p) to sparser graphs with α = o(1). More precisely, for any ε > 0 and α: ℕ → (0, 1) we show that a.a.s. G ∪ G(n, β/n) is Hamiltonian, where β = −(6 + ε) log(α). If α > 0 is a fixed constant this gives the aforementioned result by Bohman, Frieze, and Martin, and if α = O(1/n) the random part G(n, p) is sufficient for a Hamilton cycle. We also discuss embeddings of bounded degree trees and other spanning structures in this model, which lead to interesting questions on almost spanning embeddings into G(n, p).
We provide extensions of the dual variational method for the nonlinear Helmholtz equation from Evéquoz and Weth. In particular we prove the existence of dual ground state solutions in the Sobolev critical case, extend the dual method beyond the standard Stein–Tomas and Kenig–Ruiz–Sogge range, and generalize the method for sign-changing nonlinearities.
In this article we use techniques from tropical and logarithmic geometry to construct a non-Archimedean analogue of Teichmüller space $\overline{T}_g$ whose points are pairs consisting of a stable projective curve over a non-Archimedean field and a Teichmüller marking of the topological fundamental group of its Berkovich analytification. This construction is closely related to and inspired by the classical construction of a non-Archimedean Schottky space for Mumford curves by Gerritzen and Herrlich. We argue that the skeleton of non-Archimedean Teichmüller space is precisely the tropical Teichmüller space introduced by Chan–Melo–Viviani as a simplicial completion of Culler–Vogtmann Outer space. As a consequence, Outer space turns out to be a strong deformation retract of the locus of smooth Mumford curves in $\overline{T}_g$.
The problem of unconstrained or constrained optimization occurs in many branches of mathematics and various fields of application. It is, however, an NP-hard problem in general. In this thesis, we examine an approximation approach based on the class of SAGE exponentials, which are nonnegative exponential sums. We examine this SAGE-cone, its geometry, and generalizations. The thesis consists of three main parts:
1. In the first part, we focus purely on the cone of sums of globally nonnegative exponential sums with at most one negative term, the SAGE-cone. We examine the duality theory, extreme rays of the cone, and provide two efficient optimization approaches over the SAGE-cone and its dual.
2. In the second part, we introduce and study the so-called S-cone, which provides a uniform framework for SAGE exponentials and SONC polynomials. In particular, we focus on second-order representations of the S-cone and its dual using extremality results from the first part.
3. In the third and last part of this thesis, we turn towards examining the conditional SAGE-cone. We develop a notion of sublinear circuits leading to new duality results and a partial characterization of extremality. In the case of polyhedral constraint sets, this examination is simplified and allows us to classify sublinear circuits and extremality for some cases completely. For constraint sets with certain conditions such as sets with symmetries, conic, or polyhedral sets, various optimization and representation results from the unconstrained setting can be applied to the constrained case.
The aim of this bachelor thesis is to compare and empirically test the use of classification to improve the topic models Latent Dirichlet Allocation (LDA) and Author Topic Modeling (ATM) in the context of the social media platform Twitter. For this purpose, a corpus was classified with the Dewey Decimal Classification (DDC) and then used to train the topic models. A second dataset, the unclassified corpus, was used for comparison. The assumption that the use of classification could improve the topic models did not prove true for the LDA topic model; here, a sufficient improvement of the models could not be achieved. The ATM model, on the other hand, could be improved by using the classification. In general, the ATM model performed significantly better than the LDA model. In the context of the social media platform Twitter, it can thus be seen that the ATM model is superior to the LDA model and can additionally be improved by classifying the data.
Between his arrival in Frankfurt in 1922 and his proof of his famous finiteness theorem for integral points in 1929, Siegel had no publications. He did, however, write a letter to Mordell in 1926 in which he explained a proof of the finiteness of integral points on hyperelliptic curves. Recognizing the importance of this argument (and Siegel's views on publication), Mordell sent the relevant extract to be published under the pseudonym "X".
The purpose of this note is to explain how to optimize Siegel's 1926 technique to obtain the following bound. Let $K$ be a number field, $S$ a finite set of places of $K$, and $f \in \mathcal{O}_{K,S}[t]$ monic of degree $d \ge 5$ with discriminant $\Delta_f \in \mathcal{O}_{K,S}^{\times}$. Then:
$$\#|\{(x,y) : x, y \in \mathcal{O}_{K,S},\ y^2 = f(x)\}| \le 2^{\operatorname{rank}\operatorname{Jac}(C_f)(K)} \cdot O(1)^{d^3 \cdot ([K:\mathbb{Q}] + \#|S|)}.$$
This improves bounds of Evertse-Silverman and Bombieri-Gubler from 1986 and 2006, respectively.
The main point underlying our improvement is that, informally speaking, we insist on "executing the descents in the presence of only one root (and not three) until the last possible moment".
Several novel imaging and non-destructive testing technologies are based on reconstructing the spatially dependent coefficient in an elliptic partial differential equation from measurements of its solution(s). In practical applications, the unknown coefficient is often assumed to be piecewise constant on a given pixel partition (corresponding to the desired resolution), and only finitely many measurements can be made. This leads to the problem of inverting a finite-dimensional non-linear forward operator $F: D(F) \subseteq \mathbb{R}^n \to \mathbb{R}^m$, where evaluating $F$ requires one or several PDE solutions.
Numerical inversion methods require the implementation of this forward operator and its Jacobian. We show how to efficiently implement both using a standard FEM package and prove convergence of the FEM approximations against their true-solution counterparts. We present simple example codes for COMSOL with the MATLAB LiveLink package, and numerically demonstrate the challenges that arise from non-uniqueness, non-linearity and instability issues. We also discuss monotonicity and convexity properties of the forward operator that arise for symmetric measurement settings.
This text assumes the reader to have a basic knowledge of finite element methods, including the variational formulation of elliptic PDEs, the Lax–Milgram theorem, and Céa's lemma. Section 3 also assumes that the reader is familiar with the concept of Fréchet differentiability.
For a class of Cannings models we prove Haldane's formula, $\pi(s_N) \sim \frac{2 s_N}{\rho^2}$, for the fixation probability of a single beneficial mutant in the limit of large population size $N$ and in the regime of moderately strong selection, i.e. for $s_N \sim N^{-b}$ and $0 < b < 1/2$. Here, $s_N$ is the selective advantage of an individual carrying the beneficial type, and $\rho^2$ is the (asymptotic) offspring variance. Our assumptions on the reproduction mechanism allow for a coupling of the beneficial allele's frequency process with slightly supercritical Galton–Watson processes in the early phase of fixation.
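The Galton–Watson coupling mentioned at the end can be checked numerically: for Poisson(1+s) offspring (mean 1+s, offspring variance ρ² = 1+s), the survival probability of the branching process approximates the fixation probability and should be close to Haldane's value 2s/ρ². The Monte Carlo sketch below is illustrative only; the Poisson sampler and the survival cutoff are standard but arbitrary choices, not part of the paper's construction.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative Poisson sampler (stdlib-only)."""
    threshold = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def survives(s, rng, cap=300, max_gen=10000):
    """Does a Galton-Watson process with Poisson(1+s) offspring,
    started from one individual, survive? Reaching `cap`
    individuals is treated as certain survival."""
    n = 1
    for _ in range(max_gen):
        if n == 0:
            return False
        if n >= cap:
            return True
        n = sum(poisson(1.0 + s, rng) for _ in range(n))
    return n > 0

rng = random.Random(2024)
s = 0.1
trials = 2000
est = sum(survives(s, rng) for _ in range(trials)) / trials
haldane = 2.0 * s / (1.0 + s)   # 2s / rho^2 with rho^2 = 1 + s
```

With a few thousand trials the empirical survival frequency lands within a few percentage points of the Haldane approximation.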
For genus $g = 2i \ge 4$ and the length $g-1$ partition $\mu = (4, 2, \ldots, 2, -2, \ldots, -2)$ of $0$, we compute the first coefficients of the class of $\overline{D}(\mu)$ in $\operatorname{Pic}_{\mathbb{Q}}(\overline{\mathcal{R}}_g)$, where $D(\mu)$ is the divisor consisting of pairs $[C, \eta] \in \mathcal{R}_g$ with $\eta \cong \mathcal{O}_C(2x_1 + x_2 + \cdots + x_{i-1} - x_i - \cdots - x_{2i-1})$ for some points $x_1, \ldots, x_{2i-1}$ on $C$. We further provide several enumerative results that will be used for this computation.
In an earlier paper we proposed a recursive model for epidemics; in the present paper we generalize this model to include asymptomatic and unrecorded symptomatic people, which we call dark people (the dark sector). We call this the SEPARd model. A delay differential equation version of the model is added; it allows a better comparison with other models. We carry this out by a comparison with the classical SIR model and indicate why we believe that the SEPARd model may work better for Covid-19 than other approaches.
In the second part of the paper we explain how to deal with the data provided by the JHU; in particular, we explain how to derive central model parameters from the data. Other parameters, like the size of the dark sector, are less accessible and have to be estimated more roughly, at best from the results of representative serological studies, which are available only for a few countries. We start our country studies with Switzerland, where such data are available. Then we apply the model to a collection of other countries: three European ones (Germany, France, Sweden) and the three most stricken countries from three other continents (USA, Brazil, India). Finally we show that even the aggregated world data can be well represented by our approach.
At the end of the paper we discuss the use of the model. Perhaps the most striking application is that it allows a quantitative analysis of the influence of the time until people are sent to quarantine or hospital. This suggests that imposing measures to shorten this time is a powerful tool for flattening the curves.
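The classical SIR model that serves as the comparison baseline can be written down in a few lines. This is a hedged sketch: the forward-Euler scheme and the parameter values (β = 0.3, γ = 0.1, i.e. a basic reproduction number of 3) are illustrative textbook choices, not the SEPARd model or the paper's Covid-19 calibration.

```python
def sir(beta=0.3, gamma=0.1, i0=1e-4, dt=0.05, t_end=300.0):
    """Forward-Euler integration of the classical SIR model on a
    normalized population (S + I + R = 1). Returns the final
    state and the peak of the infectious fraction."""
    S, I, R = 1.0 - i0, i0, 0.0
    peak_I = I
    for _ in range(int(t_end / dt)):
        new_inf = beta * S * I   # mass-action incidence
        rec = gamma * I          # recoveries
        S -= dt * new_inf
        I += dt * (new_inf - rec)
        R += dt * rec
        peak_I = max(peak_I, I)
    return S, I, R, peak_I

S_end, I_end, R_end, peak_I = sir()
```

With these parameters roughly 30 percent of the population is infectious at the peak, which is the kind of curve that interventions shortening the time to quarantine aim to flatten.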
We study the asymptotics of Dirichlet eigenvalues and eigenfunctions of the fractional Laplacian $(-\Delta)^s$ in bounded open Lipschitz sets in the small order limit $s \to 0^+$. While it is easy to see that all eigenvalues converge to 1 as $s \to 0^+$, we show that the first order correction in these asymptotics is given by the eigenvalues of the logarithmic Laplacian operator, i.e., the singular integral operator with Fourier symbol $2\log|\xi|$. By this we generalize a result of Chen and the third author which was restricted to the principal eigenvalue. Moreover, we show that $L^2$-normalized Dirichlet eigenfunctions of $(-\Delta)^s$ corresponding to the $k$-th eigenvalue are uniformly bounded and converge to the set of $L^2$-normalized eigenfunctions of the logarithmic Laplacian. In order to derive these spectral asymptotics, we establish new uniform regularity and boundary decay estimates for Dirichlet eigenfunctions of the fractional Laplacian. As a byproduct, we also obtain corresponding regularity properties of eigenfunctions of the logarithmic Laplacian.
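The appearance of the logarithmic Laplacian can already be seen at the level of the Fourier symbol: |ξ|^(2s) = e^(2s log|ξ|) = 1 + 2s log|ξ| + O(s²), so (|ξ|^(2s) − 1)/s → 2 log|ξ| as s → 0⁺. A quick numerical check of this elementary expansion (of the symbol only, not of the spectral result itself):

```python
import math

def symbol_correction(xi, s):
    """(|xi|^(2s) - 1) / s, the rescaled deviation of the Fourier
    symbol of the fractional Laplacian from 1; tends to 2*log|xi|
    as s -> 0+."""
    return (abs(xi) ** (2.0 * s) - 1.0) / s

xi = 3.0
limit = 2.0 * math.log(xi)          # symbol of the log-Laplacian
errors = [abs(symbol_correction(xi, s) - limit)
          for s in (0.1, 0.01, 0.001)]
```

The error shrinks linearly in s, consistent with the O(s²) remainder of the expansion before rescaling.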
Although everyone is familiar with using algorithms on a daily basis, formulating, understanding and analysing them rigorously has been (and will remain) a challenging task for decades. One way of making steps towards their understanding is therefore to formulate models that portray reality but also remain easy to analyse. In this thesis we take a step in this direction by analysing one particular problem, the so-called group testing problem, which R. Dorfman introduced in 1943. We assume a large population within which there is an infected group of individuals. Instead of testing everybody individually, we can test groups (for instance by mixing blood samples). In this thesis we look for the minimum number of tests needed such that we can say something meaningful about the infection status. Furthermore, we consider various versions of this problem to analyse at what point, and why, the problem is hard, easy or impossible to solve.
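Dorfman's original two-stage scheme already shows why pooling saves tests: a group of size g costs one pooled test plus g individual retests whenever the pool is positive, so the expected number of tests per person is 1/g + 1 − (1−p)^g at prevalence p. A short sketch (the 1% prevalence is just an example value; the thesis studies far more general and information-theoretically optimal regimes):

```python
def expected_tests_per_person(p, g):
    """Expected tests per individual in Dorfman's two-stage
    scheme: 1/g for the pooled test, plus a full retest of the
    group whenever the pool is positive (prob. 1 - (1-p)^g)."""
    return 1.0 / g + 1.0 - (1.0 - p) ** g

def best_pool_size(p, g_max=100):
    """Pool size minimizing the expected tests per person."""
    return min(range(2, g_max + 1),
               key=lambda g: expected_tests_per_person(p, g))

g_opt = best_pool_size(0.01)               # close to 1/sqrt(p)
cost = expected_tests_per_person(0.01, g_opt)
```

At 1% prevalence the optimal pool size is around ten and the expected cost drops to roughly a fifth of a test per person, a factor-of-five saving over individual testing.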
For one week, researchers present results from mathematics education research as well as teaching-learning concepts for pupils' mathematical learning and for mathematical and mathematics-education learning in the various phases of teacher education. The UniReport spoke with the organizers of the conference, Prof. Dr. Susanne Schnell, Prof. Dr. Rose Vogel and Prof. Dr. Jessica Hoth, about the conference programme and the future direction of mathematics education as a discipline.
From the perspective of educational psychology, learning is a process that leads to lasting changes in behavioural potential as a consequence of experience. From a constructivist perspective, learning is best described as an individual construction of knowledge resulting from the learner's own discovery, transformation and interpretation of complex information. When learners recognize the meaning and adopt, extend or modify it for themselves, the foundation for sustainable learning is laid.
Learning is a highly individual process. School must therefore make individual learning possible even within the classroom, and the teacher must become a learning coach, since otherwise no individual, self-driven learning is possible. The teaching concept of inquiry-based discovery learning offers exactly this opportunity. It allows the three basic human needs for competence, autonomy and social relatedness to be met and thereby enables motivation, achievement and well-being (Ryan & Deci, 2004).
Inquiry-based discovery learning in mathematics lessons is shaped, step by step, by the following features:
- a problem-oriented organization
- independent, self-directed and self-responsible learning by the pupils
- individual learning paths and learning processes
- development of the learners' own questions and approaches
- formulating one's own hypotheses and conjectures; testing these conjectures; documenting, interpreting and presenting the results
- a supportive atmosphere in which the learners are gradually introduced to research techniques
- cooperative forms of learning, and with them the development of teamwork and communication skills
- lesson content with strong links to reality and meaning, societal relevance, and opportunities for interdisciplinarity
- continuous offers of support
Discovery learning can be seen as a precursor of research-based learning, since its scientific focus is not yet as pronounced. To address all phases on the way towards approximately scientific research-based learning, we use the term inquiry-based discovery learning.
A prerequisite is that the teachers themselves have previously experienced research-based learning as an active, productive and self-determined learning process. Among other things, teachers can then plan teaching processes better and support them as they unfold, because they have been "exposed" to inquiry-based discovery learning themselves and have gone through comparable processes.
This makes clear that research-based learning cannot mean that pupils are left to their own devices. Targeted support of the learners by the teacher during discovery and inquiry is indispensable for productive learning and must be part of both the preparation and the process.
International studies show that inquiry-based learning (IBL) approaches in mathematics lessons, when implemented appropriately, can improve learning, increase achievement, and raise enjoyment of mathematics lessons. Despite these positive results, the implementation of this teaching approach is not commonplace.
Bringing new teaching concepts into everyday school practice, or re-establishing existing ones there, requires professional development for teachers.
In this article we provide a stack-theoretic framework to study the universal tropical Jacobian over the moduli space of tropical curves. We develop two approaches to the process of tropicalization of the universal compactified Jacobian over the moduli space of curves -- one from a logarithmic and the other from a non-Archimedean analytic point of view. The central result from both points of view is that the tropicalization of the universal compactified Jacobian is the universal tropical Jacobian and that the tropicalization maps in each of the two contexts are compatible with the tautological morphisms. In a sequel we will use the techniques developed here to provide explicit polyhedral models for the logarithmic Picard variety.