Mathematik
Document Type
- Article (112)
- Doctoral Thesis (76)
- Preprint (47)
- Diploma Thesis (39)
- Book (25)
- Report (22)
- Conference Proceeding (18)
- Bachelor Thesis (8)
- Contribution to a Periodical (8)
- Diploma Thesis (8)
Has Fulltext
- yes (376)
Is part of the Bibliography
- no (376)
Keywords
- Kongress (6)
- Kryptologie (5)
- Mathematik (5)
- Stochastik (5)
- Doku Mittelstufe (4)
- Doku Oberstufe (4)
- Online-Publikation (4)
- Statistik (4)
- Finanzmathematik (3)
- LLL-reduction (3)
Institute
- Mathematik (376)
- Informatik (55)
- Präsidium (22)
- Physik (6)
- Psychologie (6)
- Geschichtswissenschaften (5)
- Sportwissenschaften (5)
- Biochemie und Chemie (3)
- Biowissenschaften (3)
- Geographie (3)
A stochastic model for the joint evaluation of burstiness and regularity in oscillatory spike trains
(2013)
The thesis provides a stochastic model to quantify and classify neuronal firing patterns of oscillatory spike trains. A spike train is a finite sequence of time points at which a neuron has an electric discharge (spike), recorded over a finite time interval. In this work, these spike times are analyzed with regard to special firing patterns, such as the presence or absence of oscillatory activity and of clusters (so-called bursts). Bursts do not have a clear and unique definition in the literature. They are often fired in response to behaviorally relevant stimuli, e.g., an unexpected reward or a novel stimulus, but may also appear spontaneously. Oscillatory activity has been found to be related to complex information processing such as feature binding or figure-ground segregation in the visual cortex. Thus, in the context of neurophysiology, it is important to quantify and classify these firing patterns and their change under experimental conditions such as pharmacological treatment or genetic manipulation. In neuroscientific practice, the classification is often done by visual inspection criteria that do not give reproducible results. Furthermore, descriptive methods are used for the quantification of spike trains without relating the extracted measures to properties of the underlying processes.
For that reason, a doubly stochastic point process model is proposed and termed 'Gaussian Locking to a free Oscillator' (GLO). The model has been developed on the basis of empirical observations in dopaminergic neurons and in cooperation with neurophysiologists. As a first stage, the GLO model uses an unobservable oscillatory background rhythm, represented by a stationary random walk with normally distributed increments. Two model types describe single spike firing and clusters of spikes: the random number of spikes per beat is Bernoulli distributed in the single-spike case and Poisson distributed in the cluster case. In the second stage, the random spike times are placed around their birth beat according to a normal distribution. These spike times constitute the observed point process, which has five easily interpretable parameters describing the regularity and the burstiness of the firing patterns.
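To make the two-stage construction concrete, the following minimal Python sketch simulates the bursty (Poisson) variant of the model; the parameter names and values are illustrative assumptions, not the thesis's exact parametrization.

    import numpy as np

    def simulate_glo(n_beats=200, mu=0.1, sigma_beat=0.02,
                     lam=3.0, sigma_spike=0.01, seed=0):
        # Stage 1: hidden oscillatory rhythm, a random walk with
        # N(mu, sigma_beat^2) increments (mu = mean beat period).
        rng = np.random.default_rng(seed)
        beats = np.cumsum(rng.normal(mu, sigma_beat, n_beats))
        # Stage 2: Poisson(lam) spikes per beat (Bernoulli in the
        # single-spike variant), each placed N(beat, sigma_spike^2)
        # around its birth beat.
        counts = rng.poisson(lam, n_beats)
        spikes = np.concatenate(
            [rng.normal(b, sigma_spike, c) for b, c in zip(beats, counts)])
        return np.sort(spikes)

    spike_train = simulate_glo()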
It turns out that the point process is stationary, simple and ergodic. It can be characterized as a cluster process and, in the bursty firing mode, as a Cox process. Furthermore, the distribution of the waiting times between spikes can be derived for some parameter combinations. The conditional intensity function of the point process, also called the autocorrelation function (ACF) in the neuroscience literature, is derived. This function arises by conditioning on a spike at time zero and measures the intensity of spikes x time units later. The autocorrelation histogram (ACH) is an estimate of the ACF. The parameters of the GLO are estimated by fitting the ACF to the ACH with a nonlinear least squares algorithm. This is a common procedure in neuroscientific practice and has the advantage that the GLO ACF can be computed for all parameter combinations and that its properties are closely related to the burstiness and regularity of the process. The precision of estimation is investigated for different scenarios using Monte Carlo simulations and bootstrap methods.
The GLO provides the neuroscientist with objective and reproducible classification rules for the firing patterns on the basis of the model ACF. These rules are inspired by visual inspection criteria often used in neuroscientific practice and thus support and complement the usual analysis of empirical spike trains. When applied to a sample data set, the model is able to detect significant changes in the regularity and burst behavior of the cells and provides confidence intervals for the parameter estimates.
We investigate multivariate Laurent polynomials f \in \C[\mathbf{z}^{\pm 1}] = \C[z_1^{\pm 1},\ldots,z_n^{\pm 1}] with varieties \mathcal{V}(f) restricted to the algebraic torus (\C^*)^n = (\C \setminus \{0\})^n. For such Laurent polynomials f one defines the amoeba \mathcal{A}(f) of f as the image of the variety \mathcal{V}(f) under the \Log-map \Log : (\C^*)^n \to \R^n, (z_1,\ldots,z_n) \mapsto (\log|z_1|, \ldots, \log|z_n|). That is, the amoeba \mathcal{A}(f) is the projection of the variety \mathcal{V}(f) onto its (componentwise logarithmized) absolute values. Amoebas were first defined in 1994 by Gelfand, Kapranov and Zelevinsky. Amoeba theory has developed strongly since the beginning of the new century. It is related to various mathematical subjects, e.g., complex analysis or real algebraic curves. In particular, amoeba theory can be understood as a natural connection between algebraic and tropical geometry.
In this thesis we investigate the geometry, topology and methods for the approximation of amoebas.
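As a naive illustration of the \Log-map (distinct from the SDP-based approximation method developed later in the thesis), one can point-sample the variety of a simple bivariate polynomial and project the samples; the example f = z_1 + z_2 + 1 gives the classical "line amoeba". A Python sketch:

    import numpy as np

    # Point-sample V(f) for f = z1 + z2 + 1 and apply the Log map
    # (z1, z2) -> (log|z1|, log|z2|).
    rng = np.random.default_rng(0)
    z1 = np.exp(rng.uniform(-3, 3, 50_000)
                + 1j * rng.uniform(0, 2 * np.pi, 50_000))
    z2 = -1 - z1                  # then f(z1, z2) = 0
    mask = np.abs(z2) > 0         # stay inside the torus (C*)^2
    amoeba_points = np.column_stack(
        [np.log(np.abs(z1[mask])), np.log(np.abs(z2[mask]))])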
Let \C^A denote the space of all Laurent polynomials with a given, finite support set A \subset \Z^n and coefficients in \C^*. It is well known that, in general, the existence of specific complement components of the amoebas \mathcal{A}(f) for f \in \C^A depends on the choice of coefficients of f. One prominent key problem is to provide bounds on the coefficients in order to guarantee the existence of certain complement components. A second key problem is the question whether the set U_\alpha^A \subseteq \C^A of all polynomials whose amoeba has a complement component of order \alpha \in \conv(A) \cap \Z^n is always connected.
We prove such (upper and lower) bounds for multivariate Laurent polynomials supported on a circuit. If the support set A \subset \Z^n satisfies some additional barycentric condition, we can even give an exact description of the particular sets U_\alpha^A and, especially, prove that they are path-connected.
For the univariate case of polynomials supported on a circuit, i.e., trinomials f = z^{s+t} + p z^t + q (with p,q \in \C^*), we show that a couple of classical questions from the late 19th / early 20th century regarding the connection between the coefficients and the roots of trinomials can be traced back to questions in amoeba theory. This yields nice geometrical and topological counterparts for classical algebraic results. We show for example that a trinomial has a root of a certain, given modulus if and only if the coefficient p is located on a particular hypotrochoid curve. Furthermore, there exist two roots with the same modulus if and only if the coefficient p is located on a particular 1-fan. This local description of the configuration space \C^A yields in particular that all sets U_\alpha^A for \alpha \in \{0,1,\ldots,s+t\} \setminus \{t\} are connected but not simply connected.
We show that for a given lattice polytope P the set of all configuration spaces \C^A of amoebas with \conv(A) = P is a Boolean lattice with respect to an order relation \sqsubseteq induced by the set-theoretic order relation \subseteq. This Boolean lattice turns out to have some nice structural properties and, in particular, gives an independent motivation for Passare and Rullgård's conjecture about solidness of amoebas of maximally sparse polynomials. We prove this conjecture for special instances of support sets.
A further key problem in the theory of amoebas is the description of their boundaries. Obviously, every boundary point \mathbf{w} \in \partial \mathcal{A}(f) is the image of a critical point under the \Log-map (where \mathcal{V}(f) is supposed to be non-singular here). Mikhalkin showed that this is equivalent to the fact that there exists a point in the intersection of the variety \mathcal{V}(f) and the fiber \F_{\mathbf{w}} of \mathbf{w} (w.r.t. the \Log-map), which has a (projective) real image under the logarithmic Gauss map. We strengthen this result by showing that a point \mathbf{w} may only be contained in the boundary of \mathcal{A}(f) if every point in the intersection of \mathcal{V}(f) and \F_{\mathbf{w}} has a (projective) real image under the logarithmic Gauss map.
With respect to the approximation of amoebas one is particularly interested in deciding membership, i.e., whether a given point \mathbf{w} \in \R^n is contained in a given amoeba \mathcal{A}(f). We show that this problem can be reduced to a semidefinite optimization problem (SDP), essentially via the Real Nullstellensatz. This SDP can be implemented and solved with standard software (we use SOSTools and SeDuMi here). As the main theoretical result we show that, from the complexity point of view, our approach is at least as good as Purbhoo's approximation process (the state of the art).
In this thesis, the asymptotic behavior of Pólya urn models is analyzed using an approach based on the contraction method. For this, a combinatorial discrete-time embedding of the evolution of the urn composition into random rooted trees is used. The recursive structure of the trees is then exploited to study the asymptotic behavior with ideas from the contraction method.
The approach is applied to a couple of concrete Pólya urns that lead to limit laws with normal distributions, with non-normal limit distributions, or with asymptotic periodic distributional behavior.
Finally, an approach more in the spirit of earlier applications of the contraction method is discussed for one of the examples. A general transfer theorem of the contraction method is extended to cover this example, leading to conditions on the coefficients of the recursion that are not only weaker but also in general easier to check.
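For reference, here is a minimal simulation sketch of a generic two-colour Pólya urn with a fixed replacement matrix; the concrete urns treated in the thesis vary, so the replacement scheme below is only an illustrative assumption.

    import numpy as np

    def polya_urn(replacement, n_draws, start=(1.0, 1.0), seed=0):
        # Draw a ball with probability proportional to its colour's count,
        # then add balls according to that colour's row of the
        # replacement matrix.
        rng = np.random.default_rng(seed)
        urn = np.array(start, dtype=float)
        for _ in range(n_draws):
            colour = rng.choice(2, p=urn / urn.sum())
            urn += replacement[colour]
        return urn

    # Classical Polya urn (add one ball of the drawn colour): the fraction
    # of one colour converges almost surely to a Beta-distributed,
    # non-normal limit.
    composition = polya_urn(np.eye(2), n_draws=10_000)
    print(composition / composition.sum())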
The relation between the complexity of a time-switched dynamics and the complexity of its control sequence depends critically on the concept of a non-autonomous pullback attractor. For instance, the switched dynamics associated with scalar dissipative affine maps has a pullback attractor consisting of singleton component sets. This entails that the complexity of the control sequence and of the switched dynamics, as quantified by the topological entropy, coincide. In this paper we extend the previous framework to pullback attractors with nontrivial component sets in order to gain further insight into that relation. This calls, in particular, for distinguishing two distinct contributions to the complexity of the switched dynamics. One proceeds from trajectory segments connecting different component sets of the attractor; the other proceeds from trajectory segments within the component sets. We call them “macroscopic” and “microscopic” complexity, respectively, because only the first one can be measured by our analytical tools. As a result of this picture, we obtain sufficient conditions for a switching system to be more complex than its unswitched subsystems, i.e., a complexity analogue of Parrondo’s paradox.
We study the price-setting problem of market makers under perfect competition in continuous time. We follow the classic Glosten-Milgrom model, which defines bid and ask prices as the expectation of a true value of the asset given the market maker's partial information, which includes the customers' trading decisions. The true value is modeled as a Markov process that can be observed by the customers with some noise at Poisson times.
We analyze the price-setting problem by solving a non-standard filtering problem with an endogenous filtration that depends on the bid and ask price process quoted by the market maker. Under some conditions we show existence and uniqueness of the price processes; in a different setting we construct a counterexample to uniqueness. Further, we discuss the behavior of the spread by means of a convergence result and simulations.
[Obituary] Wolfgang Schwarz
(2013)
For balanced, irreducible Pólya urn models, limit theorems for the normalized number of balls of one colour are known. For a particular urn, whose dynamics are known as the "randomised play-the-winner rule", convergence rates in Wasserstein metrics and in the Kolmogorov metric are derived, within the framework of the known limit theorems, for the case of a non-normally distributed limit.
This thesis develops a test procedure for checking the variance homogeneity of the life times of a renewal process. The procedure is based on the filtered-derivative method. To derive the acceptance region, bootstrap permutations are used first, before an asymptotic method is adopted; a corresponding functional limit theorem is sketched. Building on the test, a multiple-filter algorithm for the precise detection of the variance change points is discussed. Finally, previously detected rate changes are incorporated into the procedure. The test and the algorithm are evaluated in simulation studies, followed by an application to EEG data.
Optimization of phase and rate parameters in a stochastic model of neuronal firing activity
(2014)
In our brain, neurons represent information by emitting spikes. The rate (number of spikes), the phase (temporal shift of the spikes) and synchronous oscillations (rhythmic discharges of neurons in the same cycle) are discussed as important signal components.
This thesis investigates how rate and phase are combined for optimal detection, and quantifies the contribution of the phase depending on the chosen parameter range.
This is studied by means of a stochastic spike train model that closely resembles empirical spike trains and incorporates the three signal components mentioned above. The ELO model ("exponential locking to a free oscillator") consists of two process stages: in the background, a global oscillation process generates independent, normally distributed interval segments (oscillation). At the interval boundaries, independent inhomogeneous Poisson processes (synchrony) start, with an exponentially decreasing firing rate determined by a stimulus-specific rate and phase.
In addition to an analytical determination of the optimal parameters in the case of pure rate or pure phase coding, the joint coding is analysed in simulation studies.
The cones of nonnegative polynomials and sums of squares arise as central objects in convex algebraic geometry and have their origin in the seminal work of Hilbert ([Hil88]). Depending on the number of variables n and the degree d of the polynomials, Hilbert famously characterized all cases of equality between the cone of nonnegative polynomials and the cone of sums of squares. This equality holds precisely for bivariate forms, quadratic forms and ternary quartics ([Hil88]). Since then, a lot of work has been done on understanding the difference between these two cones, which has major consequences for many practical applications such as polynomial optimization problems. Roughly speaking, minimizing polynomial functions (constrained as well as unconstrained) can be done efficiently whenever certain nonnegative polynomials can be written as sums of squares (see Section 2.3 for the precise relationship). The underlying reason is the fundamental difference that checking nonnegativity of polynomials is an NP-hard problem whenever the degree is greater than or equal to four ([BCSS98]), whereas checking whether a polynomial can be written as a sum of squares is a semidefinite feasibility problem (see Section 2.2). Although the complexity status of the semidefinite feasibility problem is still open, it is polynomial for a fixed number of variables. Hence, understanding the difference between nonnegative polynomials and sums of squares is highly desirable from both a theoretical and a practical viewpoint.
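A classical witness for the gap between the two cones (standard in the literature and independent of this thesis) is the Motzkin form M(x,y,z) = x^4 y^2 + x^2 y^4 + z^6 - 3 x^2 y^2 z^2. Its nonnegativity follows from the arithmetic-geometric mean inequality applied to the monomials x^4 y^2, x^2 y^4 and z^6, whose geometric mean is exactly x^2 y^2 z^2; nevertheless, M admits no representation as a sum of squares of polynomials.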
This work is concerned with two topics at the intersection of convex algebraic geometry and optimization.
We develop a new method for the optimization of polynomials over polytopes. From the point of view of convex algebraic geometry, the most common method for the approximation of polynomial optimization problems is to solve semidefinite programming relaxations coming from the application of Positivstellensätze. In optimization, non-linear programming problems are often solved using branch and bound methods. We propose a fused method that uses Positivstellensatz relaxations as lower bounding methods in a branch and bound scheme. By deriving a new error bound for Handelman's Positivstellensatz, we show convergence of the resulting branch and bound method. Through the application of Positivstellensätze, semidefinite programming has gained importance in polynomial optimization in recent years. While it has proven to be a powerful tool, the underlying geometry of the feasibility regions (spectrahedra) is not yet well understood. In this work, we study polyhedral and spectrahedral containment problems; in particular, we classify their complexity and introduce sufficient criteria certifying the containment of one spectrahedron in another.
A multiple filter test for the detection of rate changes in renewal processes with varying variance
(2014)
The thesis provides novel procedures in the statistical field of change point detection in time series.
Motivated by a variety of neuronal spike train patterns, a broad stochastic point process model is introduced. This model features points in time (change points) at which the associated event rate changes. For the purpose of change point detection, filtered derivative processes (MOSUM) are studied, and functional limit theorems for these processes are derived. These results support novel procedures for change point detection; in particular, multiple filters (bandwidths) are applied simultaneously in order to detect change points on different time scales.
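The following Python sketch illustrates a filtered-derivative (MOSUM) statistic of the kind described: event counts in adjacent windows of width h are compared along the recording. The normalization shown is a simple Poisson-style scaling chosen for illustration; the thesis works with variance-adjusted versions and rigorous thresholds.

    import numpy as np

    def filtered_derivative(spike_times, T, h, grid=400):
        # G(t) compares the number of events in (t, t+h] and (t-h, t];
        # large |G(t)| indicates a rate change near t.
        ts = np.linspace(h, T - h, grid)
        s = np.sort(np.asarray(spike_times))
        right = np.searchsorted(s, ts + h) - np.searchsorted(s, ts)
        left = np.searchsorted(s, ts) - np.searchsorted(s, ts - h)
        scale = np.sqrt(np.maximum(right + left, 1))  # crude Poisson scaling
        return ts, (right - left) / scale

    # Toy usage: the rate doubles at t = 50.
    rng = np.random.default_rng(0)
    spikes = np.concatenate([rng.uniform(0, 50, 100),
                             rng.uniform(50, 100, 200)])
    ts, G = filtered_derivative(spikes, T=100, h=10)
    print(ts[np.argmax(np.abs(G))])   # estimated change point, near 50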
The work presented in this thesis is devoted to two classes of mathematical population genetics models, namely the Kingman-coalescent and the Beta-coalescents. Chapters 2, 3 and 4 of the thesis include results concerned with the first model, whereas Chapter 5 presents contributions to the second class of models.
Based on a non-rigorous formalism called the “cavity method”, physicists have made intriguing predictions on phase transitions in discrete structures. One of the most remarkable ones is that in problems such as random k-SAT or random graph k-coloring, very shortly before the threshold for the existence of solutions there occurs another phase transition called condensation [Krzakala et al., PNAS 2007]. The existence of this phase transition seems to be intimately related to the difficulty of proving precise results on, e.g., the k-colorability threshold, as well as to the performance of message passing algorithms. In random graph k-coloring, there is a precise conjecture as to the location of the condensation phase transition in terms of a distributional fixed point problem. In this paper we prove this conjecture, provided that k exceeds a certain constant k0.
We consider versions of the FIND algorithm where the pivot element used is the median of a subset chosen uniformly at random from the data. For the median selection we assume that subsamples of size asymptotic to c · n^α are chosen, where 0 < α ≤ 1/2, c > 0 and n is the size of the data set to be split. We consider the complexity of FIND as a process in the rank to be selected, measured by the number of key comparisons required. After normalization we show weak convergence of the complexity to a centered Gaussian process as n → ∞, which depends on α. The proof relies on a contraction argument for probability distributions on càdlàg functions. We also identify the covariance function of the Gaussian limit process and discuss path and tail properties.
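A minimal Python sketch of this FIND variant follows; the comparison count uses the classical n - 1 comparisons per partitioning step and ignores the lower-order cost of sorting the subsample, which is a simplifying assumption.

    import random

    def find(data, rank, alpha=0.5, c=1.0):
        # Quickselect with the pivot taken as the median of a random
        # subsample of size ~ c * n**alpha. Returns (element of the
        # given 0-based rank, number of key comparisons).
        data, comparisons = list(data), 0
        while True:
            n = len(data)
            if n == 1:
                return data[0], comparisons
            k = max(1, min(n, round(c * n ** alpha)))
            pivot = sorted(random.sample(data, k))[(k - 1) // 2]
            smaller = [x for x in data if x < pivot]
            larger = [x for x in data if x > pivot]
            comparisons += n - 1
            n_equal = n - len(smaller) - len(larger)
            if rank < len(smaller):
                data = smaller
            elif rank < len(smaller) + n_equal:
                return pivot, comparisons
            else:
                rank -= len(smaller) + n_equal
                data = larger

    element, cost = find(range(1000), rank=500)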
We consider a class of nonautonomous nonlinear competitive parabolic systems on bounded radial domains under Neumann or Dirichlet boundary conditions. We show that, if the initial profiles satisfy a reflection inequality with respect to a hyperplane, then bounded positive solutions are asymptotically (in time) foliated Schwarz symmetric with respect to antipodal points. Additionally, a related result for positive and sign-changing solutions of scalar equations with Neumann or Dirichlet boundary conditions is given. The asymptotic shape of solutions to cooperative systems is also discussed.
Thought structures of modelling task solutions and their connection to the level of difficulty
(2015)
Although efforts have been made to integrate the concept of mathematical modelling in school, PISA and TIMSS, among others, revealed weaknesses of not only German students in the field of mathematical modelling. There may be various reasons, ranging from educational policy via curricular issues to practical instructional concerns. Studies show that mathematical modelling has not yet arrived in everyday mathematics classes (Blum & Borromeo Ferri, 2009, p. 47); accordingly, the proportion of mathematical modelling in everyday lessons is low (Jordan et al., 2006). From the teachers' point of view, there are difficulties which may contribute to avoiding modelling tasks in class: developing reasonable modelling tasks, estimating the task space, evaluating the task difficulty and assessing the student solutions are all harder than for ordinary mathematics tasks. The project MokiMaS (transl.: modelling competency in mathematics classes of secondary education) aims at providing inter-year modelling tasks whose task space and level of difficulty are known, together with an evaluation scheme. In particular, a theory-based method has been developed to determine the level of difficulty of modelling tasks on the basis of thought structures representing the cognitive load of solution approaches. The current question is whether this method leads to a realistic rating. To pursue this question, an evaluation scheme guided by the daily assessment work of teachers has been developed in order to investigate the relation of task difficulty and student performance.
Mathematics is both a cultural science with a long tradition and the driving force behind many modern technologies, and thus a key discipline of the information age. On the one hand, mathematics aims at understanding abstract structures and their interrelations; on the other hand, it develops powerful methods for treating questions and problems in numerous scientific disciplines. Modern applications of mathematics include, for example, data security and compression, traffic control, the valuation and optimization of financial instruments, and the planning of medical operations.
In this brochure we present the profile of mathematics in Frankfurt in research and teaching, and in particular the degree programmes
• Bachelor Mathematik
• Master Mathematik
At Goethe University it is also possible to study mathematics for a teaching degree (L1, L2, L3, L5). ...
This thesis covers the analysis of radix sort, radix select and the path length of digital trees under a stochastic input assumption known as the Markov model.
The main results are asymptotic expansions of mean and variance as well as a central limit theorem for the complexity of radix sort and the path length of tries, PATRICIA tries and digital search trees.
Concerning radix select, a variety of different models for ranks is discussed, including a law of large numbers for the worst-case behavior, a limit theorem for the grand averages model and the first-order asymptotics of the average complexity in the quantile model.
Some of the results are achieved by moment transfer techniques; the limit laws are based on a novel use of the contraction method suited for systems of stochastic recurrences.
Triangles of groups have been introduced by Gersten and Stallings. They are, roughly speaking, a generalization of the amalgamated free product of two groups and occur in the framework of Corson diagrams. First, we prove an intersection theorem for Corson diagrams. Then, we focus on triangles of groups. It has been shown by Howie and Kopteva that the colimit of a hyperbolic triangle of groups contains a non-abelian free subgroup. We give two natural conditions, each of which ensures that the colimit of a non-spherical triangle of groups either contains a non-abelian free subgroup or is virtually solvable.
In the qualitative analysis of solutions of partial differential equations, many interesting questions are related to the shape of solutions. In particular, the symmetries of a given solution are of interest. One of the first more general results in this direction was given in 1979 by Gidas, Ni and Nirenberg... The main tool in proving this symmetry and monotonicity result is the moving plane method. This method, which goes back to Alexandrov’s work on constant mean curvature surfaces in 1962, was introduced in 1971 by Serrin in the context of partial differential equations to analyze an overdetermined problem...
This work proposes to employ the (bursty) GLO model from Bingmer et al. (2011) to model the occurrence of tropical cyclones. We develop a Bayesian framework to estimate the parameters of the model and, in particular, employ a Markov chain Monte Carlo algorithm. This also allows us to develop a forecasting framework for future events.
Moreover, we assess the default probability of an insurance company that is exposed to claims that occur according to a GLO process and show that the model is able to substantially improve actuarial risk management if events occur in oscillatory bursts.
Containment problems belong to the classical problems of (convex) geometry. In the proper sense, a containment problem is the task of deciding the set-theoretic inclusion of two given sets, which is hard from both the theoretical and the practical perspective. In a broader sense, this includes, e.g., radii or packing problems, which are even harder. For some classes of convex sets there has been strong interest in containment problems. This includes containment problems of polyhedra and balls, and containment of polyhedra in polyhedra, which were studied in the late 20th century because of their inherent relevance in linear programming and combinatorics.
Since then, there has only been limited progress in understanding containment problems of that type. In recent years, containment problems for spectrahedra, which naturally generalize the class of polyhedra, have seen great interest. This interest is particularly driven by the intrinsic relevance of spectrahedra and their projections in polynomial optimization and convex algebraic geometry. Except for the treatment of special classes or situations, there has been no overall treatment of that kind of problems, though.
In this thesis, we provide a comprehensive treatment of containment problems concerning polyhedra, spectrahedra, and their projections from the viewpoint of low-degree semialgebraic problems and study algebraic certificates for containment. This leads to a new and systematic access to studying containment problems of (projections of) polyhedra and spectrahedra, and provides several new and partially unexpected results.
The main idea, which is by now common in polynomial optimization but whose particular potential for low-degree geometric problems is still far from understood, can be explained as follows. One point of view on linear programming is as an application of Farkas' Lemma, which characterizes the (non-)solvability of a system of linear inequalities. The affine form of Farkas' Lemma characterizes linear polynomials which are nonnegative on a given polyhedron. By omitting the linearity condition, one gets a polynomial nonnegativity question on a semialgebraic set, leading to so-called Positivstellensätze (or, more precisely, Nichtnegativstellensätze). A Positivstellensatz provides a certificate for the positivity of a polynomial function in terms of a polynomial identity. As in the linear case, these Positivstellensätze are the foundation of polynomial optimization and relaxation methods. The transition from positivity to nonnegativity is still a major challenge in real algebraic geometry and polynomial optimization.
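As a small illustration of the linear starting point (affine Farkas, not the Positivstellensatz machinery itself), the following Python sketch decides containment of an H-polyhedron in a halfspace by maximizing the linear form over the polyhedron with scipy's LP solver; the tolerance is an arbitrary choice.

    import numpy as np
    from scipy.optimize import linprog

    def contained_in_halfspace(A, b, c, d, tol=1e-9):
        # {x : Ax <= b} lies in {x : c.x <= d} iff
        # max{ c.x : Ax <= b } <= d (assuming the polyhedron is
        # nonempty and the maximum is finite).
        res = linprog(-np.asarray(c, float), A_ub=A, b_ub=b,
                      bounds=[(None, None)] * len(c))
        return res.status == 0 and -res.fun <= d + tol

    # Is the unit square [0,1]^2 contained in {x + y <= 3}?
    A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
    b = np.array([1., 0., 1., 0.])
    print(contained_in_halfspace(A, b, c=[1., 1.], d=3.))   # True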
With this in mind, several principal questions arise in the context of containment problems: Can the particular containment problem be formulated as a polynomial nonnegativity (or feasibility) problem in a suitable way? If so, how are positivity and nonnegativity related to the containment question in terms of their geometric meaning? Is there a suitable Positivstellensatz for the particular situation, yielding certificates for containment? And concerning the degree of the semialgebraic certificates, which degree is necessary and which degree is sufficient to decide containment?
Indeed, (almost) all containment problems studied in this thesis can be formulated as polynomial nonnegativity problems allowing the application of semialgebraic relaxations. Other than this general result, the answer to all the other questions (highly) depends on the specific containment problem, particularly with regard to its underlying geometry. An important point is whether the hierarchies coming from increasing the degree in the polynomial relaxations always decide containment in finitely many steps.
We focus on the containment problem of an H-polytope in a V-polytope and of a spectrahedron in a spectrahedron. Moreover, we address containment problems concerning projections of H-polyhedra and spectrahedra. This selection is justified by the fact that the mentioned containment problems are computationally hard and their geometry is not well understood.
Many people have very little, or only vaguely differentiated, confidence in their mathematical and musical abilities. They believe that they are not good at one subject or the other (or both). Moreover, statements such as "I cannot sing" or "I have never understood mathematics" are quite socially acceptable and need not prevent a successful career, nor will they change the opinions others hold about them.
The project "European Music Portfolio – Sounding Ways into Mathematics" (EMP-Maths) seeks to change this understanding. Everyone can sing and make music, and everyone can do mathematics. Both subjects are integral parts of our lives and our society. What must change is the image of these two subjects, and the ability of teachers to give learners the opportunity to change it and to regard both subjects as enriching their lives.
As an example, the workbook presents an activity in which mathematics and music are combined in a single teaching sequence. Further activities that can be used in school are found in the teacher's handbook. Many more examples and suggestions are already available (see the project website), and we encourage everyone to use them. The selection in the handbook covers several central areas of mathematics and music: singing, dancing, listening, problem solving, numbers, measuring, space and shape. With this approach we aim to tie the project to the core curricula of the participating countries: Germany, Greece, Romania, Slovakia, Spain, Switzerland and the United Kingdom. The examples are documented in a form of didactic design patterns whose structure has been adapted to the requirements of the project.
The project "Sounding Ways into Mathematics" presents activities with diverse mathematical and musical content in order to offer teachers the broadest possible range of tools, ideas and examples. These activities are designed to be extensible and adaptable to different contexts as well as to the needs of each teacher and their students. Furthermore, they were developed not merely to be delivered instructively by the teacher, but to be used together with the learning group and, where appropriate, modified and developed further together.
The project "Sounding Ways into Mathematics" is related to the EMP languages project "A Creative Way into Languages" (http://emportfolio.eu/emp/).
European Music Portfolio (EMP) – Maths: 'Sounding ways into mathematics' : teacher’s handbook
(2016)
Music and mathematics share an odd character: many people believe that they are not good at one or the other (or both). However, ‘I cannot sing’ or ‘I never understood mathematics’ will probably not keep them from having successful careers, and nor will it change the opinions others have about them.
The project ‘European Music Portfolio – Sounding Ways into Mathematics’ (EMP-Maths) aims at a different understanding in this regard. Everyone can sing and make music, and everyone can do mathematics. Both topics are integral parts of our life and society. What needs to be improved is our ability to give students opportunities to like them.
This teacher’s handbook presents activities with different mathematical and musical content in order to offer teachers resources, ideas and examples. These activities are designed to be expandable, adaptable to different contexts, and adjustable to the needs of each teacher and their students. Furthermore, these activities are not just planned to be carried out individually; a teaching unit could be used to make sense of them, or they could even be developed in connection with each other.
Apart from this teacher’s handbook, the project provides a continuing professional development (CPD) course, a webpage (http://maths.emportfolio.eu) from which all materials can be downloaded, and an online collaboration platform. A general overview of related literature and research is available in separate documents. Additional teacher booklets provide related materials and a brief overview of the theoretical background, and are the basis for the CPD courses. The project ‘Sounding Ways into Mathematics’ is related to the EMP-Languages project ‘A Creative Way into Languages’ (http://emportfolio.eu/emp/).
Population genetics studies the influence of random reproduction, recombination, migration, mutation and selection on the genetic structure of a population.
This thesis, entitled "Ancestral lines under mutation and selection", investigates the interplay of random reproduction, directional selection and two-way mutation.
To this end we consider a haploid population in which, at every point in time, each individual carries exactly one of two types from S := {0,1}. Here 1 is the neutral and 0 the selectively favoured type. In the diffusion limit of very large populations we model the process of the frequency of type-0 individuals by a Wright-Fisher diffusion X := (X_t) with mutation and directional selection.
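For illustration, a Euler-Maruyama sketch of such a diffusion follows, under one common parametrization of two-way mutation (rates u01 for 0 -> 1 and u10 for 1 -> 0) and selection strength s; the thesis's exact scaling may differ.

    import numpy as np

    def wright_fisher_path(x0=0.5, s=1.0, u01=0.3, u10=0.3,
                           T=10.0, dt=1e-3, seed=0):
        # dX = (s X(1-X) + u10 (1-X) - u01 X) dt + sqrt(X(1-X)) dW,
        # where X_t is the frequency of the favoured type 0.
        rng = np.random.default_rng(seed)
        n = int(T / dt)
        x = np.empty(n + 1)
        x[0] = x0
        for k in range(n):
            drift = s * x[k] * (1 - x[k]) + u10 * (1 - x[k]) - u01 * x[k]
            noise = np.sqrt(max(x[k] * (1 - x[k]), 0.0)) \
                    * rng.normal(0.0, np.sqrt(dt))
            x[k + 1] = min(max(x[k] + drift * dt + noise, 0.0), 1.0)
        return x

    path = wright_fisher_path()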
At each time s there is exactly one individual whose descendants will make up the entire population from some future time t > s onwards. We call this individual the common ancestor at time s, since all individuals at all times r > t descend from it. Let R_s denote its type at time s. We assume that the process X is in equilibrium at time 0 and define the probability that the common ancestor at time 0 has type 0 as h(x) := P(R_0 = 0 | X_0 = x). A representation of h(x) was already found by Fearnhead (2002) and Taylor (2007) and proved there by predominantly analytical methods. In Chapter 3 of this thesis we develop a new particle picture, the pruned lookdown ancestral selection graph (pruned LD-ASG), which is interesting in its own right and yields a new probabilistic interpretation of the representation of h(x).
By extending the particle picture to offspring distributions with heavy tails, and with the help of a Siegmund duality, in Chapter 4 we extend the result for h(x) from classical Wright-Fisher diffusions to Lambda-Wright-Fisher diffusions.
In Chapter 6 we establish a connection between ideas of Taylor (2007), who studied the joint process (X,R), and a process (R,V) considered by Fearnhead (2002), which describes the evolution of the type R of the common ancestor in an environment of V so-called virtual lines. We determine the joint dynamics of the triple (X,R,V). In Chapter 7 we consider a discrete picture with finite population size N and build a bridge to results of Kluth, Hustedt and Baake (2013).
Furthermore, in Chapter 5 we develop an algorithm for simulating the types of a sample of m individuals drawn from a Wright-Fisher population with mutation and selection in equilibrium. Using this algorithm we illustrate the type distribution for various parameter values and sample sizes.
When, in autumn 2015, we published the call for the 2016 academy on the homepages of BURG FÜRSTENECK and the Schülerakademie, we did not yet suspect that we could (almost) have saved ourselves the additional advertising via the annual flyer that we send to Hessian Gymnasien and comprehensive schools with a Gymnasium branch at the turn of the year. To our surprise and great delight, we had already counted 58 applications from students by February 2016. The advertising subsequently brought us more than 20 further applications and put us in the unpleasant position of having to turn down (too) many students or to put them off until the following year.
The condensation phase transition and the number of solutions in random graph and hypergraph models
(2016)
This PhD thesis deals with two different types of questions on random graph and random hypergraph structures.
One part is about the proof of the existence and the determination of the location of the condensation phase transition. This transition will be investigated for large values of $k$ in the problem of $k$-colouring random graphs and in the problem of 2-colouring random $k$-uniform hypergraphs, where in the latter case we investigate a more general model with finite inverse temperature.
The other part deals with establishing the limiting distribution of the number of solutions in these structures in density regimes below the condensation threshold.
Random constraint satisfaction problems have been on the agenda of various sciences such as discrete mathematics, computer science, statistical physics and a whole series of additional areas of application since at least the 1990s. The objective is to find a state of a system, for instance an assignment of a set of variables, satisfying a collection of constraints. The goals of understanding the computational hardness as well as the underlying random discrete structures of these problems analytically, and of developing efficient algorithms that find optimal solutions, have triggered a huge amount of work on random constraint satisfaction problems up to this day. In this context, we present in this thesis three results for two random constraint satisfaction problems. ...
Recent decades have brought an enormous growth in knowledge and understanding of the molecular processes of life. This growth was made possible by the development of diverse methods with which, for example, the concentration of individual substances can be measured specifically, or even all metabolites present in a biological system can be recorded. The large-scale application of these methods led to the accumulation of many different omics data, such as metabolome, proteome or transcriptome data sets. Systems biology draws on such data to build mathematical models of biological systems, and thus enables the study of biological systems outside the laboratory as well.
For larger biological systems, however, not all information about substance concentrations or reaction rates is usually available to permit quantitative modelling, i.e., the description of rates of change of continuous variables. In such cases, methods of qualitative modelling are used. One of these methods is Petri nets (PN), developed in the 1960s by Carl Adam Petri to describe concurrent processes in technical settings. Since the early 1990s, PN have also been applied in systems biology, for example to model metabolic systems or signal transduction pathways. A further advantage of this method is that models can begin as a qualitative description of the system and be supplemented with quantitative descriptions over time.
Many applications already exist for modelling and analysing PN. However, since the PN concept was not originally developed for systems biology and is mostly used in technical fields, hardly any applications existed that were designed for use in systems biology, and the analysis methods developed for PN within systems biology cannot be carried out with them. The motivation of the first part of this thesis was therefore to create an application intended specifically for PN modelling and analysis in systems biology, i.e., one whose analysis methods and terminology are oriented towards the needs of systems biology. In addition, the application should support the user visually in evaluating the results of the analysis methods by placing them directly in the visual context of the PN; since the number of results grows drastically for more complex PN, such support becomes necessary. Out of this motivation the application MonaLisa was created, whose implementation and functions are described in the first part of this thesis. In addition to the classical analysis methods for PN, such as transition and place invariants, with which basic functional modules within a PN can be found, further analysis methods, mostly developed within systems biology, were implemented. These include, for example, minimal cut sets, maximal common transition sets and knock-out analyses. MonaLisa also makes it possible to simulate the dynamic behaviour of the modelled biological system. For this purpose, both deterministic and stochastic methods are available, for example Gillespie's algorithm for the simulation of chemical systems. A visual representation of the results is provided for all analysis methods; in the case of invariants, for example, their elements are coloured in the visualization of the PN. The results of the simulations or of the topological analysis can be evaluated via various graphs. To create an interface to other applications, support for several common file formats of systems biology was added to MonaLisa, e.g., SBML and KGML.
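To illustrate the stochastic simulation component mentioned above, here is a minimal, generic Python sketch of Gillespie's algorithm; it is not MonaLisa's implementation, and the toy reaction is an arbitrary example.

    import numpy as np

    def gillespie(x0, stoich, propensities, t_max, seed=0):
        # x0: initial copy numbers; stoich: one row per reaction;
        # propensities: list of functions a_j(x) giving each
        # reaction's rate in state x.
        rng = np.random.default_rng(seed)
        t, x = 0.0, np.asarray(x0, dtype=float)
        times, states = [t], [x.copy()]
        while t < t_max:
            a = np.array([f(x) for f in propensities])
            a0 = a.sum()
            if a0 <= 0:
                break                          # no reaction can fire
            t += rng.exponential(1.0 / a0)     # waiting time to next event
            x = x + stoich[rng.choice(len(a), p=a / a0)]
            times.append(t)
            states.append(x.copy())
        return times, states

    # Toy usage: the reaction A -> B with mass-action rate 0.5 * #A.
    times, states = gillespie(x0=[100, 0], stoich=np.array([[-1, 1]]),
                              propensities=[lambda x: 0.5 * x[0]],
                              t_max=10.0)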
The second part of the thesis deals with the topological analysis of a data set of 2641 whole-genome models from the path2models database. These models were generated automatically from the existing knowledge in the KEGG and MetaCyc databases. Analysing the topological properties of a graph makes it possible to draw basic conclusions about the global properties of the modelled system and the process by which it arose; such an analysis is therefore often the first step towards understanding a complex biological system. To analyse the node degrees of all reactions and metabolites of these models, the models were first transformed into PN. The topological properties of metabolic systems are already well described in the literature, with the investigations usually based on a network of the metabolites or of the reactions. Using PN makes it possible to study the topological properties of metabolites and reactions in one joint network. The motivation behind these investigations was to check whether the properties already described also hold for a PN representation, and which new properties can be found. The node degree and the clustering coefficient of the models were investigated. It is shown that a few metabolites with very high node degree are responsible for a whole range of effects: for example, the distributions of node degree and clustering coefficient with respect to metabolites are scale-free, and these metabolites are responsible for the interconnection of the neighbourhoods of reactions. It is further shown that the size of a model influences its topological properties: the more metabolites a biological system contains, the more interconnected a metabolite's neighbourhood becomes, and the same holds for the average node degree of the metabolites.
Random ordinary differential equations (RODEs) are ordinary differential equations (ODEs) which have a stochastic process in their vector field functions. RODEs have been used in a wide range of applications such as biology, medicine, population dynamics and engineering, and play an important role in the theory of random dynamical systems; however, they have long been overshadowed by stochastic differential equations.
Typically, the driving stochastic process has at most Hölder continuous sample paths, and the resulting vector field is thus at most Hölder continuous in time, no matter how smooth the vector field function is in its original variables. The sample paths of the solution are then continuously differentiable, but their derivatives are at most Hölder continuous in time. Consequently, although the classical numerical schemes for ODEs can be applied pathwise to RODEs, they do not achieve their traditional orders.
Recently, Grüne and Kloeden derived the explicit averaged Euler scheme by taking the average of the noise within the vector field. In addition, new forms of higher-order Taylor-like schemes for RODEs have been derived systematically by Jentzen and Kloeden.
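The following Python sketch conveys the flavour of such an averaged scheme for a RODE dx/dt = f(x, z(t)): the driving path z is averaged over subsample points within each step before being inserted into f. This is an illustrative reconstruction; the details of the published scheme may differ.

    import numpy as np

    def averaged_euler(f, x0, z, t_grid, m=10):
        # Pathwise Euler step with the noise averaged over m points
        # per step; z is one fixed sample path, callable in t.
        x = np.empty(len(t_grid))
        x[0] = x0
        for k in range(len(t_grid) - 1):
            t0, t1 = t_grid[k], t_grid[k + 1]
            z_bar = np.mean([z(s) for s in
                             np.linspace(t0, t1, m, endpoint=False)])
            x[k + 1] = x[k] + (t1 - t0) * f(x[k], z_bar)
        return x

    # Toy usage: dx/dt = -x + sin(z(t)) driven by one pre-sampled path.
    rng = np.random.default_rng(1)
    w = np.cumsum(rng.normal(0.0, np.sqrt(1e-3), 10_000))
    z = lambda t: w[min(int(t / 1e-3), len(w) - 1)]
    path = averaged_euler(lambda x, zt: -x + np.sin(zt), 1.0, z,
                          np.linspace(0.0, 5.0, 501))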
However, it is still important to build higher-order and computationally less expensive schemes, as well as numerically stable ones, and this is the motivation of this thesis. The schemes by Grüne and Kloeden and by Jentzen and Kloeden are very general, so the focus here is on RODEs with special structure, namely RODEs with Itô noise and RODEs with affine structure, and numerical schemes which exploit these special structures are investigated.
The developed numerical schemes are applied to several mathematical models in biology and medicine. In order to assess the performance of the numerical schemes, trajectories of solutions are illustrated. In addition, the error versus step size as well as the computational costs are compared among the newly developed schemes and the schemes in the literature.
The behaviour of electronic circuits is influenced by ageing effects. Modelling the behaviour of circuits is a standard approach for the design of faster, smaller, more reliable and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model based purely on the behavioural specification of a system. For a given specification, simulation can reveal whether a system does not comply with the specification, and thus provide a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify the impact on specification violations, failures and robustness. We study ageing effects in the context of analogue circuits. Here, models must factor in infinitely many circuit states. Ageing effects have a cause and an impact, both of which require models, and on both ends the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are not valid in many cases, because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account when modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms, as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible by means of a simple structure.
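As a toy instance of a decay-type ageing model with stress-driven degradation and relaxation, consider the following Python sketch; the state equation and rate constants are illustrative guesses at the flavour of such a model class, not the thesis's definition.

    import numpy as np

    def age_state(stress, dt=1.0, k_deg=1e-3, k_rel=1e-4):
        # Ageing state a in [0, 1]:
        #   da/dt = k_deg * s(t) * (1 - a) - k_rel * a.
        # Degradation is driven by the stress level s(t); when the
        # stress is low, the relaxation term lets the state recover.
        a = np.zeros(len(stress) + 1)
        for i, s in enumerate(stress):
            a[i + 1] = a[i] + dt * (k_deg * s * (1 - a[i]) - k_rel * a[i])
        return a

    # Toy usage: 1000 s under stress, then 1000 s unstressed (relaxation).
    stress = np.concatenate([np.ones(1000), np.zeros(1000)])
    trajectory = age_state(stress)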
Modelling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and ageing models likewise focus predominantly on digital applications. In the digital domain, signal levels are either on or off, or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing effect models for analogue components often require a more fine-grained characterization of physical processes, since signal levels can take arbitrary values. Such fine-grained, physically inspired ageing models do not scale to larger applications and are hard to simulate in reasonable time.
To close the gap between physical processes and system-level ageing simulation, we propose a data-based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, learning algorithms can find a complete model that is consistent with the data set. These ageing effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel model for the BTI effect, an ageing effect that permits relaxation. Subsequently, a transistor-level ageing model for BTI that targets analogue circuits is proposed. Similarly, we demonstrate how ageing data from analogue transistor-level circuit models lifts to purely behavioural block models. With this, we are the first to present a data-based hierarchical ageing modelling scheme.
An ageing simulator for circuit or system-level models computes long-term transients, i.e., solutions of a differential equation. Long-term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long-term simulation becomes practical. We describe an adaptive two-time simulation algorithm that essentially skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through skipped frames; this involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long-term transients by exploiting the structure of the proposed ageing models. These models permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level. This level can be computed efficiently and also gives rise to a dynamic step control mechanism.
Ageing simulation has a wide range of applications. This thesis vastly improves the applicability of ageing simulation for analogue circuits in terms of modelling and efficiency. An ageing effect model that is part of a circuit component model accounts for parametric drift that is directly related to the operation mode. For example, asymmetric load on a comparator or power stage may lead to offset drift, which is not an empirical effect.
Monitor circuits can report such effects during operation, when they become significant. Simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can restore broken components to a functional state. We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable, and efficient ageing simulation allows various operation modes to be factored in, helping to refine the selection. Using long-term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties for various use cases. This procedure enables the automatic selection of the most ageing-resilient implementation.
Algorithms for the Maximum Cardinality Matching Problem which greedily add edges to the solution enjoy great popularity. We systematically study strengths and limitations of such algorithms, in particular of those which consider node degree information to select the next edge. Concentrating on nodes of small degree is a promising approach: it was shown, experimentally and analytically, that very good approximate solutions are obtained for restricted classes of random graphs. Results achieved under these idealized conditions, however, remained unsupported by statements which depend on less optimistic assumptions.
The KarpSipser algorithm and 1-2-Greedy, which is a simplified variant of the well-known MinGreedy algorithm, proceed as follows. In each step, if a node of degree one (resp. at most two) exists, then an edge incident with a minimum degree node is picked, otherwise an arbitrary edge is added to the solution.
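A compact Python sketch of KarpSipser follows (1-2-Greedy would prefer nodes of degree at most two instead of exactly one); the tie-breaking choices are arbitrary.

    import random
    from collections import defaultdict

    def karp_sipser(edges):
        # Repeatedly match a degree-1 node with its unique neighbour;
        # otherwise pick an arbitrary edge. Matched nodes are removed.
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        matching = []
        def remove(u):
            for w in adj.pop(u, set()):
                adj[w].discard(u)
        while any(adj.values()):
            deg1 = [u for u, nb in adj.items() if len(nb) == 1]
            u = deg1[0] if deg1 else \
                random.choice([w for w, nb in adj.items() if nb])
            v = next(iter(adj[u]))
            matching.append((u, v))
            remove(u)
            remove(v)
        return matching

    # On the path a-b-c-d, KarpSipser finds a maximum matching of size 2.
    print(karp_sipser([("a", "b"), ("b", "c"), ("c", "d")]))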
We analyze the approximation ratio of both algorithms on graphs of degree at most D. Families of graphs are known for which the expected approximation ratio converges to 1/2 as D grows to infinity, even if randomization against the worst case is used. If randomization is not allowed, then we show the following convergence towards 1/2: the 1-2-Greedy algorithm achieves approximation ratio (D-1)/(2D-3); if the graph is bipartite, then the more restricted KarpSipser algorithm achieves the even stronger factor D/(2D-2). These guarantees set both algorithms apart from other well-known matching heuristics such as Greedy or MRG, which depend on randomization to break the 1/2-barrier even for paths with D=2. Moreover, for every D our guarantees are strictly larger than the best known bounds on the expected performance of the randomized variants of Greedy and MRG.
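For orientation, both guarantees tend to 1/2 from above as D grows, with the bipartite bound the stronger one for every D ≥ 3 (elementary arithmetic added here for illustration):

```latex
\frac{D-1}{2D-3} \;\longrightarrow\; \tfrac{1}{2}
\quad\text{and}\quad
\frac{D}{2D-2} \;\longrightarrow\; \tfrac{1}{2}
\qquad (D \to \infty);
\qquad
\text{e.g. } D = 3:\ \tfrac{2}{3} \approx 0.67 \ \text{vs.}\ \tfrac{3}{4} = 0.75,
\qquad
D = 2:\ \text{both equal } 1.
```

That both ratios equal 1 at D=2 is consistent with the observation below that these algorithms perform optimally on paths.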
To investigate whether KarpSipser or 1-2-Greedy can be refined to achieve better performance, or simplified without loss of approximation quality, we systematically study entire classes of deterministic greedy-like algorithms for matching. To this end we employ the adaptive priority algorithm framework of Borodin, Nielsen, and Rackoff: in each round, an adaptive priority algorithm requests one or more edges by formulating their properties, such as "is incident with a node of minimum degree", and adds the received edges to the solution. No constraints on time and space usage are imposed; an adaptive priority algorithm is restricted only by its nature of picking edges in a greedy-like fashion. If an adaptive priority algorithm requests edges by processing degree information, then we show that it does not surpass the performance of KarpSipser: our D/(2D-2)-guarantee for bipartite graphs is tight, and KarpSipser is optimal among all such "degree-sensitive" algorithms even though it uses degree information merely to detect degree-1 nodes. Moreover, we show that if the degrees of both endpoints of an edge may be processed, as done, e.g., by the Double-MinGreedy algorithm, then the performance of KarpSipser can be increased only marginally, if at all. Of special interest is the capability of requesting edges not only by specifying the degree of a node but additionally its set of neighbors; this enables an adaptive priority algorithm to "traverse" the input graph. We show that on general degree-bounded graphs no such algorithm can beat factor (D-1)/(2D-3). Hence our bound for 1-2-Greedy is tight, and this algorithm performs optimally even though it ignores neighbor information. Furthermore, we show that an adaptive priority algorithm deteriorates to approximation ratio exactly 1/2 if it does not request small-degree nodes. This drastic decline of approximation quality happens already on graphs on which 1-2-Greedy and KarpSipser perform optimally, namely paths with D=2. Consequently, requesting small-degree nodes is vital to beat factor 1/2.
In summary, our results show that 1-2-Greedy and KarpSipser stand out among known and hypothetical algorithms through an intriguing combination of approximation quality and conceptual simplicity.
Only an institution that is able to change can endure; this surely applies to educational institutions in particular. Changes, however, can come in different guises. Some happen unexpectedly and thereby perhaps cause problems, while others build up so slowly that their effects can seem almost surprising. The introduction of the practical semester ("Praxissemester") in the first, university-based phase of teacher training in Hesse, unexpected in its speed, is such a problematic change for the Hessische Schülerakademie (Oberstufe): it no longer provides for the academy's previous integration into the school-practical study components of the student supervisors, a circumstance that has kept the academy's management and its board of trustees, as well as our cooperation partners at the university and in the Ministry of Education (Kultusministerium), intensively occupied for more than two years now.
To crack the neural code and read out the information neural spikes convey, it is essential to understand how the information is coded and how much of it is available for decoding. To this end, it is indispensable to derive from first principles a minimal set of spike features containing the complete information content of a neuron. Here we present such a complete set of coding features. We show that temporal pairwise spike correlations fully determine the information conveyed by a single spiking neuron with finite temporal memory and stationary spike statistics. We reveal that interspike interval temporal correlations, which are often neglected, can significantly change the total information. Our findings provide a conceptual link between numerous disparate observations and recommend shifting the focus of future studies from addressing firing rates to addressing pairwise spike correlation functions as the primary determinants of neural information.
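As an illustration of the kind of pairwise statistics referred to above, here is a minimal toy sketch (my own code, not the authors'): the serial correlation of interspike intervals and the histogram of pairwise spike-time differences.

```python
# Two pairwise spike statistics: serial correlation of interspike
# intervals (ISIs) and the histogram of pairwise spike-time differences.
import numpy as np

def isi_serial_correlation(spike_times, lag=1):
    """Pearson correlation between ISIs separated by `lag` intervals."""
    isis = np.diff(np.sort(spike_times))
    return np.corrcoef(isis[:-lag], isis[lag:])[0, 1]

def pairwise_difference_histogram(spike_times, bin_width=0.005, max_lag=0.2):
    """Histogram of positive pairwise spike-time differences."""
    t = np.sort(spike_times)
    diffs = t[None, :] - t[:, None]
    diffs = diffs[(diffs > 0) & (diffs <= max_lag)]
    bins = np.arange(0.0, max_lag + bin_width, bin_width)
    counts, _ = np.histogram(diffs, bins=bins)
    return bins[:-1], counts

# Jittered regular spike train (period 50 ms): adjacent ISIs are negatively
# correlated, a temporal correlation a pure rate description would miss.
rng = np.random.default_rng(0)
spikes = np.arange(0.0, 10.0, 0.05) + rng.normal(0.0, 0.005, 200)
print(isi_serial_correlation(spikes))
lags, counts = pairwise_difference_histogram(spikes)
```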
Given an Abelian semi-group (A, +), an A-valued curvature measure is a valuation with values in A-valued measures. If A = R, complete classifications of Hausdorff-continuous translation-invariant SO(n)-invariant valuations and curvature measures were obtained by Hadwiger and Schneider, respectively. More recently, characterisation results have been achieved for curvature measures with values in A = Sym^p R^n and A = Sym^2 Λ^q R^n for p, q ≥ 1, with varying assumptions on their invariance properties.
In the present work, we classify all smooth translation-invariant SO(n)-covariant curvature measures with values in any SO(n)-representation in terms of certain differential forms on the sphere bundle S R^n and describe their behaviour under the globalisation map. The latter result also yields a similar classification of all continuous SO(n)-module-valued SO(n)-covariant valuations. Furthermore, a decomposition of the space of smooth translation-invariant scalar-valued curvature measures as an SO(n)-module is obtained. As a corollary, we construct explicit bases of continuous translation-invariant scalar-valued valuations and smooth translation-invariant scalar-valued curvature measures.
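For orientation, the invariance and covariance properties in question can be written as follows (standard definitions in this area, stated here in my own notation for a curvature measure Φ evaluated on a convex body K and a Borel set B):

```latex
% Translation invariance and SO(n)-covariance; g acts on the values
% through the given SO(n)-representation A.
\Phi(K + t,\; B + t) = \Phi(K, B) \qquad \text{for all } t \in \mathbb{R}^n,
\qquad
\Phi(gK,\; gB) = g \cdot \Phi(K, B) \qquad \text{for all } g \in \mathrm{SO}(n).
```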
Interactional niche in the development of geometrical and spatial thinking in the familial context
(2016)
In the analysis of mathematics education in early childhood it is necessary to consider the familial context, which has a significant influence on early childhood development. Many reputable international research studies emphasize that the more children experience mathematical situations in their families, the more diverse the emerging forms of participation that enable them to learn mathematics in the early years. In this sense, mathematical activities in the familial context are cornerstones of children's mathematical development, which is also affected by the ethnic, cultural, educational and linguistic features of their families. Germany has a population of approximately 82 million, about 7.2 million of whom are immigrants (Statistisches Bundesamt 2009, pp. 28-32). Children in immigrant families grow up with multiculturalism and multilingualism; these children are therefore categorized as an at-risk group in Germany. "Early Steps in Mathematics Learning – Family Study" (erStMaL-FaSt) is one of the first family studies in Germany to deal with the impact of familial socialization on mathematics learning. The study enables us to observe children from different ethnic groups together with their family members in different mathematical play situations. The family study (erStMaL-FaSt) is empirically embedded in the erStMaL (Early Steps in Mathematics Learning) project, which investigates longitudinal mathematical cognitive development at preschool and early primary-school age from a socio-constructivist perspective. This study uses two selected mathematical domains, Geometry and Measurement, and four play situations within these two domains.
My PhD study is situated within erStMaL-FaSt. Therefore, at the beginning of this first chapter, I briefly touch upon the IDeA Centre and the erStMaL project and then elaborate on erStMaL-FaSt. As part of my research framework, I specify two themes of erStMaL-FaSt: family and play. Thereafter I elaborate upon my research interest. The aim of my study is the development of theoretical insights into the functioning of familial interactions in the formation of geometrical (spatial) thinking and learning of children of Turkish ethnic background. Therefore, still in Chapter 1, I present some background on the Turkish people who live in Germany and on the spatial development of the children.
This study is designed as a longitudinal study and constructed from interactionist and socio-constructivist perspectives. From a socio-constructivist perspective, the cognitive development of an individual is constitutively bound to that individual's participation in a variety of social interactions. In this regard, the presence of each family member provides the child with "learning opportunities" that are embedded in the interactive process of negotiating the meaning of mathematical play. In the interactions of such varied mathematical learning situations, different emerging forms of participation and support occur. For the purpose of analysing the spatial development of a child in interaction processes in play situations with family members, various statuses of participation are constructed and theoretically described in terms of the concept of the "interactional niche in the development of mathematical thinking in the familial context" (NMT-Family) (Acar & Krummheuer, 2011), which is adapted to the special needs of familial interaction processes. The concept of the "interactional niche in the development of mathematical thinking" (NMT) comprises three components: the "learning offerings" provided by a group or society, which are specific to its culture and categorized under the aspect of "allocation"; the situationally emerging performance occurring in the process of meaning negotiation, subsumed under the aspect of the "situation"; and the individual contribution of the particular child, which constitutes the aspect of the "child's contribution" (Krummheuer 2011a, 2011b, 2012, 2014; Krummheuer & Schütte 2014). NMT-Family is thereby constructed as a subconcept of NMT, which offers the advantage of closer analyses of, and comparisons between, familial mathematical learning occasions in early childhood and at primary-school age.
Within the scope of NMT-Family, a "mathematics learning support system" (MLSS) is an interactional system that may emerge between the child and the family members in the course of the interaction process in concrete play situations (Krummheuer & Acar Bayraktar, 2011). All these topics are addressed in Chapter 2 as theoretical approaches and in Chapter 3 as the research method of this study. In Chapter 4 the data collection and analysis are described with respect to these approaches...
Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to potential underlying mechanisms.
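A minimal sketch of the core HBM idea (my own assumptions and parameters, not the authors' implementation): the activity difference between two competing populations follows a Brownian motion with drift, and a percept switch occurs whenever it crosses a threshold.

```python
# Simulate percept change times from a drift-diffusion caricature of the
# hierarchical Brownian model; dynamics and constants are illustrative.
import numpy as np

def simulate_percept_changes(drift=0.1, sigma=1.0, threshold=1.0,
                             dt=0.001, t_max=60.0, seed=0):
    """Return times of percept switches in [0, t_max]."""
    rng = np.random.default_rng(seed)
    x, sign, t = 0.0, 1.0, 0.0
    switch_times = []
    while t < t_max:
        # drift pushes the differential activity towards the threshold
        x += sign * drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
        if abs(x) >= threshold:
            switch_times.append(t)   # percept change event
            x, sign = 0.0, -sign     # reset; the other percept now dominates
    return np.array(switch_times)

changes = simulate_percept_changes()
print(len(changes), np.diff(changes)[:5])
```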
Solving equations with several unknowns is something pupils practise as early as middle school. For some it is an exciting mathematical puzzle, for others more of an ordeal. Yet very few are aware of how many lives this saves every day: modern medical imaging is based on solving very many equations for very many unknowns.
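As a deliberately tiny, hypothetical illustration of that principle (not an actual reconstruction method), one can recover a 2x2 "image" from a handful of projection sums by solving a small linear system:

```python
# Toy 'tomography': recover 4 unknown pixel values from row, column and
# diagonal sums, i.e. solve a small overdetermined linear system.
import numpy as np

# unknown 2x2 image, flattened: x = (a, b, c, d)
A = np.array([[1, 1, 0, 0],   # sum of row 1
              [0, 0, 1, 1],   # sum of row 2
              [1, 0, 1, 0],   # sum of column 1
              [0, 1, 0, 1],   # sum of column 2
              [1, 0, 0, 1]])  # main diagonal
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true                # the measured projections

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x_hat, 6))     # recovers (1, 2, 3, 4)
```

Row and column sums alone would leave the four pixels underdetermined; adding the diagonal measurement makes the system uniquely solvable, which is why real scanners measure projections from many directions.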
Early mathematical education – goals and conditions for success at the elementary and primary level
(2017)
As part of the publication series "Wissenschaftliche Untersuchungen zur Arbeit der Stiftung 'Haus der kleinen Forscher'" (Scientific Studies on the Work of the "Haus der kleinen Forscher" Foundation), scholarly contributions by renowned experts in the field of early education are published on a regular basis. The series serves the professional dialogue between the foundation, academia and practice, with the aim of providing all day-care centres, after-school centres and primary schools in Germany with well-founded support for their early education mission.
The present eighth volume of the series, with a foreword by Kristina Reiss, focuses on the goals of mathematical education in the elementary and primary sector and on the conditions for achieving them.
In their expert report, Christiane Benz, Meike Grüßing, Jens Holger Lorenz, Christoph Selter and Bernd Wollring specify pedagogical content-related target dimensions of mathematical education at day-care and primary-school age. In addition to a theoretical foundation of the various target areas, instruments for measuring them are presented. The authors also discuss the conditions for effective early mathematical education in practice, and they give recommendations for the further development of the foundation's offerings and for the scientific monitoring of the foundation's work in the field of mathematics.
The final chapter of the volume describes how these recommendations are implemented in the content offerings of the "Haus der kleinen Forscher" foundation.
Those who like to keep count may have noticed that the twentieth Hessische Schülerakademie took place in the summer of 2017: thirteen upper-secondary academies have been held since 2004, joined since 2011 by seven for the middle school level. Twenty successful academies are not only a cause for celebration; they also form a solid foundation for a confident look into the future. Next spring the Akademie Burg Fürsteneck, together with the Hessian Ministry of Education (Kultusministerium), will therefore host an interdisciplinary symposium centred on the Hessische Schülerakademie and the KulturSchule programme: under the title "Kulturelle Bildung auf dem Weg" ("Cultural Education on the Move"), experts from research and practice will meet at Burg Fürsteneck from 2 to 4 March 2018 to discuss the "conditions for quality in cultural education, using the example of the student academies and the culture schools in Hesse".