Mathematik
Document Type
- Article (84)
- Preprint (47)
- Doctoral Thesis (46)
- Report (16)
- Conference Proceeding (9)
- Diploma Thesis (6)
- Book (3)
- Part of a Book (2)
- Bachelor Thesis (1)
- Diploma Thesis (1)
Language
- English (216)
Has Fulltext
- yes (216)
Is part of the Bibliography
- no (216)
Keywords
- Kongress (6)
- Kryptologie (5)
- Online-Publikation (4)
- LLL-reduction (3)
- Moran model (3)
- computational complexity (3)
- contraction method (3)
- Algebraische Geometrie (2)
- Brownian motion (2)
- Commitment Scheme (2)
- Heat kernel (2)
- Integral Geometry (2)
- Knapsack problem (2)
- Krein space (2)
- Laplace operator on graphs (2)
- Lattice basis reduction (2)
- Mathematik (2)
- Oblivious Transfer (2)
- Perception (2)
- Quantum Zeno dynamics (2)
- San Jose (2)
- Semidefinite Programming (2)
- Shortest lattice vector problem (2)
- Subset sum problem (2)
- Tropical geometry (2)
- Tropische Geometrie (2)
- Valuation Theory (2)
- Vision (2)
- W*-dynamical system (2)
- X-Y model (2)
- ancestral selection graph (2)
- coalescent (2)
- collective intelligence (2)
- complexity (2)
- duality (2)
- fixation probability (2)
- genealogy (2)
- level of difficulty (2)
- point process (2)
- quantum spin systems (2)
- return to equilibrium (2)
- segments (2)
- spike train (2)
- task space (2)
- thought structure (2)
- Λ-coalescent (2)
- A-Discriminant (1)
- Action potential (1)
- Actions in mathematical learning (1)
- Activity (1)
- Adaptive dynamics (1)
- Amoeba (1)
- Ancestral selection graph (1)
- Anisotropic Norm (1)
- Approximation algorithm (1)
- Approximationsalgorithmus (1)
- Arbitrage (1)
- Asymptotically Even Nonlinearity (1)
- Axon (1)
- Banach spaces (1)
- Bayesian Inference (1)
- Berkovich spaces (1)
- Black and Scholes Option Price theory (1)
- Blind Signature (1)
- Block Korkin—Zolotarev reduction (1)
- Blockplay (1)
- Boolean Lattice (1)
- Boundary (1)
- Boundary Value Problems (1)
- Branch and Bound (1)
- Branching particle systems (1)
- Branching process approximation (1)
- Breaking knapsack cryptosystems (1)
- Burst (1)
- Calderón problem (1)
- Cannings model (1)
- Catalan number (1)
- Chinese Remainder Theorem (1)
- Chinese restaurant process (1)
- Circuit (1)
- Closest Vector Problem (1)
- Coamoeba (1)
- Cognitive psychology (1)
- Commitment (1)
- Commitment schemes (1)
- Computational complexity (1)
- Concentration Inequality (1)
- Condensing (1)
- Containment (1)
- Contraction method (1)
- Degenerate Linear Part (1)
- Dessins d'enfants (1)
- Diagrams and mathematical learning (1)
- Dichte <Stochastik> (1)
- Digital and analogue materials (1)
- Digital trees (1)
- Directional selection (1)
- Dirichlet bound (1)
- Dirichlet random measure (1)
- Discrete Logarithm (1)
- Diskrete Geometrie (1)
- Diversity in trait space (1)
- Donsker's theorem (1)
- Dopamine (1)
- Dormancy (1)
- Dosis-Wirkungs-Modellierung (1)
- Duality (1)
- Early Childhood (1)
- Einbettung <Mathematik> (1)
- Energie-Modell (1)
- Error Bound (1)
- Evolutionary branching (1)
- Ewens sampling formula (1)
- Examples (1)
- FEM-BEM-coupling (1)
- FID model (1)
- FIND algorithm (1)
- Face (1)
- Face recognition (1)
- Factoring (1)
- Familie (1)
- Family (1)
- Feller branching with logistic growth (1)
- Finite element methods (1)
- Finitely many measurements (1)
- Fixation probability (1)
- Fixpunkt (1)
- Fractional Brownian Motion (1)
- Fractional Laplacian (1)
- Fuchsian groups (1)
- Fächerübergreifender Unterricht (1)
- Galerkin Approximation (1)
- Game Tree (1)
- Gaussian Random Field (1)
- Gaussian process (1)
- Gelfand-Shilov space (1)
- Gemischte Volumen (1)
- Genealogical construction (1)
- Genealogische Konstruktion (1)
- Genus One (1)
- Geometrie (1)
- Geometry (1)
- Gespräch (1)
- Gestaenge (1)
- Girsanov transform (1)
- Gram-Hadamard inequalities (1)
- Griffiths–Engen–McCloskey distribution (1)
- Group dynamics (1)
- Große Abweichung (1)
- Gruppendynamiken (1)
- Hadamard's Three-Lines Theorem (1)
- Handelman (1)
- Handlung (1)
- Heisenberg algebra (1)
- Hidden Markov models (1)
- Hinterlegungsverfahren <Kryptologie> (1)
- Hintertür <Informatik> (1)
- Hodge bundle (1)
- Holzklötzchen (1)
- Hopf algebroids (1)
- Householder reflection (1)
- Hypotrochoid (1)
- Identification (1)
- Immigration (1)
- Index at Infinity (1)
- Infrared singularity (1)
- Integer relations (1)
- Interaction (1)
- Internet (1)
- Inverse problems (1)
- Klebsiella pneumoniae (1)
- Kochen-Specker theorem (1)
- Kollektivintelligenz (1)
- Kombinatorische Optimierung (1)
- Konzentrationsungleichung (1)
- Korkin—Zolotarev reduction (1)
- Kreuzkorrelation (1)
- Kryptosystem (1)
- Kullback-Leibler Informational Divergence (1)
- L^p bounds (1)
- L^p means (1)
- Label cover (1)
- Langzeitverhalten (1)
- Large Deviation (1)
- Lattice Reduction (1)
- Lernen (1)
- Linear Filtering (1)
- Linear-Implicit Scheme (1)
- Linkages (1)
- Loewner monotonicity and convexity (1)
- Logarithmic Laplacian (1)
- Long-Range Dependence (1)
- Long-Range Dependence (1)
- Long-time behaviour (1)
- Longitudinal Study (1)
- Lotka-Volterra system (1)
- Low density subset sum algorithm (1)
- Machine Learning (1)
- Malliavin calculus (1)
- Mallows model (1)
- Markov chain Monte Carlo Method (1)
- Markov chain imbedding technique (1)
- Markov model (1)
- Markov-Kette (1)
- Mathematical Giftedness (1)
- Mathematical Reasoning (1)
- Mathematical modelling (1)
- Mathematics Learning (1)
- McEliece (1)
- Mean Anisotropy (1)
- Message authentication (1)
- Mixed Volumes (1)
- Modellierung (1)
- Modular Multiplication (1)
- Mooney faces (1)
- Morava K-theory (1)
- Mouse (1)
- Multityp-Verzweigungsprozess mit Immigration (1)
- Multitype Branching with Immigration (1)
- Musik (1)
- NP-complete problems (1)
- NP-hard (1)
- NP-hardness (1)
- Neural encoding (1)
- Neurophysiology (1)
- Neuroscience (1)
- Newton–Okounkov bodies (1)
- Non-Malleability (1)
- Noticeable Probability (1)
- Optimal Mean-Square Filter (1)
- Oracle Query (1)
- Parabolic SPDE (1)
- Participation (1)
- Partizipation (1)
- Pause (1)
- Permutation (1)
- Phragmén-Lindelöf principle (1)
- Piecewise-constant coefficient (1)
- Poisson Process (1)
- Poisson boundary (1)
- Polyedrische Kombinatorik (1)
- Polymorphic evolution sequence (1)
- Polynomial Optimization (1)
- Pontrjagin space (1)
- Populationsdynamiken (1)
- Portfolios (1)
- Positivstellensatz (1)
- Prag <1999> (1)
- Private Information Retrieval (1)
- Probabilistic analysis of algorithms (1)
- Probabilistically checkable proofs (1)
- Probabilistische Analyse von Algorithmen (1)
- Probability distribution (1)
- Probability of fixation (1)
- Profil Likelihood (1)
- Projektionen (1)
- Public Key Cryptosystem (1)
- Public Parameter (1)
- Punktprozess (1)
- Pólya urn (1)
- Quadratic Residue (1)
- Quantum Zeno Effect (1)
- Quantum Zeno effect (1)
- Quickselect (1)
- Radix sort (1)
- Random Oracle (1)
- Random String (1)
- Random environment (1)
- Random variables (1)
- Ray-Knight representation (1)
- Reaction time (1)
- Rekursiver Algorithmus (1)
- Relaxation (1)
- Representation Problem (1)
- Research article (1)
- Riemann surfaces (1)
- Ringtheorie (1)
- Risikobewertung (1)
- Robustheit (1)
- SLLL-reduction (1)
- San Francisco (1)
- Santa Barbara (1)
- Schizophrenia (1)
- Schwarz triangle functions (1)
- Schwinger model (1)
- Security (1)
- Security Parameter (1)
- Semidefinite Optimierung (1)
- Semidefinite Optimization (1)
- Semiotics according to C. S. Peirce (1)
- Sensory perception (1)
- Sensory processing (1)
- Signature (1)
- Small order expansion (1)
- Spectrahedra (1)
- Spiel (1)
- Spielbaum (1)
- Spielbaum-Suchverfahren (1)
- Stable reduction algorithm (1)
- State dependent branching rate (1)
- Stationarity (1)
- Statistik (1)
- Stochastic Analysis of Square Zero Variation Processes (1)
- Stochastik (1)
- Stonesches Spektrum (1)
- Striatum (1)
- Strong Taylor Scheme (1)
- Sum of Squares (1)
- Support (1)
- Symmetrie (1)
- Symmetry (1)
- Sympatric speciation (1)
- Tail Bound (1)
- Tailschranke (1)
- Talk (1)
- Thorne Kishino Felsenstein model (1)
- Topic Model (1)
- Trapdoor (1)
- Trinomial (1)
- Tropical Geometry (1)
- Tropical Grassmannians (1)
- Tropical bases (1)
- Tropical varieties (1)
- Tropische Basen (1)
- Trotter's product formula (1)
- Turkish immigrants (1)
- Typ-In-Algebra (1)
- Typology (1)
- Türkisch (1)
- Uniform regularity (1)
- Uniform resource locators (1)
- Unterstützung (1)
- Valuation on functions (1)
- Verzweigende Teilchensysteme (1)
- Verzweigungsprozess (1)
- Wahrscheinlichkeitsverteilung (1)
- Wiener Index (1)
- Wiener index (1)
- Wiener-Index (1)
- Zolotarev metric (1)
- Zufällige Umgebung (1)
- Zustandsabhängige Verzweigungsrate (1)
- abelian differentials (1)
- algebraic curves (1)
- algebraic values (1)
- alpha-stable branching (1)
- ampleness (1)
- analysis of algorithms (1)
- anti-Zeno effect (1)
- argumentation (1)
- arithmetic ball quotients (1)
- augmented and restricted base loci (1)
- autocorrelograms (1)
- bid-ask spread (1)
- binary search tree (1)
- bordism theory (1)
- branching processes (1)
- branching random walk in random medium (1)
- cancer cell dormancy (1)
- canonical divisors (1)
- catastrophe modeling (1)
- chosen ciphertext attack (1)
- clique problem (1)
- colorability (1)
- combinatorial optimization (1)
- compact Riemann surfaces (1)
- complex multiplication (1)
- composition (1)
- computational geometry (1)
- concurrent composition (1)
- condensing (1)
- confirmatory factor analysis (1)
- consensus (1)
- continued fraction algorithm (1)
- convexity (1)
- convolution quadrature (1)
- cooperative systems (1)
- cross correlation (1)
- cryptography (1)
- cycle structure of permutations (1)
- degenerate semigroup (1)
- delay equation (1)
- dessins d’enfants (1)
- difference sets (1)
- digital search tree (1)
- digital tools (1)
- discrete dynamical system (1)
- discrete logarithm (1)
- discrete logarithm (DL) (1)
- dose-response modelling (1)
- doubly stochastic point process (1)
- eigenvalue (1)
- elastodynamic wave equation (1)
- emergence (1)
- endliche metrische Räume (1)
- error bounds (1)
- exponentiation (1)
- external branch (1)
- face inversion (1)
- face perception (1)
- fake projective planes (1)
- families of hash functions (1)
- finite resolution (1)
- firing patterns (1)
- flat surfaces (1)
- floating point arithmetic (1)
- floating point errors (1)
- foliated Schwarz symmetry (1)
- forming a group (1)
- fractional Brownian motion (1)
- fractions of exponentiation (1)
- frühkindliche Erziehung (1)
- functional limit theorem (1)
- functional limit theorems (1)
- generic algorithm (1)
- generic algorithms (1)
- generic complexity (1)
- generic group model (1)
- geometry (1)
- graph coloring (1)
- graph isomorphism (1)
- h-transform (1)
- hard bit (1)
- hardcore subsets (1)
- harmonic function (1)
- heavy tails (1)
- hidden Markov model (1)
- hierarchical mean-field limit (1)
- highly regular nearby points (1)
- hypergeometric functions (1)
- hypervariable region (1)
- incremental schemes (1)
- indefinite inner product space (1)
- individual-based models (1)
- inner product (1)
- integer relation (1)
- integer vector (1)
- interacting particle Systems (1)
- internal diffusion limited aggregation (1)
- internal path length (1)
- inverse coefficient problem (1)
- iterated subsegments (1)
- key comparisons (1)
- kinetic fingerprint (1)
- knapsack cryptosystems (1)
- large deviations (1)
- latent variance (1)
- lattice basis reduction (1)
- lattices (1)
- leapfrog (1)
- length defect (1)
- limit order markets (1)
- local LLL-reduction (1)
- local LLL-reduction (1)
- local coordinates (1)
- local randomness (1)
- local time (1)
- local time drift (1)
- logarithmic geometry (1)
- logical networks (1)
- lookdown construction (1)
- lower bounds (1)
- manifold and geodesic (1)
- market making (1)
- mathematical modeling (1)
- mathematical modelling (1)
- mathematics (1)
- measurement (1)
- message-passing algorithm (1)
- modelling (1)
- modular automorphism group (1)
- modular group (1)
- moduli spaces (1)
- multi-agents system (1)
- multi-drug treatment (1)
- multilevel branching (1)
- music (1)
- mutation parameter estimation (1)
- neuronal code (1)
- neuronaler Kode (1)
- non-archimedean geometry (1)
- non-autonomous dynamical systems (1)
- non-malleability (1)
- noncommutative ring spectra (1)
- nondeterministic Turing machines (1)
- numerical experiments (1)
- observable Funktion (1)
- one-more decryption attack (1)
- one-way function (1)
- one-way functions (1)
- operator algebra (1)
- optimal transport (1)
- pair HMM (1)
- partial match queries (1)
- perceptual closure (1)
- phage (1)
- phage therapy (1)
- phase transitions (1)
- poisson process (1)
- polynomial random number generator (1)
- population dynamics (1)
- portfolio optimization (1)
- positivity of line bundles (1)
- probabilistic analysis of algorithms (1)
- probability metric (1)
- professional development (1)
- profile likelihood (1)
- projections (1)
- projective planes (1)
- q-binomial theorem (1)
- quantum field theory (1)
- quincunx (1)
- random environment (1)
- random function generator (1)
- random graphs (1)
- random measures (1)
- random media (1)
- random metric (1)
- random move (1)
- random number generator (1)
- random oracle model (1)
- random partition (1)
- random recursive tree (1)
- random trees (1)
- random walks (1)
- raum-zeitliche Muster (1)
- reactant-catalyst systems (1)
- recursive distributional equation (1)
- resistance (1)
- resistance mutation (1)
- reversibility (1)
- risk assessment (1)
- risk theory (1)
- rotating plane method (1)
- rough paths theory (1)
- satisfiability (1)
- scaling (1)
- searchtrees (1)
- secure bit (1)
- security analysis of protocols (1)
- security of data (1)
- self-organizing groups (1)
- self-organizing groups; population dynamics; collective intelligence; forming groups; metric on finite sets (1)
- semidefinite optimization (1)
- sequence alignment (1)
- set-valued pullback attractors (1)
- shadow price (1)
- short integer relation (1)
- shortest lattice vector (1)
- signature size (1)
- signed ElGamal encryption (1)
- simultaneous diophantine approximations (1)
- simultaneous security of bits (1)
- single block replacement (1)
- spatio-temporal patterns (1)
- statistic analysis (1)
- statistical alignment (1)
- statistische Analyse (1)
- statistischer Test (1)
- stoch. Analyse von Algorithmen (1)
- stochastic filtering (1)
- stochastic modeling (1)
- stochastic population dynamics (1)
- strong transience (1)
- subgroup growth (1)
- subset sum problems (1)
- substitution attacks (1)
- sum of squared factor loadings (1)
- switching systems (1)
- synergistic interaction (1)
- therapy evasion (1)
- topological entropy (1)
- trading strategies (1)
- transcendence (1)
- transversal learning (1)
- treatment protocol design (1)
- treatment success (1)
- tropical geometry (1)
- tropical universal Jacobian (1)
- tropicalization (1)
- universal compactified Jacobian (1)
- urn model (1)
- von Neumann algebra (1)
- von Neumann algebras (1)
- von Neumann-Algebra (1)
- weak convergence (1)
- Λ-coalescent (1)
- σ-field (1)
Institute
- Mathematik (216)
- Informatik (50)
- Medizin (2)
- Frankfurt Institute for Advanced Studies (FIAS) (1)
- MPI für Hirnforschung (1)
- MPI für empirische Ästhetik (1)
- Physik (1)
It is possible to represent each of a number of Markov chains as an evolving sequence of connected subsets of a directed acyclic graph that grow in the following way: initially, all vertices of the graph are unoccupied, particles are fed in one-by-one at a distinguished source vertex, successive particles proceed along directed edges according to an appropriate stochastic mechanism, and each particle comes to rest once it encounters an unoccupied vertex. Examples include the binary and digital search tree processes, the random recursive tree process and generalizations of it arising from nested instances of Pitman's two-parameter Chinese restaurant process, tree-growth models associated with Mallows' ϕ model of random permutations and with Schützenberger's non-commutative q-binomial theorem, and a construction due to Luczak and Winkler that grows uniform random binary trees in a Markovian manner. We introduce a framework that encompasses such Markov chains, and we characterize their asymptotic behavior by analyzing in detail their Doob-Martin compactifications, Poisson boundaries and tail σ-fields.
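A minimal sketch of one such chain is the digital search tree process: each particle enters at the root and is routed by independent fair coin flips past occupied vertices until it reaches an unoccupied one (the fair-coin mechanism is just one of the stochastic mechanisms covered by the framework; all names below are ours):

```python
import random

def grow_digital_search_tree(n_particles, seed=0):
    """Grow a random digital search tree: each particle enters at the
    root and, at every occupied vertex, moves to the left or right
    child with probability 1/2 until it finds an unoccupied vertex."""
    rng = random.Random(seed)
    occupied = set()
    for _ in range(n_particles):
        v = ()  # root, encoded as a tuple of 0/1 moves
        while v in occupied:
            v = v + (rng.randint(0, 1),)
        occupied.add(v)
    return occupied

tree = grow_digital_search_tree(100)
print(len(tree))  # → 100
```

By construction the occupied set is always a connected subtree containing the root, exactly the kind of evolving sequence of connected subsets described above.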
We consider versions of the FIND algorithm where the pivot element used is the median of a subset chosen uniformly at random from the data. For the median selection we assume that subsamples of size asymptotic to c·n^α are chosen, where 0 < α ≤ 1/2, c > 0 and n is the size of the data set to be split. We consider the complexity of FIND as a process in the rank to be selected, measured by the number of key comparisons required. After normalization we show weak convergence of the complexity to a centered Gaussian process as n → ∞, which depends on α. The proof relies on a contraction argument for probability distributions on càdlàg functions. We also identify the covariance function of the Gaussian limit process and discuss path and tail properties.
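A minimal sketch of the FIND variant analysed here, assuming distinct keys and 0-based ranks (parameter names and defaults are ours):

```python
import random

def find(data, k, c=1.0, alpha=0.5, rng=random.Random(1)):
    """FIND (quickselect) returning the element of rank k (0-based),
    using as pivot the median of a uniform subsample of size ~ c * n**alpha."""
    data = list(data)
    while True:
        n = len(data)
        if n == 1:
            return data[0]
        m = max(1, min(n, int(c * n ** alpha)))
        if m % 2 == 0:
            m -= 1  # odd sample size gives a unique median
        pivot = sorted(rng.sample(data, m))[m // 2]
        lesser = [x for x in data if x < pivot]
        greater = [x for x in data if x > pivot]
        if k < len(lesser):
            data = lesser              # rank k lies left of the pivot
        elif k >= n - len(greater):
            k -= n - len(greater)      # shift rank into the right part
            data = greater
        else:
            return pivot

vals = list(range(101))
random.Random(2).shuffle(vals)
print(find(vals, 50))  # → 50
```

The complexity process studied in the abstract counts the key comparisons made in the two list comprehensions, as a function of the rank k.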
We study the price-setting problem of market makers under perfect competition in continuous time. We follow the classic Glosten-Milgrom model, which defines bid and ask prices as the expectation of the true value of the asset given the market maker's partial information, which includes the customers' trading decisions. The true value is modeled as a Markov process that can be observed by the customers with some noise at Poisson times.
We analyze the price-setting problem by solving a non-standard filtering problem with an endogenous filtration that depends on the bid and ask price processes quoted by the market maker. Under some conditions we show existence and uniqueness of the price processes. In a different setting we construct a counterexample to uniqueness. Further, we discuss the behavior of the spread via a convergence result and simulations.
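The continuous-time filtering problem is beyond a short snippet, but the underlying Glosten-Milgrom pricing rule (quotes as conditional expectations given the trade direction) can be illustrated in a one-period toy version with a binary true value; the binary value and the informed-trader fraction mu are our simplifying assumptions, not the paper's model:

```python
def glosten_milgrom_quotes(p, mu):
    """One-period toy Glosten-Milgrom quotes for a true value V in {0, 1}:
    a fraction mu of traders is informed (buys iff V = 1), the rest buy or
    sell with probability 1/2. Competitive market makers quote
    ask = E[V | buy] and bid = E[V | sell], given the prior p = P(V = 1)."""
    p_buy_1, p_buy_0 = mu + (1 - mu) / 2, (1 - mu) / 2    # P(buy | V)
    ask = p * p_buy_1 / (p * p_buy_1 + (1 - p) * p_buy_0)  # Bayes update on "buy"
    p_sell_1, p_sell_0 = (1 - mu) / 2, mu + (1 - mu) / 2   # P(sell | V)
    bid = p * p_sell_1 / (p * p_sell_1 + (1 - p) * p_sell_0)
    return bid, ask

print(glosten_milgrom_quotes(0.5, 0.3))  # positive spread around the prior
```

With mu = 0 (no informed traders) the spread collapses to zero, mirroring the role of asymmetric information in the model above.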
In this thesis, the asymptotic behavior of Pólya urn models is analyzed using an approach based on the contraction method. For this, a combinatorial discrete-time embedding of the evolution of the urn's composition into random rooted trees is used; the recursive structure of these trees is then exploited to study the asymptotic behavior with ideas from the contraction method.
The approach is applied to a number of concrete Pólya urns that lead to limit laws with normal distributions, with non-normal limit distributions, or with asymptotically periodic distributional behavior.
Finally, an approach more in the spirit of earlier applications of the contraction method is discussed for one of the examples. A general transfer theorem of the contraction method is extended to cover this example, leading to conditions on the coefficients of the recursion that are not only weaker but also in general easier to check.
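As a concrete illustration of the kind of urn studied here, the following minimal simulation runs the classic two-colour Pólya urn, whose fraction of white balls converges almost surely to a Beta-distributed random limit (this standard example is our choice of illustration, not necessarily one of the thesis's urns):

```python
import random

def polya_urn(steps, white=1, black=1, rng=None):
    """Classic Pólya urn: draw a ball uniformly at random and return it
    together with one more ball of the same colour. The fraction of white
    balls converges a.s. to a Beta(white, black)-distributed random limit."""
    rng = rng or random.Random()
    for _ in range(steps):
        if rng.random() < white / (white + black):
            white += 1
        else:
            black += 1
    return white / (white + black)

# independent runs: with white = black = 1 the limit law is Beta(1,1) = Uniform(0,1)
fractions = [polya_urn(10_000, rng=random.Random(s)) for s in range(200)]
print(sum(fractions) / len(fractions))
```

The random (rather than deterministic) limit of each single run is exactly the phenomenon that makes distributional fixed-point methods such as the contraction method natural here.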
The relation between the complexity of a time-switched dynamics and the complexity of its control sequence depends critically on the concept of a non-autonomous pullback attractor. For instance, the switched dynamics associated with scalar dissipative affine maps has a pullback attractor consisting of singleton component sets. This entails that the complexity of the control sequence and that of the switched dynamics, as quantified by the topological entropy, coincide. In this paper we extend the previous framework to pullback attractors with nontrivial component sets in order to gain further insight into that relation. This calls, in particular, for distinguishing two distinct contributions to the complexity of the switched dynamics. One proceeds from trajectory segments connecting different component sets of the attractor; the other proceeds from trajectory segments within the component sets. We call them “macroscopic” and “microscopic” complexity, respectively, because only the first can be measured by our analytical tools. As a result of this picture, we obtain sufficient conditions for a switching system to be more complex than its unswitched subsystems, i.e., a complexity analogue of Parrondo’s paradox.
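The singleton-attractor case mentioned above can be illustrated numerically: along any fixed control sequence, trajectories of scalar dissipative affine maps started from different initial conditions collapse onto a single pullback trajectory (a minimal sketch; the particular maps and control sequence are our choice):

```python
def switched_orbit(controls, x0, maps):
    """Iterate a time-switched scalar affine system x_{t+1} = a*x_t + b,
    where the control sequence selects the pair (a, b) at each step."""
    xs = [x0]
    for i in controls:
        a, b = maps[i]
        xs.append(a * xs[-1] + b)
    return xs

maps = [(0.5, 0.0), (0.5, 1.0)]  # two dissipative affine maps (|a| < 1)
controls = [0, 1] * 50
# Different initial conditions synchronise along the same control sequence:
# the pullback attractor has singleton component sets.
o1 = switched_orbit(controls, -5.0, maps)
o2 = switched_orbit(controls, 5.0, maps)
print(abs(o1[-1] - o2[-1]))  # essentially zero
```

Since the gap between the two orbits contracts by the factor |a| = 0.5 at every step, any complexity of the resulting dynamics must come from the control sequence itself, which is the coincidence of entropies described above.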
Neuronal activity in the brain is often investigated in the presence of stimuli; this is termed externally driven activity. This stimulus-response perspective has long been the focus of efforts to find out how the nervous system responds to different stimuli. The neuronal response consists of baseline activity, the so-called spontaneous activity, and activity caused by the stimulus. The baseline activity is often considered constant over time, which allows the stimulus-evoked part of the neuronal response to be identified by averaging over a set of trials.
However, during the last years it has been recognized that the intrinsic dynamics of the nervous system play an important role in information processing. As a consequence, spontaneous activity is no longer regarded merely as background ’noise’, and its role in cortical processing is being reconsidered. The study of spontaneous firing patterns therefore gains importance, as these patterns may shape neuronal responses to a larger extent than previously thought. For example, recent findings suggest that prestimulus activity can predict a person’s visual perception performance on a single-trial basis (Hanslmayr et al., 2007). In this context, Ringach (2009) remarks that one can learn much about even the quiescent state of the brain, which “underlies the importance of understanding cortical responses as the fusion of ongoing activity and sensory input”.
Taking into account that spontaneous activity reflects anything but noise, new challenges arise when analysing neuronal data. In this thesis one of these problems related to the analysis of neuronal activity will be addressed, namely the nonstationarity of firing rates.
The present work consists of four chapters. First, the introduction gives neurophysiological background information on neuronal information processing. Afterwards, the theory of point processes is presented, which forms the basis for modeling neuronal spiking data. In the last section of the introduction a statement of the problem is given. Chapter 2 proposes an easily applicable statistical method for the detection of nonstationarity; it is applied to simulations and to real data in order to demonstrate its capabilities. Thereafter, four other approaches are presented which provide useful illustrations of the nonstationarity of the firing rate but share the problem that one cannot make objective statements on the basis of their results. They were developed in the course of establishing a suitable method. In chapter 4 the results are discussed and suggestions for further study are given.
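The thesis's detection method is not spelled out in this summary. As a generic illustration of why nonstationary firing rates matter, the following sketch compares the variance-to-mean ratio of binned spike counts for a stationary and a ramping Poisson process (the dispersion-index criterion is our illustrative choice, not the method of Chapter 2):

```python
import random, statistics

def poisson_spike_counts(rate_fn, t_end, bin_width, rng):
    """Bin the spikes of an (in)homogeneous Poisson process, simulated
    by thinning, into windows of length bin_width."""
    rmax = max(rate_fn(i * 0.01) for i in range(int(t_end / 0.01)))
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rmax)          # candidate spike from rate rmax
        if t >= t_end:
            break
        if rng.random() < rate_fn(t) / rmax:  # accept with prob rate(t)/rmax
            spikes.append(t)
    nbins = int(t_end / bin_width)
    counts = [0] * nbins
    for s in spikes:
        counts[min(int(s / bin_width), nbins - 1)] += 1
    return counts

def dispersion_index(counts):
    """Variance-to-mean ratio: ~1 for a stationary Poisson process,
    inflated when the firing rate drifts across bins."""
    return statistics.variance(counts) / statistics.mean(counts)

rng = random.Random(42)
flat = poisson_spike_counts(lambda t: 20.0, 100.0, 1.0, rng)
ramp = poisson_spike_counts(lambda t: 5.0 + 0.3 * t, 100.0, 1.0, rng)
print(round(dispersion_index(flat), 2), round(dispersion_index(ramp), 2))
```

The ramping rate inflates the dispersion index well above 1, which is the kind of effect a nonstationarity test must detect against chance fluctuations.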
A stochastic model for the joint evaluation of burstiness and regularity in oscillatory spike trains
(2013)
The thesis provides a stochastic model to quantify and classify neuronal firing patterns of oscillatory spike trains. A spike train is a finite sequence of time points at which a neuron has an electric discharge (spike), recorded over a finite time interval. In this work, these spike times are analyzed with regard to special firing patterns such as the presence or absence of oscillatory activity and of clusters (so-called bursts). Bursts have no clear and unique definition in the literature. They are often fired in response to behaviorally relevant stimuli, e.g., an unexpected reward or a novel stimulus, but may also appear spontaneously. Oscillatory activity has been found to be related to complex information processing such as feature binding or figure-ground segregation in the visual cortex. Thus, in the context of neurophysiology, it is important to quantify and classify these firing patterns and their change under experimental conditions such as pharmacological treatment or genetic manipulation. In neuroscientific practice, the classification is often done by visual inspection criteria that do not give reproducible results. Furthermore, descriptive methods are used for the quantification of spike trains without relating the extracted measures to properties of the underlying processes.
For that reason, a doubly stochastic point process model is proposed, termed 'Gaussian Locking to a free Oscillator' (GLO). The model has been developed on the basis of empirical observations in dopaminergic neurons and in cooperation with neurophysiologists. As a first stage, the GLO model uses an unobservable oscillatory background rhythm, represented by a stationary random walk whose increments are normally distributed. Two different model types describe single-spike firing and clusters of spikes. The random number of spikes per beat has a different distribution in each type (Bernoulli in the single-spike case, Poisson in the cluster case). In the second stage, the random spike times are placed around their birth beat according to a normal distribution. These spike times represent the observed point process, which has five easily interpretable parameters describing the regularity and burstiness of the firing patterns.
It turns out that the point process is stationary, simple and ergodic. It can be characterized as a cluster process and, in the bursty firing mode, as a Cox process. Furthermore, the distribution of the waiting times between spikes can be derived for some parameter combinations. The conditional intensity function of the point process is derived, which is also called the autocorrelation function (ACF) in the neuroscience literature. This function arises by conditioning on a spike at time zero and measures the intensity of spikes x time units later. The autocorrelation histogram (ACH) is an estimate of the ACF. The parameters of the GLO are estimated by fitting the ACF to the ACH with a nonlinear least squares algorithm. This is a common procedure in neuroscientific practice and has the advantage that the GLO ACF can be computed for all parameter combinations and that its properties are closely related to the burstiness and regularity of the process. The precision of estimation is investigated for different scenarios using Monte Carlo simulations and bootstrap methods.
The GLO provides the neuroscientist with objective and reproducible classification rules for the firing patterns on the basis of the model ACF. These rules are inspired by visual inspection criteria often used in neuroscientific practice and thus support and complement the usual analysis of empirical spike trains. When applied to a sample data set, the model is able to detect significant changes in the regularity and burst behavior of the cells and provides confidence intervals for the parameter estimates.
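The two-stage construction can be sketched directly from the description above (bursty mode: normal beat increments, a Poisson number of spikes per beat, normal jitter around each beat); parameter names and values are ours, not the thesis's:

```python
import math, random

def poisson_variate(lam, rng):
    """Knuth's inverse-transform sampling of a Poisson(lam) variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def glo_spike_train(n_beats, beat_mean, beat_sd, jitter_sd, burst_rate, seed=0):
    """Sketch of a GLO-type doubly stochastic point process (bursty mode):
    an unobserved beat process with i.i.d. normal increments, a Poisson
    number of spikes per beat, and normally jittered spike times."""
    rng = random.Random(seed)
    spikes, beat = [], 0.0
    for _ in range(n_beats):
        beat += rng.gauss(beat_mean, beat_sd)          # hidden oscillatory beat
        for _ in range(poisson_variate(burst_rate, rng)):
            spikes.append(rng.gauss(beat, jitter_sd))  # observed spike times
    return sorted(spikes)

train = glo_spike_train(1000, beat_mean=1.0, beat_sd=0.1,
                        jitter_sd=0.05, burst_rate=2.0)
print(len(train))
```

Shrinking jitter_sd and beat_sd makes the train more regular, while burst_rate controls burstiness, in line with the interpretable-parameter design described above.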
We investigate multivariate Laurent polynomials f \in \C[\mathbf{z}^{\pm 1}] = \C[z_1^{\pm 1},\ldots,z_n^{\pm 1}] with varieties \mathcal{V}(f) restricted to the algebraic torus (\C^*)^n = (\C \setminus \{0\})^n. For such Laurent polynomials f one defines the amoeba \mathcal{A}(f) of f as the image of the variety \mathcal{V}(f) under the \Log-map \Log : (\C^*)^n \to \R^n, (z_1,\ldots,z_n) \mapsto (\log|z_1|, \ldots, \log|z_n|). That is, the amoeba \mathcal{A}(f) is the projection of the variety \mathcal{V}(f) onto its (componentwise logarithmized) absolute values. Amoebas were first defined in 1994 by Gelfand, Kapranov and Zelevinsky. Amoeba theory has been strongly developed since the beginning of the new century. It is related to various mathematical subjects, e.g., complex analysis or real algebraic curves. In particular, amoeba theory can be understood as a natural connection between algebraic and tropical geometry.
In this thesis we investigate the geometry, topology and methods for the approximation of amoebas.
Let \C^A denote the space of all Laurent polynomials with a given, finite support set A \subset \Z^n and coefficients in \C^*. It is well known that, in general, the existence of specific complement components of the amoeba \mathcal{A}(f) for f \in \C^A depends on the choice of coefficients of f. One prominent key problem is to provide bounds on the coefficients that guarantee the existence of certain complement components. A second key problem is the question of whether the set U_\alpha^A \subseteq \C^A of all polynomials whose amoeba has a complement component of order \alpha \in \conv(A) \cap \Z^n is always connected.
We prove such (upper and lower) bounds for multivariate Laurent polynomials supported on a circuit. If the support set A \subset \Z^n satisfies some additional barycentric condition, we can even give an exact description of the particular sets U_\alpha^A and, especially, prove that they are path-connected.
For the univariate case of polynomials supported on a circuit, i.e., trinomials f = z^{s+t} + p z^t + q (with p,q \in \C^*), we show that a couple of classical questions from the late 19th / early 20th century regarding the connection between the coefficients and the roots of trinomials can be traced back to questions in amoeba theory. This yields nice geometrical and topological counterparts for classical algebraic results. We show for example that a trinomial has a root of a certain, given modulus if and only if the coefficient p is located on a particular hypotrochoid curve. Furthermore, there exist two roots with the same modulus if and only if the coefficient p is located on a particular 1-fan. This local description of the configuration space \C^A yields in particular that all sets U_\alpha^A for \alpha \in \{0,1,\ldots,s+t\} \setminus \{t\} are connected but not simply connected.
We show that for a given lattice polytope P the set of all configuration spaces \C^A of amoebas with \conv(A) = P is a boolean lattice with respect to some order relation \sqsubseteq induced by the set theoretic order relation \subseteq. This boolean lattice turns out to have some nice structural properties and gives in particular an independent motivation for Passare's and Rullgard's conjecture about solidness of amoebas of maximally sparse polynomials. We prove this conjecture for special instances of support sets.
A further key problem in the theory of amoebas is the description of their boundaries. Obviously, every boundary point \mathbf{w} \in \partial \mathcal{A}(f) is the image of a critical point under the \Log-map (where \mathcal{V}(f) is supposed to be non-singular here). Mikhalkin showed that this is equivalent to the fact that there exists a point in the intersection of the variety \mathcal{V}(f) and the fiber \F_{\mathbf{w}} of \mathbf{w} (w.r.t. the \Log-map), which has a (projective) real image under the logarithmic Gauss map. We strengthen this result by showing that a point \mathbf{w} may only be contained in the boundary of \mathcal{A}(f), if every point in the intersection of \mathcal{V}(f) and \F_{\mathbf{w}} has a (projective) real image under the logarithmic Gauss map.
With respect to the approximation of amoebas, one is particularly interested in deciding membership, i.e., whether a given point \mathbf{w} \in \R^n is contained in a given amoeba \mathcal{A}(f). We show that this problem can be reduced to a semidefinite optimization problem (SDP), essentially via the Real Nullstellensatz. This SDP can be implemented and solved with standard software (we use SOSTools and SeDuMi here). As the main theoretical result we show that, from the complexity point of view, our approach is at least as good as Purbhoo's approximation process (the state of the art).
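Alongside the SDP-based certificates described above, membership can also be probed naively by sampling: for the line f(z_1, z_2) = z_1 + z_2 + 1 the variety is parametrized by z_1, so its amoeba can be approximated as the \Log-image of a grid (a toy illustration with our own grid parameters, not the thesis's method):

```python
import cmath, math

def amoeba_points(n=400):
    """Approximate the amoeba of f(z1, z2) = z1 + z2 + 1 by sweeping z1
    over a log-radial grid and mapping the unique solution z2 = -1 - z1
    through Log(z1, z2) = (log|z1|, log|z2|)."""
    pts = []
    for i in range(n):
        r = math.exp(-5 + 10 * i / n)          # |z1| ranges over [e^-5, e^5)
        for j in range(n):
            z1 = cmath.rect(r, 2 * math.pi * j / n)
            z2 = -1 - z1
            if z2 != 0:
                pts.append((math.log(r), math.log(abs(z2))))
    return pts

def near(pts, w, tol=0.05):
    """Naive membership test: is some sampled point within tol of w?"""
    return any(math.hypot(x - w[0], y - w[1]) < tol for (x, y) in pts)

pts = amoeba_points()
print(near(pts, (0.0, 0.0)), near(pts, (-3.0, -3.0)))  # → True False
```

The point (0, 0) lies in the amoeba (take z_1 = e^{2\pi i/3}, so |z_1| = |z_2| = 1), whereas (-3, -3) lies deep in a complement component; the SDP approach of the thesis certifies such membership questions rather than sampling them.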
Sensitivity of the output of a linear operator to its input can be quantified in various ways. In control theory, the input is usually interpreted as a disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with an imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite-power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root-mean-square or average energy gain with respect to finite-power or directionally generic inputs whose anisotropy is bounded above by a ≥ 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on a multidimensional integer lattice to yield the mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation-invariant operators over such fields.
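For a Gaussian random vector w \sim \mathcal{N}(0, \Sigma) in \R^m with nonsingular covariance \Sigma, the minimal-divergence construction described above admits a closed form (a standard identity in this literature, sketched here by minimizing the Gaussian Kullback-Leibler divergence over the scale \lambda; it is not quoted from the text):

$$
\mathbf{A}(w)
\;=\;
\min_{\lambda>0}
\mathbf{D}\!\left(\mathcal{N}(0,\Sigma)\,\middle\|\,\mathcal{N}(0,\lambda I_m)\right)
\;=\;
-\tfrac{1}{2}\,
\ln\det\!\left(\frac{m\,\Sigma}{\operatorname{tr}\Sigma}\right),
$$

with the minimum attained at \lambda = \operatorname{tr}\Sigma / m. Hence \mathbf{A}(w) \ge 0, with equality if and only if \Sigma is scalar, i.e., exactly under the white-noise hypothesis above.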
Statistical analysis of various stocks reveals long range dependence in the stock prices that is not consistent with the classical Black and Scholes model. This memory, or nondeterministic trend, behavior is often seen as a reflection of market sentiment and causes the historical volatility estimator to become unreliable in practice. We propose an extension of the Black and Scholes model by adding to the original Wiener term a term involving a smoother process which accounts for these effects. The problem of arbitrage is discussed. Using a generalized stochastic integration theory [8], we show that it is possible to construct a self-financing replicating portfolio for a European option without any further knowledge of the extension and that, as a consequence, the classical concept of volatility needs to be re-interpreted.
AMS subject classifications: 60H05, 60H10, 90A09.
Integral equations for the mean-square estimate are obtained for the linear filtering problem, in which the noise generating the signal is a fractional Brownian motion with Hurst index h∈(3/4,1) and the noise in the observation process includes a fractional Brownian motion as well as a Wiener process. AMS subject classifications: 93E11, 60G20, 60G35.
Within the last twenty years, the contraction method has turned out to be a fruitful approach to distributional convergence of sequences of random variables which obey additive recurrences. It was mainly devised for applications in the real-valued framework; in recent years, however, more complex state spaces such as Hilbert spaces have come under consideration. Based upon the family of Zolotarev metrics, which were introduced in the late seventies, we develop the method in the context of Banach spaces and work it out in detail for the spaces of continuous resp. càdlàg functions on the unit interval. We formulate sufficient conditions, both for the sequence under consideration and for its possible limit satisfying a stochastic fixed-point equation, that allow one to deduce functional limit theorems in applications. As a first application we present a new and considerably shorter proof of Donsker's classical invariance principle, based on a recursive decomposition. Moreover, we apply the method in the analysis of the complexity of partial match queries in two-dimensional search trees such as quadtrees and 2-d trees. These important data structures have been under heavy investigation since their invention in the seventies. Our results answer problems that were left open in the pioneering work of Flajolet et al. in the eighties and nineties. We expect that the functional contraction method will contribute significantly to the solution of similar problems involving additive recursions in the coming years.
We provide a mathematical framework to model continuous time trading in limit order markets of a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (holding limit orders with only finitely many different limit prices and rebalancing at most finitely often) can be extended in a suitable way to general continuous time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and that any element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton's portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset evolves according to a geometric Brownian motion, that there is a proportional bid-ask spread, and that the limit orders of the small investor are executed at Poisson times, we show that the optimal strategy consists in using market orders to keep the proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
In recent years, exploiting symmetry has proven to be a very useful tool for simplifying computations in semidefinite programming. This dissertation examines the possibilities of exploiting discrete symmetries in three contexts: in SDP-based relaxations for polynomial optimization, in testing positivity of symmetric polynomials, and in combinatorial optimization. In these contexts the thesis provides new ways of exploiting symmetries, deeper insight into the paradigms behind the techniques, and the study of a concrete combinatorial optimization question.
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. In statistical spike train analysis, stochastic point process models usually assume stationarity, in particular that the underlying spike train shows a constant firing rate (e.g. [1]). However, such models can lead to misinterpretation of the associated tests if the assumption of rate stationarity is not met (e.g. [2]). The analysis of nonstationary data therefore requires that rate changes can be located as precisely as possible. Present statistical methods, however, focus on rejecting the null hypothesis of stationarity without explicitly locating the change point(s) (e.g. [3]). We propose a test for stationarity of a given spike train that can also be used to estimate the change points in the firing rate. Assuming a Poisson process with piecewise constant firing rate, we propose a Step-Filter-Test (SFT) which can work simultaneously on different time scales, accounting for the high variety of firing patterns in experimental spike trains. Formally, we compare the numbers N1=N1(t,h) and N2=N2(t,h) of spikes in the time intervals (t-h,t] and (t,t+h]. By varying t within a fine time lattice and simultaneously varying the interval length h, we obtain a multivariate statistic D(h,t):=(N1-N2)/sqrt(N1+N2), for which we prove asymptotic multivariate normality under homogeneity. From this a practical, graphical device to spot changes of the firing rate is constructed. Our graphical representation of D(h,t) (Figure 1A) visualizes the changes in the firing rate. For the statistical test, a threshold K is chosen such that under homogeneity, |D(h,t)|<K holds for all investigated h and t with probability 0.95. This threshold can indicate potential change points in order to estimate the inhomogeneous rate profile (Figure 1B). The SFT is applied to a sample data set of spontaneous single unit activity recorded from the substantia nigra of anesthetized mice.
In this data set, multiple rate changes are identified which agree closely with visual inspection. In contrast to approaches that choose one fixed kernel width [4], our method has the advantage of flexibility in h.
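The statistic D(h,t) = (N1-N2)/sqrt(N1+N2) can be illustrated directly. The following is a minimal sketch, not the authors' implementation: it simulates a Poisson spike train with a single rate step (hypothetical rates and change point) and locates the change point as the lattice point maximizing |D|, with h fixed for simplicity rather than varied over several scales.

```python
import bisect
import math
import random

def poisson_spikes(rate, t0, t1, rng):
    """Homogeneous Poisson spike times on [t0, t1) with the given rate."""
    spikes, t = [], t0
    while True:
        t += rng.expovariate(rate)
        if t >= t1:
            return spikes
        spikes.append(t)

def step_filter(spikes, ts, h):
    """D(h,t) = (N1 - N2)/sqrt(N1 + N2), where N1 and N2 count spikes
    in (t-h, t] and (t, t+h]; returns a list of (t, D) pairs."""
    out = []
    for t in ts:
        n1 = bisect.bisect_right(spikes, t) - bisect.bisect_right(spikes, t - h)
        n2 = bisect.bisect_right(spikes, t + h) - bisect.bisect_right(spikes, t)
        d = (n1 - n2) / math.sqrt(n1 + n2) if n1 + n2 > 0 else 0.0
        out.append((t, d))
    return out

rng = random.Random(1)
# Hypothetical example: rate 50 Hz on [0, 10), then 200 Hz on [10, 20).
spikes = poisson_spikes(50.0, 0.0, 10.0, rng) + poisson_spikes(200.0, 10.0, 20.0, rng)
h = 5.0
ts = [5.0 + 0.1 * k for k in range(101)]   # lattice with t-h >= 0 and t+h <= 20
t_hat = max(step_filter(spikes, ts, h), key=lambda p: abs(p[1]))[0]
print(t_hat)   # should lie close to the true change point at t = 10
```

In the SFT proper, h is varied simultaneously with t and |D(h,t)| is compared to a joint threshold K; the sketch above only shows the single-scale mechanics.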
In the following work, properties of branching processes in random environment (BPREs) are investigated. The model goes back to Smith (1969) and Athreya (1971). A BPRE is a simple mathematical model for the development of a population of apomictic (i.e. asexually reproducing) individuals in discrete time, where the environmental conditions influence the reproductive success of the individuals. It is assumed that the environmental conditions in the individual generations are random, namely independent and identically distributed from generation to generation. Think, for example, of a population of plants with a one-year cycle which are exposed to different weather conditions each year, where these conditions are assumed to change in an independent and identically distributed manner. In Chapter 1, one of the most important tools for describing BPREs, the so-called associated random walk, is introduced, and the classification of BPREs is described. In Chapter 2, known results are reviewed, in particular on critical, weakly subcritical and strongly subcritical branching processes. Chapter 3 treats the so-called intermediately subcritical case. Using functional limit theorems for conditioned random walks, the precise asymptotics of the survival probability of the process, already proved in Vatutin (2004), is established under somewhat more general assumptions. It is then investigated how often the process, conditioned on survival, consists of only a single individual. In the last part of the chapter, a functional limit theorem for the associated random walk, conditioned on survival of the process, is proved: properly rescaled, it converges to a Lévy process conditioned to attain its minimum at the end. In Chapter 4, large deviations of BPREs are investigated. The rate function of the BPRE is determined both for the case of at least geometrically decaying tails and for the case of offspring distributions with heavy tails. As it turns out, the rate function depends on the rate function of the associated random walk, on the exponential decay rate of the survival probability and, for offspring distributions with heavy tails, also on the tails themselves. The rate function reflects the most probable paths by which large-deviation events are realized, which is described in Chapter 4.3. In Chapter 4.4, in the special case of offspring distributions with linear fractional generating functions, the rate function is determined for events in which a supercritical BPRE survives but stays small compared to its expectation. In Chapter 4.5, large deviations conditioned on the environment (quenched) are investigated. In this case, improbable events can only be realized through the branching mechanism and no longer through an exceptional environment. To conclude the dissertation, branching processes in random environment conditioned on survival are simulated. For this, a construction due to Geiger (1999) is applied, which makes it possible to construct Galton-Watson trees in varying environment, conditioned on survival, along an ancestral line. The case of geometric offspring distributions, to which we restrict ourselves in Chapter 5, allows the required distributions to be computed explicitly. As an application of the limit theorem from Chapter 3.1, intermediately subcritical branching processes conditioned on survival can now be simulated as follows: first, the environment is drawn at random, namely as a random walk conditioned to attain its minimum at the end; then, following the Geiger construction, a branching process in this environment, conditioned on survival, is simulated. A short outlook on current research concludes the thesis; some technical results are collected in the appendix.
The Benchmark Dose (BMD) approach, first suggested in 1984 by K. Crump [CRUMP (1984)], is a widely used instrument in the risk assessment of substances in the environment and in food. In this context, the BMD approach determines a reference point (RfP) on the statistically estimated dose-response curve for which the risk can be determined with adequate certainty and confidence. In the next step of risk characterization, a threshold is calculated based on this RfP and toxicological considerations. The BMD approach is based on fitting a dose-response model to the data; for this fit, a stochastic distribution of the response endpoint is assumed. Ultimately, the BMD reflects the dose for which a pre-specified increase in an adverse health effect (the benchmark response) is expected. So far, the BMD approach has been specified only for quantal and continuous endpoints. In the risk assessment of carcinogens, however, so-called time-to-event data are of particular interest, since they contain more information on tumor development than quantal incidence data. The goal of this diploma thesis was to extend the BMD approach to such time-to-event data.
Dessins d'enfants (children's drawings) may be defined as hypermaps, i.e. as bipartite graphs embedded in compact Riemann surfaces. They are very important objects for describing the surface of the embedding as an algebraic curve. Knowing the combinatorial properties of the dessin may, in fact, help us determine defining equations or the field of definition of the surface. This task is easier if the automorphism group of the dessin is "large". In this thesis we consider a special type of dessins, so-called Wada dessins, for which the underlying graph illustrates the incidence structure of points and hyperplanes of projective spaces. We determine under which conditions they have a large orientation-preserving automorphism group. We show that by applying algebraic operations called "mock" Wilson operations to the underlying graph we may obtain new dessins. We study the automorphism group of the new dessins and show that the dessins we started with are coverings of the new ones.
New conditions of solvability, based on a general theorem on the calculation of the index at infinity for vector fields that have a degenerate principal linear part as well as degenerate ... next order ... terms, are obtained for the 2π-periodic problem for the scalar equation x'' + n²x = g(|x|) + f(t,x) + b(t) with bounded g(u) and f(t,x) -> 0 as |x| -> 0. The result is also applied to the solvability of a two-point boundary value problem and to resonant problems for equations arising in control theory.
AMS subject classifications: 47H11, 47H30.
Linear-implicit versions of strong Taylor numerical schemes for finite dimensional Itô stochastic differential equations (SDEs) are shown to have the same order as the original schemes. The combined truncation and global discretization error of an order gamma strong linear-implicit Taylor scheme with time-step delta, applied to the N-dimensional Itô-Galerkin SDE for a class of parabolic stochastic partial differential equations (SPDEs) with a strongly monotone linear operator with eigenvalues lambda_1 <= lambda_2 <= ... in its drift term, is then estimated by K(lambda_{N+1}^{-1/2} + delta^gamma), where the constant K depends on the initial value, bounds on the other coefficients in the SPDE, and the length of the time interval under consideration.
AMS subject classifications: 35R60, 60H15, 65M15, 65U05.
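The point of treating the stiff linear part implicitly can be seen already for the simplest member of the family, the linear-implicit Euler scheme. The following is a minimal sketch with hypothetical parameter values, one Galerkin mode, and the noise and nonlinearity switched off so that only the stability of the two schemes is compared:

```python
# Linear-implicit Euler for dX = (-lam*X + f(X)) dt + dW on a single Galerkin mode:
# the stiff linear drift -lam*X is taken at the new time point, everything else
# explicitly, so each step only requires solving a linear equation:
#   X_{n+1} = (X_n + delta*f(X_n) + dW_n) / (1 + delta*lam)
def implicit_step(x, lam, delta, f, dw):
    return (x + delta * f(x) + dw) / (1.0 + delta * lam)

def explicit_step(x, lam, delta, f, dw):
    return x + delta * (-lam * x + f(x)) + dw

lam, delta = 100.0, 0.1        # stiff eigenvalue, coarse step: delta*lam >> 1
f = lambda x: 0.0              # drop nonlinearity and noise to isolate stability
xi = xe = 1.0
for _ in range(50):
    xi = implicit_step(xi, lam, delta, f, 0.0)
    xe = explicit_step(xe, lam, delta, f, 0.0)
# xi contracts by a factor 1/(1 + delta*lam) = 1/11 per step and decays to ~0;
# xe is multiplied by 1 - delta*lam = -9 per step and blows up.
```

For eigenvalues growing with the mode index, as in the parabolic SPDEs of the abstract, this is precisely why the linear-implicit variant tolerates time steps for which the explicit scheme is unstable.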
We present a proof of the classical stable limit laws using the contraction method in combination with the Zolotarev metric. Furthermore, a stable limit law is proved for scaled sums of growing length. This limit law is alternatively formulated for sequences of random variables defined by a simple degenerate recursion.
We present a new self-contained and rigorous proof of the smoothness of invariant fiber bundles for dynamic equations on measure chains or time scales. Here, an invariant fiber bundle is the generalization of an invariant manifold to the nonautonomous case. Our main result generalizes the “Hadamard-Perron theorem” to the time-dependent, infinite-dimensional, noninvertible, and parameter-dependent case, where the linear part is not necessarily hyperbolic with variable growth rates. As a key feature, our proof works without using complicated technical tools.
Dynamical systems driven by Gaussian noise have been considered extensively in modeling, simulation, and theory. However, complex systems in engineering and science are often subject to non-Gaussian fluctuations or uncertainties. We consider a coupled dynamical system under a class of Lévy noises. After discussing the cocycle property, stationary orbits, and random attractors, a synchronization phenomenon is shown to occur when the drift terms of the coupled system satisfy certain dissipativity and integrability conditions. The synchronization result implies that the coupled dynamical systems share a dynamical feature in some asymptotic sense.
This work connects the Markov chain imbedding technique (MCIT), introduced by M.V. Koutras and J.C. Fu, with distributions concerning the cycle structure of permutations. As a final result, program code is given that uses MCIT to deliver proper numerical values for these distributions. The discrete distributions of interest are those of the cycle structure, of the number of cycles, of the rth longest and shortest cycle, and of the length of a randomly chosen cycle. These are analyzed for equiprobable permutations as well as for biased ones. Analytical solutions and limit distributions are also considered to put the results on a firm theoretical basis.
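For the equiprobable case, one of the distributions mentioned above has a well-known closed form that can serve as a reference point for any numerical method. The sketch below is not the thesis' MCIT code; it computes the distribution of the number of cycles via the classical recurrence for the unsigned Stirling numbers of the first kind:

```python
# For a uniformly random permutation of [n], the number of permutations with
# exactly k cycles is the unsigned Stirling number of the first kind c(n, k),
# via c(n, k) = c(n-1, k-1) + (n-1)*c(n-1, k): inserting element n either opens
# a new cycle or is placed after one of the n-1 existing elements.
def cycle_count_distribution(n):
    c = [1]                        # c(0, 0) = 1
    for m in range(1, n + 1):
        c = [(c[k - 1] if k >= 1 else 0) + (m - 1) * (c[k] if k < len(c) else 0)
             for k in range(m + 1)]
    total = sum(c)                 # equals n!
    return [x / total for x in c]  # P(number of cycles = k) for k = 0..n

dist = cycle_count_distribution(4)
print(dist)   # probabilities 0, 6/24, 11/24, 6/24, 1/24 for k = 0..4
```

Distributions for biased permutations, and for the rth longest or shortest cycle, have no equally simple recurrence, which is where a technique like MCIT earns its keep.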
Condensing phenomena in systems biology, ecology and sociology exhibit various complex behaviors in real life. Based on local interaction between agents, we present a further result on the energy-based model presented in [20]. We introduce an additional condition that ensures total condensing (also called consensus) of a discrete positive measure. Key words: Condensing; consensus; random move; self-organizing groups; collective intelligence; stochastic modeling. AMS Subject Classifications: 81T80; 93A30; 37M05; 68U20
Tropical geometry is the geometry of the tropical semiring \[\mathbb{T}:=(\mathbb{R}\cup\{\infty\},\min,+).\] Classical algebraic structures correspond to tropical structures. If $I\lhd K[x_1,\ldots,x_n]$ is an ideal in a polynomial ring over a field $K$ with valuation $v$, then the classical algebraic variety corresponds to the tropical variety $T(I)$. It is the set of all points $w$ such that the minimum $\min\{v(c_\alpha)+w\cdot\alpha\}$ is attained at least twice, for all $f=\sum_\alpha c_\alpha x^\alpha\in I$. Thus tropical geometry relates algebraic geometric problems to discrete geometric ones. In this thesis we obtain a tropical version of the Eisenbud-Evans Theorem, which states that every algebraic variety in $\mathbb{R}^n$ is the intersection of $n$ hypersurfaces. We find that in the tropical setting every tropical variety $T(I)$ can be written as an intersection of only $n+1$ tropical hypersurfaces. So we get a finite generating system of $I$ such that the corresponding tropical hypersurfaces intersect in the tropical variety, a so-called tropical basis. Let $I \lhd K[x_1,\ldots,x_n]$ be a prime ideal generated by the polynomials $f_1, \ldots, f_r$. Then there exist $g_0,\ldots,g_{n} \in I$ such that \[ T(I) \ = \ \bigcap_{i=0}^{n}T(g_i)\] and thus $\mathcal{G} := \{f_1, \ldots, f_r, g_0, \ldots, g_{n}\}$ is a tropical basis for $I$ of cardinality $r+n+1$. Tropical bases are discussed by Bogart, Jensen, Speyer, Sturmfels and Thomas, where it is shown that tropical bases of linear polynomials of a linear ideal have to be very large. We do not restrict the tropical basis to consist of linear polynomials and therefore obtain a shorter tropical basis, although the degrees of our polynomials can be very large. The main ingredient for obtaining a short tropical basis is the use of projections, in particular geometrically regular projections.
Together with the fact that preimages of projections of tropical varieties are themselves tropical varieties of a certain elimination ideal, we get the desired result. Let $I \lhd K[x_1, \ldots, x_n]$ be an $m$-dimensional prime ideal and $\pi : \mathbb{R}^n \to \mathbb{R}^{m+1}$ be a rational projection. Then $\pi^{-1}(\pi(T(I)))$ is a tropical variety, namely \[ \pi^{-1}(\pi(T(I))) \ = \ T(J \cap K[x_1, \ldots, x_n]) \,\] Here $J$ is an ideal in $K[x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_{n-m-1}]$ derived from the ideal $I$. We show that this elimination ideal is a principal ideal, which yields a polynomial in our tropical basis. The advantage of our method is that we find our polynomials by projections and can therefore use the results of Gelfand, Kapranov and Zelevinsky, of Esterov and Khovanskii, and of Sturmfels, Tevelev and Yu. With mixed fiber polytopes we get the structure and combinatorics of the image of a tropical variety and therefore the structure of the polynomials in our tropical basis. Let $I \lhd K[x_1,\ldots,x_n]$ be an $m$-dimensional ideal generated by generic polynomials $f_1,\ldots, f_{n-m}$, let $\pi:\mathbb{R}^n\to\mathbb{R}^{m+1}$ be a projection, and let $\psi$ be a projection represented by a matrix whose row space equals the kernel of $\pi$. Then, up to affine isomorphisms, the cells of the dual subdivision of $\pi^{-1} \pi T(I)$ are of the form \[ \sum_{i=1}^p \Sigma_{\psi} (C_{i1}^{\vee}, \ldots, C_{i{k}}^{\vee}) \] for some $p\in\mathbb{N}$ and faces $F_1, \ldots, F_p$ of $T(f_1)\cap\ldots\cap T(f_k)$, where the dual cell of $F_i\subseteq U = T(f_1)\cup\ldots\cup T(f_k)$ is given by $F_i^\vee=C_{i1}^{\vee}+ \ldots+ C_{ik}^{\vee}$ with faces $C_{i1}, \ldots, C_{i k}$ of $T(f_1), \ldots, T(f_{k})$. In the case where we project a tropical curve, we want to find the number of $(n-1)$-cells of the above form with $p>1$, i.e. the cells dual to vertices of $\pi(T(I))$ which are intersections of the images of two non-adjacent $1$-cells of $T(I)$.
Vertices of this type are called self-intersection points. We show that there exists a tropical line $L_n\subset\mathbb{R}^n$ and a projection $\pi:\mathbb{R}^n\to\mathbb{R}^2$ such that $L_n$ has $\sum_{i=1}^{n-2}i$ self-intersection points. Furthermore we find tropical curves $\mathcal{C}\subset\mathbb{R}^n$, which are transversal intersections of $n-1$ tropical hypersurfaces of degrees $d_1,\ldots,d_{n-1}$, and a projection $\pi:\mathbb{R}^n\to\mathbb{R}^2$ such that $\mathcal{C}$ has at least $(d_1\cdots d_{n-1})^2\cdot \sum_{i=1}^{n-2}i$ self-intersection points. A caterpillar is a certain simple type of tropical line, and for this type we show that it can have at most $\sum_{i=1}^{n-2}i$ self-intersection points.
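The defining condition of a tropical hypersurface, the minimum being attained at least twice, can be checked mechanically for a single tropical polynomial; a tropical variety $T(I)$ then imposes this condition for every $f \in I$. A minimal sketch, using the trivial valuation on the coefficients:

```python
def in_tropical_hypersurface(terms, w, tol=1e-9):
    """terms: list of (v(c_alpha), alpha) for f = sum_alpha c_alpha x^alpha.
    A point w lies in T(f) iff the minimum of v(c_alpha) + w . alpha over the
    terms of f is attained at least twice (up to a numerical tolerance)."""
    vals = [c + sum(wi * ai for wi, ai in zip(w, alpha)) for c, alpha in terms]
    m = min(vals)
    return sum(1 for v in vals if v <= m + tol) >= 2

# Tropical line in the plane: f = x + y + 1 with trivial valuation,
# i.e. the three term values are w1, w2 and 0.
line = [(0, (1, 0)), (0, (0, 1)), (0, (0, 0))]
print(in_tropical_hypersurface(line, (0.0, 0.0)))   # apex: minimum attained three times
print(in_tropical_hypersurface(line, (0.0, 3.0)))   # on a ray: attained twice
print(in_tropical_hypersurface(line, (1.0, 2.0)))   # off the line: attained only once
```

The combinatorics of the thesis (tropical bases, projections, self-intersection counts) live on top of exactly this pointwise condition.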
Mixed volumes, mixed Ehrhart theory and applications to tropical geometry and linkage configurations
(2009)
The aim of this thesis is the discussion of mixed volumes, their interplay with algebraic geometry, discrete geometry and tropical geometry, and their use in applications such as linkage configuration problems. Namely, we present new technical tools for mixed volume computation, a novel approach to Ehrhart theory that links mixed volumes with counting integer points in Minkowski sums, and new expressions in terms of mixed volumes for combinatorial quantities in tropical geometry; furthermore, we employ mixed volume techniques to obtain bounds in certain graph embedding problems.
Local interactions between the particles of a collection cause all particles to reorganize into new positions. The purpose of this paper is to construct an energy-based model of self-organizing subgroups which describes the behavior of single local moves of a particle. The present paper extends the Hegselmann-Krause model of consensus dynamics, in which agents simultaneously move to the barycenter of all agents in an epsilon neighborhood. The energy-based model presented here is analyzed and simulated on a finite metric space. AMS Subject Classifications: 81T80; 93A30; 37M05; 68U20
Deformation quantization on symplectic stacks and applications to the moduli of flat connections
(2008)
It is a common problem in mathematical physics to describe and quantize the Poisson algebra on a symplectic quotient [...] given in terms of some moment map [...] on a symplectic manifold [...] with a Hamiltonian action by a Lie group G. Among others, problems may arise in two parts of the process: c might be a singular value of the moment map, and the quotient might not be well-behaved; in the interesting cases the quotient often is singular. By the famous result of Sjamaar and Lerman ([102]), X is a symplectic stratified space. We are interested in cases for which we can give a deformation quantization of the possibly singular Poisson algebra of X. To that purpose we introduce a Poisson algebra on the associated stack [...] for special cases and consider its deformations and their classification. We use the rather geometric methods introduced by Fedosov for symplectic manifolds in [37]. That leads to the question of how to perform differential geometry on a smooth stack. The Lie groupoid atlas of a smooth stack is a nice model for the same space (Tu, Xu and Laurent-Gengoux in [107] and Behrend and Xu in [16]), but the two have different topoi. We give a morphism (P,R) that compares the topologies of a smooth stack and its atlas. This yields a method to transport sheaves and their sections between a smooth stack and its Lie groupoid atlas. A symplectic stack is a smooth separated Deligne-Mumford stack with a 2-form which is closed and non-degenerate in an atlas. Via (P,R), a deformation quantization on a symplectic stack can be performed in terms of an atlas. We also give a classification functor for the quantizations in the spirit of Deligne ([35]), based on the geometric interpretation given by Gutt and Rawnsley in [49]. As an application we give a deformation quantization for the moduli stack of flat connections in particular configurations. We use Darboux charts provided by Huebschmann (e.g. in [54]) to construct the corresponding Lie groupoid.
This captures the symplectic form arising in the reduction process and differs from other approaches using gerbes of bundles (e.g. Teleman [105]).
In this work, we extend the Hegselmann and Krause (HK) model, presented in [16], to an arbitrary metric space. We also present some theoretical analysis and some numerical results on the condensing of particles in finite and continuous metric spaces. For simulations in a finite metric space, we introduce the notion of a "random metric" using the split metrics studied by Dress et al. [2, 11, 12].
The LIBOR market model (LMM) has, since its development in the publications of Brace, Gatarek, Musiela (1997) on the one hand and, independently of these, Miltersen, Sandmann, Sondermann (1997) on the other, become the most widely accepted instrument for modeling the term structure of interest rates and the associated pricing of the relevant financial derivatives. LIBOR stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London; three- or six-month maturities are common in connection with the LMM. Research on improving this model has grown in recent years: by reducing the error in fitting the daily observed prices of interest rate options such as caps and swaptions, one subsequently also obtains more accurate valuations for other, more exotic derivatives. The underlying central idea of the LMM is to regard the forward rates directly as the primary (vector-valued) process of several LIBOR rates and to model them simultaneously, instead of merely deriving them from a superordinate, infinite-dimensional forward rate process as in the earlier Heath-Jarrow-Morton model. The most convincing argument for this discretization is that the LIBOR rates are directly observable in the market and that their volatilities can be related in a natural way to products that are already liquidly traded, namely those very caps and swaptions. Nevertheless, the model contains a serious insufficiency in that it does not reproduce any curvature of the volatility surface with respect to options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the distribution show up clearly in missing heavy tails; smile and skew effects are visible. In the classical LIBOR market model, only an affine structure is generated in the direction of the strike dimension, which can at best serve as an approximation of the desired surface. The observed distortions naturally lead to an inaccurate picture of reality and to an erroneous reproduction of prices in regions lying somewhat away from the at-the-money area. Such unwanted dissonances in profit and loss figures led, for example, in 1998 to severe losses in the interest rate derivatives portfolio of what is today the Royal Bank of Scotland. ...
This thesis exhibits skeins based on the Homfly polynomial and their relations to Schur functions. The closures of skein-theoretic idempotents of the Hecke algebra are shown to be specializations of Schur functions. This result is applied to the calculation of the Homfly polynomial of the decorated Hopf link. A closed formula for these Homfly polynomials is given. Furthermore, the specialization of the variables to roots of unity is considered. The techniques are skein theory on the one side, and the theory of symmetric functions in the formulation of Schur functions on the other side. Many previously known results have been proved here by only using skein theory and without using knowledge about quantum groups.
Epstein and Penner constructed in [EP88] the Euclidean decomposition of a non-compact hyperbolic n-manifold of finite volume for a choice of cusps, n >= 2. The manifold is cut along geodesic hyperplanes into hyperbolic ideal convex polyhedra. The intersection of the cusps with the Euclidean decomposition determined by them turns out to be rather simple as stated in Theorem 2.2. A dual decomposition resulting from the expansion of the cusps was already mentioned in [EP88]. These two dual hyperbolic decompositions of the manifold induce two dual decompositions in the Euclidean structure of the cusp sections. This observation leads in Theorems 5.1 and 5.2 to easily computable, necessary conditions for an arbitrary ideal polyhedral decomposition of the manifold to be a Euclidean decomposition.
Die vorliegende Arbeit beschäftigt sich mit der BFV-Reduktion von Hamiltonschen Systemen mit erstklassigen Zwangsbedingungen im Rahmen der klassischen Hamiltonschen Mechanik und im Rahmen der Deformationsquantisierung. Besondere Aufmerksamkeit wird dabei Zwangsbedingungen zuteil, die als Nullfaser singulärer äquivarianter Impulsabbildungen entstehen. Es ist schon länger bekannt, daß für Nullfasern regulärer äquivarianter Impulsabbildungen die in der theoretischen Physik gebräuchliche Methode der BFV-Reduktion zur Phasenraumreduktion nach Marsden/Weinstein äquivalent ist. In [24] konnte gezeigt werden, daß in dieser Situation die BFV-Reduktion sich auch im Rahmen der Deformationsquantisierung natürlich formulieren läßt und erfolgreich zur Konstruktion von Sternprodukten auf Marsden/Weinstein-Quotienten verwendet werden kann. Ein Hauptergebnis der vorliegenden Arbeit besteht in der Verallgemeinerung der Ergebnisse aus [24] auf den Fall singulärer Impulsabbildungen, deren Komponenten 1.) das Verschwindungsideal der Zwangsfläche erzeugen und 2.) einen vollständigen Durchschnitt bilden. Die Argumentation von [24] wird durch Gebrauch der Störungslemmata aus dem Anhang A.1 systematisiert und vereinfacht. Zum Existenzbeweis von stetigen Homotopien und stetiger Fortsetzungsabbildung für die Koszulauflösung werden der Zerfällungssatz und der Fortsetzungssatz von Bierstone und Schwarz [20] benutzt. Außerdem wird ein ’Jacobisches Kriterium’ für die Überprüfung von Bedingung 2.) angegeben. Basierend auf diesem Kriterium und Techniken aus [3] werden die Bedingungen 1.) und 2.) an einer Reihe von Beispielen getestet. Als Korollar erhält man den Beweis dafür, daß es symplektisch stratifizierte Räume gibt, die keine Orbifaltigkeiten sind und dennoch eine stetige Deformationsquantisierung zulassen. 
Ferner wird (ähnlich zu [92]) eine konzeptionielle Erklärung dafür gegeben, warum im Fall vollständiger Durchschnitte das Problem der Quantisierung der BRST-Ladung eine so einfache Lösung hat. Bildet die Impulsabbildung eine erstklassige Zwangsbedingung, ist aber kein vollständiger Durchschnitt, dann ist es im allgemeinen nicht bekannt, wie entsprechende Quantenreduktionsresultate zu erzielen sind. Ein Hauptaugenmerk der Untersuchung wird es deshalb sein, in dieser Situation die klassische BFV-Reduktion besser zu verstehen – natürlich in der Hoffnung, Grundlagen für eine etwaige (Deformations-)Quantisierung zu liefern. Wir werden feststellen, daß es zwei Gründe gibt, die Tate-Erzeuger (alias: Antigeister höheren Niveaus) notwendig machen: die Topologie der Zwangsfläche und die Singularitätentheorie der Impulsabbildung. Die Zahl der Tate-Erzeuger kann durch Übergang zu projektiven Tate-Erzeugern, also Vektorbündeln, verringert werden. Allerdings sorgt Halperins Starrheitssatz [57] dafür, daß im wesentlichen alle Fälle, für die die Zwangsfläche kein lokal vollständiger Durchschnitt ist, zu unendlich vielen Tate-Erzeugern führen. Erzeugen die Komponenten einer Impulsabbildung einer linearen symplektischen Gruppenwirkung das Verschwindungsideal der Zwangsfläche, so kann man eine lokal endliche Tate-Auflösung finden. Diese besitzt nach dem Fortsetzungssatz und dem Zerfällungssatz von Bierstone und Schwarz stetige, kontrahierende Homotopien. Ausgehend von einer solchen Tate-Auflösung konstruieren wir, die klassische BFV-Konstruktion für vollständige Durchschnitte verallgemeinernd, eine graduierte superkommutative Algebra. Wir können zeigen, daß diese graduierte Algebra auch im Vektorbündelfall eine graduierte Poissonklammer besitzt, die sogenannte Rothstein-Poissonklammer. Die Existenz einer solchen Poissonklammer war bereits von Rothstein [87] für die einfachere Situation einer symplektischen Supermannigfaltigkeit bewiesen worden. 
Moreover, we will see that a BRST charge exists in the vector bundle case as well. In the case of moment maps it looks somewhat simpler than for general first-class constraints. Altogether, the classical BFV construction [95] is thus generalized to the case of projective Tate generators and interpreted as a homotopy equivalence in the additive category of Fréchet spaces.
Concentration of multivariate random recursive sequences arising in the analysis of algorithms
(2006)
Stochastic analysis of algorithms can be motivated by the analysis of randomized algorithms or by postulating probability distributions on the sets of inputs of the same length. In both cases the implied random quantities are analyzed. Here, the running time is of great concern. Characteristics like expectation, variance, limit law, rates of convergence and tail bounds are studied. For the running time, besides the expectation, upper bounds on the right tail are particularly important, since one wants large values of the running time to occur only with small probability. In the first chapter game trees are analyzed. The worst case running time of Snir's randomized algorithm is specified, and its expectation, the asymptotic behavior of its variance, a limit law with a uniquely characterized limit, and tail bounds are identified. Furthermore, a limit law for the value of the game tree under Pearl's probabilistic model is proved. In the second chapter upper and lower bounds for the Wiener index of random binary search trees are identified. In the third chapter tail bounds for the generation size of multitype Galton-Watson processes (with immigration) are derived, depending on their offspring distribution. To this end, the method used to prove the tail bounds in the first chapter is generalized.
Approximating Perpetuities
(2006)
A perpetuity is a real-valued random variable which is characterised by a distributional fixed-point equation of the form X = AX + b, where (A,b) is a vector of random variables independent of X, while dependencies between A and b are allowed. Conditions for the existence and uniqueness of solutions of such fixed-point equations are known, as is the tail behaviour in most cases. In this work, we look at the central area and develop an algorithm to approximate the distribution function, and possibly the density, of a large class of such perpetuities. For one specific example from the probabilistic analysis of algorithms, the algorithm is implemented and explicit error bounds for this approximation are given. Finally, we look at some examples where the densities, or at least some of their properties, are known, in order to compare the theoretical error bounds to the actual error of the approximation. The algorithm used here is based on a method which was developed for another class of fixed-point equations. While adapting it to this case, a considerable improvement was found, which can be translated back to the original method.
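The fixed-point equation X = AX + b can be illustrated by simple Monte Carlo iteration: starting from X = 0 and repeatedly applying X ← AX + b with fresh independent draws of (A, b) yields approximate samples from the perpetuity, since for |A| < 1 almost surely the truncation error vanishes geometrically. The following sketch uses the illustrative choice A ~ Uniform(0,1) and b = 1, which is an assumption for demonstration and not the specific example analysed in the thesis; the thesis approximates the distribution function directly rather than by sampling.

```python
import random

def perpetuity_sample(n_terms=80):
    """One approximate draw from the solution of X = AX + b.

    Iterates X <- A*X + b starting from X = 0; after n_terms steps the
    remaining truncation error is geometrically small when |A| < 1 a.s.
    The choices A ~ Uniform(0,1), b = 1 are illustrative only.
    """
    x = 0.0
    for _ in range(n_terms):
        a = random.random()   # fresh A ~ Uniform(0, 1)
        b = 1.0               # constant b
        x = a * x + b
    return x

def empirical_mean(n_samples=20000):
    """Monte Carlo estimate of E[X]; here E[X] = E[b]/(1 - E[A]) = 2."""
    return sum(perpetuity_sample() for _ in range(n_samples)) / n_samples
```

Such sampling converges only at Monte Carlo rate, which is precisely why a direct approximation of the distribution function with explicit error bounds, as in the thesis, is of interest.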
It is commonly agreed that cortical information processing is based on the electric discharges ('spikes') of nerve cells. Evidence is accumulating which suggests that the temporal interaction among a large number of neurons can take place with high precision, indicating that the efficiency of cortical processing may depend crucially on the precise spike timing of many cells. This work focuses on two temporal properties of parallel spike trains that have attracted growing interest in recent years: In the first place, specific delays ('phase offsets') between the firing times of two spike trains are investigated. In particular, it is studied whether small phase offsets can be identified with confidence between two spike trains that have the tendency to fire almost simultaneously. Second, the temporal relations between multiple spike trains are investigated on the basis of such small offsets between pairs of processes. Since the analysis of all delays among the firing activity of n neurons is extremely complex, a method is required with which this high-dimensional information can be collapsed in a straightforward manner, such that the temporal interaction among a large number of neurons can be represented consistently in a single temporal map. Finally, a stochastic model is presented that provides a framework to integrate and explain the observed temporal relations that result from the previous analyses.
The existence of a mean-square continuous strong solution is established for vector-valued Itô stochastic differential equations with a discontinuous drift coefficient, which is an increasing function, and with a Lipschitz continuous diffusion coefficient. A scalar stochastic differential equation with the Heaviside function as its drift coefficient is considered as an example. Upper and lower solutions are used in the proof.
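The scalar example, dX_t = H(X_t) dt + σ dW_t with H the Heaviside function, can be simulated with a plain Euler-Maruyama discretisation. This is an illustrative numerical sketch only; the abstract concerns the existence proof via upper and lower solutions, not this scheme, and the parameter choices below are assumptions.

```python
import math
import random

def heaviside(x):
    """Increasing, discontinuous drift coefficient H(x)."""
    return 1.0 if x >= 0.0 else 0.0

def euler_maruyama(x0=-1.0, t_end=1.0, n_steps=1000, sigma=0.5, rng=None):
    """Euler-Maruyama path for dX_t = H(X_t) dt + sigma dW_t.

    Returns the approximate value X(t_end). With sigma = 0 the scheme
    reduces to a deterministic ODE solver, which makes the effect of the
    discontinuous drift easy to see.
    """
    rng = rng or random.Random()
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x += heaviside(x) * dt + sigma * dw
    return x
```

With σ = 0 a path started at x0 = -1 stays put (zero drift), while a path started at x0 = 1 grows linearly, showing how the drift switches across the discontinuity.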
The synchronization of neuronal firing activity is considered an important mechanism in cortical information processing. The tendency of multiple neurons to synchronize their joint firing activity can be investigated with the 'unitary event' analysis (Grün, 1996). This method is based on the null hypothesis of independent Bernoulli processes and can therefore not tell whether coincidences observed between more than two processes can be considered "genuine" higher-order coincidences or whether they might be caused by coincidences of lower order that coincide by chance ("chance coincidences"). In order to distinguish between genuine and chance coincidences, a parametric model of independent interaction processes (MIIP) is presented. In the framework of this model, maximum likelihood estimates are derived for the firing rates of the n single processes and for the rates with which genuine higher-order correlations occur. The asymptotic normality of these estimates is used to derive their asymptotic variance and to investigate whether higher-order coincidences can be considered genuine or whether they can be explained by chance coincidences. The empirical test power of this procedure for n=2 and n=3 processes and for finite analysis windows is derived with simulations and compared to the asymptotic values. Finally, the model is extended to allow for the analysis of correlations that are caused by jittered coincidences.
Considered are the classes QL (quasilinear) and NQL (nondeterministic quasilinear) of all those problems that can be solved by deterministic (nondeterministic, respectively) Turing machines in time O(n(log n)^k) for some k. Efficient algorithms have time bounds of this type, it is argued. Many of the "exhaustive search" type problems such as satisfiability and colorability are complete in NQL with respect to reductions that take O(n(log n)^k) steps. This implies that QL = NQL iff satisfiability is in QL.
We study the approximability of the following NP-complete (in their feasibility recognition forms) number theoretic optimization problems: 1. Given n numbers a_1, …, a_n ∈ Z, find a minimum gcd set for a_1, …, a_n, i.e., a subset S ⊆ {a_1, …, a_n} with minimum cardinality satisfying gcd(S) = gcd(a_1, …, a_n). 2. Given n numbers a_1, …, a_n ∈ Z, find a 1-minimum gcd multiplier for a_1, …, a_n, i.e., a vector x ∈ Z^n with minimum max_{1≤i≤n} |x_i| satisfying ∑ …
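The first problem statement can be made concrete with a brute-force baseline: try subsets in order of increasing size until one achieves the gcd of all inputs. This exponential search only illustrates the optimization problem; it is not one of the approximation algorithms studied in the paper.

```python
from functools import reduce
from itertools import combinations
from math import gcd

def minimum_gcd_set(nums):
    """Return a smallest subset S of nums with gcd(S) == gcd(nums).

    Exhaustive search over subsets of increasing cardinality, so the
    running time is exponential in len(nums); for illustration only.
    """
    target = reduce(gcd, nums)
    for size in range(1, len(nums) + 1):
        for subset in combinations(nums, size):
            if reduce(gcd, subset) == target:
                return list(subset)
```

For example, for 4, 6, 9 the pair {4, 9} already attains gcd 1, while for 6, 10, 15 no proper subset does, so the whole set is the minimum gcd set.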
Pseudorandom function tribe ensembles based on one-way permutations: improvements and applications
(1999)
Pseudorandom function tribe ensembles are pseudorandom function ensembles that have an additional collision resistance property: almost all functions have disjoint ranges. We present an alternative to the construction of pseudorandom function tribe ensembles based on one-way permutations given by Canetti, Micciancio and Reingold [CMR98]. Our approach yields two different but related solutions: One construction is somewhat theoretic, but conceptually simple and therefore gives an easier proof that one-way permutations suffice to construct pseudorandom function tribe ensembles. The other, slightly more complicated solution provides a practical construction; it starts with an arbitrary pseudorandom function ensemble and assimilates the one-way permutation to this ensemble. Therefore, the second solution inherits important characteristics of the underlying pseudorandom function ensemble: it is almost as efficient, and if the starting pseudorandom function ensemble is efficiently invertible (given the secret key) then so is the derived tribe ensemble. We also show that the latter solution yields so-called committing private-key encryption schemes, i.e., schemes where each ciphertext corresponds to exactly one plaintext, independently of the choice of the secret key or the random bits used in the encryption process.
We introduce the relationship between incremental cryptography and memory checkers. We present an incremental message authentication scheme based on the XOR MACs which supports insertion, deletion and other single block operations. Our scheme takes only a constant number of pseudorandom function evaluations for each update step and produces smaller authentication codes than the tree scheme presented in [BGG95]. Furthermore, it is secure against message substitution attacks, where the adversary is allowed to tamper with messages before update steps, making it applicable to virus protection. From this scheme we derive memory checkers for data structures based on lists. Conversely, we use a lower bound for memory checkers to show that so-called message substitution detecting schemes produce signatures or authentication codes with size proportional to the message length.
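The core idea behind XOR-MAC-style incrementality is that the tag is an XOR of per-block PRF values, so replacing one block costs only two PRF evaluations (XOR out the old contribution, XOR in the new one). The sketch below uses HMAC-SHA256 as a stand-in PRF and supports only block replacement; the actual scheme in the paper additionally randomises the tag and uses a different block-indexing structure to support insertion and deletion, so this is a simplified illustration, not the paper's construction.

```python
import hashlib
import hmac

def prf(key, data):
    """HMAC-SHA256 as a stand-in pseudorandom function, output as an int."""
    return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest(), "big")

def mac(key, blocks):
    """XOR of PRF(index || block) over all message blocks."""
    tag = 0
    for i, block in enumerate(blocks):
        tag ^= prf(key, i.to_bytes(8, "big") + block)
    return tag

def replace_block(key, tag, i, old_block, new_block):
    """Incremental update: constant number of PRF evaluations.

    XOR removes the old block's contribution and adds the new one.
    """
    tag ^= prf(key, i.to_bytes(8, "big") + old_block)
    tag ^= prf(key, i.to_bytes(8, "big") + new_block)
    return tag
```

The incremental update produces exactly the tag one would get by re-MACing the modified message from scratch, which is what makes constant-time update steps possible.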
A memory checker for a data structure provides a method to check that the output of the data structure operations is consistent with the input even if the data is stored on some insecure medium. In [8] we present a general solution for all data structures that are based on insert(i,v) and delete(j) commands. In particular this includes stacks, queues, deques (double-ended queues) and lists. Here, we describe more time and space efficient solutions for stacks, queues and deques. Each algorithm takes only a single function evaluation of a pseudorandom-like function such as DES or a collision-free hash function such as MD5 or SHA for each push/pop resp. enqueue/dequeue command, making our methods applicable to smart cards.
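The flavour of a hash-based stack checker can be sketched as follows: the trusted side keeps only a single head hash, the untrusted medium stores each value together with the previous head hash, and each push/pop costs one hash evaluation. This is a generic illustration of the idea, assuming SHA-256 as the collision-free hash; the paper's constructions and their exact trusted-memory accounting differ.

```python
import hashlib

def h(prev, value):
    """One collision-free hash evaluation over a length-prefixed encoding."""
    m = hashlib.sha256()
    for part in (prev, value):
        m.update(len(part).to_bytes(4, "big") + part)
    return m.digest()

class CheckedStack:
    """Stack on untrusted storage; the trusted side keeps one hash.

    push/pop each perform a single hash evaluation. Any tampering with
    the untrusted cells is detected at the next pop.
    """
    def __init__(self):
        self.untrusted = []   # insecure medium: list of (value, prev_head)
        self.head = b""       # small trusted memory

    def push(self, value):
        self.untrusted.append((value, self.head))
        self.head = h(self.head, value)

    def pop(self):
        value, prev = self.untrusted.pop()     # possibly tampered data
        if h(prev, value) != self.head:
            raise ValueError("memory check failed")
        self.head = prev
        return value
```

Because the head hash chains over all pushed values, an adversary who modifies a stored value or its back-link cannot produce a cell that rehashes to the trusted head without finding a hash collision.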