Mathematik
Refine
Year of publication
Document Type
- Article (112)
- Doctoral Thesis (76)
- Preprint (46)
- diplomthesis (39)
- Book (25)
- Report (22)
- Conference Proceeding (18)
- Bachelor Thesis (8)
- Contribution to a Periodical (8)
- Diploma Thesis (8)
Has Fulltext
- yes (374)
Is part of the Bibliography
- no (374)
Keywords
- Kongress (6)
- Kryptologie (5)
- Mathematik (5)
- Stochastik (5)
- Doku Mittelstufe (4)
- Doku Oberstufe (4)
- Online-Publikation (4)
- Statistik (4)
- Finanzmathematik (3)
- LLL-reduction (3)
- Moran model (3)
- coalescent (3)
- computational complexity (3)
- contraction method (3)
- point process (3)
- spike train (3)
- Algebraische Geometrie (2)
- Arithmetische Gruppe (2)
- Biographie (2)
- Brownian motion (2)
- Commitment Scheme (2)
- Frankfurt <Main> / Universität (2)
- Fuchsian groups (2)
- Fächerübergreifender Unterricht (2)
- Geometrie (2)
- Heat kernel (2)
- Hinterlegungsverfahren <Kryptologie> (2)
- Integral Geometry (2)
- Knapsack problem (2)
- Kombinatorische Optimierung (2)
- Krein space (2)
- Laplace operator on graphs (2)
- Lattice basis reduction (2)
- Martingal (2)
- Mathematiker (2)
- Musik (2)
- Oblivious Transfer (2)
- Perception (2)
- Quantum Zeno dynamics (2)
- San Jose (2)
- Semidefinite Programming (2)
- Shortest lattice vector problem (2)
- Stochastischer Prozess (2)
- Subset sum problem (2)
- Tropical geometry (2)
- Tropische Geometrie (2)
- Valuation Theory (2)
- Verzweigungsprozess (2)
- Vision (2)
- W*-dynamical system (2)
- X-Y model (2)
- Yule-Prozess (2)
- ancestral selection graph (2)
- binary search tree (2)
- collective intelligence (2)
- combinatorial optimization (2)
- complexity (2)
- duality (2)
- firing patterns (2)
- fixation probability (2)
- genealogy (2)
- level of difficulty (2)
- quantum spin systems (2)
- return to equilibrium (2)
- segments (2)
- task space (2)
- thought structure (2)
- Λ-coalescent (2)
- A-Discriminant (1)
- ADM1 (1)
- Abelian (1)
- Action potential (1)
- Actions in mathematical learning (1)
- Activity (1)
- Adaptive dynamics (1)
- Algebra (1)
- Algorithmus (1)
- Amoeba (1)
- Anaerobe Fermentation (1)
- Analyse von Algorithmen (1)
- Ancestral selection graph (1)
- Anisotropic Norm (1)
- Approximation (1)
- Approximation algorithm (1)
- Approximationsalgorithmus (1)
- Arbitrage (1)
- Assignment Problem (1)
- Asymptotically Even Nonlinearity (1)
- Ausreißer <Statistik> (1)
- Automorphismengruppe (1)
- Axon (1)
- Banach spaces (1)
- Bayesian Inference (1)
- Berkovich spaces (1)
- Binomialmodell (1)
- Binärsuchbaum (1)
- Black and Scholes Option Price theory (1)
- Black-Scholes (1)
- Blind Signature (1)
- Block Korkin—Zolotarev reduction (1)
- Blockplay (1)
- Bolthausen-Sznitman (1)
- Boolean Lattice (1)
- Bootstrap-Statistik (1)
- Boundary (1)
- Boundary Value Problems (1)
- Branch and Bound (1)
- Branching particle systems (1)
- Branching process approximation (1)
- Breaking knapsack cryptosystems (1)
- Bruhat-Tits-Gebäude (1)
- Burst (1)
- CAT(0)-Räume (1)
- CAT(0)-spaces (1)
- CIR-1 (1)
- Calderón problem (1)
- Cannings model (1)
- Catalan number (1)
- Cauchy-Anfangswertproblem (1)
- Cayley-Graph (1)
- China-Restaurant-Prozess (1)
- Chinese Remainder Theorem (1)
- Chinese restaurant process (1)
- Chinese-restaurant-process (1)
- Circuit (1)
- Closest Vector Problem (1)
- Coamoeba (1)
- Cognitive psychology (1)
- Commitment (1)
- Commitment schemes (1)
- Computational complexity (1)
- Concentration Inequality (1)
- Condensing (1)
- Containment (1)
- Contraction method (1)
- Datenbank (1)
- Datenstruktur (1)
- Degenerate Linear Part (1)
- Dehn (1)
- Derivate (1)
- Dessins d'enfants (1)
- Diagrams and mathematical learning (1)
- Dichte <Stochastik> (1)
- Digital and analogue materials (1)
- Digital trees (1)
- Dimension 2 (1)
- Directional selection (1)
- Dirichlet bound (1)
- Dirichlet random measure (1)
- Dirichletsche L-Reihe; Nullstelle (1)
- Discrete Logarithm (1)
- Diskrete Geometrie (1)
- Diskrete Mathematik (1)
- Diskreter Markov-Prozess (1)
- Diversity in trait space (1)
- Donsker's theorem (1)
- Dopamine (1)
- Doplicher-Haag-Roberts Axiomatik; Algebraische Quantenfeldtheorie; Superauswahlregeln und -sektoren; Quantenstatistik; Zopfgruppenstatistik (1)
- Dormancy (1)
- Dosis-Wirkungs-Modellierung (1)
- Dreiecksgruppe (1)
- Dreiecksgruppen (1)
- Duality (1)
- Early Childhood (1)
- Einbettung <Mathematik> (1)
- Elektronische Unterschrift (1)
- Elementar- und Primarbereich (1)
- Endliche Präsentation (1)
- Endlichkeitseigenschaften (1)
- Energie-Modell (1)
- Error Bound (1)
- Erwartungswert (1)
- Evolutionary branching (1)
- Evolving Yield Curves in the Real-World Measures (1)
- Ewens sampling formula (1)
- Examples (1)
- Extended RMJBN Modell (1)
- FEM-BEM-coupling (1)
- FID model (1)
- FIND algorithm (1)
- Face (1)
- Face recognition (1)
- Factoring (1)
- Familie (1)
- Family (1)
- Feller branching with logistic growth (1)
- Finite element methods (1)
- Finitely many measurements (1)
- Fixation probability (1)
- Fixpunkt (1)
- Fractional Brownian Motion (1)
- Fractional Laplacian (1)
- Frühe Bildung (1)
- Fuchs-Gruppe (1)
- Fuchssche Gruppe ; Modulare Einbettung (1)
- Fuchssche Gruppen (1)
- Functions (1)
- Funktionenkegel (1)
- Funktionenkörper ; Arithmetische Gruppe ; Auflösbare Gruppe ; Endlichkeit (1)
- Galerkin Approximation (1)
- Galois group (1)
- Galois-Gruppe (1)
- Game Tree (1)
- Gaussian Random Field (1)
- Gaussian process (1)
- Gelfand-Shilov space (1)
- Gemischte Volumen (1)
- Genealogical construction (1)
- Genealogische Konstruktion (1)
- Genetischer Fingerabdruck (1)
- Genus One (1)
- Geometrische Gruppentheorie (1)
- Geometry (1)
- Gespräch (1)
- Gestaenge (1)
- Girsanov transform (1)
- Gitter <Mathematik> ; Basis <Mathematik> ; Reduktion ; Algorithmus ; Laufzeit ; L-unendlich-Norm ; Rucksackproblem ; Kryptosystem (1)
- Gitter <Mathematik> ; Basis <Mathematik> ; Reduktion ; Gauß-Algorithmus (1)
- Gram-Hadamard inequalities (1)
- Graphen (1)
- Grenzwertsatz (1)
- Griffiths–Engen–McCloskey distribution (1)
- Group dynamics (1)
- Große Abweichung (1)
- Großinvestor (1)
- Gruppendynamiken (1)
- Gruppentheorie (1)
- Hadamard's Three-Lines Theorem (1)
- Halbeinfache algebraische Gruppe (1)
- Handelman (1)
- Handlung (1)
- Harmoniebox (1)
- Heisenberg algebra (1)
- Hidden Markov models (1)
- Hintertür <Informatik> (1)
- Hodge bundle (1)
- Holzklötzchen (1)
- Hopf algebroids (1)
- Householder reflection (1)
- Hyperfunktion ; Asymptotische Entwicklung (1)
- Hypotrochoid (1)
- Identification (1)
- Immigration (1)
- Index at Infinity (1)
- Infrared singularity (1)
- Integer relations (1)
- Integraldarstellung (1)
- Interaction (1)
- Internet (1)
- Invariante (1)
- Inverse problems (1)
- Iteration (1)
- Jahr der Mathematik (1)
- Kettenbruchentwicklung ; Dimension n ; Diophantische Approximation (1)
- Kieferorthopädie (1)
- Klassifizierender Raum (1)
- Klebsiella pneumoniae (1)
- Knotenabstand (1)
- Knotentiefe (1)
- Koaleszent (1)
- Kochen-Specker theorem (1)
- Kollektivintelligenz (1)
- Kombinatorische Gruppen (1)
- Konforme Feldtheorie (1)
- Konstruktiver Beweis (1)
- Kontaktprozess (1)
- Kontraktionsmethode (1)
- Konzentrationsungleichung (1)
- Korkin—Zolotarev reduction (1)
- Kreuzkorrelation (1)
- Kryptosystem (1)
- Kullback-Leibler Informational Divergence (1)
- L^p bounds (1)
- L^p means (1)
- Label cover (1)
- Langzeitverhalten (1)
- Laplace-Differentialgleichung (1)
- Large Deviation (1)
- Lattice Reduction (1)
- Leerverkauf (1)
- Lernen (1)
- Linear Filtering (1)
- Linear Preferential Attachment Trees (1)
- Linear-Implicit Scheme (1)
- Linkages (1)
- Loewner monotonicity and convexity (1)
- Logarithmic Laplacian (1)
- Long- Range Dependence (1)
- Long-Range Dependence (1)
- Long-time behaviour (1)
- Longitudinal Study (1)
- Lotka-Volterra system (1)
- Lovász Local Lemma (1)
- Low density subset sum algorithm (1)
- MINT-Bildung (1)
- Machine Learning (1)
- Malliavin calculus (1)
- Mallows model (1)
- Markov chain Monte Carlo Method (1)
- Markov chain imbedding technique (1)
- Markov model (1)
- Markov-Kette (1)
- Mathematical Giftedness (1)
- Mathematical Reasoning (1)
- Mathematical modelling (1)
- Mathematics Learning (1)
- Mathematische Bildung (1)
- Mathematische Modellierung (1)
- Max (1)
- McEliece (1)
- Mean Anisotropy (1)
- Message authentication (1)
- Methanogenese (1)
- Mixed Volumes (1)
- Modellierung (1)
- Modular Multiplication (1)
- Mooney faces (1)
- Morava K-theory (1)
- Mouse (1)
- Multi-Harmonie-Ansatz (1)
- Multiple lineare Regression (1)
- Multityp-Verzweigungsprozess mit Immigration (1)
- Multitype Branching with Immigration (1)
- NP-complete problems (1)
- NP-hard (1)
- NP-hardness (1)
- Nash-Gleichgewicht (1)
- Nelson-Siegel (1)
- Neural encoding (1)
- Neurophysiology (1)
- Neuroscience (1)
- Neurowissenschaft (1)
- Newton–Okounkov bodies (1)
- Non-Malleability (1)
- Noticeable Probability (1)
- Optimal Mean-Square Filter (1)
- Oracle Query (1)
- Parabolic SPDE (1)
- Parisi conjecture (1)
- Participation (1)
- Partizipation (1)
- Patientenbewertung (1)
- Pause (1)
- Permutation (1)
- Permutationsgruppen (1)
- Pfadeigenschaften (1)
- Phragmén-Lindelöf principle (1)
- Piecewise-constant coefficient (1)
- Poisson Process (1)
- Poisson boundary (1)
- Poisson-Prozess (1)
- Polyedrische Kombinatorik (1)
- Polymorphic evolution sequence (1)
- Polynomial Optimization (1)
- Pontrjagin space (1)
- Populationsdynamiken (1)
- Portfolios (1)
- Positivstellensatz (1)
- Potenzialtheorie (1)
- Prag <1999> (1)
- Preferential Attachment-Modelle (1)
- Private Information Retrieval (1)
- Probabilistic analysis of algorithms (1)
- Probabilistically checkable proofs (1)
- Probabilistische Analyse von Algorithmen (1)
- Probability distribution (1)
- Probability of fixation (1)
- Professionalisierung (1)
- Profil Likelihood (1)
- Projektionen (1)
- Public Key Cryptosystem (1)
- Public Parameter (1)
- Punktprozess (1)
- Pólya urn (1)
- Quadratic Residue (1)
- Quantenfeldtheorie ; Konforme Feldtheorie ; Algebraische Methode (1)
- Quantum Zeno Effect (1)
- Quantum Zeno effect (1)
- Quasi-Automorphismen (1)
- Quaternionenalgebra (1)
- Quickselect (1)
- RSA-Verschlüsselung (1)
- Radix sort (1)
- Random Oracle (1)
- Random Split Trees (1)
- Random String (1)
- Random environment (1)
- Random variables (1)
- Randomisieren (1)
- Ray-Knight representation (1)
- Reaction time (1)
- Reale vs. risikoneutrale Welt in der Finanzmathematik (1)
- Rechenzentrum (1)
- Rekursiver Algorithmus (1)
- Relaxation (1)
- Representation Problem (1)
- Research article (1)
- Riemann surfaces (1)
- Riemannsche Fläche (1)
- Riemannsche Flächen (1)
- Ringtheorie (1)
- Risikobewertung (1)
- Risikomanagement (1)
- Robustheit (1)
- Rückkopplungseffekt (1)
- S-arithmetic groups (1)
- SLLL-reduction (1)
- Sackgassen (1)
- San Francisco (1)
- Santa Barbara (1)
- Schizophrenia (1)
- Schwarz triangle functions (1)
- Schwinger model (1)
- Security (1)
- Security Parameter (1)
- Semidefinite Optimierung (1)
- Semidefinite Optimization (1)
- Semiotics according to C. S. Peirce (1)
- Sensory perception (1)
- Sensory processing (1)
- Sigma-Invariante (1)
- Sigma-invariant (1)
- Signalverarbeitung (1)
- Signature (1)
- Small Worlds (1)
- Small order expansion (1)
- Spectrahedra (1)
- Spiel (1)
- Spielbaum (1)
- Spielbaum-Suchverfahren (1)
- Stable reduction algorithm (1)
- State dependent branching rate (1)
- Stationarity (1)
- Stochastic Analysis of Square Zero Variation Processes (1)
- Stonesches Spektrum (1)
- Striatum (1)
- Strong Taylor Scheme (1)
- Stummel, Friedrich (1)
- Suchbaum (1)
- Suchoperation (1)
- Sudoku (1)
- Sum of Squares (1)
- Support (1)
- Symmetrie (1)
- Symmetrischer Raum (1)
- Symmetry (1)
- Sympatric speciation (1)
- Tail Bound (1)
- Tailschranke (1)
- Talk (1)
- Thorne Kishino Felsenstein model (1)
- Topic Model (1)
- Trapdoor (1)
- Trinomial (1)
- Tropical Geometry (1)
- Tropical Grassmannians (1)
- Tropical bases (1)
- Tropical varieties (1)
- Tropische Basen (1)
- Trotter's product formula (1)
- Turkish immigrants (1)
- Typ-In-Algebra (1)
- Typology (1)
- Türkisch (1)
- Uniform regularity (1)
- Uniform resource locators (1)
- Unterstützung (1)
- Valuation on functions (1)
- Varianz (1)
- Vertexoperator (1)
- Verzweigende Teilchensysteme (1)
- Virasoro-Algebra (1)
- Wahrscheinlichkeit (1)
- Wahrscheinlichkeitsverteilung (1)
- Wiener Index (1)
- Wiener index (1)
- Wiener-Index (1)
- Yule process (1)
- Yule-process (1)
- Zinsstrukturmodelle (1)
- Zinsänderungsrisiko (1)
- Zolotarev metric (1)
- Zolotarev-Metrik (1)
- Zopfgruppe ; Lineare Darstellung ; Kettengruppe ; Homologiegruppe ; Automorphismengruppe ; Kettenkomplex (1)
- Zufall (1)
- Zufallsgraph (1)
- Zufällige Umgebung (1)
- Zustandsabhängige Verzweigungsrate (1)
- Zweiphasen-Biogasreaktor (1)
- Zweistufen-Biogasreaktor (1)
- abelian differentials (1)
- abstract potential theory (1)
- algebraic curves (1)
- algebraic values (1)
- alpha-stable branching (1)
- ampleness (1)
- analysis of algorithms (1)
- anti-Zeno effect (1)
- argumentation (1)
- arithmetic ball quotients (1)
- arithmetic group (1)
- assignment problem (1)
- augmented and restricted base loci (1)
- autocorrelograms (1)
- bid-ask spread (1)
- bordism theory (1)
- branching processes (1)
- branching random walk in random medium (1)
- buildings (1)
- cancer cell dormancy (1)
- canonical divisors (1)
- catastrophe modeling (1)
- central limit theorem (1)
- chosen ciphertext attack (1)
- clique problem (1)
- colorability (1)
- colored graphs (1)
- compact Riemann surfaces (1)
- complex multiplication (1)
- composition (1)
- computational geometry (1)
- concurrent composition (1)
- condensing (1)
- confirmatory factor analysis (1)
- consensus (1)
- contact process (1)
- continued fraction algorithm (1)
- controlled homotopy (1)
- convexity (1)
- convolution quadrature (1)
- cooperative systems (1)
- cross correlation (1)
- cryptography (1)
- cycle structure of permutations (1)
- dead ends (1)
- degenerate semigroup (1)
- delay equation (1)
- depth of a node (1)
- dessins d’enfants (1)
- difference sets (1)
- digital search tree (1)
- digital tools (1)
- discrete dynamical system (1)
- discrete logarithm (1)
- discrete logarithm (DL) (1)
- diskrete Mathematik (1)
- dose-response modelling (1)
- doubly stochastic point process (1)
- eigenvalue (1)
- elastodynamic wave equation (1)
- emergence (1)
- endliche metrische Räume (1)
- error bounds (1)
- exponentiation (1)
- external branch (1)
- face inversion (1)
- face perception (1)
- fake projective planes (1)
- families of hash functions (1)
- feedback effect (1)
- finite resolution (1)
- finiteness-properties (1)
- flat surfaces (1)
- floating norms (1)
- floating point arithmetic (1)
- floating point errors (1)
- foliated Schwarz symmetry (1)
- forming a group (1)
- fractional Brownian motion (1)
- fractions of exponentiation (1)
- frühkindliche Erziehung (1)
- fuchsian group (1)
- functional limit theorem (1)
- functional limit theorems (1)
- fächerübergreifendes Lernen (1)
- generic algorithm (1)
- generic algorithms (1)
- generic complexity (1)
- generic group model (1)
- geometry (1)
- graph coloring (1)
- graph isomorphism (1)
- h-transform (1)
- hard bit (1)
- hardcore subsets (1)
- harmonic function (1)
- heavy tails (1)
- hidden Markov model (1)
- hierarchical mean-field limit (1)
- highly regular nearby points (1)
- hyperbolische Geometrie (1)
- hypergeometric functions (1)
- hypervariable region (1)
- höhere Momente (1)
- incremental schemes (1)
- indefinite inner product space (1)
- individual-based models (1)
- inner product (1)
- integer relation (1)
- integer vector (1)
- interacting particle systems (1)
- interdisziplinäre Lehre (1)
- internal diffusion limited aggregation (1)
- internal path length (1)
- inverse coefficient problem (1)
- iterated subsegments (1)
- key comparisons (1)
- kinetic fingerprint (1)
- knapsack cryptosystems (1)
- kontrollierte Homotopie (1)
- large deviations (1)
- large trader (1)
- latent variance (1)
- lattice basis reduction (1)
- lattices (1)
- leapfrog (1)
- length defect (1)
- limit order markets (1)
- local LLL-reduction (1)
- local LLLreduction (1)
- local coordinates (1)
- local randomness (1)
- local time (1)
- local time drift (1)
- logarithmic geometry (1)
- logical networks (1)
- lookdown construction (1)
- lower bounds (1)
- manifold and geodesic (1)
- market making (1)
- mathematical modeling (1)
- mathematical modelling (1)
- mathematics (1)
- measurement (1)
- mehrdimensionale Ausreißererkennung (1)
- message-passing algorithm (1)
- modelling (1)
- modular automorphism group (1)
- modular group (1)
- moduli spaces (1)
- multi-agents system (1)
- multi-drug treatment (1)
- multiharmony (1)
- multilevel branching (1)
- music (1)
- mutation parameter estimation (1)
- neuronal code (1)
- neuronaler Kode (1)
- nichtlineare stochastische Integration (1)
- non-archimedean geometry (1)
- non-autonomous dynamical systems (1)
- non-malleability (1)
- noncommutative ring spectra (1)
- nondeterministic Turing machines (1)
- nonlinear stochastic integration (1)
- numerical experiments (1)
- observable Funktion (1)
- one-more decryption attack (1)
- one-way function (1)
- one-way functions (1)
- operator algebra (1)
- optimal transport (1)
- pair HMM (1)
- parameter dependent semimartingales (1)
- parameterabhängige Semimartingale (1)
- partial match queries (1)
- path properties (1)
- perceptual closure (1)
- permutation groups (1)
- phage (1)
- phage therapy (1)
- phase coding (1)
- phase transitions (1)
- platonischer Körper (1)
- poisson process (1)
- polynomial random number generator (1)
- population dynamics (1)
- portfolio optimization (1)
- positivity of line bundles (1)
- preferential attachment (1)
- preferential attachment models (1)
- probabilistic analysis of algorithms (1)
- probability (1)
- probability metric (1)
- professional development (1)
- profile likelihood (1)
- projections (1)
- projective planes (1)
- q-binomial theorem (1)
- quantum field theory (1)
- quasi-automorphisms (1)
- quaternion algebra (1)
- quincunx (1)
- random assignment problem (1)
- random environment (1)
- random function generator (1)
- random graphs (1)
- random measures (1)
- random media (1)
- random metric (1)
- random move (1)
- random number generator (1)
- random oracle model (1)
- random partition (1)
- random recursive tree (1)
- random rekursiv tree (1)
- random trees (1)
- random walks (1)
- raum-zeitliche Muster (1)
- reactant-catalyst systems (1)
- recursive distributional equation (1)
- reguläre Parkettierung (1)
- resistance (1)
- resistance mutation (1)
- reversibility (1)
- riemann surfaces (1)
- risk assessment (1)
- risk theory (1)
- rotating plane method (1)
- rough paths theory (1)
- satisfiability (1)
- scaling (1)
- search operation (1)
- searchtrees (1)
- secure bit (1)
- security analysis of protocols (1)
- security of data (1)
- self-organizing groups (1)
- self-organizing groups; population dynamics; collective intelligence; forming groups; metric on finite sets (1)
- semidefinite optimization (1)
- sequence alignment (1)
- set-valued pullback attractors (1)
- shadow price (1)
- short integer relation (1)
- shortest lattice vector (1)
- signature size (1)
- signed ElGamal encryption (1)
- simultaneous diophantine approximations (1)
- simultaneous security of bits (1)
- single block replacement (1)
- small worlds (1)
- spatio-temporal patterns (1)
- split tree (1)
- statistic analysis (1)
- statistical alignment (1)
- statistische Analyse (1)
- statistischer Test (1)
- stoch. Analyse von Algorithmen (1)
- stochastic filtering (1)
- stochastic modeling (1)
- stochastic population dynamics (1)
- stochastische Prozesse (1)
- strong transience (1)
- subgroup growth (1)
- subset sum problems (1)
- substitution attacks (1)
- sum of squared factor loadings (1)
- switching systems (1)
- synergistic interaction (1)
- therapy evasion (1)
- topological entropy (1)
- trading strategies (1)
- transcendence (1)
- transversal learning (1)
- treatment protocol design (1)
- treatment success (1)
- triangle group (1)
- triangle groups (1)
- tropical geometry (1)
- tropical universal Jacobian (1)
- tropicalization (1)
- universal compactified Jacobian (1)
- urn model (1)
- von Neumann algebra (1)
- von Neumann algebras (1)
- von Neumann-Algebra (1)
- weak convergence (1)
- zufälliger Algorithmus (1)
- zufälliger rekursiver Baum (1)
- zufälliges Assignment Problem (1)
- Λ-coalescent (1)
- σ-field (1)
Institute
- Mathematik (374)
Integral equations for the mean-square estimate are obtained for the linear filtering problem, in which the noise generating the signal is a fractional Brownian motion with Hurst index h∈(3/4,1) and the noise in the observation process includes a fractional Brownian motion as well as a Wiener process. AMS subject classifications: 93E11, 60G20, 60G35.
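The driving noise in this filtering problem can be explored numerically. A minimal sketch (not part of the paper; all names are illustrative) that samples paths of a fractional Brownian motion with Hurst index h ∈ (3/4, 1) via a Cholesky factorisation of its covariance kernel:

```python
import numpy as np

def fbm_paths(n, h, m=1, T=1.0, seed=0):
    """Sample m paths of fractional Brownian motion on n grid points of
    (0, T], using a Cholesky factorisation of the covariance kernel
    cov(B_s, B_t) = (s^(2h) + t^(2h) - |t - s|^(2h)) / 2."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * h) + u ** (2 * h) - np.abs(s - u) ** (2 * h))
    L = np.linalg.cholesky(cov)                 # cov = L @ L.T
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal((n, m))   # one path per column

t, B = fbm_paths(120, h=0.8, m=2000)            # h in (3/4, 1) as above
```

The exact O(n^3) Cholesky approach is only practical for short paths; for long ones a circulant-embedding (FFT) method is the usual choice.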
Linear-implicit versions of strong Taylor numerical schemes for finite-dimensional Itô stochastic differential equations (SDEs) are shown to have the same order as the original scheme. The combined truncation and global discretization error of a strong linear-implicit Taylor scheme of order γ with time step δ, applied to the N-dimensional Itô-Galerkin SDE for a class of parabolic stochastic partial differential equations (SPDEs) with a strongly monotone linear operator with eigenvalues λ_1 ≤ λ_2 ≤ ... in its drift term, is then estimated by K(λ_{N+1}^(-1/2) + δ^γ), where the constant K depends on the initial value, on bounds for the other coefficients in the SPDE, and on the length of the time interval under consideration.
AMS subject classifications: 35R60, 60H15, 65M15, 65U05.
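For orientation, the notion of strong order can be checked empirically on a scalar example. The sketch below is illustrative only: it uses the explicit Euler-Maruyama scheme (strong order 1/2), not the linear-implicit Taylor schemes of the paper, and estimates the strong error for geometric Brownian motion, whose exact solution is known. Quartering the step size should roughly halve the error:

```python
import numpy as np

def em_strong_error(delta, m=2000, mu=1.5, sigma=0.5, x0=1.0, T=1.0, seed=1):
    """Mean strong error E|X_T - X_T^delta| of Euler-Maruyama for
    dX = mu*X dt + sigma*X dW against the exact GBM solution."""
    rng = np.random.default_rng(seed)
    n = int(round(T / delta))
    dW = rng.standard_normal((m, n)) * np.sqrt(delta)
    X = np.full(m, x0)
    for k in range(n):                     # one Euler-Maruyama step
        X = X + mu * X * delta + sigma * X * dW[:, k]
    exact = x0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * dW.sum(axis=1))
    return float(np.mean(np.abs(X - exact)))

errs = [em_strong_error(d) for d in (0.05, 0.0125)]  # delta, then delta/4
```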
We call a distribution on n-bit strings (ε, e)-locally random if, for every choice of e ≤ n positions, the induced distribution on e-bit strings is in the L1 norm at most ε away from the uniform distribution on e-bit strings. We establish local randomness in polynomial random number generators (RNG) that are candidate one-way functions. Let N be a squarefree integer and let f1, ..., fk be polynomials with coefficients in Z_N = Z/NZ. We study the RNG that stretches a random x ∈ Z_N into the sequence of least significant bits of f1(x), ..., fk(x). We show that this RNG provides local randomness if, for every prime divisor p of N, the polynomials f1, ..., fk are linearly independent modulo the subspace of polynomials of degree ≤ 1 in Z_p[x]. We also establish local randomness in polynomial random function generators. This yields candidates for cryptographic hash functions. The concept of local randomness in families of functions extends the concept of universal families of hash functions by Carter and Wegman (1979). The proofs of our results rely on upper bounds for exponential sums.
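A toy instance of such a generator, together with the L1 statistic it is measured by, can be written down directly. Illustrative only: the modulus and polynomials below are hypothetical and far too small to inherit any of the stated guarantees.

```python
from itertools import product

def lsb_output(N, polys, x):
    """Least significant bits of f1(x), ..., fk(x) mod N, each fi given
    as a tuple of coefficients (lowest degree first)."""
    return tuple(sum(c * pow(x, i, N) for i, c in enumerate(f)) % N & 1
                 for f in polys)

def l1_from_uniform(N, polys):
    """L1 distance between the output distribution (over uniform x in Z_N)
    and the uniform distribution on k-bit strings."""
    k = len(polys)
    counts = {}
    for x in range(N):
        out = lsb_output(N, polys, x)
        counts[out] = counts.get(out, 0) + 1
    return sum(abs(counts.get(b, 0) / N - 2.0 ** -k)
               for b in product((0, 1), repeat=k))

# N = 15 is squarefree; f1(x) = x^2 and f2(x) = x^3 are linearly
# independent modulo polynomials of degree <= 1 over Z_3 and Z_5
d = l1_from_uniform(15, [(0, 0, 1), (0, 0, 0, 1)])
```

At cryptographic sizes one would of course never enumerate Z_N; the exhaustive loop is only there to make the L1 distance exactly computable.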
We show lower bounds for the signature size of incremental schemes which are secure against substitution attacks and support single block replacement. We prove that for documents of n blocks such schemes produce signatures of \Omega(n^(1/(2+c))) bits for any constant c>0. For schemes accessing only a single block resp. a constant number of blocks for each replacement this bound can be raised to \Omega(n) resp. \Omega(sqrt(n)). Additionally, we show that our technique yields a new lower bound for memory checkers.
The works of Aleksandr Mikhailovich Lyapunov (1857-1918) were the starting point of intensive research into the stability behaviour of differential equations. In the present thesis, Lyapunov functions on time scales are investigated with respect to the stability behaviour of the homogeneous linear system x^Δ = A(t)x.
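For the continuous-time special case (the time scale T = R, constant A), a quadratic Lyapunov function V(x) = x^T P x for x' = Ax can be computed by solving the Lyapunov equation A^T P + P A = -Q. A sketch of that classical computation via Kronecker vectorisation (not the time-scales machinery of the thesis):

```python
import numpy as np

def lyapunov_matrix(A, Q):
    """Solve A^T P + P A = -Q for P by vectorising the equation:
    (I (x) A^T + A^T (x) I) vec(P) = -vec(Q), with column-major vec."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    p = np.linalg.solve(M, -Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])        # Hurwitz: eigenvalues -1 and -2
P = lyapunov_matrix(A, np.eye(2))
# V(x) = x^T P x then satisfies dV/dt = -|x|^2 < 0 along x' = Ax
```

For a Hurwitz matrix A and positive definite Q, the solution P is itself positive definite, which is exactly what makes V a Lyapunov function.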
We present a massively parallel framework for computing tropicalizations of algebraic varieties which can make use of symmetries using the workflow management system GPI-Space and the computer algebra system Singular. We determine the tropical Grassmannian TGr0(3,8). Our implementation works efficiently on up to 840 cores, computing the 14763 orbits of maximal cones under the canonical S8-action in about 20 minutes. Relying on our result, we show that the Gröbner structure of TGr0(3,8) refines the 16-dimensional skeleton of the coarsest fan structure of the Dressian Dr(3,8), except for 23 orbits of special cones, for which we construct explicit obstructions to the realizability of their tropical linear spaces. Moreover, we propose algorithms for identifying maximal-dimensional cones which belong to positive tropicalizations of algebraic varieties. We compute the positive Grassmannian TGr+(3,8) and compare it to the cluster complex of the classical Grassmannian Gr(3,8).
In this thesis, the focus is on the actions of primary school children using digital and analogue materials in comparable mathematical situations. To emphasise actions on different materials in the mathematical learning process, a semiotic perspective according to C. S. Peirce (CP 1931-35) on mathematics learning is adopted. This theoretical research perspective highlights the activity itself on diagrams as a mathematical activity and brings actions to the forefront of interest. The actions on comparable digital and analogue diagrams are the basis for the reconstruction of mathematical interpretations of learners in 3rd and 4th grade.
The research questions investigate to what extent possible differences between the reconstructed interpretations of the learners can be attributed to the different materials and what influence the material has on the mathematical relationships that the learners take into account in their actions to manipulate the diagram.
For the reconstruction of the diagram interpretations based on the learners' actions on the material, a semiotic specification of Vogel's (2017) adaptation of Mayring's (2014) context analysis is used. This specification is based on Peirce's triadic theory of signs (Billion, 2023). The reconstructed interpretations of the analogue and digital diagrams are compared in a second step to identify possible differences and similarities.
The results of the qualitative analyses show, among other things, that despite the different actions of the learners on the digital and analogue diagrams, it is possible to reconstruct the same diagram interpretations if the learners establish the same mathematical relationships between the parts of the diagrams in their actions. There are also passages in the analyses where the same diagram interpretations cannot be reconstructed based on the actions on the digital and analogue materials. If the digital material acts as a tool and automatically creates several relationships between the parts of the diagram triggered by an action, then the reconstruction of the learners' diagram interpretations based on the analysis of their actions is partially possible. If the tool automatically establishes relationships, these must then be interpreted by the learners using gestures and phonetic utterances to understand the newly created diagram. Thus, a tool changes how mathematical relationships are expressed, because learners no longer have to interpret the relationships before their actions to manipulate the diagram itself, but afterwards through gestures and phonetic utterances. Regarding diagrammatic reasoning according to Peirce (NEM IV), this means that with analogue material the focus is on the construction and manipulation of diagrams through rule-guided actions, whereas with digital material, which functions as a tool, there is more emphasis on observing the results of the manipulations on the diagram.
At the end of the thesis, a recommendation for teachers on how to design mathematics lessons for primary school children using digital and analogue materials will be derived from the results.
The literature cited in this summary can be found in the references of the presented thesis.
This extended write-up of a talk gives an introductory survey of mathematical problems of the quantization of gauge systems. Using the Schwinger model as an exactly tractable but nontrivial example which exhibits general features of gauge quantum field theory, I cover the following subjects: The axiomatics of quantum field theory, formulation of quantum field theory in terms of Wightman functions, reconstruction of the state space, the local formulation of gauge theories, indefiniteness of the Wightman functions in general and in the special case of the Schwinger model, the state space of the Schwinger model, special features of the model. New results are contained in the Mathematical Appendix, where I consider in an abstract setting the Pontrjagin space structure of a special class of indefinite inner product spaces - the so called quasi-positive ones. This is motivated by the indefinite inner product space structure appearing in the above context and generalizes results of Morchio and Strocchi [J. Math. Phys. 31 (1990) 1467], and Dubin and Tarski [J. Math. Phys. 7 (1966) 574]. See the corresponding paper: Schmidt, Andreas U.: "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra" and the presentation "Infinite Infrared Regularization in Krein Spaces".
We present an overview of the mathematics underlying the quantum Zeno effect. Classical, functional analytic results are put into perspective and compared with more recent ones. This yields some new insights into mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated to the issue of existence of the Zeno dynamics. On grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS classification: 03.65.Xp, 03.65.Db, 05.30.-d, 02.30.T. See the corresponding presentations: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics"
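The effect itself is easy to reproduce numerically. For a two-level system with Hamiltonian H = σ_x (with ħ = 1), n equally spaced projective measurements of the initial state during a fixed time t give survival probability cos(t/n)^(2n), which tends to 1 as n grows. A minimal sketch, illustrative and not from the paper:

```python
import numpy as np

def zeno_survival(t, n):
    """Probability of still finding the system in |0> after n projective
    measurements, equally spaced over total time t, for H = sigma_x."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    dt = t / n
    # exp(-i H dt) = cos(dt) I - i sin(dt) sigma_x  for H = sigma_x
    U = np.cos(dt) * np.eye(2) - 1j * np.sin(dt) * sx
    p_step = abs(U[0, 0]) ** 2     # survival probability per interval
    return p_step ** n             # = cos(t/n)^(2n)

# without measurement (n = 1, t = pi/2) the state is lost entirely;
# frequent measurement freezes it in place
probs = [zeno_survival(np.pi / 2, n) for n in (1, 10, 100)]
```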
Review of: George G. Szpiro: Mathematik für Sonntagmorgen : 50 Geschichten aus Mathematik und Wissenschaft, NZZ Verlag, Zürich 2006, ISBN 978-3-03823-353-4; 240 pages, 26 euros/38 CHF. George G. Szpiro: Mathematik für Sonntagnachmittag : Weitere 50 Geschichten aus Mathematik und Wissenschaft, NZZ Verlag, Zürich 2006, ISBN 978-3-03823-225-4; 236 pages, 26 euros/38 CHF.
Using limit linear series on chains of curves, we show that closures of certain Brill-Noether loci contain a product of pointed Brill-Noether loci of small codimension. As a result, we obtain new non-containments of Brill-Noether loci, in particular that dimensionally expected non-containments hold for expected maximal Brill-Noether loci. Using these degenerations, we also give a new proof that Brill-Noether loci with expected codimension −ρ≤⌈g/2⌉ have a component of the expected dimension. Additionally, we obtain new non-containments of Brill-Noether loci by considering the locus of the source curves of unramified double covers.
Sensitivity of output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a >= 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
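For a zero-mean Gaussian vector with covariance Σ, the minimisation over the isotropic family N(0, λI) can be carried out in closed form: KL(N(0, Σ) ‖ N(0, λI)) is minimised at λ = tr(Σ)/n, giving anisotropy -1/2 · log det(nΣ/tr Σ). A sketch under that Gaussian assumption (illustrative; not code from the paper):

```python
import numpy as np

def anisotropy(Sigma):
    """Anisotropy of a zero-mean Gaussian vector with covariance Sigma:
    the minimum over lambda of KL(N(0, Sigma) || N(0, lambda*I)),
    attained at lambda = tr(Sigma)/n, equal to -0.5*logdet(n*Sigma/tr(Sigma))."""
    n = Sigma.shape[0]
    sign, logdet = np.linalg.slogdet(n * Sigma / np.trace(Sigma))
    if sign <= 0:
        raise ValueError("covariance must be positive definite")
    return -0.5 * logdet

a_iso = anisotropy(np.eye(3))                            # white noise
a_cor = anisotropy(np.array([[2.0, 0.9], [0.9, 1.0]]))   # correlated noise
```

White (isotropic) noise has zero anisotropy; any departure from scalar covariance makes the functional strictly positive, quantifying the deviation from the LQG white-noise hypothesis.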
We show that the metrisability of an oriented projective surface is equivalent to the existence of pseudo-holomorphic curves. A projective structure p and a volume form σ on an oriented surface M equip the total space of a certain disk bundle Z→M with a pair (J_p, J_{p,σ}) of almost complex structures. A conformal structure on M corresponds to a section of Z→M, and p is metrisable by the metric g if and only if [g]: M→Z is a pseudo-holomorphic curve with respect to J_p and J_{p,dA_g}.
Mixed volumes, mixed Ehrhart theory and applications to tropical geometry and linkage configurations
(2009)
The aim of this thesis is the discussion of mixed volumes, their interplay with algebraic geometry, discrete geometry and tropical geometry, and their use in applications such as linkage configuration problems. In particular, we present new technical tools for mixed volume computation, a novel approach to Ehrhart theory that links mixed volumes with counting integer points in Minkowski sums, and new expressions in terms of mixed volumes for combinatorial quantities in tropical geometry; furthermore, we employ mixed volume techniques to obtain bounds in certain graph embedding problems.
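In the plane, the mixed volume is determined by the polarization identity area(P+Q) = area(P) + 2V(P,Q) + area(Q). The sketch below illustrates this for axis-aligned boxes, whose Minkowski sum is again a box; the helper names are ours.

```python
def box_area(a, b):
    # Area of the axis-aligned box [0,a] x [0,b].
    return a * b

def mixed_volume_boxes(a, b, c, d):
    """2D mixed volume V(P,Q) of P=[0,a]x[0,b] and Q=[0,c]x[0,d] via the
    polarization identity; the Minkowski sum P+Q is the box [0,a+c]x[0,b+d]."""
    sum_area = box_area(a + c, b + d)
    return (sum_area - box_area(a, b) - box_area(c, d)) / 2.0

# Closed form for boxes: V(P,Q) = (a*d + b*c) / 2.
assert mixed_volume_boxes(1, 1, 2, 1) == 1.5
assert mixed_volume_boxes(2, 3, 5, 7) == (2 * 7 + 3 * 5) / 2.0
```

For general convex polygons the Minkowski sum is computed by merging edge vectors by angle; the box case keeps the identity visible without that machinery.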
Anaerobic fermentation describes the degradation of organic material in the absence of oxygen and consists of four process phases (hydrolysis, acidogenesis, acetogenesis and methanogenesis). In this work, the distribution of these four process phases across the two stages of a two-stage, two-phase biogas reactor was determined precisely. This distribution is of crucial importance for future work, since it determines exactly which substances must be taken into account in measurements and in modelling.
In 2002, the IWA Task Group published the ADM1 model, which accounts for all four process phases of anaerobic fermentation. In the present work, a spatially resolved model for anaerobic fermentation is developed by coupling the ADM1 model with a flow model. Subsequently, a reduced simulation model for acetoclastic methanogenesis in a two-stage, two-phase biogas reactor is constructed. Using measurement data, it is shown that the simulation model reproduces the degradation of acetic acid to methane within the reactor well.
Finally, the validated model is used to derive rules for an optimal control of the reactor, and the local methane production is used to determine the efficiency of the reactor. The information obtained can be used to optimize the biogas reactor.
We deal with the shape reconstruction of inclusions in elastic bodies. For solving this inverse problem in practice, data fitting functionals are used. Those work better than the rigorous monotonicity methods from Eberle and Harrach (Inverse Probl 37(4):045006, 2021), but have no rigorously proven convergence theory. Therefore we show how the monotonicity methods can be converted into a regularization method for a data-fitting functional without losing the convergence properties of the monotonicity methods. This is a great advantage and a significant improvement over standard regularization techniques. In more detail, we introduce constraints on the minimization problem of the residual based on the monotonicity methods and prove the existence and uniqueness of a minimizer as well as the convergence of the method for noisy data. In addition, we compare numerical reconstructions of inclusions based on the monotonicity-based regularization with a standard approach (one-step linearization with Tikhonov-like regularization), which also shows the robustness of our method regarding noise in practice.
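The idea of constraining the minimization of a residual can be sketched with a generic projected-gradient loop for a box-constrained Tikhonov functional. This is a toy stand-in on a 2×2 problem, not the paper's monotonicity-based constraints; all names and numbers are ours.

```python
def projected_gradient(A, b, alpha, lo, hi, steps=200, eta=0.1):
    """Minimize ||A x - b||^2 + alpha ||x||^2 subject to the box
    constraints lo <= x_i <= hi via projected gradient descent (2x2 toy)."""
    x = [0.0, 0.0]
    for _ in range(steps):
        # residual r = A x - b
        r = [A[i][0] * x[0] + A[i][1] * x[1] - b[i] for i in range(2)]
        # gradient of the Tikhonov functional: 2 A^T r + 2 alpha x
        g = [2 * (A[0][j] * r[0] + A[1][j] * r[1]) + 2 * alpha * x[j]
             for j in range(2)]
        # gradient step followed by projection onto the box
        x = [min(hi, max(lo, x[j] - eta * g[j])) for j in range(2)]
    return x

A = [[1.0, 0.0], [0.0, 1.0]]
x = projected_gradient(A, [2.0, -1.0], 0.1, 0.0, 1.0)
# Both coordinates are clipped by the box constraint: x -> (1, 0).
assert abs(x[0] - 1.0) < 1e-6 and x[1] == 0.0
```

The projection step is where a priori information (here a box; in the paper, monotonicity-derived constraints) enters the data-fitting problem.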
In 1957, Craig Mooney published a set of human face stimuli to study perceptual closure: the formation of a coherent percept on the basis of minimal visual information. Images of this type, now known as “Mooney faces”, are widely used in cognitive psychology and neuroscience because they offer a means of inducing variable perception with constant visuo-spatial characteristics (they are often not perceived as faces if viewed upside down). Mooney’s original set of 40 stimuli has been employed in several studies. However, it is often necessary to use a much larger stimulus set. We created a new set of over 500 Mooney faces and tested them on a cohort of human observers. We present the results of our tests here, and make the stimuli freely available via the internet. Our test results can be used to select subsets of the stimuli that are most suited for a given experimental purpose.
Muller's ratchet, in its prototype version, models a haploid, asexual population whose size N is constant over the generations. Slightly deleterious mutations are acquired along the lineages at a constant rate, and individuals carrying fewer mutations have a selective advantage. The classical variant considers fitness-proportional selection, but other fitness schemes are conceivable as well. Inspired by the work of Etheridge et al. [EPW09], we propose a parameter scaling which fits well to the "near-critical" regime that was in the focus of [EPW09] (and in which the mutation-selection ratio diverges logarithmically as N→∞). Using a Moran model, we investigate the "rule of thumb" given in [EPW09] for the click rate of the "classical ratchet" by putting it into the context of new results on the long-time evolution of the size of the best class of the ratchet with (binary) tournament selection, which (unlike that of the classical ratchet) follows an autonomous dynamics up to the time of its extinction. In [GSW23] it was discovered that the tournament ratchet has a hierarchy of dual processes which can be constructed on top of an ancestral selection graph with a Poisson decoration. For a regime in which the mutation-selection ratio remains bounded away from 1, this was used in [GSW23] to reveal the asymptotics of the click rates as well as of the type frequency profile between clicks. We describe how these ideas can be extended to the near-critical regime, in which the mutation-selection ratio of the tournament ratchet converges to 1 as N→∞.
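The click mechanism can be illustrated with a toy simulation. We use a Wright-Fisher resampling step with fitness-proportional selection rather than the Moran model of the abstract, and all parameter values are arbitrary; the point is only that, absent back mutations, the minimum mutation load can never decrease.

```python
import math
import random

def poisson(rng, lam):
    # inverse-transform sampling of a Poisson(lam) variate
    u = rng.random()
    p = math.exp(-lam)
    k, cum = 0, p
    while u > cum:
        k += 1
        p *= lam / k
        cum += p
    return k

def ratchet(n=100, mu=0.3, s=0.05, generations=300, seed=1):
    """Toy Wright-Fisher version of Muller's ratchet: n haploids,
    Poisson(mu) new deleterious mutations per birth, multiplicative
    fitness (1-s)^k for an individual carrying k mutations."""
    rng = random.Random(seed)
    loads = [0] * n
    min_loads = []
    for _ in range(generations):
        weights = [(1 - s) ** k for k in loads]
        parents = rng.choices(loads, weights=weights, k=n)
        # mutations are only ever gained, never lost: the ratchet clicks
        loads = [k + poisson(rng, mu) for k in parents]
        min_loads.append(min(loads))
    return min_loads

m = ratchet()
# the minimum load never decreases: each lost best class is a click
assert all(m[i] <= m[i + 1] for i in range(len(m) - 1))
```

Swapping the fitness weights for tournament selection changes the click-rate asymptotics, which is the regime the abstract analyses.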
Since its development in the publications of Brace, Gatarek and Musiela (1997) on the one hand, and independently by Miltersen, Sandmann and Sondermann (1997) on the other, the LIBOR market model (LMM) has become the most widely accepted instrument for modelling the term structure of interest rates and for pricing the associated derivatives. LIBOR stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London; three- or six-month maturities are customary in connection with the LMM. Research aimed at improving this model has grown in recent years: by reducing the calibration error with respect to the daily observed prices of interest rate options such as caps and swaptions, one also obtains more accurate valuations of other, more exotic derivatives. The central idea underlying the LMM is to take the forward rates directly as the primary (vector-valued) process of several LIBOR rates and to model them simultaneously, instead of deriving them from a superordinate, infinite-dimensional forward rate process as in the earlier Heath-Jarrow-Morton model. The most convincing argument for this discretization is that the LIBOR rates are directly observable in the market and that their volatilities can be related in a natural way to liquidly traded products, namely those caps and swaptions. Nevertheless, the model has a serious deficiency in that it does not capture any curvature of the volatility surface with respect to options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the distribution manifest themselves clearly in missing heavy tails; smile and skew effects are visible.
In the classical LIBOR market model, only an affine structure is generated in the direction of the strike dimension, which can at best serve as an approximation of the desired surface. The observed distortions naturally lead to an inaccurate representation of reality and to an erroneous reproduction of prices in regions somewhat away from the at-the-money range. Such unwanted dissonances in profit-and-loss figures led, for example, in 1998 to severe losses in the interest rate derivatives portfolio of what is now the Royal Bank of Scotland. ...
This thesis made clear that the multilevel Monte Carlo (MLMC) method represents a significant improvement over the plain Monte Carlo method. It reduces the computational cost and achieves the desired accuracy in almost all cases. The extension by Richardson extrapolation always reduced the computational cost, or at least did not increase it, even though the weak order of convergence was not doubled in all cases.
In the case of option sensitivities, applying the MLMC algorithm is problematic. The functional applied to the stock price must not have a discontinuity, and in the case of the gamma it must be continuously differentiable. Applying the MLMC method is particularly sensible when the sensitivity can be rewritten as a function of the stock price, so that only the path of the stock needs to be simulated. Only if this is not possible would it be advisable to use the method presented in Chapter 6.5 for the example of the delta, in which a second path is simulated for the delta.
Further improvements could lie in the choice of other variance reduction methods or in the use of discretisation schemes of higher strong order than the Euler scheme (cf. [7], use of the Milstein scheme). In that case a computational cost of order O(ε^{-2}) is theoretically possible, since the number of samples to be generated no longer grows with increasing L. Thus L could be chosen so large that the bias vanishes and the MSE depends exclusively on the variance of the estimator. To bring this variance to the order O(ε^2), it is necessary to generate O(ε^{-2}) paths (see equation (3.6)), which accounts for the computational cost.
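The multilevel idea, a telescoping sum of level corrections estimated with geometrically decreasing sample sizes, can be sketched for a geometric Brownian motion with Euler discretisation. Parameters, the payoff and the sample-size schedule below are illustrative choices of ours, not the thesis's.

```python
import math
import random

def euler_paths(rng, level, n_samples, s0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Simulate coupled Euler discretisations of a geometric Brownian motion
    on the fine grid (2^level steps) and the coarse grid (2^(level-1) steps),
    driven by the same Brownian increments."""
    out = []
    nf = 2 ** level
    hf = T / nf
    for _ in range(n_samples):
        sf, sc, dw_pair = s0, s0, 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            sf += r * sf * hf + sigma * sf * dw
            dw_pair += dw
            if step % 2 == 1 and level > 0:
                # one coarse step per two fine steps, same increments
                sc += r * sc * 2 * hf + sigma * sc * dw_pair
                dw_pair = 0.0
        out.append((sf, sc if level > 0 else None))
    return out

def mlmc_estimate(payoff, max_level=5, n0=20000, seed=7):
    """MLMC estimator: E[P_0] + sum_l E[P_l - P_{l-1}],
    with sample sizes halved per level."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(max_level + 1):
        n = max(n0 // 2 ** level, 100)
        paths = euler_paths(rng, level, n)
        if level == 0:
            est += sum(payoff(sf) for sf, _ in paths) / n
        else:
            est += sum(payoff(sf) - payoff(sc) for sf, sc in paths) / n
    return est

# sanity check with the identity payoff: E[S_T] = s0 * exp(r*T) ~ 1.0513
v = mlmc_estimate(lambda s: s)
assert abs(v - math.exp(0.05)) < 0.05
```

The coupling of fine and coarse paths through shared Brownian increments is what makes the level-correction variances small, so that most samples are spent on the cheap coarse levels.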
We present a practical algorithm that, given an LLL-reduced lattice basis of dimension n, runs in time O(n^3 (k/6)^{k/4} + n^4) and approximates the length of the shortest non-zero lattice vector to within a factor (k/6)^{n/(2k)}. This result is based on reasonable heuristics. Compared to previous practical algorithms, the new method reduces the proven approximation factor achievable in a given time to less than its fourth root. We also present a sieve algorithm inspired by Ajtai, Kumar and Sivakumar [AKS01].
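In dimension 2 the shortest-vector problem is solved exactly by the classical Lagrange-Gauss reduction, the ancestor of the LLL and block-reduction methods discussed above; a minimal sketch:

```python
def gauss_reduce(b1, b2):
    """Lagrange-Gauss reduction in dimension 2: returns a shortest non-zero
    vector of the lattice spanned by b1 and b2."""
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    while True:
        # keep b1 the shorter of the two basis vectors
        if dot(b2, b2) < dot(b1, b1):
            b1, b2 = b2, b1
        # size-reduce b2 against b1 by the nearest-integer multiple
        m = round(dot(b1, b2) / dot(b1, b1))
        if m == 0:
            return b1
        b2 = (b2[0] - m * b1[0], b2[1] - m * b1[1])

v = gauss_reduce((4, 3), (7, 5))
# the lattice spanned by (4,3) and (7,5) has determinant -1, i.e. it is Z^2,
# so the shortest non-zero vector has squared length 1
assert v[0] * v[0] + v[1] * v[1] == 1
```

In higher dimensions exact reduction becomes infeasible, which is why approximation factors such as (k/6)^{n/(2k)} above are the object of study.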
The purpose of this paper is to initiate the development of the theory of Newton-Okounkov bodies of curve classes. Our definition is based on making a fundamental property of Newton-Okounkov bodies hold also in the curve case: the volume of the Newton-Okounkov body of a curve is a volume-type function of the original curve. This construction allows us to conjecture a new relation between Newton-Okounkov bodies, which we prove in certain cases.
The cones of nonnegative polynomials and sums of squares arise as central objects in convex algebraic geometry and have their origin in the seminal work of Hilbert ([Hil88]). Depending on the number of variables n and the degree d of the polynomials, Hilbert famously characterized all cases of equality between the cone of nonnegative polynomials and the cone of sums of squares; this equality holds precisely for bivariate forms, quadratic forms and ternary quartics ([Hil88]). Since then, a lot of work has been done on understanding the difference between these two cones, which has major consequences for many practical applications such as polynomial optimization problems. Roughly speaking, minimizing polynomial functions (constrained as well as unconstrained) can be done efficiently whenever certain nonnegative polynomials can be written as sums of squares (see Section 2.3 for the precise relationship). The underlying reason is the fundamental difference that checking nonnegativity of polynomials is an NP-hard problem whenever the degree is greater than or equal to four ([BCSS98]), whereas checking whether a polynomial can be written as a sum of squares is a semidefinite feasibility problem (see Section 2.2). Although the complexity status of the semidefinite feasibility problem is still open, it is polynomial for a fixed number of variables. Hence, understanding the difference between nonnegative polynomials and sums of squares is highly desirable both from a theoretical and a practical viewpoint.
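The gap between the two cones is classically witnessed by Motzkin's polynomial, which is nonnegative (by the AM-GM inequality) but not a sum of squares. A quick numerical sanity check of its nonnegativity — not a proof, and not an SOS computation:

```python
def motzkin(x, y):
    # Motzkin's polynomial: nonnegative on R^2, yet not a sum of squares
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# sample on a grid: never observably negative, and the minimum value 0
# is attained at (x, y) = (+-1, +-1)
vals = [motzkin(i / 10.0, j / 10.0)
        for i in range(-30, 31) for j in range(-30, 31)]
assert min(vals) >= -1e-9
assert abs(motzkin(1.0, 1.0)) < 1e-12
assert abs(motzkin(-1.0, 1.0)) < 1e-12
```

Nonnegativity follows from (x^4 y^2 + x^2 y^4 + 1)/3 >= x^2 y^2; that no SOS decomposition exists is exactly the kind of membership question the semidefinite feasibility problem of Section 2.2 decides.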
Between his arrival in Frankfurt in 1922 and his proof of his famous finiteness theorem for integral points in 1929, Siegel had no publications. He did, however, write a letter to Mordell in 1926 in which he explained a proof of the finiteness of integral points on hyperelliptic curves. Recognizing the importance of this argument (and Siegel's views on publication), Mordell sent the relevant extract to be published under the pseudonym "X".
The purpose of this note is to explain how to optimize Siegel's 1926 technique to obtain the following bound. Let K be a number field, S a finite set of places of K, and f ∈ O_{K,S}[t] monic of degree d ≥ 5 with discriminant Δ_f ∈ O_{K,S}^×. Then: #|{(x,y) : x, y ∈ O_{K,S}, y^2 = f(x)}| ≤ 2^{rank Jac(C_f)(K)} · O(1)^{d^3 · ([K:Q] + #|S|)}.
This improves bounds of Evertse-Silverman and Bombieri-Gubler from 1986 and 2006, respectively.
The main point underlying our improvement is that, informally speaking, we insist on "executing the descents in the presence of only one root (and not three) until the last possible moment".
We present a proof of the classical stable limit laws using the contraction method in combination with the Zolotarev metric. Furthermore, a stable limit law is proved for scaled sums over growing sequences. This limit law is alternatively formulated for sequences of random variables defined by a simple degenerate recursion.
Random ordinary differential equations (RODEs) are ordinary differential equations (ODEs) which have a stochastic process in their vector field functions. RODEs have been used in a wide range of applications such as biology, medicine, population dynamics and engineering, and play an important role in the theory of random dynamical systems; however, they have long been overshadowed by stochastic differential equations.
Typically, the driving stochastic process has at most Hölder-continuous sample paths, and the resulting vector field is thus at most Hölder continuous in time, no matter how smooth the vector field is in its remaining variables. The sample paths of the solution are then continuously differentiable, but their derivatives are at most Hölder continuous in time. Consequently, although the classical numerical schemes for ODEs can be applied pathwise to RODEs, they do not achieve their traditional orders.
Recently, Grüne and Kloeden derived the explicit averaged Euler scheme by averaging the noise within the vector field. In addition, new forms of higher-order Taylor-like schemes for RODEs have been derived systematically by Jentzen and Kloeden.
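The averaged Euler idea — freeze the state over a step but average the noise path over sub-sample points instead of evaluating it only at the left endpoint — can be sketched as follows. The noise path, the test equation and all parameters are our own illustrative choices, not those of Grüne & Kloeden.

```python
import math
import random

def averaged_euler(f, eta, x0, T, n_steps, n_sub=8):
    """Explicit averaged Euler scheme for the RODE x'(t) = f(x(t), eta(t)):
    on each step the noise is averaged over n_sub sub-sample points."""
    h = T / n_steps
    x = x0
    for n in range(n_steps):
        t = n * h
        avg = sum(eta(t + (j + 0.5) * h / n_sub) for j in range(n_sub)) / n_sub
        x = x + h * f(x, avg)
    return x

# a fixed, rough-looking but bounded noise path (random Fourier sum)
rng = random.Random(3)
coeff = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(20)]
def eta(t):
    return sum(a * math.sin((k + 1) * t) + b * math.cos((k + 1) * t)
               for k, (a, b) in enumerate(coeff)) / 10.0

# dissipative test equation x' = -x + eta(t): the solution stays bounded
x = averaged_euler(lambda x, z: -x + z, eta, 0.0, 10.0, 1000)
assert math.isfinite(x) and abs(x) < 10.0
```

The averaging recovers accuracy lost to the low time-regularity of the noise, while the pathwise structure of the scheme is unchanged.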
However, it remains important to build higher-order and computationally less expensive schemes as well as numerically stable ones, and this is the motivation of this thesis. The schemes by Grüne & Kloeden and Jentzen & Kloeden are very general; we therefore focus on RODEs with special structure, i.e., RODEs with Itô noise and RODEs with affine structure, and investigate numerical schemes which exploit these special structures.
The developed numerical schemes are applied to several mathematical models in biology and medicine. In order to assess the performance of the numerical schemes, trajectories of solutions are illustrated. In addition, the errors versus step sizes as well as the computational costs are compared among the newly developed schemes and schemes from the literature.
Within the last twenty years, the contraction method has turned out to be a fruitful approach to distributional convergence of sequences of random variables which obey additive recurrences. It was mainly invented for applications in the real-valued framework; however, in recent years, more complex state spaces such as Hilbert spaces have been under consideration. Based upon the family of Zolotarev metrics, which were introduced in the late seventies, we develop the method in the context of Banach spaces and work it out in detail in the case of continuous resp. cadlag functions on the unit interval. We formulate sufficient conditions, both for the sequence under consideration and for its possible limit satisfying a stochastic fixed-point equation, that allow us to deduce functional limit theorems in applications. As a first application we present a new and remarkably short proof of the classical invariance principle due to Donsker, based on a recursive decomposition. Moreover, we apply the method in the analysis of the complexity of partial match queries in two-dimensional search trees such as quadtrees and 2-d trees. These important data structures have been under heavy investigation since their invention in the seventies. Our results answer problems that were left open in the pioneering work of Flajolet et al. in the eighties and nineties. We expect that the functional contraction method will significantly contribute to solutions of similar problems involving additive recursions in the coming years.
The behaviour of electronic circuits is influenced by ageing effects. Modelling the behaviour of circuits is a standard approach for the design of faster, smaller, more reliable and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model, which is based purely on the behavioural specification of a system. For a given specification, simulation can reveal if a system does not comply with the specification, and thus provide a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify the impact on specification violations, failures and robustness. We study ageing effects in the context of analogue circuits. Here, models must factor in infinitely many circuit states. Ageing effects have a cause and an impact that require models. On both of these ends, the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are not valid in many cases, because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account for modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible by means of a simple structure.
Modeling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and similarly ageing models predominantly focus on digital applications. In the digital domain, the signal levels are either on or off or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing effect models for analogue components often require a more fine-grained characterization of the physical processes. Here, signal levels can take arbitrary values, to begin with. Such fine-grained, physically inspired ageing models do not scale for larger applications and are hard to simulate in reasonable time. To close the gap between physical processes and system level ageing simulation, we propose a data based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, learning algorithms can find a complete model that is consistent with the data set. These ageing effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel model for the BTI effect, an ageing effect that permits relaxation. In the following, a transistor level ageing model for BTI that targets analogue circuits is proposed.
Similarly, we demonstrate how ageing data from analogue transistor level circuit models lift to purely behavioural block models. With this, we are the first to present a data based hierarchical ageing modeling scheme. An ageing simulator for circuits or system level models computes long term transients, solutions of a differential equation. Long term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long term simulation becomes practical. We describe an adaptive two-time simulation algorithm that basically skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through skipped frames. This involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long term transients exploiting the structure of the proposed ageing models. These models permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level. This level can be computed efficiently and also gives rise to a dynamic step control mechanism. Ageing simulation has a wide range of applications. This thesis vastly improves the applicability of ageing simulation for analogue circuits in terms of modelling and efficiency. An ageing effect model that is a part of a circuit component model accounts for parametric drift that is directly related to the operation mode. For example asymmetric load on a comparator or power-stage may lead to offset drift, which is not an empiric effect. Monitor circuits can report such effects during operation, when they become significant. Simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can revert broken components to functional. 
We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable. Efficient ageing simulation allows various operation modes to be factored in and helps refine the selection. Using long-term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties with respect to various use cases. This procedure enables the automatic selection of the most ageing-resilient implementation.
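The equivalent-stress extrapolation underlying the two-time simulation can be sketched for a decay-type ageing law driven by a periodic stress pattern: one period is resolved finely to obtain the average (locally equivalent) stress, which is then used to jump across skipped periods. The model and all numbers below are illustrative.

```python
def two_time_extrapolation(stress, k, T, period, skip):
    """Two-time ageing simulation sketch for the decay-type law
    theta'(t) = k * stress(t): simulate one period finely, compute the
    locally equivalent (average) stress, then extrapolate the ageing
    state across `skip` periods in a single step."""
    theta, t = 0.0, 0.0
    dt = period / 100.0
    while t < T - period:
        # fine pass over one period to get the equivalent stress level
        s_eq = sum(stress(t + i * dt) for i in range(100)) / 100.0
        # extrapolate through the skipped periods using that level
        theta += k * s_eq * period * skip
        t += period * skip
    return theta

# quasi-periodic stress: square wave, on for half of each unit period
stress = lambda t: 1.0 if (t % 1.0) < 0.5 else 0.0
exact = 0.5 * 1e-3 * 1000.0           # average stress 0.5 over T = 1000
fast = two_time_extrapolation(stress, 1e-3, 1000.0, 1.0, 10)
assert abs(fast - exact) / exact < 0.05
```

The speed-up comes from resolving only one period in every ten; for stress patterns that drift slowly, a dynamic step control can adapt `skip` on the fly, as described above.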
We provide a mathematical framework to model continuous time trading in limit order markets of a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (hold limit orders with only finitely many different limit prices and rebalance at most finitely often) can be extended in a suitable way to general continuous time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and any element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton's portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset evolves according to a geometric Brownian motion, a proportional bid-ask spread, and Poisson execution times for the limit orders of the small investor, we show that the optimal strategy consists in using market orders to keep the proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
Random constraint satisfaction problems have been on the agenda of various sciences such as discrete mathematics, computer science, statistical physics and a whole series of additional areas of application since at least the 1990s. The objective is to find a state of a system, for instance an assignment of a set of variables, satisfying a collection of constraints. Understanding the computational hardness as well as the underlying random discrete structures of these problems analytically, and developing efficient algorithms that find optimal solutions, has triggered a huge amount of work on random constraint satisfaction problems up to this day. In this context, this thesis presents three results for two random constraint satisfaction problems. ...
The random split tree introduced by Devroye (1999) is considered. We derive a second-order expansion for the mean of its internal path length and furthermore obtain a limit law by the contraction method. As an assumption we need the splitter to have a Lebesgue density and mass in every neighborhood of 1. We use properly stopped homogeneous Markov chains, for which limit results in total variation distance as well as renewal theory are used. Furthermore, we extend this method to obtain the corresponding results for the Wiener index.
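The quantity studied above can be explored experimentally in the best-known special case of a split tree, the random binary search tree, whose mean internal path length is exactly 2(n+1)H_n − 4n; a small simulation (parameters ours):

```python
import random

def internal_path_length(keys):
    """Insert keys into a binary search tree and return the internal path
    length (sum over all nodes of their depths)."""
    root = None        # nodes are [key, left, right]
    ipl = 0
    for key in keys:
        depth, node, parent, side = 0, root, None, 0
        while node is not None:
            parent = node
            side = 1 if key < node[0] else 2
            node = node[side]
            depth += 1
        if parent is None:
            root = [key, None, None]
        else:
            parent[side] = [key, None, None]
        ipl += depth
    return ipl

rng = random.Random(11)
n, trials = 500, 20
mean = sum(internal_path_length(rng.sample(range(10**6), n))
           for _ in range(trials)) / trials
h = sum(1.0 / k for k in range(1, n + 1))   # harmonic number H_n
expected = 2 * (n + 1) * h - 4 * n
assert abs(mean - expected) < 600
```

The second-order term of this expansion (and its analogue for general splitters, where no closed form is available) is exactly what the renewal-theoretic analysis above delivers.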
Although everyone is familiar with using algorithms on a daily basis, formulating, understanding and analysing them rigorously has been (and will remain) a challenging task for decades. One way of making progress towards their understanding is the formulation of models that portray reality but remain easy to analyse. In this thesis we take such a step by analysing one particular problem, the so-called group testing problem, introduced by R. Dorfman in 1943. We assume a large population within which there is an infected group of individuals. Instead of testing everybody individually, we can test groups (for instance by mixing blood samples). In this thesis we look for the minimum number of tests needed such that we can say something meaningful about the infection status. Furthermore, we consider various versions of this problem to analyse at what point, and why, this problem is hard, easy or impossible to solve.
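Dorfman's original two-stage scheme already shows why pooling saves tests at low prevalence; a minimal simulation (all parameters arbitrary):

```python
import random

def dorfman_tests(status, group_size):
    """Two-stage Dorfman screening: test each pool once; retest every
    member of a positive pool individually. Returns the total number
    of tests used."""
    tests = 0
    for i in range(0, len(status), group_size):
        group = status[i:i + group_size]
        tests += 1                     # one pooled test
        if any(group):
            tests += len(group)        # individual retests
    return tests

rng = random.Random(5)
n, p = 10000, 0.01
status = [rng.random() < p for _ in range(n)]
t = dorfman_tests(status, 10)
# with 1% prevalence and pools of 10, the expected cost per person is
# 1/g + 1 - (1-p)^g ~ 0.196, i.e. far fewer than n tests overall
assert t < 0.3 * n
```

Information-theoretic lower bounds and the phase transitions between the easy, hard and impossible regimes studied in the thesis concern far more refined (non-adaptive and multi-stage) designs than this two-stage baseline.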
We study the price-setting problem of market makers under perfect competition in continuous time. We follow the classic Glosten-Milgrom model, which defines bid and ask prices as the expectation of a true value of the asset given the market maker's partial information, which includes the customers' trading decisions. The true value is modeled as a Markov process that can be observed by the customers with some noise at Poisson times.
We analyze the price-setting problem by solving a non-standard filtering problem with an endogenous filtration that depends on the bid and ask price processes quoted by the market maker. Under some conditions we show existence and uniqueness of the price processes. In a different setting we construct a counterexample to uniqueness. Further, we discuss the behavior of the spread via a convergence result and simulations.
We investigate multivariate Laurent polynomials f \in \C[\mathbf{z}^{\pm 1}] = \C[z_1^{\pm 1},\ldots,z_n^{\pm 1}] with varieties \mathcal{V}(f) restricted to the algebraic torus (\C^*)^n = (\C \setminus \{0\})^n. For such Laurent polynomials f one defines the amoeba \mathcal{A}(f) of f as the image of the variety \mathcal{V}(f) under the \Log-map \Log : (\C^*)^n \to \R^n, (z_1,\ldots,z_n) \mapsto (\log|z_1|, \ldots, \log|z_n|). That is, the amoeba \mathcal{A}(f) is the projection of the variety \mathcal{V}(f) onto its (componentwise logarithmized) absolute values. Amoebas were first defined in 1994 by Gelfand, Kapranov and Zelevinsky. Amoeba theory has been strongly developed since the beginning of the new century. It is related to various mathematical subjects, e.g., complex analysis or real algebraic curves. In particular, amoeba theory can be understood as a natural connection between algebraic and tropical geometry.
In this thesis we investigate the geometry, topology and methods for the approximation of amoebas.
Let \C^A denote the space of all Laurent polynomials with a given, finite support set A \subset \Z^n and coefficients in \C^*. It is well known that, in general, the existence of specific complement components of the amoebas \mathcal{A}(f) for f \in \C^A depends on the choice of coefficients of f. One prominent key problem is to provide bounds on the coefficients in order to guarantee the existence of certain complement components. A second key problem is the question whether the set U_\alpha^A \subseteq \C^A of all polynomials whose amoeba has a complement component of order \alpha \in \conv(A) \cap \Z^n is always connected.
We prove such (upper and lower) bounds for multivariate Laurent polynomials supported on a circuit. If the support set A \subset \Z^n satisfies some additional barycentric condition, we can even give an exact description of the particular sets U_\alpha^A and, especially, prove that they are path-connected.
For the univariate case of polynomials supported on a circuit, i.e., trinomials f = z^{s+t} + p z^t + q (with p,q \in \C^*), we show that a couple of classical questions from the late 19th / early 20th century regarding the connection between the coefficients and the roots of trinomials can be traced back to questions in amoeba theory. This yields nice geometrical and topological counterparts for classical algebraic results. We show for example that a trinomial has a root of a certain, given modulus if and only if the coefficient p is located on a particular hypotrochoid curve. Furthermore, there exist two roots with the same modulus if and only if the coefficient p is located on a particular 1-fan. This local description of the configuration space \C^A yields in particular that all sets U_\alpha^A for \alpha \in \{0,1,\ldots,s+t\} \setminus \{t\} are connected but not simply connected.
We show that for a given lattice polytope P the set of all configuration spaces \C^A of amoebas with \conv(A) = P is a boolean lattice with respect to some order relation \sqsubseteq induced by the set theoretic order relation \subseteq. This boolean lattice turns out to have some nice structural properties and gives in particular an independent motivation for Passare's and Rullgard's conjecture about solidness of amoebas of maximally sparse polynomials. We prove this conjecture for special instances of support sets.
A further key problem in the theory of amoebas is the description of their boundaries. Obviously, every boundary point \mathbf{w} \in \partial \mathcal{A}(f) is the image of a critical point under the \Log-map (where \mathcal{V}(f) is assumed to be non-singular here). Mikhalkin showed that this is equivalent to the existence of a point in the intersection of the variety \mathcal{V}(f) and the fiber \F_{\mathbf{w}} of \mathbf{w} (w.r.t. the \Log-map) that has a (projective) real image under the logarithmic Gauss map. We strengthen this result by showing that a point \mathbf{w} can be contained in the boundary of \mathcal{A}(f) only if every point in the intersection of \mathcal{V}(f) and \F_{\mathbf{w}} has a (projective) real image under the logarithmic Gauss map.
With respect to the approximation of amoebas, one is particularly interested in deciding membership, i.e., whether a given point \mathbf{w} \in \R^n is contained in a given amoeba \mathcal{A}(f). We show that this problem can be traced back to a semidefinite optimization problem (SDP), essentially via the Real Nullstellensatz. This SDP can be implemented and solved with standard software (we use SOSTools and SeDuMi here). As our main theoretical result we show that, from a complexity point of view, our approach is at least as good as Purbhoo's approximation process, the current state of the art.
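The membership problem itself is easy to state concretely. For the standard textbook example f(z1, z2) = z1 + z2 + 1 (not the thesis's SDP machinery) a point w = (w1, w2) lies in \mathcal{A}(f) exactly when the fiber torus {|z1| = e^{w1}, |z2| = e^{w2}} meets the line z1 + z2 + 1 = 0, which by the triangle inequality reduces to a closed-form test:

```python
import math

# Membership test for the amoeba of f(z1, z2) = z1 + z2 + 1 (textbook example,
# not the thesis's SDP approach): with r1 = e^{w1}, r2 = e^{w2}, the fiber
# circle |z1| = r1 realises |{-1 - z1}| ranging over [|r1 - 1|, r1 + 1],
# so w is in the amoeba iff r2 falls in that interval.
def in_amoeba_linear(w1, w2):
    r1, r2 = math.exp(w1), math.exp(w2)
    return abs(r1 - 1.0) <= r2 <= r1 + 1.0

print(in_amoeba_linear(0.0, 0.0))   # the origin lies in the amoeba: True
print(in_amoeba_linear(3.0, 0.0))   # far along the w1-axis: False
```

For general polynomials no such closed form exists, which is what makes the SDP relaxation mentioned above interesting.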
Given x \in \R^n, an integer relation for x is a non-trivial vector m \in \Z^n with inner product \langle m, x \rangle = 0. In this paper we prove the following: Unless every NP language is recognizable in deterministic quasi-polynomial time, i.e., in time O(n^{poly(\log n)}), the \ell_\infty-shortest integer relation for a given vector x \in \Q^n cannot be approximated in polynomial time within a factor of 2^{\log^{0.5-\gamma} n}, where \gamma is an arbitrarily small positive constant. This result is quasi-complementary to positive results derived from lattice basis reduction. A variant of the well-known L^3-algorithm approximates, for a vector x \in \Q^n, the \ell_2-shortest integer relation within a factor of 2^{n/2} in polynomial time. Our proof relies on recent advances in the theory of probabilistically checkable proofs, in particular on a reduction from 2-prover 1-round interactive proof systems. The same inapproximability result holds for finding the \ell_\infty-shortest integer solution of a homogeneous linear system of equations over \Q.
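The definition of an \ell_\infty-shortest integer relation can be illustrated by exhaustive search over a small box (our sketch only; practical algorithms use LLL/PSLQ-type reduction, and the search below is exponential in the dimension, in line with the hardness statement above):

```python
from fractions import Fraction
from itertools import product

# Brute-force search for an l_inf-shortest integer relation of a rational
# vector x: the smallest-max-norm non-trivial m in Z^n with <m, x> = 0.
def shortest_relation(x, bound):
    best = None
    for m in product(range(-bound, bound + 1), repeat=len(x)):
        if any(m) and sum(mi * xi for mi, xi in zip(m, x)) == 0:
            if best is None or max(map(abs, m)) < max(map(abs, best)):
                best = m
    return best

# Example: x = (1, 1/2, 1/3); relations m satisfy 6*m1 + 3*m2 + 2*m3 = 0.
x = [Fraction(1), Fraction(1, 2), Fraction(1, 3)]
print(shortest_relation(x, 3))
```

Exact rational arithmetic via `Fraction` avoids the false zeros that floating-point inner products would produce.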
We show that non-interactive statistically-secret bit commitment cannot be constructed from arbitrary black-box one-to-one trapdoor functions, and thus not from general public-key cryptosystems. By reducing the problems of non-interactive crypto-computing, rerandomizable encryption, non-interactive statistically-sender-private oblivious transfer, and low-communication private information retrieval to such commitment schemes, it follows that none of these primitives is constructible from one-to-one trapdoor functions or public-key encryption in general. Furthermore, our separation sheds some light on statistical zero-knowledge proofs. There is an oracle relative to which one-to-one trapdoor functions and one-way permutations exist, while the class of promise problems with statistical zero-knowledge proofs collapses to P. This indicates that non-trivial problems with statistical zero-knowledge proofs require more than (trapdoor) one-wayness.
The free energy of TAP-solutions for the SK-model of mean field spin glasses can be expressed as a nonlinear functional of local terms: we exploit this feature in order to contrive abstract REM-like models, which we then solve by a classical large deviations treatment. This allows us to identify the origin of the physically unsettling quadratic (in the inverse temperature) correction to the Parisi free energy for the SK-model, and formalizes the true cavity dynamics acting on TAP-space, i.e. on the space of TAP-solutions. From a non-spin glass point of view, this work is the first in a series of refinements that addresses the stability of hierarchical structures in models of evolving populations.
We show that P(n)_*(P(n)) for p = 2 with its geometrically induced structure maps is not a Hopf algebroid, because neither the augmentation \varepsilon nor the coproduct \Delta is multiplicative. As a consequence, the algebra structure of P(n)_*(P(n)) is slightly different from what was previously supposed. We give formulas for \varepsilon(xy) and \Delta(xy) and show that the inversion of the formal group of P(n) is induced by an antimultiplicative involution \Xi : P(n) \to P(n). Some consequences for multiplicative and antimultiplicative automorphisms of K(n) for p = 2 are also discussed.
In this paper we prove asymptotic normality of the total length of external branches in Kingman's coalescent. The proof uses an embedded Markov chain, which can be described as follows: Take an urn with n black balls. Empty it in n steps according to the rule: in each step remove a randomly chosen pair of balls and replace it by one red ball. Finally remove the last remaining ball. Then the numbers U_k, 0 \le k \le n, of red balls after k steps exhibit an unexpected property: (U_0, \ldots, U_n) and (U_n, \ldots, U_0) are equal in distribution.
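The urn rule above translates directly into a simulation. The sketch below (our illustration; variable names are ours) samples one path of the red-ball counts, which could be used to check the distributional symmetry empirically:

```python
import random

# Simulate the embedded urn chain: start with n black balls; in each of n-1
# steps replace a uniformly chosen pair of balls by one red ball; finally
# remove the last remaining ball. Returns the path (U_0, ..., U_n) of red-ball
# counts; the paper shows this path reversed has the same distribution.
def urn_path(n, rng):
    black, red = n, 0
    path = [red]                          # U_0 = 0
    for _ in range(n - 1):
        # draw an unordered pair without replacement; count red balls in it
        pair_red = sum(rng.sample([1] * red + [0] * black, 2))
        red = red - pair_red + 1          # pair replaced by one red ball
        black = black - (2 - pair_red)
        path.append(red)
    path.append(0)                        # U_n = 0: last ball removed
    return path

rng = random.Random(42)
print(urn_path(10, rng))
```

Each step reduces the total number of balls by one, so after n-1 pairing steps a single ball remains and the path has length n+1.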
Optimization of phase and rate parameters in a stochastic model of neuronal firing activity
(2014)
In our brain, information is represented by neurons through the emission of spikes. The rate (number of spikes), the phase (temporal shift of the spikes), and synchronous oscillations (rhythmic discharges of neurons within the same cycle) are discussed as important signal components.
This thesis investigates how rate and phase are combined for optimal detection, and quantifies the contribution of the phase depending on the chosen parameter range.
This is studied with a stochastic spike train model that closely resembles empirical spike trains and incorporates the three signal components mentioned above. The ELO model ("exponential locking to a free oscillator") consists of two process stages: in the background, a global oscillation process generates independent, normally distributed interval segments (oscillation). At the interval boundaries, independent inhomogeneous Poisson processes start (synchrony), with an exponentially decreasing firing rate determined by a stimulus-specific rate and phase.
Besides an analytical determination of the optimal parameters in the case of pure rate or phase coding, the joint coding is analyzed by means of simulation studies.
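A two-stage generator of this ELO type can be sketched in a few lines (parameter names and numeric values below are our illustrative choices, not the thesis's): normally distributed cycle lengths for the background oscillator, and within each cycle an inhomogeneous Poisson process with exponentially decaying rate, simulated by thinning:

```python
import math
import random

# Sketch of a two-stage ELO-type spike train generator: a background
# oscillator produces i.i.d. normal cycle lengths; at each cycle start an
# inhomogeneous Poisson process fires with rate r(t) = rate * exp(-t / tau),
# realised here by thinning candidates from a homogeneous process with
# constant rate bound `rate` (valid since r(t) <= rate for t >= 0).
def elo_spike_train(n_cycles, mu, sigma, rate, tau, rng):
    spikes, t0 = [], 0.0
    for _ in range(n_cycles):
        length = max(rng.gauss(mu, sigma), 0.0)    # one oscillator cycle
        t = 0.0
        while True:
            t += rng.expovariate(rate)             # next candidate spike
            if t >= length:
                break
            if rng.random() < math.exp(-t / tau):  # thinning acceptance
                spikes.append(t0 + t)
        t0 += length
    return spikes, t0

rng = random.Random(1)
spikes, duration = elo_spike_train(20, mu=25.0, sigma=2.0, rate=0.8, tau=5.0, rng=rng)
print(len(spikes), round(duration, 1))
```

The thinning step accepts a candidate at time t with probability r(t)/rate, which yields exactly the desired inhomogeneous Poisson process within each cycle.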