Mathematik
Refine
Year of publication
Document Type
- Article (110)
- Doctoral Thesis (76)
- Preprint (46)
- diplomthesis (39)
- Book (25)
- Report (22)
- Conference Proceeding (18)
- Bachelor Thesis (8)
- Contribution to a Periodical (8)
- Diploma Thesis (8)
Has Fulltext
- yes (372)
Is part of the Bibliography
- no (372)
Keywords
- Kongress (6)
- Kryptologie (5)
- Mathematik (5)
- Stochastik (5)
- Doku Mittelstufe (4)
- Doku Oberstufe (4)
- Online-Publikation (4)
- Statistik (4)
- Finanzmathematik (3)
- LLL-reduction (3)
- Moran model (3)
- coalescent (3)
- computational complexity (3)
- contraction method (3)
- point process (3)
- spike train (3)
- Algebraische Geometrie (2)
- Arithmetische Gruppe (2)
- Biographie (2)
- Brownian motion (2)
- Commitment Scheme (2)
- Frankfurt <Main> / Universität (2)
- Fuchsian groups (2)
- Fächerübergreifender Unterricht (2)
- Geometrie (2)
- Heat kernel (2)
- Hinterlegungsverfahren <Kryptologie> (2)
- Integral Geometry (2)
- Knapsack problem (2)
- Kombinatorische Optimierung (2)
- Krein space (2)
- Laplace operator on graphs (2)
- Lattice basis reduction (2)
- Martingal (2)
- Mathematiker (2)
- Musik (2)
- Oblivious Transfer (2)
- Perception (2)
- Quantum Zeno dynamics (2)
- San Jose (2)
- Semidefinite Programming (2)
- Shortest lattice vector problem (2)
- Stochastischer Prozess (2)
- Subset sum problem (2)
- Tropical geometry (2)
- Tropische Geometrie (2)
- Valuation Theory (2)
- Verzweigungsprozess (2)
- Vision (2)
- W*-dynamical system (2)
- X-Y model (2)
- Yule-Prozess (2)
- ancestral selection graph (2)
- binary search tree (2)
- collective intelligence (2)
- combinatorial optimization (2)
- complexity (2)
- duality (2)
- firing patterns (2)
- fixation probability (2)
- genealogy (2)
- level of difficulty (2)
- quantum spin systems (2)
- return to equilibrium (2)
- segments (2)
- task space (2)
- thought structure (2)
- Λ-coalescent (2)
- A-Discriminant (1)
- ADM1 (1)
- Abelian (1)
- Action potential (1)
- Actions in mathematical learning (1)
- Activity (1)
- Adaptive dynamics (1)
- Algebra (1)
- Algorithmus (1)
- Amoeba (1)
- Anaerobe Fermentation (1)
- Analyse von Algorithmen (1)
- Ancestral selection graph (1)
- Anisotropic Norm (1)
- Approximation (1)
- Approximation algorithm (1)
- Approximationsalgorithmus (1)
- Arbitrage (1)
- Assignment Problem (1)
- Asymptotically Even Nonlinearity (1)
- Ausreißer <Statistik> (1)
- Automorphismengruppe (1)
- Axon (1)
- Banach spaces (1)
- Bayesian Inference (1)
- Berkovich spaces (1)
- Binomialmodell (1)
- Binärsuchbaum (1)
- Black and Scholes Option Price theory (1)
- Black-Scholes (1)
- Blind Signature (1)
- Block Korkin—Zolotarev reduction (1)
- Blockplay (1)
- Bolthausen-Sznitman (1)
- Boolean Lattice (1)
- Bootstrap-Statistik (1)
- Boundary (1)
- Boundary Value Problems (1)
- Branch and Bound (1)
- Branching particle systems (1)
- Branching process approximation (1)
- Breaking knapsack cryptosystems (1)
- Bruhat-Tits-Gebäude (1)
- Burst (1)
- CAT(0)-Räume (1)
- CAT(0)-spaces (1)
- CIR-1 (1)
- Calderón problem (1)
- Cannings model (1)
- Catalan number (1)
- Cauchy-Anfangswertproblem (1)
- Cayley-Graph (1)
- China-Restaurant-Prozess (1)
- Chinese Remainder Theorem (1)
- Chinese restaurant process (1)
- Chinese-restaurant-process (1)
- Circuit (1)
- Closest Vector Problem (1)
- Coamoeba (1)
- Cognitive psychology (1)
- Commitment (1)
- Commitment schemes (1)
- Computational complexity (1)
- Concentration Inequality (1)
- Condensing (1)
- Containment (1)
- Contraction method (1)
- Datenbank (1)
- Datenstruktur (1)
- Degenerate Linear Part (1)
- Dehn (1)
- Derivate (1)
- Dessins d'enfants (1)
- Diagrams and mathematical learning (1)
- Dichte <Stochastik> (1)
- Digital and analogue materials (1)
- Digital trees (1)
- Dimension 2 (1)
- Directional selection (1)
- Dirichlet bound (1)
- Dirichlet random measure (1)
- Dirichletsche L-Reihe; Nullstelle (1)
- Discrete Logarithm (1)
- Diskrete Geometrie (1)
- Diskrete Mathematik (1)
- Diskreter Markov-Prozess (1)
- Diversity in trait space (1)
- Donsker's theorem (1)
- Dopamine (1)
- Doplicher-Haag-Roberts Axiomatik; Algebraische Quantenfeldtheorie; Superauswahlregeln und -sektoren; Quantenstatistik; Zopfgruppenstatistik (1)
- Dormancy (1)
- Dosis-Wirkungs-Modellierung (1)
- Dreiecksgruppe (1)
- Dreiecksgruppen (1)
- Duality (1)
- Early Childhood (1)
- Einbettung <Mathematik> (1)
- Elektronische Unterschrift (1)
- Elementar- und Primarbereich (1)
- Endliche Präsentation (1)
- Endlichkeitseigenschaften (1)
- Energie-Modell (1)
- Error Bound (1)
- Erwartungswert (1)
- Evolutionary branching (1)
- Evolving Yield Curves in the Real-World Measures (1)
- Ewens sampling formula (1)
- Examples (1)
- Extended RMJBN Modell (1)
- FEM-BEM-coupling (1)
- FID model (1)
- FIND algorithm (1)
- Face (1)
- Face recognition (1)
- Factoring (1)
- Familie (1)
- Family (1)
- Feller branching with logistic growth (1)
- Finite element methods (1)
- Finitely many measurements (1)
- Fixation probability (1)
- Fixpunkt (1)
- Fractional Brownian Motion (1)
- Fractional Laplacian (1)
- Frühe Bildung (1)
- Fuchs-Gruppe (1)
- Fuchssche Gruppe ; Modulare Einbettung (1)
- Fuchssche Gruppen (1)
- Functions (1)
- Funktionenkegel (1)
- Funktionenkörper ; Arithmetische Gruppe ; Auflösbare Gruppe ; Endlichkeit (1)
- Galerkin Approximation (1)
- Galois group (1)
- Galois-Gruppe (1)
- Game Tree (1)
- Gaussian Random Field (1)
- Gaussian process (1)
- Gelfand-Shilov space (1)
- Gemischte Volumen (1)
- Genealogical construction (1)
- Genealogische Konstruktion (1)
- Genetischer Fingerabdruck (1)
- Genus One (1)
- Geometrische Gruppentheorie (1)
- Geometry (1)
- Gespräch (1)
- Gestaenge (1)
- Girsanov transform (1)
- Gitter <Mathematik> ; Basis <Mathematik> ; Reduktion ; Algorithmus ; Laufzeit ; L-unendlich-Norm ; Rucksackproblem ; Kryptosystem (1)
- Gitter <Mathematik> ; Basis <Mathematik> ; Reduktion ; Gauß-Algorithmus (1)
- Gram-Hadamard inequalities (1)
- Graphen (1)
- Grenzwertsatz (1)
- Griffiths–Engen–McCloskey distribution (1)
- Group dynamics (1)
- Große Abweichung (1)
- Großinvestor (1)
- Gruppendynamiken (1)
- Gruppentheorie (1)
- Hadamard's Three-Lines Theorem (1)
- Halbeinfache algebraische Gruppe (1)
- Handelman (1)
- Handlung (1)
- Harmoniebox (1)
- Heisenberg algebra (1)
- Hidden Markov models (1)
- Hintertür <Informatik> (1)
- Hodge bundle (1)
- Holzklötzchen (1)
- Hopf algebroids (1)
- Householder reflection (1)
- Hyperfunktion ; Asymptotische Entwicklung (1)
- Hypotrochoid (1)
- Identification (1)
- Immigration (1)
- Index at Infinity (1)
- Infrared singularity (1)
- Integer relations (1)
- Integraldarstellung (1)
- Interaction (1)
- Internet (1)
- Invariante (1)
- Inverse problems (1)
- Iteration (1)
- Jahr der Mathematik (1)
- Kettenbruchentwicklung ; Dimension n ; Diophantische Approximation (1)
- Kieferorthopädie (1)
- Klassifizierender Raum (1)
- Klebsiella pneumoniae (1)
- Knotenabstand (1)
- Knotentiefe (1)
- Koaleszent (1)
- Kochen-Specker theorem (1)
- Kollektivintelligenz (1)
- Kombinatorische Gruppen (1)
- Konforme Feldtheorie (1)
- Konstruktiver Beweis (1)
- Kontaktprozess (1)
- Kontraktionsmethode (1)
- Konzentrationsungleichung (1)
- Korkin—Zolotarev reduction (1)
- Kreuzkorrelation (1)
- Kryptosystem (1)
- Kullback-Leibler Informational Divergence (1)
- L^p bounds (1)
- L^p means (1)
- Label cover (1)
- Langzeitverhalten (1)
- Laplace-Differentialgleichung (1)
- Large Deviation (1)
- Lattice Reduction (1)
- Leerverkauf (1)
- Lernen (1)
- Linear Filtering (1)
- Linear Preferential Attachment Trees (1)
- Linear-Implicit Scheme (1)
- Linkages (1)
- Loewner monotonicity and convexity (1)
- Logarithmic Laplacian (1)
- Long- Range Dependence (1)
- Long-Range Dependence (1)
- Long-time behaviour (1)
- Longitudinal Study (1)
- Lotka-Volterra system (1)
- Lovász Local Lemma (1)
- Low density subset sum algorithm (1)
- MINT-Bildung (1)
- Machine Learning (1)
- Malliavin calculus (1)
- Mallows model (1)
- Markov chain Monte Carlo Method (1)
- Markov chain imbedding technique (1)
- Markov model (1)
- Markov-Kette (1)
- Mathematical Giftedness (1)
- Mathematical Reasoning (1)
- Mathematical modelling (1)
- Mathematics Learning (1)
- Mathematische Bildung (1)
- Mathematische Modellierung (1)
- Max (1)
- McEliece (1)
- Mean Anisotropy (1)
- Message authentication (1)
- Methanogenese (1)
- Mixed Volumes (1)
- Modellierung (1)
- Modular Multiplication (1)
- Mooney faces (1)
- Morava K-theory (1)
- Mouse (1)
- Multi-Harmonie-Ansatz (1)
- Multiple lineare Regression (1)
- Multityp-Verzweigungsprozess mit Immigration (1)
- Multitype Branching with Immigration (1)
- NP-complete problems (1)
- NP-hard (1)
- NP-hardness (1)
- Nash-Gleichgewicht (1)
- Nelson-Siegel (1)
- Neural encoding (1)
- Neurophysiology (1)
- Neuroscience (1)
- Neurowissenschaft (1)
- Newton–Okounkov bodies (1)
- Non-Malleability (1)
- Noticeable Probability (1)
- Optimal Mean-Square Filter (1)
- Oracle Query (1)
- Parabolic SPDE (1)
- Parisi conjecture (1)
- Participation (1)
- Partizipation (1)
- Patientenbewertung (1)
- Pause (1)
- Permutation (1)
- Permutationsgruppen (1)
- Pfadeigenschaften (1)
- Phragmén-Lindelöf principle (1)
- Piecewise-constant coefficient (1)
- Poisson Process (1)
- Poisson boundary (1)
- Poisson-Prozess (1)
- Polyedrische Kombinatorik (1)
- Polymorphic evolution sequence (1)
- Polynomial Optimization (1)
- Pontrjagin space (1)
- Populationsdynamiken (1)
- Portfolios (1)
- Positivstellensatz (1)
- Potenzialtheorie (1)
- Prag <1999> (1)
- Preferential Attachment-Modelle (1)
- Private Information Retrieval (1)
- Probabilistic analysis of algorithms (1)
- Probabilistically checkable proofs (1)
- Probabilistische Analyse von Algorithmen (1)
- Probability distribution (1)
- Probability of fixation (1)
- Professionalisierung (1)
- Profil Likelihood (1)
- Projektionen (1)
- Public Key Cryptosystem (1)
- Public Parameter (1)
- Punktprozess (1)
- Pólya urn (1)
- Quadratic Residue (1)
- Quantenfeldtheorie ; Konforme Feldtheorie ; Algebraische Methode (1)
- Quantum Zeno Effect (1)
- Quantum Zeno effect (1)
- Quasi-Automorphismen (1)
- Quaternionenalgebra (1)
- Quickselect (1)
- RSA-Verschlüsselung (1)
- Radix sort (1)
- Random Oracle (1)
- Random Split Trees (1)
- Random String (1)
- Random environment (1)
- Random variables (1)
- Randomisieren (1)
- Ray-Knight representation (1)
- Reaction time (1)
- Reale vs. risikoneutrale Welt in der Finanzmathematik (1)
- Rechenzentrum (1)
- Rekursiver Algorithmus (1)
- Relaxation (1)
- Representation Problem (1)
- Research article (1)
- Riemann surfaces (1)
- Riemannsche Fläche (1)
- Riemannsche Flächen (1)
- Ringtheorie (1)
- Risikobewertung (1)
- Risikomanagement (1)
- Robustheit (1)
- Rückkopplungseffekt (1)
- S-arithmetic groups (1)
- SLLL-reduction (1)
- Sackgassen (1)
- San Francisco (1)
- Santa Barbara (1)
- Schizophrenia (1)
- Schwarz triangle functions (1)
- Schwinger model (1)
- Security (1)
- Security Parameter (1)
- Semidefinite Optimierung (1)
- Semidefinite Optimization (1)
- Semiotics according to C. S. Peirce (1)
- Sensory perception (1)
- Sensory processing (1)
- Sigma-Invariante (1)
- Sigma-invariant (1)
- Signalverarbeitung (1)
- Signature (1)
- Small Worlds (1)
- Small order expansion (1)
- Spectrahedra (1)
- Spiel (1)
- Spielbaum (1)
- Spielbaum-Suchverfahren (1)
- Stable reduction algorithm (1)
- State dependent branching rate (1)
- Stationarity (1)
- Stochastic Analysis of Square Zero Variation Processes (1)
- Stonesches Spektrum (1)
- Striatum (1)
- Strong Taylor Scheme (1)
- Stummel, Friedrich (1)
- Suchbaum (1)
- Suchoperation (1)
- Sudoku (1)
- Sum of Squares (1)
- Support (1)
- Symmetrie (1)
- Symmetrischer Raum (1)
- Symmetry (1)
- Sympatric speciation (1)
- Tail Bound (1)
- Tailschranke (1)
- Talk (1)
- Thorne Kishino Felsenstein model (1)
- Topic Model (1)
- Trapdoor (1)
- Trinomial (1)
- Tropical Geometry (1)
- Tropical Grassmannians (1)
- Tropical bases (1)
- Tropical varieties (1)
- Tropische Basen (1)
- Trotter's product formula (1)
- Turkish immigrants (1)
- Typ-In-Algebra (1)
- Typology (1)
- Türkisch (1)
- Uniform regularity (1)
- Uniform resource locators (1)
- Unterstützung (1)
- Valuation on functions (1)
- Varianz (1)
- Vertexoperator (1)
- Verzweigende Teilchensysteme (1)
- Virasoro-Algebra (1)
- Wahrscheinlichkeit (1)
- Wahrscheinlichkeitsverteilung (1)
- Wiener Index (1)
- Wiener index (1)
- Wiener-Index (1)
- Yule process (1)
- Yule-process (1)
- Zinsstrukturmodelle (1)
- Zinsänderungsrisiko (1)
- Zolotarev metric (1)
- Zolotarev-Metrik (1)
- Zopfgruppe ; Lineare Darstellung ; Kettengruppe ; Homologiegruppe ; Automorphismengruppe ; Kettenkomplex (1)
- Zufall (1)
- Zufallsgraph (1)
- Zufällige Umgebung (1)
- Zustandsabhängige Verzweigungsrate (1)
- Zweiphasen-Biogasreaktor (1)
- Zweistufen-Biogasreaktor (1)
- abelian differentials (1)
- abstract potential theory (1)
- algebraic curves (1)
- algebraic values (1)
- alpha-stable branching (1)
- ampleness (1)
- analysis of algorithms (1)
- anti-Zeno effect (1)
- argumentation (1)
- arithmetic ball quotients (1)
- arithmetic group (1)
- assignment problem (1)
- augmented and restricted base loci (1)
- autocorrelograms (1)
- bid-ask spread (1)
- bordism theory (1)
- branching processes (1)
- branching random walk in random medium (1)
- buildings (1)
- cancer cell dormancy (1)
- canonical divisors (1)
- catastrophe modeling (1)
- central limit theorem (1)
- chosen ciphertext attack (1)
- clique problem (1)
- colorability (1)
- colored graphs (1)
- compact Riemann surfaces (1)
- complex multiplication (1)
- composition (1)
- computational geometry (1)
- concurrent composition (1)
- condensing (1)
- confirmatory factor analysis (1)
- consensus (1)
- contact process (1)
- continued fraction algorithm (1)
- controlled homotopy (1)
- convexity (1)
- convolution quadrature (1)
- cooperative systems (1)
- cross correlation (1)
- cryptography (1)
- cycle structure of permutations (1)
- dead ends (1)
- degenerate semigroup (1)
- delay equation (1)
- depth of a node (1)
- dessins d’enfants (1)
- difference sets (1)
- digital search tree (1)
- digital tools (1)
- discrete dynamical system (1)
- discrete logarithm (1)
- discrete logarithm (DL) (1)
- diskrete Mathematik (1)
- dose-response modelling (1)
- doubly stochastic point process (1)
- eigenvalue (1)
- elastodynamic wave equation (1)
- emergence (1)
- endliche metrische Räume (1)
- error bounds (1)
- exponentiation (1)
- external branch (1)
- face inversion (1)
- face perception (1)
- fake projective planes (1)
- families of hash functions (1)
- feedback effect (1)
- finite resolution (1)
- finiteness-properties (1)
- flat surfaces (1)
- floating norms (1)
- floating point arithmetic (1)
- floating point errors (1)
- foliated Schwarz symmetry (1)
- forming a group (1)
- fractional Brownian motion (1)
- fractions of exponentiation (1)
- frühkindliche Erziehung (1)
- fuchsian group (1)
- functional limit theorem (1)
- functional limit theorems (1)
- fächerübergreifendes Lernen (1)
- generic algorithm (1)
- generic algorithms (1)
- generic complexity (1)
- generic group model (1)
- geometry (1)
- graph coloring (1)
- graph isomorphism (1)
- h-transform (1)
- hard bit (1)
- hardcore subsets (1)
- harmonic function (1)
- heavy tails (1)
- hidden Markov model (1)
- hierarchical mean-field limit (1)
- highly regular nearby points (1)
- hyperbolische Geometrie (1)
- hypergeometric functions (1)
- hypervariable region (1)
- höhere Momente (1)
- incremental schemes (1)
- indefinite inner product space (1)
- individual-based models (1)
- inner product (1)
- integer relation (1)
- integer vector (1)
- interacting particle systems (1)
- interdisziplinäre Lehre (1)
- internal diffusion limited aggregation (1)
- internal path length (1)
- inverse coefficient problem (1)
- iterated subsegments (1)
- key comparisons (1)
- kinetic fingerprint (1)
- knapsack cryptosystems (1)
- kontrollierte Homotopie (1)
- large deviations (1)
- large trader (1)
- latent variance (1)
- lattice basis reduction (1)
- lattices (1)
- leapfrog (1)
- length defect (1)
- limit order markets (1)
- local LLL-reduction (1)
- local LLL-reduction (1)
- local coordinates (1)
- local randomness (1)
- local time (1)
- local time drift (1)
- logarithmic geometry (1)
- logical networks (1)
- lookdown construction (1)
- lower bounds (1)
- manifold and geodesic (1)
- market making (1)
- mathematical modeling (1)
- mathematical modelling (1)
- mathematics (1)
- measurement (1)
- mehrdimensionale Ausreißererkennung (1)
- message-passing algorithm (1)
- modelling (1)
- modular automorphism group (1)
- modular group (1)
- moduli spaces (1)
- multi-agents system (1)
- multi-drug treatment (1)
- multiharmony (1)
- multilevel branching (1)
- music (1)
- mutation parameter estimation (1)
- neuronal code (1)
- neuronaler Kode (1)
- nichtlineare stochastische Integration (1)
- non-archimedean geometry (1)
- non-autonomous dynamical systems (1)
- non-malleability (1)
- noncommutative ring spectra (1)
- nondeterministic Turing machines (1)
- nonlinear stochastic integration (1)
- numerical experiments (1)
- observable Funktion (1)
- one-more decryption attack (1)
- one-way function (1)
- one-way functions (1)
- operator algebra (1)
- optimal transport (1)
- pair HMM (1)
- parameter dependent semimartingales (1)
- parameterabhängige Semimartingale (1)
- partial match queries (1)
- path properties (1)
- perceptual closure (1)
- permutation groups (1)
- phage (1)
- phage therapy (1)
- phase coding (1)
- phase transitions (1)
- platonischer Körper (1)
- poisson process (1)
- polynomial random number generator (1)
- population dynamics (1)
- portfolio optimization (1)
- positivity of line bundles (1)
- preferential attachment (1)
- preferential attachment models (1)
- probabilistic analysis of algorithms (1)
- probability (1)
- probability metric (1)
- professional development (1)
- profile likelihood (1)
- projections (1)
- projective planes (1)
- q-binomial theorem (1)
- quantum field theory (1)
- quasi-automorphisms (1)
- quaternion algebra (1)
- quincunx (1)
- random assignment problem (1)
- random environment (1)
- random function generator (1)
- random graphs (1)
- random measures (1)
- random media (1)
- random metric (1)
- random move (1)
- random number generator (1)
- random oracle model (1)
- random partition (1)
- random recursive tree (1)
- random recursive tree (1)
- random trees (1)
- random walks (1)
- raum-zeitliche Muster (1)
- reactant-catalyst systems (1)
- recursive distributional equation (1)
- reguläre Parkettierung (1)
- resistance (1)
- resistance mutation (1)
- reversibility (1)
- riemann surfaces (1)
- risk assessment (1)
- risk theory (1)
- rotating plane method (1)
- rough paths theory (1)
- satisfiability (1)
- scaling (1)
- search operation (1)
- searchtrees (1)
- secure bit (1)
- security analysis of protocols (1)
- security of data (1)
- self-organizing groups (1)
- self-organizing groups; population dynamics; collective intelligence; forming groups; metric on finite sets (1)
- semidefinite optimization (1)
- sequence alignment (1)
- set-valued pullback attractors (1)
- shadow price (1)
- short integer relation (1)
- shortest lattice vector (1)
- signature size (1)
- signed ElGamal encryption (1)
- simultaneous diophantine approximations (1)
- simultaneous security of bits (1)
- single block replacement (1)
- small worlds (1)
- spatio-temporal patterns (1)
- split tree (1)
- statistic analysis (1)
- statistical alignment (1)
- statistische Analyse (1)
- statistischer Test (1)
- stoch. Analyse von Algorithmen (1)
- stochastic filtering (1)
- stochastic modeling (1)
- stochastic population dynamics (1)
- stochastische Prozesse (1)
- strong transience (1)
- subgroup growth (1)
- subset sum problems (1)
- substitution attacks (1)
- sum of squared factor loadings (1)
- switching systems (1)
- synergistic interaction (1)
- therapy evasion (1)
- topological entropy (1)
- trading strategies (1)
- transcendence (1)
- transversal learning (1)
- treatment protocol design (1)
- treatment success (1)
- triangle group (1)
- triangle groups (1)
- tropical geometry (1)
- tropical universal Jacobian (1)
- tropicalization (1)
- universal compactified Jacobian (1)
- urn model (1)
- von Neumann algebra (1)
- von Neumann algebras (1)
- von Neumann-Algebra (1)
- weak convergence (1)
- zufälliger Algorithmus (1)
- zufälliger rekursiver Baum (1)
- zufälliges Assignment Problem (1)
- Λ-coalescent (1)
- σ-field (1)
Institute
- Mathematik (372)
We present an overview of the mathematics underlying the quantum Zeno effect. Classical functional-analytic results are put into perspective and compared with more recent ones. This yields some new insights into the mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated with the question of the existence of the Zeno dynamics. On the grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS classification: 03.65.Xp, 03.65Db, 05.30.-d, 02.30.T . See the corresponding presentations: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics"
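For orientation only (a schematic statement, not quoted from the paper): the Zeno limit studied by Misra and Sudarshan asserts that, under suitable conditions, repeated projective measurement freezes the evolution in the range of the projection P and generates a reduced dynamics with Hamiltonian PHP,

\[
  \lim_{n \to \infty} \left( P \, e^{-\mathrm{i} H t / n} \, P \right)^{n} \;=\; e^{-\mathrm{i}\,(P H P)\, t}\, P ,
\]

where making the domain and convergence assumptions precise is exactly the mathematical issue discussed in the abstract above.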
Review of: George G. Szpiro: Mathematik für Sonntagmorgen: 50 Geschichten aus Mathematik und Wissenschaft, NZZ Verlag, Zürich 2006, ISBN 978-3-03823-353-4; 240 pages, 26 Euro/38 CHF. George G. Szpiro: Mathematik für Sonntagnachmittag: Weitere 50 Geschichten aus Mathematik und Wissenschaft, NZZ Verlag, Zürich 2006, ISBN 978-3-03823-225-4; 236 pages, 26 Euro/38 CHF
Using limit linear series on chains of curves, we show that closures of certain Brill-Noether loci contain a product of pointed Brill-Noether loci of small codimension. As a result, we obtain new non-containments of Brill-Noether loci, in particular that dimensionally expected non-containments hold for expected maximal Brill-Noether loci. Using these degenerations, we also give a new proof that Brill-Noether loci with expected codimension −ρ≤⌈g/2⌉ have a component of the expected dimension. Additionally, we obtain new non-containments of Brill-Noether loci by considering the locus of the source curves of unramified double covers.
The sensitivity of the output of a linear operator to its input can be quantified in various ways. In Control Theory, the input is usually interpreted as disturbance and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite power or directionally generic inputs whose anisotropy is bounded above by a ≥ 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on the multidimensional integer lattice to yield the mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation invariant operators over such fields.
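In symbols, the a-anisotropic norm described above can be written schematically (with $\mathbf{A}(w)$ denoting the anisotropy of the input $w$ and the gain measured in the root-mean-square sense) as

\[
  \| F \|_{a} \;=\; \sup \left\{ \frac{\| F w \|_{\mathrm{rms}}}{\| w \|_{\mathrm{rms}}} \;:\; \mathbf{A}(w) \le a \right\}, \qquad a \ge 0 ,
\]

so that $a = 0$ recovers the usual gain against white noise, while letting $a \to \infty$ approaches the worst-case gain.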
We show that the metrisability of an oriented projective surface is equivalent to the existence of pseudo-holomorphic curves. A projective structure p and a volume form σ on an oriented surface M equip the total space of a certain disk bundle Z → M with a pair $(J_p, J_{p,\sigma})$ of almost complex structures. A conformal structure on M corresponds to a section of Z → M, and p is metrisable by the metric g if and only if $[g] : M \to Z$ is a pseudo-holomorphic curve with respect to $J_p$ and $J_{p,dA_g}$.
Mixed volumes, mixed Ehrhart theory and applications to tropical geometry and linkage configurations
(2009)
The aim of this thesis is the discussion of mixed volumes, their interplay with algebraic geometry, discrete geometry and tropical geometry, and their use in applications such as linkage configuration problems. Specifically, we present new technical tools for mixed volume computation, a novel approach to Ehrhart theory that links mixed volumes with counting integer points in Minkowski sums, and new expressions in terms of mixed volumes for combinatorial quantities in tropical geometry; furthermore, we employ mixed volume techniques to obtain bounds in certain graph embedding problems.
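As standard background (not a result of the thesis): for convex bodies $K_1, \ldots, K_m \subset \mathbb{R}^n$, the mixed volumes $V(K_{i_1}, \ldots, K_{i_n})$ are the coefficients in the polynomial expansion of the volume of a Minkowski combination,

\[
  \mathrm{vol}_n\!\left( \lambda_1 K_1 + \cdots + \lambda_m K_m \right)
  \;=\; \sum_{i_1, \ldots, i_n = 1}^{m} \lambda_{i_1} \cdots \lambda_{i_n} \, V\!\left( K_{i_1}, \ldots, K_{i_n} \right),
  \qquad \lambda_1, \ldots, \lambda_m \ge 0 ,
\]

with $V$ symmetric in its arguments and $V(K, \ldots, K) = \mathrm{vol}_n(K)$.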
Anaerobic fermentation describes the degradation of organic material in the absence of oxygen and consists of four process phases (hydrolysis, acidogenesis, acetogenesis and methanogenesis). In this work, the distribution of these four process phases over the two stages of a two-stage, two-phase biogas reactor could be determined precisely. This distribution is of crucial importance for future work, since it determines exactly which substances have to be taken into account in the measurements and in the modelling.
In 2002, the IWA task group published the ADM1 model, which accounts for all four process phases of anaerobic fermentation. In the present work, a spatially resolved model for anaerobic fermentation is developed in which the ADM1 model is coupled with a flow model. Subsequently, a reduced simulation model for acetoclastic methanogenesis in a two-stage, two-phase biogas reactor is constructed. Using measurement data, it is shown that the degradation of acetic acid to methane within the reactor is reproduced well by the simulation model.
Finally, the validated model is used to derive rules for an optimal control of the reactor, and the local methane production is used to determine the effectiveness of the reactor. The information obtained can be used to optimize the biogas reactor.
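As a rough illustration of the kind of reduced description mentioned above (a toy sketch only, not the model of the thesis), acetoclastic methanogenesis is often written as Monod-type kinetics for acetate degradation; every parameter value below is invented for the example.

```python
import numpy as np

# Toy Monod-type model of acetoclastic methanogenesis (illustrative only):
# S = acetate [g/l], X = methanogenic biomass [g/l], CH4 = cumulative methane.
# All parameter values are made up for this sketch.
mu_max, K_S, Y, k_d, y_ch4 = 0.4, 0.15, 0.05, 0.02, 0.35

def rhs(S, X):
    growth = mu_max * S / (K_S + S) * X   # Monod growth rate of the biomass
    dS = -growth / Y                      # acetate consumption
    dX = growth - k_d * X                 # biomass growth minus decay
    dCH4 = y_ch4 * (-dS)                  # methane yield from degraded acetate
    return dS, dX, dCH4

# simple explicit Euler integration over 30 days
dt, T = 0.01, 30.0
S, X, CH4 = 2.0, 0.2, 0.0
for _ in range(int(T / dt)):
    dS, dX, dCH4 = rhs(S, X)
    S, X, CH4 = max(S + dt * dS, 0.0), X + dt * dX, CH4 + dt * dCH4

print(f"acetate left: {S:.3f} g/l, biomass: {X:.3f} g/l, methane: {CH4:.3f}")
```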
We deal with the shape reconstruction of inclusions in elastic bodies. For solving this inverse problem in practice, data fitting functionals are used. Those work better than the rigorous monotonicity methods from Eberle and Harrach (Inverse Probl 37(4):045006, 2021), but have no rigorously proven convergence theory. Therefore we show how the monotonicity methods can be converted into a regularization method for a data-fitting functional without losing the convergence properties of the monotonicity methods. This is a great advantage and a significant improvement over standard regularization techniques. In more detail, we introduce constraints on the minimization problem of the residual based on the monotonicity methods and prove the existence and uniqueness of a minimizer as well as the convergence of the method for noisy data. In addition, we compare numerical reconstructions of inclusions based on the monotonicity-based regularization with a standard approach (one-step linearization with Tikhonov-like regularization), which also shows the robustness of our method regarding noise in practice.
In 1957, Craig Mooney published a set of human face stimuli to study perceptual closure: the formation of a coherent percept on the basis of minimal visual information. Images of this type, now known as “Mooney faces”, are widely used in cognitive psychology and neuroscience because they offer a means of inducing variable perception with constant visuo-spatial characteristics (they are often not perceived as faces if viewed upside down). Mooney’s original set of 40 stimuli has been employed in several studies. However, it is often necessary to use a much larger stimulus set. We created a new set of over 500 Mooney faces and tested them on a cohort of human observers. We present the results of our tests here, and make the stimuli freely available via the internet. Our test results can be used to select subsets of the stimuli that are most suited for a given experimental purpose.
Muller's ratchet, in its prototype version, models a haploid, asexual population whose size N is constant over the generations. Slightly deleterious mutations are acquired along the lineages at a constant rate, and individuals carrying fewer mutations have a selective advantage. The classical variant considers fitness-proportional selection, but other fitness schemes are conceivable as well. Inspired by the work of Etheridge et al. ([EPW09]), we propose a parameter scaling which fits well with the "near-critical" regime that was the focus of [EPW09] (and in which the mutation-selection ratio diverges logarithmically as N→∞). Using a Moran model, we investigate the "rule of thumb" given in [EPW09] for the click rate of the "classical ratchet" by putting it into the context of new results on the long-time evolution of the size of the best class of the ratchet with (binary) tournament selection, which (unlike that of the classical ratchet) follows an autonomous dynamics up to the time of its extinction. In [GSW23] it was discovered that the tournament ratchet has a hierarchy of dual processes which can be constructed on top of an ancestral selection graph with a Poisson decoration. For a regime in which the mutation-selection ratio remains bounded away from 1, this was used in [GSW23] to reveal the asymptotics of the click rates as well as that of the type frequency profile between clicks. We describe how these ideas can be extended to the near-critical regime in which the mutation-selection ratio of the tournament ratchet converges to 1 as N→∞.
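A minimal simulation sketch of the classical, fitness-proportional ratchet in a Moran model may help fix ideas; it does not use the parameter scaling of the paper, and the numbers below are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Classical Muller's ratchet in a Moran model (toy sketch):
# N individuals each carry k deleterious mutations; fitness is (1 - s)^k.
# In every step one uniformly chosen individual dies and is replaced by the
# offspring of a parent drawn with probability proportional to fitness;
# the offspring acquires a Poisson(lambda/N) number of new mutations.
N, s, lam, steps = 200, 0.05, 0.5, 200_000
k = np.zeros(N, dtype=int)
clicks, best = 0, 0                     # ratchet clicks, current best class

for _ in range(steps):
    w = (1.0 - s) ** k                  # fitness-proportional selection weights
    parent = rng.choice(N, p=w / w.sum())
    victim = rng.integers(N)
    k[victim] = k[parent] + rng.poisson(lam / N)
    if k.min() > best:                  # the best class was lost: a "click"
        clicks += k.min() - best
        best = k.min()

print(f"clicks of the ratchet: {clicks}, current best class: {best}")
```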
Since its development in the publications of Brace, Gatarek and Musiela (1997) on the one hand and, independently, of Miltersen, Sandmann and Sondermann (1997) on the other, the LIBOR market model (LMM) has become the most widely accepted instrument for modelling the term structure of interest rates and for the associated pricing of the relevant financial derivatives. LIBOR stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London; maturities of three or six months are customary in connection with the LMM. Research on improving this model has grown in recent years: by reducing the error in the calibration to the daily observed prices of interest rate options such as caps and swaptions, one subsequently also obtains more accurate valuations of other, more exotic derivatives. The underlying central idea of the LMM is to treat the forward rates directly as the primary (vector-valued) process of several LIBOR rates and to model them simultaneously, instead of merely deriving them from an overarching, infinite-dimensional forward rate process as in the earlier Heath-Jarrow-Morton model. The most convincing argument for this discretization is that the LIBOR rates are directly observable in the market and that their volatilities can be related in a natural way to already liquidly traded products, namely those caps and swaptions. Nevertheless, the model suffers from a serious deficiency in that it does not reproduce any curvature of the volatility surface with respect to options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the distribution show up clearly in missing heavy tails; smile and skew effects are observable. In the classical LIBOR market model only an affine structure is generated in the strike dimension, which can at best serve as an approximation to the desired surface. The observed distortions naturally lead to an inaccurate representation of reality and to an erroneous reproduction of prices in regions somewhat away from at-the-money. Such unwanted dissonances in profit and loss figures led, for example, in 1998 to severe losses in the interest rate derivatives portfolio of what is today the Royal Bank of Scotland. ...
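To make the discrete forward-rate idea concrete, here is a minimal one-factor, lognormal LIBOR market model simulation using a log-Euler step under the spot measure; the flat volatilities, the single driving Brownian motion and all numerical values are simplifying assumptions made only for this sketch and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-factor lognormal LIBOR market model, log-Euler simulation (sketch).
# L[j] is the forward LIBOR rate for the accrual period starting at T_j;
# under the spot measure the drift of L[j] involves the rates up to index j.
delta = 0.5                                         # accrual period (6 months)
L = np.array([0.03, 0.032, 0.034, 0.036, 0.038])    # initial forward rates
sigma = np.full(L.shape, 0.20)                      # flat lognormal volatilities
dt, n_steps = 1.0 / 252.0, 126                      # half a year of daily steps

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    drift_terms = delta * L * sigma / (1.0 + delta * L)
    mu = sigma * np.cumsum(drift_terms)             # spot-measure drift of log L[j]
    L = L * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW)

print("simulated forward LIBOR rates after six months:", np.round(L, 4))
```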
This thesis makes clear that the multilevel Monte Carlo (MLMC) method constitutes a significant improvement over the plain Monte Carlo method: it reduces the computational cost and reaches the desired accuracy in almost all cases. The extension by Richardson extrapolation always reduced the computational cost, or at least did not increase it, even though the weak convergence order was not doubled in every case.
In the case of option sensitivities, an application of the MLMC algorithm is problematic. The functional applied to the stock price must not have a discontinuity; in the case of the gamma it must even be continuously differentiable. Applying the MLMC method therefore makes sense above all when the sensitivity can be rewritten as a function of the stock price, so that only the path of the stock has to be simulated. Only if this is not possible would it be reasonable to use the method presented in Chapter 6.5 for the example of the delta, in which a second path is simulated for the delta.
Further improvements could lie in the choice of other variance-reducing methods or in the use of discretization schemes of higher strong order than the Euler scheme (cf. [7], use of the Milstein scheme). In that case a computational cost of order O(ε⁻²) is theoretically possible, since the number of samples to be generated no longer increases with growing L. The level L could then be chosen so large that the bias vanishes and the MSE depends exclusively on the variance of the estimator. To bring this variance down to order O(ε²), it is necessary to generate O(ε⁻²) paths (see equation (3.6)), which accounts for the computational cost.
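For concreteness, here is a minimal multilevel Monte Carlo sketch for a European call under geometric Brownian motion with Euler discretization; the coupling of coarse and fine paths follows the standard Giles-style construction, and the fixed per-level sample sizes and all parameters are invented for the illustration rather than taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal multilevel Monte Carlo estimator for E[payoff(S_T)] under geometric
# Brownian motion, using Euler discretization with M^l steps on level l and
# coupled coarse/fine paths (illustrative parameters only).
S0, r, sig, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
M, L_max = 4, 4                                      # refinement factor, finest level
payoff = lambda S: np.exp(-r * T) * np.maximum(S - K, 0.0)

def level_estimator(l, n):
    """Mean of payoff(fine path) - payoff(coarse path) on level l."""
    nf = M ** l
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    Sf = np.full(n, S0)
    for i in range(nf):
        Sf = Sf * (1 + r * dt + sig * dW[:, i])      # fine Euler step
    if l == 0:
        return payoff(Sf).mean()
    Sc = np.full(n, S0)
    dWc = dW.reshape(n, nf // M, M).sum(axis=2)      # coarse Brownian increments
    for i in range(nf // M):
        Sc = Sc * (1 + r * (M * dt) + sig * dWc[:, i])
    return (payoff(Sf) - payoff(Sc)).mean()

# crude sample allocation: fixed pilot sizes decreasing with the level
n_l = [200_000, 50_000, 12_000, 3_000, 800]
estimate = sum(level_estimator(l, n_l[l]) for l in range(L_max + 1))
print(f"MLMC price estimate: {estimate:.3f}")
```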
We present a practical algorithm that, given an LLL-reduced lattice basis of dimension n, runs in time O(n^3 (k/6)^{k/4} + n^4) and approximates the length of the shortest non-zero lattice vector to within a factor (k/6)^{n/(2k)}. This result is based on reasonable heuristics. Compared to previous practical algorithms, the new method reduces the proven approximation factor achievable in a given time to less than its fourth root. We also present a sieve algorithm inspired by Ajtai, Kumar and Sivakumar [AKS01].
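The algorithm of the abstract works with high-dimensional blocks of a reduced basis; purely for intuition, the sketch below implements the much simpler classical Lagrange-Gauss reduction of a two-dimensional lattice basis (a different, elementary algorithm, not the one of the paper), which already exhibits the basic size-reduce-and-swap pattern of lattice basis reduction.

```python
import numpy as np

def lagrange_gauss_reduce(b1, b2):
    """Classical two-dimensional lattice basis reduction (Lagrange/Gauss).

    Repeatedly subtracts the rounded projection coefficient and swaps the
    vectors until b1 is a shortest non-zero lattice vector; illustrative only."""
    b1, b2 = np.asarray(b1, dtype=float), np.asarray(b2, dtype=float)
    if np.dot(b1, b1) > np.dot(b2, b2):
        b1, b2 = b2, b1
    while True:
        m = round(np.dot(b1, b2) / np.dot(b1, b1))   # size-reduction coefficient
        b2 = b2 - m * b1
        if np.dot(b2, b2) >= np.dot(b1, b1):
            return b1, b2
        b1, b2 = b2, b1

short, other = lagrange_gauss_reduce([105, 34], [91, 29])
print("reduced basis:", short, other)
```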
The purpose of the paper is to initiate the development of the theory of Newton–Okounkov bodies of curve classes. Our definition is based on making a fundamental property of Newton–Okounkov bodies hold also in the curve case: the volume of the Newton–Okounkov body of a curve is a volume-type function of the original curve. This construction allows us to conjecture a new relation between Newton–Okounkov bodies; we prove it in certain cases.
The cones of nonnegative polynomials and sums of squares arise as central objects in convex algebraic geometry and have their origin in the seminal work of Hilbert ([Hil88]). Depending on the number of variables n and the degree d of the polynomials, Hilbert famously characterized all cases of equality between the cone of nonnegative polynomials and the cone of sums of squares. This equality holds precisely for bivariate forms, quadratic forms and ternary quartics ([Hil88]). Since then, a lot of work has been done on understanding the difference between these two cones, which has major consequences for many practical applications such as polynomial optimization problems. Roughly speaking, minimizing polynomial functions (constrained as well as unconstrained) can be done efficiently whenever certain nonnegative polynomials can be written as sums of squares (see Section 2.3 for the precise relationship). The underlying reason is the fundamental difference that checking nonnegativity of polynomials is an NP-hard problem whenever the degree is greater than or equal to four ([BCSS98]), whereas checking whether a polynomial can be written as a sum of squares is a semidefinite feasibility problem (see Section 2.2). Although the complexity status of the semidefinite feasibility problem is still an open problem, it is polynomial for a fixed number of variables. Hence, understanding the difference between nonnegative polynomials and sums of squares is highly desirable both from a theoretical and a practical viewpoint.
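A classical example separating the two cones (standard background, not specific to this thesis) is the Motzkin form: by the arithmetic-geometric mean inequality it is nonnegative on all of $\mathbb{R}^2$, yet it admits no representation as a sum of squares of polynomials,

\[
  M(x,y) \;=\; x^{4} y^{2} + x^{2} y^{4} - 3\, x^{2} y^{2} + 1 \;\ge\; 0
  \quad \text{for all } (x,y) \in \mathbb{R}^{2},
  \qquad M \notin \Sigma ,
\]

where $\Sigma$ denotes the cone of sums of squares.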
Between his arrival in Frankfurt in 1922 and his proof of his famous finiteness theorem for integral points in 1929, Siegel had no publications. He did, however, write a letter to Mordell in 1926 in which he explained a proof of the finiteness of integral points on hyperelliptic curves. Recognizing the importance of this argument (and Siegel's views on publication), Mordell sent the relevant extract to be published under the pseudonym "X".
The purpose of this note is to explain how to optimize Siegel's 1926 technique to obtain the following bound. Let K be a number field, S a finite set of places of K, and $f \in \mathcal{O}_{K,S}[t]$ monic of degree $d \ge 5$ with discriminant $\Delta_f \in \mathcal{O}_{K,S}^{\times}$. Then: $\#|\{(x,y) : x, y \in \mathcal{O}_{K,S},\ y^2 = f(x)\}| \le 2^{\operatorname{rank} \operatorname{Jac}(C_f)(K)} \cdot O(1)^{d^3 \cdot ([K:\mathbb{Q}] + \#|S|)}$.
This improves bounds of Evertse-Silverman and Bombieri-Gubler from 1986 and 2006, respectively.
The main point underlying our improvement is that, informally speaking, we insist on "executing the descents in the presence of only one root (and not three) until the last possible moment".
We presented a proof of the classical stable limit laws using the contraction method in combination with the Zolotarev metric. Furthermore, a stable limit law was proved for scaled sums of growing sequences. This limit law was alternatively formulated for sequences of random variables defined by a simple degenerate recursion.
Random ordinary differential equations (RODEs) are ordinary differential equations (ODEs) that have a stochastic process in their vector field functions. RODEs have been used in a wide range of applications such as biology, medicine, population dynamics and engineering and play an important role in the theory of random dynamical systems; however, they have long been overshadowed by stochastic differential equations.
Typically, the driving stochastic process has at most Hölder continuous sample paths, and the resulting vector field is thus at most Hölder continuous in time, no matter how smooth the vector field is in its original variables. The sample paths of the solution are then continuously differentiable, but their derivatives are at most Hölder continuous in time. Consequently, although the classical numerical schemes for ODEs can be applied pathwise to RODEs, they do not achieve their traditional orders.
Recently, Grüne and Kloeden derived the explicit averaged Euler scheme by taking the average of the noise within the vector field. In addition, new forms of higher order Taylor-like schemes for RODEs were derived systematically by Jentzen and Kloeden.
However, it is still important to build higher order, computationally less expensive and numerically stable schemes, and this is the motivation of this thesis. The schemes by Grüne and Kloeden and by Jentzen and Kloeden are very general, so RODEs with special structure, i.e., RODEs with Ito noise and RODEs with affine structure, are the focus here, and numerical schemes which exploit these special structures are investigated.
The developed numerical schemes are applied to several mathematical models in biology and medicine. In order to assess the performance of the numerical schemes, trajectories of solutions are illustrated. In addition, the errors versus step sizes as well as the computational costs are compared among the newly developed schemes and the schemes in the literature.
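A minimal pathwise sketch of the averaged Euler idea mentioned above, for a scalar RODE x'(t) = -x(t) + sin(O_t) driven by a sampled Ornstein-Uhlenbeck path (a toy example with invented parameters; the higher-order schemes developed in the thesis are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Pathwise "averaged Euler" sketch for the scalar RODE x'(t) = -x(t) + sin(O_t),
# where O_t is an Ornstein-Uhlenbeck path sampled on a fine grid.
T, h, m = 5.0, 0.05, 20            # horizon, coarse step, fine substeps per step
dt = h / m
n_fine = int(T / dt)

# exact-in-distribution OU simulation on the fine grid: dO = -O dt + dW
O = np.empty(n_fine + 1)
O[0] = 0.0
a = np.exp(-dt)
for i in range(n_fine):
    O[i + 1] = a * O[i] + np.sqrt((1 - a**2) / 2) * rng.normal()

f = lambda x, o: -x + np.sin(o)

x = 1.0
for n in range(int(T / h)):
    o_avg = O[n * m:(n + 1) * m].mean()     # average the noise over the step
    x = x + h * f(x, o_avg)                  # explicit averaged Euler step

print(f"x(T) ≈ {x:.4f}")
```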
Within the last twenty years, the contraction method has turned out to be a fruitful approach to distributional convergence of sequences of random variables which obey additive recurrences. It was mainly invented for applications in the real-valued framework; however, in recent years more complex state spaces such as Hilbert spaces have been under consideration. Based upon the family of Zolotarev metrics, which were introduced in the late seventies, we develop the method in the context of Banach spaces and work it out in detail in the case of continuous resp. càdlàg functions on the unit interval. We formulate sufficient conditions, on both the sequence under consideration and its possible limit satisfying a stochastic fixed-point equation, that allow us to deduce functional limit theorems in applications. As a first application we present a new and considerably shorter proof of the classical invariance principle due to Donsker; it is based on a recursive decomposition. Moreover, we apply the method in the analysis of the complexity of partial match queries in two-dimensional search trees such as quadtrees and 2-d trees. These important data structures have been under heavy investigation since their invention in the seventies. Our results give answers to problems that have been left open in the pioneering work of Flajolet et al. in the eighties and nineties. We expect that the functional contraction method will significantly contribute to solutions of similar problems involving additive recursions in the coming years.
The behaviour of electronic circuits is influenced by ageing effects. Modelling the behaviour of circuits is a standard approach for the design of faster, smaller, more reliable and more robust systems. In this thesis, we propose a formalization of robustness that is derived from a failure model based purely on the behavioural specification of a system. For a given specification, simulation can reveal whether a system does not comply with the specification, and thus provide a failure model. Ageing usually works against the specified properties, and ageing models can be incorporated to quantify the impact on specification violations, failures and robustness. We study ageing effects in the context of analogue circuits. Here, models must factor in infinitely many circuit states. Ageing effects have a cause and an impact that both require models, and on both of these ends the circuit state is highly relevant and must be factored in. For example, static empirical models for ageing effects are not valid in many cases, because the assumed operating states do not agree with the circuit simulation results. This thesis identifies essential properties of ageing effects, and we argue that they need to be taken into account for modelling the interrelation of cause and impact. These properties include frequency dependence, monotonicity, memory and relaxation mechanisms, as well as control by arbitrarily shaped stress levels. Starting from decay processes, we define a class of ageing models that fits these requirements well while remaining arithmetically accessible by means of a simple structure.
Modelling ageing effects in semiconductor circuits becomes more relevant with higher integration and smaller structure sizes. With respect to miniaturization, digital systems are ahead of analogue systems, and similarly ageing models predominantly focus on digital applications. In the digital domain, the signal levels are either on or off or switching in between. Given an ageing model as a physical effect bound to signal levels, ageing models for components and whole systems can be inferred by means of average operation modes and cycle counts. Functional and faithful ageing effect models for analogue components often require a more fine-grained characterization of the physical processes; here, signal levels can take arbitrary values to begin with. Such fine-grained, physically inspired ageing models do not scale to larger applications and are hard to simulate in reasonable time.
To close the gap between physical processes and system level ageing simulation, we propose a data based modelling strategy, according to which measurement data is turned into ageing models for analogue applications. Ageing data is a set of pairs of stress patterns and the corresponding parameter deviations. Assuming additional properties, such as monotonicity or frequency independence, a learning algorithm can find a complete model that is consistent with the data set. These ageing effect models decompose into a controlling stress level, an ageing process, and a parameter that depends on the state of this process. Using this representation, we are able to embed a wide range of ageing effects into behavioural models for circuit components. Based on the developed modelling techniques, we introduce a novel model for the BTI effect, an ageing effect that permits relaxation. Subsequently, a transistor level ageing model for BTI that targets analogue circuits is proposed. Similarly, we demonstrate how ageing data from analogue transistor level circuit models lifts to purely behavioural block models. With this, we are the first to present a data based hierarchical ageing modelling scheme.
An ageing simulator for circuits or system level models computes long term transients, i.e. solutions of a differential equation. Long term transients are often close to quasi-periodic, in some sense repetitive. If the evaluation of ageing models under quasi-periodic conditions can be done efficiently, long term simulation becomes practical. We describe an adaptive two-time simulation algorithm that basically skips periods during simulation, advancing faster on a second time axis. The bottleneck of two-time simulation is the extrapolation through skipped frames; this involves both the evaluation of the ageing models and the consistency of the boundary conditions. We propose a simulator that computes long term transients exploiting the structure of the proposed ageing models. These models permit extrapolation of the ageing state by means of a locally equivalent stress, a sort of average stress level. This level can be computed efficiently and also gives rise to a dynamic step control mechanism.
Ageing simulation has a wide range of applications, and this thesis vastly improves its applicability to analogue circuits in terms of modelling and efficiency. An ageing effect model that is part of a circuit component model accounts for parametric drift that is directly related to the operation mode; for example, asymmetric load on a comparator or power stage may lead to offset drift, which is not an empirical effect. Monitor circuits can report such effects during operation, when they become significant, and simulating the behaviour of these monitors is important during their development. Ageing effects can be compensated using redundant parts, and annealing can restore degraded components to a functional state. We show that such mechanisms can be simulated in place using our models and algorithms. The aim of automated circuit synthesis is to create a circuit that implements a specification for a certain use case. Ageing simulation can identify candidates that are more reliable, and efficient ageing simulation allows various operation modes to be factored in and helps refine the selection. Using long term ageing simulation, we have analysed the fitness of a set of synthesized operational amplifiers with similar properties with respect to various use cases. This procedure enables the automatic selection of the most ageing-resilient implementation.
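A toy instance of the decay-process model class sketched above (an illustration under the stated structure, not one of the thesis' actual models): a hidden ageing state relaxes exponentially toward a stress-dependent equilibrium, and an observable parameter drift is read off from that state, so that relaxation during low-stress phases appears automatically.

```python
import numpy as np

# Toy decay-process ageing model (illustrative sketch):
# the hidden state a(t) relaxes toward an equilibrium set by the current
# stress level s(t); the observable parameter drift is proportional to a(t).
# When the stress drops, a(t) decays back, capturing relaxation.
tau, gain = 3600.0, 0.05            # time constant [s], parameter sensitivity

def step(a, stress, dt):
    a_eq = stress                    # stress-dependent equilibrium state
    return a_eq + (a - a_eq) * np.exp(-dt / tau)

# stress pattern: one hour of high stress, then one hour of recovery, repeated
dt, a, drift_trace = 60.0, 0.0, []
for hour in range(6):
    stress = 1.0 if hour % 2 == 0 else 0.1
    for _ in range(int(3600 / dt)):
        a = step(a, stress, dt)
    drift_trace.append(gain * a)     # e.g. threshold-voltage shift after this hour

print("parameter drift after each hour:", np.round(drift_trace, 4))
```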
We provide a mathematical framework to model continuous time trading in limit order markets of a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (hold limit orders with only finitely many different limit prices and rebalance at most finitely often) can be extended in a suitable way to general continuous time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and any element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton's portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset evolves according to a geometric Brownian motion, a proportional bid-ask spread, and Poisson execution times for the limit orders of the small investor, we show that the optimal strategy consists in using market orders to keep the proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
Random constraint satisfaction problems have been on the agenda of various sciences, such as discrete mathematics, computer science, statistical physics and a whole series of additional areas of application, since at least the 1990s. The objective is to find a state of a system, for instance an assignment of a set of variables, satisfying a collection of constraints. Understanding the computational hardness as well as the underlying random discrete structures of these problems analytically, and developing efficient algorithms that find optimal solutions, has triggered a huge amount of work on random constraint satisfaction problems up to this day. In this context, this thesis presents three results for two random constraint satisfaction problems. ...
The random split tree introduced by Devroye (1999) is considered. We derive a second order expansion for the mean of its internal path length and furthermore obtain a limit law by the contraction method. As an assumption we need the splitter to have a Lebesgue density and mass in every neighborhood of 1. We use properly stopped homogeneous Markov chains, for which limit results in total variation distance as well as renewal theory are used. Furthermore, we extend this method to obtain the corresponding results for the Wiener index.
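For readers unfamiliar with the quantity: the internal path length of a tree is the sum of the depths of all its nodes. The binary search tree built from a uniformly random permutation is a prominent special case of Devroye's split trees, and its expected internal path length grows like 2 n ln n. A small simulation sketch (purely illustrative, not the analysis of the paper):

```python
import math
import random

def internal_path_length(keys):
    """Internal path length (sum of node depths, root at depth 0) of the
    binary search tree obtained by inserting the keys in the given order."""
    root = None          # each node is a list [key, left_child, right_child]
    total = 0
    for k in keys:
        if root is None:
            root = [k, None, None]
            continue
        depth, node = 0, root
        while True:
            idx = 1 if k < node[0] else 2
            depth += 1
            if node[idx] is None:
                node[idx] = [k, None, None]
                break
            node = node[idx]
        total += depth
    return total

random.seed(1)
n = 10_000
keys = random.sample(range(10 * n), n)   # distinct keys in random order
ipl = internal_path_length(keys)
print(f"internal path length: {ipl},  2 n ln n ≈ {2 * n * math.log(n):.0f}")
```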
Although everyone is familiar with using algorithms on a daily basis, formulating, understanding and analysing them rigorously has been (and will remain) a challenging task for decades. One way of making progress towards their understanding is therefore to formulate models that portray reality but remain easy to analyse. In this thesis we take a step in this direction by analysing one particular problem, the so-called group testing problem, introduced by R. Dorfman in 1943. We assume a large population within which there is an infected group of individuals. Instead of testing everybody individually, we can test groups (for instance by mixing blood samples). In this thesis we look for the minimum number of tests needed such that we can say something meaningful about the infection status. Furthermore, we consider various versions of this problem to analyse at which point, and why, the problem is hard, easy or impossible to solve.
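As a concrete instance of Dorfman's original two-stage idea (classical background, not the analysis of the thesis): pool the population into groups of size g, test each pool, and retest individually only the members of positive pools. The sketch below compares the expected number of tests with individual testing; the infection probability p is invented for the example.

```python
# Dorfman two-stage group testing (illustrative sketch):
# population of size n, each individual independently infected with
# probability p, pools of size g.  Expected number of tests:
#   n/g pool tests  +  n * P(pool is positive) individual retests.
n, p = 10_000, 0.01

def expected_tests(g):
    p_pool_positive = 1.0 - (1.0 - p) ** g
    return n / g + n * p_pool_positive

best_g = min(range(2, 101), key=expected_tests)
print(f"individual testing: {n} tests")
print(f"best pool size g = {best_g}: about {expected_tests(best_g):.0f} expected tests")
```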
We study the price-setting problem of market makers under perfect competition in continuous time. Thereby we follow the classic Glosten-Milgrom model, which defines bid and ask prices as the expectation of a true value of the asset given the market maker's partial information, which includes the customers' trading decisions. The true value is modeled as a Markov process that can be observed by the customers with some noise at Poisson times.
We analyze the price-setting problem by solving a non-standard filtering problem with an endogenous filtration that depends on the bid and ask price processes quoted by the market maker. Under some conditions we show existence and uniqueness of the price processes. In a different setting we construct a counterexample to uniqueness. Further, we discuss the behavior of the spread via a convergence result and simulations.
We investigate multivariate Laurent polynomials f \in \C[\mathbf{z}^{\pm 1}] = \C[z_1^{\pm 1},\ldots,z_n^{\pm 1}] with varieties \mathcal{V}(f) restricted to the algebraic torus (\C^*)^n = (\C \setminus \{0\})^n. For such Laurent polynomials f one defines the amoeba \mathcal{A}(f) of f as the image of the variety \mathcal{V}(f) under the \Log-map \Log : (\C^*)^n \to \R^n, (z_1,\ldots,z_n) \mapsto (\log|z_1|, \ldots, \log|z_n|). I.e., the amoeba \mathcal{A}(f) is the projection of the variety \mathcal{V}(f) onto its (componentwise logarithmized) absolute values. Amoebas were first defined in 1994 by Gelfand, Kapranov and Zelevinsky. Amoeba theory has been strongly developed since the beginning of the new century. It is related to various mathematical subjects, e.g., complex analysis or real algebraic curves. In particular, amoeba theory can be understood as a natural connection between algebraic and tropical geometry.
In this thesis we investigate the geometry, topology and methods for the approximation of amoebas.
Let \C^A denote the space of all Laurent polynomials with a given, finite support set A \subset \Z^n and coefficients in \C^*. It is well known that, in general, the existence of specific complement components of the amoebas \mathcal{A}(f) for f \in \C^A depends on the choice of coefficients of f. One prominent key problem is to provide bounds on the coefficients in order to guarantee the existence of certain complement components. A second key problem is the question whether the set U_\alpha^A \subseteq \C^A of all polynomials whose amoeba has a complement component of order \alpha \in \conv(A) \cap \Z^n is always connected.
We prove such (upper and lower) bounds for multivariate Laurent polynomials supported on a circuit. If the support set A \subset \Z^n satisfies some additional barycentric condition, we can even give an exact description of the particular sets U_\alpha^A and, especially, prove that they are path-connected.
For the univariate case of polynomials supported on a circuit, i.e., trinomials f = z^{s+t} + p z^t + q (with p,q \in \C^*), we show that a couple of classical questions from the late 19th / early 20th century regarding the connection between the coefficients and the roots of trinomials can be traced back to questions in amoeba theory. This yields nice geometrical and topological counterparts for classical algebraic results. We show for example that a trinomial has a root of a certain, given modulus if and only if the coefficient p is located on a particular hypotrochoid curve. Furthermore, there exist two roots with the same modulus if and only if the coefficient p is located on a particular 1-fan. This local description of the configuration space \C^A yields in particular that all sets U_\alpha^A for \alpha \in \{0,1,\ldots,s+t\} \setminus \{t\} are connected but not simply connected.
We show that for a given lattice polytope P the set of all configuration spaces \C^A of amoebas with \conv(A) = P is a boolean lattice with respect to some order relation \sqsubseteq induced by the set theoretic order relation \subseteq. This boolean lattice turns out to have some nice structural properties and gives in particular an independent motivation for Passare's and Rullgard's conjecture about solidness of amoebas of maximally sparse polynomials. We prove this conjecture for special instances of support sets.
A further key problem in the theory of amoebas is the description of their boundaries. Obviously, every boundary point \mathbf{w} \in \partial \mathcal{A}(f) is the image of a critical point under the \Log-map (where \mathcal{V}(f) is supposed to be non-singular here). Mikhalkin showed that this is equivalent to the fact that there exists a point in the intersection of the variety \mathcal{V}(f) and the fiber \F_{\mathbf{w}} of \mathbf{w} (w.r.t. the \Log-map), which has a (projective) real image under the logarithmic Gauss map. We strengthen this result by showing that a point \mathbf{w} may only be contained in the boundary of \mathcal{A}(f), if every point in the intersection of \mathcal{V}(f) and \F_{\mathbf{w}} has a (projective) real image under the logarithmic Gauss map.
With respect to the approximation of amoebas one is in particular interested in deciding membership, i.e., whether a given point \mathbf{w} \in \R^n is contained in a given amoeba \mathcal{A}(f). We show that this problem can be traced back to a semidefinite optimization problem (SDP), essentially via the Real Nullstellensatz. This SDP can be implemented and solved with standard software (we use SOSTools and SeDuMi here). As the main theoretical result we show that, from the complexity point of view, our approach is at least as good as Purbhoo's approximation process (which is state of the art).
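Returning to the trinomial discussion above: a quick numerical check of the statement that two roots of f = z^{s+t} + p z^t + q sharing the same modulus is a special configuration (illustrative sketch only; the parameters are chosen arbitrarily).

```python
import numpy as np

def root_moduli(s, t, p, q):
    """Moduli of the roots of the trinomial f(z) = z^(s+t) + p z^t + q."""
    coeffs = np.zeros(s + t + 1, dtype=complex)
    coeffs[0] = 1.0        # coefficient of z^(s+t)
    coeffs[s] = p          # coefficient of z^t sits at index (s+t) - t = s
    coeffs[s + t] = q      # constant term
    return np.sort(np.abs(np.roots(coeffs)))

moduli = root_moduli(s=2, t=1, p=-1.5, q=0.5 + 0.5j)
print("root moduli:", np.round(moduli, 4))
print("two roots share a modulus:",
      bool(np.any(np.isclose(np.diff(moduli), 0.0, atol=1e-9))))
```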
Given x \in \R^n, an integer relation for x is a non-trivial vector m \in \Z^n with inner product <m,x> = 0. In this paper we prove the following: Unless every NP language is recognizable in deterministic quasi-polynomial time, i.e., in time O(n^{poly(log n)}), the \ell_\infty-shortest integer relation for a given vector x \in \Q^n cannot be approximated in polynomial time within a factor of 2^{\log^{0.5-\gamma} n}, where \gamma is an arbitrarily small positive constant. This result is quasi-complementary to positive results derived from lattice basis reduction. A variant of the well-known L^3-algorithm approximates, for a vector x \in \Q^n, the \ell_2-shortest integer relation within a factor of 2^{n/2} in polynomial time. Our proof relies on recent advances in the theory of probabilistically checkable proofs, in particular on a reduction from 2-prover 1-round interactive proof systems. The same inapproximability result is valid for finding the \ell_\infty-shortest integer solution of a homogeneous linear system of equations over \Q.
We show that non-interactive statistically-secret bit commitment cannot be constructed from arbitrary black-box one-to-one trapdoor functions and thus from general public-key cryptosystems. Reducing the problems of non-interactive crypto-computing, rerandomizable encryption, non-interactive statistically-sender-private oblivious transfer and low-communication private information retrieval to such commitment schemes, it follows that these primitives cannot be constructed from one-to-one trapdoor functions and public-key encryption in general either. Furthermore, our separation sheds some light on statistical zero-knowledge proofs. There is an oracle relative to which one-to-one trapdoor functions and one-way permutations exist, while the class of promise problems with statistical zero-knowledge proofs collapses to P. This indicates that nontrivial problems with statistical zero-knowledge proofs require more than (trapdoor) one-wayness.
The free energy of TAP-solutions for the SK-model of mean field spin glasses can be expressed as a nonlinear functional of local terms: we exploit this feature in order to contrive abstract REM-like models which we then solve by a classical large deviations treatment. This allows us to identify the origin of the physically unsettling quadratic (in the inverse temperature) correction to the Parisi free energy for the SK-model, and formalizes the true cavity dynamics which acts on TAP-space, i.e. on the space of TAP-solutions. From a non-spin glass point of view, this work is the first in a series of refinements which addresses the stability of hierarchical structures in models of evolving populations.
We show that P(n)_*(P(n)) for p = 2 with its geometrically induced structure maps is not a Hopf algebroid because neither the augmentation \varepsilon nor the coproduct \Delta are multiplicative. As a consequence, the algebra structure of P(n)_*(P(n)) is slightly different from what was supposed to be the case. We give formulas for \varepsilon(xy) and \Delta(xy) and show that the inversion of the formal group of P(n) is induced by an antimultiplicative involution \Xi : P(n) \to P(n). Some consequences for multiplicative and antimultiplicative automorphisms of K(n) for p = 2 are also discussed.
In this paper we prove asymptotic normality of the total length of external branches in Kingman's coalescent. The proof uses an embedded Markov chain, which can be described as follows: Take an urn with n black balls. Empty it in n steps according to the rule: In each step remove a randomly chosen pair of balls and replace it by one red ball. Finally remove the last remaining ball. Then the numbers U_k, 0 \le k \le n, of red balls after k steps exhibit an unexpected property: (U_0, \ldots, U_n) and (U_n, \ldots, U_0) are equal in distribution.
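A quick way to see the claimed distributional symmetry empirically is to simulate the urn chain described above; the following sketch (plain Python/NumPy; the convention that the removal of the last ball counts as step n is taken from the text) compares the empirical means of the trajectory and of its reversal.

# Simulation of the urn chain: n black balls, n-1 pair removals (each replaced by one
# red ball), then the last remaining ball is removed.
import random
import numpy as np

def urn_trajectory(n):
    black, red = n, 0
    traj = [0]                          # U_0 = 0 red balls
    for _ in range(n - 1):
        pair = random.sample(['b'] * black + ['r'] * red, 2)
        black -= pair.count('b')
        red -= pair.count('r')
        red += 1                        # the removed pair is replaced by one red ball
        traj.append(red)
    traj.append(0)                      # the last remaining ball is removed: U_n = 0
    return traj

n, runs = 10, 20000
samples = np.array([urn_trajectory(n) for _ in range(runs)])
print("mean of (U_0,...,U_n):", samples.mean(axis=0).round(2))
print("mean of (U_n,...,U_0):", samples.mean(axis=0)[::-1].round(2))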
Optimization of phase and rate parameters in a stochastic model of neuronal firing activity
(2014)
In our brain, information is represented by neurons through the emission of spikes. The rate (number of spikes), the phase (temporal shift of the spikes) and synchronous oscillations (rhythmic discharges of the neurons within the same cycle) are discussed as important signal components.
This thesis investigates how rate and phase are combined for optimal detection, and the contribution of the phase is quantified depending on the chosen parameter range.
This is studied using a stochastic spike-train model that closely resembles empirical spike trains and contains the three signal components mentioned above. The ELO model ("exponential locking to a free oscillator") consists of two process stages: in the background there is a global oscillation process that produces independent, normally distributed interval segments (oscillation). At the interval boundaries, independent inhomogeneous Poisson processes start (synchrony) with exponentially decreasing firing rate, which is determined by a stimulus-specific rate and phase.
In addition to an analytical determination of the optimal parameters in the case of pure rate or pure phase coding, the joint coding is analysed by means of simulation studies.
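A rough simulation sketch of the two-stage model just described follows; the parameter names (mu, sigma for the cycle length, amp, tau, phi for rate, decay constant and phase) and their values are illustrative assumptions, not the fitted values of the thesis.

# Hedged ELO-type spike train simulation: Normal cycle lengths, and within each cycle an
# inhomogeneous Poisson process whose rate decays exponentially after a phase offset.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 100.0, 10.0            # oscillation cycle length (ms), assumed values
amp, tau, phi = 0.2, 15.0, 20.0    # rate (spikes/ms), decay constant (ms), phase (ms)

def elo_spike_train(n_cycles):
    spikes, t0 = [], 0.0
    for _ in range(n_cycles):
        cycle = max(rng.normal(mu, sigma), 1.0)
        n = rng.poisson(amp * tau)                  # Poisson count for the decaying rate
        times = phi + rng.exponential(tau, size=n)  # spike times within the cycle
        spikes.extend(t0 + t for t in times if t < cycle)
        t0 += cycle
    return np.sort(np.array(spikes))

train = elo_spike_train(50)
print(f"{len(train)} spikes in 50 cycles, first few: {train[:5].round(1)}")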
Parallel FFT-hashing
(1994)
We propose two families of scalable hash functions for collision resistant hashing that are highly parallel and based on the generalized fast Fourier transform (FFT). FFT hashing is based on multipermutations. This is a basic cryptographic primitive for perfect generation of diffusion and confusion which generalizes the boxes of the classic FFT. The slower FFT hash functions iterate a compression function. For the faster FFT hash functions all rounds are alike with the same number of message words entering each round.
This thesis deals with determining the price of options. Options are special derivatives, which Hull in turn defines in his book as follows: a derivative is a financial instrument whose value depends on another, simpler underlying financial instrument (the underlying). An underlying can be, among other things, a bond, a stock or the exchange rate between two currencies...
We consider a class of nonautonomous nonlinear competitive parabolic systems on bounded radial domains under Neumann or Dirichlet boundary conditions. We show that, if the initial profiles satisfy a reflection inequality with respect to a hyperplane, then bounded positive solutions are asymptotically (in time) foliated Schwarz symmetric with respect to antipodal points. Additionally, a related result for positive and sign-changing solutions of scalar equations with Neumann or Dirichlet boundary conditions is given. The asymptotic shape of solutions to cooperative systems is also discussed.
We consider the long-time behaviour of spatially extended random populations with locally dependent branching. We treat two classes of models: 1) Systems of continuous-time random walks on the d-dimensional grid with state-dependent branching rate. While there are k particles at a given site, a branching event occurs there at rate s(k), and one of the particles is replaced by a random number of offspring (according to a fixed distribution with mean 1 and finite variance). 2) Discrete-time systems of branching random walks in random environment. Given a space-time i.i.d. field of random offspring distributions, all particles act independently, the offspring law of a given particle depending on its position and generation. The mean number of children per individual, averaged over the random environment, equals one. The long-time behaviour is determined by the interplay of the motion and the branching mechanism: In the case of recurrent symmetrised individual motion, systems of the second type become locally extinct. We prove a comparison theorem for convex functionals of systems of type 1), which implies that these systems also become locally extinct in this case, provided that the branching rate function grows at least linearly. Furthermore, the analysis of a caricature model leads to the conjecture that local extinction prevails generically in this case. In the case of transient symmetrised individual motion the picture is more complex: Branching random walks with state-dependent branching rate converge towards a non-trivial equilibrium, which preserves the initial intensity, whenever the branching rate function grows subquadratically. Systems of type 1) and systems of type 2) with quadratic branching rate function show very similar behaviour. They converge towards a non-trivial equilibrium if a conditional exponential moment of the collision time of two random walks of an order that reflects the variability in the branching mechanism is finite almost surely. The equilibrium population has finite variance of the local particle number if the corresponding unconditional exponential moment is finite. These results are proved by means of genealogical representations of the locally size-biased population. Furthermore, we compute the threshold values for existence of conditional exponential moments of the collision time of two random walks in terms of the entropy of the transition functions, using tools from large deviations theory. Our results prove in particular that - in contrast to the classical case of independent branching - there is a regime of equilibria with infinite variance of the local number of particles.
New conditions of solvability based on a general theorem on the calculation of the index at infinity for vector fields that have a degenerate principal linear part as well as degenerate "next order" terms are obtained for the 2\pi-periodic problem for the scalar equation x'' + n^2 x = g(|x|) + f(t,x) + b(t) with bounded g(u) and f(t,x) \to 0 as |x| \to \infty. The result is also applied to the solvability of a two-point boundary value problem and to resonant problems for equations arising in control theory.
AMS subject classifications: 47H11, 47H30.
Containment problems belong to the classical problems of (convex) geometry. In the proper sense, a containment problem is the task to decide the set-theoretic inclusion of two given sets, which is hard from both the theoretical and the practical perspective. In a broader sense, this includes, e.g., radii or packing problems, which are even harder. For some classes of convex sets there has been strong interest in containment problems. This includes containment problems of polyhedra and balls, and containment of polyhedra, which have been studied in the late 20th century because of their inherent relevance in linear programming and combinatorics.
Since then, there has only been limited progress in understanding containment problems of that type. In recent years, containment problems for spectrahedra, which naturally generalize the class of polyhedra, have seen great interest. This interest is particularly driven by the intrinsic relevance of spectrahedra and their projections in polynomial optimization and convex algebraic geometry. Except for the treatment of special classes or situations, there has been no overall treatment of that kind of problems, though.
In this thesis, we provide a comprehensive treatment of containment problems concerning polyhedra, spectrahedra, and their projections from the viewpoint of low-degree semialgebraic problems and study algebraic certificates for containment. This leads to a new and systematic access to studying containment problems of (projections of) polyhedra and spectrahedra, and provides several new and partially unexpected results.
The main idea - which is by now common in polynomial optimization, but whose particular potential for low-degree geometric problems is still far from understood - can be explained as follows. One point of view towards linear programming is as an application of Farkas' Lemma, which characterizes the (non-)solvability of a system of linear inequalities. The affine form of Farkas' Lemma characterizes linear polynomials which are nonnegative on a given polyhedron. By omitting the linearity condition, one gets a polynomial nonnegativity question on a semialgebraic set, leading to so-called Positivstellensaetze (or, more precisely, Nichtnegativstellensaetze). A Positivstellensatz provides a certificate for the positivity of a polynomial function in terms of a polynomial identity. As in the linear case, these Positivstellensaetze are the foundation of polynomial optimization and relaxation methods. The transition from positivity to nonnegativity is still a major challenge in real algebraic geometry and polynomial optimization.
With this in mind, several principal questions arise in the context of containment problems: Can the particular containment problem be formulated as a polynomial nonnegativity (or, feasibility) problem in a sophisticated way? If so, how are positivity and nonnegativity related to the containment question in the sense of their geometric meaning? Is there a sophisticated Positivstellensatz for the particular situation, yielding certificates for containment? Concerning the degree of the semialgebraic certificates, which degree is necessary, which degree is sufficient to decide containment?
Indeed, (almost) all containment problems studied in this thesis can be formulated as polynomial nonnegativity problems allowing the application of semialgebraic relaxations. Other than this general result, the answer to all the other questions (highly) depends on the specific containment problem, particularly with regard to its underlying geometry. An important point is whether the hierarchies coming from increasing the degree in the polynomial relaxations always decide containment in finitely many steps.
We focus on the containment problem of an H-polytope in a V-polytope and of a spectrahedron in a spectrahedron. Moreover, we address containment problems concerning projections of H-polyhedra and spectrahedra. This selection is justified by the fact that the mentioned containment problems are computationally hard and their geometry is not well understood.
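As a hedged toy illustration of the linear (Farkas) case mentioned above: containment of an H-polytope {x : Ax <= b} in an H-polytope {x : Cx <= d} holds iff every facet inequality of the outer polytope is valid on the inner one, which can be checked with one linear program per facet. This is only the simplest instance of the general approach, not the spectrahedral setting studied in the thesis.

# H-polytope-in-H-polytope containment via one LP per outer facet (scipy.optimize.linprog
# minimizes, so we maximize c_i^T x by negating the objective).
import numpy as np
from scipy.optimize import linprog

def contained(A, b, C, d):
    for c_i, d_i in zip(C, d):
        res = linprog(-c_i, A_ub=A, b_ub=b, bounds=[(None, None)] * A.shape[1])
        # note: an infeasible (empty) inner polytope would also report False here
        if not res.success or -res.fun > d_i + 1e-9:
            return False
    return True

# unit square inside the square of side 4 centred at the origin, but not conversely
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.ones(4)
C, d = A.copy(), 2 * np.ones(4)
print(contained(A, b, C, d), contained(C, d, A, b))    # True False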
A memory checker for a data structure provides a method to check that the output of the data structure operations is consistent with the input even if the data is stored on some insecure medium. In [8] we present a general solution for all data structures that are based on insert(i,v) and delete(j) commands. In particular this includes stacks, queues, deques (double-ended queues) and lists. Here, we describe more time- and space-efficient solutions for stacks, queues and deques. Each algorithm takes only a single function evaluation of a pseudorandom-like function such as DES, or of a collision-free hash function such as MD5 or SHA, for each push/pop resp. enqueue/dequeue command, making our methods applicable to smart cards.
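A hedged sketch of how such a checker can work for a stack: the trusted side keeps only a short secret key and the authentication tag of the current top, and every push/pop costs one MAC evaluation. This is a generic textbook-style construction for illustration, not necessarily the exact scheme of the paper.

# Memory-checked stack: trusted state is a key and one tag; the stack contents live in
# (simulated) insecure storage and are authenticated with one HMAC call per operation.
import hmac, hashlib, os

class CheckedStack:
    def __init__(self):
        self.key = os.urandom(16)        # trusted, small state
        self.tag = b"\x00" * 32          # tag of the empty stack
        self.untrusted = []              # models insecure storage: (value, prev_tag)

    def _mac(self, value, prev_tag):
        return hmac.new(self.key, value + prev_tag, hashlib.sha256).digest()

    def push(self, value: bytes):
        self.untrusted.append((value, self.tag))   # written to insecure memory
        self.tag = self._mac(value, self.tag)      # one MAC evaluation

    def pop(self) -> bytes:
        value, prev_tag = self.untrusted.pop()     # read from insecure memory
        if not hmac.compare_digest(self._mac(value, prev_tag), self.tag):
            raise RuntimeError("memory checker: stored data was tampered with")
        self.tag = prev_tag
        return value

s = CheckedStack()
s.push(b"a"); s.push(b"b")
print(s.pop(), s.pop())                  # b'b' b'a'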
Using the notion of a root datum of a reductive group G we propose a tropical analogue of a principal G-bundle on a metric graph. We focus on the case G=GLn, i.e. the case of vector bundles. Here we give a characterization of vector bundles in terms of multidivisors and use this description to prove analogues of the Weil--Riemann--Roch theorem and the Narasimhan--Seshadri correspondence. We proceed by studying the process of tropicalization. In particular, we show that the non-Archimedean skeleton of the moduli space of semistable vector bundles on a Tate curve is isomorphic to a certain component of the moduli space of semistable tropical vector bundles on its dual metric graph.
This thesis covers the analysis of radix sort, radix select and the path length of digital trees under a stochastic input assumption known as the Markov model.
The main results are asymptotic expansions of mean and variance as well as a central limit theorem for the complexity of radix sort and the path length of tries, PATRICIA tries and digital search trees.
Concerning radix select, a variety of different models for ranks are discussed including a law of large numbers for the worst case behavior, a limit theorem for the grand averages model and the first order asymptotic of the average complexity in the quantile model.
Some of the results are achieved by moment transfer techniques, the limit laws are based on a novel use of the contraction method suited for systems of stochastic recurrences.
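For readers unfamiliar with the algorithm analysed here, a minimal least-significant-digit radix sort on fixed-length bit strings looks as follows; this is a generic textbook version, not the specific variant or input model of the thesis.

# LSD radix sort in base 2; keys are assumed to be nonnegative integers below 2**bits.
def radix_sort(keys, bits):
    for b in range(bits):                        # least significant bit first
        zeros = [k for k in keys if not (k >> b) & 1]
        ones  = [k for k in keys if (k >> b) & 1]
        keys = zeros + ones                      # stable bucket concatenation
    return keys

print(radix_sort([13, 2, 7, 11, 2, 0], bits=4))  # [0, 2, 2, 7, 11, 13]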
Tropical geometry is the geometry of the tropical semiring \[\mathbb{T}:=(\mathbb{R}\cup\{\infty\},\min,+).\] Classical algebraic structures correspond to tropical structures. If $I\lhd K[x_1,\ldots,x_n]$ is an ideal in a polynomial ring over a field $K$ with valuation $v$, then the classical algebraic variety corresponds to the tropical variety $T(I)$. It is the set of all points $w$ such that the minimum $\min\{v(c_\alpha)+w\cdot\alpha\}$ is achieved at least twice for all $f=\sum_\alpha c_\alpha x^\alpha\in I$. So tropical geometry relates algebraic-geometric problems to discrete-geometric problems. In this thesis we obtain a tropical version of the Eisenbud-Evans Theorem, which states that every algebraic variety in $\mathbb{R}^n$ is the intersection of $n$ hypersurfaces. We find that in the tropical setting every tropical variety $T(I)$ can be written as an intersection of only $n+1$ tropical hypersurfaces. So we get a finite generating system of $I$ such that the corresponding tropical hypersurfaces intersect in the tropical variety, a so-called tropical basis. Let $I \lhd K[x_1,\ldots,x_n]$ be a prime ideal generated by the polynomials $f_1, \ldots, f_r$. Then there exist $g_0,\ldots,g_{n} \in I$ such that \[ T(I) \ = \ \bigcap_{i=0}^{n}T(g_i)\] and thus $\mathcal{G} := \{f_1, \ldots, f_r, g_0, \ldots, g_{n}\}$ is a tropical basis for $I$ of cardinality $r+n+1$. Tropical bases are discussed by Bogart, Jensen, Speyer, Sturmfels and Thomas, where it is shown that tropical bases of linear polynomials of a linear ideal have to be very large. We do not restrict the tropical basis to consist of linear polynomials and therefore obtain a shorter tropical basis, but the degrees of our polynomials can be very large. The main ingredient for obtaining a short tropical basis is the use of projections, in particular geometrically regular projections. Together with the fact that preimages of projections of tropical varieties are themselves tropical varieties of a certain elimination ideal, we get the desired result. Let $I \lhd K[x_1, \ldots, x_n]$ be an $m$-dimensional prime ideal and $\pi : \mathbb{R}^n \to \mathbb{R}^{m+1}$ be a rational projection. Then $\pi^{-1}(\pi(T(I)))$ is a tropical variety, namely \[ \pi^{-1}(\pi(T(I))) \ = \ T(J \cap K[x_1, \ldots, x_n]). \] Here $J$ is an ideal in $K[x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_{n-m-1}]$ derived from the ideal $I$. We show that this elimination ideal is a principal ideal, which yields a polynomial in our tropical basis. The advantage of our method is that we find our polynomials by projections and can therefore use the results of Gelfand, Kapranov and Zelevinsky, of Esterov and Khovanskii, and of Sturmfels, Tevelev and Yu. With mixed fiber polytopes we get the structure and combinatorics of the image of a tropical variety and therefore the structure of the polynomials in our tropical basis. Let $I \lhd K[x_1,\ldots,x_n]$ be an $m$-dimensional ideal generated by generic polynomials $f_1,\ldots, f_{n-m}$, let $\pi:\mathbb{R}^n\to\mathbb{R}^{m+1}$ be a projection and $\psi$ a projection represented by a matrix whose row space equals the kernel of $\pi$.
Then, up to affine isomorphisms, the cells of the dual subdivision of $\pi^{-1}(\pi(T(I)))$ are of the form \[ \sum_{i=1}^p \Sigma_{\psi} (C_{i1}^{\vee}, \ldots, C_{ik}^{\vee}) \] for some $p\in\mathbb{N}$ and faces $F_1, \ldots, F_p$ of $T(f_1)\cap\ldots\cap T(f_k)$, where the dual cell of $F_i\subseteq U = T(f_1)\cup\ldots\cup T(f_k)$ is given by $F_i^\vee=C_{i1}^{\vee}+ \ldots+ C_{ik}^{\vee}$ with faces $C_{i1}, \ldots, C_{ik}$ of $T(f_1), \ldots, T(f_{k})$. In the case that we project a tropical curve we want to find the number of $(n-1)$-cells of the above form with $p>1$, i.e. the cells which are dual to vertices of $\pi(T(I))$ that are the intersection of the images of two non-adjacent $1$-cells of $T(I)$. Vertices of this type are called self-intersection points. We show that there exist a tropical line $L_n\subset\mathbb{R}^n$ and a projection $\pi:\mathbb{R}^n\to\mathbb{R}^2$ such that $L_n$ has $\sum_{i=1}^{n-2}i$ self-intersection points. Furthermore, we find tropical curves $\mathcal{C}\subset\mathbb{R}^n$, which are transversal intersections of $n-1$ tropical hypersurfaces of degrees $d_1,\ldots,d_{n-1}$, and a projection $\pi:\mathbb{R}^n\to\mathbb{R}^2$ such that $\mathcal{C}$ has at least $(d_1\cdot\ldots\cdot d_{n-1})^2\cdot \sum_{i=1}^{n-2}i$ self-intersection points. A caterpillar is a particularly simple type of tropical line, and for this type we show that it can have at most $\sum_{i=1}^{n-2}i$ self-intersection points.
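The defining condition "minimum achieved at least twice" can be made concrete with a two-line check for the simplest example, a tropical line in the plane; the coefficients a, b, c below are arbitrary illustrative choices.

# A point (x, y) lies on the tropical line given by min(x + a, y + b, c) iff at least two
# of the three terms attain the minimum.
def on_tropical_line(x, y, a=0.0, b=0.0, c=0.0, tol=1e-9):
    terms = [x + a, y + b, c]
    m = min(terms)
    return sum(abs(t - m) < tol for t in terms) >= 2

print(on_tropical_line(0.0, 0.0))    # True: all three terms are minimal
print(on_tropical_line(0.0, 2.0))    # True: the x-term and the constant term tie
print(on_tropical_line(1.0, 2.0))    # False: the constant term is the unique minimum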
In this paper, a translation of the visual description technique HyCharts to Hybrid Data-Flow Graphs (HDFG) is given. While HyCharts combine a data-flow and a control-flow oriented formalism for the specification of the architecture and the behavior of hybrid systems, HDFG allow the efficient and homogeneous internal representation of hybrid systems in computers and their automatic manipulation. HDFG represent a system as a data-flow network built from a set of fundamental functions.
The translation makes it possible to combine the advantages of the different description techniques: the use of HyCharts for specification supports the abstract and formal interactive specification of hybrid systems, while HDFG permit the tool-based optimization of hybrid systems and the synthesis of mixed-signal prototypes.
Pseudorandom function tribe ensembles based on one-way permutations: improvements and applications
(1999)
Pseudorandom function tribe ensembles are pseudorandom function ensembles that have an additional collision resistance property: almost all functions have disjoint ranges. We present an alternative to the construction of pseudorandom function tribe ensembles based on one-way permutations given by Canetti, Micciancio and Reingold [CMR98]. Our approach yields two different but related solutions: One construction is somewhat theoretical, but conceptually simple, and therefore gives an easier proof that one-way permutations suffice to construct pseudorandom function tribe ensembles. The other, slightly more complicated solution provides a practical construction; it starts with an arbitrary pseudorandom function ensemble and assimilates the one-way permutation to this ensemble. Therefore, the second solution inherits important characteristics of the underlying pseudorandom function ensemble: it is almost as efficient, and if the starting pseudorandom function ensemble is efficiently invertible (given the secret key) then so is the derived tribe ensemble. We also show that the latter solution yields so-called committing private-key encryption schemes, i.e., schemes where each ciphertext corresponds to exactly one plaintext independently of the choice of the secret key or the random bits used in the encryption process.
In this thesis, the asymptotic behaviour of Pólya urn models is analyzed, using an approach based on the contraction method. For this, a combinatorial discrete time embedding of the evolution of the composition of the urn into random rooted trees is used. The recursive structure of the trees is used to study the asymptotic behavior using ideas from the contraction method.
The approach is applied to a couple of concrete Pólya urns that lead to limit laws with normal distributions, with non-normal limit distributions, or with asymptotic periodic distributional behavior.
Finally, an approach more in the spirit of earlier applications of the contraction method is discussed for one of the examples. A general transfer theorem of the contraction method is extended to cover this example, leading to conditions on the coefficients of the recursion that are not only weaker but also in general easier to check.
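For readers who want to experiment, here is a generic Pólya urn simulation; the simple one-colour reinforcement rule below is an assumption for illustration, while the urns treated in the thesis are more general.

# Classical two-colour Pólya urn (the drawn colour receives one extra ball); the fraction
# of balls of colour 0 converges to a uniform limit for the (1,1) start.
import numpy as np

rng = np.random.default_rng(1)

def polya_fraction(n, start=(1, 1)):
    counts = np.array(start, dtype=float)
    for _ in range(n):
        colour = rng.choice(2, p=counts / counts.sum())
        counts[colour] += 1
    return counts[0] / counts.sum()

samples = [polya_fraction(500) for _ in range(2000)]
print("mean fraction:", round(np.mean(samples), 3))   # ~0.5
print("std of fraction:", round(np.std(samples), 3))  # ~0.289, i.e. close to sqrt(1/12)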
How can options be priced for which no closed-form solutions exist? The answer is: numerical methods. In the past, this question was mostly addressed with tree methods, finite-difference methods or Monte Carlo methods. In contrast, this bachelor thesis deals with the use of quadrature methods (QUAD) for pricing exotic options, i.e. options with more complicated payoff structures than plain standard options. The basic idea is to decompose the option value, written as a multidimensional integral, into one-dimensional integrals, which are then approximated by quadrature formulas... The accuracy of the method is increased by reducing the step size h of the quadrature formula. This, however, increases the computational effort. QUAD nevertheless achieves high accuracy at low computational cost by reducing the dimension and exploiting the excellent convergence properties of quadrature formulas.
The method is generally applicable and shows its strengths in particular when pricing path-dependent options with discrete monitoring dates. As application examples we therefore consider the following option types: digital, barrier, compound, Bermudan and lookback options. Corresponding methods also exist for Asian and American options, but they require more preparatory work.
The great advantage of QUAD over other numerical methods lies in the avoidance of a (significant) distribution error and in the fact that no conditions have to be imposed on the payoff function. Tree or finite-difference methods do reduce the distribution error by refining the grid, but this goes hand in hand with considerably longer computation times. For example, a tree method needs four times the computational effort to double the accuracy, whereas the QUAD method increases the accuracy by a factor of 16 for four times the effort (with extrapolation this factor rises to 256).
QUAD can be regarded as "the perfect tree": like multinomial trees it relies on backward induction, but it has the great flexibility of choosing nodes freely and in large numbers. Moreover, only the dates that determine the option price enter the valuation, so that intermediate time steps can be dispensed with entirely.
The main part of the thesis is organised in six sections. First there is an introduction to general quadrature methods, exotic options and the Black-Scholes model, which then leads to the solution approach. This section ends with a closed-form integral solution for options that follow the Black-Scholes differential equation. Section 4 examines the QUAD method in detail. Using the algorithm presented in Section 5, the QUAD method is then applied in Section 6 to the option classes mentioned above. The corresponding results are presented at the end of this part in tables and figures. The thesis closes with the conclusion and a summary of the results.
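To make the quadrature idea concrete in the simplest possible setting, the following hedged sketch prices a plain European call by one-dimensional quadrature against the log-normal transition density and compares it with the closed-form Black-Scholes value; the parameters are arbitrary illustration values, not taken from the thesis.

# European call by quadrature over x = log(S_T) under Black-Scholes, versus the formula.
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

m, s = np.log(S0) + (r - 0.5 * sigma**2) * T, sigma * np.sqrt(T)
x = np.linspace(m - 8 * s, m + 8 * s, 2001)             # truncated integration range
integrand = np.maximum(np.exp(x) - K, 0.0) * norm.pdf(x, m, s)
quad_price = np.exp(-r * T) * trapezoid(integrand, x)

d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(quad_price, bs_price)    # both approximately 10.45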
Quasi-Monte Carlo methods for the valuation of financial derivatives
The field of option pricing is shaped by the development of new and ever more complex option types and by improvements in stock price models. This development and the increased performance of parallel computers have renewed interest in the flexible quasi-Monte Carlo methods.
The experimental investigations confirm the superiority of the quasi-Monte Carlo method over classical Monte Carlo methods for low-dimensional option types. This superiority diminishes with increasing dimension, however, which is a drawback of the quasi-Monte Carlo method. Possible improvements are offered by the dimension-reduction principle (effective dimension) and by further low-discrepancy sequences such as Niederreiter sequences, lattice rules, and so on. Further improvements could also be achieved by choosing other discretization schemes of higher strong order, such as the Milstein scheme. Quasi-Monte Carlo methods can also be used to price more complicated options,
such as Bermudan options, barrier options, cap options, shout options, lookback options, multi-asset options and outperformance options, and they can be combined with further valuation models, such as the Black-Scholes model with variable interest rates, the Black-Scholes model with time-dependent volatility, the Heston model for stochastic volatility, the Merton jump-diffusion model and the LIBOR market model for interest rate derivatives; these are beyond the scope of this bachelor thesis and will be treated in more detail in the master thesis.
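A minimal illustration of the Monte Carlo versus quasi-Monte Carlo comparison discussed above, using a scrambled Sobol sequence from scipy.stats.qmc for a one-dimensional European call; the parameter values are illustrative assumptions.

# Plain MC versus Sobol-based QMC for a European call under Black-Scholes.
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_pow = 14                                   # 2**14 samples

def call_price(z):
    s_t = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.mean(np.maximum(s_t - K, 0.0))

rng = np.random.default_rng(0)
z_mc = rng.standard_normal(2**n_pow)                           # pseudo-random normals
u_qmc = qmc.Sobol(d=1, scramble=True, seed=0).random_base2(n_pow)
z_qmc = norm.ppf(u_qmc).ravel()                                # Sobol points mapped to normals

print("MC :", call_price(z_mc))
print("QMC:", call_price(z_qmc))    # both approximately the Black-Scholes value 10.45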
In the model of randomly perturbed graphs we consider the union of a deterministic graph G with minimum degree αn and the binomial random graph G(n, p). This model was introduced by Bohman, Frieze, and Martin and for Hamilton cycles their result bridges the gap between Dirac’s theorem and the results by Pósa and Korshunov on the threshold in G(n, p). In this note we extend this result in G ∪ G(n, p) to sparser graphs with α = o(1). More precisely, for any ε > 0 and α : N → (0, 1) we show that a.a.s. G ∪ G(n, β/n) is Hamiltonian, where β = −(6 + ε) log(α). If α > 0 is a fixed constant this gives the aforementioned result by Bohman, Frieze, and Martin and if α = O(1/n) the random part G(n, p) is sufficient for a Hamilton cycle. We also discuss embeddings of bounded degree trees and other spanning structures in this model, which lead to interesting questions on almost spanning embeddings into G(n, p).
This thesis deals with a simplification of the notion of random split trees coined by Devroye (1999) and generalizes it, in the sense of Janson (2019), to unbounded branching degree. This generalization also covers preferential attachment trees with linear weights, for which a proof by Janson (2019) is worked out in detail. In addition, the properties of the depth of the inserted nodes established by Devroye (1999) are preserved.
We use recent results by Bainbridge–Chen–Gendron–Grushevsky–Möller on compactifications of strata of abelian differentials to give a comprehensive solution to the realizability problem for effective tropical canonical divisors in equicharacteristic zero. Given a pair (Γ,D) consisting of a stable tropical curve Γ and a divisor D in the canonical linear system on Γ, we give a purely combinatorial condition to decide whether there is a smooth curve X over a non-Archimedean field whose stable reduction has Γ as its dual tropical curve together with an effective canonical divisor KX that specializes to D.
The neuron reconstruction algorithm NeuRA, developed at the IWR Heidelberg in 2004, extracts the surface morphology or a feature skeleton of neuron cells recorded as image stacks by confocal or two-photon microscopy. First, the signal-to-noise ratio of the raw data is improved by applying a specially developed inertia-based anisotropic diffusion filter; then the image is segmented with Otsu's statistical method, and finally the surface grid of the neuron cells is reconstructed by the regularized marching tetrahedra algorithm, or the feature skeleton is extracted with a special thinning method. In related previous work, such reconstructions of neuron cell nuclei showed that, contrary to the previously prevailing opinion, these nuclei are not necessarily round but can exhibit indentations, so-called invaginations. The influence of the invaginations on the propagation of calcium ions within such cell nuclei could be studied systematically by corresponding numerical simulations.
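The inertia-based anisotropic diffusion filter itself is specific to NeuRA; as a hedged stand-in, the following sketch shows a single step of the classical Perona-Malik edge-preserving diffusion on a noisy image, which conveys the same idea of smoothing while respecting structure.

# One explicit Perona-Malik diffusion step (periodic boundaries via np.roll for simplicity).
import numpy as np

def perona_malik_step(img, kappa=0.1, dt=0.2):
    dn = np.roll(img, -1, axis=0) - img          # differences to the four neighbours
    ds = np.roll(img, 1, axis=0) - img
    de = np.roll(img, -1, axis=1) - img
    dw = np.roll(img, 1, axis=1) - img
    cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)   # edge-stopping weights
    ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
    return img + dt * (cn * dn + cs * ds + ce * de + cw * dw)

noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))
smoothed = noisy
for _ in range(20):
    smoothed = perona_malik_step(smoothed)
print(noisy.std(), smoothed.std())   # the noise variance shrinks after filtering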
To apply this reconstruction method to high-resolution microscope images, the methods used in NeuRA were, in the present work, parallelized on modern graphics hardware using Nvidia CUDA, optimized and re-implemented under the name NeuRA2. Speed-ups of up to a factor of 100 on a high-end graphics card show that modern graphics architectures are particularly well suited for parallelizing image processing operators. In particular, the core of the reconstruction algorithm, the computationally very expensive inertia-based anisotropic diffusion filter, was accelerated immensely by a cluster-based implementation that allows the parallel use of an arbitrary number of graphics cards.
Furthermore, the concept of NeuRA was generalized in this work so that not only neuron cells can be reconstructed from confocal or two-photon image stacks, but the surface morphology or feature skeletons of general objects can be extracted from arbitrary image stacks. The original concept of noise reduction, image segmentation and reconstruction is retained. For the individual steps, however, a variety of image processing and reconstruction methods is now available, which can be selected depending on the nature of the data and the requirements of the reconstruction. Most of these methods were also parallelized on modern graphics hardware.
The extended reconstruction methods were used in several applications. On the one hand, surface and volume grids were generated from confocal image stacks and computed tomography scans, which have been or are to be used for various numerical simulations. Furthermore, more than twenty ancient ceramic vessels and fragments of other ancient ceramics were reconstructed; in each case the bulk density and, for the completely preserved vessels, the filling volume were computed. It could be shown that this procedure is more accurate than the methods commonly used in archaeology for determining the volume of vessels. Moreover, the bulk density of the reconstructed objects depends on the respective ceramic type. An analysis of how accurately the curvature of objects can be represented by the approximating triangular grids was also carried out.
In addition, a method for reconstructing the feature skeletons of living neuron cells or parts of neuron cells was developed. In the data reconstructed with it, individual dendritic spines were imaged at high resolution. Based on these reconstructions, the length of dendrites or individual spines, the angle between dendrite branches, and the volume of individual spines can be computed automatically. With these data, the influence of pharmacological agents and mechanical interventions in the nervous system of living laboratory animals can be studied systematically.
Thanks to their easy extensibility and flexible usability, the described reconstruction methods can readily be adapted to future applications.
We deal with the reconstruction of inclusions in elastic bodies based on monotonicity methods and construct conditions under which a resolution for a given partition can be achieved. These conditions take into account the background error as well as the measurement noise. As a main result, this shows us that the resolution guarantees depend heavily on the Lamé parameter μ and only marginally on λ.
Statistical analysis on various stocks reveals long range dependence behavior of the stock prices that is not consistent with the classical Black and Scholes model. This memory or nondeterministic trend behavior is often seen as a reflection of market sentiments and causes the historical volatility estimator to become unreliable in practice. We propose an extension of the Black and Scholes model by adding a term to the original Wiener term involving a smoother process which accounts for these effects. The problem of arbitrage will be discussed. Using a generalized stochastic integration theory [8], we show that it is possible to construct a self-financing replicating portfolio for a European option without any further knowledge of the extension and that, as a consequence, the classical concept of volatility needs to be re-interpreted.
AMS subject classifications: 60H05, 60H10, 90A09.
Considered are the classes QL (quasilinear) and NQL (nondeterministic quasilinear) of all those problems that can be solved by deterministic (nondeterministic, respectively) Turing machines in time O(n(log n)^k) for some k. Efficient algorithms have time bounds of this type, it is argued. Many of the "exhaustive search" type problems such as satisfiability and colorability are complete in NQL with respect to reductions that take O(n(log n)^k) steps. This implies that QL = NQL iff satisfiability is in QL. CR categories: 5.25.
Correction to: C.P. Schnorr: Security of 2^t-Root Identification and Signatures, Proceedings CRYPTO '96, Springer LNCS 1109 (1996), pp. 143-156, page 148, section 3, line 5 of the proof of Theorem 3. The correction was presented as "Factoring N via proper 2^t-roots of 1 mod N" at the Eurocrypt '97 rump session.
Let G be a finite cyclic group with generator \alpha and with an encoding so that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given \exp_{\alpha}(x), assuming that the exponentiation function \exp_{\alpha}(x) = \alpha^x is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q the security of the least-significant bits of x and of the most significant bits of the rational number \frac{x}{q} \in [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive shift bits lsb(2^{-i}x mod q) for i=k+1,...,k+j. When we restrict \exp_{\alpha} to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x) we call it a 2^{-j}-fraction of \exp_{\alpha}. For groups of odd group order q we show that every two 2^{-j}-fractions of \exp_{\alpha} are equally one-way by a polynomial time transformation: Either they are all one-way or none of them. Our key theorem shows that arbitrary j consecutive shift bits of x are simultaneously secure when given \exp_{\alpha}(x) iff the 2^{-j}-fractions of \exp_{\alpha} are one-way. In particular this applies to the j least-significant bits of x and to the j most-significant bits of \frac{x}{q} \in [0,1). For one-way \exp_{\alpha} the individual bits of x are secure when given \exp_{\alpha}(x) by the method of Hastad and N\"aslund [HN98]. For groups of even order 2^{s}q we show that the j least-significant bits of \lfloor x/2^s\rfloor, as well as the j most-significant bits of \frac{x}{q} \in [0,1), are simultaneously secure iff the 2^{-j}-fractions of \exp_{\alpha'} are one-way for \alpha' := \alpha^{2^s}. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of \exp_{\alpha} for the case that \alpha has prime order q. As a consequence, arbitrary segments of (1-\varepsilon)\lg q consecutive shift bits of random x are for constant \varepsilon >0 simultaneously secure against generic attacks. Every generic algorithm using $t$ generic steps (group operations) for distinguishing bit strings of j consecutive shift bits of x from random bit strings has at most advantage O((\lg q) j\sqrt{t} (2^j/q)^{\frac14}).
We present a novel parallel one-more signature forgery against blind Okamoto-Schnorr and blind Schnorr signatures in which an attacker interacts l times with a legitimate signer and produces from these interactions l+1 signatures. Security against the new attack requires that the following ROS-problem is intractable: find an overdetermined, solvable system of linear equations modulo q with random inhomogeneities (right-hand sides). There is an inherent weakness in the security result of Pointcheval and Stern. Theorem 26 [PS00] does not cover attacks with 4 parallel interactions for elliptic curves of order 2^{200}. That would require the intractability of the ROS-problem, a plausible but novel complexity assumption. Conversely, assuming the intractability of the ROS-problem, we show that Schnorr signatures are secure in the random oracle and generic group model against the one-more signature forgery.
We introduce novel security proofs that use combinatorial counting arguments rather than reductions to the discrete logarithm or to the Diffie-Hellman problem. Our security results are sharp and clean with no polynomial reduction times involved. We consider a combination of the random oracle model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks. In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: a generic adversary performing t generic steps including l sequential interactions with the signer cannot produce l+1 signatures with a better probability than \binom{t}{2}/q. We also characterize the different power of sequential and of parallel attacks. Secondly, we prove that signed ElGamal encryption is secure against the adaptive chosen ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except for the challenge ciphertext. Moreover, signed ElGamal encryption is secure against the one-more decryption attack: a generic adversary performing t generic steps including l interactions with the decryption oracle cannot distinguish the plaintexts of l + 1 ciphertexts from random strings with a probability exceeding \binom{t}{2}/q.
Assuming a cryptographically strong cyclic group G of prime order q and a random hash function H, we show that ElGamal encryption with an added Schnorr signature is secure against the adaptive chosen ciphertext attack, in which an attacker can freely use a decryption oracle except for the target ciphertext. We also prove security against the novel one-more-decryption attack. Our security proofs are in a new model, corresponding to a combination of two previously introduced models, the Random Oracle model and the Generic model. The security extends to the distributed threshold version of the scheme. Moreover, we propose a very practical scheme for private information retrieval that is based on blind decryption of ElGamal ciphertexts.
We present an efficient variant of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82]. We organize LLL-reduction in segments of size k. Local LLL-reduction of segments is done using local coordinates of dimension 2k. Strong segment LLL-reduction yields bases of the same quality as LLL-reduction but the reduction is n-times faster for lattices of dimension n. We extend segment LLL-reduction to iterated subsegments. The resulting reduction algorithm runs in O(n^3 log n) arithmetic steps for integer lattices of dimension n with basis vectors of length 2^{O(n)}, compared to O(n^5) steps for LLL-reduction.
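For orientation, here is the classical LLL reduction (delta = 3/4) over the rationals in plain Python; the segment technique of the abstract is a faster way of organising exactly this computation and is not reproduced here.

# Textbook LLL reduction with exact rational arithmetic (inefficient but easy to follow).
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(B):
    Bs, mu = [], [[Fraction(0)] * len(B) for _ in B]
    for i, b in enumerate(B):
        v = list(b)
        for j in range(i):
            mu[i][j] = dot(b, Bs[j]) / dot(Bs[j], Bs[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, Bs[j])]
        Bs.append(v)
    return Bs, mu

def lll(basis, delta=Fraction(3, 4)):
    B = [[Fraction(x) for x in row] for row in basis]
    k = 1
    while k < len(B):
        for j in range(k - 1, -1, -1):            # size-reduce b_k against b_j
            _, mu = gram_schmidt(B)
            q = round(mu[k][j])
            if q:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
        Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]       # swap and step back
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))    # a short, nearly orthogonal basis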
In this short note, we investigate simultaneous recovery inverse problems for semilinear elliptic equations with partial data. The main technique is based on higher order linearization and monotonicity approaches. With these methods at hand, we can determine the diffusion, cavity and coefficients simultaneously by knowing the corresponding localized Dirichlet-Neumann operators.
Let G be a group of prime order q with generator g. We study hardcore subsets H \subseteq G of the discrete logarithm (DL) \log_g in the model of generic algorithms. In this model we count group operations such as multiplication and division, while computations with non-group data are for free. It is known from Nechaev (1994) and Shoup (1997) that generic DL-algorithms for the entire group G must perform \sqrt{2q} generic steps. We show that DL-algorithms for small subsets H \subseteq G require m/2 + o(m) generic steps for almost all H of size #H = m with m \le \sqrt{q}. Conversely, m/2 + 1 generic steps are sufficient for all H \subseteq G of even size m. Our main result justifies generating secret DL-keys from seeds that are only \frac{1}{2}\log_2 q bits long.
We study the asymptotics of Dirichlet eigenvalues and eigenfunctions of the fractional Laplacian (−Δ)s in bounded open Lipschitz sets in the small order limit s→0+. While it is easy to see that all eigenvalues converge to 1 as s→0+, we show that the first order correction in these asymptotics is given by the eigenvalues of the logarithmic Laplacian operator, i.e., the singular integral operator with Fourier symbol 2log|ξ|. By this we generalize a result of Chen and the third author which was restricted to the principal eigenvalue. Moreover, we show that L2-normalized Dirichlet eigenfunctions of (−Δ)s corresponding to the k-th eigenvalue are uniformly bounded and converge to the set of L2-normalized eigenfunctions of the logarithmic Laplacian. In order to derive these spectral asymptotics, we establish new uniform regularity and boundary decay estimates for Dirichlet eigenfunctions for the fractional Laplacian. As a byproduct, we also obtain corresponding regularity properties of eigenfunctions of the logarithmic Laplacian.
We study continuous dually epi-translation invariant valuations on certain cones of convex functions containing the space of finite-valued convex functions. Using the homogeneous decomposition of this space, we associate a certain distribution to any homogeneous valuation similar to the Goodey-Weil embedding for translation invariant valuations on convex bodies. The support of these distributions induces a corresponding notion of support for the underlying valuations, which imposes certain restrictions on these functionals, and we study the relation between the support of a valuation and its domain. This gives a partial answer to the question which dually epi-translation invariant valuations on finite-valued convex functions can be extended to larger cones of convex functions.
We also study topological properties of spaces of valuations with support contained in a fixed compact set. As an application of these results, we introduce the class of smooth valuations on convex functions and show that the subspace of smooth dually epi-translation invariant valuations is dense in the space of continuous dually epi-translation invariant valuations on finite-valued convex functions. These smooth valuations are given by integrating certain smooth differential forms over the graph of the differential of a convex function. We use this construction to give a characterization of a dense subspace of all continuous valuations on finite-valued convex functions that are rotation invariant as well as dually epi-translation invariant.
Using results from Alesker's theory of smooth valuations on convex bodies, we also show that any smooth valuation can be written as a convergent sum of mixed Hessian valuations. In particular, mixed Hessian valuations span a dense subspace, which is a version of McMullen’s conjecture for valuations on convex functions.
Jump-diffusion models for the valuation of European options
In this thesis, European options were valued in the jump-diffusion models of Merton and of Kou. The closed-form solutions for the Merton model, as an application of the Black-Scholes formula, provide a simple way to compute an option price. The use of an analytical solution for Merton is, however, only possible in restricted cases, namely for two special jump-size distributions (sudden ruin and the lognormal distribution). The Kou model, on the other hand, has a closed-form solution for double-exponentially distributed jumps. A flexible way to determine an option price is to use the Monte Carlo method to simulate the price movement under the underlying jump-diffusion model. In this case the Monte Carlo method has to be applied only once to obtain the option price. This method converges with a convergence rate of 1/2.
Like all other models based on Lévy processes, the Kou model misses one empirical observation, namely the possible dependence between returns of the underlyings (the so-called "volatility clustering effect"), because the model assumes independent increments. One way to incorporate this dependence would be to use other point processes Ñ(t) with dependent increments instead of the Poisson process N(t). Of course, the independence between the Brownian motion, the jump sizes and Ñ(t) must be retained. The model modified in this way no longer has independent increments, but the closed-form solution formula for call and put options is still easy to obtain. On the other hand, it seems difficult to obtain analytical solutions for path-dependent options when Ñ(t) is used instead of N(t).
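A hedged Monte Carlo sketch for a European call in the Merton jump-diffusion model with lognormal jump sizes follows; the parameter values are arbitrary illustration choices, and the drift is compensated so that the discounted asset price is a martingale.

# Monte Carlo pricing of a European call under Merton's jump-diffusion model.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
lam, mu_j, sig_j = 0.5, -0.1, 0.15            # jump intensity and lognormal jump parameters
n_paths = 200_000

kappa = np.exp(mu_j + 0.5 * sig_j**2) - 1.0   # expected relative jump size
n_jumps = rng.poisson(lam * T, n_paths)
jump_sum = mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.standard_normal(n_paths)
diffusion = (r - 0.5 * sigma**2 - lam * kappa) * T + sigma * np.sqrt(T) * rng.standard_normal(n_paths)
S_T = S0 * np.exp(diffusion + jump_sum)
price = np.exp(-r * T) * np.mean(np.maximum(S_T - K, 0.0))
print("Monte Carlo call price:", round(price, 3))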
Motivation: The topic of this paper is the estimation of alignments and mutation rates based on stochastic sequence-evolution models that allow insertions and deletions of subsequences ("fragments") and not just single bases. The model we propose is a variant of a model introduced by Thorne, Kishino, and Felsenstein (1992). The computational tractability of the model depends on certain restrictions in the insertion/deletion process; we discuss the possible effects of these restrictions.
Results: The process of fragment insertion and deletion in the sequence-evolution model induces a hidden Markov structure at the level of alignments and thus makes possible efficient statistical alignment algorithms. As an example we apply a sampling procedure to assess the variability in alignment and mutation parameter estimates for HVR1 sequences of human and orangutan, improving results of previous work. Simulation studies give evidence that estimation methods based on the proposed model also give satisfactory results when applied to data for which the restrictions in the insertion/deletion process do not hold.
Availability: The source code of the software for sampling alignments and mutation rates for a pair of DNA sequences according to the fragment insertion and deletion model is freely available from www.math.uni-frankfurt.de/~stoch/software/mcmcsalut under the terms of the GNU public license (GPL, 2000).
It is commonly agreed that cortical information processing is based on the electric discharges ('spikes') of nerve cells. Evidence is accumulating which suggests that the temporal interaction among a large number of neurons can take place with high precision, indicating that the efficiency of cortical processing may depend crucially on the precise spike timing of many cells. This work focuses on two temporal properties of parallel spike trains that have attracted growing interest in recent years: In the first place, specific delays ('phase offsets') between the firing times of two spike trains are investigated. In particular, it is studied whether small phase offsets can be identified with confidence between two spike trains that have the tendency to fire almost simultaneously. Second, the temporal relations between multiple spike trains are investigated on the basis of such small offsets between pairs of processes. Since the analysis of all delays among the firing activity of n neurons is extremely complex, a method is required with which this high-dimensional information can be collapsed in a straightforward manner such that the temporal interaction among a large number of neurons can be represented consistently in a single temporal map. Finally, a stochastic model is presented that provides a framework to integrate and explain the observed temporal relations that result from the previous analyses.
In this thesis I want to show that a project on games of chance can constitute a "rich learning situation" in which the pupils have room, opportunity and occasion to gain basic experience with random processes, to build important concepts on this basis, and finally to recognize essential stochastic relationships. In keeping with the project method, a large part of my work consisted of preparatory and planning activities beforehand. During the project itself I acted as an advising "background teacher"; the pupils worked largely independently. The focus of this thesis therefore lies on my didactic and methodological considerations in preparing the project.
This thesis investigates selected properties of preferential attachment graphs. By these we mean a class of complex random graphs that are started with a given configuration and then grow by one vertex and m edges with every time step. The growth rules are designed so that a new vertex sends its edges preferentially to vertices that are already connected to many other vertices, which is where the name preferential attachment (PA) comes from. The thesis first presents the scale-freeness of PA models heuristically and then discusses a proof of this claim. We further consider the diameter of PA graphs and study its behaviour as the graph grows. We see that the diameter grows much more slowly than the graph, which we refer to as the small-world phenomenon. The central statements and proofs follow the work of Remco van der Hofstad, who extended the known PA models by an additional parameter. This makes it possible to obtain both logarithmic and doubly logarithmic bounds for the diameter.
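A minimal preferential attachment simulation in the Barabási-Albert style follows; the seed configuration and the repeated-endpoint sampling trick are implementation choices for illustration, not the exact model of the thesis. The heavy tail of the printed degrees hints at the scale-free behaviour discussed above.

# Preferential attachment: each new vertex attaches to m distinct earlier vertices chosen
# with probability proportional to their current degree.
import random
from collections import Counter

def preferential_attachment(n, m=2, seed=0):
    random.seed(seed)
    repeated = list(range(m))         # every edge endpoint appears once per degree unit
    edges = []
    for v in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(repeated))
        for u in chosen:
            edges.append((v, u))
            repeated.extend([v, u])
    return edges

edges = preferential_attachment(5000, m=2)
deg = Counter(u for e in edges for u in e)
print("five largest degrees:", sorted(deg.values(), reverse=True)[:5])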
This dissertation analyses the trading strategies of a large investor in illiquid financial markets. A large investor influences the prices of the securities he trades, so the resulting feedback effect has to be taken into account. The price process is modelled by a family of càdlàg semimartingales that is continuously differentiable in the additional parameter. The aim is to determine a set of strategies, as general as possible, for which a wealth dynamics can be defined. These are predictable processes of well-defined quadratic variation along stopping times; they turn out to be làglàd. The decomposition of the wealth dynamics shows that for continuous adapted strategies of finite variation (tame strategies) the quadratic transaction cost terms vanish and the gain process consists only of a nonlinear stochastic integral. It is shown under which conditions certain approximations of predictable làglàd strategies by adapted continuous strategies of finite variation are possible. If the approximation error is tolerable for the large investor's risk attitude, he can reach his investment goals by using these tame strategies, thereby avoiding liquidity costs. In this case the gain process is given by the nonlinear stochastic integral.
This dissertation, written in English under the supervision of Prof. Dr. H. F. de Groote, Department of Mathematics, belongs to mathematical physics. It deals with Stone spectra of von Neumann algebras, observable functions, and some applications in physics. The final chapter provides a generalization of the Kochen-Specker theorem. Stone spectra and observable functions were introduced by de Groote. The Stone spectrum of a von Neumann algebra is a generalization of the Gelfand spectrum, and the observable functions generalize the Gelfand transforms. Since de Groote's results are largely unpublished, the introductory chapter is followed in Chapter 2 by a survey of these results. Chapter 3 deals with the Stone spectra of finite von Neumann algebras. For algebras of type I_n a complete characterization of the Stone spectrum is developed; some results on type II_1 algebras are presented. Chapter 4 gives some simple applications of the formalism to physics. Chapter 5 gives, for the first time, a functional-analytic proof of the Kochen-Specker theorem and provides the generalization of this theorem, clarifying the situation for all von Neumann algebras.
Strong convergence rates for numerical approximations of stochastic partial differential equations
(2018)
In this thesis, and in the research articles of which it consists, we focus on strong convergence rates for numerical approximations of stochastic partial differential equations (SPDEs). In Part I of this thesis, i.e., Chapter 2 and Chapter 3, we study higher order numerical schemes for SPDEs with multiplicative trace class noise based on suitable Taylor expansions of the Lipschitz continuous coefficients of the SPDEs under consideration. More precisely, Chapter 2 proves strong convergence rates for a linear implicit Euler-Milstein scheme for SPDEs and is based on an unpublished manuscript written by the author of this thesis. This chapter extends an earlier result by slightly weakening the assumptions on the diffusion coefficient and by using a different approximation of the semigroup. In Chapter 3 we introduce an exponential Wagner-Platen type numerical scheme for SPDEs and prove that this numerical approximation method converges in the strong sense with order up to 3/2−. Moreover, we illustrate how the (mixed) iterated stochastic-deterministic integrals that are part of our numerical scheme can be simulated exactly under suitable assumptions.
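As a point of reference for these schemes, the following Python sketch implements the simplest relative of the methods discussed here: a spectral Galerkin discretization combined with a linear implicit Euler step for a toy semilinear stochastic heat equation with additive space-time white noise. The nonlinearity, the truncation parameter and all numerical choices are illustrative and not taken from the thesis, which treats multiplicative trace class noise and higher order (Milstein and Wagner-Platen type) schemes.

```python
import numpy as np

def linear_implicit_euler_heat(N=64, T=1.0, steps=1000, seed=0):
    """Spectral Galerkin + linear implicit Euler for a toy semilinear
    stochastic heat equation on (0,1) with Dirichlet boundary conditions,

        dX_t = (d^2X_t/dx^2 + f(X_t)) dt + dW_t,   X_0 = 0,

    driven by space-time white noise (illustration only).
    """
    rng = np.random.default_rng(seed)
    h = T / steps
    k = np.arange(1, N + 1)
    lam = (np.pi * k) ** 2                   # eigenvalues of the negative Laplacian
    x = np.linspace(0, 1, 2 * N + 1)[1:-1]   # interior grid points
    e = np.sqrt(2) * np.sin(np.pi * np.outer(k, x))  # orthonormal sine basis

    def f(u):                                # a globally Lipschitz nonlinearity
        return np.sin(u)

    X = np.zeros(N)                          # spectral coefficients of X_0 = 0
    for _ in range(steps):
        u = e.T @ X                          # current solution on the grid
        Fu = e @ f(u) / x.size               # crude quadrature of <f(X), e_k>
        dW = rng.normal(0.0, np.sqrt(h), N)  # one Brownian increment per mode
        X = (X + h * Fu + dW) / (1.0 + h * lam)   # linear implicit Euler step
    return x, e.T @ X
```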
The second part of this thesis, i.e., Chapter 4 and Chapter 5, is devoted to strong convergence rates for numerical approximations of SPDEs with superlinearly growing nonlinearities driven by additive space-time white noise. More specifically, in Chapter 4 we prove strong convergence with a rate in the time variable for a class of nonlinearity-truncated numerical approximation schemes for SPDEs and provide examples that fit into our abstract setting, such as stochastic Allen-Cahn equations. Finally, in Chapter 5, we extend this result by spatial approximations and establish strong convergence rates for a class of fully discrete nonlinearity-truncated numerical approximation schemes for SPDEs. Moreover, we apply our strong convergence result to stochastic Allen-Cahn equations and provide lower and upper bounds which show that our strong convergence result can, in general, not be essentially improved.
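The truncation idea can be illustrated by a small modification of the Euler step in the previous sketch: the superlinearly growing drift is simply switched off whenever the numerical solution exceeds a step-size-dependent threshold. The norm and the threshold exponent below are illustrative choices, not the precise conditions used in the thesis.

```python
import numpy as np

def truncated_drift(u, h, theta=0.25):
    """Nonlinearity-truncated drift for a toy Allen-Cahn type equation
    (illustration only): the cubic drift f(u) = u - u**3 is applied only
    while the numerical solution stays below a threshold h**(-theta)."""
    f = u - u ** 3                         # Allen-Cahn nonlinearity
    norm = np.sqrt(np.mean(u ** 2))        # discrete L^2 norm of the iterate
    return f if norm <= h ** (-theta) else np.zeros_like(u)
```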
In an earlier paper we proposed a recursive model for epidemics; in the present paper we generalize this model to include asymptomatic or unrecorded symptomatic people, whom we call dark people (the dark sector). We call this the SEPARd model. A delay differential equation version of the model is added; it allows a better comparison to other models. We carry this out by a comparison with the classical SIR model and indicate why we believe that the SEPARd model may work better for Covid-19 than other approaches.
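For reference, the classical SIR model used here as a baseline consists of three coupled ordinary differential equations; the following sketch integrates them with a forward Euler step. All parameter values are illustrative and unrelated to the paper's calibration.

```python
def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Classical SIR model dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I, integrated with a forward Euler step.
    Parameter values are illustrative only."""
    s, i, r = s0, i0, 0.0
    trajectory = []
    for step in range(int(days / dt)):
        new_inf = beta * s * i * dt        # newly infected in this step
        new_rec = gamma * i * dt           # newly recovered in this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        if step % int(1 / dt) == 0:        # record once per day
            trajectory.append((round(step * dt), s, i, r))
    return trajectory
```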
In the second part of the paper we explain how to deal with the data provided by the JHU; in particular, we explain how to derive central model parameters from the data. Other parameters, like the size of the dark sector, are less accessible and have to be estimated more roughly, at best from representative serological studies, which are available, however, only for a few countries. We start our country studies with Switzerland, where such data are available. Then we apply the model to a collection of other countries: three European ones (Germany, France, Sweden) and the three most severely affected countries from three other continents (USA, Brazil, India). Finally we show that even the aggregated world data can be represented well by our approach.
At the end of the paper we discuss the use of the model. Perhaps the most striking application is that it allows a quantitative analysis of the influence of the time until people are sent into quarantine or hospital. This suggests that imposing measures to shorten this time is a powerful tool to flatten the curves.
Research need. If early-childhood mathematical development is regarded as a holistic process, the research perspective has to be opened up to different learning settings. One of these settings is the family, in which early-childhood mathematical learning processes are decisively influenced by parental support. In German-language mathematics education research, the investigation of this setting has so far been limited to interview studies in which parents' ideas and beliefs about mathematics and about learning mathematics are reconstructed. Observational studies that examine actual realizations of support, independently of the parents' perspective, do not yet exist. This dissertation contributes to addressing this specific research need in German-language mathematics education.
The study. The longitudinal video study underlying the dissertation belongs to reconstructive social research and, more specifically, to interpretative research in mathematics education. Ten preschool children and their mothers were accompanied over the course of one year in open reading and play situations. Transcripts of selected scenes serve as the basis for the analysis. The transcript analysis is consistently guided by the principle of comparison and proceeds in two steps: an interaction analysis first traces the interactional development of the mathematical topic in the respective scene; building on this, the discourse scene is then interpreted, with a focus on support, with respect to the support system that is established.
The research object. In line with its location in interpretative research, the research object is viewed from a social-constructivist, interactionist perspective. Accordingly, support in mathematical mother-child discourses is not one-sided helping by the mother, but a support system jointly established by mother and child in interaction.
Results. In this dissertation, support in mathematical mother-child discourses is described as a Mathematics Acquisition Support System (MASS) and examined from two different perspectives.
The first perspective is a general socialization-theoretical one and shows that the MASS in mathematical mother-child discourses can be oriented towards different overarching tasks: towards joining in, towards developmental progress, or towards the child's free exploration. This typification of support jobs makes clear that support systems differ depending on the subject matter. Whereas the Discourse Acquisition Support System (DASS), which was described with respect to the development of narrative competence (cf. Hausendorf and Quasthoff 2005), is oriented towards a single overarching task, the MASS can be established through work on different support jobs. Preschool children in the family context are thus involved in mathematical discourses oriented towards different support jobs. This result gains additional significance from the fact that the support jobs in the mathematical mother-child discourses could be reconstructed as characteristic of the respective mother-child pairs. Both in comparisons over time and in comparisons across the material, mother and child establish and work on a specific support job with a certain stability. The biography of preschool children as learners of mathematics is thus shaped in a specific way within the family.
The second perspective is a genuinely mathematics-educational one and splits into two sub-perspectives, one focusing on the learning of mathematics, the other on mathematics itself. In the perspective of mathematics learning, realizations of everyday pedagogical concepts are typified. This shows how differently preschool children are involved in support systems as learners of mathematics: as a competent doer, as a knower, and as a thinker (cf. Olson and Bruner 1996). These types structure the research field with respect to the concepts of learning and teaching mathematics that preschool children encounter in the family context. In the perspective of mathematics, it is finally worked out how preschool children and their mothers, in and with their specific MASS, integrate mathematics into the domain of meaning of their everyday life (cf. Bachmair 2007): as a tool, as learning content, and as a means of description and thought. This describes three types of mathematical socialization in the family context.
Overall, the combination of a general socialization-theoretical and a mathematics-educational perspective makes it possible to describe comprehensively how children in the family context are involved in support systems for learning mathematics. Different types could be formed both with respect to the general orientation of the MASS and with respect to the concepts of mathematics learning and of mathematics realized within it. The family as a setting for early mathematical education, and support as a research object, are thereby structured and described on the basis of case studies. This research result provides new insight into the family as a learning setting that has so far received little attention, and at the same time poses a challenge for primary school mathematics teaching: in the sense of a fit between family and school, teaching should build on the children's respective interaction experiences.
We show that the non-Archimedean skeleton of the d-th symmetric power of a smooth projective algebraic curve X is naturally isomorphic to the d-th symmetric power of the tropical curve that arises as the non-Archimedean skeleton of X. The retraction to the skeleton is precisely the specialization map for divisors. Moreover, we show that the process of tropicalization naturally commutes with the diagonal morphisms and the Abel-Jacobi map, and we exhibit a faithful tropicalization for symmetric powers of curves. Finally, we prove a version of the Bieri-Groves theorem that allows us, under certain tropical genericity assumptions, to deduce a new tropical Riemann-Roch theorem for the tropicalization of linear systems.
In recent years, using symmetry has proven to be a very useful tool for simplifying computations in semidefinite programming. This dissertation examines the possibilities of exploiting discrete symmetries in three contexts: in SDP-based relaxations for polynomial optimization, in testing positivity of symmetric polynomials, and in combinatorial optimization. In these contexts the thesis provides new ways of exploiting symmetries, thus giving deeper insight into the paradigms behind the techniques, and studies a concrete combinatorial optimization question.
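A standard instance of this idea, stated here in generic form rather than as the specific contribution of the dissertation, is the restriction of an invariant SDP to the fixed-point subspace of the group action:

```latex
% Symmetry reduction in semidefinite programming (generic statement):
% if the data of an SDP are invariant under a finite group G acting by
% orthogonal matrices \rho(g), then the group average (Reynolds operator)
% of any feasible X,
\[
  \overline{X} \;=\; \frac{1}{|G|} \sum_{g \in G} \rho(g)\, X\, \rho(g)^{\mathsf{T}},
\]
% is again feasible and has the same objective value, so the optimization
% may be restricted to the fixed-point subspace
% \{\,X : \rho(g)\, X\, \rho(g)^{\mathsf{T}} = X \ \text{for all}\ g \in G\,\},
% which a suitable change of basis decomposes into small blocks.
```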