Mathematik
Document Type
- Article (84)
- Preprint (47)
- Doctoral Thesis (46)
- Report (16)
- Conference Proceeding (9)
- Diploma thesis (6)
- Book (3)
- Part of a Book (2)
- Bachelor Thesis (1)
- Diploma Thesis (1)
- Master's Thesis (1)
Language
- English (216)
Has Fulltext
- yes (216)
Is part of the Bibliography
- no (216)
Keywords
- Kongress (6)
- Kryptologie (5)
- Online-Publikation (4)
- LLL-reduction (3)
- Moran model (3)
- computational complexity (3)
- contraction method (3)
- Algebraische Geometrie (2)
- Brownian motion (2)
- Commitment Scheme (2)
Institute
- Mathematik (216)
- Informatik (50)
- Medizin (2)
- Frankfurt Institute for Advanced Studies (FIAS) (1)
- MPI für Hirnforschung (1)
- MPI für empirische Ästhetik (1)
- Physik (1)
Statistical analysis of various stocks reveals long-range dependence in the stock prices that is not consistent with the classical Black and Scholes model. This memory or nondeterministic trend behavior is often seen as a reflection of market sentiment and causes the historical volatility estimator to become unreliable in practice. We propose an extension of the Black and Scholes model by adding to the original Wiener term a term involving a smoother process which accounts for these effects. The problem of arbitrage is discussed. Using a generalized stochastic integration theory [8], we show that it is possible to construct a self-financing replicating portfolio for a European option without any further knowledge of the extension and that, as a consequence, the classical concept of volatility needs to be re-interpreted.
AMS subject classifications: 60H05, 60H10, 90A09.
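The abstract does not specify the smoother process or how it enters the dynamics; as an illustration only, the following sketch adds an eps-weighted fractional Brownian motion with Hurst index H > 1/2 (a process smoother than the Wiener term) to the log-returns of a Black-Scholes price. The names `fbm`, `extended_path` and all parameter values are hypothetical.

```python
import numpy as np

def fbm(n, H, T=1.0, rng=None):
    # Exact fractional Brownian motion sample via Cholesky factorization of the
    # covariance matrix Cov(B_s, B_t) = (s^2H + t^2H - |s-t|^2H)/2 (fine for moderate n).
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

def extended_path(S0, mu, sigma, eps, H, n=200, T=1.0, seed=0):
    # Log-price driven by a Wiener term plus an eps-weighted fBm term (H > 1/2);
    # an illustrative choice, not the construction of the paper.
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = np.sqrt(dt) * rng.standard_normal(n)
    dB = np.diff(fbm(n, H, T, rng))
    logret = (mu - 0.5 * sigma**2) * dt + sigma * dW + eps * dB
    return S0 * np.concatenate([[1.0], np.exp(np.cumsum(logret))])

path = extended_path(100.0, 0.05, 0.2, 0.1, 0.8)
```

For eps = 0 the path reduces to an ordinary geometric Brownian motion, so the extra term isolates the long-range dependence effect.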
Integral equations for the mean-square estimate are obtained for the linear filtering problem, in which the noise generating the signal is a fractional Brownian motion with Hurst index h∈(3/4,1) and the noise in the observation process includes a fractional Brownian motion as well as a Wiener process. AMS subject classifications: 93E11, 60G20, 60G35.
Within the last twenty years, the contraction method has turned out to be a fruitful approach to distributional convergence of sequences of random variables which obey additive recurrences. It was mainly invented for applications in the real-valued framework; however, in recent years, more complex state spaces such as Hilbert spaces have come under consideration. Based upon the family of Zolotarev metrics, which were introduced in the late seventies, we develop the method in the context of Banach spaces and work it out in detail in the case of continuous resp. càdlàg functions on the unit interval. We formulate sufficient conditions on both the sequence under consideration and its possible limit, which satisfies a stochastic fixed-point equation, that allow us to deduce functional limit theorems in applications. As a first application we present a new and remarkably short proof of the classical invariance principle due to Donsker; it is based on a recursive decomposition. Moreover, we apply the method in the analysis of the complexity of partial match queries in two-dimensional search trees such as quadtrees and 2-d trees. These important data structures have been under heavy investigation since their invention in the seventies. Our results answer problems that were left open in the pioneering work of Flajolet et al. in the eighties and nineties. We expect that the functional contraction method will significantly contribute to solutions of similar problems involving additive recursions in the years to come.
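Donsker's invariance principle, mentioned above, can be observed numerically: the rescaled partial-sum process of i.i.d. centered steps approaches Brownian motion. A minimal Monte Carlo sketch (illustrative, not the recursive proof of the thesis) checks that the endpoint W_n(1) is approximately standard normal:

```python
import numpy as np

def scaled_walk(n, rng):
    # Donsker rescaling of a simple random walk: W_n(t) = S_{floor(nt)}/sqrt(n),
    # evaluated on the grid t = k/n.
    steps = rng.choice([-1.0, 1.0], size=n)
    return np.concatenate([[0.0], np.cumsum(steps)]) / np.sqrt(n)

rng = np.random.default_rng(1)
n, reps = 400, 2000
endpoints = np.array([scaled_walk(n, rng)[-1] for _ in range(reps)])
print(endpoints.mean(), endpoints.var())  # W_n(1) is close to N(0,1) in law
```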
We provide a mathematical framework to model continuous time trading in limit order markets of a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (hold limit orders with only finitely many different limit prices and rebalance at most finitely often) can be extended in a suitable
way to general continuous time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and any element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton’s portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset evolves according to a geometric Brownian
motion, a proportional bid-ask spread, and Poisson execution times for the limit orders of the small investor, we show that the optimal strategy consists in using market orders to keep the
proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
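The market-order part of the optimal strategy described above (keep the proportion of wealth in the risky asset between two boundaries) can be sketched as follows. This is a hypothetical illustration: the bid-ask spread, the limit-order leg, and the optimal choice of boundaries are all omitted, and the parameter values are arbitrary.

```python
import numpy as np

def boundary_strategy(pi_low, pi_high, mu, sigma, T, n, seed=0):
    # Rebalance with market orders whenever the proportion of wealth held in
    # the risky asset leaves [pi_low, pi_high]; between boundaries, do nothing.
    rng = np.random.default_rng(seed)
    dt = T / n
    S, cash, shares = 1.0, 0.5, 0.5          # start with proportion 1/2
    trades = 0
    for _ in range(n):
        # geometric Brownian motion step for the risky asset
        S *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal())
        wealth = cash + shares * S
        pi = shares * S / wealth
        if pi < pi_low or pi > pi_high:      # market order back to the nearest boundary
            target = min(max(pi, pi_low), pi_high)
            shares = target * wealth / S
            cash = wealth - shares * S
            trades += 1
    return trades, cash + shares * S

trades, wealth = boundary_strategy(0.45, 0.55, mu=0.05, sigma=0.4, T=1.0, n=2000)
```

Rebalancing only at the boundaries, rather than continuously, is exactly what makes the strategy implementable under transaction costs.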
In recent years, using symmetry has proven to be a very useful tool to simplify computations in semidefinite programming. This dissertation examines the possibilities of exploiting discrete symmetries in three contexts: in SDP-based relaxations for polynomial optimization, in testing positivity of symmetric polynomials, and in combinatorial optimization. In these contexts the thesis provides new ways of exploiting symmetries, thus giving deeper insight into the paradigms behind the techniques, and studies a concrete combinatorial optimization question.
Poster presentation from the Twentieth Annual Computational Neuroscience Meeting: CNS*2011, Stockholm, Sweden, 23-28 July 2011. In statistical spike train analysis, stochastic point process models usually assume stationarity, in particular that the underlying spike train shows a constant firing rate (e.g. [1]). However, such models can lead to misinterpretation of the associated tests if the assumption of rate stationarity is not met (e.g. [2]). Therefore, the analysis of nonstationary data requires that rate changes can be located as precisely as possible. However, present statistical methods focus on rejecting the null hypothesis of stationarity without explicitly locating the change point(s) (e.g. [3]). We propose a test for stationarity of a given spike train that can also be used to estimate the change points in the firing rate. Assuming a Poisson process with piecewise constant firing rate, we propose a Step-Filter-Test (SFT) which can work simultaneously on different time scales, accounting for the high variety of firing patterns in experimental spike trains. Formally, we compare the numbers N1 = N1(t,h) and N2 = N2(t,h) of spikes in the time intervals (t-h,t] and (t,t+h]. By varying t within a fine time lattice and simultaneously varying the interval length h, we obtain a multivariate statistic D(h,t) := (N1-N2)/√(N1+N2), for which we prove asymptotic multivariate normality under homogeneity. From this, a practical graphical device to spot changes of the firing rate is constructed. Our graphical representation of D(h,t) (Figure 1A) visualizes the changes in the firing rate. For the statistical test, a threshold K is chosen such that under homogeneity, |D(h,t)| < K holds for all investigated h and t with probability 0.95. This threshold can indicate potential change points in order to estimate the inhomogeneous rate profile (Figure 1B). The SFT is applied to a sample data set of spontaneous single unit activity recorded from the substantia nigra of anesthetized mice.
In this data set, multiple rate changes are identified which agree closely with visual inspection. In contrast to approaches choosing one fixed kernel width [4], our method has advantages in the flexibility of h.
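The statistic D(h,t) defined above is straightforward to compute. The sketch below evaluates it for one window width h on a simulated Poisson spike train with a single rate change; the simulated rates and grid are illustrative, not the data of the poster.

```python
import numpy as np

def sft_statistic(spikes, t_grid, h):
    # Step-Filter-Test statistic D(h,t) = (N1 - N2)/sqrt(N1 + N2), where N1 and
    # N2 count spikes in (t-h, t] and (t, t+h] respectively.
    spikes = np.sort(np.asarray(spikes))
    D = np.full(len(t_grid), np.nan)
    for i, t in enumerate(t_grid):
        n1 = np.searchsorted(spikes, t, side="right") - np.searchsorted(spikes, t - h, side="right")
        n2 = np.searchsorted(spikes, t + h, side="right") - np.searchsorted(spikes, t, side="right")
        if n1 + n2 > 0:
            D[i] = (n1 - n2) / np.sqrt(n1 + n2)
    return D

# Simulated Poisson train with a rate change from 5 Hz to 30 Hz at t = 10 s.
rng = np.random.default_rng(0)
low = np.cumsum(rng.exponential(1 / 5, 200))
low = low[low < 10]
high = 10 + np.cumsum(rng.exponential(1 / 30, 900))
high = high[high < 20]
spikes = np.concatenate([low, high])
t_grid = np.arange(1.0, 19.0, 0.1)
D = sft_statistic(spikes, t_grid, h=1.0)
print(t_grid[np.nanargmax(np.abs(D))])  # largest |D| should sit near the change point
```

Varying h in an outer loop gives the multi-scale version described in the abstract.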
This thesis studies properties of branching processes in random environment (BPREs). The model goes back to Smith (1969) and Athreya (1971). A BPRE is a simple mathematical model for the development of a population of apomictic (i.e. asexually reproducing) individuals in discrete time, where the environmental conditions influence the reproductive success of the individuals. It is assumed that the environmental conditions in the individual generations are random, namely independent and identically distributed from generation to generation. Think, for example, of a population of plants with a one-year cycle that is exposed to different weather conditions each year, where these conditions are assumed to change in an i.i.d. fashion. Chapter 1 introduces one of the most important tools for describing BPREs, the so-called associated random walk, and describes the classification of BPREs. Chapter 2 reviews known results, in particular on critical, weakly subcritical and strongly subcritical branching processes. Chapter 3 treats the so-called intermediately subcritical case. Using functional limit theorems for conditioned random walks, the exact asymptotics of the survival probability of the process, already proved in Vatutin (2004), is established under somewhat more general assumptions. It is then investigated how often the process, conditioned on survival, consists of only a single individual. The last part of the chapter proves a functional limit theorem for the associated random walk conditioned on survival of the process: suitably rescaled, it converges to a Lévy process conditioned to attain its minimum at the end. Chapter 4 studies large deviations of BPREs.
The rate function of the BPRE is determined both for offspring distributions with at least geometrically decaying tails and for offspring distributions with heavy tails. As it turns out, the rate function depends on the rate function of the associated random walk, on the exponential decay rate of the survival probability and, for offspring distributions with heavy tails, also on those tails. The rate function reflects the most probable paths by which large-deviation events are realized, which is described in Chapter 4.3. In Chapter 4.4, in the special case of offspring distributions with fractional-linear generating function, the rate function is determined for events in which a supercritical BPRE survives but stays small compared to its expectation. Chapter 4.5 studies large deviations conditioned on the environment (quenched). In this case, improbable events can only be realized through the branching mechanism and no longer through an exceptional environment. The dissertation concludes with simulations of branching processes in random environment conditioned on survival. To this end a construction due to Geiger (1999) is applied, which allows Galton-Watson trees in varying environment, conditioned on survival, to be constructed along an ancestral line. The case of geometric offspring distributions, to which we restrict ourselves in Chapter 5, permits explicit computation of the required distributions. As an application of the limit theorem from Chapter 3.1, intermediately subcritical branching processes conditioned on survival can now be simulated as follows: first the environment is drawn at random, namely as a random walk conditioned to attain its minimum at the end; then, following the Geiger construction, a branching process in this environment, conditioned on survival, is simulated.
A short outlook on current research concludes the thesis. The appendix contains some technical results.
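An unconditioned BPRE is easy to simulate directly. The sketch below estimates the survival probability for a toy model with geometric offspring laws whose parameter is drawn i.i.d. per generation; the uniform environment law and all constants are illustrative choices, not those of the thesis (and the Geiger construction for conditioning on survival is not reproduced here).

```python
import numpy as np

def bpre_survival(n_gen, reps, seed=0):
    # Branching process in random environment (BPRE): each generation draws
    # its own offspring law -- here geometric on {0,1,2,...} with a success
    # parameter p that is i.i.d. uniform per generation (an illustrative choice).
    rng = np.random.default_rng(seed)
    alive = 0
    for _ in range(reps):
        z = 1
        for _ in range(n_gen):
            p = rng.uniform(0.3, 0.7)                    # environment of this generation
            z = int(rng.geometric(p, size=z).sum()) - z  # shift support from {1,..} to {0,..}
            if z == 0 or z > 10**6:                      # extinct, or clearly surviving
                break
        alive += z > 0
    return alive / reps

print(bpre_survival(n_gen=30, reps=2000))
```

Because the environment law here is symmetric around p = 1/2, the example sits near the critical case, where survival up to generation n becomes rare as n grows.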
The Benchmark Dose (BMD) approach, first suggested in 1984 by K. Crump [CRUMP (1984)], is a widely used instrument in the risk assessment of substances in the environment and in food. In this context, the BMD approach determines a reference point (RfP) on the statistically estimated dose-response curve for which the risk can be determined with adequate certainty and confidence. In the next step of risk characterization, a threshold is calculated based on this RfP and toxicological considerations. The BMD approach is based upon the fit of a dose-response model to the data; this fit assumes a stochastic distribution of the response endpoint. Ultimately, the BMD reflects the dose for which a pre-specified increase in an adverse health effect (the benchmark response) can be expected. Until now, the BMD approach has been specified only for quantal and continuous endpoints. But in the risk assessment of carcinogens, so-called time-to-event data are of particular interest, since they contain more information on tumor development than quantal incidence data. The goal of this diploma thesis was to extend the BMD approach to such time-to-event data.
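For quantal endpoints, the BMD computation reduces to inverting the extra-risk function of a fitted dose-response model. A minimal sketch for the classical one-hit model (an illustrative choice; the thesis is about extending this machinery to time-to-event data, and the slope value below is hypothetical rather than fitted):

```python
import math

def one_hit_extra_risk(d, b):
    # One-hit model P(d) = 1 - exp(-(a + b*d)); the extra risk over background,
    # ER(d) = (P(d) - P(0)) / (1 - P(0)) = 1 - exp(-b*d), does not depend on a.
    return 1.0 - math.exp(-b * d)

def bmd(bmr, b):
    # Dose at which the extra risk equals the benchmark response (BMR).
    return -math.log(1.0 - bmr) / b

b_hat = 0.02                 # slope as it might come out of a fitted model (illustrative)
d10 = bmd(0.10, b_hat)       # BMD for a 10% benchmark response
print(d10, one_hit_extra_risk(d10, b_hat))
```

For richer models the inversion is done numerically, but the logic (fit, define extra risk, solve ER(d) = BMR) is the same.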
Dessins d'enfants (children's drawings) may be defined as hypermaps, i.e. as bipartite graphs embedded in compact Riemann surfaces. They are very important objects for describing the surface of the embedding as an algebraic curve. Knowing the combinatorial properties of the dessin may, in fact, help us determine defining equations or the field of definition of the surface. This task is easier if the automorphism group of the dessin is "large". In this thesis we consider a special type of dessins, so-called Wada dessins, for which the underlying graph illustrates the incidence structure of points and hyperplanes of projective spaces. We determine under which conditions they have a large orientation-preserving automorphism group. We show that by applying algebraic operations called "mock" Wilson operations to the underlying graph we may obtain new dessins. We study the automorphism group of the new dessins and show that the dessins we started with are coverings of the new ones.
New conditions of solvability, based on a general theorem on the calculation of the index at infinity for vector fields that have a degenerate principal linear part as well as degenerate next-order terms, are obtained for the 2π-periodic problem for the scalar equation x'' + n²x = g(|x|) + f(t,x) + b(t) with bounded g(u) and f(t,x) → 0 as |x| → 0. The result is also applied to the solvability of a two-point boundary value problem and to resonant problems for equations arising in control theory.
AMS subject classifications: 47H11, 47H30.
Linear-implicit versions of strong Taylor numerical schemes for finite-dimensional Itô stochastic differential equations (SDEs) are shown to have the same order as the original scheme. The combined truncation and global discretization error of a gamma-strong linear-implicit Taylor scheme with time step delta, applied to the N-dimensional Itô-Galerkin SDE for a class of parabolic stochastic partial differential equations (SPDEs) with a strongly monotone linear operator with eigenvalues lambda_1 <= lambda_2 <= ... in its drift term, is then estimated by K(lambda_{N+1}^{-1/2} + delta^gamma), where the constant K depends on the initial value, bounds on the other coefficients in the SPDE, and the length of the time interval under consideration.
AMS subject classifications: 35R60, 60H15, 65M15, 65U05.
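The idea behind linear-implicit schemes is that only the stiff linear drift is treated implicitly, so each step still needs only a linear solve. A scalar sketch of the simplest member of the family, the drift-implicit Euler scheme (illustrative; the abstract concerns the N-dimensional Galerkin system and higher-order Taylor variants):

```python
import numpy as np

def linear_implicit_euler(a, sigma, x0, T, n, seed=0):
    # Drift-implicit Euler scheme for dX = a*X dt + sigma dW: the stiff linear
    # drift is evaluated at the new time level, the noise term explicitly.
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = (x[k] + sigma * dW) / (1.0 - a * dt)  # solve the implicit step
    return x

# A stiff mean-reverting example: stable even with a coarse step (|a|*dt = 0.5).
path = linear_implicit_euler(a=-50.0, sigma=0.5, x0=1.0, T=1.0, n=100)
```

An explicit Euler step would require |1 + a*dt| < 1 for stability; the implicit version divides by 1 - a*dt and stays stable for any dt when a < 0, which is exactly why these schemes suit spatially discretized parabolic SPDEs with large negative eigenvalues.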
We present a proof of the classical stable limit laws using the contraction method in combination with the Zolotarev metric. Furthermore, a stable limit law is proved for scaled sums over growing sequences. This limit law is alternatively formulated for sequences of random variables defined by a simple degenerate recursion.
We present a new self-contained and rigorous proof of the smoothness of invariant fiber bundles for dynamic equations on measure chains or time scales. Here, an invariant fiber bundle is the generalization of an invariant manifold to the nonautonomous case. Our main result generalizes the “Hadamard-Perron theorem” to the time-dependent, infinite-dimensional, noninvertible, and parameter-dependent case, where the linear part is not necessarily hyperbolic with variable growth rates. As a key feature, our proof works without using complicated technical tools.
Dynamical systems driven by Gaussian noises have been considered extensively in modeling, simulation, and theory. However, complex systems in engineering and science are often subject to non-Gaussian fluctuations or uncertainties. A coupled dynamical system under a class of Lévy noises is considered. After discussing the cocycle property, stationary orbits, and random attractors, a synchronization phenomenon is shown to occur when the drift terms of the coupled system satisfy certain dissipativity and integrability conditions. The synchronization result implies that the coupled dynamical systems share a dynamical feature in some asymptotic sense.
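The synchronization effect can be seen in a toy simulation of two dissipatively coupled scalar systems. This is a hypothetical illustration: Gaussian increments stand in for the Lévy noises of the paper, and the cubic drift and all constants are arbitrary choices satisfying a dissipativity condition.

```python
import numpy as np

def coupled_gap(k, T=10.0, n=5000, seed=0):
    # Two dissipative scalar systems coupled with strength k:
    #   dx = (-x^3 + k(y - x)) dt + dW1,  dy = (-y^3 + k(x - y)) dt + dW2.
    # (Gaussian noise as a stand-in; the paper treats a class of Levy noises.)
    rng = np.random.default_rng(seed)
    dt = T / n
    x, y = 2.0, -2.0
    gap = np.empty(n)
    for i in range(n):
        dW1 = np.sqrt(dt) * rng.standard_normal()
        dW2 = np.sqrt(dt) * rng.standard_normal()
        x, y = (x + (-x**3 + k * (y - x)) * dt + dW1,
                y + (-y**3 + k * (x - y)) * dt + dW2)
        gap[i] = abs(x - y)
    return gap

# Strong coupling drives |x - y| down; uncoupled systems stay far apart.
print(np.mean(coupled_gap(20.0)[-500:]), np.mean(coupled_gap(0.0)[-500:]))
```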
This work connects the Markov chain imbedding technique (MCIT), introduced by M.V. Koutras and J.C. Fu, with distributions concerning the cycle structure of permutations. As a final result, program code is given that uses MCIT to compute numerical values for these distributions. The discrete distributions of interest are those of the cycle structure, of the number of cycles, of the r-th longest and shortest cycle, and of the length of a randomly chosen cycle. These are analyzed for equiprobable permutations as well as for biased ones. Analytical solutions and limit distributions are also considered to put the results on a safe theoretical basis.
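The quantities in question are easy to probe by simulation, which is a useful sanity check against any MCIT computation (the sketch below is not the MCIT itself). For uniform permutations of n elements, the expected number of cycles is the harmonic number H_n:

```python
import numpy as np

def cycle_count(perm):
    # Number of cycles of a permutation given in one-line form: perm[i] = image of i.
    seen = np.zeros(len(perm), dtype=bool)
    cycles = 0
    for start in range(len(perm)):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

rng = np.random.default_rng(2)
n, reps = 50, 4000
counts = [cycle_count(rng.permutation(n)) for _ in range(reps)]
harmonic = sum(1.0 / k for k in range(1, n + 1))  # E[#cycles] = H_n for uniform permutations
print(np.mean(counts), harmonic)
```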
Condensing phenomena in systems biology, ecology and sociology exhibit various complex behaviors in real life. Based on local interaction between agents, we present a further result on the energy-based model presented in [20]. We introduce an additional condition providing total condensing (also called consensus) of a discrete positive measure. Key words: condensing; consensus; random move; self-organizing groups; collective intelligence; stochastic modeling. AMS Subject Classifications: 81T80; 93A30; 37M05; 68U20
Tropical geometry is the geometry of the tropical semiring \[\mathbb{T}:=(\mathbb{R}\cup\{\infty\},\min,+).\] Classical algebraic structures correspond to tropical structures. If $I\lhd K[x_1,\ldots,x_n]$ is an ideal in a polynomial ring over a field $K$ with valuation $v$, then the classical algebraic variety corresponds to the tropical variety $T(I)$. It is the set of all points $w$ such that the minimum $\min\{v(c_\alpha)+w\cdot\alpha\}$ is achieved at least twice for all $f=\sum_\alpha c_\alpha x^\alpha\in I$. Tropical geometry thus relates algebraic geometric problems to discrete geometric problems. In this thesis we obtain a tropical version of the Eisenbud-Evans Theorem, which states that every algebraic variety in $\mathbb{R}^n$ is the intersection of $n$ hypersurfaces. We show that in the tropical setting every tropical variety $T(I)$ can be written as an intersection of only $n+1$ tropical hypersurfaces. So we get a finite generating system of $I$ such that the corresponding tropical hypersurfaces intersect in the tropical variety, a so-called tropical basis. Let $I \lhd K[x_1,\ldots,x_n]$ be a prime ideal generated by the polynomials $f_1, \ldots, f_r$. Then there exist $g_0,\ldots,g_{n} \in I$ such that \[ T(I) \ = \ \bigcap_{i=0}^{n}T(g_i)\] and thus $\mathcal{G} := \{f_1, \ldots, f_r, g_0, \ldots, g_{n}\}$ is a tropical basis for $I$ of cardinality $r+n+1$. Tropical bases are discussed by Bogart, Jensen, Speyer, Sturmfels and Thomas, where it is shown that tropical bases consisting of linear polynomials of a linear ideal have to be very large. We do not restrict the tropical basis to consist of linear polynomials and therefore obtain a shorter tropical basis, although the degrees of our polynomials can be very large. The main ingredient for obtaining a short tropical basis is the use of projections, in particular geometrically regular projections.
Together with the fact that preimages of projections of tropical varieties are themselves tropical varieties of a certain elimination ideal, we get the desired result. Let $I \lhd K[x_1, \ldots, x_n]$ be an $m$-dimensional prime ideal and $\pi : \mathbb{R}^n \to \mathbb{R}^{m+1}$ a rational projection. Then $\pi^{-1}(\pi(T(I)))$ is a tropical variety, namely \[ \pi^{-1}(\pi(T(I))) \ = \ T(J \cap K[x_1, \ldots, x_n]). \] Here $J$ is an ideal in $K[x_1,\ldots,x_n,\lambda_1,\ldots,\lambda_{n-m-1}]$ derived from the ideal $I$. We show that this elimination ideal is a principal ideal, which yields a polynomial in our tropical basis. The advantage of our method is that we find our polynomials by projections and can therefore use the results of Gelfand, Kapranov and Zelevinsky, of Esterov and Khovanskii, and of Sturmfels, Tevelev and Yu. With mixed fiber polytopes we get the structure and combinatorics of the image of a tropical variety and therefore the structure of the polynomials in our tropical basis. Let $I \lhd K[x_1,\ldots,x_n]$ be an $m$-dimensional ideal generated by generic polynomials $f_1,\ldots, f_{n-m}$, $\pi:\mathbb{R}^n\to\mathbb{R}^{m+1}$ a projection and $\psi$ a projection represented by a matrix whose rowspace equals the kernel of $\pi$. Then, up to affine isomorphisms, the cells of the dual subdivision of $\pi^{-1}(\pi(T(I)))$ are of the form \[ \sum_{i=1}^p \Sigma_{\psi} (C_{i1}^{\vee}, \ldots, C_{i{k}}^{\vee}) \] for some $p\in\mathbb{N}$ and faces $F_1, \ldots, F_p$ of $T(f_1)\cap\ldots\cap T(f_k)$, where the dual cell of $F_i\subseteq U = T(f_1)\cup\ldots\cup T(f_k)$ is given by $F_i^\vee=C_{i1}^{\vee}+ \ldots+ C_{ik}^{\vee}$ with faces $C_{i1}, \ldots, C_{i k}$ of $T(f_1), \ldots, T(f_{k})$. In the case that we project a tropical curve, we want to find the number of $(n-1)$-cells of the above form with $p>1$, i.e. the cells which are dual to vertices of $\pi(T(I))$ that are the intersection of the images of two non-adjacent $1$-cells of $T(I)$.
Vertices of this type are called selfintersection points. We show that there exist a tropical line $L_n\subset\mathbb{R}^n$ and a projection $\pi:\mathbb{R}^n\to\mathbb{R}^2$ such that $L_n$ has $\sum_{i=1}^{n-2}i$ selfintersection points. Furthermore, we find tropical curves $\mathcal{C}\subset\mathbb{R}^n$, which are transversal intersections of $n-1$ tropical hypersurfaces of degrees $d_1,\ldots,d_{n-1}$, and a projection $\pi:\mathbb{R}^n\to\mathbb{R}^2$ such that $\mathcal{C}$ has at least $(d_1\cdot\ldots\cdot d_{n-1})^2\cdot \sum_{i=1}^{n-2}i$ selfintersection points. A caterpillar is a certain simple type of tropical line, and for this type we show that it can have at most $\sum_{i=1}^{n-2}i$ selfintersection points.
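The "minimum achieved at least twice" condition that defines a tropical hypersurface is directly computable. A minimal sketch (the helper name and example are illustrative) checks whether a point lies on the tropical line $\min(x, y, 0)$ in the plane:

```python
import numpy as np

def on_tropical_hypersurface(terms, w, tol=1e-9):
    # terms: list of (coefficient valuation, exponent vector alpha).
    # w lies on T(f) iff the minimum of val + <w, alpha> is attained at least twice.
    vals = np.array([v + np.dot(w, alpha) for v, alpha in terms])
    m = vals.min()
    return int(np.sum(vals <= m + tol)) >= 2

# Tropical line in the plane: f = 0 (x) + 0 (y) + 0 (constant), i.e. min(x, y, 0).
line = [(0.0, (1, 0)), (0.0, (0, 1)), (0.0, (0, 0))]
print(on_tropical_hypersurface(line, (0.0, 0.0)))   # vertex: all three terms tie
print(on_tropical_hypersurface(line, (2.0, 0.0)))   # a ray: y-term and constant tie
print(on_tropical_hypersurface(line, (1.0, 2.0)))   # minimum attained only once
```

Intersecting such point tests over the generators of a tropical basis gives a membership test for the tropical variety itself.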
Mixed volumes, mixed Ehrhart theory and applications to tropical geometry and linkage configurations
(2009)
The aim of this thesis is the discussion of mixed volumes, their interplay with algebraic geometry, discrete geometry and tropical geometry, and their use in applications such as linkage configuration problems. Specifically, we present new technical tools for mixed volume computation, a novel approach to Ehrhart theory that links mixed volumes with counting integer points in Minkowski sums, and new expressions in terms of mixed volumes for combinatorial quantities in tropical geometry; furthermore, we employ mixed volume techniques to obtain bounds in certain graph embedding problems.
Local interactions between the particles of a collection cause all particles to reorganize into new positions. The purpose of this paper is to construct an energy-based model of self-organizing subgroups which describes the behavior of single local moves of a particle. The present paper extends the Hegselmann-Krause model of consensus dynamics, where agents simultaneously move to the barycenter of all agents in an epsilon-neighborhood. The energy-based model presented here is analyzed and simulated on finite metric spaces. AMS Subject Classifications: 81T80; 93A30; 37M05; 68U20
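The Hegselmann-Krause update rule mentioned above (each agent moves to the barycenter of all agents within an epsilon-neighborhood) can be sketched on the real line; the population size, epsilon, and iteration count below are illustrative, not taken from the paper:

```python
import numpy as np

def hk_step(x, eps):
    # One synchronous Hegselmann-Krause update: each agent moves to the
    # barycenter (mean) of all opinions within distance eps of its own.
    x = np.asarray(x, dtype=float)
    return np.array([x[np.abs(x - xi) <= eps].mean() for xi in x])

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 50)
for _ in range(60):
    x = hk_step(x, eps=0.3)
# With a large eps the opinion profile condenses into a few clusters.
print(np.unique(np.round(x, 6)))
```

After convergence the surviving clusters are pairwise more than eps apart, which is the one-dimensional version of the condensing ("consensus") behavior the model studies.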
Deformation quantization on symplectic stacks and applications to the moduli of flat connections
(2008)
It is a common problem in mathematical physics to describe and quantize the Poisson algebra on a symplectic quotient [...] given in terms of some moment map [...] on a symplectic manifold [...] with a Hamiltonian action by a Lie group G. Among others, problems may arise in two parts of the process: c might be a singular value of the moment map and the quotient might not be well-behaved; in the interesting cases the quotient often is singular. By the famous result of Sjamaar and Lerman ([102]), X is a symplectic stratified space. We are interested in cases for which we can give a deformation quantization of the possibly singular Poisson algebra of X. To that purpose we introduce a Poisson algebra on the associated stack [...] for special cases and consider its deformations and their classification. We dedicate ourselves to the rather geometric methods introduced by Fedosov for symplectic manifolds in [37]. That leads to the question of how to perform differential geometry on a smooth stack. The Lie groupoid atlas of a smooth stack is a nice model for the same space (Tu, Xu and Laurent-Gengoux in [107] and Behrend and Xu in [16]), but the two have different topoi. We give a morphism (P,R) that compares the topologies of a smooth stack and its atlas. This yields a method to transport sheaves and their sections between a smooth stack and its Lie groupoid atlas. A symplectic stack is a smooth separated Deligne-Mumford stack with a 2-form which is closed and non-degenerate in an atlas. Via (P,R), a deformation quantization on a symplectic stack can be performed in terms of an atlas. We also give a classification functor for the quantizations in the spirit of Deligne ([35]), based on the geometric interpretation given by Gutt and Rawnsley in [49]. As an application we give a deformation quantization for the moduli stack of flat connections in particular configurations. We use Darboux charts provided by Huebschmann (e.g. in [54]) to construct the corresponding Lie groupoid.
This captures the symplectic form arising in the reduction process and differs from other approaches using gerbes of bundles (e.g. Teleman [105]).
In this work, we extend the Hegselmann and Krause (HK) model presented in [16] to an arbitrary metric space. We also present some theoretical analysis and some numerical results on the condensing of particles in finite and continuous metric spaces. For simulations in a finite metric space, we introduce the notion of a "random metric" using the split metrics studied by Dress et al. [2, 11, 12].
Since its development in the publications of Brace, Gatarek, Musiela (1997) on the one hand and, independently, of Miltersen, Sandmann, Sondermann (1997) on the other, the libor market model (LMM) has become the most widely accepted instrument for modeling the term structure of interest rates and for the associated pricing of the relevant financial derivatives. libor stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London; three- or six-month maturities are common in connection with the LMM. Research on improving this model has grown in recent years. By reducing the error of the calibration to the daily observed prices of interest rate options such as caps and swaptions, one subsequently also obtains more accurate valuations for other, more exotic derivatives. The underlying and central idea of the LMM is to treat the forward rates of several libor rates directly as the primary (vector) process and to model them simultaneously, instead of deriving them from a superordinate, infinite-dimensional forward rate process as in the earlier Heath-Jarrow-Morton model. The most convincing argument for this discretization is that the libor rates are directly observable in the market and that their volatilities can be related in a natural way to already liquidly traded products, namely those very caps and swaptions. Nevertheless, the model has a serious deficiency in that it does not reproduce any curvature of the volatility surface with respect to options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the distribution show up clearly in missing heavy tails; smile and skew effects are visible.
In the classical libor market model, only an affine structure is generated in the direction of the strike dimension, which can at best serve as an approximation of the desired surface. The observed distortions naturally lead to an inaccurate picture of reality and to an erroneous reproduction of prices in regions lying somewhat away from the at-the-money range. Such unwanted dissonances in profit and loss figures led, for example, in 1998 to severe losses in the interest rate derivatives portfolio of what is now the Royal Bank of Scotland. ...
This thesis exhibits skeins based on the Homfly polynomial and their relations to Schur functions. The closures of skein-theoretic idempotents of the Hecke algebra are shown to be specializations of Schur functions. This result is applied to the calculation of the Homfly polynomial of the decorated Hopf link, and a closed formula for these Homfly polynomials is given. Furthermore, the specialization of the variables to roots of unity is considered. The techniques are skein theory on the one hand and the theory of symmetric functions in the formulation of Schur functions on the other. Many previously known results are proved here using only skein theory, without recourse to quantum groups.
Epstein and Penner constructed in [EP88] the Euclidean decomposition of a non-compact hyperbolic n-manifold of finite volume for a choice of cusps, n >= 2. The manifold is cut along geodesic hyperplanes into hyperbolic ideal convex polyhedra. The intersection of the cusps with the Euclidean decomposition determined by them turns out to be rather simple as stated in Theorem 2.2. A dual decomposition resulting from the expansion of the cusps was already mentioned in [EP88]. These two dual hyperbolic decompositions of the manifold induce two dual decompositions in the Euclidean structure of the cusp sections. This observation leads in Theorems 5.1 and 5.2 to easily computable, necessary conditions for an arbitrary ideal polyhedral decomposition of the manifold to be a Euclidean decomposition.
Die vorliegende Arbeit beschäftigt sich mit der BFV-Reduktion von Hamiltonschen Systemen mit erstklassigen Zwangsbedingungen im Rahmen der klassischen Hamiltonschen Mechanik und im Rahmen der Deformationsquantisierung. Besondere Aufmerksamkeit wird dabei Zwangsbedingungen zuteil, die als Nullfaser singulärer äquivarianter Impulsabbildungen entstehen. Es ist schon länger bekannt, daß für Nullfasern regulärer äquivarianter Impulsabbildungen die in der theoretischen Physik gebräuchliche Methode der BFV-Reduktion zur Phasenraumreduktion nach Marsden/Weinstein äquivalent ist. In [24] konnte gezeigt werden, daß in dieser Situation die BFV-Reduktion sich auch im Rahmen der Deformationsquantisierung natürlich formulieren läßt und erfolgreich zur Konstruktion von Sternprodukten auf Marsden/Weinstein-Quotienten verwendet werden kann. Ein Hauptergebnis der vorliegenden Arbeit besteht in der Verallgemeinerung der Ergebnisse aus [24] auf den Fall singulärer Impulsabbildungen, deren Komponenten 1.) das Verschwindungsideal der Zwangsfläche erzeugen und 2.) einen vollständigen Durchschnitt bilden. Die Argumentation von [24] wird durch Gebrauch der Störungslemmata aus dem Anhang A.1 systematisiert und vereinfacht. Zum Existenzbeweis von stetigen Homotopien und stetiger Fortsetzungsabbildung für die Koszulauflösung werden der Zerfällungssatz und der Fortsetzungssatz von Bierstone und Schwarz [20] benutzt. Außerdem wird ein ’Jacobisches Kriterium’ für die Überprüfung von Bedingung 2.) angegeben. Basierend auf diesem Kriterium und Techniken aus [3] werden die Bedingungen 1.) und 2.) an einer Reihe von Beispielen getestet. Als Korollar erhält man den Beweis dafür, daß es symplektisch stratifizierte Räume gibt, die keine Orbifaltigkeiten sind und dennoch eine stetige Deformationsquantisierung zulassen. 
Furthermore, similarly to [92], a conceptual explanation is given for why, in the case of complete intersections, the problem of quantizing the BRST charge has such a simple solution. If the moment map is a first-class constraint but not a complete intersection, it is in general not known how to obtain corresponding quantum reduction results. A main focus of the investigation is therefore to better understand classical BFV reduction in this situation, naturally in the hope of laying the groundwork for a possible (deformation) quantization. We will find that there are two reasons that make Tate generators (alias: antighosts of higher level) necessary: the topology of the constraint surface and the singularity theory of the moment map. The number of Tate generators can be reduced by passing to projective Tate generators, i.e., vector bundles. However, Halperin's rigidity theorem [57] ensures that essentially all cases in which the constraint surface is not a local complete intersection lead to infinitely many Tate generators. If the components of a moment map of a linear symplectic group action generate the vanishing ideal of the constraint surface, a locally finite Tate resolution can be found. By the extension theorem and the splitting theorem of Bierstone and Schwarz, this resolution admits continuous contracting homotopies. Starting from such a Tate resolution, and generalizing the classical BFV construction for complete intersections, we construct a graded supercommutative algebra. We can show that this graded algebra carries a graded Poisson bracket also in the vector bundle case, the so-called Rothstein Poisson bracket. The existence of such a Poisson bracket had already been proved by Rothstein [87] for the simpler situation of a symplectic supermanifold.
Moreover, we will see that a BRST charge also exists in the vector bundle case. For moment maps it takes a somewhat simpler form than for general first-class constraints. Altogether, the classical BFV construction [95] is thus generalized to the case of projective Tate generators and interpreted as a homotopy equivalence in the additive category of Fréchet spaces.
Concentration of multivariate random recursive sequences arising in the analysis of algorithms
(2006)
Stochastic analysis of algorithms can be motivated by the analysis of randomized algorithms or by postulating probability distributions on the sets of inputs of the same length. In both cases the resulting random quantities are analyzed. Here, the running time is of particular concern. Characteristics like expectation, variance, limit law, rates of convergence and tail bounds are studied. For the running time, besides the expectation, upper bounds on the right tail are particularly important, since one wants to know that large values of the running time occur only with small probability. In the first chapter game trees are analyzed. The worst-case running time of Snir's randomized algorithm is specified, and its expectation, the asymptotic behavior of its variance, a limit law with a uniquely characterized limit, and tail bounds are identified. Furthermore, a limit law for the value of the game tree under Pearl's probabilistic model is proved. In the second chapter upper and lower bounds for the Wiener index of random binary search trees are identified. In the third chapter tail bounds for the generation size of multitype Galton-Watson processes (with immigration) are derived, depending on their offspring distribution. To this end, the method used to prove the tail bounds in the first chapter is generalized.
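The objects of the third chapter can be illustrated by simulating the generation sizes of a (single-type) Galton-Watson process; the offspring distribution and function name below are assumed stand-ins for illustration, not taken from the thesis:

```python
import random

def galton_watson(generations=10, seed=1):
    # Simulate generation sizes Z_0, Z_1, ... of a Galton-Watson process
    # starting from one individual. Each individual independently has
    # 0, 1 or 2 children with equal probability (an assumed offspring law).
    rng = random.Random(seed)
    sizes = [1]
    for _ in range(generations):
        sizes.append(sum(rng.choice([0, 1, 2]) for _ in range(sizes[-1])))
    return sizes
```

Tail bounds as studied in the thesis concern the probability that a generation size Z_n becomes unusually large, as a function of the offspring distribution.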
Approximating Perpetuities
(2006)
A perpetuity is a real-valued random variable characterised by a distributional fixed-point equation of the form X = AX + b, where (A,b) is a vector of random variables independent of X, while dependencies between A and b are allowed. Conditions for the existence and uniqueness of solutions of such fixed-point equations are known, as is the tail behaviour in most cases. In this work, we look at the central area and develop an algorithm to approximate the distribution function, and where possible the density, of a large class of such perpetuities. For one specific example from the probabilistic analysis of algorithms, the algorithm is implemented and explicit error bounds for this approximation are given. Finally, we look at some examples where the densities, or at least some of their properties, are known, in order to compare the theoretical error bounds to the actual error of the approximation. The algorithm used here is based on a method which was developed for another class of fixed-point equations. While adapting it to this case, a considerable improvement was found, which can be translated back to the original method.
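The fixed-point equation X = AX + b can be explored by simple forward iteration with fresh independent copies of (A, b). The sketch below uses an illustrative choice (A = b = U with U uniform on [0,1], whose fixed point is the Dickman distribution), not the example treated in this work; it also shows that A and b may be dependent:

```python
import random

def sample_perpetuity(n_iter=200):
    # Approximate one draw from the solution of X = A*X + b by iterating
    # X <- A*X + b with fresh independent copies of (A, b).
    # Illustrative choice: A = U, b = U for a single uniform U (the Dickman
    # case); note A and b are fully dependent here, which is allowed.
    x = 0.0
    for _ in range(n_iter):
        u = random.random()
        x = u * x + u
    return x

samples = [sample_perpetuity() for _ in range(5000)]
mean = sum(samples) / len(samples)
# Sanity check: E[X] = E[b] / (1 - E[A]) = 0.5 / 0.5 = 1 for this choice.
```

Such Monte Carlo sampling only gives the distribution up to simulation error; the point of the thesis is a deterministic approximation with explicit error bounds.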
It is commonly agreed that cortical information processing is based on the electric discharges ('spikes') of nerve cells. Evidence is accumulating which suggests that the temporal interaction among a large number of neurons can take place with high precision, indicating that the efficiency of cortical processing may depend crucially on the precise spike timing of many cells. This work focuses on two temporal properties of parallel spike trains that have attracted growing interest in recent years: In the first place, specific delays ('phase offsets') between the firing times of two spike trains are investigated. In particular, it is studied whether small phase offsets can be identified with confidence between two spike trains that have the tendency to fire almost simultaneously. Second, the temporal relations between multiple spike trains are investigated on the basis of such small offsets between pairs of processes. Since the analysis of all delays among the firing activity of n neurons is extremely complex, a method is required with which this high-dimensional information can be collapsed in a straightforward manner, such that the temporal interaction among a large number of neurons can be represented consistently in a single temporal map. Finally, a stochastic model is presented that provides a framework to integrate and explain the observed temporal relations that result from the previous analyses.
The existence of a mean-square continuous strong solution is established for vector-valued Itô stochastic differential equations with a discontinuous drift coefficient, which is an increasing function, and with a Lipschitz continuous diffusion coefficient. A scalar stochastic differential equation with the Heaviside function as its drift coefficient is considered as an example. Upper and lower solutions are used in the proof.
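The scalar example with Heaviside drift can be simulated directly. The Euler-Maruyama sketch below is an illustration of that example, not the paper's existence proof; the function name and parameters are assumptions for the sketch:

```python
import math
import random

def euler_maruyama(x0=0.0, T=1.0, n=1000, sigma=1.0, seed=0):
    # Euler-Maruyama approximation of one path of
    #   dX_t = H(X_t) dt + sigma dW_t,
    # where H is the Heaviside function: an increasing but
    # discontinuous drift, as in the abstract's example.
    rng = random.Random(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        drift = 1.0 if x >= 0 else 0.0          # Heaviside drift H(x)
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return x
```

With sigma = 0 the scheme degenerates to a deterministic recursion, which makes the effect of the discontinuous drift easy to see.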
The synchronization of neuronal firing activity is considered an important mechanism in cortical information processing. The tendency of multiple neurons to synchronize their joint firing activity can be investigated with the 'unitary event' analysis (Grün, 1996). This method is based on the null hypothesis of independent Bernoulli processes and can therefore not tell whether coincidences observed between more than two processes can be considered "genuine" higher-order coincidences or whether they might be caused by coincidences of lower order that coincide by chance ("chance coincidences"). In order to distinguish between genuine and chance coincidences, a parametric model of independent interaction processes (MIIP) is presented. In the framework of this model, maximum-likelihood estimates are derived for the firing rates of the n single processes and for the rates with which genuine higher-order correlations occur. The asymptotic normality of these estimates is used to derive their asymptotic variance and to investigate whether higher-order coincidences can be considered genuine or whether they can be explained by chance coincidences. The empirical test power of this procedure for n=2 and n=3 processes and for finite analysis windows is derived with simulations and compared to the asymptotic values. Finally, the model is extended in order to allow for the analysis of correlations that are caused by jittered coincidences.
Considered are the classes QL (quasilinear) and NQL (nondeterministic quasilinear) of all those problems that can be solved by deterministic (nondeterministic, respectively) Turing machines in time O(n(log n)^k) for some k. Efficient algorithms have time bounds of this type, it is argued. Many of the "exhaustive search" type problems such as satisfiability and colorability are complete in NQL with respect to reductions that take O(n(log n)^k) steps. This implies that QL = NQL iff satisfiability is in QL. CR categories: 5.25
We study the approximability of the following NP-complete (in their feasibility recognition forms) number-theoretic optimization problems: 1. Given n numbers a_1, …, a_n ∈ ℤ, find a minimum gcd set for a_1, …, a_n, i.e., a subset S ⊆ {a_1, …, a_n} with minimum cardinality satisfying gcd(S) = gcd(a_1, …, a_n). 2. Given n numbers a_1, …, a_n ∈ ℤ, find a 1-minimum gcd multiplier for a_1, …, a_n, i.e., a vector x ∈ ℤ^n with minimum max_{1≤i≤n} |x_i| satisfying ∑ …
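For small inputs, the first problem can be solved by exhaustive search over subsets. The function below is a hypothetical brute-force illustration (the problem is NP-complete, so this scales exponentially and says nothing about approximability):

```python
from functools import reduce
from itertools import combinations
from math import gcd

def minimum_gcd_set(nums):
    # Return a smallest subset S of nums with gcd(S) == gcd(nums),
    # by trying all subsets in order of increasing cardinality.
    total = reduce(gcd, nums)
    for k in range(1, len(nums) + 1):
        for subset in combinations(nums, k):
            if reduce(gcd, subset) == total:
                return list(subset)
```

For example, for 6, 10, 15 the overall gcd is 1, but every pair has a common factor, so the minimum gcd set must contain all three numbers.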
Pseudorandom function tribe ensembles based on one-way permutations: improvements and applications
(1999)
Pseudorandom function tribe ensembles are pseudorandom function ensembles that have an additional collision resistance property: almost all functions have disjoint ranges. We present an alternative to the construction of pseudorandom function tribe ensembles based on one-way permutations given by Canetti, Micciancio and Reingold [CMR98]. Our approach yields two different but related solutions: One construction is somewhat theoretic, but conceptually simple and therefore gives an easier proof that one-way permutations suffice to construct pseudorandom function tribe ensembles. The other, slightly more complicated solution provides a practical construction; it starts with an arbitrary pseudorandom function ensemble and assimilates the one-way permutation to this ensemble. Therefore, the second solution inherits important characteristics of the underlying pseudorandom function ensemble: it is almost as efficient, and if the starting pseudorandom function ensemble is efficiently invertible (given the secret key) then so is the derived tribe ensemble. We also show that the latter solution yields so-called committing private-key encryption schemes, i.e., where each ciphertext corresponds to exactly one plaintext, independently of the choice of the secret key or the random bits used in the encryption process.
We introduce the relationship between incremental cryptography and memory checkers. We present an incremental message authentication scheme based on the XOR MACs which supports insertion, deletion and other single block operations. Our scheme takes only a constant number of pseudorandom function evaluations for each update step and produces smaller authentication codes than the tree scheme presented in [BGG95]. Furthermore, it is secure against message substitution attacks, where the adversary is allowed to tamper with messages before update steps, making it applicable to virus protection. From this scheme we derive memory checkers for data structures based on lists. Conversely, we use a lower bound for memory checkers to show that so-called message substitution detecting schemes produce signatures or authentication codes with size proportional to the message length.
A memory checker for a data structure provides a method to check that the output of the data structure operations is consistent with the input, even if the data is stored on some insecure medium. In [8] we present a general solution for all data structures that are based on insert(i,v) and delete(j) commands. In particular this includes stacks, queues, deques (double-ended queues) and lists. Here, we describe more time- and space-efficient solutions for stacks, queues and deques. Each algorithm takes only a single function evaluation of a pseudorandom-like function like DES or a collision-free hash function like MD5 or SHA for each push/pop resp. enqueue/dequeue command, making our methods applicable to smart cards.
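The flavour of such a checker can be conveyed with a hedged sketch (this is an illustration of the general hash-chain idea, not the paper's actual scheme): the checker keeps a single constant-size digest in trusted memory and spends one hash evaluation per push/pop, while all values live in untrusted storage.

```python
import hashlib

class CheckedStack:
    # Illustrative stack memory checker: the trusted state is one digest;
    # the untrusted storage holds (value, previous digest) pairs.
    def __init__(self):
        self.digest = b"\x00" * 32      # trusted, constant-size state
        self.storage = []               # untrusted memory

    def push(self, value: bytes):
        self.storage.append((value, self.digest))
        self.digest = hashlib.sha256(self.digest + value).digest()

    def pop(self) -> bytes:
        value, prev = self.storage.pop()            # untrusted answer
        if hashlib.sha256(prev + value).digest() != self.digest:
            raise ValueError("memory checker: inconsistency detected")
        self.digest = prev
        return value
```

Any modification of a stored value or of the chained digests makes the recomputed hash disagree with the trusted digest, so tampering is detected at the next pop.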
We present efficient non-malleable commitment schemes based on standard assumptions such as RSA and Discrete-Log, and under the condition that the network provides publicly available RSA or Discrete-Log parameters generated by a trusted party. Our protocols require only three rounds and a few modular exponentiations. We also discuss the difference between the notion of non-malleable commitment schemes used by Dolev, Dwork and Naor [DDN00] and the one given by Di Crescenzo, Ishai and Ostrovsky [DIO98].
We address the problem of factoring a large composite number by lattice reduction algorithms. Schnorr has shown that, under reasonable number-theoretic assumptions, this problem can be reduced to a simultaneous Diophantine approximation problem. The latter in turn can be solved by finding sufficiently many l_1-short vectors in a suitably defined lattice. Using lattice basis reduction algorithms, Schnorr and Euchner applied Schnorr's reduction technique to 40-bit long integers. Their implementation needed several hours to compute a 5% fraction of the solution, i.e., 6 out of the 125 congruences which are necessary to factorize the composite. In this report we describe a more efficient implementation using stronger lattice basis reduction techniques incorporating ideas of Schnorr, Hörner and Ritter. For 60-bit long integers our algorithm yields a complete factorization in less than 3 hours.
Based on the quadratic residuosity assumption we present a non-interactive crypto-computing protocol for the greater-than function, i.e., a non-interactive procedure between two parties such that only the relation of the parties' inputs is revealed. In comparison to previous solutions our protocol reduces the number of modular multiplications significantly. We also discuss applications to conditional oblivious transfer, private bidding and the millionaires' problem.
We propose a new security measure for commitment protocols, called Universally Composable (UC) Commitment. The measure guarantees that commitment protocols behave like an "ideal commitment service," even when concurrently composed with an arbitrary set of protocols. This is a strong guarantee: it implies that security is maintained even when an unbounded number of copies of the scheme are running concurrently, it implies non-malleability (not only with respect to other copies of the same protocol but even with respect to other protocols), it provides resilience to selective decommitment, and more. Unfortunately, two-party UC commitment protocols do not exist in the plain model. However, we construct two-party UC commitment protocols, based on general complexity assumptions, in the common reference string model where all parties have access to a common string taken from a predetermined distribution. The protocols are non-interactive, in the sense that both the commitment and the opening phases consist of a single message from the committer to the receiver.
We review the representation problem based on factoring and show that this problem gives rise to alternative solutions to many cryptographic protocols in the literature. While the solutions so far usually either rely on the RSA problem or on the intractability of factoring integers of a special form (e.g., Blum integers), the solutions here work with the most general factoring assumption. Protocols we discuss include identification schemes secure against parallel attacks, secure signatures, blind signatures and (non-malleable) commitments.
We show that non-interactive statistically-secret bit commitment cannot be constructed from arbitrary black-box one-to-one trapdoor functions, and thus from general public-key cryptosystems. By reducing the problems of non-interactive crypto-computing, rerandomizable encryption, non-interactive statistically-sender-private oblivious transfer and low-communication private information retrieval to such commitment schemes, it follows that these primitives are not constructible from one-to-one trapdoor functions and public-key encryption in general either. Furthermore, our separation sheds some light on statistical zero-knowledge proofs. There is an oracle relative to which one-to-one trapdoor functions and one-way permutations exist, while the class of promise problems with statistical zero-knowledge proofs collapses to P. This indicates that nontrivial problems with statistical zero-knowledge proofs require more than (trapdoor) one-wayness.
We show lower bounds for the signature size of incremental schemes which are secure against substitution attacks and support single block replacement. We prove that for documents of n blocks such schemes produce signatures of \Omega(n^(1/(2+c))) bits for any constant c>0. For schemes accessing only a single block (resp. a constant number of blocks) for each replacement, this bound can be raised to \Omega(n) (resp. \Omega(sqrt(n))). Additionally, we show that our technique yields a new lower bound for memory checkers.