Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) borrow from multiple banks, of which one may be special in the sense of relationship lending. However, theory says little about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending against the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flows or high interim liquidation values of assets tend to finance without a relationship bank.
Tractable hedging - an implementation of robust hedging strategies : [This Version: March 30, 2004]
(2004)
This paper provides a theoretical and numerical analysis of robust hedging strategies in diffusion-type models, including stochastic volatility models. A robust hedging strategy avoids any losses as long as the realised volatility stays within a given interval. We focus on the effects of restricting the set of admissible strategies to tractable strategies, defined as sums of Gaussian strategies. Although a trivial Gaussian hedge is either not robust or prohibitively expensive, this is not the case for the cheapest tractable robust hedge, which consists of two Gaussian hedges for a long and a short position in convex claims that have to be chosen optimally.
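To make the robustness property concrete: for a convex claim such as a call, the Black-Scholes price is increasing in volatility, so delta-hedging at the upper end of the volatility interval already yields a robust (though generally expensive) hedge. The following is a minimal numerical sketch of that baseline, not the paper's tractable two-Gaussian construction; all market parameters are invented for illustration.

```python
# Minimal sketch: robust hedging of a convex claim by pricing at the upper
# volatility bound (illustrative baseline only; parameters are made up).
from math import exp, log, sqrt
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price and delta of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2), norm.cdf(d1)

S, K, T, r = 100.0, 100.0, 1.0, 0.02
sigma_lo, sigma_hi = 0.15, 0.35        # assumed admissible volatility interval
robust_price, robust_delta = bs_call(S, K, T, r, sigma_hi)
lower_price, _ = bs_call(S, K, T, r, sigma_lo)
# The gap between the two prices is the cost of robustness that the cheapest
# tractable hedge tries to reduce.
print(f"hedge cost at sigma_hi: {robust_price:.2f}, at sigma_lo: {lower_price:.2f}")
```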
We study the approximability of the following NP-complete (in their feasibility recognition forms) number-theoretic optimization problems:

1. Given n numbers a_1, ..., a_n ∈ Z, find a minimum gcd set for a_1, ..., a_n, i.e., a subset S ⊆ {a_1, ..., a_n} of minimum cardinality satisfying gcd(S) = gcd(a_1, ..., a_n).

2. Given n numbers a_1, ..., a_n ∈ Z, find an l_∞-minimum gcd multiplier for a_1, ..., a_n, i.e., a vector x ∈ Z^n with minimum max_{1≤i≤n} |x_i| satisfying Σ_{i=1}^{n} ...
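As a concrete baseline for problem 1, a simple greedy heuristic (our illustration, not an algorithm from the paper, which studies approximability bounds) repeatedly adds the number that shrinks the running gcd the most:

```python
# Greedy heuristic for a minimum gcd set (illustrative; no approximation
# guarantee is claimed here).
from functools import reduce
from math import gcd

def greedy_gcd_set(a):
    target = reduce(gcd, a)
    chosen, g = [], 0                          # gcd(0, x) == x starts the chain
    while g != target:
        best = min(a, key=lambda x: gcd(g, x))  # pick the largest gcd reduction
        chosen.append(best)
        g = gcd(g, best)
    return chosen

print(greedy_gcd_set([12, 18, 20, 15]))        # -> [12, 15, 20], gcd 1
```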
To preserve the required beam quality in an e+/e- collider, it is necessary to have very precise beam position control at each accelerating cavity. An elegant method that avoids additional length and beam disturbance is to use signals from existing HOM dampers. The magnitude of the displacement is derived from the amplitude of a dipole mode, whereas the sign follows from the phase comparison of a dipole and a monopole HOM. To check the performance of the system, a measurement setup has been built with an antenna that can be moved with micrometer resolution to simulate the beam. Furthermore, we have developed a signal-processing scheme to determine the absolute beam displacement. Measurements on the HOM-damper cell can be done in the frequency domain using a network analyser. Final measurements with the nonlinear, time-dependent signal-processing circuit have to be done with very short electric pulses simulating electron bunches. Thus, we have designed a sub-nanosecond pulse generator using a clipping line and the step-recovery effect of a diode. The measurement can be done with a resolution of about 10 micrometers. Measurements and numerical calculations concerning the monitor design and the pulse generator are presented.
Although the commoditisation of illiquid asset exposures through securitisation facilitates the disciplining effect of capital markets on risk management, private information about securitised debt as well as complex transaction structures can impair fair market valuation. In a simple issue design model without intermediaries, we maximise issuer proceeds over a positive measure of issue quality, where a direct revelation mechanism (DRM) by profitable informed investors engages endogenous price discovery through auction-style allocation preference as a continuous function of perceived issue quality. We derive an optimal allocation schedule for maximum issuer payoffs under different pricing regimes when asymmetric information requires underpricing. In particular, we study how the incidence of uninformed investors at varying levels of valuation uncertainty, and their role in clearing the market, affects profitable informed investment. We find that the issuer optimises its own payoffs at each valuation, irrespective of the applicable pricing mechanism, by awarding informed investors the lowest possible allocation (and attendant underpricing) that still guarantees profitable informed investment. Under uniform pricing, the composition of the investor pool ensures that informed investors appropriate higher profit than uninformed types. Any reservation utility by issuers lowers the probability of information disclosure by informed investors and the scope of issuers to curtail profitable informed investment. JEL Classifications: D82, G12, G14, G23
Asset securitisation as a risk management and funding tool : what does it hold in store for SMEs?
(2005)
The following chapter critically surveys the attendant benefits and drawbacks of asset securitisation for both financial institutions and firms. It also elicits salient lessons about the securitisation of SME-related obligations from a cursory review of SME securitisation in Germany, an early case of asset securitisation in a bank-centred financial system paired with a strong presence of SMEs in industrial production. JEL Classification: D81, G15, M20
Amid ambivalence in the regulatory definition of capital adequacy for credit risk and the quest for more efficient refinancing sources, collateralised loan obligations (CLOs) have become a prominent securitisation mechanism. This paper presents a loss-based asset pricing model for the valuation of the constituent tranches within a CLO-style security design. The model specifically examines how tranche subordination translates securitised credit risk into investment risk of issued tranches as beneficial interests on a designated loan pool typically underlying a CLO transaction. We obtain a tranche-specific term structure from an intensity-based simulation of defaults under both robust statistical analysis and extreme value theory (EVT). Loss sharing between issuers and investors according to a simplified subordination mechanism allows issuers to decompose securitised credit risk exposures into a collection of default-sensitive debt securities with divergent risk profiles and expected investor returns. Our estimation results suggest a dichotomous effect of loss cascading, with the default term structure of the most junior tranche of CLO transactions ("first loss position") being distinctly different from that of the remaining, more senior "investor tranches". The first loss position carries large expected loss (with high investor return) and low leverage, whereas all other tranches mainly suffer from loss volatility (unexpected loss). These findings might explain why issuers retain the most junior tranche as credit enhancement to attenuate asymmetric information between issuers and investors. At the same time, issuer discretion in the configuration of loss subordination within a particular security design might give rise to implicit investment risk in senior tranches in the event of systemic shocks. JEL Classifications: C15, C22, D82, F34, G13, G18, G20
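A stylised Monte Carlo sketch of the loss-cascading mechanism follows; it replaces the paper's intensity-based default model with a one-factor Gaussian copula and uses invented pool and tranche parameters, but it reproduces the qualitative dichotomy: the first-loss piece carries expected loss, while the more senior tranches are mainly exposed to loss volatility.

```python
# Stylised CLO loss waterfall (illustrative parameters; one-factor Gaussian
# copula instead of the paper's intensity-based simulation).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_loans, pd_, lgd, rho, n_sims = 100, 0.02, 0.45, 0.25, 100_000
attach = [0.0, 0.05, 0.15, 1.0]               # tranche bands: equity/mezz/senior

z = rng.standard_normal((n_sims, 1))          # common factor
eps = rng.standard_normal((n_sims, n_loans))  # idiosyncratic factors
defaults = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < norm.ppf(pd_)
pool_loss = defaults.mean(axis=1) * lgd       # pool loss, fraction of notional

for lo, hi, name in zip(attach, attach[1:], ["first loss", "mezzanine", "senior"]):
    # each tranche absorbs pool losses between its attachment points
    loss = np.clip(pool_loss - lo, 0.0, hi - lo) / (hi - lo)
    print(f"{name:10s}  E[loss]={loss.mean():.4f}  sd={loss.std():.4f}")
```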
See also the German version: Rechtshistorisches Journal 15, 1996, 255-290, and in: Eric Schwarz (ed.), La théorie des systèmes: une approche inter- et transdisciplinaire. Bösch, Sion 1996, 101-119. Italian version: La Bukowina globale: il pluralismo giuridico nella società mondiale. Sociologia e politiche sociali 2, 1999, 49-80. Portuguese version: Bukowina global sobre a emergência de um pluralismo jurídico transnacional. Impulso: Direito e Globalização 14, 2003. Georgian version: Globaluri bukovina: samarTlebrivi pluralizmi msoflio sazogadoebaSi. Journal of the Institute of State and Law of the Georgian Academy of Sciences 2005 (forthcoming).
In recent years, much effort has gone into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since they reduce the knowledge-engineering effort for designing and optimizing the strategies, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, we investigate what might be gained by combining the best of both worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, we show that, while exhibiting additional advantages, ROSANA-ML reaches a performance level comparable to that of its manually designed ancestor ROSANA.
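The division of labour can be illustrated in a few lines: universally valid filters are hand-coded once and for all, while the preference among surviving candidates comes from a (possibly genre-specific) learned scorer. Everything below is a toy illustration with invented names and features, not ROSANA-ML's actual design.

```python
# Toy hybrid anaphor resolver: hand-coded agreement filter + learned preference.
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    gender: str     # 'm' / 'f' / 'n'
    number: str     # 'sg' / 'pl'
    sent_dist: int  # sentence distance to the anaphor

def agrees(anaphor, cand):
    """Once-for-all, hand-coded filter: number and gender agreement."""
    return anaphor.gender == cand.gender and anaphor.number == cand.number

def learned_score(cand):
    """Stand-in for a machine-learned selection model; here: prefer recency."""
    return -cand.sent_dist

def resolve(anaphor, candidates):
    survivors = [c for c in candidates if agrees(anaphor, c)]
    return max(survivors, key=learned_score, default=None)

cands = [Mention("Mary", "f", "sg", 1), Mention("John", "m", "sg", 2)]
print(resolve(Mention("he", "m", "sg", 0), cands).text)   # -> John
```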
We address the problem of factoring a large composite number by lattice reduction algorithms. Schnorr has shown that under reasonable number-theoretic assumptions this problem can be reduced to a simultaneous Diophantine approximation problem, which in turn can be solved by finding sufficiently many l_1-short vectors in a suitably defined lattice. Using lattice basis reduction algorithms, Schnorr and Euchner applied Schnorr's reduction technique to 40-bit integers. Their implementation needed several hours to compute a 5% fraction of the solution, i.e., 6 out of the 125 congruences necessary to factor the composite. In this report we describe a more efficient implementation using stronger lattice basis reduction techniques that incorporate ideas of Schnorr, Hoerner and Ritter. For 60-bit integers our algorithm yields a complete factorization in less than 3 hours.
Given a real vector α = (α_1, ..., α_d) and a real number ε > 0, a good Diophantine approximation to α is a number Q such that ‖Qα mod Z‖_∞ ≤ ε, where ‖·‖_∞ denotes the maximum norm ‖x‖_∞ := max_{1≤i≤d} |x_i| for x = (x_1, ..., x_d). Lagarias [12] proved the NP-completeness of the corresponding decision problem: given a vector α ∈ Q^d, a rational number ε > 0 and a number N ∈ N_+, decide whether there exists a number Q with 1 ≤ Q ≤ N and ‖Qα mod Z‖_∞ ≤ ε. We prove that, unless ...
We generalize the concept of block reduction for lattice bases from the l_2-norm to arbitrary norms, extending results of Schnorr. We give algorithms for block reduction and apply the resulting enumeration concept to subset sum problems. The deterministic algorithm solves all subset sum problems; for up to 66 weights it needs on average less than two hours on an HP 715/50 under HP-UX 9.05.
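The link between lattice reduction and subset sum can be made concrete with the classical Lagarias-Odlyzko embedding, sketched below with the fpylll package for the reduction step (our illustration of the basic construction; the paper's contribution is stronger block reduction in arbitrary norms).

```python
# Subset sum via lattice reduction: Lagarias-Odlyzko embedding, then LLL.
# Illustrative only; low-density toy instance, fpylll assumed installed.
from fpylll import IntegerMatrix, LLL

a = [8226, 12280, 15321, 19148, 23561, 29985]   # weights
s = a[0] + a[2] + a[5]                          # target with a known solution
n, N = len(a), 100                              # N penalises wrong subset sums

rows = [[1 if j == i else 0 for j in range(n)] + [N * a[i]] for i in range(n)]
rows.append([0] * n + [N * s])                  # subtracting this row zeroes
B = IntegerMatrix.from_matrix(rows)             # the last coordinate
LLL.reduction(B)

# A reduced row (x_1, ..., x_n, 0) with all x_i in {0,1} (or all in {0,-1})
# encodes a subset with sum s.
for i in range(B.nrows):
    v = [B[i, j] for j in range(B.ncols)]
    if v[-1] == 0 and any(v[:-1]) and (set(v[:-1]) <= {0, 1} or set(v[:-1]) <= {0, -1}):
        print("subset found:", [a[k] for k in range(n) if v[k] != 0])
```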
We present an efficient variant of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82]. We organize LLL-reduction in segments of size k. Local LLL-reduction of segments is done using local coordinates of dimension 2k. Strong segment LLL-reduction yields bases of the same quality as LLL-reduction, but the reduction is n times faster for lattices of dimension n. We extend segment LLL-reduction to iterated subsegments. The resulting reduction algorithm runs in O(n^3 log n) arithmetic steps for integer lattices of dimension n with basis vectors of length 2^{O(n)}, compared to O(n^5) steps for LLL-reduction.
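For reference, a plain textbook LLL (δ = 3/4) fits in a few dozen lines of exact rational arithmetic; the sketch below recomputes the Gram-Schmidt data from scratch at every step for clarity, which is precisely the kind of cost the segment technique above is designed to avoid.

```python
# Plain LLL reduction (delta = 3/4) in exact rational arithmetic.
# Deliberately simple: full Gram-Schmidt recomputation per step.
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def gso():
        """Gram-Schmidt: orthogonal vectors b* and coefficients mu."""
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wi for vi, wi in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            _, mu = gso()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gso()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                            # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]   # swap and backtrack
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # classic small example
```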
We study the following problem: given x ∈ R^n, either find a short integer relation m ∈ Z^n, so that <x, m> = 0 holds for the inner product <·,·>, or prove that no short integer relation exists for x. Hastad, Just, Lagarias and Schnorr (1989) give a polynomial time algorithm for this problem. We present a stable variation of the HJLS algorithm that preserves lower bounds on λ(x) for infinitesimal changes of x. Given x ∈ R^n and α ∈ N, this algorithm finds a nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no very short relation exists for points x̄ within half the x'-distance from x. On the other hand, if x' = x then m is, up to a factor 2^{n/2}, a shortest integer relation for x. Our algorithm uses, for arbitrary real input x, at most O(n^4 (n + log α)) arithmetical operations on real numbers. If x is rational, the algorithm operates on integers having at most O(n^5 + n^3 (log α)^2 + log(‖q x‖^2)) bits, where q is the common denominator of x.
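In practice, integer relations of this kind are found with algorithms from the HJLS/PSLQ family; mpmath ships one. A short illustration (not the paper's stability-preserving variant):

```python
# Finding a short integer relation m with <x, m> = 0 using mpmath's PSLQ.
from mpmath import mp, mpf, sqrt, pslq

mp.dps = 30                            # 30 decimal digits of working precision
x = [mpf(1), sqrt(2), 1 + sqrt(2)]     # hidden relation: x[0] + x[1] - x[2] = 0
print(pslq(x))                         # -> [1, 1, -1] (up to sign)
```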
Black box cryptanalysis applies to hash algorithms consisting of many small boxes, connected by a known graph structure, so that the boxes can be evaluated forward and backwards by given oracles. We study attacks that work for any choice of the black boxes, i.e., we scrutinize the given graph structure. For example, we analyze the graph of the fast Fourier transform (FFT). We present optimal black box inversions of FFT-compression functions and black box constructions of collisions. This determines the minimal depth of FFT-compression networks for collision-resistant hashing. We propose the concept of the multipermutation, which is a pair of orthogonal latin squares, as a new cryptographic primitive that generalizes the boxes of the FFT. Our examples of multipermutations are based on the operations circular rotation, bitwise xor, addition, and multiplication.
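The multipermutation concept is easy to check computationally. Below we verify a toy example of our own choosing (not one of the paper's FFT-based constructions): over Z_n with n odd, the map (x, y) -> (x + y, x + 2y) mod n yields a pair of orthogonal Latin squares.

```python
# Sanity check: (x + y, x + 2y) mod n gives two orthogonal Latin squares
# for odd n (toy example, not from the paper).
n = 5
L1 = [[(x + y) % n for y in range(n)] for x in range(n)]
L2 = [[(x + 2 * y) % n for y in range(n)] for x in range(n)]

def is_latin(L):
    rows_ok = all(len(set(row)) == n for row in L)
    cols_ok = all(len({L[x][y] for x in range(n)}) == n for y in range(n))
    return rows_ok and cols_ok

# orthogonality: the pair (L1[x][y], L2[x][y]) never repeats
pairs = {(L1[x][y], L2[x][y]) for x in range(n) for y in range(n)}
print(is_latin(L1), is_latin(L2), len(pairs) == n * n)   # True True True
```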
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. In practice, however, they lack efficiency, as they are hard to enforce. In this paper, we discuss a supportive infrastructure targeting the distribution channel: as soon as the picture is publicly available, the exposed individual has a chance to find it and take proper action.
Correction to: C.P. Schnorr: Security of 2^t-Root Identification and Signatures, Proceedings CRYPTO '96, Springer LNCS 1109 (1996), pp. 143-156; page 148, section 3, line 5 of the proof of Theorem 3. The correction was presented as "Factoring N via proper 2^t-roots of 1 mod N" at the Eurocrypt '97 rump session.
Let G be a finite cyclic group with generator α and with an encoding so that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given exp_α(x), assuming that the exponentiation function exp_α(x) = α^x is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q, the security of the least-significant bits of x and of the most-significant bits of the rational number x/q ∈ [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive shift bits lsb(2^{-i} x mod q) for i = k+1, ..., k+j. When we restrict exp_α to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x), we call it a 2^{-j}-fraction of exp_α. For groups of odd order q we show that every two 2^{-j}-fractions of exp_α are equally one-way by a polynomial time transformation: either they are all one-way or none of them. Our key theorem shows that arbitrary j consecutive shift bits of x are simultaneously secure when given exp_α(x) iff the 2^{-j}-fractions of exp_α are one-way. In particular this applies to the j least-significant bits of x and to the j most-significant bits of x/q ∈ [0,1). For one-way exp_α the individual bits of x are secure when given exp_α(x) by the method of Hastad and Näslund [HN98]. For groups of even order 2^s q we show that the j least-significant bits of ⌊x/2^s⌋, as well as the j most-significant bits of x/q ∈ [0,1), are simultaneously secure iff the 2^{-j}-fractions of exp_{α'} are one-way for α' := α^{2^s}. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of exp_α for the case that α has prime order q. As a consequence, arbitrary segments of (1-ε) lg q consecutive shift bits of random x are, for constant ε > 0, simultaneously secure against generic attacks. Every generic algorithm using t generic steps (group operations) for distinguishing bit strings of j consecutive shift bits of x from random bit strings has advantage at most O((lg q) j √t (2^j/q)^{1/4}).
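The shift bits themselves are straightforward to compute once 2 is invertible modulo the odd order q, which may make the statements above easier to parse (a plain illustration; q and x are arbitrary toy values):

```python
# Computing the consecutive shift bits lsb(2^{-i} x mod q) for odd q.
q = 1000003                        # odd (here prime) toy group order
x = 123456                         # toy discrete log
inv2 = pow(2, -1, q)               # 2^{-1} mod q (Python 3.8+)
bits = [(x * pow(inv2, i, q)) % q & 1 for i in range(1, 9)]
print(bits)                        # lsb(2^{-i} x mod q) for i = 1..8
```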
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^{O(n)}, SLLL-reduction runs in O(n^{5+ε}) bit operations for every ε > 0, compared to O(n^{7+ε}) for the original LLL algorithm and O(n^{6+ε}) for the LLL-algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments, running in O(n^3 log n) arithmetic steps.
We present a practical algorithm that, given an LLL-reduced lattice basis of dimension n, runs in time O(n^3 (k/6)^{k/4} + n^4) and approximates the length of the shortest non-zero lattice vector to within a factor (k/6)^{n/(2k)}. This result is based on reasonable heuristics. Compared to previous practical algorithms, the new method reduces the proven approximation factor achievable in a given time to less than its fourth root. We also present a sieve algorithm inspired by Ajtai, Kumar, and Sivakumar [AKS01].
We consider Schwarz maps for triangles whose angles are rather general rational multiples of π. Under which conditions can they have algebraic values at algebraic arguments? The answer is based mainly on considerations of complex multiplication of certain Prym varieties in Jacobians of hypergeometric curves. The paper can serve as an introduction to transcendence techniques for hypergeometric functions, but it also contains new results and examples.
We calculate the kaon HBT radius parameters for high energy heavy ion collisions, assuming a first-order phase transition from a thermalized quark-gluon plasma to a gas of hadrons. At high transverse momenta, K_T ~ 1 GeV/c, direct emission from the phase boundary becomes important; the emission duration signal, i.e., the R_out/R_side ratio, and its sensitivity to T_c (and thus to the latent heat of the phase transition) are enlarged. Moreover, the QGP+hadronic rescattering transport model calculations do not yield unusually large radii (R_i < 9 fm). Finite momentum resolution effects have a strong impact on the extracted HBT parameters (R_i and lambda) as well as on the ratio R_out/R_side.
We calculate the antibaryon-to-baryon ratios anti-p/p, anti-Lambda/Lambda, anti-Xi/Xi, and anti-Omega/Omega for Au+Au collisions at RHIC (sqrt{s}_{NN} = 200 GeV). The effects of strong color fields, associated with an enhanced strangeness and diquark production probability and with an effective decrease of formation times, are investigated. Antibaryon-to-baryon ratios increase with the color field strength. The ratios also increase with the strangeness content |S|. The net-baryon number at midrapidity increases considerably with the color field strength, while the net-proton number remains roughly the same. This shows that the enhanced baryon transport involves a conversion into the hyperon sector (hyperonization), which can be observed in the (Lambda - anti-Lambda)/(p - anti-p) ratio.
Report-no: UFTP-492/1999. Journal-ref: Phys. Rev. C61 (2000) 024909.
We investigate flow in semi-peripheral nuclear collisions at AGS and SPS energies within macroscopic as well as microscopic transport models. The hot and dense zone assumes the shape of an ellipsoid which is tilted by an angle Theta with respect to the beam axis. If matter is close to the softest point of the equation of state, this ellipsoid expands predominantly orthogonally to the direction given by Theta. This antiflow component is responsible for the previously predicted reduction of the directed transverse momentum around the softest point of the equation of state.
The wide-area deployment of WiFi hot spots challenges IP access providers. While they seek new profit models, both the profitability and the logistics of large-scale deployment of 802.11 wireless technology are still to be proven. Expenditure for hardware, locations, maintenance, connectivity, marketing, billing and customer care must be considered. Even for large carriers with existing infrastructure, deploying a large-scale WiFi network may be risky. This paper proposes a multi-level scheme for hot spot distribution and customer acquisition that reduces the financial risk, the cost of marketing and the cost of maintenance for the large-scale deployment of WiFi hot spots.
Central wage bargaining and local wage flexibility : evidence from the entire wage distribution
(1998)
We argue that in labor markets with central wage bargaining, wage flexibility varies systematically across the wage distribution: local wage flexibility is more relevant for the upper part of the wage distribution, while the flexibility of wages negotiated under central wage bargaining affects the lower part. Using a random sample of German social-security accounts, we estimate wage flexibility across the wage distribution by means of quantile regressions. The results support our hypothesis, as employees with low wages have significantly lower local wage flexibility than high-wage employees. This effect is particularly relevant for the lower educational groups. On the other hand, employees with low wages tend to have higher wage flexibility with respect to national unemployment.
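The estimation idea translates directly into a quantile-regression exercise; the sketch below uses synthetic data with the hypothesised pattern built in (a weaker local response at the bottom of the distribution), not the paper's social-security sample.

```python
# Quantile regressions of log wages on a local labour-market variable
# (synthetic data; statsmodels' quantreg assumed available).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000
local_unemp = rng.normal(8.0, 2.0, n)          # local unemployment rate, %
noise = rng.normal(0.0, 0.2, n)
# built-in pattern: high (conditional) wages react more to local conditions
log_wage = 3.0 - (0.005 + 0.02 * (noise > 0)) * local_unemp + noise
df = pd.DataFrame({"log_wage": log_wage, "local_unemp": local_unemp})

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("log_wage ~ local_unemp", df).fit(q=q)
    print(f"q={q}: slope on local unemployment = {fit.params['local_unemp']:+.4f}")
```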
This paper shows that abnormal stock price returns around open market repurchase announcements are about four times higher in Germany than in the US (12% versus 3%). We hypothesize that this observation can be explained by country differences in repurchase regulation. Our empirical evidence indicates that German managers primarily buy back shares to signal an undervaluation of their firm. We demonstrate that the stringent repurchase process prescribed by German law lends such a signal higher credibility than the lax US regulations, thereby corroborating our hypothesis.
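For readers unfamiliar with how such announcement returns are measured, here is a minimal market-model event-study sketch on synthetic data (not the paper's sample or exact methodology):

```python
# Market-model event study: abnormal return = actual return minus the
# market-model prediction estimated on a pre-event window (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
mkt = rng.normal(0.0003, 0.01, 250)                 # daily market returns
stock = 0.0001 + 1.1 * mkt + rng.normal(0, 0.01, 250)
stock[200] += 0.12                                  # announcement-day jump

beta, alpha = np.polyfit(mkt[:150], stock[:150], 1) # estimation-window fit
abnormal = stock - (alpha + beta * mkt)             # abnormal returns
car = abnormal[198:203].sum()                       # CAR over days [-2, +2]
print(f"cumulative abnormal return around the announcement: {car:.3f}")
```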