We introduce algorithms for lattice basis reduction that are improvements of the famous L³-algorithm. If a random L³-reduced lattice basis b_1, b_2, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients (µ_{i,j})_{1 ≤ j < i ≤ n} is uniformly distributed in [0,1)^{n(n-1)/2}, then the pruned enumeration finds a shortest lattice vector with positive probability. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151, and by breaking Damgård's hash function.
We call a vector x ∈ R^n highly regular if it satisfies <x, m> = 0 for some short, non-zero integer vector m, where <.,.> is the inner product. We present an algorithm which, given x ∈ R^n and α ∈ N, finds a highly regular nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no short relation m~ of length less than α/2 exists for points x~ within half the x'-distance from x. The integer relation m for x' is, for random x, up to an average factor 2^{α/2} a shortest integer relation for x'. Our algorithm uses, for arbitrary real input x, at most O(n^4(n + log α)) arithmetical operations on real numbers. If x is rational, the algorithm operates on integers having at most O(n^5 + n^3(log α)^2 + log(||qx||^2)) bits, where q is the common denominator of x.
We study the following problem: given x ∈ R^n, either find a short integer relation m ∈ Z^n such that <x, m> = 0 holds for the inner product <.,.>, or prove that no short integer relation exists for x. Håstad, Just, Lagarias and Schnorr (1989) give a polynomial time algorithm for the problem. We present a stable variation of the HJLS-algorithm that preserves lower bounds on λ(x) for infinitesimal changes of x. Given x ∈ R^n and α ∈ N, this algorithm finds a nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no very short relation exists for points x~ within half the x'-distance from x. On the other hand, if x' = x then m is, up to a factor 2^{n/2}, a shortest integer relation for x. Our algorithm uses, for arbitrary real input x, at most O(n^4(n + log α)) arithmetical operations on real numbers. If x is rational, the algorithm operates on integers having at most O(n^5 + n^3(log α)^2 + log(||qx||^2)) bits, where q is the common denominator of x.
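To pin down the object these two abstracts work with, here is a minimal sketch in Python that brute-forces a short integer relation m with <x, m> = 0 for a small rational x. The example vector and the coefficient bound B are made up for illustration; the HJLS-type algorithms above find such relations in polynomial time via lattice reduction rather than enumeration.

```python
# Brute-force search for a short integer relation m (with <x, m> = 0) of a
# rational vector x, just to illustrate the definition; toy example only.
from fractions import Fraction
from itertools import product

x = (Fraction(1, 2), Fraction(1, 3), Fraction(5, 12))
B = 3  # coefficient bound for the toy search

best = None
for m in product(range(-B, B + 1), repeat=len(x)):
    if any(m) and sum(mi * xi for mi, xi in zip(m, x)) == 0:
        norm2 = sum(mi * mi for mi in m)
        if best is None or norm2 < best[1]:
            best = (m, norm2)

print(best)  # a shortest relation, e.g. (1, 1, -2) up to sign, squared norm 6
```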
Black box cryptanalysis applies to hash algorithms consisting of many small boxes, connected by a known graph structure, so that the boxes can be evaluated forwards and backwards via given oracles. We study attacks that work for any choice of the black boxes, i.e., we scrutinize the given graph structure. For example, we analyze the graph of the fast Fourier transform (FFT). We present optimal black box inversions of FFT-compression functions and black box constructions of collisions. This determines the minimal depth of FFT-compression networks for collision-resistant hashing. We propose the concept of a multipermutation, which is a pair of orthogonal Latin squares, as a new cryptographic primitive that generalizes the boxes of the FFT. Our examples of multipermutations are based on the operations circular rotation, bitwise xor, addition, and multiplication.
Parallel FFT-hashing
(1994)
We propose two families of scalable hash functions for collision-resistant hashing that are highly parallel and based on the generalized fast Fourier transform (FFT). FFT hashing is based on multipermutations, a basic cryptographic primitive for the perfect generation of diffusion and confusion which generalizes the boxes of the classic FFT. The slower FFT hash functions iterate a compression function. For the faster FFT hash functions all rounds are alike, with the same number of message words entering each round.
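To make the multipermutation notion concrete, the sketch below brute-force checks that a toy 2-input/2-output map is a pair of orthogonal Latin squares, which is the definition used above. The particular map (a, b) -> (a + b, a + 2b) mod 5 is a made-up example chosen for simplicity, not one of the rotation/xor/addition/multiplication boxes from the papers.

```python
# Check that f: (a, b) -> (f1, f2) is a multipermutation on Z_n, i.e. that
# f1 and f2 form a pair of orthogonal Latin squares. Toy example over Z_5.
n = 5

def f(a, b):
    return ((a + b) % n, (a + 2 * b) % n)  # 2 is invertible mod 5

def is_multipermutation(f, n):
    # Latin square property: fixing either input, each output coordinate
    # runs through all of Z_n.
    for a in range(n):
        outs = [f(a, b) for b in range(n)]
        if any(len({o[c] for o in outs}) != n for c in (0, 1)):
            return False
    for b in range(n):
        outs = [f(a, b) for a in range(n)]
        if any(len({o[c] for o in outs}) != n for c in (0, 1)):
            return False
    # Orthogonality: the output pairs cover all n*n combinations exactly once.
    return len({f(a, b) for a in range(n) for b in range(n)}) == n * n

print(is_multipermutation(f, n))  # True
```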
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating point version of the L³-algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the L³-algorithm with "deep insertions" and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC 1+ computer.
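For reference, here is a minimal textbook L³ sketch in Python with exact rational arithmetic and δ = 3/4. It recomputes the Gram-Schmidt data after every change for simplicity, so it is far from the floating-point and deep-insertion variants this paper proposes; it only shows the reduction skeleton those variants improve on, and it assumes the input rows are linearly independent.

```python
# Minimal textbook LLL (L^3) reduction with delta = 3/4, exact arithmetic.
from fractions import Fraction

def gram_schmidt(b):
    """Gram-Schmidt orthogonalization: b[i] = b*[i] + sum_j mu[i][j] * b*[j]."""
    n, dim = len(b), len(b[0])
    bstar = [[Fraction(x) for x in v] for v in b]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            denom = sum(x * x for x in bstar[j])
            mu[i][j] = sum(Fraction(b[i][k]) * bstar[j][k] for k in range(dim)) / denom
            for k in range(dim):
                bstar[i][k] -= mu[i][j] * bstar[j][k]
    return bstar, mu

def lll(b, delta=Fraction(3, 4)):
    b = [list(v) for v in b]
    n = len(b)
    norm2 = lambda v: sum(x * x for x in v)
    bstar, mu = gram_schmidt(b)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce b[k]
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                bstar, mu = gram_schmidt(b)
        if norm2(bstar[k]) >= (delta - mu[k][k - 1] ** 2) * norm2(bstar[k - 1]):
            k += 1                               # Lovasz condition holds
        else:
            b[k - 1], b[k] = b[k], b[k - 1]      # swap and step back
            bstar, mu = gram_schmidt(b)
            k = max(k - 1, 1)
    return b

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))   # a basis of short vectors
```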
We call a distribution on n-bit strings (ε, e)-locally random if for every choice of e ≤ n positions the induced distribution on e-bit strings is, in the L1 norm, at most ε away from the uniform distribution on e-bit strings. We establish local randomness in polynomial random number generators (RNG) that are candidate one-way functions. Let N be a squarefree integer and let f_1, ..., f_k be polynomials with coefficients in Z_N = Z/NZ. We study the RNG that stretches a random x ∈ Z_N into the sequence of least significant bits of f_1(x), ..., f_k(x). We show that this RNG provides local randomness if for every prime divisor p of N the polynomials f_1, ..., f_k are linearly independent modulo the subspace of polynomials of degree ≤ 1 in Z_p[x]. We also establish local randomness in polynomial random function generators. This yields candidates for cryptographic hash functions. The concept of local randomness in families of functions extends the concept of universal families of hash functions of Carter and Wegman (1979). The proofs of our results rely on upper bounds for exponential sums.
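The construction is easy to play with empirically. The sketch below builds the RNG for a toy prime modulus and four polynomials of distinct degrees ≥ 2 (distinct degrees make them linearly independent modulo the degree-≤-1 polynomials), then measures the worst L1 distance from uniform over all 2-bit projections. The modulus and polynomials are made-up toy choices, far below cryptographic size.

```python
# Empirical look at local randomness: lsb(f_1(x)), ..., lsb(f_4(x)) for x in Z_N.
from itertools import combinations, product

N = 101  # small prime (hence squarefree); toy size only
polys = [lambda x: x**2 + 3*x + 1,
         lambda x: x**3 + x + 5,
         lambda x: x**4 + x + 2,
         lambda x: x**5 + 2*x + 7]  # distinct degrees >= 2

def output_bits(x):
    return tuple((f(x) % N) & 1 for f in polys)

samples = [output_bits(x) for x in range(N)]  # exact output distribution

e = 2
worst = 0.0
for pos in combinations(range(len(polys)), e):
    counts = {}
    for s in samples:
        key = tuple(s[i] for i in pos)
        counts[key] = counts.get(key, 0) + 1
    l1 = sum(abs(counts.get(k, 0) / N - 2.0**-e)
             for k in product((0, 1), repeat=e))
    worst = max(worst, l1)

print(f"max L1 distance from uniform over {e}-bit projections: {worst:.3f}")
```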
We propose two improvements to the Fiat-Shamir authentication and signature scheme. We reduce the communication of the Fiat-Shamir authentication scheme to a single round while preserving the efficiency of the scheme. This also reduces the length of Fiat-Shamir signatures. Using secret keys consisting of small integers, we reduce the time for signature generation by a factor of 3 to 4. We propose a variation of our scheme using class groups that may be secure even if factoring large integers becomes easy.
We introduce novel security proofs that use combinatorial counting arguments rather than reductions to the discrete logarithm or to the Diffie-Hellman problem. Our security results are sharp and clean, with no polynomial reduction times involved. We consider a combination of the random oracle model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks. In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: a generic adversary performing t generic steps, including l sequential interactions with the signer, cannot produce l+1 signatures with a better probability than (t choose 2)/q. We also characterize the differing power of sequential and of parallel attacks. Secondly, we prove that signed ElGamal encryption is secure against the adaptive chosen ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except for the challenge ciphertext. Moreover, signed ElGamal encryption is secure against the one-more decryption attack: a generic adversary performing t generic steps, including l interactions with the decryption oracle, cannot distinguish the plaintexts of l+1 ciphertexts from random strings with a probability exceeding (t choose 2)/q.
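For readers who have not seen the scheme concretely, here is a toy Schnorr signature in Python over the order-q subgroup of Z_p^* with p = 2q + 1. The parameters are tiny illustrative stand-ins for the ideal group of the model, SHA-256 stands in for the random oracle H, and nothing here is hardened (no constant-time arithmetic, toy sizes).

```python
# Toy Schnorr signature over the order-q subgroup of Z_p^*, p = 2q + 1.
import hashlib
import secrets

p, q, g = 2039, 1019, 4  # toy parameters; 4 = 2^2 generates the order-q subgroup

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1       # secret key in [1, q-1]
    return x, pow(g, x, p)                 # (sk, pk = g^x)

def sign(x, msg):
    r = secrets.randbelow(q - 1) + 1       # ephemeral nonce
    R = pow(g, r, p)                       # commitment
    c = H(R, msg)                          # challenge from the "random oracle"
    return c, (r + c * x) % q              # (c, s)

def verify(y, msg, sig):
    c, s = sig
    R = (pow(g, s, p) * pow(y, -c, p)) % p # g^s * y^(-c) = g^r
    return c == H(R, msg)

sk, pk = keygen()
sig = sign(sk, "hello")
print(verify(pk, "hello", sig), verify(pk, "tampered", sig))  # True False
```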
Assuming a cryptographically strong cyclic group G of prime order q and a random hash function H, we show that ElGamal encryption with an added Schnorr signature is secure against the adaptive chosen ciphertext attack, in which an attacker can freely use a decryption oracle except for the target ciphertext. We also prove security against the novel one-more-decryption attack. Our security proofs are in a new model, corresponding to a combination of two previously introduced models, the random oracle model and the generic model. The security extends to the distributed threshold version of the scheme. Moreover, we propose a very practical scheme for private information retrieval that is based on blind decryption of ElGamal ciphertexts.
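The following sketch shows the structure of such a signed ElGamal ciphertext in the same toy group as above: an ordinary ElGamal pair plus a Schnorr proof of knowledge of the encryption randomness r, verified before decryption. Again, the group, hash and parameter sizes are illustrative assumptions, not the paper's concrete instantiation.

```python
# Toy signed ElGamal: ElGamal ciphertext (u, v) plus a Schnorr proof for u = g^r.
import hashlib
import secrets

p, q, g = 2039, 1019, 4  # same toy Schnorr group as before

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def encrypt(h, m):                       # m must lie in the order-q subgroup
    r = secrets.randbelow(q - 1) + 1
    u, v = pow(g, r, p), (m * pow(h, r, p)) % p
    a = secrets.randbelow(q - 1) + 1     # Schnorr proof of knowledge of r
    c = H(pow(g, a, p), u, v)
    s = (a + c * r) % q
    return u, v, c, s

def decrypt(x, ct):
    u, v, c, s = ct
    A = (pow(g, s, p) * pow(u, -c, p)) % p  # recompute the commitment g^a
    if c != H(A, u, v):
        return None                         # reject malformed ciphertexts
    return (v * pow(u, -x, p)) % p

sk, pk = keygen()
m = pow(g, 123, p)                       # a message encoded in the subgroup
print(decrypt(sk, encrypt(pk, m)) == m)  # True
```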
Let b_1, ..., b_m ∈ R^n be an arbitrary basis of a lattice L that is a block Korkin-Zolotarev basis with block size β, and let λ_i(L) denote the successive minima of the lattice L. We prove that for i = 1, ..., m, (4/(i+3)) γ_β^{-2(i-1)/(β-1)} ≤ ||b_i||^2 / λ_i(L)^2 ≤ γ_β^{2(m-i)/(β-1)} (i+3)/4, where γ_β is the Hermite constant. For β = 3 we establish the optimal upper bound ||b_1||^2 / λ_1(L)^2 ≤ (3/2)^{(m-1)/2}, and we present block Korkin-Zolotarev lattice bases for which this bound is tight. We improve the Nearest Plane Algorithm of Babai (1986) using block Korkin-Zolotarev bases. Given a block Korkin-Zolotarev basis b_1, ..., b_m with block size β and x in the span of b_1, ..., b_m, a lattice point v can be found in time β^{O(β)} satisfying ||x - v||^2 ≤ m γ_β^{2m/(β-1)} min_{u ∈ L} ||x - u||^2.
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened only by governments and companies. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. In practice, however, they lack efficiency because they are hard to enforce. In this paper, we discuss a supportive infrastructure aimed at the distribution channel; as soon as the picture is publicly available, the exposed individual has a chance to find it and take proper action.
Correction to: C.P. Schnorr: Security of 2^t-Root Identification and Signatures, Proceedings CRYPTO '96, Springer LNCS 1109, (1996), pp. 143-156, page 148, section 3, line 5 of the proof of Theorem 3. The correction was presented as "Factoring N via proper 2^t-Roots of 1 mod N" at the Eurocrypt '97 rump session.
Let G be a finite cyclic group with generator α and with an encoding such that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given exp_α(x), assuming that the exponentiation function exp_α(x) = α^x is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q, the security of the least-significant bits of x and of the most-significant bits of the rational number x/q ∈ [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive shift bits lsb(2^{-i} x mod q) for i = k+1, ..., k+j. When we restrict exp_α to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x), we call it a 2^{-j}-fraction of exp_α. For groups of odd order q we show that every two 2^{-j}-fractions of exp_α are equally one-way by a polynomial time transformation: either they are all one-way or none of them. Our key theorem shows that arbitrary j consecutive shift bits of x are simultaneously secure when given exp_α(x) if and only if the 2^{-j}-fractions of exp_α are one-way. In particular, this applies to the j least-significant bits of x and to the j most-significant bits of x/q ∈ [0,1). For one-way exp_α the individual bits of x are secure when given exp_α(x) by the method of Håstad and Näslund [HN98]. For groups of even order 2^s q we show that the j least-significant bits of ⌊x/2^s⌋, as well as the j most-significant bits of x/q ∈ [0,1), are simultaneously secure if and only if the 2^{-j}-fractions of exp_α' are one-way for α' := α^{2^s}. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of exp_α for the case that α has prime order q. As a consequence, arbitrary segments of (1-ε) lg q consecutive shift bits of random x are, for constant ε > 0, simultaneously secure against generic attacks. Every generic algorithm using t generic steps (group operations) for distinguishing bit strings of j consecutive shift bits of x from random bit strings has advantage at most O((lg q) j √t (2^j/q)^{1/4}).
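The "shift bits" above have a direct computational meaning, shown in this small sketch: multiply x by the inverse of 2 modulo the odd group order q and read off least-significant bits. The values of q, x, k and j are arbitrary toy choices.

```python
# Shift bits lsb(2^{-i} x mod q) for i = k+1, ..., k+j, with q odd.
q = 1019                      # odd (toy) group order
inv2 = pow(2, -1, q)          # inverse of 2 mod q (Python 3.8+)

def shift_bits(x, k, j):
    v = (x * pow(inv2, k + 1, q)) % q   # 2^{-(k+1)} x mod q
    bits = []
    for _ in range(j):
        bits.append(v & 1)              # lsb(2^{-i} x mod q)
        v = (v * inv2) % q              # next i
    return bits

print(shift_bits(123, 0, 8))
```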
Let G be a group of prime order q with generator g. We study hardcore subsets H ⊆ G of the discrete logarithm (DL) log_g in the model of generic algorithms. In this model we count group operations such as multiplication and division, while computations with non-group data are free. It is known from Nechaev (1994) and Shoup (1997) that generic DL-algorithms for the entire group G must perform √(2q) generic steps. We show that DL-algorithms for small subsets H ⊆ G require m/2 + o(m) generic steps for almost all H of size #H = m with m ≤ √q. Conversely, m/2 + 1 generic steps are sufficient for all H ⊆ G of even size m. Our main result justifies generating secret DL-keys from seeds that are only (1/2) log_2 q bits long.
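A toy version of the key-generation idea, with the enumeration attack it has to withstand: keys are hashed from k-bit seeds, so the key set H is a "random" subset of Z_q of size about 2^k, and the obvious generic attack enumerates H, needing about #H/2 trials on average. All names and parameters below are illustrative assumptions.

```python
# Short DL-keys from short seeds, and the brute-force attack over the key set.
import hashlib
import secrets

p, q, g = 2039, 1019, 4       # toy Schnorr group
k = 5                         # seed length in bits (toy; think (1/2) log2 q)

def key_from_seed(seed):
    digest = hashlib.sha256(seed.to_bytes(2, "big")).digest()
    return int.from_bytes(digest, "big") % q

H_set = [key_from_seed(s) for s in range(2**k)]   # the key subset H, size ~2^k

x = key_from_seed(secrets.randbelow(2**k))        # secret key from a random seed
y = pow(g, x, p)                                  # public key

# Generic-style attack: test each candidate key in H; ~#H/2 trials on average.
trials = next(i + 1 for i, h in enumerate(H_set) if pow(g, h, p) == y)
print(f"key recovered after {trials} of {len(H_set)} candidates")
```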
We present a novel practical algorithm that, given a lattice basis b_1, ..., b_n, finds in O(n^2 (k/6)^{k/4}) average time a shorter vector than b_1, provided that b_1 is (k/6)^{n/(2k)} times longer than the length of the shortest nonzero lattice vector. We assume that the given basis b_1, ..., b_n has an orthogonal basis that is typical for worst case lattice bases. The new reduction method samples short lattice vectors in high dimensional sublattices and advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.
We enhance the security of Schnorr blind signatures against the novel one-more-forgery of Schnorr [Sc01] and Wagner [W02], which is possible even if the discrete logarithm is hard to compute. We show two limitations of this attack. Firstly, replacing the group G by the s-fold direct product G^s increases the work of the attack, for a given number of signer interactions, to the s-th power, while increasing the work of the blind signature protocol merely by a factor s. Secondly, we bound the number of additional signatures per signer interaction that can be forged effectively. That fraction of additional forged signatures can be made arbitrarily small.
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^{O(n)}, SLLL-reduction runs in O(n^{5+ε}) bit operations for every ε > 0, compared to O(n^{7+ε}) for the original LLL algorithm and to O(n^{6+ε}) for the LLL-algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments running in O(n^3 log n) arithmetic steps.
We show that P(n)_*(P(n)) for p = 2 with its geometrically induced structure maps is not a Hopf algebroid, because neither the augmentation ε nor the coproduct Δ is multiplicative. As a consequence, the algebra structure of P(n)_*(P(n)) is slightly different from what was supposed to be the case. We give formulas for ε(xy) and Δ(xy) and show that the inversion of the formal group of P(n) is induced by an antimultiplicative involution Ξ : P(n) → P(n). Some consequences for multiplicative and antimultiplicative automorphisms of K(n) for p = 2 are also discussed.
The general subset sum problem is NP-complete. However, there are two algorithms, one due to Brickell and the other to Lagarias and Odlyzko, which in polynomial time solve almost all subset sum problems of sufficiently low density. Both methods rely on basis reduction algorithms to find short nonzero vectors in special lattices. The Lagarias-Odlyzko algorithm would solve almost all subset sum problems of density < 0.6463 . . . in polynomial time if it could invoke a polynomial-time algorithm for finding the shortest non-zero vector in a lattice. This paper presents two modifications of that algorithm, either one of which would solve almost all problems of density < 0.9408 . . . if it could find shortest non-zero vectors in lattices. These modifications also yield dramatic improvements in practice when they are combined with known lattice basis reduction algorithms.
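The lattice connection is concrete enough to script. Below is a sketch of the Lagarias-Odlyzko basis (the starting point that the paper's two modifications improve on) for a made-up toy instance: the 0/1 solution of the subset sum corresponds to a short vector in the row lattice, and we verify that correspondence directly; an actual attack would hand this basis to a reduction algorithm such as LLL.

```python
# Lagarias-Odlyzko lattice for a toy subset sum instance: the solution e with
# sum(e_i * a_i) = s yields the short lattice vector (e_1, ..., e_n, 0).
N = 10**4  # scaling factor for the last column

def lo_basis(weights, s):
    n = len(weights)
    rows = []
    for i, a in enumerate(weights):
        row = [0] * n + [N * a]
        row[i] = 1
        rows.append(row)
    rows.append([0] * n + [N * s])  # last row carries the target sum
    return rows

weights = [37, 12, 55, 70, 91]      # made-up weights
solution = [1, 0, 1, 0, 1]          # chosen subset
s = sum(e * a for e, a in zip(solution, weights))

B = lo_basis(weights, s)
coeff = solution + [-1]             # combination (e_1, ..., e_n, -1) of the rows
v = [sum(c * B[i][k] for i, c in enumerate(coeff)) for k in range(len(B[0]))]
print(v)                            # [1, 0, 1, 0, 1, 0]: short, squared norm 3
```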
Public key signature schemes are necessary for access control to communication networks and for proving the authenticity of sensitive messages such as electronic fund transfers. Since the invention of the RSA scheme by Rivest, Shamir and Adleman (1978), research has focused on improving the efficiency of these schemes. In this paper we present an efficient algorithm for generating public key signatures which is particularly suited for interactions between smart cards and terminals.
We present a novel parallel one-more signature forgery against blind Okamoto-Schnorr and blind Schnorr signatures, in which an attacker interacts l times with a legitimate signer and produces from these interactions l+1 signatures. Security against the new attack requires that the following ROS-problem is intractable: find an overdetermined, solvable system of linear equations modulo q with random inhomogeneities (right-hand sides). There is an inherent weakness in the security result of Pointcheval and Stern: Theorem 26 of [PS00] does not cover attacks with 4 parallel interactions for elliptic curves of order 2^200. Covering such attacks would require the intractability of the ROS-problem, a plausible but novel complexity assumption. Conversely, assuming the intractability of the ROS-problem, we show that Schnorr signatures are secure in the random oracle and generic group model against the one-more signature forgery.
We present a practical algorithm that, given an LLL-reduced lattice basis of dimension n, runs in time O(n^3 (k/6)^{k/4} + n^4) and approximates the length of the shortest nonzero lattice vector to within a factor (k/6)^{n/(2k)}. This result is based on reasonable heuristics. Compared to previous practical algorithms, the new method reduces the proven approximation factor achievable in a given time to less than its fourth root. We also present a sieve algorithm inspired by Ajtai, Kumar and Sivakumar [AKS01].
Let G be a Fuchsian group containing two torsion-free subgroups defining isomorphic Riemann surfaces. Then these surface subgroups K and αKα^{-1} are conjugate in PSL(2,R), but in general the conjugating element α cannot be taken in G or a finite index Fuchsian extension of G. We show that in the case of a normal inclusion in a triangle group G these α can be chosen in some triangle group extending G. It turns out that the method leading to this result also allows us to answer the question of how many different regular dessins of the same type can exist on a given quasiplatonic Riemann surface.
We consider Schwarz maps for triangles whose angles are rather general rational multiples of pi. Under which conditions can they have algebraic values at algebraic arguments? The answer is based mainly on considerations of complex multiplication of certain Prym varieties in Jacobians of hypergeometric curves. The paper can serve as an introduction to transcendence techniques for hypergeometric functions, but contains also new results and examples.
The main subject of this survey is Belyi functions and dessins d'enfants on Riemann surfaces. Dessins are certain bipartite graphs on 2-manifolds defining there a conformal and even an algebraic structure. In principle, all deeper properties of the resulting Riemann surfaces or algebraic curves should be encoded in these dessins, but the decoding turns out to be difficult and leads to many open problems. We emphasize arithmetical aspects like Galois actions, the relation to the ABC theorem in function fields and arithmetic questions in the uniformization theory of algebraic curves defined over number fields.
The large conductance voltage- and Ca2+-activated potassium (BK) channel has been suggested to play an important role in the signal transduction process of cochlear inner hair cells. BK channels have been shown to be composed of the pore-forming alpha-subunit coexpressed with the auxiliary beta-1-subunit. Analyzing the hearing function and cochlear phenotype of BK channel alpha- (BKalpha–/–) and beta-1-subunit (BKbeta-1–/–) knockout mice, we demonstrate normal hearing function and cochlear structure of BKbeta-1–/– mice. Most surprisingly, BKalpha–/– mice also did not show any obvious hearing deficits during the first 4 postnatal weeks. High-frequency hearing loss developed in BKalpha–/– mice only from ca. 8 weeks postnatally onward and was accompanied by a lack of distortion product otoacoustic emissions, suggesting outer hair cell (OHC) dysfunction. Hearing loss was linked to a loss of the KCNQ4 potassium channel in membranes of OHCs in the basal and midbasal cochlear turn, preceding hair cell degeneration and leading to a phenotype similar to that elicited by pharmacologic blockade of KCNQ4 channels. Although the actual link between BK gene deletion, loss of KCNQ4 in OHCs, and OHC degeneration requires further investigation, the data already suggest the human BK-coding slo1 gene mutation as a susceptibility factor for progressive deafness, similar to KCNQ4 potassium channel mutations. © 2004, The National Academy of Sciences. Freely available online through the PNAS open access option.
Presentation at the AMS Southeastern Sectional Meeting, 14-16 March 2003, and the workshop "Asymptotic Analysis, Stability, and Generalized Functions", 17-19 March 2003, Louisiana State University, Baton Rouge, Louisiana. See the corresponding papers "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra".
Presentation at the Università di Pisa, Pisa, Italy, 3 July 2002; the conference "Irreversible Quantum Dynamics", the Abdus Salam ICTP, Trieste, Italy, 29 July - 2 August 2002; and the University of Natal, Pietermaritzburg, South Africa, 14 May 2003. Version of 24 April 2003: examples added; 16 December 2002: revised; 12 September 2002. See the corresponding papers "Zeno Dynamics of von Neumann Algebras", "Zeno Dynamics in Quantum Statistical Mechanics" and "Mathematics of the Quantum Zeno Effect".
Background: The existence of a constitutively expressed machinery for death in individual cells has led to the notion that survival factors repress this machinery and, if such factors are unavailable, cells die by default. In many cells, however, mRNA and protein synthesis inhibitors induce apoptosis, suggesting that in some cases transcriptional activity might actually impede cell death. To identify transcriptional mechanisms that interfere with cell death and survival, we combined gene trap mutagenesis with site-specific recombination (Cre/loxP system) to isolate genes from cells undergoing apoptosis by growth factor deprivation.
Results: From an integration library consisting of approximately 2 × 10^6 unique proviral integrations obtained by infecting the interleukin-3 (IL-3)-dependent hematopoietic cell line FLOXIL3 with U3Cre gene trap virus, we have isolated 125 individual clones that converted to factor independence upon IL-3 withdrawal. Of 102 cellular sequences adjacent to U3Cre integration sites, 17% belonged to known genes, 11% matched single expressed sequence tags (ESTs) or full cDNAs with unknown function and 72% had no match within the public databases. Most of the known genes recovered in this analysis encoded proteins with survival functions.
Conclusions: We have shown that hematopoietic cells undergoing apoptosis after withdrawal of IL-3 activate survival genes that impede cell death. This results in reduced apoptosis and improved survival of cells treated with a transient apoptotic stimulus. Thus, apoptosis in hematopoietic cells is the end result of a conflict between death and survival signals, rather than a simple death by default.
An excess of the proinflammatory substance IL-18 is present in joints of patients with rheumatoid arthritis (RA), and expression of IL-18 receptor (IL-18R) regulates IL-18 bioactivity in various cell types. We examined the expression of IL-18R alpha-chain and beta-chain and the biologic effects of IL-18 in fibroblast-like synoviocytes (FLS) after long-term culture. The presence of both IL-18R chains was a prerequisite for IL-18 signal transduction in FLS. However, all FLS cultures studied were either resistant or barely responsive to IL-18 stimulation as regards cell proliferation, expression of adhesion molecules ICAM-1 and vascular cell adhesion molecule (VCAM)-1, and the release of interstitial collagenase and stromelysin, IL-6 and IL-8, prostaglandin E2, or nitric oxide. We conclude that the presence of macrophages or IL-18R+ T cells that can respond directly to IL-18 is essential for the proinflammatory effects of IL-18 in synovitis in RA. Open Access: Published: 14 November 2001 © 2002 Möller et al., licensee BioMed Central Ltd (Print ISSN 1465-9905; Online ISSN 1465-9913)
Background: Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is performed mainly in patients with high-risk or advanced hematologic malignancies and congenital or acquired aplastic anemias. In the context of the significant risk of graft failure after allo-HSCT from alternative donors and the risk of relapse in recipients transplanted for malignancy, the precise monitoring of posttransplant hematopoietic chimerism is of utmost interest. Useful molecular methods for chimerism quantification after allogeneic transplantation, aimed at distinguishing precisely between donor's and recipient's cells, are PCR-based analyses of polymorphic DNA markers. Such analyses can be performed regardless of donor's and recipient's sex. Additionally, in patients after sex-mismatched allo-HSCT, fluorescent in situ hybridization (FISH) can be applied. Methods: We compared different techniques for analysis of posttransplant chimerism, namely FISH and PCR-based molecular methods with automated detection of fluorescent products in an ALFExpress DNA Sequencer (Pharmacia) or ABI 310 Genetic Analyzer (PE). We used the Spearman correlation test. Results: We found a high correlation between the results obtained from the PCR/ALFExpress and PCR/ABI 310 Genetic Analyzer. Lower, but still positive, correlations were found between the results of the FISH technique and the results obtained using automated DNA sizing technology. Conclusions: All the methods applied enable a rapid and accurate detection of post-HSCT chimerism.
Background: To investigate the occupational risk of tuberculosis (TB) infection in a low-incidence setting, data from a prospective study of patients with culture-confirmed TB conducted in Hamburg, Germany, from 1997 to 2002 were evaluated. Methods: M. tuberculosis isolates were genotyped by IS6110 RFLP analysis. Results of contact tracing and additional patient interviews were used for further epidemiological analyses. Results: Out of 848 cases included in the cluster analysis, 286 (33.7%) were classified into 76 clusters comprising 2 to 39 patients. In total, two patients in the non-cluster and eight patients in the cluster group were health-care workers. Logistic regression analysis confirmed work in the health-care sector as the strongest predictor for clustering (OR 17.9). However, only two of the eight transmission links among the eight clusters involving health-care workers had been detected previously. Overall, conventional contact tracing performed before genotyping had identified only 26 (25.2%) of the 103 contact persons with the disease among the clustered cases whose transmission links were epidemiologically verified. Conclusion: Recent transmission was found to be strongly associated with health-care work in a setting with low incidence of TB. Conventional contact tracing alone was shown to be insufficient to discover recent transmission chains. The data presented also indicate the need for establishing improved TB control strategies in health-care settings.
Introduction: ScFv(FRP5)-ETA is a recombinant antibody toxin with binding specificity for ErbB2 (HER2). It consists of an N-terminal single-chain antibody fragment (scFv), genetically linked to truncated Pseudomonas exotoxin A (ETA). Potent antitumoral activity of scFv(FRP5)-ETA against ErbB2-overexpressing tumor cells was previously demonstrated in vitro and in animal models. Here we report the first systemic application of scFv(FRP5)-ETA in human cancer patients.
Methods: We have performed a phase I dose-finding study, with the objective to assess the maximum tolerated dose and the dose-limiting toxicity of intravenously injected scFv(FRP5)-ETA. Eighteen patients suffering from ErbB2-expressing metastatic breast cancers, prostate cancers, head and neck cancer, non small cell lung cancer, or transitional cell carcinoma were treated. Dose levels of 2, 4, 10, 12.5, and 20 μg/kg scFv(FRP5)-ETA were administered as five daily infusions each for two consecutive weeks.
Results: No hematologic, renal, and/or cardiovascular toxicities were noted in any of the patients treated. However, transient elevation of liver enzymes was observed, and considered dose limiting, in one of six patients at the maximum tolerated dose of 12.5 μg/kg, and in two of three patients at 20 μg/kg. Fifteen minutes after injection, peak concentrations of more than 100 ng/ml scFv(FRP5)-ETA were obtained at a dose of 10 μg/kg, indicating that predicted therapeutic levels of the recombinant protein can be applied without inducing toxic side effects. Induction of antibodies against scFv(FRP5)-ETA was observed 8 days after initiation of therapy in 13 patients investigated, but neutralizing activity could be detected in only five of these patients. Two patients showed stable disease, and in three patients clinical signs of activity were observed (all treated at doses ≥ 10 μg/kg). Disease progression occurred in 11 of the patients.
Conclusion: Our results demonstrate that systemic therapy with scFv(FRP5)-ETA can be safely administered up to a maximum tolerated dose of 12.5 μg/kg in patients with ErbB2-expressing tumors, justifying further clinical development.
Background: The cosmopolitan moon jelly Aurelia is characterized by high degrees of morphological and ecological plasticity, and subsequently by an unclear taxonomic status. The latter has been revised repeatedly over the last century, dividing the genus Aurelia into as many as 12 or as few as two species. We used molecular data and phenotypic traits to unravel speciation processes and phylogeographic patterns in Aurelia.
Results: Mitochondrial and nuclear DNA data (16S and ITS-1/5.8S rDNA) from 66 world-wide sampled specimens reveal star-like tree topologies, unambiguously differentiating 7 (mtDNA) and 8 (ncDNA) genetic entities with sequence divergences ranging from 7.8 to 14% (mtDNA) and 5 to 32% (ncDNA), respectively. Phylogenetic patterns strongly suggest historic speciation events and the reconstruction of at least 7 different species within Aurelia. Both genetic divergences and life history traits showed associations to environmental factors, suggesting ecological differentiation forced by divergent selection. Hybridization and introgression between Aurelia lineages likely occurred due to secondary contacts, which, however, did not disrupt the unambiguousness of genetic separation.
Conclusions: Our findings recommend Aurelia as a model system for using the combined power of organismic, ecological, and molecular data to unravel speciation processes in cosmopolitan marine organisms.
© 2002 Schroth et al; licensee BioMed Central Ltd. Verbatim copying and redistribution of this article are permitted in any medium for any non-commercial purpose, provided this notice is preserved along with the article's original URL: http://www.biomedcentral.com/1471-2148/2/1
Dendritic cells (DC) are known to present exogenous protein Ag effectively to T cells. In this study we sought to identify the proteases that DC employ during antigen processing. The murine epidermal-derived DC line XS52, when pulsed with PPD, optimally activated the PPD-reactive Th1 clone LNC.2F1 as well as the Th2 clone LNC.4k1, and this activation was completely blocked by chloroquine pretreatment. These results validate the capacity of XS52 DC to digest PPD into immunogenic peptides inducing antigen-specific T cell immune responses. XS52 DC, as well as splenic DC and DC derived from bone marrow, degraded standard substrates for cathepsins B, C, D/E, H, J, and L, tryptase, and chymases, indicating that DC express a variety of protease activities. Treatment of XS52 DC with pepstatin A, an inhibitor of aspartic acid proteases, completely abrogated their capacity to present native PPD, but not trypsin-digested PPD fragments, to Th1 and Th2 cell clones. Pepstatin A also selectively inhibited cathepsin D/E activity among the XS52 DC-associated protease activities. On the other hand, inhibitors of serine proteases (dichloroisocoumarin, DCI) or of cysteine proteases (E-64) did not impair XS52 DC presentation of PPD, nor did they inhibit cathepsin D/E activity. Finally, all tested DC populations (XS52 DC, splenic DC, and bone marrow-derived DC) constitutively expressed cathepsin D mRNA. These results suggest that DC primarily employ cathepsin D (and perhaps E) to digest PPD into antigenic peptides.
Background: The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods: Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM) the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results: We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex in non-stutterers, while stutterers showed symmetric WM volumes. Conclusions: These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question. This article is available from: http://www.biomedcentral.com/1471-2377/4/23 © 2004 Jäncke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
First paragraph (this article has no abstract): Persistent stimulation of nociceptors results in sensitization of nociceptive sensory neurons, which is associated with hyperalgesia and allodynia. The release of NO and the subsequent synthesis of cGMP in the spinal cord are involved in this process. cGMP-dependent protein kinase I (PKG-I) has been suggested to act as a downstream target of cGMP, but its exact role in nociception had not yet been characterized. To further evaluate the NO/cGMP/PKG-I pathway in nociception, we assessed the effects of PKG-I inhibition and activation in the rat formalin assay and analyzed the nociceptive behavior of PKG-I-/- mice. Open access article.
Background: In general, shell-less slugs are considered to be slimy animals with a rather dull appearance and a pest to garden plants. But marine slugs are usually beautifully coloured animals belonging to the less well-known Opisthobranchia. They are characterized by a large array of interesting biological phenomena, usually related to foraging and/or defence. In this paper our knowledge of shell reduction, correlated with the evolution of different defensive and foraging strategies, is reviewed, and new results on the histology of different glandular systems are included. Results: Based on a phylogeny obtained from morphological and histological data, the parallel reduction of the shell within the different groups is outlined. Major food sources are given, and glandular structures are described as possible defensive structures in the external epithelia and as internal glands. Conclusion: According to phylogenetic analyses, the reduction of the shell correlates with the evolution of defensive strategies. Many different kinds of defence structures, like cleptocnides, mantle dermal formations (MDFs), and acid glands, are only present in shell-less slugs. In several cases, it is not clear whether the defensive devices were a prerequisite for the reduction of the shell, or whether reduction occurred before. Reduction of the shell and acquisition of different defensive structures had implications for the exploration of new food sources and therefore likely enhanced adaptive radiation of several groups. © 2005 Wägele and Klussmann-Kolb; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited: http://www.frontiersinzoology.com/content/2/1/3/
Background: Tumor development remains one of the major obstacles following organ transplantation. Immunosuppressive drugs such as cyclosporine and tacrolimus directly contribute to enhanced malignancy, whereas the influence of the novel compound mycophenolate mofetil (MMF) on tumor cell dissemination has not been explored. We therefore investigated the adhesion capacity of colon, pancreas, prostate and kidney carcinoma cell lines to endothelium, as well as their beta1 integrin expression profile before and after MMF treatment. Methods: Tumor cell adhesion to endothelial cell monolayers was evaluated in the presence of 0.1 and 1 μM MMF and compared to unstimulated controls. beta1 integrin analysis included alpha1beta1 (CD49a), alpha2beta1 (CD49b), alpha3beta1 (CD49c), alpha4beta1 (CD49d), alpha5beta1 (CD49e), and alpha6beta1 (CD49f) receptors, and was carried out by reverse transcriptase-polymerase chain reaction, confocal microscopy and flow cytometry. Results: Adhesion of the colon carcinoma cell line HT-29 was strongly reduced in the presence of 0.1 μM MMF. This effect was accompanied by down-regulation of alpha3beta1 and alpha6beta1 surface expression and of alpha3beta1 and alpha6beta1 coding mRNA. Adhesion of the prostate tumor cell line DU-145 was blocked dose-dependently by MMF. In contrast to MMF's effects on HT-29 cells, MMF dose-dependently up-regulated alpha1beta1, alpha2beta1, alpha3beta1, and alpha5beta1 on DU-145 tumor cell membranes. Conclusion: We conclude that MMF possesses distinct anti-tumoral properties, particularly in colon and prostate carcinoma cells. Adhesion blockage of HT-29 cells was due to the loss of alpha3beta1 and alpha6beta1 surface expression, which might contribute to a reduced invasive behaviour of this tumor entity. The enhancement of integrin beta1 subtypes observed in DU-145 cells possibly causes re-differentiation towards a low-invasive phenotype.
Background: In rat, deafferentation of one labyrinth (unilateral labyrinthectomy) results in a characteristic syndrome of ocular and motor postural disorders (e.g., barrel rotation, circling behavior, and spontaneous nystagmus). Behavioral recovery (e.g., diminished symptoms), encompassing 1 week after unilateral labyrinthectomy, has been termed vestibular compensation. Evidence suggesting that the histamine H3 receptor plays a key role in vestibular compensation comes from studies indicating that betahistine, a histamine-like drug that acts as both a partial histamine H1 receptor agonist and an H3 receptor antagonist, can accelerate the process of vestibular compensation. Results: Expression levels for the histamine H3 receptor (total) as well as three isoforms which display variable lengths of the third intracellular loop of the receptor were analyzed using in situ hybridization on brain sections containing the rat medial vestibular nucleus after unilateral labyrinthectomy. We compared these expression levels to H3 receptor binding densities. Total H3 receptor mRNA levels (detected by oligo probe H3X) as well as mRNA levels of the three receptor isoforms studied (detected by oligo probes H3A, H3B, and H3C) showed a pattern of increase, which was bilaterally significant at 24 h post-lesion for both H3X and H3C, followed by significant bilateral decreases in medial vestibular nuclei occurring 48 h (H3X and H3B) and 1 week post-lesion (H3A, H3B, and H3C). Expression levels of H3B were an exception to the aforementioned pattern, with significant decreases already detected at 24 h post-lesion. Coinciding with the decreasing trends in H3 receptor mRNA levels was an observed increase in H3 receptor binding densities occurring in the ipsilateral medial vestibular nuclei 48 h post-lesion. Conclusion: Progressive recovery of the resting discharge of the deafferentated medial vestibular nuclei neurons results in functional restoration of the static postural and oculomotor deficits, usually occurring within a time frame of 48 hours in rats. Our data suggest that the H3 receptor may be an essential part of pre-synaptic mechanisms required for reestablishing resting activities 48 h after unilateral labyrinthectomy.
Introduction: This open label, multicentre study was conducted to assess the times to offset of the pharmacodynamic effects and the safety of remifentanil in patients with varying degrees of renal impairment requiring intensive care.
Methods: A total of 40 patients, who were aged 18 years or older and had normal/mildly impaired renal function (estimated creatinine clearance ≥ 50 ml/min; n = 10) or moderate/severe renal impairment (estimated creatinine clearance <50 ml/min; n = 30), were entered into the study. Remifentanil was infused for up to 72 hours (initial rate 6–9 μg/kg per hour), with propofol administered if required, to achieve a target Sedation–Agitation Scale score of 2–4, with no or mild pain.
Results: There was no evidence of increased offset time with increased duration of exposure to remifentanil in either group. The times to offset of the effects of remifentanil (at 8, 24, 48 and 72 hours during scheduled down-titrations of the infusion) were more variable and statistically significantly longer in the moderate/severe group than in the normal/mild group at 24 hours and 72 hours. These observed differences were not clinically significant (the difference in mean offset at 72 hours was only 16.5 min). Propofol consumption was lower with the remifentanil-based technique than with hypnotic-based sedative techniques. There were no statistically significant differences between the renal function groups in the incidence of adverse events, and no deaths were attributable to remifentanil use.
Conclusion: Remifentanil was well tolerated, and the offset of pharmacodynamic effects was not prolonged either as a result of renal dysfunction or prolonged infusion up to 72 hours.
The study of organisms with restricted dispersal abilities and a presence in the fossil record is particularly suitable for understanding the impact of climate changes on the distribution and genetic structure of species. Trochoidea geyeri (Soós 1926) is a land snail restricted to a patchy, insular distribution in Germany and France. Fossil evidence suggests that current populations of T. geyeri are relicts of a much more widespread distribution during more favourable climatic periods in the Pleistocene. Results: Phylogeographic analysis of the mitochondrial 16S rDNA and nuclear ITS-1 sequence variation was used to infer the history of the remnant populations of T. geyeri. Nested clade analysis for both loci suggested that the origin of the species is in the Provence, from where it expanded its range first to Southwest France and subsequently from there to Germany. Estimated divergence times predating the last glacial maximum between 25 and 17 ka implied that the colonization of the northern part of the current species range occurred during the Pleistocene. Conclusion: We conclude that T. geyeri could persist quite successfully in cryptic refugia during major climatic changes in the past, despite a restricted capacity of individuals to actively avoid unfavourable conditions.
Western cultures have witnessed a tremendous cultural and social transformation of sexuality in the years since the sexual revolution. Apart from a few public debates and scandals, the process has moved along gradually and quietly. Yet its real and symbolic effects are probably much more consequential than those generated by the sexual revolution of the sixties. Sigusch refers to the broad-based recoding and reassessment of the sexual sphere during the eighties and nineties as the "neosexual revolution". The neosexual revolution is dismantling the old patterns of sexuality and reassembling them anew. In the process, dimensions, intimate relationships, preferences and sexual fragments emerge, many of which had submerged, were unnamed or simply did not exist before. In general, sexuality has lost much of its symbolic meaning as a cultural phenomenon. Sexuality is no longer the great metaphor for pleasure and happiness, nor is it so greatly overestimated as it was during the sexual revolution. It is now widely taken for granted, much like egotism or motility. Whereas sex was once mystified in a positive sense - as ecstasy and transgression, it has now taken on a negative mystification characterized by abuse, violence and deadly infection. While the old sexuality was based primarily upon sexual instinct, orgasm and the heterosexual couple, neosexualities revolve predominantly around gender difference, thrills, self-gratification and prosthetic substitution. From the vast number of interrelated processes from which neosexualities emerge, three empirically observable phenomena have been selected for discussion here: the dissociation of the sexual sphere, the dispersion of sexual fragments and the diversification of intimate relationships. The outcome of the neosexual revolution may be described as "lean sexuality" and "self-sex".
Background: Common warts (verrucae vulgares) are human papilloma virus (HPV) infections with a high incidence and prevalence, most often affecting hands and feet, and able to impair quality of life. The roughly 30 different therapeutic regimens described in the literature reveal the lack of a single convincing strategy. Recent publications showed positive results of photodynamic therapy (PDT) with 5-aminolevulinic acid (5-ALA) in the treatment of HPV-induced skin diseases, especially warts, using visible light (VIS) to stimulate an absorption band of endogenously formed protoporphyrin IX. Additional experience with adding water-filtered infrared A (wIRA) during 5-ALA-PDT revealed positive effects. Aim of the study: to conduct the first prospective randomised controlled blind study including PDT and wIRA in the treatment of recalcitrant common hand and foot warts, comparing "5-ALA cream (ALA) vs. placebo cream (PLC)" and "irradiation with visible light and wIRA (VIS+wIRA) vs. irradiation with visible light alone (VIS)". Methods: Pre-treatment with keratolysis (salicylic acid) and curettage. PDT treatment: topical application of 5-ALA (Medac) in "unguentum emulsificans aquosum" vs. placebo; irradiation: combination of VIS and a large amount of wIRA (Hydrosun® radiator type 501, 4 mm water cuvette, water-filtered spectrum 590-1400 nm, contact-free, typically painless) vs. VIS alone. Post-treatment with retinoic acid ointment. One to three therapy cycles every 3 weeks. Main variable of interest: "percent change of total wart area of each patient over time" (18 weeks). Global judgement by patient and by physician and subjective rating of feeling/pain (visual analogue scales). 80 patients with therapy-resistant common hand and foot warts were assigned randomly to one of the four therapy groups, with comparable numbers of warts at comparable sites in all groups. Results: The individual total wart area decreased during 18 weeks in group 1 (ALA+VIS+wIRA) and in group 2 (PLC+VIS+wIRA) significantly more than in both groups without wIRA (groups 3 (ALA+VIS) and 4 (PLC+VIS)): medians and interquartile ranges: -94% (-100%/-84%) vs. -99% (-100%/-71%) vs. -47% (-75%/0%) vs. -73% (-92%/-27%). After 18 weeks the two groups with wIRA differed remarkably from the two groups without wIRA: 42% vs. 7% completely cured patients; 72% vs. 34% vanished warts. Global judgement by patient and by physician and subjective rating of feeling were much better in the two groups with wIRA than in the two groups without wIRA. Conclusions: The complete treatment scheme for hand and foot warts described above (keratolysis, curettage, PDT treatment, irradiation with VIS+wIRA, retinoic acid ointment; three therapy cycles every 3 weeks) proved to be effective. Within this treatment scheme, wIRA, as a non-invasive and painless treatment modality, proved to be an important, effective factor, while photodynamic therapy with 5-ALA in the described form did not contribute recognisably to a clinical improvement, neither alone (without wIRA) nor in combination with wIRA. For future treatment of warts an improved scheme is proposed: one treatment cycle (keratolysis, curettage, wIRA, without PDT) once a week for six to nine weeks. © 2004 Fuchs et al; licensee German Medical Science. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL: http://www.egms.de/en/gms/volume2.shtml
Apparent contradiction between negative effects of UV radiation and positive effects of sun exposure
(2005)
We would like to comment on the three contributions in the Journal of the National Cancer Institute, Vol. 97, No. 3, February 2, 2005: Kathleen M. Egan, Jeffrey A. Sosman, William J. Blot: Editorial: Sunlight and Reduced Risk of Cancer: Is the Real Story Vitamin D? (pp. 161-163) ; Marianne Berwick, Bruce K. Armstrong, Leah Ben-Porat, Judith Fine, Anne Kricker, Carey Eberle, Raymond Barnhill: Sun Exposure and Mortality From Melanoma. (pp. 195-199) ; Karin Ekström Smedby, Henrik Hjalgrim, Mads Melbye, Anna Torrång, Klaus Rostgaard, Lars Munksgaard, et al.: Ultraviolet Radiation Exposure and Risk of Malignant Lymphomas. (pp. 199-209).
In this short note on my talk I want to point out the mathematical difficulties that arise in the study of the relation of Wightman and Euclidean quantum field theory, i.e., the relation between the hierarchies of Wightman and Schwinger functions. The two extreme cases, where the reconstructed Wightman functions are either tempered distributions - the well-known Osterwalder-Schrader reconstruction - or modified Fourier hyperfunctions, are discussed in some detail. Finally, some perspectives towards a classification of Euclidean reconstruction theorems are outlined and preliminary steps in that direction are presented.
We reconsider estimates for the heat kernel on weighted graphs recently found by Metzger and Stollmann. In the case that the weights satisfy a positive lower bound as well as a finite upper bound, we obtain a specialized lower estimate and a proper generalization of a previous upper estimate. Reviews: Math. Rev. 1979406, Zbl. Math. 0934.46042
We present an overview of the mathematics underlying the quantum Zeno effect. Classical, functional analytic results are put into perspective and compared with more recent ones. This yields some new insights into mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated with the issue of existence of the Zeno dynamics. On the grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS classification: 03.65.Xp, 03.65.Db, 05.30.-d, 02.30.T. See the corresponding presentation: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics"
We study the quantum Zeno effect in quantum statistical mechanics within the operator algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we derive an explicit form of the Zeno generator and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that a frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS classification: 03.65.Xp, 05.30.-d, 02.30. See the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect" and the talk "Zeno Dynamics in Quantum Statistical Mechanics" - http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1167/
The dynamical quantum Zeno effect is studied in the context of von Neumann algebras. It is shown that the Zeno dynamics coincides with the modular dynamics of a localized subalgebra. This relates the modular operator of that subalgebra to the modular operator of the original algebra by a variant of the Kato-Lie-Trotter product formula.
We present a method for the construction of a Krein space completion for spaces of test functions, equipped with an indefinite inner product induced by a kernel which is more singular than a distribution of finite order. This generalizes a regularization method for infrared singularities in quantum field theory, introduced by G. Morchio and F. Strocchi, to the case of singularities of infinite order. We give conditions for the possibility of this procedure in terms of local differential operators and the Gelfand-Shilov test function spaces, as well as an abstract sufficient condition. As a model case we construct a maximally positive definite state space for the Heisenberg algebra in the presence of an infinite infrared singularity. See the corresponding paper: Schmidt, Andreas U.: "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and the presentation "Infinite Infrared Regularization in Krein Spaces"
This extended write-up of a talk gives an introductory survey of mathematical problems of the quantization of gauge systems. Using the Schwinger model as an exactly tractable but nontrivial example which exhibits general features of gauge quantum field theory, I cover the following subjects: the axiomatics of quantum field theory, formulation of quantum field theory in terms of Wightman functions, reconstruction of the state space, the local formulation of gauge theories, indefiniteness of the Wightman functions in general and in the special case of the Schwinger model, the state space of the Schwinger model, and special features of the model. New results are contained in the Mathematical Appendix, where I consider in an abstract setting the Pontrjagin space structure of a special class of indefinite inner product spaces - the so-called quasi-positive ones. This is motivated by the indefinite inner product space structure appearing in the above context and generalizes results of Morchio and Strocchi [J. Math. Phys. 31 (1990) 1467], and Dubin and Tarski [J. Math. Phys. 7 (1966) 574]. See the corresponding paper: Schmidt, Andreas U.: "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra" and the presentation "Infinite Infrared Regularization in Krein Spaces".
Drug target 5-lipoxygenase : a link between cellular enzyme regulation and molecular pharmacology
(2005)
Leukotrienes (LT) are bioactive lipid mediators involved in a variety of inflammatory diseases such as asthma, psoriasis, arthritis and allergic rhinitis. LT also play a role in the pathogenesis of diseases such as cancer, osteoarthritis and atherosclerosis. 5-lipoxygenase (5-LO) is the enzyme responsible for LT formation. Given the physiological properties of LT, the development of potential drugs that target 5-LO is of considerable interest. In vitro, 5-LO activity is determined by Ca2+, ATP, phosphatidylcholine and lipid hydroperoxides (LOOH), and by the p38-dependent kinases MK-2/3. Inhibitor studies indicate that the MEK1/2 pathway is also involved in 5-LO activation in vivo. The main aim of this work was to investigate the role of the MEK1/2 pathway in 5-LO activation and the influence of the 5-LO activation route on the efficacy of potential inhibitors. In-gel kinase and in vitro kinase assays showed that 5-LO is a substrate for the extracellular signal-regulated kinases (ERK) and MK-2/3. The addition of polyunsaturated fatty acids (UFA), such as AA or oleic acid, increased the degree of 5-LO phosphorylation by both ERK1/2 and MK-2/3. These kinases are therefore also responsible for 5-LO activation by natural stimuli that barely affect cellular Ca2+ levels. Hence, phosphorylation of 5-LO by ERK1/2 and/or MK-2/3 constitutes an alternative activation mechanism alongside Ca2+. Nonredox-type 5-LO inhibitors were originally developed as competitive agents that compete with AA for binding to the catalytic domain of 5-LO. Representatives of this class, such as ZM230487 and L-739,010, potently suppress LT biosynthesis in various test systems, yet they failed in clinical trials. In this work we showed that the efficacy of these inhibitors depends on the 5-LO activation route. Compared with 5-LO activity induced by the unphysiological stimulus Ca2+ ionophore, inhibition of cell stress-induced activity requires a 10- to 100-fold higher concentration of nonredox-type 5-LO inhibitors. The non-phosphorylatable 5-LO mutant (Ser271Ala/Ser663Ala) was considerably more sensitive to nonredox-type inhibitors than the wild type when the enzyme was activated by 5-LO kinases. These results show that, in contrast to Ca2+, 5-LO activation via phosphorylation markedly reduces the efficacy of nonredox-type inhibitors. Furthermore, the pharmacological profile of the novel 5-LO inhibitor CJ-13,610 was characterized in various in vitro test systems. In intact PMNL stimulated with Ca2+ ionophore, the compound inhibited 5-LO product formation with an IC50 of 70 nM. Addition of exogenous AA diminished its potency and raised the IC50, indicating a competitive mode of action. Like the established nonredox-type inhibitors, CJ-13,610 loses efficacy at elevated cellular peroxide levels; unlike them, however, it shows no dependence on the 5-LO activation route. In the development of new drugs it is therefore of fundamental importance to understand the cellular context, in particular the regulation of enzyme activity.
As shown in this work, phosphorylation of 5-LO strongly influences the regulation of 5-LO activity and has a fundamental effect on the inhibition of the enzyme by various drugs.
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1], terahertz (THz) waves could only be generated by very large systems such as free-electron lasers. The first method, using the Auston switch, could generate frequencies up to 1 THz [2]. Subsequent efforts to extend the frequency limit led to combinations of antennas for generation and detection reaching several THz [3, 4], and the technique has since developed to fill the so-called "THz gap". In parallel, much research has aimed at increasing the output power [5-7]. In the 1990s, non-linear optical methods brought a major advance in the accessible frequency band [8-11]; these techniques drastically expanded the frequency region and recently enabled measurements up to 41 THz [12]. Other approaches have yielded new generation and detection methods, for CW THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, albeit only at a low temperature of 60 K [20-22]. This research attracts much attention because, owing to its low cost and ease of operation, it could be the breakthrough that spreads THz techniques into industry as well as research. The development of the THz field has naturally been supported by short-pulse laser technology: with the appearance of stable Ti:sapphire lasers and high-power chirped pulse amplification (CPA) lasers in place of dye lasers, much work has concentrated on techniques of pulse compression and amplification [23]. From the application side, the THz technique has come into the limelight as a promising measurement method. The discovery of absorption peaks of proteins and DNA in the THz region has, over the past several years, been promoting practical use of the technique in medicine and pharmaceutical science [24-27]. Since absorption lines of light polar molecules also lie in this region, gas and water-content monitoring has been proposed for the chemical and food industries [28-32]. Furthermore, many reports, such as measurements of carrier distributions in semiconductors, of the refractive index of thin films, and of object shapes by radar, indicate that the technique has a wide range of applications [33-37]. I believe it is worth the challenge to apply it in the steel-making industry because of its unique advantages. For remote surface inspection, the THz wavelength range of 30-300 µm is both insensitive to the surface roughness of steel products and capable of detection with sub-millimeter precision. Owing to its high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5, it may also measure the thickness or dielectric constants of relatively conductive materials. Furthermore, it could be applicable to measurements at high temperature, since it is less influenced by thermal radiation than visible and infrared light. These ideas motivated me to start this THz work.
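As a quick check of the quoted wavelength range: frequency and wavelength are related by \lambda = c/\nu, so

\[ \nu = 1\,\text{THz} \;\Rightarrow\; \lambda = \frac{3\times10^{8}\,\mathrm{m/s}}{10^{12}\,\mathrm{Hz}} = 300\,\mu\mathrm{m}, \qquad \nu = 10\,\text{THz} \;\Rightarrow\; \lambda = 30\,\mu\mathrm{m}, \]

i.e. the 30-300 µm band corresponds to roughly 1-10 THz.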
The Kochen-Specker theorem has been discussed intensely ever since its original proof in 1967. It is one of the central no-go theorems of quantum theory, showing the non-existence of a certain kind of hidden-states model. In this paper, we first offer a new, non-combinatorial proof for quantum systems with a type I_n factor as algebra of observables, including I_infinity. Afterwards, we give a proof of the Kochen-Specker theorem for an arbitrary von Neumann algebra R without summands of types I_1 and I_2, using a known result on two-valued measures on the projection lattice P(R). Some connections with presheaf formulations as proposed by Isham and Butterfield are made.
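In one standard formulation (schematic, and not the paper's most general von Neumann algebraic version), the theorem asserts that for a Hilbert space \mathcal{H} with \dim\mathcal{H} \ge 3 there is no valuation on the projection lattice behaving like a non-contextual truth assignment:

\[ \nexists\, v : P(\mathcal{H}) \to \{0,1\} \quad\text{with}\quad v(\mathbf{1}) = 1 \;\text{ and }\; v\Big(\sum_i P_i\Big) = \sum_i v(P_i) \]

for every family (P_i) of mutually orthogonal projections.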
This paper has shown that some of the principal arguments against shareholder voice are unfounded. It has shown that shareholders do own corporations, and that the nature of their property interest is structured to meet the needs of the relationships found in stock corporations. The paper has explained that fiduciary and other duties restrain the actions of shareholders just as they do those of management, and that critics cannot reasonably expect court-imposed fiduciary duties to extend beyond the actual powers of shareholders. It has also illustrated how, although corporate statutes give shareholders complete power to structure governance as they will, the default governance structures of U.S. corporations leave shareholders almost powerless to initiate any sort of action, and the interaction between state and federal law makes it almost impossible for shareholders to elect directors of their choice. Lastly, the paper has recalled how the percentage of U.S. corporate equities owned by institutional investors has increased dramatically in recent decades, and it has outlined some of the major developments in shareholder rights that followed this increase. I hope that this paper has deflated some of the strong rhetoric used against shareholder voice by contrasting rhetoric with law, and that it has illustrated why the picture of weak owners painted in the early 20th century should be updated to new circumstances, which will help avoid projecting an old description as a current normative model that perpetuates the inevitability of "managerialism", perhaps better known as "dirigisme".
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method of strictness analysis for lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully covers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
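For context, the property such an analysis certifies is the standard one: a function f is strict if it maps divergence to divergence,

\[ f\ \bot = \bot, \]

so a compiler may safely evaluate the argument of a strict function eagerly (call-by-value) without changing the termination behaviour of the lazy program.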
Syndicated loans and the number of lending relationships have attracted growing attention. All other terms being equal (e.g. seniority), syndicated loans provide larger payments (in basis points) to lenders funding larger amounts. The paper explores empirically the motivation for such price discrimination on sovereign syndicated loans in the period 1990-1997. First evidence suggests that larger premia are associated with renegotiation prospects. This is consistent with the hypothesis that price discrimination is aimed at reducing the number of lenders and thus the expected renegotiation costs. However, larger payment discrimination is also associated with more targeted market segments and with larger loans, thus minimising borrowing costs and/or attempting to widen the circle of lending relationships in order to successfully raise the requested amount. JEL classification: F34, G21, G33. This version: June, 2002. Later version (October 2003) with the title "Why Borrowers Pay Premiums to Larger Lenders: Empirical Evidence from Sovereign Syndicated Loans": http://publikationen.ub.uni-frankfurt.de/volltexte/2005/992/
We use consumer price data for 205 cities/regions in 21 countries to study deviations from the law of one price before, during and after the major currency crises of the 1990s. We combine data from industrialised nations in North America (United States, Canada, Mexico), Europe (Germany, Italy, Spain and Portugal) and the Asia-Pacific region (Japan, Korea, New Zealand, Australia) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). We confirm previous results that both distance and borders explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects by significantly increasing these border effects, and by raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent, since relative price volatility across emerging markets today is still significantly larger than a decade ago. JEL classification: F40, F41
We use consumer price data for 81 European cities (in Germany, Austria, Switzerland, Italy, Spain and Portugal) to study deviations from the law of one price before and during the European Economic and Monetary Union (EMU). Analysing both aggregate and disaggregate CPI data for 7 categories of goods, we find that the distance between cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of relative prices is much higher for two cities located in different countries than for two equidistant cities in the same country. Under EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and borders still matter for intra-European relative price volatility. JEL classification: F40, F41
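The distance and border effects described in the two abstracts above are typically quantified with a regression of roughly this form (an illustrative sketch of the literature's standard specification, not the papers' exact equation):

\[ \operatorname{Vol}\!\big(\Delta \ln (P_{i,k}/P_{j,k})\big) = \alpha_k + \beta_1 \ln d_{ij} + \beta_2\,\mathrm{Border}_{ij} + \varepsilon_{ij,k}, \]

where P_{i,k} is the price of good k in city i, d_{ij} is the distance between cities i and j, and Border_{ij} indicates a national border between them; a positive \beta_2 is the border effect.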
This paper analyzes a comprehensive data set of 108 non-venture-backed, 58 venture-backed and 33 bridge-financed companies going public at Germany's Neuer Markt between March 1997 and March 2000. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary, or rather competing, role of venture capitalists and underwriters in certifying the quality of a company when going public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to reduced ex-ante uncertainty. This study provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non-VC-backed IPOs.
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Additionally taking into account potential downward nominal rigidities in the price-setting behavior of firms appears not to impose significant further distortions on the economy. JEL classification: C63, E31, E52.
Ignoring the existence of the zero lower bound on nominal interest rates considerably understates the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with a lower bound, calibrated to the U.S. economy, suggests that low values of the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and the deflation are much larger than under policy commitment, and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL classification: E31, E52
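Both abstracts revolve around the constraint that the nominal rate cannot fall below zero; in a Taylor-type rule this enters as a max operator (a generic illustration, not the papers' optimal policy):

\[ i_t = \max\!\big\{0,\; r^{*} + \pi^{*} + \phi_{\pi}(\pi_t - \pi^{*}) + \phi_{x} x_t \big\}, \]

so adverse shocks that call for i_t < 0 leave the economy with a too-high real rate, which is the source of the output losses and deflation described above.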
Using data from the Consumer Expenditure Survey we first document that the recent increase in income inequality in the US has not been accompanied by a corresponding rise in consumption inequality. Much of this divergence is due to different trends in within-group inequality, which has increased significantly for income but little for consumption. We then develop a simple framework that allows us to analytically characterize how within-group income inequality affects consumption inequality in a world in which agents can trade a full set of contingent consumption claims, subject to endogenous constraints emanating from the limited enforcement of intertemporal contracts (as in Kehoe and Levine, 1993). Finally, we quantitatively evaluate, in the context of a calibrated general equilibrium production economy, whether this set-up, or alternatively a standard incomplete markets model (as in Aiyagari, 1994), can account for the documented stylized consumption inequality facts from the US data. JEL classification: E21, D91, D63, D31, G22
In this paper, we examine the cost of insurance against model uncertainty for the Euro area considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range in terms of a Minimax policy comes at moderate costs in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with the highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL classification: E52, E58, E61
This paper studies an overlapping generations model with stochastic production and incomplete markets to assess whether the introduction of an unfunded social security system leads to a Pareto improvement. When returns to capital and wages are imperfectly correlated, a system that endows retired households with claims to labor income enhances the sharing of aggregate risk between generations. Our quantitative analysis shows that, abstracting from the capital crowding-out effect, the introduction of social security represents a Pareto-improving reform, even when the economy is dynamically efficient. However, the severity of the crowding-out effect in general equilibrium tends to overturn these gains. JEL classification: E62, H55, H31, D91, D58. April 2005.
While much of classical statistical analysis is based on Gaussian distributional assumptions, statistical modeling with the Laplace distribution has gained importance in many applied fields. This phenomenon is rooted in the fact that, like the Gaussian, the Laplace distribution has many attractive properties. This paper investigates two methods of combining the two distributions and their use in modeling and predicting financial risk. Based on 25 daily stock return series, the empirical results indicate that the new models offer a plausible description of the data. They are also shown to be competitive with, or superior to, use of the hyperbolic distribution, which has gained some popularity in asset-return modeling and, in fact, also nests the Gaussian and Laplace. JEL classification: C16, C50. March 2005.
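For reference, the Laplace density underlying these models is

\[ f(x) = \frac{1}{2b}\,\exp\!\left(-\frac{|x-\mu|}{b}\right), \qquad b > 0, \]

whose tails decay exponentially in |x - \mu| rather than in (x - \mu)^2, which is what makes it attractive for heavy-tailed asset returns.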
This paper computes the optimal progressivity of the income tax code in a dynamic general equilibrium model with household heterogeneity in which uninsurable labor productivity risk gives rise to a nontrivial income and wealth distribution. A progressive tax system serves as a partial substitute for missing insurance markets and enhances an equal distribution of economic welfare. These beneficial effects of a progressive tax system have to be traded off against the efficiency loss arising from distorting endogenous labor supply and capital accumulation decisions. Using a utilitarian steady state social welfare criterion we find that the optimal US income tax is well approximated by a flat tax rate of 17.2% and a fixed deduction of about $9,400. The steady state welfare gains from a fundamental tax reform towards this tax system are equivalent to 1.7% higher consumption in each state of the world. An explicit computation of the transition path induced by a reform of the current towards the optimal tax system indicates that a majority of the population currently alive (roughly 62%) would experience welfare gains, suggesting that such fundamental income tax reform is not only desirable, but may also be politically feasible. JEL classification: E62, H21, H24.
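In formula form, the optimal tax code reported above is a flat tax with a fixed deduction,

\[ T(y) = \tau\,\max\{y - D,\ 0\}, \qquad \tau = 17.2\%, \quad D \approx \$9{,}400, \]

so the marginal rate is zero below the deduction and constant above it, while the average rate rises with income, which is where the progressivity comes from.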
Financial markets embed expectations of central bank policy into asset prices. This paper compares two approaches that extract a probability density of market beliefs. The first is a simulated-moments estimator for option volatilities described in Mizrach (2002); the second is a new approach developed by Haas, Mittnik and Paolella (2004a) for fat-tailed conditionally heteroskedastic time series. In an application to the 1992-93 European Exchange Rate Mechanism crises, we find that both the options and the underlying exchange rates provide useful information for policy makers. JEL classification: G12, G14, F31.
Volatility forecasting
(2005)
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL classification: C10, C53, G1.
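As a concrete instance of the GARCH paradigm surveyed in Section 3, the following minimal Python sketch simulates a GARCH(1,1) process and produces a one-step-ahead variance forecast; the parameter values are illustrative only, not estimates from the chapter.

# Minimal GARCH(1,1) simulation and one-step variance forecast.
# The recursion is sigma2_t = omega + alpha*eps_{t-1}^2 + beta*sigma2_{t-1}.
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.08, 0.90      # illustrative; alpha + beta < 1
T = 1000
sigma2 = np.empty(T)
eps = np.empty(T)
sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, T):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# One-step-ahead conditional variance forecast given the simulated sample.
forecast = omega + alpha * eps[-1] ** 2 + beta * sigma2[-1]
print(f"one-step-ahead variance forecast: {forecast:.4f}")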
This paper analyzes dynamic equilibrium risk sharing contracts between profit-maximizing intermediaries and a large pool of ex-ante identical agents that face idiosyncratic income uncertainty that makes them heterogeneous ex-post. In any given period, after having observed her income, the agent can walk away from the contract, while the intermediary cannot, i.e. there is one-sided commitment. We consider the extreme scenario that the agents face no costs of walking away, and can sign up with any competing intermediary without any reputational losses. We demonstrate that not only autarky, but also partial and full insurance can obtain, depending on the relative patience of agents and financial intermediaries. Insurance can be provided because in an equilibrium contract an up-front payment effectively locks in the agent with an intermediary. We then show that our contract economy is equivalent to a consumption-savings economy with one-period Arrow securities and a short-sale constraint, similar to Bulow and Rogoff (1989). From this equivalence and our characterization of dynamic contracts it immediately follows that without costs of switching financial intermediaries debt contracts are not sustainable, even though a risk allocation superior to autarky can be achieved. JEL classification: G22, E21, D11, D91.
Default risk sharing between banks and markets : the contribution of collateralized debt obligations
(2005)
This paper contributes to the economics of financial institutions' risk management by exploring how loan securitization affects their default risk, their systematic risk, and their stock prices. In a typical CDO transaction a bank retains, through a first loss piece, a very high proportion of the expected default losses, and transfers only the extreme losses to other market participants. The size of the first loss piece is largely driven by the average default probability of the securitized assets. If the bank sells loans in a true sale transaction, it may use the proceeds to expand its loan business, thereby incurring more systematic risk. We find an increase in the banks' betas, but no significant stock price effect around the announcement of a CDO issue. Our results suggest a role for supervisory requirements in stabilizing the financial system, related to the transparency of tranche allocation and to the regulatory treatment of senior tranches. JEL classification: D82, G21, D74.
We selectively survey, unify and extend the literature on realized volatility of financial asset returns. Rather than focusing exclusively on characterizing the properties of realized volatility, we progress by examining economically interesting functions of realized volatility, namely realized betas for equity portfolios, relating them both to their underlying realized variance and covariance parts and to underlying macroeconomic fundamentals.
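Concretely, the realized beta studied here is the ratio of the realized covariance with the market to the realized market variance, computed from high-frequency returns within period t:

\[ \hat{\beta}_{i,t} = \frac{\sum_{j} r_{i,t_j}\, r_{m,t_j}}{\sum_{j} r_{m,t_j}^{2}}, \]

where the sums run over the intraday return observations t_j in period t.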
From a macroeconomic perspective, the short-term interest rate is a policy instrument under the direct control of the central bank. From a finance perspective, long rates are risk-adjusted averages of expected future short rates. Thus, as illustrated by much recent research, a joint macro-finance modeling strategy will provide the most comprehensive understanding of the term structure of interest rates. We discuss various questions that arise in this research, and we also present a new examination of the relationship between two prominent dynamic, latent factor models in this literature: the Nelson-Siegel and affine no-arbitrage term structure models. JEL classification: G1, E4, E5.
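For reference, the Nelson-Siegel factor model mentioned above represents the yield curve at time t as a combination of level, slope and curvature factors,

\[ y_t(\tau) = \beta_{1t} + \beta_{2t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau} + \beta_{3t}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right), \]

with \tau the maturity and \lambda governing the decay of the factor loadings.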
What do academics have to offer market risk management practitioners in financial institutions? Current industry practice largely follows one of two extremely restrictive approaches: historical simulation or RiskMetrics. In contrast, we favor flexible methods based on recent developments in financial econometrics, which are likely to produce more accurate assessments of market risk. Clearly, the demands of real-world risk management in financial institutions - in particular, real-time risk tracking in very high-dimensional situations - impose strict limits on model complexity. Hence we stress parsimonious models that are easily estimated, and we discuss a variety of practical approaches for high-dimensional covariance matrix modeling, along with what we see as some of the pitfalls and problems in current practice. In so doing we hope to encourage further dialog between the academic and practitioner communities, hopefully stimulating the development of improved market risk management technologies that draw on the best of both worlds.
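One parsimonious, easily estimated covariance model of the kind alluded to above is the exponentially weighted (RiskMetrics-style) estimator; the Python sketch below illustrates that generic recursion on synthetic data, and is not any institution's production method.

# EWMA covariance update: Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r_t r_t'.
import numpy as np

def ewma_cov(returns: np.ndarray, lam: float = 0.94, warmup: int = 20) -> np.ndarray:
    """Exponentially weighted covariance of a (T, N) panel of returns."""
    cov = np.cov(returns[:warmup].T)       # warm-start from a sample estimate
    for t in range(warmup, returns.shape[0]):
        r = returns[t][:, None]            # column vector, shape (N, 1)
        cov = lam * cov + (1.0 - lam) * (r @ r.T)
    return cov

rng = np.random.default_rng(1)
panel = 0.01 * rng.standard_normal((500, 5))   # synthetic daily returns
print(ewma_cov(panel).round(6))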
This study offers a historical review of the monetary policy reform of October 6, 1979, and discusses the influences behind it and its significance. We lay out the record from the start of 1979 through the spring of 1980, relying almost exclusively upon contemporaneous sources, including the recently released transcripts of Federal Open Market Committee (FOMC) meetings during 1979. We then present and discuss in detail the reasons for the FOMC's adoption of the reform and the communications challenge presented to the Committee during this period. Further, we examine whether the essential characteristics of the reform were consistent with monetarism, new, neo, or old-fashioned Keynesianism, nominal income targeting, and inflation targeting. The record suggests that the reform was adopted when the FOMC became convinced that its earlier gradualist strategy using finely tuned interest rate moves had proved inadequate for fighting inflation and reversing inflation expectations. The new plan had to break dramatically with established practice, allow for the possibility of substantial increases in short-term interest rates, yet be politically acceptable, and convince financial market participants that it would be effective. The new operating procedures were also adopted for the pragmatic reason that they would likely succeed. JEL classification: E52, E58, E61, E65.
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB banks are well capitalized or gain little from taking risks, they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL classification: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in the macroeconomic performance of the U.S. economy.
Recent evidence on the effect of government spending shocks on consumption cannot be easily reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL classification: E32, E62.
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely about real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL classification: E5, E61, D78
The Basle securitisation framework explained: the regulatory treatment of asset securitisation
(2005)
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on the intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger-cause bond spread changes for a higher number of firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market, and the magnitude of this sensitivity increases when credit quality deteriorates. Finally, the CDS market plays a more important role in price discovery than the corporate bond market. JEL classification: G10, G14, C32.
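The lead-lag analysis described above rests on Granger causality tests within a VAR; the following Python sketch runs such a test on synthetic data (illustrative only: grangercausalitytests is the statsmodels routine, and the data-generating process here is made up, not the paper's data).

# Bivariate lead-lag (Granger causality) test on synthetic data in which
# stock returns lead CDS spread changes by construction.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
T = 400
stock = rng.standard_normal(T)              # stand-in for weekly stock returns
cds = np.empty(T)
cds[0] = rng.standard_normal()
for t in range(1, T):
    cds[t] = -0.3 * stock[t - 1] + rng.standard_normal()  # lagged dependence

# Tests whether the second column (stock) Granger-causes the first (cds).
data = np.column_stack([cds, stock])
results = grangercausalitytests(data, maxlag=2)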
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over and above the direct news announcement effects. JEL classification: F3, F4, G1, C5
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of effort. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, when firms are less profitable, and when poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL classification: D82, G21, G32.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia, spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication, a lower probability of potentially harmful co-investment, and board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which goes hand in hand with an increased probability of investment in high-tech companies. JEL classification: G24, G31, G32.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL classification: C1, G1