Refine
Year of publication
- 2004 (503)
Document Type
- Article (162)
- Working Paper (71)
- Part of a Book (67)
- Conference Proceeding (54)
- Preprint (48)
- Doctoral Thesis (43)
- Part of Periodical (31)
- Report (13)
- Book (10)
- Diploma Thesis (2)
Language
- English (503)
Has Fulltext
- yes (503)
Is part of the Bibliography
- no (503)
Keywords
- Syntax (26)
- Generative Transformationsgrammatik (23)
- Wortstellung (21)
- Deutsch (16)
- Optimalitätstheorie (12)
- Phonologie (11)
- Deutschland (9)
- Relativsatz (9)
- Englisch (8)
- Formale Semantik (8)
Institute
- Physik (75)
- Wirtschaftswissenschaften (38)
- Center for Financial Studies (CFS) (28)
- Medizin (27)
- Extern (24)
- Biochemie und Chemie (23)
- Frankfurt Institute for Advanced Studies (FIAS) (20)
- Biowissenschaften (12)
- Informatik (12)
- Mathematik (9)
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^O(n), SLLL-reduction runs in O(n^(5+epsilon)) bit operations for every epsilon > 0, compared to O(n^(7+epsilon)) for the original LLL and to O(n^(6+epsilon)) for the LLL-algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments running in O(n^3 log n) arithmetic steps.
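The segment technique itself is beyond the scope of an abstract, but the classical LLL baseline that SLLL accelerates can be sketched in a few lines. The following is a minimal, exact-arithmetic sketch of plain LLL (not the SLLL segment variant), with textbook parameter delta = 3/4:

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Exact Gram-Schmidt: returns orthogonal vectors and mu coefficients."""
    n, d = len(basis), len(basis[0])
    ortho = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            denom = sum(ortho[j][k] * ortho[j][k] for k in range(d))
            mu[i][j] = sum(Fraction(basis[i][k]) * ortho[j][k] for k in range(d)) / denom
            v = [v[k] - mu[i][j] * ortho[j][k] for k in range(d)]
        ortho.append(v)
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    """Plain LLL reduction with Lovasz parameter delta (naive, unoptimized)."""
    b = [list(v) for v in basis]
    n, d = len(b), len(b[0])
    k = 1
    while k < n:
        ortho, mu = gram_schmidt(b)
        # size reduction: make |mu[k][j]| <= 1/2 by integer translations
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                b[k] = [b[k][i] - q * b[j][i] for i in range(d)]
        ortho, mu = gram_schmidt(b)
        sq = lambda v: sum(x * x for x in v)
        # Lovasz condition; on failure, swap and step back
        if sq(ortho[k]) >= (delta - mu[k][k - 1] ** 2) * sq(ortho[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b

reduced = lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])  # short, nearly orthogonal vectors
```

Since all operations are unimodular (integer translations and swaps), the reduced basis spans the same lattice; the SLLL idea is to confine most of this work to small segments of the basis to cut the bit-operation count.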
Let G be a Fuchsian group containing two torsion-free subgroups defining isomorphic Riemann surfaces. Then these surface subgroups K and alpha K alpha^(-1) are conjugate in PSL(2,R), but in general the conjugating element alpha cannot be taken in G or a finite-index Fuchsian extension of G. We will show that in the case of a normal inclusion in a triangle group G these alpha can be chosen in some triangle group extending G. It turns out that the method leading to this result also allows us to answer the question of how many different regular dessins of the same type can exist on a given quasiplatonic Riemann surface.
The large conductance voltage- and Ca2+-activated potassium (BK) channel has been suggested to play an important role in the signal transduction process of cochlear inner hair cells. BK channels have been shown to be composed of the pore-forming alpha-subunit coexpressed with the auxiliary beta-1-subunit. Analyzing the hearing function and cochlear phenotype of BK channel alpha- (BKalpha–/–) and beta-1-subunit (BKbeta-1–/–) knockout mice, we demonstrate normal hearing function and cochlear structure of BKbeta-1–/– mice. Most surprisingly, BKalpha–/– mice also did not show any obvious hearing deficits during the first 4 postnatal weeks. High-frequency hearing loss developed in BKalpha–/– mice only from ca. 8 weeks postnatally onward and was accompanied by a lack of distortion product otoacoustic emissions, suggesting outer hair cell (OHC) dysfunction. Hearing loss was linked to a loss of the KCNQ4 potassium channel in membranes of OHCs in the basal and midbasal cochlear turn, preceding hair cell degeneration and leading to a phenotype similar to that elicited by pharmacologic blockade of KCNQ4 channels. Although the actual link between BK gene deletion, loss of KCNQ4 in OHCs, and OHC degeneration requires further investigation, the data already suggest human BK-coding slo1 gene mutation as a susceptibility factor for progressive deafness, similar to KCNQ4 potassium channel mutations. © 2004, The National Academy of Sciences. Freely available online through the PNAS open access option.
Dendritic cells (DC) are known to present exogenous protein Ag effectively to T cells. In this study we sought to identify the proteases that DC employ during antigen processing. The murine epidermal-derived DC line XS52, when pulsed with PPD, optimally activated the PPD-reactive Th1 clone LNC.2F1 as well as the Th2 clone LNC.4k1, and this activation was completely blocked by chloroquine pretreatment. These results validate the capacity of XS52 DC to digest PPD into immunogenic peptides inducing antigen-specific T cell immune responses. XS52 DC, as well as splenic DC and DC derived from bone marrow, degraded standard substrates for cathepsins B, C, D/E, H, J, and L, tryptase, and chymases, indicating that DC express a variety of protease activities. Treatment of XS52 DC with pepstatin A, an inhibitor of aspartic acid proteases, completely abrogated their capacity to present native PPD, but not trypsin-digested PPD fragments, to Th1 and Th2 cell clones. Pepstatin A also inhibited cathepsin D/E activity selectively among the XS52 DC-associated protease activities. On the other hand, inhibitors of serine proteases (dichloroisocoumarin, DCI) or of cysteine proteases (E-64) did not impair XS52 DC presentation of PPD, nor did they inhibit cathepsin D/E activity. Finally, all tested DC populations (XS52 DC, splenic DC, and bone marrow-derived DC) constitutively expressed cathepsin D mRNA. These results suggest that DC primarily employ cathepsin D (and perhaps E) to digest PPD into antigenic peptides.
Background: The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods: Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM), the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results: We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex in non-stutterers, while stutterers showed symmetric WM volumes. Conclusions: These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question. This article is available from: http://www.biomedcentral.com/1471-2377/4/23 © 2004 Jäncke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background: In rat, deafferentation of one labyrinth (unilateral labyrinthectomy) results in a characteristic syndrome of ocular and motor postural disorders (e.g., barrel rotation, circling behavior, and spontaneous nystagmus). Behavioral recovery (e.g., diminished symptoms), encompassing 1 week after unilateral labyrinthectomy, has been termed vestibular compensation. Evidence suggesting that the histamine H3 receptor plays a key role in vestibular compensation comes from studies indicating that betahistine, a histamine-like drug that acts as both a partial histamine H1 receptor agonist and an H3 receptor antagonist, can accelerate the process of vestibular compensation. Results: Expression levels for histamine H3 receptor (total) as well as three isoforms which display variable lengths of the third intracellular loop of the receptor were analyzed using in situ hybridization on brain sections containing the rat medial vestibular nucleus after unilateral labyrinthectomy. We compared these expression levels to H3 receptor binding densities. Total H3 receptor mRNA levels (detected by oligo probe H3X) as well as mRNA levels of the three receptor isoforms studied (detected by oligo probes H3A, H3B, and H3C) showed a pattern of increase, which was bilaterally significant at 24 h post-lesion for both H3X and H3C, followed by significant bilateral decreases in medial vestibular nuclei occurring 48 h (H3X and H3B) and 1 week post-lesion (H3A, H3B, and H3C). Expression levels of H3B were an exception to the aforementioned pattern, with significant decreases already detected at 24 h post-lesion. Coinciding with the decreasing trends in H3 receptor mRNA levels was an observed increase in H3 receptor binding densities occurring in the ipsilateral medial vestibular nuclei 48 h post-lesion.
Conclusion: Progressive recovery of the resting discharge of the deafferented medial vestibular nuclei neurons results in functional restoration of the static postural and oculomotor deficits, usually occurring within a time frame of 48 hours in rats. Our data suggest that the H3 receptor may be an essential part of pre-synaptic mechanisms required for reestablishing resting activities 48 h after unilateral labyrinthectomy.
Western cultures have witnessed a tremendous cultural and social transformation of sexuality in the years since the sexual revolution. Apart from a few public debates and scandals, the process has moved along gradually and quietly. Yet its real and symbolic effects are probably much more consequential than those generated by the sexual revolution of the sixties. Sigusch refers to the broad-based recoding and reassessment of the sexual sphere during the eighties and nineties as the "neosexual revolution". The neosexual revolution is dismantling the old patterns of sexuality and reassembling them anew. In the process, dimensions, intimate relationships, preferences and sexual fragments emerge, many of which had submerged, were unnamed or simply did not exist before. In general, sexuality has lost much of its symbolic meaning as a cultural phenomenon. Sexuality is no longer the great metaphor for pleasure and happiness, nor is it so greatly overestimated as it was during the sexual revolution. It is now widely taken for granted, much like egotism or motility. Whereas sex was once mystified in a positive sense - as ecstasy and transgression, it has now taken on a negative mystification characterized by abuse, violence and deadly infection. While the old sexuality was based primarily upon sexual instinct, orgasm and the heterosexual couple, neosexualities revolve predominantly around gender difference, thrills, self-gratification and prosthetic substitution. From the vast number of interrelated processes from which neosexualities emerge, three empirically observable phenomena have been selected for discussion here: the dissociation of the sexual sphere, the dispersion of sexual fragments and the diversification of intimate relationships. The outcome of the neosexual revolution may be described as "lean sexuality" and "self-sex".
Background: Common warts (verrucae vulgares) are human papilloma virus (HPV) infections with a high incidence and prevalence, most often affecting hands and feet, which can impair quality of life. The roughly 30 different therapeutic regimens described in the literature reveal the lack of a single outstanding strategy. Recent publications showed positive results of photodynamic therapy (PDT) with 5-aminolevulinic acid (5-ALA) in the treatment of HPV-induced skin diseases, especially warts, using visible light (VIS) to stimulate an absorption band of endogenously formed protoporphyrin IX. Additional experience adding waterfiltered infrared A (wIRA) during 5-ALA-PDT revealed positive effects. Aim of the study: The first prospective randomised controlled blinded study including PDT and wIRA in the treatment of recalcitrant common hand and foot warts. Comparison of "5-ALA cream (ALA) vs. placebo cream (PLC)" and "irradiation with visible light and wIRA (VIS+wIRA) vs. irradiation with visible light alone (VIS)". Methods: Pre-treatment with keratolysis (salicylic acid) and curettage. PDT treatment: topical application of 5-ALA (Medac) in "unguentum emulsificans aquosum" vs. placebo; irradiation: combination of VIS and a large amount of wIRA (Hydrosun® radiator type 501, 4 mm water cuvette, waterfiltered spectrum 590-1400 nm, contact-free, typically painless) vs. VIS alone. Post-treatment with retinoic acid ointment. One to three therapy cycles every 3 weeks. Main variable of interest: "Percent change of total wart area of each patient over time" (18 weeks). Global judgement by patient and by physician and subjective rating of feeling/pain (visual analogue scales). 80 patients with therapy-resistant common hand and foot warts were randomly assigned to one of the four therapy groups, with comparable numbers of warts at comparable sites in all groups.
Results: The individual total wart area decreased during 18 weeks in group 1 (ALA+VIS+wIRA) and in group 2 (PLC+VIS+wIRA) significantly more than in both groups without wIRA (group 3 (ALA+VIS) and 4 (PLC+VIS)): medians and interquartile ranges: -94% (-100%/-84%) vs. -99% (-100%/-71%) vs. -47% (-75%/0%) vs. -73% (-92%/-27%). After 18 weeks the two groups with wIRA differed remarkably from the two groups without wIRA: 42% vs. 7% completely cured patients; 72% vs. 34% vanished warts. Global judgement by patient and by physician and subjective rating of feeling were much better in the two groups with wIRA than in the two groups without wIRA. Conclusions: The complete treatment scheme of hand and foot warts described above (keratolysis, curettage, PDT treatment, irradiation with VIS+wIRA, retinoic acid ointment; three therapy cycles every 3 weeks) proved to be effective. Within this treatment scheme wIRA, as a non-invasive and painless treatment modality, emerged as an important, effective factor, while photodynamic therapy with 5-ALA in the described form contributed no recognisable clinical improvement, either alone (without wIRA) or in combination with wIRA. For future treatment of warts a further improved scheme is proposed: one treatment cycle (keratolysis, curettage, wIRA, without PDT) once a week for six to nine weeks. © 2004 Fuchs et al; licensee German Medical Science. This is an Open Access article: verbatim copying and redistribution of this article are permitted in all media for any purpose, provided this notice is preserved along with the article's original URL: http://www.egms.de/en/gms/volume2.shtml
We present an overview of the mathematics underlying the quantum Zeno effect. Classical, functional analytic results are put into perspective and compared with more recent ones. This yields some new insights into mathematical preconditions entailing the Zeno paradox, in particular a simplified proof of Misra's and Sudarshan's theorem. We emphasise the complex-analytic structures associated to the issue of existence of the Zeno dynamics. On grounds of the assembled material, we reason about possible future mathematical developments pertaining to the Zeno paradox and its counterpart, the anti-Zeno paradox, both of which seem to be close to complete characterisations. PACS classification: 03.65.Xp, 03.65.Db, 05.30.-d, 02.30.T. See the corresponding presentation: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Zeno Dynamics in Quantum Statistical Mechanics"
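For orientation, the classical Zeno limit underlying Misra's and Sudarshan's theorem can be written, in a standard formulation for a projection P and Hamiltonian H on a Hilbert space, as

```latex
\lim_{n\to\infty}\Bigl(P\,e^{-\mathrm{i}tH/n}\,P\Bigr)^{n}
  \;=\; P\,e^{-\mathrm{i}t\,PHP}\,P ,
```

whenever the limit exists in the strong operator sense; the right-hand side defines the Zeno dynamics, a unitary evolution on the subspace $P\mathcal{H}$ generated by $PHP$. The existence question for this limit is exactly the issue the complex-analytic structures mentioned above address.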
We study the quantum Zeno effect in quantum statistical mechanics within the operator algebraic framework. We formulate a condition for the appearance of the effect in W*-dynamical systems, in terms of the short-time behaviour of the dynamics. Examples of quantum spin systems show that this condition can be effectively applied to quantum statistical mechanical models. Furthermore, we derive an explicit form of the Zeno generator, and use it to construct Gibbs equilibrium states for the Zeno dynamics. As a concrete example, we consider the X-Y model, for which we show that a frequent measurement at a microscopic level, e.g. a single lattice site, can produce a macroscopic effect in changing the global equilibrium. PACS classification: 03.65.Xp, 05.30.-d, 02.30. See the corresponding papers: Schmidt, Andreas U.: "Zeno Dynamics of von Neumann Algebras" and "Mathematics of the Quantum Zeno Effect" and the talk "Zeno Dynamics in Quantum Statistical Mechanics" - http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1167/
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Terahertz (THz) waves could not be generated, except by huge systems such as free-electron lasers, until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1]. The first method, using the Auston switch, could generate up to 1 THz [2]. Subsequent efforts to extend the frequency limit brought combinations of antennas for generation and detection to several THz [3, 4]. The technique has since developed, gradually filling up the so-called "THz gap". At the same time, much research has also aimed at increasing the output power [5-7]. In the 1990s, a major advance in the accessible frequency band was brought by non-linear optical methods [8-11]. These drastically expanded the frequency region and recently made possible measurements up to 41 THz [12]. In parallel, other approaches have yielded new generation and detection methods, for CW-THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, though only at a low temperature of 60 K [20-22]. This research attracts much attention because, owing to its low cost and easier operation, it could be the breakthrough that makes THz techniques widespread in industry as well as research. The technology of short-pulse lasers has naturally helped the THz field to develop: against the background of the appearance of the stable Ti:sapphire laser and the high-power chirped pulse amplification (CPA) laser in place of the dye laser, much effort has been concentrated on pulse compression and amplification techniques [23]. Viewed from the application side, the THz technique has come into the limelight as a promising measurement method.
The discovery of absorption peaks of proteins and DNA in the THz region has, over the past several years, promoted putting the technique into practice in medicine and pharmaceutical science [24-27]. It is also known that absorption lines of light polar molecules exist in this region; accordingly, gas and water-content monitoring has been proposed for the chemical and food industries [28-32]. Furthermore, many reports, such as measurements of carrier distributions in semiconductors, of the refractive index of thin films, and of object shapes as radar, indicate that the technique has a wide range of applications [33-37]. I believe it is worth the challenge of applying it in the steel-making industry, due to its unique advantages. THz wavelengths of 30-300 µm are both insensitive to the surface roughness of steel products and capable of detection with sub-millimeter precision, for remote surface inspection. There is also a possibility of measuring the thickness or dielectric constants of relatively highly conductive materials, thanks to high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5. Furthermore, measurements at high temperature may be possible, since the technique is less influenced by thermal radiation than visible and infrared light. These ideas have motivated me to start this THz work.
The Kochen-Specker theorem has been discussed intensely ever since its original proof in 1967. It is one of the central no-go theorems of quantum theory, showing the non-existence of a certain kind of hidden-state models. In this paper, we first offer a new, non-combinatorial proof for quantum systems with a type I_n factor as algebra of observables, including I_infinity. Afterwards, we give a proof of the Kochen-Specker theorem for an arbitrary von Neumann algebra R without summands of types I_1 and I_2, using a known result on two-valued measures on the projection lattice P(R). Some connections with presheaf formulations as proposed by Isham and Butterfield are made.
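In algebraic terms, the obstruction can be phrased as follows (a standard formulation, consistent with the two-valued-measure approach mentioned above): there is no assignment $v$ of values $v(P)\in\{0,1\}$ to all projections $P\in\mathcal{P}(R)$ such that

```latex
\sum_{i} v(P_i) = 1
\qquad\text{whenever } P_i P_j = \delta_{ij} P_i \ \text{and}\ \sum_i P_i = \mathbf{1},
```

i.e. no non-contextual two-valued measure on the projection lattice exists, which rules out assigning definite truth values to all quantum propositions at once.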
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB-banks are well-capitalized or gain little from taking risks, then they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL Classification: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in macroeconomic performance of the U.S. economy.
Recent evidence on the effect of government spending shocks on consumption cannot be easily reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL Classification: E32, E62.
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely about real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL Classification: E5, E61, D78
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on the intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger cause bond spread changes for a higher number of firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market and the magnitude of this sensitivity increases when credit quality becomes worse. Finally, the CDS market plays a more important role for price discovery than the corporate bond market. JEL Classification: G10, G14, C32.
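The lead-lag logic of such a VAR can be illustrated with a small simulation. The sketch below (illustrative only; the data-generating coefficients are invented, not the paper's estimates) fits a bivariate VAR(1) by OLS to simulated data in which stock returns lead CDS spread changes; the off-diagonal coefficients then reveal the direction of the lead:

```python
import numpy as np

def fit_var1(data):
    """OLS estimate of a VAR(1), y_t = A @ y_{t-1} + e_t, for zero-mean data."""
    Y, X = data[1:], data[:-1]
    # Solve the least-squares problem X @ A.T ~ Y for the coefficient matrix A
    A_T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A_T.T

# Simulate a market where stock returns (x) lead CDS spread changes (y):
# y_t loads on x_{t-1}, while x_t ignores y_{t-1}.
rng = np.random.default_rng(0)
T = 5000
x = rng.standard_normal(T)
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = -0.5 * x[t - 1] + 0.1 * y[t - 1] + 0.2 * rng.standard_normal()

A = fit_var1(np.column_stack([x, y]))
# A[1, 0] picks up the lead of stocks over CDS; A[0, 1] stays near zero,
# mirroring the one-directional lead-lag pattern described above.
```

A Granger-causality test then amounts to asking whether the cross-lag coefficient (here A[1, 0]) is significantly different from zero.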
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally-expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over-and-above the direct news announcement effects. JEL Classification: F3, F4, G1, C5
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of efforts. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, firms are less profitable, and poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL Classification: D82; G21; G32.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication and a lower probability of potentially harmful co-investment, and facilitate board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which coincides with an increased probability of investment in high-tech companies. JEL Classification: G24, G31, G32.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL Classification: C1, G1
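The "simple nonlinear function" in question is just a ratio: a realized beta is the realized covariance of an asset with the market divided by the realized market variance, both computed as sums of high-frequency cross products. A minimal sketch with simulated intraday returns (the sampling frequency and true beta of 1.5 are invented for illustration):

```python
import numpy as np

def realized_beta(asset_returns, market_returns):
    """Realized beta for one period: realized covariance with the market
    divided by realized market variance (plain sums of high-frequency
    cross products -- no model, no expectation operator)."""
    asset_returns = np.asarray(asset_returns, dtype=float)
    market_returns = np.asarray(market_returns, dtype=float)
    realized_cov = np.sum(asset_returns * market_returns)
    realized_var = np.sum(market_returns ** 2)
    return realized_cov / realized_var

# Simulated one-minute returns over a 390-minute trading day with a
# true beta of 1.5 plus idiosyncratic noise (purely illustrative).
rng = np.random.default_rng(1)
m = 0.001 * rng.standard_normal(390)                 # market returns
a = 1.5 * m + 0.0002 * rng.standard_normal(390)      # asset returns
beta = realized_beta(a, m)
```

Because numerator and denominator are both highly persistent, their ratio can be far less persistent, which is the nonlinear-cointegration point made above.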
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Classification: C62, D83, D84, E31
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. JEL Classification: E31, E52, E58, E61
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL Classification: C63, E31, E52.
In this article, we investigate risk-return characteristics and diversification benefits when private equity is used as a portfolio component. We use a unique dataset describing 642 U.S. portfolio companies with 3620 private equity investments. Information about precisely dated cash flows at the company level enables, for the first time, a cash-flow-equivalent and simultaneous simulated investment in stocks, as well as the construction of stock portfolios for benchmarking purposes. Methodologically, we construct private equity, stock-benchmark and mixed-asset portfolios using bootstrap simulations. For the late 1990s we find a dramatic increase in the extent to which private equity outperforms stock investment; in earlier years private equity underperformed its stock benchmarks. Within the overall class of private equity, returns on earlier-stage investment categories, like venture capital, show on average higher variation and even higher rates of failure. In this category in particular, high average portfolio returns are generated solely by the ability to select a few extremely well performing companies, thus compensating for lost investments. There is a high marginal reduction of diversifiable risk, of about 80%, when the portfolio size is increased to 15 investments. When the portfolio size is increased from 15 to 200, there are few marginal risk diversification effects on the one hand, but a large increase in management expenditure on the other, so that an actual average portfolio size between 20 and 28 investments seems well balanced. We provide empirical evidence that the non-diversifiable risk that a constrained investor, who invests exclusively in private equity, has to hold exceeds that of constrained stock investors and also the market risk. From the viewpoint of unconstrained investors with complete investment freedom, risk can be optimally reduced by constructing mixed-asset portfolios.
Across the various private equity subcategories analyzed, there are large differences in the optimal allocations to this asset class for minimizing mixed-asset portfolio variance or maximizing performance ratios. We observe optimal portfolio weightings of between 3% and 65%.
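The diversification effect described above can be illustrated with a minimal bootstrap sketch. The payoff distribution below is hypothetical, chosen only to mimic skewed venture-style payoffs (many losses, a few extreme winners), and is not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-investment total returns with a venture-style skew.
population = np.concatenate([np.full(80, -1.0),   # total losses
                             np.full(15, 0.5),    # modest gains
                             np.full(5, 8.0)])    # extreme winners

def bootstrap_portfolio_sd(n_investments, n_draws=20000):
    """Std. dev. of equally weighted portfolio returns across bootstrap draws."""
    draws = rng.choice(population, size=(n_draws, n_investments), replace=True)
    return draws.mean(axis=1).std()

sd_1 = bootstrap_portfolio_sd(1)
sd_15 = bootstrap_portfolio_sd(15)
sd_200 = bootstrap_portfolio_sd(200)
# Most of the diversifiable risk is gone by ~15 investments; going from
# 15 to 200 buys comparatively little further risk reduction.
```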
We take a simple time-series approach to modeling and forecasting daily average temperature in U.S. cities, and we inquire systematically as to whether it may prove useful from the vantage point of participants in the weather derivatives market. The answer is, perhaps surprisingly, yes. Time-series modeling reveals conditional mean dynamics, and crucially, strong conditional variance dynamics, in daily average temperature, and it reveals sharp differences between the distribution of temperature and the distribution of temperature surprises. As we argue, it also holds promise for producing the long-horizon predictive densities crucial for pricing weather derivatives, so that additional inquiry into time-series weather forecasting methods will likely prove useful in weather derivatives contexts.
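A stylized version of such a daily temperature model, with a seasonal conditional mean and seasonal conditional variance, might look as follows; all parameters are invented for illustration and are not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_daily_temp(n_days=3650):
    """Seasonal mean + AR(1) deviations whose innovation volatility is
    itself seasonal. Parameters are illustrative only."""
    t = np.arange(n_days)
    mean = 15.0 + 10.0 * np.cos(2 * np.pi * t / 365.25)   # seasonal level
    sigma = 2.0 + 1.0 * np.cos(2 * np.pi * t / 365.25)    # seasonal volatility
    dev = np.zeros(n_days)
    for i in range(1, n_days):
        dev[i] = 0.8 * dev[i - 1] + sigma[i] * rng.standard_normal()
    return mean + dev, mean, sigma

temp, mean, sigma = simulate_daily_temp()
```

Simulating forward from such a model many times yields the long-horizon predictive densities the abstract refers to.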
Despite powerful advances in yield curve modeling in the last twenty years, comparatively little attention has been paid to the key practical problem of forecasting the yield curve. In this paper we do so. We use neither the no-arbitrage approach, which focuses on accurately fitting the cross section of interest rates at any given time but neglects time-series dynamics, nor the equilibrium approach, which focuses on time-series dynamics (primarily those of the instantaneous rate) but pays comparatively little attention to fitting the entire cross section at any given time and has been shown to forecast poorly. Instead, we use variations on the Nelson-Siegel exponential components framework to model the entire yield curve, period-by-period, as a three-dimensional parameter evolving dynamically. We show that the three time-varying parameters may be interpreted as factors corresponding to level, slope and curvature, and that they may be estimated with high efficiency. We propose and estimate autoregressive models for the factors, and we show that our models are consistent with a variety of stylized facts regarding the yield curve. We use our models to produce term-structure forecasts at both short and long horizons, with encouraging results. In particular, our forecasts appear much more accurate at long horizons than various standard benchmark forecasts. JEL Code: G1, E4, C5
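The Nelson-Siegel framework referred to above expresses the yield at maturity tau through three factors. A minimal sketch, with the decay parameter fixed at 0.0609 (a value commonly used with maturities measured in months, stated here as an assumption rather than this paper's estimate):

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam=0.0609):
    """Nelson-Siegel yield at maturity tau (in months). beta0 acts as the
    level (the long-maturity limit), beta1 as the slope, beta2 as the
    curvature; lam governs the decay of the slope/curvature loadings."""
    tau = np.asarray(tau, dtype=float)
    loading1 = (1 - np.exp(-lam * tau)) / (lam * tau)
    loading2 = loading1 - np.exp(-lam * tau)
    return beta0 + beta1 * loading1 + beta2 * loading2
```

Forecasts then come from fitting (beta0, beta1, beta2) period by period and modeling each factor autoregressively.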
We consider three sets of phenomena that feature prominently - and separately - in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that: (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis.
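Point (a) can be made concrete with a two-line sketch: if returns are conditionally Gaussian with a fixed positive mean, the conditional probability of a positive return moves inversely with conditional volatility (illustrative parameter values only):

```python
from math import erf, sqrt

def prob_positive_return(mu, sigma):
    """If r ~ N(mu, sigma^2), then Pr(r > 0) = 1 - Phi(-mu/sigma),
    where Phi is the standard normal CDF."""
    cdf_at = 0.5 * (1 + erf((-mu / sigma) / sqrt(2)))  # Phi(-mu/sigma)
    return 1 - cdf_at

# With a fixed positive mean, calm (low-volatility) periods have more
# predictable signs than stormy (high-volatility) periods.
p_calm = prob_positive_return(mu=0.05, sigma=0.5)
p_stormy = prob_positive_return(mu=0.05, sigma=2.0)
```

So a volatility forecast is implicitly a sign forecast whenever the mean is nonzero, which is the abstract's point (a).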
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
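As background, the univariate range-based idea the paper builds on can be sketched via Parkinson's classic estimator; the multivariate, no-arbitrage covariance construction itself is not reproduced here:

```python
import numpy as np

def parkinson_variance(high, low):
    """Parkinson's range-based estimate of daily return variance:
    sigma^2 = mean( ln(H/L)^2 ) / (4 ln 2),
    averaged over the supplied days of intraday highs and lows."""
    high, low = np.asarray(high, float), np.asarray(low, float)
    return np.mean(np.log(high / low) ** 2) / (4 * np.log(2))
```

Because the daily high and low summarize the whole intraday path, such estimators extract more information than close-to-close returns while being less sensitive to bid-ask bounce than tick-level estimators.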
Financial theory creates a puzzle. Some authors argue that high-risk entrepreneurs choose debt contracts instead of equity contracts, since risky but high returns are of relatively more value for a loan-financed firm. On the contrary, authors who focus explicitly on start-up finance predict that the riskier their projects are, the more likely entrepreneurs are to seek equity-like venture capital contracts. Our paper makes a first step toward resolving this puzzle empirically. We present microeconometric evidence on the determinants of debt and equity financing in young and innovative SMEs. We pay special attention to the role of risk in the choice of financing method. Since risk is not directly observable, we use different indicators of financial and project risk. It turns out that our data generally confirm the hypothesis that the probability that a young high-tech firm receives equity financing is an increasing function of financial risk. With regard to intrinsic project risk, our results are less conclusive, as some of our indicators of a risky project are found to have a negative effect on the likelihood of being financed by private equity.
We study the returns to venture capital and private equity investment using data from 221 venture capital and private equity funds that are part of 72 venture capital and private equity firms, covering 5040 entrepreneurial firms (3826 venture capital and 1214 private equity), and spanning 32 years (1971-2003) and 39 countries in North and South America, Europe and Asia. We make use of four main categories of variables to proxy for the value-added activities and risks that explain venture capital and private equity returns: the market and legal environment, VC characteristics, entrepreneurial firm characteristics, and the characteristics and structure of the investment. We show that Heckman sample selection issues with regard to both unrealized and partially realized investments are important to consider when analysing the determinants of realized returns. We further compare the actual unrealized returns, as reported to investment managers, with the predicted unrealized returns based on the estimates of realized returns from the sample selection models. We show that there exist significant systematic biases in the reporting of unrealized investments to institutional investors, depending on the level of the earnings aggressiveness and disclosure indices in a country, as well as on proxies for the degree of information asymmetry between investment managers and venture capital and private equity fund managers. JEL Classification: G24, G28, G31, G32, G35
We analyze welfare-maximizing monetary policy in a dynamic two-country model with price stickiness and imperfect competition. In this context, a typical terms-of-trade externality affects policy interaction between independent monetary authorities. Unlike the existing literature, we remain consistent with a public finance approach through an explicit consideration of all the distortions that are relevant to the Ramsey planner. This strategy entails two main advantages. First, it allows an accurate characterization of optimal policy in an economy that evolves around a steady state which is not necessarily efficient. Second, it allows us to describe a full range of alternative dynamic equilibria when price setters in both countries are completely forward-looking and households' preferences are not restricted. In this context, we study optimal policy both in the long run and along a dynamic path, and we compare optimal commitment policy under Nash competition and under cooperation. By deriving a second-order accurate solution to the policy functions, we also characterize the welfare gains from international policy cooperation. JEL Classification: E52, F41. This version: January 2004. First draft: October 2003.
This paper considers a theoretical model of n asymmetric firms that reduce their initial unit costs by spending on R&D activities. In accordance with Schumpeterian hypotheses, we find that more efficient (bigger) firms spend more on R&D, and this leads to a more concentrated market structure. We also find a positive relationship between innovation and market concentration. This calls for a corrective tax on R&D activities to curtail strategic incentives to over-invest in R&D in an attempt to achieve a higher market share. JEL Classification: L11, L52, O31. February 2004.
This paper analyzes the impact of different types of venture capitalists on the performance of their portfolio firms around and after the IPO. We thereby investigate the hypothesis that the different governance structures, objectives and track records of different types of VCs have a significant impact on their respective IPOs. We explore this hypothesis using a data set embracing all IPOs that occurred on Germany's Neuer Markt. Our main finding is that significant differences among the different VCs exist. Firms backed by independent VCs perform significantly better two years after the IPO than all other IPOs, and their share prices fluctuate less than those of their counterparts in this period. Evidently, independent VCs, which concentrated mainly on growth stocks (low book-to-market ratio) and large firms (high market value), were able to add value, leading to less post-IPO idiosyncratic risk and higher returns (after controlling for all other effects). By contrast, firms backed by public VCs (being small and having a high book-to-market ratio) showed relative underperformance. JEL Classification: G10, G14, G24. 29th January 2004.
How might retirees consider deploying the retirement assets accumulated in a defined contribution pension plan? One possibility would be to purchase an immediate annuity. Another approach, called the "phased withdrawal" strategy in the literature, would have the retiree invest his funds and then withdraw some portion of the account annually. Under this second tactic, the withdrawal rate might be determined according to a fixed benefit level payable until the retiree dies or the funds run out, or it could be set using a variable formula, where the retiree withdraws funds according to a rule linked to life expectancy. Using a range of data consistent with the German experience, we evaluate several alternative designs for phased withdrawal strategies, allowing for endogenous asset allocation patterns, and also allowing the worker to decide both when to retire and when to switch to an annuity. We show that one particular phased withdrawal rule is appealing, since it offers relatively low expected shortfall risk, good expected payouts for the retiree during his life, and some bequest potential for the heirs. We also find that unisex mortality tables, if used for annuity pricing, can make women's expected shortfalls higher, expected benefits higher, and bequests lower under a phased withdrawal program. Finally, we show that delayed annuitization can be appealing, since it provides higher expected benefits with lower expected shortfalls, at the cost of somewhat lower anticipated bequests. JEL Classification: G22, G23, J26, J32, H55. January 2004.
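A minimal sketch of one variable phased-withdrawal rule of the kind discussed: a "1/T" rule that divides current wealth by the remaining horizon each year. The return process and all parameter values are illustrative, not the paper's German calibration:

```python
import numpy as np

rng = np.random.default_rng(2)

def phased_withdrawal(wealth=100.0, horizon=30, mu=0.05, sigma=0.12):
    """One simulated path of a '1/T' phased-withdrawal rule: each year the
    retiree withdraws wealth / remaining_years, and the remainder stays
    invested with lognormal returns. Returns (payouts, bequest)."""
    payouts = []
    for year in range(horizon):
        draw = wealth / (horizon - year)   # life-expectancy-linked rule
        payouts.append(draw)
        wealth -= draw
        wealth *= np.exp(mu - 0.5 * sigma**2 + sigma * rng.standard_normal())
    return payouts, wealth

payouts, bequest = phased_withdrawal()
```

Because the rule always withdraws a fraction of current wealth, the account is never overdrawn before the horizon; a fixed-benefit rule, by contrast, can exhaust the fund early, which is the shortfall risk the abstract weighs against expected payouts and bequests.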
The life of Varroa destructor (Anderson and Trueman), an ectoparasitic mite of honeybees, is divided into a reproductive phase in the bee brood and a phoretic phase during which the mite is attached to the adult bee. Phoretic mites leave the colony with workers involved in foraging tasks. Little information is available on the mortality of mites outside the colony. Mites may or may not return to the colony as a result of the death of infested foragers, host change by drifting of foragers, or removal of mites outside the colony. That mites do not return to the colony was indicated by the substantially higher infestation of outflying workers compared to that of returning workers (Kutschker, 1999). The main objective of this study was to determine whether V. destructor influences the flight behaviour of foragers and consequently the frequency with which foragers return to the colony. I first repeated the experiment of Kutschker (1999), examining the infestation of outflying and returning workers. Further, I recorded the flight duration of foragers using a video method. In this experiment I also compared the infestation and flight duration of bees of different genetic origin, Carnica from Oberursel and bees from the Primorsky region. I investigated the returning time of workers, the returning frequency until evening, drifting to other colonies, and orientation toward the nest entrance in experiments in which workers were released in close vicinity of the colony. Finally, I measured the loss of foragers in relation to colony infestation using a Bee Scan. The results of this study, listed below, show a considerable influence of V. destructor on the flight behavior of foragers, translating into a loss of mites. The loss of mites with foragers adds a substantial component to mite mortality and was underestimated in previous studies. Such loss might be viewed as a mechanism of resistance against V. destructor.
a) The mean infestation of outflying workers (0.019±0.018) was twice the mean infestation of returning workers (0.009±0.018). The difference in infestation between outflying and returning workers was more marked in highly infested colonies. b) Investigation of individually tagged workers by means of a two-camera video recording device showed a significantly higher infestation of outflying workers compared to returning workers. Mites were lost through the non-return of infested foragers (22%) and through the loss of mites from foragers that returned to the colony without the mite (20%). A small portion of mites (1.8%) was gained. The loss of mites significantly exceeded the gain. c) The flight duration, determined using the same two-camera video system, was significantly higher in infested workers than in uninfested workers of the same age that flew closest in time. The median flight duration of infested workers (214 s) was 1.7 times higher than that of uninfested workers (128 s). d) Infested workers took 2.3 times longer to return to the colony than uninfested workers of the same age when released from the same locations closest in time. The returning time increased with the distance of release. In a group of bees released simultaneously, infestation was higher in bees returning later and in those that did not return within the observation period of 15 min. e) Released infested workers failed to return to the colony by evening 1.5 times more frequently than uninfested workers. The difference in returning was significant for release locations 20 and 50 m from the colony. No difference in returning between infested and uninfested workers was observed for the most distant location, 400 m. f) No significant difference was found in returning time or in the returning frequency until evening between workers artificially infested overnight and naturally infested workers.
Artificially infested workers returned later and less frequently than a control group, indicating a rapid influence of V. destructor on the flight behavior of foragers. g) The orientation ability of infested workers toward the nest entrance was impaired. Infested workers approached a dummy entrance twice as often as uninfested workers before finding the nest entrance. h) No significant differences were found in drifting between infested and uninfested workers. Drifting into the neighboring nucleus colony occurred on about 1% of occasions after the release of marked workers. Similarly, infested workers entered a differently colored hive somewhat more often (2.6%) than a same-colored hive (1.9%), but the difference was not significant. However, the number of drifting bees was too low to make the results conclusive. i) The comparison between Carnica and Primorsky workers revealed a higher infestation in Carnica than in Primorsky. Further, Primorsky workers lost more mites during foraging, due both to mite loss from foragers and to the non-return of infested workers. No significant differences in flight duration were observed between the two bee stocks. j) The loss of foragers, as determined by Bee Scan counts of outflying and returning foragers, and the infestation of outflying bees increased significantly over a period of 70 days. A colony with a 7.7 times higher infestation of outflying foragers lost 2.2 times more bees per flight per day compared to a low-infested colony. k) The estimated daily loss of mites with foragers, up to 3.1% of the mite population, exceeds the mite mortality of approximately 1% within the colony, as represented by counts of dead mites on bottom-board inserts.
Alzheimer’s disease (AD) is the most common neurodegenerative disorder worldwide, causing presenile dementia and the death of millions of people. During AD, damage to and massive loss of brain cells occur. Alzheimer’s disease is genetically heterogeneous and may therefore represent a common phenotype that results from various genetic and environmental influences and risk factors. In approximately 10% of patients, changes in the genetic information have been detected (gene mutations). In these cases, Alzheimer’s disease is inherited as an autosomal dominant trait (familial Alzheimer’s disease, FAD). In rare cases of familial Alzheimer’s disease (about 1-3%), mutations have been detected in genes on chromosomes 14 and 1 (encoding presenilin 1 and 2, respectively), and on chromosome 21, encoding the amyloid precursor protein (APP), which is responsible for the release of the cell-damaging protein amyloid-beta (ß-amyloid, Aß). Familial forms of early-onset Alzheimer’s disease are rare; however, their importance extends far beyond their frequency, because they allow the identification of some of the critical pathogenetic pathways of the disease. All familial Alzheimer mutations share a common feature: they lead to enhanced production of Aß, which is the major constituent of senile plaques in the brains of AD patients. New data indicate that Aß promotes neuronal degeneration. Therefore, one aim of this thesis was to elucidate the neurotoxic biochemical pathways induced by Aß by investigating the effect of the FAD Swedish APP double mutation (APPsw) on oxidative-stress-induced cell death mechanisms. This mutation results in a three- to sixfold increased Aß production compared to wild-type APP (APPwt). As cell models, the neuronal PC12 (rat pheochromocytoma) and HEK (human embryonic kidney 293) cell lines were used, which had been transfected with human wild-type APP or human APP containing the Swedish double mutation. These cell models offer two important advantages.
First, compared to experiments applying high, micromolar concentrations of Aß extracellularly to cells, PC12 APPsw cells secrete low Aß levels similar to the situation in FAD brains. Thus, this cell model represents a very suitable approach for elucidating AD-specific cell death pathways under near-physiological conditions. Second, these two cell lines (PC12 and HEK, APPwt and APPsw), with different levels of Aß production, may additionally allow the study of dose-dependent effects of Aß. The results obtained here provide evidence of the enhanced cell vulnerability caused by the Swedish APP mutation and elucidate the cell death mechanism probably initiated by intracellularly produced Aß. It seems likely that increased production of Aß at physiological levels primes APPsw PC12 cells to undergo cell death only after additional stress, while chronically high levels in HEK cells already lead to enhanced basal apoptotic levels. Crucial effects of the Swedish APP mutation include impairments of cellular energy metabolism, affecting mitochondrial membrane potential and ATP levels, as well as the additional activation of caspase 2, caspase 8 and JNK in response to oxidative stress. Thereby, the following model can be proposed: PC12 cells harboring the Swedish APP mutation have a reduced energy metabolism compared to APPwt or control cells. However, this effect does not lead to enhanced basal apoptotic levels in cultured cells. Exposure of PC12 cells to oxidative stress leads to mitochondrial dysfunction, e.g., a decrease in mitochondrial membrane potential and depletion of ATP. The consequence is the activation of the intrinsic apoptotic pathway, releasing cytochrome c and Smac and resulting in the activation of caspase 9. This effect is amplified by the overexpression of APP, since both APPsw and APPwt PC12 cells show enhanced cytochrome c and Smac release as well as enhanced caspase 9 activity compared to vector-transfected controls.
In APPsw PC12 cells a parallel pathway is additionally engaged. Due to reduced ATP levels or enhanced Aß production, JNK is activated. Furthermore, the extrinsic apoptotic pathway is enhanced, since caspase 8 and caspase 2 activation was clearly increased by the Swedish APP mutation. Both pathways may then converge, activating the effector enzyme caspase 3 and executing cell death. In addition, caspase-independent effects also need to be considered. One possibility could be the involvement of AIF, since AIF expression was found to be induced by the Swedish APP mutation. In APPsw HEK cells, high chronic Aß levels lead to enhanced apoptotic levels and reduced mitochondrial membrane potential and ATP levels even under basal conditions. In summary, a hypothetical sequence of events is proposed for our cell model, linking FAD, Aß production, JNK activation and mitochondrial dysfunction with the caspase pathway and neuronal loss. The brain has a high metabolic rate and is exposed to gradually rising levels of oxidative stress during life. In Swedish FAD patients, the levels of oxidative stress are increased in the inferior temporal cortex. This study, using a cell model mimicking the in vivo situation in AD brains, indicates that both increased Aß production and the gradual rise of oxidative stress throughout life probably converge on a final common pathway of increased vulnerability of neurons from FAD patients to apoptotic cell death. Presenilin (PS) 1 is an aspartyl protease involved in the gamma-secretase-mediated proteolysis generating amyloid-ß protein (Aß), the major constituent of senile plaques in the brains of Alzheimer’s disease (AD) patients. Recent studies have suggested an additional role for presenilin proteins in the apoptotic cell death observed in AD. Since PS1 is proteolytically cleaved by caspase 3, it has been proposed that the resulting C-terminal fragment of PS1 (PSCas) could play a role in signal transduction during apoptosis.
Moreover, it has been shown that mutant presenilins causing early-onset familial Alzheimer's disease (FAD) may render cells vulnerable to apoptosis. The mechanism by which PS1 regulates apoptotic cell death is not yet understood. Therefore, one aim of the present study was to clarify the involvement of PS1 in the proteolytic cascade of apoptosis and whether the cleavage of PS1 by caspase 3 has a regulatory function. It is demonstrated here that both PS1 and PS1Cas lead to a reduced vulnerability of PC12 and Jurkat cells to different apoptotic stimuli. However, a mutation at the caspase 3 recognition site (D345A/PSmut), which inhibits cleavage of PS1 by caspase 3, shows no differences in the effect of PS1 or PSCas towards apoptotic stimuli. This suggests that proteolysis of PS1 by caspase 3 is not a determinant, but only a secondary effect, during apoptosis. Since several FAD mutations distributed throughout the PS1 gene lead to enhanced apoptosis, an abolishment of the antiapoptotic effect of PS1 might contribute to the massive neurodegeneration at an early age in FAD patients. The regulatory properties of PS1 in apoptosis may thus operate not through caspase-3-dependent cleavage and generation of PSCas, but rather through the interaction of PS1 with other proteins involved in apoptosis.
This paper proves the correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses the correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics, and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal-order reductions. However, there remains a small gap: currently, the proof of correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture if no abstract constants are used in the analysis.
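As a much-simplified illustration of the underlying idea of strictness analysis by abstract evaluation (a two-point abstract domain rather than Nöcker's abstract reduction with cycle detection), consider:

```python
# Two-point abstract domain: BOT = definitely undefined, TOP = possibly
# defined. A function is strict in argument i if an abstract BOT in
# position i forces BOT as the result.
BOT, TOP = "BOT", "TOP"

def abs_add(x, y):
    # addition is strict in both arguments
    return BOT if BOT in (x, y) else TOP

def abs_if(c, t, e):
    # a conditional is strict in its condition; the two branches are
    # joined, loosely mirroring the union operator in the calculus
    if c == BOT:
        return BOT
    return TOP if TOP in (t, e) else BOT

def is_strict_in(f, arity, i):
    """Abstractly test strictness: feed BOT at position i, TOP elsewhere."""
    args = [TOP] * arity
    args[i] = BOT
    return f(*args) == BOT
```

Nöcker's analysis works over a far richer domain and reduces abstract expressions, with cycle detection used to argue nontermination; this sketch conveys only the basic strict-in-bottom test.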
Work on proving congruence of bisimulation in functional programming languages often refers to [How89,How96], where Howe gave a highly general account of this topic in terms of so-called 'lazy computation systems'. Particularly in implementations of lazy functional languages, sharing plays an eminent role. In this paper we show how the original work of Howe can be extended to cope with sharing. Moreover, we demonstrate the application of our approach to the call-by-need lambda calculus lambda-ND, which provides an erratic non-deterministic operator pick and a non-recursive let. A definition of a bisimulation is given, which has to be based on a further calculus named lambda-~, since the naïve bisimulation definition is useless. The main result is that this bisimulation is a congruence and is contained in the contextual equivalence. This might be a step towards defining useful bisimulation relations and proving them to be congruences in calculi that extend the lambda-ND calculus.
This Article concerns the duty of care in American corporate law. To fully understand that duty, it is necessary to distinguish between roles, functions, standards of conduct, and standards of review. A role consists of an organized and socially recognized pattern of activity in which individuals regularly engage. In organizations, roles take the form of positions, such as the position of the director. A function consists of an activity that an actor is expected to engage in by virtue of his role or position. A standard of conduct states the way in which an actor should play a role, act in his position, or conduct his functions. A standard of review states the test that a court should apply when it reviews an actor’s conduct to determine whether to impose liability, grant injunctive relief, or determine the validity of his actions. In many or most areas of law, standards of conduct and standards of review tend to be conflated. For example, the standard of conduct that governs automobile drivers is that they should drive carefully, and the standard of review in a liability claim against a driver is whether he drove carefully. Similarly, the standard of conduct that governs an agent who engages in a transaction with his principal is that the agent must deal fairly, and the standard of review in a claim by the principal against an agent, based on such a transaction, is whether the agent dealt fairly. The conflation of standards of conduct and standards of review is so common that it is easy to overlook the fact that whether the two kinds of standards are or should be identical in any given area is a matter of prudential judgment. In a corporate world in which information was perfect, the risk of liability for assuming a given corporate role was always commensurate with the incentives for assuming the role, and institutional considerations never required deference to a corporate organ, the standards of conduct and review in corporate law might be identical. 
In the real world, however, these conditions seldom hold, and in American corporate law the standards of review pervasively diverge from the standards of conduct. Traditionally, the two major areas of American corporate law that involved standards of conduct and review have been the duty of care and the duty of loyalty. The duty of loyalty concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does involve his own self-interest. The duty of care concerns the standards of conduct and review applicable to a director or officer who takes action, or fails to act, in a matter that does not involve his own self-interest.
When performance measures are used for evaluation purposes, agents have some incentives to learn how their actions affect these measures. We show that the use of imperfect performance measures can cause an agent to devote too many resources (too much effort) to acquiring information. Doing so can be costly to the principal because the agent can use information to game the performance measure to the detriment of the principal. We analyze the impact of endogenous information acquisition on the optimal incentive strength and the quality of the performance measure used.
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann Wolfgang Goethe-Universität, Frankfurt (Germany) in September 2003. During this conference, experts presented and discussed various aspects of semantics. The wide range of topics included in this book provides insight into fields of ongoing semantics research.
This thesis presents investigations into the applicability of four methods for the selective introduction of radicals into DNA, using EPR (electron paramagnetic resonance) spectroscopy. The selective introduction and generation of radicals in DNA is required in order to study J-couplings in DNA. These investigations constitute an important starting point towards the long-term goal of determining the exchange coupling constant J in biradical DNA and correlating it with the charge-transfer rate constant kCT. Stable aromatic nitroxides: room-temperature CW X-band EPR spectra of five different aromatic nitroxides, which are potential DNA intercalators, were simulated. The aromatic nitroxides show resolved hyperfine couplings, leading to the conclusion that the spin density is highly delocalized, which permits the use of these compounds for measuring J-couplings in biradical DNA. Transient guanine radicals: transient guanine radicals are generated selectively in DNA by the flash-quench technique, which employs optically excitable ruthenium intercalators. Transient thymyl radicals from UV-irradiated 4'-pivaloyl thymidine: photoinduced processes are investigated that arise upon irradiation of thymine nucleosides carrying the optically cleavable pivaloyl group at the 4' position. This nucleoside was specifically designed to inject electron holes into DNA. This work shows that the compound can be used to reduce a thymine base selectively. Transient thymyl radicals generated by a novel modified thymine after UV irradiation: photoinduced processes arising upon irradiation of a similar thymidine nucleoside are investigated here.
This thymidine nucleoside was modified by attaching the optically cleavable pivaloyl group to a side chain located at the C6 position of the thymine base. The thymine base was specifically designed to inject electrons into DNA. This work confirms that an excess electron can be transferred selectively to a thymine base.
This dissertation, written in English under the supervision of Prof. Dr. H. F. de Groote, Department of Mathematics, belongs to mathematical physics. It treats Stone spectra of von Neumann algebras, observable functions, and some applications in physics. The concluding chapter provides a generalization of the Kochen-Specker theorem. Stone spectra and observable functions were introduced by de Groote. The Stone spectrum of a von Neumann algebra is a generalization of the Gelfand spectrum, and observable functions generalize the Gelfand transforms. Since de Groote's results are largely unpublished, the introductory chapter is followed in the second chapter by a survey of these results. The third chapter treats the Stone spectra of finite von Neumann algebras. For algebras of type In, a complete characterization of the Stone spectrum is developed; for type II1 algebras, some results are presented. The fourth chapter gives some simple applications of the formalism to physics. The fifth chapter gives, for the first time, a functional-analytic proof of the Kochen-Specker theorem and provides the generalization of this theorem, clarifying the situation for all von Neumann algebras.
Determining protein structures by NMR spectroscopy is a complex process in which resonance frequencies and signal intensities are assigned to the atoms of the protein. Determining the three-dimensional protein structure requires the following steps: sample preparation and 15N/13C isotope enrichment, acquisition of the NMR experiments, processing of the spectra, determination of the signal resonances (peak picking), assignment of the chemical shifts, assignment of the NOESY spectra and collection of conformational structure parameters, structure calculation, and structure refinement. Current methods for automated structure calculation employ a number of computer algorithms that couple NOESY assignment and structure calculation in an iterative process. Although new types of structural parameters, such as dipolar couplings, orientational information from cross-correlated relaxation rates, or structural information arising in the presence of paramagnetic centers in proteins, constitute important innovations for protein structure calculation, distance information from NOESY spectra remains the most important basis for NMR structure determination. The large amount of time required for peak picking of NOESY spectra is mainly due to spectral overlap, noise signals, and artifacts in the NOESY spectra. More efficient automated peak picking therefore requires reliable filters for selecting the relevant signals. This thesis describes a new algorithm for automated protein structure calculation that incorporates automated peak picking of NOESY spectra denoised with wavelets. The crucial point of this algorithm is the generation of incremental peak lists from NOESY spectra processed with different wavelet-based denoising procedures.
Denoised NOESY spectra yield peak lists with different confidence ranges, which are used at different stages of the combined NOE assignment/structure calculation. The first structural model is based on strongly denoised spectra, which yield the most conservative peak list containing signals that can be regarded as largely reliable. At later stages, peak lists from less strongly denoised spectra, containing a larger number of signals, are used. The effect of the different denoising procedures on the completeness and correctness of the NOESY peak lists was examined in detail. By combining wavelet denoising with a new algorithm for signal integration, together with additional filters that check the consistency of the peak list (network anchoring of the spin systems and symmetrization of the peak list), fast convergence of the automated structure calculation is achieved. The new algorithm was integrated into ARIA, a widely used computer program for automated NOE assignment and structure calculation. The algorithm was verified on the monomer unit of the polysulfide-sulfur transferase (Sud) from Wolinella succinogenes, whose high-resolution solution structure had previously been determined by conventional means. Besides the determination of protein solution structures, NMR spectroscopy is also a powerful tool for studying protein-ligand and protein-protein interactions. Both NMR spectra of isotope-labeled proteins and spectra of ligands can be used for inhibitor screening. In the first case, the sensitivity of the backbone 1H and 15N chemical shifts to small geometric or electrostatic changes upon ligand binding serves as the indicator.
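The wavelet denoising with varying strength described above can be illustrated in miniature. The following is a minimal sketch, assuming a single-level Haar transform and soft thresholding on a one-dimensional trace; it is not the actual procedure or code used in the thesis, where multiscale transforms are applied to full NOESY spectra:

```python
import numpy as np

def haar_transform(x):
    """Single-level Haar wavelet transform: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_inverse(approx, detail):
    """Inverse single-level Haar transform."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def denoise(x, threshold):
    """Soft-threshold the detail coefficients. A stronger threshold gives a
    more conservative trace (fewer noise peaks, but possibly fewer signals)."""
    approx, detail = haar_transform(x)
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    return haar_inverse(approx, detail)
```

Running `denoise` with a sequence of successively weaker thresholds would correspond to the incremental peak lists used in the successive stages of the calculation.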
Several ligand-observed screening methods are available: transferred NOEs, saturation transfer difference (STD) experiments, ePHOGSY, and diffusion-edited and NOE-based methods. Most of these techniques can be used for the rational design of inhibitory compounds. For the evaluation of studies involving a large number of inhibitors, efficient pattern-recognition procedures such as principal component analysis (PCA) are used. PCA is well suited for visualizing similarities and differences between spectra recorded with different inhibitors. The experimental data are first processed with a series of filters that, among other things, reduce artifacts arising from only small changes in chemical shifts. The most widespread filter is so-called bucketing, in which neighboring data points are summed into a bucket. To avoid the typical drawbacks of the bucketing procedure, this thesis examines the effect of wavelet denoising in preparing NMR data for PCA, using existing series of HSQC spectra of proteins with different ligands as examples. The combination of wavelet denoising and PCA is most efficient when PCA is applied directly to the wavelet coefficients. Thresholding the wavelet coefficients in a multiscale analysis yields a compressed representation of the data that minimizes noise artifacts. Unlike bucketing, this compression is not 'blind' but adapted to the properties of the data. The new algorithm combines the advantages of a data representation in wavelet space with data visualization by PCA.
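The combination of coefficient thresholding and PCA described above can be sketched as follows. This is an illustrative example under stated assumptions: random numbers stand in for the wavelet coefficients of a series of HSQC spectra, and the function names are hypothetical, not taken from the thesis:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Keep only significant wavelet coefficients (compressed representation)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def pca_scores(data, n_components=2):
    """PCA via SVD: rows are spectra (one per ligand), columns are coefficients.
    Returns the projection of each spectrum onto the leading components."""
    centered = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    return u[:, :n_components] * s[:n_components]

# hypothetical data: 10 "spectra", each reduced to 64 wavelet coefficients
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(10, 64))
scores = pca_scores(soft_threshold(coeffs, 0.5))
print(scores.shape)  # (10, 2): one 2-D point per spectrum for clustering
```

Plotting the two score columns against each other would give the kind of cluster visualization used to compare spectra recorded with different inhibitors.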
This thesis shows that PCA in wavelet space permits optimized clustering while eliminating typical artifacts. In addition, this thesis describes a de novo structure determination of the periplasmic polysulfide-sulfur transferase (Sud) from the anaerobic gram-negative bacterium Wolinella succinogenes. The Sud protein is a polysulfide-binding and -transferring enzyme that catalyzes fast polysulfide-sulfur reduction at low polysulfide concentrations. Sud is a 30 kDa homodimer containing no prosthetic groups or heavy metal ions. Each monomer contains one cysteine, which covalently binds up to ten polysulfide sulfur (Sn2-) ions. Sud is believed to transfer the polysulfide chain to a catalytic molybdenum ion located in the active site of the membrane-bound enzyme polysulfide reductase (Psr) on its periplasm-facing side, whereby reductive cleavage of the chain is catalyzed. The solution structure of the Sud homodimer was determined by heteronuclear multidimensional NMR techniques. The structure is based on distance restraints derived from NOESY spectra, backbone hydrogen bonds and torsion angles, as well as residual dipolar couplings, which were important for the refinement of the structure and for the relative orientation of the monomer units. In the NMR spectra of homodimers, all symmetry-related nuclei have equivalent magnetic environments, so their chemical shifts are degenerate. This symmetric degeneracy simplifies the resonance assignment problem, since only half of the nuclei need to be assigned. NOESY assignment and structure calculation, however, are complicated by the impossibility of distinguishing between intra-monomer, inter-monomer, and co-monomer (mixed) NOESY signals.
Two approaches are available to resolve the symmetry degeneracy of the NOESY data: (I) asymmetric labeling experiments to distinguish intra- from intermolecular NOESY signals, and (II) special structure calculation methods that can handle ambiguous distance restraints. The structure presented here was calculated using the symmetry-ADR (ambiguous distance restraints) method in combination with data from asymmetrically isotope-labeled dimers. The coordinates of the Sud dimer, together with the NMR-based structural data, were deposited in the RCSB Protein Data Bank under PDB accession number 1QXN. The Sud protein shows only little primary-sequence homology to other proteins of similar function and known three-dimensional structure. Known proteins are the sulfurtransferases, or rhodanese enzymes, both of which catalyze the transfer of a sulfur atom from a suitable donor to a nucleophilic acceptor (e.g. from thiosulfate to cyanide). The three-dimensional structures of these proteins show a typical alpha/beta topology and have a similar active-site environment with respect to the backbone conformation. The active-site loop surrounds the catalytic cysteine, which is present in all rhodanese enzymes, and appears to be flexible in the Sud protein (missing resonance assignments for residues 89-94). The polysulfide end protrudes from a positively charged binding pocket (residues R46, R67, K90, R94), where Sud probably makes contact with the polysulfide reductase. This structural result was confirmed by mutagenesis experiments, which showed that all active-site residues are essential for the sulfurtransferase activity of the Sud protein.
Substrate binding had previously been studied by comparing [15N,1H]-TROSY-HSQC spectra of the Sud protein in the presence and absence of the polysulfide ligand. Upon substrate binding, the local geometry of the polysulfide binding site and of the dimer interface appears to change. The conformational changes and slow dynamics induced by ligand binding may trigger the subsequent polysulfide-sulfur activity. A second polysulfide-sulfur transferase protein (Str, 40 kDa), with a five-fold higher native concentration than Sud, was discovered in the bacterial periplasm of Wolinella succinogenes. The two proteins are thought to form a polysulfide-sulfur complex in which Str collects aqueous polysulfide and delivers it to Sud, which carries out the sulfur transfer to the catalytic molybdenum ion in the active site on the periplasm-facing side of the polysulfide reductase. Chemical shift changes in [15N,1H]-TROSY-HSQC spectra show that polysulfide-sulfur transfer between Str and Sud takes place. A possible protein-protein interaction surface could be identified. In the absence of the polysulfide substrate, no interactions between Sud and Str were observed, confirming the assumption that the two proteins interact and enable polysulfide-sulfur transfer only when polysulfide is present as the driving force.
We investigate transverse hadron spectra from relativistic nucleus-nucleus collisions which reflect important aspects of the dynamics - such as the generation of pressure - in the hot and dense zone formed in the early phase of the reaction. Our analysis is performed within two independent transport approaches (HSD and UrQMD) that are based on quark, diquark, string and hadronic degrees of freedom. Both transport models show their reliability for elementary pp as well as light-ion (C+C, Si+Si) reactions. However, for central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV the measured K+/- transverse mass spectra have a larger inverse slope parameter than expected from the calculations. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding shows that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - is generated by strong partonic interactions in the early phase of central Au+Au (Pb+Pb) collisions.
We investigate hadron production as well as transverse hadron spectra in nucleus-nucleus collisions from 2 A.GeV to 21.3 A.TeV within two independent transport approaches (UrQMD and HSD) that are based on quark, diquark, string and hadronic degrees of freedom. The comparison to experimental data demonstrates that both approaches agree quite well with each other and with the experimental data on hadron production. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is well described by both approaches without involving any phase transition. However, the maximum in the K+/pi+ ratio at 20 to 30 A.GeV (the 'horn') is missed by ~ 40%. A comparison to the transverse mass spectra from pp and C+C (or Si+Si) reactions shows the reliability of the transport models for light systems. For central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV, however, the measured K+/- transverse mass (m_T) spectra have a larger inverse slope parameter than expected from the calculations. The approximately constant slope of the K+/- spectra at SPS energies (the 'step') is not reproduced either. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding suggests that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions.
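The inverse slope parameter compared above is conventionally extracted by fitting the transverse-mass spectrum to an exponential, dN/dm_T ~ exp(-m_T/T). A minimal sketch of this extraction on synthetic data follows; the numerical value is illustrative only and not taken from the measurements discussed here:

```python
import numpy as np

def inverse_slope(m_t, dndmt):
    """Extract the inverse slope parameter T from a spectrum ~ exp(-m_T / T)
    by a linear fit to log(dN/dm_T) versus m_T."""
    slope, _ = np.polyfit(m_t, np.log(dndmt), 1)
    return -1.0 / slope

# synthetic spectrum generated with T = 0.230 GeV (illustrative value only)
m_t = np.linspace(0.5, 2.0, 30)          # transverse mass grid in GeV
spectrum = 100.0 * np.exp(-m_t / 0.230)  # exponential spectrum
print(round(inverse_slope(m_t, spectrum), 3))  # recovers 0.23
```

A harder (larger) fitted T for the data than for the transport calculation is precisely the discrepancy interpreted above as missing pre-hadronic pressure.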
To be published in J. Phys. G - Proceedings of SQM 2004 : We review the results from the various hydrodynamical and transport models on the collective flow observables from AGS to RHIC energies. A critical discussion of the present status of the CERN experiments on hadron collective flow is given. We emphasize the importance of the flow excitation function from 1 to 50 A.GeV: here the hydrodynamic model has predicted the collapse of the v2 flow at ~ 10 A.GeV; at 40 A.GeV it has recently been observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as evidence for a first-order phase transition at high baryon density rho_B. Moreover, the connection of the elliptic flow v2 to jet suppression is examined. It is proven experimentally that the collective flow is not faked by minijet fragmentation. Additionally, detailed transport studies show that the away-side jet suppression can only partially (< 50%) be due to hadronic rescattering. Furthermore, the change in sign of v1 and v2 close to beam rapidity in the RHIC data at 62.5, 130 and 200 A.GeV is related to the occurrence of a high-density first-order phase transition.