We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without a lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future, and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL Classification: C63, E31, E52.
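The occasionally binding constraint described above can be illustrated with a minimal sketch (not the paper's model): the rate implied by a Taylor-type rule is truncated at zero, so a sufficiently adverse natural-rate shock makes the bound bind. All parameter values here are illustrative assumptions.

```python
# Minimal sketch of a zero lower bound as an occasionally binding
# constraint: the desired rate from a hypothetical Taylor-type rule
# is truncated at zero. Parameters are illustrative, not calibrated.

def policy_rate(natural_rate, inflation, phi_pi=1.5):
    """Nominal rate implied by a truncated Taylor-type rule."""
    desired = natural_rate + phi_pi * inflation
    return max(0.0, desired)

# In normal times the bound is slack; a large adverse shock to the
# natural real rate drives the desired rate below zero, so the
# realized rate is kinked at 0 and policy becomes nonlinear.
print(policy_rate(0.02, 0.0))   # bound slack
print(policy_rate(-0.03, 0.0))  # bound binds
```

The `max(0, ·)` kink is exactly what makes the model nonlinear: near the bound, further easing is impossible, which is why anticipation of the bound matters before it is reached.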
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Classification: C62, D83, D84, E31
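The adaptive-learning mechanism referred to above can be sketched generically (this is the standard decreasing-gain recursion, not the paper's exact specification): agents revise their forecast toward each new observation with gain 1/t, which amounts to recursive averaging.

```python
# Generic decreasing-gain adaptive learning (illustrative sketch):
# the forecast is nudged toward each observation with gain 1/t.

def update_forecast(forecast, observation, t):
    """One step of decreasing-gain (least-squares) learning."""
    return forecast + (observation - forecast) / t

belief = 0.0
observations = [0.10, 0.12, 0.11, 0.13]  # hypothetical inflation data
for t, pi in enumerate(observations, start=1):
    belief = update_forecast(belief, pi, t)
# belief now equals the sample mean of the observations (0.115)
```

Whether such a recursion converges to a given steady state is precisely the stability-under-learning question; the timing of the data agents can use (contemporaneous versus lagged) changes the recursion and hence the stability condition.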
We present STAR measurements of the azimuthal anisotropy parameter v2 and the binary-collision scaled centrality ratio RCP for kaons and lambdas ( Lambda + Lambda -bar) at midrapidity in Au+Au collisions at sqrt[sNN]=200 GeV. In combination, the v2 and RCP particle-type dependencies contradict expectations from partonic energy loss followed by standard fragmentation in vacuum. We establish pT ~ 5 GeV/c as the value where the centrality dependent baryon enhancement ends. The K0S and Lambda + Lambda -bar v2 values are consistent with expectations of constituent-quark-number scaling from models of hadron formation by parton coalescence or recombination.
Measurements of the production of forward high-energy pi 0 mesons from transversely polarized proton collisions at sqrt[s]=200 GeV are reported. The cross section is generally consistent with next-to-leading order perturbative QCD calculations. The analyzing power is small at xF below about 0.3, and becomes positive and large at higher xF, similar to the trend in data at sqrt[s] <= 20 GeV. The analyzing power is in qualitative agreement with perturbative QCD model expectations. This is the first significant spin result seen for particles produced with pT>1 GeV/c at a polarized proton collider.
Transverse mass and rapidity distributions for charged pions, charged kaons, protons, and antiprotons are reported for sqrt[sNN]=200 GeV pp and Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC). Chemical and kinetic equilibrium model fits to our data reveal strong radial flow and long duration from chemical to kinetic freeze-out in central Au+Au collisions. The chemical freeze-out temperature appears to be independent of initial conditions at RHIC energies.
Azimuthally sensitive Hanbury Brown-Twiss interferometry in Au+Au collisions at sqrt[sNN]=200 GeV
(2004)
We present the results of a systematic study of the shape of the pion distribution in coordinate space at freeze-out in Au+Au collisions at BNL RHIC using two-pion Hanbury Brown-Twiss (HBT) interferometry. Oscillations of the extracted HBT radii versus emission angle indicate sources elongated perpendicular to the reaction plane. The results indicate that the pressure and expansion time of the collision system are not sufficient to completely quench its initial shape.
We report results on rho(770)0 --> pi + pi - production at midrapidity in p+p and peripheral Au+Au collisions at sqrt[sNN]=200 GeV. This is the first direct measurement of rho(770)0 --> pi + pi - in heavy-ion collisions. The measured rho 0 peak in the invariant mass distribution is shifted by ~40 MeV/c2 in minimum bias p+p interactions and ~70 MeV/c2 in peripheral Au+Au collisions. The rho 0 mass shift is dependent on transverse momentum and multiplicity. The modification of the rho 0 meson mass, width, and shape due to phase space and dynamical effects is discussed.
Results on high transverse momentum charged particle emission with respect to the reaction plane are presented for Au+Au collisions at sqrt[sNN]=200 GeV. Two- and four-particle correlations results are presented as well as a comparison of azimuthal correlations in Au+Au collisions to those in p+p at the same energy. The elliptic anisotropy v2 is found to reach its maximum at pt~3 GeV/c, then decrease slowly and remain significant up to pt ~ 7-10 GeV/c. Stronger suppression is found in the back-to-back high-pt particle correlations for particles emitted out of plane compared to those emitted in plane. The centrality dependence of v2 at intermediate pt is compared to simple models based on jet quenching.
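The anisotropy parameters v1, v2, v4 quoted in the abstracts above are Fourier coefficients of the azimuthal particle distribution. As a generic illustration (the standard definition, not STAR's full cumulant or event-plane analysis), v_n is the average of cos(n(phi − Psi)) over particle angles phi relative to the reaction-plane angle Psi:

```python
import math

# Sketch of the standard flow-harmonic definition:
# v_n = <cos(n * (phi - Psi))>, averaged over particles.

def flow_harmonic(phis, weights, n, psi=0.0):
    num = sum(w * math.cos(n * (p - psi)) for p, w in zip(phis, weights))
    return num / sum(weights)

# Toy azimuthal distribution dN/dphi ~ 1 + 2*v2*cos(2*phi) with
# v2 = 0.1, represented on a uniform grid of angles with matching
# weights; the estimator recovers the input elliptic anisotropy.
M = 360
phis = [2 * math.pi * i / M for i in range(M)]
weights = [1 + 2 * 0.1 * math.cos(2 * p) for p in phis]
v2 = flow_harmonic(phis, weights, n=2)
```

A positive v2 means in-plane enhancement, which is why the sign determination via the v1-v2 correlation mentioned below is meaningful.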
The pseudorapidity asymmetry and centrality dependence of charged hadron spectra in d+Au collisions at sqrt[sNN ]=200 GeV are presented. The charged particle density at midrapidity, its pseudorapidity asymmetry, and centrality dependence are reasonably reproduced by a multiphase transport model, by HIJING, and by the latest calculations in a saturation model. Ratios of transverse momentum spectra between backward and forward pseudorapidity are above unity for pT below 5 GeV/c . The ratio of central to peripheral spectra in d+Au collisions shows enhancement at 2< pT <6 GeV/c , with a larger effect at backward rapidity than forward rapidity. Our measurements are in qualitative agreement with gluon saturation and in contrast to calculations based on incoherent multiple partonic scatterings.
We report on the rapidity and centrality dependence of proton and antiproton transverse mass distributions from 197Au + 197Au collisions at sqrt[sNN ]=130 GeV as measured by the STAR experiment at the Relativistic Heavy Ion Collider (RHIC). Our results are from the rapidity and transverse momentum range of |y| <0.5 and 0.35< pt <1.00 GeV/c . For both protons and antiprotons, transverse mass distributions become more convex from peripheral to central collisions demonstrating characteristics of collective expansion. The measured rapidity distributions and the mean transverse momenta versus rapidity are flat within |y| <0.5 . Comparisons of our data with results from model calculations indicate that in order to obtain a consistent picture of the proton (antiproton) yields and transverse mass distributions the possibility of prehadronic collective expansion may have to be taken into account.
We report the first observations of the first harmonic (directed flow, v1) and the fourth harmonic (v4), in the azimuthal distribution of particles with respect to the reaction plane in Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC). Both measurements were done taking advantage of the large elliptic flow (v2) generated at RHIC. From the correlation of v2 with v1 it is determined that v2 is positive, or in-plane. The integrated v4 is about a factor of 10 smaller than v2. For the sixth (v6) and eighth (v8) harmonics upper limits on the magnitudes are reported.
We report inclusive photon measurements about midrapidity ( |y| <0.5 ) from 197 Au + 197 Au collisions at sqrt[sNN ]=130 GeV at RHIC. Photon pair conversions were reconstructed from electron and positron tracks measured with the Time Projection Chamber (TPC) of the STAR experiment. With this method, an energy resolution of Delta E/E ~ 2% at 0.5 GeV has been achieved. Reconstructed photons have also been used to measure the transverse momentum ( pt ) spectra of pi 0 mesons about midrapidity ( |y| <1 ) via the pi 0 --> gamma gamma decay channel. The fractional contribution of the pi 0 --> gamma gamma decay to the inclusive photon spectrum decreases by 20%±5% between pt =1.65 GeV/c and pt =2.4 GeV/c in the most central events, indicating that relative to pi 0 --> gamma gamma decay the contribution of other photon sources is substantially increasing.
The transverse mass spectra and midrapidity yields for Xi and Omega hyperons are presented. For the 10% most central collisions, the Xi -bar+/h- ratio increases from Super Proton Synchrotron to Relativistic Heavy Ion Collider energies, while the Xi -/h- ratio stays approximately constant. A hydrodynamically inspired model fit to the Xi spectra, which assumes a thermalized source, seems to indicate that these multistrange particles experience a significant transverse flow effect but are emitted when the system is hotter and the flow is smaller than the values obtained from a combined fit to pi, K, p, and Lambda spectra.
Transverse energy ( ET ) distributions have been measured for Au+Au collisions at sqrt[sNN ]=200 GeV by the STAR Collaboration at RHIC. ET is constructed from its hadronic and electromagnetic components, which have been measured separately. ET production for the most central collisions is well described by several theoretical models whose common feature is large energy density achieved early in the fireball evolution. The magnitude and centrality dependence of ET per charged particle agrees well with measurements at lower collision energy, indicating that the growth in ET for larger collision energy results from the growth in particle production. The electromagnetic fraction of the total ET is consistent with a final state dominated by mesons and independent of centrality.
We present data on e+ e- pair production accompanied by nuclear breakup in ultraperipheral gold-gold collisions at a center of mass energy of 200 GeV per nucleon pair. The nuclear breakup requirement selects events at small impact parameters, where higher-order diagrams for pair production should be enhanced. We compare the data with two calculations: one based on the equivalent photon approximation, and the other using lowest-order quantum electrodynamics (QED). The data distributions agree with both calculations, except that the pair transverse momentum spectrum disagrees with the equivalent photon approach. We set limits on higher-order contributions to the cross section.
We present STAR measurements of charged hadron production as a function of centrality in Au+Au collisions at sqrt[sNN]=130 GeV. The measurements cover a phase space region of 0.2< pT <6.0 GeV/c in transverse momentum and -1< eta <1 in pseudorapidity. Inclusive transverse momentum distributions of charged hadrons in the pseudorapidity region 0.5< | eta | <1 are reported and compared to our previously published results for | eta | <0.5. No significant difference is seen for inclusive pT distributions of charged hadrons in these two pseudorapidity bins. We measured dN/d eta distributions and truncated mean pT in the region pT > pT^cut, and studied the results in the framework of participant and binary scaling. No clear evidence is observed for participant scaling of charged hadron yield in the measured pT region. The relative importance of hard scattering processes is investigated through the binary scaling fraction of particle production.
Mid-rapidity transverse mass spectra and multiplicity densities of charged and neutral kaons are reported for Au + Au collisions at √sNN = 130 GeV at RHIC. The spectra are exponential in transverse mass, with an inverse slope of about 280 MeV in central collisions. The multiplicity densities for these particles scale with the negative hadron pseudo-rapidity density. The charged kaon to pion ratios are K+/π− = 0.161± 0.002(stat) ± 0.024(syst) and K−/π− = 0.146± 0.002(stat) ± 0.022(syst) for the most central collisions. The K+/π− ratio is lower than the same ratio observed at the SPS while the K−/π− is higher than the SPS result. The ratios are enhanced by about 50% relative to p + p and p¯ + p collision data at similar energies.
This Erratum replaces incorrect plots shown in Fig. 7 with the corrected ones. In the publication, the NA57 [1] ratios of Ξ− and anti-Ξ+ to the number of wounded nucleons at ⟨NW⟩=349 were mistakenly plotted at the wrong values: they had been calculated and plotted using ⟨NW⟩=249.
The correct normalization does not change the conclusions of the paper. The correctly normalized results are presented in Fig. 7.
It is common knowledge in the field of Philippine linguistics that an ang-marked direct object in a non-actor focus clause must be definite or generic, while a ng-marked object in an actor focus clause typically receives a nonspecific interpretation. However, in contexts like wh-questions, the oblique object in an antipassive may be interpreted as specific, as noted by Schachter & Otanes (1972), Maclachlan & Nakamura (1997), Rackowski (2002), and others. […] In this paper, I propose to account for the specificity effects […] within the analysis of Tagalog syntax put forth by Aldridge (2004). I analyze Tagalog as an ergative language […]. Cross-linguistically, antipassive oblique objects receive a nonspecific interpretation, while absolutives are definite or generic. I show in this paper how the Tagalog facts can be subsumed under a general account of ergativity.
The German word also, similar to English so, is traditionally considered to be a sentence adverb with a consecutive meaning, i.e. it indicates that the propositional content of the clause containing it is some kind of consequence of what has previously been said. As a sentence adverb, also has its place within the core of the German sentence, since this is the proper place for an adverb to occur in German. The sentence core offers two proper positions for adverbs: the so-called front field and the middle field. In spoken German, however, also often occurs in sentence-initial position, outside the sentence itself. In this paper, I use excerpts of German conversations to discuss and illustrate the importance of these sentence positions and discourse positions for the functions of also.
Results are presented on event-by-event electric charge fluctuations in central Pb+Pb collisions at 20, 30, 40, 80 and 158 AGeV. The observed fluctuations are close to those expected for a gas of pions correlated by global charge conservation only. These fluctuations are considerably larger than those calculated for an ideal gas of deconfined quarks and gluons. The present measurements do not necessarily exclude reduced fluctuations from a quark-gluon plasma because these might be masked by contributions from resonance decays.
System size and centrality dependence of the balance function in A + A collisions at √sNN = 17.2 GeV
(2004)
Electric charge correlations were studied for p+p, C+C, Si+Si and centrality selected Pb+Pb collisions at sqrt[sNN]=17.2 GeV with the NA49 large acceptance detector at the CERN-SPS. In particular, long range pseudo-rapidity correlations of oppositely charged particles were measured using the Balance Function method. The width of the Balance Function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
Evidence for an exotic S=-2, Q=-2 baryon resonance in proton-proton collisions at the CERN SPS
(2004)
Results of resonance searches in the Xi - pi -, Xi - pi +, Xi -bar+ pi -, and Xi -bar+ pi + invariant mass spectra in proton-proton collisions at sqrt[s]=17.2 GeV are presented. Evidence is shown for the existence of a narrow Xi - pi - baryon resonance with mass of 1.862±0.002 GeV/c2 and width below the detector resolution of about 0.018 GeV/c2. The significance is estimated to be above 4.2 sigma. This state is a candidate for the hypothetical exotic Xi(3/2)- baryon with S=-2, I=3/2, and a quark content of (dsdsu-bar). At the same mass, a peak is observed in the Xi - pi + spectrum which is a candidate for the Xi(3/2)0 member of this isospin quartet with a quark content of (dsusd-bar). The corresponding antibaryon spectra also show enhancements at the same invariant mass.
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally-expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over and above the direct news announcement effects. JEL Classification: F3, F4, G1, C5
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL Classification: C1, G1
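The "simple nonlinear function" mentioned above is a ratio: a realized beta is the realized covariance of asset and market returns divided by the realized market variance over a window of high-frequency returns. A minimal sketch with purely illustrative data:

```python
# Sketch: realized beta as realized covariance over realized market
# variance, computed from a window of (hypothetical) high-frequency
# returns. Illustrative data, not the paper's dataset.

def realized_beta(asset_returns, market_returns):
    rcov = sum(a * m for a, m in zip(asset_returns, market_returns))
    rvar = sum(m * m for m in market_returns)
    return rcov / rvar

market = [0.010, -0.020, 0.015, -0.005]
asset = [0.020, -0.040, 0.030, -0.010]  # moves exactly twice the market
beta = realized_beta(asset, market)     # -> 2.0
```

Because the ratio divides one highly persistent series by another, its persistence can be much lower than that of either input, which is the intuition behind the fractional-cointegration finding.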
The transverse mass mt distributions for deuterons and protons are measured in Pb+Pb reactions near midrapidity and in the range 0<mt–m<1.0 (1.5) GeV/c2 for minimum bias collisions at 158A GeV and for central collisions at 40 and 80 A GeV beam energies. The rapidity density dn/dy, inverse slope parameter T and mean transverse mass <mt> derived from mt distributions as well as the coalescence parameter B2 are studied as a function of the incident energy and the collision centrality. The deuteron mt spectra are significantly harder than those of protons, especially in central collisions. The coalescence factor B2 shows three systematic trends. First, it decreases strongly with increasing centrality reflecting an enlargement of the deuteron coalescence volume in central Pb+Pb collisions. Second, it increases with mt. Finally, B2 shows an increase with decreasing incident beam energy even within the SPS energy range. The results are discussed and compared to the predictions of models that include the collective expansion of the source created in Pb+Pb collisions.
Production of Lambda and Antilambda hyperons was measured in central Pb-Pb collisions at 40, 80, and 158 A GeV beam energy on a fixed target. Transverse mass spectra and rapidity distributions are given for all three energies. The Lambda/pi ratio at mid-rapidity and in full phase space shows a pronounced maximum between the highest AGS and 40 A GeV SPS energies, whereas the anti-Lambda/pi ratio exhibits a monotonic increase. PACS numbers: 25.75.-q
The claim advanced in this paper is that the presence of a left-dislocated element together with a resumptive clitic in Bulgarian is a special case of argument saturation with implications for the focus structure of the clause, while contrast involves discontinuous focus (contrastive topics/foci) with no clitics present in the derivation. Contrastive topic/focus constructions in Bulgarian can be united on the view that they involve (sets of) ordered pairs where the higher element is valuing a contrastive feature (cf. OCC in Chomsky 2001) while the element in the VP is a non-contrastive topic or focus. The contrastive feature participates in wh-structures but not in clitic-left-dislocated structures where pairing between arguments is 'accidental'.
The flora of the Lord Howe Island Group (31°30’S, 159°05’E) comprises a unique mix of elements of Australian, New Zealand and New Caledonian floras. It is significant for its high degree of endemism and for its structural and biological (leaves, flowers, fruit) role in supporting a diverse array of fauna. Conservation of this flora is dependent upon: reducing current habitat degradation (mostly the result of exotic weeds); minimising any future impacts, in particular the effects of climate change on the unique cloud forests of the southern mountains and the continued introduction and spread of weeds and the pathogen Phytophthora cinnamomi.
We provide a description of the nature of the major threats to the flora and suggest an area-based scheme, focussed on the relative conservation significance of remaining vegetation, as a mechanism for developing priorities for threat mitigation activities. While a number of threat control works are in place, e.g. weed control, some re-emphasis is needed. In addition, some new initiatives are required including: reducing the rate of introductions of new exotics; a system to remove potential environmental weeds from the settlement area; phytosanitary guidelines; pathogen quarantine measures; search and removal of environmental weeds from remote areas; and ex situ initiatives for plant species restricted to the cloud forests of the southern mountains.
Global reserves of coal, oil and natural gas are diminishing; global energy requirements, however, are dramatically increasing. Renewable energy sources lower the threat to the earth’s climate but are not able to meet the energy consumption of major urban areas. In the opinion of many experts, the future will be dominated by hydrogen. However, this gas is at present manufactured essentially entirely from fossil fuels and is hence of limited abundance, not to mention the hazards involved in its utilisation. A novel energy concept involving solar, and thus carbon-independent, hydrogen-based technology necessitates an intermediate storage vehicle for renewable energy. This future energy carrier should be simple to manufacture, be available to an unlimited degree or at least be suitable for recycling, be able to store and transport the energy without hazards, demonstrate a high energy density and release no carbon dioxide or other climatically detrimental substances. Silicon successfully functions as a tailor-made intermediate linking decentrally operating renewable energy-generation technology with an equally decentrally organised hydrogen-based infrastructure at any location of choice. In contrast to oil and in particular hydrogen, the transport and storage of silicon are free from potential hazards and require a simple infrastructure similar to that needed for coal.
Alzheimer’s disease (AD) is the most common neurodegenerative disorder worldwide, causing presenile dementia and the death of millions of people. During AD, damage to and massive loss of brain cells occur. Alzheimer’s disease is genetically heterogeneous and may therefore represent a common phenotype that results from various genetic and environmental influences and risk factors. In approximately 10% of patients, changes in the genetic information (gene mutations) have been detected. In these cases, Alzheimer’s disease is inherited as an autosomal dominant trait (familial Alzheimer’s disease, FAD). In rare cases of familial Alzheimer’s disease (about 1-3%), mutations have been detected in genes on chromosomes 14 and 1 (encoding Presenilin 1 and 2, respectively) and on chromosome 21, encoding the amyloid precursor protein (APP), which is responsible for the release of the cell-damaging protein amyloid-beta (ß-amyloid, Aß). Familial forms of early-onset Alzheimer’s disease are rare; however, their importance extends far beyond their frequency, because they make it possible to identify some of the critical pathogenetic pathways of the disease. All familial Alzheimer mutations share a common feature: they lead to enhanced production of Aß, the major constituent of senile plaques in the brains of AD patients. New data indicate that Aß promotes neuronal degeneration. One aim of this thesis was therefore to elucidate the neurotoxic biochemical pathways induced by Aß by investigating the effect of the FAD Swedish APP double mutation (APPsw) on oxidative stress-induced cell death mechanisms. This mutation results in a three- to sixfold increase in Aß production compared to wild-type APP (APPwt). As cell models, the neuronal PC12 (rat pheochromocytoma) and HEK (human embryonic kidney 293) cell lines were used, which had been transfected with human wild-type APP or human APP containing the Swedish double mutation. These cell models offer two important advantages.
First, compared to experiments applying high (micromolar) concentrations of Aß extracellularly to cells, PC12 APPsw cells secrete low Aß levels similar to the situation in FAD brains. This cell model thus represents a very suitable approach for elucidating AD-specific cell death pathways under near-physiological conditions. Second, these two cell lines (PC12 and HEK, APPwt and APPsw), with their different levels of Aß production, additionally allow dose-dependent effects of Aß to be studied. The results obtained here provide evidence for the enhanced cell vulnerability caused by the Swedish APP mutation and elucidate the cell death mechanism probably initiated by intracellularly produced Aß. It seems likely that increased production of Aß at physiological levels primes APPsw PC12 cells to undergo cell death only after additional stress, while chronically high levels in HEK cells already lead to enhanced basal apoptotic levels. Crucial effects of the Swedish APP mutation include impairments of cellular energy metabolism, affecting mitochondrial membrane potential and ATP levels, as well as the additional activation of caspase 2, caspase 8 and JNK in response to oxidative stress. The following model can thereby be proposed: PC12 cells harboring the Swedish APP mutation have a reduced energy metabolism compared to APPwt or control cells. However, this effect does not lead to enhanced basal apoptotic levels in cultured cells. Exposure of PC12 cells to oxidative stress leads to mitochondrial dysfunction, e.g., a decrease in mitochondrial membrane potential and a depletion of ATP. The consequence is activation of the intrinsic apoptotic pathway, releasing cytochrome c and Smac and resulting in the activation of caspase 9. This effect is amplified by the overexpression of APP, since both APPsw and APPwt PC12 cells show enhanced cytochrome c and Smac release as well as enhanced caspase 9 activity compared to vector-transfected controls.
In APPsw PC12 cells a parallel pathway is additionally engaged. Owing to reduced ATP levels or enhanced Aß production, JNK is activated. Furthermore, the extrinsic apoptotic pathway is enhanced, since caspase 8 and caspase 2 activation was clearly increased by the Swedish APP mutation. Both pathways may then converge in activating the effector enzyme, caspase 3, and the execution of cell death. In addition, caspase-independent effects also need to be considered. One possibility could be the involvement of AIF, since AIF expression was found to be induced by the Swedish APP mutation. In APPsw HEK cells, chronically high Aß levels lead to enhanced apoptotic levels and reduced mitochondrial membrane potential and ATP levels even under basal conditions. In summary, a hypothetical sequence of events is proposed for our cell model, linking FAD, Aß production, JNK activation and mitochondrial dysfunction with the caspase pathway and neuronal loss. The brain has a high metabolic rate and is exposed to gradually rising levels of oxidative stress during life. In Swedish FAD patients, the levels of oxidative stress are increased in the inferior temporal cortex. This study, using a cell model mimicking the in vivo situation in AD brains, indicates that both increased Aß production and the gradual rise of oxidative stress throughout life probably converge on a final common pathway of increased vulnerability of neurons from FAD patients to apoptotic cell death. Presenilin (PS) 1 is an aspartyl protease involved in the gamma-secretase-mediated proteolysis of the amyloid-ß protein (Aß), the major constituent of senile plaques in the brains of Alzheimer’s disease (AD) patients. Recent studies have suggested an additional role for presenilin proteins in the apoptotic cell death observed in AD. Since PS1 is proteolytically cleaved by caspase 3, it has been proposed that the resulting C-terminal fragment of PS1 (PSCas) could play a role in signal transduction during apoptosis.
Moreover, it has been shown that mutant presenilins causing early-onset familial Alzheimer’s disease (FAD) may render cells vulnerable to apoptosis. The mechanism by which PS1 regulates apoptotic cell death is not yet understood. One aim of the present study was therefore to clarify the involvement of PS1 in the proteolytic cascade of apoptosis and to determine whether the cleavage of PS1 by caspase 3 has a regulatory function. It is demonstrated here that both PS1 and PSCas lead to a reduced vulnerability of PC12 and Jurkat cells to different apoptotic stimuli. However, a mutation at the caspase 3 recognition site (D345A/PSmut), which inhibits cleavage of PS1 by caspase 3, shows no difference in the effect of PS1 or PSCas towards apoptotic stimuli. This suggests that proteolysis of PS1 by caspase 3 is not a determinant, but only a secondary effect, during apoptosis. Since several FAD mutations distributed throughout the PS1 gene lead to enhanced apoptosis, an abolishment of the antiapoptotic effect of PS1 might contribute to the massive neurodegeneration at an early age in FAD patients. The regulatory properties of PS1 in apoptosis may thus act not through a caspase 3-dependent cleavage and generation of PSCas, but rather through the interaction of PS1 with other proteins involved in apoptosis.
P-O bond destabilization accelerates phosphoenzyme hydrolysis of sarcoplasmic reticulum Ca2+-ATPase
(2004)
The phosphate group of the ADP-insensitive phosphoenzyme (E2-P) of sarcoplasmic reticulum Ca2+-ATPase (SERCA1a) was studied with infrared spectroscopy to understand the high hydrolysis rate of E2-P. By monitoring an autocatalyzed isotope exchange reaction, three stretching vibrations of the transiently bound phosphate group were selectively observed against a background of 50,000 protein vibrations. They were found at 1194, 1137, and 1115 cm^-1. This information was evaluated using the bond valence model and empirical correlations. Compared with the model compound acetyl phosphate, structure and charge distribution of the E2-P aspartyl phosphate resemble somewhat the transition state in a dissociative phosphate transfer reaction; the aspartyl phosphate of E2-P has 0.02 Å shorter terminal P–O bonds and a 0.09 Å longer bridging P–O bond that is ∼20% weaker, the angle between the terminal P–O bonds is wider, and –0.2 formal charges are shifted from the phosphate group to the aspartyl moiety. The weaker bridging P–O bond of E2-P accounts for a 10^11–10^15-fold hydrolysis rate enhancement, implying that P–O bond destabilization facilitates phosphoenzyme hydrolysis. P–O bond destabilization is caused by a shift of noncovalent interactions from the phosphate oxygens to the aspartyl oxygens. We suggest that the relative positioning of Mg2+ and Lys684 between phosphate and aspartyl oxygens controls the hydrolysis rate of the ATPase phosphoenzymes and related phosphoproteins.
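The bond valence model invoked above rests on an empirical exponential correlation between bond length and bond valence, s = exp((r0 − r)/b). As an illustrative check (a sketch of the general correlation, not the paper's full evaluation), lengthening a bond by 0.09 Å reduces its valence by roughly 20%, consistent with the weakening quoted for the bridging P–O bond. The value b = 0.37 Å is the customary empirical constant; r0 is the tabulated reference length for the bond type.

```python
import math

# Empirical bond-valence correlation: s = exp((r0 - r) / b).
# b = 0.37 A is the standard empirical constant.

def bond_valence(r, r0, b=0.37):
    return math.exp((r0 - r) / b)

# Relative valence of a bond lengthened by 0.09 A (r0 cancels in the
# ratio), showing a loss of roughly 20%.
loss = 1 - bond_valence(0.09, 0.0)
```

The same correlation lets the three measured stretching frequencies be translated into bond lengths and thence into the charge-distribution picture described in the abstract.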
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the strangeness production as a function of centre of mass energy and of the parameters of the source. We have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed strangeness hadronic phase space under-saturation. We show that, in this energy range, the use of hadron yields at midrapidity instead of in full phase space artificially enhances strangeness production and could lead to incorrect conclusions as far as the occurrence of full chemical equilibrium is concerned. In addition to the basic model with an extra strange quark non-equilibrium parameter, we have tested three more schemes: a two-component model superimposing hadrons coming out of single nucleon-nucleon interactions to those emerging from large fireballs at equilibrium, a model with local strangeness neutrality and a model with strange and light quark non-equilibrium parameters. The behaviour of the source parameters as a function of colliding system and collision energy is studied. The description of strangeness production entails a non-monotonic energy dependence of strangeness saturation parameter gamma_S with a maximum around 30A GeV. We also present predictions of the production rates of still unmeasured hadrons including the newly discovered Theta^+(1540) pentaquark baryon.
Fluctuations of charged particle number are studied in the canonical ensemble. In the infinite volume limit the fluctuations in the canonical ensemble differ from those in the grand canonical one. Thus, the well-known equivalence of the two ensembles for average quantities does not extend to the fluctuations. In view of the possible relevance of these results for the analysis of fluctuations in nuclear collisions at high energies, the role of limited kinematical acceptance is also studied.
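The suppression of canonical relative to grand canonical fluctuations can be illustrated with a small Monte Carlo sketch (not from the paper; all numbers are illustrative). Sampling positive and negative particle numbers independently from Poisson distributions mimics the grand canonical ensemble; keeping only events with a fixed net charge mimics the canonical one.

```python
import numpy as np

rng = np.random.default_rng(0)
MEAN, Q, N = 20.0, 0, 200_000   # illustrative mean multiplicity, net charge, sample size

# Grand canonical: N+ and N- are independent Poisson variables
n_plus_gc = rng.poisson(MEAN, N)

# Canonical: keep only events with exact net charge N+ - N- = Q
n_plus = rng.poisson(MEAN, N)
n_minus = rng.poisson(MEAN, N)
n_plus_can = n_plus[(n_plus - n_minus) == Q]

var_gc = n_plus_gc.var()
var_can = n_plus_can.var()
ratio = var_can / var_gc
print(f"grand canonical var ~ {var_gc:.1f}, canonical var ~ {var_can:.1f}, ratio ~ {ratio:.2f}")
```

For large mean multiplicity and Q = 0, the canonical variance of the positive-particle number approaches half the grand canonical value, so the printed ratio comes out close to 0.5.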
Angophora inopina is a vulnerable tree species occurring principally in Wyong and Lake Macquarie local government areas on the Central Coast, with disjunct populations as far north as Bulahdelah on the North Coast of NSW. The largest and most intact stands occur within the Wyee-Morisset areas although even here significant fragmentation is evident. North of Toronto, there are small and scattered residual populations as far as Barnsley near West Wallsend in Lake Macquarie. A total area of occupied habitat of approximately 1500 ha is estimated.
Cluster analysis of floristic information showed that Angophora inopina occurs within three broad habitat types within the Central Coast bioregion, centred mainly on the Gorokan, Doyalson and Wyong soil landscapes. Hybrid forms of the species also occur on the Cockle Creek landscape in northern Lake Macquarie. Most stands are evident within open woodland/ forest vegetation where Eucalyptus haemastoma, Corymbia gummifera, and Eucalyptus capitellata dominate with Angophora inopina. Other populations occur in wet heath, and swamp woodland environments where sedge species are characteristic.
Conservation of Angophora inopina will be most effectively and efficiently achieved if ecological processes that operate across the landscape are maintained. Processes such as fragmentation, altered fire regimes and invasion of habitat by exotic species must be managed in the long term. These are all significant threats to this species, and will be managed most effectively in the larger remnants under a landscape approach. Such threats are generally associated with urban and agricultural expansion in the area, and these are therefore the most pressing issues to be managed.
Werakata National Park (32° 50 S, 151° 25 E), near Cessnock in the Hunter Valley of New South Wales, conserves 2145 ha of mostly open forest vegetation, which was formerly widespread in the lower Hunter Valley. Six vegetation communities are delineated; Lower Hunter Spotted Gum – Ironbark Forest occupies most of the Park. All communities present are considered to be poorly conserved in the region and Werakata plays a critical role in the protection of these vegetation types. Two vegetation communities, Kurri Sand Swamp Woodland and Hunter Lowlands Redgum Forest, are listed as Endangered Ecological Communities under the NSW Threatened Species Conservation Act 1995, while others may warrant future listing. Considerable variation in the floristic composition of the Kurri Sand Swamp Woodland is apparent in the area and the implications are discussed. Populations of four vulnerable plant taxa — Callistemon linearifolius, Eucalyptus parramattensis subsp. decadens, Eucalyptus glaucina, Grevillea parviflora subsp. parviflora, and two rare plant taxa — Grevillea montana, Macrozamia flexuosa, together with several other regionally significant species occur within Werakata.
Recommendations are made on the conservation of plant taxa and vegetation communities in the Cessnock area, and on general reserve management. It is suggested that further areas be added to the reserve to consolidate and expand upon that which is already contained, particularly in regard to threatened species, and endangered and poorly conserved ecological communities.
Two epiphyllous Lejeuneaceae, Cololejeunea surinamensis and Drepanolejeunea polyrhiza, previously known from Amazonian Brazil, are recorded for the first time in Colombia. They were found as epiphylls on understory shrubs in the middle Caquetá area in Colombian Amazonia. Cololejeunea surinamensis was found in the Tierra Firme forests and D. polyrhiza was found in the floodplains of the Caquetá River.
We investigated patterns of bryophyte species richness and composition in two forest types of Colombian Amazonia, non-flooded tierra firme forest and floodplain forest of the Caquetá River. A total of 109 bryophyte species were recorded from 14 plots of 0.2 ha. Bryophyte life forms and habitats were analyzed, including the canopy and epiphylls. Total bryophyte richness did not differ significantly between the two landscapes, but mosses and liverworts showed opposite responses that balanced the overall richness. An independence test showed differences in both life form and habitat use between the two forest types, with more fan- and mat-forming bryophyte species in the floodplains and more epiphytic liverworts in the tierra firme forest. Correspondence analysis showed differences in the bryophyte species assemblages between the two forest types, which may respond to the higher humidity provided by flooding. Despite the environmental differences detected, epiphyll species assemblages were not strongly affected; apparently the epiphyll habitat is stressful enough to mask the environmental differences between the floodplain and tierra firme forests.
While the sortal constraints associated with Japanese numeral classifiers are well-studied, less attention has been paid to the details of their syntax. We describe an analysis implemented within a broad-coverage HPSG that handles an intricate set of numeral classifier construction types and compositionally relates each to an appropriate semantic representation, using Minimal Recursion Semantics.
This paper evaluates the effects of Public Sponsored Training in East Germany in the context of reiterated treatments. Selection bias based on observed characteristics is corrected for by applying kernel matching based on the propensity score. We control for further selection and the presence of Ashenfelter's Dip before the program with conditional difference-in-differences estimators. Training as a first treatment shows insignificant effects on the transition rates. The effect of program sequences and the incremental effect of a second program on the reemployment probability are insignificant. However, the incremental effect on the probability to remain employed is slightly positive. JEL Classification: H43, C23, J6, J64, C14
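The kernel matching step used for the propensity-score correction can be sketched as follows (an illustrative simulation, not the paper's data; the propensity score is known by construction here, whereas the paper estimates it): each treated unit is compared with a kernel-weighted average of control outcomes at nearby propensity scores, and the weighted differences are averaged into the treatment effect on the treated (ATT).

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data (illustrative only): one confounder x drives both
# treatment take-up and the outcome (through the propensity score p).
n = 4000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))            # propensity score, known by construction
d = rng.random(n) < p                    # treatment indicator
ATT_TRUE = 1.0
y = 2.0 * p + ATT_TRUE * d + rng.normal(size=n)   # outcome with treatment effect 1.0

def kernel_matching_att(y, d, pscore, bandwidth=0.05):
    """ATT via Gaussian-kernel matching on the propensity score:
    each treated unit is compared with a kernel-weighted average of
    control outcomes at nearby propensity scores."""
    y_t, p_t = y[d], pscore[d]
    y_c, p_c = y[~d], pscore[~d]
    effects = []
    for yi, pi in zip(y_t, p_t):
        w = np.exp(-0.5 * ((p_c - pi) / bandwidth) ** 2)
        effects.append(yi - np.sum(w * y_c) / np.sum(w))
    return float(np.mean(effects))

att = kernel_matching_att(y, d, p)
naive = float(y[d].mean() - y[~d].mean())   # ignores selection on x
print(f"kernel-matching ATT: {att:.2f}, naive difference: {naive:.2f} (true effect {ATT_TRUE})")
```

The naive treated-control difference overstates the effect because treated units have systematically higher propensity scores (and hence higher baseline outcomes); matching on the score removes that selection bias.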
Occurrence of hepatitis B virus (HBV) reactivation following kidney transplantation
(2004)
Configuration, simulation and visualization of simple biochemical reaction-diffusion systems in 3D
(2004)
Background In biological systems, molecules of different species diffuse within the reaction compartments and interact with each other, ultimately giving rise to such complex structures as living cells. In order to investigate the formation of subcellular structures and patterns (e.g. signal transduction) or spatial effects in metabolic processes, it would be helpful to use simulations of such reaction-diffusion systems. Pattern formation has been extensively studied in two dimensions. However, the extension to three-dimensional reaction-diffusion systems poses some challenges to the visualization of the processes being simulated. Scope of the Thesis The aim of this thesis is the specification and development of algorithms and methods for the three-dimensional configuration, simulation and visualization of biochemical reaction-diffusion systems consisting of a small number of molecules and reactions. After an initial review of existing literature about 2D/3D reaction-diffusion systems, a 3D simulation algorithm (PDE solver), based on an existing 2D simulation algorithm for reaction-diffusion systems written by Prof. Herbert Sauro, has to be developed. In a succeeding step, this algorithm has to be optimized for high performance. A prototypic 3D configuration tool for the initial state of the system has to be developed. This basic tool should enable the user to define and store the location of molecules, membranes and channels within a reaction space of user-defined size. A suitable data structure has to be defined for the representation of the reaction space. The main focus of this thesis is the specification and prototypic implementation of a suitable reaction space visualization component for the display of the simulation results. In particular, the possibility of 3D visualization during the course of the simulation has to be investigated. During the development phase, the quality and usability of the visualizations have to be evaluated in user tests.
The simulation, configuration and visualization prototypes should be compliant with the Systems Biology Workbench to ensure compatibility with software from other authors. The thesis is carried out in close cooperation with Prof. Herbert Sauro at the Keck Graduate Institute, Claremont, CA, USA. Due to this international cooperation the thesis will be written in English.
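The simulation core described above, an explicit finite-difference PDE solver for coupled reaction-diffusion equations on a 3D grid, can be sketched as follows. The Gray-Scott model serves here only as a standard two-species stand-in (the thesis targets user-defined reaction networks), and all parameters are illustrative.

```python
import numpy as np

def laplacian_3d(c):
    """Discrete 3D Laplacian with periodic boundaries (grid spacing = 1)."""
    return (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
            np.roll(c, 1, 1) + np.roll(c, -1, 1) +
            np.roll(c, 1, 2) + np.roll(c, -1, 2) - 6.0 * c)

def gray_scott_step(u, v, du=0.16, dv=0.08, f=0.035, k=0.065, dt=0.5):
    """One explicit Euler step of the Gray-Scott reaction-diffusion model.
    dt is kept below the diffusive stability limit dx^2 / (6 * du)."""
    uvv = u * v * v
    u_new = u + dt * (du * laplacian_3d(u) - uvv + f * (1.0 - u))
    v_new = v + dt * (dv * laplacian_3d(v) + uvv - (f + k) * v)
    return u_new, v_new

# Tiny demo: uniform 16x16x16 reaction space with a perturbed centre block
n = 16
u = np.ones((n, n, n))
v = np.zeros((n, n, n))
u[6:10, 6:10, 6:10] = 0.5
v[6:10, 6:10, 6:10] = 0.5
for _ in range(100):
    u, v = gray_scott_step(u, v)
print("u range:", u.min(), u.max(), " v range:", v.min(), v.max())
```

The per-voxel state arrays correspond to the reaction-space data structure mentioned in the thesis outline; a visualization component would render iso-surfaces or volume slices of `u` and `v` between steps.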
In hindsight, the debate about presupposition following Frege's discovery that the referential function of names and definite descriptions depends on the fulfillment of an existence and a uniqueness condition was curiously limited for a very long time. On the one hand, it was only in the 1960s that linguists began to take an interest and showed that presupposition was an all-pervasive phenomenon extending far beyond the philosophers' pet, definite descriptions. On the other hand, and this is our real concern, it is now only too obvious that the uniqueness condition is too restrictive to be applicable to the general case. An utterance of "The cat is on the mat" should not imply that there is only one cat and one mat in the whole world. The obvious move is to limit the uniqueness condition to some notion of utterance context.
Even though tourism has been recognised as an important field for transnational research today, there are few attempts to place tourism in the context of transnational theories or to think about transnationalism from the perspective of tourists. I argue that researching tourist practices can add important aspects to transnational approaches. The prerequisites of mobility and interaction, for example, are the features chosen by backpackers to describe what their Round-The-World trip is about. A form of tourism is adopted, or created, that itself confronts many aspects of globalisation. First of all, there is the immense dynamic involved: backpackers try to cover as many places and experiences as possible, travelling at high speed. They take in all kinds of touristic experiences, ranging from beach to adventure to culture tourism. They don't focus on a specific area or country but travel the world, crossing national borders perpetually. Additionally, they form a transnational network in which they interact with strangers of similar backgrounds (other backpackers, tourism professionals). This network helps them interact with people from different backgrounds (the so-called hosts or locals). Based on my research, backpackers forge a certain identity from these transnational practices, which I call globedentity. Globedentity expresses a type of identity construction that not only refers to the individual (I) but reflects the world (globe) in this identity. This globedentity is not fixed but is perpetually re-created and re-defined. It also embraces the increasingly popular awareness of globalisation with which backpackers, coming from highly educated middle-class backgrounds, have particularly identified. Owing to their constant awareness of the latest global social, cultural and economic developments, these educated milieus know exactly which tools to use to become successful parts of their societies.
Speakers have a wide range of noncanonical syntactic options that allow them to mark the information status of the various elements within a proposition. The correlation between a construction and constraints on information status, however, is not arbitrary; there are broad, consistent, and predictive generalizations that can be made about the information-packaging functions served by preposing, postposing, and argument-reversing constructions. Specifically, preposed constituents are constrained to represent discourse-old information, postposed constituents are constrained to represent information that is either discourse-new or hearer-new, and argument-reversing constructions require that the information represented by the preposed constituent be at least as familiar as that represented by the postposed constituent (Birner & Ward 1998). The status of inferable information (Clark 1977; Prince 1981), however, is problematic; a study of corpus data shows that such information can be preposed in an inversion or a preposing (hence must be discourse-old), yet can also be postposed in constructions requiring hearer-new information (hence must be hearer-new). This information status – discourse-old yet hearer-new – is assumed by Prince (1992) to be non-occurring on the grounds that what has been evoked in the discourse should be known to the hearer. I resolve this difficulty by arguing for a reinterpretation of the term 'discourse-old' as applying not only to information that has been explicitly evoked in the prior discourse, but rather to any information that provides a salient inferential link to the prior discourse. Extending Prince’s notion in this manner allows us to account for the distribution of noncanonically positioned peripheral constituents in a principled and unified way.
Pathologic data indicate that human cytomegalovirus (HCMV) infection might be associated with the pathogenesis of several human malignancies. However, no definitive evidence of a causal link between HCMV infection and cancer dissemination has been established to date. This study describes the modulation of the invasive behavior of NCAM-expressing tumor cell lines by HCMV. Neuroblastoma (NB) cells persistently infected with the HCMV strain AD169 (UKF-NB-4AD169 and MHH-NB-11AD169) were added to endothelial cell monolayers, and adhesion and penetration kinetics were measured. The 140- and 180-kDa isoforms of the adhesion receptor NCAM were evaluated by flow cytometry, Western blot, and reverse transcription-polymerase chain reaction (RT-PCR). The relevance of NCAM for tumor cell binding was proven by treating NB with NCAM antisense oligonucleotides or by NCAM transfection. HCMV infection profoundly increased the number of adherent and penetrated NB compared to controls. Surface expression of NCAM was significantly lower on UKF-NB-4AD169 and MHH-NB-11AD169 compared to mock-infected cells. Western blot and RT-PCR demonstrated reduced protein and RNA levels of the 140- and 180-kDa isoforms. An inverse correlation between NCAM expression and the adhesion capacity of NB was shown by the antisense and transfection experiments. We conclude that HCMV infection leads to downregulation of NCAM receptors, which is associated with enhanced tumor cell invasiveness.
The production of strange pentaquark states (e.g., Θ+ baryons and Ξ−− states) in hadronic interactions within a Gribov–Regge approach is explored. In this approach the Θ+(1540) and the Ξ are produced by the disintegration of remnants formed by the exchange of pomerons between the two protons. We predict the rapidity and transverse momentum distributions as well as the 4π multiplicity of the Θ+, Ξ−−, Ξ−, Ξ0 and Ξ+ for √s = 17 GeV (SPS) and 200 GeV (RHIC). For both energies more than 10−3 Θ+ and more than 10−5 Ξ per pp event should be observed by the present experiments.
Determination of the structure of complex I of Yarrowia lipolytica by single particle analysis
(2004)
Complex I contains a flavin mononucleotide and at least eight iron-sulfur clusters as redox-active cofactors. Since a substantial part of the mitochondrial genome codes for subunits of complex I, a large number of mitochondrial diseases affect this enzyme complex.
Complex I has so far been isolated from mitochondria, chloroplasts and bacteria. The minimal form of complex I is found in bacteria, where it consists of 14 subunits (or 13 in the case of a gene fusion) and has a mass of about 550 kDa. In general, seven hydrophilic and seven hydrophobic subunits with more than 50 predicted transmembrane helices are found. In complex I from eukaryotes, a larger number of additional, accessory subunits has been identified. Here, the seven hydrophobic subunits are encoded by the mitochondrial genome, whereas all other subunits are nuclear-encoded and must be imported into the mitochondrion.
The obligately aerobic yeast Yarrowia lipolytica was established as a model system for the study of eukaryotic complex I. The best-studied yeast to date, Saccharomyces cerevisiae, contains no complex I; here the oxidation of NADH is carried out by a different class of so-called alternative NADH dehydrogenases. Y. lipolytica also contains such an alternative enzyme, whose substrate-binding site, however, is oriented towards the outer face of the inner mitochondrial membrane. By molecular biological manipulation, an internal version of this enzyme could be expressed, which makes it possible to compensate for lethal defects in complex I deletion mutants. Meanwhile, all prerequisites have been established for the targeted genetic modification of nuclear-encoded subunits of complex I from Y. lipolytica. Protein purification is greatly facilitated by the use of a His-tag-based affinity purification...
We point out that during a type II supernova explosion the thermodynamic conditions of stellar matter between the protoneutron star and the shock front correspond to the nuclear liquid–gas coexistence region, which can be investigated in nuclear multifragmentation reactions. We have demonstrated that neutron-rich hot heavy nuclei can be produced in this region. The production of these nuclei may influence the dynamics of the explosion and contribute to the synthesis of heavy elements.
Hartmann and his Prague friends, whether German-Gentile or German-Jewish, rallied enthusiastically to the cause of what at first was a reawakening of suppressed Bohemian cultural nationalism and a move towards a cross-fertilisation of the two main lingual cultures (Czech/German) and the three main ethnicities (Czech/German/Jewish) of the country. They soon saw themselves as a "Jungböhmische Bewegung" corresponding to Young Germany. The Prague writer Rudolf Glaser founded a literary journal called 'Ost und West' for the express purpose of bringing together German and Slavic literary impulses under the Goethean motto: "Orient und Occident sind nicht mehr zu trennen". With Bohemia as the bridge, 'Ost und West' published German translations from all the Slavic languages, including Pushkin and Gogol, contributions by German writers sympathetic to the cause of emerging nations like Heinrich Laube, Ferdinand Freiligrath and Ernst Willkomm, but above all the Prague circle of Young Bohemians like Alfred Meissner, Isidor Heller, Uffo Horn, Gustav Karpeles and Ignatz Kuranda. Hartmann, too, made his literary debut in the journal, with a love poem entitled "Der Drahtbinder" featuring a subtitle in keeping with the spirit of the times: "nach einem slavischen Lied".
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
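The univariate building block of such estimators, the Parkinson (1980) range-based variance, can be sketched as follows (an illustrative simulation, not the paper's estimator in full; the paper obtains covariances from range-based variances of portfolios and cross-rates, e.g. via the exact identity Cov(a,b) = [Var(a) + Var(b) − Var(a−b)]/2).

```python
import numpy as np

LN4 = 4.0 * np.log(2.0)

def parkinson_var(high, low):
    """Parkinson (1980) range-based daily variance estimate:
    E[(ln(H/L))^2] = 4 ln(2) * sigma^2 for a driftless Brownian log-price."""
    r = np.log(np.asarray(high) / np.asarray(low))
    return float(np.mean(r * r) / LN4)

# Simulate driftless log-price paths on a fine intraday grid and record
# the daily high/low; the estimator should recover the true daily variance
# (with a small downward bias from observing the range at discrete times).
rng = np.random.default_rng(7)
sigma = 0.01                      # true daily return std (assumed)
days, steps = 2000, 390
z = rng.normal(size=(days, steps)) * (sigma / np.sqrt(steps))
logp = np.cumsum(z, axis=1)
high = np.exp(np.maximum(logp.max(axis=1), 0.0))   # include the opening price
low = np.exp(np.minimum(logp.min(axis=1), 0.0))
est = parkinson_var(high, low)
print(f"true var {sigma**2:.2e}, range-based estimate {est:.2e}")
```

Because the daily range summarizes the whole intraday path, this estimator is far more efficient than the squared close-to-close return, which is the efficiency property the paper exploits in the multivariate setting.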
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
Tractable hedging - an implementation of robust hedging strategies : [This Version: March 30, 2004]
(2004)
This paper provides a theoretical and numerical analysis of robust hedging strategies in diffusion–type models including stochastic volatility models. A robust hedging strategy avoids any losses as long as the realised volatility stays within a given interval. We focus on the effects of restricting the set of admissible strategies to tractable strategies which are defined as the sum over Gaussian strategies. Although a trivial Gaussian hedge is either not robust or prohibitively expensive, this is not the case for the cheapest tractable robust hedge which consists of two Gaussian hedges for one long and one short position in convex claims which have to be chosen optimally.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that for some option pricing models under continuous hedging the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test to be either no longer able to detect additional priced risk factors or to be unable to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the model of BS, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the model of Heston, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios which are similar to those reported in many empirical studies the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore strongly depend on the location of current volatility relative to its long-term mean, and on the degree of moneyness of the option. 
As a consequence tests reported in the literature may suffer from the problem that in a time-series framework the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13 Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size (and not just 'the' market price of jump risk). Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed-form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.
We investigate hadron production as well as transverse hadron spectra in nucleus-nucleus collisions from 2 A.GeV to 21.3 A.TeV within two independent transport approaches (UrQMD and HSD) that are based on quark, diquark, string and hadronic degrees of freedom. The comparison to experimental data demonstrates that both approaches agree quite well with each other and with the experimental data on hadron production. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is well described by both approaches without involving any phase transition. However, the maximum in the K+/π+ ratio at 20 to 30 A.GeV (the 'horn') is missed by ~ 40%. A comparison to the transverse mass spectra from pp and C+C (or Si+Si) reactions shows the reliability of the transport models for light systems. For central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV, however, the measured K± transverse mass spectra have a larger inverse slope parameter than expected from the calculations. The approximately constant slope of the K± spectra at SPS (the 'step') is not reproduced either. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding suggests that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions.
We investigate hadron production and transverse hadron spectra in nucleus-nucleus collisions from 2 A·GeV to 21.3 A·TeV within two independent transport approaches (UrQMD and HSD) based on quark, diquark, string and hadronic degrees of freedom. The enhancement of pion production in central Au+Au (Pb+Pb) collisions relative to scaled pp collisions (the 'kink') is described well by both approaches without involving a phase transition. However, the maximum in the K+/π+ ratio at 20 to 30 A·GeV (the 'horn') is missed by ~ 40%. Also, at energies above ~ 5 A·GeV, the measured K± mT-spectra have a larger inverse slope than expected from the models. Thus the pressure generated by hadronic interactions in the transport models at high energies is too low. This finding suggests that the additional pressure - as expected from lattice QCD at finite quark chemical potential and temperature - might be generated by strong interactions in the early pre-hadronic/partonic phase of central heavy-ion collisions. Finally, we discuss the emergence of density perturbations in a first-order phase transition and why they might affect relative hadron multiplicities, collective flow, and hadron mean-free paths at decoupling. A minimum in the excitation function of the collective flow v2 was discovered experimentally at 40 A·GeV - such a behavior was predicted long ago as a signature of a first-order phase transition.
We investigate transverse hadron spectra from relativistic nucleus-nucleus collisions which reflect important aspects of the dynamics - such as the generation of pressure - in the hot and dense zone formed in the early phase of the reaction. Our analysis is performed within two independent transport approaches (HSD and UrQMD) that are based on quark, diquark, string and hadronic degrees of freedom. Both transport models show their reliability for elementary pp as well as light-ion (C+C, Si+Si) reactions. However, for central Au+Au (Pb+Pb) collisions at bombarding energies above ~ 5 A.GeV the measured K± transverse mass spectra have a larger inverse slope parameter than expected from the calculations. Thus the pressure generated by hadronic interactions in the transport models above ~ 5 A.GeV is lower than observed in the experimental data. This finding shows that the additional pressure - as expected from lattice QCD calculations at finite quark chemical potential and temperature - is generated by strong partonic interactions in the early phase of central Au+Au (Pb+Pb) collisions.
In bioinformatics, biochemical signal pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, an approach that systematically obtains the most appropriate model and learns its parameters is extremely interesting. One of the most frequently used approaches to model selection is to choose the least complex model which "fits the needs". For noisy measurements, the model with the smallest mean squared error on the observed data fits the data too closely - it overfits. Such a model will perform well on the training data, but worse on unknown data. This paper proposes as model selection criterion the least complex description of the observed data by the model, the minimum description length. The performance of the approach is evaluated on the small but important example of inflammation modeling. Keywords: biochemical pathways, differential equations, septic shock, parameter estimation, overfitting, minimum description length.
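The trade-off behind the MDL criterion can be illustrated with a minimal sketch (a polynomial toy problem, not the paper's inflammation model; the two-part code length used below is a standard textbook form): the description length trades residual error against the number of parameters, so MDL picks a parsimonious model while the raw mean squared error always prefers the most complex one.

```python
import numpy as np

def mdl_score(y, y_hat, k):
    """Two-part MDL score (in nats, up to model-independent constants):
    model cost (k/2) ln n  +  data-given-model cost (n/2) ln(RSS/n)."""
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return 0.5 * k * np.log(n) + 0.5 * n * np.log(rss / n)

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 200)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.3, x.size)   # true model: degree 2

scores, mses = {}, {}
for deg in range(9):
    coeff = np.polyfit(x, y, deg)
    y_hat = np.polyval(coeff, x)
    scores[deg] = mdl_score(y, y_hat, deg + 1)   # k = number of coefficients
    mses[deg] = float(np.mean((y - y_hat) ** 2))

best_mdl = min(scores, key=scores.get)
best_mse = min(mses, key=mses.get)
print(f"MDL selects degree {best_mdl}; raw MSE selects degree {best_mse}")
```

The raw MSE criterion always lands on the most complex candidate because adding parameters can only reduce the training error, which is exactly the overfitting problem the abstract describes.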
Data driven automatic model selection and parameter adaptation – a case study for septic shock
(2004)
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, an approach that systematically learns the parameters is necessary. This paper proposes as model selection criterion the least complex description of the observed data by the model, the minimum description length. For the small but important example of inflammation modeling, the performance of the approach is evaluated.
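The minimum-description-length idea can be illustrated with a toy model-selection problem. The sketch below uses a two-part MDL approximation and a polynomial fitting example; both are illustrative assumptions, not the inflammation model or the exact criterion used in the paper:

```python
import numpy as np

def mdl_score(y, y_hat, k):
    # Two-part MDL approximation: data cost (n/2)·log(RSS/n)
    # plus parameter cost (k/2)·log(n) for k free parameters.
    n = len(y)
    rss = float(np.sum((y - y_hat) ** 2))
    return (n / 2) * np.log(rss / n) + (k / 2) * np.log(n)

def select_poly_degree(x, y, max_degree=6):
    # Fit polynomials of increasing degree and return the degree
    # whose fit minimizes the MDL score (complexity vs. accuracy).
    scores = {}
    for d in range(1, max_degree + 1):
        y_hat = np.polyval(np.polyfit(x, y, d), x)
        scores[d] = mdl_score(y, y_hat, d + 1)
    return min(scores, key=scores.get)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 60)
y = 2.0 * x**2 - x + rng.normal(0.0, 0.1, x.size)  # quadratic signal + noise
best_degree = select_poly_degree(x, y)
```

Unlike a raw mean-squared-error criterion, the parameter-cost term keeps the selected degree close to the true model order instead of rewarding overfitted high-degree fits.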
In bioinformatics, biochemical pathways can be modeled by many differential equations. It is still an open problem how to fit the huge number of parameters of the equations to the available data. Here, an approach that systematically learns the parameters is necessary. In this paper, for the small but important example of inflammation modeling, a network is constructed and different learning algorithms are proposed. It turned out that, due to the nonlinear dynamics, evolutionary approaches are necessary to fit the parameters to the sparse given data. Keywords: model parameter adaptation, septic shock, coupled differential equations, genetic algorithm.
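A minimal sketch of the evolutionary approach described above, assuming a toy logistic ODE in place of the paper's inflammation network; the population size, mutation scale, and observation times are illustrative choices:

```python
import random

def simulate(r, K, x0=0.1, dt=0.1, steps=100):
    # Euler integration of logistic growth dx/dt = r*x*(1 - x/K).
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + dt * r * x * (1 - x / K))
    return xs

def fitness(params, data):
    # Negative squared error against sparse observations {step: value}.
    xs = simulate(*params)
    return -sum((xs[t] - v) ** 2 for t, v in data.items())

def evolve(data, pop_size=40, gens=60, seed=1):
    # Simple genetic algorithm: truncation selection, averaging
    # crossover, and Gaussian mutation of the parameter vector (r, K).
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 2.0), rng.uniform(0.5, 3.0)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, data), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            children.append(tuple((x + y) / 2 + rng.gauss(0.0, 0.05)
                                  for x, y in zip(a, b)))
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, data))

true_traj = simulate(0.8, 1.5)
data = {t: true_traj[t] for t in (10, 30, 60, 90)}  # sparse observations
r_hat, K_hat = evolve(data)
```

Because the error surface of a nonlinear ODE model is generally non-convex, a population-based search of this kind is more robust than local gradient descent when only a few noisy observations are available.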
Since the description of sepsis by Schottmüller in 1914, the amount of knowledge available on sepsis and its underlying pathophysiology has substantially increased. Epidemiologic examinations of abdominal septic shock patients show the high risk posed by this condition and the extensive therapy it demands in the intensive care unit (ICU) (5). Unfortunately, until now it has not been possible to significantly reduce the mortality rate of septic shock, which is as high as 50-60% worldwide, although the PROWESS results (1) are encouraging. This paper summarizes the main results of the MEDAN project and their medical impact. Several aspects have already been published; see the references. The heterogeneity of patient groups and the variation in therapy strategies are seen as among the main problems for sepsis trials. In the MEDAN multi-center study of 71 intensive care units in Germany, a group of 382 patients consisting exclusively of abdominal septic shock patients who met the consensus criteria for septic shock (3) was analysed. For use within scores or in stand-alone experiments, variables are often studied in isolation rather than as a multidimensional whole; e.g., a recent study looks at the role thrombocytes play (15). To avoid this limitation, our study compares several established scores (SOFA, APACHE II, SAPS II, MODS) by means of a multidimensional neural network analysis. For outcome prediction, the data of 382 patients were analysed using most of the commonly documented vital parameters and doses of medication (metric variables). Data were collected in German hospitals from 1998 to 2001. The 382 handwritten patient records were transferred to an electronic database, yielding 2.5 million data entries. The metric data contained in the database consist of daily measurements and doses of medication. We used range and plausibility checks to keep faulty data out of the electronic database. 187 of the 382 patients died (49%).
We perform a study of the possible existence of hybrid stars with color superconducting quark cores using a specific hadronic model in a combination with an NJL-type quark model. It is shown that the constituent mass of the non-strange quarks in vacuum is a very important parameter that controls the beginning of the hadron–quark phase transition. At relatively small values of the mass, the first quark phase that appears is the two-flavor color superconducting (2SC) phase which, at larger densities, is replaced by the color-flavor locked (CFL) phase. At large values of the mass, on the other hand, the phase transition goes from the hadronic phase directly into the CFL phase avoiding the 2SC phase. It appears, however, that the only stable hybrid stars obtained are those with the 2SC quark cores.
Information literacy is a mosaic of attitudes, understandings, capabilities and knowledge about which there are three myths. The first myth is that it is about the ability to use ICTs to access a wealth of information. The second is that students entering higher education are information literate because student-centred, resource-based, and ICT-focused learning are now pervasive in secondary education. The third myth is that information literacy development can be addressed by library-centric generic approaches. This paper addresses those myths and emphasises the need for information literacy to be recognised as a critical whole-of-education and societal issue, fundamental to an information-enabled and better world. In formal education, information literacy can only be developed by infusion into curriculum design, pedagogies, and assessment.
Semi-permanent quadrats, located in the South and Central Western Slopes botanical regions of New South Wales, were assessed to indicate suitable periods of the year to conduct surveys of botanical diversity. The quadrats were located in woodland communities with a generally herbaceous understorey, and subject to a wide range of domestic stock grazing intensities. In the mid to western South Western Slopes (SWS) the greatest number of species was generally recorded in an October survey. The main exception was in degraded areas (low species diversity, high proportion of annual weed species), where similar results were recorded in September and October. In the cooler and wetter eastern SWS a relatively high proportion of species were recorded in October to early December surveys. However, when compared to species totals compiled from multiple assessments in all seasons, or from August to November, a single optimal survey usually recorded only 60–75% of the plant species at a site. Surveys in mid to late summer, autumn and early winter usually recorded less than 50% of the plant species present. The results reflect the prevailing Mediterranean-type climate, and that the ground layer vegetation (primarily comprised of annuals and herbaceous perennials) dominates the species diversity.
In April 2003 I commented on the European Commission’s Action Plan on a More Coherent European Contract Law [COM(2003) 68 final] and the Green Paper on the Modernisation of the 1980 Rome Convention [COM(2002) 654 final].1 While the main argument of that paper, i.e. the common neglect of the inherent interrelation between the further harmonisation of substantive contract law (by directives or through an optional European Civil Code) on the one hand and the modernisation of conflict rules for consumer contracts in Art. 5 Rome Convention on the other, remains a pressing issue, and as the German Law Journal continues its efforts in offering timely and critical analysis on consumer law issues,2 there is a variety of recent developments worth noting.
In the early Nineties the Hague Conference on Private International Law, on the initiative of the United States, started negotiations on a Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters (the "Hague Convention"). In October 1999 the Special Commission in charge presented a preliminary text, which was modelled quite closely on the European Convention on Jurisdiction and Enforcement of Judgments in Civil and Commercial Matters (the "Brussels Convention"). The latter was concluded between the then six Member States of the EEC in Brussels in 1968 and amended several times on the occasion of the entry of new Member States. In 2000, after the Treaty of Amsterdam altered the legal basis for judicial co-operation in civil matters in Europe, it was transformed into an EC Regulation (the "Brussels I Regulation"). The 1999 draft of the Hague Convention was heavily criticized by the USA and other states for its European approach of a double convention, regulating not only the recognition and enforcement of judgments but at the same time the extent of, and the limits to, jurisdiction to adjudicate in international cases. During a diplomatic conference in June 2001 a second draft was presented which contained alternative versions of several articles and thus reflected the existing dissent more than it resembled a draft convention. Difficulties in reaching consensus remained, especially with regard to activity-based jurisdiction, intellectual property, consumer rights and employee rights. In addition, the appropriateness of the whole draft was questioned in light of the problems posed by the de-territorialization of relevant conduct through the advent of the Internet. In April 2002 it was decided to continue negotiations on an informal level on the basis of a nucleus approach. The core consensus as identified by a working group, however, was not very broad.
The experts involved came to the conclusion that the project should be limited to choice of court agreements. In March 2004 a draft was presented which sets out its aims as follows: "The objective of the Convention is to make exclusive choice of court agreements as effective as possible in the context of international business. The hope is that the Convention will do for choice of court agreements what the New York Convention of 1958 has done for arbitration agreements." In April 2004 the Special Commission of the Hague Conference adopted a Draft "Convention on Exclusive Choice of Court Agreements", which according to its Art. 2 No. 1 a) is not applicable to choice of court agreements "to which a natural person acting primarily for personal, family or household purposes (a consumer) is a party". The broader project of a global judgments convention thus seems to have been abandoned, or at least postponed for an unlimited period. There are - of course - several reasons why the Hague Judgments project failed. Samuel Baumgartner has described an important one as the "Justizkonflikt" between the United States and Europe or, more specifically, Germany. Within the context of the general topic of this conference, that is, (international) jurisdiction for human rights, in the remainder of this presentation I shall elaborate on the socio-cultural aspects of the impartiality of judgments and their enforcement on a global scale.
The research performed in the DeepThought project aims at demonstrating the potential of deep linguistic processing if combined with shallow methods for robustness. Classical information retrieval is extended by high precision concept indexing and relation detection. On the basis of this approach, the feasibility of three ambitious applications will be demonstrated, namely: precise information extraction for business intelligence; email response management for customer relationship management; creativity support for document production and collective brainstorming. Common to these applications, and the basis for their development is the XML-based, RMRS-enabled core architecture framework that will be described in detail in this paper. The framework is not limited to the applications envisaged in the DeepThought project, but can also be employed e.g. to generate and make use of XML standoff annotation of documents and linguistic corpora, and in general for a wide range of NLP-based applications and research purposes.
We take a simple time-series approach to modeling and forecasting daily average temperature in U.S. cities, and we inquire systematically as to whether it may prove useful from the vantage point of participants in the weather derivatives market. The answer is, perhaps surprisingly, yes. Time-series modeling reveals conditional mean dynamics, and crucially, strong conditional variance dynamics, in daily average temperature, and it reveals sharp differences between the distribution of temperature and the distribution of temperature surprises. As we argue, it also holds promise for producing the long-horizon predictive densities crucial for pricing weather derivatives, so that additional inquiry into time-series weather forecasting methods will likely prove useful in weather derivatives contexts.
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of efforts. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, when firms are less profitable, and when poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL Classification: D82; G21; G32.
This study focuses upon a detailed description and analysis of the phonetic structures of Paiwan, an aboriginal language spoken in Taiwan with around 53,000 speakers. Paiwan, a member of the Austronesian language family, is not typologically related to other languages such as Mandarin and Taiwanese spoken in its geographically contiguous districts. Earlier work on phonological features of Paiwan (Chang, 1999; Tseng, 2003) sought an account in terms of segments and isolated facts about reduplication and stress, without accounting for the possible roles of phrase-level and sentence-level prosodic structures. Government Teaching Material (1993) listed 25 consonants and 4 vowels, without any description of phonetic features and phonological rules. Chang's (2000) reference grammar included 22 consonants and 4 vowels, with a very brief description of 5 phonological rules on single words. Regional diversity and 25 consonants have been mentioned in Pulaluyan's (2002) teaching material; however, no description of phonological rules was found in his material.
The transporter associated with antigen processing (TAP) plays a pivotal role in the adaptive immune response against virus-infected or malignantly transformed cells. As a member of the ABC transporter family, TAP hydrolyzes ATP to energize the transport of antigenic peptides from the cytosol into the lumen of the endoplasmic reticulum. TAP forms a heterodimeric complex composed of TAP1 and TAP2 (ABCB2/3). Both subunits contain a hydrophobic transmembrane domain and a hydrophilic nucleotide-binding domain. The aim of this work was to study the ATP hydrolysis event of the TAP complex and gain further insight into the mechanism of the peptide transport process. To analyze ATP hydrolysis of each subunit, I developed a method of trapping 8-azido-nucleotides to TAP in the presence of phosphate transition state analogs, followed by photocross-linking, immunoprecipitation, and high-resolution SDS-PAGE. Strikingly, trapping of both TAP subunits by beryllium fluoride is peptide-specific. The peptide concentration required for half-maximal trapping is identical for TAP1 and TAP2 and directly correlates with the peptide-binding affinity. Only background levels of trapping were observed for low-affinity peptides or in the presence of the herpes simplex viral protein ICP47, which specifically blocks peptide binding to TAP. Importantly, the peptide-induced trapped state is reached after ATP hydrolysis and not in a backward reaction of ADP binding and trapping. In the trapped state, TAP can neither bind nor exchange nucleotides, whereas peptide binding is not affected. In summary, these data support the model that peptide binding induces a conformation that triggers ATP hydrolysis in both subunits of the TAP complex within the catalytic cycle. The role of the ABC signature motif (C-loop) in the functional non-equivalence of the NBDs was also investigated. The TAP complex contains a canonical C-loop (LSGGQ) in TAP1 and a degenerate ABC signature motif (LAAGQ) in TAP2.
Mutation of the leucine or glycine (LSGGQ) in TAP1 fully abolished peptide transport. TAP complexes with equivalent mutations in TAP2, however, still showed residual peptide transport activity. To elucidate the origin of the asymmetry of the NBDs of TAP, we further examined TAP complexes with exchanged C-loops. Strikingly, the chimera with two canonical C-loops showed the highest transport rate whereas the chimera with two degenerate C-loops had the lowest transport rate, demonstrating that the ABC signature motifs control the peptide transport efficiency. All single-site mutants and chimeras showed similar activities in peptide or ATP binding, implying that these mutations affect the ATPase activity of TAP. In addition, these results prove that the serine of the C-loop is not essential for TAP function, but rather coordinates, together with other residues of the C-loop, the ATP hydrolysis in both nucleotide-binding sites. To study the coupling between ATP binding/hydrolysis and peptide binding, the putative catalytic bases of the TAP complex were mutated to generate so-called EQ mutants. The mutations did not influence the peptide-binding ability. Dimerization of the NBDs of the EQ mutants upon ATP binding does not alter the peptide-binding property. At 27°C, both ATP and ADP could induce the loss of peptide-binding capacity (Bmax) only in the variants bearing a mutated TAP2. Further studies are required to deduce at which stage in the catalytic cycle the peptide-binding site is affected. In addition, mutation of the putative catalytic base of both subunits resulted in magnesium-dependent peptide transport activity, demonstrating that these mutations did not abolish ATP hydrolysis. Thus, the function of this acidic residue as the catalytic base is unlikely to be universal for all ABC transporters.
The transporter associated with antigen processing (TAP) is a key component of the cellular immune system. As a member of the ATP-binding cassette (ABC) superfamily, TAP hydrolyzes ATP to energize the transport of peptides from the cytosol into the lumen of the endoplasmic reticulum. TAP is composed of TAP1 and TAP2, each containing a transmembrane domain and a nucleotide-binding domain (NBD). Here we investigated the role of the ABC signature motif (C-loop) on the functional non-equivalence of the NBDs, which contain a canonical C-loop (LSGGQ) for TAP1 and a degenerate C-loop (LAAGQ) for TAP2. Mutation of the leucine or glycine (LSGGQ) in TAP1 fully abolished peptide transport. However, TAP complexes with equivalent mutations in TAP2 still showed residual peptide transport activity. To elucidate the origin of the asymmetry of the NBDs of TAP, we further examined TAP complexes with exchanged C-loops. Strikingly, the chimera with two canonical C-loops showed the highest transport rate whereas the chimera with two degenerate C-loops had the lowest transport rate, demonstrating that the ABC signature motifs control peptide transport efficiency. All single site mutants and chimeras showed similar activities in peptide or ATP binding, implying that these mutations affect the ATPase activity of TAP. In addition, these results prove that the serine of the C-loop is not essential for TAP function but rather coordinates, together with other residues of the C-loop, the ATP hydrolysis in both nucleotide-binding sites.
The current study focuses on the prosodic realization of negators in Saisiyat, an endangered aboriginal language of Taiwan, and compares its prosodic realization of negation with that of English. The results of this study indicate that sentential subjects are the most acoustically prominent items in the Saisiyat negative sentences measured. This contrasts sharply with the English experimental sentences, in which the negator itself was the most acoustically prominent item. These findings suggest that Saisiyat is a pitch-accent language; thus, the presence of negators does not significantly change the prosodic parameters of surrounding words. English, in contrast, is an intonation language, so the presence of negation results in substantial prosodic modification. This suggests that the phenomenon of negation is universally prominent; however, languages with different prosodic systems will adopt different strategies for realizing prominence.
Vowel dispersion in Truku
(2004)
This study investigates the dispersion of vowel space in Truku, an endangered Austronesian language in Taiwan. Adaptive Dispersion (Liljencrants and Lindblom, 1972; Lindblom, 1986, 1990) proposes that the distinctive sounds of a language tend to be positioned in phonetic space in a way that maximizes perceptual contrast. For example, languages with large vowel inventories tend to expand the overall acoustic vowel space. Adaptive Dispersion predicts that the distance between the point vowels will increase with the size of a language's vowel inventory. Thus, the available acoustic vowel space is utilized in a way that maintains maximal auditory contrast.
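The notion of vowel-space dispersion invoked above can be quantified, for instance, as the summed pairwise distances between the point vowels in the F1/F2 plane. The formant values below are hypothetical placeholders, not Truku measurements:

```python
from math import dist

# Hypothetical first/second formant values (Hz) for the point vowels
# /i a u/; real measurements would replace these placeholders.
point_vowels = {"i": (300.0, 2300.0), "a": (750.0, 1300.0), "u": (320.0, 800.0)}

def dispersion(formants):
    # Sum of pairwise Euclidean distances in the F1/F2 plane --
    # a simple index of how spread out the vowel space is.
    pts = list(formants.values())
    return sum(dist(pts[i], pts[j])
               for i in range(len(pts))
               for j in range(i + 1, len(pts)))
```

Under Adaptive Dispersion, a language with a larger vowel inventory would be expected to show a larger value of such an index, since the point vowels move apart to preserve perceptual contrast.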
We consider three sets of phenomena that feature prominently - and separately - in the financial economics literature: conditional mean dependence (or lack thereof) in asset returns, dependence (and hence forecastability) in asset return signs, and dependence (and hence forecastability) in asset return volatilities. We show that they are very much interrelated, and we explore the relationships in detail. Among other things, we show that: (a) Volatility dependence produces sign dependence, so long as expected returns are nonzero, so that one should expect sign dependence, given the overwhelming evidence of volatility dependence; (b) The standard finding of little or no conditional mean dependence is entirely consistent with a significant degree of sign dependence and volatility dependence; (c) Sign dependence is not likely to be found via analysis of sign autocorrelations, runs tests, or traditional market timing tests, because of the special nonlinear nature of sign dependence; (d) Sign dependence is not likely to be found in very high-frequency (e.g., daily) or very low-frequency (e.g., annual) returns; instead, it is more likely to be found at intermediate return horizons; (e) Sign dependence is very much present in actual U.S. equity returns, and its properties match closely our theoretical predictions; (f) The link between volatility forecastability and sign forecastability remains intact in conditionally non-Gaussian environments, as for example with time-varying conditional skewness and/or kurtosis.
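Point (a) above follows directly from the fact that, for a conditionally Gaussian return, the sign probability is P(R > 0) = Φ(μ/σ), which moves with volatility whenever the mean is nonzero. A minimal numerical check, with illustrative μ and σ values:

```python
from math import erf, sqrt

def prob_positive(mu, sigma):
    # P(R > 0) for R ~ N(mu, sigma^2), i.e. Phi(mu / sigma),
    # using the standard normal CDF written via erf.
    return 0.5 * (1 + erf(mu / (sigma * sqrt(2))))

# With a positive expected return, the sign probability depends on volatility:
low_vol = prob_positive(0.05, 0.10)
high_vol = prob_positive(0.05, 0.40)
# With zero mean it is pinned at 1/2 regardless of volatility:
zero_mu = prob_positive(0.0, 0.40)
```

Rising volatility pushes the sign probability toward 1/2, so forecastable volatility translates into forecastable signs exactly when expected returns are nonzero, and into none at all when they are zero.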
The mode of the antitumoral activity of multimutated oncolytic herpes simplex virus type 1 G207 has not been fully elucidated yet. Because the antitumoral activity of many drugs involves the inhibition of tumor blood vessel formation, we determined if G207 had an influence on angiogenesis. Monolayers of human umbilical vein endothelial cells and human dermal microvascular endothelial cells, but not human dermal fibroblasts, bronchial epithelial cells, and retinal glial cells, were highly sensitive to the replicative and cytotoxic effects of G207. Moreover, G207 infection caused the destruction of endothelial cell tubes in vitro. In the in vivo Matrigel plug assay in mice, G207 suppressed the formation of perfused vessels. Intratumoral treatment of established human rhabdomyosarcoma xenografts with G207 led to the destruction of tumor vessels and tumor regression. Ultrastructural investigations revealed the presence of viral particles in both tumor and endothelial cells of G207-treated xenografts, but not in adjacent normal tissues. These findings show that G207 may suppress tumor growth, in part, due to inhibition of angiogenesis.
The MAM (meprin/A5-protein/PTPmu) domain is present in numerous proteins with diverse functions. PTPμ belongs to the MAM-containing subclass of protein-tyrosine phosphatases (PTP) able to promote cell-to-cell adhesion. Here we provide experimental evidence that the MAM domain is a homophilic binding site of PTPμ. We demonstrate that the MAM domain forms oligomers in solution and binds to the PTPμ ectodomain at the cell surface. The presence of two disulfide bridges in the MAM molecule was evidenced and their integrity was found to be essential for MAM homophilic interaction. Our data also indicate that PTPμ ectodomain forms oligomers and mediates the cellular adhesion, even in the absence of MAM domain homophilic binding. Reciprocally, MAM is able to interact homophilically in the absence of ectodomain trans binding. The MAM domain therefore contains independent cis and trans interaction sites and we predict that its main role is to promote lateral dimerization of PTPμ at the cell surface. This finding contributes to the understanding of the signal transduction mechanism in MAM-containing PTPs.
An improved approach to predicting preferred habitat and targeting survey effort for threatened plant species is needed to aid the discovery and conservation of new populations. This study employs several approaches to aid in the delineation of preferred habitat for the Leafless Tongue Orchid, Cryptostylis hunteriana Nicholls. BIOCLIM, a bioclimatic analysis and prediction system, is used initially to generate a bioclimatic habitat envelope within which the species can be expected to occur, based on all known sites in the Shoalhaven Local Government Area. Within the BIOCLIM envelope it is possible to further investigate the extent to which the species exhibits preferences for other habitat factors such as geology, soil landscapes and forest ecosystems. Multivariate techniques are used to compare floristic data from sites where Cryptostylis hunteriana is present, and sites from forest ecosystems where it has not been recorded historically. These techniques are also used to identify species which are diagnostic of each of these sets of sites. All 25 sites with Cryptostylis hunteriana populations are restricted to six forest ecosystems having a total area of 15% of the Shoalhaven Local Government Area and 47% of the BIOCLIM envelope. Within these forest ecosystems, ten plant species deemed indicative of the possible presence of Cryptostylis hunteriana are identified.
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. JEL Classification: E31, E52, E58, E61
Fronting a noun phrase changes the focus structure of a sentence. Therefore, it may affect truth conditions, since some operators, in particular quantificational adverbs, are sensitive to focus. However, the position of the quantificational adverb itself, hence its informational status, is usually assumed not to have any semantic effect. In this paper I discuss a reading of some quantificational adverbs, the relative reading, which disappears if the adverb is fronted. I propose that this reading relies not only on focus, but on B-accent (fall-rise intonation) as well. A fronted Q-adverb is usually pronounced with a B-accent; since only one element can be B-accented, this means that the scope of the adverb contains no B-accented material, hence no relative readings. Thus, the effects of fronting range more widely than is usually assumed, and quantificational adverbs are a useful tool with which to investigate these effects.
Hackethal and Schmidt (2003) criticize a large body of literature on the financing of corporate sectors in different countries that questions some of the distinctions conventionally drawn between financial systems. Their criticism is directed against the use of net flows of finance and they propose alternative measures based on gross flows which they claim re-establish conventional distinctions. This paper argues that their criticism is invalid and that their alternative measures are misleading. There are real issues raised by the use of aggregate data but they are not the ones discussed in Hackethal and Schmidt’s paper. JEL Classification: G30
In Archaea, bacteria, and eukarya, ATP provides metabolic energy for energy-dependent processes. It is synthesized by enzymes known as A-type or F-type ATP synthase, which are the smallest rotary engines in nature (Yoshida, M., Muneyuki, E., and Hisabori, T. (2001) Nat. Rev. Mol. Cell. Biol. 2, 669-677; Imamura, H., Nakano, M., Noji, H., Muneyuki, E., Ohkuma, S., Yoshida, M., and Yokoyama, K. (2003) Proc. Natl. Acad. Sci. U. S. A. 100, 2312-2315). Here, we report the first projected structure of an intact A(1)A(0) ATP synthase from Methanococcus jannaschii as determined by electron microscopy and single particle analysis at a resolution of 1.8 nm. The enzyme with an overall length of 25.9 nm is organized in an A(1) headpiece (9.4 x 11.5 nm) and a membrane domain, A(0) (6.4 x 10.6 nm), which are linked by a central stalk with a length of approximately 8 nm. A part of the central stalk is surrounded by a horizontally situated rodlike structure ("collar"), which interacts with a peripheral stalk extending from the A(0) domain up to the top of the A(1) portion, and a second structure connecting the collar structure with A(1). Superposition of the three-dimensional reconstruction and the solution structure of the A(1) complex from Methanosarcina mazei Gö1 has allowed the projections to be interpreted as the A(1) headpiece, a central and the peripheral stalk, and the integral A(0) domain. Finally, the structural organization of the A(1)A(0) complex is discussed in terms of the structural relationship to the related motors, F(1)F(0) ATP synthase and V(1)V(0) ATPases.
This dissertation argues that 'policy advice formation', as a developing discourse, is a differentiated hybrid resulting from the merger of the comparative education and policy studies disciplines. Through discourse analysis based on John Creswell's format, the study identifies revisions, restatements and shifts in emphasis in the theories, methodological models and challenge topics of comparative education and policy studies, findings which display the development of the 'policy advice formation' discourse. In conclusion, the study found that these differential patterns seemingly formed because of the combined effects of standardization in education science knowledge expressed within the discourse.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication and a lower probability of potentially harmful co-investment, and facilitate board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which coincides with an increased probability of investment in high-tech companies. JEL Classification: G24, G31, G32.
Twenty-one riparian vascular plant communities are defined, mapped and described using presence/absence data from 460 sites from relatively unmodified stretches of rivers and streams on mainland Tasmania. The process of classification involved selection of groups of floristically distinct sites from a sorted table produced by a polythetic divisive process. The communities have strong geographic patterns. Many communities have a wide range of structural expression and/or dominants. Nearly half of the native vascular flora of Tasmania is present in the sites, including a large number of conservation-significant species, some of which are concentrated in riparian vegetation. In the drier, lowland parts of the State there are large areas with little or no native riparian vegetation remaining. Several of the communities that occur in this environment appear to be totally unreserved, while most of the communities from colder and more humid areas are represented within secure reserves.
The determination of protein structures by NMR spectroscopy is a complex process in which resonance frequencies and signal intensities are assigned to the atoms of the protein. Determining the three-dimensional protein structure requires the following steps: sample preparation and 15N/13C isotope enrichment, acquisition of the NMR experiments, processing of the spectra, identification of the signal resonances ('peak picking'), assignment of the chemical shifts, assignment of the NOESY spectra and collection of conformational structure parameters, structure calculation and structure refinement. Current methods for automated structure calculation use a series of computer algorithms that couple the assignment of the NOESY spectra and the structure calculation in an iterative process. Although new types of structural parameters, such as dipolar couplings, orientational information from cross-correlated relaxation rates, or structural information arising in the presence of paramagnetic centers in proteins, represent important innovations for protein structure calculation, the distance information from NOESY spectra remains the most important basis for NMR structure determination. The large amount of time required for peak picking in NOESY spectra is mainly due to spectral overlap, noise signals and artifacts. More efficient automated peak picking therefore requires reliable filters to select the relevant signals. This thesis describes a new algorithm for automated protein structure calculation that includes automated peak picking of NOESY spectra denoised with the help of wavelets. The crucial point of this algorithm is the generation of incremental peak lists from NOESY spectra processed with different wavelet-based denoising procedures.
Denoised NOESY spectra yield signal lists with different confidence ranges, which are used at different stages of the combined NOE assignment/structure calculation. The first structural model is based on strongly denoised spectra, which yield the most conservative signal list, containing signals that can be regarded as largely reliable. In later stages, signal lists with a larger number of signals, derived from less strongly denoised spectra, are used. The effect of the different denoising procedures on the completeness and correctness of the NOESY peak lists was examined in detail. By combining wavelet denoising with a new algorithm for signal integration, together with additional filters that check the consistency of the peak list (network anchoring of the spin systems and symmetrization of the peak list), fast convergence of the automated structure calculation is achieved. The new algorithm was integrated into ARIA, a widely used computer program for automated NOE assignment and structure calculation. The algorithm was validated on the monomer unit of the polysulfide-sulfur transferase (Sud) from Wolinella succinogenes, whose high-resolution solution structure had previously been determined by conventional means. In addition to determining protein solution structures, NMR spectroscopy is also a powerful tool for studying protein-ligand and protein-protein interactions. Both NMR spectra of isotope-labeled proteins and spectra of ligands can be used for screening for inhibitors. In the first case, the sensitivity of the backbone 1H and 15N chemical shifts to small geometric or electrostatic changes upon ligand binding is used as an indicator.
Several methods are available as screening procedures in which ligand signals are observed: transfer NOEs, saturation transfer difference (STD) experiments, ePHOGSY, and diffusion-edited and NOE-based methods. Most of these techniques can be used for the rational design of inhibitory compounds. For the evaluation of studies with a large number of inhibitors, efficient pattern-recognition procedures such as principal component analysis (PCA) are used. PCA is suited to visualizing similarities and differences between spectra recorded with different inhibitors. The experimental data are first processed with a series of filters that, among other things, reduce artifacts caused by only small changes in chemical shifts. The most widespread filter is so-called 'bucketing', in which neighboring points are summed into a 'bucket'. To avoid the typical disadvantages of the bucketing procedure, this thesis investigated the effect of wavelet denoising for preparing NMR data for PCA, using existing series of HSQC spectra of proteins with different ligands as examples. The combination of wavelet denoising and PCA is most efficient when PCA is applied directly to the wavelet coefficients. Thresholding the wavelet coefficients in a multiscale analysis yields a compressed representation of the data that minimizes noise artifacts. Unlike bucketing, this compression is not 'blind' but adapted to the properties of the data. The new algorithm combines the advantages of a data representation in wavelet space with data visualization by PCA.
This thesis shows that PCA in wavelet space allows optimized clustering while eliminating typical artifacts. Furthermore, this thesis describes a de novo structure determination of the periplasmic polysulfide-sulfur transferase (Sud) from the anaerobic gram-negative bacterium Wolinella succinogenes. The Sud protein is a polysulfide-binding and -transferring enzyme that catalyzes fast polysulfide-sulfur reduction at low polysulfide concentration. Sud is a 30 kDa homodimer that contains no prosthetic groups or heavy metal ions. Each monomer contains one cysteine, which covalently binds up to ten polysulfide-sulfur (Sn2-) ions. Sud is thought to transfer the polysulfide chain to a catalytic molybdenum ion located in the active site of the membrane-bound enzyme polysulfide reductase (Psr), on its periplasm-facing side, where a reductive cleavage of the chain is catalyzed. The solution structure of the Sud homodimer was determined using heteronuclear multidimensional NMR techniques. The structure is based on distance restraints derived from NOESY spectra, backbone hydrogen bonds and torsion angles, as well as residual dipolar couplings, which were important for the refinement of the structure and for the relative orientation of the monomer units. In the NMR spectra of homodimers, all symmetry-related nuclei have equivalent magnetic environments, so their chemical shifts are degenerate. This symmetry degeneracy simplifies the resonance assignment problem, since only half of the nuclei have to be assigned. The NOESY assignment and the structure calculation are, however, complicated by the impossibility of distinguishing between intra-monomer, inter-monomer and co-monomer (mixed) NOESY signals.
Two approaches are available to resolve the symmetry degeneracy of the NOESY data: (I) asymmetric labeling experiments, to distinguish the intramolecular from the intermolecular NOESY signals, and (II) special structure-calculation methods that can work with ambiguous distance restraints. The structure presented in this thesis was calculated using the symmetry-ADR ('Ambiguous Distance Restraints') method in combination with data from asymmetrically isotope-labeled dimers. The coordinates of the Sud dimer, together with the NMR-based structural data, were deposited in the RCSB Protein Data Bank under PDB accession number 1QXN. The Sud protein shows only little primary-sequence homology to other proteins with similar function and known three-dimensional structure. Known examples are the sulfur transferases and the rhodanese enzymes, which catalyze the transfer of a sulfur atom from a suitable donor to a nucleophilic acceptor (e.g. from thiosulfate to cyanide). The three-dimensional structures of these proteins show a typical α/β topology and have a similar active-site environment with respect to the backbone conformation. The active-site loop surrounds the catalytic cysteine, which is present in all rhodanese enzymes, and appears to be flexible in the Sud protein (missing resonance assignments for residues 89-94). The polysulfide end protrudes from a positively charged binding pocket (residues R46, R67, K90, R94), where Sud probably makes contact with the polysulfide reductase. The structural result was confirmed by mutagenesis experiments, which showed that all active-site residues are essential for the sulfur-transferase activity of the Sud protein.
Substrate binding had previously been studied by comparing [15N,1H]-TROSY-HSQC spectra of the Sud protein in the presence and absence of the polysulfide ligand. Upon substrate binding, the local geometry of the polysulfide binding site and of the dimer interface appears to change. The conformational changes and the slow dynamics induced by ligand binding may trigger the subsequent polysulfide-sulfur activity. A second polysulfide-sulfur transferase protein (Str, 40 kDa), with a five-fold higher native concentration compared to Sud, was discovered in the bacterial periplasm of Wolinella succinogenes. The two proteins are thought to form a polysulfide-sulfur complex in which Str collects aqueous polysulfide and passes it on to Sud, which carries out the sulfur transfer to the catalytic molybdenum ion in the active site on the periplasm-facing side of the polysulfide reductase. Chemical-shift changes in [15N,1H]-TROSY-HSQC spectra show that polysulfide-sulfur transfer between Str and Sud takes place, and a possible protein-protein interaction surface could be identified. In the absence of the polysulfide substrate, no interactions between Sud and Str were observed, confirming the hypothesis that the two proteins interact and enable polysulfide-sulfur transfer only when polysulfide is present as the driving force.
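The wavelet-space PCA described in the abstract above can be illustrated in a few lines. This is only a minimal sketch: it uses synthetic one-dimensional traces standing in for spectra, a Haar transform (the simplest orthonormal wavelet; the abstract does not commit to a particular wavelet family), and quantile-based hard thresholding. It is not the ARIA integration or the actual HSQC pipeline.

```python
import numpy as np

def haar_dwt(x, levels=3):
    """Orthonormal Haar wavelet transform; returns concatenated
    approximation and detail coefficients."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2.0), (a[0::2] - a[1::2]) / np.sqrt(2.0)
        coeffs.append(d)
    coeffs.append(a)
    return np.concatenate(coeffs[::-1])

def hard_threshold(c, keep_frac=0.1):
    """Keep only the largest-magnitude coefficients: a compressed,
    data-adapted representation, unlike fixed-width bucketing."""
    cut = np.quantile(np.abs(c), 1.0 - keep_frac)
    return np.where(np.abs(c) >= cut, c, 0.0)

def pca_scores(X, k=2):
    """Project the rows of X onto the first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# hypothetical 'spectra': two groups of noisy traces with different peak positions
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 128)
group_a = [np.exp(-(t - 0.3) ** 2 / 1e-3) + 0.05 * rng.standard_normal(128) for _ in range(5)]
group_b = [np.exp(-(t - 0.7) ** 2 / 1e-3) + 0.05 * rng.standard_normal(128) for _ in range(5)]

# PCA applied directly to thresholded wavelet coefficients
W = np.array([hard_threshold(haar_dwt(s)) for s in group_a + group_b])
scores = pca_scores(W)
```

On this toy input the first principal component separates the two ligand groups, which is the clustering behavior the abstract attributes to PCA in wavelet space.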
Maintenance of genomic integrity is essential to avoid cellular transformation, neoplasia, or cell death. DNA synthesis, mitosis, and cytokinesis are important cellular processes required for cell division and the maintenance of cellular homeostasis; they are governed by many extra- and intra-cellular stimuli. Progression of normal cell division depends on cyclin interaction with cyclin-dependent kinases (Cdk) and the degradation of cyclins before chromosomal segregation through ubiquitination. Multiple checkpoints exist and are conserved in the cell cycle in higher eukaryotes to ensure that if one fails, others will take care of genomic integrity and cell survival. Many genes act as either positive or negative regulators of checkpoint function through different kinase cascades, delaying cell cycle progression to repair the DNA lesions and breaks, and assuring equal segregation of chromosomes to daughter cells. Understanding the checkpoint pathways and genes involved in the cellular response to DNA damage and cell division events in normal and cancer cells provides information about cancer predisposition, and suggests design of small molecules and other strategies for cancer therapy. Key Words: ATM-ATR; ATM/ATR; Aurora kinases; BRCA1; Cdc6; Cdc25; Cdc27-Cdc20/Cdh1; Cell cycle; CENP-E; centrosome; checkpoint; Chk1/Chk2; cyclin-Cdk; cyclin-dependent kinase inhibitors (CKI); hATRIP; Mad/Bub; MCM; MgcRacGAP; microtubule-associated proteins (MAPs); mitotic exit network (MEN); Mps1; NIMA kinases; ORC; p53; PCNA; PI3K-Akt; Plk; Rad50-Nbs1-Mre11; Ran-GTP; Ras; RB-E2F; SMC; Tem1.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, in particular given the European Union's decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high-quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms which pre-adopted such standards before 2005, and investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital, and therefore measurable economic benefits, for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against concluding that reporting under internationally accepted standards, per se, lowers the cost of equity capital of adopting firms.
In this study, we develop a technique for estimating a firm’s expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only publicly available information at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
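The core mechanics of an implied cost-of-equity estimate can be sketched as follows: given a price and accounting forecasts, solve numerically for the discount rate that makes a residual-income valuation hit the observed price. This is a deliberately simplified illustration with hypothetical numbers, assuming clean-surplus accounting and a zero terminal value; it is not the full Gebhardt/Lee/Swaminathan or Easton et al. specification used in the paper.

```python
def implied_coe(price, book0, eps, payout):
    """Back out r such that price equals book value plus discounted
    residual income over the forecast horizon (terminal residual
    income set to zero -- a simplifying assumption)."""
    def pv_gap(r):
        bv, pv = book0, book0
        for t, e in enumerate(eps, start=1):
            pv += (e - r * bv) / (1.0 + r) ** t   # discounted residual income
            bv += e * (1.0 - payout)              # clean-surplus book value
        return pv - price

    lo, hi = 1e-6, 1.0                            # bisection bracket for r
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if pv_gap(lo) * pv_gap(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# hypothetical inputs: price, current book value per share,
# three years of EPS forecasts, dividend payout ratio
r = implied_coe(price=12.0, book0=10.0, eps=[1.5, 1.7, 1.9], payout=0.5)
```

Because the present value is monotonically decreasing in r, bisection on the pricing gap converges reliably; with daily prices and forecast updates the same solve can be repeated each day, which is what makes daily estimation feasible.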
The objective of this paper is the study of the equilibrium behavior of a population on the hierarchical group ΩN consisting of families of individuals undergoing critical branching random walk; in addition, these families also develop according to a critical branching process. Strong transience of the random walk guarantees existence of an equilibrium for this two-level branching system. In the limit N→∞ (called the hierarchical mean field limit), the equilibrium aggregated populations in a nested sequence of balls B_ℓ^(N) of hierarchical radius ℓ converge to a backward Markov chain on R+. This limiting Markov chain can be explicitly represented in terms of a cascade of subordinators, which in turn makes possible a description of the genealogy of the population.
Dislocation without movement
(2004)
This paper argues that French Left-Dislocation is a unified phenomenon whether it is resumed by a clitic or a non-clitic element. The syntactic component is shown to play a minimal role in its derivation: all that is required is that the dislocated element be merged by adjunction to a Discourse Projection (generally a finite TP with root properties). No agreement or checking of a topic feature is necessary, hence no syntactic movement of any sort need be postulated. The so-called resumptive element is argued to be a full-fledged pronoun rather than a true syntactic resumptive.
The starting point of Demirovic's text is Adorno's idea that concepts, as forms of thinking, are constellations of power. In contrast to the many interpretations that read Adorno as resigned, Demirovic shows that this assumption enables Adorno to give his own theory the character of an intervention in the ideological consensus of everyday life with regard to emancipation.
Despite powerful advances in yield curve modeling in the last twenty years, comparatively little attention has been paid to the key practical problem of forecasting the yield curve. In this paper we do so. We use neither the no-arbitrage approach, which focuses on accurately fitting the cross section of interest rates at any given time but neglects time-series dynamics, nor the equilibrium approach, which focuses on time-series dynamics (primarily those of the instantaneous rate) but pays comparatively little attention to fitting the entire cross section at any given time and has been shown to forecast poorly. Instead, we use variations on the Nelson-Siegel exponential components framework to model the entire yield curve, period-by-period, as a three-dimensional parameter evolving dynamically. We show that the three time-varying parameters may be interpreted as factors corresponding to level, slope and curvature, and that they may be estimated with high efficiency. We propose and estimate autoregressive models for the factors, and we show that our models are consistent with a variety of stylized facts regarding the yield curve. We use our models to produce term-structure forecasts at both short and long horizons, with encouraging results. In particular, our forecasts appear much more accurate at long horizons than various standard benchmark forecasts. JEL Code: G1, E4, C5
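For a fixed exponential decay parameter, the period-by-period fit described in the abstract above reduces to ordinary least squares on the three Nelson-Siegel loadings. A minimal sketch on synthetic data (the decay value 0.0609 is the common calibration for maturities measured in months; the maturities and factor values below are illustrative):

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel factor loadings for maturities tau (in months)."""
    x = lam * np.asarray(tau, dtype=float)
    slope = (1.0 - np.exp(-x)) / x        # loading on the slope factor
    curv = slope - np.exp(-x)             # loading on the curvature factor
    return np.column_stack([np.ones_like(x), slope, curv])

def fit_ns(tau, yields, lam=0.0609):
    """OLS estimate of (level, slope, curvature) for one cross section."""
    X = ns_loadings(tau, lam)
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta

# synthetic yield-curve cross section generated from known factors
tau = np.array([3.0, 6.0, 12.0, 24.0, 36.0, 60.0, 120.0])
true_beta = np.array([6.0, -2.0, 1.5])    # level, slope, curvature
yields = ns_loadings(tau) @ true_beta
beta_hat = fit_ns(tau, yields)            # recovers the factors
```

Repeating the fit each period yields three factor time series; the autoregressive forecasting models the paper proposes would then be estimated on those series.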