A tale of two lost archives
(2009)
This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their typed variants, where our specific calculus has letrec, recursive types and is nondeterministic. Adding a type label to every subexpression is all that is needed, together with some natural constraints for the consistency of the type labels and the well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained and that untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a nice interaction between reduction and typing, some reduction rules have to be accompanied by a type modification that generalizes or instantiates types.
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness with respect to program semantics, since the encodings are adequate translations with respect to contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore, we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
This paper analyzes the risk properties of typical asset-backed securities (ABS), like CDOs or MBS, relying on a model with both macroeconomic and idiosyncratic components. The examined properties include expected loss, loss given default, and macro factor dependencies. Using a two-dimensional loss decomposition as a new metric, the risk properties of individual ABS tranches can be compared directly to those of corporate bonds, within and across rating classes. Using Monte Carlo simulation, we find that the risk properties of ABS differ significantly and systematically from those of straight bonds with the same rating. In particular, loss given default, the sensitivities to macroeconomic risk, and model risk differ greatly between instruments. Our findings have implications for understanding the credit crisis and for policy making. On an economic level, our analysis suggests a new explanation for the observed rating inflation in structured finance markets during the pre-crisis period 2004-2007. On a policy level, our findings call for a termination of the 'one-size-fits-all' approach to the rating methodology for fixed income instruments, requiring a dedicated rating methodology for structured finance instruments. JEL Classification: G21, G28
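The interplay of macroeconomic and idiosyncratic loss components described in this abstract can be illustrated with a minimal Monte Carlo sketch (a toy one-factor Gaussian model with illustrative parameters, not the authors' simulation setup): each loan defaults when a latent variable mixing a common macro factor and idiosyncratic noise falls below a threshold, and tranche losses are read off the simulated pool loss.

```python
import random
from statistics import NormalDist

def tranche_expected_loss(attach, detach, n_loans=100, pd=0.02, lgd=0.5,
                          rho=0.3, n_sims=4000, seed=1):
    """Expected loss (as a fraction of tranche notional) of a
    [attach, detach) tranche in a toy one-factor default model.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(pd)   # latent-variable default threshold
    size = detach - attach
    total = 0.0
    for _ in range(n_sims):
        z = rng.gauss(0.0, 1.0)            # common macroeconomic factor
        defaults = 0
        for _ in range(n_loans):
            eps = rng.gauss(0.0, 1.0)      # idiosyncratic component
            if rho ** 0.5 * z + (1 - rho) ** 0.5 * eps < threshold:
                defaults += 1
        pool_loss = lgd * defaults / n_loans
        total += min(max(pool_loss - attach, 0.0), size) / size
    return total / n_sims
```

A junior tranche absorbs losses first, so its expected loss exceeds that of a senior tranche on the same pool, while the senior tranche's losses are concentrated in bad macro states: rating both by expected loss alone hides their very different macro-factor sensitivities.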
Induced charge computation
(2009)
One of the main tenets of statistical mechanics is that the properties of a thermodynamic state point do not depend on the choice of the statistical ensemble. This breaks down for small systems, e.g. single molecules. Hence, the choice of the statistical ensemble is crucial for the interpretation of single-molecule experiments, where the outcome of measurements depends on which variables or control parameters are held fixed and which ones are allowed to fluctuate. Following this principle, this thesis investigates the thermodynamics of single-polymer pulling experiments within two different statistical ensembles. The scaling of the conjugate chain ensembles, the fixed end-to-end vector (Helmholtz) and the fixed applied force (Gibbs) ensembles, is studied in depth. This thesis further investigates ensemble equivalence for different force regimes and polymer-chain contour lengths. Using coarse-grained molecular dynamics simulations, i.e. Langevin dynamics, the simulations were found to complement the theoretical predictions for the scaling of the ensemble difference of Gaussian chains in different force regimes, with special attention given to the zero-force regime. After constructing conjugate Helmholtz and Gibbs ensembles for a Gaussian chain, two different data sets of thermodynamic states on the force-extension plane, i.e. force-extension curves, were generated. The ensemble difference is computed for different polymer-chain lengths using these force-extension curves. The scaling of the ensemble difference versus relative polymer-chain length under different force regimes has been derived from the simulation data and compared to theoretical predictions. The results demonstrate that the Gaussian chain in the zero-force limit generates nonequivalent ensembles, regardless of its equilibrium bond length and polymer-chain contour length. Moreover, if polymers are charged and confined, coarse-graining is problematic, owing to dielectric interfaces.
Hence, the effect of dielectric interfaces must be taken into account when describing physical systems such as ionic channels or biopolymers inside nanopores. It is shown that the effect of dielectrics is crucial for the dynamics of a biopolymer or an ion inside a nanopore. In simulations, the efficient and accurate computation of electrostatic interactions in the presence of an arbitrarily shaped dielectric domain is challenging. Several solutions for this problem have been proposed in the literature, such as a density functional approach, transforming the problem at hand into an algebraic one (Induced Charge Computation, ICC), and boundary element methods. The essential concept of all of them is the same: replace the dielectric interface with a polarization charge density. These approaches have been analyzed and the ICC algorithm has been implemented. A new, superior boundary element method has been devised that computes forces via the Particle-Particle Particle-Mesh (P3M) method for periodic geometries (ICCP3M). This method has been compared to the ICC algorithm, the algebraic solutions, and density functional approaches. Extensive numerical tests against analytically tractable geometries have confirmed the correctness and applicability of the developed and implemented algorithms, demonstrating that ICCP3M is the fastest and most versatile algorithm. Optimization issues in obtaining accurate induced charge densities are also discussed. The potential of mean force (PMF) of DNA, modelled at a coarse-grained level inside a nanopore, is investigated with and without the inclusion of dielectric effects. Despite the simplicity of the model, the dramatic effect of the dielectric inclusions is clearly seen in the observed force profile.
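The fixed-force (Gibbs) ensemble discussed in this abstract can be made concrete with a minimal Metropolis sketch of a Gaussian chain pulled by a constant force (an illustration of the ensemble definition, not the thesis' Langevin simulations); the sampled mean extension should approach the analytic Gaussian-chain result N·b²·f/(3·kT).

```python
import math
import random

def gibbs_mean_extension(n_bonds=50, b=1.0, f=0.6, kT=1.0,
                         n_steps=200000, seed=3):
    """Metropolis sampling of the bond z-components of a Gaussian chain
    under a constant pulling force f (fixed-force / Gibbs ensemble).
    Bond energy: 3*kT/(2*b**2) * z_i**2; pulling work term: -f * z_i.
    Parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    k = 3.0 * kT / (2.0 * b * b)
    z = [0.0] * n_bonds
    extension, ext_sum, samples = 0.0, 0.0, 0
    for step in range(n_steps):
        i = rng.randrange(n_bonds)
        new = z[i] + rng.uniform(-0.5, 0.5)
        dE = k * (new * new - z[i] * z[i]) - f * (new - z[i])
        if dE <= 0.0 or rng.random() < math.exp(-dE / kT):
            extension += new - z[i]       # running end-to-end z-extension
            z[i] = new
        if step >= n_steps // 5:          # discard burn-in
            ext_sum += extension
            samples += 1
    return ext_sum / samples
```

For the defaults above the analytic mean extension is 50·0.6/3 = 10. In the zero-force limit, where the abstract reports ensemble nonequivalence, the mean extension of both conjugate ensembles vanishes and the difference shows up in the fluctuations instead.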
Introduction Complex psychopathological and behavioral symptoms, such as delusions and aggression against care providers, are often the primary cause of acute hospital admissions of elderly patients to emergency units and psychiatric departments. This issue represents an interdisciplinary, clinically highly relevant diagnostic and therapeutic challenge across many medical specialties and general practice. At least 50% of the dramatically growing number of patients with dementia exhibit aggressive and agitated symptoms during the course of clinical progression, particularly at moderate clinical severity. Methods Commonly used rating scales for agitation and aggression are reviewed and discussed. Furthermore, we focus in this article on the benefits and limitations of all available data on anticonvulsants published for this specific indication, such as valproate, carbamazepine, oxcarbazepine, lamotrigine, gabapentin and topiramate. Results To date, the most positive and robust data are available for carbamazepine; however, pharmacokinetic interactions with secondary enzyme induction limit its use. Controlled data on valproate do not seem to support its use in this population. For oxcarbazepine only one controlled, but negative, trial is available. Positive small series and case reports have been published for lamotrigine, gabapentin and topiramate. Conclusions So far, the data on anticonvulsants in demented patients with behavioral disturbances are not convincing. Controlled clinical trials of newer anticonvulsants with a better tolerability profile, using specific, valid and psychometrically sound instruments, are mandatory to verify whether they can contribute as a treatment option for this indication.
Algorithmic trading engines versus human traders – do they behave different in securities markets?
(2009)
After exchanges and alternative trading venues introduced electronic execution mechanisms worldwide, the focus of the securities trading industry shifted to the use of fully electronic trading engines by banks, brokers and their institutional customers. These Algorithmic Trading engines enable order submissions without human intervention, based on quantitative models applying historical and real-time market data. Although there is a widespread discussion on the pros and cons of Algorithmic Trading and on its impact on market volatility and market quality, little is known about how algorithms actually place their orders in the market and whether and in which respect this differs from other order submissions. Based on a dataset that – for the first time – includes a specific flag enabling the identification of orders submitted by Algorithmic Trading engines, the paper investigates the extent of Algorithmic Trading activity and specifically their order placement strategies in comparison to human traders in the Xetra trading system. It is shown that Algorithmic Trading has become a relevant part of overall market activity and that Algorithmic Trading engines fundamentally differ from human traders in their order submission, modification and deletion behavior, as they exploit real-time market data and latest market movements.
Background, aim, and scope Food consumption is an important route of human exposure to endocrine-disrupting chemicals. So far, this has been demonstrated by exposure modeling or analytical identification of single substances in foodstuff (e.g., phthalates) and human body fluids (e.g., urine and blood). Since the research in this field is focused on few chemicals (and thus missing mixture effects), the overall contamination of edibles with xenohormones is largely unknown. The aim of this study was to assess the integrated estrogenic burden of bottled mineral water as model foodstuff and to characterize the potential sources of the estrogenic contamination. Materials, methods, and results In the present study, we analyzed commercially available mineral water in an in vitro system with the human estrogen receptor alpha and detected estrogenic contamination in 60% of all samples with a maximum activity equivalent to 75.2 ng/l of the natural sex hormone 17beta-estradiol. Furthermore, breeding of the molluskan model Potamopyrgus antipodarum in water bottles made of glass and plastic [polyethylene terephthalate (PET)] resulted in an increased reproductive output of snails cultured in PET bottles. This provides first evidence that substances leaching from plastic food packaging materials act as functional estrogens in vivo. Discussion and conclusions Our results demonstrate a widespread contamination of mineral water with xenoestrogens that partly originates from compounds leaching from the plastic packaging material. These substances possess potent estrogenic activity in vivo in a molluskan sentinel. Overall, the results indicate that a broader range of foodstuff may be contaminated with endocrine disruptors when packed in plastics. 
Keywords Endocrine disrupting chemicals - Estradiol equivalents - Human exposure - In vitro effects - In vivo effects - Mineral water - Plastic bottles - Plastic packaging - Polyethylene terephthalate - Potamopyrgus antipodarum - Yeast estrogen screen - Xenoestrogens
The role of microglial cells in the pathogenesis of Alzheimer’s disease (AD) neurodegeneration is unknown. Although several works suggest that chronic neuroinflammation caused by activated microglia contributes to neurofibrillary degeneration, anti-inflammatory drugs do not prevent or reverse neuronal tau pathology. This raises the question whether microglial activation indeed occurs in the human brain at sites of neurofibrillary degeneration. In view of the recent work demonstrating the presence of dystrophic (senescent) microglia in the aged human brain, the purpose of this study was to investigate microglial cells in situ and at high resolution in the immediate vicinity of tau-positive structures, in order to determine conclusively whether degenerating neuronal structures are associated with activated or with dystrophic microglia. We used a newly optimized immunohistochemical method for visualizing microglial cells in human archival brain, together with Braak staging of neurofibrillary pathology, to ascertain the morphology of microglia in the vicinity of tau-positive structures. We now report histopathological findings from 19 humans covering the spectrum from no to severe AD pathology, including patients with Down’s syndrome, showing that degenerating neuronal structures positive for tau (neuropil threads, neurofibrillary tangles, neuritic plaques) are invariably colocalized with severely dystrophic (fragmented) rather than with activated microglial cells. Using Braak staging of Alzheimer neuropathology we demonstrate that microglial dystrophy precedes the spread of tau pathology. Deposits of amyloid-beta protein (A beta) devoid of tau-positive structures were found to be colocalized with non-activated, ramified microglia, suggesting that A beta does not trigger microglial activation.
Our findings also indicate that when microglial activation does occur in the absence of an identifiable acute central nervous system insult, it is likely to be the result of systemic infectious disease. The findings reported here strongly argue against the hypothesis that neuroinflammatory changes contribute to AD dementia. Instead, they offer an alternative hypothesis of AD pathogenesis that takes into consideration: (1) the notion that microglia are neuron-supporting cells and neuroprotective; (2) the fact that development of non-familial, sporadic AD is inextricably linked to aging. They support the idea that progressive, aging-related microglial degeneration and loss of microglial neuroprotection rather than induction of microglial activation contributes to the onset of sporadic Alzheimer’s disease. The results have far-reaching implications in terms of reevaluating current treatment approaches towards AD.
Background The role of the Fcgamma receptor IIa (FcgammaRIIa), a receptor for C-reactive protein (CRP), the classical acute phase protein, in atherosclerosis is not yet clear. We sought to investigate the association of FcgammaRIIa genotype with risk of coronary heart disease (CHD) in two large population-based samples. Methods FcgammaRIIa-R/H131 polymorphisms were determined in a population of 527 patients with a history of myocardial infarction and 527 age- and gender-matched controls drawn from a population-based MONICA-Augsburg survey. In the LURIC population, 2227 patients with angiographically proven CHD, defined as having at least one stenosis ≥50%, were compared with 1032 individuals with stenosis <50%. Results In both populations genotype frequencies of the FcgammaRIIa gene did not show a significant departure from the Hardy-Weinberg equilibrium. The FcgammaRIIa R(-131)→H genotype was not independently associated with lower risk of CHD after multivariable adjustments, neither in the MONICA population (odds ratio (OR) 1.08; 95% confidence interval (CI) 0.81 to 1.44), nor in LURIC (OR 0.96; 95% CI 0.81 to 1.14). Conclusion Our results do not confirm an independent relationship between FcgammaRIIa genotypes and risk of CHD in these populations.
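The Hardy-Weinberg check reported in the results can be sketched with a simple chi-square computation (the function and the genotype counts below are illustrative, not taken from the study):

```python
def hwe_chi_square(n_rr, n_rh, n_hh):
    """Chi-square statistic (1 degree of freedom) for departure from
    Hardy-Weinberg equilibrium, given counts of the R/R131, R/H131 and
    H/H131 genotypes. Values below 3.84 indicate no significant
    departure at the 5% level."""
    n = n_rr + n_rh + n_hh
    p = (2 * n_rr + n_rh) / (2 * n)                # frequency of the R allele
    expected = [p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n]
    observed = [n_rr, n_rh, n_hh]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

For example, counts of (100, 200, 100) match the Hardy-Weinberg proportions exactly (statistic 0), whereas a heterozygote deficit such as (150, 100, 150) yields a large statistic and would signal genotyping problems or population stratification.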
Background Treatment options for metastatic renal cell carcinoma (RCC) are limited due to resistance to chemo- and radiotherapy. The development of small-molecule multikinase inhibitors has now opened novel treatment options. The influence of the receptor tyrosine kinase inhibitor AEE788, applied alone or combined with the mammalian target of rapamycin (mTOR) inhibitor RAD001, on RCC cell adhesion and proliferation in vitro has been evaluated. Methods RCC cell lines Caki-1, KTC-26 or A498 were treated with various concentrations of RAD001 or AEE788, and tumor cell proliferation and tumor cell adhesion to vascular endothelial cells or to immobilized extracellular matrix proteins (laminin, collagen, fibronectin) were evaluated. The anti-tumoral potential of RAD001 combined with AEE788 was also investigated. Both asynchronous and synchronized cell cultures were used to subsequently analyze drug-induced cell cycle manipulation. Analysis of cell cycle regulating proteins was done by western blotting. Results RAD001 or AEE788 reduced adhesion of RCC cell lines to vascular endothelium and diminished RCC cell binding to immobilized laminin or collagen. Both drugs blocked RCC cell growth, impaired cell cycle progression and altered the expression levels of the cell cycle regulating proteins cdk2, cdk4, cyclin D1, cyclin E and p27. The combination of AEE788 and RAD001 resulted in more pronounced RCC growth inhibition, greater rates of G0/G1 cells and lower rates of S-phase cells than either agent alone. Cell cycle proteins were altered much more strongly when both drugs were used in combination than with single-drug application. The synergistic effects were observed in an asynchronous cell culture model, but were more pronounced in synchronized RCC cell cultures. Conclusions Potent anti-tumoral activities of the multikinase inhibitors AEE788 or RAD001 have been demonstrated.
Most importantly, the simultaneous use of both AEE788 and RAD001 offered a distinct combinatorial benefit and thus may provide a therapeutic advantage over either agent employed as a monotherapy for RCC treatment.
Background Many systems in nature are characterized by complex behaviour in which large cascades of events, or avalanches, unpredictably alternate with periods of little activity; snow avalanches are an example. Often the size distribution f(s) of a system's avalanches follows a power law, and the branching parameter sigma, the average number of events triggered by a single preceding event, is unity. A power law for f(s) and sigma=1 are hallmark features of self-organized critical (SOC) systems, and both have been found for neuronal activity in vitro. Therefore, and since SOC systems and neuronal activity both show large variability, long-term stability and memory capabilities, SOC has been proposed to govern neuronal dynamics in vivo. Testing this hypothesis is difficult because neuronal activity is spatially or temporally subsampled, while theories of SOC systems assume full sampling. To close this gap, we investigated how subsampling affects f(s) and sigma by imposing subsampling on three different SOC models. We then compared f(s) and sigma of the subsampled models with those of multielectrode local field potential (LFP) activity recorded in three macaque monkeys performing a short-term memory task. Results Neither the LFP nor the subsampled SOC models showed a power law for f(s). Both f(s) and sigma depended sensitively on the subsampling geometry and the dynamics of the model. Only one of the SOC models, the Abelian Sandpile Model, exhibited f(s) and sigma similar to those calculated from LFP activity. Conclusions Since subsampling can prevent the observation of the characteristic power law and sigma in SOC systems, misclassifications of critical systems as sub- or supercritical are possible. Nevertheless, the system-specific scaling of f(s) and sigma under subsampling conditions may prove useful to select physiologically motivated models of brain function.
Models that better reproduce f(s) and sigma calculated from the physiological recordings may be selected over alternatives.
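The branching parameter sigma can be made concrete with a toy Galton-Watson process (an illustration of the definition only, not one of the three SOC models compared in the study): each active unit triggers a Poisson(sigma)-distributed number of successors, and for sigma < 1 the mean avalanche size is 1/(1 - sigma), diverging as sigma approaches the critical value 1.

```python
import math
import random

def _poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def mean_avalanche_size(sigma, n_avalanches=2000, max_size=10**6, seed=4):
    """Mean avalanche size of a branching process with branching
    parameter sigma; avalanches are capped at max_size for safety."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_avalanches):
        active, size = 1, 0
        while active and size < max_size:
            size += active
            active = sum(_poisson(rng, sigma) for _ in range(active))
        total += size
    return total / n_avalanches
```

Subsampling such a process, i.e. counting events only on an observed subset of units, changes the apparent f(s) and the estimated sigma, which is the distortion the study quantifies.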
Background Evidence-based guidelines potentially improve healthcare. However, their de novo development requires substantial resources - especially for complex conditions - and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development - the systematic guideline review method (SGR) - and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care published in English or German between the years 2000 and 2004 were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, the methodological quality of the selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis and synthesized in a preliminary draft. The most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results Of 16 included guidelines five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 unratable (derived from a single guideline). Of the 25 consistencies, 14 were based on consensus, seven on evidence, and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, differing evidence levels and minor inconsistencies); the majority was congruent.
Incongruences were found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the source guidelines focused on treatment benefits and underestimated the risks. The draft guideline was completed in 8.5 man-months. The main limitation of this study was the lack of a second reviewer. Conclusions The systematic guideline review, including framework development, consistency analysis and validation, is an effective, valid, and resource-saving approach to the development of evidence-based guidelines.
Riboswitches are a novel class of genetic control elements that function through the direct interaction of small metabolite molecules with structured RNA elements. The ligand is bound with high specificity and affinity to its RNA target and induces conformational changes of the RNA's secondary and tertiary structure upon binding. To elucidate the molecular basis of the remarkable ligand selectivity and affinity of one of these riboswitches, extensive all-atom molecular dynamics simulations in explicit solvent (≈1 µs total simulation length) of the aptamer domain of the guanine sensing riboswitch are performed. The conformational dynamics is studied when the system is bound to its cognate ligand guanine as well as bound to the non-cognate ligand adenine and in its free form. The simulations indicate that residue U51 in the aptamer domain functions as a general docking platform for purine bases, whereas the interactions between C74 and the ligand are crucial for ligand selectivity. These findings either suggest a two-step ligand recognition process, including a general purine binding step and a subsequent selection of the cognate ligand, or hint at different initial interactions of cognate and noncognate ligands with residues of the ligand binding pocket. To explore possible pathways of complex dissociation, various nonequilibrium simulations are performed which account for the first steps of ligand unbinding. The results delineate the minimal set of conformational changes needed for ligand release, suggest two possible pathways for the dissociation reaction, and underline the importance of long-range tertiary contacts for locking the ligand in the complex.
Oligonucleotides suppress PKB/Akt and act as superinductors of apoptosis in human keratinocytes
(2009)
DNA oligonucleotides (ODN) applied to an organism are known to modulate the innate and adaptive immune system. Previous studies showed that a CpG-containing ODN (CpG-1-PTO) and, interestingly, also a non-CpG-containing ODN (nCpG-5-PTO) suppress inflammatory markers in skin. The present study investigated whether these molecules also influence cell apoptosis. Here we show that CpG-1-PTO, nCpG-5-PTO, and also natural DNA suppress the phosphorylation of PKB/Akt in a cell-type-specific manner. Interestingly, only epithelial cells of the skin (normal human keratinocytes, HaCaT and A-431) show a suppression of PKB/Akt. This suppressive effect depends on ODN length, sequence and backbone. Moreover, it was found that TGFa-induced levels of PKB/Akt and EGFR were suppressed by the ODN tested. We hypothesize that this suppression might facilitate programmed cell death. Testing this hypothesis, we found an increase of apoptosis markers (caspase 3/7, 8, 9, cytosolic cytochrome c, histone-associated DNA fragments, apoptotic bodies) when cells were treated with ODN in combination with low doses of staurosporine, a well-known pro-apoptotic stimulus. In summary, the present data demonstrate that DNA is a modulator of apoptosis which specifically targets skin epithelial cells.
Global warming is expected to be associated with diverse changes in freshwater habitats in north-western Europe. Increasing evaporation, lower oxygen concentration due to increased water temperature and changes in precipitation patterns are likely to affect the survival ratio and reproduction rate of freshwater gastropods (Pulmonata, Basommatophora). This work is a comprehensive analysis of the climatic factors influencing their ranges both in the past and in the near future. A macroecological approach showed that for a great proportion of genera the ranges were projected to contract by 2080, even if unlimited dispersal was assumed. The forecasted warming predicted the emergence of new suitable areas in the cooler northern ranges, but also drastically reduced the available habitat in the southern part of the studied region. In order to better understand range dynamics in the past and the postglacial colonisation patterns, an approach combining ecological niche modelling and phylogeography was used for two model species, Radix balthica and Ancylus fluviatilis. Phylogeographic model selection on a COI mtDNA dataset confirmed that R. balthica most likely spread from two disjunct central European refuges after the last glacial maximum. The phylogeographic analysis of A. fluviatilis, using 16S and COI mtDNA datasets, also inferred central European refugia. The absence of niche conservatism (adaptive potential) inferred for A. fluviatilis puts a cautionary note on the use of climate envelope models to predict the future range of this species. However, the other model species exhibited strong niche conservatism, which allows confidence in such predictions. A profound faunal shift will take place in Central Europe within the next century, either permitting the establishment of species currently living south of the studied region or the proliferation of organisms relying on the same food resources.
This study points out the need for further investigations of the dispersal modes of freshwater snails, since the future range size of the species depends on their ability to establish in newly available habitats. Likewise, the mixed mating system of these organisms gives them the possibility to found a new population from a single individual. This will probably affect colonisation success and needs further investigation.
Lentiviral vectors mediate gene transfer into dividing and most non-dividing cells, stably integrating the transgene into the host cell genome. For this reason, lentiviral vectors are a promising tool for gene therapy. However, the safety and efficiency of lentivirally mediated gene transfer still need to be optimised. Ideally, cell entry should be restricted to the cell population relevant for a particular therapeutic application. Furthermore, lentiviral vectors able to transduce quiescent lymphocytes are desirable. Although many approaches were followed to engineer retroviral envelope proteins, an effective and universally applicable system for retargeting of lentiviral cell entry is still not available. Just before the experimental work of this thesis was started, retargeting of measles virus (MV) cell entry was achieved. This virus has two types of envelope glycoproteins, the hemagglutinin (H) protein, responsible for receptor recognition, and the fusion (F) protein, mediating membrane fusion. For retargeting, the H protein was mutated in its interaction sites for the native MV receptors and a ligand or a single-chain antibody (scAb) was fused to its ectodomain. It was hypothesised that the retargeting system of MV could be transferred to lentiviral vectors by pseudotyping human immunodeficiency virus-1 (HIV-1) derived vector particles with the MV glycoproteins. As the unmodified MV glycoproteins did not pseudotype HIV vectors, two F and 15 H protein variants carrying stepwise truncations or amino acid (aa) exchanges in their cytoplasmic tails were screened for their ability to form MV-HIV pseudotypes. The combinations Hcd18/Fcd30, Hcd19/Fcd30 and Hcd24+4A/Fcd30 led to the most efficient pseudotype formation, with titers above 10^6 transducing units/ml using concentrated particles.
The F cytoplasmic tail was truncated by 30 aa and the H cytoplasmic tail by 18, 19 or 24 residues, with four alanines added after the start methionine in the latter case. Western blot analysis indicated that particle incorporation of the MV glycoproteins was enhanced upon truncation of their cytoplasmic tails. With the MV-HIV vectors, high titers were obtained on different cell lines expressing one or both MV receptors, whereas MV receptor-negative cells remained untransduced. Titers were enhanced using an optimal H to F plasmid ratio (1:7) during vector particle production. Based on the described pseudotyping with the MV glycoprotein variants, HIV vectors retargeted to the epidermal growth factor receptor (EGFR) or the B cell surface marker CD20 were generated. For the production of the retargeted vectors MVaEGFR-HIV and MVaCD20-HIV, Fcd30 was used together with a native-receptor-blind Hcd18 protein displaying at its ectodomain either the ligand EGF or a scAb directed against CD20. With these vectors, gene transfer into target receptor-positive cells was several orders of magnitude more efficient than into control cells. The almost complete absence of background transduction of non-target cells was demonstrated, for example, in mixed cell populations, where the CD20-targeting vector selectively eliminated CD20-positive cells upon suicide gene transfer. Remarkably, transduction of activated primary human CD20-positive B cells was much more efficient with the MVaCD20-HIV vector than with the standard pseudotype vector VSV-G-HIV. Even more surprisingly, MVaCD20-HIV vectors were able to transduce quiescent primary human B cells, which until then had been resistant towards lentiviral gene transfer. The most critical step during the production of MV-HIV pseudotypes was the identification of H cytoplasmic tail mutants that allowed pseudotyping while retaining the fusion helper function.
In contrast to previously inefficient targeting strategies, the success of this novel targeting system must be based on the separation of the receptor recognition and fusion functions onto two different proteins. Furthermore, with the CD20-targeting vector, transduction of quiescent B cells was demonstrated for the first time. Our own data and published data suggest that CD20 binding and hyper-cross-linking by the vector particles result in calcium influx and thus activation of quiescent B cells. Alternatively, this feature may be based on a residual binding activity of the MV glycoproteins for the native MV receptors that is insufficient for entry but induces cytoskeleton rearrangements dissolving the post-entry block of HIV vectors. Hence, in this thesis efficient retargeting of lentiviral vectors and transduction of quiescent cells were combined. This novel targeting strategy should be easily adaptable to many other target molecules by extending the modified MV H protein with appropriate specific domains or scAbs. It should now be possible to tailor lentiviral vectors for highly selective gene transfer into any desired target cell population with an unprecedented degree of efficiency.
Neutron stars are very dense objects. One teaspoon of their material would have a mass of five billion tons. Their gravitational force is so strong that an object falling from just one meter high would hit the surface of the neutron star at two thousand kilometers per second. In such dense bodies, particles different from the ones present in atomic nuclei, the nucleons, can exist. These particles can be hyperons, which carry non-zero strangeness, or heavier resonances. There can also be different states of matter inside neutron stars, such as meson condensates and, if the density is high enough to deconfine the nucleons, quark matter. As new degrees of freedom appear in the system, different aspects of matter have to be taken into account, the most important of them being the restoration of chiral symmetry. This symmetry is spontaneously broken, a fact related to the presence of a condensate of scalar quark-antiquark pairs, which for this reason is called the chiral condensate. This condensate is present at low densities and even in vacuum. It is important to remember at this point that the modern concept of the vacuum is far from emptiness: it is full of virtual particles that are constantly created and annihilated, their existence being allowed by the uncertainty principle. At very high temperature or density, when the composite particles are dissolved into their constituents, the chiral condensate vanishes and chiral symmetry is restored. To explain how and when chiral symmetry is restored in neutron stars we use the non-linear sigma model. This is an effective relativistic quantum model that was developed to describe systems of hadrons interacting via meson exchange. The model is constructed from symmetry relations, which make it chirally invariant.
The first consequence of this invariance is that there are no bare mass terms in the Lagrangian density, so all, or most, of the particle masses come from interactions with the medium. There are still other interesting features of neutron stars that cannot be found anywhere else in nature. One of them is the high isospin asymmetry. In a normal nucleus, the numbers of protons and neutrons are roughly equal; in a neutron star the number of neutrons is much higher than that of protons. The resulting extra energy (the Fermi energy) increases the energy of the system, allowing the star to support more mass against gravitational collapse. As a consequence, in early stages of the neutron star's evolution, when there are still many trapped neutrinos, the proton fraction is higher than in later stages, and consequently the maximum mass that the star can support against gravity is smaller. This, among many other features, shows how the microscopic phenomena of the star are reflected in its macroscopic properties. Another important property of neutron stars is charge neutrality. It is a required assumption for stability in neutron stars, but there are others. One example is chemical equilibrium, which means that the number of particles of each kind is not conserved; rather, they are created and annihilated through specific reactions that proceed at the same rate in both directions. Although the microscopic physics of neutron stars can be calculated in the space-time of special relativity, the Minkowski space, this is not true for the global properties of the star. In this case general relativity has to be used. The solution of Einstein's equations, simplified to static, spherical and isotropic stars, corresponds to the configurations in which the star is in hydrostatic equilibrium. This means that the internal pressure, coming mainly from the Fermi energy of the neutrons, balances gravity and prevents collapse.
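The hydrostatic equilibrium configurations referred to here follow from the Tolman-Oppenheimer-Volkoff (TOV) equations, the standard reduction of Einstein's equations for a static, spherically symmetric, isotropic star (written in units G = c = 1, with pressure p, energy density ε and enclosed gravitational mass m):

```latex
\frac{dp}{dr} = -\,\frac{(\varepsilon + p)\,\bigl(m + 4\pi r^{3} p\bigr)}{r\,(r - 2m)},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\varepsilon .
```

Integrating these equations outward from a chosen central density, with the equation of state supplying p(ε), yields one equilibrium star per central density; the stellar radius is where p vanishes.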
When rotation is included the star becomes more stable and, consequently, can be more massive. The motion also makes it non-spherical, which requires the metric of the star to be a function of the polar coordinate as well. Another important feature that has to be taken into account is the dragging of the local inertial frame. It generates centrifugal forces that do not originate from interactions with other bodies, but from the non-rotation of the frame of reference within which observations are made. These modifications are introduced through Hartle's approximation, which solves the problem by applying perturbation theory. In the mean-field approximation, the couplings as well as the parameters of the non-linear sigma model are calibrated to reproduce massive neutron stars. The introduction of new degrees of freedom decreases the maximum mass allowed for the neutron star, as they soften the equation of state. In practice, the only baryons present in the star besides the nucleons are the Lambda and the Sigma- when the baryon octet is included, and the Lambda and the Delta-,0,+,++ when the baryon decuplet is included. Leptons are included to ensure charge neutrality. We chose to perform our calculations including the baryon octet but not the decuplet, in order to avoid uncertainties in the couplings. The couplings of the hyperons were fitted to the depths of their potentials in nuclei. In this case the restoration of chiral symmetry can be observed through the behavior of the related order parameter. The symmetry begins to be restored inside neutron stars, and the transition is a smooth crossover. Different stages of the neutron star cooling are reproduced taking into account trapped neutrinos, finite temperature and entropy. The finite-temperature calculations include the heat bath of hadronic quasiparticles within the grand canonical potential of the system.
Different schemes are considered, with constant temperature, metric-dependent temperature and constant entropy. The neutrino chemical potential is introduced by fixing the lepton number in the system, which also controls the amount of electrons and protons (through charge neutrality). The balance between these two features is delicate and influenced mainly by baryon number conservation. Isolated stars have a fixed number of baryons, which creates a link between the different stages of the cooling. The maximum masses allowed in each stage of the cooling process are determined: the stage with high entropy and trapped neutrinos, the deleptonized stage with high entropy, and the cold stage in beta equilibrium. The cooling process is also influenced by constraints related to the rotation of the star. When rotation is included the star becomes more stable and, consequently, can be more massive. The motion also deforms it, requiring the metric of the star to include modifications, which are introduced through the use of perturbation theory. The analysis of the first stages of the neutron star, when it is called a proto-neutron star, yields certain constraints on the possible rotation frequencies in the colder stages. Instability windows are calculated in which the star can be stable during certain stages but collapses into a black hole during the cooling process. In the last part of the work the hadronic SU(3) model is extended to include quark degrees of freedom. A new effective potential for the order parameter of deconfinement, the Polyakov loop, connects the low chemical potential and high temperature region of the QCD phase diagram with the high chemical potential and low temperature region. This is done by introducing a chemical potential dependence into the already temperature-dependent potential. Analyzing the effect of both order parameters, the chiral condensate and the Polyakov loop, we can draw a phase diagram for symmetric matter as well as for star matter.
The diagram contains a crossover region as well as a first-order phase transition line. The new couplings and parameters of the model are chosen mainly to fit lattice QCD results, including the position of the critical point. Finally, this matter, containing different degrees of freedom depending on the phase of the diagram we are in, is used to calculate the properties of hybrid stars.
Shape complementarity is a compulsory condition for molecular recognition. In our 3D ligand-based virtual screening approach called SQUIRREL, we combine shape-based rigid body alignment with fuzzy pharmacophore scoring. Retrospective validation studies demonstrate the superiority of methods which combine both shape and pharmacophore information on the family of peroxisome proliferator-activated receptors (PPARs). We demonstrate the real-life applicability of SQUIRREL by a prospective virtual screening study, where a potent PPARalpha agonist with an EC50 of 44 nM and 100-fold selectivity against PPARgamma has been identified...
Background The evidence to date for a dose-response relationship between physical workload and the development of lumbar disc diseases is limited. We therefore investigated the possible etiologic relevance of cumulative occupational lumbar load to lumbar disc diseases in a multi-center case-control study. Methods In four study regions in Germany (Frankfurt/Main, Freiburg, Halle/Saale, Regensburg), patients seeking medical care for pain associated with clinically and radiologically verified lumbar disc herniation (286 males, 278 females) or symptomatic lumbar disc narrowing (145 males, 206 females) were prospectively recruited. Population control subjects (453 males and 448 females) were drawn from the regional population registers. Cases and control subjects were between 25 and 70 years of age. In a structured personal interview, a complete occupational history was elicited to identify subjects with certain minimum workloads. On the basis of job task-specific supplementary surveys performed by technical experts, the situational lumbar load, represented by the compressive force at the lumbosacral disc, was determined via biomechanical model calculations for any working situation involving object handling and load-intensive postures during the total working life. For this analysis, all manual handling of objects of about 5 kilograms or more and postures with trunk inclination of 20 degrees or more were included in the calculation of cumulative lumbar load. Confounder selection was based on biologic plausibility and on the change-in-estimate criterion. Odds ratios (OR) and 95% confidence intervals (CI) were calculated separately for men and women using unconditional logistic regression analysis, adjusted for age, region, and unemployment as a major life event (in males) or psychosocial strain at work (in females), respectively. To further elucidate the contribution of past physical workload to the development of lumbar disc diseases, we performed lag-time analyses.
Results We found a positive dose-response relationship between cumulative occupational lumbar load and lumbar disc herniation as well as lumbar disc narrowing among men and women. Even past lumbar load seems to contribute to the risk of lumbar disc disease. Conclusions According to our study, cumulative physical workload is related to lumbar disc diseases among men and women.
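The odds ratios reported above come from adjusted logistic regression, but the underlying arithmetic can be illustrated on an unadjusted 2x2 table. A minimal sketch, using hypothetical counts (not the study's data) and the standard Woolf logit method for the confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    Woolf method: SE of log(OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative, hypothetical counts only:
or_, lo, hi = odds_ratio_ci(120, 166, 100, 353)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

In the study itself the ORs are additionally adjusted for age, region and the listed confounders, which a crude 2x2 calculation cannot reproduce.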
Background Since June 2002, revised regulations in Germany have required "Emergency Medical Care" as an interdisciplinary subject, and state that emergency treatment should be of increasing importance within the curriculum. A survey of the current status of undergraduate medical education in emergency medical care establishes the basis for further committee work. Methods Using a standardized questionnaire, all medical faculties in Germany were asked to answer questions concerning the structure of their curriculum, representation of disciplines, instructors' qualifications, teaching and assessment methods, as well as evaluation procedures. Results Data from 35 of the 38 medical schools in Germany were analysed. In 32 of 35 medical faculties, the local Department of Anaesthesiology is responsible for the teaching of emergency medical care; in two faculties, emergency medicine is taught mainly by the Department of Surgery and in another by Internal Medicine. Lectures, seminars and practical training units are scheduled in varying composition at 97% of the locations. Simulation technology is integrated at 60% (n=21); problem-based learning at 29% (n=10), e-learning at 3% (n=1), and internship in ambulance service is mandatory at 11% (n=4). In terms of assessment methods, multiple-choice exams (15 to 70 questions) are favoured (89%, n=31), partially supplemented by open questions (31%, n=11). Some faculties also perform single practical tests (43%, n=15), objective structured clinical examination (OSCE; 29%, n=10) or oral examinations (17%, n=6). Conclusion Emergency Medical Care in undergraduate medical education in Germany has a practical orientation, but is very inconsistently structured. The innovative options of simulation technology or state-of-the-art assessment methods are not consistently utilized. 
Therefore, an exchange of experiences and concepts between faculties and disciplines should be promoted to guarantee a standard level of education in emergency medical care.
The LIBOR market model (LMM) has, since its development in the publications of Brace, Gatarek and Musiela (1997) and, independently, Miltersen, Sandmann and Sondermann (1997), become the most widely accepted instrument for modelling the term structure of interest rates and for the associated pricing of the relevant financial derivatives. LIBOR stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London. Three- or six-month maturities are customary in connection with the LMM. Research aimed at improving this model has grown in recent years. By reducing the error in fitting the daily observed prices of interest rate options such as caps and swaptions, one subsequently also obtains more accurate valuations for other, more exotic derivatives. The underlying and central idea of the LMM is to treat the forward rates directly as the primary (vector-valued) process of several LIBOR rates and to model them simultaneously, instead of merely deriving them from a superordinate, infinite-dimensional forward rate process, as in the earlier Heath-Jarrow-Morton model. The most convincing argument for this discretisation is that the LIBOR rates are directly observable in the market and their volatilities can be related in a natural way to products that are already liquidly traded, namely those very caps and swaptions. Nevertheless, the model contains a serious deficiency in that it does not reproduce any curvature of the volatility surface across options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the distribution manifest themselves clearly in missing heavy tails. Smile and skew effects are observable.
In the classical LIBOR market model, only an affine structure is generated along the strike dimension, which can at best serve as an approximation to the desired surface. The observed distortions naturally lead to an inaccurate representation of reality and to faulty reproduction of prices in regions that lie somewhat away from the at-the-money range. Such unwanted dissonances in profit and loss figures led, for example, in 1998 to severe losses in the interest rate derivatives portfolio of what is today the Royal Bank of Scotland. ...
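The modelling idea and its shortcoming can be stated compactly. In the standard lognormal LMM, each forward LIBOR rate L_i(t) for the accrual period [T_i, T_{i+1}] is a driftless lognormal martingale under its associated forward measure:

```latex
dL_i(t) = \sigma_i(t)\, L_i(t)\, dW_i(t)
\quad \text{under the } T_{i+1}\text{-forward measure}.
```

Black's formula for caplets follows directly, with an implied volatility that does not depend on the strike; this is precisely why the classical model cannot reproduce the smile and skew of the observed volatility surface.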
The NADH:ubiquinone oxidoreductase (complex I) is a large membrane-bound protein complex coupling the redox reaction of NADH oxidation and quinone reduction to vectorial proton translocation across bioenergetic membranes. The mechanism of proton pumping is still unknown; it seems, however, that the reduction of quinone induces conformational changes which drive proton uptake on one side and release on the other side of the membrane. In this study the proposed quinone and inhibitor binding pocket located at the interface of the 49-kDa and PSST subunits was explored by a large number of point mutations introduced into complex I from the strictly aerobic yeast Yarrowia lipolytica. Point mutations were systematically chosen based on the crystal structure of the hydrophilic domain of complex I from Thermus thermophilus. In total, the properties of 94 mutants at 39 positions, which completely cover the lining of the large putative quinone and inhibitor binding cavity, are described and discussed here. A structure/function analysis allowed the identification of functional domains within the large putative quinone binding cavity. A possible quinone access path, ranging from the N-terminal beta-sheet of the 49-kDa subunit into the pocket to tyrosine 144, could be defined, since all exchanges introduced here caused an almost complete loss of complex I activity. A region located deeper in the proposed quinone binding pocket is apparently not important for complex I activity. In contrast, all exchanges of tyrosine 144, even the very conservative mutant Y144F, essentially abolished the dNADH:DBQ oxidoreductase activity of complex I. However, with higher concentrations of Q1 or Q2 the dNADH:Q oxidoreductase activity was largely restored in the mutants with the more conservative exchanges. Proton pumping experiments showed that this activity was also coupled to proton translocation, indicating that these quinones were reduced at the physiological site.
However, the apparent Km values for Q1 or Q2 were drastically increased, clearly demonstrating that tyrosine 144 is central for quinone binding and reduction. These results further prove that the enzymatically relevant quinone binding site of complex I is located at the interface of the 49-kDa and PSST subunits. The quinone binding pocket is thought to comprise the binding sites for a plethora of specific complex I inhibitors, which are usually grouped into three classes. The large array of mutants targeting the quinone binding cavity was examined with a representative of each inhibitor class. Many resistance-conferring mutants were identified which, depending on the inhibitor tested, clustered in well-defined and partially overlapping regions of the large putative quinone and inhibitor binding cavity. Mutants with effects on type A (DQA) and type B (rotenone) inhibitors were found in a subdomain corresponding to the former [NiFe] site in homologous hydrogenases, whereby the type A inhibitor DQA seems to bind deeper in this domain. Mutants with effects on the type C inhibitor (C12E8) were found in a narrow crevice. Exchanging more exposed residues at the border of these well-defined domains affected all three inhibitor types. Therefore, the results as a whole provide further support for the concept that the different inhibitor classes bind to different but partially overlapping binding sites within a single large quinone binding pocket. In addition, they also indicate the approximate location of the binding sites within the structure of the large quinone and inhibitor binding cavity at the interface of the 49-kDa and PSST subunits. It has been proposed earlier that the highly conserved HRGXE motif in the 49-kDa subunit forms part of the quinone binding site of complex I. Mutagenesis of the HRGXE motif revealed that these residues are rather critical for complex I assembly and seem to have an important structural role.
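The Km interpretation above follows standard Michaelis-Menten kinetics; for quinone concentration [Q],

```latex
v = \frac{V_{\max}\,[\mathrm{Q}]}{K_m + [\mathrm{Q}]},
```

so a drastically increased apparent Km lowers the rate at any given substrate concentration, and near-maximal turnover is only approached at correspondingly higher quinone concentrations, consistent with the restoration of activity observed at high Q1 or Q2.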
The question of why iron-sulfur cluster N1a is not detectable by EPR in many model organisms has not yet been solved. Introducing polar and positively charged amino acid residues close to this cluster, in order to increase its midpoint potential, did not result in the appearance of the cluster N1a EPR signal in mitochondrial membranes from the mutants. Clearly, further research will be necessary to gain insight into the function of this iron-sulfur cluster in complex I. In an additional project, a new and simple in vivo screen for complex I deficiency in Y. lipolytica was developed and optimized. This assay probes for defects in complex I assembly and stability, oxidoreductase activity, and also proton pumping activity of complex I. Most importantly, this assay is applicable to all Y. lipolytica strains and could be used to identify loss-of-function mutants, gain-of-function mutants (i.e. resistance towards complex I inhibitors) and revertants due to mutations in both nuclear and mitochondrially encoded genes of complex I subunits.
The light-harvesting complex of photosystem II (LHC-II) is the major antenna complex in plant photosynthesis. It accounts for roughly 30% of the total protein in plant chloroplasts, which makes it arguably the most abundant membrane protein on Earth, and it binds about half of plant chlorophyll (Chl). The complex assembles as a trimer in the thylakoid membrane and binds a total of 54 pigment molecules: 24 Chl a, 18 Chl b, 6 lutein (Lut), 3 neoxanthin (Neo) and 3 violaxanthin (Vio). LHC-II has five key roles in plant photosynthesis. It: (1) harvests sunlight and transmits excitation energy to the reaction centres of photosystems II and I, (2) regulates the amount of excitation energy reaching each of the two photosystems, (3) has a structural role in the architecture of the photosynthetic supercomplexes, (4) contributes to the tight appression of thylakoid membranes in chloroplast grana, and (5) protects the photosynthetic apparatus from photodamage by non-photochemical quenching (NPQ). A major fraction of NPQ is accounted for by its energy-dependent component, qE. Despite being critical for plant survival and having been studied for decades, the exact details of how excess absorbed light energy is dissipated under qE conditions remain enigmatic. Today it is accepted that qE is regulated by the magnitude of the pH gradient (ΔpH) across the thylakoid membrane. It is also well documented that the drop in pH in the thylakoid lumen during high-light conditions activates the enzyme violaxanthin de-epoxidase (VDE), which converts the carotenoid Vio into zeaxanthin (Zea) as part of the xanthophyll cycle. Additionally, studies with Arabidopsis mutants revealed that the photosystem II subunit PsbS is necessary for qE.
How these physiological responses switch LHC-II from the active, energy-transmitting state to the quenched, energy-dissipating state, in which the solar energy is not transmitted to the photosystems but instead dissipated as heat, remains unclear and is the subject of this thesis. From the results obtained during this doctoral work, five main conclusions can be drawn concerning the mechanism of qE: 1. Substitution of Vio by Zea in LHC-II is not sufficient for efficient dissipation of excess excitation energy. 2. Aggregation quenching of LHC-II requires neither Vio, Neo, nor a specific Chl pair. 3. With one exception, the pigment structure of LHC-II is rigid. 4. The two X-ray structures of LHC-II show the same energy-transmitting state of the complex. 5. Crystalline LHC-II resembles the complex in the thylakoid membrane. Models of the aggregation quenching mechanism in vitro and of the qE mechanism in vivo are presented as a corollary of this doctoral work. LHC-II aggregation quenching in vitro is attributed to the formation of energy sinks on the periphery of LHC-II through random interaction with other trimers, free pigments or impurities. A similar but unrelated process is proposed to occur in the thylakoid membrane, by which excess excitation energy is dissipated upon specific interaction between LHC-II and a PsbS monomer carrying Zea. At the end of this thesis, an innovative experimental model for the analysis of all key aspects of qE is proposed in order to finally solve the qE enigma, one of the last unresolved problems in photosynthesis research.
Samples of freshly fallen snow were collected at the high alpine research station Jungfraujoch (Switzerland) in February and March 2006 and 2007, during the Cloud and Aerosol Characterization Experiments (CLACE) 5 and 6. In this study a new technique was developed and demonstrated for the measurement of organic acids in fresh snow. The melted snow samples were subjected to solid-phase extraction and the resulting solutions were analysed for organic acids by HPLC-MS-TOF using negative electrospray ionization. A series of linear dicarboxylic acids from C5 to C13, as well as phthalic acid, were identified and quantified. In several samples the biogenic pinonic acid was also observed. In fresh snow the median concentration of the most abundant acid, adipic acid, was 0.69 µg L⁻¹ in 2006 and 0.70 µg L⁻¹ in 2007. Glutaric acid was the second most abundant dicarboxylic acid, with median values of 0.46 µg L⁻¹ in 2006 and 0.61 µg L⁻¹ in 2007, while the aromatic phthalic acid showed a median concentration of 0.34 µg L⁻¹ in 2006 and 0.45 µg L⁻¹ in 2007. The concentrations in the samples from various snowfall events varied significantly and were found to depend on the back trajectory of the air mass arriving at Jungfraujoch. Air masses of marine origin showed the lowest concentrations of acids, whereas the highest concentrations were measured when the air mass was strongly influenced by boundary layer air.
Current atmospheric models do not include secondary organic aerosol (SOA) production from gas-phase reactions of polycyclic aromatic hydrocarbons (PAHs). Recent studies have shown that primary semivolatile emissions, previously assumed to be inert, undergo oxidation in the gas phase, leading to SOA formation. This opens the possibility that low-volatility gas-phase precursors are a potentially large source of SOA. In this work, SOA formation from gas-phase photooxidation of naphthalene, 1-methylnaphthalene (1-MN), 2-methylnaphthalene (2-MN), and 1,2-dimethylnaphthalene (1,2-DMN) is studied in the Caltech dual 28 m³ chambers. Under high-NOx conditions and aerosol mass loadings between 10 and 40 µg m⁻³, the SOA yields (mass of SOA per mass of hydrocarbon reacted) ranged from 0.19 to 0.30 for naphthalene, 0.19 to 0.39 for 1-MN, 0.26 to 0.45 for 2-MN, and were constant at 0.31 for 1,2-DMN. Under low-NOx conditions, the SOA yields were measured to be 0.73, 0.68, and 0.58 for naphthalene, 1-MN, and 2-MN, respectively. The SOA was observed to be semivolatile under high-NOx conditions and essentially nonvolatile under low-NOx conditions, owing to the higher fraction of ring-retaining products formed under low-NOx conditions. When applying these measured yields to estimate SOA formation from primary emissions of diesel engines and wood burning, PAHs are estimated to yield 3–5 times more SOA than light aromatic compounds. PAHs can also account for up to 54% of the total SOA from the oxidation of diesel emissions, representing a potentially large source of urban SOA.
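The yield definition quoted above, mass of SOA formed per mass of hydrocarbon reacted, is a simple ratio. A minimal sketch with hypothetical chamber numbers, not measured values from the study:

```python
def soa_yield(delta_soa_ug_m3, delta_hc_ug_m3):
    """SOA yield Y = (aerosol mass formed) / (hydrocarbon mass reacted),
    both in the same units (here ug/m3), so Y is dimensionless."""
    return delta_soa_ug_m3 / delta_hc_ug_m3

# Hypothetical example: 30 ug/m3 of SOA formed after
# 100 ug/m3 of precursor hydrocarbon has reacted.
print(soa_yield(30.0, 100.0))
```

In chamber practice both quantities are measured against wall-loss-corrected baselines, so the reported yields carry corrections this bare ratio omits.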
It has become popular for journalists who are trying to sell newspapers, and politicians who are trying to solicit votes, to refer to this financial crisis as the worst since the Great Depression or WWII. I don’t know whether it is the worst or not, so I will leave that question to the historians and economists of the future once the storm has passed. But it is indeed a “storm” as described by Vince Cable, Member of Parliament, in his UK bestselling book entitled “The Storm – The World Economic Crisis and What it Means”. He describes this “storm” as a very destructive one, displacing jobs, businesses, banks and whole economies from Iceland to the United Kingdom to the United States. I propose to offer a short chronology and summary of the causes of the current economic crisis. Then I will review several of the regulatory responses to the crisis, focusing on the Turner Report, the de Larosière Group and certain US Treasury statements. I will offer my critiques of these proposals and then make some predictions of what the financial services industry may look like in the future.
In this thesis the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy ion reactions has been developed. After a short introduction that motivates the study of heavy ion reactions as the tool to gain insight into the QCD phase diagram, the most important theoretical approaches to describing the system are reviewed. To model the dynamical evolution of the collective system under the assumption of local thermal equilibrium, ideal hydrodynamics is a good tool. Nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured. For the microscopic description of the hadronic as well as the partonic stage of the evolution, transport approaches have been successfully applied, since they generate the full phase-space dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are introduced. Furthermore, predictions for the charged particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies, and the new approach leads to reasonable results over the whole energy range. Studies of phase diagram trajectories using hydrodynamics are performed as a first step toward the development of the hybrid approach. The hybrid approach that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision.
The initial energy and baryon number density distributions are not smooth and not symmetric in any direction, and the initial velocity profiles are non-trivial, since they are generated by the non-equilibrium transport approach. The full (3+1)-dimensional ideal relativistic one-fluid dynamics evolution is solved using the SHASTA algorithm. For the present work, three different equations of state have been used, namely a hadron gas equation of state without a QGP phase transition, a chiral EoS, and a bag model EoS including a strong first-order phase transition. For the freeze-out transition from hydrodynamics to the cascade calculation, two different set-ups are employed: either a freeze-out that is isochronous in the computational frame, or a gradual freeze-out that mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula, and UrQMD takes care of the final decoupling of the particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually. The hybrid model calculation is able to reproduce the experimentally measured integrated as well as transverse momentum dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation functions are calculated for pions, protons and kaons in the energy range $E_{\rm lab}=2-160A~$GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean transverse mass excitation function as the equation of state.
The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced if a first order phase transition with a large latent heat is applied or the EoS is effectively softened due to non-equilibrium effects in the hadronic transport calculation. The HBT correlations of the negatively charged pion source created in central Pb+Pb collisions at SPS energies are investigated with the hybrid model. It has been found that the latent heat visibly influences the emission of particles and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlations, since a large number of collisions and decays still takes place during this period.
Background Heme oxygenase-1 is an inducible cytoprotective enzyme which handles oxidative stress by generating anti-oxidant bilirubin and vasodilating carbon monoxide. A (GT)n dinucleotide repeat and a -413A>T single nucleotide polymorphism in the promoter region of HMOX1 have both been reported to influence the occurrence of coronary artery disease (CAD) and myocardial infarction (MI). We sought to validate these observations in persons scheduled for coronary angiography. Methods We included 3219 subjects in the current analysis, 2526 with CAD, including a subgroup with CAD and MI (n = 1339), and 693 controls. Coronary status was determined by coronary angiography. Risk factors and biochemical parameters (bilirubin, iron, LDL-C, HDL-C, and triglycerides) were determined by standard procedures. The dinucleotide repeat was analysed by PCR and subsequent sizing by capillary electrophoresis, the -413A>T polymorphism by PCR and RFLP. Results In the LURIC study the allele frequencies for the -413A>T polymorphism are A = 0.589 and T = 0.411. The (GT)n repeats ranged from 14 to 39 repeats, with 22 (19.9%) and 29 (47.1%) as the two most common alleles. We found no association of the genotypes or allele frequencies with any of the biochemical parameters, nor with CAD or previous MI. Conclusion Although an association of these polymorphisms with the occurrence of CAD and MI has been published before, our results strongly argue against a relevant role of the (GT)n repeat or the -413A>T SNP in the HMOX1 promoter in CAD or MI.
We calculate leading-order dilepton yields from a quark-gluon plasma which has a time-dependent anisotropy in momentum space. Such anisotropies can arise during the earliest stages of quark-gluon plasma evolution due to the rapid longitudinal expansion of the created matter. A phenomenological model for the proper time dependence of the parton hard momentum scale, p_hard, and the plasma anisotropy parameter, xi, is proposed. The model describes the transition of the plasma from a 0+1 dimensional collisionally-broadened expansion at early times to a 0+1 dimensional ideal hydrodynamic expansion at late times. We find that high-energy dilepton production is enhanced by pre-equilibrium emission up to 50% at LHC energies, if one assumes an isotropization/thermalization time of 2 fm/c. Given sufficiently precise experimental data this enhancement could be used to determine the plasma isotropization time experimentally.
Introduction Impaired renal function and/or pre-existing atherosclerosis in the deceased donor increase the risk of delayed graft function and impaired long-term renal function in kidney transplant recipients. Case presentation We report delayed graft function occurring simultaneously in two kidney transplant recipients, aged 57 and 39 years, who received renal allografts from the same deceased donor. The 62-year-old donor died of cardiac arrest during an asthmatic state. Renal-allograft biopsies performed in both kidney recipients because of delayed graft function revealed cholesterol-crystal embolism. Empiric statin therapy in addition to low-dose acetylsalicylic acid was initiated. After 10 and 6 hemodialysis sessions every 48 hours, respectively, both renal allografts started to function. Glomerular filtration rates at discharge were 26 ml/min/1.73 m2 and 23.9 ml/min/1.73 m2, and remained stable in follow-up examinations. Possible donor- and surgical procedure-dependent causes of cholesterol-crystal embolism are discussed. Conclusion Cholesterol-crystal embolism should be considered as a cause of delayed graft function and long-term impaired renal allograft function, especially in the older donor population.
Methods for dichoptic stimulus presentation in functional magnetic resonance imaging : a review
(2009)
Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments with visual stimulation. These studies include investigations of binocular rivalry, interocular information transfer and three-dimensional depth perception, as well as impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to dichoptic stimulus display used in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies as well as newer approaches using bi-screen goggles.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson's Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Traditionally, parsers are evaluated against gold standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (with 20 sentences per phenomenon) from both the TiGer and TüBa-D/Z resources. This provides a comparable test suite of 2 times 100 sentences which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC for key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two treebanks. In the remaining part of the paper we present the test suite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our error classification of potential parser errors.
In the recent literature the phenomenon of long distance agreement has become the focus of several studies as it seems to violate certain locality conditions which require that agreeing elements in general stand in clause-mate relationships. In particular, it involves a verb agreeing with a constituent which is located in the verb's clausal complement and hence poses a challenge for theories that assume a strictly local relationship for agreement. In this paper we present empirical evidence from Greek and Romanian for the reality of long distance agreement. Specifically, we focus on raising constructions in these two languages and we show that they do not involve movement but rather instantiate long distance agreement. We further argue that subjunctives allowing long distance agreement lack both a CP layer and semantic Tense. However, since the embedded verb also bears phi-features, these constructions pose a further problem for assumptions that view the presence of phi-features as evidence for the presence of a C layer. Finally, we raise the question of the common properties that these languages have that lead to the presence of long distance agreement.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgariff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
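The window-based component of such a model can be sketched in a few lines: count co-occurrences within a fixed window and decorrelate the resulting feature space with a truncated singular value decomposition. The toy corpus, window size and dimensionality below are purely illustrative assumptions; the paper's model is built from Web-scale corpora and additionally uses shallow patterns for grammatical and semantic relations:

```python
import numpy as np

# Toy corpus standing in for a Web-scale collection
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for s in tokens for w in s})
index = {w: i for i, w in enumerate(vocab)}

# Word-by-context co-occurrence counts within a +/-2 word window
window = 2
M = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                M[index[w], index[sent[j]]] += 1.0

# Truncated SVD: keep only the k strongest latent dimensions
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 3
embeddings = U[:, :k] * s[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_cat_dog = cosine(embeddings[index["cat"]], embeddings[index["dog"]])
sim_cat_cheese = cosine(embeddings[index["cat"]], embeddings[index["cheese"]])
```

In the raw co-occurrence space, cat and dog (which share the context chased) already come out more similar than cat and cheese; the SVD step compresses such regularities into a small number of latent dimensions and damps the influence of high-frequency, topic-skewed contexts.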
Parsing coordinations
(2009)
The present paper is concerned with statistical parsing of constituent structures in German. The paper presents four experiments that aim at improving the parsing performance for coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser with gold scopes for each conjunct, 3) reranking the parser output for all possible conjunct scopes that are permissible with regard to clause structure. Experiment 4 reranks a combination of the parses from experiments 1 and 3. The experiments show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses increases the F-score from 69.76 for the baseline to 74.69. While this F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment yields higher recall (75.48% vs. 73.69%) and the third higher precision (75.43% vs. 73.26%). Combining the two methods gives the best result, with an F-score of 76.69.
The aim of this paper is to address two main counterarguments raised in Landau (2007) against the movement analysis of Control, and especially against the phenomenon of Backward Control. The paper shows that unlike the situation described in Tsez (Polinsky & Potsdam 2002), Landau's objections do not hold for Greek and Romanian, where all obligatory control verbs exhibit Backward Control. Our results thus provide stronger empirical support for a theoretical approach to Control in terms of Movement, as defended in Hornstein (1999 and subsequent work).
The recent financial crisis has led to a vigorous debate about the pros and cons of fair-value accounting (FVA). This debate presents a major challenge for FVA going forward and standard setters’ push to extend FVA into other areas. In this article, we highlight four important issues as an attempt to make sense of the debate. First, much of the controversy results from confusion about what is new and different about FVA. Second, while there are legitimate concerns about marking to market (or pure FVA) in times of financial crisis, it is less clear that these problems apply to FVA as stipulated by the accounting standards, be it IFRS or U.S. GAAP. Third, historical cost accounting (HCA) is unlikely to be the remedy. There are a number of concerns about HCA as well and these problems could be larger than those with FVA. Fourth, although it is difficult to fault the FVA standards per se, implementation issues are a potential concern, especially with respect to litigation. Finally, we identify several avenues for future research. JEL Classification: G14, G15, G30, K22, M41, M42
The utility-maximizing consumption and investment strategy of an individual investor receiving an unspanned labor income stream seems impossible to find in closed form and very difficult to find using numerical solution techniques. We suggest an easy procedure for finding a specific, simple, and admissible consumption and investment strategy which is near-optimal in the sense that the wealth-equivalent loss compared to the unknown optimal strategy is very small. We first explain and implement the strategy in a simple setting with constant interest rates, a single risky asset, and an exogenously given income stream, but we also show that the success of the strategy is robust to changes in parameter values, to the introduction of stochastic interest rates, and to endogenous labor supply decisions.
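For reference, the benchmark case against which such near-optimal strategies are typically measured (no labor income, constant investment opportunities, CRRA utility) does have a well-known closed form: Merton's constant portfolio weight. A minimal sketch, with purely illustrative parameter values that are not taken from the paper:

```python
def merton_weight(mu, r, sigma, gamma):
    """Merton's optimal fraction of wealth in the risky asset for a
    CRRA investor with relative risk aversion gamma, when the risky
    asset has drift mu and volatility sigma and the risk-free rate
    is r: (mu - r) / (gamma * sigma^2)."""
    return (mu - r) / (gamma * sigma**2)

# Illustrative parameters: 7% drift, 2% risk-free rate, 20% volatility,
# risk aversion 2 -> 62.5% of wealth in the risky asset
w = merton_weight(mu=0.07, r=0.02, sigma=0.20, gamma=2.0)
```

The paper's point is precisely that an unspanned income stream breaks this closed form, which is what motivates the near-optimal approximation it proposes.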
In this paper, we analyze economies of scale for German mutual fund complexes. Using 2002-2005 data of 41 investment management companies, we specify a hedonic translog cost function. Applying a fixed effects regression on a one-way error component model there is clear evidence of significant overall economies of scale. On the level of individual mutual fund complexes we find significant economies of scale for all of the companies in our sample. With regard to cost efficiency, we find that the average mutual fund complexes in all size quartiles deviate considerably from the best practice cost frontier. JEL Classification: G2, L25 Keywords: mutual fund complex, investment management company, cost efficiency, economies of scale, hedonic translog cost function, fixed effects regression, one-way error component model
This paper examines whether the majority shareholder of a company attempts to influence capital market expectations negatively in the run-up to a compulsory buy-out of minority shareholders (a so-called squeeze-out). Such "manipulative" behaviour is frequently assumed in both the legal and the business literature, since the share price forms the lower bound for the amount of compensation. Our empirical analysis of the financial reporting and press release policy of squeeze-out companies on the German capital market prior to the announcement of such a measure shows that a significant increase (decrease) in press releases with a pessimistic (optimistic) tone is indeed observable during this period. However, it also turns out that the shares of squeeze-out candidates earn such high positive abnormal returns in the run-up to and on the day of the announcement that the cumulative effect of the disclosure policy on the market valuation, as we quantify it, is only very small overall and is dominated by other factors (e.g. speculation about the compensation). JEL: M41, M40, G14, K22
Gauging risk with higher moments : handrails in measuring and optimising conditional value at risk
(2009)
The aim of the paper is to study empirically the influence of higher moments of the return distribution on conditional value at risk (CVaR). More exactly, we attempt to reveal the extent to which the risk given by CVaR can be estimated from the mean, standard deviation, skewness and kurtosis. Furthermore, we study how this relationship can be utilised in portfolio optimisation. First, based on a database of 600 individual equity returns from 22 emerging world markets, factor models incorporating the first four moments of the return distribution were constructed at different confidence levels for CVaR, and the contribution of the identified factors in explaining CVaR was determined. Following this, the influence of higher moments was examined in a portfolio context, i.e. asset allocation decisions were simulated by creating emerging market portfolios from the viewpoint of US investors. This can be regarded as the normal decision-making process of a hedge fund focusing on investments in emerging markets. In our analysis we compared and contrasted two approaches by which one can overcome the shortcomings of the variance as a risk measure. First, we solved a multi-objective portfolio optimisation problem for different sets of conflicting higher moment preferences. In addition, portfolio optimisation was performed in the mean-CVaR framework, characterised by using CVaR as the measure of risk. As part of the analysis, pair-wise comparisons of the different higher moment metrics of the mean-variance and the mean-CVaR efficient portfolios were also made. Throughout the work special attention was given to the implied preferences for the different higher moments in optimising CVaR. We also examined the extent to which model risk, namely the risk of wrongly assuming normally distributed returns, can deteriorate the optimal portfolio choice. JEL Classification: G11, G15, C61
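As a minimal illustration of the risk measure itself: CVaR at level alpha is the expected loss conditional on the loss exceeding the value at risk. Below is a historical (non-parametric) estimator on simulated heavy-tailed returns; this is only a sketch, not the moment-based factor model of the paper, and the sample parameters are illustrative:

```python
import numpy as np

def cvar(returns, alpha=0.95):
    """Historical conditional value at risk (expected shortfall):
    the mean loss in the worst (1 - alpha) tail of the empirical
    return distribution. Losses are reported as positive numbers."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)   # value-at-risk threshold
    tail = losses[losses >= var]       # worst (1 - alpha) share of outcomes
    return float(tail.mean())

# Heavy-tailed illustrative returns (Student-t, 4 degrees of freedom);
# CVaR is always at least as large as VaR at the same level.
rng = np.random.default_rng(7)
rets = rng.standard_t(df=4, size=10_000) * 0.01
risk = cvar(rets, alpha=0.95)
```

Because CVaR averages over the entire tail rather than reading off a single quantile, it is sensitive to skewness and kurtosis, which is exactly the dependence the four-moment factor models above try to capture.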
If we want to develop a semantic analysis for explicit performatives such as I promise you to free Willy, we are faced with the following puzzle: In order to account for the speech act expressed by the performative verb, one can assume that the so-called performative clause is purely performative and provides the illocutionary force of the speech act whose content is given by the semantic object denoted by the complement clause. Yet under this perspective, the performative clause (that is, besides the performative verb, the indexicals I and you, which refer to the speaker and the addressee of the utterance context) is semantically invisible and does not contribute its meaning compositionally to the meaning of the entire explicit performative sentence. Conversely, if we account for the truth-conditional contribution of the performative clause and deny that the meaning of the performative verb is purely performative, then we have to find a way to account for the speech act expressed by the performative verb. Of course, there is already the widely accepted and very appealing indirectness account of explicit performative utterances developed by Bach & Harnish (1979). Roughly, Bach and Harnish solve this puzzle by deriving the performativity through a pragmatic inference process. According to them, the important speech act performed by means of the utterance of the explicit performative sentence is a kind of conventionalized indirect speech act. However, the boundary between semantics and pragmatics can be drawn in many different ways. Therefore, I think there could be other perspectives on the interface between the truth-functional treatment of declarative explicit performative sentences and the speech acts that are performed with their utterances and expressed by the performative verbs. 
Hence, this thesis undertakes the experiment of developing a further analysis and working out its consequences for the semantics and pragmatics of explicit performative utterances and for the new interface that emerges. Briefly, the experiment runs as follows: First, I develop an analysis for explicit performative sentences framed by parenthetical structures such as in (1)(a). In a second step, this parenthetical analysis is applied to the proper Austinian explicit performative sentences in (1)(b). (1) a. Tomorrow, I promise you this, I will teach them Tyrolean songs. b. I promise you that I will teach them Tyrolean songs. Analyzing first the explicit performatives framed by parenthetical structures has the advantage that we are faced with two utterances of two main clauses. In (1)(a) there is the utterance of the host sentence Tomorrow I will teach them Tyrolean songs, and the utterance of the explicit parenthetical I promise you this, where the demonstrative this refers to the utterance of Tomorrow I will teach them Tyrolean songs. Since speakers perform speech acts with utterances of main clauses, I assume that the meaning of the explicit parenthetical I promise you this specifies that the actual illocutionary force of the utterance of Tomorrow I will teach them Tyrolean songs is the illocutionary force of a promise. Hence, instead of deriving an indirect illocutionary force by means of a pragmatic inference schema, we can deal with an ordinary direct speech act that is performed with the utterance of the host sentence. This kind of analysis stresses the particular discourse function of explicit performative utterances. Performative verbs are used whenever the contextual information is not sufficient to determine the illocutionary force of the corresponding implicit speech act. The resulting consequences of the parenthetical analysis are interesting since they cast a different light on performative verbs. 
Surprisingly, the performative verbs are not performative at all. They do not constitute the execution of a speech act, but are execution-supporting. Instead of constituting the particular illocutionary force, they merely specify the illocutionary force of the utterance of the host sentence. For instance, the speaker utters the explicit parenthetical I promise you this to specify what he is simultaneously doing. Hence the speaker does not succeed in performing the promise simply because he is uttering I promise you this. Rather, by means of the information conveyed by the utterance of I promise you this, the potential illocutionary forces of the utterance of the host sentence are disambiguated. Thus, it is not the case that explicit parentheticals are trivially true when uttered. Their function is more complex. Their self-verifying property (‘saying so makes it so’) is explained by means of disambiguation. Furthermore, according to the parenthetical analysis, instead of being purely performative, the performative verbs contribute their meanings compositionally to the truth conditions of the entire explicit performative sentence. Together with its consequences, this analysis is applied to the proper Austinian performatives, which display subordination. I assume that regardless of their structure, explicit performatives always semantically and pragmatically behave as the parenthetical analysis predicts.
Biventricular pacing has been suggested in end-stage heart failure. We present a 59-year-old patient undergoing a second re-do CABG (coronary artery bypass graft) and carotid artery endarterectomy. The ejection fraction was 15%, the QRS width 175 ms. Following the carotid and CABG procedures, an implanted single-chamber ICD (implantable cardioverter defibrillator) was upgraded to permanent biventricular DDD pacing by implantation of one epicardial left ventricular and one epicardial atrial electrode. At follow-up two months postoperatively the ejection fraction had significantly improved to 45%; the patient underwent a stress test with adequate load and reported a good quality of life.
Background Hepatitis C virus (HCV) is a leading cause of chronic liver disease, end-stage cirrhosis, and liver cancer, but little is known about the burden of disease caused by the virus. We summarised burden of disease data presently available for Europe, compared the data to current expert estimates, and identified areas in which better data are needed. Methods Literature and international health databases were systematically searched for HCV-specific burden of disease data, including incidence, prevalence, mortality, disability-adjusted life-years (DALYs), and liver transplantation. Data were collected for the WHO European region with emphasis on 22 countries. If HCV-specific data were unavailable, these were calculated via HCV-attributable fractions. Results HCV-specific burden of disease data for Europe are scarce. Incidence data provided by national surveillance are not fully comparable and need to be standardised. HCV prevalence data are often inconclusive. According to available data, an estimated 7.3–8.8 million people (1.1–1.3%) are infected in our 22 focus countries. HCV-specific mortality, DALY, and transplantation data are unavailable. Estimations via HCV-attributable fractions indicate that HCV caused more than 86,000 deaths and 1.2 million DALYs in the WHO European region in 2002. Most of the DALYs (95%) were accumulated by patients in preventable disease stages. About one-quarter of the liver transplants performed in 25 European countries in 2004 were attributable to HCV. Conclusion Our results indicate that hepatitis C is a major health problem and highlight the importance of timely antiviral treatment. However, data on the burden of disease of hepatitis C in Europe are scarce, outdated or inconclusive, which indicates that hepatitis C is still a neglected disease in many countries. What is needed are public awareness, co-ordinated action plans, and better data. 
European physicians should be aware that many infections are still undetected, provide timely testing and antiviral treatment, and avoid iatrogenic transmission.
Background Public health systems are confronted with constantly rising costs. Furthermore, diagnostic as well as treatment services are becoming more and more specialized. These are the reasons for an interdisciplinary project aiming, on the one hand, at simplifying the planning and scheduling of patient appointments and, on the other hand, at fulfilling all requirements of efficiency and treatment quality. Methods To understand the procedures and problem-solving activities, the responsible project group proceeded strictly in four methodical steps: actual state analysis, analysis of causes, correcting measures, and examination of effectiveness. Various methods of quality management, such as opinion polls, data collections, and several procedures for problem identification as well as for solution proposals, were applied. All activities were realized according to the requirements of the clinic's ISO 9001:2000 certified quality management system. The development of this project is described step by step, from the planning phase to inauguration into the daily routine of the clinic and the subsequent control of effectiveness. Results Five significant problem fields could be identified. After an analysis of causes, the major remedial measures were: installation of a patient telephone hotline, standardization of appointment arrangements for all patients, modification of the appointment book to take the reason for the visit into account by planning defined working periods for certain symptoms and treatments, improvement of telephone counselling, and transition to flexible time planning by daily updates of the appointment book. After implementation of these changes into the clinic's routine, success could be demonstrated by significantly reduced waiting times and a resulting increase in patient satisfaction. 
Conclusion Systematic scrutiny of the existing organizational structures of the outpatients' department of our clinic by means of actual state analysis and analysis of causes revealed the necessity of improvement. According to the rules of quality management, correcting measures and a subsequent examination of effectiveness were performed. These changes resulted in higher satisfaction of patients, referring colleagues and clinic staff alike. Additionally, the clinic is able to cope with an increasing demand for appointments in outpatients' departments, and the clinic's human resources are employed more effectively.
Background Ongoing changes in cancer care cause an increase in the complexity of cases, which is characterized by modern treatment techniques and a higher demand for patient information about the underlying disease and therapeutic options. At the same time, the restructuring of health services and reduced funding have led to the downsizing of hospital care services. These trends strongly influence the workplace environment and are a potential source of stress and burnout among professionals working in radiotherapy. Methods and patients A postal survey was sent to members of the workgroup "Quality of Life", which is part of DEGRO (German Society for Radiooncology). Thus far, 11 departments have answered the survey. 406 (76.1%) out of 534 cancer care workers (23% physicians, 35% radiographers, 31% nurses, 11% physicists) from 8 university hospitals and 3 general hospitals completed the FBAS form (Stress Questionnaire of Physicians and Nurses; 42 items, 7 scales) and a self-designed questionnaire regarding their work situation, including one question on global job satisfaction. Furthermore, the participants could make voluntary suggestions about how to improve their situation. Results Nurses and physicians showed the highest levels of job stress (total scores 2.2 and 2.1). The greatest sources of job stress (for physicians, nurses and radiographers) were structural conditions (e.g. underpayment, ringing of the telephone) and "stress by compassion" (e.g. "long suffering of patients", "patients will be kept alive using all available resources against the conviction of staff"). In multivariate analyses professional group (p < 0.001), working night shifts (p = 0.001), age group (p = 0.012) and free time compensation (p = 0.024) gained significance for the total FBAS score. Global job satisfaction was 4.1 on a 9-point scale (from 1 – very satisfied to 9 – not satisfied). 
Comparing the total stress scores of the hospitals and job groups we found significant differences in nurses (p = 0.005) and physicists (p = 0.042) and a borderline significance in physicians (p = 0.052). In multivariate analyses "professional group" (p = 0.006) and "vocational experience" (p = 0.036) were associated with job satisfaction (cancer care workers with < 2 years of vocational experience having a higher global job satisfaction). The total FBAS score correlated with job satisfaction (Spearman-Rho = 0.40; p < 0.001). Conclusion Current workplace environments have a negative impact on stress levels and the satisfaction of radiotherapy staff. Identification and removal of the above-mentioned critical points requires various changes which should lead to the reduction of stress.
Analysis of knockout/knockin mice that express a mutant FasL lacking the intracellular domain
(2009)
Fas ligand (FasL; CD178; CD95L) is a type II transmembrane protein belonging to the tumour necrosis factor family; its binding to the Fas receptor (CD95; APO-1) triggers apoptosis in the receptor-bearing cell. Signalling through this pathway plays a pivotal role during the immune response and in immune system homeostasis. Similar to other TNF family members, the intracellular domain has been reported to transmit signals to the inside of the FasL-bearing cell (reverse signalling). Recently, we identified the proteases ADAM10 and SPPL2a as molecules important for the processing of FasL. Protease cleavage releases the intracellular domain, which is then able to translocate to the nucleus and to repress reporter gene activity. To study the physiological importance of FasL reverse signalling in vivo, we established knockout/knockin mice with a FasL deletion mutant that lacks the intracellular portion (FasLDeltaIntra). Co-culture experiments confirmed that the truncated FasL protein is still capable of inducing apoptosis in Fas-sensitive cells. Preliminary immunohistochemistry data suggest that, in contrast to published data, the absence of the intracellular FasL domain does not alter the intracellular FasL localization in activated T cells. We are currently investigating the signalling and proliferative capacities of T cells derived from homozygous FasLDeltaIntra mice to validate a co-stimulatory role of FasL reverse signalling.
Mitochondria are essential for respiration and oxidative phosphorylation. Mitochondrial dysfunction due to aging processes is involved in the pathologies and pathogenesis of a series of cardiovascular disorders. Accumulating results show that the enzyme telomerase, with its catalytic subunit telomerase reverse transcriptase (TERT), has a beneficial effect on heart function. The benefit of short-term running of mice for heart function is dependent on TERT expression. TERT can translocate into the mitochondria, and mitochondrial TERT (mtTERT) is protective against stress-induced stimuli and binds to mitochondrial DNA (mtDNA). Because mtDNA is highly susceptible to damage produced by reactive oxygen species (ROS), which are generated in close proximity to the respiratory chain, the aim of this study was to determine the functions of mtTERT in vivo and in vitro. Therefore, mitochondria from hearts of adult, 2nd generation TERT-deficient mice (TERT -/-) and wt littermates were isolated and state 3 respiration was measured. Strikingly, mitochondria from TERT -/- mice revealed a significantly lower state 3 respiration (TERTwt: 987 +/- 72 pmol/s*mg vs. TERT-/-: 774 +/- 38 pmol/s*mg, p < 0.05, n = 5). These results demonstrate that TERT -/- mice have a so far undiscovered heart phenotype. In contrast, mitochondria isolated from liver tissue did not show any differences. To gain further insight into the molecular mechanisms, we reduced endogenous TERT levels by shRNA and measured mitochondrial reactive oxygen species (mtROS). mtROS were increased after ablation of TERT (scrambled: 4.98 +/- 1.1% gated vs. shTERT: 2.03 +/- 0.7% gated, p < 0.05, n = 4). We next determined mtDNA deletions, which are caused by mtROS. Semiquantitative real-time PCR of mtDNA deletions revealed that mtTERT protects mtDNA from oxidative damage. 
To analyze whether mitochondrial integrity is required to protect from apoptosis, vectors with mitochondrially targeted TERT (mitoTERT) and wildtype TERT (wtTERT) were transfected and apoptosis was measured. mitoTERT showed the most prominent protective effect on H2O2 induced apoptosis. In conclusion, mtTERT has a protective role in mitochondria by importantly contributing to mtDNA integrity and thereby enhancing respiration capacity of the heart.
Carma-1 is required for B cell receptor-/CD40- and T cell receptor-/CD28-induced B- and T-cell activation via JNK and NF-kappaB. In B cells, Carma-1 is phosphorylated by PKCbeta, leading to its oligomerization. Subsequent Bcl10 binding induces IKKbeta activation and thereby canonical NF-kappaB signalling. Despite these findings, it is still unknown how exactly Carma-1 is connected to the plasma membrane and to the IKK complex. We therefore purified Carma-1 complexes from mouse CH12 B cells using anti-Carma-1 affinity columns. Mass spectrometric analyses of the column eluates demonstrated the presence of Carma-1 as well as three adaptor proteins previously uncharacterized in B cells, one of which was the Trk-fused gene (Tfg), an adaptor protein containing PB1 and coiled-coil domains. Whereas Tfg was originally identified as a fusion partner of oncogenic Trk tyrosine kinase mutants, the normal cellular homologue of Tfg has so far not been described in B cells. However, Tfg has been shown in other systems to interact with IKKgamma and to enhance TNF-induced NF-kappaB activation. Tfg and Carma-1 co-localized at the plasma membrane and at perinuclear structures in B cells. We further corroborated the interactions of Tfg, IKKgamma and Carma-1 by Blue Native gel electrophoresis, where Carma-1 and Tfg formed a 0.7–1 MDa complex. Ectopic expression of Tfg increased the molecular mass of IKKgamma complexes, fused IKKgamma, Bcl10 and Carma-1 complexes into a ~2 MDa complex, and increased basal and CD40-induced canonical NF-kappaB and IKKbeta activity. In contrast, shRNA-mediated silencing of Tfg decreased CD40-induced IKKbeta activity. Interestingly, in primary B cells, the highest expression of Tfg was detected in marginal zone and B1 B cells, and Carma-1 and Tfg formed complexes in these B cells.
Since Carma-1 is required for marginal zone B cell and B1 B cell development, we suggest that a functional interaction between Carma-1 and Tfg contributes to the development and maintenance of these cells by means of canonical NF-kappaB signals.
Introduction: Lymphocyte infiltration (LI) is often seen in breast cancer but its importance remains controversial. A positive correlation of human epidermal growth factor receptor 2 (HER2) amplification and LI has been described, which was associated with a more favorable outcome. However, specific lymphocytes might also promote tumor progression by shifting the cytokine milieu in the tumor.
Methods: Affymetrix HG-U133A microarray data of 1,781 primary breast cancer samples from 12 datasets were included. The correlation of immune system-related metagenes with different immune cells, clinical parameters, and survival was analyzed.
Results: A large cluster of nearly 600 genes with functions in immune cells was consistently obtained in all datasets. Seven robust metagenes from this cluster can act as surrogate markers for the amount of different immune cell types in the breast cancer sample. An IgG metagene as a marker for B cells had no significant prognostic value. In contrast, a strong positive prognostic value for the T-cell surrogate marker (lymphocyte-specific kinase (LCK) metagene) was observed among all estrogen receptor (ER)-negative tumors and those ER-positive tumors with HER2 overexpression. Moreover, ER-negative tumors with high expression of both IgG and LCK metagenes seem to respond better to neoadjuvant chemotherapy.
Conclusions: Precise definitions of the specific subtypes of immune cells in the tumor can be accomplished from microarray data. These surrogate markers define subgroups of tumors with different prognosis. Importantly, all known prognostic gene signatures uniformly assign poor prognosis to all ER-negative tumors. In contrast, the LCK metagene actually separates the ER-negative group into better or worse prognosis.
Background Imatinib mesylate, a selective inhibitor of the Abl tyrosine kinase, is efficacious in treating chronic myeloid leukaemia (CML) and Ph+ acute lymphoblastic leukaemia (ALL). However, most advanced-phase CML and Ph+ ALL patients relapse on Imatinib therapy. Several mechanisms of refractoriness have been reported, including activation of the Src-family kinases (SFK). Here, we investigated the biological effect of the new specific dual Src/Abl kinase inhibitor AZD0530 on Ph+ leukaemic cells. Methods Cell lines used included BV173 (CML in myeloid blast crisis), SEM t(4;11), Ba/F3 (IL-3-dependent murine pro-B), p185Bcr-Abl-infected Ba/F3 cells, p185Bcr-Abl mutant-infected Ba/F3 cells, SupB15 (Ph+ ALL) and Imatinib-resistant SupB15 (RTSupB15) (Ph+ ALL) cells. Cells were exposed to AZD0530 and Imatinib. Cell proliferation, apoptosis, survival and signalling pathways were assessed by dye exclusion, flow cytometry and Western blotting, respectively. Results AZD0530 specifically inhibited the growth of, and induced apoptosis in, CML and Ph+ ALL cells in a dose-dependent manner, but showed only marginal effects on Ph- ALL cells. Resistance to Imatinib due to the Y253F mutation in p185Bcr-Abl was overcome by AZD0530. The combination of AZD0530 and Imatinib showed an additive inhibitory effect on the proliferation of CML BV173 cells but not of Ph+ ALL SupB15 cells. An ongoing transphosphorylation was demonstrated between SFKs and Bcr-Abl. AZD0530 significantly down-regulated the activation of survival signalling pathways in Ph+ cells, whether resistant or sensitive to Imatinib, with the exception of RTSupB15. Conclusion Our results indicate that AZD0530 targets both Src and Bcr-Abl kinase activity and reduces Bcr-Abl-mediated leukaemic cell maintenance.
Background The effect of additional treatment strategies with antineoplastic agents on intraperitoneal levels of tumor-stimulating interleukins is unclear. Taurolidine and povidone-iodine have mainly been used for abdominal lavage in Germany and Europe. Methods In the setting of a multicentre (three university hospitals) prospective randomized controlled trial, 120 patients with resectable colorectal, gastric or pancreatic cancers were randomly allocated to receive either 0.5% taurolidine/2,500 IU heparin (TRD) or 0.25% povidone-iodine (control) intraperitoneally. Because IL-1beta (produced by macrophages) does not differ preoperatively across these gastrointestinal cancer types, our major outcome criterion was the perioperative (overall) level of IL-1beta in peritoneal fluid. Results Cytokine values were significantly lower after TRD lavage for IL-1beta, IL-6, and IL-10. Perioperative complications did not differ. The median follow-up was 50.0 months. The overall mortality rate (28 vs. 25, p = 0.36), the cancer-related death rate (17 vs. 19, p = 0.2), the local recurrence rate (7 vs. 12, p = 0.16), the distant metastasis rate (13 vs. 18, p = 0.2) and the time to relapse did not differ significantly. Conclusion Reduced cytokine levels might explain a short-term antitumorigenic intraperitoneal effect of TRD. However, this study analyzed different types of cancer. We have therefore set up a multicentre randomized trial in patients undergoing curative colorectal cancer resection. Trial registration: ISRCTN66478538
Background The differential diagnosis between follicular thyroid adenoma and minimally invasive follicular thyroid carcinoma is often difficult for several reasons. One major aspect is the lack of typical cytological criteria in well-differentiated specimens. New marker molecules, detected by poly- or monoclonal antibodies, have proved helpful. Methods We performed global gene expression analysis of 12 follicular thyroid tumours (4 follicular adenomas, 4 minimally invasive follicular carcinomas and 4 widely invasive follicular carcinomas), followed by immunohistochemical staining of 149 cases. The specificity of the antibody was validated by western blot analysis. Results In the gene expression analysis, QPRT was differentially expressed between follicular thyroid adenoma and follicular thyroid carcinoma. QPRT protein was detected by immunohistochemistry in 65% of follicular thyroid carcinomas, including the minimally invasive variant, but in only 22% of follicular adenomas. Conclusion Consequently, QPRT is a potential new marker for the immunohistochemical screening of follicular thyroid nodules.
This study addresses the structure-function relationships of three essential membrane proteins: porin from Paracoccus denitrificans, porin OmpG from Escherichia coli and BetP from Corynebacterium glutamicum, using Fourier transform infrared (FT-IR) spectroscopy and attenuated total reflection (ATR) techniques. The structure of porin from P. denitrificans has been known for more than a decade; however, the mechanism by which monomerization leads to loss of functionality was unclear. In this study we addressed the role of lipids in the functionality of porin using FT-IR. OmpF porin was found to require interaction with the lipid molecules, via the aromatic girdles surrounding the protein, for functionality. Molecular bonds and groups of the lipids were established as reporter groups probing at different depths of the bilayer in order to identify the interaction partners of the aromatic girdles of porins. Monomerization of the trimeric assembly of OmpF porin reconstituted in lipids is induced by increasing the temperature. Porin (OmpF) was found to be extremely stable: the secondary structure of the protein was unaltered up to the temperature-induced main transition, around 80-90 °C, above which it denatures. However, the interaction of the aromatic girdle with the lipid molecules exhibited distinct changes at much lower temperatures (40-50 °C) where, according to previous functional studies, monomerization and loss of function occur. The results are compared with those for OmpG porin from E. coli, for which the functional unit is a monomer. The aromatic girdle-lipid interaction was monitored by the tyrosine aromatic ring C=C vibrational mode, a universal marker for protein stability and interaction. We also found that the aromatic girdles of porins interact with the interfacial region of the lipid bilayer rather than with the lipid headgroups.
Lipid-protein interaction was found to be essential not only for the structural stability but also for the functionality of OmpF porin. We also studied the structural properties of OmpG from E. coli. The structure of OmpG at two pH values has been resolved by X-ray crystallography, and the channel has been proposed to attain different states at different pH values: closed (pH < 5.5) and open (pH > 7.5). This study, using IR spectroscopy, revealed that the pH-induced opening and closing of the channel is reflected in frequency shifts of the beta-sheet structure. OmpG has more rigid beta-barrel properties upon opening of the channel. IR spectral analysis revealed multiple beta-sheet signals with different hydrogen bond strengths. This enabled us to monitor the formation of hydrogen bridges between the extracellular loops upon opening of the channel. The conclusion that OmpG porin has two states at different pH values was also confirmed by the three mutants in which the role of the histidine pair (H231 & H261) and of loop 6 was addressed. Temperature profiling of the wild-type (WT) protein and the mutants did not show pH-dependent differences in structural stability in detergent solution. However, in 2D crystals the WT protein was found to be more stable in the open form than in the closed form. Reconstitution into lipids increased the transition temperature by ~20 °C in the closed state and ~25 °C in the open state. Therefore we conclude that the open and closed states of OmpG differ in structural stability in a way that is revealed only in the lipid environment. A comparison of the transition temperatures of OmpG WT and the mutants suggested that the hydrogen bond network among S218-H231-H261-D267, together with the formation of a 12-residue-long beta-sheet, contributes to the structural stability of the open channel.
In the process of closing and opening of the channel, the globular structure of the protein remains largely unchanged, while there are changes in the side chain moieties. In addition to the histidine pair and loop L6, in situ opening/closing experiments showed that the negatively charged amino acids, i.e. Asp and Glu, and the Arg residues also play an active role, possibly by interacting with each other inside the pore lumen. It can therefore be concluded that closure of the channel at acidic pH proceeds not only by loop 6 closing the channel entrance, but also by a change in the electric potential inside the lumen, due to the different states of the charged amino acids, which effectively blocks the gateway. BetP from C. glutamicum attains an active and an inactive state in order to adjust its glycine betaine uptake rate to the osmotic conditions that the cell encounters. The structure of BetP is not yet available. The WT protein exhibited structural differences in the presence of excess K+, which is one of the activation conditions. In 2D crystals, increasing the ionic strength to 700 mM K+ was shown by ATR-FTIR to induce changes in the alpha-helical moiety, with contributions from the ester groups and one Tyr residue. In detergent solution, an ionic strength of 220 mM K+ was found to be the threshold potassium concentration ([K+]) at which the protein exhibits structural alterations. The determined [K+] values are in good agreement with previous functional studies. However, there are differences in the activation profile of BetP between 2D crystals and detergent solution, which indicates that lipids are involved in the conformational transition from the inactive to the active state and that their absence can lead to different structural properties. BetP WT was found to have ~65% alpha-helix, ~25% random coil and ~10% turn structure in detergent solution.
In the presence of excess K+, the WT protein adopts a more unordered structure. Secondary structure analysis of the mutants revealed that both the N- and the C-terminus are in alpha-helical conformation. Reconstitution of the WT protein in 2D crystals increased the main transition (denaturation) temperature from ~62 °C to ~85 °C, a clear indication that the protein is more stable in a lipid environment. Temperature profiling of the two forms of the WT protein revealed that the structural breakdown is preceded by monomerization of the trimeric assembly. Comparing the two forms of the WT protein and the mutant BetA, we conclude that the oligomeric state is stabilized via interactions among hydrophilic regions involving the N-terminus. H/D exchange and activation with excess K+ in D2O buffer revealed that activation of the protein involves the interaction of Arg and Asp/Glu residues in the cytoplasmic region of the protein. BetP WT and the two mutants tested, i.e. BetA and BetPDeltaC45, showed differences in protein packing upon activation. The WT protein and the BetPDeltaC45 mutant also show changes in the hydrogen bonding properties of turns. Since BetA does not show this property upon activation, we conclude that in the WT protein the N-terminus interacts with the loops in the inactive state via interactions of charged amino acids, and that this interaction is altered during activation. It could be argued that protein packing is affected via the changes in turns upon activation. We also found experimental evidence that one Tyr residue has different orientations in the active and inactive states of BetP. Based on previous functional studies, it could be one of the five Tyr residues in the cytoplasmic region of the protein (in loop 3, 6 or 7, or in the C-terminus).
The mutant BetPDeltaC45, on the other hand, showed fewer differences between the active- and inactive-state conditions, and based on the H/D exchange rates the mutant shows the properties of an activated WT protein, proving that the C-terminal truncation impairs the conformational transition between the active and inactive states.
The characterization of microscopic properties in correlated low-dimensional materials is a challenging problem due to the effects of dimensionality and the interplay between the many different lattice and electronic degrees of freedom. Competition between these factors gives rise to interesting and exotic magnetic phenomena. An understanding of how these phenomena are driven by these degrees of freedom can be used for the rational design of new materials, in which these degrees of freedom are controlled and manipulated to obtain desired properties. In this work, we study these effects in materials with small exchange interactions between the magnetic ions, such as metal-organic and inorganic dilute compounds. We overcome the difficulties in studying these kinds of materials by combining classical and quantum mechanical ab initio methods with many-body theory in an effective theoretical approach. To treat metal-organic compounds we elaborate a novel two-step methodology which allows one to include quantum effects while reducing the computational cost. We show that our approach is an effective procedure, leading at each step to additional insights into the essential features of the phenomena and materials under study. Our investigation is divided into two parts, the first concerning the exploration of the fundamental physical properties of novel Cu(II) hydroquinone-based compounds. We have studied two representatives of this family, the polymeric system Cu(II)-2,5-bis(pyrazol-1-yl)-1,4-dihydroxybenzene (CuCCP) and the coupled system Cu2S2F6N8O12 (TK91). The second part concerns the study of magnetic phenomena associated with the interplay between different energy scales and dimensionality in zero-, one- and two-dimensional compounds. In the zero-dimensional case, we have performed a comprehensive study of Cu4OCl6L4 with L = diallylcyanamide = NC-N-(CH2-CH=CH2)2 (Cu4OCl6daca4).
Interpretations of the magnetic properties of this tetrameric compound have been controversial and inconsistent. From our studies, we conclude that the models commonly applied to this and other representatives of the same family of cluster systems fail to provide a consistent description of their low-temperature magnetic properties, and we thus postulate that in such systems it is necessary to take quantum fluctuations into account, owing to possible frustrated behavior. In the one-dimensional case, we studied polymeric Fe(II)-triazole compounds, which are of special relevance due to the possibility of inducing a spin transition between the low- and high-spin states by applying an external perturbation. A satisfactory microscopic explanation of this large cooperative phenomenon has been a long-standing problem, and the lack of X-ray data has been one reason for the absence of microscopic studies. In this work, we present a novel approach to understanding the microscopic mechanism of spin crossover in such systems and show that in these kinds of compounds magnetic exchange between high-spin Fe(II) centers plays an important role. The correct description of the underlying physics in many materials is often hindered by the presence of anisotropies. To illustrate this difficulty, we have studied the two-dimensional dilute compound K2V3O8, which exhibits an unusual spin reorientation effect in applied magnetic fields. While this effect can be partly understood by considering anisotropies in the system, these are not sufficient to reproduce the experimental observations. Based on our studies of the electronic and magnetic properties of this system, we predict an extra exchange interaction and the presence of an additional magnetic moment at the non-magnetic V site. This sheds new light on the controversial recent experimental data on the magnetic properties of this material.
In late 2006/early 2007, the Cultural Research Centre (CRC), with financial and technical support from the Cross-Cultural Foundation of Uganda, carried out research in Iganga and Namutumba districts to gauge the impact of the introduction of the local language as a medium of instruction in ‘pilot’ lower primary school classes. Our research was in response to new circumstances in Uganda’s education sector, with the Government introducing teaching in local languages in lower primary classes from February 2007. This was accompanied by a “thematic curriculum” to develop early childhood skills that are fundamental to continuing educational performance in numeracy, literacy and life skills. This was a departure from the earlier emphasis on the acquisition of facts in various subjects in primary schools, mostly focusing on recall and mostly taught in English. This nationwide policy followed a pilot initiative in four districts, including Iganga (later split into Iganga and parts of Namutumba districts), where 15 pilot schools had been chosen. Instruction in Lusoga in Primary 1 to 3 classes started there in 2005, following a period of teacher training. From the outset, however, parents, teachers, pupils and others raised questions: was teaching in the local language possible, and would it make a positive difference to learning?
In Lango, Northern Uganda, 20 years of war, cattle rustling and HIV/AIDS have resulted in widespread loss of life, population displacement, and loss of property. In spite of this turmoil, some traditional cultural practices, such as widow inheritance, early child marriage, and widow cleansing, continued, although they were increasingly seen to conflict with ‘modern’ development thinking, especially when infringing on women’s and children’s rights. External development actors first tried to address this situation by ‘sensitising’ communities, but with limited success. It soon became evident, however, that clan leaders were instrumental in perpetuating cultural practices: in the early 2000s, they became increasingly identified as key actors to address harmful traditions and to resolve conflicts. With the many trials faced by local communities, women’s roles in supporting the family institution and upholding cultural values had also expanded. Several development organisations were established to address the challenges related to these changes; one was the Lango Female Clan Leaders’ Association, with a focus on promoting girls’ education and access to justice for women. This case study examines the role that these female clan leaders have successfully played in tackling current gender-related challenges. It explores the interface between traditional and modern gender concepts and the value of working with cultural resource persons to address cultural challenges. The study involved desk research, field-based semi-structured interviews, focus group discussions with 30 respondents and key informants, and a validation write-shop, all held in the course of 2008.
This paper traces the historical development of lexicography in Gabon. Gabon, like most African countries, is multilingual. The most recent inventories of the languages spoken in Gabon are those established by Jacquot (1978) and Kwenzi-Mikala (1998). According to Kwenzi-Mikala (1997), there are 62 speech forms divided into 10 language groups or language-units in Gabon. These speech forms co-exist with French, the official language. In fact, in article 2, paragraph 8 of the revised Constitution of 1994 the following can be read: "The Gabonese Republic adopts French as the official language. Furthermore, she endeavours to protect and promote the national languages." This constitutional arrangement naturally makes French the language used in education, administration and the media. The survey of lexicography in Gabon presented here covers the linguistic situation in and the language policy of Gabon, the lexicographic survey itself, and the lexicographic needs of the different speech forms (including languages and dialects). Initially, the pioneers of Gabonese lexicography were missionaries or colonial administrators; very little was done in this field by the Gabonese themselves. Although credit is due to these early works, they show a number of shortcomings in both the linguistic and the metalexicographic content of the dictionaries and lexicons produced during this period. The main weaknesses of those works were the omission of tones in the written transcription of oral productions and orthographic problems. Furthermore, in those contributions the theory of lexicography is largely unknown, and the lexicographic works are hardly ever based on authentic data corpora of the languages being described.
The main goal of this article is to define the problem of vowel duration in Civili (H12a). It shows that the so-called Civili vowel length urgently needs to be re-examined, because previous works on the sound system of this language hardly explain a number of phonological phenomena, such as vowel lengthening, on the basis of the data at hand. To demonstrate the problem, the author first reviews previous works, all of which identify vowel lengthening in Civili. A comparison of these analyses reveals the complexity of the phenomenon, both in the differences from one analysis to another and in the difficulties the various phonologists encountered. The problem is further evident in the weaknesses of each analysis's results. This brings out further aspects of the vowel duration issue and leads the author to draw a clear distinction between vowel length and vowel lengthening, both of which fall under vowel duration. Finally, the article proposes a possible route to a solution through an experimental approach to the Civili sound system.
This article raises a number of questions that should be dealt with in drawing up a lexicographic plan for Gabon. For which of the Gabonese languages should lexicographic units be established? This question entails the issue of inventorying the Gabonese languages and their standardization, as well as the issue of language planning for Gabon. What is the status of the foreign languages widely spoken in Gabon? What about French? Should Gabon keep importing its French dictionaries from France, or should the Gabonese compile their own French dictionaries, including French words and expressions used exclusively in Gabon? Finally, after attempting to answer these questions, a number of suggestions are made for the establishment of a lexicographic plan for Gabon.
Content
A. EXECUTIVE SUMMARY, INCLUDING MAJOR RECOMMENDATIONS
B. COMPLETE REPORT
1. INTRODUCTION
2. RISK MAP
2.1 Why a Risk Map is needed, and for what purpose
2.1.1 Creating a unified data base
2.1.2 Assessing systemic risk
2.1.3 Allowing for coordinated policy action
2.2 Recommendations
3. GLOBAL REGISTER FOR LOANS (CREDIT REGISTER) AND BONDS (SECURITIES REGISTER)
3.1 Objectives of a credit register
3.2 Credit registers in Europe (and beyond)
3.3 Suggestions for a supra-national Credit Register
3.4 Integrating a supra-national Securities Register
3.5 Recommendations
4. HEDGE FUNDS: REGULATION AND SUPERVISION
4.1 What are hedge funds (activities, location, size, regulation)?
4.2 What are the risks posed by hedge funds (systematic risks, interaction with prime brokers)?
4.3 Routes to better regulation (direct, indirect)
4.4 Recommendations
5. RATING AGENCIES: REGULATION AND SUPERVISION
5.1 The role of ratings in bond and structured finance markets, past and present
5.2 Elements of rating integrity (independence, compensation and incentives, transparency)
5.3 Recommendations (registration, transparency, annual report on rating performance)
6. PROCYCLICALITY: PROBLEMS AND POTENTIAL SOLUTIONS
6.1 What is meant by “procyclicality” and why is it a problem?
6.2 The roots of procyclicality and the lessons it suggests for policymakers
6.2.1 Underpinnings of the phenomenon
6.2.2 Lessons to be learned
6.3 Characteristics of a macrofinancial stability framework
6.4 Recommendations
7. THE ROLE OF INTERNATIONAL INSTITUTIONS AND FORA, IN PARTICULAR THE IMF, BIS AND FSF
7.1 Legitimacy
7.2 Re-focusing the work
7.3 Recommendations
Content
New Financial Architecture (Short Version)
1. Purpose of the paper – causes of the crisis
2. Recommendations
2.1. Incentives
2.2. Transparency
2.3. Regulation and Supervision
2.4. International Institutions
3. Concluding remarks
Appendix (Full text)
A 1. Causes of the crisis
A 2. Improving the Framework
A 2.1. Incentives
A 2.2. Transparency
A 2.3. Regulation and Supervision
A 2.4. International Institutions
A 3. Concluding remarks
We analyze a national sample of Americans with respect to their debt literacy, financial experiences, and their judgments about the extent of their indebtedness. Debt literacy is measured by questions testing knowledge of fundamental concepts related to debt and by self-assessed financial knowledge. Financial experiences are the participants' reported experiences with traditional borrowing, alternative borrowing, and investing activities. Overindebtedness is a self-reported measure. Overall, we find that debt literacy is low: only about one-third of the population seems to comprehend interest compounding or the workings of credit cards. Even after controlling for demographics, we find a strong relationship between debt literacy and both financial experiences and debt loads. Specifically, individuals with lower levels of debt literacy tend to transact in high-cost ways, incurring higher fees and using high-cost borrowing. Applying our results to credit cards, we estimate that as much as one-third of the charges and fees paid by less knowledgeable individuals can be attributed to ignorance. The less knowledgeable also report that their debt loads are excessive or that they are unable to judge their debt position. JEL Classification: D14, D91
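The compounding concept that many survey respondents fail to grasp can be made concrete with a short calculation (an illustrative sketch, not taken from the paper; the rate and horizon are arbitrary):

```python
# Illustrative sketch (not from the paper): why interest compounding makes
# unpaid debt grow faster than simple interest would suggest.

def compound_balance(principal: float, annual_rate: float, years: int) -> float:
    """Balance after `years` of monthly compounding with no repayments."""
    monthly_rate = annual_rate / 12
    return principal * (1 + monthly_rate) ** (12 * years)

def simple_balance(principal: float, annual_rate: float, years: int) -> float:
    """Balance under simple (non-compounding) interest."""
    return principal * (1 + annual_rate * years)

# 1,000 at a 20% APR left unpaid for 5 years
balance_compound = compound_balance(1000.0, 0.20, 5)  # ~2696: balance nearly triples
balance_simple = simple_balance(1000.0, 0.20, 5)      # 2000: balance only doubles
```

A borrower who reasons with simple interest underestimates the 5-year balance by roughly a quarter, which is the kind of misjudgment the debt-literacy questions are designed to detect.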
Suppliers play a major role in innovation processes. We analyze ownership allocations and the choice of R&D technology in vertical R&D collaborations. Given incomplete contracts on the R&D outcome, there is a tradeoff between R&D specifically designed for a manufacturer (increasing investment productivity) and a general technology (reducing hold-up). We find that the market solution yields the specific technology in too few cases. More intense product market competition shifts optimal ownership towards the supplier. The use of exit clauses increases the gains from the collaboration. JEL Classification: L22, L24, O31, O32
Venture capital exit rights
(2009)
Theorists argue that exit rights can mitigate hold-up problems in venture capital. Using a hand-collected dataset of venture capital contracts from Germany, we show that exit rights, such as drag-along and tag-along rights, are included more frequently in venture capital contracts when a hold-up problem associated with the venture capitalist's exit decision is likely. We also find that almost all exit rights are allocated to the venture capitalist rather than to the entrepreneur. Finally, we show that besides the basic hold-up mechanism, other mechanisms such as ex-ante bargaining power and the degree of pledgeable income drive the allocation of exit rights. JEL Classification: G24, G34, D80
We merge administrative information from a large German discount brokerage firm with regional data to examine whether independent financial advisors (IFAs) improve portfolio performance. Our data track the accounts of 32,751 randomly selected individual customers over 66 months and allow direct comparison of performance across self-managed accounts and accounts run by, or in consultation with, independent financial advisors. In contrast to the picture painted by simple descriptive statistics, econometric analysis that corrects for the endogeneity of the choice of having a financial advisor suggests that advisors are associated with lower total and excess account returns, higher portfolio risk and probabilities of losses, and higher trading frequency and portfolio turnover relative to what account owners of given characteristics tend to achieve on their own. Regression analysis of who uses an IFA suggests that IFAs are matched with richer, older investors rather than with poorer, younger ones.
Analyzing interest rate risk: stochastic volatility in the term structure of government bond yields
(2009)
We propose a Nelson-Siegel type interest rate term structure model in which the underlying yield factors follow autoregressive processes with stochastic volatility. The factor volatilities parsimoniously capture risk inherent to the term structure and are associated with the time-varying uncertainty of the yield curve’s level, slope and curvature. Estimating the model on U.S. government bond yields using Markov chain Monte Carlo techniques, we find that the factor volatilities follow highly persistent processes. We show that slope and curvature risk have explanatory power for bond excess returns and illustrate that the yield and volatility factors are closely related to industrial capacity utilization, inflation, monetary policy and employment growth. JEL Classification: C5, E4, G1
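The abstract does not reproduce the model; as a hedged sketch, a dynamic Nelson-Siegel specification with autoregressive stochastic-volatility factors of the kind described can be written as follows (symbols and the exact parameterization are illustrative, not necessarily the paper's):

```latex
% Dynamic Nelson-Siegel yield curve with stochastic-volatility factors
% (illustrative notation: lambda, mu_i, phi_i, h_{i,t} are generic choices)
\begin{align*}
y_t(\tau) &= \beta_{1,t}
  + \beta_{2,t}\,\frac{1-e^{-\lambda\tau}}{\lambda\tau}
  + \beta_{3,t}\!\left(\frac{1-e^{-\lambda\tau}}{\lambda\tau}-e^{-\lambda\tau}\right)
  + \varepsilon_t(\tau), \\
\beta_{i,t} &= \mu_i + \phi_i\,(\beta_{i,t-1}-\mu_i)
  + e^{h_{i,t}/2}\,\eta_{i,t}, \qquad i = 1,2,3, \\
h_{i,t} &= \alpha_i + \delta_i\,h_{i,t-1} + \sigma_i\,\xi_{i,t},
\end{align*}
```

so the level, slope and curvature factors β_{i,t} follow AR(1) processes whose log-variances h_{i,t} are themselves persistent AR(1) processes, and the joint system is estimated by MCMC.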
This paper provides a joint analysis of household stockholding participation, stock location among stockholding modes, and participation spillovers, using data from the US Survey of Consumer Finances. Our multivariate choice model matches observed participation rates, conditional and unconditional, and asset location patterns. Financial education and sophistication strongly affect direct stockholding and mutual fund participation, while social interactions affect stockholding through retirement accounts only. Household characteristics influence stockholding through retirement accounts conditional on owning retirement accounts, unlike what happens with stockholding through mutual funds. Although stockholding is more common among retirement account owners, this is mainly due to the characteristics that led them to own retirement accounts in the first place, rather than to any informational advantages gained through retirement account ownership itself. Finally, our results suggest that, taking stockholding as given, stock location is not arbitrary but crucially depends on investor characteristics. JEL Classification: G11, E21, D14, C35
The ENVISAT validation programme for the atmospheric instruments MIPAS, SCIAMACHY and GOMOS is based on a number of balloon-borne, aircraft, satellite and ground-based correlative measurements. In particular, the activities of validation scientists were coordinated by ESA within the ENVISAT Stratospheric Aircraft and Balloon Campaign (ESABC). As part of a series of similar papers on other species [this issue], and in parallel to the contributions of the individual validation teams, the present paper provides a synthesis of comparisons performed between MIPAS CH4 and N2O profiles produced by the current ESA operational software (Instrument Processing Facility version 4.61, or IPF v4.61; full-resolution MIPAS data covering the period 9 July 2002 to 26 March 2004) and correlative measurements obtained from balloon and aircraft experiments as well as from satellite sensors and ground-based instruments. In the middle stratosphere, no significant bias is observed between MIPAS and correlative measurements, and MIPAS provides a very consistent and global picture of the distribution of CH4 and N2O in this region. On average, the MIPAS CH4 values show a small positive bias in the lower stratosphere of about 5%. A similar situation is observed for N2O, with a positive bias of 4%. In the upper troposphere/lower stratosphere (UT/LS), the IPF v4.61 data used here still exhibit some unphysical oscillations in individual CH4 and N2O profiles caused by the processing algorithm (with almost no regularization). Taking these problems into account, the MIPAS CH4 and N2O profiles behave as expected from the internal error estimation of IPF v4.61 and the estimated errors of the correlative measurements.
Stocks are exposed to the risk of sudden downward jumps. Additionally, a crash in one stock (or index) can increase the risk of crashes in other stocks (or indices). Our paper explicitly takes this contagion risk into account and studies its impact on the portfolio decision of a CRRA investor both in complete and in incomplete market settings. We find that the investor significantly adjusts his portfolio when contagion is more likely to occur. Capturing the time dimension of contagion, i.e. the time span between jumps in two stocks or stock indices, is thus of first-order importance when analyzing portfolio decisions. Investors ignoring contagion completely or accounting for contagion while ignoring its time dimension suffer large and economically significant utility losses. These losses are larger in complete than in incomplete markets, and the investor might be better off if he does not trade derivatives. Furthermore, we emphasize that the risk of contagion has a crucial impact on investors' security demands, since it reduces their ability to diversify their portfolios.
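The paper's own contagion model is not reproduced in the abstract. As a minimal illustrative sketch of the mechanism it describes — a crash in one asset temporarily raising the crash risk of another, with the effect decaying over time — the simulation below uses a Hawkes-style mutually exciting jump intensity; all function names, parameters and values are assumptions for illustration only:

```python
import random

def simulate_contagion(T=10.0, dt=0.001, base=0.5, boost=5.0, decay=2.0, seed=1):
    """Simulate jump times for two assets with mutually exciting intensities.

    A jump in asset i raises asset j's jump intensity by `boost`; the
    extra intensity then decays exponentially at rate `decay`, capturing
    the time span between jumps in the two assets (illustrative setup).
    """
    random.seed(seed)
    excess = [0.0, 0.0]      # contagion component on top of the base rate
    jumps = [[], []]         # recorded jump times per asset
    t = 0.0
    while t < T:
        for i in (0, 1):
            excess[i] *= (1 - decay * dt)            # Euler decay step
            intensity = base + excess[i]
            if random.random() < intensity * dt:     # thinning approximation
                jumps[i].append(t)
                excess[1 - i] += boost               # excite the *other* asset
        t += dt
    return jumps

jumps = simulate_contagion()
```

With `boost = 0` the two assets jump independently; raising `boost` clusters jumps across assets in time, which is exactly the channel the abstract argues a portfolio choice model must capture.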
We provide explicit solutions to life-cycle utility maximization problems simultaneously involving dynamic decisions on investments in stocks and bonds, consumption of perishable goods, and the rental and the ownership of residential real estate. House prices, stock prices, interest rates, and the labor income of the decision-maker follow correlated stochastic processes. The preferences of the individual are of the Epstein-Zin recursive structure and depend on consumption of both perishable goods and housing services. The explicit consumption and investment strategies are simple and intuitive and are thoroughly discussed and illustrated in the paper. For a calibrated version of the model we find, among other things, that the fairly high correlation between labor income and house prices implies much larger life-cycle variations in the desired exposure to house price risks than in the exposure to the stock and bond markets. We demonstrate that the derived closed-form strategies are still very useful if the housing positions are only reset infrequently and if the investor is restricted from borrowing against future income. Our results suggest that markets for REITs or other financial contracts facilitating the hedging of house price risks will lead to non-negligible but moderate improvements of welfare.
This paper relates recursive utility in continuous time to its discrete-time origins and provides a rigorous and intuitive alternative to a heuristic approach presented in [Duffie, Epstein 1992], who formally define recursive utility in continuous time via backward stochastic differential equations (stochastic differential utility). Furthermore, we show that the notion of Gâteaux differentiability of certainty equivalents used in their paper has to be replaced by a different concept. Our approach allows us to address the important issue of normalization of aggregators in non-Brownian settings. We show that normalization is always feasible if the certainty equivalent of the aggregator is of expected utility type. Conversely, we prove that in general Lévy frameworks this is essentially also necessary, i.e. aggregators that are not of expected utility type cannot be normalized in general. Moreover, for these settings we clarify the relationship of our approach to stochastic differential utility and, finally, establish dynamic programming results. JEL Classifications: D81, D91, C61
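For reference, Duffie and Epstein's stochastic differential utility defines the utility process as the solution of a backward stochastic differential equation; with a normalized aggregator f it takes the standard form

```latex
% Stochastic differential utility (Duffie-Epstein 1992), normalized aggregator f:
V_t = \mathbb{E}_t\!\left[\int_t^T f\bigl(c_s, V_s\bigr)\,ds\right],
\qquad V_T = 0,
```

i.e. a BSDE driven by the consumption stream c. The abstract's normalization question is when a general aggregator, specified through a certainty equivalent, can be brought into this normalized form outside Brownian settings.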
This thesis contributes to the field of soft matter research and studies the importance of hydrodynamic interactions during free-solution electrophoresis of linear polyelectrolytes by means of coarse-grained molecular dynamics simulations including full electro-hydrodynamic interactions. The center of attention is the specific role of hydrodynamic interactions on the electrophoretic behaviour of charged macromolecules. Points of interest are the dependence of hydrodynamic interactions on the chain length, the chain flexibility and the surrounding counterions, and their combined influence on important observables such as the static chain conformations and the dynamic transport coefficients, i.e., the diffusion and the electrophoretic mobility. These problems are addressed by extensive computer simulations that are quantitatively matched with experimental results. Existing theoretical predictions are carefully examined and are augmented by the observations in this thesis.
Opting out of the great inflation: German monetary policy after the breakdown of Bretton Woods
(2009)
During the turbulent 1970s and 1980s the Bundesbank established an outstanding reputation in the world of central banking. Germany achieved a high degree of domestic stability and provided a safe haven for investors in times of turmoil in the international financial system. Eventually, the Bundesbank provided the role model for the European Central Bank. Hence, we examine an episode of lasting importance in European monetary history. The purpose of this paper is to highlight how the Bundesbank's monetary policy strategy contributed to this success. We analyze the strategy as it was conceived, communicated and refined by the Bundesbank itself. We propose a theoretical framework (following Söderström, 2005) in which monetary targeting is interpreted, first and foremost, as a commitment device. In our setting, a monetary target helps anchor inflation and inflation expectations. We derive an interest rate rule and show empirically that it approximates the way the Bundesbank conducted monetary policy over the period 1975-1998. We compare the Bundesbank's monetary policy rule with those of the Fed and of the Bank of England. We find that the Bundesbank's policy reaction function was characterized by strong persistence of policy rates as well as a strong response to deviations of inflation from target and to the activity growth gap. In contrast, the response to the level of the output gap was not significant. In our empirical analysis we use real-time data, as available to policy-makers at the time. JEL Classification: E31, E32, E41, E52, E58
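The estimated reaction function itself is not given in the abstract; a rule with the properties described (strong rate smoothing, a response to inflation deviations and to activity growth rather than to the output-gap level) would take the generic form below, where all coefficient names are illustrative:

```latex
% Generic interest-rate rule with smoothing (illustrative notation):
i_t = \rho\, i_{t-1} + (1-\rho)\!\left[\,\bar{r} + \pi^{*}
      + \alpha\,(\pi_t - \pi^{*}) + \beta\,(\Delta y_t - \Delta y^{*}_t)\,\right] + u_t,
```

with ρ close to one capturing the persistence of policy rates, α > 0 the response to inflation deviations from target, and β the response to the activity growth gap Δy_t − Δy*_t.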