A tale of two lost archives
(2009)
This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their typed variant, where our specific calculus has letrec, recursive types and is nondeterministic. An addition of a type label to every subexpression is all that is needed, together with some natural constraints for the consistency of the type labels and well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained and that untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a nice interaction between reduction and typing, some reduction rules have to be accompanied with a type modification by generalizing or instantiating types.
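The type-labelling idea sketched in this abstract can be illustrated with a small example. The following Python sketch is only a loose analogue (simply typed terms, no letrec, recursive types or nondeterminism as in the paper): every subexpression carries its own type label, and a consistency check verifies the natural constraints on those labels at variables, abstractions and applications.

```python
# Minimal illustration (assumed names, not the calculus from the paper):
# every subexpression carries an explicit type label, and a checker verifies
# that the labels are consistent with each other.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    dom: "Type"
    cod: "Type"

Type = Union[Base, Arrow]

# Expressions: every node carries its own type label `ty`
@dataclass
class Var:
    name: str
    ty: Type

@dataclass
class Lam:
    var: str
    var_ty: Type
    body: "Expr"
    ty: Type

@dataclass
class App:
    fun: "Expr"
    arg: "Expr"
    ty: Type

Expr = Union[Var, Lam, App]

def consistent(e: Expr, env: dict) -> bool:
    """Check that the type labels attached to the subexpressions agree."""
    if isinstance(e, Var):
        return env.get(e.name) == e.ty
    if isinstance(e, Lam):
        ok_body = consistent(e.body, {**env, e.var: e.var_ty})
        return ok_body and e.ty == Arrow(e.var_ty, e.body.ty)
    if isinstance(e, App):
        ok = consistent(e.fun, env) and consistent(e.arg, env)
        return (ok and isinstance(e.fun.ty, Arrow)
                and e.fun.ty.dom == e.arg.ty and e.fun.ty.cod == e.ty)
    return False

if __name__ == "__main__":
    nat = Base("Nat")
    # (\x:Nat. x) y, with every subexpression labelled
    ident = Lam("x", nat, Var("x", nat), Arrow(nat, nat))
    term = App(ident, Var("y", nat), nat)
    print(consistent(term, {"y": nat}))  # True
```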
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness wrt. program semantics, since the encodings are adequate translations wrt. contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
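As a loose illustration of the blocking versus busy-waiting distinction drawn above (plain Python threading, not the lambda calculus with futures and handles used in the paper), the sketch below implements a one-place buffer twice: once with blocking waits on a condition variable, and once by repeatedly re-checking the cell, i.e. busy-waiting.

```python
# Hypothetical one-cell buffer sketches, only to illustrate the blocking vs.
# busy-waiting distinction mentioned above; not the encodings from the paper.
import threading
import time

class BlockingBuffer:
    """put/get block on a condition variable until the cell is free/full."""
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._full = False

    def put(self, v):
        with self._cond:
            while self._full:           # wait until the cell is empty
                self._cond.wait()
            self._value, self._full = v, True
            self._cond.notify_all()

    def get(self):
        with self._cond:
            while not self._full:       # wait until the cell is filled
                self._cond.wait()
            v, self._full = self._value, False
            self._cond.notify_all()
            return v

class BusyWaitingBuffer:
    """Same interface, but waiting is done by spinning and re-checking."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None
        self._full = False

    def put(self, v):
        while True:
            with self._lock:
                if not self._full:
                    self._value, self._full = v, True
                    return
            time.sleep(0)               # yield and retry: busy-waiting

    def get(self):
        while True:
            with self._lock:
                if self._full:
                    self._full = False
                    return self._value
            time.sleep(0)

if __name__ == "__main__":
    for buf in (BlockingBuffer(), BusyWaitingBuffer()):
        producer = threading.Thread(target=lambda: buf.put(42))
        producer.start()
        print(type(buf).__name__, buf.get())
        producer.join()
```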
This paper analyzes the risk properties of typical asset-backed securities (ABS), such as CDOs or MBS, relying on a model with both macroeconomic and idiosyncratic components. The examined properties include expected loss, loss given default, and macro factor dependencies. Using a two-dimensional loss decomposition as a new metric, the risk properties of individual ABS tranches can be compared directly to those of corporate bonds, within and across rating classes. Using Monte Carlo simulation, we find that the risk properties of ABS differ significantly and systematically from those of straight bonds with the same rating. In particular, loss given default, the sensitivities to macroeconomic risk, and model risk differ greatly between instruments. Our findings have implications for understanding the credit crisis and for policy making. On an economic level, our analysis suggests a new explanation for the observed rating inflation in structured finance markets during the pre-crisis period 2004-2007. On a policy level, our findings call for an end to the 'one-size-fits-all' approach to the rating methodology for fixed income instruments, and for a dedicated rating methodology for structured finance instruments. JEL Classification: G21, G28
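A minimal sketch of the kind of simulation described above, assuming a one-factor Gaussian model with a macroeconomic and an idiosyncratic component (all parameter values are invented for illustration and are not taken from the paper): pool losses are simulated, a tranche defined by attachment and detachment points is carved out, and its expected loss and loss given default are estimated.

```python
# Illustrative one-factor Monte Carlo for a securitized pool with a macro and
# an idiosyncratic component; all parameter values are made up for the example.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

n_obligors = 100            # assets in the pool
pd_asset = 0.02             # unconditional default probability per asset
rho = 0.2                   # loading on the macro factor
lgd_asset = 0.5             # loss given default per asset
n_sim = 50_000

attach, detach = 0.03, 0.07         # tranche attachment / detachment points
thickness = detach - attach
threshold = norm.ppf(pd_asset)      # default threshold for the latent variable

# Latent asset value = macro factor + idiosyncratic shock
z = rng.standard_normal((n_sim, 1))                  # one macro draw per scenario
eps = rng.standard_normal((n_sim, n_obligors))       # idiosyncratic shocks
assets = np.sqrt(rho) * z + np.sqrt(1.0 - rho) * eps

defaults = assets < threshold
pool_loss = lgd_asset * defaults.mean(axis=1)        # pool loss fraction

# Losses hit the tranche only between its attachment and detachment points
tranche_loss = np.clip(pool_loss - attach, 0.0, thickness) / thickness

hit = tranche_loss > 0
lgd_tranche = tranche_loss[hit].mean() if hit.any() else 0.0
print(f"tranche expected loss: {tranche_loss.mean():.4%}")
print(f"tranche P(loss > 0):   {hit.mean():.4%}")
print(f"tranche LGD:           {lgd_tranche:.4%}")
```

Comparing such tranche statistics with those of a single bond of the same expected loss is the kind of two-dimensional comparison (expected loss versus loss given default) the abstract refers to.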
Induced charge computation
(2009)
One of the main aspects of statistical mechanics is that the properties of a thermodynamic state point do not depend on the choice of the statistical ensemble. This equivalence breaks down for small systems, e.g. single molecules. Hence, the choice of the statistical ensemble is crucial for the interpretation of single-molecule experiments, where the outcome of measurements depends on which variables, or control parameters, are held fixed and which ones are allowed to fluctuate. Following this principle, this thesis investigates the thermodynamics of single-polymer pulling experiments within two different statistical ensembles. The scaling of the conjugate chain ensembles, the fixed end-to-end vector (Helmholtz) ensemble and the fixed applied force (Gibbs) ensemble, is studied in depth. This thesis further investigates the ensemble equivalence for different force regimes and polymer-chain contour lengths. Coarse-grained molecular dynamics simulations, i.e. Langevin dynamics, were found to complement the theoretical predictions for the scaling of the ensemble difference of Gaussian chains in different force regimes, with special attention given to the zero-force regime. After constructing Helmholtz and Gibbs conjugate ensembles for a Gaussian chain, two different data sets of thermodynamic states on the force-extension plane, i.e. force-extension curves, were generated. The ensemble difference is computed for different polymer-chain lengths using these force-extension curves. The scaling of the ensemble difference versus relative polymer-chain length under different force regimes has been derived from the simulation data and compared to theoretical predictions. The results demonstrate that the Gaussian chain in the zero-force limit generates nonequivalent ensembles, regardless of its equilibrium bond length and polymer-chain contour length. Moreover, if charged polymers are confined, coarse-graining is problematic owing to dielectric interfaces. Hence, the effect of dielectric interfaces must be taken into account when describing physical systems such as ionic channels or biopolymers inside nanopores. It is shown that the effect of dielectrics is crucial for the dynamics of a biopolymer or an ion inside a nanopore. In simulations, the efficient and accurate computation of electrostatic interactions in the presence of an arbitrarily shaped dielectric domain is challenging. Several solutions to this problem have been proposed in the literature, such as a density functional approach, transforming the problem at hand into an algebraic one (Induced Charge Computation, ICC), and boundary element methods. Even though the essential concept, replacing the dielectric interface with a polarization charge density, is the same, these approaches have been analyzed and the ICC algorithm has been implemented. A new, superior boundary element method has been devised that computes forces via the Particle-Particle Particle-Mesh (P3M) method for periodic geometries (ICCP3M). This method has been compared to the ICC algorithm, the algebraic solutions, and density functional approaches. Extensive numerical tests against analytically tractable geometries have confirmed the correctness and applicability of the developed and implemented algorithms, demonstrating that ICCP3M is the fastest and most versatile algorithm. Further optimization issues in obtaining accurate induced charge densities are also discussed.
The potential of mean force (PMF) of DNA modelled at a coarse-grained level inside a nanopore is investigated with and without the inclusion of dielectric effects. Despite the simplicity of the model, the dramatic effect of including the dielectric interfaces is clearly seen in the observed force profile.
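The Gibbs-ensemble (fixed-force) side of the polymer-pulling simulations described above can be sketched in a few lines: an overdamped Langevin integration of a harmonic (Gaussian) bead-spring chain, tethered at one end and pulled by a constant force at the other, whose mean extension can be compared with the exact harmonic-spring result ⟨z⟩ = N f / k. All parameters are assumed for illustration; this is a generic sketch, not the thesis code.

```python
# Overdamped Langevin dynamics of a Gaussian (harmonic) bead-spring chain held
# at one end and pulled by a constant force f at the other (Gibbs ensemble).
# Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

n_springs = 20          # number of harmonic bonds (zero rest length)
k = 3.0                 # spring constant
kT = 1.0                # thermal energy
gamma = 1.0             # friction coefficient
f_ext = 0.5             # constant pulling force along z
dt = 1e-3
n_steps = 400_000

pos = np.zeros((n_springs + 1, 3))      # bead 0 is tethered at the origin

def spring_forces(x):
    """Harmonic forces for zero-rest-length bonds between consecutive beads."""
    bonds = x[1:] - x[:-1]               # bond vectors
    f = np.zeros_like(x)
    f[:-1] += k * bonds                  # pull bead i towards bead i+1
    f[1:]  -= k * bonds                  # pull bead i+1 towards bead i
    return f

z_end = []
noise_amp = np.sqrt(2.0 * kT * dt / gamma)
for step in range(n_steps):
    force = spring_forces(pos)
    force[-1, 2] += f_ext                # constant force on the last bead
    pos += (force / gamma) * dt + noise_amp * rng.standard_normal(pos.shape)
    pos[0] = 0.0                         # keep the first bead fixed
    if step > n_steps // 4:              # discard equilibration
        z_end.append(pos[-1, 2])

print("simulated  <z> =", np.mean(z_end))        # stochastic estimate
print("analytical <z> =", n_springs * f_ext / k) # f/k extension per spring
```

The fixed end-to-end vector (Helmholtz) counterpart would instead constrain the last bead and measure the average constraint force, which is the conjugate measurement discussed in the abstract.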
Introduction Complex psychopathological and behavioral symptoms, such as delusions and aggression against care providers, are often the primary cause of acute hospital admissions of elderly patients to emergency units and psychiatric departments. This issue represents a clinically highly relevant interdisciplinary diagnostic and therapeutic challenge across many medical specialties and general practice. At least 50% of the dramatically growing number of patients with dementia exhibit aggressive and agitated symptoms during the course of clinical progression, particularly at moderate clinical severity. Methods Commonly used rating scales for agitation and aggression are reviewed and discussed. Furthermore, we focus in this article on the benefits and limitations of all available data on anticonvulsants published for this specific indication, such as valproate, carbamazepine, oxcarbazepine, lamotrigine, gabapentin and topiramate. Results To date, the most positive and robust data are available for carbamazepine; however, pharmacokinetic interactions due to secondary enzyme induction limit its use. Controlled data on valproate do not seem to support its use in this population. For oxcarbazepine only one controlled, but negative, trial is available. Positive small series and case reports have been reported for lamotrigine, gabapentin and topiramate. Conclusions So far, the data on anticonvulsants in demented patients with behavioral disturbances are not convincing. Controlled clinical trials of newer anticonvulsants with a better tolerability profile, using specific, valid and psychometrically sound instruments, are mandatory to verify whether they can contribute as a treatment option in this indication.
Algorithmic trading engines versus human traders – do they behave different in securities markets?
(2009)
After exchanges and alternative trading venues introduced electronic execution mechanisms worldwide, the focus of the securities trading industry shifted to the use of fully electronic trading engines by banks, brokers and their institutional customers. These Algorithmic Trading engines enable order submission without human intervention, based on quantitative models applying historical and real-time market data. Although there is a widespread discussion on the pros and cons of Algorithmic Trading and on its impact on market volatility and market quality, little is known about how algorithms actually place their orders in the market and whether, and in which respects, this differs from other order submissions. Based on a dataset that – for the first time – includes a specific flag enabling the identification of orders submitted by Algorithmic Trading engines, the paper investigates the extent of Algorithmic Trading activity and specifically their order placement strategies in comparison to human traders in the Xetra trading system. It is shown that Algorithmic Trading has become a relevant part of overall market activity and that Algorithmic Trading engines fundamentally differ from human traders in their order submission, modification and deletion behavior, as they exploit real-time market data and latest market movements.
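The kind of flag-based comparison described above can be sketched with pandas. The column names (order_id, algo_flag, event, size) and the tiny example records are hypothetical placeholders, since the layout of the actual Xetra dataset is not described here.

```python
# Hypothetical sketch: compare order handling of algorithmic vs. human order
# flow from a per-event order log. Column names are assumed for illustration.
import pandas as pd

# Example records: one row per order event (submission, modification, deletion)
orders = pd.DataFrame({
    "order_id":  [1, 1, 2, 2, 2, 3, 4, 4],
    "algo_flag": [True, True, False, False, False, True, False, False],
    "event":     ["submit", "delete", "submit", "modify", "delete",
                  "submit", "submit", "modify"],
    "size":      [500, 500, 200, 150, 150, 1000, 300, 250],
})

# Share of overall activity attributable to flagged (algorithmic) orders
activity_share = orders.groupby("algo_flag").size() / len(orders)

# Per-trader-type event mix: how often orders are modified or deleted
event_mix = (orders.groupby(["algo_flag", "event"]).size()
                   .unstack(fill_value=0)
                   .pipe(lambda t: t.div(t.sum(axis=1), axis=0)))

print("share of events by algo flag:\n", activity_share, "\n")
print("event mix (rows sum to 1):\n", event_mix)
```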
Background, aim, and scope Food consumption is an important route of human exposure to endocrine-disrupting chemicals. So far, this has been demonstrated by exposure modeling or analytical identification of single substances in foodstuff (e.g., phthalates) and human body fluids (e.g., urine and blood). Since the research in this field is focused on few chemicals (and thus missing mixture effects), the overall contamination of edibles with xenohormones is largely unknown. The aim of this study was to assess the integrated estrogenic burden of bottled mineral water as model foodstuff and to characterize the potential sources of the estrogenic contamination. Materials, methods, and results In the present study, we analyzed commercially available mineral water in an in vitro system with the human estrogen receptor alpha and detected estrogenic contamination in 60% of all samples with a maximum activity equivalent to 75.2 ng/l of the natural sex hormone 17beta-estradiol. Furthermore, breeding of the molluskan model Potamopyrgus antipodarum in water bottles made of glass and plastic [polyethylene terephthalate (PET)] resulted in an increased reproductive output of snails cultured in PET bottles. This provides first evidence that substances leaching from plastic food packaging materials act as functional estrogens in vivo. Discussion and conclusions Our results demonstrate a widespread contamination of mineral water with xenoestrogens that partly originates from compounds leaching from the plastic packaging material. These substances possess potent estrogenic activity in vivo in a molluskan sentinel. Overall, the results indicate that a broader range of foodstuff may be contaminated with endocrine disruptors when packed in plastics. Keywords Endocrine disrupting chemicals - Estradiol equivalents - Human exposure - In vitro effects - In vivo effects - Mineral water - Plastic bottles - Plastic packaging - Polyethylene terephthalate - Potamopyrgus antipodarum - Yeast estrogen screen - Xenoestrogens
The role of microglial cells in the pathogenesis of Alzheimer’s disease (AD) neurodegeneration is unknown. Although several works suggest that chronic neuroinflammation caused by activated microglia contributes to neurofibrillary degeneration, anti-inflammatory drugs do not prevent or reverse neuronal tau pathology. This raises the question if indeed microglial activation occurs in the human brain at sites of neurofibrillary degeneration. In view of the recent work demonstrating presence of dystrophic (senescent) microglia in aged human brain, the purpose of this study was to investigate microglial cells in situ and at high resolution in the immediate vicinity of tau-positive structures in order to determine conclusively whether degenerating neuronal structures are associated with activated or with dystrophic microglia. We used a newly optimized immunohistochemical method for visualizing microglial cells in human archival brain together with Braak staging of neurofibrillary pathology to ascertain the morphology of microglia in the vicinity of tau-positive structures. We now report histopathological findings from 19 humans covering the spectrum from none to severe AD pathology, including patients with Down’s syndrome, showing that degenerating neuronal structures positive for tau (neuropil threads, neurofibrillary tangles, neuritic plaques) are invariably colocalized with severely dystrophic (fragmented) rather than with activated microglial cells. Using Braak staging of Alzheimer neuropathology we demonstrate that microglial dystrophy precedes the spread of tau pathology. Deposits of amyloid-beta protein (A beta) devoid of tau-positive structures were found to be colocalized with non-activated, ramified microglia, suggesting that A beta does not trigger microglial activation. Our findings also indicate that when microglial activation does occur in the absence of an identifiable acute central nervous system insult, it is likely to be the result of systemic infectious disease. The findings reported here strongly argue against the hypothesis that neuroinflammatory changes contribute to AD dementia. Instead, they offer an alternative hypothesis of AD pathogenesis that takes into consideration: (1) the notion that microglia are neuron-supporting cells and neuroprotective; (2) the fact that development of non-familial, sporadic AD is inextricably linked to aging. They support the idea that progressive, aging-related microglial degeneration and loss of microglial neuroprotection rather than induction of microglial activation contributes to the onset of sporadic Alzheimer’s disease. The results have far-reaching implications in terms of reevaluating current treatment approaches towards AD.
Background The role of the Fcgamma receptor IIa (FcgammaRIIa), a receptor for C-reactive protein (CRP), the classical acute phase protein, in atherosclerosis is not yet clear. We sought to investigate the association of FcgammaRIIa genotype with risk of coronary heart disease (CHD) in two large population-based samples. Methods FcgammaRIIa-R/H131 polymorphisms were determined in a population of 527 patients with a history of myocardial infarction and 527 age and gender matched controls drawn from a population-based MONICA- Augsburg survey. In the LURIC population, 2227 patients with angiographically proven CHD, defined as having at least one stenosis [greater than or equal to]50%, were compared with 1032 individuals with stenosis <50%. Results In both populations genotype frequencies of the FcgammaRIIa gene did not show a significant departure from the Hardy-Weinberg equilibrium. FcgammaRIIa R(-131)->H genotype was not independently associated with lower risk of CHD after multivariable adjustments, neither in the MONICA population (odds ratio (OR) 1.08; 95% confidence interval (CI) 0.81 to 1.44), nor in LURIC (OR 0.96; 95% CI 0.81 to 1.14). Conclusion Our results do not confirm an independent relationship between FcgammaRIIa genotypes and risk of CHD in these populations.
Background Treatment options for metastatic renal cell carcinoma (RCC) are limited due to resistance to chemo- and radiotherapy. The development of small-molecule multikinase inhibitors has now opened novel treatment options. The influence of the receptor tyrosine kinase inhibitor AEE788, applied alone or combined with the mammalian target of rapamycin (mTOR) inhibitor RAD001, on RCC cell adhesion and proliferation in vitro has been evaluated. Methods RCC cell lines Caki-1, KTC-26 or A498 were treated with various concentrations of RAD001 or AEE788, and tumor cell proliferation and tumor cell adhesion to vascular endothelial cells or to immobilized extracellular matrix proteins (laminin, collagen, fibronectin) were evaluated. The anti-tumoral potential of RAD001 combined with AEE788 was also investigated. Both asynchronous and synchronized cell cultures were used to subsequently analyze drug-induced cell cycle manipulation. Analysis of cell cycle regulating proteins was done by western blotting. Results RAD001 or AEE788 reduced adhesion of RCC cell lines to vascular endothelium and diminished RCC cell binding to immobilized laminin or collagen. Both drugs blocked RCC cell growth, impaired cell cycle progression and altered the expression level of the cell cycle regulating proteins cdk2, cdk4, cyclin D1, cyclin E and p27. The combination of AEE788 and RAD001 resulted in more pronounced RCC growth inhibition, greater rates of G0/G1 cells and lower rates of S-phase cells than either agent alone. Cell cycle proteins were much more strongly altered when both drugs were used in combination than with single drug application. The synergistic effects were observed in an asynchronous cell culture model, but were more pronounced in synchronous RCC cell cultures. Conclusions Potent anti-tumoral activities of the multikinase inhibitors AEE788 or RAD001 have been demonstrated. Most importantly, the simultaneous use of both AEE788 and RAD001 offered a distinct combinatorial benefit and thus may provide a therapeutic advantage over either agent employed as a monotherapy for RCC treatment.
Background Many systems in nature are characterized by complex behaviour where large cascades of events, or avalanches, unpredictably alternate with periods of little activity. Snow avalanches are an example. Often the size distribution f(s) of a system's avalanches follows a power law, and the branching parameter sigma, the average number of events triggered by a single preceding event, is unity. A power law for f(s), and sigma=1, are hallmark features of self-organized critical (SOC) systems, and both have been found for neuronal activity in vitro. Therefore, and since SOC systems and neuronal activity both show large variability, long-term stability and memory capabilities, SOC has been proposed to govern neuronal dynamics in vivo. Testing this hypothesis is difficult because neuronal activity is spatially or temporally subsampled, while theories of SOC systems assume full sampling. To close this gap, we investigated how subsampling affects f(s) and sigma by imposing subsampling on three different SOC models. We then compared f(s) and sigma of the subsampled models with those of multielectrode local field potential (LFP) activity recorded in three macaque monkeys performing a short term memory task. Results Neither the LFP nor the subsampled SOC models showed a power law for f(s). Both, f(s) and sigma, depended sensitively on the subsampling geometry and the dynamics of the model. Only one of the SOC models, the Abelian Sandpile Model, exhibited f(s) and sigma similar to those calculated from LFP activity. Conclusions Since subsampling can prevent the observation of the characteristic power law and sigma in SOC systems, misclassifications of critical systems as sub- or supercritical are possible. Nevertheless, the system specific scaling of f(s) and sigma under subsampling conditions may prove useful to select physiologically motivated models of brain function. Models that better reproduce f(s) and sigma calculated from the physiological recordings may be selected over alternatives.
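A toy version of the subsampling analysis described above, assuming a simple critical branching process as a stand-in for the three SOC models used in the paper: avalanches are generated, the avalanche size distribution f(s) and branching parameter sigma are estimated from the full activity, and the same quantities are recomputed when only a fraction of events is observed (binomial thinning).

```python
# Toy branching-process avalanches illustrating how subsampling changes the
# observed size distribution f(s) and branching parameter sigma. This is a
# deliberately simplified stand-in, not one of the SOC models in the paper.
import numpy as np

rng = np.random.default_rng(2)

sigma_true = 1.0          # critical branching parameter
n_avalanches = 20_000
max_generations = 1_000
p_observe = 0.1           # fraction of events that are actually recorded

def run_avalanche():
    """Return the activity (number of events per time bin) of one avalanche."""
    activity = [1]
    while activity[-1] > 0 and len(activity) < max_generations:
        # every active event triggers on average sigma_true events in the next bin
        activity.append(rng.poisson(sigma_true * activity[-1]))
    return np.array(activity)

def branching_estimate(activity):
    """sigma estimated as the mean ratio of successive non-zero activities."""
    prev, nxt = activity[:-1], activity[1:]
    mask = prev > 0
    return np.mean(nxt[mask] / prev[mask]) if mask.any() else np.nan

full_sizes, sub_sizes, full_sigma, sub_sigma = [], [], [], []
for _ in range(n_avalanches):
    act = run_avalanche()
    obs = rng.binomial(act, p_observe)        # binomial thinning = subsampling
    full_sizes.append(act.sum())
    full_sigma.append(branching_estimate(act))
    if obs.sum() > 0:                         # completely unseen avalanches are lost
        sub_sizes.append(obs.sum())
        sub_sigma.append(branching_estimate(obs))

print("fully sampled: median size %.1f, sigma %.3f"
      % (np.median(full_sizes), np.nanmean(full_sigma)))
print("subsampled:    median size %.1f, sigma %.3f"
      % (np.median(sub_sizes), np.nanmean(sub_sigma)))
```

Even in this caricature, the subsampled size distribution and the estimated sigma deviate from the fully sampled values, which is the effect the paper exploits when comparing subsampled SOC models with LFP recordings.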
Background Evidence-based guidelines potentially improve healthcare. However, their de novo development requires substantial resources, especially for complex conditions, and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development, the systematic guideline review method (SGR), and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care published in English or German between the years 2000 and 2004 were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, the methodological quality of selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis and synthesized in a preliminary draft. The most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results Of the 16 included guidelines, five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 unratable (derived from a single guideline). Of the 25 consistent recommendations, 14 were based on consensus, seven on evidence, and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, differing evidence levels and minor inconsistencies); the majority was congruent. Incongruencies were found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the source guidelines focused on treatment benefits and underestimated the risks. The draft guideline was completed in 8.5 man-months. The main limitation of this study was the lack of a second reviewer. Conclusions The systematic guideline review, including framework development, consistency analysis and validation, is an effective, valid and resource-saving approach to the development of evidence-based guidelines.
Riboswitches are a novel class of genetic control elements that function through the direct interaction of small metabolite molecules with structured RNA elements. The ligand is bound with high specificity and affinity to its RNA target and induces conformational changes of the RNA's secondary and tertiary structure upon binding. To elucidate the molecular basis of the remarkable ligand selectivity and affinity of one of these riboswitches, extensive all-atom molecular dynamics simulations in explicit solvent (approximately 1 µs total simulation length) of the aptamer domain of the guanine sensing riboswitch are performed. The conformational dynamics is studied when the system is bound to its cognate ligand guanine as well as bound to the non-cognate ligand adenine and in its free form. The simulations indicate that residue U51 in the aptamer domain functions as a general docking platform for purine bases, whereas the interactions between C74 and the ligand are crucial for ligand selectivity. These findings either suggest a two-step ligand recognition process, including a general purine binding step and a subsequent selection of the cognate ligand, or hint at different initial interactions of cognate and noncognate ligands with residues of the ligand binding pocket. To explore possible pathways of complex dissociation, various nonequilibrium simulations are performed which account for the first steps of ligand unbinding. The results delineate the minimal set of conformational changes needed for ligand release, suggest two possible pathways for the dissociation reaction, and underline the importance of long-range tertiary contacts for locking the ligand in the complex.
Oligonucleotides suppress PKB/Akt and act as superinductors of apoptosis in human keratinocytes
(2009)
DNA oligonucleotides (ODN) applied to an organism are known to modulate the innate and adaptive immune system. Previous studies showed that a CpG-containing ODN (CpG-1-PTO) and, interestingly, also a non-CpG-containing ODN (nCpG-5-PTO) suppress inflammatory markers in skin. In the present study it was investigated whether these molecules also influence cell apoptosis. Here we show that CpG-1-PTO, nCpG-5-PTO and also natural DNA suppress the phosphorylation of PKB/Akt in a cell-type-specific manner. Interestingly, only epithelial cells of the skin (normal human keratinocytes, HaCaT and A-431) show a suppression of PKB/Akt. This suppressive effect depends on ODN length, sequence and backbone. Moreover, it was found that TGFa-induced levels of PKB/Akt and EGFR were suppressed by the ODN tested. We hypothesize that this suppression might facilitate programmed cell death. Testing this hypothesis, we found an increase of apoptosis markers (caspase 3/7, 8, 9, cytosolic cytochrome c, histone-associated DNA fragments, apoptotic bodies) when cells were treated with ODN in combination with low doses of staurosporin, a well-known pro-apoptotic stimulus. In summary, the present data demonstrate DNA as a modulator of apoptosis which specifically targets skin epithelial cells.
Global warming is expected to be associated with diverse changes in freshwater habitats in north-western Europe. Increasing evaporation, lower oxygen concentration due to increased water temperature and changes in precipitation patterns are likely to affect the survival ratio and reproduction rate of freshwater gastropods (Pulmonata, Basommatophora). This work is a comprehensive analysis of the climatic factors influencing their ranges both in the past and in the near future. A macroecological approach showed that for a great proportion of genera the ranges were projected to contract by 2080, even if unlimited dispersal was assumed. The forecasted warming predicted the emergence of new suitable areas in the cooler northern ranges, but also drastically reduced the available habitat in the southern part of the studied region. In order to better understand the range dynamics in the past and the post-glacial colonisation patterns, an approach combining ecological niche modelling and phylogeography was used for two model species, Radix balthica and Ancylus fluviatilis. Phylogeographic model selection on a COI mtDNA dataset confirmed that R. balthica most likely spread from two disjunct central European refuges after the last glacial maximum. The phylogeographic analysis of A. fluviatilis, using 16S and COI mtDNA datasets, also inferred central European refugia. The absence of niche conservatism (adaptive potential) inferred for A. fluviatilis puts a cautionary note on the use of climate envelope models to predict the future ranges of this species. However, the other model species exhibited strong niche conservatism, which allows confidence to be placed in such predictions. A profound faunal shift will take place in Central Europe within the next century, either permitting the establishment of species currently living south of the studied region or the proliferation of organisms relying on the same food resources. This study points out the need for further investigations of the dispersal modes of freshwater snails, since the future range size of these species depends on their ability to establish in newly available habitats. Likewise, the mixed mating system of these organisms gives them the possibility to found a new population from a single individual. This will probably affect their colonisation success and needs further investigation.
Lentiviral vectors mediate gene transfer into dividing and most non-dividing cells. Thereby, they stably integrate the transgene into the host cell genome. For this reason, lentiviral vectors are a promising tool for gene therapy. However, the safety and efficiency of lentivirally mediated gene transfer still need to be optimised. Ideally, cell entry should be restricted to the cell population relevant for a particular therapeutic application. Furthermore, lentiviral vectors able to transduce quiescent lymphocytes are desirable. Although many approaches have been followed to engineer retroviral envelope proteins, an effective and universally applicable system for retargeting lentiviral cell entry is still not available. Just before the experimental work of this thesis was started, retargeting of measles virus (MV) cell entry was achieved. This virus has two types of envelope glycoproteins, the hemagglutinin (H) protein responsible for receptor recognition and the fusion (F) protein mediating membrane fusion. For retargeting, the H protein was mutated in its interaction sites for the native MV receptors and a ligand or a single-chain antibody (scAb) was fused to its ectodomain. It was hypothesised that the retargeting system of MV can be transferred to lentiviral vectors by pseudotyping human immunodeficiency virus-1 (HIV-1) derived vector particles with the MV glycoproteins. As the unmodified MV glycoproteins did not pseudotype HIV vectors, two F and 15 H protein variants carrying stepwise truncations or amino acid (aa) exchanges in their cytoplasmic tails were screened for their ability to form MV-HIV pseudotypes. The combinations Hcd18/Fcd30, Hcd19/Fcd30 and Hcd24+4A/Fcd30 led to the most efficient pseudotype formation, with titers above 10^6 transducing units/ml using concentrated particles. The F cytoplasmic tail was truncated by 30 aa and the H cytoplasmic tail was truncated by 18, 19 or 24 residues, with four alanines added after the start methionine in the latter case. Western blot analysis indicated that particle incorporation of the MV glycoproteins was enhanced upon truncation of their cytoplasmic tails. With the MV-HIV vectors, high titers on different cell lines expressing one or both MV receptors were obtained, whereas MV receptor-negative cells remained untransduced. Titers were enhanced using an optimal H to F plasmid ratio (1:7) during vector particle production. Based on the described pseudotyping with the MV glycoprotein variants, HIV vectors retargeted to the epidermal growth factor receptor (EGFR) or the B cell surface marker CD20 were generated. For the production of the retargeted vectors MVaEGFR-HIV and MVaCD20-HIV, Fcd30 was used together with a native-receptor-blind Hcd18 protein displaying at its ectodomain either the ligand EGF or a scAb directed against CD20. With these vectors, gene transfer into target receptor-positive cells was several orders of magnitude more efficient than into control cells. The almost complete absence of background transduction of non-target cells was demonstrated, for example, in mixed cell populations, where the CD20-targeting vector selectively eliminated CD20-positive cells upon suicide gene transfer. Remarkably, transduction of activated primary human CD20-positive B cells was much more efficient with the MVaCD20-HIV vector than with the standard pseudotype vector VSV-G-HIV. Even more surprisingly, MVaCD20-HIV vectors were able to transduce quiescent primary human B cells, which until then had been resistant to lentiviral gene transfer.
The most critical step during the production of MV-HIV pseudotypes was the identification of H cytoplasmic tail mutants that allowed pseudotyping while retaining the fusion helper function. In contrast to previously inefficient targeting strategies, the reason for the success of this novel targeting system must be based on the separation of the receptor recognition and fusion functions onto two different proteins. Furthermore, with the CD20-targeting vector transduction of quiescent B cells was demonstrated for the first time. Own data and literature data suggest that CD20 binding and hyper-cross-linking by the vector particles results in calcium influx and thus activation of quiescent B cells. Alternatively this feature may be based on a residual binding activity of the MV glycoproteins to the native MV receptors that is insufficient for entry but induces cytoskeleton rearrangements dissolving the post-entry block of HIV vectors. Hence, in this thesis efficient retargeting of lentiviral vectors and transduction of quiescent cells was combined. This novel targeting strategy should be easily adaptable to many other target molecules by extending the modified MV H protein with appropriate specific domains or scAbs. It should now be possible to tailor lentiviral vectors for highly selective gene transfer into any desired target cell population with an unprecedented degree of efficiency.
Neutron stars are very dense objects. One teaspoon of their material would have a mass of five billion tons. Their gravitational force is so strong that if an object were to fall from just one meter high it would hit the surface of the neutron star at two thousand kilometers per second. In such dense bodies, particles different from the ones present in atomic nuclei, the nucleons, can exist. These particles can be hyperons, which carry non-zero strangeness, or broader resonances. There can also be different states of matter inside neutron stars, such as meson condensates and, if the density is high enough to deconfine the nucleons, quark matter. As new degrees of freedom appear in the system, different aspects of matter have to be taken into account, the most important of them being the restoration of chiral symmetry. This symmetry is spontaneously broken, a fact related to the presence of a condensate of scalar quark-antiquark pairs, which for this reason is called the chiral condensate. This condensate is present at low densities and even in vacuum. It is important to remember at this point that the modern concept of the vacuum is far from emptiness: it is full of virtual particles that are constantly created and annihilated, their existence being allowed by the uncertainty principle. At very high temperature/density, when the composite particles are dissolved into their constituents, the chiral condensate vanishes and chiral symmetry is restored. To explain how and when chiral symmetry is restored in neutron stars we use a model called the non-linear sigma model. This is an effective relativistic quantum model that was developed to describe systems of hadrons interacting via meson exchange. The model is constructed from symmetry relations, which make it chirally invariant. The first consequence of this invariance is that there are no bare mass terms in the Lagrangian density, so that all, or most, of the particle masses arise from interactions with the medium. There are still other interesting features of neutron stars that cannot be found anywhere else in nature. One of them is the high isospin asymmetry. In a normal nucleus, the numbers of protons and neutrons are more or less the same. In a neutron star the number of neutrons is much higher than the number of protons. The resulting extra energy (the Fermi energy) increases the energy of the system, allowing the star to support more mass against gravitational collapse. As a consequence, in early stages of the neutron star evolution, when there are still many trapped neutrinos, the proton fraction is higher than in later stages and consequently the maximum mass that the star can support against gravity is smaller. This, among many other features, shows how the microscopic physics of the star is reflected in its macroscopic properties. Another important property of neutron stars is charge neutrality. It is a required assumption for stability in neutron stars, but there are others. One example is chemical equilibrium: the number of particles of each kind is not conserved, but particles are created and annihilated through specific reactions that proceed at the same rate in both directions. Although the space-time of special relativity, Minkowski space, can be used to calculate the microscopic physics of neutron stars, this is not true for the global properties of the star. In this case general relativity has to be used.
The solutions of Einstein's equations simplified to static, spherical and isotropic stars correspond to the configurations in which the star is in hydrostatic equilibrium. This means that the internal pressure, coming mainly from the Fermi energy of the neutrons, balances gravity and prevents the collapse. When rotation is included the star becomes more stable and, consequently, can be more massive. The rotation also makes the star non-spherical, which requires the metric to be a function of the polar coordinate as well. Another important feature that has to be taken into account is the dragging of the local inertial frame. It generates centrifugal forces that do not originate in interactions with other bodies, but from the non-rotation of the frame of reference within which observations are made. These modifications are introduced through Hartle's approximation, which solves the problem by applying perturbation theory. In the mean-field approximation, the couplings as well as the parameters of the non-linear sigma model are calibrated to reproduce massive neutron stars. The introduction of new degrees of freedom decreases the maximum mass allowed for the neutron star, as they soften the equation of state. In practice, the only baryons present in the star besides the nucleons are the Lambda and Sigma-, in the case in which the baryon octet is included, and the Lambda and Delta-,0,+,++, in the case in which the baryon decuplet is included. The leptons are included to ensure charge neutrality. We choose to proceed with our calculations including the baryon octet but not the decuplet, in order to avoid uncertainties in the couplings. The couplings of the hyperons were fitted to the depths of their potentials in nuclei. In this case the chiral symmetry restoration can be observed through the behavior of the related order parameter. The symmetry begins to be restored inside neutron stars and the transition is a smooth crossover. Different stages of the neutron star cooling are reproduced taking into account trapped neutrinos, finite temperature and entropy. Finite-temperature calculations include the heat bath of hadronic quasiparticles within the grand canonical potential of the system. Different schemes are considered, with constant temperature, metric-dependent temperature and constant entropy. The neutrino chemical potential is introduced by fixing the lepton number in the system, which also controls the amount of electrons and protons (for charge neutrality). The balance between these two features is delicate and influenced mainly by baryon number conservation. Isolated stars have a fixed number of baryons, which creates a link between different stages of the cooling. The maximum masses allowed in each stage of the cooling process are determined: for the stage with high entropy and trapped neutrinos, the deleptonized stage with high entropy, and the cold stage in beta equilibrium. The cooling process is also influenced by constraints related to the rotation of the star. When rotation is included the star becomes more stable and, consequently, can be more massive. The rotation also deforms the star, requiring the metric to include modifications that are introduced through the use of perturbation theory. The analysis of the first stages of the neutron star, when it is called a proto-neutron star, gives certain constraints on the possible rotation frequencies in the colder stages.
Instability windows are calculated in which the star can be stable during certain stages but collapses into a black hole during the cooling process. In the last part of the work the hadronic SU(3) model is extended to include quark degrees of freedom. A new effective potential for the order parameter of deconfinement, the Polyakov loop, connects the physics at low chemical potential and high temperature in the QCD phase diagram with the high chemical potential and low temperature part. This is done through the introduction of a chemical potential dependence in the already temperature-dependent potential. Analyzing the effect of both order parameters, the chiral condensate and the Polyakov loop, we can draw a phase diagram for symmetric matter as well as for star matter. The diagram contains a crossover region as well as a first-order phase transition line. The new couplings and parameters of the model are chosen mainly to fit lattice QCD, including the position of the critical point. Finally, this matter containing different degrees of freedom (depending on which phase of the diagram we are in) is used to calculate hybrid star properties.
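For reference, the hydrostatic-equilibrium configurations of static, spherical, isotropic stars discussed in this abstract are the solutions of the standard Tolman-Oppenheimer-Volkoff (TOV) equations, which such a calculation presumably integrates together with the model equation of state before adding Hartle's rotational corrections; in units with G = c = 1 they read

\[
\frac{dP}{dr} = -\,\frac{\bigl[\varepsilon(r)+P(r)\bigr]\bigl[m(r)+4\pi r^{3}P(r)\bigr]}{r\bigl[r-2m(r)\bigr]},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\varepsilon(r),
\]

where P is the pressure, \(\varepsilon\) the energy density and m(r) the gravitational mass enclosed within radius r; integrating outward from a chosen central density until P vanishes yields the stellar mass and radius.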
Shape complementarity is a compulsory condition for molecular recognition. In our 3D ligand-based virtual screening approach called SQUIRREL, we combine shape-based rigid body alignment with fuzzy pharmacophore scoring. Retrospective validation studies demonstrate the superiority of methods which combine both shape and pharmacophore information on the family of peroxisome proliferator-activated receptors (PPARs). We demonstrate the real-life applicability of SQUIRREL by a prospective virtual screening study, where a potent PPARalpha agonist with an EC50 of 44 nM and 100-fold selectivity against PPARgamma has been identified...
Background The to date evidence for a dose-response relationship between physical workload and the development of lumbar disc diseases is limited. We therefore investigated the possible etiologic relevance of cumulative occupational lumbar load to lumbar disc diseases in a multi-center case-control study. Methods In four study regions in Germany (Frankfurt/Main, Freiburg, Halle/Saale, Regensburg), patients seeking medical care for pain associated with clinically and radiologically verified lumbar disc herniation (286 males, 278 females) or symptomatic lumbar disc narrowing (145 males, 206 females) were prospectively recruited. Population control subjects (453 males and 448 females) were drawn from the regional population registers. Cases and control subjects were between 25 and 70 years of age. In a structured personal interview, a complete occupational history was elicited to identify subjects with certain minimum workloads. On the basis of job task-specific supplementary surveys performed by technical experts, the situational lumbar load represented by the compressive force at the lumbosacral disc was determined via biomechanical model calculations for any working situation with object handling and load-intensive postures during the total working life. For this analysis, all manual handling of objects of about 5 kilograms or more and postures with trunk inclination of 20 degrees or more are included in the calculation of cumulative lumbar load. Confounder selection was based on biologic plausibility and on the change-in-estimate criterion. Odds ratios (OR) and 95% confidence intervals (CI) were calculated separately for men and women using unconditional logistic regression analysis, adjusted for age, region, and unemployment as major life event (in males) or psychosocial strain at work (in females), respectively. To further elucidate the contribution of past physical workload to the development of lumbar disc diseases, we performed lag-time analyses. Results We found a positive dose-response relationship between cumulative occupational lumbar load and lumbar disc herniation as well as lumbar disc narrowing among men and women. Even past lumbar load seems to contribute to the risk of lumbar disc disease. Conclusions According to our study, cumulative physical workload is related to lumbar disc diseases among men and women.
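The adjusted odds-ratio estimation described above can be illustrated with a short statsmodels sketch. The variable names (case, cum_lumbar_load, age, region) are placeholders and the data frame is fabricated for the example; this is not the study data or analysis code.

```python
# Hedged illustration of estimating adjusted odds ratios with 95% CIs by
# unconditional logistic regression; variable names and data are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "case": rng.integers(0, 2, n),                 # 1 = lumbar disc disease
    "cum_lumbar_load": rng.gamma(2.0, 5.0, n),     # cumulative load (arbitrary units)
    "age": rng.integers(25, 71, n),
    "region": rng.choice(["A", "B", "C", "D"], n),
})

model = smf.logit("case ~ cum_lumbar_load + age + C(region)", data=df).fit(disp=False)

odds_ratios = np.exp(model.params)                 # OR per unit of each covariate
conf_int = np.exp(model.conf_int())
conf_int.columns = ["2.5%", "97.5%"]
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```

In a real analysis the exposure would typically be categorized into dose groups, and separate models would be fitted for men and women, as the abstract describes.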
Background Since June 2002, revised regulations in Germany have required "Emergency Medical Care" as an interdisciplinary subject, and state that emergency treatment should be of increasing importance within the curriculum. A survey of the current status of undergraduate medical education in emergency medical care establishes the basis for further committee work. Methods Using a standardized questionnaire, all medical faculties in Germany were asked to answer questions concerning the structure of their curriculum, representation of disciplines, instructors' qualifications, teaching and assessment methods, as well as evaluation procedures. Results Data from 35 of the 38 medical schools in Germany were analysed. In 32 of 35 medical faculties, the local Department of Anaesthesiology is responsible for the teaching of emergency medical care; in two faculties, emergency medicine is taught mainly by the Department of Surgery and in another by Internal Medicine. Lectures, seminars and practical training units are scheduled in varying composition at 97% of the locations. Simulation technology is integrated at 60% (n=21); problem-based learning at 29% (n=10), e-learning at 3% (n=1), and internship in ambulance service is mandatory at 11% (n=4). In terms of assessment methods, multiple-choice exams (15 to 70 questions) are favoured (89%, n=31), partially supplemented by open questions (31%, n=11). Some faculties also perform single practical tests (43%, n=15), objective structured clinical examination (OSCE; 29%, n=10) or oral examinations (17%, n=6). Conclusion Emergency Medical Care in undergraduate medical education in Germany has a practical orientation, but is very inconsistently structured. The innovative options of simulation technology or state-of-the-art assessment methods are not consistently utilized. Therefore, an exchange of experiences and concepts between faculties and disciplines should be promoted to guarantee a standard level of education in emergency medical care.
The LIBOR market model (LMM) has, since its development in the publications of Brace, Gatarek and Musiela (1997) on the one hand and, independently, Miltersen, Sandmann and Sondermann (1997) on the other, become the most widely accepted instrument for modelling the term structure of interest rates and for pricing the associated interest rate derivatives. LIBOR stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London; three- or six-month tenors are commonly used in connection with the LMM. Research aimed at improving this model has grown considerably in recent years. By reducing the error in fitting the daily observed prices of interest rate options such as caps and swaptions, one also obtains more accurate valuations for other, more exotic derivatives. The underlying and central idea of the LMM is to treat the forward rates directly as the primary (vector) process of several LIBOR rates and to model them simultaneously, instead of merely deriving them from an overarching, infinite-dimensional forward rate process as in the earlier Heath-Jarrow-Morton model. The most convincing argument for this discretisation is that the LIBOR rates are directly observable in the market and their volatilities can be related in a natural way to already liquidly traded products, namely those caps and swaptions. Nevertheless, the model contains a serious deficiency in that it does not reproduce any curvature of the volatility surface with respect to options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the distribution show up clearly as missing heavy tails; smile and skew effects are apparent. In the classical LIBOR market model only an affine structure is generated along the strike dimension, which can at best serve as an approximation to the desired surface. The observed distortions naturally lead to an inaccurate representation of reality and to an erroneous reproduction of prices in regions somewhat away from the at-the-money range. Such unwanted dissonances in profit and loss figures led, for example, in 1998 to severe losses in the interest rate derivatives portfolio of what is today the Royal Bank of Scotland. ...
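For reference, the model structure described above can be summarised in one formula (the standard textbook form of the lognormal LIBOR market model, not anything specific to this thesis): each forward LIBOR rate L_i for the accrual period [T_i, T_{i+1}] is modelled directly and, under its associated forward measure, follows a driftless lognormal diffusion,

\[
dL_i(t) = \sigma_i(t)\, L_i(t)\, dW_i^{T_{i+1}}(t), \qquad 0 \le t \le T_i ,
\]

so that caplets on L_i are priced by Black's formula with implied variance \(\bar{\sigma}_i^{2} T_i = \int_0^{T_i} \sigma_i(t)^2\,dt\). Because the terminal distribution of each rate is exactly lognormal, the implied volatility is flat across strikes, which is precisely the missing smile/skew deficiency the text refers to.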
The NADH:ubiquinone oxidoreductase (complex I) is a large membrane bound protein complex coupling the redox reaction of NADH oxidation and quinone reduction to vectorial proton translocation across bioenergetic membranes. The mechanism of proton pumping is still unknown; it seems however that the reduction of quinone induces conformational changes which drive proton uptake from one side and release at the other side of the membrane. In this study the proposed quinone and inhibitor binding pocket located at the interface of the 49-kDa and PSST subunits was explored by a large number of point mutations introduced into complex I from the strictly aerobic yeast Yarrowia lipolytica. Point mutations were systematically chosen based on the crystal structure of the hydrophilic domain of complex I from Thermus thermophilus. In total, the properties of 94 mutants at 39 positions which completely cover the lining of the large putative quinone and inhibitor binding cavity are described and discussed here. A structure/function analysis allowed the identification of functional domains within the large putative quinone binding cavity. A possible quinone access path ranging from the N-terminal beta-sheet of the 49-kDa subunit into the pocket to tyrosine 144 could be defined, since all exchanges introduced here, caused an almost complete loss of complex I activity. A region located deeper in the proposed quinone binding pocket is apparently not important for complex I activity. In contrast, all exchanges of tyrosine 144, even the very conservative mutant Y144F, essentially abolished dNADH:DBQ oxidoreductase activity of complex I. However, with higher concentrations of Q1 or Q2 the dNADH:Q oxidoreductase activity was largely restored in the mutants with the more conservative exchanges. Proton pumping experiments showed that this activity was also coupled to proton translocation, indicating that these quinones were reduced at the physiological site. However, the apparent Km values for Q1 or Q2 were drastically increased, clearly demonstrating that tyrosine 144 is central for quinone binding and reduction. These results further prove that the enzymatically relevant quinone binding site of complex I is located at the interface of the 49-kDa and PSST subunits. The quinone binding pocket is thought to comprise the binding sites for a plethora of specific complex I inhibitors that are usually grouped into three classes. The large array of mutants targeting the quinone binding cavity was examined with a representative of each inhibitor class. Many mutants conferring resistance were identified which, depending on the inhibitor tested, clustered in well defined and partially overlapping regions of the large putative quinone and inhibitor binding cavity. Mutants with effects on type A (DQA) and type B (rotenone) inhibitors were found in a subdomain corresponding to the former [NiFe] site in homologous hydrogenases, whereby the type A inhibitor DQA seems to bind deeper in this domain. Mutants with effects on the type C inhibitor (C12E8) were found in a narrow crevice. Exchanging more exposed residues at the border of these well defined domains affected all three inhibitor types. Therefore, the results as a whole provide further support for the concept that different inhibitor classes bind to different but partially overlapping binding sites within a single large quinone binding pocket. 
In addition, they also indicate the approximate location of the binding sites within the structure of the large quinone and inhibitor binding cavity at the interface of the 49-kDa and PSST subunits. It has been proposed earlier that the highly conserved HRGXE motif in the 49-kDa subunit forms part of the quinone binding site of complex I. Mutagenesis of the HRGXE motif revealed that these residues are rather critical for complex I assembly and seem to have an important structural role. The question why iron-sulfur cluster N1a is not detectable by EPR in many model organisms is not solved yet. Introducing polar and positively charged amino acid residues close to this cluster in order to increase its midpoint potential did not result in the appearance of the cluster N1a EPR signal in mitochondrial membranes from the mutants. Clearly, further research will be necessary to gain insight into the function of this iron-sulfur cluster in complex I. In an additional project, a new and simple in vivo screen for complex I deficiency in Y. lipolytica was developed and optimized. This assay probes for defects in complex I assembly and stability, oxidoreductase activity and also proton pumping activity of complex I. Most importantly, this assay is applicable to all Y. lipolytica strains and could be used to identify loss-of-function mutants, gain-of-function mutants (i.e. resistance towards complex I inhibitors) and revertants due to mutations in both nuclear and mitochondrially encoded genes of complex I subunits.
The light-harvesting complex of photosystem II (LHC-II) is the major antenna complex in plant photosynthesis. It accounts for roughly 30% of the total protein in plant chloroplasts, which makes it arguably the most abundant membrane protein on Earth, and binds about half of plant chlorophyll (Chl). The complex assembles as a trimer in the thylakoid membrane and binds a total of 54 pigment molecules, including 24 Chl a, 18 Chl b, 6 lutein (Lut), 3 neoxanthin (Neo) and 3 violaxanthin (Vio). LHC-II has five key roles in plant photosynthesis. It: (1) harvests sunlight and transmits excitation energy to the reaction centres of photosystems II and I, (2) regulates the amount of excitation energy reaching each of the two photosystems, (3) has a structural role in the architecture of the photosynthetic supercomplexes, (4) contributes to the tight appression of thylakoid membranes in chloroplast grana, and (5) protects the photosynthetic apparatus from photo damage by non photochemical quenching (NPQ). A major fraction of NPQ is accounted for its energy-dependent component qE. Despite being critical for plant survival and having been studied for decades, the exact details of how excess absorbed light energy is dissipated under qE conditions remain enigmatic. Today it is accepted that qE is regulated by the magnitude of the pH gradient (ΔpH) across the thylakoid membrane. It is also well documented that the drop in pH in the thylakoid lumen during high-light conditions activates the enzyme violaxanthin de-epoxidase (VDE), which converts the carotenoid Vio into zeaxanthin (Zea) as part of the xanthophyll cycle. Additionally, studies with Arabidopsis mutants revealed that the photosystem II subunit PsbS is necessary for qE. How these physiological responses switch LHC-II from the active, energy transmitting to the quenched, energy-dissipating state, in which the solar energy is not transmitted to the photosystems but instead dissipated as heat, remains unclear and is the subject of this thesis. From the results obtained during this doctoral work, five main conclusions can be drawn concerning the mechanism of qE: 1. Substitution of Vio by Zea in LHC-II is not sufficient for efficient dissipation of excess excitation energy. 2. Aggregation quenching of LHC-II does not require Vio, Neo nor a specific Chl pair. 3. With one exception, the pigment structure in LHC-II is rigid. 4. The two X-ray structures of LHC-II show the same energy transmitting state of the complex. 5. Crystalline LHC-II resembles the complex in the thylakoid membrane. Models of the aggregation quenching mechanism in vitro and the qE mechanism in vivo are presented as a corollary of this doctoral work. LHC-II aggregation quenching in vitro is attributed to the formation of energy sinks on the periphery of LHC-II through random interaction with other trimers, free pigments or impurities. A similar but unrelated process is proposed to occur in the thylakoid membrane, by which excess excitation energy is dissipated upon specific interaction between LHC-II and a PsbS monomer carrying Zea. At the end of this thesis, an innovative experimental model for the analysis of all key aspects of qE is proposed in order to finally solve the qE enigma, one of the last unresolved problems in photosynthesis research.
Samples of freshly fallen snow were collected at the high alpine research station Jungfraujoch (Switzerland) in February and March 2006 and 2007, during the Cloud and Aerosol Characterization Experiments (CLACE) 5 and 6. In this study, a new technique was developed and demonstrated for the measurement of organic acids in fresh snow. The melted snow samples were subjected to solid phase extraction, and the resulting solutions were analysed for organic acids by HPLC-MS-TOF using negative electrospray ionization. A series of linear dicarboxylic acids from C5 to C13, as well as phthalic acid, were identified and quantified. In several samples the biogenic pinonic acid was also observed. In fresh snow the median concentration of the most abundant acid, adipic acid, was 0.69 µg L-1 in 2006 and 0.70 µg L-1 in 2007. Glutaric acid was the second most abundant dicarboxylic acid, with median values of 0.46 µg L-1 in 2006 and 0.61 µg L-1 in 2007, while the aromatic phthalic acid showed a median concentration of 0.34 µg L-1 in 2006 and 0.45 µg L-1 in 2007. The concentrations in the samples from the various snowfall events varied significantly and were found to depend on the back trajectory of the air mass arriving at Jungfraujoch. Air masses of marine origin showed the lowest acid concentrations, whereas the highest concentrations were measured when the air mass was strongly influenced by boundary layer air.
Current atmospheric models do not include secondary organic aerosol (SOA) production from gas-phase reactions of polycyclic aromatic hydrocarbons (PAHs). Recent studies have shown that primary semivolatile emissions, previously assumed to be inert, undergo oxidation in the gas phase, leading to SOA formation. This opens the possibility that low-volatility gas-phase precursors are a potentially large source of SOA. In this work, SOA formation from gas-phase photooxidation of naphthalene, 1-methylnaphthalene (1-MN), 2-methylnaphthalene (2-MN), and 1,2-dimethylnaphthalene (1,2-DMN) is studied in the Caltech dual 28-m3 chambers. Under high-NOx conditions and aerosol mass loadings between 10 and 40 µg m-3, the SOA yields (mass of SOA formed per mass of hydrocarbon reacted) ranged from 0.19 to 0.30 for naphthalene, 0.19 to 0.39 for 1-MN, and 0.26 to 0.45 for 2-MN, and were constant at 0.31 for 1,2-DMN. Under low-NOx conditions, the SOA yields were measured to be 0.73, 0.68, and 0.58 for naphthalene, 1-MN, and 2-MN, respectively. The SOA was observed to be semivolatile under high-NOx conditions and essentially nonvolatile under low-NOx conditions, owing to the higher fraction of ring-retaining products formed under low-NOx conditions. When these measured yields are applied to estimate SOA formation from primary emissions of diesel engines and wood burning, PAHs are estimated to yield 3-5 times more SOA than light aromatic compounds. PAHs can also account for up to 54% of the total SOA from oxidation of diesel emissions, representing a potentially large source of urban SOA.
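For clarity, the yield quoted in this abstract is simply the ratio of organic aerosol mass formed to hydrocarbon mass reacted; writing it out (the symbols ΔM_o and ΔHC are introduced here purely for illustration):

    Y = \frac{\Delta M_{\mathrm{o}}}{\Delta \mathrm{HC}}

As a purely hypothetical numerical example, 6 µg m-3 of SOA formed from 20 µg m-3 of reacted naphthalene would correspond to Y = 0.30, at the upper end of the high-NOx range reported above.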
It has become popular for journalists trying to sell newspapers, and politicians trying to solicit votes, to refer to this financial crisis as the worst since the Great Depression or WWII. I don’t know whether it is the worst or not, so I will leave that question to the historians and economists of the future once the storm has passed. But it is indeed a “storm” as described by Vince Cable, Member of Parliament, in his UK bestselling book entitled “The Storm – The World Economic Crisis and What it Means”. He describes this “storm” as a very destructive one, displacing jobs, businesses, banks and whole economies from Iceland to the United Kingdom to the United States. I propose to offer a short chronology and summary of the causes of the current economic crisis. I will then review several of the regulatory responses to the crisis, focusing on the Turner Report, the de Larosière Group and certain US Treasury statements. I will offer my critiques of these proposals and then make some predictions of what the financial services industry may look like in the future.
In this thesis the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy ion reactions has been developed. After a short introduction that motivates the study of heavy ion reactions as the tool to gain insight into the QCD phase diagram, the most important theoretical approaches to describe such systems are reviewed. Ideal hydrodynamics is a suitable tool to model the dynamical evolution of the collective system under the assumption of local thermal equilibrium; nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured. For the microscopic description of the hadronic as well as the partonic stage of the evolution, transport approaches have been successfully applied, since they generate the full phase-space dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are introduced. Furthermore, predictions for the charged particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies, and the new approach leads to reasonable results over the whole energy range. Studies of phase diagram trajectories using hydrodynamics are performed as a first step towards the development of the hybrid approach. The hybrid approach that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The initial energy and baryon number density distributions are not smooth and not symmetric in any direction, and the initial velocity profiles are non-trivial, since they are generated by the non-equilibrium transport approach. The full (3+1)-dimensional ideal relativistic one-fluid dynamics evolution is solved using the SHASTA algorithm. For the present work, three different equations of state have been used: a hadron gas equation of state without a QGP phase transition, a chiral EoS, and a bag model EoS including a strong first-order phase transition. For the transition from hydrodynamics back to the cascade calculation, two different set-ups are employed: either a freeze-out that is isochronous in the computational frame, or a gradual freeze-out that mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula, and UrQMD takes care of the final decoupling of the particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually.
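For reference, the Cooper-Frye prescription mentioned above is quoted here in its standard textbook form (the choice of freeze-out hypersurface Σ and distribution function f depends on the freeze-out criterion adopted and is not specified by this abstract):

    E \frac{dN}{d^{3}p} = \int_{\Sigma} f(x,p)\, p^{\mu}\, d\sigma_{\mu}

where dσ_μ is the normal vector element of the freeze-out hypersurface; the Monte Carlo step samples particle momenta from this invariant spectrum.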
The hybrid model calculation is able to reproduce the experimentally measured integrated as well as transverse momentum dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation functions are calculated for pions, protons and kaons in the energy range from $E_{\rm lab}=2-160A~$GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean transverse mass excitation function as the equation of state. The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced if a first-order phase transition with a large latent heat is applied or if the EoS is effectively softened by non-equilibrium effects in the hadronic transport calculation. The HBT correlations of the negatively charged pion source created in central Pb+Pb collisions at SPS energies are investigated with the hybrid model. It is found that the latent heat visibly influences the emission of particles and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlations, since a large number of collisions and decays still take place during this period.
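As a reminder of the observable discussed above, $v_2$ is the second Fourier coefficient of the azimuthal particle distribution relative to the reaction plane $\Psi_{RP}$ (this is the standard definition, not anything specific to this thesis):

    \frac{dN}{d\varphi} \propto 1 + \sum_{n \geq 1} 2\, v_n \cos\big(n(\varphi - \Psi_{RP})\big), \qquad v_2 = \langle \cos 2(\varphi - \Psi_{RP}) \rangle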
Background Heme oxygenase-1 is an inducible cytoprotective enzyme that counteracts oxidative stress by generating the anti-oxidant bilirubin and the vasodilator carbon monoxide. A (GT)n dinucleotide repeat and a -413A>T single nucleotide polymorphism in the promoter region of HMOX1 have both been reported to influence the occurrence of coronary artery disease (CAD) and myocardial infarction (MI). We sought to validate these observations in persons scheduled for coronary angiography. Methods We included 3219 subjects in the current analysis: 2526 with CAD, including a subgroup with CAD and MI (n = 1339), and 693 controls. Coronary status was determined by coronary angiography. Risk factors and biochemical parameters (bilirubin, iron, LDL-C, HDL-C, and triglycerides) were determined by standard procedures. The dinucleotide repeat was analysed by PCR with subsequent sizing by capillary electrophoresis, the -413A>T polymorphism by PCR and RFLP. Results In the LURIC study the allele frequencies for the -413A>T polymorphism are A = 0.589 and T = 0.411. The (GT)n repeats ranged between 14 and 39 repeats, with 22 (19.9%) and 29 (47.1%) as the two most common alleles. We found no association of the genotypes or allele frequencies with any of the biochemical parameters, nor with CAD or previous MI. Conclusion Although an association of these polymorphisms with the occurrence of CAD and MI has been published before, our results strongly argue against a relevant role of the (GT)n repeat or the -413A>T SNP in the HMOX1 promoter in CAD or MI.
We calculate leading-order dilepton yields from a quark-gluon plasma which has a time-dependent anisotropy in momentum space. Such anisotropies can arise during the earliest stages of quark-gluon plasma evolution due to the rapid longitudinal expansion of the created matter. A phenomenological model for the proper-time dependence of the parton hard momentum scale, p_hard, and the plasma anisotropy parameter, xi, is proposed. The model describes the transition of the plasma from a 0+1 dimensional collisionally-broadened expansion at early times to a 0+1 dimensional ideal hydrodynamic expansion at late times. We find that high-energy dilepton production is enhanced by up to 50% by pre-equilibrium emission at LHC energies if one assumes an isotropization/thermalization time of 2 fm/c. Given sufficiently precise experimental data, this enhancement could be used to determine the plasma isotropization time experimentally.
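A common way to parametrize such a momentum-space anisotropy (given here as the standard ansatz used in this line of work, not necessarily the exact form adopted in the paper) is to deform an isotropic distribution along the beam direction n:

    f(\mathbf{p};\,\xi,\,p_{\rm hard}) = f_{\rm iso}\!\left(\sqrt{\mathbf{p}^{2} + \xi\,(\mathbf{p}\cdot\hat{\mathbf{n}})^{2}}\,/\,p_{\rm hard}\right)

where xi > 0 corresponds to a distribution contracted along the beam axis, as produced by rapid longitudinal expansion, and xi = 0 recovers the isotropic limit.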
Introduction Impaired renal function and/or pre-existing atherosclerosis in the deceased donor increase the risk of delayed graft function and impaired long-term renal function in kidney transplant recipients. Case presentation We report delayed graft function occurring simultaneously in two kidney transplant recipients, aged 57 and 39 years, who received renal allografts from the same deceased donor. The 62-year-old donor died of cardiac arrest during an asthmatic state. Renal-allograft biopsies, performed in both kidney recipients because of delayed graft function, revealed cholesterol-crystal embolism. Empiric statin therapy in addition to low-dose acetylsalicylic acid was initiated. After 10 and 6 hemodialysis sessions every 48 hours, respectively, both renal allografts started to function. Glomerular filtration rates at discharge were 26 ml/min/1.73 m2 and 23.9 ml/min/1.73 m2 and remained stable in follow-up examinations. Possible donor- and surgical procedure-dependent causes of cholesterol-crystal embolism are discussed. Conclusion Cholesterol-crystal embolism should be considered as a cause of delayed graft function and long-term impaired renal allograft function, especially in the older donor population.
Methods for dichoptic stimulus presentation in functional magnetic resonance imaging : a review
(2009)
Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments using visual stimulation. Such studies include investigations of binocular rivalry, interocular information transfer and three-dimensional depth perception, as well as of impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to displaying dichoptic stimuli in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies, as well as newer approaches using bi-screen goggles.
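To make the filter-based approach concrete, the following is a minimal Python/NumPy sketch (not from the reviewed paper) of how a red-green anaglyph stimulus can be assembled from two monocular images; the function name and channel assignment are illustrative only, and a real fMRI setup would additionally require calibrated display luminance and crosstalk correction.

    import numpy as np

    def make_red_green_anaglyph(left_img, right_img):
        # left_img, right_img: 2-D uint8 arrays (H x W) with the grayscale
        # stimulus intended for the left and right eye, respectively.
        h, w = left_img.shape
        anaglyph = np.zeros((h, w, 3), dtype=np.uint8)
        anaglyph[..., 0] = left_img    # red channel   -> seen through the red filter
        anaglyph[..., 1] = right_img   # green channel -> seen through the green filter
        return anaglyph                # blue channel stays empty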
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson’s Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved makes it possible to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Traditionally, parsers are evaluated against gold standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (with 20 sentences per phenomenon) from both the TiGer and TüBa-D/Z resources. This provides a comparable test suite of 2 × 100 sentences which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC for key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two treebanks. In the remainder of the paper we present the test suite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our error classification of potential parser errors.
In the recent literature the phenomenon of long distance agreement has become the focus of several studies as it seems to violate certain locality conditions which require that agreeing elements in general stand in clause-mate relationships. In particular, it involves a verb agreeing with a constituent which is located in the verb's clausal complement and hence poses a challenge for theories that assume a strictly local relationship for agreement. In this paper we present empirical evidence from Greek and Romanian for the reality of long distance agreement. Specifically, we focus on raising constructions in these two languages and we show that they do not involve movement but rather instantiate long distance agreement. We further argue that subjunctives allowing long distance agreement lack both a CP layer and semantic Tense. However, since the embedded verb also bears phi-features, these constructions pose a further problem for assumptions that view the presence of phi-features as evidence for the presence of a C layer. Finally, we raise the question of the common properties that these languages have that lead to the presence of long distance agreement.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgarriff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from wide-coverage semantic knowledge such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
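A minimal sketch of the dimensionality-reduction step described above, using scikit-learn rather than the authors' own tooling; the toy co-occurrence counts and the number of latent dimensions are purely illustrative.

    import numpy as np
    from scipy.sparse import csr_matrix
    from sklearn.decomposition import TruncatedSVD

    # Toy word-by-context co-occurrence matrix (rows: target words,
    # columns: context features such as window words or pattern matches).
    counts = csr_matrix(np.array([[10., 0., 3., 1.],
                                  [ 8., 1., 2., 0.],
                                  [ 0., 7., 0., 9.]]))

    # Truncated SVD collapses correlated, topic-skewed context dimensions
    # into a small number of latent dimensions.
    svd = TruncatedSVD(n_components=2, random_state=0)
    vectors = svd.fit_transform(counts)   # one dense vector per target word

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Distributional similarity of the first two target words.
    print(cosine(vectors[0], vectors[1]))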
Parsing coordinations
(2009)
The present paper is concerned with statistical parsing of constituent structures in German. The paper presents four experiments that aim at improving the parsing of coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser with gold scopes for each conjunct, 3) reranking the parser output for all conjunct scopes that are permissible with regard to clause structure, and 4) reranking a combination of the parses from experiments 1 and 3. The experiments show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses increases the F-score from 69.76 for the baseline to 74.69. While this F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment yields higher recall (75.48% vs. 73.69%) and the third one higher precision (75.43% vs. 73.26%). Combining the two methods gives the best result, an F-score of 76.69.
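To illustrate the kind of reranking step used in these experiments, here is a simplified, hypothetical Python sketch; the feature names, weights and candidate parses are invented for illustration and do not reflect the paper's actual reranking model.

    from math import log

    def rerank(nbest, weights):
        # nbest: list of (tree, parser_log_prob, feature_dict) candidates.
        # weights: reranker weights; the parser log-probability is treated
        # as one feature among others.
        def score(candidate):
            tree, logprob, feats = candidate
            s = weights.get("parser_logprob", 0.0) * logprob
            for name, value in feats.items():
                s += weights.get(name, 0.0) * value
            return s
        return max(nbest, key=score)[0]

    # Invented example: two candidate analyses of a coordination that differ
    # in conjunct scope; a feature rewarding parallel conjunct categories
    # lets the reranker overrule the parser's first choice.
    nbest = [("(S ... wide conjunct scope ...)",   log(0.4), {"parallel_conjuncts": 1.0}),
             ("(S ... narrow conjunct scope ...)", log(0.6), {"parallel_conjuncts": 0.0})]
    weights = {"parser_logprob": 1.0, "parallel_conjuncts": 1.2}
    print(rerank(nbest, weights))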
The aim of this paper is to address two main counterarguments raised in Landau (2007) against the movement analysis of Control, and especially against the phenomenon of Backward Control. The paper shows that, unlike the situation described for Tsez (Polinsky & Potsdam 2002), Landau's objections do not hold for Greek and Romanian, where all obligatory control verbs exhibit Backward Control. Our results thus provide stronger empirical support for a theoretical approach to Control in terms of Movement, as defended in Hornstein (1999 and subsequent work).
The recent financial crisis has led to a vigorous debate about the pros and cons of fair-value accounting (FVA). This debate presents a major challenge for FVA going forward and standard setters’ push to extend FVA into other areas. In this article, we highlight four important issues as an attempt to make sense of the debate. First, much of the controversy results from confusion about what is new and different about FVA. Second, while there are legitimate concerns about marking to market (or pure FVA) in times of financial crisis, it is less clear that these problems apply to FVA as stipulated by the accounting standards, be it IFRS or U.S. GAAP. Third, historical cost accounting (HCA) is unlikely to be the remedy. There are a number of concerns about HCA as well and these problems could be larger than those with FVA. Fourth, although it is difficult to fault the FVA standards per se, implementation issues are a potential concern, especially with respect to litigation. Finally, we identify several avenues for future research. JEL Classification: G14, G15, G30, K22, M41, M42
The utility-maximizing consumption and investment strategy of an individual investor receiving an unspanned labor income stream seems impossible to find in closed form and very difficult to find using numerical solution techniques. We suggest an easy procedure for finding a specific, simple, and admissible consumption and investment strategy which is near-optimal in the sense that the wealth-equivalent loss compared to the unknown optimal strategy is very small. We first explain and implement the strategy in a simple setting with constant interest rates, a single risky asset, and an exogenously given income stream, but we also show that the success of the strategy is robust to changes in parameter values, to the introduction of stochastic interest rates, and to endogenous labor supply decisions.
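One common way to formalize the wealth-equivalent loss mentioned above (stated here as the usual definition; the paper's exact variant may differ) is as the fraction ℓ of initial wealth W_0 that the investor could give up under the optimal strategy and still be as well off as under the candidate strategy:

    V^{\mathrm{candidate}}(W_0) = V^{*}\big((1-\ell)\,W_0\big)

where V denotes expected lifetime utility from consumption (and possibly terminal wealth) under the respective strategy; a near-optimal strategy is then one for which ℓ is close to zero.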
In this paper, we analyze economies of scale for German mutual fund complexes. Using 2002-2005 data on 41 investment management companies, we specify a hedonic translog cost function. Applying a fixed-effects regression to a one-way error component model, we find clear evidence of significant overall economies of scale. At the level of individual mutual fund complexes, we find significant economies of scale for all of the companies in our sample. With regard to cost efficiency, we find that the average mutual fund complexes in all size quartiles deviate considerably from the best-practice cost frontier. JEL Classification: G2, L25 Keywords: mutual fund complex, investment management company, cost efficiency, economies of scale, hedonic translog cost function, fixed effects regression, one-way error component model
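For illustration, a single-output version of such a translog cost specification can be written as follows (a generic textbook form with assets under management y as output and hedonic fund-complex characteristics z_k; the paper's actual specification may include further inputs and interaction terms):

    \ln C = \alpha_0 + \alpha_y \ln y + \tfrac{1}{2}\,\beta_{yy} (\ln y)^2 + \sum_k \delta_k z_k + \varepsilon

Overall economies of scale are then indicated by a cost elasticity \partial \ln C / \partial \ln y = \alpha_y + \beta_{yy} \ln y below one.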
This paper investigates whether the majority shareholder of a company attempts to influence capital market expectations negatively in the run-up to a compulsory exclusion of minority shareholders (a so-called squeeze-out). Such "manipulative" behaviour is frequently assumed in both the legal and the business literature, since the share price forms the lower bound for the amount of the compensation. Our empirical study of the financial reporting and press release policy of squeeze-out firms on the German capital market ahead of the announcement of such a measure shows that a significant increase (decrease) in press releases with a pessimistic (optimistic) tone can indeed be observed during this period. However, it also turns out that the shares of the squeeze-out candidates earn such high positive abnormal returns already in the run-up to and on the day of the announcement that the cumulative effect of the disclosure policy on the stock market valuation, as quantified by us, has only a very small overall influence and is dominated by other factors (e.g. speculation about the compensation). JEL: M41, M40, G14, K22