A tale of two lost archives
(2009)
This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their typed variants, where our specific calculus has letrec, recursive types and is nondeterministic. Adding a type label to every subexpression is all that is needed, together with some natural constraints for the consistency of the type labels and the well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained, and that untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a smooth interaction between reduction and typing, some reduction rules have to be accompanied by a type modification that generalizes or instantiates types.
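The idea of attaching a type label to every subexpression and checking the labels for consistency can be sketched in miniature. The following Python sketch is an illustration only, with an invented encoding; the paper's calculus additionally has letrec, recursive types and nondeterminism:

```python
# Terms carry an explicit type label on every node:
#   ('var', name, label)
#   ('lam', name, argty, body, label)
#   ('app', fun, arg, label)
# Types are plain strings such as "a" or "a->b".

def arrow(t):
    """Split a function type 'a->b' into ('a', 'b'); None if not an arrow."""
    depth = 0
    for i in range(len(t) - 1):
        if t[i] == '(':
            depth += 1
        elif t[i] == ')':
            depth -= 1
        elif depth == 0 and t[i:i + 2] == '->':
            return t[:i], t[i + 2:]
    return None

def consistent(expr, env):
    """Check that the type labels on expr agree at binders and applications."""
    kind = expr[0]
    if kind == 'var':
        _, name, label = expr
        return env.get(name) == label
    if kind == 'lam':
        _, name, argty, body, label = expr
        split = arrow(label)
        if split is None or split[0] != argty:
            return False
        return consistent(body, {**env, name: argty}) and split[1] == body[-1]
    if kind == 'app':
        _, fun, arg, label = expr
        split = arrow(fun[-1])
        return (split is not None and split[0] == arg[-1]
                and split[1] == label
                and consistent(fun, env) and consistent(arg, env))
    return False

# identity at type a, applied to a variable y:a, correctly labelled a
ident = ('lam', 'x', 'a', ('var', 'x', 'a'), 'a->a')
app = ('app', ident, ('var', 'y', 'a'), 'a')
```

A checker of this kind accepts the well-labelled application above and rejects the same term once its result label is changed, which is the flavor of constraint the paper imposes on all subexpressions.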
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness wrt. program semantics, since the encodings are adequate translations wrt. contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
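The contrast between the blocking and the busy-waiting encodings can be illustrated with a one-place concurrent buffer in Python. This is a sketch with invented names, not the calculus from the paper; it only shows the two synchronization styles:

```python
import threading
import time

class BlockingCell:
    """One-place buffer in the 'blocking' style: put blocks while full,
    get blocks while empty, via a condition variable."""
    def __init__(self):
        self._cond = threading.Condition()
        self._full = False
        self._val = None

    def put(self, v):
        with self._cond:
            while self._full:
                self._cond.wait()
            self._val, self._full = v, True
            self._cond.notify_all()

    def get(self):
        with self._cond:
            while not self._full:
                self._cond.wait()
            self._full = False
            self._cond.notify_all()
            return self._val

class SpinCell:
    """Same interface in the 'low-level' busy-waiting style: threads poll
    under a lock and retry instead of sleeping on a condition."""
    def __init__(self):
        self._lock = threading.Lock()
        self._full = False
        self._val = None

    def put(self, v):
        while True:
            with self._lock:
                if not self._full:
                    self._val, self._full = v, True
                    return
            time.sleep(0)   # yield, then retry

    def get(self):
        while True:
            with self._lock:
                if self._full:
                    self._full = False
                    return self._val
            time.sleep(0)
```

For a producer/consumer pair the two cells are observably equivalent (the same values get through), which is the kind of observation-preserving correspondence the encodings above are proved to respect.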
This paper analyzes the risk properties of typical asset-backed securities (ABS), like CDOs or MBS, relying on a model with both macroeconomic and idiosyncratic components. The examined properties include expected loss, loss given default, and macro factor dependencies. Using a two-dimensional loss decomposition as a new metric, the risk properties of individual ABS tranches can be compared directly to those of corporate bonds, within and across rating classes. Using Monte Carlo simulation, we find that the risk properties of ABS differ significantly and systematically from those of straight bonds with the same rating. In particular, loss given default, the sensitivities to macroeconomic risk, and model risk differ greatly between instruments. Our findings have implications for understanding the credit crisis and for policy making. On an economic level, our analysis suggests a new explanation for the observed rating inflation in structured finance markets during the pre-crisis period 2004-2007. On a policy level, our findings call for an end to the 'one-size-fits-all' approach to the rating methodology for fixed income instruments: structured finance instruments require a rating methodology of their own. JEL Classification: G21, G28
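As a rough illustration of how Monte Carlo tranche analysis of this kind works, the sketch below simulates correlated defaults in a one-factor Gaussian model and reads off expected tranche losses. All parameters and tranche boundaries are invented for illustration; this is not the paper's model or calibration:

```python
import random
from statistics import NormalDist

def tranche_loss(att, det, port_loss):
    """Fraction of a tranche [att, det) wiped out by a portfolio loss."""
    return min(max(port_loss - att, 0.0), det - att) / (det - att)

def expected_tranche_losses(n_assets=100, pd=0.02, rho=0.3, lgd=0.6,
                            tranches=((0.0, 0.03), (0.03, 0.07), (0.07, 1.0)),
                            n_sims=20000, seed=1):
    """One-factor Gaussian model: asset i defaults when
    sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^{-1}(pd), where Z is the
    macro factor and eps_i is idiosyncratic noise."""
    rng = random.Random(seed)
    c = NormalDist().inv_cdf(pd)            # default threshold
    totals = [0.0] * len(tranches)
    for _ in range(n_sims):
        z = rng.gauss(0, 1)                 # macro factor draw
        defaults = sum(
            1 for _ in range(n_assets)
            if rho ** 0.5 * z + (1 - rho) ** 0.5 * rng.gauss(0, 1) < c)
        port_loss = lgd * defaults / n_assets
        for k, (a, d) in enumerate(tranches):
            totals[k] += tranche_loss(a, d, port_loss)
    return [t / n_sims for t in totals]
```

Even in this toy setup the expected loss falls steeply from the equity to the senior tranche while all tranches share the same macro factor, which is why identically rated ABS tranches and straight bonds can carry very different macro sensitivities.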
Induced charge computation
(2009)
One of the main principles of statistical mechanics is that the properties of a thermodynamic state point do not depend on the choice of the statistical ensemble. This equivalence breaks down for small systems, e.g. single molecules. Hence, the choice of the statistical ensemble is crucial for the interpretation of single-molecule experiments, where the outcome of measurements depends on which variables, or control parameters, are held fixed and which ones are allowed to fluctuate. Following this principle, this thesis investigates the thermodynamics of single-polymer pulling experiments within two different statistical ensembles. The scaling behavior of the conjugate chain ensembles, the fixed end-to-end vector (Helmholtz) ensemble and the fixed applied force (Gibbs) ensemble, is studied in depth. This thesis further investigates ensemble equivalence for different force regimes and polymer-chain contour lengths. Using coarse-grained molecular dynamics simulations, i.e. Langevin dynamics, the theoretical predictions for the scaling of the ensemble difference of Gaussian chains in different force regimes were confirmed, with special attention to the zero-force regime. After constructing Helmholtz and Gibbs conjugate ensembles for a Gaussian chain, two different data sets of thermodynamic states on the force-extension plane, i.e. force-extension curves, were generated. The ensemble difference is computed for different polymer-chain lengths by using these force-extension curves. The scaling of the ensemble difference versus relative polymer-chain length under different force regimes has been derived from the simulation data and compared to theoretical predictions. The results demonstrate that the Gaussian chain in the zero-force limit generates nonequivalent ensembles, regardless of its equilibrium bond length and polymer-chain contour length. Moreover, if polymers are charged in confinement, coarse-graining is problematic, owing to dielectric interfaces.
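The fixed-force (Gibbs) side of this comparison admits a direct numerical check for a Gaussian chain, since theory gives the mean extension exactly as <x> = N b^2 f / (3 kT). The sketch below samples the ensemble directly rather than running Langevin dynamics as the thesis does; all parameter values are invented:

```python
import random

def gibbs_extension(n_bonds=50, b=1.0, f=0.2, kT=1.0,
                    n_samples=20000, seed=2):
    """Mean end-to-end extension of a Gaussian chain in the fixed-force
    (Gibbs) ensemble. Under a constant force f along x, each bond's
    x-component is an independent Gaussian with mean b^2*f/(3*kT) and
    variance b^2/3, so the chain can be sampled bond by bond."""
    rng = random.Random(seed)
    mu = b * b * f / (3 * kT)
    sd = (b * b / 3) ** 0.5
    total = 0.0
    for _ in range(n_samples):
        total += sum(rng.gauss(mu, sd) for _ in range(n_bonds))
    return total / n_samples

# Theory: <x> = N * b^2 * f / (3 kT) = 50 * 0.2 / 3 for these parameters
```

The sampled mean converges to the theoretical value; the interesting physics in the thesis lies in how the conjugate Helmholtz ensemble deviates from this, especially near zero force.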
Hence, the effect of dielectric interfaces must be taken into account when describing physical systems such as ionic channels or biopolymers inside nanopores. It is shown that the effect of dielectrics is crucial for the dynamics of a biopolymer or an ion inside a nanopore. In simulations, the efficient and accurate computation of electrostatic interactions in the presence of an arbitrarily shaped dielectric domain is challenging. Several solutions for this problem have been proposed in the literature, such as a density functional approach, transforming the problem at hand into an algebraic one (Induced Charge Computation, ICC), and boundary element methods. Although the essential concept, replacing the dielectric interface with a polarization charge density, is the same in all of them, these approaches have been analyzed separately and the ICC algorithm has been implemented. A new, superior boundary element method has been devised that computes forces via the Particle-Particle Particle-Mesh (P3M) method for periodic geometries (ICCP3M). This method has been compared to the ICC algorithm, the algebraic solutions, and density functional approaches. Extensive numerical tests against analytically tractable geometries have confirmed the correctness and applicability of the developed and implemented algorithms, demonstrating that ICCP3M is the fastest and most versatile algorithm. Further optimization issues in obtaining accurate induced charge densities are also discussed. The potential of mean force (PMF) of DNA, modelled on a coarse-grained level inside a nanopore, is investigated with and without the inclusion of dielectric effects. Despite the simplicity of the model, the dramatic effect of the dielectric interfaces is clearly seen in the observed force profile.
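The common idea of these methods, replacing a dielectric interface by an induced (polarization) surface charge, can be checked against the one case that is analytically tractable: a point charge above a flat interface. The sketch below is an independent textbook illustration, not the thesis' ICC or ICCP3M code; it integrates the known induced charge density and recovers the image-charge total:

```python
import math

def induced_sigma(rho, q=1.0, d=1.0, eps=4.0):
    """Bound (polarization) surface-charge density at radius rho on a flat
    interface between vacuum (the side holding the charge) and a dielectric
    eps, for a point charge q at height d above the plane; the standard
    image-charge result in Gaussian units."""
    return -q * (eps - 1) * d / (
        2 * math.pi * (eps + 1) * (rho * rho + d * d) ** 1.5)

def total_induced(q=1.0, d=1.0, eps=4.0, r_max=200.0, n=200000):
    """Integrate sigma over concentric rings; should approach the exact
    total induced charge, -q*(eps-1)/(eps+1)."""
    h = r_max / n
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * h
        total += induced_sigma(rho, q, d, eps) * 2 * math.pi * rho * h
    return total
```

This is exactly the kind of analytically tractable geometry against which ICC-type algorithms are validated: the numerically accumulated induced charge must match the closed-form total.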
Introduction Complex psychopathological and behavioral symptoms, such as delusions and aggression against care providers, are often the primary cause of acute hospital admissions of elderly patients to emergency units and psychiatric departments. This issue represents a clinically highly relevant interdisciplinary diagnostic and therapeutic challenge across many medical specialties and general practice. At least 50% of the dramatically growing number of patients with dementia exhibit aggressive and agitated symptoms during the course of clinical progression, particularly at moderate clinical severity. Methods Commonly used rating scales for agitation and aggression are reviewed and discussed. Furthermore, we focus in this article on the benefits and limitations of all available data on anticonvulsants published for this specific indication, namely valproate, carbamazepine, oxcarbazepine, lamotrigine, gabapentin and topiramate. Results To date, the most positive and robust data are available for carbamazepine; however, pharmacokinetic interactions with secondary enzyme induction limit its use. Controlled data on valproate do not seem to support its use in this population. For oxcarbazepine only one controlled, but negative, trial is available. Positive small series and case reports have been published for lamotrigine, gabapentin and topiramate. Conclusions So far, the data on anticonvulsants in demented patients with behavioral disturbances are not convincing. Controlled clinical trials of newer anticonvulsants with a better tolerability profile, using specific, valid and psychometrically sound instruments, are mandatory to verify whether they can serve as a treatment option for this indication.
Algorithmic trading engines versus human traders – do they behave different in securities markets?
(2009)
After exchanges and alternative trading venues introduced electronic execution mechanisms worldwide, the focus of the securities trading industry shifted to the use of fully electronic trading engines by banks, brokers and their institutional customers. These Algorithmic Trading engines enable order submissions without human intervention, based on quantitative models applying historical and real-time market data. Although there is a widespread discussion on the pros and cons of Algorithmic Trading and on its impact on market volatility and market quality, little is known about how algorithms actually place their orders in the market and whether and in which respects this differs from other order submissions. Based on a dataset that, for the first time, includes a specific flag enabling the identification of orders submitted by Algorithmic Trading engines, the paper investigates the extent of Algorithmic Trading activity, and specifically their order placement strategies in comparison to human traders, in the Xetra trading system. It is shown that Algorithmic Trading has become a relevant part of overall market activity and that Algorithmic Trading engines fundamentally differ from human traders in their order submission, modification and deletion behavior, as they exploit real-time market data and latest market movements.
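At the data-analysis level, comparing submission, modification and deletion behavior across the algorithmic-trading flag reduces to grouped frequency counts over the order event log. A toy sketch with invented event rows (the actual study uses the flagged Xetra order data):

```python
# Hypothetical event log: (trader_type, event) rows standing in for the
# flagged order data set described above.
events = [
    ("algo", "submit"), ("algo", "modify"), ("algo", "delete"),
    ("algo", "submit"), ("algo", "modify"), ("algo", "modify"),
    ("human", "submit"), ("human", "submit"), ("human", "delete"),
]

def event_shares(rows):
    """Per trader type, the share of each event kind in that type's flow."""
    counts = {}
    for trader, event in rows:
        counts.setdefault(trader, {}).setdefault(event, 0)
        counts[trader][event] += 1
    shares = {}
    for trader, evs in counts.items():
        total = sum(evs.values())
        shares[trader] = {e: n / total for e, n in evs.items()}
    return shares
```

Differences in these shares (e.g. a much higher modification rate for flagged orders) are the kind of behavioral contrast between engines and human traders that the paper documents.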
Background, aim, and scope Food consumption is an important route of human exposure to endocrine-disrupting chemicals. So far, this has been demonstrated by exposure modeling or analytical identification of single substances in foodstuff (e.g., phthalates) and human body fluids (e.g., urine and blood). Since the research in this field is focused on few chemicals (and thus misses mixture effects), the overall contamination of edibles with xenohormones is largely unknown. The aim of this study was to assess the integrated estrogenic burden of bottled mineral water as a model foodstuff and to characterize the potential sources of the estrogenic contamination. Materials, methods, and results In the present study, we analyzed commercially available mineral water in an in vitro system with the human estrogen receptor alpha and detected estrogenic contamination in 60% of all samples with a maximum activity equivalent to 75.2 ng/l of the natural sex hormone 17beta-estradiol. Furthermore, breeding of the molluskan model Potamopyrgus antipodarum in water bottles made of glass and plastic [polyethylene terephthalate (PET)] resulted in an increased reproductive output of snails cultured in PET bottles. This provides the first evidence that substances leaching from plastic food packaging materials act as functional estrogens in vivo. Discussion and conclusions Our results demonstrate a widespread contamination of mineral water with xenoestrogens that partly originates from compounds leaching from the plastic packaging material. These substances possess potent estrogenic activity in vivo in a molluskan sentinel. Overall, the results indicate that a broader range of foodstuff may be contaminated with endocrine disruptors when packed in plastics.
Keywords Endocrine disrupting chemicals - Estradiol equivalents - Human exposure - In vitro effects - In vivo effects - Mineral water - Plastic bottles - Plastic packaging - Polyethylene terephthalate - Potamopyrgus antipodarum - Yeast estrogen screen - Xenoestrogens
The role of microglial cells in the pathogenesis of Alzheimer’s disease (AD) neurodegeneration is unknown. Although several works suggest that chronic neuroinflammation caused by activated microglia contributes to neurofibrillary degeneration, anti-inflammatory drugs do not prevent or reverse neuronal tau pathology. This raises the question of whether microglial activation indeed occurs in the human brain at sites of neurofibrillary degeneration. In view of recent work demonstrating the presence of dystrophic (senescent) microglia in the aged human brain, the purpose of this study was to investigate microglial cells in situ and at high resolution in the immediate vicinity of tau-positive structures, in order to determine conclusively whether degenerating neuronal structures are associated with activated or with dystrophic microglia. We used a newly optimized immunohistochemical method for visualizing microglial cells in human archival brain, together with Braak staging of neurofibrillary pathology, to ascertain the morphology of microglia in the vicinity of tau-positive structures. We now report histopathological findings from 19 humans covering the spectrum from none to severe AD pathology, including patients with Down’s syndrome, showing that degenerating neuronal structures positive for tau (neuropil threads, neurofibrillary tangles, neuritic plaques) are invariably colocalized with severely dystrophic (fragmented) rather than with activated microglial cells. Using Braak staging of Alzheimer neuropathology, we demonstrate that microglial dystrophy precedes the spread of tau pathology. Deposits of amyloid-beta protein (A beta) devoid of tau-positive structures were found to be colocalized with non-activated, ramified microglia, suggesting that A beta does not trigger microglial activation.
Our findings also indicate that when microglial activation does occur in the absence of an identifiable acute central nervous system insult, it is likely to be the result of systemic infectious disease. The findings reported here strongly argue against the hypothesis that neuroinflammatory changes contribute to AD dementia. Instead, they offer an alternative hypothesis of AD pathogenesis that takes into consideration: (1) the notion that microglia are neuron-supporting and neuroprotective cells; (2) the fact that development of non-familial, sporadic AD is inextricably linked to aging. They support the idea that progressive, aging-related microglial degeneration and loss of microglial neuroprotection, rather than induction of microglial activation, contribute to the onset of sporadic Alzheimer’s disease. The results have far-reaching implications in terms of re-evaluating current treatment approaches towards AD.
Background The role of the Fcgamma receptor IIa (FcgammaRIIa), a receptor for C-reactive protein (CRP), the classical acute phase protein, in atherosclerosis is not yet clear. We sought to investigate the association of FcgammaRIIa genotype with risk of coronary heart disease (CHD) in two large population-based samples. Methods FcgammaRIIa-R/H131 polymorphisms were determined in a population of 527 patients with a history of myocardial infarction and 527 age- and gender-matched controls drawn from a population-based MONICA-Augsburg survey. In the LURIC population, 2227 patients with angiographically proven CHD, defined as having at least one stenosis [greater than or equal to]50%, were compared with 1032 individuals with stenosis <50%. Results In both populations genotype frequencies of the FcgammaRIIa gene did not show a significant departure from the Hardy-Weinberg equilibrium. The FcgammaRIIa R(-131)->H genotype was not independently associated with lower risk of CHD after multivariable adjustments, neither in the MONICA population (odds ratio (OR) 1.08; 95% confidence interval (CI) 0.81 to 1.44), nor in LURIC (OR 0.96; 95% CI 0.81 to 1.14). Conclusion Our results do not confirm an independent relationship between FcgammaRIIa genotypes and risk of CHD in these populations.
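The Hardy-Weinberg check mentioned in the Results is a short chi-square computation from genotype counts. A generic sketch (the counts below are invented, not the MONICA or LURIC data):

```python
def hwe_chi_square(n_rr, n_rh, n_hh):
    """Pearson chi-square statistic for departure from Hardy-Weinberg
    equilibrium, from genotype counts (e.g. R/R, R/H, H/H for the
    FcgammaRIIa-R/H131 polymorphism). With 1 degree of freedom, a value
    below ~3.84 means no significant departure at the 5% level."""
    n = n_rr + n_rh + n_hh
    p = (2 * n_rr + n_rh) / (2 * n)        # frequency of allele R
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_rr, n_rh, n_hh)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

Counts that match the p^2 : 2pq : q^2 proportions give a statistic of zero, while a strong heterozygote deficit pushes it well past the 3.84 threshold.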
Background Treatment options for metastatic renal cell carcinoma (RCC) are limited due to resistance to chemo- and radiotherapy. The development of small-molecule multikinase inhibitors has now opened novel treatment options. The influence of the receptor tyrosine kinase inhibitor AEE788, applied alone or combined with the mammalian target of rapamycin (mTOR) inhibitor RAD001, on RCC cell adhesion and proliferation in vitro was evaluated. Methods The RCC cell lines Caki-1, KTC-26 and A498 were treated with various concentrations of RAD001 or AEE788, and tumor cell proliferation and tumor cell adhesion to vascular endothelial cells or to immobilized extracellular matrix proteins (laminin, collagen, fibronectin) were evaluated. The anti-tumoral potential of RAD001 combined with AEE788 was also investigated. Both asynchronous and synchronized cell cultures were used to analyze drug-induced cell cycle manipulation. Cell cycle regulating proteins were analyzed by western blotting. Results RAD001 or AEE788 reduced the adhesion of RCC cell lines to vascular endothelium and diminished RCC cell binding to immobilized laminin or collagen. Both drugs blocked RCC cell growth, impaired cell cycle progression and altered the expression levels of the cell cycle regulating proteins cdk2, cdk4, cyclin D1, cyclin E and p27. The combination of AEE788 and RAD001 resulted in more pronounced RCC growth inhibition, higher rates of G0/G1 cells and lower rates of S-phase cells than either agent alone. Cell cycle proteins were much more strongly altered when both drugs were used in combination than with single drug application. The synergistic effects were observed in an asynchronous cell culture model, but were more pronounced in synchronous RCC cell cultures. Conclusions Potent anti-tumoral activities of the multikinase inhibitors AEE788 and RAD001 have been demonstrated.
Most importantly, the simultaneous use of both AEE788 and RAD001 offered a distinct combinatorial benefit and thus may provide a therapeutic advantage over either agent employed as a monotherapy for RCC treatment.
Background Many systems in nature are characterized by complex behaviour where large cascades of events, or avalanches, unpredictably alternate with periods of little activity. Snow avalanches are an example. Often the size distribution f(s) of a system's avalanches follows a power law, and the branching parameter sigma, the average number of events triggered by a single preceding event, is unity. A power law for f(s), and sigma=1, are hallmark features of self-organized critical (SOC) systems, and both have been found for neuronal activity in vitro. Therefore, and since SOC systems and neuronal activity both show large variability, long-term stability and memory capabilities, SOC has been proposed to govern neuronal dynamics in vivo. Testing this hypothesis is difficult because neuronal activity is spatially or temporally subsampled, while theories of SOC systems assume full sampling. To close this gap, we investigated how subsampling affects f(s) and sigma by imposing subsampling on three different SOC models. We then compared f(s) and sigma of the subsampled models with those of multielectrode local field potential (LFP) activity recorded in three macaque monkeys performing a short-term memory task. Results Neither the LFP nor the subsampled SOC models showed a power law for f(s). Both f(s) and sigma depended sensitively on the subsampling geometry and the dynamics of the model. Only one of the SOC models, the Abelian Sandpile Model, exhibited f(s) and sigma similar to those calculated from LFP activity. Conclusions Since subsampling can prevent the observation of the characteristic power law and sigma in SOC systems, misclassifications of critical systems as sub- or supercritical are possible. Nevertheless, the system specific scaling of f(s) and sigma under subsampling conditions may prove useful to select physiologically motivated models of brain function.
Models that better reproduce f(s) and sigma calculated from the physiological recordings may be selected over alternatives.
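The effect of subsampling on avalanche statistics can be sketched with a simple branching process, used here as an illustrative stand-in for the three SOC models (all parameters are invented). Each event is observed independently with some probability, mimicking spatial subsampling by a finite electrode array:

```python
import math
import random

def poisson(lam, rng):
    """Poisson sampling via Knuth's method (stdlib only)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def avalanche_size(sigma, rng, cap=10000):
    """Size of one branching-process avalanche: each event triggers a
    Poisson(sigma) number of successor events; sigma is the branching
    parameter (sigma = 1 would be the critical, SOC-like case)."""
    size = active = 1
    while active and size < cap:
        active = sum(poisson(sigma, rng) for _ in range(active))
        size += active
    return size

def subsample(size, prob, rng):
    """Observe each event of an avalanche independently with probability
    prob, mimicking subsampling by a finite electrode array."""
    return sum(1 for _ in range(size) if rng.random() < prob)
```

Binomial thinning preserves the mean avalanche size up to the factor prob, but distorts the shape of f(s) at small sizes (many avalanches are observed as size zero or one), which is the kind of distortion that can mask a power law and bias the estimated sigma.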
Background Evidence-based guidelines can potentially improve healthcare. However, their de novo development requires substantial resources, especially for complex conditions, and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development, the systematic guideline review (SGR) method, and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care published in English or German between 2000 and 2004 were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, the methodological quality of selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis and synthesized in a preliminary draft. The most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results Of 16 included guidelines, five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 unratable (derived from a single guideline). Of the 25 consistent recommendations, 14 were based on consensus, seven on evidence, and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, differing evidence levels and minor inconsistencies); the majority was congruent.
Incongruences were found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the source guidelines focused on treatment benefits and underestimated the risks. The draft guideline was completed in 8.5 person-months. The main limitation of this study was the lack of a second reviewer. Conclusions The systematic guideline review, including framework development, consistency analysis and validation, is an effective, valid, and resource-saving approach to the development of evidence-based guidelines.
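The consistency analysis at the heart of the SGR can be sketched as a simple classification over extracted gradings. This is a toy table with invented recommendations and gradings, much coarser than the paper's actual analysis:

```python
# Hypothetical extraction table: recommendation -> grading per source
# guideline ('A', 'B', ...; None = not covered by that guideline).
table = {
    "ACE inhibitor for systolic CHF": {"g1": "A", "g2": "A", "g3": "A"},
    "Beta-blocker titration":         {"g1": "A", "g2": "B", "g3": "A"},
    "Routine echocardiography":       {"g1": "C", "g2": None, "g3": "A"},
    "Telemonitoring":                 {"g1": "B", "g2": None, "g3": None},
}

def consistency(table):
    """Classify each recommendation: 'consistent' if all guidelines that
    cover it agree, 'inconsistent' otherwise, and 'unratable' if only a
    single guideline covers it (cf. the 25/9/1 split reported above)."""
    out = {}
    for rec, grades in table.items():
        rated = [g for g in grades.values() if g is not None]
        if len(rated) < 2:
            out[rec] = "unratable"
        elif len(set(rated)) == 1:
            out[rec] = "consistent"
        else:
            out[rec] = "inconsistent"
    return out
```

The paper's analysis additionally distinguishes direction from grading and major from minor inconsistencies; this sketch only shows the basic tabulation step.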
Riboswitches are a novel class of genetic control elements that function through the direct interaction of small metabolite molecules with structured RNA elements. The ligand is bound with high specificity and affinity to its RNA target and induces conformational changes of the RNA's secondary and tertiary structure upon binding. To elucidate the molecular basis of the remarkable ligand selectivity and affinity of one of these riboswitches, extensive all-atom molecular dynamics simulations in explicit solvent (~1 µs total simulation length) of the aptamer domain of the guanine sensing riboswitch are performed. The conformational dynamics is studied when the system is bound to its cognate ligand guanine as well as bound to the non-cognate ligand adenine and in its free form. The simulations indicate that residue U51 in the aptamer domain functions as a general docking platform for purine bases, whereas the interactions between C74 and the ligand are crucial for ligand selectivity. These findings either suggest a two-step ligand recognition process, including a general purine binding step and a subsequent selection of the cognate ligand, or hint at different initial interactions of cognate and noncognate ligands with residues of the ligand binding pocket. To explore possible pathways of complex dissociation, various nonequilibrium simulations are performed which account for the first steps of ligand unbinding. The results delineate the minimal set of conformational changes needed for ligand release, suggest two possible pathways for the dissociation reaction, and underline the importance of long-range tertiary contacts for locking the ligand in the complex.
Oligonucleotides suppress PKB/Akt and act as superinductors of apoptosis in human keratinocytes
(2009)
DNA oligonucleotides (ODN) applied to an organism are known to modulate the innate and adaptive immune system. Previous studies showed that a CpG-containing ODN (CpG-1-PTO) and, interestingly, also a non-CpG-containing ODN (nCpG-5-PTO) suppress inflammatory markers in skin. In the present study it was investigated whether these molecules also influence cell apoptosis. Here we show that CpG-1-PTO, nCpG-5-PTO, and also natural DNA suppress the phosphorylation of PKB/Akt in a cell-type-specific manner. Interestingly, only epithelial cells of the skin (normal human keratinocytes, HaCaT and A-431) show a suppression of PKB/Akt. This suppressive effect depends on ODN length, sequence and backbone. Moreover, it was found that TGFa-induced levels of PKB/Akt and EGFR were suppressed by the ODN tested. We hypothesize that this suppression might facilitate programmed cell death. Testing this hypothesis, we found an increase of apoptosis markers (caspase 3/7, 8, 9, cytosolic cytochrome c, histone-associated DNA fragments, apoptotic bodies) when cells were treated with ODN in combination with low doses of staurosporin, a well-known pro-apoptotic stimulus. In summary, the present data demonstrate DNA as a modulator of apoptosis which specifically targets skin epithelial cells.
Global warming is expected to be associated with diverse changes in freshwater habitats in north-western Europe. Increasing evaporation, lower oxygen concentrations due to increased water temperature and changes in precipitation patterns are likely to affect the survival ratio and reproduction rate of freshwater gastropods (Pulmonata, Basommatophora). This work is a comprehensive analysis of the climatic factors influencing their ranges both in the past and in the near future. A macroecological approach showed that the ranges of a great proportion of genera were projected to contract by 2080, even if unlimited dispersal was assumed. The forecasted warming predicted the emergence of new suitable areas in the cooler northern ranges, but also drastically reduced the available habitat in the southern part of the studied region. In order to better understand the range dynamics in the past and the post-glacial colonisation patterns, an approach combining ecological niche modelling and phylogeography was used for two model species, Radix balthica and Ancylus fluviatilis. Phylogeographic model selection on a COI mtDNA dataset confirmed that R. balthica most likely spread from two disjunct central European refugia after the last glacial maximum. The phylogeographic analysis of A. fluviatilis, using 16S and COI mtDNA datasets, also inferred central European refugia. The absence of niche conservatism (adaptive potential) inferred for A. fluviatilis puts a cautionary note on the use of climate envelope models to predict the future ranges of this species. However, the other model species exhibited strong niche conservatism, which allows such predictions to be made with confidence. A profound faunal shift will take place in Central Europe within the next century, either permitting the establishment of species currently living south of the studied region or the proliferation of organisms relying on the same food resources.
This study points out the need for further investigations into the dispersal modes of freshwater snails, since the future range size of the species depends on their ability to establish in newly available habitats. Likewise, the mixed mating system of these organisms gives them the possibility to found a new population from a single individual. This will probably affect colonisation success and needs further investigation.
Lentiviral vectors mediate gene transfer into dividing and most non-dividing cells. In doing so, they stably integrate the transgene into the host cell genome. For this reason, lentiviral vectors are a promising tool for gene therapy. However, the safety and efficiency of lentivirally mediated gene transfer still need to be optimised. Ideally, cell entry should be restricted to the cell population relevant for a particular therapeutic application. Furthermore, lentiviral vectors able to transduce quiescent lymphocytes are desirable. Although many approaches were followed to engineer retroviral envelope proteins, an effective and universally applicable system for retargeting of lentiviral cell entry is still not available. Just before the experimental work of this thesis was started, retargeting of measles virus (MV) cell entry was achieved. This virus has two types of envelope glycoproteins, the hemagglutinin (H) protein responsible for receptor recognition and the fusion (F) protein mediating membrane fusion. For retargeting, the H protein was mutated in its interaction sites for the native MV receptors and a ligand or a single-chain antibody (scAb) was fused to its ectodomain. It was hypothesised that the retargeting system of MV can be transferred to lentiviral vectors by pseudotyping human immunodeficiency virus-1 (HIV-1) derived vector particles with the MV glycoproteins. As the unmodified MV glycoproteins did not pseudotype HIV vectors, two F and 15 H protein variants carrying stepwise truncations or amino acid (aa) exchanges in their cytoplasmic tails were screened for their ability to form MV-HIV pseudotypes. The combinations Hcd18/Fcd30, Hcd19/Fcd30 and Hcd24+4A/Fcd30 led to the most efficient pseudotype formation, with titers above 10^6 transducing units/ml using concentrated particles.
The F cytoplasmic tail was truncated by 30 aa and the H cytoplasmic tail was truncated by 18, 19 or 24 residues, with four added alanines after the start methionine in the latter case. Western blot analysis indicated that particle incorporation of the MV glycoproteins was enhanced upon truncation of their cytoplasmic tails. With the MV-HIV vectors, high titers on different cell lines expressing one or both MV receptors were obtained, whereas MV receptor-negative cells remained untransduced. Titers were enhanced using an optimal H to F plasmid ratio (1:7) during vector particle production. Based on the described pseudotyping with the MV glycoprotein variants, HIV vectors retargeted to the epidermal growth factor receptor (EGFR) or the B cell surface marker CD20 were generated. For the production of the retargeted vectors MVaEGFR-HIV and MVaCD20-HIV, Fcd30 together with a native-receptor-blind Hcd18 protein displaying at its ectodomain either the ligand EGF or a scAb directed against CD20 was used. With these vectors, gene transfer into target receptor-positive cells was several orders of magnitude more efficient than into control cells. The almost complete absence of background transduction of non-target cells was demonstrated, for example, in mixed cell populations, where the CD20-targeting vector selectively eliminated CD20-positive cells upon suicide gene transfer. Remarkably, transduction of activated primary human CD20-positive B cells was much more efficient with the MVaCD20-HIV vector than with the standard pseudotype vector VSV-G-HIV. Even more surprisingly, MVaCD20-HIV vectors were able to transduce quiescent primary human B cells, which until then had been resistant towards lentiviral gene transfer. The most critical step during the production of MV-HIV pseudotypes was the identification of H cytoplasmic tail mutants that allowed pseudotyping while retaining the fusion helper function.
In contrast to previous, inefficient targeting strategies, the success of this novel targeting system is most likely based on the separation of the receptor recognition and fusion functions onto two different proteins. Furthermore, with the CD20-targeting vector, transduction of quiescent B cells was demonstrated for the first time. Our own data and published data suggest that CD20 binding and hyper-cross-linking by the vector particles result in calcium influx and thus activation of quiescent B cells. Alternatively, this feature may be based on a residual binding activity of the MV glycoproteins to the native MV receptors that is insufficient for entry but induces cytoskeleton rearrangements, dissolving the post-entry block of HIV vectors. Hence, in this thesis, efficient retargeting of lentiviral vectors was combined with transduction of quiescent cells. This novel targeting strategy should be easily adaptable to many other target molecules by extending the modified MV H protein with appropriate specific domains or scAbs. It should now be possible to tailor lentiviral vectors for highly selective gene transfer into any desired target cell population with an unprecedented degree of efficiency.
Neutron stars are very dense objects. One teaspoon of their material would have a mass of five billion tons. Their gravitational force is so strong that an object falling from a height of just one meter would hit the surface of the neutron star at two thousand kilometers per second. In such dense bodies, particles different from the ones present in atomic nuclei, the nucleons, can exist. These can be hyperons, which carry non-zero strangeness, or broader resonances. There can also be different states of matter inside neutron stars, such as meson condensates and, if the density is high enough to deconfine the nucleons, quark matter. As new degrees of freedom appear in the system, different aspects of matter have to be taken into account, the most important of them being the restoration of chiral symmetry. This symmetry is spontaneously broken, a fact related to the presence of a condensate of scalar quark-antiquark pairs, which for this reason is called the chiral condensate. This condensate is present at low densities and even in vacuum. It is important to remember at this point that the modern concept of the vacuum is far from emptiness: it is full of virtual particles that are constantly created and annihilated, their existence being allowed by the uncertainty principle. At very high temperature or density, when the composite particles are dissolved into their constituents, the chiral condensate vanishes and chiral symmetry is restored. To explain how and when chiral symmetry is restored in neutron stars, we use the non-linear sigma model. This is an effective relativistic quantum model that was developed to describe systems of hadrons interacting via meson exchange. The model is constructed from symmetry relations, which make it chirally invariant.
The first consequence of this invariance is that there are no bare mass terms in the Lagrangian density, so that all, or most, of the particle masses come from the interactions with the medium. There are other interesting features of neutron stars that cannot be found anywhere else in nature. One of them is the high isospin asymmetry. In a normal nucleus, the numbers of protons and neutrons are more or less equal; in a neutron star, the number of neutrons is much higher than the number of protons. The resulting extra energy (the Fermi energy) increases the energy of the system, allowing the star to support more mass against gravitational collapse. As a consequence, in the early stages of neutron star evolution, when there are still many trapped neutrinos, the proton fraction is higher than in later stages, and consequently the maximum mass that the star can support against gravity is smaller. This, among many other features, shows how the microscopic phenomena of the star are reflected in its macroscopic properties. Another important property of neutron stars is charge neutrality. It is a required assumption for stability in neutron stars, but there are others. One example is chemical equilibrium: the number of particles of each kind is not conserved; rather, particles are created and annihilated through specific reactions that proceed at the same rate in both directions. Although the microscopic physics of neutron stars can be calculated in the space-time of special relativity, Minkowski space, this is not true for the global properties of the star; for these, general relativity has to be used. The solution of Einstein's equations simplified to static, spherical and isotropic stars corresponds to configurations in which the star is in hydrostatic equilibrium. This means that the internal pressure, coming mainly from the Fermi energy of the neutrons, balances gravity and prevents collapse.
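The hydrostatic-equilibrium configurations described above are the solutions of the standard Tolman-Oppenheimer-Volkoff (TOV) equations; as a reference point (the abstract itself does not spell them out), in units with G = c = 1 they read:

```latex
% TOV equations for a static, spherical, isotropic star (G = c = 1):
% p(r) pressure, \epsilon(r) energy density, m(r) gravitational mass
% enclosed within radius r
\frac{dp}{dr} = -\,\frac{\bigl[\epsilon(r)+p(r)\bigr]\bigl[m(r)+4\pi r^{3}p(r)\bigr]}
                        {r\,\bigl[r-2m(r)\bigr]},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\epsilon(r).
```

Integrating these from the center outward, with an equation of state p(epsilon) as input, yields the mass-radius relation of the star; the surface is reached where the pressure drops to zero.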
When rotation is included, the star becomes more stable and, consequently, can be more massive. The rotation also makes the star non-spherical, which requires the metric to also be a function of the polar coordinate. Another important feature that has to be taken into account is the dragging of the local inertial frame. It generates centrifugal forces that do not originate from interactions with other bodies, but from the non-rotation of the frame of reference within which observations are made. These modifications are introduced through Hartle's approximation, which solves the problem by applying perturbation theory. In the mean-field approximation, the couplings as well as the parameters of the non-linear sigma model are calibrated to reproduce massive neutron stars. The introduction of new degrees of freedom decreases the maximum mass allowed for the neutron star, as they soften the equation of state. In practice, the only baryons present in the star besides the nucleons are the Lambda and Sigma-, in the case in which the baryon octet is included, and the Lambda and Delta-,0,+,++, in the case in which the baryon decuplet is included. Leptons are included to ensure charge neutrality. We choose to perform our calculations including the baryon octet but not the decuplet, in order to avoid uncertainties in the couplings. The couplings of the hyperons were fitted to the depths of their potentials in nuclei. In this case, the restoration of chiral symmetry can be observed through the behavior of the associated order parameter. The symmetry begins to be restored inside neutron stars, and the transition is a smooth crossover. Different stages of the neutron star cooling are reproduced taking into account trapped neutrinos, finite temperature and entropy. Finite-temperature calculations include the heat bath of hadronic quasiparticles within the grand canonical potential of the system.
Different schemes are considered: constant temperature, metric-dependent temperature and constant entropy. The neutrino chemical potential is introduced by fixing the lepton number in the system, which also controls the amount of electrons and protons (through charge neutrality). The balance between these two features is delicate and influenced mainly by baryon number conservation. Isolated stars have a fixed number of baryons, which creates a link between different stages of the cooling. The maximum masses allowed in each stage of the cooling process are determined: the stage with high entropy and trapped neutrinos, the deleptonized stage with high entropy, and the cold stage in beta equilibrium. The cooling process is also influenced by constraints related to the rotation of the star. When rotation is included, the star becomes more stable and, consequently, can be more massive. The rotation also deforms the star, requiring modifications of the metric that are introduced through perturbation theory. The analysis of the first stages of the neutron star, when it is called a proto-neutron star, gives certain constraints on the possible rotation frequencies in the colder stages. Instability windows are calculated in which the star is stable during certain stages but collapses into a black hole during the cooling process. In the last part of the work, the hadronic SU(3) model is extended to include quark degrees of freedom. A new effective potential for the order parameter of deconfinement, the Polyakov loop, connects the physics at low chemical potential and high temperature of the QCD phase diagram with the high-chemical-potential, low-temperature part. This is done through the introduction of a chemical potential dependence into the already temperature-dependent potential. Analyzing the effect of both order parameters, the chiral condensate and the Polyakov loop, we can draw a phase diagram for symmetric matter as well as for star matter.
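The conditions of chemical equilibrium, charge neutrality and fixed lepton number mentioned above are the standard ones for neutrino-trapped star matter; schematically (these relations are a textbook sketch, not taken from the abstract, and the muon term applies only if muons are included):

```latex
% Beta equilibrium with trapped electron neutrinos:
\mu_{n} = \mu_{p} + \mu_{e} - \mu_{\nu_{e}}
% Charge neutrality (number densities n_i):
\qquad n_{p} = n_{e} + n_{\mu}
% Fixed lepton fraction per baryon during the trapped stage:
\qquad Y_{L} = \frac{n_{e} + n_{\nu_{e}}}{n_{B}} = \mathrm{const.}
```

After deleptonization the neutrino chemical potential is set to zero, and the first relation reduces to the usual cold beta-equilibrium condition mu_n = mu_p + mu_e.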
The diagram contains a crossover region as well as a first-order phase transition line. The new couplings and parameters of the model are chosen mainly to fit lattice QCD results, including the position of the critical point. Finally, this matter containing different degrees of freedom (depending on the phase of the diagram) is used to calculate the properties of hybrid stars.