9,9-Dimethyl-9-silafluorene
(2009)
The title compound, C14H14Si, crystallizes with two almost identical molecules (r.m.s. deviation = 0.080 Å for all non-H atoms) in the asymmetric unit. All atoms of the silafluorene moiety lie in a common plane (r.m.s. deviations = 0.049 and 0.035 Å for the two molecules in the asymmetric unit). The Si—C(methyl) bonds are significantly shorter [1.865 (4)–1.868 (4) Å] than the Si—C(aromatic) bonds [1.882 (3)–1.892 (3) Å]. Owing to strain in the five-membered ring, the endocyclic C—Si—C angles are reduced to 91.05 (14) and 91.21 (14)°. Key indicators: single-crystal X-ray study; T = 173 K; mean σ(C–C) = 0.005 Å; R factor = 0.061; wR factor = 0.157; data-to-parameter ratio = 16.3.
The complete molecule of the title compound, C18H24N2O2, is generated by a crystallographic inversion centre. The torsion angles in the hexamethylene chain are consistent with an antiperiplanar conformation, whereas the conformation of the O—CH2—CH2—CH2 unit is gauche. The three-dimensional crystal packing is stabilized by N—H⋯O and N—H⋯N hydrogen bonding.
The Mg centre in the title compound, [MgBr2(C2H7N)3], is pentacoordinated in a trigonal-bipyramidal mode with the two Br atoms in axial positions and the N atoms of the dimethylamine ligands in equatorial positions. The MgII centre is located on a crystallographic twofold rotation axis. The crystal structure is stabilized by N—H⋯Br hydrogen bonds. The N atom and H atoms of one dimethylamine ligand are disordered over two equally occupied positions.
A central goal of heavy-ion physics is the study of the states of nuclear matter at high densities and temperatures. Such states can be created and studied through collisions of high-energy heavy ions in particle accelerators such as the Super Proton Synchrotron (SPS) at the European research centre CERN in Geneva. The present work analyses the influence of the medium created in such a collision on high-energy particles traversing it. To this end, correlations between particles with high transverse momentum pt are studied as a function of the centrality of the collisions and of the charge of the particles involved. The aim is to provide an experimental basis for the theoretical description of the properties of the medium created in such collisions. ...
Renewed interest in fiscal policy has increased the use of quantitative models to evaluate policy. Because of modeling uncertainty, it is essential that policy evaluations be robust to alternative assumptions. We find that models currently being used in practice to evaluate fiscal policy stimulus proposals are not robust. Government spending multipliers in an alternative empirically estimated and widely cited new Keynesian model are much smaller than in these old Keynesian models; the estimated stimulus is extremely small, with GDP and employment effects only one-sixth as large.
Modelling protein flexibility and plasticity is computationally challenging but important for understanding the function of biological systems, and it has great implications for the prediction of (macro)molecular complex formation. Recently, coarse-grained normal mode approaches have emerged as efficient alternatives for investigating large-scale conformational changes, for which more accurate methods like MD simulation are limited by their computational burden. We have developed a Normal Mode based Simulation (NMSim) approach for efficient conformation generation of macromolecules. Combinations of low-energy normal modes are used to guide a simulation pathway, and an efficient constraint-correction approach is applied to generate stereochemically allowed conformations. Non-covalent interactions such as hydrogen bonds and hydrophobic tethers, as well as favourable phi–psi regions, are also modelled as constraints. Conformations from our approach were compared with a 10 ns MD trajectory of lysozyme. A 2-D RMSD plot shows a good overlap of conformational space, and the rms fluctuations of residues show a correlation coefficient of 0.78 between the two sets of conformations. Furthermore, a comparison of NMSim simulations starting from apo structures of different proteins shows that ligand-bound conformations can be sampled in those cases where conformational changes are mainly correlated, e.g., domain-like motion in adenylate kinase. Efforts are currently being made to also model localized but functionally important motions of protein binding pockets and protein-protein interfaces, using relevant normal mode selection criteria and implicit rotamer basin creation.
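The core move of a normal-mode-guided simulation step, displacing coordinates along a weighted combination of low-energy modes before constraint correction, can be sketched as follows. This is a minimal illustration, not the NMSim implementation; the `mode_step` name is our own, and the mode vectors are assumed to come from a precomputed coarse-grained normal mode analysis.

```python
import random

def mode_step(coords, modes, amplitude=0.5, rng=None):
    """Displace atom coordinates along a random linear combination of
    low-energy normal modes (mode vectors assumed precomputed)."""
    rng = rng or random.Random()
    # one random weight per mode
    weights = [rng.uniform(-1.0, 1.0) for _ in modes]
    new_coords = []
    for i, atom in enumerate(coords):
        # weighted sum of this atom's displacement over all modes, per axis
        disp = [sum(w * m[i][k] for w, m in zip(weights, modes))
                for k in range(3)]
        new_coords.append(tuple(a + amplitude * d
                                for a, d in zip(atom, disp)))
    return new_coords
```

In the full method, each such step would be followed by a constraint-correction pass that restores ideal bond lengths, hydrogen bonds, and allowed phi–psi regions.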
A new method to bridge the gap between ligand- and receptor-based methods in virtual screening (VS) is presented. We introduce a structure-derived virtual ligand (VL) model as an extension to a previously published pseudo-ligand technique [1]: LIQUID [2] fuzzy pharmacophore virtual screening is combined with grid-based protein binding site predictions of PocketPicker [3]. This approach might help reduce the bias introduced by manual selection of binding site residues, and it introduces pocket shape information to the VL. It allows several protein structure models to be combined into a single "fuzzy" VL representation, which can be used to scan screening compound collections for ligand structures with a similar potential pharmacophore. PocketPicker employs an elaborate grid-based scanning procedure to determine buried cavities and depressions on the protein's surface. Potential binding sites are represented by clusters of grid probes characterizing the shape and accessibility of a cavity. A rule-based system is then applied to project reverse pharmacophore types onto the grid probes of a selected pocket. The pocket pharmacophore types are assigned depending on the properties and geometry of the protein residues surrounding the pocket, with regard to their relative position towards the grid probes. LIQUID is used to cluster representative pocket probes by their pharmacophore types, describing a fuzzy VL model. The VL is encoded in a correlation vector, which can then be compared to a database of pre-calculated ligand models. A retrospective screening using the fuzzy VL and several protein structures was evaluated by tenfold cross-validation with ROC-AUC and BEDROC metrics, obtaining a significant enrichment of actives. Future work will be devoted to prospective screening using a novel protein target of Helicobacter pylori and compounds from commercial providers.
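The correlation-vector encoding mentioned above can be illustrated with a toy version: pairs of pharmacophore-typed probe points are binned by type pair and distance, and two such vectors are compared with a simple similarity measure. The type labels, bin settings, and function names here are illustrative assumptions, not the LIQUID implementation.

```python
from itertools import combinations
import math

# hypothetical pharmacophore type labels: H(ydrophobic), A(cceptor), D(onor)
PAIRS = [("A", "A"), ("A", "D"), ("A", "H"),
         ("D", "D"), ("D", "H"), ("H", "H")]

def correlation_vector(points, n_bins=4, max_dist=8.0):
    """Histogram every pair of typed probe points by (type pair, distance bin).

    points: list of ((x, y, z), type_label) tuples.
    """
    index = {p: i for i, p in enumerate(PAIRS)}
    vec = [0.0] * (len(PAIRS) * n_bins)
    width = max_dist / n_bins
    for (p1, t1), (p2, t2) in combinations(points, 2):
        d = math.dist(p1, p2)
        if d >= max_dist:
            continue
        pair = tuple(sorted((t1, t2)))
        vec[index[pair] * n_bins + int(d / width)] += 1.0
    return vec

def manhattan_similarity(u, v):
    """Toy similarity: 1 / (1 + Manhattan distance) between two vectors."""
    return 1.0 / (1.0 + sum(abs(a - b) for a, b in zip(u, v)))
```

Because the encoding is a fixed-length vector, a virtual ligand can be compared against a database of pre-calculated ligand vectors in a single linear scan.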
Protein kinases are targets for drug development. Dysregulation of kinase activity leads to various diseases, e.g. cancer, inflammation, diabetes. Human polo-like kinase 1 (Plk1), a serine/threonine kinase, is a cancer-relevant gene and a potential drug target which attracts increasing attention in the field of cancer therapy. Plk1 is a key player in mitosis, modulating entry into mitosis and the spindle checkpoint at the meta-/anaphase transition. Plk1 overexpression is observed in various human tumors and is a negative prognostic factor for cancer patients. Because kinases share the same catalytic mechanism and the same co-substrate (ATP), inhibitor selectivity is a problem. One strategy to solve this problem is to target the inactive conformation of kinases: kinases undergo conformational changes between active and inactive conformations, and in the inactive conformation an additional hydrophobic pocket is created whose surrounding amino acids are less conserved. As the crystal structure of Plk1 in its inactive conformation is unknown, a homology model of the inactive conformation was constructed, with a crystal structure of Aurora A kinase serving as the template structure. With this homology model a receptor-based pharmacophore search was performed using the SYBYL 7.3 software. The raw hits were filtered using physico-chemical properties, the resulting hits were docked using the GOLD 3.2 software, and 13 candidates for biological testing were manually selected. Three of the 13 tested compounds exhibit anti-proliferative effects in HeLa cancer cells. The most potent inhibitor, SBE13, was further tested in various other cancer cell lines of different origins and displayed EC50 values between 12 µM and 39 µM. Cancer cells incubated with SBE13 showed induction of apoptosis, detected by PARP (poly(ADP-ribose) polymerase) cleavage, caspase 9 activation and DAPI staining of apoptotic nuclei.
For a virtual screening study, we introduce a combination of machine learning techniques, employing a graph kernel, Gaussian process regression and clustered cross-validation. The aim was to find ligands of peroxisome proliferator-activated receptor gamma (PPARγ). The receptors of the PPAR family belong to the steroid-thyroid-retinoid superfamily of nuclear receptors and act as transcription factors. They play a role in the regulation of lipid and glucose metabolism in vertebrates and are linked to various human processes and diseases. For this study, we used a dataset of 176 PPARγ agonists published by Ruecker et al. ...
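Clustered cross-validation keeps whole clusters of structurally related compounds on one side of each split, so near-analogs cannot leak between training and test sets and inflate the validation score. A minimal sketch (the function name and round-robin fold assignment are our own, not the study's code):

```python
import random
from collections import defaultdict

def clustered_kfold(cluster_ids, k=10, seed=0):
    """Yield (train, test) index folds that never split a cluster.

    cluster_ids: one cluster label per compound, e.g. from a similarity
    clustering of the dataset.
    """
    by_cluster = defaultdict(list)
    for idx, c in enumerate(cluster_ids):
        by_cluster[c].append(idx)
    clusters = sorted(by_cluster)           # deterministic base order
    random.Random(seed).shuffle(clusters)
    folds = [[] for _ in range(k)]
    for pos, c in enumerate(clusters):      # deal clusters round-robin
        folds[pos % k].extend(by_cluster[c])
    for t in range(k):
        train = [i for f in range(k) if f != t for i in folds[f]]
        yield train, folds[t]
```

Any regressor, including a Gaussian process with a graph kernel, can then be fitted on each `train` index set and evaluated on the corresponding `test` set.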
Two methods for the fast, fragment-based combinatorial molecule assembly were developed. The software COLIBREE® (Combinatorial Library Breeding) generates candidate structures from scratch, based on stochastic optimization [1]. Result structures of a COLIBREE design run are based on a fixed scaffold and variable linkers and side-chains. Linkers representing virtual chemical reactions and side-chain building blocks obtained from pseudo-retrosynthetic dissection of large compound databases are exchanged during optimization. The process of molecule design employs a discrete version of Particle Swarm Optimization (PSO) [2]. Assembled compounds are scored according to their similarity to known reference ligands. Distance to reference molecules is computed in the space of the topological pharmacophore descriptor CATS [3]. In a case study, the approach was applied to the de novo design of potential peroxisome proliferator-activated receptor (PPAR gamma) selective agonists. In a second approach, we developed the formal grammar Reaction-MQL [4] for the in silico representation and application of chemical reactions. Chemical transformation schemes are defined by functional groups participating in known organic reactions. The substructures are specified by the linear Molecular Query Language (MQL) [5]. The developed software package contains a parser for Reaction-MQL-expressions and enables users to design, test and virtually apply chemical reactions. The program has already been used to create combinatorial libraries for virtual screening studies. It was also applied in fragmentation studies with different sets of retrosynthetic reactions and various compound libraries.
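The discrete Particle Swarm Optimization used for molecule assembly can be sketched in generic form: each particle is a vector of building-block indices, and each component is stochastically pulled toward the particle's personal best and the swarm's global best. This is a toy illustration under assumed move probabilities, not the COLIBREE implementation; in the real system `score` would be the CATS-space distance to the reference ligands.

```python
import random

def discrete_pso(n_slots, n_choices, score, iters=60, n_particles=12, seed=0):
    """Minimize score(vector) over vectors of building-block indices
    (each of n_slots positions picks one of n_choices fragments)."""
    rng = random.Random(seed)
    parts = [[rng.randrange(n_choices) for _ in range(n_slots)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in parts]                 # personal best positions
    pbest_s = [score(p) for p in parts]
    g = min(range(n_particles), key=lambda i: pbest_s[i])
    gbest, gbest_s = pbest[g][:], pbest_s[g]      # global best
    for _ in range(iters):
        for i, p in enumerate(parts):
            for j in range(n_slots):
                r = rng.random()
                if r < 0.4:
                    p[j] = pbest[i][j]            # pull toward personal best
                elif r < 0.8:
                    p[j] = gbest[j]               # pull toward global best
                elif r < 0.9:
                    p[j] = rng.randrange(n_choices)  # mutation keeps diversity
                # else: keep the current component
            s = score(p)
            if s < pbest_s[i]:
                pbest[i], pbest_s[i] = p[:], s
                if s < gbest_s:
                    gbest, gbest_s = p[:], s
    return gbest, gbest_s
```

The 0.4/0.4/0.1 move probabilities are arbitrary illustration values; a real design run would tune them and decode the best index vector back into a molecule.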
There is renewed interest in pseudoreceptor models, which enable computational chemists to bridge the gap between ligand- and receptor-based drug design. We developed a pseudoreceptor model for the histamine H4 receptor (H4R) based on five potent antagonists representing different chemotypes. Here we present the selection of potential ligand binding pockets that occur during molecular dynamics (MD) simulations of a homology-based receptor model, and a method for prioritizing receptor models according to their match with the consensus ligand-binding mode represented by the pseudoreceptor. In this way, ligand information can be transferred to receptor-based modelling. We use Geometric Hashing to match three-dimensional points in Cartesian space. This allows for rapid, translation- and rotation-free comparison of atom coordinates, and it also permits partial matching. The only prerequisite is a hash table that uses distance triplets as hash keys. Each time a distance triplet from the candidate point set corresponds to an existing key, the match is recorded as a vote for that key. Finally, the global match of both point sets can easily be extracted by selecting the voted distance triplets. The results revealed a preferred ligand-binding pocket in H4R which would not have been identified using an unrefined homology model of the protein. The key idea was to exploit ligand information through pseudoreceptor modelling.
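The distance-triplet hashing and voting scheme described above can be sketched as follows. Discretizing distances onto a tolerance grid is an assumption of this toy version, not necessarily the authors' exact scheme; because the keys are pairwise distances, matching is inherently translation- and rotation-free.

```python
from itertools import combinations
import math

def triplet_keys(points, tol=0.5):
    """One key per 3-point subset: its sorted pairwise distances,
    discretized to a tolerance grid so near-equal shapes collide."""
    keys = []
    for a, b, c in combinations(points, 3):
        key = tuple(sorted(round(math.dist(p, q) / tol)
                           for p, q in ((a, b), (a, c), (b, c))))
        keys.append(key)
    return keys

def match_votes(reference, candidate, tol=0.5):
    """Vote once for each candidate triplet whose key exists in the
    reference hash table; the vote total measures the global match."""
    table = set(triplet_keys(reference, tol))
    return sum(1 for k in triplet_keys(candidate, tol) if k in table)
```

A high vote count for a candidate pocket indicates a good partial match to the pseudoreceptor point set, even if only a subset of points corresponds.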
We developed the Pharmacophore Alignment Search Tool (PhAST), a text-based technique for rapid hit and lead structure searching in large compound databases. For each molecule, a two-dimensional graph of potential pharmacophoric points (PPPs) is created, which has the same topology as the original molecule with implicit hydrogen atoms. Each vertex is coloured by a symbol representing the corresponding PPP. The vertices of the graph are canonically labelled, and the symbols associated with the vertices are concatenated into a so-called PhAST-Sequence, beginning with the vertex with the lowest canonical label. Owing to the canonical labelling, the resulting PhAST-Sequence is characteristic of each molecule. For similarity assessment, PhAST-Sequences are compared using the sequence identity of their global pairwise alignment. The alignment score lies between 0 (no similarity) and 1 (identical PhAST-Sequences). To enable global pairwise sequence alignment, a score matrix for pharmacophoric symbols was developed and gap penalties were optimized. PhAST performed comparably to, and sometimes better than, other similarity search tools (CATS2D, MOE pharmacophore quadruples) in retrospective virtual screenings using the COBRA collection of drugs and lead structures. Most importantly, the PhAST alignment technique allows for the computation of significance estimates that help prioritize a virtual hit list.
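Sequence identity from a global pairwise alignment, the core of the PhAST similarity score, can be sketched with a plain Needleman–Wunsch dynamic program. Here a uniform match/mismatch/gap scoring stands in for the optimized pharmacophore score matrix and gap penalties of the real tool:

```python
def global_identity(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment of two symbol sequences;
    returns sequence identity = matched positions / alignment length."""
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + sub,
                          S[i - 1][j] + gap,
                          S[i][j - 1] + gap)
    # traceback: count matched positions and total alignment length
    i, j, matches, length = n, m, 0, 0
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and S[i][j] == S[i - 1][j - 1] + sub:
            matches += a[i - 1] == b[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and S[i][j] == S[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
        length += 1
    return matches / length if length else 1.0
```

In this toy scoring, identical sequences score 1.0 and unrelated sequences approach 0, mirroring the 0-to-1 identity range described above.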
The representation of small molecules as molecular graphs is a common technique in various fields of cheminformatics. This approach employs abstract descriptions of topology and properties for rapid analysis and comparison. Receptor-based methods, in contrast, mostly depend on more complex representations, impeding simplified analysis and limiting the possibilities of property assignment. In this study we demonstrate that ligand-based methods can be applied to receptor-derived binding site analysis. We introduce the new method PocketGraph, which translates representations of binding site volumes into linear graphs and enables the application of graph-based methods to the world of protein pockets. The method uses the PocketPicker algorithm for the characterization of binding site volumes and employs a Growing Neural Gas procedure to derive graph representations of pocket topologies. Self-organizing map (SOM) projections revealed a limited number of pocket topologies. We argue that only a small set of pocket shapes is realized in the known ligand-receptor complexes.
SIVsmmPBj-derived lentiviral vectors are capable of efficient transduction of primary human monocytes, a capacity which is linked to the viral accessory protein Vpx. To enable novel gene therapy approaches targeting monocytes, this thesis aimed to generate enhanced lentiviral vectors that meet the required standards for clinical applications with respect to gene transfer efficiency and safety. The vectors were tested for their suitability in a relevant therapeutic gene transfer approach. First, it was investigated whether vectors derived from another Vpx-carrying lentivirus show the same capacity for monocyte transduction as SIVsmmPBj-derived vectors. A transduction experiment using HIV-2-derived vectors in comparison to PBj-derived vectors revealed a comparable transduction capacity, thus disproving the assumed uniqueness of the PBj vectors. The further generation and analysis of expression constructs for the vpx genes of HIV-2 and SIVmac demonstrated a functionality in monocyte transduction similar to that of the Vpx of PBj. Like VpxPBj, both Vpx proteins facilitated monocyte transduction by a vpx-deficient PBj-derived vector system. For the generation of enhanced SIVsmmPBj and HIV-2 vector systems, only the transfer vectors were optimized, since the available packaging vectors already meet current standards. Initially, several modifications were introduced into an available preliminary PBj-derived transfer vector by conventional cloning. The modifications included insertions of cPPT/CTS and WPRE as well as deletions of the remaining pol sequence, the second exons of tat and rev, and the U3 region within the 3' LTR to generate a SIN vector. Thus, besides the safety enhancement, the vector titers were also increased from 9.1×10^5 TU/ml achieved after concentration with the initial transfer vector up to 1.1×10^7 TU/ml with the final transfer vector. The PBj vector retained its capability of monocyte transduction when supplemented with Vpx.
This conventional method of vector enhancement is time-consuming and may yield only sub-optimal vectors, since it depends on the presence of restriction sites, which may not allow deletion of all needless sequences. Moreover, mutations may accumulate during the high number of cloning and amplification steps. Therefore, a new and easier method for lentiviral transfer vector generation was conceived. Three essential segments of the viral genome (5' LTR, RRE, ΔU3-3' LTR) are amplified from the template of the lentiviral wild-type genome and joined by fusion PCR. Further necessary elements, namely the cPPT/CTS element, MCS, and PPT, are included in the resulting vector by extension of the nucleotide primers used for the PCRs. The amplified and fused vector scaffold can easily be integrated into a plasmid backbone, followed by insertion of the expression cassette of choice. By applying this approach, two novel lentiviral transfer vectors, based on the non-human SIVsmmPBj and the human HIV-2, were derived. Vector titers achieved for PBj and HIV-2 vectors supplemented with Vpx reached up to 4.0×10^8 TU/ml and 5.4×10^8 TU/ml, respectively. The capacity for monocyte transduction was maintained. Thus, safe, efficient, state-of-the-art HIV-2- and PBj-derived vector systems are now available for future gene therapy strategies. Finally, the new vectors were used to set up an approach for gene correction of gp91phox-deficient monocytes for the treatment of X-linked chronic granulomatous disease (xCGD). The administration of autologous, gene-corrected monocytes to counteract systemic and acute infections could decrease the infection load, dissolve granulomas and thereby improve the survival rate of hematopoietic stem cell transplantation (HSCT), the current treatment of choice for this disease. First, methods for the analysis of gp91phox function were established.
Next, they were employed to demonstrate the capacity of monocytes, obtained from healthy humans or mice, for phagocytosis, oxidative burst, and Staphylococcus aureus killing. The in vivo half-life of murine monocytes in the bloodstream and their distribution to specific tissues was determined. Lastly, HIV-1 vectors were used to transfer the gp91phox gene into monocytes from gp91phox-deficient mice. This resulted in the successful restoration of the oxidative burst ability in the cells. In summary, the general suitability of the new vectors for treatment of CGD by monocyte transduction was demonstrated. The results of the mouse experiments provide the foundation for future challenge experiments to evaluate the capability of gene-corrected monocytes to kill off microbes in vivo.
A question of Mesorah?
(2009)
In the upcoming Krias Hatorah in Parshat Shoftim and Parshat Ki Savo there are a number of instances where the meaning of a phrase changes completely based on the pronunciation of a single word – םד – with either a Komatz or a Patah. Until recently, most Chumashim and Tikunim generally followed the famous 1525 Yaakov Ben Hayyim edition of Mikraot Gedolot, published in Venice, which printed a seemingly inconsistent pattern in the pronunciation of the different occurrences of this word.
Background: Mitochondrial DNA sequencing increasingly results in the recognition of genetically divergent, but morphologically cryptic lineages. Species delimitation approaches that rely on multiple lines of evidence in areas of co-occurrence are particularly powerful to infer their specific status. We investigated the species boundaries of two cryptic lineages of the land snail genus Trochulus in a contact zone, using mitochondrial and nuclear DNA marker as well as shell morphometrics.
Results: Both mitochondrial lineages have a distinct geographical distribution with a small zone of co-occurrence. In the same area, we detected two nuclear genotype clusters, each highly significantly associated with one mitochondrial lineage. This association, however, had exceptions: a small number of individuals in the contact zone showed intermediate genotypes (4%) or cytonuclear disequilibrium (12%). Both mitochondrial lineage and nuclear cluster were statistically significant predictors of shell shape, indicating morphological divergence. Nevertheless, the lineage morphospaces largely overlapped (low posterior classification success rates of 69% and 78%, respectively): the two lineages are truly cryptic.
Conclusions: The integrative approach using multiple lines of evidence supported the hypothesis that the investigated Trochulus lineages are reproductively isolated species. In the small contact area, however, the lineages hybridise to a limited extent. This detection of a hybrid zone adds an instance to the rare reported cases of hybridisation in land snails.
In this work, we extend the Hegselmann and Krause (HK) model, presented in [16], to an arbitrary metric space. We also present some theoretical analysis and numerical results on the condensation of particles in finite and continuous metric spaces. For simulations in a finite metric space, we introduce the notion of a "random metric", using the split metrics studied by Dress et al. [2, 11, 12].
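For reference, the classical HK update on the real line, the special case that the metric-space extension generalizes, reads: each agent adopts the mean of all opinions within its confidence radius. A minimal sketch (function names are ours):

```python
def hk_step(opinions, eps):
    """One Hegselmann-Krause update on the real line: each agent moves
    to the mean of all opinions within its confidence radius eps."""
    new = []
    for x in opinions:
        neighbours = [y for y in opinions if abs(x - y) <= eps]
        new.append(sum(neighbours) / len(neighbours))
    return new

def hk_run(opinions, eps, tol=1e-9, max_iter=1000):
    """Iterate until opinions stop moving (condensation into clusters)."""
    for _ in range(max_iter):
        nxt = hk_step(opinions, eps)
        if max(abs(a - b) for a, b in zip(nxt, opinions)) < tol:
            return nxt
        opinions = nxt
    return opinions
```

In a general metric space the mean is not available, which is exactly the difficulty the extension has to address; this sketch only shows the Euclidean baseline dynamics.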
Beyond "singular" identities: multiculturalism and cultural freedom in Australian literature
(2009)
This thesis examines the perception and development of multiple individual identities in Australian literature, with particular attention to cultural freedom and multiculturalism. In his book Identity and Violence, Amartya Sen presents an approach to identity which assumes that every individual possesses plural cultural identities whose relevance is to be chosen context-specifically. The thesis tests whether Sen's model of plural identities can also be adapted to literary studies. Questions of identity are of course not new to this field; transcultural and postcolonial studies in particular have developed various models of identity around aspects such as ethnicity, gender, or hybridity. Because such models are often aligned primarily with one of these specific aspects, however, general statements about the perception and development of identities are often possible only to a limited extent. Sen's model has the advantage of establishing simple, generally applicable rules on the basis of which all identity-related aspects can be negotiated. Whereas many other models are explicitly or implicitly based on a serial (diachronic) approach, Sen assumes a parallel (synchronous) identity structure. Moreover, in contrast to many group-oriented approaches, he places the individual at the centre of his analysis and develops his comprehensive theory on the basis of individual, plural identities. Sen identifies precisely the emphasis on group identities, and the negotiation of identities between individuals and/or groups, as a potential source of social conflict. This is due, among other things, to the widespread societal assumption that cultural identities are singular and group-oriented in structure.
According to this assumption, every individual can be assigned to one primary cultural group identity, which determines all other aspects of identity. Identity features shared by two individuals with different primary group identities are thus excluded, or subordinated to the primary identity as secondary. The definition of these singular cultural identities and the corresponding rules of belonging are negotiated within the respective group. When identity-related causalities are misinterpreted between two individuals, the conflicts described by Sen arise. To defuse this potential for conflict, Sen demands for every individual the freedom to choose his or her preferences among context-specific identities freely, without interference from other individuals or groups. This can be understood as a general demand for individual cultural freedom, analogous to freedom of opinion. Awareness of the context-specific identities of others can thus, through a greater understanding of causalities, help to avoid identity-related conflicts. Since Sen does not explicitly formulate his theory for applications in literary studies, this thesis first develops a methodological model for working with literary texts. To this end, several aspects based on Sen are defined and then tested for their validity against the selected texts. First, it is determined whether individual and group identities can be identified at all. Second, it is examined whether the central protagonists exhibit plural cultural identities. Third, the question is asked whether a causal connection can be established between the identity negotiations of individuals and/or groups and the conflicts described in the texts.
Fourth, it is examined whether the narratives reflect concepts of singular cultural identity, plural monoculturalism, or multiculturalism. Fifth, it is to be clarified whether Sen's demand for individual cultural freedom would represent a realistic solution to the conflicts described in the narratives. The primary texts, Behrendt's Home, Haikal's Seducing Mr Maclean and Teo's Love and Vertigo, were chosen for their comparable treatment of identity. All three depict the perception and development of multiple individual identities against the background of an Australian migration society and its treatment of Indigenous Australians. With regard to the questions above, all three texts show substantial agreement with Sen's theory. Individual and group identities could be identified in all narratives, and the central protagonists in particular exhibited clearly plural cultural identities. Likewise, a strong connection could be established between the identity negotiations of individuals and/or groups and the conflicts described in the texts. It was also possible to identify notions of singular cultural identity or plural monoculturalism in several protagonists. Finally, for all three texts it can be assumed that individual cultural freedom would represent a realistic solution to the conflicts described in the narratives. Sen's model of plural individual identities has thus proved itself, in principle, for use in literary studies. For literary studies this model has the advantage that, in contrast to many other concepts of identity, different aspects such as ethnicity, gender, or hybridity can be analysed and discussed on a common theoretical foundation.
Breaking tolerance to the natural human liver autoantigen cytochrome P450 2D6 by virus infection
(2009)
Autoimmune hepatitis (AIH) is a chronic liver disease of unknown etiology, characterized by a loss of tolerance against hepatocytes leading to the progressive destruction of the hepatic parenchyma and cirrhosis. Clinical signs of AIH are interface hepatitis and portal plasma cell infiltration, hypergammaglobulinemia, and autoantibodies. Based on serological markers, AIH is divided into subtypes. The hallmark of AIH type 2 is the presence of type 1 liver/kidney microsomal autoantibodies (LKM-1), whereas AIH type 1 is characterized by the presence of anti-nuclear (ANA) and/or anti-smooth muscle (SMA) autoantibodies. The major autoantigen recognized specifically by LKM-1 autoantibodies was identified as the 2D6 isoform of the cytochrome P450 enzyme family (CYP2D6). Not much is known so far about the etiology and pathogenic mechanisms of AIH, and most available animal models result in only transient hepatic damage after a rather complex initiation procedure. The aim of my project was to generate a novel animal model for AIH that reflects the chronic and progressive destruction of the liver characteristic of the human disease, while using a defined and feasible initiating event, in order to further analyze the pathogenic mechanisms leading to the autoimmune-mediated destruction of the liver. Therefore, mice transgenically expressing human CYP2D6 in the liver and wild-type mice were infected with a liver-tropic adenovirus expressing human CYP2D6 (Ad-2D6). Self-tolerance to CYP2D6 was broken in Ad-2D6-infected mice, resulting in persistent autoimmune liver damage, apparent as cellular infiltration, hepatic fibrosis and necrosis. Similar to type 2 AIH patients, Ad-2D6-infected mice generated LKM-1-like antibodies recognizing the same immunodominant epitope of CYP2D6.
Taken together, we could introduce a new animal model that reflects the persistent autoimmune-mediated liver damage as well as the serological marker characteristic for AIH type 2 and we could demonstrate that chronic autoimmune diseases targeting the liver can be triggered by molecular mimicry occurring in the context of a hepatotropic viral infection.
The budget constraint requires that, eventually, consumption must adjust fully to any permanent shock to income. Intuition suggests that, knowing this, optimizing agents will fully adjust their spending immediately upon experiencing a permanent shock. However, this paper shows that if consumers are impatient and are subject to transitory as well as permanent shocks, the optimal marginal propensity to consume out of permanent shocks (the MPCP) is strictly less than 1, because buffer stock savers have a target wealth-to-permanent-income ratio; a positive shock to permanent income moves the ratio below its target, temporarily boosting saving. Keywords: Risk, Uncertainty, Consumption, Precautionary Saving, Buffer Stock Saving, Permanent Income Hypothesis.
We model the motives for residents of a country to hold foreign assets, including the precautionary motive that much previous literature has omitted as intractable. Our model captures many of the principal insights from the existing specialized literature on the precautionary motive, deriving a convenient formula for the economy's target value of assets. The target is the level of assets that balances impatience, prudence, risk, intertemporal substitution, and the rate of return. We use the model to shed light on two topical questions: the "upstream" flows of capital from developing countries to advanced countries, and the long-run impact of resorbing global financial imbalances.
We present a tractable model of the effects of nonfinancial risk on intertemporal choice. Our purpose is to provide a simple framework that can be adopted in fields like representative-agent macroeconomics, corporate finance, or political economy, where most modelers have chosen not to incorporate serious nonfinancial risk because available methods were too complex to yield transparent insights. Our model produces an intuitive analytical formula for target assets, and we show how to analyze transition dynamics using a familiar Ramsey-style phase diagram. Despite its starkness, our model captures most of the key implications of nonfinancial risk for intertemporal choice.
American households have received a triple dose of bad news since the beginning of the current recession: the greatest collapse in asset values since the Great Depression, a sharp tightening in credit availability, and a large increase in unemployment risk. We present measures of the size of these shocks and discuss what a benchmark theory says about their immediate and ultimate consequences. We then provide a forecast based on a simple empirical model that captures the effects of wealth shocks and unemployment fears. Our short-term forecast calls for somewhat weaker spending, and somewhat higher saving rates, than the Consensus survey of macroeconomic forecasters. Over the longer term, our best guess is that the personal saving rate will eventually approach the levels that preceded the period of financial liberalization that began in the late 1970s. Classification: C61, D11, E24
This paper analyzes the risk properties of typical asset-backed securities (ABS), such as CDOs or MBS, using a model with both macroeconomic and idiosyncratic components. The examined properties include expected loss, loss given default, and macro factor dependencies. Using a two-dimensional loss decomposition as a new metric, the risk properties of individual ABS tranches can be compared directly to those of corporate bonds, within and across rating classes. Applying Monte Carlo simulation, we find that the risk properties of ABS differ significantly and systematically from those of straight bonds with the same rating. In particular, loss given default, the sensitivities to macroeconomic risk, and model risk differ greatly between instruments. Our findings have implications for understanding the credit crisis and for policy making. On an economic level, our analysis suggests a new explanation for the observed rating inflation in structured finance markets during the pre-crisis period 2004-2007. On a policy level, our findings call for an end to the 'one-size-fits-all' approach to rating fixed income instruments, and for a dedicated rating methodology for structured finance instruments. JEL Classification: G21, G28 Keywords: credit risk, risk transfer, systematic risk
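The kind of exercise described can be sketched with a minimal one-factor Monte Carlo simulation; the pool and tranche parameters below are illustrative choices of mine, not the paper's calibration. Defaults are driven by a common macro factor plus idiosyncratic noise, and each tranche absorbs pool losses between its attachment and detachment points.

```python
import numpy as np
from statistics import NormalDist

# One-factor default model: a name defaults when its latent variable,
# a mix of the common macro factor Z and idiosyncratic noise, falls
# below the threshold implied by the unconditional default probability.
rng = np.random.default_rng(0)
n_names, n_sims = 100, 20_000
pd_, rho, lgd = 0.02, 0.3, 0.6                 # illustrative parameters
threshold = NormalDist().inv_cdf(pd_)

Z = rng.standard_normal((n_sims, 1))           # macro factor, one per scenario
eps = rng.standard_normal((n_sims, n_names))   # idiosyncratic shocks
latent = np.sqrt(rho) * Z + np.sqrt(1 - rho) * eps
pool_loss = lgd * (latent < threshold).mean(axis=1)   # pool loss fraction

def tranche_loss(pool, attach, detach):
    """Fraction of tranche notional lost, given the pool loss fraction."""
    return np.clip(pool - attach, 0.0, detach - attach) / (detach - attach)

el_equity = tranche_loss(pool_loss, 0.00, 0.03).mean()
el_senior = tranche_loss(pool_loss, 0.10, 1.00).mean()
print(el_equity, el_senior)
```

Even in this toy setting the equity tranche absorbs most of the expected loss, while senior losses are rare but concentrated in bad macro states — the kind of systematic-risk dependence the paper quantifies.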
The seasonality of transport and mixing of air into the lowermost stratosphere (LMS) is studied using distributions of mean age of air and a mass balance approach, based on in-situ observations of SF6 and CO2 during the SPURT (Spurenstofftransport in der Tropopausenregion, trace gas transport in the tropopause region) aircraft campaigns. Combining the information of the mean age of air and the water vapour distributions, we demonstrate that the tropospheric air transported into the LMS above the extratropical tropopause layer (ExTL) originates predominantly from the tropical tropopause layer (TTL). The concept of our mass balance is based on simultaneous measurements of the two passive tracers and the assumption that transport into the LMS can be described by age spectra which are superpositions of two different modes. Based on this concept we conclude that the stratospheric influence on LMS composition is strongest in April, with extreme values of the tropospheric fraction (alpha1) below 20%, and that the strongest tropospheric signatures are found in October, with alpha1 greater than 80%. Beyond the fractions, our mass balance concept allows us to calculate the associated transit times for transport of tropospheric air from the tropics into the LMS. The shortest transit times (<0.3 years) are derived for summer, increasing continuously to 0.8 years by the end of spring. These findings suggest that strong quasi-horizontal mixing across the weak subtropical jet from summer until mid-autumn, together with the considerably shorter residual transport time scales within the lower branch of the Brewer-Dobson circulation in summer than in winter, dominates the tropospheric influence on the LMS until the beginning of the following summer.
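The two-mode decomposition implies a simple lever rule for the tropospheric fraction: the observed mean age is a mixture of a young (tropospheric) and an old (stratospheric) mode. A sketch with invented ages, not the SPURT values:

```python
# Lever rule for a two-mode age spectrum (illustrative numbers only):
# mean_age = alpha1 * age_trop + (1 - alpha1) * age_strat
# solved for the tropospheric fraction alpha1.
def tropospheric_fraction(mean_age, age_trop, age_strat):
    return (age_strat - mean_age) / (age_strat - age_trop)

alpha1 = tropospheric_fraction(mean_age=1.2, age_trop=0.3, age_strat=4.0)
print(round(alpha1, 3))   # 0.757 -> about 76% tropospheric air in this example
```

In the campaign data the same inversion, applied to the measured mean ages, yields the seasonal swing between alpha1 below 20% in April and above 80% in October.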
Introduction: Loss of intestinal integrity has been implicated as an important contributor to the development of excessive inflammation following severe trauma. Thus far, clinical data concerning the occurrence and significance of intestinal damage after trauma remain scarce. This study investigates whether early intestinal epithelial cell damage occurs in trauma patients and, if present, whether such cell injury is related to shock, injury severity and the subsequent inflammatory response. Methods: Prospective observational cohort study in 96 adult trauma patients. Upon arrival at the emergency room (ER), plasma levels of intestinal fatty acid binding protein (i-FABP), a specific marker for damage of differentiated enterocytes, were measured. Factors that potentially influence the development of intestinal cell damage after trauma were determined, including the presence of shock, the extent of abdominal trauma and general injury severity. Furthermore, early plasma levels of i-FABP were related to the inflammatory markers interleukin-6 (IL-6), procalcitonin (PCT) and C-reactive protein (CRP). Results: Upon arrival at the ER, plasma i-FABP levels were increased compared with healthy volunteers, especially in the presence of shock (P < 0.01). The elevation of i-FABP was related to the extent of abdominal trauma as well as general injury severity (P < 0.05). Circulating i-FABP concentrations at the ER correlated positively with IL-6 and PCT levels on the first day (r² = 0.19; P < 0.01 and r² = 0.36; P < 0.001, respectively) and with CRP concentrations on the second day after trauma (r² = 0.25; P < 0.01). Conclusions: This study reveals the early presence of intestinal epithelial cell damage in trauma patients. The extent of intestinal damage is associated with the presence of shock and injury severity. Early intestinal damage precedes, and is related to, the subsequently developing inflammatory response.
Abstract: Bcl-2 family proteins, including the pro-apoptotic BH3-only proteins, are central regulators of apoptotic cell death. Here we show, by a focused siRNA miniscreen, that the synergistic action of the BH3-only proteins Bim and Bmf is required for apoptosis induced by infection with Neisseria gonorrhoeae (Ngo). While Bim and Bmf were associated with the cytoskeleton of healthy cells, both were released upon Ngo infection. Loss of Bim and Bmf from the cytoskeleton fraction required the activation of Jun-N-terminal kinase-1 (JNK-1), which in turn depended on Rac-1. Depletion and inhibition of Rac-1, JNK-1, Bim, or Bmf prevented the activation of Bak and Bax and the subsequent activation of caspases. Apoptosis could be reconstituted in Bim-depleted and Bmf-depleted cells by additional silencing of antiapoptotic Mcl-1 and Bcl-XL, respectively. Our data indicate a synergistic role for the two cytoskeleton-associated BH3-only proteins, Bim and Bmf, in an apoptotic pathway leading to the clearance of Ngo-infected cells. Author Summary: A variety of physiological death signals, as well as pathological insults, trigger apoptosis, a genetically programmed form of cell death. Pathogens often induce host cell apoptosis to establish a successful infection. Neisseria gonorrhoeae (Ngo), the etiological agent of the sexually transmitted disease gonorrhoea, is a highly adapted obligate human-specific pathogen and has been shown to induce apoptosis in infected cells. Here we unveil the molecular mechanisms leading to apoptosis of infected cells. We show that Ngo-mediated apoptosis requires a special subset of proapoptotic proteins from the group of BH3-only proteins. BH3-only proteins act as stress sensors that translate toxic environmental signals into the initiation of apoptosis. In a siRNA-based miniscreen, we found Bim and Bmf, BH3-only proteins associated with the cytoskeleton, to be necessary for inducing host cell apoptosis upon infection.
Bim and Bmf inactivated different inhibitors of apoptosis and thereby induced cell death in response to infection. Our data unveil a novel pathway of infection-induced apoptosis that enhances our understanding of the mechanism by which BH3-only proteins control apoptotic cell death.
CD8 T cells are recognized key players in the control of persistent virus infections, but increasing evidence suggests that assistance from other immune mediators is also needed. Here, we investigated whether specific antibody responses contribute to control of lymphocytic choriomeningitis virus (LCMV), a prototypic mouse model of systemic persistent infection. Mice expressing transgenic B cell receptors of LCMV-unrelated specificity and mice unable to produce soluble immunoglobulin M (IgM) exhibited protracted viremia or failed to resolve LCMV. Virus control depended on immunoglobulin class switch, but neither on complement cascades nor on Fc receptor gamma chain or Fc gamma receptor IIB. Cessation of viremia concurred with the emergence of viral envelope-specific antibodies, rather than with neutralizing serum activity, and even early nonneutralizing IgM impeded viral persistence. This important role for virus-specific antibodies may be similarly underappreciated in other primarily T cell–controlled infections such as HIV and hepatitis C virus, and we suggest that this contribution of antibodies be given consideration in future strategies for vaccination and immunotherapy.
Background: Endothelium-derived nitric oxide plays an important role in the bone marrow microenvironment. Since several important effects of nitric oxide are mediated by cGMP-dependent pathways, we investigated the role of the cGMP downstream effector cGMP-dependent protein kinase I (cGKI) in postnatal neovascularization. Methodology/Principal Findings: In a disc neovascularization model, cGKI−/− mice showed impaired neovascularization compared with their wild-type (WT) littermates. Infusion of WT, but not cGKI−/−, bone marrow progenitors rescued the impaired ingrowth of new vessels in cGKI-deficient mice. Bone marrow progenitors from cGKI−/− mice showed reduced proliferation and survival rates. In addition, we used cGKIα leucine zipper mutant (LZM) mice as a model of cGKI deficiency. LZM mice harbor a mutation in the cGKIα leucine zipper that prevents interaction with downstream signaling molecules. Consistently, LZM mice exhibited reduced numbers of vasculogenic progenitors and impaired neovascularization following hindlimb ischemia compared with WT mice. Conclusions/Significance: Our findings demonstrate that the cGMP-cGKI pathway is critical for postnatal neovascularization and establish a new role for cGKI in vasculogenesis, mediated by bone marrow-derived progenitors.
Background: Parkinson's disease (PD) is an adult-onset movement disorder of largely unknown etiology. We have previously shown that loss-of-function mutations of the mitochondrial protein kinase PINK1 (PTEN-induced putative kinase 1) cause the recessive PARK6 variant of PD. Methodology/Principal Findings: We generated a PINK1-deficient mouse and observed several novel phenotypes: a progressive reduction of weight and of locomotor activity, selectively for spontaneous movements, occurred at old age. As in PD, abnormal dopamine levels in the aged nigrostriatal projection accompanied the reduced movements. Possibly in line with the PARK6 syndrome but in contrast to sporadic PD, a reduced lifespan, dysfunction of brainstem and sympathetic nerves, visible aggregates of alpha-synuclein within Lewy bodies, and nigrostriatal neurodegeneration were not present in aged PINK1-deficient mice. However, we demonstrate that PINK1-mutant mice exhibit a progressive reduction in mitochondrial preprotein import, correlating with defects of core mitochondrial functions such as ATP generation and respiration. In contrast to the strong effect of PINK1 on mitochondrial dynamics in Drosophila melanogaster, and in spite of reduced expression of the fission factor Mtp18, we observed reduced fission and increased aggregation of mitochondria only under stress in PINK1-deficient mouse neurons. Conclusion: Thus, aging Pink1−/− mice show increasing mitochondrial dysfunction resulting in impaired neural activity similar to PD, in the absence of overt neuronal death.
The C-module-binding factor (CbfA) is a multidomain protein that belongs to the family of jumonji-type (JmjC) transcription regulators. In the social amoeba Dictyostelium discoideum, CbfA regulates gene expression during the unicellular growth phase and multicellular development. CbfA and a related D. discoideum CbfA-like protein, CbfB, share a paralogous domain arrangement that includes the JmjC domain, presumably a chromatin-remodeling activity, and two zinc finger-like (ZF) motifs. On the other hand, the CbfA and CbfB proteins have completely different carboxy-terminal domains, suggesting that the plasticity of such domains may have contributed to the adaptation of the CbfA-like transcription factors to the rapid genome evolution in the dictyostelid clade. To support this hypothesis we performed DNA microarray and real-time RT-PCR measurements and found that CbfA regulates at least 160 genes during the vegetative growth of D. discoideum cells. Functional annotation of these genes revealed that CbfA predominantly controls the expression of gene products involved in housekeeping functions, such as carbohydrate, purine nucleoside/nucleotide, and amino acid metabolism. The CbfA protein displays two different mechanisms of gene regulation. The expression of one set of CbfA-dependent genes requires at least the JmjC/ZF domain of the CbfA protein and thus may depend on chromatin modulation. Regulation of the larger group of genes, however, does not depend on the entire CbfA protein and requires only the carboxy-terminal domain of CbfA (CbfA-CTD). An AT-hook motif located in CbfA-CTD, which is known to mediate DNA binding to A+T-rich sequences in vitro, contributed to CbfA-CTD-dependent gene regulatory functions in vivo.
Background: Transplantation of vasculogenic progenitor cells (VPC) improves neovascularization after ischemia. However, patients with type 2 diabetes mellitus show a reduced VPC number and impaired functional activity. Previously, we demonstrated that p38 kinase inhibition prevents the negative effects of glucose on VPC number by increasing proliferation and differentiation towards the endothelial lineage in vitro. Moreover, the functional capacity of progenitor cells is reduced in a mouse model of metabolic syndrome including type 2 diabetes (Leprdb) in vivo. Findings: The aim of this study was to elucidate the underlying signalling mechanisms in vitro and in vivo. We therefore performed DNA-protein binding arrays on the bone marrow of mice with metabolic syndrome, on blood-derived progenitor cells of diabetic patients, and on VPC treated ex vivo with high levels of glucose. The transcriptional activation of ETS transcription factors was increased in all samples analyzed. Downregulation of ETS1 expression by siRNA abrogated the reduction in VPC number induced by high-glucose treatment. In addition, we observed a concomitant suppression of the non-endothelial ETS target genes matrix metalloproteinase 9 (MMP9) and CD115 upon short-term lentiviral delivery of ETS-specific shRNAs. Long-term inhibition of ETS expression by lentiviral infection increased the number of cells carrying the endothelial markers CD144 and CD105. Conclusion: These data demonstrate that diabetes leads to a dysregulated activation of ETS, which blocks the functional activity of progenitor cells and their commitment towards the endothelial cell lineage.
Blood oxygen level-dependent (BOLD) responses were measured in parts of primary visual cortex that represented unstimulated visual field regions at different distances from a stimulated central target location. The composition of the visual scene varied by the presence or absence of additional peripheral distracter stimuli. Bottom-up effects were assessed by comparing peripheral activity during central stimulation vs. no stimulation. Top-down effects were assessed by comparing active vs. passive conditions. In the passive conditions subjects simply watched the central letter stimuli; in the active conditions they had to report the occurrence of pre-defined targets in a rapid serial letter stream. Onset of the central letter stream enhanced activity in V1 representations of the stimulated region. Within representations of the periphery, activation decreased and finally turned into deactivation with increasing distance from the stimulated location. This pattern was most pronounced in the active conditions and in the presence of peripheral stimuli. Active search for a target did not lead to additional enhancement in areas representing the attentional focus but to stronger deactivation in their vicinity. Suppressed neuronal activity was also found in the no-distracter condition, suggesting a top-down, attention-driven effect. Our observations suggest that BOLD signal decreases in primary visual cortex are modulated by bottom-up, sensory-driven factors such as the presence of distracters in the visual field as well as by top-down attentional processes.
Mammalian retinae have rod photoreceptors for night vision and cone photoreceptors for daylight and colour vision. For colour discrimination, most mammals possess two cone populations with two visual pigments (opsins) that have absorption maxima at short wavelengths (blue or ultraviolet light) and long wavelengths (green or red light). Microchiropteran bats, which use echolocation to navigate and forage in complete darkness, have long been considered to have pure rod retinae. Here we use opsin immunohistochemistry to show that two phyllostomid microbats, Glossophaga soricina and Carollia perspicillata, possess a significant population of cones and express two cone opsins, a shortwave-sensitive (S) opsin and a longwave-sensitive (L) opsin. A substantial population of cones expresses S opsin exclusively, whereas the other cones mostly coexpress L and S opsin. S opsin gene analysis suggests ultraviolet (UV, wavelengths <400 nm) sensitivity, and corneal electroretinogram recordings reveal an elevated sensitivity to UV light which is mediated by an S cone visual pigment. Therefore bats have retained the ancestral UV tuning of the S cone pigment. We conclude that bats have the prerequisite for daylight vision, dichromatic colour vision, and UV vision. For bats, the UV-sensitive cones may be advantageous for visual orientation at twilight, predator avoidance, and detection of UV-reflecting flowers for those that feed on nectar.
Introduction: Chemokines and their receptors control immune cell migration during infections as well as in autoimmune responses. A 32 bp deletion in the gene of the chemokine receptor CCR5 confers protection against HIV infection, but has also been reported to decrease susceptibility to rheumatoid arthritis (RA). The influence of this deletion variant on the clinical course of this autoimmune disease was investigated. Methods: Genotyping for CCR5d32 was performed by PCR and subsequent electrophoretic fragment length determination. For the clinical analysis, the following extra-articular manifestations of RA were documented by the rheumatologist following the patient: presence of rheumatoid nodules, major organ vasculitis, pulmonary fibrosis, serositis or Raynaud's syndrome. All documented CRP levels were analyzed retrospectively, and the last available hand and feet radiographs were analyzed with regard to the presence or absence of erosive disease. Results: Analysis of the CCR5 polymorphism in 503 RA patients and in 459 age-matched healthy controls revealed a significantly decreased disease susceptibility for carriers of the CCR5d32 deletion (odds ratio 0.67, P = 0.0437). Within the RA patient cohort, CCR5d32 was significantly less frequent in patients with extra-articular manifestations compared with those with limited, articular disease (13.2% versus 22.8%, P = 0.0374). In addition, the deletion was associated with significantly lower average CRP levels over time (median 8.85 vs. 14.1, P = 0.0041) and had a protective effect against the development of erosive disease (OR = 0.40, P = 0.0047). Intriguingly, homozygosity for the RA-associated DNASE2 −1066 G allele had an additive effect on the disease susceptibility conferred by the wild-type allele of CCR5 (OR = 2.24, P = 0.0051 for carriers of both RA-associated alleles). Conclusions: The presence of CCR5d32 significantly influenced the disease susceptibility to, and clinical course of, RA in a German study population.
The protective effect of this deletion, which has been reported to decrease receptor expression in heterozygous carriers, underlines the importance of chemokines in the pathogenesis of RA.
Objective. A study supported by EULAR and the ACR is being conducted to establish classification criteria for polymyalgia rheumatica (PMR); it will include ultrasound examination of the shoulders and hips. Ultrasound (US) depicts glenohumeral joint effusion, biceps tenosynovitis, subdeltoid bursitis, hip joint synovitis, and trochanteric bursitis in PMR. These findings may aid in distinguishing PMR from other diseases. The purpose of this study was to assess standards and US interreader agreement among participants in the PMR classification criteria study. Methods. Sixteen physicians in four groups examined the shoulders and hips of 4 patients and 4 healthy adults with ultrasound. Overall agreement and interobserver agreement were calculated. Results. The overall agreement (OA) between groups was 87%. The OA was 88.8% for healthy shoulders, 100% for healthy hips, 85.2% for shoulders with pathology, and 74.3% for hips with pathology. Conclusion. A high degree of agreement was found for the examination of healthy and pathologic shoulders; agreement was moderate for pathologic hips and perfect for healthy hips. US of shoulders and hips performed by different examiners is a reliable and feasible tool for the assessment of PMR-related disease pathology and can be incorporated into a classification criteria study.
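Overall agreement of the kind reported can be computed as the fraction of paired readings on which examiners concur. A minimal sketch with hypothetical findings (1 = pathology present, 0 = absent), not the study's raw data:

```python
# Percent agreement between two readers over the same set of sites
# (hypothetical example data, not the study's observations).
def percent_agreement(reader_a, reader_b):
    matches = sum(a == b for a, b in zip(reader_a, reader_b))
    return 100 * matches / len(reader_a)

a = [1, 0, 1, 1, 0, 1, 0, 1]
b = [1, 0, 0, 1, 0, 1, 0, 1]
print(percent_agreement(a, b))  # 87.5
```

Note that raw percent agreement does not correct for chance agreement; studies of this kind often also report a chance-corrected statistic such as kappa.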
Representations of the unknown and the foreign can be found in every culture. Paralleling the method of constructing identity in relation to the Other, all cultures create myths about the ‘foreign’ in order to discern what the ‘native’ is, and thus often essentialize the foreign as either good or bad, ultimately to vindicate one’s own actions and values. The nature of myths is such that they lend themselves to images, which are easily transformed into representations. Representations of the foreign in the United States serve the same purpose; they are propagated to define the nation’s identity and to set it into political and cultural relation to other nations and civilizations. In the context of this thesis, then, representations of Asian Americans in American culture strengthen the imaginative bonds of the American national identity manifesto. However, the interdependency of the Self and the Other clarifies and further entangles the subjects that constitute American national identity, and in turn legitimizes the belated claim of Asian Americans to be included in it. Asian American literature is primarily concerned with these myths and (mis)representations, which are influenced by Orientalist images in Western culture. Thus Orientalism – a constructed myth about the Orient that exists in art, books, and armchair theories of all kinds in the Western world – becomes the main motif of Asian American literature. If we construe this theory a little further, then Asian American identity is formed in relation to Orientalist representations that must first be deconstructed. From the outset, if Orientalism is considered a product of imperialism, it seems that time is a defining factor in Orientalism, both as an agent of change and as a factor of perspective.
In reality, however, Orientalism seems resilient to time and change; the Madame Butterfly myth exemplifies this: what was created in 1887 had been perfected by 1900 and has enjoyed frequent comebacks ever since. For Asian American artists and writers, dismantling Orientalist stereotypes thus begins a literally archaeological process: excavating the leftovers of American Orientalism, evaluating those finds, and relating them anew to their own cultural and historical actuality. Rather than producing a neat line of argumentation, approaches to defining Asian American identity within the American national identity manifesto fall into unwieldy clusters and even become tangled in self-contradictions. The methods of dismantling Orientalist stereotypes are manifold, ranging from total rejection through evocation and appropriation to reflection. In order to wrestle the disparate issues Orientalism produces in Asian American literature into an organic whole, it was important to focus consistently on the overarching theme of American national identity. As this thesis aims to show, the Orientalist issues dealt with in Asian American literature all point toward the greater aim of national inclusion. This thesis is divided into two parts. PART I provides the historical and theoretical background necessary to understand Orientalist issues in contemporary Asian American literature. Analogous to Asian American writers who feel the need to embed their work in the correct historical frame in order to prevent misunderstanding, chapters two and three serve to set my argument in the correct frame. The theoretical groundwork is laid with Edward Said’s Orientalism and its application to the American and Asian American context. PART II examines literary examples, applying the theorems discussed in PART I.
Chapter four is a close analysis of the submissive Butterfly stereotype that has, since its appearance in the late nineteenth century, moved, inspired and even outraged writers. Beginning with the literary development of Madame Butterfly, D. H. Hwang’s deconstructivist M. Butterfly gives new perspectives on Orientalism by redefining gender and racial roles. To complement this analysis, chapter five traces current Asian American reactions to Orientalism, with texts by comedian Margaret Cho and poet Beau Sia serving as examples. As a result of the disparate narrative forms of the analyzed works and the unevenness of scholarship on twenty-first-century forms, the analyses vary greatly in scope and detail. In choosing fairly young narrative forms such as stand-up comedy and spoken word poetry, I want to emphasize how Orientalism pertains to the question of Asian American identity. To close the circle of my discourse I return to where I started my thesis: Asian Americans and their position within America’s national identity discourse. It is noteworthy that, to this day, Asian American identity remains a hostage of the Orientalist stereotypes that mark the boundaries of its American identity.
Volatile organic compounds (VOCs) were analyzed in air and snow samples at the Jungfraujoch high alpine research station in Switzerland as part of CLACE 5 (CLoud and Aerosol Characterization Experiment) during February/March 2006. The fluxes of individual compounds in ambient air were calculated from gas-phase concentrations and wind speed. The highest concentrations and flux values were observed for the aromatic hydrocarbons benzene (14.3 μg m−2 s−1), 1,3,5-trimethylbenzene (5.27 μg m−2 s−1) and toluene (4.40 μg m−2 s−1), and the aliphatic hydrocarbons i-butane (7.87 μg m−2 s−1), i-pentane (3.61 μg m−2 s−1) and n-butane (3.23 μg m−2 s−1). The measured concentrations and fluxes were used to calculate the efficiency of removal of VOCs by snow, defined as the difference between the initial and final concentration/flux values of a compound before and after wet deposition. The removal efficiency was calculated at −24°C (−13.7°C) and ranged from 37% (35%) for o-xylene to 93% (63%) for i-pentane. The distribution coefficients of VOCs between the air and snow phases were derived from published poly-parameter linear free energy relationship (pp-LFER) data and compared with distribution coefficients obtained from the simultaneous measurements of VOC concentrations in air and snow at Jungfraujoch. The coefficients calculated from pp-LFER exceeded the values measured in the present study, indicating more efficient snow scavenging of the investigated VOCs than theoretical predictions suggest.
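Since the removal efficiencies are reported as percentages, they can be read as the relative drop between pre- and post-deposition values. A sketch with invented numbers, not the Jungfraujoch measurements:

```python
# Removal efficiency as a relative difference between the value before
# and after wet deposition (works for either a concentration or a flux).
# The numbers below are made up for illustration.
def removal_efficiency(before, after):
    return (before - after) / before

eff = removal_efficiency(before=14.3, after=1.0)  # e.g. a flux in ug m-2 s-1
print(round(100 * eff, 1))  # 93.0 -> 93% removed by the snow
```

The same ratio applied compound by compound at the two temperatures reproduces the kind of 37–93% range the study reports.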
A characterization of the ultra-fine aerosol particle counter COPAS (COndensation PArticle counting System) for operation on board the Russian high-altitude research aircraft M-55 Geophysika is presented. The COPAS instrument consists of an aerosol inlet and two dual-channel continuous-flow Condensation Particle Counters (CPCs) operated with the chlorofluorocarbon FC-43. It operates at pressures between 400 and 50 hPa for aerosol detection in the particle diameter (dp) range from 6 nm up to 1 μm. The aerosol inlet, designed for the M-55, is characterized with respect to aspiration, transmission, and transport losses. The experimental characterization of the counting efficiencies of three CPCs yields dp50 values (the particle diameter of 50% detection efficiency) of 6 nm, 11 nm, and 15 nm at temperature differences (ΔT) between saturator and condenser of 17°C, 30°C, and 33°C, respectively. Non-volatile particles are quantified with a fourth CPC with dp50 = 11 nm. It includes an aerosol heating line (250°C) to evaporate H2SO4-H2O particles of 11 nm < dp < 200 nm at pressures between 70 and 300 hPa. An instrumental in-flight inter-comparison of the different COPAS CPCs yields correlation coefficients of 0.996 and 0.985. The particle emission index for the M-55, in the range of 1.4–8.4×10^16 kg−1 fuel burned, has been estimated based on measurements of the Geophysika's own exhaust.
Medium range hydrological forecasts in mesoscale catchments are only possible with the use of hydrological models driven by meteorological forecasts, which in particular contribute quantitative precipitation forecasts (QPF). QPFs are accompanied by large uncertainties, especially for longer lead times, which are propagated within the hydrometeorological model system. To deal with this limitation of predictability, a probabilistic forecasting system is tested, which is based on a hydrological-meteorological ensemble prediction system. The meteorological component of the system is the operational limited-area ensemble prediction system COSMO-LEPS that downscales the global ECMWF ensemble to a horizontal resolution of 10 km, while the hydrological component is based on the semi-distributed hydrological model PREVAH with a spatial resolution of 500 m.
Earlier studies have mostly addressed the potential benefits of hydrometeorological ensemble systems in short case studies. Here we present an analysis of hydrological ensemble hindcasts for two years (2005 and 2006). It is shown that the ensemble covers the forecast uncertainty during different weather situations with appropriate spread. The ensemble also shows advantages over a corresponding deterministic forecast, even when the latter is given an artificial spread.
It has been shown that certain chemokine receptor polymorphisms may be associated with particular complications after organ transplantation. Ischemic-type biliary lesions (ITBL) account for major morbidity and mortality in liver transplant recipients. So far, the exact cause of ITBL remains unclear. Certain risk factors for the development of ITBL, such as donor age and cold ischemic time, are well described. In a previous study, a 32-nucleotide deletion in the chemokine receptor 5 gene (CCR-5Delta32) was strongly associated with the incidence of ITBL in adult liver transplantation. This study re-evaluates the association between the CCR-5Delta32 polymorphism and the incidence of ITBL. 169 patients were included in this retrospective analysis: 134 patients were homozygous for wild-type CCR-5, 33 patients were heterozygous, and 2 patients were homozygous for the CCR-5Delta32 mutation. There were no major differences in donor or recipient demographics. No association was found between the CCR-5Delta32 mutation and the development of ITBL. We conclude that CCR-5Delta32 is not a risk factor for the development of ITBL in our patient cohort.
The CUG-binding protein 1 (CUG-BP1) is a member of the CELF (CUG-BP and ETR-3-like factors) or Bruno-like family and is involved in the control of splicing, translation and mRNA degradation. Several target RNA sequences of CUG-BP1 have been predicted, such as the CUG triplet repeat, GU-rich sequences and the AU-rich element of nuclear pre-mRNAs and/or cytoplasmic mRNAs. CUG-BP1 has three RNA-recognition motifs (RRMs), among which the third RRM (RRM3) can bind to the target RNAs on its own. In this study, we solved the solution structure of the CUG-BP1 RRM3 by heteronuclear NMR spectroscopy. The CUG-BP1 RRM3 exhibits a noncanonical RRM fold, with the four-stranded β-sheet surface tightly associated with the N-terminal extension. Furthermore, we determined the solution structure of the CUG-BP1 RRM3 in complex with (UG)3 RNA, and discovered that the UGU trinucleotide is specifically recognized through extensive stacking interactions and hydrogen bonds within the pocket formed by the β-sheet surface and the N-terminal extension. This study revealed the unique mechanism that enables the CUG-BP1 RRM3 to discriminate the short RNA segment from other sequences, thus providing the molecular basis for understanding the role of the RRM3s in the CELF/Bruno-like family.
U1-snRNA is an integral part of the U1 ribonucleoprotein, which is pivotal for pre-mRNA splicing. Toll-like receptor (TLR) signaling has recently been associated with immunoregulatory capacities of U1-snRNA. Using lung A549 epithelial/carcinoma cells, we report for the first time on interferon regulatory factor (IRF)-3 activation initiated by endosomally delivered U1-snRNA. This was associated with expression of the IRF3-inducible genes interferon-β (IFN-β), CXCL10/IP-10 and indoleamine 2,3-dioxygenase. Mutational analysis of the U1-snRNA-activated IFN-β promoter confirmed the crucial role of the PRDIII element, previously proven pivotal for promoter activation by IRF3. Notably, expression of these parameters was suppressed by bafilomycin A1, an inhibitor of endosomal acidification, implicating endosomal TLR activation. Since resiquimod, an agonist of TLR7/8, failed to stimulate A549 cells, the data suggest that TLR3 is of prime relevance for cellular activation. To assess the overall regulatory potential of U1-snRNA-activated epithelial cells on cytokine production, co-cultivation with peripheral blood mononuclear cells (PBMC) was performed. Interestingly, A549 cells activated by U1-snRNA reinforced phytohemagglutinin-induced interleukin-10 release by PBMC but suppressed that of tumor necrosis factor-α, indicating an anti-inflammatory potential of U1-snRNA. Since U1-snRNA is enriched in apoptotic bodies and epithelial cells are capable of performing efferocytosis, the present data in particular connect to immunobiological aspects of apoptosis at host/environment interfaces.
Background Although current molecular clock methods offer greater flexibility in modelling historical evolutionary events, calibration of the clock with dates from the fossil record is still problematic for many groups. Here we implement several new approaches in molecular dating to estimate evolutionary ages of Lacertidae, an Old World family of lizards with a poor fossil record and uncertain phylogeny. Four different models of rate variation are tested in a new program for Bayesian phylogenetic analysis called TreeTime, based on a combination of mitochondrial and nuclear gene sequences. We incorporate paleontological uncertainty into divergence estimates by expressing multiple calibration dates as a range of probabilistic distributions. We also test the reliability of our proposed calibrations by exploring effects of individual priors on posterior estimates. Results According to the most reliable model, as indicated by Bayes factor comparison, modern lacertids arose shortly after the K/T transition and entered Africa about 45 million years ago, with the majority of their African radiation occurring in the Eocene and Oligocene. Our findings indicate much earlier origins for these clades than previously reported, and we discuss our results in light of paleogeographic trends during the Cenozoic. Conclusions This study represents the first attempt to estimate evolutionary ages of a specific group of reptiles exhibiting uncertain phylogenetic relationships, molecular rate variation and a poor fossil record. Our results emphasize the sensitivity of molecular divergence dates to fossil calibrations, and support the use of combined molecular data sets and multiple, well-spaced dates from the fossil record as minimum node constraints. The bioinformatics program used here, TreeTime, is publicly available, and we recommend its use for molecular dating of taxa faced with similar challenges.
The physics of interacting bosons in the phase with broken symmetry is determined by the presence of the condensate and is very different from the physics in the symmetric phase. The Functional Renormalization Group (FRG) is a powerful method that allows an efficient description of symmetry breaking. In the present thesis we apply the FRG to study the physics of two different models in the broken-symmetry phase. In the first part of this thesis we consider the classical O(1)-model close to the critical point of the second-order phase transition. Employing a truncation scheme based on the relevance of coupling parameters, we study the behavior of the RG flow, which is shown to be influenced by the competition between two characteristic lengths of the system. We also calculate the momentum-dependent self-energy and study its dependence on both length scales. In the second part we apply the FRG formalism to systems of interacting bosons in the phase with spontaneously broken U(1)-symmetry, in arbitrary spatial dimensions at zero temperature. We use a truncation scheme based on a new non-local potential approximation which satisfies both exact relations postulated by Hugenholtz and Pines and by Nepomnyashchy and Nepomnyashchy. We study the RG flow of the model, discuss different scaling regimes, calculate the single-particle spectral density function of interacting bosons, and extract from it both the damping of the quasi-particles and the spectrum of elementary excitations.
This dissertation consists of three chapters. The first two chapters investigate the real effects of inflation, and the third chapter the role of child care for fertility and female labor supply. Chapter 1 introduces a generalized panel threshold model to analyze the relation between inflation and economic growth for a sample of developing countries. It is demonstrated that allowing for regime intercepts can be crucial for obtaining unbiased estimates of both inflation thresholds and their marginal effects on growth in the various regimes. The empirical results confirm that the omitted-variable bias of standard panel threshold models can be statistically and economically significant. Chapter 2, which is joint work with Dieter Nautz, investigates the impact of inflation on relative price variability (RPV) as a further important channel of the real effects of inflation. With a view to the recent debate on the Fed's implicit lower and upper bounds of its inflation objective, the econometric model introduced in Chapter 1 is used to explore the inflation-RPV linkage in U.S. cities. Chapter 3 investigates the relationship between fertility, female labor supply and child care in the context of a life-cycle model for Germany. Particular emphasis is placed on the differences between West and East Germany. Counterfactual policy experiments mimicking recent policy reforms on maternal leave and the provision of subsidized child care are conducted with a structurally estimated version of the model.
Drought and salt stress are the major constraints on yield increases in chickpea (Cicer arietinum). Improving drought and high-salinity tolerance is therefore of utmost importance for breeding. However, the complexity of these traits has allowed only marginal progress. A solution to the current stagnation is expected from innovative molecular tools such as transcriptome analyses providing insight into stress-related gene activity, which, combined with molecular markers and expression (e)QTL mapping, may accelerate knowledge-based breeding. SuperSAGE, an improved version of the serial analysis of gene expression (SAGE) technique generating genome-wide, high-quality transcription profiles from any eukaryote, has been employed in the present study. The method produces 26-bp fragments (26-bp tags) from defined positions in cDNAs, providing sufficient sequence information to unambiguously characterize the mRNAs. Furthermore, SuperSAGE tags may be used directly to produce microarrays and probes for real-time PCR, thereby overcoming the lack of genomic tools in non-model organisms.
Background: To facilitate the identification of gene products important in regulating renal glomerular structure and function, we have produced an annotated transcriptome database for normal human glomeruli using the SAGE approach. Description: The database contains 22,907 unique SAGE tag sequences, with a total tag count of 48,905. For each SAGE tag, the ratio of its frequency in glomeruli relative to that in 115 non-glomerular tissues or cells, a measure of transcript enrichment in glomeruli, was calculated. A total of 133 SAGE tags representing well-characterized transcripts were enriched 10-fold or more in glomeruli compared to other tissues. Comparison of data from this study with a previous human glomerular Sau3A-anchored SAGE library reveals that 47 of the highly enriched transcripts are common to both libraries. Among these are the SAGE tags representing many podocyte-predominant transcripts such as WT-1, podocin and synaptopodin. The enrichment of podocyte transcript tags in this SAGE library indicates that other SAGE tags observed at much higher frequencies in this glomerular library than in non-glomerular SAGE libraries are likely to be glomerulus-predominant. A higher level of mRNA expression for 19 transcripts represented by glomerulus-enriched SAGE tags was verified by RT-PCR comparing glomeruli to lung, liver and spleen. Conclusions: The database can be retrieved from, or interrogated online at, http://cgap.nci.nih.gov/SAGE. The annotated database is also provided as an additional file, with gene identification for 9,022 tags and matches to the human genome or transcript homologs in other species for 1,433 tags. It should be a useful tool for in silico mining of glomerular gene expression.
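The enrichment ratio described above (tag frequency in the glomerular library relative to a pooled reference) can be sketched as follows. The counts and the pseudocount are our assumptions for illustration, not values from the database:

```python
def enrichment(tag_count, lib_total, ref_count, ref_total, pseudo=1):
    """Fold-enrichment of a SAGE tag in the glomerular library relative
    to a pooled reference. A pseudocount is added to the reference count
    so that tags absent from the reference do not divide by zero."""
    glom_freq = tag_count / lib_total
    ref_freq = (ref_count + pseudo) / (ref_total + pseudo)
    return glom_freq / ref_freq

# Hypothetical tag: 50 counts among the 48,905 glomerular tags,
# 2 counts among 1,000,000 pooled reference tags.
print(round(enrichment(50, 48905, 2, 1_000_000)))  # → 341
```

A tag like this would comfortably pass the 10-fold enrichment cutoff used in the study to flag glomerulus-enriched transcripts.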
Many questions regarding gastropod phylogeny have not yet been answered, such as the molecular confirmation of the Heterobranchia concept based on the morphological studies of Haszprunar (1985a; 1988). This taxon contains the “Lower Heterobranchia” (with several “primitive” or “basal” members) and the Euthyneura (with the Opisthobranchia and Pulmonata). Phylogenetic relationships of subgroups within the Heterobranchia have not been satisfactorily resolved, and the monophyly of some taxa within the Heterobranchia (e.g. Opisthobranchia) is questionable. Moreover, most of the “Lower Heterobranchia” have not been included in former molecular studies. In order to resolve phylogenetic relationships within the Heterobranchia, I pursued a molecular systematic approach by sequencing and analysing a variety of genetic markers (including nuclear 28S rDNA + 18S rDNA and mitochondrial 16S rDNA + COI sequences). Maximum likelihood as well as Bayesian inference methods were used for phylogenetic reconstruction. The data were investigated prior to tree reconstruction in order to find the most appropriate dataset for reconstructing heterobranch phylogeny. A variety of statistical tests (such as the chi-square test and the relative-rate test) were applied, and substitution saturation was measured. The relative-rate test revealed the highest evolutionary rates within the “Lower Heterobranchia” (Omalogyra sp., Omalogyra fusca, Murchisonella sp., Ebala sp. and Architectonica perspectiva) and Opisthobranchia (Hyalocylis striata). Furthermore, many of the nucleotide positions show a high degree of substitution saturation. Additionally, bipartitions (splits) in the alignment were examined and visualized by split network analyses to estimate data quality. A high level of conflict, indicated by many parallel edges of the same length, could be observed in the NeighborNet graphs.
Moreover, several taxa with long terminal branches, belonging to the Vetigastropoda, Caenogastropoda, “Lower Heterobranchia” or Opisthobranchia (Nudipleura), could be identified in all three datasets. All phylogenetic analyses revealed a monophyletic Heterobranchia, within which several well supported clades could be resolved. However, the traditional classification based on morphological data could not be confirmed, owing to a paraphyletic Euthyneura (because of the inclusion of the Pyramidellidae and Glacidorboidea) as well as paraphyletic Pulmonata and polyphyletic Opisthobranchia. Based on the inferred phylogeny, evolutionary trends regarding habitat colonisation and character complexes could be deduced. A case study was conducted to estimate divergence ages using a “relaxed” molecular clock approach with fossils as minimum age constraints. However, owing to large 95% confidence intervals, a precise dating of the nodes was not possible; the results are therefore considered preliminary. To test the plausibility of the newly obtained hypotheses, the results were evaluated a posteriori using a hypothesis test and the secondary structures of the complete 18S rRNA and 28S rRNA. Secondary structure motifs containing phylogenetic signal supporting various groups within the Heterobranchia were found within domains 43 and E23 2 & 5 of the 18S rRNA as well as within domains E11 and G5_1 of the 28S rRNA. In addition, taxon-specific motifs were found separating the Vetigastropoda from the Caenogastropoda and Heterobranchia, indicating a possible application of the secondary structures of 18S rRNA and 28S rRNA for resolving phylogenetic relationships at higher taxonomic levels such as Gastropoda or even Mollusca. The utility of the newly developed software RNAsalsa for the reconstruction of secondary structures was also tested.
The obtained structures were used to adjust evolutionary models specific to rRNA stem (paired) and loop (unpaired) regions with the intention of improving the phylogenetic results; this approach proved unsuccessful. This investigation provides the most comprehensive molecular study of heterobranch relationships to date, and substantial insights into the evolution and phylogeny of this enigmatic taxon have been gained.
Background: Molecular phylogenies are being published increasingly and many biologists rely on the most recent topologies. However, different phylogenetic trees often contain conflicting results and contradict significant background data. Not knowing how reliable traditional knowledge is, a crucial question concerns the quality of newly produced molecular data. The information content of DNA alignments is rarely discussed, as quality statements are mostly restricted to the statistical support of clades. Here we present a case study of a recently published mollusk phylogeny that contains surprising groupings, based on five genes and 108 species, and we apply new or rarely used tools for the analysis of the information content of alignments and for the filtering of noise (masking of random-like alignment regions, split decomposition, phylogenetic networks, quartet mapping). Results: The data are very fragmentary and contain contaminations. We show that signal-like patterns in the data set are conflicting and partly not distinct, and that the reported strong support for a "rather surprising result" (monoplacophorans and chitons form a monophylum Serialia) does not exist at the level of primary homologies. Split decomposition, quartet mapping and NeighborNet analyses reveal conflicting nucleotide patterns and a lack of distinct phylogenetic signal for the deeper phylogeny of mollusks. Conclusion: Even though currently a majority of molecular phylogenies are justified with reference to the 'statistical' support of clades in tree topologies, this confidence seems to be unfounded. Contradictions between phylogenies based on different analyses are already a strong indication of unnoticed pitfalls. The use of tree-independent tools for exploratory analyses of data quality is highly recommended. Concerning the new mollusk phylogeny, more convincing evidence is needed.
In this work the preparation of organic donor-acceptor thin films was studied. A chamber for organic molecular beam deposition was designed and integrated into an existing deposition system for metallic thin films. Furthermore, the deposition system was extended by a load-lock with integrated bake-out function, a chamber for the deposition of metallic contacts via a stencil mask technique, and a sputtering chamber. For the sublimation of the organic compounds, several effusion cells were designed. The evaporation characteristics and the temperature profile within the cells were studied. Additionally, a simulation program was developed which calculates the evaporation characteristics of different cell types. The following processes were integrated: evaporation of particles, migration on the cell walls, and collisions in the gas phase. It is also possible to consider a temperature gradient within the cell. All processes can be studied separately and their relative strengths can be varied. To verify the simulation results, several evaporation experiments with different cell types were performed. The thickness profile of the prepared thin films was measured as a function of position; the results are in good agreement with the simulation. Furthermore, the simulation program was extended to the field of electron beam induced deposition (EBID). The second part of this work deals with the preparation and characterization of organic thin films. The focus hereby lies on the charge transfer salt (BEDT-TTF)(TCNQ), which has three known structure variants. Thin films were prepared by different methods of co-evaporation and were studied with optical microscopy, X-ray diffraction and energy-dispersive X-ray spectroscopy (EDX). The formation of the monoclinic phase of (BEDT-TTF)(TCNQ) could be demonstrated. Finally, tunnel structures were prepared as first thin-film devices and measured in a 4He cryostat.
The House of Finance moved into its building in the summer of 2008. Under its roof, the House of Finance brings together three departments from the Faculties of Law and of Economics and Business Administration of Goethe University as well as six legally independent institutes, among them the E-Finance Lab. Beyond its traditional tasks in research and teaching, the House of Finance pursues the goal of making research results useful for practitioners and for Germany as a financial center. As one element of this knowledge transfer, the House of Finance publishes the "Newsletter". The "Newsletter" reports on three current research results, developments in executive education, the latest publications by researchers based at the House of Finance, and the calendar of events. Each issue comprises 16 pages and appears quarterly in English.
A tale of two lost archives
(2009)
This paper describes a method to treat contextual equivalence in polymorphically typed lambda calculi, and also how to transfer equivalences from the untyped versions of lambda calculi to their typed variants, where our specific calculus has letrec, has recursive types, and is nondeterministic. The addition of a type label to every subexpression is all that is needed, together with some natural constraints ensuring the consistency of the type labels and the well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained, and that untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to obtain a smooth interaction between reduction and typing, some reduction rules have to be accompanied by a type modification that generalizes or instantiates types.
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness with respect to program semantics, since the encodings are adequate translations with respect to contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy waiting instead of blocking. Furthermore, we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
This paper analyzes the risk properties of typical asset-backed securities (ABS), such as CDOs or MBS, relying on a model with both macroeconomic and idiosyncratic components. The examined properties include expected loss, loss given default, and macro-factor dependencies. Using a two-dimensional loss decomposition as a new metric, the risk properties of individual ABS tranches can be compared directly to those of corporate bonds, within and across rating classes. Applying Monte Carlo simulation, we find that the risk properties of ABS differ significantly and systematically from those of straight bonds with the same rating. In particular, loss given default, the sensitivities to macroeconomic risk, and model risk differ greatly between the instruments. Our findings have implications for understanding the credit crisis and for policy making. On an economic level, our analysis suggests a new explanation for the observed rating inflation in structured finance markets during the pre-crisis period 2004-2007. On a policy level, our findings call for an end to the 'one-size-fits-all' approach to the rating methodology for fixed-income instruments and for a dedicated rating methodology for structured finance products. JEL Classification: G21, G28
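A minimal sketch of the kind of Monte Carlo exercise described, with a macroeconomic and an idiosyncratic component: a one-factor Gaussian copula for the loan pool and a simple attachment/detachment rule for the tranche. All parameters, function names and the copula specification are illustrative assumptions on our part, not the paper's model or calibration:

```python
import random
import statistics
from statistics import NormalDist

def tranche_loss(pool_loss, attach, detach):
    """Loss absorbed by a tranche with the given attachment and
    detachment points, as a fraction of tranche notional."""
    return min(max(pool_loss - attach, 0.0), detach - attach) / (detach - attach)

def expected_tranche_loss(n_loans=100, pd=0.02, rho=0.3, lgd=0.5,
                          attach=0.03, detach=0.07, n_sims=20000, seed=1):
    """One-factor Gaussian copula: loan i defaults when
    sqrt(rho)*Z + sqrt(1-rho)*eps_i < Phi^{-1}(pd), with Z the shared
    macro factor and eps_i idiosyncratic noise."""
    thresh = NormalDist().inv_cdf(pd)
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        z = rng.gauss(0, 1)  # macroeconomic factor, common to all loans
        defaults = sum(
            1 for _ in range(n_loans)
            if rho ** 0.5 * z + (1 - rho) ** 0.5 * rng.gauss(0, 1) < thresh
        )
        pool_loss = lgd * defaults / n_loans
        losses.append(tranche_loss(pool_loss, attach, detach))
    return statistics.mean(losses)
```

Running this for an equity tranche (0-3%) and a mezzanine tranche (3-7%) of the same pool shows the systematic pattern the paper exploits: tranches re-slice the same pool loss distribution, so their expected losses and macro-factor sensitivities differ sharply even though the underlying collateral is identical.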
Induced charge computation
(2009)
One of the main tenets of statistical mechanics is that the properties of a thermodynamic state point do not depend on the choice of the statistical ensemble. This equivalence breaks down for small systems, e.g. single molecules. Hence, the choice of the statistical ensemble is crucial for the interpretation of single-molecule experiments, where the outcome of measurements depends on which variables or control parameters are held fixed and which ones are allowed to fluctuate. Following this principle, this thesis investigates the thermodynamics of single-polymer pulling experiments within two different statistical ensembles. The scaling of the conjugate chain ensembles, the fixed end-to-end vector (Helmholtz) and the fixed applied force (Gibbs) ensembles, is studied in depth. The thesis further investigates ensemble equivalence for different force regimes and polymer-chain contour lengths. Coarse-grained molecular dynamics simulations, i.e. Langevin dynamics, were found to complement the theoretical predictions for the scaling of the ensemble difference of Gaussian chains in different force regimes, with special attention to the zero-force regime. After constructing Helmholtz and Gibbs conjugate ensembles for a Gaussian chain, two different data sets of thermodynamic states in the force-extension plane, i.e. force-extension curves, were generated. The ensemble difference was computed for different polymer-chain lengths from these force-extension curves. The scaling of the ensemble difference with relative polymer-chain length under different force regimes was derived from the simulation data and compared to theoretical predictions. The results demonstrate that the Gaussian chain in the zero-force limit yields nonequivalent ensembles, regardless of its equilibrium bond length and polymer-chain contour length. Moreover, if polymers are charged and confined, coarse-graining is problematic owing to dielectric interfaces.
Hence, the effect of dielectric interfaces must be taken into account when describing physical systems such as ionic channels or biopolymers inside nanopores; it is shown that this effect is crucial for the dynamics of a biopolymer or an ion inside a nanopore. In simulations, the efficient and accurate computation of electrostatic interactions in the presence of an arbitrarily shaped dielectric domain is challenging. Several solutions to this problem have been proposed in the literature, such as a density functional approach, the transformation of the problem into an algebraic one (Induced Charge Computation, ICC), and boundary element methods; the essential concept is the same in each case, namely to replace the dielectric interface with a polarization charge density. These approaches were analyzed and the ICC algorithm was implemented. A new, superior boundary element method was devised that computes forces via the Particle-Particle Particle-Mesh (P3M) method for periodic geometries (ICCP3M). This method was compared to the ICC algorithm, the algebraic solutions, and density functional approaches. Extensive numerical tests against analytically tractable geometries confirmed the correctness and applicability of the developed and implemented algorithms, demonstrating that ICCP3M is the fastest and most versatile algorithm. Further optimization issues in obtaining accurate induced charge densities are also discussed. Finally, the potential of mean force (PMF) of DNA, modelled at a coarse-grained level, inside a nanopore is investigated with and without the inclusion of dielectric effects. Despite the simplicity of the model, the dramatic effect of the dielectric inclusions is clearly seen in the observed force profile.
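Methods like ICC are typically validated against analytically tractable geometries; the simplest such benchmark is a planar dielectric interface, where the induced polarization charge acts on the source charge exactly like a single image charge. A sketch of that textbook limit (the function name and example permittivities are ours, chosen to illustrate the water/membrane situation relevant to nanopores):

```python
def image_charge(q, eps_in, eps_out):
    """Image charge seen by a point charge q embedded in a medium of
    permittivity eps_in near a planar interface with a medium of
    permittivity eps_out (classical electrostatics result:
    q' = q * (eps_in - eps_out) / (eps_in + eps_out)).
    A positive q' for positive q means the charge is repelled
    from the low-dielectric side."""
    return q * (eps_in - eps_out) / (eps_in + eps_out)

# Unit charge in water (eps ~ 80) near a lipid-like region (eps ~ 2):
print(round(image_charge(1.0, 80.0, 2.0), 3))  # ≈ 0.951, i.e. repulsion
```

An ICC-type solver discretizes the interface and solves for the polarization charge density numerically; on a plane, the potential it produces at the source should converge to that of this single image charge, which makes the formula a convenient correctness check.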
Introduction Complex psychopathological and behavioral symptoms, such as delusions and aggression against care providers, are often the primary cause of acute hospital admissions of elderly patients to emergency units and psychiatric departments. This issue represents a clinically highly relevant interdisciplinary diagnostic and therapeutic challenge across many medical specialties and general practice. At least 50% of the dramatically growing number of patients with dementia exhibit aggressive and agitated symptoms during the course of clinical progression, particularly at moderate clinical severity. Methods Commonly used rating scales for agitation and aggression are reviewed and discussed. Furthermore, in this article we focus on the benefits and limitations of all available data on anticonvulsants published for this specific indication, namely valproate, carbamazepine, oxcarbazepine, lamotrigine, gabapentin and topiramate. Results To date, the most positive and robust data are available for carbamazepine; however, pharmacokinetic interactions with secondary enzyme induction limit its use. Controlled data on valproate do not seem to support its use in this population. For oxcarbazepine, only one controlled but negative trial is available. Positive small series and case reports have been published for lamotrigine, gabapentin and topiramate. Conclusions So far, the data on anticonvulsants in demented patients with behavioral disturbances are not convincing. Controlled clinical trials of newer anticonvulsants with a better tolerability profile, using specific, valid and psychometrically sound instruments, are mandatory to verify whether they can serve as a treatment option for this indication.
Algorithmic trading engines versus human traders – do they behave different in securities markets?
(2009)
After exchanges and alternative trading venues introduced electronic execution mechanisms worldwide, the focus of the securities trading industry shifted to the use of fully electronic trading engines by banks, brokers and their institutional customers. These Algorithmic Trading engines enable order submission without human intervention, based on quantitative models applying historical and real-time market data. Although there is widespread discussion of the pros and cons of Algorithmic Trading and of its impact on market volatility and market quality, little is known about how algorithms actually place their orders in the market and whether and in which respects this differs from other order submissions. Based on a dataset that, for the first time, includes a specific flag enabling the identification of orders submitted by Algorithmic Trading engines, the paper investigates the extent of Algorithmic Trading activity and specifically its order placement strategies in comparison to human traders in the Xetra trading system. It is shown that Algorithmic Trading has become a relevant part of overall market activity and that Algorithmic Trading engines differ fundamentally from human traders in their order submission, modification and deletion behavior, as they exploit real-time market data and the latest market movements.
Background, aim, and scope Food consumption is an important route of human exposure to endocrine-disrupting chemicals. So far, this has been demonstrated by exposure modeling or by the analytical identification of single substances in foodstuff (e.g., phthalates) and human body fluids (e.g., urine and blood). Since research in this field is focused on few chemicals (and thus misses mixture effects), the overall contamination of edibles with xenohormones is largely unknown. The aim of this study was to assess the integrated estrogenic burden of bottled mineral water as a model foodstuff and to characterize the potential sources of the estrogenic contamination. Materials, methods, and results In the present study, we analyzed commercially available mineral water in an in vitro system with the human estrogen receptor alpha and detected estrogenic contamination in 60% of all samples, with a maximum activity equivalent to 75.2 ng/l of the natural sex hormone 17beta-estradiol. Furthermore, breeding of the molluskan model Potamopyrgus antipodarum in water bottles made of glass and of plastic [polyethylene terephthalate (PET)] resulted in an increased reproductive output of snails cultured in the PET bottles. This provides the first evidence that substances leaching from plastic food packaging materials act as functional estrogens in vivo. Discussion and conclusions Our results demonstrate a widespread contamination of mineral water with xenoestrogens that partly originates from compounds leaching from the plastic packaging material. These substances possess potent estrogenic activity in vivo in a molluskan sentinel. Overall, the results indicate that a broader range of foodstuff may be contaminated with endocrine disruptors when packed in plastics.
Keywords Endocrine disrupting chemicals - Estradiol equivalents - Human exposure - In vitro effects - In vivo effects - Mineral water - Plastic bottles - Plastic packaging - Polyethylene terephthalate - Potamopyrgus antipodarum - Yeast estrogen screen - Xenoestrogens
The role of microglial cells in the pathogenesis of Alzheimer’s disease (AD) neurodegeneration is unknown. Although several works suggest that chronic neuroinflammation caused by activated microglia contributes to neurofibrillary degeneration, anti-inflammatory drugs do not prevent or reverse neuronal tau pathology. This raises the question of whether microglial activation indeed occurs in the human brain at sites of neurofibrillary degeneration. In view of recent work demonstrating the presence of dystrophic (senescent) microglia in the aged human brain, the purpose of this study was to investigate microglial cells in situ and at high resolution in the immediate vicinity of tau-positive structures, in order to determine conclusively whether degenerating neuronal structures are associated with activated or with dystrophic microglia. We used a newly optimized immunohistochemical method for visualizing microglial cells in archival human brain, together with Braak staging of neurofibrillary pathology, to ascertain the morphology of microglia in the vicinity of tau-positive structures. We now report histopathological findings from 19 humans covering the spectrum from no to severe AD pathology, including patients with Down’s syndrome, showing that degenerating neuronal structures positive for tau (neuropil threads, neurofibrillary tangles, neuritic plaques) are invariably colocalized with severely dystrophic (fragmented) rather than with activated microglial cells. Using Braak staging of Alzheimer neuropathology, we demonstrate that microglial dystrophy precedes the spread of tau pathology. Deposits of amyloid-β protein (Aβ) devoid of tau-positive structures were found to be colocalized with non-activated, ramified microglia, suggesting that Aβ does not trigger microglial activation.
Our findings also indicate that when microglial activation does occur in the absence of an identifiable acute central nervous system insult, it is likely to be the result of systemic infectious disease. The findings reported here strongly argue against the hypothesis that neuroinflammatory changes contribute to AD dementia. Instead, they offer an alternative hypothesis of AD pathogenesis that takes into consideration: (1) the notion that microglia are neuron-supporting and neuroprotective cells; (2) the fact that development of non-familial, sporadic AD is inextricably linked to aging. They support the idea that progressive, aging-related microglial degeneration and loss of microglial neuroprotection, rather than induction of microglial activation, contribute to the onset of sporadic Alzheimer's disease. The results have far-reaching implications for reevaluating current treatment approaches towards AD.
Background The role of the Fcgamma receptor IIa (FcgammaRIIa), a receptor for C-reactive protein (CRP), the classical acute phase protein, in atherosclerosis is not yet clear. We sought to investigate the association of FcgammaRIIa genotype with risk of coronary heart disease (CHD) in two large population-based samples. Methods FcgammaRIIa-R/H131 polymorphisms were determined in a population of 527 patients with a history of myocardial infarction and 527 age- and gender-matched controls drawn from a population-based MONICA-Augsburg survey. In the LURIC population, 2227 patients with angiographically proven CHD, defined as having at least one stenosis [greater than or equal to]50%, were compared with 1032 individuals with stenosis <50%. Results In both populations genotype frequencies of the FcgammaRIIa gene did not show a significant departure from Hardy-Weinberg equilibrium. The FcgammaRIIa R(-131)->H genotype was not independently associated with lower risk of CHD after multivariable adjustment, either in the MONICA population (odds ratio (OR) 1.08; 95% confidence interval (CI) 0.81 to 1.44) or in LURIC (OR 0.96; 95% CI 0.81 to 1.14). Conclusion Our results do not confirm an independent relationship between FcgammaRIIa genotypes and risk of CHD in these populations.
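Two of the statistics used in this analysis, the Hardy-Weinberg check on genotype frequencies and an odds ratio with its 95% confidence interval, can be sketched in a few lines of Python. This is an illustrative sketch only: the counts below are hypothetical, and the study's ORs come from multivariable logistic regression rather than a crude 2x2 table.

```python
import math

def hardy_weinberg_chi2(n_rr, n_rh, n_hh):
    """Chi-square statistic (1 df) for Hardy-Weinberg equilibrium
    from observed genotype counts (e.g. R/R, R/H, H/H)."""
    n = n_rr + n_rh + n_hh
    p = (2 * n_rr + n_rh) / (2 * n)  # allele frequency of R
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    observed = (n_rr, n_rh, n_hh)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf's logit method) from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, (lo, hi)
```

For example, observed counts of 100/200/100 for R/R, R/H and H/H sit exactly at Hardy-Weinberg equilibrium (chi-square of 0), while any 2x2 table with a*d = b*c gives an OR of 1.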
Background Treatment options for metastatic renal cell carcinoma (RCC) are limited due to resistance to chemo- and radiotherapy. The development of small-molecule multikinase inhibitors has now opened novel treatment options. The influence of the receptor tyrosine kinase inhibitor AEE788, applied alone or combined with the mammalian target of rapamycin (mTOR) inhibitor RAD001, on RCC cell adhesion and proliferation in vitro was evaluated. Methods RCC cell lines Caki-1, KTC-26 or A498 were treated with various concentrations of RAD001 or AEE788, and tumor cell proliferation as well as tumor cell adhesion to vascular endothelial cells or to immobilized extracellular matrix proteins (laminin, collagen, fibronectin) were evaluated. The anti-tumoral potential of RAD001 combined with AEE788 was also investigated. Both asynchronous and synchronized cell cultures were used to subsequently analyze drug-induced cell cycle manipulation. Analysis of cell cycle regulating proteins was done by western blotting. Results RAD001 or AEE788 reduced adhesion of RCC cell lines to vascular endothelium and diminished RCC cell binding to immobilized laminin or collagen. Both drugs blocked RCC cell growth, impaired cell cycle progression and altered the expression level of the cell cycle regulating proteins cdk2, cdk4, cyclin D1, cyclin E and p27. The combination of AEE788 and RAD001 resulted in more pronounced RCC growth inhibition, greater rates of G0/G1 cells and lower rates of S-phase cells than either agent alone. Cell cycle proteins were much more strongly altered when both drugs were used in combination than with single drug application. The synergistic effects were observed in an asynchronous cell culture model, but were more pronounced in synchronous RCC cell cultures. Conclusions Potent anti-tumoral activities of the multikinase inhibitors AEE788 and RAD001 have been demonstrated.
Most importantly, the simultaneous use of both AEE788 and RAD001 offered a distinct combinatorial benefit and thus may provide a therapeutic advantage over either agent employed as a monotherapy for RCC treatment.
Background Many systems in nature are characterized by complex behaviour where large cascades of events, or avalanches, unpredictably alternate with periods of little activity. Snow avalanches are an example. Often the size distribution f(s) of a system's avalanches follows a power law, and the branching parameter sigma, the average number of events triggered by a single preceding event, is unity. A power law for f(s), and sigma=1, are hallmark features of self-organized critical (SOC) systems, and both have been found for neuronal activity in vitro. Therefore, and since SOC systems and neuronal activity both show large variability, long-term stability and memory capabilities, SOC has been proposed to govern neuronal dynamics in vivo. Testing this hypothesis is difficult because neuronal activity is spatially or temporally subsampled, while theories of SOC systems assume full sampling. To close this gap, we investigated how subsampling affects f(s) and sigma by imposing subsampling on three different SOC models. We then compared f(s) and sigma of the subsampled models with those of multielectrode local field potential (LFP) activity recorded in three macaque monkeys performing a short-term memory task. Results Neither the LFP nor the subsampled SOC models showed a power law for f(s). Both f(s) and sigma depended sensitively on the subsampling geometry and the dynamics of the model. Only one of the SOC models, the Abelian Sandpile Model, exhibited f(s) and sigma similar to those calculated from LFP activity. Conclusions Since subsampling can prevent the observation of the characteristic power law and sigma in SOC systems, misclassifications of critical systems as sub- or supercritical are possible. Nevertheless, the system-specific scaling of f(s) and sigma under subsampling conditions may prove useful to select physiologically motivated models of brain function.
Models that better reproduce f(s) and sigma calculated from the physiological recordings may be selected over alternatives.
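The subsampling procedure can be illustrated on the simplest of the three models, the Abelian (Bak-Tang-Wiesenfeld) sandpile. The sketch below is a minimal illustration under assumed parameters (the grid size, drive length and observed-site layout are hypothetical, not the configurations used in the study): full sampling counts every toppling in an avalanche, while subsampling counts only topplings at a sparse grid of "electrode" sites.

```python
import random

def sandpile_avalanches(n=20, grains=3000, observed=None, seed=0):
    """Drive an n x n Abelian (BTW) sandpile and record avalanche sizes.

    A site topples when it holds >= 4 grains, sending one grain to each
    neighbour; grains falling off the boundary are lost (dissipation).
    `observed` is an optional set of (row, col) sites: if given, an
    avalanche's size counts only topplings at those sites, mimicking
    spatial subsampling by a multielectrode array.
    """
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)   # random drive
        z[i][j] += 1
        size = 0
        stack = [(i, j)]
        while stack:
            x, y = stack.pop()
            if z[x][y] < 4:
                continue                            # already relaxed
            z[x][y] -= 4
            if observed is None or (x, y) in observed:
                size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < n and 0 <= v < n:       # boundary grains are lost
                    z[u][v] += 1
                    if z[u][v] >= 4:
                        stack.append((u, v))
        sizes.append(size)
    return sizes

# Full sampling versus a sparse 4x4 "electrode" grid of observed sites
full = sandpile_avalanches(seed=42)
electrodes = {(r, c) for r in range(2, 20, 5) for c in range(2, 20, 5)}
sub = sandpile_avalanches(observed=electrodes, seed=42)
```

With the same seed the underlying dynamics are identical, so each subsampled avalanche size is at most its fully sampled counterpart; histogramming `full` against `sub` shows how subsampling distorts f(s).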
Background Evidence-based guidelines potentially improve healthcare. However, their de novo development requires substantial resources - especially for complex conditions - and adaptation may be biased by contextually influenced recommendations in source guidelines. In this paper we describe a new approach to guideline development - the systematic guideline review method (SGR) - and its application in the development of an evidence-based guideline for family physicians on chronic heart failure (CHF). Methods A systematic search for guidelines was carried out. Evidence-based guidelines on CHF management in adults in ambulatory care published in English or German between the years 2000 and 2004 were included. Guidelines on acute or right heart failure were excluded. Eligibility was assessed by two reviewers, the methodological quality of selected guidelines was appraised using the AGREE instrument, and a framework of relevant clinical questions for diagnostics and treatment was derived. Data were extracted into evidence tables, systematically compared by means of a consistency analysis and synthesized in a preliminary draft. The most relevant primary sources were re-assessed to verify the cited evidence. Evidence and recommendations were summarized in a draft guideline. Results Of 16 included guidelines, five were of good quality. A total of 35 recommendations were systematically compared: 25/35 were consistent, 9/35 inconsistent, and 1/35 unratable (derived from a single guideline). Of the 25 consistencies, 14 were based on consensus, seven on evidence, and four differed in grading. Major inconsistencies were found in 3/9 of the inconsistent recommendations. We re-evaluated the evidence for 17 recommendations (evidence-based, differing evidence levels and minor inconsistencies); the majority was congruent.
Incongruences were found where the stated evidence could not be verified in the cited primary sources, or where the evaluation in the source guidelines focused on treatment benefits and underestimated the risks. The draft guideline was completed in 8.5 man-months. The main limitation of this study was the lack of a second reviewer. Conclusions The systematic guideline review, including framework development, consistency analysis and validation, is an effective, valid and resource-saving approach to the development of evidence-based guidelines.
Riboswitches are a novel class of genetic control elements that function through the direct interaction of small metabolite molecules with structured RNA elements. The ligand is bound with high specificity and affinity to its RNA target and induces conformational changes of the RNA's secondary and tertiary structure upon binding. To elucidate the molecular basis of the remarkable ligand selectivity and affinity of one of these riboswitches, extensive all-atom molecular dynamics simulations in explicit solvent ({approx}1 µs total simulation length) of the aptamer domain of the guanine sensing riboswitch are performed. The conformational dynamics is studied when the system is bound to its cognate ligand guanine as well as bound to the non-cognate ligand adenine and in its free form. The simulations indicate that residue U51 in the aptamer domain functions as a general docking platform for purine bases, whereas the interactions between C74 and the ligand are crucial for ligand selectivity. These findings either suggest a two-step ligand recognition process, including a general purine binding step and a subsequent selection of the cognate ligand, or hint at different initial interactions of cognate and noncognate ligands with residues of the ligand binding pocket. To explore possible pathways of complex dissociation, various nonequilibrium simulations are performed which account for the first steps of ligand unbinding. The results delineate the minimal set of conformational changes needed for ligand release, suggest two possible pathways for the dissociation reaction, and underline the importance of long-range tertiary contacts for locking the ligand in the complex.
Oligonucleotides suppress PKB/Akt and act as superinductors of apoptosis in human keratinocytes
(2009)
DNA oligonucleotides (ODN) applied to an organism are known to modulate the innate and adaptive immune system. Previous studies showed that a CpG-containing ODN (CpG-1-PTO) and, interestingly, also a non-CpG-containing ODN (nCpG-5-PTO) suppress inflammatory markers in skin. In the present study it was investigated whether these molecules also influence cell apoptosis. Here we show that CpG-1-PTO, nCpG-5-PTO and also natural DNA suppress the phosphorylation of PKB/Akt in a cell-type-specific manner. Interestingly, only epithelial cells of the skin (normal human keratinocytes, HaCaT and A-431) show a suppression of PKB/Akt. This suppressive effect depends on ODN length, sequence and backbone. Moreover, it was found that TGFa-induced levels of PKB/Akt and EGFR were suppressed by the ODN tested. We hypothesize that this suppression might facilitate programmed cell death. Testing this hypothesis, we found an increase of apoptosis markers (caspase 3/7, 8, 9, cytosolic cytochrome c, histone-associated DNA fragments, apoptotic bodies) when cells were treated with ODN in combination with low doses of staurosporine, a well-known pro-apoptotic stimulus. In summary, the present data demonstrate DNA as a modulator of apoptosis which specifically targets skin epithelial cells.
Global warming is expected to be associated with diverse changes in freshwater habitats in north-western Europe. Increasing evaporation, lower oxygen concentration due to increased water temperature and changes in precipitation patterns are likely to affect the survival ratio and reproduction rate of freshwater gastropods (Pulmonata, Basommatophora). This work is a comprehensive analysis of the climatic factors influencing their ranges, both in the past and in the near future. A macroecological approach showed that for a large proportion of genera the ranges were projected to contract by 2080, even if unlimited dispersal was assumed. The forecasted warming predicted the emergence of new suitable areas in the cooler northern ranges, but also drastically reduced the available habitat in the southern part of the studied region. In order to better understand the range dynamics in the past and the post-glacial colonisation patterns, an approach combining ecological niche modelling and phylogeography was used for two model species, Radix balthica and Ancylus fluviatilis. Phylogeographic model selection on a COI mtDNA dataset confirmed that R. balthica most likely spread from two disjunct central European refuges after the last glacial maximum. The phylogeographic analysis of A. fluviatilis, using 16S and COI mtDNA datasets, also inferred central European refugia. The absence of niche conservatism (adaptive potential) inferred for A. fluviatilis puts a cautionary note on the use of climate envelope models to predict the future range of this species. However, the other model species exhibited strong niche conservatism, which lends confidence to such predictions. A profound faunal shift will take place in Central Europe within the next century, permitting either the establishment of species currently living south of the studied region or the proliferation of organisms relying on the same food resources.
This study points out the need for further investigations of the dispersal modes of freshwater snails, since the future range size of these species depends on their ability to establish in newly available habitats. Likewise, the mixed mating system of these organisms gives them the possibility to found a new population from a single individual. This will probably affect colonisation success and needs further investigation.
Lentiviral vectors mediate gene transfer into dividing and most non-dividing cells. Thereby, they stably integrate the transgene into the host cell genome. For this reason, lentiviral vectors are a promising tool for gene therapy. However, safety and efficiency of lentiviral mediated gene transfer still needs to be optimised. Ideally, cell entry should be restricted to the cell population relevant for a particular therapeutic application. Furthermore, lentiviral vectors able to transduce quiescent lymphocytes are desirable. Although many approaches were followed to engineer retroviral envelope proteins, an effective and universally applicable system for retargeting of lentiviral cell entry is still not available. Just before the experimental work of this thesis was started, retargeting of measles virus (MV) cell entry was achieved. This virus has two types of envelope glycoproteins, the hemagglutinin (H) protein responsible for receptor recognition and the fusion (F) protein mediating membrane fusion. For retargeting, the H protein was mutated in its interaction sites for the native MV receptors and a ligand or a single-chain antibody (scAb) was fused to its ectodomain. It was hypothesised that the retargeting system of MV can be transferred to lentiviral vectors by pseudotyping human immunodeficiency virus-1 (HIV-1) derived vector particles with the MV glycoproteins. As the unmodified MV glycoproteins did not pseudotype HIV vectors, two F and 15 H protein variants carrying stepwise truncations or amino acid (aa) exchanges in their cytoplasmic tails were screened for their ability to form MV-HIV pseudotypes. The combinations Hcd18/Fcd30, Hcd19/Fcd30 and Hcd24+4A/Fcd30 led to the most efficient pseudotype formation, with titers above 10^6 transducing units/ml using concentrated particles.
The F cytoplasmic tail was truncated by 30 aa and the H cytoplasmic tail by 18, 19 or 24 residues, with four alanines added after the start methionine in the latter case. Western blot analysis indicated that particle incorporation of the MV glycoproteins was enhanced upon truncation of their cytoplasmic tails. With the MV-HIV vectors, high titers were obtained on different cell lines expressing one or both MV receptors, whereas MV receptor-negative cells remained untransduced. Titers were enhanced using an optimal H to F plasmid ratio (1:7) during vector particle production. Based on the described pseudotyping with the MV glycoprotein variants, HIV vectors retargeted to the epidermal growth factor receptor (EGFR) or the B cell surface marker CD20 were generated. For the production of the retargeted vectors MVaEGFR-HIV and MVaCD20-HIV, Fcd30 was used together with a native-receptor-blind Hcd18 protein displaying at its ectodomain either the ligand EGF or a scAb directed against CD20. With these vectors, gene transfer into target receptor-positive cells was several orders of magnitude more efficient than into control cells. The almost complete absence of background transduction of non-target cells was demonstrated, for example, in mixed cell populations, where the CD20-targeting vector selectively eliminated CD20-positive cells upon suicide gene transfer. Remarkably, transduction of activated primary human CD20-positive B cells was much more efficient with the MVaCD20-HIV vector than with the standard pseudotype vector VSV-G-HIV. Even more surprisingly, MVaCD20-HIV vectors were able to transduce quiescent primary human B cells, which until then had been resistant to lentiviral gene transfer. The most critical step during the production of MV-HIV pseudotypes was the identification of H cytoplasmic tail mutants that allowed pseudotyping while retaining the fusion helper function.
In contrast to previously inefficient targeting strategies, the success of this novel targeting system must be based on the separation of the receptor recognition and fusion functions onto two different proteins. Furthermore, with the CD20-targeting vector, transduction of quiescent B cells was demonstrated for the first time. Our data and literature data suggest that CD20 binding and hyper-cross-linking by the vector particles result in calcium influx and thus activation of quiescent B cells. Alternatively, this feature may be based on a residual binding activity of the MV glycoproteins to the native MV receptors that is insufficient for entry but induces cytoskeleton rearrangements dissolving the post-entry block of HIV vectors. Hence, in this thesis efficient retargeting of lentiviral vectors and transduction of quiescent cells were combined. This novel targeting strategy should be easily adaptable to many other target molecules by extending the modified MV H protein with appropriate specific domains or scAbs. It should now be possible to tailor lentiviral vectors for highly selective gene transfer into any desired target cell population with an unprecedented degree of efficiency.
Neutron stars are very dense objects. One teaspoon of their material would have a mass of five billion tons. Their gravitational force is so strong that if an object were to fall from just one meter high, it would hit the surface of the neutron star at two thousand kilometers per second. In such dense bodies, particles different from the ones present in atomic nuclei, the nucleons, can exist. These particles can be hyperons, which carry non-zero strangeness, or broader resonances. There can also be different states of matter inside neutron stars, such as meson condensates and, if the density is high enough to deconfine the nucleons, quark matter. As new degrees of freedom appear in the system, different aspects of matter have to be taken into account, the most important being the restoration of chiral symmetry. This symmetry is spontaneously broken, a fact related to the presence of a condensate of scalar quark-antiquark pairs, which for this reason is called the chiral condensate. This condensate is present at low densities and even in vacuum. It is important to remember at this point that the modern concept of the vacuum is far from emptiness: it is full of virtual particles that are constantly created and annihilated, their existence being allowed by the uncertainty principle. At very high temperature/density, when the composite particles are dissolved into their constituents, the chiral condensate vanishes and chiral symmetry is restored. To explain how and when chiral symmetry is restored in neutron stars we use the non-linear sigma model. This is an effective relativistic quantum model that was developed to describe systems of hadrons interacting via meson exchange. The model was constructed from symmetry relations, which allow it to be chirally invariant.
The first consequence of this invariance is that there are no bare mass terms in the Lagrangian density, so that all, or most, of the particle masses come from interactions with the medium. There are still other interesting features of neutron stars that cannot be found anywhere else in nature. One of them is the high isospin asymmetry. In a normal nucleus, the numbers of protons and neutrons are more or less the same. In a neutron star the number of neutrons is much higher than that of protons. The resulting extra energy (called Fermi energy) increases the energy of the system, allowing the star to support more mass against gravitational collapse. As a consequence, in the early stages of neutron star evolution, when there are still many trapped neutrinos, the proton fraction is higher than in later stages, and consequently the maximum mass that the star can support against gravity is smaller. This, among many other features, shows how the microscopic phenomena of the star are reflected in its macroscopic properties. Another important property of neutron stars is charge neutrality. It is a required assumption for stability in neutron stars, but there are others. One example is chemical equilibrium: the number of particles of each kind is not conserved, but they are created and annihilated through specific reactions that proceed at the same rate in both directions. Although the space-time of special relativity, Minkowski space, can be used to calculate the microscopic physics of neutron stars, this is not true for the global properties of the star. In this case general relativity has to be used. The solution of Einstein's equations, simplified to static, spherical and isotropic stars, corresponds to configurations in which the star is in hydrostatic equilibrium. This means that the internal pressure, coming mainly from the Fermi energy of the neutrons, balances gravity and avoids collapse.
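The static, spherically symmetric configurations referred to here are governed by the Tolman-Oppenheimer-Volkoff (TOV) equations; the standard form (in units G = c = 1) is reproduced below for orientation, not quoted from the thesis:

```latex
\frac{dP}{dr} \;=\; -\,\frac{\bigl[\varepsilon(r)+P(r)\bigr]\bigl[m(r)+4\pi r^{3}P(r)\bigr]}{r\bigl[r-2m(r)\bigr]},
\qquad
\frac{dm}{dr} \;=\; 4\pi r^{2}\,\varepsilon(r).
```

Here P is the pressure, \varepsilon the energy density supplied by the equation of state, and m(r) the gravitational mass enclosed within radius r; integrating outward from a chosen central pressure until P = 0 yields the mass and radius of the equilibrium configuration.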
When rotation is included the star becomes more stable and, consequently, can be more massive. The rotation also makes it non-spherical, which requires the metric of the star to be a function of the polar coordinate as well. Another important feature that has to be taken into account is the dragging of the local inertial frame. It generates centrifugal forces that do not originate in interactions with other bodies, but in the non-rotation of the frame of reference within which observations are made. These modifications are introduced through Hartle's approximation, which solves the problem by applying perturbation theory. In the mean-field approximation, the couplings as well as the parameters of the non-linear sigma model are calibrated to reproduce massive neutron stars. The introduction of new degrees of freedom decreases the maximum mass allowed for the neutron star, as they soften the equation of state. In practice, the only baryons present in the star besides the nucleons are the Lambda and Sigma-, in the case in which the baryon octet is included, and the Lambda and Delta-,0,+,++, in the case in which the baryon decuplet is included. The leptons are included to ensure charge neutrality. We choose to carry out our calculations including the baryon octet but not the decuplet, in order to avoid uncertainties in the couplings. The couplings of the hyperons were fitted to the depths of their potentials in nuclei. In this case chiral symmetry restoration can be observed through the behavior of the related order parameter. The symmetry begins to be restored inside neutron stars and the transition is a smooth crossover. Different stages of the neutron star cooling are reproduced taking into account trapped neutrinos, finite temperature and entropy. Finite-temperature calculations include the heat bath of hadronic quasiparticles within the grand canonical potential of the system.
Different schemes are considered, with constant temperature, metric-dependent temperature and constant entropy. The neutrino chemical potential is introduced by fixing the lepton number in the system, which also controls the amount of electrons and protons (through charge neutrality). The balance between these two features is delicate and influenced mainly by baryon number conservation. Isolated stars have a fixed number of baryons, which creates a link between different stages of the cooling. The maximum masses allowed are determined for each stage of the cooling process: the one with high entropy and trapped neutrinos, the deleptonized one with high entropy, and the cold one in beta equilibrium. The cooling process is also influenced by constraints related to the rotation of the star. When rotation is included the star becomes more stable and, consequently, can be more massive. The rotation also deforms it, requiring modifications of the metric that are introduced through perturbation theory. The analysis of the first stages of the neutron star, when it is called a proto-neutron star, gives certain constraints on the possible rotation frequencies in the colder stages. Instability windows are calculated in which the star can be stable during certain stages but collapses into a black hole during the cooling process. In the last part of the work the hadronic SU(3) model is extended to include quark degrees of freedom. A new effective potential for the order parameter of deconfinement, the Polyakov loop, connects the physics at low chemical potential and high temperature in the QCD phase diagram with the high-chemical-potential, low-temperature part. This is done by introducing a chemical potential dependence into the already temperature-dependent potential. Analyzing the effect of both order parameters, the chiral condensate and the Polyakov loop, we can draw a phase diagram for symmetric as well as for star matter.
The diagram contains a crossover region as well as a first-order phase transition line. The new couplings and parameters of the model are chosen mainly to fit lattice QCD results, including the position of the critical point. Finally, this matter containing different degrees of freedom (depending on the phase of the diagram) is used to calculate hybrid star properties.
Shape complementarity is a compulsory condition for molecular recognition. In our 3D ligand-based virtual screening approach called SQUIRREL, we combine shape-based rigid body alignment with fuzzy pharmacophore scoring. Retrospective validation studies demonstrate the superiority of methods which combine both shape and pharmacophore information on the family of peroxisome proliferator-activated receptors (PPARs). We demonstrate the real-life applicability of SQUIRREL by a prospective virtual screening study, where a potent PPARalpha agonist with an EC50 of 44 nM and 100-fold selectivity against PPARgamma has been identified...
Background The evidence to date for a dose-response relationship between physical workload and the development of lumbar disc diseases is limited. We therefore investigated the possible etiologic relevance of cumulative occupational lumbar load to lumbar disc diseases in a multi-center case-control study. Methods In four study regions in Germany (Frankfurt/Main, Freiburg, Halle/Saale, Regensburg), patients seeking medical care for pain associated with clinically and radiologically verified lumbar disc herniation (286 males, 278 females) or symptomatic lumbar disc narrowing (145 males, 206 females) were prospectively recruited. Population control subjects (453 males and 448 females) were drawn from the regional population registers. Cases and control subjects were between 25 and 70 years of age. In a structured personal interview, a complete occupational history was elicited to identify subjects with certain minimum workloads. On the basis of job task-specific supplementary surveys performed by technical experts, the situational lumbar load, represented by the compressive force at the lumbosacral disc, was determined via biomechanical model calculations for any working situation with object handling and load-intensive postures during the total working life. For this analysis, all manual handling of objects of about 5 kilograms or more and postures with trunk inclination of 20 degrees or more were included in the calculation of cumulative lumbar load. Confounder selection was based on biologic plausibility and on the change-in-estimate criterion. Odds ratios (OR) and 95% confidence intervals (CI) were calculated separately for men and women using unconditional logistic regression analysis, adjusted for age, region, and unemployment as a major life event (in males) or psychosocial strain at work (in females), respectively. To further elucidate the contribution of past physical workload to the development of lumbar disc diseases, we performed lag-time analyses.
Results We found a positive dose-response relationship between cumulative occupational lumbar load and lumbar disc herniation as well as lumbar disc narrowing among men and women. Even past lumbar load seems to contribute to the risk of lumbar disc disease. Conclusions According to our study, cumulative physical workload is related to lumbar disc diseases among men and women.
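The cumulative dose described in the Methods (compressive force at the lumbosacral disc, accumulated over every working situation that meets the handling or posture thresholds) can be sketched as below. The data layout and field names are hypothetical; in the study the forces come from biomechanical model calculations:

```python
def cumulative_lumbar_load(activities, min_mass_kg=5.0, min_inclination_deg=20.0):
    """Cumulative lumbar load (Newton-hours) over a working life.

    `activities` is a list of dicts, one per recorded working situation
    (hypothetical layout, for illustration only):
      force_n         -- compressive force at the lumbosacral disc (N)
      hours           -- total exposure time over the working life (h)
      mass_kg         -- mass of the handled object (0 if none)
      inclination_deg -- trunk inclination during the activity (degrees)
    Only object handling of >= min_mass_kg or postures with trunk
    inclination >= min_inclination_deg enter the dose, mirroring the
    inclusion thresholds described in the study.
    """
    total = 0.0
    for a in activities:
        if a["mass_kg"] >= min_mass_kg or a["inclination_deg"] >= min_inclination_deg:
            total += a["force_n"] * a["hours"]
    return total
```

Lag-time analyses then simply restrict `activities` to situations more than a fixed number of years in the past before summing.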
Background Since June 2002, revised regulations in Germany have required "Emergency Medical Care" as an interdisciplinary subject, and state that emergency treatment should be of increasing importance within the curriculum. A survey of the current status of undergraduate medical education in emergency medical care establishes the basis for further committee work. Methods Using a standardized questionnaire, all medical faculties in Germany were asked to answer questions concerning the structure of their curriculum, representation of disciplines, instructors' qualifications, teaching and assessment methods, as well as evaluation procedures. Results Data from 35 of the 38 medical schools in Germany were analysed. In 32 of 35 medical faculties, the local Department of Anaesthesiology is responsible for the teaching of emergency medical care; in two faculties, emergency medicine is taught mainly by the Department of Surgery and in another by Internal Medicine. Lectures, seminars and practical training units are scheduled in varying composition at 97% of the locations. Simulation technology is integrated at 60% (n=21); problem-based learning at 29% (n=10), e-learning at 3% (n=1), and internship in ambulance service is mandatory at 11% (n=4). In terms of assessment methods, multiple-choice exams (15 to 70 questions) are favoured (89%, n=31), partially supplemented by open questions (31%, n=11). Some faculties also perform single practical tests (43%, n=15), objective structured clinical examination (OSCE; 29%, n=10) or oral examinations (17%, n=6). Conclusion Emergency Medical Care in undergraduate medical education in Germany has a practical orientation, but is very inconsistently structured. The innovative options of simulation technology or state-of-the-art assessment methods are not consistently utilized. 
Therefore, an exchange of experiences and concepts between faculties and disciplines should be promoted to guarantee a standard level of education in emergency medical care.
Since its development in the publications of Brace, Gatarek and Musiela (1997), on the one hand, and independently Miltersen, Sandmann and Sondermann (1997), on the other, the LIBOR market model (LMM) has become the most widely accepted instrument for modelling the term structure of interest rates and for pricing the associated interest-rate derivatives. LIBOR stands for London Inter-Bank Offered Rate, a reference rate for short-term deposits fixed daily in London; three- and six-month maturities are the ones commonly used in connection with the LMM. Research aimed at improving this model has grown considerably in recent years: by reducing the error in fitting the daily observed prices of interest-rate options such as caps and swaptions, one also obtains more accurate valuations for other, more exotic derivatives. The central underlying idea of the LMM is to treat the forward rates directly as the primary (vector-valued) process of several LIBOR rates and to model them simultaneously, instead of merely deriving them from an overarching, infinite-dimensional forward-rate process as in the earlier Heath-Jarrow-Morton framework. The most convincing argument for this discretization is that the LIBOR rates are directly observable in the market and that their volatilities can be related in a natural way to liquidly traded products, namely those very caps and swaptions. Nevertheless, the model suffers from a serious deficiency: it does not reproduce any curvature of the volatility surface across options with different strike rates. As in the simple one-dimensional Black-Scholes model, the inaccuracies of the assumed distribution show up clearly as missing heavy tails; smile and skew effects are observable.
In the classical LIBOR market model, only an affine structure is generated along the strike dimension, which can at best serve as an approximation to the desired surface. The resulting distortions naturally lead to an inaccurate picture of reality and to mispriced products in regions even slightly away from the at-the-money range. Unwanted dissonances of this kind in profit-and-loss figures led, for example, to severe losses in the interest-rate derivatives portfolio of what is today the Royal Bank of Scotland in 1998. ...
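The lognormal rate dynamics at the heart of the LMM can be illustrated with a single caplet. The sketch below (all parameter values and the function names `black_caplet` and `mc_caplet` are illustrative assumptions, not taken from this thesis) simulates one forward LIBOR rate under its own forward measure and checks the Monte Carlo caplet price against the closed-form Black formula; a full LMM would evolve all forward rates jointly with the appropriate drift corrections:

```python
import math
import random

def black_caplet(L0, K, sigma, T, tau=0.5, df=1.0):
    """Black (1976) caplet price for a lognormal forward LIBOR rate.

    L0: today's forward rate, K: strike, sigma: lognormal volatility,
    T: rate fixing time, tau: accrual period, df: discount factor.
    """
    d1 = (math.log(L0 / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return df * tau * (L0 * N(d1) - K * N(d2))

def mc_caplet(L0, K, sigma, T, tau=0.5, df=1.0, n=100_000, seed=1):
    """Monte Carlo caplet price: under its forward measure the rate is
    driftless, L(T) = L0 * exp(-0.5*sigma^2*T + sigma*sqrt(T)*Z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        LT = L0 * math.exp(-0.5 * sigma ** 2 * T + sigma * math.sqrt(T) * z)
        total += max(LT - K, 0.0)
    return df * tau * total / n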
The NADH:ubiquinone oxidoreductase (complex I) is a large membrane-bound protein complex coupling the redox reaction of NADH oxidation and quinone reduction to vectorial proton translocation across bioenergetic membranes. The mechanism of proton pumping is still unknown; it seems, however, that the reduction of quinone induces conformational changes which drive proton uptake on one side and release on the other side of the membrane. In this study the proposed quinone and inhibitor binding pocket located at the interface of the 49-kDa and PSST subunits was explored by a large number of point mutations introduced into complex I from the strictly aerobic yeast Yarrowia lipolytica. Point mutations were systematically chosen based on the crystal structure of the hydrophilic domain of complex I from Thermus thermophilus. In total, the properties of 94 mutants at 39 positions, which completely cover the lining of the large putative quinone and inhibitor binding cavity, are described and discussed here. A structure/function analysis allowed the identification of functional domains within the large putative quinone binding cavity. A possible quinone access path ranging from the N-terminal beta-sheet of the 49-kDa subunit into the pocket to tyrosine 144 could be defined, since all exchanges introduced here caused an almost complete loss of complex I activity. A region located deeper in the proposed quinone binding pocket is apparently not important for complex I activity. In contrast, all exchanges of tyrosine 144, even the very conservative mutant Y144F, essentially abolished the dNADH:DBQ oxidoreductase activity of complex I. However, with higher concentrations of Q1 or Q2 the dNADH:Q oxidoreductase activity was largely restored in the mutants with the more conservative exchanges. Proton pumping experiments showed that this activity was also coupled to proton translocation, indicating that these quinones were reduced at the physiological site.
However, the apparent Km values for Q1 or Q2 were drastically increased, clearly demonstrating that tyrosine 144 is central for quinone binding and reduction. These results further prove that the enzymatically relevant quinone binding site of complex I is located at the interface of the 49-kDa and PSST subunits. The quinone binding pocket is thought to comprise the binding sites for a plethora of specific complex I inhibitors that are usually grouped into three classes. The large array of mutants targeting the quinone binding cavity was examined with a representative of each inhibitor class. Many mutants conferring resistance were identified which, depending on the inhibitor tested, clustered in well defined and partially overlapping regions of the large putative quinone and inhibitor binding cavity. Mutants with effects on type A (DQA) and type B (rotenone) inhibitors were found in a subdomain corresponding to the former [NiFe] site in homologous hydrogenases, whereby the type A inhibitor DQA seems to bind deeper in this domain. Mutants with effects on the type C inhibitor (C12E8) were found in a narrow crevice. Exchanging more exposed residues at the border of these well defined domains affected all three inhibitor types. Therefore, the results as a whole provide further support for the concept that different inhibitor classes bind to different but partially overlapping binding sites within a single large quinone binding pocket. In addition, they also indicate the approximate location of the binding sites within the structure of the large quinone and inhibitor binding cavity at the interface of the 49-kDa and PSST subunits. It has been proposed earlier that the highly conserved HRGXE motif in the 49-kDa subunit forms part of the quinone binding site of complex I. Mutagenesis of the HRGXE motif revealed that these residues are rather critical for complex I assembly and seem to have an important structural role.
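The effect of a drastically increased apparent Km can be illustrated with the Michaelis-Menten rate law; this is a generic textbook sketch with illustrative numbers, not the authors' kinetic analysis:

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical wild type (Km = 1) vs. a mutant with a 10-fold increased
# apparent Km (Km = 10), same Vmax.  At low substrate concentration the
# mutant's activity is strongly reduced; at near-saturating [S] it recovers,
# mirroring the restoration of activity at higher Q1/Q2 concentrations.
v_wt_low = michaelis_menten(1.0, 1.0, 1.0)     # [S] equal to wild-type Km: v = Vmax/2
v_mut_low = michaelis_menten(1.0, 1.0, 10.0)   # same [S], 10x Km: much slower
v_mut_high = michaelis_menten(100.0, 1.0, 10.0)  # high [S]: activity largely restored
```

At [S] equal to Km the rate is by definition half of Vmax, so a ten-fold rise in apparent Km shifts the whole saturation curve toward higher substrate concentrations without necessarily changing the maximal rate.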
The question of why iron-sulfur cluster N1a is not detectable by EPR in many model organisms remains unsolved. Introducing polar and positively charged amino acid residues close to this cluster in order to increase its midpoint potential did not result in the appearance of the cluster N1a EPR signal in mitochondrial membranes from the mutants. Clearly, further research will be necessary to gain insight into the function of this iron-sulfur cluster in complex I. In an additional project, a new and simple in vivo screen for complex I deficiency in Y. lipolytica was developed and optimized. This assay probes for defects in complex I assembly and stability, oxidoreductase activity and also proton pumping activity by complex I. Most importantly, this assay is applicable to all Y. lipolytica strains and could be used to identify loss-of-function mutants, gain-of-function mutants (i.e. resistance towards complex I inhibitors) and revertants due to mutations in both nuclear and mitochondrially encoded genes of complex I subunits.
The light-harvesting complex of photosystem II (LHC-II) is the major antenna complex in plant photosynthesis. It accounts for roughly 30% of the total protein in plant chloroplasts, which makes it arguably the most abundant membrane protein on Earth, and binds about half of plant chlorophyll (Chl). The complex assembles as a trimer in the thylakoid membrane and binds a total of 54 pigment molecules, including 24 Chl a, 18 Chl b, 6 lutein (Lut), 3 neoxanthin (Neo) and 3 violaxanthin (Vio). LHC-II has five key roles in plant photosynthesis. It: (1) harvests sunlight and transmits excitation energy to the reaction centres of photosystems II and I, (2) regulates the amount of excitation energy reaching each of the two photosystems, (3) has a structural role in the architecture of the photosynthetic supercomplexes, (4) contributes to the tight appression of thylakoid membranes in chloroplast grana, and (5) protects the photosynthetic apparatus from photodamage by non-photochemical quenching (NPQ). A major fraction of NPQ is accounted for by its energy-dependent component, qE. Despite being critical for plant survival and having been studied for decades, the exact details of how excess absorbed light energy is dissipated under qE conditions remain enigmatic. Today it is accepted that qE is regulated by the magnitude of the pH gradient (ΔpH) across the thylakoid membrane. It is also well documented that the drop in pH in the thylakoid lumen during high-light conditions activates the enzyme violaxanthin de-epoxidase (VDE), which converts the carotenoid Vio into zeaxanthin (Zea) as part of the xanthophyll cycle. Additionally, studies with Arabidopsis mutants revealed that the photosystem II subunit PsbS is necessary for qE.
How these physiological responses switch LHC-II from the active, energy-transmitting state to the quenched, energy-dissipating state, in which the solar energy is not transmitted to the photosystems but instead dissipated as heat, remains unclear and is the subject of this thesis. From the results obtained during this doctoral work, five main conclusions can be drawn concerning the mechanism of qE: 1. Substitution of Vio by Zea in LHC-II is not sufficient for efficient dissipation of excess excitation energy. 2. Aggregation quenching of LHC-II requires neither Vio, Neo, nor a specific Chl pair. 3. With one exception, the pigment structure in LHC-II is rigid. 4. The two X-ray structures of LHC-II show the same energy-transmitting state of the complex. 5. Crystalline LHC-II resembles the complex in the thylakoid membrane. Models of the aggregation quenching mechanism in vitro and the qE mechanism in vivo are presented as a corollary of this doctoral work. LHC-II aggregation quenching in vitro is attributed to the formation of energy sinks on the periphery of LHC-II through random interaction with other trimers, free pigments or impurities. A similar but unrelated process is proposed to occur in the thylakoid membrane, by which excess excitation energy is dissipated upon specific interaction between LHC-II and a PsbS monomer carrying Zea. At the end of this thesis, an innovative experimental model for the analysis of all key aspects of qE is proposed in order to finally solve the qE enigma, one of the last unresolved problems in photosynthesis research.
Samples of freshly fallen snow were collected at the high-alpine research station Jungfraujoch (Switzerland) in February and March 2006 and 2007, during the Cloud and Aerosol Characterization Experiments (CLACE) 5 and 6. In this study a new technique was developed and demonstrated for the measurement of organic acids in fresh snow. The melted snow samples were subjected to solid-phase extraction and the resulting solutions were analysed for organic acids by HPLC-MS-TOF using negative electrospray ionization. A series of linear dicarboxylic acids from C5 to C13, as well as phthalic acid, were identified and quantified. In several samples the biogenic pinonic acid was also observed. In fresh snow the median concentration of the most abundant acid, adipic acid, was 0.69 µg L⁻¹ in 2006 and 0.70 µg L⁻¹ in 2007. Glutaric acid was the second most abundant dicarboxylic acid, with median values of 0.46 µg L⁻¹ in 2006 and 0.61 µg L⁻¹ in 2007, while the aromatic phthalic acid showed a median concentration of 0.34 µg L⁻¹ in 2006 and 0.45 µg L⁻¹ in 2007. The concentrations in the samples from various snowfall events varied significantly, and were found to depend on the back trajectory of the air mass arriving at Jungfraujoch. Air masses of marine origin showed the lowest concentrations of acids, whereas the highest concentrations were measured when the air mass was strongly influenced by boundary-layer air.
Current atmospheric models do not include secondary organic aerosol (SOA) production from gas-phase reactions of polycyclic aromatic hydrocarbons (PAHs). Recent studies have shown that primary semivolatile emissions, previously assumed to be inert, undergo oxidation in the gas phase, leading to SOA formation. This opens the possibility that low-volatility gas-phase precursors are a potentially large source of SOA. In this work, SOA formation from gas-phase photooxidation of naphthalene, 1-methylnaphthalene (1-MN), 2-methylnaphthalene (2-MN), and 1,2-dimethylnaphthalene (1,2-DMN) is studied in the Caltech dual 28 m³ chambers. Under high-NOx conditions and aerosol mass loadings between 10 and 40 µg m⁻³, the SOA yields (mass of SOA per mass of hydrocarbon reacted) ranged from 0.19 to 0.30 for naphthalene, 0.19 to 0.39 for 1-MN, 0.26 to 0.45 for 2-MN, and were constant at 0.31 for 1,2-DMN. Under low-NOx conditions, the SOA yields were measured to be 0.73, 0.68, and 0.58 for naphthalene, 1-MN, and 2-MN, respectively. The SOA was observed to be semivolatile under high-NOx conditions and essentially nonvolatile under low-NOx conditions, owing to the higher fraction of ring-retaining products formed under low-NOx conditions. When applying these measured yields to estimate SOA formation from primary emissions of diesel engines and wood burning, PAHs are estimated to yield 3–5 times more SOA than light aromatic compounds. PAHs can also account for up to 54% of the total SOA from oxidation of diesel emissions, representing a potentially large source of urban SOA.
It has become popular for journalists who are trying to sell newspapers, and politicians who are trying to solicit votes, to refer to this financial crisis as the worst since the Great Depression or WWII. I don't know whether it is the worst or not, so I will leave that question to the historians and economists of the future once the storm has passed. But it is indeed a "storm" as described by Vince Cable, Member of Parliament, in his UK bestselling book entitled "The Storm – The World Economic Crisis and What it Means". He describes this "storm" as a very destructive one, displacing jobs, businesses, banks and whole economies from Iceland to the United Kingdom to the United States. I propose to offer a short chronology and summary of the causes of the current economic crisis. Then I will review several of the regulatory responses to the crisis, focusing on the Turner Report, the de Larosière Group and certain US Treasury statements. I will offer my critiques of these proposals and then make some predictions of what the financial services industry may look like in the future.
In this thesis the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy-ion reactions has been developed. After a short introduction that motivates the study of heavy-ion reactions as the tool to gain insight into the QCD phase diagram, the most important theoretical approaches to describe the system are reviewed. To model the dynamical evolution of the collective system under the assumption of local thermal equilibrium, ideal hydrodynamics seems to be a good tool. Nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured. For the microscopic description of the hadronic as well as the partonic stage of the evolution, transport approaches have been successfully applied, since they generate the full phase-space dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are introduced. Furthermore, predictions for the charged particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies, and the new approach leads to reasonable results over the whole energy range. Studies of phase-diagram trajectories using hydrodynamics are performed as a first step towards the development of the hybrid approach. The hybrid approach that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision.
The initial energy and baryon number density distributions are not smooth and not symmetric in any direction, and the initial velocity profiles are non-trivial since they are generated by the non-equilibrium transport approach. The full (3+1)-dimensional ideal relativistic one-fluid dynamics evolution is solved using the SHASTA algorithm. For the present work, three different equations of state have been used, namely a hadron gas equation of state without a QGP phase transition, a chiral EoS and a bag model EoS including a strong first-order phase transition. For the freeze-out transition from hydrodynamics to the cascade calculation, two different set-ups are employed: either a freeze-out that is isochronous in the computational frame, or a gradual freeze-out that mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula, and UrQMD takes care of the final decoupling of the particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually. The hybrid model calculation is able to reproduce the experimentally measured integrated as well as transverse momentum dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation functions are calculated for pions, protons and kaons in the energy range from $E_{\rm lab}=2-160A~$GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean transverse mass excitation function as the equation of state.
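For reference, the Cooper-Frye prescription used here for particlization has the standard textbook form, with $f(x,p)$ the local thermal distribution on the freeze-out hypersurface $\Sigma$ (this is the generic formula, not a detail specific to this thesis):

```latex
E \frac{\mathrm{d}N}{\mathrm{d}^3p} = \int_{\Sigma} f(x,p)\, p^{\mu}\, \mathrm{d}\Sigma_{\mu}
```

Sampling particle vectors by Monte Carlo according to this invariant momentum spectrum is what hands the hydrodynamic output back to the UrQMD cascade.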
The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced if a first-order phase transition with a large latent heat is applied or the EoS is effectively softened due to non-equilibrium effects in the hadronic transport calculation. The HBT correlations of the negatively charged pion source created in central Pb+Pb collisions at SPS energies are investigated with the hybrid model. It has been found that the latent heat visibly influences the emission of particles and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlations, since a large number of collisions and decays still takes place during this period.
Background Heme oxygenase-1 is an inducible cytoprotective enzyme which handles oxidative stress by generating anti-oxidant bilirubin and vasodilating carbon monoxide. A (GT)n dinucleotide repeat and a -413A>T single nucleotide polymorphism in the promoter region of HMOX1 have both been reported to influence the occurrence of coronary artery disease (CAD) and myocardial infarction (MI). We sought to validate these observations in persons scheduled for coronary angiography. Methods We included 3219 subjects in the current analysis: 2526 with CAD, including a subgroup with CAD and MI (n = 1339), and 693 controls. Coronary status was determined by coronary angiography. Risk factors and biochemical parameters (bilirubin, iron, LDL-C, HDL-C, and triglycerides) were determined by standard procedures. The dinucleotide repeat was analysed by PCR and subsequent sizing by capillary electrophoresis, the -413A>T polymorphism by PCR and RFLP. Results In the LURIC study the allele frequencies for the -413A>T polymorphism are A = 0.589 and T = 0.411. The (GT)n repeats range between 14 and 39 repeats, with 22 (19.9%) and 29 (47.1%) as the two most common alleles. We found no association of the genotypes or allele frequencies with any of the biochemical parameters, nor with CAD or previous MI. Conclusion Although an association of these polymorphisms with the appearance of CAD and MI has been published before, our results strongly argue against a relevant role of the (GT)n repeat or the -413A>T SNP in the HMOX1 promoter in CAD or MI.
We calculate leading-order dilepton yields from a quark-gluon plasma which has a time-dependent anisotropy in momentum space. Such anisotropies can arise during the earliest stages of quark-gluon plasma evolution due to the rapid longitudinal expansion of the created matter. A phenomenological model for the proper time dependence of the parton hard momentum scale, p_hard, and the plasma anisotropy parameter, xi, is proposed. The model describes the transition of the plasma from a 0+1 dimensional collisionally-broadened expansion at early times to a 0+1 dimensional ideal hydrodynamic expansion at late times. We find that high-energy dilepton production is enhanced by pre-equilibrium emission up to 50% at LHC energies, if one assumes an isotropization/thermalization time of 2 fm/c. Given sufficiently precise experimental data this enhancement could be used to determine the plasma isotropization time experimentally.
Introduction Impaired renal function and/or pre-existing atherosclerosis in the deceased donor increase the risk of delayed graft function and impaired long-term renal function in kidney transplant recipients. Case presentation We report delayed graft function occurring simultaneously in two kidney transplant recipients, aged 57 and 39 years, who received renal allografts from the same deceased donor. The 62-year-old donor died of cardiac arrest during status asthmaticus. Renal-allograft biopsies performed in both kidney recipients because of delayed graft function revealed cholesterol-crystal embolism. Empiric statin therapy in addition to low-dose acetylsalicylic acid was initiated. After 10 and 6 hemodialysis sessions every 48 hours, respectively, both renal allografts started to function. Glomerular filtration rates at discharge were 26 ml/min/1.73 m2 and 23.9 ml/min/1.73 m2, and remained stable in follow-up examinations. Possible donor- and surgical procedure-dependent causes of cholesterol-crystal embolism are discussed. Conclusion Cholesterol-crystal embolism should be considered as a cause of delayed graft function and long-term impaired renal allograft function, especially in the older donor population.
Methods for dichoptic stimulus presentation in functional magnetic resonance imaging : a review
(2009)
Dichoptic stimuli (different stimuli displayed to each eye) are increasingly being used in functional brain imaging experiments with visual stimulation. These studies include investigations into binocular rivalry, interocular information transfer and three-dimensional depth perception, as well as impairments of the visual system such as amblyopia and stereodeficiency. In this paper, we review various approaches to dichoptic stimulus display used in functional magnetic resonance imaging experiments. These include traditional approaches using filters (red-green, red-blue, polarizing) with optical assemblies, as well as newer approaches using bi-screen goggles.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson's Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Traditionally, parsers are evaluated against gold-standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (with 20 sentences per phenomenon) from both the TiGer and TüBa-D/Z resources. This provides a 2 × 100-sentence comparable test suite which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC for key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two treebanks. In the remaining part of the paper we present the test suite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our classification of potential parser errors.
In the recent literature the phenomenon of long distance agreement has become the focus of several studies as it seems to violate certain locality conditions which require that agreeing elements in general stand in clause-mate relationships. In particular, it involves a verb agreeing with a constituent which is located in the verb's clausal complement and hence poses a challenge for theories that assume a strictly local relationship for agreement. In this paper we present empirical evidence from Greek and Romanian for the reality of long distance agreement. Specifically, we focus on raising constructions in these two languages and we show that they do not involve movement but rather instantiate long distance agreement. We further argue that subjunctives allowing long distance agreement lack both a CP layer and semantic Tense. However, since the embedded verb also bears phi-features, these constructions pose a further problem for assumptions that view the presence of phi-features as evidence for the presence of a C layer. Finally, we raise the question of the common properties that these languages have that lead to the presence of long distance agreement.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgarriff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
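The SVD-based decorrelation step can be sketched on a toy word-by-context count matrix (the counts, the choice of `k`, and all variable names are illustrative assumptions; the actual model is built from Web-scale cooccurrence data):

```python
import numpy as np

# Toy word-by-context cooccurrence counts (rows: words, cols: context features).
# Rows 0 and 1 share contexts; rows 2 and 3 share a different set of contexts.
counts = np.array([
    [10, 0, 3, 0],
    [ 8, 1, 2, 0],
    [ 0, 9, 0, 7],
    [ 1, 8, 0, 6],
], dtype=float)

# Truncated SVD: keep the k strongest latent dimensions, which decorrelates
# the feature space and damps noise from the skewed topic distribution.
k = 2
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
reduced = U[:, :k] * s[:k]   # word vectors in the k-dimensional latent space

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words with similar context distributions end up close in the reduced space.
sim_01 = cosine(reduced[0], reduced[1])   # distributionally similar words
sim_02 = cosine(reduced[0], reduced[2])   # distributionally dissimilar words
```

In the reduced space, words with similar context distributions receive similar vectors even when their raw feature overlap is sparse or noisy.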