University Publications
In the aftermath of the global financial crisis, the state of macroeconomic modeling and the use of macroeconomic models in policy analysis has come under heavy criticism. Macroeconomists in academia and policy institutions have been blamed for relying too much on a particular class of macroeconomic models. This paper proposes a comparative approach to macroeconomic policy analysis that is open to competing modeling paradigms. Macroeconomic model comparison projects have helped produce some very influential insights such as the Taylor rule. However, they have been infrequent and costly, because they require the input of many teams of researchers and multiple meetings to obtain a limited set of comparative findings. This paper provides a new approach that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. Using this approach a model archive is built that includes many well-known empirically estimated models that may be used for quantitative analysis of monetary and fiscal stabilization policies. A computational platform is created that allows straightforward comparisons of models’ implications. Its application is illustrated by comparing different monetary and fiscal policies across selected models. Researchers can easily include new models in the data base and compare the effects of novel extensions to established benchmarks thereby fostering a comparative instead of insular approach to model development.
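The Taylor rule cited above as an influential product of model comparison projects can be written as a one-line function. The sketch below uses Taylor's original 1993 calibration (coefficients of 0.5 on the inflation and output gaps, a 2% equilibrium real rate and a 2% inflation target); these numbers are illustrative and not taken from this paper.

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Nominal policy rate (percent) implied by the classic Taylor rule.

    Uses Taylor's (1993) calibration: equal weights of 0.5 on the
    inflation gap and the output gap, plus the equilibrium real rate.
    """
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Example: inflation at 3 percent, output 1 percent above potential
print(taylor_rate(3.0, 1.0))  # 6.0
```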
How do changes in market structure affect the US business cycle? We estimate a monetary DSGE model with endogenous firm/product entry and a translog expenditure function by Bayesian methods. The dynamics of net business formation allow us to identify the 'competition effect', by which desired price markups and inflation decrease when entry rises. We find that a 1 percent increase in the number of competitors lowers desired markups by 0.18 percent. Most of the cyclical variability in inflation is driven by markup fluctuations due to sticky prices or exogenous shocks rather than endogenous changes in desired markups.
This paper characterises optimal monetary policy in an economy with endogenous firm entry, a cash-in-advance constraint and preset wages. Firms must make profits to cover entry costs; thus the markup on goods prices is efficient. However, because leisure is not priced at a markup, the consumption-leisure tradeoff is distorted. Consequently, the real wage, hours and production are suboptimally low. Due to the labour requirement in entry, insufficient labour supply also implies that entry is too low. The paper shows that in the absence of fiscal instruments such as labour income subsidies, the optimal monetary policy under sticky wages achieves higher welfare than under flexible wages. The policy maker uses the money supply instrument to raise the real wage - the cost of leisure - above its flexible-wage level, in response to expansionary shocks to productivity and entry costs. This raises labour supply, expanding production and firm entry.
Background: After focal neuronal injury the endocannabinoid system becomes activated and protects or harms neurons depending on the cannabinoid derivative and receptor subtype. Endocannabinoids (eCBs) play a central role in controlling local responses and influencing neural plasticity and survival. However, little is known about the functional relevance of eCBs in long-range projection damage as observed in stroke or spinal cord injury (SCI).
Methods: In rat organotypic entorhino-hippocampal slice cultures (OHSC), a relevant and suitable model for investigating projection fibers in the CNS, we performed perforant pathway transection (PPT) and subsequently analyzed the spatial and temporal dynamics of eCB levels. This approach allows a proper distinction between responses in the originating neurons (entorhinal cortex), areas of deafferentiation/anterograde axonal degeneration (dentate gyrus) and putative changes in more distant but synaptically connected subfields (cornu ammonis (CA) 1 region).
Results: Using LC-MS/MS, we measured a strong increase in arachidonoylethanolamide (AEA), oleoylethanolamide (OEA) and palmitoylethanolamide (PEA) levels in the denervation zone (dentate gyrus) 24 hours post lesion (hpl), whereas the entorhinal cortex and CA1 region exhibited little if any change. NAPE-PLD, responsible for the biosynthesis of eCBs, was increased early, whereas FAAH, a catabolizing enzyme, was up-regulated at 48 hpl.
Conclusion: Neuronal damage as assessed by transection of long-range projections apparently provides a strong time-dependent and area-confined signal for de novo synthesis of eCB, presumably to restrict neuronal damage. The present data underlines the importance of activation of the eCB system in CNS pathologies and identifies a novel site-specific intrinsic regulation of eCBs after long-range projection damage.
Sucrose is known to repress the translation of the Arabidopsis thaliana AtbZIP11 transcript, which encodes a protein belonging to the group of S ("S" stands for small) basic region-leucine zipper (bZIP)-type transcription factors. This repression is called sucrose-induced repression of translation (SIRT). It is mediated through the sucrose-controlled upstream open reading frame (SC-uORF) found in the AtbZIP11 transcript. SIRT has been reported for four other genes belonging to the S bZIP group in Arabidopsis. Tobacco tbz17 is phylogenetically closely related to AtbZIP11 and carries a putative SC-uORF in its 5′-leader region. Here we demonstrate that tbz17 exhibits SIRT mediated by its SC-uORF in a manner similar to genes of the Arabidopsis S bZIP group. Furthermore, constitutive transgenic expression of tbz17 lacking its 5′-leader region containing the SC-uORF leads to tobacco plants with thicker leaves composed of enlarged cells with 3–4 times higher sucrose content compared to wild-type plants. Our finding provides a novel strategy to generate plants with high sucrose content.
Freshwater biodiversity has declined dramatically in Europe in recent decades. Because of massive habitat pollution and morphological degradation of water bodies, many once widespread species persist in small fractions of their original range. These range contractions are generally believed to be accompanied by loss of intraspecific genetic diversity, due to the reduction of effective population sizes and the extinction of regional genetic lineages. We aimed to assess the loss of genetic diversity and its significance for future potential reintroduction of the long-tailed mayfly Palingenia longicauda (Olivier), which experienced approximately 98% range loss during the past century. Analysis of 936 bp of mitochondrial DNA of 245 extant specimens across the current range revealed a surprisingly large number of haplotypes (87), and a high level of haplotype diversity (Hd = 0.875). In contrast, historic specimens (6) from the lost range (Rhine catchment) were not differentiated from the extant Rába population (F ST = 0.02, p = 0.61), despite considerable geographic distance separating the two rivers. These observations can be explained by an overlap of the current with the historic (Pleistocene) refugia of the species. Most likely, the massive recent range loss mainly affected the range which was occupied by rapid post-glacial dispersal. We conclude that massive range losses do not necessarily coincide with genetic impoverishment and that a species' history must be considered when estimating loss of genetic diversity. The assessment of spatial genetic structures and prior phylogeographic information seems essential to conserve once widespread species.
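The haplotype diversity statistic reported above (Hd = 0.875) is Nei's gene diversity computed over haplotype frequencies. A minimal sketch, using invented haplotype counts rather than the study's data:

```python
def haplotype_diversity(counts):
    """Nei's haplotype diversity: Hd = n/(n-1) * (1 - sum(p_i^2)),
    where p_i are the sample frequencies of each haplotype."""
    n = sum(counts)
    return n / (n - 1) * (1 - sum((c / n) ** 2 for c in counts))

# Invented example: four haplotypes observed 4, 3, 2 and 1 times
print(round(haplotype_diversity([4, 3, 2, 1]), 3))  # 0.778
```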
Background: Human parvovirus B19 (PVB19) has been associated with myocarditis, putatively due to endothelial infection. Whether PVB19 infects endothelial cells and causes a modification of endothelial function and inflammation, and thus a disturbance of the microcirculation, has not been elucidated and could not be visualized so far.
Methods and Findings: To examine PVB19-induced endothelial modification, we used a green fluorescent protein (GFP) color reporter gene in the non-structural segment 1 (NS1) of PVB19. NS1-GFP-PVB19, or a GFP plasmid as control, was transfected into an endothelial-like cell line (ECV304). The endothelial surface expression of intercellular adhesion molecule-1 (CD54/ICAM-1) and extracellular matrix metalloproteinase inducer (EMMPRIN/CD147) was evaluated by flow cytometry after NS1-GFP or control-GFP transfection. To evaluate platelet adhesion on NS1-transfected ECs, we performed a dynamic adhesion assay (flow chamber). NS1 transfection causes endothelial activation and enhanced expression of ICAM-1 (CD54: mean±standard deviation: NS1-GFP vs. control-GFP: 85.3±11.2 vs. 61.6±8.1; P<0.05) and induces endothelial expression of EMMPRIN/CD147 (CD147: mean±SEM: NS1-GFP vs. control-GFP: 114±15.3 vs. 80±0.91; P<0.05) compared to control-GFP transfected cells. Dynamic adhesion assays showed that platelet adhesion is significantly enhanced on NS1-transfected ECs compared to control-GFP (P<0.05). The transfection of ECs was verified simultaneously by flow cytometry, immunofluorescence microscopy and polymerase chain reaction (PCR) analysis.
Conclusions: GFP color reporter gene shows transfection of ECs and may help to visualize NS1-PVB19 induced endothelial activation and platelet adhesion as well as an enhanced monocyte adhesion directly, providing in vitro evidence of possible microcirculatory dysfunction in PVB19-induced myocarditis and, thus, myocardial tissue damage.
The human DNA mismatch repair (MMR) process is crucial to maintain the integrity of the genome and requires many different proteins that must interact in a precisely coordinated manner. Germline mutations in MMR genes are responsible for the development of the hereditary form of colorectal cancer called Lynch syndrome. Various mutations have been identified so far, mainly in two MMR proteins, MLH1 and MSH2, with about 55% detected within MLH1, the essential component of the heterodimer MutLα (MLH1 and PMS2). Most of these MLH1 variants are pathogenic, but the relevance of missense mutations often remains unclear. Many different recombinant systems are applied to filter out disease-associated proteins, whereby fluorescently tagged proteins are frequently used. However, dye labeling might have deleterious effects on MutLα's functionality. Therefore, we analyzed the consequences of N- and C-terminal fluorescent labeling on the expression level, cellular localization and MMR activity of MutLα. Besides a significant influence of GFP or Red fusion on protein expression, we detected incorrect shuttling of singly expressed C-terminal GFP-tagged PMS2 into the nucleus and found that C-terminal dye labeling impaired the MMR function of MutLα. In contrast, N-terminally tagged MutLα retained correct functionality and can be recommended both for the analysis of cellular localization and MMR efficiency.
Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants.
We present a computational method for the reaction-based de novo design of drug-like molecules. The software DOGS (Design of Genuine Structures) features a ligand-based strategy for automated ‘in silico’ assembly of potentially novel bioactive compounds. The quality of the designed compounds is assessed by a graph kernel method measuring their similarity to known bioactive reference ligands in terms of structural and pharmacophoric features. We implemented a deterministic compound construction procedure that explicitly considers compound synthesizability, based on a compilation of 25,144 readily available synthetic building blocks and 58 established reaction principles. This enables the software to suggest a synthesis route for each designed compound. Two prospective case studies are presented together with details on the algorithm and its implementation. De novo designed ligand candidates for the human histamine H4 receptor and γ-secretase were synthesized as suggested by the software. The computational approach proved to be suitable for scaffold-hopping from known ligands to novel chemotypes, and for generating bioactive molecules with drug-like properties.
Background: During early stages of brain development, secreted molecules, components of intracellular signaling pathways and transcriptional regulators act in positive and negative feed-back or feed-forward loops at the mid-hindbrain boundary. These genetic interactions are of central importance for the specification and subsequent development of the adjacent mid- and hindbrain. Much less, however, is known about the regulatory relationship and functional interaction of molecules that are expressed in the tectal anlage after tectal fate specification has taken place and tectal development has commenced.
Results: Here, we provide experimental evidence for reciprocal regulation and subsequent cooperation of the paired-type transcription factors Pax3, Pax7 and the TALE-homeodomain protein Meis2 in the tectal anlage. Using in ovo electroporation of the mesencephalic vesicle of chick embryos we show that (i) Pax3 and Pax7 mutually regulate each other's expression in the mesencephalic vesicle, (ii) Meis2 acts downstream of Pax3/7 and requires balanced expression levels of both proteins, and (iii) Meis2 physically interacts with Pax3 and Pax7. These results extend our previous observation that Meis2 cooperates with Otx2 in tectal development to include Pax3 and Pax7 as Meis2 interacting proteins in the tectal anlage.
Conclusion: The results described here suggest a model in which interdependent regulatory loops involving Pax3 and Pax7 in the dorsal mesencephalic vesicle modulate Meis2 expression. Physical interaction with Meis2 may then confer tectal specificity to a wide range of otherwise broadly expressed transcriptional regulators, including Otx2, Pax3 and Pax7.
Background: The European Centres of Reference Network for Cystic Fibrosis (ECORN-CF) established an Internet forum which provides the opportunity for CF patients and other interested people to ask experts questions about CF in their mother tongue. The objectives of this study were to: 1. develop a detailed quality assessment tool to analyze the quality of expert answers, 2. evaluate the intra- and inter-rater agreement of this tool, and 3. explore changes in the quality of expert answers over the time frame of the project.
Methods: The quality assessment tool was developed by an expert panel. Five experts within the ECORN-CF project used the quality assessment tool to analyze the quality of 108 expert answers published on ECORN-CF from six language zones. 25 expert answers were scored at two time points, one year apart. Quality of answers was also assessed at an early and later period of the project. Individual rater scores and group mean scores were analyzed for each expert answer.
Results: A scoring system and training manual were developed analyzing two quality categories of answers: content and formal quality. For content quality, the grades based on group mean scores for all raters showed substantial agreement between two time points, however this was not the case for the grades based on individual rater scores. For formal quality the grades based on group mean scores showed only slight agreement between two time points and there was also poor agreement between time points for the individual grades. The inter-rater agreement for content quality was fair (mean kappa value 0.232+/-0.036, p<0.001) while only slight agreement was observed for the grades of the formal quality (mean kappa value 0.105+/-0.024, p<0.001). The quality of expert answers was rated high (four language zones) or satisfactory (two language zones) and did not change over time.
Conclusions: The quality assessment tool described in this study was feasible and reliable when content quality was assessed by a group of raters. Within ECORN-CF, the tool will help ensure that CF patients all over Europe have equal possibility of access to high quality expert advice on their illness.
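The inter-rater agreement values reported above are kappa statistics. A minimal sketch of Cohen's kappa for two raters, with invented ratings (the study's own mean kappas were computed across five raters and a different rating scale):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement from each rater's marginal category frequencies
    p_e = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Invented example: two raters grading five answers as A or B
print(round(cohens_kappa(list("AABBA"), list("AABBB")), 3))  # 0.615
```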
The present study addresses the question of whether negative priming (NP) is due to information processing in perception, recognition or selection. We argue that most NP studies confound priming and perceptual similarity of prime-probe episodes, and we implement a color-switch paradigm to resolve the issue. In a series of three identity negative priming experiments with verbal naming responses, we determined when NP and positive priming (PP) occur during a trial. The first experiment assessed the impact of target color on priming effects. It consisted of two blocks, each with a different fixed target color. With respect to target color, no differential priming effects were found. In Experiment 2 the target color was indicated by a cue for each trial. Here we resolved the confounding of perceptual similarity and priming condition. In trials with coinciding colors for prime and probe, we found priming effects similar to Experiment 1. However, trials with a target color switch showed such effects only in trials with role-reversal (distractor-to-target or target-to-distractor), whereas the positive priming (PP) effect in the target-repetition trials disappeared. Finally, Experiment 3 split trial processing into two phases by presenting the trial-wise color cue only after the stimulus objects had been recognized. We found recognition in every priming condition to be faster than in control trials. We were hence led to the conclusion that PP is strongly affected by perception, in contrast to NP, which emerges during selection, i.e., the two effects cannot be explained by a single mechanism.
Few studies have looked at the potential of using diffusion tensor imaging (DTI) in conjunction with machine learning algorithms in order to automate the classification of healthy older subjects and subjects with mild cognitive impairment (MCI). Here we apply DTI to 40 healthy older subjects and 33 MCI subjects in order to derive values for multiple indices of diffusion within the white matter voxels of each subject. DTI measures were then used together with support vector machines (SVMs) to classify control and MCI subjects. Greater than 90% sensitivity and specificity was achieved using this method, demonstrating the potential of a joint DTI and SVM pipeline for fast, objective classification of healthy older and MCI subjects. Such tools may be useful for large scale drug trials in Alzheimer’s disease where the early identification of subjects with MCI is critical.
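A pipeline of the kind described above, with diffusion indices as features and an SVM as classifier, might be sketched as follows with scikit-learn. The random features are synthetic stand-ins for real DTI measures; only the 40/33 group sizes follow the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_controls, n_mci, n_features = 40, 33, 50  # group sizes as in the abstract

# Synthetic "diffusion index" features: the MCI group is shifted to make
# the two classes separable, standing in for real white-matter measures.
X = np.vstack([rng.normal(0.0, 1.0, (n_controls, n_features)),
               rng.normal(0.8, 1.0, (n_mci, n_features))])
y = np.array([0] * n_controls + [1] * n_mci)

# Standardize features, then fit a linear SVM; evaluate by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # mean CV accuracy on the synthetic data
```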
Place-based frequency discrimination (tonotopy) is a fundamental property of the coiled mammalian cochlea. Sound vibrations mechanically conducted to the hearing organ manifest themselves as slow-moving waves that travel along the length of the organ, also referred to as traveling waves. These traveling waves form the basis of the tonotopic frequency representation in the inner ear of mammals. However, owing to the secure housing of the inner ear, these waves could so far only be measured partially, over small accessible regions of the inner ear in a living animal. Here, we demonstrate the existence of tonotopically ordered traveling waves covering most of the length of a miniature hearing organ in the leg of bushcrickets in vivo using laser Doppler vibrometry. The organ is only 1 mm long and its geometry allowed us to investigate almost the entire length with a wide range of stimuli (6 to 60 kHz). The tonotopic location of the traveling wave peak was exponentially related to stimulus frequency. The traveling wave propagated along the hearing organ from the distal (high-frequency) to the proximal (low-frequency) part of the leg, which is opposite to the propagation direction of incoming sound waves. In addition, we observed a non-linear compression of the velocity response to varying sound pressure levels. The waves are based on the delicate micromechanics of cellular structures different from those of mammals. Hence, place-based frequency discrimination by traveling waves is a physical phenomenon that presumably evolved independently in mammals and bushcrickets.
Introduction: Despite the excellent anti-inflammatory and immunosuppressive action of glucocorticoids (GCs), their use for the treatment of inflammatory bowel disease (IBD) still carries significant risks in terms of frequently occurring severe side effects, such as the impairment of intestinal tissue repair. The recently introduced selective glucocorticoid receptor (GR) agonists (SEGRAs) offer anti-inflammatory action comparable to that of common GCs, but with a reduced side-effect profile.
Methods: The in vitro effects of the non-steroidal SEGRAs Compound A (CpdA) and ZK216348 were investigated in intestinal epithelial cells and compared to those of dexamethasone (Dex). GR translocation was shown by immunofluorescence and Western blot analysis. Trans-repressive effects were studied by means of NF-κB/p65 activity and IL-8 levels, trans-activation potency by reporter gene assay. Flow cytometry was used to assess apoptosis of cells exposed to SEGRAs. The effects on IEC-6 and HaCaT cell restitution were determined using an in vitro wound healing model, cell proliferation by BrdU assay. In addition, influences on the TGF-β and EGF/ERK1/2/MAPK pathways were evaluated by reporter gene assay, Western blot and qPCR analysis.
Results: Dex, CpdA and ZK216348 were found to be functional GR agonists. In terms of trans-repression, CpdA and ZK216348 effectively inhibited NF-κB activity and IL-8 secretion, but showed less trans-activation potency. Furthermore, unlike SEGRAs, Dex caused a dose-dependent inhibition of cell restitution with no effect on cell proliferation. These differences in epithelial restitution were TGF-β-independent but Dex inhibited the EGF/ERK1/2/MAPK-pathway important for intestinal epithelial wound healing by induction of MKP-1 and Annexin-1 which was not affected by CpdA or ZK216348.
Conclusion: Collectively, our results indicate that, while their anti-inflammatory activity is comparable to Dex, SEGRAs show fewer side effects with respect to wound healing. The fact that SEGRAs did not have a similar effect on cell restitution might be due to a different modulation of EGF/ERK1/2 MAPK signalling.
Ubiquitination now ranks with phosphorylation as one of the best-studied post-translational modifications of proteins with broad regulatory roles across all of biology. Ubiquitination usually involves the addition of ubiquitin chains to target protein molecules, and these may be of eight different types, seven of which involve the linkage of one of the seven internal lysine (K) residues in one ubiquitin molecule to the carboxy-terminal diglycine of the next. In the eighth, the so-called linear ubiquitin chains, the linkage is between the amino-terminal amino group of methionine on a ubiquitin that is conjugated with a target protein and the carboxy-terminal carboxy group of the incoming ubiquitin. Physiological roles are well established for K48-linked chains, which are essential for signaling proteasomal degradation of proteins, and for K63-linked chains, which play a part in recruitment of DNA repair enzymes, cell signaling and endocytosis. We focus here on linear ubiquitin chains, how they are assembled, and how three different avenues of research have indicated physiological roles for linear ubiquitination in innate and adaptive immunity and suppression of inflammation.
Ubiquitin ligases and beyond
(2012)
First paragraph (this article has no abstract): In a review published in 2004 [1] that still repays reading today, Cecile Pickart traced the evolution of research on ubiquitination from its origins in the proteasomal degradation of proteins through the revelation that it has a central role in cell cycle regulation and the recognition of regulatory roles for ubiquitin in intracellular membrane transport, cell signalling, transcription, translation, and DNA repair.
Synaptic long-term potentiation (LTP) at spinal neurons directly communicating pain-specific inputs from the periphery to the brain has been proposed to serve as a trigger for pain hypersensitivity in pathological states. Previous studies have functionally implicated the NMDA receptor-NO pathway and the downstream second messenger, cGMP, in these processes. Because cGMP can broadly influence diverse ion-channels, kinases, and phosphodiesterases, pre- as well as post-synaptically, the precise identity of cGMP targets mediating spinal LTP, their mechanisms of action, and their locus in the spinal circuitry are still unclear. Here, we found that Protein Kinase G1 (PKG-I) localized presynaptically in nociceptor terminals plays an essential role in the expression of spinal LTP. Using the Cre-lox P system, we generated nociceptor-specific knockout mice lacking PKG-I specifically in presynaptic terminals of nociceptors in the spinal cord, but not in post-synaptic neurons or elsewhere (SNS-PKG-I−/− mice). Patch clamp recordings showed that activity-induced LTP at identified synapses between nociceptors and spinal neurons projecting to the periaqueductal grey (PAG) was completely abolished in SNS-PKG-I−/− mice, although basal synaptic transmission was not affected. Analyses of synaptic failure rates and paired-pulse ratios indicated a role for presynaptic PKG-I in regulating the probability of neurotransmitter release. Inositol 1,4,5-trisphosphate receptor 1 and myosin light chain kinase were recruited as key phosphorylation targets of presynaptic PKG-I in nociceptive neurons. Finally, behavioural analyses in vivo showed marked defects in SNS-PKG-I−/− mice in several models of activity-induced nociceptive hypersensitivity, and pharmacological studies identified a clear contribution of PKG-I expressed in spinal terminals of nociceptors.
Our results thus indicate that presynaptic mechanisms involving an increase in release probability from nociceptors are operational in the expression of synaptic LTP on spinal-PAG projection neurons and that PKG-I localized in presynaptic nociceptor terminals plays an essential role in this process to regulate pain sensitivity.
We investigate the decisions of listed firms to go private once again. We start by revealing that while a significant number of firms which go public are VC-backed, a disproportionate share of these VC-backed firms go private later on (they stay on the exchange for an average of 8.5 years). We interpret this very robust pattern as indicating that IPOs of VC-backed firms are to a large extent a temporary rather than a permanent feature of the corporate governance of these firms. We investigate various potential hypotheses as to why VCs actually seem to be able to bring marginal firms to the exchange by relating the going-private decisions to various characteristics of the IPO market as well as to VC characteristics. We find strong support for the certification ability of VCs: more experienced and reputable VCs are better able to bring marginal firms to public exchanges via an IPO. These marginal firms backed by more reputable and experienced VCs are more likely to go private later on. Hence, our analysis suggests that IPOs backed by experienced VCs are most likely a temporary rather than the final stage in the life of the portfolio firm. We find no support that reputable VCs underprice their IPO exits more, implying that they have no need to leave more money on the table to take the marginal firms public.
Diatoms contribute substantially to the total primary production of the ecosphere and are key players in global biogeochemical cycles. Their chloroplasts are surrounded by four membranes owing to their secondary endosymbiotic origin. Their thylakoids are arranged into three parallel bands, and differentiation of thylakoid membranes into grana or stroma is not observed. The fucoxanthin chlorophyll a/c binding proteins (Fcps) act as the light harvesting proteins and also play a role in photoprotection under excess light. The diatom genome encodes three different families of antenna proteins. Family I are the classical light harvesting proteins called "Lhcf". Family II are the red algae-related Lhca-R1/2 proteins called "Lhcr", and family III are the photoprotective LI818-related proteins called "Lhcx".
All known Fcps have molecular weights in the range of 17-23 kDa. They are membrane proteins with shorter loops and termini compared to the LHCs of higher plants and are therefore extremely hydrophobic. This makes the isolation of single, specific Fcps using routine protein purification techniques difficult.
The purification of a specific Fcp-containing complex has not been achieved so far, and until this is done, several questions concerning the light harvesting antenna systems of diatoms cannot be answered. For example: Which proteins interact specifically? Are the various Fcps differently pigmented? Which pigments interact with each other, and how? Which proteins contribute to photosystem-specific antenna systems? Can pure Fcps be reconstituted into crystals like LHCII proteins? In order to answer these questions, specific Fcp-containing complexes have to be purified. ...
The miniaturization of electronics is reaching its limits. The structures necessary to build integrated circuits from semiconductors are shrinking and could reach the size of only a few atoms within the next few years. At the latest at this point, the physics of nanostructures will gain importance in our everyday life. This thesis deals with the physics of quantum impurity models. All models of this class exhibit an identical structure: the simple and small impurity has only a few degrees of freedom. It can be built out of a small number of atoms or a single molecule, for example. In the simplest case it can be described by a single spin degree of freedom and, in many quantum impurity models, it can be treated exactly. The complexity of the description arises from its coupling to a large number of fermionic or bosonic degrees of freedom (large meaning that we have to deal with particle numbers of the order of 10^{23}). An exact treatment thus remains impossible. At the same time, physical effects which arise in quantum impurity systems often cannot be described within perturbation theory, since multiple energy scales may play an important role. One example of such an effect is the Kondo effect, where the free magnetic moment of the impurity is screened by a "cloud" of fermionic particles of the quantum bath.
The Kondo effect is only one example of the rich physics stemming from correlation effects in many-body systems. Quantum impurity models, and the often related Kondo effect, have regained the attention of experimental and theoretical physicists since the advent of quantum dots, which are sometimes also referred to as artificial atoms. Quantum dots offer unprecedented control and tunability of many system parameters. Hence, they constitute a nice "playground" for fundamental research, while also being promising candidates for building blocks of future technological devices.
Recently, Loss and DiVincenzo's proposal of a quantum computing scheme based on spins in quantum dots increased the efforts of experimentalists to coherently manipulate and read out the spins of quantum dots one by one. In this context, two topics are of paramount importance for future quantum information processing: since decoherence times have to be long enough to allow for good error correction schemes, understanding the loss of phase coherence in quantum impurity systems is a prerequisite for quantum computation in these systems. Nonequilibrium phenomena in quantum impurity systems also have to be understood before one may gain control of manipulating quantum bits.
As a first step towards more complicated nonequilibrium situations, one can investigate the reaction of a system to a quantum quench, i.e. a sudden change of external fields or other parameters of the system. We give an introduction to a powerful numerical method used in this field of research, the numerical renormalization group method, and apply this method and its recent enhancements to various quantum impurity systems.
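At its core, the numerical renormalization group discretizes the bath on a logarithmic energy grid with parameter Λ > 1 and maps it onto a semi-infinite chain that is diagonalized iteratively. As a sketch in Wilson's standard rescaled conventions (given here for orientation, not taken from this thesis), the chain Hamiltonians obey the recursion

```latex
H_{N+1} = \sqrt{\Lambda}\, H_N
        + \Lambda^{N/2}\, t_N \left( f_N^\dagger f_{N+1} + f_{N+1}^\dagger f_N \right),
\qquad t_N \sim \Lambda^{-N/2},
```

so each iteration adds one chain site, resolves an energy scale a factor of √Λ smaller than the previous one, and keeps only the lowest-lying many-body states to remain tractable.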
The main part of this thesis may be structured in the following way:
- Ferromagnetic Kondo Model,
- Spin-Dynamics in the Anisotropic Kondo and the Spin-Boson Model,
- Two Ising-coupled Spins in a Bosonic Bath,
- Decoherence in an Aharonov-Bohm Interferometer.
Introduction: Erectile dysfunction (ED) is common in men with systemic sclerosis (SSc) but the demographics, risk factors and treatment coverage for ED are not well known.
Method: This study was carried out prospectively in the multinational EULAR Scleroderma Trial and Research database by amending the electronic data-entry system with the International Index of Erectile Function-5 and items related to ED risk factors and treatment. Centres participating in this EULAR Scleroderma Trial and Research substudy were asked to recruit patients consecutively.
Results: Of the 130 men studied, only 23 (17.7%) had a normal International Index of Erectile Function-5 score. Thirty-eight per cent of all participants had severe ED (International Index of Erectile Function-5 score ≤ 7). Men with ED were significantly older than subjects without ED (54.8 years vs. 43.3 years, P < 0.001) and more frequently had simultaneous non-SSc-related risk factors such as alcohol consumption. In 82% of SSc patients, the onset of ED was after the manifestation of the first non-Raynaud's symptom (median delay 4.1 years). ED was associated with severe cutaneous, muscular or renal involvement of SSc, elevated pulmonary pressures and restrictive lung disease. ED was treated in only 27.8% of men. The most common treatment was sildenafil, whose efficacy is not established in ED of SSc patients.
Conclusions: Severe ED is a common and early problem in men with SSc. Physicians should address modifiable risk factors actively. More research into the pathophysiology, longitudinal development, treatment and psychosocial impact of ED is needed.
Background: In Emergency and Medical Admission Departments (EDs and MADs), prompt recognition and appropriate infection control management of patients with Highly Infectious Diseases (HIDs, e.g. Viral Hemorrhagic Fevers and SARS) are fundamental for avoiding nosocomial outbreaks.
Methods: The EuroNHID (European Network for Highly Infectious Diseases) project collected data from 41 EDs and MADs in 14 European countries, located in the same facility as a national/regional referral centre for HIDs, using specifically developed checklists, during on-site visits from February to November 2009.
Results: Isolation rooms were available in 34 facilities (82.9%): these rooms had an anteroom in 19, a dedicated entrance in 15, negative pressure in 17, and HEPA filtration of exhaust air in 12. Only 6 centres (14.6%) had isolation rooms with all of these characteristics. Personnel trained in the recognition of HIDs were available in 24 facilities; management protocols for HIDs were available in 35.
Conclusions: Preparedness for the safe and appropriate management of HIDs is only partially adequate in the surveyed EDs and MADs.
From 12.12.2010 to 17.12.2010, the Dagstuhl Seminar 10501 "Advances and Applications of Automata on Words and Trees" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Seminar: 10501 - Advances and Applications of Automata on Words and Trees. The aim of the seminar was to discuss and systematize the recent fast progress in automata theory and to identify important directions for future research. For this, the seminar brought together more than 40 researchers from automata theory and related fields of application. We had 19 talks of 30 minutes and 5 one-hour lectures, leaving ample room for discussions. In the following we describe the topics in more detail.
This article shows that there exist two particular linear orders such that first-order logic with these two linear orders has the same expressive power as first-order logic with the Bit-predicate FO(Bit). As a corollary we obtain that there also exists a built-in permutation such that first-order logic with a linear order and this permutation is as expressive as FO(Bit).
Background: Hepatitis C decreases health related quality of life (HRQL) which is further diminished by antiviral therapy. HRQL improves after successful treatment. This trial explores the course of and factors associated with HRQL in patients given individualized or standard treatment based on early treatment response (Ditto-study).
Methods: The Short Form (SF)-36 Health Survey was administered at baseline (n = 192) and 24 weeks after the end of therapy (n = 128).
Results: At baseline HRQL was influenced by age, participating center, severity of liver disease and income. Exploring the course of HRQL (scores at follow up minus baseline), only the dimension general health increased. In this dimension patients with a relapse or sustained response differed from non-responders. Men and women differed in the dimension bodily pain. Treatment schedule did not influence the course of HRQL.
Conclusions: The main determinants of HRQL were severity of liver disease, age, gender, participating center and response to treatment. Our results do not exclude a more profound negative impact of individualized treatment compared to standard treatment, possibly caused by higher doses and extended treatment duration in the individualized group. Antiviral therapy might have a more intense and more prolonged negative impact on females.
Background: Europe was certified to be polio-free in 2002 by the WHO. However, wild polioviruses remain endemic in India, Pakistan, Afghanistan, and Nigeria, occasionally causing polio outbreaks, as in Tajikistan in 2010. Therefore, effective surveillance measures and vaccination campaigns remain important. To determine the poliovirus immune status of a German study population, we retrospectively evaluated the seroprevalence of neutralizing antibodies (NA) to poliovirus types 1, 2 and 3 (PV1, 2, 3) in serum samples collected from 1,632 patients admitted to the University Hospital of Frankfurt am Main, Germany, in 2001, 2005 and 2010.
Methods: Testing was done by using a standardized microneutralization assay.
Results: Immunity to PV1 was 84.2% (95% CI: 80.3-87.5), 90.4% (88.3-92.3) and 87.5% (85.4-88.8) in 2001, 2005 and 2010, respectively. For PV2, we found 90.8% (87.5-90.6), 91.3% (89.3-93.1) and 89.8% (88.7-90.9) in the same period. Seroprevalence to PV3 was 76.6% (72.2-80.6), 69.8% (66.6-72.8) and 72.9% (67.8-77.5) in 2001, 2005 and 2010, respectively. In 2005 and 2010, significantly lower levels of immunity to PV3 compared with PV1 and PV2 were observed. Since 2001, immunity to PV3 has been decreasing gradually, though not significantly.
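For orientation, 95% confidence intervals of the kind reported here can be approximated with the standard Wald formula for a proportion. Note that the study's exact interval method may differ, and the sample size used below (n = 500) is an assumed illustrative value, not taken from the text.

```python
import math

def wald_ci_95(p_hat, n):
    """Approximate 95% confidence interval for a proportion (Wald method)."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Illustrative only: 84.2% seroprevalence with an assumed sample size of 500
lo, hi = wald_ci_95(0.842, 500)
print(round(lo * 100, 1), round(hi * 100, 1))  # an interval of roughly 81-87%
```

For proportions near the boundaries or small n, the Wilson interval is preferable to this simple approximation.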
Conclusion: Immunity to PV3 is insufficient in our cohort. Due to increasing globalization and worldwide tourism, the danger of polio outbreaks has not been averted, not even in developed countries such as Germany. Therefore, vaccination remains necessary.
Background: Ewing sarcoma patients have a poor prognosis despite multimodal therapy. Integration of combination immunotherapeutic strategies into first-/second-line regimens represents promising treatment options, particularly for patients with intrinsic or acquired resistance to conventional therapies. We evaluated the susceptibility of Ewing sarcoma to natural killer cell-based combination immunotherapy, by assessing the capacity of histone deacetylase inhibitors to improve immune recognition and sensitize for natural killer cell cytotoxicity.
Methods: Using flow cytometry, ELISA and immunohistochemistry, expression of natural killer cell receptor ligands was assessed in chemotherapy-sensitive/-resistant Ewing sarcoma cell lines, plasma and tumours. Natural killer cell cytotoxicity was evaluated in chromium release assays. Using the ATM/ATR inhibitor caffeine, the contribution of the DNA damage response pathway to histone deacetylase inhibitor-induced ligand expression was assessed.
Results: Despite comparable expression of natural killer cell receptor ligands, chemotherapy-resistant Ewing sarcoma exhibited reduced susceptibility to resting natural killer cells. Interleukin-15 activation of natural killer cells overcame this reduced sensitivity. Histone deacetylase inhibitor pretreatment induced NKG2D-ligand expression in an ATM/ATR-dependent manner and sensitized for NKG2D-dependent cytotoxicity (2/4 cell lines). NKG2D-ligands were expressed in vivo, regardless of chemotherapy response and disease stage. Soluble NKG2D-ligand plasma concentrations did not differ between patients and controls.
Conclusion: Our data provide a rationale for combination immunotherapy involving immune effector and target cell manipulation in first-/second-line treatment regimens for Ewing sarcoma.
Much is known about the computation in individual neurons in the cortical column. The selective connectivity between many cortical neuron types has also been studied in great detail. However, due to the complexity of this microcircuitry, its functional role within the cortical column remains a mystery. Some of the wiring behavior between neurons can be interpreted directly from their particular dendritic and axonal shapes. Here, I describe the dendritic density field (DDF) as one key element that remains to be better understood. I sketch an approach to relate DDFs in general to their underlying potential connectivity schemes. As an example, I show how the characteristic shape of a cortical pyramidal cell appears as a direct consequence of connecting inputs arranged in two separate parallel layers.
The small bowel is essential to sustain alimentation, and small bowel Crohn's disease (CD) may severely limit its function. Small bowel imaging is a crucial element in diagnosing small bowel CD, and treatment monitoring with imaging is increasingly used to optimize patient outcomes. Capsule endoscopy, balloon-assisted enteroscopy, and magnetic resonance imaging have thereby become key players in managing CD patients. In this review, the role of small bowel imaging in diagnosing and managing Crohn's disease patients is discussed in detail.
Editorial : Andreas Dombret "Regulating Systemically Important Financial Institutions is Vitally Important" ; Research Money/Macro : Dimitris Christelis, Dimitris Georgarakos, Michael Haliassos "International Portfolio Differences: Environment versus Characteristics" ; Research Finance : Raimond Maurer, Ralph Rogalla, Yuanyuan Shen "Optimal Asset Allocation in Retirement with Open-end Real Estate Funds" ; Research Law : Theodor Baums "Shareholder Suits in German Company Law – An Empirical Study" ; Policy Platform : Helmut Siekmann, Patrick Tuschl "Constitutional Ruling on Court of Auditors' Review of Banks" ; Interview : Michael S. Barr "Information Does not Necessarily Lead to Understanding"
The study of meson production in proton-proton collisions in the energy range up to one GeV above the production threshold provides valuable information about the nature of the nucleon-nucleon interaction. Theoretical models describe the interaction between nucleons via the exchange of mesons. In such models, different mechanisms contribute to the production of mesons in nucleon-nucleon collisions. The measurement of total and differential production cross sections provides information which can help in determining the magnitude of the various mechanisms. Moreover, such cross section information serves as an input to the transport calculations which describe e.g. the production of e+e− pairs in proton- and pion-induced reactions as well as in heavy ion collisions.
In this thesis, the production of ω and η mesons in proton-proton collisions at 3.5 GeV beam energy was studied using the High Acceptance DiElectron Spectrometer (HADES) installed at the Schwerionensynchrotron (SIS 18) at the Helmholtzzentrum für Schwerionenforschung in Darmstadt.
About 80 000 ω mesons and 35 000 η mesons were reconstructed. Total production cross sections of both mesons were determined. Furthermore, the collected statistics allowed for extracting angular distributions of both mesons as well as performing Dalitz plot studies.
The ω and η mesons were reconstructed via their decay into three pions (π+π−π0) in the exclusive reaction pp → ppπ+π−π0. The charged particles were identified via their characteristic energy loss, via the measurement of their time of flight and momentum, or using kinematics.
The neutral pion was reconstructed using the missing mass method. A kinematic fit was applied to improve the resolution and to select events in which a π0 was produced.
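The missing mass method can be illustrated with a small, self-contained sketch (the four-momenta below are toy values constructed for illustration, not HADES data): the four-momentum of the unobserved π0 is the difference between the initial pp system and the sum of the detected particles, and its invariant mass reproduces the π0 mass.

```python
import math

M_P = 0.93827    # proton mass [GeV/c^2]
M_PI0 = 0.13498  # neutral pion mass [GeV/c^2]

def four_momentum(m, px, py, pz):
    """Build an on-shell four-vector (E, px, py, pz) for mass m."""
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return (e, px, py, pz)

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def inv_mass(p):
    """Invariant mass sqrt(E^2 - |p|^2) of a four-vector."""
    e, px, py, pz = p
    m2 = e * e - px * px - py * py - pz * pz
    return math.sqrt(max(m2, 0.0))

# Initial state: 3.5 GeV kinetic-energy proton beam on a proton at rest
t_kin = 3.5
e_beam = t_kin + M_P
pz_beam = math.sqrt(e_beam ** 2 - M_P ** 2)
p_init = add((e_beam, 0.0, 0.0, pz_beam), (M_P, 0.0, 0.0, 0.0))

# Toy final state: an (unobserved) pi0 plus "detected" particles that carry
# the remaining energy-momentum by construction
p_pi0 = four_momentum(M_PI0, 0.05, -0.02, 0.30)
p_detected = sub(p_init, p_pi0)

missing = sub(p_init, p_detected)
print(round(inv_mass(missing), 5))  # recovers the pi0 mass, 0.13498
```

In the real analysis the detected four-momenta come from the measured tracks, so the missing mass distribution peaks at the π0 mass on top of background, which the kinematic fit helps to suppress.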
The correction of the measured yields for the effects of the spectrometer acceptance was performed as a function of four variables (two invariant masses and two angles). Systematic studies of the acceptance for different input distributions were performed. The measured yields were normalized to the number of measured elastic scattering events. Systematic errors due to the methods of the data analysis and the background subtraction were investigated.
Production angular distributions of ω and η mesons were measured. Both mesons exhibit a slightly anisotropic angular distribution.
The Dalitz plot of ω meson production shows indications of resonant production. However, the deviation of the distribution from the one expected from phase space simulations is not large.
The Dalitz plot of η meson production shows a signal of production via the N(1535) resonance. The contribution of N(1535) to the production was quantified to be about 47%. The angular distribution of η mesons does not show significant differences between resonant and non-resonant production.
The total production cross section of ω mesons in the reaction pp → ppω was determined to be 106.5 ± 0.9 (stat) ± 7.9 (sys) μb, where stat indicates the statistical error and sys the systematic error, while that of η mesons in the reaction pp → ppη was determined to be 136.9 ± 0.9 (stat) ± 10.1 (sys) μb.
Occurrence and sources of 2,4,7,9-tetramethyl-5-decyne-4,7-diol (TMDD) in the aquatic environment
(2011)
The aim of the present study was to identify the sources of 2,4,7,9-tetramethyl-5-decyne-4,7-diol (TMDD) in the aquatic environment and to investigate its occurrence in rivers and wastewater treatment plants (WWTPs). To this end, TMDD was analyzed in 441 wastewater samples from influents and effluents of 27 municipal WWTPs, in 6 sludge samples, in 52 wastewater samples from 3 sewage systems of municipal WWTPs, in 489 surface water samples from 24 rivers, in 9 wastewater samples of 3 paper-recycling factories and in 65 groundwater samples. TMDD was also analyzed in household paper products: in 23 samples of toilet paper, in 5 types of paper towels and in 12 types of paper tissues. The samples were collected between 2007 and 2011. The water samples were extracted by solid phase extraction (SPE) and the household paper samples by Soxhlet extraction. Gas chromatography-mass spectrometry (GC-MS) was used for quantification. Between November 2007 and January 2008, TMDD was detected in the river Rhine at Worms at persistently high concentrations (up to 1330 ng/L). The results showed that TMDD is uniformly distributed across the river at Worms. An increase of the mean TMDD concentration from approximately 500 ng/L to 1000 ng/L was registered in January 2008. Given the minor fluctuations of the TMDD concentration during the sampling period, the input of TMDD into the river appears to be continuous. TMDD is therefore more likely to originate from effluents of municipal WWTPs than from temporary sources. The mean TMDD load, based on the analysis of 147 water samples collected in the River Rhine, was 62.8 kg/d, which is equivalent to 23 t/a, suggesting that TMDD must be used and/or produced in large quantities in order to be found at such high concentrations. To determine whether TMDD is discharged by effluents of municipal WWTPs into rivers, 24-hour influent and effluent samples of four municipal WWTPs in the Frankfurt/Rhine-Main metropolitan region were collected between November 2008 and February 2010 and analyzed for TMDD. The TMDD influent concentrations varied between 134 ng/L and 5846 ng/L and the effluent concentrations between <LOQ (limit of quantitation) and 3539 ng/L. The TMDD elimination rates in the four WWTPs varied between 33% and 68%. The results showed that effluents of municipal WWTPs are an important source of TMDD in the aquatic environment because TMDD is not completely removed from the sewage during wastewater treatment.
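Load figures of this kind follow from load = concentration × discharge. As a rough cross-check of the order of magnitude (the mean Rhine discharge at Worms used below, 1450 m³/s, is an assumed illustrative value, not taken from the study):

```python
# Back-of-the-envelope river load estimate: load = concentration x discharge
conc_ng_per_l = 500.0        # mean TMDD concentration from the text [ng/L]
discharge_m3_per_s = 1450.0  # assumed river discharge [m^3/s]

litres_per_day = discharge_m3_per_s * 1000.0 * 86400.0  # m^3/s -> L/d
load_kg_per_day = conc_ng_per_l * 1e-12 * litres_per_day  # ng -> kg
load_t_per_year = load_kg_per_day * 365.0 / 1000.0

print(round(load_kg_per_day, 1))  # ~62.6 kg/d
print(round(load_t_per_year, 1))  # ~22.9 t/a
```

With these assumed inputs the estimate lands close to the reported 62.8 kg/d (about 23 t/a), which illustrates how the load values were obtained from concentration measurements.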
Weekly and daily variations of the TMDD concentration in the influents of two municipal WWTPs indicated that both private households and indirect industrial dischargers contribute to the introduction of TMDD into the municipal sewage systems. A more detailed study of the TMDD elimination rate in the different wastewater treatment stages was carried out in the WWTP Niederrad/Griesheim in Frankfurt am Main. The results showed that TMDD is removed mainly during the aerobic biological treatment, where the elimination rate was 46%. In contrast, during the anoxic treatment the removal efficiency was only 1.4%, and during the mechanical treatment the elimination rate was 19%. To determine the sources of TMDD in the sewage, household paper products (paper tissues, toilet paper and paper towels) were analyzed for TMDD using Soxhlet extraction. TMDD was detected in 83% of the samples (n=40). The highest mean TMDD concentrations were found in recycled toilet paper (0.20 μg/g) and in paper towels (0.11 μg/g). In paper tissues and non-recycled toilet paper the mean TMDD concentrations were lower: 0.080 μg/g and 0.025 μg/g, respectively. According to these results, the high TMDD influent concentrations found previously in municipal WWTPs (mean 1.20 μg/L) cannot be explained by migration of TMDD from household paper products into the sewage. Thus, indirect industrial dischargers must be the cause of the high influent TMDD concentrations. Effluents of municipal WWTPs with different indirect industrial dischargers (textile, metal-processing, food-processing, electroplating, paper-recycling and printing-ink factories) were analyzed. The highest mean TMDD concentrations were found in the effluents of municipal WWTPs that have paper-recycling (71.3 μg/L) and printing-ink factories (138 μg/L) as indirect industrial dischargers. These results were confirmed by analyzing process wastewater of three paper-recycling factories located in Germany.
High TMDD concentrations were detected, fluctuating between 1.83 μg/L and 113 μg/L. TMDD was also analyzed in the wastewater of a non-recycling paper factory, but its concentration was much lower (0.066 μg/L), indicating that TMDD is introduced into the process water during papermaking through the use of waste paper. Wastewater samples from different parts of the sewage pipes of a municipal WWTP in Hesse, which receives the wastewater of a printing-ink factory, were also analyzed. The TMDD concentration in the wastewater sample from the sewage pipe of the printing-ink factory was much higher (3,300 μg/L) than the TMDD concentrations detected in the other wastewater samples from the sewage system (0.030 μg/L – 0.89 μg/L). These results confirm printing-ink production as one of the principal sources of TMDD in the sewage. Analysis of surface water samples of the River Modau downstream from the effluent of the WWTP Nieder-Ramstadt showed TMDD concentrations of up to 28.0 μg/L. These high TMDD concentrations might be caused by the indirect wastewater discharges of a paint factory connected to the municipal sewage system. These results indicate that TMDD is introduced into municipal WWTPs principally by indirect industrial dischargers, mainly paint and printing-ink factories. Paper-recycling factories also represent an important, though indirect, source of TMDD in municipal WWTPs. According to statements by representatives of two paper-recycling factories, neither TMDD nor any other TMDD-containing product is used or added during the papermaking process. TMDD is therefore washed out of the printing inks of coloured waste paper and concentrated in the process wastewater of the closed water circuits of paper-recycling factories, from where it reaches rivers and municipal WWTPs. The occurrence and distribution of TMDD in surface waters in Germany was also studied.
The results showed that TMDD is widely distributed across different river systems in the federal states of Hesse, North Rhine-Westphalia, Bavaria, Baden-Wuerttemberg and Rhineland-Palatinate. In Hesse, TMDD was detected in some of the main rivers with mean concentrations of 812 ng/L (Schwarzbach, Hessian Ried), 374 ng/L (Kinzig), 393 ng/L (Main, at Frankfurt), 539 ng/L (Werra), 326 ng/L (Fulda), 151 ng/L (Emsbach) and 161 ng/L (Nidda). In small rivers (creeks) the mean TMDD concentrations varied between <LOQ (Diemel, Urselbach) and 1890 ng/L (Darmbach). The results showed that the TMDD concentrations in creeks are strongly influenced both by effluents of WWTPs and by the distance between the sampling point and the nearest WWTP. Surface water samples from locations downstream of WWTP discharges showed higher TMDD concentrations (mean 518 ng/L) than locations upstream of WWTP discharges (mean 35.1 ng/L). The behavior of TMDD during bank filtration was investigated at two locations: at a water utility company on the Lower River Rhine (urban area) and at the Oderbruch polder (rural area). The results indicated that TMDD is removed from the surface water by bank filtration at both locations. The removal probably takes place in the first meters of the aquifer (the hyporheic zone) through biodegradation, since TMDD does not tend to be sorbed by sediments and was not found in the groundwater of monitoring wells. In groundwater samples from the Hessian Ried (n=23), TMDD was found in only five samples, with a maximum concentration of 135 ng/L. According to these results, TMDD does not represent a concern for drinking water in Germany, since it does not reach the groundwater at high concentrations and has a low toxicity potential.
The input of TMDD into the North Sea was estimated at 60.7 t/a, based on the mean loads of TMDD transported by the River Rhine at Wesel (58.3 t/a) and by the Meuse in the Netherlands (2.40 t/a). The estimated discharge of TMDD by German municipal WWTPs (8.19 t/a) and paper-recycling factories (9.24 t/a) into rivers appears too low, considering that the mean TMDD load in the River Rhine downstream from Wesel is 58.3 t/a. However, due to the high density of population and industry on the Lower Rhine, it is expected that further relevant sources of TMDD are located along the River Rhine, increasing the transported load. According to the results of this PhD project, TMDD is a non-ionic surfactant contained in products that are applied to surfaces (printing inks and paints) and has the potential to reach the aquatic environment. TMDD should therefore fulfill the requirement of 80% biodegradability established by the “Law on the Environmental Impact of Detergents and Cleaning Products” in Germany. However, due to the only partial elimination of TMDD observed in municipal WWTPs (between 33% and 68%) and the absence of information about the execution of the biodegradation test on TMDD, it is unknown whether TMDD complies with this law. If it does not, its use as a surfactant in such products is questionable.
Human activities affect nearly all areas of life on Earth (MEA 2005a; UNEP 2007). The destruction and alteration of natural habitats have been identified as the main cause of global biodiversity loss (Harrison and Bruna 1999; Dale et al. 2000; Foley et al. 2005; MEA 2005a). Together with climate change, land-use change is therefore considered the most influential aspect of anthropogenic global change (MEA 2005a). Land-use change includes both the conversion of natural habitats into agricultural land or settlements and the intensification of land use in already cultivated landscapes. These changes have far-reaching consequences for species diversity and frequently result in the loss of species with increasing land-use intensity (Scholes and Biggs 2005).
Biodiversity and ecosystems provide many different functions, such as the production of oxygen, the purification of water and the pollination of crops.
Some of these functions are helpful, others important, and still others essential for human well-being (MEA 2005b; UNEP 2007). Ecosystem functions and the many benefits they provide have meanwhile become a central topic of interdisciplinary research between the social and natural sciences (Barkmann et al. 2008 and references therein). This has led to some confusion regarding the terms "ecosystem function" and "ecosystem service" (deGroot et al. 2002). Since the focus of my thesis is on fundamental functions of ecosystems, I use the term ecosystem function in the following.
For many ecosystem functions it is still poorly understood how they are affected by external disturbances (Kremen and Ostfeld 2005; Balvanera et al. 2006). Ecosystem functions are rarely maintained by a single species; usually a whole range of different taxonomic groups is involved, each with its own specific requirements. These species, as well as their intra- and interspecific interactions, may respond quite differently to the same source or intensity of disturbance. This can make predictions about the behavior of ecosystem functions extremely difficult. ...
We provide a mathematical framework to model continuous time trading in limit order markets of a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (hold limit orders with only finitely many different limit prices and rebalance at most finitely often) can be extended in a suitable way to general continuous time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and any element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton's portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset evolves according to a geometric Brownian motion, a proportional bid-ask spread, and Poisson execution times for the limit orders of the small investor, we show that the optimal strategy consists in using market orders to keep the proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
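For orientation, in the frictionless benchmark (a single price, no spread, continuous trading), Merton's problem with constant relative risk aversion γ has the classical constant-proportion solution

```latex
\pi^* = \frac{\mu - r}{\gamma \sigma^2},
```

where μ and σ are the drift and volatility of the geometric Brownian motion and r is the riskless rate. The no-trade boundaries obtained in the spread setting play the role that the single proportion π* plays in the frictionless case.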
Ende der 70ger Jahre, fünf Jahre nach der Einführung des ersten kommerziellen, medizinischen Computertomographen wurde die Tomographie am Los Alamos Scientific Laboratory zum ersten Mal für die Diagnose von Teilchenstrahlen angewendet. Bei der Tomographie wird aus eindimensionalen Projektionen, sogenannten Profilen, welche in möglichst vielen Winkeln um ein Objekt herum aufgenommen werden, ein zweidimensionales Abbild der Dichteverteilung (Slice oder Scheibe) approximiert. Dies ist möglich durch das bereits 1917 von Johann Radon eingeführte Fourier-Scheiben-Theorem. In der Theorie kann die zwei-dimensionale Dichteverteilung exakt ermittelt werden, wenn Projektionen mit einer unendlich feinen Auflösung über unendlich viele Winkel um ein Objekt herum in die Rekonstruktion einbezogen werden. Durch die Rekonstruktion vieler Scheiben kann ein drei-dimensionales Abbild der Dichteverteilung in einem Objekt, in diesem Fall einem Ionenstrahl, berechnet werden, sofern dieses nicht optisch dicht ist.
In non-invasive beam diagnostics, the profiles are obtained from CCD camera images of beam-induced fluorescence, which is produced by admitting residual gas. Profiles obtained by other methods (e.g. grid measurements) are also conceivable. At locations with high beam energy, however, a non-invasive form of profile measurement is indispensable, both for the quality of the beam and for the protection of the measuring devices.
Over the last 40 years, many important advances have been made in beam tomography:
1. Initially, only very few profiles were available, so that filtered back projection (FBP), which derives directly from the Fourier slice theorem and is also used in medicine, could not be applied. To solve this problem, iterative methods such as the algebraic reconstruction technique (ART) and the maximum entropy method (MEM) were adapted for beam tomography, so that a reconstruction became possible even with a very small number of profiles.
2. In addition to real-space tomography, phase-space tomography was developed, so that a reconstruction of the six-dimensional phase space, which describes an ion beam in its entirety, is now possible.
3. For a long time, the projections were obtained from several fixed ports (multi-port technique), which severely limits the number of possible projections. Later, a method was developed that rotates the beam using quadrupoles (quad-scan technique), so that many projections could be measured from a single port and even FBP could be applied.
4. Most efforts aimed at using tomography as a non-invasive emittance measurement method, which remains an important problem to this day because of the large and still increasing energies in modern accelerators. To use tomography for emittance measurement, a reconstruction of the phase space is carried out. The problem is that this requires a priori knowledge of the beam transport matrix, and the calculated transport matrix does not agree with the actual beam transport, since at high energies the transport is modified non-linearly by space charge. Good progress has been made in estimating the actual transport matrix, so that phase-space tomography can nevertheless be carried out with sufficiently good results.
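The algebraic reconstruction technique (ART) mentioned in point 1 can be sketched as a Kaczmarz iteration: the current image estimate is projected, ray by ray, onto the hyperplane defined by each measured profile value. This is a minimal illustration on a 2x2 toy image with row- and column-sum profiles, not the implementation used in the thesis.

```python
def art_reconstruct(A, p, n_iter=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz iteration):
    cycle through the projection equations A x = p and project the
    current estimate onto each ray's hyperplane in turn."""
    x = [0.0] * len(A[0])
    for _ in range(n_iter):
        for a, pi in zip(A, p):
            dot = sum(ai * xi for ai, xi in zip(a, x))
            norm = sum(ai * ai for ai in a)
            c = relax * (pi - dot) / norm
            x = [xi + c * ai for xi, ai in zip(x, a)]
    return x

# toy 2x2 "beam density" (pixels in row-major order) and four rays
density = [1.0, 2.0, 3.0, 4.0]
A = [[1, 1, 0, 0],   # top row sum
     [0, 0, 1, 1],   # bottom row sum
     [1, 0, 1, 0],   # left column sum
     [0, 1, 0, 1]]   # right column sum
p = [sum(ai * di for ai, di in zip(row, density)) for row in A]  # profiles
x = art_reconstruct(A, p)
```

For this consistent toy system the iteration recovers the original density exactly; with few, noisy profiles the relaxation parameter and iteration count control the trade-off between fit and noise amplification.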
Despite all these advances and developments, tomography is still not a widespread method in beam diagnostics. The reason is that setting up a tomography requires a complex sequence of numerous decisions and broad knowledge from many different fields, and this considerable extra effort must be justified by a significant benefit. To this day, however, the great benefit of tomography for beam diagnostics and for the study of beam dynamics remains largely unrecognized and is still reduced to the development of a non-invasive emittance measurement method. A second obstacle has been the trade-off between accuracy and space requirements (high accuracy from many projections with the quad-scan technique over several metres, or low accuracy from few projections with the multi-port technique over less than one metre). Tomography can be of great benefit both for the online monitoring of important machine parameters during beam operation and for detailed analyses of beam dynamics (modelling), far beyond the implementation of a non-invasive emittance measurement method.
Two things are required to achieve this. First, the trade-off between accuracy and space requirements must be eliminated. To this end, a rotatable vacuum chamber was developed in this thesis which, modelled on medical tomographs, can travel around the beam in more than 5000 angular steps while maintaining a vacuum of at least 10^-7 mbar and occupying less than 400 mm of the beamline. Second, the implementation of tomography must be simplified by specifying schematic steps and decisions. A beam tomography must always be implemented with its particular purpose in mind, since individual elements of the tomography, such as the measurement setup and hence the number of profiles, the tomography algorithm to be used, and the parameters to be determined, can differ depending on the application. However, the necessary decisions can be organized into a scheme that simplifies and accelerates the implementation of tomography. To this end, this thesis introduces a diagnostic pipeline and a decision scheme, demonstrates an implementation following this scheme using the example of a beam tomography for the Frankfurt Neutron Source (FRANZ), and discusses the corresponding questions and decisions. It is shown how the standard beam parameters required for monitoring can be obtained from the measurement data via tomographic processing. In addition, a layer model is introduced through which non-standard parameters or newly modelled beam parameters can be developed for detailed analyses of beam dynamics beyond the standard parameters. This thesis is intended to provide a basic concept for the routine implementation of tomography in beam diagnostics.
For use in monitoring during beam operation, the determination of standard parameters must still be improved substantially in terms of the time required. Phase-space tomography also still requires an idea for reconciling the arctangent-shaped progression of the calculated phase-space rotation angles with the FBP requirement of equidistant projection angles.
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads and monadic concurrent evaluation, and it includes a pure functional lambda calculus comprising data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
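The implicit futures discussed here are demand-driven: the computation runs concurrently, and the requesting thread blocks only when the value is actually forced. As a loose analogy (in Python rather than Haskell, and not a model of the CHF calculus), a future can be sketched as a thunk submitted to a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

class Future:
    """Hypothetical future analogue: the computation starts
    concurrently, and force() blocks only when the value is
    actually demanded (cf. implicit futures in CHF)."""
    _pool = ThreadPoolExecutor(max_workers=4)

    def __init__(self, thunk):
        # the zero-argument thunk starts running in the background
        self._result = Future._pool.submit(thunk)

    def force(self):
        # blocks until the concurrent computation is finished
        return self._result.result()

fut = Future(lambda: sum(range(1000)))
# the creating thread can proceed with other work here;
# the value is demanded only at this point:
value = fut.force()
```

Unlike unsafeInterleaveIO-based futures, this sketch is eager in starting the computation; it only illustrates the blocking-on-demand behaviour.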
We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed by first adding monadic IO-actions to the machine and then in a second step we add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF, w.r.t. may- and should-convergence.
Lipid-laden alveolar macrophages and pH monitoring have been used in the diagnosis of chronic aspiration in children with gastroesophageal reflux (GER). This study was conducted to test whether the detection of alimentary pulmonary fat phagocytosis correlates with an increasing amount of proximal gastroesophageal reflux, on the assumption that proximal GER correlates better with aspiration than distal GER. Patients aged 6 months to 16 years with unexplained recurrent wheezy bronchitis and bronchial hyperreactivity, or recurrent pneumonia with chronic cough, underwent 24-hour double-channel pH monitoring and bronchoscopy with bronchoalveolar lavage (BAL). Aspiration of gastric content was determined by counting lipid-laden alveolar macrophages from BAL specimens. There were no correlations between any pH-monitoring parameters and counts of lipid-laden macrophages in the whole study population, even when the analysis was restricted to patients with an abnormal reflux index expressing clinically significant GER. Quantifying lipid-laden alveolar macrophages from BAL in children with gastroesophageal-related respiratory disorders does not have an acceptable specificity to prove chronic aspiration as an underlying etiology. Therefore, research into other markers of pulmonary aspiration is needed.
In recent years, the chemopreventive activity of NSAIDs against a great variety of tumors has been intensively investigated. COX-2 seemingly plays a major part in tumorigenesis and tumor development, as underlined by several studies in animals and humans. At first, NSAIDs were thought to accomplish chemoprevention by inhibition of COX-2, as their known mode of action comprises unselective inhibition of COX enzymes. However, further studies revealed COX-independent mechanisms. Sulindac, a well-established drug used to treat inflammation and pain, exerts the most prominent chemopreventive action, mainly in colorectal cancer and FAP, and belongs to the group of NSAIDs inhibiting both COX isoforms. As interference with AA metabolism is evident, it was speculated whether Ssi has targets other than COX enzymes that could explain its beneficial side-effect profile and its ability to reduce tumor growth. 5-LO is another master enzyme of the AA cascade, producing inflammatory lipid mediators (LTs) upon stimulation in inflamed tissues. The present work should answer the question whether Ssi targets the 5-LO pathway and should examine the molecular mechanisms behind Ssi-mediated 5-LO inhibition. As COX-2 is upregulated during carcinogenesis and is inhibited by Ssi, further investigations should show regulatory effects of Ssi on 5-LO gene expression in MM6 cells and whether Sp1, as a common transcription factor, is involved in such regulation. As the use of NO-NSAIDs seems to be a promising strategy given their chemopreventive and gastroprotective effects compared to the parent NSAIDs, a possible interaction with the 5-LO pathway as a second, potent target should additionally be elucidated. In the first section it was demonstrated that the pharmacologically active metabolite of sulindac, Ssi, targets 5-LO. Ssi inhibited 5-LO in ionophore A23187- and LPS/fMLP-stimulated human PMNL (IC50 ≈ 8-10 μM).
Importantly, Ssi efficiently suppressed 5-LO in human whole blood at clinically relevant plasma levels (IC50 = 18.7 μM). Ssi was 5-LO-selective, as no inhibition of related lipoxygenases (12-LO, 15-LO) was observed. The sulindac prodrug and the other metabolite, sulindac sulfone, failed to inhibit 5-LO. Mechanistic analysis demonstrated that Ssi directly suppresses 5-LO with an IC50 of 20 μM. Together, these findings may provide a novel molecular basis to explain the COX-independent pharmacological effects of sulindac under therapy. In the second part of the work, dealing with the analysis of Ssi’s inhibitory mechanism on 5-LO, it was shown that Ssi loses potency in cellular systems where membrane constituents are present. The addition of microsomal fractions of PMNL to crude 5-LO enzyme restored enzyme activity to ~100%. Lipids that selectively stimulate 5-LO activity, such as PC, which participates in 5-LO membrane interactions via the regulatory C2-like domain of 5-LO, counteracted the Ssi-mediated inhibition of 5-LO-wt in a concentration-dependent manner. Lastly, a protein mutant lacking three Trp residues essential for linking the enzyme to nuclear membranes and deploying catalytic activity was not influenced by Ssi and shows enzyme activity in a cell-free assay. Ssi is thus the first marketed 5-LO inhibitor that interacts with the C2-like domain of the enzyme and can therefore serve as a novel lead structure for 5-LO inhibitors. An influence of Ssi on 5-LO gene expression could be detected in differentiated MM6 cells, as described in results chapter 3 (4.3). Ssi downregulated the 5-LO mRNA level after 72 hrs of incubation in differentiated MM6 cells to ~20% of control at concentrations of 10 μM. Concomitantly, mRNA levels of Sp1 were suppressed.
Reporter gene studies revealed Sp1 as the most probable regulating agent involved in the Ssi-mediated 5-LO mRNA downregulation, as co-transfection of increasing amounts of Sp1 could abrogate the effect. A ChIP assay identified Sp1 as a critical transcription factor, as Sp1 binding to the 5-LO promoter decreased in the presence of Ssi. Lastly, three NO-NSAIDs (NO-sulindac, NO-naproxen, NO-aspirin) were tested for their ability to inhibit 5-LO product formation. In intact PMNL, all compounds showed effective inhibition of 5-LO activity; NO-sulindac was most potent, with an IC50 value of ~3 μM. NO-ASA inhibited 5-LO with IC50 values of ~30 μM and showed a non-competitive mode of action in cell-based assays. Against human recombinant 5-LO, all compounds again showed inhibitory potency, and NO-sulindac again suppressed LT biosynthesis with an IC50 value comparable to intact cellular systems. Unfortunately, all inhibitors showed a loss of potency when tested for inhibition of 5-LO product synthesis in human whole blood, as higher concentrations of up to 100 μM were needed to reach at least 55% enzyme inhibition. Nevertheless, this strategy of 5-LO inhibition seems promising and needs further experimental work to gain more insight into the mechanism of 5-LO inhibition by NO-NSAIDs.
5-lipoxygenase (5-LO) catalyzes the first two steps in leukotriene (LT) biosynthesis: the enzyme oxygenates arachidonic acid (AA) to a hydroperoxide intermediate and then dehydrates it to the highly unstable epoxide leukotriene A4 (LTA4) (20). LTA4 can then be further metabolized by two terminal synthases, yielding either the potent chemoattractant leukotriene B4 (LTB4) or the cysteinyl leukotrienes (CysLTs). 5-LO expression is primarily found in mature leukocytes (22), where the enzyme can reside either in the cytoplasm or in the nucleus associated with euchromatin (29). In intact cells, its enzymatic activity is embedded in a complicated regulatory network in which LT synthesis depends on the cell type and the nature of the stimulus. Factors such as the amount of free AA released by phospholipase A2 enzymes, the levels of the enzymes involved, the catalytic activity per enzyme molecule, and the availability of different small molecules influence 5-LO activity (36).
The 5-LO-derived LTs are lipid mediators which were shown to primarily mediate inflammatory and allergic reactions, and their role in the pathogenesis of asthma is well defined. CysLTs are among the most potent bronchoconstrictors yet studied in man and play an important role in airway remodeling. LTB4 has no bronchoconstrictory effects in healthy and asthmatic humans but displays potent chemoattractant properties on neutrophils and increases leukocyte adhesion to the vessel wall endothelium (22); it thereby enhances the capacity of macrophages and neutrophils to ingest and kill microbes. In concert with LTB4, histamine, and prostaglandin E2 (PGE2), CysLTs are thought to maintain the tone of the human airways (82).
Besides their well-studied role in asthma, 5-LO-derived LTs have also been implicated in cardiovascular diseases and cancer. In contrast to healthy tissues, LT pathway enzymes and receptors were found to be abundantly expressed in cancer tissues and in atherosclerotic lesions of the aorta, heart, and carotid artery (86). Pharmacological inhibition of 5-LO potently suppressed tumour cell growth by inducing cell cycle arrest and triggering cell death via the intrinsic apoptotic pathway (92, 93). In several studies, LTs were found to exhibit cardiovascular actions by promoting plasma leakage in postcapillary venules, coronary artery vasoconstriction, and impaired ventricular contraction, leading to reduced coronary blood flow and cardiac output (24). Unfortunately, the precise molecular mechanisms through which LTs influence carcinogenesis and cardiovascular diseases are still incompletely understood.
In contrast, an increasing number of studies question the correlation between 5-LO and cancer (95-97), since extreme LT concentrations were applied to induce proliferative effects in the majority of the publications. A few studies exist which show susceptibility towards 5-LO products at physiological concentrations or achieve anti-proliferation by applying low concentrations of 5-LO inhibitors (98) ...
The aim of this study is a better understanding of radiation processes in regional climate models (RCMs), in order to quantify their impact and to reduce possible errors. A first important task was to examine the accuracy of the components of the radiation budget in regional climate simulations. To this end, the simulated radiation budgets of two regional climate simulations for Europe were compared with a satellite-based reference. The simulations with the RCM COSMO-CLM showed some serious under- and overestimations of short- and long-wave net radiation in Europe. However, taking into account the differences in the reference datasets, the results of the COSMO-CLM were quite satisfactory.
Using statistical methods, the influence of potential sources of uncertainty was estimated. Uncertainties in cloud cover and surface albedo had a significant impact on uncertainties in short-wave net radiation; the explained variance of uncertainties in cloud cover was two to three times higher than that of uncertainties in surface albedo. Uncertainties in cloud cover also resulted in significant errors in net long-wave radiation. However, the influence of uncertainties in soil temperature on errors in the long-wave radiation budget was low or even negligible. These results were confirmed in a comparison with simulations of the REMO and ALADIN regional climate models. It is reasonable to expect that a better parameterization of relatively simple parameters such as cloud cover and surface albedo can significantly improve the simulation of radiation budget components in the COSMO-CLM.
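The explained-variance comparison described above boils down to computing R² between an error source (e.g. cloud-cover error) and a radiation-budget error. A minimal sketch for simple linear regression, with purely hypothetical numbers rather than data from the study:

```python
def r_squared(x, y):
    """Explained variance R^2 of a simple linear regression of y on x
    (equivalently, the squared Pearson correlation coefficient)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# hypothetical example: cloud-cover error vs. shortwave net radiation error
cloud_err = [0.1, 0.2, 0.3, 0.4, 0.5]
swnet_err = [5.2, 9.8, 15.1, 20.3, 24.9]
r2 = r_squared(cloud_err, swnet_err)
```

An R² close to 1 means almost all of the variance in the radiation error is explained linearly by the candidate error source; comparing R² across sources ranks their relative importance, as done in the study.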
An important question for the application of RCMs is whether the results on radiation uncertainties and their impact factors are comparable when the model is applied in a region other than the one for which it was originally developed. Comparisons of the simulated radiation budgets of different RCMs for West Africa showed that problems in the simulation of short- and long-wave radiation fluxes were widespread. Most of the tested models showed considerable under- or overestimation of the short- and long-wave radiation fluxes.
As in Europe, uncertainties in cloud cover were a significant factor affecting uncertainties in the simulated radiation fluxes in the simulations for Africa. However, for the African simulations, uncertainties in the parameterization of surface albedo were much more important than in Europe. On average, over land, uncertainties in cloud cover and surface albedo were of similar importance. Uncertainties in soil temperature simulations were of higher importance in Africa and, over land, reached values of the mean explained variance (R2 ≈ 0.2) similar to those of uncertainties in cloud cover. This indicates a geographical dependence of the model error. This study confirmed the assumption that an improved parameterization of relatively simple parameters such as the surface albedo in RCMs leads to a significant improvement in the modeled radiation budget, particularly in Africa.
The influence of errors in the simulated radiation budget components on the simulation of climate processes, such as the West African monsoon (WAM), was investigated in a next step. The evaluation of ERA-Interim- and ECHAM5-driven COSMO-CLM simulations for Africa showed that the main features of the WAM were well reproduced by the model, but there were only slight improvements compared to the driving data. The index of convective activity in the model simulations was much too high, and precipitation was underestimated in large parts of tropical Africa. The partly considerable differences between the ERA-Interim- and ECHAM5-driven simulations demonstrated the sensitivity of the RCM to the boundary conditions, in particular to the sea surface temperature. An excessive northward shift of the monsoon in the model was influenced by the land-sea temperature gradient and the strength of the Saharan heat low. Consequently, part of the error was due to the driving data, while another part was produced by the model itself.
By modifying the parameterization of the bare soil albedo, the errors in the radiation budget and 2 m temperature in the Sahara region were significantly reduced. Similarly, the overestimation of precipitation and convection was reduced in the Sahel. The effect of this modification on the examined WAM area was low. This confirmed that, especially in desert regions, errors in the surface albedo were a driving factor for errors in the radiation budget. However, there are other important factors not yet sufficiently understood that have a strong influence on the quality of the simulation of the WAM.
The analysis of the actual state, the quantification of error sources, and the highlighting of connections made it possible to find ways to reduce uncertainties in the simulated radiation in RCMs and to better understand radiation processes. However, the magnitude of the errors found, the number of possible influencing factors, and the complexity of their interactions indicate that there is still a need for further research in this area.
Background and Purpose: Targeted drugs have augmented the cancer treatment armamentarium. Based on their molecular specificity, it was initially believed that these drugs had significantly fewer side effects. However, it is now accepted that all of these agents have their specific side effects. Given the multimodal approach, special emphasis has to be placed on putative interactions of conventional cytostatic drugs, targeted agents, and other modalities. The interaction of targeted drugs with radiation harbours special risks, since awareness of interactions and even synergistic toxicities is lacking. At present, only limited data are available regarding combinations of targeted drugs and radiotherapy. This review gives an overview of the current knowledge on such combined treatments.
Material and methods: The PubMed database was searched using the following MeSH headings and combinations of these terms: Radiotherapy AND cetuximab / trastuzumab / panitumumab / nimotuzumab, bevacizumab, sunitinib / sorafenib / lapatinib / gefitinib / erlotinib / sirolimus, thalidomide / lenalidomide as well as erythropoietin. For citation crosscheck, the ISI Web of Science database was used, employing the same search terms.
Results: Several classes of targeted substances may be distinguished: small molecules (including kinase inhibitors and other specific inhibitors), antibodies, and anti-angiogenic agents. Combination of these agents with radiotherapy may lead to specific toxicities or may negatively influence the efficacy of RT. Though there is only little information on the interaction of molecular targeted drugs and radiotherapy in clinical settings, several critical incidents have been reported.
Conclusions: The addition of molecular targeted drugs to conventional radiotherapy outside of approved regimens or clinical trials warrants careful consideration, especially when used in conjunction with hypofractionated regimens. Clinical trials are urgently needed in order to address the open questions regarding efficacy and early and late toxicity.
In situ measurements of ice crystal size distributions in tropical upper troposphere/lower stratosphere (UT/LS) clouds were performed during the SCOUT-AMMA campaign over West Africa in August 2006. The cloud properties were measured with a Forward Scattering Spectrometer Probe (FSSP-100) and a Cloud Imaging Probe (CIP) operated aboard the Russian high-altitude research aircraft M-55 Geophysica, with the mission based in Ouagadougou, Burkina Faso. A total of 117 ice particle size distributions were obtained from the measurements in the vicinity of Mesoscale Convective Systems (MCS). Lognormal size distributions with two to four modes were fitted to the average size distributions for different potential temperature bins. The measurements showed proportionately more large ice particles than former measurements above maritime regions. With the help of trace gas measurements of NO, NOy, CO2, CO, and O3 and satellite images, clouds in young and aged MCS outflow were identified. These events were observed at altitudes of 11.0 km to 14.2 km, corresponding to potential temperature levels of 346 K to 356 K. In a young outflow from a developing MCS, ice crystal number concentrations of up to (8.3 ± 1.6) cm−3 and rimed ice particles with maximum dimensions exceeding 1.5 mm were found. A maximum ice water content of 0.05 g m−3 and an effective radius of about 90 μm were observed. In contrast, the aged outflow events were more dilute and showed a maximum number concentration of 0.03 cm−3, an ice water content of 2.3 × 10−4 g m−3, and an effective radius of about 18 μm, while the largest particles had a maximum dimension of 61 μm.
Close to the tropopause, subvisual cirrus were encountered four times at altitudes of 15 km to 16.4 km. The mean ice particle number concentration of these encounters was 0.01 cm−3 with maximum particle sizes of 130 μm, and the mean ice water content was about 1.4 × 10−4 g m−3. All known in situ measurements of subvisual tropopause cirrus are compared, and an exponential fit to the size distributions is established for modelling purposes.
A comparison of aerosol and ice crystal number concentrations, made in order to estimate how many ice particles may result from activation of the aerosol present, yielded low ratios for the subvisual cirrus cases of roughly one cloud particle per 30 000 aerosol particles, while for the MCS outflow cases it yielded a high ratio of one cloud particle per 300 aerosol particles.
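As a minimal illustration of the lognormal size-distribution fits mentioned above, here is a single-mode fit by moments in log space on synthetic particle sizes (the actual measurements were fitted with two to four modes, which requires nonlinear optimization; all values below are synthetic):

```python
import math
import random

def lognormal_fit(diameters):
    """Single-mode lognormal fit by moments in log space:
    if D is lognormal, ln D is Gaussian with parameters (mu, sigma)."""
    logs = [math.log(d) for d in diameters]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / n
    return mu, math.sqrt(var)

rng = random.Random(1)
# synthetic crystal maximum dimensions (micrometres), one lognormal mode
# with median 20 um and geometric-standard-deviation parameter 0.4
sample = [math.exp(rng.gauss(math.log(20.0), 0.4)) for _ in range(5000)]
mu, sigma = lognormal_fit(sample)
```

The recovered mu corresponds to the log of the median size; a multimodal fit would superpose several such modes with weights and fit them jointly to the binned size distribution.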
According to the World Health Organization (WHO), bacterial resistance to antibiotic drug therapy is emerging as a major public health problem around the world. Infectious diseases seriously threaten the health and economy of all countries. Hence, preserving the effectiveness of antibiotics is a worldwide priority. The key to preserving the power of antibiotics lies in maintaining their diversity. Many microorganisms are capable of producing these bioactive products, the so-called antibiotics. Specifically, in microorganisms, polyketide synthases (PKS) and non-ribosomal peptide synthases (NRPS) produce these natural bioactive compounds. Besides being used as antibiotics, these non-ribosomal peptides and polyketides display an even broader spectrum of biological activities, e.g. as antivirals, immunosuppressants, or in antitumor therapy. The wide functional spectrum of the peptides and ketides is due to their structural diversity. Mostly they are cyclic or branched cyclic compounds containing non-proteinogenic amino acids, small heterocyclic rings, and other unusual modifications such as epimerization, methylation, N-formylation, or heterocyclization. It has been shown that these modifications are important for biological activity, but little is known about their biosynthetic origin.
PKS and NRPS are multidomain protein assembly lines which function by sequentially elongating a growing polyketide or peptide chain through the incorporation of acyl units or amino acids, respectively. The growing product is attached via a thioester linkage to the 4’-phosphopantetheine (4’-Ppant) arm of a holo acyl carrier protein (ACP) in PKSs or a holo peptidyl carrier protein (PCP) in NRPSs and is passed from one module to another along the chain of reaction centers. The modular arrangement makes PKS and NRPS systems an interesting target for protein engineering. More than 200 novel polyketide compounds have already been created by module swapping, gene deletion or other specific manipulations. Unfortunately, however, engineered PKSs often fail to produce significant amounts of the desired products. Structural studies may facilitate yield improvement from engineered systems by providing a more complete understanding of the interfaces between the different domains. While some information about domain-domain interactions involving the most common enzymatic modules, ketosynthase and acyltransferase, is starting to emerge, little is known about the interaction of ACP domains with other modifying enzymes such as methyltransferases, epimerases or halogenases.
To further improve the understanding of domain-domain interactions, this work focuses on the curacin A assembly line. Curacin A, which exhibits anti-mitotic activity, is produced by the marine cyanobacterium Lyngbya majuscula. This outstanding natural product contains a cyclopropane ring, a thiazoline ring, an internal cis double bond, and a terminal alkene. The biosynthesis of curacin A is performed by a 2.2 megadalton (MDa) hybrid PKS-NRPS cluster. A 10-enzyme assembly catalyzes the formation of the cyclopropane moiety as the first building block of the final product. Interestingly, for these enzymes the substrate is presented by an unusual cluster of three consecutive ACPs (ACPI,II,III). Little is known about the function of multiple ACPs, which are thought to increase the overall flux for enhanced production of secondary metabolites.
The first task in this work was to elucidate the structural effect of the triplet ACP repetition by nuclear magnetic resonance (NMR). The initial data showed that the excised ACPI, ACPII, and ACPIII proteins gave [15N, 1H]-TROSY spectra with strong chemical shift perturbations (CSPs), suggesting an effect on the structure. The triplet ACP domains display a high sequence identity (93-100%), making structural investigation with standard NMR techniques impossible due to severe peak overlap. To enable the investigation of the triplet ACP in its native composition, we developed a powerful method, the three-fragment ligation. Segmental labeling allows isotopes to be incorporated into one single domain in its multidomain context. As a result, we could prepare the triplet ACP with only one domain isotopically labeled and therefore assign the full-length protein. In this way our method paved the way to study the structural effects of the triplet ACP repetition. Unexpectedly, we could show that, despite the fact that the triplet repeat of CurA ACPI,II,III has a synergistic effect in the biosynthesis of curacin A, the domains are structurally independent.
In the second part of this work, we studied the structure of the isolated ACPI domain. Our results show that CurA ACPI undergoes no major conformational changes upon activation via phosphopantetheinylation, which contradicts the conformational switching model proposed for PCPs. Furthermore, we report the NMR solution structures of holo-ACPI and 3-hydroxy-3-methylglutaryl (HMG)-ACPI. Data obtained from filtered nuclear Overhauser effect (NOE) experiments indicate that the substrate HMG is not sequestered but presented on the ACP surface.
In the third part of this work we focused on the protein-protein interactions of the isolated ACPI with its cognate interaction partners. We were especially interested in the interaction with the halogenase (Cur Hal), the first enzyme within the curacin A sub-cluster, which acts on the initial hydroxy-methyl-glutaryl (HMG) group attached to ACPI. Initially we studied the interaction using NMR titration and fluorescence anisotropy measurements. Surprisingly, no complex between ACPI and Cur Hal could be detected. The combination of an activity assay using matrix-assisted laser desorption/ionization (MALDI) mass spectrometry and mutational analysis revealed several amino acids of ACPI that strongly decrease the activity of CurA Hal. Mapping these mutations according to their effect on Cur Hal activity onto the structure of HMG-ACPI shows that these amino acids surround the substrate and form a contiguous surface. These results suggest that this surface is important for Cur Hal recognition and selectivity. The research presented herein is an excellent example of protein-protein interactions in PKS systems underlying a specific recognition process.
Proceedings of 4th International Workshop "Critical Point and Onset of Deconfinement", July 9-13, 2007, Darmstadt, Germany: The multiplicity fluctuations of hadrons are studied within the statistical hadron-resonance gas model in the large volume limit. The roles of quantum statistics and resonance decay effects are discussed. The microscopic correlator method is used to enforce conservation of three charges - baryon number, electric charge, and strangeness - in the canonical ensemble. In addition, in the micro-canonical ensemble energy conservation is included. An analytical method is used to account for resonance decays. The multiplicity distributions and the scaled variances for negatively and positively charged hadrons are calculated for sets of thermodynamic parameters along the chemical freeze-out line of central Pb+Pb (Au+Au) collisions from SIS to LHC energies. Predictions obtained within the different statistical ensembles are compared with preliminary NA49 experimental results on central Pb+Pb collisions in the SPS energy range. The measured fluctuations are significantly narrower than Poisson fluctuations and clearly favor the expectations for the micro-canonical ensemble. This is thus a first observation of the recently predicted suppression of multiplicity fluctuations in relativistic gases in the thermodynamic limit due to conservation laws.
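The suppression below the Poisson expectation can be quantified by the scaled variance ω = Var(N)/⟨N⟩, which equals 1 for a Poisson distribution and drops below 1 when a conservation law constrains the multiplicity. A minimal Monte Carlo sketch of this effect (illustrative only, with made-up numbers; this is not the hadron-resonance gas calculation itself):

```python
import random

def scaled_variance(samples):
    """Scaled variance omega = Var(N) / <N> of a multiplicity sample."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return var / mean

random.seed(42)

# Poisson-like multiplicities: many independent, rare production events
# (500 tries, p = 0.04, mean ~ 20) -> omega close to 1 (binomial: 0.96).
poisson_like = [sum(random.random() < 0.04 for _ in range(500))
                for _ in range(5000)]

# "Conserved-charge" multiplicities: a fixed total of 40 particles is
# partitioned, each landing in the acceptance with p = 0.5
# -> omega = 1 - p = 0.5, i.e. suppressed below the Poisson value.
conserved = [sum(random.random() < 0.5 for _ in range(40))
             for _ in range(5000)]

print(round(scaled_variance(poisson_like), 2))  # close to 1
print(round(scaled_variance(conserved), 2))     # close to 0.5
```

The fixed-total example mirrors, in the simplest possible way, how exact charge or energy conservation in the (micro-)canonical ensembles narrows the multiplicity distribution.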
Bioapatite in mammalian teeth is readily preserved in continental sediments and represents a very important archive for reconstructions of environmental and climate evolution. This project provides a comprehensive database of major, minor and trace element and isotope tracers for tooth apatite, obtained using a variety of microanalytical techniques. The aim is to identify specific sedimentary environments and to improve our understanding of the interaction between internal metabolic processes during tooth formation, external nutritional control and secondary alteration effects. Here, we use the electron microprobe to determine the major and minor element contents of fossil and modern molar enamel, cement and dentin from Hippopotamids. Most of the studied specimens are from different ecosystems in Eastern Africa, representing modern and fossil lacustrine (Lake Kikorongo, Lake Albert, and Lake Malawi) and modern fluvial environments of the Nile River system. Secondary alteration effects, in particular FeO, MnO, SO3 and F concentrations, are 2 to 10 times higher in fossil than in modern enamel; the secondary enrichment of these components in fossil dentin and cement is even higher. In modern and fossil enamel, along sections perpendicular to the enamel-dentin junction (EDJ) or along cervix-apex profiles, P2O5 and CaO contents and the CaO/P2O5 ratios are very constant (StdDev ∼1%). Linear regression analysis reveals tight control of the MgO (R2∼0.6), Na2O and Cl variation (both R2>0.84) along EDJ-outer enamel rim profiles, despite large concentration variations (40% to 300%) across the enamel. These minor elements show well-defined distribution patterns in enamel, similar in all specimens regardless of age and origin: the concentrations of MgO and Na2O decrease from the EDJ towards the outer rim, whereas Cl displays the opposite trend.
Fossil enamel from Hippopotamids that lived in the saline Lake Kikorongo has a much higher MgO/Na2O ratio (∼1.11) than enamel from the Neogene fossils of Lake Albert (MgO/Na2O ∼0.4), which was a large freshwater lake like those in the western branch of the East African Rift System today. Similarly, the MgO/Na2O ratio in modern enamel from the White Nile River (∼0.36), which has a Precambrian catchment of dominantly granites and gneisses and passes through several saline zones, is higher than that from the Blue Nile River (MgO/Na2O ∼0.22), whose catchment is the Neogene volcanic Ethiopian Highland. Thus, the MgO/Na2O ratio in particular might be a sensitive fingerprint for environments where river and lake water have undergone strong evaporation. Enamel formation in mammals takes place at successive mineralization fronts within a confined chamber where ion and molecule transport is controlled by the surrounding enamel organ. During the secretion and maturation phases the epithelium generates fluids of different composition, which, in principle, should determine the final composition of enamel apatite. This is supported by co-linear relationships between MgO, Cl and Na2O, which can be interpreted as binary mixing lines. However, if maturation starts only after secretion is completed, the observed element distribution can only be explained by equilibration of existing apatite and addition of new apatite during maturation. It appears that the initial enamel crystallites precipitating during secretion and the bioapatite crystals newly formed during maturation equilibrate with a continuously evolving fluid. During crystallization of bioapatite the enamel fluid becomes continuously depleted in MgO and Na2O but enriched in Cl, which results in the formation of MgO- and Na2O-rich but Cl-poor bioapatite near the EDJ and MgO- and Na2O-poor but Cl-rich bioapatite at the outer enamel rim.
The linkage between lake and river water compositions, bioavailability of elements for plants, animal nutrition and tooth formation is complex and multifaceted. The quality and limits of the MgO/Na2O ratio and other proxies have to be established by systematic investigations relating chemical distribution patterns to the sedimentary environment and to the growth structures that develop as secretion and maturation proceed during tooth formation.
Globally, tropical forest soils represent the second largest source of N2O and NO. However, there is still considerable uncertainty about the spatial variability of N trace gas emissions and the soil properties controlling them. To investigate how soil properties affect N2O and NO emission, we carried out an incubation experiment with soils from 31 locations in the Nyungwe tropical mountain forest in southwestern Rwanda. All soils were incubated at three different moisture levels (50, 70 and 90% water-filled pore space (WFPS)) at 17 °C. Nitrous oxide emission varied between 4.5 and 400 μg N m−2 h−1, while NO emission varied from 6.6 to 265 μg N m−2 h−1. Mean N2O emission at the different moisture levels was 46.5 ± 11.1 (50% WFPS), 71.7 ± 11.5 (70% WFPS) and 98.8 ± 16.4 (90% WFPS) μg N m−2 h−1, while mean NO emission was 69.3 ± 9.3 (50% WFPS), 47.1 ± 5.8 (70% WFPS) and 36.1 ± 4.2 (90% WFPS) μg N m−2 h−1. This suggests that climate (i.e. dry vs. wet season) controls N2O and NO emissions. Positive correlations with soil carbon and nitrogen indicate a biological control over N2O and NO production. Interestingly, however, N2O emissions also showed a negative correlation with soil pH, and both gases correlated positively with free iron. The latter suggests that chemo-denitrification might, at least for N2O, be an important production pathway. In conclusion, improved understanding and process-based modeling of N trace gas emission from tropical forests will benefit not only from better spatially explicit monitoring of trace gas emissions and basic soil properties, but also from differentiating between biological and chemical pathways of N trace gas formation.
Background: Due to constantly rising air pollution levels and an increasing awareness of the hazardousness of air pollutants, new laws and rules have recently been passed. Although there has been a large amount of research on this topic, bibliometric data have yet to be collected. This study therefore provides a scientometric analysis of the material published on the subject so far.
Methods: For this purpose, data retrieved from the "Web of Science" database provided by the Thomson Scientific Institute were analyzed and visualized with both density-equalizing methods and classic data-processing methods such as tables and charts.
Results: For the time span between 1955 and 2006, 26,253 items related to the topic of air pollution were listed, published by 124 countries in 24 different languages. General citation activity has been increasing constantly since the beginning of the examined period. However, beginning with the year 1991, citation levels have been rising exponentially, reaching 39,220 citations in the year 2006. The United States, the UK and Germany were the three most productive countries in the area, with English and German ranked first and second among publishing languages, followed by French. An article published by Dockery, Pope, Xu et al. was the most cited both in total numbers and in average citation rate. J. Schwartz had the highest total number of citations on his publications, while D.W. Dockery had the highest citation rate per publication. With regard to subject areas, most items were published in Environmental Sciences, followed by Meteorology & Atmospheric Sciences and Public, Environmental & Occupational Health. Nine of the ten journals with more than 300 entries dealt with environmental topics and one with epidemiology.
Conclusions: Using the method of density-equalizing mapping and further common data-processing procedures, it can be concluded that scientific work on air pollution and related topics attracts continuously growing scientific interest. This can be observed both in publication numbers and in citation activity.
Background: Leishmaniasis is a chronic disease that is found in various countries of the world. The aim of the current study was to investigate the impact of leishmaniasis on the world's research output. The present study assessed benchmarking of research output for the period between 1957 and 2006. Using large database analyses, research in the field of leishmaniasis was evaluated. Furthermore, cooperation between different countries was identified.
Results: The number of publications increased with time. Most publications came from Western countries such as the US, UK or Germany. Interestingly, countries like Brazil and India had a high research output. We found a substantial amount of cooperation between countries.
Conclusion: Although leishmaniasis has a limited geographic distribution, it attracts wide research interest. The central hub of research cooperation is the USA.
Environmental tobacco smoke (ETS) is a major contributor to indoor air pollution. It has been well documented for decades that ETS can be harmful to human health and cause premature death and disease. Compared to the extensive research on the toxicological constituents of ETS, less attention has been paid to the concentration of ETS-dependent indoor particulate matter (PM). In particular, there is a lack of studies focusing on different tobacco products and their concentrations of the PM fractions that deposit deep in the airways (PM10, PM2.5 and PM1). The Tobacco Smoke Particles and Indoor Air Quality study (ToPIQ) addresses this issue by device-supported generation of indoor ETS and simultaneous measurement of PM concentrations by laser aerosol spectrometry. Primarily, the ToPIQ study will conduct field research focusing on the PM concentrations of different tobacco products in various microenvironments. It is planned to extend the analysis to basic research on factors influencing ETS-dependent PM concentrations.
Calibration of TCCON column-averaged CO₂: the first aircraft campaign over European TCCON sites
(2011)
The Total Carbon Column Observing Network (TCCON) is a ground-based network of Fourier Transform Spectrometer (FTS) sites around the globe, where the column abundances of CO2, CH4, N2O, CO and O2 are measured. CO2 is constrained with a precision better than 0.25% (1-σ). To achieve a similarly high accuracy, calibration to World Meteorological Organization (WMO) standards is required. This paper introduces the first aircraft calibration campaign of five European TCCON sites and a mobile FTS instrument. A series of in-situ profiles tied to WMO standards were obtained over European TCCON sites via aircraft and compared with retrievals of CO2 column amounts from the TCCON instruments. The results of the campaign show that the FTS measurements are consistently biased 1.1% ± 0.2% low with respect to WMO standards, in agreement with previous TCCON calibration campaigns. The standard a priori profile for the TCCON FTS retrievals is shown to not add a bias: the same calibration factor is obtained using aircraft profiles as a priori as with the TCCON standard a priori. With a calibration to WMO standards, the highly precise TCCON CO2 measurements of total column concentrations provide a suitable database for the calibration and validation of nadir-viewing satellites.
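The reported bias can be expressed as a calibration factor, i.e. the mean ratio of FTS-retrieved to aircraft-derived (WMO-traceable) column-averaged CO2 at coincident overpasses. A minimal sketch of that computation, with invented XCO2 values (the function name and all numbers are illustrative, not campaign data):

```python
def calibration_factor(fts_columns, insitu_columns):
    """Mean ratio of FTS-retrieved to in-situ (WMO-traceable) CO2
    column-averaged mole fractions, with its standard deviation."""
    ratios = [f / a for f, a in zip(fts_columns, insitu_columns)]
    n = len(ratios)
    mean = sum(ratios) / n
    sd = (sum((r - mean) ** 2 for r in ratios) / (n - 1)) ** 0.5
    return mean, sd

# Hypothetical coincident XCO2 values in ppm (FTS vs aircraft profile),
# constructed so the FTS reads roughly 1% low:
fts = [384.2, 385.1, 383.8, 386.0, 384.9]
aircraft = [388.5, 389.2, 388.1, 390.3, 389.0]

factor, sd = calibration_factor(fts, aircraft)
print(round(factor, 3))  # -> 0.989, i.e. FTS ~1% low
```

Dividing each subsequent FTS retrieval by such a factor ties the network to the WMO trace gas scale.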
Background: Current prognostic gene signatures for breast cancer mainly reflect proliferation status and have limited value in triple-negative breast cancers (TNBC). The identification of prognostic signatures from TNBC cohorts has been limited in the past by small sample sizes.
Methodology/Principal Findings: We assembled all currently publicly available TNBC gene expression datasets generated on Affymetrix gene chips. Inter-laboratory variation was minimized by filtering methods for both samples and genes. Supervised analysis was performed to identify prognostic signatures from 394 cases, which were subsequently tested on an independent validation cohort (n = 261 cases).
Conclusions/Significance: Using two distinct false discovery rate thresholds, 25% and <3.5%, a larger (n = 264 probesets) and a smaller (n = 26 probesets) prognostic gene set were identified and used as prognostic predictors. Most of these genes were positively associated with poor prognosis and correlated with metagenes for inflammation and angiogenesis. No correlation with other previously published prognostic signatures (recurrence score, genomic grade index, 70-gene signature, wound response signature, 7-gene immune response module, stroma-derived prognostic predictor, and a medullary-like signature) was observed. In multivariate analyses in the validation cohort the two signatures showed hazard ratios of 4.03 (95% confidence interval [CI] 1.71–9.48; P = 0.001) and 4.08 (95% CI 1.79–9.28; P = 0.001), respectively. The 10-year event-free survival was 70% for the good-risk and 20% for the high-risk group. The 26-gene signature had only modest value (AUC = 0.588) for predicting response to neoadjuvant chemotherapy; however, combining a B-cell metagene with the prognostic signatures increased its response-predictive value. We identified a 264-gene prognostic signature for TNBC which is unrelated to previously known prognostic signatures.
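Selecting genes at a fixed false discovery rate is commonly done with the Benjamini-Hochberg step-up procedure; the abstract does not state which FDR method was used, so the following is only an illustrative sketch with toy p-values, showing how a looser threshold admits a larger gene set (mirroring the 264- vs 26-probeset sets):

```python
def benjamini_hochberg(pvalues, fdr):
    """Return indices of hypotheses rejected at the given FDR level
    using the Benjamini-Hochberg step-up procedure."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank whose p-value falls under the BH line
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * fdr:
            k = rank
    # Step-up: reject everything with rank <= k.
    return sorted(order[:k])

# Toy p-values for 10 hypothetical probesets:
pvals = [0.001, 0.002, 0.01, 0.02, 0.03, 0.04, 0.20, 0.40, 0.60, 0.90]
print(len(benjamini_hochberg(pvals, 0.25)))   # -> 6 probesets kept
print(len(benjamini_hochberg(pvals, 0.035)))  # -> 3 probesets kept
```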
Fibroblast growth factor receptor substrate 2 (FRS2α) is a signaling adaptor protein that regulates downstream signaling of many receptor tyrosine kinases. During signal transduction, FRS2 can be both tyrosine and threonine phosphorylated and forms signaling complexes with other adaptor proteins and tyrosine phosphatases. Here we have identified flotillin-1 and the cbl-associated protein/ponsin (CAP) as novel interaction partners of FRS2. Flotillin-1 binds to the phosphotyrosine-binding (PTB) domain of FRS2 and competes for this binding with the fibroblast growth factor receptor. Flotillin-1 knockdown results in increased Tyr phosphorylation of FRS2, in line with the inhibition of ERK activity in the absence of flotillin-1. CAP directly interacts with FRS2 by means of its sorbin homology (SoHo) domain, which has previously been shown to interact with flotillin-1. In addition, the third SH3 domain in CAP binds to FRS2. Due to the overlapping binding domains, CAP and flotillin-1 appear to compete for binding to FRS2. Thus, our results reveal a novel signaling network containing FRS2, CAP and flotillin-1, whose successive interactions are most likely required to regulate receptor tyrosine kinase signaling, especially the mitogen-activated protein kinase pathway.
The development of insecticides requires valid risk assessment procedures to avoid causing harm to beneficial insects, especially to pollinators such as the honeybee Apis mellifera. In addition to testing according to current guidelines designed to detect bee mortality, tests are needed to determine possible sublethal effects interfering with the animal's vitality and behavioral performance. Several methods have been used to detect sublethal effects of different insecticides under laboratory conditions using olfactory conditioning. Furthermore, studies have been conducted on the influence of insecticides on foraging activity and homing ability, which requires time-consuming visual observation. We tested an experimental design using the radiofrequency identification (RFID) method to monitor, on an automated basis, the influence of sublethal doses of insecticides on individual honeybee foragers. With electronic readers positioned at the hive entrance and at an artificial food source, we obtained quantifiable data on honeybee foraging behavior. This enabled us to efficiently retrieve detailed information on flight parameters. We compared several groups of bees, fed simultaneously with different dosages of a tested substance. With this experimental approach we monitored the acute effects of sublethal doses of the neonicotinoids imidacloprid (0.15–6 ng/bee) and clothianidin (0.05–2 ng/bee) under field-like conditions. At field-relevant doses for nectar and pollen no adverse effects were observed for either substance. Both substances led to a significant reduction of foraging activity and to longer foraging flights at doses of ≥0.5 ng/bee (clothianidin) and ≥1.5 ng/bee (imidacloprid) during the first three hours after treatment. This study demonstrates that the RFID method is an effective way to record short-term alterations in foraging activity after insecticides have been administered once, orally, to individual bees.
This work thus contributes further to the understanding of how honeybees are affected by sublethal doses of insecticides.
This thesis deals with continuous-time portfolio optimization as well as with topics from the field of credit risk. The goal of portfolio optimization is to find the best possible consumption and investment strategies for a given initial capital. This work primarily investigates the influence of income on these decisions. Since, on the one hand, the future income stream is governed by chance and, on the other hand, no financial products exist that can replicate it, incorporating income into portfolio optimization poses a major problem: the assumptions of a complete market no longer hold, so the standard solution methods cannot be applied. This thesis analyzes several variants of this problem and discusses various methods for solving them. Furthermore, this study examines the influence of a firm's credit risk on its stock returns, referring in particular to an anomaly that has already been discussed extensively in the literature: firms with high default probabilities earn lower returns than firms with smaller default probabilities. A further question in the area of credit risk is the extent to which models are able to price and hedge structured products. This thesis attempts to provide answers to these questions.
The objective of this work is twofold. First, we explore the performance of density functional theory (DFT) when it is applied to solids with strong electronic correlations, such as transition metal compounds. Along this direction, particular effort is put into the refinement and development of parameterization techniques for deriving effective models on the basis of DFT calculations. Second, within the framework of DFT, we address a number of questions related to the physics of Mott insulators, such as magnetic frustration and electron-phonon coupling (Cs2CuCl4 and Cs2CuBr4), high-temperature superconductivity (BSCCO) and doping of Mott insulators (TiOCl). In the frustrated antiferromagnets Cs2CuCl4 and Cs2CuBr4, we investigate the interplay between strong electronic correlations and magnetism on the one hand and electron-lattice coupling on the other, as well as the effect of this interplay on the microscopic model parameters. Another object of our investigations is the oxygen-doped cuprate superconductor BSCCO, where nano-scale electronic inhomogeneities have been observed in scanning tunneling spectroscopy experiments. By means of DFT and many-body calculations, we analyze the connection between the structural and electronic inhomogeneities and the superconducting properties of BSCCO. We use DFT and molecular dynamics simulations to explain the microscopic origin of the Mott insulating state that persists under doping in the layered compound TiOCl.
Regulation of dissimilatory sulfur oxidation in the purple sulfur bacterium Allochromatium vinosum
(2011)
In the purple sulfur bacterium Allochromatium vinosum, thiosulfate oxidation is strictly dependent on the presence of three periplasmic Sox proteins encoded by the soxBXAK and soxYZ genes. It is also well documented that proteins encoded in the dissimilatory sulfite reductase (dsr) operon, dsrABEFHCMKLJOPNRS, are essential for the oxidation of sulfur that is stored intracellularly as an obligatory intermediate during the oxidation of thiosulfate and sulfide. Until recently, detailed knowledge about the regulation of the sox genes was not available. We started to fill this gap and show that these genes are expressed on a low constitutive level in A. vinosum in the absence of reduced sulfur compounds. Thiosulfate and possibly sulfide lead to an induction of sox gene transcription. Additional translational regulation was not apparent. Regulation of soxXAK is probably performed by a two-component system consisting of a multi-sensor histidine kinase and a regulator with proposed di-guanylate cyclase activity. Previous work already provided some information about regulation of the dsr genes encoding the second important sulfur-oxidizing enzyme system in the purple sulfur bacterium. The expression of most dsr genes was found to be at a low basal level in the absence of reduced sulfur compounds and enhanced in the presence of sulfide. In the present work, we focused on the role of DsrS, a protein encoded by the last gene of the dsr locus in A. vinosum. Transcriptional and translational gene fusion experiments suggest a participation of DsrS in the post-transcriptional control of the dsr operon. Characterization of an A. vinosum ΔdsrS mutant showed that the monomeric cytoplasmic 41.1-kDa protein DsrS is important though not essential for the oxidation of sulfur stored in the intracellular sulfur globules.
Forest fragmentation and selective logging are two main drivers of global environmental change and modify biodiversity and environmental conditions in many tropical forests. The consequences of these changes for the functioning of tropical forest ecosystems have rarely been explored in a comprehensive approach. In a Kenyan rainforest, we studied six animal-mediated ecosystem processes and recorded species richness and community composition of all animal taxa involved in these processes. We used linear models and a formal meta-analysis to test whether forest fragmentation and selective logging affected ecosystem processes and biodiversity and used structural equation models to disentangle direct from biodiversity-related indirect effects of human disturbance on multiple ecosystem processes. Fragmentation increased decomposition and reduced antbird predation, while selective logging consistently increased pollination, seed dispersal and army-ant raiding. Fragmentation modified species richness or community composition of five taxa, whereas selective logging did not affect any component of biodiversity. Changes in the abundance of functionally important species were related to lower predation by antbirds and higher decomposition rates in small forest fragments. The positive effects of selective logging on bee pollination, bird seed dispersal and army-ant raiding were direct, i.e. not related to changes in biodiversity, and were probably due to behavioural changes of these highly mobile animal taxa. We conclude that animal-mediated ecosystem processes respond in distinct ways to different types of human disturbance in Kakamega Forest. Our findings suggest that forest fragmentation affects ecosystem processes indirectly by changes in biodiversity, whereas selective logging influences processes directly by modifying local environmental conditions and resource distributions. 
The positive to neutral effects of selective logging on ecosystem processes show that the functionality of tropical forests can be maintained in moderately disturbed forest fragments. Conservation concepts for tropical forests should thus include not only remaining pristine forests but also functionally viable forest remnants.
Members of the genus Xenorhabdus are entomopathogenic bacteria that associate with nematodes. The nematode-bacteria pair infects and kills insects, with both partners contributing to insect pathogenesis and the bacteria providing nutrition to the nematode from available insect-derived nutrients. The nematode provides the bacteria with protection from predators, access to nutrients, and a mechanism of dispersal. Members of the bacterial genus Photorhabdus also associate with nematodes to kill insects, and both genera of bacteria provide similar services to their different nematode hosts through unique physiological and metabolic mechanisms. We posited that these differences would be reflected in their respective genomes. To test this, we sequenced to completion the genomes of Xenorhabdus nematophila ATCC 19061 and Xenorhabdus bovienii SS-2004. As expected, both Xenorhabdus genomes encode many insecticidal compounds, commensurate with their entomopathogenic lifestyle. Despite the similarities in lifestyle between Xenorhabdus and Photorhabdus bacteria, a comparative analysis of the Xenorhabdus, Photorhabdus luminescens, and P. asymbiotica genomes suggests genomic divergence. These findings indicate that evolutionary changes shaped by symbiotic interactions can follow different routes to achieve similar end points.
Chickpea (Cicer arietinum L.) is the third most important cool season food legume, cultivated in arid and semi-arid regions of the world. The goal of this study was to develop novel molecular markers such as microsatellite or simple sequence repeat (SSR) markers from bacterial artificial chromosome (BAC)-end sequences (BESs) and diversity arrays technology (DArT) markers, and to construct a high-density genetic map based on recombinant inbred line (RIL) population ICC 4958 (C. arietinum)×PI 489777 (C. reticulatum). A BAC-library comprising 55,680 clones was constructed and 46,270 BESs were generated. Mining of these BESs provided 6,845 SSRs, and primer pairs were designed for 1,344 SSRs. In parallel, DArT arrays with ca. 15,000 clones were developed, and 5,397 clones were found polymorphic among 94 genotypes tested. Screening of newly developed BES-SSR markers and DArT arrays on the parental genotypes of the RIL mapping population showed polymorphism with 253 BES-SSR markers and 675 DArT markers. Segregation data obtained for these polymorphic markers and 494 markers data compiled from published reports or collaborators were used for constructing the genetic map. As a result, a comprehensive genetic map comprising 1,291 markers on eight linkage groups (LGs) spanning a total of 845.56 cM distance was developed (http://cmap.icrisat.ac.in/cmap/sm/cp/thudi/). The number of markers per linkage group ranged from 68 (LG 8) to 218 (LG 3) with an average inter-marker distance of 0.65 cM. While the developed resource of molecular markers will be useful for genetic diversity, genetic mapping and molecular breeding applications, the comprehensive genetic map with integrated BES-SSR markers will facilitate its anchoring to the physical map (under construction) to accelerate map-based cloning of genes in chickpea and comparative genome evolution studies in legumes.
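The reported average inter-marker distance follows directly from the map length and marker count given in the abstract; a one-function sketch of that arithmetic (the function name is ours, for illustration):

```python
def avg_marker_spacing(map_length_cm, n_markers):
    """Rough average inter-marker distance: total map length divided
    by the marker count. (A stricter estimate would divide by the
    number of marker intervals, i.e. markers minus one interval
    endpoint per linkage group.)"""
    return map_length_cm / n_markers

# Figures from the abstract: 1,291 markers spanning 845.56 cM.
print(round(avg_marker_spacing(845.56, 1291), 2))  # -> 0.65 cM
```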
In an ongoing clinical phase I/II study, 16 pediatric patients suffering from high-risk leukemia/tumors received highly purified donor natural killer (NK) cell immunotherapy (NK-DLI) at day (+3) +40 and +100 post haploidentical stem cell transplantation. However, literature about the influence of NK-DLI on the recipient's immune system is scarce. Here we present concomitant results of a noninvasive in vivo monitoring approach of the recipients' peripheral blood (PB) cells after transfer of either unstimulated (NK-DLI(unstim)) or IL-2-activated (1000 U/ml, 9–14 days) NK cells (NK-DLI(IL-2 stim)), along with their ex vivo secreted cytokines/chemokines. We performed phenotypical and functional characterizations of the NK-DLIs, detailed flow cytometric analyses of various PB cells and comprehensive cytokine/chemokine arrays before and after NK-DLI. Patients of both groups were comparable with regard to remission status, immune reconstitution, donor chimerism, KIR mismatching, stem cell and NK-DLI dose. Only after NK-DLI(IL-2 stim) was a rapid, almost complete loss of CD56(bright)CD16(dim/−) immune-regulatory and CD56(dim)CD16(+) cytotoxic NK cells, monocytes, dendritic cells and eosinophils from the PB circulation seen 10 min after infusion, while neutrophils significantly increased. The reduction of NK cells was due both to a decrease in the patients' own CD69(−)NCR(low)CD62L(+) NK cells and to a diminishing of the transferred cells from the NK-DLI(IL-2 stim) with the CD56(bright)CD16(+/−)CD69(+)NCR(high)CD62L(−) phenotype. All cell counts recovered within the next 24 h. Transfer of NK-DLI(IL-2 stim) translated into significantly increased levels of various cytokines/chemokines (i.e. IFN-γ, IL-6, MIP-1β) in the patients' PB. These remained stable for at least 1 h, presumably leading to endothelial activation, leukocyte adhesion and/or extravasation. In contrast, NK-DLI(unstim) did not cause any of the observed effects.
In conclusion, we assume that the adoptive transfer of NK-DLI(IL-2 stim) under the influence of ex vivo and in vivo secreted cytokines/chemokines may promote NK cell trafficking and therefore might enhance efficacy of immunotherapy.
Temporal information processing in short- and long-term memory of patients with schizophrenia
(2011)
Cognitive deficits of patients with schizophrenia have been largely recognized as core symptoms of the disorder. One neglected factor that contributes to these deficits is the comprehension of time. In the present study, we assessed temporal information processing and manipulation from short- and long-term memory in 34 patients with chronic schizophrenia and 34 matched healthy controls. On the short-term memory temporal-order reconstruction task, an incidental or intentional learning strategy was deployed. Patients showed worse overall performance than healthy controls. The intentional learning strategy led to dissociable performance improvement in both groups. Whereas healthy controls improved on a performance measure (serial organization), patients improved on an error measure (inappropriate semantic clustering) when using the intentional instead of the incidental learning strategy. On the long-term memory script-generation task, routine and non-routine events of everyday activities (e.g., buying groceries) had to be generated in either chronological or inverted temporal order. Patients were slower than controls at generating events in the chronological routine condition only. They also committed more sequencing and boundary errors in the inverted conditions. The number of irrelevant events was higher in patients in the chronological, non-routine condition. These results suggest that patients with schizophrenia imprecisely access temporal information from short- and long-term memory. In short-term memory, processing of temporal information led to a reduction in errors rather than, as was the case in healthy controls, to an improvement in temporal-order recall. When accessing temporal information from long-term memory, patients were slower and committed more sequencing, boundary, and intrusion errors. 
Together, these results suggest that patients with schizophrenia can access and process temporal information only imprecisely, providing evidence for impaired time comprehension. This could contribute to symptomatic cognitive deficits and strategic inefficiency in schizophrenia.
Background: Oral anticoagulant therapy (OAT) with warfarin is the standard of care for stroke prevention in patients with atrial fibrillation. Approximately 30% of patients with cardioembolic strokes are on OAT at the time of symptom onset. We investigated whether warfarin exacerbates the risk of thrombolysis-associated hemorrhagic transformation (HT) in a mouse model of ischemic stroke.
Methods: 62 C57BL/6 mice were used for this study. To achieve effective anticoagulation, warfarin was administered orally. We performed right middle cerebral artery occlusion (MCAO) for 3 h and assessed functional deficit and HT blood volume after 24 h.
Results: In non-anticoagulated mice, treatment with rt-PA (10 mg/kg i.v.) after 3 h MCAO led to a 5-fold higher degree of HT compared to vehicle-treated controls (4.0±0.5 µl vs. 0.8±0.1, p<0.001). Mice on warfarin revealed larger amounts of HT after rt-PA treatment in comparison to non-anticoagulated mice (9.2±3.2 µl vs. 2.8±1.0, p<0.05). The rapid reversal of anticoagulation by means of prothrombin complex concentrates (PCC, 100 IU/kg) at the end of the 3 h MCAO period, but prior to rt-PA administration, neutralized the exacerbated risk of HT as compared to sham-treated controls (3.8±0.7 µl vs. 15.0±3.8, p<0.001).
Conclusion: In view of the vastly increased risk of HT, it seems to be justified to withhold tPA therapy in effectively anticoagulated patients with acute ischemic stroke. The rapid reversal of anticoagulation with PCC prior to tPA application reduces the risk attributed to warfarin pretreatment and may constitute an interesting therapeutic option.
Recent work has demonstrated that the formation of platelet-neutrophil complexes (PNCs) affects inflammatory tissue injury. Vasodilator-stimulated phosphoprotein (VASP) is crucially involved in the control of PNC formation and myocardial reperfusion injury. Given the clinical importance of hepatic IR injury, we investigated the role of VASP during hepatic ischemia followed by reperfusion. We report here that VASP−/− animals demonstrate reduced hepatic IR injury compared to wildtype (WT) controls. This correlated with serum levels of lactate dehydrogenase (LDH), aspartate (AST) and alanine (ALT) aminotransferase, and with the presence of PNCs within ischemic hepatic tissue, and could be confirmed by repression of VASP through siRNA. In studies employing bone marrow chimeric mice we identified hematopoietic VASP to be of crucial importance for the extent of hepatic injury. Phosphorylation of VASP on Ser153 through prostaglandin E1 or on Ser235 through atrial natriuretic peptide resulted in a significant reduction of hepatic IR injury. This was associated with a reduced presence of PNCs in ischemic hepatic tissue. Taken together, these studies identify VASP and VASP phosphorylation as crucial targets for future hepatoprotective strategies.
Accumulating evidence indicates that increased generation of reactive oxygen species (ROS) contributes to the development of exaggerated pain hypersensitivity during persistent pain. In the present study, we investigated the antinociceptive efficacy of the antioxidants vitamin C and vitamin E in mouse models of inflammatory and neuropathic pain. We show that systemic administration of a combination of vitamins C and E inhibited the early behavioral responses to formalin injection and the neuropathic pain behavior after peripheral nerve injury, but not the inflammatory pain behavior induced by Complete Freund's Adjuvant. In contrast, vitamin C or vitamin E given alone failed to affect the nociceptive behavior in any of the tested models. The attenuated neuropathic pain behavior induced by the vitamin C and E combination was paralleled by reduced p38 phosphorylation in the spinal cord and in dorsal root ganglia, and was also observed after intrathecal injection of the vitamins. Moreover, the vitamin C and E combination ameliorated the allodynia induced by an intrathecally delivered ROS donor. Our results suggest that administration of vitamins C and E in combination may exert synergistic antinociceptive effects, and further indicate that ROS contribute essentially to nociceptive processing in particular pain states.
Parasites of the nematode genus Anisakis are associated with aquatic organisms. They can be found in a variety of marine hosts including whales, crustaceans, fish and cephalopods and are known to cause the zoonotic disease anisakiasis, a painful inflammation of the gastro-intestinal tract resulting from the accidental consumption of infectious larvae in raw or semi-raw fishery products. Since the demand for fish as a dietary protein source and the export rates of seafood products in general are rapidly increasing worldwide, knowledge about the distribution of potential foodborne human pathogens in seafood is of major significance for human health. Studies have provided evidence that a few Anisakis species can cause clinical symptoms in humans. The aim of our study was to interpolate the species range for every described Anisakis species on the basis of the existing occurrence data. We used sequence data of 373 Anisakis larvae from 30 different hosts worldwide and previously published molecular data (n = 584) from 53 field-specific publications to model the species range of Anisakis spp., using an interpolation method that combines aspects of the alpha hull interpolation algorithm and the conditional interpolation approach. The results of our approach strongly indicate the existence of species-specific distribution patterns of Anisakis spp. within different climate zones and oceans that are in principle congruent with those of their respective final hosts. Our results support preceding studies that propose anisakid nematodes as useful biological indicators for their final host distribution and abundance, as they closely follow the trophic relationships among their successive hosts. The modeling might also be helpful for predicting the likelihood of infection in order to reduce the risk of anisakiasis cases in a given area.
Acanthocephalans are attractive candidates as model organisms for studying the ecology and co-evolutionary history of parasitic life cycles in the marine ecosystem. Adding to earlier molecular analyses of this taxon, a total of 36 acanthocephalans belonging to the classes Archiacanthocephala (3 species), Eoacanthocephala (3 species), Palaeacanthocephala (29 species), Polyacanthocephala (1 species) and Rotifera as outgroup (3 species) were analyzed by using Bayesian Inference and Maximum Likelihood analyses of nuclear 18S rDNA sequence. This data set included three re-collected and six newly collected taxa, Bolbosoma vasculosum from Lepturacanthus savala, Filisoma rizalinum from Scatophagus argus, Rhadinorhynchus pristis from Gempylus serpens, R. lintoni from Selar crumenophthalmus, Serrasentis sagittifer from Johnius coitor, and Southwellina hispida from Epinephelus coioides, representing 5 new host and 3 new locality records. The resulting trees suggest a paraphyletic arrangement of the Echinorhynchida and Polymorphida inside the Palaeacanthocephala. This questions the placement of the genera Serrasentis and Gorgorhynchoides within the Echinorhynchida and not the Polymorphida, necessitating further insights into the systematic position of these taxa based on morphology.
Recent phylogenomic studies have failed to conclusively resolve certain branches of the placental mammalian tree, despite the evolutionary analysis of genomic data from 32 species. Previous analyses of single genes and retroposon insertion data yielded support for different phylogenetic scenarios for the most basal divergences. The results indicated that some mammalian divergences were best interpreted not as a single bifurcating tree, but as an evolutionary network. In these studies the relationships among some orders of the super-clade Laurasiatheria were poorly supported, albeit not studied in detail. Therefore, 4775 protein-coding genes (6,196,263 nucleotides) were collected and aligned in order to analyze the evolution of this clade. Additionally, over 200,000 introns were screened in silico, resulting in 32 phylogenetically informative long interspersed nuclear elements (LINE) insertion events.
The present study shows that the genome evolution of Laurasiatheria may best be understood as an evolutionary network. Thus, contrary to the common expectation to resolve major evolutionary events as a bifurcating tree, genome analyses unveil complex speciation processes even in deep mammalian divergences. We exemplify this on a subset of 1159 suitable genes that have individual histories, most likely due to incomplete lineage sorting or introgression, processes that can make the genealogy of mammalian genomes complex.
These unexpected results have major implications for the understanding of evolution in general, because the evolution of even some higher level taxa such as mammalian orders may sometimes not be interpreted as a simple bifurcating pattern.
Background: Fishes show an amazing diversity in hearing abilities, inner ear structures, and otolith morphology. Inner ear morphology, however, has not yet been investigated in detail in any member of the diverse order Cyprinodontiformes. We, therefore, studied the inner ear of the cyprinodontiform freshwater fish Poecilia mexicana by analyzing the position of otoliths in situ, investigating the 3D structure of sensory epithelia, and examining the orientation patterns of ciliary bundles of the sensory hair cells, while combining μ-CT analyses, scanning electron microscopy, and immunocytochemical methods. P. mexicana occurs in different ecotypes, enabling us to study the intra-specific variability (on a qualitative basis) of fish from regular surface streams, and the Cueva del Azufre, a sulfidic cave in southern Mexico.
Results: The inner ear of Poecilia mexicana displays a combination of several remarkable features. The utricle is connected rostrally instead of dorso-rostrally to the saccule, and the macula sacculi, therefore, is very close to the utricle. Moreover, the macula sacculi possesses dorsal and ventral bulges. The two studied ecotypes of P. mexicana showed variation mainly in the shape and curvature of the macula lagenae, in the curvature of the macula sacculi, and in the thickness of the otolithic membrane.
Conclusions: Our study for the first time provides detailed insights into the auditory periphery of a cyprinodontiform inner ear and thus serves as a basis—especially with regard to the application of 3D techniques—for further research on structure-function relationships of inner ears within the species-rich order Cyprinodontiformes. We suggest that other poeciliid taxa, or even other non-poeciliid cyprinodontiforms, may display similar inner ear morphologies to those described here.
The duration of use is usually significantly longer for marine vessels than for road vehicles. Therefore, these vessels are often powered by relatively old engines, which may contribute to air pollution. Moreover, the quality of fuel used in marine vessels is usually not comparable to that of fuels used in the automotive sector, and port areas may therefore exhibit a high degree of air pollution. In contrast to the multitude of studies that have addressed outdoor air pollution due to road traffic, little is known about ship-related air pollution. The present article therefore aims to summarize recent studies that address air pollution, i.e. particulate matter exposure, due to marine vessels. The data in this area of research are still largely limited; in particular, knowledge of air pollution levels in different sea areas is needed.
Background: Drowning is a constant global problem which claims approximately half a million victims worldwide each year, while the number of near-drowning victims is considerably higher. Public health strategies to reduce the burden of death are still limited. Although research activity on the subject of drowning is growing constantly, there is at present no scientometric evaluation of the existing literature.
Methods: The current study uses classical bibliometric tools and visualizing techniques such as density equalizing mapping to analyse and evaluate the scientific research in the field of drowning. The results are also interpreted in the context of WHO data collections.
Results: All studies related to drowning and listed in the ISI-Web of Science database since 1900 were identified using the search term "drowning". Implementing bibliometric methods, a constant increase in quantitative markers such as number of publications per state, publication language or collaborations as well as qualitative markers such as citations were observed for research in the field of drowning. The combination with density equalizing mapping exposed different global patterns for research productivity and the total number of drowning deaths and drowning rates respectively. Chart techniques were used to illustrate bi- and multilateral research cooperation.
Conclusions: The present study provides the first scientometric approach that visualizes research activity on the subject of drowning. It can be assumed that the scientific study of this topic will reach even greater dimensions because of its continuing relevance.
Background: Closely related lineages of livebearing fishes have independently adapted to two extreme environmental factors: toxic hydrogen sulphide (H2S) and perpetual darkness. Previous work has demonstrated in adult specimens that fish from these extreme habitats convergently evolved drastically increased head and offspring size, while cave fish are further characterized by reduced pigmentation and eye size. Here, we traced the development of these (and other) divergent traits in embryos of Poecilia mexicana from benign surface habitats (“surface mollies”) and a sulphidic cave (“cave mollies”), as well as in embryos of the sister taxon, Poecilia sulphuraria from a sulphidic surface spring (“sulphur mollies”). We asked at which points during development changes in the timing of the involved processes (i.e., heterochrony) would be detectible.
Methods and Results: Data were extracted from digital photographs taken of representative embryos for each stage of development and each type of molly. Embryo mass decreased in convergent fashion, but we found patterns of embryonic fat content and ovum/embryo diameter to be divergent among all three types of mollies. The intensity of yellow colouration of the yolk (a proxy for carotenoid content) was significantly lower in cave mollies throughout development. Moreover, while relative head size decreased through development in surface mollies, it increased in both types of extremophile mollies, and eye growth was arrested in mid-stage embryos of cave mollies but not in surface or sulphur mollies.
Conclusion: Our results clearly demonstrate that even among sister taxa convergence in phenotypic traits is not always achieved by the same processes during embryo development. Furthermore, teleost development is crucially dependent on sufficient carotenoid stores in the yolk, and so we discuss how the apparent ability of cave mollies to overcome this carotenoid-dependency may represent another potential mechanism explaining the lack of gene flow between surface and cave mollies.
Background and Objective: The slow delayed rectifier current (IKs) is important for cardiac action potential termination. The underlying channel is composed of Kv7.1 α-subunits and KCNE1 β-subunits. While most evidence suggests a role of KCNE1 transmembrane domain and C-terminus for the interaction, the N-terminal KCNE1 polymorphism 38G is associated with reduced IKs and atrial fibrillation (a human arrhythmia). Structure-function relationship of the KCNE1 N-terminus for IKs modulation is poorly understood and was subject of this study.
Methods: We studied N-terminal KCNE1 constructs disrupting structurally important positively charged amino-acids (arginines) at positions 32, 33, 36 as well as KCNE1 constructs that modify position 38 including an N-terminal truncation mutation. Experimental procedures included molecular cloning, patch-clamp recording, protein biochemistry, real-time-PCR and confocal microscopy.
Results: All KCNE1 constructs physically interacted with Kv7.1. IKs resulting from co-expression of Kv7.1 with non-atrial fibrillation ‘38S’ was greater than with any other construct. Ionic currents resulting from co-transfection of a KCNE1 mutant with arginine substitutions (‘38G-3xA’) were comparable to currents evoked from cells transfected with an N-terminally truncated KCNE1-construct (‘Δ1-38’). Western-blots from plasma-membrane preparations and confocal images consistently showed a greater amount of Kv7.1 protein at the plasma-membrane in cells co-transfected with the non-atrial fibrillation KCNE1-38S than with any other construct.
Conclusions: The results of our study indicate that N-terminal arginines in positions 32, 33, 36 of KCNE1 are important for reconstitution of IKs. Furthermore, our results hint towards a role of these N-terminal amino-acids in membrane representation of the delayed rectifier channel complex.
Background: MicroRNA-21 (miR-21) is up-regulated in tumor tissue of patients with malignant diseases, including hepatocellular carcinoma (HCC). Elevated concentrations of miR-21 have also been found in sera or plasma from patients with malignancies, rendering it an interesting candidate as serum/plasma marker for malignancies. Here we correlated serum miR-21 levels with clinical parameters in patients with different stages of chronic hepatitis C virus infection (CHC) and CHC-associated HCC.
Methodology/Principal Findings: 62 CHC patients, 29 patients with CHC and HCC and 19 healthy controls were prospectively enrolled. RNA was extracted from the sera and miR-21 as well as miR-16 levels were analyzed by quantitative real-time PCR; miR-21 levels (normalized by miR-16) were correlated with standard liver parameters, histological grading and staging of CHC. The data show that serum levels of miR-21 were elevated in patients with CHC compared to healthy controls (P<0.001); there was no difference between serum miR-21 in patients with CHC and CHC-associated HCC. Serum miR-21 levels correlated with histological activity index (HAI) in the liver (r = −0.494, P = 0.00002), alanine aminotransferase (ALT) (r = −0.309, P = 0.007), aspartate aminotransferase (r = −0.495, P = 0.000007), bilirubin (r = −0.362, P = 0.002), international normalized ratio (r = −0.338, P = 0.034) and γ-glutamyltransferase (r = −0.244, P = 0.034). Multivariate analysis revealed that ALT and miR-21 serum levels were independently associated with HAI. At a cut-off dCT of 1.96, miR-21 discriminated between minimal and mild-severe necroinflammation (AUC = 0.758) with a sensitivity of 53.3% and a specificity of 95.2%.
Conclusions/Significance: The serum miR-21 level is a marker for necroinflammatory activity, but does not differ between patients with HCV and HCV-induced HCC.
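The dCT normalization and cut-off classification described in this abstract can be sketched in a few lines. This is a generic illustration only: the Ct values, class labels, and the direction of the classification rule are assumptions for the sketch, not data or code from the study.

```python
# Illustrative sketch of dCT normalization (target miRNA normalized by a
# reference miRNA) and a cut-off-based classification. All values are
# hypothetical placeholders.

def delta_ct(ct_target: float, ct_reference: float) -> float:
    """Normalize the target (e.g. miR-21) by a reference (e.g. miR-16)."""
    return ct_target - ct_reference

def classify(dct: float, cutoff: float = 1.96) -> str:
    """Lower dCT means more target miRNA relative to the reference.
    For illustration we call samples below the cut-off 'mild-severe'
    and those above it 'minimal' (an assumed direction)."""
    return "mild-severe" if dct < cutoff else "minimal"

def sensitivity_specificity(predictions, labels, positive="mild-severe"):
    """Compute sensitivity and specificity of the classification."""
    tp = sum(p == positive and l == positive for p, l in zip(predictions, labels))
    tn = sum(p != positive and l != positive for p, l in zip(predictions, labels))
    fp = sum(p == positive and l != positive for p, l in zip(predictions, labels))
    fn = sum(p != positive and l == positive for p, l in zip(predictions, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting sensitivity and specificity at a fixed cut-off, as the abstract does, corresponds to evaluating one point on the ROC curve whose area is the quoted AUC.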
Although climate is known to be one of the key factors determining animal species distributions, projections of global change impacts on those distributions often rely on bioclimatic envelope models alone. Vegetation structure and landscape configuration are also key determinants of distributions, but they are rarely considered in such assessments. We explore the consequences of using simulated vegetation structure and composition, as well as the associated landscape configuration, in models projecting global change effects on Iberian bird species distributions. Both present-day and future distributions were modelled for 168 bird species using two ensemble forecasting methods: Random Forests (RF) and Boosted Regression Trees (BRT). For each species, several models were created, differing in the predictor variables used (climate, vegetation, and landscape configuration). The discrimination ability of each model in the present day was then tested with four commonly used evaluation methods (AUC, TSS, specificity and sensitivity). The different sets of predictor variables yielded similar spatial patterns for well-modelled species, but the future projections diverged for poorly-modelled species. Models using all predictor variables were not significantly better than models fitted with climate variables alone in ca. 50% of the cases. Moreover, models fitted with climate data were always better than models fitted with landscape configuration variables, and vegetation variables were found to correlate with bird species distributions in 26–40% of the cases with BRT, and in 1–18% of the cases with RF. We conclude that improvements from including vegetation and landscape configuration variables, in comparison with climate-only variables, might not always be as great as expected for future projections of Iberian bird species.
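The threshold-dependent evaluation measures named in this abstract (sensitivity, specificity, and the true skill statistic, TSS = sensitivity + specificity − 1) can be sketched directly from binary presence/absence predictions. This is a generic illustration with made-up data, not code from the study.

```python
# Sketch of sensitivity, specificity, and TSS for presence/absence models.
# 'pred' and 'obs' are parallel sequences of booleans (presence = True).

def confusion(pred, obs):
    """Return (true positives, true negatives, false positives, false negatives)."""
    tp = sum(p and o for p, o in zip(pred, obs))
    tn = sum((not p) and (not o) for p, o in zip(pred, obs))
    fp = sum(p and (not o) for p, o in zip(pred, obs))
    fn = sum((not p) and o for p, o in zip(pred, obs))
    return tp, tn, fp, fn

def tss(pred, obs):
    """True skill statistic: 1 = perfect discrimination, 0 = no skill."""
    tp, tn, fp, fn = confusion(pred, obs)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0
```

Unlike raw accuracy, TSS is insensitive to the prevalence of presences in the evaluation data, which is one reason it is popular in species distribution modelling.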
Functional near-infrared spectroscopy (fNIRS) is an established optical neuroimaging method for measuring functional hemodynamic responses to infer neural activation. However, the impact of individual anatomy on the sensitivity of fNIRS for measuring hemodynamics within cortical gray matter is still unknown. By means of Monte Carlo simulations and structural MRI of 23 healthy subjects (mean age: 25.0 ± 2.8 years), we characterized the individual distribution of tissue-specific NIR-light absorption underneath 24 prefrontal fNIRS channels. We thereby investigated the impact of scalp-cortex distance (SCD), frontal sinus volume, and sulcal morphology on the gray matter volume (Vgray) traversed by NIR light, i.e. anatomy-dependent fNIRS sensitivity. The NIR-light absorption between optodes was distributed as a rotational ellipsoid with a mean penetration depth of 23.6 ± 0.7 mm, considering the deepest 5% of light. Of the detected photon packages, scalp and bone absorbed 96.4 ± 9.7% of the energy, while gray matter absorbed 3.1 ± 1.8%. The mean Vgray of 1.1 ± 0.4 cm3 was negatively correlated with the SCD (r = −.76) and frontal sinus volume (r = −.57), and was reduced in subjects with relatively large compared to small frontal sinuses. Head circumference was significantly positively correlated with the mean SCD (r = .46) and the traversed frontal sinus volume (r = .43). Sulcal morphology had no significant impact on Vgray. Our findings suggest considering individual SCD and frontal sinus volume as anatomical factors impacting fNIRS sensitivity. Head circumference may represent a practical measure to partly control for these sources of error variance.
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models for the generation of saccades, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration and peak velocity in head-restrained conditions and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements.
This paper considers the logic FOcard, i.e., first-order logic with cardinality predicates that can specify the size of a structure modulo some number. We study the expressive power of FOcard on the class of languages of ranked, finite, labelled trees with successor relations. Our first main result characterises the class of FOcard-definable tree languages in terms of algebraic closure properties of the tree languages. As it can be effectively checked whether the language of a given tree automaton satisfies these closure properties, we obtain a decidable characterisation of the class of regular tree languages definable in FOcard. Our second main result considers first-order logic with unary relations, successor relations, and two additional designated symbols < and + that must be interpreted as a linear order and its associated addition. Such a formula is called addition-invariant if, for each fixed interpretation of the unary relations and successor relations, its result is independent of the particular interpretation of < and +. We show that the FOcard-definable tree languages are exactly the regular tree languages definable in addition-invariant first-order logic. Our proof techniques involve tools from algebraic automata theory, reasoning with locality arguments, and the use of logical interpretations. We combine and extend methods developed by Benedikt and Segoufin (ACM ToCL, 2009) and Schweikardt and Segoufin (LICS, 2010).
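As a notational illustration of the cardinality predicates discussed above (the modulo-counting quantifier syntax below is one common convention and may differ from the paper's own), an FOcard sentence over labelled trees can state, for example, that the number of a-labelled nodes is even:

```latex
% "The number of a-labelled nodes is congruent to 0 modulo 2":
\exists^{\,0 \bmod 2}\, x\; \mathrm{label}_a(x)
% In general, \exists^{\,r \bmod q}\, x\, \varphi(x) holds in a tree iff
% the number of elements satisfying \varphi(x) is congruent to r modulo q.
```

Such modulo-counting sentences are exactly the kind of property that plain first-order logic over trees cannot express but FOcard can.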
Membrane proteins (MPs) are encoded by about 30% of the genome and are essential in many cellular processes. In particular, structural characterisation of MPs is challenged by their hydrophobic nature, which results in expression difficulties and structural instability upon extraction from the membrane. Despite these challenges, progress in sample preparation and the techniques to solve MP structures has led to 281 unique MP structures as of January 2011. Through the combination of a cell-free expression system and selective labelling strategies, this thesis aimed to advance the structure determination of α-helical MPs by NMR spectroscopy and resulted in the structure determination of a seven-transmembrane-helix protein. Results were obtained for the 5-lipoxygenase-activating protein (FLAP) and proteorhodopsin (PR). The detergent-based cell-free expression mode proved most efficient for production of both targets, but optimisation of FLAP and PR followed different routes. The presence of a retinal cofactor in PR greatly facilitated the search for an appropriate hydrophobic environment. For structural studies, NMR spectra of FLAP indicated favourable properties of the lysolipid LPPG. In contrast, PR was stable and homogenous in the short-chain lipid diC7PC. As NMR spectra of α-helical MPs are generally characterised by broad lines and signal overlap, selective labelling strategies were essential in the assignment process of both targets. For the backbone assignment of FLAP, transmembrane segment-enhanced (TMS) labelling was developed, employing the six amino acids AFGILV. These residues cluster predominantly in transmembrane helices and form long stretches allowing a large extent of backbone assignment. In addition, combinatorial labelling enables identification of unique pairs in the sequence based on a mixture of 15N- and 1-13C-labelled amino acids.
To find the optimal labelling pattern for a given primary structure, the UPLABEL algorithm has been made available and successfully applied in the backbone assignment of PR. Both selective labelling approaches greatly benefitted from the use of a cell-free expression system to reduce isotope scrambling. Additionally, the de novo structure of PR was determined with an average backbone rmsd of 1.2 Å based on TALOS-derived backbone torsion angles, intrahelical hydrogen bond restraints and distance restraints from the NOE and paramagnetic relaxation enhancement (PRE). A major bottleneck in the NMR structure determination of MPs concerns the number of long-range distances which are often limited. In PR, side chain assignment was enabled by stereo-array isotope labelling as well as selective labelling which provided 33 long-range NOEs. These NOEs stabilised the symmetry of the seven helix bundle. With a total number of 1031, the majority of long-range distances were derived from PREs. The structure of PR reveals differences to its homologues such as the absence of an anti-parallel β-sheet between helices B and C and allows conclusions towards the mechanism of colour tuning.
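The combinatorial-labelling idea described above can be sketched as a simple search for unique dipeptide pairs: a backbone crosspeak appears when residue i−1 carries a 1-13C label and residue i carries a 15N label, so any (i−1, i) amino-acid pair that occurs exactly once in the sequence can be assigned unambiguously. This is a hypothetical illustration of the principle, not the UPLABEL algorithm itself; the sequence and label sets below are made up.

```python
# Find sequence positions identifiable from one combinatorial labelling
# scheme: c13_set holds 1-13C-labelled amino acids (residue i-1), n15_set
# holds 15N-labelled amino acids (residue i). A pair is 'unique' if it
# occurs exactly once, so its crosspeak pinpoints one position.

def unique_pairs(sequence, c13_set, n15_set):
    """Return 0-based positions i whose (i-1, i) labelled pair is unique."""
    pairs = {}
    for i in range(1, len(sequence)):
        if sequence[i - 1] in c13_set and sequence[i] in n15_set:
            pairs.setdefault((sequence[i - 1], sequence[i]), []).append(i)
    return [positions[0] for positions in pairs.values() if len(positions) == 1]
```

An optimizer such as UPLABEL would, on this picture, choose the label sets that maximize the number of such uniquely identifiable positions for a given primary structure.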
The role of small leucine-rich proteoglycans, biglycan and decorin, in podocytopathy and albuminuria
(2011)
Biglycan is a member of the small leucine-rich proteoglycan (SLRP) family and is involved in the assembly of extracellular matrix components. In macrophages soluble biglycan acts as an endogenous ligand of the innate immunity receptors TLR2 and TLR4. Data addressing the role of biglycan in renal pathology are surprisingly limited. In a normal kidney, biglycan is expressed mainly in the tubulointerstitium; however, in the course of various renal diseases its expression may be altered. The biological role and mechanisms of biglycan action in the pathology of renal diseases, especially those affecting glomeruli, remain poorly understood.
Albuminuria is the first detectable clinical abnormality in diabetic nephropathy. In this study we detected increased biglycan mRNA expression in glomeruli of renal biopsies of patients with incipient diabetic nephropathy, with predominant localization in podocytes. This novel finding raised the question about the role and mechanisms of biglycan action in diabetic podocyte injury and whether the mechanisms of biglycan signaling causing podocyte injury and albuminuria could be extrapolated to other glomerular diseases.
To investigate the role of biglycan in the cause of diabetic podocyte injury and albuminuria we used the murine model of STZ-induced diabetic nephropathy and wild type (Bgn+/0) and biglycan deficient (Bgn-/0) mice. We observed that biglycan was expressed on mRNA and protein levels in podocytes of diabetic Bgn+/0 mice and that diabetic Bgn+/0 mice also had significantly higher albuminuria compared to non-diabetic mice 6 and 12 weeks after disease induction. Biglycan deficiency was shown to be an important factor in albuminuria development. Namely, we observed that diabetic Bgn-/0 mice had significantly lower levels of urinary albumin compared to diabetic Bgn+/0 mice. We showed that less severe podocyte loss in the urine of diabetic Bgn-/0 mice was associated with significantly higher nephrin and podocin glomerular expression compared to diabetic Bgn+/0 mice. Our data suggested that biglycan deficiency was protective against podocyte loss into urine and might be beneficial against development of albuminuria in diabetes.
Biglycan contributed to podocyte actin rearrangement due to increased phosphorylation of Rac1 in vitro. Furthermore, biglycan induced caspase-3 activity and production of reactive oxygen species (ROS), thus enhancing apoptosis in cultured podocytes. Biglycan-induced ROS generation was TLR2/TLR4-dependent. Overexpression of soluble biglycan in wild type mice induced albuminuria under normal conditions and significantly increased albuminuria under pathological conditions (murine model of LPS-induced albuminuria). Inhibition of Rac1 activity in vivo decreased the albuminuria induced by biglycan overexpression. In patients with glomerular diseases, biglycan was detected in urine and was associated with nephrin appearance in the urine of these patients and with increased albuminuria. Collectively, our results elucidate a novel mechanism for biglycan-induced TLR2- and TLR4-dependent, Rac1- and ROS-mediated podocytopathy leading to podocyturia, albuminuria development and progression of glomerular diseases. Interfering with biglycan actions and blocking its signaling via TLR2 and TLR4 might be a potential therapeutic strategy against these diseases. To achieve this goal, the specific mechanisms for binding of biglycan to TLR2 and TLR4 must be elucidated and effective ways of preventing this binding must be developed. Nevertheless, biglycan remains the “danger signal” that activates innate immune receptors in non-immune cells and triggers the deleterious mechanisms leading to aggravation of renal injury.
Soil biogenic NO emissions (SNOx) play important direct and indirect roles in the chemical processes of the troposphere. The most widely applied algorithm to calculate SNOx in global models was published 15 years ago by Yienger and Levy (1995) and was based on very few measurements. Since then, numerous new measurements have been published, which we used to build up a database of field measurements conducted worldwide, covering the period from 1978 to 2009 and comprising 108 publications with 560 measurements.
Recently, several satellite-based top-down approaches, which recalculated the different sources of NOx (fossil fuel, biomass burning, soil and lightning), have shown an underestimation of SNOx by the algorithm of Yienger and Levy (1995). Nevertheless, to our knowledge no general improvements of this algorithm have yet been published.
Here we present major improvements to the algorithm, which should help to optimize the representation of SNOx in atmospheric-chemistry global climate models, without modifying the underlying principles or mathematical equations. The changes include: 1) using a new, up-to-date land cover map with twice the number of land cover classes, and using annually varying fertilizer application rates; 2) adopting the fraction of SNOx induced by fertilizer application based on our database; 3) switching from the soil water column to volumetric soil moisture to distinguish between the wet and dry states; 4) tuning the emission factors to reproduce the measured emissions in our database and calculating the emissions based on their mean value. These steps lead to an increase in global yearly SNOx, and our total SNOx source ends up being close to one of the top-down approaches. In some geographical regions the new results agree better with the top-down approach, but there are also distinct differences in other regions. This suggests that top-down and bottom-up approaches could be combined in a future attempt to provide an even better calculation of SNOx.
Biogenic NO emissions from soils (SNOx) play important direct and indirect roles in tropospheric chemistry. The most widely applied algorithm to calculate SNOx in global models was published 15 years ago by Yienger and Levy (1995), and was based on very few measurements. Since then, numerous new measurements have been published, which we used to build up a compilation of world wide field measurements covering the period from 1978 to 2010. Recently, several satellite-based top-down approaches, which recalculated the different sources of NOx (fossil fuel, biomass burning, soil and lightning), have shown an underestimation of SNOx by the algorithm of Yienger and Levy (1995). Nevertheless, to our knowledge no general improvements of this algorithm, besides suggested scalings of the total source magnitude, have yet been published. Here we present major improvements to the algorithm, which should help to optimize the representation of SNOx in atmospheric-chemistry global climate models, without modifying the underlying principles or mathematical equations. The changes include: (1) using a new landcover map, with twice the number of landcover classes, and using annually varying fertilizer application rates; (2) adopting a fraction of 1.0 % for the applied fertilizer lost as NO, based on our compilation of measurements; (3) using the volumetric soil moisture to distinguish between the wet and dry states; and (4) adjusting the emission factors to reproduce the measured emissions in our compilation (based on either their geometric or arithmetic mean values). These steps lead to increased global annual SNOx, and our total above canopy SNOx source of 8.6 Tg yr−1 (using the geometric mean) ends up being close to one of the satellite-based top-down approaches (8.9 Tg yr−1). The above canopy SNOx source using the arithmetic mean is 27.6 Tg yr−1, which is higher than all previous estimates, but compares better with a regional top-down study in eastern China.
This suggests that both top-down and bottom-up approaches will be needed in future attempts to provide a better calculation of SNOx.
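The sensitivity of the global total to the choice of mean can be illustrated with a small sketch: for right-skewed data such as soil NO flux compilations, the arithmetic mean is pulled upward by a few very high values, while the geometric mean tracks the typical measurement. The flux values below are invented for illustration and are not taken from the compilation.

```python
import math

def arithmetic_mean(xs):
    """Plain average; dominated by the few very high emission values."""
    return sum(xs) / len(xs)

def geometric_mean(xs):
    """Exponential of the mean log; robust for log-normally distributed fluxes."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical NO flux measurements (ng N m^-2 s^-1) with a right-skewed
# distribution, as is typical for soil emission compilations.
fluxes = [0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 25.0]

am = arithmetic_mean(fluxes)  # pulled up by the single high outlier
gm = geometric_mean(fluxes)   # close to the "typical" flux
```

Emission factors derived from the arithmetic mean will therefore systematically exceed those from the geometric mean whenever the measurement distribution has a long upper tail, which is consistent with the 27.6 versus 8.6 Tg yr−1 difference reported above.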
Which factors determine whether a stimulus is consciously perceived or unconsciously processed? Here, I investigate how previous experience on two different time scales – long-term experience over the course of several days, and short-term experience based on the previous trial – impacts conscious perception. Regarding long-term experience, I investigate how perceptual learning not only changes the capacity to process stimuli, but also the capacity to consciously perceive them. To this end, subjects are trained extensively to discriminate between masked stimuli, and concurrently rate their subjective experience. Both the ability to discriminate the stimuli and subjective awareness of the stimuli increase as a function of training. However, these two effects are not simple byproducts of each other. On the contrary, they display different time courses, with above-chance discrimination performance emerging before subjective experience; importantly, the two learning effects also rely on different circuits in the brain: moving the stimuli outside the trained receptive field abolishes the learning effects on discrimination ability, but preserves the learning effects on subjective awareness.
This indicates that the receptive fields serving subjective experience are larger than those serving objective performance, and that the channels through which they receive their information are arranged in parallel. Regarding short-term experience, I investigate how memory-based predictions arising from information acquired on the previous trial affect visibility and the neural correlates of consciousness. To this end, I vary stimulus evidence as well as predictability and acquire electroencephalographic data.
A comparison of the neural processes distinguishing consciously perceived from unperceived trials with and without predictions reveals that predictions speed up processing, thus shifting the neural correlates forward in time. Thus, the neural correlates of consciousness display a previously unappreciated flexibility in time and do not arise invariably late as had been predicted by some theorists.
Admittedly, however, previous experience does not always stabilize perception. Instead, it can have the reverse effect: seeing the opposite of what was there, as in so-called repulsive aftereffects. Here, I investigate what determines the direction of these experience-based effects using multistable stimuli. In a functional magnetic resonance imaging experiment, I find that a widespread network of frontal, parietal, and ventral occipital brain areas is involved in perceptual stabilization, whereas the reverse effect is only evident in extrastriate cortex. This areal separation possibly endows the brain with the flexibility to switch between exploiting already available information and emphasizing the new.
Taken together, my data show that conscious perception and its neuronal correlates display a remarkable degree of flexibility and plasticity, which should be taken into account in future theories of consciousness.
Residual circulation trajectories and transit times into the extratropical lowermost stratosphere
(2011)
Transport into the extratropical lowermost stratosphere (LMS) can be divided into a slow part (time-scale of several months to years) associated with the global-scale stratospheric residual circulation and a fast part (time-scale of days to a few months) associated with (mostly quasi-horizontal) mixing (i.e. two-way irreversible transport, including extratropical stratosphere-troposphere exchange). The stratospheric residual circulation may be considered to consist of two branches: a deep branch more strongly associated with planetary waves breaking in the middle to upper stratosphere, and a shallow branch associated with synoptic and planetary scale waves breaking in the subtropical lower stratosphere. In this study the contribution due to the stratospheric residual circulation alone to transport into the LMS is quantified using residual circulation trajectories, i.e. trajectories driven by the (time-dependent) residual mean meridional and vertical velocities. This contribution represents the advective part of the overall transport into the LMS and can be viewed as providing a background onto which the effect of mixing has to be added. Residual mean velocities are obtained from a comprehensive chemistry-climate model as well as from reanalysis data. Transit times of air traveling from the tropical tropopause to the LMS along the residual circulation streamfunction are evaluated and compared to recent mean age of air estimates. A time-scale separation with much smaller transit times into the mid-latitudinal LMS than into polar LMS is found that is indicative of a separation of the shallow from the deep branch of the residual circulation. This separation between the shallow and the deep circulation branch is further manifested in a distinction in the aspect ratio of the vertical to meridional extent of the trajectories, the integrated mass flux along the residual circulation trajectories, as well as the stratospheric entry latitude of the trajectories. 
The residual transit time distribution reproduces qualitatively the observed seasonal cycle of youngest air in the extratropical LMS in fall and oldest air in spring.
Residual circulation trajectories and transit times into the extratropical lowermost stratosphere
(2010)
Transport into the extratropical lowermost stratosphere (LMS) can be divided into a slow part (time-scale of several months to years) associated with the global-scale stratospheric residual circulation and a fast part (time-scale of days to a few months) associated with (mostly quasi-horizontal) mixing (i.e. two-way irreversible transport, including stratosphere-troposphere exchange). The stratospheric residual circulation can be considered to consist of two branches: a deep branch more strongly associated with planetary waves breaking in the middle to upper stratosphere, and a shallow branch more strongly associated with synoptic-scale waves breaking in the subtropical lower stratosphere. In this study the contribution due to the stratospheric residual circulation alone to transport into the LMS is quantified using residual circulation trajectories, i.e. trajectories driven by the (time-dependent) residual mean meridional and vertical velocities. This contribution represents the advective part of the overall transport into the LMS and can be viewed as providing a background onto which the effect of mixing has to be added. Residual mean velocities are obtained from a comprehensive chemistry-climate model as well as from reanalysis data. Transit times of air traveling from the tropical tropopause to the LMS along the residual circulation streamfunction are evaluated and compared to recent mean age of air estimates. A clear time-scale separation with much smaller transit times into the mid-latitudinal LMS than into polar LMS is found that is indicative of a clear separation of the shallow from the deep branch of the residual circulation. This separation between the shallow and the deep circulation branch is further manifested in a clear distinction in the aspect ratio of the vertical to meridional extent of the trajectories as well as the integrated mass flux along the residual circulation trajectories. 
The residual transit time distribution reproduces qualitatively the observed seasonal cycle of youngest air in the extratropical LMS in fall and oldest air in spring.
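As a rough illustration of the method, a residual circulation trajectory is obtained by advecting a parcel with the residual mean meridional and vertical velocities. The sketch below uses simple Euler steps and crude, invented stand-in velocity fields; the study itself drives the trajectories with time-dependent velocities from a chemistry-climate model and from reanalysis data.

```python
def integrate_residual_trajectory(phi0, z0, v_star, w_star, dt, n_steps):
    """Advect a parcel with residual mean velocities (v*, w*) using Euler steps.

    phi0: start latitude (deg), z0: start log-pressure height (km).
    v_star(phi, z, t) and w_star(phi, z, t) return velocities in
    deg/day and km/day; dt is the time step in days.
    """
    phi, z = phi0, z0
    path = [(phi, z)]
    for k in range(n_steps):
        t = k * dt
        dphi = v_star(phi, z, t) * dt
        dz = w_star(phi, z, t) * dt
        phi += dphi
        z += dz
        path.append((phi, z))
    return path

# Idealized stand-in fields (a crude caricature of the shallow branch):
# slow poleward drift, ascent in the tropics, descent in the extratropics.
v = lambda phi, z, t: 0.1                              # deg/day, poleward
w = lambda phi, z, t: 0.02 if phi < 30 else -0.01      # km/day

# Start near the tropical tropopause (0° latitude, ~17 km).
path = integrate_residual_trajectory(0.0, 17.0, v, w, dt=1.0, n_steps=600)
```

Transit times into the lowermost stratosphere then follow from the number of steps needed for a trajectory to cross a given latitude/height surface.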
A complete, well-preserved record of the Cenomanian/Turonian (C/T) Oceanic Anoxic Event 2 (OAE-2) was recovered from Demerara Rise in the southern North Atlantic Ocean (ODP Site 1260). Across this interval, we determined changes in the stable carbon isotopic composition of sulfur-bound phytane (δ13Cphytane), a biomarker for photosynthetic algae. The δ13Cphytane record shows a positive excursion at the onset of the OAE-2 interval, with an unusually large amplitude (~7 ‰) compared to existing C/T proto-North Atlantic δ13Cphytane records (3–6 ‰). Overall, the amplitude of the δ13Cphytane excursion decreases with latitude. Using reconstructed sea surface temperature (SST) gradients for the proto-North Atlantic, we investigated the environmental factors influencing the latitudinal δ13Cphytane gradient. The observed gradient is best explained by high productivity at DSDP Site 367 and the Tarfaya basin before OAE-2, which changed to overall high productivity throughout the proto-North Atlantic during OAE-2. During OAE-2, productivity at Sites 1260 and 603B was thus more comparable to that at the mid-latitude sites. Using these constraints as well as the SST and δ13Cphytane records from Site 1260, we subsequently reconstructed pCO2 levels across the OAE-2 interval. Accordingly, pCO2 decreased from ca. 1750 to 900 ppm during OAE-2, consistent with enhanced organic matter burial lowering pCO2. Whereas the onset of OAE-2 coincided with increased pCO2, in line with a volcanic trigger for this event, the observed cooling within OAE-2 probably resulted from CO2 sequestration in black shales outcompeting CO2 input into the atmosphere. Together these results show that the ice-free Cretaceous world was sensitive to changes in pCO2 related to perturbations of the global carbon cycle.
Bioapatite in mammalian teeth is readily preserved in continental sediments and represents a very important archive for reconstructions of environmental and climate evolution. This project intends to provide a detailed database of major, minor and trace element and isotope tracers for tooth apatite using a variety of microanalytical techniques. The aim is to identify specific sedimentary environments and to improve our understanding of the interaction between internal metabolic processes during tooth formation, external nutritional control, and secondary alteration effects. Here, we use the electron microprobe to determine the major and minor element contents of fossil and modern molar enamel, cement and dentin from hippopotamids. Most of the studied specimens are from different ecosystems in Eastern Africa, representing modern and fossil lacustrine (Lake Kikorongo, Lake Albert, and Lake Malawi) and modern fluvial environments of the Nile River system.
Secondary alteration particularly affects the FeO, MnO, SO3 and F concentrations, which are 2 to 10 times higher in fossil than in modern enamel; secondary enrichments in fossil dentin and cement are even higher. In modern and fossil enamel, along sections perpendicular to the enamel-dentin junction (EDJ) or along cervix-apex profiles, the P2O5 and CaO contents and the CaO/P2O5 ratios are very constant (StdDev ~1 %). Linear regression analysis reveals very tight control of the MgO (R2∼0.6), Na2O and Cl variation (for both R2>0.84) along EDJ-outer enamel rim profiles, despite large concentration variations (40 % to 300 %) across the enamel. These minor elements show well-defined distribution patterns in enamel, similar in all specimens regardless of their age and origin: the concentrations of MgO and Na2O decrease from the EDJ towards the outer rim, whereas Cl displays the opposite variation.
Fossil enamel from hippopotamids that lived in the saline Lake Kikorongo has a much higher MgO/Na2O ratio (∼1.11) than that from the Neogene fossils of Lake Albert (MgO/Na2O∼0.4), which was a large freshwater lake like those in the Western Branch of the East African Rift System today. Similarly, the MgO/Na2O ratio in modern enamel from the White Nile River (∼0.36), which has a Precambrian catchment of dominantly granites and gneisses and passes through several saline zones, is higher than that from the Blue Nile River, whose catchment is the Neogene volcanic Ethiopian Highland (MgO/Na2O∼0.22). Thus, MgO/Na2O in particular might be a sensitive fingerprint for environments where river and lake water have undergone strong evaporation.
Enamel formation in mammals takes place at successive mineralization fronts within a confined chamber where ion and molecule transport is controlled by the surrounding enamel organ. During the secretion and maturation phases the epithelium generates fluids of different compositions, which, in principle, should determine the final composition of the enamel apatite. This is supported by co-linear relationships between MgO, Cl and Na2O, which can be interpreted as binary mixing lines. However, if maturation starts only after secretion is completed, the observed element distribution can only be explained by recrystallization of existing apatite and addition of new apatite during maturation. Perhaps the initial enamel crystallites precipitating during secretion and the bioapatite crystals newly formed during maturation equilibrate with a continuously evolving fluid. During crystallization of bioapatite the enamel fluid becomes continuously depleted in MgO and Na2O, but enriched in Cl, which results in the formation of MgO- and Na2O-rich, but Cl-poor bioapatite near the EDJ and MgO- and Na2O-poor, but Cl-rich bioapatite at the outer enamel rim.
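The qualitative effect of a continuously evolving enamel fluid can be illustrated with a Rayleigh-type fractionation sketch: if Mg and Na are preferentially incorporated into bioapatite (partition coefficient D > 1) while Cl is largely excluded (D < 1), the instantaneous solid is Mg- and Na-rich but Cl-poor early on (near the EDJ) and shows the opposite pattern late in crystallization (at the outer rim). The partition coefficients and starting concentrations below are invented for illustration, not measured values.

```python
def rayleigh_solid(c0, D, F):
    # Instantaneous solid formed when a fraction (1 - F) of the enamel
    # fluid has already crystallized (Rayleigh fractionation):
    #   C_solid = D * c0 * F**(D - 1)
    return D * c0 * F ** (D - 1)

# Invented partition coefficients: Mg and Na compatible in bioapatite
# (D > 1, fluid becomes depleted), Cl incompatible (D < 1, fluid enriched).
D = {"MgO": 2.0, "Na2O": 1.5, "Cl": 0.2}
c0 = {el: 1.0 for el in D}  # normalized initial fluid concentrations

# F = fraction of fluid remaining: ~1 near the EDJ, small at the outer rim.
edj = {el: rayleigh_solid(c0[el], D[el], F=0.99) for el in D}
rim = {el: rayleigh_solid(c0[el], D[el], F=0.10) for el in D}
```

With these assumptions the early solid is enriched in MgO and Na2O but poor in Cl, and the late solid shows the reverse, reproducing the sense of the observed EDJ-to-rim gradients.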
The linkage between lake and river water composition, bioavailability of elements for plants, animal nutrition and tooth formation is complex and multifaceted. The quality and limits of the MgO/Na2O and other proxies have to be established with systematic investigations relating chemical distribution patterns to sedimentary environment and to growth structures developing as secretion and maturation proceed during tooth formation.
Ubiquitin is a highly conserved protein involved in several cellular processes such as protein degradation, endocytosis, signal transduction and DNA repair. The discovery of ubiquitin-like proteins (UBLs) and ubiquitin-like domains (ULDs) has increased the number of known regulatory pathways that exploit the properties of the ubiquitin fold.
Autophagy is the catabolic pathway used by cells to deliver cytosolic components and dysfunctional organelles to the lysosome for degradation. MAP1LC3 proteins are ubiquitin-like proteins involved, on the one hand, in the expansion of the autophagosome, which sequesters cytosolic substrates. On the other hand, these proteins (the LC3 and GABARAP subfamilies) bind to autophagic receptors linked to polyubiquitinated protein aggregates. For this project, the 3D structure of the GABARAPL-1/NBR1-LIR complex was determined, confirming that GABARAPL-1 belongs to the MAP1LC3 protein family, structurally characterized by a ubiquitin fold consisting of a central beta-sheet formed by four beta-strands with two alpha-helices on one side of the beta-sheet, preceded N-terminally by two alpha-helices, resulting in the formation of two hydrophobic pockets, hp1 and hp2. The autophagic receptor NBR1 interacts with GABARAPL-1 through hp1 and hp2, with its LIR motif adopting an extended beta conformation upon binding and forming an intermolecular beta-sheet with the second beta-strand of GABARAPL-1. This LC3-interacting region (LIR) consists of a Theta-X-X-Gamma sequence preceded by acidic amino acids, with Theta and Gamma representing an aromatic and a hydrophobic residue, respectively. Interaction studies of the LIR domains of p62, Nix and NBR1 with different members of the MAP1LC3 protein family indicate that the presence of a tryptophan in the LIR motif increases the binding affinity. Substitution with other aromatic amino acids or increasing the number of negatively charged residues at the N-terminus of the LIR motif, however, has little effect on the binding affinity due to enthalpy-entropy compensation, suggesting that effector proteins can interact with a wide variety of different sequences with similar and moderate binding affinities.
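The Theta-X-X-Gamma consensus described above can be sketched as a simple sequence scan. The concrete residue sets below (Theta = W/F/Y, Gamma = L/I/V, preceded by at least two acidic residues D/E) follow the commonly used LIR consensus and are an assumption for illustration; the example sequence is made up and is not the real NBR1 sequence.

```python
import re

# Hedged sketch of a LIR motif scan. Residue sets are assumptions:
#   Theta = aromatic (W/F/Y), Gamma = hydrophobic (here L/I/V),
#   preceded by at least two acidic residues (D/E).
LIR_PATTERN = re.compile(r"[DE]{2,}[WFY]..[LIV]")

def find_lir_motifs(sequence):
    """Return (start, matched_substring) pairs for candidate LIR motifs."""
    return [(m.start(), m.group()) for m in LIR_PATTERN.finditer(sequence)]

# Made-up illustrative sequence containing two candidate motifs.
hits = find_lir_motifs("MAAEEDEWVRLGSSKDEEFTALV")
```

Such a scan only flags candidates; as noted above, binding affinity depends on the identity of the aromatic residue (tryptophan binds most tightly) and is buffered by enthalpy-entropy compensation, so experimental validation remains necessary.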
In addition to being present in proteins involved in protein folding and degradation, ubiquitin-like domains have been found in proteins involved in the regulation of signal transduction, such as TBK1, a serine/threonine kinase responsible for the induction of immune responses. In this second project, based on the NMR chemical shifts of the TBK1 domain comprising amino acids 302 to 383, secondary structure prediction programs (TALOS and CSI) confirmed the presence of a ubiquitin-like domain in TBK1 by identifying one alpha-helix and four beta-strands arranged in the order beta-beta-alpha-beta-beta. This arrangement corresponds to the secondary structure elements of ubiquitin and shows that TBK1_ULD belongs to the UBL protein superfamily. The similarity to ubiquitin is further increased by the additional presence of a small beta-strand and a short helix, corresponding to the beta5-strand and the 310-helix of ubiquitin, respectively. The first attempts at 3D structure determination confirmed the ubiquitin fold, but due to the incomplete assignment of TBK1_ULD, only a structure based on ubiquitin as a model could be determined. Interaction studies of TBK1_ULD with the IAD-SRR domain of IRF3 showed that both sides of the molecule seem to be involved and that the TBK1/IRF3 interaction is more complex than a one-to-one binding process. Unfortunately, the instability of TBK1_ULD, together with the difficulty of purifying IAD-SRR, did not allow this interaction to be studied in more detail.
Finally, to overcome the difficulties encountered in NMR experiments because of low expression and/or poor solubility, an expression vector exploiting the intrinsic properties of ubiquitin was designed. Fused to target proteins or peptides, this construct produced proteins and peptides in larger amounts than traditional expression vectors, and at a lower cost than chemical synthesis of pure labeled peptides for NMR structural studies. A hexahistidine tag facilitated the isolation and purification of the constructs, and a TEV cleavage site was included to retain the option of releasing the ubiquitin moiety from the expressed protein or peptide. Moreover, the ubiquitin tag can also remain attached to the protein/peptide of interest when biophysical methods such as NMR, ITC or CD spectroscopy are applied, providing the same results as for the protein/peptide moiety alone.
Tubular carbonate concretions, up to 1 m in length and oriented perpendicular to bedding, occur abundantly in the Upper Pliensbachian (upper Amaltheus margaritatus Zone, Gibbosus Subzone) in outcrops (Fontaneilles section) in the vicinity of Rivière-sur-Tarn, southern France. Stable isotope analyses of these concretions show negative δ13C values that decrease from the rim to the center from −18.8‰ to −25.7‰ (V-PDB), but normal marine δ18O values (−1.8‰). Carbon isotope analyses of Late Pliensbachian bulk carbonate (matrix) samples from the Fontaneilles section show clearly decreasing C-isotope values across the A. margaritatus Zone, from +1‰ to −3‰ (V-PDB). Isotope analyses of coeval belemnite rostra do not document such a negative C-isotope trend, with values remaining stable around +2‰ (V-PDB). Computer tomographic (CT) scanning of the tubular concretions shows multiple canals that are lined or filled entirely with pyrite. Previously, the formation of these concretions with one, two, or more central tubes has been ascribed to the activity of an enigmatic organism, possibly with annelid or arthropod affinities, known as Tisoa siphonalis. Our results suggest that tisoan structures are abiogenic. Based on our geochemical analyses and sedimentological observations we suggest that these concretions formed through a combination of the anaerobic oxidation of methane (AOM) and sulfate reduction within the sediment. Fluids rich in methane and/or hydrocarbons likely altered local bulk rock carbon isotope records, but did not affect the global carbon cycle. Interestingly, Tisoa siphonalis has been described from many locations in the Grands Causses Basin in southern France, as well as from northern France and Luxembourg, always occurring at the same stratigraphic level. Upper Pliensbachian authigenic carbonates thus possibly cover an area of many thousand square kilometers.
Greatly reduced sedimentation rates are needed to stabilize the sulfate-methane transition zone in the sedimentary column long enough for the tubular concretions to form. Late Pliensbachian cooling, reduced run-off, and/or the influx of colder water and more vigorous circulation could be responsible for a halt in sedimentation. At the same time, (thermogenic) methane may have been destabilized during a major phase of Late Pliensbachian sea level fall. As such, Tisoa siphonalis is more than a geological curiosity, and its further study could prove pivotal in understanding Early Jurassic paleoenvironmental change.
Floodplains play an important role in the terrestrial water cycle and are very important for biodiversity. Therefore, an improved representation of the dynamics of floodplain water flows and storage in global hydrological and land surface models is required. To support model validation, we combined monthly time series of satellite-derived inundation areas (Papa et al., 2010) with data on irrigated rice areas (Portmann et al., 2010). In this way, we obtained global-scale time series of naturally inundated areas (NIA), with monthly values of inundation extent during 1993–2004 and a spatial resolution of 0.5°. For most grid cells (0.5°×0.5°), the mean annual maximum of NIA agrees well with the static open water extent of the Global Lakes and Wetlands Database (GLWD) (Lehner and Döll, 2004), but in 16% of the cells NIA is larger than GLWD. In some regions, like Northwestern Europe, NIA clearly overestimates inundated areas, probably because very wet soils are confounded with inundated areas. In other areas, such as South Asia, it is likely that NIA can help to enhance GLWD. NIA data will be very useful for developing and validating a floodplain modeling algorithm for the global hydrological model WGHM. For example, we found that monthly NIAs correlate with observed river discharges.
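The combination step and the validation check can be sketched as follows. The subtraction-and-clipping rule and all sample numbers below are illustrative assumptions, not the actual processing chain of the study.

```python
def naturally_inundated_area(inundation, rice_area):
    """Per-cell monthly naturally inundated area: satellite-derived
    inundation minus irrigated rice area, clipped at zero (a sketch of
    the combination step; the study's actual processing may differ)."""
    return [max(i - r, 0.0) for i, r in zip(inundation, rice_area)]

def pearson_r(x, y):
    """Pearson correlation, e.g. between monthly NIA and observed discharge."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Made-up monthly series for one 0.5° cell: inundation and rice area
# in km^2, discharge in m^3/s.
inund = [120.0, 150.0, 300.0, 420.0, 380.0, 200.0]
rice = [40.0, 40.0, 60.0, 60.0, 60.0, 40.0]
nia = naturally_inundated_area(inund, rice)
r = pearson_r(nia, [900.0, 1100.0, 2300.0, 3200.0, 2900.0, 1500.0])
```

A strong positive correlation between the derived NIA series and the observed discharge series, as in this toy example, is the kind of agreement that supports using NIA for floodplain model validation.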