University Publications
Document Type
- Article (10929)
- Preprint (1673)
- Doctoral Thesis (1575)
- Working Paper (1441)
- Part of Periodical (569)
- Conference Proceeding (513)
- Report (299)
- Part of a Book (107)
- Review (92)
- Book (60)
Language
- English (17362)
Keywords
- inflammation (94)
- COVID-19 (91)
- SARS-CoV-2 (63)
- Financial Institutions (48)
- climate change (46)
- Germany (45)
- ECB (43)
- aging (43)
- apoptosis (42)
- cancer (42)
Institute
- Medizin (5142)
- Physik (3060)
- Frankfurt Institute for Advanced Studies (FIAS) (1664)
- Wirtschaftswissenschaften (1653)
- Biowissenschaften (1410)
- Informatik (1259)
- Center for Financial Studies (CFS) (1139)
- Sustainable Architecture for Finance in Europe (SAFE) (1067)
- Biochemie und Chemie (858)
- House of Finance (HoF) (704)
Debt-induced crises, including the subprime crisis, are usually attributed exclusively to supply-side factors. We examine the role of social influences on debt culture, emanating from the perceived average income of peers. Utilizing unique information from a household survey representative of the Dutch population, which circumvents the issue of defining the social circle, we consider collateralized, consumer, and informal loans. We find robust social effects on borrowing, especially among those who consider themselves poorer than their peers, and on indebtedness, suggesting a link to financial distress. We employ a number of approaches to rule out spurious associations and to handle correlated effects.
Trading under limited pre-trade transparency is becoming increasingly popular on financial markets. We provide the first evidence on traders' use of (completely) hidden orders, which may be placed even inside the (displayed) bid-ask spread. Employing TotalView-ITCH data on order messages at NASDAQ, we propose a simple method to conduct statistical inference on the location of hidden depth and to test economic hypotheses. Analyzing a wide cross-section of stocks, we show that market conditions reflected by the (visible) bid-ask spread, (visible) depth, recent price movements and trading signals significantly affect the aggressiveness of 'dark' liquidity supply and thus the 'hidden spread'. Our evidence suggests that traders balance hidden order placements to (i) compete for the provision of (hidden) liquidity and (ii) protect themselves against adverse selection, front-running as well as 'hidden order detection strategies' used by high-frequency traders. Accordingly, our results show that hidden liquidity locations are predictable given the observable state of the market.
In the aftermath of the global financial crisis, the state of macroeconomic modeling and the use of macroeconomic models in policy analysis have come under heavy criticism. Macroeconomists in academia and policy institutions have been blamed for relying too much on a particular class of macroeconomic models. This paper proposes a comparative approach to macroeconomic policy analysis that is open to competing modeling paradigms. Macroeconomic model comparison projects have helped produce some very influential insights, such as the Taylor rule. However, they have been infrequent and costly, because they require the input of many teams of researchers and multiple meetings to obtain a limited set of comparative findings. This paper provides a new approach that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. Using this approach, a model archive is built that includes many well-known empirically estimated models that may be used for quantitative analysis of monetary and fiscal stabilization policies. A computational platform is created that allows straightforward comparisons of models' implications. Its application is illustrated by comparing different monetary and fiscal policies across selected models. Researchers can easily include new models in the database and compare the effects of novel extensions to established benchmarks, thereby fostering a comparative instead of insular approach to model development.
How do changes in market structure affect the US business cycle? We estimate a monetary DSGE model with endogenous firm/product entry and a translog expenditure function by Bayesian methods. The dynamics of net business formation allow us to identify the 'competition effect', by which desired price markups and inflation decrease when entry rises. We find that a 1 percent increase in the number of competitors lowers desired markups by 0.18 percent. Most of the cyclical variability in inflation is driven by markup fluctuations due to sticky prices or exogenous shocks rather than endogenous changes in desired markups.
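For orientation, a hedged sketch of the markup relation commonly associated with translog preferences in this literature (e.g. Bilbiie, Ghironi and Melitz); the paper's exact specification may differ:

    \mu_t = 1 + \frac{1}{\tilde{\sigma} N_t}, \qquad \frac{\partial \ln \mu_t}{\partial \ln N_t} = -\frac{1}{1 + \tilde{\sigma} N_t}

Here \mu_t is the desired gross markup, N_t the number of competitors and \tilde{\sigma} the translog parameter: a rise in entry raises N_t, compresses desired markups and thereby lowers inflationary pressure, which is the 'competition effect' identified above.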
This paper characterises optimal monetary policy in an economy with endogenous firm entry, a cash-in-advance constraint and preset wages. Firms must make profits to cover entry costs; thus the markup on goods prices is efficient. However, because leisure is not priced at a markup, the consumption-leisure tradeoff is distorted. Consequently, the real wage, hours and production are suboptimally low. Due to the labour requirement in entry, insufficient labour supply also implies that entry is too low. The paper shows that in the absence of fiscal instruments such as labour income subsidies, the optimal monetary policy under sticky wages achieves higher welfare than under flexible wages. The policy maker uses the money supply instrument to raise the real wage - the cost of leisure - above its flexible-wage level, in response to expansionary shocks to productivity and entry costs. This raises labour supply, expanding production and firm entry.
Background: After focal neuronal injury the endocannabinoid system becomes activated and protects or harms neurons depending on cannabinoid derivatives and receptor subtypes. Endocannabinoids (eCBs) play a central role in controlling local responses and influencing neural plasticity and survival. However, little is known about the functional relevance of eCBs in long-range projection damage as observed in stroke or spinal cord injury (SCI).
Methods: In rat organotypic entorhino-hippocampal slice cultures (OHSC) as a relevant and suitable model for investigating projection fibers in the CNS we performed perforant pathway transection (PPT) and subsequently analyzed the spatial and temporal dynamics of eCB levels. This approach allows proper distinction of responses in originating neurons (entorhinal cortex), areas of deafferentiation/anterograde axonal degeneration (dentate gyrus) and putative changes in more distant but synaptically connected subfields (cornu ammonis (CA) 1 region).
Results: Using LC-MS/MS, we measured a strong increase in arachidonoylethanolamide (AEA), oleoylethanolamide (OEA) and palmitoylethanolamide (PEA) levels in the denervation zone (dentate gyrus) 24 hours post lesion (hpl), whereas the entorhinal cortex and CA1 region exhibited little if any change. NAPE-PLD, responsible for the biosynthesis of eCBs, was increased early, whereas FAAH, a catabolizing enzyme, was up-regulated at 48 hpl.
Conclusion: Neuronal damage as assessed by transection of long-range projections apparently provides a strong time-dependent and area-confined signal for de novo synthesis of eCB, presumably to restrict neuronal damage. The present data underlines the importance of activation of the eCB system in CNS pathologies and identifies a novel site-specific intrinsic regulation of eCBs after long-range projection damage.
Sucrose is known to repress the translation of Arabidopsis thaliana AtbZIP11 transcript which encodes a protein belonging to the group of S (S - stands for small) basic region-leucine zipper (bZIP)-type transcription factor. This repression is called sucrose-induced repression of translation (SIRT). It is mediated through the sucrose-controlled upstream open reading frame (SC-uORF) found in the AtbZIP11 transcript. The SIRT is reported for 4 other genes belonging to the group of S bZIP in Arabidopsis. Tobacco tbz17 is phylogenetically closely related to AtbZIP11 and carries a putative SC-uORF in its 5′-leader region. Here we demonstrate that tbz17 exhibits SIRT mediated by its SC-uORF in a manner similar to genes belonging to the S bZIP group of the Arabidopsis genus. Furthermore, constitutive transgenic expression of tbz17 lacking its 5′-leader region containing the SC-uORF leads to production of tobacco plants with thicker leaves composed of enlarged cells with 3–4 times higher sucrose content compared to wild type plants. Our finding provides a novel strategy to generate plants with high sucrose content.
Freshwater biodiversity has declined dramatically in Europe in recent decades. Because of massive habitat pollution and morphological degradation of water bodies, many once widespread species persist in small fractions of their original range. These range contractions are generally believed to be accompanied by loss of intraspecific genetic diversity, due to the reduction of effective population sizes and the extinction of regional genetic lineages. We aimed to assess the loss of genetic diversity and its significance for future potential reintroduction of the long-tailed mayfly Palingenia longicauda (Olivier), which experienced approximately 98% range loss during the past century. Analysis of 936 bp of mitochondrial DNA of 245 extant specimens across the current range revealed a surprisingly large number of haplotypes (87), and a high level of haplotype diversity (Hd = 0.875). In contrast, historic specimens (6) from the lost range (Rhine catchment) were not differentiated from the extant Rába population (F_ST = 0.02, p = 0.61), despite the considerable geographic distance separating the two rivers. These observations can be explained by an overlap of the current with the historic (Pleistocene) refugia of the species. Most likely, the massive recent range loss mainly affected the range which was occupied by rapid post-glacial dispersal. We conclude that massive range losses do not necessarily coincide with genetic impoverishment and that a species' history must be considered when estimating loss of genetic diversity. The assessment of spatial genetic structures and prior phylogeographic information seems essential to conserve once widespread species.
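For reference, haplotype diversity of the kind reported above (Hd = 0.875) is conventionally computed with Nei's estimator from observed haplotype frequencies; a minimal sketch in Python, with purely illustrative counts:

    # Nei's haplotype (gene) diversity: Hd = n/(n-1) * (1 - sum(p_i^2)),
    # where p_i is the frequency of haplotype i among n sampled sequences.
    def haplotype_diversity(counts):
        n = sum(counts)
        freqs = [c / n for c in counts]
        return n / (n - 1) * (1 - sum(p * p for p in freqs))

    # Hypothetical example: 5 haplotypes observed in 20 sequences.
    print(round(haplotype_diversity([8, 5, 4, 2, 1]), 3))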
Background: Human Parvovirus B19 (PVB19) has been associated with myocarditis, putatively due to endothelial infection. Whether PVB19 infects endothelial cells and causes a modification of endothelial function and inflammation, and thus a disturbance of the microcirculation, has not been elucidated and could not be visualized so far.
Methods and Findings: To examine the PVB19-induced endothelial modification, we used green fluorescent protein (GFP) color reporter gene in the non-structural segment 1 (NS1) of PVB19. NS1-GFP-PVB19 or GFP plasmid as control were transfected in an endothelial-like cell line (ECV304). The endothelial surface expression of intercellular-adhesion molecule-1 (CD54/ICAM-1) and extracellular matrix metalloproteinase inducer (EMMPRIN/CD147) were evaluated by flow cytometry after NS-1-GFP or control-GFP transfection. To evaluate platelet adhesion on NS-1 transfected ECs, we performed a dynamic adhesion assay (flow chamber). NS-1 transfection causes endothelial activation and enhanced expression of ICAM-1 (CD54: mean±standard deviation: NS1-GFP vs. control-GFP: 85.3±11.2 vs. 61.6±8.1; P<0.05) and induces endothelial expression of EMMPRIN/CD147 (CD147: mean±SEM: NS1-GFP vs. control-GFP: 114±15.3 vs. 80±0.91; P<0.05) compared to control-GFP transfected cells. Dynamic adhesion assays showed that adhesion of platelets is significantly enhanced on NS1 transfected ECs when compared to control-GFP (P<0.05). The transfection of ECs was verified simultaneously through flow cytometry, immunofluorescence microscopy and polymerase chain reaction (PCR) analysis.
Conclusions: GFP color reporter gene shows transfection of ECs and may help to visualize NS1-PVB19 induced endothelial activation and platelet adhesion as well as an enhanced monocyte adhesion directly, providing in vitro evidence of possible microcirculatory dysfunction in PVB19-induced myocarditis and, thus, myocardial tissue damage.
The human DNA mismatch repair (MMR) process is crucial to maintain the integrity of the genome and requires many different proteins which must interact in a precise and coordinated manner. Germline mutations in MMR genes are responsible for the development of the hereditary form of colorectal cancer called Lynch syndrome. Various mutations, mainly in two MMR proteins, MLH1 and MSH2, have been identified so far, of which 55% are detected within MLH1, the essential component of the heterodimer MutLα (MLH1 and PMS2). Most of those MLH1 variants are pathogenic, but the relevance of missense mutations often remains unclear. Many different recombinant systems are applied to filter out disease-associated proteins, whereby fluorescently tagged proteins are frequently used. However, dye labeling might have deleterious effects on MutLα's functionality. Therefore, we analyzed the consequences of N- and C-terminal fluorescent labeling on the expression level, cellular localization and MMR activity of MutLα. Besides a significant influence of GFP- or Red-fusion on protein expression, we detected incorrect shuttling of singly expressed C-terminal GFP-tagged PMS2 into the nucleus and found that C-terminal dye labeling impaired the MMR function of MutLα. In contrast, N-terminally tagged MutLα retained correct functionality and can be recommended both for the analysis of cellular localization and MMR efficiency.
Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants.
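At its core, a gaze-contingent paradigm of the kind described reduces to a loop that polls the eye tracker and triggers a stimulus once gaze has dwelt in a target region; the Python sketch below is schematic, and get_gaze_position() and show_stimulus() are hypothetical stand-ins for the tracker and display calls actually used.

    import random, time

    def get_gaze_position():
        # Hypothetical stub standing in for the real eye-tracker API.
        return random.uniform(0, 1024), random.uniform(0, 768)

    def show_stimulus(name):
        # Hypothetical stub standing in for the real display routine.
        print("showing", name)

    TRIGGER_REGION = (400, 300, 520, 420)   # x_min, y_min, x_max, y_max in pixels
    DWELL_TIME = 0.2                        # seconds gaze must remain inside the region

    def run_trial(timeout=10.0):
        entered_at = None
        start = time.time()
        while time.time() - start < timeout:
            x, y = get_gaze_position()
            inside = (TRIGGER_REGION[0] <= x <= TRIGGER_REGION[2]
                      and TRIGGER_REGION[1] <= y <= TRIGGER_REGION[3])
            if inside:
                entered_at = entered_at or time.time()
                if time.time() - entered_at >= DWELL_TIME:
                    show_stimulus("new stimulus")   # contingent consequence of the eye movement
                    return True
            else:
                entered_at = None
        return False                                # no trigger within the timeout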
We present a computational method for the reaction-based de novo design of drug-like molecules. The software DOGS (Design of Genuine Structures) features a ligand-based strategy for automated ‘in silico’ assembly of potentially novel bioactive compounds. The quality of the designed compounds is assessed by a graph kernel method measuring their similarity to known bioactive reference ligands in terms of structural and pharmacophoric features. We implemented a deterministic compound construction procedure that explicitly considers compound synthesizability, based on a compilation of 25'144 readily available synthetic building blocks and 58 established reaction principles. This enables the software to suggest a synthesis route for each designed compound. Two prospective case studies are presented together with details on the algorithm and its implementation. De novo designed ligand candidates for the human histamine H4 receptor and γ-secretase were synthesized as suggested by the software. The computational approach proved to be suitable for scaffold-hopping from known ligands to novel chemotypes, and for generating bioactive molecules with drug-like properties.
Background: During early stages of brain development, secreted molecules, components of intracellular signaling pathways and transcriptional regulators act in positive and negative feed-back or feed-forward loops at the mid-hindbrain boundary. These genetic interactions are of central importance for the specification and subsequent development of the adjacent mid- and hindbrain. Much less, however, is known about the regulatory relationship and functional interaction of molecules that are expressed in the tectal anlage after tectal fate specification has taken place and tectal development has commenced.
Results: Here, we provide experimental evidence for reciprocal regulation and subsequent cooperation of the paired-type transcription factors Pax3, Pax7 and the TALE-homeodomain protein Meis2 in the tectal anlage. Using in ovo electroporation of the mesencephalic vesicle of chick embryos we show that (i) Pax3 and Pax7 mutually regulate each other's expression in the mesencephalic vesicle, (ii) Meis2 acts downstream of Pax3/7 and requires balanced expression levels of both proteins, and (iii) Meis2 physically interacts with Pax3 and Pax7. These results extend our previous observation that Meis2 cooperates with Otx2 in tectal development to include Pax3 and Pax7 as Meis2 interacting proteins in the tectal anlage.
Conclusion: The results described here suggest a model in which interdependent regulatory loops involving Pax3 and Pax7 in the dorsal mesencephalic vesicle modulate Meis2 expression. Physical interaction with Meis2 may then confer tectal specificity to a wide range of otherwise broadly expressed transcriptional regulators, including Otx2, Pax3 and Pax7.
Background: The European Centres of Reference Network for Cystic Fibrosis (ECORN-CF) established an Internet forum which provides the opportunity for CF patients and other interested people to ask experts questions about CF in their mother language. The objectives of this study were to: 1. develop a detailed quality assessment tool to analyze quality of expert answers, 2. evaluate the intra- and inter-rater agreement of this tool, and 3. explore changes in the quality of expert answers over the time frame of the project.
Methods: The quality assessment tool was developed by an expert panel. Five experts within the ECORN-CF project used the quality assessment tool to analyze the quality of 108 expert answers published on ECORN-CF from six language zones. 25 expert answers were scored at two time points, one year apart. Quality of answers was also assessed at an early and later period of the project. Individual rater scores and group mean scores were analyzed for each expert answer.
Results: A scoring system and training manual were developed analyzing two quality categories of answers: content and formal quality. For content quality, the grades based on group mean scores for all raters showed substantial agreement between two time points; however, this was not the case for the grades based on individual rater scores. For formal quality, the grades based on group mean scores showed only slight agreement between two time points and there was also poor agreement between time points for the individual grades. The inter-rater agreement for content quality was fair (mean kappa value 0.232 ± 0.036, p < 0.001), while only slight agreement was observed for the grades of the formal quality (mean kappa value 0.105 ± 0.024, p < 0.001). The quality of expert answers was rated high (four language zones) or satisfactory (two language zones) and did not change over time.
Conclusions: The quality assessment tool described in this study was feasible and reliable when content quality was assessed by a group of raters. Within ECORN-CF, the tool will help ensure that CF patients all over Europe have equal possibility of access to high quality expert advice on their illness.
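The inter-rater statistics above are Cohen's kappa values; purely as a reference for how such agreement between two raters is computed, a minimal Python sketch with invented grades (not the study's data):

    from sklearn.metrics import cohen_kappa_score

    # Illustrative grades from two raters on ten expert answers (hypothetical data).
    rater_a = ["good", "good", "fair", "poor", "good", "fair", "good", "poor", "fair", "good"]
    rater_b = ["good", "fair", "fair", "poor", "good", "good", "good", "fair", "fair", "good"]

    print(round(cohen_kappa_score(rater_a, rater_b), 3))
    # Rough reading: ~0.1 slight, ~0.2-0.4 fair, >0.6 substantial agreement.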
The present study addresses the problem whether negative priming (NP) is due to information processing in perception, recognition or selection. We argue that most NP studies confound priming and perceptual similarity of prime-probe episodes and implement a color-switch paradigm in order to resolve the issue. In a series of three identity negative priming experiments with verbal naming response, we determined when NP and positive priming (PP) occur during a trial. The first experiment assessed the impact of target color on priming effects. It consisted of two blocks, each with a different fixed target color. With respect to target color no differential priming effects were found. In Experiment 2 the target color was indicated by a cue for each trial. Here we resolved the confounding of perceptual similarity and priming condition. In trials with coinciding colors for prime and probe, we found priming effects similar to Experiment 1. However, trials with a target color switch showed such effects only in trials with role-reversal (distractor-to-target or target-to-distractor), whereas the positive priming (PP) effect in the target-repetition trials disappeared. Finally, Experiment 3 split trial processing into two phases by presenting the trial-wise color cue only after the stimulus objects had been recognized. We found recognition in every priming condition to be faster than in control trials. We were hence led to the conclusion that PP is strongly affected by perception, in contrast to NP which emerges during selection, i.e., the two effects cannot be explained by a single mechanism.
Few studies have looked at the potential of using diffusion tensor imaging (DTI) in conjunction with machine learning algorithms in order to automate the classification of healthy older subjects and subjects with mild cognitive impairment (MCI). Here we apply DTI to 40 healthy older subjects and 33 MCI subjects in order to derive values for multiple indices of diffusion within the white matter voxels of each subject. DTI measures were then used together with support vector machines (SVMs) to classify control and MCI subjects. Greater than 90% sensitivity and specificity was achieved using this method, demonstrating the potential of a joint DTI and SVM pipeline for fast, objective classification of healthy older and MCI subjects. Such tools may be useful for large scale drug trials in Alzheimer’s disease where the early identification of subjects with MCI is critical.
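A pipeline of the kind described above, voxel-wise diffusion indices fed into an SVM with cross-validated sensitivity and specificity, can be sketched as follows; the feature matrix and labels are synthetic placeholders, not the study's data:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import StratifiedKFold, cross_val_predict
    from sklearn.metrics import confusion_matrix

    # Synthetic stand-in: 73 subjects (40 controls = 0, 33 MCI = 1), 500 white-matter features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(73, 500))
    y = np.array([0] * 40 + [1] * 33)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    pred = cross_val_predict(clf, X, y, cv=cv)

    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))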
Place-based frequency discrimination (tonotopy) is a fundamental property of the coiled mammalian cochlea. Sound vibrations mechanically conducted to the hearing organ manifest themselves as slow moving waves that travel along the length of the organ, also referred to as traveling waves. These traveling waves form the basis of the tonotopic frequency representation in the inner ear of mammals. However, so far, due to the secure housing of the inner ear, these waves could only be measured partially over small accessible regions of the inner ear in a living animal. Here, we demonstrate the existence of tonotopically ordered traveling waves covering most of the length of a miniature hearing organ in the leg of bushcrickets in vivo using laser Doppler vibrometry. The organ is only 1 mm long and its geometry allowed us to investigate almost the entire length with a wide range of stimuli (6 to 60 kHz). The tonotopic location of the traveling wave peak was exponentially related to stimulus frequency. The traveling wave propagated along the hearing organ from the distal (high frequency) to the proximal (low frequency) part of the leg, which is opposite to the propagation direction of incoming sound waves. In addition, we observed a non-linear compression of the velocity response to varying sound pressure levels. The waves are based on the delicate micromechanics of cellular structures different to those of mammals. Hence, place-based frequency discrimination by traveling waves is a physical phenomenon that presumably evolved in mammals and bushcrickets independently.
Introduction: Despite the excellent anti-inflammatory and immunosuppressive action of glucocorticoids (GCs), their use for the treatment of inflammatory bowel disease (IBD) still carries significant risks in terms of frequently occurring severe side effects, such as the impairment of intestinal tissue repair. The recently-introduced selective glucocorticoid receptor (GR) agonists (SEGRAs) offer anti-inflammatory action comparable to that of common GCs, but with a reduced side effect profile.
Methods: The in vitro effects of the non-steroidal SEGRAs Compound A (CpdA) and ZK216348 were investigated in intestinal epithelial cells and compared to those of Dexamethasone (Dex). GR translocation was shown by immunofluorescence and Western blot analysis. Trans-repressive effects were studied by means of NF-κB/p65 activity and IL-8 levels, trans-activation potency by reporter gene assay. Flow cytometry was used to assess apoptosis of cells exposed to SEGRAs. The effects on IEC-6 and HaCaT cell restitution were determined using an in vitro wound healing model, cell proliferation by BrdU assay. In addition, influences on the TGF-β- or EGF/ERK1/2/MAPK-pathway were evaluated by reporter gene assay, Western blot and qPCR analysis.
Results: Dex, CpdA and ZK216348 were found to be functional GR agonists. In terms of trans-repression, CpdA and ZK216348 effectively inhibited NF-κB activity and IL-8 secretion, but showed less trans-activation potency. Furthermore, unlike SEGRAs, Dex caused a dose-dependent inhibition of cell restitution with no effect on cell proliferation. These differences in epithelial restitution were TGF-β-independent but Dex inhibited the EGF/ERK1/2/MAPK-pathway important for intestinal epithelial wound healing by induction of MKP-1 and Annexin-1 which was not affected by CpdA or ZK216348.
Conclusion: Collectively, our results indicate that, while their anti-inflammatory activity is comparable to Dex, SEGRAs show fewer side effects with respect to wound healing. The fact that SEGRAs did not have a similar effect on cell restitution might be due to a different modulation of EGF/ERK1/2 MAPK signalling.
Ubiquitination now ranks with phosphorylation as one of the best-studied post-translational modifications of proteins with broad regulatory roles across all of biology. Ubiquitination usually involves the addition of ubiquitin chains to target protein molecules, and these may be of eight different types, seven of which involve the linkage of one of the seven internal lysine (K) residues in one ubiquitin molecule to the carboxy-terminal diglycine of the next. In the eighth, the so-called linear ubiquitin chains, the linkage is between the amino-terminal amino group of methionine on a ubiquitin that is conjugated with a target protein and the carboxy-terminal carboxy group of the incoming ubiquitin. Physiological roles are well established for K48-linked chains, which are essential for signaling proteasomal degradation of proteins, and for K63-linked chains, which play a part in recruitment of DNA repair enzymes, cell signaling and endocytosis. We focus here on linear ubiquitin chains, how they are assembled, and how three different avenues of research have indicated physiological roles for linear ubiquitination in innate and adaptive immunity and suppression of inflammation.
Ubiquitin ligases and beyond
(2012)
First paragraph (this article has no abstract): In a review published in 2004 [1] and that still repays reading today, Cecile Pickart traced the evolution of research on ubiquitination from its origins in the proteasomal degradation of proteins through the revelation that it has a central role in cell cycle regulation and the recognition of regulatory roles for ubiquitin in intracellular membrane transport, cell signalling, transcription, translation, and DNA repair.
Synaptic long-term potentiation (LTP) at spinal neurons directly communicating pain-specific inputs from the periphery to the brain has been proposed to serve as a trigger for pain hypersensitivity in pathological states. Previous studies have functionally implicated the NMDA receptor-NO pathway and the downstream second messenger, cGMP, in these processes. Because cGMP can broadly influence diverse ion-channels, kinases, and phosphodiesterases, pre- as well as post-synaptically, the precise identity of cGMP targets mediating spinal LTP, their mechanisms of action, and their locus in the spinal circuitry are still unclear. Here, we found that Protein Kinase G1 (PKG-I) localized presynaptically in nociceptor terminals plays an essential role in the expression of spinal LTP. Using the Cre-lox P system, we generated nociceptor-specific knockout mice lacking PKG-I specifically in presynaptic terminals of nociceptors in the spinal cord, but not in post-synaptic neurons or elsewhere (SNS-PKG-I−/− mice). Patch clamp recordings showed that activity-induced LTP at identified synapses between nociceptors and spinal neurons projecting to the periaqueductal grey (PAG) was completely abolished in SNS-PKG-I−/− mice, although basal synaptic transmission was not affected. Analyses of synaptic failure rates and paired-pulse ratios indicated a role for presynaptic PKG-I in regulating the probability of neurotransmitter release. Inositol 1,4,5-triphosphate receptor 1 and myosin light chain kinase were recruited as key phosphorylation targets of presynaptic PKG-I in nociceptive neurons. Finally, behavioural analyses in vivo showed marked defects in SNS-PKG-I−/− mice in several models of activity-induced nociceptive hypersensitivity, and pharmacological studies identified a clear contribution of PKG-I expressed in spinal terminals of nociceptors. Our results thus indicate that presynaptic mechanisms involving an increase in release probability from nociceptors are operational in the expression of synaptic LTP on spinal-PAG projection neurons and that PKG-I localized in presynaptic nociceptor terminals plays an essential role in this process to regulate pain sensitivity.
We investigate the decisions of listed firms to go private once again. We start by revealing that, while a significant number of firms that go public are VC-backed, a disproportionate share of these VC-backed firms go private later on (they stay on the exchange for an average of 8.5 years). We interpret this very robust pattern to mean that IPOs of VC-backed firms are to a large extent a temporary rather than a permanent feature of the corporate governance of these firms. We investigate various potential hypotheses as to why VCs actually seem to be able to bring marginal firms to the exchange by relating the going-private decisions to various characteristics of the IPO market as well as to VC characteristics. We find strong support for the certification ability of VCs: more experienced and reputable VCs are more able to bring marginal firms to public exchanges via an IPO. These marginal firms backed by more reputable and experienced VCs are more likely to go private later on. Hence, our analysis suggests that IPOs backed by experienced VCs are most likely to be a temporary rather than the final stage in the life of the portfolio firm. We find no support that reputable VCs underprice their IPO exits more, implying that they have no need to leave more money on the table to take the marginal firms public.
Diatoms contribute largely to the total primary production of the ecosphere and are key players in global biogeochemical cycles. Their chloroplasts are surrounded by four membranes owing to their secondary endosymbiotic origin. Their thylakoids are arranged into three parallel bands and differentiation of thylakoid membranes into grana or stroma is not observed. The fucoxanthin chlorophyll a/c binding proteins act as the light harvesting proteins and play a role in photoprotection during excess light as well. The diatom genome encodes three different families of antenna proteins. Family I are the classical light harvesting proteins called "Lhcf". Family II are the red algae related Lhca-R1/2 proteins called "Lhcr" and family III are the photoprotective LI818 related proteins called "Lhcx".
All known Fcps have a molecular weight in the range of 17-23 kDa. They are membrane proteins and have shorter loops and termini compared to LHCs of higher plants and are therefore extremely hydrophobic. This makes the isolation of single specific Fcps using routine protein purification techniques difficult.
The purification of a specific Fcp-containing complex has not been achieved so far, and until this is done several questions concerning the light harvesting antenna systems of diatoms cannot be answered. For example: Which proteins interact specifically? Are various Fcps differently pigmented? Which pigments interact with each other and how? Which proteins contribute to photosystem-specific antenna systems? Can pure Fcps be reconstituted into crystals like LHCII proteins? In order to answer these questions, specific Fcp-containing complexes have to be purified. ...
The miniaturization of electronics is reaching its limits. Structures necessary to build integrated circuits from semiconductors are shrinking and could reach the size of only a few atoms within the next few years. At the latest at this point in time, the physics of nanostructures will gain importance in our everyday life. This thesis deals with the physics of quantum impurity models. All models of this class exhibit an identical structure: the simple and small impurity has only few degrees of freedom. It can be built out of a small number of atoms or a single molecule, for example. In the simplest case it can be described by a single spin degree of freedom; in many quantum impurity models it can be treated exactly. The complexity of the description arises from its coupling to a large number of fermionic or bosonic degrees of freedom (large meaning that we have to deal with particle numbers of the order of 10^{23}). An exact treatment thus remains impossible. At the same time, physical effects which arise in quantum impurity systems often cannot be described within a perturbative theory, since multiple energy scales may play an important role. One example for such an effect is the Kondo effect, where the free magnetic moment of the impurity is screened by a "cloud" of fermionic particles of the quantum bath.
The Kondo effect is only one example for the rich physics stemming from correlation effects in many-body systems. Quantum impurity models, and the oftentimes related Kondo effect, have regained the attention of experimental and theoretical physicists since the advent of quantum dots, which are sometimes also referred to as artificial atoms. Quantum dots offer an unprecedented control and tunability of many system parameters. Hence, they constitute a nice "playground" for fundamental research, while being promising candidates for building blocks of future technological devices as well.
Recently, Loss and DiVincenzo's proposal of a quantum computing scheme based on spins in quantum dots increased the efforts of experimentalists to coherently manipulate and read out the spins of quantum dots one by one. In this context, two topics are of paramount importance for future quantum information processing: since decoherence times have to be large enough to allow for good error correction schemes, understanding the loss of phase coherence in quantum impurity systems is a prerequisite for quantum computation in these systems. Nonequilibrium phenomena in quantum impurity systems also have to be understood before one may gain control of manipulating quantum bits.
As a first step towards more complicated nonequilibrium situations, the reaction of a system to a quantum quench, i.e. a sudden change of external fields or other parameters of the system, can be investigated. We give an introduction to a powerful numerical method used in this field of research, the numerical renormalization group method, and apply this method and its recent enhancements to various quantum impurity systems.
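For orientation, the core of the numerical renormalization group is a logarithmic discretization of the bath followed by iterative diagonalization along a semi-infinite chain; in the standard formulation (see, e.g., the review by Bulla, Costi and Pruschke), the recursion reads schematically

    H_{N+1} = \sqrt{\Lambda}\, H_N + \Lambda^{N/2}\, t_N \sum_{\sigma} \left( f^{\dagger}_{N\sigma} f_{N+1\sigma} + \mathrm{h.c.} \right), \qquad t_N \sim \Lambda^{-N/2},

with discretization parameter \Lambda > 1. Because the hoppings t_N decay exponentially, successive energy scales are separated and only the lowest-lying eigenstates need to be kept at each step; this sketch follows the textbook scheme and is not necessarily the exact variant used in this thesis.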
The main part of this thesis may be structured in the following way:
- Ferromagnetic Kondo Model,
- Spin-Dynamics in the Anisotropic Kondo and the Spin-Boson Model,
- Two Ising-coupled Spins in a Bosonic Bath,
- Decoherence in an Aharanov-Bohm Interferometer.
Introduction: Erectile dysfunction (ED) is common in men with systemic sclerosis (SSc) but the demographics, risk factors and treatment coverage for ED are not well known.
Method: This study was carried out prospectively in the multinational EULAR Scleroderma Trial and Research database by amending the electronic data-entry system with the International Index of Erectile Function-5 and items related to ED risk factors and treatment. Centres participating in this EULAR Scleroderma Trial and Research substudy were asked to recruit patients consecutively.
Results: Of the 130 men studied, only 23 (17.7%) had a normal International Index of Erectile Function-5 score. Thirty-eight per cent of all participants had severe ED (International Index of Erectile Function-5 score ≤ 7). Men with ED were significantly older than subjects without ED (54.8 years vs. 43.3 years, P < 0.001) and more frequently had simultaneous non-SSc-related risk factors such as alcohol consumption. In 82% of SSc patients, the onset of ED was after the manifestation of the first non-Raynaud's symptom (median delay 4.1 years). ED was associated with severe cutaneous, muscular or renal involvement of SSc, elevated pulmonary pressures and restrictive lung disease. ED was treated in only 27.8% of men. The most common treatment was sildenafil, whose efficacy is not established in ED of SSc patients.
Conclusions: Severe ED is a common and early problem in men with SSc. Physicians should address modifiable risk factors actively. More research into the pathophysiology, longitudinal development, treatment and psychosocial impact of ED is needed.
Background: In Emergency and Medical Admission Departments (EDs and MADs), prompt recognition and appropriate infection control management of patients with Highly Infectious Diseases (HIDs, e.g. Viral Hemorrhagic Fevers and SARS) are fundamental for avoiding nosocomial outbreaks.
Methods: The EuroNHID (European Network for Highly Infectious Diseases) project collected data from 41 EDs and MADs in 14 European countries, located in the same facility as a national/regional referral centre for HIDs, using specifically developed checklists, during on-site visits from February to November 2009.
Results: Isolation rooms were available in 34 facilities (82.9%): these rooms had an anteroom in 19, a dedicated entrance in 15, negative pressure in 17, and HEPA filtration of exhaust air in 12. Only 6 centres (14.6%) had isolation rooms with all characteristics. Personnel trained in the recognition of HIDs were available in 24 facilities; management protocols for HIDs were available in 35.
Conclusions: Preparedness level for the safe and appropriate management of HIDs is partially adequate in the surveyed EDs and MADs.
From 12.12.2010 to 17.12.2010, the Dagstuhl Seminar 10501 "Advances and Applications of Automata on Words and Trees" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Seminar: 10501 - Advances and Applications of Automata on Words and Trees. The aim of the seminar was to discuss and systematize the recent fast progress in automata theory and to identify important directions for future research. For this, the seminar brought together more than 40 researchers from automata theory and related fields of applications. We had 19 talks of 30 minutes and 5 one-hour lectures leaving ample room for discussions. In the following we describe the topics in more detail.
This article shows that there exist two particular linear orders such that first-order logic with these two linear orders has the same expressive power as first-order logic with the Bit-predicate FO(Bit). As a corollary we obtain that there also exists a built-in permutation such that first-order logic with a linear order and this permutation is as expressive as FO(Bit).
Background: Hepatitis C decreases health related quality of life (HRQL) which is further diminished by antiviral therapy. HRQL improves after successful treatment. This trial explores the course of and factors associated with HRQL in patients given individualized or standard treatment based on early treatment response (Ditto-study).
Methods: The Short Form (SF)-36 Health Survey was administered at baseline (n = 192) and 24 weeks after the end of therapy (n = 128).
Results: At baseline HRQL was influenced by age, participating center, severity of liver disease and income. Exploring the course of HRQL (scores at follow up minus baseline), only the dimension general health increased. In this dimension patients with a relapse or sustained response differed from non-responders. Men and women differed in the dimension bodily pain. Treatment schedule did not influence the course of HRQL.
Conclusions: Main determinants of HRQL were severity of liver disease, age, gender, participating center and response to treatment. Our results do not exclude a more profound negative impact of individualized treatment compared to standard, possibly caused by higher doses and extended treatment duration in the individualized group. Antiviral therapy might have a more intense and more prolonged negative impact on females.
Background: Europe was certified to be polio-free in 2002 by the WHO. However, wild polioviruses remain endemic in India, Pakistan, Afghanistan, and Nigeria, occasionally causing polio outbreaks, as in Tajikistan in 2010. Therefore, effective surveillance measures and vaccination campaigns remain important. To determine the poliovirus immune status of a German study population, we retrospectively evaluated the seroprevalence of neutralizing antibodies (NA) to poliovirus types 1, 2 and 3 (PV1, 2, 3) in serum samples collected from 1,632 patients admitted to the University Hospital of Frankfurt am Main, Germany, in 2001, 2005 and 2010.
Methods: Testing was done by using a standardized microneutralization assay.
Results: Immunity levels to PV1 were 84.2% (95% CI: 80.3-87.5), 90.4% (88.3-92.3) and 87.5% (85.4-88.8) in 2001, 2005 and 2010, respectively. For PV2, we found 90.8% (87.5-90.6), 91.3% (89.3-93.1) and 89.8% (88.7-90.9) in the same period. Seroprevalence to PV3 was 76.6% (72.2-80.6), 69.8% (66.6-72.8) and 72.9% (67.8-77.5) in 2001, 2005 and 2010, respectively. In 2005 and 2010, significantly lower levels of immunity to PV3 in comparison to PV1 and PV2 were observed. Since 2001, immunity to PV3 has been gradually, but not significantly, decreasing.
Conclusion: Immunity to PV3 is insufficient in our cohort. Due to increasing globalization and worldwide tourism, the danger of polio outbreaks has not been averted, not even in developed countries such as Germany. Therefore, vaccination remains necessary.
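The seroprevalence figures above carry 95% confidence intervals; a minimal sketch of how such binomial intervals can be obtained (using a Wilson interval as one common choice; the study's exact method and per-year denominators are not stated above, so the numbers are illustrative):

    from statsmodels.stats.proportion import proportion_confint

    # Illustrative only: e.g. 421 of 500 sera positive for neutralizing antibodies to PV1.
    positives, n = 421, 500
    low, high = proportion_confint(positives, n, alpha=0.05, method="wilson")
    print(f"seroprevalence {positives / n:.1%}, 95% CI {low:.1%}-{high:.1%}")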
Background: Ewing sarcoma patients have a poor prognosis despite multimodal therapy. Integration of combination immunotherapeutic strategies into first-/second-line regimens represents promising treatment options, particularly for patients with intrinsic or acquired resistance to conventional therapies. We evaluated the susceptibility of Ewing sarcoma to natural killer cell-based combination immunotherapy, by assessing the capacity of histone deacetylase inhibitors to improve immune recognition and sensitize for natural killer cell cytotoxicity.
Methods: Using flow cytometry, ELISA and immunohistochemistry, expression of natural killer cell receptor ligands was assessed in chemotherapy-sensitive/-resistant Ewing sarcoma cell lines, plasma and tumours. Natural killer cell cytotoxicity was evaluated in Chromium release assays. Using ATM/ATR inhibitor caffeine, the contribution of the DNA damage response pathway to histone deacetylase inhibitor-induced ligand expression was assessed.
Results: Despite comparable expression of natural killer cell receptor ligands, chemotherapy-resistant Ewing sarcoma exhibited reduced susceptibility to resting natural killer cells. Interleukin-15-activation of natural killer cells overcame this reduced sensitivity. Histone deacetylase inhibitor-pretreatment induced NKG2D-ligand expression in an ATM/ATR-dependent manner and sensitized for NKG2D-dependent cytotoxicity (2/4 cell lines). NKG2D-ligands were expressed in vivo, regardless of chemotherapy-response and disease stage. Soluble NKG2D-ligand plasma concentrations did not differ between patients and controls.
Conclusion: Our data provide a rationale for combination immunotherapy involving immune effector and target cell manipulation in first-/second-line treatment regimens for Ewing sarcoma.
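Natural killer cell cytotoxicity in chromium release assays is conventionally reported as percent specific lysis; as a reference, the standard definition (not quoted from this paper) is

    \%\ \text{specific lysis} = 100 \times \frac{\text{experimental release} - \text{spontaneous release}}{\text{maximum release} - \text{spontaneous release}},

with spontaneous and maximum release typically measured on target cells incubated without effectors and on detergent-lysed targets, respectively.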
Much is known about the computation in individual neurons in the cortical column. Also, the selective connectivity between many cortical neuron types has been studied in great detail. However, due to the complexity of this microcircuitry its functional role within the cortical column remains a mystery. Some of the wiring behavior between neurons can be interpreted directly from their particular dendritic and axonal shapes. Here, I describe the dendritic density field (DDF) as one key element that remains to be better understood. I sketch an approach to relate DDFs in general to their underlying potential connectivity schemes. As an example, I show how the characteristic shape of a cortical pyramidal cell appears as a direct consequence of connecting inputs arranged in two separate parallel layers.
The small bowel is essential to sustain alimentation, and small bowel Crohn's disease (CD) may severely limit its function. Small bowel imaging is a crucial element in diagnosing small bowel CD, and treatment control with imaging is increasingly used to optimize patient outcomes. Capsule endoscopy, balloon-assisted enteroscopy, and magnetic resonance imaging have thereby become key players in the management of CD patients. In this review, the role of small bowel imaging in diagnosing and managing Crohn's disease patients is discussed in detail.
Editorial : Andreas Dombret "Regulating Systemically Important Financial Institutions is Vitally Important" ; Research Money/Macro : Dimitris Christelis, Dimitris Georgarakos, Michael Haliassos "International Portfolio Differences: Environment versus Characteristics" ; Research Finance : Raimond Maurer, Ralph Rogalla, Yuanyuan Shen "Optimal Asset Allocation in Retirement with Open-end Real Estate Funds" ; Research Law : Theodor Baums "Shareholder Suits in German Company Law – An Empirical Study" ; Policy Platform : Helmut Siekmann, Patrick Tuschl "Constitutional Ruling on Court of Auditors' Review of Banks" ; Interview : Michael S. Barr "Information Does not Necessarily Lead to Understanding"
The study of meson production in proton-proton collisions in the energy range up to one GeV above the production threshold provides valuable information about the nature of the nucleon-nucleon interaction. Theoretical models describe the interaction between nucleons via the exchange of mesons. In such models, different mechanisms contribute to the production of the mesons in nucleon-nucleon collisions. The measurement of total and differential production cross sections provides information which can help in determining the magnitude of the various mechanisms. Moreover, such cross section information serves as an input to the transport calculations which describe e.g. the production of e+e− pairs in proton- and pion-induced reactions as well as in heavy ion collisions.
In this thesis, the production of ω and η mesons in proton-proton collisions at 3.5 GeV beam energy was studied using the High Acceptance DiElectron Spectrometer (HADES) installed at the Schwerionensynchrotron (SIS 18) at the Helmholtzzentrum für Schwerionenforschung in Darmstadt.

About 80 000 ω mesons and 35 000 η mesons were reconstructed. Total production cross sections of both mesons were determined. Furthermore, the collected statistics allowed for extracting angular distributions of both mesons as well as performing Dalitz plot studies.
The ω and η mesons were reconstructed via their decay into three pions (π+π−π0) in the exclusive reaction pp → pp π+π−π0. The charged particles were identified via their characteristic energy loss, via the measurement of their time of flight and momentum, or using kinematics.

The neutral pion was reconstructed using the missing mass method. A kinematic fit was applied to improve the resolution and to select events in which a π0 was produced.
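The missing mass method amounts to computing, event by event, the invariant mass of the four-momentum not carried by the detected particles; for the exclusive reaction pp → pp π+π−π0 a genuine π0 appears as a peak at the π0 mass,

    M_{\mathrm{miss}}^{2} = \left( P_{\mathrm{beam}} + P_{\mathrm{target}} - P_{p_1} - P_{p_2} - P_{\pi^+} - P_{\pi^-} \right)^{2} \approx m_{\pi^0}^{2},

with P denoting the measured four-momenta; the kinematic fit then imposes energy-momentum conservation (and, where used, the π0 mass constraint) to sharpen this selection.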
The correction of the measured yields for the effects of the spectrometer acceptance was done as a function of four variables (two invariant masses and two angles). Systematic studies of the acceptance for different input distributions were performed.

The measured yields were normalized to the number of measured events of elastic scattering. Systematic errors due to the methods of the data analysis and the background subtraction were investigated.
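Normalizing to elastic scattering replaces an absolute luminosity measurement by the known pp elastic cross section; schematically (a generic form of such a normalization, not necessarily the exact expression used in the thesis),

    \sigma_{\omega/\eta} = \frac{N_{\omega/\eta}/\varepsilon_{\omega/\eta}}{N_{\mathrm{el}}/\varepsilon_{\mathrm{el}}}\, \sigma_{\mathrm{el}},

where N are the measured yields, \varepsilon the acceptance and efficiency corrections, and \sigma_{el} the elastic cross section in the covered angular range.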
Production angular distributions of ω and η mesons were measured. Both mesons exhibit a slightly anisotropic angular distribution.

The Dalitz plot of ω meson production shows indications of resonant production. However, the deviation of the distribution from the one expected from phase space simulations is not large.

The Dalitz plot of η meson production shows a signal of production via the N(1535) resonance. The contribution of the N(1535) to the production was quantified to be about 47%. The angular distribution of η mesons does not show significant differences between resonant and non-resonant production.
The total production cross section of ω mesons in the reaction pp → ppω was determined to be 106.5 ± 0.9 (stat) ± 7.9 (sys) μb, where stat indicates the statistical error and sys the systematic error, while that of η mesons was determined to be 136.9 ± 0.9 (stat) ± 10.1 (sys) μb in the reaction pp → ppη.
Occurrence and sources of 2,4,7,9-tetramethyl-5-decyne-4,7-diol (TMDD) in the aquatic environment
(2011)
The aim of the present study was to identify the sources of 2,4,7,9-tetramethyl-5-decyne-4,7-diol (TMDD) into the aquatic environment and to investigate its occurrence in rivers and wastewater treatment plants (WWTPs). Therefore, TMDD was analyzed in 441 wastewater samples from influents and effluents of 27 municipal WWTPs, in 6 sludge samples, in 52 wastewater samples from 3 sewage systems of municipal WWTPs, in 489 surface samples from 24 rivers, in 9 wastewater samples of 3 paper-recycling industries and in 65 groundwater samples. TMDD was also analyzed in household paper products, in 23 samples of toilet papers, in 5 types of paper towels and in 12 types of paper tissues. The samples were collected between 2007 and 2011. The water samples were extracted with solid phase extraction (SPE) and the household paper samples with Soxhlet extraction. Gas chromatography-mass spectrometry (GC-MS) was used for quantification purposes.

Between November 2007 and January 2008, TMDD was detected in the river Rhine at Worms with permanently high concentrations (up to 1330 ng/L). The results showed that TMDD is uniformly distributed across the river at Worms. An increase of the mean TMDD concentration from approximately 500 ng/L to 1000 ng/L was registered in January 2008. Due to the minor fluctuations of the TMDD concentration during the sampling period, it is expected that the input of TMDD into the river is continuous. Therefore, TMDD might rather originate from effluents of municipal WWTPs than from temporal sources. The mean TMDD load based on the analysis of 147 water samples collected in the River Rhine was 62.8 kg/d, which is equivalent to 23 t/a, suggesting that TMDD must be used and/or produced in high quantities in order to be found in those high concentrations.

To determine if TMDD is discharged by effluents of municipal WWTPs into the rivers, 24-hour influent and effluent samples of four municipal WWTPs in the Frankfurt/Rhine-Main metropolitan region were collected between November 2008 and February 2010 and analyzed for TMDD. The TMDD influent concentrations varied between 134 ng/L and 5846 ng/L and the effluent concentrations between <LOQ (limit of quantitation) and 3539 ng/L. The TMDD elimination rates in the four WWTPs varied between 33% and 68%. The results showed that effluents of municipal WWTPs are an important source of TMDD in the aquatic environment because TMDD is not completely removed from the sewage during the wastewater treatment. Weekly and daily variations of the TMDD concentration in the influents of two municipal WWTPs indicated that both private households and indirect industrial dischargers contribute to the introduction of TMDD into the municipal sewage systems. A more detailed study of the TMDD elimination rate in the different wastewater treatment stages was carried out in the WWTP Niederrad/Griesheim in Frankfurt am Main. The results showed that the removal of TMDD mainly takes place during the aerobic biological treatment, where the elimination rate was 46%. In contrast, during the anoxic treatment the removal efficiency was only 1.4% and during the mechanical treatment the elimination rate was 19%.

To determine the sources of TMDD in the sewage, household paper products (paper tissues, toilet papers and paper towels) were analyzed for TMDD using Soxhlet extraction. TMDD was detected in 83% of the samples (n=40). The highest mean TMDD concentrations were found in recycled toilet paper (0.20 μg/g) and in paper towels (0.11 μg/g). In paper tissues and non-recycled toilet paper the mean TMDD concentrations were lower, 0.080 μg/g and 0.025 μg/g respectively. According to these results, the high TMDD influent concentrations found previously in municipal WWTPs (mean 1.20 μg/L) cannot be explained by migration of TMDD from the household paper products into the sewage. Thus indirect industrial dischargers are the cause of the high influent TMDD concentrations. Effluents of municipal WWTPs with different indirect industrial dischargers (textile-, metal processing-, food processing-, electroplating-, paper-recycling- and printing ink factories) were analyzed.
The highest mean TMDD concentrations were found in the effluents of municipal WWTPs that have paper-recycling (71.3 μg/L) and printing ink factories (138 μg/L) as indirect industrial dischargers. These results were confirmed by analyzing process wastewater of three paper-recycling factories located in Germany. High TMDD concentrations were detected and fluctuated between 1.83 μg/L and 113 μg/L. TMDD was also analyzed in the wastewater of a non-recycling-paper factory but its concentration was much lower (0.066 μg/L) indicating that TMDD is introduced into the processing water during the papermaking process due to the use of waste paper. Analyses of wastewater samples from different parts of the sewage pipes of a municipal WWTP in Hesse, which receives the wastewater from a printing ink factory, were carried out. The TMDD concentration in the wastewater sample from the sewage pipe of the printing ink factory was much higher (3,300 μg/L) than the TMDD concentration detected in the other wastewater samples from the sewage system (0.030 μg/L – 0.89 g/L). These results confirm the printing ink production as one of the principal sources of TMDD in the sewage. Analysis of surface water samples of the River Modau downstream from the effluent of the WWTP Nieder-Ramstadt showed TMDD concentrations of up to 28.0 μg/L. These high TMDD concentrations might be caused by the indirect wastewater discharges of a paint factory connected to the municipal sewage system. These results indicate that TMDD is introduced into the municipal WWTPs principally by indirect industrial dischargers and they are mainly paint and printing ink factories. The paper-recycling factories also represent an important source of TMDD in municipal WWTPs but indirectly. According to statements given by the representatives of two paper recycling factories neither TMDD or any other TMDD containing product is used or added during the papermaking process. Therefore, TMDD is washed out from the printing inks of the coloured waste paper and concentrated in the process wastewater in the closed water circuits of paper-recycling factories reaching rivers and municipal WWTPs. The occurrence and distribution of TMDD in surface waters in Germany was also studied. The results showed that TMDD is widely distributed across different rivers systems in the federal states of Hesse, North-Rhine-Westphalia, Bavaria, Baden-Wuerttemberg and Rhineland-Palatinate. In Hesse, TMDD was detected in the some of main rivers with mean concentrations of 812 ng/L (Schwarzbach, Hessian Ried), 374 ng/L (Kinzig), 393 ng/L (Main, at Frankfurt), 539 ng/L (Werra), 326 ng/L (Fulda), 151 ng/L (Emsbach) and 161 ng/L (Nidda). In small rivers (creeks) the mean TMDD concentrations varied between <LOQ (Diemel, Urselbach) and 1890 ng/L (Darmbach). The results showed that the TMDD concentrations in creeks are highly influenced by both effluents of WWTPs and by the distance between the sampling point and the nearest WWTP. Surface samples from sampling locations downstream from WWTPs dischargers showed higher TMDD concentrations (mean 518 ng/L) than sampling locations upstream from WWTPs dischargers (mean 35.1 ng/L). The behavior of TMDD during bank filtration was investigated at two locations, at a water utility company at the Lower River Rhine (urban area) and at the Oderbruch polder (rural area). The results indicated that TMDD is removed from the surface water by bank filtration at both sampling locations. 
The removal process probably takes place within the first meters of the aquifer (hyporheic zone) through biodegradation, since TMDD does not tend to adsorb to sediments and it was not found in the groundwater of monitoring wells. In groundwater samples from the Hessian Ried (n=23), TMDD was found in only five samples, and the highest TMDD concentration was 135 ng/L. According to these results, TMDD does not represent a concern for drinking water in Germany, since it does not reach the groundwater in high concentrations and has a low toxicity potential. The input of TMDD into the North Sea was estimated to be 60.7 t/a by considering the mean TMDD loads transported by the River Rhine at Wesel (58.3 t/a) and by the Meuse in the Netherlands (2.40 t/a). The estimated discharge of TMDD into rivers by German municipal WWTPs (8.19 t/a) and paper-recycling factories (9.24 t/a) seems too low considering that the mean TMDD load in the River Rhine downstream of Wesel is 58.3 t/a. However, owing to the high density of population and industry along the Lower Rhine, it is expected that additional relevant sources of TMDD are located along the River Rhine, increasing the transported load. According to the results of this PhD project, TMDD is a non-ionic surfactant contained in products that are applied to surfaces (printing inks and paints) and it has the potential to reach the aquatic environment. TMDD should therefore fulfill the requirement of 80% biodegradability established by the “Law on the Environmental Impact of Detergents and Cleaning Products” in Germany. However, given the only partial elimination of TMDD observed in municipal WWTPs (between 33% and 68%) and the absence of information about the execution of the biodegradation test on TMDD, it is unknown whether TMDD complies with this law. If it does not, its use as a surfactant in such products is questionable.
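As a quick plausibility check of the figures reported above, the riverine load follows from concentration and discharge, and the annual load from the daily one; the load relation and the WWTP elimination rate given below are standard definitions rather than formulas quoted from the thesis:

\[
L = c \cdot Q, \qquad
62.8\ \mathrm{kg/d} \times 365\ \mathrm{d/a} \approx 22.9\ \mathrm{t/a} \approx 23\ \mathrm{t/a},
\qquad
\eta = \frac{c_{\mathrm{influent}} - c_{\mathrm{effluent}}}{c_{\mathrm{influent}}} \times 100\,\%.
\]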
Human activities affect almost all areas of life on Earth (MEA 2005a; UNEP 2007). The destruction and alteration of natural habitats have been identified as the main cause of the worldwide loss of biodiversity (Harrison and Bruna 1999; Dale et al. 2000; Foley et al. 2005; MEA 2005a). Together with climate change, land-use change is therefore regarded as the most influential aspect of anthropogenic global change (MEA 2005a). Land-use change comprises both the conversion of natural habitats into agricultural land or settlements and the intensification of land use in already cultivated landscapes. These changes have far-reaching consequences for species diversity and frequently result in the loss of species as land-use intensity increases (Scholes and Biggs 2005).
Biodiversity and ecosystems provide many different functions, such as the production of oxygen, the purification of water and the pollination of crops.
Some of these functions are helpful, others important and yet others essential for human well-being (MEA 2005b; UNEP 2007). Ecosystem functions and the many benefits they provide have meanwhile become a central topic of interdisciplinary research between the social and natural sciences (Barkmann et al. 2008 and references therein). As a result, some confusion has arisen about the terms "ecosystem function" and "ecosystem service" (deGroot et al. 2002). Since the focus of my work lies on fundamental functions of ecosystems, I use the term ecosystem function in the following.
For many ecosystem functions it is still insufficiently understood how they are affected by external disturbances (Kremen and Ostfeld 2005; Balvanera et al. 2006). Ecosystem functions are rarely maintained by a single species; rather, they are usually sustained by a whole range of different taxonomic groups, each with its own particular requirements. These species, as well as their intra- and interspecific interactions, may respond quite differently to the same source or intensity of disturbance. This can make predictions about the behavior of ecosystem functions extremely difficult. ...
We provide a mathematical framework to model continuous time trading in limit order markets of a small investor whose transactions have no impact on order book dynamics. The investor can continuously place market and limit orders. A market order is executed immediately at the best currently available price, whereas a limit order is stored until it is executed at its limit price or canceled. The limit orders can be chosen from a continuum of limit prices.
In this framework we show how elementary strategies (hold limit orders with only finitely many different limit prices and rebalance at most finitely often) can be extended in a suitable way to general continuous time strategies containing orders with infinitely many different limit prices. The general limit buy order strategies are predictable processes with values in the set of nonincreasing demand functions (not necessarily left- or right-continuous in the price variable). It turns out that this family of strategies is closed and any element can be approximated by a sequence of elementary strategies.
Furthermore, we study Merton’s portfolio optimization problem in a specific instance of this framework. Assuming that the risky asset evolves according to a geometric Brownian motion, a proportional bid-ask spread, and Poisson execution times for the limit orders of the small investor, we show that the optimal strategy consists in using market orders to keep the proportion of wealth invested in the risky asset within certain boundaries, similar to the result for proportional transaction costs, while within these boundaries limit orders are used to profit from the bid-ask spread.
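For orientation, a minimal sketch of the model ingredients described above; the symbols ε (relative half-spread), λ (execution intensity of limit orders), γ (relative risk aversion) and r (risk-free rate) are illustrative assumptions, not notation taken from the paper:

\[
dS_t = S_t\,(\mu\,dt + \sigma\,dW_t), \qquad
\mathrm{bid}_t = (1-\varepsilon)\,S_t, \quad \mathrm{ask}_t = (1+\varepsilon)\,S_t,
\]
\[
\pi_t = \frac{\text{wealth held in the risky asset}}{\text{total wealth}}, \qquad
\pi_t \in [\underline{\pi}, \overline{\pi}] \ \text{enforced by market orders},
\]
with limit orders, executed at the jump times of a Poisson process with intensity λ, used inside the band to earn the spread; in the frictionless limit ε → 0 one would expect the band to collapse to the classical Merton proportion π* = (μ − r)/(γσ²).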
At the end of the 1970s, five years after the introduction of the first commercial medical computed tomography scanner, tomography was used for the first time for the diagnostics of particle beams at the Los Alamos Scientific Laboratory. In tomography, a two-dimensional image of the density distribution (a slice) is approximated from one-dimensional projections, so-called profiles, which are recorded at as many angles around an object as possible. This is made possible by the Fourier slice theorem, which goes back to the transform introduced by Johann Radon as early as 1917. In theory, the two-dimensional density distribution can be determined exactly if projections with infinitely fine resolution over infinitely many angles around the object enter the reconstruction. By reconstructing many slices, a three-dimensional image of the density distribution in an object, in this case an ion beam, can be computed, provided the object is not optically dense.
The profiles in non-invasive beam diagnostics are obtained from CCD camera images of beam-induced fluorescence, which is caused by the admission of residual gas. Profiles obtained by other methods (e.g. grid measurements) are also conceivable. At locations with high beam energy, however, a non-invasive form of profile measurement is indispensable, both for the quality of the beam and for the protection of the measuring devices.
Over the last 40 years, many important advances have been made in the field of beam tomography:
1. Initially, only very few profiles were available, so that the method of filtered back projection (FBP), which derives directly from the Fourier slice theorem and is also used in medicine, could not be applied. To solve this problem, iterative methods such as the algebraic reconstruction technique (ART) and the maximum entropy method (MEM) were adapted for beam tomography, so that a reconstruction became possible even with a very small number of profiles.
2. In addition to real-space tomography, phase-space tomography was developed, so that a reconstruction of the six-dimensional phase space, which describes an ion beam in its entirety, is now possible.
3. For a long time, the projections were obtained from a few fixed ports (multi-port technique), which strongly limits the number of possible projections. Later, a method was developed that rotates the beam by means of quadrupoles (quad-scan technique), so that many projections can be measured from a single port and even FBP can be applied.
4. Most efforts were aimed at using tomography as a non-invasive emittance measurement method, which, owing to the large and still increasing energies in modern accelerators, remains an important problem to this day. To use tomography for emittance measurement, a reconstruction of the phase space is carried out. The problem is that this requires a priori knowledge of the beam transport matrix, yet the calculated transport matrix does not agree with the actual beam transport, because at high energies the transport is altered non-linearly by space charge. Good progress has been made in estimating the actual transport matrix, so that phase-space tomography can nevertheless be carried out with sufficiently good results.
Despite all this progress and development, tomography has not yet become a widespread method in beam diagnostics. The reason is that setting up a tomography system requires a complex sequence of numerous decisions and broad knowledge from many different fields, and this considerable additional effort must be justified by a significant benefit. The great value of tomography for beam diagnostics and for the study of beam dynamics has, however, remained largely unrecognized to this day and is still reduced to the development of a non-invasive method for emittance determination. A second obstacle has been the trade-off between accuracy and space requirements (high accuracy from many projections with the quad-scan technique over several meters, or low accuracy from few projections with the multi-port technique on less than one meter). Tomography can be of great use for the online monitoring of important machine parameters during beam operation as well as for detailed analyses of the beam dynamics (modeling), far beyond the implementation of a non-invasive emittance measurement method.
Ensuring this requires two things. First, the trade-off between accuracy and space requirements must be eliminated. To this end, a rotatable vacuum chamber was developed in this work which, following the example of medical tomography scanners, can travel around the beam in more than 5000 angular steps while maintaining a vacuum of at least 10⁻⁷ mbar and occupying less than 400 mm of the beamline. Second, the implementation of tomography must be simplified by specifying schematic steps and decisions. A beam tomography must always be implemented with its particular purpose in mind, since individual elements of the tomography, such as the measuring device and hence the number of profiles, the tomography algorithm to be used, and the parameters to be determined, may differ depending on the application. However, the necessary decisions can be arranged in a scheme that simplifies and accelerates the implementation of the tomography. For this purpose, a diagnostic pipeline and a decision scheme were introduced in this work, and the implementation according to this scheme was demonstrated using the example of a beam tomography for the Frankfurt neutron source (FRANZ), with the corresponding questions and decisions discussed. It is shown how the standard beam parameters required for monitoring can be obtained from the measurement data through the processing of the data by the tomography. In addition, a layer model is introduced with which non-standard parameters or newly modeled beam parameters can be developed for detailed analyses of the beam dynamics beyond the standard parameters. This work is intended to provide a basic concept for the routine implementation of tomography in beam diagnostics. For use in monitoring during beam operation, the time required to determine the standard parameters must still be reduced considerably. For the use of phase-space tomography, an idea is still needed to make the arctangent-shaped progression of the calculated phase-space rotation angles more compatible with the FBP requirement of equidistant projection angles.
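For reference, the reconstruction principle invoked above can be stated compactly; both formulas below are the textbook versions (projection-slice theorem and the Kaczmarz/ART update), not results specific to this thesis:

\[
p_\theta(s) = \int_{\mathbb{R}} f(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta)\,dt,
\qquad
\hat{p}_\theta(\omega) = \hat{f}(\omega\cos\theta,\ \omega\sin\theta),
\]
i.e. the one-dimensional Fourier transform of a profile taken at angle θ equals the two-dimensional Fourier transform of the density f along the corresponding line through the origin, so that with infinitely many angles and infinitely fine sampling f is determined exactly. When only a few profiles are available, iterative schemes such as ART update the discretized image x from one measured ray sum b_i at a time:

\[
x^{(k+1)} = x^{(k)} + \lambda_k\,\frac{b_i - \langle a_i, x^{(k)}\rangle}{\lVert a_i\rVert^2}\,a_i ,
\]
where a_i contains the intersection lengths of ray i with the image pixels and λ_k is a relaxation parameter.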
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads and monadic concurrent evaluation, and it includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
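The abstract states that futures can be implemented with unsafeInterleaveIO; a minimal sketch of such an implementation in Concurrent Haskell is shown below (an illustrative construction under that assumption, not the authors' code):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import System.IO.Unsafe (unsafeInterleaveIO)

-- Illustrative sketch of a concurrent, implicit future: the action runs in
-- its own thread, while the caller obtains its result lazily and only blocks
-- on the MVar when the value is actually demanded.
future :: IO a -> IO a
future act = do
  mv <- newEmptyMVar
  _  <- forkIO (act >>= putMVar mv)
  unsafeInterleaveIO (takeMVar mv)

In this reading, the conservativity result guarantees that equivalences of pure expressions survive such a disciplined use of unsafeInterleaveIO, whereas its arbitrary use breaks them.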
We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is constructed modularly, by first adding monadic IO-actions to the machine and then, in a second step, adding concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF w.r.t. may- and should-convergence.
Lipid-laden alveolar macrophages and pH monitoring have been used in the diagnosis of chronic aspiration in children with gastroesophageal reflux (GER). This study was conducted to test for a correlation between the detection of alimentary pulmonary fat phagocytosis and an increasing amount of proximal gastroesophageal reflux, on the assumption that proximal gastroesophageal reflux correlates better with aspiration than distal GER. Patients aged 6 months to 16 years with unexplained recurrent wheezy bronchitis and bronchial hyperreactivity, or recurrent pneumonia with chronic cough, underwent 24-hour double-channel pH monitoring and bronchoscopy with bronchoalveolar lavage (BAL). Aspiration of gastric content was assessed by counting lipid-laden alveolar macrophages in BAL specimens. There were no correlations between any pH-monitoring parameters and counts of lipid-laden macrophages in the whole study population, even when the analysis was restricted to patients with an abnormal reflux index indicating clinically significant GER. Quantifying lipid-laden alveolar macrophages from BAL in children with gastroesophageal-related respiratory disorders does not have an acceptable specificity to prove chronic aspiration as an underlying etiology. Therefore, research into other markers of pulmonary aspiration is needed.
During the last years, the chemopreventive activity of NSAIDs against a great variety of tumors has been intensively investigated. COX-2 seemingly plays a major part in tumorigenesis and tumor development, as underlined by several studies in animals and humans. At first, NSAIDs were thought to accomplish chemoprevention by inhibition of COX-2, as their known mode of action comprises unselective inhibition of the COX enzymes. However, further studies revealed COX-independent mechanisms. Sulindac, a well-established drug used to treat inflammation and pain, exerts the most prominent chemopreventive action, mainly in colorectal cancer and FAP, and belongs to the group of NSAIDs inhibiting both COX isoforms. As interference with the AA metabolism is evident, it was speculated whether Ssi has targets other than the COX enzymes, which would provide evidence for and an explanation of its beneficial side-effect profile and its ability to reduce tumor growth. 5-LO is another master enzyme of the AA cascade which produces inflammatory lipid mediators (LTs) upon stimulation in inflamed tissues. The present work should answer the question whether Ssi targets the 5-LO pathway and examine the molecular mechanisms behind Ssi-mediated 5-LO inhibition. As COX-2 is upregulated during carcinogenesis and is inhibited by Ssi, further investigations should show regulatory effects of Ssi on 5-LO gene expression in MM6 cells and whether Sp1, as a common transcription factor, is involved in such a regulation. As the use of NO-NSAIDs seems to be a promising strategy with regard to their chemopreventive and gastroprotective effects compared to the parent NSAIDs, a possible interaction with the 5-LO pathway as a second, potent target should additionally be elucidated. In the first section it was demonstrated that the pharmacologically active metabolite of sulindac, Ssi, targets 5-LO. Ssi inhibited 5-LO in ionophore A23187- and LPS/fMLP-stimulated human PMNL (IC50 ≈ 8–10 μM). Importantly, Ssi efficiently suppressed 5-LO in human whole blood at clinically relevant plasma levels (IC50 = 18.7 μM). Ssi was 5-LO-selective, as no inhibition of related lipoxygenases (12-LO, 15-LO) was observed. The sulindac prodrug and the other metabolite, sulindac sulfone, failed to inhibit 5-LO. Mechanistic analysis demonstrated that Ssi directly suppresses 5-LO with an IC50 of 20 μM. Together, these findings may provide a novel molecular basis to explain the COX-independent pharmacological effects of sulindac under therapy. The second part of the work, dealing with the analysis of Ssi’s inhibitory mechanism on 5-LO, showed that Ssi loses potency in cellular systems where membrane constituents are present. The addition of microsomal fractions of PMNL to crude 5-LO enzyme recovered enzyme activity to ~100%. Lipids that selectively stimulate 5-LO activity, such as PC, and that participate in 5-LO membrane interactions via the regulatory C2-like domain of 5-LO, counteracted the Ssi-mediated inhibition of 5-LO-wt in a concentration-dependent manner. Lastly, a protein mutant lacking three tryptophan residues essential for linking the enzyme to nuclear membranes and for deploying catalytic activity was not influenced by Ssi and retained enzyme activity in a cell-free assay. Ssi is thus the first 5-LO inhibitor on the market that interacts with the C2-like domain of the enzyme and can therefore serve as a novel lead structure for 5-LO inhibitors.
An influence of Ssi on 5-LO gene expression could be detected in differentiated MM6 cells, as described in results chapter 3 (4.3). Ssi downregulated the 5-LO mRNA level after 72 hrs of incubation in differentiated MM6 cells to ~20% of the control at concentrations of 10 μM. Concomitantly, mRNA levels of Sp1 were suppressed. Reporter gene studies identified Sp1 as the most probable regulating agent involved in the Ssi-mediated 5-LO mRNA downregulation, as co-transfection of increasing amounts of Sp1 could abrogate the effect. A ChIP assay identified Sp1 as a critical transcription factor, as Sp1 binding to the 5-LO promoter decreased in the presence of Ssi. Lastly, three NO-NSAIDs (NO-sulindac, NO-naproxen, NO-aspirin) were tested for their ability to inhibit 5-LO product formation. In intact PMNL, all compounds showed effective inhibition of 5-LO activity, and NO-sulindac was the most potent with an IC50 value of ~3 μM. NO-ASA inhibited 5-LO with IC50 values of ~30 μM and showed a non-competitive mode of action in cell-based assays. On human recombinant 5-LO, all compounds again showed inhibitory potency, with NO-sulindac again suppressing LT biosynthesis with an IC50 value comparable to that in intact cellular systems. Unfortunately, all inhibitors showed a loss of potency when tested for inhibition of 5-LO product synthesis in human whole blood, as higher concentrations of up to 100 μM were needed to reach at least 55% enzyme inhibition. Nevertheless, this strategy of 5-LO inhibition seems promising and requires further experimental approaches to gain more insight into the mechanism of 5-LO inhibition by NO-NSAIDs.
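For reference, the IC50 values quoted throughout denote the inhibitor concentration at which enzyme activity is halved; in a generic concentration-response model (a sketch with Hill coefficient h, not a fit to the data reported here) the remaining 5-LO activity is

\[
\frac{v([I])}{v_0} = \frac{1}{1 + \bigl([I]/\mathrm{IC_{50}}\bigr)^{h}},
\]
so that v/v0 = 0.5 at [I] = IC50 by definition.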
5-Lipoxygenase (5-LO) catalyzes the first two steps in leukotriene (LT) biosynthesis. In a two-step reaction the enzyme oxygenates arachidonic acid (AA) and dehydrates the resulting hydroperoxide intermediate to form the highly unstable epoxide leukotriene A4 (LTA4) (20). LTA4 can then be further metabolized by two terminal synthases, yielding either the potent chemoattractant leukotriene B4 (LTB4) or the cysteinyl leukotrienes (CysLTs). 5-LO enzyme expression is primarily found in mature leukocytes (22), where the enzyme can reside either in the cytoplasm or in the nucleus associated with euchromatin (29). In intact cells its enzymatic activity is embedded in a complicated network regulating LT synthesis, which depends on various factors that vary with the cell type and the nature of the stimulus. Factors such as the amount of free AA released by phospholipase A2 enzymes, the levels of the enzymes involved, the catalytic activity per enzyme molecule and the availability of different small molecules influence 5-LO activity (36).
The 5-LO-derived LTs are lipid mediators which have been shown to mediate primarily inflammatory and allergic reactions, and their role in the pathogenesis of asthma is well defined. CysLTs are among the most potent bronchoconstrictors studied in man so far and play an important role in airway remodeling. LTB4 has no bronchoconstrictive effects in healthy and asthmatic humans but displays potent chemoattractant properties on neutrophils and increases leukocyte adhesion to the vessel wall endothelium (22). Thereby, LTB4 enhances the capacity of macrophages and neutrophils to ingest and kill microbes. In concert with LTB4, histamine and prostaglandin E2 (PGE2), CysLTs are thought to maintain the tone of the human airways (82).
Besides their well-studied role in asthma, 5-LO-derived LTs have also been implicated in cardiovascular diseases and cancer. In contrast to healthy tissues, LT pathway enzymes and receptors were found to be abundantly expressed in cancer tissues and in atherosclerotic lesions of the aorta, heart and carotid artery (86). Pharmacological inhibition of 5-LO potently suppressed tumour cell growth by inducing cell cycle arrest and triggering cell death via the intrinsic apoptotic pathway (92, 93). In several studies, LTs were found to exert cardiovascular actions by promoting plasma leakage in postcapillary venules, coronary artery vasoconstriction and impaired ventricular contraction, leading to reduced coronary blood flow and cardiac output (24). Unfortunately, the precise molecular mechanisms through which LTs influence carcinogenesis and cardiovascular diseases are still incompletely understood.
In contrast, an increasing number of studies question the correlation between 5-LO and cancer (95-97), since extreme LT concentrations were applied to induce proliferative effects in the majority of the publications. A few studies show susceptibility towards 5-LO products at physiological concentrations or achieve anti-proliferative effects by applying low concentrations of 5-LO inhibitors (98) ...
The aim of this study is a better understanding of radiation processes in regional climate models (RCMs) in order to quantify their impact and to reduce possible errors. A first important step toward this aim was to examine the accuracy of the components of the radiation budget in regional climate simulations. To this end, the simulated radiation budgets of two regional climate simulations for Europe were compared with a satellite-based reference. The simulations with the RCM COSMO-CLM showed some serious under- and overestimations of short- and long-wave net radiation in Europe. However, taking into account the differences between the reference datasets, the results of the COSMO-CLM were quite satisfactory.
Using statistical methods, the influence of potential sources of uncertainty was estimated. Uncertainties in the cloud cover and surface albedo had a significant impact on uncertainties in short-wave net radiation; the explained variance of uncertainties in cloud cover was two to three times higher than that of uncertainties in surface albedo. Uncertainties in the cloud cover also resulted in significant errors in the net long-wave radiation, whereas the influence of uncertainties in soil temperature on errors in the long-wave radiation budget was low or even negligible. These results were confirmed in a comparison with simulations of the REMO and ALADIN regional climate models. It is reasonable to expect that a better parameterization of relatively simple parameters such as cloud cover and surface albedo is a means of significantly improving the simulation of radiation budget components in the COSMO-CLM.
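Here "explained variance" is used in the usual regression sense; the following standard definition is given only to fix the notation for the R² values quoted below, with y the radiation-budget error and ŷ its value predicted from the respective uncertainty source (e.g. cloud cover or albedo):

\[
R^{2} = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}.
\]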
An important question for the application of RCMs is whether the radiation uncertainties and their driving factors are comparable when the model is applied in a region other than the one for which it was originally developed. Comparisons of the simulated radiation budgets of different RCMs for West Africa showed that deficiencies in the simulation of short- and long-wave radiation fluxes were widespread: most of the tested models showed considerable under- or overestimations of the short- and long-wave radiation fluxes.
As in Europe, uncertainties in cloud cover were a significant factor affecting uncertainties in the simulated radiation fluxes in the simulations for Africa. However, for the African simulations, uncertainties in the parameterization of the surface albedo were much more important than in Europe; on average over land, uncertainties in the cloud cover and in the surface albedo were of similar importance. Uncertainties in the simulated soil temperature were also more important in Africa and reached, over land, mean explained variances (R² ≈ 0.2) similar to those of the uncertainties in the cloud cover. This indicates a geographical dependence of the model error. The study confirmed the assumption that an improved parameterization of relatively simple parameters such as the surface albedo in RCMs leads to a significant improvement in the modeled radiation budget, particularly in Africa.
In a next step, the influence of errors in the simulated radiation budget components on the simulation of climate processes, such as the West African monsoon (WAM), was investigated. The evaluation of ERA-Interim- and ECHAM5-driven COSMO-CLM simulations for Africa showed that the main features of the WAM were well reproduced by the model, but there were only slight improvements compared to the driving data. The index of convective activity in the model simulations was much too high, and precipitation was underestimated in large parts of tropical Africa. The partly considerable differences between the ERA-Interim- and ECHAM5-driven simulations demonstrated the sensitivity of the RCM to the boundary conditions and in particular to the sea surface temperature. An excessive northward shift of the monsoon in the model was influenced by the land-sea temperature gradient and the strength of the Saharan heat low. Consequently, part of the error was due to the driving data, while another part was produced by the model itself.
By modifying the parameterization of the bare soil albedo, the errors in the radiation budget and in the 2 m temperature in the Sahara region were significantly reduced. Similarly, the overestimation of precipitation and convection in the Sahel was reduced. The effect of this modification on the examined WAM area was small. This confirmed that, especially in desert regions, errors in the surface albedo were a driving factor for errors in the radiation budget. However, there are other important factors, not yet sufficiently understood, that have a strong influence on the quality of the simulation of the WAM.
The analysis of the current state, the quantification of error sources and the highlighting of the connections between them made it possible to identify means of reducing uncertainties in the simulated radiation in RCMs and to gain a better understanding of radiation processes. However, the magnitude of the errors found, the number of possible influencing factors and the complexity of their interactions indicate that there is still a need for further research in this area.
Background and Purpose: Targeted drugs have augmented the cancer treatment armamentarium. Based on their molecular specificity, it was initially believed that these drugs had significantly fewer side effects. However, it is now accepted that all of these agents have their specific side effects. Given the multimodal approach, special emphasis has to be placed on putative interactions between conventional cytostatic drugs, targeted agents and other modalities. The interaction of targeted drugs with radiation harbours special risks, since awareness of interactions and even synergistic toxicities is lacking. At present, only limited data are available regarding combinations of targeted drugs and radiotherapy. This review gives an overview of the current knowledge on such combined treatments.
Material and methods: The PubMed database was searched using the following MeSH headings and combinations of these terms: radiotherapy AND cetuximab/trastuzumab/panitumumab/nimotuzumab, bevacizumab, sunitinib/sorafenib/lapatinib/gefitinib/erlotinib/sirolimus, thalidomide/lenalidomide, as well as erythropoietin. For citation cross-checking, the ISI Web of Science database was used with the same search terms.
Results: Several classes of targeted substances may be distinguished: small molecules (including kinase inhibitors and other specific inhibitors), antibodies, and anti-angiogenic agents. Combining these agents with radiotherapy may lead to specific toxicities or may negatively influence the efficacy of RT. Although there is only little information on the interaction of molecularly targeted drugs and radiotherapy in clinical settings, several critical incidents have been reported.
Conclusions: The addition of molecularly targeted drugs to conventional radiotherapy outside of approved regimens or clinical trials warrants careful consideration, especially when used in conjunction with hypo-fractionated regimens. Clinical trials are urgently needed in order to address the open questions regarding efficacy and early and late toxicity.