Top-down and bottom-up approaches are the general methods used to analyse proteomic samples today; the bottom-up approach, however, has been dominant over the last decade. Establishing a bottom-up method involves not only the choice of adequate instruments and the optimisation of experimental parameters, but also choosing the right experimental conditions and sample preparation steps. LC-ESI MS/MS has been widely used in this field due to its advanced automation. The primary objective of the present study was to establish a sensitive high-throughput nLC-MALDI MS/MS method for the identification and characterisation of proteins in biological samples. The method establishment included optimisation and validation of parameters such as the capillaries in the HPLC systems, gradient slopes, column temperature, spotting frequencies and the MS and MS/MS acquisition methods. The optimisation was performed using two HPLC systems (Agilent 1100 series and Proxeon Easy nLC system), three spotters and the 4800 MALDI-TOF/TOF analyzer. Furthermore, sample preparation protocols were modified to fit the established nLC-MALDI-TOF/TOF platform. The potential of this method was demonstrated by the successful analysis of complex protein samples isolated from lipid particles, pre-adipocyte/adipocyte tissues, membrane proteins and proteins pulled down in protein-protein interaction studies. Despite the small amount of protein in lipid particles or oil bodies, and the challenges encountered in studying such proteins, 41 proteins (6 novel + 14 mammal-specific + 21 visceral-specific) were added to the known secretome of human subcutaneous (pre)adipocytes, and 6 novel proteins were localised in yeast lipid particles. Protein-protein interaction studies present another area of application. Here the analytical challenges are mostly due to the loss of binding partners during sample clean-up and the need to differentiate true interactors from non-specific background.
Novel interaction partners for the AF4•MLL and AF4 protein complexes were identified. Furthermore, a novel sample preparation protocol for the analysis of membrane proteins, based on the less specific protease elastase, was established. Compared to trypsin, a higher sequence coverage and a higher coverage of the transmembrane domains were achieved. The use of this enzyme in proteomics has been limited because of its non-specific cleavage. However, from the results obtained in these studies, elastase was found to cleave preferentially at the C-terminal side of the amino acids A, V, L, I, S and T. The advantage of the established protocol over conventional protocols is that the same enzyme can be used for shaving the soluble domains of intact proteins in membranes and for digesting the hydrophobic domains after solubilisation. Furthermore, the solvents used are compatible with the nLC-MALDI setup. In addition, it was shown that for less specific enzymes a higher mass accuracy is required to reduce the rate of false-positive identifications, since current search engines are not well adapted to these types of enzymes. A brief statistical analysis of the MS/MS data obtained from the LC-MALDI TOF/TOF system showed that for less specific enzymes, under high-energy collision conditions, approximately 43 % of the fragment ions could not be matched to the known y- and b-type ions and their resulting internal fragments. This limitation greatly influenced the search results. It can, however, be overcome by modifying the N-terminal amino acids with basic moieties such as TMT. The use of elastase as a digestion enzyme in the proteomic workflow further increased the complexity of the sample, so an orthogonal multidimensional separation was necessary. Offgel-IEF, in which peptides are separated according to their pI, was used as the first separation dimension.
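The elastase cleavage preference reported in the abstract (cutting C-terminally to A, V, L, I, S and T) can be illustrated with a short in-silico digestion sketch. The function name and example sequence are purely illustrative and not part of the study's pipeline; missed cleavages and minimum peptide lengths for MS detection are not modeled.

```python
# In-silico digestion sketch based on the cleavage preference reported
# in the abstract: elastase cuts C-terminally to A, V, L, I, S and T.
# Names and the example sequence are illustrative assumptions.

ELASTASE_SITES = set("AVLIST")

def digest(sequence, sites=ELASTASE_SITES):
    """Split a protein sequence after every residue in `sites`."""
    peptides, start = [], 0
    for i, residue in enumerate(sequence):
        if residue in sites:
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):        # keep any C-terminal remainder
        peptides.append(sequence[start:])
    return peptides

print(digest("MKAVQWT"))  # ['MKA', 'V', 'QWT']
```

The many small fragments such a rule produces illustrate why elastase digests are far more complex than tryptic ones, motivating the orthogonal Offgel-IEF separation described in the abstract.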
However, the acquired samples could not be loaded onto the nLC when using the standard protocol, due to the high viscosity of the concentrated samples. In order to achieve compatibility of the Offgel-IEF with the nLC-MALDI-TOF/TOF platform, the Offgel-IEF separation protocol was modified by omitting glycerol, the cause of the viscous solution. The novel glycerol-free protocol is advantageous over the conventional method because the samples can be picked up and loaded directly onto the pre-column without an increase in back pressure or subsequent pre-column clogging. The glycerol-free protocol was then assessed using purple membrane and a membrane fraction of C. glutamicum. The results obtained were comparable to those in published reports; the absence of glycerol therefore did not affect the separation efficiency of the Offgel-IEF. In addition, the applicability of elastase and the glycerol-free Offgel-IEF for the quantitation of membrane proteins was assessed. Most of the unique peptides identified were in the acidic region; 85 % were focused into only one fraction and approximately 95 % into only two fractions. These results are in accordance with previously published results (Lengqvist et al., 2007). Compared with theoretical digests of the proteins identified in this study, it can be concluded that the basic moiety (TMT) on the peptide backbone did not affect the separation efficiency of the Offgel-IEF. In an applied study, changes in the protein content of a yeast strain grown in two different media were relatively quantified. For example, prominent proteins, such as the hexose transporter proteins responsible for transporting glucose across the membrane, were successfully quantified. Last but not least, the nLC-MALDI-TOF/TOF platform also served as a basis for the development of a high-throughput method for the identification of protein phosphorylation.
The establishment of such a method using MALDI has been challenging due to the lack of matrices that are as sensitive as CHCA is for non-modified peptides and that exhibit a homogeneous crystallisation, thus yielding stable signal intensities over a long period of time in an automated setup. The first step of this method was the establishment of a matrix/matrix mixture with better crystal morphology and higher analyte signal intensity than the matrix of choice, i.e. DHB. From MS and MS/MS measurements of standard phosphopeptides, a combination of FCCA and CHCA in a 3:1 ratio with 3 mM NH4H2PO4 provided high analyte signal intensities and good fragmentation behaviour. Combined with a custom-packed biphasic column for the enrichment of phosphopeptides, the applicability of the matrix mixture was assessed in an automated phosphopeptide analysis using standard phosphopeptides spiked into a 20-fold excess of a BSA digest. These analyses showed that the method is reproducible and that both flow-throughs can be analysed. Applying the method to the analysis of two standard phosphoproteins, alpha/beta-casein, and a leukemia-related protein, ENL, 13 phosphopeptides from alpha/beta-casein and 13 phosphopeptides with 6 phosphorylation sites from ENL were identified. As a general conclusion, it can be stated that the nLC-MALDI-TOF/TOF method established here, in various modifications for different analytical purposes, is a robust platform for proteomic analyses.
In this study, I investigate the crustal and upper mantle velocity structure beneath the Rwenzori Mountains in western Uganda. This mountain range is situated within the western branch of the East African Rift and reaches altitudes of more than 5000 m. I use four different approaches that belong to the travel-time tomography method. The first approach is based on the isotropic tomographic inversion of local data, comprising 2053 earthquakes recorded by a network of up to 35 stations covering an area of 140 × 90 km². The LOTOS-09 algorithm is used to realize this approach. The second approach is based on the anisotropic tomographic inversion of the same local dataset. This method employs the tomographic code ANITA, developed with my participation, which provides 3D anisotropic P and isotropic S velocity distributions based on P and S travel times from local seismicity. For the anisotropic P model, four parameters are determined for each parameterization cell. This represents an orthorhombic anisotropy with one vertically oriented predefined direction. Three of the parameters describe slowness variations along three horizontal orientations with azimuths of 0°, 60° and 120°, and one is a perturbation along the vertical axis. The third approach is based on the tomographic inversion of teleseismic data, comprising the travel times of P-waves from 284 teleseismic events recorded by the seismic network stations. The TELELOTOS code, my own modification of the LOTOS-09 algorithm, is used in this approach; it is designed to iteratively invert local and/or teleseismic datasets. Finally, I present the results of the new tomographic approach, which is based on the simultaneous inversion of the joint local and teleseismic data. The simultaneous use of these datasets for the tomographic inversion has several advantages.
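The four-parameter anisotropic cell parameterization described above can be sketched in a few lines. The cos²-weighted blending of the three horizontal slowness perturbations used here is an illustrative interpolation choice, not necessarily the exact scheme implemented in ANITA; function and parameter names are assumptions.

```python
import math

# Illustrative sketch: three of the four cell parameters are slowness
# perturbations along horizontal azimuths 0°, 60° and 120° (the fourth
# is the vertical perturbation). A cos²-weighted blend is one plausible
# way to interpolate to an arbitrary azimuth; ANITA's scheme may differ.

def horizontal_perturbation(azimuth_deg, d0, d60, d120):
    """Slowness perturbation along an arbitrary horizontal azimuth."""
    total = sum(
        d * math.cos(math.radians(azimuth_deg - base)) ** 2
        for base, d in ((0.0, d0), (60.0, d60), (120.0, d120))
    )
    # cos² terms over three azimuths 60° apart always sum to 3/2, so the
    # factor 2/3 makes an isotropic cell (d0 = d60 = d120) return d.
    return 2.0 * total / 3.0
```

With equal perturbations in all three directions the cell behaves isotropically, which is the sanity check one would apply before inverting for real anisotropy.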
In this case, the velocity structure in the study area can be resolved to the same depths as in the teleseismic approach, while in the upper part of the study volume the resolution of the obtained models is as good as in the local tomography. The TELELOTOS algorithm is used to perform the joint tomographic inversion. Special attention is paid in this work to synthetic testing. A number of different synthetic and real data tests are performed to estimate the resolving power and robustness of the obtained models. In particular, synthetic tests have shown that the results of the anisotropic tomographic inversion of the local data must be considered unsatisfactory. For all approaches used in this study, I present synthetic models that reproduce the same pattern of anomalies as that obtained by inverting the real data. These models are used to interpret the results and to estimate the true amplitudes of the obtained anomalies. The obtained models exhibit a relatively strong negative P anomaly (up to -10%) beneath the Rwenzori Mountains. Low velocities are found in the northeastern part of the array at shallower depths and are most likely related to sedimentary deposits, while higher velocities are found beneath the eastern rift shoulder and are thought to be related to old cratonic crust. The presence of low velocities in the northwestern part of the array may be caused by a magmatic intrusion beneath the Buranga hot springs. Relatively low velocities were observed within the lower crust and upper mantle in the western and southern parts of the study area (beneath the rift valley and the entire length of the Rwenzori range). The higher amplitude of the low-velocity anomaly in the south can be related to the thinner lithosphere in the southern part of the Albertine rift. In the center of the study area, a small negative anomaly is observed, with its intensity increasing with depth.
This anomaly is presumably related to fluids rising from a plume branch in the deeper part of the mantle. According to the interpretation of the local earthquake distribution, the Rwenzori Mountains are located between two rift valleys whose flanks are marked by normal faults. The Rwenzori block is bounded by thrust faults that are probably due to compression.
Capoeta damascina (Teleostei: Cyprinidae) is one of the most common freshwater fish species, found throughout the Levant, Mesopotamia, Turkey and Iran. According to the state of knowledge prior to this study, C. damascina, which is distributed over a wide range of isolated water bodies, was not a well-defined species. It was questionable whether it represents a single species or a complex of closely related species with high intraspecific and comparatively low interspecific variability. The goal of this study was to investigate the taxonomy and systematic position of the C. damascina species complex and the phylogenetic relationships among its members, based on morphological features as well as molecular phylogeny. Samples obtained from throughout the geographic range of this species complex were subjected to comparative morphological analyses in order to define, properly diagnose and separate species within the C. damascina complex. To elucidate the phylogenetic relationships among members of the C. damascina species complex, samples were subjected to genetic analyses using two molecular markers, targeting the mitochondrial cytochrome oxidase I gene (COI, n = 103) and the two adjacent divergence regions (D1-D2) of the nuclear 28S rRNA gene (LSU, n = 65). Based on morphological and molecular genetic data, six closely related species were recognized within the C. damascina complex: C. buhsei, C. caelestis, C. damascina, C. saadii, C. umbla and an undescribed species, Capoeta sp.1. Analyses of the morphometric and meristic data obtained in this study revealed phenotypic variability among the various populations within a species and among the different species. Such differences in morphological characters reflect genetic differences, environmentally induced phenotypic variation or both, as the meristic phenotype of fish is sometimes a consequence of environmental parameters acting on the genotype. Based on phylogenetic analyses, two main lineages were identified within the C. damascina species complex: a western lineage represented by C. caelestis, C. damascina and C. umbla, and an eastern lineage represented by C. buhsei, C. saadii and Capoeta sp.1. The close phylogenetic relationship between C. damascina and C. umbla and the sharing of the same haplotypes between one specimen of C. damascina from the Euphrates and another of C. umbla from the Tigris reflect one of three possibilities: recent speciation, mitochondrial introgression or a combination of both. The results obtained in this study indicate that the speciation of the above-mentioned six taxa is quite recent and that their dispersal and present-day distribution can be related to Pleistocene events. The drying out of the Persian Gulf, probably during one of the first glacials of the Pleistocene, allowed the ancestor of the C. damascina species complex in Mesopotamia to reach the rivers of the Gulf and Hormuz basins and differentiate there, giving rise to the eastern lineage (the ancestor of C. buhsei, C. saadii and Capoeta sp.1). As connections presumably existed among the different river drainages and basins in Iran during the wet periods of the Pleistocene, this ancestor was subsequently able to colonize the various Iranian drainages and differentiate there, giving rise to C. buhsei, C. saadii and Capoeta sp.1. After the separation from the eastern lineage, the western lineage, represented by the ancestor of C. damascina, C. umbla and C. caelestis, most likely reached the Levant from the Tigris-Euphrates system during the Pleistocene glacials, when river connections existed in the regions of the upper courses of Ceyhan Nehri (southern Turkey) and some western affluents of the Euphrates. From Ceyhan Nehri, it dispersed into other rivers in southern Turkey during Pleistocene periods of low sea levels until it reached Göksu Nehri and evolved into C. caelestis. The sister population differentiated into C. damascina and C. umbla.
Based on the results obtained in this study, it is likely that C. damascina colonized the Levant and southern Turkey during the Pleistocene glacials. This is well supported by the low genetic variability among the C. damascina populations. Direct connections existed among the river drainages in the Levant during the Pleistocene periods of low sea level, thus serving as a pathway for the dispersal of C. damascina. The results of this study provide a coherent picture of the taxonomic position, phylogenetic relationships and evolutionary history of the C. damascina species complex and explain present patterns of distribution considering paleogeographic events.
Background: Although the literature provides support for cognitive behavioral therapy (CBT) as an efficacious intervention for social phobia, more research is needed to improve treatments for children. Methods: Forty-four Caucasian children (ages 8-14) meeting the diagnostic criteria for social phobia according to the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; APA, 1994) were randomly allocated to either a newly developed CBT program focusing on cognition according to the model of Clark and Wells (n = 21) or a wait-list control group (n = 23). The primary outcome measure was clinical improvement. Secondary outcomes included improvements in anxiety coping, dysfunctional cognitions, interaction frequency and comorbid symptoms. Outcome measures included child-report and clinician-completed measures as well as a diagnostic interview. Results: Significant differences between treatment participants (4 dropouts) and controls (2 dropouts) were observed at post-test on the German version of the Social Phobia and Anxiety Inventory for Children. Furthermore, at post-test significantly more children in the treatment group than in the wait-list group were free of their diagnosis. Additional child-completed and clinician-completed measures support these results. Discussion: The study is a first step towards investigating whether CBT focusing on cognition is efficacious in treating children with social phobia. Future research will need to compare this treatment to an active treatment group. The questions remain of whether the effect of the treatment is specific to the disorder and whether the underlying theoretical model is adequate. Conclusion: Preliminary support is provided for the efficacy of cognitive behavioral treatment focusing on cognition in socially phobic children.
Active-comparator trials should be conducted against other evidence-based CBT programs for anxiety disorders (e.g. Coping Cat), which differ significantly from the manual under evaluation in the dosage and type of their cognitive interventions.
Using faculty-librarian partnerships to ensure that students become information fluent in the 21st century
In the 21st century, educators in partnership with librarians must prepare students effectively for the productive use of information, especially in higher education. Students will need to graduate from universities with appropriate information and technology skills to enable them to become productive citizens in the workplace and in society. Technology is having a major impact on society: in economics, e-business is moving to the forefront; in communication, e-mail, the Internet and cellular telephones have transformed how people communicate; in the work environment, computers and web applications are emphasized; and in education, virtual learning and teaching are becoming more important. These few examples indicate how the 21st-century information environment requires future members of the workforce to be information fluent, so that they have the ability to locate information efficiently, evaluate information for specific needs, organize information to address issues, apply information skillfully to solve problems, use information to communicate effectively, and use information responsibly to ensure a productive work environment. Individuals can achieve information fluency by acquiring cultural, visual, computer, technology, research and information management skills that enable them to think critically.
Teaching information literacy: substance and process
This presentation explores the concept of information literacy within the broader context of higher education. It argues that, certain assertions in the library literature notwithstanding, the concepts associated with information literacy are not new, but rather very closely resemble the qualities traditionally considered to characterize a well-educated person. The presentation also considers the extent to which the higher education system does indeed foster the attributes commonly associated with information literacy. The term information literacy has achieved the immediacy it currently enjoys within the library community with the advent of the so-called "information age". The information age is commonly touted in the literature, both popular and professional, as constituting nothing short of a revolution. Academic librarians and other educators have of course felt called upon to make their teaching reflect both the growing proliferation of information formats and the major transformations affecting the process of information seeking. Faced with so much novelty and uncertainty, it is no surprise that many have felt that these changes call for a revolution in teaching. It is within this context that the concept of information literacy has flourished. It is argued in this presentation, however, that by treating information literacy as an essentially new specialty that owes much of its importance to the plethora of electronic information, we risk obscuring some of the most fundamental and enduring educational values we should be imparting to our students. Much of the literature on information literacy assumes - rather than argues - that recent changes in the way we approach education are indications of progress. Indeed, much of the self-narrative that institutions produce (in bulletins, mission statements, web sites, etc.)
endorses an approach to education that will result in lifelong learners who are critical consumers of information. After critically examining the degree to which such statements of educational approach reflect reality, this presentation concludes by considering the effects of certain changes in the culture of higher education. It considers particularly the transformation - at least in North America - of the traditional model of higher education as a public good to a market-driven business model. It poses the question of whether a change of this significance might in fact detract from, rather than promote, the development of information literate students.
Development of a computational method for reaction-driven de novo design of druglike compounds
(2010)
A new method for computer-based de novo design of drug candidate structures is proposed. DOGS (Design of Genuine Structures) features a ligand-based strategy to suggest new molecular structures. The quality of designed compounds is assessed by a graph kernel method measuring the distance of designed molecules to a known reference ligand. Two graph representations of molecules (molecular graph and reduced graph) are implemented to provide different levels of abstraction from the molecular structure. A fully deterministic construction procedure explicitly designed to facilitate the synthesizability of proposed structures is realized: DOGS uses readily available synthesis building blocks and established reaction schemes to assemble new molecules. This approach enables the software not only to propose the final compounds, but also to suggest synthesis routes for generating them at the bench. The set of synthesis schemes comprises about 83 chemical reactions. Special focus was put on ring-closure reactions forming drug-like substructures. The library of building blocks consists of about 25,000 readily available synthesis building blocks. DOGS builds up new structures in a stepwise process. Each virtual synthesis step adds a fragment to the growing molecule until a stop criterion (an upper threshold for molecular mass or for the number of synthesis steps) is fulfilled. In a theoretical evaluation, a set of ~1,800 molecules proposed by DOGS was analyzed for critical properties of de novo designed compounds. The software is able to suggest drug-like molecules (79% violate fewer than two of Lipinski's 'rule of five' criteria). In addition, a trained classifier for drug-likeness assigns a score >0.8 to 51% of the designed molecules (1.0 being the top score), and most of the DOGS molecules are deemed synthesizable by a retro-synthesis descriptor (77% of molecules score in the top 10% of the descriptor's value range).
Calculated logP(o/w) values of the constructed molecules follow a unimodal distribution centred close to the mean of the logP(o/w) values calculated for the reference compounds. A structural analysis of selected designs reveals that DOGS is capable of constructing molecules reflecting the overall topological arrangement of pharmacophoric features found in the reference ligands. At the same time, the DOGS designs represent innovative compounds that are structurally distinct from the references. Synthesis routes for these examples are short and seem feasible in most cases. Some reaction steps might need modification, such as using protecting groups to avoid unwanted side reactions. In a case study, DOGS proposed plausible bioisosteres for known privileged fragments addressing the S1 pocket of trypsin; three of them can be found in known trypsin inhibitors as S1-addressing side chains. The software was also tested in two prospective case studies to design bioactive compounds. DOGS was applied to design ligands for human gamma-secretase and the human histamine receptor subtype 4 (hH4R). Two selected designs for gamma-secretase were readily synthesizable in one-step reactions, as suggested by the software. Both compounds represent inverse modulators of the target molecule. In the second case study, a ligand candidate selected for hH4R was synthesized exactly following the three-step synthesis plan suggested by DOGS. This compound showed low activity on the target structure. The DOGS concept is thus able to deliver synthesizable and bioactive compounds, and the suggested synthesis plans of selected compounds were readily pursuable. DOGS can therefore serve as a valuable idea generator for the design of new pharmacologically active compounds.
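The drug-likeness statistic quoted in the abstract (79% of designs violate fewer than two of Lipinski's rules) rests on a simple count of rule-of-five violations, which can be sketched as follows. The function names are illustrative, and the molecular property values are assumed to be computed elsewhere (e.g. by a cheminformatics toolkit).

```python
# Sketch of a Lipinski rule-of-five violation count, matching the
# "fewer than two violations" criterion quoted in the abstract.
# Property values are assumed inputs; names are illustrative.

def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
    """Count how many of Lipinski's four rules a molecule violates."""
    rules = [
        mol_weight > 500,   # molecular mass over 500 Da
        logp > 5,           # calculated logP(o/w) over 5
        h_donors > 5,       # more than 5 hydrogen-bond donors
        h_acceptors > 10,   # more than 10 hydrogen-bond acceptors
    ]
    return sum(rules)

def is_drug_like(**props):
    # the abstract's criterion: fewer than two violations
    return lipinski_violations(**props) < 2

print(is_drug_like(mol_weight=450, logp=3.2, h_donors=2, h_acceptors=7))  # True
```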
Biodegradation and elimination of industrial wastewater in the context of whole effluent assessment
(2010)
The focus of this thesis is on the assessment of the degradability of indirectly discharged wastewater in municipal treatment plants and on assessing indirectly discharged effluents by coupling the Zahn-Wellens test with effect-based bioassays. With this approach, persistent toxicity of an indirectly discharged effluent can be detected and attributed to the respective emission source. In the first study, 8 wastewater samples from different industrial sectors were analysed according to the "Whole Effluent Assessment" (WEA) approach developed by OSPAR. In a second study, this concept was applied to 20 wastewater samples each from the paper manufacturing and metal surface treating industries. In the first study, generally low to moderate ecotoxic effects of the wastewater samples were determined. One textile wastewater sample was mutagenic in the Ames test and genotoxic in the umu test; the source of these effects could not be identified. After treatment in the Zahn-Wellens test, the mutagenicity in the Ames test was eliminated completely, while genotoxicity could still be observed in the umu test. Another wastewater sample, from the chemical industry, was mutagenic in the Ames test. Its mutagenicity was investigated by additional chemical analysis and backtracking: a nitro-aromatic compound (2-methoxy-4-nitroaniline) used for batchwise azo dye synthesis and its transformation products are the probable cause of the mutagenic effects observed. Testing the mother liquor from dye production confirmed that this partial wastewater stream was mutagenic in the Ames test. The wastewater samples from the paper manufacturing industry in the second study were not toxic or genotoxic in the acute Daphnia test, the fish egg test or the umu test. In the luminescent bacteria test, moderate toxicity was observed.
Wastewater from four paper mills showed elevated or high algae toxicity, which was in line with the results of the Lemna test, although the latter was mostly less sensitive than the algae test. The colouration of the wastewater samples in the visible band did not correlate with algae toxicity and is thus not considered its primary origin. The algae toxicity in the wastewater of the respective paper factory could also not be explained by the thermomechanically produced groundwood pulp (TMP) partial stream. Presumably, other raw materials such as biocides are the source of the algae toxicity. In the algae test, flat dose-response relationships and growth promotion at higher dilution factors were often observed, indicating that several effects overlap. The wastewater samples from the printed circuit board and electroplating industries (all indirectly discharged) were biologically pre-treated for 7 days in the Zahn-Wellens test before ecotoxicity testing. Thus, persistent toxicity could be discriminated from non-persistent toxicity caused, e.g., by ammonium or readily biodegradable compounds. With respect to metal concentrations, none of the samples was heavily polluted. The maximum conductivity of the samples was 43,700 µS cm⁻¹, indicating that salts might contribute to the overall toxicity. Half of the wastewater samples proved to be biologically well treatable in the Zahn-Wellens test, with COD elimination above 80%, whilst the others were insufficiently biodegraded (COD elimination 28-74%). After the pre-treatment in the Zahn-Wellens test, wastewater samples from four companies were extremely ecotoxic, especially to algae. Three wastewater samples were genotoxic in the umu test. Applying the rules for salt correction to the test results following the German Wastewater Ordinance, only a small part of the toxicity could be attributed to salts.
In one factory, the origin of the ecotoxicity was attributed to the organosulphide dimethyldithiocarbamate (DMDTC), used as a water treatment chemical for metal precipitation. This assumption, based on a rough calculation of the input of the organosulphide into the wastewater, was confirmed in practice by testing its ecotoxicity at the corresponding dilution ratio after pre-treatment in the Zahn-Wellens test. The results show that bioassays are a suitable tool for assessing the ecotoxicological relevance of these complex organic mixtures. The combination of the Zahn-Wellens test followed by ecotoxicity testing turned out to be a cost-efficient and suitable instrument for the evaluation of indirect dischargers, and it addresses the requirements of the IPPC Directive.
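The 80% COD-elimination threshold used above to classify samples as biologically well treatable corresponds to a simple calculation. The sketch below is illustrative only: blank correction, adsorption checks and the exact Zahn-Wellens sampling schedule are omitted, and the function names are assumptions.

```python
# Illustrative COD elimination calculation for judging biodegradability
# in a Zahn-Wellens-type test; the 80 % threshold follows the text above.
# Blank/adsorption corrections are omitted for simplicity.

def cod_elimination(cod_initial, cod_final):
    """Percent chemical oxygen demand removed during the test."""
    return 100.0 * (cod_initial - cod_final) / cod_initial

def well_treatable(cod_initial, cod_final, threshold=80.0):
    """True if COD elimination meets the 'well treatable' threshold."""
    return cod_elimination(cod_initial, cod_final) >= threshold

print(cod_elimination(1000.0, 150.0))  # 85.0 (% eliminated)
```

A sample starting at 1000 mg/L COD and ending at 150 mg/L would thus count as well treatable, while one ending at 500 mg/L (50% elimination) would fall into the insufficiently biodegraded range reported above.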
In Philadelphia chromosome (Ph) positive ALL and CML, the fusion between BCR and ABL leads to the BCR/ABL fusion proteins, which induce the leukemic phenotype through the constitutive activation of multiple signaling pathways downstream of the aberrant BCR/ABL fusion tyrosine kinase. Targeted inhibition of BCR/ABL by ABL-kinase inhibitors induces apoptosis in BCR/ABL-transformed cells and leads to complete remission in Ph-positive leukemia patients. However, a large portion of patients with advanced Ph+ leukemia relapse and acquire resistance. Kinase domain (KD) mutations interfering with inhibitor binding represent the major mechanism of acquired resistance in patients with Ph+ leukemia. Tetramerization of BCR/ABL through the N-terminal coiled-coil region (CC) of BCR is essential for ABL-kinase activation. Targeting the CC domain forces BCR/ABL into a monomeric conformation, reduces its kinase activity and increases the sensitivity to Imatinib. Here we show that i.) targeting the tetramerization with a peptide representing Helix-2 of the CC efficiently reduced the autophosphorylation of both WT BCR/ABL and its mutants; ii.) Helix-2 inhibited the transformation potential of BCR/ABL independently of the presence of mutations; iii.) Helix-2 efficiently cooperated with Imatinib, as revealed by their effects on the transformation potential and the factor independence mediated by BCR/ABL, with the exception of mutant T315I. These findings suggest that BCR/ABL harboring the T315I mutation has a transformation potential that is at least partially independent of its kinase activity. Targeted inhibition of BCR/ABL by small-molecule inhibitors reverses the transformation potential of BCR/ABL. We definitively proved that targeting the tetramerization of BCR/ABL mediated by the N-terminal coiled-coil domain (CC) with competitive peptides representing Helix-2 of the CC is a valid therapeutic approach for treating Ph+ leukemia.
To further develop competitive peptides targeting BCR/ABL, we created a membrane-permeable Helix-2 peptide (MPH-2) by fusing the Helix-2 peptide with a peptide transduction tag. In this study, we report that MPH-2: (i) interacted with BCR/ABL in vivo; (ii) efficiently inhibited the autophosphorylation of BCR/ABL; (iii) suppressed the growth and viability of Ph+ leukemic cells; and (iv) was efficiently transduced into mononuclear cells (MNC) in an in vivo mouse model. The T315I mutation confers resistance against all currently approved ABL-kinase inhibitors and against competitive peptides. It seems not only to decrease affinity for kinase inhibitors but also to confer additional features to the leukemogenic potential of BCR/ABL. To determine the role of T315I in resistance to the inhibition of oligomerization and in the leukemogenic potential of BCR/ABL, we investigated its influence on loss-of-function mutants with regard to their capacity to mediate factor independence. Thus we studied the effects of T315I on BCR/ABL mutants lacking functional domains in the BCR portion indispensable for the oncogenic activity of BCR/ABL, such as the N-terminal coiled-coil (CC), the tyrosine phosphorylation site Y177 and the serine/threonine kinase domain (ST), as well as on the ABL portion of BCR/ABL (#ABL-T315I) with or without the inhibitory SH3 domain (delta SH3-ABL). Here we report that i.) T315I restored the capacity to mediate factor independence of oligomerization-deficient p185BCR/ABL; ii.) the resistance of p185-T315I against inhibition of oligomerization depends on phosphorylation at Y177; iii.) autophosphorylation at Y177 is not affected by oligomerization inhibition, but phosphorylation at Y177 of endogenous BCR parallels the effects of T315I; iv.) the effects of T315I are associated with an intact ABL-kinase activity; v.)
the presence of T315I is associated with an increased ABL-kinase activity also in mutants unable to induce Y177 phosphorylation of endogenous BCR; vi.) there is no direct relationship between ABL-kinase activity and the capacity to mediate factor independence induced by T315I, as revealed by the #ABL-T315I mutant, which was unable to induce Y177 phosphorylation of BCR only in the presence of the SH3 domain. In contrast to its physiological counterpart c-ABL, the BCR/ABL kinase is constitutively activated, inducing the leukemic phenotype. The N-terminus of c-ABL (Cap region) contributes to the regulation of its kinase function. It is myristoylated, and the myristate residue binds to a hydrophobic pocket in the kinase domain known as the myristoyl binding pocket in a process called “capping”, which results in an auto-inhibited conformation. Because the Cap region is replaced by the N-terminus of BCR, BCR/ABL “escapes” this auto-inhibition. Allosteric inhibition by myristate “mimics”, such as GNF-2, is able to inhibit unmutated BCR/ABL, but not BCR/ABL harboring the “gatekeeper” mutation T315I. Here we investigated the possibility of increasing the efficacy of allosteric inhibition by blocking BCR/ABL oligomerization. We demonstrate that inhibition of oligomerization was able not only to increase the efficacy of GNF-2 on unmutated BCR/ABL, but also to overcome the resistance of BCR/ABL-T315I to allosteric inhibition. These results strongly suggest that the response to allosteric inhibition by GNF-2 is inversely related to the degree of oligomerization of BCR/ABL. Taken together, these data suggest that the inhibition of tetramerization inhibits BCR/ABL-mediated transformation and can contribute to overcoming Imatinib resistance. The study provides the first evidence that an efficient peptide transduction system facilitates the employment of competitive peptides to target the oligomerization interface of BCR/ABL in vivo.
Furthermore, the data show that T315I confers additional leukemogenic activity on BCR/ABL, which might explain the clinical behavior of patients with BCR/ABL-T315I-positive blasts. In summary, our observations establish a new approach for the molecular targeting of BCR/ABL and its resistant mutants, represented by the combination of oligomerization and allosteric inhibitors.
In the microstructure literature, information asymmetry is an important determinant of market liquidity. The classic setting is that uninformed dedicated liquidity suppliers charge price concessions when incoming market orders are likely to be informationally motivated. In limit order book markets, however, this relationship is less clear, as market participants can switch roles and freely choose to immediately demand or patiently supply liquidity by submitting either market or limit orders. We study the importance of information asymmetry in limit order books based on a recent sample of thirty German DAX stocks. We find that Hasbrouck’s (1991) measure of trade informativeness Granger-causes book liquidity, in particular that required to fill large market orders. Picking-off risk due to public-news-induced volatility is more important for top-of-the-book liquidity supply. In our multivariate analysis we control for volatility, trading volume, trading intensity and order imbalance to isolate the effect of trade informativeness on book liquidity. JEL Classification: G14 Keywords: Price Impact of Trades, Trading Intensity, Dynamic Duration Models, Spread Decomposition Models, Adverse Selection Risk
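The Granger-causality machinery invoked above can be sketched in a few lines (synthetic series, not the DAX data; `granger_f` is an illustrative helper): compare the fit of an autoregression of a liquidity-like series with and without a lagged trade-informativeness regressor via an F-statistic.

```python
import random

def ols(X, y):
    """Least squares via normal equations (Gaussian elimination); returns residual sum of squares."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]  # X'X
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]            # X'y
    for col in range(k):                       # elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((yi - sum(bi * xi for bi, xi in zip(beta, r))) ** 2 for r, yi in zip(X, y))

def granger_f(x, y):
    """F-statistic for 'x Granger-causes y' at lag 1."""
    n = len(y) - 1
    Xr = [[1.0, y[t - 1]] for t in range(1, len(y))]            # restricted: own lag only
    Xu = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]  # unrestricted: + lagged x
    yt = y[1:]
    rss_r, rss_u = ols(Xr, yt), ols(Xu, yt)
    return ((rss_r - rss_u) / 1) / (rss_u / (n - 3))

random.seed(1)
x = [random.gauss(0, 1) for _ in range(300)]    # "trade informativeness" proxy
y = [0.0]                                       # "book liquidity" proxy, driven by lagged x
for t in range(1, 300):
    y.append(0.3 * y[-1] + 0.8 * x[t - 1] + random.gauss(0, 1))
F = granger_f(x, y)
print(round(F, 1))
```

In practice one would use many lags and panel data; this sketch only shows the restricted-versus-unrestricted comparison that underlies the test.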
We revisit the role of time in measuring the price impact of trades using a new empirical method that combines spread decomposition and dynamic duration modeling. Previous studies which have addressed the issue in a vector-autoregressive framework conclude that times when markets are most active are times when there is an increased presence of informed trading. Our empirical analysis based on recent European and U.S. data offers challenging new evidence. We find that as trade intensity increases, the informativeness of trades tends to decrease. This result is consistent with the predictions of Admati and Pfleiderer’s (1988) rational expectations model, and also with models of dynamic trading like those proposed by Parlour (1998) and Foucault (1999). Our results cast doubt on the common wisdom that fast markets bear particularly high adverse selection risks for uninformed market participants. JEL Classification: G10, C32 Keywords: Price Impact of Trades, Trading Intensity, Dynamic Duration Models, Spread Decomposition Models, Adverse Selection Risk
We present an intertemporal consumption model of consumer investment in financial literacy. Consumers benefit from such investment because their stock of financial literacy allows them to increase the returns on their wealth. Since literacy depreciates over time and has a cost in terms of current consumption, the model determines an optimal investment in literacy. The model shows that financial literacy and wealth are determined jointly, and are positively correlated over the life cycle. Empirically, the model leads to an instrumental variables approach, in which the initial stock of financial literacy (as measured by math performance in school) is used as an instrument for the current stock of literacy. Using microeconomic and aggregate data, we find a strong effect of financial literacy on wealth accumulation and national saving, and also show that ordinary least squares estimates underestimate the impact of financial literacy on saving. JEL Classification: E2, D8, G1, J24 Keywords: Financial Literacy, Cognitive Abilities, Human Capital, Saving
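The instrumental-variables strategy can be illustrated with a minimal synthetic sketch (all variable names and coefficients are hypothetical, chosen so that an unobserved confounder biases OLS downward, mirroring the underestimation the abstract reports): early math performance serves as the instrument for current literacy.

```python
import random

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

random.seed(0)
n = 5000
math_score = [random.gauss(0, 1) for _ in range(n)]   # instrument: math performance in school
ability    = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
literacy   = [0.7 * z + 0.5 * u + random.gauss(0, 1) for z, u in zip(math_score, ability)]
# true causal effect of literacy on wealth is 1.0; the confounder enters with a negative sign
wealth     = [1.0 * l - 0.8 * u + random.gauss(0, 1) for l, u in zip(literacy, ability)]

beta_ols = cov(literacy, wealth) / cov(literacy, literacy)      # biased by the confounder
beta_iv  = cov(math_score, wealth) / cov(math_score, literacy)  # instrument isolates the causal effect
print(round(beta_ols, 2), round(beta_iv, 2))
```

The IV ratio recovers the true coefficient because the instrument is correlated with literacy but uncorrelated with the confounder; OLS comes out below 1.0.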
We show analytically that a Stone-Geary utility function with subsistence consumption, common across rich and poor individuals, in the context of a simple two-asset portfolio-choice model is capable of qualitatively and quantitatively explaining: (i) the higher saving rates of the rich, (ii) the higher fraction of personal wealth held in risky assets by the rich, and (iii) the higher volatility of consumption of the wealthier. By contrast, a time-varying “keeping-up-with-the-Joneses” weighted average consumption, which plays the role of a moving benchmark subsistence level, yields the same portfolio composition and saving rates across rich and poor, failing to reconcile the model with what micro data say. JEL Classification: G11, D91, E21, D81, D14, D11
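For reference, a standard textbook form of Stone-Geary preferences with subsistence consumption (not necessarily the paper's exact specification) is

```latex
u(c_t) = \frac{(c_t - \bar{c})^{1-\gamma}}{1-\gamma}, \qquad c_t > \bar{c},
```

whose relative risk aversion \(-c\,u''(c)/u'(c) = \gamma\,c/(c-\bar{c})\) falls as consumption rises above the subsistence level \(\bar{c}\), so wealthier households behave as effectively less risk-averse, which is consistent with their larger risky-asset shares and higher saving rates.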
Purpose of the Study: The purpose of the current study was to evaluate the role of radiofrequency (RF) and microwave (MW) ablation in the treatment of pulmonary neoplasms. Materials and Methods: From March 2004 to January 2009, 164 patients (92 males, 72 females; mean age 59.7 years, SD: 10.2) underwent computed tomography (CT)-guided percutaneous RFA of pulmonary malignancies. RFA was performed on 248 lung lesions (20 primary and 228 metastatic lesions) in 248 sessions (one lesion per session). Tumors were pathologically proven and were classified as primary lung neoplasms in 20 patients (non-small cell lung cancer) and as metastatic lung neoplasms in 144 patients. RFA was performed using a) the CelonProSurge bipolar internally cooled applicator or b) the RITA® Starburst™ XL. From December 2007 to October 2009, 80 patients (30 males, 50 females; mean age 59.7 years, range: 48-68, SD: 6.4) underwent CT-guided percutaneous MW ablation of pulmonary metastases from various histopathological primaries. MWA was performed on 130 lung lesions in 130 sessions (one lesion per session) using the Valleylab™ system. Results: The overall success rate of RFA was 67.7% (168/248 lesions), with an overall failure rate, due either to residual tumor or to recurrence on follow-up, of 32.3% (80/248); the mean time to tumor progression was 5.6 months (SD: 2.99; range: 1-18 months). Complete successful ablation was achieved in 73.1% of lesions treated by MWA (95/130), with a failure rate, due either to residual tumor or to recurrence on follow-up, of 26.9% (35/130); the mean time to tumor progression was 6 months (SD: 2.83; range: 1-12 months). Correlation of the histopathological type of the lesion with the end result of ablation therapy was insignificant for both RFA and MWA (p > 0.1). The preablation tumor size was one of the most significant factors determining the end result of ablation.
In RFA, successful tumor ablation was statistically significantly more frequent for lesions with a maximal axial diameter of up to 2.5 cm (110/140) than for lesions of more than 2.5 cm (58/108) (Fisher’s exact test: p < 0.0001). In MW-ablated lesions, successful tumor ablation was statistically significantly more frequent for lesions with a maximal axial diameter of up to 3 cm (90/110) than for lesions of more than 3 cm (5/20) (Fisher’s exact test: p < 0.001). The location of the lesion was another important factor determining the end result of ablation. In both RFA and MWA, successful ablation was significantly more frequent for peripheral lesions (RFA: 120/160, 80%; MWA: 80/100, 80%) than for centrally located lesions (RFA: 48/88, 50%; MWA: 15/30, 50%) (Fisher’s exact test: p < 0.001). For successfully RF-ablated cases the mean preablation tumor volume was 1.9 cc (SD: 0.9; range: 0.3-4.25 cc), while for failed cases it was 3.7 cc (SD: 2.4; range: 0.8-6.8 cc). For successfully MW-ablated cases the mean preablation tumor volume was 2.4 cc (SD: 2.2; range: 0.25-8.2 cc), while for failed cases it was 3.5 cc (SD: 2.6; range: 0.3-7.1 cc). In RFA, the survival rates at 12, 24 and 36 months were 90%, 78% and 68%, respectively, while in MWA-treated patients the survival rate was 96% at 12 months and 77% at 20 months.
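The size comparison for RFA (110/140 successes at up to 2.5 cm vs. 58/108 above) can be reproduced from the reported counts with a two-sided Fisher's exact test; a stdlib-only sketch (not the software used in the study):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def p_table(x):  # hypergeometric probability of x "successes" falling in row 1
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo_x, hi_x = max(0, row1 + col1 - n), min(row1, col1)
    # two-sided p-value: sum over all tables no more probable than the observed one
    return sum(p_table(x) for x in range(lo_x, hi_x + 1) if p_table(x) <= p_obs * (1 + 1e-9))

# RFA success vs. failure by maximal axial diameter (counts from the abstract):
# <= 2.5 cm: 110 success / 30 failure;  > 2.5 cm: 58 success / 50 failure
p = fisher_exact_two_sided(110, 30, 58, 50)
print(p)
```

With these counts the p-value comes out far below conventional significance levels, in line with the p < 0.0001 reported above.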
Complications associated with ablation therapy were: a) procedure-related mortality: 0.4% (1/248) in RFA, due to massive pulmonary hemorrhage, versus 0% (0/130) in MWA; b) pneumothorax: 11.3% (28/248) in RFA versus 8.5% (11/130) in MWA; c) pulmonary hemorrhage: 17.7% (44/248 sessions) in RFA, of which one patient had massive uncontrolled bleeding and immediate death, versus 6.2% (8/130) in MWA; d) pleural effusion: 3.2% (8/248 sessions) in RFA versus 3.8% (6/130) in MWA; e) hemoptysis: 4% (10/248) in RFA versus 4.6% (6/130) in MWA, ranging from mildly tinged sputum to frank bleeding; f) infection: 0.4% (1/248) in RFA versus 0% in MWA; and g) post-ablation pain: 10% (25/248) in RFA versus 9.2% (12/130) in MWA. Pain was generally adequately controlled by analgesics. Conclusion: Radiofrequency and microwave ablation are effective minimally invasive tools and may be safely applied for the management of lung malignancy. The success of ablation therapy is significantly correlated with preablation tumor size, volume and location.
Background: The interferon-inducible immunity-related GTPases (IRG proteins/p47 GTPases) are a distinctive family of GTPases that function as powerful cell-autonomous resistance factors. The IRG protein Irga6 (IIGP1) participates in the disruption of the vacuolar membrane surrounding the intracellular parasite Toxoplasma gondii, through which the parasite communicates with its cellular host. Some aspects of the protein's behaviour have suggested a dynamin-like molecular mode of action, in which the energy released by GTP hydrolysis is transduced into mechanical work that results in deformation and ultimately rupture of the vacuolar membrane. Results: Irga6 forms GTP-dependent oligomers in vitro and thereby activates hydrolysis of the GTP substrate. In this study we define the catalytic G-domain interface by mutagenesis and present a structural model of how GTP hydrolysis is activated in Irga6 complexes, based on the substrate-twinning reaction mechanism of the signal recognition particle (SRP) and its receptor (SRalpha). In conformity with this model, we show that the bound nucleotide is part of the catalytic interface and that the 3'-hydroxyl of the GTP ribose bound to each subunit is essential for trans-activation of hydrolysis of the GTP bound to the other subunit. We show that both positive and negative regulatory interactions between IRG proteins occur via the catalytic interface. Furthermore, mutations that disrupt the catalytic interface also prevent Irga6 from accumulating on the parasitophorous vacuole membrane of T. gondii, showing that GTP-dependent Irga6 activation is an essential component of the resistance mechanism. Conclusions: The catalytic interface of Irga6 defined in the present experiments can probably be used as a paradigm for the nucleotide-dependent interactions of all members of the large family of IRG GTPases, both activating and regulatory.
Understanding the activation mechanism of Irga6 will help to explain the mechanism by which IRG proteins exercise their resistance function. We find no support from sequence or G-domain structure for the idea that IRG proteins and the SRP GTPases have a common phylogenetic origin. It therefore seems probable, if surprising, that the substrate-assisted catalytic mechanism has been independently evolved in the two protein families.
Background: Although considered a rarely observed HIV-1 protease mutation in clinical isolates, the prevalence of L76V increased from 1998 to 2008 in some European countries, most likely due to the approval of Lopinavir, Amprenavir and Darunavir, which can select for L76V. Besides enhancing resistance, L76V is also discussed as conferring hypersusceptibility to the drugs Atazanavir and Saquinavir, which might enable new treatment strategies that try to take advantage of particular mutations. Results: Based on a cohort of 47 L76V-positive patients, we examined whether there is a clinical advantage for L76V-positive patients concerning the long-term success of PI-containing regimens in patients with limited therapy options. Genotypic and phenotypic HIV resistance tests from 47 mostly multi-resistant, L76V-positive patients throughout Germany were performed retrospectively between 1999 and 2009. Five genotype-based drug-susceptibility predictions obtained from online interpretation tools for Atazanavir, Saquinavir, Amprenavir and Lopinavir were compared to phenotype-based predictions determined using a recombinant virus assay along with a Virtual Phenotype™ (Virco). The clinical outcome of the L76V-adapted follow-up therapy was determined by monitoring viral load for 96 weeks. Conclusions: In this analysis, the most commonly used interpretation systems overestimated the effect of the L76V mutation on Atazanavir and Saquinavir resistance. In fact, a clear benefit in drug susceptibility for these drugs was observed in phenotype analysis after establishment of L76V. More importantly, long-term therapy success was significantly higher in patients receiving Atazanavir and/or Saquinavir plus one L76V-selecting drug compared to patients without L76V-selecting agents (p = 0.002). In case of L76V occurrence, ATV and/or SQV may represent encouraging options for patients in deep salvage situations.
Background: The combination of high-throughput transcript profiling and next-generation sequencing technologies is a prerequisite for genome-wide comprehensive transcriptome analysis. Our recent innovation, deepSuperSAGE, is based on an advanced SuperSAGE protocol combined with massively parallel pyrosequencing on Roche's 454 sequencing platform. As a demonstration of the power of this combination, we have chosen the salt stress transcriptomes of roots and nodules of the third most important legume crop, chickpea (Cicer arietinum L.). While our report is more technology-oriented, it nevertheless addresses a major worldwide problem for crops in general: high salinity. Together with low temperatures and water stress, high salinity is responsible for crop losses of millions of tons of various legume (and other) crops. Continuously deteriorating environmental conditions will combine with salinity stress to further compromise crop yields. As a good example of such stress-exposed crop plants, we started to characterize the salt stress responses of chickpea at the transcriptome level. Results: We used deepSuperSAGE to detect early global transcriptome changes in salt-stressed chickpea. The salt stress responses of 86,919 transcripts representing 17,918 unique 26 bp deepSuperSAGE tags (UniTags) from roots of the salt-tolerant variety INRAT-93 two hours after treatment with 25 mM NaCl were characterized. Additionally, the expression of 57,281 transcripts representing 13,115 UniTags was monitored in nodules of the same plants. From a total of 144,200 analyzed 26 bp tags in roots and nodules together, 21,401 unique transcripts were identified. Of these, only 363 and 106 specific transcripts, respectively, were commonly up- or down-regulated (>3.0-fold) under salt stress in both organs, indicating a differential organ-specific response to stress.
Profiting from recent pioneering work on massive cDNA sequencing in chickpea, more than 9,400 UniTags could be linked to UniProt entries. Additionally, gene ontology (GO) category over-representation analysis enabled filtering out enriched biological processes among the differentially expressed UniTags. Subsequently, the gathered information was cross-checked against stress-related pathways. From several filtered pathways, we focus here exemplarily on transcripts associated with the generation and scavenging of reactive oxygen species (ROS), as well as on transcripts involved in Na+ homeostasis. Although both processes are already very well characterized in other plants, the information generated in the present work is of high value: information on expression profiles and sequence similarity is now available for several hundred transcripts of potential interest. Conclusions: This report demonstrates that the combination of the high-throughput transcriptome profiling technology SuperSAGE with one of the next-generation sequencing platforms allows deep insights into the first molecular reactions of a plant exposed to salinity. Cross-validation with recent reports enriched the information about the salt stress dynamics of more than 9,000 chickpea ESTs and enlarged their pool of alternative transcript isoforms. As an example of the high resolution of the employed technology, which we coin deepSuperSAGE, we demonstrate that ROS-scavenging and -generating pathways undergo strong global transcriptome changes in chickpea roots and nodules already 2 hours after the onset of moderate salt stress (25 mM NaCl). Additionally, more than 15 candidate transcripts are proposed as potential components of the salt overly sensitive (SOS) pathway in chickpea. Newly identified transcript isoforms are potential targets for breeding novel cultivars with high salinity tolerance.
We demonstrate that these targets can be integrated into breeding schemes via microarrays and RT-PCR assays downstream of the generation of 26 bp tags by SuperSAGE.
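The >3.0-fold up-/down-regulation filter described above can be sketched as follows (tag IDs and counts are hypothetical; libraries are normalized to tags-per-million before comparison):

```python
# Illustrative sketch with hypothetical 26 bp tag counts for two libraries.
control  = {"TAG_A": 120, "TAG_B": 15, "TAG_C": 40, "TAG_D": 3}
stressed = {"TAG_A": 110, "TAG_B": 90, "TAG_C": 8,  "TAG_D": 4}

def tpm(lib):
    """Normalize raw tag counts to tags-per-million."""
    total = sum(lib.values())
    return {tag: 1e6 * n / total for tag, n in lib.items()}

FOLD = 3.0
ctrl_n, strs_n = tpm(control), tpm(stressed)
regulated = {}
for tag in control:
    ratio = strs_n[tag] / ctrl_n[tag]
    if ratio > FOLD:
        regulated[tag] = ("up", round(ratio, 1))
    elif ratio < 1 / FOLD:
        regulated[tag] = ("down", round(ratio, 2))
print(regulated)
```

Only tags whose normalized abundance changes by more than the 3.0-fold threshold in either direction are retained, as in the screen for commonly regulated transcripts above.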
G-protein coupled receptors (GPCRs) are the key players in signal perception and transduction and one of the currently most important classes of drug targets. An example of high pharmacological relevance is the human endothelin (ET) system, comprising two rhodopsin-like GPCRs, the endothelin A (ETA) and the endothelin B (ETB) receptor. Both receptors are major modulators of cardiovascular regulation and show striking diversity in biological responses affecting vasoconstriction and blood pressure regulation as well as many other physiological processes. Numerous disorders are associated with ET dysfunction, and ET antagonism is considered an efficient treatment for diseases like heart failure, hypertension, diabetes, atherosclerosis and even cancer. This study exemplifies strategies and approaches for the preparative-scale synthesis of GPCRs in individual cell-free (CF) systems based on E. coli, a newly emerging and promising technique for the production of even very difficult membrane proteins. The preparation of high-quality samples in sufficient amounts is still a major bottleneck for the structural determination of the ET receptors. Heterologous overexpression has been a challenge for decades, and extensive studies with conventional cell-based systems have had only limited success. A central milestone of this study was the development of efficient preparative-scale expression protocols for the ETA receptor in qualities sufficient for structural analysis using individual CF systems. Newly designed optimization strategies, the implementation of a variety of CF expression modes and the development of specific quality-control assays finally resulted in the production of several milligrams of ETA receptor per millilitre of reaction mixture.
The versatility of CF expression was extensively used to modulate GPCR sample quality by modification of the solubilization environment with detergents and lipids in a variety of combinations at different stages of the production process. Downstream processing procedures for CF-synthesized GPCRs were systematically optimized, and sample properties were analysed with respect to homogeneity, protein stability and receptor-ligand binding competence. Evaluation was accomplished by an array of complementary and specifically modified techniques. Depending on its hydrophobic environment, CF production of the ETA receptor resulted in non-aggregated, monodisperse forms with sufficient long-term stability and a high degree of secondary-structure thermostability. The obtained results document the CF production of the ETA receptor, as an example of a class A GPCR, in two different modes in ligand-binding competent and non-aggregated form in quantities sufficient for structural approaches. The presented strategy could serve as a basic guideline for the production of related receptors in similar systems.
Signal-dependent regulation of actin dynamics is essential for many cellular processes, including directional cell migration. In particular, cell migration is initiated by lamellipodia, actin-based protrusions of the plasma membrane. The formation of these protruding structures requires incessant assembly and disassembly of actin filaments. The Arp2/3 complex and WAVE proteins are essential for both lamellipodium formation and its dynamics. WAVEs mediate the activation of the Arp2/3 complex downstream of the small GTPase Rac, thus being critical for Rac- and RTK-induced actin polymerization and cell migration. The WAVE-family proteins are always found associated with multiprotein complexes. The most abundant WAVE-based complex is referred to as the WANP (WAVE2-Abi-1-Nap1-PIR121) complex. IQGAP1 is a large scaffolding protein with multiple protein-interaction domains. IQGAP1 participates in many fundamental activities, including regulation of the actin cytoskeleton, mitogenic, adhesive and migratory responses, as well as cell polarity and cellular trafficking. IQGAP1 binds to N-WASP, raising the possibility that it might control actin nucleation by the Arp2/3 complex. In this study, IQGAP1 was found to co-immunoprecipitate not only with WAVE, but also with the endogenous WANP-complex subunits. Correspondingly, IQGAP1 associated with both anti-WAVE and anti-Abi-1 immunocomplexes. Pull-down experiments showed that IQGAP1 binds directly to the WANP-complex subunits. Physical interaction between IQGAP1 and the reconstituted WANP complex could also be demonstrated. Together, these data indicate that IQGAP1 is an accessory component of the WANP complex. Interestingly, the IQGAP-WANP complex disassembled after either EGF stimulation or transfection with constitutively active Cdc42 and Rac1. HeLa cells devoid of IQGAP1 showed diminished and less persistent ruffling upon EGF, but not HGF, stimulation in comparison with the control.
This phenotype was accompanied by a strong reduction in chemotaxis towards both growth factors, which was as dramatic as in WANP-complex knockdown (KD) cells. Moreover, GM130 and Giantin showed a polarized and flat ribbon-like pattern in control cells, as expected for cis- and cis/medial-Golgi markers. Conversely, small and dispersed vesicular structures were found in both IQGAP1 KD and WANP-complex KD cells. Importantly, Arp2/3-complex silencing resulted in the same phenotypes. Consistently, Brefeldin A-induced disassembly of the Golgi strongly inhibited the IQGAP1-WANP-complex interaction and chemotaxis towards EGF in wild-type cells. Re-expression of an RNAi-resistant wild-type IQGAP1 in IQGAP1 KD cells fully rescued both the ruffling ability and the Golgi structure. A constitutively active mutant, unable to bind either Rac1/Cdc42 or the WANP complex, could restore only the former. Hence, this study shows that actin dynamics regulated by the IQGAP1-WANP complex control Golgi-apparatus architecture and its contribution to cell chemotaxis. The working model proposed here is that, at the Golgi apparatus, recruitment of the WANP complex by IQGAP1 leads to the assembly of the actin filaments required to maintain the appropriate Golgi morphology. Dissociation of the complex may be required to allow remodeling of the Golgi membranes in order to respond to a chemoattractant gradient.
In this work we study compact stars, i.e. neutron stars, as cosmic laboratories for nuclear matter. With a mass of around 1-3 solar masses and a radius of around 10 km, compact stars are very dense and, besides nucleons, can contain exotic matter such as hyperons or quark matter. The KaoS collaboration studied nuclear matter at densities up to 2-3 times saturation density by analysing kaon multiplicities from Au+Au and C+C collisions. The results show that nuclear matter in the corresponding density region is very compressible, with a compressibility of < 200 MeV. For such soft nuclear equations of state, the maximum masses of neutron stars are ca. 1.8-1.9 solar masses, whereas the central densities exceed 5 times nuclear saturation density and therefore point towards a possible phase transition to quark matter. If quark matter were present in the interior of neutron stars, so-called hybrid stars, it could be produced already during their birth in supernova explosions. To study this, we implement a quark matter phase transition in a hadronic equation of state used in supernova simulations. Supernova simulations of low- and intermediate-mass progenitors with two different bag constants show a collapse of the proto-neutron star due to the softening of the equation of state in the quark-hadron mixed phase. The stiffening of the equation of state for pure quark matter halts the collapse and leads to the production of a second shock wave. The second shock wave is energetic enough to cause an explosion of the star and produces a neutrino burst when passing the neutrinospheres. Furthermore, first studies of the long-term cooling of hybrid stars show that colour superconductivity can significantly influence their cooling behaviour if all quarks form Cooper pairs.
For the so-called CSL phase (colour-spin locking) with pairing energies of several MeV, the cooling of the quark phase is suppressed and the hybrid star appears as a pure hadronic star.
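For reference, the maximum masses discussed above follow from integrating an equation of state \(p(\epsilon)\) through the standard Tolman-Oppenheimer-Volkoff structure equations (units with \(G = c = 1\)):

```latex
\frac{dp}{dr} = -\frac{(\epsilon + p)\,(m + 4\pi r^{3} p)}{r\,(r - 2m)},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\epsilon .
```

A softer equation of state supports less pressure at a given energy density, yielding lower maximum masses and higher central densities, which is why the soft, KaoS-constrained nuclear matter points towards possible exotic phases in the core.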
All over Europe, unemployment became a growing problem from the mid-1980s to the mid-1990s. Nevertheless, the effects on the economic situation of the unemployed and of the whole population differ considerably across European countries. In this paper we first give a brief overview of the development of unemployment rates in eight member states of the European Union and of the different responses to provide social protection for the unemployed. To this end we look at social security expenditures, the level of income replacement for the unemployed and recent social policy reforms concerning them. In the second section of the paper, we examine the development of income distribution and poverty, taking different poverty lines into consideration. There is no general pattern, either for the relationship of inequality among the unemployed to that among the whole economically active population or for the development from the 1980s to the 1990s. But one can say that in countries with increasing income inequality, poverty is also rising (especially in the UK), and that where inequality among the unemployed is less pronounced, the proportion of the poor went down from the mid-1980s to the mid-1990s (France and Ireland). In nearly all countries the risk of being poor is enormously high for the unemployed; Denmark is the only exception.
To sum up our findings, we come to the following statements. - During the period from 1973 to 1993, inequality of the personal distribution of equivalent pre-government income increased to some extent, as was to be expected given the enormous rise in unemployment. - Inequality of post-government income also increased slightly, but was much lower than inequality of pre-government income due to the equalizing effect of the German tax and transfer system. - In 1993, inequality of pre-government income was higher, and inequality of post-government income considerably lower, in East Germany than in West Germany; the West German tax and transfer system that was transferred to East Germany after reunification - with some additional but temporary minimum regulations - seems to have had a stronger equalizing effect in the East than in the West. - A decomposition into three age groups, with the young and the middle-aged groups sub-divided further according to whether household members were affected by unemployment, showed that within-groups inequality explained far more of overall inequality than between-groups inequality. - The relative positions of the two young groups as well as of the middle-aged group with unemployed members deteriorated with respect to their equivalent pre-government and post-government incomes. - During the first period of rising unemployment (1973 to 1978), the development of within-groups inequality and of between-groups inequality contributed to about the same extent to the increase in overall inequality of pre-government income. But this was fully compensated by the tax and transfer system, as there was only a negligible change in inequality of equivalent net income and very slight effects of the (four) components of change, which nearly compensated each other. - During the last period, from 1988 to 1993, the equalizing effect of the German tax and transfer system seems to have weakened, at least in the western part of Germany.
The increase in inequality of equivalent net income is mainly due to developments of within-group inequality.
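The within-group/between-group decomposition used in this summary can be illustrated with an additively decomposable inequality measure. The paper does not specify its index here, so the following sketch uses the Theil T index and synthetic incomes purely for illustration:

```python
import numpy as np

def theil(y):
    """Theil T index of an income vector."""
    mu = y.mean()
    return float(np.mean(y / mu * np.log(y / mu)))

def theil_decomposition(y, groups):
    """Split overall Theil inequality into within- and between-group parts."""
    y, groups = np.asarray(y, float), np.asarray(groups)
    mu, n = y.mean(), len(y)
    within = between = 0.0
    for g in np.unique(groups):
        yg = y[groups == g]
        sg = len(yg) / n * yg.mean() / mu    # income share of group g
        within += sg * theil(yg)
        between += sg * np.log(yg.mean() / mu)
    return within, between

# synthetic equivalent incomes for three groups with different dispersion
rng = np.random.default_rng(0)
groups = np.repeat([0, 1, 2], 200)
y = np.concatenate([rng.lognormal(10.0, s, 200) for s in (0.3, 0.5, 0.4)])
w, b = theil_decomposition(y, groups)
# the decomposition is exact: total inequality = within + between
assert abs((w + b) - theil(y)) < 1e-9
```

The identity total = within + between holds exactly for the Theil index, which is what makes the kind of decomposition by age/unemployment groups described above well defined.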
Vibronic (vibrational-electronic) transitions are among the fundamental processes of molecular physics. Indeed, vibronic transitions are essential to both the radiative and nonradiative photophysical and photochemical properties of molecules, such as absorption, emission, Raman scattering, circular dichroism, electron transfer and internal conversion. A detailed understanding of these transitions in diverse systems, especially (large) biomolecules, is thus of particular interest. Describing vibronic transitions in polyatomic systems with hundreds of atoms is, however, a difficult task due to the large number of coupled degrees of freedom. Even within the relatively crude harmonic approximation, i.e. harmonic Born-Oppenheimer potential energy surfaces, the brute-force evaluation of Franck-Condon intensity profiles in a time-independent sum-over-states approach is prohibitive for complex systems owing to the vast number of multi-dimensional Franck-Condon integrals. The main goal of this thesis is to describe a variety of molecular vibronic transitions, with special focus on the development of approaches that are applicable to extended molecular systems. We use various representations of Fermi's golden rule in frequency, time and phase space via coherent states to reduce the computational complexity. Although each representation has benefits and shortcomings in its evaluation, they complement each other. Peak assignment of a spectrum can be made directly after calculation in the frequency domain, but this sum-over-states route is usually slow. In contrast, computation is considerably faster in the time domain with Fourier transformation, but the peak assignment is not directly available. The representation in phase space does not immediately provide physically meaningful quantities, but it can link the frequency and time domains.
These approaches are applied herein, for example, to the (non-Condon) absorption spectra of benzene and to the electron transfer of bacteriochlorophyll in the photosynthetic reaction center at finite temperature. This work is a significant step in the treatment of vibronic structure, allowing for the accurate and efficient treatment of complex systems, and provides a new analysis tool for molecular science.
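The time-domain route mentioned above (fast computation via Fourier transformation, at the cost of direct peak assignment) can be sketched for the simplest textbook case: a single displaced harmonic mode at zero temperature, where the correlation function has a closed generating-function form and its Fourier transform yields a Franck-Condon progression with Poisson intensities. All parameters below are illustrative; this is not the method of the thesis, which treats large systems, non-Condon effects and finite temperature:

```python
import math
import numpy as np

# Single displaced harmonic mode at T = 0 (hypothetical parameters):
S, w0, gamma = 1.0, 1.0, 0.02     # Huang-Rhys factor, mode frequency, damping
dt, N = 0.05, 2**16
t = np.arange(N) * dt

# Closed-form correlation function (generating function) with phenomenological
# damping, which broadens the stick spectrum into Lorentzian lines.
C = np.exp(S * (np.exp(-1j * w0 * t) - 1.0)) * np.exp(-gamma * t)

def intensity(w):
    """One-sided Fourier transform Re ∫ C(t) exp(iwt) dt / pi at frequency w."""
    return float(np.sum(np.real(C * np.exp(1j * w * t))) * dt / np.pi)

# Peak heights of the 0-0, 0-1, ... lines follow Poisson (Franck-Condon) weights
peaks = np.array([intensity(n * w0) for n in range(5)])
poisson = np.array([np.exp(-S) * S**n / math.factorial(n) for n in range(5)])
assert np.allclose(peaks / peaks[0], poisson / poisson[0], rtol=0.05)
```

The same time-domain structure (autocorrelation function, then Fourier transform) carries over to multimode systems, where the correlation function remains cheap to evaluate while the sum over individual Franck-Condon integrals does not.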
Succinate:quinone oxidoreductases (SQORs) are integral membrane protein complexes which couple the two-electron oxidation of succinate to fumarate (succinate → fumarate + 2H+ + 2e-) to the two-electron reduction of quinone to quinol (quinone + 2H+ + 2e- → quinol), and which also catalyze the opposite reaction, the reduction of fumarate by quinol. In mitochondria and some aerobic bacteria, succinate:ubiquinone reductase, also known as complex II of the aerobic respiratory chain or as succinate dehydrogenase of the tricarboxylic acid (TCA or Krebs) cycle, catalyzes the oxidation of succinate by ubiquinone, a reaction that is mildly exergonic under standard conditions and not directly associated with energy storage in the form of a transmembrane electrochemical proton potential (Δp). Gram-positive bacteria do not contain ubiquinone but rather menaquinone, a quinone with a significantly lower oxidation-reduction ("redox") midpoint potential. In these cases, the catalyzed oxidation of succinate by quinone is endergonic under standard conditions. Consequently, these bacteria face a thermodynamic problem in supporting the catalysis of this reaction in vivo. Based on experimental evidence obtained on whole cells and purified membranes, it had previously been proposed that the SQR from Gram-positive bacteria supports this reaction at the expense of the protonmotive force, Δp. Nonetheless, it has been argued that the observed Δp dependence is not associated specifically with the activity of SQR, because the occurrence of artifacts in experiments with bacterial membranes and whole cells cannot be fully excluded. Clearly, definitive insight into the mechanism of catalysis of this intriguing reaction required a corresponding functional characterization of an isolated, membrane-bound SQR from a Gram-positive bacterium.
The first aim of the present work addresses the question of whether the general feasibility of the energetically uphill electron transfer from succinate to menaquinone can be attributed specifically to a single enzyme complex, the SQR. The prerequisite for achieving this goal was a stable preparation of this enzyme.
Vacuum-assisted closure (VAC) of complex infected wounds has recently gained popularity among various surgical specialties. The system is based on the application of negative pressure by controlled suction to the wound surface. The beneficial effects of the VAC system on microcirculation and on the promotion of granulation tissue proliferation have been demonstrated. In our case report we illustrate a scenario in which a patient developed severe bleeding from the ascending aorta caused by the penetration of wire fragments into the vessel. We conclude that all free particles in the sternum have to be removed completely before negative pressure is applied.
ProtoSociology is an interdisciplinary journal that crosses the borders of philosophy, the social sciences and their corresponding disciplines. Each issue concentrates on a specific topic taken from the current discussion, to which scientists from different fields contribute the results of their research. ProtoSociology is also a project that examines the nature of mind, language and social systems. In this context, theoretical work has been done by investigating concepts such as interpretation and (social) action, globalization, the global world-system, social evolution, and the sociology of membership. Our purpose is to initiate and advance basic research on relevant topics from different perspectives and traditions.
The glycine receptor (GlyR) is the major inhibitory neurotransmitter receptor in the spinal cord and brainstem. Heteropentameric GlyRs are clustered and anchored at inhibitory postsynaptic sites by the binding of the large intracellular loop between transmembrane domains 3 and 4 of the GlyRbeta subunit (GlyRbeta loop) to the cytoplasmic scaffolding protein gephyrin. GlyRs are also cotransported with gephyrin along microtubules in the anterograde and retrograde directions, owing to the binding of gephyrin to microtubule-associated motor proteins. Additionally, GlyRs undergo lateral diffusion in the plasma membrane from extrasynaptic to synaptic sites and vice versa. Since its discovery, gephyrin remained for many years the only binding partner known to interact directly with the GlyRbeta subunit. In an attempt to elucidate further mechanisms involved in GlyR function and regulation at inhibitory postsynaptic sites, a proteomic screen for putative binding partners of the GlyRbeta loop was performed, and three proteins were identified as putative interactors. In this thesis, the interaction between these putative binding proteins and the GlyRbeta subunit was analyzed and characterized. Binding studies with glutathione-S-transferase fusion proteins revealed that all putative binding proteins, Syndapin (Sdp), Vacuolar Protein Sorting 35 (Vps35) and Neurobeachin (Nbea), interact specifically with the GlyRbeta loop. The Sdp family consists of F-BAR and SH3 domain-containing proteins. Immunocytochemical experiments showed that SdpI as well as the isoforms SdpII-S and SdpII-L colocalize with the full-length GlyRbeta subunit in a mammalian cell expression system. In cultured spinal cord neurons, a partial colocalization of endogenous SdpI with several excitatory and inhibitory synaptic markers was demonstrated. Mapping experiments using deletion mutants narrowed the SdpI binding site down to 22 amino acids.
Peptide competition experiments confirmed the specificity of the interaction between SdpI and this sequence of the GlyRbeta subunit. Point mutation analysis revealed that the interaction between SdpI and the GlyRbeta subunit depends on the SdpI SH3 domain and a proline-rich motif within the GlyRbeta loop. In addition, binding studies in mammalian cells showed that both splice variants of SdpII, as well as SdpI, interact with the GlyR scaffolding protein gephyrin. Although the SdpI and gephyrin binding sites do not overlap, protein competition studies revealed that the interaction of the E-domain of gephyrin with the GlyRbeta loop interferes with SdpI binding. Since SdpI is a dynamin-binding protein involved in vesicle endocytosis and recycling pathways, a possible function of SdpI in the regulation of GlyR synaptic distribution was investigated. Co-immunoprecipitation experiments confirmed an SdpI-GlyR association in the vesicle-enriched fraction of rat spinal cord tissue. Immunocytochemical studies of SdpI knockout mice showed that the clustering and distribution of GlyRs in the brainstem are unchanged. However, acute down-regulation of SdpI in rat spinal cord neurons by viral shRNA expression led to a reduction in the number and size of GlyR clusters, an effect that could be rescued by overexpression of shRNA-resistant SdpI. Further immunocytochemical analysis of the localization of gephyrin, the gamma2 subunit of the type A gamma-aminobutyric acid receptor (GABAARgamma2 subunit) and the vesicular inhibitory amino acid transporter (VIAAT) under SdpI knock-down conditions showed that both the number and the average size of gamma2-subunit-containing GABAA receptor clusters were significantly reduced in spinal cord neurons. In contrast to GlyR and GABAARgamma2 immunoreactivity, the number and average size of gephyrin and VIAAT clusters were barely reduced upon SdpI down-regulation.
These results suggest that SdpI plays a role in GlyR trafficking that can be compensated by other syndapin isoforms or other trafficking pathways. Furthermore, SdpI might be required for the clustering of GlyRs and of gamma2-subunit-containing GABAARs in spinal cord and brainstem. Vps35 is the core protein of the retromer complex, which mediates the endosome-to-Golgi retrieval of different types of receptors in mammals and yeast. Here, protein-protein interaction assays revealed for the first time that Vps35 interacts directly with the GlyRbeta loop as well as with gephyrin. The generation of specific Vps35 antibodies made it possible to determine the distribution of this protein in the central nervous system. Immunocytochemical analyses revealed the presence of Vps35 in the somata and neurites of spinal cord neurons, suggesting a possible interaction of Vps35 with the GlyR under physiological conditions. Nbea is a BEACH domain-containing, neuron-specific protein. Binding studies revealed a direct interaction between two regions of Nbea and the GlyRbeta loop. Immunocytochemical experiments confirmed a somatic and synaptic distribution of Nbea in primary cultures. In spinal cord neurons, a partial colocalization of Nbea with excitatory and inhibitory synaptic markers suggests a possible interaction of Nbea with the GlyR at inhibitory synaptic sites.
Summary: Information and communication are critical to the successful management of infectious diseases, because an effective communication strategy prevents (1) a surge of anxious patients who have not genuinely been exposed to the pathogen ('low-risk patients') from overwhelming medical infrastructures and (2) further transmission of the infectious agent. Surge of low-risk patients: The arrival of large numbers of low-risk patients at hospitals following an infectious disease emergency would be problematic for three main reasons. First, it would complicate the situation at hospitals receiving exposed patients, delaying the treatment of the acutely ill, creating difficulties of crowd control and tying up medical resources. Second, for the low-risk patients themselves, attending hospital following an infectious disease emergency might increase their risk of exposure to the agent in question. Third, the needs of low-risk patients may be poorly attended to at hospitals that are already overstretched dealing with medical casualties. Future transmission: Obtaining early information about symptoms and isolating infected patients is the most effective strategy to interrupt the chain of infection in the public in the absence of specific prophylaxis or treatment. Particularly at the beginning of an outbreak, these non-pharmaceutical interventions play an important role in enabling the early detection of signs or symptoms and in encouraging passengers to adopt appropriate preventive behaviour in order to limit the spread of the disease. This thesis includes two papers dealing with this problem: The first part is a systematic literature review of information needs following an infectious disease emergency (anthrax, SARS, pneumonic plague). The key question was: what are the information needs of the public during an infectious disease emergency?
The second part is an empirical investigation of information needs and communication strategies at the airport during the early stage of the influenza pandemic. The key question here was: what communication strategies help to meet the information needs and to enable the public to behave appropriately and responsibly? Conclusions: Evidence from the anthrax attacks in the United States suggested that a surge of low-risk patients is by no means inevitable. Data from the SARS outbreak illustrated that if hospitals are seen as sources of contagion, many patients with non-bioterrorism-related health care needs may delay seeking help. Finally, the events surrounding the pneumonic plague outbreak of 1994 in Surat, India, highlighted the need for the public to be kept adequately informed about an incident in order to avoid creating rumours. Clear, consistent and credible information is key to the successful management of infectious disease outbreaks. The results of the empirical investigation suggested that the desire for information is a reflection of current anxiety and does not mirror the objective scientific assessment of exposure. The airport study showed that perceived information needs were directly related to anxiety: irrespective of their actual exposure, the least anxious did not require any further information, while the most anxious reported significant information needs concerning medical treatment, public health management and the assessment of the ongoing situation. A communication strategy focusing only on the 'really' exposed individuals neglects the information needs of those who worry about having contracted the virus and seek medical attention. Effective communication strategies should enable the general public to detect early signs or symptoms and should provide them with behavioural advice to prevent further transmission of the infectious agent.
These include the provision of clear information about the incident, the symptoms and what to do to prevent further transmission; detailed and regularly updated information in various media formats (telephone, internet, etc.); and rapid triage at hospital entrances to guide patients to the appropriate medical infrastructure. Relevance: These research findings could contribute to a shift in the organisational and communicative approach to responding to infectious disease outbreaks and may be considered relevant for future risk communication and policy decision-making.
The objective of this thesis is to develop new methodologies for the formal verification of nonlinear analog circuits. To this end, new approaches to the discrete modeling of analog circuits, to the specification of analog circuit properties and to formal verification algorithms are introduced. Formal approaches to the verification of analog circuits have not yet been introduced into industrial design flows and are still a subject of research. Formal verification proves specification conformance for all possible input conditions and all possible internal states of a circuit. Automatically proving that a model of the circuit satisfies a declarative, machine-readable property specification is referred to as model checking. Equivalence checking proves the equivalence of two circuit implementations. Starting from the state of the art in modeling analog circuits for simulation-based verification, the discrete modeling of analog circuits for state space-based formal verification methodologies is motivated in this thesis. In order to improve the discrete modeling of analog circuits, a new trajectory-directed partitioning algorithm was developed in the scope of this thesis. This new approach aligns the partitioning of the state space parallel or orthogonal to the trajectories of the state space dynamics. In this way, a high accuracy of the successor relation is achieved in combination with a smaller number of states needed for a discrete model of equal accuracy, compared with the state-of-the-art hyperbox approach. Mapping the partitioning to a discrete analog transition structure (DATS) enables the application of formal verification algorithms. By analyzing digital specification concepts and the existing approaches to analog property specification, the requirements for a new specification language for analog properties are discussed in this thesis. On the one hand, it shall meet the requirements for the formal specification of verification approaches applied to DATS models.
On the other hand, the language syntax shall be oriented towards natural-language phrases. By synthesizing these requirements, the analog specification language (ASL) was developed in the scope of this thesis. The model checking algorithms that were developed in combination with ASL, for application to DATS models generated with the new trajectory-directed approach, offer a significant enhancement over the state of the art. In order to prepare the transition from signal-based to state space-based verification methodologies, an approach was developed in the scope of this thesis for transferring transient simulation results from non-formal test bench simulation flows into a partial state space representation in the form of a DATS. As demonstrated by examples, the same ASL specification that was developed for formal model checking on complete discrete models could be evaluated, without modification, on transient simulation waveforms. An approach to counterexample generation for the formal ASL model checking methodology makes it possible to generate transition sequences from a defined starting state to a specification-violating state for inspection in transient simulation environments. Based on this counterexample generation, a new formal verification methodology using complete state space-covering input stimuli was developed. By conducting a transient simulation with these complete state space-covering input stimuli, the circuit adopts every state and transition that was visited during stimulus generation. An alternative formal verification methodology is obtained by retransferring the transient simulation responses to a DATS model and applying the ASL verification algorithms in combination with an ASL property specification. Moreover, the complete state space-covering input stimuli can be applied to develop a formal equivalence checking methodology.
In this way, the equivalence of two implementations can be proven for every inner state of both systems by comparing their transient simulation responses to the complete-coverage stimuli. In order to visually inspect the results of the newly introduced verification methodologies, an approach to dynamic state space visualization using massively parallel particle simulation was developed. Because the particles are randomly distributed over the complete state space and move according to the state space dynamics, this provides a further perspective on the system's behavior that covers the state space and hence offers formal results. The prototypic implementations of the formal verification methodologies developed in the scope of this thesis have been applied to several example circuits. The results obtained for the new approaches to discrete modeling, specification and verification algorithms all demonstrate that the new verification methodologies can be applied to complex circuit blocks and their properties.
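In a much-simplified picture, counterexample generation on a DATS reduces to a reachability search over a finite graph: states are regions of the partitioned state space, edges the successor relation, and a counterexample is a transition sequence from a start state to a property-violating state. The toy model and property below are invented for illustration and are not taken from the thesis:

```python
from collections import deque

def counterexample(transitions, start, violates):
    """BFS over a discrete transition structure; return the shortest
    transition sequence from `start` to a property-violating state,
    or None if the property holds on all reachable states."""
    parent, frontier = {start: None}, deque([start])
    while frontier:
        s = frontier.popleft()
        if violates(s):
            path = []                      # reconstruct the run backwards
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for nxt in transitions.get(s, ()):
            if nxt not in parent:
                parent[nxt] = s
                frontier.append(nxt)
    return None

# toy discrete model: four abstract states, property "state index stays below 3"
dats = {0: [1], 1: [2, 0], 2: [3], 3: []}
print(counterexample(dats, 0, lambda s: s >= 3))   # shortest violating run
```

The returned sequence plays the role of the counterexample handed to a transient simulation environment for inspection; a real DATS would carry voltage/current regions and input annotations on its states and edges.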
Interview with Dario Azzellini, author of The Business of War and the new documentary film, Comuna Under Construction. What is it about Venezuela that is so interesting? Since 2003 I have practically lived in Venezuela. What motivates me is that I am interested in the social transformation process happening here. It’s a different type of revolution, a new left that draws from all the experiences of the 60s, 70s, 80s and 90s. ...
This thesis investigates the development of early cognition in infancy using neural network models. Fundamental events in visual perception such as caused motion, occlusion, object permanence, tracking of moving objects behind occluders, object unity perception and sequence learning are modeled in a unifying computational framework while staying close to experimental data from the developmental psychology of infancy. In the first project, the development of causality and occlusion perception in infancy is modeled using a simple, three-layered, recurrent network trained with error backpropagation to predict future inputs (an Elman network). The model unifies two infant studies on causality and occlusion perception. Subsequently, in the second project, the established framework is extended to a larger prediction network that models the development of object unity, object permanence and occlusion perception in infancy. It is shown that these different phenomena can be unified within a single theoretical framework, thereby explaining experimental data from 14 infant studies. The framework shows that these developmental phenomena can be explained by accurately representing and predicting statistical regularities in the visual environment. The models assume (1) different neuronal populations in the visual cortex of the newborn infant that process different motion directions of visual stimuli, an assumption supported by neuroscientific evidence, and (2) available learning algorithms that are guided by the goal of predicting future events. Specifically, the models demonstrate that no innate force notions, motion analysis modules, common motion detectors, specific perceptual rules or abilities to "reason" about entities, all of which have been widely postulated in the developmental literature, are necessary to explain the discussed phenomena.
Since the prediction of future events turned out to be fruitful both as a theoretical explanation of various developmental phenomena and as a guideline for learning in infancy, the third model addresses the development of visual expectations themselves. A self-organising, fully recurrent neural network model is proposed that forms internal representations of input sequences and maps them onto eye movements. The reinforcement learning architecture (RLA) of the model learns to perform anticipatory eye movements as observed in a range of infant studies. The model suggests that the goal of maximizing the looking time at interesting stimuli guides infants' looking behavior, thereby explaining the occurrence and development of anticipatory eye movements and reaction times. In contrast to classical neural network modelling approaches in the developmental literature, the model uses local learning rules and contains several biologically plausible elements such as excitatory and inhibitory spiking neurons, spike-timing dependent plasticity (STDP), intrinsic plasticity (IP) and synaptic scaling. It is also novel from a technical point of view, as it uses a dynamic recurrent reservoir shaped by various plasticity mechanisms and combines it with reinforcement learning. The model accounts for twelve experimental studies and predicts, among other things, anticipatory behavior for arbitrary sequences and facilitated reacquisition of already learned sequences. All models emphasize the development of the perception of the discussed phenomena, thereby addressing the questions of how and why this developmental change takes place - questions that are difficult to assess experimentally. Despite the diversity of the discussed phenomena, all three projects rely on the same principle: the prediction of future events.
This principle suggests that cognitive development in infancy may largely be guided by building internal models and representations of the visual environment and using those models to predict its future development.
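As a concrete illustration of the prediction principle underlying the first two projects, a minimal Elman-style recurrent network can be trained to predict the next element of a deterministic input stream. The network size, sequence and hyperparameters below are illustrative stand-ins, not the models of the thesis (which use visual stimuli and, in the third project, spiking neurons):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym, n_hid = 3, 8                        # one-hot symbols, hidden units
Wx = rng.normal(0, 0.5, (n_hid, n_sym))    # input -> hidden
Wh = rng.normal(0, 0.5, (n_hid, n_hid))    # context (previous hidden) -> hidden
Wo = rng.normal(0, 0.5, (n_sym, n_hid))    # hidden -> next-symbol prediction
bh, bo = np.zeros(n_hid), np.zeros(n_sym)

def one_hot(i):
    v = np.zeros(n_sym); v[i] = 1.0; return v

seq = [0, 1, 2] * 200                      # deterministic repeating input stream
lr, h = 0.1, np.zeros(n_hid)
for t in range(len(seq) - 1):
    x, tgt = one_hot(seq[t]), one_hot(seq[t + 1])
    h_prev, h = h, np.tanh(Wx @ x + Wh @ h + bh)
    z = Wo @ h + bo
    y = np.exp(z - z.max()); y /= y.sum()  # softmax over next symbol
    dz = y - tgt                           # cross-entropy gradient
    dh = (Wo.T @ dz) * (1.0 - h**2)        # truncated (no BPTT), as in Elman's scheme
    Wo -= lr * np.outer(dz, h); bo -= lr * dz
    Wx -= lr * np.outer(dh, x); Wh -= lr * np.outer(dh, h_prev); bh -= lr * dh

# after training, replay the cycle and measure next-symbol prediction accuracy
h, hits = np.zeros(n_hid), 0
for t in range(30):
    h = np.tanh(Wx @ one_hot(seq[t]) + Wh @ h + bh)
    hits += int(np.argmax(Wo @ h + bo) == seq[t + 1])
print("prediction accuracy:", hits / 30)
```

The context layer (copy of the previous hidden state) is what lets the same architecture also cope with sequences where the next input is not a function of the current input alone, which is the regime the occlusion and object-permanence models operate in.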
This thesis consists of four chapters. Each chapter covers a topic in international macroeconomics and monetary policy. The first chapter investigates the impact of unexpected monetary policy shocks on exchange rates in a multi-country econometric model. The second chapter examines the linkage between macroeconomic fundamentals and exchange rates through the monetary policy expectation channel. The third chapter focuses on the international transmission of bank and corporate distress. The last chapter unfolds the interest rate channel of monetary policy transmission in an emerging economy, China, where regulations and market forces co-exist in this transmission.
The pathophysiology of schizophrenia is still poorly understood. Investigating the neurophysiological correlates of cognitive dysfunction with functional neuroimaging techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) is widely considered a promising approach to this problem. Working memory impairment is one of the most prominent cognitive impairments found in schizophrenia. Working memory can be divided into a number of component processes: encoding, maintenance and retrieval. These appear to be differentially affected in schizophrenia, but little is known about the neurophysiological disturbances that contribute to deficits in these component processes. The aim of this dissertation was to elucidate the neurophysiological underpinnings of the component processes of working memory and their disturbance in schizophrenia. In the first study, the neurophysiological substrates of visual working memory capacity limitations were investigated during encoding, maintenance and retrieval in 12 healthy subjects using event-related fMRI. Subjects had to encode up to four abstract visual shapes and maintain them in working memory for 12 seconds. Afterwards a test stimulus was presented, which matched one of the previously shown shapes in fifty percent of the trials. A bilateral inverted U-shaped pattern of BOLD activity with increasing memory load was observed already during encoding in areas closely linked with selective attention, i.e. the frontal eye fields and areas around the intraparietal sulcus. In these regions, the increase in the number of stored items from memory load three to memory load four was negatively correlated with the increase in BOLD activity over the same load step. These results point to a crucial role of attentional processes in the limited capacity of working memory.
In the second study, the contribution of early perceptual processing deficits during encoding and retrieval to working memory dysfunction was investigated in 17 patients with schizophrenia and 17 healthy control subjects using EEG and event-related fMRI. A slightly modified version of the working memory task used in the first study was employed; participants only had to encode and maintain up to three items. In patients, the amplitude of the P1 event-related potential was significantly reduced already during encoding in all memory load conditions. Similarly, BOLD activity in early visual areas known to generate the P1 was significantly reduced in patients. In controls, a stronger P1 amplitude increase with increasing memory load predicted better performance. These findings indicate that, in addition to later memory-related processing stages, early visual processing is disturbed in schizophrenia and contributes to working memory dysfunction by impairing the encoding of information. In the third study, which was based on the same data set as the second study, cortical activity and functional connectivity during working memory encoding, maintenance and retrieval were investigated in the 17 patients with schizophrenia and 17 healthy control subjects using event-related fMRI. Patients had reduced working memory capacity. During encoding, activation in the left ventrolateral prefrontal cortex and extrastriate visual cortex was reduced in patients but positively correlated with working memory capacity in controls. During early maintenance, patients switched from hyper- to hypoactivation with increasing memory load in a fronto-parietal network that included the left dorsolateral prefrontal cortex. During retrieval, right ventrolateral prefrontal hyperactivation was correlated with encoding-related hypoactivation of the left ventrolateral prefrontal cortex in patients.
Cortical dysfunction in patients during encoding and retrieval was accompanied by abnormal functional connectivity between fronto-parietal and visual areas. These findings indicate a primary encoding deficit in patients caused by a dysfunction of prefrontal and visual areas. The findings of these studies suggest that isolating the component processes of working memory leads to more specific markers of cortical dysfunction in schizophrenia, which had been obscured in previous studies. This approach may help to identify more reliable biomarkers and endophenotypes of schizophrenia.
A framework for the analysis and visualization of multielectrode spike trains / by Ovidiu F. Jurjut
(2009)
The brain is a highly distributed system of constantly interacting neurons. Understanding how it gives rise to our subjective experiences and perceptions depends largely on understanding the neuronal mechanisms of information processing. These mechanisms are still poorly understood, and the timescale on which the coding process evolves remains a matter of ongoing debate. Recently, multielectrode recordings of neuronal activity have begun to contribute substantially to elucidating how information coding is implemented in brain circuits. Unfortunately, analysis and interpretation of multielectrode data are often difficult because of their complexity and large volume. Here we propose a framework that enables the efficient analysis and visualization of multielectrode spiking data. First, using self-organizing maps, we identified reoccurring multi-neuronal spike patterns that evolve on various timescales. Second, we developed a color-based visualization technique for these patterns: they were mapped onto a three-dimensional color space based on their reciprocal similarities, i.e., similar patterns were assigned similar colors. This representation enables a quick and comprehensive inspection of spiking data and provides a qualitative description of pattern distribution across entire datasets. Third, we quantified the observed pattern expression motifs and investigated their contribution to the encoding of stimulus-related information. Emphasis was placed on the timescale on which patterns evolve, covering temporal scales from synchrony up to the mean firing rate. Using our multi-neuronal analysis framework, we investigated data recorded from the primary visual cortex of anesthetized cats. We found that cortical responses to dynamic stimuli are best described as successions of multi-neuronal activation patterns, i.e., trajectories in a multidimensional pattern space.
Patterns that encode stimulus-specific information are not confined to a single timescale but can span a broad range of timescales, which are tightly related to the temporal dynamics of the stimuli. Therefore, the strict separation between synchrony and mean firing rate is somewhat artificial as these two represent only extreme cases of a continuum of timescales that are expressed in cortical dynamics. Results also indicate that timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (~10-20 ms) appear to play a particularly salient role in coding, as patterns evolving on these timescales seem to be involved in the representation of stimuli with both slow and fast temporal dynamics.
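As a simplified stand-in for the framework's first two steps (self-organizing-map pattern identification followed by a similarity-preserving mapping), the following sketch trains a one-dimensional SOM on synthetic multi-neuronal spike-count patterns; nearby map units can then be assigned similar colors. All data and parameters are invented for illustration and do not reproduce the actual three-dimensional color-space method:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=10, epochs=30, lr0=0.5, sigma0=3.0):
    """1-D self-organizing map: returns the learned codebook vectors."""
    W = rng.random((n_units, data.shape[1]))
    grid = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.5    # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best-matching unit
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))  # neighbourhood kernel
            W += lr * h[:, None] * (x - W)
    return W

# toy "multi-neuronal patterns": noisy copies of three spike-count templates
templates = np.array([[5, 0, 0, 5], [0, 5, 5, 0], [2, 2, 2, 2]], float)
data = np.repeat(templates, 50, axis=0) + rng.normal(0, 0.3, (150, 4))
W = train_som(data)
bmu = lambda x: int(np.argmin(((W - x) ** 2).sum(axis=1)))
# similar patterns land on the same or neighbouring units; a color scale over
# the unit index then assigns similar colors to similar patterns
print([bmu(tpl) for tpl in templates])
```

In the actual framework the map is higher-dimensional and patterns are projected into a perceptual color space, but the core idea is the same: the SOM's topology preservation turns pattern similarity into visual proximity.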
The mTOR kinase inhibitor rapamycin (sirolimus) is a drug with potent immunosuppressive and antiproliferative properties. We found that rapamycin induces the TGF-β/Smad signaling cascade in rat mesangial cells (MC) as depicted by the nuclear translocation of phospho-Smad-2, -3 and Smad-4, respectively. Concomitantly, rapamycin increases the nuclear DNA binding of receptor (R)- and co-Smad proteins to a cognate Smad-binding element (SBE), which in turn causes an increase in profibrotic gene expression as exemplified by the connective tissue growth factor (CTGF) and plasminogen activator inhibitor 1 (PAI-1). Using small interfering (si)RNA we demonstrate that Smad 2/3 activation by rapamycin depends on its endogenous receptor FK-binding protein 12 (FKBP12). Mechanistically, Smad induction by rapamycin is initiated by an increase in active TGF-β1 as shown by ELISA and by the inhibitory effects of a neutralizing TGF-β antibody. Using an activin receptor-like kinase (ALK)-5 inhibitor and by siRNA against the TGF-β type II receptor (TGF-βRII) we furthermore demonstrate a functional involvement of both types of TGF-β receptors. However, rapamycin did not compete with TGF-β for TGF-β-receptor binding as found in a radioligand-binding assay. Besides SB203580, a specific inhibitor of the p38 MAPK, the reactive oxygen species (ROS) scavenger N-acetyl-cysteine (NAC) and a cell-permeable superoxide dismutase (SOD) mimetic strongly abrogated the stimulatory effects of rapamycin on Smad 2 and 3 phosphorylation. Furthermore, the rapid increase in dichlorofluorescein (DCF) formation implies that rapamycin mainly acts through ROS. In conclusion, activation of the profibrotic TGF-β/Smad signaling cascade accompanies the immunosuppressive and antiproliferative actions of rapamycin. Keywords: FK506 binding protein; p38 MAP kinase; rapamycin; renal fibrosis; Smads; TGFβ
Ernst Bloch pointed out in a particularly emphatic way that the concept of human dignity featured centrally in historical struggles against different forms of unjustified rule, i.e. domination – to which one must add that it continues to do so to the present day. The “upright gait,” putting an end to humiliation and insult: this is the most powerful demand, in both political and rhetorical terms, that a “human rights-based” claim expresses. It marks the emergence of a radical, context-transcending reference point immanent to social conflicts which raises fundamental questions concerning the customary opposition between immanent and transcendent criticism. For within the idiom of demanding respect for human dignity, a right is invoked “here and now,” in a particular, context-specific form, which at its core is owed to every human being as a person. Thus Bloch is in one respect correct when he asserts that human rights are not a natural “birthright” but must be achieved through struggle; but in another respect this struggle can develop its social power only if it has a firm and in a certain sense “absolute” normative anchor. Properly understood, it becomes apparent that these social conflicts always affect “two worlds”: the social reality, on the one hand, which is criticized in part or radically in the light of an ideal normative dimension, on the other. For those who engage in this criticism there is no doubt that the normative dimension is no less real than the reality to which they refuse to resign themselves. Those who critically transcend reality always also live elsewhere.
The overvaluation hypothesis (Miller 1977) predicts that (a) stocks are overvalued in the presence of short selling restrictions and that (b) the overvaluation increases in the degree of divergence of opinion. We design an experiment that allows us to test these predictions in the laboratory. The results indicate that prices are higher with short selling constraints, but the overvaluation does not increase in the degree of divergence of opinion. We further find that trading volume is lower and bid-ask spreads are higher when short sale restrictions are imposed. JEL Classification: C92, G14 Keywords: Overvaluation Hypothesis, Short Selling Constraints, Divergence of Opinion
Regulations in the pre-Sarbanes–Oxley era allowed corporate insiders considerable flexibility in strategically timing their trades and SEC filings, for example, by executing several trades and reporting them jointly after the last trade. We document that even these lax reporting requirements were frequently violated and that the strategic timing of trades and reports was common. Event study abnormal returns are larger after reports of strategic insider trades than after reports of otherwise similar nonstrategic trades. Our results also imply that delayed reporting is detrimental to market efficiency and lend strong support to the more stringent trade reporting requirements established by the Sarbanes–Oxley Act. JEL Classification: G14, G30, G32 Keywords: Insider Trading, Directors' Dealings, Corporate Governance, Market Efficiency
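Event-study abnormal returns of the kind compared above are conventionally computed from a market model fitted over an estimation window. The following is a minimal sketch of that standard approach with hypothetical toy returns; it is not the paper's actual estimation code.

```python
def market_model(stock, market):
    """Fit r_stock = alpha + beta * r_market by ordinary least squares
    over an estimation window (lists of simple returns)."""
    n = len(stock)
    mx = sum(market) / n
    my = sum(stock) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(market, stock))
            / sum((x - mx) ** 2 for x in market))
    alpha = my - beta * mx
    return alpha, beta

def car(stock, market, alpha, beta):
    """Cumulative abnormal return over an event window:
    sum of actual minus market-model-predicted returns."""
    return sum(y - (alpha + beta * x) for x, y in zip(market, stock))

# hypothetical estimation-window returns
alpha, beta = market_model([0.02, 0.04, 0.06, 0.08],
                           [0.01, 0.02, 0.03, 0.04])
# event window: one day with actual return 5%, market return 1%
event_car = car([0.05], [0.01], alpha, beta)
```

In the study's setting, such CARs would be computed separately after reports of strategic and nonstrategic trades and then compared across the two groups.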
This paper studies the market quality of an internalization system which is designed as part of an open limit order book (the Xetra system operated by Deutsche Börse AG). The internalization system (Xetra BEST) guarantees a price improvement over the inside spread in the Xetra order book. We develop a structural model of this unique dual market environment and show that, while adverse selection costs of internalized trades are significantly lower than those of regular order book trades, the realized spreads (the revenue earned by the suppliers of liquidity) are significantly larger. The cost savings of the internalizer are larger than the mandatory price improvement. This suggests that internalization can be profitable both for the customer and the internalizer. JEL Classification: G10
This paper reconsiders the effect of investor sentiment on stock prices. Using survey-based sentiment indicators from Germany and the US, we confirm previous findings of predictability at intermediate time horizons. The main contribution of our paper is that we also analyze the immediate price reaction to the publication of sentiment indicators. We find that the sign of the immediate price reaction is the same as that of the predictability at intermediate time horizons. This is consistent with sentiment being related to mispricing but is inconsistent with the alternative explanation that sentiment indicators provide information about future expected returns. JEL Classification: G12, G14 Keywords: Investor Sentiment, Event Study, Return Predictability
This paper examines to what extent the build-up of "global imbalances" since the mid-1990s can be explained in a purely real open-economy DSGE model in which agents’ perceptions of long-run growth are based on filtering observed changes in productivity. We show that long-run growth estimates based on filtering U.S. productivity data comove strongly with long-horizon survey expectations. By simulating the model in which agents filter data on U.S. productivity growth, we closely match the U.S. current account evolution. Moreover, with household preferences that control the wealth effect on labor supply, we can generate output movements in line with the data. JEL Classification: E13, E32, D83, O40
Background: The treatment of high-risk neuroblastoma patients consists of multimodal induction therapy to achieve remission followed by consolidation therapy to prevent relapses. However, the type of consolidation therapy is still discussed controversially. We applied metronomic chemotherapy in the prospective NB90 trial and monoclonal anti-GD2-antibody (MAB) ch14.18 in the NB97 trial. Here, we present the long-term outcome data of the patient cohort. Methods: A total of 334 stage 4 neuroblastoma patients one year or older were included. All patients successfully completed the induction therapy. In the NB90 trial, 99 patients received at least one cycle of the oral maintenance chemotherapy (NB90 MT, 12 alternating cycles of oral melphalan/etoposide and vincristine/cyclophosphamide). In the NB97 trial, 166 patients commenced the MAB ch14.18 consolidation therapy (six cycles over 12 months). Patients who received no maintenance therapy according to the NB90 protocol or by refusal in NB97 (n = 69) served as controls. Results: The median observation time was 11.11 years. The nine-year event-free survival rates were 41 ± 4%, 31 ± 5%, and 32 ± 6% for MAB ch14.18, NB90 MT, and no consolidation, respectively (p = 0.098). In contrast to earlier reports, MAB ch14.18 treatment improved the long-term outcome compared to no additional therapy (p = 0.038). The overall survival was better in the MAB ch14.18-treated group (9-y-OS 46 ± 4%) compared to NB90 MT (34 ± 5%, p = 0.026) and to no consolidation (35 ± 6%, p = 0.019). Multivariable Cox regression analysis revealed that ch14.18 consolidation improved outcome compared to no consolidation; however, no difference between NB90 MT and MAB ch14.18-treated patients was found. Conclusions: Follow-up analysis of the patient cohort indicated that immunotherapy with MAB ch14.18 may prevent late relapses. Finally, metronomic oral maintenance chemotherapy also appeared effective.
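Event-free and overall survival rates of the kind reported above are conventionally estimated with the Kaplan–Meier method. The following is a minimal sketch with made-up follow-up data (distinct event times, no tie handling); it illustrates the estimator only and is not the trial's actual analysis.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up times (assumed distinct here);
    events: 1 = event (relapse/death), 0 = censored.
    Returns a list of (time, estimated survival probability)."""
    s = 1.0
    at_risk = len(times)
    curve = []
    for t, e in sorted(zip(times, events)):
        if e == 1:
            s *= (at_risk - 1) / at_risk   # step down only at event times
        at_risk -= 1                       # censored patients leave the risk set
        curve.append((t, s))
    return curve

# made-up follow-up times in years (1 = event, 0 = censored)
curve = kaplan_meier([2.0, 5.0, 9.0, 11.0], [1, 0, 1, 0])
```

Reading off the estimated survival at nine years from such a curve is what yields figures like the "nine-year event-free survival rates" quoted in the abstract.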
Atherosclerosis is accompanied by infiltration of macrophages into the intima of blood vessels. There they engulf oxLDL (oxidized low-density lipoproteins) and differentiate to foam cells. These cells are known as major promoters of atherosclerosis progression. In initial experiments I could demonstrate that foam cell formation caused a severe loss in the ability to produce IFNβ (interferon β) in response to stimulation with the bacterial cell wall component LPS (lipopolysaccharide). Since IFNβ is discussed to have anti-atherosclerotic potential and has the capability to induce immune tolerance, its inhibition in foam cells might promote the atherosclerotic process. For this reason the aim of my PhD project was to clarify the underlying molecular mechanisms that attenuate LPS-induced IFNβ expression in foam cells. LPS activates TLR4 (Toll-like receptor 4) in macrophages. Downstream of this receptor two distinct signaling pathways are activated, namely a MyD88 (myeloid differentiation primary response gene 88)-dependent and a TRIF (TIR-domain-containing adapter-inducing IFNβ)-dependent one. Foam cell formation targeted the TRIF-dependent TLR4 signaling pathway, as seen by loss of IRF3 activation and inhibition of IFNβ expression, whereas MyD88-initiated NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells) activation and subsequent TNFα (tumor necrosis factor α) expression remained unaltered. The TRIF signaling cascade results in transactivation of the transcription factor IRF3 (interferon regulatory factor 3), the main activator of IFNβ expression. This event demands IRF3 phosphorylation by TBK1 (TANK-binding kinase 1), whereas TBK1 needs to be recruited to TRAF3 (TNF receptor associated factor 3) by the scaffold protein TANK (TRAF family member-associated NF-κB activator) for its activation.
This work allowed the following scheme to be proposed: OxLDL utilizes SR-A1 (scavenger receptor A1) to activate IRAK4 (interleukin-1 receptor-associated kinase 4), IRAK1 and Pellino3. Active IRAK1 and Pellino3 associate with TRAF3, and Pellino3 promotes mono-ubiquitination of the adaptor molecule TANK. Mono-ubiquitination of TANK interrupts TBK1 recruitment to TRAF3 and thereby abrogates phosphorylation and transactivation of IRF3 as well as subsequent expression of IFNβ. In this study I provide evidence for a negative regulatory role of Pellino3 in TRIF-dependent TLR4 signaling. This expands the current knowledge of the interplay between pathways downstream of scavenger and Toll-like receptors. Due to the multifaceted roles of TLR4 signaling in pathology, the new TRIF-signaling inhibitor Pellino3 might be of importance as a therapeutic target for disease intervention.
This dissertation investigated the development of the complementiser that from the demonstrative pronoun in the Germanic languages; each chapter dealt with a different aspect. In the introduction, the terms ‘reanalysis’ and ‘analogy’ and their relevance for grammaticalisation were explained, and the issues of the chapters were presented. The second chapter introduced some information about the Germanic language family and the languages which were relevant for this investigation, namely Gothic, Old English, Old Icelandic, Old Saxon and Old High German. Previous assumptions about the diachrony of that were presented and discussed. One of these proposals, which mainly draws on evidence from West Germanic, involves the idea that the source construction contained two independent main clauses with a demonstrative pronoun (that) at the end of the first clause (cf. e.g. Paul 1962, § 248). In contrast to this, the Gothic evidence showed that the source construction of the reanalysis of þatei was not a proper paratactic construction (at least in Gothic) but already a complex construction which contained a complementiser (ei) in the appositional subordinate clause (cf. also e.g. Longobardi 1994 for the diachrony of þatei). This contradiction raised the question whether the analysis of the Gothic that-complementiser also applies to the diachrony of that in West Germanic. This issue was taken up in the third chapter, which presented an overview of subordination and complementisers in Northwest Germanic. The aim was to show that the Northwest Germanic languages also show a subordinating particle which functions like the Gothic ei, namely þe (OE), er/es (OI), the (OHG, OS). As a result, the subordinating particle could be observed in relative and adverbial clauses in all Northwest Germanic languages. In complement clauses, which are most crucial for the argumentation, the subordinating particle is found in Old English and Old Icelandic but not in Old Saxon.
In Old High German, there are only combinations of the with a following pronoun, theih and theiz, in ‘Otfrids Evangelienbuch’ (see Wunder 1965). Consequently, the presence of a subordinating particle is confirmed in North and West Germanic. The fact that the patterns of subordination are quite similar in all Germanic languages suggested that a unitary analysis of the development of that in Germanic was appropriate. In chapter four, the similarities and differences between the Germanic languages with respect to the development of that were explained. It was argued that the preconditions of the reanalysis were the same, whereas the consequences of the reanalysis are realised differently in each language. The most important precondition was that the appositional source construction (explained in more detail below) was generally available in Germanic. Since the demonstrative pronoun at the end of the matrix clause and the subordinating particle of the subordinate clause were adjacent, phonological combination might have been crucial for the subsequent reanalysis to take place. After reanalysis, however, different changes can be observed in the different languages. For instance, it appears that during the Old English period the final syllable of the form þætte was deleted (see chapter 4 for references), whereas the final –ei is still present in the Gothic þatei, and completely absent in Old High German and Old Saxon. The source structure of the reanalysis was discussed in detail in a separate subsection. The appositional source construction, which was already assumed for the reanalysis of Gothic þatei, was compared with analyses of clitic left dislocation which propose that two constituents with the same theta-role derive from a Big DP (see e.g. Grewendorf 2009, Belletti 2005).
Based on the Big DP analysis of Grewendorf (2009), it was claimed that the appositional clause, introduced by the subordinating particle, is generated in the Spec of a DP, and adjoined to this DP on the surface. It was argued that this whole complement DP-node occurred in an extraposed position in OV-languages so that the verb, when it stays in-situ, does not appear between the demonstrative pronoun and the subordinating particle. The structure in (1) illustrates the syntactic source structure which is assumed to apply to the development of the complementiser that in Germanic. ...
Background: Many cancer patients seek homeopathy as a complementary therapy. It has rarely been studied systematically whether homeopathic care is of benefit to cancer patients. Methods: We conducted a prospective observational study with cancer patients in two differently treated cohorts: one cohort with patients under complementary homeopathic treatment (HG; n=259), and one cohort with conventionally treated cancer patients (CG; n=380). For a direct comparison, matched pairs with patients of the same tumour entity and comparable prognosis were to be formed. Main outcome parameter: change of quality of life (FACT-G, FACIT-Sp) after 3 months. Secondary outcome parameters: change of quality of life (FACT-G, FACIT-Sp) after a year, as well as impairment by fatigue (MFI) and by anxiety and depression (HADS). Results: HG: FACT-G and FACIT-Sp improved statistically significantly in the first three months, from 75.6 (SD 14.6) to 81.1 (SD 16.9) and from 32.1 (SD 8.2) to 34.9 (SD 8.32), respectively. After 12 months, a further increase to 84.1 (SD 15.5) and 35.2 (SD 8.6), respectively, was found. Fatigue (MFI) decreased; anxiety and depression (HADS) did not change. CG: FACT-G remained constant in the first three months: 75.3 (SD 17.3) at t0, and 76.6 (SD 16.6) at t1. After 12 months, there was a slight increase to 78.9 (SD 18.1). FACIT-Sp scores improved significantly from t0 (31.0 - SD 8.9) to t1 (32.1 - SD 8.9) and declined again after a year (31.6 - SD 9.4). For fatigue, anxiety, and depression, no relevant changes were found. 120 patients of HG and 206 patients of CG met our criteria for matched-pairs selection. Due to large differences between the two patient populations, however, only 11 matched pairs could be formed. This is not sufficient for a comparative study. Conclusion: In our prospective study, we observed an improvement of quality of life as well as a tendency of fatigue symptoms to decrease in cancer patients under complementary homeopathic treatment.
It would take considerably larger samples to find matched pairs suitable for comparison in order to establish a definite causal relation between these effects and homeopathic treatment.
Aim: To study the changes in leiomyoma volume following uterine artery embolization (UAE), to correlate these changes with the initial leiomyoma volume and location within the uterus, and to evaluate the impact of pre-procedural prediction of the best tube angle obliquity for visualization of the uterine artery origin, using 3D-reconstructed contrast-enhanced MR angiography (CE-MRA), on the radiation dose, fluoroscopy time and contrast medium volume used during UAE. Materials and Methods: The study was performed in two parts. The first part was done retrospectively on 28 patients (age range: 37-57 years, mean: 48 years, SD: 4.81) in whom UAE was performed. All leiomyomas in all patients were evaluated; in total, 84 leiomyomas. MRI studies were performed before, 3 months and 1 year after UAE. The volume and location of each leiomyoma in each patient were evaluated in consensus by two radiologists. The second part included 40 consecutive patients (age range: 37-56 years, mean: 46 years, SD: 4.49) and was done in a controlled prospective/retrospective manner. In 20 sample patients (prospective part) the best tube angle obliquity was predicted pre-procedurally using 3D-reconstructed CE-MRA and provided to the interventionalist. 3D-reconstruction was done using the Inspace application. The radiation dose, fluoroscopy time and contrast medium volume for those patients were compared with the data of the last 20 procedures (control) performed by the same interventionalist (retrospective part). Results: For the first part, the mean pre-embolization volume was 51.6 cm3 (range: 0.72-371.1 cm3, SD=79.3). At 3-month follow-up 83 (98.8%) leiomyomas showed a mean volume reduction of 52.62% (range: 12.79–96.67%, SD=21.85) and 1 leiomyoma (1.2%) increased in volume.
At 1-year follow-up 5 (6%) leiomyomas were not detectable, 72 (85.7%) showed a further mean volume reduction of 20.5% (range: 2.52–58.72%, SD=11.92) compared to the 3-month follow-up volume, and 7 (8.3%) leiomyomas increased in volume. A statistically significant (p=0.026 at 3 months, p=0.0046 at 1 year) difference in percentage of volume change was observed based on leiomyoma location; submucous leiomyomas showed the largest volume reduction. The initial leiomyoma volume showed a weak negative correlation (Spearman's correlation coefficient = -0.35 at 3 months and -0.36 at 1 year) with the leiomyoma volume change. For the second part, the tube angle prediction resulted in a significant reduction of the radiation dose utilized (p<0.001), fluoroscopy time (p=0.002) and contrast medium volume (p<0.001) for the sample patients when compared with the control patients. The overall radiation dose was reduced from a mean of 11044 μGym2 to a mean of 4172.5 μGym2, fluoroscopy time was reduced from a mean of 15.45 minutes to 8.81 minutes, and contrast medium volume was reduced from a mean of 135 ml to 75 ml. Conclusion: UAE results in significant leiomyoma volume reduction at 3-month and 1-year follow-up. The leiomyoma location plays an important role in volume changes while the initial leiomyoma volume plays a minor role. Pre-procedural prediction of the best tube angle obliquity for visualization of the origin of the uterine artery using 3D-reconstructed CE-MRA results in a significant reduction of the radiation dose, fluoroscopy time and contrast medium volume used during UAE.
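A Spearman rank correlation of the kind reported above (initial volume versus volume change) can be computed with the classic rank-difference formula. The sketch below assumes no tied values and uses hypothetical numbers, not the patient data.

```python
def ranks(values):
    """1-based rank positions; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# hypothetical initial volumes (cm3) and percentage volume changes;
# larger tumours shrinking less would produce a negative rho
rho = spearman([10, 40, 80, 160, 320], [-60, -55, -50, -48, -40])
```

Real data with ties would need average ranks (as in `scipy.stats.spearmanr`); this toy version is only meant to make the reported coefficient concrete.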
The documentation of life on Earth, that is, the inventorization of nature and the naming and classification of organisms found therein, is a major task for biologists today and a fundamental precondition for nature conservation efforts. This study aimed at contributing to the inventory of amphibians and reptiles in selected, previously understudied ecoregions of Bolivia. I strove to document diversity patterns and seek possible ecological and historical reasons for these patterns. Special attention was paid to the Chiquitano Region situated in the eastern lowlands of Bolivia in a climatic transition zone between the humid evergreen Amazon Forests and the deciduous thorn-scrub vegetation of the Gran Chaco. In congruence with its location in the transition zone, the Chiquitano Region displays a mosaic of habitats: The vegetation is dominated by the endemic Chiquitano Dry Forest, which is probably the largest extant patch of Seasonal Dry Tropical Forest, with enclaves of savanna, the western outliers of the Cerrado biome of central Brazil. Taxonomic revisions: The taxonomic data in this study are used as a tool to measure biodiversity, to assess biogeographic relationships, and to evaluate conservation needs. Since all is predicated on the taxonomic decisions made, an adequate taxonomy is essential, and taxonomy can be regarded as the foundation of this study. The methodology encompassed a variety of herpetological field techniques, such as different survey methods, preparation and documentation of voucher specimens, recording of frog calls, and herpetological laboratory techniques, such as morphology, molecular procedures with mtDNA, phylogenetic analyses, and bioacoustic analysis and descriptions of frog calls. A total of 1251 specimens belonging to 200 species were obtained during this study, including 87 amphibian and 123 reptile species. 
This constitutes about 36% of the herpetofauna currently known for Bolivia, about 34% of the amphibians currently known for Bolivia and about 40% of the reptiles, respectively. In the course of this study, a new species of frog was described from the study site Caparu in the eastern lowlands of Bolivia; this species, Hydrolaetare caparu Jansen, Gonzales & G. Köhler 2007, differs from the other two congeners in external morphology (e.g., lateral fringes and relative length of fingers, size of palmar tubercle, webbing of toes, and colouration) and advertisement call. Two new colubrid snake species were also described from the study site San Sebastián. Thus far, both are known only from the Chiquitano Region, Provincia Ñuflo de Chávez. Phalotris sansebastiani Jansen & G. Köhler 2008 differs from all the other species in the genus in having a triangular projection of the red snout colouration reaching onto the parietals. Xenopholis werdingorum Jansen, Gonzales & G. Köhler 2009 can be identified as a member of the genus Xenopholis by its vertebral morphology. It differs from the other two species of Xenopholis in having a unique uniform dorsal colour pattern, and from X. scalaris in having two prefrontals and a narrow septum within the neural spine and perpendicular to its long axis as evident in the x-ray images. A review of a small collection of pitvipers from different lowland localities and from the Inter-Andean dry valleys of the region of Pampagrande revealed one new species of Bothrops and one of Bothrocophias (both to be formally described elsewhere). The two pitviper species differ morphologically and genetically from their congeners. The results of a brief review of a small collection of frogs of the genus Scinax (Anura: Hylidae) from different localities in the lowlands, together with analyses of their bioacoustics, suggest an unknown cryptic diversity in Bolivian species of Scinax cf. fuscomarginatus and allies. 
However, further studies are necessary to clarify the taxonomic status of these populations. In addition, this study provides new data on the morphology (e.g., pholidosis) of snakes, many of them previously known only from a few museum specimens. Keys to the Bolivian lizard species of Cercosaura and the Bolivian snake species of Chironius, Clelia, Liophis, Lystrophis, Phalotris, and Xenodon are presented here for the first time. New information on distribution includes many range extensions of amphibian and reptile species, such as five new country records (one frog species, four snake species) and six new departmental records (two frog species, four snake species). Observations on ecology and natural history: Several observations on ecology and natural history were made during field work. Visual signaling, an aspect of territorial behavior that was already known for several species of the genus Phyllomedusa, could be described for the first time for Phyllomedusa boliviana (Jansen & J. Köhler 2007). Furthermore, during audio surveys of an anuran community at the study site San Sebastián from 2005 to 2007, a decline of certain amphibian populations was observed in the rainy season 2006/2007 (Jansen et al., in press). This is possibly related to an extreme drought in the dry season of 2006, in which 158 consecutive days without rainfall were recorded. In addition, a new method for measuring the intensity of anuran choruses by means of a continuous sound pressure metre was developed (Jansen 2009). The method was suitable to detect calling phenology (during one night), as well as differences in calling activity (between two nights). Biodiversity and biogeographical relationships: Species lists were compiled at the six study sites Pampagrande, Los Volcanes, San Sebastián, Caparú, El Espinal and El Corbalan.
The total amphibian and reptile species numbers observed ranged from 37 to 101 with the highest species numbers in San Sebastián (101) and Caparú (89) and the lowest in Los Volcanes (37) and El Espinal (41). A preliminary species list of the herpetofauna of the Chiquitano Region was presented, including 60 amphibian and 84 reptile species. The majority of the amphibians of the Chiquitano Region are classified predominantly as inhabitants of open formations (41 species, 68.3%). Interestingly, even the majority of species recorded from the Chiquitano Dry Forest (32 species) are usually associated with open formations (22 species, 66.7%), followed by the number of species associated with open and forest formations (8 species, 24.4%). Only two of the observed species (6.0%) are predominant forest dwellers. The amphibian assemblage of the Chiquitano Region is most similar in composition to that of the Cerrado biome: 46 species (76.7%) occur in the Cerrado as well, and three species are regarded as Cerrado endemics (5.0%). The Chiquitano Region shares considerably fewer amphibian species with the other biomes (Amazon: 22 species, 36.7%; Gran Chaco: 13 species, 21.7%; Caatinga: 16 species, 26.7%). The reptile assemblage also has significant affinities to the Cerrado, which can be seen in the high proportion of reptile species distributed in that biome (68 species; 81.0%). Affinities to the other biomes are as follows: Amazon (48 species, 57.1%), Chaco (37 species, 40.1%), and Caatinga (30 species, 35.7%). When arranged in mutually exclusive biome categories, reptiles and amphibians showed similar patterns so that the majority of both amphibians and reptiles of the Chiquitano Region can be regarded as widespread. The high proportion of reptile species probably endemic to this region (5 species, 6.0%) is remarkable (i.e. Tropidurus xanthochilus, Apostolepis phillipsi, Phalotris sansebastiani, Xenopholis werdingorum, and Micrurus diana). 
In an analysis of the biodiversity patterns and biogeographical relationships of the herpetofauna of the study sites, these sites were compared with literature data from 37 localities and included in a presence/absence matrix with a total of 657 amphibian and reptile species in the surrounding South American biomes Amazon, Cerrado and Gran Chaco. The biogeographic relationships between these sites were evaluated using the Coefficient of Biogeographic Resemblance (CBR), cluster analysis, and multidimensional scaling (MDS) of sites. The analyses were first conducted on amphibians and reptiles combined, and then separately for each group: amphibians, reptiles, lizards, and snakes. A “bias-reduced analysis” was developed for a better understanding of the affinities of the amphibians. In this analysis, e.g., the distinct habitat types of the Chiquitano Region, the Chiquitano Dry Forest and the Cerrado were taken into account. Analyses of the biodiversity patterns revealed that the sites in the Amazon comprise the highest species numbers, as expected, followed successively by the sites in the Cerrado biome and sites in-between the two biomes. Within the eastern lowlands of Bolivia, the Chiquitano Region is the richest in species. Comparing it with the other South American sites, the Chiquitano Region has a surprisingly high alpha diversity, especially in amphibians. The microgeographic variation in species composition (beta diversity) in the Chiquitano Region is also remarkably high and obviously related to the mosaic character of the vegetation and habitats. However, the bias-reduced analysis revealed that the amphibian fauna of the open areas and savannas at Hacienda San Sebastián (with 36 species in the Cerrado and pastureland) was one of the most species-rich savanna sites known for amphibians in South America. Considering that the Hacienda San Sebastián site is only ca. 3300 ha (= 1.29 amphibian species per km2), this outcome is particularly surprising.
The results of the analyses of the biogeographical relationships suggest that the herpetofauna of Bolivia’s lowlands, including the Beni, the Pantanal and the Chiquitano Region, is as distinct from the herpetofauna of the Gran Chaco, Amazon, and Cerrado as these biomes are from each other. The Chiquitano herpetofauna in particular represents a unique and well-defined herpetofaunal assemblage when compared to all surrounding localities and biomes. This is supported by high CBR values, findings from the cluster analysis, as well as a clear separation of the Chiquitano sites in the MDS. Biogeographic relations exist to all the surrounding biomes, but are strongest to the Cerrado, followed by the Amazon. This study strongly suggests that the Chiquitano herpetofauna is composite and has multiple affinities. This is congruent with a well-defined Chiquitano flora, avifauna and mammalian fauna, suggesting a similar history. The bias-reduced analysis revealed a more detailed picture of the biogeographic relations of the Chiquitano Region, especially the Chiquitano Dry Forest. I argue here that the Chiquitano Dry Forest herpetofauna is a “young” and “former savanna” herpetofauna. Whereas the Chiquitano Dry Forest is rather poor in amphibian and reptile species, and endemics are lacking from this forest type, the isolated Cerrado enclaves are especially diverse in species and probably contain locally endemic species, such as Phalotris sansebastiani and Xenopholis werdingorum. The colonization of the young Chiquitano Dry Forest may have taken place from savannas by mainly open-area species, and only briefly through the Amazon. The results emphasise the importance of bias reduction in studies of biogeography, e.g., by using group-specific analyses or by taking into account criteria such as area size and heterogeneity of compared sites. The different biogeographic patterns of reptiles and amphibians of the Andean valleys indicate a different history of these two groups.
With regard to reptiles, dispersals into and withdrawals from the valleys during warm-humid and cool-dry periods of the Pleistocene seem likely, supported by the relationship between the valleys and the dry lowlands (e.g., the Chaco). For amphibians, however, it is more plausible that, during these climatic fluctuations, they migrated to adjacent, more humid regions, such as the Yungas. The study verified the known patterns of sister-species pairs in the Inter-Andean Dry Forest and the lowlands. Additionally, pairs of populations with slight differences in morphology were found in the valleys and in the lowlands (Cercosaura parkeri and Xenodon rabdocephalus). Further studies must test the taxonomic status of these populations. The discovery of new species of Bothrops and Bothrocophias from the Andean valleys has several implications, and possible reasons for the high endemism in the dry valleys are discussed. Conservation and outlook: The high local alpha and beta diversity of the Chiquitano herpetofauna shows that this is a region of complex faunal interaction, which reflects the present heterogeneity of the region, but which is possibly also related to a complex geological and environmental history. The Chiquitano Region can be assessed as a region of distinct regional herpetofaunal diversity characterised by small-scale diversity patterns. It therefore merits recognition as a unique ecoregion, and conservation efforts should be increased. Further research is necessary to solve the taxonomic problems addressed in this study. Moreover, future work should be directed towards the development and institution of long-term monitoring programs to evaluate the effects of climate change and changes in land use on biodiversity, especially that of the Chiquitano Region.
Orthopoxviruses are large DNA viruses that replicate within the cytoplasm of infected cells and encode over a hundred different proteins. The orthopoxviral 68k ankyrin-like protein (68k-ank) is highly conserved among orthopoxviruses, and this study aimed at elucidating its function. The 68k-ank protein is composed of four ankyrin repeats (ANK) and an F-box-like domain; both motifs are known protein–protein interaction domains. The F-box is found in cellular F-box proteins (FBP), crucial components of cellular E3 ubiquitin (Ub) ligases. Using yeast two-hybrid screens and subsequent co-immunoprecipitation analyses, it was possible to identify S-phase kinase-associated protein 1a (Skp1a) as a cellular counterpart of 68k-ank, binding via the F-box-like domain. Additionally, Cullin-1 was co-precipitated, suggesting the formation of a viral-cellular SCF E3 Ub ligase complex. Modified Vaccinia virus Ankara (MVA), although attenuated and unable to replicate in most mammalian cell lines due to a block in morphogenesis, nevertheless expresses its complete genetic information, which contributes to its properties as a promising vector vaccine. The conservation of 68k-ank as the only ANK protein encoded by MVA implied a substantial role for this viral factor. Hence, its function in the viral life cycle was assessed by studying a 68k-ank knock-out MVA. A mutant phenotype manifested in non-permissive mammalian cells, characterized by a block following viral early gene expression and by a reduced ability of the virus to shut off host protein synthesis. Studies with an MVA encoding a 68k-ank protein with a truncated F-box-like domain revealed that viral-cellular SCF complex formation and maintenance of viral gene expression are two distinct, unrelated functions fulfilled by 68k-ank. Moreover, K1, a well-described VACV host range factor of the ANK protein family, is able to complement 68k-ank function. This suggests that gene expression of MVA putatively depends on the ankyrin repeats encoded in 68k-ank.
In addition to these important in vitro findings, initial virulence studies with ectromelia virus (ECTV), the mousepox agent, deleted of the 68k-ank ortholog (C11), suggested that this factor contributes to ECTV virulence in vivo.
Clinical application of transcranial Doppler for detection of cerebral emboli during cardiac surgery
(2010)
Objective: Neurologic injury is one of the most damaging complications of cardiac surgery. How to decrease neurologic impairment by improving perioperative monitoring remains a challenge for both cardiac surgeons and anesthetists. For this reason, transcranial Doppler (TCD) has been widely used for cerebral monitoring during cardiac surgery. In this study, two experiments on the clinical application of TCD for the detection of cerebral emboli during cardiac surgery were performed. One was “Solid and gaseous cerebral emboli during valvular surgery are significantly reduced with axillary artery cannulation”; the other was “Do intraoperative cerebral embolic signals differ between valvular surgery (VS) and CABG?”. Methods: In experiment one, 20 valve and combined procedures with aortic cannulation (AoC group) were compared to 18 procedures with axillary cannulation (AxC group) in a prospective non-randomized study. In experiment two, 18 VS patients and 18 CABG patients were retrospectively matched by extracorporeal circulation (ECC) time. Intraoperative monitoring of both middle cerebral arteries was performed with TCD, discriminating between solid and gaseous embolic signals (ES). Results: In experiment one, the AxC group had fewer solid ES than the AoC group (38±22 vs 55±25, P<0.05), but no significant difference was found in gaseous (501±271 vs 538±333, P>0.05) or total (539±279 vs 593±350, P>0.05) ES. The AxC group had fewer solid ES during arterial cannulation (2.1±1.5 vs 6.6±3.6, P<0.05) and during aortic cross-clamp time (4.4±3.1 vs 10.2±5.1, P<0.05) than the AoC group. During ECC, gaseous ES were not significantly different between the groups (398±210 vs 448±291, P>0.05). However, the AxC group showed fewer gaseous ES (85±68 vs 187±148, P<0.05) and fewer gaseous ES per minute (1.8±1.5 vs 4.5±3.2, P<0.05) during weaning off extracorporeal circulation than the AoC group.
No significant difference in gaseous ES (313±163 vs 261±189, P>0.05) or gaseous ES per minute (3.1±2.2 vs 2.8±2.2, P>0.05) was found between the groups from bypass start to aortic declamping. No neurologic complications occurred. In experiment two, no significant difference was found in solid (38±20 vs 40±26, P>0.05) or gaseous (457±263 vs 412±157, P>0.05) ES between the VS and CABG groups during the whole recording time. During ECC, solid ES (20±10 vs 24±19, P>0.05) and gaseous ES (368±230 vs 317±157, P>0.05) were comparable between the groups. Specifically, during weaning off ECC, the VS group had more gaseous ES/min (5.6±3.6 vs 3.1±1.2, P<0.05) than the CABG group. However, this difference in gaseous ES/min was not significant during the period from bypass start to aortic declamping (2.5±1.8 vs 3.0±1.8, P>0.05). Conclusion: Cerebral embolization does occur during cardiac surgery. Through these two experiments, we demonstrated the feasibility and importance of the clinical application of transcranial Doppler for the detection of cerebral emboli during cardiac surgery. Owing to the diversity in the clinical application of TCD, it is currently impossible to compare the numbers of ES between different research centers; more unified standards should be drawn up in order to make wider clinical application possible. To date, no robust evidence shows a correlation between intraoperative ES and postoperative neurological impairment, and research on this question should rely on a comprehensive concept.
The Benchmark Dose (BMD) approach, first suggested in 1984 by K. Crump [CRUMP (1984)], is a widely used instrument in the risk assessment of substances in the environment and in food. In this context, the BMD approach determines a reference point (RfP) on the statistically estimated dose-response curve for which the risk can be determined with adequate certainty and confidence. In the next step of risk characterization, a threshold is calculated based on this RfP and toxicological considerations. The BMD approach is based on fitting a dose-response model to the data; for this fit, a stochastic distribution of the response endpoint is assumed. Ultimately, the BMD reflects the dose for which a pre-specified increase in an adverse health effect (the benchmark response) can be expected. Until now, the BMD approach has been specified only for quantal and continuous endpoints. However, in the risk assessment of carcinogens, so-called time-to-event data are of particular interest, since they contain more information on tumor development than quantal incidence data. The goal of this diploma thesis was to extend the BMD approach to such time-to-event data.
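For the quantal case described above, the chain "fit a dose-response model, then invert it at a pre-specified benchmark response" can be sketched as follows. This is a minimal illustration, not the thesis's method: the one-hit model with background, the doses, and the response counts are all hypothetical choices made here for demonstration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Hypothetical quantal dose-response data (illustrative only):
# dose, number of animals, number of responders
dose = np.array([0.0, 10.0, 50.0, 100.0, 200.0])
n = np.array([50, 50, 50, 50, 50])
k = np.array([2, 5, 12, 24, 40])

def model(d, p0, b):
    """One-hit model with background: P(d) = p0 + (1 - p0) * (1 - exp(-b d))."""
    return p0 + (1.0 - p0) * (1.0 - np.exp(-b * d))

def neg_log_lik(theta):
    """Binomial negative log-likelihood of the quantal data under the model."""
    p0, b = theta
    p = np.clip(model(dose, p0, b), 1e-9, 1 - 1e-9)
    return -binom.logpmf(k, n, p).sum()

fit = minimize(neg_log_lik, x0=[0.05, 0.005], method="Nelder-Mead")
p0_hat, b_hat = fit.x

# Benchmark dose for a benchmark response (BMR) of 10% extra risk:
# (P(BMD) - p0) / (1 - p0) = BMR  =>  BMD = -ln(1 - BMR) / b
BMR = 0.10
bmd = -np.log(1.0 - BMR) / b_hat
print(f"p0 = {p0_hat:.3f}, b = {b_hat:.5f}, BMD10 = {bmd:.1f}")
```

Extending this scheme to time-to-event data replaces the binomial likelihood with a survival-type likelihood over tumor onset times, which is precisely the direction the thesis pursues.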
1. Fab co-complexes of proton-pumping NADH:ubiquinone oxidoreductase (complex I): Fab fragments suitable for co-crystallization with complex I were generated using an immobilized-papain-based protocol. The binding of the antibody fragments to complex I was verified using surface plasmon resonance and size exclusion chromatography. The binding constants of the antibodies and their respective Fab fragments were found to be in the nanomolar range. This work presents the first report of the successful crystallization of complex I (proton-pumping NADH:ubiquinone oxidoreductase) from Yarrowia lipolytica with proteolytic Fab fragments. The quality of the crystals was significantly improved compared to the initial experiments, and the best crystals diffracted X-rays to a resolution of ~7 Å. The activity of complex I remained unaffected by antibody fragment binding. The initial diffraction data suggest that the complex I/Fab co-complex crystals belong to a space group different from the one observed for the native protein. Ongoing experiments are aimed at further enhancing the diffraction quality of the crystals. Provided the space group is indeed different, the complex I/Fab co-complexes may become a very useful approach for structure determination of the enzyme. Moreover, the bound Fab offers an additional possibility to generate phase information. Antibody-mediated crystallization represents a valuable tool for the structural characterization of the NADH:ubiquinone oxidoreductase subcomplexes or even of single subunits. 2. UDP-glucose pyrophosphorylase: UDP-glucose pyrophosphorylase from Yarrowia lipolytica displays affinity towards Ni2+-NTA and was first detected in a contaminated sample of complex I. Following separation from complex I, Ugp1p was purified using anion exchange chromatography. Sequence similarity studies revealed high identity to other known pyrophosphorylases. As indicated by a laser-based mass spectrometry method (LILBID), Ugp1p from Y. lipolytica forms octamers, similar to the enzyme from Saccharomyces cerevisiae. The initial crystals grew as thin needles, preferentially in sitting-drop setups. The size of the crystals was increased by employing a microbatch technique. The improved crystals diffracted X-rays to a resolution of 3.2 Å at the synchrotron beamline. Structural characterization is under way using a molecular replacement approach based on the published structure of the baker’s yeast UGPase.
Paleoecology is the study of organismal interactions with the environment in the geological past. Organisms are influenced in their distribution and abundance by abiotic factors such as temperature and precipitation. A change in these factors, for example through major climatic shifts, would then affect the communities of organisms. Studying this hypothesized causal link between climatic and faunal change is especially interesting for the Plio-Pleistocene of East Africa, because our own ancestors also inhabited these regions. Both the Turkana basin in Kenya and the Lake Albert region in Uganda offer unique opportunities to investigate these paleoecological issues. Their late Miocene through Pleistocene deposits provide a very good record of climatic, vegetation, and faunal change in East Africa (Pickford et al. 1993, Leakey et al. 1995, 1998, McDougall & Feibel 2003, Wynn 2004). This study focuses on the mammal family Bovidae, as they are good indicators of vegetation and environment (e.g. Vrba 1980, 1995, Shipman & Harris 1988, Bobe & Eck 2001, Bobe & Behrensmeyer 2004, Bobe et al. 2007). Bovidae are quite species-rich and inhabit a wide range of habitats from tropical rain forests to deserts, which is reflected in their array of morphological adaptations (ecovariables) to these environments. Diet is the ecovariable most sensitive to climate and thus to habitat change. Therefore, fossil Bovidae are especially suitable for reconstructing past environments. The objective of this thesis is to test the hypothesis that, from the late Miocene through the Holocene, Africa has experienced an overall increase in aridity and concomitant pulses of habitat change. The hypothesis predicts that increasing aridity causes a corresponding growth in the abundance of taxa adapted to open, arid environments. In particular, an increase in bovid grazers should be observed in combination with a decrease in bovid browsers.
To test this hypothesis, I examine the fossil bovid communities from each stratigraphic member of Lake Turkana (Lothagam, Kanapoi, West Turkana and Koobi Fora) and Lake Albert (Nkondo-Kaiso region) and, from both a taxonomic and a functional perspective, reconstruct the paleoenvironments and -climates from approximately 8 to 0.6 Ma. This study is the first to use taxonomic and ecomorphological data together to reconstruct the paleoenvironments of the Turkana basin and the Nkondo-Kaiso region of Lake Albert. In a first analysis, mesowear, as introduced by Fortelius & Solounias (2000), is used to gather information about the diet of the bovids. As a result of my preliminary investigations on upper vs. lower molars of recent species, the sample of fossil bovid specimens from the Turkana basin and Lake Albert was found to be unsuitable for a meaningful diet reconstruction. Therefore, the bovids are assigned to diet categories based on the literature. For each member of the time period from 8.0 to 0.6 Ma, I provide a detailed characterization of the bovid fauna in terms of α- and β-diversity, both at the tribe and diet level, based on presence-absence data and, for the Turkana basin, on abundance data. Statistical comparisons between the fossil bovid communities and those in modern protected areas with known vegetation and climatic conditions yielded modern analogues for each stratigraphic member. From these, I infer paleoclimatic conditions, such as the estimated mean annual temperature, for each member. Based on the abundance of diet categories in the bovid communities, the paleoclimate of the Turkana basin was in general cooler and considerably more humid during the late Miocene to the Pleistocene than today. The mean annual temperature at Lothagam is estimated at 22.2 °C and the annual precipitation at 685 mm for 8.0–6.54 Ma and 4.9–3.4 Ma. The intervening time period is characterized by a slightly lower mean annual temperature and precipitation (20.3 °C, 583 mm).
From 4.17 to 4.07 Ma, Kanapoi experienced a mean annual temperature of 21.3 °C and 592 mm of annual rainfall. In the eastern part of the basin, the climate was warmer and more humid from 3.4 to 1.3 Ma (3.4–2.68 Ma: 26.2 °C, 961 mm; 2.68–1.3 Ma: 27.1 °C, 935 mm) than in the preceding eras. In the western part, the climate became warmer and more humid ~500,000 years later and was more variable than in the eastern basin. From 2.94 to 2.52 Ma, the mean annual temperature was 26.2 °C and the annual precipitation 961 mm. Between 2.34 and 1.6 Ma, the climate again cooled and became as dry as before 2.94 Ma. A second shift to higher temperature and precipitation occurred after 1.6 Ma (27.1 °C, 935 mm) and lasted until 1.34 Ma. The results of the bovid community analyses do not support the hypothesis of increasing aridity in Eastern Africa during the late Miocene to Pleistocene. Instead, the results show that the bovid communities differed considerably over time and on a relatively small spatial scale. Regional paleovegetation and paleoclimate exhibit fluctuations through the studied time period at western Turkana, as well as differences between the western and eastern parts of the Turkana basin. This is indicative of a patchy habitat distribution on both temporal and spatial levels. Increased climate variability predicts an increase in landscape complexity, as proposed by the ‘variability selection hypothesis’ (Potts 1998a, b). Therefore, this thesis research supports the hypothesis of increased landscape complexity on the spatial level. This study has important implications for future research. First, an analysis based on ecovariable characteristics such as diet may be preferred to a taxonomic analysis. Second, abundance data should be used for an ecovariable analysis, because the results then provide more precise information on the paleovegetation and -climate than the mere presence of these adaptations in the faunal community.
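The modern-analogue step described above amounts to finding, for each fossil assemblage, the modern protected area whose diet-category abundance profile is least dissimilar. A minimal sketch using Bray-Curtis dissimilarity, one common choice for such community comparisons (all abundance values and site names below are hypothetical, not the thesis's data):

```python
import numpy as np

# Hypothetical relative abundances of bovid diet categories
# (grazer, mixed feeder, browser) -- illustrative values only.
fossil_member = np.array([0.55, 0.30, 0.15])

modern_analogues = {
    "park_A_grassland": np.array([0.70, 0.20, 0.10]),
    "park_B_woodland": np.array([0.25, 0.35, 0.40]),
    "park_C_mosaic": np.array([0.50, 0.35, 0.15]),
}

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    return np.abs(u - v).sum() / (u + v).sum()

# The modern site with the smallest dissimilarity is the best analogue;
# its known climate (e.g., mean annual temperature) is then transferred
# to the fossil member.
best = min(modern_analogues, key=lambda s: bray_curtis(fossil_member, modern_analogues[s]))
print(best)
```

The climate values of the best-matching park would then serve as the estimate for the fossil member, which is the logic behind the temperature and precipitation figures quoted above.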
Lastly, as this study is based on one mammal family, further studies on other mammal groups should be conducted to broaden the database of resources exploited by the entire faunal community. Most significantly, this study provides a basis for new interpretations of faunal community distributions. It also raises the question of whether small-scale spatial community variability is to be expected at other fossil sites as well. If so, this methodology has important implications for reconstructions of paleovegetation and paleoclimate.
Magnetic characteristics of metal organic low-dimensional quantum spin systems at low temperatures
(2010)
In this work, new classes of low-dimensional metal-organic materials were investigated that make it possible to observe interesting quantum critical phenomena (QCP), such as Bose-Einstein condensation (BEC) of magnetic excitations in coupled spin-dimer systems, the Berezinskii-Kosterlitz-Thouless (BKT) transition, and the divergence of the magnetocaloric effect (MCE) in quantum spin systems upon application of a magnetic field. The low dimensionality of the investigated systems was of great importance both for the theoretical description and for the experimental observation of these phenomena. From a theoretical point of view, the study of these systems opens up the possibility of developing simple models that are exactly solvable, and thus allows a qualitative understanding of the magnetic phenomena. From the experimental side, it is of greatest interest that the interplay of low dimensionality, competing interactions, and strong quantum fluctuations gives rise to exotic and exciting magnetic phenomena (quantum critical phenomena) that can be investigated with various experimental methods. To understand the intrinsic properties of quantum critical phenomena, it is important to study them in simple and well-controllable low-dimensional model systems, such as one- or two-dimensional systems. ...
The TTL is the transition layer between the tropical troposphere and stratosphere, and is the main region where tropospheric air enters the stratosphere. In this thesis, different transport processes are studied using in situ measurements of tracers. Long-lived tracers were measured with the High Altitude Gas Analyzer (HAGAR) on board the M55 Geophysica aircraft. The instrument was developed by the University of Frankfurt and measures the long-lived tracers CO2, N2O, CFC-12, CFC-11, H-1211, SF6, CH4 and H2 with two gas chromatographic channels and a CO2 sensor (LICOR). The measurements are supported by CO and O3 measurements from other instruments. Two campaigns were conducted to obtain measurements in the TTL: SCOUT-O3 (November/December 2005 in Darwin, Australia) and AMMA-SCOUT-O3 (August 2006 in Ouagadougou, Burkina Faso). After a general introduction to the thesis in chapters one and two, the third chapter describes the findings of this last campaign. Five local flights are analyzed to study the different transport processes that occur in the tropical tropopause layer above West Africa: deep convection up to the level of main convective outflow, vertical mixing after overshooting of air in deep convection, horizontal in-mixing from the extratropical lower stratosphere, and horizontal transport across the subtropical barrier. The main findings are that the TTL over West Africa is mostly influenced by remote convection, and that the subtropical barrier is not a strong barrier but rather a region of transition between the extratropical and the tropical stratosphere. Chapter 4 presents the results obtained during the SCOUT-O3 campaign. Of the eight local flights, the last four (051129, 051130a, 051130b, 051205) show enhanced values of ozone, CO and CO2 between 355 and 380 K potential temperature in comparison with the first four flights (051116, 051119, 051123, 051125).
Horizontal in-mixing from the extra-tropical stratosphere and the influence of the local convective system Hector cannot explain the enhanced values of the two flights on 30 November. Therefore, other possible explanations for these enhanced CO, CO2 and ozone levels are proposed. The first explanation is vertical mixing in the vicinity of the jet stream. However, the jet cannot explain the differences between the flights on 30 November and the flights on 29 November and 5 December. Another possible explanation is the influence of polluted boundary layer air masses from the Indonesian region. In particular, air sampled during the flights on 30 November had crossed large parts of northern Indonesia between 8 and 10 days before the measurements. Convective uplift of biomass burning and other pollution plumes can transport CO and ozone precursors into the upper troposphere, where they can significantly enhance ozone production. The last chapter deals with the vertical ascent rate in the TTL and uses measurements from both the SCOUT-O3 and AMMA-SCOUT-O3 campaigns as well as data from previous aircraft campaigns (TROCCINOX and APE-THESEO). Time scales and residence times for mean vertical transport in the background TTL are estimated for different seasons and over different geographic regions using in situ observations of CO2 and long-lived tracers. The vertical transport time scales are constrained using the seasonal variation of CO2 in the tropical troposphere as a “tracer clock” for vertical ascent. Two methods are applied to calculate the residence time in the layer between 360 and 390 K potential temperature. The first method uses the slope of the CO2 index; the second method fits the CO2 index directly to the measurements, assuming a constant ascent rate.
The first method yields residence times for Australia, West Africa, and Brazil of the same order: 35-45 days to 380 K and 50 days to 390 K (no value to 390 K can be derived for Australia, as the slope changes approximately one month before the campaign). For APE-THESEO, the method does not yield reasonable results. The best estimates using the second method show moderate residence times between 360 and 390 K of 60±25 days for SCOUT-O3 (NH autumn) and 43±8 days for AMMA/SCOUT-O3 (NH summer). These results agree well with those obtained using the first method. For APE-THESEO and TROCCINOX, the best fits yield shorter residence times of 23±7 and 40±10 days, respectively, both during winter. These results correspond well to the expectations based on the seasonal variation of the Brewer-Dobson circulation.
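The "tracer clock" idea behind the second method is that air at a given potential temperature carries the surface CO2 signal from some days earlier, so a constant ascent rate can be fitted to the measured vertical CO2 profile. The toy sketch below illustrates only that logic; the sinusoidal surface cycle, the synthetic "measurements", and the brute-force fit are all hypothetical stand-ins for the thesis's actual data and fitting procedure.

```python
import numpy as np

# Hypothetical tropical surface CO2 seasonal cycle (ppm vs day of year),
# used as a "tracer clock": air at potential temperature theta carries
# the surface signal from (day_obs - transit time to theta).
def surface_co2(day):
    return 380.0 + 1.5 * np.sin(2 * np.pi * (day - 100) / 365.25)

day_obs = 330.0                                 # hypothetical measurement date
theta = np.array([360.0, 370.0, 380.0, 390.0])  # potential temperature levels (K)
# Synthetic "measurements": surface signal delayed by 0/15/35/50 days
co2_obs = surface_co2(day_obs - np.array([0.0, 15.0, 35.0, 50.0]))

def predicted_co2(ascent_days_per_10K):
    """CO2 predicted at each theta level for a constant ascent rate."""
    lag = (theta - 360.0) / 10.0 * ascent_days_per_10K
    return surface_co2(day_obs - lag)

# Brute-force least-squares fit of the constant ascent rate
rates = np.linspace(1.0, 40.0, 400)
errs = [np.sum((predicted_co2(r) - co2_obs) ** 2) for r in rates]
best_rate = rates[int(np.argmin(errs))]
print(f"best-fit ascent: {best_rate:.1f} days per 10 K "
      f"-> {3 * best_rate:.0f} days from 360 to 390 K")
```

With real data, the residence times quoted above follow from exactly this kind of fit, with the surface cycle taken from observed tropospheric CO2 rather than an idealized sinusoid.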
This dissertation is devoted to the study of thermodynamics for quantum gauge theories. The poor convergence of quantum field theory at finite temperature has been the main obstacle to practical applications of thermal QCD for decades. In this dissertation, I apply hard-thermal-loop perturbation theory (HTLpt), a gauge-invariant reorganization of the conventional perturbative expansion for quantum gauge theories, to the thermodynamics of QED and Yang-Mills theory to three-loop order. For the Abelian case, I present a calculation of the free energy of a hot gas of electrons and photons by expanding in a power series in mD/T, mf/T and e2, where mD and mf are the photon and electron thermal masses, respectively, and e is the coupling constant. I demonstrate that the hard-thermal-loop reorganization improves the convergence of the successive approximations to the QED free energy at large coupling, e ~ 2. For the non-Abelian case, I present a calculation of the free energy of a hot gas of gluons by expanding in a power series in mD/T and g2, where mD is the gluon thermal mass and g is the coupling constant. I show that at three-loop order hard-thermal-loop perturbation theory is compatible with lattice results for the pressure, energy density, and entropy down to temperatures T ~ 2 - 3 Tc. The results suggest that HTLpt provides a systematic framework that can be used to calculate static and dynamic quantities for temperatures relevant to the LHC.
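Schematically, the reorganization described above expands the free energy in the ratio of thermal masses to temperature rather than strictly in powers of the coupling. The following is only a hedged sketch of that structure; the coefficients a_i are placeholders, not the three-loop results of the dissertation:

```latex
% Schematic structure of the HTLpt-reorganized free energy
% (coefficients a_i suppressed; illustration only).
\begin{align*}
  \mathcal{F}_{\mathrm{HTLpt}} \;\simeq\; \mathcal{F}_{\mathrm{ideal}}
  \left[\, 1 \;+\; a_2 \left(\frac{m_D}{T}\right)^{2}
        \;+\; a_3 \left(\frac{m_D}{T}\right)^{3} \;+\; \cdots \right],
  \qquad m_D \propto g\,T \ \text{at leading order.}
\end{align*}
```

Because m_D is itself of order gT, odd powers of m_D/T encode the characteristic odd-in-g (plasmon) contributions that an expansion strictly in powers of g² around massless quasiparticles misses, which is one way to see why the reorganization improves convergence.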
Background: To evaluate the efficacy of fractionated radiotherapy in adolescent and adult patients with pineal parenchymal tumors (PPT). Methods: Between 1982 and 2003, 14 patients with PPTs were treated with fractionated radiotherapy. Four patients had a pineocytoma (PC), one a PPT with intermediate differentiation (PPTID), and 9 patients a pineoblastoma (PB), 2 of which were recurrences. All patients underwent radiotherapy to the primary tumor site with a median total dose of 54 Gy. In 9 patients with primary PB, treatment included whole-brain irradiation (3 patients) or irradiation of the craniospinal axis (6 patients) with a median total dose of 35 Gy. Results: Median follow-up was 123 months in the PC patients and 109 months in the patients with primary PB. Seven patients were free from relapse at the end of follow-up. One PC patient died from spinal seeding. Among 5 PB patients treated with radiotherapy without chemotherapy, 3 developed local or spinal tumor recurrence. Both patients treated for PB recurrences died. The patient with PPTID was free of disease 7 years after radiotherapy. Conclusion: Local radiotherapy seems to be effective in patients with PC and some PPTIDs. Diagnosis and treatment of patients with more aggressive variants of PPTIDs, as well as treatment of PB, need to be further improved, since local and spinal failure is common even despite craniospinal irradiation (CSI). As PPTs are very rare tumors, treatment within multi-institutional trials remains necessary.
Background: It has been demonstrated that cognitive behavioural therapy (CBT) has a moderate effect on symptom reduction and on the general well-being of patients suffering from psychosis. However, questions regarding the specific efficacy of CBT, treatment safety, cost-effectiveness, and the moderators and mediators of treatment effects remain a major issue. The major objective of this trial is to investigate whether CBT is specifically efficacious in reducing positive symptoms when compared with non-specific supportive therapy (ST), which does not implement CBT techniques but provides comparable therapeutic attention. Methods: The POSITIVE study is a multicenter, prospective, single-blind, parallel-group, randomised clinical trial comparing CBT and ST with respect to their efficacy in reducing positive symptoms in psychotic disorders. Both CBT and ST consist of 20 sessions altogether, with 165 participants receiving CBT and 165 receiving ST. Major methodological aspects of the study are systematic recruitment, explicit inclusion criteria, reliability checks of assessments with control for rater shift, analysis by intention to treat, data management using remote data entry, measures of quality assurance (e.g. on-site monitoring with source data verification, regular query process), advanced statistical analysis, manualized treatment, and checks of adherence and competence of therapists. Research relating the psychotherapy process to outcome, neurobiological research addressing basic questions of delusion formation using fMRI and neuropsychological assessment, and treatment research investigating adaptations of CBT for adolescents are combined in this network. Problems of transfer into routine clinical care will be identified and addressed by a project focusing on cost efficiency.
Discussion: This clinical trial is part of efforts to intensify psychotherapy research in the field of psychosis in Germany, to contribute to the international discussion on psychotherapy in psychotic disorders, and to help implement psychotherapy in routine care. Furthermore, the study will allow drawing conclusions about the mediators of treatment effects of CBT for psychotic disorders. Trial registration: Current Controlled Trials ISRCTN29242879.
On tradition
(1992)
Hepatitis C virus (HCV) naturally infects only humans and chimpanzees. The determinants responsible for this narrow species tropism are not well defined. Virus cell entry involves human scavenger receptor class B type I (SR-BI), CD81, claudin-1 and occludin. Among these, at least CD81 and occludin are utilized in a highly species-specific fashion, thus contributing to the narrow host range of HCV. We adapted HCV to mouse CD81 and identified three envelope glycoprotein mutations which together enhance infection of cells with mouse or other rodent receptors approximately 100-fold. These mutations enhanced interaction with human CD81 and increased exposure of the binding site for CD81 on the surface of virus particles. These changes were accompanied by augmented susceptibility of adapted HCV to neutralization by E2-specific antibodies, indicative of major conformational changes of virus-resident E1/E2 complexes. Neutralization with CD81-, SR-BI- and claudin-1-specific antibodies and knockdown of occludin expression by siRNAs indicate that the adapted virus remains dependent on these host factors but apparently utilizes CD81, SR-BI and occludin with increased efficiency. Importantly, adapted E1/E2 complexes mediate HCV cell entry into mouse cells in the absence of human entry factors. These results further our knowledge of HCV receptor interactions and indicate that three glycoprotein mutations are sufficient to overcome the species-specific restriction of HCV cell entry into mouse cells. Moreover, these findings should contribute to the development of an immunocompetent small animal model fully permissive to HCV.
HDL, through sphingosine-1-phosphate (S1P), exerts direct cardioprotective effects on ischemic myocardium. It remains unclear whether other HDL-associated sphingophospholipids have similar effects. We therefore examined whether HDL-associated sphingosylphosphorylcholine (SPC) reduces infarct size in a mouse model of transient myocardial ischemia/reperfusion. Intravenously administered SPC dose-dependently reduced infarct size after 30 minutes of myocardial ischemia and 24 hours of reperfusion compared to controls. Infarct size was also reduced by postischemic, therapeutic administration of SPC. Immunohistochemistry revealed reduced polymorphonuclear neutrophil recruitment to the infarcted area after SPC treatment, and apoptosis was attenuated as measured by TUNEL. In vitro, SPC inhibited leukocyte adhesion to TNFα-activated endothelial cells and protected rat neonatal cardiomyocytes from apoptosis. S1P3 was identified as the lysophospholipid receptor mediating the cardioprotection by SPC, since its effect was completely absent in S1P3-deficient mice. We conclude that HDL-associated SPC directly protects against myocardial reperfusion injury in vivo via the S1P3 receptor.
Snake bite is one of the most neglected public health issues in poor rural communities living in the tropics. Because of serious misreporting, the true worldwide burden of snake bite is not known. South Asia is the world's most heavily affected region, due to its high population density, widespread agricultural activities, numerous venomous snake species and lack of functional snake bite control programs. Despite increasing knowledge of snake venoms' composition and mode of action, good understanding of clinical features of envenoming and sufficient production of antivenom by Indian manufacturers, snake bite management remains unsatisfactory in this region. Field diagnostic tests for snake species identification do not exist and treatment mainly relies on the administration of antivenoms that do not cover all of the important venomous snakes of the region. Care-givers need better training and supervision, and national guidelines should be fed by evidence-based data generated by well-designed research studies. Poorly informed rural populations often apply inappropriate first-aid measures and vital time is lost before the victim is transported to a treatment centre, where cost of treatment can constitute an additional hurdle. The deficiency of snake bite management in South Asia is multi-causal and requires joint collaborative efforts from researchers, antivenom manufacturers, policy makers, public health authorities and international funders.
Piracetam, the prototype of the so-called nootropic drugs, has been used for many years in different countries to treat cognitive impairment in aging and dementia. Findings that piracetam enhances the fluidity of brain mitochondrial membranes led to the hypothesis that it might improve mitochondrial function, e.g. enhance ATP synthesis. This assumption has recently been supported by a number of observations showing enhanced mitochondrial membrane potential, enhanced ATP production, and reduced sensitivity to apoptosis in a variety of cell and animal models of aging and Alzheimer's disease. As a specific consequence, substantial evidence has emerged for elevated neuronal plasticity as a specific effect of piracetam. Taken together, these new findings can explain many of the therapeutic effects of piracetam on cognition in aging and dementia, as well as in other conditions of brain dysfunction. Keywords: mitochondrial dysfunction, Alzheimer's disease, aging, oxidative stress, piracetam
Leukotrienes constitute a group of bioactive lipids generated by the 5-lipoxygenase (5-LO) pathway. An increasing body of evidence supports an acute role for 5-LO products already during the earliest stages of pancreatic, prostate, and colorectal carcinogenesis. Several pieces of experimental data form the basis for this hypothesis and suggest a correlation between 5-LO expression and tumor cell viability. First, several independent studies documented an overexpression of 5-LO in primary tumor cells as well as in established cancer cell lines. Second, addition of 5-LO products to cultured tumor cells led to increased cell proliferation and activation of anti-apoptotic signaling pathways. Third, 5-LO antisense approaches demonstrated impaired tumor cell growth upon reduction of 5-LO expression. Lastly, pharmacological inhibition of 5-LO potently suppressed tumor cell growth by inducing cell cycle arrest and triggering cell death via the intrinsic apoptotic pathway. However, the documented strong cytotoxic off-target effects of 5-LO inhibitors, combined with the relatively high concentrations of 5-LO products needed to achieve mitogenic effects in cell culture assays, raise concerns about causal attribution and call the relationship between 5-LO products and tumorigenesis into question. Keywords: leukotriene, apoptosis, cell proliferation, mitogenic effects, cytotoxicity
Introduction: The Vbeta12-transgenic mouse was previously generated to investigate the role of antigen-specific T cells in collagen-induced arthritis (CIA), an animal model for rheumatoid arthritis. This mouse expresses a transgenic collagen type II (CII)-specific T-cell receptor (TCR) beta-chain and consequently displays an increased immunity to CII and increased susceptibility to CIA. However, while the transgenic Vbeta12 chain recombines with endogenous alpha-chains, the frequency and distribution of CII-specific T cells in the Vbeta12-transgenic mouse have not been determined. The aim of the present report was to establish a system enabling identification of CII-specific T cells in the Vbeta12-transgenic mouse in order to determine to what extent the transgenic expression of the CII-specific beta-chain would skew the response towards the immunodominant galactosylated T-cell epitope and to use this system to monitor these cells throughout development of CIA. Methods: We have generated and thoroughly characterized a clonotypic antibody, which recognizes a TCR specific for the galactosylated CII(260-270) peptide in the Vbeta12-transgenic mouse. With this antibody, CII-specific T cells could be quantified and followed throughout development of CIA, and their phenotype was determined by combinatorial analysis with the early activation marker CD154 (CD40L) and production of cytokines. Results: The Vbeta12-transgenic mouse expresses several related but distinct T-cell clones specific for the galactosylated CII peptide. The clonotypic antibody could specifically recognize the majority (80%) of these. Clonotypic T cells occurred at low levels in the naïve mouse, but rapidly expanded to around 4% of the CD4+ T cells, whereupon the frequency declined with developing disease. Analysis of the cytokine profile revealed an early Th1-biased response in the draining lymph nodes that would shift to also include Th17 around the onset of arthritis.
However, the data showed that Th1 and Th17 cells constitute a minority of the CII-specific population, indicating that additional subpopulations of antigen-specific T cells regulate the development of CIA. Conclusions: The established system enables the detection and detailed phenotyping of T cells specific for the galactosylated CII peptide and constitutes a powerful tool for analyzing the importance of these cells and their effector functions throughout the different phases of arthritis.
Background: The immune system is a complex adaptive system of cells and molecules that are interwoven in a highly organized communication network. Primary immune deficiencies are disorders in which essential parts of the immune system are absent or do not function according to plan. X-linked agammaglobulinemia is a B-lymphocyte maturation disorder in which the production of immunoglobulin is prohibited by a genetic defect. Patients have to be put on life-long immunoglobulin substitution therapy in order to prevent recurrent and persistent opportunistic infections. Methodology: We formulate an immune response model in terms of stochastic differential equations and perform a systematic analysis of empirical therapy protocols that differ in the treatment frequency. The model accounts for the immunoglobulin reduction by natural degradation and by antigenic consumption, as well as for the periodic immunoglobulin replenishment that gives rise to an inhomogeneous distribution of immunoglobulin specificities in the shape space. Results are obtained from computer simulations and from analytical calculations within the framework of the Fokker-Planck formalism, which enables us to derive closed expressions for undetermined model parameters such as the infection clearance rate. Conclusions: We find that the critical value of the clearance rate, below which a chronic infection develops, is strongly dependent on the strength of fluctuations in the administered immunoglobulin dose per treatment and is an increasing function of the treatment frequency. The comparative analysis of therapy protocols with regard to the treatment frequency yields quantitative predictions of therapeutic relevance, where the choice of the optimal treatment frequency reveals a conflict of competing interests: In order to diminish immunomodulatory effects and to make good economic sense, therapeutic immunoglobulin levels should be kept close to physiological levels, implying high treatment frequencies. 
However, clearing infections without additional medication is more reliably achieved by substitution therapies with low treatment frequencies. Our immune response model predicts that the compromise solution of immunoglobulin substitution therapy has a treatment frequency in the range from one infusion per week to one infusion per two weeks.
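The frequency/dose trade-off described in this abstract can be illustrated with a toy simulation. The sketch below is not the authors' stochastic model: it uses plain Euler stepping with illustrative, non-clinical parameter values (all names and numbers are assumptions) to show serum immunoglobulin under exponential clearance with noisy, periodic replenishment.

```python
import random

def simulate_ig_level(days=140.0, dt=0.01, k=0.05, interval=14.0,
                      dose=4.0, dose_sigma=0.5, g0=8.0, seed=1):
    """Toy model of serum immunoglobulin (arbitrary units) under periodic
    substitution: exponential clearance (degradation + antigenic
    consumption) plus a noisy bolus every `interval` days.
    All parameter values are illustrative, not clinical."""
    rng = random.Random(seed)
    g, t, trace = g0, 0.0, []
    next_infusion = interval
    for _ in range(round(days / dt)):
        g -= k * g * dt                          # clearance
        t += dt
        if t >= next_infusion:                   # periodic replenishment
            g += max(0.0, rng.gauss(dose, dose_sigma))
            next_infusion += interval
        trace.append(g)
    return trace

# same mean input rate, different frequencies: compare the trough level
# over the final dosing cycle of each protocol
trough_weekly   = min(simulate_ig_level(interval=7.0,  dose=2.0)[-700:])
trough_biweekly = min(simulate_ig_level(interval=14.0, dose=4.0)[-1400:])
```

With equal mean input, the higher-frequency protocol keeps the level closer to its mean (smaller peak-to-trough swing), which is one side of the trade-off the abstract formalizes; reliability of infection clearance, per the abstract, pulls toward the other.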
In total, this dissertation comprises three research papers. The objective of all three papers is to detect mistakes that private investors make when investing in mutual funds and to analyze the implications. Moreover, the question is addressed whether financial advisors help private investors avoid these investment mistakes. All three research papers use the same database, provided by a German online brokerage house. The detailed data set makes it possible to contribute to the existing literature on mutual fund investments, smart decision making, household finance and financial advice at an investor- and transaction-specific level. The first paper addresses the question of which decision criteria private investors use when purchasing mutual funds. It can be shown that fund volume is the dominant decision criterion, whereas historical performance is only of minor importance. As performance persistence exists in the underlying data set, it can be concluded that the majority of investors make investment mistakes. The second paper shows that smart investors, i.e. investors who purchase mutual funds by chasing historical performance, are older, wealthier, more experienced and less likely to be overconfident. In addition, it can be verified that the ability to select mutual funds by chasing historical performance has a positive impact on overall investment success. Hence, the quality of mutual fund selection is an ex-ante measure of investment success. Finally, the third paper analyzes the influence of financial advice on the mutual fund decisions of private investors. Evidence is provided that financial advisors do not help their customers purchase mutual funds by chasing historical performance. In fact, advisors recommend high-volume mutual funds from well-known fund families. Apparently, financial advisors are more salesmen than advisors.
These results hold when controlling for potential endogeneity issues.
At present, there is a huge lag between artificial and biological information processing systems in terms of their capability to learn. This lag could certainly be reduced by gaining more insight into the higher functions of the brain like learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired by experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains to a large extent a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing is made up of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons.
The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks, without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces that reside in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli. In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during the previous learning. Remarkably, the recognition performance is tremendously boosted after this off-line memory reprocessing. The performance boost is more pronounced for those face views that deviate more from the original view shown during learning.
This indicates that the off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on the synapse-unspecific, homeostatic activity regulation across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach. It forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner, through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising departure point for further studies aiming for deeper insight into the learning mechanisms employed by the brain and their consequent implementation in artificial adaptive systems for solving complex tasks that have not been tractable so far.
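The competitive, winner-take-all learning step that each module performs can be sketched generically. The code below is a minimal stand-in (not the author's network): one module with normalized weight vectors picks a single winner per decision cycle and moves it toward the input; unit counts, learning rate and data are all hypothetical.

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

class CompetitiveModule:
    """Hypothetical stand-in for one 'module': competitive unsupervised
    learning with a local winner-take-all operation per decision cycle."""
    def __init__(self, n_units, dim, lr=0.1, seed=0):
        rng = random.Random(seed)
        self.w = [normalize([rng.random() for _ in range(dim)])
                  for _ in range(n_units)]
        self.lr = lr

    def cycle(self, x):
        x = normalize(x)
        scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in self.w]
        k = scores.index(max(scores))         # winner-take-all: one active unit
        self.w[k] = normalize([wi + self.lr * (xi - wi)
                               for wi, xi in zip(self.w[k], x)])
        return k                              # sparse code for this cycle

# two input clusters -> the two units specialize, one per cluster
mod = CompetitiveModule(n_units=2, dim=2, seed=3)
a, b = [1.0, 0.05], [0.05, 1.0]
for _ in range(50):
    mod.cycle(a)
    mod.cycle(b)
```

After training, the winners for the two clusters are distinct: the module has formed a representation of its locally accessible input space, the within-cycle code being maximally sparse (a single active unit).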
Direct photon emission from heavy-ion collisions has been calculated and compared to available experimental data. Three different models have been combined to extract direct photons from the different environments in a heavy-ion collision: thermal photons from partonic and hadronic matter have been extracted from relativistic, non-viscous 3+1-dimensional hydrodynamic calculations, and thermal and non-thermal photons from hadronic interactions have been calculated from relativistic transport theory. The impact of different physics assumptions about the thermalized matter has been studied. In pure transport calculations, a viscous hadron gas is present. This is juxtaposed, in the hybrid model calculations with their various Equations of State, with ideal gases of hadrons with vacuum properties, of hadrons which undergo a chiral and deconfinement phase transition, and with a system that has a strong first-order phase transition to a deconfined ideal gas of quarks and gluons. The models used for the determination of photons from both hydrodynamic and transport calculations have been elucidated and their numerical properties tested. The origin of direct photons, itemised by emission stage, emission time, channel and baryon number density, has been investigated for various systems, as have the transverse momentum spectra and elliptic flow patterns of direct photons. The photon emission rates from a thermalized transport box are found to be very similar to the hadronic photon emission rates used in hydrodynamic calculations, as are the spectra from heavy-ion collision calculations with the transport model and with the hybrid model using a hadronic Equation of State. Taking into account the full (vacuum) spectral function of the rho-meson decreases the direct photon emission by approximately 10% at low photon transverse momentum.
The numerical investigations show that the parameter with the largest impact on the direct photon spectra is the time at which the hydrodynamic description is started. Its variation shows deviations of one to two orders of magnitude. In the regime that can be considered physical, however, the variation is less than a factor of 3. Other parameters change the direct photon yield by up to approximately 20%. In all systems that have been considered -- heavy-ion collisions at E_lab = 35 AGeV and 158 AGeV, (s_NN)**1/2 = 62.4 GeV, 130 GeV and 200 GeV -- thermal emission from a system with partonic degrees of freedom is greatly enhanced over that from hadronic systems, while the difference between the direct photon yields from a viscous and a non-viscous hadronic system (transport vs. hydrodynamics) is found to be very small. Predictions for direct photon emission in central U+U-collisions at 35 AGeV have been made. Since non-soft photon sources are very much suppressed at this energy, experimental results should very easily be able to distinguish between a medium that is entirely hadronic and a system that undergoes a phase transition from partonic to hadronic matter. In the case of lead-lead collisions at 158 AGeV, the situation is not so clear. In central collisions, the complete direct photon spectra including prompt photons seem to favour hadronic emission sources, while the partonic calculations only slightly overpredict the data. In peripheral collisions at the same energy, the hadronic contribution is more than one order of magnitude smaller than the prompt photon contribution, which fits the available experimental data. A similar picture presents itself at higher energies. At RHIC energies, however, the difference between transport calculations and hadronic hybrid model calculations is largest. Hybrid model calculations with partonic degrees of freedom can describe the experimental results in gold-gold collisions at 200 GeV. 
The elliptic flow component of direct photon emission is found to be consistently positive at small transverse momenta. This means that the initial photon emission from a non-flowing medium does not completely outshine the emission patterns from the later stages. High-p_t photons predominantly come from the beginning of a heavy-ion collision and therefore do not carry the flow information of the evolving medium.
The role of gamma oscillatory activity in magnetoencephalogram for auditory memory processing
(2010)
Recent studies have suggested an important role of cortical gamma oscillatory activity (30-100 Hz) as a correlate of encoding, maintaining and retrieving auditory, visual or tactile information in and from memory. It was shown that these cortical stimulus representations were modulated by attention processes. Gamma-band activity (GBA) occurred as an induced response peaking at approximately 200-300 ms after stimulus presentation. Induced cortical responses appear as non-phase-locked activity and are assumed to reflect active cortical processing rather than passive perception. Previous work has related induced GBA peaking 200-300 ms after stimulus presentation to differences between experimental conditions containing various stimuli; by contrast, the relationship between specific oscillatory signals and the representation of individual stimuli has remained unclear. The present study aimed at the identification of such stimulus-specific gamma-band components. We used magnetoencephalography (MEG) to assess gamma activity during an auditory spatial delayed matching-to-sample task. 28 healthy adults were assigned to one of two groups, R and L, who were presented with only right- or left-lateralized sounds, respectively. Two sample stimuli S1 with lateralization angles of either 15° or 45° deviation from the midsagittal plane were used in each group. Participants had to memorize the lateralization angle of S1 and compare it to a second lateralized sound S2 presented after an 800-ms delay phase. S2 had either the same lateralization angle as S1 or a different one. After the presentation of S2, subjects had to indicate whether S1 and S2 matched or not. Statistical probability mapping was applied to the signals at sensor level to identify spectral amplitude differences between 15° and 45° stimuli.
We found distinct gamma-band components reflecting each sample stimulus with center frequencies ranging between 59 and 72 Hz in different sensors over parieto-occipital cortex contralateral to the side of stimulation. These oscillations showed maximal spectral amplitudes during the middle 200-300 ms of the delay phase and decreased again towards its end. Additionally, we investigated correlations between the activation strength of the gamma-band components and memory task performance. The magnitude of differentiation between oscillatory components representing 'preferred' and 'nonpreferred' stimuli during the final 100 ms of the delay phase correlated positively with task performance. These findings suggest that the observed gamma-band components reflect the activity of neuronal networks tuned to specific auditory spatial stimulus features. The activation of these networks seems to contribute to the maintenance of task-relevant information in short-term memory.
Type 1 diabetes (T1D) is a chronic T cell-mediated autoimmune disorder that results in the destruction of insulin-producing pancreatic ß cells, leading to life-long dependence on exogenous insulin. Attraction, activation and transmigration of inflammatory cells to the site of ß-cell injury depend on two major molecular interactions. First, interactions between chemokines and their receptors expressed on leukocytes result in the recruitment of circulating inflammatory cells to the site of injury. In this context, it has been demonstrated in various studies that the interaction of the chemokine CXCL10 with its receptor CXCR3 expressed on circulating cells plays a key role in the development of T1D. Second, once arrived at the site of inflammation, adhesion molecules promote the extravasation of arrested cells through the endothelial cell layer to penetrate the site of injury. Here, the junctional adhesion molecule (JAM) JAM-C expressed on endothelial cells is involved in the process of leukocyte diapedesis. It was recently demonstrated that blocking of JAM-C efficiently attenuated cerulein-induced pancreatitis in mice. In my thesis I studied the influence of the CXCL10/CXCR3 interaction on the one hand, and of the adhesion molecule JAM-C on the other, on trafficking and transmigration of antigen-specific, autoaggressive T cells in the RIP-LCMV mouse model. RIP-LCMV mice express the glycoprotein (GP) or the nucleoprotein (NP) of the lymphocytic choriomeningitis virus (LCMV) as a target autoantigen specifically in the ß cells of the islets of Langerhans and turn diabetic after LCMV infection. In my first project I found that pharmacologic blockade of CXCR3 during development of virus-induced T1D results in a significant delay, but not in an abrogation, of overt disease. However, neither the frequency nor the migratory properties of islet-specific T cells were significantly changed during CXCR3 blockade.
In the second project I was able to demonstrate that JAM-C was upregulated around the islets in RIP-LCMV mice after LCMV infection and that its expression correlated with islet infiltration and functional ß-cell impairment. Blockade with a neutralizing anti-JAM-C antibody slightly reduced T1D incidence, whereas overexpression of JAM-C on endothelial cells did not accelerate virus-induced diabetes. In summary, our data suggest that both CXCR3 and JAM-C are involved in trafficking and transmigration of antigen-specific autoaggressive T cells to the islets of Langerhans. However, the detection of only a moderate influence on the onset of clinical disease during CXCR3 or JAM-C blockade reflects the complex pathogenesis of T1D and indicates that several different inflammatory factors need to be neutralized in order to achieve stable and persistent protection from disease.
In this retrospective study, case records of clinical forensic examinations and respective investigation records of the police and the public prosecutor’s (state attorney) office along with the resulting verdicts were examined in terms of type and site of injury found and extent of agreement or discrepancy between the story given by the accused party and the medical conclusions drawn from the injury pattern. Particular attention was focussed on the relevance of the expert opinion for the legal assessment through case-specific analysis of the respective verdicts. A total of 118 cases originating from the scope of the Institute of Forensic Medicine, Goethe-University Frankfurt/Main (2002 – 2005) were examined. These included bodily injury, child abuse, sexual compulsion, self-mutilation and injury patterns of individuals under suspicion of attempted or completed manslaughter/homicide. As compared to former studies, the results of this analysis were additionally correlated with the investigation records of the public prosecutor’s office (state attorney) to elucidate the importance of the forensic findings for police investigation and legal evaluation. The forensic examination involved 19 accused and 99 victims. As for the gender distribution of the victims, 51 females and 48 males were encountered. Slight female preponderance was seen in cases of sexual compulsion. The group of accused individuals consisted of 16 males and 3 females. Injuries due to blunt force impact, in particular hematomas involving skull and trunk, dominated as diagnostic findings in cases of bodily injury, sexual offenses and child abuse. In cases with suspected self-mutilation and in examinations of accused perpetrators of manslaughter/homicide scratches and lacerations prevailed. 
Correlating injury patterns and police inquiries, conclusions drawn from the medical findings and the results of the police investigations were in good agreement in 46 % of the cases, but showed major discrepancies in another 25 %. In the remaining 29 % of the cases, the injury pattern did not allow for a definite expert opinion on the mode of infliction. Nevertheless, a detailed documentation of the medical findings proved to be of substantial value for police investigations. 39 % of the cases resulted in a final verdict, whilst in 59 % of the cases the charge was dismissed. Especially in the latter, the forensic expert opinion was of considerable importance, since the injuries either could not be attributed to a particular perpetrator or contributed to the exoneration of the accused. In 2 % the judicial assessment was not available. In 82 % of the cases of child abuse the proceedings were stopped, e.g. since maltreatment could not be assigned to a particular perpetrator. In these cases it became obvious that forensic examination and assessment alone do not suffice, but have to be embedded in the police investigation to achieve optimal results. Medical conclusions by forensic experts were, almost without exception, considered in the legal assessment and taken into account in a differentiated manner when weighing the sentence, thus reflecting the objectivity and neutrality of the medical assessment. In synopsis, although the evidential value of forensic examination is high, optimal clarification of a case requires its integration into the complete spectrum of investigations performed in a case.
A basic introduction to RFQs has been given in the first part of this thesis. The principle and the main ideas of the RFQ have been described, and a short summary of different resonator concepts has been given. Two different strategies for designing RFQs have been introduced. The analytic description of the electric fields inside the quadrupole channel has been derived, and the limitations of these approaches were shown. The main work of this thesis was the implementation and analysis of a multigrid Poisson solver to describe the potential and electric field of RFQs, which are needed to simulate the particle dynamics accurately. The two main ingredients of a multigrid Poisson solver are the ability of a Gauß-Seidel iteration method to smooth the error of an approximation within a few iteration steps and the coarse-grid principle. The smoothing corresponds to a damping of the high-frequency components of the error. After the smoothing, the error term can be well approximated on a coarser grid, on which the low-frequency components of the fine-grid error are converted to high-frequency errors that can be damped further with the same Gauß-Seidel method. After implementation, the multigrid Poisson solver was analyzed using two different types of test problems: with and without a charge density. After illustrating the results of the multigrid Poisson solver, a comparison to the field of the old multipole expansion method was made. The multipole expansion method is an accurate representation of the field within the minimum aperture, as limited by cylindrical symmetry. Within these limitations the multigrid Poisson solver and the multipole expansion method agree well; beyond them, the two methods give different fields. It was shown that particles leave the region in which the multipole expansion method gives correct fields, and that both the transmission and the single-particle dynamics are affected by this.
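The two ingredients named above can be sketched in the simplest setting, a 1-D Poisson problem with Dirichlet boundaries. This is a generic two-grid illustration, not the thesis code: Gauss-Seidel sweeps damp the high-frequency error, the remaining smooth error is solved for on a grid of twice the spacing (here directly, where a full multigrid solver would recurse), and the correction is interpolated back.

```python
import math

def gauss_seidel(u, f, h, sweeps):
    """Gauss-Seidel relaxation for -u'' = f with zero Dirichlet boundaries;
    a few sweeps rapidly damp the high-frequency error components."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] + (u[i - 1] - 2.0 * u[i] + u[i + 1]) / (h * h)
    return r

def solve_dirichlet(f, h):
    """Direct tridiagonal (Thomas) solve of -u'' = f; stands in for the
    recursive coarse-grid visit of a full multigrid V-cycle."""
    n = len(f) - 1
    b, d = [0.0] * n, [0.0] * n
    b[1], d[1] = 2.0, h * h * f[1]
    for i in range(2, n):
        b[i] = 2.0 - 1.0 / b[i - 1]
        d[i] = h * h * f[i] + d[i - 1] / b[i - 1]
    u = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        u[i] = (d[i] + u[i + 1]) / b[i]
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, restrict the residual to a grid of
    twice the spacing, solve the error equation there, correct, post-smooth."""
    gauss_seidel(u, f, h, 3)                      # pre-smoothing
    r = residual(u, f, h)
    nc = len(u) // 2
    rc = [0.0] * (nc + 1)
    for i in range(1, nc):                        # full-weighting restriction
        rc[i] = 0.25 * (r[2 * i - 1] + 2.0 * r[2 * i] + r[2 * i + 1])
    ec = solve_dirichlet(rc, 2.0 * h)             # coarse-grid error equation
    for i in range(1, len(u) - 1):                # linear interpolation + correction
        u[i] += ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return gauss_seidel(u, f, h, 3)               # post-smoothing

# test problem: -u'' = sin(pi x) on [0, 1], exact solution sin(pi x) / pi^2
n, h = 64, 1.0 / 64
f = [math.sin(math.pi * i * h) for i in range(n + 1)]
u = [0.0] * (n + 1)
for _ in range(5):
    two_grid(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * i * h) / math.pi ** 2)
          for i in range(n + 1))
```

A handful of cycles from a zero initial guess already reaches the discretization-error level, whereas plain Gauss-Seidel on the fine grid stalls on the smooth error components and would need far more sweeps.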
The multigrid Poisson solver also gives a more realistic description of the field at the beginning of the RFQ, because it takes the tank wall into account, and this effect is shown as well. Closing the analysis of the external field, the transmission and the fraction of accelerated particles of the set of 12 RFQs were shown for the two different methods. For RFQs with small apertures and large modulations, the two methods give different values for the transmission due to the limitations of the multipole expansion method. The internal space charge fields without images were analyzed at the level of single-particle dynamics and compared to the well-known SCHEFF routine from LANL, showing major differences for the analyzed particle. For comparing influences on the transmission of the set of 12 RFQs, a third space charge routine (PICNIC) was considered as well. The basic shape of the transmission curve was the same independent of the space charge routine, but the absolute values differ slightly from routine to routine, with SCHEFF about 2% lower than the other routines. The multigrid Poisson solver and PICNIC agree quite well (to less than 1%), but PICNIC has an extremely long running time. The major advantage of the multigrid Poisson solver in calculating space charge effects, compared to the other two routines used here, is that it can take the effect of image charges on the electrodes into account simply by changing the boundaries to have the shape of the vanes, while all other settings remain unchanged. It was demonstrated that the effect of image charges on the vanes on the space charge field is very large in the region close to the electrodes. Particles in that region see a stronger transversely defocusing force than without images. The result is that the transmission decreases by as much as 10%, which is considerably more than determined by other (inexact) routines before.
This is an important result: knowing about the large effect of image charges on the electrodes, it can be taken into account while designing the RFQ to increase the performance of the machine. It is also an important factor in resolving the difference traditionally observed between the transmission of actual RFQs and the transmission predicted by earlier simulations. In the last chapter of this thesis some experimental work on the MAFF (Munich Accelerator for Fission Fragments) IH-RFQ is described. The machine was assembled in Frankfurt and a beam test stand was built. The shunt impedance of the structure was measured using different techniques, the output energy of the structure was measured, and finally its transmission was determined and compared to the beam dynamics simulations of the RFQ. Unfortunately, the transmission measurements were done without exact knowledge of the beam’s emittance, so the comparison to the simulation is somewhat rough; with a reasonable guess of the emittance, however, good agreement between measurement and simulation was obtained.
In order to fully understand the new state of matter formed in heavy-ion collisions, it is vital to isolate the ever-present final-state hadronic contributions within the primary Quark-Gluon Plasma (QGP) experimental signatures. Previously, the hadronic contributions were determined using the properties of the known mesons and baryons. However, according to Hagedorn, hadrons should follow an exponential mass spectrum, which the known hadrons follow only up to masses of M = 2 GeV. Beyond this point the measured mass spectrum is flat, which indicates that there are "missing" hadrons that could contribute significantly to experimental observables. In this thesis I investigate the influence of these "missing" Hagedorn states on various experimental signatures of the QGP. Strangeness enhancement is considered a signal for the QGP because hadronic interactions (even including multi-mesonic reactions) underpredict the hadronic yields (especially for strange particles) at the Relativistic Heavy Ion Collider (RHIC). One can conclude that the time scales needed to produce the required hadronic yields are too long for the hadrons to reach chemical equilibrium within the lifetime of a cooling hadronic fireball. Because gluon fusion can quickly produce strange quarks, it has been suggested that the hadrons are born into chemical equilibrium following the Quantum Chromodynamics (QCD) phase transition. However, we show here that the missing Hagedorn states provide extra degrees of freedom that can lead to fast chemical equilibration times for a hadron gas. We develop a dynamical scheme in which possible Hagedorn states contribute to fast chemical equilibration of X X̄ pairs (where X = p, K, Lambda, or Omega) inside a hadron gas just below the critical temperature. Within this scheme, we use master equations and derive various analytical estimates for the chemical equilibration times.
Applying a Bjorken picture to the expanding fireball, the hadrons can indeed quickly chemically equilibrate, for both an initial overpopulation and an underpopulation of Hagedorn resonances. We compare the thermodynamic properties of our model to recent lattice results and find that for both critical temperatures, Tc = 176 MeV and Tc = 196 MeV, the hadrons can reach chemical equilibrium on very short time scales. Furthermore, the ratios p/pi, K/pi, Lambda/pi, and Omega/pi match experimental values well in our dynamical scenario. The effects of the "missing" Hagedorn states are not limited to the chemical equilibration time. Many believe that the new state of matter formed at RHIC is the closest to a perfect fluid found in nature, which implies that it has a shear viscosity to entropy density ratio close to the bound derived using the uncertainty principle. Our hadron resonance gas model, including the additional Hagedorn states, is used to obtain an upper bound on the shear viscosity to entropy density ratio, eta/s, of hadronic matter near Tc that is close to 1/(4pi). Furthermore, the large trace anomaly and the small speed of sound near Tc computed within this model agree well with recent lattice calculations. We also comment on the behavior of the bulk viscosity to entropy density ratio of hadronic matter close to the phase transition, which differs qualitatively near Tc from that of a hadron gas model with only the known resonances. We show how the measured particle ratios can be used to provide non-trivial information about the Tc of the QCD phase transition. This is obtained by including the effects of highly massive Hagedorn resonances in the statistical models that are generally used to describe hadronic yields. The inclusion of the "missing" Hagedorn states creates a dependence of the thermal fits on the Hagedorn temperature, TH, and leads to a slight overall improvement of the thermal fits.
We find that for Au+Au collisions at RHIC at sqrt(s_NN) = 200 GeV the best fit measure, chi^2, occurs at TH = Tc = 176 MeV and yields a chemical freeze-out temperature of 172.6 MeV and a baryon chemical potential of 39.7 MeV.
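The master-equation approach summarized above can be caricatured by a single rate equation relaxing a particle abundance toward a temperature-dependent equilibrium value while the fireball cools in a Bjorken-like fashion. The functional forms and constants below are purely illustrative stand-ins, not the thesis's actual rates, masses, or branching ratios.

```python
import math

def equilibration(T0=0.176, tau0=1.0, tau_end=10.0, dt=1e-3,
                  gamma0=2.0, n_start=0.5):
    """Schematic chemical equilibration in a Bjorken-like expansion.

    Cooling law:   T(tau) = T0 * (tau0 / tau)**(1/3)
    Rate equation: dN/dtau = Gamma(T) * (N_eq(T) - N)
    with a thermally suppressed rate Gamma ~ gamma0 * exp(-m_eff / T).
    All quantities are toy stand-ins (m_eff is an illustrative
    activation scale, not a fitted Hagedorn parameter).
    """
    m_eff = 0.3  # GeV, illustrative only
    n, tau = n_start, tau0
    while tau < tau_end:
        T = T0 * (tau0 / tau) ** (1.0 / 3.0)
        n_eq = math.exp(-m_eff / T)          # toy equilibrium abundance
        gamma = gamma0 * math.exp(-m_eff / T)  # toy reaction rate
        n += dt * gamma * (n_eq - n)           # explicit Euler step
        tau += dt
    return n, n_eq

n_final, n_eq_final = equilibration()
```

The qualitative point survives the simplification: an initially overpopulated abundance chases the falling equilibrium curve from above, and the gap closes on a time scale set by Gamma, which is what the extra Hagedorn degrees of freedom shorten in the full treatment.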
The recent financial crisis has highlighted the limits of the “originate to distribute” model of banking, but its nexus with the macroeconomy and monetary policy remains unexplored. I build a DSGE model with banks (along the lines of Holmström and Tirole [28] and Parlour and Plantin [39]) and examine its properties with and without active secondary markets for credit risk transfer. The possibility of transferring credit risk reduces the impact of liquidity shocks on bank balance sheets, but it also reduces the banks’ incentive to monitor. As a result, secondary markets allow banks to release capital and exacerbate the effect of productivity and other macroeconomic shocks on output and inflation. By offering a possibility of capital recycling and by reducing bank monitoring, secondary credit markets in general equilibrium allow banks to take on more risk. Keywords: Credit Risk Transfer, Dual Moral Hazard, Monetary Policy, Liquidity, Welfare JEL Classification: E3, E5, G3 First Draft: December 2009, This Draft: September 2010
According to disposition effect theory, people hold losing investments too long. However, many investors eventually sell at a loss, and little is known about which psychological factors contribute to these capitulation decisions. This study integrates prospect theory, utility maximization theory, and theory on reference point adaptation to argue that the combination of a negative expectation about an investment’s future performance and a low level of adaptation to previous losses leads to a greater capitulation probability. The test of this hypothesis in a dynamic experimental setting reveals that a larger total loss and longer time spent in a losing position lead to downward adaptations of the reference point. Negative expectations about future investment performance lead to a greater capitulation probability. Consistent with the theoretical framework, empirical evidence supports the relevance of the interaction between adaptation and expectation as a determinant of capitulation decisions. Keywords: Investments , Adaptation , Reference Point , Capitulation , Selling Decisions , Disposition Effect , Financial Markets JEL Classification: D91, D03, D81
We investigate the incentives for vertical or horizontal integration in the financial security service industry, consisting of trading, clearing and settlement. We thereby focus on firms’ decisions but also look at the implications of these decisions for competition and welfare. Our analysis shows that the incentives for vertical integration crucially depend on industry as well as market characteristics. A more pronounced demand for liquidity clearly favors vertical integration, whereas deeper financial integration increases the incentives to undertake vertical integration only if the associated efficiency gains are sufficiently large. Furthermore, we show that market forces can suffer from a coordination problem that ends in vertically integrated structures that are not in the best interest of the firms. We believe this problem can be addressed by policy measures such as the TARGET2-Securities program. Furthermore, we use our framework to discuss major industry trends and policy initiatives. Keywords: Vertical Integration, Horizontal Integration, Competition, Trading, Settlement JEL Classification: G15, L13, L22
During the last decades, households in the U.S. have experienced that residential house prices move in a persistent manner, i.e., that returns are positively serially correlated. Since an owner-occupied home is usually the largest investment of a household, it is important to understand how households act when they base their consumption and investment decisions on this experience. We show, in a setting with housing market cycles and households who can decide whether to rent or own their home, that serial correlation in house prices generates a new speculative motive for homeownership, besides the consumption and precautionary savings motives. In particular, we show how good and bad housing market cycles affect homeownership rates, leverage, stock investments and consumption, and can explain empirically observed household behavior during housing market boom and bust periods. Keywords: Asset Allocation, Portfolio Choice, Housing Market Cycles, Real Estate JEL Classification: G11, D91
We test whether asymmetric preferences for losses versus gains as in Ang, Chen, and Xing (2006) also affect the pricing of cash flow versus discount rate news as in Campbell and Vuolteenaho (2004). We construct a new four-fold beta decomposition, distinguishing cash flow and discount rate betas in up and down markets. Using CRSP data over 1963–2008, we find that the downside cash flow beta and downside discount rate beta carry the largest premia. We subject our result to an extensive number of robustness checks. Overall, downside cash flow risk is priced most consistently across different samples, periods, and return decomposition methods, and is the only component of beta that has significant out-of-sample predictive ability. The downside cash flow risk premium is mainly attributable to small stocks. The risk premium for large stocks appears much more driven by a compensation for symmetric, cash flow related risk. Finally, we multiply our premia estimates by average betas to compute the contribution of the different risk components to realized average returns. We find that up and down discount rate components dominate the contribution to average returns of downside cash flow risk. Keywords: Asset Pricing, Beta, Downside Risk, Upside Risk, Cash Flow Risk, Discount Rate Risk JEL Classification: G11, G12, G14
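The conditional (up/down) betas underlying such a decomposition can be computed in a few lines. The sketch below shows only the two-fold up/down split on simulated data; the paper's four-fold version additionally splits market news into cash-flow and discount-rate components, which requires a VAR-based return decomposition omitted here. Names and parameter values are illustrative.

```python
import numpy as np

def conditional_betas(r_i, r_m):
    """Up- and downside betas in the spirit of Ang, Chen and Xing (2006):
    beta is estimated separately on observations where the market return
    is below (downside) or above (upside) its sample mean.
    """
    mu = r_m.mean()
    down, up = r_m < mu, r_m >= mu
    beta_down = np.cov(r_i[down], r_m[down])[0, 1] / np.var(r_m[down])
    beta_up = np.cov(r_i[up], r_m[up])[0, 1] / np.var(r_m[up])
    return beta_down, beta_up

rng = np.random.default_rng(0)
r_m = rng.normal(0.0, 0.01, 5000)
# Simulated asset that co-moves more strongly in down markets
# (true betas 1.5 down vs 0.5 up), plus idiosyncratic noise.
r_i = np.where(r_m < 0, 1.5 * r_m, 0.5 * r_m) + rng.normal(0, 0.002, 5000)
bd, bu = conditional_betas(r_i, r_m)
```

Repeating this estimation with cash-flow-news and discount-rate-news proxies in place of `r_m` yields the four conditional betas whose premia the paper estimates.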
Capturing the zero: a new class of zero-augmented distributions and multiplicative error processes
(2010)
We propose a novel approach to modeling serially dependent positive-valued variables that realize a non-trivial proportion of zero outcomes. This is a typical phenomenon in financial time series observed at high frequencies, such as cumulated trading volumes or the time between potentially simultaneously occurring market events. We introduce a flexible point-mass mixture distribution and develop a semiparametric specification test explicitly tailored for such distributions. Moreover, we propose a new type of multiplicative error model (MEM) based on a zero-augmented distribution, which incorporates an autoregressive binary choice component and thus captures the (potentially different) dynamics of both zero occurrences and strictly positive realizations. Applying the proposed model to high-frequency cumulated trading volumes of liquid NYSE stocks, we show that the model captures both the dynamic and distributional properties of the data very well and is able to correctly predict future distributions. Keywords: High-frequency Data, Point-mass Mixture, Multiplicative Error Model, Excess Zeros, Semiparametric Specification Test, Market Microstructure JEL Classification: C22, C25, C14, C16, C51
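A stylized version of such a zero-augmented MEM can be simulated as follows. For brevity the zero probability is held constant rather than driven by the paper's autoregressive binary-choice component, and all parameter values are illustrative, not estimates from NYSE data.

```python
import numpy as np

def simulate_zamem(n=20000, pi0=0.15, omega=0.1, alpha=0.2, beta=0.7, seed=1):
    """Simulate a stylized zero-augmented multiplicative error model.

    With probability pi0 the observation is an exact zero; otherwise
        x_t = mu_t * eps_t,  eps_t ~ Exp(1)  (unit mean),
    with MEM(1,1) conditional-mean dynamics
        mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}.
    A constant pi0 replaces the paper's dynamic binary-choice component.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    mu = omega / (1 - alpha - beta)  # start at the unconditional mean
    x_prev = mu
    for t in range(n):
        mu = omega + alpha * x_prev + beta * mu
        x[t] = 0.0 if rng.random() < pi0 else mu * rng.exponential(1.0)
        x_prev = x[t]
    return x

x = simulate_zamem()
zero_share = (x == 0).mean()
```

Note how zeros feed back into the conditional mean through `x_prev`, so the positive-valued dynamics and the zero outcomes interact, which is the feature the point-mass mixture distribution is designed to capture.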
Despite sensible guidelines for the use of opioid analgesics, respiratory depression remains a significant risk with the possibility of fatal outcomes. Clinicians need to balance analgesia with manageable respiratory effects. The ampakine CX717 (Cortex Pharmaceuticals, Irvine, CA, USA), an allosteric enhancer of glutamate-stimulated AMPA receptor activation, has been shown to counteract opioid-induced respiratory depression in rats while preserving opioid-induced analgesia. Adopting a translational approach, we orally administered 1500 mg of CX717 to 16 male healthy volunteers in a placebo-controlled, double-blind study. Starting 100 min after CX717 or placebo intake, alfentanil was administered by computerized intravenous infusion targeting a plateau of effective alfentanil plasma concentrations of 100 ng/ml. One hour after the start of the opioid infusion, its effects were antagonized by intravenous injection of 1.6 mg of the classical opioid antidote naloxone. Respiration was quantified prior to drug administration (baseline), during alfentanil infusion and after naloxone administration by (i) counting the spontaneous respiratory frequency at rest and (ii) employing a hypercapnic challenge with CO2 rebreathing that assessed the expiratory volume at a carbon dioxide concentration in the breathable air of 55 mmHg (VE55). Pain was quantified at the same time points, immediately after assessment of the respiratory parameters, by (i) measuring the tolerance to electrical stimuli (5 Hz sine wave, increased by 0.2 mA/s from 0 to 20 mA and applied via two gold electrodes placed on the medial and lateral side of the mid-phalanx of the right middle finger) and (ii) measuring the tolerance to heat (increased by 0.3°C/s from 32 to 52.5°C, applied to a 3 x 3 cm2 skin area of the left volar forearm after sensitization with 0.15 g capsaicin cream 0.1%). CX717 was tolerated by all subjects without side effects that would have required medical intervention.
We observed that CX717 was approximately as effective as naloxone in reversing the opioid-induced reduction of the respiratory frequency. Despite the presence of high plasma alfentanil concentrations, the respiratory frequency decreased by only 8.9 ± 22.4% when CX717 was pre-administered, which was comparable to the 7.0 ± 19.3% decrease observed after administration of naloxone. In contrast, after placebo pre-administration the respiratory rate decreased by 30.0 ± 21.3% (p=0.0054 for CX717 versus placebo). In agreement with this, periods of very low respiratory frequency of <= 4 min-1 under alfentanil alone were shortened by ampakine pre-dosing by 52.9% (p=0.0182 for CX717 versus placebo). Furthermore, VE55 was decreased during alfentanil infusion by 55.9 ± 16.7% under placebo pre-administration but by only 46.0 ± 18.1% under CX717 pre-administration (p=0.017 for CX717 versus placebo). Most importantly, in contrast to naloxone, CX717 had no effect on opioid-induced analgesia. Alfentanil increased the pain tolerance to electrical stimuli by 68.7 ± 59.5% with placebo pre-administration. With CX717 pre-administration, the increase of the electrical pain tolerance was similar (54.6 ± 56.7%, p=0.1 for CX717 versus placebo). Similarly, alfentanil increased the heat pain tolerance threshold by 24.6 ± 10.0% with placebo pre-administration. Ampakine co-administration also had no effect on the increase of the heat pain tolerance of the capsaicin-sensitized skin (23.1 ± 8.3%, p=0.46 for CX717 versus placebo). The results of this study allow us to conclude that opioid-induced ventilatory depression can be selectively antagonized in humans by co-administering an ampakine. This is the first successful translation of a selective antagonism of opioid-induced respiratory depression from animal research into application in humans.
Ampakines, namely CX717, are thus the first selective antidote for opioid-induced respiratory depression without loss of analgesia available for use in humans.
Within this thesis, an experimental study of photo double ionization (PDI) and simultaneous ionization-excitation is performed for lithium in different initial states Li (1s²2l) (l = s, p). The excess energy of the linearly polarized VUV light is between 4 and 12 eV above the PDI threshold. Three forefront technologies are combined: a magneto-optical trap (MOT) for lithium, generating an ultra-cold and, by means of optical pumping, state-prepared target; a reaction microscope (ReMi), enabling the momentum-resolved detection of all reaction fragments with high resolution; and the free-electron laser in Hamburg (FLASH), providing an unprecedentedly brilliant photon beam with a favourable time structure to access small cross sections. Close to threshold, the total as well as the differential PDI cross sections are observed to depend critically on the excitation level and the symmetry of the initial state. For the excited state Li (1s²2p) the PDI dynamics depends strongly on the alignment of the 2p orbital with respect to the VUV-light polarization and, thus, on the population of the magnetic substates (mp = 0, ±1). This alignment sensitivity decreases with increasing excess energy and is completely absent for ionization-excitation. Time-dependent close-coupling calculations are able to reproduce the experimental total cross sections with deviations of at most 30%. All the experimental observations can be consistently understood in terms of the long-range electron correlation among the continuum electrons, which gives rise to their preferential back-to-back emission. This alignment effect, observed here for the first time, allows controlling the PDI dynamics through a purely geometrical modification of the target's initial state without changing its internal energy.
Transmissible spongiform encephalopathies (TSEs) are rare but fatal neurodegenerative diseases affecting humans and animals. According to the “protein-only” hypothesis, the causative agent is the prion protein, which misfolds into a rogue amyloid conformer. Despite many years of study, the atomic structural details of the rogue conformers are still not clearly understood. This study focused on developing an in-vitro conversion method that allows us to monitor the transition of the prion protein from the unfolded state to the fibril state. In order to reach a maximally unfolded state, we used 8 M urea as chemical denaturant, pH 2, and the prion fragment 90-230 as the model. It has been demonstrated earlier that acidic pH and mild denaturant induce fibril formation. The mechanism underlying the structural transition from the monomeric state to the polymeric form is largely unknown. We have confirmed by EM and AFM that fibrils are formed under our conditions and that they resemble naturally occurring fibrils in their morphology. Agitation accelerates the rate of fibril formation, which allowed us to perform time-resolved NMR on these preparations. Conformational flexibility is inherent to amyloid fibrils and has been observed in our preparations. We aimed to map the important segment of the prion protein that forms the rigid core in its fibrillar structured form. Our time-resolved NMR studies allowed us to monitor the changes occurring from the unfolded state to the fibrillar state. Analysis of the data identified the segment between residues 145 and 223 as forming the rigid core of these fibrils, corresponding to β-strand 2, helix 2 and the major part of helix 3 of the native monomeric prion structure. Most of the point mutations associated with hereditary prion disease lie within this rigid core, which undergoes refolding on fibril formation. The C-terminal residues 224 to 230 displayed peak shifting and therefore indicate adaptation to a fibril-specific conformation.
The major part of the N-terminal 90-144 segment remains dynamic, consistent with its accessibility to amyloid-specific antibodies. This provides novel structural insight into amyloid formation from the unfolded state of the prion protein fragment 90-230, which represents the proteinase-K-resistant part of naturally occurring prions. Earlier studies, using hydrogen-deuterium exchange mass spectrometry or site-directed spin labeling EPR spectroscopy, had established the core as residues 160-220. Those studies started from either a native-like or a partially unfolded state of the recombinant prion protein, and it is therefore quite striking that fibrils initiated from the unfolded monomeric state share the same “amyloid core”. This structural insight has important implications for understanding the molecular basis of prion propagation.
The first part of the following paper deals with various points of criticism raised against Ordoliberalism. The aim here is not to directly falsify each argument on its own; rather, the author tries to give a precise overview of the spectrum of critique. The second section picks out one argument of the critical review, namely that the ordoliberal concept of the state is somewhat elitist and grounded in intellectual experts. Based on the previous sections, the final part differentiates two kinds of genesis of norms, an evolutionary and an elitist one, both (latently) present within Ordoliberalism. In combination with the two-level differentiation between individual and regulatory ethics, the essay allows for a distinction between individual-ethical norms based on an evolutionary genesis of norms and regulatory-ethical norms based on an elitist understanding of norms. A by-product of the author’s argument is a (further) demarcation within neoliberalism.
Iron uptake is an essential process in all Gram-negative bacteria including cyanobacteria, and therefore different transport systems evolved during evolution. In cyanobacteria, however, the iron demand is higher than in proteobacteria due to the function of iron as a cofactor in, e.g., photosynthesis and nitrogen fixation. Most of the transport systems depend on outer-membrane-localized TonB-dependent transporters (TBDTs), a periplasm-facing TonB protein and a plasma-membrane-localized machinery (ExbBD). So far, iron chelators (siderophores), oligosaccharides and polypeptides have been identified as substrates of TBDTs. In proteobacteria, TonB-dependent outer membrane transporters are a well-explored subject, whereas for cyanobacteria almost nothing is known about possible TonB-dependent uptake systems for iron or other substrates. The heterocyst-forming filamentous cyanobacterium Anabaena sp. PCC 7120 is known to secrete the siderophore schizokinen, but its transport system had remained unidentified. For Anabaena sp. PCC 7120, 22 genes were identified as putative TBDTs, covering almost all known TBDT subclasses; this is a high number of TBDTs compared to other cyanobacteria. The expression of the 22 putative TBDTs individually depends on the presence of iron, copper or nitrogen. This atypical dependence of TBDT gene expression on different nutrients points to a yet unknown regulatory mechanism. In addition, the hypothesized absence of TonB in Anabaena sp. PCC 7120 was refuted by the identification of a corresponding sequence, all5036. Inspection of the genome of Anabaena sp. PCC 7120 shows that only one gene encoding a putative TonB-dependent iron transporter, namely alr0397, is positioned close to genes encoding enzymes involved in the biosynthesis of a hydroxamate siderophore. The expression of alr0397 was elevated under iron-limited conditions. Inactivation of this gene caused a moderate iron-starvation phenotype in the mutant cells.
The characterization of the mutant strain showed that Alr0397 is a TonB-dependent schizokinen transporter (SchT) of the outer membrane and that alr0397 expression and schizokinen production are regulated by the iron homeostasis of the cell. Two additional genes of Anabaena sp. PCC 7120 involved in this process were identified. SchE, encoded by all4025, is a putative cytoplasmic-membrane-localized transporter involved in TolC-dependent siderophore secretion. The mutation of schE resulted in an enhanced sensitivity to high metal concentrations and in a drastic reduction of the secretion of hydroxamate-type siderophores. IacT, encoded by all4026, is a predicted outer-membrane-localized TonB-dependent iron transporter. Inactivation of iacT resulted in reduced sensitivity to elevated iron and copper levels, whereas decoupling its expression from the putative regulation by exchanging the promoter resulted in sensitization to the tested metals. Further analysis showed that the iron and copper effects are synergistic, because a decrease of iron induced a significant decrease of copper levels in the iacT insertion mutant but an increase of those levels in Anabaena sp. PCC 7120 in which expression of all4026 is under the trc promoter. In consequence, the results unravel a link between iron and copper homeostasis.
Acute myeloid/lymphoid leukemia is a fatal hematological malignancy characterized by the accumulation of nonfunctional, immature blasts, which interferes with the production of normal blood cells. Activating mutations of receptor tyrosine kinases are common genetic lesions in leukemia. FLT3-ITD is a frequent activating mutation found in AML patients, leading to uncontrolled proliferation of leukemic blasts. FLT3-ITD directly activates STAT5, inducing the expression of STAT5 target genes such as the PIM kinases and the SOCS genes. STAT5 and the PIM kinases have been shown to play a crucial role in FLT3-ITD-mediated transformation. The role of SOCS proteins in FLT3-ITD-mediated transformation, on the other hand, has not been studied to date. SOCS proteins are part of a negative feedback mechanism that controls Jak kinases downstream of cytokine receptors. One of the SOCS family members, SOCS1, has been reported to suppress the oncogenicity of several activating kinases implicated in hematologic malignancies. In this thesis the role of these SOCS proteins in FLT3-ITD-mediated transformation (in vitro) and leukemogenesis (in vivo) is systematically explored. Expression of FLT3-ITD in cell lines of myeloid (32D) and lymphoid (Ba/F3) origin led to CIS, SOCS1 and SOCS2 expression. FLT3-ITD expression in primary murine bone marrow stem/progenitor cells led to a 59-fold induction of SOCS1 expression. Furthermore, FLT3-ITD-positive AML cell lines (MV4-11, MOLM-13) show kinase-dependent CIS, SOCS1 and SOCS3 expression. Importantly, SOCS1 is highly expressed in AML patients with FLT3-ITD compared to healthy individuals. SOCS1 protein was expressed in FLT3-ITD-transduced murine bone marrow stem cells, and SOCS1 expression was abolished by kinase inhibition in the MOLM-13 cell line. In conclusion, SOCS1 was highly regulated by FLT3-ITD in myeloid and lymphoid cell lines, in bone marrow stem/progenitors and in AML patient samples.
SOCS1 co-expression did not affect FLT3-ITD-mediated signaling and proliferation, but it abolished IL-3-mediated proliferation and protected 32D cells from interferon-α- and interferon-γ-mediated growth inhibition. FLT3-ITD-expressing 32D cells showed diminished STAT1 activation in response to interferons (α and γ). Alone, SOCS1 strongly inhibited cytokine-induced colony formation of bone marrow stem and progenitor cells, but not FLT3-ITD-induced colony formation. Most importantly, in the presence of growth-inhibitory interferon-γ, SOCS1 co-expression with FLT3-ITD led to increased colony formation compared to FLT3-ITD alone. Taken together, FLT3-ITD-induced and exogenously expressed SOCS1 shielded cells from external cytokine signals while not affecting FLT3-ITD-induced proliferation and signaling. In further experiments the in vivo effects of SOCS1 were studied in a bone marrow transplantation model. SOCS1 bone marrow transplants were unable to engraft and proliferate in mice. FLT3-ITD was shown to induce a myeloproliferative disease. Both control (empty vector) and SOCS1-transplanted mice were normal and did not show any disease phenotype. Mice transplanted with FLT3-ITD alone or with SOCS1 co-expressed with FLT3-ITD developed either myeloproliferative disease or acute lymphoblastic leukemia with equal distribution. SOCS1 co-expression with FLT3-ITD led to a decreased latency. Mice transplanted with FLT3-ITD alone or with SOCS1 co-expressed with FLT3-ITD displayed enlarged spleens and livers and hypercellular bone marrow, indicating infiltration of leukemic cells. The mice were also anemic and showed decreased platelet counts. Importantly, SOCS1 co-expression particularly shortened the latency of the myeloproliferative disease but not of acute lymphoblastic leukemia. In summary, in the context of FLT3-ITD, SOCS1 acts as a ‘conditional oncogene’ and cooperates with FLT3-ITD in the development of myeloproliferative disease.
With these data we propose the following model: FLT3-ITD induces SOCS gene expression, which shields cells against proliferation and differentiation signals from cytokines while not affecting FLT3-ITD-mediated proliferative signals. This leaves cells under the dictate of FLT3-ITD, thereby contributing to leukemogenesis. Similar to FLT3-ITD, BCR/ABL (P190) (an oncogenic fusion kinase often found in acute lymphoblastic leukemia) induces SOCS gene expression in K562 cells and in long-term cultured cells from patients with acute lymphoblastic leukemia. SOCS1 co-expression does not affect BCR/ABL-mediated proliferation while abrogating IL-3-mediated proliferation. These findings suggest that SOCS proteins may play a general cooperative role in the context of oncogenes that aberrantly activate STAT3/5 independently of JAK kinases. This study reveals a novel molecular mechanism of FLT3-ITD-mediated leukemogenesis and suggests the SOCS genes as potential therapeutic targets.
By adopting a variety of shapes, proteins can perform a wide range of functions in the cell, from serving as structural elements or enabling communication with the environment to performing the complex enzymatic reactions needed to sustain metabolism. The number of proteins in the cell is limited by the number of genes encoding them. However, several mechanisms exist to increase the overall number of protein functions. One of them is post-translational modification, i.e., the covalent attachment of various molecules onto proteins. Ubiquitin was the first protein found to modify other proteins and, faithful to its evocative name, it is involved in nearly all the activities of a cell. Ubiquitylation of proteins was long believed to be responsible only for proteasomal degradation of the modified proteins. However, with the discovery of various types of ubiquitylation, such as mono-, multiple- or poly-ubiquitylation, new functions of this post-translational modification emerged. Mono-ubiquitylation has been implicated in endocytosis, chromatin remodelling and DNA repair, while poly-ubiquitylation influences the half-life of proteins or modulates signal transduction pathways. DNA damage repair and tolerance are examples of pathways extensively regulated by ubiquitylation. PCNA, a protein involved in nearly all types of DNA transactions, can undergo both mono- and poly-ubiquitylation. These modifications are believed to change the spectrum of proteins that interact with PCNA. Monoubiquitylation of PCNA is induced by the stalling of replication forks when replicative polymerases (pols) encounter an obstacle, such as DNA damage or tight DNA-protein complexes. It is believed that monoubiquitylation of PCNA stimulates the exchange of the replicative pols for one of the polymerases that can synthesize DNA across various lesions, a mechanism of damage tolerance known as translesion synthesis (TLS).
Our work has helped to explain why monoubiquitylation of PCNA favours this polymerase switch. We have identified two novel domains with the ability to bind Ub non-covalently. These domains are present in all members of the Y-family polymerases performing TLS and were named Ub-binding zinc finger (UBZ) (in polη and polκ) and Ub-binding motif (UBM) (in polι and Rev1). We have shown that these domains enable the Y polymerases to preferentially gain access to PCNA upon stalling of replication, when the action of translesion polymerases is required. While the regions of direct interaction between the Y pols and PCNA had been known (the BRCT domain in Rev1 and the PIP box motif (PIP) in the three other members), we propose that the Ub-binding domains (UBDs) in the translesion Y pols enhance the PIP- or BRCT-domain-mediated interaction between these polymerases and PCNA by binding to the Ub moiety attached to PCNA. Following these initial studies, we also discovered that the Y polymerases themselves undergo monoubiquitylation and that their UBDs mediate this modification. This auto-ubiquitylation is believed to lead to an intramolecular interaction between the UBD and the Ub attached in cis onto the UBD-containing protein. We have mapped the monoubiquitylation sites of polη to the C-terminal portion of the protein containing the nuclear localization signal (NLS) and the PIP box. Besides the PIP box, the NLS motif is also involved in the direct interaction of polη with PCNA. Based on these findings, we propose that monoubiquitylation of either the NLS or the PIP box masks them from potential interaction with PCNA. Lastly, using several functional assays, we have demonstrated the importance of all three motifs in the C-terminus of polη (UBZ, NLS and PIP) for efficient TLS. We have also constructed a mimic of monoubiquitylated polη by genetically fusing polη with Ub. Interestingly, this chimera is deficient in TLS as compared to the wild-type protein.
Altogether, these studies demonstrate that the C-terminus of polη constitutes a regulatory module involved in a multiple-site interaction with monoubiquitylated PCNA, and that monoubiquitylation of this region inhibits the interaction between polη and PCNA. Our work has also revealed that the UBDs of Y pols, as well as of other proteins implicated in DNA damage repair and tolerance, such as the Werner helicase-interacting protein 1 (Wrnip1), are required for their proper sub-nuclear localization. All these proteins localize to discrete focal structures inside the nucleus, and mutation of their UBDs results in an inability to accumulate in these foci. Interestingly, by exchanging UBDs between different proteins we have learned that each UBD seems to have a distinct functional role, surprisingly not limited to Ub-binding ability. In fact, swapping the UBZ of Wrnip1 for the UBM of polι abolished the localization of Wrnip1 to foci despite preserving the Ub-binding ability of the chimeric protein. In summary, this work provides an overview of how post-translational modification of proteins by Ub can regulate several DNA transactions. Firstly, key regulators (e.g. PCNA) can be differentially modified by Ub. Secondly, specialized UBDs (e.g. UBM, UBZ) embedded only in a subset of proteins act as modules able to recognize these modifications. Thirdly, by mediating auto-ubiquitylation, UBDs can modulate the behaviour of their host proteins by allowing for either in cis or in trans Ub-UBD interactions.
Relational data exchange deals with translating relational data according to a given specification. This problem is one of the many tasks that arise in data integration, for example, in data restructuring, in ETL (Extract-Transform-Load) processes used for updating data warehouses, or in data exchange between different, possibly independently created, applications. Systems for relational data exchange have existed for several decades now. Motivated by their experiences with one of these systems, Fagin, Kolaitis, Miller, and Popa (2003) studied fundamental and algorithmic issues arising in relational data exchange. One of these issues is how to answer queries that are posed against the target schema (i.e., against the result of the data exchange) so that the answers are consistent with the source data. For monotonic queries, the certain answers semantics proposed by Fagin, Kolaitis, Miller, and Popa (2003) is appropriate. For many non-monotonic queries, however, the certain answers semantics was shown to yield counter-intuitive results. This thesis deals with computing the certain answers for monotonic queries on the one hand, and, on the other hand, with the issues of which semantics are appropriate for answering non-monotonic queries and how hard it is to evaluate non-monotonic queries under these semantics. As shown by Fagin, Kolaitis, Miller, and Popa (2003), computing the certain answers for unions of conjunctive queries - a subclass of the monotonic queries - basically reduces to computing universal solutions, provided the data transformation is specified by a set of tgds (tuple-generating dependencies) and egds (equality-generating dependencies). If M is such a specification and S is a source database, then T is called a solution for S under M if T is a possible result of translating S according to M. Intuitively, universal solutions are the most general solutions. 
Since the above-mentioned work by Fagin, Kolaitis, Miller, and Popa, it had been unknown whether it is decidable if a source database has a universal solution under a given data exchange specification. In this thesis, we show that this problem is undecidable. More precisely, we construct a specification M that consists of tgds only, such that it is undecidable whether a given source database has a universal solution under M. From the proof it also follows that it is undecidable whether the chase procedure - by which universal models can be obtained - terminates on a given source database and the set of tgds in M. These results in particular strengthen results of Deutsch, Nash, and Remmel (2008). Concerning the issue of which semantics are appropriate for answering non-monotonic queries, we study several semantics for answering such queries. All of these semantics are based on the closed world assumption (CWA). First, the CWA-semantics of Libkin (2006) are extended so that they can be applied to specifications consisting of tgds and egds. The key is to extend the concept of CWA-solution, on which the CWA-semantics are based. CWA-solutions are characterized as universal solutions that are derivable from the source database using a suitably controlled version of the chase procedure. In particular, if CWA-solutions exist, then there is a minimal CWA-solution that is unique up to isomorphism: the core of the universal solutions introduced by Fagin, Kolaitis, and Popa (2003). We show that evaluating a query under some of the CWA-semantics reduces to computing the certain answers to the query on the minimal CWA-solution. The CWA-semantics resolve some of the known problems with answering non-monotonic queries. There are, however, two natural properties that the CWA-semantics do not possess. On the one hand, queries may be answered differently with respect to data exchange specifications that are logically equivalent. 
On the other hand, there are queries whose answer under the CWA-semantics intuitively contradicts the information derivable from the source database and the data exchange specification. To find an alternative semantics, we first test several CWA-based semantics from the area of deductive databases for their suitability regarding non-monotonic query answering in relational data exchange. More precisely, we focus on the CWA-semantics by Reiter (1978), the GCWA-semantics (Minker 1982), the EGCWA-semantics (Yahya, Henschen 1985) and the PWS-semantics (Chan 1993). It turns out that these semantics are either too weak or too strong, or do not possess the desired properties. Finally, based on the GCWA-semantics we develop the GCWA*-semantics which intuitively possesses the desired properties. For monotonic queries, some of the CWA-semantics as well as the GCWA*-semantics coincide with the certain answers semantics, that is, results obtained for the certain answers semantics carry over to those semantics. When studying the complexity of evaluating non-monotonic queries under the above-mentioned semantics, we focus on the data complexity, that is, the complexity when the data exchange specification and the query are fixed. We show that in many cases, evaluating non-monotonic queries is hard: co-NP- or NP-complete, or even undecidable. For example, evaluating conjunctive queries with at least one negative literal under simple specifications may be co-NP-hard. Notice, however, that this result only says that there is such a query and such a specification for which the problem is hard, but not that the problem is hard for all such queries and specifications. On the other hand, we identify a broad class of queries - the class of universal queries - which can be evaluated in polynomial time under the GCWA*-semantics, provided the data exchange specification is suitably restricted. 
More precisely, we show that universal queries can be evaluated on the core of the universal solutions, independent of the source database and the specification.
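The chase procedure that underlies these results can be illustrated with a toy implementation. The sketch below is a naive restricted chase for tgds only, run on a hypothetical specification Emp(x) → ∃d WorksIn(x, d) and WorksIn(x, d) → Dept(d); all relation and variable names are invented for illustration, and no claim is made that this mirrors the thesis's constructions (in particular, it ignores egds and, as the undecidability result above shows, termination cannot be guaranteed for arbitrary tgds):

```python
import itertools

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def unify(atom, fact, sub):
    """Try to extend substitution `sub` so that `atom` matches `fact`."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    sub = dict(sub)
    for t, v in zip(atom[1:], fact[1:]):
        if is_var(t):
            if sub.get(t, v) != v:
                return None
            sub[t] = v
        elif t != v:
            return None
    return sub

def matches(facts, atoms, sub=None):
    """All substitutions under which every atom in `atoms` holds in `facts`."""
    sub = {} if sub is None else sub
    if not atoms:
        yield sub
        return
    for fact in facts:
        s = unify(atoms[0], fact, sub)
        if s is not None:
            yield from matches(facts, atoms[1:], s)

def chase(facts, tgds):
    """Naive restricted chase: fire each tgd whose head is not yet satisfied,
    inventing labeled nulls (_N0, _N1, ...) for existential head variables.
    May not terminate for arbitrary sets of tgds."""
    facts = set(facts)
    fresh = itertools.count()
    changed = True
    while changed:
        changed = False
        for body, head in tgds:
            for sub in list(matches(facts, body)):
                if any(True for _ in matches(facts, head, sub)):
                    continue  # head already satisfied: do not fire
                sub = dict(sub)
                for atom in head:  # invent labeled nulls for existential variables
                    for t in atom[1:]:
                        if is_var(t) and t not in sub:
                            sub[t] = f"_N{next(fresh)}"
                for atom in head:
                    facts.add((atom[0],) + tuple(sub.get(t, t) for t in atom[1:]))
                changed = True
    return facts

# Hypothetical specification: every employee works in some (unknown) department,
# and every department someone works in must appear in Dept.
tgds = [
    ([("Emp", "?x")], [("WorksIn", "?x", "?d")]),
    ([("WorksIn", "?x", "?d")], [("Dept", "?d")]),
]
solution = chase({("Emp", "alice")}, tgds)
```

On this weakly acyclic toy example the chase terminates and returns a universal solution containing a labeled null for alice's unknown department, which is exactly the kind of incomplete target instance over which certain answers are computed.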
Enantioselective carbon-carbon bond-forming reactions, particularly those using organocatalysts, represent one of the most important areas of modern synthetic chemistry. New concepts and methods in organocatalysis are emerging continuously, allowing more selective, economically more appealing and environmentally friendlier transformations. Chiral Brønsted-acid catalysts have recently emerged as a new class of organocatalysts for a number of enantioselective carbon-carbon bond-forming reactions. The first part of this thesis focused on the development of new Brønsted acid-catalyzed enantioselective Nazarov cyclizations. The Nazarov reaction belongs to the group of electrocyclic reactions and is one of the most versatile methods for the synthesis of five-membered rings, which are key structural elements of numerous natural products. In general, the Nazarov cyclization can be catalyzed by Brønsted or Lewis acids. However, only a few asymmetric variants have been described, most of which require the use of large amounts of chiral metal complexes. The reactivity of the Nazarov cyclization also depends on the substituents of the divinyl ketone substrate, as described in the first chapter. The substrates for the study of the Brønsted acid-catalyzed enantioselective Nazarov cyclization were prepared following known procedures. The dihydropyran was treated with tBuLi in THF at −78 °C, and the α,β-unsaturated aldehydes 1 were then added to the reaction mixture to afford the corresponding alcohols 2 in moderate to good yields. The alcohols 2 were oxidized to the divinyl ketones 3 in moderate to good yields employing Dess-Martin periodinane/pyridine (DMP/py) in CH2Cl2 at room temperature (Scheme 1). Scheme 1. Preparation of substrates for the study of the Brønsted acid-catalyzed enantioselective Nazarov cyclization and subsequent transformations. 
As a starting point, an evaluation of suitable Brønsted acid catalysts for the enantioselective Nazarov cyclization of divinyl ketone 3a was performed. The initial reactions, conducted with various BINOL-phosphoric acids 4a-4e in toluene at 60 °C, provided a mixture of cis and trans cyclopentenones 5a with enantioselectivities of up to 82% ee (Table 1, entries 1-5). Eventually, improved reactivity could be achieved by using the corresponding N-triflylphosphoramides 4f and 4g, which gave complete conversion after ten minutes even at 0 °C. Additionally, it was shown that the use of these catalysts significantly enhanced both the diastereoselectivity (cis/trans ratio up to 7:1) and the enantioselectivity (up to 96% ee; Table 1, entries 6 and 7). Table 1. Evaluation of Brønsted acids 4a-4g in the enantioselective Nazarov cyclization. The scope of the Brønsted acid-catalyzed enantioselective Nazarov cyclization of various divinyl ketones 3 was explored under optimized reaction conditions (Scheme 2). Treatment of the divinyl ketones 3 in CHCl3 in the presence of 2 mol% of the chiral BINOL-N-triflylphosphoramide 4g at 0 °C for 1-6 h provided the corresponding cyclopentenones 5 in good yields (45-92%) with excellent enantioselectivities (up to 93% ee) (Scheme 2). Furthermore, the isomerization of the cis-cyclopentenone under basic conditions led to the corresponding trans-cyclopentenone without loss of enantiomeric purity. The efficient method introduced here was not only the first example of an organocatalytic electrocyclic reaction but also represented the first enantioselective activation of a carbonyl group catalyzed by a chiral BINOL phosphoric acid. Compared to the metal-catalyzed reaction, special features of this new Brønsted acid-catalyzed electrocyclization are the lower catalyst loadings (2 mol%), higher enantioselectivities, accessibility of all possible stereoisomers, and the mild conditions. ....
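As a side note, selectivity figures of the kind quoted above (% ee, cis/trans ratio) are derived from integrated peak areas of chromatographic traces of the separated stereoisomers. A minimal sketch of the arithmetic, with invented peak-area numbers purely for illustration:

```python
def ee(major, minor):
    """Enantiomeric excess in % from the integrated peak areas of the two enantiomers."""
    return 100.0 * (major - minor) / (major + minor)

def dr(area_a, area_b):
    """Diastereomeric (e.g. cis/trans) ratio, normalized so the minor isomer is 1."""
    return max(area_a, area_b) / min(area_a, area_b)

# hypothetical integrations: a 98:2 enantiomer ratio and an 87.5:12.5 cis/trans ratio
assert ee(98.0, 2.0) == 96.0   # corresponds to 96% ee
assert dr(87.5, 12.5) == 7.0   # corresponds to a 7:1 cis/trans ratio
```

The invented areas were chosen so the results match the headline selectivities of this work (96% ee, 7:1 cis/trans).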
An eclogite barometer has profound importance for the study of upper mantle processes and potential application to diamond prospecting. Studies on the partitioning of Li between clinopyroxene (cpx) and garnet (grt) in natural samples have shown that this particular element is very sensitive to changes in pressure and could be calibrated as the barometer demanded for bimineralic eclogites. Experiments were performed from 4 to 13 GPa and 1100 to 1400 °C in the CMAS (CaO-MgO-Al2O3-SiO2) system with Li added as Li3PO4 to quantify this pressure dependence and cast it into a barometer expressed by the following equation: P = (0.00255*T − ln Kd)/0.2351, where P is in GPa, T is in °C and Kd is defined as the partition coefficient of Li (in ppm) between clinopyroxene and garnet. The experimental pressures are reproduced to ±0.38 GPa (1σ) by this equation. This barometer is strictly applicable only to CMAS. Experiments at 1300 °C and 8-12 GPa showed that Henry's law is fulfilled for Li partitioning between cpx and grt in the concentration range of approximately 0.01-1 wt% Li. Direct application of the equation to experiments in natural systems performed at 1300 °C from 4 GPa to 13 GPa consistently overestimates pressures by approximately 2 GPa. Our previous experiments in the system CaO-MgO-Al2O3-SiO2 + Li3PO4 showed that the partitioning of Li between garnet and clinopyroxene is pressure dependent in eclogitic bulk compositions. This experimentally supports the hypothesis of Seitz et al. (2003), based on the analysis of Li in eclogitic xenoliths and inclusions in diamond, that the partitioning of this particular element between clinopyroxene and garnet is very sensitive to changes in pressure and could be calibrated as a barometer for bimineralic eclogites. 
In order to calibrate this pressure dependence into a barometer, experiments were performed in natural systems using starting materials sourced from a well-preserved eclogitic xenolith from the Roberts Victor kimberlite pipe (South Africa) to extrapolate our findings in CMAS to natural systems. Sixteen multianvil experiments were performed from 4-13 GPa and 1100-1500 °C. Our findings reinforced the general trend observed in the CMAS system: KdLi cpx-grt decreases with increasing P, and at P ≥ 12 GPa garnet is able to incorporate more Li than clinopyroxene. Multiple linear regression was applied to our experimental results to create the barometer: P = (0.000963*T − ln KdLi cpx-grt + 1.581)/0.252, where P is pressure in GPa, T is temperature in °C and KdLi cpx-grt is defined as the partition coefficient of Li obtained by dividing the concentration of Li in cpx by the concentration of Li in garnet. This barometer reproduces the experimental conditions to ±0.2 GPa. It is applicable to eclogitic xenoliths, to garnet pyroxenites and to peridotitic and eclogitic inclusions in diamond. Application of the barometer to diamond-bearing xenoliths yields pressures in the diamond stability field. Clinopyroxene is easily altered in xenoliths and also preferentially takes up Li during short-lived metasomatic processes; care must therefore be taken to analyse primary, unaltered clinopyroxene. Our preliminary application to natural samples shows that the barometer can be applied beyond the experimental range, down to pressures of 3 GPa. Seventeen eclogitic xenoliths were chosen from a sample set of more than 200 for their fresh microscopic and macroscopic appearance and were analyzed for Li content in coexisting garnet (grt) and clinopyroxene (cpx). These samples can be subdivided into two groups on the basis of Mg in cpx (cpfu: cations per formula unit, based on 6 oxygens): Group 1 with Mg > 0.75, and Group 2 with Mg < 0.75. 
Group 1 xenoliths show lower Li contents in both grt and cpx compared to Group 2. The Li barometer calibrated in Hanrahan et al. (2009b)/Chapter 3 was applied to these samples as well as to available literature data to obtain pressures of provenance. Group 2 xenoliths often yield pressures that appear unrealistic for eclogitic xenoliths. In light of the crystal-chemical relations observed in the natural samples, a new fitting procedure was applied to the experimental data presented in Chapter 3. This new fit appears to be more realistic than the previous one, although a strong relationship with Mg# remains, suggesting that Li-barometry is, at present, only applicable to Mg-rich eclogites. Inclusions in diamond, with the exception of eclogitic inclusions of coexisting majorite and cpx, often yield pressures that are inconsistent with the pressures required for diamond formation. An interesting observation from the combined data, however, is that inclusions in diamond have significantly higher average Li concentrations than xenoliths. This suggests that Li is abundant in the fluids from which diamonds form in the mantle, an observation previously made for the deep mantle on the basis of high Li contents in ferropericlase inclusions in diamond (Seitz et al. 2003).
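For convenience, the two calibrations quoted in this abstract can be written as a small helper. This is a direct transcription of the published equations, not an endorsed implementation; the function and variable names are invented:

```python
import math

def pressure_cmas(T, kd):
    """CMAS calibration: P = (0.00255*T - ln Kd) / 0.2351 (P in GPa, T in deg C)."""
    return (0.00255 * T - math.log(kd)) / 0.2351

def pressure_natural(T, kd):
    """Natural-system calibration: P = (0.000963*T - ln Kd + 1.581) / 0.252,
    with Kd = [Li in cpx] / [Li in grt] (both in ppm)."""
    return (0.000963 * T - math.log(kd) + 1.581) / 0.252
```

Consistent with the observed trend that Kd decreases with increasing P, both expressions return higher pressures for smaller Kd; at 1300 °C the natural-system calibration places Kd = 1 near 11 GPa, close to the ~12 GPa crossover above which garnet incorporates more Li than clinopyroxene.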
The physiology of our most complex organ, the brain, is still not comprehensively understood. The brain essentially serves the processing, storage and binding of external and internal information, and thereby generates amazing phenomena like the understanding of oneself as an individual entity. How exactly information is encoded and represented, and how individual neurons or networks of neurons actually interact, is a gigantic puzzle whose pieces have been collected over many decades. The basic spatiotemporal structure of neuronal representations is a subject of ongoing scientific discussion; proposals and observations range from simple rate coding of individual neurons to synchronous activity of larger ensembles. To approach answers to these questions, our working group has used a combination of different recording techniques that allows for the comparison of neuronal interactions on different spatial scales. We focused on prefrontal neuronal interactions during visual short-term memory. For this purpose, two rhesus monkeys had been trained to perform a visual short-term memory task. We recorded their neuronal activity by means of a microelectrode matrix that could be inserted into the cortex via a closable chamber previously implanted above the prefrontal cortex. The acquired signal was separated into two components: a high-frequency component that represents the spiking output activity of the few neurons in the vicinity of each electrode tip (multi-unit activity), and a low-frequency component that results from the dendritic input activity of larger neuronal assemblies (local field potential). From one of the experimental animals we also recorded mass signals of even larger neuronal populations by means of small silver-ball electrodes that had been implanted into the skull above the prefrontal cortex (skull EEG) in the context of a pilot project. 
In the first subproject, we analyzed the selectivity of output signals with respect to the memorized stimulus and task performance. We compared the selectivities of local recording sites (multi-unit activity) with the selectivities of patterns created by the combined activity of all recording sites, thus representing the activity of large and distributed ensembles. Local neuronal activity correlated with the course of the visual short-term memory task, but was not highly discriminative with respect to different visual stimuli. We could show that the population activity was significantly more specific. Concerning task performance, we obtained the same result, albeit less pronounced. Further analyses revealed that the patterns of distributed ensemble activity were only partly based on real-time coordination of neuronal activity and, in addition, did not remain stable across the time course of the short-term memory task. In the second subproject, we focused on the oscillatory behavior of the local field potential. After a time-frequency analysis, we examined different frequency bands with respect to stimulus selectivity and the task performance of the monkey. We found significant modulations of oscillations in the beta and gamma frequency ranges that correlated with different periods of the task. Especially for oscillations in the beta and low-gamma ranges, we observed phase-locking of oscillations between different recording sites, which could play an important role as an internal clock coordinating spatially separate activity. Local high-gamma oscillations themselves seemed to be important for the maintenance of information. These results could be partly confirmed by the EEG mass signals. In sum, our results support the hypothesis that information is represented in the brain by means of the concerted activity of spatially distributed neuronal ensembles. This activity in turn appears to be coordinated by oscillatory activity in the beta and low-gamma frequency ranges. 
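The band-limited analysis described for the second subproject can be sketched in a few lines. The following is a generic periodogram-based band-power estimate, not the group's actual pipeline; the sampling rate, band edges and the synthetic 25 Hz "beta" oscillation are invented for illustration:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean periodogram power of a 1-D signal (sampling rate fs in Hz)
    within the frequency band [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# synthetic "LFP": a 25 Hz beta oscillation buried in white noise
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
lfp = np.sin(2 * np.pi * 25.0 * t) + 0.5 * rng.standard_normal(t.size)

beta = band_power(lfp, fs, 12.0, 30.0)    # beta band
gamma = band_power(lfp, fs, 60.0, 90.0)   # high-gamma band
```

In this synthetic example the beta-band power dominates the gamma-band power, mirroring how task-related modulations of specific frequency bands can be quantified from the local field potential.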
A deeper understanding of central nervous information processing could contribute to better treatment of diseases such as Parkinson's, Alzheimer's and epilepsy, as well as of neuropsychiatric disorders such as schizophrenia.
Fas Ligand (FasL; CD95L; CD178; TNFSF6) is a 40 kDa glycosylated type II transmembrane protein with 279 aa in mice and 281 aa in humans that belongs to the tumor necrosis factor (TNF) family. The extracellular domain (ECD) harbors a TNF homology domain, the receptor binding site, a motif for self-assembly and trimerization, several putative N-glycosylation sites and a metalloprotease cleavage site. The cytoplasmic tail of FasL is the longest of all TNF family ligands and contains several conserved signaling motifs, such as a putative tandem Casein kinase I phosphorylation site, a unique proline-rich domain (PRD) and phosphorylatable tyrosine residues (Y7 in mice; Y7, Y9, Y13 in humans). The FasL/Fas system is renowned for the potent induction of apoptosis in the receptor-bearing cell and is especially important for immune system functions. It is involved in the killing of target cells by natural killer (NK) and cytotoxic T cells, in the (self) elimination of effector cells following the proliferative phase of an immune response (activation-induced cell death; AICD), in the maintenance of immune-privileged sites and in the induction and maintenance of peripheral tolerance. Owing to its potent pro-apoptotic signaling capacity and important functions, FasL expression and activity are tightly regulated at the transcriptional and posttranscriptional levels and restricted to few cell types, such as immune effector cells and cells of immune-privileged sites. In contrast, Fas is expressed in a variety of tissues including lymphoid tissues, liver, heart, kidney, pancreas, brain and ovary. In addition to its pro-apoptotic function, the FasL/Fas system can also elicit non-apoptotic signals in the receptor-expressing cell. Among others, Fas signaling exerts co-stimulatory functions in the immune system, e.g. by promoting the survival, activation and proliferation of T cells. 
Besides the capacity to deliver a signal into receptor-bearing cells (‘forward signal’), FasL can receive and transmit signals into the ligand-expressing cell. This phenomenon has been described for several TNF family ligands and is known as ‘reverse signaling’. The first evidence for the existence of reverse signaling into FasL-bearing cells stems from two studies that demonstrated either co-stimulation of murine CD8+ T cell lines by FasL cross-linking or inhibition of the activation-induced proliferation of murine CD4+ T cells. In both cases, the observed changes in proliferative behaviour critically depended on the presence of a signaling-competent FasL. Almost certainly, the FasL ICD is functionally involved in signal transmission: (i) The ICD is highly conserved across species and harbors several signaling motifs, most notably a unique PRD. (ii) Numerous proteins have been identified which interact with the FasL PRD via their SH3 or WW domains and regulate various aspects of FasL biology, such as FasL sorting, storage, cell surface expression and the linkage of FasL to intracellular signaling pathways. (iii) Post-translational modifications of the ICD have been implicated in the sorting of FasL to vesicles and the FasL-dependent activation of Nuclear factor of activated T cells (NFAT). (iv) Proteolytic processing of FasL liberates the ICD and allows its translocation into the nucleus, where it might influence gene transcription. (v) It could be shown that overexpression of the FasL ICD is sufficient to initiate reverse signaling upon concomitant T cell receptor (TCR) stimulation and ICD cross-linking. Conflicting data on the consequences of FasL reverse signaling exist, and co-stimulatory as well as inhibitory functions have been reported. These discrepancies probably reflect the use of artificial experimental systems. 
Neither the precise molecular mechanism underlying FasL reverse signaling nor its physiological relevance has been addressed at the endogenous protein level in vivo. Therefore, a ‘knockout/knockin’ mouse model in which wildtype FasL was replaced with a deletion mutant lacking the intracellular portion (FasL Delta Intra) was established in the group of PD Dr. Martin Zörnig. In the present study, FasL Delta Intra mice were phenotypically characterized and employed to investigate the physiological consequences of FasL reverse signaling at the molecular and cellular level. To ensure that FasL Delta Intra mice represent a suitable model to study the consequences of FasL reverse signaling, we demonstrated that activated lymphocytes from homozygous FasL Delta Intra or wildtype mice express comparable amounts of (truncated) FasL at the cell surface. The truncated protein retains the capacity to induce apoptosis in Fas receptor-positive target cells, as shown by co-culture assays with FasL-expressing activated lymphocytes and Fas-sensitive target cells. Additionally, systematic screening of unchallenged mice did not reveal any phenotypic abnormalities. Notably, signs of the lymphoproliferative autoimmune disease associated with FasL deficiency could not be detected. As several reports have implicated FasL reverse signaling in the regulation of T cell expansion and activation, the proliferation of lymphocytes isolated from FasL Delta Intra and wildtype mice in response to antigen receptor stimulation was investigated. Using CFSE dilution assays, it could be demonstrated that the proliferative response of CD4+ T cells, CD8+ T cells and B cells was enhanced in the absence of the FasL ICD. Interestingly, this effect was most pronounced in B cells and could only be detected in CD4+ T cells after depletion of CD4+CD25+ regulatory T cells. To our knowledge, this is the first time that FasL reverse signaling has been demonstrated in B cells. 
In a series of experiments, the activation of several pathways known to play important roles in the signal transmission initiated upon antigen receptor triggering was assessed. As a molecular correlate for the observed enhancement of activation-induced proliferation, Extracellular signal-regulated kinase (ERK1/2) phosphorylation was significantly increased in FasL Delta Intra mice following antigen receptor cross-linking. Surprisingly, B cell stimulation led to a comparable extent of activating phosphorylations on S38 in c-Raf and S218/S222 in MEK1/2 in cells isolated from wildtype and FasL Delta Intra mice, indicating that the Mitogen-activated protein kinases (MAPKs) upstream of ERK1/2 (Raf-1 and MEK1/2) apparently do not contribute to the differential regulation of ERK1/2. Experiments in which activation-induced Akt phosphorylation (S473) was quantified also did not suggest a participation of Phosphoinositide 3-kinase (PI3K)/Akt signals in this process. Instead, further characterization of the upstream pathway revealed an involvement of Phospholipase C gamma (PLC gamma) and Protein kinase C (PKC) signals in the FasL-dependent regulation of ERK1/2. Previous studies in our group revealed a Notch-like processing of FasL, resulting in the transcriptional regulation of a reporter gene. Furthermore, an interaction of the FasL ICD with the transcription factor Lymphoid enhancer-binding factor-1 (Lef-1) that affected Lef-1-dependent reporter gene transcription could be demonstrated. Therefore, a molecular analysis of activated lymphocytes was performed to identify target genes of FasL reverse signaling. The differential expression of promising candidates was verified by quantitative real-time PCR (qRT-PCR), which showed that the transcription of genes associated with lymphocyte proliferation and activation was increased in FasL Delta Intra mice compared to wildtype mice. 
Interestingly, an extensive regulation of Lef-1-dependent Wnt/beta-Catenin signaling-related genes was found. Lef-1 mRNA (RT-PCR) and protein (intracellular FACS staining) could be detected in mature B cells, suggesting the possibility of a FasL ICD-mediated inhibition of Lef-1-dependent gene expression in these cells, initiated by Notch-like processing of FasL. To investigate the consequences of FasL reverse signaling in vivo, a potential participation of the FasL ICD in the regulation of immune responses upon various challenges was analyzed. In experiments investigating thymocyte proliferation or the expansion of antigen-specific T cells following a challenge with the superantigen Staphylococcal enterotoxin B (SEB), with Lymphocytic choriomeningitis virus (LCMV) or with Listeria monocytogenes, comparable results were obtained with wildtype and FasL Delta Intra mice. Likewise, the recruitment of neutrophils in a thioglycollate-induced model of peritonitis was not affected by deletion of the FasL ICD. These findings might reflect regulatory mechanisms operating in vivo, such as the control exerted by regulatory T cells. Along these lines, proliferative differences in CD4+ T cells could only be detected ex vivo after depletion of CD4+CD25+ regulatory T cells. Furthermore, several in vitro studies indicate that retrograde FasL signals can be observed under conditions of suboptimal lymphocyte stimulation, but not when the TCR is optimally stimulated. Therefore, the potent initiation of antigen receptor signaling by stimuli like SEB or LCMV might have masked inhibitory FasL reverse signaling in these experiments. In agreement with the observed hyperactivation of lymphocytes in the absence of the ICD ex vivo, the increase in germinal center (GC) B cells following immunization with the hapten 4-hydroxy-3-nitrophenylacetyl (NP) and the number of antibody-secreting plasma cells (PCs) were significantly higher in FasL Delta Intra mice. 
The larger quantity of PCs correlated with increased titers of NP-binding, i.e. antigen-specific, IgM and IgG1 antibodies in the serum of FasL Delta Intra mice after immunization. These data suggest that FasL reverse signaling exerts immunomodulatory functions. Supporting this notion, a model of Ovalbumin-induced allergic airway inflammation revealed an involvement of retrograde FasL signals in the recruitment of immune effector cells into the lung and in the activation of T cells following exposure of mice to Ovalbumin. Together, our ex vivo and in vivo findings based on endogenous FasL protein levels demonstrate that FasL ICD-mediated reverse signaling is a negative modulator of certain immune responses. It is tempting to speculate that FasL reverse signaling might be a fine-tuning mechanism to prevent autoimmune diseases, a theory which will be tested in adequate mouse models in the future.
Lattice Yang-Mills theories at finite temperature can be mapped onto effective 3d spin systems, thus facilitating their numerical investigation. Using strong-coupling expansions we derive effective actions for Polyakov loops in the SU(2) and SU(3) cases and investigate the effect of higher order corrections. Once a formulation is obtained which allows for Monte Carlo analysis, the nature of the phase transition in both classes of models is investigated numerically, and the results are then used to predict – with an accuracy within a few percent – the deconfinement point in the original 4d Yang-Mills pure gauge theories, for a series of values of Nt at once.
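To give a flavour of such an effective 3d spin system, here is a toy Metropolis simulation of a leading-order SU(2)-type effective action, S = −λ Σ_<xy> L_x L_y, for the traced Polyakov loop L ∈ [−2, 2] weighted by the reduced SU(2) Haar measure. The lattice size, coupling values and sweep counts are illustrative only and do not reproduce the actions, corrections or couplings derived in the paper:

```python
import math
import random

def mean_polyakov_loop(lam, n=6, sweeps=300, seed=1):
    """Metropolis simulation of the toy effective action S = -lam * sum_<xy> L_x L_y
    on an n^3 lattice, with site weight sqrt(1 - (L/2)^2) (reduced SU(2) Haar
    measure) for the traced Polyakov loop L in [-2, 2]. Returns |<L>|."""
    rng = random.Random(seed)
    L = [[[0.0] * n for _ in range(n)] for _ in range(n)]

    def neighbour_sum(x, y, z):
        return sum(L[(x + dx) % n][(y + dy) % n][(z + dz) % n]
                   for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                      (0, -1, 0), (0, 0, 1), (0, 0, -1)))

    def haar(l):  # reduced Haar measure for SU(2)
        return math.sqrt(max(0.0, 1.0 - (l / 2.0) ** 2))

    for _ in range(sweeps):
        for x in range(n):
            for y in range(n):
                for z in range(n):
                    old, new = L[x][y][z], rng.uniform(-2.0, 2.0)
                    dS = -lam * (new - old) * neighbour_sum(x, y, z)
                    accept = (haar(new) / max(haar(old), 1e-12)) * math.exp(-dS)
                    if rng.random() < accept:
                        L[x][y][z] = new

    total = sum(L[x][y][z] for x in range(n) for y in range(n) for z in range(n))
    return abs(total) / n ** 3

disordered = mean_polyakov_loop(lam=0.01)  # confined-like phase: |<L>| near 0
ordered = mean_polyakov_loop(lam=0.5)      # deconfined-like phase: |<L>| large
```

Scanning λ locates the symmetry-breaking (deconfinement) transition of such an effective model; in the paper this step is carried out by Monte Carlo analysis of the derived actions and the critical couplings are then mapped back to the deconfinement point of the 4d theories.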
Relying on the existing estimates for the production cross sections of mini black holes in models with large extra dimensions, we review strategies for identifying those objects at collider experiments. We further consider a possible stable final state of such black holes and discuss their characteristic signatures. Keywords: Black holes
We discuss the present collective flow signals for the phase transition to the quark-gluon plasma (QGP) and the collective flow as a barometer for the equation of state (EoS). We emphasize the importance of the flow excitation function from 1 to 50A GeV: here the hydrodynamic model has predicted the collapse of the v1 flow at ~10A GeV and of the v2 flow at ~40A GeV. In the latter case, this has recently been observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as potential evidence for a first-order phase transition at high baryon density ρB.
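For reference, the directed (v1) and elliptic (v2) flow discussed here are Fourier coefficients of the azimuthal particle distribution, v_n = ⟨cos n(φ − Ψ_RP)⟩. A minimal sketch of this standard definition (the toy event and the reaction-plane angle below are invented for illustration):

```python
import math

def v_n(phis, n, psi_rp=0.0):
    """Flow coefficient v_n = <cos n(phi - Psi_RP)>, averaged over the
    azimuthal angles `phis` of the particles (reaction-plane method)."""
    return sum(math.cos(n * (phi - psi_rp)) for phi in phis) / len(phis)

# toy event: perfectly in-plane, back-to-back emission
phis = [0.0, math.pi]
```

For this extreme toy event v2 = 1 while v1 = 0; the predicted collapse of v1 and v2 discussed above corresponds to these averages approaching zero at the respective beam energies.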
We study various fluctuation and correlation signals of the deconfined state using a dynamical recombination approach (quark Molecular Dynamics, qMD). We analyse charge ratio fluctuations, charge transfer fluctuations and baryon-strangeness correlations as a function of the center-of-mass energy with a set of central Pb+Pb/Au+Au events from AGS energies (Elab = 4A GeV) up to the highest available RHIC energy (√sNN = 200 GeV), and as a function of time with a set of central Au+Au qMD events at √sNN = 200 GeV with and without applying our hadronization procedure. For all studied quantities, the results start from values compatible with a weakly coupled QGP in the early stage and end with values compatible with the hadronic result in the final state. We show that the loss of the signal occurs at the same time as hadronization and trace it back to the dynamical recombination process implemented in our model.
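The baryon-strangeness correlation analysed here is commonly defined as C_BS = −3 (⟨BS⟩ − ⟨B⟩⟨S⟩) / (⟨S²⟩ − ⟨S⟩²), with B and S the event-wise net baryon number and net strangeness. A generic event-wise sketch of this estimator follows; the toy events are invented, and this is not the qMD analysis code:

```python
def c_bs(events):
    """C_BS = -3 * cov(B, S) / var(S), where each event is a list of
    (baryon_number, strangeness) pairs and B, S are event-wise net sums."""
    n = len(events)
    B = [sum(b for b, s in ev) for ev in events]
    S = [sum(s for b, s in ev) for ev in events]
    mB, mS = sum(B) / n, sum(S) / n
    cov = sum(b * s for b, s in zip(B, S)) / n - mB * mS
    var = sum(s * s for s in S) / n - mS * mS
    return -3.0 * cov / var

# toy limiting cases: uncorrelated s quarks (QGP-like) vs. kaons (hadronic-like)
qgp_like = [[(1/3, -1)] * k for k in (0, 1, 2, 3)]   # B = -S/3 event by event
kaon_gas = [[(0, -1)], [(0, 1)], [(0, 1), (0, -1)]]  # B = 0 in every event
```

In these toy limits c_bs(qgp_like) evaluates to 1, the ideal-QGP value, while c_bs(kaon_gas) gives 0; the time evolution described above is the drift between such limits across hadronization.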
To investigate the formation and propagation of relativistic shock waves in viscous gluon matter we solve the relativistic Riemann problem using a microscopic parton cascade. We demonstrate the transition from ideal to viscous shock waves by varying the shear viscosity to entropy density ratio η/s. Furthermore, we compare our results with those obtained by solving the relativistic causal dissipative fluid equations of Israel and Stewart (IS), in order to show the validity of IS hydrodynamics. Employing the parton cascade we also investigate the formation of Mach shocks induced by a high-energy gluon traversing viscous gluon matter. For η/s = 0.08 a Mach cone structure is observed, whereas the signal smears out for η/s ≥ 0.32.