Top-down and bottom-up approaches are the general methods used to analyse proteomic samples today; the bottom-up approach has been dominant in the last decade. Establishing a bottom-up method involves not only the choice of adequate instruments and the optimisation of the experimental parameters, but also choosing the right experimental conditions and sample preparation steps. LC-ESI MS/MS has been widely used in this field due to its advanced automation. The primary objective of the present study was to establish a sensitive high-throughput nLC-MALDI MS/MS method for the identification and characterisation of proteins in biological samples. The method establishment included optimisation and validation of parameters such as the capillaries in the HPLC systems, gradient slopes, column temperature, spotting frequencies and the MS and MS/MS acquisition methods. The optimisation was performed using two HPLC systems (Agilent 1100 series and Proxeon Easy nLC system), three spotters and the 4800 MALDI-TOF/TOF analyzer. Furthermore, sample preparation protocols were modified to fit the established nLC-MALDI-TOF/TOF platform. The potential of this method was demonstrated by the successful analysis of complex protein samples isolated from lipid particles, pre-adipocyte/adipocyte tissues, membrane proteins and proteins pulled down in protein-protein interaction studies. Despite the small amount of protein in lipid particles or oil bodies, and the challenges encountered in studying such proteins, 41 proteins (6 novel + 14 mammal-specific + 21 visceral-specific) were added to the known secretome of human subcutaneous (pre)adipocytes, and 6 novel proteins were localised in the yeast lipid particles. Protein-protein interaction studies present another area of application. Here the analytical challenges are mostly the loss of binding partners during sample clean-up and the need to differentiate specific interactors from non-specific background.
Novel interaction partners for the AF4•MLL and AF4 protein complexes were identified. Furthermore, a novel sample preparation protocol for the analysis of membrane proteins, based on the less specific protease elastase, was established. Compared to trypsin, a higher sequence coverage and a higher coverage of the transmembrane domains were achieved. The use of this enzyme in proteomics has been limited because of its non-specific cleavage. However, from the results obtained in these studies, elastase was found to cleave preferentially at the C-terminal side of the amino acids A, V, L, I, S and T. The advantage of the established protocol over conventional protocols is that the same enzyme can be used both for shaving the soluble domains of intact proteins in membranes and for digesting the hydrophobic domains after solubilisation. Furthermore, the solvents used are compatible with the nLC-MALDI method setup. In addition, it was shown that for less specific enzymes a higher mass accuracy is required to reduce the rate of false-positive identifications, since current search engines are not perfectly adapted to these types of enzymes. A brief statistical analysis of the MS/MS data obtained from the LC-MALDI TOF/TOF system showed that for less specific enzymes, under high-energy collision conditions, approximately 43% of the fragment ions could not be matched to the known y- and b-type ions or their resulting internal fragments. This limitation greatly influenced the search results; it can, however, be overcome by modifying the N-terminal amino acids with basic moieties such as TMT. The use of elastase as a digestion enzyme in the proteomic workflow further increased the complexity of the sample. Therefore, orthogonal multidimensional separation was necessary. Offgel-IEF was used as the separation technique for the first dimension; here peptides are separated according to their isoelectric point (pI).
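The difference in cleavage specificity between trypsin and elastase can be illustrated with a toy in-silico digest. The function and the peptide sequence below are illustrative stand-ins, not part of the actual data-analysis pipeline used in this work:

```python
def digest(sequence, cleave_after):
    """Toy in-silico digest: cut C-terminally to the given residues."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa in cleave_after:
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

TRYPSIN = set("KR")        # cleaves after Lys/Arg
ELASTASE = set("AVLIST")   # observed preference: after Ala/Val/Leu/Ile/Ser/Thr

seq = "MKTAYIAKQRQISFVK"   # made-up sequence for illustration only
```

The shorter, more numerous elastase peptides illustrate both the higher achievable coverage and the increased sample complexity noted above.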
However, with the standard protocol the acquired samples could not be loaded onto the nLC because of the high viscosity of the concentrated samples. In order to achieve compatibility of the Offgel-IEF with the nLC-MALDI-TOF/TOF platform, the separation protocol of the Offgel-IEF was modified by omitting the glycerol, which was the cause of the viscous solution. The novel glycerol-free protocol is advantageous over the conventional method because the samples can be picked up directly and loaded onto the pre-column without an increase in back pressure or subsequent pre-column clogging. The glycerol-free protocol was then assessed using purple membrane and the membrane fraction of C. glutamicum. The results obtained were comparable to those of published reports; the absence of glycerol therefore did not affect the separation efficiency of the Offgel-IEF. In addition, the applicability of elastase and the glycerol-free Offgel-IEF for the quantitation of membrane proteins was assessed. Most of the unique peptides identified were in the acidic region; 85% were focused into only one fraction and approximately 95% into only two fractions. These results are in accordance with previously published results (Lengqvist et al., 2007). Compared with theoretical digests of the proteins identified in this study, it can be concluded that the basic moiety (TMT) on the peptide backbone did not affect the separation efficiency of the Offgel-IEF. In an applied study, changes in the protein content of a yeast strain grown in two different media were relatively quantified. For example, prominent proteins such as the hexose transporter proteins, responsible for transporting glucose across the membrane, were successfully quantified. Last but not least, the nLC-MALDI-TOF/TOF platform also served as a basis for the development of a high-throughput method for the identification of protein phosphorylation.
The establishment of such a method using MALDI has been challenging due to the lack of matrices as sensitive as CHCA is for non-modified peptides, exhibiting homogeneous crystallisation and thus yielding stable signal intensity over a long period of time in an automated setup. The first step of this method was the establishment of a matrix/matrix mixture with better crystal morphology and higher analyte signal intensity than the matrix of choice, i.e. DHB. From MS and MS/MS measurements of standard phosphopeptides, a combination of FCCA and CHCA in a 3:1 ratio with 3 mM NH4H2PO4 facilitated high analyte signal intensities and good fragmentation behaviour. Combined with a custom-packed biphasic column for the enrichment of phosphopeptides, the applicability of the matrix mixture was assessed in an automated phosphopeptide analysis using standard phosphopeptides spiked into a 20-fold excess BSA digest. These analyses showed that the method is reproducible and that both flow-throughs can be analysed. Applying the method to the analysis of two standard phosphoproteins, alpha/beta-casein, and a leukemia-related protein, ENL, 13 phosphopeptides from alpha/beta-casein and 13 phosphopeptides with 6 phosphorylation sites from ENL were identified. As a general conclusion, it can be stated that the nLC-MALDI-TOF/TOF method established here, in various modifications for different analytical purposes, is a robust platform for proteomic analyses.
Development of a computational method for reaction-driven de novo design of druglike compounds
(2010)
A new method for computer-based de novo design of drug candidate structures is proposed. DOGS (Design of Genuine Structures) features a ligand-based strategy to suggest new molecular structures. The quality of designed compounds is assessed by a graph kernel method measuring the distance of designed molecules to a known reference ligand. Two graph representations of molecules (molecular graph and reduced graph) are implemented, featuring different levels of abstraction from the molecular structure. A fully deterministic construction procedure, explicitly designed to facilitate the synthesizability of proposed structures, is realized: DOGS uses readily available synthesis building blocks and established reaction schemes to assemble new molecules. This approach enables the software not only to propose the final compounds, but also to suggest synthesis routes to generate them at the bench. The set of synthesis schemes comprises about 83 chemical reactions; special focus was put on ring-closure reactions forming drug-like substructures. The library of building blocks consists of about 25,000 readily available synthesis building blocks. DOGS builds up new structures in a stepwise process: each virtual synthesis step adds a fragment to the growing molecule until a stop criterion (upper threshold for molecular mass or number of synthesis steps) is fulfilled. In a theoretical evaluation, a set of ~1,800 molecules proposed by DOGS was analyzed for critical properties of de novo designed compounds. The software is able to suggest drug-like molecules (79% violate fewer than two of Lipinski's 'rule of five'). In addition, a trained classifier for drug-likeness assigns a score >0.8 to 51% of the designed molecules (with 1.0 being the top score), and most of the DOGS molecules are deemed synthesizable by a retro-synthesis descriptor (77% of molecules score in the top 10% of the descriptor's value range).
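The stepwise, deterministic construction described above can be sketched as a fragment-growing loop. The fragment masses, the compatibility check and the kernel-distance function below are hypothetical stand-ins for the real building-block library, reaction-scheme check and graph-kernel score of DOGS:

```python
# Hypothetical sketch of a DOGS-style deterministic construction loop.
# Molecules are toy lists of fragment tokens; masses are toy numbers.
MAX_MASS = 500.0   # stop criterion: upper threshold for molecular mass
MAX_STEPS = 5      # stop criterion: upper threshold for synthesis steps

FRAGMENT_MASS = {"core": 120.0, "A": 150.0, "B": 90.0, "C": 200.0}

def mass(molecule):
    return sum(FRAGMENT_MASS[f] for f in molecule)

def grow_molecule(seed, fragments, compatible, kernel_distance):
    molecule, route = list(seed), []
    for _ in range(MAX_STEPS):
        # enumerate fragments that a known reaction could attach
        candidates = [f for f in fragments if compatible(molecule, f)]
        if not candidates:
            break
        # deterministic choice: product closest to the reference ligand
        # under the graph-kernel distance
        best = min(candidates, key=lambda f: kernel_distance(molecule + [f]))
        molecule.append(best)
        route.append(best)
        if mass(molecule) >= MAX_MASS:
            break
    return molecule, route
```

With a toy distance function that targets a mass of 400, the loop grows the seed until the mass threshold is reached and returns both the molecule and the virtual synthesis route, mirroring how DOGS reports a synthesis plan alongside each design.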
Calculated logP(o/w) values of constructed molecules follow a unimodal distribution centred close to the mean of the logP(o/w) values calculated for the reference compounds. A structural analysis of selected designs reveals that DOGS is capable of constructing molecules reflecting the overall topological arrangement of pharmacophoric features found in the reference ligands. At the same time, the DOGS designs represent innovative compounds that are structurally distinct from the references. Synthesis routes for these examples are short and seem feasible in most cases; some reaction steps might need modification by using protecting groups to avoid unwanted side reactions. In a case study, plausible bioisosteres for known privileged fragments addressing the S1 pocket of trypsin were proposed by DOGS; three of them can be found in known trypsin inhibitors as S1-addressing side chains. The software was also tested in two prospective case studies to design bioactive compounds. DOGS was applied to design ligands for human gamma-secretase and the human histamine receptor subtype 4 (hH4R). Two selected designs for gamma-secretase were readily synthesizable in one-step reactions, as suggested by the software; both compounds represent inverse modulators of the target molecule. In the second case study, a ligand candidate selected for hH4R was synthesized exactly following the three-step synthesis plan suggested by DOGS; this compound showed low activity on the target structure. The concept of DOGS is able to deliver synthesizable and bioactive compounds, and the suggested synthesis plans of selected compounds were readily pursuable. DOGS can therefore serve as a valuable idea generator for the design of new pharmacologically active compounds.
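The drug-likeness filter used in the evaluation above, Lipinski's 'rule of five', amounts to counting rule violations per molecule. A minimal sketch with the standard published thresholds; the property values passed in would come from a descriptor calculator, which is not shown here:

```python
def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count violations of Lipinski's 'rule of five' (standard thresholds)."""
    rules = [
        mw > 500,         # molecular weight above 500 Da
        logp > 5,         # calculated logP above 5
        h_donors > 5,     # more than 5 hydrogen-bond donors
        h_acceptors > 10, # more than 10 hydrogen-bond acceptors
    ]
    return sum(rules)

def drug_like(mw, logp, h_donors, h_acceptors):
    # the evaluation above counts a design as drug-like if it violates
    # fewer than two of the four rules
    return lipinski_violations(mw, logp, h_donors, h_acceptors) < 2
```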
Biodegradation and elimination of industrial wastewater in the context of whole effluent assessment
(2010)
The focus of this thesis is on the assessment of the degradability of indirectly discharged wastewater in municipal treatment plants and on assessing indirectly discharged effluents by coupling the Zahn-Wellens test with effect-based bioassays. With this approach, persistent toxicity of an indirectly discharged effluent can be detected and attributed to the respective emission source. In the first study, 8 wastewater samples from different industrial sectors were analysed according to the "Whole Effluent Assessment" (WEA) approach developed by OSPAR. In another study, this concept was applied to 20 wastewater samples each from the paper-manufacturing and metal-surface-treating industries. In the first study, generally low to moderate ecotoxic effects of the wastewater samples were determined. One textile wastewater sample was mutagenic in the Ames test and genotoxic in the umu test; the source of these effects could not be identified. After treatment in the Zahn-Wellens test, the mutagenicity in the Ames test was eliminated completely, while genotoxicity could still be observed in the umu test. Another wastewater sample, from the chemical industry, was mutagenic in the Ames test. Its mutagenicity was investigated by additional chemical analysis and backtracking: a nitro-aromatic compound (2-methoxy-4-nitroaniline) used for batchwise azo dye synthesis and its transformation products are the probable cause of the mutagenic effects observed. Testing the mother liquor from dye production confirmed that this partial wastewater stream was mutagenic in the Ames test. The wastewater samples from the paper-manufacturing industry of the second study were not toxic or genotoxic in the acute Daphnia test, fish egg test and umu test; in the luminescent bacteria test, moderate toxicity was observed.
Wastewater from four paper mills showed elevated or high algae toxicity, in line with the results of the Lemna test, which was mostly less sensitive than the algae test. The colouration of the wastewater samples in the visible band did not correlate with algae toxicity and is thus not considered its primary origin. The algae toxicity in the wastewater of the respective paper factory could also not be explained by the thermomechanically produced groundwood pulp (TMP) partial stream; presumably other raw materials, such as biocides, are the source of the algae toxicity. In the algae test, flat dose-response relationships and growth promotion at higher dilution factors were often observed, indicating that several effects overlap. The wastewater samples from the printed circuit board and electroplating industries (all indirectly discharged) were biologically pre-treated for 7 days in the Zahn-Wellens test before ecotoxicity testing. Thus, persistent toxicity could be discriminated from non-persistent toxicity caused, e.g., by ammonium or readily biodegradable compounds. With respect to metal concentrations, none of the samples was heavily polluted. The maximum conductivity of the samples was 43,700 µS/cm, indicating that salts might contribute to the overall toxicity. Half of the wastewater samples proved to be biologically well treatable in the Zahn-Wellens test, with COD elimination above 80%, whilst the others were insufficiently biodegraded (COD elimination 28-74%). After the pre-treatment in the Zahn-Wellens test, wastewater samples from four companies were extremely ecotoxic, especially to algae, and three wastewater samples were genotoxic in the umu test. Applying the salt-correction rules of the German Wastewater Ordinance to the test results, only a small part of the toxicity could be attributed to salts.
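The treatability classification used above rests on the COD elimination measured in the Zahn-Wellens test, i.e. the relative decrease in chemical oxygen demand over the test period. A minimal sketch with made-up COD values; the 80% cut-off is the one applied in this study:

```python
def cod_elimination(cod_initial, cod_final):
    """COD elimination in percent: relative decrease of chemical oxygen demand."""
    return 100.0 * (cod_initial - cod_final) / cod_initial

def well_treatable(cod_initial, cod_final, threshold=80.0):
    # classification used above: biologically well treatable if the
    # COD elimination in the Zahn-Wellens test exceeds the threshold
    return cod_elimination(cod_initial, cod_final) > threshold

# illustrative values in mg O2/L (not measured data from this work)
elimination = cod_elimination(1000.0, 150.0)  # 85.0 percent
```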
In one factory, the origin of the ecotoxicity was attributed to the organosulphide dimethyldithiocarbamate (DMDTC), used as a water-treatment chemical for metal precipitation. This assumption, based on a rough calculation of the input of the organosulphide into the wastewater, was confirmed in practice by testing its ecotoxicity at the corresponding dilution ratio after pre-treatment in the Zahn-Wellens test. The results show that bioassays are a suitable tool for assessing the ecotoxicological relevance of these complex organic mixtures. The combination of the Zahn-Wellens test followed by ecotoxicity tests turned out to be a cost-efficient and suitable instrument for the evaluation of indirect dischargers that also meets the requirements of the IPPC Directive.
Purpose of the Study: The purpose of the current study was to evaluate the role of radiofrequency ablation (RFA) and microwave ablation (MWA) in the treatment of pulmonary neoplasms. Materials and Methods: From March 2004 to January 2009, 164 patients (92 males, 72 females; mean age 59.7 years, SD: 10.2) underwent computed tomography (CT)-guided percutaneous RFA of pulmonary malignancies. RFA was performed on 248 lung lesions (20 primary and 228 metastatic lesions) in 248 sessions (one lesion per session). Tumors were pathologically proven and were classified as primary lung neoplasms in 20 patients (non-small cell lung cancer) and as metastatic lung neoplasms in 144 patients. RFA was performed using either (a) the CelonProSurge bipolar internally cooled applicator or (b) the RITA® Starburst™ XL. From December 2007 to October 2009, 80 patients (30 males, 50 females; mean age 59.7 years, range: 48-68, SD: 6.4) underwent CT-guided percutaneous MW ablation of pulmonary metastases from various histopathological primaries. MWA was performed on 130 lung lesions in 130 sessions (one lesion per session) using the Valleylab™ system. Results: The overall success rate of RFA was 67.7% (168/248 lesions), with an overall failure rate, due either to tumor residue or to recurrence on follow-up, of 32.3% (80/248) and a mean time to tumor progression of 5.6 months (SD: 2.99; range: 1-18 months). Complete successful ablation was achieved in 73.1% of lesions treated by MWA (95/130), with a failure rate, due either to tumor residue or to recurrence on follow-up, of 26.9% (35/130) and a mean time to tumor progression of 6 months (SD: 2.83; range: 1-12 months). Correlation of the histopathological type of the lesion with the end result of ablation therapy revealed no significant correlation for either RFA or MWA (p > 0.1). The preablation tumor size was one of the most significant factors determining the end result of ablation.
In RFA, successful tumor ablation was statistically significantly more frequent for lesions with a maximal axial diameter of up to 2.5 cm (110/140) than for lesions of more than 2.5 cm in maximal axial diameter (58/108) (Fisher's exact test: p < 0.0001). In MW-ablated lesions, successful tumor ablation was statistically significantly more frequent for lesions with a maximal axial diameter of up to 3 cm (90/110) than for lesions of more than 3 cm in maximal axial diameter (5/20) (Fisher's exact test: p < 0.001). The location of the lesion was another important factor determining the end result of ablation: in both RFA and MWA, successful ablation was significantly more frequent for peripheral lesions (RFA: 120/160, 80%; MWA: 80/100, 80%) than for centrally located lesions (RFA: 48/88, 50%; MWA: 15/30, 50%) (Fisher's exact test: p < 0.001). For successfully RFA-ablated cases the mean preablation tumor volume was 1.9 cc (SD: 0.9; range: 0.3-4.25 cc), while for failed cases it was 3.7 cc (SD: 2.4; range: 0.8-6.8 cc). For successfully MW-ablated cases the mean preablation tumor volume was 2.4 cc (SD: 2.2; range: 0.25-8.2 cc), while for failed cases it was 3.5 cc (SD: 2.6; range: 0.3-7.1 cc). In RFA, the survival rates at 12, 24 and 36 months were 90%, 78% and 68%, respectively, while in MWA-treated patients the survival rate within the 12-month follow-up period was 96% and at 20 months it was 77%.
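The size-dependence comparisons above are Fisher's exact tests on 2x2 tables of ablation success versus lesion size. A minimal stdlib sketch of the standard two-sided test, applied to the reported RFA counts (110/140 successes for lesions up to 2.5 cm versus 58/108 for larger lesions); this reimplements the textbook test, not the statistics software actually used in the study:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    def pmf(k):
        # hypergeometric probability of k successes in the first row
        return comb(row1, k) * comb(n - row1, col1 - k) / denom
    p_obs = pmf(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # sum the probabilities of all tables at least as extreme as observed
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-12))

# RFA size dependence reported above:
# success 110/140 for lesions <= 2.5 cm vs 58/108 for lesions > 2.5 cm
p_rfa = fisher_exact_two_sided(110, 30, 58, 50)
```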
Complications associated with the ablation therapy were: (a) procedure-related mortality: 0.4% (1/248) in RFA, due to massive pulmonary hemorrhage, versus 0% (0/130) in MWA; (b) pneumothorax: 11.3% (28/248) in RFA versus 8.5% (11/130) in MWA; (c) pulmonary hemorrhage: 17.7% (44/248 sessions) in RFA, of which one patient had massive uncontrolled bleeding and immediate death, versus 6.2% (8/130) in MWA; (d) pleural effusion: 3.2% (8/248 sessions) in RFA versus 3.8% (6/130) in MWA; (e) hemoptysis: 4% (10/248) in RFA versus 4.6% (6/130) in MWA, ranging from mildly tinged sputum to frank bleeding; (f) infection: 0.4% (1/248) in RFA versus 0% in MWA; and (g) post-ablation pain: 10% (25/248) in RFA versus 9.2% (12/130) in MWA. Pain was generally adequately controlled by analgesics. Conclusion: Radiofrequency and microwave ablation are effective, minimally invasive tools and may be safely applied for the management of lung malignancy. The success of ablation therapy is significantly correlated with the preablation tumor size, volume and location.
G-protein coupled receptors (GPCRs) are the key players in signal perception and transduction and one of the currently most important classes of drug targets. An example of high pharmacological relevance is the human endothelin (ET) system, comprising two rhodopsin-like GPCRs, the endothelin A (ETA) and the endothelin B (ETB) receptor. Both receptors are major modulators in cardiovascular regulation and show striking diversities in biological responses affecting vasoconstriction and blood pressure regulation as well as many other physiological processes. Numerous disorders are associated with ET dysfunction, and ET antagonism is considered an efficient treatment of diseases like heart failure, hypertension, diabetes, atherosclerosis and even cancer. This study exemplifies strategies and approaches for the preparative-scale synthesis of GPCRs in individual cell-free (CF) systems based on E. coli, a newly emerging and promising technique for the production of even very difficult membrane proteins. The preparation of high-quality samples in sufficient amounts is still a major bottleneck for the structural determination of the ET receptors. Heterologous overexpression has been a challenge for decades, and extensive studies with conventional cell-based systems have had only limited success. A central milestone of this study was the development of efficient preparative-scale expression protocols for the ETA receptor in qualities sufficient for structural analysis, using individual CF systems. Newly designed optimization strategies, the implementation of a variety of CF expression modes and the development of specific quality-control assays finally resulted in the production of several milligrams of ETA receptor per millilitre of reaction mixture.
The versatility of CF expression was extensively used to modulate GPCR sample quality by modifying the solubilization environment with detergents and lipids in a variety of combinations at different stages of the production process. Downstream processing procedures for CF-synthesized GPCRs were systematically optimized, and sample properties were analysed with respect to homogeneity, protein stability and receptor-ligand binding competence. Evaluation was accomplished by an array of complementary and specifically modified techniques. Depending on its hydrophobic environment, CF production of the ETA receptor resulted in non-aggregated, monodisperse forms with sufficient long-term stability and a high degree of secondary-structure thermostability. The obtained results document the CF production of the ETA receptor in two different modes as an example of a class A GPCR in ligand-binding-competent and non-aggregated form, in quantities sufficient for structural approaches. The presented strategy could serve as a basic guideline for the production of related receptors in similar systems.
In this work we study compact stars, i.e. neutron stars, as cosmic laboratories for nuclear matter. With a mass of around 1-3 solar masses and a radius of around 10 km, compact stars are very dense and, besides nucleons, can contain exotic matter such as hyperons or quark matter. The KaoS collaboration studied nuclear matter at densities up to 2-3 times saturation density by analysing kaon multiplicities from Au+Au and C+C collisions. The results show that nuclear matter in the corresponding density region is very compressible, with an incompressibility below 200 MeV. For such soft nuclear equations of state the maximum masses of neutron stars are ca. 1.8-1.9 solar masses, whereas the central densities are higher than 5 times nuclear saturation density and therefore point towards a possible phase transition to quark matter. If quark matter were present in the interior of neutron stars, so-called hybrid stars, it could already be produced during their birth in supernova explosions. To study this, we implement a quark-matter phase transition in a hadronic equation of state that is used in supernova simulations. Supernova simulations of low- and intermediate-mass progenitors with two different bag constants show a collapse of the proto-neutron star due to the softening of the equation of state in the quark-hadron mixed phase. The stiffening of the equation of state for pure quark matter halts the collapse and leads to the production of a second shock wave. The second shock wave is energetic enough to lead to an explosion of the star and produces a neutrino burst when passing the neutrinospheres. Furthermore, first studies of the long-time cooling of hybrid stars show that colour superconductivity can significantly influence the cooling behaviour of hybrid stars if all quarks form Cooper pairs.
For the so-called CSL phase (colour-spin locking), with pairing energies of several MeV, the cooling of the quark phase is suppressed and the hybrid star appears as a purely hadronic star.
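The maximum masses and central densities quoted above follow, for a given equation of state, from integrating the standard Tolman-Oppenheimer-Volkoff (TOV) structure equations; this is a textbook relation rather than a result of this work (units with c = 1, ε the energy density, P the pressure, m(r) the enclosed gravitational mass):

```latex
\frac{\mathrm{d}P}{\mathrm{d}r}
  = -\,\frac{G\,\bigl[\varepsilon(r) + P(r)\bigr]\,\bigl[m(r) + 4\pi r^{3} P(r)\bigr]}
            {r^{2}\,\bigl[1 - 2 G m(r)/r\bigr]},
\qquad
\frac{\mathrm{d}m}{\mathrm{d}r} = 4\pi r^{2}\,\varepsilon(r)
```

Integrating outward from a chosen central density until P = 0 yields one star; scanning over central densities traces out the mass-radius curve whose peak is the maximum mass for that equation of state.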
Vibronic (vibrational-electronic) transitions are among the fundamental processes in molecular physics. Indeed, vibronic transitions are essential for both the radiative and nonradiative photophysical and photochemical properties of molecules, such as absorption, emission, Raman scattering, circular dichroism, electron transfer and internal conversion. A detailed understanding of these transitions in varying systems, especially for (large) biomolecules, is thus of particular interest. Describing vibronic transitions in polyatomic systems with hundreds of atoms is, however, a difficult task due to the large number of coupled degrees of freedom. Even within the relatively crude harmonic approximation, i.e. harmonic Born-Oppenheimer potential energy surfaces, the brute-force evaluation of Franck-Condon intensity profiles in a time-independent sum-over-states approach is prohibitive for complex systems owing to the vast number of multi-dimensional Franck-Condon integrals. The main goal of this thesis is to describe a variety of molecular vibronic transitions, with special focus on the development of approaches that are applicable to extended molecular systems. We use various representations of Fermi's golden rule in frequency, time and phase space via coherent states to reduce the computational complexity. Although each representation has benefits and shortcomings in its evaluation, they complement each other: peak assignment of a spectrum can be made directly after a calculation in the frequency domain, but this sum-over-states route is usually slow; in contrast, computation is considerably faster in the time domain with Fourier transformation, but the peak assignment is not directly available. The representation in phase space does not immediately provide physically meaningful quantities, but it can link the frequency and time domains.
These approaches have been applied herein, for example, to the (non-Condon) absorption spectrum of benzene and to electron transfer of bacteriochlorophyll in the photosynthetic reaction center at finite temperature. This work is a significant step in the treatment of vibronic structure, allowing for the accurate and efficient treatment of complex systems, and provides a new analysis tool for molecular science.
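The frequency- and time-domain representations of Fermi's golden rule contrasted above take the following familiar forms at the harmonic, Condon level of approximation (a sketch of the standard relations, not the generalized non-Condon expressions developed in the thesis; H_g and H_e are the initial- and final-state vibrational Hamiltonians, p_v the thermal population of state |v>):

```latex
% Frequency domain: Franck-Condon-weighted sum over states
I(\omega) \;\propto\; \sum_{v,v'} p_v\,\bigl|\langle v' \mid v \rangle\bigr|^{2}\,
\delta\!\left(E_{v'} - E_{v} - \hbar\omega\right)

% Time domain: Fourier transform of a correlation function
I(\omega) \;\propto\; \int_{-\infty}^{\infty} \mathrm{d}t \; e^{\mathrm{i}\omega t}\,
\operatorname{Tr}\!\left[\hat{\rho}\; e^{\mathrm{i} H_g t/\hbar}\, e^{-\mathrm{i} H_e t/\hbar}\right]
```

The first form assigns each peak to a pair of vibrational states but requires enumerating all relevant Franck-Condon integrals; the second replaces that enumeration with the propagation of a single correlation function, which is why it scales to extended systems.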
Succinate:quinone oxidoreductases (SQORs) are integral membrane protein complexes that couple the two-electron oxidation of succinate to fumarate (succinate → fumarate + 2H+ + 2e-) to the two-electron reduction of quinone to quinol (quinone + 2H+ + 2e- → quinol), and also catalyze the opposite reaction, the reduction of fumarate by quinol. In mitochondria and some aerobic bacteria, succinate:ubiquinone reductase, also known as complex II of the aerobic respiratory chain or as succinate dehydrogenase of the tricarboxylic acid (TCA, or Krebs) cycle, catalyzes the oxidation of succinate by ubiquinone, which is mildly exergonic under standard conditions and not directly associated with energy storage in the form of a transmembrane electrochemical proton potential (Δp). Gram-positive bacteria do not contain ubiquinone but rather menaquinone, a quinone with a significantly lower oxidation-reduction ("redox") midpoint potential. In these cases, the catalyzed oxidation of succinate by quinone is endergonic under standard conditions; consequently, these bacteria face a thermodynamic problem in supporting the catalysis of this reaction in vivo. Based on experimental evidence obtained with whole cells and purified membranes, it had previously been proposed that the SQR from Gram-positive bacteria supports this reaction at the expense of the protonmotive force, Δp. Nonetheless, it has been argued that the observed Δp dependence is not associated specifically with the activity of SQR, because the occurrence of artifacts in experiments with bacterial membranes and whole cells cannot be fully excluded. Clearly, definitive insight into the mechanism of catalysis of this intriguing reaction required a corresponding functional characterization of an isolated, membrane-bound SQR from a Gram-positive bacterium.
The first aim of the present work addresses the question of whether the general feasibility of the energetically uphill electron transfer from succinate to menaquinone is associated specifically with a single enzyme complex, the SQR. The prerequisite for achieving this goal was a stable preparation of this enzyme.
The glycine receptor (GlyR) is the major inhibitory neurotransmitter receptor in the spinal cord and brainstem. Heteropentameric GlyRs are clustered and anchored at inhibitory postsynaptic sites through the binding of the large intracellular loop between transmembrane domains 3 and 4 of the GlyRbeta subunit (GlyRbeta loop) to the cytoplasmic scaffolding protein gephyrin. GlyRs are also cotransported with gephyrin along microtubules in the anterograde and retrograde directions, owing to the binding of gephyrin to microtubule-associated motor proteins. Additionally, GlyRs undergo lateral diffusion in the plasma membrane from extrasynaptic to synaptic sites and vice versa. Since its discovery, gephyrin had remained for many years the only binding partner known to interact directly with the GlyRbeta subunit. In an attempt to elucidate further mechanisms involved in GlyR function and regulation at inhibitory postsynaptic sites, a proteomic screen for putative binding partners of the GlyRbeta loop was performed, and three proteins were identified as putative interactors. In this thesis, the interaction between these putative binding proteins and the GlyRbeta subunit was analyzed and characterized. Binding studies with glutathione-S-transferase fusion proteins revealed that all putative binding proteins, Syndapin (Sdp), Vacuolar Protein Sorting 35 (Vps35) and Neurobeachin (Nbea), interact specifically with the GlyRbeta loop. The Sdp family of proteins comprises F-BAR- and SH3-domain-containing proteins. Immunocytochemical experiments showed that SdpI, as well as the isoforms SdpII-S and SdpII-L, colocalizes with the full-length GlyRbeta subunit in a mammalian cell expression system. In cultured spinal cord neurons, a partial colocalization of endogenous SdpI with several excitatory and inhibitory synaptic markers was demonstrated. Mapping experiments using deletion mutants narrowed the SdpI binding site down to 22 amino acids.
Peptide competition experiments confirmed the specificity of the interaction between SdpI and this sequence of the GlyRbeta subunit. Point mutation analysis revealed that the interaction between SdpI and the GlyRbeta subunit depends on the SH3 domain of SdpI and a proline-rich motif in the GlyRbeta loop. In addition, binding studies in mammalian cells showed that both splice variants of SdpII as well as SdpI interact with the GlyR scaffolding protein gephyrin. Although the SdpI and gephyrin binding sites do not overlap, protein competition studies revealed that the interaction of the E-domain of gephyrin with the GlyRbeta loop interferes with SdpI binding. Since SdpI is a dynamin-binding protein involved in vesicle endocytosis and recycling pathways, a possible function of SdpI in the regulation of GlyR synaptic distribution was investigated. Co-immunoprecipitation experiments confirmed a SdpI-GlyR association in the vesicle-enriched fraction of rat spinal cord tissue. Immunocytochemical studies of SdpI knockout mice showed that the clustering and distribution of GlyRs in the brainstem are unchanged. However, acute down-regulation of SdpI in rat spinal cord neurons by viral shRNA expression led to a reduction in the number and size of GlyR clusters, an effect that could be rescued by overexpression of shRNA-resistant SdpI. Further immunocytochemical analysis of the localization of gephyrin, the gamma2 subunit of the type A gamma-aminobutyric acid receptor (GABAARgamma2 subunit) and the vesicular inhibitory amino acid transporter (VIAAT) under SdpI knock-down conditions showed that both the number and average size of gamma2-subunit-containing GABAA receptor clusters were significantly reduced in spinal cord neurons. In contrast to GlyR and GABAARgamma2 immunoreactivity, the number and average size of gephyrin and VIAAT clusters were barely reduced upon SdpI down-regulation. 
These results suggest that SdpI plays a role in GlyR trafficking that can be compensated by other syndapin isoforms or other trafficking pathways. Furthermore, SdpI might be required for the clustering of GlyRs and gamma2-subunit-containing GABAARs in the spinal cord and brainstem. Vps35 is the core protein of the retromer complex, which mediates the endosome-to-Golgi retrieval of different types of receptors in mammals and yeast. Here, protein-protein interaction assays revealed for the first time that Vps35 interacts directly with the GlyRbeta loop as well as with gephyrin. The generation of specific Vps35 antibodies made it possible to determine the distribution of this protein in the central nervous system. Immunocytochemical analyses revealed the presence of Vps35 in the somata and neurites of spinal cord neurons, suggesting a possible interaction of Vps35 with the GlyR under physiological conditions. Nbea is a BEACH domain-containing, neuron-specific protein. Binding studies revealed a direct interaction between two regions of Nbea and the GlyRbeta loop. Immunocytochemical experiments confirmed a somatic and synaptic distribution of Nbea in primary cultures. In spinal cord neurons, a partial colocalization of Nbea with excitatory and inhibitory synaptic markers suggests a possible interaction of Nbea with the GlyR at inhibitory synaptic sites.
Summary: Information and communication are critical to the successful management of infectious diseases, because an effective communication strategy prevents both (1) a surge of anxious patients who have not genuinely been exposed to the pathogen ('low-risk patients') from overwhelming medical infrastructures and (2) the further transmission of the infectious agent. Surge of low-risk patients: The arrival of large numbers of low-risk patients at hospitals following an infectious disease emergency would be problematic for three main reasons. First, it would complicate the situation at hospitals receiving exposed patients, delaying the treatment of the acutely ill, creating difficulties of crowd control and tying up medical resources. Second, for the low-risk patients themselves, attending hospital following an infectious disease emergency might increase their risk of exposure to the agent in question. Third, the needs of low-risk patients may be poorly attended to at hospitals that are already overstretched dealing with medical casualties. Future transmission: Obtaining early information about symptoms and isolating infected patients is the most effective strategy for interrupting the chain of infection in the public in the absence of specific prophylaxis or treatment. Particularly at the beginning of an outbreak, these non-pharmaceutical interventions play an important role in enabling the early detection of signs or symptoms and in encouraging passengers to adopt appropriate preventive behaviour in order to limit the spread of the disease. This thesis includes two papers dealing with this problem: The first part is a systematic literature review of information needs following an infectious disease emergency (anthrax, SARS, pneumonic plague). The key question was: what are the information needs of the public during an infectious disease emergency? 
The second part is an empirical investigation of information needs and communication strategies at the airport during the early stage of the influenza pandemic. The key question here was: what communication strategies help to meet the information needs and to enable the public to behave appropriately and responsibly? Conclusions: Evidence from the anthrax attacks in the United States suggested that a surge of low-risk patients is by no means inevitable. Data from the SARS outbreak illustrated that if hospitals are seen as sources of contagion, many patients with non-bioterrorism-related health care needs may delay seeking help. Finally, the events surrounding the pneumonic plague outbreak of 1994 in Surat, India, highlighted the need for the public to be kept adequately informed about an incident to avoid the creation of rumours. Clear, consistent and credible information is key to the successful management of infectious disease outbreaks. The results of the empirical investigation suggested that the desire for information reflects current anxiety and does not mirror the objective scientific assessment of exposure. The airport study showed that perceived information needs were directly related to anxiety: the least anxious did not require any further information, whereas the most anxious reported significant information needs concerning medical treatment, public health management and the assessment of the ongoing situation, irrespective of their actual exposure. A communication strategy focusing only on the genuinely exposed individuals neglects the information needs of those who worry about having contracted the virus and seek medical attention. Effective communication strategies should enable the general public to detect early signs or symptoms and provide them with behavioural advice to prevent the further transmission of the infectious agent. 
These include the provision of clear information about the incident, the symptoms and what to do to prevent further transmission; detailed and regularly updated information in various media formats (telephone, internet, etc.); and rapid triage at hospital entrances to guide patients to the appropriate medical infrastructures. Relevance: These research findings could contribute to a shift in the organisational and communicative approach to responding to infectious disease outbreaks and may be considered relevant for future risk communication and policy decision making.
Interview with Dario Azzellini, author of The Business of War and the new documentary film, Comuna Under Construction. What is it about Venezuela that is so interesting? Since 2003 I have practically lived in Venezuela. What motivates me is that I am interested in the social transformation process happening here. It’s a different type of revolution, a new left that draws from all the experiences of the 60s, 70s, 80s and 90s. ...
This thesis investigates the development of early cognition in infancy using neural network models. Fundamental events in visual perception such as caused motion, occlusion, object permanence, tracking of moving objects behind occluders, object unity perception and sequence learning are modeled in a unifying computational framework while staying close to experimental data from the developmental psychology of infancy. In the first project, the development of causality and occlusion perception in infancy is modeled using a simple, three-layered, recurrent network trained with error backpropagation to predict future inputs (an Elman network). The model unifies two infant studies on causality and occlusion perception. Subsequently, in the second project, the established framework is extended to a larger prediction network that models the development of object unity, object permanence and occlusion perception in infancy. It is shown that these different phenomena can be unified into a single theoretical framework, thereby explaining experimental data from 14 infant studies. The framework shows that these developmental phenomena can be explained by accurately representing and predicting statistical regularities in the visual environment. The models assume (1) distinct neuronal populations in the visual cortex of the newborn infant that process different motion directions of visual stimuli, an assumption supported by neuroscientific evidence, and (2) learning algorithms that are guided by the goal of predicting future events. Specifically, the models demonstrate that none of the innate force notions, motion analysis modules, common motion detectors, specific perceptual rules or abilities to "reason" about entities that have been widely postulated in the developmental literature are necessary to explain the discussed phenomena. 
Since the prediction of future events proved fruitful both for the theoretical explanation of various developmental phenomena and as a guideline for learning in infancy, the third model addresses the development of visual expectations themselves. A self-organising, fully recurrent neural network model is proposed that forms internal representations of input sequences and maps them onto eye movements. The reinforcement learning architecture (RLA) of the model learns to perform anticipatory eye movements as observed in a range of infant studies. The model suggests that the goal of maximizing the looking time at interesting stimuli guides infants' looking behavior, thereby explaining the occurrence and development of anticipatory eye movements and reaction times. In contrast to classical neural network modelling approaches in the developmental literature, the model uses local learning rules and contains several biologically plausible elements such as excitatory and inhibitory spiking neurons, spike-timing dependent plasticity (STDP), intrinsic plasticity (IP) and synaptic scaling. It is also novel from the technical point of view, as it uses a dynamic recurrent reservoir shaped by various plasticity mechanisms and combines it with reinforcement learning. The model accounts for twelve experimental studies and predicts, among other things, anticipatory behavior for arbitrary sequences and facilitated reacquisition of already learned sequences. All models emphasize the development of the perception of the discussed phenomena, thereby addressing the questions of how and why this developmental change takes place - questions that are difficult to assess experimentally. Despite the diversity of the discussed phenomena, all three projects rely on the same principle: the prediction of future events. 
This principle suggests that cognitive development in infancy may largely be guided by building internal models and representations of the visual environment and using those models to predict its future development.
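The prediction principle underlying these models can be illustrated with a minimal Elman-network sketch: a hypothetical repeating four-event "visual" sequence is one-hot coded, and the network is trained by backpropagating only the one-step prediction error (the context layer is treated as a fixed input, with no unrolling through time, as in the classic Elman network). The sequence, layer sizes and learning rate are invented for illustration and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical repeating sequence of visual events, one-hot coded: 0 -> 1 -> 2 -> 3 -> 0 ...
seq = [0, 1, 2, 3]
n, nh = 4, 8                 # input/output units, hidden units (illustrative sizes)
eye = np.eye(n)

# Three-layer Elman network: input + context -> hidden -> prediction of next input.
Wxh = rng.normal(0, 0.1, (nh, n))
Whh = rng.normal(0, 0.1, (nh, nh))
Why = rng.normal(0, 0.1, (n, nh))
bh, by = np.zeros(nh), np.zeros(n)
lr = 0.3

def step(x, ctx):
    h = np.tanh(Wxh @ x + Whh @ ctx + bh)
    z = Why @ h + by
    y = np.exp(z - z.max()); y /= y.sum()    # softmax over predicted next events
    return h, y

for epoch in range(500):
    ctx = np.zeros(nh)                        # context layer starts empty
    for t in range(len(seq)):
        x = eye[seq[t]]
        target = eye[seq[(t + 1) % len(seq)]]
        h, y = step(x, ctx)
        # One-step error backpropagation; the context is not backpropagated through.
        dz = y - target
        dh = (Why.T @ dz) * (1 - h * h)
        Why -= lr * np.outer(dz, h); by -= lr * dz
        Wxh -= lr * np.outer(dh, x); bh -= lr * dh
        Whh -= lr * np.outer(dh, ctx)
        ctx = h                               # copy hidden state into the context layer

# After training, the network anticipates the next event of the sequence.
ctx = np.zeros(nh)
preds = []
for t in range(len(seq)):
    h, y = step(eye[seq[t]], ctx)
    preds.append(int(y.argmax()))
    ctx = h
print(preds)
```

After a few hundred passes over the cycle, the predicted next events follow the sequence, mirroring how such models learn the statistical regularities of their visual input.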
This thesis consists of four chapters. Each chapter covers a topic in international macroeconomics and monetary policy. The first chapter investigates the impact of unexpected monetary policy shocks on exchange rates in a multi-country econometric model. The second chapter examines the linkage between macroeconomic fundamentals and exchange rates through the monetary policy expectation channel. The third chapter focuses on the international transmission of bank and corporate distress. The last chapter unfolds the interest rate channel of monetary policy transmission in an emerging economy, China, where regulations and market forces coexist in this transmission.
The pathophysiology of schizophrenia is still poorly understood. Investigating the neurophysiological correlates of cognitive dysfunction with functional neuroimaging techniques such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) is widely considered a promising approach to this problem. Working memory impairment is one of the most prominent cognitive impairments found in schizophrenia. Working memory can be divided into a number of component processes: encoding, maintenance and retrieval. These appear to be differentially affected in schizophrenia, but little is known about the neurophysiological disturbances that contribute to deficits in these component processes. The aim of this dissertation was to elucidate the neurophysiological underpinnings of the component processes of working memory and their disturbance in schizophrenia. In the first study, the neurophysiological substrates of visual working memory capacity limitations were investigated during encoding, maintenance and retrieval in 12 healthy subjects using event-related fMRI. Subjects had to encode up to four abstract visual shapes and maintain them in working memory for 12 seconds. Afterwards a test stimulus was presented, which matched one of the previously shown shapes in fifty percent of the trials. A bilateral inverted U-shaped pattern of BOLD activity with increasing memory load was observed already during encoding in areas closely linked with selective attention, i.e. the frontal eye fields and areas around the intraparietal sulcus. The increase in the number of stored items from memory load three to memory load four in these regions was negatively correlated with the corresponding increase in BOLD activity. These results point to a crucial role of attentional processes in the limited capacity of working memory. 
In the second study, the contribution of early perceptual processing deficits during encoding and retrieval to working memory dysfunction was investigated in 17 patients with schizophrenia and 17 healthy control subjects using EEG and event-related fMRI. A slightly modified version of the working memory task used in the first study was employed. Participants only had to encode and maintain up to three items. In patients, the amplitude of the P1 event-related potential was significantly reduced already during encoding in all memory load conditions. Similarly, BOLD activity in early visual areas known to generate the P1 was significantly reduced in patients. In controls, a stronger P1 amplitude increase with increasing memory load predicted better performance. These findings indicate that, in addition to later memory-related processing stages, early visual processing is disturbed in schizophrenia and contributes to working memory dysfunction by impairing the encoding of information. In the third study, which was based on the same data set as the second study, cortical activity and functional connectivity during working memory encoding, maintenance and retrieval were investigated in 17 patients with schizophrenia and 17 healthy control subjects using event-related fMRI. Patients had reduced working memory capacity. During encoding, activation in the left ventrolateral prefrontal cortex and extrastriate visual cortex was reduced in patients but positively correlated with working memory capacity in controls. During early maintenance, patients switched from hyper- to hypoactivation with increasing memory load in a fronto-parietal network which included the left dorsolateral prefrontal cortex. During retrieval, right ventrolateral prefrontal hyperactivation was correlated with encoding-related hypoactivation of the left ventrolateral prefrontal cortex in patients. 
Cortical dysfunction in patients during encoding and retrieval was accompanied by abnormal functional connectivity between fronto-parietal and visual areas. These findings indicate a primary encoding deficit in patients caused by a dysfunction of prefrontal and visual areas. The findings of these studies suggest that isolating the component processes of working memory leads to more specific markers of cortical dysfunction in schizophrenia, which had been obscured in previous studies. This approach may help to identify more reliable biomarkers and endophenotypes of schizophrenia.
Atherosclerosis is accompanied by infiltration of macrophages into the intima of blood vessels. There they engulf oxLDL (oxidized low-density lipoprotein) and differentiate into foam cells. These cells are known as major promoters of atherosclerosis progression. In initial experiments I demonstrated that foam cell formation caused a severe loss of the ability to produce IFNβ (interferon β) in response to stimulation with the bacterial cell wall component LPS (lipopolysaccharide). Since IFNβ is discussed as having anti-atherosclerotic potential and has the capability to induce immune tolerance, its inhibition in foam cells might promote the atherosclerotic process. For this reason, the aim of my PhD project was to clarify the underlying molecular mechanisms that attenuate LPS-induced IFNβ expression in foam cells. LPS activates TLR4 (Toll-like receptor 4) in macrophages. Downstream of this receptor two distinct signaling pathways are activated, namely a MyD88 (myeloid differentiation primary response gene 88)-dependent and a TRIF (TIR-domain-containing adapter-inducing interferon-β)-dependent one. Foam cell formation targeted the TRIF-dependent TLR4 signaling pathway, as seen by the loss of IRF3 activation and the inhibition of IFNβ expression, whereas MyD88-initiated NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells) activation and subsequent TNFα (tumor necrosis factor α) expression remained unaltered. The TRIF signaling cascade results in transactivation of the transcription factor IRF3 (interferon regulatory factor 3), the main activator of IFNβ expression. This event requires IRF3 phosphorylation by TBK1 (TANK-binding kinase 1), whereas TBK1 needs to be recruited to TRAF3 (TNF receptor-associated factor 3) by the scaffold protein TANK (TRAF family member-associated NF-κB activator) for its activation. 
This work allowed the following scheme to be proposed: OxLDL utilizes SR-A1 (scavenger receptor A1) to activate IRAK4 (interleukin-1 receptor-associated kinase 4), IRAK1 and Pellino3. Active IRAK1 and Pellino3 associate with TRAF3, and Pellino3 promotes mono-ubiquitination of the adaptor molecule TANK. Mono-ubiquitination of TANK interrupts TBK1 recruitment to TRAF3 and thereby abrogates the phosphorylation and transactivation of IRF3 as well as the subsequent expression of IFNβ. In this study I provide evidence for a negative regulatory role of Pellino3 in TRIF-dependent TLR4 signaling. This expands the current knowledge of the interplay between pathways downstream of scavenger and Toll-like receptors. Owing to the multifaceted roles of TLR4 signaling in pathology, the new TRIF-signaling inhibitor Pellino3 might be of importance as a therapeutic target for disease intervention.
Aim: To study the changes in leiomyoma volume following uterine artery embolization (UAE), to correlate these changes with the initial leiomyoma volume and location within the uterus, and to evaluate the impact of pre-procedural prediction of the best tube angle obliquity for visualization of the uterine artery origin using 3D-reconstructed contrast-enhanced MR angiography (CE-MRA) on the radiation dose, fluoroscopy time and contrast medium volume used during UAE. Materials and Methods: The study was performed in two parts. The first part was done retrospectively on 28 patients (age range: 37-57 years, mean: 48 years, SD: 4.81) in whom UAE was performed. All leiomyomas in all patients were evaluated, 84 leiomyomas in total. MRI studies were performed before, 3 months after and 1 year after UAE. The volume and location of each leiomyoma in each patient were evaluated in consensus by two radiologists. The second part included 40 consecutive patients (age range: 37-56 years, mean: 46 years, SD: 4.49) and was done in a controlled prospective/retrospective manner. For 20 sample patients (prospective part), the best tube angle obliquity was predicted pre-procedurally using 3D-reconstructed CE-MRA and provided to the interventionalist. 3D reconstruction was done using the Inspace application. The radiation dose, fluoroscopy time and contrast medium volume for these patients were compared with the data of the last 20 procedures (control) performed by the same interventionalist (retrospective part). Results: For the first part, the mean pre-embolization volume was 51.6 cm3 (range: 0.72-371.1 cm3, SD = 79.3). At 3-month follow-up, 83 (98.8%) leiomyomas showed a mean volume reduction of 52.62% (range: 12.79-96.67%, SD = 21.85) and 1 leiomyoma (1.2%) increased in volume. 
At 1-year follow-up, 5 (6%) leiomyomas were no longer detectable, 72 (85.7%) showed a further mean volume reduction of 20.5% (range: 2.52-58.72%, SD = 11.92) compared to the 3-month follow-up volume, and 7 (8.3%) leiomyomas increased in volume. A statistically significant difference (p = 0.026 at 3 months, p = 0.0046 at 1 year) in the percentage of volume change was observed based on leiomyoma location; submucous leiomyomas showed the largest volume reduction. The initial leiomyoma volume showed a weak negative correlation (Spearman's correlation coefficient = -0.35 at 3 months and -0.36 at 1 year) with the leiomyoma volume change. For the second part, the tube angle prediction resulted in a significant reduction of the radiation dose (p < 0.001), fluoroscopy time (p = 0.002) and contrast medium volume (p < 0.001) for the sample patients when compared with the control patients. The overall radiation dose was reduced from a mean of 11044 μGym2 to a mean of 4172.5 μGym2, fluoroscopy time was reduced from a mean of 15.45 minutes to 8.81 minutes, and contrast medium volume was reduced from a mean of 135 ml to 75 ml. Conclusion: UAE results in significant leiomyoma volume reduction at 3-month and 1-year follow-up. The leiomyoma location plays an important role in volume changes, while the initial leiomyoma volume plays a minor role. Pre-procedural prediction of the best tube angle obliquity for visualization of the origin of the uterine artery using 3D-reconstructed CE-MRA results in a significant reduction of the radiation dose, fluoroscopy time and contrast medium volume used during UAE.
Clinical application of transcranial Doppler for detection of cerebral emboli during cardiac surgery
(2010)
Objective: Neurologic injury is one of the most damaging complications of cardiac surgery. How to decrease neurologic impairment by improving perioperative monitoring remains a challenge for both cardiac surgeons and anesthetists. For this reason, transcranial Doppler (TCD) has been widely used for cerebral monitoring during cardiac surgery. In this study, two experiments on the clinical application of TCD for the detection of cerebral emboli during cardiac surgery were performed. One was “Solid and gaseous cerebral emboli during valvular surgery are significantly reduced with axillary artery cannulation”; the other was “Do intraoperative cerebral embolic signals differ between valvular surgery (VS) and CABG?”. Methods: In experiment one, 20 valve and combined procedures with aortic cannulation (AoC group) were compared to 18 procedures with axillary cannulation (AxC group) in a prospective non-randomized study. In experiment two, 18 VS patients and 18 CABG patients were retrospectively matched by extracorporeal circulation (ECC) time. Intraoperative monitoring of both middle cerebral arteries was performed with TCD, discriminating between solid and gaseous embolic signals (ES). Results: In experiment one, the AxC group had fewer solid ES than the AoC group (38±22 vs 55±25, P<0.05), but no significant difference was found in gaseous (501±271 vs 538±333, P>0.05) and total (539±279 vs 593±350, P>0.05) ES. The AxC group had fewer solid ES during arterial cannulation (2.1±1.5 vs 6.6±3.6, P<0.05) and during aortic cross-clamp time (4.4±3.1 vs 10.2±5.1, P<0.05) than the AoC group. During ECC, gaseous ES were not significantly different between groups (398±210 vs 448±291, P>0.05). However, the AxC group showed fewer gaseous ES (85±68 vs 187±148, P<0.05) and fewer gaseous ES per minute (1.8±1.5 vs 4.5±3.2, P<0.05) during weaning off extracorporeal circulation than the AoC group. 
No significant difference in gaseous ES (313±163 vs 261±189, P>0.05) or gaseous ES per minute (3.1±2.2 vs 2.8±2.2, P>0.05) was found between groups from bypass start to aortic declamping. No neurologic complications occurred. In experiment two, no significant difference was found in solid (38±20 vs 40±26, P>0.05) or gaseous (457±263 vs 412±157, P>0.05) ES between the VS and CABG groups during the whole recording time. During ECC, solid ES (20±10 vs 24±19, P>0.05) and gaseous ES (368±230 vs 317±157, P>0.05) were comparable between groups. However, during weaning off ECC, the VS group had more gaseous ES/min (5.6±3.6 vs 3.1±1.2, P<0.05) than the CABG group. This difference in gaseous ES/min was not significant during the period from bypass start to aortic declamping (2.5±1.8 vs 3.0±1.8, P>0.05). Conclusion: Cerebral embolization does occur during cardiac surgery. Through these two experiments, we demonstrated the feasibility and importance of the clinical application of transcranial Doppler for the detection of cerebral emboli during cardiac surgery. Owing to the diversity in the clinical application of TCD, it is currently impossible to compare ES counts between different research centers; more unified standards should be drawn up to make wider clinical application possible. To date, no robust evidence shows a correlation between intraoperative ES and postoperative neurological impairment, and future research on this question should rely on a comprehensive conceptual framework.
The Benchmark Dose (BMD) approach, first suggested in 1984 by K. Crump [CRUMP (1984)], is a widely used instrument in the risk assessment of substances in the environment and in food. In this context, the BMD approach determines a reference point (RfP) on the statistically estimated dose-response curve for which the risk can be determined with adequate certainty and confidence. In the next step of risk characterization, a threshold is calculated based on this RfP and toxicological considerations. The BMD approach is based on fitting a dose-response model to the data; for this fit, a stochastic distribution of the response endpoint is assumed. Ultimately, the BMD reflects the dose at which a pre-specified increase in an adverse health effect (the benchmark response) can be expected. Until now, the BMD approach has been specified only for quantal and continuous endpoints. In the risk assessment of carcinogens, however, so-called time-to-event data are of particular interest, since they contain more information on tumor development than quantal incidence data. The goal of this diploma thesis was to extend the BMD approach to such time-to-event data.
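For quantal endpoints, the BMD computation described above can be sketched in a few lines: given an already fitted dose-response model, the BMD is the dose at which the extra risk over background reaches the benchmark response. The sketch below uses a log-logistic model with invented parameter values purely for illustration; it is not the time-to-event extension developed in the thesis.

```python
import math

def log_logistic(dose, g, a, b):
    """Quantal log-logistic dose-response model:
    P(d) = g + (1 - g) / (1 + exp(-a - b*ln(d))), with background rate g."""
    if dose <= 0:
        return g
    return g + (1.0 - g) / (1.0 + math.exp(-a - b * math.log(dose)))

def bmd_extra_risk(a, b, bmr=0.10):
    """Dose at which the extra risk (P(d) - P(0)) / (1 - P(0))
    equals the benchmark response bmr; for the log-logistic model
    this inverts in closed form and does not depend on g."""
    return math.exp((math.log(bmr / (1.0 - bmr)) - a) / b)

# Hypothetical fitted parameters (illustration only, not from the thesis).
g, a, b = 0.05, -3.0, 1.2
bmd = bmd_extra_risk(a, b, bmr=0.10)

# Check: the extra risk at the computed BMD equals the benchmark response.
p0 = log_logistic(0.0, g, a, b)
extra = (log_logistic(bmd, g, a, b) - p0) / (1.0 - p0)
print(bmd, extra)
```

In practice the model parameters are estimated by maximum likelihood from the observed incidence data, and a lower confidence bound on the BMD (the BMDL) is then used as the reference point.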
Paleoecology is the study of organismal interactions with the environment in the geological past. Organisms are influenced in their distribution and abundance by abiotic factors such as temperature and precipitation. A change in these factors, for example through major climatic shifts, would then affect the communities of organisms. Studying this hypothesized causal link between climatic and faunal change is especially interesting for the Plio-Pleistocene of East Africa, because our own ancestors also inhabited these regions. Both the Turkana basin in Kenya and the Lake Albert region in Uganda offer unique opportunities to investigate these paleoecological issues. Their late Miocene through Pleistocene deposits provide a very good record of climatic, vegetation and faunal change in East Africa (Pickford et al. 1993, Leakey et al. 1995, 1998, McDougall & Feibel 2003, Wynn 2004). This study focuses on the mammal family Bovidae, as they are good indicators of vegetation and environment (e.g. Vrba 1980, 1995, Shipman & Harris 1988, Bobe & Eck 2001, Bobe & Behrensmeyer 2004, Bobe et al. 2007). Bovidae are quite species-rich and inhabit a wide range of habitats from tropical rain forests to deserts, which explains their array of morphological adaptations (ecovariables) to these environments. Diet is the ecovariable that is most sensitive to climate and thus habitat change. Therefore, fossil Bovidae are especially suitable for reconstructing past environments. The objective of this thesis is to test the hypothesis that, from the late Miocene through the Holocene, Africa has experienced an overall increase in aridity and concomitant pulses of habitat change. The hypothesis predicts that increasing aridity causes a corresponding increase in the abundance of taxa adapted to open, arid environments. In particular, an increase in bovid grazers should be observed in combination with a decrease in bovid browsers. 
To test this hypothesis, I examine the fossil bovid communities from each stratigraphic member of Lake Turkana (Lothagam, Kanapoi, West Turkana and Koobi Fora) and Lake Albert (Nkondo-Kaiso region) and reconstruct, from both a taxonomic and a functional perspective, the paleoenvironments and -climates from approximately 8 to 0.6 Ma. This study is the first to use taxonomic and ecomorphological data together to reconstruct the paleoenvironments of the Turkana basin and the Nkondo-Kaiso region of Lake Albert. In a first analysis, mesowear, as introduced by Fortelius & Solounias (2000), is used to gather information about the diet of bovids. As a result of my preliminary investigations on upper vs. lower molars of recent species, the sample of fossil bovid specimens from the Turkana basin and Lake Albert was found to be unsuitable for a meaningful diet reconstruction. Therefore, the bovids are assigned to diet categories based on the literature. For each member of the time period from 8.0 to 0.6 Ma, I provide a detailed characterization of the bovid fauna in terms of α- and β-diversity at both the tribe and the diet level, based on presence-absence data and, for the Turkana basin, on abundance data. Statistical comparisons between the fossil bovid communities and those in modern protected areas with known vegetation and climatic conditions yielded modern analogues for each stratigraphic member. Following that, I provide paleoclimatic conditions such as the assumed mean annual temperature for each member. Based on the abundance of diet categories in the bovid communities, the paleoclimate of the Turkana basin was in general cooler and considerably more humid from the late Miocene to the Pleistocene than today. The mean annual temperature at Lothagam is estimated at 22.2 °C and the annual precipitation at 685 mm for 8.0-6.54 Ma and 4.9-3.4 Ma. The intervening time period is characterized by a slightly lower mean annual temperature and precipitation (20.3 °C, 583 mm). 
From 4.17 to 4.07 Ma, Kanapoi experienced a mean annual temperature of 21.3 °C and 592 mm of rainfall. In the eastern part of the basin, the climate was warmer and more humid from 3.4 to 1.3 Ma (3.4-2.68 Ma: 26.2 °C, 961 mm; 2.68-1.3 Ma: 27.1 °C, 935 mm) than in the preceding eras. In the western part, the climate became warmer and more humid about 500,000 years later and was more variable than in the eastern basin. From 2.94 to 2.52 Ma, the mean annual temperature was 26.2 °C and the annual precipitation 961 mm. Between 2.34 and 1.6 Ma, the climate cooled again and became as dry as it had been before 2.94 Ma. A second shift to higher temperature and precipitation occurred after 1.6 Ma (27.1 °C, 935 mm) and lasted until 1.34 Ma. The results of the bovid community analyses do not support the hypothesis of increasing aridity in Eastern Africa during the late Miocene to Pleistocene. Instead, the results show that the bovid communities differed considerably over time and on a relatively small spatial scale. Regional paleovegetation and paleoclimate exhibit fluctuations through the studied time period at western Turkana, as well as differences between the western and eastern parts of the Turkana basin. This is indicative of a patchy habitat distribution on both temporal and spatial levels. Increased climate variability predicts an increase in landscape complexity, as proposed by the 'variability selection hypothesis' (Potts 1998a, b). Therefore, this thesis research supports the hypothesis of increased landscape complexity on the spatial level. This study has important implications for future research. First, an analysis based on ecovariable characteristics such as diet may be preferred to a taxonomic analysis. Second, abundance data should be used for an ecovariable analysis, because the results then provide more precise information on the paleovegetation and -climate than the mere presence of these adaptations in the faunal community. 
Lastly, as this study is based on one mammal family, further studies on other mammal groups should be conducted to broaden the database of resources exploited by the entire faunal community. Most significantly, this study provides a basis for new interpretations of faunal community distributions. It also raises the question whether small-scale spatial community variability is also to be expected at other fossil sites. If so, this methodology has important implications for reconstructions of paleovegetation and paleoclimate.
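The nearest-modern-analogue comparison described in this abstract can be illustrated in a few lines of code. Everything below is invented for illustration: the park names, the diet categories, the abundance values, and the choice of Bray-Curtis dissimilarity as the comparison statistic are all assumptions, not the thesis's actual data or method.

```python
# Hypothetical sketch: matching a fossil bovid community to its closest
# modern analogue via Bray-Curtis dissimilarity on diet-category abundances.
# All community data below are invented for illustration.

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors (0 = identical)."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

# Abundances per diet category: (grazers, browsers, mixed feeders, frugivores)
fossil_member = [40, 25, 30, 5]            # e.g. one stratigraphic member
modern_parks = {
    "park_A (woodland)": [20, 45, 30, 5],
    "park_B (grassland)": [55, 10, 30, 5],
}

# The modern protected area with the lowest dissimilarity is the analogue.
best = min(modern_parks, key=lambda p: bray_curtis(fossil_member, modern_parks[p]))
print(best)  # → park_B (grassland)
```

The analogue's known vegetation and climate (e.g. mean annual temperature and precipitation) would then be transferred to the fossil member, as the abstract describes.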
Magnetic characteristics of metal organic low-dimensional quantum spin systems at low temperatures
(2010)
In this work, new classes of low-dimensional metal-organic materials were investigated that make it possible to observe interesting quantum critical phenomena (QCP), such as Bose-Einstein condensation (BEC) of magnetic excitations in coupled spin-dimer systems, the Berezinskii-Kosterlitz-Thouless (BKT) transition, and the divergence of the magnetocaloric effect (MCE) in quantum spin systems upon application of a magnetic field. The low dimensionality of the investigated systems was of great importance both for the theoretical description and for the experimental observation of these phenomena. From a theoretical point of view, the study of these systems opens up the possibility of developing simple models that are exactly solvable and thus allows a qualitative understanding of the magnetic phenomena. From the experimental side, it is of greatest interest that the interplay of low dimensionality, competing interactions and strong quantum fluctuations gives rise to exotic and exciting magnetic phenomena (quantum critical phenomena) that can be investigated with various experimental methods. To understand the intrinsic properties of the quantum critical phenomena, it is important to study them in simple and well-controllable low-dimensional model systems, such as one- or two-dimensional systems. ...
The TTL is the transition layer between the tropical troposphere and stratosphere, and is the main region where tropospheric air enters the stratosphere. In this thesis, different transport processes are studied using in situ measurements of tracers. Long-lived tracers were measured with the High Altitude Gas Analyzer (HAGAR) on board the M55 Geophysica aircraft. The instrument was developed by the University of Frankfurt and measures the long-lived tracers CO2, N2O, CFC-12, CFC-11, H-1211, SF6, CH4 and H2 with two gas chromatographic channels and a CO2 sensor (LICOR). The measurements are supported by CO and O3 measurements from other instruments. Two campaigns were conducted to obtain measurements in the TTL: SCOUT-O3 (November/December 2005 in Darwin, Australia) and AMMA-SCOUT-O3 (August 2006 in Ouagadougou, Burkina Faso). After a general introduction of the thesis in chapters one and two, the third chapter describes the findings of the latter campaign. Five local flights are analyzed to study the different transport processes that occur in the tropical tropopause layer above West Africa: deep convection up to the level of main convective outflow, vertical mixing after overshooting of air in deep convection, horizontal inmixing from the extratropical lower stratosphere, and horizontal transport across the subtropical barrier. The main findings are that the TTL over West Africa is mostly influenced by remote convection. The subtropical barrier is not a strong barrier but rather a region of transition between the extratropical and the tropical stratosphere. Chapter 4 presents the results obtained during the SCOUT-O3 campaign. From the eight local flights, the last four (051129, 051130a, 051130b, 051205) show enhanced values of ozone, CO and CO2 between 355 and 380 K potential temperature in comparison with the first four flights (051116, 051119, 051123, 051125). 
Horizontal inmixing from the extra-tropical stratosphere and the influence of the local convective system Hector cannot explain the enhanced values of the two flights on 30 November. Therefore, other possible explanations for these enhanced CO, CO2 and ozone levels are proposed. The first explanation is vertical mixing in the vicinity of the jet stream. However, the jet cannot explain the differences between the flights on 30 November and the flights on 29 November and 5 December. Another possible explanation is the influence of polluted boundary layer air masses from the Indonesian region. In particular, air sampled during the flights on 30 November had crossed large parts of northern Indonesia between 8 and 10 days before the measurements. Convective uplift of biomass burning and other pollution plumes can transport CO and ozone precursors into the upper troposphere, where they can significantly enhance the ozone production. The last chapter deals with the vertical ascent rate in the TTL and uses measurements from both the SCOUT-O3 and AMMA-SCOUT-O3 campaigns as well as data from previous aircraft campaigns (TROCCINOX and APE-THESEO). Time scales and residence times for mean vertical transport in the background TTL are estimated for different seasons and over different geographic regions using in situ observations of CO2 and long-lived tracers. The vertical transport time scales are constrained using the seasonal variation of CO2 in the tropical troposphere as a “tracer clock” for vertical ascent. Two methods are applied to calculate the residence time in the layer between 360 and 390 K potential temperature. The first method uses the slope of the CO2 index; the second method fits the CO2 index directly to the measurements assuming a constant ascent rate. 
The first method yields residence times for Australia, West Africa, and Brazil of the same order, 35-45 days to 380 K and 50 days to 390 K (where no value can be derived for Australia, as the slope is changing approximately one month before the campaign). For APE-THESEO, the method does not yield reasonable results. The best estimates using the second method show moderate residence times between 360 and 390 K of 60±25 days for SCOUT-O3 (NH autumn) and 43±8 days for AMMA/SCOUT-O3 (NH summer). These results agree well with those calculated using the first method. For APE-THESEO and TROCCINOX, the best fits yield shorter residence times of 23±7 and 40±10 days, respectively, both during winter. These results correspond well to the expectations based on the seasonal variation of the Brewer-Dobson circulation.
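The second ("tracer clock") method can be sketched as follows, under two simplifying assumptions that are mine, not the thesis's: the detrended tropospheric CO2 seasonal cycle is taken to be purely sinusoidal, and the ascent rate constant, so the cycle arrives at a TTL level delayed by the mean residence time. All numbers are invented for illustration, not campaign data.

```python
# Illustrative "tracer clock" fit: recover the vertical ascent delay that
# best matches a delayed copy of the surface seasonal cycle to observations.
import math

def surface_co2(day):
    """Idealized detrended seasonal CO2 anomaly (ppm) at the bottom of the TTL."""
    return 2.0 * math.sin(2 * math.pi * day / 365.0)

# Synthetic "measurements" aloft: the surface signal delayed by 45 days.
true_lag_days = 45
sample_days = range(0, 365, 10)
observed = [surface_co2(d - true_lag_days) for d in sample_days]

def misfit(lag):
    """Sum of squared differences between delayed surface cycle and observations."""
    return sum((surface_co2(d - lag) - o) ** 2 for d, o in zip(sample_days, observed))

best_lag = min(range(0, 120), key=misfit)
print(best_lag)  # → 45, recovering the assumed residence time
```

In the real analysis the observed profile comes from aircraft CO2 measurements between 360 and 390 K, and the fitted delay per potential-temperature interval yields the residence times quoted above.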
This dissertation is devoted to the study of thermodynamics for quantum gauge theories. The poor convergence of quantum field theory at finite temperature has been the main obstacle in the practical applications of thermal QCD for decades. In this dissertation I apply hard-thermal-loop perturbation theory (HTLpt), a gauge-invariant reorganization of the conventional perturbative expansion for quantum gauge theories, to the thermodynamics of QED and Yang-Mills theory to three-loop order. For the Abelian case, I present a calculation of the free energy of a hot gas of electrons and photons by expanding in a power series in mD/T, mf/T and e^2, where mD and mf are the photon and electron thermal masses, respectively, and e is the coupling constant. I demonstrate that the hard-thermal-loop reorganization improves the convergence of the successive approximations to the QED free energy at large coupling, e ~ 2. For the non-Abelian case, I present a calculation of the free energy of a hot gas of gluons by expanding in a power series in mD/T and g^2, where mD is the gluon thermal mass and g is the coupling constant. I show that at three-loop order hard-thermal-loop perturbation theory is compatible with lattice results for the pressure, energy density, and entropy down to temperatures T ~ 2 - 3 Tc. The results suggest that HTLpt provides a systematic framework that can be used to calculate static and dynamic quantities for temperatures relevant at the LHC.
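As a purely schematic illustration (the coefficients below are placeholders, not results from this work), the weak-coupling series that HTLpt reorganizes mixes even powers of the coupling with odd powers of the thermal-mass ratio:

```latex
% Schematic form only; the a_i are unspecified placeholder coefficients.
\frac{\mathcal{F}}{\mathcal{F}_{\mathrm{ideal}}}
  \simeq 1 + a_2\, g^2 + a_3 \left(\frac{m_D}{T}\right)^{3} + \cdots ,
\qquad \frac{m_D}{T} \sim g .
```

The odd (m_D/T)^3 term originates from Debye screening of static modes; this non-analytic structure is what spoils the convergence of the naive expansion at large coupling, and motivates resumming the thermal masses to all orders, as HTLpt does.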
Background: To evaluate the effectiveness of fractionated radiotherapy in adolescent and adult patients with pineal parenchymal tumors (PPT). Methods: Between 1982 and 2003, 14 patients with PPTs were treated with fractionated radiotherapy. Four patients had a pineocytoma (PC), one a PPT with intermediate differentiation (PPTID) and nine a pineoblastoma (PB), two of which were recurrences. All patients underwent radiotherapy to the primary tumor site with a median total dose of 54 Gy. In the 9 patients with primary PB, treatment included whole-brain irradiation (3 patients) or irradiation of the craniospinal axis (6 patients) with a median total dose of 35 Gy. Results: Median follow-up was 123 months in the PC patients and 109 months in the patients with primary PB. Seven patients were free from relapse at the end of follow-up. One PC patient died from spinal seeding. Among the 5 PB patients treated with radiotherapy without chemotherapy, 3 developed local or spinal tumor recurrence. Both patients treated for PB recurrences died. The patient with PPTID is free of disease 7 years after radiotherapy. Conclusion: Local radiotherapy appears to be effective in patients with PC and some PPTIDs. Diagnosis and treatment of patients with more aggressive variants of PPTIDs, as well as treatment of PB, need to be further improved, since local and spinal failure is common even despite craniospinal irradiation (CSI). As PPTs are very rare tumors, treatment within multi-institutional trials remains necessary.
Background: It has been demonstrated that cognitive behavioural therapy (CBT) has a moderate effect on symptom reduction and on the general well-being of patients suffering from psychosis. However, questions regarding the specific efficacy of CBT, treatment safety, cost-effectiveness, and the moderators and mediators of treatment effects remain major issues. The major objective of this trial is to investigate whether CBT is specifically efficacious in reducing positive symptoms when compared with non-specific supportive therapy (ST), which does not implement CBT techniques but provides comparable therapeutic attention. Methods: The POSITIVE study is a multicenter, prospective, single-blind, parallel-group, randomised clinical trial comparing CBT and ST with respect to their efficacy in reducing positive symptoms in psychotic disorders. Both CBT and ST consist of 20 sessions altogether, with 165 participants receiving CBT and 165 receiving ST. Major methodological aspects of the study are systematic recruitment, explicit inclusion criteria, reliability checks of assessments with control for rater shift, analysis by intention to treat, data management using remote data entry, measures of quality assurance (e.g. on-site monitoring with source data verification, regular query process), advanced statistical analysis, manualized treatment, and checks of adherence and competence of therapists. Research relating the psychotherapy process to outcome, neurobiological research addressing basic questions of delusion formation using fMRI and neuropsychological assessment, and treatment research investigating adaptations of CBT for adolescents are combined in this network. Problems of transfer into routine clinical care will be identified and addressed by a project focusing on cost efficiency. 
Discussion: This clinical trial is part of efforts to intensify psychotherapy research in the field of psychosis in Germany, to contribute to the international discussion on psychotherapy in psychotic disorders, and to help implement psychotherapy in routine care. Furthermore, the study will allow conclusions to be drawn about the mediators of treatment effects of CBT for psychotic disorders. Trial registration: Current Controlled Trials ISRCTN29242879
Background: In October 2007, the working group CEN/TC 216 of the European Committee for Standardisation suggested that the Sabin oral poliovirus vaccine type 1 strain (LSc-2ab) presently used for virucidal tests should be replaced by another attenuated vaccine poliovirus type 1 strain, CHAT. Both strains were historically used as oral vaccines, but the Sabin type 1 strain was acknowledged to be more attenuated. In Germany, vaccination against poliomyelitis was introduced in 1962 using the oral polio vaccine (OPV) containing Sabin strain LSc-2ab. The vaccination schedule was changed from OPV to an inactivated polio vaccine (IPV) containing wild poliovirus type 1 strain Mahoney in 1998. In the present study, we assessed potential differences in neutralising antibody titres to Sabin and CHAT in persons with a history of either OPV, IPV, or OPV with IPV booster. Methods: Neutralising antibodies against the CHAT and Sabin 1 polioviruses were measured in sera of 41 adults vaccinated with OPV. Additionally, sera from 28 children less than 10 years of age and immunised with IPV only were analysed. The neutralisation assay against poliovirus was performed according to WHO guidelines. Results: The neutralisation activity against CHAT in adults with a complete OPV vaccination series was significantly lower than against Sabin poliovirus type 1 strains (Wilcoxon signed-rank test, P < 0.025). In eight sera, the antibody titres measured against CHAT were less than 8, although the titres against Sabin 1 varied between 8 and 64. Following an IPV booster, anti-CHAT antibodies increased rapidly in sera of CHAT-negative adults with an OPV history. Sera from children with an IPV history neutralised CHAT and Sabin 1 strains equally. 
Conclusion: The lack of neutralising antibodies against the CHAT strain in persons vaccinated with OPV might be associated with an increased risk of reinfection with the CHAT poliovirus type 1, and this implies a putative risk of transmission of the virus to polio-free communities. We strongly suggest that laboratory workers who were immunised with OPV receive a booster vaccination with IPV before handling CHAT in the laboratory.
Hepatitis C virus (HCV) naturally infects only humans and chimpanzees. The determinants responsible for this narrow species tropism are not well defined. Virus cell entry involves human scavenger receptor class B type I (SR-BI), CD81, claudin-1 and occludin. Among these, at least CD81 and occludin are utilized in a highly species-specific fashion, thus contributing to the narrow host range of HCV. We adapted HCV to mouse CD81 and identified three envelope glycoprotein mutations which together enhance infection of cells with mouse or other rodent receptors approximately 100-fold. These mutations enhanced interaction with human CD81 and increased exposure of the binding site for CD81 on the surface of virus particles. These changes were accompanied by augmented susceptibility of adapted HCV to neutralization by E2-specific antibodies indicative of major conformational changes of virus-resident E1/E2-complexes. Neutralization with CD81, SR-BI- and claudin-1-specific antibodies and knock down of occludin expression by siRNAs indicate that the adapted virus remains dependent on these host factors but apparently utilizes CD81, SR-BI and occludin with increased efficiency. Importantly, adapted E1/E2 complexes mediate HCV cell entry into mouse cells in the absence of human entry factors. These results further our knowledge of HCV receptor interactions and indicate that three glycoprotein mutations are sufficient to overcome the species-specific restriction of HCV cell entry into mouse cells. Moreover, these findings should contribute to the development of an immunocompetent small animal model fully permissive to HCV.
Snake bite is one of the most neglected public health issues in poor rural communities living in the tropics. Because of serious misreporting, the true worldwide burden of snake bite is not known. South Asia is the world's most heavily affected region, due to its high population density, widespread agricultural activities, numerous venomous snake species and lack of functional snake bite control programs. Despite increasing knowledge of snake venoms' composition and mode of action, good understanding of clinical features of envenoming and sufficient production of antivenom by Indian manufacturers, snake bite management remains unsatisfactory in this region. Field diagnostic tests for snake species identification do not exist and treatment mainly relies on the administration of antivenoms that do not cover all of the important venomous snakes of the region. Care-givers need better training and supervision, and national guidelines should be fed by evidence-based data generated by well-designed research studies. Poorly informed rural populations often apply inappropriate first-aid measures and vital time is lost before the victim is transported to a treatment centre, where cost of treatment can constitute an additional hurdle. The deficiency of snake bite management in South Asia is multi-causal and requires joint collaborative efforts from researchers, antivenom manufacturers, policy makers, public health authorities and international funders.
Piracetam, the prototype of the so-called ‘nootropic drugs’, has been used for many years in different countries to treat cognitive impairment in aging and dementia. Findings that piracetam enhances the fluidity of brain mitochondrial membranes led to the hypothesis that piracetam might improve mitochondrial function, e.g., enhance ATP synthesis. This assumption has recently been supported by a number of observations showing enhanced mitochondrial membrane potential, enhanced ATP production, and reduced sensitivity to apoptosis in a variety of cell and animal models of aging and Alzheimer's disease. As a specific consequence, substantial evidence for elevated neuronal plasticity as a specific effect of piracetam has emerged. Taken together, these new findings can explain many of the therapeutic effects of piracetam on cognition in aging and dementia, as well as in different situations of brain dysfunction. Keywords: mitochondrial dysfunction, Alzheimer's disease, aging, oxidative stress, piracetam
Leukotrienes constitute a group of bioactive lipids generated by the 5-lipoxygenase (5-LO) pathway. An increasing body of evidence supports a role for 5-LO products even during the earliest stages of pancreatic, prostate, and colorectal carcinogenesis. Several pieces of experimental data form the basis for this hypothesis and suggest a correlation between 5-LO expression and tumor cell viability. First, several independent studies documented an overexpression of 5-LO in primary tumor cells as well as in established cancer cell lines. Second, addition of 5-LO products to cultured tumor cells led to increased cell proliferation and activation of anti-apoptotic signaling pathways. Third, 5-LO antisense approaches demonstrated impaired tumor cell growth due to reduction of 5-LO expression. Lastly, pharmacological inhibition of 5-LO potently suppressed tumor cell growth by inducing cell cycle arrest and triggering cell death via the intrinsic apoptotic pathway. However, the documented strong cytotoxic off-target effects of 5-LO inhibitors, in combination with the relatively high concentrations of 5-LO products needed to achieve mitogenic effects in cell culture assays, raise concerns about causality and call the relationship between 5-LO products and tumorigenesis into question. Keywords: leukotriene, apoptosis, cell proliferation, mitogenic effects, cytotoxicity
Introduction: The Vbeta12-transgenic mouse was previously generated to investigate the role of antigen-specific T cells in collagen-induced arthritis (CIA), an animal model for rheumatoid arthritis. This mouse expresses a transgenic collagen type II (CII)-specific T-cell receptor (TCR) beta-chain and consequently displays an increased immunity to CII and increased susceptibility to CIA. However, while the transgenic Vbeta12 chain recombines with endogenous alpha-chains, the frequency and distribution of CII-specific T cells in the Vbeta12-transgenic mouse have not been determined. The aim of the present report was to establish a system enabling identification of CII-specific T cells in the Vbeta12-transgenic mouse, in order to determine to what extent the transgenic expression of the CII-specific beta-chain would skew the response towards the immunodominant galactosylated T-cell epitope, and to use this system to monitor these cells throughout the development of CIA. Methods: We have generated and thoroughly characterized a clonotypic antibody that recognizes a TCR specific for the galactosylated CII(260-270) peptide in the Vbeta12-transgenic mouse. With this antibody, CII-specific T cells could be quantified and followed throughout the development of CIA, and their phenotype was determined by combinatorial analysis with the early activation marker CD154 (CD40L) and production of cytokines. Results: The Vbeta12-transgenic mouse expresses several related but distinct T-cell clones specific for the galactosylated CII peptide. The clonotypic antibody could specifically recognize the majority (80%) of these. Clonotypic T cells occurred at low levels in the naïve mouse, but rapidly expanded to around 4% of the CD4+ T cells, whereupon the frequency declined with developing disease. Analysis of the cytokine profile revealed an early Th1-biased response in the draining lymph nodes that would shift to also include Th17 around the onset of arthritis. 
The data showed, however, that Th1 and Th17 cells constitute a minority of the CII-specific population, indicating that additional subpopulations of antigen-specific T cells regulate the development of CIA. Conclusions: The established system enables the detection and detailed phenotyping of T cells specific for the galactosylated CII peptide and constitutes a powerful tool for analysis of the importance of these cells and their effector functions throughout the different phases of arthritis.
House of Finance
(2010)
At present, there is a huge lag between artificial and biological information processing systems in terms of their capability to learn. This lag could certainly be reduced by gaining more insight into the higher functions of the brain, such as learning and memory. For instance, the primate visual cortex is thought to provide the long-term memory for visual objects acquired by experience. The visual cortex effortlessly handles arbitrarily complex objects by rapidly decomposing them into constituent components of much lower complexity along hierarchically organized visual pathways. How this processing architecture self-organizes into a memory domain that employs such compositional object representation by learning from experience remains to a large extent a riddle. The study presented here approaches this question by proposing a functional model of a self-organizing hierarchical memory network. The model is based on hypothetical neuronal mechanisms involved in cortical processing and adaptation. The network architecture comprises two consecutive layers of distributed, recurrently interconnected modules. Each module is identified with a localized cortical cluster of fine-scale excitatory subnetworks. A single module performs competitive unsupervised learning on the incoming afferent signals to form a suitable representation of the locally accessible input space. The network employs an operating scheme in which ongoing processing is made up of discrete successive fragments termed decision cycles, presumably identifiable with the fast gamma rhythms observed in the cortex. The cycles are synchronized across the distributed modules, which produce highly sparse activity within each cycle by instantiating a local winner-take-all-like operation. Equipped with adaptive mechanisms of bidirectional synaptic plasticity and homeostatic activity regulation, the network is exposed to natural face images of different persons. 
The images are presented incrementally, one per cycle, to the lower network layer as a set of Gabor filter responses extracted from local facial landmarks. The images are presented without any person identity labels. In the course of unsupervised learning, the network simultaneously creates vocabularies of reusable local face appearance elements, captures relations between the elements by associatively linking those parts that encode the same face identity, develops higher-order identity symbols for the memorized compositions, and projects this information back onto the vocabularies in a generative manner. This learning corresponds to the simultaneous formation of bottom-up, lateral and top-down synaptic connectivity within and between the network layers. In the mature connectivity state, the network thus holds a full compositional description of the experienced faces in the form of sparse memory traces residing in the feed-forward and recurrent connectivity. Due to the generative nature of the established representation, the network is able to recreate the full compositional description of a memorized face in terms of all its constituent parts given only its higher-order identity symbol or a subset of its parts. In the test phase, the network successfully proves its ability to recognize the identity and gender of persons from alternative face views not shown before. An intriguing feature of the emerging memory network is its ability to self-generate activity spontaneously in the absence of external stimuli. In this sleep-like off-line mode, the network shows a self-sustaining replay of the memory content formed during previous learning. Remarkably, recognition performance is tremendously boosted after this off-line memory reprocessing. The performance boost is more pronounced for those face views that deviate more from the original view shown during learning. 
This indicates that the off-line memory reprocessing during the sleep-like state specifically improves the generalization capability of the memory network. The positive effect turns out to be surprisingly independent of synapse-specific plasticity, relying completely on synapse-unspecific, homeostatic activity regulation across the memory network. The developed network thus demonstrates functionality not shown by any previous neuronal modeling approach. It forms and maintains a memory domain for compositional, generative object representation in an unsupervised manner through experience with natural visual images, using both on-line ("wake") and off-line ("sleep") learning regimes. This functionality offers a promising departure point for further studies aiming for deeper insight into the learning mechanisms employed by the brain and their subsequent implementation in artificial adaptive systems for solving complex tasks that have not been tractable so far.
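The competitive, winner-take-all-like learning inside a single module can be caricatured in a few lines; this is an illustrative reduction only (the thesis model additionally involves decision cycles, bidirectional plasticity and homeostatic regulation), and all parameter values below are invented.

```python
# Minimal winner-take-all competitive learning step for one module:
# only the best-matching unit moves its weight vector toward the input.
import math
import random

def normalize(v):
    """Scale a vector to unit length (identity for the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def wta_update(weights, x, lr=0.1):
    """Find the unit with the highest input match and adapt it; return its index."""
    winner = max(range(len(weights)),
                 key=lambda k: sum(w * xi for w, xi in zip(weights[k], x)))
    weights[winner] = normalize([w + lr * (xi - w)
                                 for w, xi in zip(weights[winner], x)])
    return winner

random.seed(1)
# Three units with random unit-length weight vectors over a 4-dimensional input.
weights = [normalize([random.random() for _ in range(4)]) for _ in range(3)]
for _ in range(100):
    wta_update(weights, [1.0, 0.0, 0.0, 0.0])
# The winning unit's weights have converged toward the repeated input pattern.
```

Because only the winner adapts, repeated presentations of a pattern recruit one unit as its dedicated representative, yielding the sparse, competitive coding the abstract describes.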
Direct photon emission from heavy-ion collisions has been calculated and compared to available experimental data. Three different models have been combined to extract direct photons from different environments in a heavy-ion collision: Thermal photons from partonic and hadronic matter have been extracted from relativistic, non-viscous 3+1-dimensional hydrodynamic calculations. Thermal and non-thermal photons from hadronic interactions have been calculated from relativistic transport theory. The impact of different physics assumptions about the thermalized matter has been studied. In pure transport calculations, a viscous hadron gas is present. This is juxtaposed, in the hybrid model calculations with their various Equations of State, with ideal gases of hadrons with vacuum properties, of hadrons which undergo a chiral and deconfinement phase transition, and with a system that has a strong first-order phase transition to a deconfined ideal gas of quarks and gluons. The models used for the determination of photons from both hydrodynamic and transport calculations have been elucidated and their numerical properties tested. The origin of direct photons, itemised by emission stage, emission time, channel and baryon number density, has been investigated for various systems, as have the transverse momentum spectra and elliptic flow patterns of direct photons. The photon emission rates from a thermalized transport box are found to be very similar to the hadronic photon emission rates used in hydrodynamic calculations, as are the spectra from calculations of heavy-ion collisions with the transport model and with the hybrid model using a hadronic Equation of State. Taking into account the full (vacuum) spectral function of the rho-meson decreases the direct photon emission by approximately 10% at low photon transverse momentum. 
The numerical investigations show that the parameter with the largest impact on the direct photon spectra is the time at which the hydrodynamic description is started. Its variation shows deviations of one to two orders of magnitude. In the regime that can be considered physical, however, the variation is less than a factor of 3. Other parameters change the direct photon yield by up to approximately 20%. In all systems that have been considered (heavy-ion collisions at E_lab = 35 AGeV and 158 AGeV, and at √(s_NN) = 62.4 GeV, 130 GeV and 200 GeV), thermal emission from a system with partonic degrees of freedom is greatly enhanced over that from hadronic systems, while the difference between the direct photon yields from a viscous and a non-viscous hadronic system (transport vs. hydrodynamics) is found to be very small. Predictions for direct photon emission in central U+U collisions at 35 AGeV have been made. Since non-soft photon sources are very much suppressed at this energy, experimental results should very easily be able to distinguish between a medium that is entirely hadronic and a system that undergoes a phase transition from partonic to hadronic matter. In the case of lead-lead collisions at 158 AGeV, the situation is not so clear. In central collisions, the complete direct photon spectra including prompt photons seem to favour hadronic emission sources, while the partonic calculations only slightly overpredict the data. In peripheral collisions at the same energy, the hadronic contribution is more than one order of magnitude smaller than the prompt photon contribution, which fits the available experimental data. A similar picture presents itself at higher energies. At RHIC energies, however, the difference between transport calculations and hadronic hybrid model calculations is largest. Hybrid model calculations with partonic degrees of freedom can describe the experimental results in gold-gold collisions at 200 GeV. 
The elliptic flow component of direct photon emission is found to be consistently positive at small transverse momenta. This means that the initial photon emission from a non-flowing medium does not completely overshine the emission patterns from later stages. High-pt photons dominantly come from the beginning of a heavy-ion collision and therefore do not carry the directed information of an evolving medium.
The role of gamma oscillatory activity in magnetoencephalogram for auditory memory processing
(2010)
Recent studies have suggested an important role of cortical gamma oscillatory activity (30-100 Hz) as a correlate of encoding, maintaining and retrieving auditory, visual or tactile information in and from memory. It was shown that these cortical stimulus representations were modulated by attention processes. Gamma-band activity (GBA) occurred as an induced response peaking at approximately 200-300 ms after stimulus presentation. Induced cortical responses appear as non-phase-locked activity and are assumed to reflect active cortical processing rather than passive perception. Induced GBA peaking 200-300 ms after stimulus presentation has been assumed to reflect differences between experimental conditions containing various stimuli. By contrast, the relationship between specific oscillatory signals and the representation of individual stimuli has remained unclear. The present study aimed at the identification of such stimulus-specific gamma-band components. We used magnetoencephalography (MEG) to assess gamma activity during an auditory spatial delayed matching-to-sample task. 28 healthy adults were assigned to one of two groups, R and L, who were presented with only right- or left-lateralized sounds, respectively. Two sample stimuli S1 with lateralization angles of either 15° or 45° deviation from the midsagittal plane were used in each group. Participants had to memorize the lateralization angle of S1 and compare it to a second lateralized sound S2 presented after an 800-ms delay phase. S2 either had the same lateralization angle as S1 or a different one. After the presentation of S2, subjects had to indicate whether S1 and S2 matched or not. Statistical probability mapping was applied to the signals at sensor level to identify spectral amplitude differences between 15° and 45° stimuli. 
We found distinct gamma-band components reflecting each sample stimulus with center frequencies ranging between 59 and 72 Hz in different sensors over parieto-occipital cortex contralateral to the side of stimulation. These oscillations showed maximal spectral amplitudes during the middle 200-300 ms of the delay phase and decreased again towards its end. Additionally, we investigated correlations between the activation strength of the gamma-band components and memory task performance. The magnitude of differentiation between oscillatory components representing 'preferred' and 'nonpreferred' stimuli during the final 100 ms of the delay phase correlated positively with task performance. These findings suggest that the observed gamma-band components reflect the activity of neuronal networks tuned to specific auditory spatial stimulus features. The activation of these networks seems to contribute to the maintenance of task-relevant information in short-term memory.
Type 1 diabetes (T1D) is a chronic T cell-mediated autoimmune disorder that results in the destruction of insulin-producing pancreatic β cells, leading to life-long dependence on exogenous insulin. Attraction, activation and transmigration of inflammatory cells to the site of β-cell injury depend on two major molecular interactions. First, interactions between chemokines and their receptors expressed on leukocytes result in the recruitment of circulating inflammatory cells to the site of injury. In this context, it has been demonstrated in various studies that the interaction of the chemokine CXCL10 with its receptor CXCR3 expressed on circulating cells plays a key role in the development of T1D. Second, once the cells have arrived at the site of inflammation, adhesion molecules promote the extravasation of arrested cells through the endothelial cell layer to penetrate the site of injury. Here, the junctional adhesion molecule (JAM) JAM-C expressed on endothelial cells is involved in the process of leukocyte diapedesis. It was recently demonstrated that blocking of JAM-C efficiently attenuated cerulein-induced pancreatitis in mice. In my thesis I studied the influence of the CXCL10/CXCR3 interaction on the one hand, and of the adhesion molecule JAM-C on the other hand, on trafficking and transmigration of antigen-specific, autoaggressive T cells in the RIP-LCMV mouse model. RIP-LCMV mice express the glycoprotein (GP) or the nucleoprotein (NP) of the lymphocytic choriomeningitis virus (LCMV) as a target autoantigen specifically in the β cells of the islets of Langerhans and turn diabetic after LCMV infection. In my first project I found that pharmacologic blockade of CXCR3 during development of virus-induced T1D results in a significant delay but not in an abrogation of overt disease. However, neither the frequency nor the migratory properties of islet-specific T cells were significantly changed during CXCR3 blockade. 
In the second project I was able to demonstrate that JAM-C was upregulated around the islets in RIP-LCMV mice after LCMV infection and that its expression correlated with islet infiltration and functional β-cell impairment. Blockade with a neutralizing anti-JAM-C antibody slightly reduced T1D incidence, whereas overexpression of JAM-C on endothelial cells did not accelerate virus-induced diabetes. In summary, our data suggest that both CXCR3 and JAM-C are involved in trafficking and transmigration of antigen-specific autoaggressive T cells to the islets of Langerhans. However, the detection of only a moderate influence on the onset of clinical disease during CXCR3 or JAM-C blockade reflects the complex pathogenesis of T1D and indicates that several different inflammatory factors need to be neutralized in order to achieve stable and persistent protection from disease.
In this retrospective study, case records of clinical forensic examinations and respective investigation records of the police and the public prosecutor’s (state attorney) office along with the resulting verdicts were examined in terms of type and site of injury found and extent of agreement or discrepancy between the story given by the accused party and the medical conclusions drawn from the injury pattern. Particular attention was focussed on the relevance of the expert opinion for the legal assessment through case-specific analysis of the respective verdicts. A total of 118 cases originating from the scope of the Institute of Forensic Medicine, Goethe-University Frankfurt/Main (2002 – 2005) were examined. These included bodily injury, child abuse, sexual compulsion, self-mutilation and injury patterns of individuals under suspicion of attempted or completed manslaughter/homicide. As compared to former studies, the results of this analysis were additionally correlated with the investigation records of the public prosecutor’s office (state attorney) to elucidate the importance of the forensic findings for police investigation and legal evaluation. The forensic examination involved 19 accused and 99 victims. As for the gender distribution of the victims, 51 females and 48 males were encountered. A slight female preponderance was seen in cases of sexual compulsion. The group of accused individuals consisted of 16 males and 3 females. Injuries due to blunt force impact, in particular hematomas involving skull and trunk, dominated as diagnostic findings in cases of bodily injury, sexual offenses and child abuse. In cases with suspected self-mutilation and in examinations of accused perpetrators of manslaughter/homicide, scratches and lacerations prevailed. 
Correlating injury patterns and police inquiries, conclusions drawn from medical findings and results of police investigations were in good agreement in 46 % of the cases, but showed major discrepancies in another 25 %. In the remaining 29 % of the cases, the injury pattern did not allow for a definite expert opinion on the mode of infliction. Nevertheless, a detailed documentation of the medical findings proved to be of substantial value for police investigations. 39 % of the cases resulted in a final verdict, whilst in 59 % of the cases the charge was dismissed. Especially in the latter, the forensic expert opinion was of considerable importance, since the forensically assessed injuries either could not be attributed to a certain perpetrator or contributed to the exoneration of the accused. In 2 % the judicial assessment was not available. In 82 % of the cases of child abuse the proceedings were stopped, e.g. since maltreatment could not be assigned to a particular perpetrator. In these cases it became obvious that forensic examination and assessment alone do not suffice but have to be embedded in police investigations to achieve optimal results. Medical conclusions by forensic experts were, almost without any exception, considered in the legal assessment and taken into account in a differentiated manner when weighing the sentence, thus reflecting the objectivity and neutrality of the medical assessment. In synopsis, albeit the evidential value of forensic examination is assessed to be high, optimal clarification of a case requires integration into the complete spectrum of investigations performed.
A basic introduction to RFQs has been given in the first part of this thesis. The principle and the main ideas of the RFQ have been described, and a short summary of different resonator concepts has been given. Two different strategies for designing RFQs have been introduced. The analytic description of the electric fields inside the quadrupole channel has been derived, and the limitations of these approaches were shown. The main work of this thesis was the implementation and analysis of a multigrid Poisson solver to describe the potential and electric field of RFQs, which are needed to simulate the particle dynamics accurately. The two main ingredients of a multigrid Poisson solver are the ability of a Gauß-Seidel iteration method to smooth the error of an approximation within a few iteration steps and the coarse-grid principle. The smoothing corresponds to a damping of the high-frequency components of the error. After the smoothing, the error term can be well approximated on a coarser grid, on which the low-frequency components of the error on the fine grid become high-frequency errors that can be damped further with the same Gauß-Seidel method. After implementation, the multigrid Poisson solver was analyzed using two different types of test problems: with and without a charge density. After illustrating the results of the multigrid Poisson solver, a comparison to the field of the old multipole expansion method was made. The multipole expansion method is an accurate representation of the field within the minimum aperture, as limited by cylindrical symmetry. Within these limitations the multigrid Poisson solver and the multipole expansion method agree well. Beyond this limitation the two methods give different fields. It was shown that particles leave the region in which the multipole expansion method gives correct fields and that both the transmission and the single-particle dynamics are affected by this. 
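The smoothing-plus-coarse-grid idea described above can be illustrated in a few lines. The following is a minimal sketch for a 1D model problem (-u'' = f with homogeneous Dirichlet boundaries), not the 3D RFQ solver developed in the thesis; all function names and parameter choices are illustrative.

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps):
    # A few sweeps damp the high-frequency components of the error of u for -u'' = f.
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

def residual(u, f, h):
    # r = f - A u for the standard three-point Laplacian (zero at the boundaries).
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction onto a grid with half as many intervals.
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def v_cycle(u, f, h, pre=3, post=3):
    # Recursive V-cycle: smooth, move the residual to the coarse grid,
    # correct with the coarse-grid error estimate, smooth again.
    if len(u) <= 3:
        gauss_seidel(u, f, h, 10)  # coarsest grid: solved (almost) exactly
        return u
    gauss_seidel(u, f, h, pre)
    r_coarse = restrict(residual(u, f, h))
    e_coarse = np.zeros_like(r_coarse)
    # Low-frequency fine-grid errors appear as high-frequency errors here:
    v_cycle(e_coarse, r_coarse, 2.0 * h, pre, post)
    # Prolongate the coarse-grid correction by linear interpolation and add it.
    u += np.interp(np.linspace(0.0, 1.0, len(u)),
                   np.linspace(0.0, 1.0, len(e_coarse)), e_coarse)
    gauss_seidel(u, f, h, post)
    return u
```

On a 65-point grid with f(x) = π² sin(πx), a handful of V-cycles reduces the algebraic error below the discretization error, whereas plain Gauß-Seidel would need thousands of sweeps to remove the low-frequency error components.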
The multigrid Poisson solver also gives a more realistic description of the field at the beginning of the RFQ, because it takes the tank wall into account, and this effect is shown as well. Closing the analysis of the external field, the transmission and the fraction of accelerated particles of the set of 12 RFQs were shown for the two different methods. For RFQs with small apertures and large modulations the two methods give different values for the transmission due to the limitation of the multipole expansion method. The internal space charge fields without image charges were analyzed at the level of single-particle dynamics and compared to the well-known SCHEFF routine from LANL, showing major differences for the analyzed particle. For comparing influences on the transmission of the set of 12 RFQs, a third space charge routine (PICNIC) was considered as well. The basic shape of the transmission curve was the same independent of the space charge routine, but the absolute values differ a little from routine to routine, with SCHEFF about 2% lower than the other routines. The multigrid Poisson solver and PICNIC agree quite well (within less than 1%), but PICNIC has an extremely long running time. The major advantage of the multigrid Poisson solver in calculating space charge effects compared to the other two routines used here is that the Poisson solver can take the effect of image charges on the electrodes into account simply by changing the boundaries to have the shape of the vanes, while all other settings remain unchanged. It was demonstrated that the effect of image charges on the vanes on the space charge field is very large in the region close to the electrodes. Particles in that region see a stronger transversely defocusing force than without image charges. The result is that the transmission decreases by as much as 10%, which is considerably more than determined by other (inexact) routines before. 
This is an important result: since the large effect of image charges on the electrodes is now known, it can be taken into account while designing the RFQ to increase the performance of the machine. It is also an important factor in resolving the traditional difference observed between the transmission of actual RFQs and the transmission predicted by earlier simulations. In the last chapter of this thesis some experimental work on the MAFF (Munich Accelerator for Fission Fragments) IH-RFQ is described. The machine was assembled in Frankfurt and a beam test stand was built. The shunt impedance of the structure was measured using different techniques, the output energy of the structure was measured, and finally its transmission was determined and compared to the beam dynamics simulations of the RFQ. Unfortunately, the transmission measurements were done without exact knowledge of the beam’s emittance, so the comparison to the simulation is somewhat rough; with a reasonable guess of the emittance, however, good agreement between measurement and simulation was obtained.
Twenty-nine species of caddisflies in the genus Agapetus Curtis in eastern and central North America are reviewed. Twelve are described as new species: Agapetus aphallus (known only from females); Agapetus baueri, Agapetus flinti, Agapetus harrisi, Agapetus hesperus, Agapetus ibis, Agapetus kirchneri, Agapetus meridionalis, Agapetus pegram, Agapetus ruiteri, Agapetus stylifer, and Agapetus tricornutus. Agapetus rossi Denning 1941 is recognized as a junior subjective synonym of Agapetus walkeri (Betten and Mosely 1940), new synonym. A key to males is provided, and species’ distributions are mapped.
The Neotropical ambrosia beetle genus Camptocerus Dejean was revised. Monophyly of the genus was tested using 66 morphological characters in a cladistic analysis. Camptocerus was recovered as monophyletic and 31 species were recognized. Six new synonyms were discovered: C. auricomus Blandford 1896 (= C. striatulus Hagedorn 1905), C. inoblitus (Schedl) 1939 (= C. morio (Schedl) 1952), C. niger (Fabricius) 1801 (= C. tectus Eggers 1943), C. opacicollis (Eggers) 1929 (= C. infidelis Wood 1969; = C. uniseriatus Schedl 1972), C. suturalis (Fabricius) 1801 (= C. cinctus Chapuis 1869). Two species were removed from synonymy: C. charpentierae Schedl and C. hirtipennis Schedl. Twelve new species of Camptocerus were described: C. coccoformus (Brazil, Ecuador), C. distinctus (Ecuador), C. doleae (Ecuador), C. igniculus (Brazil), C. mallopterus (Ecuador), C. noel (widely distributed across Amazonia), C. petrovi (Ecuador), C. pilifrons (Ecuador), C. pseudoangustior (widely distributed across Amazonia), C. satyrus (Brazil), C. unicornus (Brazil) and C. zucca (Ecuador). Lectotypes are here designated for the following species: Camptocerus auricomus Blandford, Camptocerus squammiger Chapuis, Hylesinus gibbus Fabricius, Hylesinus suturalis Fabricius, Hylesinus fasciatus Fabricius. A key, diagnosis, distribution, host records and images were provided for each species.
We provide new records of biting and predaceous midges (Diptera: Ceratopogonidae) from Florida, including the first documented United States records of Atrichopogon (Atrichopogon) caribbeanus Ewen, Dasyhelea griseola Wirth, D. scissurae Macfie, and Brachypogon (Brachypogon) woodruffi Spinelli and Grogan. Atrichopogon (Meloehelea) downesi Wirth, Forcipomyia (Thyridomyia) monilicornis (Coquillett), F. (T.) nodosa Saunders, Ceratoculicoides blantoni Wirth and Ratanaworabhan, Mallochohelea albibasis (Malloch), Bezzia (Bezzia) imbifida Dow and Turner and B. (B.) mallochi Wirth are recorded for the first time from Florida. Forcipomyia (Thyridomyia) johannseni Thomsen, Bezzia (Bezzia) expolita (Coquillett), and B. (B.) pulverea (Coquillett) are deleted from the ceratopogonid fauna of Florida. Dasyhelea koenigi Delécolle and Rieb is a junior objective synonym of Dasyhelea scissurae Macfie (NEW SYNONYM). The total number of Ceratopogonidae recorded from Florida is now 249 species contained within 27 genera.
Homophileurus neptunus Dechambre was found to be conspecific with H. waldenfelsi Endrödi after examination of types, descriptions, and illustrations. Accordingly, H. neptunus is placed in junior synonymy with H. waldenfelsi, new synonymy. Homophileurus waldenfelsi is an uncommon species and occurs in Ecuador, Colombia, Brazil, and Peru. Brazil and Peru are new country records.
This paper summarizes the information published on the beetle fauna of the island of St. Vincent (excluding the Grenadine islands). The fauna contains 62 families, with 371 genera, and 536 species. The families with the largest number of species are Staphylinidae (128), Curculionidae (54), Chrysomelidae (47), Scarabaeidae (31), Tenebrionidae (30), and Cerambycidae (29). At least 17 species (3.17%) were probably accidentally introduced to the island by human activities. One hundred four species (19.40%) are endemic (restricted) to the island and likely speciated on the island. One hundred twenty species (22.39%) are shared only with other islands of the Lesser Antilles (Lesser Antillean endemics), and 41 species (7.65%) are more widespread Antilles endemics. The remaining 254 species (47.38%) in the fauna are otherwise mostly widely distributed in the Antilles and the Neotropical Region. The St. Vincent beetle fauna has thus mostly originated elsewhere than on St. Vincent and is largely an immigrant fauna from other islands of the West Indies or the continental Neotropics. Of the St. Vincent species known to occur on other islands, the largest numbers are shared with (north to south) Guadeloupe (206), Dominica (115), Martinique (76), St. Lucia (87) and Grenada (298). Undoubtedly, the real number of species on St. Vincent is higher than now reported and may actually be around 1200 or more species.
Review of Synapsis Bates (Scarabaeidae: Scarabaeinae: Coprini), with description of a new species
(2010)
Presented are a checklist, a discussion of and keys to species groups and their constituent species, and a description of one new species: Synapsis horaki. The species Synapsis cambeforti Krikken and S. thoas Sharp are synonymized with S. ritsemae Lansberge, Balthasar’s synonymy of S. yunnana Arrow with S. tridens Sharp is revived, and the status of six recently described species is left unresolved because of insufficient data.
Six new species of the weevil genus Cercopeus Schoenherr are described from South Carolina: C. alexi, C. cornelli, C. femoratus, C. paulus, C. skelleyi, and C. tibialis. Three other species also found in South Carolina are redescribed: C. chrysorrhoeus (Say), C. maspavancus Sleeper, and C. strigicollis Sleeper. Keys to known males and females of all 17 species of Cercopeus are given, along with photographs of habitus, leg features, and antennae, and line illustrations of genitalia. Nearly all specimens of the new species were collected from January-March and these species are winter active.
Eight new state records and three newly described species are the subject of this publication. Whiteflies (Hemiptera: Sternorrhyncha: Aleyrodidae: Aleyrodinae) were collected from 2003 through 2009 within the Las Vegas area of Clark County, Nevada to determine the occurrence of newly established species and host range and distribution. Prior to 2003 the following ten whiteflies were known to be established in Nevada: Aleuroglandulus subtilis Bondar, Aleuroplatus berbericolus Quaintance and Baker, Aleyrodes spiraeoides Quaintance, Bemisia tabaci (Gennadius), Dialeurodes citri (Ashmead), Siphoninus phillyreae (Haliday), Tetraleurodes mori (Quaintance), Trialeurodes abutiloneus (Haldeman), Trialeurodes packardi (Morrill), and Trialeurodes vaporariorum (Westwood). Based on collections made after 2003, eleven additional whitefly species were found in Nevada. Of these the following eight were described species from California and other western U.S. states: Aleuroparadoxus arctostaphyli Russell, Aleuroplatus gelatinosus (Cockerell), Aleuropleurocelus ceanothi (Sampson), Aleuropleurocelus nigrans (Bemis), Tetraleurodes quercicola Nakahara, Trialeurodes corollis (Penny), Trialeurodes eriodictyonis Russell, and Trialeurodes glacialis (Bemis). Three new species are described and illustrated: Aleuropleurocelus nevadensis Dooley sp. nov., Tetraleurodes quercophyllae Dooley sp. nov., and Trialeurodes pseudoblongifoliae Dooley sp. nov.
The five genera and eight species of dynastine scarabs occurring in the Cayman Islands in the West Indies are reviewed. Two new, endemic species are described from Little Cayman, with supporting illustrations: Tomarus adoceteus Ratcliffe and Cave (Pentodontini), new species, and Caymania nitidissima Ratcliffe and Cave (Phileurini), new genus and species.