Protein quality control (PQC) systems, i.e. the UPS and the aggresome-autophagy pathway, have been suggested as promising targets in cancer therapy. Simultaneous pharmacological inhibition of both pathways has shown increased efficacy in various tumors, such as ovarian and colon carcinoma. Here, we investigate the effect of concomitant inhibition of the 26S proteasome by the FDA-approved inhibitor Bortezomib and of HDAC6, a key mediator of the aggresome-autophagy system, by the highly specific inhibitor ST80 in rhabdomyosarcoma (RMS) cell lines. We demonstrate that simultaneous inhibition of the 26S proteasome and the selective aggresome-autophagy pathway significantly increases apoptosis in all tested RMS cell lines. Interestingly, we observed that a subpopulation of RMS cells was able to survive the co-treatment and, upon drug removal, to recover similarly to untreated cells. In this study, we identified the co-chaperone BAG3 as the key mediator of this recovery: BAG3 is transcriptionally up-regulated specifically in the ST80/Bortezomib-surviving cells and mediates clearance of cytotoxic protein aggregates by selective autophagy. Impairment of the autophagic pathway during the recovery phase, either by conditional knock-down of ATG7 or by inhibition of lysosomal degradation with Bafilomycin A1, triggers accumulation of insoluble protein aggregates, loss of cell recovery and cell death, similarly to stable short hairpin RNA (shRNA) BAG3 knock-down. Our results are the first demonstration that BAG3-mediated selective autophagy is engaged to cope with proteotoxicity induced by simultaneous inhibition of constitutive PQC systems in cancer cell lines during cell recovery. Moreover, our data give new insights into the regulation of constitutive and on-demand PQC mechanisms, pointing to BAG3 as a promising target in RMS therapy.
Nuclear matter, which takes the form of protons and neutrons under normal conditions, undergoes a phase transition at high temperatures and densities, liberating the quarks and gluons that are usually confined in nucleons and creating a medium of free partons: the Quark-Gluon Plasma. It is generally believed that this state of matter can be created in relativistic collisions of heavy nuclei. The study of the medium created in these collisions is the subject of heavy-ion physics. One topic within this field is particles with high transverse momentum, which are created in initial hard collisions between partons of the incoming nuclei. The energetic partons lose energy through interactions with the medium before they fragment into a jet of hadrons. Due to momentum conservation, these jets are usually created as back-to-back pairs, or less commonly as three-jet or photon-jet events, where a single jet is balanced by a hard photon. The energy loss can be measured using correlations between particles with high transverse momenta. A trigger particle with very high transverse momentum is selected, and the distribution of the azimuthal angle of associated particles in the same event is studied relative to the azimuth of the trigger particle. These azimuthal correlations show a peak at opening angles around 0 degrees from particles of the same jet, and a second peak at opening angles around 180 degrees from back-to-back di-jets. Random combinations with the underlying event generate a flat background extending over the full range of opening angles. The STAR experiment observed a modification of these correlations in central Au+Au collisions, where trigger particles with 4 GeV < pT(trigger) < 6 GeV and associated particles with 2 GeV < pT(assoc) < 4 GeV were selected. A strong suppression of away-side correlations has been observed in central Au+Au collisions, relative to p+p, d+Au and peripheral Au+Au data.
This can be explained by assuming two partons going in opposite directions, where at least one has to travel a large distance through the medium, causing energy loss and effectively removing the event from the analysis. For near-side correlations, no significant modification has been observed, which can be explained by surface emission: the observed jets have travelled only a short distance in the medium, not leaving enough time for interactions with it. Both trigger and associated particles in a correlation analysis with charged hadrons are subject to modifications by the medium. This can be avoided by using photon-jet events instead of di-jets, because the photon does not interact with the medium and therefore provides the best available measure of the properties of the opposite jet in the presence of the underlying event. This thesis studies azimuthal correlations between regions of high energy deposition in the electromagnetic calorimeter as trigger particles and charged tracks as associated particles. The data sample was enriched by online event selection, allowing for the selection of trigger particles with a transverse energy of more than 10 GeV and associated particles with more than 2, 3 or 4 GeV. The away-side yield per trigger particle is strongly suppressed, as in correlations between charged particles. The near-side yield is also reduced, by about a factor of two, clearly different from charged correlations. The trigger particles are a mixture of photon pairs from the decays of neutral pions and single photons, mainly from photon-jet events, with small contributions from other hadron decays and fragmentation photons. Pythia simulations predict a ratio of neutral pions to prompt photons of 3.5:1 in p+p collisions with the same cuts as in the presented analysis.
Single-particle suppression further reduces this ratio in central Au+Au collisions, down to about 0.8:1, indicating that the majority of trigger particles in central Au+Au collisions are prompt photons. The increasing fraction of prompt-photon triggers without an accompanying jet, and therefore with zero associated yield, reduces the average yield per trigger particle. The magnitude of the observed effect agrees well with the expectation from Pythia simulations and the assumption of a single-particle suppression by a factor of 4-5. An analysis of away-side correlations is more difficult because both photon-jet and di-jet events contribute. The aim is the separation of these two contributions. As a clear separation is not possible with the available dataset, a comparison with two different scenarios is given, in which a surprisingly small suppression by only a factor of about 5 is favoured for both di-jet and photon-jet correlations. A separate measurement of both contributions will be possible with a shower-shape analysis in the EM calorimeter or a comparison with charged correlations in the same kinematic region.
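The core of such a correlation measurement is simple: for each trigger particle, histogram the azimuthal angle difference Δφ to the associated particles of the same event, so that the near-side (Δφ ≈ 0) and away-side (Δφ ≈ π) peaks appear on top of the flat underlying-event background. A minimal sketch in Python/NumPy, using invented toy-event data rather than any real dataset:

```python
import numpy as np

def delta_phi_correlation(trigger_phi, assoc_phis, nbins=36):
    """Histogram the azimuthal angle difference between a trigger particle
    and its associated particles, folded into [-pi/2, 3*pi/2) so that the
    near-side peak (around 0) and the away-side peak (around pi) are visible."""
    dphi = np.asarray(assoc_phis) - trigger_phi
    dphi = np.mod(dphi + np.pi / 2, 2 * np.pi) - np.pi / 2
    return np.histogram(dphi, bins=nbins, range=(-np.pi / 2, 3 * np.pi / 2))

# Toy event sample: a back-to-back di-jet on top of a flat underlying event.
rng = np.random.default_rng(0)
trigger = 0.3
near = rng.normal(trigger, 0.2, 500)            # fragments of the trigger jet
away = rng.normal(trigger + np.pi, 0.4, 400)    # fragments of the recoil jet
bkg = rng.uniform(0.0, 2 * np.pi, 2000)         # underlying event
counts, edges = delta_phi_correlation(trigger, np.concatenate([near, away, bkg]))
```

In a real analysis this histogram is accumulated over many events and normalised per trigger particle; medium effects then show up as a suppression of the away-side (and, for neutral triggers, near-side) yield.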
The Kaon Spectrometer (KaoS) at the heavy-ion synchrotron (SIS) at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt has been used to study the production and propagation of K+ and K- mesons from Au+Au collisions at a kinetic beam energy of 1.5 AGeV. This energy is close to the corresponding production threshold in binary nucleon-nucleon collisions for K+ mesons and far below it for K- mesons. The azimuthal angular distributions of particles have been measured as a function of collision centrality and particle transverse momentum. The properties of strange mesons are expected to be modified by the in-medium meson-baryon potential. Theoretical calculations show that the superposition of the scalar and vector potentials leads to a small repulsive K+N and a strongly attractive K-N potential. In addition, kaons and antikaons interact differently with nuclear matter. The strangeness conservation law suppresses the absorption of K+ mesons, as they contain an anti-strange quark. K- mesons, however, interact with nucleons via strangeness exchange (K- + N -> Y + pion, where Y = Lambda, Sigma). Moreover, the reverse process (pion + Y -> K- + N) is the dominant production mechanism of K- mesons at SIS energies. The azimuthal emission patterns of kaons are expected to be sensitive to the in-medium potentials. An enhanced out-of-plane emission of K+ mesons was observed in Au+Au reactions at 1.0 AGeV and 1.5 AGeV, and also in Ni+Ni at 1.93 AGeV. The out-of-plane emission of K+ mesons in Au+Au reactions at 1.0 AGeV was interpreted as a consequence of a repulsive K+N potential in the nuclear medium; however, recent transport calculations show that the emission patterns obtained in Au+Au at 1.5 AGeV and Ni+Ni at 1.93 AGeV are additionally influenced by the re-scattering of kaons.
For K- mesons the calculations predict an almost isotropic emission pattern due to the attractive K-N potential, which counteracts the absorption of K- mesons in the spectator fragments. In Ni+Ni collisions at 1.93 AGeV the azimuthal distribution of K- mesons has been found to be isotropic. In this case, however, the spectators are rather small and have large relative velocities. In addition, the delay of antikaon emission due to the strangeness-exchange reaction minimizes the interaction with the spectators. As a consequence, the sensitivity of the K- meson emission pattern to the K-N in-medium potential is reduced. In Au+Au collisions we found a dependence of the K- meson azimuthal emission pattern on the transverse momentum. The antikaons registered with pt < 0.5 GeV/c are preferentially emitted in the reaction plane, while the particles with pt > 0.5 GeV/c show a strong out-of-plane enhancement. The emission patterns of K- mesons can be explained in terms of two competing phenomena: one of them is indeed the influence of the attractive K-N potential; the second originates from the strangeness-exchange process.
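In-plane versus out-of-plane emission is conventionally quantified by the Fourier coefficients v_n = <cos n(phi - Phi_RP)> of the azimuthal distribution dN/dphi ∝ 1 + 2 v1 cos(phi - Phi_RP) + 2 v2 cos 2(phi - Phi_RP), where v2 < 0 corresponds to out-of-plane enhancement. A toy sketch of such an estimate in Python/NumPy (the sample is synthetic accept-reject data, not KaoS measurements):

```python
import numpy as np

def flow_coefficients(phis, n_max=2):
    """Estimate the Fourier coefficients v_n = <cos(n*phi)> of an azimuthal
    distribution dN/dphi ~ 1 + 2*v1*cos(phi) + 2*v2*cos(2*phi), with phi
    measured relative to the reaction plane; v2 < 0 means out-of-plane
    enhancement."""
    phis = np.asarray(phis)
    return [float(np.mean(np.cos(n * phis))) for n in range(1, n_max + 1)]

# Toy sample with out-of-plane enhanced emission (v2 = -0.1), drawn by
# accept-reject sampling from dN/dphi ~ 1 + 2*v2*cos(2*phi).
rng = np.random.default_rng(1)
v2_true = -0.1
proposals = rng.uniform(-np.pi, np.pi, 200_000)
weights = 1 + 2 * v2_true * np.cos(2 * proposals)
phi = proposals[rng.uniform(0, 1 + 2 * abs(v2_true), proposals.size) < weights]
v1, v2 = flow_coefficients(phi)
```

In practice the reaction plane itself must first be reconstructed event by event, and the measured coefficients corrected for its finite resolution; the sketch above assumes a perfectly known reaction plane.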
The geographic range of species is a fundamental structuring feature of the biological world. Why species are distributed the way they are has long been one of the central questions in ecology, biogeography and evolution. At present, largely as an unintended by-product of human economic activity and population dynamics, the geographic ranges of species are changing, with decisive consequences for agriculture and forestry, for disease vectors, and for the biological systems that provide ecosystem functions. It is therefore urgent that we improve our understanding of the dynamics from which the geographic distribution of species arises. With this doctoral thesis, I aim to contribute, through three studies on the range dynamics of songbirds, to our developing understanding of the multiple factors that influence species ranges.
1) Towards a more mechanistic understanding of species traits and range sizes: An important unresolved problem in macroecology is understanding the immense interspecific variation in the size of geographic ranges. While species traits such as fecundity and body size are thought to affect range size, a general understanding of how range sizes are jointly influenced by multiple traits is lacking. Here we assess the effect of life-history traits (fecundity, dispersal ability), ecological traits (habitat niche, dietary niche, migratory behaviour, flexibility in migratory behaviour) and morphological traits (body size) on the global range size of 165 European songbirds. We identify hypotheses from the literature on the relationship between species traits and range sizes and test them using path analysis. The global geographic range size of European songbirds was influenced by life-history traits (fecundity and dispersal ability), ecological traits (habitat niche breadth, dietary niche position and migratory behaviour) and by body size. Species traits influenced range sizes both directly and indirectly. The influence of body size in particular was complex, with positive and negative effects along different paths. Range size is very likely also dependent on factors other than species traits. We show that it is necessary to disentangle the direct and indirect influence of a variety of traits in order to elucidate the mechanisms that generate macroecological relationships.
2) Competition and dispersal ability interact in determining the geographic distribution of birds: Understanding the factors that shape the geographic distribution of species remains a challenge for ecology and evolutionary biology. We investigate how competition, dispersal ability, taxon age and habitat shifts since the Last Glacial Maximum influence the extent to which species of the bird genus Sylvia occur in all areas with suitable environmental conditions (i.e. range filling).
We quantified range filling in the bird genus Sylvia (warblers) using boosted regression trees and ridge regression. Using multiple regression, we tested for the effects of intrageneric competition, dispersal ability, taxon age and habitat shift since the Last Glacial Maximum on range filling.
Warblers with high dispersal ability showed higher range filling, but only when competition was low in areas with less suitable habitat within their potential range. Taxon age and habitat shift since the Last Glacial Maximum had no consistent effect. We show that the ranges of warblers are most likely shaped by the simultaneous, interactive effect of competition and dispersal ability. If biotic interactions such as competition generally influence the ability of species to colonise new areas at the continental scale, predicting the effect of climate change on biodiversity will be a challenge.
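Range filling, as used above, is the fraction of a species' potential range (cells predicted environmentally suitable, e.g. by a boosted regression tree or ridge regression model) that the species actually occupies. A minimal numeric sketch of this ratio in Python/NumPy (the suitability map, occurrence records and threshold are invented for illustration, not the Sylvia data):

```python
import numpy as np

def range_filling(observed, suitability, threshold=0.5):
    """Range filling: fraction of the potential range (cells whose modelled
    habitat suitability exceeds `threshold`) that the species occupies."""
    observed = np.asarray(observed, dtype=bool)
    potential = np.asarray(suitability) > threshold
    if potential.sum() == 0:
        return 0.0
    return (observed & potential).sum() / potential.sum()

# Toy landscape of 10 grid cells: suitability scores from some species
# distribution model, and occurrence records in 3 of the 5 suitable cells.
suit = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.2, 0.1, 0.4, 0.3, 0.05])
occ = np.array([1, 1, 0, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
rf = range_filling(occ, suit)  # 3 occupied of 5 suitable cells -> 0.6
```

Values well below 1 indicate that a species is absent from much of its environmentally suitable area, which is what makes the quantity informative about dispersal limitation and biotic exclusion.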
3) Niche availability in time and space: migration in Sylvia warblers: In the context of recent advances in ecological niche modelling, both the environment and the ecological niche of a species have been treated and quantified as static entities. In reality, however, the environment and a species' niche requirements are dynamic across a variety of scales. We propose a conceptual framework that considers how the realised niche and geographic distribution of species are shaped by the decoupled spatio-temporal availability of different environmental conditions and by changes in niche requirements over the lifetime of an organism. Testing predictions derived from this framework on the example of migration in Sylvia warblers yielded new insights: tracking of the climatic niche in geographic space was most likely not the driving force behind migration in the genus and potentially conflicts with tracking of the land-use niche. The niches of the warblers were narrower during the breeding season, showing that niche requirements can be dynamic in time. We suggest that accounting for dynamic environments and niche requirements will decisively improve our understanding of the drivers behind the movement of organisms in space and the dynamics of their niches and ranges.
Recent advances in artificial neural networks have enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceiving versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we introduce RL and present recent advances in RL applied to robotics. We review Intrinsically Motivated (IM) learning, a special form of RL, and apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also give an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
Dessins d'enfants (children's drawings) may be defined as hypermaps, i.e. as bipartite graphs embedded in compact Riemann surfaces. They are very important objects for describing the surface of the embedding as an algebraic curve. Knowing the combinatorial properties of the dessin may, in fact, help us determine defining equations or the field of definition of the surface. This task is easier if the automorphism group of the dessin is "large". In this thesis we consider a special type of dessins, so-called Wada dessins, for which the underlying graph illustrates the incidence structure of points and hyperplanes of projective spaces. We determine under which conditions they have a large orientation-preserving automorphism group. We show that by applying algebraic operations called "mock" Wilson operations to the underlying graph we may obtain new dessins. We study the automorphism group of the new dessins and show that the dessins we started with are coverings of the new ones.
The main goal of this dissertation was to improve individual steps in the process of automated protein structure determination by nuclear magnetic resonance (NMR). This process consists of a series of sequential steps, some of which have already been automated successfully. CYANA is a program package routinely used for the automated assignment of chemical shifts and of Nuclear Overhauser Enhancement (NOE) signals, and for the structure calculation of proteins. One step that has not yet been automated successfully is peak picking in NMR spectra. This step is particularly important because peak lists are the basis of all subsequent steps. Errors in the peak lists propagate through all subsequent steps of the data analysis and can ultimately result in incorrect structures. One goal of this work was therefore to implement a robust and reliable peak-picking algorithm for NMR spectra in CYANA. This algorithm was to be combined with the approach to automated resonance assignment implemented in FLYA, automated NOE assignment and structure calculation with CYANA. The CYPICK algorithm implemented in CYANA mimics the manual approach, in which the scientist inspects two-dimensional contour plots of the NMR spectra and decides, on the basis of various geometric and similarity criteria, whether a feature is a signal of the protein or an artifact. Protein signals resemble concentric ellipses and fulfil certain geometric criteria, such as an approximately circular appearance after appropriate scaling of the spectral axes and entirely convex shapes, which artifacts do not exhibit. CYPICK evaluates the contour lines of local extrema against these conditions and decides on this basis whether a feature is a genuine signal or not.
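The flavour of such contour-based peak picking can be illustrated with a deliberately simplified picker: local maxima above a noise threshold are accepted only if the intensity decays monotonically away from them along both spectral axes, a crude stand-in for the concentric, convex contour-line criteria of the real algorithm. A sketch in Python/NumPy (the threshold and acceptance criteria are invented for illustration and are not CYPICK's actual ones):

```python
import numpy as np

def pick_peaks(spectrum, threshold):
    """Toy 2D peak picker: accept a grid point as a peak if it is a local
    maximum above `threshold` and the intensity decays monotonically over
    two points in each axis direction (a crude convexity/symmetry check)."""
    s = np.asarray(spectrum, dtype=float)
    peaks = []
    for i in range(2, s.shape[0] - 2):
        for j in range(2, s.shape[1] - 2):
            v = s[i, j]
            if v < threshold or v < s[i - 1:i + 2, j - 1:j + 2].max():
                continue  # below threshold, or not the local maximum
            ok = all(s[i, j] >= s[i + di, j] >= s[i + 2 * di, j] for di in (-1, 1))
            ok &= all(s[i, j] >= s[i, j + dj] >= s[i, j + 2 * dj] for dj in (-1, 1))
            if ok:
                peaks.append((i, j))
    return peaks

# Toy spectrum: one Gaussian signal on a 32x32 grid plus weak noise.
x, y = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
spec = 10 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
spec += 0.1 * np.random.default_rng(3).standard_normal(spec.shape)
peaks = pick_peaks(spec, threshold=1.0)
```

The real algorithm works on contour lines rather than raw grid points and scales the spectral axes first, but the underlying idea is the same: accept only features whose shape matches that of a genuine resonance.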
The second goal of this work was to develop a measure for quantifying the information contained in structural NMR distance restraints. This so-called information content (I) is comparable to the resolution in X-ray crystallography. A further project of this dissertation dealt with structure-based drug design (SBDD). SBDD is mostly carried out by X-ray crystallography. NMR, however, has several advantages over X-ray crystallography that are of interest for SBDD. Strategies were therefore developed to make NMR more accessible for SBDD.
This dissertation explores the breadth and variation of authoritarian counter-terrorism strategies and their legitimacy-related origins to challenge prevailing assumptions in Terrorism Studies. Research and analysis are conducted in the form of a Structured Focused Comparison of domestic counter-terrorism strategies in two electoral autocracies. The first case is Russia’s domestic engagement against a mix of ethno-separatist and Islamist terrorism emanating from its North Caucasus republics between 1999 and 2018. The second case is China’s engagement vis-à-vis a similar type of terrorism in its Xinjiang Uyghur Autonomous Region between 1990 and 2018.
The comparison shows that, contrary to prevailing assumptions, the two strategies differ immensely from one another while containing significant if not predominant non-coercive elements. It further shows that the two strategies are closely related to the two states’ sources and resources of legitimacy, both in their original motivation to tackle the terrorist threat and in the design of counter-terrorism strategies. Drawing on David Beetham’s theory of The Legitimation of Power and on the Comparative Politics, Terrorism Studies and Civil War literatures, the dissertation explores the influence of five sources and (re)sources of legitimacy on the two counter-terrorism strategies: responsiveness, performance legitimacy, ideology, discursive power and co-optation. While governmental discursive power is discarded as a source of variation, findings are significant with respect to the influence of ideology and performance legitimacy. Reliance on ideology or related patterns for legitimation raises vulnerability to terrorism and constrains or facilitates the adoption of communicative and preventive measures that accommodate the grievances of potentially defective or even violently terrorist groups. Performance legitimacy is a key motivator in counter-terrorism and an influence on certain types of counter-terrorism policies. Responsiveness and co-optation are identified as potential sources of variation, based on idiosyncratic concurrence with policy choices.
Atomistic molecular dynamics approach for channeling of charged particles in oriented crystals
(2015)
Channeling is the process of propagation of charged particles along the planes or axes of crystalline materials. Since the 1960s, this effect has been studied extensively, both theoretically and experimentally. It has been applied to the manipulation of high-energy beams, to high-precision structure and defect analysis of crystalline media, and to the production of high-energy radiation. To tune the parameters of channeling and channeling radiation, the process has been adapted to artificially nanostructured materials such as bent crystals, nanotubes and fullerite. In recent years, the concept of the crystalline undulator has been formulated and tested, which predicts special properties of the radiation resulting from the channeling of projectiles in regularly bent crystals.
In this work, the channeling of sub- and multi-GeV electrons and positrons is studied by means of the atomistic molecular dynamics approach. The results of these studies were presented in a series of articles during my doctoral studies in Frankfurt. This approach enables the simulation of complex cases of channeling in straight, bent and periodically bent crystals made of pure crystalline materials and of mixed materials such as Si-Ge crystals, as well as in multilayered and nanostructured crystalline systems. The thesis describes the simulation method, presents simulation results for various cases and compares them with recent experimental data. The results are compared in terms of estimates of the dechanneling length, the fraction of channeled projectiles, the angular distribution of the outgoing projectiles and the radiation spectrum.
Atmospheric nanoaerosols have extensive effects on the Earth’s climate and human health. This cumulative work focuses on the development and characterization of instrumentation for measuring various parameters of atmospheric nanoaerosols, and on its use to understand new particle formation from organic precursors. The principal research question is how the chemical composition of nanoaerosol particles can be measured and how atmospheric chemistry influences aerosol processes, especially new particle formation and growth. To this end, nanoaerosols are investigated from various angles. More specifically, an instrument is developed to analyze nanoparticles, and field as well as chamber studies are conducted.
The main project is the development of the Thermal Desorption Differential Mobility Analyzer (TD-DMA, project 1, Wagner et al. (2018)). This instrument analyzes the chemical composition of small aerosol particles. Through characterization and testing in chamber experiments, it is shown to be suitable for the analysis of freshly nucleated particles.
The second project (Wagner et al. (2017)) applies a broad spectrum of aerosol measurement instruments for the characterization of aerosol particles produced by a skyscraper blasting. A comprehensive picture of the particle population emitted by the demolition is obtained.
Project 3 (Kürten et al. (2016)) is also an ambient aerosol measurement, focusing on new particle formation in a rural area in central Germany and on the ability of a negative nitrate CI-APi-TOF to detect various substances in the atmosphere. Project 4 (Heinritzi et al. (2016)) is a characterization of the negative nitrate CI-APi-TOF used in projects 1, 3, 5, 6, 7 and 8. The following projects focus on understanding new particle formation from atmospherically abundant organic precursors. Key instruments comprise the negative nitrate CI-APi-TOF for gas-phase measurements of the nucleating species, and various sizing and counting instruments for quantifying particle formation and growth. Project 5 (Kirkby et al. (2016)) shows that biogenic organic compounds formed from alpha-pinene can nucleate on their own, without the influence of e.g. sulfuric acid. Project 6 (Tröstl et al. (2016)) describes the subsequent growth of these particles. Project 7 (Stolzenburg et al. (2018)) covers the temperature dependence of this growth, and in project 8 (Heinritzi et al. (2018)) the suppressing influence of isoprene on new particle formation is assessed.
Im Zentralen Nervensystem (ZNS) kommunizieren neuronale Synapsen über eine Kombination von chemischen und elektrischen Signalen, die in ihrer Umgebung eine spezifische Komposition von Ionen benötigen. Um eine strenge Kontrolle des ZNS-Milieus zu gewährleisten, hat sich in Säugetieren eine endotheliale Blut-Hirn-Schranke (BHS) entwickelt. Die BHS limitiert den parazellulären Molekül Transport und wird von den Kapillargefässen des Gehirns gebildet, wobei die physische Barrier von den Tight Junctions (TJs) des vaskulären Endothels generiert wird. Das Gehirnendothel ist Teil einer neurovaskulären Einheit (NVE), zu der auch Perizyten (PZ), Astrozyten (AZ), Mikroglia und Interneurone zählen. Fehlkommunikation oder defekte zelluläre Komponenten in der NVE führen in der Regel zu Störungen in der BHS Funktion und können schwerwiegende neuronale Erkrankungen zur Folge haben.
Vor einigen Jahren haben wir und andere Forschungsgruppen herausgefunden, dass der Wnt/β-Catenin Signalweg essentiell für die Vaskularisierung des Gehirns während der Embryonalentwicklung ist und darüber hinaus auch eine bedeutende Rolle in der Induktion der BHS spielt. Des Weiteren konnte im Zebrafischmodell eine Aktivierung des kanonischen Wnt Signalweges auch im adulten Organismus nachgewiesen werden. Allerdings ist die Quelle der Wnt Wachstumsfaktoren bis dato unbekannt. Der Wnt Signalweg ist eine hoch konservierte und komplexe zelluläre Signalkaskade, die in allen mehrzelligen Organismen vorkommt. Wnt Wachstumsfaktoren sind sekretierte, hydrophobe Signalmoleküle, die sowohl über lange als auch kurze Strecken entweder den β-Catenin-abhängingen („kanonischen“) oder β-Catenin-unabhängingen („nicht-kanonischen“) Wnt Signalweg aktivieren können.
Da die meisten ZNS Erkrankungen mit einem Zusammenbruch der BHS-Funktion assoziiert sind, ist die Forschung bestrebt die Mechanismen, die der Entstehung und Aufrechterhaltung der BHS zugrunde liegen, zu ermitteln und zu verstehen. Das Ziel meiner Doktorarbeit war es herauszufinden, ob AZ Wnts produzieren und ob deren Wirkung auf das Gehirnendothel an der Aufrechterhaltung der BHS beteiligt ist. Zu diesem Zweck, habe ich ein in vitro BHS Kokultivierungs-Modellsystem etabliert das erstmalig ausschliesslich auf der Verwendung von murinen AZ und Gehirnendothelzellen basiert. Zu Beginn der Studie wurden sowohl primäre AZ als auch eine murine Gehirnendothel-zelllinie (MBE) bezüglich ihrer zell-spezifischen Eigenschaften charakterisiert. Dabei konnte belegt werden, dass sowohl die primären AZ als auch die MBE Zelllinie, aufgrund ihrer Proteinexpressionsprofile als repräsentative Vertreter ihres Zelltyps eingestuft werden können. Die darauffolgenden Untersuchungen konnten zeigen, dass primäre AZ über mehrere Passagen hinweg fast alle 19 Wnt Liganden auf mRNA Ebene exprimierten. Ferner konnte in primären Gehirnendothelzellen und zwei Gehirnendothelzelllinien die korrespondierenden Frizzled (FZD) Rezeptoren und low density lipoprotein receptor-related protein (LRP) Korezeptoren nachgewiesen werden. Dieser Befund legte Nahe, dass AZ und Gehirnendothelzellen die basalen Eigenschaften besitzen, um über den Wnt Signalweg miteinander zu kommunizieren. Die Stimulation von pMBEs mit Astrozyten konditioniertem Medium (AKM) induzierte die Hochregulation von Claudin-3 einem bekannten kanonischen Wnt Zielgens. Interessanterweise konnte diese Regulation teilweise durch die Zugabe von dickkopf 1 (Dkk1), einem Wnt/β-Catenin Antagonisten, inhibiert werden.
Um die physiologische Rolle der Wnt Liganden zu bestimmen, habe ich mir die Eigenschaft des universellen Sekretionsmechanismus der Wachstumsfaktoren, welcher von dem Transmembranprotein evenness interrupted (Evi) abhängig ist, zu Nutze gemacht. Die Verpaarung von Evifl/fl mit hGFAP-Cre Mäusen erlaubt die AZ-spezifische Deletion des Evi Proteins (Evi KO), was zur Folge hat, dass die Astrozyten der Nachkommen keine Wnt Wachstumsfaktoren sekretieren können.
In vitro führte der Verlust von Wnts in AKM zu einer teilweisen Delokalisierung von Junction Proteinen. Während die Kokultivierung mit Evi WT AZ einen straken Anstieg im TEER und reduzierte Permeabilitätsmesswerte induzierten, konnten diese pro-BHS Eigenschaften bei Evi KO AZ nicht beobachtet werden. Diese Ergebnisse zeigten deutlich, dass Wnts sekretiert von AZ den BHS Phenotyp positive beeinflussen, indem sie die Zell-Zell-Verbindung verstärken, was wiederum zu erhöhtem Zellwiderstand und reduzierter transzellulärer Permeabilität führt. Die Analyse des in vivo Phänotyps von Evi KO Mäusen ergab, dass mit fortschreitendem, postnatalem Alter makroskopisch erkennbare zerebrale Blutungen auftraten. Ausserdem konnte ich zeigen, dass eine Subpopulation von Blutgefässen Malformationen aufwies, die mit reduzierter Astrozytenendfuss-Assoziierung einhergingen.
Knowledge of the involvement of the Wnt signalling pathway in the regulation of the BBB, also in the adult organism, may be of great importance in the future, as it opens up potential therapeutic applications.
Owing to the implementation of an efficient early-detection programme, the incidence of cervical carcinoma in industrialized nations has remained at a constantly low level since 2005. Nevertheless, with considerably higher incidence rates and an overall survival below 50% in non-industrialized countries, cervical carcinoma is the fourth most common tumour entity in women worldwide.
For the treatment of locally advanced cervical carcinoma (FIGO stage IIb to IVa, or Ib2/IIa2 with several histological risk factors), the current guideline (as of 2014) and international consensus indicate platinum-based radiochemotherapy (RCT), followed by (high dose-rate) brachytherapy (HDR-BT). Under these conditions, local control for patients with locally advanced tumours ranges between 74% and 85%.
Nevertheless, overall survival and survival with respect to various clinical endpoints are stagnating, so the development of new treatment strategies and therapy options, particularly for recurrent and metastatic disease stages, is warranted. Moreover, in contrast to other tumour entities, molecular markers have so far played a subordinate role as predictive factors and therapeutic targets in the treatment of cervical carcinoma, whereas molecularly targeted therapies are gaining ever greater importance in modern cancer therapy.
The aim of the present work is to identify new biomarkers for cervical carcinoma and its response to simultaneous radiochemotherapy and subsequent brachytherapy.
For this purpose, we examined pre-therapeutically obtained biopsy tissue from a cohort of 74 patients with histologically confirmed cervical carcinoma (FIGO Ib - IVb). The expression of Polo-like kinase 3 (PLK3) and phosphoT273 caspase-8 was recorded and quantified by immunohistochemical methods. The results were then correlated with clinical and histopathological characteristics, including p16INK4a expression, and with the clinical endpoints local progression-free and distant metastasis-free survival as well as cancer-specific and overall survival after treatment with curative intent.
First, a significant correlation between PLK3 and pT273 caspase-8 expression was observed (p = 0.009). Furthermore, PLK3 was significantly associated with N status (p = 0.046), M status (p = 0.026) and FIGO stage (p = 0.001), whereas pT273 caspase-8 expression correlated significantly with tumour size (T stage). In univariate survival analyses, elevated PLK3 expression was significantly associated with a lower rate of distant metastases (DMFS p = 0.009) and with significantly prolonged cancer-specific and overall survival (CSS p = 0.001, OS p = 0.003). Comparable results were obtained for pT273 caspase-8 expression, with a reduced metastasis rate (p = 0.021) and improved cancer-specific (p < 0.001) and overall survival (p < 0.001). In multivariate analyses, pT273 caspase-8 expression remained significantly associated with improved overall survival (p = 0.001).
In summary, these data demonstrate for the first time a significant correlation between elevated pre-therapeutic PLK3 and pT273 caspase-8 expression and a favourable clinical course in cervical carcinoma treated with radiochemotherapy.
The mitochondrial respiratory chain consists of NADH:ubiquinone oxidoreductase (Complex-I), succinate:ubiquinone reductase (Complex-II), ubiquinol:cytochrome c reductase (Complex-III), cytochrome c oxidase (Complex-IV) and cytochrome c as an electron mediator between Complex-III and Complex-IV. Paracoccus denitrificans membranes were used as a model system for the association of the mitochondrial respiratory chain. More than 50 years ago, a model was proposed in which these complexes form stable supercomplex assemblies. This model was gradually superseded by the random diffusion model of Hackenbrock et al. (1986). Different independent approaches were used to further analyze this situation in a native membrane environment, thus avoiding any perturbation caused by detergent solubilization: (a) To measure the distance and orientation of the different complexes by multi-frequency EPR spectroscopy, we started by analyzing a simple system, the interaction between the CuA fragment derived from P. denitrificans and various c-type cytochromes, using pulsed X-band and G-band (180 GHz) EPR. Partner proteins for the CuA fragment (excess negative surface charge) were (i) horse heart cytochrome c, which carries a large number of positive charges around its heme crevice, (ii) the soluble cytochrome c552 fragment (the physiological electron donor, also positively charged), and, as a control, (iii) the soluble cytochrome c1 fragment (negative surface potential, derived from the bc1 complex). The measurements were performed at several magnetic field positions at temperatures between 5 and 30 K. Both the X-band and the high-field measurements show a strong relaxation enhancement of the CuA center upon specific binding of P. denitrificans cytochrome c552 and of horse heart cytochrome c. This relaxation enhancement is temperature dependent and provides information about the distance and relative orientation of the two interacting spins within this protein-protein complex.
(b) To obtain quantitative information about the lateral diffusion of cytochrome c oxidase in the native membrane, Fluorescence Correlation Spectroscopy (FCS) was used. In these experiments, the diffusion coefficients of the oxidase differ between the supercomplex-containing wild-type membrane and two deletion mutants lacking either Complex-I or Complex-III. (c) Optical absorption spectroscopy with microsecond time resolution was applied to probe the translational mobility of the oxidase in membrane vesicles. Because several different hemes are present in the native membrane, carbon monoxide (CO) was used as a probe, and the experimental conditions were optimized to obtain the best possible signal.
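For orientation, FCS extracts the diffusion coefficient by fitting the measured intensity autocorrelation curve; for free two-dimensional diffusion in a membrane, the standard model function reads (a textbook form, not necessarily the exact fit model used here):

```latex
% Standard FCS autocorrelation for 2D lateral diffusion (schematic):
G(\tau) \;=\; \frac{1}{\langle N \rangle}\,\frac{1}{1+\tau/\tau_D},
\qquad
\tau_D \;=\; \frac{w_{xy}^{2}}{4D},
```

where $\langle N\rangle$ is the mean number of fluorescent particles in the detection area, $w_{xy}$ the lateral radius of the focal volume, and $D$ the lateral diffusion coefficient that is compared between wild-type and deletion-mutant membranes.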
The weather of the atmospheric boundary layer significantly affects our life on Earth, so a realistic modelling of the atmospheric boundary layer is crucial. The processes of the boundary layer in turn depend on an accurate representation of the land-atmosphere coupling in the model. In this context, the land surface temperature (LST) plays an important role. In this thesis, it is examined whether the assimilation of LST can lead to improved estimates of the boundary layer and its processes.
To properly assimilate the LST retrievals, a suitable model equivalent in the weather prediction model is necessary. In the weather forecast model of the German Weather Service used here, the LST is modelled without a vegetation temperature. To compensate for this deficit, two different vegetation parameterizations were investigated and the better one, a conductivity scheme, was implemented. To make optimal use of the LST observations, it is advantageous to pass the information of the observation on to both land and atmosphere already in the assimilation step. For this reason, a fully coupled land-atmosphere prediction model was used, and the existing control vector of the assimilation system, a local ensemble transform Kalman filter, was augmented with soil temperature and soil moisture. In two-day case studies in March and August 2017, different configurations of the augmented assimilation system were evaluated in observing system simulation experiments (OSSE).
LST was assimilated hourly over two days in both the weakly and the strongly coupled assimilation system. In addition, a free 24-hour forecast was started every six hours. The experiments were validated against the simulated truth (a high-resolution model run) and compared with an experiment without assimilation. The prediction of the boundary layer temperature, especially during the day, and the prediction of the soil temperature, throughout day and night, were improved.
The largest impact of LST assimilation was achieved with the fully coupled system. The humidity variables of the model benefited only partially from the LST assimilation. For this reason, the covariances in the model ensemble were investigated in more detail. To check their compatibility with the high-resolution model run, the ensemble consistency score was introduced. The covariances between the LST and the temperatures of the high-resolution model run were better represented in the ensemble than those between the LST and the humidity variables.
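The covariance diagnostic described above can be sketched in a few lines; the arrays, ensemble size and coupling strengths below are purely illustrative and not taken from the thesis:

```python
import numpy as np

# Illustrative sketch (not the thesis code): estimate ensemble covariances
# between land surface temperature (LST) and two other model variables at
# each grid point. In an ensemble Kalman filter, these covariances decide
# how strongly an LST observation can update the respective variable.
rng = np.random.default_rng(0)
n_members, n_points = 40, 100                      # assumed ensemble size / grid
lst = rng.normal(290.0, 2.0, (n_members, n_points))                # LST in K
soil_t = 0.7 * lst + rng.normal(0.0, 1.0, (n_members, n_points))   # coupled
humidity = rng.normal(8.0, 1.5, (n_members, n_points))             # uncoupled

def ensemble_cov(a, b):
    """Sample covariance over the ensemble dimension, per grid point."""
    da = a - a.mean(axis=0)
    db = b - b.mean(axis=0)
    return (da * db).sum(axis=0) / (a.shape[0] - 1)

# Strong LST/soil-temperature covariances let the filter correct the soil;
# near-zero LST/humidity covariances mirror the weaker benefit for humidity.
print(ensemble_cov(lst, soil_t).mean())
print(ensemble_cov(lst, humidity).mean())
```

In this toy setup the soil temperature is constructed to co-vary with LST while humidity is independent, reproducing the qualitative pattern found in the ensemble.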
Algae as primary producers are highly important in aquatic ecosystems and provide a variety of environmental and anthropogenic services. In small lotic ecosystems in agriculturally influenced landscapes, algae are often the main constituent of the base of the food web and contribute considerably to biodiversity. Within these small lotic ecosystems, algae are influenced both by natural stressors, such as the flow regime and dry-out events, and by anthropogenic factors. Agricultural practices in particular influence algal communities by introducing plant protection products (PPP) and fertilizers into the water. The impacts of these exposures, and how they affect planktonic algae in particular, are not yet well studied in small lotic ecosystems. However, the protection of algae as primary producers is of high relevance and is thus included in official biomonitoring programs such as the European Water Framework Directive (WFD) and in the risk assessment of, e.g., PPPs. Hence, this thesis addresses this knowledge gap and links new information on algal communities in small lotic ecosystems with biomonitoring and risk assessment.
Data was gathered from small ditches and streams in central Germany as well as from laboratory algal assays. A technique to rapidly classify and quantify planktonic and benthic algae based on their photopigment concentration (measured via delayed fluorescence - DF) in ecological and ecotoxicological studies was assessed, both in the laboratory and in the field. This research provides insight into planktonic and benthic algal communities in small streams and ditches in order to improve management and protection strategies in the face of increased agricultural chemical input. ...
The environmental impact of climate change is meanwhile discussed not only in the scientific community but also among the general public. However, little is known about the interaction between climate change and pollutants such as pesticides. A combination of multiple stressors (e.g. temperature, pollutants, predators) may lead to severe alterations for organisms, such as changes in the timing of reproduction, reproductive success, growth performance, mortality and geographic distribution. The question remains whether aquatic organisms react more sensitively to such incidents under climate change conditions. Therefore, the present thesis examines the aquatic ecotoxicological profile of the fungicide pyrimethanil as an exemplary anthropogenic contaminant.
A large test battery of ecotoxicological standard tests and supplementary bioassays with non-model species was conducted to investigate whether species-specific or life-stage-specific differences occur and whether temperature alterations change the impact of the fungicide. Two of the most sensitive species (Chironomus riparius and Daphnia magna) were used to investigate the acute and chronic thermal dependence of pyrimethanil effects. The results clearly show that the ecotoxicity of pyrimethanil at optimal thermal conditions did not depend on the trophic level but was species-specific. With regard to EC10 values, the acute pyrimethanil toxicity to C. riparius increased with higher temperature (6.78 mg L-1 at 14°C and 3.06 mg L-1 at 26°C). The chronic response of D. magna to the NOEC (no observed effect concentration) of the fungicide (0.5 mg L-1) was examined in an experiment lasting several generations under three simulated near-natural temperature regimes (‘cold year, today’ (11 to 22.7°C), ‘warm year, today’ (14 to 25.2°C) and ‘warm year, 2080’ (16.5 to 28.1°C)). A pyrimethanil-induced increase in mortality was buffered by a concomitant increase in overall reproductive capacity, while population growth was more strongly influenced by temperature than by the fungicide. At a higher pyrimethanil concentration (LOEC – lowest observed effect concentration: 1 mg L-1), D. magna could not establish a second generation under any of the thermal regimes.
Besides daphnids, the midge C. riparius was used for a second multigeneration study. In a bifactorial test design, it was tested whether climate change conditions alter the impact of a low fungicide concentration on life history and genetic diversity. The NOAEC/2 (half of the no observed adverse effect concentration derived from a standard toxicity test) was used as a low pyrimethanil concentration to which laboratory populations of the midges were chronically exposed under the temperature scenarios mentioned above. During the 140-day multigeneration study, survival, emergence, reproduction, population growth, and genetic diversity of C. riparius were analyzed. The results reveal that high temperatures and pyrimethanil act synergistically on life history parameters of C. riparius. In simulated present-day scenarios, a NOAEC/2 of pyrimethanil provoked only slight to moderate beneficial or adverse effects. In contrast, exposure to a NOAEC/2 concentration of pyrimethanil under thermal conditions likely for a future summer uncovered adverse effects on mortality and population growth rate. In addition, genetic diversity was considerably reduced by pyrimethanil in the ‘warm year, 2080’ scenario, but only slightly under current climatic conditions. The multigeneration studies under near-natural thermal conditions indicate that not only climate change itself but also low concentrations of pesticides may pose a substantial risk for aquatic invertebrates in the future. This clearly shows that thermal and multigenerational effects should be considered when appraising the ecotoxicity of pesticides and assessing their future environmental risk.
In addition to temperature, multiple further abiotic and biotic stressors alter pollutant effects. To better discriminate and understand the intrinsic and environmental correlates of changing aquatic ecosystems, it was therefore experimentally unraveled how the effects of a low dose of pyrimethanil on daphnids are modified by different temperatures (15°C, 20°C, 25°C) and by the presence or absence of predator kairomones of Chaoborus flavicans larvae. A fractional multifactorial test design made it possible to investigate the individual growth, reproduction and population growth rate of Daphnia pulex under different exposure routes to pyrimethanil at an environmentally relevant concentration (0.05 mg L-1) - either directly (via the water phase), indirectly (via algal food), dually (via water and food) or over multiple generations (fungicide-treated source population).
The number of neonates increased with increasing temperature. At 25°C, no significant differences between the individual treatment groups were observed, although growth was overall inhibited by pyrimethanil. At 15 and 20°C, daphnids fed with contaminated algae showed the lowest reproduction and growth rates. These results clearly demonstrate that multiple stress factors can modify the response of daphnids to pollutants. The exposure routes of the contaminant are of minor importance, while temperature and the presence of a predator are the dominant factors impacting the reproduction of D. pulex. It can be concluded that low concentrations of pyrimethanil may disturb the zooplankton community at suboptimal temperature conditions, but these effects become masked if chaoborid larvae are present. It therefore seems necessary to prospectively examine whether combinations of several stress factors, such as pesticide exposure and suboptimal temperature, influence the life history and sensitivity of different aquatic invertebrates in different ways.
Besides standard test organisms, it is essential to conduct tests with aquatic invertebrates that are not yet regularly considered in ecotoxicological experiments. Molluscs, for example, represent one of the largest phyla of macroinvertebrates, with more than 100,000 species, and are ecologically and economically important. Therefore, within the present study, embryo, juvenile, half- and full-life-cycle toxicity tests with the snail Physella acuta were performed to investigate the impact of pollutants on various life stages. Different concentrations of pyrimethanil (0.06-0.5 or 1.0 mg L-1) assessed at three temperatures (15°C, 20°C, 25°C) revealed that pyrimethanil caused concentration-dependent effects independent of temperature. Interestingly, the ecotoxicity of pyrimethanil was higher at lower temperature for embryo hatching and F1 reproduction, whereas its ecotoxicity for juvenile growth and F0 reproduction increased with increasing temperature. More specifically, particularly during the reproduction test, high mortality rates occurred at the highest concentration of 1 mg L-1 at all temperatures. Due to these high mortality rates, no snails were available for the F1 at the highest concentrations (0.5 and 1.0 mg L-1). Compared to the F0, more egg masses were produced overall in the F1, all of which were fertile, and no mortality occurred. For the F1 generation, the strongest pyrimethanil effects were detected at 15°C. A comparison of effect concentrations between both generations showed that the F1 is more sensitive than the F0.
These results indicate that exposure over more than one generation may give a better overview of the impact of xenobiotics. With the establishment of an embryo and a reproduction test with P. acuta at different temperatures and various pyrimethanil concentrations, we could show that molluscs can respond more sensitively than model organisms and that both the chemical and the thermal stressor strongly influence the behaviour of these pulmonates. The high susceptibility to the fungicide observed in gastropods clearly demonstrates the complexity of pesticide-temperature interactions and the challenge of drawing conclusions for the ecotoxicological risk assessment of pesticides under the impact of global climate change.
The challenging intricacies of strongly correlated electronic systems necessitate the use of a variety of complementary theoretical approaches. In this thesis, we analyze two distinct aspects of strong correlations and further develop or adapt suitable techniques. First, we discuss magnetization transport in insulating one-dimensional spin rings described by a Heisenberg model in an inhomogeneous magnetic field. Due to quantum mechanical interference of magnon wave functions, persistent magnetization currents are shown to exist in such a geometry, in analogy to persistent charge currents in mesoscopic normal metal rings. The second, longer part is dedicated to a new aspect of the functional renormalization group technique for fermions. By decoupling the interaction via a Hubbard-Stratonovich transformation, we introduce collective bosonic variables from the beginning and analyze the hierarchy of flow equations for the coupled field theory. The possibility of a cutoff in the momentum transfer of the interaction leads to a new flow scheme, which we refer to as the interaction cutoff scheme. Within this approach, Ward identities for forward scattering problems are conserved at every instant of the flow, leading to an exact solution of a whole hierarchy of flow equations. In this way, the known exact result for the single-particle Green's function of the Tomonaga-Luttinger model is recovered.
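The Hubbard-Stratonovich decoupling mentioned above rests on a standard Gaussian-integral identity (generic textbook form, per interaction channel; the thesis' notation may differ):

```latex
% Hubbard-Stratonovich identity: a quartic term in the density \rho
% is traded for an auxiliary bosonic field \phi coupled linearly to \rho.
e^{\frac{g}{2}\rho^{2}}
\;=\;
\frac{1}{\sqrt{2\pi g}}
\int_{-\infty}^{\infty} d\phi\;
e^{-\frac{\phi^{2}}{2g}+\phi\rho},
\qquad g>0,
```

so that, under the path integral, $\phi$ becomes the collective bosonic variable whose flow equations are tracked alongside the fermionic ones; imposing the cutoff on the momentum transfer carried by $\phi$ yields the interaction cutoff scheme.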
The Opisthobranchia comprise highly specialized marine gastropods and have therefore been subject to diverse investigations covering various biological disciplines. However, a robust phylogeny of these gastropods is still lacking and several subclades have only been rarely studied. Furthermore, crucial aspects for the evolution of Opisthobranchia have not been comparatively analysed. Therefore, the aim of the present thesis is to gain new insights into the phylogeny of the Opisthobranchia with special focus on certain critical groups (Pleurobranchomorpha, Acteonoidea) and to assess several crucial features of the evolution of the investigated clades. The combination of four different gene markers (18S rDNA, 28S rDNA, 16S rDNA and CO1) and modern molecular systematic analysis tools were used to construct phylogenetic hypotheses focussing on Opisthobranchia as a whole as well as Pleurobranchomorpha and Acteonoidea in more detail. Intriguing new aspects of phylogeny and evolution of Opisthobranchia were revealed. First of all, monophyly of Opisthobranchia is definitely rejected based on the present data, while monophyly of Euthyneura (comprising Opisthobranchia and Pulmonata) is supported. Monophyly of opisthobranch subclades is confirmed for Nudipleura (as well as its constituting groups Nudibranchia and Pleurobranchomorpha), Umbraculida, Pteropoda (as well as subclades Thecosomata and Gymnosomata) and Acochlidiacea, for Cephalaspidea (if Runcinacea is regarded as a separate clade) and for Sacoglossa (if Cylindrobulla is accepted as an Oxynoacea). Aplysiomorpha are rendered paraphyletic due to the position of Akera bullata, but this result needs further investigation and should be considered with caution. The Nudipleura are found as the first single offshoot of the Euthyneura implying an early evolutionary separation of the last common ancestor of this clade. 
The remaining taxa form two main clades, one comprising the opisthobranch subgroups Umbraculida, Cephalaspidea, Aplysiomorpha and Pteropoda, while the other contains the pulmonate taxa and the opisthobranch Sacoglossa and Acochlidiacea. The interrelationships within these clades remain largely unresolved due to low statistical support values. However, a possible sister group relationship of Acochlidiacea and Eupulmonata receives statistical support. Opisthobranchia display various highly specific adaptations to diverse food sources. However, evolution of these specialized traits has never been assessed at an analytical level. The current thesis reconstructs the evolution of dietary preferences with novel methodologies based on the newly proposed phylogenetic hypothesis. Reconstruction of dietary evolution revealed herbivory as the ancestral condition in Euthyneura implying that carnivory evolved at least five times independently in the diverse lineages. The first comprehensive molecular phylogenetic hypothesis of the Pleurobranchomorpha could not reveal monophyly of the two main subclades Pleurobranchaeidae and Pleurobranchidae. This is due to the position of a single taxon (Euselenops luniceps) which is assigned to the Pleurobranchaeidae based on morphology but clusters within Pleurobranchidae in the current hypothesis. Furthermore, the tribe Berthellini and the genus Berthella are rendered paraphyletic by the current analyses. The results of molecular systematic analyses were used to reconstruct historical biogeography of Pleurobranchomorpha. Four different methodological approaches were applied yielding ambiguous results for Pleurobranchomorpha. However, the Pleurobranchidae comprising about 80% of the extant Pleurobranchomorpha most probably derived from an Antarctic origin. Dating of the phylogenetic tree via molecular clock methods yielded divergence of Pleurobranchidae into the Antarctic Tomthompsonia antarctica and the remaining species in Early Oligocene. 
Afterwards, the latter underwent rapid radiation during the Oligocene and Early Miocene. This divergence event coincides with two major geological events in the Antarctic region: on the one hand the onset of glaciation, and on the other the opening of the Drake Passage with the concurrent formation of the Antarctic Circumpolar Current (ACC). I propose that these sudden and dramatic changes in climate and palaeogeography probably accounted for the migration of the last common ancestor of the Pleurobranchidae (besides Tomthompsonia) into warmer regions via the Drake Passage to the Western Atlantic and Eastern Pacific and via the South Tasman Rise to the Indo-West Pacific. Furthermore, the ACC may have triggered larval dispersal to the Eastern Atlantic. The phylogenetic position of the Acteonoidea has been a matter of debate for decades, and they have long been considered basal opisthobranchs. The results of the present thesis rather support a placement in the “Lower Heterobranchia” as sister group of the Rissoelloidea. The current division of the Acteonoidea into three families has never been investigated by means of phylogenetic methods. Thus, this thesis provides the first comprehensive investigation of this clade, challenging the present division into three families. The results rather support a division into two main clades, with the monogeneric Bullinidae clustering within the Aplustridae, casting doubt on its separate status. Additionally, Rictaxis punctocaelatus, which has been assigned to the Acteonidae, clusters basal to the Aplustridae, rendering the Acteonidae paraphyletic. Since information on the morphology of R. punctocaelatus was lacking until now, I conducted the first detailed investigation of the morphology and histology of this species in order to reassess its unexpected molecular systematic placement. Character tracing analyses revealed similarities with both acteonoidean families, implying an intermediate position of this species, which might be assigned to a separate family in the future.
Furthermore, the common features of Acteonidae and Rictaxis (massive shell, small foot, anterior mantle cavity opening, and absence of oral gland) are possibly plesiomorphic for the whole Acteonoidea. In summary, the results of the present thesis provide valuable novel insights into the phylogeny and evolution of the Opisthobranchia by employing state-of-the-art approaches of molecular systematics and evolutionary reconstruction. Thus, diverse hypotheses on opisthobranch phylogeny and evolution were either supported or rejected as well as novel hypotheses proposed which offer the basis for further research on these extraordinary gastropods.
For finite baryon chemical potential, conventional lattice descriptions of quantum chromodynamics (QCD) have a sign problem which prevents straightforward simulations based on importance sampling.
In this thesis we investigate heavy dense QCD by representing lattice QCD with Wilson fermions at finite temperature and density in terms of Polyakov loops.
We discuss the derivation of $3$-dimensional effective Polyakov loop theories from lattice QCD based on a combined strong coupling and hopping parameter expansion, which is valid for heavy quarks.
The finite density sign problem is milder in these theories and they are also amenable to analytic evaluations.
The analytic evaluation of Polyakov loop theories via series expansion techniques is illustrated by using them to evaluate the $SU(3)$ spin model.
We compute the free energy density to $14$th order in the nearest neighbor coupling and find that predictions for the equation of state agree with simulations to $\mathcal{O}(1\%)$ in the phase where the (approximate) $Z(3)$ center symmetry is intact.
The critical end point is also determined but with less accuracy and our results agree with numerical results to $\mathcal{O}(10\%)$.
While the accuracy for the endpoint is limited for the current length of the series, analytic tools provide valuable insight and are more flexible.
Furthermore, they can be generalized to Polyakov loop theories with $n$-point interactions.
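As a point of reference, the $SU(3)$ spin model evaluated here is conventionally written in the literature as (couplings and normalizations may differ from those used in the thesis):

```latex
% SU(3) spin model: effective Polyakov loop theory with nearest-neighbor
% coupling \lambda and quark-induced symmetry-breaking fields h, \bar h.
S \;=\; -\lambda \sum_{\langle xy \rangle}
        \left( L_x L_y^{*} + L_x^{*} L_y \right)
       \;-\; \sum_x \left( h\, L_x + \bar{h}\, L_x^{*} \right),
\qquad L_x = \operatorname{Tr} W_x,\quad W_x \in SU(3),
```

where $\lambda$ is the nearest-neighbor coupling in which the free energy density is expanded; for vanishing $h,\bar{h}$ the action possesses the $Z(3)$ center symmetry referred to above, while a finite chemical potential makes $h \neq \bar{h}$ and renders the action complex, i.e. produces the sign problem.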
We also take a detailed look at the hopping expansion for the derivation of the effective theory.
The exponentiation of the action is discussed using a polymer expansion, and we also explain how to obtain logarithmic resummations for all contributions, which is achieved by employing the finite cluster method known from condensed matter physics.
The finite cluster method can also be used to evaluate the effective theory, and comparisons are made between the evaluation of the effective action and a direct evaluation of the partition function.
We observe that terms in the evaluation of the effective theory correspond to partial contractions in the application of Wick's theorem for the evaluation of Grassmann-valued integrals.
Potential problems arising from this fact are explored.
Next-to-next-to-leading order results from the hopping expansion are used to analyze and compare the onset transition for both baryon and isospin chemical potential.
Lattice QCD with an isospin chemical potential does not have a sign problem and can serve as a valuable cross-check.
Since we are restricted by the relatively short length of our series, we content ourselves with observing some qualitative phenomenological properties arising in the effective theory which are relevant for the onset transition.
Finally, we generalize our results to an arbitrary number of colors $N_c$.
We investigate the transition from a hadron gas to baryon condensation and find that, for any finite lattice spacing, the transition becomes stronger as $N_c$ is increased and becomes first order in the limit of infinite $N_c$.
Beyond the onset, the pressure is shown to scale as $p \sim N_c$ through all available orders in the hopping expansion, which is characteristic for a phase termed quarkyonic matter in the literature.
Some care has to be taken when approaching the continuum, as we find that the continuum limit has to be taken before the large $N_c$ limit.
Although we are currently unable to take the limits fully in this order, our results are stable over the controlled range of lattice spacings as the limits are approached in this way.
Landau's Fermi liquid theory has been the main tool for investigating interactions between fermions at low energies for more than 50 years. It has been successful in describing, amongst other things, the mass enhancement in ³He and the thermodynamics of a large class of metals. While this in itself is remarkable given the phenomenological nature of the original theory, experiments have found several materials, such as some superconducting and heavy-fermion materials, which cannot be described within the Fermi liquid picture. Because of this, many attempts have been made to understand these ''non Fermi liquid'' phases from a theoretical perspective. This is the broad topic of the first part of this thesis and is investigated in Chapter 2, where we consider a two-dimensional system of electrons interacting close to a Fermi surface through a damped gapless bosonic field. Such systems are known to give rise to non Fermi liquid behaviour. In particular, we consider the Ising-nematic quantum critical point of a two-dimensional metal. At this quantum critical point the Fermi liquid theory breaks down and the fermionic self-energy acquires a non Fermi liquid $\omega^{2/3}$ frequency dependence at lowest order within the canonical Hertz-Millis approach to quantum criticality of interacting fermions. Previous studies have shown, however, that due to the gapless nature of the electronic single-particle excitations, the exponent of $2/3$ is modified by an anomalous dimension $\eta_\psi$, which changes not only the exponent of the frequency dependence but also the exponent of the momentum dependence of the self-energy. These studies also show that the usual $1/N$-expansion breaks down for this problem. We therefore develop an alternative approach to calculate the anomalous dimensions based on the functional renormalization group, which is introduced in the introductory Chapter 1.
In doing so, we will be able to calculate both the anomalous dimension renormalizing the exponent of the frequency dependence and the anomalous dimension renormalizing the exponent of the momentum dependence of the self-energy. Moreover, we will see that an effective interaction between the bosonic fields, mediated by the fermions, is crucial for obtaining these renormalizations.
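As a point of reference, the canonical lowest-order scaling referred to above can be written compactly; the precise form of the anomalous-dimension correction is the subject of the thesis itself, so only the standard Hertz-Millis result is shown here:

```latex
% Lowest-order fermionic self-energy at the Ising-nematic quantum
% critical point (canonical Hertz--Millis scaling, Matsubara frequencies):
\Sigma(i\omega) \;\propto\; i\,\operatorname{sgn}(\omega)\,\lvert\omega\rvert^{2/3}
```

The anomalous dimension η_ψ discussed above then shifts both this frequency exponent and the exponent of the momentum dependence away from their Hertz-Millis values.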
In the second part of this thesis, presented in Chapter 3, we return to Fermi liquid theory itself. Indeed, despite its conceptual simplicity of expressing interacting electrons through long-lived quasi-particles which behave similarly to free particles, albeit with renormalized parameters, it remains an active area of research. In particular, in order to take into account the full effects of interactions between quasi-particles, it is crucial to consider specific microscopic models. One such effect, which is not captured by the phenomenological theory itself, is the appearance of non-analytic terms in the expansions of various thermodynamic quantities, such as the heat capacity and the susceptibility, with respect to an external magnetic field, temperature, or momentum. Such non-analyticities may have a large impact on the phase diagram of, for example, itinerant electrons near a ferromagnetic quantum phase transition. Inspired by this, we consider a system of interacting electrons in a weak external magnetic field within Fermi liquid theory. For this system we calculate various quasi-particle properties such as the quasi-particle residue, the momentum-renormalization factor, and a renormalization factor which relates to the self-energy on the Fermi surface. From these renormalization factors we then extract physical quantities such as the renormalized mass and the renormalized electron Landé g-factor. By calculating the renormalization factors within second-order perturbation theory, numerically and analytically, using a phase-space decomposition, we show that all renormalization factors acquire a non-analytic term proportional to the absolute value of the magnetic field. We moreover explicitly calculate the prefactors of these terms and find that they are all universal and determined by low-energy scattering processes, which we classify.
We also consider the non-analytic contributions to the same renormalization factors at finite temperatures and for finite external frequencies and discuss possible experimental ways of measuring the prefactors. Specifically we find that the tunnelling density of states and the conductivity acquire a non-analytic dependence on magnetic field (and temperature) coming from the momentum-renormalization factor. For the latter we discuss how this relates to previous works which show the existence of non-analyticities in the conductivity at first order in the interaction.
Artificial intelligence in heavy-ion collisions : bridging the gap between theory and experiments
(2023)
Artificial Intelligence (AI) methods are employed to study heavy-ion collisions at intermediate collision energies, where QCD matter of high baryon density and moderate temperature is produced. The experimental measurements of various conventional observables, such as collective flow and particle number fluctuations, are usually compared with expensive model calculations to infer the physics governing the evolution of the matter produced in the collisions. Various experimental effects and processing algorithms can greatly affect the sensitivity of these observables. AI methods are used to bridge this gap between theory and experiments of heavy-ion collisions. The problems with conventional methods of analyzing experimental data are illustrated in a comparative study of the Glauber MC model and the UrQMD transport model. It is found that the centrality determination and the estimated fluctuations of the number of participant nucleons suffer from strong model dependencies for Au-Au collisions at 1.23 AGeV. This can bias the results of the experimental analysis if the number of participant nucleons used is not consistent throughout the analysis and in the final model-to-data comparison. The measurable consequences of this model dependence of the number of participant nucleons are also discussed. In this context, PointNet-based AI models are developed to accurately reconstruct the impact parameter or the number of participant nucleons in a collision event from the hits and/or reconstructed tracks of particles in 10 AGeV Au-Au collisions at the CBM experiment. In the last part of the thesis, different AI methods to study the equation of state (EoS) at high baryon densities are discussed. First, a Bayesian inference is performed to constrain the density dependence of the EoS from the available experimental measurements of elliptic flow and mean transverse kinetic energy of mid-rapidity protons in intermediate-energy collisions.
The UrQMD model was augmented to include arbitrary potentials (or, equivalently, EoSs) in the QMD part to provide a consistent treatment of the EoS throughout the evolution of the system. The experimental data constrain the posterior of the EoS at densities up to four times saturation density. However, beyond three times saturation density, the shape of the posterior depends on the choice of observables used. There is a tension in the measurements at a collision energy of about 4 GeV. This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS. Tighter constraints and fully conclusive statements on the EoS require accurate, high-statistics data in the whole beam energy range of 2-10 GeV, which will hopefully be provided by the beam energy scan programme of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR, and future experiments at HIAF and NICA. Finally, it is shown that the PointNet-based models can also be used to identify the equation of state in the CBM experiment. Despite the uncertainties due to limited detector acceptance and biases in the reconstruction algorithms, the PointNet-based models are able to learn the features that accurately identify the underlying physics of the collision. The PointNet-based models are an ideal AI tool to study heavy-ion collisions, not only to identify geometric event features, such as the impact parameter or the number of participant nucleons, but also to extract abstract physical features, such as the EoS, directly from the detector outputs.
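The Glauber MC picture of participant counting mentioned above can be illustrated with a minimal sketch. This is not the thesis' implementation: it uses hard-sphere nuclei instead of a Woods-Saxon profile, and the radius and cross-section values are assumed round numbers chosen only for illustration.

```python
import math
import random

def sample_nucleus(A, R):
    """Sample A nucleon positions uniformly inside a sphere of radius R [fm]
    (hard-sphere approximation; a full Glauber model uses a Woods-Saxon profile).
    Only the transverse (x, y) coordinates are kept."""
    nucleons = []
    while len(nucleons) < A:
        x, y, z = (random.uniform(-R, R) for _ in range(3))
        if x * x + y * y + z * z <= R * R:
            nucleons.append((x, y))
    return nucleons

def n_participants(b, A=197, R=6.5, sigma_nn=3.0):
    """Count participant nucleons in one MC event at impact parameter b [fm].
    sigma_nn is the inelastic NN cross section in fm^2 (~30 mb, an assumed value).
    Two nucleons interact if their transverse distance squared is below
    d^2 = sigma_nn / pi (black-disk criterion)."""
    d2 = sigma_nn / math.pi
    proj = [(x - b / 2, y) for x, y in sample_nucleus(A, R)]
    targ = [(x + b / 2, y) for x, y in sample_nucleus(A, R)]
    hit_p = sum(any((px - tx) ** 2 + (py - ty) ** 2 < d2 for tx, ty in targ)
                for px, py in proj)
    hit_t = sum(any((px - tx) ** 2 + (py - ty) ** 2 < d2 for px, py in proj)
                for tx, ty in targ)
    return hit_p + hit_t
```

Averaging `n_participants` over many events at fixed b gives the mean N_part(b) used for centrality determination; the model dependence discussed in the abstract arises because transport models like UrQMD define participants dynamically rather than geometrically.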
End-stage renal disease (ESRD) has been described as a vasculopathic state, owing to the accelerated arterial stiffening that occurs in addition to, and independently of, atherosclerosis and carries an increased cardiovascular risk. The altered metabolic milieu in uraemia leads to increased oxidative stress, a heightened inflammatory burden, and an abnormal calcium-phosphate metabolism, which are thought to be responsible for the vascular changes. The pulse wave velocity (PWV) is a widely employed surrogate parameter of arteriosclerosis. The purpose of this study was to gain more insight into the pathogenesis of arterial stiffness by investigating the influence of markers of oxidative stress, procoagulation, and inflammation, and of the calcium-phosphate product, on the PWV. We conducted a cross-sectional study in 53 stable patients aged 59 ± 16 years who had been on haemodialysis for at least 4 months (mean duration 68 ± 48 months). Carotid-radial PWV was measured using a semi-automated device, Complior SP (Artech Medical, France). Advanced glycosylation end-products (AGE) and advanced oxidation protein products (AOPP) were quantified according to previously described methods. High-sensitivity CRP was measured using ELISA, whereas the other biochemical parameters, i.e. fibrinogen, albumin, calcium, phosphate, cholesterol, and triglycerides, were determined using routine methods. For statistical calculations we employed SPSS (Statistical Package for the Social Sciences, 12.0, 2003). The correlations between PWV, as the dependent variable, and several independent variables were assessed by means of multiple regression analysis, in which we controlled for the influence of the traditional cardiovascular risk factors and some of the patients' medication (calcium-channel blockers and statins). PWV was found to be significantly correlated with serum CRP (p=0.003), LDL cholesterol (p<0.001), triglycerides (p<0.001), AGE (p=0.002), calcium (p<0.001), phosphate (p=0.001), and fibrinogen (p=0.020).
An interesting quadratic relationship (p=0.058) was noted between PWV and dialysis duration (months). Contrary to expectation, regression analysis showed a negative correlation between AOPP and PWV (p=0.001). We failed to confirm the correlation between PWV and age, systolic blood pressure, or heart rate. Among traditional cardiovascular risk factors, only LDL cholesterol was positively correlated with PWV. In this cross-sectional analysis we showed that PWV correlates positively and significantly with fibrinogen, CRP, AGEs, calcium, phosphate, and LDL cholesterol in haemodialysis patients. It appears that procoagulatory and proinflammatory pathways, oxidative stress, and the calcium-phosphate product exert a synergistic effect on disturbances of the vascular architecture in ESRD patients.
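The multiple-regression approach described above (a dependent variable regressed on several predictors while controlling for covariates) can be sketched in a few lines. The data below are synthetic and the variable names hypothetical; this illustrates only the method (ordinary least squares via the normal equations), not the study's SPSS computation or its actual coefficients.

```python
import random

def ols(X, y):
    """Ordinary least squares: solve (X^T X) b = X^T y by Gaussian
    elimination with partial pivoting. Each row of X starts with a 1
    so that b[0] is the intercept."""
    n, k = len(X), len(X[0])
    # Build the augmented normal-equation system [X^T X | X^T y].
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         + [sum(X[i][r] * y[i] for i in range(n))] for r in range(k)]
    for col in range(k):                          # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):                # back substitution
        b[r] = (A[r][k] - sum(A[r][c] * b[c] for c in range(r + 1, k))) / A[r][r]
    return b

# Synthetic illustration: "pwv" driven by two hypothetical predictors.
random.seed(0)
crp = [random.uniform(0, 10) for _ in range(50)]
ldl = [random.uniform(2, 6) for _ in range(50)]
pwv = [8.0 + 0.2 * c + 0.5 * l for c, l in zip(crp, ldl)]  # noiseless toy data
X = [[1.0, c, l] for c, l in zip(crp, ldl)]
beta = ols(X, pwv)  # recovers intercept 8.0 and slopes 0.2, 0.5
```

Controlling for a covariate simply means appending it as an extra column of X, so its contribution is estimated jointly rather than confounded with the predictors of interest.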
The research focuses on magic - the practice of performing tricks and illusions on stage with the aim of entertaining the audience. In the late 19th and early 20th centuries, magic achieved outstanding social recognition and became an important artistic phenomenon. This study aims to analyze the work of the most prominent magicians of that period and to define the historical role of magicians in the history of culture and art.
Using methods of art history and cultural studies, I analyze the autobiographies of magicians, the literature on magic published during the period in question, and the contemporary press. I approach the history of magic from three different perspectives: magic as a branch of show business, magic as a cultural phenomenon, and magic as a type of performing art. This approach allowed me to create a detailed account of magic as a social, cultural, and artistic phenomenon.
I argue that magic became a highly influential cultural phenomenon of late 19th- and early 20th-century Great Britain that represented the idea of real magic on stage. Magic shows reflected the complex relationship between rational and magical thinking that existed in society and produced narratives about science and the supernatural.
Moreover, very few studies have focused on the question of defining magic as an art form. In the thesis, I analyzed the theoretical works on magic written by magicians and developed a framework for further research on magic from the perspective of the history of art.
Calcium-deficiency rickets (CDR) is a metabolic bone disease in children that is characterized by impaired mineralization and severe bone deformities. As CDR is often an endemic phenomenon that is almost exclusively restricted to tropical areas, environmental conditions are currently considered to be a possible predisposing factor for CDR. Apart from a lack of macronutrients and micronutrients, an oversupply of potentially toxic elements (PTEs) in the soil-plant pathway of the CDR areas is thought to be involved in the aetiology of CDR. This study is the first to comprehensively analyze the impact of the environment on Ca deficiency and the resulting CDR.
To analyze the impact of the environment on CDR in developing countries, a rural region near Kaduna City, northern Nigeria, was chosen as a study area. From this area, cases of CDR have been reported since the early 2000s, with a prevalence rate of 5%. Within this study area, 11 study sites, including areas with a high CDR prevalence (HR), a low CDR prevalence (LR) and no CDR prevalence (NR), were visited. In these HR, LR and NR study sites, the bedrock was investigated and the types of parent materials were identified. Local farmers were interviewed to determine the type and intensity of the land use. The soil types were determined along toposequences. The soil textures as well as the clay mineral fractions were determined. The pH values were measured, and the contents of organic carbon (OC) were determined. The potential cation-exchange capacity (CECpot) and the base saturation (BS) were analyzed. Furthermore, the total and plant-available macronutrient, micronutrient and PTE concentrations were measured in the soils. The drinking water was analyzed for pH values and the concentrations of Ca, Se and F. The maize was analyzed for its Ca, Mg, K, P, Se and phytic acid (PA) contents.
The field and laboratory analyses of the bedrock showed that the HR, LR and NR study sites near Kaduna City, northern Nigeria, were underlain by Older Granites. A direct link between the distribution of the bedrock, the parent materials and the prevalence of CDR was not found. Interviews with the local farmers showed that the land use in the Kaduna study area is dominated by the cultivation of cash crops and food crops. Field analyses of the soil types in the Kaduna study area showed that the distribution of the soil types is highly dependent on the topography and the distribution of the parent materials. In the near vicinity of the inselbergs, Lixisols had developed on grus slope deposits. In the lower pediment and plain positions, Acrisols had developed on grus slope deposits and pisolite slope deposits. In the upper plains, Plinthosols had developed on pisolite slope deposits, and in the river valleys, Fluvisols had developed on river deposits. Such soil types and soil type distributions are typical for granite-underlain areas in the northern guinea savanna of West Africa. Similarly, the physical soil conditions were representative of the soils of the northern guinea savanna: sandy topsoils, clayey subsoils and relatively high contents of kaolinite clay minerals in the clay fractions. With regard to the geochemical composition, no significant difference was found between the soils of the Kaduna study area and the soils of other granite-underlain areas in West Africa. Only the concentrations of P were notably low in the soils of the Kaduna study area. However, P deficiency is a typical phenomenon in West African savanna soils and is not restricted to CDR areas. The micronutrient concentrations in the soils were low, but not critically low. Laboratory analyses of the amounts of PTEs showed that, compared to worldwide background levels and international critical limits, the PTE concentrations were very low in the soils of the Kaduna study area.
In the drinking water, neither a significant lack of macronutrients and micronutrients, nor a noticeable oversupply of PTEs was found. The maize in the HR, LR and NR study sites contained normal contents of Mg, K and P, low contents of Ca and Se as well as slightly elevated concentrations of PA compared to West African food composition tables. Comparisons between the mineral contents of traditional and modern maize cultivars showed that the traditional maize cultivars contained significantly higher contents of Ca and noticeably lower concentrations of PA than the modern maize cultivars.
A direct link between the environmental conditions and CDR in the Kaduna study area was considered unlikely, as neither a statistically significant lack of macronutrients and micronutrients nor a statistically significant oversupply of PTEs was found in the environment of this area. Instead, the results indicated that nutrition, rather than environmental conditions, impacts the prevalence of CDR.
The geodynamic processes and the chemical and thermal evolution of the mantle beneath the Kaapvaal craton (South Africa) were investigated with particular regard to diamond formation. For this, 31 coarse-grained peridotites and 21 individual subcalcic garnets from heavy mineral concentrates (HMC) from the Finsch mine were studied for their major and trace element compositions and their Lu-Hf and Sm-Nd isotope compositions. Furthermore, processes in the Earth's mantle that follow kimberlite sampling and propagation were studied in a polymict peridotite breccia from the Kimberley mines. Inter-mineral equilibrium of the peridotites was tested by comparing the results from different, independent thermometers. These well-equilibrated peridotites stem from a restricted pressure range of 5 to 6.5 GPa (depth ~160-200 km) and a temperature range of 1050-1250°C, following the 40 mW/m² conductive geotherm. The majority of the samples display a well-developed anti-correlation of oxygen fugacity with pressure, which is in contrast to the sheared and oxidised peridotites erupted by the younger Kimberley kimberlites. All analysed samples have homogeneous trace element mineral chemistry. Variations in trace elements among the Finsch peridotites reflect their complex nature and the intricate development of the subcratonic mantle. An age of 3.6 Ga is the oldest crustal age recorded in the Kaapvaal craton, and it is confirmed by the Lu-Hf model age of a highly radiogenic subcalcic garnet in this study. This age therefore probably represents the oldest depletion (partial melting event) of the subcratonic mantle beneath the Kaapvaal craton. Both the subcalcic garnets and the Finsch peridotites yield Lu-Hf isochron ages of around 2.5 Ga, which probably represent the last depletion event of the Kaapvaal craton. Several depletions older than 2.5 Ga are also necessary to explain the higher initial ratios of both isochrons.
The Cr# and HREE concentrations and ratios of the Finsch subcalcic garnets and peridotites indicate that partial melting of the Kaapvaal craton happened at different depths. One group of subcalcic garnets (group-1) experienced depletion at high pressure in the garnet stability field and another (group-2) at low pressures in the spinel or plagioclase stability field. Major and trace elements indicate that up to 50% of melt was removed from the primitive mantle in at least two melting events. Thus, the first continental crust was created early (> 2.5 Ga) from high degrees of partial melting of the lithospheric mantle. According to the Sm-Nd isotope signatures, at least two metasomatic events took place significantly after 2.5 Ga. As monitored by the group-1 subcalcic garnets, the first enrichment was produced by a fluid and occurred at around 1.3 Ga. The second metasomatic event occurred much later, at 500-300 Ma, and changed both the Nd and Hf isotopic compositions of the group-2 subcalcic garnets as well as of some Finsch peridotites. During partial melting, any carbon species will be dissolved in the melt and removed from the residue. Therefore, any diamond growth before the last depletion (~2.5 Ga) would probably have been completely removed from the lithospheric mantle. Consequently, carbon was apparently reintroduced into the system, i.e. during metasomatism, and triggered the growth of diamonds. The Sm-Nd isotope systematics of the subcalcic garnets of this study indicates that enrichment occurred at ~1.3 Ga or later, which implies non-Archean, late diamond growth at Finsch. Fertilisation of the subcontinental craton associated with the percolation of group-2 (~120 Ma) or even younger (~90 Ma) group-1 kimberlites and their precursors is not observed in the Finsch peridotites, but is well represented in mantle xenoliths from Kimberley. Therefore, these younger events were studied on a specific type of mantle xenolith, polymict breccia from Kimberley.
A polymict peridotite found at the Boshof road dump, Kimberley, represents a mechanical mixture of upper mantle clasts and minerals (opx, cpx, garnet and olivine) of different lithologies, cemented by fine-grained olivine and minute amounts of interstitial ilmenite, phlogopite and sulphide. According to Ni-in-garnet thermometry, single porphyroclastic garnets were sampled and mixed during ascent over a 100 km stratigraphic column, from ~250 km up to ~120 km. During this ascent, melt reacted with the porphyroclasts and neoblastic minerals were formed at their rims, i.e. neoblastic opx around opx porphyroclasts, neoblastic garnet around garnet porphyroclasts, and neoblastic opx around cpx porphyroclasts. Analyses of these neoblastic minerals indicate that a volatile-rich, kimberlite-like melt was the agent that collected the mantle minerals and amalgamated this xenolith. Several complex processes were responsible for the formation of the polymict breccia. They comprise melt degassing at high pressures, which probably caused "explosive" brecciation of the cratonic roots (~250 km), propagation of the melt, which collected different porphyroclasts on the way, and amalgamation at around 120 km. The whole process of "explosive" brecciation, turbulent transport and mixing of mantle porphyroclasts and melt, porphyroclast dissolution and neoblast precipitation happened very fast and was part of the kimberlite formation. Therefore, the sample studied here probably represents one frozen part (with variable mantle clasts) of the kimberlitic magma precursor, with kimberlite eruption at ~90 Ma in Kimberley.
In the first part of this thesis, we introduce the concept of prospective strict no-arbitrage for discrete-time financial market models with proportional transaction costs. The prospective strict no-arbitrage condition, which is a variant of strict no-arbitrage, is slightly weaker than the robust no-arbitrage condition. It still implies that the set of portfolios attainable from zero initial endowment is closed in probability. Consequently, prospective strict no-arbitrage implies the existence of consistent prices, which may lie on the boundary of the bid-ask spread. A weak version of prospective strict no-arbitrage turns out to be equivalent to the existence of a consistent price system.
In continuous-time financial market models with proportional transaction costs, efficient friction, i.e., nonvanishing transaction costs, is a standing assumption. Together with robust no free lunch with vanishing risk, it rules out strategies of infinite variation, which usually appear in frictionless financial markets. In the second part of this thesis, we show how models with and without transaction costs can be unified. The bid and the ask price of a risky asset are given by càdlàg processes which are locally bounded from below and may coincide at some points. In a first step, we show that if the bid-ask model satisfies no unbounded profit with bounded risk for simple long-only strategies, then there exists a semimartingale lying between the bid and the ask price process.
In a second step, under the additional assumption that the zeros of the bid-ask spread are either starting points of an excursion away from zero or inner points from the right, we show that for every bounded predictable strategy specifying the amount of risky assets, the semimartingale can be used to construct the corresponding self-financing risk-free position in a consistent way. Finally, the set of most general strategies is introduced, which also provides a new view on the frictionless case.
Ischemic injuries of the cardiovascular system are still the leading cause of death worldwide. They are often accompanied by loss of cardiomyocytes (CM) and their replacement by non-functional heart tissue. Cardiac fibroblasts (CF) play a major role in the recovery after ischemic injury and in scar formation. In the last few years, researchers have been able to reprogram fibroblasts into CM in vitro and in murine models of myocardial infarction using various protocols, including a cocktail of microRNAs (miRs). These miRs can target hundreds of messenger RNAs and inhibit their translation into proteins, potentially regulating multiple cellular signaling pathways. Because of this, there has been rising interest in the use of miRs for therapeutic purposes. However, as different miRs have different effects in different cells, there is the danger of causing serious side effects. These could be alleviated by enacting a cell-specific transport of miRs, for example by using aptamers. Aptamers are usually short strands of DNA or RNA which can fold into a specific three-dimensional conformation that allows them to bind specifically to target molecules. Aptamers are commonly selected from a large library for their ability to bind to target molecules using a procedure called SELEX. Aptamers have already been used to transport miRs into cancer cells.
In this thesis, we first established the transport of miRs into cells of the cardiovascular system using aptamers. MiR-126 is an important part of the signaling in endothelial cells (EC), protects from atherosclerosis and supports angiogenesis, which is why we chose it as a candidate to transport into the vasculature. We first tested two aptamers for their ability to internalize into EC and fibroblasts. Both the aptamer for the ubiquitously expressed transferrin receptor (TRA) and a general internalizing RNA motif, but not a control construct, could internalize efficiently into all cell types tested. We then designed three chimeras (Ch) using different strategies to connect TRA to miR-126. While all chimeras could internalize efficiently, only Ch3, which connects TRA to Pre-miR-126 using a sticky-bridge structure, had functional effects in EC. Ch3 reduced the protein expression of VCAM-1 in EC and increased the VEGF-induced sprouting of EC in a spheroid-sprouting assay. Treatment of breast cancer cells with Ch3 emulated the effects of treatment with classical miR-126-3p and miR-126-5p mimics. In the SK-BR3 cell line, Ch3 and miR-126-3p reduced the viability of the cells, while they reduced the recruitment of EC by the MCF7 cell line. miR-126-5p had no apparent effect in the SK-BR3 line, but increased the viability of MCF7 cells, as did Ch3. This implies that Ch3 can be processed to both functional miR-126-3p and miR-126-5p in treated cells.
We were unable to achieve reprogramming of adult murine cardiac fibroblasts into cells resembling CM using the cocktail of four miRs. This indicates that the miR-mediated transdifferentiation is only possible in neonatal fibroblasts. The effects in mice after an acute myocardial infarction (AMI) might be caused by an enhanced plasticity of fibroblasts in and close to the infarcted area.
We also screened for aptamers specifically binding to cells of the cardiovascular system. We used two oligonucleotide libraries in a cell-SELEX to select candidates which bind to CF, but not EC. We observed that only the library which contains two randomized regions of 26 bases showed an enrichment of species binding to fibroblasts. We then sequenced rounds 5-7 of the SELEX and analyzed the data bioinformatically to select 10 candidate aptamers. All candidates showed a strong binding not only to CF, but also to EC. This indicates that the selection pressure against species binding to EC was not high enough and would have to be increased to find true CF-aptamers. Four promising candidates were also analyzed for their potential to be internalized, and we surprisingly found that all of them were internalized by EC and CF more efficiently than TRA. The similar behavior of the candidates implies that they possibly share a ligand which is expressed both by EC and CF, but more prominently by the latter.
This work demonstrates the possibility of using aptamers to transport miRs into cells of the cardiovascular system. It also shows that it is possible to select aptamers for non-cancerous mammalian cells, which has not been done before. It is reasonable to assume that a refinement of the cell-SELEX will allow selection of cell-specific aptamers. Due to the failure of reprogramming of adult fibroblasts into induced cardiomyocytes we were unable to test whether a miR-mediated reprogramming might be inducible using aptamer transported-miRs. Ultimately, aptamer mediated transport of miRs is a feasible and promising therapeutic option for the treatment of cardiovascular diseases and other disorders like cancer.
"Autonomy is the condition under which what one does reflects who one is" (Weinrib, 2019, p.8). This quote encapsulates the core idea of autonomy, namely the correspondence of one’s inner values with one’s actions. This is a beautiful idea. After all, who wants their actions to be determined or controlled from the outside?
The classical definition of autonomy, coined primarily by Murray (1938), is precisely about this independence from external circumstances. Among other things, Murray characterizes autonomy as resistance to influence and defiance of authority. Similarly, Piaget (1983) describes individuals as autonomous when they are independent of external influences, in their thinking and actions, and foremost of adult authority. Subsequent work criticized this equation of autonomy with separation or independence (Bekker, 1993; Chirkov et al., 2003; Hmel & Pincus, 2002). In lieu thereof, autonomy is defined as an ability (Chirkov, 2011; Rössler, 2017) and as an essential human need (Ryan & Deci, 2006). The focus is now on self-governing while relying on rationally determined values to pursue a happy life (Chirkov, 2011). According to Self-Determination Theory (SDT), autonomy is about a sense of initiative and responsibility for one's own actions. The experience of interest and appreciation can strengthen autonomy, whereas experiences of external control, e.g., through rewards or punishments, limit autonomy (Ryan & Deci, 2020). In the psychological discourse on autonomy, SDT is strongly represented (Chirkov et al., 2003; Koestner & Losier, 1996; Weinstein et al., 2012). Notably, SDT distinguishes between autonomy and independence as follows: while a person can autonomously ask for help or rely on others, a person can also be involuntarily alone and independent. Interestingly, these definitions are again closer to the term's etymological meaning of self-governing, originating from the Greek αὐτόνομος (autonomos).
The two strands of autonomy as independence and autonomy as self-determination are also reflected in the important differentiation into reactive and reflective autonomy by Koestner and Losier (1996). Resisting external influence, particularly interpersonal influence, is what reactive autonomy entails. This interpretation is closely related to the classical concept of autonomy as separation and independence from others (Murray, 1938). On the other hand, reflective autonomy concerns intrapersonal processes, such as self-governing or self-regulation, as defined in Self-Determination Theory (Ryan et al., 2021). In this dissertation, we investigated the concept in three different approaches while focusing on its assessment and operationalization. To begin, in Article 1, we compared the layperson's and the scientific perspective with each other to gain insight into the characteristics of autonomy. Then, in Articles 2 and 3, we experimentally tested behavioral autonomy as resistance to external influences. Simultaneously, we investigated the link between various autonomy trait measures and autonomous behavior. Specifically, in Article 2, we looked at how people reacted to the effects of message framing and sender authority on social distancing behavior during the early COVID-19 pandemic. Finally, in Article 3, we investigated resistance to a descriptive norm in answering factual questions, in the context of autonomous personality. In our first article, we used a semi-qualitative bottom-up approach to gain insights into the laypersons' perspective on autonomy and compare it to the scientific notion. We followed a design proposed by Kraft-Todd and Rand (2019) on the term heroism. We derived five components from the philosophical and psychological literature: dignity, independence from others, morality, self-awareness, and unconventionality. In three preregistered online studies, we compared these scientific components to the laypersons' understanding of autonomy.
In Study 1, participants (N = 222) listed at least three and up to ten examples of autonomous (self-determined) behaviors. The participants named 807 meaningful examples, which we systematically categorized into 34 representative items for Study 2. Next, new participants (N = 114) rated these items regarding their autonomy. Finally, we transferred the five highest-rated and the five lowest-rated autonomy items to Study 3 (N = 175). We asked participants to rate how strongly the items represented dignity, independence from others, morality, self-awareness, and unconventionality. We found that all components distinguished between high- and low-autonomy items, except unconventionality. Thus, we conclude that the laypersons' view corresponds with the scientific characteristics of dignity, independence from others, self-awareness, and morality. A qualitative analysis of the examples also showed that both reactive and reflective definitions of autonomy are prevalent.
Proteins are the machines of the cell. To ensure the functionality of numerous cellular processes, communication signals must be relayed within proteins. The transmission of a perturbation at one site in a protein to a distant site, where it triggers structural and/or dynamic changes, is called allostery. Initially, allostery was mainly associated with large-scale conformational changes, but later a more dynamic view of allostery developed, in the absence of such large-scale conformational changes. The idea emerged of an allosteric pathway consisting of conserved and energetically coupled amino acids that mediate signal transmission between distant sites in the protein. Numerous theoretical studies have linked these allosteric pathways to pathways of efficient, anisotropic energy flow. The energy flow along these networks connects allosteric signal transmission with vibrational energy transfer (VET). The majority of research on dynamic allostery is based on theoretical methods, because few suitable experimental techniques exist. To better understand this essential biological process of information transfer, the development of new and powerful experimental tools and techniques is therefore urgently needed. This is the goal of the present dissertation.
VET in proteins is inherently anisotropic due to protein geometry. All globular proteins possess channels of efficient energy flow, which are suspected to be important for protein functions such as the rapid dissipation of excess heat, ligand binding, and allosteric signal transmission. VET can be studied with time-resolved infrared (IR) spectroscopy, in which a femtosecond laser pump pulse injects vibrational energy into a molecular system at a specific site, and an IR probe pulse, following after a variable time interval, detects the propagation of this vibrational energy. A protein-compatible and universally applicable chromophore that converts the energy of a visible photon into vibrational energy is required as a heater in order to map long-range VET pathways in proteins. The azulene (Azu) chromophore is suitable for this purpose, because after photoexcitation to its first electronic state it converts almost all of the injected energy into vibrational energy within one picosecond via ultrafast internal conversion. Embedded in the non-canonical amino acid (ncAA) β-(1-azulenyl)-L-alanine (AzAla), the Azu moiety can be incorporated into proteins. The arrival of the injected vibrational energy at a specific site in the protein can be detected with an IR sensor. The combination of Azu as a VET heater and azidohomoalanine (Aha) as a VET sensor with transient IR (TRIR) spectroscopy was already successfully tested on small peptides in the dissertation of H. M. Müller-Werkmeister, which preceded the present dissertation in the laboratories of the Bredenbeck group.
The vibrational frequency of chemical bonds is highly sensitive to even small changes in conformation and dynamics in the immediate environment and can be measured with IR spectroscopy, e.g. with Fourier-transform IR (FTIR) spectroscopy. IR spectroscopy offers exceptionally good time resolution, which makes it possible to observe dynamic processes in molecules on a timescale of a few picoseconds, such as the ultrafast transmission of vibrational energy. Two-dimensional (2D) IR spectroscopy can be used to study the relaxation of vibrationally excited states and structural fluctuations around the vibrating bond. However, the outstanding time resolution comes with limited spectral resolution. In larger molecules with numerous bonds, the vibrational bands overlap and spatial resolution is lost. To overcome this limitation, IR labels can be used: chemical groups that absorb in a spectrally transparent region of the protein/water spectrum (1800 to 2500 cm-1). As ncAAs, they can be incorporated co-translationally into proteins at a desired site and thus provide site-specific information from the protein interior. Owing to their small size, a relatively large extinction coefficient (350-400 M-1cm-1), and a high sensitivity to changes in the local environment, organic azides (N3) such as Aha are particularly well-suited IR labels. Aha can be incorporated into proteins as a methionine analogue.
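As a rough plausibility check for such azide labels, a Beer-Lambert estimate shows the small but detectable absorbance one can expect. Only the extinction coefficient is taken from the text above; the concentration and path length are assumed, illustrative values:

```python
# Beer-Lambert estimate (A = epsilon * c * l) of the absorbance expected
# from an azide label in a transmission FTIR measurement.  Concentration
# and path length are assumed illustrative values; the extinction
# coefficient is the upper end of the range quoted in the text.
epsilon = 400.0     # extinction coefficient, M^-1 cm^-1
conc = 1e-3         # label concentration, M (1 mM, one label per protein)
path = 50e-4        # optical path length, cm (50 um transmission cell)
absorbance = epsilon * conc * path
print(absorbance)   # ~2e-3 absorbance units, small but detectable
```

Such milli-absorbance signals are routinely resolved in FTIR difference spectroscopy, which is what makes the azide window useful in practice.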
...
The topic of this thesis is the functional renormalization group (RG). We discuss several approximation schemes and then apply them to study different fields of condensed matter physics. Generally, we have to evaluate an infinite set of vertex functions describing the scattering of particles. These vertex functions are renormalized away from their bare values, governed by an infinite hierarchy of flow equations. We cannot expect to actually solve these equations but have to apply a number of approximations. The aim is to separate relevant contributions from irrelevant ones. One possible scheme opens up if we rescale fields and vertices. Here "relevance" is used in a quantitative way to describe the scaling behaviour of vertices close to a fixed point of the RG. One disadvantage of describing the system in terms of infinitely many vertices is that the majority of the vertices we have to evaluate are not of interest to us. In most cases we are just looking for the self-energy or the two-particle effective interaction. However, there might be contributions to the flow of these vertices that are generated by irrelevant vertices. We generally assume that we can express irrelevant vertices in terms of the relevant and marginal ones. In turn, it should then be possible to write the contributions of these irrelevant vertices to the flow of relevant and marginal ones in terms of relevant and marginal vertices as well. We show how this can be achieved by what we term the adiabatic approximation. We then consider weakly interacting bosons at the critical point of Bose-Einstein condensation. As the transition takes place at a finite temperature, this temperature defines an effective ultraviolet cut-off. For the investigation of physical properties that depend on momenta smaller than this cut-off, it is therefore sufficient to describe the system by a classical field theory.
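The infinite hierarchy of flow equations referred to above follows from the exact flow equation for the effective action (the Wetterich equation); schematically, with cutoff $\Lambda$ and regulator $R_\Lambda$:

```latex
\partial_\Lambda \Gamma_\Lambda[\phi]
  = \frac{1}{2}\,\mathrm{Tr}\!\left[
      \left(\Gamma^{(2)}_\Lambda[\phi] + R_\Lambda\right)^{-1}
      \partial_\Lambda R_\Lambda
    \right]
```

Expanding $\Gamma_\Lambda$ in powers of the fields turns this single functional equation into the coupled flow equations for the vertex functions, which the truncation schemes discussed in the thesis then close.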
Our central topic here is the self-energy of the bosons, which we are able to evaluate with its full momentum dependence. For small momenta it approaches a scaling form, and as the momentum is gradually increased we observe a crossover to the perturbative regime. As a test of the reliability of our expression for the self-energy, we investigate the interaction-induced shift of the critical temperature. Our results compare quite satisfactorily to the best available estimates for this shift. For the anomalous dimension our approach predicts the correct order of magnitude, however with a considerable error. As an improvement we include more vertices in our calculations. Here we observe that our fixed-point estimates indeed approach the best known results, but this convergence is quite slow. We then turn toward systems of interacting fermions. The formulation of the functional renormalization group implicitly requires knowledge of the true Fermi surface of the full interacting system. In general, however, we can only calculate it a posteriori from the self-energy. The requirement to flow into a fixed point can be translated into a fine-tuning of the frequency/momentum-independent part r_0 of the rescaled 2-point function. We show how this bare value is related to the momentum-dependent effective interaction along the complete trajectory of the RG. On the other hand, r_0 expresses the difference between the bare and the true Fermi surface. Putting both equations together results in an exact self-consistency equation for the Fermi surface. We apply this self-consistency equation to tackle the problem of finding the true Fermi surface of interacting fermions in low dimensions. The simplest non-trivial model with an inhomogeneous Fermi surface is a system of two coupled metallic chains. The process of interband backward scattering leads to a smoothing of the Fermi surface. Of special interest is whether the Fermi momenta of the two bands collapse into a single value.
We propose the term confinement transition for this behaviour. We bosonize the interband backward scattering by means of a Hubbard-Stratonovich transformation and treat our system as a single-channel problem. This bosonization, together with the adiabatic approximation, allows us to investigate the system even at strong coupling. Within a simple one-loop treatment our method predicts a confinement transition at strong coupling. However, taking vertex renormalizations into account, we observe that this confinement is destroyed by fluctuations beyond one loop. We further observe how the confined phase can be stabilized by the inclusion of interband umklapp scattering. Thereafter we consider the physically more relevant case of a two-dimensional system of infinitely many coupled metallic chains. Here the Fermi surface consists of two disconnected, weakly curved sheets. We are able to repeat the calculations we performed for our toy model. Within a self-consistent 2-loop calculation, signs of a confinement transition at finite coupling strength indeed emerge.
Visual perception has grown increasingly important in the robotics domain during the last decades. Mobile robots have to localize themselves in known environments and carry out complex navigation tasks. This thesis presents an appearance-based or view-based approach to robot self-localization and robot navigation using holistic, spherical views obtained by cameras with large fields of view. For view-based methods, it is crucial to have a compressed image representation in which different views can be stored and compared efficiently. Our approach relies on the spherical Fourier transform, which transforms a signal defined on the sphere into a small set of coefficients, approximating the original signal by a weighted sum of orthonormal basis functions, the so-called spherical harmonics. The truncated low-order expansion of the image signal allows input images to be compared efficiently, and the mathematical properties of spherical harmonics also allow for estimating the rotation between two views, even in 3D. Since no geometrical measurements need to be made, modest quality of the vision system is sufficient. All experiments shown in this thesis are purely based on visual information to demonstrate the applicability of the approach. The research presented on robot self-localization focused on demonstrating the usability of the compressed spherical harmonics representation to solve the well-known kidnapped robot problem. To address this problem, the basic idea is to compare the current view to a set of images from a known environment to obtain a likelihood of robot positions. To localize the robot, one could choose the most probable position from the likelihood map; however, it is more beneficial to apply standard methods, that is, particle or Kalman filters, to integrate information over time while the robot moves. The first step was to design a fast expansion method to obtain coefficient vectors directly in image space.
This was achieved by back-projecting basis functions onto the input image. The next steps were to develop a dissimilarity measure, an estimator for rotations between coefficient vectors, and a rotation-invariant dissimilarity measure, all of them purely based on the compact signal representation. With all these techniques at hand, generating likelihood maps is straightforward, but first experiments indicated a strong dependence on illumination conditions. This is obviously a challenge for all holistic methods, in particular for a spherical harmonics approach, since local changes usually affect every single element of the coefficient vector. To cope with illumination changes, we investigated preprocessing steps leading to feature images (e.g. edge images, depth images), which bring together our holistic approach and classical feature-based methods. Furthermore, we concentrated on building a statistical model for typical changes of the coefficient vectors in the presence of illumination changes. This task is more demanding but leads to even better results. The second major topic of this thesis is appearance-based robot navigation. I present a view-based approach called Optical Rails (ORails), which leads a robot along a prerecorded track. The robot navigates in a network of known locations, which are denoted as waypoints. At each waypoint, we store a compressed view representation. A visual servoing method is used to reach the current target waypoint based on its appearance and the current camera image. Navigating in a network of views is achieved by reaching a sequence of stopover locations, one after another. The main contribution of this work is a model which allows the best driving direction of the robot to be deduced purely from the coefficient vectors of the current and the target image. It is based on image registration, similar to the classical Lucas-Kanade method, but has been transferred to the spectral domain, which allows for a great speedup.
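A rotation-invariant dissimilarity of the kind described can, for instance, be built from the per-degree power spectrum of the coefficient vectors, since a 3D rotation only mixes coefficients within one degree. A minimal sketch under that idea; the flat coefficient ordering and the function names are illustrative, not the thesis's implementation:

```python
import numpy as np

def degree_energies(coeffs, l_max):
    """Per-degree power spectrum of a spherical-harmonics coefficient
    vector, E_l = sum_m |c_{l,m}|^2.  A 3D rotation only mixes the
    coefficients within one degree l (via Wigner D-matrices), so the
    E_l are invariant under rotations of the view."""
    energies = []
    idx = 0
    for l in range(l_max + 1):
        n = 2 * l + 1                     # degree l has orders m = -l..l
        block = coeffs[idx:idx + n]
        energies.append(np.sum(np.abs(block) ** 2))
        idx += n
    return np.array(energies)

def rotation_invariant_dissimilarity(c1, c2, l_max):
    """Distance between the per-degree energy spectra of two views."""
    return np.linalg.norm(degree_energies(c1, l_max)
                          - degree_energies(c2, l_max))
```

A rotation of the view then changes neither energy spectrum, so two rotated copies of the same scene compare as identical, which is exactly what a kidnapped-robot position likelihood needs.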
ORails also includes a waypoint selection strategy and a module for steering our nonholonomic robot. As for our self-localization algorithm, dependence on illumination changes is also problematic in ORails. Furthermore, occlusions have to be handled for ORails to work properly. I present a solution based on the optimal expansion, which is able to deal with incomplete image signals. To handle dynamic occlusions, i.e. objects appearing in an arbitrary region of the image, we use the linearity of the expansion process and cut the image into segments. These segments can be treated separately, and finally we merge the results. At this point, we can decide to disregard certain segments. Slicing the view allows for local illumination compensation, which is inherently non-robust if applied to the whole view. In conclusion, this approach makes it possible to handle the most important criticisms of holistic view-based approaches, that is, occlusions and illumination changes, and consequently improves the performance of Optical Rails.
Research in cell and developmental biology requires the application of three-dimensional model systems that reproduce the natural environment of cells. Processes in developmental biology are therefore studied in entire systems like insects or plants. In cell biology, three-dimensional cell cultures (e.g. spheroids or organoids) model the physiology and pathology of cells, tissues or organs. In all systems, the cellular neighborhood and interactions, but also physicochemical influences, are realistically represented. The production and handling of these model systems are rather simple and allow for reproducible characterization.
Confocal and light sheet-based fluorescence microscopy (LSFM) enable the observation of these systems while maintaining their three-dimensional integrity. LSFM is applicable to imaging live samples at high spatio-temporal resolution over long periods of time. The quality of the acquired datasets enables the extraction of quantitative features about morphology, functionality and dynamics in the context of the complete system. This approach is referred to as image-based systems biology. Exploiting the potential of the generated datasets requires an image analysis pipeline for data management, visualization and the retrieval of biologically meaningful values.
The goal of this thesis was to identify, develop and optimize modules of the image analysis pipeline. The modules cover data management and reduction, visualization, reconstruction of multiview image datasets, the segmentation and tracking of cell nuclei and the extraction of quantitative features. The modules were developed in an application-driven manner to test and ensure their applicability to real datasets from three-dimensional fluorescence microscopy. The underlying datasets were taken from research projects in developmental biology in insects and plants, as well as from cell biology.
Datasets acquired in fluorescence microscopy are typically complex and require common image processing steps for their management, visualization, and analysis. The first module accomplishes automatic structuring of large image datasets, reduces the data amount by image cropping and compression, and computes maximum projection images along different spatial directions. The second module corrects for intensity variations in the generated maximum projection images that occur as a function of time. The program was published as part of an article in Nature Protocols. Another developed module, named BugCube, provides a web-based platform to visualize and share the processed image datasets.
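The maximum projections along the different spatial directions amount to a simple axis-wise reduction of the volumetric stack; a minimal NumPy sketch, assuming the common (z, y, x) array layout (the function name and keys are illustrative):

```python
import numpy as np

def maximum_projections(stack):
    """Maximum-intensity projections of a 3D image stack, assumed to be
    laid out as (z, y, x), along each spatial direction."""
    return {
        "xy": stack.max(axis=0),   # view from the top (project along z)
        "xz": stack.max(axis=1),   # view from the front (project along y)
        "yz": stack.max(axis=2),   # view from the side (project along x)
    }
```

Each projection collapses one axis, so a (z, y, x) stack yields images of shape (y, x), (z, x) and (z, y), which is why these projections are a cheap first look at otherwise unwieldy volumes.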
In LSFM, samples can be rotated between two acquisitions, enabling the generation of multiview image datasets. Prior to my work, Frederic Strobl and Alexander Ross acquired the complete embryogenesis of the red flour beetle, Tribolium castaneum, and the field cricket, Gryllus bimaculatus, with LSFM. I evaluated a plugin for the software FIJI as a module for the reconstruction of such datasets. The plugin was optimized for automation and efficiency. We obtained the first high-quality three-dimensional reconstructions of Tribolium and Gryllus datasets.
Optical clearing increases the penetration depth into samples, thus providing endpoint images of entire three-dimensional objects with cellular detail. This work contributes a quantitative characterization module that was applied to endpoint images of optically cleared spheroids. A program for the generation of ground truth datasets was developed in order to evaluate the cell nuclei segmentation performance. The program was part of a paper that was published in BMC Bioinformatics. Using the program, I could show that the cell nuclei segmentation is robust and accurate. Approaches from computational topology and graph theory complete the segmentation of cell nuclei. Thus, the developed module provides a comprehensive quantitative characterization of spheroids on the level of the individual cell, the cell neighborhood and the whole cell aggregate. The module was employed in four applications to analyze the influence of different stress conditions on the morphology and cellular arrangement of cells in spheroids. The module was accepted for publication in Scientific Reports along with the results for one application. The cell nuclei segmentation further provided a data source for simulation models that used correlation functions to identify structural zones in spheroids. These results were published in Royal Society Interface.
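To illustrate the kind of processing involved (a deliberately simplified sketch, not the thesis's actual segmentation algorithm): a global threshold followed by connected-component labeling already yields one label and one centroid per nucleus-like object, and such centroids are the points from which cell-neighborhood analyses can be built. All names here are illustrative:

```python
import numpy as np
from scipy import ndimage

def segment_nuclei(image, threshold):
    """Toy nucleus segmentation: global threshold, then connected-component
    labeling.  Returns the label image and one centroid per component."""
    mask = image > threshold
    labels, n_objects = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n_objects + 1))
    return labels, centroids
```

Real nuclei in spheroids touch and overlap, which is exactly where the approaches from computational topology and graph theory mentioned above go beyond this plain labeling.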
The final part of this work presents a module for cell tracking and lineage reconstruction. In collaboration with Dr. Alexis Maizel, Dr. Jens Fangerau and Dr. Daniel von Wangenheim, I developed a module to track the positions of all cells involved in lateral root formation in Arabidopsis thaliana and used the extracted positions for extensive data analysis. We reconstructed the cell lineages and established the first atlas of all founder cells that contribute to the formation. The analysis of the retrieved data allowed us to study conserved and individual patterns in lateral root formation. The atlas and parts of the analysis presented in this thesis were published in Current Biology.
In this thesis, I developed modules for an image analysis pipeline in three-dimensional fluorescence microscopy and applied them in interdisciplinary research projects. The modules enabled the organization, processing, visualization and analysis of the datasets. The perspective of the image analysis pipeline is not restricted to image-based systems biology. With ongoing development of the image analysis pipeline, it can also be a valuable tool for medical diagnostics or industrial high-throughput approaches.
The physics of interacting bosons in the phase with broken symmetry is determined by the presence of the condensate and is very different from the physics in the symmetric phase. The Functional Renormalization Group (FRG) is a powerful investigation method which allows the description of symmetry breaking with high efficiency. In the present thesis we apply the FRG to study the physics of two different models in the broken-symmetry phase. In the first part of this thesis we consider the classical O(1)-model close to the critical point of the second-order phase transition. Employing a truncation scheme based on the relevance of coupling parameters, we study the behavior of the RG flow, which is shown to be influenced by competition between two characteristic lengths of the system. We also calculate the momentum-dependent self-energy and study its dependence on both length scales. In the second part we apply the FRG formalism to systems of interacting bosons in the phase with spontaneously broken U(1)-symmetry in arbitrary spatial dimensions at zero temperature. We use a truncation scheme based on a new non-local potential approximation which satisfies both exact relations postulated by Hugenholtz and Pines and by Nepomnyashchy and Nepomnyashchy. We study the RG flow of the model, discuss different scaling regimes, calculate the single-particle spectral density function of the interacting bosons, and extract from it both the damping of the quasi-particles and the spectrum of elementary excitations.
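For reference, the two exact relations mentioned constrain the normal ($\Sigma_{11}$) and anomalous ($\Sigma_{12}$) self-energies at zero momentum and frequency: the Hugenholtz-Pines theorem guarantees a gapless spectrum, and the Nepomnyashchy-Nepomnyashchy result states that the anomalous self-energy vanishes at zero temperature in $d \le 3$:

```latex
\mu = \Sigma_{11}(\mathbf{k}=0,\omega=0) - \Sigma_{12}(\mathbf{k}=0,\omega=0),
\qquad
\Sigma_{12}(\mathbf{k}=0,\omega=0) = 0 \quad (T=0,\; d\le 3)
```

Any truncation that is to describe the broken-symmetry phase correctly must respect both identities simultaneously, which is the nontrivial requirement the non-local potential approximation is designed to meet.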
The DNA damage response (DDR) is a vast network of molecules that preserves genome integrity and allows the faithful transmission of genetic information in human cells. While the usual response to the detection of DNA lesions in cells involves the control of cell-cycle checkpoints, repair proteins or apoptosis, alterations of the repair processes can lead to cellular dysfunction, diseases, or cancer. Moreover, cancer patients with DDR alterations often show poor survival and chemoresistance. Despite the progress made in recent years in identifying genes and proteins involved in the DDR and their roles in cellular physiology and pathology, the involvement of the DDR in metabolism remains unclear. It remains to be determined which metabolites are associated with specific repair pathways or alterations, and whether differences exist depending on cellular origin. The identification of DDR-related metabolic pathways, and of the pathways that cause metabolic reprogramming in DDR-deficient cells, may yield new targets for the development of new therapies.
In this thesis, nuclear magnetic resonance (NMR) spectroscopy was used to assess the metabolic consequences of the loss of two central DNA repair proteins of importance in disease contexts, ATM and RNase H2, in haematological cells. An increase in intracellular taurine was found in RNase H2- and ATM-deficient cells compared to cells wild-type for these genes, and in cells after exposure to a source of DNA damage. The rise in taurine does not appear to result from an increase in its biosynthesis from cysteine, but more likely from other cellular processes such as degradation pathways.
Overall, this study presents evidence for metabolic reprogramming in haematological cells with faults in DNA repair resulting from ATM or RNase H2 deficiencies, or upon exposure to a source of DNA damage.
Present research in high energy physics as well as in nuclear physics requires the use of ever more powerful and complex particle accelerators to provide high-luminosity, high-intensity, and high-brightness beams to experiments. With the increased technological complexity of accelerators, meeting the demands of experimenters necessitates a blend of accelerator physics and technology. The problem becomes severe when the beam quality has to be optimized in accelerator systems with thousands of free parameters, including the strengths of quadrupoles and sextupoles, RF voltages, etc. Machine learning methods and concepts of artificial intelligence are being adopted in various industrial and scientific branches; in high energy physics, these methods have recently been used mainly for the analysis of experimental data.
In accelerator physics, the machine learning approach has not yet found wide application, and in general these methods are used without a deep understanding of their effectiveness with respect to more traditional schemes or other alternative approaches. The purpose of this PhD research is to investigate methods of machine learning applied to accelerator optimization and control, and in particular to optics measurements and corrections. Optics correction, maximization of acceptance, and simultaneous control of various accelerator components such as focusing magnets is a typical accelerator scenario. The core of the study is the effectiveness of machine learning methods in a complex system such as the Large Hadron Collider, whose beam dynamics exhibits a nonlinear response to machine settings. This work presents successful applications of several machine learning techniques, such as clustering, decision trees, linear multivariate models and neural networks, to beam optics measurements and corrections at the LHC, providing guidelines for the incorporation of machine learning techniques into accelerator operation and discussing future opportunities and potential work in this field.
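As a generic illustration of the linear multivariate models mentioned (a sketch of standard global optics correction, not the LHC implementation): given a measured response matrix relating corrector strengths to optics deviations, correction settings follow from a regularized least-squares inversion. All names are illustrative:

```python
import numpy as np

def optics_correction(response, deviation, rcond=1e-3):
    """Generic linear correction sketch: 'response' holds the measured
    optics change per unit corrector strength (one column per corrector),
    'deviation' is the observed deviation from the design optics.  The
    corrector settings x minimizing ||response @ x + deviation|| are
    obtained from a truncated pseudo-inverse; rcond cuts off poorly
    constrained singular directions."""
    return -np.linalg.pinv(response, rcond=rcond) @ deviation
```

The rcond cutoff is the practically important knob: with thousands of correctors, many singular directions are barely constrained by the measurement, and inverting them without truncation amplifies noise into large, useless corrector excursions.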
The present work comprises different projects within the scope of public health. In detail, they all aim at combating the high-burden diseases HIV/AIDS, malaria and tuberculosis more effectively. Since there was, and still is, no harmonization between the existing biowaiver guidelines, the biowaiver dissolution test conditions of the WHO and the FDA were compared against each other using drug products that had already demonstrated bioequivalence (BE) to the comparator in vivo. It could thereby be shown that the dissolution conditions proposed by the WHO are more appropriate for granting biowaivers than those of the FDA. Further, the applicability of the WHO dissolution test conditions was investigated using the APIs ethambutol, isoniazid and pyrazinamide (all BCS Class III) as model compounds. These investigations demonstrated that the biowaiver concept works properly, i.e. it leads to no false positive BE decisions and an acceptable incidence of false negative BE decisions. In addition, four new biowaiver monographs were published addressing important APIs in the treatment of HIV/AIDS and malaria. Before these efforts, only a few biowaiver monographs were available for antiviral or antimalarial APIs, i.e. the database of biowaiver monographs has been clearly improved. The last part of the present work dealt with the extension of the biowaiver concept to related areas such as the WHO Prequalification of Medicines Programme. Investigations revealed that the biowaiver tools are generally eligible for the prequalification of drug products containing ethambutol, isoniazid, pyrazinamide, or lamivudine to prove BE between an appropriate comparator and the test candidate. By contrast, some APIs are excluded from the biowaiver procedure.
In conclusion, the implementation of the biowaiver tools for prequalification of biowaivable APIs is, along with BCS-based biowaiver approval of new generics, an important step towards making essential, high-quality drug products more cost-effective and, as a consequence, more accessible for a larger percentage of the population. In that way, the treatment conditions for those in need living in the developing countries can be improved enormously, so that those who are poor do not have to receive poor treatment. The quality standard of essential medicines will increase worldwide, thereby helping to combat the high-burden diseases better and, in turn, lead to an improvement of the global health status.
Atmospheric particles play an important role in the radiative balance of the Earth and also affect human health and air quality. Hence, their chemical characterization constitutes a crucial task for determining their properties, sources and fate. In particular, the analysis of nanoparticles (d < 100 nm) represents an analytical challenge, since these particles are abundant in number but have very little mass.
This cumulative thesis focuses on the chemical characterization of nanoparticles, performed in both laboratory and field studies. Here, I present four manuscripts, two of which represent my main projects as lead author.
The first manuscript (Caudillo et al., 2021) focuses on the gas and particle phases originating from biogenic precursor gases (α-pinene and isoprene). The experiments were performed in the CLOUD chamber at CERN to simulate pure biogenic new particle formation. Both gas and particle phases are measured with a nitrate CI-APi-TOF mass spectrometer, with the TD-DMA coupled to it for particle-phase measurements; this setup allows a direct comparison, as both measurements use the identical chemical ionization and detector. This study demonstrates the suitability of the TD-DMA for measuring newly formed nanoparticles, and it confirms that isoprene suppresses new particle formation but contributes to the growth of newly formed particles.
The second manuscript (Caudillo et al., 2022) presents an intercomparison of four different techniques (including the TD-DMA) for measuring the chemical composition of SOA nanoparticles. The measurements were conducted in the CLOUD chamber. The intercomparison was done by contrasting the observed chemical composition, the calculated volatility, and the thermal desorption behavior (for the thermal desorption techniques). The methods generally agreed on the most important compounds found in the nanoparticles. However, they captured different parts of the organic spectrum. Potential explanations for these differences are suggested.
The third manuscript (Ungeheuer et al., 2022) presents both laboratory and ambient measurements to investigate the ability of lubricant oil to form new particles. These new particles are an important source of ultrafine particles in areas near large airports. The ambient measurements were performed downwind of Frankfurt International Airport, and it was found that the fraction of lubricant oil is largest in the smallest particles. In the laboratory, the main finding was that evaporated lubricant oil nucleates and forms new particles rapidly. The results suggest that nucleation of lubricant oil and subsequent particle growth can occur in the cooling exhaust plumes of aircraft turbofans.
The fourth manuscript (Wang et al., 2022) is a new particle formation study in the CLOUD chamber at CERN. This study shows that nitric acid, sulfuric acid, and ammonia interact synergistically and rapidly form particles under upper free tropospheric conditions. These particles can grow by condensation (driven by the availability of ammonia) up to cloud condensation nuclei (CCN) sizes. Their ability to act as CCN and as ice-nucleating particles (INP) was also investigated, and they were found to be as efficient as desert dust. This mechanism constitutes an important finding and may account for previous observations of high concentrations of ammonia and ammonium nitrate over the Asian monsoon region.
Application of a developed tool to visualize newly synthesized AMPA receptor components in situ
(2018)
The information flow between neurons happens at contact points, the synapses. One underlying mechanism of learning and memory is the change in the strength of information flow in selected synapses. In order to meet the huge demand for membranes and proteins to build and maintain the neurites' complex architecture, neurons use decentralized protein synthesis. Many candidate proteins for local synthesis are known, and the need for de novo synthesis in memory formation is well established. The underlying mechanisms of how somatic versus dendritic synthesis is regulated are yet to be elucidated. Which proteins are newly synthesized in order to allow learning?
In this thesis, protein synthesis is studied in hippocampal neurons. The fractional distribution of somatic and dendritic synthesis for candidate proteins and their subsequent transport to their destinations are investigated using a newly developed technique. In the first part of this study we describe the development of this technique; in the second part we use it to answer biological questions.
We focus here on AMPA receptor subunits, the key players in fast excitatory transmission. AMPA receptors contain multiple subunits with diverse functions. It remains to be understood when and where in a neuron these subunits come together to form a protein complex, and how the choice of subunits is regulated.
The investigation of the subunits' sites of synthesis and their redistribution kinetics in this study will help us to understand how neurons are able to change their synaptic strength in an input-specific manner, which eventually allows learning and memory.
Key questions which are addressed in this study:
How can specific newly synthesized endogenous proteins be visualized in situ? What are the neuron's abilities to locally synthesize and fully assemble AMPA receptor complexes?
How fast do different AMPA receptor subunits redistribute within neurons after synthesis?
Antimicrobial resistance has become a serious threat to public health worldwide in this century. A better understanding of the mechanisms by which bacteria infect host cells, and of how the host counteracts the invading pathogens, is an important subject of current research. Intracellular bacteria of the genus Salmonella have frequently been used as a model system for bacterial infections. Salmonella are ingested with contaminated food or water and cause gastroenteritis and typhoid fever in animals and humans. Once inside the gastrointestinal tract, Salmonella can invade intestinal epithelial cells. The host cell can fight intracellular pathogens by a process called xenophagy. For complex systems, such as the processes involved in the bacterial infection of cells, computational systems biology provides approaches to describe mathematically how these intertwined mechanisms function in the cell. Computational systems biology allows the analysis of biological systems at different levels of abstraction; functional dependencies as well as dynamic behavior can be studied. In this thesis, we used the Petri net formalism to gain better insight into bacterial infections and host defense mechanisms and to predict cellular behavior that can be tested experimentally. We also focused on the development of new computational methods.
In this work, the first mathematical model of the xenophagic capturing of Salmonella enterica serovar Typhimurium in epithelial cells was developed. The model, expressed in the Petri net formalism, was constructed in an iterative cycle of modeling and analysis. For model verification, we analyzed the Petri net, including the computational execution of knockout experiments, termed in silico knockouts, which was established in this work. The in silico knockouts of the proposed Petri net are consistent with published experimental perturbation studies and thus ensure the biological credibility of the Petri net. In silico knockouts that have not been investigated experimentally yet provide hypotheses for future investigations of the pathway.
To study the dynamic behavior of an epithelial cell infected with Salmonella enterica serovar Typhimurium, a stochastic Petri net was constructed. In experimental research, a decision such as "Which incubation time is needed to infect half of the epithelial cells with Salmonella?" is based on experience or practicability. A mathematical model can help to answer such questions and improve experimental design. The stochastic Petri net models the cell at different stages of the Salmonella infection. We parameterized the model using a set of experimental data derived from different literature sources. The kinetic parameters of the stochastic Petri net determine the time evolution of the bacterial infection of a cell. The model captures the stochastic variation and heterogeneity of the intracellular Salmonella population of a single cell over time. The stochastic Petri net is a valuable tool to examine the dynamics of Salmonella infections in epithelial cells and generates useful information for experimental design.
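The stochastic time evolution described above can be illustrated with a Gillespie simulation of a deliberately minimal birth-death Petri net: one place for the intracellular bacteria and two transitions, replication and clearance. The structure and all rates below are illustrative assumptions, not the thesis's parameterized model:

```python
import random

def gillespie_infection(n0=1, r=0.6, c=0.3, t_max=5.0, rng=None):
    """Simulate a toy birth-death Petri net: one place (intracellular
    bacteria), two transitions (replication at rate r*n, clearance at
    rate c*n). Rates are invented for illustration."""
    rng = rng or random.Random(0)
    t, n = 0.0, n0
    while t < t_max and n > 0:
        a_rep, a_clr = r * n, c * n          # transition propensities
        a_tot = a_rep + a_clr
        t += rng.expovariate(a_tot)          # waiting time to next firing
        if t >= t_max:
            break
        n += 1 if rng.random() < a_rep / a_tot else -1
    return n

# many independent runs expose the cell-to-cell heterogeneity of the
# intracellular population that a deterministic model would average away
rng = random.Random(42)
samples = [gillespie_infection(rng=rng) for _ in range(2000)]
mean_n = sum(samples) / len(samples)
```

Because replication and clearance are both linear in n, the ensemble mean grows like n0·exp((r−c)·t) while individual runs range from early extinction to large bursts, which is exactly the kind of single-cell variability the stochastic Petri net is meant to capture.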
In the last part of this thesis, a novel theoretical method was introduced to perform knockout experiments in silico. The new concept of in silico knockouts is based on the computation of signal flows at steady state and allows the determination of knockout behavior that is comparable to experimental perturbation behavior. In this context, we established the concept of Manatee invariants and demonstrated their suitability for in silico knockouts by reflecting the biological dependencies from signal initiation to response. As a proof of principle, we applied the proposed concept of in silico knockouts to the Petri net of the xenophagic recognition of Salmonella. To make in silico knockouts available to the scientific community, we implemented the novel method in the software isiKnock. isiKnock allows the automated execution and visualization of in silico knockouts in signaling pathways expressed in the Petri net formalism. In conclusion, the knockout analysis provides a valuable method to verify computational models of signaling pathways, to detect inconsistencies in the current knowledge of a pathway, and to predict unknown pathway behavior.
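The basic idea of an in silico knockout - delete one transition and ask whether the signal can still propagate from the source to the response - can be sketched with a toy reachability check. This is a strong simplification of the Manatee-invariant approach, and the net, place and transition names below are invented:

```python
def reachable(transitions, marked):
    """Fixpoint: repeatedly fire every transition whose input places
    are all marked; return the set of producible places."""
    marked = set(marked)
    changed = True
    while changed:
        changed = False
        for pre, post in transitions.values():
            if set(pre) <= marked and not set(post) <= marked:
                marked |= set(post)
                changed = True
    return marked

def in_silico_knockout(transitions, source, response, knockout):
    """Delete one transition and test whether the response place can
    still be produced from the signal source."""
    net = {t: v for t, v in transitions.items() if t != knockout}
    return response in reachable(net, {source})

# toy signaling net: source -> a -> response, plus a bypass via b
net = {
    "t1": (["source"], ["a"]),
    "t2": (["a"], ["response"]),
    "t3": (["source"], ["b"]),
    "t4": (["b"], ["response"]),
}
```

In this toy net, knocking out "t2" leaves the response reachable through the bypass branch, whereas in a net without the bypass the same knockout would silence the response - the qualitative pattern an in silico knockout matrix records for each transition.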
In summary, the main contributions of this thesis are the Petri net of the xenophagic capturing of Salmonella enterica serovar Typhimurium in epithelial cells, used to study knockout behavior, and the stochastic Petri net of an epithelial cell infected with Salmonella enterica serovar Typhimurium, used to analyze the infection dynamics. Moreover, we established a new method for in silico knockouts, including the concept of Manatee invariants and the software isiKnock. The results of these studies contribute to a better understanding of bacterial infections and provide valuable model analysis techniques for the field of computational systems biology.
This thesis presented the measurement of antideuteron and antihelium-3 production in central Au+Au collisions at a center-of-mass energy of √sNN = 200 GeV at RHIC. The analysis is based on STAR data, about 3 × 10⁶ events in the top 10% centrality class. Within the data sample, a total of about 5000 antideuterons and 193 antihelium-3 were observed in the STAR TPC at mid-rapidity. The specific energy-loss measurement in the TPC provides antideuteron identification only in a small momentum window; antihelium-3, however, can be identified nearly background-free with almost complete momentum-range coverage. Following the statistical analysis of the hadronic composition at chemical freeze-out of the fireball, the antinuclei abundances were analyzed with the same statistical description. Applied to the clusterization of the fireball, the statistical analysis yields a fireball temperature of (135 ± 10) MeV and a chemical potential of (5 ± 10) MeV at kinetic freeze-out. Like hadronization, the clusterization process is phase-space dominated, and clusters are born into a state of maximum entropy. The large sample of observed antihelium-3 allowed, for the first time in heavy-ion physics, the calculation of a differential multiplicity and invariant cross section as a function of transverse momentum. As expected, the collective transverse flow in the fireball flattens the shape of the transverse-momentum spectrum and leads to the high inverse slope parameter of (950 ± 140) MeV of the antihelium-3 spectrum. With the extracted mean transverse momentum of antihelium-3, the collective flow velocity in the transverse direction could be estimated. Since the average thermal velocity is small compared to the mean collective flow velocity for heavy particles, the mean transverse momentum of antihelium-3 by itself constrains the flow velocity.
Here, a simple ideal-gas approximation was fitted to the distribution of the mean transverse momentum as a function of particle mass and provided direct access to the kinetic freeze-out temperature and the flow velocity. This concept is complementary to the combined analysis of momentum spectra and two-particle HBT correlations commonly used to extract these parameters, and serves as a cross check for the statistical analysis. The upper limit for the transverse collective flow velocity from the antihelium-3 measurement alone is v_flow ≤ (0.68 ± 0.06)c, whereas the ideal-gas approximation yields a temperature of (130 ± 40) MeV and v_flow = (0.46 ± 0.08)c. The results indicate that the kinetic freeze-out conditions at SPS and RHIC are very similar, except for a smaller baryon chemical potential at RHIC. The simultaneous inclusive measurement of antiprotons made it possible to study cluster production in terms of the coalescence picture. With the large momentum coverage of the antihelium-3 spectrum, the coalescence parameter could be calculated as a function of transverse momentum. Due to the difference between the antiproton and antihelium-3 inverse slopes, the coalescence parameter increases with increasing transverse momentum - again a direct consequence of collective transverse flow. Both B2 and B3 follow the common behavior of decreasing coalescence parameters as a function of collision energy. According to the simple thermodynamic coalescence model, this indicates an increasing freeze-out volume at higher energies and is confirmed by the interpretation of the coalescence parameters in the framework of Scheibl and Heinz. Their model includes a dynamically expanding source in a quantum-mechanical description of the coalescence process and expresses the coalescence parameter as a function of the homogeneity volume V_hom, which is also accessible in two-particle HBT correlation analyses.
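The qualitative statement that B3 rises with transverse momentum when the antiproton and antihelium-3 inverse slopes differ can be reproduced with toy exponential mT spectra. All slopes and normalizations below are illustrative numbers, not the measured values:

```python
import math

def inv_yield(pT, m, T, norm):
    """Toy invariant yield ∝ exp(-mT/T), with mT = sqrt(pT² + m²)."""
    mT = math.sqrt(pT**2 + m**2)
    return norm * math.exp(-mT / T)

def B3(pT_he3, T_p=0.30, T_he3=0.95, n_p=1.0, n_he3=1e-7):
    """Coalescence parameter B3 = (invariant He-3 yield) /
    (invariant proton yield at p_p = pT_he3/3)³.
    Inverse slopes in GeV; normalizations are arbitrary."""
    m_p, m_he3 = 0.938, 2.809            # masses in GeV/c²
    num = inv_yield(pT_he3, m_he3, T_he3, n_he3)
    den = inv_yield(pT_he3 / 3.0, m_p, T_p, n_p) ** 3
    return num / den
```

With a harder (flow-flattened) antihelium-3 spectrum, the cubed proton yield in the denominator falls off faster than the numerator, so B3 grows with pT, mirroring the behavior described above.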
The antideuteron and antihelium-3 results agree well with the homogeneity volume from pion-pion correlations, but do not seem to follow the same transverse-mass dependence. A comparison with proton-proton correlations may clarify this point and provide an important cross check for this analysis. Compared to SPS, the homogeneity volume increases by nearly a factor of two. The analysis of antinuclei emission at RHIC allowed the study of the kinetic freeze-out of the created fireball. The results show that the temperature and mean transverse velocity of the expanding system do not change significantly when the collision energy increases by one order of magnitude; only the source volume, i.e. the homogeneity volume, increases. This leaves open questions for theorists concerning the details of the system evolution from the initial hot and dense phase - the initial energy density is a factor of two to three higher at RHIC than at SPS - to the final kinetic freeze-out with similar conditions. At the same time, the results are important constraints for theoretical descriptions. The successful implementation of the Level-3 trigger system in STAR opens the door for the measurement of very rare signals. Indeed, from the coalescence-physics perspective, the first observations of anti-alpha (⁴He) nuclei and antihypertritons (³ΛH) will come within the reach of STAR, in addition to a high-statistics sample of antihelium-3.
Recent data indicate that reactive oxygen species (ROS) are produced in the nociceptive system during persistent pain and contribute to pain sensitization. The aim of this study was to investigate potential antinociceptive effects of ROS scavengers in different animal models of pain. Intrathecal injection of the ROS scavengers 1-oxyl-2,2,6,6-tetramethyl-4-hydroxypiperidine (TEMPOL) or phenyl-N-tert-butylnitrone (PBN) significantly inhibited formalin-induced nociceptive behavior in mice, suggesting that ROS released in the spinal cord are involved in nociceptive processing. Formalin-induced nociceptive behavior was also inhibited by intraperitoneal injection of a combination of vitamin C and vitamin E, but not by vitamin C or vitamin E alone. Moreover, the combination of vitamins C and E dose-dependently attenuated mechanical allodynia in the spared nerve injury (SNI) model of neuropathic pain. The SNI-induced mechanical allodynia was also reduced after intrathecal injection of the combination of vitamins C and E, and western blot analyses revealed that vitamin C and E treatment can attenuate the activation of p38 MAPK in the spinal cord and in DRGs. These data suggest that a combination of vitamins C and E can inhibit nociceptive behavior in animal models of pain, and point to the spinal cord as an important site of ROS production during nociceptive processing.
In this thesis the antiproton-to-proton ratio in ¹⁹⁷Au + ¹⁹⁷Au collisions, measured at mid-rapidity at a center-of-mass energy of √sNN = 200 GeV, is reported. The value was measured to be p̄/p = 0.81 ± 0.002 (stat) ± 0.05 (syst) in the 5% most central collisions. The ratio shows no dependence on rapidity in the range |y| < 0.5. Furthermore, no dependence on transverse momentum is observed within 0.4 < pT < 1.0 GeV/c. At higher pT, a slight drop in the ratio is observed. In the present analysis, the highest momentum considered is pT = 4.5 GeV/c, yielding p̄/p = 0.645 ± 0.005 (stat) ± 0.10 (syst); however, the systematic error is larger in this momentum range. A slight centrality dependence was observed: the ratio decreases from p̄/p = 0.83 ± 0.002 (stat) ± 0.05 (syst) for the most peripheral collisions (less than 80% central) to p̄/p = 0.78 ± 0.002 (stat) ± 0.05 (syst) for the 5% most central collisions. An estimate of the feed-down contributions from the decay of heavier strange baryons results in p̄/p = 0.77 ± 0.05 (syst). The measured ratio is about 12.5 times higher than at the highest SPS energy of √sNN = 17.3 GeV and indicates an "almost net-baryon free" region at mid-rapidity. The asymmetry of protons and antiprotons may be explained by the contribution of valence quarks in a nucleus break-up picture. In such a scenario, the absolute value of the ratio and the fact that the ratio does not depend on rapidity (at mid-rapidity) are well reproduced; fragmentation of quarks and antiquarks into protons and antiprotons is assumed. An estimate of the ratio with the feed-down correction taken into account agrees well with the prediction of a statistical model analysis at a temperature of T = 177 ± 7 MeV and a baryon chemical potential of μB = 29 ± 8 MeV. The temperature is only slightly higher than at the top SPS energy, while the baryochemical potential is a factor of ~10 lower.
As in the case of the SPS results, these parameters are close to the phase boundary shown in Figure 1.6. The measurement of the ratio at high transverse momentum was of special interest in this analysis, since at RHIC energies the cross section for hadrons at high transverse momentum is increased with respect to SPS energies. The weak dependence of the ratio on transverse momentum is well described by the non-perturbative quenched and baryon-junction scenario (i.e. the Soft+Quench model), in which baryon creation is enhanced by baryon junctions. In contrast, the ratio does not show the decrease within the considered momentum range that is predicted by pQCD.
Mast cells are long-lived tissue-resident leukocytes, located most abundantly in the skin and at mucosal surfaces. They belong to the first line of defence of the body, protecting against invading pathogens, toxins and allergens. Their secretory granules are densely packed with a plethora of mediators, which can be released immediately upon activation of the cell. Next to their role in IgE-mediated allergic diseases and in promoting inflammation, potential anti-inflammatory functions have been assigned to mast cells, depending on the biological setting. The aim of this thesis was to contribute to a better understanding of the role of mast cells during the resolution of a local inflammation. Therefore, in a first step a suitable model of local inflammation had to be identified. Since a comparison of the two Toll-like receptor (TLR) agonists zymosan and lipopolysaccharide (LPS), which are most commonly used to induce local inflammation, revealed a systemic response after LPS injection but a local inflammation after zymosan injection, the TLR2 agonist zymosan was chosen for the subsequent experiments. Multi-epitope ligand cartography (MELC) combined with statistical neighbourhood analysis showed that during the resolution of inflammation mast cells are located in an anti-inflammatory microenvironment next to M2 macrophages, while neutrophils and M1 macrophages are located in the zymosan-filled core of the inflammation. Furthermore, infiltrating neutrophils during peak inflammation and an increasing population of macrophages phagocytosing neutrophils during resolution of inflammation could be observed. MELC as well as flow cytometry analysis of mast cell-deficient mice revealed a decreased phagocytosing activity of macrophages in the absence of mast cells. As an untargeted approach to identify mast cell-derived mediators induced by zymosan, mRNA sequencing of bone marrow-derived mast cells (BMMCs) was performed.
Gene ontology term analysis of the sequencing data revealed the induction of the type I interferon (IFN) pathway as the dominant response. In contrast to previous studies, I could validate the production of IFN-β by mast cells in response to zymosan and LPS in vitro. Furthermore, IFN-β expression by mast cells was also detected in vivo. In accordance with previous studies on other cell types, the release of IFN-β by mast cells depends on endosomal signaling. The potential of IFN-β to enhance the phagocytosing activity of macrophages has been demonstrated recently. Besides IFN-β, various other mediators with reported enhancing effects on macrophage phagocytosis were also induced by zymosan in BMMCs, including interleukin (IL)-1β, IL-4, IL-13, and prostaglandin (PG) E2. Thus, either one of these mediators alone or a combination of them could promote macrophage phagocytosis.
In conclusion, I herein present mast cells as a novel source of IFN-β induced by non-viral TLR ligands and demonstrate its enhancing effect on macrophage phagocytosis, thereby contributing to the resolution of inflammation.
In the past sixty years, excessive water consumption and dam construction have significantly altered natural flow regimes and surface freshwater ecosystems throughout China, resulting in serious environmental problems. In order to balance the competing water demands of humans and the environment and to inform sustainable water management, assessments of anthropogenic flow alterations and their impacts on aquatic and riparian ecosystems in China are needed.
In this study, the first evaluation of quantitative relationships between anthropogenic flow alterations and ecological responses in eleven river basins and watersheds in China was performed, based on data obtained from published case studies. Quantitative relationships between changes in average annual discharge, seasonal low flow and seasonal high flow and changes in ecological indicators (fish diversity, fish catch, vegetation cover, etc.) were analyzed. The results showed that changes in riparian vegetation cover as well as changes in fish diversity and fish catch were strongly correlated with changes in flow magnitude (r = 0.77, 0.66), especially with changes in average annual river discharge. In addition, more than half of the variation in vegetation cover could be explained by changes in average annual river discharge (r² = 0.63), and roughly 50% of the changes in fish catch in the arid and semi-arid region and 60% in the humid region could be related to alterations in average annual river discharge (r² = 0.53, 0.58).
In a supplementary analysis of this study, the first estimation of quantitative relationships between decreases in native fish species richness and anthropogenic flow alterations in 34 river basins and sub-basins in China was conducted. Linear relationships between losses of native fish species and five ecologically relevant flow indicators were analyzed with single and multiple regression models. In the single regression analysis, significant linear relationships were detected for the indicators of long-term average annual discharge (ILTA) and the statistical low flow Q90 (IQ90). In the multiple regressions, no indicator other than ILTA had a significant relationship with changes in the number of fish species, mainly due to collinearity. Two conclusions emerged from the analysis: 1) losses of fish species were positively correlated with changes in ILTA in China, and 2) ILTA was dominant over the other flow indicators included in this research for the given dataset. These results provide a guideline for sustainable water resources management in rivers with a high risk of fish extinction in China.
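The single-regression part of such an analysis reduces to ordinary least squares with one predictor, where r² is the fraction of variance in species loss explained by the flow indicator. A self-contained sketch (the data points are invented for illustration, not the study's 34-basin dataset):

```python
def linregress(x, y):
    """Ordinary least squares for one predictor; returns slope,
    intercept and r² (the explained fraction of variance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    return slope, my - slope * mx, sxy * sxy / (sxx * syy)

# hypothetical basins: percentage change in ILTA vs. number of
# native fish species lost (values invented)
ilta_change = [-10, -25, -40, -55, -70]
species_lost = [2, 5, 9, 12, 16]
slope, intercept, r2 = linregress(ilta_change, species_lost)
```

A negative slope here means larger discharge reductions go with larger species losses, i.e. the positive correlation between flow alteration and fish loss reported above.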
The Compressed Baryonic Matter experiment (CBM) at FAIR and the NA61/SHINE experiment at the CERN SPS aim to study the region of the QCD phase diagram at high net-baryon densities and moderate temperatures using heavy-ion collisions. The FAIR and SPS accelerators cover energy ranges of 2-11 and 13-150 GeV per nucleon, respectively, in the laboratory frame for heavy ions up to Au and Pb. One of the key observables for studying the properties of the matter created in such collisions is the anisotropic transverse flow of particles.
In this work, the performance of the CBM experiment for anisotropic flow measurements is studied with Monte-Carlo simulations using gold ions at SIS-100 energies employing different heavy-ion event generators. Also, procedures for centrality estimation and charged hadron identification are described and corresponding frameworks are developed.
The measurement of the reaction plane angle is performed with the Projectile Spectator Detector (PSD), a hadron calorimeter located at very forward angles. To prevent radiation damage by the high-intensity ion beam, the PSD has a hole in the center to let the beam pass through. Various combinations of CBM detector subsystems are used to investigate possible systematic biases in flow and centrality measurements. The effects of detector azimuthal non-uniformity and of the PSD beam-hole size on the physics performance are studied. The resulting performance of CBM for flow measurements is demonstrated for identified charged-hadron anisotropic flow as a function of rapidity and transverse momentum in different centrality classes.
The measurement techniques developed for CBM were also validated with experimental data recently collected by the NA61/SHINE experiment at the CERN SPS for Pb+Pb collisions at a beam momentum of 30A GeV/c. Compared to the existing data from the NA49 experiment at the CERN SPS, the new data allow a more precise measurement of the anisotropic flow harmonics. The fixed-target setup of NA61/SHINE also allows extending the flow measurements available from STAR at the RHIC beam energy scan (BES) program to a wide rapidity range, up to the forward region where the projectile nucleon spectators appear. In this thesis, an analysis of the anisotropic flow harmonics in Pb+Pb collisions at a beam momentum of 30A GeV/c, collected by the NA61/SHINE experiment in 2016, is presented. Flow coefficients are measured relative to the spectator plane estimated with the Projectile Spectator Detector (PSD). The flow coefficients are obtained as a function of rapidity and transverse momentum in different classes of collision centrality. The results are compared with the corresponding NA49 data and with measurements from the RHIC BES program.
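The core of a flow-coefficient measurement relative to a known plane is the event average v_n = <cos n(φ - Ψ)>. A toy Monte Carlo sketch, assuming (unrealistically) a perfectly resolved spectator plane - a real analysis would divide the raw average by the event-plane resolution:

```python
import math
import random

def sample_phi(v2, psi, rng):
    """Rejection-sample an azimuthal angle from the distribution
    dN/dφ ∝ 1 + 2*v2*cos(2(φ - ψ))."""
    while True:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        target = 1.0 + 2.0 * v2 * math.cos(2.0 * (phi - psi))
        if rng.uniform(0.0, 1.0 + 2.0 * v2) <= target:
            return phi

def measure_v2(phis, psi):
    """Estimate v2 as the average <cos 2(φ - Ψ)> relative to a
    (here exactly known) spectator-plane angle Ψ."""
    return sum(math.cos(2.0 * (phi - psi)) for phi in phis) / len(phis)

# toy event sample with an injected elliptic flow of 5%
rng = random.Random(1)
psi_true, v2_true = 0.7, 0.05
phis = [sample_phi(v2_true, psi_true, rng) for _ in range(20000)]
v2_est = measure_v2(phis, psi_true)
```

With 20000 sampled tracks the estimator recovers the injected v2 within its statistical error; in practice, binning the tracks in rapidity and pT before averaging yields the differential coefficients discussed above.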
The brain vascular system is composed of specialized endothelial cells, which regulate the movement of ions, molecules and cells from the blood lumen to the central nervous system (CNS). Endothelial cells in the brain form the blood-brain barrier (BBB), which is essential to maintain brain homeostasis and to protect the CNS from pathogens and toxins, ensuring proper neurological function. The endothelium, together with other cellular components such as pericytes, astrocytes and the basement membrane, forms the neurovascular unit (NVU), the structural unit of the BBB. Breakdown of the BBB occurs in various neurological disorders, leading to edema and neuronal damage. Therapeutic strategies focusing on factors that regulate the permeability of the BBB may help to treat neurological disorders and facilitate drug delivery to the brain.
Angiopoietins (Ang) are potential candidates for therapeutic targeting of the BBB due to their role in regulating vascular permeability in the periphery. They are key growth factors that control angiogenesis and vessel maturation. Ang-1 and Ang-2 possess similar binding affinities for the Tie2 receptor tyrosine kinase, which is almost exclusively expressed on endothelial cells. Ang-1 is expressed in smooth muscle cells and pericytes and binds in a paracrine manner to Tie2. This results in phosphorylation of the receptor and induction of downstream signaling pathways, leading to vessel maturation via pericyte recruitment and blood vessel stabilization. Ang-2, on the other hand, is stored in Weibel-Palade bodies in endothelial cells and is released upon inflammatory or angiogenic stimuli. Therefore, in mature, stabilized blood vessels, Ang-2 expression is low. Increased levels of Ang-2 are only observed during development or in pathologies such as ischemia, cancer and inflammation. When Ang-2 is released, it acts in an autocrine manner and interferes with Tie2 phosphorylation in a context-dependent way. Antagonizing the receptor results in destabilization of the vessels, often accompanied by reduced numbers of pericytes, leading to myeloid cell infiltration. In conjunction with vascular endothelial growth factor (VEGF), Ang-2 contributes to blood vessel sprouting, whereas in the absence of VEGF it promotes vessel regression. ...
Population genetics is concerned with the influence of random reproduction, recombination, migration, mutation and selection on the genetic structure of a population.
In this thesis, entitled "Ancestral lines under mutation and selection", the interplay of random reproduction, directional selection and two-way mutation is investigated.
To this end, we consider a haploid population in which every individual at every point in time carries exactly one of two types from S := {0,1}. Here 1 is the neutral type and 0 the selectively favoured one. In the diffusion limit of very large populations, we model the process of the frequency of type-0 individuals by a Wright-Fisher diffusion X := (X_t) with mutation and directional selection.
At every time s there is exactly one individual whose descendants will, from some future time t > s onwards, make up the entire population. We call this individual the common ancestor at time s, since all individuals at all times r > t descend from it. Let R_s be its type at time s. We assume that the process X is in equilibrium at time 0 and define the probability that the common ancestor at time 0 has type 0 as h(x) := P(R_0 = 0 | X_0 = x). A representation of h(x) was already found by Fearnhead (2002) and Taylor (2007) and proved there by mostly analytic methods. In Chapter 3 of this thesis we develop a new particle picture, the pruned lookdown ancestral selection graph (pruned LD-ASG), which is of interest in its own right and provides a new probabilistic interpretation of the representation of h(x).
By extending the particle picture to offspring distributions with heavy tails, and with the help of a Siegmund duality, in Chapter 4 we extend the result for h(x) from classical Wright-Fisher diffusions to Lambda-Wright-Fisher diffusions.
In Chapter 6 we establish a connection between the ideas of Taylor (2007), who studied the joint process (X,R), and a process (R,V) considered by Fearnhead (2002), which describes the evolution of the type R of the common ancestor in an environment of V so-called virtual lines. We determine the joint dynamics of the triple (X,R,V). In Chapter 7 we consider a discrete picture with finite population size N and build a bridge to results of Kluth, Hustedt and Baake (2013).
Furthermore, in Chapter 5 of this thesis we develop an algorithm for simulating the types of a sample of m individuals drawn from a Wright-Fisher population with mutation and selection in equilibrium. Using this algorithm, we illustrate the type distribution for various parameter values and sample sizes.
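The Wright-Fisher diffusion with directional selection and two-way mutation that underlies this work can be simulated with a plain Euler-Maruyama scheme. The parameter names and values below are illustrative, and this is not the thesis's Chapter 5 sampling algorithm:

```python
import math
import random

def wright_fisher_path(x0, s, theta10, theta01, t_max, dt=1e-3, rng=None):
    """Euler-Maruyama sketch of a Wright-Fisher diffusion for the
    frequency X of the favoured type 0:
        dX = [s*X(1-X) + theta10*(1-X) - theta01*X] dt
             + sqrt(X(1-X)) dW
    where theta10 is the mutation rate 1->0 and theta01 the rate 0->1
    (names are ours)."""
    rng = rng or random.Random(0)
    x, t = x0, 0.0
    while t < t_max:
        drift = s * x * (1.0 - x) + theta10 * (1.0 - x) - theta01 * x
        diff = math.sqrt(max(x * (1.0 - x), 0.0))
        x += drift * dt + diff * rng.gauss(0.0, math.sqrt(dt))
        x = min(max(x, 0.0), 1.0)        # keep the frequency in [0, 1]
        t += dt
    return x

# empirical mean frequency of the favoured type from independent runs
rng = random.Random(7)
xs = [wright_fisher_path(0.5, s=2.0, theta10=1.0, theta01=1.0,
                         t_max=5.0, rng=rng) for _ in range(200)]
mean_x = sum(xs) / len(xs)
```

With symmetric mutation rates, directional selection biases the stationary distribution toward the favoured type 0, so the empirical mean frequency ends up above 1/2, which is the qualitative effect the representation of h(x) quantifies.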
Unterschiede im Denken und Verhalten zwischen Menschen empirisch zu ermitteln, hat eine lange Tradition in der Differentiellen Psychologie. Forscher dieses Fachgebiets entwickeln spezielle Tests, um Personen hinsichtlich bestimmter psychologischer Merkmale zu klassifizieren. Bekannte Bespiele hierfür sind Intelligenztests, die oft zum Einsatz kommen, um z.B. passende Mitarbeiter für bestimmte Positionen zu selektieren. Dieser differenzielle Ansatz wurde bisher im Bereich der Erforschung neuronaler Grundlagen der Wahrnehmung weitgehend ignoriert. Interindividuelle Unterschiede zwischen Personen wurden meist als Messfehler eingestuft und durch Mittelungsverfahren über die Gruppe herausgerechnet (Kanai and Rees, 2011). Neuere Ergebnisse zeigen jedoch, dass hirnstrukturelle Unterschiede zwischen Personen Unterschiede im Verhalten erklären können (siehe Kanai and Rees, 2011; Kleinschmidt et al., 2012 für einen Überblick). Dieser Ansatz wird mit den hier vorgestellten Studien weiter ausgebaut. Dabei wird der Frage nachgegangen, ob Unterschiede in der Hirnanatomie im Menschen dessen Individualität in der bewussten visuellen Wahrnehmung vorhersagen kann. Insbesondere wird untersucht, inwieweit die Integrationsleistung zwischen den Hirnhälften von spezifischen transkallosalen Faserverbindungen abhängt. Des Weiteren wird überprüft, ob die Größe der frühen visuellen Areale einen Einfluss auf die Reizverarbeitung innerhalb der Hirnhälfte hat. Als Paradigmen verwendeten wir in allen Studien mehrdeutige visuelle Reize. Das besondere an diesen Reizen ist, dass deren Interpretation trotz gleichbleibender physikalischer Darbietung ständig wechselt. Dadurch können Hirnprozesse sichtbar gemacht werden, die unabhängig vom visuellen Reiz mit der bewussten Wahrnehmung einhergehen. Zudem werden die Wechsel zwar von allen Versuchspersonen empfunden, es gibt aber diesbezüglich große Unterschiede zwischen den Beobachtern.
In Kapitel 2 wurden Reize verwendet, die eine Scheinbewegung verursachen (Wertheimer, 1912). Ein passendes Beispiel für dieses Phänomen ist das Daumenkino, bei dem durch die schnelle Abfolge von Standbildern der Eindruck einer Bewegung entsteht. Wir verwendeten in unserer Studie eine spezielle Form der Scheinbewegung, das „Motion Quartet“ (Neuhaus, 1930; Chaudhuri and Glaser, 1991). Bei dieser Form löst die rechteckige Anordnung vierer weißer Quadrate den Eindruck von Bewegung aus. Die Anordnung besteht aus zwei alternierenden Bildern mit jeweils zwei Paaren von diagonal gegenüberliegenden Quadraten (oben links und unten rechts vs. oben rechts und unten links). Die Beobachter sehen entweder eine waagrechte oder eine senkrechte Bewegung. Interessanterweise weiß man aus früheren Studien, dass meistens vertikale Bewegungen wahrgenommen werden, wenn der Abstand zwischen den vier Quadraten gleich ist und die Beobachter den Mittelpunkt des Quartetts fixieren (Chaudhuri and Glaser, 1991). Aufgrund der Organisation des visuellen Systems muss die Sehinformation für waagrecht erscheinende Bewegung über beide Hirnhälften integriert werden, während die senkrecht erscheinende Bewegung nur von einer Hemisphäre verarbeitet wird. Das Quartett erzeugt deshalb in erster Linie senkrechte Bewegung, denn die Kommunikation zwischen den beiden Gehirnhälften braucht länger oder ist aufwändiger als die innerhalb einer Hemisphäre. Allerdings gibt es große Unterschiede zwischen Versuchspersonen, welche Bewegungsrichtung wahrgenommen wird. Chaudhuri und Kollegen hatten bereits zuvor gezeigt, dass jeder Teilnehmer einen individuellen Gleichgewichtspunkt (parity ratio) hat, an dem er beide Bewegungsrichtungen gleich oft wahrnimmt. Dieser Gleichgewichtspunkt spiegelt wieder, wie gut jemand die Informationen aus beiden Hirnhälften integrieren kann. 
For most participants, the horizontal distance must be smaller than the vertical one in order for horizontal and vertical motion to be perceived equally often. Our results in Chapter 2 confirm the findings of Chaudhuri and Glaser (1991) by showing that the equilibrium point varies strongly between participants. Moreover, our results show that the individual equilibrium point is stable over months and thus constitutes a constant trait of a person. In addition, our findings indicate that the equilibrium point is closely related to the structure of specific fiber connections. As previous studies have shown, the motion-processing visual areas (hMT/V5) are primarily responsible for the processing of apparent motion (Sterzer et al., 2002; Sterzer et al., 2003; Sterzer and Kleinschmidt, 2005; Rose and Büchel, 2005). In our study, we found that the estimated diameter of the fiber connections in the corpus callosum linking precisely these regions predicted the individual equilibrium point. This relationship appears to be confined to the motion centers of the visual system: neighboring callosal fiber bundles connecting other visual areas were not associated with the equilibrium point.
In Chapters 3 and 4 we used another ambiguous stimulus. Here, the measurements exploited the phenomenon of binocular rivalry. In binocular rivalry, very different images are presented to the two eyes, of which only one interpretation can be consciously perceived at any given time. In a particular variant of binocular rivalry, the presentation of the stimuli is controlled such that the change in subjective experience from one image to the other spreads in a wave-like manner (Wilson et al., 2001). Wilson and colleagues (2001) had already shown that the transfer of this traveling wave between the hemispheres is delayed. Our results in Chapter 3 confirm these findings and additionally show that this delay varies strongly between observers. Similar to the equilibrium point in Chapter 2, we found this delay to be highly stable over time. Previous studies had shown that the propagation of the traveling wave is closely related to activity in primary visual cortex (Lee et al., 2005, 2007). Our results in Chapter 3 show that a large part of the between-subject variance in this delay can be predicted by the diameter of the transcallosal fiber connections of V1. Again, there was no association with the fiber connections of neighboring visual areas. Besides the interhemispheric delay, the propagation speed of the traveling wave within a hemisphere also showed high temporal stability. This raises the question of whether structural properties of specific visual areas can predict this propagation speed. As shown in Chapter 4, we found a strong relationship between the size of V1 and the propagation of the traveling wave.
This relationship is positive and, as the inclusion of other areas in the analysis showed, specific to primary visual cortex. Accordingly, the larger this area is in a given person, the more slowly the traveling wave generated by binocular competition spreads across the visual field. The figure on page 123 provides a graphical overview of the results of this thesis described above. In summary, this work uses inter- and intrahemispheric integration as an example to demonstrate how closely brain structure and function are linked. For parameters that we as researchers cannot vary experimentally, we resorted to the approach of differential psychology. We exploited the differences that already exist between individuals to draw conclusions about general principles, such as the influence of callosal fiber diameter and of the surface area of specific regions on perception. As we show, even very basic properties of early sensory areas shape our perception. The approach chosen here could be applied in future research to higher functions that define us as humans.
Anankastic relatives
(2016)
This dissertation investigates a semantic puzzle in German concerning certain sentences with an intensional transitive verb and a modalized relative clause modifying its indefinite object. In their unspecific reading, the modal inside the relative clause seems to lack a semantic contribution, and the construal of the relative clause appears spuriously ambiguous between a restrictive and an appositive reading. However, as a thorough discussion of a wide range of data reveals, the embedded modal is actually anaphoric to the matrix attitude and does contribute to the sentence meaning. But then, precisely due to its anaphoricity, this semantic contribution is restricted and in some cases very subtle; in particular, the semantic phenomenon under scrutiny cannot be analyzed as an instance of modal concord. Rather, previous observations on related data involving epistemic anaphoric modals and anankastic conditionals turn out to indicate the direction for an adequate analysis of the relevant semantic observations. For the restrictive construal, a conservative account is developed containing a fine-grained Lewis-Kratzer-style modal semantics, but with a twist: the anaphoricity of the modal is taken care of by restricting it to the ordering source of the matrix verb; moreover, the embedded modal receives a historical modal base. In this way, compositionality issues and problems of cross-identification are avoided. Finally, the non-restrictive construal is analyzed as an instance of modal subordination, exploiting the well-studied parallel between appositive relatives and discourse anaphora.
The anan project is a tool for debugging distributed high-performance computers. The novelty of the contribution is that well-known methods, already used successfully for debugging software and hardware, have been transferred to high-performance computing. In the course of this thesis, a tool named anan was implemented that assists in debugging. It can also be used as a more dynamic form of monitoring. Both use cases have been tested.
The tool consists of two parts:
1. a part named anan, which is operated interactively by the user,
2. and a part named anand, which automatically collects the requested measurements and, if necessary, executes commands.
The anan part runs sensors (small pattern-driven algorithms) whose results are merged by anan. To a first approximation, anan can be described as a monitoring system that (1) can be reconfigured quickly and (2) can measure more complex quantities that go beyond correlations of simple time series.
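The abstract does not fix an implementation, but the split it describes (pattern-driven sensors collecting readings, a client merging them into derived quantities) can be sketched as follows. All names here (`Sensor`, `merge`, the "load" example) are hypothetical illustrations, not the actual anan/anand API:

```python
import statistics

# Hypothetical sketch of the anan/anand split described above.
# anand side: a sensor is a small, pattern-driven measurement routine.
class Sensor:
    def __init__(self, name, pattern, measure):
        self.name = name        # sensor identifier
        self.pattern = pattern  # which nodes/metrics it applies to
        self.measure = measure  # callable returning one reading

    def run(self, node):
        # Only fire on nodes matching the configured pattern.
        if self.pattern in node:
            return {"node": node, "sensor": self.name,
                    "value": self.measure(node)}
        return None

# anan side: merge readings from many nodes into a derived value that
# goes beyond a single time series (here: the spread across nodes).
def merge(readings):
    values = [r["value"] for r in readings if r is not None]
    return {"n": len(values),
            "mean": statistics.mean(values),
            "spread": max(values) - min(values)}

# Usage: an invented "load" sensor over three compute nodes.
load = Sensor("load", pattern="node", measure=lambda n: len(n))
result = merge(load.run(n) for n in ["node01", "node02", "node03"])
print(result)
```

Reconfiguring the tool then amounts to swapping the pattern or the measurement callable, which matches the "quickly reconfigurable monitoring" characterization above.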
The economic success of the World Wide Web makes it a highly competitive environment for web businesses. For this reason, it is crucial for web business owners to learn what their customers want. This thesis provides a conceptual framework and an implementation of a system that helps to better understand the behavior and potential interests of web site visitors by accounting for both explicit and implicit feedback. This thesis is divided into two parts.
The first part is rooted in computer science and information systems and uses graph theory and an extended click-stream analysis to define a framework and a system tool that is useful for analyzing web user behavior by calculating the interests of the users.
The second part is rooted in behavioral economics, mathematics, and psychology and investigates factors that influence different types of web user choices. In detail, a model of the cognitive process of rating products on the Web is defined, and an importance hierarchy of the influencing factors is derived.
Both parts make use of techniques from a variety of research fields and, therefore, contribute to the area of Web Science.
In the past decades, the use and production of chemicals has been on the rise globally due to increasing industrialization and intensive agriculture, resulting in the occurrence of, and ecotoxicological risks from, chemicals of emerging concern (CECs) in aquatic compartments. These risks include changes in community structure that lead to the dominance of single species and to ecosystem imbalance. When disease-transmitting organisms become dominant in the environment, disease transmission increases. For example, the host snails of schistosomiasis, a human trematode disease, are known to be more tolerant to pesticide exposure than their predators. This can result in an increased abundance of snails, which in turn increases disease transmission in the human population.
Kenya, a low-income country, faces many challenges in the provision of clean water and sanitation facilities, a high disease burden, and a growing population, which results in intensive agriculture coupled with pesticide use. Although much research has been carried out on the environmental occurrence and risks of CECs (Chapter 1), most of these studies were conducted in developed countries, and information from Africa is limited. In addition, research in Africa has focused on urban areas, with a limited number of compounds analyzed, mostly in the water phase, and with inadequate information on the effects of CECs on aquatic organisms. To reduce this knowledge gap, this dissertation focused on the identification and quantification of CECs present in water, sediment and snails from western Kenya, and on the contribution of pesticides to the transmission of schistosomiasis.
Chapter 2 gives a summary of the results and discussion of the dissertation. In Chapter 3, a comprehensive chemical analysis was carried out on 48 water samples to identify compounds, spatial patterns and the associated risks for fish, crustaceans and algae using the toxic unit (TU) approach. A total of 78 compounds were detected, with pesticides and biocides being the most frequently detected compound classes. Spatial pattern analysis revealed limited grouping of compounds by land use. Acute risks for crustaceans and algae were driven by one to three individual compounds. The compounds responsible for this toxicity were prioritized as candidates for monitoring and regulation in Kenya.
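The toxic unit approach mentioned above scales each measured concentration by the acute toxicity of that compound for a given organism group (TU = c / EC50), and the risk at a site is then summarized, typically by the maximum or the sum of the TUs. A minimal sketch of that calculation (compound names, concentrations and EC50 values are invented for illustration, not the thesis data):

```python
# Toxic units: TU_i = concentration_i / EC50_i for one organism group.
# All values below are invented illustrations (µg/L).
measured = {"diazinon": 0.5, "diuron": 1.2, "caffeine": 3.0}
ec50_crustacean = {"diazinon": 1.0, "diuron": 160.0, "caffeine": 50000.0}

tu = {c: measured[c] / ec50_crustacean[c] for c in measured}
tu_max = max(tu.values())      # single-compound risk driver
tu_sum = sum(tu.values())      # additive mixture risk
driver = max(tu, key=tu.get)   # compound responsible for tu_max

print(f"max TU = {tu_max:.3f} (driver: {driver}), sum TU = {tu_sum:.3f}")
```

With such invented numbers the sum of TUs is dominated by a single compound, which mirrors the finding above that acute risk was driven by one to three individual compounds.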
Chapter 4 extends Chapter 3 to the CECs present in snails and sediment from the same 48 sites. A total of 30 compounds were found in snails and 78 in sediments, including 68 compounds not previously detected in water. Higher contaminant concentrations were found at agricultural sites than in areas without anthropogenic activities. The highest acute toxicity (TU 0.99) was determined for crustaceans based on the compounds in sediment samples; this risk was driven by diazinon and pirimiphos-methyl. Acute and chronic risks to algae were driven by diuron, whereas fish were found to be at low to no acute risk.
In Chapter 5, the effect of pesticide contamination on schistosomiasis transmission was evaluated using complementary laboratory and field studies. In the field studies, the ecological mechanisms through which pesticides and physicochemical parameters affect host snails, predators and competitors were investigated; the pesticide data were taken from the results of Chapter 3. The overall distribution of grazers and predators was not affected by pesticide pollution. However, within the grazers, pesticide pollution increased the dominance of host snails, whereas the host-snail competitors were highly sensitive to pesticide exposure. In the laboratory studies, macroinvertebrates including Schistosoma host snails, competitors and predators were exposed to six concentration levels of imidacloprid and diazinon. Snails showed higher insecticide tolerance than competitors and predators. Finally, Chapter 6 summarizes the conclusions of this dissertation and places it in a broader context. In this dissertation, a comprehensive chemical characterization and risk assessment of CECs in freshwater systems has been carried out, together with an analysis of the effects of pesticides on schistosomiasis transmission in rural western Kenya. The results show that rural areas are contaminated, posing a risk to aquatic organisms, which contributes to schistosomiasis transmission. This demonstrates the need for regular monitoring and for policy measures to reduce pollutant emissions, which negatively affect both ecological and human health.
A large number of chemicals are constantly introduced into surface water from anthropogenic and natural sources. Although substantial efforts have been made to identify these chemicals (e.g. potentially anthropogenic contaminants) in surface waters using liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS), a large number of LC-HRMS chemical signals, often with high peak intensity, remain unidentified. In addition to synthetic chemicals and transformation products, these signals may also represent plant secondary metabolites (PSMs) released from vegetation through various pathways such as leaching, surface run-off and rain sewers, or input of litter from vegetation. While this may be considered a confounding factor in the screening of water contaminants, it could also contribute to the cumulative toxic risk of water contamination. However, it is hardly known to what extent these metabolites contribute to the chemical mixture of surface waters. Reducing the number of unknowns in water samples by also identifying PSMs present at significant concentrations in surface waters will therefore help to improve monitoring and assessment of water quality potentially impacted by complex mixtures of natural and synthetic compounds. The main focus of the present study was thus to identify the occurrence of PSMs in river waters and to explore the link between the presence of vegetation along rivers and the detection of the corresponding PSMs in river water.
To achieve the goals of the present thesis, two chemical screening approaches, namely non-target and target screening using LC-HRMS, were implemented. (1) Non-target analysis involving a novel approach was applied to associate unknown high-intensity peaks in LC-HRMS with PSMs from surrounding vegetation, by focusing on peaks shared between river water and aqueous plant extracts (Annex A1). (2) LC-HRMS target screening of river waters was performed for about 160 PSMs, which were selected from a large phytotoxin database (Annex A2 and A3) considering their expected abundance in the vegetation, their potential mobility, persistence and toxicity in the water cycle, and the commercial availability of standards.
In non-target screening (Annex A1), a high number of overlapping peaks was found between aqueous plant extracts and water from adjacent locations, suggesting a significant impact of vegetation on the chemical mixtures detectable in river waters. Chemical structures were assigned to 12 pairs of peaks, while for several further pairs the MS/MS spectra matched but no structure suggestion was produced by the software tools used to retrieve candidate structures. Nevertheless, pairs of peaks with matching spectra can be assumed to represent the same chemical structure. The identified compounds belonged to different compound classes such as coumarins and flavonoids, among others. For the identified PSMs, individual concentrations of up to 5 µg/L were measured. The concentration and the number of detected PSMs per sample correlated with rain events and vegetation coverage.
Target screening revealed the occurrence of 33 of the 160 target compounds in river waters (Annex A2 and A3). The identified compounds belonged to different classes such as alkaloids, coumarins and flavonoids, among others. Individual compound concentrations reached several thousand ng/L, with the toxic alkaloids narciclasine and lycorine showing the highest maximum concentrations. The neurotoxic alkaloid coniine from poison hemlock was detected at concentrations up to 0.4 µg/L, while the simple coumarins esculetin and fraxidin occurred at concentrations above 1 µg/L. The occurrence of some PSMs in river water correlated with specific vegetation growing along the rivers, while others were linked to a wide range of vegetation. For example, narciclasine and lycorine were emitted by dominant plant species of the Amaryllidaceae family (e.g. Galanthus nivalis (snowdrop), Leucojum vernum and Anemone nemorosa), while intermedine and echimidine originated from Symphytum officinale. The ubiquitous occurrence of the simple coumarins fraxidin, scopoletin and aesculetin could be linked to their presence in a wide range of vegetation.
Due to the lack of aquatic toxicity data for the identified PSMs (in both target and non-target screening) and the extremely scarce exposure data, no reliable risk assessment was possible. Alternatively, a risk estimation was performed using the threshold of toxicological concern (TTC) concept developed for drinking water contaminants. Many of the identified PSMs exceeded the TTC value (0.1 µg/L); caution should therefore be exercised when using such surface waters for drinking water abstraction or recreational use.
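The TTC screen described above reduces to a simple threshold comparison of each compound's maximum measured concentration against 0.1 µg/L. A minimal sketch (the compound names and concentrations below are illustrative placeholders, not the thesis measurements):

```python
TTC_UG_L = 0.1  # threshold of toxicological concern for drinking water (µg/L)

# Illustrative maximum concentrations (µg/L); not the measured data.
max_conc = {"narciclasine": 2.5, "coniine": 0.4, "scopoletin": 0.05}

# Flag every compound whose maximum concentration exceeds the TTC.
exceeding = sorted(c for c, v in max_conc.items() if v > TTC_UG_L)
print(exceeding)
```

Because the TTC is a generic precautionary threshold rather than a compound-specific effect level, exceedance flags a compound for further attention, not a confirmed risk, which matches the cautious wording above.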
This thesis provides an overview of the occurrence of PSMs in river water impacted by the massive presence of vegetation. Concentrations of many of the identified PSMs are well within the range of those of synthetic environmental contaminants. This study thus adds to a series of recent results suggesting that potentially toxic PSMs occur at relevant concentrations in European surface waters and should be considered in the monitoring and risk assessment of water resources. Aquatic toxicity data for PSMs are largely lacking but are required to include these compounds in the assessment of risks to aquatic organisms and to eliminate risks to human health during drinking water production.
We discuss aspects of the phase structure of a three-dimensional effective lattice theory of Polyakov loops, derived from QCD by strong coupling and hopping parameter expansions. The theory is valid for the thermodynamics of heavy quarks, where it shows all qualitative features of nuclear physics emerging from QCD. In particular, the SU(3) pure gauge effective theory exhibits a first-order thermal deconfinement transition due to spontaneous breaking of its global Z₃ center symmetry. The presence of heavy dynamical quarks breaks this symmetry explicitly and, consequently, the transition weakens with decreasing quark mass until it disappears at a critical endpoint. At non-zero baryon density, the effective theory can be evaluated either analytically by the so-called high-temperature expansion, which does not suffer from the sign problem, or numerically by standard Monte Carlo methods thanks to its mild sign problem. The first part of this work is devoted to a systematic derivation of the effective theory up to 6th order in the hopping parameter κ. This method, combined with the SU(3) link update algorithm, provides a way to simulate the O(κ⁶) effective theory. The second part involves a study of the deconfinement transition of the pure gauge effective theory, with and without static quarks, at all chemical potentials with the help of the high-temperature expansion. Our estimate of the deconfinement transition and its critical endpoint as a function of quark mass and all chemical potentials agrees well with recent Monte Carlo simulations. In the third part, we investigate the N_f ∈ {1,2} effective theory with zero chemical potential up to O(κ⁴). We determine the location of the critical hopping parameter at which the first-order deconfinement phase transition terminates and changes to a crossover.
Our results for the critical endpoint of the O(κ²) effective theory are in excellent agreement with determinations from simulations of four-dimensional QCD with a hopping-expanded determinant by the WHOT-QCD collaboration. For the O(κ⁴) effective theory, our estimate suggests that the critical quark mass increases as the order of the κ-contributions increases. We also compare with full lattice QCD with N_f = 2 degenerate standard Wilson fermions and thus obtain a measure for the validity of both the strong coupling and the hopping expansion in this regime.
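To leading order, effective theories of this type have a generic structure that can be written down schematically; the sketch below shows only the lowest-order terms of the combined strong coupling and hopping expansions and is not the full O(κ⁶) result derived in the thesis:

```latex
Z \;=\; \int \prod_x \mathrm{d}W_x \;
    \prod_{\langle xy\rangle}\!\Bigl[\,1 + 2\lambda_1\,\mathrm{Re}\bigl(L_x L_y^{*}\bigr)\Bigr]\;
    \prod_x \Bigl[\,1 + h_1 L_x + h_1^{2} L_x^{*} + h_1^{3}\Bigr]^{2N_f}
            \Bigl[\,1 + \bar h_1 L_x^{*} + \bar h_1^{2} L_x + \bar h_1^{3}\Bigr]^{2N_f}
```

Here \(L_x = \mathrm{Tr}\,W_x\) is the Polyakov loop, the nearest-neighbor coupling \(\lambda_1 \sim u^{N_\tau}\) arises from the strong coupling expansion, and the static quark couplings \(h_1 = (2\kappa\, e^{a\mu})^{N_\tau}\), \(\bar h_1 = (2\kappa\, e^{-a\mu})^{N_\tau}\) arise from the hopping expansion. In this form the origin of the mild sign problem is visible: for \(\mu \neq 0\) the quark factors are complex, but their deviation from reality is controlled by the small parameters \(h_1, \bar h_1\).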
Hepatitis B is one of the most widespread infectious diseases in the world: it is currently estimated that approximately 296 million people globally are chronically infected with hepatitis B virus (HBV), and the consequences of HBV infection cause more than 620,000 deaths each year. Although safe and effective HBV vaccines have reduced the incidence of new HBV infections in most countries, there are still around 1.5 million new infections each year. HBV remains a major health problem because many countries with a high burden of disease lack a large-scale effective vaccination strategy, many people with chronic HBV infection do not receive effective and timely treatment, and a complete cure for chronic infection is still far from being achieved.
Since its discovery, HBV has been known as an enveloped DNA virus with a diameter of 42 nm. For efficient egress from host cells, HBV is thought to acquire its viral envelope by budding into multivesicular bodies (MVBs) and to escape from infected cells via the exosome release pathway. It is clear that HBV hijacks the host vesicle system to complete self-assembly and propagation by interacting with factors that mediate exosome formation. Consequently, the overlap with exosome biogenesis, with MVBs as the release platform, raises the possibility of the release of exosomal HBV particles. Exosomal vesicles containing virus particles have already been described for several viruses. In light of this, this study explored whether intact HBV virions wrapped in exosomes are released by HBV-producing cells.
First, this study established a robust method for the efficient separation of exosomes from HBV virions by a combination of differential ultracentrifugation and iodixanol density gradient centrifugation. Fractionation of the density gradient revealed that two populations of infectious viral particles can be separated from the culture fluids of HBV-producing cells. The population present in the low-density peak co-migrates with the exosome markers, whereas the population appearing in the high-density fractions consists of classical HBV virions, i.e. rcDNA-containing nucleocapsids enclosed by the HBV envelope.
Subsequently, this low-density population, i.e. the highly purified exosome fraction, was systematically characterized. Exploiting the detergent sensitivity of the exosome membrane and of the outer HBV envelope, disruption of the exosome structure by limited detergent treatment revealed the presence of HBsAg in the exosomes. At the same time, mild and limited NP-40 treatment of highly purified exosomes, followed by a further round of density gradient centrifugation, resulted in the stepwise release of intact HBV virions and naked capsids from the exosomes generated by HBV-producing cells. This implies the presence of intact HBV particles encapsulated by the host membrane.
The presence of exosome-encapsulated HBV particles was consequently also verified by suppressing the morphogenesis of MVBs or exosomes. Impairment of MVB or exosome generation with small-molecule inhibitors significantly inhibited the release of host membrane-encapsulated HBV particles as well. Likewise, silencing of exosome-related proteins diminished exosome output, which compromised the budding efficiency of wrapped HBV.
Moreover, electron microscopy images of ultra-thin sections combined with immunogold staining visualized the hidden virus in the exosomal structure. Additionally, the presence of LHBs on the surface of exosomes derived from HBV-expressing cells was also observed.
As expected, these exosomal membrane-wrapped HBV particles can spread productive infection in differentiated HepaRG cells. In HBV-susceptible cells, owing to the LHBs on the membrane surface, this type of exosomal HBV appeared to be taken up in an NTCP receptor-dependent manner.
Taken together, these data indicate that a fraction of intact HBV virions can be released as exosomes. This reveals a release pathway for HBV that has not been described so far. Exosomes hijacked by HBV act as transporters that influence the dissemination of the virus.
This thesis is concerned with protein structures determined by nuclear magnetic resonance (NMR), and the text focuses on their analysis in terms of accuracy, gauged by the correspondence between the structural model and the experimental data it was calculated from, and in terms of precision, i.e. the degree of uncertainty of the atomic positions. Additionally, two protein structure calculation projects are described...
Due to recent technical developments, it has become evident that the mammalian transcriptome is much more complex than originally expected. Alternative splicing (AS) and the transcription of long non-coding RNAs (lncRNAs) are two phenomena whose frequency has been greatly underestimated. Nowadays it is accepted that almost every gene has at least one alternative isoform and that the number of lncRNAs exceeds that of protein-coding genes.
We built user-friendly web interfaces that can process Affymetrix GeneChip Exon 1.0 ST Arrays (exon arrays) and GeneChip Gene 1.0 ST Arrays (gene arrays) for the analysis of alternative splicing events. Results are presented with detailed annotation information and graphics to identify splice events and to facilitate biological validation. Based on two studies using exon arrays, we show how our tools were used to profile genome-wide splicing changes under silencing of Jmjd6 and under hypoxic conditions. Since gene arrays are not originally intended for AS analysis, we demonstrated their applicability by profiling alternative splicing events during embryonic heart development.
To measure lncRNA expression with exon arrays, we completely re-annotated all probes and built a lncRNA-specific annotation. To demonstrate the applicability of exon arrays in combination with our annotation, we profiled the expression of tens of thousands of lncRNAs. Furthermore, our custom annotation allows for a detailed inspection of lncRNAs and distinguishes between isoforms, as we validated by RT-PCR.
To make the annotation generally available to the research community, we integrated it into an easy-to-use web interface that provides various helpful features for the analysis of lncRNAs.
RNA modifications are widespread in the RNA world. Nevertheless, their functions remain enigmatic. Recent analysis in tRNAs, mRNAs and rRNAs have revealed that apart from enriching their topological potential, these chemical modifications provide an added significant regulatory level to gene expression...
In the present study, possible sources and pathways of the gasoline additive methyl tertiary-butyl ether (MTBE) in the aquatic environment in Germany were investigated. The objective was to clarify some of the questions raised by a previous study on the MTBE situation in Germany. In the USA and Europe, 12 million t and 3 million t of MTBE, respectively, are used as a gasoline additive. The detection of MTBE in the aquatic environment and the potential risk for drinking water resources led to a phase-out of MTBE as a gasoline additive in individual states of the USA. Meanwhile, there is also an ongoing discussion about substituting MTBE in Europe and Germany. The annual usage of MTBE in Germany is about 600,000 t. However, compared to the USA, significantly fewer data exist on the occurrence of MTBE in the aquatic environment in Europe. Because of its physico-chemical properties, MTBE readily vaporizes from gasoline, is water soluble, adsorbs only weakly to the subsurface matrix and is largely resistant to biological degradation. The toxicity of MTBE has not yet been fully investigated, but MTBE in drinking water has low taste and odor thresholds of 20-40 microgram/L. The present study was conducted by collecting water samples and analyzing their MTBE concentrations through a combination of headspace solid-phase microextraction (HS-SPME) and gas chromatography-mass spectrometry (GC-MS). The detection limit was 10 ng/L. The method was successfully tested in the framework of an interlaboratory study and showed recoveries of the reference values of 89% (74 ng/L) and 104% (256 ng/L). The relative standard deviations were 12% and 6%. The investigation of 83 water samples from 50 community water systems (CWSs) in Germany revealed a detection frequency of 40% and a concentration range of 17-712 ng/L.
The detection of MTBE in the drinking water samples could be explained by groundwater pollution and by the pathway river - riverbank filtration - waterworks. Rivers are important drinking water sources, and MTBE is emitted into rivers through a variety of sources. In the present study, potential point sources were investigated, i.e. MTBE production sites/refineries/tank farms and groundwater pollution. For this purpose, the spatial distribution of MTBE in three German rivers with such potential emission sources located close to the rivers was investigated by analyzing 49 corresponding river water samples. The influence of the potential emission sources groundwater pollution and refinery/tank farm was successfully demonstrated in certain stretches of the River Saale and the River Rhine: increasing MTBE concentrations from 24 ng/L to 379 ng/L and from 73 ng/L to 5 microgram/L, respectively, were observed in the stretches investigated in these two rivers. The identification of such emission sources is important for future modeling. Further sources of MTBE emission into surface water are industrial (non-petrochemical) and municipal sewage plant effluents. In the present study, long-term monitoring of water from the River Main (n=67 samples), precipitation (n=89) and industrial (n=34) and municipal sewage plant effluents (n=66) was conducted. The comparison of the data sets revealed that maximum MTBE concentrations in the River Main of up to 1 microgram/L were most probably due to single industrial effluents with MTBE concentrations of up to 28 microgram/L (measured in this study). The average MTBE content of 66 ng/L in the River Main most probably originated from municipal sewage plant effluents and further industrial effluents. Background concentrations of <30 ng/L could be related to the direct atmospheric input via precipitation. A particular aspect of the atmospheric MTBE input is the input of MTBE into river water or groundwater through snow.
In the present study, 43 snow samples from 13 different locations were analyzed for their MTBE content. MTBE was detected in 65% of the urban and rural samples. The concentrations ranged from 11-613 ng/L and were higher than the concentrations in rainwater samples analyzed previously. Furthermore, a temperature dependency and wash-out effects were observed. The atmospheric input of MTBE was in part also visible in the analyzed groundwater samples (n=170). The detection frequencies in non-urban and urban wells were 24% and 63%, respectively, and the median concentrations were 177 ng/L and 57 ng/L. In wells located in the vicinity of sites with gasoline-contaminated groundwater, MTBE concentrations of up to 42 mg/L were observed. The MTBE emission sources and the different pathways of MTBE in the aquatic environment demonstrated in the present study and other works raise the question of whether the use of MTBE in a bulk product like gasoline should be continued in the future. Currently, possible substitutes like ethyl tertiary-butyl ether (ETBE) or ethanol are being discussed.
The East African Rift System (EARS) was initiated in the Eocene, between 50 and 21 Ma, probably under the influence of mantle plumes that caused volcanism, flood basalts and extensional rifting in Ethiopia and the Afar region. As a result of magmatic intrusions and adiabatic decompression melting within the lithosphere, caused by the impact of the Kenya plume, the EARS propagated southward from Ethiopia to Kenya between about 30 and 15 Ma, coinciding with the occurrence of volcanism. The EARS then developed further south along the margins of the Tanzania Craton between 15 and 8 Ma. Previous findings of low-velocity anomalies within the upper mantle and the mantle transition zone indicate an upwelling of hot mantle material in the vicinity of the Afar region and the East African Rift. This study comprises the analysis of P- and S-receiver functions in order to determine further impacts on the lithosphere from below. The aim was to map the topographic undulations of additional boundary layers and to identify their variability due to the rifting processes and the formation of the EARS. The study area included the Tanzania Craton and the surrounding rift branches of the East African Rift System.
The region of the Rwenzori Mountains could be analysed in detail thanks to the large dataset of the RiftLink project. The P-receiver function technique and the H-K stacking method made it possible to determine different vP/vS ratios depending on the tectonic setting in the Rwenzori region: rift shoulders (vP/vS = 1.74), Albert Rift segment (vP/vS = 1.80), Edward Rift segment (vP/vS = 1.87) and Rwenzori Mountains (vP/vS = 1.86). To determine the topography of the Moho, it is necessary to take into account the thickness of the sedimentary layer, the surface topography, azimuthal variations in crustal thickness and the impact of local anomalies. After correcting the Moho depths for these effects, significant variations in Moho topography could be determined. The Moho depths range from 29 to 39 km beneath the rift shoulders of the Albertine Rift. Within the rift valley, the crustal thickness varies between 25 and 31 km in the Edward Rift segment and between 22 and 30 km in the Albert Rift segment. An average crustal thickness of about 26 km within the rift valley indicates the lack of a crustal root beneath the Rwenzoris. Similar variations in crustal thickness were determined with an automatic procedure for analysing S-receiver functions that was developed in this study.
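The H-K stacking step mentioned above (determining crustal thickness H and the vP/vS ratio K from P-receiver functions) can be sketched as a simple grid search in the style of Zhu & Kanamori. The weights, the assumed crustal P velocity and all names below are illustrative choices, not the implementation actually used in the thesis:

```python
import numpy as np

def hk_stack(rf, t, p, h_grid, k_grid, vp=6.5):
    """Grid search over crustal thickness H (km) and vP/vS ratio K by
    stacking receiver-function amplitudes at the predicted arrival times
    of the Ps conversion and its crustal multiples (PpPs, PpSs+PsPs).
    rf: receiver-function amplitudes, t: time axis (s),
    p: ray parameter (s/km), vp: assumed crustal P velocity (km/s).
    Weights follow the commonly used Zhu & Kanamori scheme."""
    w1, w2, w3 = 0.7, 0.2, 0.1
    eta_p = np.sqrt(1.0 / vp**2 - p**2)           # P vertical slowness
    s_grid = np.zeros((len(h_grid), len(k_grid)))
    for i, h in enumerate(h_grid):
        for j, k in enumerate(k_grid):
            vs = vp / k
            eta_s = np.sqrt(1.0 / vs**2 - p**2)   # S vertical slowness
            t_ps   = h * (eta_s - eta_p)          # Ps delay time
            t_ppps = h * (eta_s + eta_p)          # PpPs delay time
            t_ppss = 2.0 * h * eta_s              # PpSs + PsPs delay time
            amp = lambda tau: np.interp(tau, t, rf)
            # multiple PpSs+PsPs arrives with reversed polarity, hence -w3
            s_grid[i, j] = w1*amp(t_ps) + w2*amp(t_ppps) - w3*amp(t_ppss)
    i, j = np.unravel_index(np.argmax(s_grid), s_grid.shape)
    return h_grid[i], k_grid[j], s_grid
```

The maximum of the stack surface picks the (H, K) pair at which all three converted phases align, which is why the method constrains thickness and vP/vS jointly rather than separately.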
The S-receiver functions are created by applying a rotation criterion to rotate the Z, N and E components into the L, Q and T components. Trial rotations with different incidence and azimuth angles are performed to determine the correct rotation angles. The latter are identified by the rotation criterion, namely the amplitude ratio of the converted Moho signal to the direct S/SKS-wave signal: the L component points correctly in the direction of the incident shear wave when this amplitude ratio reaches its maximum. After analysing the frequency content of the receiver functions in order to sort out harmonic and long-period traces, the individual Moho signals are checked for consistency in order to remove atypical signals. To increase the signal-to-noise ratio, the S-receiver functions are stacked. For this purpose, the signals of the direct shear waves must originate from similar epicenters; on the basis of similar ray paths, the receiver functions then show comparable waveforms and converted signals. To perform the stacking procedure, it is necessary to merge the datasets of adjacent stations in order to obtain a sufficient number of receiver functions. This is based on the assumption that incident seismic waves arriving at adjacent stations along similar propagation paths sample, to some extent, the same subsurface structures. This approach accounts for the fact that the converted signals do not result exclusively from the piercing points at the boundary layers; further signals originate from conversions at the boundary layer within the Fresnel zone. The piercing points are derived from the significant signals in the receiver functions. Depending on the order of arrival of the converted phases on the traces, the signals are attributed to the theoretical discontinuities DIS1, DIS2, DIS3 and DIS4.
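The trial-rotation step described above can be illustrated with a minimal sketch: a rotation from Z/N/E into L/Q/T for given incidence and back-azimuth angles, and a search over candidate incidence angles. Sign conventions differ between processing packages; the convention below is one common choice, and the search criterion (minimising direct-S energy on L over the whole trace) is a simplification of the amplitude-ratio criterion used in the thesis:

```python
import numpy as np

def rotate_zne_to_lqt(z, n, e, inc_deg, baz_deg):
    """Rotate Z, N, E traces into the ray-based L, Q, T system for a
    given incidence angle and back-azimuth (one common convention:
    L along the incident ray, Q in the vertical ray plane, T normal
    to it)."""
    i = np.radians(inc_deg)
    b = np.radians(baz_deg)
    # horizontals to radial/transverse, then tilt (Z, R) by the incidence
    r = -n*np.cos(b) - e*np.sin(b)
    t = n*np.sin(b) - e*np.cos(b)
    l = z*np.cos(i) + r*np.sin(i)
    q = -z*np.sin(i) + r*np.cos(i)
    return l, q, t

def find_incidence(z, n, e, baz_deg, candidates=np.arange(5, 46)):
    """Trial rotations: for an incident SV wave, L carries no direct-S
    energy when the rotation angle is correct, so pick the candidate
    incidence angle minimising the L energy (here over the whole trace
    of a synthetic; real data would use a window around the S arrival)."""
    scores = []
    for inc in candidates:
        l, _, _ = rotate_zne_to_lqt(z, n, e, inc, baz_deg)
        scores.append(np.sum(l**2))
    return candidates[int(np.argmin(scores))]
```

For a noise-free synthetic SV arrival built with the same convention, the search recovers the true incidence angle exactly, since the residual L amplitude scales with sin of the angle error.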
However, partly due to the low signal-to-noise ratios, it is difficult to identify the real conversions on the traces and to ensure that the converted signals are attributed to the correct boundary layers. For this reason, it is necessary to check the conversion depths for mutual consistency. In the case of inconsistent conversion depths, the corresponding signals are either reassigned to another seismic boundary layer or removed from the dataset. To verify the functionality of the automatic procedure and to determine its ability to resolve two boundary layers, several models are tested, including horizontal and dipping discontinuities. To resolve distinct discontinuities, their depths must differ by at least 60 km; otherwise, due to the overlapping depth ranges of the different boundary layers, the converted signals cannot be separated from each other, and signals that originate from different discontinuities are attributed to a single one. Further tests including break-off edges of seismic discontinuities are performed to check the attribution of the converted signals to the discontinuities. Owing to the varying number of boundary layers, the converted signals cannot simply be attributed to the discontinuities according to the order of their arrival on the traces; their attributions must be corrected in order to resolve the boundary layers.
The crust-mantle boundary and further discontinuities within the lithospheric mantle are investigated by applying this automatic procedure. Depending on the tectonic setting, the conversion depths of the Moho range from about 30 – 45 km beneath the western rift shoulder, to 20 – 35 km within the rift valley, to 30 – 40 km beneath the eastern rift shoulder. The long wavelengths of the shear waves hamper the correct identification of the converted phases in the S-receiver functions. With respect to the relative differences in conversion depth, the topographic undulations of the crust-mantle boundary are consistent with the Moho depths derived from P-receiver functions. In contrast to the Rwenzori region, it is difficult to fully resolve the trend of the Moho in the remaining area of the East African Rift due to the small dataset provided by IRIS. The results exhibit an increase in crustal thickness to up to 45 km in the regions of Cenozoic volcanics such as Virunga, Kivu, Rungwe and Kenya. The greatest Moho depths of more than 50 km are located near Mount Kilimanjaro. In addition to the Moho, the analysis of the S-receiver functions revealed two further boundary layers at depths of 60 – 140 km and 110 – 260 km, which are associated with a mid-lithospheric discontinuity and the lithosphere-asthenosphere boundary (LAB), respectively. The shallowest conversion depths of the LAB are confined to small-scale regions within the rift branches, namely the northern Albertine Rift, the Chyulu Hills and the Mozambique Belt, which surround the Tanzania Craton. The greater thickness of the lithosphere beneath the cratonic terrain indicates that the Tanzania Craton has not been significantly eroded. However, there are indications that the lithosphere beneath the craton and the rift branches is penetrated by ascending asthenospheric melts to depths of up to 140 km and 60 km, respectively.
The top of the ascending melts is associated with the occurrence of the mid-lithospheric discontinuity. The shallowest conversion depths of this boundary layer (60 – 90 km) are related to the rifted areas of the EARS and the Cenozoic volcanic provinces, which are located along the Albertine Rift, the Kenya Rift and the Rukwa-Malawi rift zones. The deepest conversion depths of up to 140 km are related to the Rwenzori Belt, the Ugandan Basement Complex and the interior of the Tanzania Craton.
The aim of this work was to establish time-of-flight mass spectrometry as a new analytical method for the instrumental analysis of halogenated trace gases in air. The underlying motivation is that anthropogenic emissions of many representatives of this substance class have a negative impact on the environment: in the atmosphere, these substances or their degradation products act as catalysts for stratospheric ozone depletion and increase the Earth's radiative forcing through absorption of electromagnetic radiation in the so-called atmospheric window. To quantify these effects and their consequences, it is necessary to monitor the atmospheric concentrations and trends of these substances. Only then can countermeasures such as production regulations be planned and evaluated. In combination with inverse modeling, conclusions can also be drawn about the quantities actually emitted. This places high demands on the analytical method: very small amounts of these gases must be quantified very precisely in order to detect even weak trends. In addition, the analytical method must be able to keep pace with the growing number of known substances that need to be monitored. Particularly for the latter aspect, time-of-flight mass spectrometry offers a decisive advantage over the "conventional" method, quadrupole mass spectrometry: it records the entire mass spectrum without sacrificing sensitivity. To determine atmospheric mixing ratios of substances in the range of pmol mol−1 to fmol mol−1, a quadrupole mass spectrometer must be operated in single ion monitoring mode; this achieves high sensitivity, but records only the intensity of one specific mass-to-charge ratio (in short: mass) at any given time.
A time-of-flight mass spectrometer, in contrast, extracts ions at a frequency in the kilohertz range and records the complete time-of-flight spectrum, and thus mass spectrum, for every extraction.
The task of this work was to set up a time-of-flight mass spectrometer with an upstream sample-enrichment unit and a gas chromatograph for separating the substance mixture prior to detection, and to develop tools for data evaluation. To prepare for future field deployment, the setup was to be as compact, mobile and fully automated as possible. Subsequently, sensitivity, precision and dynamic measurement range were to be tested and optimized, and the applicability to the analysis of halogenated trace gases demonstrated. The results of the instrument development presented in this thesis are reflected in three publications, which, in thematic order, address sample enrichment (Obersteiner et al., 2016b), the comparison of quadrupole and time-of-flight mass spectrometry (Hoker et al., 2015), and the properties and application of the new setup (Obersteiner et al., 2016a). With these papers, the Engel group is the first worldwide to routinely perform high-precision analysis of halogenated trace gases by time-of-flight mass spectrometry. The next step is the transition from laboratory application to field measurements, e.g. in the form of ground-based in situ analysis of tropospheric air masses at the Taunus Observatory on the Kleiner Feldberg. Since there is as yet no measurement station in Germany for the analytical question described here, a significant improvement in the monitoring of halogenated greenhouse gases and ozone-depleting substances in Europe could be achieved. Furthermore, an aircraft application would be conceivable in the future, which, in addition to the substance range covered by the time-of-flight mass spectrometer, could also benefit from its high possible spectral acquisition rate. In combination with high-speed gas chromatography, a previously unattained time resolution of atmospheric sampling by gas chromatography-mass spectrometry could be achieved.
The membrane protein Green Proteorhodopsin (GPR), found in an uncultured marine γ-proteobacterium, is a retinal-binding protein with a conserved structure of seven transmembrane helices (A-G). The retinal is bound to a conserved lysine residue (K231) in helix G via a Schiff base linkage. GPR belongs to the widespread family of microbial rhodopsins and functions as a light-dependent outward proton pump that bacteria may utilize to establish a proton gradient across the cellular membrane. Proton pumping takes place after photon absorption, when GPR goes through a series of conformational changes, termed the photocycle, causing a proton to be transported across the cellular membrane from the intracellular to the extracellular space. Proton transport is further mediated by the highly conserved functional residues D97 and E108, which act as the primary proton acceptor and the primary proton donor for the protonated Schiff base, respectively. Another functionally important residue is the highly conserved H75 in helix B. It forms an intramolecular cluster with D97 and is responsible for the high pKa value of the primary proton acceptor, stabilized by a direct interaction between D97 and H75.
Different proteorhodopsin variants are globally distributed and colour-tuned to their environment, depending on the water depth at which they occur. A single residue in the retinal binding pocket at position 105 determines the absorption wavelength of the protein. GPR (from eBAC31A08) contains a leucine at position 105, while BPR (blue proteorhodopsin, from Hot75m4), found in deeper waters, possesses a glutamine. Although GPR shows 79% sequence identity with BPR, the single amino acid substitution L105Q in GPR is sufficient to switch its absorption maximum to that of BPR.
Protein oligomerisation describes the association of subunits (protomers) through non-covalent interactions, forming macromolecular complexes. It is an important structural characteristic of microbial rhodopsins, contributing to structural stability and promoting tight packing of the protomers in the bacterial membrane. GPR was shown to assemble into radially arranged oligomers, mainly pentamers and hexamers. No high resolution crystal structure of the whole GPR complex is available, but the structurally related BPR (Hot75m4) was successfully crystallized, showing pentameric oligomers.
The BPR crystal structure model provides detailed information about complex assembly in the proteorhodopsin family: it reveals the oligomeric arrangement and identifies residues at the protomer interfaces that form cross-protomer contacts, which is valuable information for a detailed analysis of cross-protomer interactions in GPR oligomers.
Based on the knowledge of GPR and BPR oligomeric complexes, the aim of this study is to analyse specific cross-protomer contacts and to characterize the functional role of GPR oligomerisation. This includes the identification of residues that are part of charged cross-protomer contacts and play an important role in the formation of the GPR oligomeric complex. Furthermore, this study provides a detailed characterization of a potentially functional cross-protomer triad between the residues D97-H75-W34, which was detected in the BPR structural model. The focus lies especially on the functional role of H75, which is highly conserved and positioned between the primary proton acceptor D97 and W34 across the protomer interface. In summary, this study addresses GPR oligomerisation via specific cross-protomer contacts and its potential role in the functional mechanism of the protein.
The fundamental technique used in this study is solid-state NMR. In addition, an elaborate characterization of GPR oligomerisation was carried out using a variety of biochemical methods and mutational approaches. Solid-state NMR is a powerful biophysical method for analysing membrane proteins in their native lipid environment and can be used to obtain diverse information about the structure, molecular dynamics and orientation of a protein in the lipid bilayer.
Solid-state NMR inherently has low sensitivity. In order to detect the low number of spins, DNP signal enhancement is of particular importance in this study. DNP is performed under cryogenic conditions and drastically enhances the solid-state NMR signal by transferring polarization from highly polarized electrons to the nuclear spins.
By applying these methods and techniques to GPR oligomers, this study reveals new insights into specific cross-protomer interactions in the complex. First, the oligomeric states of GPR were determined for the specific experimental conditions used in this study: LILBID-MS, BN-PAGE and SEC analysis identified the pentameric state as dominant for GPR. Furthermore, specific interactions across the protomer interface, which drive GPR oligomerisation, were identified. This was done by creating mixed 13C/15N-labelled complexes, which show a unique isotope labelling pattern across their protomer interfaces. Solid-state NMR 13C-15N correlation spectroscopy (TEDOR) was used to identify through-space dipole-dipole couplings, which indicate specific cross-protomer contacts. The results indicated that the residues R51, D52, E50 and T60 are important for GPR oligomerisation, and further analysis via single mutations of these residues showed a severe impact on the GPR oligomerisation behaviour.
The functional importance of GPR oligomerisation was analysed by DNP-enhanced solid-state NMR on the cross-protomer D97-H75-W34 triad. The DNP cryogenic conditions made it possible to trap GPR in distinct stages of the photocycle. It could be shown that trapping GPR in a specific intermediate leads to a drastic conformational effect on the highly conserved H75 residue. Furthermore, DNP-enhanced solid-state NMR was used to characterize the cross-protomer contact between H75 and W34. Mutations of W34 showed that this cross-protomer interaction is highly important for the functionality of the protein, as mutants such as W34E exhibited a reversed proton transport across the bacterial membrane.
In summary, this study represents a detailed analysis of GPR cross-protomer interactions and sheds light on the origin and functional importance of oligomeric complex formation in this microbial rhodopsin.
Analysis of coding principles in the olfactory system and their application in cheminformatics
(2007)
Our sense of smell mediates our perception of the chemical world. Over the course of evolution, mechanisms have developed in our olfactory system that are probably optimally adapted to this task. Analysing these processing strategies promises insights into efficient algorithms for encoding and processing chemical information, whose development and application lie at the core of cheminformatics. In this work, we approach the decoding of these mechanisms through computational modelling of functional units of the olfactory system. We pursued an interdisciplinary approach involving the fields of chemistry, neurobiology and machine learning.
Fas Ligand (FasL; CD95L; CD178; TNFSF6) is a 40 kDa glycosylated type II transmembrane protein of 279 aa in mice and 281 aa in humans that belongs to the tumor necrosis factor (TNF) family. The extracellular domain (ECD) harbors a TNF homology domain, the receptor binding site, a motif for self-assembly and trimerization, several putative N-glycosylation sites and a metalloprotease cleavage site. The cytoplasmic tail of FasL is the longest of all TNF ligand family members and contains several conserved signaling motifs, such as a putative tandem casein kinase I phosphorylation site, a unique proline-rich domain (PRD) and phosphorylatable tyrosine residues (Y7 in mice; Y7, Y9, Y13 in humans). The FasL/Fas system is renowned for the potent induction of apoptosis in the receptor-bearing cell and is especially important for immune system functions. It is involved in the killing of target cells by natural killer (NK) and cytotoxic T cells, in the (self-)elimination of effector cells following the proliferative phase of an immune response (activation-induced cell death; AICD), in the maintenance of immune-privileged sites and in the induction and maintenance of peripheral tolerance. Owing to its potent pro-apoptotic signaling capacity and important functions, FasL expression and activity are tightly regulated at the transcriptional and posttranscriptional levels and restricted to few cell types, such as immune effector cells and cells of immune-privileged sites. In contrast, Fas is expressed in a variety of tissues including lymphoid tissues, liver, heart, kidney, pancreas, brain and ovary. In addition to its pro-apoptotic function, the FasL/Fas system can also elicit non-apoptotic signals in the receptor-expressing cell. Among others, Fas signaling exerts co-stimulatory functions in the immune system, e.g. by promoting survival, activation and proliferation of T cells.
Besides the capacity to deliver a signal into receptor-bearing cells (‘forward signal’), FasL can receive and transmit signals into the ligand-expressing cell. This phenomenon has been described for several TNF family ligands and is known as ‘reverse signaling’. The first evidence for the existence of reverse signaling into FasL-bearing cells stems from two studies that demonstrated either co-stimulation of murine CD8+ T cell lines by FasL cross-linking or inhibition of activation-induced proliferation of murine CD4+ T cells. In both cases, the observed changes of proliferative behaviour critically depended on the presence of a signaling-competent FasL. Almost certainly, the FasL ICD is functionally involved in signal-transmission: (i) The ICD is highly conserved across species and harbors several signaling motifs, most notably a unique PRD. (ii) Numerous proteins have been identified which interact with the FasL PRD via their SH3 or WW domains and regulate various aspects of FasL biology, such as FasL sorting, storage, cell surface expression and the linkage of FasL to intracellular signaling pathways. (iii) Post-translational modifications of the ICD have been implicated in the sorting of FasL to vesicles and the FasL-dependent activation of Nuclear factor of activated T cells (NFAT). (iv) Proteolytic processing of FasL liberates the ICD and allows its translocation into the nucleus where it might influence gene transcription. (v) It could be shown that overexpression of the FasL ICD is sufficient to initiate reverse signaling upon concomitant T cell receptor (TCR) stimulation and ICD cross-linking. Conflicting data on the consequences of FasL reverse signaling exist, and costimulatory as well as inhibitory functions have been reported. These discrepancies probably reflect the use of artificial experimental systems. 
Neither the precise molecular mechanism underlying FasL reverse signaling nor its physiological relevance have been addressed at the endogenous protein level in vivo. Therefore, a ‘knockout/knockin’ mouse model in which wildtype FasL was replaced with a deletion mutant lacking the intracellular portion (FasL Delta Intra) was established in the group of PD Dr. Martin Zörnig. In the present study, FasL Delta Intra mice were phenotypically characterized and employed to investigate the physiological consequences of FasL reverse signaling at the molecular and cellular level. To ensure that FasL Delta Intra mice represent a suitable model to study the consequences of FasL reverse signaling, we demonstrated that activated lymphocytes from homozygous FasL Delta Intra or wildtype mice express comparable amounts of (truncated) FasL at the cell surface. The truncated protein retains the capacity to induce apoptosis in Fas receptor-positive target cells, as shown by co-culture assays with FasL-expressing activated lymphocytes and Fas-sensitive target cells. Additionally, systematic screening of unchallenged mice did not reveal any phenotypic abnormalities. Notably, signs of a lymphoproliferative autoimmune disease associated with FasL deficiency could not be detected. As several reports have implicated FasL reverse signaling in the regulation of T cell expansion and activation, the proliferation of lymphocytes isolated from FasL Delta Intra and wildtype mice in response to antigen receptor stimulation was investigated. Using CFSE dilution assays, it could be demonstrated that the proliferative response of CD4+ T cells, CD8+ T cells and B cells was enhanced in the absence of the FasL ICD. Interestingly, this effect was most pronounced in B cells and could only be detected in CD4+ T cells after depletion of CD4+CD25+ regulatory T cells. To our knowledge, this is the first time that FasL reverse signaling has been demonstrated in B cells.
In a series of experiments, the activation of several pathways known to play important roles in signal transmission initiated by antigen receptor triggering was assessed. As a molecular correlate for the observed enhancement of activation-induced proliferation, Extracellular signal-regulated kinase (ERK1/2) phosphorylation was significantly increased in FasL Delta Intra mice following antigen receptor crosslinking. Surprisingly, B cell stimulation led to a comparable extent of activating phosphorylations on S338 in c-Raf and S218/S222 in MEK1/2 in cells isolated from wildtype and FasL Delta Intra mice, indicating that the Mitogen-activated protein kinases (MAPKs) upstream of ERK1/2 (Raf-1 and MEK1/2) apparently do not contribute to the differential regulation of ERK1/2. Experiments in which activation-induced Akt phosphorylation (S473) was quantified also did not suggest a participation of Phosphoinositide 3-kinase (PI3K)/Akt signals in this process. Instead, further characterization of the upstream pathway revealed an involvement of Phospholipase C gamma (PLC gamma) and Protein kinase C (PKC) signals in FasL-dependent ERK1/2 regulation. Previous studies in our group revealed a Notch-like processing of FasL, resulting in the transcriptional regulation of a reporter gene. Furthermore, an interaction of the FasL ICD with the transcription factor Lymphoid enhancer-binding factor-1 (Lef-1) that affected Lef-1-dependent reporter gene transcription could be demonstrated. Therefore, a molecular analysis of activated lymphocytes was performed to identify FasL reverse signaling target genes. The differential expression of promising candidates was verified by quantitative real-time PCR (qRT-PCR), which showed that the transcription of genes associated with lymphocyte proliferation and activation was increased in FasL Delta Intra mice compared to wildtype mice.
Interestingly, an extensive regulation of Lef-1-dependent Wnt/beta-Catenin signaling-related genes was found. Lef-1 mRNA (RT-PCR) and protein (intracellular FACS staining) could be detected in mature B cells, suggesting the possibility of a FasL ICD-mediated inhibition of Lef-1-dependent gene expression in these cells, initiated by Notch-like processing of FasL. To investigate the consequences of FasL reverse signaling in vivo, a potential participation of the FasL ICD in the regulation of immune responses upon various challenges was analyzed. In experiments investigating thymocyte proliferation or the expansion of antigen-specific T cells following a challenge with the superantigen Staphylococcal enterotoxin B (SEB), with Lymphocytic choriomeningitis virus (LCMV) or with Listeria monocytogenes, comparable results were obtained with wildtype and FasL Delta Intra mice. Likewise, the recruitment of neutrophils in a thioglycollate-induced model of peritonitis was not affected by deletion of the FasL ICD. These findings might reflect regulatory mechanisms operating in vivo, such as control exerted by regulatory T cells. Along these lines, proliferative differences in CD4+ T cells could only be detected ex vivo after depletion of CD4+CD25+ regulatory T cells. Furthermore, several in vitro studies indicate that retrograde FasL signals can be observed under conditions of suboptimal lymphocyte stimulation, but not when the TCR is optimally stimulated. Therefore, the potent initiation of antigen receptor signaling by stimuli like SEB or LCMV might have masked inhibitory FasL reverse signaling in these experiments. In agreement with the observed hyperactivation of lymphocytes in the absence of the ICD ex vivo, the increase in germinal center (GC) B cells following immunization with the hapten 4-hydroxy-3-nitrophenylacetyl (NP) and the number of antibody-secreting plasma cells (PCs) were significantly higher in FasL Delta Intra mice.
The larger quantity of PCs correlated with increased titers of NP-binding, i.e. antigen-specific, IgM and IgG1 antibodies in the serum of FasL Delta Intra mice after immunization. These data suggest that FasL reverse signaling exerts immunomodulatory functions. Supporting this notion, a model of Ovalbumin-induced allergic airway inflammation revealed an involvement of retrograde FasL signals in the recruitment of immune effector cells into the lung and in the activation of T cells following exposure of mice to Ovalbumin. Together, our ex vivo and in vivo findings based on endogenous FasL protein levels demonstrate that FasL ICD-mediated reverse signaling is a negative modulator of certain immune responses. It is tempting to speculate that FasL reverse signaling might be a fine-tuning mechanism to prevent autoimmune diseases, a theory which will be tested in adequate mouse models in the future.
Drought stress is one of the major abiotic factors diminishing crop productivity worldwide. In the course of climate change, regions that already experience dry seasons today will suffer from prolonged drought periods and water shortage. These climatic changes will have an impact not only on the regional flora and fauna but also on the people inhabiting these areas. It is therefore of great importance to understand the reactions of plants to drought stress in order to support breeding and biotechnological approaches towards new, robust cereal cultivars growing under low water regimes. In this dissertation, four grasses of the genus Panicum, P. bisulcatum (C3), P. laetum, P. miliaceum and P. turgidum (all C4 NAD-ME), were subjected to drought stress. The plants' diverse reactions were investigated on a physiological as well as on a molecular level to deepen the understanding of drought stress responses. Drought stress was imposed for a species-specific period until a relative leaf water content (RWC) of ~50 % was reached in each grass. Physiological measurements were conducted on leaves with a RWC of ~50 %, investigating chlorophyll a fluorescence parameters with a Plant Efficiency Analyzer (PEA) and gas exchange parameters, such as photosynthesis rate and stomatal conductance, with a Gas Fluorescence Chamber (GFS-3000). Subsequent molecular analyses were conducted on leaf samples taken at RWC = 50 %, analysing different proteins and the transcriptome of the Panicum species. The physiological measurements revealed a higher photosynthesis rate for the C4 grasses under drought stress, with no significant differences between the C4 species. The water use efficiency was also significantly higher in the C4 species in comparison to the C3 species, independent of the water regime, supporting results from the literature. The chlorophyll a measurements revealed the strongest adaptation to water shortage in the C4 species P. turgidum, followed by the C3 species P. bisulcatum.
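The RWC ≈ 50 % stress criterion used above is conventionally computed from the fresh, turgid and dry weights of a leaf sample; the abstract does not spell the formula out, so the standard definition is sketched here for reference:

```python
def relative_water_content(fresh_w, turgid_w, dry_w):
    """Standard relative water content (RWC) of a leaf sample, in %:
    RWC = (fresh weight - dry weight) / (turgid weight - dry weight) * 100.
    The turgid weight is measured after fully rehydrating the leaf,
    the dry weight after oven-drying it."""
    return 100.0 * (fresh_w - dry_w) / (turgid_w - dry_w)
```

For example, a leaf weighing 0.8 g fresh, 1.2 g fully turgid and 0.2 g dry has an RWC of 60 %.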
It has been shown before (GHANNOUM 2009) that the C4 photosynthetic apparatus is more prone to drought stress than the C3 apparatus, despite its higher water use efficiency. The results also suggested that the strong adaptation of P. turgidum to drought stress arose from its ability to recover from it (all JIP test parameters showed no significant differences between control and recovery samples). The additional down-regulation of PS II, but not of PS I, under drought stress also helped the plant to endure times of water shortage and facilitated recovery when water became available again. Protein analyses of the content of PEPC, OEC and RubisCO (LSU and SSU) revealed no changes. Dehydrin 1, in contrast, was strongly up-regulated under drought stress and recovery in all four Panicum species. The stable content of the OEC protein was therefore not the cause of the rising K peaks measured by chlorophyll a fluorescence, and a reduced OEC activity was assumed instead. Transcriptomic analyses revealed a myriad of differentially regulated tags. Because the genomes are not yet sequenced, tags could only partially be annotated to their specific genes (8% at most, for P. turgidum). Diverse methods were therefore used to annotate the most highly regulated tags to their genes and gene products. Special emphasis was put on the regulation of five gene products, confirming the regulation patterns from the HT-SuperSAGE analyses. Interestingly, one protein, NCED1, was down-regulated under stress conditions, in contrast to results from the literature. It is therefore of great importance to investigate longer-lasting drought to understand the full range of drought stress adaptation. Future genome sequencing projects might also include the Panicum species investigated in this dissertation, and important gene candidates with no hits (possibly completely new to the research community) might help breeding and biotechnology approaches to produce more drought-resistant crop species.
Ribosome biogenesis is best understood in the yeast Saccharomyces cerevisiae. For human and other mammalian ribosome biogenesis, it has been shown that the basic principles are conserved from yeast, but additional features have been reported. Our understanding of the interplay between proteins and RNA in human ribosome biogenesis is far from complete.
The present study focused on the analysis of the human ribosome biogenesis co-factors PWP2, EMG1 and Exportin 5 (XPO5) to understand the degree of conservation of ribosome biogenesis. The proteins were characterized with respect to their localization and interaction partners. For the early 90S co-factor PWP2, it was possible to pull down and identify the human UTP-B complex by MALDI mass spectrometry. Besides the orthologues of the complex members known in yeast (TBL3, WDR3, WDR36, UTP6, UTP18), the human UTP-B complex contains additional components, such as the DEAD-box RNA helicase DDX21, which lacks a yeast orthologue; the complex is thus not only conserved from yeast to humans but extended. DDX21 was localized to the nucleus, assembled into the native UTP-B complex and also co-precipitated with other UTP-B complex members, presumably extending the functions of this complex in ribosome biogenesis.
A similar phenomenon was observed for the 90S co-factor EMG1, an RNA methyltransferase whose mutation of aspartic acid to glycine at position 86 causes the Bowen-Conradi syndrome. This study revealed that the mutant, EMG1-D86G, clearly lost its nucleolar localization and co-precipitated with histones for reasons that remain unknown.
A participation of the nuclear export receptor XPO5 in human ribosome biogenesis was shown in this study. Pulldown analyses, sucrose density gradients, and UV crosslinking and analysis of cDNAs of XPO5 revealed its involvement in pre-60S subunit maturation. Moreover, besides the known pre-miRNA and tRNA substrates for nuclear export, XPO5 crosslinked to snoRNAs. XPO5 was further demonstrated to interact with the miRNA Let-7a, which has an important regulatory function for MYC, a transcription factor required for ribosome biogenesis.
All results support a role of these proteins in human ribosome biogenesis; it therefore seems that the biogenesis of ribosomes in human cells requires additional components, such as DDX21 and XPO5.
Fossils are often anatomically and functionally compared to extant model taxa such as Pan, Gorilla, Pongo and modern Homo sapiens in order to place the respective fossils into the (taxonomic) context of human evolution. This requires knowledge of extant hominid anatomy, of which traits differ between sexes, populations, (sub-)species and taxa, and of whether these differences are pronounced enough to separate the respective groups. Dental and mandibular structures have been of particular interest in many paleoanthropological studies, simply because these morphological structures are the most abundant in the human fossil record.
Various studies have addressed questions regarding taxonomy, variation and sexual dimorphism of hominid taxa with regard to dental and mandibular size. Tooth size, however, has almost exclusively referred to crown size, with little focus on root size. The focus on tooth crowns is partly due to roots being embedded in mandibular bone, which makes access difficult. With the help of micro-computed tomography (μCT) it is now possible to render virtual 3D models of dental roots and measure these models without harming the original specimens. In addition, measurements are much more precise using μCT data than previous techniques such as 2D x-rays. The present study used 3D models of 231 (first, second and third) molars and 80 mandibles of 53 Pan troglodytes verus (consisting of individuals from the Taï and Liberia populations), 14 Gorilla sp. and 13 Pongo sp. individuals to investigate molar and mandibular sizes within, and between, taxa and populations with regard to sexual dimorphism, variability and taxonomic value. Molar root size was assessed by applying 7 measurements to each molar. Mandibular size was investigated using three different approaches: overall mandibular size, mandibular robusticity (at each molar position) and 15 linear measurements. Overall mandibular size and root measurements were used to investigate the relationship between dental and mandibular size. Furthermore, based on the data acquired from great apes, it was examined how well fossil mandibles (including their dentition) of Australopithecus africanus, Paranthropus sp. and Homo sp. match one or multiple extant hominid taxa. Overall, molar root and mandibular metrics are suitable to differentiate between sexes, populations and taxa. Investigation of 40 (21 molar and 19 mandibular) different measurements resulted in only five characteristics common to Pan, Gorilla and Pongo: firstly, the molar root size sequence in root volume and root surface area (M3 < M1 < M2).
Secondly, M2 is the molar with the largest cervical area, root volume, root surface area and mesial root length; thirdly, mandibular robusticity is larger in females than in males, yet the difference is not significant. Fourthly, mandibular length and premolar width are sexually dimorphic, and fifthly, the best factors to discriminate between taxa are bicondylar width and molar root length. There is no generalized answer to the question which molar and/or measurement (dental or mandibular) is best to discriminate between sexes or taxa in extant hominids. Moreover, size relationships differ among taxa, depending on the measurement. The overall trend, however, is that Pan is the taxon with the smallest, and Gorilla the one with the largest, mean values. Among Pan populations, Liberian chimpanzees tend to have larger average values than Taï chimpanzees, with the exception of mandibular robusticity. The highest percentage of sexually dimorphic measurements is found in Pongo, yet only half of the measurements are statistically different between sexes. African apes are less sexually dimorphic than Pongo, and surprisingly, Gorilla is only slightly more dimorphic than Pan. The study also shows that statements and conclusions relating to "mandibular size" should not be generalized: whereas male and female Pongo do not differ significantly in overall mandibular size, they do differ in linear mandibular measurements. Moreover, Gorilla has the overall largest mandible, yet robusticity is higher in Pan, as are some linear measurements. Sexual dimorphism in overall mandibular size does not seem to reflect body mass dimorphism, whereas mandibular size itself appears to be related to body mass. The same was previously proposed for mandibular robusticity, yet Pan, the smallest taxon, has the most robust mandibular corpus (> Gorilla > Pongo). A substantial number of molar measurements that positively correlate with (overall) mandibular size was found, but only in African apes.
This contrasts with former studies, which found no, or only weak, correlations between dental and mandibular sizes. Given that the percentage of correlations is highest in Pan and absent in Pongo, it is proposed that small jaws feature small teeth, rather than large jaws featuring large teeth. This proposition assumes a size threshold beyond which dental and mandibular sizes no longer correlate, as has previously been proposed for the relationship between canine size and mandibular breadth. This assumption is further supported by the fact that the smaller and more robust Taï population shows more significant correlations than the less robust and larger Liberia population. The results show that fossil metrics are similar to one or multiple extant hominid taxa, depending on the measurement (dental or mandibular) used for comparison. Consequently, the assignment to a specific sex depends on the previously selected extant model taxon. The study therefore questions whether choosing one model taxon for one fossil, or for one taxonomic group, is advisable. This study is the first to extensively investigate molar root size in extant hominids and to broadly describe differences in molar root sizes among and between taxa, and it therefore provides a solid database for future studies. The same applies to mandibular robusticity, which has not been investigated as systematically or to such a great extent as in this work. The study specifically shows how complex the search for taxon- or sex-differentiating molar root and/or mandibular measurements is. It also shows that generalizations in relation to taxonomic value and statements about sexual dimorphism can be misleading.
In addition, the study contributes to the understanding of intra- and inter-population differences within Pan troglodytes verus. Furthermore, it could be demonstrated that the results for a subspecies sample very likely depend on the sample composition, i.e. whether the sample consists of individuals from one or more populations. This study serves as a database for further studies investigating molar root sizes in great apes, whether these studies examine relationships between taxa, populations or sexes, investigate functional adaptations, or examine the relationship between mandibular robusticity and molar roots.
Rhythmic changes in environmental lighting conditions have always been the most reliable environmental cue for life on earth. Nature has therefore selected a genetically encoded endogenous clock very early in evolution, as it provided cells, and subsequently organisms, with the ability to anticipate recurring periods of light and darkness. Rhythm generation within the mammalian circadian system is achieved by clock genes and their protein products. The mammalian endogenous master clock, which synchronizes the body to environmental time, is located in the suprachiasmatic nucleus (SCN) of the hypothalamus. As an integral part of the time-coding system, the pineal gland serves to tune the body to the temporal environment through the rhythmic nocturnal synthesis and immediate release of the hormone melatonin. In contrast to the transcriptional regulation of melatonin synthesis in rodents, a post-translational shaping is indicated in the human pineal gland. Another important mediator of circadian time and seasonality to the body is the pituitary gland. The aim of this work was to elucidate the regulation of melatonin synthesis in the human pineal gland. Furthermore, the presence and regulation of clock genes in the human pineal and pituitary glands and in the SCN were analyzed. To this end, human tissue taken from regular autopsies was analyzed simultaneously for different parameters involved in melatonin biosynthesis and circadian rhythm generation. The data presented demonstrate that post-mortem brain tissue can be used to detect the remnant profile of pre-mortem adaptive changes in neuronal activity. In particular, our results give strong experimental support for the idea that transcriptional mechanisms are not dominant in the generation of rhythmic melatonin synthesis in the human pineal gland.
Together with data obtained for clock genes and their protein products in the pituitary, data presented here offer 1) a new working hypothesis for post-translational regulation of melatonin biosynthesis in the human pineal gland, and 2) a novel twist in the molecular competence of clock gene proteins, achieved by nucleo-cytoplasmic shuttling in neuronal and neuroendocrine human tissue. Furthermore, in this study, oscillations in abundance of clock gene proteins were demonstrated for the first time in the human SCN.
Table of contents

List of scientific contributions
Table of contents
List of figures
List of tables
List of abbreviations
1 Introduction
  1.1 Problem statement
  1.2 Context and results of the scientific contributions
  References
2 Long life and prosperity in old age: an overview of the financial alternatives for structuring retirement
  2.1 Introduction
  2.2 Product alternatives for the withdrawal phase
    2.2.1 Life annuities
      2.2.1.1 Characteristics of life annuities and their historical development
      2.2.1.2 The life annuity market and its products in Germany
      2.2.1.3 Determinants of life annuity premiums
    2.2.2 Withdrawal plans
      2.2.2.1 Characteristics of withdrawal plans
      2.2.2.2 Withdrawal plans as a retirement planning instrument
      2.2.2.3 Life annuities vs. withdrawal plans
  2.3 Research findings on the withdrawal phase
    2.3.1 Introductory remarks
    2.3.2 Positive literature
      2.3.2.1 Theoretical work on the role of life annuities
      2.3.2.2 Bequest motives as an explanation for the low demand for life annuities
      2.3.2.3 Costs as an explanation for the low demand for life annuities
      2.3.2.4 Further explanations for the low demand for life annuities
    2.3.3 Normative literature
      2.3.3.1 Studies of pure withdrawal plans
      2.3.3.2 Studies of withdrawal plans that take life annuities into account
    2.3.4 Other work
  2.4 Conclusion
  Appendix A: Calculating life annuity premiums
  Appendix B: Modelling biometric risk
  References
3 Betting on Death and Capital Markets in Retirement: A Shortfall Risk Analysis of Life Annuities versus Phased Withdrawal Plans
  3.1 Introduction
  3.2 The Case of Phased Withdrawal
    3.2.1 Withdrawal Plans with Fixed Benefits
    3.2.2 Phased Withdrawal Rules with Variable Benefits
  3.3 Risk and Reward Analysis of Phased Withdrawal Plans Conditional on Survival
    3.3.1 Research Design
    3.3.2 Analysis of Expected Benefits
    3.3.3 Shortfall Risk Analysis
    3.3.4 Analysis of Expected Bequests
  3.4 Risk-Minimizing Phased Withdrawal Strategies
    3.4.1 Optimized Withdrawal Rules in a Risk-Return Context
    3.4.2 Comparative Results: Annuity versus Phased Withdrawal Plans
    3.4.3 Phased Withdrawal Plans with Mandatory Deferred Annuities
    3.4.4 Comparative Results
  3.5 Summary and Concluding Remarks
  Appendix A: Determining Annuity Benefits
  Appendix B: Determining Expected Benefits, Expected Bequest and the Risk of a Consumption Shortfall for Phased Withdrawal Plans with Given Benefit-to-Wealth Ratios
  References
4 Performance guarantees in the payout phase of investment-based retirement savings contracts: development of a conditional equity capital system and analysis of its economic implications
  4.1 Introduction
  4.2 Retirement savings contracts in the payout phase
    4.2.1 Statutory regulations
    4.2.2 Withdrawal plans vs. life annuities
  4.3 A conditional equity capital system for retirement savings contracts
    4.3.1 Introductory remarks
    4.3.2 Conceptual foundations of a conditional equity capital system
    4.3.3 Derivation of an equity capital system for the withdrawal phase
  4.4 Equity capital requirements in the withdrawal phase
    4.4.1 Preliminary remarks on the empirical study
    4.4.2 Ex-post analysis of retirement withdrawal plans
    4.4.3 Analysis of equity capital requirements in an ex-ante context
      4.4.3.1 Research approach and model assumptions
      4.4.3.2 Analyses at the level of individual contracts
      4.4.3.3 Analyses within a business and sales model
      4.4.3.4 Robustness analyses
  4.5 Conclusion
  References
Curriculum vitae
Declaration of honour
Aortic valve (AV) and root replacement with a composite graft and re-implantation of the coronary arteries, first described by Bentall and De Bono in 1968, is considered the standard operation for the treatment of various pathologies of the AV and aortic root. In centres where aortic valve and root repair techniques and the Ross operation are well established, this procedure generally remains indicated for severely diseased patients. The aim of this study was to evaluate the early and long-term outcomes after the Bentall-De Bono (BD) procedure in a high-risk population with complex pathologies and multiple comorbidities.
Between 2005 and 2018, a total of 273 consecutive patients (median age 66 years; 23% female) underwent AV and root replacement with a composite graft using the so-called button technique. We divided our population into the following groups: 1. acute type A aortic dissection (ATAAD) (n = 48), 2. endocarditis (n = 99), and 3. all other pathologies (n = 126). Surgery was performed emergently/urgently in 131 patients (49%) and as a reoperation in 109 cases (40%). Concomitant surgery was required in 97 patients (58%), and 167 patients (61%) received a biological composite graft.
Follow-up was completed in 96% (10 patients lost to follow-up) with a mean of 8.6 years (range 0.1-15.7 years), amounting to a total of 1450 patient-years. Thirty-day mortality was 17% (46 patients). The overall estimated survival at 5 and 10 years was 64% ± 3% and 46% ± 4%, respectively. Group comparison showed a significant difference in favour of patients from the dissection group (p = 0.008). Implantation of a biological valve graft was associated with a lower survival probability (p < 0.001). There was no significant difference in freedom from reoperation between the groups. The same applies to freedom from postoperative endocarditis, thromboembolic events, and aortic prosthesis dysfunction. According to the univariate and multivariate logistic regression analyses, primarily postoperative neurological dysfunction (OR 5.45), hypertension (OR 4.8), peripheral artery disease (OR 4.4), re-exploration for bleeding (OR 3.37) and postoperative renal replacement therapy (OR 3.09) were identified as the leading predictors of mortality.
In conclusion, the BD operation can be performed with acceptable short- and long-term results in high-risk patients with complex aortic pathologies in a centre with a well-established AV repair and Ross operation programme.
The crude oil constituents benzene, toluene, ethylbenzene, and the three xylene isomers (BTEX) are the dominant groundwater contaminants originating from surface spill accidents at oil production facilities and from gasoline and jet fuel. BTEX thereby pose a threat to the world's scarce drinking water resources due to their water solubility and toxicity. Active remediation of a BTEX contamination proves not only to be very expensive but almost impossible when it comes to the complete removal of contaminants from the subsurface. A favoured and common practice combines an active remediation process focussing on the source of contamination with monitoring of the residual contamination in the subsurface (monitored natural attenuation; MNA). MNA includes all naturally occurring biological, chemical and physical processes in the subsurface. The general goal of this work was to improve the knowledge of the biodegradation of aromatic hydrocarbons under anaerobic conditions in groundwater. For this purpose, groundwater and soil at the former military underground storage tank (UST) site Schäferhof-Süd near Nienburg/Weser (Niedersachsen, Germany) were sampled and analysed. The investigations were carried out in collaboration with the Umweltbundesamt, the universities of Frankfurt and Bremen, and alphacon GmbH, Ganderkesee. To investigate the extent of groundwater contamination, the terminal electron acceptor processes (TEAPs) and the metabolites of BTEX degradation in groundwater, six observation wells were sampled at regular intervals between January 2002 and September 2004. The wells were positioned so as to cover the upstream area, the source area and the downstream area of the presumed contamination source. Additionally, vertical sediment profiles were sampled and investigated with respect to the spreading and concentration of BTEX in the subsurface. A large residual BTEX contamination is present in soil and groundwater at the studied locality.
Maximum BTEX concentrations of 17 mg/kg were recorded in sediment of the unsaturated zone. In the capillary fringe, values of 450 mg/kg were recorded (October 2004), and in the saturated zone maximum values of 6.7 mg/kg BTEX were detected. The groundwater samples indicate increasing BTEX concentrations in the direction of groundwater flow (from 532 µg/l up to 3300 µg/l (mean values)). The biodegradation of aromatic hydrocarbons under anaerobic conditions in the subsurface at contaminated sites is characterised by the generation of metabolites. From the monoaromatic hydrocarbons (BTEX), metabolites such as benzoic acid (BA), its methylated homologues, and C1- and C2-benzylsuccinic acids (BSA) are generated as intermediates. A solid-phase extraction method based on an octadecyl-bonded silica sorbent was developed to concentrate such metabolite compounds from water samples, followed by derivatization and gas chromatography/mass spectrometry (GC/MS) of the extracts. The recovery rates range between 75 and 97%. The method detection limit was 0.8 µg/l. Organic acids were identified as metabolic by-products of biodegradation. Benzoic acid and C1-, C2- and C3-benzoic acids were detected in all contaminated wells at considerable concentrations. Furthermore, the depletion of the dominant terminal electron acceptors (TEAs) oxygen, nitrate, and sulphate and the production of dissolved ferrous iron and methane in groundwater indicate biologically mediated processes in the plume, clearly demonstrating the occurrence of natural attenuation. A large overlap of different redox zones was observed in the studied part of the plume. An important finding of this study is the strong influence of groundwater level fluctuations on the BTEX concentration in groundwater. A very dry summer in 2003 was recorded during the monitoring period, resulting on site in a drop of the groundwater level to 1.7 m and a concomitant increase of BTEX concentrations from 240 µg/l to 1300 µg/l.
Groundwater level fluctuations, natural degradation and retention processes essentially influence BTEX concentrations in the groundwater. Groundwater level fluctuations have a far stronger influence than biological degradation; increasing BTEX concentrations are hence not a consequence of limited biological degradation. Another part of the study was to observe the isotopic fractionation of the electron acceptor Fe(III) due to the biologically mediated reduction of Fe(III) to the water-soluble Fe(II) at the site; first field data are presented. Both groundwater and sediment samples were analysed with respect to their Fe isotopic compositions using high-mass-resolution multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). The δ56Fe values of groundwater samples taken from observation wells located downstream of the source area were isotopically lighter than the δ56Fe values obtained from groundwater in the uncontaminated well. The Fe isotopic composition of most parts of the sediment profile was similar to that of uncontaminated groundwater. Thus, a significant iron isotope fractionation can be observed between sediment and groundwater downstream of the BTEX contamination.
An investigation of photoelectron angular distributions and circular dichroism of chiral molecules
(2021)
The present work demonstrates the capability of several types of molecular frame photoelectron angular distributions (MFPADs) and their associated chiroptical phenomenon, photoelectron circular dichroism (PECD), to map in great detail the molecular geometry of polyatomic chiral molecules as a function of photoelectron energy. To investigate the influence of the molecular potential on the MFPADs, two chiral molecules were selected, namely 2-(methyl)oxirane (C3H6O, MOx, m = 58.08 amu) and 2-(trifluoromethyl)oxirane (C3H3F3O, TFMOx, m = 112.03 amu). The two molecules differ in one substituent group and share an oxirane group, where the O(1s) electron was directly photoionized using synchrotron radiation in the soft X-ray regime. The direct photoionization of the K-shell electron is well localized in the molecule and induces the ejection of two or more electrons; the excited system separates into several charged (and possibly neutral) fragments which undergo Coulomb explosion due to their charges. The electrons and the fragments were detected using COLd Target Recoil Ion Momentum Spectroscopy (COLTRIMS), and the momentum vectors were calculated for each fragment originating from a single ionization event. This method makes it possible to orient the molecules in space a posteriori, giving access to the molecular frame and thus to the MFPAD and its related PECD for multiple light propagation directions.
Stereochemistry (from the Greek στερεο- stereo-, meaning solid) refers to chemistry in three dimensions. Since most molecules have a three-dimensional (3D) structure, stereochemistry pervades all fields of chemistry and biology, and it is an essential point of view for the understanding of chemical structure, molecular dynamics and molecular reactions. The understanding of the chemistry of life is tightly bound to major discoveries in stereochemistry, which triggered tremendous technical advances, making it a flourishing field of research since its revolutionary introduction in the 19th century. In chemistry, chirality is a branch of stereochemistry which focuses on objects with the peculiar geometrical property of not being superimposable on their mirror images. The word chirality is derived from the Greek χειρ for "hand", and the first use of this term in chemistry is usually attributed to Lord Kelvin, who, during a lecture at the Oxford University Junior Scientific Club in 1893, called "any geometrical figure, or group of points, 'chiral', and say that it has chirality if its image in a plane mirror, ideally realized, cannot be brought to coincide with itself." Although this is usually considered the birth of the word chirality, the underlying concept was already present in several fields of science (above all mathematics), proving the multidisciplinary relevance of chirality across many fields of science and beyond. Nature shows great examples of chiral symmetry on all scales. Empirically, it can be observed at the macroscopic scale (e.g. the distribution of rotation directions of galaxies) down to the microscopic scale (e.g. the structure of some plankton species), but it is at the molecular level that the numbers become remarkable: most pharmaceutical drugs, food fragrances, pheromones, enzymes, amino acids and DNA molecules are, in fact, chiral.
Moreover, the concept of chirality goes far beyond the mere spatial symmetry of objects, being crucially entangled with the fundamental properties of physical forces in nature. Symmetry breaking, namely the different physical behaviour of two chiral systems under the same stimuli, is considered one of the best explanations for the long-standing question of homochirality in biological life, and ultimately for the chemical origin of life on Earth as we know it. Our organism shows high enantio-selectivity towards specific compounds, ranging from drugs to fragrances. Over 800 odour molecules commonly used in the food and fragrance industries have been identified as chiral, and their enantiomeric forms are perceived to have very different smells, as in the well-known example of D- and L-limonene. Similarly, responses to pharmaceutical drugs can be enantiomer specific; in fact, about 60 % of the drugs currently on the market are chiral compounds, and nearly 90 % of them are sold as racemates. The same degree of enantio-selectivity is observed in the communication systems of plants and insects. Plants produce lipophilic liquids with high vapour pressure, called plant volatiles (PVs), which are synthesized via different enzymes called terpene synthases that are usually chiral. Chiral molecules and chiral effects have a strong impact on all fields of science, with exciting developments ranging from stereo-selective synthesis based on heterogeneous enantioselective catalysis, to optoelectronics, to photochemical asymmetric synthesis, and chiral surface science, just to cite a few.
Chiral molecules come in two forms called enantiomers. Their almost identical chemical and physical properties continue to pose technical challenges concerning the resolution of racemic mixtures, the determination of the enantiomeric excess, and the direct determination of the absolute configuration of an enantiomer. ...
Previous studies suggest that the application of Controlled Language (CL) rules can significantly improve the readability, consistency, and machine-translatability of source text. One of the justifications for the application of CL rules is that they can have a similar impact on several target languages by reducing the post-editing effort required to bring Machine Translation (MT) output to acceptable quality. In certain situations, however, post-editing services may not always be a viable solution. Web-based information is often expected to be made available in real time to ensure that its access is not restricted to certain users based on their locale. Uncertainties remain with regard to the actual usefulness of MT output for such users, as no empirical study has examined the impact of CL rules on the usefulness, comprehensibility, and acceptability of MT technical documents from a Web user's perspective. In this study, a two-phase approach is used to determine whether Controlled English rules can have a significant impact on these three variables. First, individual CL rules are evaluated within an experimental environment, which is loosely based on a test suite. Two documents are then published and subjected to a randomised evaluation within the framework of an online experiment using a customer satisfaction questionnaire. The findings indicate that a limited number of CL rules have a similar impact on the comprehensibility of French and German output at the segment level. The results of the online experiment show that the application of certain CL rules has the potential to significantly improve the comprehensibility of German MT technical documentation. Our findings also show that the introduction of CL rules did not lead to any significant improvement in the comprehensibility, usefulness, and acceptability of French MT technical documentation.
In this thesis, the first fully integrated Boltzmann+hydrodynamics approach to relativistic heavy ion reactions has been developed. After a short introduction that motivates the study of heavy ion reactions as the tool to gain insights into the QCD phase diagram, the most important theoretical approaches to describe the system are reviewed. To model the dynamical evolution of the collective system under the assumption of local thermal equilibrium, ideal hydrodynamics is a suitable tool. Nowadays, the development of either viscous hydrodynamic codes or hybrid approaches is favoured. For the microscopic description of the hadronic as well as the partonic stage of the evolution, transport approaches have been successfully applied, since they generate the full phase-space dynamics of all the particles. The hadron-string transport approach that this work is based on is the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) approach. It constitutes an effective solution of the relativistic Boltzmann equation and is restricted to binary collisions of the propagated hadrons. Therefore, the Boltzmann equation and the basic assumptions of this model are introduced. Furthermore, predictions for the charged particle multiplicities at LHC energies are made. The next step is the development of a new framework to calculate the baryon number density in a transport approach. Time evolutions of the net baryon number and the quark density have been calculated at AGS, SPS and RHIC energies, and the new approach leads to reasonable results over the whole energy range. Studies of phase diagram trajectories using hydrodynamics are performed as a first step towards the development of the hybrid approach. The hybrid approach that has been developed as the main part of this thesis is based on the UrQMD transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision.
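For reference, the relativistic Boltzmann equation that UrQMD effectively solves can be written compactly in its standard form (the collision term shown here is schematic):

```latex
p^{\mu}\,\partial_{\mu} f(x,p) \;=\; \mathcal{C}[f],
```

where $f(x,p)$ is the single-particle phase-space distribution and $\mathcal{C}[f]$ the collision term, restricted in UrQMD to binary collisions and decays of the propagated hadrons.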
The initial energy and baryon number density distributions are not smooth and not symmetric in any direction, and the initial velocity profiles are non-trivial, since they are generated by the non-equilibrium transport approach. The full (3+1)-dimensional ideal relativistic one-fluid dynamics evolution is solved using the SHASTA algorithm. For the present work, three different equations of state have been used, namely a hadron gas equation of state without a QGP phase transition, a chiral EoS, and a bag model EoS including a strong first order phase transition. For the freeze-out transition from hydrodynamics to the cascade calculation, two different set-ups are employed: either a freeze-out that is isochronous in the computational frame, or a gradual freeze-out that mimics an iso-eigentime criterion. The particle vectors are generated by Monte Carlo methods according to the Cooper-Frye formula, and UrQMD takes care of the final decoupling procedure of the particles. The parameter dependences of the model are investigated and the time evolution of different quantities is explored. The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic flow values at SPS energies are shown to be in line with an ideal hydrodynamic evolution if a proper initial state is used and the final freeze-out proceeds gradually. The hybrid model calculation is able to reproduce the experimentally measured integrated as well as transverse momentum dependent $v_2$ values for charged particles. The multiplicity and mean transverse mass excitation functions are calculated for pions, protons and kaons in the energy range from $E_{\rm lab}=2-160A~$GeV. It is observed that the different freeze-out procedures have almost as much influence on the mean transverse mass excitation function as the equation of state.
The experimentally observed step-like behaviour of the mean transverse mass excitation function is only reproduced if a first order phase transition with a large latent heat is applied or the EoS is effectively softened due to non-equilibrium effects in the hadronic transport calculation. The HBT correlations of the negatively charged pion source created in central Pb+Pb collisions at SPS energies are investigated with the hybrid model. It has been found that the latent heat visibly influences the emission of particles and hence the HBT radii of the pion source. The final hadronic interactions after the hydrodynamic freeze-out are very important for the HBT correlations, since a large number of collisions and decays still takes place during this period.
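The Cooper-Frye prescription mentioned above, used to generate particles on the freeze-out hypersurface $\Sigma$, has the standard form:

```latex
E\,\frac{dN}{d^{3}p} \;=\; \int_{\Sigma} f(x,p)\, p^{\mu}\, d\Sigma_{\mu},
```

where $f(x,p)$ is the local (thermal) distribution function and $d\Sigma_{\mu}$ the normal element of the hypersurface.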
The mainstream law and economics approach has dominated positive analysis and normative design of economic regulations. This approach represents a form of applied neoclassical and new institutional economics. Neoclassical and/or new institutional economic theories, models, and analytical concepts are applied automatically to economic regulatory problems.
This automatic application of neoclassical economics to economic regulatory problems loses sight of the valid insights of non-neoclassical schools of economic thought and theories, which may illuminate important aspects of the regulatory problems. This thesis, therefore, advocates an integrated law and economics approach to economic regulations. This approach identifies the relevant insights of neoclassical and non-neoclassical schools of thought and theories and refines them through a process of cross-criticism. In this process, the insights of each school of thought are subjected to the critiques of other schools of thought. The resulting refined insights, which are more likely to be valid, are then integrated consistently through various techniques of integration.
Not only does neoclassical (micro and macro) law and economics overlook the valid insights of non-neoclassical schools of thought, it is also highly reductionist. It ignores the interdependencies of legal institutions, highlighted mainly by the comparative capitalism literature, and the structural interlinkages among socio-economic actors, highlighted by economic sociology and complexity economics. Rather, it takes rational individuals and their interactions subject to the constraint of isolated institution(s) as its unit of analysis. In place of this reductionist perspective, the thesis argues for a systemic approach to economic regulations. This systemic perspective replaces the reductionist unit of neoclassical regulatory analysis with a systemic unit of analysis that consists of the least non-decomposable actors’ network and its associated least non-decomposable institutional network. Then, the thesis develops an operationalized and replicable systemic framework for systemic analysis and design of institutional networks.
Both the systemic and integrated approaches are theoretically consistent and complementary. The systemic approach is in essence a way of thinking that requires a broad and rich informational basis that can be secured by using the integrated approach. Due to their complementarity, they give rise to what I call “the integrated and systemic law and economics approach.” The thesis operationalizes this approach by setting out well-defined replicable steps and applying them to concrete regulatory problems, namely, the choice of a corporate governance model for developing countries and the development of a normative theory of economic regulations. These concrete applications demonstrate the critical bite of the integrated and systemic approach, which reveals significant shortcomings of mainstream law and economics’ answers to these regulatory questions. They also show the constructive potential of the integrated and systemic approach in overcoming the critiques advanced against the neoclassical regulatory conclusions.
The operationalized integrated and systemic approach is both a law and economics and a law and development approach. It not only provides an alternative to mainstream law and economics analysis and design of economic regulations; it also fills a significant analytical lacuna in the law and development literature, which lacks an analytical framework for the analysis and design of context-specific legal institutions that can promote economic development in developing economies.
High-energy physics experiments aim to deepen our understanding of the fundamental structure of matter and the governing forces. One of the most challenging aspects of the design of new experiments is data management and event selection. The search for increasingly rare and intricate physics events asks for high-statistics measurements and sophisticated event analysis. With progressively complex event signatures, traditional hardware-based trigger systems reach the limits of realizable latency and complexity. The Compressed Baryonic Matter experiment (CBM) employs a novel approach for data readout and event selection to address these challenges. Self-triggered, free-streaming detectors push all data to a central compute cluster, called First-level Event Selector (FLES), for software-based event analysis and selection. While this concept solves many issues present in classical architectures, it also sets new challenges for the design of the detector readout systems and online event selection.
This thesis presents an efficient solution to the data management challenges presented by self-triggered, free-streaming particle detectors. The FLES must receive asynchronously streamed data from a heterogeneous detector setup at rates of up to 1 TB/s. The real-time processing environment implies that all components have to deliver high performance and reliability to record as much valuable data as possible. The thesis introduces a time-based data model to partition the input streams into containers of fixed length in experiment time for efficient data management. These containers provide all necessary metadata to enable generic, detector-subsystem-agnostic data distribution across the entire cluster. An analysis shows that the introduced data overhead is well below 1 % for a wide range of system parameters.
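The time-based partitioning can be illustrated with a small sketch. This is a generic illustration of the idea, not the actual CBM microslice/timeslice container format; the function names, the message representation, and the descriptor size used in the overhead estimate are hypothetical.

```python
from collections import defaultdict

def build_containers(messages, container_ns):
    """Partition timestamped detector messages into containers of fixed
    length in experiment time. `messages` is an iterable of
    (timestamp_ns, payload) pairs; streams arrive asynchronously, so
    assignment is by timestamp, never by arrival order."""
    containers = defaultdict(list)
    for ts, payload in messages:
        containers[ts // container_ns].append((ts, payload))
    return dict(containers)

def overhead_fraction(payload_bytes, descriptor_bytes, n_descriptors):
    """Relative metadata overhead: descriptor bytes vs. raw payload bytes."""
    return n_descriptors * descriptor_bytes / payload_bytes

# Messages from two interleaved streams, out of arrival order:
msgs = [(5, "a"), (105, "b"), (50, "c"), (210, "d")]
c = build_containers(msgs, container_ns=100)
# Container 0 holds both messages with timestamps below 100,
# regardless of the order in which they arrived.
```

The point of the estimate is that fixed-size per-container descriptors stay negligible next to the payload: for, say, a 10 MB container described by a 32-byte header, the overhead is a few parts per million, consistent with the sub-1 % figure quoted above.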
Furthermore, a concept and the implementation of a detector data input interface for the CBM FLES, optimized for resource-efficient data transport, are presented. The central element of the architecture is an FPGA-based PCIe extension card for the FLES entry nodes. The hardware designs developed in the thesis enable interfacing with a diverse set of detector systems. A custom, high-throughput DMA design structures data in a way that enables low-overhead access and efficient software processing. The ability to share the host DMA buffers with other devices, such as an InfiniBand HCA, allows for true zero-copy data distribution between the cluster nodes. The discussed FLES input interface is fully implemented and has already proven its reliability in production operation in various physics experiments.
The ALICE High-Level-Trigger (HLT) is a large-scale computing farm designed and constructed for the real-time reconstruction of particle interactions (events) inside the ALICE detector. The reconstruction of such events is based on the raw data produced in collisions inside ALICE at the Large Hadron Collider. The online reconstruction in the HLT allows triggering on certain event topologies and a significant data reduction by applying compression algorithms. Moreover, it enables a real-time verification of the quality of the data.
To receive the raw data from the various sub-detectors of ALICE, the HLT is equipped with 226 custom-built FPGA-based PCI-X cards, the H-RORCs. The H-RORC interfaces the detector readout electronics to the nodes of the HLT farm. In addition to the transfer of raw data, 108 H-RORCs host 216 Fast-Cluster-Finder (FCF) processors for the Time-Projection-Chamber (TPC). The TPC is the main tracking detector of ALICE and contributes, with up to 16 GB/s, over 90% of the overall data volume. The FCF processor implements the first of two steps in the data reconstruction of the TPC. It calculates the space points and their properties from charge clouds (clusters) created by charged particles traversing the TPC's gas volume. Those space points are not only the basis for the tracking algorithm, but also allow for a Huffman-based data compression, which reduces the data volume by a factor of 4 to 6.
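The Huffman-based compression mentioned above can be illustrated with a generic sketch. This is textbook Huffman coding in Python, not the FCF's actual FPGA implementation or its cluster data format; it shows why entropy coding pays off when the symbol distribution is skewed, as it is for cluster properties.

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code(data):
    """Build a prefix-free code table {symbol: bitstring} for the input."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    tick = count()  # unique tie-breaker so the heap never compares dicts
    heap = [(f, next(tick), {sym: ""}) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

def encode(data, table):
    return "".join(table[s] for s in data)

def decode(bits, table):
    rev = {b: s for s, b in table.items()}
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in rev:  # prefix-free: first match is the symbol
            out.append(rev[cur])
            cur = ""
    return out
```

On a skewed distribution, frequent symbols get short codewords and rare ones long codewords, so the encoded stream shrinks well below the fixed-width size while remaining exactly decodable.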
The FCF processor is designed to cope with any incoming data rate up to the maximum bandwidth of the incoming optical link (160 MB/s) without creating back-pressure on the detector readout electronics. A performance comparison with the software implementation of the algorithm shows a speedup factor of about 20 compared with one AMD Opteron 6172 core @ 2.1 GHz, the CPU type used in the HLT during the LHC Run1 campaign. A comparison with an Intel E5-2690 core @ 3.0 GHz, the CPU type used by the HLT for the LHC Run2 campaign, results in a speedup factor of 8.5. In total numbers, the 216 FCF processors provide the computing performance of 4255 AMD Opteron cores or 2203 Intel cores of the previously mentioned types. The performance of the reconstruction with respect to the physics analysis is equivalent to or better than that of the official ALICE Offline clusterizer. Therefore, ALICE data taking was switched in 2011 to FCF cluster recording and compression only, discarding the raw data from the TPC. Due to the capability to compress the clusters, the recorded data volume could be increased by a factor of 4 to 6.
For the LHC Run3 campaign, starting in 2020, the FCF builds the foundation of the ALICE data taking and processing strategy. The raw data volume (before processing) of the upgraded TPC will exceed 3 TB/s. As a consequence, online processing of the raw data and compression of the results before it enters the online computing farms is an essential and crucial part of the computing model.
Within the scope of this thesis, the H-RORC card and the FCF processor were developed and built from scratch. It covers the conceptual design, the optimisation and implementation, as well as the verification. It is completed by performance benchmarks and experiences from real data taking.
An exciting in vivo function of ATP-sensitive potassium channels in substantia nigra dopamine neurons – implications for burst firing and novelty coding. Phasic burst activity is a key feature of dopamine (DA) midbrain neurons. This particular pattern of excitation of DA neurons occurs via a synaptically triggered transition from low-frequency background spiking to transient high-frequency discharges. Burst-firing mediated phasic DA release is critical for flexible switching of behavioural strategies in response to unexpected rewards, novelty and other salient stimuli. However, the cellular and molecular bases of burst signalling in distinct DA subpopulations of the substantia nigra (SN) or the ventral tegmental area (VTA) are unknown.
DA neuron excitability is controlled by synaptic network inputs, neurotransmitter receptors and ion channels, which generate action potentials and determine frequency and pattern of electrical activity in a complex interplay. ATP-sensitive potassium (K-ATP) channels are widely expressed throughout the brain, where in most cases they are believed to act as metabolically-controlled 'excitation brakes' by matching excitability to cellular energy states. However, their precise physiological in vivo function in DA neurons remains elusive.
To study burst firing and the underlying ionic mechanisms with single cell resolution, in vivo single-unit recordings were combined with juxtacellular neurobiotin labelling as well as immunohistochemical and anatomical identification of individual DA neurons. In vivo recordings were performed in adult isoflurane-anaesthetised wildtype (WT) and global K-ATP channel knockout mice, lacking the pore forming Kir6.2 subunit (Kir6.2-/-). In addition, DA cell-selective functional silencing of K-ATP channel activity in vivo was established using virus-mediated expression of dominant-negative Kir6.2 subunits. Careful control experiments ruled out any significant contributions from non-DA neurons, as transduction was effectively limited to SN DA neurons rather than affecting those cells that innervate them. Virus-based K-ATP channel silencing in combination with juxtacellular recording and labelling was achieved to define the electrophysiological phenotype of individually identified, virally-transduced DA neurons in vivo.
Single-unit recordings revealed that, in contrast to their conventional hyperpolarising role, K-ATP channels in a subpopulation of DA neurons located in the medial SN (m-SN) act as cell-type selective gates for excitatory burst firing in vivo. The percentage of spikes in bursts was threefold reduced in Kir6.2-/- compared to WT mice. Classification of firing patterns based on visual inspection of autocorrelation histograms and on a newly developed spike-train model confirmed the dramatic shift from phasic burst to tonic single-spike oscillatory firing in Kir6.2-/-. This significant decrease in burstiness was selective for m-SN DA neurons and was not exhibited by DA cells in the lateral SN or VTA. Virus-based K-ATP channel silencing in vivo unequivocally demonstrated that the activity of postsynaptic K-ATP channels was sufficient to disrupt bursting in m-SN DA neuron subtypes. Patch-clamp recordings in brain slices indicated an essential role of K-ATP channels for NMDA-mediated in vitro bursting. In accordance with previous studies in DA midbrain neurons, NMDA receptor stimulation triggered burst-like firing in m-SN DA cells in vitro, but only when K-ATP channels were co-activated in these neurons.
K-ATP channel-gated burst firing in m-SN DA neurons might be functionally relevant in awake, freely moving mice. To explore the behavioural consequences of SN DA neuron subtype-selective K-ATP channel suppression, spontaneous open field (OF) behaviour of mice with bilateral K-ATP silencing across the whole SN (medial + lateral) or in only the lateral SN was tested. Analysis of WT and global Kir6.2-/- mice showed reduced exploratory locomotor activity of Kir6.2-/- in a novel OF environment. Remarkably, K-ATP channel silencing in m-SN DA neurons phenocopied this novelty-exploration deficit, indicating that K-ATP channel-gated burst firing in medial but not lateral SN DA neurons is crucial for WT-like novelty-dependent exploratory behaviour.
In summary, a novel role of K-ATP channels in promoting the excitatory switch from tonic to phasic firing in vivo in a cell-type specific manner was discovered. The present PhD thesis provides several important insights into the pivotal function of K-ATP channels in medial SN DA cells, which project to the dorsomedial striatum, for burst firing and its important consequences for context-dependent exploratory behaviour.
In collaboration with two other research groups, transcriptional up-regulation of K-ATP channel and NMDA receptor subunits and high levels of in vivo burst firing were detected in surviving SN DA neurons from Parkinson's disease (PD) patients, providing a potential link between K-ATP channel activity and neurodegenerative pathomechanisms of PD. Using high-resolution fMRI imaging, another study in humans has recently identified distinct DA midbrain regions that are preferentially activated by either reward or novelty. Taken together, these human data and the results of the present PhD thesis suggest that burst-gating K-ATP channel function in SN DA neurons impacts on phenotypes in disease as well as in health.
1. Halobacillus halophilus accumulates compatible solutes to compensate for low extracellular water potentials. In cultures grown in the presence of 0.4 – 1.5 M NaCl, glutamine and glutamate were identified as the dominant compatible solutes, whereas between 2.0 and 3.0 M NaCl proline was the dominant solute. In addition, ectoine was found as a second compatible solute that is specifically accumulated at high salinities. Its concentrations during the exponential growth phase, however, were 6- to 7-fold lower than those of proline. 2. From growth experiments in the presence of different anions it was known that glutamate, in contrast to gluconate and nitrate, enables growth of H. halophilus even in the absence of chloride. To address the question of whether the growth-promoting effect of unphysiologically high glutamate concentrations in the medium is due to the use of glutamate as a compatible solute in the cells, the total solute pools of chloride-, nitrate-, gluconate- and glutamate-grown cells were measured. In NaCl-grown cells, glutamate was the dominant solute, while proline and glutamine made up a smaller share of the total pool. In nitrate-grown cells the total pool was only 83%, and in gluconate-grown cells only 27%, of that of chloride-grown cells. Cells grown with glutamate, however, showed a total solute concentration about 100% above the reference value of chloride-grown cells. The intracellular glutamine concentration increased by 168%, and the glutamate concentration even by 299%, while the proline concentration decreased by 32%. These data demonstrate that the growth-stimulating effect of glutamate is due to its use as a compatible solute. 3. To investigate the molecular basis of salt adaptation and of the chloride dependence of H. halophilus, sequencing of the genome was begun in collaboration with the group of Prof. D. Oesterhelt (MPI of Biochemistry, Martinsried). The project is not yet complete and is currently in the gap-closing phase. The sequence data obtained so far could nevertheless be used for the investigations described in this work. The genome has a size of about 4.1 Mbp with an approximate GC content of 40%. In addition, two plasmids with sizes of 16,047 and 3,329 bp were identified. 4. The key genes of the known biosynthetic pathways for glutamine and glutamate were identified. Among them are two isogenes for a glutamate dehydrogenase (gdh1 and gdh2), a gene for the large subunit of a glutamate synthase (gltA), two genes for the small subunit of a glutamate synthase (gltB1 and gltB2), and two isogenes for a glutamine synthetase (glnA1 and glnA2). glnA1 is located in a cluster together with a gene encoding a regulator (glnR), as known from B. subtilis. Reverse transcription of mRNA followed by PCR analysis showed that both gltA/gltB1 and glnA1/glnR are organized in operons. 5. When the transcript levels of the biosynthesis genes mentioned in point 4 were quantified in cells grown in the presence of different salt concentrations (0.4 – 3.0 M NaCl), no dependence on the salt concentration was found for the genes gltA, glnA1 and gdh1. No definitive statement could be made about the transcript levels of gdh2, as the amounts found were very low and therefore led to very large variances in the quantification. A clear dependence of the transcript level on the salt concentration added to the medium was shown for glnA2. The amount of glnA2 mRNA increased with increasing salt concentration and reached a maximum at 1.5 – 2.0 M NaCl. At these salt concentrations, the amount of mRNA was about 4-fold higher than the reference value at 0.4 M NaCl. At higher salt concentrations, the transcript level decreased slightly again and was then only about 3-fold higher than at 0.4 M NaCl. 6. The cellular concentration of glnA2 transcripts as a function of different anions in the growth medium was investigated. Quantification of the glnA2 mRNA revealed a 2-fold higher transcript level in the presence of chloride compared with nitrate or gluconate. 7. Enzyme activities of the known key enzymes of the glutamate and glutamine biosynthetic pathways were sought. Glutamate dehydrogenase and glutamate synthase activities could not be detected, or only to a negligible extent. In contrast, a glutamine synthetase activity was clearly demonstrated. This activity proved to be dependent on the type and concentration of the anion supplied in the medium. Maximal activities were reached with NaCl at a concentration of 2.5 – 3.0 M. Interestingly, the glutamine synthetase activity also proved to be dependent on the type of anion used in the assay buffer, where a clear stimulation of the activity by the anion chloride was observed. [The data underlying this point were collected within the framework of a diploma thesis by Jasmin F. Sydow, which I co-supervised, and are included for the sake of a complete presentation of the course of the project.] 8. As stated in point 1, proline is accumulated in H. halophilus cells mainly at high salt concentrations. In addition to the dependence on the salt concentration, the dependence on the growth phase was also investigated. Analysis of the proline concentrations during different growth phases in cultures grown at 1.0 or 2.5 M NaCl showed (i) that the proline concentration during the early exponential phase was about 2.5-fold higher compared with low-salt cells, (ii) that the proline concentration decreased dramatically during the transition from the early to the late exponential phase (by 64% at 2.5 M NaCl), and (iii) that proline was practically undetectable in the stationary phase. 9. The biosynthesis genes for the production of proline from glutamate were identified in the genome of H. halophilus. They form a cluster of three genes encoding a putative pyrroline-5-carboxylate reductase (proH), a glutamate 5-kinase (proJ), and a glutamate-5-semialdehyde dehydrogenase (proA). Reverse transcription of mRNA followed by PCR analyses showed that the three genes form an operon. 10. Quantification of the transcript levels of the biosynthesis genes proH, proJ and proA by quantitative PCR in cells grown at different NaCl concentrations revealed a clear correlation between the salinity of the medium and the amount of transcript: the higher the salinity of the medium, the higher the transcript level. The maximal transcript level (6-fold) was reached at a salt concentration of 2.5 M NaCl. At even higher salt concentrations, the transcript level decreased to about 5-fold the control value. 11. To study the regulation and dynamics of osmoregulation independently of growth, a cell suspension system was established for H. halophilus, in which a concentrated cell suspension was transferred directly from low to high salt concentrations and in which the processes of transcription, translation and solute biosynthesis remained intact. As an example, this system was tested on the production of proline after a salt shock from 0.8 to 2.0 M NaCl. The analysis showed that the transcript levels increased markedly immediately after the salt shock and that a maximum was reached after only 1.5 hours. Compared with the value at the beginning of the experiment, the transcript levels were about 13-fold elevated; they then decreased again and remained constant at a 4-fold transcript level. The increase in transcript level was accompanied by an increase in the proline concentration, which reached a maximum of about 6 μmol/mg protein after 6 hours. This concentration also decreased again in the further course of the experiment and returned to the initial value after 20 hours. 12. To investigate the influence of various anions and osmolytes in the medium on proline production, cell suspensions of H. halophilus were subjected to an increase in osmolarity from 0.8 M to 2.0 M. The maximal accumulation of proline was highest in the presence of chloride. Nitrate and glutamate led to similar but slightly lower maximal concentrations (92% and 83% of the chloride value, respectively). Gluconate still led to an accumulation of about 51%, whereas the other osmolytes led to no accumulation. An analysis of the transcript levels, however, showed a completely different picture. While chloride, nitrate and gluconate led to comparable increases in transcript levels, the maximal transcript level in glutamate-incubated cells was 3-9 times higher than in reference cells with chloride. Subsequent titration experiments with different glutamate concentrations showed that a minimal concentration of 0.2 M glutamate is sufficient to bring about a 90-fold increase in the transcript level. 13. In response to high-salt conditions, H. halophilus accumulates ectoine in addition to proline. The ectoine concentration at 2.5 M NaCl was about 2-3 times higher than in cells grown at 1.0 M. Determination of the intracellular ectoine concentrations during growth also showed that ectoine production is growth-phase dependent. The concentration in the stationary phase was about 5-fold higher than in the exponential phase. The development of the ectoine concentration was thus reciprocal to that of the proline concentration during growth. 14. A cluster of three genes was identified in the genome of H. halophilus whose gene products catalyse the biosynthesis of ectoine from aspartate semialdehyde. ectA encodes a putative diaminobutyrate acetyltransferase, ectB a putative diaminobutyrate-2-oxoglutarate transaminase, and ectC a putative ectoine synthase. Reverse transcription of mRNA followed by PCR analyses showed that the three genes form an operon. 15. Transcription of the ect genes was dependent on the salinity of the medium. From 2.0 M onwards, the amount of RNA increased 10-fold and reached a maximum at 3.0 M with a 23.5-fold amount. 16. After an osmotic shock, the concentration of ect mRNA increased significantly and reached a maximum after 3 - 4 hours. This maximum was thus reached 1.5 – 2.5 hours later than for other genes of solute biosynthesis, such as gdh1, encoding a glutamate dehydrogenase, glnA2, encoding a glutamine synthetase, or proH, encoding a pyrroline-5-carboxylate reductase. The maximal values were 13-fold (ectA), 6.5-fold (ectB) and 3-fold (ectC) above the value before the salt shock. Polyclonal antibodies were raised against EctC. Western blot analyses with this antibody showed that the amount of EctC increased 2.5-fold after 4 hours but then dropped again to 1.6 – 1.7-fold of the initial value. The decrease in EctC was not reflected in the measured ectoine concentration, which increased continuously over a period of 18 hours. The maximal concentration after 18 hours was about 6.3-fold the initial value. 17. When H. halophilus cells were shocked with osmolytes other than NaCl, the following picture of the regulation of ectoine biosynthesis emerged: (i) transcription of the ect genes showed no chloride-dependent regulation. The maximal transcript level was reached in the presence of nitrate, whereas gluconate led to mRNA levels comparable to those with chloride. Glutamate led to only weak stimulation of transcription. (ii) At the protein level, the amount of EctC after osmotic shock was comparable in cells incubated with chloride or nitrate. Gluconate led to only a 40% increase, while other osmolytes had virtually no effect on the amount of EctC. (iii) The highest accumulation of ectoine after a sudden increase in osmolarity was achieved with chloride (6-fold increase), followed by nitrate (5.6-fold increase). Gluconate led to only a 3.3-fold, and glutamate to only a 2-fold, increase in the ectoine concentration. Glutamate thus has effects similar to those of tartrate, sucrose or sulfate. Succinate led to no accumulation, and glycine even to a marked decrease. Ectoine production is thus mainly dependent on the anion/osmolyte and only secondarily on the osmolarity.
Driven by rapid technological advancements, the amount of data that is created, captured, communicated, and stored worldwide has grown exponentially over the past decades. Along with this development, it has become critical for many disciplines of science and business to be able to gather and analyze large amounts of data. The sheer volume of the data often exceeds the capabilities of classical storage systems, with the result that current large-scale storage systems are highly distributed and comprise a large number of individual storage components. As with any other electronic device, the reliability of storage hardware is governed by certain probability distributions, which in turn are influenced by the physical processes used to store the information. The traditional way to deal with the inherent unreliability of combined storage systems is to replicate the data several times. Another popular approach to achieving failure tolerance is to calculate the block-wise parity in one or more dimensions. With a better understanding of the different failure modes of storage components, it has become evident that sophisticated high-level error detection and correction techniques are indispensable for ever-growing distributed systems. The use of powerful cyclic error-correcting codes, however, comes with a high computational penalty, since the required operations over finite fields do not map well onto current commodity processors. This thesis introduces a versatile coding scheme with fully adjustable fault tolerance that is tailored specifically to modern processor architectures. To reduce stress on the memory subsystem, the conventional table-based algorithm for multiplication over finite fields has been replaced with a polynomial version.
This arithmetically intense algorithm is better suited to the wide SIMD units of currently available general-purpose processors, but also shows significant benefits when used with modern many-core accelerator devices (for instance, the popular general-purpose graphics processing units). A CPU implementation using SSE and a GPU version using CUDA are presented. The performance of the multiplication depends on the distribution of the polynomial coefficients in the finite field elements. This property has been used to create suitable matrices that generate a linear systematic erasure-correcting code with significantly increased multiplication performance for the relevant matrix elements. Several approaches to obtaining the optimized generator matrices are elaborated and their implications are discussed. A Monte-Carlo-based construction method makes it possible to influence the specific shape of the generator matrices and thus to adapt them to special storage and archiving workloads. Extensive benchmarks on CPU and GPU demonstrate the superior performance and the future application scenarios of this novel erasure-resilient coding scheme.
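The idea of replacing table lookups with polynomial (carry-less) arithmetic can be illustrated with a minimal scalar sketch; the reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B, the AES field polynomial) is an illustrative assumption rather than the field necessarily used in the thesis, and real SIMD implementations perform many such multiplications in parallel.

```python
def gf256_mul(a: int, b: int, poly: int = 0x11B) -> int:
    """Multiply two GF(2^8) elements without lookup tables.

    Shift-and-add ("Russian peasant") multiplication: addition in
    GF(2) is XOR, and overflow past bit 7 is reduced modulo the
    field polynomial so the result stays an 8-bit field element.
    """
    result = 0
    while b:
        if b & 1:            # add (XOR in GF(2)) a shifted copy of a
            result ^= a
        a <<= 1              # multiply a by x
        if a & 0x100:        # degree-8 term appeared: reduce mod poly
            a ^= poly
        b >>= 1
    return result
```

Because every step is plain shifts and XORs on machine words, this formulation maps directly onto wide SIMD registers, unlike a table-based multiply whose random lookups stress the memory subsystem.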
The adaptive immune system protects against daily infections and malignant transformation. Here, the translocation of antigenic peptides by the transporter associated with antigen processing (TAP) into the ER lumen is an essential step in antigen presentation by MHC I molecules. The heterodimeric ATP-binding cassette (ABC) transporter TAP consists of the two half-transporters TAP1 and TAP2. Each monomer contains an N-terminal transmembrane domain (TMD) and a conserved C-terminal nucleotide-binding domain (NBD). Together, the TMDs form the translocation core, while the NBDs bind and hydrolyze ATP, energizing peptide transport. TAP features an asymmetry in the two ATP-binding sites, which are built of several conserved motifs. One motif is the D-loop with the consensus sequence SALD. The highly conserved aspartate of the D-loop of TAP1 reaches into the canonical ATP-binding site and contacts the Walker A motif and the H-loop of the opposite NBD, while the Asp of the D-loop of TAP2 is part of the non-canonical ATP-binding site.
To examine this ABC transport complex in mechanistic detail, a purification and reconstitution procedure was established that preserved the function of TAP. The heterodimeric TAP complex was purified via a His10-tag on TAP1 in a 1:1 ratio of the subunits. Nucleotide binding to the purified transporter was elucidated by tryptophan quenching assays, and the affinity constants for MgADP and MgATP were determined to be 1.0 μM and 0.7 μM, respectively. In addition, the TAP complex shows strict coupling between peptide binding and ATP hydrolysis, revealing no basal ATPase activity in the absence of peptides. Furthermore, TAP was reconstituted into proteoliposomes and its activity was tested by peptide transport and ATP hydrolysis. Interestingly, the kinetic parameters of the transporter in the reconstituted state are comparable to the data obtained for TAP in microsomes.
To characterize the functional importance of the D-loop, D-loop mutants of either TAP1 or TAP2 were analyzed. Strikingly, TAP containing a mutated D-loop in TAP1 (D674A) shows ATP-hydrolysis-independent peptide translocation. Accordingly, MHC I surface expression is similar to the wildtype situation. However, the same mutation in TAP2 (D638A) results in ATPase-dependent peptide transport similar to wildtype, whereas TAP containing mutations in both subunits is an inactive transporter. Although none of the D-loop mutants showed altered peptide binding activity, the TAP1 mutant is inactive in peptide-stimulated ATPase activity. Strikingly, ATP or ADP binding is strictly required for peptide translocation. Experiments carried out in proteoliposomes demonstrate that wildtype TAP can export peptides against their gradient when low peptide concentrations are offered. In contrast, the D674A mutant can facilitate peptide translocation along the concentration gradient in both directions. At high peptide concentrations, TAP is trapped in a transport-incompetent state induced by trans-inhibition. In conclusion, a TAP mutant that uncouples solute translocation from ATP hydrolysis was created. Since this passive substrate movement is strictly dependent on binding of ATP or ADP, an active transporter was turned into a “nucleotide-gated facilitator”.
In a cysteine cross-linking approach, the conformational changes of TAP during peptide transport and the flexibility of the nucleotide-binding domains were examined. Single cysteines were introduced into the D-loops of TAP1 and TAP2. Cross-linking by copper-phenanthroline (CuPhe) was possible for all combinations. However, upon adding ATP, ADP, or peptide to the TAP complex, no differences in cross-linking efficiency were detected. By CuPhe cross-linking, TAP was trapped in a conformation in which the peptide-binding site was not accessible. To complete a transport cycle, a flexibility of the NBDs of at least 17.8 Å is needed, since TAP cross-linked by CuPhe (2.0 Å) or bismaleimidoethane (BMOE, 8.0 Å) was transport-inactive, whereas transport activity was preserved when TAP was cross-linked by 1,11-bismaleimido-triethyleneglycol (BM[PEG]3, 17.8 Å).
Amphibians of Malawi : an analysis of their richness and community diversity in a changing landscape
(2009)
This study summarizes the state of knowledge of the amphibian diversity in Malawi, highlighting the possible threats to this fauna associated with human encroachment and land use change. New data on diversity, distribution, and ecology have been gathered, while the older data have been summarised, reviewed, and commented upon. In order to put the responses of the amphibian communities to land use change in context, the main environmental characteristics of the country have been explored at a broad spatial and temporal scale. Furthermore, the original habitats and vegetation have been described, and their status in present-day Malawi discussed. Likewise, an overview of the current state of knowledge about the Malawian amphibians has been provided, and their ability to act as surrogates of environmental integrity in Sub-Saharan Africa discussed on the basis of the available studies. The results of the study of the selected areas and samples have then been analysed within this newly generated context. Different field and laboratory methods were applied for the quantitative analysis of the richness and diversity of the communities. Opportunistic searching was used to detect species richness, whereas visual encounter surveys were applied to detect the relative abundance of species. Several indices of diversity and similarity, as well as extrapolations by means of true richness estimators, were used for the analysis of alpha and beta diversity. Additional information was gathered by means of pitfall traps with drift fences and by recording advertisement calls. Supplementary methods were applied for the analysis of the taxonomic composition of the collected material. In Malawi, 84 amphibian species are recorded, two of which are still undescribed (Leptopelis sp. and Phrynobatrachus sp.). Three further species need to be confirmed and may possibly be present as well: Amietia viridireticulata, Hemisus guineensis, and Hyperolius minutissimus.
Additionally, at least one further unrecognised cryptic species is present within the Hyperolius nasutus complex. Most of the species belong to the order Anura (82 species; 97.6%), whereas only two species belong to the Gymnophiona (2.4%). The anurans are divided into 12 families and 23 genera, whereas the two caecilian species belong to one family (Caecilidae) and two genera. The most diverse family is the Hyperoliidae (21 species, 25%), followed by the Ptychadenidae (13 species, 15%), Arthroleptidae (11 species, 13%), Phrynobatrachidae (10 species, 12%), and Bufonidae and Pyxicephalidae (9 species, 11% each). The remaining high family diversity (seven families, Caecilidae included) is contrasted by a low number of species (11 species in total, 14%). Based on the available distribution data, the species richness of the anuran communities in Malawi ranges between 5 and 45 species. On average, 16.8 ± 9.0 species (N=80) are found; 75% of the sites have fewer than 21 species, and only two sites have more than 25 species. Four hot spots of amphibian diversity were identified: the Nyika Plateau (24 species), Mangochi-Malombe (25 species), the Zomba Plateau (32 species), and the Mulanje Massif (45 species). In the studied areas, a mean of 14.7 ± 1.6 species was observed, and extrapolations by means of the true richness estimators were in good agreement with this result. Among the studied areas, the richest was Palm Forest Reserve (17 species), followed by Kaningina Forest Reserve (16 species), and Vinthukutu F. R. and Vwaza W. R. (15 species). The poorest area was the Misuku Mountains with only 12 species, and a slightly different ranking was generated by the true richness estimators. The mean number of species present in the samples was 4.8 ± 2.1, considerably less than the true species richness detected in the respective areas. Based on the ranking generated by the K-dominance plot, the most diverse samples were Palm F. R.
and Misuku, whereas the least diverse were Kaningina F. R. and Fort Lister, as confirmed by the values of the diversity indices. The main finding of this study was the lack of a clear match between environmental degradation and amphibian diversity, and the crucial importance of temporary water bodies for the preservation of amphibian diversity. In fact, although most of the original habitat formerly present in Malawi has been destroyed and replaced by cultivation, the amphibian communities of different areas showed a comparable diversity at both the family and species richness level, and no evident match between environmental degradation and amphibian diversity was recognisable. Differences in species richness could mostly be explained by natural factors such as the elevation gradient and the presence of temporary water bodies. However, it was not possible to exclude that the communities had changed during historical time and that a shift in species composition had already occurred together with a modification of their relative frequencies. Most of the species showed a remarkable ecological plasticity, and several species were found in a variety of both natural and altered habitats. The classification of the Malawian amphibians into ecological guilds based on the available natural history data showed a preponderance (76%) of generalist pond breeders. As a consequence, most of these amphibians possess only a limited capacity to act as surrogates of habitat integrity. Based on the results of this study, the farm bush landscape with traditional agricultural practices bears a great potential to support amphibian diversity in terms of species richness, representing a compromise between local economic development and conservation. Furthermore, the results of this study indicate the outstanding importance of the south-eastern region of Malawi for the conservation of the country’s amphibians.
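The diversity measures and true-richness estimators used above can be sketched with two standard formulas, the Shannon index and the Chao1 estimator; the abundance vector in the example is hypothetical, and the exact estimators applied in the study may differ.

```python
import math
from collections import Counter

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed species."""
    n = sum(abundances)
    return -sum((a / n) * math.log(a / n) for a in abundances if a > 0)

def chao1(abundances):
    """Chao1 true-richness estimate: S_obs + F1^2 / (2 * F2).

    F1/F2 are the numbers of species seen exactly once/twice; the
    estimator extrapolates how many species were missed entirely.
    """
    counts = Counter(a for a in abundances if a > 0)
    s_obs = sum(counts.values())
    f1 = counts.get(1, 0)   # singletons
    f2 = counts.get(2, 0)   # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form
    return s_obs + f1 * f1 / (2.0 * f2)
```

For instance, a hypothetical sample with abundances [1, 1, 2, 5, 10] has 5 observed species, two singletons, and one doubleton, so Chao1 estimates 7 species in total.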
This work describes the development and characterization of two instruments and their data evaluation, which contributes to a better understanding of new particle formation and growth, as well as their interactions with clouds. Both instruments were characterized at the Cosmics Leaving Outdoor Droplets (CLOUD) experiment at the European Organization for Nuclear Research (CERN).
The problem of unconstrained or constrained optimization occurs in many branches of mathematics and various fields of application. It is, however, an NP-hard problem in general. In this thesis, we examine an approximation approach based on the class of SAGE exponentials, which are nonnegative exponential sums. We examine this SAGE-cone, its geometry, and generalizations. The thesis consists of three main parts:
1. In the first part, we focus purely on the cone of sums of globally nonnegative exponential sums with at most one negative term, the SAGE-cone. We examine the duality theory and extreme rays of the cone, and provide two efficient optimization approaches over the SAGE-cone and its dual.
2. In the second part, we introduce and study the so-called S-cone, which provides a uniform framework for SAGE exponentials and SONC polynomials. In particular, we focus on second-order representations of the S-cone and its dual using extremality results from the first part.
3. In the third and last part of this thesis, we turn towards examining the conditional SAGE-cone. We develop a notion of sublinear circuits leading to new duality results and a partial characterization of extremality. In the case of polyhedral constraint sets, this examination is simplified and allows us to classify sublinear circuits and extremality completely in some cases. For constraint sets satisfying certain conditions, such as sets with symmetries, conic, or polyhedral sets, various optimization and representation results from the unconstrained setting can be applied to the constrained case.
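For orientation, the membership test behind the SAGE-cone can be stated via the standard relative-entropy characterization of an AGE function (an exponential sum with at most one negative coefficient); the notation below follows the literature and may differ from the thesis's conventions.

```latex
% An AGE function: all coefficients nonnegative except possibly c_k.
f(x) \;=\; \sum_{j \in [m]\setminus\{k\}} c_j\, e^{\langle \alpha_j, x\rangle}
          \;+\; c_k\, e^{\langle \alpha_k, x\rangle},
\qquad c_j \ge 0 \ \text{for } j \ne k .
% f is nonnegative on R^n if and only if there exists
% \nu \in \mathbb{R}_+^{[m]\setminus\{k\}} with
\sum_{j \ne k} \nu_j \,(\alpha_j - \alpha_k) \;=\; 0
\qquad\text{and}\qquad
\sum_{j \ne k} \nu_j \log \frac{\nu_j}{e\, c_j} \;\le\; c_k .
```

The SAGE-cone is then the set of sums of such AGE functions; it is this convex, relative-entropy-representable certificate that makes optimization over the cone and its dual tractable.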
Heat stress transcription factors (Hsfs) play an essential role in the heat stress response and thermotolerance by controlling the transcriptional activation of heat stress response (HSR) genes, including molecular chaperones. Plant Hsf families show a striking multiplicity, with more than 20 members in many plant species. Among the Hsfs, HsfA1s act as the master regulators of the heat stress (HS) response, and HsfA2 becomes one of the most abundant Hsfs during HS. Using transgenic plants with suppressed expression of HsfA2, we have shown that this Hsf is involved in the acquired thermotolerance of S. lycopersicum cv Moneymaker, as HsfA2 is required for the high expression and maintenance of increased levels of Hsps during repeated cycles of HS treatment.
Interestingly, HsfA2 undergoes temperature-dependent alternative splicing (AS), which results in the generation of seven transcript variants. Three of these transcripts (HsfA2-Iα-γ), generated by alternative splicing of a second, newly identified intron, encode the full-length protein involved in acquired thermotolerance. Another three transcripts (HsfA2-IIIα-γ) are generated by alternative splicing in intron 1, leading in all cases to a premature termination codon and to targeting of these transcripts for degradation via nonsense-mediated mRNA decay (NMD).
Interestingly, excision of intron 2 results in the generation of a second, previously unreported protein isoform, annotated as HsfA2-II. HsfA2-II shows transcriptional activity similar to the full-length protein HsfA2-I in the presence of HsfA1a but lacks the nuclear export signal (NES) required for nucleocytoplasmic shuttling, which allows HsfA2-I efficient nuclear retention and stimulation of transcription of HS-induced genes. Furthermore, stability assays showed that HsfA2-II exhibits lower protein stability compared to HsfA2-I.
We identified the presence of a second intron and the generation of a second protein isoform in other Solanaceae species as well. Remarkably, we observed major differences in the splicing efficiency of HsfA2 intron 2 among different tomato species. Several wild tomato accessions exhibit a higher splicing efficiency that favors the generation of HsfA2-II, while in these species the splice variant HsfA2-Iγ is absent. This natural variation in splicing efficiency, specifically occurring at temperatures around 37.5 °C, is associated with the presence of three intronic polymorphisms. In the wild species, these polymorphisms seemingly restrict the binding of RS2Z36, identified as a putative splicing silencer for HsfA2 intron 2.
Tomato accessions with the polymorphic “wild” HsfA2 show enhanced thermotolerance against a direct severe heat stress incident due to a stronger increase of Hsps and other stress-induced genes. Introgression of the “wild” S. pennellii HsfA2 locus into the cultivar M82 resulted in enhanced seedling thermotolerance, highlighting the potential use of the polymorphic HsfA2 for breeding.
We conclude that alterations in the splicing efficiency of HsfA2 have contributed to the adaptation of tomato species to different environments and that these differences might be directly related to natural variation in their thermotolerance.
The PANDA experiment will be one of the flagship experiments at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. It is a versatile detector dedicated to topics in hadron physics such as charmonium spectroscopy and nucleon structure. A DIRC counter will deliver hadronic particle identification in the barrel part of the PANDA target spectrometer and will cleanly separate kaons with momenta up to 3.5 GeV/c from a large pion background. An alternative DIRC design option, using wide Cherenkov radiator plates instead of narrow bars, would significantly reduce the cost of the system. Compact fused silica photon prisms have many advantages over the traditional stand-off boxes filled with liquid. This work describes the study of these design options, which represent important advancements of the DIRC technology in terms of cost and performance. Several new reconstruction methods were developed and are presented. Prototypes of the DIRC components were built and tested in particle beams, applying the new concepts and approaches. An evaluation of the performance of the designs, feasibility studies with simulations, and a comparison of simulation and prototype tests are presented.
Adolescence, i.e. the phase of maturation from juvenile to adult, represents a central period of human development that is associated with profound emotional and cognitive changes. Recent studies (Bunge et al., 2002; Durston et al., 2002; Casey et al., 2005; Crone et al., 2006; Bunge and Wright, 2007) make clear that the functional architecture of the brain changes fundamentally during adolescence and that these changes may be associated with the maturation of higher cognitive functions in adolescence. Measurements of brain volume by magnetic resonance imaging (MRI), for example, show a non-linear reduction of gray matter and an increase of white matter during adolescence (Giedd et al., 1999; Sowell et al., 1999, 2003). Furthermore, changes in excitatory and inhibitory neurotransmitter systems occur during this time (Tseng and O'Donnell, 2005; Hashimoto et al., 2009). Together, these findings indicate that cortical networks are remodeled during adolescence, which could have important consequences for the maturation of neuronal oscillations. Following an introduction in Chapter 2, Chapter 3 of this dissertation summarizes previous findings on developmental changes in the amplitude, frequency, and synchronization of neuronal oscillations, and discusses the relationship between the development of neuronal oscillations and the maturation of higher cognitive functions during adolescence. The anatomical and physiological mechanisms that may underlie these changes are also presented from a theoretical perspective.
The empirical studies presented in Chapters 4-6 investigate neuronal oscillations by means of magnetoencephalography (MEG) in order to characterize the frequency bands and functional networks that are associated with higher cognitive processes and their development in adolescence. To this end, three experiments were conducted in which MEG activity was recorded during a working-memory task and in the resting state. The results of these experiments show that alpha oscillations and gamma-band activity co-occur both in a task-dependent manner and in the resting state. Moreover, the present studies extend previous work by demonstrating an interaction between the two frequency bands that could serve as a mechanism for the selective routing of information. The developmental data presented in Chapter 6 further indicate that late changes in the alpha and gamma band take place during adolescence and that these changes are involved in the development of working-memory capacity and of the ability to inhibit distractors. Finally, Chapter 7 discusses the studies presented in this dissertation from a broader, integrative perspective.
Bacteria are true survival artists that rapidly adapt to environmental changes such as pH shifts, temperature changes, and different salinities. Upon osmotic shock, bacteria are able to counteract the loss of water by the uptake of potassium ions. In many bacteria, this is accomplished by the major K+ uptake system KtrAB. The system consists of the K+-translocating channel subunit KtrB, which forms a dimer in the membrane, and the cytoplasmic regulatory RCK subunit KtrA, which binds non-covalently to KtrB as an octameric ring. This unique architecture differs strongly from other RCK-gated K+ channels like MthK or GsuK, in which covalently tethered cytoplasmic RCK domains regulate a single tetrameric pore. As a consequence, an adapted gating mechanism is required: the activation of KtrAB depends on the binding of ATP and Mg2+ to KtrA, while ADP binding at the same site results in inactivation, mediated by conformational rearrangements. However, it is still poorly understood how the nucleotides are exchanged and how the resulting conformational changes in KtrA control gating in KtrB.
Here, I present a 2.5-Å cryo-EM structure of ADP-bound, inactive KtrAB, which for the first time resolves the N termini of both KtrB subunits. They are located at the interface of KtrA and KtrB, forming a strong interaction network with both subunits. In combination with functional and EPR data, we show that the N termini, surrounded by a lipidic environment, play a crucial role in the activation of the KtrAB system. We propose an allosteric network in which an interaction of the N termini with the membrane facilitates MgATP-triggered conformational changes, leading to the active, conductive state.
The goal of this thesis was to gain further insight into the binding behavior of ligands in the heptahelical domain (HD) of group I metabotropic glutamate receptors (mGluRs). This was realized by establishing strategies for the detection and optimization of molecules acting as non-competitive antagonists of group I mGluRs (mGluR1/5). These strategies should guarantee high diversity in the retrieved chemotypes, with the detected compounds not resembling the original reference molecules (“scaffold-hopping”). The detection of new scaffolds, in turn, was divided into two approaches: first, the development of pharmacological assays to screen compounds at a given target for bioactivity (here: affinity towards the allosteric recognition site of mGluR1 and mGluR5), and second, the evaluation of computer-assisted methods for the identification of virtual hits to be screened afterwards in the pharmacological assays established before. Promising molecules were to be optimized with respect to activity/affinity and selectivity, their binding mode investigated and, finally, compared to existing lead compounds. Initially, membrane-based binding assays for the HD of mGlu1 and mGlu5 receptors with enhanced throughput (shifting from 24-well plates to 96-well plates) were set up. In the mGluR1 assay, the potent antagonist EMQMCM exhibited high affinity towards the binding site (Ki ~3 nM), which is in accordance with published data from Mabire et al. (functional IC50 3 nM). For mGluR5, the reference antagonist MPEP binds with high affinity to the receptor (binding IC50 13.8 nM), confirming earlier findings from Anderson et al. (binding IC50 15 nM). In another series of experiments, the properties of rat cerebellar (mGluR1) and cortical membranes (mGluR5) as well as of radiotracers were investigated by means of binding saturation studies and kinetic experiments.
Furthermore, the influence of the solvent DMSO, necessary for compound screening of lipophilic substances, on positive and negative controls was evaluated. As the precise architecture of the HD of mGluR1 is still not known, our efforts to identify new ligands for this receptor focused on the ligand-based approach. All computer-assisted methods that were applied to virtually screen large compound collections and to retrieve potential hits (“activity-enriched subsets”) acting at the heptahelical domain of mGluR1 relied on the existence of a valid dataset of reference molecules. This was realized by the initial compilation of an mGluR reference data collection comprising in total 357 entries, predominantly negative but also some positive allosteric modulators of mGluR1 and mGluR5. In the next step, a pharmacophore model for non-competitive mGluR1 antagonists was constructed. It was based on six selective, potent, and structurally diverse ligands. Prospective virtual screening was performed using the CATS atom-pair descriptor. The Asinex Gold Collection was screened for each seed compound, and some of the most similar compounds (according to the CATS descriptor) were ordered and tested for binding affinity and functional activity at mGluR1. A high hit rate of approximately 26% (IC50 < 15 µM) was achieved, confirming the applicability of this method. One compound exerted functional activity below one micromolar (IC50 value of C-07: 362 nM ± 0.03). Moreover, non-linear principal component analysis was employed. Again, the Asinex vendor database served as the test database and was filtered by the mGluR1 pharmacophore model established before. Test molecules that were located adjacent to mGluR1 antagonist references were selected. Fifteen compounds were tested on mGluR1 in binding and functional assays, and three of them exhibited functional activity (IC50) below 15 µM. The most potent molecule, P-06, revealed an IC50 value of 1.11 µM (± 0.41).
The COBRA database, comprising 5,376 structurally diverse bioactive molecules affecting various targets, was encoded with the CATS descriptor and used for training two self-organizing maps (SOMs). The encoded mGluR reference data collection was projected onto this map according to the SOM algorithm. This projection made it possible to clearly distinguish between antagonists of the mGluR1 and mGluR5 subtypes. 28 compounds were ordered and tested for activity and affinity at mGluR1. They exhibited functional activity down to the sub-micromolar range (IC50 value of S-08: 744 nM ± 0.29), yielding a final hit rate of 46% (< 15 µM). Then, the Asinex collection was screened using the SOM approach. For a predicted target panel including the muscarinic mACh (M1) receptor, the histamine H1 receptor, and the dopamine D2/D3 receptors, the tested mGluR ligands exhibited the calculated binding pattern. This virtual screening concept might provide a basis for the early recognition of potential side effects in lead discovery. We superimposed a set of 39 quinoline derivatives acting as non-competitive mGluR1 antagonists that were recently published by Mabire and co-workers. A CoMFA model (QSAR) was established, and the influence of several side chains on functional activity was investigated. The coumarine derivative C-07 was obtained as a result of similarity searching. Starting from this compound, a series of chemical derivatives was synthesized. This led to the discovery of potent (B-28, IC50: 58 nM ± 0.008; Ki: 293 nM ± 0.022) and selective (rmGluR5 IC50: 28.6 µM) mGluR1 antagonists. From a homology model of mGluR1, we derived a potential binding mode for coumarines within the allosteric transmembrane region. Potential interaction patterns with amino acids were proposed, considering the differences between the binding pockets of the rat and human receptors.
The proposed binding modes for quinolines (here: EMQMCM) and coumarins (here: B-04) were compared and discussed, considering in particular the influence of several quinoline side chains on activity as obtained from the QSAR studies. The present studies demonstrated the applicability of ligand-based virtual screening for non-competitive antagonists of a G-protein coupled receptor, resulting in novel, potent and selective agents.
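The similarity-based screening step described above can be sketched in a few lines: molecules are encoded as fixed-length correlation vectors and a library is ranked by distance to a seed compound. This is a minimal illustration only; the vectors, compound names and dimensionality below are invented (real CATS vectors have 150 dimensions), and this is not the actual screening pipeline of the thesis.

```python
import math

def cats_distance(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def screen(seed, library, top_n=2):
    """Rank library compounds by descriptor distance to the seed;
    the closest compounds form the 'activity-enriched subset'."""
    ranked = sorted(library.items(), key=lambda kv: cats_distance(seed, kv[1]))
    return [name for name, _ in ranked[:top_n]]

# Toy, made-up 4-dimensional descriptor vectors (hypothetical compounds).
seed = [0.2, 0.5, 0.1, 0.9]
library = {
    "cmpd_A": [0.2, 0.6, 0.1, 0.8],
    "cmpd_B": [0.9, 0.1, 0.7, 0.2],
    "cmpd_C": [0.25, 0.5, 0.1, 0.9],
}
print(screen(seed, library))  # → ['cmpd_C', 'cmpd_A']
```

The nearest neighbours of each seed would then be ordered and tested experimentally, as the abstract describes.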
Alignment, characterization and application of polyfluorene in polarized light-emitting devices
(2001)
The goal of this dissertation was the realization of polarized electroluminescence from blue-emitting liquid-crystalline polyfluorenes. Polymer light-emitting diodes that emit polarized light due to a high degree of molecular orientation in the active layer are of interest, for example, as backlights in liquid-crystal displays (LCDs). It was shown that high degrees of order can be achieved by aligning polyfluorene on alignment layers based on rubbed polyimide. Doping with hole-conducting materials allowed, for the first time, the incorporation of such alignment layers into light-emitting diodes and enabled the realization of polarized electroluminescence. The morphology and structure of both the highly oriented polyfluorene films and the hole-conducting alignment layers were investigated in detail. The electroluminescence properties of isotropic and polarized light-emitting diodes were analysed thoroughly and subsequently improved decisively by chemical modification of the polyfluorene. In addition, polyfluorene was doped with fluorescent dyes in order to obtain green and red emission starting from blue light. Here it was examined to what extent Förster energy transfer and charge-carrier trapping are responsible for the emission of the admixed dyes. An introduction to the fundamentals of electroluminescence of conjugated polymers is given in Chapter 2 of this work. Since polarized electroluminescence requires a high degree of anisotropy of the emitting layer, various methods for aligning polymers are then discussed, with particular emphasis on the orientation of liquid-crystalline polymers. Chapter 3 covers the relevant properties of the polymers as well as the experimental methods used in this work.
Besides polyfluorene, a further blue-emitting polymer, poly(phenylene ethynylene) (PPE), is introduced. In the characterization of the polyfluorenes, following the description of the pristine polymers, the positive influence of attaching hole-conducting end groups to the ends of the main chains on key electroluminescence properties is demonstrated. Furthermore, the essential features of polyimide, which forms the matrix of the alignment layer, and of various polymers serving hole conduction and hole injection are discussed. The description of the methods for preparing isotropic and polarized light-emitting diodes and for investigating the optical, electrical and morphological properties of the polymer films concludes this section. In the fourth chapter of this work, different methods for aligning the polymer molecules are applied to polyfluorene and to PPE and are compared and assessed with regard to the achievable degrees of order. In the case of polyfluorene, it was shown that orientation in the liquid-crystalline state with the aid of additional alignment layers based on rubbed polyimide is the only suitable method for orienting this polymer. By adding low-molecular-weight hole-conducting materials at a suitable concentration to the polyimide matrix, the non-conducting polyimide could be modified such that it could be incorporated into light-emitting diodes without the alignment properties of the layers being lost. Comparisons of different polyfluorenes showed that the length and structure of the alkyl side chains decisively influence the orientation behaviour. It was shown that considerably higher degrees of orientation can be achieved with branched side chains than with linear ones.
This was explained by the increased ratio of persistence length to polymer diameter, which, according to the theory of liquid-crystalline polymers, leads to an increase in the achievable order parameter. In addition, the absorption spectra of the polyfluorenes with long side chains indicated a planar conformation of the polymer backbones, which, owing to the strong interaction between the individual chains, prevents orientation in the liquid-crystalline state. Of all the polyfluorenes investigated, poly(di(ethylhexyl)fluorene) (PF2/6) could be oriented best. In contrast to polyfluorene, the attempt to align PPE in the liquid-crystalline state on alignment layers failed. Calorimetric DSC investigations made clear that the structure of PPE differs only insignificantly between the liquid-crystalline and the crystalline phase. In both phases, absorption measurements indicated a planar conformation of the PPE backbones. The viscosity of PPE, a polymer known to be very stiff, is therefore too high even in the liquid-crystalline state for a reordering of the molecules to be caused solely by interaction with an alignment layer. PPE could, however, be oriented in the crystalline state by rubbing the polymer film itself instead of using an additional alignment layer. The high stiffness of PPE allowed the forces generated by rubbing to be transferred to the rigid polymer backbone and enabled a homogeneous alignment of the molecules. Using this method, light-emitting diodes with PPE in the active layer that emitted polarized light could be realized. The best methods for aligning the molecules thus differed for the two liquid-crystalline polymers polyfluorene and PPE, and for both polymers procedures were found that enabled the fabrication of polarized light-emitting diodes.
In Chapter 5 of this work, the morphology, the structure and further essential properties of both the oriented polyfluorene films and the hole-conducting alignment layers of doped polyimide required for alignment are discussed. For this purpose, the films were investigated by light and electron microscopy as well as by electron and X-ray diffraction experiments. In the first part, the observed decrease in the orientability of polyfluorene with increasing molecular weight is described in more detail by electron diffraction studies. Results from transmission electron microscopy showed that the morphology of oriented PF2/6 films is characterized by highly ordered lamellae interrupted at regular intervals by disordered regions. Within the oriented lamellae the molecules sort themselves by similar chain length, whereas the chain ends are found predominantly in the disordered regions. Structural investigations showed that the individual polymer chains of PF2/6 are cylindrical and exhibit hexagonal packing, with the polymer backbones forming a 5/2 helix structure. The worm-like backbone is surrounded by a cylindrical shell of disordered side chains, which act like a solvent between the individual chains. The resulting low viscosity of the polymer explains the observed better orientability of PF2/6 compared with polyfluorene with linear octyl side chains or with PPE. In the second part of the fifth chapter, results of investigations of the hole-conducting alignment layers are presented. The influence of adding hole-conducting materials to polyimide on mechanical as well as electrical properties was examined.
At moderate hole-conductor concentrations, the mechanical stability of the films was sufficient to show no noticeable differences from undoped rubbed films after rubbing. Comparisons of corresponding films with respect to charge injection and transport showed that only doping makes the use of polyimide alignment layers in light-emitting diodes possible. Both polymeric and low-molecular-weight hole-conducting materials were compared with regard to the achievable degrees of orientation and the resulting electroluminescence properties, with only the latter leading to favourable results in both respects at the same time. It was shown that the best results were obtained with polarized light-emitting diodes in which the emitting layer was deposited on a double-layer structure serving hole injection and orientation. Here, a hole-conducting alignment layer of doped polyimide was located on top of a hole-injection layer of pure hole-conductor material. Variation of the hole-conductor concentration in polyimide showed that the brightness increased with increasing concentration, whereas the achieved polarization ratios decreased at the same time. SEM and AFM investigations of the influence of the hole-conductor concentration on the layer morphology showed that these observations can be explained by phase separation and mechanical damage of the films, which occur at concentrations above 20 per cent by weight. Finally, Chapter 6 discusses the electroluminescence of light-emitting diodes with polyfluorene as the emitting layer. First, the most favourable diode architecture was determined in isotropic light-emitting diodes and the layers used were optimized.
The results were combined with the knowledge gained in the investigations described above in order to realize light-emitting diodes with highly polarized emission. Blue electroluminescence with an emission maximum of 450 nm and a polarization ratio of 21 was achieved, with a luminance of about 100 cd/m² at an applied voltage of 18 V, corresponding to the typical brightness of a computer monitor. All electroluminescence properties could be further improved significantly by end-functionalization of the polyfluorene, attaching hole-conducting triarylamine derivatives to the ends of the main chains ('endcapping'). The undesired contribution to the emission at longer wavelengths, which was observed in the case of the pristine polyfluorene and is commonly attributed to aggregated polymer molecules, was effectively suppressed by the concept of end-functionalization. In addition, the colour stability was substantially improved and the efficiency of the light-emitting diodes was more than an order of magnitude higher than when the pristine polyfluorene was used. These observations were explained by the electrochemical properties of the end groups. The latter act as attractive traps for charge carriers, so that exciton generation and subsequent recombination take place predominantly near the chain ends instead of, as in the case of the pristine polyfluorene, at less efficient aggregates or excimer-forming sites. It was shown that the end-functionalization impaired neither the behaviour of the polymer in the liquid-crystalline state nor its orientability. The use of the modified polyfluorene allowed the fabrication of polarized light-emitting diodes with a polarization ratio of 22 and a luminance of 200 cd/m² at 19 V, with the threshold voltage lowered to 7.5 V.
Diodes with an anisotropy factor of 15 reached luminances of up to 800 cd/m². At 0.25 cd/A, the efficiency of these light-emitting diodes was more than twice as high as the values reported so far at a similar polarization ratio and luminance. The modification of the intrinsically blue emission colour by adding materials with a lower band gap to a polyfluorene matrix is described in Chapter 7. It was shown that adding even small concentrations of a green-emitting thiophene dye decisively changed the emission spectrum of the polyfluorene and enabled the realization of green emission. Just as in the case of the non-emitting hole conductors used for the end-functionalization of the polyfluorene, the thiophene dyes also act as effective charge-carrier traps, which, in addition to the colour change, resulted in a drastic improvement of the light-emitting diode efficiencies. Moreover, polarized green electroluminescence could be realized with the doped polyfluorene, with polarization ratios reaching values of up to 30 at a luminance of 600 cd/m² and an efficiency of 0.3 cd/A. With regard to red electroluminescence, light-emitting diodes with dendronized perylene dyes in the emitting layer were investigated, on the one hand in pure form and on the other in blends with polyfluorene. For this purpose, two generations of dendrimers, consisting of a central perylene diimide chromophore and a polyphenylene scaffold, were compared with a non-dendronized model compound. Light-emitting diodes with pure films of the first and second dendrimer generation emitted red light with CIE coordinates (0.627/0.372) and a luminance of up to 120 cd/m² at 11 V, although the efficiency was only 0.03 cd/A.
To clarify the different mechanisms leading to the emission of the dye molecules, the dyes were blended into polyfluorene and the influence of dendronization on the emission colour and the intensity of the electroluminescence was investigated. In photoluminescence, a decrease in Förster energy transfer from the polyfluorene host to the perylene dye guest was observed with increasing dendronization, leading to a higher blue fraction in the emission spectrum. In electroluminescence, by contrast, it was shown that the dyes act as electron traps, so that the recombination of the charge carriers into excitons takes place predominantly on the dye rather than on the polyfluorene molecules. For this reason, the emphasis of the red emission was far stronger in electroluminescence than in photoluminescence, where the red emission arises exclusively from energy transfer via the Förster mechanism. The strengthening of a colour shift from red to blue, observed with increasing dendronization and increasing operating voltage, could be explained qualitatively by the kinetic hindrance of electron transfer from the polyfluorene host to the perylene diimide chromophore. The best compromise between red colour depth and brightness was achieved for the blend of polyfluorene and the first-generation dendrimer dye. At an applied voltage of 6.5 V the luminance was 100 cd/m², and at 11 V it was 700 cd/m², with the emission peaking at 600 nm.
A new technique for precision ion implantation has been developed. A scanning probe has been equipped with a small aperture and incorporated into an ion beamline, so that ions can be implanted through the aperture into a sample. By using a scanning probe, the target can be imaged in a non-destructive way prior to implantation, and the probe together with the aperture can be placed at the desired location with nanometer precision. In this work, first results of a scanning probe integrated into an ion beamline are presented. A placement resolution of about 120 nm is reported. The final placement accuracy is determined by the size of the aperture hole and by the straggle of the implanted ion inside the target material. The limits of this technology are expected to be set by the latter, which is of the order of 10 nm for low-energy ions. This research has been carried out in the context of a larger program concerned with the development of quantum computer test structures. For that, the placement accuracy needs to be increased and a detector for single-ion detection has to be integrated into the setup. Both issues are discussed in this thesis. To achieve single-ion detection, highly charged ions are used for the implantation, as in addition to their kinetic energy they also deposit their potential energy in the target material, thereby making detection easier. A special ion source for producing these highly charged ions was used, and their creation and their interactions with solids are discussed in detail.
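The two contributions to placement accuracy named above (aperture size and ion straggle) are independent spatial uncertainties, so a common way to estimate the net accuracy is to add them in quadrature. This is an assumed textbook model, not a formula stated by the thesis; the numbers are illustrative:

```python
import math

def placement_uncertainty(aperture_nm, straggle_nm):
    """Combine independent spatial uncertainties in quadrature
    (assumed model: sigma_total = sqrt(sigma_aperture^2 + sigma_straggle^2))."""
    return math.sqrt(aperture_nm ** 2 + straggle_nm ** 2)

# Current regime: the ~120 nm aperture dominates over ~10 nm straggle.
print(placement_uncertainty(120, 10))  # ≈ 120.4 nm
# With a much smaller aperture, straggle becomes the limiting term.
print(placement_uncertainty(5, 10))    # ≈ 11.2 nm
```

This makes explicit why the straggle of low-energy ions (~10 nm) is expected to set the ultimate limit once the aperture is shrunk.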
We consider algorithms for strategic communication with commitment power between two rational, self-interested parties. If a party has commitment power, it commits to a strategy of action, publishes it, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) can observe it directly. The second party (R), however, makes a decision by choosing one of n actions whose type is unknown to R. This type determines the possibly different, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian persuasion and delegated search.
In Bayesian persuasion, S has commitment power. Here, S commits to a signaling scheme φ and communicates it to R. The scheme describes which signal S sends in which situation. Only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowledge of φ allows R to update its beliefs about the state of the world depending on the received signals. S must take this into account when designing φ, since R will not follow recommendations that advantage S at R's expense. We consider the problem from the perspective of S and describe signaling schemes that guarantee S as large a utility as possible.
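The obedience constraint just described, that R only follows recommendations consistent with its updated beliefs, can be made concrete with the classic two-state persuasion example (a standard textbook instance, not one of the instances studied in the thesis): R accepts only if the posterior probability of the "good" state is at least 1/2, while S always wants acceptance.

```python
# Two states {good, bad} with prior p on "good". The optimal committed
# scheme recommends "accept" always in the good state and with
# probability q in the bad state, with q chosen so that the posterior
# after an "accept" signal is exactly 1/2:  p = (1 - p) * q.

def optimal_scheme(p):
    """Return q (accept-recommendation probability in the bad state)
    and S's utility, i.e. the overall acceptance probability."""
    q = min(1.0, p / (1 - p)) if p < 1 else 1.0
    accept_prob = p + (1 - p) * q
    return q, accept_prob

q, u = optimal_scheme(0.3)
print(q, u)  # q ≈ 0.4286, acceptance probability 0.6 (= 2p for p <= 1/2)
```

Without commitment, R would simply reject under a prior of 0.3 and S would get nothing; commitment to φ doubles S's utility to 2p while keeping R's recommendation exactly obedient.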
First we consider the offline case. Here, S learns the complete state of the world and then sends a signal to R. We consider a scenario with a bounded number k ≤ n of signals. With only k signals, S can recommend at most k different actions. For various symmetric instances, we describe a polynomial-time algorithm for computing an optimal signaling scheme with k signals.
Furthermore, we consider a subset of instances in which the types are drawn from known, independent distributions. We describe polynomial-time algorithms that compute a signaling scheme with k signals guaranteeing a constant approximation factor relative to the optimal signaling scheme with k signals.
In the online case, the action types are revealed one by one in rounds. After observing the current action, S sends a signal and R must react immediately by choosing or rejecting the action. The process ends with the choice of an action. Otherwise, the next action type is revealed and previous actions can no longer be chosen. As a benchmark for our online signaling schemes we use the best offline signaling scheme.
First we consider a scenario with independent distributions. We show how an optimal signaling scheme can be computed in polynomial time. However, there are examples in which S, unlike in the offline case, cannot achieve any positive value online. We then consider a subset of instances for which a simple signaling scheme guarantees a constant approximation factor and show its optimality.
In addition, we consider 16 different scenarios with different levels of information for S and R and different objective functions for S and R, under the assumption that the action types are unknown a priori but are revealed in uniformly random order. For 14 cases we describe signaling schemes with a constant approximation factor. No such schemes exist for the remaining two cases. In addition, we show for most cases that the described approximation guarantees are optimal.
In the second part, we consider an online variant of delegated search. Here it is R that has commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ. For each type, φ specifies the probability with which R accepts it. Consequently, S tries to find an action with a type that is good for S itself and that is accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it. Only proposed actions can be chosen by R.
For the offline case, constant approximation factors compared with an action of optimal value for R are known for identically distributed action types. We show that in the online case R can in general only achieve a Θ(1/n)-approximation. The benchmark is the expected value of a one-dimensional online search by R.
Since the lower bound requires an exponential discrepancy in the values of the types for S, we consider parametrized instances. The parameters bound the values for S and the ratio of the values for R and S, respectively. We show (almost) optimal logarithmic approximation factors with respect to these parameters, guaranteed by efficiently computable schemes.
Whether climate change or air pollution: the chemical and physical processes in the atmosphere have important effects on human health and on ecosystems. The atmosphere is more than a mixture of nitrogen, oxygen, water vapour, helium and carbon dioxide. There are numerous trace gases whose total share of the volume amounts to less than 1%. In this work, nitrogen oxides, sulfur dioxide, carbon monoxide and sulfuric acid are examined in more detail; they were measured during the aircraft-based measurement campaign Chemistry of the Atmosphere: Field Experiment in Europe (CAFE-EU)/BLUESKY.
The nitrogen oxides NO and NO2, summarized as NOx, have mainly anthropogenic sources, above all fossil-fuel combustion and industrial processes. A photochemical equilibrium exists between NO and NO2, so that it is mainly NO2 that occurs in relevant concentrations in the atmosphere; owing to the formation of nitric acid, HNO3, in aqueous solution, NO2 is corrosive when inhaled and correspondingly harmful to health. Tropospheric ozone, O3, an essential component of summer smog, is formed mainly through the reaction of NO with peroxides (HO2 and RO2). In the stratosphere, NOx is produced mainly by the photodissociation of nitrous oxide, N2O, which, owing to its long lifetime, can be transported from the troposphere into the stratosphere and represents the most important nitrogen source there. In the stratosphere, NOx contributes to the catalytic ozone destruction mechanism (Bliefert, 2002; Seinfeld and Pandis, 2016).
Sulfur dioxide, SO2, is a toxic gas whose atmospheric sources are mainly anthropogenic, namely fossil-fuel combustion and industrial processes; its sinks are dry and wet deposition, the latter of which can lead to acid rain. Global SO2 emissions have been decreasing since the 1980s. In the atmosphere, SO2 can be oxidized to sulfate and sulfuric acid, a main component of winter smog. The most important mechanism is oxidation by the hydroxyl radical, OH, with the participation of water vapour. In the stratosphere, carbonyl sulfide, OCS, is the most important sulfur source, since, analogously to N2O, its long lifetime allows it to be transported from the troposphere into the stratosphere (Bliefert, 2002; Seinfeld and Pandis, 2016). Typical sulfuric acid concentrations are 10^5 cm^-3 at night and 10^7 cm^-3 during the day in the troposphere, and 10^5 cm^-3 during the day in the stratosphere (Clarke et al., 1999; Weber et al., 1999; Berresheim et al., 2000; Fiedler et al., 2005; Arnold, 2008; Kürten et al., 2016).
Carbon monoxide, CO, is a toxic gas that enters the atmosphere in roughly equal parts through direct emissions (mainly biomass burning and fossil-fuel combustion) and in-situ oxidation (mainly of methane, isoprene and industrial hydrocarbons). The main sink is the reaction with OH in the troposphere. The global CO concentration has been decreasing since 2000 (Bliefert, 2002).
Besides gases, aerosol particles, i.e. airborne solid or liquid particles, are also an integral part of the air mixture. Primary aerosol particles are emitted directly into the atmosphere as such, while secondary aerosol particles are formed in the atmosphere when gaseous precursor substances of low volatility condense on primary particles or form entirely new particles by clustering together and growing. As cloud condensation nuclei, aerosol particles make the formation of clouds possible in the first place and thus, in addition to their direct reflective effect, have an overall cooling effect on the climate by changing cloud cover and cloud properties, and they influence local and global water cycles. However, they also have negative effects on human health and are responsible for a shortening of average life expectancy in regions with high particulate-matter pollution (Seinfeld and Pandis, 2016; Bellouin et al., 2020; World Health Organization, 2016).
Besides the neutral, i.e. uncharged, gases and particles considered so far, gas-phase ions and charged particles are also constituents of the atmosphere. They play an important role in many atmospheric processes, such as thunderstorms, radio-wave propagation and ion-induced nucleation of aerosol particles. The main source of ionization in the troposphere and stratosphere is galactic cosmic radiation, which, contrary to its name, consists mainly of protons and α-particles (called primary particles) and, through collisions with air molecules in the Earth's atmosphere, produces showers of secondary particles (including muons, pions and neutrinos). The primary and secondary particles can ionize the air molecules, producing N+, N2+, O+, O2+ and electrons. Oxygen reacts rapidly with the latter to form O– and O2–. These cations and anions react further until ion clusters with the sum formulas (HNO3)n(H2O)mNO3– and H+(H2O)n(B)m are formed, where B denotes bases such as methanol, acetone, ammonia or pyridine. Further ionization sources are the decay of the radioisotope 222Rn near the ground and ionizing solar radiation above the stratosphere. Atmospheric ions have two important sinks: recombination, in which a cation and an anion neutralize each other, and attachment to aerosol particles. The latter sink is particularly relevant in the troposphere owing to the comparatively high concentration of aerosol particles (Viggiano and Arnold, 1995; Arnold, 2008; Bazilevskaya et al., 2008; Hirsikko et al., 2011).
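The source/sink balance described above (cosmic-ray production versus recombination and aerosol attachment) is commonly written as dn/dt = q − αn² − βnZ; at steady state this is a quadratic in the ion concentration n. The sketch below solves that standard balance; the coefficients are order-of-magnitude textbook values, not measurements from the thesis:

```python
import math

def steady_state_ion_conc(q, alpha, beta, Z):
    """Solve q = alpha*n^2 + beta*n*Z for n >= 0: balance between ion
    production q (cm^-3 s^-1), ion-ion recombination (alpha, cm^3 s^-1)
    and attachment to aerosol particles of concentration Z (cm^-3)."""
    disc = (beta * Z) ** 2 + 4 * alpha * q
    return (-beta * Z + math.sqrt(disc)) / (2 * alpha)

# Illustrative values: q = 10 ion pairs cm^-3 s^-1, alpha = 1.6e-6,
# beta = 1e-6 cm^3 s^-1, aerosol concentration Z = 1e3 cm^-3.
n = steady_state_ion_conc(10.0, 1.6e-6, 1e-6, 1e3)
print(f"{n:.0f} ions cm^-3")
```

With Z = 0 the familiar recombination-only limit n = sqrt(q/α) is recovered; adding the aerosol sink lowers the steady-state ion concentration, which is why attachment matters most in the aerosol-rich troposphere.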
Biological ageing is a degenerative and irreversible process, ultimately leading to the death of the organism. The process is complex and under the control of genetic, environmental and stochastic traits. Although many theories have been established during the last decades, none of them is able to fully describe the complex mechanisms that lead to ageing. Generally, biological processes and environmental factors lead to molecular damage and an accumulation of impaired cellular components. In contrast, counteracting surveillance systems are effective, including repair, remodelling and degradation of damaged or impaired components. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become affected by impairing effects. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic and systems-biological investigations are required. Hence, the processes that lead to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and lower biochemical complexity, and it provides good accessibility for genetic manipulation.
To obtain a general overview of the different biochemical processes affected during ageing in P. anserina, an initial comprehensive investigation was carried out, aimed at revealing genes significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Sophisticated and comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, it was found that the expression of autophagy-associated genes increases in the course of ageing.
Subsequently, to investigate and characterise the autophagy pathway, its individual components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
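The core idea of such homology-based network prediction is interolog transfer: an interaction known between two proteins in a well-studied reference organism is mapped onto their homologs in the target organism. The sketch below illustrates that principle only; the protein names, similarity scores and threshold are invented, and this is not the actual Path2PPI algorithm or its R interface.

```python
# Interolog-style transfer: an interaction (a, b) known in a reference
# organism is predicted between target proteins (x, y) if x is a homolog
# of a and y is a homolog of b. Homology is given here as toy similarity
# scores (in practice, e.g., BLAST hits); all names are hypothetical.

def predict_interactions(ref_interactions, homologs, min_score=50.0):
    """Map reference interactions onto target proteins via homologs."""
    predicted = set()
    for a, b in ref_interactions:
        for x, sx in homologs.get(a, []):
            for y, sy in homologs.get(b, []):
                if sx >= min_score and sy >= min_score and x != y:
                    predicted.add(tuple(sorted((x, y))))
    return predicted

# Toy reference network (yeast-like autophagy pairs) and homology table.
ref = [("ATG1", "ATG13"), ("ATG8", "ATG7")]
homologs = {
    "ATG1":  [("PaATG1", 80.0)],
    "ATG13": [("PaATG13", 65.0)],
    "ATG8":  [("PaATG8", 90.0)],
    "ATG7":  [("PaATG7", 40.0)],  # below threshold -> not transferred
}
print(predict_interactions(ref, homologs))
```

Lowering the score threshold admits more, but less reliable, transferred interactions; a real pipeline would combine several reference organisms and score the predictions accordingly.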
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data obtained from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified that correspond to the different sub-pathways of autophagy. Owing to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, different proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
The presented biological network provides a systems-biological view of autophagy and enables further studies aimed at analysing the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential methods for intervening in the ageing process and for extending the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
The African continent is regularly portrayed as an indolent space with a well-known reputation as a chaotic continent. Viewed as lacking vision, means and capacities, Africa is perceived at best as a place marked by a permanent status quo and stagnation or, in worst-case scenarios, as a declining continent. Various references to the continent are synonymous with famine, poverty, war, etc. Such portrayals are all the more intriguing given that the continent is known for its abundant natural resources, such as timber, oil, natural gas and minerals, whose reserves, moreover, are not well known either to the African people or to their leaders. As a result, there is still much progress to be made in tapping into these resources in order to improve the daily lives of African citizens.
In such a context, dominated by infantile carelessness throughout the continent, interventions by actors from outside the continent are cast as the only hope of bringing some vitality to a continent cloaked in "la grande nuit – the great darkness" (Mbembé 2013). Yet during the major sequences of recent history, each representing a different form of Western penetration and activity on the African continent (slavery, imperialism, colonization), all the Western world's contributions have evidently not sufficed to boost Africa and lift it out of its never-ending childhood. It has remained just as passive and apathetic today as it was yesterday.
The attraction of Asian actors to the continent is even more recent. Consistent with its abovementioned indolence, Africa is seen as an easy and defenceless prey for Korean, Japanese, Indian, Malaysian or Chinese conquerors. In the Chinese case, an insatiable appetite for natural resources whose domestic reserves are being rapidly depleted is the cornerstone of foreign aid policy. This has led China to colonize the continent, showing a preference for pariah regimes holding no appeal for the West, by sending an army of workers to extract those resources (Lum et al. 2009), in defiance of national and international regulations and on the basis of completely opaque contracts.
Although the concept of African Agency has rapidly been taken up across several African countries, the aim of this study is specific to Cameroon's mining sector, in which various foreign entrepreneurs have become involved over time. The thesis investigates whether indigenous citizens took part in any way in the development of mining projects in the country. To that end, the work assesses and analyses the actions and reactions initiated and undertaken by local people, in the context of China's presence in Cameroon's mining sector, to promote and advance their interests over those of foreign investors. To the author's knowledge, no other study has investigated African Agency in Cameroon's mining sector as a whole.
In conducting this study, a multi-method research framework was developed, comprising a series of methods used to collect data and to analyse the concepts of African Agency and associated Political Ecology as they developed within Cameroon's mining sector. Specifically, the quantitative component followed a positivist and empirical approach, deducing evidence from statistical data collected through 167 questionnaire surveys administered to local inhabitants and workers randomly selected at mining sites and in riparian communities. The questionnaires helped to capture Cameroonians' perceptions of the recent, gradual but significant influx of international actors, and of Chinese players in particular, into the mining sector; in parallel, observational data was collected across the global value chain (GVC) as it developed in the Betare-Oya region. Complementing this, qualitative methods deepened the understanding of human behaviour and the social world from a holistic perspective, through individual interviews, focus groups and direct observation on the ground. In addition, a spatial analysis method based on land use classification served to detect changes in land use/land cover brought about by mechanised mining activities in the region. The sequencing of the collected data and their processing from a grounded theory perspective led to the formulation and specification of Cameroon's Ecological Agency theory.
One of the earliest steps of this work consisted of a literature review and of placing the African Agency concept in a broader context. This led to the state of the art, the specification of the research content, and the main theories undergirding the thesis. Before examining developments of the last decade, a historical perspective was provided in order to show how African societies started mining operations and how they dealt with foreign partners interested in their mining resources. The aim was to show that while Western imperialism presented a challenge for the sector, it did not erase local participation, despite the constraints associated with such involvement.
...
Magnetoencephalography (MEG) measures neural activity non-invasively and with excellent temporal resolution. Since its invention (Cohen, 1968, 1972), MEG has proven a most valuable tool in neurocognitive (Salmelin et al., 1994) and clinical research (Stufflebeam et al., 2009; Van ’t Ent et al., 2003). MEG is able to measure rapid changes in electrophysiological neural signals related to sensory and cognitive processes. The magnetic fields measured outside the head by MEG directly reflect the cortical currents generated by the synchronised activity of thousands of neuronal sources. This distinguishes MEG from functional magnetic resonance imaging (fMRI), where measurements are only indirectly related to electrophysiological activity through neurovascular coupling...