Bacteria are true survival artists that rapidly adapt to environmental changes such as pH shifts, temperature changes and different salinities. Upon osmotic shock, bacteria are able to counteract the loss of water by the uptake of potassium ions. In many bacteria, this is accomplished by the major K+ uptake system KtrAB. The system consists of the K+-translocating channel subunit KtrB, which forms a dimer in the membrane, and the cytoplasmic regulatory RCK subunit KtrA, which binds non-covalently to KtrB as an octameric ring. This unique architecture differs strongly from other RCK-gated K+ channels such as MthK or GsuK, in which covalently tethered cytoplasmic RCK domains regulate a single tetrameric pore. As a consequence, an adapted gating mechanism is required: the activation of KtrAB depends on the binding of ATP and Mg2+ to KtrA, while ADP binding at the same site results in inactivation, mediated by conformational rearrangements. However, it is still poorly understood how the nucleotides are exchanged and how the resulting conformational changes in KtrA control gating in KtrB.
Here, I present a 2.5-Å cryo-EM structure of ADP-bound, inactive KtrAB, which for the first time resolves the N termini of both KtrB protomers. They are located at the interface of KtrA and KtrB, forming a strong interaction network with both subunits. In combination with functional and EPR data, we show that the N termini, surrounded by a lipidic environment, play a crucial role in the activation of the KtrAB system. We propose an allosteric network in which an interaction of the N termini with the membrane facilitates MgATP-triggered conformational changes, leading to the active, conductive state.
The goal of this thesis was to gain further insight into the binding behavior of ligands in the heptahelical domain (HD) of group I metabotropic glutamate receptors (mGluRs). This was realized by establishing strategies for the detection and optimization of molecules acting as non-competitive antagonists of group I mGluRs (mGluR1/5). These strategies were intended to guarantee high diversity among the retrieved chemotypes, with the detected compounds not resembling the original reference molecules ("scaffold hopping"). The detection of new scaffolds, in turn, was divided into two approaches: first, the development of pharmacological assays to screen compounds at a given target for bioactivity (here: affinity towards the allosteric recognition site of mGluR1 and mGluR5), and second, the evaluation of computer-assisted methods for the identification of virtual hits to be screened afterwards in the pharmacological assays established before. Promising molecules were to be optimized with respect to activity/affinity and selectivity, their binding mode investigated and, finally, compared to existing lead compounds. Initially, membrane-based binding assays for the HD of mGlu1 and mGlu5 receptors with enhanced throughput (shifting from 24-well plates to 96-well plates) were set up. For the mGluR1 assay, the potent antagonist EMQMCM exhibited high affinity towards the binding site (Ki ~3 nM), which is in accordance with published data from Mabire et al. (functional IC50 3 nM). For mGluR5, the reference antagonist MPEP binds with high affinity to the receptor (binding IC50 13.8 nM), which confirmed earlier findings from Anderson et al. (binding IC50 15 nM). In another series of experiments, the properties of rat cerebellar (mGluR1) and cortical membranes (mGluR5) as well as of radiotracers were investigated by means of binding saturation studies and kinetic experiments.
Furthermore, the influence of the solvent DMSO, necessary for compound screening of lipophilic substances, on positive and negative controls was evaluated. As the precise architecture of the HD of mGluR1 is still not known, our efforts in identifying new ligands for this receptor focused on the ligand-based approach. All computer-assisted methods that were applied to virtually screen large compound collections and to retrieve potential hits ("activity-enriched subsets") acting at the heptahelical domain of mGluR1 relied on the existence of a valid dataset of reference molecules. This was realized by an initial compilation of an mGluR reference data collection comprising in total 357 entries, predominantly negative but also some positive allosteric modulators of mGluR1 and mGluR5. In the next step, a pharmacophore model for non-competitive mGluR1 antagonists was constructed. It was based upon six selective, potent and structurally diverse ligands. Prospective virtual screening was performed using the CATS atom-pair descriptor. The Asinex Gold Collection was screened for each seed compound, and some of the most similar compounds (according to the CATS descriptor) were ordered and tested for binding affinity and functional activity at mGluR1. A high hit rate of approximately 26% (IC50 < 15 µM) was obtained, confirming the applicability of this method. One compound exerted functional activity below one micromolar (IC50 value of C-07: 362 nM ± 0.03). Moreover, non-linear principal component analysis was employed. Again, the Asinex vendor database served as test database and was filtered by the pharmacophore model for mGluR1 established before. Test molecules located adjacent to mGluR1 antagonist references were selected. 15 compounds were tested on mGluR1 in binding and functional assays, and three of them exhibited functional activity (IC50) below 15 µM. The most potent molecule, P-06, revealed an IC50 value of 1.11 µM (± 0.41).
The COBRA database, comprising 5,376 structurally diverse bioactive molecules affecting various targets, was encoded with the CATS descriptor and used for training two self-organizing maps (SOMs). The encoded mGluR reference data collection was projected onto this map according to the SOM algorithm. This projection made it possible to clearly distinguish between antagonists of the mGluR1 and mGluR5 subtypes. 28 compounds were ordered and tested for activity and affinity at mGluR1. They exhibited functional activity down to the sub-micromolar range (IC50 value of S-08: 744 nM ± 0.29), yielding a final hit rate of 46% (< 15 µM). Then, the Asinex collection was screened using the SOM approach. For a predicted target panel including the muscarinic mACh (M1) receptor, the histamine H1 receptor and the dopamine D2/D3 receptors, the tested mGluR ligands exhibited the calculated binding pattern. This virtual screening concept might provide a basis for early recognition of potential side effects in lead discovery. We superimposed a set of 39 quinoline derivatives as non-competitive mGluR1 antagonists that were recently published by Mabire and co-workers. A CoMFA model (QSAR) was established and the influence of several side chains on functional activity was investigated. The coumarin derivative C-07 was obtained as a result of similarity searching. Starting from this compound, a series of chemical derivatives was synthesized. This led to the discovery of potent (B-28, IC50: 58 nM ± 0.008; Ki: 293 nM ± 0.022) and selective (rmGluR5 IC50: 28.6 µM) mGluR1 antagonists. From a homology model of mGluR1 we derived a potential binding mode for coumarins within the allosteric transmembrane region. Potential interaction patterns with amino acids were proposed, considering the differences in the binding pockets between rat and human receptors.
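The SOM-based projection described above can be sketched in miniature. The toy implementation below is my illustration, not the thesis code: it uses plain Euclidean distance on generic descriptor vectors rather than the CATS descriptor, and a one-dimensional map instead of the trained configurations used in the thesis. It shows the mechanism by which reference ligands and candidate molecules end up on neighboring map units:

```python
import random

def train_som(data, n_units=4, epochs=200, lr0=0.5, radius0=2.0, seed=0):
    """Train a minimal 1-D self-organizing map on descriptor vectors.

    Each unit holds a weight vector; the best-matching unit (BMU) and
    its neighbors are pulled toward each training sample, with the
    learning rate and neighborhood radius decaying over time.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)
        radius = max(radius0 * (1.0 - frac), 0.5)
        x = rng.choice(data)
        # best-matching unit by squared Euclidean distance
        bmu = min(range(n_units),
                  key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
        for i in range(n_units):
            # simple hard neighborhood: update units within the radius
            if abs(i - bmu) <= radius:
                units[i] = [u + lr * (v - u) for u, v in zip(units[i], x)]
    return units

def project(units, x):
    """Map a new descriptor vector onto its best-matching unit index."""
    return min(range(len(units)),
               key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
```

Projecting an encoded screening compound with `project` and inspecting which reference ligands share its unit is, in spirit, how the activity-enriched subsets were selected.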
The proposed binding modes for quinolines (here: EMQMCM) and coumarins (here: B-04) were compared and discussed, considering in particular the influence on activity of several quinoline side chains obtained from the QSAR studies. The present studies demonstrated the applicability of ligand-based virtual screening for non-competitive antagonists of a G-protein coupled receptor, resulting in novel, potent and selective agents.
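The abstract reports both IC50 and Ki values for the same compounds (e.g. B-28). In competition-binding assays the two quantities are commonly related through the Cheng-Prusoff equation, Ki = IC50 / (1 + [L]/Kd). A minimal sketch; the radioligand concentration and Kd below are placeholder values, not assay parameters from the thesis:

```python
def cheng_prusoff_ki(ic50_nm: float, radioligand_nm: float, kd_nm: float) -> float:
    """Convert a competition-binding IC50 to Ki via the Cheng-Prusoff
    equation: Ki = IC50 / (1 + [L]/Kd), where [L] is the free
    radioligand concentration and Kd its dissociation constant."""
    return ic50_nm / (1.0 + radioligand_nm / kd_nm)

# Illustrative only: [L] = 2 nM and Kd = 1 nM are placeholder assay
# parameters, not values reported in the thesis.
ki = cheng_prusoff_ki(ic50_nm=58.0, radioligand_nm=2.0, kd_nm=1.0)
print(round(ki, 1))  # → 19.3 (nM)
```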
Alignment, characterization and application of polyfluorene in polarized light-emitting devices
(2001)
The aim of this dissertation was the realization of polarized electroluminescence from blue-emitting liquid-crystalline polyfluorenes. Polymer light-emitting diodes that emit polarized light owing to a high degree of molecular orientation in the active layer are of interest, for example, as backlights in liquid-crystal displays (LCDs). It was shown that high degrees of order can be achieved by aligning polyfluorene on alignment layers based on rubbed polyimide. Doping with hole-conducting materials allowed, for the first time, the incorporation of such alignment layers into light-emitting diodes and enabled the realization of polarized electroluminescence. The morphology and structure of both the highly oriented polyfluorene films and the hole-conducting alignment layers were investigated in detail. The electroluminescence properties of isotropic as well as polarized light-emitting diodes were analyzed thoroughly and subsequently improved decisively by chemical modification of the polyfluorene. In addition, polyfluorene was doped with fluorescent dyes in order to obtain green and red emission starting from blue light. Here, it was investigated to what extent Förster energy transfer and charge-carrier trapping are responsible for the emission of the admixed dyes. An introduction to the fundamentals of the electroluminescence of conjugated polymers is given in Chapter 2 of this thesis. Since polarized electroluminescence requires a high degree of anisotropy of the emitting layer, various methods for aligning polymers are then discussed, with particular emphasis on the orientation of liquid-crystalline polymers. Chapter 3 covers the significant properties of the polymers as well as the experimental methods used in this work.
Besides polyfluorene, another blue-emitting polymer, poly(phenylene ethynylene) (PPE), is introduced. In the characterization of the polyfluorenes, following the description of the pure polymers, the positive influence of attaching hole-conducting end groups to the main-chain ends on essential electroluminescence properties is demonstrated. Furthermore, the key features of polyimide, which forms the matrix of the alignment layer, and of various polymers serving hole conduction and hole injection are discussed. The description of the methods for preparing isotropic and polarized light-emitting diodes and for investigating the optical, electrical, and morphological properties of the polymer films concludes this section. In the fourth chapter of this thesis, different procedures for aligning the polymer molecules are applied to polyfluorene as well as to PPE and are compared and assessed with respect to the achievable degrees of order. In the case of polyfluorene, it was shown that orientation in the liquid-crystalline state with the aid of additional alignment layers based on rubbed polyimide is the only suitable method for orienting this polymer. By adding low-molecular-weight hole-conducting materials at a suitable concentration to the polyimide matrix, the non-conducting polyimide could be modified such that it could be incorporated into light-emitting diodes without the alignment properties of the layers being lost. Comparisons of different polyfluorenes showed that the length and structure of the alkyl side chains decisively influence the orientation behavior. It was shown that significantly higher degrees of orientation can be achieved with branched side chains than with linear side chains.
This was explained by the increased ratio of persistence length to polymer diameter, which, according to the theory of liquid-crystalline polymers, leads to an increase in the achievable order parameter. Moreover, the absorption spectra of the polyfluorenes with long side chains indicated a planar conformation of the polymer backbones, which, owing to the strong interaction between the individual chains, prevents orientation in the liquid-crystalline state. Of all the polyfluorenes investigated, poly(di(ethylhexyl)fluorene) (PF2/6) could be oriented best. In contrast to polyfluorene, the attempt to align PPE on alignment layers in the liquid-crystalline state failed. Calorimetric DSC investigations made clear that the structures of PPE in the liquid-crystalline and crystalline phases differ only insignificantly from each other. In both phases, absorption measurements indicated a planar conformation of the PPE backbones. The viscosity of PPE, a polymer known to be very stiff, is therefore too high even in the liquid-crystalline state for a reordering of the molecules to be induced solely by interaction with an alignment layer. However, PPE could be oriented in the crystalline state by rubbing the polymer film itself instead of using an additional alignment layer. The high stiffness of PPE allowed the forces caused by the rubbing to be transferred to the rigid polymer backbone and enabled a homogeneous alignment of the molecules. With this method, light-emitting diodes with PPE in the active layer that emitted polarized light could be realized. The best methods for aligning the molecules thus differed for the two liquid-crystalline polymers polyfluorene and PPE, and for both polymers procedures were found that enabled the fabrication of polarized light-emitting diodes.
In Chapter 5 of this thesis, the morphology, the structure, and further essential properties of both oriented polyfluorene films and the hole-conducting alignment layers of doped polyimide required for the alignment are discussed. For this purpose, the films were investigated by means of light and electron microscopy as well as electron and X-ray diffraction experiments. In the first part, the observed decrease in the orientability of polyfluorene with increasing molecular weight is characterized in more detail by electron diffraction. Results from transmission electron microscopy showed that the morphology of oriented PF2/6 films is characterized by highly ordered lamellae, which are interrupted at regular intervals by disordered regions. Within the oriented lamellae, the molecules sort themselves by similar chain length, whereas the chain ends are predominantly found in the disordered regions. Structural investigations showed that the individual polymer chains of PF2/6 are cylindrical and hexagonally packed, with the polymer backbones forming a 5/2 helix structure. The worm-like backbone is cylindrically surrounded by a shell of disordered side chains, which act like a solvent between the individual chains. The resulting low viscosity of the polymer serves as an explanation for the observed better orientability of PF2/6 compared with polyfluorene with linear octyl side chains or with PPE. In the second part of the fifth chapter, results of investigations of the hole-conducting alignment layers are presented. The influence of the addition of hole-conducting materials to polyimide on mechanical and electrical properties was investigated.
At moderate hole-conductor concentrations, the mechanical stability of the films was sufficient that, after rubbing, they showed no noticeable differences from undoped rubbed films. Comparisons of corresponding films with regard to charge injection and transport showed that only the doping makes the use of polyimide alignment layers in light-emitting diodes possible. Both polymeric and low-molecular-weight hole-conducting materials were compared with respect to the achievable degrees of orientation and the resulting electroluminescence properties, with only the latter leading to favorable results in both respects at the same time. It was shown that the best results were achieved with polarized light-emitting diodes in which the emitting layer was deposited on a double-layer structure serving hole injection and alignment. Here, a hole-conducting alignment layer of doped polyimide was located above a hole-injection layer of pure hole-conductor material. Variation of the hole-conductor concentration in polyimide showed that the brightness increased with increasing concentration, whereas the polarization ratios achieved decreased at the same time. SEM and AFM investigations of the influence of the hole-conductor concentration on the layer morphology showed that these observations can be explained by phase separation and mechanical damage of the films, which occur at concentrations above 20 percent by weight. Finally, Chapter 6 discusses the electroluminescence of light-emitting diodes with polyfluorene as the emitting layer. First, the most favorable diode architecture was determined in isotropic light-emitting diodes, and the layers used were optimized.
These results were combined with the knowledge gained in the investigations described above in order to realize light-emitting diodes with highly polarized emission. Blue electroluminescence with an emission maximum of 450 nm and a polarization ratio of 21 was achieved, with a luminance of about 100 cd/m² at an applied voltage of 18 V, which corresponds to the typical brightness of a computer monitor. All electroluminescence properties could be improved further and significantly by end-functionalization of the polyfluorene, attaching hole-conducting triarylamine derivatives to the ends of the main chains ('end-capping'). The undesired contribution to the emission at longer wavelengths, which was observed for the pure polyfluorene and is commonly attributed to aggregated polymer molecules, was effectively suppressed by the end-functionalization concept. Moreover, the color stability was substantially improved, and the efficiency of the light-emitting diodes was more than an order of magnitude higher than with the pure polyfluorene. These observations were explained by the electrochemical properties of the end groups. The latter act as attractive traps for charge carriers, with the result that exciton generation and subsequent recombination take place predominantly near the chain ends, instead of at less efficient aggregates or excimer-forming sites as in the case of the pure polyfluorene. It was shown that the end-functionalization impaired neither the behavior of the polymer in the liquid-crystalline state nor its orientability. The use of the modified polyfluorene allowed the fabrication of polarized light-emitting diodes with a polarization ratio of 22 and a luminance of 200 cd/m² at 19 V, with the threshold voltage lowered to 7.5 V.
Diodes with an anisotropy factor of 15 reached luminances of up to 800 cd/m². At 0.25 cd/A, the efficiency of these light-emitting diodes was, at a similar polarization ratio and luminance, more than twice as high as previously reported values. The modification of the intrinsically blue emission color by adding lower-band-gap materials to a polyfluorene matrix is described in Chapter 7. It was shown that even small concentrations of a green-emitting thiophene dye decisively changed the emission spectrum of the polyfluorene and enabled the realization of green emission. Just as in the case of the non-emitting hole conductors used for the end-functionalization of the polyfluorene, the thiophene dyes also act as effective charge-carrier traps, which, in addition to the color change, resulted in a drastic improvement of the light-emitting-diode efficiencies. Furthermore, polarized green electroluminescence could be realized with the doped polyfluorene, with polarization ratios reaching values of up to 30, at a luminance of 600 cd/m² and an efficiency of 0.3 cd/A. With regard to red electroluminescence, light-emitting diodes with dendronized perylene dyes in the emitting layer were investigated, on the one hand in pure form and on the other in blends with polyfluorene. For this purpose, two generations of dendrimers, consisting of a central perylene diimide chromophore and a polyphenylene scaffold, were compared with a non-dendronized model compound. Light-emitting diodes with pure films of the first and second dendrimer generations emitted red light with CIE coordinates (0.627/0.372) and a luminance of up to 120 cd/m² at 11 V, although the efficiency was only 0.03 cd/A.
To clarify the different mechanisms that lead to the emission of the dye molecules, the dyes were blended into polyfluorene, and the influence of dendronization on the emission color and the electroluminescence intensity was investigated. In photoluminescence, a decrease in Förster energy transfer from the polyfluorene host to the perylene dye guest was recorded with increasing dendronization, which led to a larger blue fraction in the emission spectrum. In contrast, it was shown that in electroluminescence the dyes act as electron traps, so that the recombination of charge carriers into excitons takes place predominantly on the dye rather than on the polyfluorene molecules. For this reason, the red emission was emphasized far more strongly in electroluminescence than in photoluminescence, where the red emission arises exclusively through Förster energy transfer. The intensification of a color shift from red to blue, observed with increasing dendronization and increasing operating voltage, could be explained qualitatively by the kinetic hindrance of electron transfer from the polyfluorene host to the perylene diimide chromophore. The best compromise between red color depth and brightness was achieved for the blend of polyfluorene and the first-generation dendrimer dye. At an applied voltage of 6.5 V the luminance was 100 cd/m², and at 11 V it was 700 cd/m², with the emission having its maximum at 600 nm.
A new technique for precision ion implantation has been developed. A scanning probe has been equipped with a small aperture and incorporated into an ion beamline, so that ions can be implanted through the aperture into a sample. By using a scanning probe, the target can be imaged in a non-destructive way prior to implantation, and the probe together with the aperture can be placed at the desired location with nanometer precision. In this work, first results of a scanning probe integrated into an ion beamline are presented. A placement resolution of about 120 nm is reported. The final placement accuracy is determined by the size of the aperture hole and by the straggle of the implanted ion inside the target material. The limits of this technology are expected to be set by the latter, which is of the order of 10 nm for low-energy ions. This research has been carried out in the context of a larger program concerned with the development of quantum computer test structures. For that, the placement accuracy needs to be increased and a detector for single-ion detection has to be integrated into the setup. Both issues are discussed in this thesis. To achieve single-ion detection, highly charged ions are used for the implantation, as in addition to their kinetic energy they also deposit their potential energy in the target material, therefore making detection easier. A special ion source for producing these highly charged ions was used, and their creation and interactions with solids are discussed in detail.
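Since the text attributes the final placement accuracy to two independent contributions, the aperture size and the ion straggle, their combined effect can be estimated by adding the two spreads in quadrature. A minimal sketch; the 30 nm aperture below is a hypothetical value, only the ~10 nm straggle figure appears in the text:

```python
import math

def placement_accuracy(aperture_nm: float, straggle_nm: float) -> float:
    """Combine two independent spread contributions in quadrature."""
    return math.hypot(aperture_nm, straggle_nm)

# Hypothetical 30 nm aperture combined with the ~10 nm low-energy
# straggle quoted in the text: the aperture still dominates.
print(round(placement_accuracy(30.0, 10.0), 1))  # → 31.6
```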
We consider algorithms for strategic communication with commitment power between two rational, self-interested parties. If a party has commitment power, it commits to a strategy of action, publishes it, and can no longer deviate from it.
Both parties have prior information about the state of the world. The first party (S) is able to observe it directly. The second party (R), however, makes a decision by choosing one of n actions whose type is unknown to R. This type determines the possibly different, non-negative utilities for S and R. By sending signals, S tries to influence R's choice. We consider two basic scenarios: Bayesian persuasion and delegated search.
In Bayesian persuasion, S possesses commitment power. Here S commits to a signaling scheme φ and communicates it to R. It describes which signal S sends in which situation. Only afterwards does S learn the true state of the world. After receiving the signals determined by φ, R chooses one of the actions. Knowledge of φ allows R to update its beliefs about the state of the world depending on the received signals. S must take this into account when designing φ, since R will not follow recommendations that advantage S at R's expense. We consider the problem from the perspective of S and describe signaling schemes that guarantee S the largest possible utility.
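To illustrate how commitment and Bayesian updating interact, consider the classic two-state toy instance (my example, not an instance from the thesis): S gets utility 1 whenever R accepts; R prefers accepting only in the 'good' state and therefore accepts exactly when the posterior probability of the good state is at least 1/2 (ties broken in S's favor). The sender-optimal scheme recommends acceptance always in the good state and as often as obedience allows in the bad state:

```python
def optimal_acceptance_prob(prior_good: float) -> float:
    """Sender-optimal acceptance probability in the two-state example.

    The optimal scheme sends 'accept' always in the good state and
    with probability q = prior / (1 - prior) in the bad state, the
    largest value keeping the posterior on 'accept' at >= 1/2.
    """
    p = prior_good
    if p >= 0.5:               # R accepts even without any information
        return 1.0
    q = p / (1.0 - p)          # largest obedient recommendation rate
    return p + (1.0 - p) * q   # equals 2p: total 'accept' probability

print(round(optimal_acceptance_prob(0.3), 10))  # → 0.6, twice the prior
```

Commitment is what makes the bad-state 'accept' signals credible: because φ is fixed and public, R's posterior after 'accept' is exactly 1/2 and following the recommendation remains rational.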
First we consider the offline case. Here S learns the complete state of the world and then sends a signal to R. We consider a scenario with a bounded number of k ≤ n signals. With only k signals, S can recommend at most k different actions. For various symmetric instances we describe a polynomial-time algorithm for computing an optimal signaling scheme with k signals.
Furthermore, we consider a subset of instances in which the types are drawn from known, independent distributions. We describe polynomial-time algorithms that compute a signaling scheme with k signals guaranteeing a constant approximation factor relative to the optimal signaling scheme with k signals.
In the online case, the action types are revealed one by one in rounds. After observing the current action, S sends a signal, and R must react immediately by accepting or rejecting the action. The process ends with the choice of an action. Otherwise, the next action type is revealed, and previous actions can no longer be chosen. As the benchmark for our online signaling schemes we use the best offline signaling scheme.
First we consider a scenario with independent distributions. We show how an optimal signaling scheme can be computed in polynomial time. However, there are examples in which S, unlike in the offline case, cannot achieve any positive value online. We then consider a subset of instances for which a simple signaling scheme guarantees a constant approximation factor, and we show its optimality.
In addition, we consider 16 different scenarios with different levels of information for S and R and different objective functions for S and R, under the assumption that the action types are a priori unknown but are revealed in uniformly random order. For 14 cases we describe signaling schemes with a constant approximation factor. Such schemes do not exist for the remaining two cases. Moreover, for most cases we show that the stated approximation guarantees are optimal.
In the second part we consider an online variant of delegated search. Here R now possesses commitment power. The action types are drawn from known, independent distributions. Before S observes the realized types, R commits to an acceptance scheme φ. For each type, φ specifies the probability with which R accepts it. Consequently, S tries to find an action whose type is good for S itself and is accepted by R. Since the process runs online, S must decide for each action individually whether to propose or discard it. Only proposed actions can be chosen by R.
For the offline case, constant approximation factors relative to an action with optimal value for R are known for identically distributed action types. We show that in the online case R can in general only achieve a Θ(1/n)-approximation. The benchmark is the expected value of a one-dimensional online search by R.
Since this lower bound requires an exponential discrepancy in the values of the types for S, we consider parameterized instances. The parameters bound the values for S, or the ratio of the values for R and S. We show (almost) optimal logarithmic approximation factors with respect to these parameters, guaranteed by efficiently computable schemes.
Whether climate change or air pollution: the chemical and physical processes in the atmosphere have important effects on human health and on ecosystems. The atmosphere is more than a mixture of nitrogen, oxygen, water vapor, helium and carbon dioxide: there are numerous trace gases whose combined share of the volume amounts to less than 1%. This work examines nitrogen oxides, sulfur dioxide, carbon monoxide and sulfuric acid, which were measured as part of the aircraft-based measurement campaign Chemistry of the Atmosphere: Field Experiment in Europe (CAFE-EU)/BLUESKY.
The nitrogen oxides NO and NO2, summarized as NOx, have mainly anthropogenic sources, above all fossil-fuel combustion and industrial processes. A photochemical equilibrium exists between NO and NO2, so that in the atmosphere it is primarily NO2 that occurs in relevant concentrations; owing to the formation of nitric acid, HNO3, in aqueous solution, NO2 is corrosive when inhaled and correspondingly harmful to health. Tropospheric ozone, O3, a major component of summer smog, is formed mainly by the reaction of NO with peroxides (HO2 and RO2). In the stratosphere, NOx is produced mainly by the photodissociation of nitrous oxide, N2O, which, owing to its long lifetime, can be transported from the troposphere into the stratosphere and constitutes the most important nitrogen source there. In the stratosphere, NOx contributes to the catalytic ozone depletion mechanism (Bliefert, 2002; Seinfeld and Pandis, 2016).
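The photochemical equilibrium between NO and NO2 mentioned above is commonly expressed as the photostationary (Leighton) state, [NO]/[NO2] = j(NO2) / (k · [O3]). A numerical sketch with typical daytime magnitudes; the values are illustrative textbook figures, not CAFE-EU/BLUESKY measurements:

```python
def leighton_ratio(j_no2: float, k_no_o3: float, o3_conc: float) -> float:
    """Photostationary-state ratio [NO]/[NO2] = j(NO2) / (k * [O3])."""
    return j_no2 / (k_no_o3 * o3_conc)

# Typical daytime magnitudes (illustrative): j(NO2) ~ 8e-3 s^-1,
# k(NO+O3) ~ 1.8e-14 cm^3 s^-1, [O3] ~ 1e12 cm^-3 (~40 ppb)
ratio = leighton_ratio(8e-3, 1.8e-14, 1e12)
print(round(ratio, 2))  # → 0.44, i.e. NO2 dominates over NO by day
```

The ratio being well below one is consistent with the statement above that NO2 is the NOx species present in relevant concentrations.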
Sulfur dioxide, SO2, is a toxic gas whose atmospheric sources are mainly anthropogenic, namely fossil-fuel combustion and industrial processes; its sinks are dry and wet deposition, the latter of which can lead to acid rain. Global SO2 emissions have been declining since the 1980s. SO2 can be oxidized in the atmosphere to sulfate and sulfuric acid, a main component of winter smog. The most important mechanism is oxidation by the hydroxyl radical, OH, with the participation of water vapor. In the stratosphere, carbonyl sulfide, OCS, is the most important sulfur source, since, analogously to N2O, its long lifetime allows it to be transported from the troposphere into the stratosphere (Bliefert, 2002; Seinfeld and Pandis, 2016). Typical sulfuric acid concentrations are 10^5 cm^-3 at night and 10^7 cm^-3 during the day in the troposphere, as well as 10^5 cm^-3 during the day in the stratosphere (Clarke et al., 1999; Weber et al., 1999; Fiedler et al., 2005; Arnold, 2008; Kürten et al., 2016; Berresheim et al., 2000).
Carbon monoxide, CO, is a toxic gas that enters the atmosphere in roughly equal parts through direct emissions (mainly biomass burning and fossil-fuel combustion) and in-situ oxidation (mainly of methane, isoprene and industrial hydrocarbons). Its main sink is the reaction with OH˙ in the troposphere. The global CO concentration has been declining since 2000 (Bliefert, 2002).
Besides gases, aerosol particles, i.e. airborne solid or liquid particles, are also an integral part of the mixture we call air. Primary aerosol particles are emitted directly into the atmosphere as such, whereas secondary aerosol particles are formed within the atmosphere, either when gaseous precursor substances of low volatility condense onto primary particles or when they cluster together and grow into entirely new particles. As cloud condensation nuclei, aerosol particles enable the formation of clouds in the first place; alongside their direct reflective effect, they thus exert an overall cooling influence on the climate by altering cloud cover and cloud properties, and they influence local and global water cycles. However, they also have adverse effects on human health and are responsible for a reduction in average life expectancy in regions with high particulate-matter burdens (Seinfeld and Pandis, 2016; Bellouin et al., 2020; World Health Organization, 2016).
In addition to the neutral, i.e. uncharged, gases and particles considered so far, gas-phase ions and charged particles are also constituents of the atmosphere. They play an important role in many atmospheric processes, such as thunderstorms, radio-wave propagation and ion-induced nucleation of aerosol particles. The main source of ionization in the troposphere and stratosphere is galactic cosmic radiation, which, contrary to its name, consists mainly of protons and α-particles (called primary particles) and, upon collision with air molecules in the Earth's atmosphere, produces showers of secondary particles (including muons, pions and neutrinos). The primary and secondary particles can ionize air molecules, producing N+, N2+, O+, O2+ and electrons. Oxygen reacts rapidly with the latter to form O− and O2−. These cations and anions react further until ion clusters with the overall formulas (HNO3)n(H2O)mNO3− and H+(H2O)n(B)m are formed, where B denotes bases such as methanol, acetone, ammonia or pyridine. Further ionization sources are the decay of the radioisotope 222Rn near the ground and ionizing solar radiation above the stratosphere. Atmospheric ions have two important sinks: recombination, in which a cation and an anion neutralize each other, and attachment to aerosol particles. The latter sink is relevant mainly in the troposphere owing to its relatively high aerosol particle concentration (Arnold, 2008; Viggiano and Arnold, 1995; Bazilevskaya et al., 2008; Hirsikko et al., 2011).
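The balance between ion production, ion-ion recombination and ion-aerosol attachment sketched above is commonly written at steady state as q = α·n² + β·n·N. A minimal sketch solving this quadratic for the small-ion concentration n, with illustrative boundary-layer values (all coefficients and concentrations are assumptions, not values from this thesis):

```python
import math

# Steady-state small-ion balance: q = alpha*n^2 + beta*n*N
# Illustrative boundary-layer values (assumptions, not thesis data):
q = 10.0        # ion pairs cm^-3 s^-1 (production by cosmic rays / radon)
alpha = 1.6e-6  # cm^3 s^-1, ion-ion recombination coefficient
beta = 1.0e-6   # cm^3 s^-1, effective ion-aerosol attachment coefficient
N = 1.0e4       # aerosol particles cm^-3

# Solve alpha*n^2 + (beta*N)*n - q = 0 for the positive root n
b = beta * N
n = (-b + math.sqrt(b * b + 4 * alpha * q)) / (2 * alpha)
print(f"steady-state small-ion concentration ≈ {n:.0f} cm^-3")
```

With these numbers the aerosol-attachment term dominates, illustrating why attachment is the main ion sink in the aerosol-rich troposphere.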
Biological ageing is a degenerative and irreversible process that ultimately leads to the death of the organism. The process is complex and under the control of genetic, environmental and stochastic traits. Although many theories have been established during the last decades, none of them fully describes the complex mechanisms that lead to ageing. Generally, biological processes and environmental factors cause molecular damage and an accumulation of impaired cellular components. Counteracting surveillance systems, including the repair, remodelling and degradation of damaged or impaired components, work against this accumulation. Nevertheless, at some point these systems are no longer effective, either because the increasing amount of molecular damage can no longer be removed efficiently or because the repair and removal mechanisms themselves become impaired. The organism finally declines and dies. To investigate and understand these counteracting mechanisms and the complex interplay of decline and maintenance, holistic, systems-biological investigations are required. Hence, the processes that lead to ageing in the fungal model organism Podospora anserina were analysed using different advanced bioinformatics methods. In contrast to many other ageing models, P. anserina exhibits a short lifespan and low biochemical complexity, and it is readily accessible to genetic manipulation.
To obtain a general overview of the biochemical processes affected during ageing in P. anserina, an initial comprehensive investigation was performed, aimed at revealing genes that are significantly regulated and expressed in an age-dependent manner. This investigation was based on an age-dependent transcriptome analysis. Comprehensive analyses revealed different age-related pathways and indicated that autophagy in particular may play a crucial role during ageing. For example, the expression of autophagy-associated genes was found to increase in the course of ageing.
Subsequently, to investigate and to characterise the autophagy pathway, its associated single components and their interactions, Path2PPI, a new bioinformatics approach, was developed. Path2PPI enables the prediction of protein-protein interaction networks of particular pathways by means of a homology comparison approach and was applied to construct the protein-protein interaction network of autophagy in P. anserina.
The predicted network was extended by experimental data, comprising the transcriptome data as well as newly generated protein-protein interaction data from a yeast two-hybrid analysis. Using different mathematical and statistical methods, the topological properties of the constructed network were compared with those of randomly generated networks to confirm its biological significance. In addition, based on this topological and functional analysis, the most important proteins were determined and functional modules were identified that correspond to the different sub-pathways of autophagy. Owing to the integrated transcriptome data, the autophagy network could be linked to the ageing process. For example, different proteins were identified whose genes are continuously up- or down-regulated during ageing, and it was shown for the first time that autophagy-associated genes are significantly often co-expressed during ageing.
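The comparison against randomly generated networks described above follows the standard null-model idea: compute a topological statistic (for instance, the mean clustering coefficient) for the observed network and for an ensemble of size-matched random graphs, and check whether the observed value lies far outside the random distribution. A minimal sketch with hypothetical sizes (not the actual P. anserina autophagy network):

```python
import random
from itertools import combinations

def random_graph(n, m, seed=0):
    """Erdős–Rényi G(n, m): n nodes, m edges chosen uniformly at random."""
    rng = random.Random(seed)
    edges = rng.sample(list(combinations(range(n), 2)), m)
    adj = {v: set() for v in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def avg_clustering(adj):
    """Mean local clustering coefficient over all nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering undefined for degree < 2; counts as 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

# Null distribution from 20 random graphs with the same node/edge counts
# (50 nodes, 120 edges are hypothetical sizes for illustration).
null = [avg_clustering(random_graph(50, 120, seed=s)) for s in range(20)]
mean_null = sum(null) / len(null)
print(f"null-model mean clustering ≈ {mean_null:.3f}")
```

An observed clustering coefficient well above this null distribution would suggest non-random, biologically meaningful modular structure.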
The presented biological network provides a systems-biological view of autophagy and enables further studies aimed at analysing the relationship between autophagy and ageing. Furthermore, it allows the investigation of potential methods to intervene in the ageing process and to extend the healthy lifespan of P. anserina as well as of other eukaryotic organisms, in particular humans.
The African continent is regularly portrayed as an indolent space with a well-known reputation as a chaotic continent. Viewed as lacking vision, means and capacities, Africa is perceived at best as a place marked by a permanent status quo and stagnation or, in the worst-case scenarios, as a declining continent. Various references to the continent are synonymous with famine, poverty, war, etc. Such portrayals are all the more intriguing given that the continent is known for its abundant natural resources, such as timber, oil, natural gas and minerals, whose reserves are, moreover, not well known either to the African people or to their leaders. As a result, there is still much progress to be made in tapping into these resources in order to improve the daily lives of African citizens.
In such a context, dominated by infantile carelessness throughout the continent, the interventions of actors from outside are the only hopes of bringing some vitality to this continent cloaked in "la grande nuit – the great darkness" (Mbembé 2013). Thus, during the main sequences of recent history, representing different forms of Western penetration and activity on the African continent (slavery, imperialism, colonization), all the Western world's contributions have obviously not sufficed to boost Africa and lift it out of its never-ending childhood. It has remained just as passive and apathetic today as it was yesterday.
The attraction of Asian actors to the continent is even more recent. Consistent with its abovementioned indolence, Africa is seen as an easy and defenceless prey for the Korean, Japanese, Indian, Malaysian or Chinese conquerors. In the latter case, the insatiable appetite for natural resources whose reserves are being rapidly depleted is the cornerstone of China's foreign aid policy. This led China to colonize the continent, showing a preference for pariah regimes that held no appeal for the West, by sending an army of workers to extract those resources (Lum et al. 2009), in defiance of all national and international regulations and on the basis of completely opaque contracts.
Although the concept of African Agency has been rapidly developed in several African countries, this study focuses specifically on Cameroon's mining sector, in which different entrepreneurs from abroad have become involved over time. The thesis investigates whether indigenous citizens took part in any way in the development of mining projects in the country. Thus, the work assesses and analyses actions and reactions initiated and undertaken by local people in the context of China's presence in Cameroon's mining sector to promote and advance their interests over those of foreign investors. In addition, the author knows of no other study investigating African Agency in Cameroon's mining sector as a whole.
In conducting this study, a multi-method research framework was developed, comprising a series of methods for collecting data and for analysing the concepts of African Agency and its associated Political Ecology as they developed within Cameroon's mining sector. Specifically, these methods included quantitative research based on a positivist and empirical approach, deducing evidence from statistical data collected by means of 167 questionnaire surveys administered to local inhabitants and workers randomly selected on mining sites and in riparian communities. The questionnaires helped to capture Cameroonians' perceptions of the recent, gradual but significant influx of international and specifically Chinese players into the mining sector; in addition, observational data were collected across the GVC as it developed in the Betare-Oya region. Complementing these techniques, qualitative methods helped to study and deepen the understanding of human behaviour and the social world from a holistic perspective through individual interviews, focus groups and direct observations on the ground. Furthermore, a spatial analysis method based on land-use classification served to detect changes to land use/land cover brought on by the mechanised mining activities undertaken in this region. The sequencing of the collected data and their processing from a grounded-theory perspective led to the formulation and specification of Cameroon's Ecological Agency theory.
One of the earliest steps of this work consisted in a literature review and in placing the African Agency concept in a broader context. This led to the state of the art, specifications of the research content and the main theories undergirding this thesis. Before examining developments that emerged during the last decade, a historical perspective was provided in order to show how African societies started mining operations and how they dealt with foreign partners interested in their mining resources. The aim was to show that while Western imperialism presented a challenge for the sector, it did not erase local participation, despite the constraints associated with such involvement.
...
Magnetoencephalography (MEG) measures neural activity non-invasively and at an excellent temporal resolution. Since its invention (Cohen, 1968, 1972), MEG has proven a most valuable tool in neurocognitive (Salmelin et al., 1994) and clinical research (Stufflebeam et al., 2009; Van ’t Ent et al., 2003). MEG is able to measure rapid changes in electrophysiological neural signals related to sensory and cognitive processes. The magnetic fields measured outside the head by MEG directly reflect the cortical currents generated by the synchronised activity of thousands of neuronal sources. This distinguishes MEG from functional magnetic resonance imaging (fMRI), where measurements are only indirectly related to electrophysiological activity through neurovascular coupling...
Two main types of methods are used in gene therapy: integrating vectors and nuclease-based genome engineering. Nucleases are site-specific and are efficient for knock-outs, but inefficient at inserting long DNA sequences. Integrating vectors perform this task with high efficiency, but their insertion occurs at random genomic positions. This can result in transformation of target cells, which leads to severe adverse events in a gene therapy context. Thus, it is of great interest to develop novel genome engineering tools that combine the advantages of both technologies. The main focus of this thesis is on generating such a targetable integrating vector.
The integrating vector used in this project is the Sleeping Beauty (SB) transposon, a DNA transposon characterized by high activity across a wide range of cells. The SB transposase was combined with an RNA-guided Cas9 nuclease domain. This nuclease component was meant to direct transposase integration to specific targets defined by RNAs. The SB transposase was fused to cleavage-inactivated Cas9 (dCas9) to tether it to the target sites. In addition, adapter proteins consisting of dCas9 and domains non-covalently interacting with SB transposase or the SB transposon were generated. All constituent domains of these fusion proteins were tested in enzymatic assays and almost all enzymatic activities could be verified.
Combining the fusion protein dCas9-SB100X with a gRNA binding a sequence from the AluY repetitive element resulted in a weak, but statistically significant enrichment around sites bound by the gRNA. This enrichment was ca. 2-fold and occurred within a 300 bp window downstream of target sites, or within the AluY element.
Targeting with adapter proteins and targeting of other sites (L1 elements or single-copy targets) did not result in statistically significant effects. The single-copy targets tested included the HPRT gene and three specifically selected GSH targets known to be receptive to SB insertions. Combining the system with a more sequence-specific transposase mutant also failed to increase specificity to a level allowing the targeting of single-copy loci. Genome-wide analysis of insertions, however, demonstrated that dCas9-SB100X has a different insertion profile than SB100X, regardless of the gRNA used.
As the low efficiency of retargeting is likely a consequence of the high background activity of the SB100X transposase in the fusion constructs, an SB mutant with reduced DNA affinity, SB(C42), was generated. For this mutant, transposition activity was partly dependent on a dCas9 domain supplied with a multi-copy target gRNA, showing a 2-fold increase in the presence of an AluY-directed gRNA. Whether this mutant yields improved targeting remains to be determined.
In a side project, an attempt was made to direct SB insertions to ribosomal DNA by fusing the transposase to a nucleolar protein. This fusion transposase partially localized to nucleoli and insertions catalyzed by this transposase were found to be enriched in nucleolus organizer regions (NORs) and nucleolus-associated domains (NADs).
The aim of a second side project was to increase the ratio between homology-directed repair (HDR) and non-homologous end-joining (NHEJ) at Cas9-mediated double-strand breaks (DSBs). To achieve this, Cas9 was fused to DNA-interacting domains and the corresponding binding sequences were fused to the homology donors. While an increased HDR/NHEJ ratio could be observed for the fusion proteins, it was not dependent on the presence of the binding sequences in the donor molecules.
In the adult mammalian central nervous system, two defined neurogenic regions retain the capacity to generate new neurons throughout adulthood, namely the subependymal zone (SEZ) at the lateral ventricles and the subgranular layer (SGL) of the hippocampus. Adult neurogenesis consists of a whole set of events, including proliferation, fate specification, migration, survival and finally synaptic integration of newly born neurons. Each of these events is controlled by the interplay of numerous factors. In this study, two signalling systems were analysed with regard to their functional role in adult neurogenesis in vivo: the purinergic system and the growth factor EGF. Neither short- nor long-term application of the P2Y receptor agonists UTP and ADPβS or the P2Y receptor antagonist suramin into the lateral ventricle of adult mice altered cell responses compared with vehicle controls in vivo. In contrast, analysis of the expansion rates of cultured neural stem cells (NSCs) from knockout mice revealed a strong increase in the number of NSCs from NTPDase2-/- mice, whereas the cell numbers of NSCs from P2Y1-/- and P2Y2-/- mice were significantly reduced compared with wildtype levels. Notably, in vivo proliferation rates were potently elevated in the SGL and the SEZ of NTPDase2-deficient mice. However, in vivo proliferation in both neurogenic niches of the single-receptor knockout mice P2Y1-/- and P2Y2-/- and of P2Y1-/- P2Y2-/- double-knockout mice did not differ significantly from the wildtype. In mice lacking the P2Y2 receptor, the survival of newly born neurons in the hippocampal granule cell layer was significantly increased. These data provide the first line of evidence that purinergic signalling is involved in the control of neural stem cell behaviour not only in vitro but also in vivo.
In order to further characterise the role of epidermal growth factor (EGF) in adult neurogenesis, transit-amplifying precursors (TAPs) and type B astrocytes were identified as EGF-responsive cell populations following ventricular EGF injection, whereas ependymal cells, neuroblasts and NG2-positive cells responded not at all or only to a minor extent. These EGF-responsive cell populations were found on both the septal and the striatal walls of the lateral ventricle. Long-term ventricular EGF infusion for 6 days (1) increased cell proliferation on both ventricle walls, revealing a gradient along the rostro-caudal axis, (2) altered the balance between neuronal and macroglial cell fates towards the generation of oligodendrocyte precursors and (3) led to a complete remodelling of the classical architecture of the SEZ.
Signal-dependent regulation of actin dynamics is essential for many cellular processes, including directional cell migration. In particular, cell migration is initiated by lamellipodia, actin-based protrusions of the plasma membrane. The formation of these protruding structures requires incessant assembly and disassembly of actin filaments. The Arp2/3 complex and WAVE proteins are essential for both lamellipodium formation and its dynamics. WAVEs mediate the activation of the Arp2/3 complex downstream of the small GTPase Rac and are thus critical for Rac- and RTK-induced actin polymerization and cell migration. The WAVE-family proteins are always found associated with multiprotein complexes, the most abundant of which is referred to as the WANP (WAVE2-Abi-1-Nap1-PIR121) complex. IQGAP1 is a large scaffolding protein with multiple protein-interaction domains. It participates in many fundamental activities, including regulation of the actin cytoskeleton, mitogenic, adhesive and migratory responses, as well as cell polarity and cellular trafficking. IQGAP1 binds to N-WASP, raising the possibility that it might control actin nucleation by the Arp2/3 complex. In this study, IQGAP1 was found to co-immunoprecipitate not only with WAVE but also with the endogenous WANP-complex subunits. Correspondingly, IQGAP1 associated with both anti-WAVE and anti-Abi-1 immuno-complexes. Pull-down experiments showed that IQGAP1 binds directly to the WANP-complex subunits, and a physical interaction between IQGAP1 and the reconstituted WANP complex could also be demonstrated. Together, these data indicate that IQGAP1 is an accessory component of the WANP complex. Interestingly, the IQGAP1-WANP complex disassembled after either EGF stimulation or transfection with constitutively active Cdc42 and Rac1. HeLa cells devoid of IQGAP1 showed diminished and less persistent ruffling upon EGF, but not HGF, stimulation in comparison with controls.
This phenotype was accompanied by a strong reduction in chemotaxis towards both growth factors, which was as dramatic as in WANP-complex knockdown (KD) cells. Moreover, GM130 and Giantin showed a polarized, flat ribbon-like pattern in control cells, as expected for cis- and cis/medial-Golgi markers. Conversely, small and dispersed vesicular structures were found in both IQGAP1 KD and WANP-complex KD cells. Importantly, Arp2/3-complex silencing resulted in the same phenotypes. Consistently, Brefeldin A-induced disassembly of the Golgi strongly inhibited both the IQGAP1-WANP-complex interaction and chemotaxis towards EGF in wild-type cells. Re-expression of an RNAi-resistant wild-type IQGAP1 in IQGAP1 KD cells fully rescued both the ruffling defect and the Golgi structure. A constitutively active mutant, unable to bind either Rac1/Cdc42 or the WANP complex, could rescue only the former defect. Hence, this study shows that actin dynamics regulated by the IQGAP1-WANP complex control Golgi-apparatus architecture and its contribution to cell chemotaxis. The working model proposed here is that, at the Golgi apparatus, recruitment of the WANP complex by IQGAP1 leads to the assembly of the actin filaments required to maintain appropriate Golgi morphology. Dissociation of the complex may be required to allow remodelling of the Golgi membranes in response to a chemoattractant gradient.
Lysosomes are major degradative organelles containing enzymes capable of breaking down proteins, nucleic acids, carbohydrates and lipids. In the last decade, new discoveries have also revealed important roles for lysosomes as signalling hubs affecting metabolism, autophagy and pathogenic infections. Maintenance of a healthy lysosome population is therefore of utmost importance for the cell to respond both to stress conditions and to homeostatic signalling. For minor perturbations of the lysosomal membrane, for example, the cell activates repair processes that seal membrane nicks; for more extensive damage, autophagy is activated to remove damaged organelles from the cell. On the other hand, during invasion, pathogens have evolved mechanisms to hijack the endolysosomal pathway to facilitate their own growth and replication in host cells.
The first part of the thesis focuses on a lysosomal regeneration program that is activated under conditions where the entire lysosomal pool of the cell is damaged. Upon extensive membrane damage induced by the lysosomotropic drug LLOMe, the cell activates a regeneration pathway that helps form new functional lysosomes by recycling damaged membranes. I have identified the molecules important for this novel pathway of lysosomal regeneration and shown how the protein TBC1D15 orchestrates it to regenerate functional organelles from completely damaged membrane masses within the first 2 hours after lysosomal membrane damage. The process resembles autophagic lysosome reformation (ALR), involving the formation of lysosomal tubules that extend along microtubules and are cleaved in a dynamin-2-dependent manner to form proto-lysosomes, which develop into fully functional mature lysosomes. These lysosomal tubules are closely associated with ATG8-positive autophagosomal membranes and require ATG8 proteins to bind to the lysophagy receptor LIMP2 on damaged membranes. This process is physiologically important in crystal nephropathy, a kidney disease in which calcium oxalate crystals damage lysosomal membranes in nephrons.
The second part of the thesis shows how the endolysosomal system of the cell is hijacked by the bacterium Legionella pneumophila. During Legionella infection, the formation of conventional ATG8-positive autophagosomes is blocked by the protease activity of the bacterial effector protein RavZ, which cleaves lipidated ATG8 proteins from autophagosomal membranes. The SidE effectors of Legionella modify STX17 and SNAP29 by a process of non-canonical ubiquitination called phosphoribose-linked serine ubiquitination (PR-Ub). These proteins are essential for the formation of the autophagosomal SNARE complex, which mediates fusion of the autophagosome with the lysosome. Upon Legionella infection, PR-Ub of STX17 aids the formation of autophagosome-like replication vacuoles. These vacuoles do not fuse with the lysosome because SNAP29 is also PR-Ub-modified. PR-Ub of STX17 and SNAP29 sterically blocks the formation of the autophagosomal SNARE complex, thereby preventing fusion of the autophagosome with the lysosome. As a result, Legionella can replicate in autophagosome-like vacuoles that do not undergo lysosomal degradation. In the absence of PR-Ub-modified STX17, bacterial replication is compromised, as measured by replication assays in lung epithelial (A549) cells.
Taken together, this thesis highlights two important aspects of the autophagy-lysosomal system: how it responds to extensive membrane damage, and its importance in Legionella pneumophila infection. Extensive damage to lysosomal membranes triggers a rapid regeneration process that partially restores lysosomal function before the effects of TFEB-dependent lysosomal biogenesis become apparent. Legionella pneumophila infection, on the other hand, segregates the lysosomes from the rest of the endolysosomal system by blocking autophagosome-lysosome fusion. Though the lysosomes remain active, they are incapable of degrading pathogens, since pathogen-containing vacuoles do not fuse with them.
In our rapidly changing world, land use has been recognized as having one of the strongest impacts on species and genetic diversity. The present state of temperate forests in Europe is a product of former and current management and policy decisions rather than of natural factors. Alterations of crown projection areas and of the structural complexity of forest stands caused by thinning and cutting, as well as changes in tree species composition caused by regeneration or planting, affect not only the buffering of the forest interior against warming but also the understorey light environment and nutrient availability. Ultimately, current silvicultural management practices have a deep impact on forest ecosystems, microenvironmental conditions and the understorey herbs of the forest floor. In response to environmental changes, plants rely on genetically heritable phenotypic variation, an important level of variation in a population, as it is a prerequisite for adaptation. However, most studies on plant adaptation to land use have so far focused on grassland management; studies on the adaptation of forest understorey herbs to forest management have been absent. This matters because understanding the adaptation of understorey herbs is crucial for biodiversity conservation, forest restoration and climate change mitigation. Studying the current adaptation of understorey herbs to forest management yields insights into the evolutionary consequences of management practices, which could be employed to improve the sustainable use of forest habitats.
In sum, my experiments complement each other well and fill research gaps concerning genetically heritable phenotypic variation in understorey herbs and how it is affected by forest management and related microenvironmental variables. I showed that forest management has direct evolutionary consequences for the genetic basis of understorey herbs, but also indirect ones mediated by the microenvironment. Furthermore, I revealed that local adaptation and phenotypic plasticity of understorey herbs to forest structural attributes act along continuous gradients. Lastly, I highlighted the important role of intra-individual variation by revealing plastic responses to drought and shading, urging researchers not to ignore this important level of trait variation. Ultimately, understorey herbs in temperate forests employ phenotypic plasticity as a flexible strategy to adapt to varying environmental conditions. By adjusting their leaf characteristics, reproductive investment and phenology, they can optimize their fitness and survival in response to changes in light availability, resource availability and seasonal cues. The anthropogenic impact on temperate forests and understorey herbs will continue and likely increase in the future. This should urge foresters to adapt their silvicultural management decisions towards the long-term preservation of genetic diversity and, through this, the evolvability and adaptability of forest understorey herbs and associated organisms. Based on the results presented in this dissertation, variation in forest management regimes and types could be beneficial for promoting genetic diversity within several species of forest understorey herbs.
Lastly, in the face of future climatic changes, the mechanisms by which plants can cope with increasing stressful environmental conditions might very well rely heavily on intra-individual variation, providing the necessary rapid plastic adjustment to changing microclimatic conditions within populations and thus increase climate change resilience.
Soil fungal communities are an essential element of the terrestrial ecosystem, yet their response to ongoing anthropogenic climate change is currently poorly understood. Fungi are one of the most abundant groups of microbes in soil and are mainly responsible for the decomposition of organic matter (Baldrian et al., 2012; Buée et al., 2009). By binding carbon in soil, fungi thus play an important role in the global carbon cycle (Bardgett et al., 2008). Future climates are likely to influence the communities of belowground microbial organisms (Castro et al., 2010; Deacon et al., 2006). However, how the diversity, composition and function of these communities are affected by environmental perturbation is insufficiently known.
Molecular techniques using high-throughput sequencing are presently revolutionizing the analysis of complex communities such as soil fungi. High-throughput metabarcoding enables the recovery of DNA sequence data directly from environmental samples, and DNA sequences from the entire communities present in these samples can be recovered simultaneously through massively parallel sequencing reactions (Bik et al., 2012; Taberlet et al., 2012b). This results in more accurate estimates of diversity and community composition and thus provides unprecedented insight into cryptic communities (Lindahl and Kuske, 2014). Yet challenges associated with these novel techniques include the bioinformatic processing and the ecological analysis of the large amount of sequence data generated. Most biologists without explicit training in bioinformatics spend a fair amount of time learning how to filter raw sequence data and how to customize bioinformatics pipelines for their project. To improve the quality of data treatment and decrease the time needed for the analyses, it is desirable to have bioinformatics pipelines that are easy to use, well explained to researchers not trained in bioinformatics, and adaptable to individual research needs...
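As an illustration of the kind of raw-read filtering such pipelines automate, a minimal Phred-quality filter might look as follows (the function names and thresholds are generic illustrations, not the pipeline developed in this thesis):

```python
def phred_scores(qual, offset=33):
    """Convert a FASTQ quality string (Sanger encoding) to per-base Phred scores."""
    return [ord(c) - offset for c in qual]

def passes_filter(seq, qual, min_len=100, min_mean_q=25, max_n=0):
    """Typical read-level filters applied before downstream community analysis:
    minimum length, minimum mean Phred quality, maximum number of ambiguous bases."""
    scores = phred_scores(qual)
    mean_q = sum(scores) / len(scores)
    return (len(seq) >= min_len
            and mean_q >= min_mean_q
            and seq.upper().count("N") <= max_n)

# Toy example: a clean 100-bp read passes, a short ambiguous read does not.
good = ("A" * 100, "I" * 100)   # 'I' encodes Phred 40
bad = ("ACGTN" * 4, "#" * 20)   # '#' encodes Phred 2; read is short and contains N
print(passes_filter(*good), passes_filter(*bad))  # → True False
```

Real pipelines add primer trimming, chimera removal and clustering on top of such per-read filters, but the basic quality logic is the part most newcomers first have to customize.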
This thesis describes the adaptation of Acinetobacter species to dry environments, with the soil bacterium A. baylyi and the opportunistic hospital pathogen A. baumannii as its focus. The adaptation of A. baylyi and A. baumannii to osmotic stress was investigated. Compatible solutes that were taken up from the environment or synthesized de novo to cope with the loss of water at high salinity were identified, and the corresponding transporters and enzymes involved were characterized. In addition, the desiccation resistance of A. baumannii was analyzed to elucidate its survival in hospital environments. The usage of compatible solutes during desiccation stress was analyzed, and proteins produced under these conditions were identified.
The availability of water is essential for bacterial life, and under unfavorable environmental conditions bacteria have to cope with high salinity to prevent loss of water. In this thesis it was shown that A. baylyi synthesizes glutamate and mannitol de novo as compatible solutes in response to osmotic stress to balance the osmotic potential. The pathway for mannitol biosynthesis from fructose-6-phosphate (F-6-P) via mannitol-1-phosphate (Mtl-1-P) was elucidated, and the isolation and characterization of a novel type of bifunctional enzyme was described. Interestingly, the unique bifunctional enzyme MtlD, acting as both dehydrogenase and phosphatase, mediates both steps of the mannitol biosynthesis pathway. This enzyme catalyzes the reduction of F-6-P to Mtl-1-P with NADPH as reducing equivalent. The dehydrogenase activity of MtlD was salt-dependent, and the phosphatase activity depended on Mg2+ as cofactor. Phylogenetic analyses revealed that MtlD is broadly distributed among other Acinetobacter strains but absent from other phylogenetic groups.
In this thesis it is also described that, besides the de novo synthesis of compatible solutes, A. baylyi takes up glycine betaine (GB) or its precursor choline by different transport systems and uses these solutes as osmoprotectants. The uptake of GB occurs via a secondary transporter (ACIAD3460) of the BCCT family. Choline is taken up as a precursor and oxidized to GB by two dehydrogenases. The uptake and use of choline as a GB precursor involve two transporters encoded in the bet cluster (BetT1, BetT2), two dehydrogenases (BetA, BetB), and a regulatory protein (BetI). The two transporters differ in structure and function: BetT1 is active independently of osmotic stress, whereas BetT2 contains, in contrast to BetT1, a long C-terminal domain for osmosensing, and its activity increases strongly at high osmolarity. The oxidation of choline occurs independently of the osmolarity of the medium, but in the absence of salt stress GB is exported. In contrast, in the presence of high salinity, GB is accumulated in the cytoplasm to balance the osmotic potential and prevent loss of water. Both transporters, the osmolarity-independent uptake of choline, and the export of GB under iso-osmotic conditions are regulated by the transcriptional regulator BetI.
A. baumannii ATCC 19606 was also shown to cope with high salinity. Analogously to A. baylyi, A. baumannii ATCC 19606 synthesizes glutamate and mannitol de novo in response to osmotic stress. The genes for the synthesis of these compatible solutes are identical to those found in A. baylyi, suggesting that the solute biosynthesis pathways of the two species are identical. A. baumannii was also able to take up GB and choline in response to osmotic stress, and growth at high salinity was restored upon addition of GB or its precursor choline. The bet cluster is also present in the genome of A. baumannii and likewise contains the two different choline transporters BetT1 and BetT2.
Our hypothesis that choline, GB, or the utilization of phosphatidylcholine as a carbon source leads to increased survival under desiccation stress was not confirmed. However, 2D analysis of proteins produced during desiccation stress in A. baumannii revealed elevated amounts of proteins implicated in biofilm formation, regulation, cell morphology, and the general stress response, such as Hsp60 and superoxide dismutase, both of which might play a role in general stress protection.
ADAM15, which belongs to the family of disintegrin and metalloproteinases, is a multi-domain transmembrane protein. A strongly upregulated expression of ADAM15 is found in inflamed synovial membranes from articular joints affected by osteoarthritis and especially rheumatoid arthritis (RA). During the chronic inflammatory process in RA, the synovial membrane becomes hyperplastic, eventually resulting in the formation of a pannus tissue, which can invade the adjacent cartilage and bone, thereby destroying their integrity. Previously, the expression of ADAM15 in fibroblasts of the RA synovial membrane was found to confer a significant anti-apoptotic response upon triggering of the Fas receptor, which resulted in the activation of two survival kinases, focal adhesion kinase (FAK) and Src. The Fas receptor, also named CD95, belongs to the death receptor family of the tumor necrosis factor receptors, and stimulation of Fas/CD95 by its ligand FasL results in the execution of apoptotic cell death in synovial membranes of RA patients. However, the occurrence of apoptotic cell death in vivo in RA synovial tissues is considerably low despite the presence of FasL at high concentrations in the chronically inflamed joint. Accordingly, a general apoptosis resistance is a characteristic of RA synovial fibroblasts that contributes considerably to the formation of the hyperplastic, aggressive pannus tissue. The objective of this study was to investigate the mechanisms underlying the capability of ADAM15 to transform FasL-mediated death-inducing signals into pro-survival activation of Src and FAK in rheumatoid arthritis synovial fibroblasts (RASFs).
In the present study, the down-regulation of ADAM15 by RNA interference resulted in a significant increase of caspase 3/7 activity upon stimulation of the Fas receptor in RASFs. Likewise, chondrocytes expressing a deletion mutant of ADAM15 (ΔC), lacking the cytoplasmic domain, revealed increased caspase activities upon Fas ligation in comparison to cells transfected with full-length ADAM15, clearly demonstrating the importance of the cytoplasmic domain for an increased apoptosis resistance. Furthermore, activation of the Fas receptor triggered the phosphorylation of Src at Y416, which results in the active conformation of Src, as well as the phosphorylation of FAK at Y576/577 and Y861 – the target tyrosines phosphorylated by Src - in full-length ADAM15-transfected chondrocytes. However, cells transfected with ADAM15 mutant (ΔC) or with vector control did not exhibit any activation of Src and FAK upon Fas ligation. This suggested the presence of an as yet unknown protein interaction mediating the Fas triggered activation of the two kinases.
To identify this mechanism, signal transduction inhibitors interfering with calcium signaling were applied: inhibiting calmodulin with trifluoperazine (TFP) or the calcium release-activated channel (CRAC/Orai1) with BTP-2 efficiently blocked the phosphorylation of FAK and Src, revealing a role of calmodulin, the major Ca2+ sensor in cells, in the ADAM15-dependent and Fas-elicited activation of the two survival kinases. In addition, a direct Ca2+-dependent binding of calmodulin to ADAM15 could be demonstrated by pull-down assays using calmodulin-conjugated sepharose and by protein binding assays using the recombinant cytoplasmic domain of ADAM15 and calmodulin.
Furthermore, it could be demonstrated in living synovial fibroblasts by double immunofluorescence staining that triggering the Fas receptor with its ligand FasL or a Fas-activating antibody resulted in the recruitment of calmodulin to ADAM15 as well as to the Fas receptor in patch-like structures at the cell membrane. Simultaneously, co-immunoprecipitations showed that calmodulin-associated Src became engaged in an ADAM15 complex that also contained cytoplasmically bound FAK.
Additional studies were performed to analyze the efficacy of TFP and BTP-2 on apoptosis induction in synovial fibroblasts from 10 RA patients. Using caspase 3/7 and annexin V staining to determine apoptosis, it could be shown that neither inhibitor alone possessed any apoptosis-inducing capacity. However, when co-incubated with FasL, both compounds synergistically enhanced apoptosis rates in the RASFs. Moreover, additional silencing of ADAM15 revealed a further significant rise in apoptosis rates upon incubation with FasL/TFP or FasL/BTP-2, providing unequivocal evidence for an involvement of ADAM15 in facilitating apoptosis resistance in RASFs.
Taken together, these results demonstrate that ADAM15 provides a scaffold for the formation of calmodulin-dependent pro-survival signaling complexes upon CRAC/Orai1 coactivation by Fas ligation, which provides a new potential therapeutic target to break the apoptosis resistance in RASFs that critically contributes to joint destruction in RA.
The pictorial art of the Church, as a spiritual product of Christian civilisation, has continually received great influences from its ecclesiastical tradition and has been defined by its formal aesthetic standards and its iconographic preferences. A more nuanced reading of the parallels can be attained by placing the images in their visual context, allowing a better appreciation of the meanings within. The biblical story of Adam and Eve, the theme of this thesis, reflects the differentiation between the Eastern and the Western understanding of the events of the history of the holy Oikonomia, a point which is the major ground for the development of the related pictorial motifs. The protoplasts are the protagonists from their creation and life in paradise, through the fall and expulsion, until their resurrection through Christ. Their story is visualised in a number of scenes and episodes, with their original sin and resurrection centralised for specific reasons. This doctoral thesis attempts to collect as many parallels of the scenes as possible, collating the Eastern with the Western visual approach in a deductive way, in order to reach constructive conclusions and to bring together art, theology, and liturgy in the scenes of Adam and Eve in Genesis and in the Resurrection (Anastasis). The reading performed here is based upon specific iconographical elements that merit commentary. Our aim was to detect the direct bond between the production of art and the relevant patristic and apocryphal writings, or even theological theories, by quoting texts from the ecclesiastical literature as well as the liturgical praxis.
This dissertation examines the effects of sublethal doses of neonicotinoids on bees. Neonicotinoids are a class of insecticides that act on the nicotinic acetylcholine receptor. The neonicotinoids imidacloprid, clothianidin, and thiacloprid were used in this work. At the time of writing, the first two are subject to a temporary ban on sale and application, which makes the results of this work important for assessing the risks of neonicotinoids. Neonicotinoids are used on a large scale in agriculture as sprays and seed dressings. As a consequence, residues can be taken up by bees while collecting nectar and pollen and carried back to the hive. To obtain a broad view of the effects of these substances, experiments were therefore conducted both on individual foragers and on bee colonies to which the substances were fed. As neuroactive substances, they can interfere with the normal function of the bee nervous system, which can cause changes in behavior, manifesting in altered movement, orientation, or interaction with other bees. Despite the shared molecular target, the action at the receptor varies strongly between the neonicotinoids used. Clothianidin has been described as an agonist that can elicit even stronger currents than acetylcholine at the same concentration. Imidacloprid, in contrast, has been described as a partial agonist that elicits smaller currents through the receptor. In this work, a first attempt was made to characterize thiacloprid as an agonist at the nicotinic acetylcholine receptor of the bee as well; in a cultured cell, it elicited a smaller current.
Bee colonies were kept under controlled conditions in which one of the neonicotinoids clothianidin, imidacloprid, or thiacloprid was mixed into the food. Doses were chosen at which no acute impairment of the foragers was expected. It was found that chronic feeding with a sugar solution containing 8.876 mg/kg thiacloprid led to reduced foraging performance. Egg development was also severely impaired, although the queen continued to lay eggs: only isolated capped brood cells, which represent a late developmental stage of the bees, could be found. This showed that low doses affect the larval development of bees, possibly through effects on the communication between nurse bees and the brood.
To demonstrate effects on individual animals, different parameters of the homing flight of bees were analyzed after feeding with one of the neonicotinoids. After feeding, the bees had to orient themselves and find their way back to the hive from a new release site. The homing flight was tracked by radar, yielding a flight profile consisting of two phases, distinguished by navigation via vector integration and via landmarks. From the flight profile it could be read how long the bees needed for each phase of the flight, the main flight angle of the first phase, the direction flown at the end of the first phase, and how directed the flight was. Whether the bees were able to return to the hive at all was also recorded. Feeding with sugar water containing 0.6 µM or 0.9 µM imidacloprid, as well as with 0.1 mM thiacloprid, reduced the probability of returning home. In the first flight phase, 0.2 µM clothianidin in the sugar water led to faster flight, and the flight angle was shifted toward the true position of the hive compared to the control. Both imidacloprid groups showed a similar, significant shift of the flight angle, and frequent changes of direction were observed during the flight itself. In the second flight phase, bees treated with thiacloprid more often chose an incorrect homing direction, which resulted in longer homing flights. Bees treated with clothianidin covered a longer flight distance. Bees that had consumed either concentration of imidacloprid frequently changed their flight direction.
Thus, for all three neonicotinoids, effects on specific components of bee navigation were found, and impairments of the homing and orientation behavior of individual foragers were demonstrated. The initial questions could thereby be answered at least in part, and the body of data on the harmfulness of these politically controversial substances was extended.
Efficient algorithms for object recognition are crucial for new robotics and computer vision applications that demand real-time, online methods. Examples include autonomous systems, navigating robots, and autonomous driving. In this work, we focus on efficient semantic segmentation, which is the problem of labeling each pixel of an image with a semantic class.
Our aim is to speed up all parts of the semantic segmentation pipeline. We also aim at delivering a labeling solution within a time budget that can be decided on the fly. For this purpose, we analyze all components of the semantic segmentation pipeline and identify the computational bottleneck of each. The components of the pipeline are: over-segmenting the image into local regions, extracting features and classifying the local regions, and the final inference of the image labeling with semantic classes. We focus on each of these steps.
First, we introduce a new superpixel algorithm to over-segment the image. Our superpixel method runs in real time and can deliver a solution at any time budget. Then, for feature extraction, we focus on the framework that computes descriptors and encodes them, followed by a pooling step. We find that the encoding step is the bottleneck, both in computational cost and in performance. We present a novel assignment-based encoding formulation that allows for the design of a new, very efficient encoding. Finally, the image labeling output is obtained by modeling the dependencies with a Conditional Random Field (CRF). In semantic image segmentation, the computational cost of instantiating the potentials is much higher than that of MAP inference. We introduce Active MAP inference to select on the fly a subset of potentials to be instantiated in the energy function, leaving the rest unknown, and to estimate the MAP labeling from such an incomplete energy function.
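To illustrate the encode-and-pool step described above, the following is a minimal sketch of the classical hard-assignment (vector quantization) variant of assignment-based encoding, not the thesis's specific formulation: each local descriptor is assigned to its nearest codeword, and the assignments are pooled into a normalized histogram. The array shapes and variable names are illustrative assumptions.

```python
import numpy as np

def encode_hard_assignment(descriptors, codebook):
    """Hard-assignment encoding: each local descriptor votes for its
    nearest codeword; sum-pooling turns the votes into one histogram."""
    # Pairwise squared distances between descriptors (N x D) and codewords (K x D).
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)  # index of the closest codeword per descriptor
    hist = np.bincount(nearest, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)  # pool and L1-normalize

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))      # K = 8 codewords, 16-dim descriptors
descriptors = rng.normal(size=(100, 16))  # local descriptors from one region
code = encode_hard_assignment(descriptors, codebook)
print(code.shape)  # (8,)
```

More efficient encodings, like the one proposed in the thesis, change how the assignment weights are computed, but the assign-then-pool structure is the same.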
We perform experiments on all proposed methods for the different parts of the semantic segmentation pipeline. We show that our superpixel extraction achieves higher accuracy than the state of the art on standard superpixel benchmarks, while running in real time. We test our feature encoding on standard image classification and segmentation benchmarks and show that our method achieves results competitive with the state of the art while requiring less time and memory. Finally, results on a semantic segmentation benchmark show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains.
The composition of cellular membranes is extremely complex, and the mechanisms underlying their homeostasis are poorly understood. Organelles within a eukaryotic cell require a non-random distribution of membrane lipids, and a tight regulation of the membrane lipid composition is a prerequisite for the maintenance of specific organellar functions. Physical membrane properties such as bilayer thickness, lipid packing density, and surface charge are governed by the lipid composition and change gradually from the early to the late secretory pathway. As the endoplasmic reticulum (ER) is situated at the beginning of the cell's secretory pathway, it has to accept and accommodate a great variety and quantity of secretory and transmembrane proteins, which enter the ER on their way to their final cellular destination. Secretory proteins can be translocated into the lumen of the ER co- or posttranslationally, and membrane proteins are inserted and released into the ER membrane. In the oxidative milieu of the ER lumen, supported by a variety of chaperones, proteins can fold into their native form.
If the folding capacity of the ER lumen is exceeded, mis- or unfolded proteins accumulate in the lumen of the ER, consequently triggering the unfolded protein response (UPR). This highly conserved program activates a wide-spread transcriptional response to restore protein folding homeostasis. In fact, 7–8% of all genes in the yeast Saccharomyces cerevisiae (S. cerevisiae) are regulated by the UPR. The mechanism underlying the activation of the UPR by protein folding stress has been investigated thoroughly over the last decades, and many of its mechanistic details have been elucidated. Recently, it became evident that aberrant lipid compositions of the ER membrane, collectively referred to as lipid bilayer stress, are equally potent in activating the UPR. The underlying molecular mechanism of this membrane-activated UPR, however, remained unclear.
This study focuses on the UPR in S. cerevisiae and characterizes the inositol-requiring enzyme 1 (Ire1) as the sole UPR sensor in S. cerevisiae. Active Ire1 forms oligomers and, together with the tRNA ligase Rlg1, splices the immature mRNA of the transcription factor HAC1, which results in the synthesis of mature HAC1 mRNA and the production of the active Hac1 protein, which binds to UPR elements in the nucleus and activates the expression of UPR target genes. Here, a combination of in vivo and in vitro experiments is used, supplemented by molecular dynamics (MD) simulations performed by Roberto Covino and Gerhard Hummer (MPI for Biophysics, Frankfurt), aiming to identify the molecular mechanism of Ire1 activation by lipid bilayer stress. This study focuses on the analysis of the juxta- and transmembrane region of Ire1. Bioinformatic analyses revealed a putative ER-lumenal amphipathic helix (AH) N-terminal to and partially overlapping with the transmembrane helix (TMH). This predicted AH contains a large hydrophobic face, which inserts into the ER membrane, forcing the TMH into a tilted orientation within the membrane. The resulting unusual architecture of Ire1's AH and TMH constitutes a unique structural element required for the activation of Ire1 by lipid bilayer stress.
To investigate the function of the AH in its physiological context, different variants of Ire1 were produced under the control of their endogenous promoter and from their endogenous locus. The functional role of the AH was tested by disrupting its amphipathic character through the introduction of charged residues into the hydrophobic face of the AH. The role of a conserved negative residue between the TMH and the AH (E540 in S. cerevisiae) was tested by substituting it with a unipolar, polar, or positively charged residue. These variants were intensively characterized using a series of assays.
This thesis provides evidence that the AH is crucial for the function of Ire1: mutant variants with a disrupted (F531R, V535R) or otherwise modified AH (E540A) exhibited a lower degree of oligomerization and failed to catalyze the splicing of the HAC1 mRNA as efficiently as the wild-type control. Likewise, the induction of PDI1, a target gene of the UPR, was greatly reduced in mutants with a disrupted or defective AH. These data revealed an important functional role of the AH for normal Ire1 function.
An in vitro system was established to analyze the membrane-mediated oligomerization of Ire1. This system enabled the isolated functional analysis of the AH and TMH during Ire1 activation by lipid bilayer stress. A fusion construct was produced, coding for the maltose binding protein (MBP) from Escherichia coli (E. coli) N-terminal to the AH and TMH of Ire1. The heterologous production in E. coli, the purification, and the reconstitution of this minimal sensor of Ire1 in liposomes were established as part of this study. To analyze the oligomeric status of the minimal sensor in different lipid environments, continuous-wave electron paramagnetic resonance (cwEPR) spectroscopic experiments were performed. These experiments revealed that the molecular packing density of the lipids had a significant influence on the oligomerization of the spin-labeled membrane sensor: increasing packing densities resulted in sensor oligomerization. The AH-disruptive F531R mutant, in which the amphipathic character of the AH was destroyed, showed no membrane-sensitive changes in its oligomerization status.
Thus, the activation of Ire1 by lipid bilayer stress is achieved by a membrane-based mechanism. According to the current model, the AH induces a local membrane compression by inserting its large hydrophobic face into the membrane. As membrane thickness and acyl chain order are interconnected, this compression simultaneously results in an increased local disordering of lipid acyl chains. Supporting MD simulations performed by Roberto Covino and Gerhard Hummer revealed that the bilayer compression is significantly more pronounced in a densely packed lipid environment than in a lipid environment of lower packing density. Hence, the energetic cost of the local compression increases with the packing density of the membrane but is compensated for by the oligomerization of Ire1. This minimization of the energetic cost of the membrane deformation induced by Ire1 forms the basis for its activation by lipid bilayer stress.
This thesis investigates the acquisition pace and the typical developmental path in the eL2 acquisition of selected phenomena of German morphosyntax and semantics and compares them to monolingual acquisition. In addition, the influence of age of onset and of external factors on eL2 acquisition is examined.
To date, most studies on eL2 acquisition have focused on language production. Based on mostly longitudinal spontaneous speech data from only a small number of children, they indicate that eL2 learners acquire sentence structure and subject-verb agreement faster than monolingual children, whereas the acquisition of case marking causes them more difficulties. Moreover, developmental paths similar to those of monolingual children have been claimed. Only a few studies have examined comprehension abilities in eL2 learners, and these overwhelmingly used a cross-sectional design. The findings from comprehension studies on telic and atelic verbs and on wh-questions indicate that eL2 children acquire the target-like interpretation faster than monolingual children; the same acquisition stages toward target-like interpretation as in monolingual acquisition are assumed as well. Taken together, no study to date has examined comprehension and production abilities in a large group of eL2 learners of German in a longitudinal design.
This thesis extends the previous results by investigating pace of acquisition, impact of factors, and individual developmental paths in a longitudinal design with large groups of participants. Language data of 29 eL2 learners of German (age at T1: 3;7 years, LoE: 10 months) and 45 monolingual German-speaking children (age at T1: 3;7) are examined. The eL2 learners were tested in six test rounds (age at T6: 6;9 years). The monolingual children were tested in five test rounds (age at T5: 5;7). The standardized test LiSe-DaZ (Schulz & Tracy, 2011) was employed to examine the children's language skills.
eL2 learners show a significantly greater rate of change, and thus a faster acquisition pace, than monolingual children on the following scales: comprehension of telicity, comprehension of wh-questions, production of prepositions, and production of conjunctions. These phenomena are acquired early by monolingual children. No differences in acquisition pace between eL2 children and monolingual children are found for comprehension of negation, production of case marking, and production of focus particles. These phenomena are acquired late in monolingual development and involve semantic and pragmatic knowledge. The findings of a faster acquisition pace for several phenomena are in line with several studies reporting that eL2 children develop faster than monolingual children.
Independently of whether a phenomenon is acquired early or late, no effects of external factors on the eL2 children's performance are found. These findings indicate that the acquisition of core, rule-based phenomena is not sensitive to external factors if the first exposure to the L2 takes place around the age of three.
Moreover, eL2 children show the same developmental stages and error types in the comprehension of telicity, the comprehension of negation, and the production of matrix and subordinate clauses, independently of how fast they acquire the structure under consideration. Thus, these findings provide further support for similar developmental paths of eL2 and monolingual children toward target-like comprehension and production.
Echolocation allows bats to orientate in darkness without using visual information. Bats emit spatially directed high-frequency calls and infer spatial information from the echoes of these calls reflected off objects (Simmons 2012; Moss and Surlykke 2001, 2010). The echoes provide momentary snapshots, which have to be integrated to create an acoustic image of the surroundings. The spatial resolution of the computed image increases with the number of received echoes. Thus, a high call rate is required for a detailed representation of the surroundings.
One important parameter that the bats extract from the echoes is an object’s distance. The distance is inferred from the echo delay, which represents the duration between call emission and echo arrival (Kössl et al. 2014). The echo delay decreases with decreasing distance and delay-tuned neurons have been characterized in the ascending auditory pathway, which runs from the inferior colliculus (Wenstrup et al. 2012; Macías et al. 2016; Wenstrup and Portfors 2011; Dear and Suga 1995) to the auditory cortex (Hagemann et al. 2010; Suga and O'Neill 1979; O'Neill and Suga 1982).
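The relationship between echo delay and distance described above follows directly from the round-trip travel of sound: the echo covers the path to the object and back, so the one-way distance is half the delay times the speed of sound. A minimal sketch (not from the thesis; the speed of sound in air is an assumed textbook value):

```python
# Speed of sound in air at roughly 20 °C (assumed value, in m/s).
SPEED_OF_SOUND = 343.0

def target_distance(echo_delay_s: float) -> float:
    """Distance to the reflecting object for a given call-to-echo delay.

    The delay covers the round trip (call out, echo back), hence the
    division by two to obtain the one-way distance.
    """
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# A delay of about 5.8 ms corresponds to a target roughly 1 m away:
print(round(target_distance(0.0058), 3))  # 0.995
```

The shorter the delay, the closer the object, which is why delay-tuned neurons effectively encode target distance.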
Electrophysiological studies usually characterize neuronal processing by using artificial and simplified versions of the echolocation signals as stimuli (Hagemann et al. 2010; Hagemann et al. 2011; Hechavarría and Kössl 2014; Hechavarría et al. 2013). The high controllability of artificial stimuli simplifies the inference of the neuronal mechanisms underlying distance processing, but it remains largely unexplored how neurons process delay information from echolocation sequences. The main purpose of this thesis is to investigate how natural echolocation sequences are processed in the brain of the bat Carollia perspicillata. Bats actively control the sensory information that they gather during echolocation, which allows experimenters to easily identify and record the acoustic stimuli that are behaviorally relevant for orientation. To record echolocation sequences, a bat was placed on the mass of a swinging pendulum (Kobler et al. 1985; Beetz et al. 2016b). During the swing, the bat emitted echolocation calls that were reflected off surrounding objects. An ultrasound-sensitive microphone traveling with the bat and positioned above the bat's head recorded the echolocation sequence. The echolocation sequence carried the delay information of an approach flight and was used as a stimulus for neuronal recordings from the auditory cortex and inferior colliculus of the bats.
Presenting high stimulus rates to other species, such as rats and guinea pigs, suppresses cortical neuron activity (Wehr and Zador 2005; Creutzfeldt et al. 1980). Therefore, I tested whether neurons of bats are suppressed when stimulated with the high acoustic rates represented in echolocation sequences (sequence situation). Additionally, the bats were stimulated with randomized call-echo elements of the sequence at an interstimulus interval of 400 ms (element situation). To quantify the neuronal suppression induced by the sequence, I compared the response pattern in the sequence situation with the concatenated response patterns in the element situation. Surprisingly, although bats should be adapted for processing high acoustic rates, their cortical neurons are vastly suppressed in the sequence situation (Beetz et al. 2016b). However, instead of being completely suppressed during the sequence situation, the neurons partially recover from suppression at a unit-specific call-echo element. Multi-electrode recordings from the cortex allow an assessment of the representation of echo delays along the cortical surface. At the cortical level, delay-tuned neurons are topographically organized. Cortical suppression improves the sharpness of neuronal tuning and decreases the blurriness of the topographic map. With neuronal recordings from the inferior colliculus, I tested whether the echolocation sequence also induced neuronal suppression at the subcortical level. The sequence-induced suppression was weaker in the inferior colliculus than in the cortex. The collicular response enables the neurons to track the acoustic events in the echolocation sequence. Collicular suppression mainly improves the signal-to-noise ratio. In conclusion, the results demonstrate that cortical suppression is not necessarily a shortcoming for the temporal processing of rapidly occurring stimuli, as it has previously been interpreted.
Natural environments are usually composed of multiple objects. Thus, each echolocation call reflects off multiple objects, resulting in multiple echoes following each call. At present, it is largely unexplored how neurons process echolocation sequences containing echo information from more than one object (multi-object sequences). Therefore, I stimulated bats with a multi-object sequence that contained echo information from three objects located at different distances from each other. I tested the influence of each object on the neuronal tuning by stimulating the bats with different sequences created by filtering object-specific echoes out of the multi-object sequence. The cortex most reliably processes echo information from the nearest object, whereas echo information from distant objects is not processed due to neuronal suppression. Collicular neurons are less selective for echo information from particular objects and respond to each echo.
For proper echolocation, bats have to distinguish between their own biosonar signals and the signals coming from conspecifics. This can be quite challenging when many bats echolocate close to each other. In behavioral experiments, the echolocation performance of C. perspicillata was tested in the presence of potentially interfering sounds. In the presence of acoustic noise, the bats increased the sensory acquisition rate, which may increase the update rate of sensory processing. Neuronal recordings from the auditory cortex and inferior colliculus lent support to this hypothesis. Although there were signs of acoustic interference or jamming at the neuronal level, the neurons were not completely suppressed and responded to the remainder of the echolocation sequence.
One of the key functions of blood vessels is to transport nutrients and oxygen to distant tissues and organs in the body. When the blood supply is insufficient, new vessels form to meet the metabolic demands of the tissue and to re-establish cellular homeostasis. Expansion of the vascular network through sprouting angiogenesis requires the specification of endothelial cells (ECs) into leading (sprouting) tip cells and following (non-sprouting) stalk cells. Attracted by guidance cues, tip cells dynamically extend and retract filopodia to navigate the nascent vessel sprout, whereas trailing stalk cells proliferate to form the extending vascular tube. All of these processes are under the control of environmental signals (e.g. hypoxia, metabolism) and numerous cytokines and peptide growth factors. The Dll4/Notch pathway coordinates several critical steps of angiogenic blood vessel growth. Even subtle alterations in Notch activity can profoundly influence endothelial cell behavior and blood vessel formation, yet little is known about the intrinsic regulation and dynamics of Notch signaling in endothelial cells. In addition, it remains an open question how different growth factor signals impinging on sprouting ECs are coordinated with local environmental cues originating from nutrient-deprived, hypoxic tissue to achieve a balanced endothelial cell response. Acetylation of lysines is a critical posttranslational modification of histones, which acts as an important regulatory mechanism to control chromatin structure and gene transcription. In addition to histones, several non-histone proteins are targeted for acetylation, and reversible acetylation is emerging as a fundamental regulatory mechanism to control protein function, interaction and stability. Previous studies from our group identified the NAD+-dependent deacetylase SIRT1 as a key regulator of blood vessel growth controlling endothelial angiogenic responses.
These studies revealed that SIRT1 is highly expressed in the vascular endothelium during blood vessel development, where it controls the angiogenic activity of endothelial cells. Moreover, in this work SIRT1 has been shown to control the activity of key regulators of cardiovascular homeostasis such as eNOS, Foxo1 and p53. The present study describes how SIRT1 antagonizes Notch signaling by deacetylating the Notch intracellular domain (NICD). We showed that loss of SIRT1 enhances DLL4-induced endothelial Notch responses, as assessed with different luciferase reporter elements as well as by transcriptional analysis of endogenous Notch target gene activation. Conversely, SIRT1 gain of function by overexpression or pharmacological activation decreases the induction of Notch targets in response to DLL4 stimulation. We also showed that the NICD can be directly acetylated by PCAF and p300 and that SIRT1 promotes deacetylation of the NICD. We identified 14 lysines that are targeted for acetylation; their mutation abolishes the effects of SIRT1 on Notch responses. Furthermore, overexpression or activation of SIRT1 significantly reduces the levels of NICD protein. Moreover, SIRT1-mediated NICD degradation can be reversed by blockade of the proteasome, suggesting a mechanism based on ubiquitin-mediated proteolysis. Indeed, we have shown that SIRT1 knockdown or pharmacological inhibition decreased NICD ubiquitination. We propose a novel molecular mechanism modulating the amplitude and duration of Notch responses, in which acetylation increases NICD stability, and therefore its permanence at promoters, while SIRT1 shortens Notch responses by inducing NICD degradation through its deacetylation. In order to evaluate the physiological relevance of our findings, we used different models in which the functions of Notch during blood vessel formation have been extensively characterized.
First, retinal angiogenesis in mice lacking SIRT1 activity shows decreased branching and reduced endothelial proliferation, similar to what is observed after Notch gain-of-function mutations. ECs from these mice exhibit increased expression of Notch target genes. Second, these results were reproduced during intersomitic vessel growth in sirt1-deficient zebrafish. In both models, the defects could be partially rescued by inhibition of Notch activation. Third, we used an in vitro model of vessel sprouting from differentiating embryoid bodies in response to VEGF in a collagen matrix. Our results showed that Sirt1-deficient cells show impaired sprouting, which correlated with increased NICD levels. In addition, when in competition with wild-type cells in this assay, Sirt1-deficient cells are more prone to occupy the stalk cell position. Taken together, our study identifies reversible acetylation of the NICD as a novel molecular mechanism to adapt the dynamics of Notch signaling and suggests that SIRT1 acts as a rheostat to fine-tune endothelial Notch responses. The NAD+-dependent nature of SIRT1 activity possibly links endothelial Notch responses to environmental cues and metabolic changes during nutrient deprivation in ischemic environments or upon other cellular stresses.
The enzyme acetyl-CoA carboxylase (ACC) plays a fundamental role in fatty acid metabolism. It regulates the first and rate-limiting step in the biosynthesis of fatty acids by catalyzing the carboxylation of acetyl-CoA to malonyl-CoA, and exists as two different isoforms, ACC1 and ACC2. In recent years, ACC has been reported to be an attractive drug target for treating different diseases, such as insulin resistance, hepatic steatosis, dyslipidemia, obesity, metabolic syndrome and nonalcoholic fatty liver disease. An altered fatty acid metabolism is also associated with cancer cell proliferation. In general, the inhibition of ACC provides two ways to regulate fatty acid metabolism: it blocks de novo lipogenesis in lipogenic tissues and stimulates mitochondrial fatty acid β-oxidation. Surprisingly, the role of ACC in human vascular endothelial cells has been neglected so far. This work aimed to investigate the role of ACC and fatty acid metabolism in regulating important endothelial cell functions such as proliferation, migration and tube formation.
To investigate the function of ACC, the ACC inhibitor soraphen A as well as an siRNA-based approach were used. This study revealed that ACC1 is the predominant isoform both in human umbilical vein endothelial cells (HUVECs) and in human dermal microvascular endothelial cells (HMECs). Inhibition of ACC by soraphen A resulted in decreased levels of malonyl-CoA and shifted the lipid composition of endothelial cell membranes. Consequently, membrane fluidity, filopodia formation and migratory capacity were attenuated. Increasing amounts of longer acyl chains within the phospholipid subgroup phosphatidylcholine (PC) were suggested to overcompensate for the shift towards shorter acyl chains within phosphatidylglycerol (PG), resulting in a dominating effect on membrane fluidity. Most importantly, this work provided a link between changes in phospholipid composition and altered endothelial cell migration. The antimigratory effect of soraphen A was linked to a reduced amount of PG and an increased amount of polyunsaturated fatty acids (PUFAs) within the phospholipid cell membrane; this link had not previously been described in the literature. Interestingly, a reduced filopodia formation was observed upon ACC inhibition by soraphen A, which presumably caused the impaired migratory capacity.
This work revealed a relationship between ACC/fatty acid metabolism, membrane lipid composition and endothelial cell migration. The natural compound soraphen A emerged as a valuable chemical tool to analyze the role of ACC/fatty acid metabolism in regulating important endothelial cell functions. Furthermore, regulating endothelial cell migration via ACC inhibition promises beneficial therapeutic perspectives for the treatment of cell migration-related disorders, such as ischemia reperfusion injury, diabetic angiopathy, macular degeneration, rheumatoid arthritis, wound healing defects and cancer.
The term psychological acculturation describes the changes that can be observed at the individual level as a consequence of sustained contact between different cultural groups (Berry, 1997). The present thesis comprises three publications dealing with the acculturation processes of children and adolescents with a migration background in Germany. First, an overview of the current state of research on the situation of young migrants in Germany is presented. A central question is how Germany's migration history and immigration policy, as well as public attitudes toward migrants, influence the transcultural adaptation of children and adolescents of non-German ethno-cultural origin. Existing scientific findings are linked with the results of more recent empirical studies in order to contribute to a deeper understanding of the causes of the frequently reported problematic trajectories of psychological and sociocultural adaptation of migrants. Alongside other risk and protective factors, it is discussed how characteristics of Germany as a host country, such as the peculiarities of its school system, may affect adaptation trajectories. Our own studies contribute to the understanding of the adaptation processes of young migrants by showing that it is not the acculturation strategy of integration but specifically the orientation toward German culture that appears to lead to the most favorable psychological and sociocultural outcomes for individuals. This thesis further makes an empirical and methodological contribution to acculturation research by developing, validating, and finally applying in practice a measurement instrument for assessing psychological acculturation in children in the German-speaking area: the Frankfurt Acculturation Scale for Children (Frankfurter Akkulturationsskala für Kinder, FRAKK-K).
Scale development and optimization were based on two studies comprising data from 387 primary school pupils from two urban regions in Germany (Frankenberg & Bongard, 2013). The results of confirmatory factor analyses support two factors, orientation toward the host culture and orientation toward the culture of origin, each measured with six items. Both subscales show satisfactory internal reliability and criterion validity and can be combined to determine the acculturation strategy (i.e., assimilation, integration, separation, and marginalization). In a first practical application of the scale, we examine to what extent extended music lessons and orchestral playing in primary school can promote cultural integration via increased group cohesion.
Primary school pupils who played in an orchestra showed, over a period of 1.5 years, a stronger increase in orientation toward German culture than pupils who did not receive extended music lessons. Music pupils also felt more strongly integrated into the class community. This suggests that the experience of cooperating and making music within a group led to a stronger orientation toward German culture. Orientation toward the culture of origin remained unaffected. Programs that offer young migrants the opportunity to perform music within a larger, culturally heterogeneous group can thus serve as an effective intervention for promoting cultural adaptation to the majority culture and integration within, and beyond, the classroom.
Finally, the results of the empirical studies are related to the current state of research on recent acculturation models as well as to the terminology and methodological challenges of the field, and are critically reflected upon. Implications for future interventions and research derived from this discussion are then considered.
Mitochondrial NADH:ubiquinone oxidoreductase (complex I), the largest multiprotein enzyme of the respiratory chain, catalyses the transfer of two electrons from NADH to ubiquinone, coupled to the translocation of four protons across the membrane. In addition to the 14 strictly conserved central subunits it contains a variable number of accessory subunits. At present, the best characterized enzyme is complex I from bovine heart, with a molecular mass of about 980 kDa and 32 accessory proteins. In this study, the subunit composition of mitochondrial complex I from the aerobic yeast Y. lipolytica was analysed by a combination of proteomic and genomic approaches. The sequences of 37 complex I subunits were identified. The sum of their individual molecular masses (about 930 kDa) was consistent with the native molecular mass of approximately 900 kDa for Y. lipolytica complex I obtained by BN-PAGE. A genomic search of Y. lipolytica and other eukaryotic databases for homologues of complex I subunits revealed 31 proteins conserved among the examined species. A novel protein named "X" was found in purified Y. lipolytica complex I by MALDI-MS. This protein exhibits homology to the thiosulfate sulfurtransferase known as rhodanese. The presence of a rhodanese-like protein in isolated complex I of Y. lipolytica suggests a special regulatory mechanism of complex I activity through control of the status of its iron-sulfur clusters.
The second part of this study was aimed at investigating the possible role of one of these extra subunits, the 39 kDa (NUEM) subunit, which is related to the SDR enzyme family. The members of this family function in different redox and isomerization reactions and contain a conserved NAD(P)H-binding site. It was proposed that the 39 kDa subunit may be involved in a biosynthetic pathway, but its role in complex I is unknown. In contrast to the situation in N. crassa, deletion of the gene encoding the 39 kDa subunit in Y. lipolytica led to the absence of fully assembled complex I. This result might indicate different pathways of complex I assembly in the two organisms. Several site-directed mutations were generated in the nucleotide binding motif. These either had no effect on enzyme activity and NADPH binding, or prevented complex I assembly. Mutations of arginine-65, which is located at the end of the second β-strand and responsible for the selective interaction with the 2'-phosphate group of NADPH, retained complex I activity in mitochondrial membranes, but the affinity for the cofactor was markedly decreased. Purification of complex I from these mutants resulted in a decrease or loss of ubiquinone reductase activity. It is very likely that replacement of R65 not only led to a decrease in affinity for NADPH but also caused instability of the enzyme due to steric changes in the 39 kDa subunit. These data indicate that NADPH bound to the 39 kDa subunit (NUEM) is not essential for complex I activity, but is probably involved in complex I assembly in Y. lipolytica.
Acceleration of Biomedical Image Processing and Reconstruction with FPGAs
Increasing chip sizes and better programming tools have made it possible to push the boundaries of application acceleration with reconfigurable computer chips. In this thesis, the potential of acceleration with Field Programmable Gate Arrays (FPGAs) is examined for applications that perform biomedical image processing and reconstruction. The dataflow paradigm was used to port the analysis of image data for localization microscopy and for 3D electron tomography from an imperative description to the FPGA for the first time.
After the primitives of image processing on FPGAs are presented, a general workflow is given for analyzing imperative source code and converting it into a hardware pipeline in which every node processes image data in parallel. This theoretical foundation is then used to accelerate both example applications. For localization microscopy, a speedup factor of 185 compared to an Intel i5 450 CPU was achieved, and electron tomography was sped up by a factor of 5 over an Nvidia Tesla C1060 graphics card, while maintaining full accuracy in both cases.
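The imperative-to-dataflow conversion can be illustrated in software: each pipeline node becomes a streaming stage that consumes and emits one value per "cycle", mirroring how all stages on the chip work concurrently. The sketch below models this with Python generators; the stage names (background subtraction, thresholding) are assumed stand-ins for the thesis's actual localization and tomography kernels, not its real code:

```python
# Conceptual software model of a dataflow pipeline (illustrative only):
# each node is a generator that consumes one pixel per "cycle".

def source(pixels):
    for p in pixels:
        yield p

def subtract_background(stream, bg):
    for p in stream:
        yield max(p - bg, 0)          # stage 1: background removal

def threshold(stream, t):
    for p in stream:
        yield p if p >= t else 0      # stage 2: suppress weak pixels

pixels = [3, 7, 12, 5, 20]
pipe = threshold(subtract_background(source(pixels), bg=2), t=5)
result = list(pipe)                   # → [0, 5, 10, 0, 18]
```

On an FPGA, each such stage becomes dedicated hardware, so all stages process different pixels simultaneously instead of taking turns as the generators do here.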
The ab-initio molecular dynamics framework has been a cornerstone of computational solid state physics in the last few decades. Although it is already a mature field, it is still rapidly developing to accommodate the growth in solid state research as well as to efficiently utilize the increase in computing power. Starting from first principles, ab-initio molecular dynamics provides essential information about the structural and electronic properties of matter under various external conditions. In this thesis we use ab-initio molecular dynamics to study the behavior of BaFe2As2 and CaFe2As2 under the application of external pressure. BaFe2As2 and CaFe2As2 belong to the family of iron-based superconductors, which are novel and promising superconducting materials. The application of pressure is one of two key methods by which the electronic and structural properties of iron-based superconductors can be modified, the other being doping (or chemical pressure). In particular, it has been noted that pressure conditions have an important effect, but their exact role is not fully understood. To better understand the effect of different pressure conditions we have performed a series of ab-initio simulations of pressure application. In order to apply pressure with an arbitrary stress tensor we have developed a method based on the Fast Inertial Relaxation Engine, whereby the unit cell and the atomic positions are evolved according to the metadynamical equations of motion. We have found that the application of hydrostatic and c-axis uniaxial pressure induces a phase transition from the magnetically ordered orthorhombic phase to the non-magnetic collapsed tetragonal phase in both BaFe2As2 and CaFe2As2. In the case of BaFe2As2, an intermediate non-magnetic tetragonal phase is observed in addition.
Application of uniaxial pressure parallel to the c axis reduces the critical pressure of the phase transition by an order of magnitude, in agreement with experimental findings. In-plane pressure application did not result in a transition to the non-magnetic tetragonal phase; instead, a rotation of the magnetic order direction could be observed. This is discussed in the context of Ginzburg-Landau theory. We have also found that the magnetostructural phase transition is accompanied by a change in the Fermi surface topology, whereby the hole cylinders centered around the Gamma point disappear, restricting the possible Cooper pair scattering channels in the tetragonal phase. Our calculations also allow us to estimate the bulk moduli and the orthorhombic elastic constants of BaFe2As2 and CaFe2As2.
To study the electronic structure of systems with broken translational symmetry, such as doped iron-based superconductors, it is necessary to develop a method to unfold the complicated bandstructures arising from supercell calculations. In this thesis we present an unfolding method based on group theoretical techniques. We achieve the unfolding by employing induced irreducible representations of space groups. The unique feature of our method is that it treats the point group operations on an equal footing with the translations. This permits us to unfold bandstructures beyond the limit of translational symmetry and also to formulate tight-binding models of reduced dimensionality if certain conditions are met. The inclusion of point group operations in the unfolding formalism allows us to reach important conclusions about the two- versus one-iron picture in iron-based superconductors.
Finally, we present the results of ab-initio structure prediction for the giant volume collapse in MnS2 and for alkali-doped picene. In the case of MnS2, a previously unobserved high-pressure arsenopyrite structure of MnS2 is predicted and the stability regions of the two competing metastable phases under pressure are determined. In the case of alkali-doped picene, crystal structures with different levels of doping were predicted and used to study the role of electronic correlations.
First-principles modeling techniques offer the ability to simulate a wide range of systems under different physical conditions, such as temperature, pressure, and composition, without relying on empirical knowledge. Density functional theory (DFT), a quantum mechanical method, has become an exceptionally successful framework for materials science modeling. Employing DFT makes it possible to gain valuable insights into the fundamental state of a system, enabling the reliable determination of equilibrium crystal structures. Over time, DFT has become an essential tool that can be incorporated into various schemes for predicting the properties of a material related to its structure, insulating/metallic behavior, magnetism, and optics. DFT is regularly applied in numerous fields, spanning from fundamental subjects in condensed matter physics to the study of large-scale phenomena in geosciences. In the latter, the effectiveness of DFT stems from its ability to simulate the properties found on the Earth, other planets, and meteorites, which may pose challenges for their direct study or laboratory investigation.
In this thesis, a comprehensive examination of a family of monosulfides and of a perovskite heterostructure was conducted. These materials are relevant for their potential applications in technology and energy harvesting and, in the case of the monosulfides, for their speculated abundance on the planet Mercury.
Firstly, a DFT approach was used to analyze two non-magnetic monosulfides, CaS and MgS. We determined their structural properties and then focused on modeling their reflectivity in the infrared region. The calculation of the reflectivity considered both harmonic and anharmonic contributions. In the harmonic limit, the non-analytic correction was employed to accurately determine the LO/TO splitting, which is necessary to delimit the reststrahlen band, that is, the region of maximum reflectivity. The anharmonic effects, given by up to three-phonon and isotopic scattering and included using perturbation theory, primarily smeared the edges of the reflectivity spectra in the high-wavenumber region.
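In the damping-free harmonic limit, the link between the LO/TO splitting and the reststrahlen band follows from the standard single-oscillator form of the dielectric function (textbook relations, not reproduced from the thesis):

```latex
\varepsilon(\omega) = \varepsilon_\infty\,
  \frac{\omega_{\mathrm{LO}}^{2} - \omega^{2}}
       {\omega_{\mathrm{TO}}^{2} - \omega^{2}},
\qquad
R(\omega) = \left|\frac{\sqrt{\varepsilon(\omega)} - 1}
                       {\sqrt{\varepsilon(\omega)} + 1}\right|^{2}.
```

For $\omega_{\mathrm{TO}} < \omega < \omega_{\mathrm{LO}}$ one has $\varepsilon(\omega) < 0$, so $\sqrt{\varepsilon}$ is purely imaginary and $R = 1$: this interval is the reststrahlen band, whose sharp edges the three-phonon and isotopic scattering contributions smear out.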
Secondly, four polymorphs of MnS were studied using a combination of first-principles methods to simulate their antiferromagnetic (AFM) and paramagnetic (PM) states. The integration of DFT+$U$ with special quasirandom structure (SQS) supercells and occupation matrix control techniques was crucial for achieving convergence, accurate structural optimization, and finite energy band gaps and local magnetic moments in the PM phases. The Hubbard $U$ correction was necessary to treat the strongly correlated Mn $d$-electrons. The success of our approach was evident from our electronic structure predictions for the PM rock-salt B1-MnS polymorph: experimentally this phase has been observed to be an insulator, but multiple \emph{ab initio} works had previously found metallic behavior, whereas our computations predicted insulating and magnetic properties that compare well with the available measurements. Additionally, the pressure stability fields of the four MnS polymorphs were studied. Among the PM phases, B1-MnS was identified as the most stable up to about 21 GPa, where it transforms into the B31-MnS polymorph, in close agreement with high-pressure experiments reporting a similar phase transformation. The optical properties of B1-, B4-, and B31-MnS were also simulated. The SQS technique was used to obtain soft-mode-free phonon band structures within the harmonic approximation. The anharmonic effects were then included, and the reflectivity was calculated for B1-MnS and B4-MnS. In both cases, good agreement with experimental results was achieved for the LO/TO splitting.
Lastly, the oxygen-deficient heterostructure LaAlO$_{3-\delta}$/SrTiO$_{3-\delta}$ was investigated, also employing DFT+$U$, with particular emphasis on the potential impact of vacancy clustering at the interface. Six distinct configurations of vacancy pairs were studied and their energies compared to find the most stable one. The orbital reconstruction of the Ti orbitals was also examined as a function of their position with respect to the vacancies, and the local magnetic moments were calculated. The final results showed that linearly arranged vacancies located opposite Ti ions yield the most energetically stable configuration.
The brain is arguably the most complex structure on Earth that humans study. It consists of a vast network of nerve cells capable of processing incoming sensory information to construct a meaningful representation of the environment. It also coordinates the actions of the organism for interacting with its surroundings. The brain has the remarkable ability both to store information and to continuously adapt to changing conditions, throughout its entire lifetime. This is essential for humans and animals to develop and learn. The basis of this lifelong learning process is the plasticity of the brain, which constantly adjusts and rewires the vast network of neurons. The changes in synaptic connections and in the intrinsic excitability of each neuron take place through self-organized mechanisms and optimize the behavior of the organism as a whole. The phenomenon of neuronal plasticity has occupied neuroscience and other disciplines for several decades. Intrinsic plasticity describes the continuous adaptation of a neuron's excitability to maintain a balanced, homeostatic operating range. Synaptic plasticity in particular, which refers to changes in the strength of existing connections, has been investigated under many different conditions and has proven ever more complex with each new study. It is induced by a complex interplay of biophysical mechanisms, depends on various factors such as the frequency of action potentials, their timing, and the membrane potential, and furthermore shows a metaplastic dependence on past events. Ultimately, synaptic plasticity influences the signal processing and computation of individual neurons and of neuronal networks.
The focus of this thesis is to advance the understanding of the biological mechanisms underlying the observed plasticity phenomena, and of their consequences, through a more unified theory. To this end, I formulate two functional objectives for neuronal plasticity, derive learning rules from them, and analyze their consequences and predictions.
Chapter 3 investigates the discriminability of population activity in networks as a functional objective for neuronal plasticity. The hypothesis is that, in recurrent but also in feed-forward networks, the population activity as a representation of the input signals can be optimized if similar inputs receive representations that are as distinct as possible, making them easier to discriminate for subsequent processing. The functional objective is therefore to maximize this discriminability through changes in connection strengths and neuronal excitability by means of local, self-organized learning rules. From this functional objective, a number of standard learning rules for artificial neural networks can be jointly derived.
Chapter 4 applies a similar functional approach to a more complex, biophysical neuron model. The objective is to maximize, through local synaptic learning rules, a sparse, strongly skewed distribution of synaptic strengths, as has been found experimentally several times. From this functional approach, all major phenomena of synaptic plasticity can be explained. Simulations of the learning rule in a realistic neuron model with full morphology reproduce the data from timing-, rate-, and voltage-dependent plasticity protocols. The learning rule also has an intrinsic dependence on the position of the synapse, which agrees with experimental results. Moreover, the learning rule can explain metaplastic phenomena without additional assumptions. The approach predicts a new form of metaplasticity that influences timing-dependent plasticity. The formulated learning rule leads to two novel unifications for synaptic plasticity: first, it shows that the diverse phenomena of synaptic plasticity can be understood as consequences of a single functional objective; and second, the approach bridges the gap between the functional and the mechanistic level of description. The proposed functional objective yields a learning rule with a biophysical formulation that can be related to established theories of the biological mechanisms. Furthermore, the objective of a sparse distribution of synaptic strengths can be interpreted as contributing to energy-efficient synaptic transmission and optimized coding.
A stochastic model for the joint evaluation of burstiness and regularity in oscillatory spike trains
(2013)
The thesis provides a stochastic model to quantify and classify neuronal firing patterns of oscillatory spike trains. A spike train is a finite sequence of time points at which a neuron has an electric discharge (spike), recorded over a finite time interval. In this work, these spike times are analyzed with regard to special firing patterns such as the presence or absence of oscillatory activity and of clusters (so-called bursts). These bursts have no clear and unique definition in the literature. They are often fired in response to behaviorally relevant stimuli, e.g., an unexpected reward or a novel stimulus, but may also appear spontaneously. Oscillatory activity has been found to be related to complex information processing such as feature binding or figure-ground segregation in the visual cortex. Thus, in the context of neurophysiology, it is important to quantify and classify these firing patterns and their change under certain experimental conditions such as pharmacological treatment or genetic manipulation. In neuroscientific practice, the classification is often done by visual inspection criteria without giving reproducible results. Furthermore, descriptive methods are used for the quantification of spike trains without relating the extracted measures to properties of the underlying processes.
For that reason, a doubly stochastic point process model is proposed and termed 'Gaussian Locking to a free Oscillator' (GLO). The model has been developed on the basis of empirical observations in dopaminergic neurons and in cooperation with neurophysiologists. As a first stage, the GLO model uses an unobservable oscillatory background rhythm, represented by a stationary random walk whose increments are normally distributed. Two different model types are used to describe single spike firing or clusters of spikes. In each type, the random number of spikes per beat follows a different probability distribution (Bernoulli in the single-spike case, Poisson in the cluster case). In the second stage, the random spike times are placed around their birth beat according to a normal distribution. These spike times represent the observed point process, which has five easily interpretable parameters describing the regularity and burstiness of the firing patterns.
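The two-stage construction described above can be sketched in a short simulation of the cluster ('bursty') variant; the parameter names and values below are illustrative assumptions, not the thesis's notation:

```python
import numpy as np

def simulate_glo(n_beats=200, beat_mean=0.25, beat_sd=0.02,
                 spike_rate=1.5, jitter_sd=0.01, seed=0):
    """Illustrative sketch of the GLO model's two stages (cluster variant).

    Stage 1: an unobserved beat sequence, a random walk with normally
    distributed increments (mean beat_mean, sd beat_sd).
    Stage 2: a Poisson number of spikes per beat (mean spike_rate), each
    placed around its birth beat with normal jitter (sd jitter_sd).
    """
    rng = np.random.default_rng(seed)
    beats = np.cumsum(rng.normal(beat_mean, beat_sd, n_beats))
    counts = rng.poisson(spike_rate, n_beats)          # spikes per beat
    spikes = np.concatenate([
        rng.normal(beat, jitter_sd, k) for beat, k in zip(beats, counts)
    ])
    return np.sort(spikes)

spikes = simulate_glo()
# inter-spike intervals reflect both the within-beat jitter (short
# intervals inside a burst) and the beat period (intervals near beat_mean)
isi = np.diff(spikes)
```

Sweeping `spike_rate` and `jitter_sd` in such a simulation makes the interplay between burstiness and regularity, which the five model parameters are meant to capture, directly visible.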
It turns out that the point process is stationary, simple and ergodic. It can be characterized as a cluster process and, for the bursty firing mode, as a Cox process. Furthermore, the distribution of the waiting times between spikes can be derived for some parameter combinations. The conditional intensity function of the point process is derived, which is also called the autocorrelation function (ACF) in the neuroscience literature. This function arises by conditioning on a spike at time zero and measures the intensity of spikes x time units later. The autocorrelation histogram (ACH) is an estimate of the ACF. The parameters of the GLO are estimated by fitting the ACF to the ACH with a nonlinear least squares algorithm. This is a common procedure in neuroscientific practice and has the advantage that the GLO ACF can be computed for all parameter combinations and that its properties are closely related to the burstiness and regularity of the process. The precision of estimation is investigated for different scenarios using Monte Carlo simulations and bootstrap methods.
The GLO provides the neuroscientist with objective and reproducible classification rules for the firing patterns on the basis of the model ACF. These rules are inspired by visual inspection criteria often used in neuroscientific practice and thus support and complement the usual analysis of empirical spike trains. When applied to a sample data set, the model is able to detect significant changes in the regularity and burst behavior of the cells and provides confidence intervals for the parameter estimates.
Computational oral absorption models, in particular PBBM models, provide a powerful tool for researchers and pharmaceutical scientists in drug discovery and formulation development, as they mimic and describe the physiological processes relevant to oral absorption. PBBM models provide in vivo context to in vitro experiments and allow for a dynamic understanding of in vivo drug disposition that is not typically provided by data from standard in vitro assays. Investigations using these models permit informed decision-making, especially regarding formulation strategies in drug development. PBBM models can also be used to investigate and provide insight into the mechanisms responsible for complex phenomena such as food effects on drug absorption. Although there are still gaps in the in silico construction of the gastrointestinal environment, ongoing research in the area of oral drug absorption (e.g. the UNGAP, AGE-POP and InPharma projects) will increase knowledge and enable improvement of these models.
PBBM can nowadays provide an alternative approach to the development of in vitro–in vivo correlations. The case studies presented in this thesis demonstrate how PBBM can provide a mechanistic understanding of the negative food effect and be used to set clinically relevant dissolution specifications for zolpidem immediate-release tablets. In both cases, we demonstrated the importance of integrating drug properties with physiological variables to mechanistically understand and observe the impact of these parameters on oral drug absorption.
Various complex physiological processes are initiated upon food consumption, which can enhance or reduce a drug’s dissolution, solubility, and permeability and thus lead to changes in drug absorption. With improvements in modeling and simulation software and design of in vitro studies, PBBM modeling of food effects may eventually serve as a surrogate for clinical food effect studies for new doses and formulations or drugs. Furthermore, the application of these models may be even more critical in case of compounds where execution of clinical studies in healthy volunteers would be difficult (e.g., oncology drugs).
In the fourth chapter we demonstrated that linking biopredictive in vitro dissolution testing (QC or biorelevant methods) to PBBM coupled with PD modeling opens the opportunity to set truly clinically relevant specifications for drug release. This approach can be extended to other drugs regardless of their classification according to the BCS.
With the increased adoption of PBBM, we expect that best practices in the development and verification of these models will be established that can eventually inform regulatory guidance. The application of Physiologically Based Biopharmaceutical Modelling is therefore an area with great potential to streamline late-stage drug development and to impact regulatory approval procedures.
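PBBM models resolve many physiological compartments and processes; as a deliberately minimal sketch of the underlying idea of mechanistic absorption modeling, one can write down the classic one-compartment model with first-order absorption (the Bateman function). All parameter values below are hypothetical and far simpler than a full PBBM:

```python
import numpy as np

def concentration(t, dose=10.0, F=0.7, V=50.0, ka=1.0, ke=0.1):
    """Plasma concentration after an oral dose (Bateman function).

    dose [mg], F bioavailable fraction, V distribution volume [L],
    ka absorption and ke elimination rate constants [1/h].
    Toy illustration only -- a PBBM replaces the single absorption
    constant ka with mechanistic dissolution and gut-transit steps.
    """
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 24, 2401)                       # hours
c = concentration(t)
tmax = t[np.argmax(c)]                             # numerical Tmax
tmax_analytic = np.log(1.0 / 0.1) / (1.0 - 0.1)    # ln(ka/ke)/(ka-ke)
```

Even this toy model shows how a food effect could be probed in silico: a meal that slows gastric emptying maps onto a smaller effective `ka`, shifting Tmax later and lowering Cmax.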
The miniaturization of electronics is reaching its limits. Structures necessary to build integrated circuits from semiconductors are shrinking and could reach the size of only a few atoms within the next few years. At the latest at this point in time, the physics of nanostructures will gain importance in our everyday life. This thesis deals with the physics of quantum impurity models. All models of this class exhibit an identical structure: the simple and small impurity has only few degrees of freedom. It can be built out of a small number of atoms or a single molecule, for example, and in the simplest case it is described by a single spin degree of freedom; in many quantum impurity models, the impurity itself can be treated exactly. The complexity of the description arises from its coupling to a large number of fermionic or bosonic degrees of freedom ('large' meaning that we have to deal with particle numbers of the order of 10^{23}). An exact treatment of the full system thus remains impossible. At the same time, physical effects which arise in quantum impurity systems often cannot be described within a perturbative theory, since multiple energy scales may play an important role. One example of such an effect is the Kondo effect, where the free magnetic moment of the impurity is screened by a "cloud" of fermionic particles of the quantum bath.
The Kondo effect is only one example of the rich physics stemming from correlation effects in many-body systems. Quantum impurity models, and the oftentimes related Kondo effect, have regained the attention of experimental and theoretical physicists since the advent of quantum dots, which are sometimes also referred to as artificial atoms. Quantum dots offer an unprecedented control and tunability of many system parameters. Hence, they constitute a nice "playground" for fundamental research, while also being promising candidates for building blocks of future technological devices.
Recently, Loss and DiVincenzo's proposal of a quantum computing scheme based on spins in quantum dots increased the efforts of experimentalists to coherently manipulate and read out the spins of quantum dots one by one. In this context, two topics are of paramount importance for future quantum information processing: since decoherence times have to be large enough to allow for good error correction schemes, understanding the loss of phase coherence in quantum impurity systems is a prerequisite for quantum computation in these systems. Nonequilibrium phenomena in quantum impurity systems also have to be understood before one may gain control of manipulating quantum bits.
As a first step towards more complicated nonequilibrium situations, the reaction of a system to a quantum quench, i.e., a sudden change of external fields or other parameters of the system, can be investigated. We give an introduction to a powerful numerical method used in this field of research, the numerical renormalization group (NRG) method, and apply this method and its recent enhancements to various quantum impurity systems.
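The NRG's central ingredient, the logarithmic discretization of the bath into a semi-infinite "Wilson chain", can be sketched numerically. The closed-form hoppings below are the standard result for a flat conduction band of half-bandwidth 1; the discretization parameter Λ = 2 is a conventional choice, and this sketch is independent of the specific impurity models treated in the thesis:

```python
import numpy as np

def wilson_hoppings(Lambda=2.0, n_sites=25):
    """Hopping amplitudes t_n of the Wilson chain obtained from
    logarithmic discretization of a flat conduction band
    (half-bandwidth 1) with discretization parameter Lambda."""
    n = np.arange(n_sites)
    return ((1 + 1 / Lambda) * (1 - Lambda ** -(n + 1))
            / (2 * np.sqrt((1 - Lambda ** -(2 * n + 1))
                           * (1 - Lambda ** -(2 * n + 3))))
            * Lambda ** (-n / 2))

t = wilson_hoppings()
# the hoppings fall off as Lambda^{-n/2}: each added chain site resolves
# an energy scale sqrt(Lambda) smaller, which is what allows NRG to reach
# exponentially small scales (e.g. the Kondo temperature) iteratively
ratios = t[1:] / t[:-1]
```

The exponential separation of scales along the chain is precisely what makes the iterative diagonalization with truncation controlled: states discarded at one iteration are irrelevant at all later, lower energy scales.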
The main part of this thesis may be structured in the following way:
- Ferromagnetic Kondo Model,
- Spin-Dynamics in the Anisotropic Kondo and the Spin-Boson Model,
- Two Ising-coupled Spins in a Bosonic Bath,
- Decoherence in an Aharonov-Bohm Interferometer.
A novel role for mutant mRNA degradation in triggering transcriptional adaptation to mutations
(2020)
Robustness to mutations promotes organisms’ well-being and fitness. The increasing number of mutants in various model organisms, and humans, showing no obvious phenotype (Bouche and Bouchez, 2001; Chen et al., 2016b; Giaever et al., 2002; Kok et al., 2015) has renewed interest in how organisms adapt to gene loss. In the presence of deleterious mutations, genetic compensation by transcriptional upregulation of related gene(s) (also known as transcriptional adaptation) has been reported in numerous systems (El-Brolosy and Stainier, 2017; Rossi et al., 2015; Tondeleir et al., 2012); however, the molecular mechanisms underlying this response remained unclear. To investigate this phenomenon, I develop and study multiple models of transcriptional adaptation in zebrafish and mouse cell lines. I first show that transcriptional adaptation is not caused by loss of protein function, indicating that the trigger lies upstream, and find that the response involves enhanced transcription of the related gene(s). Furthermore, I observe a correlation between levels of mutant mRNA degradation and upregulation of related genes. To investigate the role of mutant mRNA degradation in triggering the response, I generate mutant alleles that do not transcribe the mutated gene and find that they fail to induce a transcriptional response and display stronger phenotypes. Transcriptome analysis of alleles displaying mutant mRNA degradation revealed upregulation of a significant proportion of genes displaying sequence similarity with the mutated gene’s mRNA, suggesting a model whereby mRNA degradation intermediates induce transcriptional adaptation via sequence similarity. Further mechanistic analyses suggested RNA decay factor-dependent chromatin remodeling and repression of antisense RNAs to be implicated in the response. These results identify a novel role for mutant mRNA degradation in buffering against mutations.
Moreover, these results have important implications for understanding disease-causing mutations and should help in designing mutations that lead to minimal transcriptional adaptation-induced compensation, facilitating the study of gene function in model organisms.
In this dissertation a non-deterministic lambda calculus with call-by-need evaluation is treated. Call-by-need means that subexpressions are evaluated at most once and only if their value must be known to compute the overall result. Also called "sharing", this technique is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. In addition, the calculus has function application, lambda abstractions, sequential evaluation and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or side-effecting input/output. In this work, non-determinism additionally makes it visible when sharing is broken. Based on the bisimulation method, this work develops a notion of equality which respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 represents a new concept, insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus. Hence, in section 3.2 a further calculus lambda-Approx has to be defined. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND and lambda-Approx agrees. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. Using this (extended) method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained within the contextual preorder of the lambda-ND calculus.
However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity. Finally, chapter 6 discusses possible extensions of the base calculus such as recursive bindings or case and constructors. As a fundamental study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without major hurdles. However, recursive bindings cannot be represented simply by a fixed-point combinator like Y, so further investigations are necessary.
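The "at most once, and only if demanded" discipline of call-by-need can be illustrated operationally with a memoized thunk. This Python sketch only conveys the idea of sharing that the let-construct formalizes; it is not the lambda-ND calculus itself:

```python
class Thunk:
    """A shared, memoized suspension: the body runs at most once, and
    only if the value is demanded -- the operational idea behind
    call-by-need evaluation ('sharing')."""
    def __init__(self, compute):
        self._compute = compute
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:             # evaluate at most once
            self._value = self._compute()
            self._forced = True
            self._compute = None         # drop the suspension
        return self._value

calls = []
# 'let x = <expensive> in x + x': both uses refer to one shared thunk
x = Thunk(lambda: (calls.append(1), 21)[1])
result = x.force() + x.force()           # the body runs only once
```

With a non-deterministic `pick` in the thunk body, broken sharing would become observable: re-evaluating the body could yield two different values for the two uses of `x`, which is exactly why non-determinism makes sharing violations visible.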
For several decades, lysozyme has been one of the most intensively studied proteins in the literature and is mainly used as a model protein for elucidating folding and unfolding processes. Since the question of misfolding and its link to neurodegenerative diseases has not been fully resolved to this day, there is ample room for further research. In the present work, two model systems were therefore used, hen egg-white lysozyme and human lysozyme, each in its non-native unfolded state. These unfolded ensembles were investigated by NMR spectroscopic methods and yielded very detailed, in part surprising new insights into the structure and dynamics of the two proteins, thus providing important knowledge about folding and aggregation processes. ...
This work is concerned with two topics at the intersection of convex algebraic geometry and optimization.
We develop a new method for the optimization of polynomials over polytopes. From the point of view of convex algebraic geometry, the most common method for the approximation of polynomial optimization problems is to solve semidefinite programming relaxations coming from the application of Positivstellensätze. In optimization, non-linear programming problems are often solved using branch and bound methods. We propose a fused method that uses Positivstellensatz relaxations as lower bounding methods in a branch and bound scheme. By deriving a new error bound for Handelman's Positivstellensatz, we show convergence of the resulting branch and bound method. Through the application of Positivstellensätze, semidefinite programming has gained importance in polynomial optimization in recent years. While it has proven to be a powerful tool, the underlying geometry of the feasibility regions (spectrahedra) is not yet well understood. In this work, we study polyhedral and spectrahedral containment problems; in particular, we classify their complexity and introduce sufficient criteria to certify the containment of one spectrahedron in another.
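As a toy illustration of how a Handelman-type relaxation yields a lower bound via linear programming, consider minimizing a univariate quadratic over [0,1]. The example and code are illustrative only, not taken from the thesis:

```python
import numpy as np
from scipy.optimize import linprog

# Degree-2 Handelman relaxation of  min  x^2 - x + 1  on [0,1]
# (true minimum 3/4 at x = 1/2).
# Ansatz:  p(x) - lam = sum_a c_a * x^i (1-x)^j  over i+j <= 2,  c_a >= 0,
# with basis products  1, x, 1-x, x^2, x(1-x), (1-x)^2,
# matched in the monomial basis {1, x, x^2}.
basis = np.array([
    [1, 0, 1, 0, 0, 1],      # constant terms of the six products
    [0, 1, -1, 0, 1, -2],    # x terms
    [0, 0, 0, 1, -1, 1],     # x^2 terms
])
target = np.array([1.0, -1.0, 1.0])   # p(x) = 1 - x + x^2

# lam = 1 - (constant contributed by the certificate): maximizing lam
# means minimizing the constant row while matching x and x^2 exactly.
res = linprog(c=basis[0], A_eq=basis[1:], b_eq=target[1:],
              bounds=[(0, None)] * 6)
lam = target[0] - res.fun
# the degree-2 bound lam = 0.5 is valid but not tight (true minimum 3/4),
# which is why such relaxations are combined with branch and bound
```

Subdividing [0,1] and re-solving the relaxation on each subinterval tightens the bound, which is exactly the role these relaxations play inside the branch and bound scheme.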
Many hominin species are best physically represented and understood by the sum of their dental morphologies. Generally, taxonomic affinities and evolutionary trends in development (ontogeny) and morphology (phylogeny) can be deduced from dental analyses. More specifically, the study of dental remains can yield a wealth of information on many facets of hominin evolution, life history, physiology and ecological adaptation; in short, the organism's paleobiomics. Functionally, teeth present information about dietary preferences, that is, the dietary niche in ecological context and, in turn, masticatory function. As the amount and types of information that can be gleaned from 2-dimensional tooth measurement exhaust themselves, 3-dimensional microscopic modeling and analysis present a largely fertile ground for reexamination and reinterpretation of dental characteristics (Bromage et al., 2005). As such, a novel, non-destructive approach has been developed which combines two established technologies (confocal microscopy and 3D modeling) adapted specifically for the purpose of mineralized tissue imaging. Through this method, 3D functional masticatory, and therefore occlusal, molar microwear can be visualized, quantified and comparatively analyzed to assess dietary preference in Javanese Homo erectus. This method differs from other microwear investigative techniques (defining 'pits' vs. 'scratches', microtexture analysis, etc.) in that it defines a molar's masticatory microwear functional interactions in 3 dimensions as its baseline dataset for further interpretations and analyses. Due to poor specimen collection techniques employed during the first half of the 20th century, the very complex geologic nature of the Sangiran Dome and disagreements over its chronostratigraphy, only very few scientific works have addressed the Sangiran 7 (S7) Homo erectus molar collection (n=25) (e.g. Grine and Franzen, 1994; Kaifu, 2006).
Grine and Franzen's (1994) work was a predominantly qualitative initial assessment of the specimens and identified five specimens that might better be ascribed to a fossil pongid rather than H. erectus. They also noted several molars for which tooth position (M1 or M2) could not be ascribed (Grine and Franzen, 1994). Kaifu (2006) comparatively examined crown sizes in several S7 molars.
The Sangiran 7 collection originates from two distinct geologic horizons: ten from the older Sangiran Formation (S7a, ~1.7 to 1.0 mya) and fifteen from the younger, overlying Bapang Formation (S7b, ~1.0 to 0.7 mya). During this million-year period, Java was connected to the mainland during various glacio-eustatic low-stands in sea level. These mainland connections varied in size, extent and climatic condition, and therefore in faunal and floral composition. As the S7 sample may be representative of the earliest Homo erectus migrants into Java and spans long durations of occupation, its investigation yields potential to understand the various influences climatic and ecogeographic fluctuations had on these populations. Since the sample consists only of teeth, an ecodietary approach has been deemed the most logical and appropriate investigative approach. Questions regarding intra- and inter-sample relationships within S7 will also be addressed.
By comparing various aspects of the H. erectus dentition against those of hunter/gatherers (H/G) whose diets are known, functional dietary similarity can be directly correlated. Thus a comparative molar sample consisting of the following historic hunter/gatherers (n=63) has been included in order to assess H. erectus's diet in ecological context: Inuit (n=9), Pacific Northwest Tribes (n=11), Fuegians (n=11), Australian Aborigines (n=12) and Bushmen (n=20). Methodologically, this approach produces a 3D facet microwear vector (fmv) signature for each molar, which can then be compared for statistical similarity.
Microwear (and, as such, the fmv signatures) was defined by the regular, parallel striations found on specific cusp facets known to arise from patterned, directional masticatory movements. This differs significantly from post-mortem or taphonomic microwear, which produces striations at irregular angles on multiple, non-masticatory surfaces (Peuch et al., 1985; Teaford, 1988). A 'match value' is produced to determine the similarity of two molars' fmv signatures. The match values are ranked (high to low) and these rankings are used to statistically analyze and infer dietary preference: between Sangiran 7 (as an entire sample) and the historic hunter/gatherer H. sapiens whose diet and ecogeography are known; within S7a and S7b and between them (e.g. S7a vs. S7b); whether the purported Pongo molars actually affiliate well with H. erectus or the hunter/gatherers, or whether they demonstrate distinctly different fmv signatures altogether; and whether fmv signatures are useful in distinguishing molars whose tooth position is in doubt (e.g. M1 or M2).
When compared against individual H/G molars, the results show that Sangiran 7 H. erectus most closely correlates with Bushmen across all areas of fmv signature analysis. However, within broader dietary categories (yearly reliant on proteinaceous foods; seasonally reliant on proteinaceous foods; not reliant on proteinaceous foods), H. erectus allied most closely with the two hunter/gatherer subpopulations in the 'seasonally reliant on proteinaceous foods' category (Australian Aborigines and Pacific Northwest Tribes). There was also evidence for dietary change or specialization over time. As the environment changed during occupation from the earlier Sangiran to the later Bapang individuals, dietary preference shifted from a focus on vegetative foods to a diet much more inclusive of proteinaceous resources.
These results are considered logical within the larger ecogeographic and chronostratigraphic context of the Sangiran Dome during the Pleistocene. However, a larger sample would be needed to confirm this. Although general dietary preferences can be drawn from this method, it is not possible at present to define specific foods consumed on a daily basis (e.g. tubers or tortoise meat).
Out of the five specimens possibly allied with Pongo, S7-14 matched at the 'high' designation with a hunter/gatherer, S7-62 matched 'moderately', S7-20 matched 'low', while the remaining two could not be matched with any other teeth for various reasons. Although a designation as Pongo cannot be ruled on at this time using this method, it does demonstrate that at least two of the teeth correlate well with various hunter/gatherers who do not share dietary similarity with Pongo. This suggests their designation as Pongo should be more closely reevaluated. As for the four specimens whose tooth position was unsure, S7-14 matched 'highly' with 1st molars, S7-62 and S7-78 matched 'moderately' with 2nd and 1st molars respectively, while S7-20 only matched at the 'low' designation. Although this approach is still exploratory, it adds another analytical tool for use in defining tooth position.
In sum, this method has demonstrated its usefulness in defining and functionally analyzing a novel 3D molar microwear dataset to interpret dietary preference. Future work would include a pan-H. erectus molar sample in order to illuminate broader populational, taxonomic and dietary correlations within and among all H. erectus specimens. A larger, more heterogeneous historic H/G sample would also be included in order to provide a wider dietary comparative population. This method can be further extended to include and compare any and all hominins, as well as any organism which produces microwear upon its molars. Also, the data obtained and the resultant fmv signature diagrams have the potential to be incorporated into 3D VR reconstructions of mandibular movement, thus recreating mastication in extinct organisms and leading to more robust anatomical and physiological investigations, especially when viewed in the context of larger environmental conditions or changes.
The Earth’s surface condition we find today is the result of long exposure to the metabolism of life forms. In particular, molecular oxygen in the atmosphere is a feature which developed over time. The first substantial and lasting rise of the atmospheric oxygen level happened ≈ 2.5 Ga ago, but localities are reported where transiently elevated oxygen levels appeared before this time point. Tracing the timing and circumstances of the earliest availability of free oxygen in the atmosphere is important for understanding the habitats of early microbial life forms on Earth.
This thesis aims to obtain information on oxygen levels and the related atmospheric cycling of metals in sediments of the 3.5 to 3.2 Ga Barberton Greenstone Belt. First, as iron was a ubiquitous constituent of Archean seawater, I investigated its isotopic composition in minerals of chemical sediments. In doing so, I tried to resolve the changes within the water basin on the small scale of sedimentary sequence cycles. Second, I focused on the minor constituents of Archean seawater. The Re-Os geochronologic system and the abundance patterns of the platinum-group elements were chosen to integrate information on oxygen-promoted weathering of a large source area. To integrate information over a large time interval, the isotopes of uranium were investigated over a large stratigraphic section.
The two key findings of this thesis are:
• Quantitative oxidation of ferrous iron in surface layers of Paleoarchean seawater occurred during the onset and termination of hydrothermal FeIIaq delivery into shallow waters.
• Paleoarchean sedimentary successions of the Barberton Greenstone Belt lack any evidence of transient basin-scale oxygenation.
The Manzimnyama Iron Formation (IF, Fig Tree Group, Barberton Greenstone Belt, South Africa) consists of cyclic stacks of lithostratigraphic units with varying amounts of iron oxide and carbonate minerals. In-situ femtosecond laser-ablation ICP-MS iron isotope measurements showed that the majority of siderite (δ56Fe ≈ −0.5 ‰) precipitated directly from seawater of δ56Fe ≈ 0 ‰. Ferric iron from the surface layers is preserved in ≤ 1 μm hematite and in magnetite that has grown within the consolidated sediment. During FeIIaq events, fine-grained hematite (δ56Fe ≈ 2.2 ‰) and magnetite (δ56Fe ≈ 0.5 to 0.8 ‰) indicate oxygen levels in surface waters of lower than 0.0002 μM. Upon onset and termination of iron oxide abundance, magnetite with δ56Fe ≈ 0 ‰ indicates that low concentrations of FeIIaq in surface waters were oxidized quantitatively. These observations demonstrate the existence of iron oxidation in Paleoarchean surface waters independent of FeIIaq concentration. This is the first investigation of a Paleoarchean IF showing that lithostratigraphic cyclicity can be traced in the iron isotopic composition of oxide minerals.
ID-ICP-MS measurements of Re, Ir, Ru, Pt and Pd, trace element (SF-ICP-MS) and ID-MC-ICP-MS uranium isotope determinations have been applied to carbonaceous shale of the Mapepe Fm. (Fig Tree Group) after inverse aqua regia leaching and bulk digestion. The sediments reveal a silicified fraction which exhibits a seawater REE signature and a mixture of detrital and meteoritic PGE. Neither enrichment of the redox-sensitive elements Re or Mo nor fractionated uranium isotopes have been found over a stratigraphic interval of several hundred meters. The non-silica fraction shows no depletion of Re, which indicates that the detrital material had no contact with oxidizing fluids. ID-TIMS measurements of Re and Os after the CrO3-SO4 Carius tube method on two sample intervals showed that the Re-Os isotopic systems of the non-silica fractions are identical to two komatiite occurrences. Weltevreden Fm. and Komati Fm. rocks were uplifted, eroded and transported to the deep part of the sedimentary basin without any change to the Re-Os system. Negatively fractionated uranium isotopes (δ238U = −0.41 ± 0.01 ‰) associated with detrital Ba-Cr-U occurrences suggest the existence of distal redox processes involving uranium species. This study demonstrates that over the time of exposure and deposition of the Mapepe Fm., free oxygen was not available for weathering in the catchment area.
A multiple filter test for the detection of rate changes in renewal processes with varying variance
(2014)
The thesis provides novel procedures in the statistical field of change point detection in time series.
Motivated by a variety of neuronal spike train patterns, a broad stochastic point process model is introduced. This model features points in time (change points) where the associated event rate changes. For purposes of change point detection, filtered derivative processes (MOSUM) are studied. Functional limit theorems for the filtered derivative processes are derived. These results are used to support novel procedures for change point detection; in particular, multiple filters (bandwidths) are applied simultaneously in order to detect change points on different time scales.
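The filtered-derivative idea can be sketched numerically: slide two adjacent windows of bandwidth h along a simulated spike train and compare their event counts. The rates, bandwidth and the simple Poisson-type normalization below are illustrative simplifications of the variance-adjusted statistic developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
# simulated spike train on [0, 200] with a rate change at t = 100
spikes = np.sort(np.concatenate([
    rng.uniform(0, 100, rng.poisson(1.0 * 100)),    # rate 1 before
    rng.uniform(100, 200, rng.poisson(4.0 * 100)),  # rate 4 after
]))

def mosum(spikes, t, h):
    """Filtered-derivative statistic at time t: difference between the
    event counts in the right window (t, t+h] and the left window
    (t-h, t], scaled by a simple Poisson-type standard deviation."""
    left = np.searchsorted(spikes, t) - np.searchsorted(spikes, t - h)
    right = np.searchsorted(spikes, t + h) - np.searchsorted(spikes, t)
    return (right - left) / np.sqrt(max(right + left, 1))

h = 20.0
grid = np.arange(h, 200 - h, 0.5)
stat = np.array([mosum(spikes, t, h) for t in grid])
t_hat = grid[np.argmax(np.abs(stat))]   # estimated change point
```

A small h localizes abrupt rate changes but is noisy, while a large h is stable but blurs nearby change points; running several bandwidths simultaneously, as the thesis proposes, combines both strengths.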
In light of the global sea-level rise and climate change of the 21st century, it is important to look back into the recent past in order to understand what the future might hold. A multi-proxy data set was compiled to evaluate the influence of geomorphological and environmental factors, such as antecedent topography, subsidence, sea level and climate, on reef, sand apron and lagoon development in modern carbonate platforms through the Holocene. To this end, remote sensing and morphological data from 122 modern carbonate platforms and atolls in the Atlantic, Indian and Pacific Oceans were combined with a case study from the oceanic (Darwinian) barrier-reef system of Bora Bora, French Polynesia, South Pacific.
The influence of antecedent topography and platform size as factors controlling Holocene sand apron development and extension in modern atolls and carbonate platforms is hypothesized. Antecedent topography describes the elevation and relief of the underlying Pleistocene topography (karst) and determines the distance from the sea floor to the rising postglacial sea level. Maximum lagoon depth and marginal reef thickness, when available in the literature, were used as proxies for antecedent topography. Sand apron proportions of 122 atolls and carbonate platforms from the Atlantic, Indian and Pacific Oceans were quantified and correlated to maximum lagoon depth, total platform area and marginal reef thickness. This study shows that sand apron proportions increase with decreasing lagoon depths. Sand apron proportions also increase with decreasing platform area. The interaction of antecedent topography and Holocene sea-level rise is responsible for variations in accommodation space and ultimately determines the extent of the lateral expansion of sand aprons. In general, sand apron formation started when marginal reefs approached relative sea level. Spatial and regional variations in sea-level history caused sand apron formation to start earlier in the Indo-Pacific region (transgressive-regressive) than in the Western Atlantic Ocean (transgressive).
The influence of sea level, antecedent topography and subsidence of a volcanic island on late Quaternary reef development was evaluated based on six rotary core transects on the barrier and fringing reefs of Bora Bora. This study was designed to re-evaluate the Darwinian model, the subsidence theory of reef development, which genetically connects fringing reef, barrier reef and atoll development by continuous subsidence of the volcanic basement. Postglacial sea-level rise, and to a minor degree subsidence, were identified as major factors controlling Holocene reef development in that they created accommodation space and controlled reef architecture. Antecedent topography was also an important factor because the Holocene barrier reef is located on a Pleistocene barrier reef forming a topographic high. Pleistocene soil and basalt formed the pedestal of the fringing reef. Uranium-thorium dating shows that barrier and fringing reefs developed contemporaneously during the Holocene.
In the barrier–reef lagoon of Bora Bora, the influence of environmental factors, such as sea level and climate, tsunamis and tropical cyclones, on Holocene sediment dynamics was evaluated based on sedimentological, paleontological, geochronological and geochemical data. The lagoonal succession comprises mixed carbonate-siliciclastic sediments overlying peat and Pleistocene soil. The multi-proxy data set shows variations in grain size, total organic carbon (proxy for primary productivity), and Ca and Cl element intensities (proxies for carbonate availability and lagoonal salinity) during the mid-late Holocene. These patterns could result from event sedimentation during storms and correlate to event deposits found in nearby Tahaa, probably induced by elevated cyclone activity. Accordingly, elevated erosion and runoff from the volcanic island and lower lagoonal salinity would be a result of rainfall during repeated cyclone landfall. However, Ti/Ca and Fe/Ca ratios as proxies for terrigenous sediment delivery peaked in the early Holocene and have declined since the mid-Holocene. Benthic foraminifera assemblages do not indicate reef-to-lagoon transport. Alternatively, higher and sustained hydrodynamic energy was probably induced by stronger trade winds and a higher-than-present sea level during the mid-late Holocene. The increase in mid-late Holocene sediment dynamics within the back-reef lagoon is interpreted to reflect sediment-load shedding of sand aprons, due to the oversteepening of slopes at sand-apron/lagoon edges during their progradation, rather than an increase in tropical storm activity during that time.
The influence of sea-level and climate changes on sediment import, composition and distribution in the Bora Bora lagoon during the Holocene is evaluated. The lagoonal facies succession comprises siderite-rich marly wackestones, foraminifera-siderite wackestones, mollusk-foraminifera marly packstones and mollusk-rich wackestones during the early-mid Holocene, and mudstones since the mid-late Holocene. During the early Holocene, enhanced weathering and iron input from the volcanic island due to wetter climate conditions led to the formation of siderite within the lagoonal sediments. The geochemical composition of these siderites shows that precipitation was driven by microbial activity and iron reduction in the presence of dissolved bicarbonate. Chemical substitutions at grain margins illustrate changes in the oxidation state and probably reflect changes in pore-water chemistry due to sea-level rise and climate change (rainfall). In the late Holocene, sediment transport into the lagoon was hampered by motus on the windward side of the lagoon, which led to early submarine lithification within the lagoon.
How the brain evolved remains a mystery. The goal of this thesis is to understand the fundamental processes behind the evolutionary history of the brain. Amniotes appeared 320 million years ago with the transition from water to land. This early group bifurcated into sauropsids (reptiles and birds) and synapsids (mammals). Amniote brains evolved separately and display obvious structural and functional differences. Although those differences reflect brain diversification, all amniote brains share a common ancestor, and their brains show multiple derived similarities: equivalent structures, networks, circuits and cell types have been preserved over millions of years. Finding these differences and similarities will help us understand the brain's evolutionary history and function. Studying brain evolution can be approached at various levels, including brain structure, circuits, cell types, and genes. We propose a focus on cell types for a more comprehensive understanding of brain evolution. Neurons are the basic building blocks and the most diverse cell types of the brain. Their evolution reflects changes in the developmental processes that produce them, which in turn may shape the neural circuits they belong to. However, there are currently no unified criteria for studying the homology of connectivity and development between neurons. A neuron’s transcriptome is a molecular representation of its identity, connectivity, and developmental/evolutionary history. Hence the comparison of neuronal transcriptomes within and across species is a new and transformative development in the study of brain evolution. We propose that comparing transcriptomes can fill this gap and unify these criteria.
In previous studies, published in Science (Tosches et al., 2018) and Nature (Norimoto et al., 2020), we leveraged scRNAseq in reptiles to re-evaluate the origins and evolution of the mammalian cerebral cortex and claustrum. Motivated by the success of this approach, in this thesis we have now expanded single-cell profiling to the entire brain of a lizard species, the Australian dragon Pogona vitticeps, with a special focus on the thalamus and prethalamus. This approach allowed us to study the evolution of neuron types in amniotes. To this end, we aimed to build a multilevel atlas of the lizard brain based on histology and transcriptomics, and to compare it to a comparable mouse dataset (Zeisel et al., 2018).
Our atlas reveals a general structure that is consistent with that of other amniote brains, allowing us to make a direct comparison between lizard and mouse despite their evolutionary divergence 320 million years ago. Through our analysis of the transcriptomes present in various neuron types, we have uncovered a core of conserved classes and discovered a fascinating dichotomy of new and conserved neuron types throughout the brain. This research challenges the traditional notion that certain brain regions are more conserved than others.
Our research has also uncovered the evolutionary history of the lizard thalamus and prethalamus by comparing them to the homologous brain regions of the mouse. This pioneering research sheds new light on our understanding of the evolutionary history of the lizard brain. We propose a new classification of the lizard thalamic nuclei based on transcriptomics. Our research revealed that the thalamic neuron types in lizards can be grouped into two large, conserved categories from the medial to the lateral thalamus. These categories are encoded by a common set of effector genes, linking theories based on connectivity with molecular studies of these areas. In our data we have seen a conservation of the medial-lateral transcriptomic axis in mouse and lizard; this conservation was most likely already present in the common ancestor. Although there is a shared medial-lateral axis, a deeper study of the thalamic cell types revealed a partial diversification of the thalamic population, specifically in the sensory-related lateral thalamus; by contrast, the neuron types of the medial thalamic nuclei have been preserved.
On the other hand, the comparison with the mammalian prethalamus allowed us to confirm that the lizard ventromedial thalamic neuron types are homologous to mouse reticular thalamic neuron types (Díaz et al., 1994), even if they do not express the classical reticular thalamic nucleus (RTn) marker PV/pvalb. We also discovered that there has been a simplification of the mammalian prethalamic neuron types in favor of an increase in the number of interneuron (IN) types within the thalamus. We suggest that the loss of GABAergic neuron types in the mammalian prethalamus is linked to the need for more efficient control of thalamo-pallial communication in mammals, whereas in lizards, where thalamo-pallial communication is probably simpler, the prethalamus presents a higher diversity.
The aim of this work is to develop an effective equation of state (EoS) for QCD, having the correct asymptotic degrees of freedom, to be used as input for dynamical studies of heavy-ion collisions. We present an approach for modeling an EoS that respects the symmetries underlying QCD and includes the correct asymptotic degrees of freedom, i.e. quarks and gluons at high temperature and hadrons in the low-temperature limit. We achieve this by including quark degrees of freedom and the thermal contribution of the Polyakov loop in a hadronic chiral sigma-omega model. The hadronic part of the model is a nonlinear realization of a sigma-omega model. As the fundamental symmetries of QCD should also be present in its hadronic states, such an approach is widely used to describe hadron properties below and around Tc. The quarks are introduced as thermal quasiparticles coupling to the Polyakov loop, while the dynamics of the Polyakov loop are controlled by a potential term fitted to reproduce pure-gauge lattice data. In this model the sigma field serves as the order parameter for chiral restoration and the Polyakov loop as the order parameter for deconfinement. The hadrons are suppressed at high densities by excluded-volume corrections. As a next step, we introduce our new HQ-model equation of state in a microscopic+macroscopic hybrid approach to heavy-ion collisions. This hybrid approach is based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The present implementation allows one to compare pure microscopic transport calculations with hydrodynamic calculations using exactly the same initial conditions and freeze-out procedure. The effects of the change in the underlying dynamics - ideal fluid dynamics vs. non-equilibrium transport theory - are explored.
The final pion and proton multiplicities are lower in the hybrid-model calculation due to the isentropic hydrodynamic expansion, while the yields of strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic and directed flow are shown to be insensitive to changes in the EoS, while the smaller mean free path in the hydrodynamic evolution translates directly into higher flow results, which are consistent with the experimental data. This finding indicates qualitatively that physical mechanisms such as viscosity and other non-equilibrium effects play a considerably more important role than the EoS when bulk observables like flow are investigated. In the last chapter, results for the thermal production of MEMOs in nucleus-nucleus collisions from a combined micro+macro approach are presented. Multiplicities, rapidity and transverse momentum spectra are predicted for Pb+Pb interactions at different beam energies. The presented excitation functions for various MEMO multiplicities show a clear maximum in the upper FAIR energy regime, making this facility the ideal place to study the production of these exotic forms of multistrange objects.
Synchronized neural activity in the visual cortex is associated with small time delays (up to ~10 ms). The magnitude and direction of these delays depend on stimulus properties. Thus, synchronized neurons produce fast sequences of action potentials, and the order in which units tend to fire within these sequences is stimulus-dependent, but not stimulus-locked. In the present thesis, I investigated whether such preferred firing sequences repeat with sufficient accuracy to serve as a neuronal code. To this end, I developed a method for extracting the preferred sequence of firing in a group of neurons from their pair-wise preferred delays, as measured by the offsets of the centre peaks in their cross-correlation histograms. This analysis method was then applied to highly parallel recordings of neuronal spiking activity made in area 17 of anaesthetized cats in response to simple visual stimuli, like drifting gratings and moving bars. Using a measure of effect size, I then analyzed the accuracy with which preferred firing sequences reflected stimulus properties, and found that in the presence of gamma oscillations, the time at which a unit fired in the firing sequence conveyed stimulus information almost as precisely as the firing rate of the same unit. Moreover, the stimulus-dependent changes in firing rates and firing times were largely unrelated, suggesting that the information they carry is not redundant. Thus, despite operating at a time scale of only a few milliseconds, firing sequences have the strong potential to provide a precise neural code that can complement firing rates in the cortical processing of stimulus information.
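The rank-ordering idea behind the sequence-extraction step can be illustrated with a minimal sketch (hypothetical names, not the analysis code used in the thesis): given an antisymmetric matrix of pairwise preferred delays, such as the offsets of the centre peaks of cross-correlation histograms, a consistent preferred firing order can be recovered by ranking units according to their mean lag relative to all other units.

```python
import numpy as np

def preferred_sequence(delays):
    """Recover a preferred firing order from pairwise preferred delays.

    delays[i, j] is the preferred delay of unit j relative to unit i
    (positive means j tends to fire after i), e.g. taken from the offset
    of the centre peak of the units' cross-correlation histogram.
    The matrix is antisymmetric: delays[j, i] == -delays[i, j].
    """
    mean_lag = delays.mean(axis=0)   # average lag of each unit vs. all others
    return np.argsort(mean_lag)      # indices of units, earliest-firing first
```

For perfectly consistent delays this ranking reproduces the underlying order exactly; with noisy correlogram offsets it acts as a least-squares-like compromise across all pairs.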
This thesis examines the literary output of German servicemen writers writing from the occupied territories of Europe in the period 1940-1944. Whereas literary-biographical studies and appraisals of the more significant individual writers have been written, as has a collective assessment of the Eastern-front writers, this thesis additionally addresses the German literary responses in France and Greece, which were then theatres of particular cultural and ideological attention. Original papers of the writer Felix Hartlaub were consulted by the author at the Deutsches Literatur Archiv (DLA) at Marbach. Original imprints of the wartime works of the subject writers are referred to throughout, and citations are from these. As all the published works were written under conditions of wartime censorship and, even where unpublished, were written in oblique terms for fear of discovery, the texts were here examined for subliminal authorial intention. The critical focus of the thesis is on literary quality: on aesthetic niveau, on applied literary form, and on integrity of authorial intention. The thesis sought to discover: (1) the extent of the literary output in book-length forms; (2) the auspices and conditions under which this literary output was produced; (3) the publication history and critical reception of the output. The thesis took into account, inter alia: (1) occupation policy as it pertained locally to the writers’ remit; (2) the ethical implications of this for the writers; (3) the writers’ literary stratagems for negotiating the constraints of censorship.
In literary translation 'correctness' is rarely ratified by linguistic rules; it is more often a question of what a sensitive translator feels to be correct. Intuition will therefore play a major part. This intuition is seen here neither as instinctive reaction prompted by experience, nor as native competence, but as an inquiring, self-moderating influence inspired by the language itself. It is treated in this respect as an informed intuition, that is, as having a linguistic base for sensitive judgement. This assumes that the literary translator is both a creative writer and his own critical reader as well as a fine judge of language potential. This approach is applied to translating meaning and sense, transferring the very language, imitating the form and style, re-creating the features, and above all, to capturing those unique qualities of the original. After dealing with word-accuracy, the question of literary input demanded by form and style is examined. The treatment of language used for effect features in a section on Kafka. The merits and the problems of translating dialect as dialect for its own sake are looked at closely and in a positive way, as are the possibilities of reproducing 'oddities' of language. The immense task of translating the language of Joyce ('Ulysses') with all its vagaries and skilful manipulation of words is examined for the possibility of providing an accurate copy. The ultimate test of reproducing a uniqueness of artistic creation together with the profound thought which inspired it, is reserved for a section on Hopkins. While it is recognized that, owing to the constrictions imposed by the extreme and sensitive use of language, no translation can fully include all that there is in his poems, it might be possible to capture enough of their essence to give an impression of a 'German' Hopkins at work. A major objective throughout is the establishment of a linguistic base for the part played by intuition in literary translation.
Spin waves in yttrium-iron garnet have been the subject of research for decades. Recently, the report of Bose-Einstein condensation at room temperature has brought these experiments back into focus. Because quasiparticles have a much smaller mass than, for example, atoms, the condensation temperature can be much higher. With spin-wave quasiparticles, so-called magnons, even room temperature can be reached by externally injecting magnons. Possible applications in information technology are also of interest: using excitations instead of charges as carriers of information offers a much more efficient way of processing data, and basic logical operations have already been realized. Finally, the wavelength of spin waves, which can be reduced to the nanoscale, offers the opportunity to further miniaturize devices for receiving signals, for example in smartphones.
For all of these purposes the magnon system is driven far out of equilibrium. In order to gain a better fundamental understanding, we concentrate in the main part of this thesis on the nonequilibrium aspect of magnon experiments and investigate their thermalization process. In this context we develop formalisms which are of general interest and which can be adapted to many different kinds of systems.
A milestone in describing gases out of equilibrium was the Boltzmann equation, derived by Ludwig Boltzmann in 1872. In this thesis, extensions to the Boltzmann equation with improved approximations are derived. For the application to yttrium-iron garnet, we describe the thermalization process after magnons have been excited by an external microwave field.
First we consider the Bose-Einstein condensation phenomenon. A special property of thin films of yttrium-iron garnet is that the magnon dispersion has its minimum at finite wave vectors, which leads to an interesting behavior of the condensate. We investigate the spatial structure of the condensate using the Gross-Pitaevskii equation and find that the magnons cannot condense at the energy minimum alone; higher Fourier modes must also be macroscopically occupied. In principle this can lead to a localization on a lattice in real space.
Next we use functional renormalization group methods to go beyond the perturbation-theory expressions in the Boltzmann equation. It is a difficult task to find a suitable cutoff scheme that fits the constraints of nonequilibrium, namely causality and the fluctuation-dissipation theorem when approaching equilibrium. The cutoff scheme we developed for bosons in this context is therefore of general interest for the functional renormalization group. In certain approximations we obtain a system of differential equations with a transition-rate structure similar to that of the Boltzmann equation. We consider a model of two kinds of free bosons in which one type of boson acts as a thermal bath for the other. Taking a suitable initial state, we can use our formalism to describe the dynamics of magnons such that an enhanced occupation of the ground state is achieved. Numerical results are in good agreement with experimental data.
Finally we extend our model to include the pumping process and the decrease of the magnon particle number until thermal equilibrium is reached again. Additional terms which explicitly break the U(1) symmetry make it necessary to also extend the theory from which a kinetic equation can be deduced. These extensions are complicated, and we therefore restrict ourselves to perturbation theory. Because of the weak interactions in yttrium-iron garnet, this already yields good results.
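The thermalization process that such kinetic equations describe can be caricatured in a minimal sketch (a relaxation-time, BGK-type approximation, far simpler than the collision integrals derived in the thesis; all names are illustrative): a pumped magnon occupation relaxes toward the equilibrium Bose-Einstein distribution.

```python
import math

def bose_einstein(eps, T):
    # equilibrium Bose-Einstein occupation n(eps) = 1/(exp(eps/T) - 1), k_B = 1
    return 1.0 / math.expm1(eps / T)

def relax(n, n_eq, tau, dt, steps):
    # relaxation-time caricature of a Boltzmann collision term:
    #   dn_k/dt = -(n_k - n_eq_k) / tau
    # integrated with an explicit Euler step
    for _ in range(steps):
        n = [nk + dt * (ek - nk) / tau for nk, ek in zip(n, n_eq)]
    return n
```

Starting from an over-occupied (pumped) distribution, each mode decays exponentially toward its equilibrium value on the time scale tau; the full theory replaces the single relaxation time by mode-dependent transition rates.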
A graph theoretical approach to the analysis, comparison, and enumeration of crystal structures
(2008)
As an alternative approach to lattices and space groups, this work explores graph theory as a means to model crystal structures. The approach uses quotient graphs and nets - the graph-theoretical equivalents of cells and lattices - to represent crystal structures. After a short review of related work, new classes of cycles in nets are introduced, and their ability to distinguish between non-isomorphic nets as well as their computational complexity are evaluated. Then, two methods to estimate a structure’s density from the corresponding net are proposed. The first uses coordination sequences to estimate the number of nodes in a sphere, whereas the second determines the maximal volume of a unit cell. Based on the quotient graph alone, methods are proposed to determine whether nets consist of islands, chains, planes, or penetrating, disconnected sub-nets. An algorithm for the enumeration of crystal structures is revised and extended to a search for structures possessing certain properties. Particular attention is given to the exclusion of redundant nets and of those which, by the nature of their connectivity, cannot correspond to a crystal structure. Nets with four four-coordinated nodes, corresponding to sp3-hybridised carbon polymorphs with four atoms per unit cell, are completely enumerated in order to demonstrate the approach. In order to render quotient graphs and nets independent from crystal structures, they are reintroduced in a purely graph-theoretical way. Based on this, the issue of iso- and automorphism of nets is reexamined. It is shown that the topology of a net (that is, the bonds in a crystal) severely constrains the symmetry of the embedding (that is, the crystal), and in the case of connected nets determines the space group up to the setting. Several examples are studied and conclusions on phases are drawn (pseudo-cubic FeS2 versus pyrite; α- versus β-quartz; marcasite- versus rutile-like phases).
As the automorphisms of certain quotient graphs stipulate a translational symmetry higher than an arbitrary embedding of the corresponding net would show, they are examined in more detail, and a method to reduce the size of such quotient graphs is proposed. Besides two instructional examples with 2-dimensional graphs, the halite, calcite, magnesite and barytocalcite structures and a strontium feldspar structure are discussed. For some of the structures it is shown that the quotient graph equivalent to a centred cell is reduced to a quotient graph equivalent to the primitive cell. For the partially disordered strontium feldspar, it is shown that even if it could be annealed to an ordered structure, the unit cell would likely remain unchanged. For the calcite and barytocalcite structures it is shown that the equivalent nets are not isomorphic.
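The density estimate via coordination sequences mentioned above can be illustrated with a small sketch (not the thesis's implementation; the names and input format are illustrative): a breadth-first search over (vertex, unit-cell) pairs of the periodic net, driven entirely by the quotient graph whose edges carry lattice-translation "voltages".

```python
from collections import defaultdict

def coordination_sequence(edges, start, shells):
    """Coordination sequence of a periodic net given by its quotient graph.

    edges: list of (u, v, voltage), where the voltage is the lattice
    translation (a tuple) picked up when traversing the edge from u to v.
    Returns the number of net vertices in each of the first `shells`
    shells around a vertex of orbit `start`.
    """
    dim = len(edges[0][2])
    adj = defaultdict(list)
    for u, v, t in edges:
        adj[u].append((v, t))
        adj[v].append((u, tuple(-x for x in t)))  # reverse edge, negated voltage
    origin = (start, (0,) * dim)
    seen = {origin}
    frontier = [origin]
    cs = []
    for _ in range(shells):
        nxt = []
        for node, cell in frontier:
            for v, t in adj[node]:
                s = (v, tuple(c + x for c, x in zip(cell, t)))
                if s not in seen:
                    seen.add(s)
                    nxt.append(s)
        cs.append(len(nxt))
        frontier = nxt
    return cs
```

For the square-lattice net (one vertex, two loop edges with voltages (1,0) and (0,1)) this yields the coordination sequence 4, 8, 12, 16, ...; the growth rate of such sequences is what the density estimate exploits.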
‘The whole is more than the sum of its parts.’ This idea has been brought forward by psychologists such as Max Wertheimer who formulated Gestalt laws that describe our perception. One law is that of collinearity: elements that correspond in their local orientation to their global axis of alignment form a collinear line, compared to a noncollinear line where local and global orientations are orthogonal. Psychophysical studies revealed a perceptual advantage for collinear over non-collinear stimulus context. It was suggested that this behavioral finding could be related to underlying neuronal mechanisms already in the primary visual cortex (V1). Studies have shown that neurons in V1 are linked according to a common fate: cells responding to collinearly aligned contours are predominantly interconnected by anisotropic long-range lateral connections. In the cat, the same holds true for visual interhemispheric connections. In the present study we aimed to test how the perceptual advantage of a collinear line is reflected in the anatomical properties within or between the two primary visual cortices. We applied two neurophysiological methods, electrode and optical recording, and reversibly deactivated the topographically corresponding contralateral region by cooling in eight anesthetized cats. In electrophysiology experiments our results revealed that influences by stimulus context significantly depend on a unit’s orientation preference. Vertical preferring units had on average a higher spike rate for collinear over non-collinear context. Horizontal preferring units showed the opposite result. Optical imaging experiments confirmed these findings for cortical areas assigned to vertical orientation preference. Further, when deactivating the contralateral region the spike rate for horizontal preferring units in the intact hemisphere significantly decreased in response to a collinear stimulus context. 
Most of the optical imaging experiments revealed a decrease in cortical activity in response to either stimulus context crossing the vertical midline. In conclusion, our results support the notion that modulating influences from stimulus context can be quite variable. We suggest that the kind of influence may depend on a cell’s orientation preference. The perceptual advantage of a collinear line as one of the Gestalt laws proposes is not uniformly represented in the activity of individual cells in V1. However, it is likely that the combined activity of many V1 neurons serves to activate neurons further up the processing stream which eventually leads to the perceptual phenomenon.
I derive a general effective theory for hot and/or dense quark matter. After introducing general projection operators for hard and soft quark and gluon degrees of freedom, I explicitly compute the functional integral for the hard quark and gluon modes in the QCD partition function. Upon appropriate choices for the projection operators one recovers various well-known effective theories, such as the Hard Thermal Loop/Hard Dense Loop effective theories as well as the High Density Effective Theory by Hong and Schaefer. I then apply the effective theory to cold and dense quark matter and show how it can be utilized to simplify the weak-coupling solution of the color-superconducting gap equation. In general, one considers as relevant quark degrees of freedom those within a thin layer of width 2 Lambda_q around the Fermi surface, and as relevant gluon degrees of freedom those with 3-momenta less than Lambda_gl. It turns out that it is necessary to choose Lambda_q << Lambda_gl, i.e., scattering of quarks along the Fermi surface is the dominant process. Moreover, this special choice of the two cutoff parameters Lambda_q and Lambda_gl facilitates the power-counting of the numerous contributions in the gap equation. In addition, it is demonstrated that both the energy and the momentum dependence of the gap function have to be treated self-consistently in order to determine the imaginary part of the gap function. For quarks close to the Fermi surface the imaginary part is calculated explicitly and shown to be of sub-subleading order in the gap equation.
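For orientation, the weak-coupling result that such a power-counting scheme organizes is the well-known parametric form of the color-superconducting gap (a standard literature result, quoted here schematically for context, not a derivation from this abstract):

```latex
\Delta \;\sim\; \mu\, g^{-5} \exp\!\left( -\frac{3\pi^{2}}{\sqrt{2}\, g} \right)
```

The non-BCS exponent proportional to 1/g (rather than 1/g^2) originates from long-range magnetic gluon exchange, i.e., precisely the nearly collinear scattering along the Fermi surface whose dominance motivates the choice Lambda_q << Lambda_gl.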
This dissertation is devoted to the study of thermodynamics for quantum gauge theories. The poor convergence of quantum field theory at finite temperature has been the main obstacle to practical applications of thermal QCD for decades. In this dissertation I apply hard-thermal-loop perturbation theory (HTLpt), a gauge-invariant reorganization of the conventional perturbative expansion for quantum gauge theories, to the thermodynamics of QED and Yang-Mills theory to three-loop order. For the Abelian case, I present a calculation of the free energy of a hot gas of electrons and photons by expanding in a power series in mD/T, mf/T and e^2, where mD and mf are the photon and electron thermal masses, respectively, and e is the coupling constant. I demonstrate that the hard-thermal-loop reorganization improves the convergence of the successive approximations to the QED free energy at large coupling, e ~ 2. For the non-Abelian case, I present a calculation of the free energy of a hot gas of gluons by expanding in a power series in mD/T and g^2, where mD is the gluon thermal mass and g is the coupling constant. I show that at three-loop order hard-thermal-loop perturbation theory is compatible with lattice results for the pressure, energy density, and entropy down to temperatures T ~ 2-3 Tc. The results suggest that HTLpt provides a systematic framework for calculating static and dynamic quantities at temperatures relevant to the LHC.
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Terahertz (THz) waves could not be obtained except with huge systems, such as free-electron lasers, until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1]. The first method, using the Auston switch, could generate up to 1 THz [2]. Subsequent efforts to extend the frequency limit with combined antennas for generation and detection reached several THz [3, 4]. This technique has since developed so as to fill the so-called 'THz gap'. At the same time, much research has also aimed at increasing the output power [5-7]. In the 1990s, a major advance in frequency coverage was brought by non-linear optical methods [8-11], which drastically expanded the accessible frequency region and recently enabled measurements up to 41 THz [12]. Meanwhile, other approaches have yielded new generation and detection methods, for CW THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, though only at a low temperature of 60 K [20-22]. This research attracts much attention because, given its low cost and easier operation, it could be a breakthrough allowing the THz technique to spread into industry as well as research. The development of the THz field has naturally been helped by short-pulse laser technology: against the background of the appearance of stable Ti:sapphire lasers and high-power chirped-pulse amplification (CPA) lasers in place of dye lasers, much work has concentrated on pulse compression and amplification techniques [23]. Viewed from the application side, the THz technique has come into the limelight as a promising measurement method.
The discovery of absorption peaks of proteins and DNA in the THz region has, over the past several years, been promoting the technique towards practical use in medicine and pharmaceutical science [24-27]. It is also known that absorption lines of light polar molecules exist in this region; therefore, gas and water-content monitoring has been proposed for the chemical and food industries [28-32]. Furthermore, many reports, such as measurements of carrier distributions in semiconductors, the refractive index of thin films, and object shapes as radar, indicate that this technique has a wide range of applications [33-37]. I believe it is worth the challenge to apply it to the steel-making industry, owing to its unique advantages. The THz wavelength range of 30-300 µm can cope with both independence from the surface roughness of steel products and detection with sub-millimeter precision for remote surface inspection. There is also the possibility of measuring the thickness or dielectric constants of relatively highly conductive materials, thanks to the high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5. Furthermore, it could be applicable to measurements at high temperature, being less influenced by thermal radiation than visible and infrared light. These ideas motivated me to start this THz work.
The fungal interaction with plants is a 400-million-year-old phenomenon, which presumably assisted in the plants' establishment on land. In natural ecosystems, all plants, ranging from large trees to sea-grasses, are colonized by fungal endophytes, which can be detected inter- and intracellularly within the tissues of apparently healthy plants without causing obvious negative effects on their hosts. These ubiquitous and diverse microorganisms likely play important roles in plant fitness and development. However, knowledge of the ecological functions of fungal root endophytes is scarce. Among their possible functions, endophytes are implicated in mutualisms with plants, which may increase plant resistance to biotic stressors such as herbivores and pathogens, and/or to abiotic factors such as soil salinity and drought. Endophytes are also fascinating microorganisms in regard to their high potential to produce a great spectrum of secondary metabolites with expected ecological functions. However, evidence suggests that the interactions between host plants and endophytes are not static and that endophytes express different symbiotic lifestyles ranging from mutualism to parasitism, which makes it difficult to predict the ecological roles of these cryptic microorganisms. To reveal the ecological function of fungal root endophytes, this doctoral thesis aims at assessing the interactions of fungal root endophytes with different plants and their effects on plant fitness, based on their phylogeny, traits, and competition potential in settings encompassing different abiotic contexts. To understand the cryptic implication of non-mycorrhizal endophytes in ecosystem processes, we isolated a diverse spectrum of fungal endophytes from roots of several plant species growing in different natural contexts and tested their effects on different model plants under axenic laboratory conditions.
Additionally, we aimed at investigating the effect of abiotic and biotic variables on the outcome of interactions between fungal root endophytes and plants.
In summary, the morphological and physiological traits of 128 fungal endophyte strains within ten fungal orders were studied, and artificial experimental systems were used to reproduce their interactions with three plant species under laboratory conditions. Under defined axenic conditions, most endophytes behaved as weak parasites, but their performance varied across plant species and fungal taxa. The variation in the interactions was partly explained by convergent fungal traits that separate groups of endophytes with potentially different niche preferences. According to my findings, I predict that the functional complementarity of strains is essential in structuring natural root endophytic communities. Additionally, the responses of plant-endophyte interactions to different abiotic factors, namely nutrient availability, light intensity, and substrate pH, indicate that the outcome of plant-fungus relationships may be robust to changes in the abiotic environment. The assessment of the responses of plant-endophyte interactions to the biotic context, as combinations of selected dominant root fungal endophytes with different degrees of trait similarity and shared evolutionary history, indicates that frequently coexisting root-colonizing fungi may avoid competition in interspecific interactions by occupying specific niches, and that their interactions likely define the structure of root-associated fungal communities and influence the microbiome's impacts on plant fitness.
In conclusion, my findings suggest that dominant fungal lineages display different ecological preferences and complementary sets of functional traits, with different niche preferences within root tissues to avoid competition. Also, their diverse effects on plant fitness are likely host-isolate dependent and robust to changes in the abiotic environment when these encompass the tolerance range of either symbiont.
A framework for the analysis and visualization of multielectrode spike trains / by Ovidiu F. Jurjut
(2009)
The brain is a highly distributed system of constantly interacting neurons. Understanding how it gives rise to our subjective experiences and perceptions depends largely on understanding the neuronal mechanisms of information processing. These mechanisms are still poorly understood, and the timescale on which the coding process evolves remains a matter of ongoing debate. Recently, multielectrode recordings of neuronal activity have begun to contribute substantially to elucidating how information coding is implemented in brain circuits. Unfortunately, the analysis and interpretation of multielectrode data are often difficult because of their complexity and large volume. Here we propose a framework that enables the efficient analysis and visualization of multielectrode spiking data. First, using self-organizing maps, we identified reoccurring multi-neuronal spike patterns that evolve on various timescales. Second, we developed a color-based visualization technique for these patterns: they were mapped onto a three-dimensional color space based on their reciprocal similarities, i.e., similar patterns were assigned similar colors. This representation enables a quick and comprehensive inspection of spiking data and provides a qualitative description of pattern distribution across entire datasets. Third, we quantified the observed pattern expression motifs and investigated their contribution to the encoding of stimulus-related information. An emphasis was placed on the timescale on which patterns evolve, covering the temporal scales from synchrony up to mean firing rate. Using our multi-neuronal analysis framework, we investigated data recorded from the primary visual cortex of anesthetized cats. We found that cortical responses to dynamic stimuli are best described as successions of multi-neuronal activation patterns, i.e., trajectories in a multidimensional pattern space.
Patterns that encode stimulus-specific information are not confined to a single timescale but can span a broad range of timescales, which are tightly related to the temporal dynamics of the stimuli. Therefore, the strict separation between synchrony and mean firing rate is somewhat artificial as these two represent only extreme cases of a continuum of timescales that are expressed in cortical dynamics. Results also indicate that timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (~10-20 ms) appear to play a particularly salient role in coding, as patterns evolving on these timescales seem to be involved in the representation of stimuli with both slow and fast temporal dynamics.
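The similarity-to-color mapping described above can be illustrated with a minimal sketch. The thesis embeds patterns into a 3D color space via their reciprocal similarities; the version below is a deliberately simplified stand-in that uses the similarity to three hypothetical anchor patterns as the R, G, and B channels, so that similar patterns receive similar colors:

```python
# Simplified sketch: color spike patterns so that similar patterns get
# similar colors. Patterns are binary tuples (one entry per time bin);
# the three anchor patterns below are illustrative, not from the thesis.

def similarity(a, b):
    """Fraction of time bins in which two binary spike patterns agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pattern_to_rgb(pattern, anchors):
    """Map a pattern to an (r, g, b) triple in [0, 1]^3: each color
    channel is the pattern's similarity to one anchor pattern."""
    return tuple(similarity(pattern, a) for a in anchors)

anchors = [(1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0)]

p1 = (1, 1, 0, 0)   # identical to the first anchor
p2 = (1, 1, 0, 1)   # differs from p1 in one bin -> nearby color
print(pattern_to_rgb(p1, anchors))   # (1.0, 0.0, 0.5)
print(pattern_to_rgb(p2, anchors))   # (0.75, 0.25, 0.25)
```

A real embedding would preserve all pairwise similarities at once (e.g., via multidimensional scaling), but the anchor projection already shows the key property: nearby patterns land on nearby colors.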
In this work, the flexibility requirements of a highly renewable European electricity network, which has to cover fluctuations of wind and solar power generation on different temporal and spatial scales, are studied. Cost-optimal ways to meet these requirements are analysed, including the optimal distribution of infrastructure, large-scale transmission, storage, and dispatchable generators. To examine these issues, a model of increasing sophistication is built, first considering different flexibility classes of conventional generation, then adding storage, before finally considering transmission, to see the effects of each.
To conclude, this work showed that slowly flexible base-load generators can only be used in energy systems with renewable shares of less than 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a Europe-wide energy network. These results are independent of the complexity level of the cost assumptions used for the models. The use of storage technologies makes it possible to reduce the required conventional backup capacity further. This highlights the importance of including additional technologies in the energy system that provide flexibility to balance fluctuations caused by the renewable energy sources, for example advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
It was demonstrated that a cost-optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's. However, this requires a very large transmission grid expansion, to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; however, a compromise grid with four times today's line volume already locks in most of the cost benefits. It is therefore clear that increasing pan-European network connectivity enables a cost-efficient integration of renewable energies, which is strongly needed to reach current climate change mitigation goals.
It was also shown that a similarly cost efficient, highly renewable European electricity system can be achieved that considers a wide range of additional policy constraints and plausible changes of economic parameters.
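The interplay of storage and backup capacity discussed above can be sketched with a toy dispatch simulation. The numbers and the greedy charging rule are hypothetical; the actual thesis solves a cost-minimizing optimization over a networked system, which this sketch does not attempt:

```python
# Toy illustration: how much dispatchable backup capacity is needed to
# cover a residual load time series (load minus renewable generation,
# positive = deficit), with and without a simple energy storage that
# charges on surplus and discharges on deficit. Numbers are made up.

def backup_capacity(residual, storage_capacity=0.0):
    """Peak backup power needed over the time series."""
    level = 0.0   # current storage filling level (energy units)
    peak = 0.0    # largest deficit the backup fleet must cover
    for r in residual:
        if r < 0:                       # renewable surplus: charge storage
            level = min(storage_capacity, level - r)
        else:                           # deficit: discharge storage first
            discharge = min(level, r)
            level -= discharge
            peak = max(peak, r - discharge)
    return peak

residual = [-3.0, -1.0, 2.0, 4.0, -2.0, 3.0]       # hypothetical series
print(backup_capacity(residual))                    # 4.0 without storage
print(backup_capacity(residual, storage_capacity=3.0))   # 3.0 with storage
```

Even this crude rule reproduces the qualitative finding: storage shaves the residual-load peak and thus reduces the required conventional backup capacity.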
Most elements heavier than iron are synthesized in stars during neutron-capture reactions in the r- and s-processes. S-process nucleosynthesis is composed of a main and a weak component. While the s-process is considered to be well understood, further investigations using nucleosynthesis simulations rely on measured neutron-capture cross sections as crucial input parameters. Neutron-capture cross sections relevant for the s-process can be measured using various experimental methods. A prominent example is the activation method relying on the 7Li(p,n)7Be reaction as a neutron source, which has the advantage of high neutron intensities and can create a quasi-stellar neutron spectrum at kBT = 25 keV. Other neutron sources able to provide quasi-stellar spectra at different energies suffer from lower neutron intensities. Simulations using the PINO tool suggest activating samples with different neutron spectra provided by the 7Li(p,n)7Be reaction and subsequently taking a linear combination of the obtained spectrum-averaged cross sections to determine the Maxwellian-averaged cross section (MACS) at various energies of astrophysical relevance. To investigate the accuracy of the PINO tool at proton energies between the neutron emission threshold at Ep = 1880.4 keV and 2800 keV,
measurements of the 7Li(p,n)7Be neutron fields are presented, which were carried out at the PTB Ion Accelerator Facility at the Physikalisch-Technische Bundesanstalt in Braunschweig. The neutron fields of ten different proton energies were measured.
The measured neutron fields show good agreement with the simulation at proton energies Ep = 1887, 1897, 1907, 1912, and 2100 keV. For the other proton energies, Ep = 2000, 2200, 2300, 2500, and 2800 keV, differences between measurement and simulation were found and are discussed. The obtained results can be used to benchmark and adapt the PINO tool and provide crucial information for further improvement of the neutron activation method for astrophysics.
As an application of the 7Li(p,n)7Be neutron fields, an activation experiment campaign on gallium is presented, an element that is mostly produced during the weak s-process in massive stars. The available cross section data for the 69,71Ga(n,γ) reactions, mostly determined by activation measurements, show differences of up to a factor of three. To improve the data situation, activation measurements were carried out using the 7Li(p,n)7Be reaction, and the neutron-capture cross sections for a quasi-stellar neutron spectrum at kBT = 25 keV were determined for 69Ga and 71Ga.
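The Maxwellian-averaged cross section mentioned above is a thermal average of the energy-dependent cross section. A minimal numerical sketch follows, using the standard definition MACS(kT) = (2/√π) · ∫σ(E)·E·exp(−E/kT)dE / ∫E·exp(−E/kT)dE with simple rectangle-rule quadrature; the 1/v cross section is a hypothetical stand-in for measured data, not a result from the thesis:

```python
import math

# Minimal MACS sketch: thermal (Maxwellian) average of an
# energy-dependent neutron-capture cross section sigma(E).
def macs(sigma, kT, emax_factor=30.0, n=20000):
    """MACS at thermal energy kT (same units as E) by simple quadrature,
    integrating up to emax_factor * kT, where the Maxwellian has decayed."""
    de = emax_factor * kT / n
    num = den = 0.0
    for i in range(1, n + 1):
        e = i * de
        w = e * math.exp(-e / kT)     # Maxwellian flux weight
        num += sigma(e) * w * de
        den += w * de
    return (2.0 / math.sqrt(math.pi)) * num / den

# Hypothetical 1/v cross section, normalized to 100 mb at 25 keV.
sigma_1v = lambda e: 100.0 * math.sqrt(25.0 / e)

print(macs(sigma_1v, kT=25.0))   # close to 100.0 mb
```

For a pure 1/v cross section the MACS at kT equals σ(kT), which makes this a convenient sanity check; for real, resonance-structured cross sections the integral must be done over the measured data, which is where the spectrum-averaged cross sections from activation come in.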
This work aimed to investigate the regulation and activity of 5-lipoxygenase (5-LO), the central enzyme in leukotriene biosynthesis, in two colorectal cancer cell lines. The leukotriene pathway is positively correlated with the progression of several solid malignancies; however, factors regulating 5-LO expression and activity in tumors are poorly understood.
Cancer development, as well as cancer progression, is strongly dependent on the tumor microenvironment. In the conventional monolayer culture of cancer cell lines, the cell-matrix and cell-cell interactions present in native tumors are absent. Furthermore, it is known that various colon cancer cell lines dysregulate several important signaling pathways upon 3D growth. Therefore, the expression of the leukotriene cascade in HT-29 and HCT-116 colorectal cancer cells was investigated within a three-dimensional context using multicellular tumor spheroids to mimic a more physiological environment compared to conventional cell culture. In particular, the expression of 5-LO, cPLA2α, and LTA4 hydrolase was altered by three-dimensional (3D) cell growth, as shown by qPCR and Western blot analysis. High cellular density in monolayer cultures led to similar results. The observed 5-LO upregulation was found to be inversely correlated with cell proliferation, determined by cell cycle analysis, and with activation of PI3K/mTORC-2- and MEK-1/ERK-dependent pathways, determined using pharmacological pathway inhibition, stable shRNA knockdown cell lines, and analysis via qPCR and Western blot. Subsequently, the transcription factor E2F1 and its target gene MYBL2 were identified to play a role in the repression of 5-LO during cell proliferation. For this purpose, several stable MYBL2 overexpression and ALOX5 reporter cell lines were prepared and analyzed. Since 5-LO was already identified as a direct p53 target gene, the influence of p53, which is variably expressed in the cell lines (HT-29, p53 R273H mut; HCT-116, p53 wt; HCT-116, p53 KO), was investigated as well. The PI3K/mTORC-2- and MEK-1/ERK-dependent suppression of 5-LO was also found in tumor cells of other origins (Capan-2, Caco-2, MCF-7), as determined using pharmacological pathway inhibition and subsequent analysis via qPCR.
This suggests that the identified mechanism might apply to other tumor entities as well.
5-LO activity was previously described as attenuated in HT-29 and HCT-116 cells compared to polymorphonuclear leukocytes, which express a highly active 5-LO. However, the present study showed that the enzyme activity is indeed low but inducible in HT-29 and HCT-116 cells. Of note, the general lipid mediator profile and the mediator concentrations were comparable to those of M2 macrophages. Finally, the analysis of substrate availability in HT-29 and HCT-116 cells revealed a vast difference between formed metabolite concentrations and supplemented fatty acid concentrations, indicating that the substrates are either transformed into lipoxygenase-independent metabolites or are esterified into the cellular membrane.
In summary, the data presented in this work demonstrate that 5-LO expression and activity are tightly regulated in HT-29 and HCT-116 cells and fine-tuned due to environmental conditions. The cells suppress 5-LO during proliferation but upregulate the expression and activity of the enzyme under cellular stress-triggering conditions. This implies a possible role of 5-LO in manipulating the tumor stroma to support a tumor-promoting microenvironment.
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) and T-cell/histiocyte-rich large B-cell lymphoma (THRLBCL) are rare types of malignant lymphoma. Both NLPHL and THRLBCL are frequently observed in middle-aged men, with THRLBCL frequently presenting at an advanced Ann Arbor stage with B symptoms and being associated with more aggressive courses. However, due to the limited number of tumor cells in the tissue of both NLPHL and THRLBCL, only a small number of studies have been conducted on these lymphomas, and current results are mainly based on general molecular genetic studies.
To obtain a better understanding of these disease forms, as well as of possible changes in their nuclear and cytoplasmic sizes, the following study compared the different NLPHL forms and THRLBCL in terms of nuclear size and nuclear volume. This was carried out using both 2D and 3D analysis. The 2D analysis of nuclear size and nuclear volume revealed no significant differences between the groups. However, the 3D analysis of NLPHL and THRLBCL revealed a slightly enlarged nuclear volume in THRLBCL. Furthermore, the analysis indicated a significantly increased cytoplasmic size of THRLBCL cells compared to the NLPHL forms. Differences occurred not only between the tumor cells of the two disease forms; the T cells also presented a larger nuclear volume in THRLBCL. B cells, which were considered the control group, did not demonstrate any significant differences between the groups. The presented results suggest an increased activity of T cells in THRLBCL, which is most likely to be interpreted as a response against the surrounding tumor cells and probably limits their proliferation. Based on these results, the value of 3D analysis is also evident, as it is clearly superior to 2D analysis. For a better understanding of both disease forms, it is therefore recommended to use the 3D technique in combination with molecular genetic analysis in future research.
The subject of this thesis is the experimental investigation of the neutron-capture cross sections of the neutron-rich, short-lived boron isotopes 13B and 14B, as they are thought to influence rapid neutron-capture process (r-process) nucleosynthesis in a neutrino-driven wind scenario.
The 13,14B(n,γ)14,15B reactions were studied in inverse kinematics via Coulomb dissociation at the LAND/R3B setup (Reactions with Relativistic Radioactive Beams). A radioactive beam of 14,15B was produced via in-flight fragmentation and directed onto a lead target at about 500 AMeV. The neutron breakup of the projectile within the electromagnetic field of the target nucleus was investigated in a kinematically complete measurement. All outgoing reaction products were detected and analyzed in order to reconstruct the excitation energy.
The differential Coulomb dissociation cross sections as a function of the excitation energy were obtained, and first experimental constraints on the photoabsorption and neutron-capture cross sections were deduced. The results were compared to theoretical approximations of the cross sections in question. The Coulomb dissociation cross section of 15B into 14B(g.s.) + n was determined to be σCD(15B → 14B(g.s.) + n) = 81(8stat)(10syst) mb, while the Coulomb dissociation cross section of 14B into a neutron and 13B in its ground state was found to be σCD(14B → 13B(g.s.) + n) = 281(25stat)(43syst) mb. Furthermore, new information on the nuclear structure of 14B was obtained, as the spectral shape of the differential Coulomb dissociation cross section indicates a halo-like structure of the nucleus.
Additionally, the Coulomb dissociation of 11Be was investigated and compared to previous measurements in order to verify the present analysis. The corresponding Coulomb dissociation cross section of 11Be into 10Be(g.s.) + n was found to be 450(40stat)(54syst) mb, which is in good agreement with the results of Palit et al.
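The cross sections above are quoted with separate statistical and systematic uncertainties. Assuming the two components are independent (a common convention, not stated explicitly in the text), a single combined uncertainty follows by adding them in quadrature:

```python
import math

# Combine independent statistical and systematic uncertainties in
# quadrature (assumption: the two error components are uncorrelated).
def total_uncertainty(stat, syst):
    return math.sqrt(stat ** 2 + syst ** 2)

# sigma(15B -> 14B(g.s.) + n) = 81(8_stat)(10_syst) mb
print(round(total_uncertainty(8.0, 10.0), 1))    # 12.8 mb
# sigma(11Be -> 10Be(g.s.) + n) = 450(40_stat)(54_syst) mb
print(round(total_uncertainty(40.0, 54.0), 1))   # 67.2 mb
```

Quoting the components separately, as the thesis does, is more informative, since only the statistical part shrinks with more beam time.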
My study examined MMA training, and thereby the 'back region' of MMA, where the 'everyday life' of MMA takes place. I enquired into how MMA training corresponds with MMA's self-description, namely the somewhat self-contradictory notion that MMA fights are dangerous combative goings-on approximating real fighting, but that MMA fighters are able to approach these incalculable and uncontrollable combative dangers as calculable and controllable risks. Conducting an ethnography in which I focused on the combination of participation and observation, I studied how the specific interaction organisations of the three core training practices of MMA training provide the training students with specific combative experiences and how they thereby construct the social reality that is MMA training...
The book deals with a comprehensive constellation of narrative and visual, often counterposed representations of the causes, course, and results of the assault on the Palace of Justice of Colombia by a guerrilla commando and the immediate counterattack launched by state security forces on November 6, 1985, as well as with the local memorial traditions in which the production, circulation and reproduction of these representations have taken place between 1985 and 2020. The research on which it is based was grounded in the method and perspective of classical anthropology, in as much as qualitative fieldwork and the search for the perspective of the actors involved have played a central role. Within that context, memory entrepreneurs belonging to diverse sectors, from the far-right to the human rights movement, were followed through multisited fieldwork in various locations of Colombia, as well as in various countries of America and Europe. The analyses of fieldwork data, documental sources, and visual representations that constitute the core of the argument are framed in the field of memory studies and mainly based on theoretical and methodological resources from Pierre Bourdieu’s Field Theory, Jeffrey Alexander’s theory of social trauma, and Ernst Gombrich’s characterization of iconological analysis.
The book is composed of four chapters preceded by an introduction and followed by the conclusions and documental appendices, and substantiates three main theses. The first is that the Palace of Justice events were a radio- and television-broadcast dispersed tragedy that affected the lives of actors from different social sectors and regions of Colombia, who have launched multiple memorial initiatives in different fields of culture since 1985, thereby contributing to the formation and intergenerational transmission of a widespread cultural trauma. The second is that the narrative and visual representations at the core of that trauma express a vast universe of local representational traditions that can be traced back at least to the early 20th century and therefore preexist the so-called Colombian “memory boom”, dated to the mid-1990s. As an example of the preexistence and longstanding impact of these traditions, the local usage of the figure of “holocaust” for representing the effects of politically motivated violence is analyzed with regard to the Palace of Justice events, but also traced to other representations that emerged in the 1920s. The third thesis is that analyzing the diverse, frequently counterposed accounts of political violence elaborated within these traditions provides an opportunity to explore a wide variety of understandings of the causes and characteristics of the longstanding Colombian social and armed conflict.
Keywords: Political violence, Cultural trauma, Collective Memory, Iconology, Holocaust, Colombia.
The ability to adapt specifically and context-dependently to intrinsic and/or extrinsic signals is the foundation of cellular homeostasis. Different signals are recognized by membrane receptors or intracellular receptors and enable the molecular adjustment of cellular processes. Complex, interlocking protein networks are elementary to the regulation of the cell. Proteins and their functions are regulated on demand and are subject to constant proteolytic turnover.
Stimulus-dependent gene transcription and/or protein translation plays a central role here, as the underlying machinery can adjust the composition and function of the protein networks accordingly. In addition to the regulation of protein abundance, proteins are post-translationally modified to rapidly change their properties. Post-translational modifications include ubiquitination and/or phosphorylation, which regulate protein functions in a highly dynamic manner. Deregulated protein networks are often associated with neurodegeneration and with autoimmune diseases or cancer. Infections with human-pathogenic bacteria also interfere strongly with the regulation of protein networks and their functions, thereby challenging cellular homeostasis.
Bacteria of the genus Salmonella are zoonotic, Gram-negative, facultatively intracellular pathogens that cause millions of Salmonella infections worldwide. Of particular importance is Salmonella enterica serovar Typhimurium (hereafter Salmonella), which causes gastroenteritis in humans, mostly as a result of inadequate hygiene measures.
Immunity in epithelial cells is mediated by the innate immune system and serves to recognize and combat pathogens. Toll-like receptors (TLRs) belong to the pattern recognition receptors, which detect specific microbial structures and generate a context-dependent cellular response. Danger receptors, in contrast, do not recognize the pathogen directly but rather cellular perturbations caused by cell damage or bacterial invasion. The intrinsic ability of the host cell to defend itself against infections and dangers is referred to as cell-autonomous immunity, in which induced proinflammatory signaling pathways and cellular stress responses play an important role. The cellular stress response activates, among other things, selective autophagy, which can specifically degrade aberrant organelles, proteins, and invasive pathogens. Another stress pathway is the integrated stress response (ISR), which permits selective protein translation and thereby enables the resolution of proteotoxic stress.
To penetrate epithelial cells, Salmonella requires a complex system of virulence factors that enables bacterial internalization and proliferation in the host cell. For this purpose, Salmonella uses a type III secretion system, which secretes bacterial virulence factors into the cell and thereby forces a highly specific modulation of the host.
The virulence factors SopE and SopE2 play a key role here, as they substantially mediate the pathogenicity of Salmonella. Through molecular mimicry of host GTP (guanosine triphosphate) exchange factors, SopE and SopE2 activate the Rho GTPases CDC42 and Rac1. GTP-loaded CDC42 and Rac1 in turn act on the actin cytoskeleton and stimulate the polymerization of actin filaments via the Arp2/3 complex at the invasion site. The pathogen is thereby taken up into a membrane-enclosed vesicle, the so-called Salmonella-containing vacuole (SCV). The SCV constitutes a protective, replicative, intracellular niche for the pathogen and is permanently modulated by various virulence factors.
In general, the activation of pattern recognition receptors and danger receptors thus leads to a cellular stress response and an inflammatory reaction, which combats the infection. Inflammatory signaling pathways are mostly mediated via the central transcription factor NF-κB (nuclear factor 'kappa-light-chain-enhancer' of activated B cells), which induces proinflammatory effectors and stress genes. Cell-autonomous immunity is additionally enabled by antibacterial autophagy, whereby Salmonella is selectively degraded via the lysosomal system. The bacterial type III secretion system causes membrane damage at a few SCVs, allowing Salmonella to penetrate the host cytosol. Cytosolic bacteria are specifically ubiquitinated, which allows their recognition by the autophagy machinery.
In the present work, the cell-autonomous immunity of epithelial cells during acute Salmonella infection was investigated by quantitative proteomics...
Twentieth-century scholars have thought little about the attractions of Descartes' thinking. Especially in feminist theory, he has had a bad press as the 'instigator' of the body-mind split, seen as one of the theoretical bases for the subordination of women in Western culture. Seen from within seventeenth-century discourse, however, the dictum that can be inferred from his writings, that 'the mind has no sex', can be read as an appeal to think about rational capacities in the utopian perspective of a gender-neutral discourse. My work analyses this "face" of Cartesianism as it was adapted in favour of English seventeenth-century women. How were the specific tenets of Descartes' philosophy employed on behalf of women in the second half of the seventeenth century in England? My focus is on Descartes as a thinker who, whatever his real or imagined intention might have been, provided women in seventeenth-century England with tools with which to change their status, in other words: with instruments of empowerment. So why were Descartes' arguments so attractive to women? Descartes had argued for equal rational abilities among individuals in a gender-neutral way. He had further critiqued generally accepted truths with his universal doubt. I believe this specific combination of ideas, affirming their rational capabilities, was seen by a number of women as an invitation to become involved in spheres of activity from which they were previously excluded. Moreover, a specific set of Descartes' arguments provided a number of English women with a strategy to extend female agency. Not only did Descartes' views legitimate female rationality, they also allowed an acknowledgement that this female intellect was as much connected to "truth" as that of their male contemporaries. As a consequence, women developed an increased self-esteem and the inspiration to pursue their own independent study (and in some cases publishing).
These ideas eventually helped to bring forward a demand for female education, as girls and women were still excluded from formal education in seventeenth-century England. My general thesis is that Cartesianism, as one of the earliest universalist theories on the nature of human reason, introduced new possibilities into the English debate over the nature and, hence, social position of women. It brought a radical twist to the already existing discussion on women by offering new critical tools which were taken up to argue on behalf of English women. In my work I examine the specific historical conditions of the reception of Descartes’ thought in England, the philosophical appeal of his ideas for women and analyse the writings of two English ‘disciples’ of Descartes: Margaret Cavendish, Duchess of Newcastle and Mary Astell.
Based on an original dataset of 100 important pieces of legislation passed during the three presidencies of William J. Clinton, George W. Bush, and Barack H. Obama (1992-2013), this study explores two sets of questions:
(1) How do presidents influence legislators in Congress in the legislative arena, and what factors have an effect on the legislative strategies presidents choose?
(2) How successful are presidents in getting their policy positions enacted into law, and what configurations of institutional and actor-centered conditions determine presidential legislative success?
The analyses show that in a hyper-polarized environment, presidents usually have to fight an uphill battle in the legislative arena, getting more involved when they face less favorable contexts and the odds are against them.
Moreover, the analyses suggest that there is no silver-bullet approach to presidents' legislative success. Instead, multiple patterns of success exist, as presidents, depending on the institutional and public environment, can resort to different combinations of actions in order to see their preferred policy outcomes enacted.
Paleoclimate reconstructions that aim to investigate climate-human interactions over long time series are, encouraged by the currently intense climate debate, gaining ever greater prominence in public and scientific perception. For despite all the scientific advances made in modern climate research over recent decades, the reliable prediction and modelling of future climate change remains one of the greatest challenges of our time. Taking the Caribbean as an example, many model calculations predict, as a consequence of rising ocean temperatures, a markedly more frequent occurrence of tropical storms and hurricanes as well as a shift toward higher storm intensities. For the Caribbean and many adjacent states, this trend represents one of the greatest hazards of modern climate change, and it needs to be investigated scientifically over a long time frame.
Climate projections rely mostly on high-resolution instrumental datasets. These, however, are all limited in one essential respect: owing to their restricted availability (~150 years), they lack the depth required to adequately capture the processes of global climate dynamics that operate on long time scales. Considering the Holocene in its entirety, global climate dynamics over the past ~11,700 years have been governed by periodically recurring processes. These generally act over time spans of several decades, in some cases centuries, and occasionally even millennia. Many of these natural processes cannot be fully identified within the short instrumental era, nor adequately accounted for in climate models. Relying on the instrumental era alone therefore offers only a limited perspective for understanding the causes and courses of past climate change as well as the possible consequences of future change. To overcome this limitation, geoscientific research must use proxy methods to attain a comprehensive, mechanistic understanding of Holocene climate change.
Bearing in mind this limitation, rising ocean temperatures, and the increased occurrence of strong tropical cyclones in the Caribbean over the past 20 years, it follows that this doctoral thesis aims to produce a two-millennia-long, annually resolved climate dataset reflecting late Holocene variations in sea surface temperature (SST) and the resulting long-term changes in tropical cyclone frequency. In Central America, the end of the Maya high culture (900-1100 CE) is associated with drastic environmental changes (e.g. droughts) brought about by a global climate shift during the Medieval Warm Period (MWP; 900-1400 CE). The information on past climate variations derived from a “blue hole” can serve as a reference for the current climate crisis.
A “blue hole” is a karst cave that formed subaerially in the carbonate framework of a reef system during past sea-level lowstands and was completely flooded by subsequent sea-level rise. In a few marine “blue holes”, anoxic bottom-water conditions occur. The successions of marine sediments deposited in these anoxic karst caves can be used as a unique climate archive because, in the absence of bioturbation, they exhibit annual lamination (varves).
This cumulative dissertation on the “Great Blue Hole” presents the results of a three-year research project that aimed to produce a scientifically outstanding late Holocene climate dataset for the south-western Caribbean. The “Great Blue Hole” is a globally unique marine sedimentary archive of diverse late Holocene climate changes, which was examined in this dissertation from both paleoclimatic and sedimentological perspectives. Specifically, the thesis deals with (1) the development of an annually resolved archive of tropical cyclones, (2) the development of an annually resolved SST dataset, and (3) a compositional quantification of the sedimentary successions together with a facies-stratigraphic characterization of fair-weather sediments and storm layers. On each of these three aspects, a paper was published in a recognized peer-reviewed scientific journal.
The 8.55 m long sediment core (“BH6”) examined for this dissertation was retrieved from the bottom of the 125 m deep and 320 m wide “Great Blue Hole”, which lies in the shallow eastern lagoon of the “Lighthouse Reef” atoll, 80 km off the coast of Belize (Central America). Owing to its particular geomorphology, the “Great Blue Hole”, positioned within the Atlantic hurricane belt, acts as a giant sediment trap. The successions of fine-grained carbonate sediments deposited continuously under fair-weather conditions are interrupted by coarse storm layers attributable to overwash processes of tropical cyclones.
...
Chemokines play a key role in the cellular infiltration of inflamed tissue. They are released by a wide variety of cell types during the initial phase of the host response to injury, allergens, antigens, or invading microorganisms, and selectively attract leukocytes to inflammatory foci, inducing both migration and activation. Monocyte chemoattractant protein-1 (MCP-1), a member of the CC chemokine superfamily, attracts monocytes, T lymphocytes, and basophils to sites of inflammation. MCP-1 is produced by monocytes, fibroblasts, vascular endothelial cells and smooth muscle cells in response to various stimuli such as tumour necrosis factor-α (TNF-α), interferon-γ (IFN-γ), and interleukin-1β (IL-1β). It also plays an important role in the pathogenesis of chronic inflammation, and overexpression of MCP-1 has been implicated in diseases including glomerulonephritis and rheumatoid arthritis. Oligonucleotide-directed triple helix formation offers a means to target specific sequences in DNA and interfere with gene expression at the transcriptional level. Triple helix-forming oligonucleotides (TFOs) bind to homopurine/homopyrimidine sequences, forming a stable, sequence-specific complex with the duplex DNA. Purine-rich sequences are frequent in gene regulatory regions, and TFOs directed to promoter sequences have been shown to prevent binding of transcription factors and to inhibit transcription initiation and elongation. Exogenous TFOs that bind homopurine/homopyrimidine DNA sequences and form triple helices can be rationally designed, while the intracellular delivery of single-stranded RNA TFOs had not previously been studied in detail. In this study, expression vectors were constructed that directed transcription of either a 19 nt triplex-forming pyrimidine CU-TFO sequence targeting the human MCP-1 promoter or two different 19 nt GU- or CA-control sequences, respectively, together with the vector-encoded hygromycin resistance mRNA as one fusion transcript.
HEK 293 cells were stably transfected with these vectors, and several TFO and control cell lines were generated. Functionally relevant triplex formation of a TFO with a corresponding 19 bp GC-rich AP-1/SP-1 site of the human MCP-1 promoter was shown. Binding of the synthetic 19 nt CU-TFO to the MCP-1 promoter duplex was verified by triplex blotting at pH 6.7. Underlining binding specificity, control sequences, including the GU- and CA-sequences, a TFO containing a single mismatch, and an MCP-1 promoter duplex containing two mismatches, did not participate in triplex formation. Using an established magnetic capture technique with streptavidin microbeads, it was verified that at pH 7.0 the 19 nt TFO embedded in a 1.1 kb fusion transcript binds a plasmid-encoded MCP-1 promoter target duplex three times more strongly than the controls. Finally, cell culture experiments revealed 76 ± 10.2% inhibition of MCP-1 protein secretion in TNF-α-stimulated CU-TFO-harboring cell lines, and up to 88% inhibition after TNF-α and IFN-γ co-stimulation, in comparison to controls. Expression of interleukin-8 (IL-8), a TNF-α-inducible control gene, was not affected by the CU-TFO, demonstrating both highly specific and effective chemokine gene repression. Furthermore, another chemokine target, regulated upon activation, normal T cell expressed and secreted (RANTES), which plays an essential role in inflammation by recruiting T lymphocytes, macrophages and eosinophils to inflammatory sites, was analysed using the triplex approach. A 28 nt TFO was designed targeting the murine RANTES gene promoter, and gel mobility shift assays demonstrated that the phosphodiester TFO formed a sequence-specific triplex with the double-stranded target DNA with a Kd of 2.5 x 10^-7 M. It was then analysed whether RANTES expression could be inhibited at the transcriptional level by testing the TFO in two different cell lines, T helper-1 lymphocytes and brain microvascular endothelial cells (bEnd.3 cells).
Although sequence-specific binding of the TFO was detectable in the gel shift assays, no inhibitory effect of the exogenously added, phosphorothioate-stabilised TFO on endogenous RANTES gene expression was visible. Additionally, the small interfering RNA (siRNA) approach was tested as another strategy to inhibit expression of the pro-inflammatory chemokines MCP-1 and RANTES. Two different methods were pursued: transient transfection with vector-derived siRNA and with synthetic siRNA. The vector pSUPER containing the siRNA coding sequence was used to suppress endogenous MCP-1 in HEK 293 cells; an empty vector without the RNA sequence served as a control. Inhibition due to the siRNA was measured in stimulated and unstimulated cells. In TNF-α-stimulated cells, MCP-1 protein synthesis was decreased by 35 ± 11% after siRNA transfection. Using a synthetic double-stranded siRNA, the TNF-α-induced MCP-1 protein secretion could be inhibited by 62.3 ± 10.3% in HEK 293 cells, indicating that the siRNA is functional in suppressing chemokine expression in these cells. The siRNA approach targeting murine RANTES in Th1 cells and bEnd.3 cells revealed no inhibition of endogenous gene expression. Gene therapy approaches rely on efficient transfer of genes to the desired target cells. A wide variety of viral and non-viral vectors have been developed and evaluated for their efficiency of transduction, sustained expression of the transgene, and safety. Among them, lentiviruses have been widely used for gene therapy applications. In order to improve the delivery of TFOs or siRNAs into the target cells, cloning of the lentiviral transfer vector SEW and the production of lentiviral particles by transient transfection were performed, with the aim of generating lentiviral vector-derived TFOs in further experiments. Here, Th1 cells were transduced with infectious lentiviral particles and the transduction efficacy was measured.
Transduction efficacy higher than 82% could be achieved using the lentiviral vector SEW, opening up optimal possibilities for the TFO and siRNA approaches.
Canada’s geographic centre lies in the territory of Nunavut. From there, the distance to the geographic North Pole is as great as the distance to the US border. Nunavut covers about one fifth of the Canadian land mass but has by far the smallest population, currently about 38,000 residents. 85% of its population are Inuit, whose culture has changed dramatically within the last 70 years.
As a result, the territory is dealing with several generations of Inuit who are traumatized, or at least severely affected, by the cultural and economic changes that began after World War II with the resettlement from the land into permanent communities. Whether we are talking about today’s elders, middle-aged adults or pre-teenagers, each of these generations experienced, and still experiences, various personal and cultural challenges: questions of identity, financial and housing insecurity, food insecurity, substance abuse, education, and changes in social values ranging from inter-generational and gender relationships to the introduction of a foreign political and legal system.
On the other hand, many of the traditional societal values are still practiced in Inuit families. Despite all the tragedies that several generations of Inuit have experienced by now, the society keeps generating the strength and cultural pride that allow many Inuit, both as individuals and collectively, whether under the umbrella of Inuit land claims organizations or of not-for-profit organizations, to advocate on behalf of Inuit culture, to fight for greater acknowledgement of it, and to enhance pride in the historic and present-day cultural achievements of Nunavut’s indigenous population.
The social issues, inter- and intra-cultural processes described in my thesis are not exclusive to the situation in Nunavut or to Inuit. Studies from other regions, in Canada or from around the world (LaPrairie 1987; Jensen 1986; Nunatsiaq News 6/30/2010) reveal similar challenges.
Though many structural similarities can be identified by comparing these studies with one another, e.g. the marginalization of indigenous local populations, colonization, paternalism and resulting problems such as loss of personal and cultural identity, it is important to look more closely at the individual cases to determine which specific events and developments caused, and perhaps still cause, such a devastating social situation as is found among many indigenous peoples across the world. From my perspective, effective improvement of the situation of a group, a particular community or region can only happen when the particularities of socialization, communication and philosophy within each cultural entity are taken into consideration.
That is why my thesis focuses exclusively on developments in Nunavut and draws on various community case studies. The case studies help to identify local differences in historic and recent developments and thus provide starting points for explaining the divergent trajectories of different Nunavut communities.
The thesis looks at both historic and recent root causes of the many issues in Nunavut.
The data on which my thesis is based are a combination of literature and about 60 formal and informal interviews that I conducted in three Nunavut communities (Iqaluit, Whale Cove, Kugluktuk) during 18 months of field work between October 2008 and March 2010. Many more spontaneous, unstructured conversations between me and community members added to the pool of first-hand information that I gathered.
Since my field work was limited to those three communities, it has a very strong qualitative character. The quantitative side, which allows me to apply my research analyses with some confidence to Nunavut as a whole, comes from literature research as well as from many informal conversations and a few formal interviews with people who had experience in communities other than Iqaluit, Kugluktuk and Whale Cove.
Furthermore, while living at the old residence of Nunavut Arctic College in Iqaluit, I spent time with college students from across Nunavut. Through them, I obtained “case studies” from the following communities: Iqaluit, Qikiqtarjuaq, Kimmirut, Pangnirtung, Clyde River, Pond Inlet, Igloolik, Repulse Bay, Cape Dorset, Chesterfield Inlet, Baker Lake, Rankin Inlet, Whale Cove, Arviat, Taloyoak, Kugluktuk.
My general categorization into “early contact period”, “contact”, “1st generation” and “2nd generation” is very similar to Damas’ terms “early contact phase”, “contact-traditional” and “resettlement”, which he uses to create a timeline describing the major phases of impact on Inuit society (Damas 2002: 7, 17).
Chapter 2 is meant to provide an inventory of the key aspects of current social issues in Nunavut. In this context I look at the four major aspects that, in my opinion, shape Nunavut’s society:
1) violence and other forms of social dysfunction
2) the associated services and delivering agencies that try to address those matters
3) education
4) Inuit cultural particularities in communication and socialization
These four areas form the foundation for the rest of my work. The following chapters guide the reader through the historic transformation of Inuit pre-colonial semi-nomadic society into a society living in permanent settlements, strongly influenced, if not in many ways dominated, by Euro-Canadian culture. Each of these chapters refers to the social and cultural changes that took place in the different time periods that I have labeled “Pre-settlement, First, Second, and Third Generation”. The relevance of violence and other social dysfunctions, their context, and the strategies with which each generation dealt with these matters will be analyzed, while I also refer to the impacts that non-Inuit, primarily Euro-Canadians and Euro-Americans, had and continue to have on Inuit society.
...