Efficient algorithms for object recognition are crucial for emerging robotics and computer vision applications that demand real-time, online methods. Examples include autonomous systems, navigating robots, and autonomous driving. In this work, we focus on efficient semantic segmentation, which is the problem of labeling each pixel of an image with a semantic class.
Our aim is to speed up all parts of the semantic segmentation pipeline. We also aim at delivering a labeling solution on a time budget that can be decided on the fly. For this purpose, we analyze all the components of the semantic segmentation pipeline and identify the computational bottleneck of each. The pipeline consists of over-segmenting the image into local regions, extracting features and classifying the local regions, and the final inference of the image labeling with semantic classes. We address each of these steps.
First, we introduce a new superpixel algorithm to over-segment the image. Our superpixel method runs in real time and can deliver a solution at any time budget. Then, for feature extraction, we focus on the framework that computes descriptors and encodes them, followed by a pooling step. We observe that the encoding step is the bottleneck, both for computational efficiency and for performance. We present a novel assignment-based encoding formulation that allows for the design of a new, very efficient encoding. Finally, the image labeling output is obtained by modeling the dependencies with a Conditional Random Field (CRF). In semantic image segmentation, the computational cost of instantiating the potentials is much higher than that of MAP inference. We introduce Active MAP inference to select, on the fly, a subset of potentials to be instantiated in the energy function, leaving the rest unknown, and to estimate the MAP labeling from such an incomplete energy function.
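The idea of labeling from an incomplete energy function can be illustrated on a toy example. The following is a minimal sketch of the underlying idea only, not the thesis's Active MAP algorithm: a chain CRF with binary labels, where only some unary potentials are instantiated and the rest are left uninformative before brute-force MAP inference. All numeric values are invented for illustration.

```python
import itertools

# Toy CRF over a 4-pixel chain with binary labels (illustrative values only).
unary = [[0.0, 2.0], [0.1, 1.5], [2.0, 0.2], [1.8, 0.0]]  # unary[i][label] = cost

def energy(labels, unary, w=0.5):
    """Sum of unary costs plus a Potts penalty w for each unequal neighbor pair."""
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(w for a, b in zip(labels, labels[1:]) if a != b)
    return e

def map_labeling(unary, w=0.5):
    """Brute-force MAP: the labeling minimizing the energy."""
    return min(itertools.product((0, 1), repeat=len(unary)),
               key=lambda ls: energy(ls, unary, w))

# Partial energy in the spirit of Active MAP: instantiate only a subset of
# the unary potentials (here pixels 0 and 3); the rest stay uninformative.
active_unary = [unary[0], [0.0, 0.0], [0.0, 0.0], unary[3]]
full_map = map_labeling(unary)          # all potentials instantiated
approx_map = map_labeling(active_unary)  # cheaper, incomplete energy
```

In the real setting the gain comes from skipping the expensive instantiation of the omitted potentials, and the selection of which potentials to instantiate is itself part of the inference.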
We perform experiments on all proposed methods for the different parts of the semantic segmentation pipeline. We show that our superpixel extraction achieves higher accuracy than the state of the art on a standard superpixel benchmark, while running in real time. We test our feature encoding on standard image classification and segmentation benchmarks and show that our method achieves results competitive with the state of the art while requiring less time and memory. Finally, results on a semantic segmentation benchmark show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains.
The composition of cellular membranes is extremely complex and the mechanisms underlying their homeostasis are poorly understood. Organelles within a eukaryotic cell require a non-random distribution of membrane lipids, and a tight regulation of the membrane lipid composition is a prerequisite for the maintenance of specific organellar functions. Physical membrane properties such as bilayer thickness, lipid packing density and surface charge are governed by the lipid composition and change gradually from the early to the late secretory pathway. As the endoplasmic reticulum (ER) is situated at the beginning of the cell's secretory pathway, it has to accept and accommodate a great variety and quantity of secretory and transmembrane proteins, which enter the ER on their way to their final cellular destination. Secretory proteins can be translocated into the lumen of the ER co- or posttranslationally, and membrane proteins are inserted into the ER membrane. In the oxidative milieu of the ER lumen, supported by a variety of chaperones, proteins can fold into their native form.
If the folding capacity of the ER lumen is exceeded, mis- or unfolded proteins accumulate in the lumen of the ER, triggering the unfolded protein response (UPR). This highly conserved program activates a widespread transcriptional response to restore protein folding homeostasis. In fact, 7 to 8% of all genes in the yeast Saccharomyces cerevisiae (S. cerevisiae) are regulated by the UPR. The mechanism underlying the activation of the UPR by protein folding stress has been investigated thoroughly over the last decades and many of its mechanistic details have been elucidated. Recently, it became evident that aberrant lipid compositions of the ER membrane, collectively referred to as lipid bilayer stress, are equally potent in activating the UPR. The underlying molecular mechanism of this membrane-activated UPR, however, remained unclear.
This study focuses on the UPR in S. cerevisiae and characterizes the inositol-requiring enzyme 1 (Ire1) as the sole UPR sensor in S. cerevisiae. Active Ire1 forms oligomers and, together with the tRNA ligase Rlg1, splices the immature mRNA of the transcription factor HAC1, resulting in mature HAC1 mRNA and the production of the active Hac1 protein, which binds to UPR elements in the nucleus and activates the expression of UPR target genes. Here, a combination of in vivo and in vitro experiments is used, supplemented by molecular dynamics (MD) simulations performed by Roberto Covino and Gerhard Hummer (MPI for Biophysics, Frankfurt), to identify the molecular mechanism of Ire1 activation by lipid bilayer stress. The study focuses on the analysis of the juxta- and transmembrane region of Ire1. Bioinformatic analyses revealed a putative ER-lumenal amphipathic helix (AH) N-terminal to, and partially overlapping with, the transmembrane helix (TMH). This predicted AH contains a large hydrophobic face, which inserts into the ER membrane, forcing the TMH into a tilted orientation within the membrane. The resulting unusual architecture of Ire1's AH and TMH constitutes a unique structural element required for the activation of Ire1 by lipid bilayer stress.
To investigate the function of the AH in a physiological context, different variants of Ire1 were expressed under the control of the endogenous promoter and from the endogenous locus. The functional role of the AH was tested by disrupting its amphipathic character through the introduction of charged residues into the hydrophobic face of the AH. The role of a conserved negative residue between the TMH and the AH (E540 in S. cerevisiae) was tested by substituting it with a nonpolar, polar, or positively charged residue. These variants were extensively characterized using a series of assays.
This thesis provides evidence that the AH is crucial for the function of Ire1: mutant variants with a disrupted (F531R, V535R) or otherwise modified AH (E540A) exhibited a lower degree of oligomerization and failed to catalyze splicing of the HAC1 mRNA as efficiently as the wild-type control. Likewise, the induction of PDI1, a target gene of the UPR, was greatly reduced in mutants with a disrupted or defective AH. These data reveal an important functional role of the AH for normal Ire1 function.
An in vitro system was established to analyze the membrane-mediated oligomerization of Ire1. This system enabled the isolated functional analysis of the AH and TMH during Ire1 activation by lipid bilayer stress. A fusion construct was produced encoding the maltose binding protein (MBP) from Escherichia coli (E. coli) fused N-terminally to the AH and TMH of Ire1. The heterologous production in E. coli, purification, and reconstitution of this minimal sensor of Ire1 into liposomes were established as part of this study. To analyze the oligomeric status of the minimal sensor in different lipid environments, continuous-wave electron paramagnetic resonance (cwEPR) spectroscopy experiments were performed. These experiments revealed that the molecular packing density of the lipids had a significant influence on the oligomerization of the spin-labeled membrane sensor: increasing packing densities resulted in sensor oligomerization. The AH-disruptive F531R mutant, in which the amphipathic character of the AH was destroyed, showed no membrane-sensitive changes in its oligomerization status.
Thus, the activation of Ire1 by lipid bilayer stress is achieved by a membrane-based mechanism. According to the current model, the AH induces a local membrane compression by inserting its large hydrophobic face into the membrane. As membrane thickness and acyl chain order are interconnected, this compression simultaneously results in an increased local disordering of lipid acyl chains. Supporting MD simulations performed by Roberto Covino and Gerhard Hummer revealed that the bilayer compression is significantly more pronounced in a densely packed lipid environment than in one of lower lipid packing density. Hence, the energetic cost of the local compression increases with the packing density of the membrane, but is compensated for by the oligomerization of Ire1. This minimization of the energetic cost of the membrane deformation caused by Ire1 forms the basis for its activation by lipid bilayer stress.
This thesis investigates the acquisition pace and the typical developmental path in eL2 acquisition of selected phenomena of German morphosyntax and semantics and compares them to monolingual acquisition. In addition, the influence of ‘Age of Onset’ and of external factors on eL2 acquisition is examined.
To date, most studies on eL2 acquisition have focused on language production. Based on mostly longitudinal spontaneous speech data from only a small number of children, they indicate that eL2 learners acquire sentence structure and subject-verb agreement faster than monolingual children, whereas the acquisition of case marking causes them more difficulties. Moreover, developmental paths similar to those of monolingual children have been claimed. Only a few studies have examined comprehension abilities in eL2 learners, however, and these overwhelmingly used cross-sectional designs. The findings from comprehension studies on telic and atelic verbs and on wh-questions indicate that eL2 children acquire their target-like interpretation faster than monolingual children. The same acquisition stages towards target-like interpretation as in monolingual acquisition are assumed as well. Taken together, to date no study exists that examines comprehension and production abilities in a large group of eL2 learners of German in a longitudinal design.
This thesis extends the previous results by investigating pace of acquisition, impact of factors, and individual developmental paths in a longitudinal design with large groups of participants. Language data of 29 eL2 learners of German (age at T1: 3;7 years, LoE: 10 months) and 45 monolingual German-speaking children (age at T1: 3;7) are examined. The eL2 learners were tested in six test rounds (age at T6: 6;9 years). The monolingual children were tested in five test rounds (age at T5: 5;7). The standardized test LiSe-DaZ (Schulz & Tracy, 2011) was employed to examine children’s language skills.
eL2 learners show a significantly greater rate of change, and thus a faster acquisition pace, than monolingual children on the following scales: comprehension of telicity, comprehension of wh-questions, production of prepositions, and production of conjunctions. These phenomena are acquired early by monolingual children. No differences in acquisition pace between eL2 and monolingual children are found for comprehension of negation, production of case marking, and production of focus particles. These phenomena are acquired late in monolingual development and involve semantic and pragmatic knowledge. The finding of a faster acquisition pace for several phenomena is in line with studies reporting that eL2 children develop faster than monolingual children.
Independently of whether a phenomenon is acquired early or late, no effects of external factors on eL2 children’s performance are found. These findings indicate that the acquisition of core, rule-based phenomena is not sensitive to external factors if first exposure to the L2 takes place around the age of three.
Moreover, eL2 children show the same developmental stages and error types in the comprehension of telicity, the comprehension of negation, and the production of matrix and subordinate clauses. This is independent of how fast they acquire the structure under consideration. Thus, these findings provide further support for similar developmental paths of eL2 and monolingual children towards target-like comprehension and production.
Echolocation allows bats to orient in darkness without using visual information. Bats emit spatially directed high-frequency calls and infer spatial information from the echoes reflected off objects (Simmons 2012; Moss and Surlykke 2001, 2010). The echoes provide momentary snapshots, which have to be integrated to create an acoustic image of the surroundings. The spatial resolution of the computed image increases with the number of received echoes. Thus, a high call rate is required for a detailed representation of the surroundings.
One important parameter that bats extract from the echoes is an object’s distance. The distance is inferred from the echo delay, i.e. the time between call emission and echo arrival (Kössl et al. 2014). The echo delay decreases with decreasing distance, and delay-tuned neurons have been characterized along the ascending auditory pathway, from the inferior colliculus (Wenstrup et al. 2012; Macías et al. 2016; Wenstrup and Portfors 2011; Dear and Suga 1995) to the auditory cortex (Hagemann et al. 2010; Suga and O'Neill 1979; O'Neill and Suga 1982).
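The relation between echo delay and target distance is simple round-trip acoustics; a minimal sketch (the speed-of-sound value is an assumption for air at roughly room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C (assumed value)

def target_distance(echo_delay_s):
    """Distance to the reflecting object: the sound travels there and back,
    so the one-way distance is half the round-trip path."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# A 5 ms echo delay corresponds to roughly 0.86 m.
d = target_distance(0.005)
```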
Electrophysiological studies usually characterize neuronal processing using artificial and simplified versions of echolocation signals as stimuli (Hagemann et al. 2010; Hagemann et al. 2011; Hechavarría and Kössl 2014; Hechavarría et al. 2013). The high controllability of artificial stimuli simplifies the inference of the neuronal mechanisms underlying distance processing. However, it remains largely unexplored how neurons process delay information from echolocation sequences. The main purpose of this thesis is to investigate how natural echolocation sequences are processed in the brain of the bat Carollia perspicillata. Bats actively control the sensory information that they gather during echolocation. This allows experimenters to easily identify and record the acoustic stimuli that are behaviorally relevant for orientation. For recording echolocation sequences, a bat was placed at the mass of a swinging pendulum (Kobler et al. 1985; Beetz et al. 2016b). During the swing, the bat emitted echolocation calls that were reflected off surrounding objects. An ultrasound-sensitive microphone traveling with the bat and positioned above the bat’s head recorded the echolocation sequence. The echolocation sequence carried delay information of an approach flight and was used as a stimulus for neuronal recordings from the auditory cortex and inferior colliculus of the bats.
Presentation of high stimulus rates to other species, such as rats and guinea pigs, suppresses cortical neuron activity (Wehr and Zador 2005; Creutzfeldt et al. 1980). Therefore, I tested whether neurons of bats are suppressed when stimulated with the high acoustic rates present in echolocation sequences (sequence situation). Additionally, the bats were stimulated with randomized call-echo elements of the sequence presented with an interstimulus interval of 400 ms (element situation). To quantify the neuronal suppression induced by the sequence, I compared the response pattern in the sequence situation with the concatenated response patterns from the element situation. Surprisingly, although bats should be adapted for processing high acoustic rates, their cortical neurons are strongly suppressed in the sequence situation (Beetz et al. 2016b). However, instead of being completely suppressed, the neurons partially recover from suppression at a unit-specific call-echo element. Multi-electrode recordings from the cortex allowed assessment of the representation of echo delays along the cortical surface. At the cortical level, delay-tuned neurons are topographically organized. Cortical suppression sharpens neuronal tuning and decreases the blurriness of the topographic map. With neuronal recordings from the inferior colliculus, I tested whether the echolocation sequence also induced neuronal suppression at the subcortical level. The sequence-induced suppression was weaker in the inferior colliculus than in the cortex. The weaker collicular suppression enables the neurons to track the acoustic events in the echolocation sequence; collicular suppression mainly improves the signal-to-noise ratio. In conclusion, the results demonstrate that cortical suppression is not necessarily a shortcoming for temporal processing of rapidly occurring stimuli, as it has previously been interpreted.
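The comparison between the sequence situation and the concatenated element responses can be sketched as a simple suppression index; a hedged illustration only (the index definition and all spike counts below are invented for demonstration, not the thesis's analysis or data):

```python
def suppression_index(seq_counts, element_counts):
    """Compare total spiking in the sequence situation with the
    concatenated isolated-element responses: 0 means no suppression,
    1 means complete suppression. Hypothetical metric for illustration."""
    return 1.0 - sum(seq_counts) / sum(element_counts)

# Spikes per call-echo element (illustrative numbers only).
sequence_situation = [1, 0, 0, 4, 1]   # partial recovery at one element
element_situation = [3, 2, 4, 5, 3]    # same elements presented in isolation

idx = suppression_index(sequence_situation, element_situation)
```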
Natural environments are usually composed of multiple objects. Thus, each echolocation call reflects off multiple objects, resulting in multiple echoes following each call. At present, it is largely unexplored how neurons process echolocation sequences containing echo information from more than one object (multi-object sequences). Therefore, I stimulated bats with a multi-object sequence containing echo information from three objects positioned at different distances from one another. I tested the influence of each object on the neuronal tuning by stimulating the bats with sequences created by filtering object-specific echoes out of the multi-object sequence. The cortex most reliably processes echo information from the nearest object, whereas echo information from distant objects is not processed due to neuronal suppression. Collicular neurons process echo information from particular objects less selectively and respond to each echo.
For proper echolocation, bats have to distinguish between their own biosonar signals and the signals of conspecifics. This can be quite challenging when many bats echolocate close to one another. In behavioral experiments, the echolocation performance of C. perspicillata was tested in the presence of potentially interfering sounds. In the presence of acoustic noise, the bats increase their sensory acquisition rate, which may increase the update rate of sensory processing. Neuronal recordings from the auditory cortex and inferior colliculus lent support to this hypothesis. Although there were signs of acoustic interference, or jamming, at the neuronal level, the neurons were not completely suppressed and responded to the rest of the echolocation sequence.
One of the key functions of blood vessels is to transport nutrients and oxygen to distant tissues and organs in the body. When blood supply is insufficient, new vessels form to meet the metabolic demands of the tissue and to re-establish cellular homeostasis. Expansion of the vascular network through sprouting angiogenesis requires the specification of endothelial cells (ECs) into leading (sprouting) tip and following (non-sprouting) stalk cells. Attracted by guidance cues, tip cells dynamically extend and retract filopodia to navigate the nascent vessel sprout, whereas trailing stalk cells proliferate to form the extending vascular tube. All of these processes are under the control of environmental signals (e.g. hypoxia, metabolism) and numerous cytokines and peptide growth factors. The Dll4/Notch pathway coordinates several critical steps of angiogenic blood vessel growth. Even subtle alterations in Notch activity can profoundly influence endothelial cell behavior and blood vessel formation, yet little is known about the intrinsic regulation and dynamics of Notch signaling in endothelial cells. In addition, it remains an open question how different growth factor signals impinging on sprouting ECs are coordinated with local environmental cues originating from nutrient-deprived, hypoxic tissue to achieve a balanced endothelial cell response. Acetylation of lysines is a critical posttranslational modification of histones, which acts as an important regulatory mechanism to control chromatin structure and gene transcription. In addition to histones, several non-histone proteins are targeted for acetylation, and reversible acetylation is emerging as a fundamental regulatory mechanism to control protein function, interaction and stability. Previous studies from our group identified the NAD+-dependent deacetylase SIRT1 as a key regulator of blood vessel growth controlling endothelial angiogenic responses.
These studies revealed that SIRT1 is highly expressed in the vascular endothelium during blood vessel development, where it controls the angiogenic activity of endothelial cells. Moreover, SIRT1 has been shown to control the activity of key regulators of cardiovascular homeostasis such as eNOS, Foxo1 and p53. The present study shows that SIRT1 antagonizes Notch signaling by deacetylating the Notch intracellular domain (NICD). We showed that loss of SIRT1 enhances DLL4-induced endothelial Notch responses, as assessed both with different luciferase-responsive elements and by transcriptional analysis of endogenous Notch target gene activation. Conversely, SIRT1 gain of function, by overexpression or pharmacological activation, decreases the induction of Notch targets in response to DLL4 stimulation. We also showed that the NICD can be directly acetylated by PCAF and p300 and that SIRT1 promotes deacetylation of the NICD. We identified 14 lysines that are targeted for acetylation; their mutation abolishes the effects of SIRT1 on Notch responses. Furthermore, over-expression or activation of SIRT1 significantly reduces NICD protein levels. Moreover, SIRT1-mediated NICD degradation can be reversed by blockade of the proteasome, suggesting a mechanism based on ubiquitin-mediated proteolysis. Indeed, we showed that SIRT1 knockdown or pharmacological inhibition decreased NICD ubiquitination. We propose a novel molecular mechanism modulating the amplitude and duration of Notch responses, in which acetylation increases NICD stability and therefore its permanence at target promoters, while SIRT1, by inducing NICD degradation through its deacetylation, shortens Notch responses. To evaluate the physiological relevance of our findings, we used different models in which Notch function during blood vessel formation has been extensively characterized.
First, retinal angiogenesis in mice lacking SIRT1 activity shows decreased branching and reduced endothelial proliferation, similar to what is observed after Notch gain-of-function mutations. ECs from these mice exhibit increased expression of Notch target genes. Second, these results were reproduced during intersomitic vessel growth in sirt1-deficient zebrafish. In both models, the defects could be partially rescued by inhibition of Notch activation. Third, we used an in vitro model of vessel sprouting from differentiating embryoid bodies in response to VEGF in a collagen matrix. Our results showed that Sirt1-deficient cells exhibit impaired sprouting, which correlated with increased NICD levels. In addition, when in competition with wild-type cells in this assay, Sirt1-deficient cells are more prone to occupy the stalk cell position. Taken together, our study identifies reversible acetylation of the NICD as a novel molecular mechanism to adapt the dynamics of Notch signaling and suggests that SIRT1 acts as a rheostat to fine-tune endothelial Notch responses. The NAD+-dependent nature of SIRT1 activity possibly links endothelial Notch responses to environmental cues and metabolic changes during nutrient deprivation in ischemic environments or upon other cellular stresses.
The enzyme acetyl-CoA carboxylase (ACC) plays a fundamental role in fatty acid metabolism. It regulates the first and rate-limiting step in the biosynthesis of fatty acids by catalyzing the carboxylation of acetyl-CoA to malonyl-CoA, and exists as two different isoforms, ACC1 and ACC2. In recent years, ACC has been reported as an attractive drug target for treating diseases such as insulin resistance, hepatic steatosis, dyslipidemia, obesity, metabolic syndrome and nonalcoholic fatty liver disease. An altered fatty acid metabolism is also associated with cancer cell proliferation. In general, the inhibition of ACC provides two ways to regulate fatty acid metabolism: it blocks de novo lipogenesis in lipogenic tissues and stimulates mitochondrial fatty acid β-oxidation. Surprisingly, the role of ACC in human vascular endothelial cells has been neglected so far. This work aimed to investigate the role of ACC and fatty acid metabolism in regulating important endothelial cell functions such as proliferation, migration and tube formation.
To investigate the function of ACC, the ACC inhibitor soraphen A as well as an siRNA-based approach were used. This study revealed that ACC1 is the predominant isoform both in human umbilical vein endothelial cells (HUVECs) and in human dermal microvascular endothelial cells (HMECs). Inhibition of ACC by soraphen A resulted in decreased levels of malonyl-CoA and shifted the lipid composition of endothelial cell membranes. Consequently, membrane fluidity, filopodia formation and migratory capacity were attenuated. Increasing amounts of longer acyl chains within the phospholipid subgroup phosphatidylcholine (PC) appear to overcompensate the shift towards shorter acyl chains within phosphatidylglycerol (PG), resulting in a dominating effect on membrane fluidity. Most importantly, this work provided a link between changes in phospholipid composition and altered endothelial cell migration. The antimigratory effect of soraphen A was linked to a reduced amount of PG and to an increased amount of polyunsaturated fatty acids (PUFAs) within the membrane phospholipids; this link had not previously been described in the literature. Interestingly, reduced filopodia formation was observed upon ACC inhibition by soraphen A, which presumably caused the impaired migratory capacity.
This work revealed a relationship between ACC/fatty acid metabolism, membrane lipid composition and endothelial cell migration. The natural compound soraphen A emerged as a valuable chemical tool to analyze the role of ACC/fatty acid metabolism in regulating important endothelial cell functions. Furthermore, regulating endothelial cell migration via ACC inhibition promises beneficial therapeutic perspectives for the treatment of cell migration-related disorders, such as ischemia reperfusion injury, diabetic angiopathy, macular degeneration, rheumatoid arthritis, wound healing defects and cancer.
The term psychological acculturation describes the changes that can be observed at the individual level as a result of sustained contact between different cultural groups (Berry, 1997). The present work comprises three publications dealing with acculturation processes of children and adolescents with a migration background in Germany. First, an overview of the current state of research on the situation of young migrants in Germany is presented. A central question is how Germany's migration history and immigration policy, as well as public attitudes toward migrants, influence the transcultural adaptation of children and adolescents of non-German ethno-cultural origin. Existing scientific findings are linked with the results of more recent empirical studies in order to contribute to a deeper understanding of the causes of the frequently reported problematic trajectories of psychological and sociocultural adaptation among migrants. Among other risk and protective factors, it is discussed how particularities of Germany as a host country, such as the characteristics of its school system, can affect adaptation trajectories. Our own studies contribute to the understanding of young migrants' adaptation processes by showing that it is not the acculturation strategy of integration but specifically the orientation toward German culture that appears to lead to the most favorable psychological and sociocultural outcomes for individuals. This work furthermore makes an empirical and methodological contribution to acculturation research by developing and validating a measurement instrument for assessing psychological acculturation in children in the German-speaking area, the Frankfurt Acculturation Scale for Children (FRAKK-K), and finally applying it to a practical research question.
Scale development and optimization were based on two studies comprising data from 387 primary school children from two urban regions in Germany (Frankenberg & Bongard, 2013). The results of confirmatory factor analyses support two factors, orientation toward the host culture and orientation toward the culture of origin, each measured with six items. Both subscales show satisfactory internal reliability and criterion validity and can be combined to determine the acculturation strategy (i.e. assimilation, integration, separation, and marginalization). In a first practical application of the scale, the question is examined to what extent extended music instruction and orchestra playing in primary school can promote cultural integration through increased group cohesion.
Primary school children who played in an orchestra showed, over a period of 1.5 years, a stronger increase in orientation toward German culture than children who received no extended music instruction. Music students also felt more strongly integrated into the class community. This suggests that the experience of collaborating and making music within a group led to a stronger orientation toward German culture. Orientation toward the culture of origin remained unaffected. Thus, programs that offer young migrants the opportunity to perform music within a larger, culturally heterogeneous group can serve as an effective intervention to promote cultural adaptation to the majority culture and integration inside and outside the classroom.
Finally, the results of the empirical studies are critically discussed against the background of the current state of research on recent acculturation models as well as the terminology and methodological challenges of the field. Implications for future interventions and research are derived and discussed.
Mitochondrial NADH:ubiquinone oxidoreductase (complex I), the largest multiprotein enzyme of the respiratory chain, catalyses the transfer of two electrons from NADH to ubiquinone, coupled to the translocation of four protons across the membrane. In addition to the 14 strictly conserved central subunits, it contains a variable number of accessory subunits. At present, the best characterized enzyme is complex I from bovine heart, with a molecular mass of about 980 kDa and 32 accessory proteins. In this study, the subunit composition of mitochondrial complex I from the aerobic yeast Y. lipolytica was analysed by a combination of proteomic and genomic approaches. The sequences of 37 complex I subunits were identified. The sum of their individual molecular masses (about 930 kDa) was consistent with the native molecular mass of approximately 900 kDa for Y. lipolytica complex I obtained by BN-PAGE. A genomic search of Y. lipolytica and other eukaryotic databases for homologues of complex I subunits revealed 31 conserved proteins among the examined species. A novel protein, named “X”, was found in purified Y. lipolytica complex I by MALDI-MS. This protein exhibits homology to the thiosulfate sulfurtransferase enzyme known as rhodanese. The finding of a rhodanese-like protein in isolated complex I of Y. lipolytica suggests a special regulatory mechanism of complex I activity through control of the status of its iron-sulfur clusters.
The second part of this study was aimed at investigating the possible role of one of these accessory subunits, the 39 kDa (NUEM) subunit, which is related to the SDR enzyme family. The members of this family function in different redox and isomerization reactions and contain a conserved NAD(P)H-binding site. It has been proposed that the 39 kDa subunit may be involved in a biosynthetic pathway, but its role in complex I is unknown. In contrast to the situation in N. crassa, deletion of the gene encoding the 39 kDa subunit in Y. lipolytica led to the absence of fully assembled complex I. This result may indicate different pathways of complex I assembly in the two organisms. Several site-directed mutations were generated in the nucleotide binding motif. These had either no effect on enzyme activity and NADPH binding, or prevented complex I assembly. Mutations of arginine-65, which is located at the end of the second β-strand and is responsible for the selective interaction with the 2’-phosphate group of NADPH, retained complex I activity in mitochondrial membranes, but the affinity for the cofactor was markedly decreased. Purification of complex I from these mutants resulted in a decrease or loss of ubiquinone reductase activity. It is very likely that replacement of R65 not only decreased the affinity for NADPH but also destabilized the enzyme through steric changes in the 39 kDa subunit. These data indicate that NADPH bound to the 39 kDa subunit (NUEM) is not essential for complex I activity, but is probably involved in complex I assembly in Y. lipolytica.
Acceleration of Biomedical Image Processing and Reconstruction with FPGAs
Increasing chip sizes and better programming tools have made it possible to push the boundaries of application acceleration with reconfigurable computer chips. In this thesis, the potential of acceleration with Field Programmable Gate Arrays (FPGAs) is examined for applications in biomedical image processing and reconstruction. The dataflow paradigm was used to port the analysis of image data for localization microscopy and for 3D electron tomography from an imperative description to the FPGA for the first time.
After the primitives of image processing on FPGAs are presented, a general workflow is given for analyzing imperative source code and converting it to a hardware pipeline in which every node processes image data in parallel. This theoretical foundation is then used to accelerate both example applications. For localization microscopy, a speed-up factor of 185 compared to an Intel i5 450 CPU was achieved, and electron tomography could be sped up by a factor of 5 over an Nvidia Tesla C1060 graphics card, while maintaining full accuracy in both cases.
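The conversion from imperative code to a dataflow pipeline can be mimicked in software. The following Python sketch (stage names and filter choices are illustrative, not taken from the thesis) chains generator stages the way hardware pipeline nodes would be chained, each consuming and producing a pixel stream:

```python
def smooth3(stream):
    # 3-tap moving average over the pixel stream, analogous to a
    # shift-register line buffer in an FPGA pipeline stage
    window = []
    for p in stream:
        window.append(p)
        if len(window) > 3:
            window.pop(0)
        yield sum(window) / len(window)

def threshold(stream, t):
    # binarization stage: emits 1 where the smoothed value reaches t
    for p in stream:
        yield 1 if p >= t else 0

# Chain the stages; in hardware all stages run concurrently on
# successive pixels, here the generators interleave lazily.
pixels = [0, 0, 9, 9, 9, 0, 0]
labels = list(threshold(smooth3(iter(pixels)), 5))
```

In a real dataflow compiler each generator would become a hardware node with its own registers, so all stages process different pixels in the same clock cycle.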
The ab-initio molecular dynamics framework has been a cornerstone of computational solid state physics over the last few decades. Although it is already a mature field, it is still rapidly developing to accommodate the growth in solid state research as well as to efficiently utilize the increase in computing power. Starting from first principles, ab-initio molecular dynamics provides essential information about structural and electronic properties of matter under various external conditions. In this thesis we use ab-initio molecular dynamics to study the behavior of BaFe2As2 and CaFe2As2 under the application of external pressure. BaFe2As2 and CaFe2As2 belong to the family of iron-based superconductors, which are novel and promising superconducting materials. The application of pressure is one of two key methods by which the electronic and structural properties of iron-based superconductors can be modified, the other being doping (or chemical pressure). In particular, it has been noted that pressure conditions have an important effect, but their exact role is not fully understood. To better understand the effect of different pressure conditions we have performed a series of ab-initio simulations of pressure application. In order to apply pressure with an arbitrary stress tensor we have developed a method based on the Fast Inertial Relaxation Engine, whereby the unit cell and the atomic positions are evolved according to the metadynamical equations of motion. We have found that the application of hydrostatic and c-axis uniaxial pressure induces a phase transition from the magnetically ordered orthorhombic phase to the non-magnetic collapsed tetragonal phase in both BaFe2As2 and CaFe2As2. In the case of BaFe2As2, an intermediate non-magnetic tetragonal phase is additionally observed.
Application of uniaxial pressure parallel to the c axis reduces the critical pressure of the phase transition by an order of magnitude, in agreement with the experimental findings. In-plane pressure application did not result in a transition to the non-magnetic tetragonal phase; instead, a rotation of the magnetic order direction could be observed, which is discussed in the context of Ginzburg-Landau theory. We have also found that the magnetostructural phase transition is accompanied by a change in the Fermi surface topology, whereby the hole cylinders centered around the Gamma point disappear, restricting the possible Cooper pair scattering channels in the tetragonal phase. Our calculations also permit us to estimate the bulk moduli and the orthorhombic elastic constants of BaFe2As2 and CaFe2As2.
To study the electronic structure of systems with broken translational symmetry, such as doped iron-based superconductors, it is necessary to develop a method to unfold the complicated band structures arising from supercell calculations. In this thesis we present an unfolding method based on group-theoretical techniques. We achieve the unfolding by employing induced irreducible representations of space groups. The unique feature of our method is that it treats point group operations on an equal footing with translations. This permits us to unfold band structures beyond the limit of translational symmetry and, if certain conditions are met, also to formulate tight-binding models of reduced dimensionality. The inclusion of point group operations in the unfolding formalism allows us to reach important conclusions about the two- versus one-iron picture in iron-based superconductors.
Finally, we present the results of ab-initio structure prediction for the giant volume collapse in MnS2 and for alkali-doped picene. In the case of MnS2, a previously unobserved high-pressure arsenopyrite structure of MnS2 is predicted, and the stability regions of the two competing metastable phases under pressure are determined. In the case of alkali-doped picene, crystal structures at different doping levels were predicted and used to study the role of electronic correlations.
First-principles modeling techniques offer the ability to simulate a wide range of systems under different physical conditions, such as temperature, pressure, and composition, without relying on empirical knowledge. Density functional theory (DFT), a quantum mechanical method, has become an exceptionally successful framework for materials science modeling. Employing DFT makes it possible to gain valuable insights into the fundamental state of a system, enabling the reliable determination of equilibrium crystal structures. Over time, DFT has become an essential tool that can be incorporated into various schemes for predicting the properties of a material related to its structure, insulating/metallic behavior, magnetism, and optics. DFT is regularly applied in numerous fields, spanning from fundamental subjects in condensed matter physics to the study of large-scale phenomena in the geosciences. In the latter, the effectiveness of DFT stems from its ability to simulate the properties of materials found on Earth, other planets, and meteorites, which may be difficult to study directly or to investigate in the laboratory.
In this thesis, a comprehensive examination of a family of monosulfides and a perovskite heterostructure was conducted. These materials are relevant for their potential applications in technology, energy harvesting, and in the case of monosulfides, their speculated abundance on the planet Mercury.
Firstly, a DFT approach was used to analyze two non-magnetic monosulfides, CaS and MgS. We determined their structural properties and then focused on modeling their reflectivity in the infrared region. The calculation of the reflectivity considered both harmonic and anharmonic contributions. In the harmonic limit, the non-analytic correction was employed to accurately determine the LO/TO splitting, which is necessary to delimit the reststrahlen band, that is, the region of maximum reflectivity. The anharmonic effects, given by up to three-phonon and isotopic scattering and included using perturbation theory, primarily smeared the edges of the reflectivity spectra in the high-wavenumber region.
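In the harmonic limit, such a reflectivity curve can be sketched with a single damped polar phonon mode written in the factorized form of the dielectric function, which builds in the LO/TO splitting directly; between the TO and LO frequencies the reflectivity stays near 1, delimiting the reststrahlen band. A minimal Python sketch with purely illustrative parameters (not fitted to CaS or MgS):

```python
import cmath

def reflectivity(w, w_TO=300.0, w_LO=450.0, eps_inf=2.5, gamma=5.0):
    """Normal-incidence IR reflectivity of a single polar phonon mode.
    The factorized dielectric function encodes the LO/TO splitting;
    frequencies in cm^-1, all parameter values purely illustrative."""
    eps = eps_inf * (w_LO**2 - w**2 - 1j * gamma * w) \
                  / (w_TO**2 - w**2 - 1j * gamma * w)
    n = cmath.sqrt(eps)                 # complex refractive index
    return abs((n - 1) / (n + 1)) ** 2  # Fresnel reflectivity at normal incidence
```

For frequencies between w_TO and w_LO the dielectric function is nearly negative real, the refractive index is almost purely imaginary, and the reflectivity approaches unity; anharmonic (multi-phonon) damping would smear these band edges, as described above.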
Secondly, four polymorphs of MnS were studied using a combination of first-principles methods to simulate their antiferromagnetic (AFM) and paramagnetic (PM) states. The integration of DFT+$U$ with special quasirandom structure (SQS) supercells and occupation matrix control techniques was crucial for achieving convergence, structural optimization accuracy, and finite energy band gaps and local magnetic moments in the PM phases. The addition of the Hubbard $U$ correction was necessary to treat the highly correlated Mn $d$-electrons. The success of our approach is evident from our electronic structure predictions for the PM rock-salt B1-MnS polymorph. Experimentally, this phase has been observed to be an insulator, but multiple ab initio studies had previously found metallic behavior. Our computations, on the other hand, predicted insulating and magnetic properties that compare well with available measurements. Additionally, the pressure stability fields of the four MnS polymorphs were studied. In the case of the PM phases, B1-MnS was identified as the most stable up to about 21 GPa, then transforming into the B31-MnS polymorph. This finding is in close agreement with high-pressure experiments reporting a similar phase transformation. The optical properties of B1-, B4-, and B31-MnS were also simulated. The SQS technique was used to obtain soft-mode-free phonon band structures within the harmonic approximation. Then, the anharmonic effects were included, and the reflectivity was calculated for B1-MnS and B4-MnS. In both cases, good agreement for the LO/TO splitting was achieved in comparison to experimental results.
Lastly, the oxygen-deficient LaAlO$_{3-\delta}$/SrTiO$_{3-\delta}$ heterostructure was investigated, also employing DFT+$U$, with particular emphasis on the potential impact of vacancy clustering at the interface. Six distinct configurations of vacancy pairs were studied and their energies were compared to find the most stable one. The orbital reconstruction of the Ti orbitals was examined based on their location with respect to the vacancies, and the local magnetic moments were calculated. The final results showed that linearly arranged vacancies located opposite Ti ions yield the most energetically stable configuration.
The brain is arguably the most complex structure on Earth that humans study. It consists of a vast network of nerve cells capable of processing incoming sensory information to construct a meaningful representation of the environment. It also coordinates the actions of the organism in order to interact with the environment. The brain has the remarkable ability both to store information and to continuously adapt to changing conditions, throughout the entire lifespan. This is essential for humans and animals to develop and learn. The basis for this lifelong learning process is the plasticity of the brain, which constantly adapts and rewires the vast network of neurons. The changes to the synaptic connections and to the intrinsic excitability of each neuron take place through self-organized mechanisms and optimize the behavior of the organism as a whole. The phenomenon of neuronal plasticity has occupied the neurosciences and other disciplines for several decades. Intrinsic plasticity describes the continuous adaptation of a neuron's excitability to maintain a balanced, homeostatic operating range. But synaptic plasticity in particular, which refers to changes in the strength of existing connections, has been studied under many different conditions and has proven more complex with every new study. It is induced by a complex interplay of biophysical mechanisms, depends on various factors such as the frequency of action potentials, their timing, and the membrane potential, and moreover shows a metaplastic dependence on past events. Ultimately, synaptic plasticity influences signal processing and computation in individual neurons and in neuronal networks.
The focus of this thesis is to advance the understanding of the biological mechanisms, and their consequences, that lead to the observed plasticity phenomena through a more unified theory. To this end, I formulate two functional objectives for neuronal plasticity, derive learning rules from them, and analyze their consequences and predictions.
Chapter 3 investigates the discriminability of population activity in networks as a functional objective for neuronal plasticity. The hypothesis is that, in recurrent but also in feed-forward networks, the population activity as a representation of the input signals can be optimized if similar input signals have representations that are as distinct as possible and are thereby easier to discriminate for subsequent processing. The functional objective is therefore to maximize this discriminability through changes in connection strengths and neuronal excitability by means of local, self-organized learning rules. From this functional objective, a number of standard learning rules for artificial neural networks can be derived within a common framework.
Chapter 4 applies a similar functional approach to a more complex, biophysical neuron model. The objective is to maximize a sparse, strongly asymmetric distribution of synaptic strengths, as has repeatedly been found experimentally, through local synaptic learning rules. From this functional approach, all important phenomena of synaptic plasticity can be explained. Simulations of the learning rule in a realistic neuron model with full morphology reproduce the data from timing-, rate- and voltage-dependent plasticity protocols. The learning rule also has an intrinsic dependence on the position of the synapse, which agrees with the experimental findings. Moreover, the learning rule can explain metaplastic phenomena without additional assumptions. The approach predicts a new form of metaplasticity that influences timing-dependent plasticity. The formulated learning rule leads to two novel unifications for synaptic plasticity: First, it shows that the various phenomena of synaptic plasticity can be understood as consequences of a single functional objective. Second, the approach bridges the gap between the functional and mechanistic levels of description. The proposed functional objective leads to a learning rule with a biophysical formulation that can be connected to established theories of the biological mechanisms. Furthermore, the objective of a sparse distribution of synaptic strengths can be interpreted as contributing to energy-efficient synaptic transmission and optimized coding.
A stochastic model for the joint evaluation of burstiness and regularity in oscillatory spike trains
(2013)
The thesis provides a stochastic model to quantify and classify neuronal firing patterns of oscillatory spike trains. A spike train is a finite sequence of time points at which a neuron has an electric discharge (spike), recorded over a finite time interval. In this work, these spike times are analyzed with regard to special firing patterns such as the presence or absence of oscillatory activity and of clusters of spikes (so-called bursts). Bursts do not have a clear and unique definition in the literature. They are often fired in response to behaviorally relevant stimuli, e.g., an unexpected reward or a novel stimulus, but may also appear spontaneously. Oscillatory activity has been found to be related to complex information processing such as feature binding or figure-ground segregation in the visual cortex. Thus, in the context of neurophysiology, it is important to quantify and classify these firing patterns and their changes under certain experimental conditions such as pharmacological treatment or genetic manipulation. In neuroscientific practice, the classification is often done by visual inspection criteria that do not give reproducible results. Furthermore, descriptive methods are used for the quantification of spike trains without relating the extracted measures to properties of the underlying processes.
For that reason, a doubly stochastic point process model is proposed, termed 'Gaussian Locking to a free Oscillator' (GLO). The model has been developed on the basis of empirical observations in dopaminergic neurons and in cooperation with neurophysiologists. As a first stage, the GLO model uses an unobservable oscillatory background rhythm represented by a stationary random walk with normally distributed increments. Two model types describe single spike firing or clusters of spikes; in each, the random number of spikes per beat follows a different probability distribution (Bernoulli in the single spike case, Poisson in the cluster case). In the second stage, the random spike times are placed around their birth beat according to a normal distribution. These spike times form the observed point process, which has five easily interpretable parameters describing the regularity and burstiness of the firing patterns.
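The two-stage generative scheme can be sketched as follows; parameter names and default values are illustrative placeholders, not the thesis's actual parametrization (the bursty mode with Poisson spike counts is shown):

```python
import random

def simulate_glo(n_beats=200, mu=0.1, sigma_beat=0.01,
                 rate=2.0, sigma_spike=0.01, seed=1):
    """Two-stage GLO-style sketch (hypothetical parameter names).
    Stage 1: latent oscillatory beats form a random walk with
             Gaussian increments (mean period mu).
    Stage 2: each beat emits a Poisson(rate) number of spikes,
             each jittered around its birth beat by a Gaussian."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    for _ in range(n_beats):
        t += rng.gauss(mu, sigma_beat)            # next latent beat time
        k, p, L = 0, 1.0, 2.718281828459045 ** -rate
        while True:                               # Poisson sample (Knuth's method)
            p *= rng.random()
            if p <= L:
                break
            k += 1
        for _ in range(k):
            spikes.append(rng.gauss(t, sigma_spike))
    return sorted(spikes)

spikes = simulate_glo()
```

Switching the Poisson count to a Bernoulli draw would give the single-spike firing mode; only the observed spike times, not the latent beats, would be available to the experimenter.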
It turns out that the point process is stationary, simple, and ergodic. It can be characterized as a cluster process and, in the bursty firing mode, as a Cox process. Furthermore, the distribution of the waiting times between spikes can be derived for some parameter combinations. The conditional intensity function of the point process is derived, which is also called the autocorrelation function (ACF) in the neuroscience literature. This function arises by conditioning on a spike at time zero and measures the intensity of spikes x time units later. The autocorrelation histogram (ACH) is an estimate of the ACF. The parameters of the GLO are estimated by fitting the ACF to the ACH with a nonlinear least squares algorithm. This is a common procedure in neuroscientific practice and has the advantage that the GLO ACF can be computed for all parameter combinations and that its properties are closely related to the burstiness and regularity of the process. The precision of estimation is investigated for different scenarios using Monte Carlo simulations and bootstrap methods.
The GLO provides the neuroscientist with objective and reproducible classification rules for the firing patterns on the basis of the model ACF. These rules are inspired by visual inspection criteria often used in neuroscientific practice and thus support and complement usual analysis of empirical spike trains. When applied to a sample data set, the model is able to detect significant changes in the regularity and burst behavior of the cells and provides confidence intervals for the parameter estimates.
Computational oral absorption models, in particular PBBM models, provide a powerful tool for researchers and pharmaceutical scientists in drug discovery and formulation development, as they mimic and can describe the physiological processes relevant to oral absorption. PBBM models provide in vivo context to in vitro experimental data and allow for a dynamic understanding of in vivo drug disposition that is not typically provided by data from standard in vitro assays. Investigations using these models permit informed decision-making, especially regarding formulation strategies in drug development. PBBM models can also be used to investigate and provide insight into mechanisms responsible for complex phenomena such as the food effect on drug absorption. Although there are still some gaps regarding the in silico construction of the gastrointestinal environment, ongoing research in the area of oral drug absorption (e.g., the UNGAP, AGE-POP and InPharma projects) will increase knowledge and enable improvement of these models.
PBBM can nowadays provide an alternative approach to the development of in vitro–in vivo correlations. The case studies presented in this thesis demonstrate how PBBM can provide a mechanistic understanding of the negative food effect and be used to set clinically relevant dissolution specifications for zolpidem immediate release tablets. In both cases, we demonstrated the importance of integrating drug properties with physiological variables to mechanistically understand and observe the impact of these parameters on oral drug absorption.
Various complex physiological processes are initiated upon food consumption, which can enhance or reduce a drug’s dissolution, solubility, and permeability and thus lead to changes in drug absorption. With improvements in modeling and simulation software and design of in vitro studies, PBBM modeling of food effects may eventually serve as a surrogate for clinical food effect studies for new doses and formulations or drugs. Furthermore, the application of these models may be even more critical in case of compounds where execution of clinical studies in healthy volunteers would be difficult (e.g., oncology drugs).
In the fourth chapter we demonstrated that linking biopredictive in vitro dissolution testing (QC or biorelevant methods) to PBBM coupled with PD modeling opens the opportunity to set truly clinically relevant specifications for drug release. This approach can be extended to other drugs regardless of their classification according to the BCS.
With the increased adoption of PBBM, we expect that best practices in the development and verification of these models will be established and can eventually inform regulatory guidance. The application of Physiologically Based Biopharmaceutical Modelling is therefore an area with great potential to streamline late-stage drug development and impact regulatory approval procedures.
The miniaturization of electronics is reaching its limits. Structures necessary to build integrated circuits from semiconductors are shrinking and could reach the size of only a few atoms within the next few years. At the latest at this point in time, the physics of nanostructures will gain importance in our everyday life. This thesis deals with the physics of quantum impurity models. All models of this class exhibit an identical structure: the simple and small impurity has only few degrees of freedom. It can be built out of a small number of atoms or a single molecule, for example. In the simplest case it can be described by a single spin degree of freedom; in many quantum impurity models it can be treated exactly. The complexity of the description arises from its coupling to a large number of fermionic or bosonic degrees of freedom (large meaning that we have to deal with particle numbers of the order of 10^{23}). An exact treatment of the full system thus remains impossible. At the same time, physical effects which arise in quantum impurity systems often cannot be described within a perturbative theory, since multiple energy scales may play an important role. One example for such an effect is the Kondo effect, where the free magnetic moment of the impurity is screened by a "cloud" of fermionic particles of the quantum bath.
The Kondo effect is only one example of the rich physics stemming from correlation effects in many-body systems. Quantum impurity models, and the oftentimes related Kondo effect, have regained the attention of experimental and theoretical physicists since the advent of quantum dots, which are sometimes also referred to as artificial atoms. Quantum dots offer unprecedented control and tunability of many system parameters. Hence, they constitute a nice "playground" for fundamental research, while also being promising candidates for building blocks of future technological devices.
Recently, Loss and DiVincenzo's proposal of a quantum computing scheme based on spins in quantum dots has increased the efforts of experimentalists to coherently manipulate and read out the spins of quantum dots one by one. In this context, two topics are of paramount importance for future quantum information processing: since decoherence times have to be large enough to allow for good error correction schemes, understanding the loss of phase coherence in quantum impurity systems is a prerequisite for quantum computation in these systems. Nonequilibrium phenomena in quantum impurity systems also have to be understood before one may gain control of manipulating quantum bits.
As a first step towards more complicated nonequilibrium situations, the reaction of a system to a quantum quench, i.e., a sudden change of external fields or other parameters of the system, can be investigated. We give an introduction to a powerful numerical method used in this field of research, the numerical renormalization group method, and apply this method and its recent enhancements to various quantum impurity systems.
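The idea of a quantum quench can be illustrated on the smallest possible system: a single spin-1/2 whose field is suddenly rotated. This toy example (a stand-in illustration, not the numerical renormalization group itself) shows the characteristic coherent oscillations that follow a sudden parameter change:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def sz_after_quench(t, B1=1.0):
    """Prepare the spin in the ground state of H0 = -B0*sz (spin up),
    then suddenly switch the field direction: H1 = -B1*sx.
    The magnetization then oscillates coherently, <sz(t)> = cos(2*B1*t)."""
    psi0 = np.array([1.0, 0.0], dtype=complex)    # ground state of H0
    E, V = np.linalg.eigh(-B1 * sx)               # diagonalize the new Hamiltonian
    psi = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T @ psi0
    return float(np.real(psi.conj() @ sz @ psi))
```

For a genuine impurity model the quench couples the spin to ~10^23 bath degrees of freedom, and the oscillations dephase and decay; capturing that decay is what requires methods such as the time-dependent NRG discussed in the text.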
The main part of this thesis may be structured in the following way:
- Ferromagnetic Kondo Model,
- Spin-Dynamics in the Anisotropic Kondo and the Spin-Boson Model,
- Two Ising-coupled Spins in a Bosonic Bath,
- Decoherence in an Aharonov-Bohm Interferometer.
A novel role for mutant mRNA degradation in triggering transcriptional adaptation to mutations
(2020)
Robustness to mutations promotes organisms’ well-being and fitness. The increasing number of mutants in various model organisms, and humans, showing no obvious phenotype (Bouche and Bouchez, 2001; Chen et al., 2016b; Giaever et al., 2002; Kok et al., 2015) has renewed interest into how organisms adapt to gene loss. In the presence of deleterious mutations, genetic compensation by transcriptional upregulation of related gene(s) (also known as transcriptional adaptation) has been reported in numerous systems (El-Brolosy and Stainier, 2017; Rossi et al., 2015; Tondeleir et al., 2012); however, the molecular mechanisms underlying this response remained unclear. To investigate this phenomenon, I develop and study multiple models of transcriptional adaptation in zebrafish and mouse cell lines. I first show that transcriptional adaptation is not caused by loss of protein function, indicating that the trigger lies upstream, and find that the response involves enhanced transcription of the related gene(s). Furthermore, I observe a correlation between levels of mutant mRNA degradation and upregulation of related genes. To investigate the role of mutant mRNA degradation in triggering the response, I generate mutant alleles that do not transcribe the mutated gene and find that they fail to induce a transcriptional response and display stronger phenotypes. Transcriptome analysis of alleles displaying mutant mRNA degradation revealed upregulation of a significant proportion of genes displaying sequence similarity with the mutated gene’s mRNA, suggesting a model whereby mRNA degradation intermediates induce transcriptional adaptation via sequence similarity. Further mechanistic analyses suggested RNA-decay factors-dependent chromatin remodeling, and repression of antisense RNAs to be implicated in the response. These results identify a novel role for mutant mRNA degradation in buffering against mutations. 
Moreover, these findings have important implications for understanding disease-causing mutations and should help in designing mutations that lead to minimal transcriptional adaptation-induced compensation, facilitating the study of gene function in model organisms.
In this dissertation a non-deterministic lambda-calculus with call-by-need evaluation is treated. Call-by-need means that subexpressions are evaluated at most once and only if their value must be known to compute the overall result. Also called "sharing", this technique is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. In addition, the calculus has function application, lambda abstractions, sequential evaluation, and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or input/output with side effects. In this work, non-determinism additionally makes it visible when sharing is broken. Based on the bisimulation method, this work develops a notion of equality which respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 represents a new concept, insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus. Therefore, in section 3.2 a further calculus, lambda-Approx, has to be defined. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND and lambda-Approx agrees. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. By the use of this extended method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained within the contextual preorder of the lambda-ND calculus.
However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity. Finally, chapter 6 discusses possible extensions of the base calculus such as recursive bindings or case and constructors. As a fundamental study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without major hurdles. However, recursive bindings cannot be represented simply by a fixed-point combinator like Y, so further investigations are necessary.
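The operational content of call-by-need sharing can be sketched with a memoizing thunk. The sketch below illustrates the sharing semantics only (evaluate at most once, and only on demand), not the lambda-ND calculus itself:

```python
class Thunk:
    """Memoizing thunk: the suspended computation runs at most once,
    mimicking the sharing that the let-construct makes explicit."""
    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:
            self.value = self.compute()
            self.evaluated = True
            self.compute = None    # drop the closure, as an implementation would
        return self.value

calls = []
def expensive():
    calls.append(1)                # count how often the body actually runs
    return 6 * 7

# let x = expensive() in x + x  --  under call-by-need the bound
# expression is evaluated once, and not at all if x is never demanded
x = Thunk(expensive)
result = x.force() + x.force()
```

With a non-deterministic body, running it twice could yield two different values; this is exactly the sense in which non-determinism makes broken sharing observable.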
For several decades, lysozyme has been one of the most extensively studied proteins in the literature, used mainly as a model protein for elucidating folding and unfolding processes. Since the question of misfolding and its link to neurodegenerative diseases has not been fully resolved to this day, there is ample room for further research approaches. In the present work, two model systems were therefore used, hen egg-white lysozyme and human lysozyme, each in its non-native unfolded state. These unfolded ensembles were investigated using NMR spectroscopic methods and yielded very detailed, in part surprising, new insights into the structure and dynamics of the two proteins, thus providing important findings on folding and aggregation processes. ...
This work is concerned with two topics at the intersection of convex algebraic geometry and optimization.
We develop a new method for the optimization of polynomials over polytopes. From the point of view of convex algebraic geometry, the most common approach to approximating polynomial optimization problems is to solve semidefinite programming relaxations coming from the application of Positivstellensätze. In optimization, non-linear programming problems are often solved using branch and bound methods. We propose a fused method that uses Positivstellensatz relaxations as lower bounding methods in a branch and bound scheme. By deriving a new error bound for Handelman's Positivstellensatz, we show convergence of the resulting branch and bound method. Through the application of Positivstellensätze, semidefinite programming has gained importance in polynomial optimization in recent years. While it proves to be a powerful tool, the underlying geometry of the feasibility regions (spectrahedra) is not yet well understood. In this work, we study polyhedral and spectrahedral containment problems; in particular, we classify their complexity and introduce sufficient criteria to certify the containment of one spectrahedron in another.
Many hominin species are best physically represented and understood by the sum of their dental morphologies. Generally, taxonomic affinities and evolutionary trends in development (ontogeny) and morphology (phylogeny) can be deduced from dental analyses. More specifically, the study of dental remains can yield a wealth of information on many facets of hominin evolution, life history, physiology and ecological adaptation; in short, the organism's paleobiome. Functionally, teeth present information about dietary preferences, that is, the dietary niche in ecological context and, in turn, masticatory function. As the amount and types of information that can be gleaned from 2-dimensional tooth measurement exhaust themselves, 3-dimensional microscopic modeling and analysis presents fertile ground for reexamination and reinterpretation of dental characteristics (Bromage et al., 2005). As such, a novel, non-destructive approach has been developed which combines the work of two established technologies (confocal microscopy and 3D modeling) adapted specifically for the purpose of mineralized tissue imaging. Through this method, 3D functional masticatory and therefore occlusal molar microwear can be visualized, quantified and comparatively analyzed to assess dietary preference in Javanese Homo erectus. This method differs from other microwear investigative techniques (defining 'pits' vs. 'scratches', microtexture analysis, etc.) in that it defines a molar's masticatory microwear functional interactions in 3 dimensions as its baseline dataset for further interpretations and analyses. Due to poor specimen collection techniques employed during the first half of the 20th century, the very complex geologic nature of the Sangiran Dome and disagreements over its chronostratigraphy, only a few scientific works have addressed the Sangiran 7 (S7) Homo erectus molar collection (n=25) (e.g. Grine and Franzen, 1994; Kaifu, 2006).
Grine and Franzen's (1994) work was a predominantly qualitative initial assessment of the specimens and identified five specimens that might better be ascribed to a fossil pongid rather than to H. erectus. They also noted several molars for which tooth position (M1 or M2) could not be determined (Grine and Franzen, 1994). Kaifu (2006) comparatively examined crown sizes in several S7 molars.
The Sangiran 7 collection originates from two distinct geologic horizons: ten specimens from the older Sangiran Formation (S7a, ~1.7 to 1.0 mya) and fifteen from the younger, overlying Bapang Formation (S7b, ~1.0 to 0.7 mya). During this million-year period, Java was connected to the mainland during various glacio-eustatic low-stands in sea level. These mainland connections varied in size, extent and climatic condition, and therefore in faunal and floral composition. As the S7 sample may represent the earliest Homo erectus migrants into Java and spans long durations of occupation, its investigation offers the potential to understand the various influences that climatic and ecogeographic fluctuations had on these populations. Since the sample consists only of teeth, an ecodietary approach has been deemed the most logical and appropriate. Questions regarding intra- and inter-sample relationships within S7 will also be addressed.
By comparing various aspects of the H. erectus dentition against those of hunter/gatherers (H/G) whose diets are known, functional dietary similarity can be directly assessed. Thus a comparative molar sample consisting of the following historic hunter/gatherers (n=63) has been included in order to assess H. erectus's diet in ecological context: Inuit (n=9), Pacific Northwest Tribes (n=11), Fuegians (n=11), Australian Aborigines (n=12) and Bushmen (n=20). Methodologically, this approach produces a 3D facet microwear vector (fmv) signature for each molar, which can then be compared for statistical similarity.
Microwear (and, as such, the fmv signatures) was defined by the regular, parallel striations found on specific cusp facets known to arise from patterned, directional masticatory movements. This differs significantly from post-mortem or taphonomic microwear, which produces striations at irregular angles on multiple, non-masticatory surfaces (Puech et al., 1985; Teaford, 1988). A 'match value' is produced to determine the similarity of two molars' fmvs. The 'match values' are ranked (high to low) and these rankings are used to statistically analyze and infer dietary preference: between Sangiran 7 (as an entire sample) and the historic hunter/gatherer H. sapiens whose diet and ecogeography are known; within S7a and S7b and then among the S7 sample (e.g. S7a vs. S7b); whether the purported Pongo molars actually affiliate well with H. erectus or the hunter/gatherers, or whether they demonstrate distinctly different fmv signatures altogether; and whether fmv signatures are useful in distinguishing molars whose tooth position is in doubt (e.g. M1 or M2).
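Purely as an illustration of the match-value-and-ranking logic described above, one could imagine each fmv signature as a list of striation vectors and score their pairwise agreement. The function, the scoring formula, and all data below are hypothetical; they are not the thesis's 3D facet-based definition of the fmv or of the match value.

```python
import math

def match_value(fmv_a, fmv_b):
    """Illustrative 'match value' between two microwear vector signatures.

    Each signature is a list of (orientation_deg, length) striation vectors,
    one per facet (both lists assumed facet-aligned and equally long). The
    score averages orientation agreement (cosine of twice the angle
    difference, since striations are axial) weighted by relative length
    similarity. A stand-in formula, not a reimplementation of the thesis.
    """
    score = 0.0
    for (ang_a, len_a), (ang_b, len_b) in zip(fmv_a, fmv_b):
        orient = math.cos(2 * math.radians(ang_a - ang_b))   # axial data
        size = min(len_a, len_b) / max(len_a, len_b)
        score += 0.5 * (orient + 1) * size
    return score / len(fmv_a)

# Hypothetical facet signatures (orientation in degrees, striation length):
s7_molar = [(12, 1.0), (85, 0.6), (40, 0.8)]
bushman  = [(15, 0.9), (80, 0.7), (44, 0.8)]
inuit    = [(70, 1.2), (10, 0.5), (95, 0.9)]

# Rank comparison samples by match value, high to low:
ranked = sorted([("Bushman", match_value(s7_molar, bushman)),
                 ("Inuit",   match_value(s7_molar, inuit))],
                key=lambda kv: -kv[1])
print([name for name, _ in ranked])  # ['Bushman', 'Inuit']
```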
When compared against individual H/G molars, the results show that Sangiran 7 H. erectus correlates most closely with the Bushmen across all areas of fmv signature analysis. However, within broader dietary categories (yearly reliant on proteinaceous foods; seasonally reliant on proteinaceous foods; not reliant on proteinaceous foods), it was found that H. erectus allied most closely with the two hunter/gatherer subpopulations in the 'seasonally reliant on proteinaceous foods' category (Australian Aborigines and Pacific Northwest Tribes). There was also evidence for dietary change or specialization over time. As the environment changed from the occupation of the earlier Sangiran individuals to that of the later Bapang individuals, dietary preference shifted from a focus on vegetative foods to a diet much more inclusive of proteinaceous resources.
These results are considered logical within the larger ecogeographic and chronostratigraphic context of the Sangiran Dome during the Pleistocene. However, a larger sample would be needed to confirm this. Although general dietary preferences can be inferred from this method, it is not possible at present to identify specific foods consumed on a daily basis (e.g. tubers or tortoise meat).
Of the five specimens possibly allied with Pongo, S7-14 matched at the 'high' designation with a hunter/gatherer, S7-62 matched 'moderately', S7-20 matched 'low', while the remaining two could not be matched with any other teeth for various reasons. Although designation to Pongo cannot be ruled upon at this time using this method, it does demonstrate that at least two of the teeth correlate well with hunter/gatherers who do not share dietary similarity with Pongo. This suggests their designation as Pongo should be re-evaluated more closely. As for the four specimens whose tooth position was uncertain, S7-14 matched 'highly' with 1st molars, S7-62 and S7-78 matched 'moderately' with 2nd and 1st molars respectively, while S7-20 matched only at the 'low' designation. Although this approach is still exploratory, it adds another analytical tool for use in determining tooth position.
In sum, this method has demonstrated its usefulness in defining and functionally analyzing a novel 3D molar microwear dataset to interpret dietary preference. Future work would include a pan-H. erectus molar sample in order to illuminate broader populational, taxonomic and dietary correlations within and among all H. erectus specimens. A larger, more heterogeneous historic H/G sample would also be included in order to provide a wider dietary comparative population. This method can be further extended to include and compare any hominins, as well as any organism that produces microwear upon its molars. Also, the data obtained and the resultant fmv signature diagrams have the potential to be incorporated into 3D VR reconstructions of mandibular movement, thus recreating mastication in extinct organisms and leading to more robust anatomical and physiological investigations, especially when viewed in the context of larger environmental conditions or changes.
The Earth's surface condition we find today is the result of long exposure to the metabolism of life forms. In particular, molecular oxygen in the atmosphere is a feature that developed over time. The first substantial and lasting rise of the atmospheric oxygen level happened ≈ 2.5 Ga ago, but localities are reported where transiently elevated oxygen levels appeared before this time. Tracing the timing and circumstances of the earliest availability of free oxygen in the atmosphere is important for understanding the habitats of early microbial life forms on Earth.
This thesis aims to obtain information on oxygen levels and the related atmospheric cycling of metals in sediments of the 3.5 to 3.2 Ga Barberton Greenstone Belt. First, as iron was a ubiquitous constituent of Archean seawater, I investigated its isotopic composition in minerals of chemical sediments. In doing so, I tried to resolve the changes within the water basin on the scale of small sedimentary sequence cycles. Second, I focused on the minor constituents of Archean seawater. The Re-Os geochronologic system and the abundance patterns of the platinum-group elements were chosen to integrate information on oxygen-promoted weathering of a large source area. To integrate information over a large time interval, the isotopes of uranium were investigated across a large stratigraphic section.
The two key findings of this thesis are:
• Quantitative oxidation of ferrous iron in surface layers of Paleoarchean seawater occurred during the onset and termination of hydrothermal FeIIaq delivery into shallow waters.
• Paleoarchean sedimentary successions of the Barberton Greenstone Belt lack any evidence of transient basin-scale oxygenation.
The Manzimnyama Iron Formation (IF, Fig Tree Group, Barberton Greenstone Belt, South Africa) has been shown to consist of cyclic stacks of lithostratigraphic units with varying amounts of iron oxide and carbonate minerals. In-situ femtosecond laser-ablation ICP-MS iron isotope measurements showed that the majority of siderite (δ56Fe ≈ −0.5 ‰) precipitated directly from seawater of δ56Fe ≈ 0 ‰. Ferric iron from the surface layers is preserved in ≤ 1 µm hematite and in magnetite that grew within the consolidated sediment. During FeIIaq events, fine-grained hematite (δ56Fe ≈ 2.2 ‰) and magnetite (δ56Fe 0.5 to 0.8 ‰) indicate oxygen levels in surface waters lower than 0.0002 µM. Upon onset and termination of iron oxide abundance, magnetite with δ56Fe ≈ 0 ‰ indicates that low concentrations of FeIIaq in surface waters were oxidized quantitatively. These observations demonstrate the existence of iron oxidation in Paleoarchean surface waters independent of FeIIaq concentration. This is the first investigation of a Paleoarchean IF showing that lithostratigraphic cyclicity can be traced in the iron isotopic composition of oxide minerals.
ID-ICP-MS measurements of Re, Ir, Ru, Pt and Pd, trace element (SF-ICP-MS) and ID-MC-ICP-MS uranium isotope determinations have been applied to carbonaceous shale of the Mapepe Fm. (Fig Tree Group) after inverse aqua regia leaching and bulk digestion. The sediments reveal a silicified fraction which exhibits a seawater REE signature and a mixture of detrital and meteoritic PGE. Neither enrichment of the redox-sensitive elements Re or Mo nor fractionated uranium isotopes have been found over a stratigraphic interval of several hundred meters. The non-silica fraction shows no depletion of Re, which indicates that the detrital material had no contact with oxidizing fluids. ID-TIMS measurements of Re and Os after the CrO3-SO4 Carius tube method on two sample intervals showed that the Re-Os isotopic systems of the non-silica fractions are identical to those of two komatiite occurrences. Weltevreden Fm. and Komati Fm. rocks were uplifted, eroded and transported to the deep part of the sedimentary basin without any change to the Re-Os system. Negatively fractionated uranium isotopes (δ238U = −0.41 ± 0.01 ‰) associated with detrital Ba-Cr-U occurrences suggest the existence of distal redox processes involving uranium species. This study demonstrates that during exposure and deposition of the Mapepe Fm., free oxygen was not available for weathering in the catchment area.
A multiple filter test for the detection of rate changes in renewal processes with varying variance
(2014)
The thesis provides novel procedures in the statistical field of change point detection in time series.
Motivated by a variety of neuronal spike train patterns, a broad stochastic point process model is introduced. This model features points in time (change points) at which the associated event rate changes. For purposes of change point detection, filtered derivative processes (MOSUM) are studied, and functional limit theorems for these processes are derived. These results are used to support novel procedures for change point detection; in particular, multiple filters (bandwidths) are applied simultaneously in order to detect change points on different time scales.
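The filtered-derivative idea can be sketched as follows: slide a double window over the event times and compare the event counts to the left and right of each time point; a large scaled difference indicates a rate change. This is a minimal sketch under simplifying assumptions (a Poisson-style variance estimate, a single bandwidth rather than the thesis's multiple-filter procedure); function and variable names are hypothetical.

```python
import random

def mosum(events, bandwidth, t_max, step=None):
    """Filtered-derivative (MOSUM) statistic for event-rate changes.

    For each time t, counts events in the right window (t, t+h] minus the
    left window (t-h, t], scaled by a rough standard-deviation estimate.
    A large |G(t)| indicates a rate change near t.
    """
    step = step or bandwidth / 10
    ts, stats = [], []
    t = bandwidth
    while t <= t_max - bandwidth:
        left = sum(1 for e in events if t - bandwidth < e <= t)
        right = sum(1 for e in events if t < e <= t + bandwidth)
        scale = max(left + right, 1) ** 0.5   # Poisson-style variance estimate
        ts.append(t)
        stats.append((right - left) / scale)
        t += step
    return ts, stats

# Simulated spike train: rate 5 Hz on [0, 50), rate 15 Hz on [50, 100).
random.seed(0)
events = sorted([random.uniform(0, 50) for _ in range(250)]
                + [random.uniform(50, 100) for _ in range(750)])
ts, g = mosum(events, bandwidth=10, t_max=100)
peak = ts[max(range(len(g)), key=lambda i: abs(g[i]))]
print(round(peak))  # prints a time near the true change point t = 50
```

Running this with several bandwidths at once, and declaring a change point where any of the resulting statistics exceeds its threshold, is the multiple-filter idea: short windows resolve closely spaced changes, long windows detect small rate differences.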
In light of the global sea-level rise and climate change of the 21st century, it is important to look back into the recent past in order to understand what the future might hold. A multi-proxy data set was compiled to evaluate the influence of geomorphological and environmental factors, such as antecedent topography, subsidence, sea level and climate, on reef, sand apron and lagoon development in modern carbonate platforms through the Holocene. To this end, remote sensing and morphological data from 122 modern carbonate platforms and atolls in the Atlantic, Indian and Pacific Oceans were combined with a case study from the oceanic (Darwinian) barrier-reef system of Bora Bora, French Polynesia, South Pacific.
Antecedent topography and platform size are hypothesized to be factors controlling Holocene sand apron development and extension in modern atolls and carbonate platforms. Antecedent topography describes the elevation and relief of the underlying Pleistocene topography (karst) and determines the distance from the sea floor to the rising postglacial sea level. Maximum lagoon depth and marginal reef thickness, where available in the literature, were used as proxies for antecedent topography. Sand apron proportions of 122 atolls and carbonate platforms from the Atlantic, Indian and Pacific Oceans were quantified and correlated to maximum lagoon depth, total platform area and marginal reef thickness. This study shows that sand apron proportions increase with decreasing lagoon depth and with decreasing platform area. The interaction of antecedent topography and Holocene sea-level rise is responsible for variations in accommodation space and ultimately determines the extent of the lateral expansion of sand aprons. In general, sand apron formation started when marginal reefs approached relative sea level. Spatial and regional variations in sea-level history caused sand apron formation to begin earlier in the Indo-Pacific region (transgressive-regressive) than in the Western Atlantic Ocean (transgressive).
The influence of sea level, antecedent topography and subsidence of a volcanic island on late Quaternary reef development was evaluated based on six rotary core transects on the barrier and fringing reefs of Bora Bora. This study was designed to re-evaluate the Darwinian model, the subsidence theory of reef development, which genetically connects fringing reef, barrier reef and atoll development through continuous subsidence of the volcanic basement. Postglacial sea-level rise, and to a minor degree subsidence, were identified as the major factors controlling Holocene reef development in that they created accommodation space and controlled reef architecture. Antecedent topography was also an important factor because the Holocene barrier reef is located on a Pleistocene barrier reef forming a topographic high. Pleistocene soil and basalt formed the pedestal of the fringing reef. Uranium-thorium dating shows that the barrier and fringing reefs developed contemporaneously during the Holocene.
In the barrier–reef lagoon of Bora Bora, the influence of environmental factors such as sea level, climate, tsunamis and tropical cyclones on Holocene sediment dynamics was evaluated based on sedimentological, paleontological, geochronological and geochemical data. The lagoonal succession comprises mixed carbonate-siliciclastic sediments overlying peat and Pleistocene soil. The multi-proxy data set shows variations in grain size, total organic carbon (a proxy for primary productivity), and Ca and Cl element intensities (proxies for carbonate availability and lagoonal salinity) during the mid-late Holocene. These patterns could result from event sedimentation during storms and correlate with event deposits found in nearby Tahaa, probably induced by elevated cyclone activity. Accordingly, elevated erosion and runoff from the volcanic island and lower lagoonal salinity would result from rainfall during repeated cyclone landfall. However, Ti/Ca and Fe/Ca ratios, as proxies for terrigenous sediment delivery, peaked in the early Holocene and have declined since the mid-Holocene. Benthic foraminifera assemblages do not indicate reef-to-lagoon transport. Alternatively, higher and sustained hydrodynamic energy was probably induced by stronger trade winds and a higher-than-present sea level during the mid-late Holocene. The increase in mid-late Holocene sediment dynamics within the back-reef lagoon is interpreted to reflect sediment-load shedding of sand aprons due to the oversteepening of slopes at sand apron/lagoon edges during their progradation, rather than an increase in tropical storm activity during that time.
The influence of sea-level and climate changes on sediment import, composition and distribution in the Bora Bora lagoon during the Holocene was also evaluated. The lagoonal facies succession comprises siderite-rich marly wackestones, foraminifera-siderite wackestones, mollusk-foraminifera marly packstones and mollusk-rich wackestones during the early-mid Holocene, and mudstones since the mid-late Holocene. During the early Holocene, enhanced weathering and iron input from the volcanic island due to wetter climate conditions led to the formation of siderite within the lagoonal sediments. The geochemical composition of these siderites shows that precipitation was driven by microbial activity and iron reduction in the presence of dissolved bicarbonate. Chemical substitutions at grain margins illustrate changes in the oxidation state and probably reflect changes in pore water chemistry due to sea-level rise and climate change (rainfall). In the late Holocene, sediment transport into the lagoon was hampered by motus on the windward side of the lagoon, which led to early submarine lithification within the lagoon.
How the brain evolved remains a mystery. The goal of this thesis is to understand the fundamental processes behind the evolutionary history of the brain. Amniotes appeared 320 million years ago with the transition from water to land. This early group bifurcated into sauropsids (reptiles and birds) and synapsids (mammals). Amniote brains evolved separately and display obvious structural and functional differences. Although those differences reflect brain diversification, all amniote brains share a common ancestor, and their brains show multiple derived similarities: equivalent structures, networks, circuits and cell types have been preserved over millions of years. Finding these differences and similarities will help us understand brain evolution and function. Studying brain evolution can be approached at various levels, including brain structure, circuits, cell types, and genes. We propose a focus on cell types for a more comprehensive understanding of brain evolution. Neurons are the basic building blocks and the most diverse cell types in the brain. Their evolution reflects changes in the developmental processes that produce them, which in turn may shape the neural circuits they belong to. However, there is currently a lack of unified criteria for studying the homology of connectivity and development between neurons. A neuron's transcriptome is a molecular representation of its identity, connectivity, and developmental/evolutionary history. Hence the comparison of neuronal transcriptomes within and across species is a new and transformative development in the study of brain evolution, and we propose it as a way to fill this gap and unify these criteria.
In previous studies, published in Science (Tosches et al., 2018) and Nature (Norimoto et al., 2020), we leveraged scRNAseq in reptiles to re-evaluate the origins and evolution of the mammalian cerebral cortex and claustrum. Motivated by the success of this approach, in this thesis we have expanded single-cell profiling to the entire brain of a lizard species, the Australian dragon Pogona vitticeps, with a special focus on the thalamus and prethalamus. This approach allowed us to study the evolution of neuron types in amniotes. To this end, we aimed to build a multilevel atlas of the lizard brain based on histology and transcriptomics and to compare it to an equivalent mouse dataset (Zeisel et al., 2018).
Our atlas reveals a general structure that is consistent with that of other amniote brains, allowing us to make a direct comparison between lizard and mouse despite their evolutionary divergence 320 million years ago. Through our analysis of the transcriptomes of various neuron types, we have uncovered a core of conserved classes and discovered a striking dichotomy of new and conserved neuron types throughout the brain. This research challenges the traditional notion that certain brain regions are more conserved than others.
Our research also has uncovered the evolutionary history of the lizard thalamus and prethalamus by comparing them to homologous brain regions of the mouse. This pioneering research sheds new light on our understanding of the evolutionary history of the lizard brain. We propose a new classification of the lizard thalamic nuclei based on
transcriptomics. Our research revealed that the thalamic neuron types in lizards can be grouped into two large, conserved categories from the medial to the lateral thalamus. These categories are encoded by a common set of effector genes, linking connectivity-based theories and molecular studies of these areas. In our data we have seen that the medial-lateral transcriptomic axis is conserved in mouse and lizard; this conservation was most likely already present in the common ancestor. Although there is a shared medial-lateral axis, a deeper study of the thalamic cell types revealed a partial diversification of the thalamic population, specifically in the sensory-related lateral thalamus; in contrast, the medial thalamic neuron types have been preserved.
On the other hand, the comparison with the mammalian prethalamus allowed us to confirm that the lizard ventromedial thalamic neuron types are homologous to mouse reticular thalamic neuron types (Díaz et al., 1994), even though they do not express the classical reticular thalamic nucleus (RTn) marker PV/pvalb. We also discovered that there has been a simplification of the mammalian prethalamic neuron types in favor of an increase in the number of interneuron (IN) types within the thalamus. We suggest that the loss of GABAergic neuronal types in the mammalian prethalamus is linked to the need for more efficient control of thalamo-pallial communication in mammals, while in lizards, where thalamo-pallial communication is probably simpler, the prethalamus retains a higher diversity.
The aim of this work is to develop an effective equation of state (EoS) for QCD, having the correct asymptotic degrees of freedom, to be used as input for dynamical studies of heavy ion collisions. We present an approach for modeling an EoS that respects the symmetries underlying QCD and includes the correct asymptotic degrees of freedom, i.e. quarks and gluons at high temperature and hadrons in the low-temperature limit. We achieve this by including quark degrees of freedom and the thermal contribution of the Polyakov loop in a hadronic chiral sigma-omega model. The hadronic part of the model is a nonlinear realization of a sigma-omega model. As the fundamental symmetries of QCD should also be present in its hadronic states, such an approach is widely used to describe hadron properties below and around Tc. The quarks are introduced as thermal quasi-particles coupling to the Polyakov loop, while the dynamics of the Polyakov loop are controlled by a potential term fitted to reproduce pure gauge lattice data. In this model the sigma field serves as the order parameter for chiral restoration and the Polyakov loop as the order parameter for deconfinement. The hadrons are suppressed at high densities by excluded volume corrections. As a next step, we introduce our new HQ model equation of state into a microscopic+macroscopic hybrid approach to heavy ion collisions. This hybrid approach is based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The present implementation allows comparing pure microscopic transport calculations with hydrodynamic calculations using exactly the same initial conditions and freeze-out procedure. The effects of the change in the underlying dynamics, ideal fluid dynamics vs. non-equilibrium transport theory, are explored.
The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields of strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic and directed flow are shown to be insensitive to changes in the EoS, while the smaller mean free path in the hydrodynamic evolution is directly reflected in higher flow results, which are consistent with the experimental data. This finding indicates qualitatively that physical mechanisms like viscosity and other non-equilibrium effects play a considerably more important role than the EoS when bulk observables like flow are investigated. In the last chapter, results for the thermal production of MEMOs in nucleus-nucleus collisions from a combined micro+macro approach are presented. Multiplicities, rapidity and transverse momentum spectra are predicted for Pb+Pb interactions at different beam energies. The presented excitation functions for various MEMO multiplicities show a clear maximum in the upper FAIR energy regime, making this facility the ideal place to study the production of these exotic forms of multistrange objects.
Synchronized neural activity in the visual cortex is associated with small time delays (up to ~10 ms). The magnitude and direction of these delays depend on stimulus properties. Thus, synchronized neurons produce fast sequences of action potentials, and the order in which units tend to fire within these sequences is stimulus-dependent, but not stimulus-locked. In the present thesis, I investigated whether such preferred firing sequences repeat with sufficient accuracy to serve as a neuronal code. To this end, I developed a method for extracting the preferred sequence of firing in a group of neurons from their pair-wise preferred delays, as measured by the offsets of the centre peaks in their cross-correlation histograms. This analysis method was then applied to highly parallel recordings of neuronal spiking activity made in area 17 of anaesthetized cats in response to simple visual stimuli, like drifting gratings and moving bars. Using a measure of effect size, I then analyzed the accuracy with which preferred firing sequences reflected stimulus properties, and found that in the presence of gamma oscillations, the time at which a unit fired within the firing sequence conveyed stimulus information almost as precisely as the firing rate of the same unit. Moreover, the stimulus-dependent changes in firing rates and firing times were largely unrelated, suggesting that the information they carry is not redundant. Thus, despite operating at a time scale of only a few milliseconds, firing sequences have the strong potential to provide a precise neural code that complements firing rates in the cortical processing of stimulus information.
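One simple way to turn pairwise delays into a global sequence, once the delays have been measured from cross-correlogram peak offsets, is to rank units by their mean signed delay to all partners. This is a minimal sketch of the idea with hypothetical data and names; it mirrors the spirit of the thesis's extraction method, not its exact estimator.

```python
def preferred_sequence(delays):
    """Recover a global firing order from pairwise preferred delays.

    delays[i][j] > 0 means unit i tends to fire before unit j (the signed
    offset of the centre peak of their cross-correlogram, in ms; the matrix
    is antisymmetric). Each unit is scored by the mean of its signed delays
    to all partners; sorting by this score yields a sequence maximally
    consistent with the pairwise measurements.
    """
    n = len(delays)
    score = [sum(delays[i][j] for j in range(n) if j != i) / (n - 1)
             for i in range(n)]
    return sorted(range(n), key=lambda i: -score[i])  # earliest unit first

# Hypothetical delay matrix (ms) for 3 units with true order 2 -> 0 -> 1,
# perturbed by measurement noise:
d = [[0.0,  3.1, -2.0],
     [-3.1, 0.0, -5.2],
     [2.0,  5.2, 0.0]]
print(preferred_sequence(d))  # [2, 0, 1]
```

Averaging over all partners makes the recovered order robust to noise in any single cross-correlogram, at the cost of assuming the pairwise delays are roughly consistent with a linear ordering.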
This thesis examines the literary output of German servicemen writers writing from the occupied territories of Europe in the period 1940-1944. Whereas literary-biographical studies and appraisals of the more significant individual writers have been written, and also a collective assessment of the Eastern front writers, this thesis addresses in addition the German literary responses in France and Greece, as being then theatres of particular cultural/ideological attention. Original papers of the writer Felix Hartlaub were consulted by the author at the Deutsches Literatur Archiv (DLA) at Marbach. Original imprints of the wartime works of the subject writers are referred to throughout, and citations are from these. As all the published works were written under conditions of wartime censorship and, even where unpublished, for fear of discovery written in oblique terms, the texts were here examined for subliminal authorial intention. The critical focus of the thesis is on literary quality: on aesthetic niveau, on applied literary form, and on integrity of authorial intention. The thesis sought to discover: (1) the extent of the literary output in book-length forms. (2) the auspices and conditions under which this literary output was produced. (3) the publication history and critical reception of the output. The thesis took into account, inter alia: (1) occupation policy as it pertained locally to the writers’ remit; (2) the ethical implications of this for the writers; (3) the writers’ literary stratagems for negotiating the constraints of censorship.
In literary translation 'correctness' is rarely ratified by linguistic rules; it is more often a question of what a sensitive translator feels to be correct. Intuition will therefore play a major part. This intuition is seen here neither as instinctive reaction prompted by experience, nor as native competence, but as an inquiring, self-moderating influence inspired by the language itself. It is treated in this respect as an informed intuition, that is, as having a linguistic base for sensitive judgement. This assumes that the literary translator is both a creative writer and his own critical reader as well as a fine judge of language potential. This line is applied to translating meaning and sense, transferring the very language, imitating the form and style, re-creating the features, and above all, to capturing those unique qualities of the original. After dealing with word-accuracy, the question of literary input demanded by form and style is examined. The treatment of language used for effect features in a section on Kafka. The merits and the problems of translating dialect as dialect for its own sake are looked at closely and in a positive way as are the possibilities of reproducing 'oddities' of language. The immense task of translating the language of Joyce ('Ulysses') with all its vagaries and skilful manipulation of words is examined for the possibility of providing an accurate copy. The ultimate test of reproducing a uniqueness of artistic creation together with the profound thought which inspired it, is reserved for a section on Hopkins. While it is recognized that, owing to the constrictions imposed by the extreme and sensitive use of language, no translation can fully include all that there is in his poems, it might be possible to capture enough of their essence to give an impression of a 'German' Hopkins at work. A major objective throughout is the establishment of a linguistic base for the part played by intuition in literary translation.
Spin waves in yttrium-iron garnet have been the subject of research for decades. Recently, the report of Bose-Einstein condensation at room temperature has brought these experiments back into focus. Because of the small mass of quasiparticles compared with, for example, atoms, the condensation temperature can be much higher. With spin-wave quasiparticles, so-called magnons, even room temperature can be reached by externally injecting magnons. Possible applications in information technology are also of interest: using excitations instead of charges as carriers of information offers a much more efficient way of processing data, and basic logical operations have already been realized. Finally, the wavelength of spin waves, which can be reduced to the nanoscale, offers the opportunity to further miniaturize devices for receiving signals, for example in smartphones.
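The mass argument can be made concrete with the standard ideal-gas result for the Bose-Einstein condensation temperature (quoted here only as background; the pumped magnon gas treated in the thesis is not an ideal gas in equilibrium):

```latex
T_c = \frac{2\pi\hbar^2}{m k_B}\left(\frac{n}{\zeta(3/2)}\right)^{2/3}
```

At fixed density n, T_c scales as 1/m, which is why light quasiparticles such as magnons can condense at room temperature while atomic gases require nanokelvin temperatures.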
For all of these purposes the magnon system is driven far out of equilibrium. In order to get a better fundamental understanding, we concentrate in the main part of this thesis on the nonequilibrium aspect of magnon experiments and investigate their thermalization process. In this context we develop formalisms which are of general interest and which can be adapted to many different kinds of systems.
A milestone in describing gases out of equilibrium was the Boltzmann equation, derived by Ludwig Boltzmann in 1872. In this thesis, extensions to the Boltzmann equation with improved approximations are derived. For the application to yttrium-iron garnet, we describe the thermalization process after magnons have been excited by an external microwave field.
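For orientation, the classical Boltzmann equation referred to above has the standard textbook form (the thesis derives extended quantum-kinetic versions of it):

```latex
\frac{\partial f}{\partial t}
+ \mathbf{v}\cdot\nabla_{\mathbf{r}} f
+ \frac{\mathbf{F}}{m}\cdot\nabla_{\mathbf{v}} f
= \left(\frac{\partial f}{\partial t}\right)_{\mathrm{coll}}
```

where f(r, v, t) is the single-particle phase-space distribution, F an external force, and the right-hand side the collision integral that drives thermalization.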
First we consider the Bose-Einstein condensation phenomenon. A special property of thin films of yttrium-iron garnet is that the magnon dispersion has its minimum at finite wave vectors, which leads to an interesting behavior of the condensate. We investigate the spatial structure of the condensate using the Gross-Pitaevskii equation and find that the magnons cannot condense at the energy minimum alone; higher Fourier modes also have to be occupied macroscopically. In principle this can lead to a localization on a lattice in real space.
Next we use functional renormalization group methods to go beyond the perturbation-theory expressions in the Boltzmann equation. It is a difficult task to find a suitable cutoff scheme which fits the constraints of nonequilibrium, namely causality and the fluctuation-dissipation theorem when approaching equilibrium. The cutoff scheme we developed for bosons in this context is therefore of general interest for the functional renormalization group. In certain approximations we obtain a system of differential equations with a transition-rate structure similar to that of the Boltzmann equation. We consider a model of two kinds of free bosons, one of which acts as a thermal bath for the other. Taking a suitable initial state, we can use our formalism to describe the dynamics of magnons such that an enhanced occupation of the ground state is achieved. Numerical results are in good agreement with experimental data.
Finally we extend our model to include the pumping process and the decrease of the magnon particle number until thermal equilibrium is reached again. Additional terms which explicitly break the U(1) symmetry make it necessary to also extend the theory from which a kinetic equation can be deduced. As these extensions are complicated, we restrict ourselves to perturbation theory; because of the weak interactions in yttrium-iron garnet, this already provides good results.
A graph theoretical approach to the analysis, comparison, and enumeration of crystal structures
(2008)
As an alternative approach to lattices and space groups, this work explores graph theory as a means to model crystal structures. The approach uses quotient graphs and nets - the graph-theoretical equivalents of cells and lattices - to represent crystal structures. After a short review of related work, new classes of cycles in nets are introduced, and their ability to distinguish between non-isomorphic nets as well as their computational complexity are evaluated. Then, two methods to estimate a structure’s density from the corresponding net are proposed. The first uses coordination sequences to estimate the number of nodes in a sphere, whereas the second determines the maximal volume of a unit cell. Based on the quotient graph alone, methods are proposed to determine whether nets consist of islands, chains, planes, or penetrating, disconnected sub-nets. An algorithm for the enumeration of crystal structures is revised and extended to a search for structures possessing certain properties. Particular attention is given to the exclusion of redundant nets and of those which, by the nature of their connectivity, cannot correspond to a crystal structure. Nets with four four-coordinated nodes, corresponding to sp3-hybridised carbon polymorphs with four atoms per unit cell, are completely enumerated in order to demonstrate the approach. In order to render quotient graphs and nets independent of crystal structures, they are reintroduced in a purely graph-theoretical way. Based on this, the issue of iso- and automorphism of nets is reexamined. It is shown that the topology of a net (that is, the bonds in a crystal) severely constrains the symmetry of the embedding (that is, the crystal) and, in the case of connected nets, determines the space group except for the setting. Several examples are studied and conclusions on phases are drawn (pseudo-cubic FeS2 versus pyrite; α- versus β-quartz; marcasite- versus rutile-like phases).
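The coordination-sequence method above counts the nodes at each graph distance from a reference node. As a purely illustrative sketch (not code from the thesis), the sequence can be computed by breadth-first search on a net; the simple cubic net below is a hypothetical stand-in for a net unfolded from a quotient graph:

```python
def coordination_sequence(neighbors, origin, kmax):
    """Number of nodes at graph distance 1..kmax from origin,
    found by breadth-first search over the (infinite) net."""
    seen = {origin}
    frontier = [origin]
    counts = []
    for _ in range(kmax):
        shell = []
        for node in frontier:
            for nb in neighbors(node):
                if nb not in seen:
                    seen.add(nb)
                    shell.append(nb)
        counts.append(len(shell))
        frontier = shell
    return counts

# Simple cubic net: node (x, y, z) is bonded to its six axis neighbours.
def sc_neighbors(p):
    x, y, z = p
    return [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
            (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]

print(coordination_sequence(sc_neighbors, (0, 0, 0), 3))  # [6, 18, 38]
```

The partial sums of such a sequence approximate the number of nodes within a given topological radius, which is what links coordination sequences to a density estimate.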
As the automorphisms of certain quotient graphs stipulate a translational symmetry higher than an arbitrary embedding of the corresponding net would show, they are examined in more detail, and a method to reduce the size of such quotient graphs is proposed. Besides two instructional examples with 2-dimensional graphs, the halite, calcite, magnesite, barytocalcite, and strontium feldspar structures are discussed. For some of the structures it is shown that the quotient graph equivalent to a centred cell is reduced to a quotient graph equivalent to the primitive cell. For the partially disordered strontium feldspar, it is shown that even if it could be annealed to an ordered structure, the unit cell would likely remain unchanged. For the calcite and barytocalcite structures it is shown that the equivalent nets are not isomorphic.
‘The whole is more than the sum of its parts.’ This idea has been brought forward by psychologists such as Max Wertheimer who formulated Gestalt laws that describe our perception. One law is that of collinearity: elements that correspond in their local orientation to their global axis of alignment form a collinear line, compared to a noncollinear line where local and global orientations are orthogonal. Psychophysical studies revealed a perceptual advantage for collinear over non-collinear stimulus context. It was suggested that this behavioral finding could be related to underlying neuronal mechanisms already in the primary visual cortex (V1). Studies have shown that neurons in V1 are linked according to a common fate: cells responding to collinearly aligned contours are predominantly interconnected by anisotropic long-range lateral connections. In the cat, the same holds true for visual interhemispheric connections. In the present study we aimed to test how the perceptual advantage of a collinear line is reflected in the anatomical properties within or between the two primary visual cortices. We applied two neurophysiological methods, electrode and optical recording, and reversibly deactivated the topographically corresponding contralateral region by cooling in eight anesthetized cats. In electrophysiology experiments our results revealed that influences by stimulus context significantly depend on a unit’s orientation preference. Vertical preferring units had on average a higher spike rate for collinear over non-collinear context. Horizontal preferring units showed the opposite result. Optical imaging experiments confirmed these findings for cortical areas assigned to vertical orientation preference. Further, when deactivating the contralateral region the spike rate for horizontal preferring units in the intact hemisphere significantly decreased in response to a collinear stimulus context. 
Most of the optical imaging experiments revealed a decrease in cortical activity in response to either stimulus context crossing the vertical midline. In conclusion, our results support the notion that modulating influences from stimulus context can be quite variable. We suggest that the kind of influence may depend on a cell’s orientation preference. The perceptual advantage of a collinear line as one of the Gestalt laws proposes is not uniformly represented in the activity of individual cells in V1. However, it is likely that the combined activity of many V1 neurons serves to activate neurons further up the processing stream which eventually leads to the perceptual phenomenon.
"The whole is more than the sum of its parts." This idea has been brought forward by psychologists such as Max Wertheimer who formulated Gestalt laws that describe our perception. One law is that of collinearity: elements that correspond in their local orientation to their global axis of alignment form a collinear line, compared to a noncollinear line where local and global orientations are orthogonal. Psychophysical studies revealed a perceptual advantage for collinear over non-collinear stimulus context. It was suggested that this behavioral finding could be related to underlying neuronal mechanisms already in the primary visual cortex (V1). Studies have shown that neurons in V1 are linked according to a common fate: cells responding to collinearly aligned contours are predominantly interconnected by anisotropic long-range lateral connections. In the cat, the same holds true for visual interhemispheric connections. In the present study we aimed to test how the perceptual advantage of a collinear line is reflected in the anatomical properties within or between the two primary visual cortices. We applied two neurophysiological methods, electrode and optical recording, and reversibly deactivated the topographically corresponding contralateral region by cooling in eight anesthetized cats. In electrophysiology experiments our results revealed that influences by stimulus context significantly depend on a unit’s orientation preference. Vertical preferring units had on average a higher spike rate for collinear over non-collinear context. Horizontal preferring units showed the opposite result. Optical imaging experiments confirmed these findings for cortical areas assigned to vertical orientation preference. Further, when deactivating the contralateral region the spike rate for horizontal preferring units in the intact hemisphere significantly decreased in response to a collinear stimulus context. 
Most of the optical imaging experiments revealed a decrease in cortical activity in response to either stimulus context crossing the vertical midline. In conclusion, our results support the notion that modulating influences from stimulus context can be quite variable. We suggest that the kind of influence may depend on a cell’s orientation preference. The perceptual advantage of a collinear line as one of the Gestalt laws proposes is not uniformly represented in the activity of individual cells in V1. However, it is likely that the combined activity of many V1 neurons serves to activate neurons further up the processing stream which eventually leads to the perceptual phenomenon.
I derive a general effective theory for hot and/or dense quark matter. After introducing general projection operators for hard and soft quark and gluon degrees of freedom, I explicitly compute the functional integral for the hard quark and gluon modes in the QCD partition function. Upon appropriate choices for the projection operators one recovers various well-known effective theories, such as the Hard Thermal Loop/Hard Dense Loop effective theories as well as the High Density Effective Theory by Hong and Schaefer. I then apply the effective theory to cold and dense quark matter and show how it can be utilized to simplify the weak-coupling solution of the color-superconducting gap equation. In general, one considers as relevant quark degrees of freedom those within a thin layer of width 2 Lambda_q around the Fermi surface and as relevant gluon degrees of freedom those with 3-momenta less than Lambda_gl. It turns out that it is necessary to choose Lambda_q << Lambda_gl, i.e., scattering of quarks along the Fermi surface is the dominant process. Moreover, this special choice of the two cutoff parameters Lambda_q and Lambda_gl facilitates the power-counting of the numerous contributions in the gap equation. In addition, it is demonstrated that both the energy and the momentum dependence of the gap function have to be treated self-consistently in order to determine the imaginary part of the gap function. For quarks close to the Fermi surface the imaginary part is calculated explicitly and shown to be of sub-subleading order in the gap equation.
This dissertation is devoted to the study of thermodynamics for quantum gauge theories. The poor convergence of quantum field theory at finite temperature has been the main obstacle in the practical applications of thermal QCD for decades. In this dissertation I apply hard-thermal-loop perturbation theory, which is a gauge-invariant reorganization of the conventional perturbative expansion for quantum gauge theories, to the thermodynamics of QED and Yang-Mills theory to three-loop order. For the Abelian case, I present a calculation of the free energy of a hot gas of electrons and photons by expanding in a power series in mD/T, mf/T and e^2, where mD and mf are the photon and electron thermal masses, respectively, and e is the coupling constant. I demonstrate that the hard-thermal-loop reorganization improves the convergence of the successive approximations to the QED free energy at large coupling, e ~ 2. For the non-Abelian case, I present a calculation of the free energy of a hot gas of gluons by expanding in a power series in mD/T and g^2, where mD is the gluon thermal mass and g is the coupling constant. I show that at three-loop order hard-thermal-loop perturbation theory is compatible with lattice results for the pressure, energy density, and entropy down to temperatures T ~ 2 - 3 Tc. The results suggest that HTLpt provides a systematic framework that can be used to calculate static and dynamic quantities for temperatures relevant at the LHC.
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Terahertz (THz) waves could not be obtained except with huge systems, such as free-electron lasers, until the invention of a photo-mixing technique at Bell laboratory in 1984 [1]. The first method, using the Auston switch, could generate frequencies up to 1 THz [2]. Since then, as a result of efforts to extend the frequency limit, combined antennas for generation and detection reached several THz [3, 4]. This technique has developed so far as to fill up the so-called 'THz gap'. At the same time, much research has also aimed at increasing the output power [5-7]. In the 1990s, a large extension of the frequency band was brought about by non-linear optical methods [8-11]. These drastically expanded the accessible frequency region and recently enabled measurements up to 41 THz [12]. In parallel, other approaches have yielded new generation and detection methods, for CW THz as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating in research on the Bloch oscillator, have recently been obtained from quantum cascade structures, albeit only at a low temperature of 60 K [20-22]. This research attracts much attention because, given its low cost and easier operation, it could be a breakthrough that allows the THz technique to spread into industry as well as research. The technology of short-pulse lasers has naturally helped the THz field to develop: against the background of the appearance of stable Ti:sapphire lasers and high-power chirped-pulse amplification (CPA) lasers, in place of dye lasers, much effort has been concentrated on pulse compression and amplification techniques [23]. Viewed from the application side, the THz technique has come into the limelight as a promising measurement method.
The discovery of absorption peaks of proteins and DNA in the THz region has, over the last several years, promoted putting the technique into practice in medicine and pharmaceutical science [24-27]. It is also known that absorption lines of light polar molecules exist in this region; therefore, gas and water-content monitoring in the chemical and food industries has been proposed [28-32]. Furthermore, many reports, such as measurements of carrier distributions in semiconductors, of the refractive index of thin films, and of object shapes as radar, indicate that this technique has a wide range of applications [33-37]. I believe it is worth attempting to apply it in the steel-making industry, owing to its unique advantages. The THz wavelength range of 30-300 µm is both insensitive to the surface roughness of steel products and capable of detection with sub-millimeter precision, for remote surface inspection. There is also the possibility of measuring the thickness or dielectric constants of relatively highly conductive materials, because of the high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5. Furthermore, measurements at high temperature may be possible, since the influence of thermal radiation is smaller than for visible and infrared light. These ideas motivated me to start this THz work.
The fungal interaction with plants is a 400-million-year-old phenomenon, which presumably assisted in the plants’ establishment on land. In a natural ecosystem, all plants - ranging from large trees to sea-grasses - are colonized by fungal endophytes, which can be detected inter- and intracellularly within the tissues of apparently healthy plants, without causing obvious negative effects on their host. These ubiquitous and diverse microorganisms likely play important roles in plant fitness and development. However, knowledge on the ecological functions of fungal root endophytes is scarce. Among their possible functions, endophytes are implicated in mutualisms with plants, which may increase plant resistance to biotic stressors like herbivores and pathogens, and/or to abiotic factors like soil salinity and drought. Endophytes are also fascinating microorganisms with regard to their high potential to produce a broad spectrum of secondary metabolites with expected ecological functions. However, evidence suggests that the interactions between host plants and endophytes are not static and that endophytes express different symbiotic lifestyles ranging from mutualism to parasitism, which makes it difficult to predict the ecological roles of these cryptic microorganisms. To reveal the ecological function of fungal root endophytes, this doctoral thesis aims at assessing the interactions of fungal root endophytes with different plants and their effects on plant fitness, based on their phylogeny, traits, and competition potential in settings encompassing different abiotic contexts. To understand the cryptic implication of nonmycorrhizal endophytes in ecosystem processes, we isolated a diverse spectrum of fungal endophytes from roots of several plant species growing in different natural contexts and tested their effects on different model plants under axenic laboratory conditions.
Additionally, we aimed at investigating the effect of abiotic and biotic variables on the outcome of interactions between fungal root endophytes and plants.
In summary, the morphological and physiological traits of 128 fungal endophyte strains within ten fungal orders were studied and artificial experimental systems were used to reproduce their interactions with three plant species under laboratory conditions. Under defined axenic conditions, most endophytes behaved as weak parasites, but their performance varied across plant species and fungal taxa. The variation in the interactions was partly explained by convergent fungal traits that separate groups of endophytes with potentially different niche preferences. According to my findings, I predict that the functional complementarity of strains is essential in structuring natural root endophytic communities. Additionally, the responses of plant-endophyte interactions to different abiotic factors, namely nutrient availability, light intensity, and substrate’s pH, indicate that the outcome of plant-fungus relationships may be robust to changes in the abiotic environment. The assessment of the responses of plant endophyte interactions to biotic context, as combinations of selected dominant root fungal endophytes with different degrees of trait similarity and shared evolutionary history, indicates that frequently coexisting root-colonizing fungi may avoid competition in inter-specific interactions by occupying specific niches, and that their interactions likely define the structure of root-associated fungal communities and influence the microbiome impacts on plant fitness.
In conclusion, my findings suggest that dominant fungal lineages display different ecological preferences and complementary sets of functional traits, with different niche preferences within root tissues to avoid competition. Also, their diverse effects on plant fitness are likely host-isolate dependent and robust to changes in the abiotic environment when these encompass the tolerance range of either symbiont.
A framework for the analysis and visualization of multielectrode spike trains / by Ovidiu F. Jurjut
(2009)
The brain is a highly distributed system of constantly interacting neurons. Understanding how it gives rise to our subjective experiences and perceptions depends largely on understanding the neuronal mechanisms of information processing. These mechanisms are still poorly understood, and the timescale on which the coding process evolves remains a matter of ongoing debate. Recently, multielectrode recordings of neuronal activity have begun to contribute substantially to elucidating how information coding is implemented in brain circuits. Unfortunately, analysis and interpretation of multielectrode data are often difficult because of their complexity and large volume. Here we propose a framework that enables the efficient analysis and visualization of multielectrode spiking data. First, using self-organizing maps, we identified reoccurring multi-neuronal spike patterns that evolve on various timescales. Second, we developed a color-based visualization technique for these patterns. They were mapped onto a three-dimensional color space based on their reciprocal similarities, i.e., similar patterns were assigned similar colors. This innovative representation enables a quick and comprehensive inspection of spiking data and provides a qualitative description of pattern distribution across entire datasets. Third, we quantified the observed pattern expression motifs and investigated their contribution to the encoding of stimulus-related information. An emphasis was on the timescale on which patterns evolve, covering the temporal scales from synchrony up to mean firing rate. Using our multi-neuronal analysis framework, we investigated data recorded from the primary visual cortex of anesthetized cats. We found that cortical responses to dynamic stimuli are best described as successions of multi-neuronal activation patterns, i.e., trajectories in a multidimensional pattern space.
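The first step can be illustrated with a minimal one-dimensional self-organizing map in plain Python; the unit count, learning schedule, and toy binned spike patterns below are invented for illustration and do not come from the study:

```python
import random

def train_som(patterns, n_units, epochs=60, lr0=0.5, radius0=1.0, seed=1):
    """Minimal 1-D self-organizing map: each unit stores a prototype
    vector; the best-matching unit (and, early on, its neighbours)
    is moved toward each presented pattern."""
    rng = random.Random(seed)
    dim = len(patterns[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)          # learning rate decays
        radius = radius0 * (1.0 - epoch / epochs)  # neighbourhood shrinks
        for p in patterns:
            # best-matching unit: smallest squared Euclidean distance
            bmu = min(range(n_units),
                      key=lambda i: sum((u - x) ** 2
                                        for u, x in zip(units[i], p)))
            for i in range(n_units):
                h = 1.0 if i == bmu else (0.5 if abs(i - bmu) <= radius else 0.0)
                units[i] = [u + lr * h * (x - u) for u, x in zip(units[i], p)]
    return units

# Toy binned spike patterns: two reoccurring motifs with noisy variants.
pats = [[1, 1, 0, 0], [1, 0.9, 0.1, 0], [0, 0, 1, 1], [0.1, 0, 0.9, 1]]
prototypes = train_som(pats, n_units=2)
```

After training, each unit's prototype summarizes one family of reoccurring patterns, and assigning every observed pattern to its best-matching unit yields the kind of pattern labels that can then be colored and tracked over time.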
Patterns that encode stimulus-specific information are not confined to a single timescale but can span a broad range of timescales, which are tightly related to the temporal dynamics of the stimuli. Therefore, the strict separation between synchrony and mean firing rate is somewhat artificial as these two represent only extreme cases of a continuum of timescales that are expressed in cortical dynamics. Results also indicate that timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (~10-20 ms) appear to play a particularly salient role in coding, as patterns evolving on these timescales seem to be involved in the representation of stimuli with both slow and fast temporal dynamics.
In this work the flexibility requirements of a highly renewable European electricity network that has to cover fluctuations of wind and solar power generation on different temporal and spatial scales are studied. Cost optimal ways to do so are analysed that include optimal distribution of the infrastructure, large scale transmission, storage, and dispatchable generators. In order to examine these issues, a model of increasing sophistication is built, first considering different flexibility classes of conventional generation, then adding storage, before finally considering transmission to see the effects of each.
To conclude, in this work it was shown that slowly flexible base-load generators can only be used in energy systems with renewable shares of less than 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a Europe-wide energy network. These results are independent of the complexity level of the cost assumptions used for the models. The use of storage technologies allows the required conventional backup capacity to be reduced further. This highlights the importance of including in the energy system additional technologies that provide flexibility to balance fluctuations caused by the renewable energy sources. These technologies could, for example, be advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
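The interplay between backup capacity and storage described above can be sketched with a toy dispatch rule (an illustrative simplification with invented numbers, not the optimization model of the thesis): an ideal, lossless store absorbs renewable surpluses and covers deficits first, and dispatchable backup covers the residual peak.

```python
def required_backup(load, renewables, storage_power=0.0, storage_energy=0.0):
    """Required dispatchable backup capacity for a load/renewables
    time series: an ideal, lossless store first absorbs surpluses
    and covers deficits; backup covers whatever remains."""
    fill = 0.0    # current storage filling level
    backup = 0.0  # largest uncovered deficit seen so far
    for demand, supply in zip(load, renewables):
        residual = demand - supply
        if residual <= 0:                 # surplus: charge the store
            charge = min(-residual, storage_power, storage_energy - fill)
            fill += charge
        else:                             # deficit: discharge first
            discharge = min(residual, storage_power, fill)
            fill -= discharge
            backup = max(backup, residual - discharge)
    return backup

load = [1.0, 1.0, 1.0, 1.0]
wind = [1.5, 0.2, 1.4, 0.3]
print(required_backup(load, wind))                # without storage
print(required_backup(load, wind,
                      storage_power=0.5, storage_energy=1.0))
```

Even this toy rule reproduces the qualitative conclusion: adding storage lowers the residual peak and hence the required backup capacity.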
It was demonstrated that a cost-optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's. However, this requires a very large transmission grid expansion, to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; however, a compromise grid with four times today's line volume already locks in most of the cost benefits. It is therefore clear that by increasing the pan-European network connectivity, a cost-efficient integration of renewable energies can be achieved, which is strongly needed to reach current climate change prevention goals.
It was also shown that a similarly cost-efficient, highly renewable European electricity system can be achieved under a wide range of additional policy constraints and plausible changes of economic parameters.
Most elements heavier than iron are synthesized in stars during neutron capture reactions in the r- and s-process. The s-process nucleosynthesis is composed of the main and the weak component. While the s-process is considered to be well understood, further investigations using nucleosynthesis simulations rely on measured neutron capture cross sections as crucial input parameters. Neutron capture cross sections relevant for the s-process can be measured using various experimental methods. A prominent example is the activation method relying on the 7Li(p,n)7Be reaction as a neutron source, which has the advantage of high neutron intensities and is able to create a quasi-stellar neutron spectrum at kBT = 25 keV. Other neutron sources able to provide quasi-stellar spectra at different energies suffer from lower neutron intensities. Simulations using the PINO tool suggest the neutron activation of samples with different neutron spectra, provided by the 7Li(p,n)7Be reaction, and a subsequent linear combination of the obtained spectrum-averaged cross sections to determine the Maxwellian-averaged cross section (MACS) at various energies of astrophysical relevance. To investigate the accuracy of the PINO tool at proton energies between the neutron emission threshold at Ep = 1880.4 keV and 2800 keV, measurements of the 7Li(p,n)7Be neutron fields are presented, which were carried out at the PTB Ion Accelerator Facility of the Physikalisch-Technische Bundesanstalt in Braunschweig. The neutron fields for ten different proton energies were measured. The measured neutron fields agree well with the simulations at proton energies Ep = 1887, 1897, 1907, 1912 and 2100 keV. For the other proton energies, Ep = 2000, 2200, 2300, 2500, and 2800 keV, differences between measurement and simulation were found and are discussed. The obtained results can be used to benchmark and adapt the PINO tool and provide crucial information for further improvement of the neutron activation method for astrophysics.
As an application of the 7Li(p,n)7Be neutron fields, an activation experiment campaign on gallium is presented, an element that is mostly produced during the weak s-process in massive stars. The available cross section data for the 69,71Ga(n,γ) reactions, mostly determined by activation measurements, show differences up to a factor of three. To improve the data situation, activation measurements were carried out using the 7Li(p,n)7Be reaction. The neutron capture cross sections for a quasi-stellar neutron spectrum at kBT = 25 keV were determined for 69Ga and 71Ga.
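The linear-combination idea rests on the standard definition of the MACS as a Maxwell-Boltzmann weighted average of the energy-dependent cross section. A small numerical sketch (not from the thesis; the units and the 1/v test cross section are purely illustrative) evaluates that average by trapezoidal integration:

```python
import math

def macs(sigma, kT, n=20000, emax_factor=40.0):
    """Maxwellian-averaged cross section,
    MACS = (2/sqrt(pi)) / (kT)^2 * Integral_0^inf sigma(E) E exp(-E/kT) dE,
    evaluated with the trapezoidal rule on [0, emax_factor*kT]."""
    emax = emax_factor * kT
    h = emax / n

    def f(E):
        return sigma(E) * E * math.exp(-E / kT)

    total = 0.5 * (f(1e-12) + f(emax))  # tiny offset avoids sigma(0)
    for i in range(1, n):
        total += f(i * h)
    return (2.0 / math.sqrt(math.pi)) * total * h / kT ** 2

# Sanity check with a 1/v cross section, sigma(E) = sigma0*sqrt(E0/E),
# whose MACS is analytically sigma0*sqrt(E0/kT); with E0 = kT it is sigma0.
kT = 25.0      # keV  (illustrative)
sigma0 = 1.0   # barn (illustrative)
value = macs(lambda E: sigma0 * math.sqrt(kT / E), kT)  # close to sigma0
```

Replacing the analytic test cross section with measured spectrum-averaged values at several spectra is what the linear-combination approach described above approximates.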
This work aimed to investigate the regulation and activity of 5-lipoxygenase (5-LO), the central enzyme in leukotriene biosynthesis, in two colorectal cancer cell lines. The leukotriene pathway is positively correlated with the progression of several solid malignancies; however, factors regulating 5-LO expression and activity in tumors are poorly understood.
Cancer development, as well as cancer progression, is strongly dependent on the tumor microenvironment. In the conventional monolayer culture of cancer cell lines, the cell-matrix and cell-cell interactions present in native tumors are absent. Furthermore, it is already known that various colon cancer cell lines dysregulate several important signaling pathways due to 3D growth. Therefore, the expression of the leukotriene cascade in HT-29 and HCT-116 colorectal cancer cells was investigated within a three-dimensional (3D) context using multicellular tumor spheroids to mimic a more physiological environment compared to conventional cell culture. In particular, the expression of 5-LO, cPLA2α, and LTA4 hydrolase was altered due to 3D cell growth, as shown by qPCR and Western blot analysis. High cellular density in monolayer cultures led to similar results. The observed 5-LO upregulation was found to be inversely correlated with cell proliferation, determined by cell cycle analysis, and with activation of PI3K/mTORC-2- and MEK-1/ERK-dependent pathways, determined using pharmacological pathway inhibition, stable shRNA knockdown cell lines, and analysis via qPCR and Western blot. Subsequently, the transcription factor E2F1 and its target gene MYBL2 were identified as playing a role in the repression of 5-LO during cell proliferation. For this purpose, several stable MYBL2 over-expression and ALOX5 reporter cell lines were prepared and analyzed. Since 5-LO had already been identified as a direct p53 target gene, the influence of p53, which is variably expressed in the cell lines (HT-29, p53 R273H mut; HCT-116, p53 wt; HCT-116, p53 KO), was investigated as well, including HCT-116 cells carrying a p53 knockout. The PI3K/mTORC-2- and MEK-1/ERK-dependent suppression of 5-LO was also found in tumor cells of other origins (Capan-2, Caco-2, MCF-7), determined using pharmacological pathway inhibition and subsequent analysis via qPCR.
This suggests that the identified mechanism might apply to other tumor entities as well.
5-LO activity was previously described as attenuated in HT-29 and HCT-116 cells compared to polymorphonuclear leukocytes, which express a highly active 5-LO. However, the present study showed that the enzyme activity is indeed low but inducible in HT-29 and HCT-116 cells. Of note, the general lipid mediator profile and the mediator concentrations were comparable to those of M2 macrophages. Finally, the analysis of substrate availability in HT-29 and HCT-116 cells revealed a vast difference between formed metabolite concentrations and supplemented fatty acid concentrations, indicating that the substrates are either transformed into lipoxygenase-independent metabolites or are esterified into the cellular membrane.
In summary, the data presented in this work demonstrate that 5-LO expression and activity are tightly regulated in HT-29 and HCT-116 cells and fine-tuned due to environmental conditions. The cells suppress 5-LO during proliferation but upregulate the expression and activity of the enzyme under cellular stress-triggering conditions. This implies a possible role of 5-LO in manipulating the tumor stroma to support a tumor-promoting microenvironment.
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) and T-cell/histiocyte-rich large B-cell lymphoma (THRLBCL) are rare types of malignant lymphoma. Both NLPHL and THRLBCL are frequently observed in middle-aged men, with THRLBCL often presenting at an advanced Ann Arbor stage with B symptoms and being associated with a more aggressive course. However, due to the limited number of tumor cells in the tissue of both NLPHL and THRLBCL, few studies have been conducted on these lymphomas, and current results are mainly based on general molecular genetic studies.
In order to obtain a better understanding of these disease entities and of possible changes in their nuclear and cytoplasmic sizes, the present study compared the different NLPHL variants and THRLBCL in terms of nuclear size and nuclear volume, using both 2D and 3D analysis. The 2D analysis of nuclear size and nuclear volume revealed no significant differences between the groups. The 3D analysis of NLPHL and THRLBCL, however, revealed a slightly enlarged nuclear volume in THRLBCL. Furthermore, the analysis indicated a significantly increased cytoplasmic size in THRLBCL compared to the NLPHL variants. Differences occurred not only between the tumor cells of the two disease entities; the T cells also presented a larger nuclear volume in THRLBCL. B cells, which served as the control group, did not show any significant differences between the groups. These results suggest an increased activity of T cells in THRLBCL, which is most likely to be interpreted as a response against the surrounding tumor cells and probably limits their proliferation. The results also demonstrate the importance of 3D analysis, which is clearly superior to 2D analysis. For a better understanding of both disease entities, it is therefore recommended to combine the 3D technique with molecular genetic analysis in future research.
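The superiority of 3D over 2D morphometry noted above can be illustrated numerically: a random 2D section rarely cuts a nucleus through its equator, so a volume extrapolated from the sectional area (under a sphere assumption) systematically underestimates the true volume. A minimal sketch with invented numbers, not the measurement procedure of the study:

```python
import math

def volume_from_section(area_um2):
    """Extrapolate a nuclear volume from a 2D sectional area,
    assuming the section is an equatorial cut of a sphere."""
    r = math.sqrt(area_um2 / math.pi)  # equivalent radius of the section
    return (4.0 / 3.0) * math.pi * r ** 3

# A spherical nucleus of radius 5 um has a true volume of ~523.6 um^3.
true_r = 5.0
true_volume = (4.0 / 3.0) * math.pi * true_r ** 3

# A section cut 3 um off-center intersects a circle of radius
# sqrt(5^2 - 3^2) = 4 um, so the extrapolated volume is too small.
off_center_r = math.sqrt(true_r ** 2 - 3.0 ** 2)
section_area = math.pi * off_center_r ** 2
estimated = volume_from_section(section_area)

print(round(true_volume, 1), round(estimated, 1))  # the 2D estimate falls short
```

Voxel-based 3D reconstruction avoids this geometric bias because the whole nucleus, not a single plane, enters the measurement.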
The subject of this thesis is the experimental investigation of the neutron-capture cross sections of the neutron-rich, short-lived boron isotopes 13B and 14B, as they are thought to influence the rapid neutron-capture process (r process) nucleosynthesis in a neutrino-driven wind scenario.
The 13,14B(n,γ)14,15B reactions were studied in inverse kinematics via Coulomb dissociation at the LAND/R3B setup (Reactions with Relativistic Radioactive Beams). A radioactive beam of 14,15B was produced via in-flight fragmentation and directed onto a lead target at about 500 AMeV. The neutron breakup of the projectile within the electromagnetic field of the target nucleus was investigated in a kinematically complete measurement. All outgoing reaction products were detected and analyzed in order to reconstruct the excitation energy.
The differential Coulomb dissociation cross sections as a function of the excitation energy were obtained, and first experimental constraints on the photoabsorption and neutron-capture cross sections were deduced. The results were compared to theoretical approximations of the cross sections in question. The Coulomb dissociation cross section of 15B into 14B(g.s.) + n was determined to be σ_CD(15B → 14B(g.s.) + n) = 81(8_stat)(10_syst) mb, while the Coulomb dissociation cross section of 14B into a neutron and 13B in its ground state was found to be σ_CD(14B → 13B(g.s.) + n) = 281(25_stat)(43_syst) mb. Furthermore, new information on the nuclear structure of 14B was obtained, as the spectral shape of the differential Coulomb dissociation cross section indicates a halo-like structure of the nucleus.
Additionally, the Coulomb dissociation of 11Be was investigated and compared to previous measurements in order to verify the present analysis. The corresponding Coulomb dissociation cross section of 11Be into 10Be(g.s.) + n was found to be 450(40_stat)(54_syst) mb, which is in good agreement with the results of Palit et al.
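When a cross section is quoted with separate statistical and systematic uncertainties, as above, a rough consistency check against a previous measurement usually combines the components in quadrature. The following is a minimal sketch of that convention with a hypothetical reference value (the actual number from Palit et al. is not quoted in the abstract); it is not the analysis used in the thesis:

```python
import math

def total_uncertainty(stat, syst):
    """Combine statistical and systematic uncertainties in quadrature."""
    return math.sqrt(stat ** 2 + syst ** 2)

def consistent(value_a, err_a, value_b, err_b, n_sigma=2.0):
    """Crude check: do two measurements agree within n_sigma
    of their combined uncertainty?"""
    combined = math.sqrt(err_a ** 2 + err_b ** 2)
    return abs(value_a - value_b) <= n_sigma * combined

# 11Be -> 10Be(g.s.) + n from this work: 450(40_stat)(54_syst) mb.
err = total_uncertainty(40.0, 54.0)  # total uncertainty in mb
print(round(err, 1))

# Comparison with a hypothetical reference value of 470 +/- 30 mb:
print(consistent(450.0, err, 470.0, 30.0))
```

Quadrature addition assumes the statistical and systematic components are independent, which is the usual convention for such quoted results.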
My study examined MMA training, and thereby the ‘back region’ of MMA, where the ‘everyday life’ of MMA takes place. I enquired into how MMA training corresponds with MMA’s self-description, namely the somewhat self-contradictory notion that MMA fights are dangerous combative goings-on approximating real fighting, but that MMA fighters are able to approach these incalculable and uncontrollable combative dangers as calculable and controllable risks. Conducting an ethnography in which I focused on the combination of participation and observation, I studied how the specific interaction organisations of the three core training practices of MMA training provide the training students with specific combative experiences and how they thereby construct the social reality that is MMA training....
The book deals with a comprehensive constellation of narrative and visual, often counterposed representations of the causes, course, and results of the assault on the Palace of Justice of Colombia by a guerrilla commando and the immediate counterattack launched by state security forces on November 6, 1985, as well as with the local memorial traditions in which the production, circulation and reproduction of these representations have taken place between 1985 and 2020. The research on which it is based was grounded in the method and perspective of classical anthropology, in as much as qualitative fieldwork and the search for the perspective of the actors involved have played a central role. Within that context, memory entrepreneurs belonging to diverse sectors, from the far-right to the human rights movement, were followed through multisited fieldwork in various locations of Colombia, as well as in various countries of America and Europe. The analyses of fieldwork data, documental sources, and visual representations that constitute the core of the argument are framed in the field of memory studies and mainly based on theoretical and methodological resources from Pierre Bourdieu’s Field Theory, Jeffrey Alexander’s theory of social trauma, and Ernst Gombrich’s characterization of iconological analysis.
The book is composed of four chapters preceded by an introduction and followed by the conclusions and documental appendices, and substantiates three main theses. The first is that the Palace of Justice events were a radio- and television-broadcast dispersed tragedy that affected the lives of actors from different social sectors and regions of Colombia, who have launched multiple memorial initiatives in different fields of culture since 1985, thereby contributing to the formation and intergenerational transmission of a widespread cultural trauma. The second is that the narrative and visual representations at the core of that trauma express a vast universe of local representational traditions that can be traced back at least to the early 20th century, and therefore preexists the so-called Colombian “memory boom”, dated to the mid-1990s. As an example of the preexistence and longstanding impact of these traditions, the local usage of the figure of “holocaust” for representing the effects of politically motivated violence is analyzed with regard to the Palace of Justice events, but also traced to other representations that emerged in the 1920s. The third thesis is that analyzing the diverse, frequently counterposed accounts of political violence elaborated within these traditions provides an opportunity to explore a wide variety of understandings of the causes and characteristics of the longstanding Colombian social and armed conflict.
Keywords: Political violence, Cultural trauma, Collective Memory, Iconology, Holocaust, Colombia.
The ability to adapt cellularly in a specific and context-dependent manner to intrinsic and/or extrinsic signals is the foundation of cellular homeostasis. Different signals are recognized by membrane receptors or intracellular receptors and enable the molecular adjustment of cellular processes. Complex, interlocking protein networks are elementary to the regulation of the cell. Proteins and their functions are regulated on demand and are subject to constant proteolytic turnover.
Stimulus-dependent gene transcription and/or protein translation plays a central role here, since the underlying machinery can adjust the composition and function of the protein networks accordingly. In addition to the regulation of protein abundance, proteins are post-translationally modified to change their properties rapidly. Post-translational modifications include ubiquitination and/or phosphorylation, which regulate protein functions in a highly dynamic fashion. Deregulated protein networks are often associated with neurodegeneration, autoimmune diseases, or cancer. Infections with human-pathogenic bacteria also interfere strongly with the regulation of protein networks and their functions, thereby challenging cellular homeostasis.
Bacteria of the genus Salmonella are zoonotic, Gram-negative, facultatively intracellular pathogens that cause millions of Salmonella infections worldwide. Of particular importance is Salmonella enterica serovar Typhimurium (hereafter Salmonella), which causes gastroenteritis in humans, mostly as a result of inadequate hygiene measures.
Immunity in epithelial cells is mediated by the innate immune system and serves pathogen recognition and defense. The Toll-like receptors (TLRs) belong to the pattern recognition receptors, which detect specific microbial structures and generate a context-dependent cellular response. Danger receptors, by contrast, do not recognize the pathogen directly but rather cellular perturbations caused by cell damage or bacterial invasion. The intrinsic ability of the host cell to defend itself against infections and dangers is referred to as cell-autonomous immunity. Induced proinflammatory signaling pathways and cellular stress responses play an important role in this context. The cellular stress response activates, among other things, selective autophagy, which can specifically degrade aberrant organelles, proteins, and invasive pathogens. Another stress pathway is the integrated stress response (ISR), which permits selective protein translation and thereby enables the resolution of proteotoxic stress.
To penetrate epithelial cells, Salmonella requires a complex system of virulence factors that enables bacterial internalization and proliferation in the host cell. For this purpose, Salmonella uses a type III secretion system, which secretes bacterial virulence factors into the cell, forcing a highly specific modulation of the host.
The virulence factors SopE and SopE2 play a key role here, as they substantially mediate the pathogenicity of Salmonella. Through molecular mimicry of host GTP (guanosine triphosphate) exchange factors, SopE and SopE2 activate the Rho GTPases CDC42 and Rac1. GTP-loaded CDC42 and Rac1 in turn activate the actin cytoskeleton and stimulate the polymerization of actin filaments via the Arp2/3 complex at the invasion site. The pathogen is thereby taken up into a membrane-enclosed vesicle, the so-called Salmonella-containing vacuole (SCV). The SCV represents a protective, replicative, intracellular niche for the pathogen and is permanently modulated by various virulence factors.
In general, the activation of pattern recognition receptors and danger receptors thus leads to a cellular stress response and an inflammatory reaction, which combat the infection. Inflammatory signaling pathways are mostly mediated via the central transcription factor NF-κB (nuclear factor 'kappa-light-chain-enhancer' of activated B cells). NF-κB induces proinflammatory effectors and stress genes. Cell-autonomous immunity is additionally enabled by antibacterial autophagy, whereby Salmonella are selectively degraded via the lysosomal system. The bacterial type III secretion system causes membrane damage at a few SCVs, allowing Salmonella to penetrate the host cytosol. Cytosolic bacteria are specifically ubiquitinated, which allows their recognition by the autophagy machinery.
In the present work, the cell-autonomous immunity of epithelial cells during acute Salmonella infection was investigated by quantitative proteomics...
Twentieth-century scholars have thought little about the attractions of Descartes’ thinking. Especially in feminist theory, he has had a bad press as the ‘instigator’ of the body-mind split, seen as one of the theoretical bases for the subordination of women in Western culture. Seen from within seventeenth-century discourse, however, the dictum that can be inferred from his writings, that ‘the mind has no sex’, can be read as an appeal to think about rational capacities in the utopian perspective of a gender-neutral discourse. My work analyses this “face” of Cartesianism as it was adapted in favour of English seventeenth-century women. How were the specific tenets of Descartes’ philosophy employed on behalf of English women in the second half of the seventeenth century? My focus is on Descartes as a thinker who, whatever his real or imagined intention might have been, provided women in seventeenth-century England with tools with which to change their status, in other words: with instruments of empowerment. So why were Descartes’ arguments so attractive for women? Descartes had argued for equal rational abilities among individuals in a gender-neutral way. He had further critiqued generally accepted truth with his universal doubt. I believe this specific combination of ideas, affirming their rational capabilities, was seen by a number of women as an invitation to become involved in spheres of activity from which they had previously been excluded. Moreover, a specific set of Descartes’ arguments provided a number of English women with a strategy to extend female agency. Not only did Descartes’ views legitimate female rationality, they also allowed an acknowledgement that this female intellect was equally connected to “truth” as that of their male contemporaries. As a consequence, women developed an increased self-esteem and inspiration to pursue their own independent study (and in some cases publishing).
These ideas eventually helped to bring forward a demand for female education, as girls and women were still excluded from formal education in seventeenth-century England. My general thesis is that Cartesianism, as one of the earliest universalist theories on the nature of human reason, introduced new possibilities into the English debate over the nature and, hence, social position of women. It brought a radical twist to the already existing discussion on women by offering new critical tools which were taken up to argue on behalf of English women. In my work I examine the specific historical conditions of the reception of Descartes’ thought in England, the philosophical appeal of his ideas for women and analyse the writings of two English ‘disciples’ of Descartes: Margaret Cavendish, Duchess of Newcastle and Mary Astell.
Based on an original dataset of 100 important pieces of legislation passed during the three presidencies of William J. Clinton, George W. Bush, and Barack H. Obama (1992-2013), this study explores two sets of questions:
(1) How do presidents influence legislators in Congress in the legislative arena, and what factors have an effect on the legislative strategies presidents choose?
(2) How successful are presidents in getting their policy positions enacted into law, and what configurations of institutional and actor-centered conditions determine presidential legislative success?
The analyses show that in a hyper-polarized environment, presidents usually have to fight an uphill battle in the legislative arena, getting more involved when they face less favorable contexts and the odds are against them.
Moreover, the analyses suggest that there is no silver-bullet approach for presidents' legislative success. Instead, multiple patterns of success exist as presidents - depending on the institutional and public environment - can resort to different combinations of actions in order to see their preferred policy outcomes enacted.
Paleoclimate reconstructions that aim to investigate climate-human interactions over long time series are gaining ever greater importance in public and scientific perception, favored by the currently intense climate debate. Despite all the scientific progress made in modern climate research over recent decades, the reliable prediction and modeling of future climate change remains one of the greatest challenges of our time. Taking the Caribbean as an example in this context, many model calculations predict, as a consequence of rising ocean temperatures, a significantly more frequent occurrence of tropical storms and hurricanes as well as a shift toward higher storm intensities. This trend represents one of the greatest dangers of modern climate change for the Caribbean and many adjacent states, and it must be investigated scientifically over a long time frame.
Climate projections mostly rely entirely on high-resolution instrumental data sets. However, these are all limited by one essential aspect: owing to their restricted availability (~150 years), they lack the depth required to adequately capture the processes of global climate dynamics that operate on long time scales. Considering the Holocene in its entirety, global climate dynamics over the past ~11,700 years have been controlled by periodically occurring processes. These fundamentally act over periods of several decades, sometimes centuries, and in some cases even millennia. Many of these natural processes cannot be fully identified within the short instrumental era and adequately accounted for in climate models. Considering the instrumental era alone therefore offers only a limited perspective for understanding the causes and courses of past climate changes and the possible consequences of future ones. To overcome this limitation, geoscientific research using proxy methods must attain a comprehensive and mechanistic understanding of all Holocene climate changes.
Bearing in mind this limitation, rising ocean temperatures, and the increased occurrence of strong tropical cyclones in the Caribbean over the past 20 years, it is understandable that this doctoral thesis set out to produce a two-millennium-long, annually resolved climate data set reflecting late Holocene variations in sea surface temperatures (SST) and the resulting long-term changes in the frequency of tropical cyclones. In Central America, the end of the Maya civilization (900-1100 CE) is associated with drastic environmental changes (e.g., droughts) brought about by global climate change during the Medieval Warm Period (MWP; 900-1400 CE). The information on past climate variations derived from a "blue hole" can be used as a reference for the current climate crisis.
A "blue hole" is a karst cave that formed subaerially in the carbonate framework of a reef system during past sea-level lowstands and was completely flooded as a result of sea-level rise. In a few marine blue holes, anoxic bottom-water conditions occur. The sequences of marine sediments deposited in these anoxic karst caves can be used as a unique climate archive because, owing to the absence of bioturbation, they exhibit annual layering (varving).
This cumulative dissertation on the "Great Blue Hole" presents the results of a three-year research project whose goal was to produce a scientifically outstanding late Holocene climate data set for the southwestern Caribbean. The "Great Blue Hole" is a globally unique marine sediment archive for diverse late Holocene climate changes, which was investigated in this dissertation with respect to both paleoclimatic and sedimentological questions. The present doctoral thesis specifically addresses (1) the development of an annually resolved archive for tropical cyclones, (2) the development of an annually resolved SST data set, and (3) a compositional quantification of the sedimentary sequences together with a facies-stratigraphic characterization of fair-weather sediments and storm layers. For each of these three aspects, a paper was published in a recognized peer-reviewed scientific journal.
The 8.55 m long sediment core ("BH6") investigated for this dissertation originates from the bottom of the 125 m deep and 320 m wide "Great Blue Hole", located in the shallow eastern lagoon of the "Lighthouse Reef" atoll, 80 km off the coast of Belize (Central America). Owing to its particular geomorphology, the "Great Blue Hole", positioned within the Atlantic hurricane belt, acts as a giant sediment trap. The sequences of fine-grained carbonate sediments deposited continuously under fair-weather conditions are interrupted by coarse storm layers attributable to overwash processes of tropical cyclones.
...