A stochastic model for the joint evaluation of burstiness and regularity in oscillatory spike trains
(2013)
The thesis provides a stochastic model to quantify and classify neuronal firing patterns of oscillatory spike trains. A spike train is a finite sequence of time points at which a neuron has an electric discharge (spike), recorded over a finite time interval. In this work, these spike times are analyzed for special firing patterns such as the presence or absence of oscillatory activity and clusters (so-called bursts). These bursts do not have a clear and unique definition in the literature. They are often fired in response to behaviorally relevant stimuli, e.g., an unexpected reward or a novel stimulus, but may also appear spontaneously. Oscillatory activity has been found to be related to complex information processing such as feature binding or figure-ground segregation in the visual cortex. Thus, in the context of neurophysiology, it is important to quantify and classify these firing patterns and their change under certain experimental conditions such as pharmacological treatment or genetic manipulation. In neuroscientific practice, the classification is often done by visual inspection criteria that do not give reproducible results. Furthermore, descriptive methods are used for the quantification of spike trains without relating the extracted measures to properties of the underlying processes.
For that reason, a doubly stochastic point process model is proposed, termed 'Gaussian Locking to a free Oscillator' (GLO). The model has been developed on the basis of empirical observations in dopaminergic neurons and in cooperation with neurophysiologists. As a first stage, the GLO model uses an unobservable oscillatory background rhythm, represented by a stationary random walk whose increments are normally distributed. Two different model types describe single spike firing or clusters of spikes; they differ in the distribution of the random number of spikes per beat (Bernoulli in the single-spike case, Poisson in the cluster case). In the second stage, the random spike times are placed around their birth beat according to a normal distribution. These spike times represent the observed point process, which has five easily interpretable parameters describing the regularity and the burstiness of the firing patterns.
It turns out that the point process is stationary, simple and ergodic. It can be characterized as a cluster process and, in the bursty firing mode, as a Cox process. Furthermore, the distribution of the waiting times between spikes can be derived for some parameter combinations. The conditional intensity function of the point process is derived, which is also called the autocorrelation function (ACF) in the neuroscience literature. This function arises by conditioning on a spike at time zero and measures the intensity of spikes x time units later. The autocorrelation histogram (ACH) is an estimate of the ACF. The parameters of the GLO are estimated by fitting the ACF to the ACH with a nonlinear least squares algorithm. This is a common procedure in neuroscientific practice and has the advantage that the GLO ACF can be computed for all parameter combinations and that its properties are closely related to the burstiness and regularity of the process. The precision of estimation is investigated for different scenarios using Monte Carlo simulations and bootstrap methods.
The GLO provides the neuroscientist with objective and reproducible classification rules for the firing patterns on the basis of the model ACF. These rules are inspired by visual inspection criteria often used in neuroscientific practice and thus support and complement the usual analysis of empirical spike trains. When applied to a sample data set, the model is able to detect significant changes in the regularity and burst behavior of the cells and provides confidence intervals for the parameter estimates.
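The two-stage generative structure described above can be sketched as follows. This is a minimal simulation, not the thesis' implementation; the parameter names and default values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_glo(n_beats=200, period=1.0, sigma_beat=0.05,
                 mode="burst", p_spike=0.8, mu_spikes=3.0, sigma_spike=0.02):
    """Minimal sketch of the GLO ('Gaussian Locking to a free Oscillator').

    First stage: an unobservable background rhythm, a random walk whose
    inter-beat increments are normally distributed (period, sigma_beat).
    Second stage: per beat, a random number of spikes (Bernoulli in the
    single-spike mode, Poisson in the burst mode), each placed around its
    birth beat according to a normal distribution (sigma_spike).
    """
    beats = np.cumsum(rng.normal(period, sigma_beat, size=n_beats))
    if mode == "single":
        counts = rng.binomial(1, p_spike, size=n_beats)
    else:
        counts = rng.poisson(mu_spikes, size=n_beats)
    spikes = np.concatenate([
        rng.normal(beat, sigma_spike, size=c) for beat, c in zip(beats, counts)
    ])
    return np.sort(spikes)

train = simulate_glo()
```

Varying sigma_beat and sigma_spike changes the regularity of the train, while the mode and its count distribution control the burstiness, mirroring the small set of interpretable parameters of the model.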
Computational oral absorption models, in particular PBBM models, provide a powerful tool for researchers and pharmaceutical scientists in drug discovery and formulation development, as they mimic and can describe the physiological processes relevant to oral absorption. PBBM models provide in vivo context to in vitro experimental data and allow for a dynamic understanding of in vivo drug disposition that is not typically provided by data from standard in vitro assays. Investigations using these models permit informed decision-making, especially regarding formulation strategies in drug development. PBBM models can also be used to investigate and provide insight into mechanisms responsible for complex phenomena such as the food effect on drug absorption. Although there are still some gaps regarding the in silico construction of the gastrointestinal environment, ongoing research in the area of oral drug absorption (e.g., the UNGAP, AGE-POP and InPharma projects) will increase knowledge and enable improvement of these models.
PBBM can nowadays provide an alternative approach to the development of in vitro–in vivo correlations. The case studies presented in this thesis demonstrate how PBBM can provide a mechanistic understanding of the negative food effect and be used to set clinically relevant dissolution specifications for zolpidem immediate release tablets. In both cases, we demonstrated the importance of integrating drug properties with physiological variables to mechanistically understand and observe the impact of these parameters on oral drug absorption.
Various complex physiological processes are initiated upon food consumption, which can enhance or reduce a drug’s dissolution, solubility, and permeability and thus lead to changes in drug absorption. With improvements in modeling and simulation software and design of in vitro studies, PBBM modeling of food effects may eventually serve as a surrogate for clinical food effect studies for new doses and formulations or drugs. Furthermore, the application of these models may be even more critical in case of compounds where execution of clinical studies in healthy volunteers would be difficult (e.g., oncology drugs).
In the fourth chapter we demonstrated that linking biopredictive in vitro dissolution testing (QC or biorelevant method) to PBBM, coupled with PD modeling, opens the opportunity to set truly clinically relevant specifications for drug release. This approach can be extended to other drugs regardless of their classification according to the BCS.
With the increased adoption of PBBM, we expect that best practices in the development and verification of these models will be established that can eventually inform regulatory guidance. Therefore, the application of Physiologically Based Biopharmaceutical Modelling is an area with great potential to streamline late-stage drug development and impact regulatory approval procedures.
The miniaturization of electronics is reaching its limits. Structures necessary to build integrated circuits from semiconductors are shrinking and could reach the size of only a few atoms within the next few years. At the latest at this point in time, the physics of nanostructures will gain importance in our everyday life. This thesis deals with the physics of quantum impurity models. All models of this class exhibit an identical structure: the simple and small impurity has only a few degrees of freedom. It can be built out of a small number of atoms or a single molecule, for example. In the simplest case it can be described by a single spin degree of freedom, as in many quantum impurity models, and can then be treated exactly. The complexity of the description arises from its coupling to a large number of fermionic or bosonic degrees of freedom (large meaning that we have to deal with particle numbers of the order of 10^{23}). An exact treatment of the full system thus remains impossible. At the same time, physical effects which arise in quantum impurity systems often cannot be described within a perturbative theory, since multiple energy scales may play an important role. One example of such an effect is the Kondo effect, where the free magnetic moment of the impurity is screened by a "cloud" of fermionic particles of the quantum bath.
The Kondo effect is only one example of the rich physics stemming from correlation effects in many-body systems. Quantum impurity models, and the oftentimes related Kondo effect, have regained the attention of experimental and theoretical physicists since the advent of quantum dots, which are sometimes also referred to as artificial atoms. Quantum dots offer unprecedented control and tunability of many system parameters. Hence, they constitute a nice "playground" for fundamental research, while being promising candidates for building blocks of future technological devices as well.
Recently, Loss and DiVincenzo's proposal of a quantum computing scheme based on spins in quantum dots increased the efforts of experimentalists to coherently manipulate and read out the spins of quantum dots one by one. In this context, two topics are of paramount importance for future quantum information processing: since decoherence times have to be large enough to allow for good error correction schemes, understanding the loss of phase coherence in quantum impurity systems is a prerequisite for quantum computation in these systems. Nonequilibrium phenomena in quantum impurity systems also have to be understood before one may gain control of manipulating quantum bits.
As a first step towards more complicated nonequilibrium situations, the reaction of a system to a quantum quench, i.e., a sudden change of external fields or other parameters of the system, can be investigated. We give an introduction to a powerful numerical method used in this field of research, the numerical renormalization group method, and apply this method and its recent enhancements to various quantum impurity systems.
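The core construction behind the numerical renormalization group can be illustrated with a minimal sketch, assuming a flat conduction band on [-1, 1]: logarithmic discretization of the bath followed by mapping onto a "Wilson chain". All names and parameter values here are illustrative, not taken from the thesis.

```python
import numpy as np

def wilson_chain(Lambda=2.0, n_shells=20):
    """Sketch of the standard first step of the NRG: logarithmically
    discretize a flat conduction band on [-1, 1] and map the bath onto a
    semi-infinite 'Wilson chain' by Lanczos tridiagonalization. The chain
    hoppings t_n decay like Lambda**(-n/2), which is what lets the NRG
    resolve arbitrarily small energy scales iteratively."""
    # One representative state per logarithmic interval [Lambda^-(n+1), Lambda^-n]
    edges = Lambda ** -np.arange(n_shells + 1.0)
    mids = 0.5 * (edges[:-1] + edges[1:])       # band-averaged interval energies
    widths = edges[:-1] - edges[1:]             # interval weights (flat band)
    energies = np.concatenate([mids, -mids])    # both halves of the band
    weights = np.concatenate([widths, widths])
    H = np.diag(energies)
    v = np.sqrt(weights / weights.sum())        # state the impurity couples to
    # Lanczos tridiagonalization of H started from v
    eps, t = [], []
    q_prev, q, beta = np.zeros_like(v), v, 0.0
    for _ in range(n_shells):
        r = H @ q - beta * q_prev
        alpha = q @ r
        r = r - alpha * q
        beta = np.linalg.norm(r)
        eps.append(alpha)
        t.append(beta)
        q_prev, q = q, r / beta
    return np.array(eps), np.array(t)

eps, t = wilson_chain()
```

For a particle-hole symmetric band the on-site energies eps_n vanish, and the hopping ratio t_n / t_{n+2} approaches Lambda, the hallmark of the separation of energy scales that the iterative diagonalization exploits.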
The main part of this thesis may be structured in the following way:
- Ferromagnetic Kondo Model,
- Spin-Dynamics in the Anisotropic Kondo and the Spin-Boson Model,
- Two Ising-coupled Spins in a Bosonic Bath,
- Decoherence in an Aharonov-Bohm Interferometer.
A novel role for mutant mRNA degradation in triggering transcriptional adaptation to mutations
(2020)
Robustness to mutations promotes organisms’ well-being and fitness. The increasing number of mutants in various model organisms, and humans, showing no obvious phenotype (Bouche and Bouchez, 2001; Chen et al., 2016b; Giaever et al., 2002; Kok et al., 2015) has renewed interest in how organisms adapt to gene loss. In the presence of deleterious mutations, genetic compensation by transcriptional upregulation of related gene(s) (also known as transcriptional adaptation) has been reported in numerous systems (El-Brolosy and Stainier, 2017; Rossi et al., 2015; Tondeleir et al., 2012); however, the molecular mechanisms underlying this response have remained unclear. To investigate this phenomenon, I develop and study multiple models of transcriptional adaptation in zebrafish and mouse cell lines. I first show that transcriptional adaptation is not caused by loss of protein function, indicating that the trigger lies upstream, and find that the response involves enhanced transcription of the related gene(s). Furthermore, I observe a correlation between levels of mutant mRNA degradation and upregulation of related genes. To investigate the role of mutant mRNA degradation in triggering the response, I generate mutant alleles that do not transcribe the mutated gene and find that they fail to induce a transcriptional response and display stronger phenotypes. Transcriptome analysis of alleles displaying mutant mRNA degradation revealed upregulation of a significant proportion of genes displaying sequence similarity with the mutated gene’s mRNA, suggesting a model whereby mRNA degradation intermediates induce transcriptional adaptation via sequence similarity. Further mechanistic analyses suggested that RNA decay factor-dependent chromatin remodeling and repression of antisense RNAs are implicated in the response. These results identify a novel role for mutant mRNA degradation in buffering against mutations.
Moreover, these findings have important implications for understanding disease-causing mutations and should help in designing mutations that lead to minimal transcriptional adaptation-induced compensation, facilitating the study of gene function in model organisms.
In this dissertation a non-deterministic lambda calculus with call-by-need evaluation is treated. Call-by-need means that subexpressions are evaluated at most once and only if their value must be known to compute the overall result. Also called "sharing", this technique is indispensable for an efficient implementation. In the lambda-ND calculus of chapter 3, sharing is represented explicitly by a let-construct. In addition, the calculus has function application, lambda abstractions, sequential evaluation and pick for non-deterministic choice. Non-deterministic lambda calculi play a major role as a theoretical foundation for concurrent processes or side-effecting input/output. In this work, non-determinism additionally makes it visible when sharing is broken. Based on the bisimulation method, this work develops a notion of equality which respects sharing. Using bisimulation to establish contextual equivalence requires substitutivity within contexts, i.e., the ability to "replace equals by equals" within every program or term. This property is called congruence, or precongruence if it applies to a preorder. The open similarity of chapter 4 represents a new concept, insofar as the usual definition of a bisimulation is impossible in the lambda-ND calculus. Therefore, in section 3.2 a further calculus, lambda-Approx, has to be defined. Section 3.3 contains the proof of the so-called Approximation Theorem, which states that evaluation in lambda-ND and lambda-Approx agrees. The foundation for the non-trivial precongruence proof is laid in chapter 2, where the trailblazing method of Howe is extended to cope with sharing. By the use of this (extended) method, the Precongruence Theorem proves open similarity to be a precongruence, involving the so-called precongruence candidate relation. Combined with the Approximation Theorem, we obtain the Main Theorem, which says that open similarity of the lambda-Approx calculus is contained within the contextual preorder of the lambda-ND calculus.
However, this inclusion is strict, a property whose non-trivial proof involves the notion of syntactic continuity. Finally, chapter 6 discusses possible extensions of the base calculus such as recursive bindings or case and constructors. As a fundamental study, the calculus lambda-ND provides neither of these concepts, since it was intentionally designed to keep the proofs as simple as possible. Section 6.1 illustrates that the addition of case and constructors could be accomplished without big hurdles. However, recursive bindings cannot be represented simply by a fixed point combinator like Y, so further investigations are necessary.
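The interaction of sharing and non-deterministic choice can be illustrated outside the calculus itself. In the following sketch, a memoized thunk stands in for the let-construct and a random choice stands in for pick; it shows how call-by-need makes a non-deterministic choice once and then shares it, whereas re-evaluation would not.

```python
import random

class Thunk:
    """Call-by-need: the suspended computation runs at most once; the
    result is then shared by all further uses (a memoized let-binding)."""
    def __init__(self, compute):
        self.compute, self.done, self.value = compute, False, None

    def force(self):
        if not self.done:
            self.value, self.done = self.compute(), True
        return self.value

def pick(a, b):
    """Non-deterministic binary choice, modeled here by a random one."""
    return random.choice([a, b])

# let x = pick(0, 1) in x + x  -- call-by-need: the choice is made once,
# so the result is always even (0 or 2); sharing is observable.
x = Thunk(lambda: pick(0, 1))
result = x.force() + x.force()

# Without sharing (call-by-name), pick would run twice, so 1 becomes possible:
unshared = pick(0, 1) + pick(0, 1)
```

This is exactly the sense in which non-determinism "makes it visible when sharing is broken": a calculus that copied the binding instead of sharing it would admit the odd outcome.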
For several decades, lysozyme has been one of the most intensively studied proteins in the literature and is mainly used as a model protein for elucidating folding and unfolding processes. Since the question of misfolding and its connection with neurodegenerative diseases has not been fully resolved to this day, there is considerable scope for further research. In the present work, two model systems were therefore used, hen egg-white lysozyme and human lysozyme, each in its non-native unfolded state. These unfolded ensembles were investigated using NMR spectroscopic methods and yielded very detailed, in part surprising, new insights into the structure and dynamics of the two proteins, thereby providing important findings on folding and aggregation processes. ...
This work is concerned with two topics at the intersection of convex algebraic geometry and optimization.
We develop a new method for the optimization of polynomials over polytopes. From the point of view of convex algebraic geometry, the most common method for the approximation of polynomial optimization problems is to solve semidefinite programming relaxations coming from the application of Positivstellensätze. In optimization, non-linear programming problems are often solved using branch and bound methods. We propose a fused method that uses Positivstellensatz relaxations as lower bounding methods in a branch and bound scheme. By deriving a new error bound for Handelman's Positivstellensatz, we show convergence of the resulting branch and bound method. Through the application of Positivstellensätze, semidefinite programming has gained importance in polynomial optimization in recent years. While it has proven to be a powerful tool, the underlying geometry of the feasibility regions (spectrahedra) is not yet well understood. In this work, we study polyhedral and spectrahedral containment problems; in particular, we classify their complexity and introduce sufficient criteria to certify the containment of one spectrahedron in another one.
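The branch and bound scheme can be sketched in one dimension. Here a crude interval-arithmetic bound stands in for the Handelman LP relaxation used in the thesis, so the code illustrates only the bounding-and-splitting logic, not the actual relaxation; names and tolerances are illustrative.

```python
import heapq

def poly_eval(coeffs, x):
    """Evaluate sum_k coeffs[k] * x**k."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

def interval_lower_bound(coeffs, lo, hi):
    """Crude interval-arithmetic lower bound for the polynomial on [lo, hi].
    A stand-in for the Positivstellensatz (Handelman LP) relaxation: any
    valid lower bounding procedure can play this role in the scheme."""
    bound = 0.0
    for k, c in enumerate(coeffs):
        # extreme values of x**k on [lo, hi] occur at the endpoints,
        # plus 0 when the box straddles the origin (even powers)
        candidates = [lo ** k, hi ** k] + ([0.0] if lo < 0.0 < hi else [])
        bound += min(c * v for v in candidates)
    return bound

def branch_and_bound(coeffs, lo, hi, tol=1e-3):
    """Minimize a univariate polynomial over [lo, hi]: keep a heap of boxes
    ordered by lower bound, split the most promising box, and stop once no
    remaining box can improve on the incumbent by more than tol."""
    best = min(poly_eval(coeffs, lo), poly_eval(coeffs, hi))  # incumbent
    heap = [(interval_lower_bound(coeffs, lo, hi), lo, hi)]
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best - tol:
            break          # bounds have converged: best is tol-optimal
        m = 0.5 * (a + b)
        best = min(best, poly_eval(coeffs, m))
        for a2, b2 in ((a, m), (m, b)):
            heapq.heappush(heap, (interval_lower_bound(coeffs, a2, b2), a2, b2))
    return best

# x^2 - 2x on [0, 3] has its minimum -1 at x = 1
minimum = branch_and_bound([0.0, -2.0, 1.0], 0.0, 3.0)
```

Convergence of the real method rests on the error bound for Handelman's Positivstellensatz: as boxes shrink, the relaxation's lower bound approaches the true minimum on the box, which is what makes the pruning test eventually succeed.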
Many hominin species are best physically represented and understood by the sum of their dental morphologies. Generally, taxonomic affinities and evolutionary trends in development (ontogeny) and morphology (phylogeny) can be deduced from dental analyses. More specifically, the study of dental remains can yield a wealth of information on many facets of hominin evolution, life history, physiology and ecological adaptation; in short, the organism's paleobiomics. Functionally, teeth present information about dietary preferences, that is, the dietary niche in ecological context and, in turn, masticatory function. As the amount and types of information that can be gleaned from 2-dimensional tooth measurement exhaust themselves, 3-dimensional microscopic modeling and analysis presents a largely fertile ground for reexamination and reinterpretation of dental characteristics (Bromage et al., 2005). As such, a novel, non-destructive approach has been developed which combines the work of two established technologies (confocal microscopy and 3D modeling) adapted specifically for the purpose of mineralized tissue imaging. Through this method, 3D functional masticatory, and therefore occlusal, molar microwear can be visualized, quantified and comparatively analyzed to assess dietary preference in Javanese Homo erectus. This method differs from other microwear investigative techniques (defining 'pits' vs. 'scratches', microtexture analysis, etc.) in that it defines a molar's masticatory microwear functional interactions in 3 dimensions as its baseline dataset for further interpretations and analyses. Due to poor specimen collection techniques employed during the first half of the 20th century, the very complex geologic nature of the Sangiran Dome and disagreements over its chronostratigraphy, only very few scientific works have addressed the Sangiran 7 (S7) Homo erectus molar collection (n=25) (e.g., Grine and Franzen, 1994; Kaifu, 2006).
Grine and Franzen's (1994) work was a predominantly qualitative initial assessment of the specimens and identified five specimens that might better be ascribed to a fossil pongid rather than to H. erectus. They also noted several molars whose tooth position (M1 or M2) could not be determined (Grine and Franzen, 1994). Kaifu (2006) comparatively examined crown sizes in several S7 molars.
The Sangiran 7 collection originates from two distinct geologic horizons: ten from the older Sangiran Formation (S7a, ~1.7 to 1.0 mya) and fifteen from the younger, overlying Bapang Formation (S7b, ~1.0 to 0.7 mya). During this million-year period, Java was connected to the mainland during various glacio-eustatic low-stands in sea level. These mainland connections varied in size, extent and climatic condition, and therefore in faunal and floral composition. As the S7 sample may be representative of the earliest Homo erectus migrants into Java and spans long durations of occupation, its investigation yields potential to understand the various influences that climatic and ecogeographic fluctuations had on these populations. Since the sample consists only of teeth, an ecodietary approach has been deemed the most logical and appropriate. Questions regarding intra- and inter-sample relationships within S7 will also be addressed.
By comparing various aspects of the H. erectus dentition against those of hunter/gatherers (H/G) whose diet is known, functional dietary similarity can be directly correlated. Thus a comparative molar sample consisting of the following historic hunter/gatherers (n=63) has been included in order to assess the diet of H. erectus in ecological context: Inuit (n=9), Pacific Northwest Tribes (n=11), Fuegians (n=11), Australian Aborigines (n=12) and Bushmen (n=20). Methodologically, this approach produces a 3D facet microwear vector (fmv) signature for each molar which can then be compared for statistical similarity.
Microwear (and, as such, the fmv signatures) was defined by the regular, parallel striations found on specific cusp facets known to arise from patterned, directional masticatory movements. This differs significantly from post-mortem or taphonomic microwear, which produces striations at irregular angles on multiple, non-masticatory surfaces (Puech et al., 1985; Teaford, 1988). A 'match value' is produced to determine the similarity of two molars' fmvs. The 'match values' are ranked (high to low) and these rankings are used to statistically analyze and infer dietary preference: between Sangiran 7 (as an entire sample) and the historic hunter/gatherer H. sapiens whose diet and ecogeography are known; within S7a and S7b and then among the S7 sample (e.g., S7a vs. S7b); whether the purported Pongo molars actually affiliate well with H. erectus or the hunter/gatherers, or whether they demonstrate distinctly different fmv signatures altogether; and whether fmv signatures are useful in distinguishing molars whose tooth position is in doubt (e.g., M1 or M2).
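The thesis does not spell out the match-value computation at this point, so the following is a hypothetical sketch under stated assumptions: each fmv signature is taken as an array of per-facet mean striation direction vectors, compared facet by facet with a direction-insensitive cosine similarity, and a library of molars is then ranked high to low against a query molar.

```python
import numpy as np

def match_value(fmv_a, fmv_b):
    """Hypothetical match value between two fmv signatures, each an
    (n_facets, 3) array of mean striation direction vectors per cusp facet.
    Cosine similarity per facet, sign-insensitive (a striation has no
    preferred direction along its axis), averaged over facets."""
    a = fmv_a / np.linalg.norm(fmv_a, axis=1, keepdims=True)
    b = fmv_b / np.linalg.norm(fmv_b, axis=1, keepdims=True)
    return float(np.mean(np.abs(np.sum(a * b, axis=1))))

def rank_matches(query, library):
    """Rank library molars against a query molar, match value high to low."""
    scores = [(name, match_value(query, fmv)) for name, fmv in library.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)
```

The rankings produced this way would then feed the group-level comparisons described above (S7 vs. H/G samples, S7a vs. S7b, and the questionable Pongo and tooth-position specimens).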
When compared against individual H/G molars, the results show that Sangiran 7 H. erectus most closely correlates with Bushmen across all areas of fmv signature analysis. However, within broader dietary categories (yearly reliant on proteinaceous foods; seasonally reliant on proteinaceous foods; not reliant on proteinaceous foods), it was found that H. erectus most closely allied with the two hunter/gatherer subpopulations in the 'seasonally reliant on proteinaceous foods' category (Australian Aborigines and Pacific Northwest Tribes). There was also evidence for dietary change or specialization over time. As the environment changed between the occupations of the earlier Sangiran and the later Bapang individuals, the dietary preference shifted from a focus on vegetative foods to a diet much more inclusive of proteinaceous resources.
These results are considered logical within the larger ecogeographic and chronostratigraphic context of the Sangiran Dome during the Pleistocene. However, a larger sample would be needed to confirm this. Although general dietary preferences can be drawn from this method, it is not possible at present to define specific foods consumed on a daily basis (e.g., tubers or tortoise meat).
Of the five specimens possibly allied with Pongo, S7-14 matched at the 'high' designation with a hunter/gatherer, S7-62 matched 'moderately', S7-20 matched 'low', while the remaining two could not be matched with any other teeth for various reasons. Although designation to Pongo cannot be ruled on at this time using this method, it does demonstrate that at least two of the teeth correlate well with various hunter/gatherers who do not share dietary similarity with Pongo. This suggests their designation as Pongo should be more closely reevaluated. As for the four specimens whose tooth position was uncertain, S7-14 matched 'highly' with 1st molars, S7-62 and S7-78 matched 'moderately' with 2nd and 1st molars respectively, while S7-20 only matched at the 'low' designation. Although this approach is still exploratory, it adds another analytical tool for use in defining tooth position.
In sum, this method has demonstrated its usefulness in defining and functionally analyzing a novel 3D molar microwear dataset to interpret dietary preference. Future work would include a pan-H. erectus molar sample in order to illuminate broader populational, taxonomic and dietary correlations within and among all H. erectus specimens. A larger, more heterogeneous historic H/G sample would also be included in order to provide a wider dietary comparative population. This method can be further extended to include and compare any and all hominins, as well as any organism which produces microwear upon its molars. Also, the data obtained and the resultant fmv signature diagrams have the potential to be incorporated into 3D VR reconstructions of mandibular movement, thus recreating mastication in extinct organisms and leading to more robust anatomical and physiological investigations, especially when viewed in the context of larger environmental conditions or changes.
The Earth’s surface condition we find today is a result of long exposure to the metabolism of life forms. In particular, molecular oxygen in the atmosphere is a feature which developed over time. The first substantial and lasting rise of the atmospheric oxygen level happened ≈ 2.5 Ga ago, but localities are reported where transiently elevated oxygen levels appeared before this time point. Tracing the timing and circumstances of the earliest availability of free oxygen in the atmosphere is important for understanding the habitats of early microbial life forms on Earth.
This thesis focuses on obtaining information about oxygen levels and the related atmospheric cycling of metals in sediments of the 3.5 to 3.2 Ga Barberton Greenstone Belt. First, as iron was a ubiquitous constituent of Archean seawater, I investigated its isotopic composition in minerals of chemical sediments. In doing so, I tried to resolve the changes within the water basin on the small scale of sedimentary sequence cycles. Second, I focused on the minor constituents of Archean seawater. The Re-Os geochronologic system and the abundance patterns of the platinum-group elements were chosen to integrate information on oxygen-promoted weathering of a large source area. To integrate information over a large time interval, the isotopes of uranium were investigated over a large stratigraphic section.
The two key findings of this thesis are:
• Quantitative oxidation of ferrous iron in surface layers of Paleoarchean seawater occurred during the onset and termination of hydrothermal Fe(II)aq delivery into shallow waters.
• Paleoarchean sedimentary successions of the Barberton Greenstone Belt lack any evidence of transient basin-scale oxygenation.
The Manzimnyama Iron Formation (IF, Fig Tree Group, Barberton Greenstone Belt, South Africa) has been deciphered to consist of cyclic stacks of lithostratigraphic units with varying amounts of iron oxide and carbonate minerals. In-situ femtosecond laser ablation ICP-MS iron isotope measurements showed that the majority of siderite (δ56Fe ≈ −0.5 ‰) precipitated directly from seawater of δ56Fe ≈ 0 ‰. Ferric iron from the surface layers is preserved in ≤ 1 μm hematite and in magnetite that grew within the consolidated sediment. During Fe(II)aq events, fine-grained hematite (δ56Fe ≈ 2.2 ‰) and magnetite (δ56Fe ≈ 0.5 to 0.8 ‰) indicate oxygen levels in surface waters below 0.0002 μM. Upon onset and termination of iron oxide abundance, magnetite with δ56Fe ≈ 0 ‰ indicates that low concentrations of Fe(II)aq in surface waters were oxidized quantitatively. These observations demonstrate the existence of iron oxidation in Paleoarchean surface waters independent of the Fe(II)aq concentration. This is the first investigation of a Paleoarchean IF showing that lithostratigraphic cyclicity can be traced in the iron isotopic composition of oxide minerals.
ID-ICP-MS measurements of Re, Ir, Ru, Pt and Pd, trace element (SF-ICP-MS) and ID-MC-ICP-MS uranium isotope determinations have been applied to carbonaceous shale of the Mapepe Fm. (Fig Tree Group) after inverse aqua regia leaching and bulk digestion. The sediments reveal a silicified fraction which exhibits a seawater REE signature and a mixture of detrital and meteoritic PGE. Neither enrichment of the redox-sensitive elements Re or Mo nor fractionated uranium isotopes have been found over a stratigraphic interval of several hundred meters. The non-silica fraction shows no depletion of Re, which indicates that the detrital material had no contact with oxidizing fluids. ID-TIMS measurements of Re and Os after the CrO3-SO4 Carius tube method on two sample intervals showed that the Re-Os isotopic systems of the non-silica fractions are identical to those of two komatiite occurrences. Weltevreden Fm. and Komati Fm. rocks were uplifted, eroded and transported to the deep part of the sedimentary basin without any change to the Re-Os system. Negatively fractionated uranium isotopes (δ238U = −0.41 ± 0.01 ‰) associated with detrital Ba-Cr-U occurrences suggest the existence of distal redox processes that involve uranium species. This study demonstrates that during exposure and deposition of the Mapepe Fm. sediments, free oxygen was not available for weathering in the catchment area.
A multiple filter test for the detection of rate changes in renewal processes with varying variance
(2014)
The thesis provides novel procedures in the statistical field of change point detection in time series.
Motivated by a variety of neuronal spike train patterns, a broad stochastic point process model is introduced. This model features points in time (change points) where the associated event rate changes. For purposes of change point detection, filtered derivative processes (MOSUM) are studied. Functional limit theorems for the filtered derivative processes are derived. These results are used to support novel procedures for change point detection; in particular, multiple filters (bandwidths) are applied simultaneously in order to detect change points on different time scales.
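The filtered derivative idea can be sketched as follows: slide a window of bandwidth h and compare the event counts to the right and left of each time point. The thesis derives the correct variance normalization and the limit theorems for general renewal processes with varying variance, so the simple Poisson-style scaling below is only an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def mosum_statistic(spike_times, t_grid, h):
    """Filtered derivative (MOSUM) process: for each t, the difference
    between the event counts in (t, t+h] and (t-h, t], normalized by a
    crude Poisson-style variance estimate. Large |G(t)| indicates a rate
    change near t."""
    s = np.sort(np.asarray(spike_times))
    right = np.searchsorted(s, t_grid + h) - np.searchsorted(s, t_grid)
    left = np.searchsorted(s, t_grid) - np.searchsorted(s, t_grid - h)
    return (right - left) / np.sqrt(np.maximum(right + left, 1))

# Toy example: a Poisson process whose rate jumps from 5 to 50 at t = 5
pre = np.cumsum(rng.exponential(1 / 5, size=100))
post = 5.0 + np.cumsum(rng.exponential(1 / 50, size=500))
spikes = np.concatenate([pre[pre < 5.0], post[post < 10.0]])
t_grid = np.linspace(1.0, 9.0, 801)
G = mosum_statistic(spikes, t_grid, h=1.0)
t_hat = t_grid[np.argmax(np.abs(G))]   # estimated change point, near t = 5
```

Running this with several bandwidths h simultaneously is the multiple-filter idea: small h resolves closely spaced change points, while large h detects small rate changes.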
In light of the global sea-level rise and climate change of the 21st century, it is important to look back into the recent past in order to understand what the future might hold. A multi-proxy data set was compiled to evaluate the influence of geomorphological and environmental factors, such as antecedent topography, subsidence, sea level and climate, on reef, sand apron and lagoon development in modern carbonate platforms through the Holocene. To this end, remote sensing and morphological data from 122 modern carbonate platforms and atolls in the Atlantic, Indian and Pacific Oceans were combined and analyzed, along with a case study from the oceanic (Darwinian) barrier-reef system of Bora Bora, French Polynesia, South Pacific.
Antecedent topography and platform size are hypothesized to be factors controlling Holocene sand apron development and extension in modern atolls and carbonate platforms. Antecedent topography describes the elevation and relief of the underlying Pleistocene topography (karst) and determines the distance from the sea floor to the rising postglacial sea level. Maximum lagoon depth and marginal reef thickness, where available in the literature, were used as proxies for antecedent topography. Sand apron proportions of 122 atolls and carbonate platforms from the Atlantic, Indian and Pacific Oceans were quantified and correlated to maximum lagoon depth, total platform area and marginal reef thickness. This study shows that sand apron proportions increase with decreasing lagoon depth. Sand apron proportions also increase with decreasing platform area. The interaction of antecedent topography and Holocene sea-level rise is responsible for variations in accommodation space and ultimately determines the extent of the lateral expansion of sand aprons. In general, sand apron formation started when marginal reefs approached relative sea level. Spatial and regional variations in sea-level history caused sand apron formation to start earlier in the Indo-Pacific region (transgressive-regressive) than in the Western Atlantic Ocean (transgressive).
The influence of sea level, antecedent topography and subsidence of a volcanic island on late Quaternary reef development was evaluated based on six rotary core transects on the barrier and fringing reefs of Bora Bora. This study was designed to re-evaluate the Darwinian model, the subsidence theory of reef development, which genetically connects fringing reef, barrier reef and atoll development by continuous subsidence of the volcanic basement. Postglacial sea-level rise, and to a minor degree subsidence, were identified as major factors controlling Holocene reef development in that they created accommodation space and controlled reef architecture. Antecedent topography was also an important factor because the Holocene barrier reef is located on a Pleistocene barrier reef forming a topographic high. Pleistocene soil and basalt formed the pedestal of the fringing reef. Uranium-Thorium dating shows that barrier and fringing reefs developed contemporaneously during the Holocene.
In the barrier–reef lagoon of Bora Bora, the influence of environmental factors, such as sea level and climate, tsunamis and tropical cyclones, on Holocene sediment dynamics was evaluated based on sedimentological, paleontological, geochronological and geochemical data. The lagoonal succession comprises mixed carbonate-siliciclastic sediments overlying peat and Pleistocene soil. The multi-proxy data set shows variations in grain size, total organic carbon (a proxy for primary productivity), and Ca and Cl element intensities (proxies for carbonate availability and lagoonal salinity) during the mid-late Holocene. These patterns could result from event sedimentation during storms and correlate with event deposits found in nearby Tahaa, probably induced by elevated cyclone activity. Accordingly, elevated erosion and runoff from the volcanic island and lower lagoonal salinity would be a result of rainfall during repeated cyclone landfall. However, Ti/Ca and Fe/Ca ratios, as proxies for terrigenous sediment delivery, peaked in the early Holocene and have declined since the mid-Holocene. Benthic foraminifera assemblages do not indicate reef-to-lagoon transport. Alternatively, higher and sustained hydrodynamic energy was probably induced by stronger trade winds and a higher-than-present sea level during the mid-late Holocene. The increase in mid-late Holocene sediment dynamics within the back-reef lagoon is interpreted to reflect sediment-load shedding of sand aprons, due to the oversteepening of slopes at sand apron/lagoon edges during their progradation, rather than an increase in tropical storm activity during that time.
The influence of sea-level and climate changes on sediment import, composition and distribution in the Bora Bora lagoon during the Holocene was also evaluated. The lagoonal facies succession comprises siderite-rich marly wackestones, foraminifera-siderite wackestones, mollusk-foraminifera marly packstones and mollusk-rich wackestones during the early-mid Holocene, and mudstones since the mid-late Holocene. During the early Holocene, enhanced weathering and iron input from the volcanic island due to wetter climate conditions led to the formation of siderite within the lagoonal sediments. The geochemical composition of these siderites shows that precipitation was driven by microbial activity and iron reduction in the presence of dissolved bicarbonate. Chemical substitutions at grain margins illustrate changes in the oxidation state and probably reflect changes in pore-water chemistry due to sea-level rise and climate change (rainfall). In the late Holocene, sediment transport into the lagoon has been hampered by motus on the windward side of the lagoon, which led to early submarine lithification within the lagoon.
How the brain evolved remains a mystery. The goal of this thesis is to understand the fundamental processes behind the evolutionary history of the brain. Amniotes appeared 320 million years ago with the transition from water to land. This early group bifurcated into sauropsids (reptiles and birds) and synapsids (mammals). Amniote brains evolved separately and display obvious structural and functional differences. Although those differences reflect brain diversification, all amniote brains share a common ancestor, and their brains show multiple derived similarities: equivalent structures, networks, circuits and cell types have been preserved over millions of years. Finding these differences and similarities will help us understand the brain's evolutionary history and function. Studying brain evolution can be approached at various levels, including brain structure, circuits, cell types, and genes. We propose a focus on cell types for a more comprehensive understanding of brain evolution. Neurons are the basic building blocks and the most diverse cell types in the brain. Their evolution reflects changes in the developmental processes that produce them, which in turn may shape the neural circuits they belong to. However, there are currently no unified criteria for studying the homology of connectivity and development between neurons. A neuron's transcriptome is a molecular representation of its identity, connectivity, and developmental/evolutionary history. Hence the comparison of neuronal transcriptomes within and across species is a new and transformative development in the study of brain evolution, and we propose it as a way to fill this gap and unify these criteria.
In previous studies, published in Science (Tosches et al., 2018) and Nature (Norimoto et al., 2020), we leveraged scRNAseq in reptiles to re-evaluate the origins and evolution of the mammalian cerebral cortex and claustrum. Motivated by the success of this approach, in this thesis we have now expanded single-cell profiling to the entire brain of a lizard species, the Australian dragon Pogona vitticeps, with a special focus on the thalamus and prethalamus. This approach allowed us to study the evolution of neuron types in amniotes. To this end, we aimed to build a multilevel atlas of the lizard brain based on histology and transcriptomics and compare it to an equivalent mouse dataset (Zeisel et al., 2018).
Our atlas reveals a general structure that is consistent with that for other amniote brains, allowing us to make a direct comparison between lizard and mouse, despite their evolutionary divergence 320 million years ago. Through our analysis of the transcriptomes present in various neuron types, we have uncovered a core of conserved classes and discovered a fascinating dichotomy of new and conserved neuron types throughout the brain. This research challenges the traditional notion that certain brain regions are more conserved than others.
Our research has also uncovered the evolutionary history of the lizard thalamus and prethalamus by comparing them to homologous brain regions of the mouse. This pioneering research sheds new light on our understanding of the evolutionary history of the lizard brain. We propose a new classification of the lizard thalamic nuclei based on transcriptomics. Our research revealed that the thalamic neuron types in lizards can be grouped into two large, conserved categories from the medial to the lateral thalamus. These categories are encoded by a common set of effector genes, linking theories based on connectivity with molecular studies of these areas. In our data we have seen that the medial-lateral transcriptomic axis is conserved between mouse and lizard; this conservation was most likely already present in the common ancestor. Although there is a shared medial-lateral axis, a deeper study of the thalamic cell types has allowed us to see a partial diversification of the thalamic population, specifically in the sensory-related lateral thalamus; in contrast, the neuron types of the medial thalamic nuclei have been preserved.
On the other hand, the comparison with the mammalian prethalamus allowed us to confirm that the lizard ventromedial thalamic neuron types are homologous to mouse reticular thalamic neuron types (Díaz et al., 1994), even though they do not express the classical reticular thalamic nucleus (RTn) marker PV/pvalb. We also discovered that there has been a simplification of the mammalian prethalamic neuron types in favor of an increase in the number of interneuron (IN) types within the thalamus. We suggest that the loss of GABAergic neuronal types in the mammalian prethalamus is linked to the need for more efficient control of thalamo-pallial communication in mammals, while in lizards, where thalamo-pallial communication is probably simpler, the prethalamus presents a higher diversity.
The aim of this work is to develop an effective equation of state (EoS) for QCD, having the correct asymptotic degrees of freedom, to be used as input for dynamical studies of heavy ion collisions. We present an approach for modeling an EoS that respects the symmetries underlying QCD and includes the correct asymptotic degrees of freedom, i.e. quarks and gluons at high temperature and hadrons in the low-temperature limit. We achieve this by including quark degrees of freedom and the thermal contribution of the Polyakov loop in a hadronic chiral sigma-omega model. The hadronic part of the model is a nonlinear realization of a sigma-omega model. As the fundamental symmetries of QCD should also be present in its hadronic states, such an approach is widely used to describe hadron properties below and around Tc. The quarks are introduced as thermal quasiparticles coupling to the Polyakov loop, while the dynamics of the Polyakov loop are controlled by a potential term which is fitted to reproduce pure gauge lattice data. In this model the sigma field serves as the order parameter for chiral restoration and the Polyakov loop as the order parameter for deconfinement. The hadrons are suppressed at high densities by excluded volume corrections. As a next step, we introduce our new HQ model equation of state in a microscopic+macroscopic hybrid approach to heavy ion collisions. This hybrid approach is based on the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) transport approach with an intermediate hydrodynamical evolution for the hot and dense stage of the collision. The present implementation allows one to compare pure microscopic transport calculations with hydrodynamic calculations using exactly the same initial conditions and freeze-out procedure. The effects of the change in the underlying dynamics - ideal fluid dynamics vs. non-equilibrium transport theory - are explored.
The final pion and proton multiplicities are lower in the hybrid model calculation due to the isentropic hydrodynamic expansion, while the yields for strange particles are enhanced due to the local equilibrium in the hydrodynamic evolution. The elliptic and directed flow are shown to be insensitive to changes in the EoS, while the smaller mean free path in the hydrodynamic evolution is directly reflected in higher flow results, which are consistent with the experimental data. This finding indicates qualitatively that physical mechanisms like viscosity and other non-equilibrium effects play a substantially more important role than the EoS when bulk observables like flow are investigated. In the last chapter, results for the thermal production of MEMOs in nucleus-nucleus collisions from a combined micro+macro approach are presented. Multiplicities, rapidity and transverse momentum spectra are predicted for Pb+Pb interactions at different beam energies. The presented excitation functions for various MEMO multiplicities show a clear maximum at the upper FAIR energy regime, making this facility the ideal place to study the production of these exotic forms of multistrange objects.
Synchronized neural activity in the visual cortex is associated with small time delays (up to ~10 ms). The magnitude and direction of these delays depend on stimulus properties. Thus, synchronized neurons produce fast sequences of action potentials, and the order in which units tend to fire within these sequences is stimulus-dependent, but not stimulus-locked. In the present thesis, I investigated whether such preferred firing sequences repeat with sufficient accuracy to serve as a neuronal code. To this end, I developed a method for extracting the preferred sequence of firing in a group of neurons from their pair-wise preferred delays, as measured by the offsets of the centre peaks in their cross-correlation histograms. This analysis method was then applied to highly parallel recordings of neuronal spiking activity made in area 17 of anaesthetized cats in response to simple visual stimuli, like drifting gratings and moving bars. Using a measure of effect size, I then analyzed the accuracy with which preferred firing sequences reflected stimulus properties, and found that in the presence of gamma oscillations, the time at which a unit fired in the firing sequence conveyed stimulus information almost as precisely as the firing rate of the same unit. Moreover, the stimulus-dependent changes in firing rates and firing times were largely unrelated, suggesting that the information they carry is not redundant. Thus, despite operating at a time scale of only a few milliseconds, firing sequences have the strong potential to provide a precise neural code that can complement firing rates in the cortical processing of stimulus information.
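The extraction step can be illustrated under a simple latent-time model: if each pairwise preferred delay approximates the difference of two per-unit firing times, the least-squares estimate of those times is the row mean of the (antisymmetric) delay matrix. The function and variable names below are hypothetical, a sketch rather than the thesis's actual implementation:

```python
import numpy as np

def preferred_sequence(delays):
    """Estimate a preferred firing sequence from pairwise preferred delays.

    delays[i, j] ~ t_i - t_j (ms): the offset of the centre peak of the
    cross-correlation histogram of units i and j. Under this model the
    least-squares solution for the latent firing times t is the row mean.
    """
    delays = np.asarray(delays, dtype=float)
    t = delays.mean(axis=1)      # firing times, determined up to a constant
    order = np.argsort(t)        # earliest-firing unit first
    return t, order

# Toy check: three units with true relative firing times 0, 2 and 5 ms.
true_t = np.array([0.0, 2.0, 5.0])
delays = true_t[:, None] - true_t[None, :]   # ideal antisymmetric delay matrix
t, order = preferred_sequence(delays)
```

With noisy, partially inconsistent delay matrices the same row-mean estimate still yields a consensus ordering, which is the point of pooling pairwise measurements.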
This thesis examines the literary output of German servicemen writers writing from the occupied territories of Europe in the period 1940-1944. Whereas literary-biographical studies and appraisals of the more significant individual writers have been written, as well as a collective assessment of the Eastern-front writers, this thesis additionally addresses the German literary responses in France and Greece, then theatres of particular cultural/ideological attention. Original papers of the writer Felix Hartlaub were consulted by the author at the Deutsches Literatur Archiv (DLA) at Marbach. Original imprints of the wartime works of the subject writers are referred to throughout, and citations are from these. As all the published works were written under conditions of wartime censorship and, even where unpublished, written in oblique terms for fear of discovery, the texts were examined here for subliminal authorial intention. The critical focus of the thesis is on literary quality: on aesthetic niveau, on applied literary form, and on integrity of authorial intention. The thesis sought to discover: (1) the extent of the literary output in book-length forms; (2) the auspices and conditions under which this literary output was produced; (3) the publication history and critical reception of the output. The thesis took into account, inter alia: (1) occupation policy as it pertained locally to the writers' remit; (2) the ethical implications of this for the writers; (3) the writers' literary stratagems for negotiating the constraints of censorship.
In literary translation 'correctness' is rarely ratified by linguistic rules; it is more often a question of what a sensitive translator feels to be correct. Intuition will therefore play a major part. This intuition is seen here neither as instinctive reaction prompted by experience, nor as native competence, but as an inquiring, self-moderating influence inspired by the language itself. It is treated in this respect as an informed intuition, that is, as having a linguistic base for sensitive judgement. This assumes that the literary translator is both a creative writer and his own critical reader, as well as a fine judge of language potential. This line is applied to translating meaning and sense, transferring the very language, imitating the form and style, re-creating the features, and above all, to capturing those unique qualities of the original. After dealing with word-accuracy, the question of literary input demanded by form and style is examined. The treatment of language used for effect features in a section on Kafka. The merits and the problems of translating dialect as dialect for its own sake are looked at closely and in a positive way, as are the possibilities of reproducing 'oddities' of language. The immense task of translating the language of Joyce ('Ulysses') with all its vagaries and skilful manipulation of words is examined for the possibility of providing an accurate copy. The ultimate test of reproducing a uniqueness of artistic creation, together with the profound thought which inspired it, is reserved for a section on Hopkins. While it is recognized that, owing to the constrictions imposed by the extreme and sensitive use of language, no translation can fully include all that there is in his poems, it might be possible to capture enough of their essence to give an impression of a 'German' Hopkins at work. A major objective throughout is the establishment of a linguistic base for the part played by intuition in literary translation.
Spin waves in yttrium-iron garnet have been the subject of research for decades. Recently, the report of Bose-Einstein condensation at room temperature has brought these experiments back into focus. Because quasiparticles are much lighter than, for example, atoms, the condensation temperature can be much higher; with spin-wave quasiparticles, so-called magnons, even room temperature can be reached by externally injecting magnons. Possible applications in information technology are also of interest: using excitations instead of charges as carriers of information offers a much more efficient way of processing data, and basic logical operations have already been realized. Finally, the wavelength of spin waves, which can be decreased to the nanoscale, offers the opportunity to further miniaturize devices that receive signals, for example in smartphones.
For all of these purposes the magnon system is driven far out of equilibrium. In order to gain a better fundamental understanding, we concentrate in the main part of this thesis on the nonequilibrium aspect of magnon experiments and investigate their thermalization process. In this context we develop formalisms which are of general interest and which can be adapted to many different kinds of systems.
A milestone in describing gases out of equilibrium was the Boltzmann equation discovered by Ludwig Boltzmann in 1872. In this thesis extensions to the Boltzmann equation with improved approximations are derived. For the application to yttrium-iron garnet we describe the thermalization process after magnons were excited by an external microwave field.
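For reference, the classical Boltzmann equation for a single-particle distribution $f(\mathbf{r}, \mathbf{v}, t)$ in an external force $\mathbf{F}$, in its standard textbook form (the extensions derived in the thesis concern improved approximations to the collision term on the right-hand side):

```latex
\frac{\partial f}{\partial t}
  + \mathbf{v} \cdot \nabla_{\mathbf{r}} f
  + \frac{\mathbf{F}}{m} \cdot \nabla_{\mathbf{v}} f
  = \left( \frac{\partial f}{\partial t} \right)_{\mathrm{coll}}
```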
First we consider the Bose-Einstein condensation phenomenon. A special property of thin films of yttrium-iron garnet is that the magnon dispersion has its minimum at finite wave vectors, which leads to interesting behavior of the condensate. We investigate the spatial structure of the condensate using the Gross-Pitaevskii equation and find that the magnons cannot condense at the energy minimum alone; higher Fourier modes must also be occupied macroscopically. In principle this can lead to localization on a lattice in real space.
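The spatial analysis rests on the Gross-Pitaevskii equation, quoted here in its generic form for a condensate wave function $\psi$ with contact interaction strength $g$; in the magnon case the kinetic term is replaced by the anisotropic film dispersion, whose minima at finite wave vectors force the macroscopic occupation of higher Fourier modes:

```latex
i\hbar \, \frac{\partial \psi}{\partial t}
  = \left( -\frac{\hbar^{2}}{2m} \nabla^{2} + V(\mathbf{r}) + g \, |\psi|^{2} \right) \psi
```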
Next we use functional renormalization group methods to go beyond the perturbation-theory expressions in the Boltzmann equation. It is a difficult task to find a suitable cutoff scheme that fits the constraints of nonequilibrium, namely causality and the fluctuation-dissipation theorem when approaching equilibrium. The cutoff scheme we developed for bosons in this context is therefore of general interest for the functional renormalization group. In certain approximations we obtain a system of differential equations with a transition-rate structure similar to that of the Boltzmann equation. We consider a model of two kinds of free bosons, one of which acts as a thermal bath for the other. Taking a suitable initial state, we can use our formalism to describe the dynamics of magnons such that an enhanced occupation of the ground state is achieved. Numerical results are in good agreement with experimental data.
Finally we extend our model to include the pumping process and the decrease of the magnon particle number until thermal equilibrium is reached again. Additional terms which explicitly break the U(1) symmetry make it necessary to also extend the theory from which a kinetic equation can be deduced. These extensions are complicated, and we therefore restrict ourselves to perturbation theory. Because interactions in yttrium-iron garnet are weak, this already yields good results.
A graph theoretical approach to the analysis, comparison, and enumeration of crystal structures
(2008)
As an alternative approach to lattices and space groups, this work explores graph theory as a means to model crystal structures. The approach uses quotient graphs and nets - the graph-theoretical equivalent of cells and lattices - to represent crystal structures. After a short review of related work, new classes of cycles in nets are introduced, and their ability to distinguish between non-isomorphic nets as well as their computational complexity are evaluated. Then, two methods to estimate a structure's density from the corresponding net are proposed. The first uses coordination sequences to estimate the number of nodes in a sphere, whereas the second method determines the maximal volume of a unit cell. Based on the quotient graph only, methods are proposed to determine whether nets consist of islands, chains, planes, or penetrating, disconnected sub-nets. An algorithm for the enumeration of crystal structures is revised and extended to a search for structures possessing certain properties. Particular attention is given to the exclusion of redundant nets and of those which, by the nature of their connectivity, cannot correspond to a crystal structure. Nets with four four-coordinated nodes, corresponding to sp3-hybridised carbon polymorphs with four atoms per unit cell, are completely enumerated in order to demonstrate the approach. In order to render quotient graphs and nets independent of crystal structures, they are reintroduced in a purely graph-theoretical way. Based on this, the issue of iso- and automorphism of nets is reexamined. It is shown that the topology of a net (that is, the bonds in a crystal) severely constrains the symmetry of the embedding (that is, the crystal), and in the case of connected nets the space group except for the setting. Several examples are studied and conclusions on phases are drawn (pseudo-cubic FeS2 versus pyrite; α- versus β-quartz; marcasite- versus rutile-like phases).
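The coordination-sequence method lends itself to a compact sketch: label each quotient-graph edge with an integer translation vector and run a breadth-first search over (node, cell) pairs. The function below is a hypothetical illustration, not code from the thesis; the 2-dimensional square lattice serves as a check, since its k-th coordination shell contains 4k nodes:

```python
def coordination_sequence(quotient_edges, start=0, shells=4):
    """Coordination sequence of a periodic net given by its quotient graph.

    quotient_edges: list of (u, v, shift) with shift an integer tuple: an
    edge from node u in the zero cell to node v in cell `shift`; the
    reverse edge (with negated shift) is implied. Returns the number of
    nodes at graph distance 1, 2, ..., shells from the start node.
    """
    zero = tuple(0 for _ in quotient_edges[0][2])
    seen = {(start, zero)}
    frontier = [(start, zero)]
    counts = []
    for _ in range(shells):
        nxt = []
        for u, cell in frontier:
            for a, b, s in quotient_edges:
                for src, dst, sh in ((a, b, s), (b, a, tuple(-x for x in s))):
                    if src != u:
                        continue
                    node = (dst, tuple(c + d for c, d in zip(cell, sh)))
                    if node not in seen:
                        seen.add(node)
                        nxt.append(node)
        counts.append(len(nxt))
        frontier = nxt
    return counts

# Square lattice: a single node with translations (1, 0) and (0, 1).
square = [(0, 0, (1, 0)), (0, 0, (0, 1))]
```

Counting nodes shell by shell in this way is exactly the 'number of nodes in a sphere' estimate that the density bound builds on.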
As the automorphisms of certain quotient graphs stipulate a translational symmetry higher than an arbitrary embedding of the corresponding net would show, they are examined in more detail, and a method to reduce the size of such quotient graphs is proposed. Besides two instructional examples with 2-dimensional graphs, the halite, calcite, magnesite, barytocalcite, and a strontium feldspar structure are discussed. For some of the structures it is shown that the quotient graph equivalent to a centred cell is reduced to a quotient graph equivalent to the primitive cell. For the partially disordered strontium feldspar, it is shown that even if it could be annealed to an ordered structure, the unit cell would likely remain unchanged. For the calcite and barytocalcite structures it is shown that the equivalent nets are not isomorphic.
‘The whole is more than the sum of its parts.’ This idea has been brought forward by psychologists such as Max Wertheimer who formulated Gestalt laws that describe our perception. One law is that of collinearity: elements that correspond in their local orientation to their global axis of alignment form a collinear line, compared to a noncollinear line where local and global orientations are orthogonal. Psychophysical studies revealed a perceptual advantage for collinear over non-collinear stimulus context. It was suggested that this behavioral finding could be related to underlying neuronal mechanisms already in the primary visual cortex (V1). Studies have shown that neurons in V1 are linked according to a common fate: cells responding to collinearly aligned contours are predominantly interconnected by anisotropic long-range lateral connections. In the cat, the same holds true for visual interhemispheric connections. In the present study we aimed to test how the perceptual advantage of a collinear line is reflected in the anatomical properties within or between the two primary visual cortices. We applied two neurophysiological methods, electrode and optical recording, and reversibly deactivated the topographically corresponding contralateral region by cooling in eight anesthetized cats. In electrophysiology experiments our results revealed that influences by stimulus context significantly depend on a unit’s orientation preference. Vertical preferring units had on average a higher spike rate for collinear over non-collinear context. Horizontal preferring units showed the opposite result. Optical imaging experiments confirmed these findings for cortical areas assigned to vertical orientation preference. Further, when deactivating the contralateral region the spike rate for horizontal preferring units in the intact hemisphere significantly decreased in response to a collinear stimulus context. 
Most of the optical imaging experiments revealed a decrease in cortical activity in response to either stimulus context crossing the vertical midline. In conclusion, our results support the notion that modulating influences from stimulus context can be quite variable. We suggest that the kind of influence may depend on a cell's orientation preference. The perceptual advantage of a collinear line, as one of the Gestalt laws proposes, is not uniformly represented in the activity of individual cells in V1. However, it is likely that the combined activity of many V1 neurons serves to activate neurons further up the processing stream, which eventually leads to the perceptual phenomenon.
"The whole is more than the sum of its parts." This idea has been brought forward by psychologists such as Max Wertheimer who formulated Gestalt laws that describe our perception. One law is that of collinearity: elements that correspond in their local orientation to their global axis of alignment form a collinear line, compared to a noncollinear line where local and global orientations are orthogonal. Psychophysical studies revealed a perceptual advantage for collinear over non-collinear stimulus context. It was suggested that this behavioral finding could be related to underlying neuronal mechanisms already in the primary visual cortex (V1). Studies have shown that neurons in V1 are linked according to a common fate: cells responding to collinearly aligned contours are predominantly interconnected by anisotropic long-range lateral connections. In the cat, the same holds true for visual interhemispheric connections. In the present study we aimed to test how the perceptual advantage of a collinear line is reflected in the anatomical properties within or between the two primary visual cortices. We applied two neurophysiological methods, electrode and optical recording, and reversibly deactivated the topographically corresponding contralateral region by cooling in eight anesthetized cats. In electrophysiology experiments our results revealed that influences by stimulus context significantly depend on a unit’s orientation preference. Vertical preferring units had on average a higher spike rate for collinear over non-collinear context. Horizontal preferring units showed the opposite result. Optical imaging experiments confirmed these findings for cortical areas assigned to vertical orientation preference. Further, when deactivating the contralateral region the spike rate for horizontal preferring units in the intact hemisphere significantly decreased in response to a collinear stimulus context. 
Most of the optical imaging experiments revealed a decrease in cortical activity in response to either stimulus context crossing the vertical midline. In conclusion, our results support the notion that modulating influences from stimulus context can be quite variable. We suggest that the kind of influence may depend on a cell’s orientation preference. The perceptual advantage of a collinear line as one of the Gestalt laws proposes is not uniformly represented in the activity of individual cells in V1. However, it is likely that the combined activity of many V1 neurons serves to activate neurons further up the processing stream which eventually leads to the perceptual phenomenon.
I derive a general effective theory for hot and/or dense quark matter. After introducing general projection operators for hard and soft quark and gluon degrees of freedom, I explicitly compute the functional integral for the hard quark and gluon modes in the QCD partition function. Upon appropriate choices for the projection operators one recovers various well-known effective theories such as the Hard Thermal Loop/Hard Dense Loop effective theories as well as the High Density Effective Theory by Hong and Schaefer. I then apply the effective theory to cold and dense quark matter and show how it can be utilized to simplify the weak-coupling solution of the color-superconducting gap equation. In general, one considers as relevant quark degrees of freedom those within a thin layer of width 2 Lambda_q around the Fermi surface and as relevant gluon degrees of freedom those with 3-momenta less than Lambda_gl. It turns out that it is necessary to choose Lambda_q << Lambda_gl, i.e., scattering of quarks along the Fermi surface is the dominant process. Moreover, this special choice of the two cutoff parameters Lambda_q and Lambda_gl facilitates the power-counting of the numerous contributions in the gap equation. In addition, it is demonstrated that both the energy and the momentum dependence of the gap function have to be treated self-consistently in order to determine the imaginary part of the gap function. For quarks close to the Fermi surface the imaginary part is calculated explicitly and shown to be of sub-subleading order in the gap equation.
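For context, the weak-coupling behaviour that this power-counting is designed to control is the well-known result (quoted here from the general literature, not from the thesis itself) that the zero-temperature color-superconducting gap at the Fermi surface scales as

```latex
\phi_{0} \;\sim\; b \, \mu \, g^{-5} \exp\!\left( - \frac{3\pi^{2}}{\sqrt{2}\, g} \right)
```

with $\mu$ the quark chemical potential, $g$ the QCD coupling, and $b$ a constant of order one; the non-BCS $1/g$ in the exponent stems from the long-range magnetic gluon exchange that dominates scattering along the Fermi surface.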
This dissertation is devoted to the study of thermodynamics for quantum gauge theories. The poor convergence of quantum field theory at finite temperature has been the main obstacle in the practical applications of thermal QCD for decades. In this dissertation I apply hard-thermal-loop perturbation theory (HTLpt), a gauge-invariant reorganization of the conventional perturbative expansion for quantum gauge theories, to the thermodynamics of QED and Yang-Mills theory to three-loop order. For the Abelian case, I present a calculation of the free energy of a hot gas of electrons and photons by expanding in a power series in mD/T, mf/T and e^2, where mD and mf are the photon and electron thermal masses, respectively, and e is the coupling constant. I demonstrate that the hard-thermal-loop reorganization improves the convergence of the successive approximations to the QED free energy at large coupling, e ~ 2. For the non-Abelian case, I present a calculation of the free energy of a hot gas of gluons by expanding in a power series in mD/T and g^2, where mD is the gluon thermal mass and g is the coupling constant. I show that at three-loop order hard-thermal-loop perturbation theory is compatible with lattice results for the pressure, energy density, and entropy down to temperatures T ~ 2-3 Tc. The results suggest that HTLpt provides a systematic framework that can be used to calculate static and dynamic quantities at temperatures relevant to the LHC.
A fundamental work on THz measurement techniques for application to steel manufacturing processes
(2004)
Until the invention of a photo-mixing technique at Bell Laboratories in 1984 [1], terahertz (THz) waves could only be generated by large-scale systems such as free-electron lasers. The first method, based on the Auston switch, could generate frequencies up to 1 THz [2]. Subsequent efforts to extend this frequency limit, combining antennas for generation and detection, reached several THz [3, 4]. The technique has since developed, gradually filling the so-called THz gap, while much research has also aimed at increasing the output power [5-7]. In the 1990s, non-linear optical methods brought a major advance in the accessible frequency band [8-11]: they drastically expanded the frequency region and recently enabled measurements up to 41 THz [12]. In parallel, other approaches have yielded new generation and detection methods, for continuous-wave (CW) THz radiation as well as pulsed generation [13-19]. In particular, THz luminescence and lasing, originating from research on the Bloch oscillator, have recently been obtained from quantum cascade structures, albeit only at low temperatures of about 60 K [20-22]. This research attracts much attention because its low cost and easier operation could be a breakthrough in spreading THz techniques into industry as well as research. The development of the THz field has naturally been supported by short-pulse laser technology: with the appearance of stable Ti:sapphire lasers and high-power chirped pulse amplification (CPA) lasers, which replaced dye lasers, much effort has been concentrated on pulse compression and amplification techniques [23]. From the application side, the THz technique has come into the limelight as a promising measurement method.
The discovery of absorption peaks of proteins and DNA in the THz region has, over the past several years, promoted practical applications of the technique in medicine and pharmaceutical science [24-27]. It is also known that polar molecules have absorption lines in this region; therefore, gas and water-content monitoring has been proposed for the chemical and food industries [28-32]. Furthermore, many reports, such as measurements of carrier distributions in semiconductors, of the refractive index of thin films, and of object shapes by radar, indicate that the technique has a wide range of applications [33-37]. I believe it is worth the challenge to apply it in the steel-making industry, owing to its unique advantages. THz wavelengths of 30-300 μm can cope with the surface roughness of steel products while still allowing detection with sub-millimeter precision for remote surface inspection. There is also the possibility of measuring the thickness or dielectric constants of relatively conductive materials, thanks to the high transmission through non-polar dielectric materials, short-pulse detection, and a high signal-to-noise ratio of 10^3-10^5. Furthermore, the technique could be applicable to measurements at high temperature, since it is less influenced by thermal radiation than visible and infrared light. These ideas motivated me to start this THz work.
The fungal interaction with plants is a 400-million-year-old phenomenon, which presumably assisted in the plants’ establishment on land. In a natural ecosystem, all plants, ranging from large trees to sea-grasses, are colonized by fungal endophytes, which can be detected inter- and intracellularly within the tissues of apparently healthy plants, without causing obvious negative effects on their host. These ubiquitous and diverse microorganisms are likely playing important roles in plant fitness and development. However, knowledge on the ecological functions of fungal root endophytes is scarce. Among their possible functions, endophytes are implicated in mutualisms with plants, which may increase plant resistance to biotic stressors like herbivores and pathogens, and/or to abiotic factors like soil salinity and drought. Endophytes are also fascinating microorganisms with regard to their high potential to produce a great spectrum of secondary metabolites with expected ecological functions. However, evidence suggests that the interactions between host plants and endophytes are not static and that endophytes express different symbiotic lifestyles ranging from mutualism to parasitism, which makes it difficult to predict the ecological roles of these cryptic microorganisms. To reveal the ecological function of fungal root endophytes, this doctoral thesis aims at assessing the interactions of fungal root endophytes with different plants and their effects on plant fitness, based on their phylogeny, traits, and competition potential in settings encompassing different abiotic contexts. To understand the cryptic implication of nonmycorrhizal endophytes in ecosystem processes, we isolated a diverse spectrum of fungal endophytes from roots of several plant species growing in different natural contexts and tested their effects on different model plants under axenic laboratory conditions.
Additionally, we aimed at investigating the effect of abiotic and biotic variables on the outcome of interactions between fungal root endophytes and plants.
In summary, the morphological and physiological traits of 128 fungal endophyte strains within ten fungal orders were studied, and artificial experimental systems were used to reproduce their interactions with three plant species under laboratory conditions. Under defined axenic conditions, most endophytes behaved as weak parasites, but their performance varied across plant species and fungal taxa. The variation in the interactions was partly explained by convergent fungal traits that separate groups of endophytes with potentially different niche preferences. According to my findings, I predict that the functional complementarity of strains is essential in structuring natural root endophytic communities. Additionally, the responses of plant-endophyte interactions to different abiotic factors, namely nutrient availability, light intensity, and substrate pH, indicate that the outcome of plant-fungus relationships may be robust to changes in the abiotic environment. The assessment of the responses of plant-endophyte interactions to the biotic context, using combinations of selected dominant root fungal endophytes with different degrees of trait similarity and shared evolutionary history, indicates that frequently coexisting root-colonizing fungi may avoid competition in interspecific interactions by occupying specific niches, and that their interactions likely define the structure of root-associated fungal communities and influence the microbiome's impact on plant fitness.
In conclusion, my findings suggest that dominant fungal lineages display different ecological preferences and complementary sets of functional traits, with different niche preferences within root tissues that allow them to avoid competition. Also, their diverse effects on plant fitness are likely host-isolate dependent and robust to changes in the abiotic environment when these encompass the tolerance range of either symbiont.
A framework for the analysis and visualization of multielectrode spike trains / by Ovidiu F. Jurjut
(2009)
The brain is a highly distributed system of constantly interacting neurons. Understanding how it gives rise to our subjective experiences and perceptions depends largely on understanding the neuronal mechanisms of information processing. These mechanisms are still poorly understood, and the timescale on which the coding process evolves remains a matter of ongoing debate. Recently, multielectrode recordings of neuronal activity have begun to contribute substantially to elucidating how information coding is implemented in brain circuits. Unfortunately, analysis and interpretation of multielectrode data are often difficult because of their complexity and large volume. Here we propose a framework that enables the efficient analysis and visualization of multielectrode spiking data. First, using self-organizing maps, we identified reoccurring multi-neuronal spike patterns that evolve on various timescales. Second, we developed a color-based visualization technique for these patterns. They were mapped onto a three-dimensional color space based on their reciprocal similarities, i.e., similar patterns were assigned similar colors. This representation enables a quick and comprehensive inspection of spiking data and provides a qualitative description of pattern distribution across entire datasets. Third, we quantified the observed pattern expression motifs and investigated their contribution to the encoding of stimulus-related information. Emphasis was placed on the timescale on which patterns evolve, covering the temporal scales from synchrony up to mean firing rate. Using our multi-neuronal analysis framework, we investigated data recorded from the primary visual cortex of anesthetized cats. We found that cortical responses to dynamic stimuli are best described as successions of multi-neuronal activation patterns, i.e., trajectories in a multidimensional pattern space.
Patterns that encode stimulus-specific information are not confined to a single timescale but can span a broad range of timescales, which are tightly related to the temporal dynamics of the stimuli. Therefore, the strict separation between synchrony and mean firing rate is somewhat artificial as these two represent only extreme cases of a continuum of timescales that are expressed in cortical dynamics. Results also indicate that timescales consistent with the time constants of neuronal membranes and fast synaptic transmission (~10-20 ms) appear to play a particularly salient role in coding, as patterns evolving on these timescales seem to be involved in the representation of stimuli with both slow and fast temporal dynamics.
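The similarity-to-color mapping described above can be sketched as a toy example: patterns are embedded into a three-dimensional space that preserves their reciprocal distances, and the three axes are read as RGB, so similar patterns receive similar colors. The binary patterns below are invented, and PCA via SVD stands in for the embedding used in the thesis (which relies on self-organizing maps); this is only an illustrative sketch, not the actual method.

```python
import numpy as np

# Hypothetical binary multi-neuronal activity patterns
# (rows = patterns, columns = neurons); invented for illustration.
patterns = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 1, 1, 0, 1],
], dtype=float)

# Embed the patterns into 3D with PCA (via SVD) so that pairwise
# similarities are preserved as spatial proximity.
centered = patterns - patterns.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:3].T          # 3D coordinates, one row per pattern

# Rescale each axis to [0, 1] so the coordinates form valid RGB triplets:
# nearby patterns end up with similar colors.
mins, maxs = coords.min(axis=0), coords.max(axis=0)
rgb = (coords - mins) / np.where(maxs > mins, maxs - mins, 1.0)
print(np.round(rgb, 2))
```

Because four centered points span at most three dimensions, the 3D embedding here is distance-preserving, so the first two (nearly identical) patterns land closer together in color space than either does to the dissimilar ones.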
In this work the flexibility requirements of a highly renewable European electricity network, which has to cover fluctuations of wind and solar power generation on different temporal and spatial scales, are studied. Cost-optimal ways to do so are analysed, including the optimal distribution of infrastructure, large-scale transmission, storage, and dispatchable generators. In order to examine these issues, a model of increasing sophistication is built, first considering different flexibility classes of conventional generation, then adding storage, before finally considering transmission, so as to see the effects of each.
To conclude, this work showed that slowly flexible base-load generators can only be used in energy systems with renewable shares of less than 50%, independent of the expansion of an interconnecting transmission network within Europe. Furthermore, for a system with a dominant fraction of renewable generation, highly flexible generators are essentially the only necessary class of backup generators. The total backup capacity can only be decreased significantly if interconnecting transmission is allowed, clearly favouring a European-wide energy network. These results are independent of the complexity level of the cost assumptions used for the models. The use of storage technologies makes it possible to reduce the required conventional backup capacity further. This highlights the importance of including additional technologies in the energy system that provide flexibility to balance fluctuations caused by the renewable energy sources. These technologies could for example be advanced energy storage systems, interconnecting transmission in the electricity network, and hydro power plants.
It was demonstrated that a cost-optimal European electricity system with almost 100% renewable generation can have total system costs comparable to today's. However, this requires a very large transmission grid expansion, to nine times the line volume of the present-day system. Limiting transmission increases the system cost by up to a third; however, a compromise grid with four times today's line volume already locks in most of the cost benefits. It is therefore clear that increasing the pan-European network connectivity enables a cost-efficient integration of renewable energies, which is strongly needed to reach current climate change prevention goals.
It was also shown that a similarly cost efficient, highly renewable European electricity system can be achieved that considers a wide range of additional policy constraints and plausible changes of economic parameters.
Most elements heavier than iron are synthesized in stars during neutron-capture reactions in the r- and s-process. The s-process nucleosynthesis is composed of a main and a weak component. While the s-process is considered to be well understood, further investigations using nucleosynthesis simulations rely on measured neutron-capture cross sections as crucial input parameters. Neutron-capture cross sections relevant for the s-process can be measured using various experimental methods. A prominent example is the activation method relying on the 7Li(p,n)7Be reaction as a neutron source, which has the advantage of high neutron intensities and is able to create a quasi-stellar neutron spectrum at kBT = 25 keV. Other neutron sources able to provide quasi-stellar spectra at different energies suffer from lower neutron intensities. Simulations using the PINO tool suggest activating samples with different neutron spectra provided by the 7Li(p,n)7Be reaction and then forming a linear combination of the obtained spectrum-averaged cross sections to determine the Maxwellian-averaged cross section (MACS) at various energies of astrophysical relevance. To investigate the accuracy of the PINO tool at proton energies between the neutron emission threshold at Ep = 1880.4 keV and 2800 keV, measurements of the 7Li(p,n)7Be neutron fields are presented, which were carried out at the PTB Ion Accelerator Facility of the Physikalisch-Technische Bundesanstalt in Braunschweig. The neutron fields at ten different proton energies were measured. The measured neutron fields show good agreement with the simulation at proton energies Ep = 1887, 1897, 1907, 1912, and 2100 keV. For the other proton energies, Ep = 2000, 2200, 2300, 2500, and 2800 keV, differences between measurement and simulation were found and are discussed. The obtained results can be used to benchmark and adapt the PINO tool and provide crucial information for further improvement of the neutron activation method for astrophysics.
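The linear-combination step behind the MACS determination can be sketched numerically. The spectrum-averaged cross sections and weights below are invented for illustration; in practice the weights would follow from fitting the combined 7Li(p,n)7Be spectra to a Maxwell-Boltzmann distribution at the desired kBT.

```python
import numpy as np

# Hypothetical spectrum-averaged cross sections (SACS, in mb), one per
# neutron spectrum obtained at a different proton energy; values invented.
sacs = np.array([610.0, 540.0, 470.0])

# Illustrative weights of the linear combination, e.g. from a fit of the
# combined neutron spectrum to a Maxwell-Boltzmann distribution.
weights = np.array([0.5, 0.3, 0.2])

# MACS estimate as the weighted sum of the spectrum-averaged cross sections
macs = np.dot(weights, sacs)
print(f"MACS estimate: {macs:.1f} mb")  # -> MACS estimate: 561.0 mb
```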
As an application of the 7Li(p,n)7Be neutron fields, an activation campaign on gallium is presented, an element that is mostly produced during the weak s-process in massive stars. The available cross-section data for the 69,71Ga(n,γ) reactions, mostly determined by activation measurements, differ by up to a factor of three. To improve the data situation, activation measurements were carried out using the 7Li(p,n)7Be reaction. The neutron-capture cross sections for a quasi-stellar neutron spectrum at kBT = 25 keV were determined for 69Ga and 71Ga.
This work aimed to investigate the regulation and activity of 5-lipoxygenase (5-LO), the central enzyme in leukotriene biosynthesis, in two colorectal cancer cell lines. The leukotriene pathway is positively correlated with the progression of several solid malignancies; however, factors regulating 5-LO expression and activity in tumors are poorly understood.
Cancer development, as well as cancer progression, is strongly dependent on the tumor microenvironment. In the conventional monolayer culture of cancer cell lines, the cell-matrix and cell-cell interactions present in native tumors are absent. Furthermore, it is already known that various colon cancer cell lines dysregulate several important signaling pathways upon 3D growth. Therefore, the expression of the leukotriene cascade in HT-29 and HCT-116 colorectal cancer cells was investigated within a three-dimensional context using multicellular tumor spheroids to mimic a more physiological environment compared to conventional cell culture. In particular, the expression of 5-LO, cPLA2α, and LTA4 hydrolase was altered by three-dimensional (3D) cell growth, as shown by qPCR and Western blot analysis. High cellular density in monolayer cultures led to similar results. The observed 5-LO upregulation was found to be inversely correlated with cell proliferation, determined by cell cycle analysis, and with activation of PI3K/mTORC-2- and MEK-1/ERK-dependent pathways, determined using pharmacological pathway inhibition, stable shRNA knockdown cell lines, and analysis via qPCR and Western blot. Subsequently, the transcription factor E2F1 and its target gene MYBL2 were identified as playing a role in the repression of 5-LO during cell proliferation. For this purpose, several stable MYBL2 over-expression and ALOX5 reporter cell lines were prepared and analyzed. Since 5-LO had already been identified as a direct p53 target gene, the influence of p53, which is variably expressed in the cell lines (HT-29, p53 R273H mut; HCT-116 p53 wt; HCT-116 p53 KO), was investigated as well. The PI3K/mTORC-2- and MEK-1/ERK-dependent suppression of 5-LO was also found in tumor cells of other origins (Capan-2, Caco-2, MCF-7), as determined using pharmacological pathway inhibition and subsequent analysis via qPCR.
This suggests that the identified mechanism might apply to other tumor entities as well.
5-LO activity was previously described as attenuated in HT-29 and HCT-116 cells compared to polymorphonuclear leukocytes, which express a highly active 5-LO. However, the present study showed that the enzyme activity is indeed low but inducible in HT-29 and HCT-116 cells. Of note, the general lipid mediator profile and the mediator concentrations were comparable to those of M2 macrophages. Finally, the analysis of substrate availability in HT-29 and HCT-116 cells revealed a vast difference between formed metabolite concentrations and supplemented fatty acid concentrations, indicating that the substrates are either transformed into lipoxygenase-independent metabolites or are esterified into the cellular membrane.
In summary, the data presented in this work demonstrate that 5-LO expression and activity are tightly regulated in HT-29 and HCT-116 cells and fine-tuned due to environmental conditions. The cells suppress 5-LO during proliferation but upregulate the expression and activity of the enzyme under cellular stress-triggering conditions. This implies a possible role of 5-LO in manipulating the tumor stroma to support a tumor-promoting microenvironment.
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) and T-cell/histiocyte-rich large B-cell lymphoma (THRLBCL) are rare types of malignant lymphoma. Both NLPHL and THRLBCL are frequently observed in middle-aged men, with THRLBCL frequently presenting at an advanced Ann Arbor stage with B symptoms and being associated with a more aggressive course. However, due to the limited number of tumor cells in the tissue of both NLPHL and THRLBCL, only few studies have been conducted on these lymphomas, and current results are mainly based on general molecular genetic studies.
In order to obtain a better understanding of these disease forms as well as of possible changes in their nuclear and cytoplasmic sizes, the following study compared the different NLPHL forms and THRLBCL in terms of nuclear size and nuclear volume. This was carried out using both 2D and 3D analysis. The 2D analysis of nuclear size and nuclear volume revealed no significant differences between the groups. However, the 3D analysis of NLPHL and THRLBCL pointed to a slightly enlarged nuclear volume in THRLBCL. Furthermore, the analysis indicated a significantly increased cytoplasmic size in THRLBCL compared to the NLPHL forms. Differences occurred not only between the tumor cells of both disease forms; the T cells also presented a larger nuclear volume in THRLBCL. B cells, which were considered as the control group, did not demonstrate any significant differences between the groups. The presented results suggest an increased activity of T cells in THRLBCL, which is most likely to be interpreted as a response against the surrounding tumor cells and probably limits their proliferation. Based on these results, the importance of 3D analysis is also evident, as it is clearly superior to 2D analysis. For a better understanding of both disease forms, it is therefore recommended to use the 3D technique in combination with molecular genetic analysis in future research.
The subject of this thesis is the experimental investigation of the neutron-capture cross sections of the neutron-rich, short-lived boron isotopes 13B and 14B, as they are thought to influence the rapid neutron-capture process (r process) nucleosynthesis in a neutrino-driven wind scenario.
The 13,14B(n,γ)14,15B reactions were studied in inverse kinematics via Coulomb dissociation at the LAND/R3B setup (Reactions with Relativistic Radioactive Beams). A radioactive beam of 14,15B was produced via in-flight fragmentation and directed onto a lead target at about 500 AMeV. The neutron breakup of the projectile within the electromagnetic field of the target nucleus was investigated in a kinematically complete measurement. All outgoing reaction products were detected and analyzed in order to reconstruct the excitation energy.
The differential Coulomb dissociation cross sections as a function of the excitation energy were obtained, and first experimental constraints on the photoabsorption and neutron-capture cross sections were deduced. The results were compared to theoretical approximations of the cross sections in question. The Coulomb dissociation cross section of 15B into 14B(g.s.) + n was determined to be σ_CD(15B → 14B(g.s.) + n) = 81(8)_stat(10)_syst mb, while the Coulomb dissociation cross section of 14B into a neutron and 13B in its ground state was found to be σ_CD(14B → 13B(g.s.) + n) = 281(25)_stat(43)_syst mb. Furthermore, new information on the nuclear structure of 14B was obtained, as the spectral shape of the differential Coulomb dissociation cross section indicates a halo-like structure of the nucleus.
Additionally, the Coulomb dissociation of 11Be was investigated and compared to previous measurements in order to verify the present analysis. The corresponding Coulomb dissociation cross section of 11Be into 10Be(g.s.) + n was found to be 450(40)_stat(54)_syst mb, which is in good agreement with the results of Palit et al.
My study examined MMA training, and thereby the ‘back region’ of MMA, where the ‘everyday life’ of MMA takes place. I enquired into how MMA training corresponds with MMA’s self-description, namely the somewhat self-contradictory notion that MMA fights are dangerous combative goings-on approximating real fighting, but that MMA fighters are able to approach these incalculable and uncontrollable combative dangers as calculable and controllable risks. Conducting an ethnography in which I focused on the combination of participation and observation, I studied how the specific interaction organisations of the three core training practices of MMA training provide the training students with specific combative experiences and how they thereby construct the social reality that is MMA training....
The book deals with a comprehensive constellation of narrative and visual, often counterposed representations of the causes, course, and results of the assault on the Palace of Justice of Colombia by a guerrilla commando and the immediate counterattack launched by state security forces on November 6, 1985, as well as with the local memorial traditions in which the production, circulation, and reproduction of these representations have taken place between 1985 and 2020. The research on which it is based was grounded in the method and perspective of classical anthropology, inasmuch as qualitative fieldwork and the search for the perspective of the actors involved played a central role. Within that context, memory entrepreneurs belonging to diverse sectors, from the far right to the human rights movement, were followed through multi-sited fieldwork in various locations in Colombia, as well as in several countries of America and Europe. The analyses of fieldwork data, documentary sources, and visual representations that constitute the core of the argument are framed in the field of memory studies and mainly based on theoretical and methodological resources from Pierre Bourdieu’s field theory, Jeffrey Alexander’s theory of social trauma, and Ernst Gombrich’s characterization of iconological analysis.
The book is composed of four chapters preceded by an introduction and followed by the conclusions and documentary appendices, and substantiates three main theses. The first is that the Palace of Justice events were a radio- and television-broadcast dispersed tragedy that affected the lives of actors from different social sectors and regions of Colombia, who have launched since 1985 multiple memorial initiatives in different fields of culture, thereby contributing to the formation and intergenerational transmission of a widespread cultural trauma. The second is that the narrative and visual representations at the core of that trauma express a vast universe of local representational traditions that can be traced back at least to the early 20th century, and therefore preexist the so-called Colombian “memory boom”, dated to the mid-1990s. As an example of the preexistence and longstanding impact of these traditions, the local usage of the figure of the “holocaust” for representing the effects of politically motivated violence is analyzed with regard to the Palace of Justice events, but also traced to other representations that emerged in the 1920s. The third thesis is that analyzing the diverse, frequently counterposed accounts of political violence elaborated within these traditions provides an opportunity to explore a wide variety of understandings of the causes and characteristics of the longstanding Colombian social and armed conflict.
Keywords: Political violence, Cultural trauma, Collective Memory, Iconology, Holocaust, Colombia.
The ability to adapt specifically and context-dependently to intrinsic and/or extrinsic signals is the foundation of cellular homeostasis. Different signals are recognized by membrane receptors or intracellular receptors and enable the molecular adjustment of cellular processes. Complex, interlocking protein networks are elementary to the regulation of the cell. Proteins and their functions are regulated on demand and are subject to constant proteolytic turnover.
Stimulus-dependent gene transcription and/or protein translation plays a central role here, since the underlying machinery can adjust the composition and function of the protein networks accordingly. In addition to the regulation of protein abundance, proteins are post-translationally modified in order to change their properties rapidly. Post-translational modifications include ubiquitination and/or phosphorylation, which regulate protein functions in a highly dynamic manner. Deregulated protein networks are often associated with neurodegeneration and autoimmune diseases or cancer. Infections with human-pathogenic bacteria also interfere strongly with the regulation of protein networks and their functions, thereby challenging cellular homeostasis.
Bacteria of the genus Salmonella are zoonotic, Gram-negative, facultatively intracellular pathogens that cause millions of Salmonella infections worldwide. Of particular importance is Salmonella enterica serovar Typhimurium (hereafter Salmonella), which causes gastroenteritis in humans, mostly as a result of inadequate hygiene measures.
Immunity in epithelial cells is mediated by the innate immune system and serves to recognize and combat pathogens. The Toll-like receptors (TLR) belong to the pattern recognition receptors, which detect specific microbial structures and generate a context-dependent cellular response. Danger receptors, in contrast, do not recognize the pathogen directly but rather cellular perturbations caused by cell damage or bacterial invasion. The intrinsic ability of the host cell to defend itself against infections/dangers is referred to as cell-autonomous immunity. Induced pro-inflammatory signaling pathways and cellular stress responses play an important role here. The cellular stress response activates, among other things, selective autophagy, which can specifically degrade aberrant organelles, proteins, and invasive pathogens. Another stress signaling pathway is the integrated stress response (ISR), which permits selective protein translation and thereby enables the resolution of proteotoxic stress.
To penetrate epithelial cells, Salmonella requires a complex system of virulence factors that enables bacterial internalization and proliferation in the host cell. For this, Salmonella uses a type III secretion system, which secretes bacterial virulence factors into the cell, forcing a highly specific modulation of the host.
The virulence factors SopE and SopE2 play a key role here, since they substantially mediate the pathogenicity of Salmonella. Through molecular mimicry of host GTP (guanosine triphosphate) exchange factors, SopE and SopE2 activate the Rho GTPases CDC42 and Rac1. GTP-loaded CDC42 and Rac1 in turn activate the actin cytoskeleton and stimulate the polymerization of actin filaments via the Arp2/3 complex at the invasion site. The pathogen is thereby taken up into a membrane-enclosed vesicle, the so-called Salmonella-containing vacuole (SCV). The SCV represents a protective, replicative, intracellular niche of the pathogen and is permanently modulated by various virulence factors.
In general, the activation of pattern recognition receptors and danger receptors thus leads to a cellular stress response and an inflammatory reaction, through which the infection is combated. Inflammatory signaling pathways are mostly mediated by the central transcription factor NF-κB (nuclear factor 'kappa-light-chain-enhancer' of activated B cells). NF-κB induces pro-inflammatory effectors and stress genes. Cell-autonomous immunity is additionally enabled by antibacterial autophagy, whereby Salmonella are selectively degraded via the lysosomal system. At a few SCVs, the bacterial type III secretion system causes membrane damage, allowing Salmonella to penetrate the host cytosol. Cytosolic bacteria are specifically ubiquitinated, which allows their recognition by the autophagy machinery.
In the present work, the cell-autonomous immunity of epithelial cells during acute Salmonella infection was investigated by quantitative proteomics...
Twentieth-century scholars have thought little about the attractions of Descartes’ thinking. Especially in feminist theory, he has had a bad press as the ‘instigator’ of the body-mind split, seen as one of the theoretical bases for the subordination of women in Western culture. Seen from within seventeenth-century discourse, however, the dictum that can be inferred from his writings, that ‘the mind has no sex’, can be read as an appeal to think about rational capacities in the utopian perspective of a gender-neutral discourse. My work analyses this “face” of Cartesianism as it was adapted in favour of English seventeenth-century women. How were the specific tenets of Descartes’ philosophy employed on behalf of English women in the second half of the seventeenth century? My focus is on Descartes as a thinker who, whatever his real or imagined intention might have been, provided women in seventeenth-century England with tools with which to change their status, in other words: with instruments of empowerment. So why were Descartes’ arguments so attractive for women? Descartes had argued for equal rational abilities among individuals in a gender-neutral way. He had further critiqued generally accepted truths with his universal doubt. I believe this specific combination of ideas, affirming their rational capabilities, was seen by a number of women as an invitation to become involved in spheres of activity from which they had previously been excluded. Moreover, a specific set of Descartes’ arguments provided a number of English women with a strategy to extend female agency. Not only did Descartes’ views legitimate female rationality, they also allowed an acknowledgement that this female intellect was equally connected to “truth” as that of their male contemporaries. As a consequence, women developed an increased self-esteem and inspiration to pursue their own independent study (and in some cases publishing).
These ideas eventually helped to bring forward a demand for female education, as girls and women were still excluded from formal education in seventeenth-century England. My general thesis is that Cartesianism, as one of the earliest universalist theories on the nature of human reason, introduced new possibilities into the English debate over the nature and, hence, social position of women. It brought a radical twist to the already existing discussion on women by offering new critical tools which were taken up to argue on behalf of English women. In my work I examine the specific historical conditions of the reception of Descartes’ thought in England, the philosophical appeal of his ideas for women and analyse the writings of two English ‘disciples’ of Descartes: Margaret Cavendish, Duchess of Newcastle and Mary Astell.
Based on an original dataset of 100 important pieces of legislation passed during the three presidencies of William J. Clinton, George W. Bush, and Barack H. Obama (1992-2013), this study explores two sets of questions:
(1) How do presidents influence legislators in Congress in the legislative arena, and what factors have an effect on the legislative strategies presidents choose?
(2) How successful are presidents in getting their policy positions enacted into law, and what configurations of institutional and actor-centered conditions determine presidential legislative success?
The analyses show that in a hyper-polarized environment, presidents usually have to fight an uphill battle in the legislative arena, getting more involved when they face less favorable contexts and the odds are against them.
Moreover, the analyses suggest that there is no silver-bullet approach to presidents' legislative success. Instead, multiple patterns of success exist, as presidents, depending on the institutional and public environment, can resort to different combinations of actions in order to see their preferred policy outcomes enacted.
Paleoclimate reconstructions that aim to investigate climate-human interactions over long time series are, boosted by the currently intense climate debate, gaining ever greater importance in public and scientific perception. For despite all the scientific progress made in modern climate research over the past decades, the reliable prediction and modelling of future climate change remains one of the greatest challenges of our time. Taking the Caribbean as an example in this context, many model calculations predict, as a consequence of rising ocean temperatures, a markedly more frequent occurrence of tropical storms and hurricanes as well as a shift towards higher storm intensities. For the Caribbean and many adjacent states, this trend represents one of the greatest dangers of modern climate change, one that needs to be investigated scientifically over a long time frame.
Climate projections mostly rely entirely on highly resolved instrumental data sets. These, however, are all limited by one essential aspect: owing to their restricted availability (~150 years), they lack the depth required to adequately capture the processes of global climate dynamics that operate on long time scales. Considering the Holocene in its entirety, global climate dynamics over the past ~11,700 years were governed by periodically recurring processes. These generally act over periods of several decades, in some cases centuries, and occasionally even millennia. Many of these natural processes cannot be fully identified within the short instrumental era and thus cannot be adequately incorporated into climate models. Considering the instrumental era alone therefore offers only a limited perspective for understanding the causes and course of past climate changes and the possible consequences of future ones. To overcome this limitation, geoscientific research must use proxy methods to obtain a comprehensive and mechanistic understanding of Holocene climate change as a whole.
Bearing in mind this limitation, the rising ocean temperatures, and the increased occurrence of strong tropical cyclones in the Caribbean over the past 20 years, it is understandable that this doctoral thesis set out to produce a two-millennia-long, annually resolved climate data set reflecting late Holocene variations in sea surface temperature (SST) and the resulting long-term changes in the frequency of tropical cyclones. In Central America, the end of the Maya civilization (900-1100 AD) is associated with drastic environmental changes (e.g., droughts) brought about by a global climate shift during the Medieval Warm Period (MWP; 900-1400 AD). Information on past climate variations derived from a “blue hole” can serve as a reference for the present climate crisis.
A “blue hole” is a karst cave that formed subaerially within the carbonate framework of a reef system during past sea-level lowstands and was completely flooded in the course of subsequent sea-level rise. In a few marine blue holes, anoxic bottom-water conditions occur. The successions of marine sediments deposited in these anoxic karst caves can be used as a unique climate archive because, owing to the absence of bioturbation, they exhibit annual layering (varves).
This cumulative dissertation on the “Great Blue Hole” presents the results of a three-year research project whose goal was to produce a scientifically outstanding late Holocene climate data set for the south-western Caribbean. The “Great Blue Hole” is a globally unique marine sediment archive of diverse late Holocene climate changes, which was investigated in the course of this dissertation with respect to both paleoclimatic and sedimentological questions. Specifically, this doctoral thesis deals with (1) the development of an annually resolved archive of tropical cyclones, (2) the development of an annually resolved SST data set, and (3) a compositional quantification of the sedimentary successions together with a facies-based stratigraphic characterization of fair-weather sediments and storm layers. On each of these three aspects, one article was published in a recognized peer-reviewed scientific journal.
The 8.55 m long sediment core (“BH6”) investigated for this dissertation was retrieved from the bottom of the 125 m deep and 320 m wide “Great Blue Hole”, located in the shallow eastern lagoon of the “Lighthouse Reef” atoll, 80 km off the coast of Belize (Central America). Owing to its particular geomorphology, the “Great Blue Hole”, positioned within the Atlantic hurricane belt, acts as a giant sediment trap. The successions of fine-grained carbonate sediments deposited continuously under fair-weather conditions are interrupted by coarse storm layers attributable to overwash processes of tropical cyclones.
...
Chemokines play a key role in the cellular infiltration of inflamed tissue. They are released by a wide variety of cell types during the initial phase of the host response to injury, allergens, antigens, or invading microorganisms, and selectively attract leukocytes to inflammatory foci, inducing both migration and activation. Monocyte chemoattractant protein-1 (MCP-1), a member of the CC chemokine superfamily, functions in attracting monocytes, T lymphocytes, and basophils to sites of inflammation. MCP-1 is produced by monocytes, fibroblasts, vascular endothelial cells and smooth muscle cells in response to various stimuli such as tumour necrosis factor-α (TNF-α), interferon-γ (IFN-γ), and interleukin-1β (IL-1β). It also plays an important role in the pathogenesis of chronic inflammation, and overexpression of MCP-1 has been implicated in diseases including glomerulonephritis and rheumatoid arthritis. Oligonucleotide-directed triple helix formation offers a means to target specific sequences in DNA and interfere with gene expression at the transcriptional level. Triple helix-forming oligonucleotides (TFOs) bind to homopurine/homopyrimidine sequences, forming a stable, sequence-specific complex with the duplex DNA. Purine-rich sequences are frequent in gene regulatory regions, and TFOs directed to promoter sequences have been shown to prevent binding of transcription factors and inhibit transcription initiation and elongation. Exogenous TFOs that bind homopurine/homopyrimidine DNA sequences and form triple helices can be rationally designed, while the intracellular delivery of single-stranded RNA TFOs had not been studied in detail before. In this study, expression vectors were constructed which directed transcription of either a 19 nt triplex-forming pyrimidine CU-TFO sequence targeting the human MCP-1 promoter or two different 19 nt GU- or CA-control sequences, respectively, together with the vector-encoded hygromycin resistance mRNA as one fusion transcript.
HEK 293 cells were stably transfected with these vectors, and several TFO and control cell lines were generated. Functionally relevant triplex formation of a TFO with a corresponding 19 bp GC-rich AP-1/SP-1 site of the human MCP-1 promoter was shown. Binding of the synthetic 19 nt CU-TFO to the MCP-1 promoter duplex was verified by triplex blotting at pH 6.7. Underlining binding specificity, control sequences, including the GU- and CA-sequences, a TFO containing a single mismatch and an MCP-1 promoter duplex containing two mismatches, did not participate in triplex formation. By establishing a magnetic capture technique with streptavidin microbeads, it was verified that at pH 7.0 the 19 nt TFO embedded in a 1.1 kb fusion transcript binds to a plasmid-encoded MCP-1 promoter target duplex three times more strongly than the controls. Finally, cell culture experiments revealed 76 ± 10.2% inhibition of MCP-1 protein secretion in TNF-α-stimulated CU-TFO-harboring cell lines, and up to 88% after TNF-α and IFN-γ costimulation, in comparison to controls. Expression of interleukin-8 (IL-8), a TNF-α-inducible control gene, was not affected by the CU-TFO, demonstrating both highly specific and effective chemokine gene repression. Furthermore, another chemokine target, RANTES (regulated upon activation, normal T cell expressed and secreted), which plays an essential role in inflammation by recruiting T lymphocytes, macrophages and eosinophils to inflammatory sites, was analysed using the triplex approach. A 28 nt TFO was designed targeting the murine RANTES gene promoter, and gel mobility shift assays demonstrated that the phosphodiester TFO formed a sequence-specific triplex with the double-stranded target DNA with a Kd of 2.5 × 10⁻⁷ M. It was analysed whether RANTES expression could be inhibited at the transcriptional level by testing the TFO in two different cell lines, T helper-1 lymphocytes and brain microvascular endothelial cells (bEnd.3 cells).
Although sequence-specific binding of the TFO was detectable in the gel shift assays, no inhibitory effect of the exogenously added, phosphorothioate-stabilised TFO on endogenous RANTES gene expression was visible. Additionally, the small interfering RNA (siRNA) approach was tested as another strategy to inhibit expression of the pro-inflammatory chemokines MCP-1 and RANTES. Two different methods were pursued: transient transfection with vector-derived and with synthetic siRNA. The vector pSUPER containing the siRNA coding sequence was used to suppress endogenous MCP-1 in HEK 293 cells. An empty vector without an RNA sequence served as a control. Inhibition due to the siRNA was measured in stimulated and unstimulated cells. In TNF-α-stimulated cells, MCP-1 protein synthesis was decreased by 35 ± 11% after siRNA transfection. Using a synthetic double-stranded siRNA, the TNF-α-induced MCP-1 protein secretion could be successfully inhibited by 62.3 ± 10.3% in HEK 293 cells, indicating that the siRNA is functional in suppressing chemokine expression in these cells. The siRNA approach targeting murine RANTES in Th1 cells and bEnd.3 cells revealed no inhibition of endogenous gene expression. Gene therapy approaches rely on efficient transfer of genes to the desired target cells. A wide variety of viral and nonviral vectors have been developed and evaluated for their efficiency of transduction, sustained expression of the transgene, and safety. Among them, lentiviruses have been widely used for gene therapy applications. In order to improve the delivery of TFOs or siRNAs into the target cells, cloning of the lentiviral transfer vector SEW and the production of lentiviral particles by transient transfection were performed, with the aim of generating lentiviral vector-derived TFOs in further experiments. Here, Th1 cells were transduced with infectious lentiviral particles and the transduction efficacy was measured.
A transduction efficacy higher than 82% could be achieved using the lentiviral vector SEW, opening up optimal possibilities for the TFO and siRNA approaches.
Canada’s geographic centre lies in the territory of Nunavut. From there, the geographic North Pole is about as far away as the US border. Nunavut takes up about one fifth of the Canadian land mass but has by far the smallest population, with currently about 38,000 residents. 85% of its population are Inuit, whose culture has changed dramatically within the last 70 years.
As a result, the territory is dealing with several generations of Inuit who are traumatized, or at least severely affected, by the cultural and economic changes that started after World War II with the resettlement from the land into permanent communities. Whether we are talking about today’s elders, middle-aged adults or pre-teenagers, each of these generations experienced, and still experiences, various personal and cultural challenges: questions of identity, financial and housing insecurity, food insecurity, substance abuse, education, and changes in social values ranging from inter-generational and gender relationships to the introduction of a foreign political and legal system.
On the other side, many traditional societal values are still being practiced in Inuit families. Despite all the tragedies that several generations of Inuit have experienced by now, the society keeps generating the strength and cultural pride that allows many Inuit, both as individuals and as a collective, whether under the umbrella of Inuit land claims organizations or of not-for-profit organizations, to advocate on behalf of Inuit culture, to fight for greater acknowledgement of Inuit culture, and to enhance pride in the historic and present-day cultural achievements of Nunavut’s indigenous population.
The social issues and the inter- and intra-cultural processes described in my thesis are not exclusive to the situation in Nunavut or to Inuit. Studies from other regions in Canada and from around the world (LaPrairie 1987; Jensen 1986; Nunatsiaq News 6/30/2010) reveal similar challenges.
Though many structural similarities can be identified by comparing these studies with each other, e.g. the marginalization of the indigenous local population, colonization, paternalism and resulting issues such as personal and cultural identity loss, it is important to take a more in-depth look at the single cases to determine which individual events and developments caused, and maybe still cause, such a devastating social situation as is found among many indigenous peoples across the world. From my perspective, effective improvements of the situation of a group, a respective community or region can only happen when the particularities of socialization, communication and philosophy in the single cultural entities are taken into account.
That is why my thesis will exclusively focus on developments in Nunavut and use various case studies of communities. The case studies shall help to identify local differences in historic and recent developments and thus provide starting points for explanations of different developments in different Nunavut communities.
The thesis looks at both historic and recent root causes of the many issues in Nunavut.
The data that my thesis is based on are a combination of literature and about 60 formal and informal interviews that I conducted in three Nunavut communities (Iqaluit, Whale Cove, Kugluktuk) during my 18 months of field work between October 2008 and March 2010. Many more spontaneous, unstructured conversations between me and community members added to the pool of first-hand information that I gathered.
Since my field work is limited to those three communities, it has a very strong qualitative character. The quantitative side, which allows me to confidently apply my research analyses to Nunavut as a whole, comes from literature research as well as many informal conversations and a few formal interviews that I conducted with people who had experience in communities other than Iqaluit, Kugluktuk and Whale Cove.
Furthermore, while I was living at the old residence of the Nunavut Arctic College in Iqaluit, I spent time with college students from across Nunavut. Through them, I obtained “case studies” from the following communities: Iqaluit, Qikiqtarjuaq, Kimmirut, Pangnirtung, Clyde River, Pond Inlet, Igloolik, Repulse Bay, Cape Dorset, Chesterfield Inlet, Baker Lake, Rankin Inlet, Whale Cove, Arviat, Taloyoak, Kugluktuk.
My general categorization of “early contact period”, “contact”, “1st generation” and “2nd generation” is very similar to Damas’ terms of “early contact phase”, “contact – traditional”, “resettlement” that he uses to create a timeline that describes the major phases of impact for Inuit society (Damas 2002: 7, 17).
Chapter 2 is meant to provide an inventory of the key aspects of current social issues in Nunavut. In this context, I look at the four major aspects that, in my opinion, shape Nunavut’s society:
1) violence and other forms of social dysfunctions
2) the associated services and delivering agencies that try to address those matters
3) education
4) Inuit cultural particularities in communication and socialization
Those four areas form the foundation for the rest of my work. The following chapters guide the reader through the historic transformation of the Inuit pre-colonial semi-nomadic society into a society living in permanent settlements, strongly influenced, if not in many ways dominated, by Euro-Canadian culture. Each of those chapters refers to the social and cultural changes that happened in the different time periods that I labeled “Pre-settlement, First, Second, and Third Generation”. The relevance of violence and other social dysfunctions, their context, and the strategies with which each generation dealt with those matters are analyzed, while I also refer to the impacts that non-Inuit, primarily Euro-Canadians and Euro-Americans, had and still have on Inuit society.
...