This thesis contains three theoretical studies on aspects of the interplay of electronic correlations and topology in the Hubbard model.
In the first part of this thesis, the applicability of elementary band representations (EBRs) to diagnosing interacting topological phases protected by spatial symmetries and time-reversal symmetry in terms of their single-particle Matsubara Green’s functions is investigated. EBRs for the Matsubara Green’s function in the zero-temperature limit can be defined via the topological Hamiltonian. It is found that the Green’s function EBR classification can only change through (i) a gap closing in the spectral function at zero frequency, (ii) the Green’s function becoming singular, i.e., acquiring a zero eigenvalue at zero frequency, or (iii) the Green’s function breaking a protecting symmetry. As an example, the use of EBRs for Matsubara Green’s functions is demonstrated on the Su-Schrieffer-Heeger model using exact diagonalization.
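The topological Hamiltonian construction referred to above is standard: the zero-frequency Matsubara Green’s function defines an effective single-particle Hamiltonian,

```latex
H_{\mathrm{top}}(\mathbf{k}) \equiv -\,G^{-1}(\mathbf{k},\, i\omega = 0),
```

whose occupied-band eigenstates can be decomposed into EBRs just as for a non-interacting band structure. The three conditions (i)-(iii) are precisely the ways in which the eigenvalues or symmetry eigenstates of H_top(k) can change discontinuously: a spectral gap closing, a zero eigenvalue of G, or a broken protecting symmetry.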
In the second part, the Two-Particle Self-Consistent approach (TPSC) is extended to include spin-orbit coupling (SOC). Time-reversal symmetry, which is preserved in the presence of SOC, is used to derive new TPSC self-consistency equations that include SOC. SOC breaks spin-rotation symmetry, which leads to a coupling of the spin and charge channels. The local and constant TPSC vertex then consists of three spin vertices and one charge vertex. As a test case for the interplay of the Hubbard interaction and SOC, the Kane-Mele-Hubbard model is studied. The antiferromagnetic spin fluctuations are the leading instability, which confirms that the Kane-Mele-Hubbard model is an XY antiferromagnet at zero temperature. Mixed spin-charge fluctuations are found to be small. Moreover, it is found that the transversal spin vertices are more strongly renormalized than the longitudinal spin vertex, that SOC leads to a decrease of antiferromagnetic spin fluctuations, and that the self-energy shows dispersion and sharp features in momentum space close to the phase transition.
In the third part, TPSC with SOC is used to calculate the spin Hall conductivity of the Kane-Mele-Hubbard model at finite temperature. The spin Hall conductivity is calculated once using only the conductivity bubble and once including vertex corrections. The vertex corrections for the spin Hall conductivity within TPSC are the analogues of the Maki-Thompson contributions, which physically correspond to the excitation and reabsorption of a spin, a charge, or a mixed spin-charge excitation by an electron. At all temperatures, the vertex corrections contribute strongly in the vicinity of the phase transition to the XY antiferromagnet, where antiferromagnetic spin fluctuations are large. It is found that vertex corrections are crucial to recover the quantized value of −2e²/h in the zero-temperature limit. Further, at non-zero temperature, increasing the Hubbard interaction leads to a decrease of the spin Hall conductivity. The results indicate that scattering of electrons off antiferromagnetic spin fluctuations renormalizes the band gap. The decreasing gap can be interpreted as an effective increase of temperature, leading to a decrease of the spin Hall conductivity.
Trait-dependent effects of biotic and abiotic filters on plant regeneration in Southern Ecuador
(2024)
Tropical forests have always fascinated scientists due to their unique biodiversity. However, our understanding of the ecological processes shaping the complexity of tropical rainforests is still relatively poor. Plant regeneration is one of the processes that remain understudied in the tropics, although it is a key process defining the structure, diversity and assembly of tropical plant communities. In my dissertation, I combine experimental, observational and trait-based approaches to identify processes shaping the assembly of seedling communities and compare associations between environmental conditions and plant traits across plant life stages. By working along a steep environmental gradient in the tropical mountains of Southern Ecuador, I was able to investigate how processes of plant regeneration vary in response to biotic and abiotic factors in tropical montane forests.
My dissertation comprises three complementary chapters, each addressing an individual research question. First, I studied how trait composition in plant communities varies in relation to the broad- and local-scale environmental conditions and across the plant life cycle. I measured key traits reflecting different ecological strategies of plants that correspond to three stages of the plant life cycle (i.e., adult trees, seed rain and recruiting seedlings). I worked on 81 subplots along an elevational gradient covering a large climatic gradient at three different elevations (1000, 2000 and 3000 m a.s.l.). In addition, I measured soil and light conditions at the local spatial scale within each subplot. My findings show that the trait composition of leaves, seeds and seedlings changed similarly across the elevational gradient, but that the different life stages responded differently to the local gradients in soil nutrients and light availability. Consequently, my findings highlight that trait-environment associations in plant communities differ between large and small spatial scales and across plant life stages.
Second, I investigated how seed size affects seedling recruitment in natural forests and in pastures in relation to abiotic and biotic factors. I set up a seed sowing experiment in both habitat types and sowed over 8,000 seeds belonging to seven tree species differing in seed size. I found that large-seeded species had higher proportions of recruitment in the forests compared to small-seeded species. However, small-seeded species tended to recruit better in pastures compared to large-seeded species. I showed that high surface temperature was the main driver of differences in seedling recruitment between habitats, because it limited seedling recruitment of large-seeded species. The results from this experiment show that pasture restoration requires seed addition of large-seeded species and active protection of recruiting seedlings in order to mitigate harmful conditions associated with high temperatures in deforested areas.
Third, I examined the associations between seedling beta-diversity and different abiotic and biotic factors between and within elevations. I applied beta-diversity partitioning to obtain two components of beta-diversity: species turnover and species richness differences. I associated these components of beta-diversity with biotic pressures by herbivores and fungal pathogens and environmental heterogeneity in light and soil conditions. I found that species turnover in seedling communities was positively associated with the dissimilarity in biotic pressures within elevations and with environmental heterogeneity between elevations. Further, I found that species richness differences increased primarily with increasing environmental heterogeneity within elevations. My findings show that the associations between beta-diversity of seedling communities and abiotic and biotic factors are scale-dependent, most likely due to differences in species sorting in response to biotic pressures and species coexistence in response to environmental heterogeneity.
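The beta-diversity partitioning described above separates total dissimilarity between two seedling communities into a species-turnover (replacement) component and a richness-difference component. As a minimal illustration, one common presence/absence scheme (the Podani-family partitioning of Jaccard dissimilarity; the thesis does not specify which partitioning family was used) can be sketched as:

```python
def beta_partition(site1, site2):
    """Partition Jaccard dissimilarity between two communities into
    species-replacement (turnover) and richness-difference components.

    Uses the classic a/b/c notation: a = shared species,
    b and c = species unique to each site.
    """
    s1, s2 = set(site1), set(site2)
    a = len(s1 & s2)   # species shared by both sites
    b = len(s1 - s2)   # species unique to site 1
    c = len(s2 - s1)   # species unique to site 2
    n = a + b + c
    beta_total = (b + c) / n          # Jaccard dissimilarity
    beta_repl = 2 * min(b, c) / n     # species replacement (turnover)
    beta_rich = abs(b - c) / n        # richness difference
    return beta_total, beta_repl, beta_rich
```

By construction the two components sum to the total dissimilarity, so each pairwise comparison can be attributed to turnover versus richness differences, as done separately within and between elevations.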
My dissertation reveals that studying processes of community assembly at different plant life stages and spatial scales can yield new insights into patterns and processes of plant regeneration in tropical forests. I investigated how community assembly processes are governed by abiotic and biotic filtering across and within elevations. I also experimentally explored how the process of seedling recruitment depends on seed size-dependent interactions, and verified how these effects are associated with abiotic and biotic filtering. Identifying such processes is crucial to inform predictive models of environmental change on plant regeneration and successful forest restoration. Further exploration of plant functional traits and their associations with local-scale environmental conditions could effectively support local conservation efforts needed to enhance forest cover in the future and halt the accelerating loss of biodiversity.
This thesis develops a naturalist theory of phenomenal consciousness. In a first step, it is argued on phenomenological grounds that consciousness is a representational state and that explaining consciousness requires a study of the brain’s representational capacities. In a second step, Bayesian cognitive science and predictive processing are introduced as the most promising attempts to understand mental representation to date. Finally, in a third step, the thesis argues that the so-called “hard problem of consciousness” can be resolved if one adopts a form of metaphysical anti-realism that can be motivated in terms of core principles of Bayesian cognitive science.
A powerful technique to distinguish the enantiomers of a chiral molecule is Coulomb Explosion Imaging (CEI), which allows the handedness of a single molecule to be determined. In CEI, the molecule becomes highly charged by losing many electrons within a very short period of time through its interaction with light. The repulsive forces between the positively charged constituents then cause the molecule to break apart into fragments. By measuring the momentum vectors of (at least) four fragments, the handedness observable can be determined. In this thesis, CEI is induced by the absorption of a single high-energy photon, which creates an inner-shell (K-shell) hole in the molecule; the subsequent cascade of Auger decays leads to fragmentation. The formic acid molecule was chosen for this thesis. Two different experiments were conducted: the first focused on exciting electrons to different energy states, while the second focused on directly ejecting a photoelectron into the continuum and measuring its angular distribution in the molecular frame. The primary goal was to search for a chiral signal in a purely achiral planar molecule under these electronic processes. The findings were further applied to two additional molecules.
While high-quality climate reconstructions of some past warm periods in the Cenozoic era now exist, the geological processes responsible for driving the observed long-term changes in atmospheric CO2 are not sufficiently well understood. The long-term change in atmospheric CO2 across the Cenozoic has been proposed to be driven by processes such as terrestrial weathering, organic carbon production and burial, reverse weathering, and volcanic degassing. One way of constraining the relative importance of the various driving forces proposed so far is to better understand the degree to which ocean chemistry has changed, because the chemistry of seawater responds to the geologic processes that drive atmospheric CO2. In addition, knowledge of the concentrations of the major elements in seawater is crucial for accurately applying proxies such as those based on the boron isotopic composition and Mg/Ca of marine carbonates (proxies for palaeo pH/CO2 and palaeotemperature, respectively). Previously reported records of seawater composition are primarily derived from fluid inclusions in marine evaporites; however, the results are sparse due to the limited availability of such deposits. In this thesis, changes in Eocene seawater chemistry were reconstructed using trace-element (element/Ca) and isotopic (δ26Mg) proxies in Larger Benthic Foraminifera (LBFs), i.e., Nummulites sp., to constrain the processes driving long-term changes in seawater chemistry.
To achieve the objective of this thesis, a measurement protocol was first established using LA-ICP-MS to measure the K/Ca ratio simultaneously with other element/calcium ratios, which is challenging due to the interference of ArH+ with K+. Utilising this newly established protocol, laboratory-cultured Operculina ammonoides grown at different seawater calcium concentrations ([Ca2+]), repeated at different temperatures, as well as modern O. ammonoides collected from different regions covering a range of seawater parameters, were investigated. A significant correlation was observed between K/Casw and K/CaLBF, allowing K/CaLBF to potentially be used as a proxy for reconstructing the major-ion composition of seawater. In addition, modern O. ammonoides showed no significant influence of most seawater parameters (temperature, salinity, pH, or [CO32-]) on K/CaLBF. Modern O. ammonoides were also analysed for their Mg isotopic composition (δ26Mg), revealing no significant effect of temperature or salinity on δ26MgLBF. Furthermore, the Mg isotopic fractionation in O. ammonoides was found to be close to that of inorganic calcite, indicating minimal vital effects in these large benthic foraminifera.
Operculina ammonoides is the nearest living relative of the abundant Eocene genus Nummulites, enabling the reconstruction of seawater chemistry using the calibration based on O. ammonoides. The trace-element/calcium proxies Na/Ca, K/Ca, and Mg/Ca, as well as the δ26Mg proxy, were investigated in Eocene Nummulites. The results showed that during the Eocene, [Ca2+]sw was 1.6-2 times higher, while [K+]sw was ~2 times lower, than in modern seawater. Furthermore, [Mg2+]sw decreased from the early Eocene (54.3 +9.6/−7.9 mmol kg−1 at ~55 Ma) to the late Eocene (37.8 +4.3/−4.4 mmol kg−1 at ~31 Ma), followed by an increase toward the modern seawater [Mg2+]. In contrast, the variability in δ26Mgsw values remained within a narrow range of ~0.3 ‰ throughout the Cenozoic. The reconstructed [Ca2+]sw agrees with the suggestion that Cenozoic changes in seawater chemistry can be explained by a change in the seafloor-spreading rate. When combined with existing records, the minimal change in δ26Mgsw accompanying an increase in [Mg2+]sw suggests an additional possible role for a decrease in the formation of authigenic clay minerals coincident with the Cenozoic decline in deep-ocean temperature, which is also supported by the increase in [K+]sw reconstructed here for the first time. This finding highlights that the reduction in the seafloor-spreading rate and the decline in reverse weathering during the Cenozoic played a significant role in the evolution of seawater chemistry, emphasizing the importance of these processes in driving long-term changes in the carbon cycle.
Thomas Bowrey, an employee of the British colonial government, visited the Malay-speaking region at the end of the 17th century and published a dictionary of Malay (1701) comprising 12,683 headwords. It is one of the oldest and largest collections of data on this language, which was the first language of the people he came into contact with while travelling through the Malay Peninsula, where he spent most of his time in harbours along the west coast. By the time of Bowrey’s stay, the Malay spoken in the various trading centres of this area (e.g. Penang, Malacca) had long since begun to develop into a form of lingua franca, because traders, especially those from Arabic countries (beginning in the 12th century), China (from the 15th century onwards), Portugal (from 1511), the Netherlands (from 1641), and, to a lesser extent, England, came into contact with Malays speaking their local dialects in the various trading posts of Malaya and probably became acquainted with the trade-language variant. Thus, Bowrey must have observed and recorded elements of both.
The data he collected are not limited to Malay variants spoken in coastal areas, but include material from dialects he encountered during his travels throughout the Malay Peninsula, though without describing the locations in which he took notes on the lexicon and clauses. Not all of his material was written into manuscript form during his stay in Southeast Asia; a large part of the notes taken in situ was prepared for publication during his long journey home. His notes, which were used to print his dictionary, are in part kept in British libraries. Most of the material accessible to the public was studied during the preparation of this thesis.
Earlier works on this dictionary are quite limited in scope, each dealing with a very specific aspect: the meanings of headwords between the letters A and C (Rahim Aman, 1997 & 1998), the lexical change found in Bowrey’s dictionary between D and F (Nor Azizah), syntactic and sociolinguistic aspects (Mashudi Kader, 2009), and collective nouns (Tarmizi Hasrah, 2010). This study discusses Bowrey’s dictionary as a whole in order to describe its contribution to our knowledge of linguistic and non-linguistic facts in 17th-century Malaya. Besides analysing Malay synchronically, this thesis also addresses historical-comparative questions and asks whether Bowrey contributes to our knowledge of the changes to the Malay language between the 17th and 21st centuries.
In order to answer the research questions, this study not only relies on the dictionary in its entirety, but also on the notes found in British libraries as well as other material on early Malay, such as the Pigafetta list (1523), Houtman (1598–1603), and the Wilkinson dictionary (1901) as a complement to Bowrey’s dictionary; at the same time, the Malay Concordance Project (online), the SEAlang Project (online), Kamus Besar Bahasa Indonesia (online), and Kamus Dewan Edisi Keempat (2007) will represent modern Malay. It should be borne in mind that in contrast to the Thomas Bowrey dictionary (TBD), Kamus Dewan Edisi Keempat (KDE4) does not hold information on colloquial forms of Malay, many of which reflect features of lingua franca Malay. This study is divided into two different branches, namely the consideration of synchronic aspects and historical comparative aspects.
Finally, this study concludes that the Malay language in Thomas Bowrey’s dictionary was heavily influenced by both external and internal factors prevalent in the 17th century. Nevertheless, the Malay recorded in the dictionary is very similar to modern Malay: the similarities between the Malay of the 17th century and that of today are considerable, even though some notable differences remain.
The strong force is one of the four fundamental interactions, and its theory is called Quantum Chromodynamics (QCD). A many-body system of strongly interacting particles (QCD matter) can exist in different phases depending on temperature (T) and baryonic chemical potential (µB). The phases and the transitions between them can be visualized as a µB-T phase diagram. Extracting the properties of QCD matter, such as compressibility, viscosity, and various susceptibilities, together with its Equation of State (EoS), is an important aspect of the study of QCD matter. In the region of near-zero baryonic chemical potential and low temperatures, the degrees of freedom of QCD matter are hadrons, in which quarks and gluons are confined, while at higher temperatures partonic (quark and gluon) degrees of freedom dominate. This partonic (deconfined) state is called the quark-gluon plasma (QGP) and is intensively studied at CERN and BNL. According to lattice QCD calculations at µB=0, the transition to the QGP is smooth (a cross-over) and takes place at T≈156 MeV. The region of the QCD phase diagram where matter is compressed to densities of a few times normal nuclear density (µB of several hundred MeV) is not accessible to current lattice QCD calculations and is a subject of intensive research. Some phenomenological models predict a first-order phase transition between the hadronic and partonic phases in the region of T≲100 MeV and µB≳500 MeV. The search for signs of a possible phase transition and a critical point, or clarification of whether the smooth cross-over continues in this region, is among the main goals of near-future explorations of the QCD phase diagram.
In the laboratory, a scan of the QCD phase diagram can be performed via heavy-ion collisions. The region of the QCD phase diagram at T≳150 MeV and µB≈0 is accessible in collisions at LHC energies (√sNN of several TeV), while the region of T≲100 MeV and µB≳500 MeV can be studied with collisions at √sNN of a few GeV. The QCD matter created in the overlap region of the colliding nuclei (the fireball) expands rapidly during the collision evolution. In the fireball there are strong temperature and pressure gradients, extreme electromagnetic fields, and an exchange of angular momentum and spin between the system’s constituents. These effects result in various collective phenomena. Pressure gradients and the scattering of particles, together with the initial spatial anisotropy of the density distribution in the fireball, produce anisotropic flow: a momentum-space (azimuthal) anisotropy in the emission of produced particles. The correlation of particle spin with the angular momentum of the colliding nuclei leads to a global polarization of particles. A strong initial magnetic field in the fireball results in a charge dependence and a particle-antiparticle difference of flow and polarization.
Anisotropic flow is quantified by the coefficients vₙ of a Fourier decomposition of the azimuthal-angle distribution of emitted particles relative to the reaction plane, which is spanned by the beam axis and the impact-parameter direction. The first harmonic coefficient v₁ quantifies the directed flow: preferential particle emission either along or opposite to the impact-parameter direction. v₁ is driven by pressure gradients in the fireball and thus probes the compressibility of the QCD matter. The change of sign of v₁ at √sNN of several GeV is attributed to a softening of the EoS during the expansion and can thus be evidence of a first-order phase transition. The global polarization coefficient PH is the average value of the hyperon’s spin projection on the direction of the angular momentum of the colliding system. It probes the dynamics of the QCD matter, such as its vorticity, and can shed light on the mechanism of orbital-momentum transfer into the spin of produced particles.
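The Fourier decomposition referred to above has the standard form (with φ the particle azimuthal angle and Ψ_RP the reaction-plane angle):

```latex
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\big[n(\varphi - \Psi_{\mathrm{RP}})\big],
\qquad
v_n = \big\langle \cos\!\big[n(\varphi - \Psi_{\mathrm{RP}})\big] \big\rangle ,
```

so that v₁ = ⟨cos(φ − Ψ_RP)⟩ directly measures the preferential emission along or against the impact-parameter direction, and v₂ quantifies the elliptic flow.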
In collisions at √sNN of several GeV, which probe the region of the QCD phase diagram at T≲100 MeV and µB≳500 MeV, hadron production is dominated by u and d quarks. Hadrons containing strange quarks are produced near threshold, which makes their yields and dynamics sensitive to the density of the fireball. Measurements of flow and polarization, in particular of (multi-)strange particles, therefore provide experimental constraints on the EoS and allow transport coefficients of the QCD matter to be extracted by comparing data with theoretical model calculations of heavy-ion collisions.
For the continuation of the abstract, see the PDF of the thesis.
The brain tumour glioblastoma (GBM) is very difficult to treat due to its infiltrative growth, high intra- and intertumoural heterogeneity, high therapy resistance, and the so-called glioma stem-like cells, and it almost always leads to relapse. Since there has been little progress in the treatment of GBM in recent decades, apart from therapy with tumour-treating fields, research into alternative cell-death therapies continues, for example autophagy-dependent cell death. Autophagy-dependent cell death is characterized by increased autophagic flux, and although autophagy, as well as selective forms such as lysophagy and mitophagy, is normally considered a pro-survival mechanism, many studies have demonstrated a dual role in tumour initiation, progression, and treatment, depending above all on tumour type and stage. To further decipher the underlying mechanisms of drug-induced autophagy-dependent cell death in GBM, in my dissertation I investigated several substances that induce autophagy-dependent cell death.
A previous study from our laboratory showed that the antipsychotic pimozide (PIMO) and the opioid-receptor antagonist loperamide (LOP) can induce autophagy-dependent cell death in GBM cells. Building on this, I validated their capacity to induce autophagy-dependent cell death in additional cell models. This confirmed an increased autophagic flux after PIMO and LOP treatment, while both cell death and autophagic flux were reduced in autophagy-deficient cells. In further experiments, I was able to rule out the involvement of LC3-associated phagocytosis (LAP), a pathway that relies on the function of several autophagy proteins. Furthermore, I observed a massive disturbance of cholesterol and lipid metabolism. Among other effects, cholesterol accumulated in lysosomes, followed by massive damage to the lysosomal compartment and permeabilization of the lysosomal membrane. This contributed both to the activation of pro-survival lysophagy and to cell-damaging bulk autophagy. Ultimately, however, the increased lysophagy could not rescue the cells, and they died an autophagy-dependent lysosomal cell death. Since the suitability of LOP as a GBM therapy is limited by its lack of blood-brain-barrier permeability, and that of the antipsychotic PIMO by its sometimes severe side effects, in the further course of my dissertation I turned to a substance with a different mechanism of action.
The iron chelator and oxidative phosphorylation (OXPHOS) inhibitor VLX600 had previously been reported to induce mitochondrial dysfunction and cell death in colon carcinoma cells. To my knowledge, however, no study had yet examined the therapeutic suitability of VLX600 for GBM. Here I show a novel autophagy-dependent cell-death-inducing capacity of VLX600 in GBM cells: cell death was significantly inhibited in autophagy-deficient cells but not by caspase inhibitors, and autophagic flux was increased. Moreover, I confirmed the inhibition of OXPHOS and the induction of mitochondrial stress in GBM cells and further showed that VLX600 not only disturbs mitochondrial homeostasis but also leads to BNIP3/BNIP3L-dependent mitophagy, which is probably regulated by HIF1A but has no discernible net effect on VLX600-induced cell death. Accordingly, VLX600 induces lethal bulk autophagy in the cell models used here. In addition, I showed that iron chelation by VLX600 plays a major role not only in VLX600-induced cell death but also in the induction of mitophagy, histone lysine methylation, and ribosomal stress. Ultimately, it is probably an interplay of all these factors that leads to cell-death induction by VLX600, and, interestingly, iron chelators are already being investigated in preclinical and clinical studies for cancer therapy. Certain metabolic properties of different tumour cells could influence their sensitivity to metabolism-targeting agents such as VLX600, which should be taken into account in future studies to achieve the best possible therapeutic success.
In summary, my dissertation supports the dual, strongly context-dependent role of autophagy and advocates further research into substances that induce autophagy-dependent cell death as a therapeutic avenue for GBM.
The capacity of pathogenic bacteria to adhere to host cells and to avoid subsequent clearance by the host's immune response is the initial and most decisive step leading to infection. Human-pathogenic bacteria circulating in the bloodstream need to find ways to interact with the endothelial cells (ECs) lining the blood vessels in order to infect and colonise the host. The extracellular matrix (ECM) of ECs might represent an attractive initial target for bacterial interaction, as many bacterial adhesins have reported affinities for ECM proteins, particularly fibronectin (Fn). Trimeric autotransporter adhesins (TAAs) have been described as important pathogenicity factors of Gram-negative bacteria. The TAA of the human-pathogenic Bartonella henselae, Bartonella adhesin A (BadA), is one of the longest and best-characterised adhesins and represents a prototypic TAA due to its domain architecture. B. henselae, the causative agent of cat scratch disease, endocarditis, and bacillary angiomatosis, adheres to ECs and ECM proteins via BadA.
In this research, it was determined that the interaction between BadA and Fn is essential for B. henselae host-cell adhesion. BadA interactions were identified within the heparin-binding domains of Fn, and the exact binding sites were revealed by mass-spectrometry analysis of chemically crosslinked whole-cell bacteria and Fn. It turned out that specific BadA interactions with defined Fn regions represent the molecular basis for bacterial adhesion to ECs. These data were confirmed using BadA-deficient bacteria and CRISPR-Cas FN1-knockout ECs. It was also found that BadA binds Fn of both cellular and plasma origin, suggesting that B. henselae binding to Fn might take part in infection processes beyond bacterial adherence, e.g. evasion of the host immune system.
Interactions between TAAs and Fn represent a key step in the adherence of B. henselae to ECs. Moreover, Fn-mediated binding is of greater importance for pathogenic bacteria than broadly recognised: removal of Fn from the ECM environment of ECs also reduced the adherence of Staphylococcus aureus, Borrelia burgdorferi, and Acinetobacter baumannii to host cells. Interactions between adhesins and Fn might therefore represent a crucial step in the adhesion of human-pathogenic Gram-negative and Gram-positive bacteria that target ECs as a niche of infection or as a means of persistence.
This research demonstrated that combining large-scale analysis approaches to describe protein-protein interactions with supportive functional readouts (binding assays) allows for the discrimination of crucial interactions involved in bacterial adhesion to the host. The herein-described experimental approaches and tools might guide future research for other pathogenic bacteria and represent an initial point for the future generation of anti-virulence strategies to inhibit bacterial binding to host cells.
The EMT transcription factor ZEB1 has been intensively studied in solid cancers, where it is expressed at the invasive front and in cancer-associated fibroblasts (CAFs). In tumour cells, ZEB1 has been implicated in multiple steps of cancer progression, including stemness, metastasis and therapy resistance, yet its role in the tumour microenvironment is largely unknown. Here, the role of Zeb1 in CAFs was investigated using mouse models reflecting different tumour stages in immunocompetent, fibroblast-specific Zeb1-KO mice. Fibroblast-specific depletion of Zeb1 accelerated tumour growth in the inflammation-driven AOM/DSS tumour-initiation model, reduced tumour growth and invasion in the sporadic AOM/P53 model, and reduced liver metastasis in a progressed orthotopic transplantation model. Immunohistochemical and single-cell RNA-sequencing analyses showed that Zeb1 ablation resulted in attenuated expression of the myofibroblast marker αSMA and reduced ECM deposition, indicating a shift among fibroblast subpopulations. Modulation of CAFs was furthermore associated with increased inflammatory signaling in fibroblasts, resulting in immune infiltration into primary tumours and exaggerated inflammatory signaling in T cells, B cells and macrophages. These changes in the tumour microenvironment were associated with increased efficacy of immune-checkpoint-inhibition therapy. In summary, Zeb1 expression in CAFs was identified as a potential target to block immunosuppression and metastatic dissemination in colon cancer.
This thesis presents the experimental and numerical analysis of seismic waves produced by wind farms. Driven by the rapid expansion of renewable energies, the number of wind turbines has increased in recent years. Ground motions induced by their operation can be observed by seismometers several kilometers away and can significantly raise the noise level at seismic stations. This study therefore combines long-term experiments and numerical simulations to improve the understanding of the seismic wavefields emitted by complete wind farms and to advance the prediction of signal amplitudes.
First, wind-turbine-induced signals measured at a small wind farm close to Würzburg (Germany) are correlated with the operational data of the turbines. The frequency-dependent decay of signal amplitudes with distance from the wind farm is modeled using an analytical method that includes the complex interference effects of the wavefields produced by the multiple wind turbines. Specific interference patterns significantly affect the wave propagation and therefore the signal amplitude in the far field of a wind farm. Since measurements inside the wind turbines show that the assumption of in-phase vibrating turbines is inappropriate, an approach is proposed to calculate representative seismic radiation patterns of multiple wind turbines, which allows the prediction of amplitudes in the far field of a complete wind farm.
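The effect of interference between the wavefields of several turbines can be illustrated with a toy superposition model. The sketch below is not the thesis's analytical method: it assumes hypothetical turbine positions, a monochromatic source, and simple 1/sqrt(r) geometric spreading of surface waves, and compares a fully in-phase wind farm with randomly phased turbines.

```python
import numpy as np

def farfield_amplitude(turbine_xy, receiver_xy, wavelength, phases):
    """Amplitude of the superposed wavefield of all turbines at one receiver.

    Each turbine is treated as a monochromatic point source of surface
    waves with geometric spreading proportional to 1/sqrt(r).
    """
    r = np.linalg.norm(turbine_xy - receiver_xy, axis=1)
    k = 2 * np.pi / wavelength
    return abs(np.sum(np.exp(1j * (k * r + phases)) / np.sqrt(r)))

rng = np.random.default_rng(0)
turbines = rng.uniform(0, 500, size=(9, 2))   # 9 turbines in a 500 m park
receiver = np.array([5000.0, 0.0])            # receiver 5 km away
wl = 300.0                                    # illustrative wavelength (m)

# In-phase turbines vs. an average over random turbine phases:
a_inphase = farfield_amplitude(turbines, receiver, wl, np.zeros(9))
a_random = np.mean([
    farfield_amplitude(turbines, receiver, wl, rng.uniform(0, 2 * np.pi, 9))
    for _ in range(200)
])
print(a_inphase, a_random)
```

Because the relative phases of the turbines enter the coherent sum directly, the predicted far-field amplitude depends strongly on whether the turbines vibrate in phase, which is why representative radiation patterns rather than an in-phase assumption are needed.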
In a second study, signals with a frequency of 1.15 Hz, produced by the Weilrod wind farm (north of Frankfurt, Germany) are observed at the seismological observatory TNS (Taunus), which is located at a distance of 11 km from the wind farm. The propagation of the wavefield emitted by the wind farm is numerically modeled in 3D, using the spectral element method. It is shown that topographic effects can cause local signal amplitude reductions, but also signal amplification along the travel path of the seismic wave. The comparison of simulations with and without topography reveals that the reduction and amplification are spatially linked to the shape of the topography, which could be an explanation for the relatively high signal amplitude observed at TNS.
Finally, the reduction of the impact of wind turbines on seismic measurements using borehole installations is studied with 2D numerical models. Possible effects of the seismic velocity, attenuation, and layering of the subsurface are demonstrated. Results show that a borehole can be very effective in reducing the observed high-frequency signals emitted by wind turbines. However, a borehole might not be beneficial if signals with frequencies of about 1 Hz (or lower) are of interest, due to significant wavelength-dependent effects. The estimates of depth-dependent amplitudes in a layered subsurface are validated with existing data from wind-turbine-induced signals measured at the top and bottom of two boreholes.
The experimental analysis of measurements conducted at wind farms and the advances in modeling such signals improve the understanding of the propagation of wind-farm-induced seismic wavefields. Furthermore, the methods developed in this work are broadly applicable to the prediction of signal amplitudes at seismometers close to wind farms of arbitrary layout and geographic location.
Brain development is a complex and highly organized process that relies on the coordinated interaction between neurons and vessels. These cell systems form a neurovascular link that involves the exchange of oxygen, ions, and other physiological components necessary for proper neuronal and vascular function. This physiologically coupled process is executed through analogous structural and molecular signaling mechanisms shared by both cell types. At the neurovascular interface, the cellular crosstalk via these shared signaling mechanisms allows for the synchronized expansion and integration of neurons and vessels into complex cellular networks. This study investigated the role of VEGFR2, a receptor for vascular endothelial growth factor (VEGF), during postnatal neuronal development in the mouse hippocampus. Prior studies have revealed physiological roles of VEGF, a pro-angiogenic morphogen, in nervous system development. However, it was unclear whether VEGF signaling has a direct effect on neuronal physiology and function through neuronally expressed receptors. In this investigative work, we identified a previously unknown function of VEGFR2, whereby VEGF-induced signaling coordinates the development and circuit integration of CA3 pyramidal neurons in the early postnatal mouse hippocampus. Mechanistically, we found that VEGFR2 signaling requires receptor endocytosis, a process mediated by ephrinB2. We also found that VEGF-induced cooperative signaling between VEGFR2 and ephrinB2 is functionally required for the dendritic arborization and spine maturation of developing CA3 neurons during the first few postnatal weeks. Moreover, in a collaborative effort with the research group of Carmen Ruiz de Almodovar, formerly at the University of Heidelberg, we simultaneously studied VEGF-induced VEGFR2 signaling in CA3 axonal development.
Together, we aimed to gain a comprehensive understanding of the complex interplay between VEGF and VEGFR2 signaling during the early postnatal development of CA3 neurons. Ruiz de Almodovar’s research group found that, unlike the branch and spine development of CA3 dendrites, VEGF-VEGFR2 signaling promotes axonal development through mechanisms that are independent of ephrinB2 function. Our findings on CA3 dendritic development are reported in the published manuscript Harde et al. (2019), and the complementary work on CA3 axonal development from Ruiz de Almodovar's group is presented in the co-published manuscript Luck et al. (2019). Although the work of Ruiz de Almodovar's group on CA3 axons is not discussed in full here, it is referenced where relevant to provide biological context for our findings on CA3 dendritic development.
VEGFR2 signaling within neurovascular niches is known to play a role in the neurogenesis of neural progenitor cells during embryonic development and within the adult brain. However, the precise localization of neuronal VEGFR2 expression, and its functional role within the nervous system during postnatal brain development, were unknown. To investigate this, we used immunohistochemistry to map the spatial expression of VEGFR2 within the mouse hippocampus during the first few weeks after birth. Our results showed that VEGFR2 was predominantly expressed within the hippocampal vasculature, consistent with prior studies. However, we also observed localized VEGFR2 expression in pyramidal neurons of the hippocampal CA3 region by postnatal day 10 (P10). This spatially restricted postnatal expression of VEGFR2 in CA3 neurons suggested a potential role in the development of these neurons during this developmental stage.
The first two weeks after birth constitute a critical period for the development of neuronal circuits in the mouse hippocampus, as neurons undergo extensive dendritic arborization and spine formation. To explore the role of VEGFR2 in the postnatal nervous system, we used a Nes-cre VEGFR2lox/- mouse line to delete VEGFR2 expression within the nervous system while preserving normal receptor expression in all other cell types. We also generated corresponding control mice that were negative for Nes-cre. By breeding these mice with Thy1-GFP reporter mice, we could analyze the functional consequences of VEGFR2 loss by assessing the morphology of CA3 dendritic trees and the density and maturation of spines at P10 and P15, respectively. Our analysis showed that CA3 neurons in Nes-cre VEGFR2lox/- mice had less complex dendritic arbors compared to control mice, with significant reductions in total length and number of branch points, particularly in areas located 100-250 μm from the cell soma within the stratum radiatum layer. Additionally, Nes-cre VEGFR2lox/- mice exhibited a significant decrease in spine density accompanied by an increased proportion of immature spines. These findings suggest that VEGFR2 plays a crucial role in the proper development of CA3 dendrites and spines during the early postnatal weeks.
Cyber-Physical Systems (CPS) are growing more and more complex due to the availability of cheap hardware, sensors, actuators and communication links. A network of cooperating CPSs (CPN) increases this complexity further. This poses challenges but also offers opportunities: the increasing complexity makes it harder to design, operate, optimize and maintain such CPNs. On the other hand, an appropriate use of the growing resources in computational nodes, sensors and actuators can significantly improve system performance, reliability and flexibility. Therefore, self-X features such as self-organization, self-adaptation and self-healing are key principles for such systems.
Additionally, CPNs are often deployed in dynamic, unpredictable environments and in safety-critical domains, such as transportation, energy, and healthcare. In such domains, applications of different criticality levels usually coexist. In an automotive environment, for example, the brake has a higher safety criticality than the infotainment system. As a result of this mixed criticality, applications requiring hard real-time guarantees compete with those requiring soft real-time guarantees and with best-effort applications for the resources of the overall system. This leads to the need to accommodate multiple levels of criticality while ensuring safety and reliability, which increases the already high complexity even more.
This thesis deals with the question of how to conveniently, effectively and efficiently handle the management and complexity of mixed-criticality CPNs (MC-CPNs). Since system developers can no longer do this without assistance from the system itself, it is essential to develop new approaches and techniques to ensure that such systems can operate under a range of conditions while meeting stringent requirements.
Based on five research hypotheses, this thesis introduces Chameleon, a comprehensive adaptive middleware supporting mixed criticality in Cyber-Physical Networks, which efficiently and autonomously handles the management and complexity of CPNs with regard to the mixed-criticality aspect.
Chameleon contributes to the state of the art by introducing and combining the following concepts:
- A comprehensive self-adaptation mechanism covering all levels of the system model is provided.
- This mechanism allows a flexible combination of parametric and structural adaptation actions (relocation, scheduling, tuning, ...) to modify the behavior of the system.
- Real-time constraints of mixed-critical applications (hard real-time, soft real-time, best-effort) are considered in all possible adaptation conditions and actions by the use of the importance parameter.
- CPNs are supported by the introduction of different scopes (local, system, global) for the adaptation conditions and actions. This also enables the combination of different scopes for conditions and actions.
- The realization of the adaptation with a MAPE-K loop, instantiated by a distributed learning classifier system (LCS), allows for real-time-capable reasoning about adaptation actions that also works on resource-constrained systems.
- The developed rule language Rango offers an intuitive way to specify an initial rule set for LCS in the context of CPS/CPNs and supports the system administrators in the process of rule set generation.
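The MAPE-K loop named above can be sketched generically. The following is a minimal, hypothetical skeleton (variable names, thresholds and actions are illustrative, not Chameleon's actual API or the Rango rule syntax): monitoring updates a shared knowledge base, an analysis step detects an adaptation condition, a planning step selects an action such as relocation, and execution applies it.

```python
# Minimal MAPE-K control-loop skeleton (hypothetical names and values).
knowledge = {"cpu_load": 0.0, "placement": "node_a"}  # shared Knowledge

def monitor(sample):
    # Monitor: collect sensor data into the knowledge base
    knowledge["cpu_load"] = sample

def analyze():
    # Analyze: detect an adaptation condition (here: overload)
    return knowledge["cpu_load"] > 0.8

def plan():
    # Plan: choose an adaptation action (a structural relocation)
    return ("relocate", "node_b")

def execute(action):
    # Execute: apply the chosen action to the managed system
    if action[0] == "relocate":
        knowledge["placement"] = action[1]

for sample in [0.3, 0.5, 0.95]:   # one loop iteration per monitoring sample
    monitor(sample)
    if analyze():
        execute(plan())

print(knowledge["placement"])     # relocated after the overload sample
```

In Chameleon, conditions and actions of this kind additionally carry scopes (local, system, global) and an importance parameter reflecting application criticality.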
Artificial intelligence in heavy-ion collisions : bridging the gap between theory and experiments
(2023)
Artificial Intelligence (AI) methods are employed to study heavy-ion collisions at intermediate collision energies, where QCD matter at high baryon density and moderate temperature is produced. The experimental measurements of various conventional observables, such as collective flow and particle-number fluctuations, are usually compared with expensive model calculations to infer the physics governing the evolution of the matter produced in the collisions. Various experimental effects and processing algorithms can greatly affect the sensitivity of these observables. AI methods are used to bridge this gap between theory and experiment in heavy-ion collisions. The problems with conventional methods of analyzing experimental data are illustrated in a comparative study of the Glauber MC model and the UrQMD transport model. It is found that the centrality determination and the estimated fluctuations of the number of participant nucleons suffer from strong model dependencies for Au-Au collisions at 1.23 AGeV. This can bias the results of the experimental analysis if the number of participant nucleons used is not consistent throughout the analysis and in the final model-to-data comparison. The measurable consequences of this model dependence of the number of participant nucleons are also discussed. In this context, PointNet-based AI models are developed to accurately reconstruct the impact parameter or the number of participant nucleons in a collision event from the hits and/or reconstructed tracks of particles in 10 AGeV Au-Au collisions at the CBM experiment. In the last part of the thesis, different AI methods to study the equation of state (EoS) at high baryon densities are discussed. First, a Bayesian inference is performed to constrain the density dependence of the EoS from the available experimental measurements of elliptic flow and mean transverse kinetic energy of mid-rapidity protons in intermediate-energy collisions.
The UrQMD model was augmented to include arbitrary potentials (or equivalently the EoSs) in the QMD part to provide a consistent treatment of the EoS throughout the evolution of the system. The experimental data constrain the posterior constructed for the EoS for densities up to four times saturation density. However, beyond three times saturation density, the shape of the posterior depends on the choice of observables used. There is a tension in the measurements at a collision energy of about 4 GeV. This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS. Tighter constraints and fully conclusive statements on the EoS require accurate, high statistics data in the whole beam energy range of 2-10 GeV, which will hopefully be provided by the beam energy scan programme of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR, and future experiments at HIAF and NICA. Finally, it is shown that the PointNet-based models can also be used to identify the equation of state in the CBM experiment. Despite the uncertainties due to limited detector acceptance and biases in the reconstruction algorithms, the PointNet-based models are able to learn the features that can accurately identify the underlying physics of the collision. The PointNet-based models are an ideal AI tool to study heavy-ion collisions, not only to identify the geometric event features, such as the impact parameter or the number of participant nucleons, but also to extract abstract physical features, such as the EoS, directly from the detector outputs.
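The property that makes PointNet-style models well suited to unordered detector hits is permutation invariance: a shared per-point transformation followed by a symmetric pooling operation, so the extracted event features do not depend on the order in which hits are listed. A toy NumPy sketch of this idea (illustrative random weights and feature sizes, not the thesis's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared per-point weights: a toy stand-in for PointNet's shared MLP.
W1 = rng.normal(size=(4, 16))   # each "hit" carries 4 features, e.g. (x, y, z, t)
W2 = rng.normal(size=(16, 8))

def pointnet_features(points):
    h = np.maximum(points @ W1, 0)   # shared MLP applied to every point
    h = np.maximum(h @ W2, 0)
    return h.max(axis=0)             # symmetric max-pooling over all points

event = rng.normal(size=(100, 4))    # 100 detector hits of one event
shuffled = event[rng.permutation(100)]

f1 = pointnet_features(event)
f2 = pointnet_features(shuffled)
print(np.allclose(f1, f2))           # True: the order of hits is irrelevant
```

Because the pooled feature vector is identical for any ordering of the hits, such a model can consume raw hit or track lists directly, without imposing an artificial ordering or binning.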
The single-source shortest-path problem is a fundamental problem in computer science. We consider a generalization, the $k$-shortest path problem. Let $G$ be a directed edge-weighted graph with $n$ nodes and $m$ edges, and let $s,t$ be two fixed nodes. The goal is to compute $k$ paths $P_1,\dots,P_k$ between $s$ and $t$ in non-decreasing order of their length, such that all other paths between $s$ and $t$ are at least as long as the $k$-th path $P_k$. We focus on the version of the problem where paths are not allowed to visit nodes multiple times, sometimes referred to as the $k$-shortest simple path problem.
Probably the best-known $k$-shortest path algorithm is Yen's algorithm. It has a worst-case time complexity of $O(kn\cdot scp(n,m))$, where $scp(n,m)$ is the complexity of the single-source shortest-path algorithm used as a subroutine. In the case of Dijkstra's algorithm, $scp(n,m)$ is $O(m + n\log n)$. One of the more recent improvements of Yen's algorithm is due to Feng.
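For reference, Yen's algorithm can be sketched compactly with Dijkstra's algorithm as the shortest-path subroutine. This is a textbook version on an illustrative toy graph, not the thesis's implementation: for each accepted path, every prefix is taken as a root, the next edges of previously accepted paths sharing that prefix are banned, and a spur path is computed from the deviation node.

```python
import heapq

def dijkstra(graph, s, t, banned_nodes, banned_edges):
    """Shortest s-t path avoiding banned nodes/edges; graph: {u: {v: w}}."""
    dist, prev, pq, done = {s: 0.0}, {}, [(0.0, s)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == t:                               # reconstruct path
            path = [t]
            while path[-1] != s:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for v, w in graph.get(u, {}).items():
            if v in banned_nodes or (u, v) in banned_edges:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return None

def yen(graph, s, t, k):
    first = dijkstra(graph, s, t, set(), set())
    if first is None:
        return []
    A, B = [first], []                           # accepted paths, candidates
    while len(A) < k:
        _, last_path = A[-1]
        for i in range(len(last_path) - 1):      # each deviation (spur) node
            spur, root = last_path[i], last_path[: i + 1]
            banned_edges = {(p[1][i], p[1][i + 1]) for p in A
                            if len(p[1]) > i + 1 and p[1][: i + 1] == root}
            banned_nodes = set(root[:-1])        # keep the paths simple
            spur_res = dijkstra(graph, spur, t, banned_nodes, banned_edges)
            if spur_res is not None:
                root_cost = sum(graph[root[j]][root[j + 1]] for j in range(i))
                cand = (root_cost + spur_res[0], root[:-1] + spur_res[1])
                if cand not in B and cand not in A:
                    heapq.heappush(B, cand)
        if not B:
            break
        A.append(heapq.heappop(B))
    return A

g = {"s": {"a": 1, "b": 2}, "a": {"t": 1, "b": 1}, "b": {"t": 1}, "t": {}}
paths = yen(g, "s", "t", 3)
print(paths)   # three shortest simple s-t paths, with costs 2, 3, 3
```

Each accepted path triggers up to $n$ shortest-path computations, which is the source of the $O(kn\cdot scp(n,m))$ worst-case bound; Feng's improvement prunes most of these subroutine calls.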
Even though Feng's algorithm is much faster in practice, it has the same worst-case complexity as Yen's algorithm.
The main results presented in this thesis are upper bounds on the average-case of Yen's and Feng's algorithm, as well as practical improvements and a parallel implementation of Yen's and Feng's algorithms including these improvements. The implementation is publicly available under GPLv3 open source license.
We show in our analysis that Yen's algorithm has an average-case complexity of $O(k \log(n)\cdot scp(n,m))$ on $G(n,p)$ graphs with at least logarithmic average degree and random edge weights following a distribution with certain properties.
On $G(n,p)$ graphs with constant to logarithmic average degree and uniform random edge weights over $[0;1]$, we show an average-case complexity of $O(k\cdot\frac{\log^2 n}{np}\cdot scp(n,m))$. Feng's algorithm has an even better average-case complexity of $O(k\cdot scp(n,m))$ on unweighted $G(n,p)$ graphs with logarithmic average degree and for constant values of $k$. We further provide evidence that the same holds true for $G(n,p)$ graphs with uniform random edge weights over $[0;1]$.
On the practical side, we suggest new heuristics to prune even more single-source shortest-path computations than Feng's algorithm and evaluate all presented algorithms on $G(n,p)$ and grid graphs with up to 256 million nodes. We demonstrate speedups of up to a factor of 40 compared to Feng's algorithm.
Finally, we discuss two ways to parallelize the suggested algorithms and evaluate them on grid graphs, showing speedups by a factor of 2 using 4 threads and by a factor of up to 8 using 16 threads, respectively.
A synchrotron is a particular type of cyclic particle accelerator and the first accelerator concept to enable the construction of large-scale facilities [10], such as the largest particle accelerator in the world, the 27-kilometre-circumference Large Hadron Collider (LHC) at CERN near Geneva, Switzerland, the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, and the superconducting heavy-ion synchrotron SIS100 under construction for the FAIR facility at GSI, Darmstadt, Germany. Unlike a cyclotron, which can accelerate particles starting at low kinetic energy, a synchrotron needs a pre-acceleration stage to bring the particles to an appropriate initial energy before injection. Pre-acceleration can be realized by a chain of other accelerator structures, such as a linac or, in the case of electrons, a microtron; examples are the proton and ion injectors Linac 4 and Linac 3 for the LHC, the UNILAC as injector for the SIS18 at GSI, and, in the future, the SIS18 as injector for the SIS100. The linac is a commonly used injector for ion synchrotrons and consists of three main parts: an ion source creating the particles, a buncher system or an RFQ, and the main drift-tube accelerator (DTL). To meet the energy and beam-current requirements of a synchrotron injector linac, its cost amounts to a considerable fraction of the total facility costs.
However, operating a normal-conducting linac at cryogenic temperatures can be a promising solution for improving the efficiency and reducing the costs of a linac. Synchrotron injectors operate at a very low duty factor with beam pulse lengths in the 1 µs to 100 µs range, as most of the time is needed to perform the synchrotron cycle. Superconducting linacs are not convenient, as they cannot efficiently operate at low duty factor and high beam currents.
The cryogenic operation of ion linacs has been investigated at IAP in Frankfurt since around 2012 [1, 37]. The motivation was to develop very compact synchrotron injectors at reduced overall linac costs per MV of acceleration voltage. As the beam currents needed for new facilities are increasing as well, the new technology will also allow an efficient realization of the higher injector linac energies required in that case. Operating normal-conducting structures at cryogenic temperature exploits the significantly higher conductivity of copper at liquid-nitrogen temperatures and below. On the other hand, the anomalous skin effect substantially reduces the gain in shunt impedance [25, 31, 9]. Intense studies and experiments performed recently are encouraging with respect to increased field levels at linac operation temperatures between 30 K and 70 K [17, 24, 4, 23, 5, 8]. While these studies are motivated by applications in electron acceleration at GHz frequencies, the aim of this work is to find applications in the 100 to 700 MHz range, typical for proton and ion acceleration. At these frequencies, a higher impact in saving RF power is expected due to the larger skin depth, which under the normal skin effect scales with frequency as $f^{-1/2}$. On the other hand, it is assumed that the improvement in maximum surface field levels will be similar to what has already been demonstrated for electron accelerator cavities. This should allow a good compromise between reduced RF power needs for achieving a given accelerating voltage and a reduced total linac length to save building costs.
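The $f^{-1/2}$ scaling can be made concrete with the textbook formula for the normal skin depth, $\delta = \sqrt{\rho/(\pi f \mu_0)}$. The sketch below uses the room-temperature resistivity of copper and deliberately ignores the anomalous skin effect that limits the gain at cryogenic temperatures:

```python
import math

MU0 = 4e-7 * math.pi      # vacuum permeability (H/m)
RHO_CU_300K = 1.72e-8     # copper resistivity at room temperature (ohm * m)

def skin_depth(freq_hz, resistivity=RHO_CU_300K):
    # Classical (normal) skin depth; the anomalous skin effect at
    # cryogenic temperatures reduces the actual gain and is ignored here.
    return math.sqrt(resistivity / (math.pi * freq_hz * MU0))

d100 = skin_depth(100e6)  # roughly 6.6 micrometres at 100 MHz
d400 = skin_depth(400e6)
print(d100 / d400)        # 2.0: the skin depth scales as f**(-1/2)
```

The larger skin depth at 100-700 MHz compared with GHz frequencies means more of the current-carrying layer benefits from the improved conductivity, which is why a larger RF-power saving is expected in this range.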
A very important point is the temperature stability of the cavity surface during the RF pulse. This becomes increasingly important the lower the chosen operating temperature: the temperature dependence of the electric conductivity of copper gets rather strong below 80 K, provided the RRR value of the copper is adequate. It is clear that this technology is suited only for cavities operated at low duty cycle, with RF pulse lengths below one millisecond. At longer pulses, the cavity surface heats up within the pulse to temperatures where the conductivity advantage is substantially reduced. These conditions fit very well to synchrotron injectors or to pulsed-beam-power applications.
H-mode structures of the IH and CH type are well known to have rather small cavity diameters at a given operating frequency. Moreover, they can achieve effective acceleration voltage gains above 10 MV/m even at low beam energies, already in room-temperature operation [29]. With the new techniques of 3D printing of stainless-steel and copper components, cavity sizes can be reduced even further, making the realization of complex cooling channels much easier.
Another topic is copper components in superconducting cavities, such as power couplers. It is of great importance to know the thermal losses at these surfaces exactly, as they cannot easily be cooled efficiently.
In view of a growing world population and the finite nature of fossil resources, the development of eco-friendly production processes is essential for the transition towards a sustainable industry. Methanol, which can be produced both petrochemically and from renewable resources, offers itself as a bridging technology and attractive alternative raw material for biotechnological processes. This work describes developments advancing the well-studied methylotrophic α-proteobacterium Methylorubrum extorquens AM1 towards an efficient methylotrophic cell factory. Although many homologous and heterologous production routes have already been described and realized for M. extorquens on a laboratory scale, no industrial process has been realized yet. Three major reasons can be identified for this: (1) a limited choice of tools for genetic modification, (2) a lack of understanding of carbon fluxes and side reactions occurring in modified strains, such as product reimport, and (3) the lack of tailored production strains for profitable target products and of optimized bioprocessing protocols. The aim of the present work was to achieve developments in the mentioned areas. As a model application, the high-level production of chiral dicarboxylic acids from the substrate methanol was chosen. Enantiomerically pure chiral compounds are of great interest, e.g., as building blocks for chiral drugs. The ethylmalonyl-CoA pathway (EMCP), which is part of the primary metabolism of M. extorquens, harbors unique chiral CoA-ester intermediates. Their acid derivatives can be released by cleavage of the CoA moiety using heterologous enzymes. The dicarboxylic acids 2-methylsuccinic acid and mesaconic acid were produced in a previous study by introducing the heterologous thioesterase YciA into M. extorquens, where a combined product titer of 0.65 g/L was obtained in shake-flask experiments. These results serve as the basis for the developments in the present work.
First, the previously described reuptake of products was thoroughly investigated, and dctA2, a gene encoding an acid transporter, was identified as a target for reducing product reuptake. In addition, reuptake of mesaconic acid was prevented by converting it to (S)-citramalic acid, a product not metabolizable by M. extorquens, through the introduction of a heterologous mesaconase. Together with 2-methylsuccinic acid, for which a high enantiomeric excess of (S)-2-methylsuccinic acid was determined, a second chiral molecule was thus added to the product spectrum. For the release of dicarboxylic acid products, YciA, a broad-range thioesterase that accepts a variety of CoA-esters with different chain lengths as substrates, was chosen. The enzyme should theoretically be able to hydrolyze all CoA-esters of interest present in the EMCP. However, in culture supernatants of M. extorquens strains overexpressing the corresponding yciA gene, only mesaconic acid and 2-methylsuccinic acid could be detected. To expand the substrate spectrum of the YciA thioesterase with respect to other EMCP intermediates, semi-rational enzyme engineering was attempted. Screening of the strains carrying the respective YciA variants did not yield strains capable of producing new dicarboxylic acid products. However, the experiments revealed an amino acid position that strongly affected the production of mesaconic acid and 2-methylsuccinic acid in vivo. By substituting the corresponding amino acid in YciA, the maximum titers of mesaconic acid and 2-methylsuccinic acid could be increased substantially. Application of an improved thioesterase variant in a second, E. coli-based process confirmed the enhanced activity of the enzyme. The desired extension of the product spectrum by another chiral molecule (2-hydroxy-3-methylsuccinic acid, presumably the (2S,3R)-form) was finally achieved by using an alternative thioesterase.
Tailored fermentation strategies were developed for the high-level production of the above-mentioned products.
In the second part of the work, two novel genetic tools for M. extorquens were developed and characterized. The pBBR1-derived plasmid pMis1_1B was shown to be stably maintained in M. extorquens cells. In addition, its suitability for co-transformations with other plasmids was demonstrated. The second tool, the cumate-inducible promoter Ps6, is tailored for the expression of pathways with toxic products, as the transcription of genes controlled by Ps6 is strongly repressed in the absence of an inducer.
Overall, the present work demonstrates the enormous potential of using M. extorquens as a methylotrophic cell factory. In the applications shown, the biotechnological production of high-priced chiral molecules is combined with the use of an attractive alternative substrate. In addition, new achievements and approaches are presented to facilitate the development of future M. extorquens production strains.
In this thesis, we use lattice QCD to study part of the QCD phase diagram, specifically the QCD phase transition at zero baryon chemical potential (μ=0), where QCD matter changes from a hadron gas to a quark-gluon plasma (QGP) with increasing temperature.
This phase transition takes place as a crossover, but when theoretically changing the masses of the quarks, the order of the phase transition changes as well.
We focus on the region of heavy quark masses with Nf=2 flavours, where we investigate the critical quark mass at the second-order phase transition, in the form of a Z2 point between the first-order and the crossover region.
The first-order region is located at infinitely heavy quark masses. As the quark masses decrease, the associated Z3 centre symmetry is broken explicitly, causing the first-order phase transition to weaken until it turns into the Z2 point and finally into a crossover.
We study this Z2 point using simulations at Nf=2 and lattices of the sizes Nt = {6, 8, 10, 12}, partially building on previous work, in which the simulations for Nt = {6, 8, 10} were started.
The simulations for Nt=12 are not yet finished, but we were able to draw some preliminary conclusions. These simulations are run on GPUs and CPUs, using the codes Cl2QCD and open-QCD-FASTSUM, respectively. Afterwards, the data goes through a first analysis step in the Python program PLASMA, which prepares it for the two techniques we use to analyse the nature of the phase transition.
As a first, reliable analysis method, we perform a finite size scaling analysis of the data to find the location of the Z2 point. Since we are using lattice QCD, performing a continuum extrapolation is necessary to reach the continuum result.
However, the finite size scaling analysis is hampered by the large amount of simulation data required, both in terms of statistics and of the total number of simulations, which is why this thesis represents only an intermediate step towards the continuum limit.
This also leads to the second analysis technique we explore in this thesis.
We start to design a Landau theory which describes the phase boundary for heavy masses at Nf=2 based on the simulated data.
We develop a Landau functional for every Nt we have simulation data for.
Although the results are not as precise as those from the finite size scaling analysis, we are able to reproduce the position of the Z2 point for every Nt.
Even though we cannot yet perform a continuum extrapolation, this approach might, after further development in future works, eventually lead to a continuum result that does not require as many simulations as the finite size scaling analysis.
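For context, the standard observable in such a finite-size scaling analysis is the Binder cumulant (kurtosis) of the order parameter: it approaches 3 for Gaussian (crossover-like) fluctuations, 1 for a two-state (first-order-like) distribution, and a universal intermediate value (about 1.604 for the 3D Ising class) at a Z2 critical point. A generic illustration on synthetic samples (not the thesis's PLASMA analysis code):

```python
import numpy as np

def binder_b4(m):
    # Kurtosis of the fluctuations of the (volume-averaged) order parameter.
    dm = m - m.mean()
    return (dm**4).mean() / (dm**2).mean() ** 2

rng = np.random.default_rng(2)
gauss = rng.normal(size=100_000)               # crossover-like: B4 -> 3
two_state = rng.choice([-1.0, 1.0], 100_000)   # first-order-like: B4 -> 1

print(binder_b4(gauss), binder_b4(two_state))  # approx. 3 and approx. 1
```

Scanning this cumulant over quark mass and volume, and locating where curves for different volumes intersect at the universal value, is what drives the large statistics requirements mentioned above.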
Nodular lymphocyte-predominant Hodgkin lymphoma (NLPHL) and T-cell/histiocyte-rich large B-cell lymphoma (THRLBCL) are rare types of malignant lymphoma. Both NLPHL and THRLBCL are frequently observed in middle-aged men, with THRLBCL often presenting at an advanced Ann Arbor stage with B symptoms and being associated with a more aggressive course. However, due to the limited number of tumor cells in the tissue of both NLPHL and THRLBCL, only few studies have been conducted on these lymphomas, and current results are mainly based on general molecular genetic studies.
To obtain a better understanding of these disease entities and of possible changes in their nuclear and cytoplasmic sizes, the present study compared the different NLPHL forms and THRLBCL in terms of nuclear size and nuclear volume, using both 2D and 3D analysis. The 2D analysis of nuclear size and nuclear volume revealed no significant differences between these groups. However, the 3D analysis of NLPHL and THRLBCL showed a slightly enlarged nuclear volume in THRLBCL. Furthermore, the analysis indicated a significantly increased cytoplasmic size in THRLBCL compared to the NLPHL forms. Differences occurred not only between the tumor cells of both disease entities; the T cells also presented a larger nuclear volume in THRLBCL. B cells, which served as the control group, did not show any significant differences between the groups. The presented results suggest an increased activity of T cells in THRLBCL, which is most likely a response against the surrounding tumor cells and probably limits tumor cell proliferation. These results also highlight the importance of 3D analysis, which is clearly superior to 2D analysis. For a better understanding of both disease entities, it is therefore recommended to combine the 3D technique with molecular genetic analysis in future research.