A novel role for mutant mRNA degradation in triggering transcriptional adaptation to mutations
(2020)
Robustness to mutations promotes organisms’ well-being and fitness. The increasing number of mutants in various model organisms, and in humans, showing no obvious phenotype (Bouche and Bouchez, 2001; Chen et al., 2016b; Giaever et al., 2002; Kok et al., 2015) has renewed interest in how organisms adapt to gene loss. In the presence of deleterious mutations, genetic compensation by transcriptional upregulation of related gene(s) (also known as transcriptional adaptation) has been reported in numerous systems (El-Brolosy and Stainier, 2017; Rossi et al., 2015; Tondeleir et al., 2012); however, the molecular mechanisms underlying this response have remained unclear. To investigate this phenomenon, I developed and studied multiple models of transcriptional adaptation in zebrafish and mouse cell lines. I first show that transcriptional adaptation is not caused by loss of protein function, indicating that the trigger lies upstream, and find that the response involves enhanced transcription of the related gene(s). Furthermore, I observe a correlation between the level of mutant mRNA degradation and the upregulation of related genes. To investigate the role of mutant mRNA degradation in triggering the response, I generated mutant alleles that do not transcribe the mutated gene and found that they fail to induce a transcriptional response and display stronger phenotypes. Transcriptome analysis of alleles displaying mutant mRNA degradation revealed upregulation of a significant proportion of genes with sequence similarity to the mutated gene’s mRNA, suggesting a model whereby mRNA degradation intermediates induce transcriptional adaptation via sequence similarity. Further mechanistic analyses suggested that RNA decay factor-dependent chromatin remodeling and repression of antisense RNAs are implicated in the response. These results identify a novel role for mutant mRNA degradation in buffering against mutations. Moreover, they have important implications for understanding disease-causing mutations and should help in designing mutations that induce minimal transcriptional adaptation-mediated compensation, facilitating the study of gene function in model organisms.
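The enrichment claim above can be illustrated with a simple contingency-table test. A minimal, hypothetical sketch in Python, assuming the gene sets (upregulated genes, genes with sequence similarity to the mutated gene's mRNA) have already been derived from the transcriptome data; all inputs are invented:

```python
from scipy.stats import fisher_exact

def similarity_enrichment(all_genes, upregulated, similar):
    """All arguments are sets of gene IDs (hypothetical inputs)."""
    up_sim = len(upregulated & similar)
    up_other = len(upregulated - similar)
    rest = all_genes - upregulated
    table = [[up_sim, up_other],
             [len(rest & similar), len(rest - similar)]]
    # One-sided test: are sequence-similar genes over-represented
    # among the upregulated genes?
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value
```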
This thesis investigates the acquisition pace and the typical developmental path in eL2 acquisition of selected phenomena of German morphosyntax and semantics and compares them to monolingual acquisition. In addition, the influence of ‘Age of Onset’ and of external factors on eL2 acquisition is examined.
To date, most studies on eL2 acquisition have focused on language production. Based mostly on longitudinal spontaneous-speech data from only a small number of children, they indicate that eL2 learners acquire sentence structure and subject-verb agreement faster than monolingual children, whereas the acquisition of case marking causes them more difficulty. Moreover, developmental paths similar to those of monolingual children have been claimed. Only a few studies have examined comprehension abilities in eL2 learners, and these overwhelmingly used cross-sectional designs. The findings from comprehension studies on telic and atelic verbs and on wh-questions indicate that eL2 children acquire their target-like interpretation faster than monolingual children. The same acquisition stages towards target-like interpretation as in monolingual acquisition are assumed as well. Taken together, to date no study exists that examines comprehension and production abilities in a large group of eL2 learners of German in a longitudinal design.
This thesis extends the previous results by investigating the pace of acquisition, the impact of factors, and individual developmental paths in a longitudinal design with large groups of participants. Language data from 29 eL2 learners of German (age at T1: 3;7 years, LoE: 10 months) and 45 monolingual German-speaking children (age at T1: 3;7) are examined. The eL2 learners were tested in six test rounds (age at T6: 6;9 years). The monolingual children were tested in five test rounds (age at T5: 5;7). The standardized test LiSe-DaZ (Schulz & Tracy, 2011) was employed to examine children’s language skills.
eL2 learners show a significantly greater rate of change, and thus a faster acquisition pace, than monolingual children on the following scales: comprehension of telicity, comprehension of wh-questions, production of prepositions, and production of conjunctions. These phenomena are acquired early by monolingual children. No differences in acquisition pace between eL2 children and monolingual children are found for comprehension of negation, production of case marking, and production of focus particles. These phenomena are acquired late in monolingual development and involve semantic and pragmatic knowledge. The finding of a faster acquisition pace for several phenomena is in line with several studies reporting that eL2 children develop faster than monolingual children.
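As an illustration of how such rate-of-change comparisons are commonly made (not necessarily the analysis used in this thesis), a mixed-effects growth model with a time-by-group interaction can be fitted to the longitudinal scores; the data file and column names below are hypothetical:

```python
# Sketch: does the eL2 group have a steeper slope over test rounds than
# the monolingual group? The interaction term estimates the difference.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("scores.csv")  # columns: child, group, test_round, score (invented)
model = smf.mixedlm("score ~ test_round * group", data, groups=data["child"])
result = model.fit()
print(result.summary())  # test_round:group = difference in acquisition pace
```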
Independent of whether a phenomenon is acquired early or late, no effects of external factors on eL2 children’s performance are found. These findings indicate that the acquisition of core, rule-based phenomena is not sensitive to external factors if first exposure to the L2 takes place around the age of three.
Moreover, eL2 children show the same developmental stages and error types in the comprehension of telicity, the comprehension of negation, and the production of matrix and subordinate clauses, independent of how fast they acquire the structure under consideration. Thus, these findings provide further support for similar developmental paths of eL2 and monolingual children towards target-like comprehension and production.
Two main types of methods are used in gene therapy: integrating vectors and nuclease-based genome engineering. Nucleases are site-specific and are efficient for knock-outs, but inefficient at inserting long DNA sequences. Integrating vectors perform this task with high efficiency, but their insertion occurs at random genomic positions. This can result in transformation of target cells, which leads to severe adverse events in a gene therapy context. Thus, it is of great interest to develop novel genome engineering tools that combine the advantages of both technologies. The main focus of this thesis is on generating such a targetable integrating vector.
The integrating vector used in this project is the Sleeping Beauty (SB) transposon, a DNA transposon characterized by high activity across a wide range of cells. The SB transposase was combined with an RNA-guided Cas9 nuclease domain. This nuclease component was meant to direct transposase integration to specific targets defined by guide RNAs (gRNAs). The SB transposase was fused to cleavage-inactivated Cas9 (dCas9) to tether it to the target sites. In addition, adapter proteins consisting of dCas9 and domains non-covalently interacting with the SB transposase or the SB transposon were generated. All constituent domains of these fusion proteins were tested in enzymatic assays, and almost all enzymatic activities could be verified.
Combining the fusion protein dCas9-SB100X with a gRNA binding a sequence from the AluY repetitive element resulted in a weak, but statistically significant enrichment around sites bound by the gRNA. This enrichment was ca. 2-fold and occurred within a 300 bp window downstream of target sites, or within the AluY element.
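A minimal sketch of the kind of enrichment analysis described above: count insertions falling within a fixed window downstream of gRNA target sites and compare against a random background. All inputs (site and insertion coordinates) are hypothetical, and this is far simpler than a real genome-wide pipeline:

```python
import random

WINDOW = 300  # bp downstream, matching the window of the observed enrichment

def insertions_near_sites(insertions, sites, window=WINDOW):
    """insertions, sites: lists of positions on one chromosome."""
    return sum(any(s <= ins <= s + window for s in sites) for ins in insertions)

def fold_enrichment(insertions, sites, chrom_length, n_shuffles=1000):
    observed = insertions_near_sites(insertions, sites)
    background = []
    for _ in range(n_shuffles):
        shuffled = [random.randrange(chrom_length) for _ in insertions]
        background.append(insertions_near_sites(shuffled, sites))
    expected = sum(background) / n_shuffles
    return observed / expected if expected else float("inf")
```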
Targeting via adapter proteins and targeting of other sites (L1 elements or single-copy targets) did not result in statistically significant effects. Single-copy targets tested included the HPRT gene and three specifically selected GSH targets known to be receptive to SB insertions. The combination with a more sequence-specific transposase mutant also failed to increase specificity to a level allowing targeting of single-copy loci. Genome-wide analysis of insertions, however, demonstrated that dCas9-SB100X has a different insertion profile than SB100X, regardless of the gRNA used.
As the low efficiency of retargeting is likely a consequence of the high background activity of the SB100X transposase in the fusion constructs, an SB mutant with reduced DNA affinity, SB(C42), was generated. For this mutant, transposition activity was partly dependent on a dCas9 domain being supplied with a multi-copy target gRNA, showing a 2-fold increase in the presence of an AluY-directed gRNA. Whether using this mutant results in improved targeting remains to be determined.
In a side project, an attempt was made to direct SB insertions to ribosomal DNA by fusing the transposase to a nucleolar protein. This fusion transposase partially localized to nucleoli and insertions catalyzed by this transposase were found to be enriched in nucleolus organizer regions (NORs) and nucleolus-associated domains (NADs).
The aim of a second side project was to increase the ratio between homology-directed repair (HDR) and non-homologous end-joining (NHEJ) at Cas9-mediated double-strand breaks (DSBs). To achieve this, Cas9 was fused to DNA-interacting domains, and the corresponding binding sequences were fused to the homology donors. While an increased HDR/NHEJ ratio could be observed for the fusion proteins, it was not dependent on the presence of the binding sequences in the donor molecules.
High-energy physics experiments aim to deepen our understanding of the fundamental structure of matter and the forces that govern it. One of the most challenging aspects of designing new experiments is data management and event selection. The search for increasingly rare and intricate physics events calls for high-statistics measurements and sophisticated event analysis. With progressively complex event signatures, traditional hardware-based trigger systems reach the limits of realizable latency and complexity. The Compressed Baryonic Matter experiment (CBM) employs a novel approach to data readout and event selection to address these challenges. Self-triggered, free-streaming detectors push all data to a central compute cluster, the First-level Event Selector (FLES), for software-based event analysis and selection. While this concept solves many issues present in classical architectures, it also poses new challenges for the design of the detector readout systems and online event selection.
This thesis presents an efficient solution to the data management challenges presented by self-triggered, free-streaming particle detectors. The FLES must receive asynchronously streamed data from a heterogeneous detector setup at rates of up to 1 TB/s. The real-time processing environment implies that all components have to deliver high performance and reliability to record as much valuable data as possible. The thesis introduces a time-based data model to partition the input streams into containers of fixed length in experiment time for efficient data management. These containers provide all necessary metadata to enable generic, detector-subsystem-agnostic data distribution across the entire cluster. An analysis shows that the introduced data overhead is well below 1 % for a wide range of system parameters.
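To make the container idea concrete, here is a deliberately simplified sketch (not the actual FLES data format) of partitioning timestamped detector messages into containers of fixed duration in experiment time and estimating the metadata overhead; the container length and descriptor size are assumptions:

```python
from collections import defaultdict

CONTAINER_NS = 100_000   # container length in experiment time, ns (assumed)
DESCRIPTOR_BYTES = 32    # per-container metadata, bytes (assumed)

def build_containers(messages):
    """messages: iterable of (timestamp_ns, payload_bytes).
    Each message lands in the container covering its timestamp, so
    distribution across cluster nodes needs no subsystem knowledge."""
    containers = defaultdict(list)
    for ts, payload in messages:
        containers[ts // CONTAINER_NS].append(payload)
    return containers

def metadata_overhead(containers):
    payload = sum(len(p) for c in containers.values() for p in c)
    meta = DESCRIPTOR_BYTES * len(containers)
    return meta / (meta + payload)  # stays small for well-filled containers
```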
Furthermore, a concept and the implementation of a detector data input interface for the CBM FLES, optimized for resource-efficient data transport, are presented. The central element of the architecture is an FPGA-based PCIe extension card for the FLES entry nodes. The hardware designs developed in the thesis enable interfacing with a diverse set of detector systems. A custom, high-throughput DMA design structures data in a way that enables low-overhead access and efficient software processing. The ability to share the host DMA buffers with other devices, such as an InfiniBand HCA, allows for true zero-copy data distribution between the cluster nodes. The discussed FLES input interface is fully implemented and has already proven its reliability in production operation in various physics experiments.
A large number of chemicals are constantly introduced to surface water from anthropogenic and natural sources. Although substantial efforts have been made to identify these chemicals (e.g., potentially anthropogenic contaminants) in surface waters using liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS), a large number of LC-HRMS chemical signals, often with high peak intensity, are left unidentified. In addition to synthetic chemicals and transformation products, these signals may also represent plant secondary metabolites (PSMs) released from vegetation through various pathways such as leaching, surface run-off and rain sewers, or input of litter from vegetation. While this may be considered a confounding factor in the screening of water contaminants, it could also contribute to the cumulative toxic risk of water contamination. However, it is hardly known to what extent these metabolites contribute to the chemical mixture of surface waters. Thus, reducing the number of unknowns in water samples by also identifying PSMs present at significant concentrations in surface waters will help to improve the monitoring and assessment of water quality potentially impacted by complex mixtures of natural and synthetic compounds. Therefore, the main focus of the present study was to identify the occurrence of PSMs in river waters and to explore the link between the presence of vegetation along rivers and the detection of the corresponding PSMs in river water.
In order to achieve the goals of the present thesis, two chemical screening approaches, namely non-target and target screening using LC-HRMS, were implemented. (1) Non-target analysis involving a novel approach was applied to associate unknown peaks of high intensity in LC-HRMS data with PSMs from the surrounding vegetation, by focusing on peaks overlapping between river water and aqueous plant extracts (Annex A1). (2) LC-HRMS target screening in river waters was performed for about 160 PSMs, selected from a large phytotoxin database (Annex A2 and A3) considering their expected abundance in the vegetation, their potential mobility, persistence and toxicity in the water cycle, and the commercial availability of standards.
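A minimal sketch of the peak-overlap idea in (1): match LC-HRMS features between a river-water sample and a plant extract by m/z (ppm tolerance) and retention time. The tolerances and the peak format are assumptions, not the thesis's actual settings:

```python
MZ_TOL_PPM = 5.0   # assumed mass accuracy window
RT_TOL_MIN = 0.2   # assumed retention-time window, minutes

def match_peaks(water_peaks, plant_peaks):
    """Each peak: (mz, rt_min, intensity). Brute-force pairing is fine
    for a sketch; real tools use indexed lookups."""
    pairs = []
    for mz_w, rt_w, int_w in water_peaks:
        for mz_p, rt_p, int_p in plant_peaks:
            ppm = abs(mz_w - mz_p) / mz_w * 1e6
            if ppm <= MZ_TOL_PPM and abs(rt_w - rt_p) <= RT_TOL_MIN:
                pairs.append(((mz_w, rt_w, int_w), (mz_p, rt_p, int_p)))
    return pairs
```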
In non-target screening (Annex A1), a high number of overlapping peaks was found between aqueous plant extracts and water from adjacent locations, suggesting a significant impact of vegetation on the chemical mixtures detectable in river waters. Chemical structures were assigned to 12 pairs of peaks, while for several other pairs the MS/MS spectra matched but no structure suggestion was produced by the software tools used to retrieve candidate structures. Nevertheless, the pairs of peaks with matching spectra represented the same chemical structure. The identified compounds belonged to different compound classes such as coumarins and flavonoids, among others. For the identified PSMs, individual concentrations of up to 5 µg/L were measured. The concentration and the number of detected PSMs per sample correlated with rain events and vegetation coverage.
Target screening revealed the occurrence of 33 out of 160 target compounds in river waters (Annex A2 and A3). The identified compounds belonged to different classes such as alkaloids, coumarins, and flavonoids. Individual compound concentrations reached several thousand ng/L, with the toxic alkaloids narciclasine and lycorine recording the highest maximum concentrations. The neurotoxic alkaloid coniine from poison hemlock was detected at concentrations up to 0.4 µg/L, while the simple coumarins esculetin and fraxidin occurred at concentrations above 1 µg/L. The occurrence of some PSMs in river water correlated with the specific vegetation growing along the rivers, while others were linked to a wide range of vegetation. For example, narciclasine and lycorine were emitted by the dominant plant species of the Amaryllidaceae family (e.g., Galanthus nivalis (snowdrop), Leucojum vernum and Anemone nemorosa), while intermedine and echimidine originated from Symphytum officinale. The ubiquitous occurrence of the simple coumarins fraxidin, scopoletin and esculetin could be linked to their presence in a wide range of vegetation.
Due to the lack of aquatic toxicity data for the identified PSMs (from both target and non-target screening) and extremely scarce exposure data, no reliable risk assessment was possible. Alternatively, a risk estimation was performed using the threshold of toxicological concern (TTC) concept developed for drinking water contaminants. Many of the identified PSMs exceeded the TTC value (0.1 µg/L); thus, caution should be taken when using such surface waters for drinking water abstraction or recreational use.
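The screening logic reduces to a simple risk quotient against the TTC. A small sketch, with concentrations loosely based on the values reported above (the exact numbers here are illustrative):

```python
TTC_UG_L = 0.1  # threshold of toxicological concern for drinking water, µg/L

# Concentrations in µg/L, loosely based on the abstract's reported ranges.
measured = {"coniine": 0.4, "esculetin": 1.2, "narciclasine": 4.0}

for compound, conc in measured.items():
    rq = conc / TTC_UG_L  # risk quotient against the TTC
    flag = "exceeds TTC" if rq > 1 else "below TTC"
    print(f"{compound}: {conc} µg/L ({rq:.0f}x, {flag})")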
This thesis provides an overview of the occurrence of PSMs in river water impacted by the massive presence of vegetation. Concentrations of many of the identified PSMs are well within the range of synthetic environmental contaminants. Thus, this study adds to a series of recent results suggesting that possibly toxic PSMs occur at relevant concentrations in European surface waters and should be considered in the monitoring and risk assessment of water resources. Aquatic toxicity data for PSMs are largely lacking but are required to include these compounds in the assessment of risks to aquatic organisms and to eliminate risks to human health during drinking water production.
Proteins are the machines of the cell. To ensure the functionality of numerous cellular processes, communication signals must be relayed within proteins. The transmission of a perturbation at one site in a protein to a distant site, where it triggers structural and/or dynamic changes, is called allostery. Initially, allostery was mainly associated with large-scale conformational changes, but later a more dynamic view of allostery in the absence of such large-scale conformational changes emerged. The idea arose of an allosteric pathway consisting of conserved and energetically coupled amino acids that mediate signal transmission between distant sites in the protein. Numerous theoretical studies have linked these allosteric pathways to pathways of efficient anisotropic energy flow. The energy flow along these networks connects allosteric signal transmission with vibrational energy transfer (VET). The majority of research on dynamic allostery is based on theoretical methods, because few suitable experimental techniques exist. To better understand this essential biological process of information transfer, the development of new and powerful experimental tools and techniques is therefore urgently needed. This is the goal of the present dissertation.
VET in proteins is inherently anisotropic due to protein geometry. All globular proteins possess channels of efficient energy flow, which are suspected to be important for protein functions such as the rapid dissipation of excess heat, ligand binding, and allosteric signal transmission. VET can be studied with time-resolved infrared (IR) spectroscopy, in which a femtosecond pump pulse of a laser injects vibrational energy into a molecular system at a specific site, and an IR probe pulse following after a variable time interval detects the propagation of this vibrational energy. A protein-compatible and universally applicable chromophore that converts the energy of a visible photon into vibrational energy is needed as a heater in order to map long-range VET pathways in proteins. The azulene (Azu) chromophore is suitable for this purpose because, after photoexcitation of its first electronic state, it converts almost all of the injected energy into vibrational energy within one picosecond via ultrafast internal conversion. Embedded in the non-canonical amino acid (ncAA) β-(1-azulenyl)-L-alanine (AzAla), the Azu moiety can be incorporated into proteins. The arrival of the injected vibrational energy at a specific site in the protein can be detected with an IR sensor. The combination of Azu as a VET heater and azidohomoalanine (Aha) as a VET sensor with transient IR (TRIR) spectroscopy was already successfully tested on small peptides in the dissertation of H. M. Müller-Werkmeister, which preceded the present dissertation in the laboratories of the Bredenbeck group.
The vibrational frequency of chemical bonds is highly sensitive to even small changes in conformation and dynamics in the immediate environment and can be measured with IR spectroscopy, e.g. Fourier transform IR (FTIR) spectroscopy. IR spectroscopy offers exceptionally good time resolution, making it possible to observe dynamic processes in molecules on a timescale of a few picoseconds, such as the ultrafast transfer of vibrational energy. Two-dimensional (2D) IR spectroscopy can be used to study the relaxation of vibrationally excited states and structural fluctuations around the vibrating bond. However, this outstanding time resolution comes at the cost of limited spectral resolution. In larger molecules with numerous bonds, the vibrational bands overlap and spatial resolution is lost. To overcome this limitation, IR labels can be used: chemical groups that absorb in a spectrally transparent region of the protein/water spectrum (1800 to 2500 cm⁻¹). As ncAAs, they can be incorporated co-translationally into proteins at a desired site, providing site-specific information from within the protein. Owing to their small size, a relatively large extinction coefficient (350-400 M⁻¹cm⁻¹), and high sensitivity to changes in the local environment, organic azides (N₃) such as Aha are particularly suitable IR labels. Aha can be incorporated into proteins as a methionine analog.
...
The weather of the atmospheric boundary layer significantly affects our life on Earth. Thus, realistic modelling of the atmospheric boundary layer is crucial. The processes of the atmospheric boundary layer depend on an accurate representation of the land-atmosphere coupling in the model. In this context, the land surface temperature (LST) plays an important role. In this thesis, it is examined whether the assimilation of LST can lead to improved estimates of the boundary layer and its processes.
To properly assimilate the LST retrievals, a suitable model equivalent in the weather prediction model is necessary. In the weather forecast model of the German Weather Service used here, the LST is modelled without a vegetation temperature. To compensate for this deficit, two different vegetation parameterizations were investigated and the better one, a conductivity scheme, was implemented. To make optimal use of the influence of assimilating the LST observations on the model system, it is useful to pass the information from the observations to both land and atmosphere already in the assimilation step. For that reason, a fully coupled land-atmosphere prediction model was used, and the existing control vector of the assimilation system, a local ensemble transform Kalman filter, was extended by the soil temperature and soil moisture. In two-day case studies in March and August 2017, different configurations of the augmented assimilation system were evaluated based on observing system simulation experiments (OSSEs).
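For illustration, the analysis step of an (L)ETKF with an augmented control vector can be sketched generically in a few lines of NumPy. This is a textbook ETKF without localization, not the operational implementation used in the thesis, and all dimensions are placeholders:

```python
import numpy as np

def etkf_analysis(X, y, H, R):
    """X: (n, m) ensemble of augmented states (atmospheric variables
    stacked with soil temperature and moisture); y: (p,) observations
    (e.g. LST); H: (p, n) observation operator; R: (p, p) obs error cov."""
    m = X.shape[1]
    x_mean = X.mean(axis=1)
    Xp = X - x_mean[:, None]            # state perturbations
    Y = H @ X
    y_mean = Y.mean(axis=1)
    Yp = Y - y_mean[:, None]            # observation-space perturbations
    Rinv = np.linalg.inv(R)
    A = (m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp
    eigval, eigvec = np.linalg.eigh(A)
    Pa = eigvec @ np.diag(1.0 / eigval) @ eigvec.T       # cov in ensemble space
    W = eigvec @ np.diag(np.sqrt((m - 1) / eigval)) @ eigvec.T  # symmetric sqrt
    w_mean = Pa @ (Yp.T @ (Rinv @ (y - y_mean)))
    # Because soil variables sit inside the control vector, the same
    # weights update land and atmosphere from the LST innovation.
    return x_mean[:, None] + Xp @ (w_mean[:, None] + W)
```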
LST was assimilated hourly over two days in both the weakly and the strongly coupled assimilation system. In addition, a free 24-hour forecast was started every six hours. The experiments were validated against the simulated truth (a high-resolution model run) and compared with an experiment without assimilation. It was shown that the prediction of the boundary-layer temperature, especially during the day, and the prediction of the soil temperature, throughout day and night, could be improved.
The best impact of LST assimilation was achieved with the fully coupled system. The humidity variables of the model benefited only partially from the LST assimilation. For this reason, the covariances in the model ensemble were investigated in more detail. To check their compatibility with the high-resolution model run, the ensemble consistency score was introduced. It was found that the covariances between LST and temperature in the high-resolution model run were better represented by the ensemble than those between LST and the humidity variables.
Machine Learning (ML) is so pervasive in today’s life that we often do not even realise that we are using systems based on it. It is also evolving faster than ever before. When deploying ML systems that make decisions on their own, we need to consider their ignorance of our uncertain world. Uncertainty may arise from scarcity of data, bias in the data, or even a mismatch between the real world and the ML model. Given all these uncertainties, we need to think about how to build systems that are not totally ignorant of them. Bayesian ML can, to some extent, deal with these problems. Specifying the model using probabilities provides a convenient way to quantify uncertainties, which can then be included in the decision-making process.
In this thesis, we introduce the Bayesian approach to modeling and apply Bayesian ML models in finance and economics. In particular, we dig deeper into Gaussian processes (GPs) and the Gaussian process latent variable model (GPLVM). Applied to the returns of several assets, the GPLVM provides their covariance structure and a latent-space embedding thereof. Several financial applications can be built on the output of the GPLVM. To demonstrate this, we build an automated asset allocation system and a predictor for missing asset prices, and we identify further structure in financial data.
It turns out that the GPLVM exhibits a rotational symmetry in the latent space, which makes it harder to fit. Our second publication reports how to deal with that symmetry. We propose another parameterization of the model using Householder transformations, by which the symmetry is broken. Bayesian models are changed by reparameterization if the prior is not changed accordingly. We provide the correct prior distribution for the new parameters, such that the model, i.e. the data density, is unchanged under the reparameterization. After applying the reparameterization to Bayesian PCA, we show that the symmetry of nonlinear models can be broken in the same way.
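The Householder idea can be sketched generically: an orthogonal matrix is built as a product of reflections, so parameterizing by the reflection vectors pins down the rotational degrees of freedom. A minimal construction (not the paper's exact parameterization or prior):

```python
import numpy as np

def householder(v):
    """Reflection H(v) = I - 2 v v^T / ||v||^2; H is orthogonal."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def orthogonal_from_vectors(vectors):
    """Product of Householder reflections H(v_1) ... H(v_k): the
    reflection vectors become the parameters of the orthogonal factor."""
    Q = np.eye(len(vectors[0]))
    for v in vectors:
        Q = Q @ householder(v)
    return Q
```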
In our last project, we propose a new method for matching quantile observations that uses order statistics. Using order statistics as the likelihood, instead of a Gaussian likelihood, has several advantages. We compare the two models and highlight their advantages and disadvantages. To demonstrate our method, we fit quantile salary data from several European countries. Given several candidate models for the fit, our method also provides a metric for choosing the best option.
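For concreteness, an order-statistics likelihood can be written down directly from the classical density of the k-th order statistic. The sketch below assumes a frozen SciPy distribution as the candidate model; the model choice (normal) and the example numbers are invented:

```python
from scipy import stats
from scipy.special import betaln

def order_stat_loglik(x, k, n, dist):
    """Log-density at x of the k-th smallest of n i.i.d. draws from dist:
    f_(k)(x) = 1/B(k, n-k+1) * F(x)^(k-1) * (1-F(x))^(n-k) * f(x)."""
    return (-betaln(k, n - k + 1)
            + (k - 1) * dist.logcdf(x)
            + (n - k) * dist.logsf(x)
            + dist.logpdf(x))

# e.g. a sample median (k = 50 of n = 99) of salaries under a candidate model:
ll = order_stat_loglik(x=35_000.0, k=50, n=99, dist=stats.norm(35_000, 8_000))
```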
We hope that this thesis illustrates some of the benefits of Bayesian modeling (especially Gaussian processes) in finance and economics, and its use when uncertainties need to be quantified.
This thesis discusses important questions of beam dynamics in proton-lead operation of the Large Hadron Collider (LHC) at CERN in Geneva. In two blocks of several weeks in 2013 and 2016, proton-lead collisions were successfully generated in the LHC and used by the experiments. One reason for doubts about successful operation in the proton-lead configuration was that the two beams have to be accelerated with different revolution frequencies. There is long-range repulsion between the beams, since both beams share the beam chamber around the interaction points. Because of the different revolution frequencies, the positions of these beam-beam encounters shift every revolution. This can lead to resonant excitation and to growth of the transverse beam emittance, as was observed in the Relativistic Heavy-Ion Collider (RHIC). In this thesis, simulations for the LHC, RHIC and the High-Luminosity Large Hadron Collider (HL-LHC) are performed with a new model. The results for RHIC show relative emittance growth rates of the gold beam in gold-deuteron operation from 0.1 %/s to 1.5 %/s; growth rates of this magnitude were observed experimentally in RHIC. Simulations for the LHC show no significant emittance increase of the lead beam for different intensities of the counter-rotating beam. The simulation results thus confirm the measured stability of the beams in the LHC and reproduce the issue of strongly increasing emittances in RHIC. Also, no significant emittance increase is predicted for the Future Circular Collider (FCC) and the HL-LHC.
Using a frequency-map analysis, this work investigates whether the interaction of the lead beam with the much smaller proton beam in proton-lead operation of the LHC leads to diffusion within the lead beam. Experience at HERA at DESY in Hamburg and at the SppS at CERN has shown that the lifetime of the larger beam can rapidly decrease under certain circumstances. The simulation results show no chaotic dynamics near the beam centre of the lead beam. This result is supported by experimental observation.
A program code has been developed which calculates the beam evolution in the LHC by means of coupled differential equations. This study shows that the growth rates of the lead beam due to intra-beam scattering are overestimated and that particle bunches of the lead beam lose more intensity than assumed in the model. The analysis also shows that bunches colliding in a detector suffer additional losses that increase with decreasing crossing angle at the interaction point.
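A strongly simplified sketch of such coupled beam-evolution equations, combining burn-off losses with intra-beam-scattering (IBS) emittance growth; all rate constants are invented and the real code is far more detailed:

```python
from scipy.integrate import solve_ivp

SIGMA = 2.24e-28       # total cross section, m^2 (~2.24 b)
K_LUMI = 3.75e8        # chosen so L ~ 1e31 m^-2 s^-1 initially (assumed)
TAU_IBS = 20 * 3600.0  # IBS emittance growth time, s (invented)

def rhs(t, state):
    N, eps = state
    lumi = K_LUMI * N**2 / eps   # luminosity shrinks as the emittance grows
    dN = -SIGMA * lumi           # burn-off losses from collisions
    deps = eps / TAU_IBS         # exponential IBS emittance growth
    return [dN, deps]

sol = solve_ivp(rhs, (0.0, 10 * 3600.0), [2.0e8, 1.5e-6], max_step=600.0)
print(sol.y[:, -1])   # bunch intensity and emittance after a 10 h fill
```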
In this work, 2016 data from beam-loss monitors, in combination with the luminosity and the loss rate of the beam intensity, are used to determine the cross section of proton-lead collisions at a centre-of-mass energy of 8.16 TeV. Beam-loss monitors that mainly detect beam losses not caused by the collision process itself are used to determine the total cross section via regression. The analysis of the 2016 data resulted in a total cross section of σ = (2.32 ± 0.01 (stat.) ± 0.20 (sys.)) b. This corresponds approximately to a hadronic cross section of σ(had) = (2.24 ± 0.01 (stat.) ± 0.21 (sys.)) b, which deviates by only 5.7 % from the theoretical value σ(had) = (2.12 ± 0.01) b.
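The regression idea can be shown on toy data: collision-induced losses obey dN/dt = -σL, so the slope of loss rate versus instantaneous luminosity estimates σ. A sketch with synthetic numbers, not the 2016 monitor data:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 2.32e-28                     # m^2 (2.32 b), used to build toy data
lumi = rng.uniform(0.5e31, 1.2e31, 200)   # toy luminosity per time bin, m^-2 s^-1
loss = sigma_true * lumi + rng.normal(0, 100.0, 200)  # toy loss rates, s^-1

sigma_fit = np.polyfit(lumi, loss, 1)[0]  # slope of the regression = sigma
print(f"fitted cross section: {sigma_fit / 1e-28:.2f} b")
```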
The simulation code for the beam evolution is also used to estimate the integrated luminosity of a future one-month run with proton-lead collisions. The study shows that the luminosity delivered to the ATLAS and CMS experiments can increase from 15/nb per day in 2016 to 30/nb per day, a significant increase in performance. This operation, however, requires the use of the TCL collimators to protect the dispersion suppressors at ATLAS and CMS from collision fragments.
This work also gives an outlook on the expected luminosity production in proton-nucleus operation using ion species lighter than lead. For example, a change from proton-lead to proton-argon collisions would increase the monthly integrated luminosity from 0.8/nb to 9.4/nb in ATLAS and CMS. This is an increase of one order of magnitude and approximately a doubling of the integrated nucleon-nucleon luminosity: since the nucleon-nucleon luminosity of a proton-nucleus collider scales with the mass number of the ion, the factor of roughly 12 in ion luminosity corresponds to a factor of 9.4 × 40 / (0.8 × 208) ≈ 2.3 for nucleon-nucleon collisions. A test operation with proton-oxygen collisions may take place in 2023; it would last only a few days and run at low luminosity. The LHCf experiment (LHCb experiment) would achieve the desired integrated luminosity of 1.5/nb (2/nb) within 70 h (35 h) of beam time.
Groundwater is the largest source of accessible freshwater; its dynamics have changed significantly due to human withdrawals and are projected to change further as a result of climate change. The pumping of groundwater has led to lowered water tables, decreased base flow, and depletion.
Global hydrological models (GHMs) are used to simulate the global freshwater cycle and to assess the impacts of changes in climate and human freshwater use. Currently, groundwater is commonly represented by a bucket-like linear storage component in these models. Bucket models, however, cannot provide information on the location of the groundwater table. Due to this limitation, they can only simulate groundwater discharge to surface water bodies, not recharge from surface water to groundwater, and they calculate no lateral or vertical groundwater flow among grid cells. This may, for instance, lead to an underestimation of groundwater resources in semiarid areas, where groundwater is often replenished by surface water. To overcome these limitations, it is necessary to replace the linear groundwater model in GHMs with a hydraulic head gradient-based groundwater flow model.
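To illustrate the conceptual difference, a gradient-based model computes flows between cells from hydraulic head differences via Darcy's law. A toy one-dimensional steady-state sketch (grid, transmissivity and recharge are invented and unrelated to G3M's actual numerics):

```python
import numpy as np

T = 1e-2     # transmissivity, m^2/s (invented)
DX = 1000.0  # cell size, m
R = 1e-9     # diffuse recharge, m/s (~30 mm/yr, invented)
head = np.zeros(50)  # boundary cells held at river stage (0 m)

# Jacobi iteration toward the steady state of T * h'' + R = 0:
# each cell balances its neighbours' heads plus the local recharge source,
# so water flows down the head gradient toward the river boundaries.
for _ in range(20000):
    head[1:-1] = 0.5 * (head[:-2] + head[2:]) + R * DX**2 / (2 * T)

print(f"water-table mound at the divide: {head.max():.1f} m")  # ~30 m
```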
This thesis presents the newly developed global groundwater model G3M and its coupling to the GHM WaterGAP, spanning over 70,000 lines of newly developed code. The development and validation of the modeling software are discussed along with numerical challenges. Based on the new software, a global natural-equilibrium groundwater model is presented that shows better agreement with observations than previous models. Groundwater discharge to rivers is found to be the globally dominant flow component, compared with flows to other surface water bodies and lateral flows. Furthermore, first global maps of the distribution of gaining and losing surface water bodies are presented.
To determine the uncertainty in the model outcomes, a sensitivity study is conducted using an innovative approach: a global sensitivity analysis applied to a computationally complex model. First global maps of spatially distributed parameter sensitivities are presented. The results indicate that globally simulated hydraulic heads are about equally sensitive to hydraulic conductivity, groundwater recharge and surface water body elevation, even though parameter sensitivities vary regionally.
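As a sketch of a cheap global sensitivity method of this kind, the following computes Morris-style elementary effects for the three parameters named above, with a trivial placeholder standing in for the expensive groundwater model (illustrative only, not the method or code used in the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
NAMES = ["hydraulic conductivity", "groundwater recharge", "surface water elevation"]

def model(x):
    """Placeholder for one (expensive) model run returning, e.g., mean head."""
    return 0.8 * x[0] + 0.7 * x[1] + 0.9 * x[2]

def elementary_effects(f, n_params, n_base=50, delta=0.1):
    """Simplified radial elementary effects: perturb one parameter at a
    time from random base points in the unit hypercube, average |effect|."""
    mu_star = np.zeros(n_params)
    for _ in range(n_base):
        x = rng.uniform(0.0, 1.0 - delta, n_params)
        y0 = f(x)
        for i in range(n_params):
            x_i = x.copy()
            x_i[i] += delta
            mu_star[i] += abs(f(x_i) - y0) / delta
    return mu_star / n_base

for name, mu in zip(NAMES, elementary_effects(model, 3)):
    print(f"{name}: {mu:.2f}")
```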
A high-resolution model of New Zealand is developed to better understand the uncertainties connected to the spatial resolution of the global model. This thesis finds that a new understanding of how such models can be evaluated is necessary, and that simply increasing the spatial resolution does not improve model performance when compared to observations.
Alongside the assessment of the natural equilibrium, the concept of a fully coupled transient model as an integrated storage component, replacing the former storage model in the hydrological model WaterGAP, is discussed. First results reveal that the model shows a reasonable response to seasonal variability, although it contains persistent head trends leading to global overestimates of water table depth due to an incomplete coupling. Nonetheless, WaterGAP-G3M is already able to show plausible long-term storage trends for areas known to be affected by groundwater depletion. In comparison with two established regional models of the Central Valley, the coupled model shows highly promising simulations of storage declines.