Standard biorelevant media reflect the average gastrointestinal (GI) physiology in healthy volunteers. The use of biorelevant media in in vitro experiments has become an important strategy to predict drug behaviour in vivo and is often combined with in silico tools in order to simulate drug plasma profiles over time. In addition to the healthy population, the effects of disease state or co-administration of other drugs on plasma profiles must be considered to assure drug efficacy and safety. Thus, there is a need for a more accurate representation of the human GI physiology when it is altered by disease or co-administered drugs in in vitro dissolution experiments.
This thesis focused on the development of biorelevant media and dissolution tests reflecting GI physiology under circumstances where the gastric pH is elevated. Diseases linked to an elevated gastric pH include hypochlorhydria and achlorhydria, but today treatment with acid-reducing agents (ARAs) is the single greatest cause of elevated gastric pH. pH-dependent drug-drug interactions (DDIs) with ARAs are frequent, as ARAs are prescribed for a wide range of diseases alongside a variety of other drugs. Since the drugs currently on the market are often poorly soluble and ionisable, their dissolution depends strongly on the pH of the GI tract, especially the gastric pH.
The thesis research consisted of several steps. In the first step, physiological changes in the human GI tract during therapy with ARAs were identified. The parameters of the standard biorelevant gastric medium FaSSGF were then adjusted to these changes to reflect the impact of ARA co-administration on gastric physiology. To assess the potential extent of this impact, a pair of biorelevant media was introduced, the ARA pH 4 and pH 6 media, of which one reflects a lesser and the other a stronger impact of ARAs.
In the second step these ARA media were implemented in in vitro dissolution set-ups.
The dissolution of poorly soluble ionisable drugs was assessed using one-stage, two-stage and transfer model set-ups, as well as the more advanced in vitro system TIM-1. Comparing results from set-ups using the standard low-pH gastric biorelevant medium FaSSGF (pH 1.6 or 2) with the same set-ups using the ARA pH 4 and pH 6 media showed that, when the gastric pH is elevated, the rate and extent of dissolution decrease for the weakly basic compounds PSWB 001 and dipyridamole and increase for the weakly acidic compound raltegravir potassium. Owing to their different physicochemical properties, the extent to which the physiological changes during ARA therapy (i.e., whether the ARA pH 4 or pH 6 medium is selected) affected dissolution varied among the model drugs. Thus, the bracketing approach, which considers the range of possible impacts of ARA co-administration on drug dissolution, was confirmed as best practice in assessing the impact of ARAs.
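The opposite pH sensitivity of weakly basic and weakly acidic drugs follows directly from the Henderson-Hasselbalch relation for total solubility. The sketch below illustrates the effect with hypothetical pKa and intrinsic-solubility values, not measured data from the thesis:

```python
# Illustrative sketch: pH-dependent total solubility of monoprotic ionisable
# drugs via the Henderson-Hasselbalch relation. All pKa and intrinsic-
# solubility (S0) values are hypothetical placeholders.

def solubility_weak_base(pH, pKa, S0):
    """Total solubility of a monoprotic weak base (S0 = intrinsic solubility)."""
    return S0 * (1 + 10 ** (pKa - pH))

def solubility_weak_acid(pH, pKa, S0):
    """Total solubility of a monoprotic weak acid."""
    return S0 * (1 + 10 ** (pH - pKa))

# Gastric pH 2 (FaSSGF-like) vs pH 6 (strong ARA effect):
for pH in (2.0, 6.0):
    base = solubility_weak_base(pH, pKa=6.4, S0=0.005)  # hypothetical weak base
    acid = solubility_weak_acid(pH, pKa=6.7, S0=0.040)  # hypothetical weak acid
    print(f"pH {pH}: weak base {base:.3f} mg/mL, weak acid {acid:.3f} mg/mL")
```

Raising the gastric pH from 2 to 6 collapses the solubility of the base by orders of magnitude while modestly increasing that of the acid, which mirrors the opposing dissolution trends observed for the model compounds.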
In the third step, dissolution data from the in vitro experiments with ARA media were implemented in in silico models. Predictions using various modelling approaches in the Simcyp™ Simulator (minimal and full PBPK models, dissolution input via DRM and DLM) successfully bracketed in vivo data on drug administration during ARA therapy and correctly predicted an overall decrease in plasma concentration for the two model weakly basic compounds and an increase in plasma concentration for the model weakly acidic compound.
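The qualitative direction of such predictions can be illustrated with a minimal one-compartment model with first-order absorption (the Bateman function). This is a didactic sketch with hypothetical parameters, not the Simcyp PBPK models used in the thesis; the reduced absorption-rate constant stands in for slower dissolution of a weak base at elevated gastric pH:

```python
import math

# Minimal one-compartment PK sketch with first-order absorption (Bateman
# function). A lower ka (slower drug input, e.g. due to reduced dissolution
# of a weak base during ARA co-therapy) depresses the plasma profile.
# All parameter values are hypothetical.

def plasma_conc(t, dose=100.0, F=0.9, V=50.0, ka=1.5, ke=0.2):
    """Plasma concentration (mg/L) at time t (h) after an oral dose (mg)."""
    return F * dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

times = [0.5 * i for i in range(25)]  # 0-12 h
cmax_low_pH = max(plasma_conc(t, ka=1.5) for t in times)  # fast dissolution
cmax_ara = max(plasma_conc(t, ka=0.3) for t in times)     # slower input under ARA
print(f"Cmax without ARA: {cmax_low_pH:.2f} mg/L, with ARA: {cmax_ara:.2f} mg/L")
```

Even this toy model reproduces the direction of the effect: slower input lowers and delays the peak plasma concentration of the weak base.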
In all assessed scenarios, the ARA methods proved to be an essential part of evaluating and predicting the impact of ARAs on drug pharmacokinetics, and appropriately predicted the extent of a possible impact of ARAs on the drug plasma profiles. Thus, the ARA biorelevant media and dissolution tests were demonstrated to be valuable tools reflecting administration of drugs when the gastric pH is elevated and able to predict the impact of ARA therapy on drug administration.
The ability to evaluate the impact of human (patho)physiology on drug behaviour in the gastrointestinal tract is of great importance, as GI conditions play a significant role in drug release and absorption. Thus, there is great interest on the part of the pharmaceutical industry and regulatory agencies in developing best practices in this field, especially for pH-dependent DDIs. The media and dissolution tests developed in this thesis are biorelevant methods appropriate for evaluating the impact of elevated gastric pH on drug efficacy and safety. Used as a risk assessment tool, in connection with evaluation of the efficacy window and potential toxicity, such methods may increase confidence in decisions as to whether a pH effect will occur and whether it is relevant, prior to conducting clinical studies. They may also enable changes in inclusion/exclusion criteria when recruiting for large-scale efficacy trials. In fact, the biopharmaceutic approach to drug development is becoming standard practice on a number of fronts, including metabolic DDIs and renal and hepatic insufficiency, supporting decision-making processes and possibly even waiving certain types of clinical studies.
...
Resistant microbes are a growing concern: it is estimated that about 33,000 people die each year in Europe from infections caused by multidrug-resistant bacteria (ECDC, 2018, https://www.ecdc.europa.eu/). Bacteria can acquire resistance against toxic compounds via different mechanisms, and intrinsic active efflux is one of the first mechanisms deployed by bacterial cells. The membrane-localized efflux pumps catalysing this reaction extract toxic compounds from the interior of the cell and transport them to the outside, thereby maintaining sub-lethal toxin levels in the cytoplasm, periplasm and membranes. The Gram-negative three-component efflux pumps analysed in this study are composed of an inner membrane protein belonging to the Resistance-Nodulation-cell Division (RND) superfamily, an Outer Membrane Factor (OMF) protein, and a Membrane Fusion Protein (MFP) that connects the two aforementioned components into an active efflux pump. The pumps described in this work, AcrAB-TolC and EmrAB-TolC, are drug efflux pumps belonging to the RND and MFS superfamilies, respectively, while CusCBA is an efflux pump of the RND heavy metal efflux family. Another efflux pump, CopA, which was used as a model for the design of an in vitro assay for silver ion transport studies, belongs to the P-type ATPase superfamily. All pumps analysed in this study are part of the resistance system of Escherichia coli, a highly clinically relevant pathogen.
In order to examine the AcrAB-TolC, CopA and CusA efflux pumps, the individual components were separately produced in E. coli, purified to monodispersity and reconstituted into large unilamellar vesicles (LUVs). Optimized production procedures and conditions for efficient reconstitution were established in this study. The activity of AcrB in LUVs was detected by fluorescence quenching of the dye 8-hydroxy-1,3,6-pyrenetrisulfonate (pyranine), which is incorporated inside the proteoliposomes and is sensitive to pH changes in its surroundings. An inactive AcrB variant with a substitution in the proton relay network, D407N, showed no activity in proteoliposomes, matching the measurements in empty liposomes. Co-reconstitution of AcrA with AcrB D407N proteoliposomes did not restore activity. To test the assembly of the AcrAB-TolC pump from its individual components, an in vitro assay was established in which complex assembly was probed with AcrAB- and TolC-containing liposomes. These experiments showed putative AcrAB-TolC formation in the presence or absence of a pump substrate, taurocholate, as well as in the presence of the pump inhibitor MBX3132. The assembly appeared stable over time, and the results were invariant to the presence or absence of a pH gradient across the AcrAB-containing membrane.
After determination of the ATPase activity of the P-type ATPase CopA in detergent micelles, the protein was reconstituted into LUVs. Quenching of the Ag+-sensitive dye Phen Green SK (PGSK), present on the inside of the CopA-containing proteoliposomes, was observed in the presence of ATP and Ag+. Under the same conditions but in the absence of Ag+ ions, quenching was reduced by 80% after 300 seconds. No PGSK quenching was observed in control liposomes in the presence of ATP and Ag+. The additional presence of sodium azide led to only a minimal reduction of the PGSK quenching, as expected since sodium azide is not an inhibitor of P-type ATPases, but the quenching rate was similar to that of the same experimental condition with control liposomes.
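Transport activity in such quenching assays is typically quantified from the time course of the fluorescence signal. The sketch below shows one common way to extract a rate constant, assuming pseudo-first-order kinetics; the trace, parameters and the kinetic model itself are illustrative assumptions, not data or methods from this study:

```python
import math

# Hypothetical sketch: extracting a transport rate from a fluorescence-
# quenching time course (e.g. PGSK quenching by Ag+ influx). Assuming
# pseudo-first-order kinetics F(t) = F_inf + (F0 - F_inf) * exp(-k * t),
# k follows from a log-linear least-squares fit. The trace is synthetic.

def fit_rate(times, F, F_inf):
    """Estimate k from ln(F(t) - F_inf) = ln(F0 - F_inf) - k * t."""
    y = [math.log(f - F_inf) for f in F]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(y) / n
    slope = sum((t - t_mean) * (yi - y_mean) for t, yi in zip(times, y)) / \
            sum((t - t_mean) ** 2 for t in times)
    return -slope  # rate constant k, in 1/s

# Synthetic trace: F0 = 1.0, F_inf = 0.2, k = 0.01 1/s, sampled every 30 s
times = [30 * i for i in range(11)]
trace = [0.2 + 0.8 * math.exp(-0.01 * t) for t in times]
print(f"fitted k = {fit_rate(times, trace, 0.2):.4f} 1/s")
```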
The RND superfamily member CusA, as part of the tripartite CusCBA efflux pump, has been proposed to sequester Ag+ or Cu+ from either the cytoplasmic or the periplasmic side of the inner membrane. Periplasmic transport of silver ions was implied by an in vitro assay in which quenching of the pH-sensitive dye 9-amino-6-chloro-2-methoxyacridine (ACMA) indicated acidification of the lumen of CusA-containing proteoliposomes when an inwardly directed pH gradient was imposed. The same experiment with the CusA D405N variant, previously reported to be inactive, also led to ACMA quenching, although at a slightly lower rate. Under an inwardly directed pH gradient and a membrane potential (negative inside), CusA-containing proteoliposomes showed strong quenching of the incorporated PGSK dye, suggesting strong Ag+ influx.
The Major Facilitator Superfamily (MFS)-type EmrAB-TolC pump has a structural setup analogous to that of the RND-type AcrAB-TolC pump. To examine the efflux of one of its substrates, carbonyl cyanide m-chlorophenylhydrazone (CCCP), a plate-based susceptibility assay was used. The presence of the EmrAB-TolC pump confers lower susceptibility to CCCP in E. coli compared to cells not expressing the pump or cells expressing only the MFS component, indicating that EmrAB-TolC extrudes CCCP.
The work done in this study opens up a path towards the investigation of drug and metal resistance in vitro. The methodologies for obtaining proteoliposomal samples of multicomponent efflux pumps and for the subsequent measurement of drug/metal ion and H+ fluxes, as well as for the determination of pump assembly, are crucial for future research on pump catalysis and transport kinetics. The in vivo drug-plate assays performed in this work provide initial insights for future investigations of the drug susceptibility of E. coli expressing MFS-type tripartite efflux pumps.
Terahertz (THz) technology is an emerging field concerned with the radiation between the microwave and far-infrared regions, where electronic and photonic technologies merge. THz generation and sensing technologies should fill the gap between photonics and electronics, defined as the region where THz generation power and sensing capabilities are at a low technology readiness level (TRL). Field-effect transistors with integrated antennas were suggested as THz detectors in the 1990s by M. Dyakonov and M. Shur, from which the development of FET-based detectors began. In this work, various FET technologies are presented, such as CMOS, AlGaN/GaN, and graphene-based material systems, together with their further sensitivity enhancement towards the performance of the well-developed Schottky-diode-based THz sensing technology. The FET-based detectors presented here were explored over a wide frequency range from 0.1 THz up to 5 THz in narrowband and broadband configurations.
For the proper implementation of THz detectors, well-defined characterization is of high importance. Therefore, this work reviews the characterization methods, establishes definitions of the detector parameters, and summarizes state-of-the-art THz detectors. The electrical, optical, and cryogenic characterization techniques are also presented, together with the best results obtained in developing these methods, namely graphene FET stabilization, low-power THz source characterization for detector calibration, and technology development for cryogenic detection.
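The central figure of merit quoted throughout this work, the noise-equivalent power (NEP), relates the detector's noise voltage density to its responsivity. For an unbiased FET channel, the noise floor is set by thermal (Johnson-Nyquist) noise; the sketch below shows this standard calculation with illustrative numbers, not measured device parameters:

```python
import math

# Standard NEP calculation for an unbiased FET-based THz detector:
# NEP = thermal noise voltage density of the channel / voltage responsivity.
# The channel resistance and responsivity below are illustrative values.

k_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_density(R_ohm, T=293.0):
    """Thermal noise voltage spectral density, V/sqrt(Hz)."""
    return math.sqrt(4 * k_B * T * R_ohm)

def nep(responsivity_V_per_W, R_ohm, T=293.0):
    """Noise-equivalent power, W/sqrt(Hz)."""
    return johnson_noise_density(R_ohm, T) / responsivity_V_per_W

# Example: 5 kOhm channel resistance and 300 V/W optical responsivity
value = nep(responsivity_V_per_W=300.0, R_ohm=5e3)
print(f"NEP ~ {value * 1e12:.1f} pW/sqrt(Hz)")
```

With these placeholder numbers the sketch lands in the few-tens-of-pW/√Hz regime reported for the best broadband detectors in this work; improving either responsivity or channel resistance directly lowers the NEP.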
Following the discussion of detector characterization, a wide range of THz applications is presented, tested during the four years of this Ph.D. project and conducted within the ITN CELTA project of the HORIZON 2020 programme. The studies began with spectroscopy and imaging applications and later developed towards hyperspectral imaging and even passive imaging of human-body THz radiation. Both single-pixel detectors and multi-pixel arrays are covered in this work.
The conducted research shows that FET-based detectors can be used for spectroscopy applications or be easily adapted to the relevant frequency range. The state-of-the-art detectors considered in this work reach a resonant NEP below 20 pW/√Hz at 0.3 THz and 0.5 THz, as well as a cross-sectional NEP of 404 pW/√Hz at 4.75 THz. The broadband detectors show an NEP as low as 25 pW/√Hz around 0.6 THz for the best AlGaN/GaN design and 25 pW/√Hz around 1 THz for the best CMOS design. As one of the most promising applications, metamaterial characterization was tested using the most sensitive devices. Furthermore, one of the single-pixel devices and a multi-pixel array were tested as an engineering solution for the radio astronomy instrument GREAT on the stratospheric observatory SOFIA. The exploration of the autocorrelation technique using FET-based devices shows the possibility of employing such detectors for direct detection of THz pulses without an interferometric measurement setup.
This work also considers imaging applications, including near-field and far-field visualization solutions. A considerable milestone for the theory of FET technology was achieved when scanning near-field microscopy led to the visualization of plasma (or carrier density) waves in a graphene FET channel. Another important milestone for THz technology was achieved when a 3D scan of a mobile phone was performed in far-field imaging mode. Even though the imaging was done through the phone's plastic cover, the image displayed high accuracy and good feature recognition, bringing FET-based detector technology closer to practical security applications. In parallel, multi-pixel array testing was carried out on 6x7-pixel arrays implemented in configurable-size aperture and imaging configurations. The configurable aperture size simplified the detector focusing procedure and provided a better fit to the beam size of the incident radiation. The imaging was tested with various THz sources and compared to the TeraSense 16x16-pixel array; the experimental results show a clear advantage of the developed multi-pixel array over the commercial technology.
Furthermore, two ultra-low-power applications were successfully tested. Hyperspectral THz imaging, tested with a specially developed dual frequency comb and our detector system at 300 GHz with 9 spectral lines, led to outstanding imaging results on various materials. Passive imaging of human-body radiation was conducted using the most sensitive broadband CMOS detector with a log-spiral antenna working in the 0.1-1.5 THz range and reaching an optical NEP of 42 pW/√Hz. The NETD of this device reaches 2.1 K, well below the roughly 10 K contrast between the human body and room temperature, and thus meets the performance requirement for passive room-temperature imaging of human-body radiation. This experiment opened a completely new field, previously explored only with multiplier-chain-based or thermal detectors.
...
Plastics contain a complex mixture of chemicals, including polymers, additives, starting substances and side-products of processing. These plastic chemicals are prone to leach into packaged goods, in the case of food contact materials (FCMs), or into the natural environment, in the case of plastic debris. Thus, plastics represent a source of chemical exposure for humans and wildlife alike. While it is widely known that individual plastic chemicals, such as bisphenol A and phthalates, are hazardous, little is known about the overall chemical composition and toxicity of plastics. When fragmented into smaller particles, referred to as microplastics (< 5 mm), the plastic itself can be ingested by many species. It is well established that microplastic ingestion can have negative consequences for a wide range of organisms, including invertebrates, but the contribution of plastic chemicals to the toxicity of microplastics is unclear.
Given the above, the present thesis aimed at a comprehensive toxicological, ecotoxicological and chemical characterization of everyday plastics. For a comparative evaluation, 77 plastic products were selected, covering 16 material types (e.g., polyethylene) made from petroleum or renewable feedstocks. These products included biodegradable products, FCMs and non-FCMs, as well as both raw materials and final products. In the first two studies, the chemical mixtures contained in the 77 products were extracted with methanol, and the extracts were analyzed in a set of four in vitro bioassays and by non-target high-resolution gas or liquid chromatography mass spectrometry. Since exposure only occurs if chemicals actually leach under realistic conditions, migration experiments with water were conducted in a third study for 24 of the 77 products. The aqueous migrates were assessed in the same way as the methanolic extracts. In a fourth study, the freshwater invertebrate Daphnia magna was exposed chronically to microplastics made of polyvinyl chloride (PVC), polyurethane (PUR) and polylactic acid (PLA) to investigate the contribution of chemicals to microplastic toxicity.
The experimental findings demonstrate that a wide variety of chemicals is present in plastics. A single plastic product can contain up to several thousand chemical features, most of which are unique to that product and at the same time unknown. The results also indicate that the majority of these chemical mixtures are toxic in vitro: 65% of the plastic extracts induced baseline toxicity and 42% an oxidative stress response, while 25% had an antiandrogenic and 6% an estrogenic activity. This implies that chemicals causing unspecific toxicity are more prevalent in plastics than those with endocrine effects. These chemicals can also leach from plastics under realistic conditions: between 17 and 8936 chemical features were detected in a single migrate sample, and all 24 tested migrates induced in vitro toxicity. This means that humans and wildlife can actually be exposed to toxic plastic chemicals under realistic conditions. Generally, each product has its individual toxicological and chemical fingerprint. Thus, neither the material type, feedstock, biodegradability nor food-contact suitability of a product can serve as a predictor of its toxicity, chemical composition or complexity. Likewise, this means that bio-based and biodegradable materials are not superior to their petroleum-based counterparts from a toxicological perspective, despite being promoted as sustainable alternatives to conventional plastics.
Moreover, the present thesis demonstrates that plastic chemicals can be the main driver of microplastic toxicity. Irregular microplastics made of PVC, PUR and PLA adversely affected life-history traits of D. magna in a polymer type- and endpoint-dependent manner at concentrations between 100 and 500 mg L-1, and with a higher efficiency than natural kaolin particles. While the toxicity of PVC was triggered by the chemicals contained in the material, the effects of PUR and PLA were induced by the physical properties of the particles.
In addition, in the fifth study, the results and observations made during this thesis were integrated inter- and transdisciplinarily with the perspectives of a social scientist and a product manufacturer. This revealed that knowledge about plastic ingredients is often concealed, lacking or not applicable in practice. This lack of transparency hinders the safety evaluation of plastic products as well as the choice and sale of the least toxic packaging material.
Overall, the present thesis highlights that the chemical safety of plastics and their bio-based and biodegradable alternatives is currently not ensured. Thus, chemicals require more consideration in the toxicity and risk assessment of plastics and microplastics. Product-specific and complex chemical compositions, including unknown compounds, pose a challenge here. Two essential steps towards non-toxic products are to increase transparency along the product life cycle and to reduce the chemical complexity of plastics by communication and regulation. The results of the present thesis indicate that products exist which do not contain toxic chemicals. These can serve to direct the design of safer plastics. Since toxicity and chemical complexity seem to increase with processing, the integration of toxicity testing during the production steps would further support the safe and sustainable production and use of plastic products.
This dissertation analyses the degrees and trajectories of financialisation in South-Eastern Europe. It modifies and applies an eclectic comparative framework for comparing degrees of financialisation across time and space on different levels. The thesis finds that from the turn of the century until the Great Financial Crisis of 2008, most South-Eastern European countries increased their degree of financialisation on the different levels, especially at the level of households, international financialisation and, in part, the financial sector. Financialisation of non-financial companies is barely existent. After the financial crisis, financialisation is shown to stagnate in the region. In a second step, the dissertation conducts three case studies on extreme cases: financial-sector financialisation in Bulgaria, international financialisation in Serbia, and non-financial-company and household financialisation in Croatia. Their trajectories are shown to be driven mainly by deregulation, changed practices of foreign banks, the privatisation of public goods and the liberalisation of capital controls. The dissertation serves to extend financialisation research geographically to a peripheral region of the Global North and to contribute to the discussion on comparative approaches to financialisation.
The role of orthographic knowledge for reading performance in German elementary school children
(2021)
Reading is crucial for successful participation in the modern world. However, 3-8% of children of elementary school age show reading difficulties (e.g., Moll et al., 2014), which can limit educational attainment and increase the risk of social and financial disadvantages (Valtin, 2017). It is therefore important to identify reading-relevant components (Tippelt & Schmidt-Hertha, 2018). In this context, especially phonological awareness (i.e., awareness of the sound structure of the language) and naming speed (i.e., fast and automatized retrieval of information) have been identified as significant components of reading skills (e.g., Georgiou et al., 2012; Landerl & Thaler, 2006; Vellutino, Fletcher, Snowling, & Scanlon, 2004). A further component of growing interest in recent research is orthographic knowledge. It comprises knowledge about the spelling of specific words (word-specific orthographic knowledge) and about legal letter patterns (general orthographic knowledge; Apel, 2011).
Previous research focused predominantly on the role of orthographic knowledge at the basic reading level, including word identification and word meaning (Conrad et al., 2013; Rothe et al., 2015). The relationship between orthographic knowledge and reading comprehension, the core objective of reading, which includes understanding the relationships between words within a sentence as well as building coherence between sentences (Perfetti et al., 2005), has by contrast scarcely been investigated. The first goal of this dissertation is therefore to remedy this by investigating the role of orthographic knowledge in higher reading processes (sentence and text level). The scarce body of research on children with reading difficulties provides a mixed pattern of results (e.g., Ise et al., 2014). This dissertation therefore aims to clarify the influence of orthographic knowledge at word, sentence and text level in children without and with reading difficulties.
A thorough understanding of reading-relevant components is also important for the conception of interventions aimed at improving individual reading performance in order to prevent school failure. One promising approach to help children overcome their reading difficulties is a text-fading-based reading training. In this procedure, the reading material is faded out letter by letter in reading direction (i.e., in German from left to right; Breznitz & Nevat, 2006). The aim of this manipulation is to prompt the individual to read faster than usual, resulting in improvements in reading rate and comprehension (e.g., Nagler et al., 2015). However, the mechanisms underlying these improvements are still unclear. Considering previous findings showing that orthographic skills influence training outcomes (Berninger et al., 1999) as well as word reading performance after a reading intervention (Stage et al., 2003), it seems plausible to include orthographic knowledge when investigating potential training effects. This dissertation therefore aims to investigate the predictive value of orthographic knowledge for comprehension performance during text-fading-based reading training.
To answer the first research question, two empirical studies were conducted (see Appendix A: Zarić et al., 2020 and Appendix B: Zarić & Nagler, 2021), which investigate the role of orthographic knowledge for reading at word, sentence and text level in German school children without and with reading difficulties. The study by Zarić et al. (2020) examines the incremental predictive value of both word-specific and general orthographic knowledge for reading, beyond the variance explained by general intelligence and phonological awareness. For this purpose, data from 66 German third-graders without reading difficulties were analyzed. Correlation and multiple regression analyses showed that word-specific and general orthographic knowledge contribute a unique, significant amount to the variance of reading comprehension at word, sentence and text level, over and above the variance explained by general intelligence and phonological awareness. To answer the question of whether word-specific and general orthographic knowledge also explain variance in children with poor reading proficiency, in addition to the established predictors phonological awareness and naming speed, data from 103 German third-graders with reading difficulties were analyzed in a second study (Zarić & Nagler, 2021). The analyses revealed that word-specific and general orthographic knowledge explain a unique, significant amount of reading variance at word and sentence level. At text level, these two components did not explain a significant amount of unique variance; here, only phonological awareness was a significant predictor. The results indicate that knowledge about the spelling of specific words (word-specific orthographic knowledge) and knowledge about legal letter patterns (general orthographic knowledge) contribute to reading comprehension at word level.
Following, for instance, the assumptions of the Lexical Quality Hypothesis (Perfetti & Hart, 2002), high-quality orthographic representations are considered important for higher reading processes such as comprehension.
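The incremental-variance logic of these analyses can be sketched as a hierarchical regression: fit a baseline model (intelligence + phonological awareness), add orthographic knowledge, and inspect the gain in explained variance (ΔR²). The sketch below uses simulated data and a plain least-squares fit, not the study's actual data or statistical software:

```python
import numpy as np

# Hypothetical sketch of the hierarchical-regression logic used to test the
# incremental predictive value of orthographic knowledge. Data are simulated:
# 'reading' is constructed with a genuine unique contribution of 'ortho'.

rng = np.random.default_rng(0)
n = 200
iq = rng.normal(size=n)      # general intelligence
phon = rng.normal(size=n)    # phonological awareness
ortho = rng.normal(size=n)   # orthographic knowledge
reading = 0.4 * iq + 0.4 * phon + 0.3 * ortho + rng.normal(scale=0.7, size=n)

def r_squared(predictors, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared([iq, phon], reading)          # step 1: baseline model
r2_full = r_squared([iq, phon, ortho], reading)   # step 2: add orthography
print(f"Delta R^2 for orthographic knowledge: {r2_full - r2_base:.3f}")
```

A positive ΔR² in step 2 corresponds to the "unique, significant amount of variance" attributed to orthographic knowledge over and above the baseline predictors (significance testing, e.g. an F-test on the R² change, is omitted here).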
...
B-cell acute lymphoblastic leukaemia (B-ALL) is characterized by the overproduction of lymphoblasts in the bone marrow (BM); it is the most common cancer in children while being comparatively uncommon in adults. Conversely, in chronic myeloid leukaemia (CML), 70% of cases occur in patients older than 50 years, making it uncommon in children. All CML cases and up to 3% of paediatric B-ALL (and 25% of adult B-ALL) cases are due to the fusion gene BCR-ABL1, which arises from a reciprocal translocation between chromosomes 9 and 22 and gives rise to the cytoplasmic, constitutively active oncogenic tyrosine kinase BCR-ABL1. The constitutively active BCR-ABL tyrosine kinase deregulates signal transduction pathways governing cell growth, proliferation and survival. It has been increasingly recognized over the last two decades that the bone marrow microenvironment (BMM) can mediate disease initiation (so far only in mice), progression, therapy resistance and relapse. In general, the BMM is a very complex arrangement of various cell types, such as osteoblasts, osteoclasts, endothelial cells, adipocytes, mesenchymal stromal cells, macrophages and several others. In addition, the BMM comprises multiple chemical and mechanical factors and extracellular matrix (ECM) proteins, which contribute to the BMM's features influencing leukaemia behaviour. Considering the incidence of B-ALL and CML in children and adults, respectively, we hypothesized that a young and/or an aged BMM might also play a previously unrecognized role in the aggressiveness of B-ALL and CML. We proposed that BM transduced with a BCR-ABL1-expressing retrovirus in the murine transduction/transplantation model of B-ALL, transplanted into young versus old recipient mice, would lead to a more aggressive disease in young mice, and similarly that CML would be more aggressive in old recipient mice.
Closely recapitulating the human incidence, induction of CML led to significantly shortened survival in old recipient mice. Conversely, induction of B-ALL led to shortened survival in young compared to old syngeneic mice, as well as in a xenotransplantation model. Among the highly heterogeneous composition of the BMM, we implicate young BM macrophages as a supportive niche for B-ALL cells. These results appeared to be largely due to soluble factors differentially secreted by young and old macrophages. We therefore hypothesized that the chemokine CXCL13, which has been demonstrated to play a role in B-cell migration and to act as a diagnostic marker in the cerebrospinal fluid of patients with neuroborreliosis, might be responsible for the observed phenotype. CXCL13 was more highly expressed in healthy and leukaemic young mice, as well as in conditioned medium of young macrophages. In a variety of in vitro experiments, CXCL13 was shown to significantly increase the proliferation and migration of leukaemia cells exposed to young macrophages, and this phenotype was rescued by a CXCL13-neutralizing antibody. The role of CXCL13 was also confirmed in vivo, since macrophage ablation led to prolonged survival in young mice and reduced CXCL13 levels. An additional mouse model, using CXCR5-deficient leukaemia cells, showed a significant prolongation of survival in young mice, confirming the importance of the CXCL13-CXCR5 axis in B-ALL. In line with our murine results, we found that macrophage and CXCL13 levels were higher in paediatric B-ALL patients than in adults. Consistent with our murine data, the expression level of CXCR5 may act as a prognostic marker in B-ALL, as well as a predictive marker for central nervous system relapse in human B-ALL. Overall, these findings show that a young BMM, and in particular its macrophages, influences B-ALL progression.
We specifically identified CXCL13, secreted by young macrophages, as a promoter of proliferation of B-ALL cells, influencing survival in B-ALL via CXCR5. The CXCR5-CXCL13 axis may be relevant in human B-ALL, and higher CXCR5 expression in human B-ALL may act as a predictive marker.
We provide extensions of the dual variational method for the nonlinear Helmholtz equation introduced by Evéquoz and Weth. In particular, we prove the existence of dual ground state solutions in the Sobolev-critical case, extend the dual method beyond the standard Stein-Tomas and Kenig-Ruiz-Sogge range, and generalize the method to sign-changing nonlinearities.
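For context, the prototypical problem treated by the dual variational method of Evéquoz and Weth has the following form (stated here as a sketch; the precise assumptions on $Q$, $p$, and the dimension $N$ in this thesis may differ):

```latex
% Nonlinear Helmholtz equation on R^N (N >= 3), frequency k > 0:
\[
  -\Delta u - k^{2} u = Q(x)\,\lvert u\rvert^{p-2}u
  \quad \text{in } \mathbb{R}^{N},
\]
% with the exponent p in the range accessible to the standard dual method,
% bounded below by the Stein-Tomas exponent and above by the Sobolev
% critical exponent:
\[
  \frac{2(N+1)}{N-1} \;\le\; p \;\le\; \frac{2N}{N-2}.
\]
```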
Despite major improvements in therapy, many B-cell non-Hodgkin's lymphoma (B-NHL) entities still have a poor prognosis, and new therapeutic options are urgently needed. This study therefore set out to investigate oncogenic signalling pathways in the two B-NHL entities mantle cell lymphoma (MCL) and diffuse large B-cell lymphoma (DLBCL) in order to define new potential therapeutic targets.
MCL cells overexpress the anti-apoptotic protein BCL-2, thereby evading apoptosis. With venetoclax, a first-in-class BCL-2-specific inhibitor was approved and achieved good response rates in MCL. However, some cases display intrinsic or acquired resistance to venetoclax. In order to improve therapy, this study aimed to identify genes whose knockout confers sensitivity or resistance towards venetoclax. To this end, a genome-wide CRISPR/Cas9-based loss-of-function screen was conducted in the MCL cell line Maver-1. The E3 ubiquitin ligase MARCH5 was identified as one of the top hits conferring sensitivity towards venetoclax upon its knockout. This finding was validated in a competitive growth assay including two more MCL cell lines, Jeko-1 and Mino. MARCH5 knockout also sensitised Jeko-1 cells towards venetoclax, even though this cell line was insensitive towards venetoclax in its wild-type form. BH3 profiling confirmed this finding by revealing an increased dependency of MARCH5-depleted cells on BCL-2. The sensitisation was found to be based on induction of apoptosis upon MARCH5 knockout, and to an even higher extent upon additional treatment of MARCH5-depleted cells with venetoclax. As already described for epithelial cancer entities, the BCL-2 family members MCL-1 and NOXA were upregulated in MCL cell lines upon MARCH5 knockout. This led to the hypothesis that MARCH5 is a potential regulator of intrinsic apoptosis with NOXA as a key component. A competitive growth assay with MARCH5/NOXA co-depleted cells revealed a partial reversion of the BCL-2 sensitisation compared to MARCH5 knockout alone. Furthermore, mass spectrometry-based methods were used to gain more insight into other cellular pathways and networks which might be regulated in a MARCH5-dependent manner. In an interactome analysis, proteins which regulate mitochondrial morphology, such as Drp-1, were identified as MARCH5 interactors. Besides this expected finding, interactions between MARCH5 and several members of the BCL-2 family, as well as a potential connection between MARCH5 and vesicular trafficking, were discovered. As expected, a ubiquitinome analysis of MARCH5-depleted cells revealed decreased levels of MCL-1 and NOXA ubiquitination. Additionally, a potential role of MARCH5 in the ubiquitination of several members of the cell cycle regulatory pathway was discovered. Based on the broad spectrum of cellular pathways which seem to be regulated in a MARCH5-dependent manner, it was hypothesised that MARCH5 primarily regulates BCL-2 family members, which in turn regulate intrinsic apoptosis and are additionally involved in the regulation of various other pathways.
In summary, this study provides insight into a MARCH5-dependent MCL-1/NOXA axis in MCL cells and its potential implications for related cellular processes.
In addition to the anti-apoptotic pathways described above, B-cell receptor (BCR) signalling is known to provide a pro-survival signal to both normal and malignant B-cells. The BCR signalling pathway is therefore a promising therapeutic target for B-cell malignancies. In order to gain more insight into the differential modes of BCR signalling in ABC- and GCB-DLBCL cells, the aim was to define genes/proteins displaying differential essentiality in ABC- and GCB-DLBCL cells. To this end, data sets from a CRISPR/Cas9-based loss-of-function screen were re-analysed. SASH3 was identified as a gene essential for GCB- but not for ABC-DLBCL cells. Since this protein is known to be involved in T-cell receptor (TCR) signalling, SASH3 was assumed to play a potential role in BCR signalling as well and was therefore investigated in more detail. A competitive growth assay confirmed that SASH3 knockout was toxic exclusively for GCB-DLBCL cell lines. An interactome analysis in ABC- and GCB-DLBCL cells revealed interactions between SASH3 and many components of the proximal BCR signalling pathway as well as several downstream signalling pathways such as the PI3K or the NF-κB pathway.
An integration of the interactome with data from the CRISPR/Cas9-based loss-of-function screen revealed differential essentiality of the SASH3-interacting proteins in ABC- and GCB-DLBCL cells. It was hypothesised that SASH3 might regulate PI3K signalling, on which GCB- but not ABC-DLBCL cells are known to depend. Disruption of the regulation of PI3K signalling could therefore be exclusively toxic to GCB-DLBCL cells.
Taken together, this study describes a subtype-specific dependency of GCB-DLBCL cells on SASH3. Furthermore, the SASH3 interactome has been investigated in B-cells for the first time, thereby highlighting a potential role in proximal BCR signalling and involvement in specific BCR-related downstream signalling pathways.
Using walls to navigate the room: egocentric representations of borders for spatial navigation
(2021)
Spatial navigation forms one of the core components of an animal's behavioural repertoire. Good navigational skills boost survival by allowing one to avoid predators, to search successfully for food in an unpredictable world, and to find a mating partner. As a consequence, the brain has dedicated many of its resources to the processing of spatial information. Decades of seminal work have revealed how the brain is able to form detailed representations of one's current position, and to use an internal cognitive map of the environment to traverse the local space. However, what is much less understood is how neural computations of position depend on distance information from salient external locations such as landmarks, and how these distal places are encoded in the brain.
The work in this thesis explores the role of one brain region in particular, the retrosplenial cortex (RSC), as a key area to implement distance computations in relation to distal landmarks. Previous research has shown that damage to the RSC results in losses of spatial memory and navigation ability, but its exact role in spatial cognition remains unclear. Initial electrophysiological recordings of single cells in the RSC during free exploration behaviour of the animal resulted in the discovery of a new population of neurons that robustly encode distance information towards nearby walls throughout the environment. Activity of these border cells was characterized by high firing rates near all boundaries of the arena that were available to the animal, and sensory manipulation experiments revealed that this activity persisted in the absence of direct visual or somatosensory detection of the wall.
It quickly became apparent that border cell activity was not only modulated by the distance to walls, but was also contingent on the direction the animal was facing relative to the boundary. Approximately 40% of neurons displayed significant selectivity to the direction of walls, mostly in the hemifield contralateral to the recorded hemisphere, such that a neuron in left RSC is active whenever a wall occupies proximal space on the right side of the animal. Using a cue-rotation paradigm, experiments initially showed that this egocentric direction information was invariant to the physical rotation of the arena. Yet this rotation elicited a corresponding shift in the preferred direction of local head-direction cells, as well as a rotation in the firing fields of spatially-tuned cells in RSC. As a consequence, position and direction encoding in RSC must be bound together, rotating in unison during the environmental manipulations, as information about allocentric boundary locations is integrated with head-direction signals to form egocentric border representations.
It is known that the RSC forms many anatomical connections with other parts of the brain that encode spatial information, like the hippocampus and parahippocampal areas. The next step was to establish the circuit mechanisms that allow RSC neurons to generate their activity with respect to the distance and direction of walls. A series of inactivation experiments revealed how RSC activity is interdependent with one of its communication partners, the medial entorhinal cortex (MEC). Together they form a wider functional network that encodes precise spatial information about borders, with information flowing from the MEC to RSC but not vice versa. While the conjunction between distance and heading direction relative to the outer walls was the main driver of neural activity in RSC, border cells displayed further behavioural correlates related to movement trajectories. Spiking activity in either hemisphere tended to precede turning behaviour on a short time-scale, such that border cells in the right RSC anticipated rightward turns ~300 ms into the future.
The interpretation of these results is that the RSC's primary role in spatial cognition is not necessarily at the early sensory processing stage, as suggested by previous studies. Instead, it is involved in computations related to the generation of motion plans, using spatial information that is processed in other brain areas to plan and execute future actions. One potential function of the RSC in this process could be to help the animal act correctly in relation to the nearby perimeter, such that border cells in one hemisphere are involved in the encoding of walls in the contralateral hemifield, after which the animal makes an ipsilateral turn to avoid collision. Together this supports the idea that the MEC→RSC pathway links the encoding of space and position in the hippocampal system with the brain's motor action systems, allowing animals to use walls as prominent landmarks to navigate the room.
Classical light microscopy is one of the main scientific tools for studying small structures. Microscopes and their technology and optics have been developed and improved over centuries; however, their resolution is ultimately limited by the diffraction of light, a consequence of its wave nature as described by Maxwell's equations. Hence, the nanoworld, often characterized by sub-100-nm structural sizes, is not accessible with classical far-field optics (apart from special x-ray laser concepts), since the lateral resolution scales with the wavelength.
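The scaling of lateral resolution with wavelength can be made concrete with Abbe's diffraction limit, d = λ / (2·NA). A minimal sketch (the wavelength and numerical aperture below are illustrative values, not taken from this work):

```python
def abbe_limit(wavelength_m: float, numerical_aperture: float) -> float:
    """Smallest resolvable lateral distance d = lambda / (2 * NA)."""
    return wavelength_m / (2.0 * numerical_aperture)

# Green light (550 nm) with a high-end oil-immersion objective (NA = 1.4):
d = abbe_limit(550e-9, 1.4)
print(f"Abbe limit: {d * 1e9:.0f} nm")  # ~196 nm, well above sub-100-nm structures
```

Even under these near-ideal far-field conditions the limit sits around 200 nm, which is why the sub-100-nm nanoworld requires near-field techniques such as the s-SNOM discussed below.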
It was not until the 20th century that various technologies emerged to circumvent the diffraction limit, including so-called near-field microscopy. Although conceptually based on Maxwell's long-known equations, it took a long time before the scientific community recognized its powerful capabilities and the first embodiments of near-field microscopes were developed. One representative of them is the scattering-type Scanning Near-field Optical Microscope (s-SNOM). It is a Scanning Probe Microscope (SPM) that enables imaging and spectroscopy from visible light frequencies down to even radio waves with a sub-100-nm resolution regardless of the wavelength used. This work reflects this wide spectral range, as it contains applications from near-infrared light down to deep THz/GHz radiation.
This thesis is subdivided into two parts. First, new experimental capabilities for the s-SNOM are demonstrated and evaluated in a more technical manner. Second, among other things, these capabilities are used to study various transport phenomena in solids, as already indicated in the title.
On the technical side, preliminary studies on the suitability of the qPlus sensor – a novel scanning probe technology – for near-field microscopy are presented.
The scanning head incorporating the qPlus sensor – named TRIBUS – was originally intended and built for ultra-high vacuum, low temperature, and high resolution applications. These are desirable environments and properties for sensitive near-field measurements as well. However, since its design was not planned for near-field measurements, several special technical and optical aspects have to be taken into account, among others the scanning tip design and a spring-suspended measurement head.
In addition, in this thesis field-effect transistors are used as THz detectors in an s-SNOM for the first time. Although THz s-SNOM is already an emerging technology, it still suffers from the requirement of sophisticated and specialized infrastructure on both the detector and laser side. Field-effect transistors offer an alternative that is flexible, cost-efficient, operates at room temperature, and is easy to handle. Here, their suitability for s-SNOM measurements, which in general require very sensitive and fast detectors, is evaluated.
In the scientific part of this thesis, electromagnetic surface waves on silver nanowires and the conductivity/charge carrier density in silicon are investigated. These are two entirely different transport phenomena, which already demonstrates the general versatility of the s-SNOM, as it can enter both fields. Silver nanowires are analysed by means of near-infrared radiation. Their plasmonic behaviour in this spectral region is studied, complementing simulations and studies in the literature performed on them using, for example, far-field optics.
Furthermore, the surface-wave imaging ability of the s-SNOM in the near-infrared regime is thoroughly investigated in this thesis. Mapping surface waves in the mid-infrared regime is widespread in the community; however, at much shorter wavelengths several additional aspects have to be considered, such as the smaller focal spot size.
After that, doped and photo-excited silicon substrates are investigated. As the characteristic frequencies of charge carriers in semiconductors – described by the plasma frequency and the Drude model – lie within the THz range, the THz s-SNOM is very well suited to probe their behaviour and to reveal contrasts, as has already been shown qualitatively in numerous literature reports. Here, photo-excitation makes it possible to set and tune the charge carrier density continuously.
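Why doped silicon responds in the THz range follows directly from the Drude plasma frequency, ω_p = sqrt(n e² / (ε₀ m*)). A minimal sketch (the carrier density and effective mass below are illustrative assumptions for n-type silicon, not values from this work):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge [C]
EPS0 = 8.8541878128e-12      # vacuum permittivity [F/m]
M_E = 9.1093837015e-31       # electron rest mass [kg]

def plasma_frequency_hz(n_cm3: float, m_eff: float) -> float:
    """Drude plasma frequency f_p = omega_p / (2*pi) for a carrier density
    n (in cm^-3) and an effective mass m_eff (in units of the electron mass)."""
    n_m3 = n_cm3 * 1e6  # convert cm^-3 -> m^-3
    omega_p = math.sqrt(n_m3 * E_CHARGE**2 / (EPS0 * m_eff * M_E))
    return omega_p / (2.0 * math.pi)

# n = 1e17 cm^-3 with a conductivity effective mass of ~0.26 m_e:
f_p = plasma_frequency_hz(1e17, 0.26)
print(f"plasma frequency: {f_p / 1e12:.1f} THz")  # ~5.6 THz
```

Varying the carrier density over typical doping or photo-excitation levels sweeps f_p through the low-THz window, which is exactly where the THz s-SNOM probes.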
Furthermore, the analysis of all silicon samples focuses on a quantitative extraction of the charge carrier densities and doping levels ...
This work comprises the investigation of four different biosynthesis gene clusters from Xenorhabdus. Xenorhabdus is an entomopathogenic bacterium that lives in mutualistic symbiosis with its Steinernema nematode host; together they infect and kill insect larvae. Xenorhabdus is well known for the production of so-called specialised metabolites, and many of these compounds are synthesised by non-ribosomal peptide synthetases (NRPSs) or NRPS-polyketide synthase (PKS) hybrids. These enzymes are organised in a modular manner and produce structurally very diverse molecules, often with the help of modifying domains and tailoring enzymes. In general, the genes involved in the biosynthesis are organised in so-called biosynthetic gene clusters (BGCs) in the genome of the producing strain. Exchanging the native promoter for an inducible promoter, e.g. PBAD, allows the targeted activation of the BGC and in turn the analysis of the biosynthesis product via LC-MS.
The first BGC investigated in this work is responsible for the biosynthesis of xenofuranones. Based on gene deletions, this work shows that the NRPS-like enzyme XfsA produces a carboxylated furanone intermediate which is subsequently decarboxylated by XfsB to yield xenofuranone B. The next step in xenofuranone biosynthesis is the O-methylation of xenofuranone B to yield xenofuranone A. A comparative proteomics approach allowed the identification of four methyltransferase candidates, and subsequent gene deletions confirmed one of the candidates to be responsible for the methylation of xenofuranone B. The proteome analysis was based on the comparison of X. szentirmaii WT and X. szentirmaii Δhfq, because distinct levels of the methylated xenofuranone A were observed when the xfs BGC was activated in either the WT or the Δhfq strain. Hfq is a global regulator whose deletion is associated with the downregulation of natural product biosynthesis in Xenorhabdus. The strong PBAD activation of the xfs BGC also allowed the detection of two novel xenofuranone derivatives which arise from incorporation of one 4-hydroxyphenylpyruvic acid as the first or second building block, respectively.
PBAD-based activation of the second BGC addressed in this work led to the detection of a novel metabolite, and compound purification allowed NMR-based structure elucidation. The molecule exhibits two pyrrolizidine moieties and was named pyrrolizwilline (pyrrolizidine + twin (German: “Zwilling”)). The BGC comprises seven genes, and single gene deletions as well as heterologous expression in E. coli and NRPS engineering were conducted to investigate the biosynthesis. The first two genes, xhpA and xhpB, encode a bimodular NRPS and a monooxygenase which synthesise a pyrrolizixenamide-like structure, similar to PxaA and PxaB in pyrrolizixenamide biosynthesis. It is suggested that the acyl side chain incorporated by XhpA is removed by the α,β-hydrolase XhpG. The keto function is then reduced by two successive two-electron reductions catalysed by XhpC and XhpD. One of these two reduced pyrrolizidine units is most likely extended with glyoxylate prior to non-enzymatic dimerisation with the second pyrrolizidine moiety. To finally yield pyrrolizwilline, L-valine is incorporated, probably by the free-standing condensation domain XhpF.
The third BGC investigated is responsible for the production of a tripeptide composed of β-D-homoserine, α-hydroxyglycine and L-valine and is referred to as glyoxpeptide. This work demonstrates that the previously observed glyoxpeptide derivative is derived from glycerol present in the culture medium. Furthermore, this work shows that the monooxygenase domain, which is found in an unusual position between motifs A8 and A9 within the adenylation domain, is responsible for the α-hydroxylation of glycine. It is suggested that the α-hydroxylation of glycine renders the tripeptide prone to hydrolysis via hemiacetal formation. Hence, the XgsC_MonoOx domain might be an interesting candidate for further NRPS engineering.
The fourth BGC addressed is responsible for the production of xildivalines, and this work describes two additional derivatives which are detected only when the promoter is exchanged and activated in the X. hominickii WT strain but not in X. hominickii Δhfq. Deletion of the methyltransferase-encoding gene xisE results in the production of non-methylated xildivalines. It remains to be determined when the N-methylation of L-valine takes place; the methyltransferase could act on the NRPS-released product but also during assembly. The peptide deformylase is not involved in the proposed biosynthesis, as xildivaline production is detected in a ΔxisD strain. The PKS XisB features two adjacent, so-called tandem T domains. Inactivation of the first or the second T domain by point mutation causes decreased production titres of the detected xildivalines in the respective mutant strain when compared to the wild type.
The oleochemical and petrochemical industries provide diverse chemicals used in personal care products, food and pharmaceutical industries or as fuels, oils, polymers and others. However, fossil resources are dwindling and concerns about these conventional production methods have risen due to their strong negative impact on the environment and contribution to climate change.
Therefore, alternative, sustainable and environmentally friendly production methods for oleochemical compounds such as fatty acids, fatty alcohols, hydroxy fatty acids and dicarboxylic acids are desired. Biotechnological production by engineered microorganisms could fulfill these requirements. The concept of metabolic engineering, i.e. the modification of metabolic pathways of a host organism for increased production of a target compound, is a widely used strategy in biotechnology to generate cell factories or chassis strains for robust, efficient and high-level production. In this work, the versatile model and industrial yeast Saccharomyces cerevisiae was manipulated by metabolic engineering strategies for increased production of the medium-chain fatty acid octanoic acid and for de novo production of the derived 8-hydroxyoctanoic acid.
Octanoic acid production was enabled via the fatty acid biosynthesis pathway by use of a mutated fatty acid synthase (FASRK) in a strain deficient in wild-type FAS. The yeast fatty acid synthase (FAS) consists of two polypeptides, α and β, which assemble into an α6β6 complex in a co-translational manner through interaction of the subunits. Because this step might be subject to cellular regulation, the α- and β-subunits of fatty acid synthase were fused to form a single-chain construct (fusFASRK), which displayed superior octanoic acid production compared with split FASRK. Thus, FASRK expression was identified as a limiting step of octanoic acid production. However, the octanoic acid-producing strains have a severe growth defect that is undesirable for biotechnological applications and could lead to lower production titers. One reason is the strong inhibitory effect of octanoic acid. Another possibility is that the mutant FAS no longer produces enough essential long-chain fatty acids. To compensate for this, the mutated split and fused FAS variants were individually co-expressed in a strain harboring genomic wild-type FAS alleles. In addition, mutant and wild-type variants of fused and split FAS were co-expressed together in a FAS-deficient strain. However, both cases resulted in decreased octanoic acid titers, potentially due to physical and/or metabolic crosstalk of the FAS variants.
Fatty acid biosynthesis relies on cytosolic acetyl-CoA for initiation and derived malonyl-CoA for elongation, and requires NADPH for reductive power. To increase production of octanoic acid, engineering strategies for increased acetyl-CoA and NADPH supply were investigated. First, the flux through the native cytosolic acetyl-CoA- and NADPH-providing pyruvate dehydrogenase bypass was enhanced by overexpression of the target genes ADH2, ALD6 and ACSL461P from Salmonella enterica, in combination or individually. Next, the acetyl-CoA-forming heterologous phosphoketolase/phosphotransacetylase pathway was expressed, and NADPH formation was increased by redirecting the flux of glucose-6-phosphate into the NADPH-producing oxidative branch of the pentose phosphate pathway. In particular, the flux through glycolysis and the pyruvate dehydrogenase bypass was reduced by downregulating the expression of the phosphoglucose isomerase PGI1 and deleting the acetaldehyde dehydrogenase ALD6. Glucose-6-phosphate was guided into the pentose phosphate pathway by overexpressing the glucose-6-phosphate dehydrogenase ZWF1. The first approach did not influence octanoic acid production, but the latter increased yields in the glucose consumption phase by 65%. However, combining the superior fusFASRK with acetyl-CoA and NADPH supply engineering strategies did not result in additive production effects, indicating that other limitations hinder high octanoic acid accumulation. Limitations could be caused in particular by the strong inhibitory effects of octanoic acid or by intrinsic limitations of the FASRK mutant. To expand the octanoic acid production platform towards other derived valuable oleochemical compounds, the de novo production of 8-hydroxyoctanoic acid was targeted. Since short- and medium-chain fatty acids have a strong inhibitory effect on Saccharomyces cerevisiae, the inhibitory effects of hydroxy fatty acids and dicarboxylic acids with eight or ten carbon atoms were compared and revealed only little or no growth impairment. Subsequently, the formation of 8-hydroxyoctanoic acid was targeted by terminal hydroxylation of externally supplied octanoic acid in a bioconversion. For that, three heterologous genes encoding cytochrome P450 enzymes and their cognate cytochrome P450 reductases were expressed, and 8-hydroxyoctanoic acid production was compared. In addition, the use of different carbon sources was compared.
...
For a very long time, drug research was governed by the paradigm of "one gene, one drug, one disease". More recently, however, this paradigm has been changing due to redundant functions and alternative, compensatory signalling patterns, which are particularly prevalent in cancer. The logical consequence can only be to consider multi-target strategies over single-target approaches. Because of the difficulty of achieving consistent biodistribution and pharmacokinetics with a combination of two individual agents, in this case BET and HDAC inhibitors, single molecules exhibiting multiple inhibitory activities were sought. Here, this was initially achieved by the simple conjugation of two different pharmacophores.
In total, four different ligands of this type were synthesised, and one of them, compound 14, showed very promising results. 14 combines the BET inhibitor JQ1 with the HDAC inhibitor CI994 and has an inhibitory effect against both BRD4 and HDAC proteins, as shown by DSF and nanoBRET assays. Moreover, in vitro assays in PDAC cells showed that 14 is an even more potent dual BET/HDAC inhibitor than the combination of JQ1 and CI994. While the effects of 14 on the BETi response gene MYC are quite similar to those of JQ1, the HDAC-inhibitory effects in particular are more sustained and enhanced, probably owing to a longer residence time of 14 on HDAC compared with CI994. This is evident from the high level of acetylated lysines of histone H3 in western blots. This altered expression behaviour had a major impact on cell growth and survival in all PDAC cell lines tested. Here, the superiority of 14 over simultaneous treatment of the cells with JQ1 and CI994 became very clear. When PDAC cells were treated with the dual inhibitor 14, cancer cell growth and survival were reduced more strongly than with either of the original molecules, regardless of whether these were administered individually or simultaneously. In addition, 14 was combined with gemcitabine, a well-tolerated chemotherapeutic that on its own shows only limited activity in PDAC. It turned out that the order in which the drugs were administered had a major influence on efficacy. The cell-cycle arrest induced by 14 prevents incorporation of gemcitabine into the DNA when 14 is administered before or simultaneously with gemcitabine. However, if treatment with 14 follows the administration of gemcitabine, the gemcitabine-induced S-phase arrest and replication stress are maintained.
Compared with most previous studies on dual BET/HDAC inhibitors, this is a major improvement, as previously there had been no significant difference between using a dual BET/HDAC inhibitor and combining two individual inhibitors.
As a proof of concept, the data support further efforts to develop additional dual BET/HDAC inhibitors. Therefore, two further generations of dual BET/HDAC inhibitors were developed, which, however, have not yet matched the properties of 14. The third generation in particular offers room for optimisation, so a potent dual inhibitor may still be found there. Should an approved dual BET/HDAC inhibitor emerge in the future, it is not unlikely that none of the BET-inhibiting structures used here will be employed, while the structure of the HDAC-inhibiting part may still be comparable. The reason is that HDAC inhibitors are mostly relatively simple in structure. As long as the most important element, the zinc-binding group, is present, the linker and the capping group appear to be secondary. The greater challenge will probably be finding the right BET inhibitor, and the options are already numerous.
In general, the concept of dual BET/HDAC inhibitors is extremely promising and worth pursuing further, mainly because of the good test results obtained with compound 14. With the help of this type of inhibitor, it may be possible in the future to increase the survival rate of PDAC patients, if not as a single agent then perhaps as an adjunct to chemotherapy. Moreover, the use of dual BET/HDAC inhibitors does not appear to be limited to the treatment of PDAC and may also be applicable to other cancer types. NMC, for example, is a subtype of poorly differentiated squamous cell carcinoma that is as rare as it is deadly, characterized by a fusion of the NUT gene with BRD4, which renders it potentially susceptible to BET inhibition. Indeed, 14 also showed a greater positive effect on the NMC cells tested than JQ1 or CI994 and, among other effects, induced the cells to differentiate. ...
High-energy astrophysics plays an increasingly important role in our understanding of the universe. On the one hand, this is due to ground-breaking observations, like the gravitational-wave detections of the LIGO and Virgo network or the black-hole shadow observations of the EHT collaboration. On the other hand, the field of numerical relativity has reached a level of sophistication that allows for realistic simulations including all four fundamental forces of nature. A prime example of how observations and theory complement each other can be seen in the studies following GW170817, the first detection of gravitational waves from a binary neutron-star merger. The same detection is also the chronological starting point of this Thesis. The plethora of information and constraints on nuclear physics derived from GW170817 in conjunction with theoretical computations will be presented in the first part of this Thesis. The second part goes beyond this detection and prepares for future observations, when the high-frequency postmerger signal will also become detectable. Specifically, signatures of a quark-hadron phase transition are discussed and the specific case of a delayed phase transition is analyzed in detail. Finally, the third part of this Thesis focuses on the inclusion of radiative transport in numerical astrophysics. In the context of binary neutron-star mergers, radiation in the form of neutrinos is crucial for realistic long-term simulations. Two methods for treating radiation are introduced: the approximate state-of-the-art two-moment method (M1) and the recently developed radiative Lattice-Boltzmann method. The latter promises to be more accurate than M1 at a comparable computational cost. Given that most methods for radiative transport are either inaccurate or computationally unfeasible, the derivation of this new method represents a novel and possibly paradigm-changing contribution to the accurate inclusion of radiation in numerical astrophysics.
A large group of aptamers are the guanosine triphosphate (GTP) aptamers. This group illustrates very strikingly how RNA uses different strategies to recognize the same ligand. The complete structure of the GTP class II aptamer is presented in the first publication. Interestingly, the structure features a stably protonated adenine below the GTP binding site. This was examined and characterized by a combination of further NMR and ITC experiments. The protonated base was found to have a pKa value that is shifted far from neutrality, and the protonation remains stable even in strongly basic buffers.
One kind of functional protonation is used by the cyclic dinucleotide (CDN)-binding riboswitches to bind two CDNs with similar affinity. c-di-GMP riboswitches have been described as regulatory units and their crystal structure has been solved. Mutation experiments showed that a G-to-A mutation at the Gα binding site altered the selectivity of the riboswitch: the mutant binds both c-di-GMP and cGAMP with similar binding affinities. Riboswitches that bind cGAMP have also been found in bacterial genomes, with the degree of promiscuity varying between them. The investigation of the binding mode and the associated promiscuity is described in the second publication. There it was shown that the riboswitches can bind both ligands only if, for the binding of c-di-GMP, the ligand-binding A is protonated. This protonation could likewise be characterized with further NMR and ITC experiments. Studying such a large RNA by NMR spectroscopy is challenging; here, use was made of the fact that the crystal structure was already known, although it did not reveal the protonation. This protonation, too, exhibits a pKa value shifted far from neutrality and is moreover stable at different pH values.
In the two examples investigated, two different kinds of protonation were demonstrated: a structural one and a functional one. The GTP class II aptamer uses protonation as the structural foundation of the ligand binding site: protonation of the adenine creates additional usable hydrogen bonds and thereby stabilizes the tertiary structure. In contrast, the promiscuous CDN riboswitches use protonation to bind different ligands, resulting in a shift of functionality. The regulatory benefit of this, however, is still unknown.
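As a back-of-the-envelope illustration of why a pKa shifted far from neutrality makes such a protonation stable, the protonated fraction of a nucleobase follows the standard Henderson-Hasselbalch relation (textbook acid-base chemistry; the numerical pKa in the comment is a hypothetical example, not a value from the thesis):

```latex
% Henderson-Hasselbalch relation for a protonated adenine AH+
\mathrm{pH} = \mathrm{p}K_\mathrm{a} + \log_{10}\frac{[\mathrm{A}]}{[\mathrm{AH}^+]}
\qquad\Longrightarrow\qquad
f_{\mathrm{AH}^+} = \frac{1}{1 + 10^{\,\mathrm{pH}-\mathrm{p}K_\mathrm{a}}}
% Example: for a hypothetical pKa of 9.5, even at pH 8.5 the base is
% still ~91 % protonated, consistent with stability in basic buffers.
```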
A promiscuous representative has also been described among the SAM riboswitches. SAM riboswitches are among the longest-known riboswitch classes, and to date the largest number of distinct classes is known for them. SAM is frequently used as a donor of functional groups, most commonly as a methyl group donor for the methylation of a range of different substrates (e.g. DNA, proteins, metabolites). This reaction produces SAH as a by-product. SAH is additionally cytotoxic, since it binds methyltransferases with high affinity and thereby inhibits this essential reaction. Tight control of the SAH concentration is therefore critical. SAM-binding riboswitches have an up to 1000-fold higher binding affinity for SAM compared to SAH. The description of a translational OFF riboswitch that binds SAM and SAH with similar affinity is therefore surprising, especially since it is associated almost exclusively with genes for SAM synthetases, whose regulation by SAH would make little sense. To gain a better understanding of the function of the SAM/SAH riboswitch, its 3D structure was solved by NMR spectroscopy, as described in the fourth publication. For this, all resonances of the sequence and the ligand first had to be assigned, as described in the third publication. SAH was chosen as the ligand because it is chemically more stable and thus better suited to NMR measurements that sometimes run for days. In addition, mutants and related ligands were examined for their binding properties by ITC experiments in order to probe the importance of the linker length, individual base pairs and functional groups of the ligand. In other known SAM riboswitches, the RNA encloses the ligand almost completely: the sulfonium ion is specifically recognized and coordinated by the carboxyl groups of various uracil nucleotides, and a binding pocket is formed that provides enough space for stable binding of the methyl group. In the SAH riboswitch, selectivity for SAH is achieved by the binding pocket sterically providing no room for the methyl group of SAM.
In summary, three different ligand-binding RNA structures were investigated in this work, all of which use very different strategies to bind their ligands. Although protonations have rarely been described for aptamers and riboswitches, they play a decisive role in the first two structures investigated. And although rarely reported across all known RNA structures so far, there are, beyond the two cases discussed here, several further examples of structural or functional protonation. Also with a view to future RNA structure-prediction programs, or improvements to existing ones similar to those long used for proteins, protonated nucleobases must be seriously taken into account. Furthermore, it could be shown that two of the riboswitches investigated bind two ligands with similar affinity, though by different strategies. While the regulatory benefit of the promiscuous CDN riboswitches is still unknown, for the SAM/SAH riboswitch it could be shown that SAH is bound only incidentally, probably owing to its very low intracellular concentration, and that this riboswitch therefore probably arose later in evolutionary development. Riboswitches continue to keep things exciting.
Non-ribosomal peptide synthetase docking domains : structure, function and engineering strategies
(2021)
Non-ribosomal peptide synthetases (NRPSs) are known for their capability to produce a wide range of natural compounds, some of which possess interesting bioactivities relevant for clinical application, such as antibiotic, anticancer, and immunosuppressive drugs. The diverse bioactivity of non-ribosomal peptides (NRPs) originates from their structural diversity, which results not only from the incorporation of non-proteinogenic amino acids into the growing peptide chain, but also from the formation of heterocycles and further peptide modifications like methylation, hydroxylation and acetylation.
The biosynthesis of NRPs is achieved via the orchestrated interplay of distinct catalytic domains, which are grouped into modules located on one or more polypeptide chains. Each cycle starts with the selection and activation of a specific amino acid by the adenylation (A) domain, which catalyzes aminoacyl adenylate formation under ATP consumption. The activated amino acid is then bound via a thioester bond to the 4'-phosphopantetheine cofactor (PPant arm) of the following thiolation (T) domain. Before substrate loading, the PPant arm is post-translationally added to the T domain by a phosphopantetheinyl transferase (PPTase), which converts the inactive apo-T domain into its active holo-form. In the last step of the catalytic cycle, two T-domain-bound peptide building blocks are connected by the condensation (C) domain, resulting in peptide bond formation and transfer of the nascent peptide chain to the following module. Each catalytic cycle is performed by a C-A-T elongation module until the termination module with a C-terminal thioesterase (TE) domain is reached. There, the peptide product is released by hydrolysis or intramolecular cyclisation.
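The assembly-line logic described above can be sketched as a tiny simulation. This is a purely illustrative sketch, not code from the thesis: the substrate names are hypothetical examples, and the initiation module and PPTase priming step are deliberately simplified.

```python
# A minimal, purely illustrative sketch of the NRPS catalytic cycle.
# Substrate names ("Val", "Leu", "Phe") are hypothetical examples.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Module:
    """One C-A-T module whose A domain selects a specific amino acid."""
    substrate: str                 # amino acid activated by the A domain
    loaded: Optional[str] = None   # aminoacyl tethered to the T domain

    def adenylate_and_load(self) -> None:
        # A domain: form the aminoacyl adenylate (ATP-dependent), then
        # bind it as a thioester to the holo-T domain's PPant arm.
        self.loaded = self.substrate

def condense(chain: List[str], module: Module) -> List[str]:
    # C domain: peptide-bond formation, transferring the nascent chain
    # onto the T-domain-bound building block of the current module.
    assert module.loaded is not None, "T domain must be loaded first"
    return chain + [module.loaded]

modules = [Module("Val"), Module("Leu"), Module("Phe")]
chain: List[str] = []
for m in modules:
    m.adenylate_and_load()
    chain = condense(chain, m)

# TE domain of the termination module: release the product.
print("-".join(chain))  # Val-Leu-Phe
```

The fixed iteration order over `modules` mirrors the collinear module order that, in multi-protein NRPSs, must additionally be enforced by docking domains.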
In comparison to single-protein NRPSs, where all modules are encoded on a single polypeptide chain, multi-protein NRPS systems must additionally maintain a specific module order during peptide biosynthesis. Accordingly, small communication-mediating (COM) domains, also called docking domains (DDs), were identified in the C- and N-terminal regions of multi-protein NRPSs. These domains were shown to mediate specific and selective non-covalent protein-protein interactions, even though DD interactions are generally characterized by low affinities.
The first publication of this work focuses on the Peptide-Antimicrobial-Xenorhabdus peptide-producing NRPS called PaxS, which consists of the three proteins PaxA, PaxB and PaxC. In particular, the trans DD interface between the C-terminally attached DD of PaxB and the N-terminally attached DD of PaxC was structurally investigated and thermodynamically characterized by isothermal titration calorimetry (ITC), yielding a dissociation constant (KD) of ~25 µM, a typical DD affinity known from other characterized DD pairs. Artificially linking the PaxB/C C/NDD pair via a glycine-serine (GS) linker facilitated the structure determination of the DD complex by solution nuclear magnetic resonance (NMR) spectroscopy. In comparison to known docking domain structures, this DD complex adopts a completely new fold, characterized by a central α-helix of the PaxC NDD wrapped by two V-shaped α-helices of the PaxB CDD.
The first manuscript of this work focuses on the application of synthetic zippers (SZs) to mimic natural docking domains, enabling the easy and functional assembly of NRPS building blocks encoded on different plasmids. Here, the high-affinity interaction of the SZs unambiguously defines the order of the synthetases derived from single-protein NRPSs in the engineered NRPS system and allows recombination in a plug-and-play manner. Notably, the SZ engineering strategy even facilitates the functional assembly of NRPSs derived from Gram-positive and Gram-negative bacteria. Furthermore, the functional incorporation of SZs into NRPS modules is not limited to a specific linker region, so we could introduce them within all native NRPS linker regions (A-T, T-C, C-A).
The second publication and the second manuscript of this thesis again focus on the multi-protein PaxS, in particular on the trans interface between the proteins PaxA and PaxB, studied at the molecular level by solution NMR. To this end, the T domain adjacent to the PaxA CDD was included in the structural investigation alongside the native interaction partner PaxB NDD. Before a three-dimensional structure could be obtained from NMR data, the NH groups located in the peptide bonds had to be assigned to the respective amino acids of the proteins (backbone assignment). Based on these backbone assignments, the secondary structures of PaxA T1-CDD and PaxB NDD in the absence and presence of the respective interaction partner were predicted.
The structural and functional characterization of the PaxA T1-CDD:PaxB NDD complex is summarized in manuscript two. The thermodynamic analysis of this complex by ITC determined a KD value of ~250 nM, whereas the discrete DDs alone did not interact at all. The high-affinity interaction allowed the solution NMR structure of the PaxA T1-CDD:PaxB NDD complex to be determined without covalent linkage of the interaction partners, revealing an extended docking domain interface. This interface comprises, on the one hand, α-helix 4 of the PaxA T1 domain together with the α-helical CDD and, on the other hand, the PaxB NDD, which is composed of two α-helices separated by a sharp bend.
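The two reported affinities (KD ≈ 25 µM for the isolated PaxB/C DD pair, KD ≈ 250 nM for PaxA T1-CDD:PaxB NDD) can be put on a free-energy scale with the standard relation between dissociation constant and binding free energy (1 M standard state, evaluated here at 298 K; a rough comparison, not a calculation from the thesis):

```latex
\Delta G^\circ = RT\,\ln K_\mathrm{D}
\qquad
\Delta G^\circ(25\,\mu\mathrm{M}) \approx -26\ \mathrm{kJ\,mol^{-1}},
\quad
\Delta G^\circ(250\,\mathrm{nM}) \approx -38\ \mathrm{kJ\,mol^{-1}}
% The roughly 100-fold tighter binding corresponds to an extra
% RT ln(100) ~ 11 kJ/mol of binding free energy, here contributed
% by the extended interface that includes the T domain.
```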
...
This thesis presents research based on nanoscopic surface measurements on plasmonic metasurfaces and two-dimensional materials, in particular the semiconducting transition-metal dichalcogenide (TMDC) WS_2. The thesis is divided into seven chapters. The introduction provides an overview of the driving forces behind research in nanophotonics on two-dimensional material systems. The investigation of light-matter interaction at thin material interfaces runs as a common thread through the entire work.
The second chapter describes the experimental setup implemented for the nanoscopic measurements in this work. The theoretical foundations, the measurement principle and the implementation of the scattering-type scanning near-field optical microscope (s-SNOM) are outlined. In addition, a conductive atomic force microscope (c-AFM) operated in contact mode is used to measure electrical currents on microscopic two-dimensional TMDC terraces. The following four chapters present the contributions of this work to the study of light-matter interaction at the nanoscale from different perspectives. Each chapter contains a short introduction, a theory section, measurement data or simulation results together with an analysis, and is completed by a concluding section.
The central work on a metallic metasurface made of elliptical gold discs is presented in Chapter 3. The accompanying theory section introduces the concept of surface plasmon polaritons (SPPs), which is fundamental to the field of plasmonics in general. Various methods for calculating the dispersion relation of these surface modes at single- and multi-layer interfaces are applied to the metasurface sample under investigation. The model predicts three different modes propagating at the interface: a partially bound surface mode radiating into the substrate, and two buried, strongly bound anisotropic modes. A silicon nanosphere placed on the sample is used as a radial excitation source.
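For context, the textbook dispersion relation of an SPP at a single metal-dielectric interface, which the multi-layer methods mentioned above generalize, reads as follows (standard plasmonics result, not specific to this sample):

```latex
k_{\mathrm{SPP}} = k_0\,\sqrt{\frac{\varepsilon_m\,\varepsilon_d}{\varepsilon_m+\varepsilon_d}},
\qquad k_0 = \frac{\omega}{c}
% epsilon_m, epsilon_d: permittivities of metal and dielectric.
% A bound surface mode requires Re(epsilon_m) < -epsilon_d, so that
% k_SPP > k_0 and the field decays evanescently on both sides of
% the interface; this momentum mismatch to free-space light is why
% a local scatterer (here, the nanosphere) is needed for excitation.
```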
Comparison with s-SNOM near-field images shows that only the weakly bound guided-mode resonance was excited strongly enough to be detected by s-SNOM imaging. The weak surface binding explains the apparently isotropic propagation on the anisotropic surface. Observing the remaining strongly confined anisotropic buried modes would require improved depth-sensitive resolution of the system, which in principle should be possible for layer thicknesses of 20 nm. Moreover, the observation raises the question of whether the excitation efficiency, set by the momentum and mode-volume matching of the nanosphere, produces a sufficient excitation cross-section to generate detectable buried SPP modes.
Chapter 4 continues the idea of visualizing buried electric fields with s-SNOM, here applied to the study of WS_2, a two-dimensional TMDC material that exhibits photoluminescence. By structuring the gallium phosphide substrate beneath the suspended monolayer, which is supported by a thin layer of hBN, the photoluminescence yield is increased by a factor of 10. This is achieved by designing a lateral DBR (distributed Bragg reflector) microcavity, etched into the substrate, with an additionally optimized vertical depth.
High-resolution imaging of the electric field distribution in the resonator is made possible by using s-SNOM to assess the improvement in in-coupling achieved by these two approaches. It was found that the lateral structure contributes predominantly to the enhanced photoluminescence yield, whereas no obvious enhancement of the in-coupling could be attributed to the vertical structure optimization.
The two-dimensional material WS_2 is investigated again in Chapter 5, this time using c-AFM. Multilayers of different thicknesses on graphene and gold serve as tunnelling barriers for vertical currents between the substrate and the conductive c-AFM tip. The data can be explained with a Fowler-Nordheim model parameterized by the tunnelling width and the Schottky barrier heights of the two interfaces. However, the measurements show poor reproducibility, calling for a more detailed account of the relevant error sources. The chapter's conclusion proposes several key aspects to be considered in future measurements. Crucially, c-AFM is very sensitive to the adsorption of water films on the sample surface, from which WS_2 surfaces suffer under ambient conditions...
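The Fowler-Nordheim picture invoked here is the standard field-emission tunnelling law (generic textbook form; the chapter's model parameterizes it with the tunnelling width and the Schottky barrier heights of the two interfaces):

```latex
J(E) = \frac{A\,E^{2}}{\varphi}\,\exp\!\left(-\frac{B\,\varphi^{3/2}}{E}\right)
% J: tunnelling current density, E: electric field across the barrier,
% phi: barrier height, A and B: constants containing e, m and hbar.
% A Fowler-Nordheim plot of ln(J/E^2) versus 1/E is linear, with the
% slope giving access to the effective barrier height.
```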
Photorhabdus and Xenorhabdus are Gram-negative, entomopathogenic bacteria living in endosymbiosis with soil-dwelling nematodes of the genera Steinernema and Heterorhabditis. The life cycle of these nematodes includes a non-feeding infective juvenile (IJ) stage, which actively searches for insects in the soil. After penetration of the insect prey, Photorhabdus and Xenorhabdus bacteria are released from the nematode gut; the bacteria proliferate and produce toxins to kill the insect. Photorhabdus and Xenorhabdus support nematode development throughout the life cycle and help to eliminate food competitors by providing a wide variety of specialized metabolites (SMs). However, little is known about which SMs function as so-called “food signals” that trigger the developmental process.
The IJs develop into adult, self-fertilizing hermaphrodites in a process called recovery, while feeding on the cadaver and bacterial biomass. Heterorhabditis and Steinernema continue to breed until nutrients are exhausted; next-generation IJs (NG-IJs) then develop and leave the cadaver to search for another insect prey.
Photorhabdus and Xenorhabdus can be cultivated in defined medium under laboratory conditions. By placing IJs on a plate containing their respective bacterial symbiont, the complete life cycle of the nematodes can be observed in vitro. The in vitro nematode bioassay was used as a tool to investigate the development of the nematode.
The aim of this study was to find the food signals responsible for nematode development. Different Photorhabdus deletion strains unable to produce one or several SMs were co-cultivated with nematodes in the nematode bioassay. Subsequently, two aspects of the life cycle were investigated: recovery and NG-IJ development.
As isopropyl stilbene (IPS) has been postulated to function as a food signal supporting nematode recovery, it was used as the starting point for these investigations. This study focused on the biosynthetic pathway of IPS, including intermediates, side products and derivatives, to determine which compound is in fact responsible for supporting nematode development.
The biosynthesis of IPS requires two precursors, phenylalanine and leucine (Figure 5). The first topic focused on the phenylalanine-derived pathway. Photorhabdus laumondii deletion mutants defective in intermediate steps of this pathway were created: deletions of the gene coding for the phenylalanine ammonia lyase (stlA), which converts phenylalanine into cinnamic acid (CA), of the coenzyme A (CoA) ligase gene (stlB), and of the operon coding for a ketosynthase and aromatase (stlCDE). These strains were used in the nematode bioassay, including complementation of the mutant phenotypes by feeding experiments. Recovery of nematodes grown on the deletion strains was always lower than recovery of nematodes grown on wild-type bacteria. Feeding IPS to a deletion strain did not restore wild-type levels of nematode recovery; thus IPS cannot be the food signal. Instead, the food signal must be another compound derived from this part of the biosynthetic pathway. Lumiquinone and 2,5-dihydrostilbene are suggested as candidate food signals and need to be investigated in future work.
The second part of this study focused on the leucine-derived pathway, which involves the Bkd complex forming the iso-branched part of IPS. A bkd deletion strain was created, phenotypically analysed, and subsequently tested in the nematode bioassay. Not only IPS but also other branched SMs, such as photopyrones and phurealipids, are synthesized via the Bkd complex. Deletion strains defective in producing photopyrones and phurealipids were therefore also tested in nematode bioassays to investigate the effects of these SMs individually. Branched SMs did not have an impact on nematode development, but nematodes grown on the ΔbkdABC strain showed reduced recovery and almost abolished NG-IJ development. As the Bkd complex also produces branched-chain fatty acids (BCFAs), feeding experiments were performed with lipid extracts of the wild-type and mutant strains. All lipid extracts improved recovery, but only wild-type lipids could complement NG-IJ development. This strongly indicates that BCFAs play an important role in NG-IJ development, which needs to be confirmed by feeding purified BCFAs. This is an interesting finding that could improve nematode production for use as a biocontrol agent.
Another focus within the nematode life cycle was the role of epoxy stilbene (EPS), which is derived from IPS, in nematode development. It was recently demonstrated that EPS does not support nematode development; however, EPS forms adducts with amino acids. In this thesis, novel adducts containing the amino acid phenylalanine or a tetrapeptide were characterized, as was another adduct, most likely an EPS dimer. The biological role of such adducts is discussed as potentially important for weakening the insect; the structures of the novel compounds still need to be elucidated and their bioactivity tested.
The DNA damage response (DDR) is a vast network of molecules that preserves genome integrity and allows the faithful transmission of genetic information in human cells. While the usual response to the detection of DNA lesions involves the control of cell-cycle checkpoints, repair proteins or apoptosis, alterations of the repair processes can lead to cellular dysfunction, disease, or cancer. Moreover, cancer patients with DDR alterations often show poor survival and chemoresistance. Despite the progress made in recent years in identifying genes and proteins involved in the DDR and their roles in cellular physiology and pathology, the involvement of the DDR in metabolism remains unclear. It remains to identify the metabolites associated with specific repair pathways or alterations and to investigate whether differences exist depending on cellular origin. The identification of DDR-related metabolic pathways, and of the pathways that cause metabolic reprogramming in DDR-deficient cells, may yield new targets for the development of new therapies.
In this thesis, nuclear magnetic resonance (NMR) spectroscopy was used to assess the metabolic consequences of the loss of two central DNA repair proteins of importance in disease contexts, ATM and RNase H2, in haematological cells. An increase in intracellular taurine was found in RNase H2- and ATM-deficient cells compared to cells wild-type for these genes, and in cells after exposure to a source of DNA damage. The rise in taurine does not appear to result from an increase in its biosynthesis from cysteine, but more likely from other cellular processes such as degradation pathways.
Overall, this study presents evidence for metabolic reprogramming in haematological cells with faults in DNA repair resulting from ATM or RNase H2 deficiency or upon exposure to a source of DNA damage.
This dissertation presents the beam-dynamics designs of two radio-frequency quadrupole (RFQ) linear accelerators: the design for the RFQ of the proton linear accelerator (p-Linac) of the FAIR project at GSI Darmstadt, as well as a first design draft for a compact RFQ that could be used, among other applications, to produce radioisotopes for medical purposes. The emphasis is on the first design.
Within the last thirty years, the contraction method has become an important tool for the distributional analysis of random recursive structures. While it was mainly developed to show weak convergence, the contraction approach can additionally be used to obtain bounds on the rate of convergence in an appropriate metric. Based on ideas of the contraction method, we develop a general framework to bound rates of convergence for sequences of random variables as they mainly arise in the analysis of random trees and divide-and-conquer algorithms. The rates of convergence are bounded in the Zolotarev distances. In essence, we present three different versions of convergence theorems: a general version, an improved version for normal limit laws (providing significantly better bounds in some examples with normal limits) and a third version with a relaxed independence condition. Moreover, concrete applications are given which include parameters of random trees, quantities of stochastic geometry as well as complexity measures of recursive algorithms under either a random input or some randomization within the algorithm.
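For reference, the Zolotarev distance in which the rates of convergence are bounded is defined as follows (standard definition of the Zolotarev metric; the notation may differ slightly from the thesis):

```latex
\zeta_s(X,Y) = \sup_{f\in\mathcal{F}_s}\bigl|\,\mathbb{E}[f(X)]-\mathbb{E}[f(Y)]\,\bigr|,
\qquad s = m+\alpha,\quad m\in\mathbb{N}_0,\quad 0<\alpha\le 1,
% where F_s is the class of m-times differentiable functions f whose
% m-th derivative is Hoelder continuous with exponent alpha and
% constant 1: |f^{(m)}(x)-f^{(m)}(y)| <= |x-y|^alpha.
% zeta_s is an ideal metric of order s, which makes it well suited to
% the recursive decompositions used in the contraction method.
```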
James Joyce's Ulysses is treated as one of the most influential, paradigmatic texts of high modernism. Novels like Thomas Pynchon's 1973 Gravity's Rainbow and David Foster Wallace's 1996 Infinite Jest, which raise equal claims to being the paradigms of their respective times, are perpetually compared to and measured against Joyce's epic novel. However, novels like Ulysses, Gravity's Rainbow and Infinite Jest are usually either grouped together because of their length, complexity and importance, examined for direct allusions or a rather general "style", or, conversely, discussed so as to stress the novels' singularity and autonomy. I argue not only that Joyce's Ulysses, Pynchon's Gravity's Rainbow and Wallace's Infinite Jest can be meaningfully put in relation to one another, but that their singularity and paradigmatic status in 20th-century literature should be understood through the relationality of a Ulyssean Tradition. Novels like Gravity's Rainbow and Infinite Jest can be fruitfully read in a Ulyssean Tradition: their singular, paradigmatic aesthetic projects emerge from a reciprocal dialogue with Ulysses through their self-inscription into this tradition. The intertextual connection of the Ulyssean Tradition is integrally constitutive of the autonomy through which these novels claim the status of singular representations of their respective human condition and thus of epic paradigms of a new way of writing the world. By positioning themselves in the literary field alongside Ulysses as the received paradigm of modernism, Wallace in Infinite Jest and Pynchon in Gravity's Rainbow legitimize their own, independent projects and their own claims to paradigmaticness. The Ulyssean Tradition thereby becomes not only a way of writing, and this study not merely a study of literary influence, but also a way of reading that can generate new, independent readings through the relationality of a Ulyssean Tradition.
Polyketides are highly valuable natural products, widely used as pharmaceuticals due to their beneficial characteristics, comprising antibacterial, antifungal, immunosuppressive, and antitumor properties, among others. Their biosynthesis is performed by large and complex multiprotein enzymes, the polyketide synthases (PKSs). This study solely focuses on the class of type I PKSs, which arrange all their enzymatic domains on one or more polypeptides. Despite their high medical value, little is known about mechanistic details of PKSs.
One central domain is the acyl transferase (AT), which is present in all PKSs and channels small acyl substrates into the enzyme. More precisely, the AT loads the substrates onto the essential acyl carrier protein (ACP), which subsequently shuttles the substrates and all intermediates for condensation and modification to additional domains to build the final polyketide.
Some PKSs use their domains several times during biosynthesis and work iteratively; these are called iterative PKSs. Others feature several sets of domains, each used only once during biosynthesis; these are called modular PKSs. All PKSs or PKS modules consist of a minimum of three essential domains to connect the acyl substrates. Three modifying domains are optional and can enlarge this minimal set. Depending on the domain composition, the acyl substrate is fully reduced, partly reduced, or not reduced at all. This variation of modifying domains accounts for the huge structural, and therefore functional, variety of polyketides.
Even though the structure of fatty acids is not exactly reminiscent of polyketides, their biosynthetic pathways are closely related. Fatty acid biosynthesis is carried out by fatty acid synthases (FASs), which share many similarities with PKSs. Both megasynthases feature the same domains, performing the same reactions to connect and modify small acyl substrates. In contrast to PKSs, FASs always contain one full set of modifying domains which is used iteratively, leading to fully reduced fatty acids.
The present thesis extensively analyzes the ATs of different PKSs with respect to substrate selectivity, AT-ACP domain-domain interaction, and enzyme kinetic properties. The comparison reveals the following key findings: 1.) ATs of PKSs appear slower than those of FASs, which may reflect the different scopes of the biosynthetic pathways: fatty acids, as essential compounds in all organisms, are needed in high amounts for physiological functions, whereas polyketides, as secondary metabolites, only require basal concentrations to take effect. 2.) The slower ATs from modular PKSs do not load non-native substrates even in the absence of the native substrates. This differs from the faster ATs of iterative PKSs and FASs and indicates high substrate specificity solely for the ATs from modular PKSs, emphasizing their role as gatekeepers in polyketide synthesis. 3.) The substrate selectivity can emerge in either the first or the second step of the AT-mediated ACP loading and is not ensured by a hydrolytic proofreading function.
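The "slower/faster" and "gatekeeper" comparisons above are conventionally quantified within Michaelis-Menten kinetics (standard enzymology, not a result specific to this thesis):

```latex
v = \frac{k_{\mathrm{cat}}\,[\mathrm{E}]_0\,[\mathrm{S}]}{K_\mathrm{M}+[\mathrm{S}]}
% k_cat: turnover number (a "slower" AT corresponds to a lower k_cat);
% K_M: Michaelis constant. The specificity constant k_cat/K_M compares
% how efficiently competing acyl substrates are processed by the same
% AT and thus expresses its gatekeeping stringency.
```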
Moreover, a mutational study of the AT-ACP interaction in the modular PKS 6-deoxyerythronolide B synthase (DEBS) shows that single surface point mutations can influence AT-mediated reactions in a complex manner. The data reveal high enzyme kinetic plasticity of the AT-ACP interaction, as was also recently demonstrated for the corresponding interaction in a type II FAS.
Based on these findings, the mammalian FAS is engineered towards a modular PKS-like assembly line with the long-term goal of rationally synthesizing new products. Three important aspects need to be considered: 1.) AT loading needs to be split into specific loading of a priming substrate by a priming AT and specific loading of an elongation substrate by an elongation AT. 2.) FAS-based elongation modules need to be designed with varying domain compositions to introduce functional groups into the product. 3.) Covalent and non-covalent linkers need to be designed to connect priming and elongation modules.
This study focuses on the first aspect, splitting the loading of priming and elongation substrates. An elongation-substrate-specific AT is installed in the mammalian FAS via domain swapping. Since ATs from modular PKSs were shown to be substrate specific, these are used to replace the mammalian FAS AT. This work demonstrates that it is extremely challenging to create stable and functional chimeras, but first essential steps are taken: proper domain boundaries for AT swapping are established and a stable chimera with 70 % wild-type AT activity is created. However, this chimera is only of limited value for application in an elongation module due to the intrinsically slow turnover rate of the wild-type AT. Using another PKS AT, a stable elongation module is designed and its activity analyzed in combination with a priming module. These experiments demonstrate that the loading of priming substrates is successfully suppressed in the elongation module, but nonetheless only minor turnover rates are detected in the assembly line.
...
Bacteria are true artists of survival, which rapidly adapt to environmental changes like pH shifts, temperature changes and different salinities. Upon osmotic shock, bacteria are able to counteract the loss of water by the uptake of potassium ions. In many bacteria, this is accomplished by the major K+ uptake system KtrAB. The system consists of the K+-translocating channel subunit KtrB, which forms a dimer in the membrane, and the cytoplasmic regulatory RCK subunit KtrA, which binds non-covalently to KtrB as an octameric ring. This unique architecture differs strongly from other RCK-gated K+ channels like MthK or GsuK, in which covalently tethered cytoplasmic RCK domains regulate a single tetrameric pore. As a consequence, an adapted gating mechanism is required: The activation of KtrAB depends on the binding of ATP and Mg2+ to KtrA, while ADP binding at the same site results in inactivation, mediated by conformational rearrangements. However, it remains poorly understood how the nucleotides are exchanged and how the resulting conformational changes in KtrA control gating in KtrB.
Here, I present a 2.5-Å cryo-EM structure of ADP-bound, inactive KtrAB, which for the first time resolves the N termini of both KtrBs. They are located at the interface of KtrA and KtrB, forming a strong interaction network with both subunits. In combination with functional and EPR data, we show that the N termini, surrounded by a lipidic environment, play a crucial role in the activation of the KtrAB system. We propose an allosteric network in which an interaction of the N termini with the membrane facilitates MgATP-triggered conformational changes, leading to the active, conductive state.
The topic of this thesis is the theoretical description of the hadron gas stages in heavy-ion collisions. The overarching question addressed is: How does the hadronic medium evolve, i.e. what are the relevant microscopic reaction mechanisms and the properties of the involved degrees of freedom? The main goal is to address this question specifically for hadronic multi-particle interactions. To this end, the hadronic transport approach SMASH is extended with stochastic rates, which allow multi-particle reactions that fulfill detailed balance to be included in the approach. Three types of reactions are newly accounted for: 3-to-1, 3-to-2 and 5-to-2 reactions. After extensive verification of the stochastic rates approach, they are used to study the effect of multi-particle interactions, particularly in afterburner calculations.
These studies follow complementary results for the dilepton and strangeness production with only binary reactions, which show that hadronic transport approaches are capable of describing observables when employed for the entire evolution of low-energy heavy-ion collisions. This is illustrated by the agreement of SMASH calculations with the measured dilepton and strangeness production in smaller systems. It is, in particular, possible to match the measured strangeness production of phi and Xi hadrons via additional heavy nucleon resonance decay channels. For larger systems or higher energies, hadronic transport cascade calculations with vacuum resonance properties can point to medium effects. This is demonstrated extensively for the dilepton emission in comparisons to the full set of HADES dielectron data. The dilepton invariant mass spectra are sensitive to a medium modification of the vector meson spectral function for large collision systems already at low beam energies. The sensitivity to medium modifications is mapped out in detail by comparisons to a coarse-graining approach, which employs medium-modified spectral functions and is based on the same evolution.
The theoretical foundation of the stochastic rates is given by collision probabilities derived from the Boltzmann equation's collision term under the assumption of a constant matrix element. This derivation is presented in a comprehensive and pedagogical fashion. The derived collision probabilities are employed for a stochastic collision criterion and various detailed-balance-fulfilling multi-particle reactions: the mesonic Dalitz decay back-reaction (3-to-1), the deuteron catalysis (3-to-2) and the proton-antiproton annihilation back-reaction (5-to-2). The introduced stochastic rates approach is extensively verified by studies of the numerical stability and comparisons to previous results and analytic expectations. The stochastic rates results agree perfectly with the respective analytic results.
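The essence of such a stochastic collision criterion can be sketched in a few lines. The following is a minimal illustration (not the SMASH implementation): for one particle pair in a cell of volume V_cell, the binary collision probability per time step is taken as P = σ v_rel Δt / V_cell, and a collision is accepted when a uniform random number falls below P. All numerical values below are hypothetical and chosen only for demonstration.

```python
import random

def collision_probability(sigma, v_rel, dt, cell_volume):
    """Binary (2->2) collision probability for one particle pair in a cell.

    Sketch of the stochastic-rates idea: P = sigma * v_rel * dt / V_cell,
    i.e. the probability obtained from the Boltzmann collision term for a
    constant matrix element. All quantities must be in consistent units.
    """
    return sigma * v_rel * dt / cell_volume

def sample_collisions(n_pairs, p, seed=42):
    """Monte Carlo sampling of the stochastic criterion over many pairs."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_pairs) if rng.random() < p)

# Hypothetical numbers: sigma = 3 fm^2, v_rel = 0.8 c,
# dt = 0.1 fm/c, cell volume = 27 fm^3.
p = collision_probability(3.0, 0.8, 0.1, 27.0)
n_coll = sample_collisions(100_000, p)
# For many sampled pairs, the observed collision rate converges to p.
```

In an actual transport code the probability is evaluated per pair (or particle triplet, etc.) in every space-time cell, which is what makes multi-particle generalizations such as 3-to-2 and 5-to-2 reactions tractable.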
Physically, multi-particle reactions are demonstrated to be significant for different observables, most notably the yield of the participating particles, even in the late dilute stage of heavy-ion reactions. They lead to a faster equilibration of the system than equivalent binary multi-step treatments. The difference in equilibration consequently influences the yield in afterburner calculations. Interestingly, the interpretation of the results does not depend on whether multi-particle or multi-step treatments are employed, which a posteriori validates the latter.
As the first test case of multi-particle reactions in heavy-ion collisions, the mesonic 3-to-1 Dalitz decay back-reaction is found to be dominated by the omega Dalitz decay. While the effect on the medium overall is found to be negligible, the regeneration is sizable: up to a quarter of the Dalitz decays are regenerated.
Non-equilibrium rescattering effects are shown to be relevant in the late collision stages for two particle species: deuterons and protons. In both cases, the relevant rescatterings involve multiple particles.
The pion and nucleon catalysis reactions of the deuteron equilibrate quickly in the afterburner stage at intermediate energies. The constant formation and destruction keeps the yield constant and microscopically explains the "snowballs in hell" paradox. The yield is also generated when no deuterons are present at early times, which explains why coalescence models can also match the multiplicity.
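The equilibration argument above can be illustrated with a toy rate equation: if formation and destruction proceed at constant rates F and D, the yield relaxes to the fixed point N_eq = F/D regardless of the initial deuteron number. The rates below are hypothetical stand-ins for the catalysis reactions, chosen only to show the convergence.

```python
def evolve_yield(n0, formation, destruction, dt=0.01, steps=2000):
    """Toy rate equation dN/dt = F - D * N, integrated with Euler steps.

    Illustrative only: F and D are hypothetical constant rates standing in
    for catalysis-type formation and destruction of deuterons. The fixed
    point N_eq = F / D is approached independently of the start value n0.
    """
    n = n0
    for _ in range(steps):
        n += (formation - destruction * n) * dt
    return n

# Two runs: one starting with no deuterons, one with a large surplus.
n_from_zero = evolve_yield(0.0, formation=2.0, destruction=0.5)
n_from_many = evolve_yield(40.0, formation=2.0, destruction=0.5)
# Both approach the same equilibrium yield N_eq = F / D = 4.0.
```

This is why the final deuteron multiplicity is insensitive to whether deuterons exist early on: the late-stage balance of formation and destruction alone fixes the yield.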
The study of the 5-body back-reaction of proton-antiproton annihilations is new. This work marks the first realization of microscopic 5-body reactions in a transport approach that fulfills detailed balance for such reactions. A sizable regeneration is found: up to half of the proton-antiproton pairs lost to annihilations are restored by the back-reaction. Consequently, both annihilation and regeneration in the late non-equilibrium stage are shown to have a significant effect on the proton yield.
The aim of this dissertation is to investigate the equilibrium and non-equilibrium properties of the strongly interacting QGP medium near the phase transition under extreme conditions of high T and high baryon densities by means of kinetic theory within the framework of effective models. We first study the thermodynamic and transport properties of the QGP near equilibrium on the basis of the DQPM in the region of moderate baryon chemical potentials μB ≥ 0.5 GeV. In particular, the EoS and the speed of sound as well as the transport coefficients of the QGP are calculated on the basis of the DQPM at finite T and μB. Transport coefficients are of particular interest since they provide information on the interactions in the medium, which in equilibrium can be characterized by a temperature T and a chemical potential μB. Taking into account the transport coefficients and the EoS of the QGP phase, we compare our results with various results from the literature, in which transport coefficients of the QGP have been studied on the basis of effective models predominantly at zero or small chemical potential.
Furthermore, in Chapter 3 the equilibrium properties of the QGP, and in particular the effects of the μB dependence of the thermodynamic and transport properties of the QGP, are investigated within the extended PHSD transport approach, which covers the full evolution of the system including the partonic phase. The evolution within the PHSD transport approach is extended in the partonic phase by explicitly computing the total and differential partonic scattering cross sections on the basis of the DQPM and evaluating them at the actual temperature T and baryon chemical potential μB in each individual space-time cell in which a partonic scattering takes place.
To investigate the traces of the μB dependence of the QGP in observables, the results of PHSD5.0 (with μB dependences) are compared with the results of PHSD5.0 for μB = 0 as well as with PHSD4.0, in which the masses/widths of quarks and gluons and their interaction cross sections depend only on T. We discuss the PHSD results for various observables: (i) rapidity and pT distributions of identified hadrons for symmetric Au+Au and Pb+Pb collisions at energies of 30 AGeV (a future NICA energy) as well as at the top RHIC energy of √sNN = 200 GeV; (ii) directed flow v1 of identified hadrons for Au+Au at invariant energies √sNN = 27 GeV and 200 GeV; (iii) elliptic flow v2 of identified hadrons for Au+Au at invariant energies √sNN = 27 and 200 GeV. The comparison of the bulk observables for Au+Au collisions within the three PHSD settings has shown that they exhibit a rather low sensitivity to the μB dependences of the parton properties (masses and widths) and their interaction cross sections, such that the results of PHSD5.0 with and without μB are very close to each other. Only for kaons, antiprotons p̄ and antihyperons Λ̄ + Σ̄0 could a small difference between PHSD4.0 and PHSD5.0 be observed at the top SPS and RHIC energies.
We find only small differences between the results of PHSD4.0 and PHSD5.0 for the hadronic observables considered here, at high as well as at intermediate energies. This is related to the fact that at high energies, where the matter is dominated by the QGP, a very small baryon chemical potential μB is probed in central collisions at midrapidity, while with decreasing energy and increasing μB the fraction of the QGP decreases rapidly, so that the final observables are dominated overall by hadrons that have taken part in hadronic rescattering, whereby the information about their QGP origin is washed out or lost.
In Chapter 4 we consider the transport coefficients of QGP matter in the extended Polyakov-NJL model along the transition line for moderate values of the baryon chemical potential 0 ≤ μB ≤ 0.9 GeV as well as in the vicinity of the critical endpoint (CEP) and at large baryon chemical potential μB = 1.2 GeV, where a first-order phase transition takes place. We investigate how the nature of the degrees of freedom influences the transport properties of the QGP. Moreover, we demonstrate the effects of the first-order phase transition and of the CEP on the transport coefficients in the deconfined QCD medium.
Furthermore, in Chapter 5 a phenomenological extension of the DQPM to large baryon chemical potentials μB is considered, including the region with a possible CEP and a subsequent first-order phase transition. One of the most important features of the model is the occurrence of a 'critical' scaling in the vicinity of the CEP. The main goal of the presented model is to provide the microscopic and macroscopic properties of the partonic degrees of freedom for the region of the phase diagram characterized by moderate T and moderate or high μB.
...
Quo vadis Papua: case study of special autonomy policies and socio-political movements in Papua
(2021)
This research discusses socio-political movements in Papua as a result of the implementation of the special autonomy policies (Otsus) by the government for almost two decades. Theoretically, indigenous Papuans should support them, but in empirical reality Otsus has been considered a failure by the indigenous Papuan people because many problems remain unresolved. This negative response indicates public dissatisfaction with the development planning process in Papua. This dissertation aims to examine these issues: why these policies and development plans failed and are protested, why the protests against them are prolonged, how protests develop into social movements, and whether indigenous Papuan movements can be classified as social movements. The study uses a qualitative approach through case study methods. Data were collected through interviews, observations and documentation studies. The research finds that the presence of Otsus in Papua, in addition to being a source of new conflict, also triggers conflicts in the form of protests and resistance movements against the government of Indonesia, both physical and political. It further shows that Otsus management has indeed succeeded in changing the face of Papua through many physical projects, but the development of human aspects and supporting instruments has hardly been touched. Thus, only a small percentage of indigenous Papuans feel the benefits of Otsus, while most of them are still struggling. The protests against Otsus are driven by resentments that keep growing in the community as long as its demands are not met. This study suggests that the presence of the state in Papua through the Otsus policy must be re-evaluated. The state must ensure that in the Otsus era indigenous Papuans are not marginalized, so that aspirations for the welfare of all indigenous Papuans through Otsus can be realized.
Until quite recently, stem cell technology mainly focused on pure populations of embryonic stem (ES) cells derived from the inner cell mass of the blastocyst and induced pluripotent stem (iPS) cells. Using organoids, a newly established culture technique, it is now possible to also culture organ- and patient-specific adult stem (AS) and induced pluripotent stem (iPS) cells in vitro. Furthermore, it has been shown that adult stem cells grown as organoids are genetically stable, proliferate and maintain their multipotency (often a bipotency) for months. This is possible by providing conditions that recapitulate the stem cell niche of the corresponding organ, in particular defined growth factors and a physiological scaffold provided by an extracellular matrix (ECM). Because of increasing research activities, organoids have become influential in recent years. Wide-ranging interest has also led to a clearer definition: organoids must contain multiple organ-specific cell types, must be able to recapitulate some organ-specific functions, and the cells must be spatially organized in a way similar to the organ they are derived from. The excitement about organoids is based on their high potential as a model to understand wound healing, cellular behaviour and differentiation processes in organogenesis. Furthermore, high potential in drug development and in personalized stem cell therapeutic approaches has been demonstrated. For personalized stem cell therapy specifically, one potential application is in chronic autoimmune diseases such as type 1 diabetes (T1D). T1D is characterized by the immune-mediated destruction of β-cells in the pancreas, which leads to absolute insulin deficiency. In T1D, the first-line therapeutic approach is exogenous insulin replacement therapy, which always carries the risk of high fluctuations in blood sugar levels and therefore of hypoglycaemia. Another therapeutic approach is the transplantation of islets from human donors.
A successful islet transplantation can give patients years of insulin independence. However, the therapeutic value of islet transplantation is highly limited by the availability of organ donors and by the need for chronic administration of immunosuppressive medication. The use of pancreas organoids offers a promising alternative as a personalized cell therapeutic approach to treat T1D without the hypoglycaemia risks of the established therapies. In 2013, Meritxell Huch and colleagues established for the first time organoids from the exocrine, ductal part of the pancreas. These pancreas organoids are characterized by a monolayered, spherical cell epithelium that encloses a liquid-filled lumen. In addition, they showed that after transplantation into immunodeficient mice, these cells differentiate into β-cells and cure T1D. However, basic knowledge of the growth behaviour of the culture is still lacking: to date, no growth parameters are defined, and reliable and robust investigation approaches are still missing. Furthermore, the course of organoid development and the biochemical/biophysical mechanisms that generate the phenotypic structure have not been identified. For a clinical approach these parameters are fundamental and therefore must be defined preclinically.
The aim of this study is the preclinical characterization of the hPOs...
Tissue translocation, multigenerational and population effects of microplastics in Daphnia magna
(2021)
The last century saw the widespread adoption of plastic materials throughout nearly every aspect of our lives. Plastics are synthetic polymers that are made up of monomer chains. The properties of the monomer, in conjunction with chemical additives, give plastics an almost endless variety of features and use cases. They are cheap, lightweight, and extremely durable. Plastic materials are often engineered for single use, and in conjunction with high production volumes and insufficient waste management and recycling across the globe, this leads to large amounts of plastic entering the environment. Marine ecosystems are considered sinks; however, freshwater ecosystems, as entry pathways, are highly affected by plastic waste as well. Throughout the past decade, the impact of plastic waste on human and environmental health has received a lot of attention from the ecotoxicological community as well as the public. Small plastic fragments (< 1 mm, called microplastics) are a large part of this emerging field of research. Within this field, the water flea Daphnia magna is probably the most commonly used organism for assessing microplastics toxicity. As a filter-feeding organism, it indiscriminately ingests particles from the water column and is thus highly susceptible to microplastics. For this thesis, we identified several gaps in the available data on the ecotoxicity of microplastics to daphnids. To illuminate some of those gaps, the present thesis was aimed at five main aspects:
(1) Tissue translocation of spherical microplastics in Daphnia magna
(2) Investigation of the toxicity of irregularly shaped microplastics
(3) Multigenerational and population effects of microplastics
(4) Comparison of the toxicity of microplastics and natural particles
(5) Effects of particle-aging on microplastics toxicity
The thesis is comprised of three peer-reviewed articles and one so-far unpublished study as “additional results”. The first study was aimed at understanding the tissue translocation of spherical microplastics to lipid storage droplets of daphnids. The crossing of biological membranes is discussed as a prerequisite for eliciting tissue damage and an inflammatory response. Previously, researchers reported the translocation of fluorescently labeled spherical microplastics to lipid storage droplets of daphnids, even though no plausible biological mechanism explains this observation. Therefore, in order to learn more about this process and potentially illuminate the mechanism, we replicated the study. We were able to observe a fluorescence signal inside the lipid droplets only after increasing the exposure concentrations. Nonetheless, the signal appeared to be independent of the particles. This led to the hypothesis that the lipophilic fluorescent dye uncoupled from the particles and subsequently accumulated in lipid storage droplets. The hypothesis was further confirmed through an additional experiment with a silicone-based passive sampling device, showing that the fluorescence occurred independently of both particles and digestive processes. Accordingly, we concluded that the reported findings were a microscopic artifact caused by the uncoupling of the dye from the particles. A fluorescence signal alone is therefore not a sufficient proxy for particle translocation; it needs to be coupled with additional methods to ensure that the observation is indeed caused by the translocation of particles.
It is still unclear whether the toxicity profile of microplastics differs from that of naturally occurring particles or whether they are “just another particle”, given the innumerable amounts of particles in the natural environment surrounding an organism. The goal of the second study was to compare the toxicity of irregularly shaped polystyrene microplastics to that of the natural particle kaolin. The environment is full of natural non-food particles that daphnids ingest more or less indiscriminately and are therefore well adapted to deal with. Daphnids have a short generation time and usually experience food limitation in nature. Therefore, short-term studies only looking at acute toxicity with ad libitum food availability are not representative of the exposure scenario in nature. For a more realistic scenario, we therefore used a four-generation multigenerational design under food limitation to investigate how effects translate from one generation to the next. We observed concentration-dependent effects of microplastics, but not of natural particles, on mortality, reproduction, and growth. Some of the effects increased from generation to generation, leading to the extinction of two treatment groups. Here, microplastics were more toxic than natural particles. At least part of this difference can be explained by physical properties leading to the quick sedimentation of the kaolin, while microplastics remained in the water column. Nonetheless, buoyancy and sedimentation would also affect exposure in the environment and are likely different for most microplastics than for most naturally occurring particle types.
...
The studies within this thesis were carried out on the model organism Anabaena sp. PCC 7120 (Anabaena), a filamentous freshwater cyanobacterium. Cyanobacteria are photosynthetic, Gram-negative organisms. They possess a plasma membrane delimiting the cytosol and an outer membrane. TonB-dependent transporters (TBDTs) and porins of the outer membrane accomplish and regulate the uptake of nutrients. Typical low-abundance substrates for TBDT-mediated active transport are, for example, iron-containing siderophores or vitamin B12. Smaller dissolved and abundant solutes such as salts or other ions, in contrast, enter the periplasm passively through porins.
Nine putative porins have been identified in Anabaena. Seven of these exhibit a porin-specific domain structure (Alr0834, Alr2231, All4499, Alr4550, Alr4741, All5191 and All7614) and were examined more closely in this work. The expression of these seven genes was compared after the wild type had been grown in standard medium or in medium lacking manganese, iron, copper or zinc, respectively. In addition, the growth of the individual porin mutants was analyzed in comparison to the wild type on solid medium containing high concentrations of salts, antibiotics or other compounds. In part, specific phenotypic properties could be attributed to individual mutants. In summary, the results of these analyses suggest that Alr4550 plays a particular role in maintaining cell envelope stability or integrity, whereas the absence of All5191 appears to impair nitrogen fixation in an unknown manner. The alr2231 mutant showed resistance to high zinc concentrations, suggesting that zinc is a substrate of Alr2231. For further porins, a connection to the transport of copper or manganese can likewise be assumed.
Besides porins, TonB-like proteins in Anabaena were also investigated. TonB is a plasma membrane-anchored protein that, in complex with ExbB and ExbD, provides the energy for transport processes across the outer membrane. For this purpose, TonB binds with its C-terminus to TBDTs and induces structural changes there that enable substrate import into the periplasm. The proton gradient across the plasma membrane is used as the energy source. Four putative TonB proteins, each differing in length and domain structure, have been identified in Anabaena. In this work, substrate transport experiments and growth analyses showed that TonB3 is involved in the uptake of two siderophores (schizokinen and the xenosiderophore ferrichrome), since the corresponding mutant proved unable to utilize them as an iron source. In addition, TonB3 exhibited further characteristics that have also been attributed to TonB proteins of other organisms (a growth deficit of the mutant under iron limitation, an iron-dependent expression profile). Interestingly, the siderophore ferrichrome was likewise unavailable as an iron source to the tonB4 mutant, which could, for example, point to an involvement of TonB4 in its transport.
No involvement in siderophore transport could be attributed to TonB1, which is characterized by an incomplete TBDT interaction motif, or to TonB2; however, mutants of the individual genes showed specific phenotypic properties. The tonB1 mutant stood out due to comparatively strongly delayed growth under diazotrophic conditions. It could be shown that both the activity and the expression of nitrogenase were reduced in the tonB1 mutant strain. In addition, the heterocysts of this mutant, the cells specialized in nitrogen fixation, showed an abnormal morphology. However, since the expression of tonB1 was not elevated after transfer of wild-type cells into nitrogen-free medium, a direct involvement of TonB1 in heterocyst differentiation can be considered unlikely. The cell constrictions between heterocysts and vegetative cells were less pronounced in I-tonB1 than in the wild type, as shown by staining the cell wall with a fluorescent marker. Likewise, using the fluorescent marker calcein, it could be shown that the rate of molecular diffusion between heterocysts and vegetative cells, and also between two neighbouring vegetative cells, is increased in the tonB1 mutant. Presumably, therefore, more nitrogenase-damaging oxygen can enter the heterocysts. These results point to a function of the protein in the assembly of the septal structures, for example through regulation of peptidoglycan synthesis or distribution, which is why TonB1 was renamed SjdR (septal junction disc regulator).
Examination of the tonB2 mutant revealed an altered pigmentation, increased lipopolysaccharide production and filament aggregation, as well as an increased resistance to certain antibiotics or detergents. The latter could be attributed to the reduced porin expression that was also observed in the tonB2 mutant. Moreover, an increased accumulation of copper and molybdenum was measured in the mutant, which could be a reason for the altered pigmentation and could likewise influence porin expression. Overall, the absence of TonB2 appears to affect the integrity of the outer membrane. A function analogous to the Tol system can therefore be assumed for TonB2.
The main subject of this thesis is the study of hadron and photon production in relativistic heavy-ion collisions by means of hydrodynamics+transport approaches. Two different kinds of such hybrid approaches are employed in this work, the SMASH-vHLLE-hybrid and a MUSIC+SMASH hybrid. While the former is capable of simulating heavy-ion collisions covering a wide range of collision energies down to √s = 4.3 GeV, reproducing the correct baryon stopping powers, the latter provides a framework to consistently model photon production in the hadronic stage of high-energy heavy-ion collisions.
The SMASH-vHLLE-hybrid is a novel state-of-the-art hybrid approach whose development constitutes a major contribution to this thesis. It couples the hadronic transport approach SMASH to the 3+1D viscous hydrodynamics approach vHLLE. Therein, SMASH is employed to provide the fluctuating 3D initial conditions and to model the late hadronic rescattering stage, and vHLLE for the fluid dynamical evolution of the hot and dense fireball. The initial conditions are provided on a hypersurface of constant proper time, and the macroscopic evolution of the fireball is carried out down to an energy density of ecrit = 0.5 GeV/fm3, where particlization occurs. Consistency at the interfaces is verified in view of global, on-average quantum number conservation, and the SMASH-vHLLE-hybrid is validated by comparison to SMASH+CLVisc as well as UrQMD+vHLLE hybrid approaches. The establishment of the SMASH-vHLLE-hybrid to theoretically describe heavy-ion collisions at intermediate and high collision energies forms a basis for a range of extensions and future research projects. It is further made available to the heavy-ion community by virtue of being published on GitHub.
The SMASH-vHLLE-hybrid is applied to simulate Au+Au/Pb+Pb collisions between √s = 4.3 GeV and √s = 200.0 GeV. A good agreement with the experimentally measured rapidity and transverse mass spectra is obtained. In particular, the baryon stopping dynamics are well reproduced at low, intermediate, and high collision energies. Excitation functions for the mid-rapidity yield and mean transverse momentum of pions, protons and kaons are demonstrated to agree well with their experimentally measured counterparts. These results further validate the approach and provide a solid baseline for potential future studies. The importance of annihilations and regenerations of protons and anti-protons is additionally investigated in Au+Au/Pb+Pb collisions between √s = 17.3 GeV and √s = 5.02 TeV with the SMASH-vHLLE-hybrid. It is found that, regarding the p + p̄ ↔ 5π reaction, 20-50% (depending on the rapidity range) of the (anti-)proton yield lost to annihilations in the hadronic rescattering stage is restored owing to the back-reaction. The back-reaction thus constitutes a non-negligible contribution to the final (anti-)proton yield and should not be neglected when modelling the late rescattering stage of heavy-ion collisions.
The MUSIC+SMASH hybrid is a hybrid approach ideally suited to model the production of photons in relativistic heavy-ion collisions. Therein, the macroscopic production of photons in the hadronic stage in MUSIC relies on the identical effective field theories as the photon cross sections implemented in SMASH for the microscopic production. The MUSIC+SMASH hybrid thus provides the first consistent framework for hadronic photon production. It accounts for 2 → 2 scattering processes of the kind π + ρ → π + γ and pion bremsstrahlung processes π + π → π + π + γ. The MUSIC+SMASH hybrid is employed in an ideal 2D setup to systematically assess the importance of non-equilibrium dynamics in the hadronic rescattering stage for mid-rapidity transverse momentum spectra and elliptic flow of photons at RHIC/LHC energies. This is achieved by comparing the outcome of the MUSIC+SMASH hybrid, involving an out-of-equilibrium late rescattering stage, to a macroscopic approximation of late-stage photon production by means of MUSIC, employed down to temperatures well below the switching temperature. It is found that non-equilibrium dynamics have only minor implications for photon transverse momentum spectra, but significantly enhance the photon elliptic flow. An enhancement of up to 70% at RHIC energies and of up to 65% at the LHC is observed in the non-equilibrium afterburner as compared to its hydrodynamical counterpart. In combination with the large amount of photons produced above the particlization temperature, these differences are modest regarding the transverse momentum spectra, but a significant enhancement of the elliptic flow is observed at low transverse momenta. Below pT ≈ 1.4 GeV, the combined v2 is enhanced by up to 30% at RHIC, and up to 20% at the LHC, within the non-equilibrium setup as compared to its approximation via hydrodynamics.
Non-equilibrium dynamics in the hadronic rescattering stage are hence important, especially in view of momentum anisotropies at low transverse momenta. These findings thus contribute to the understanding of low-pT photons produced in heavy-ion collisions at RHIC/LHC energies and the MUSIC+SMASH hybrid employed for this study provides a baseline for additional studies regarding photon production in the future.
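The elliptic flow v2 quoted above can, in its simplest event-plane form, be sketched in a few lines. This is an illustrative sketch, not the SMASH/MUSIC analysis code; the helper `v2_event_plane` and the toy event are assumptions for the example:

```python
import numpy as np

def v2_event_plane(phis, psi2):
    """Elliptic flow coefficient v2 = <cos 2(phi - Psi_2)> over particle azimuths."""
    return float(np.mean(np.cos(2.0 * (np.asarray(phis) - psi2))))

# Toy check: for azimuthally isotropic emission the anisotropy vanishes,
# while particles emitted exactly along the event plane give v2 = 1.
rng = np.random.default_rng(0)
isotropic = rng.uniform(0.0, 2.0 * np.pi, 100_000)
print(round(v2_event_plane(isotropic, 0.0), 2))
```

For a thermal source superimposed with a dN/dφ ∝ 1 + 2 v2 cos(2φ) modulation, the same estimator recovers the input v2.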
To summarize, the approaches and frameworks presented in this thesis provide a solid baseline for further extensions and studies aimed at improving the understanding of hadron and photon production in relativistic heavy-ion collisions across a wide range of collision energies. More broadly, such future studies of hadrons and photons may contribute to a better understanding of the properties of the fundamental building blocks of matter, of which everything that surrounds us is made.
The central dogma of molecular biology describes the sequential transfer of information from DNA, via transcribed mRNA, to the translated protein. In eukaryotes, transcription and translation are separated spatially as well as temporally by cellular compartmentalization. Prior to active, export factor-dependent transport from the nucleus to the cytosol, the newly formed pre-mRNA must mature. This involves 5' capping, splicing, and endonucleolytic cleavage and polyadenylation (CPA).
Transcription of a new pre-mRNA is terminated by hydrolytic cleavage in the 3'-UTR, and the newly formed 3'-end is protected from premature degradation by synthesis of a poly(A) tail. These processes are catalyzed by four multi-protein complexes (CFIm, CFIIm, CPSF, and CstF) and poly(A) polymerase (PAP). CPA is sequence-specific and dependent on RNA-binding proteins (RBPs). CPA-specific sequences include the poly(A) motif ('AAUAAA' and certain motif variants), the UGUA motif, and U/GU-rich sequences upstream and downstream of the poly(A) signal, respectively. About 70% of mammalian genes have more than one polyadenylation site (PAS) and express transcripts of different lengths by a mechanism called alternative polyadenylation (APA). This can affect the length of the 3'UTR (3'UTR-APA) or the coding sequence of the transcript (CDS-APA) if the alternative PAS is upstream of the stop codon. The length of the 3'UTR affects the stability, export efficiency, subcellular localization, translation rate, and local translation of the nascent transcript. 3'UTR-APA is regulated by the interplay of cis-elements (poly(A) motif, UGUA, and U/GU) and trans-elements (expression of CPA factors). The functions of the individual cis- and trans-elements have been studied extensively, yet the regulation of alternative polyadenylation, i.e. the decision whether to use the proximal or the distal PAS, is less well understood and requires additional study.
In murine P19 cells, we were able to demonstrate for the first time a direct link between 3'UTR-APA and the nuclear export of mature mRNA by the splicing factors SRSF3 and SRSF7, and to decipher the underlying mechanism. At its core is the direct recruitment of the export factor NXF1 by SRSF3 and SRSF7 to transcripts with 3'UTRs of different lengths.
The primary goal of the thesis presented here was to decipher the function of SRSF3 and SRSF7 in the regulation of 3'UTR-APA and to determine the basic mechanism. For this purpose, various genome-wide methods, such as RNA-Seq, MACE-Seq, and iCLIP-Seq, were integrated and the findings were supported by reporter gene and mutation studies.
Initial determination of the poly(A)-tome in P19 cells by MACE-Seq yielded approximately 16,000 PAS and showed that slightly less than 50% of all genes used two or more PAS and expressed alternative 3'UTR isoforms. Further DaPARS analyses after knockdown of Srsf3 or Srsf7 confirmed that SRSF3 affected more transcripts than SRSF7 and led primarily to the expression of long 3'UTRs, whereas SRSF7 promoted the expression of short 3'UTRs. Integration of SRSF3- and SRSF7-specific iCLIP data suggested a possible competition between SRSF3 and SRSF7 at the proximal PAS (pPAS), which could thus act as a hotspot of 3'UTR regulation.
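The proximal-versus-distal PAS usage that DaPARS-type analyses quantify can be illustrated with a toy per-gene index. This is a hypothetical sketch, not the thesis pipeline; the read counts and the helper `distal_usage_index` are assumptions:

```python
def distal_usage_index(proximal_reads, distal_reads):
    """Fraction of transcripts using the distal PAS (long 3'UTR isoform)."""
    total = proximal_reads + distal_reads
    return distal_reads / total if total else float("nan")

# Toy example: a shift toward long 3'UTRs (as observed after Srsf3
# knockdown) shows up as an increased index.
print(distal_usage_index(30, 70))  # prints 0.7
```

Comparing such an index between knockdown and control conditions, gene by gene, is the basic logic behind genome-wide 3'UTR-APA calls.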
Experiments with intron-free reporter genes revealed that SRSF3- and SRSF7-dependent regulation of 3'UTR-APA is independent of splicing. With respect to SRSF7, a concentration dependence was demonstrated. Mutation experiments involving the SRSF3- and SRSF7-specific binding motifs in the 3'UTR also confirmed the hypothesis of competition between the two SR proteins.
Extensive Co-IP experiments clearly demonstrated that only SRSF7, but not SRSF3, can interact with CFIm and FIP1 (a subunit of the CPSF complex) in an RNA-independent manner. In addition, we showed that these interactions exhibited some phosphorylation dependence, such that the interaction with FIP1 arose primarily in the semi- to hypophosphorylated state of SRSF7, whereas the interaction with CFIm was mainly detected in the hyperphosphorylated state. In subsequent mutation experiments, the differential affinity of SRSF3 and SRSF7 for polyadenylation factors could be attributed to two SRSF7-specific domains: a CCHC-type Zn finger between the RRM and the RS domain, and a hydrophobic 27-amino-acid region in the middle of the RS domain. Together, this suggested that SRSF3 could block the utilization of the pPAS, whereas SRSF7 could activate it by directly recruiting polyadenylation factors.
Interestingly, we showed that knockdown of Srsf3 also negatively regulates the expression of Cpsf6 (a subunit of CFIm) through alternative splicing, which subsequently leads to decreased expression of CPSF6 and of CFIm. Reduction of CFIm led to increased expression of transcripts with short 3'UTR, analogous to knockdown of Srsf3. This mirrors the results of previous studies. A direct comparison between SRSF3- and CPSF6-specific transcripts revealed that not all targets were congruent. In addition, we found preliminary evidence for CFIm-related masking of essential cis-pPAS elements by bimodal UGUA motifs at the pPAS. In summary, we present a novel mechanism of indirect 3'UTR-APA regulation through SRSF3-conditional expression of the CFIm subunit CPSF6.
...
This dissertation describes the development of the beam dynamics design of a novel superconducting linear accelerator. At a main operating frequency of 216.816 MHz, ions with a mass-to-charge ratio of up to 6 can be accelerated at high duty cycles up to CW operation. Intended for construction at the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt, the work focuses on the beam dynamics design of the accelerator section downstream of the high charge injector (HLI) at an injection energy of 1.39 MeV/u. An essential feature of this linear accelerator (linac) is the use of the EQUUS (Equidistant Multigap Structure) beam dynamics concept for a variably adjustable output energy between 3.5 and 7.3 MeV/u (corresponding to about 12.4% of the speed of light) with a required low energy spread of at most 3 keV/u.
The GSI Helmholtz Centre for Heavy Ion Research is a large-scale research facility that uses its particle accelerators to perform basic research with ion beams. Research on super-heavy elements ("SHE") is a major focus, as their production and study are expected to provide answers to a large number of scientific questions. The production and detection of elements with atomic numbers 107 to 112 (bohrium, hassium, meitnerium, darmstadtium, roentgenium, and copernicium) was first achieved at GSI between 1981 and 1996.
Key to this remarkable progress in SHE research were continuous developments and technical innovations: on the one hand in the field of experimental sensitivity and detection of the nuclear reaction products, and on the other in accelerator technology.
For the acceleration of the projectile beam, the UNILAC (Universal Linear Accelerator), which was put into operation in 1975, has been used at GSI so far. In the course of the reconstruction and expansion of the research infrastructure at GSI, a dedicated new particle accelerator, HELIAC (Helmholtz Linear Accelerator), is now under development to meet the special requirements of the beam parameters for the synthesis of new superheavy elements. Typically, the production rates of super-heavy elements with effective cross sections in the picobarn range are very low. Therefore, a high duty cycle (up to CW operation) is a key feature of HELIAC. Thus, the required beam time for the desired nuclear reactions can be significantly shortened.
Theoretical preliminary work by Minaev et al. and newly created knowledge about design, fabrication, and operation of superconducting drift tube cavities have laid the foundation for this work and thus the development of the HELIAC linear accelerator. It consists of a superconducting and a normal conducting part. Acceleration takes place in the superconducting part in four cryomodules, each about 5 m long. These contain three CH cavities, one buncher cavity, two solenoid magnets for transverse beam focusing, and two beam position monitors (BPMs).
The following 10 m long normal conducting section primarily serves beam transport and ends with a buncher cavity, which is operated at the halved frequency of 108.408 MHz.
A key feature of this accelerator is the variability of the output energy from 3.5 to 7.3 MeV/u with a small energy uncertainty of at most ±3 keV/u over the entire output energy range. For the development of HELIAC, the EQUUS beam dynamics concept employed combines the advantages of conventional linac designs with the high acceleration gradients of superconducting CH-DTLs. By doubling the frequency (compared to the GSI high charge injector) to 216.816 MHz in the superconducting section and using CH cavities at acceleration gradients of up to 7.1 MV/m, an acceleration efficiency with superconducting drift tube structures that is unique worldwide becomes possible. At the same time, the compact lengths of the CH cavities ensure good handling in both production and operation. EQUUS provides longitudinal beam stability in all energy ranges of the accelerator through the sliding motion of the synchronous phase within each CH cavity. The rms emittance growth is moderate in all planes. The modular design of HELIAC with four cryomodules in principle allows the linac to be commissioned starting with the first cryomodule, the so-called Advanced Demonstrator. Already in the subsequent expansion stage with only the first two cryomodules of HELIAC, the lower limit of the energy range to be provided by HELIAC (3.5 MeV/u) can be clearly exceeded, so that use in regular beam operation at GSI is conceivable from this stage on.
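The quoted correspondence between 7.3 MeV/u and roughly 12.4% of the speed of light follows from elementary relativistic kinematics, T = (γ − 1) m_u c². A minimal sketch, not HELIAC design code (the constant and helper name are assumptions for the example):

```python
import math

M_U_C2 = 931.494  # MeV, rest energy per atomic mass unit

def beta_from_kinetic(t_per_u):
    """Beam velocity as a fraction of c for kinetic energy t_per_u in MeV/u."""
    gamma = 1.0 + t_per_u / M_U_C2   # T = (gamma - 1) * m_u c^2
    return math.sqrt(1.0 - 1.0 / gamma**2)

print(f"{beta_from_kinetic(7.3):.4f}")  # prints 0.1245, i.e. ~12.4% of c
```

The same formula gives about 8.6% of c for the 3.5 MeV/u lower end of the output energy range.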
Error tolerance studies were used to investigate the stability of the HELIAC beam dynamics design against possible alignment errors of the magnetic focusing elements and accelerator cavities as well as errors of the electric field amplitudes and phases; the design was essentially confirmed and critical parameters were determined. An additional steering concept using dipole correction coils at the solenoid magnets allows transverse beam control as well as diagnostics by means of two BPMs per cryomodule.
With completion of this work in 2021, the CH1 and CH2 cavities have already been built and are in the final preparation and cold test phase. In parallel, the development of the CH cavities CH3-11 has also been started.
In times of global warming and climate change, strategies for avoiding, reducing, or recycling CO2 emissions, as well as the move away from fossil fuels, are becoming increasingly important. Consequently, technologies for capturing, storing, and recycling CO2 are attracting growing attention, and diverse chemical as well as biological approaches are being pursued. One of these options is the reduction of CO2 with molecular hydrogen. The direct hydrogenation of CO2 to formic acid or formate not only fixes CO2 but also stores H2 in liquid form. Formic acid offers several advantages over highly volatile hydrogen gas and belongs to the group of liquid organic hydrogen carriers. Beyond that, formic acid has versatile applications as a feedstock for chemicals or as a microbial carbon source, and the compound is attracting increasing interest.
Nature provides biological catalysts (enzymes) for the reduction of CO2. The group of obligately anaerobic, acetogenic bacteria uses so-called formate dehydrogenases as CO2 reductases to fix CO2 in the bacterial Wood-Ljungdahl pathway (WLP). These enzymes catalyze the reversible two-electron reduction of CO2 to formic acid. Recently, a novel cytoplasmic enzyme complex was isolated from the two representatives A. woodii (mesophilic) and T. kivui (thermophilic). This enzyme complex couples the reduction of CO2 directly to the oxidation of H2 and is therefore termed hydrogen-dependent CO2 reductase (HDCR). The HDCR catalyzes the reversible hydrogenation of CO2 to formate with nearly identical kinetics and turnover rates in both directions. The turnover rates achieved for CO2 reduction exceeded those of previous chemical as well as biological catalysts by several orders of magnitude.
In view of the exceptional catalytic properties of the HDCRs, this work investigated the biotechnological applicability of these enzymes as biocatalysts for the storage and sequestration of H2 and CO2 in the form of formic acid. Specifically, an HDCR-based whole-cell system was developed for the thermophilic bacterium T. kivui. To ensure whole-cell conversion of H2 and CO2 to formate, the further conversion of formate to acetate in the WLP first had to be stopped. By reducing the cellular ATP content, further processing of the formate generated by the HDCR reaction in the bacterium's metabolism could be prevented. Formate formation from H2 and CO2 was examined and characterized in cell suspensions of T. kivui. Here, T. kivui cells showed the highest specific formate formation rate reported in the literature to date. This work also examined the conversion of synthesis gas (H2 + CO2 and CO) and of CO to formate. Bioenergetically uncoupled and CO-adapted T. kivui cells were indeed able to convert synthesis gas exclusively to formate. To understand CO utilization to acetate and formate in the metabolism of Rnf-type (A. woodii) and Ech-type (T. kivui) acetogens, Δhdcr, ΔcooS, ΔhydBA, Δrnf and Δech2 mutants of A. woodii and T. kivui were employed. In both organisms, CO-based formate formation depended on the presence of a functional HDCR enzyme complex.
With a view to biotechnological application, scale-up of the whole-cell system was pursued toward bioreactor scale with controlled process conditions. This work demonstrates the efficient conversion of H2 and CO2 to formate and vice versa using a stirred-tank reactor. The process showed an efficiency of 100% for the conversion of CO2 to formate, and specific rates of 48.3 mmol g-1 h-1 were achieved with A. woodii cells. The specific H2 production rate (qH2) from formic acid oxidation was 27.6 mmol g-1 h-1, and more than 2.12 M formic acid was oxidized over a period of 195 h. Key parameters of enzyme catalysis, such as the turnover frequency (TOF) and the turnover number (TON), were also determined experimentally. Based on the process understanding gained and the efficient reversibility of the catalyzed reactions, a whole-cell-based bioreactor setup was finally chosen that allows repeated storage and release of H2 in a single stirred-tank reactor using the same catalyst. Over a process time of 2 weeks and 15 CO2 reduction/formate oxidation cycles, an average of 330 mM formate was produced and oxidized per cycle.
In summary, this work addresses the biotechnological applicability of a whole-cell system for the storage and sequestration of H2 and CO2 in the form of formate and vice versa. The catalytic activity of the organisms studied rests on the activity of a novel enzyme complex first discovered in the group of acetogenic bacteria. This enzyme complex, termed hydrogen-dependent CO2 reductase, could drive the future design of enzyme-inspired, efficient chemical catalysts. The use of the enzyme or cells in so-called hydrogels, or the establishment of electrochemical processes, is also conceivable. This work thus provides a basis for possible future applications of the established whole-cell system of A. woodii and T. kivui in the field of the hydrogen economy.
Despite constant progress in basic and translational research, cancer is still one of the leading causes of death. In particular, tumors of the central nervous system (CNS) are usually associated with a dismal prognosis. Although about 100 distinct subtypes of primary CNS tumors have been classified molecularly, metastases derived from primaries outside the CNS (= brain metastases, BrM) are more frequently observed across brain tumor patients. It is estimated that approximately 20 - 40 % of all cancer patients will develop BrM during their course of disease, and essentially every tumor type is able to metastasize to the brain. Nevertheless, BrM are most frequently derived from primaries of the lung, breast, and skin (melanoma). Treatment options for patients with BrM are very limited, and standard-of-care therapies include surgery, ionizing radiation (e.g. whole-brain radiotherapy, WBRT), and some systemic and immunotherapeutic approaches.
The brain represents a unique organ, in part due to the presence of the blood-brain barrier, a unit of the neurovascular interface that ensures tightly regulated exchange of nutrients, molecules, and cells. Furthermore, apart from microglia, the brain parenchyma does not harbor other immune cells; those cells can, however, be found at the borders of the CNS, residing for instance in the meninges. Based on recent insights into the immune landscape of the CNS, a paradigm shift has occurred: the brain is no longer regarded as immune-privileged but rather as immune-distinct. The phenomenon of immune cell infiltration has been described before in the context of neurological disorders, including multiple sclerosis, as well as in brain tumors.
Since the development of immunotherapeutic approaches for tumors outside the CNS that aim to evoke sustainable anti-tumor effects, it has become increasingly interesting to understand and harness the immune landscape (= tumor microenvironment, TME) of brain tumors as well. Interestingly, most of the knowledge about the TME is based on studies of primary brain tumors. However, it is known that BrM, compared to primary brain tumors, induce a different TME, for example recruiting far more lymphocytes; the sparse lymphocyte infiltrate is one of the reasons primary brain tumors are considered immunologically “cold” and respond poorly to immunotherapies. Previous insights into the functional contribution of tumor-associated cells to BrM progression revealed, for example, that brain-resident cell types (e.g. astrocytes or microglia) promote BrM development and outgrowth. However, until recently a comprehensive view of the cellular composition and functional role of the brain metastasis-associated TME was missing, and little was known about how it changes during tumor progression or standard therapy.
Hence, this thesis sought to describe novel aspects of the TME in preclinical BrM models, comprising two xenograft models and one syngeneic mouse model. BrM was induced by intracardiac injection of tumor cells with high brain tropism. Both xenograft models were based on immunocompromised nude mice (Balb/c nude) and included the melanoma-to-brain (M2B) model H1_DL2 and the lung-to-brain (L2B) model H2030. In addition, the breast-to-brain model 99LN-BrM was used in wild-type mice (BL6) and therefore represented an immunocompetent, syngeneic model. First BrM could be detected in the xenograft models 3 weeks after injection, whereas first 99LN BrM were detected at 5 weeks. BrM development and progression were monitored by weekly bioluminescence imaging in the xenograft models, while tumor progression in the 99LN model was examined by magnetic resonance imaging. Based on these measurements, and for further histologic and cytometric experiments, mice were stratified into groups with small or large BrM, respectively. Initial immunostainings confirmed previous findings, showing that brain-resident cells such as astrocytes and microglia become activated in the presence of tumor cells, whereas neurons, for example, rather appear to be passive bystanders. Importantly, an accumulation of IBA1+ cells was observed during BrM progression. IBA1 is a pan-macrophage marker that stains all tumor-associated macrophages (TAMs). Previous work, however, suggested that the TAM population in BrM also consists of at least two main subpopulations: the tissue-resident microglia (MG, TAM-MG) and the peripheral, monocyte-derived macrophages (TAM-MDM). Since both cell types share morphological traits within the tumor, and due to the lack of markers to distinguish them, an exact discrimination of the two cell types was difficult in the past.
Recently, an integrative lineage-tracing-based study identified the integrin CD49d as MDM-specific in the context of brain tumor-associated myeloid cells, thereby enabling a reliable dissection of both TAM populations, e.g. in flow cytometric experiments.
One of the main aims of this thesis was to dissect the myeloid TME in the three different BrM models during tumor progression. Using a 5-marker flow cytometry (FCM) panel (CD45/CD11b/Ly6C/Ly6G/CD49d), the following cell populations were examined in more detail: granulocytes, inflammatory monocytes, MDM, and MG.
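The marker panel above implies a simple gating hierarchy, with CD49d separating TAM-MDM from TAM-MG among CD45+ CD11b+ myeloid cells. A hypothetical sketch (the thresholds and the helper `classify_myeloid` are illustrative, not the actual FCM gating strategy):

```python
def classify_myeloid(cd45, cd11b, ly6c, ly6g, cd49d, thr=0.5):
    """Toy gate: a marker counts as positive if its normalized intensity exceeds thr."""
    if cd45 < thr or cd11b < thr:
        return "non-myeloid"
    if ly6g > thr:
        return "granulocyte"
    if ly6c > thr:
        return "inflammatory monocyte"
    # CD49d distinguishes monocyte-derived macrophages from microglia
    return "TAM-MDM" if cd49d > thr else "TAM-MG"

# A CD45+ CD11b+ Ly6C- Ly6G- CD49d+ event falls into the MDM gate:
print(classify_myeloid(1.0, 1.0, 0.0, 0.0, 1.0))  # prints TAM-MDM
```

In practice such gates are set per experiment on fluorescence-minus-one controls rather than fixed thresholds.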
...
Paleoclimate reconstructions that aim to investigate climate-human interactions over long time series are, favored by the currently intense climate debate, gaining ever greater prominence in public and scientific perception. Despite all the scientific progress made in modern climate research over recent decades, the reliable prediction and modelling of future climate change still remains one of the greatest challenges of our time. Taking the Caribbean as an example, many model calculations predict, as a consequence of rising ocean temperatures, a markedly more frequent occurrence of tropical storms and hurricanes as well as a shift toward higher storm intensities. For the Caribbean and many adjacent states, this trend represents one of the greatest hazards of modern climate change, and it needs to be investigated scientifically over a long time frame.
Climate projections mostly rely entirely on highly resolved instrumental data sets. These, however, are all limited by one essential aspect: owing to their restricted availability (~150 years), they lack the depth required to adequately capture the processes of global climate dynamics that operate on long time scales. Considering the Holocene in its entirety, global climate dynamics over the past ~11,700 years have been governed by periodically recurring processes. These fundamentally act over periods of several decades, sometimes centuries, and in some cases even millennia. Many of these natural processes cannot be fully identified within the short instrumental era and adequately accounted for in climate models. Considering the instrumental era alone therefore offers only a limited perspective for understanding the causes and courses of past climate changes and the possible consequences of future ones. To overcome this limitation, geoscientific research must use proxy methods to attain a comprehensive and mechanistic understanding of all Holocene climate changes.
Bearing in mind this limitation, the rising ocean temperatures, and the increased occurrence of strong tropical cyclones in the Caribbean over the past 20 years, it is understandable that this doctoral thesis set out to produce a two-millennia-long, annually resolved climate data set reflecting late Holocene variations in sea surface temperature (SST) and the resulting long-term changes in the frequency of tropical cyclones. In Central America, the end of the Maya civilization (900-1100 CE) is associated with drastic environmental changes (e.g. droughts) brought about by a global climate shift during the Medieval Warm Period (MWP; 900-1400 CE). The information on past climate variations derived from a "blue hole" can serve as a reference for the current climate crisis.
A "blue hole" is a karst cave that formed subaerially in the carbonate framework of a reef system during past sea-level lowstands and was completely flooded in the course of subsequent sea-level rise. Anoxic bottom-water conditions occur in a few marine blue holes. The successions of marine sediments deposited in these anoxic karst caves can be used as a unique climate archive because, in the absence of bioturbation, they exhibit annual layering (varving).
This cumulative dissertation on the "Great Blue Hole" presents the results of a three-year research project whose goal was to generate a scientifically outstanding late Holocene climate data set for the south-western Caribbean. The "Great Blue Hole" is a globally unique marine sediment archive for various late Holocene climate changes, which in the course of this dissertation was examined with regard to both paleoclimatic and sedimentological questions. Specifically, this thesis deals with (1) the development of an annually resolved archive of tropical cyclones, (2) the development of an annually resolved SST data set, and (3) a compositional quantification of the sedimentary successions together with a facies-stratigraphic characterization of fair-weather sediments and storm layers. For each of these three aspects, a paper was published in a recognized peer-reviewed scientific journal.
The 8.55 m long sediment core ("BH6") examined for this dissertation was retrieved from the bottom of the 125 m deep and 320 m wide "Great Blue Hole", located in the shallow eastern lagoon of the "Lighthouse Reef" atoll, 80 km off the coast of Belize (Central America). Owing to its particular geomorphology, the "Great Blue Hole", positioned within the Atlantic hurricane belt, acts as a gigantic sediment trap. The successions of fine-grained carbonate sediments deposited continuously under fair-weather conditions are interrupted by coarse storm layers attributable to overwash processes of tropical cyclones.
...
Human GLUTs represent a family of specialized transporters that facilitate the diffusion of hexoses across membranes along a concentration gradient. The 14 isoforms share high sequence identity but differ in substrate specificity, affinity, and tissue distribution. According to their structural similarity, GLUTs are divided into three classes, with class 1 comprising the most intensively studied isoforms GLUTs 1-4. Abnormal function of different GLUT members has been linked to the pathogenesis of various diseases, including cancer and diabetes. Hence, GLUTs are the subject of intensive research, and efforts concentrate on identifying GLUT-selective ligands for putative medical purposes and for application in studies aiming to further unravel the metabolic roles of these transporters.
The hexose transporter deficient (hxt0) yeast strain EBY.VW4000 is devoid of all its endogenous hexose transporters and unable to grow on glucose or related hexoses. This strain has proven to be a valuable platform to investigate heterologous transporters due to its easy handling, increased robustness, and versatile applications. However, the functional expression of GLUTs in yeast requires certain modifications. Single point mutations of GLUT1 and GLUT5 led to their functional expression in EBY.VW4000, whereas the native GLUT1 was actively expressed in EBY.S7, a hxt0 strain carrying the fgy1 mutation that putatively reduces the phosphatidylinositol-4-phosphate (PI4P) content in the plasma membrane. GLUT4 was only actively expressed in the hxt0 strain SDY.022, which also contains the fgy1 mutation and in which ERG4 is additionally deleted. Erg4 is one of the late enzymes in the ergosterol pathway, and therefore SDY.022 probably has an altered sterol composition in its membrane.
The goal of this thesis was to actively express GLUT2 and GLUT3 in a hxt0 yeast strain, providing a convenient system for their ligand screening. A PCR-derived amino acid exchange in the sequence of GLUT3 enabled its functional expression in EBY.VW4000, and the unmodified GLUT3 protein was active in EBY.S7. Functional expression of GLUT2 was achieved by rational design. The extracellular loop between transmembrane regions 1 and 2 is significantly larger in GLUT2 than in other class 1 GLUTs. By truncating this loop by 34 amino acids and exchanging an alanine for a serine, a GLUT3-like loop was implemented. The resulting construct GLUT2∆loopS was functional in EBY.S7. With an additional point mutation in transmembrane region 11, GLUT2∆loopS_Q455R was also actively expressed in EBY.VW4000. Inhibition studies with the known GLUT inhibitors phloretin and quercetin showed reduced transporter activity for GLUT2 and GLUT3 in uptake assays and growth tests in the presence of the inhibitors, demonstrating that both systems are amenable to ligand-screening experiments.
The newly established GLUT2 yeast system was then used to screen a library of compounds pre-selected by in silico screening. Thereby, eleven identified GLUT2 inhibitors exhibited strong potencies with IC50 values ranging from 0.61 to 19.3 µM. By employing the other yeast systems, these compounds were tested for their effects on GLUT1, and GLUTs3-5, revealing that nine of the identified ligands were GLUT2-selective. In contrast, one was a pan-class 1 inhibitor (inhibiting GLUTs1-4), and one affected GLUT2 and GLUT5, the two fructose transporting isoforms. These compounds will serve as useful tools for investigations on the role of GLUT2 in metabolic diseases and might even evolve into pharmaceutical agents targeting GLUT2-associated diseases.
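IC50 values such as those reported above are typically read off a dose-response series of transporter activity versus inhibitor concentration. A minimal sketch using log-linear interpolation around 50% residual activity (the numbers and the helper `ic50_by_interpolation` are hypothetical, not the thesis data):

```python
import math

def ic50_by_interpolation(concs, activities):
    """concs in µM (ascending); activities as fractions of the uninhibited control."""
    points = list(zip(concs, activities))
    for (c1, a1), (c2, a2) in zip(points, points[1:]):
        if a1 >= 0.5 >= a2:
            f = (a1 - 0.5) / (a1 - a2)  # position of the 50% crossing
            return 10 ** (math.log10(c1) + f * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition is not bracketed by the series")

concs      = [0.1, 0.3, 1.0, 3.0, 10.0]      # µM, assumed dilution series
activities = [0.95, 0.85, 0.62, 0.33, 0.12]  # toy residual activities
print(round(ic50_by_interpolation(concs, activities), 2))  # prints 1.58
```

In practice a full Hill-equation fit over all data points is preferred; the interpolation above only illustrates where the quoted values come from.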
Due to the beneficial effect of the putatively changed sterol composition in SDY.022 (by ERG4 deletion) on the functional expression of GLUT4, it was hypothesized that the presence of the human sterol cholesterol, or cholesterol-like sterols, might have a beneficial effect on GLUT expression, too. Thus, it was attempted to generate hxt0 strains that synthesize these sterols by genetic modifications targeting the ergosterol pathway. In the scope of these experiments, several strains with different sterol compositions were generated. Drop tests on glucose medium with the different strains expressing GLUT1 or GLUT4 revealed that the deletion of ERG6 is clearly advantageous for a functional expression of GLUT1 (but not GLUT4). This indicates that the methyl group at the ergosterol side chain (introduced by Erg6 and reduced by Erg4) negatively influences GLUT1 activity. However, this effect on GLUT1 activity was less pronounced than the putative altered PI4P content in EBY.S7.
Additionally, in this thesis, a new tool to measure the glucose transport rates of transporters expressed in the hxt0 yeast system was developed to facilitate their kinetic characterization. For this, the pH-sensitive GFP variant pHluorin was employed as a biosensor for the cytosolic pH (pHcyt) by measuring the ratio (R390/470) of emission intensities at 512 nm obtained from two different excitation wavelengths (390 and 470 nm). Sugar-starved cells exhibit a slightly acidic pHcyt because ATP is depleted, which reduces the activity of ATP-dependent proton pumps.
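The ratiometric readout can be sketched numerically. The Boltzmann-shaped calibration curve and all parameter values below are illustrative assumptions, not taken from the thesis; any real conversion requires an in-situ calibration:

```python
import math

def ratio_to_ph(r390_470, calib):
    """Convert a pHluorin excitation ratio R390/470 into cytosolic pH.

    Assumes a Boltzmann-shaped calibration curve
        R(pH) = r_min + (r_max - r_min) / (1 + 10 ** ((pka - pH) / slope))
    and inverts it. The functional form and the parameter values used
    below are illustrative assumptions, not measured calibrations.
    """
    r_min, r_max, pka, slope = calib
    # keep the ratio strictly inside the invertible range
    r = min(max(r390_470, r_min + 1e-9), r_max - 1e-9)
    return pka - slope * math.log10((r_max - r) / (r - r_min))

calib = (0.2, 2.0, 6.9, 1.0)  # (r_min, r_max, pKa, slope): illustrative only
# in this parameterization, a lower ratio corresponds to a more acidic cytosol
assert ratio_to_ph(0.5, calib) < ratio_to_ph(1.5, calib)
```

Because a ratio of two excitation channels is used, the readout is insensitive to expression level and photobleaching, which is why ratiometric sensors such as pHluorin are favoured for quantitative pHcyt measurements.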
...
Geochemical investigations of biogenic carbonates are commonly conducted to reconstruct the environmental conditions of the past. However, different carbonate producers incorporate elements to varying degrees due to biological vital effects. Detecting and quantifying these effects is crucial for producing reliable reconstructions. Such paleoreconstructions are of great importance for evaluating the consequences of recent climate change and for identifying control mechanisms on the distribution of endangered species such as Desmophyllum pertusum. In chapter three we examined Mg/Ca, Sr/Ca and Na/Ca ratios in this species, among other cold-water scleractinians, to test whether they provide reliable proxy information. The results reveal no apparent control of Mg/Ca or Sr/Ca ratios by seawater temperature, salinity or pH. Na/Ca ratios appear to be partly controlled by seawater temperature, which is also true for other aragonitic organisms such as warm-water corals and the bivalve Mytilus edulis. However, a large variability complicates possible reconstructions by means of Na/Ca. In addition, we explore different models to explain the apparent temperature effect on Na/Ca ratios based on temperature-sensitive Na and Ca pumping enzymes.
The bivalve Acesta excavata is commonly found in cold-water coral reefs across the North Atlantic, together with D. pertusum. Multiple linear regression analysis, presented in chapter four, indicates that up to 79% of the elemental variability in Mg/Ca, Sr/Ca and Na/Ca is explainable with temperature and salinity as independent predictor variables. Vital effects, for instance growth rate effects, are evident and render paleoreconstructions infeasible. Furthermore, organic material embedded in the shell, as well as possible stress effects, can drastically change the elemental composition. Removal of these organic matrices from bulk samples for LA-ICP-MS (laser ablation inductively coupled plasma mass spectrometry) measurements by means of oxidative cleaning is not possible, yet Na/Ca ratios decrease after this cleaning. This is presumably an effect of leaching and not caused by the removal of organic matrices.
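The figure of "up to 79% of the elemental variability explained" corresponds to the R² of a multiple linear regression with two predictors. A minimal sketch of that computation (the data, predictor choice and fitted coefficients here are hypothetical, not from the thesis):

```python
import numpy as np

def r_squared_two_predictors(y, temp, salinity):
    """Share of variance in an element/Ca ratio explained by temperature
    and salinity via ordinary least squares. Illustrative only: the
    inputs stand in for measured ratios and hydrographic data.
    """
    # design matrix with intercept, temperature and salinity columns
    X = np.column_stack([np.ones_like(temp), temp, salinity])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot
```

An R² of 0.79 would mean the residual variance is 21% of the total; the remaining scatter is what the thesis attributes to vital effects.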
Interesting biogeochemical relations were found in the parasitic foraminifera H. sarcophaga. We report Mg/Ca, Sr/Ca, Na/Ca and Mn/Ca ratios measured in H. sarcophaga from two different host species (A. excavata and D. pertusum) in chapter five. Sr/Ca ratios are significantly higher in foraminifera that lived on D. pertusum. This could indicate that dissolved host material is utilized in shell calcification of H. sarcophaga, given the naturally higher strontium concentration in the aragonite of D. pertusum. Mn/Ca ratios are highest in foraminifera that lived on A. excavata but did not fully penetrate the host’s shell. Most likely, this represents a juvenile stage of the foraminifera during which it feeds on the organic
periostracum of the bivalve, which is enriched in Mn and Fe. The isotopic compositions are similarly affected: both δ18O and δ13C values are significantly lower in foraminifera that lived on D. pertusum compared to specimens that lived on A. excavata. Again, this might represent the uptake of dissolved host material or different pH regimes in the calcifying fluid of the hosts (bivalve < 8, coral > 8) that control the extent of hydration/hydroxylation reactions. Temperature reconstructions are possible using stable oxygen isotopes of this foraminifera species; however, the results are only reliable if the foraminifera lived on A. excavata. Samples of H. sarcophaga from D. pertusum would lead to overestimations of the seawater temperature due to the lower δ18O values.
Apart from biological vital effects, storage and preservation methods can significantly change the geochemical composition of different marine biogenic carbonates. In chapter six this is demonstrated using the example of ethanol preservation, a common technique to allow extended storage of biogenic samples. The investigation reveals a significant decrease of Mg/Ca and Na/Ca ratios even after only 45 days of storage in ultrapure ethanol. Sr/Ca ratios, on the other hand, are not influenced.
Besides temperature, salinity and pH, further environmental parameters such as nutrient availability are important, especially for the distribution of cold-water corals. In chapter seven we extend the investigations on A. excavata by including the elemental ratios Ba/Ca, Mn/Ca and P/Ca. We expected P/Ca to be helpful in the otherwise difficult process of identifying growth increments. Based on our observations we had to refute this theory: P/Ca ratios are not systematically enriched in the vicinity of growth lines. Instead, we found a regular sequence of peaks of Ba/Ca, P/Ca and Mn/Ca. This sequence, as well as the peaks in general, is potentially caused by sequential blooms of different algae, diatoms and other planktonic organisms ...
For finite baryon chemical potential, conventional lattice descriptions of quantum chromodynamics (QCD) have a sign problem which prevents straightforward simulations based on importance sampling.
In this thesis we investigate heavy dense QCD by representing lattice QCD with Wilson fermions at finite temperature and density in terms of Polyakov loops.
We discuss the derivation of $3$-dimensional effective Polyakov loop theories from lattice QCD based on a combined strong coupling and hopping parameter expansion, which is valid for heavy quarks.
The finite density sign problem is milder in these theories and they are also amenable to analytic evaluations.
The analytic evaluation of Polyakov loop theories via series expansion techniques is illustrated by using them to evaluate the $SU(3)$ spin model.
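For orientation, the $SU(3)$ spin model evaluated here can be written in the following standard form (a sketch in one common convention; coupling names and normalisations vary between references):

```latex
% SU(3) spin model: L_x = \mathrm{Tr}\, W_x is the traced Polyakov loop
% at site x, \lambda the nearest-neighbour coupling, and h\, e^{\pm\mu}
% couples quarks and antiquarks to the loop.
Z = \int \prod_x \mathrm{d}W_x \,
    \exp\bigg[ \lambda \sum_{\langle x,y \rangle}
        \left( L_x L_y^{*} + L_x^{*} L_y \right)
      + h \sum_x \left( e^{\mu} L_x + e^{-\mu} L_x^{*} \right) \bigg]
```

The free energy density mentioned below is then expanded in powers of the nearest-neighbour coupling $\lambda$.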
We compute the free energy density to $14$th order in the nearest-neighbor coupling and find that predictions for the equation of state agree with simulations to $\mathcal{O}(1\%)$ in the phase where the (approximate) $Z(3)$ center symmetry is intact.
The critical end point is also determined, but with less accuracy, and our results agree with numerical results to $\mathcal{O}(10\%)$.
While the accuracy for the endpoint is limited by the current length of the series, analytic tools provide valuable insight and are more flexible.
Furthermore, they can be generalized to Polyakov loop theories with $n$-point interactions.
We also take a detailed look at the hopping expansion for the derivation of the effective theory.
The exponentiation of the action is discussed using a polymer expansion, and we also explain how to obtain logarithmic resummations for all contributions, which is achieved by employing the finite cluster method known from condensed matter physics.
The finite cluster method can also be used to evaluate the effective theory, and comparisons are made between the evaluation of the effective action and a direct evaluation of the partition function.
We observe that terms in the evaluation of the effective theory correspond to partial contractions in the application of Wick's theorem for the evaluation of Grassmann-valued integrals.
Potential problems arising from this fact are explored.
Next-to-next-to-leading order results from the hopping expansion are used to analyze and compare the onset transition for both baryon and isospin chemical potential.
Lattice QCD with an isospin chemical potential does not have a sign problem and can serve as a valuable cross-check.
Since we are restricted by the relatively short length of our series, we content ourselves with observing some qualitative phenomenological properties arising in the effective theory which are relevant for the onset transition.
Finally, we generalize our results to arbitrary number of colors $N_c$.
We investigate the transition from a hadron gas to baryon condensation and find that for any finite lattice spacing the transition becomes stronger as $N_c$ is increased, becoming first order in the limit of infinite $N_c$.
Beyond the onset, the pressure is shown to scale as $p \sim N_c$ through all available orders in the hopping expansion, which is characteristic for a phase termed quarkyonic matter in the literature.
Some care has to be taken when approaching the continuum, as we find that the continuum limit has to be taken before the large $N_c$ limit.
Although we are currently unable to take the limits in this order, our results are stable in the controlled range of lattice spacings when the limits are approached in this way.
Neurons are cells with a highly complex morphology; their dendritic arbor spans up to thousands of micrometers. This extended arbor poses a challenge for the logistics of neuronal processes: mRNA, proteins, and organelles have to be transported to dendrites, hundreds of micrometers away from the soma. This thesis aims to calculate the minimum number of proteins needed to populate the dendritic trees for different scenarios.
In chapter 2, I analyzed the ability of different mechanisms to populate the dendritic arbor. I started from the solution of the diffusion equation in Sec. 2.1; then, in Sec. 2.2, I included the contribution of active transport and showed how it can either increase the effective diffusion coefficient or introduce a bias in the diffusion process. In Sec. 2.3 I studied the spatial distribution of locally synthesized proteins, according to whether the mRNA is actively or passively transported. In Sec. 2.5, I derived the boundary condition at branch points, showing a qualitatively different behavior of surface and cytoplasmic proteins induced by the dimensionality of the medium in which they diffuse.
In chapter 3, I introduced the concept of protein requirement, defined as the minimum number of proteins that the neuron needs to produce to provide at least one protein to each micrometer of the dendritic arbor. In Sec. 3.1, I derived the protein requirement for diffusive proteins under somatic translation and under constant translation in the dendritic arbor. In Sec. 3.2, I analyzed numerically the protein requirement in the case of actively transported proteins synthesized in the soma, and, in Sec. 3.3, in the case of actively transported proteins synthesized in the dendritic arbor. In Sec. 3.4, I analyzed the protein requirement of proteins synthesized in the dendrite according to the mRNA distributions described in Secs. 3.2 and 3.3. In Sec. 3.5, I derived the protein requirement for a single branch and purely diffusive proteins.
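For the simplest case of somatically produced, purely diffusive proteins with degradation, the steady-state density decays exponentially with the diffusion length, so the density constraint binds at the distal tip. The sketch below is a toy version of this argument for a single unbranched cable; the thesis's actual derivations for full arbors differ:

```python
import math

def somatic_protein_requirement(length_um, diff_lambda_um, density_min=1.0):
    """Toy protein requirement for one unbranched dendrite of length L
    populated by somatically synthesized, diffusing, degrading proteins.
    With degradation, the steady-state density falls off as
        n(x) = n(0) * exp(-x / lambda),
    so requiring n(L) >= density_min fixes n(0), and the total count is
    the integral of n(x) over the dendrite. A sketch under these
    assumptions, not the thesis's exact expressions.
    """
    n0 = density_min * math.exp(length_um / diff_lambda_um)  # density at the soma
    # integral of n0 * exp(-x/lambda) from 0 to L
    return n0 * diff_lambda_um * (1.0 - math.exp(-length_um / diff_lambda_um))

# the requirement grows exponentially with dendrite length
assert somatic_protein_requirement(100.0, 109.0) < somatic_protein_requirement(500.0, 109.0)
```

For dendrites much shorter than the diffusion length, the requirement reduces to roughly one protein per micrometer, i.e. the geometric minimum; the exponential penalty only sets in for long dendrites.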
In chapter 4, I analyzed the relation between the radii of the three dendritic segments meeting at a branch point, their lengths, and the diffusion length of a protein. In Sec. 4.1 I derived the optimal ratio between the radii of the daughter dendrites that minimizes the protein requirement. In Sec. 4.3 I introduced the 3/2 Rall rule and in Sec. 4.5 its generalization. Finally, I used those rules to estimate the fraction of proteins diffusing away from and toward the soma.
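The 3/2 Rall rule states that at a branch point the parent radius and the daughter radii satisfy r_p^(3/2) = Σ r_i^(3/2). A minimal sketch (the function names are mine, for illustration):

```python
def rall_parent_radius(daughter_radii, exponent=1.5):
    """Parent radius implied by the Rall rule:
    r_p ** exponent == sum of each daughter radius raised to exponent.
    exponent = 3/2 gives the classical rule; other exponents cover
    generalized versions.
    """
    return sum(r ** exponent for r in daughter_radii) ** (1.0 / exponent)

def satisfies_rall(parent, daughters, exponent=1.5, tol=1e-9):
    """Check whether a measured branch point obeys the rule."""
    return abs(parent ** exponent - sum(r ** exponent for r in daughters)) < tol

# a symmetric branch: two equal daughters of radius r imply a parent of r * 2**(2/3)
r = 1.0
parent = rall_parent_radius([r, r])
assert satisfies_rall(parent, [r, r])
```

Fitting the exponent per branch point, rather than fixing it at 3/2, is one way to quantify how far real morphologies deviate from the rule.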
In chapter 5, I analyzed the radii distribution for three categories of neurons: cultured hippocampal neurons in Sec. 5.1, stomatogastric ganglion neurons in Sec. 5.2, and 3D EM-reconstructed prefrontal pyramidal neurons in Sec. 5.3. For each of these three classes, I analyzed the distribution of radii, Rall exponents, and the probability ratio. For most of them, I found that the probability of a protein diffusing away from the soma is higher for surface proteins than for cytoplasmic ones. I quantified this with a parameter called surface bias.
In Chapter 6, I analyzed the fluorescence ratio imaged by our collaborator Anne-Sophie Hafner for a surface protein, GFP::Nlg, and a soluble one, GFP, in cultured hippocampal neurons, and I compared the fluorescence ratio with the probability ratio obtained in Sec. 5.1, finding that they are in good agreement.
In chapter 7, I compared the real dendritic morphologies imaged by one of our collaborators, Ali Karimi, with the optimal branching rule obtained in Sec. 4.1, and I calculated the cost of not having optimal branching radii.
Finally, in Chapter 8, I used the knowledge of the branching statistics gathered in Sec. 5.3 to simulate the protein profile on three different classes of neurons: pyramidal neurons, granule neurons, and Purkinje neurons. I compared the protein profiles for surface and cytoplasmic proteins for each morphology, for two different values of the diffusion length, λ = 109 µm and λ = 473 µm, both for optimized radii and for symmetrical radii. I showed how the radius optimization reduces the protein requirement by a factor of 10^4 for pyramidal neurons.
The project investigates how economic paradigm shifts that occurred at the beginning of the 1970s (primarily the abandonment of the gold standard and the endlessly increasing pool of capital awaiting investment that succeeded it) led to the emergence of a unique building type: the high-altitude observation deck. Part investment vehicle, part iteration of an ongoing fascination with the view from above, the project presents the observation deck as the point where three distinct paradigms intersect: observation, speculation and spectacle. Tracing the emergence of the observation deck through a series of case studies (Top of the World atop the World Trade Center (NYC), One World Observatory (NYC), The Tulip (London)), the project enriches its interdisciplinary approach with archival research and fieldwork. Re-telling the complicated collaboration between architect Warren Platner and graphic designer Milton Glaser at the end of the 1960s, the project lays out how the observation deck was conceived at a time when the perceived “crisis” of New York resulted in a rapidly accelerating neoliberalization of urban space. An avatar of this emerging ideology, the observation deck is heavily invested in making the city visually comprehensible. Incorporating a sort of neoliberalist geometry, the deck transforms the city into a product to be consumed instead of a reality to live in, and thus paves the way for other ventures of what has been called the “experience economy.” Thus, it signals the ongoing shift away from an architecture that possesses any use value towards one that, as Barthes put it with regard to the Eiffel Tower, is centered only on viewing and being viewed. A speculative machine, the observation deck renders the city into a product.
Monte Carlo methods: barrier option pricing with stable Greeks and multilevel Monte Carlo learning
(2021)
For discretely observed barrier options, there exists no closed-form solution under the Black-Scholes model. Thus, it is often helpful to use Monte Carlo simulations, which are easily adapted to these models. However, the discontinuous payoff may lead to instability in the option's sensitivities computed by Monte Carlo algorithms.
This thesis presents a new Monte Carlo algorithm that can calculate the pathwise sensitivities of discretely monitored barrier options. The idea is based on Glasserman and Staum's one-step survival strategy and the results of Alm et al., with which we can stably determine the option's sensitivities, such as Delta and Vega, by finite differences. The basic idea of Glasserman and Staum is to sample from a truncated normal distribution, which excludes the values above the barrier (e.g. for up-and-out options), instead of sampling from the full normal distribution. This approach avoids the discontinuity generated by any Monte Carlo path crossing the barrier and yields a Lipschitz-continuous payoff function.
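A minimal sketch of the one-step survival idea for a discretely monitored up-and-out call under Black-Scholes (my own simplified rendering of the Glasserman-Staum truncation, not the thesis's extended algorithm): at each monitoring date, the survival probability enters as a multiplicative weight, and the next log-increment is drawn from the correspondingly truncated normal via the inverse CDF.

```python
import numpy as np
from scipy.stats import norm

def oss_up_and_out_call(s0, k, barrier, r, sigma, t, n_obs, n_paths, rng):
    """One-step survival Monte Carlo sketch (after Glasserman & Staum)
    for a discretely monitored up-and-out call under Black-Scholes.
    No path ever knocks out discontinuously: the knock-out probability
    is absorbed into a weight, and increments are sampled conditionally
    on survival.
    """
    dt = t / n_obs
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * np.sqrt(dt)
    s = np.full(n_paths, float(s0))
    w = np.ones(n_paths)                          # accumulated survival weights
    for _ in range(n_obs):
        d = (np.log(barrier / s) - drift) / vol   # truncation point in z-space
        p_surv = norm.cdf(d)                      # P(next fixing below barrier)
        w *= p_surv
        u = np.clip(rng.uniform(size=n_paths), 1e-12, None)
        z = norm.ppf(u * p_surv)                  # truncated-normal sample, z < d
        s = s * np.exp(drift + vol * z)
    return np.exp(-r * t) * (w * np.maximum(s - k, 0.0)).mean()

rng = np.random.default_rng(0)
price = oss_up_and_out_call(100, 100, 120, 0.05, 0.2, 1.0, 12, 20000, rng)
```

Because the weighted payoff is a smooth function of the model parameters, finite-difference Greeks computed with common random numbers remain stable, which is the property the thesis builds on.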
The new contribution is an extended algorithm that estimates the sensitivities directly, without simulations at multiple parameter values as in finite differences.
Consider the local volatility model, which is a generalisation of the Black-Scholes model. Although standard Monte Carlo algorithms work well for the pricing of continuously monitored barrier options within this model, they often do not behave stably with respect to numerical differentiation.
To bypass this problem, one would generally either resort to regularised differentiation schemes or derive an algorithm for precise differentiation. Unfortunately, while the widespread Brownian bridge approach leads to accurate first derivatives, these are not Lipschitz-continuous, which results in instability with respect to numerical differentiation for second-order Greeks.
To alleviate this problem - i.e. produce Lipschitz-continuous first-order derivatives - and reduce variance, we generalise the idea of one-step survival to general scalar stochastic differential equations. This approach leads to the new one-step survival Brownian bridge approximation, which allows for stable second-order Greeks calculations.
To demonstrate the new approach's numerical efficiency, we present a corresponding new Monte Carlo pathwise sensitivity estimator for the first-order Greeks and study different methods to compute second-order Greeks stably. Finally, we develop a one-step survival Brownian bridge multilevel Monte Carlo algorithm to reduce the computational cost in practice.
This thesis proves unbiasedness and variance reduction of our new one-step survival version relative to the classical Brownian bridge approach. Furthermore, we present a new convergence result for the Brownian bridge approach using the Milstein scheme under certain conditions. Overall, these properties imply convergence of the new one-step survival Brownian bridge approach.
In recent years, deep learning has become pervasive in various fields. As a family of machine learning methods, it is used in a broad set of applications, such as image processing, voice recognition, email filtering and computer vision. Most modern deep learning algorithms are based on artificial neural networks inspired by the biological neural networks constituting animal brains. Deep learning can also be of use in computational finance: when no closed-form solution is available for an option price, Monte Carlo simulations are essential for its estimation. Instead of repeatedly recomputing prices whenever the volatility term is updated, one could replace these computations by evaluating a neural network.
If a suitable neural network is available, its evaluation could lead to substantial savings and be highly efficient. That is, once trained, a neural network could save further expensive estimations. In practice, however, the challenge is the training process of the neural network.
We study and compare the computational complexity of two generic neural network training algorithms. Then, we introduce a new multilevel training algorithm that combines a deep learning algorithm with the idea of multilevel Monte Carlo path simulation. The idea is to train several neural networks with training data computed from the so-called level estimators of the multilevel Monte Carlo approach introduced by Giles. By formulating a complexity theorem, we show that the new method can reduce computational complexity.
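The level estimators underlying the training data can be illustrated with Giles' telescoping decomposition for a plain expectation (a generic sketch with an Euler-discretised geometric Brownian motion; the neural-network training layer of the thesis is not reproduced here):

```python
import numpy as np

def mlmc_estimate(levels, n_samples, payoff, s0, r, sigma, t, rng):
    """Sketch of multilevel Monte Carlo for E[payoff(S_T)] under
    geometric Brownian motion with Euler time stepping. The finest-grid
    expectation is written as a telescoping sum
        E[P_L] = E[P_l0] + sum_{l > l0} E[P_l - P_{l-1}],
    each term estimated with its own sample count; coarse and fine
    paths on a level share the same Brownian increments.
    """
    total = 0.0
    for l, n in zip(levels, n_samples):
        n_fine = 2 ** l
        dt = t / n_fine
        dw = rng.normal(scale=np.sqrt(dt), size=(n, n_fine))
        s_fine = np.full(n, float(s0))
        for i in range(n_fine):                   # fine-grid Euler path
            s_fine = s_fine + r * s_fine * dt + sigma * s_fine * dw[:, i]
        if l == levels[0]:
            total += payoff(s_fine).mean()        # base-level estimator
        else:
            # coarse path reuses the same noise, pairwise summed
            dw_c = dw[:, 0::2] + dw[:, 1::2]
            s_coarse = np.full(n, float(s0))
            for i in range(n_fine // 2):
                s_coarse = s_coarse + r * s_coarse * (2 * dt) + sigma * s_coarse * dw_c[:, i]
            total += (payoff(s_fine) - payoff(s_coarse)).mean()
    return np.exp(-r * t) * total

rng = np.random.default_rng(0)
call = lambda s: np.maximum(s - 100.0, 0.0)       # undiscounted call payoff
est = mlmc_estimate([2, 3, 4], [20000, 10000, 5000], call, 100.0, 0.05, 0.2, 1.0, rng)
```

Because the level corrections have rapidly decaying variance, most samples can be spent on the cheap coarse levels; in the thesis this same decomposition supplies training targets for a hierarchy of networks.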
In contrast to Japan and the “dragon economies,” the Philippines has not been able to partake in the “Asian Economic Miracle.” In short, the Philippines does not qualify as a developmental state exercising strategic industrial policies of the kind traced in Japan, South Korea, Taiwan and Singapore. In fact, even its Southeast Asian neighbors Malaysia, Thailand and Indonesia had economically outdone the Philippines by the 1980s, even though their prospects were much worse than those of the Philippines in the 1950s. And while the Philippine economy has been experiencing an upsurge in recent years, it is still significantly lagging behind regional standards, especially with regard to industrial development. From a political economy perspective, it is of key interest to what extent the Philippine state has contributed to this subpar development. In order to explore the ongoing Philippine development dilemma, the study offers a comprehensive analysis of the Philippines’ industrial policies, based on distinct government–business relations and patterns of social embeddedness. In addition to assessing the Philippines’ industrial policies and their embeddedness in general, two of the Philippines’ main export industry sectors, textile/garments and electronics, are examined. In this manner, the study contributes to the analysis of the political economy of economic development in the Philippines and provides insights on the prospects and limits of industrial policy in the Southeast Asian context.
The first part of this work addresses the automatic online tuning of transfer lines in particle accelerator facilities. In the second part, the focus lies on the automatic construction and optimisation of such transport lines. It can be shown that genetic algorithms are well suited for optimisation in both cases. Automatic online tuning can be performed very efficiently at accelerators under certain boundary conditions and is particularly well suited for initial beam commissioning with low-intensity pilot beams. The construction of transfer lines can also be formulated and solved as a minimisation problem with a suitable parameterisation. Thereby, both the imaging properties of the beam transport and the robustness assessed in error studies can be optimised at the same time.
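The optimisation loop behind such tuning can be illustrated with a generic real-coded genetic algorithm (a sketch: the thesis's operators, encoding and beam-physics objective are not reproduced here, and the quadratic toy objective merely stands in for a beam-quality figure of merit over magnet settings):

```python
import random

def genetic_minimise(objective, bounds, pop_size=40, generations=60,
                     mut_sigma=0.1, seed=0):
    """Minimal real-coded genetic algorithm. Individuals are vectors of
    settings within `bounds`; each generation keeps an elite quarter,
    then fills the population with uniform-crossover children carrying
    one Gaussian-mutated coordinate (clipped to its bounds).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=objective)[: pop_size // 4]
        children = list(elite)                      # elitism: keep the best
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            i = rng.randrange(dim)                  # mutate one coordinate
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, mut_sigma)))
            children.append(child)
        pop = children
    return min(pop, key=objective)

# toy stand-in objective: distance of the settings from an ideal working point
best = genetic_minimise(lambda x: sum(xi ** 2 for xi in x), [(-1, 1)] * 3)
```

In online tuning, the objective evaluation would be a live machine measurement rather than a function call, which is why the method's tolerance of noisy, gradient-free objectives matters.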
The main focus of research in the field of high-energy heavy-ion physics is the study of the quark-gluon plasma (QGP). The topic of the present work is the measurement of electron-positron pairs (dielectrons), which grant direct access to some of the key properties of this state of matter, since after their formation they leave the hot and dense medium without significant interaction. In particular, the measurement of the initial QGP temperature is considered a "holy grail" of heavy-ion physics. Therefore, in addition to the analysis of existing data, a feasibility study has been conducted to determine to what extent this goal would be achievable by upgrading the ALICE experiment at CERN.
Dielectrons are produced during all stages of a heavy-ion collision, with their invariant mass reflecting the amount of energy available at the time of their formation. Dielectrons of highest mass are thus produced in the initial scatterings of the colliding nuclei by quark-antiquark annihilation. Correlated electron-positron pairs can also emerge from the decay chains of early-produced pairs of heavy-flavour (HF) particles. During the QGP stage and at the beginning of the hadronic phase, the system emits thermal radiation in the form of photons and dielectrons, which carry information about the medium temperature to the observer. In the final stage of the collision, decays of light-flavour (LF) hadrons produce additional contributions to the dielectron spectrum.
The present work is based on early data from the ALICE experiment recorded from lead-lead collisions at a center-of-mass energy of 2.76 TeV. Due to the limited amount of data, a focus is placed on achieving high efficiencies throughout the analysis. To this end, a special electron identification strategy is developed and a custom track selection applied, together resulting in a tenfold increase in pair efficiency. The dielectron spectrum is evaluated on a statistical basis, using a pair prefilter, which is optimized based on two signal quality criteria, to reduce the fraction of electrons and positrons from unwanted sources at minimum signal loss. In addition, an artifact of the track reconstruction is exploited to suppress pairs from photon conversions and to correct the dielectron yield for a contribution from different-conversion pairs. The main signal uncertainty is extracted from the deviation between results of 20 analysis settings and amounts to 20% in most of the studied kinematic range.
For comparison with the analysis results, a hadronic cocktail consisting of the LF and HF contributions is simulated, which describes the measured dielectron production reasonably well, with a hint of an enhancement at low invariant mass. Two approaches to model the in-medium modification of the heavy-flavour contribution are followed, resulting in up to 50% suppression, which creates some additional room for a thermal contribution at intermediate mass.
For a complete comparison between experimental data and theoretical expectation, two model calculations are consulted. The Thermal Fireball Model provides predictions for thermal dielectron radiation from the QGP and hadron gas. The data tends to be better described with these additional thermal contributions. For a comparison with a prediction by the UrQMD model, the HF component of the cocktail is subtracted from the data. This results in better agreement if the HF suppression by in-medium effects is taken into account.
The feasibility study in this work has served as a physical motivation for the ALICE upgrade for LHC Run 3. The precision with which the early temperature of the QGP can be determined via dielectrons is chosen as key observable. A multitude of individual contributions are merged into a fully modeled dielectron analysis. The resulting signal-to-background ratio represents some of the expected systematic uncertainties, while from the significance combined with the planned number of lead-lead collisions a realistic "measurement" with statistical fluctuations around the expected dielectron signal is generated using a Poisson sampling technique. Since the HF yield exceeds the QGP thermal radiation by about an order of magnitude, an additional analysis step exploiting the enhanced track reconstruction is introduced to reduce its contribution by up to a factor of five. The resulting reduction in pair efficiency is overcompensated by an up to hundred times higher collision rate. The entire cocktail is then subtracted from the sampled data to isolate the thermal excess yield. The final analysis of this spectrum shows that the inverse slope of the model prediction, which depends directly on the QGP temperature, can be reproduced within statistical and systematic uncertainties of about 10%.
The promising results of this study have contributed to the realization of the ALICE upgrade and to a design decision for the new Inner Tracking System, and at the same time they represent exciting predictions for upcoming measurements.
This thesis concerns three specific constraint satisfaction problems: the $k$-SAT problem, random linear equations and the Potts model. We investigate a phenomenon called replica symmetry, its consequences and its limitations. For the $k$-SAT problem, we show that replica symmetry holds up to a threshold $d^{*}$. However, beyond another critical threshold $d^{**}$, replica symmetry can no longer hold, which enables us to establish the existence of a replica symmetry breaking region. For the random linear problem, a peculiar phenomenon occurs. We observe that a more robust version of replica symmetry (strong replica symmetry) holds up to a threshold $d=e$ and ceases to hold thereafter. This phenomenon is linked to the fact that below the threshold $d=e$, the fraction of frozen variables, i.e. variables forced to take the same value in all solutions, is concentrated around a deterministic value, but vacillates between two values with equal probability for $d>e$. Lastly, for the Potts model, we show that a phenomenon called metastability occurs, which can be understood as a consequence of a trivial replica symmetry breaking scheme. This metastability further yields slow mixing results for two famous Markov chains, the Glauber and the Swendsen-Wang dynamics.
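For reference, the Glauber (heat-bath) dynamics whose mixing time is analysed can be sketched as follows for the ferromagnetic q-state Potts model (a standard textbook update rule, not code from the thesis):

```python
import math
import random

def glauber_step(spins, neighbors, q, beta, rng):
    """One heat-bath (Glauber) update for the ferromagnetic q-state
    Potts model: pick a uniformly random site and resample its colour
    from the conditional distribution given its neighbours, where
    colour c has weight exp(beta * #neighbours currently in colour c).
    """
    v = rng.randrange(len(spins))
    counts = [0] * q
    for u in neighbors[v]:
        counts[spins[u]] += 1
    weights = [math.exp(beta * c) for c in counts]
    # inverse-CDF sampling from the discrete conditional distribution
    r = rng.random() * sum(weights)
    acc = 0.0
    for colour, w in enumerate(weights):
        acc += w
        if r <= acc:
            spins[v] = colour
            break
    return spins

# demo on a 3-site complete graph
rng = random.Random(0)
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
spins = [0, 1, 2]
for _ in range(100):
    glauber_step(spins, neighbors, q=3, beta=1.0, rng=rng)
```

Metastability means this chain can stay trapped near an ordered or disordered configuration for an exponentially long time, which is the mechanism behind the slow mixing results above.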
Private equity has grown remarkably in the last 30 years. Given its rise to prominence, exceptional profitability and a more prolific and publicly visible buyout activity, regulation in the private equity space seemed inevitable. The 2007 global financial crisis furnished an opportunity to doubt the industry’s role and magnify the key concerns, providing momentum for calls to regulate the industry more aggressively. Ultimately, the regulatory change came from the Alternative Investment Fund Managers Directive (AIFMD), which has been described as one of the most rigorously debated and controversial pieces of financial regulation to ever emerge from the European Union (EU).
The AIFMD is unique and unprecedented, yet very little has been written about it in the context of private equity. This thesis therefore contributes to this area of research by examining the implications of the AIFMD for private equity and arguing that this EU Directive has a re-shaping effect on the industry that inevitably marks the end of light-touch regulation in this area. Whilst the desire of policymakers to act and intervene decisively during market downturns is understandable, there is a risk that the response may not be appropriate and may result in a crisis-induced over-reaction.
This thesis demonstrates, amongst other things, that the AIFMD has created a particularly complex regulatory regime which has had a significant effect, in the EU and beyond, on hitherto unregulated or lightly regulated fund managers. Examples of the most impactful provisions relate to authorisation, marketing, depositaries, acquisition of control, remuneration, and transparency and disclosure. The implications are wide-ranging, and there is a clear conflict between the opportunities (e.g. the EU passport, AIFMD as a global brand) and threats (e.g. excessive compliance costs, an exodus of fund managers from the EU), which depend on a firm’s size, domicile and the gap to be bridged between the pre- and post-AIFMD regimes.
Although there will be no stark triumph of one position over another in the assessment of the AIFMD until all of its elements are fully implemented, overall the impact of the Directive has been material, requiring substantial work to comply with (or adapt to) requirements which in some cases are not only particularly onerous and costly, but also somewhat misguided, discouraging, or fairly irrelevant.
Despite all advancements in cancer research and clinical practice, cancer remains a life-threatening disease with an increasing incidence. According to a 2018 WHO forecast, cancer incidence will double to approximately 37 million new cancer cases by 2040. Today, clinical management of cancer is based on a "one-size-fits-all" strategy. Most cancers are still treated by surgical therapy followed by adjuvant or neoadjuvant chemotherapy based on rather strict guidelines (S3 guidelines in Europe), which in turn are based on studies of large cohorts of patients with the same tumor entity. While this approach has led to substantial increases in progression-free and overall patient survival, most patients do not benefit from the administered treatment regimen. One reason for this is intra-tumor heterogeneity, which results from the clonal evolution of cancer cells within their environment. This means that cancer patients may respond differently to a particular drug due to the different mutation patterns of their tumor cells. Therefore, patients should be screened in advance for reliable cancer biomarkers that definitively predict whether they will respond to a particular therapy. This would increase the probability of successful treatment.
Colorectal cancer (CRC) is the third most commonly diagnosed cancer and the second leading cause of cancer deaths worldwide. The main cause of death in CRC is metastatic disease, which is present in 20 % of patients and eventually develops in more than 30 % of early-stage patients. Despite the significant increase in median survival (to more than 30 months) with the development of cytotoxic agents and the introduction of targeted therapy, progression-free survival in the first-line setting has remained largely unchanged over the past decade.
The heterogeneity of CRC is characterized by alterations in multiple signaling pathways that affect cellular functions such as cell proliferation or apoptosis. Commonly affected signaling pathways include the mitogen-activated protein kinase (MAPK) pathway and the transforming growth factor-β/bone morphogenetic protein (TGF-β/BMP) pathway. Alterations in the TGF-β/BMP pathway caused by mutations in the SMAD4 gene (mothers against decapentaplegic homolog 4) are associated with altered drug responses and promote resistance to chemotherapy. In addition, they are associated with a higher recurrence rate.
SMAD4 is one of the most common cancer driver genes, and mutations occur in up to 15 % of CRC cases. Therefore, there is an urgent need for therapeutic agents that can specifically target SMAD4-mutated tumors.
The aim of the present study was the identification of the clinical relevance of the SMAD4 gene and the investigation of its suitability as a potential biomarker in CRC.
For this purpose, I investigated sibling patient-derived organoids (PDOs) established from different regions of a chemo-naïve CRC tumor. PDOs are 3D cell cultures that reliably recapitulate the architecture of the tissue of origin and preserve its genomic background and intra-tumor heterogeneity. The sibling PDOs (R1R361H and R4wt) shared the most common CRC mutations, such as KRASG12D (Kirsten rat sarcoma), PIK3CAH1047R (phosphatidylinositol-4,5-bisphosphate 3-kinase, catalytic subunit alpha), and TP53C242F (tumor protein p53), but differed in a SMAD4R361H mutation and showed different drug responses. The single-nucleotide variant R361H of the SMAD4 gene is among the most common pathogenic alterations in various cancers, including CRC.
The sibling PDOs showed significant differences in response to the MEK-inhibitors cobimetinib, trametinib, and selumetinib. MEK-inhibitors are antineoplastic agents that inhibit the function of MEK1 and MEK2, preventing phosphorylation of transcription factors, which leads to inhibition of tumor cell proliferation. MEK-inhibitors are approved for the treatment of malignant melanoma. Currently, they are in phase-III clinical trials for the treatment of patients with metastatic CRC.
To investigate whether SMAD4R361H is responsible for sensitivity to MEK-inhibitors, I established three syngeneic PDOs harboring a SMAD4R361H mutation using the CRISPR/Cas9 genome editing system. All CRISPR-PDOs were significantly more sensitive to the MEK-inhibitors than R4wt. I have thus shown that the SMAD4R361H mutation is responsible for sensitivity to MEK inhibition in CRC models and may serve as a predictive biomarker.
To test this hypothesis, I examined 62 CRC PDO models and treated them with the MEK-inhibitors cobimetinib, trametinib, and selumetinib. All models that had a pathogenic mutation or deletion in the SMAD4 gene (15 %) were sensitive to cobimetinib, 10 % of models were sensitive to trametinib, and 8 % were sensitive to selumetinib.
I performed transcriptome (RNA sequencing) and proteome analyses using the DigiWest® method to investigate the mechanism underlying MEK-inhibitor sensitivity.
DigiWest® is a Luminex® bead-based assay that allows the simultaneous analysis of over 100 (phospho-)proteins. The transcriptome and proteome data support the observation that MEK inhibition primarily affects SMAD4R361H PDOs. Furthermore, I have shown that activation of the BMP signaling pathway in organoids with wild-type SMAD4 appears to be responsible for resistance to MEK-inhibitors. Thus, a genetic alteration in the BMP signaling pathway beyond SMAD4 could also lead to sensitivity to MEK-inhibitors.
I identified four genes involved in the TGF-β/BMP signaling pathway that are frequently mutated in CRC and grouped them into the so-called SFAB-signature (SMAD4, FBXW7 (F-box/WD repeat-containing protein 7), ARID1A (AT-rich interactive domain-containing protein 1A), and BMPR2 (bone morphogenetic protein receptor type II)). Clinical data show that approximately 36 % of CRC patients have at least one pathogenic mutation in these genes.
I tested all 62 CRC PDO models and found that the SFAB-signature significantly predicted sensitivity to cobimetinib (95 %) and selumetinib (70 %). Trametinib and the newly approved MEK-inhibitor binimetinib showed a similar trend. Therefore, the SFAB-signature has high predictive power for response to MEK-inhibitors and could be used as a predictive biomarker panel.
The biomarkers currently used in the clinic for CRC are based on the mutation status of the driver genes KRAS and BRAF, which are mutated in up to 50 % and 10 % of CRC cases, respectively. Investigation of molecular alterations in CRC revealed that mutations in the KRAS gene, which lies downstream of EGFR (epidermal growth factor receptor) in the MAPK-pathway, interfere with anti-EGFR antibody therapy (e.g., cetuximab). Therefore, cetuximab is only relevant for RAS wild-type tumors. However, approximately 40 % of patients with RAS wild-type status do not respond to this treatment.
About 53 % of the CRC PDO models carry a pathogenic RAS mutation, and about 10 % harbor a pathogenic BRAF mutation. Neither the RAS and RAF status alone nor the combination of RAS and RAF status with the SFAB-signature provided a better prediction of sensitivity to MEK inhibition.