Drawing on the role of teachers for peer ecologies, we investigated whether students favored ethnically homogeneous over ethnically diverse relationships, depending on classroom diversity and perceived teacher care. We specifically studied students’ intra- and interethnic relationships in classrooms with different ethnic compositions, accounting for homogeneous subgroups forming on the basis of ethnicity and gender diversity (i.e., ethnic-demographic faultlines). Based on multilevel social network analyses of dyadic networks between 1299 early adolescents in 70 German fourth-grade classrooms, the results indicated strong ethnic homophily, particularly driven by German students who favored ethnically homogeneous dyads over mixed dyads. As anticipated, the results showed more in-group bias when perceived teacher care was low rather than high. Moreover, stronger faultlines were associated with stronger in-group bias; however, this relation was moderated by teacher care: if students perceived high teacher care, they showed a higher preference for mixed-ethnic dyads, even in classrooms with strong faultlines. These findings highlight the central role of teachers as agents of positive diversity management and the need to consider contextual classroom factors other than ethnic diversity when investigating intergroup relations in schools.
Molecular surveillance of carbapenem-resistant gram-negative bacteria in liver transplant candidates
(2021)
Background: Carbapenem-resistant Gram-negative bacteria (CRGN) cause life-threatening infections due to limited antimicrobial treatment options. The occurrence of CRGN is often linked to hospitalization and antimicrobial treatment but remains incompletely understood. CRGN are common in patients with severe illness (e.g., liver transplantation patients). Using whole-genome sequencing (WGS), we aimed to elucidate the evolution of CRGN in this vulnerable cohort and to reconstruct potential transmission routes.
Methods: From 351 patients evaluated for liver transplantation, 18 CRGN isolates (from 17 patients) were analyzed. Using WGS and bioinformatic analysis, genotypes and phylogenetic relationships were explored. Potential epidemiological links were assessed by analysis of patient charts.
Results: Carbapenem-resistant (CR) Klebsiella pneumoniae (n=9) and CR Pseudomonas aeruginosa (n=7) were the predominant pathogens. In silico analysis revealed that 14/18 CRGN did not harbor carbapenemase-coding genes, whereas in 4/18 CRGN, carbapenemases (VIM-1, VIM-2, OXA-232, and OXA-72) were detected. Among all isolates, there was no evidence of plasmid transfer-mediated carbapenem resistance. A close phylogenetic relatedness was found for three K. pneumoniae isolates. Although no epidemiological link could be established for the CRGN isolates, evidence was found that these isolates resulted from transmission of a carbapenem-susceptible ancestor followed by individual radiation into CRGN.
Conclusion: This integrative epidemiological study reveals a high diversity of CRGN in liver cirrhosis patients. Mutation of carbapenem-susceptible ancestors appears to be the dominant route of CR acquisition, rather than in-hospital transmission of CRGN or of carbapenemase-encoding genetic elements. This study underlines the need to avoid transmission of carbapenem-susceptible ancestors in vulnerable patient cohorts.
Systemic lupus erythematosus (SLE) is a severe autoimmune disease of unknown etiology. The major histocompatibility complex (MHC) class I-related chain A (MICA) and B (MICB) are stress-inducible cell surface molecules. MICA and MICB label malfunctioning cells for their recognition by cytotoxic lymphocytes such as natural killer (NK) cells. Alterations in this recognition have been found in SLE. MICA/MICB can be shed from the cell surface, subsequently acting either as a soluble decoy receptor (sMICA/sMICB) or in CD4+ T-cell expansion. Conversely, NK cells are frequently defective in SLE and lower NK cell numbers have been reported in patients with active SLE. However, these cells are also thought to exert regulatory functions and to prevent autoimmunity. We therefore investigated whether, and how, plasma membrane and soluble MICA/B are modulated in SLE and whether they influence NK cell activity, in order to better understand how MICA/B may participate in disease development. We report significantly elevated concentrations of circulating sMICA/B in SLE patients compared with healthy individuals or a control patient group. In SLE patients, sMICA concentrations were significantly higher in patients positive for anti-SSB and anti-RNP autoantibodies. In order to study the mechanism and the potential source of sMICA, we analyzed circulating sMICA concentration in Behcet patients before and after interferon (IFN)-α therapy: no modulation was observed, suggesting that IFN-α is not intrinsically crucial for sMICA release in vivo. We also show that monocytes and neutrophils stimulated in vitro with cytokines or extracellular chromatin up-regulate plasma membrane MICA expression, without releasing sMICA. 
Importantly, in peripheral blood mononuclear cells from healthy individuals stimulated in vitro by cell-free chromatin, NK cells up-regulate CD69 and CD107 in a monocyte-dependent manner and at least partly via the MICA-NKG2D interaction, whereas NK cells from SLE patients were exhausted. In conclusion, sMICA concentrations are elevated in SLE patients, whereas plasma membrane MICA is up-regulated in response to some lupus stimuli and triggers NK cell activation. These results suggest that this system requires tight control in vivo and highlight the complex role of the MICA/sMICA system in SLE.
In this paper, we investigate whether and how perspective taking at the linguistic level interacts with perspective taking at the level of co-speech gestures. In an experimental rating study, we compared test items clearly expressing the perspective of an individual participating in the event described by the sentence with test items clearly expressing the speaker’s or narrator’s perspective. Each test item was videotaped in two different versions: in one version, the speaker performed a co-speech gesture in which she enacted the event described by the sentence from a participant’s point of view (i.e. with a character viewpoint gesture). In the other version, she performed a co-speech gesture depicting the event described by the sentence as if it were observed from a distance (i.e. with an observer viewpoint gesture). Both versions of each test item were shown to participants, who then had to decide which of the two versions they found more natural. Based on the experimental results, we argue that there is no general need for perspective taking on the linguistic level to be aligned with perspective taking on the gestural level. Rather, there is a clear preference for the more informative gesture.
Aqueous solutions of a nonionic surfactant (either Tween20 or BrijL23) and an anionic surfactant (sodium dodecyl sulfate, SDS) are investigated using small-angle neutron scattering (SANS). SANS spectra are analysed using a core-shell model to describe the form factor of the self-assembled surfactant micelles; the intermicellar interactions are modelled using a hard-sphere Percus–Yevick (HS-PY) or a rescaled mean spherical approximation (RMSA) structure factor. Choosing these specific nonionic surfactants allows for comparison of the effect of branched (Tween20) and linear (BrijL23) surfactant headgroups, both constituted of poly(ethylene oxide) (PEO) groups. The nonionic–anionic surfactant mixtures are studied at various concentrations up to highly concentrated samples (ϕ ≲ 0.45) and various mixing ratios, from pure nonionic to pure anionic surfactant solutions. The scattering data reveal the formation of mixed micelles already at concentrations below the critical micelle concentration of SDS. At higher volume fractions, excluded-volume effects dominate the intermicellar structuring, even for charged micelles; in consequence, at high volume fractions the intermicellar structuring is the same for charged and uncharged micelles. Almost spherical mixed micelles form at all mixing ratios except roughly equimolar ones (X ≈ 0.4–0.6), at which the micelles significantly increase in size and ellipticity due to specific sulfate–EO interactions. This offers the opportunity to create a system of colloidal particles with a variable surface charge.
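The core-shell model used in such SANS analyses can be sketched in a few lines. The function below implements the standard form factor of a spherical core-shell particle; the radii and scattering-length-density contrasts in the usage example are hypothetical placeholders, not fitted values from this study:

```python
import numpy as np

def sphere_amplitude(q, radius):
    """Normalized scattering amplitude of a homogeneous sphere (-> 1 as q -> 0)."""
    x = q * radius
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def core_shell_form_factor(q, r_core, t_shell, rho_core, rho_shell, rho_solvent):
    """|F(q)|^2 for a spherical core-shell particle (unnormalized).

    The shell contribution is written as the full outer sphere at shell
    contrast plus the core at (core - shell) contrast.
    """
    r_out = r_core + t_shell
    v_core = 4.0 / 3.0 * np.pi * r_core**3
    v_out = 4.0 / 3.0 * np.pi * r_out**3
    amplitude = (v_core * (rho_core - rho_shell) * sphere_amplitude(q, r_core)
                 + v_out * (rho_shell - rho_solvent) * sphere_amplitude(q, r_out))
    return amplitude**2

# Hypothetical micelle: 20 A alkyl core, 10 A PEO shell, arbitrary contrast units.
q_values = np.logspace(-3, 0, 50)          # in 1/Angstrom
p_of_q = core_shell_form_factor(q_values, 20.0, 10.0, 1.0, 0.5, 0.0)
```

In a full analysis, this form factor would be multiplied by the HS-PY or RMSA structure factor and fitted to the measured intensity.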
Mild acquired factor XIII deficiency and clinical relevance at the ICU - a retrospective analysis
(2021)
Acquired FXIII deficiency is a relevant complication in the perioperative setting; however, there is still little evidence on the incidence and management of this rarely isolated coagulopathy. This study aims to help identify an appropriate threshold for FXIII substitution in patients with mild acquired FXIII deficiency. In this retrospective single-center cohort study, we enrolled critically ill patients with mild acquired FXIII deficiency (>5% and ≤70%) and compared clinical and laboratory parameters as well as pro-coagulatory treatments. The results of the present analysis of 104 patients support the clinical relevance of FXIII activity outside the normal range. Patients with lower FXIII levels, beginning at <60%, had lower minimum and maximum hemoglobin values, corresponding to the finding that patients with a minimum FXIII activity of <50% needed significantly more packed red blood cells. FXIII activity correlated significantly with general coagulation markers such as prothrombin time, activated partial thromboplastin time, and fibrinogen. Nevertheless, comparing the groups at a cut-off of 50%, the amounts of fresh frozen plasma, thrombocytes, PPSB, AT-III, and fibrinogen given did not differ. These results indicate that a mild FXIII deficiency occurring at any point of the intensive care unit stay is probably also relevant for the total need for packed red blood cells, independent of pro-coagulatory management. In alignment with the ESAIC guidelines, measurement of FXIII in critically ill patients at risk of bleeding, and early management with substitution of FXIII at levels below 50–60%, could be suggested.
Background: Various studies have been made about the most effective and safest type of treatment for vertebral compression fractures (VCFs). Long-term results are needed for qualitative evaluation.
Purpose: The purpose of the study is to evaluate the effectiveness of percutaneous vertebroplasty (PVP) and percutaneous kyphoplasty (PKP) procedures for VCFs.
Materials and Methods: Forty-nine patients who received either PVP or PKP between 2002 and 2015 returned a specially developed questionnaire and were included in a cross-sectional outcome analysis. The questionnaire assessed pain development by use of a visual analog scale (VAS). Imaging data (CT scans) were retrospectively analyzed for identification of cement leakage.
Results: Patients’ VAS scores decreased significantly after treatment (7.0 ± 3.4 → 3.7 ± 3.4; p < 0.001). The average pain reduction was −3.3 ± 3.8 (median −3.5; p < 0.001) in patients treated with PVP and −4.0 ± 3.9 (median −4.5; p < 0.001) in patients treated with PKP. Fifteen patients (41.7%) receiving PVP and four patients (30.7%) receiving PKP experienced recurrence of pain. Cement leakage occurred in 10 patients (22.7%). Patients with cement leakage showed comparable VAS scores after treatment (6.8 ± 3.5 → 1.4 ± 1.6; p = 0.008). Thirty-nine patients reported an increase in mobility (79.6%) and 41 patients an improvement in quality of life (83.7%).
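A paired pre/post comparison of ordinal VAS scores of this kind is typically tested nonparametrically. The sketch below uses synthetic scores drawn to mimic the reported group means (not the study's patient data) and applies a Wilcoxon signed-rank test:

```python
import numpy as np
from scipy import stats

# Synthetic pre/post VAS scores (0-10 scale) mimicking the reported means, n = 49.
rng = np.random.default_rng(0)
n = 49
vas_pre = np.clip(rng.normal(7.0, 3.4, n), 0, 10).round()
vas_post = np.clip(vas_pre - rng.normal(3.5, 3.8, n), 0, 10).round()

# Nonparametric paired comparison: Wilcoxon signed-rank test on pre/post scores.
stat, p = stats.wilcoxon(vas_pre, vas_post)
mean_reduction = (vas_post - vas_pre).mean()
```

With a mean reduction of this size relative to its spread, the test rejects the null of no change at conventional significance levels.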
Conclusion: Pain reduction by means of PVP or PKP in patients with VCFs was discernible over the period of observation. Percutaneous vertebroplasty and PKP contribute to the desired treatment results. However, the low level of pain may not remain constant.
Evidence of hydrothermal activity is reported for the Mesozoic pre- and syn-rift successions of the western Adriatic palaeomargin of the Alpine Tethys, preserved in the Western Southalpine Domain (NW Italy). The products of hydrothermal processes are represented by vein and breccia cements, as well as dolomitization and silicification of the host rocks. In the eastern part of the study area, interpreted as part of the necking zone of the continental margin, Middle Triassic dolostones and Lower Jurassic sediments are crossed by veins and hydrofracturing breccias cemented by saddle dolomite. The precipitation of dolomite cements occurred within the stratigraphic succession close to the sediment–water interface. Despite the shallow burial depth, fluid inclusion microthermometry and clumped isotopes show that hydrothermal fluids were relatively hot (80–150°C). In the western part of the study area, interpreted as part of the hyperextended distal zone, a polyphase history of host-rock fracturing is recorded, with at least two generations of veins cemented by calcite, dolomite and quartz. Vein opening and cementation occurred at shallow burial depth around the time of deposition of the syn-rift clastic succession. Fluid inclusion microthermometry on both quartz and dolomite cements indicates a fluid temperature of 90–130°C, again pointing to hydrothermal fluids. In both the Fenera-Sostegno and Montalto Dora areas, O, C and Sr isotope values, coupled with fluid inclusion and clumped isotope data, indicate that hydrothermal fluids, derived from seawater, interacted with crustal rocks during hydrothermal circulation. Stratigraphic and petrographic evidence, and U–Pb dating of dolomitized clasts within syn-rift sediments, document that hydrothermal fluids circulated through sediments from the latest Triassic to the Toarcian, corresponding to the entire syn-rift evolution of the western portion of the Adriatic palaeomargin.
The documented hydrothermal processes are temporally correlated with regional-scale thermal events that took place in the same time interval at deeper crustal levels.
The physical processes behind the production of light nuclei in heavy-ion collisions are unclear. The successful description of experimental yields by thermal models conflicts with the very small binding energies of the observed states, which should make them fragile in such a hot and dense environment. Other available ideas are delayed production via coalescence, cooling of the system after the chemical freeze-out according to a Saha equation, or a ‘quench’ instead of a thermal freeze-out. A recently derived prescription of an (interacting) Hagedorn gas is applied to consolidate these pictures. Tabulating the decay rates of Hagedorn states into light nuclei makes it possible to calculate yields that are usually inaccessible due to very poor Monte Carlo statistics. Decay yields of stable hadrons and light nuclei are calculated. While the scale-free decays of Hagedorn states alone are not compatible with the experimental data, a thermalized gas of hadrons and Hagedorn states is able to describe them. Applying a cooling of the system according to a Saha equation with conservation of nucleon and anti-nucleon numbers leads to (nearly) temperature-independent yields; thus, production of the light nuclei at temperatures much lower than the chemical freeze-out temperature is compatible with experimental data and with the statistical hadronization model.
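The Saha-equation picture can be illustrated with the standard nuclear-statistical-equilibrium relation for deuteron formation, n_d = (3/4) n_p n_n (2πħ²/(μT))^(3/2) exp(B_d/T) with reduced mass μ ≈ m_N/2. This is a textbook sketch of that relation, not the paper's Hagedorn-state calculation, and the input densities are illustrative:

```python
import numpy as np

HBARC = 197.327  # MeV*fm (hbar*c)
M_N = 938.9      # average nucleon mass, MeV
B_D = 2.224      # deuteron binding energy, MeV

def deuteron_density(n_p, n_n, temperature):
    """Saha (nuclear statistical equilibrium) deuteron density in fm^-3.

    n_p, n_n: proton/neutron densities in fm^-3; temperature in MeV.
    Spin degeneracy factor g_d/(g_p*g_n) = 3/4; reduced mass ~ m_N/2.
    """
    mu_red = 0.5 * M_N
    # Thermal-wavelength factor (2*pi*hbar^2 / (mu*T))^(3/2), in fm^3.
    lam3 = (2.0 * np.pi * HBARC**2 / (mu_red * temperature)) ** 1.5
    return 0.75 * n_p * n_n * lam3 * np.exp(B_D / temperature)
```

At fixed nucleon densities the deuteron abundance grows as the temperature drops, which is why the late-stage cooling with conserved (anti-)nucleon numbers matters for the final yields.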
Conditional yield skewness is an important summary statistic of the state of the economy. It exhibits pronounced variation over the business cycle and with the stance of monetary policy, and a tight relationship with the slope of the yield curve. Most importantly, variation in yield skewness has substantial forecasting power for future bond excess returns, high-frequency interest rate changes around FOMC announcements, and consensus survey forecast errors for the ten-year Treasury yield. The COVID pandemic did not disrupt these relations: historically high skewness correctly anticipated the run-up in long-term Treasury yields starting in late 2020. The connection between skewness, survey forecast errors, excess returns, and departures of yields from normality is consistent with a theoretical framework where one of the agents has biased beliefs.
The authors present evidence of a new propagation mechanism for wealth inequality, based on differential responses, by education, to greater inequality at the start of economic life. The paper is motivated by a novel positive cross-country relationship between wealth inequality and perceptions of opportunity and fairness, which holds only for the more educated. Using unique administrative micro data and a quasi-field experiment of exogenous allocation of households, the authors find that exposure to a greater top 10% wealth share at the start of economic life in the country leads only the more educated placed in locations with above-median wealth mobility to attain higher wealth levels and position in the cohort-specific wealth distribution later on. Underlying this effect is greater participation in risky financial and real assets and in self-employment, with no evidence for a labor income, unemployment risk, or human capital investment channel. This differential response is robust to controlling for initial exposure to fixed or other time-varying local features, including income inequality, and consistent with self-fulfilling responses of the more educated to perceived opportunities, without evidence of imitation or learning from those at the top.
The authors identify U.S. monetary and fiscal dominance regimes using machine learning techniques. The algorithms are trained and verified by employing simulated data from Markov-switching DSGE models, before they classify regimes from 1968-2017 using actual U.S. data. All machine learning methods outperform a standard logistic regression concerning the simulated data. Among those the Boosted Ensemble Trees classifier yields the best results. The authors find clear evidence of fiscal dominance before Volcker. Monetary dominance is detected between 1984-1988, before a fiscally led regime turns up around the stock market crash lasting until 1994. Until the beginning of the new century, monetary dominance is established, while the more recent evidence following the financial crisis is mixed with a tendency towards fiscal dominance.
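The comparison of a boosted-tree classifier with a logistic regression can be sketched on toy data. The nonlinear "regime" labels below are a stand-in for output from a simulated Markov-switching model, not the authors' actual data or pipeline; the point is only that a linear classifier cannot capture a nonlinear regime boundary while boosted trees can:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for simulated regime data: two features, nonlinear boundary.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # regime label depends on an interaction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

logit = LogisticRegression().fit(X_tr, y_tr)
boost = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc_logit = logit.score(X_te, y_te)  # near chance on this boundary
acc_boost = boost.score(X_te, y_te)  # trees recover the interaction
```

On this kind of data the boosted ensemble clearly outperforms the linear benchmark, mirroring the qualitative finding that all machine-learning methods beat a standard logistic regression on the simulated data.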
This note argues that the European Central Bank should adjust its strategy in order to consider broader measures of inflation in its policy deliberations and communications. In particular, it points out that a broad measure of domestic goods and services price inflation such as the GDP deflator has increased along with the euro area recovery and the expansion of monetary policy since 2013, while HICP inflation has become more variable and, on average, has declined. Similarly, the cost of owner-occupied housing, which is excluded from the HICP, has risen during this period. Furthermore, it shows that optimal monetary policy at the effective lower bound on nominal interest rates aims to return inflation more slowly to the inflation target from below than in normal times because of uncertainty about the effects and potential side effects of quantitative easing.
The plaque reduction neutralization test (PRNT) is a preferred method for the detection of functional, SARS-CoV-2 specific neutralizing antibodies from serum samples. Alternatively, surrogate enzyme-linked immunosorbent assays (ELISAs) using ACE2 as the target structure for the detection of neutralization-competent antibodies have been developed. They are capable of high throughput, have a short turnaround time, and can be performed under standard laboratory safety conditions. However, there are very limited data on their clinical performance and how they compare to the PRNT. We evaluated three surrogate immunoassays (GenScript SARS-CoV-2 Surrogate Virus Neutralization Test Kit (GenScript Biotech, Piscataway Township, NJ, USA), the TECO® SARS-CoV-2 Neutralization Antibody Assay (TECOmedical AG, Sissach, Switzerland), and the Leinco COVID-19 ImmunoRank™ Neutralization MICRO-ELISA (Leinco Technologies, Fenton, MO, USA)) and one automated quantitative SARS-CoV-2 Spike protein-based IgG antibody assay (Abbott GmbH, Wiesbaden, Germany) by testing 78 clinical samples, including several follow-up samples of six BNT162b2 (BioNTech/Pfizer, Mainz, Germany/New York, NY, USA) vaccinated individuals. Using the PRNT as a reference method, the overall sensitivity of the examined assays ranged from 93.8 to 100% and specificity ranged from 73.9 to 91.3%. Weighted kappa demonstrated a substantial to almost perfect agreement. The findings of our study allow these assays to be considered when a PRNT is not available. However, the latter still should be the preferred choice. For optimal clinical performance, the cut-off value of the TECO assay should be individually adapted.
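The headline performance measures (sensitivity and specificity against the PRNT reference, plus Cohen's kappa, which for binary outcomes coincides with its weighted variants) reduce to simple counts over the 2x2 contingency table. A minimal sketch with illustrative labels (not the study's 78 samples):

```python
import numpy as np

def diagnostic_performance(reference, test):
    """Sensitivity and specificity of a binary test vs a reference (1 = positive)."""
    reference, test = np.asarray(reference), np.asarray(test)
    tp = np.sum((reference == 1) & (test == 1))
    fn = np.sum((reference == 1) & (test == 0))
    tn = np.sum((reference == 0) & (test == 0))
    fp = np.sum((reference == 0) & (test == 1))
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (weighting is moot for two categories)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    p_observed = np.mean(a == b)
    p_chance = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
    return (p_observed - p_chance) / (1 - p_chance)

# Illustrative example: 8 samples, PRNT reference vs a surrogate assay.
prnt = [1, 1, 1, 1, 0, 0, 0, 0]
surrogate = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = diagnostic_performance(prnt, surrogate)
kappa = cohens_kappa(prnt, surrogate)
```

In this toy example both sensitivity and specificity come out at 0.75 and kappa at 0.5 ("moderate" agreement on the usual Landis-Koch scale).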
In ‘Justice and Natural Resources,’ Chris Armstrong offers a rich and sophisticated egalitarian theory of resource justice, according to which the benefits and burdens flowing from natural (and non-natural) resources are ideally distributed with a view to equalizing people’s access to wellbeing, unless there are compelling reasons that justify departures from that egalitarian default. Armstrong discusses two such reasons: special claims from ‘improvement’ and ‘attachment.’ In this paper, I critically assess the account he gives of these potential constraints on global equality. I argue that his recognition of them has implications that Armstrong does not anticipate, and which challenge some important theses in his book. First, special claims from improvement will justify larger departures from the egalitarian default than Armstrong believes. Second, a consistent application of Armstrong’s life-plan foundation for special claims from attachment implies that nation-states may come closer to justifying ‘permanent sovereignty’ over the resources within their territories than his analysis suggests.
Clean water is fundamental to human health and ecosystem integrity. However, water quality deteriorates due to novel anthropogenic pollutants present at microgram-per-liter concentrations in urban water cycles (termed micropollutants). Wastewater treatment plants (WWTP) have been identified as major point sources of aquatic (micro-)pollutants. Chemical and ecotoxicological analyses have shown that conventional biological WWTPs do not fully remove micropollutants and associated toxicities, often because of mobile, polar and/or recalcitrant compounds and transformation products (TPs). To minimize possible environmental risks, advanced wastewater treatment (AWWT) technologies are promising mitigation measures. Multiple processes are therefore being developed and evaluated, such as ozonation and ozonation followed by granulated activated carbon (GAC) or biological filtration. Assessing the performance of these combined AWWTs was the focus of the TransRisk project. Within this project, this thesis accomplished four major goals.
Firstly, the preparation of (waste)water samples was optimised for in vitro bioassays. Acidification, filtration and solid phase extraction (SPE) were tested for their impact on environmentally relevant in vitro endocrine activities, mutagenicity, genotoxicity and cytotoxicity. Significantly different assay outcomes were detected when comparing neutral and acidified samples. Sample filtration had a lesser impact, but in some cases retention of particle-bound compounds could have caused significant toxicity losses. Of the three SPE sorbents, Telos C18/ENV at sample pH 2.5 extracted the highest toxicity, some of which was undetected in aqueous samples. These results indicate that sample preparation needs to be optimised for specific sample matrices and bioassays to avoid false-positive or false-negative detections in effect-based analyses.
Secondly, the above-listed in vitro toxicities were monitored in a protected region for drinking water production in south-west Germany (2012-2015). Among the 30 sampling sites, surface water and groundwater were the least polluted. Nonetheless, a few groundwater samples induced high anti-estrogenic activity, which prompted further monitoring; the latter included a waterworks in which no toxicity was detected. Hospital wastewater also showed elevated in vitro toxicities; hospitals are thus relevant intervention points for source control. The biological WWTPs were effective in removing most of the detected toxicity, and the selected bioassays proved to be pertinent tools for water quality assessment and prioritisation of pollution hotspots.
Thirdly, the in vivo bioassay ISO 10872, based on Caenorhabditis elegans (C. elegans), was adapted for this thesis. Using this model, a median effect concentration (EC50) for reproductive toxicity of the polycyclic aromatic hydrocarbon β-naphthoflavone (β-NF) of 114 µg/L was computed, which is slightly lower than values reported in the scientific literature. β-NF induced cyp-35A3::GFP (a biomarker in transgenic animals) in a time- and concentration-dependent manner (≤ 21.3–24 fold above controls). β-NF-spiked wastewater samples supported earlier hypotheses on particle-bound pollutants. Reproductive toxicity (96 h) and cyp-35A3 induction (24 h) were observed for biologically treated and/or ozonated wastewater extracts, and growth-promoting effects for GAC/biologically filtered ozonated wastewater extracts. This suggested the presence of residual bioactive/toxic chemicals not included in the targeted chemical analysis. It also highlighted the importance of integrating multiple (apical and molecular) endpoints in wastewater assessments.
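EC50 estimation of the kind described above is typically done by fitting a log-logistic dose-response model to concentration-response data. The sketch below uses hypothetical concentration-response values (fraction of control reproduction), not the thesis data, and fits a two-parameter log-logistic curve:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    """Two-parameter log-logistic dose-response: response falls from 1 to 0."""
    return 1.0 / (1.0 + (conc / ec50) ** slope)

# Hypothetical reproduction data (fraction of control) at test concentrations in ug/L.
conc = np.array([10.0, 30.0, 60.0, 120.0, 250.0, 500.0])
resp = np.array([0.98, 0.90, 0.72, 0.47, 0.20, 0.07])

(ec50, slope), _ = curve_fit(log_logistic, conc, resp, p0=[100.0, 1.5])
```

The fitted `ec50` is the concentration at which the response drops to 50% of the control, i.e. the median effect concentration reported for β-NF.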
Fourthly, five in vitro bioassays and the adapted C. elegans bioassay were integrated into a wastewater quality evaluation (developed within TransRisk). Of the five AWWT options, ozonation (at 1 g O3,applied/g DOC, HRT ~ 18 min) combined with non-aerated GAC filtration was rated most effective for toxicity removal. All five AWWTs largely removed estrogenic and (anti-)androgenic activities, but not anti-estrogenic activity and mutagenicity, which even increased during ozonation. This has been observed in related studies and points towards toxic TPs. These results also emphasized the need to implement an effective post-treatment for ozonation. The results of a parallel in vivo study with Lumbriculus variegatus and Potamopyrgus antipodarum, conducted on site at the WWTP using flow-through systems, were in accordance with the C. elegans results. In this context, it is suggested to further implement C. elegans as a sensitive, feasible and ecologically relevant model.
In conclusion, this thesis shows how optimised sample preparation, long-term (in vitro) environmental monitoring, sensitive and ecologically relevant (in vivo) bioassays, and innovative evaluation concepts are pivotal in improving the removal of micropollutants and their toxicities with AWWTs. Future research should further develop and evaluate measures at sewer systems, conventional biological, tertiary and other advanced treatment technologies, as well as sociopolitical strategies (e.g., source control or nature conservation) and restoration projects. The effect-based tools optimised in this thesis will support assessing their success.
In the past decades, the use and production of chemicals has risen globally due to increasing industrialization and intensive agriculture, resulting in the occurrence of chemicals of emerging concern (CECs) in aquatic compartments and the associated ecotoxicological risks. These risks include changes in community structure, resulting in the dominance of single species and ecosystem imbalance. When dominant disease-causing organisms are present in the environment, disease transmission increases. For example, host snails of schistosomiasis, a human trematode disease, are known to be more tolerant to pesticide exposure than their predators. This results in an increased abundance of snails, which consequently increases disease transmission in the human population.
Kenya, a low-income country, faces many challenges: provision of clean water, disease, inadequate sanitation facilities, and a growing population that drives intensive agriculture coupled with pesticide use. Although much research has been carried out on the environmental occurrence and risk of CECs (Chapter 1), most of these studies were conducted in developed countries, with limited information from Africa. Additionally, research in Africa has focused on urban areas, with a limited number of compounds analyzed, mostly in the water phase, and inadequate information on the effects of CECs on aquatic organisms. To reduce this knowledge gap, this dissertation focused on the identification and quantification of CECs present in water, sediment and snails from western Kenya, and on the contribution of pesticides to the transmission of schistosomiasis.
Chapter 2 gives a summary of the results and discussion of the dissertation. In Chapter 3, a comprehensive chemical analysis was carried out on 48 water samples to identify compounds, spatial patterns and associated risks for fish, crustaceans and algae using the toxic unit (TU) approach. A total of 78 compounds were detected, with pesticides and biocides the most frequently detected compound classes. Spatial pattern analysis revealed limited compound grouping based on land use. Acute risk for crustaceans and algae was driven by one to three individual compounds. The compounds responsible for toxicity were prioritized as candidates for monitoring and regulation in Kenya.
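The toxic unit (TU) approach reduces to dividing each measured concentration by the corresponding acute EC50 for a taxon and summarizing per site, typically as the sum and the maximum TU. The concentrations and EC50 values below are illustrative placeholders, not measured values from the study:

```python
import numpy as np

def toxic_units(concentrations, ec50s):
    """Return (sumTU, maxTU) for one taxon at one site.

    TU_i = C_i / EC50_i; values near or above 1 indicate acute risk.
    """
    tu = np.asarray(concentrations, dtype=float) / np.asarray(ec50s, dtype=float)
    return tu.sum(), tu.max()

# Hypothetical site: measured concentrations and crustacean acute EC50s (ug/L).
conc = {"diazinon": 1.0, "diuron": 0.5, "pirimiphos-methyl": 0.2}
ec50_crustacean = {"diazinon": 1.1, "diuron": 5700.0, "pirimiphos-methyl": 0.21}

sum_tu, max_tu = toxic_units(list(conc.values()),
                             [ec50_crustacean[k] for k in conc])
```

Here the insecticides dominate the mixture risk while the herbicide contributes almost nothing to the crustacean TU, mirroring the pattern of one to three compounds driving acute risk.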
In Chapter 4, Chapter 3 was extended to cover the CECs present in snails and sediment from the 48 sites. A total of 30 compounds were found in snails and 78 in sediments, including 68 compounds not previously detected in water. Higher contaminant concentrations were found at agricultural sites than in areas without anthropogenic activities. The highest acute toxicity (TU 0.99) was determined for crustaceans, based on compounds in sediment samples; the risk was driven by diazinon and pirimiphos-methyl. Acute and chronic risks to algae were driven by diuron, whereas fish were found to be at low to no acute risk.
In Chapter 5, the effect of pesticide contamination on schistosomiasis transmission was evaluated by applying complimentary laboratory and field studies. In the field studies, the ecological mechanisms through which pesticides and physical chemical parameters affect host snails, predators and competitors were investigated. Pesticide data was obtained from the results in chapter 3. The overall distribution of grazers and predators was not affected by pesticide pollution. However, within the grazers, pesticide pollution increased dominance of host snails. On the contrary, the host-snail competitors were highly sensitive to pesticide exposure. For the laboratory studies, macroinvertebrates including Schistosoma-host snails, competitors and predators were exposed to 6 concentrations levels of imidacloprid and diazinon. Snails showed higher insecticide tolerance compared to competitors and predators. Finally, Chapter 6 summarizes the conclusions of this dissertation, placing it in a broader
context. In this dissertation, a comprehensive chemical characterization and risk assessment of CECs was carried out in freshwater systems, together with an assessment of the effects of pesticides on schistosomiasis transmission in rural western Kenya. The results show that rural areas are contaminated, posing a risk to aquatic organisms that influence schistosomiasis transmission. This underlines the need for regular monitoring and policy formulation to reduce pollutant emissions, which negatively affect both ecological and human health.
In this paper, we discuss Armstrong’s account of attachment-based claims to natural resources, the kind of rights that follow from attachment-based claims, and the limits we should impose on such claims. We hope to clarify how and why attachment matters in the discourse on resource rights by presenting three challenges to Armstrong’s theory. First, we question the normative basis for certain attachment claims, by trying to distinguish more clearly between different kinds of attachment and other kinds of claims. Second, we highlight the need to supplement Armstrong’s account with a theory of how to weigh different attachment claims so as to establish the normative standing that different kinds of attachment claims should have. Third, we propose that sustainability must be a necessary requirement for making attachment claims to natural resources legitimate. Based on these three challenges and the solutions we propose, we argue that attachment claims are on the one hand narrower than Armstrong suggests, while on the other hand they can justify more far-reaching rights to control than Armstrong initially considers, because of the particular weight that certain attachment claims have.
What are called 'natural languages' are artificial, often politically instituted and regulated, phenomena; a more accurate picture of speech practices around the globe is of a multidimensional continuum. This essay asks what the implications of this understanding of language are for translation, and focuses on the variety of Afrikaans known as Kaaps, which has traditionally been treated as a dialect rather than a language in its own right. An analysis of a poem in Kaaps by Nathan Trantraal reveals the challenges such a use of language constitutes for translation. A revised understanding of translation is proposed, relying less on the notion of transfer of meaning from one language to another and more on an active engagement with the experience of the reader.
Introduction
(2021)
In Germany, traffic planning still follows the tradition of modernist urban planning theory from the beginning of the 1930s and car-oriented city planning during the post-war period in West Germany. From a methodological perspective, the prevailing narrative is that traffic can be abstracted and modelled under laboratory conditions (in vitro) as a spatial movement process of individual neutral particles. The use of these laboratory experiments in traffic planning cannot be understood as a neutral application of experimental results, assumed to be true, in a variety of spatial contexts. Rather, it is an active practice of staging traffic according to a particular social interactionist paradigm.
According to this view, traffic is staged both through the interventions of planning authorities and through the practices of people on the streets. To describe these modes of staging, traffic is ontologically conceived, following Erving Goffman and Harold Garfinkel, as a social order that is continuously reproduced situationally through interactions. To investigate these modes of staging empirically, an ethnographically inspired field study was conducted at Willy-Brandt-Platz in Frankfurt am Main in May and June 2020. Through situational mapping and observation of social interactions (in situ), knowledge about the staging of social orders was generated.
These empirical findings are further embedded in debates that discuss traffic not only as a staging but also as an enactment of certain realities. Understanding planning practice as a political enactment, through which realities are not only described but also made, makes it possible for us to think and design alternative realities.
Predator-induced plasticity in life-history and antipredator traits during the larval period has been extensively studied in organisms with complex life histories. However, it is unclear whether different levels of predation can induce warning signals in aposematic organisms. Here, we investigated whether predator-simulated handling affects warning coloration and life-history traits in the aposematic wood tiger moth larva, Arctia plantaginis. In juveniles, a larger orange patch on an otherwise black body makes a more efficient warning signal against predators, but this comes at a cost in conspicuousness and thermoregulation. Given this, one would expect an increase in predation risk to induce flexible expression of the orange patch. Prior research in this system points to plastic effects being important as a response to environmental changes for life-history traits, but whether this also holds for predation risk, a key driver of this species' evolution, had yet to be assessed. Using a full-sib rearing design, in which individuals were reared in the presence or absence of a non-lethal simulated bird attack, we evaluated flexible responses of warning signal size (number of orange segments), growth, molting events, and development time in wood tiger moths. All measured traits except development time showed a significant response to predation. Larvae from the predation treatment developed a more melanized warning signal (smaller orange patch), reached a smaller body size, and molted more often. Our results suggest that plasticity is indeed important in aposematic organisms, but may in this case be complicated by the trade-off between costly pigmentation and other life-history traits.
In the model of randomly perturbed graphs we consider the union of a deterministic graph G with minimum degree αn and the binomial random graph G(n, p). This model was introduced by Bohman, Frieze, and Martin, and for Hamilton cycles their result bridges the gap between Dirac's theorem and the results by Pósa and Korshunov on the threshold in G(n, p). In this note we extend this result in G ∪ G(n, p) to sparser graphs with α = o(1). More precisely, for any ε > 0 and α : N → (0, 1) we show that a.a.s. G ∪ G(n, β/n) is Hamiltonian, where β = −(6 + ε) log(α). If α > 0 is a fixed constant this gives the aforementioned result by Bohman, Frieze, and Martin, and if α = O(1/n) the random part G(n, p) is sufficient for a Hamilton cycle. We also discuss embeddings of bounded degree trees and other spanning structures in this model, which lead to interesting questions on almost spanning embeddings into G(n, p).
Introduction: Obesity is classified as a global epidemic and judged to be the greatest public health threat in Western countries. The tremendously increasing prevalence rates in children lead to morbidity and mortality in adults. In many countries, prevalence has doubled since the 1980s. Other countries show a continuous increase or stagnate at a very high level. Given these regional differences, this study aims to draw a global world map of childhood obesity research, including regional epidemiological characteristics, to comprehensively assess research influences and needs. Methods: In addition to established bibliometric parameters, this study uses epidemiological data to interpret metadata on childhood obesity research from the Web of Science in combination with state-of-the-art visualization methods, such as density-equalizing map projections. Results: It was not until the 1990s that belated recognition of the dangerous effects of childhood obesity led to an increase in the number of publications worldwide. In addition, our findings show that countries' study output does not correlate with epidemiologic rates of childhood obesity. Instead, the research effort on childhood obesity appears to be largely driven by government funding structures. Discussion/Conclusion: The geographical differences in the epidemiological background of childhood obesity complicate the implementation of transnational research projects and cross-border prevention programs. Effective realization requires a sound scientific basis, which is facilitated by globally valid approaches. Hence, there is a need for information exchange between researchers, policy makers, and private initiatives worldwide.
History films personalize, dramatize and emotionalize historical events and characters. They revive the past by exemplifying it in the present, engage ongoing discourses of history and as a result have proven to be the most influential medium in conveying history to large audiences. History films are regarded as an attractive, motivating and efficient (supplementary) teaching and learning medium in history as well as in foreign language classes. As part of the course "Historical Survey of Germany" (BA German-programme at University Putra Malaysia) history film projects on important periods and events in German history were conducted. The article introduces a film project on World War II and describes the pedagogical approach which aims to develop three core competencies of historical understanding – Content Knowledge, Historical Empathy/Perspective Recognition and Narrative Analysis. It discusses selected general findings provided as qualitative data in group and individual assignments. While the responses to questions related to Content Knowledge and Narrative Analysis show that students achieved higher competency levels, the participants showed shortcomings in the rational examination of historical characters, their perspectives and motivations for their actions. Time, practice and guidance can be identified as key factors in developing historical literacy competencies further.
We review the effective field theory associated with the superfluid phonons that we use for the study of transport properties in the core of superfluid neutron stars in their low temperature regime. We then discuss the shear and bulk viscosities together with the thermal conductivity coming from the collisions of superfluid phonons in neutron stars. With regard to shear, bulk, and thermal transport coefficients, the phonon collisional processes are obtained in terms of the equation of state and the superfluid gap. We compare the shear coefficient due to the interaction among superfluid phonons with other dominant processes in neutron stars, such as electron collisions. We also analyze the possible consequences for the r-mode instability in neutron stars. As for the bulk viscosities, we determine that phonon collisions contribute decisively to the bulk viscosities inside neutron stars. For the thermal conductivity resulting from phonon collisions, we find that it is temperature independent well below the transition temperature. We also find that the thermal conductivity due to superfluid phonons dominates over the one resulting from electron-muon interactions once phonons are in the hydrodynamic regime. As the phonons couple to the Z electroweak gauge boson, we estimate the associated neutrino emissivity. We also briefly comment on how the superfluid phonon interactions are modified in the presence of a gravitational field or in a moving background.
Objectives: Inadequate oral hygiene still leads to many serious diseases all over the world. Therefore, this study aimed to analyze scientific research in the field of oral health in order to comprehend its relevant subject areas, research connections, and developments. Methods: This study aimed to assess the global publication output on oral hygiene to create a world map that provides background information on key players, trends, and incentives of research. For this purpose, established bibliometric parameters were combined with state-of-the-art visualization techniques. Results: This study locates the key players of research on oral hygiene in high-income economies, with only marginal participation from lower-income economies. This still corresponds to the current burden situation, but the burden is increasingly shifting to the disadvantage of low-income countries. There is a clear North–South and West–East gradient, with the USA and the Western European nations being the most prolific publishers on oral hygiene. As an emerging country, Brazil also plays a role in the research. Conclusions: The scientific power players are concentrated in high-income countries. However, the changing epidemiological situation requires a different scientific approach to oral hygiene. This requires an expansion of the international network to meet the demands of future global oral health burdens, which are mainly related to oral hygiene.
This article provides a comparative overview of phonological and phonetic differences of Mukrī Kurdish varieties and their geographical distribution. Based on the examined data, four distinct varieties can be distinguished. In each variety area, different phonological patterns are analyzed according to age, gender, and social groups in order to establish cross-regional and cross-generational developments in relation to specific phonological distributions and shifts. The variety regions which are examined in the present article include West Mukrī (representing an archaic form of Mukrī), Central Mukrī (representing a linguistically peripheral dialect), East Mukrī (representing mixed archaic and peripheral dialect features), and South Mukrī (sharing features of both Mukrī and Ardałānī). The study concludes that variation in the Mukrīyān region depends on phonological developments, which in turn are due to geographical and sociological factors. Moreover, contact-induced change and internal language development are also established as triggering factors distinguishing regional variants.
During RUN3 (2021-2023) of the Large Hadron Collider, the Time Projection Chamber (TPC) of ALICE will be operated with quadruple stacks of Gas Electron Multipliers (GEMs). This technology will make it possible to overcome the rate limitation due to the gated operation of the Multi-Wire Proportional Chambers (MWPCs) used in RUN1 (2009-2013) and RUN2 (2015-2018).
As part of the Upgrade project, long-term irradiation tests, so-called "ageing tests", have been carried out. A test setup with a detector using a quadruple stack of 10x10 cm2 GEMs was built and operated in Ar-CO2 and Ne-CO2-N2 gas mixtures. Detector performance parameters such as gas gain and energy resolution were monitored continuously. In addition, outgassing tests of materials used for the assembly process of the upgraded TPC were performed. To reach the dose expected for the GEM-based TPC, the detector was operated at much higher gains than the TPC. It was found that the GEMs maintained their performance over the projected lifetime of the TPC. Most of the tested materials showed no negative impact on the detector. For the tested epoxy adhesive, no definite conclusion could be drawn.
At much higher doses than expected for the upgraded TPC, a new phenomenon was observed, which changed the hole geometry of the GEMs and led to a degradation of the energy resolution. Even though its occurrence is not expected during the lifetime of the GEM-based TPC, simulations were carried out to study this effect more systematically. The simulations confirmed that a change of the GEM hole geometry leads to an increase in the local gain variation, which results in a decrease in energy resolution.
Furthermore, the effect of methane as a quench gas on GEMs was studied, even though this gas is not foreseen for use in the TPC. From ageing tests with single-wire proportional counters it is well known that hydrocarbons are produced in the plasma of the avalanches, which cover the electrodes and lead to a degradation of the detector performance. Even though GEMs have a quite different geometry, the ageing tests showed that this technology is also prone to methane-induced ageing. A loss of gas gain as well as a degradation of the energy resolution due to deposits on the electrodes was observed. A qualitative and quantitative comparison between ageing in GEMs and in proportional counters was performed.
The main focus of research in the field of high-energy heavy-ion physics is the study of the quark-gluon plasma (QGP). The topic of the present work is the measurement of electron-positron pairs (dielectrons), which grant direct access to some of the key properties of this state of matter, since after their formation they leave the hot and dense medium without significant interaction. In particular, the measurement of the initial QGP temperature is considered a "holy grail" of heavy-ion physics. Therefore, in addition to the analysis of existing data, a feasibility study has been conducted to determine to what extent this goal would be achievable by upgrading the ALICE experiment at CERN.
Dielectrons are produced during all stages of a heavy-ion collision, with their invariant mass reflecting the amount of energy available at the time of their formation. Dielectrons of highest mass are thus produced in the initial scatterings of the colliding nuclei by quark-antiquark annihilation. Correlated electron-positron pairs can also emerge from the decay chains of early-produced pairs of heavy-flavour (HF) particles. During the QGP stage and at the beginning of the hadronic phase, the system emits thermal radiation in the form of photons and dielectrons, which carry information about the medium temperature to the observer. In the final stage of the collision, decays of light-flavour (LF) hadrons produce additional contributions to the dielectron spectrum.
The present work is based on early data from the ALICE experiment recorded from lead-lead collisions at a center-of-mass energy of 2.76 TeV. Due to the limited amount of data, a focus is placed on achieving high efficiencies throughout the analysis. To this end, a special electron identification strategy is developed and a custom track selection applied, together resulting in a tenfold increase in pair efficiency. The dielectron spectrum is evaluated on a statistical basis, using a pair prefilter, which is optimized based on two signal quality criteria, to reduce the fraction of electrons and positrons from unwanted sources at minimum signal loss. In addition, an artifact of the track reconstruction is exploited to suppress pairs from photon conversions and to correct the dielectron yield for a contribution from different-conversion pairs. The main signal uncertainty is extracted from the deviation between results of 20 analysis settings and amounts to 20% in most of the studied kinematic range.
For comparison with the analysis results, a hadronic cocktail consisting of the LF and HF contributions is simulated, which describes the measured dielectron production reasonably well, with a hint of an enhancement at low invariant mass. Two approaches to model the in-medium modification of the heavy-flavour contribution are followed, resulting in up to 50% suppression, which creates some additional room for a thermal contribution at intermediate mass.
For a complete comparison between experimental data and theoretical expectation, two model calculations are consulted. The Thermal Fireball Model provides predictions for thermal dielectron radiation from the QGP and hadron gas. The data tends to be better described with these additional thermal contributions. For a comparison with a prediction by the UrQMD model, the HF component of the cocktail is subtracted from the data. This results in better agreement if the HF suppression by in-medium effects is taken into account.
The feasibility study in this work has served as a physical motivation for the ALICE upgrade for LHC Run 3. The precision with which the early temperature of the QGP can be determined via dielectrons is chosen as key observable. A multitude of individual contributions are merged into a fully modeled dielectron analysis. The resulting signal-to-background ratio represents some of the expected systematic uncertainties, while from the significance combined with the planned number of lead-lead collisions a realistic "measurement" with statistical fluctuations around the expected dielectron signal is generated using a Poisson sampling technique. Since the HF yield exceeds the QGP thermal radiation by about an order of magnitude, an additional analysis step exploiting the enhanced track reconstruction is introduced to reduce its contribution by up to a factor of five. The resulting reduction in pair efficiency is overcompensated by an up to hundred times higher collision rate. The entire cocktail is then subtracted from the sampled data to isolate the thermal excess yield. The final analysis of this spectrum shows that the inverse slope of the model prediction, which depends directly on the QGP temperature, can be reproduced within statistical and systematic uncertainties of about 10%.
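The Poisson sampling step used above to turn an expected dielectron spectrum into a statistically fluctuated "measurement" can be sketched as follows. This is a minimal, stdlib-only illustration, not the actual ALICE analysis code; the per-bin expectation values are invented, and Knuth's sampler is used here only because it needs no external library.

```python
import math
import random

def poisson_knuth(mu, rng):
    """Knuth's Poisson sampler; adequate for the modest per-bin
    expectation values used in this illustration."""
    limit = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= limit:
            return k - 1

def sample_measurement(expected_counts, seed=1):
    """Turn an expected spectrum (counts per invariant-mass bin) into a
    statistically fluctuated integer 'measurement'."""
    rng = random.Random(seed)
    return [poisson_knuth(mu, rng) for mu in expected_counts]

# Hypothetical expected counts per invariant-mass bin
measured = sample_measurement([50.0, 20.0, 8.0, 3.0])
```

Subtracting the cocktail from such a sampled spectrum then leaves the thermal excess with realistic statistical fluctuations, as described above.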
The promising results of this study have contributed on the one hand to the realization of the ALICE upgrade and to a design decision for the new Inner Tracking System, and at the same time represent exciting predictions for upcoming measurements.
The field of high-energy heavy-ion research is devoted to the study of the quark-gluon plasma (QGP). A QGP is a very hot and dense state of matter that filled the universe for a few microseconds shortly after the Big Bang. Under these extreme conditions, the fundamental building blocks of matter, the quarks and gluons, are quasi-free, i.e. not confined in hadrons as they are under normal conditions. Hadrons are particles composed of quarks and gluons. The best-known hadrons are protons and neutrons, the constituents of atomic nuclei, from which, together with electrons, all known matter is built.
To create a QGP in the laboratory, ultrarelativistic heavy ions, such as Pb-208 nuclei, are collided. This is done at CERN, the largest nuclear research centre in the world. The particle accelerator that accelerates and collides protons and Pb nuclei is called the Large Hadron Collider (LHC) and, with a circumference of 27 km, is the largest in the world. In a single Pb-Pb collision at the LHC, several thousand particles and antiparticles are produced. The dedicated experiment for the study of heavy-ion collisions at the LHC is ALICE. ALICE is equipped with several particle detectors that make it possible to measure and identify thousands of particles simultaneously.
Among the produced particles are also light atomic nuclei, although these are produced only very rarely. The number of produced particles per species depends on their mass. In Pb-Pb collisions at the LHC, the number of produced (anti)nuclei decreases exponentially, by a factor of 1/330 for each additional nucleon. The amount of produced particles per species provides information about the production mechanism at the transition from the QGP to the hadron gas. Light (anti)nuclei are of particular interest here, since they are comparatively large and their binding energy is up to two orders of magnitude smaller than the temperatures prevailing at hadron production. To this day it is not understood how light (anti)nuclei can be produced and survive under these conditions.
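The exponential suppression quoted above, a factor of roughly 1/330 per added nucleon, translates into a simple back-of-the-envelope yield estimate. This is a hedged sketch, not an ALICE result: the normalisation to mass number A = 1 and the constant penalty factor are simplifying assumptions.

```python
def relative_yield(mass_number, penalty=1.0 / 330.0):
    """Relative light-(anti)nucleus yield, normalised to A = 1, assuming a
    constant suppression factor of ~1/330 per additional nucleon."""
    return penalty ** (mass_number - 1)

deuteron = relative_yield(2)  # ~3e-3 relative to A = 1
alpha = relative_yield(4)     # ~2.8e-8: why (anti)alpha is so rare
```

The (1/330)^3 suppression for A = 4 illustrates why the size of the recorded dataset is decisive for the (anti)alpha measurement.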
For this work, about 270 million Pb-Pb collisions at a centre-of-mass energy of 5.02 TeV, recorded by ALICE in November 2018, were analysed. The production of (anti)triton and (anti)alpha was studied. Because of their large mass, both nuclei are produced very rarely, by far not in every collision. The antialpha is the heaviest antinucleus ever measured. Owing to this rarity, the size of the available dataset is decisive. It was possible to extract the first antialpha transverse-momentum spectrum ever measured. Transverse-momentum spectra were also determined for (anti)triton and alpha.
The results were compared with theoretical models and other ALICE measurements.
Finally, an outlook addresses the recently completed upgrade of the ALICE Time Projection Chamber (TPC). In the next data-taking period, starting soon, the LHC will increase its collision rate considerably, which will make it possible to record more than 100 times as much data as before. The (anti)triton and (anti)alpha analyses described in this work will benefit considerably from this. To cope with the substantially higher collision rates, several detectors, among them the TPC, had to be extensively upgraded. In the first two data-taking periods, the TPC was operated with multi-wire proportional chambers. These, however, are far too slow for the planned collision rates. They were therefore replaced in 2019, during a long shutdown of the LHC, by readout chambers based on quadruple GEM (Gas Electron Multiplier) foils, which allow a continuous readout of the TPC. Since this is the first large-scale GEM TPC ever built, an extensive research and development (R&D) programme was necessary to characterise and test the GEM readout chambers. Within this R&D programme, systematic measurements were carried out at the beginning of this doctoral work on a small test TPC with quadruple-GEM readout, built especially for this purpose. The backflow of the ions produced during gas amplification into the drift volume of the TPC, and the energy resolution, were measured with different GEM foil types and arrangements. The goal was to achieve the smallest possible ion backflow together with the best possible energy resolution. A compromise had to be found, since the two quantities behave in opposite ways. It was nevertheless possible to identify, for several GEM configurations, voltage settings at which both quantities met the desired requirements.
The thermal fit to preliminary HADES data of Au+Au collisions at √sNN = 2.4 GeV shows two degenerate solutions at T ≈ 50 MeV and T ≈ 70 MeV. The analysis of the same particle yields in a transport simulation of the UrQMD model yields the same features, i.e. two distinct temperatures for the chemical freeze-out. While both solutions yield the same number of hadrons after resonance decays, the feeddown contribution is very different in the two cases. This highlights that two systems with different chemical composition can yield the same multiplicities after resonance decays. The nature of these two minima is further investigated by studying the time-dependent particle yields and the extracted thermodynamic properties of the UrQMD model. It is confirmed that the evolution of the high-temperature solution resembles the cooling and expansion of a hot and dense fireball. The low-temperature solution displays an unphysical evolution: heating and compression of matter with a decrease of entropy. These results imply that the thermal model analysis of systems produced in low-energy nuclear collisions is ambiguous, but can be disambiguated by also taking the time evolution and resonance contributions into account.
The metaphor of the DIADEM informs the way in which Proverbs depicts the character of a woman of strength and her place in society. The metaphor serves Proverbs to conceptualise a prudent, virtuous, and reasonable character in relation to the divine and the human, and thus to present this character as the mainstay of a successful life.
Few empirical studies have explored psychological attitudes toward out-of-home mobility in old age. We aimed to validate an instrument to assess mobility-related behavioral flexibility and routines in the context of everyday mobility and successful aging. Data were gathered from face-to-face interviews and travel diaries of 211 community-dwelling older adults (aged 65–92) in Germany. Analysis revealed sufficient reliability and confirmed the factorial and convergent validity of the instrument. Mobility-related behavioral flexibility predicted the number of daily trips, particularly by mobility-impaired participants, and was strongly linked to autonomy and to psychological well-being. However, a preference for routines predicted neither out-of-home mobility nor further outcomes. The results demonstrate the importance of mobility-related flexibility in maintaining an active and independent life in old age.
In recent years, several neuronal differentiation protocols have been published that circumvent the requirement for embryoid body (EB) formation by using serum deprivation and simplified medium conditions. However, a neuronal default model establishing an approach that works efficiently for all pluripotent cells and neuronal precursors is still lacking. Whether such a default neural mechanism exists, and how it is implemented across a broad spectrum of cell sources, has been addressed in several studies and is still controversially discussed. It has been proposed that the default neuronal fate is initiated in the absence of extrinsic signals and is achieved by eliminating extracellular inhibitors of neuroectodermal fate and suppressing cell-cell signalling through limited cell density. Previous studies reported that ESCs and ECCs grown at low density and in the absence of exogenous factors or feeder layers die within 24 h but acquire a neural identity, as indicated by expression of the neural marker Nestin. Thus, this approach is not suitable for generating neural cultures. Furthermore, it was reported that P19 cells survive and express neuroectodermal marker genes in serum-free DMEM/F12 medium containing transferrin, insulin, and selenite, although no neurites were identified.
Against this background, a novel approach to induce neuronal differentiation in vitro was developed in this study that implements a nutrient-poor environment which, in contrast to previous studies, ensures the survival of neuronally differentiated cells over a long period of time and allows normal formation of neurites. Neither the formation of free-floating aggregates nor supplementation with growth factors or known inducers was required to establish a reliable neuronal differentiation protocol. A simple medium, consisting of DMEM/F12+N2 highly diluted in salt solution, was sufficient to drive fast neuronal differentiation in monolayer cultures. Serum deprivation and strong dilution of the DMEM/F12+N2 medium create a nutrient-poor environment in which the influence of growth factors and inducers is minimized. This medium creates a metabolically defined environment that is presumably free of extrinsic signals that prevent the decision for a neuronal fate. Analysis of the medium components revealed no actual inducer. Hence, it is suggested that the metabolic composition of the medium exclusively covers the specific cell requirements of neurons, thereby ensuring their survival, and drives the switch from pluripotent cells to neurons. The self-developed method was established using the murine embryonal carcinoma cell line P19 and could be transferred to murine ESCs. Consequently, the method could provide a feasible protocol for a generally valid neuronal default model.
The established protocol provides several advantages, such as the possibility to generate stable, pure neuronal cultures by a fast, simple, and highly reproducible one-step induction under defined medium conditions with a minimum of exogenous effectors. The method is characterised by clear and steady medium conditions, which makes the investigation of specific cell requirements during differentiation accessible. It is therefore expected to be a useful tool for investigating the molecular basis of neuronal differentiation as well as for high-throughput screenings. The phenotype of mature postmitotic neurons emerged within one week, and cultures were shown to stay stable for at least three weeks. The neuronal identity was confirmed by expression of neuronal markers through immunofluorescence staining and mass spectrometry analysis. Furthermore, increased levels of axon markers were detected in early neuronal differentiation, and the functionality of the synapses of the P19-derived neurons was ascertained by detection of calcium activity. Axonal laser ablation, immediately followed by fast regrowth of connections in the neuronal network, revealed a strong regeneration potential under the given conditions. Furthermore, the generated neurons showed a morphologically distinct phenotype and the formation of neural rosettes. Immunofluorescence staining demonstrated the generation of pure and homogeneous neuronal cultures, free of glial cells.
Retinoic acid (RA) plays an essential role in cell signalling during embryogenesis and efficiently induces neuronal differentiation in vitro in a concentration-dependent manner. Neither retinol nor retinoic acid was included in any of the components of the self-prepared medium used in this work. However, I observed a dependence on RARβ- and/or RARγ-regulated RA signalling in serum-free monolayer cultures. Nevertheless, neuronal differentiation in serum-free monolayer cultures was assumed to be RARα-independent because (i) RARα was slightly downregulated after neuronal induction, (ii) the truncated RARα of the RAC65 mutant had no effect on induction efficiency, and (iii) a pan-RAR inhibitor suppressed neuronal differentiation. In contrast to serum-free monolayer cultures, the truncated RARα prevented neuronal differentiation with the conventional protocol, in which cells are grown as free-floating cell aggregates in serum-containing medium. Proteome analysis of P19 cells treated with the self-developed differentiation protocol over five days showed increased levels of cellular RA binding proteins, which mediate cellular RA transport and are involved in canonical as well as non-canonical RA signalling.
...
Genetic engineering of Saccharomyces cerevisiae for improved cytosolic isobutanol biosynthesis
(2021)
The finite nature of fossil resources and the environmental problems caused by their excessive usage require alternative approaches. The transformation from a fossil-based economy to one based on renewable biomass is called a "bioeconomy". To substitute fossil resources, various microorganisms have already been modified for the biosynthesis of valuable chemicals from biomass. However, the development of such efficient microorganisms at an industrial scale remains a major challenge. The most prominent and robust microorganism for industrial production is the yeast Saccharomyces cerevisiae, which is known to produce ethanol that is used as a renewable biofuel. However, S. cerevisiae is also naturally able to produce isobutanol in small amounts. Isobutanol is favoured as a biofuel over ethanol due to its higher octane number and lower hygroscopicity, which make it more suitable for use in conventional combustion engines. In S. cerevisiae, the biosynthesis of isobutanol proceeds through the combination of mitochondrial valine synthesis (catalysed by Ilv2, Ilv5 and Ilv3) and its cytosolic degradation (catalysed by Aro10 and Adh2). The compartmentalisation of the two pathways in different organelles limits isobutanol biosynthesis. Thus, Brat et al. (2012) were able to increase the isobutanol yield to 15 mg/gGlc by cytosolic re-localisation of the enzymes Ilv2Δ54, Ilv5Δ48 and Ilv3Δ19 (cyt-ILV), with simultaneous deletion of ilv2. This corresponds to approximately 3.7% of the theoretical yield of 410 mg/gGlc, implying existing limitations in isobutanol biosynthesis, which were investigated in this work.
For as yet unknown reasons, isobutanol was only produced by S. cerevisiae in a valine-free medium, according to Brat et al. (2012). This work shows that this can be attributed to the catalytic activity of Ilv2Δ54, which acted as a growth inhibitor of S. cerevisiae. A negative selection was thereby exerted on the ILV2∆54 gene, which made the ilv2 deletion and simultaneous valine exclusion necessary to maintain the functional expression of the toxic ILV2∆54. Furthermore, it was shown that valine exclusion is not mandatory, owing to the feedback regulation of Ilv2 mediated by Ilv6. Rather, an increased isobutanol yield was observed when cytosolic Ilv6∆61 was expressed in valine-free medium, which is explained by the enhanced regulation of Ilv2Δ54 by Ilv6∆61 when BCAAs are absent. Isobutanol biosynthesis is neither redox- nor NAD(P)H cofactor-balanced. It was seen that the cofactor imbalance could be mitigated by the expression of an NADH oxidase (NOX), but not by expression of the NADH-dependent ilvC6E6, since the latter showed low in vivo activity. Furthermore, it was seen that the NAD(H) imbalance already limited isobutanol biosynthesis, whereas the NADP(H) imbalance did not. Another limitation of cytosolic isobutanol biosynthesis is the secretion of the intermediate 2,3-dihydroxyisovalerate, which is then no longer taken up by S. cerevisiae, causing a reduced isobutanol yield. This is attributed to insufficient Ilv3∆19 activity due to poor iron-sulphur cluster apo-protein maturation. Therefore, the aim was to replace Ilv3∆19 with heterologous dihydroxyacid dehydratases. Even though some of the enzymes were functionally expressed, none showed better in vivo activity than Ilv3∆19. Therefore, the Ilv3∆19 apo-protein maturation was improved instead. This was achieved by the genomic deletion of fra2 or pim1 as well as by the cytosolic expression of Grx5∆29.
In addition to introducing the isobutanol pathway, S. cerevisiae was optimised for isobutanol biosynthesis by rational and evolutionary engineering. For this purpose, the genes necessary for isobutanol production were integrated into the ilv2 locus, and the resulting strain was evolved in a medium containing the toxic amino acid analogue norvaline. Evolved single colonies were isolated that showed improved growth and increased isobutanol yields (0.59 mg/gGlc) in valine-free medium compared to the initial strain. This is explained by a gene dosage effect that occurred during the evolutionary engineering experiment. In collaboration with Dr. Wess, the genes ilv2, bdh1/2, leu4/9, ecm31, ilv1, adh1, gpd1/2 and ald6 were cumulatively deleted in CEN.PK113-7D to block competing metabolic pathways. The resulting strain JWY23 achieved isobutanol yields of up to 67.3 mg/gGlc when expressing the cyt-ILV enzymes from a multi-copy vector. The most promising approaches of this work, namely the deletion of fra2 and the expression of Grx5∆29, Ilv6∆61, and NOX, were confirmed in this JWY23 strain. The highest isobutanol yield in this work, 72 mg/gGlc, was observed for JWY23 expressing Ilv6∆61 and the cyt-ILV enzymes, which corresponds to 17.6% of the theoretical isobutanol yield.
Isobutyric acid (IBA) is a by-product of isobutanol biosynthesis, but it is also considered a valuable platform chemical. Therefore, the approaches that improved isobutanol biosynthesis were applied to the biosynthesis of IBA in S. cerevisiae. The highest IBA yield of 9.8 mg/gGlc was observed in valine-free medium by expression of the cyt-ILV enzymes, NOX and Ald6 in JWY04 (CEN.PK113-7D Δilv2; Δbdh1; Δbdh2; Δleu4; Δleu9; Δecm31; Δilv1). This corresponds to an 8.9-fold increase compared with the control and is, to the best of our knowledge, the highest IBA yield reported to date for S. cerevisiae.
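The percentages quoted above follow directly from the reported yields and the stated theoretical maximum of 410 mg/gGlc. A minimal sketch verifying this arithmetic (illustrative only, not part of the original analysis):

```python
# Sanity check of the yield percentages quoted in the abstract, relative to
# the stated theoretical maximum of 410 mg isobutanol per g glucose.
THEORETICAL_YIELD = 410.0  # mg/gGlc, from the text above

def percent_of_theoretical(yield_mg_per_g: float) -> float:
    """Return a yield as a percentage of the theoretical maximum."""
    return 100.0 * yield_mg_per_g / THEORETICAL_YIELD

print(round(percent_of_theoretical(15.0), 1))  # cyt-ILV baseline -> 3.7
print(round(percent_of_theoretical(72.0), 1))  # best JWY23 yield -> 17.6
```

Both values reproduce the percentages given in the abstract (3.7% and 17.6%).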
Sleep is one of the fundamental requirements of all animals, from nematodes to humans. It appears in different formats with shared features such as reduced muscle activity and reduced responsiveness to the environment. Despite the long history of sleep research, why a brain must be taken offline for a large portion of each day remains unknown. Moreover, sleep research in mammals and birds has revealed two stages, rapid-eye-movement (REM) and slow-wave (SW) sleep, which alternate during sleep. Whether these two stages exist in other vertebrates, particularly reptiles, is debated, as is the evolution of sleep in general.
Recordings from the brain of a lizard, the Australian bearded dragon Pogona vitticeps, indicate the presence of two electrophysiological states and provide a better picture of its sleep. Local field potential (LFP) signals, head velocity, eye movements, and heart rate during sleep match the pattern of REM and SW sleep in mammals. The SW and REM sleep patterns that we observed in lizards oscillated continuously for 6 to 10 hours with a period of 80-100 seconds when the ambient temperature was ~27°C. Lizard SW dynamics closely resemble those observed in rodent hippocampal CA1, yet originate from a brain area, the dorsal ventricular ridge (DVR), that does not correspond anatomically or transcriptomically to the mammalian hippocampus. This finding pushes back the probable evolution of these dynamics to the emergence of amniotes, at least 300 million years ago.
Unlike mammals and birds, REM and SW sleep in lizards occupy an almost equal amount of time during sleep. The clock-like alternation between these two sleep states was found initially by measuring the power modulation of two frequency bands, delta and beta. I recorded the full-band LFP and found an infra-slow oscillation (ISO) in the frequency range between 5 and 20 milli-Hz during sleep. The magnitude of ISO increased during sleep and decreased during both wakefulness and arousal during sleep. The up- and down-states of ISO were synchronized with the sleep state alternating rhythm but with a significant time lag dependent on the locations of the recording electrodes. Multi-site LFP recordings indicated that this ISO is a putative propagation wave sweeping extremely slowly, 30-67 µm/sec, from the posterior-dorsal pole to the anterior-ventral pole of the DVR.
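The two frequency figures above are mutually consistent: an 80-100 s alternation period corresponds to 10-12.5 mHz, inside the reported 5-20 mHz ISO band. A quick numerical check (illustrative only):

```python
# Consistency check of the numbers reported above: the 80-100 s sleep-state
# alternation period, expressed as a frequency, falls inside the 5-20 mHz
# infra-slow oscillation (ISO) band.
def period_to_mhz(period_s: float) -> float:
    """Convert an oscillation period in seconds to milli-Hertz."""
    return 1000.0 / period_s

ISO_BAND = (5.0, 20.0)  # mHz, as reported above
for period in (80.0, 100.0):
    f = period_to_mhz(period)
    assert ISO_BAND[0] <= f <= ISO_BAND[1]
    print(f"{period:.0f} s -> {f:.1f} mHz")
```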
Previous studies in other animals showed that brainstem areas such as the locus coeruleus, laterodorsal tegmentum, and periaqueductal gray are involved in the regulation of sleep states. In vivo recordings from the lizard brainstem are not feasible without severely affecting the animals and their quality of life. I thus carried out ex vivo recordings in both the DVR and the brainstem. Pharmacological stimulation of the brainstem could reversibly silence one distinct EEG pattern characteristic of SW sleep in the DVR, the sharp-wave and ripple complex. An ISO could be recorded simultaneously in both the DVR and the brainstem. From data collected in both intact and split ex vivo brains, I concluded that there are independent ISO generators in at least two areas, the brainstem and the telencephalon, whose signals may normally be synchronized by long-range connections. The DVR ISO leads the brainstem ISO by ~29 sec. Optogenetic stimulation of brainstem neurons was able to disrupt the ISO in the DVR reversibly.
In conclusion, the lizard brain offers a relatively simple model system to study sleep. Despite a diversity of results in different lizard species, my results revealed a number of new findings relevant for sleep research in general: 1) REM and SW sleep exist in a reptile. Since they also exist in birds and mammals, they probably existed in their common amniote ancestor, if not earlier. 2) REM and SW occupy equal amounts of time during sleep (50% duty cycle), a unique feature among all described sleep electrophysiological patterns, suggesting the possible existence of a simple, possibly ancestral, central pattern generator of sleep. 3) I discovered, in the local field potential, an infra-slow oscillation with extremely slow propagation, locked to the SW-REM alternating rhythm. The causes and mechanisms of this ISO remain to be understood. To my knowledge, a correlation between sleep states and such a slow rhythm has so far only been reported in human scalp EEG recordings.
Green finance upside down
(2021)
This thesis presents research based on nanoscopic surface measurements on plasmonic metasurfaces and two-dimensional materials, in particular the semiconducting transition metal dichalcogenide (TMDC) WS_2. The thesis is divided into seven chapters. The introduction provides an overview of the driving forces behind research in nanophotonics on two-dimensional material systems. The investigation of light-matter interaction at thin material interfaces runs as a common thread through the entire work.
The second chapter describes the experimental setup implemented to perform the nanoscopic measurements in this work. The theoretical foundations, the measurement principle, and the implementation of the scattering-type scanning near-field optical microscope (s-SNOM) are outlined. In addition, a conductive atomic force microscope (c-AFM) operated in contact mode is used to measure electrical currents on microscopic two-dimensional TMDC terraces. The following four chapters present the contributions of this work to the study of light-matter interaction at the nanoscale from different perspectives. Each chapter contains a brief introduction, a theory section, measurement data or simulation results, and an analysis, completed by a concluding section.
The central work, on a metallic metasurface composed of elliptical gold discs, is presented in Chapter 3. The associated theory section introduces the concept of surface plasmon polaritons (SPPs), which is essential to the field of plasmonics in general. Different methods for calculating the dispersion relation of these surface modes at single- and multi-layer interfaces are applied to the investigated metasurface sample. The model predicts three different modes propagating at the interface: a partially bound surface mode radiating into the substrate, and two buried, strongly bound anisotropic modes. A silicon nanosphere placed on the sample serves as a radial excitation source.
Comparison with s-SNOM near-field images shows that only the weakly bound guided-mode resonance was excited strongly enough to be detected by s-SNOM imaging. The weak surface confinement explains the apparently isotropic propagation on the anisotropic surface. Observing the remaining strongly confined anisotropic buried modes would require improved depth-sensitive resolution of the system, which should in principle be possible for layer thicknesses of 20 nm. Moreover, this observation raises the question of whether the excitation efficiency, determined by the momentum and mode-volume matching of the nanosphere, provides a sufficient excitation cross-section to generate detectable buried SPP modes.
Chapter 4 continues the idea of visualizing buried electric fields with s-SNOM. Here it is applied to the study of WS_2, a two-dimensional TMDC material that exhibits photoluminescence. By structuring the gallium phosphide substrate beneath the suspended monolayer, which is supported by a thin layer of hBN, the photoluminescence yield is increased by a factor of 10. This is achieved by designing a lateral DBR microcavity, etched into the substrate, with an additionally optimized vertical depth.
High-resolution imaging of the electric field distribution in the resonator is made possible by using s-SNOM to assess the improvement in coupling achieved by these two approaches. It was found that the lateral structure contributes predominantly to the enhanced photoluminescence yield, while no obvious enhancement of the coupling could be attributed to the vertical structure optimization.
The two-dimensional material WS_2 is investigated again in Chapter 5, this time using c-AFM. Multilayers of different thicknesses on graphene and gold serve as tunnel barriers for vertical currents between the substrate and the conductive c-AFM tip. The data can be explained by a Fowler-Nordheim model with parameters for the tunnelling width and the Schottky barrier heights of the two interfaces. However, the measurements show poor reproducibility, which calls for a more detailed account of the relevant error sources. In the chapter's conclusion, several key aspects are proposed that should be considered in future measurements. Crucially, c-AFM is highly sensitive to the adsorption of water films on the sample surface, from which WS_2 surfaces suffer under ambient conditions...
In this paper, we present an experimental and theoretical study of excitation processes for the heaviest stable helium-like ion, that is, He-like uranium occurring in relativistic collisions with hydrogen and argon targets. In particular, we concentrate on angular distributions of the characteristic Kα radiation following the K → L excitation of He-like uranium. We pay special attention to the magnetic sub-level population of the excited 1s2lj states, which is directly related to the angular distribution of the characteristic Kα radiation. We show that the experimental data can be well described by calculations taking into account the excitation by the target nucleus as well as by the target electrons. Moreover, we demonstrate for the first time an important influence of the electron-impact excitation process on the angular distributions of the Kα radiation produced by excitation of He-like uranium in collisions with different targets.
Similar to chloroplast loci, mitochondrial markers are frequently used for genotyping, phylogenetic studies, and population genetics, as they are easily amplified due to their multiple copies per cell. A recent study revealed that the chloroplast offers little variation for this purpose in central European populations of beech. Thus, the aim of this study was to elucidate whether mitochondrial sequences might offer an alternative, or whether they are similarly conserved in central Europe. For this purpose, a circular mitochondrial genome sequence from the more than 300-year-old beech reference individual Bhaga from the German National Park Kellerwald-Edersee was assembled using long and short reads and compared to an individual from the Jamy Nature Reserve in Poland and a recently published mitochondrial genome from eastern Germany. The mitochondrial genome of Bhaga was 504,730 bp, while the mitochondrial genomes of the other two individuals were 15 bases shorter, due to seven indel locations: four with more bases in Bhaga and three with one base less in Bhaga. In addition, 19 SNP locations were found, none of which were inside genes. At 17 of these SNP locations, the base in Bhaga differed from the other two genomes, while at 2 SNP locations Bhaga and the Polish individual shared the same base. While these figures are slightly higher than for the chloroplast genome, the comparison confirms the low degree of genetic divergence in organelle DNA of beech in central Europe, suggesting colonisation from a common gene pool after the Weichsel Glaciation. The mitochondrial genome might have limited use for population studies in central Europe, but once mitochondrial genomes from glacial refugia become available, it might be suitable for pinpointing the origin of migration of the re-colonising beech population.
First as a student of comparative literature with a focus on German and then as a professor of German Studies, I've been traveling back and forth to Germany for three decades, almost exactly the age of the reunified German state. I have stayed for weeks, for months, or for more than a year at a time. I have lived in Leipzig, in Cologne, and in Munich, but I have spent by far the most time in Berlin, a place that I have come to consider a second home. Throughout that time, Germany has changed enormously, both demographically and attitudinally, in relation to diversity in general and in its relationship to Jews.
Protein ubiquitination is a post-translational modification that typically involves the conjugation of ubiquitin to substrate proteins via a three-enzyme cascade and regulates a wide variety of cellular processes. Recent studies have revealed that the SidE family of Legionella effectors, such as SdeA, catalyzes a novel phosphoribosyl-linked ubiquitination (PR-ubiquitination) of serines in host substrate proteins utilizing NAD+, without the need for E2 and E3 enzymes. The catalytic core of SdeA comprises a mono-ADP-ribosyltransferase (mART) domain that functions to ADP-ribosylate ubiquitin, and a phosphodiesterase (PDE) domain that processes ADP-ribosylated ubiquitin and transfers the resulting phosphoribosylated ubiquitin to serines of substrates.
To date, extensive efforts have been made to study the function of SdeA and the mechanism of SdeA-mediated PR-ubiquitination; however, the cellular effects of this novel ubiquitination and of the phosphoribosylation of ubiquitin remained poorly understood. In our study, using biochemical and cell biological approaches, we explored the biological effect of the phosphoribosylation of ubiquitin caused by SdeA in cells. We found that phosphoribosylated ubiquitin is not available for conventional ubiquitination; thereby, phosphoribosylation of ubiquitin impairs numerous classical ubiquitination-related cellular processes, including mitophagy, TNF-α signaling and proteasomal degradation.
The precise temporal regulation of the functions of bacterial effectors during Legionella infection by other effectors with antagonizing activities has been well studied. Not surprisingly, PR-ubiquitination catalyzed by SidE family effectors is tightly controlled as well; it has long been known that the effector SidJ counteracts the toxicity of SdeA to yeast cells. Interestingly, in an experiment to verify the activity of SidJ, we found that Legionella lysate lacking SidJ was still able to remove ubiquitin from PR-ubiquitinated substrates. Using a biochemical approach, we identified DupA and DupB, two Legionella effectors that specifically reverse the novel serine PR-ubiquitination catalyzed by SdeA. We found that DupA and DupB possess a highly homologous PDE domain that removes ubiquitin from PR-ubiquitinated substrates by cleaving the phosphodiester bond between the phosphoribosylated ubiquitin and the serines of substrates. The catalytically deficient mutant DupA H67A strongly binds PR-ubiquitinated proteins but is not capable of cleaving PR-ubiquitin. Using it as a trapping bait, we identified over 180 substrates of PR-ubiquitination, including a number of ER and Golgi proteins.
In particular, we found that exogenously expressed SdeA localizes to the Golgi apparatus via its C-terminal region and disrupts the Golgi. We validated the identified potential substrates of SidE effectors and found that SdeA modifies the Golgi tethering proteins GRASP55 and GRASP65. Using mass spectrometry analyses, we identified four serine targets (S3, S408, S409, S449) of GRASP55 PR-ubiquitinated by SdeA in vitro. Ubiquitination of the GRASP55 serine mutant in cells co-expressing SdeA or infected with Legionella was markedly decreased compared with that of wild-type GRASP55. In addition, co-immunoprecipitation analyses showed that SdeA-catalyzed ubiquitination regulates the function of GRASP55: PR-ubiquitinated GRASP55 exhibited reduced self-interaction compared to unmodified GRASP55, and expression of the GRASP55 serine mutant partly rescued the Golgi damage caused by SdeA. Furthermore, our study reveals that the Golgi structure disruption caused by SdeA does not result in the recruitment of Golgi membranes to the Legionella-containing vacuoles. Instead, it affects the cellular secretory pathway, including cytokine secretion.
Taken together, this work expands the understanding of the unconventional PR-ubiquitination catalyzed by Legionella effectors and sheds light on the functions of PR-ubiquitination, by which Legionella regulates Golgi function and the secretory pathway during bacterial infection.
Inducing cell death in tumor cells is a major goal of anti-cancer therapy. However, the preferable mode of cell death to induce is under debate. Apoptosis is known to be an anti-inflammatory and pro-resolving type of programmed cell death, whereas necroptosis results in the release of danger-associated molecular patterns (DAMPs) and is pro-inflammatory. Efferocytosis of apoptotic cells by macrophages results in a pro-resolving switch of macrophage polarization and is required to induce the resolution of inflammation. This impact of apoptotic cells on macrophages is an undesired consequence of cell death in tumors, which are often characterized by an overshooting wound healing response. Moreover, apoptosis resistance is frequently observed in cancer cells. To overcome apoptosis resistance in cancer cells, necroptosis can be induced as an alternative mechanism for cancer treatment. Interferons (IFNs) play an important role in tumor immune responses and act by inducing the expression of IFN-stimulated genes (ISGs). Furthermore, IFNs were shown to be able to induce necroptosis together with Smac mimetics when caspases are inhibited in different cancer cell lines. Necroptosis is induced by phosphorylation and activation of receptor-interacting serine/threonine-protein kinase 1 (RIPK1), RIPK3 and the pseudokinase mixed lineage kinase domain-like (MLKL).
In my thesis, we first identified MLKL as an ISG in various cancer cell lines. MLKL upregulation was found to be a general feature of IFN signaling, since both type I and type II IFNs increase the expression of MLKL. IFNγ was able to upregulate MLKL at the messenger ribonucleic acid (mRNA) and protein level, indicating that MLKL is elevated transcriptionally. Indeed, Actinomycin D chase experiments showed that inhibition of transcription abolished MLKL upregulation upon IFN treatment. Both knockdown of the IFNγ-activated transcription factors interferon regulatory factor 1 (IRF1) and signal transducer and activator of transcription 1 (STAT1) and knockout of IRF1 significantly dampened MLKL mRNA upregulation, demonstrating that STAT1 and especially IRF1 are necessary to induce MLKL expression. This first part of the study highlights the upregulation of MLKL by IFNγ as a valuable tool to sensitize cells towards necroptosis and thereby overcome apoptosis resistance in cancers.
Compared to apoptosis, the immune response to necroptotic cells and the polarization of macrophages phagocytosing necroptotic cells are not well studied. In most studies, cell death was induced by biological or chemical compounds, which may lead to artifacts by affecting the macrophages and triggering unrelated signaling pathways. Therefore, in the second part of my thesis, we used a pure cell death system of NIH 3T3 cells expressing either dimerizable caspase 8 or oligomerizable RIPK3 to induce cell death. Addition of the B/B homodimerizer (dimerizer) to the cells resulted in apoptosis or necroptosis, which was confirmed by caspase 3/7 activation, phosphorylation of MLKL and inhibitor experiments, respectively. We analyzed the effect of dying cells on peritoneal macrophages by establishing a co-culture in a transwell system. The gene expression profile of macrophages co-cultured with dying cells was evaluated by whole transcriptome RNA sequencing. In macrophages co-cultured with necroptotic cells, genes corresponding to chemotaxis and hypoxia pathways were upregulated. A significant proportion of hypoxia-related pathways are mediated by hypoxia-inducible factor 1-alpha (HIF-1α), which also induces metabolic changes in polarized macrophages. We could show that macrophages co-cultured with necroptotic cells displayed decreased mitochondrial respiration, indicating an inflammatory (M1) polarization. Protein levels of chemokine C-X-C motif ligand 1 (CXCL1), which was increased in the RNA sequencing data, were also upregulated in the supernatant of co-cultured macrophages and of necroptotic cells, demonstrating that necroptotic cells both secrete CXCL1 and induce gene expression of CXCL1 in peritoneal macrophages. This may influence the recruitment of neutrophils, as inhibition of necroptosis during Zymosan-A-induced peritonitis in mice decreased the levels of neutrophils at day 1 of this model of self-resolving inflammation.
Furthermore, RNA sequencing revealed an unexpected impact of apoptotic cells on macrophage biology, as cell cycle and cell division pathways were upregulated. Enhanced proliferation of macrophages was confirmed by two functional assays with peritoneal macrophages isolated from mice and with IC-21 macrophages. Inhibition of apoptosis during Zymosan-A-induced peritonitis in mice led to decreased mRNA levels of cell cycle mediators in peritoneal macrophages. Simultaneously with cell cycle activation, gene sets of prostaglandin E2 (PGE2) signaling were upregulated in the RNA sequencing data. In the second part of my thesis, we could thus demonstrate that apoptotic cells induce transcription of cell cycle genes and proliferation of macrophages, while necroptotic cells are able to influence the chemokine profile of macrophages and thereby the recruitment of neutrophils.
(1) Background: The aim of our study was to identify specific risk factors for fatal outcome in critically ill COVID-19 patients. (2) Methods: Our data set consisted of 840 patients included in the LEOSS registry. Using lasso regression for variable selection, a multifactorial logistic regression model was fitted to the response variable survival. Specific risk factors and their odds ratios were derived. A nomogram was developed as a graphical representation of the model. (3) Results: 14 variables were identified as independent factors contributing to the risk of death for critically ill COVID-19 patients: age (OR 1.08, CI 1.06–1.10), cardiovascular disease (OR 1.64, CI 1.06–2.55), pulmonary disease (OR 1.87, CI 1.16–3.03), baseline statin treatment (OR 0.54, CI 0.33–0.87), oxygen saturation (unit = 1%, OR 0.94, CI 0.92–0.96), leukocytes (unit 1000/μL, OR 1.04, CI 1.01–1.07), lymphocytes (unit 100/μL, OR 0.96, CI 0.94–0.99), platelets (unit 100,000/μL, OR 0.70, CI 0.62–0.80), procalcitonin (unit ng/mL, OR 1.11, CI 1.05–1.18), kidney failure (OR 1.68, CI 1.05–2.70), congestive heart failure (OR 2.62, CI 1.11–6.21), severe liver failure (OR 4.93, CI 1.94–12.52), and a quick SOFA score of 3 (OR 1.78, CI 1.14–2.78). The nomogram graphically displays the importance of these 14 factors for mortality. (4) Conclusions: There are risk factors that are specific to the subpopulation of critically ill COVID-19 patients.
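The two-step modelling approach described in the Methods (lasso regression for variable selection, then odds ratios from the fitted coefficients) can be sketched as follows. This is a minimal illustration with synthetic data; the cohort size is taken from the abstract, but the predictors, effect sizes, and penalty strength are placeholders, not the LEOSS covariates or analysis.

```python
# Sketch of L1 (lasso) penalised logistic regression for variable selection,
# followed by odds ratios derived from the coefficients. Synthetic data only:
# this is NOT the LEOSS analysis; predictor names and effects are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 840                                # cohort size from the abstract
X = rng.normal(size=(n, 5))            # 5 standardised placeholder predictors
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1]  # only two truly informative predictors
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # Bernoulli outcomes

# The L1 penalty shrinks uninformative coefficients towards zero, which is
# what makes the lasso usable for variable selection. C is a placeholder.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)

# Odds ratio per one-unit increase of each predictor: OR = exp(beta).
odds_ratios = np.exp(model.coef_[0])
print(np.round(odds_ratios, 2))
```

In the real study, the selected variables and their ORs (with confidence intervals) are the ones listed in the Results, and the fitted model is then rendered as a nomogram.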
11,12-Dihydrodibenzo[c,g]-1,2-diazocines have been established as a viable alternative to azobenzene for photoswitching, in particular, as they show an inverted switching behavior: the ground state is the Z isomer. In this paper, we present an improved method to obtain dibenzodiazocine and its derivatives from the respective 2-nitrotoluenes in two reaction steps, each proceeding in minutes. This fast access to a variety of derivatives permitted the study of substitution effects on the synthesis and on the photochemical properties. With biochemical applications in mind, methanol was chosen as a protic solvent system for the photochemical investigations. In contrast to the azobenzene system, none of the tested substitution patterns resulted in more efficient switching or in significantly prolonged half-lives, showing that the system is dominated by the ring strain.
Acinetobacter baumannii is an important nosocomial pathogen that requires thoughtful consideration in the antibiotic prescription strategy due to its multidrug resistant phenotype. Tetracycline antibiotics have recently been re-administered as part of combination antimicrobial regimens to treat infections caused by A. baumannii. We show that the TetA(G) efflux pump of A. baumannii AYE confers resistance to a variety of tetracyclines including the clinically important antibiotics doxycycline and minocycline, but not to tigecycline. Expression of the tetA(G) gene is regulated by the TetR repressor of A. baumannii AYE (AbTetR). Thermal shift binding experiments revealed that AbTetR preferentially binds tetracyclines which carry an O-5H moiety in ring B, whereas tetracyclines with a 7-dimethylamino moiety in ring D are less well recognized by AbTetR. Confoundingly, tigecycline binds to AbTetR even though it is not transported by the TetA(G) efflux pump. Structural analysis of the minocycline-bound AbTetR-Gln116Ala variant suggested that the non-conserved Arg135 interacts with ring D of minocycline by cation-π interaction, while the invariant Arg104 engages in H-bonding with the O-11H of minocycline. Interestingly, the Arg135Ala variant exhibited a binding preference for tetracyclines with an unmodified ring D. In contrast, the Arg104Ala variant preferred to bind tetracyclines which carry an O-6H moiety in ring C, except for tigecycline. We propose that Arg104 and Arg135, which are embedded at the entrance of the AbTetR binding pocket, play important roles in the recognition of tetracyclines, and act as a barrier to prevent the release of tetracycline from its binding pocket upon AbTetR activation. The binding data and crystal structures obtained in this study might provide further insight for the development of new tetracycline antibiotics to evade the specific efflux resistance mechanism deployed by A. baumannii.
Membrane-suspended nanopores in microchip arrays for stochastic transport recording and sensing
(2021)
The transport of nutrients, xenobiotics, and signaling molecules across biological membranes is essential for life. As gatekeepers of cells, membrane proteins and nanopores are key targets in pharmaceutical research and industry. Multiple techniques help in elucidating, utilizing, or mimicking the function of biological membrane-embedded nanodevices. In particular, the use of DNA origami to construct simple nanopores based on the predictable folding of nucleotides provides a promising direction for innovative sensing and sequencing approaches. Knowledge of translocation characteristics is crucial to link structural design with function. Here, we summarize recent developments and compare features of membrane-embedded nanopores with solid-state analogues. We also describe how their translocation properties are characterized by microchip systems. The recently developed silicon chips, comprising solid-state nanopores of 80 nm connecting femtoliter cavities in combination with vesicle spreading and formation of nanopore-suspended membranes, will pave the way to characterize translocation properties of nanopores and membrane proteins in high-throughput and at single-transporter resolution.