Universitätspublikationen
Above 1 MeV of incident neutron energy, the fission fragment angular distribution (FFAD) generally shows strongly anisotropic behavior due to the combination of the incident orbital angular momentum and the intrinsic spin of the fissioning nucleus. This effect has to be taken into account when estimating the efficiency of devices used for fission cross-section measurements. In addition, it carries information on the spin deposition mechanism and on the structure of transitional states. We designed and constructed a detection device, based on Parallel Plate Avalanche Counters (PPACs), for measuring the fission fragment angular distributions of several isotopes, in particular 232Th. The measurement was performed at n_TOF at CERN, taking advantage of the very broad energy spectrum of the neutron beam. Fission events were identified by back-to-back detection in coincidence in two position-sensitive detectors surrounding the targets. The detection efficiency, which depends mostly on the stopping of fission fragments in backings and electrodes, was computed with a Geant4 simulation and validated by comparison with the measured case of 235U below 3 keV, where the emission is isotropic. In the case of 232Th, the result is in good agreement with previous data below 10 MeV, with good reproduction of the structures associated with vibrational states and the opening of second-chance fission. In the 14 MeV region our data are much more accurate than previous ones, which are widely scattered.
Neutron capture on 241Am plays an important role in nuclear energy production and also provides valuable information for the improvement of nuclear models and the statistical interpretation of nuclear properties. A new experiment to measure the 241Am(n, γ) cross section in the thermal region and the first few resonances below 10 eV has been carried out at EAR2 of the n_TOF facility at CERN. Three neutron-insensitive C6D6 detectors were used to measure the neutron-capture gamma cascade as a function of the neutron time of flight, and thus deduce the neutron capture yield. Preliminary results will be presented and compared with results previously obtained at the same facility in EAR1. In EAR1 the gamma-ray background at thermal energies was about 90% of the signal, whereas in EAR2 the signal-to-background ratio is up to a factor of 25 more favorable. We also extended the low-energy limit down to subthermal energies. This measurement will allow a comparison with neutron capture measurements conducted at reactors using a different experimental technique.
J/ψ production as a function of charged-particle multiplicity in p-Pb collisions at √sNN = 8.16 TeV
(2020)
Inclusive J/ψ yields and average transverse momenta in p-Pb collisions at a center-of-mass energy per nucleon pair √sNN = 8.16 TeV are measured as a function of the charged-particle pseudorapidity density with ALICE. The J/ψ mesons are reconstructed at forward (2.03 < y_cms < 3.53) and backward (−4.46 < y_cms < −2.96) center-of-mass rapidity in their dimuon decay channel, while the charged-particle pseudorapidity density is measured around midrapidity. The J/ψ yields at forward and backward rapidity, normalized to their respective average values, increase with the normalized charged-particle pseudorapidity density, the former showing a weaker increase than the latter. The normalized average transverse momenta at forward and backward rapidity show a steady increase from low to high charged-particle pseudorapidity density, with saturation beyond the average value.
Using data samples collected with the BESIII detector operating at the BEPCII storage ring at center-of-mass energies from 4.178 to 4.600 GeV, we study the process e+e− → π0X(3872)γ and search for Zc(4020)0 → X(3872)γ. We find no significant signal and set upper limits on σ(e+e− → π0X(3872)γ) · B(X(3872) → π+π−J/ψ) and σ(e+e− → π0Zc(4020)0) · B(Zc(4020)0 → X(3872)γ) · B(X(3872) → π+π−J/ψ) for each energy point at the 90% confidence level, which are of the order of several tenths of a pb.
Reactive oxygen species are a class of naturally occurring, highly reactive molecules that change the structure and function of macromolecules. This can often lead to irreversible intracellular damage. Conversely, they can also cause reversible changes through post-translational modification of proteins which are utilized in the cell for signaling. Most of these modifications occur on specific cysteines. Which structural and physicochemical features contribute to the sensitivity of cysteines to redox modification is currently unclear. Here, I investigated the influence of protein structural and sequence features on the modifiability of proteins and specific cysteines therein using statistical and machine learning methods. I found several strong structural predictors for redox modification, such as a higher accessibility to the cytosol and a high number of positively charged amino acids in the close vicinity. I detected a high frequency of other post-translational modifications, such as phosphorylation and ubiquitination, near modified cysteines. Distribution of secondary structure elements appears to play a major role in the modifiability of proteins. Utilizing these features, I created models to predict the presence of redox modifiable cysteines in proteins, including human mitochondrial complex I, NKG2E natural killer cell receptors and proximal tubule cell proteins, and compared some of these predictions to earlier experimental results.
We establish weighted Lp-Fourier extension estimates for O(N−k)×O(k)-invariant functions defined on the unit sphere S^(N−1), allowing for exponents p below the Stein–Tomas critical exponent 2(N+1)/(N−1). Moreover, in the more general setting of an arbitrary closed subgroup G ⊂ O(N) and G-invariant functions, we study the implications of weighted Fourier extension estimates with regard to boundedness and nonvanishing properties of the corresponding weighted Helmholtz resolvent operator. Finally, we use these properties to derive new existence results for G-invariant solutions to the nonlinear Helmholtz equation −Δu − u = Q(x)|u|^(p−2)u, u ∈ W^(2,p)(R^N), where Q is a nonnegative, bounded and G-invariant weight function.
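For readability, the nonlinear Helmholtz problem from this abstract can be typeset as:

```latex
-\Delta u - u = Q(x)\,|u|^{p-2}u, \qquad u \in W^{2,p}(\mathbb{R}^N),
```

where $Q \ge 0$ is bounded and $G$-invariant, and the Stein–Tomas critical exponent referred to above is $\frac{2(N+1)}{N-1}$.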
The current SARS-CoV-2 outbreak leads to a growing need for point-of-care thoracic imaging that is compatible with isolation settings and infection prevention precautions. We retrospectively reviewed 17 COVID-19 patients who received point-of-care lung ultrasound imaging in our isolation unit. Lung ultrasound detected interstitial lung disease effectively; severe cases showed bilaterally distributed B-lines with or without consolidations; one case showed bilateral pleural plaques. Corresponding to CT scans, interstitial involvement is accurately depicted as B-lines on lung ultrasound. Lung ultrasound might be suitable for detecting interstitial involvement in a bedside setting under high-security isolation precautions.
Objectives: Immune checkpoint inhibitors have become the standard of care for metastatic non–small-cell lung cancer (NSCLC) progressing during or after platinum-based chemotherapy. Real-world clinical practice tends to represent more diverse patient characteristics than randomized clinical trials. We sought to evaluate overall survival (OS) outcomes in the total study population and in key subsets of patients who received nivolumab for previously treated advanced NSCLC in real-world settings in France, Germany, or Canada.
Materials and methods: Data were pooled from two prospective observational cohort studies, EVIDENS and ENLARGE, and a retrospective registry in Canada. Patients included in this analysis were aged ≥18 years, had stage IIIB/IV NSCLC, and received nivolumab after at least one prior line of systemic therapy. OS was estimated in the pooled population and in various subgroups using the Kaplan-Meier method. Timing of data collection varied across cohorts (2015–2019).
Results: Of the 2585 patients included in this analysis, 1235 (47.8%) were treated in France, 881 (34.1%) in Germany, and 469 (18.1%) in Canada. Median OS for the total study population was 11.3 months (95% CI: 10.5–12.2); this was similar across France, Germany, and Canada. The OS rate was 49% at 1 year and 28% at 2 years for the total study population. In univariable Cox analyses, the presence of epidermal growth factor receptor mutations in nonsquamous disease and liver or bone metastases were associated with significantly shorter OS, whereas tumor programmed death ligand 1 expression and Eastern Cooperative Oncology Group performance status 0–1 were associated with significantly prolonged OS. Similar OS was noted across subgroups of age and prior lines of therapy.
Conclusion: OS rates in patients receiving nivolumab for previously treated advanced NSCLC in real-world clinical practice closely mirrored those in phase 3 studies, suggesting similar effectiveness of nivolumab in clinical trials and clinical practice.
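The Kaplan-Meier method named in the methods section can be sketched in a few lines. The follow-up data below are invented for illustration and are not taken from the study:

```python
# Minimal Kaplan-Meier estimator for overall survival (OS).
# times: follow-up in months; events: 1 = death, 0 = censored.
def kaplan_meier(times, events):
    """Return (time, survival probability) pairs at each event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts = [times[k] for k in order]
    es = [events[k] for k in order]
    at_risk, surv, curve, i = len(ts), 1.0, [], 0
    while i < len(ts):
        t, deaths, removed = ts[i], 0, 0
        while i < len(ts) and ts[i] == t:   # group ties at the same time
            deaths += es[i]
            removed += 1
            i += 1
        if deaths:                          # survival drops only at event times
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed                  # censored patients leave the risk set
    return curve

# Hypothetical follow-up data (months) for ten patients
times = [2, 4, 4, 6, 8, 10, 12, 14, 16, 20]
events = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
curve = kaplan_meier(times, events)
median_os = next(t for t, s in curve if s <= 0.5)
print(curve)
print("median OS:", median_os, "months")
```

With these invented data the estimator gives a median OS of 12 months; in practice one would use a validated implementation such as the `lifelines` package.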
We measure the inclusive semielectronic decay branching fraction of the D+s meson. A double-tag technique is applied to e+e− annihilation data collected by the BESIII experiment at the BEPCII collider, operating in the center-of-mass energy range 4.178–4.230 GeV. We select positrons from D+s → Xe+νe with momenta greater than 200 MeV/c and determine the laboratory momentum spectrum, accounting for the effects of detector efficiency and resolution. The total positron yield and semielectronic branching fraction are determined by extrapolating this spectrum below the momentum cutoff. We measure the D+s semielectronic branching fraction to be (6.30 ± 0.13(stat.) ± 0.09(syst.) ± 0.04(ext.))%, showing no evidence for unobserved exclusive semielectronic modes. We combine this result with external data taken from the literature to determine the ratio of the D+s and D0 semielectronic widths, Γ(D+s → Xe+νe)/Γ(D0 → Xe+νe) = 0.790 ± 0.016(stat.) ± 0.011(syst.) ± 0.016(ext.). Our results are consistent with and more precise than previous measurements.
Correlations between moments of different flow coefficients are measured in Pb–Pb collisions at √sNN = 5.02 TeV recorded with the ALICE detector. These new measurements are based on multiparticle mixed harmonic cumulants calculated using charged particles in the pseudorapidity region |η| < 0.8 with the transverse momentum range 0.2 < pT < 5.0 GeV/c. The centrality dependence of correlations between two flow coefficients, as well as the correlations between three flow coefficients, both in terms of their second moments, is shown. In addition, a collection of mixed harmonic cumulants involving higher moments of v2 and v3 is measured for the first time, where the characteristic signature of negative, positive and negative signs of four-, six- and eight-particle cumulants is observed, respectively. The measurements are compared to hydrodynamic calculations using iEBE-VISHNU with AMPT and TRENTo initial conditions. It is shown that the measurements carried out using the LHC Run 2 data in 2015 have the precision to explore the details of initial-state fluctuations and probe the nonlinear hydrodynamic response of v2 and v3 to their corresponding initial anisotropy coefficients ε2 and ε3. These new studies on correlations between three flow coefficients, as well as correlations between higher moments of two different flow coefficients, will pave the way to tighter constraints on initial-state models and help to extract precise information on the dynamic evolution of the hot and dense matter created in heavy-ion collisions at the LHC.
Recent studies suggest that synaptic lysophosphatidic acids (LPAs) augment glutamate-dependent cortical excitability and sensory information processing in mice and humans via presynaptic LPAR2 activation. Here, we studied the consequences of LPAR2 deletion or antagonism on various aspects of cognition using a set of behavioral and electrophysiological analyses. Hippocampal neuronal network activity was decreased in middle-aged LPAR2−/− mice, whereas hippocampal long-term potentiation (LTP) was increased suggesting cognitive advantages of LPAR2−/− mice. In line with the lower excitability, RNAseq studies revealed reduced transcription of neuronal activity markers in the dentate gyrus of the hippocampus in naïve LPAR2−/− mice, including ARC, FOS, FOSB, NR4A, NPAS4 and EGR2. LPAR2−/− mice behaved similarly to wild-type controls in maze tests of spatial or social learning and memory but showed faster and accurate responses in a 5-choice serial reaction touchscreen task requiring high attention and fast spatial discrimination. In IntelliCage learning experiments, LPAR2−/− were less active during daytime but normally active at night, and showed higher accuracy and attention to LED cues during active times. Overall, they maintained equal or superior licking success with fewer trials. Pharmacological block of the LPAR2 receptor recapitulated the LPAR2−/− phenotype, which was characterized by economic corner usage, stronger daytime resting behavior and higher proportions of correct trials. We conclude that LPAR2 stabilizes neuronal network excitability upon aging and allows for more efficient use of resting periods, better memory consolidation and better performance in tasks requiring high selective attention. Therapeutic LPAR2 antagonism may alleviate aging-associated cognitive dysfunctions.
Background: Rare diseases (RDs) are difficult to diagnose. Clinical decision support systems (CDSS) could support the diagnosis of RDs. The Medical Informatics in Research and Medicine (MIRACUM) consortium developed a CDSS for RDs based on distributed clinical data from eight German university hospitals. To support the diagnosis of difficult patient cases, the CDSS uses data from the different hospitals to perform a patient similarity analysis and obtain an indication of a diagnosis. To optimize our CDSS, we conducted a qualitative study to investigate its usability and functionality. Methods: We performed a Thinking Aloud test (TA test) with RD experts working at Rare Diseases Centers (RDCs) at MIRACUM locations, which are specialized in the diagnosis and treatment of RDs. An instruction sheet with tasks was prepared that the participants should carry out with the CDSS during the study. The TA test was recorded on audio and video, and the resulting transcripts were analyzed with a qualitative content analysis, a rule-guided, fixed procedure for analyzing text-based data. Furthermore, a questionnaire including the System Usability Scale (SUS) was handed out at the end of the study. Results: A total of eight experts from eight MIRACUM locations with an established RDC were included in the study. The results indicate that more detailed information about patients, such as descriptive attributes or findings, can help the system perform better. The system was rated positively in terms of functionality, such as functions that enable the user to obtain an overview of similar patients or the medical history of a patient. However, the results of the CDSS patient similarity analysis lack transparency: the study participants often stated that the system should present the user with an overview of the exact symptoms, diagnoses, and other characteristics that define two patients as similar.
In the usability section, the CDSS received a score of 73.21 points, which is ranked as good usability. Conclusions: This qualitative study investigated the usability and functionality of a CDSS for RDs. Despite positive feedback about the functionality of the system, the CDSS still requires revisions and improved transparency of the patient similarity analysis.
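The abstract does not specify how the patient similarity analysis is computed; as a purely hypothetical minimal sketch (not the MIRACUM implementation), ranking patients by cosine similarity over coded symptom profiles could look like this, with all codes and patients invented:

```python
import math

# Hypothetical sketch: patients represented as one-hot vectors over a shared
# symptom/finding terminology, compared with cosine similarity.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Invented one-hot symptom profiles (six hypothetical codes)
query_patient = [1, 1, 0, 1, 0, 0]
cohort = {
    "patient_A": [1, 1, 0, 0, 0, 0],
    "patient_B": [0, 0, 1, 0, 1, 1],
    "patient_C": [1, 1, 0, 1, 0, 1],
}
# Rank cohort patients by similarity to the query patient
ranked = sorted(cohort.items(),
                key=lambda kv: cosine_similarity(query_patient, kv[1]),
                reverse=True)
print(ranked[0][0])  # → patient_C
```

Exposing which shared codes drive each score would address the transparency concern raised by the study participants.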
Purpose: To test the effect of anatomic variants of the prostatic apex overlapping the membranous urethra (Lee type classification), as well as median urethral sphincter length (USL), in preoperative multiparametric magnetic resonance imaging (mpMRI) on very early continence in open (ORP) and robotic-assisted radical prostatectomy (RARP) patients. Methods: In 128 consecutive patients (01/2018–12/2019), the USL and the prostatic apex, classified according to Lee types A–D in mpMRI prior to ORP or RARP, were retrospectively analyzed. Uni- and multivariable logistic regression models were used to identify anatomic characteristics for very early continence rates, defined as urine loss of ≤ 1 g in the PAD test. Results: Of 128 patients with mpMRI prior to surgery, 76 (59.4%) underwent RARP vs. 52 (40.6%) ORP. Median USL was 15, 15 and 10 mm in the sagittal, coronal and axial dimensions, respectively. After stratification according to very early continence in the PAD test (≤ 1 g vs. > 1 g), continent patients significantly more frequently had Lee type D (71.4 vs. 54.4%) and C (14.3 vs. 7.6%, p = 0.03). In multivariable logistic regression models, the sagittal median USL (odds ratio [OR] 1.03) and Lee type C (OR 7.0) and D (OR 4.9) were independent predictors of achieving very early continence in the PAD test. Conclusion: Patients' individual anatomical characteristics in mpMRI prior to radical prostatectomy can be used to predict very early continence. Lee types C and D appear to be the most favorable anatomical characteristics. Moreover, a longer sagittal median USL in mpMRI seems to improve very early continence rates.
Caspase-8 is an aspartate-specific cysteine protease best known for its apoptotic functions. Caspase-8 sits at central nodes of multiple signaling pathways, regulating not only the cell cycle but also invasive and metastatic cell behavior, immune cell homeostasis and cytokine production, the two major components of the tumor microenvironment (TME). Ovarian cancer often shows dysregulated caspase-8 expression, leading to an imbalance between its apoptotic and non-apoptotic functions within the tumor and the surrounding milieu. The downregulation of caspase-8 in ovarian cancer seems to be linked to high aggressiveness with chronic inflammation, immunoediting, and immune resistance. Caspase-8 therefore plays an essential role not only in the primary tumor cells but also in the TME by regulating the immune response, B and T lymphocyte activation, and macrophage differentiation and polarization. The switch between M1 and M2 macrophages is possibly associated with changes in caspase-8 expression. In this review, we discuss the non-apoptotic functions of caspase-8, highlighting this protein as a modulator of the immune response and the cytokine composition in the TME. Considering the low survival rate among ovarian cancer patients, there is an urgent need to develop new therapeutic strategies that optimize the response to standard treatment. The TME is highly heterogeneous and provides a variety of opportunities for new drug targets. Given the variety of roles of caspase-8 in the TME, we should focus on this protein in the development of new therapeutic strategies against the TME of ovarian cancer.
Background: Cerebral radiation injury, including subacute radiation reactions and later stage radiation necrosis, is a severe side effect of brain tumor radiotherapy. A protocol of four infusions of the monoclonal antibody bevacizumab has been shown to be a highly effective treatment. However, bevacizumab is costly and can cause severe complications including thrombosis, bleeding and gastrointestinal perforations.
Methods: We performed a retrospective analysis of patients treated in our clinic for cerebral radiation injury who received only a single administration of bevacizumab. Single-shot was defined as one administration of bevacizumab without a second administration within an interval of at least 6 weeks.
Results: We identified 11 patients who had received a singular administration of bevacizumab to treat cerebral radiation injury. Prior radiation had been administered to treat gliomas (ten patients) or breast cancer brain metastases (one patient). Nine of the ten patients with available MRIs showed a marked reduction of edema at first follow-up. Dexamethasone could be discontinued in six patients, and a significant dose reduction was achieved in all other patients. One patient developed pulmonary artery embolism 2 months after bevacizumab administration. The median time to treatment failure of any cause was 3 months.
Conclusions: Single-shot bevacizumab has meaningful activity in cerebral radiation injury, but durable control is rarely achieved. In patients for whom a complete protocol of four bevacizumab infusions is not feasible due to medical contraindications or lack of reimbursement, single-shot bevacizumab treatment may be considered.
Liquidity derivatives
(2022)
It is well established that investors price market liquidity risk. Yet there exists no financial claim contingent on liquidity. We propose a contract to hedge uncertainty over future transaction costs, detailing potential buyers and sellers. Introducing liquidity derivatives into Brunnermeier and Pedersen (2009) improves financial stability by mitigating liquidity spirals. We simulate liquidity option prices for a panel of NYSE stocks spanning 2000 to 2020 by fitting a stochastic process to their bid-ask spreads. These contracts reduce the exposure to liquidity factors. Their prices provide a novel illiquidity measure reflecting cross-sectional commonalities. Finally, stock returns spread significantly along simulated prices.
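As an illustration only (the paper's fitted spread process and contract design are not reproduced here), a liquidity option on future bid-ask spreads can be priced by Monte Carlo with a simple mean-reverting stand-in process; every parameter value below is invented:

```python
import numpy as np

# Stand-in spread dynamics: Ornstein-Uhlenbeck-type mean reversion,
# floored at zero since spreads cannot be negative.
rng = np.random.default_rng(42)
kappa, theta, sigma = 2.0, 0.30, 0.15   # reversion speed, long-run spread (%), volatility
s0, strike, horizon = 0.25, 0.30, 0.5   # current spread (%), strike (%), horizon (years)
dt, n_paths = 1 / 252, 20000
n_steps = int(horizon / dt)

s = np.full(n_paths, s0)
for _ in range(n_steps):
    s += kappa * (theta - s) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    s = np.maximum(s, 0.0)

# A "liquidity call" pays the excess of the future spread over the strike,
# i.e. it compensates the holder when trading becomes expensive.
payoff = np.maximum(s - strike, 0.0)
price = payoff.mean()                   # undiscounted Monte Carlo price
print(f"simulated liquidity option price: {price:.4f}")
```

The hedging logic is the point of the sketch: a trader who fears rising transaction costs buys the call and is compensated exactly in the states where liquidity dries up.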
SAFE Update August 2022
(2022)
SAFE Update June 2022
(2022)
In the communication of the European Central Bank (ECB), the statement that "we act within our mandate" is frequently invoked. The term "mandate" has also become popular among practitioners of the Eurosystem. In his working paper, Helmut Siekmann analyzes the legal foundation of the tasks and objectives of the Eurosystem and price stability as a legal term. He finds that the primary law of the EU employs the term "mandate" only very sparsely; it is never used in the context of monetary policy and its institutions. Moreover, he comes to the conclusion that inflation targeting as a task, competence, or objective of the Eurosystem is legally highly questionable according to the common standards of interpretation.
Identifying the cause of discrimination is crucial for designing effective policies and understanding discrimination dynamics. Building on traditional models, this paper introduces a new explanation for discrimination: discrimination based on motivated reasoning. By systematically acquiring and processing information, individuals form motivated beliefs and consequently discriminate based on these beliefs. Through a series of experiments, I show the existence of discrimination based on motivated reasoning and demonstrate important differences from statistical discrimination and taste-based discrimination. Finally, I demonstrate how this form of discrimination can be alleviated by limiting individuals' scope to interpret information.
Spillovers of PE investments
(2022)
In this paper, we investigate a primary potential impact of leveraged buyout (LBO) transactions: their effects on the peers of the LBO target in the same industry. Using a data sample based on US LBO transactions between 1985 and 2016, we investigate the impact on peer firms in the aftermath of the transaction, relative to non-peer firms. To account for potential endogeneity concerns, we employ a network-based instrumental variable approach. Based on this analysis, we find support for the proposition that LBOs do indeed matter for peer firms' performance and corporate strategy relative to non-peer firms. Our study supports a learning-factor hypothesis: peers gain by learning from the LBO target to improve their operational performance. Conversely, we find no evidence to support the conjecture that peers lose due to the increased competitiveness of the LBO target firm.
A large group of aptamers are the guanosine triphosphate (GTP) aptamers. They illustrate very clearly how RNA uses different strategies to recognize the same ligand. The complete structure of the GTP class II aptamer is presented in the first publication. Interestingly, the structure features a stably protonated adenine beneath the GTP binding site, which was examined and characterized by a combination of further NMR and ITC experiments. The protonated base turned out to have a pKa value shifted far from neutrality, and the protonation remains stable even in very basic buffers.
One type of functional protonation is used by the cyclic dinucleotide (CDN) binding riboswitches to bind two CDNs with similar affinity. c-di-GMP riboswitches have been described as regulatory units and their crystal structure has been solved. Mutation experiments showed that a G-to-A mutation at the Gα binding site changes the selectivity of the riboswitch: the mutant binds both c-di-GMP and cGAMP with similar binding affinities. Riboswitches that bind cGAMP have also been found in bacterial genomes, with varying degrees of promiscuity. The investigation of the binding mode and the associated promiscuity is described in the second publication. There it was shown that the riboswitches can bind both ligands only if, for the binding of c-di-GMP, the ligand-binding adenine is protonated. This protonation, too, could be characterized with further NMR and ITC experiments. Studying such a large RNA by NMR spectroscopy is challenging; here it helped that the crystal structure was already known, although it did not reveal the protonation. This protonation likewise shows a pKa value shifted far from neutrality and is, moreover, stable at different pH values.
In the two examples investigated, two different kinds of protonation were demonstrated: a structural and a functional one. The GTP class II aptamer uses the protonation as the structural foundation of the ligand binding site: protonation of the adenine enables additional hydrogen bonds and thereby stabilizes the tertiary structure. In contrast, the promiscuous CDN riboswitches use the protonation to bind different ligands, shifting their functionality; the regulatory benefit of this, however, is still unknown.
A promiscuous representative has also been described among the SAM riboswitches. SAM riboswitches are among the longest-known riboswitch classes, and to date the largest number of distinct classes is known for them. SAM is frequently used as a donor of functional groups, most often as a methyl group donor for the methylation of a range of different substrates (e.g. DNA, proteins, metabolites). This reaction produces SAH as a byproduct. SAH is, in addition, cytotoxic, since it binds methyltransferases with high affinity and thereby inhibits this essential reaction; tight control of the SAH concentration is therefore critical. SAM-binding riboswitches have a binding affinity for SAM up to 1000-fold higher than for SAH. The description of a translational OFF riboswitch that binds SAM and SAH with similar affinity is therefore surprising, especially since it is associated almost exclusively with genes for SAM synthetases, whose regulation by SAH seems to make little sense. To gain a better understanding of the function of the SAM/SAH riboswitch, its 3D structure was solved by NMR spectroscopy, as described in the fourth publication. For this, all resonances of the sequence and the ligand first had to be assigned, as described in the third publication. SAH was chosen as the ligand because it is chemically more stable and therefore better suited to NMR measurements that sometimes run for days. In addition, mutants and related ligands were examined in ITC experiments for their binding properties, in order to probe the importance of the linker length, of individual base pairs, and of functional groups of the ligand. In other known SAM riboswitches, the RNA encloses the ligand almost completely; the sulfonium ion is specifically recognized and coordinated by the carboxyl groups of several uracil nucleotides.
Moreover, a binding pocket forms that provides enough space for stable binding of the methyl group. In the SAH riboswitch, selectivity for SAH is achieved because the binding pocket sterically provides no room for the methyl group of SAM.
In summary, this thesis investigated three different ligand-binding RNA structures, all of which use very different strategies to bind their ligands. Although protonations have rarely been described for aptamers and riboswitches, they play a decisive role in the first two structures investigated. Even though they are rarely reported across all known RNA structures, there are, besides the two discussed here, several further examples of structural or functional protonations. With a view to future RNA structure prediction programs, or improvements of existing ones similar to those long used for proteins, protonated nucleobases must be taken seriously into account. Furthermore, it was shown that two of the investigated riboswitches bind two ligands with similar affinity, albeit by different strategies. While the regulatory benefit is still unknown for the promiscuous CDN riboswitches, it could be shown for the SAM/SAH riboswitch that SAH is bound only incidentally, probably because of its very low intracellular concentration, and that this riboswitch therefore probably arose later in evolution. Riboswitches remain exciting.
Extreme convective precipitation events are among the most severe hazards in central Europe and are expected to intensify under global warming. However, the degree of intensification and the underlying processes are still uncertain. In this thesis, recent advances in continuous, radar-based precipitation monitoring and convection-permitting climate modeling are used to investigate Lagrangian properties of convective rain cells such as precipitation intensity, cell area, and precipitation sum and their relationship to large-scale, environmental conditions.
Firstly, convective precipitation objects are tracked in a gauge-adjusted radar-data set and the properties of these cells are related to large-scale environmental variables to investigate the observed super-Clausius-Clapeyron (CC) scaling of convective extreme precipitation. The Lagrangian precipitation sum of convective cells increases with dew point temperature at rates well above the CC-rate with increasing rates for higher dew point temperatures. These varying, high rates are caused by a covarying increase of CAPE with dew point temperature as well as the effect of high vertical wind shear causing an increase in cell area and thus precipitation sum. At the same time, cells move faster at high vertical wind shear so that Eulerian scaling rates are lower than Lagrangian but still above the CC-rate. The results show that wind shear and static instability need to be taken into account when transferring precipitation scaling under current climate conditions to future conditions. Secondly, the representation of convective cell properties in the convection-permitting climate model COSMO-CLM is evaluated. The model can simulate the observed frequency distributions of cell properties such as lifetime, area, mean and maximum intensity, and precipitation sum. The increase of area and intensity with lifetime is also well captured despite an underestimation of the intensity of the most severe cells. Furthermore, the model can represent the temperature scaling of intensity, area, and precipitation sum but fails to simulate the observed increase of lifetime. Thus, the model is suitable to study climatologies of convective storms in Germany. Thirdly, two COSMO-CLM projections at the end of the century under emission scenario RCP8.5 were investigated. While the number of convective cells and their lifetime remain approximately constant compared to present conditions, intensity and area increase strongly. 
The relative increase of intensity and area is largest for the highest percentiles, meaning that extreme events intensify the most. The characteristic afternoon maximum of convective precipitation is damped and shifted to later times of day, which leads to an increase of nighttime precipitation in the future. Scaling rates of cell properties with dew point temperature are nearly identical under present and future conditions in the simulation driven by the EC-Earth model, which means that the upper limit of cell properties like intensity, area, and precipitation sum could be predicted from near-surface dew point temperature. However, this result could not be reproduced by the simulation driven by MIROC5 and needs further investigation.
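As a numerical illustration of the scaling argument above, the snippet below contrasts the local Clausius-Clapeyron rate (roughly 6-7 % more saturation vapor pressure per kelvin at typical surface temperatures) with a super-CC rate fitted from synthetic cell data. The Bolton saturation-vapor-pressure formula and the assumed 14 %/K growth of the synthetic intensities are illustrative choices, not values taken from the thesis.

```python
import numpy as np

def saturation_vapor_pressure(t_celsius):
    """Bolton (1980) approximation for saturation vapor pressure in hPa."""
    return 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))

def cc_rate(t_celsius):
    """Local Clausius-Clapeyron scaling rate (fractional increase per K)."""
    return (saturation_vapor_pressure(t_celsius + 0.5)
            / saturation_vapor_pressure(t_celsius - 0.5)) - 1.0

def scaling_rate(dew_point, intensity):
    """Estimate the exponential scaling rate of precipitation intensity
    with dew point temperature via a log-linear least-squares fit."""
    slope, _ = np.polyfit(dew_point, np.log(intensity), 1)
    return np.exp(slope) - 1.0  # fractional increase per K

# Synthetic cells whose intensity grows at 14 %/K (about twice the CC rate):
rng = np.random.default_rng(0)
td = rng.uniform(10.0, 22.0, 500)                    # dew point in degC
intensity = 5.0 * 1.14**td * rng.lognormal(0.0, 0.1, 500)

print(f"CC rate at 15 degC:  {cc_rate(15.0):.1%} per K")
print(f"fitted scaling rate: {scaling_rate(td, intensity):.1%} per K")
```

A fitted rate well above the CC rate, as here, is the signature of super-CC scaling discussed in the abstract.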
This dissertation addresses the influence of homeostatic adaptation on information processing and learning processes in neural systems. The term homeostasis denotes the ability of a dynamical system to keep certain internal variables in a dynamic equilibrium by means of regulatory mechanisms. A classic example of neural homeostasis is the dynamic scaling of synaptic weights, which keeps the activity, i.e. the firing rate, of individual neurons constant on temporal average. The models we consider implement a dual form of neural homeostasis: for each neuron, two internal parameters are coupled to an intrinsic variable such as the aforementioned mean activity or the membrane potential. A distinctive feature of this dual adaptation is that it allows controlling not only the temporal mean of a dynamical variable but also its temporal variance, i.e. the strength of the fluctuations around the mean. In this work, two neural systems are considered in which this aspect comes into play.
The first system studied is a so-called echo state network, which belongs to the category of recurrent networks. Recurrent neural networks generally have the property that a population of neurons possesses synaptic connections projecting back onto the population itself, i.e. feeding back. Recurrent networks can thus be regarded as autonomous (if no additional external synaptic connections exist) or non-autonomous dynamical systems which, owing to this feedback, exhibit complex dynamical properties. Depending on the structure of the recurrent synaptic connections, information from external input can, for example, be stored over extended periods of time. Likewise, dynamical fixed points as well as periodic or chaotic activity patterns can emerge. This dynamical versatility is also found in the recurrent networks omnipresent in the brain, where it serves, e.g., the processing of sensory information or the execution of motor movement patterns. The echo state network we consider is characterized by recurrent synaptic connections that are generated randomly and are not subject to synaptic plasticity. In the course of a learning process, only the connections projecting from this so-called dynamic reservoir onto output neurons are modified. Although this greatly simplifies learning, the reservoir's ability to process time-dependent inputs depends strongly on the statistical distribution used to generate the recurrent connections. The variance, i.e. the scaling of the weights, is of particular importance here. A measure of this scaling is the spectral radius of the recurrent weight matrix.
Previous theoretical work has shown that, for the system considered, a spectral radius slightly below the critical value of 1 leads to good performance. Above this value, chaotic dynamics emerge in the autonomous case, which is detrimental to information processing. The dual adaptation mechanism we introduce, termed flow control, aims to regulate the spectral radius toward the desired target value by scaling the synaptic weights. Crucially, in the interest of biological plausibility, the adaptation dynamics relies only on local quantities. In the case of flow control, this is achieved by regulating the fluctuations of the cell's membrane potential. When evaluating the effectiveness of flow control, we found that the spectral radius can be controlled very precisely as long as the activities of the neurons in the recurrent population are only weakly correlated. Correlations can be induced, for example, by external input that is strongly synchronized across neurons, which accordingly degrades the precision of the adaptation mechanism.
When testing the network in a learning scenario, however, this effect did not impair performance: optimal performance was achieved, independently of the strength of the correlated input, at a spectral radius slightly below the critical value of 1. This leads us to conclude that flow control is able to tune recurrent networks into a working regime optimal for information processing, independently of the strength of external stimulation.
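The role of the spectral radius described above can be sketched with a minimal echo state network. Everything below is an illustrative assumption: the reservoir size, the tanh update, the delayed-recall task, and in particular the explicit eigenvalue rescaling, which stands in for the local flow-control adaptation developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200  # reservoir size (illustrative)

# Random recurrent weights, rescaled to a target spectral radius
# slightly below the critical value of 1.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
target_radius = 0.9
W *= target_radius / np.max(np.abs(np.linalg.eigvals(W)))

w_in = rng.normal(0.0, 1.0, N)  # fixed, untrained input weights

def run_reservoir(inputs, W, w_in):
    """Drive the reservoir with a scalar input sequence; return all states."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

u = np.sin(0.2 * np.arange(500))
X = run_reservoir(u, W, w_in)

# Only the linear readout is trained (ridge regression), here on a
# simple memory task: reproduce the input delayed by 10 steps.
delay = 10
target = u[:-delay]
A = X[delay:]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ target)
pred = A @ w_out
print("memory-task mse:", np.mean((pred - target) ** 2))
```

Note that only `w_out` is learned; the recurrent weights stay fixed, exactly as in the reservoir-computing setup described above.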
The second model considered is a neuron model with two compartments, modeled on the specific anatomy of pyramidal neurons in the cortex. While a basal compartment aggregates synaptic input arriving at dendrites near the soma, the second, apical compartment represents the complex dendritic tree structure found in the cortex. Earlier experiments have shown that temporally correlated stimulation of both the basal and the apical compartment can elicit considerably higher neural activity than stimulation of either compartment alone. In our model we show that this coincidence-detection effect makes it possible to use the input to the apical compartment as a teaching signal for synaptic plasticity in the basal compartment. Dual homeostasis again plays a role here, as it ensures in both compartments that the synaptic input remains, with respect to its temporal mean and variance, within the range required for the learning process. Using a learning scenario consisting of a linear binary classification, we show that the described framework is suitable for biologically plausible supervised learning.
The two models considered exemplify the relevance of dual homeostasis with respect to two aspects. One is the regulation of recurrent neural networks into a dynamical state optimal for information processing; here the effect of adaptation manifests in the behavior of the network as a whole. On the other hand, as shown in the second model, dual homeostasis can also be important for plasticity and learning processes at the level of individual neurons. While neural homeostasis in the classical sense is limited to regulating parts of the system as precisely as possible toward a desired mean value, the models discussed demonstrate that controlling the magnitude of fluctuations can likewise influence the functionality of neural systems.
In haploidentical stem cell transplantation (SCT), achieving a balance between graft versus host disease (GvHD), the graft versus leukemia (GvL) effect, and bridging the vulnerable phase of aplasia against viral infections is still a challenge. Graft preparation strategies attempt to achieve this balance by removing potentially harmful cells and retaining helpful ones. It is known that T cell subpopulations differ in their GvHD-promoting properties and in their immunocompetence towards pathogens: CD45RA+ naïve T cells show the greatest alloreactive potential, while CD45RO+ memory T cells are less alloreactive but provide immunocompetence. CD45RA depletion is a promising new approach to graft processing that potentially combines GvHD prevention, GvL promotion and transfer of immunological competence by removing potentially harmful CD45RA+ naïve T cells and retaining CD45RO+ memory cells. This work focused on manufacturing CD45RA-depleted grafts within a one- or two-step approach, on a feasibility assessment of the process, and on the establishment of a 10-color fluorescence-activated cell sorting (FACS) measurement panel for clinical-scale graft generation. CD45RA depletions were conducted from granulocyte colony-stimulating factor (G-CSF) mobilized peripheral blood stem cells (PBSC) applying two different strategies: direct depletion of CD45RA+ cells (one-step approach), or depletion following a preceding CD34 selection (two-step approach). A 10-color FACS measurement panel was established, ensuring quality control and enabling preliminary data acquisition on CD45RA co-expression for cell loss estimations. Residual virus-specific T cells after depletion were measured using MHC multimers. It was observed that the depletion antibody occupied the cell binding sites, resulting in insufficient binding of the fluorescent dye for subsequent FACS measurement. Therefore, three FACS antibodies were tested and compared, and CD45RA-PE (clone: 2H4) was found to be the best choice for reliable cell detection.
To further characterize residual T cells, two homing markers, CD62L and CCR7, were compared, with particular attention paid to the expression of the surface markers after cooling. The two markers were complementary, leading to the decision to include an additional FACS measuring tube whenever samples are cooled or further T cell characterization is needed. With a median log depletion of -3.9 (one-step) and -3.8 (two-step), the data showed equally efficient removal of CD45RA+CD3+ T cells for both approaches. Close to complete B cell removal was obtained without additional reagent use. However, close to complete NK cell loss also occurred, due to high CD45RA co-expression. Stem cells recovered at a median of 52% (range: 49.7 - 67.2%) after one-step CD45RA depletion. The recovery of CD45RO+ memory T cells did not differ statistically between the two approaches. Virus-specific T cells were detectable after depletion, suggesting that virus-specific immunocompetence is transferable. In conclusion, CD45RA depletions are equally feasible for both approaches when performed from fresh, non-cryopreserved starting products and reliably reduce CD45RA+ T cells and B cells, but also result in co-depletion of NK cells. Stem cell recovery and NK cell losses must be considered carefully, especially regarding overcoming HLA barriers, pathogen protection during aplasia, early engraftment and GvL. Therefore, combining CD45RA-depleted products with other, already established processing methods to ensure sufficient stem and NK cells is desirable to allow high clinical flexibility.
The intensive use of the North Sea area through offshore activities, sand mining, and the spreading of dredged material is leading to increasing pollution of the ecosystem by chemicals such as hydrophobic organic contaminants (HOCs). Due to their toxicological properties and their ability to accumulate in the environment, HOCs are of particular concern. Within these systems, the contaminants partition between aqueous phases (pore water, overlying water) and solid phases (sediment, suspended particulate matter, and biota). The contaminants accumulated in the sediment are of major concern for benthic organisms, which are in close contact with sediment and interstitial water. It is thus particularly important to better understand how contaminants interact with biota, as these organisms may contribute to trophic transfer through the food web. Furthermore, sediments are a crucial factor for the water quality of aquatic systems: they not only represent a sink for contaminants but also determine environmental fate, bioavailability, and toxicity. The Marine Strategy Framework Directive (MSFD) was introduced to protect the marine environment across Europe and includes the assessment of pollutant concentrations in the total sediment, which, however, rarely reflects the actual exposure situation. The consideration of pollutant concentrations in the pore water is not implemented, although this is needed for the evaluation of bioavailability and for risk assessment. For this reason, special attention is given to the further development, implementation, and validation of pollutant monitoring methods that can determine the bioavailable fraction in sediment pore water. For risk assessment purposes, it is furthermore important to use biological indicators in addition to classical analytics to determine the effect of pollutants on organisms.
The main objective of this thesis was to gain insight into the pollution load and the potential risk of hydrophobic organic contaminants (HOCs) in the sediment of the North Sea and to evaluate these results with regard to possible risks for benthic organisms and the ecosystem. The following five aims are addressed within these studies to provide a holistic assessment of sediment contamination:
1. Assessment of the pore water concentrations of PAHs and PCBs
2. Determination of the bioturbation potential by macrofauna analysis
3. Application of the SPME method on biological tissue
4. Assessment of recreated environmental mixtures in passive dosing bioassays
5. Development of SPME method for DDT in sediments
The thesis comprises three main studies supported by three additional studies ...
GATA2 deficiency is a heterogeneous multi-system disorder characterized by a high risk of developing myelodysplastic syndrome (MDS) and myeloid leukemia. We analyzed the outcome of 65 patients reported to the registry of the European Working Group (EWOG) of MDS in childhood carrying a germline GATA2 mutation (GATA2mut) who had undergone hematopoietic stem cell transplantation (HSCT). At 5 years the probability of overall survival and disease-free survival (DFS) was 75% and 70%, respectively. Non-relapse mortality and relapse contributed equally to treatment failure. There was no evidence of an increased incidence of graft-versus-host disease or of excessive rates of infections or organ toxicities. Advanced disease and monosomy 7 (−7) were associated with worse outcome. Patients with refractory cytopenia of childhood (RCC) and normal karyotype showed an excellent outcome (DFS 90%) compared to RCC and −7 (DFS 67%). Comparing the outcome of GATA2mut with GATA2wt patients, there was no difference in DFS in patients with RCC and normal karyotype. The same was true for patients with −7 across morphological subtypes. We demonstrate that HSCT outcome is independent of GATA2 germline mutations in pediatric MDS, suggesting the application of standard MDS algorithms and protocols. Our data support considering HSCT early in the course of GATA2 deficiency in young individuals.
Following the first realizations of Bose-Einstein condensates, further innovative experiments appeared that were devoted to quantum gases trapped in optical lattices. Through these numerous scientific investigations, the properties of Bose-Einstein condensates could be understood more thoroughly. The principle of many-body systems trapped in a periodic potential offered a platform for the investigation of further quantum phases.
A conceptually simple modification of such systems is obtained by coupling the ground states of the trapped particles to highly excited states by means of an external light source. If these states lie close to the ionization threshold of the atom, one speaks of Rydberg states, and atoms excited to these states are called Rydberg atoms. One of the many characteristic properties of Rydberg atoms is their ability to interact over large distances far beyond atomic length scales. Accordingly, in the context of many-body systems, crystal structures of trapped Rydberg atoms have been observed experimentally.
The question then arises as to what happens to a trapped Bose-Einstein condensate whose particles are coupled to long-range interacting states. Is there a parameter regime in which crystalline structure and superfluidity can coexist in such systems? This is the central question of this thesis, which deals with the theory of trapped quantum gases coupled to Rydberg states.
Capturing intermolecular interactions accurately is essential for describing, e.g., the morphology of molecular matter on the nanoscale. When it reveals characteristics that are not directly accessible through experiments or ab initio theories, a model becomes eminently beneficial. In laboratory astrochemistry, the intense study of ices has led, among other things, to the exploration of the spontelectric state of nanofilms. Despite its success in biophysics and biochemistry, and despite its predictive power, molecular modeling has not yet been widely deployed for solid-state astrochemistry. In this article, a pertinent, hitherto unaddressed problem is therefore tackled by means of the classical molecular-dynamics method: the unknown distribution of relative dipole orientations in spontelectric cis-methyl formate (MF). From ab initio data, a molecular model is derived which confirms for the first time the anomalous temperature-dependent polarization of MF. These insights represent a further step toward understanding spontelectric behavior. Moreover, unprecedented first-principles predictions are reported regarding the ground-state geometry of the MF trimer and tetramer. In conjunction with the study of the binding to carbonaceous substrates, these additional findings can help to elucidate, by way of example, molecular ice formation in astrochemical settings.
Purpose: Colorectal cancer (CRC) is the second most common cancer in Germany; around 60,000 people were diagnosed with CRC in 2016. Since 2019, screening colonoscopies have been offered in Germany to men from the age of 50 and to women from the age of 55. It is currently debated whether women should also undergo a screening colonoscopy from the age of 50 and whether there are any predictors for developing CRC.
Methods: Colonoscopies of 1553 symptomatic patients younger than 55 years were compared with colonoscopies of 1075 symptomatic patients older than 55 years. We analyzed whether there are significant differences between these two groups in the prevalence of CRC and its precursor lesions, or between symptomatic men and women, and evaluated whether there is a correlation between abdominal symptoms and the prevalence of CRC.
Results: In 164 of 1553 symptomatic patients < 55 years, 194 polyps (12.5%) were detected. In total, six colorectal carcinomas (0.4%) were detected. There were no significant differences between men and women. In symptomatic patients ≥ 55 years, significantly more polyps were found (26.6% vs. 12.5%; p < 0.0001): in total, 286 polyps were removed in the 1075 symptomatic patients older than 55 years. Anorectal bleeding was the only abdominal symptom that was a significant indicator of the occurrence of colorectal cancer in both groups (p = 0.03, OR = 2.73, 95% CI [1.11; 6.70]), but with only low sensitivity (44%).
Conclusion: Given the absence of significant differences between men and women, we recommend offering screening colonoscopies to women from the age of 50 as well.
Militarization, factionalism and political transitions: an inquiry into the causes of state collapse
(2020)
Why do some fragile states collapse while others do not? This article presents results from a comparative analysis of the causes of state collapse. Using a dataset of 15 cases of state collapse between 1960 and 2007, we conduct both synchronic and diachronic comparisons with two different control groups of fragile states using crisp-set QCA. The results support our hypothesis that state collapse has multiple causes. The militarization of political groups, when combined with other conditions, plays a major part in the process. Other causal factors are political transition, extreme poverty, declining government resources or external aid, factionalist politics, repression and pre-colonial polities. This challenges structuralist explanations focusing on regime types and the resource curse, among other things, and opens up avenues for further research.
This thesis concerns three specific constraint satisfaction problems: the $k$-SAT problem, random linear equations, and the Potts model. We investigate a phenomenon called replica symmetry, its consequences, and its limitations. For the $k$-SAT problem, we show that replica symmetry holds up to a threshold $d^{*}$. Beyond another critical threshold $d^{**}$, however, replica symmetry can no longer hold, which enables us to establish the existence of a replica symmetry breaking region. For the random linear problem, a peculiar phenomenon occurs: a more robust version of replica symmetry (strong replica symmetry) holds up to the threshold $d=e$ and ceases to hold thereafter. This phenomenon is linked to the fact that below the threshold $d=e$ the fraction of frozen variables, i.e. variables forced to take the same value in all solutions, is concentrated around a deterministic value, whereas for $d>e$ it vacillates between two values with equal probability. Lastly, for the Potts model, we show that a phenomenon called metastability occurs, which can be understood as a consequence of a trivial replica symmetry breaking scheme. This metastability further yields slow mixing results for two famous Markov chains, the Glauber and Swendsen-Wang dynamics.
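The Glauber dynamics mentioned above can be sketched for the ferromagnetic q-state Potts model: at each step, a uniformly random vertex is resampled from its conditional distribution given its neighbours. The cycle graph, inverse temperature, and number of colours below are illustrative choices, not the setting analyzed in the thesis.

```python
import random
import math

def glauber_step(colors, neighbors, q, beta, rng):
    """One Glauber update for the ferromagnetic q-state Potts model:
    pick a random vertex and resample its colour, where colour c gets
    weight exp(beta * number of neighbours currently sharing c)."""
    v = rng.randrange(len(colors))
    weights = []
    for c in range(q):
        agree = sum(1 for u in neighbors[v] if colors[u] == c)
        weights.append(math.exp(beta * agree))
    r = rng.random() * sum(weights)
    acc = 0.0
    for c, w in enumerate(weights):
        acc += w
        if r <= acc:
            colors[v] = c
            break

# A small cycle graph with q = 3 colours (illustrative).
n, q, beta = 12, 3, 0.5
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
rng = random.Random(1)
colors = [rng.randrange(q) for _ in range(n)]
for _ in range(10_000):
    glauber_step(colors, neighbors, q, beta, rng)
print("final colouring:", colors)
```

The slow-mixing results in the thesis concern how many such steps this chain needs before its distribution approaches equilibrium; metastability traps it near one of several competing states.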
This systematic review investigated how successfully children/adolescents with poor literacy skills learn a foreign language compared with their peers with typical literacy skills. Moreover, we explored whether specific characteristics related to participants, foreign language instruction, and assessment moderated scores on foreign language tests in this population. Overall, 16 studies with a total of 968 participants (poor readers/spellers: n = 404; control participants: n = 564) met the eligibility criteria. Only studies focusing on English as a foreign language were available. The available data allowed for meta-analyses on 10 different measures of foreign language attainment. In addition to standardized mean differences (SMDs), we computed natural logarithms of the ratio of coefficients of variation (CVRs) to capture individual variability between participant groups. Significant between-study heterogeneity, which could not be explained by moderator analyses, limited the interpretation of results. Although children/adolescents with poor literacy skills on average showed lower scores on foreign language phonological awareness, letter knowledge, and reading comprehension measures, their performance varied significantly more than that of control participants. Thus, it remains unclear to what extent group differences between the foreign language scores of children/adolescents with poor and typical literacy skills are representative of individual poor readers/spellers. Taken together, our results indicate that foreign language skills in children/adolescents with poor literacy skills are highly variable. We discuss the limitations of past research that can guide future steps toward a better understanding of individual differences in foreign language attainment of children/adolescents with poor literacy skills.
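The log coefficient-of-variation ratio used above to capture individual variability can be computed from group summary statistics. The formula below (with the common small-sample bias correction) and the example group values are illustrative, not data from the review.

```python
import math

def ln_cvr(mean_e, sd_e, n_e, mean_c, sd_c, n_c):
    """Natural log of the ratio of coefficients of variation (lnCVR)
    between an experimental and a control group, with the usual
    small-sample bias correction. Positive values mean the
    experimental group is relatively more variable."""
    cv_e = sd_e / mean_e
    cv_c = sd_c / mean_c
    correction = 1.0 / (2 * (n_e - 1)) - 1.0 / (2 * (n_c - 1))
    return math.log(cv_e / cv_c) + correction

# Hypothetical groups: poor readers/spellers vary more than controls
# at a broadly similar mean score, so lnCVR comes out positive.
print(ln_cvr(mean_e=40.0, sd_e=12.0, n_e=50,
             mean_c=45.0, sd_c=8.0, n_c=60))
```

A lnCVR above zero, as in this hypothetical example, corresponds to the review's finding that poor readers/spellers show greater performance variability than controls.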
When a very strong light field is applied to a molecule, an electron can be ejected by tunneling. To quantify the time-resolved dynamics of this ionization process, the concept of the Wigner time delay can be used. The properties of this process can depend on the tunneling direction relative to the molecular axis. Here, we show experimental and theoretical data on the Wigner time delay for tunnel ionization of H2 molecules and demonstrate its dependence on the emission direction of the electron with respect to the molecular axis. We find that the observed changes in the Wigner time delay can be quantitatively explained by elongated or shortened travel paths of the emitted electrons, which occur due to spatial shifts of the electrons' birth positions after tunneling. Our work therefore provides an intuitive perspective on the Wigner time delay in strong-field ionization.
Background: Autism spectrum disorder (ASD) is characterized by impaired social communication and interaction, and by stereotyped, repetitive behaviour and sensory interests. To date, there is no effective medication that can improve social communication and interaction in ASD, and effect sizes of behaviour-based psychotherapy remain in the low to medium range. Consequently, there is a clear need for new treatment options. ASD is associated with altered activation and connectivity patterns in brain areas that process social information. Transcranial direct current stimulation (tDCS) is a technique that applies a weak electrical current to the brain in order to modulate neural excitability and alter connectivity. Combined with specific cognitive tasks, it allows the respective training effects to be facilitated and consolidated. The application of tDCS to brain areas relevant to social cognition, in combination with a specific cognitive training, is therefore a promising treatment approach for ASD. Methods: A phase-IIa pilot randomized, double-blind, sham-controlled, parallel-group clinical study is presented, which investigates whether 10 days of 20-min multi-channel tDCS stimulation of the bilateral temporo-parietal junction (TPJ) at 2.0 mA, in combination with a computer-based cognitive training on perspective taking and intention and emotion understanding, can improve social cognitive abilities in children and adolescents with ASD. The main objectives are to describe the change in parent-rated social responsiveness from baseline (within 1 week before first stimulation) to post-intervention (within 7 days after last stimulation) and to monitor safety and tolerability of the intervention.
Secondary objectives include the evaluation of change in parent-rated social responsiveness at follow-up (4 weeks after end of intervention), as well as change in other ASD core symptoms and psychopathology, social cognitive abilities and neural functioning post-intervention and at follow-up, in order to explore the underlying neural and cognitive mechanisms. Discussion: Positive results regarding change in parent-rated social cognition, together with favourable safety and tolerability of the intervention, would confirm tDCS as a promising treatment for ASD core symptoms. This may be a first step in establishing a new and cost-efficient intervention for individuals with ASD.
Individual patient data (IPD) from the CELESTIAL trial (cabozantinib) and population-level data from the REACH-2 trial (ramucirumab) were used. To align with REACH-2, the CELESTIAL population was limited to patients who received first-line sorafenib only and had baseline serum AFP ≥ 400 ng/mL. The IPD from CELESTIAL were weighted to balance the distribution of 11 effect-modifying baseline characteristics with those of REACH-2. Overall survival (OS; primary endpoint) and progression-free survival (PFS) were compared for the CELESTIAL (matching-adjusted) and REACH-2 populations using weighted Kaplan-Meier (KM) curves and parametric (OS, Weibull; PFS, log-logistic) modeling. Rates of treatment-related adverse events (TRAEs) and TRAE-related discontinuations were also compared.
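The matching-adjusted weighting step described above can be sketched as a method-of-moments calculation: the individual patient data are reweighted so that their covariate means match the aggregate baseline means of the comparator trial. The two covariates, the target means, and the plain gradient-descent solver below are illustrative assumptions, not the 11 effect modifiers or the estimation routine used in the actual analysis.

```python
import numpy as np

def maic_weights(X_ipd, target_means, iters=5000, lr=0.05):
    """Method-of-moments MAIC weights: w_i = exp(Z_i @ a), with a chosen
    so that the weighted covariate means of the IPD equal the aggregate
    target means.  Z centres (and standardizes) the covariates at the
    targets, making sum(exp(Z @ a)) a convex objective whose minimizer
    satisfies exactly these moment conditions."""
    Z = (X_ipd - target_means) / X_ipd.std(axis=0)
    a = np.zeros(Z.shape[1])
    for _ in range(iters):
        w = np.exp(Z @ a)
        a -= lr * (Z.T @ w) / w.sum()  # normalized gradient step
    return np.exp(Z @ a)

# Synthetic IPD with two baseline covariates (purely illustrative).
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(62.0, 8.0, 400),    # e.g. age in years
                     rng.binomial(1, 0.45, 400)])   # e.g. a binary marker
target = np.array([64.0, 0.60])                     # aggregate means to match

w = maic_weights(X, target)
w_means = (w[:, None] * X).sum(axis=0) / w.sum()
print("weighted covariate means:", np.round(w_means, 2))
```

After weighting, outcome comparisons (e.g. weighted Kaplan-Meier curves, as in the analysis above) treat each patient with weight w_i instead of weight 1.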
Pathogenic genetic variants in the ATP7B gene cause Wilson disease, a recessive disorder of copper metabolism showing significant variability in clinical phenotype. Promoter mutations have rarely been reported, and controversial data exist on the site of transcription initiation (the core promoter). We quantitatively investigated transcription initiation and found it to be located in immediate proximity to the translational start. The effects of human single-nucleotide alterations of conserved bases in the core promoter on transcriptional activity were moderate, explaining why clearly pathogenic mutations within the core promoter have not been reported. Furthermore, the core promoter contains two frequent polymorphisms (rs148013251 and rs2277448) that could contribute to the phenotypical variability in Wilson disease patients with incompletely inactivating mutations. However, neither polymorphism significantly modulated ATP7B expression in vitro, nor were parameters of copper metabolism affected in healthy subjects. In summary, these investigations allowed us to determine the biologically relevant site of ATP7B transcription initiation and demonstrated that genetic variations at this site, although it is the focus of transcriptional activity, do not contribute significantly to Wilson disease pathogenesis.
In this survey paper, we present a multiscale post-processing method in exploration. Based on a physically relevant mollifier technique involving the elasto-oscillatory Cauchy–Navier equation, we mathematically describe the extractable information within 3D geological models obtained by migration as is commonly used for geophysical exploration purposes. More explicitly, the developed multiscale approach extracts and visualizes structural features inherently available in signature bands of certain geological formations such as aquifers, salt domes etc. by specifying suitable wavelet bands.
The future of work has become a pressing matter of concern: Researchers, business consultancies, and industrial companies are intensively studying how new work models could be best implemented to increase workplace flexibility and creativity. In particular, the agile model has become one of the “must-have” elements for re-organizing work practices, especially for technology development work. However, the implementation of agile work often comes together with strong presumptions: it is regarded as an inevitable tool that can be universally integrated into different workplaces while having the same outcome of flexibility, transparency, and flattened hierarchies everywhere. This paper challenges such essentializing assumptions by turning agile work into a “matter of care.” We argue that care work occurs in contexts other than feminized reproductive work, namely, technology development. Drawing on concepts from feminist Science and Technology Studies and ethnographic research at agile technology development workplaces in Germany and Kenya, we examine what work it takes to actually keep up with the imperative of agile work. The analysis brings the often invisibilized care practices of human and nonhuman actors to the fore that are necessary to enact and stabilize the agile promises of flexibilization, co-working, and rapid prototyping. Revealing the caring sociotechnical relationships that are vital for working agile, we discuss the emergence of power asymmetries characterized by hierarchies of skills that are differently acknowledged in the daily work of technology development. The paper ends by speculating on the emancipatory potential of a care perspective, by which we seek to inspire careful Emancipatory Technology Studies.
Coupling between epidermis and amphid morphogenesis during embryonic development of C. elegans
(2021)
Sensory organs are fundamental for survival of animal populations, since the detection of environmental stimuli is crucial for localization of nourishment, predators or mating partners. In nematodes, the amphid (AM) sensilla are the largest sensory organs for detection of chemical compounds.
This study investigates how the AM sensilla acquire their special elongated shape between the lima-bean and 1.5-fold embryonic stages of C. elegans head development. The dissertation also examines events facilitating the morphogenesis of other head sensilla (IL/OL/CEP) and addresses aspects of general embryonic head morphogenesis. Using high-resolution live-cell imaging with different combinations of markers highlighting specific tissues, this study shows that epidermal head enclosure, migration of AM socket cells (pores) and translocation of AM dendrite tips are coupled processes facilitating the elongation of AM dendrites. Importantly, during AM dendrite elongation the AM neural cell bodies remain stationary. Manipulation by UV laser ablation (of the epidermis close to the pore, or of the pore itself) and by RPN-6.1 dsRNA interference resulted in compromised AM pore migration and impaired dendrite elongation. This leads to the conclusion that AM pores need to be physically attached (through C. elegans apical junctions, CeAJ) to the migrating epidermal sheet and to AM dendrite tips for successful AM morphogenesis, and that RPN-6.1 plays an important role in correct AM pore morphogenesis and in the attachment of AM pores to AM dendrite tips. Our results lead to the conclusion that head enclosure drives AM pore migration and AM dendrite elongation while the AM neural cell bodies remain stationary: CeAJ interconnect AM dendrite tips with AM pores and link the sensillar ending to the migrating epidermis. Thus, migration of the attached target tissue (the pore), with the neural cell bodies remaining stationary (constituting an abutment), creates a pulling force that facilitates AM dendrite elongation. This passive neurite elongation process is termed dendrite towing in this study.
Additionally, this study discovers that translocation of the IL, OL and CEP head sensilla pores is influenced by apical constriction. This conclusion is based on the findings that IL/OL/CEP pores migrate towards the prospective mouth anterior to the epidermal leading edge, separated from AM pores and irrespective of highly impaired AM sensilla morphogenesis after strong RPN-6.1 depletion. Moreover, concurrent with the translocation of IL/OL/CEP pores, bottle-shaped cells appear and non-muscle myosin and apical polarity factors become enriched at the anterior-most part of the head, indicating de novo establishment of apical constriction. It is furthermore assumed that apical constriction in arcade cells might contribute to early pharynx development. Altogether, this study reveals two force-generating events: head enclosure-driven AM sensilla morphogenesis via dendrite towing and apical constriction-facilitated translocation of IL/OL/CEP sensilla pores. These events can be separated by graded depletion of the proteasome activator RPN-6.1.
Objective: Trauma is the most common cause of death among young adults. Alcohol intoxication plays a significant role as a cause of accidents and as a potent immunomodulator of the post-traumatic response to tissue injury. Polytraumatized patients are frequently at risk of developing infectious complications, which may be aggravated by alcohol-induced immunosuppression. Systemic levels of integral proteins of the gastrointestinal tract such as syndecan-1 or intestinal fatty acid binding protein (FABP-I) reflect the intestinal barrier function. The exact impact of acute alcohol intoxication on barrier function and endotoxin bioactivity has not yet been clarified. Methods: 22 healthy volunteers received a precisely defined amount of alcohol (whiskey–cola) every 20 min over a period of 4 h to reach a calculated blood alcohol concentration (BAC) of 1‰. Blood samples were taken before alcohol consumption as a control, and at 2, 4, 6, 24 and 48 h after the beginning of alcohol consumption; in addition, urine samples were collected. Intestinal permeability was determined by serum and urine values of FABP-I and syndecan-1, with soluble (s)CD14 measured by ELISA as a marker for endotoxin translocation across the intestinal barrier. BAC was determined. Results: Systemic FABP-I was significantly reduced 2 h after the onset of alcohol drinking and remained decreased after 4 h. At 6 h, however, FABP-I was significantly elevated compared with previous measurements as well as with controls (p < 0.05). Systemic sCD14 was significantly elevated at 6, 24 and 48 h after the onset of alcohol consumption (p < 0.05). Systemic FABP-I at 2 h after drinking correlated significantly with the sCD14 concentration after 24 h, indicating enhanced systemic LPS bioactivity. Women showed significantly lower levels of syndecan-1 in serum and urine for all time points up to 6 h and lower FABP-I in serum after 2 h.
Conclusions: Even relatively low amounts of alcohol affect the immune system of healthy volunteers, although these changes appear minor in women. A potential damage to the intestinal barrier and presumably enhanced systemic endotoxin bioactivity after acute alcohol consumption are proposed, which represent a continuous immunological challenge for the organism and should be considered in the days following drinking.
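The 1‰ target concentration in the protocol above is conventionally estimated with the Widmark formula, BAC ≈ alcohol mass / (r × body weight). The sketch below is illustrative only and not part of the study; the Widmark factors r (~0.68 for men, ~0.55 for women) are commonly assumed textbook values.

```python
def widmark_bac(alcohol_g, body_weight_kg, sex):
    """Estimated peak blood alcohol concentration in per mille (g ethanol / kg).

    r is the Widmark distribution factor; ~0.68 (men) and ~0.55 (women)
    are commonly assumed textbook values, not parameters from this study.
    """
    r = 0.68 if sex == "male" else 0.55
    return alcohol_g / (r * body_weight_kg)

def grams_for_target_bac(target_permille, body_weight_kg, sex):
    """Ethanol dose (g) needed to reach a target BAC, ignoring elimination."""
    r = 0.68 if sex == "male" else 0.55
    return target_permille * r * body_weight_kg
```

For an 80 kg man, roughly 54 g of ethanol would be needed to reach 1‰, before accounting for elimination during the 4 h drinking period.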
Based on Ivan Marcus’s concept of the “open book” and on considerations of medieval Ashkenazic concepts of authorship, the present article inquires into the circumstances surrounding the production of Sefer Arugat ha-Bosem, a collection of piyyut commentaries written or compiled by the thirteenth-century scholar Abraham b. Azriel. Unlike all other piyyut commentators, Abraham b. Azriel inscribed his name into his commentary and claimed to supersede previous commentaries, asserting authorship and authority. Based on the two different versions preserved in MS Vatican 301 and MS Merzbacher 95 (Frankfurt fol. 16), Ephraim E. Urbach suggested as early as 1939 that Abraham b. Azriel might have written more than one edition of his piyyut commentaries. The present reevaluation considers recent scholarship on concepts of authorship and “open genre” as well as new research into piyyut commentary. To facilitate a comparison with Marcus’s definition of “open book,” this article also explores the arrangement and rearrangement of small blocks of text within a work.
Depletion of the enzyme cofactor tetrahydrobiopterin (BH4) in T-cells was shown to prevent their proliferation upon receptor stimulation in models of allergic inflammation in mice, suggesting that BH4 drives autoimmunity. Hence, the clinically available BH4 drug sapropterin might increase the risk of autoimmune diseases. The present study assessed the implications for multiple sclerosis (MS) as an exemplary CNS autoimmune disease. Plasma levels of biopterin were persistently low in MS patients and tended to be lower with a high Expanded Disability Status Scale (EDSS) score. Instead, the bypass product neopterin was increased. This deregulation suggested that BH4 replenishment might either further drive the immune response or beneficially restore the BH4 balance. To answer this question, mice were treated with sapropterin in immunization-evoked autoimmune encephalomyelitis (EAE), a model of multiple sclerosis. Sapropterin-treated mice had higher EAE disease scores associated with higher numbers of T-cells infiltrating the spinal cord, but normal T-cell subpopulations in spleen and blood. Mechanistically, sapropterin treatment was associated with increased plasma levels of long-chain ceramides and low levels of the polyunsaturated fatty acid linolenic acid (FA18:3). These lipid changes are known to contribute to disruption of the blood–brain barrier in EAE mice. Indeed, RNA data analyses revealed upregulation of genes involved in ceramide synthesis in brain endothelial cells of EAE mice (LASS6/CERS6, LASS3/CERS3, UGCG, ELOVL6, and ELOVL4). The results support the view that BH4 fortifies autoimmune CNS disease, mechanistically involving lipid deregulations known to contribute to the EAE pathology.
Background: This prospective randomized trial was designed to compare the performance of conventional transarterial chemoembolization (cTACE) using Lipiodol only with that of cTACE using Lipiodol plus degradable starch microspheres (DSM) for hepatocellular carcinoma (HCC) in BCLC stage B, based on metric tumor response. Methods: Sixty-one patients (44 men, 17 women; age range 44–85 years) with HCC were evaluated in this IRB-approved, HIPAA-compliant study. The treatment protocol included three TACE sessions at 4-week intervals, in all cases with Mitomycin C as the chemotherapeutic agent. Multiparametric magnetic resonance imaging (MRI) was performed prior to the first and 4 weeks after the last TACE. Two treatment groups were determined using a randomization sheet: in 30 patients, TACE was performed using Lipiodol only (group 1); in 31 cases, Lipiodol was combined with DSM (group 2). Response in terms of tumor volume, diameter, mRECIST criteria, and the development of necrotic areas was analyzed and compared using the Mann–Whitney U test, the Kruskal–Wallis H test, and Spearman’s rho. Survival data were analyzed using the Kaplan–Meier estimator. Results: A mean overall tumor volume reduction of 21.45% (± 62.34%) was observed, with an average tumor volume reduction of 19.95% in group 1 vs. 22.95% in group 2 (p = 0.653). Mean diameter reduction was 6.26% (± 34.75%), 11.86% in group 1 vs. 4.06% in group 2 (p = 0.678). Regarding mRECIST criteria, group 1 versus group 2 showed complete response in 0 versus 3 cases, partial response in 2 versus 7 cases, stable disease in 21 versus 17 cases, and progressive disease in 3 versus 1 cases (p = 0.010). Estimated mean overall survival was 33.4 months (95% CI 25.5–41.4) for cTACE with Lipiodol plus DSM and 32.5 months (95% CI 26.6–38.4) for cTACE with Lipiodol only (p = 0.844).
Conclusions: The additional application of DSM during cTACE showed a significant benefit in tumor response according to mRECIST compared with cTACE using Lipiodol only. No benefit in survival time was observed.
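The between-group comparisons above rely on the Mann–Whitney U test. As a minimal illustration of the statistic itself (with synthetic numbers, not the study's data), a brute-force pure-Python sketch:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic: number of pairs (xi, yj) with xi > yj,
    counting ties as 1/2 (brute force, fine for small samples)."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Synthetic tumor volume reductions (%) for two hypothetical treatment groups:
group1 = [12.0, 25.5, -3.0, 18.2]
group2 = [30.1, 22.4, 41.0, 15.5]
u = mann_whitney_u(group1, group2)  # compare to critical values / normal approx.
```

In practice one would use a library routine (e.g. `scipy.stats.mannwhitneyu`) that also returns a p-value; the sketch only shows how the test statistic is formed.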
A glenohumeral internal rotation deficit (GIRD) of the shoulder is associated with an increased risk of shoulder injuries in tennis athletes. The aim of the present study was to reveal the impact of (1) age, sex and specific training data (i.e. training volume, years of tennis practice, years of competitive play) and (2) upper extremity injuries on GIRD in youth competitive tennis athletes.
A cross-sectional retrospective study design was adopted. Youth tennis players (n = 27, 12.6 ± 1.80 yrs., 18 male) belonging to an elite tennis squad were included. After documenting the independent variables (anthropometric data, tennis-specific data and history of injury), the players were tested for internal (IR) and external (ER) shoulder rotation range of motion (RoM, [°]). From these raw values, the GIRD parameters ER/IR ratio and side differences, as well as side differences in total range of motion (TRoM), were calculated. Pearson’s correlation analyses were performed to find potential associations of the independent variables with the GIRD outcomes.
A significant positive linear correlation between the years of tennis training and IR side asymmetry was found (p < .05), as was a significant negative linear relation between the years of tennis training and the ratio of ER to IR range of motion (RoM) on the dominant side (p < .05). The analysis of covariance showed a significant influence of the history of injuries on IR RoM (p < .05).
Injury and training history, but not age or training volume, may impact the glenohumeral internal rotation deficit in youth tennis athletes. We showed that GIRD on the dominant side in youth tennis players progresses with increasing years of tennis practice and is, independently of years of practice, associated with the history of injuries. Early detection of decreased glenohumeral RoM (specifically IR), as well as injury prevention training programs, may be useful to reduce GIRD and its negative consequences.
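The GIRD outcome measures named above (IR side difference, TRoM side difference, ER/IR ratio) can be computed directly from the raw RoM values. A minimal sketch with hypothetical angles; the exact operational definitions and sign conventions used by the study may differ:

```python
def gird_metrics(ir_dom, er_dom, ir_nondom, er_nondom):
    """Common GIRD outcome measures from shoulder rotation RoM in degrees.

    These definitions are assumptions for illustration only.
    """
    return {
        "ir_side_diff": ir_nondom - ir_dom,  # IR deficit of the dominant side
        "trom_side_diff": (ir_nondom + er_nondom) - (ir_dom + er_dom),
        "er_ir_ratio_dom": er_dom / ir_dom,
        "er_ir_ratio_nondom": er_nondom / ir_nondom,
    }

# Hypothetical player: the dominant side has lost internal rotation
m = gird_metrics(ir_dom=40.0, er_dom=100.0, ir_nondom=55.0, er_nondom=95.0)
```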
Vehicle registrations have been shown to react strongly to tax reforms aimed at reducing CO2 emissions from passenger cars, but are the effects equally strong for positive and negative tax changes? The literature on asymmetric reactions to price and tax changes has documented asymmetries for everyday goods but has not yet considered durables. We leverage multiple vehicle registration tax (VRT) reforms in Norway and estimate their impact on within-car-model substitutions. We estimate stronger effects for cars receiving tax cuts and rebates than for those affected by tax increases: the corresponding estimated elasticity is −1.99 for VRT decreases and 0.77 for increases. As consumers may also substitute across car models, our estimates represent a lower bound.
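The reported asymmetry can be read as a difference in elasticities, i.e. the percentage change in registrations per percentage change in the tax. A minimal midpoint (arc) elasticity sketch with hypothetical numbers; this is not the paper's estimator, which is regression-based:

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) elasticity of registrations q with respect to the tax p."""
    dq = (q1 - q0) / ((q0 + q1) / 2.0)
    dp = (p1 - p0) / ((p0 + p1) / 2.0)
    return dq / dp

# Hypothetical car model: a tax cut from 10k to 9k raises registrations 100 -> 110
elasticity_cut = arc_elasticity(100.0, 110.0, 10_000.0, 9_000.0)  # negative
```

An elasticity of −1.99 for tax decreases versus 0.77 for increases means registrations respond more than twice as strongly, in percentage terms, to a cut than to a hike of the same relative size.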
The focus of interest in high-energy physics experiments is shifting towards searching for and studying extremely rare particles and phenomena. The search for rare probes requires increasing the available statistics by increasing the particle interaction rate. The structure of the events also becomes more complicated, the multiplicity of particles in each event increases, and pileup appears. Due to technical limitations, such a data flow becomes impossible to store fully on available storage devices. The solution to this problem is correct triggering of events and real-time data processing.
This work addresses accelerating and improving the Cellular Automaton-based algorithms for reconstructing charged-particle trajectories in the STAR experiment, in order to deploy them for real-time track reconstruction within the High-Level Trigger. This is an important step in the preparation of the CBM experiment as part of the FAIR Phase-0 program: studying online data-processing methods under real conditions at similar interaction energies allows us to examine this process and identify possible weaknesses of the approach.
Two versions of the Cellular Automaton-based track reconstruction are discussed, which are used depending on the features of the detecting systems. The HFT CA Track Finder, similar to the tracking algorithm of the CBM experiment, has been accelerated by several hundred times using both algorithm optimization and data-level parallelism. The TPC CA Track Finder has been upgraded to improve the reconstruction quality while maintaining high calculation speed; the algorithm was tuned to work with the new iTPC geometry and was provided with an additional module for reconstructing very-low-momentum tracks.
The improved track reconstruction algorithm for the TPC detector in the STAR experiment was included in the HLT reconstruction chain and successfully tested in the express production for online real-data analysis. This made it possible to obtain important physics results during the experiment runtime without full offline data processing. The tracker is also being prepared for integration into the standard offline data-processing chain, after which it will become the basic track-finding algorithm in the STAR experiment.
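The cellular-automaton tracking principle underlying the work above — build short segments between hits on neighbouring stations, let each segment count its compatible predecessors, then follow the longest chain — can be sketched on a toy 1D detector. This is an illustration of the idea only, not the STAR/CBM implementation:

```python
# Toy cellular-automaton track finder (illustration of the principle only).
# Stations are equally spaced planes; each holds a list of measured y positions.
def find_track(stations, max_kink=0.2):
    # 1) build all segments between hits on adjacent stations
    segs = []  # [station, left_hit, right_hit, slope, counter]
    for s in range(len(stations) - 1):
        for i, y0 in enumerate(stations[s]):
            for j, y1 in enumerate(stations[s + 1]):
                segs.append([s, i, j, y1 - y0, 0])
    # 2) CA counter propagation: counter = 1 + best compatible predecessor
    #    (segs is ordered by station, so predecessors are already final)
    for seg in segs:
        best = 0
        for prev in segs:
            if (prev[0] == seg[0] - 1 and prev[2] == seg[1]
                    and abs(prev[3] - seg[3]) <= max_kink):
                best = max(best, prev[4])
        seg[4] = best + 1
    # 3) follow the chain backwards from the segment with the largest counter
    cur = max(segs, key=lambda s: s[4])
    track = [cur[2]]  # hit index on the rightmost station of the chain
    while cur[4] > 1:
        track.append(cur[1])
        cur = next(p for p in segs
                   if p[0] == cur[0] - 1 and p[2] == cur[1]
                   and abs(p[3] - cur[3]) <= max_kink and p[4] == cur[4] - 1)
    track.append(cur[1])
    return list(reversed(track))  # one hit index per station along the track
```

The counter propagation is local and independent per segment, which is what makes the real algorithms amenable to the data-level parallelism mentioned above.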
Despite major improvements in therapy, many B-cell non-Hodgkin lymphoma (B-NHL) entities still have a poor prognosis, and new therapeutic options are urgently needed. This study therefore set out to investigate oncogenic signalling pathways in two B-NHL entities, mantle cell lymphoma (MCL) and diffuse large B-cell lymphoma (DLBCL), in order to define new potential therapeutic targets.
MCL cells overexpress the anti-apoptotic protein BCL-2 and thereby evade apoptosis. With venetoclax, a first-in-class BCL-2-specific inhibitor was approved that achieved good response rates in MCL; however, some cases display intrinsic or acquired resistance to venetoclax. In order to improve the therapy, this study aimed to identify genes which confer sensitivity or resistance towards venetoclax upon their respective knockout. To this end, a genome-wide CRISPR/Cas9-based loss-of-function screen was conducted in the MCL cell line Maver-1. The E3 ubiquitin ligase MARCH5 was identified as one of the top hits conferring sensitivity towards venetoclax upon its knockout. This finding was validated in a competitive growth assay including two more MCL cell lines, Jeko-1 and Mino. MARCH5 knockout also sensitised Jeko-1 cells towards venetoclax, even though this cell line was insensitive towards venetoclax in its wild-type form. Using BH3 profiling, an increased dependency of MARCH5-depleted cells on BCL-2 confirmed this finding. The sensitisation was found to be based on the induction of apoptosis upon MARCH5 knockout, and to an even higher extent upon additional treatment of MARCH5-depleted cells with venetoclax. As already described for epithelial cancer entities, the BCL-2 family members MCL-1 and NOXA were upregulated in MCL cell lines upon MARCH5 knockout. This led to the hypothesis that MARCH5 is a potential regulator of intrinsic apoptosis with NOXA as a key component. A competitive growth assay with MARCH5 and NOXA co-depleted cells revealed a partial reversion of the BCL-2 sensitisation compared to MARCH5 knockout alone. Furthermore, mass spectrometry-based methods were used to gain more insight into other cellular pathways and networks which might be regulated in a MARCH5-dependent manner. In an interactome analysis, proteins which regulate mitochondrial morphology, such as Drp-1, were identified as MARCH5 interactors. Beyond this expected finding, interactions between MARCH5 and several members of the BCL-2 family as well as a potential connection between MARCH5 and vesicular trafficking were discovered. As expected, a ubiquitinome analysis of MARCH5-depleted cells revealed decreased levels of MCL-1 and NOXA ubiquitination. Additionally, a potential role of MARCH5 in the ubiquitination of several members of the cell cycle regulatory pathway was discovered. Based on the broad spectrum of cellular pathways which seem to be regulated in a MARCH5-dependent manner, it was hypothesised that MARCH5 primarily regulates BCL-2 family members, which in turn regulate intrinsic apoptosis and are additionally involved in the regulation of various other pathways.
In summary, this study provides insight into a MARCH5-dependent MCL-1/NOXA axis in MCL cells and its potential implications for related cellular processes.
In addition to the anti-apoptotic pathways described above, B-cell receptor (BCR) signalling is known to provide a pro-survival signal to both normal and malignant B-cells. Targeting the BCR signalling pathway is therefore a promising therapeutic strategy for B-cell malignancies. To gain more insight into the differential modes of BCR signalling in ABC- and GCB-DLBCL cells, this study aimed to define genes/proteins with differential essentiality in ABC- and GCB-DLBCL cells. Consequently, data sets from a CRISPR/Cas9-based loss-of-function screen were re-analysed. SASH3 was identified as a gene which was essential for GCB- but not for ABC-DLBCL cells. Since this protein is known to be involved in T-cell receptor (TCR) signalling, SASH3 was assumed to play a potential role in BCR signalling as well and was therefore investigated in more detail. A competitive growth assay confirmed that SASH3 knockout was toxic exclusively for GCB-DLBCL cell lines. An interactome analysis in ABC- and GCB-DLBCL cells revealed interactions between SASH3 and many components of the proximal BCR signalling pathway as well as several downstream signalling pathways such as the PI3K and NF-κB pathways.
An integration of the interactome with data from the CRISPR/Cas9-based loss-of-function screen revealed differential essentiality of the SASH3-interacting proteins in ABC- and GCB-DLBCL cells. It was hypothesised that SASH3 might regulate PI3K signalling, on which GCB- but not ABC-DLBCL cells are known to depend. Disruption of this regulation of PI3K signalling could therefore be exclusively toxic to GCB-DLBCL cells.
Taken together, this study describes a subtype-specific dependency of GCB-DLBCL cells on SASH3. Furthermore, the SASH3 interactome has been investigated in B-cells for the first time, thereby highlighting a potential role in proximal BCR signalling and involvement in specific BCR-related downstream signalling pathways.
Oxidative stress is thought to be a driver of several diseases. However, many of the data supporting this concept were obtained by adding extracellular H2O2 to cells, which does not reflect the dynamics of intracellular redox modifications. Cells actively control their redox state, and increased formation of ROS is a response to cellular stress situations such as chronic inflammation.
In this study, it was shown that different types of ROS lead to different metabolic and transcriptomic responses in HUVECs. While 300 μM extracellular H2O2 led to substantial metabolic and transcriptomic changes, the effects of DAO-derived H2O2 and of menadione were low to moderate, indicating that both the source and the concentration of ROS are important in eliciting changes in metabolism and gene expression.
Specifically, it was identified that acute increases in ROS transiently inactivate the enzyme ω-amidase/NIT2 of the glutaminase II pathway, which supplies cells with anaplerotic α-ketoglutarate. The pathway has not been studied systematically because its major intermediate, KGM, is not commercially available. In the present study, an internal standard for targeted detection of KGM in cells and blood plasma/serum was used. Deletion of NIT2 by CRISPR/Cas9 significantly reduced α-ketoglutarate levels in HUVECs and elevated KGM levels. It appears that under cell culture conditions, hydrolysis of KGM to α-ketoglutarate is very efficient. Knockout of the glutamine transaminases significantly reduced methionine, suggesting that the glutaminase II pathway is an important source of amino acid replenishment.
Similar to genetic silencing of GLS1 [91,92], HUVECs lacking NIT2 showed reduced proliferation and angiogenic sprouting. Furthermore, our results indicate that, at least in HUVECs, the enzyme also localizes to the mitochondria, where it interacts with key enzymes of glutamine/glutamate/α-ketoglutarate metabolism.
The data of the present work indicate that the glutaminase II pathway is an underappreciated, redox-sensitive pathway for glutamine utilization in HUVECs. Genetic deletion of NIT2 has considerable physiological effects, highlighting the importance of glutamine for ECs.
Geochemical investigations of biogenic carbonates are commonly conducted to reconstruct past environmental conditions. However, different carbonate producers incorporate elements to varying degrees due to biological vital effects. Detecting and quantifying these effects is crucial for producing reliable reconstructions. Such paleoreconstructions are of great importance for evaluating the consequences of recent climate change and for identifying control mechanisms on the distribution of endangered species such as Desmophyllum pertusum. In chapter three, we tested Mg/Ca, Sr/Ca and Na/Ca ratios in this species, among other cold-water scleractinians, to determine whether they provide reliable proxy information. The results reveal no apparent control of Mg/Ca or Sr/Ca ratios by seawater temperature, salinity or pH. Na/Ca ratios appear to be partly controlled by seawater temperature, which is also true for other aragonitic organisms such as warm-water corals and the bivalve Mytilus edulis. However, a large variability complicates possible reconstructions by means of Na/Ca. In addition, we explore different models to explain the apparent temperature effect on Na/Ca ratios based on temperature-sensitive Na and Ca pumping enzymes.
The bivalve Acesta excavata is commonly found in cold-water coral reefs across the North Atlantic, together with D. pertusum. Multiple linear regression analysis, presented in chapter four, indicates that up to 79% of the elemental variability in Mg/Ca, Sr/Ca and Na/Ca can be explained with temperature and salinity as independent predictor variables. Vital effects, for instance growth rate effects, are evident and render paleoreconstructions infeasible. Furthermore, organic material embedded in the shell, as well as possible stress effects, can drastically change the elemental composition. Removing these organic matrices from bulk samples for LA-ICP-MS (laser ablation inductively coupled plasma mass spectrometry) measurements by means of oxidative cleaning is not possible, yet Na/Ca ratios decrease after this cleaning; this is presumably an effect of leaching and not caused by the removal of organic matrices.
Interesting biogeochemical relations were found in the parasitic foraminifer H. sarcophaga. In chapter five, we report Mg/Ca, Sr/Ca, Na/Ca and Mn/Ca ratios measured in H. sarcophaga from two different host species (A. excavata and D. pertusum). Sr/Ca ratios are significantly higher in foraminifera that lived on D. pertusum. This could indicate that dissolved host material is utilized in the shell calcification of H. sarcophaga, given the naturally higher strontium concentration in the aragonite of D. pertusum. Mn/Ca ratios are highest in foraminifera that lived on A. excavata but did not fully penetrate the host's shell. Most likely, this represents a juvenile stage of the foraminifer during which it feeds on the organic periostracum of the bivalve, which is enriched in Mn and Fe. The isotopic compositions are similarly affected: both δ18O and δ13C values are significantly lower in foraminifera that lived on D. pertusum compared to specimens that lived on A. excavata. Again, this might reflect the uptake of dissolved host material or different pH regimes in the calcifying fluid of the hosts (bivalve < 8, coral > 8) that control the extent of hydration/hydroxylation reactions. Temperature reconstructions are possible using stable oxygen isotopes of this foraminifer species; however, the results are only reliable if the foraminifera lived on A. excavata. Samples of H. sarcophaga from D. pertusum would lead to overestimations of the seawater temperature due to the lower δ18O values.
Apart from biological vital effects, storage and preservation methods can significantly change the geochemical composition of marine biogenic carbonates. In chapter six, this is demonstrated using the example of ethanol preservation, a common technique to allow extended storage of biogenic samples. The investigation reveals a significant decrease of Mg/Ca and Na/Ca ratios after only 45 days of storage in ultrapure ethanol, whereas Sr/Ca ratios are not influenced.
Besides temperature, salinity and pH, further environmental parameters such as nutrient availability are important, especially for the distribution of cold-water corals. In chapter seven, we extend the investigations on A. excavata by including the elemental ratios Ba/Ca, Mn/Ca and P/Ca. We expected P/Ca to be helpful in the otherwise difficult process of identifying growth increments; based on our observations, we had to refute this theory, as P/Ca ratios are not systematically enriched in the vicinity of growth lines. Instead, we found a regular sequence of peaks of Ba/Ca, P/Ca and Mn/Ca. This sequence, as well as the peaks in general, is potentially caused by sequential blooms of different algae, diatoms and other planktonic organisms ...
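Multiple linear regression with temperature and salinity as predictors, as used in chapter four, can be sketched in pure Python via the normal equations. The data below are synthetic placeholders, not measurements from the thesis:

```python
def fit_linear(y, *predictors):
    """Ordinary least squares for y ~ b0 + b1*x1 + ... via normal equations.

    Returns (coefficients, R^2). Pure-Python sketch for illustration.
    """
    n = len(y)
    X = [[1.0] + [p[i] for p in predictors] for i in range(n)]
    k = len(X[0])
    # normal equations: (X^T X) beta = X^T y
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    pred = [sum(beta[j] * X[r][j] for j in range(k)) for r in range(n)]
    mean_y = sum(y) / n
    ss_res = sum((y[r] - pred[r]) ** 2 for r in range(n))
    ss_tot = sum((y[r] - mean_y) ** 2 for r in range(n))
    return beta, 1.0 - ss_res / ss_tot

# Synthetic example: an element ratio generated exactly as 2 + 0.3*T - 0.1*S
T = [4.0, 6.0, 8.0, 10.0, 12.0]
S = [35.0, 35.2, 34.8, 35.1, 34.9]
ratio = [2.0 + 0.3 * t - 0.1 * s for t, s in zip(T, S)]
beta, r2 = fit_linear(ratio, T, S)
```

With real measurements the fit would of course not be exact; the R² is then the "up to 79% of the elemental variability" figure quoted above.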
Production of pions, kaons, (anti-)protons and φ mesons in Xe–Xe collisions at √sNN = 5.44 TeV (2021)
The first measurement of the production of pions, kaons, (anti-)protons and φ mesons at midrapidity in Xe–Xe collisions at √sNN = 5.44 TeV is presented. Transverse momentum (pT) spectra and pT-integrated yields are extracted in several centrality intervals bridging from p–Pb to mid-central Pb–Pb collisions in terms of final-state multiplicity. The study of Xe–Xe and Pb–Pb collisions allows systems at similar charged-particle multiplicities but with different initial geometrical eccentricities to be investigated. A detailed comparison of the spectral shapes in the two systems reveals an opposite behaviour for radial and elliptic flow. In particular, this study shows that the radial flow does not depend on the colliding system when compared at similar charged-particle multiplicity. In terms of hadron chemistry, the previously observed smooth evolution of particle ratios with multiplicity from small to large collision systems is also found to hold in Xe–Xe. In addition, our results confirm that two remarkable features of particle production at LHC energies are also valid in the collision of medium-sized nuclei: the lower proton-to-pion ratio with respect to the thermal model expectations and the increase of the φ-to-pion ratio with increasing final-state multiplicity.
The multiplicity dependence of the pseudorapidity density of charged particles in proton–proton (pp) collisions at centre-of-mass energies √s = 5.02, 7 and 13 TeV measured by ALICE is reported. The analysis relies on track segments measured in the midrapidity range (|η| < 1.5). Results are presented for inelastic events having at least one charged particle produced in the pseudorapidity interval |η| < 1. The multiplicity dependence of the pseudorapidity density of charged particles is measured with mid- and forward-rapidity multiplicity estimators, the latter being less affected by autocorrelations. A detailed comparison with predictions from the PYTHIA 8 and EPOS LHC event generators is also presented. The results can be used to constrain models for particle production as a function of multiplicity in pp collisions.
The absolute-scale electronic energetics of liquid water and aqueous solutions, both in the bulk and at associated interfaces, are the central determiners of water-based chemistry. However, such information is generally experimentally inaccessible. Here we demonstrate that a refined implementation of the liquid microjet photoelectron spectroscopy (PES) technique can be adopted to address this. Implementing concepts from condensed matter physics, we establish novel all-liquid-phase vacuum and equilibrated solution–metal-electrode Fermi level referencing procedures. This enables the precise and accurate determination of previously elusive water solvent and solute vertical ionization energies, VIEs. Notably, this includes quantification of solute-induced perturbations of water's electronic energetics and VIE definition on an absolute and universal chemical potential scale. Defining and applying these procedures over a broad range of ionization energies, we accurately and respectively determine the VIE and oxidative stability of liquid water as 11.33 ± 0.03 eV and 6.60 ± 0.08 eV with respect to its liquid-vacuum-interface potential and Fermi level. Combining our referencing schemes, we accurately determine the work function of liquid water as 4.73 ± 0.09 eV. Further, applying our novel approach to a pair of exemplary aqueous solutions, we extract absolute VIEs of aqueous iodide anions, reaffirm the robustness of liquid water's electronic structure to high bulk salt concentrations (2 M sodium iodide), and quantify reference-level dependent reductions of water's VIE and a 0.48 ± 0.13 eV contraction of the solution's work function upon partial hydration of a known surfactant (25 mM tetrabutylammonium iodide). 
Our combined experimental accomplishments mark a major advance in our ability to quantify electronic–structure interactions and chemical reactivity in liquid water, which now explicitly extends to the measurement of absolute-scale bulk and interfacial solution energetics, including those of relevance to aqueous electrochemical processes.
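The quoted work function follows arithmetically from the two referenced VIEs, with the uncertainties combined in quadrature (assuming independent errors): 11.33 − 6.60 = 4.73 eV and √(0.03² + 0.08²) ≈ 0.09 eV. A minimal check:

```python
import math

def subtract_with_uncertainty(a, da, b, db):
    """Difference of two independent measurements, errors combined in quadrature."""
    return a - b, math.hypot(da, db)

# VIE vs. the vacuum level minus VIE vs. the Fermi level gives the work function:
phi, dphi = subtract_with_uncertainty(11.33, 0.03, 6.60, 0.08)
```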
Glucose is an essential energy source for cells. In humans, its passive diffusion through the cell membrane is facilitated by members of the glucose transporter family (GLUT, SLC2 gene family). GLUT2 transports both glucose and fructose with low affinity and plays a critical role in glucose sensing mechanisms. Alterations in the function or expression of GLUT2 are involved in the Fanconi–Bickel syndrome, diabetes, and cancer. Distinguishing GLUT2 transport in tissues where other GLUTs coexist is challenging due to the low affinity of GLUT2 for glucose and fructose and the scarcity of GLUT-specific modulators. By combining in silico ligand screening of an inward-facing conformation model of GLUT2 and glucose uptake assays in a hexose transporter-deficient yeast strain, in which GLUT1–5 can be expressed individually, we identified eleven new GLUT2 inhibitors (IC50 ranging from 0.61 to 19.3 µM). Among them, nine were GLUT2-selective, one inhibited GLUT1–4 (pan-Class I GLUT inhibitor), and another inhibited GLUT5 only. All these inhibitors dock to the periphery of the substrate cavity, close to the large cytosolic loop connecting the two transporter halves, outside the substrate-binding site. The GLUT2 inhibitors described here have various applications: GLUT2-specific inhibitors can serve as tools to examine the pathophysiological role of GLUT2 relative to other GLUTs, the pan-Class I GLUT inhibitor can block glucose entry in cancer cells, and the GLUT2/GLUT5 inhibitor can reduce the intestinal absorption of fructose to combat the harmful effects of a high-fructose diet.
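An IC50 such as those reported (0.61–19.3 µM) maps to a dose–response curve. Under a simple one-site inhibition model with Hill slope 1 (an illustrative assumption, not the paper's fitting procedure), half of the transport activity remains at exactly the IC50:

```python
def fractional_activity(inhibitor_um, ic50_um, hill=1.0):
    """Remaining transport activity under a one-site inhibition model
    (Hill slope 1 by default; an illustrative assumption)."""
    return 1.0 / (1.0 + (inhibitor_um / ic50_um) ** hill)
```

At the IC50 itself, half of the activity remains; at ten times the IC50, roughly 9% remains.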
The authors study the effects of forward-looking communication on German consumers’ inflation expectations in an environment of rising inflation rates, using a randomized control trial. They show that information about rising inflation increases short- and long-term inflation expectations. This initial increase in expectations can be mitigated by forward-looking information about inflation. Among these information treatments, professional forecasters’ projections seem to reduce inflation expectations by more than policymakers’ characterization of inflation as a temporary phenomenon.
The reuse of collateral can support the efficient allocation of safe assets in the financial system. Exploiting a novel dataset, we show that banks substantially increase their reuse of sovereign bonds in response to scarcity induced by Eurosystem asset purchases. While repo rates react little to purchase-induced scarcity when reuse is low, they become increasingly sensitive at high levels of reuse. An elevated reuse rate is also associated with more failures to deliver and a higher volatility of repo rates in the cross-section of bonds. Our results highlight the trade-off between shock absorption and shock amplification effects of collateral reuse.
The postthrombotic syndrome (PTS) is, alongside venous thromboembolism (VTE) recurrence and chronic thromboembolic pulmonary hypertension (CTEPH), a long-term adverse outcome and chronic complication of deep vein thrombosis (DVT) of the lower extremities, occurring in up to 20–50% of patients within 2 years after DVT. The prevalence of PTS in the adult population is expected to increase due to the growing incidence of VTE in the elderly. Although not life threatening, it can impose significant morbidity and have a negative impact on quality of life that grows with disease severity. From an economic point of view, PTS is an important predictor of increased health care costs after VTE.
Factors potentially related to the development of PTS are older age, obesity, a history of previous ipsilateral DVT, iliofemoral location of the current thrombosis, failure to promptly recover from the acute symptoms, and insufficient quality of oral anticoagulant therapy. Furthermore, the severity of PTS is known to correlate with the location of the DVT: the more proximal the thrombosis, the more severe the syndrome.
PTS induces a range of symptoms and clinical signs, which can be assessed using different scales. The Villalta scale is one of the most suitable for defining the presence and severity of subjective symptoms and physical signs of PTS.
In the last century, various therapeutic strategies have been developed to prevent mortality due to VTE or long-term morbidity due to PTS.
Conservative treatment today consists of anticoagulation – usually with direct oral anticoagulants – and compression therapy. One of the first invasive treatments aimed at thrombus removal was surgical venous thrombectomy, introduced by Läwen in 1938. Mahorner and Fontaine improved the technique in the 1950s, combining it with a course of anticoagulant treatment to prevent rethrombosis and PTS.
Mechanical thrombectomy using Fogarty balloons, introduced in 1963, and the creation of a transient arteriovenous fistula, performed since 1974, are now no longer recommended owing to their high invasiveness, the risk of fatal intraoperative embolism, and a high rethrombosis rate.
In current practice, early thrombus removal mainly relies on catheter-directed pharmacologic thrombolytic therapy. Another current approach is endovenous, device-driven thrombectomy with stenting in the case of venous obstruction. Whether these invasive therapies should be offered to patients with iliofemoral thrombosis (IFT) remains the subject of broad and controversial discussion.
IFT, the main target for endovenous thrombectomy and pharmacologic thrombolytic therapy, is underrepresented in the current literature because the definition of proximal DVT commonly used does not necessarily include the iliac veins. Consequently, the literature may not be representative with respect to the prevalence and severity of PTS or its effects on quality of life.
The present registry – the Iliaca-PTS registry – addresses exactly these patients and aims to answer these questions. Data from 85 patients with a history of IFT were evaluated in this prospective registry, documenting the severity of PTS, the occurrence of iliac vein compression syndrome in left-sided IFT, and quality of life. In our patient population, a high BMI was a significant predictor of severe PTS or venous claudication.
The results of this registry show that IFT is frequently observed, yet only ten percent of patients develop moderate or severe PTS or venous claudication. In conclusion, a conservative treatment strategy with optimally effective anticoagulant therapy can lead to a low incidence of PTS and a high quality of life.
Previous research has demonstrated the efficacy of psychological interventions to foster resilience. However, little is known about whether the cultural context in which resilience interventions are implemented affects their efficacy on mental health. Studies performed in Western (k = 175) and Eastern countries (k = 46) regarding different aspects of interventions (setting, mode of delivery, target population, underlying theoretical approach, duration, control group design) and their efficacy on resilience, anxiety, depressive symptoms, quality of life, perceived stress, and social support were compared. Interventions in Eastern countries were longer in duration and tended to be more often conducted in group settings with a focus on family caregivers. We found evidence for larger effect sizes of resilience interventions in Eastern countries for improving resilience (standardized mean difference [SMD] = 0.48, 95% confidence interval [CI] 0.28 to 0.67; p < 0.0001; 43 studies; 6248 participants; I2 = 97.4%). Intercultural differences should receive more attention in resilience intervention research. Future studies could directly compare interventions in different cultural contexts to explain possible underlying causes for differences in their efficacy on mental health outcomes.
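The pooled effect reported above (an SMD with a 95% CI and an I² heterogeneity statistic) is the output of a standard random-effects meta-analysis. The sketch below implements the common DerSimonian–Laird estimator; the per-study effect sizes and standard errors are hypothetical illustrations, not the studies from this meta-analysis:

```python
import math

# Hypothetical per-study (SMD, standard error) pairs -- illustrative only.
studies = [(0.62, 0.12), (0.35, 0.09), (0.71, 0.15), (0.22, 0.08), (0.55, 0.11)]

def random_effects_pool(effects):
    """DerSimonian-Laird random-effects pooling of standardized mean differences."""
    k = len(effects)
    w = [1 / se**2 for _, se in effects]                 # inverse-variance weights
    fixed = sum(wi * y for wi, (y, _) in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, (y, _) in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_star = [1 / (se**2 + tau2) for _, se in effects]   # random-effects weights
    pooled = sum(wi * y for wi, (y, _) in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # % heterogeneity
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, i2

smd, (lo, hi), i2 = random_effects_pool(studies)
print(f"SMD = {smd:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], I2 = {i2:.1f}%")
```

With an I² as high as the 97.4% reported above, the random-effects weights approach equality across studies, which is why heterogeneous meta-analyses are sensitive to small-study effects.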
Echolocation behavior, a navigation strategy based on acoustic signals, allows scientists to explore the neural processing of behaviorally relevant stimuli. For the purpose of orientation, bats broadcast echolocation calls and extract spatial information from the echoes. Because bats control call emission and thus the availability of spatial information, the behavioral relevance of these signals is indisputable. While most neurophysiological studies conducted in the past used synthesized acoustic stimuli that mimic portions of the echolocation signals, recent progress has been made in understanding how naturalistic echolocation signals are encoded in the bat brain. Here, we review how stimulus history affects neural processing, how spatial information from multiple objects is represented, and how echolocation signals embedded in a naturalistic, noisy environment are processed in the bat brain. We end our review by discussing the huge potential that state-of-the-art recording techniques provide for gaining a more complete picture of the neuroethology of echolocation behavior.
Common ownership and the (non-)transparency of institutional shareholdings: an EU-US comparison
(2022)
This paper compares the extent of common ownership in the US and the EU stock markets, with a particular focus on differences in the applicable ownership transparency requirements. Most empirical research on common ownership to date has focused on US issuers, largely relying on ownership data obtained from institutional investors’ 13F filings. This type of data is generally not available for EU issuers. Absent 13F filings, researchers have to use ownership records sourced from mutual funds’ periodic reports and blockholder disclosures. Constructing a “reduced dataset” that seeks to capture only ownership information available for both EU and US issuers, I demonstrate that the “extra” ownership information introduced by 13F filings is substantial. However, even when differences in transparency are taken into due account, common ownership among listed EU firms is much less pronounced than among listed US firms by any measure. This is true even if the analysis is limited to non-controlled firms.
Peer effects can lead to better financial outcomes or help propagate financial mistakes across social networks. Using unique data on peer relationships and portfolio composition, we show considerable overlap in investment portfolios when an investor recommends their brokerage to a peer. We argue that this is strong evidence of peer effects and show that peer effects lead to better portfolio quality. Peers become more likely to invest in funds when their recommenders also invest, improving portfolio diversification compared to the average investor and various placebo counterfactuals. Our evidence suggests that social networks can provide good advice in settings where individuals are personally connected.
Most elements heavier than iron are synthesized in stars via neutron capture reactions in the r- and s-process. The s-process nucleosynthesis is composed of a main and a weak component. While the s-process is considered to be well understood, further investigations using nucleosynthesis simulations rely on measured neutron capture cross sections as crucial input parameters. Neutron capture cross sections
relevant for the s-process can be measured using various experimental methods. A prominent example is the activation method relying on the 7Li(p,n)7Be reaction as a neutron source, which has the advantage of high neutron intensities and can create a quasi-stellar neutron spectrum at kBT = 25 keV. Other neutron sources able to provide quasi-stellar spectra at different energies suffer from lower neutron intensities. Simulations using the PINO tool suggest activating samples with different neutron spectra, provided by the 7Li(p,n)7Be reaction, and linearly combining the obtained spectrum-averaged cross sections
to determine the Maxwellian-averaged cross section (MACS) at various energies of astrophysical relevance. To investigate the accuracy of the PINO tool at proton energies between the neutron emission threshold at Ep = 1880.4 keV and 2800 keV,
measurements of the 7Li(p,n)7Be neutron fields are presented, which were carried out at the PTB Ion Accelerator Facility at the Physikalisch-Technische Bundesanstalt in Braunschweig. The neutron fields of ten different proton energies were measured.
The presented neutron fields show good agreement with the simulation at proton energies Ep = 1887, 1897, 1907, 1912 and 2100 keV. For the other proton energies, Ep = 2000, 2200, 2300, 2500, and 2800 keV, differences between measurement and simulation were found and are discussed. The obtained results can be used to benchmark and adapt the PINO tool and provide crucial information for further improvement of the neutron activation method for astrophysics.
As an application of the 7Li(p,n)7Be neutron fields, an activation campaign on gallium is presented, an element that is mostly produced during the weak s-process in massive stars. The available cross section data for the 69,71Ga(n,γ)
reactions, mostly determined by activation measurements, differ by up to a factor of three. To improve the data situation, activation measurements were carried out using the 7Li(p,n)7Be reaction. The neutron capture cross sections for
a quasi-stellar neutron spectrum at kBT = 25 keV were determined for 69Ga and 71Ga.
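The linear-combination approach described above ultimately targets the Maxwellian-averaged cross section, MACS(kT) = (2/√π) ∫ σ(E) E e^(−E/kT) dE / (kT)². A minimal numerical sketch of this definition follows, using a hypothetical 1/v-shaped cross section, for which MACS(kT) = σ(kT) analytically and so provides a built-in check; none of the numbers are from this work:

```python
import math

def macs(sigma, kT, n=20000, emax_factor=30.0):
    """Maxwellian-averaged cross section:
       MACS = (2/sqrt(pi)) * integral( sigma(E) * E * exp(-E/kT) ) dE / (kT)^2,
       evaluated with the trapezoidal rule on [0, emax_factor*kT]."""
    emax = emax_factor * kT
    de = emax / n
    total = 0.0
    for i in range(n + 1):
        e = i * de
        w = 1.0 if 0 < i < n else 0.5       # trapezoid end-point weights
        total += w * sigma(e) * e * math.exp(-e / kT)
    return (2 / math.sqrt(math.pi)) * total * de / kT**2

# Hypothetical 1/v cross section: sigma0 (barn) at E0 = 25 keV.
sigma0, e0 = 0.1, 25.0
sigma = lambda e: sigma0 * math.sqrt(e0 / e) if e > 0 else 0.0
print(macs(sigma, 25.0))   # ~0.1 barn, matching sigma(kT) for a 1/v shape
```

In practice, σ(E) would be a linear combination of measured spectrum-averaged cross sections rather than a closed-form shape; the integral itself is unchanged.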
Background: Decedents who are repatriated to Germany from abroad are not systematically registered nationwide. In Hamburg, in addition to an epidemic-hygiene examination, the documents accompanying the corpses of German citizens have been registered and examined since 2007. In this way, unclear and non-natural deaths in particular can be followed up as necessary.
Material and methods: Protocols of external and internal autopsies of German nationals who died abroad and were repatriated to Hamburg via the port or airport between 2007 and 2018 were retrospectively evaluated with respect to numbers, completeness of the autopsy abroad and correctness of manner and cause of death.
Results: Between 2007 and 2018 a total of 703 corpses were repatriated via the port or airport of Hamburg and examined by the Port Medical Service for epidemic hygiene and for anything conspicuous in the documents accompanying the corpse. Of these, 307 corpses were examined at the Institute of Legal Medicine at the University Medical Center Hamburg-Eppendorf. In total, 82.4% of the examined cases had an incorrect, unspecific or incomplete foreign death certificate. Of the deceased, 238 were subjected to a second external autopsy by a forensic pathologist and 69 were autopsied again or for the first time in Hamburg. It was found that 84% of the autopsies performed abroad had not been performed according to German and European standards. The most common discrepancy was incomplete preparation of the organs. In almost one quarter of the autopsies performed in Hamburg, a cause of death different from the one determined abroad was found at autopsy.
Conclusion: Since the quality of autopsies performed abroad sometimes does not meet the standards in Germany and Europe and many papers accompanying corpses are incomplete or incorrectly filled out, a systematic review procedure in the home country is recommended. Through the system established in Hamburg in 2007, at least a re-evaluation of the cases takes place.
Background: In a phase 3 clinical study, patients from Germany with moderate to severe psoriasis who were naïve to systemic treatment and received risankizumab had greater and more rapid disease improvements compared with those who received fumaric acid esters (FAEs).
Objective: To evaluate patient-reported outcomes (PROs) in patients treated with risankizumab compared with FAEs.
Methods: Adult patients were randomized 1:1 to receive either risankizumab 150 mg subcutaneous injections at weeks 0, 4 and 16 or FAEs (Fumaderm®) provided according to the prescribing label. PRO secondary endpoints assessed were Psoriasis Symptom Scale (PSS), Dermatology Life Quality Index (DLQI), 36-Item Short Form Health Survey, version 2 (SF-36v2), Patient Benefit Index (PBI), Hospital Anxiety and Depression Scale (HADS), Patient Global Assessment (PtGA) and European Quality of Life 5 Dimensions 5 Level (EQ-5D-5L). PROs were assessed at weeks 0, 16 and 24.
Results: Sixty patients each were randomized to receive risankizumab or FAEs. A significant PSS improvement was observed with risankizumab vs. FAEs at weeks 16 and 24 for total and psoriasis-associated redness, itching and burning scores (P < 0.001). DLQI scores were significantly lower (reflecting better health-related quality of life) with risankizumab vs. FAEs, with least squares (LS) mean differences of −7.4 and −7.6 at weeks 16 and 24, respectively (both P < 0.001). Patients randomized to risankizumab also had larger improvements in SF-36 Physical and Mental Component Summary scores, HADS anxiety and depression scores, PtGA, and EQ-5D-5L index and visual analogue scale scores (all P ≤ 0.002) at weeks 16 and 24 compared with FAEs. PBI was significantly higher, indicating greater benefit, with risankizumab vs. FAEs, with an LS mean difference of 1.1 and 1.3 at weeks 16 and 24, respectively (both P < 0.001).
Conclusions: Risankizumab provides significant benefits over FAEs in improving PROs across several dimensions in patients with moderate to severe psoriasis.
Children often perform worse than adults on tasks that require focused attention. While this is commonly regarded as a sign of incomplete cognitive development, a broader attentional focus could also endow children with the ability to find novel solutions to a given task. To test this idea, we investigated children’s ability to discover and use novel aspects of the environment that allowed them to improve their decision-making strategy. Participants were given a simple choice task in which the possibility of strategy improvement was neither mentioned by instructions nor encouraged by explicit error feedback. Among 47 children (8–10 years of age) who were instructed to perform the choice task across two experiments, 27.5% showed a full strategy change. This closely matched the proportion of adults who had the same insight (28.2% of n = 39). The number of erroneous choices, working memory capacity and inhibitory control, in contrast, indicated substantial disadvantages of children in task execution and cognitive control. A task difficulty manipulation did not affect the results. The stark contrast between age differences in different aspects of cognitive performance might offer a unique opportunity for educators in fostering learning in children.
Single-electron transport in focused electron beam induced deposition (FEBID)-based nanostructures
(2022)
As integrated circuits at the nanometer scale grow in complexity, increasingly innovative fabrication techniques are required. This demands a strong focus on the accurate control of structure fabrication and on material purity, combined with scalable production. In this context, focused electron beam induced deposition (FEBID) has attracted growing attention in the field of nanostructuring. The FEBID process is based on the local deposition of material on a substrate: the deposit forms when precursor molecules are dissociated through interaction with an electron beam. One example is the precursor Me3PtCpMe. The material deposited on the substrate consists of platinum crystallites a few nanometers in size embedded in a matrix of amorphous carbon. These Pt-C FEBID deposits are nano-granular metals whose electrical transport properties result from the interplay of diffusive charge transport within the Pt crystallites and temperature-dependent tunneling effects. The main interest in these materials lies in the possibility of fabricating structures for technical applications at the nanometer scale.
In this work, applications based on single-electron effects were selected to test FEBID-based sample preparation. To enable single-electron transport, which relies on the tunneling of individual electrons, all parameters such as the size and spacing of the structures must be precisely defined. Within this thesis, single-electron devices were developed based on two different applications of the Pt-C FEBID process: 1) arrays of gold nanoparticles (Au NPs) contacted by Pt structures that were prepared by FEBID and subsequently purified; 2) single-electron transistors (SETs) whose islands consist of post-irradiated Pt-C FEBID deposits. The electrical properties of the prepared nanostructures were characterized and related to the achieved resolution and material quality. The preparation method was optimized to directly increase the conductivity of the Pt-C FEBID material, which can be achieved by modifying the carbon matrix or by increasing the metallic content of the structure. In this work, a catalytic purification method for Pt-C FEBID structures was used for two applications: first, the purified structures served as seed layers for subsequent area-selective atomic layer deposition (AS-ALD) of Pt thin films. Second, the technique was used to create metal bridges between NP assemblies randomly deposited on the substrate by drop-casting and Cr-Au contacts previously prepared by UV lithography (UVL). An NP assembly is a periodic, granular array of particles that are uniform in size and shape and exhibit varying degrees of order. Owing to the deposition method, the arrangement of the nanoparticles can be influenced by breaking and forming connections. These systems behave like tunnel junctions with Coulomb blockade and show a distribution of threshold voltages. The results of the electrical measurements confirm single-electron transport through the nanoparticles in a typical weak-coupling transport regime. Despite these results, the application of this technique to SET nanostructuring was not successful. The cause could be traced back to the presence of Pt particles near the contacts to the Au NP arrays. These Pt particles formed in the vicinity of the intended structure during the FEBID fabrication process. For this reason, the FEBID co-deposit was removed in the subsequent SET nanofabrication.
An SET is based on a nano-island connected to source and drain electrodes via tunnel junctions. In addition, it is capacitively coupled to one or more gate electrodes. The island contains a fixed number of electrons.
In this work, the source, drain, and gate contacts were fabricated by milling with a focused gallium ion beam, enabling gaps of 50 nm, whereas the SET island was fabricated from Pt-C FEBID material. The conductivity of the Pt-C island was increased by subsequent electron irradiation. As a final preparation step, a novel argon etching procedure was used to remove the FEBID co-deposits in the immediate vicinity of the island. Post-growth electron irradiation allows the coupling between the individual metallic crystallites to be tuned. The effects of tunnel junctions of different strengths on the electronic properties of the island and the resulting performance of the SET were observed in this work ...
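The single-electron transistor described above operates in the Coulomb blockade regime, which requires the island's charging energy E_C = e²/2C to dominate the thermal energy k_BT. A rough back-of-the-envelope sketch of that condition follows; the 1 aF island capacitance is a hypothetical illustration, not a value from the thesis:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
K_B = 1.380649e-23           # Boltzmann constant, J/K

def charging_energy_meV(c_total_aF):
    """Single-electron charging energy E_C = e^2 / (2 C) of an SET island,
       for a total island capacitance given in attofarads."""
    c = c_total_aF * 1e-18
    return E_CHARGE**2 / (2 * c) / E_CHARGE * 1000  # convert J -> meV

def max_operating_temp_K(c_total_aF, safety=10.0):
    """Rule of thumb: clear Coulomb blockade requires E_C > safety * k_B * T."""
    e_c_joule = charging_energy_meV(c_total_aF) * 1e-3 * E_CHARGE
    return e_c_joule / (safety * K_B)

# Hypothetical island with 1 aF total capacitance
print(f"E_C  = {charging_energy_meV(1.0):.1f} meV")
print(f"T_max ~ {max_operating_temp_K(1.0):.0f} K")
```

Shrinking the island (and thus C) raises E_C, which is why the nanometer-scale resolution of FEBID matters for SET fabrication.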
This study explores literary representations of gender and sexuality in contemporary Malaysian Popular Fiction in English (MPFE) written by Malay Muslim authors and published between 2010 and 2020. It asks why gender and sexuality are considered sensitive topics and why their public discussion is deemed taboo by some Malay Muslim traditionalists and contemporary scholars of Malay literature. Previous studies suggest that Islamic rules and regulations shape the worldview of Malaysian Malays. The sacred book of Islam, the Quran, has established clear-cut prohibitions against sexual indulgence among its believers. Muslim writers must learn to refrain from sexual writing in order to prevent themselves from intentionally or unintentionally arousing their readers’ sexual fantasies, which may lead both parties to sin. However, at the end of the twentieth century, factors such as the impact of modernisation through the scientific and industrial revolutions on Malaysian society, the influence of Western humanities theories among local intellectuals, and the introduction of Internet culture contributed tremendously to dramatic social changes in Malaysia. These changes are reflected heavily in its literary culture. In recent years, the Malay people’s awareness of their body and individuality has heightened. There is a surge of curiosity among contemporary Malay Muslims about their gender and sexuality, and a desire to discuss them. Following this development, the first objective of this study is to provide the latest discussion on gender and sexuality in MPFE by Malay Muslim authors. The second objective is to provide observations on how MPFE authors employ literary strategies to approach aspects of gender and sexuality in their works.
It pays attention to how writers express their acceptance, negotiation, and/or rejection of the dominant “normative” or “common” values in Malay society with regard to the body and sexuality. Using textual analysis to examine one novel and six short stories from the MPFE genre, this study cross-examines Malay literary theories on sexual and erotic literature available in Pengkaedahan Melayu (Malay Methodology) and Persuratan Baru (Genuine Literature), as well as Western theoretical approaches in Postcolonialism, Postmodernism and Feminism on gender systems and sexuality, in its aim to explain the growing interest in these topics in spite of the red tape around sexual taboos in Malaysian literature.
Classical light microscopy is one of the main tools for science to study small things. Microscopes and their technology and optics have been developed and improved over centuries, however their resolution is ultimately restricted physically by the diffraction of light based on its wave nature described by Maxwell’s equations. Hence, the nanoworld – often characterized by sub-100-nm structural sizes – is not accessible with classical far-field optics (apart from special x-ray laser concepts) since its lateral resolution scales with the wavelength.
It was not until the 20th century that various technologies emerged to circumvent the diffraction limit, including so-called near-field microscopy. Although conceptually based on Maxwell’s long-known equations, it took a long time for the scientific community to recognize its powerful opportunities and for the first embodiments of near-field microscopes to be developed. One representative of them is the scattering-type Scanning Near-field Optical Microscope (s-SNOM). It is a Scanning Probe Microscope (SPM) that enables imaging and spectroscopy from visible light frequencies down to even radio waves with a sub-100-nm resolution regardless of the wavelength used. This work also reflects this wide spectral range as it contains applications from near-infrared light down to deep THz/GHz radiation.
This thesis is subdivided into two parts. First, new experimental capabilities for the s-SNOM are demonstrated and evaluated in a more technical manner. Second, among other things, these capabilities are used to study various transport phenomena in solids, as already indicated in the title.
On the technical side, preliminary studies on the suitability of the qPlus sensor – a novel scanning probe technology – for near-field microscopy are presented.
The scanning head incorporating the qPlus sensor – named TRIBUS – was originally intended and built for ultra-high-vacuum, low-temperature, and high-resolution applications. These are desirable environments and properties for sensitive near-field measurements as well. However, since its design was not planned for near-field measurements, several special technical and optical aspects have to be taken into account, among others the scanning tip design and a spring-suspended measurement head.
In addition, in this thesis field-effect transistors are used as THz detectors in an s-SNOM for the first time. Although THz s-SNOM is already an emerging technology, it still suffers from the requirements of sophisticated and specialized infrastructure on both the detector and laser side. Field-effect transistors offer an alternative that is flexible, cost-efficient, room-temperature operating, and easy to handle. Here, their suitability for s-SNOM measurements, which in general require very sensitive and fast detectors, is evaluated.
In the scientific part of this thesis, electromagnetic surface waves on silver nanowires and the conductivity/charge carrier density in silicon are investigated. Both are completely different concepts of transport phenomena, but this already shows the general versatility of the s-SNOM as it can enter both fields. Silver nanowires are analysed by means of near-infrared radiation. Their plasmonic behaviour in this spectral region is studied complementing other simulations and studies in literature performed on them using for example far-field optics.
Furthermore, the surface wave imaging ability of the s-SNOM in the near-infrared regime is thoroughly investigated in this thesis. Mapping surface waves in the mid-infrared regime is widespread in the community, however for much smaller wavelengths there are several important aspects to be considered additionally, such as the smaller focal spot size.
After that, doped and photo-excited silicon substrates are investigated. As the characteristic frequencies of charge carriers in semiconductors – described by the plasma frequency and the Drude model – lie within the THz range, the THz s-SNOM is very well suited to probe their behaviour and to reveal contrasts, which has already been shown qualitatively in numerous literature reports. Here, photo-excitation makes it possible to set and tune the charge carrier density continuously.
Furthermore, the analysis of all silicon samples focuses on a quantitative extraction of the charge carrier densities and doping levels ...
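The link between carrier density and the THz response rests on the Drude plasma frequency, ω_p = √(ne²/(ε₀ε_∞m*)). A small sketch of this relation and its inversion follows; the effective mass m* ≈ 0.26 m_e and ε_∞ ≈ 11.7 are textbook values for electrons in silicon, used here as assumptions rather than parameters from the thesis:

```python
import math

E_CHARGE = 1.602176634e-19   # C
EPS0 = 8.8541878128e-12      # F/m
M_E = 9.1093837015e-31       # kg

def plasma_frequency(n_cm3, m_eff=0.26, eps_inf=11.7):
    """Screened Drude plasma frequency (Hz) of free carriers:
       omega_p = sqrt(n e^2 / (eps0 * eps_inf * m* * m_e))."""
    n = n_cm3 * 1e6  # cm^-3 -> m^-3
    omega = math.sqrt(n * E_CHARGE**2 / (EPS0 * eps_inf * m_eff * M_E))
    return omega / (2 * math.pi)

def carrier_density(f_p_hz, m_eff=0.26, eps_inf=11.7):
    """Invert the relation: carrier density (cm^-3) from a plasma frequency."""
    omega = 2 * math.pi * f_p_hz
    n = omega**2 * EPS0 * eps_inf * m_eff * M_E / E_CHARGE**2
    return n / 1e6

fp = plasma_frequency(1e18)          # hypothetical n-doped Si, 1e18 cm^-3
print(f"f_p = {fp/1e12:.2f} THz")    # lands in the THz window probed by s-SNOM
```

A quantitative extraction as pursued in the thesis effectively runs this inversion through a full Drude dielectric model fitted to the measured near-field contrast.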
Efficient algorithms for object recognition are crucial for new robotics and computer vision applications that demand real-time and on-line methods. Examples include autonomous systems, navigating robots, and autonomous driving. In this work, we focus on efficient semantic segmentation, which is the problem of labeling each pixel of an image with a semantic class.
Our aim is to speed up all parts of the semantic segmentation pipeline. We also aim at delivering a labeling solution within a time budget that can be decided on the fly. For this purpose, we analyze all the components of the semantic segmentation pipeline and identify the computational bottleneck of each of them. The components of the pipeline are over-segmenting the image into local regions, extracting features and classifying the local regions, and the final inference of the image labeling with semantic classes. We focus on each of these steps.
First, we introduce a new superpixel algorithm to over-segment the image. Our superpixel method runs in real time and can deliver a solution at any time budget. Then, for feature extraction, we focus on the framework that computes descriptors and encodes them, followed by a pooling step. We see that the encoding step is the bottleneck, both for computational efficiency and for performance. We present a novel assignment-based encoding formulation that allows for the design of a new, very efficient encoding. Finally, the image labeling output is obtained by modeling the dependencies with a Conditional Random Field (CRF). In semantic image segmentation, the computational cost of instantiating the potentials is much higher than that of MAP inference. We introduce Active MAP inference to select, on the fly, a subset of potentials to be instantiated in the energy function, leaving the rest unknown, and to estimate the MAP labeling from such an incomplete energy function.
We perform experiments on all proposed methods for the different parts of the semantic segmentation pipeline. We show that our superpixel extraction achieves higher accuracy than the state of the art on standard superpixel benchmarks, while running in real time. We test our feature encoding on standard image classification and segmentation benchmarks and show that our method achieves results competitive with the state of the art while requiring less time and memory. Finally, results on semantic segmentation benchmarks show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains.
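To make the encoding step concrete, the sketch below implements the simplest member of the assignment-based family: a hard-assignment bag of visual words with sum pooling and L1 normalization. It is a simplified stand-in for the efficient encoding proposed in the work, and the descriptors and codebook are hypothetical random data:

```python
import numpy as np

def hard_assignment_encode(descriptors, codebook):
    """Baseline assignment-based encoding (bag of visual words):
       assign each local descriptor to its nearest codeword, then represent
       the region by the normalized histogram of assignments."""
    # Squared Euclidean distances: (N, D) descriptors vs (K, D) codewords
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assignments = d2.argmin(axis=1)                  # nearest codeword per descriptor
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)               # sum pooling + L1 normalization

rng = np.random.default_rng(0)
desc = rng.normal(size=(500, 64))     # hypothetical local descriptors
codebook = rng.normal(size=(32, 64))  # hypothetical learned codebook
code = hard_assignment_encode(desc, codebook)
print(code.shape)                     # one K-dimensional code per region
```

The nearest-codeword search is the cost that dominates here (O(N·K·D) per region), which is exactly the term an efficient encoding formulation tries to reduce.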
When performing transfer learning in Computer Vision, normally a pretrained model (source model) that is trained on a specific task and a large dataset like ImageNet is used. The learned representation of that source model is then used to perform a transfer to a target task. Performing transfer learning in this way had a great impact on Computer Vision, because it worked seamlessly, especially on tasks that are related to each other. Current research topics have investigated the relationship between different tasks and their impact on transfer learning by developing similarity methods. These similarity methods have in common, to do transfer learning without actually doing transfer learning in the first place but rather by predicting transfer learning rankings so that the best possible source model can be selected from a range of different source models. However, these methods have focused only on singlesource transfers and have not paid attention to multi-source transfers. Multi-source transfers promise even better results than single-source transfers as they combine information from multiple source tasks, all of which are useful to the target task. We fill this gap and propose a many-to-one task similarity method called MOTS that predicts both, single-source transfers and multi-source transfers to a specific target task. We do that by using linear regression and the source representations of the source models to predict the target representation. We show that we achieve at least results on par with related state-of-the-art methods when only focusing on singlesource transfers using the Pascal VOC and Taskonomy benchmark. We show that we even outperform all of them when using single and multi-source transfers together (0.9 vs. 0.8) on the Taskonomy benchmark. We additionally investigate the performance of MOTS in conjunction with a multi-task learning architecture. 
The task-decoder heads of a multi-task learning architecture are used in different variations to perform multi-source transfers, since this promises greater efficiency and lower computational cost than multiple single-task architectures. Results show that our proposed method accurately predicts transfer learning rankings on the NYUD dataset, and the best transfer learning results are always achieved when using more than one source task. We further find that even using a single task-decoder head from the multi-task learning architecture promises better transfer learning results than using a single-task architecture for the same task, owing to the information shared between tasks in the earlier layers of the multi-task learning architecture. Since the MOTS rankings for selecting the MTI-Net task-decoder head with the highest transfer learning performance were very accurate for NYUD but not satisfactory for the Pascal VOC dataset, further experiments are needed to verify the generalizability of MOTS rankings for selecting the optimal task-decoder head from a multi-task architecture.
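The core prediction step of MOTS, fitting a linear regression from frozen source representations to the target representation and ranking sources by how well they predict it, can be sketched as follows. All data, dimensions, names, and the R²-style score are illustrative assumptions, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen representations: rows are images, columns are features.
# In MOTS these would come from pretrained source/target model encoders.
n_images, dim = 200, 32
sources = {f"source_{i}": rng.normal(size=(n_images, dim)) for i in range(3)}
# Construct a target representation that is genuinely predictable from source_1.
target = sources["source_1"] @ rng.normal(size=(dim, dim)) * 0.5
target += 0.1 * rng.normal(size=(n_images, dim))

def transferability(source_repr, target_repr):
    """Fit a linear map source -> target; a higher R^2 predicts better transfer."""
    coef, *_ = np.linalg.lstsq(source_repr, target_repr, rcond=None)
    pred = source_repr @ coef
    ss_res = ((target_repr - pred) ** 2).sum()
    ss_tot = ((target_repr - target_repr.mean(axis=0)) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Single-source ranking: score each source model separately.
single = {name: repr_ for name, repr_ in sources.items()}
single = {name: transferability(X, target) for name, X in sources.items()}
ranking = sorted(single, key=single.get, reverse=True)

# Multi-source prediction: concatenate source features and fit one regression.
multi = transferability(np.hstack([sources["source_0"], sources["source_1"]]), target)

print(ranking[0])  # source_1 ranks first by construction
```

The actual method additionally validates the predicted rankings against measured transfer performance; this sketch only shows the regression-based scoring idea.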
Cryptocurrencies provide a unique opportunity to identify how derivatives impact spot markets. They are fully fungible, trade across multiple spot exchanges at different prices, and futures contracts were selectively introduced on bitcoin (BTC) exchange rates against the USD in December 2017. Following the futures introduction, we find a significantly greater increase in cross-exchange price synchronicity for BTC--USD relative to other exchange rate pairs, as demonstrated by an increase in price correlations and a reduction in arbitrage opportunities and volatility. We also find support for an increase in price efficiency, market quality, and liquidity. The evidence suggests that futures contracts allowed investors to circumvent trading frictions associated with short sale constraints, arbitrage risk associated with block confirmation time, and market segmentation. Overall, our analysis supports the view that the introduction of BTC--USD futures was beneficial to the bitcoin spot market by making the underlying prices more informative.
The ECB is independent, but it is also accountable to the European Parliament (EP). Yet how the EP has held the ECB accountable has largely been overlooked. This paper starts to address this gap by providing descriptive statistics on three accountability modalities. The paper highlights three findings. First, the topics of accountability have changed: climate-related accountability has increased quickly and dramatically since 2017. Second, while the relationship between price stability and climate change remains an object of conflict among MEPs, a majority within the EP has emerged that pressures the ECB to take a more active stance against climate change, precisely on behalf of its price stability mandate. Third, MEPs engage with the climate topic in very specific ways. There is a gender divide between the climate and price stability topics, with women engaging more actively with climate-related topics. While the Greens heavily dominate the climate topic, parties from the Right dominate the topic of price stability. Finally, MEPs adopt a more united strategy and a particularly low confrontational tone in their climate-related interventions.
Veronika Grimm, Lukas Nöh, and Volker Wieland assess the possible development of government interest expenditures as a share of GDP for Germany, France, Italy and Spain. Until 2021, these and other member states could anticipate a further reduction of interest expenditure in the future. This outlook has changed considerably with the recent surge in inflation and government bond rates. Nevertheless, under reasonable assumptions current yield curves still imply that interest expenditure relative to GDP can be stabilized at the current level. The authors also review the implications of a further upward shift in the yield curves of 1 or 2 percentage points. These implications suggest significant medium-term risks for highly indebted member states with interest expenditure approaching or exceeding levels last observed on the eve of the euro area debt crisis. In light of these risks, governments of euro area member states should take substantive action to achieve a sustained decline in debt-to-GDP ratios towards safer levels. They bear the responsibility for making sure that government finances can weather the higher interest rates which are required to achieve price stability in the euro area.
Central banks have faced a succession of crises over the past years, as well as a number of structural factors such as the transition to a greener economy, demographic developments, digitalisation and possibly increased onshoring. These suggest that the future inflation environment will be different from the one we know. Thus, uncertainty about important macroeconomic variables and, in particular, inflation dynamics will likely remain high.
Global consensus is growing on the contribution that corporations and finance must make towards the net-zero transition in line with the Paris Agreement goals. However, most efforts in legislative instruments as well as shareholder or stakeholder initiatives have ultimately focused on public companies.
This article argues that such a focus falls short of providing a comprehensive approach to the problem of climate change. In doing so, it examines the contribution of private companies to climate change, the relevance of climate risks for them, as well as the phenomenon of brown-spinning (ie, the practice of public companies selling their highly polluting assets to private companies). We show that one cannot afford to ignore private companies in the net-zero transition and climate change adaptation. Yet, private companies lack several disciplining mechanisms that are available to public companies, such as institutional investor engagement, certain corporate governance arrangements, and transparency through regular disclosure obligations. At this stage, only some generic regulatory instruments such as carbon pricing and environmental regulation apply to them.
The article closes with a discussion of the main policy implications. Primarily, we discuss and evaluate the recent push to extend climate-related disclosure requirements to private companies. These disclosures would not only help investors by addressing information asymmetry, but also serve a wide group of stakeholders and thus aim at promoting a transition to a greener economy.
The authors study the impact of dissent in the ECB's Governing Council on uncertainty surrounding households' inflation expectations. They conduct a randomized controlled trial using the Bundesbank Online Panel Households. Participants are provided with alternative information treatments concerning the vote in the Council, e.g. unanimity and dissent, and are asked to submit probabilistic inflation expectations. The results show that the vote is informative.
Households revise their subjective inflation forecast after receiving information about the vote. Dissenting votes cause a wider individual distribution of future inflation. Hence, dissent increases households' uncertainty about inflation. This effect is statistically significant once the authors allow for the interaction between the treatments and individual characteristics of respondents.
The results are robust with respect to alternative measures of forecast uncertainty and hold for different model specifications. The findings suggest that providing information about dissenting votes without additional information about the nature of dissent is detrimental to coordinating household expectations.
This work deals with the theoretical investigation of the vibrationally promoted electronic resonance (VIPER) experiment, the intramolecular energy transfer within a rhodamine-BODIPY antenna system initiated by two-photon excitation, and a computational study of the photochemical mechanism of the uncaging of the [7-(dimethylamino)coumarin-4-yl]methyl (DEACM) class of photocages. In continuation of Jan von Cosel's work, the setup for the theoretical investigation of the VIPER experiment has been extended to two-photon absorption (TPA), also including the first-order Herzberg-Teller (HT) effects, which depend on changes with respect to the nuclear coordinates.
The VIPER experiment constitutes an extended form of two-dimensional infrared (2DIR) spectroscopy with a sequence of infrared (IR) and ultraviolet (UV) or visible (vis) pulses. The molecular system under study is first excited by a narrow-band IR pump pulse and then electronically excited by an off-resonant UV/vis pulse. An IR probe pulse is applied afterwards to probe the system and record a 2DIR spectrum in combination with the first pulse. Since the lifetime of the vibrational excitation is very short, the electronic excitation by the UV/vis pulse is used to extend the lifetime of the excitation in the molecule and thus enable measurements on a longer timescale. This makes it easier to study dynamical photochemical processes on long timescales. In the VIPER experiment with TPA, the UV/vis pulse is replaced by a near-infrared (NIR) pulse, which offers intrinsic 3D resolution, minimized photodamage, a lower noise level, and an increased penetration depth. This makes TPA highly attractive for biological systems, among a wide range of other possible applications.
The computation of the vibrationally resolved electronic absorption spectra accounts for the Franck-Condon (FC) contributions, which are independent of the nuclear framework, as well as the HT effects, which depend on the nuclear coordinates. The FC contributions are dominant for electronically allowed transitions, whereas HT contributions can be important for weakly allowed or forbidden transitions. With the emphasis on TPA, the test systems used belong to the category of two-photon-active compounds. The initial candidate is dimethylaminonitrodibenzofuran (DMA-NDBF), which has been reported to be a two-photon-only caging compound. The other system is a well-known laser dye, a derivative of the commercially available rhodamine 101 (Rh101). Rhodamines are also recognized for their excellent TPA characteristics.
The findings for the two test systems show interesting contrasts. For DMA-NDBF, the one-photon absorption (OPA) and TPA spectra together with vibronic couplings present the same lineshape, and the HT effects contribute only very weakly to the vibronic spectrum. Insignificant HT effects are quite typical for electronically allowed transitions. Overall, the NO2 bending mode exhibits the strongest change in the absorption spectrum upon vibrational pre-excitation, even stronger than the various ring distortion modes that usually show high VIPER activity. In the case of the rhodamine, the vibronic OPA spectrum is predominantly the FC spectrum, and the HT couplings contribute very weakly. The vibronic TPA spectrum, in contrast, is entirely dominated by the HT contributions; hence the vibrationally resolved TPA spectrum of the rhodamine is an HT-only spectrum. Explanations for this behaviour have been reported by Milojevich et al., who hold the change in symmetry of the molecular orbital transitions from the ground to the excited state accountable. No significantly VIPER-active normal modes could be determined, owing to the low magnitudes of their dimensionless displacements, which are connected to the Huang-Rhys factors. Two ring distortion modes were nevertheless probed, but the intensity of their vibrational pre-excitation was observed to be very low.
The other part of this work is concerned with the estimation of the rate of the intramolecular energy transfer within rhodamine-BODIPY dyads. Among the prospective rhodamine derivatives investigated, the Rh101 derivative shows the highest TPA activity. Linked through an acetylene bond to a BODIPY derivative with styryl substituents, it has been probed theoretically as well as experimentally for excitation energy transfer (EET).
Time-resolved spectroscopic measurements reveal an ultrafast energy transfer process on femtosecond timescales. The theoretical estimation of the EET rates through the Förster theory and the determination of the coupling between the donor and acceptor groups by the transition density cube (TDC) method falls short of the experimental results. Because of this disagreement, quantum dynamics simulations with the multi-layer multi-configuration time-dependent Hartree (ML-MCTDH) method have been performed on an adapted rhodamine-BODIPY molecular dyad which reveal that the energy transfer occurs through transient coherence whose mechanism cannot be described by Förster theory ...
The archaeological data dealt with in our database solution Antike Fundmünzen in Europa (AFE), which records finds of ancient coins, is entered by humans. Following the Linked Open Data (LOD) approach, we link our data to Nomisma.org concepts, as well as to other resources such as Online Coins of the Roman Empire (OCRE). Since information such as denomination, material, etc. is recorded for each single coin, this information should be identical for coins of the same type. Unfortunately, this is not always the case, mostly due to human error. Based on rules that we implemented, we were able to make use of this redundant information to detect possible errors within AFE, and were even able to correct errors in Nomisma.org. However, the approach had the weakness that the data first had to be transformed into an internal data model. In a second step, we therefore implemented our rules within the Linked Open Data world. The rules can now be applied to datasets following the Nomisma.org modelling approach, as we demonstrated with data held by Corpus Nummorum Thracorum (CNT). We believe that the use of methods like this to increase the data quality of individual databases, as well as across different data sources and up to the higher levels of OCRE and Nomisma.org, is mandatory in order to increase trust in them.
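The rule idea described above, exploiting the redundancy that coins of the same type must share attributes such as denomination and material, can be illustrated with a minimal sketch. The records, field names, and type identifiers below are invented for illustration and do not reflect the actual AFE or Nomisma.org data model:

```python
from collections import defaultdict

# Toy records in the spirit of AFE-style coin data (values hypothetical).
coins = [
    {"type": "ric.1(2).aug.207", "denomination": "denarius", "material": "ar"},
    {"type": "ric.1(2).aug.207", "denomination": "denarius", "material": "ar"},
    {"type": "ric.1(2).aug.207", "denomination": "aureus",   "material": "ar"},  # likely entry error
    {"type": "ric.2.tr.117",     "denomination": "sestertius", "material": "ae"},
]

def find_inconsistencies(records, fields=("denomination", "material")):
    """Flag (type, field) pairs where coins of one type disagree; these are
    candidates for manual review and correction."""
    by_type = defaultdict(list)
    for rec in records:
        by_type[rec["type"]].append(rec)
    problems = []
    for coin_type, recs in by_type.items():
        for field in fields:
            values = {r[field] for r in recs}
            if len(values) > 1:
                problems.append((coin_type, field, sorted(values)))
    return problems

print(find_inconsistencies(coins))
# -> [('ric.1(2).aug.207', 'denomination', ['aureus', 'denarius'])]
```

In the LOD setting the same check would run as queries over RDF data rather than over an internal model, which is precisely the move described in the abstract.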
What does your personality reveal about your financial behavior? Evidence from a FinTech experiment
(2022)
We co-operate with a German financial account aggregator (FAA) and conduct a personality survey with 1,700 app users. We combine the survey results with their anonymized transaction data and investigate links between personality traits and spending behavior. Observing many lottery windfalls in our dataset and treating these incidents as real-life experiments, we ask: what do individuals do with unexpected income changes? Our findings suggest that highly extraverted individuals tend to overspend in response to lottery windfalls.
Business practitioners increasingly use Artificial Intelligence (AI) applications to assist customers in making decisions due to their higher prediction quality. Yet customers are frequently reluctant to rely on advice generated by machines, especially when an important decision is at stake. Our study proposes a solution: bringing a human expert into the loop of machine advice. We empirically test whether customers are more accepting of expert-AI collaborative advice than of expert or AI advice.
ETFs Prove Their Worth in Turbulent Times / Eric Leupold, Managing Director / Head of Cash Market, Deutsche Börse AG
Is Human-AI Advice Better than Human or AI Advice? / Cathy Liu Yang, Kevin Bauer, Xitong Li, Oliver Hinz
What Does Your Personality Reveal about Your Financial Behavior? Evidence from a FinTech Experiment / Andreas Hackethal, Fabian Nemeczek, Jan Radermacher
“MiCA” – Regulating the European Markets in Crypto-Assets / Dr. Stefan Berger, Member of the European Parliament, Committee on Economic and Monetary Affairs
News of the efl
Quo vadis Papua: case study of special autonomy policies and socio-political movements in Papua
(2021)
This research discusses socio-political movements in Papua resulting from the implementation of the special autonomy policies (Otsus) by the government for almost two decades. In theory, indigenous Papuans should support Otsus, but in empirical reality it has been considered a failure by the indigenous Papuan people because many problems remain unresolved. This negative response indicates public dissatisfaction with the development planning process in Papua. This dissertation aims to examine these issues: why these policies and development plans failed and are protested, why protests against them are prolonged, how protests develop into social movements, and whether indigenous Papuan movements can be classified as social movements. The study uses a qualitative approach based on case study methods. Data were collected through interviews, observations, and documentation studies. The research finds that the presence of Otsus in Papua, in addition to being a source of new conflict, also triggers protests and resistance movements against the government of Indonesia, both physical and political. It shows that Otsus management has indeed changed the face of Papua through many physical projects, but the development of human aspects and supporting instruments has hardly been touched. Thus, only a small percentage of indigenous Papuans feel the benefits of Otsus, while most of them are still struggling. The study finds that protests against Otsus persist because of growing resentment in the community as long as their demands are not met. It suggests that the presence of the state in Papua through the Otsus policy must be re-evaluated. The state must ensure that in the Otsus era the indigenous Papuans are not marginalized, so that the aspiration of welfare for all indigenous Papuans through Otsus can be realized.
As part of two drilling campaigns of the International Continental Scientific Drilling Program (ICDP), several geophysical borehole measurements were carried out by the Leibniz Institute for Applied Geophysics (LIAG) in two lakes. The acquired data was used to answer stratigraphic and paleoclimatic research questions, including the establishment of robust age-depth models and the construction of continuous lithological profiles.
Lake Towuti is located on Sulawesi (Indonesia), within the "Indo-Pacific Warm Pool" (IPWP), a globally important region for atmospheric heat and moisture budgets. The lake has existed for approximately one million years, but its exact age is uncertain. We present the first age-depth model for the approximately 100 m continuous sediment sequence from the central part of the lake. The basis for this model is the magnetic susceptibility measured in the borehole and a tephra layer with an age of about 797 ka at 72 m depth. Our age-depth model is inferred from cyclostratigraphic analysis of borehole data and covers a period from 903 ± 11 to 131 ± 67 ka. We suggest that orbital eccentricity and/or changes between global cold and warm periods are responsible for hydroclimatic changes in the IPWP, that these changes affect sedimentation processes in Lake Towuti, and that this effect can still be measured and observed in the sediment properties today. Additionally, we created a continuous artificial lithological profile from a series of different borehole data using cluster analysis. This provides information from parts of the borehole where no sediment is available due to core loss.
Lake Ohrid is 1.36 million years old and is located on the Balkan Peninsula on the border between Albania and North Macedonia. The primary hole 'DEEP' in the central part of the lake has been the subject of several investigations, but information about the sediments of the marginal locations 'Pestani' and 'Cerava' has not been published yet. In our study, we use natural gamma radiation (GR) measured in the borehole to generate an age-depth model for DEEP. This is done by correlating GR with the global LR04 reference record of Lisiecki and Raymo (2005).
The age information is then transferred via prominent seismic marker horizons to the other two sites, Pestani and Cerava, where it provides the first age-control points for the construction of age-depth models from correlation of GR to LR04. The generated age-depth models are tested using cyclostratigraphic methods, but the limits of this approach are revealed. At DEEP, sedimentation rates (SR) from the cyclostratigraphic method and the correlative approach differ by 2.8 %, at Pestani this difference is 16.7 %, and at Cerava the quality of the data does not allow a reliable evaluation of SR using the cyclostratigraphic approach. We used cluster analysis to construct artificial lithological profiles at all three sites and integrated them into the respective age-depth models. This enables us to determine which sediment types were deposited at what time, and we recognize the change between warm and cold periods in the sediment properties at all three locations. The analyses in this study were all performed on borehole and seismic data and thus do not involve sediment core data. Especially at Pestani and Cerava, new insights into the sedimentological history of Lake Ohrid could be obtained.
In the last part, we discuss the occurrence of the half-precession (HP) signal in the European region during the last one million years. The focus is on Lake Ohrid, but a range of other proxies, from the eastern Mediterranean, across the European continent, and up to Greenland, are analyzed with regard to HP. Applying filters, we restrict the records to the frequency range with periods of 13-8.5 ka, so that only HP remains. We use correlative methods to determine the clarity of the HP signal in proxies distributed across the European realm. Additionally, we determine the development of HP over time. The HP signal is clearest in the southeast and decreases toward the north. It is also more pronounced in interglacial periods and in the younger part (<621 ka) of most proxies. We suggest that mechanisms exist that transmit the HP signal from its origin near the equator to higher latitudes via different processes. In this context, for instance, the African monsoon, the Nile River, and the Mediterranean outflow via the Strait of Gibraltar can be important factors.
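The band-isolation step, keeping only components with periods between 13 and 8.5 ka, can be illustrated with a naive Fourier bandpass on a synthetic proxy series. The sampling, periods, and amplitudes below are invented, and the actual study will have used proper filtering on real records; this sketch only shows the principle:

```python
import math

# Synthetic "proxy" record: precession (23 ka) plus half-precession (11.5 ka).
dt_ka = 0.5                     # sample spacing in ka
n = 2000                        # 1000 ka of record
t = [i * dt_ka for i in range(n)]
proxy = [math.cos(2 * math.pi * x / 23.0)
         + 0.4 * math.cos(2 * math.pi * x / 11.5) for x in t]

def band_component(series, period_lo, period_hi, dt):
    """Project the series onto the DFT harmonics whose period (in the same
    units as dt) lies inside [period_lo, period_hi]; a naive bandpass."""
    m = len(series)
    out = [0.0] * m
    for k in range(1, m // 2):
        period = m * dt / k
        if period_lo <= period <= period_hi:
            c = sum(series[j] * math.cos(2 * math.pi * k * j / m) for j in range(m))
            s = sum(series[j] * math.sin(2 * math.pi * k * j / m) for j in range(m))
            for j in range(m):
                out[j] += (2.0 / m) * (c * math.cos(2 * math.pi * k * j / m)
                                       + s * math.sin(2 * math.pi * k * j / m))
    return out

# Keep only the 13-8.5 ka half-precession band; the 23 ka line is removed.
hp = band_component(proxy, 8.5, 13.0, dt_ka)
rms_hp = math.sqrt(sum(v * v for v in hp) / n)
print(round(rms_hp, 2))
```

The filtered series retains essentially the 11.5 ka component (rms near 0.4/√2), while the precession line falls outside the band and is suppressed.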
Response to upfront azacitidine in juvenile myelomonocytic leukemia in the AZA-JMML-001 trial
(2021)
Allogeneic hematopoietic stem cell transplantation (HSCT) is the only curative therapy for most children with juvenile myelomonocytic leukemia (JMML). Novel therapies controlling the disorder prior to HSCT are needed. We conducted a phase 2, multicenter, open-label study to evaluate the safety and antileukemic activity of azacitidine monotherapy prior to HSCT in newly diagnosed JMML patients. Eighteen patients enrolled from September 2015 to November 2017 were treated with azacitidine (75 mg/m2) administered IV once daily on days 1 to 7 of a 28-day cycle. The primary end point was the number of patients with clinical complete remission (cCR) or clinical partial remission (cPR) after 3 cycles of therapy. Pharmacokinetics, genome-wide DNA-methylation levels, and variant allele frequencies of leukemia-specific index mutations were also analyzed. Sixteen patients completed 3 cycles and 5 patients completed 6 cycles. After 3 cycles, 11 patients (61%) were in cPR and 7 (39%) had progressive disease. Six of 16 patients (38%) who needed platelet transfusions were transfusion-free after 3 cycles. All 7 patients with intermediate- or low-methylation signatures in genome-wide DNA-methylation studies achieved cPR. Seventeen patients received HSCT; 14 (82%) were leukemia-free at a median follow-up of 23.8 months (range, 7.0-39.3 months) after HSCT. Azacitidine was well tolerated and plasma concentration–time profiles were similar to observed profiles in adults. In conclusion, azacitidine monotherapy is a suitable option for children with newly diagnosed JMML. Although long-term safety and efficacy remain to be fully elucidated in this population, these data demonstrate that azacitidine provides valuable clinical benefit to JMML patients prior to HSCT. This trial was registered at www.clinicaltrials.gov as #NCT02447666.
Monte Carlo methods : barrier option pricing with stable Greeks and multilevel Monte Carlo learning
(2021)
For discretely observed barrier options, no closed-form solution exists under the Black-Scholes model. Thus, it is often helpful to use Monte Carlo simulations, which are easily adapted to such models. However, the discontinuous payoff may lead to instability in the option's sensitivities for Monte Carlo algorithms.
This thesis presents a new Monte Carlo algorithm that can calculate the pathwise sensitivities of discretely monitored barrier options. The idea is based on Glasserman and Staum's one-step survival strategy and the results of Alm et al., with which the option's sensitivities such as Delta and Vega can be determined stably by finite differences. The basic idea of Glasserman and Staum is to sample from a truncated normal distribution, which excludes the values above the barrier (e.g. for knock-up-out options), instead of sampling from the full normal distribution. This approach avoids the discontinuity generated by Monte Carlo paths crossing the barrier and yields a Lipschitz-continuous payoff function.
The novel part is the development of an extended algorithm that estimates the sensitivities directly, without simulating at multiple parameter values as in finite differences.
Consider the local volatility model, which is a generalisation of the Black-Scholes model. Although standard Monte Carlo algorithms work well for the pricing of continuously monitored barrier options within this model, they often do not behave stably with respect to numerical differentiation.
To bypass this problem, one would generally either resort to regularised differentiation schemes or derive an algorithm for precise differentiation. Unfortunately, while the widespread Brownian bridge approach leads to accurate first derivatives, these derivatives are not Lipschitz-continuous. This leads to instability with respect to numerical differentiation for second-order Greeks.
To alleviate this problem - i.e. produce Lipschitz-continuous first-order derivatives - and reduce variance, we generalise the idea of one-step survival to general scalar stochastic differential equations. This approach leads to the new one-step survival Brownian bridge approximation, which allows for stable second-order Greeks calculations.
To show the new approach's numerical efficiency, we present a new respective Monte Carlo pathwise sensitivity estimator for the first-order Greeks and study different methods to compute second-order Greeks stably. Finally, we develop a one-step survival Brownian bridge multilevel Monte Carlo algorithm to reduce the computational cost in practice.
This thesis proves unbiasedness and variance reduction of our new one-step survival version relative to the classical Brownian bridge approach. Furthermore, we present a new convergence result for the Brownian bridge approach using the Milstein scheme under certain conditions. Overall, these properties imply convergence of the new one-step survival Brownian bridge approach.
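The one-step survival idea can be sketched for a discretely monitored up-and-out call under Black-Scholes: at each monitoring date the path is forced to stay below the barrier via a truncated normal draw, and the survival probability is absorbed into a multiplicative weight, so no path ever triggers the payoff discontinuity. All parameters are illustrative assumptions, and this is a sketch of the Glasserman-Staum building block only, not the thesis algorithm:

```python
import math
import random

# Up-and-out call under Black-Scholes, monitored at n_steps dates (toy values).
S0, K, B = 100.0, 100.0, 120.0
r, sigma, T, n_steps, n_paths = 0.02, 0.2, 1.0, 12, 5000
dt = T / n_steps
drift = (r - 0.5 * sigma ** 2) * dt
vol = sigma * math.sqrt(dt)

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi_inv(p):
    """Bisection inverse of phi, accurate enough for a sketch."""
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(1)
payoff_sum = 0.0
for _ in range(n_paths):
    S, weight = S0, 1.0
    for _ in range(n_steps):
        # Probability that the next log-step keeps the path below the barrier.
        z_barrier = (math.log(B / S) - drift) / vol
        p_survive = phi(z_barrier)
        weight *= p_survive
        # Sample from the normal truncated to (-inf, z_barrier): inverse-CDF trick.
        z = phi_inv(rng.random() * p_survive)
        S *= math.exp(drift + vol * z)
    payoff_sum += weight * max(S - K, 0.0)

price = math.exp(-r * T) * payoff_sum / n_paths
print(round(price, 2))
```

Because every simulated path survives by construction and the knock-out enters only through the smooth weight, the estimator is differentiable in the model parameters, which is what makes stable Greeks possible.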
In recent years, deep learning has become pervasive in various fields. As a family of machine learning methods, it is used in a broad set of applications such as image processing, voice recognition, email filtering, and computer vision. Most modern deep learning algorithms are based on artificial neural networks inspired by the biological neural networks constituting animal brains. Deep learning may also be of use in computational finance: when no closed-form solution is available for an option price, Monte Carlo simulations are essential for its estimation. Instead of repeatedly performing new price computations whenever the volatility term is updated, one could replace them by evaluating a neural network.
If such a neural network is available, its evaluation can lead to substantial savings and be highly efficient: once trained, a neural network can save further expensive estimations. In practice, however, the challenge is the training process of the neural network.
We study and compare the computational complexity of two generic neural network training algorithms. We then introduce a new multilevel training algorithm that combines a deep learning algorithm with the idea of multilevel Monte Carlo path simulation. The idea is to train several neural networks on training data computed from the so-called level estimators of the multilevel Monte Carlo approach introduced by Giles. We show that the new method can reduce computational complexity by formulating a complexity theorem.
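The level estimators that supply the training data can be sketched as follows. This minimal example builds Giles-style coupled level estimators for a European call under geometric Brownian motion with Euler discretisation and checks that their telescoping sum reproduces the fine-level price; the parameters are invented, and the neural networks trained on these estimators in the thesis are omitted:

```python
import math
import random

# Toy setup: call option under GBM, Euler scheme, levels l = 0..L with
# 2^l time steps at level l (illustrative parameters, not the thesis values).
S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
rng = random.Random(7)

def euler_payoff(n_steps, z_increments):
    """Discounted call payoff from one Euler path driven by the given normals."""
    dt = T / n_steps
    S = S0
    for z in z_increments:
        S += r * S * dt + sigma * S * math.sqrt(dt) * z
    return math.exp(-r * T) * max(S - K, 0.0)

def level_sample(level):
    """One coupled (fine, coarse) level-estimator sample; fine has 2^level steps."""
    n_fine = 2 ** level
    z_fine = [rng.gauss(0.0, 1.0) for _ in range(n_fine)]
    p_fine = euler_payoff(n_fine, z_fine)
    if level == 0:
        return p_fine
    # The coarse path reuses the same Brownian motion: pair up fine increments.
    z_coarse = [(z_fine[2 * i] + z_fine[2 * i + 1]) / math.sqrt(2.0)
                for i in range(n_fine // 2)]
    return p_fine - euler_payoff(n_fine // 2, z_coarse)

# These per-level samples are exactly the training data the multilevel
# learning idea feeds to one network per level.
L, n_samples = 4, 20000
level_means = []
for level in range(L + 1):
    samples = [level_sample(level) for _ in range(n_samples)]
    level_means.append(sum(samples) / n_samples)

mlmc_estimate = sum(level_means)  # telescoping sum over levels
print(round(mlmc_estimate, 2))
```

The point of the construction is that higher levels are expensive but have small variance, so most training data can be generated cheaply at coarse levels; the per-level networks then inherit this cost profile.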
Their involvement in key functions of cellular signalling pathways makes kinases a promising target for drug development in various human diseases such as cancer or autoimmune and inflammatory diseases. Kinase inhibitors aim to prevent post-translational modification by phosphorylation and thereby regulate the downstream signalling pathways. The catalytic activity of kinases depends on ATP, which binds in the highly conserved active site. Owing to this kinome-wide high conservation, the development of highly selective ATP-mimetic inhibitors poses a challenge. Typical ATP mimetics are flat, and these often hydrophobic molecules usually feature a large number of freely rotatable bonds. To circumvent the problem of sometimes lacking selectivity that arises from this flexibility, a bioactive conformation of the inhibitor can be fixed by macrocyclization. As a consequence of this conformational restriction, the entropic cost of binding can be reduced, which can in turn lead to increased affinity for the kinase.
The starting point of this work was the macrocyclic pyrazolo[1,5-a]pyrimidine-based FLT3 kinase inhibitor ODS2004070 (37). In a kinome-wide screening, high affinities for a wide variety of kinases were detected, making 37 a good lead structure for the design of potent and selective kinase inhibitors. Within this work, the literature-known pyrazolo[1,5-a]pyrimidine-based ATP-mimetic binding motif as well as the macrocyclic scaffold of 37 remained unchanged apart from a few variations.
Structural optimizations to focus selectivity were carried out at the secondary amine between the binding motif and the linker, as well as via the free carboxylic acid. With more than 430 identified phosphorylation sites, the pleiotropic and constitutively active casein kinase 2 (CK2) is involved in the regulation of a wide range of cellular processes such as cell cycle progression, apoptosis, and transcription. Dysregulation of CK2 is frequently associated with the pathology of diseases such as cancer, making CK2 a promising target of clinical investigations.
Within the CK2 project, specific modifications of 37 made it possible to develop the highly selective and potent CK2 inhibitors 47 and 60. It was also shown that small structural changes, such as macrocyclization, can have a significant effect on the selectivity and potency of an inhibitor.
Further investigations of the compounds directed the focus of subsequent work to, among others, the serine/threonine kinase 17A (STK17A), also called death-associated protein kinase-related apoptosis-inducing protein kinase 1 (DRAK1). It is part of the DAPK family and, together with other kinases, belongs to the less-studied kinases. To date, little is known about its cellular functions and its involvement in pathophysiological processes. However, overexpression has been reported in various forms of brain tumours of the central nervous system (glioma). Structural modifications, retaining the macrocyclic scaffold of 37, led to the highly selective and potent DRAK1 inhibitor 121, which fulfils all criteria for a chemical probe compound.
A further target of this work was the AP-2-associated protein kinase 1 (AAK1) from the NAK family, which consists of AAK1, BIKE, and GAK. It has been identified as a potential therapeutic target for many different diseases such as neuropathic pain, schizophrenia, and Parkinson's disease. Through its regulation of clathrin-mediated endocytosis, AAK1 is involved in the intracellular trafficking of several unrelated RNA and DNA viruses, such as HCV, DENV, and EBOV. A possible association with the SARS-CoV-2 virus has also been reported, which has increased interest in new selective AAK1 inhibitors. The development of the highly potent and selective AAK1 inhibitors 61 and 63 was likewise based on the macrocyclic scaffold 37 already used in the CK2 and DRAK1 projects.
In summary, within this work it was possible, starting from a highly unselective macrocyclic scaffold, to develop and characterize highly potent and selective kinase inhibitors for CK2, DRAK1, and AAK1. In the course of investigating various structure-activity relationships, it was shown that minor structural modifications make it possible to vary kinome-wide selectivity and to focus it on a single kinase. This work not only yielded the inhibitors mentioned but also forms the basis for further projects on the development of highly potent and selective compounds as potential chemical tools for use in research.