University Publications
Luminosities and energies of e⁺e⁻ collision data taken between √s=4.61 GeV and 4.95 GeV at BESIII
(2022)
From December 2019 to June 2021, the BESIII experiment collected about 5.85 fb⁻¹ of data at center-of-mass energies between 4.61 GeV and 4.95 GeV. This is the highest collision energy BEPCII has reached so far. The accumulated e⁺e⁻ annihilation data samples are useful for studying charmonium(-like) states and charmed-hadron decays. By adopting a novel method of analyzing the production of Λc⁺Λ̄c⁻ pairs in e⁺e⁻ annihilation, the center-of-mass energies are measured with a precision of ∼0.6 MeV. Integrated luminosities are measured with a precision of better than 1% by analyzing events of large-angle Bhabha scattering. These measurements provide important inputs to the analyses based on these data samples.
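The luminosity determination from large-angle Bhabha scattering described above boils down to L = N/(σ·ε): event count divided by effective cross section times selection efficiency. A minimal sketch of this bookkeeping, with entirely hypothetical numbers (not BESIII values):

```python
# Toy bookkeeping for an integrated-luminosity measurement from Bhabha counts.
# All numbers are hypothetical placeholders, not BESIII values.

def integrated_luminosity(n_events, cross_section_nb, efficiency):
    """L = N / (sigma * eps), returned in nb^-1 for sigma given in nb."""
    return n_events / (cross_section_nb * efficiency)

n_bhabha = 5_000_000   # selected large-angle Bhabha events (hypothetical)
sigma_nb = 100.0       # effective Bhabha cross section in nb (hypothetical)
eps = 0.5              # selection efficiency (hypothetical)

lumi_nb = integrated_luminosity(n_bhabha, sigma_nb, eps)  # in nb^-1
lumi_fb = lumi_nb * 1e-6                                  # 1 nb^-1 = 1e-6 fb^-1
print(f"L = {lumi_fb:.2f} fb^-1")
```

In practice the sub-percent precision quoted in the abstract comes from controlling the uncertainties on the effective cross section and the selection efficiency, not from the arithmetic itself.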
The production cross section of inclusive isolated photons has been measured by the ALICE experiment at the CERN LHC in pp collisions at a centre-of-momentum energy of √s = 13 TeV, using data collected during the LHC Run 2 data-taking period. The measurement combines the electromagnetic calorimeter EMCal with the central tracking detectors ITS and TPC, covering a pseudorapidity range of |η^γ| < 0.67 and a transverse momentum range of 7 < p_T^γ < 200 GeV/c. The result extends to lower p_T^γ and x_T^γ = 2p_T^γ/√s than any isolated-photon measurement to date, reaching significantly below the ranges covered by the ATLAS and CMS experiments at the same collision energy, with a small overlap between the measurements. The measurement is compared with next-to-leading-order perturbative QCD calculations, with the ATLAS and CMS results, and with measurements at other collision energies. Measurement and theory prediction agree within the experimental and theoretical uncertainties.
A common element of market structure analysis is the spatial representation of firms’ competitive positions on maps. Such maps typically capture static snapshots in time. Yet, competitive positions tend to change. Embedded in such changes are firms’ trajectories, that is, the series of changes in firms’ positions over time relative to all other firms in a market. Identifying these trajectories contributes to market structure analysis by providing a forward-looking perspective on competition, revealing firms’ (re)positioning strategies and indicating strategy effectiveness. To unlock these insights, we propose EvoMap, a novel dynamic mapping framework that identifies firms’ trajectories from high-frequency and potentially noisy data. We validate EvoMap via extensive simulations and apply it empirically to study the trajectories of more than 1,000 publicly listed firms over 20 years. We find substantial changes in several firms’ positioning strategies, including Apple, Walmart, and Capital One. Because EvoMap accommodates a wide range of mapping methods, analysts can easily apply it in other empirical settings and to data from various sources.
Regulators worldwide have been implementing different privacy laws. They vary in their impact on the value for advertisers, publishers and users, but not much is known about these differences. This article focuses on three important privacy laws (i.e., General Data Protection Regulation [GDPR], California Consumer Privacy Act [CCPA] and Personal Information Protection Law [PIPL]) and compares their impact on the value for the three primary actors of the online advertising market, namely, advertisers, publishers and users. This article first compares these three privacy laws by developing a legal strictness score. It then uses the existing literature to derive the effects of the legal strictness of each privacy law on each actor’s value. Finally, it quantifies the three privacy laws’ impact on each actor’s value. The results show that GDPR and PIPL are similar and stricter than CCPA. Stricter privacy laws bring larger negative changes to the value for actors. As a result, both GDPR and PIPL decrease the actors’ value more substantially than CCPA. These value declines are the largest for publishers and are rather similar for users and advertisers. Scholars and practitioners can use our findings to explore ways to create value for multiple actors under various privacy laws.
For many services, consumers can choose among a range of optional tariffs that differ in their access and usage prices. Recent studies indicate that tariff-specific preferences may lead consumers to choose a tariff that does not minimize their expected billing rate. This study analyzes how tariff-specific preferences influence the responsiveness of consumers’ usage and tariff choice to changes in price. We show that consumer heterogeneity in tariff-specific preferences leads to heterogeneity in their sensitivity to price changes. Specifically, consumers with tariff-specific preferences are less sensitive to price increases of their preferred tariff than other consumers. Our results provide an additional reason why firms should offer multiple tariffs rather than a uniform nonlinear pricing plan to extract maximum consumer surplus.
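The tariff-choice setting above can be made concrete with a toy model: under a two-part tariff, the expected bill is the access fee plus the usage price times expected usage, and a purely bill-minimizing consumer picks the tariff with the lowest expected bill. All tariffs and quantities below are hypothetical illustrations, not from the study:

```python
# Toy optional-tariff menu: (access_fee, price_per_unit). Hypothetical values.
tariffs = {
    "flat":    (30.0, 0.00),  # high access fee, no usage price
    "metered": (5.0,  0.10),  # low access fee, pay per unit
}

def expected_bill(tariff, expected_usage):
    """Expected billing rate under a two-part tariff."""
    access_fee, unit_price = tariff
    return access_fee + unit_price * expected_usage

def cost_minimizing_tariff(menu, expected_usage):
    """Tariff a purely bill-minimizing consumer would choose."""
    return min(menu, key=lambda name: expected_bill(menu[name], expected_usage))

# At 200 units the metered tariff is cheaper (25 vs. 30); at 300 units the
# flat tariff wins (35 vs. 30). A consumer with a tariff-specific preference
# for "flat" may stick with it at 200 units despite the higher expected bill,
# which is the kind of non-minimizing choice the study analyzes.
```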
Digital technologies facilitate the use of dynamic pricing, i.e., prices that vary without announcement for an essentially identical product. In public debate, however, different forms of dynamic pricing are often conflated, which complicates a meaningful analysis of its advantages and disadvantages. This article aims to present the economic foundations of dynamic pricing and to discuss and classify its possible forms. In addition, the advantages and disadvantages of dynamic pricing are assessed from both the buyer's and the seller's perspective. Finally, implications for business research are discussed.
Polygenic risk scores (PRS) integrate numerous single-nucleotide polymorphisms (SNPs), mostly of small effect size, to provide information on the risk of developing certain diseases. This thesis examined the PRS for genetic generalized epilepsy (GGE) published by Leu et al. in 2019 to determine whether, beyond disease risk, it correlates with further phenotypic characteristics of patients. Demonstrating such associations would establish a predictive capability of the GGE-PRS that could, in the long term, indicate its potential for clinical application, for example in personalized medicine.
New correlations were to be identified by comparing the phenotypes of two groups of GGE patients with extremely high and extremely low PRS values, respectively. For this purpose, the patients with the highest (n = 59) and lowest (n = 49) GGE-PRS values were selected from 2256 patients in the database of Epi25, an international research collaboration investigating the relevance of genetic factors in the development of epilepsy. Retrospective clinical data for these 108 patients were obtained from the respective treatment centres by sending the centres' principal investigators a questionnaire covering numerous phenotypic parameters. The response rate, at 54%, was good.
The patient data thus collected were then analysed statistically using Fisher's exact test and the Wilcoxon rank-sum test to detect differences between the phenotypes of the two groups. For pharmacoresistance, significant differences initially emerged, implying that this trait occurred less frequently in patients with high GGE-PRS values. However, these results were no longer significant after Bonferroni correction and upon validation in a larger cohort (n = 825). No significant differences were detectable for the other parameters examined either.
The finding that no significant differences existed for any of the examined parameters, even though two cohorts with extremely opposed PRS values were compared, argues against using the currently available GGE-PRS as a predictive biomarker beyond disease risk, and thus against its clinical applicability. Nevertheless, the non-significant correlations observed for pharmacoresistance can be taken as a hint that associations between score and phenotype may exist in the area of pharmacotherapy, warranting further investigation in future studies. With an improved GGE-PRS incorporating additional risk-associated SNPs and refined weighting of effect sizes, as well as larger cohorts, significant associations might also become detectable in this area.
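The group comparison described in this abstract (Fisher's exact test for categorical traits, the Wilcoxon rank-sum test for continuous ones, followed by a Bonferroni correction) can be sketched with scipy on fabricated toy data; none of the numbers below come from the Epi25 cohort:

```python
# Sketch of the two-group phenotype comparison on fabricated toy data.
# Counts and values are illustrative only, not Epi25 patient data.
from scipy.stats import fisher_exact, ranksums

# Categorical trait (e.g. pharmacoresistance yes/no), high- vs. low-PRS group:
table = [[12, 30],   # high-PRS group: trait present / absent (hypothetical)
         [25, 15]]   # low-PRS group:  trait present / absent (hypothetical)
_, p_categorical = fisher_exact(table)

# Continuous trait (e.g. age at onset) in the two groups (hypothetical values):
high_prs = [12, 14, 15, 16, 18, 21]
low_prs  = [10, 11, 13, 14, 15, 17]
_, p_continuous = ranksums(high_prs, low_prs)

# Bonferroni correction for m phenotype parameters tested in parallel:
m = 20
alpha_corrected = 0.05 / m
significant = p_categorical < alpha_corrected
```

The Bonferroni step is what turned the initially significant pharmacoresistance result non-significant in the study: dividing the significance level by the number of tested parameters guards against false positives from multiple testing.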
Exploring strategies to improve the reverse beta-oxidation pathway in Saccharomyces cerevisiae
(2024)
Microbes are the most diverse living organisms on Earth, with various metabolic adaptations that allow them to live in different conditions and produce compounds with different chemical complexity. Microbial biotechnology exploits the metabolic diversity of microorganisms to manufacture products for different industries. Today, the chemical industry is a significant energy consumer and carbon dioxide emitter, with processes that harm natural ecosystems, like the extraction of medium-chain fatty acids (MCFAs). MCFAs are used as precursors for biofuels, volatile esters, surfactants, or polymers in materials with enhanced properties.
However, their current extraction process uses large, non-sustainable monocultures of coconut and palm trees. Therefore, the microbial production of MCFAs can help reduce the current environmental impact of obtaining these products and their derivatives.
In nature, fatty acids are mostly produced via fatty acid biosynthesis (FAB). However, the reverse β-oxidation (rBOX) is a more energy-efficient pathway compared to FAB. The rBOX pathway consists of four reactions, which result in the elongation of an acyl-CoA molecule by two carbon units from acetyl-CoA in each cycle. In this work we used Saccharomyces cerevisiae, an organism with a high tolerance towards toxic compounds, as the expression host of the rBOX pathway to produce MCFAs and medium-chain fatty alcohols (MCFOHs).
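Since each rBOX cycle condenses the growing acyl-CoA with acetyl-CoA and adds two carbon units, the product chain length is a simple function of the cycle count. A toy tally (illustrative bookkeeping only):

```python
# Toy carbon bookkeeping for the rBOX cycle: each pass condenses the growing
# acyl-CoA with acetyl-CoA, extending the chain by two carbon units.

def chain_length_after(cycles, primer_carbons=2):
    """Carbons in the acyl-CoA after n elongation cycles from an acetyl-CoA primer."""
    return primer_carbons + 2 * cycles

# 1 cycle -> C4 (butyryl-CoA), 2 cycles -> C6 (hexanoyl-CoA, released as
# hexanoic acid), 3 cycles -> C8 (octanoyl-CoA, released as octanoic acid).
chain_lengths = {cycles: chain_length_after(cycles) for cycles in (1, 2, 3)}
```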
In the first part of this work, we extended the chain length of the products obtained from expressing the rBOX in the cytosol and increased the MCFA titers. First, we deleted the major glycerol-3-phosphate dehydrogenase (GPD2). This resulted in a platform strain with significantly reduced glycerol fermentation and increased rBOX pathway activity, probably due to an increased availability of NADH. Then, we tested different combinations of rBOX enzymes to increase the chain length and titers of MCFAs. Expressing the thiolase CnbktB and the β-hydroxyacyl-CoA dehydrogenase CnpaaH1 from Cupriavidus necator, Cacrt from Clostridium acetobutylicum, and the trans-enoyl-CoA reductase Tdter from Treponema denticola resulted in hexanoic acid as the main product.
Expressing Cncrt2 (C. necator) or YlECH (Yarrowia lipolytica) as enoyl-CoA hydratases resulted in octanoic acid as the main product. Then, we integrated the octanoic acid- (Cncrt2 or YlECH) and hexanoic acid-producing (Cacrt) variants into the genome of the platform strain and achieved titers of ≈75 mg/L (hexanoic acid) and ≈60 mg/L (octanoic acid) when growing these strains in a complex, highly buffered medium. These are the highest titers of octanoic and hexanoic acid obtained in S. cerevisiae with the rBOX. Additionally, we deleted TES1 and FAA2 to prevent competition for butyryl-CoA and degradation of the produced fatty acids, respectively.
However, these deletions did not improve MCFA titers. In addition, we tested two dual acyl-CoA reductase/alcohol dehydrogenases (ACR/ADH), CaadhE2 from C. acetobutylicum and the putative ACR/ADH EceutE from Escherichia coli, in an octanoyl-CoA-producing strain to produce MCFOHs. As a result, we produced 1-hexanol and 1-octanol for the first time in S. cerevisiae with these two enzymes. Nonetheless, the titers were low (<10 mg/L and <2 mg/L, respectively), and the four-carbon 1-butanol was the main product in both cases (>80 mg/L), showing the preference of these two enzymes for butyryl-CoA.
In the second part of this work, we expressed the rBOX in the mitochondria of S. cerevisiae to benefit from the high levels of acetyl-CoA and the reducing environment of that organelle. First, in an adh-deficient strain, we mutated MTH1, a transcription factor regulating the expression of hexose transporters, and deleted GPD2. This resulted in a strain with a reduced Crabtree effect and, therefore, an increased carbon flux to the mitochondria. We partially validated this increased flux by expressing the ethanol acetyltransferase EAT1 from Kluyveromyces marxianus in the organelle, which resulted in higher isoamyl acetate production in the MTH1-mutant strain. Isoamyl acetate is synthesised by Eat1 from acetyl-CoA and isoamyl alcohol, a product of amino acid metabolism in the mitochondria. Then, we targeted different butyryl-CoA-producing rBOX variants to the mitochondria, using the production of 1-butanol and butyric acid as a proof of concept. Strong expression of all the enzymes was toxic to the cells, and the highest mitochondrial butyric acid titers from the rBOX (≈50 mg/L) were obtained with weak expression of the pathway. The highest 1-butanol titers (≈5 mg/L) were obtained by downregulating the mitochondrial NADH oxidase NDI1. However, this downregulation led to an undesirable petite phenotype.
In summary, we produced hexanoic and octanoic acid for the first time in S. cerevisiae using the rBOX and achieved the highest reported titers of hexanoic and octanoic acid so far using this pathway in S. cerevisiae. In addition, we successfully compartmentalised the rBOX in the mitochondria. However, competing reactions, some of them essential for the viability of the cell, limit the use of this organelle for the rBOX.
Background: Prostate cancer is a major health concern in aging men. Paralleling an aging society, prostate cancer prevalence increases, emphasizing the need for efficient diagnostic algorithms.
Methods: Retrospectively, 106 prostate tissue samples from 48 patients (mean age, 66 ± 6.6 years) were included in the study. Patients suffered from prostate cancer (n = 38) or benign prostatic hyperplasia (n = 10) and were treated with radical prostatectomy or Holmium laser enucleation of the prostate, respectively. We constructed tissue microarrays (TMAs) comprising representative malignant (n = 38) and benign (n = 68) tissue cores. TMAs were processed to histological slides, stained, digitized, and assessed for the applicability of machine learning strategies and open-source tools in the diagnosis of prostate cancer. We applied the software QuPath to extract features for shape, stain intensity, and texture of TMA cores for three stainings: H&E, ERG, and PIN-4. Three machine learning algorithms, neural network (NN), support vector machine (SVM), and random forest (RF), were trained and cross-validated with 100 Monte Carlo random splits into a 70% training set and a 30% test set. We determined AUC values for single color channels, with and without optimization of hyperparameters by exhaustive grid search. We applied recursive feature elimination to feature sets of multiple color transforms.
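The validation scheme described in the Methods (100 Monte Carlo 70/30 splits with an AUC computed per split for NN, SVM, and RF) can be sketched with scikit-learn. Synthetic data stands in for the QuPath feature tables, and all model settings below are illustrative assumptions, not the study's configuration:

```python
# Hedged sketch of Monte Carlo cross-validation: 100 random 70/30 splits,
# AUC per split, for RF / SVM / NN classifiers. Synthetic data stands in
# for the QuPath feature tables; all settings here are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# 106 "cores" with 30 hypothetical shape/intensity/texture features:
X, y = make_classification(n_samples=106, n_features=30, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "NN": MLPClassifier(max_iter=300, random_state=0),
}

aucs = {name: [] for name in models}
for split in range(100):
    # One Monte Carlo split: 70% train, 30% test, new seed each repetition.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=split)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        aucs[name].append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

for name, vals in aucs.items():
    print(f"{name}: AUC = {np.mean(vals):.2f} ± {np.std(vals):.2f}")
```

Reporting the mean and standard deviation of the AUC over the 100 splits, as in the Results below, gives a stability estimate that a single train/test split cannot provide on a data set of this size.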
Results: Mean AUC was above 0.80. PIN-4 stainings yielded a higher AUC than H&E and ERG. For PIN-4 with the color transform saturation, NN, RF, and SVM revealed AUCs of 0.93 ± 0.04, 0.91 ± 0.06, and 0.92 ± 0.05, respectively. Optimization of hyperparameters improved the AUC only slightly, by 0.01. For H&E, feature selection resulted in no increase of the AUC, but in an increase of 0.02–0.06 for ERG and PIN-4.
Conclusions: Automated pipelines may be able to discriminate with high accuracy between malignant and benign tissue. We found PIN-4 staining best suited for classification. Further bioinformatic analysis of larger data sets would be crucial to evaluate the reliability of automated classification methods for clinical practice and to assess their potential to discriminate cancer aggressiveness, paving the way to automated precision medicine.