For a long time, the identification of subjects was directed at the human body: at its trace, its distinguishing mark, its fingerprint or DNA profile. In digital recognition procedures, however, this indexical connection is lost, on the one hand because they no longer relate a clue to an unmistakable body but instead match its image against the pixel distributions of thousands of other image files, and on the other because, even before identifying a person, they must first register a "body" as such. This, in turn, shifts all those subversive practices that seek to withdraw the human being from detection: instead of mere masking, the task now is to erase the body itself, through technical blurring, surface modulation, or movement patterns outside the AI training grids. Such strategies of withdrawal are being tested in contemporary video art, for instance in Hito Steyerl's "How Not to Be Seen" or Liam Young's "Choreographic Camouflage". This also raises the fundamental question of the availability and re-form(att)ing of the body in the digital realm, where it is transformed from a reservoir of signs and facts into virtuality and constant becoming, whose political potential unfolds far beyond all surveillance technology.
Reading clues is one of the practices of plausibilization in everyday as well as scholarly contexts. Through indexical techniques of inference, knowledge is dynamized both semiologically and hermeneutically. This is central to disciplines such as law, medicine, psychology, archaeology, philology, and others: they all follow clues as a kind of "allegory of reference". The clue, as the type of sign that shows itself while at the same time indicating (indicare) something else, constantly oscillates between evidence and reading. Antiquity already knew this, but it was in the eighteenth century that the matter came to be prominently discussed in epistemological and semiological terms.
When Carlo Ginzburg formulated the thesis that the humanities, like crime fiction, were grounded in the so-called "evidential paradigm" (Indizienparadigma), he had in mind a detective, Sherlock Holmes, who inspected the crime scene in person. There Holmes collected traces, combined them, and arrived at the solution of his case through conclusions that were often ingenious but also highly speculative. Against the background of current developments in research and criminal investigation, however, this materially and empirically grounded "evidential paradigm" must be revised. Ever since the private detective acquired competition from "Commissioner Computer", investigative practices have changed fundamentally: computer-assisted search and detection methods can replace an inspection of the crime scene, while algorithmic probability calculation illuminates past as well as future cases. This collected volume, with contributions from literary, media, and design studies, examines to what extent such "virtual investigations" in contemporary literature and art entail a revision of the evidential paradigm, and to what extent concepts of virtuality already shaped investigative work in the nineteenth century.
A partial-wave analysis of the decay J/ψ → K⁺K⁻π⁰ has been made using (223.7 ± 1.4) × 10⁶ J/ψ events collected with the BESIII detector in 2009. The analysis, which is performed within the isobar-model approach, reveals contributions from K₂*(1430)±, K₂*(1980)± and K₄*(2045)± decaying to K±π⁰. The latter two states are observed in J/ψ decays for the first time. Two resonance signals decaying to K⁺K⁻ are also observed. These contributions cannot be reliably identified, and their possible interpretations are discussed. The measured branching fraction B(J/ψ → K⁺K⁻π⁰) of (2.88 ± 0.01 ± 0.12) × 10⁻³ is more precise than previous results. Branching fractions for the reported contributions are presented as well. The results of the partial-wave analysis differ significantly from those previously obtained by BESII and BABAR.
On the initiative of the ecotoxicology specialization in the master's program in environmental sciences at Justus Liebig University Giessen, a new platform was established for students to meet and network with one another; it has now reached its first anniversary. To provide more insight into practical experimental work and into the degree programs of other universities, the specialization organizes formats that go beyond the lectures. The "Mittelhessischer Tag der Ökotoxikologie" (Central Hessian Day of Ecotoxicology) brings together students from Technische Hochschule Mittelhessen, Goethe University Frankfurt, Philipps-Universität Marburg, recently also the University of Kassel, and JLU at the FNU in Homberg (Ohm), giving them insight into ecotoxicological research and practice.
[Obituary] A great historian with broad influence: Lothar Gall, * 3 November 1937, † 22 May 2024
(2024)
Ten years of compassion
(2024)
On 18 June, Jürgen Habermas, who has lastingly shaped the humanities and social sciences at Goethe University, turns 95, and on this occasion our scholarly community, of which he remains an active member, sends its warmest congratulations. To this day, Habermas's scholarly and intellectual voice is among the most widely heard nationally and internationally, and we sincerely hope that this will long remain so.
A new open-access publication fund for General and Comparative Literature
(2024)
The working group Generative AI investigates the consequences of AI for Goethe University. The two studiumdigitale staff members of the working group, Julia Schmitt and Dr. David Weiß, who also heads the group, explain why merely rolling out technology is not enough and why it is important not to lose one's calm amid the AI hype.
The physicist Prof. Laura Sagunski, the linguist Dr. Mariam Kamarauli, and the biochemist PD Dr. Rupert Abele have been awarded the coveted 1822 University Prize for Excellent Teaching. For the 23rd time, Goethe University, together with the foundation of Frankfurter Sparkasse, has honored outstanding teachers.
A cheerful Babylon: in the Sprach-Welt-Café, students encounter a wide variety of languages
(2024)
Vacancy and utopia. The struggles over Campus Bockenheim: an exhibition goes on tour
(2024)
Discover the history and the contested present of Campus Bockenheim. The area is a place of diverse functions and interests. At the same time, it is marked by vacancy: a state between a university culture with student life that is "no longer" and a future use that is "not yet".
Exhibition in the Schopenhauer-Studio:...
Third university-wide student survey 2023 at Goethe University: key results
(2023)
The third university-wide student survey took place between the end of November 2022 and the end of January 2023, continuing the survey's five-year cycle (winter semester 2012/13, winter semester 2017/18). This summary of the key results presents a selection of the findings of the 2023 university-wide student survey. Its aim is to give a clear overview of the perspectives of Goethe University's students.
We study the hadronic decays of Λc⁺ to the final states Σ⁺η and Σ⁺η′, using an e⁺e⁻ annihilation data sample of 567 pb⁻¹ taken at a center-of-mass energy of 4.6 GeV with the BESIII detector at the BEPCII collider. We find evidence for the decays Λc⁺ → Σ⁺η and Λc⁺ → Σ⁺η′ with statistical significances of 2.5σ and 3.2σ, respectively. Normalizing to the reference decays Λc⁺ → Σ⁺π⁰ and Λc⁺ → Σ⁺ω, we obtain the ratios of branching fractions B(Λc⁺ → Σ⁺η)/B(Λc⁺ → Σ⁺π⁰) and B(Λc⁺ → Σ⁺η′)/B(Λc⁺ → Σ⁺ω) to be 0.35 ± 0.16 ± 0.03 and 0.86 ± 0.34 ± 0.07, respectively. The upper limits at the 90% confidence level are set to be B(Λc⁺ → Σ⁺η)/B(Λc⁺ → Σ⁺π⁰) < 0.58 and B(Λc⁺ → Σ⁺η′)/B(Λc⁺ → Σ⁺ω) < 1.2. Using BESIII measurements of the branching fractions of the reference decays, we determine B(Λc⁺ → Σ⁺η) = (0.41 ± 0.19 ± 0.05)% (< 0.68%) and B(Λc⁺ → Σ⁺η′) = (1.34 ± 0.53 ± 0.21)% (< 1.9%). Here, the first uncertainties are statistical and the second systematic. The obtained branching fraction of Λc⁺ → Σ⁺η is consistent with the previous measurement, and the branching fraction of Λc⁺ → Σ⁺η′ is measured for the first time.
Neutron star mergers (NSMs) are one of the astrophysical sites of the rapid neutron capture process (r-process). After a merger, the ejected neutron-rich matter hosts the production of radioactive heavy nuclei located far from the valley of stability. Their nuclear physics properties are key inputs for r-process nucleosynthesis calculations. Here, we focus on the importance of neutron-capture rates and perform a sensitivity study for typical outflows from NSMs. We identify the rates with the highest impact on the final r-process abundance pattern and on the nuclear energy release, which together determine the nucleosynthesis in NSMs. A list of the major n-capture rates affecting the production of individual isotopes and elements is also provided.
The sympathetic nervous system (SNS) is a major regulatory mediator connecting the brain and the immune system, and it accordingly influences inflammatory processes throughout the entire body. In the periphery, the SNS exerts its effects mainly via its neurotransmitters norepinephrine (NE) and epinephrine (E), which are released by peripheral nerve endings in lymphatic organs and other tissues. Depending on their concentration, NE and E bind to specific α- and β-adrenergic receptor subtypes and can cause both pro- and anti-inflammatory cellular responses. The co-transmitter neuropeptide Y, adenosine triphosphate, and its metabolite adenosine are also mediators of the SNS. Local pro-inflammatory processes due to injury or pathogens lead to an activation of the SNS, which in turn induces several immunoregulatory mechanisms with either pro- or anti-inflammatory effects, depending on neurotransmitter concentration and pathological context. In chronic inflammatory diseases, the activity of the SNS is persistently elevated and can trigger detrimental pathological processes. Recently, the sympathetic contribution to mild chronic inflammatory diseases such as osteoarthritis (OA) has attracted growing interest. OA is a whole-joint disease and is characterized by mild chronic inflammation in the joint. In this narrative article, we summarize the underlying mechanisms of the sympathetic influence on inflammation during OA pathogenesis. In addition, OA comorbidities that are likewise accompanied by mild chronic inflammation, such as hypertension, obesity, diabetes, and depression, will be reviewed. Finally, the potential of SNS-based therapeutic options for the treatment of OA will be discussed.
Autophagy is an important degradation pathway mediating the engulfment of cellular material (cargo) into autophagosomes and its subsequent degradation.
Different stress stimuli, e.g. nutrient deprivation, oxidative stress or organelle damage, engage autophagy to maintain cellular homeostasis, recycle nutrients or remove damaged cell organelles. Autophagy not only degrades bulk cytoplasmic material but also selective autophagic cargo, for example lysosomes (lysophagy), mitochondria (mitophagy), ER (ER-phagy), lipid droplets (lipophagy), protein aggregates (aggrephagy) or pathogens (xenophagy). Selective autophagy pathways are regulated by selective autophagy receptors which bind to ubiquitinated cargo proteins and link them to LC3 on the autophagosomal membrane.
Ubiquitination is an essential post-translational modification controlling different cellular processes such as proteasomal and lysosomal degradation or innate immune signaling.
M1-linked (linear) poly-Ubiquitin (poly-Ub) chains are exclusively assembled by the E3 ligase linear ubiquitin chain assembly complex (LUBAC) and removed by the M1 poly-Ub-specific OTU domain-containing deubiquitinase with linear linkage specificity (OTULIN). In addition to key functions in innate immune signaling and nuclear factor-κB (NF-κB) activation, M1 ubiquitination is also implicated in the regulation of autophagy.
LUBAC and OTULIN control autophagy initiation and maturation and the autophagic clearance of invading bacteria via xenophagy. However, additional functions of LUBAC- and OTULIN-regulated M1 ubiquitination in autophagy are largely unknown and it also remains unexplored if LUBAC and OTULIN control other selective autophagy pathways in addition to xenophagy. This study aimed to unravel the role of LUBAC- and OTULIN-controlled M1 ubiquitination in bulk and selective autophagy in more detail.
In this study, characterization of OTULIN-depleted MZ-54 glioblastoma (GBM) cells revealed that OTULIN deficiency results in enhanced LC3 lipidation in response to autophagy induction and upon blockade of late-stage autophagy with Bafilomycin A1 (BafA1). Furthermore, electron microscopy analysis showed that OTULIN-deficient cells have an increased number of degradative compartments (DGCs), confirming enhanced autophagy activity upon loss of OTULIN. APEX2-based autophagosome content profiling identified various OTULIN-dependent autophagy cargo proteins. Among these were the autophagy receptor TAX1BP1, which regulates different forms of selective autophagy (e.g. lysophagy, aggrephagy), and the glycan-binding protein galectin-3, which serves key functions in lysophagy, suggesting a role of OTULIN and M1 poly-Ub in the regulation of aggrephagy and lysophagy.
Abstract 2
To study aggrephagy, protein aggregation was induced with puromycin which causes premature termination of translation and accumulation of defective ribosomal products (DRiPs). Loss of OTULIN increased the number of M1 poly-Ub-positive foci and insoluble proteins and reduced the levels of soluble TAX1BP1 and p62 in response to puromycin-induced proteotoxic stress.
Intriguingly, upon induction of lysosomal membrane permeabilization (LMP) with the lysosomotropic drug L-Leucyl-L-Leucine methyl ester (LLOMe), M1 poly-Ub strongly accumulated at damaged lysosomes and colocalized with TAX1BP1- and galectin-3-positive puncta. M1 poly-Ub-modified lysosomes formed a platform for NF-κB essential modulator (NEMO) and inhibitor of κB (IκB) kinase (IKK) complex recruitment and local NF-κB activation in a K63 poly-Ub- and OTULIN-dependent manner. Furthermore, inhibition of lysosomal degradation enhanced LLOMe-induced cell death, suggesting pro-survival functions of lysophagy following LMP. Enrichment of M1 poly-Ub at damaged lysosomes was also observed in human dopaminergic neurons and in primary mouse embryonic cortical neurons, confirming the importance of M1 poly-Ub in the response to lysosomal damage.
Together, these results identify OTULIN as a negative regulator of autophagy induction and the autophagic flux and reveal OTULIN-dependent autophagy cargo proteins.
Furthermore, this study uncovers novel and important roles of M1 poly-Ub in the response to lysosomal damage and local NF-κB activation at damaged lysosomes.
The dynamics of the torsion field is analyzed in the framework of the Covariant Canonical Gauge Theory of Gravity (CCGG), a De Donder–Weyl Hamiltonian formulation of gauge gravity. The action is quadratic in both the torsion and the Riemann–Cartan tensor. Since the latter adds the derivative of torsion to the equations of motion, torsion is no longer identical to spin density, as in the Einstein–Cartan theory, but an additional propagating degree of freedom. As torsion turns out to be totally anti-symmetric, it can be parametrised via a single axial vector. It is shown in this paper that, in the weak-torsion limit, the axial vector obeys a wave equation with an effective mass term which is partially dependent on the scalar curvature. The source of torsion is thereby given by the fermion axial current, which is the net fermionic spin density of the system. Possible measurable effects and approaches to experimental analysis are addressed. For example, neutron star mergers could act as dipoles or quadrupoles for torsional radiation, and an analysis of the radiation of pulsars could lead to a detection of torsion wave background radiation.
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition with an onset in early development. ASD has varying degrees of severity and thus affects people differently throughout their lives. Early diagnosis of ASD is essential to provide children with individually tailored support. Eye tracking may contribute to an earlier diagnosis: several studies have shown differences in eye movements between people with ASD and typically developing (TD) controls. Different eye movements may lead to differences in visual perception that carry over into problems with attention, communication, and social interaction.
Eye movements are divided into (1) fixations, (2) saccades (fast, short eye movements), and (3) smooth pursuit eye movements (SPEM), which follow a target in a continuous manner. The latter are the subject of the present thesis. SPEM consist of two phases: the open-loop phase (the phase of initiation, the first 50-100 ms) and the closed-loop phase (the phase of maintenance, after about 100 ms). SPEM are usually measured by a gain index, defined as the ratio of smooth pursuit velocity to visual target velocity, which ideally equals 1.
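The gain index described above is a simple ratio; as a minimal sketch (the function name and example values are invented for illustration, not taken from the thesis):

```python
def smooth_pursuit_gain(eye_velocity_deg_s, target_velocity_deg_s):
    """Smooth pursuit gain (SPG): ratio of eye (pursuit) velocity to
    target velocity. A gain of 1 means the eye tracks the target
    perfectly; values below 1 indicate the eye lags behind it."""
    if target_velocity_deg_s == 0:
        raise ValueError("target velocity must be non-zero")
    return eye_velocity_deg_s / target_velocity_deg_s

# Example: eye moving at 9 deg/s while the target moves at 10 deg/s
gain = smooth_pursuit_gain(9.0, 10.0)
print(gain)  # 0.9, i.e. the eye slightly undershoots the target
```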
In young children, corneal-reflection (CR) eye-tracking is usually applied to quantify eye movement. It allows precise measurements without the use of potentially intrusive devices.
Studies in ASD have reported deficits in open-loop and closed-loop pursuit in children and adults with mean ages of 19.32 (TD) and 20.04 (ASD) years. However, SPEM in preschoolers with ASD remain understudied, although this developmental phase is crucial to the development of non-social and social attentional abilities.
In the present study 66 toddlers and preschoolers (18 to 72 months; ASD: n = 33, TD: n = 33) with matched cognitive abilities and sex were assessed. The main objective was to compare the gain index (Smooth Pursuit Gain = SPG). SPEM were compared between groups with gain index as a dependent measure. We hypothesized that participants with ASD show lower average gain compared to the control group.
We found a significant effect of group on gain when considering the interaction between target velocity and group (p = 0.041). The TD group showed a greater dependence on increasing target speed than the ASD group, with a trend of −0.30 ± 0.11 in the TD group and a trend of −0.13 ± 0.12 in the ASD group. Across groups, gain decreased with increasing target velocity and dropped faster in vertical than in horizontal trials. In addition, participants showed a lower SPG in vertical than in horizontal sequences. This supports the general validity of the measure.
Toddlers and preschoolers represent a group that has been the subject of little research to date. In addition, only a limited number of studies have analyzed SPEM in ASD. To check for a possible group difference without interactions, a study with a larger sample size at fixed target velocity and target direction should follow.
We introduce a Cannings model with directional selection via a paintbox construction and establish a strong duality with the line-counting process of a new Cannings ancestral selection graph in discrete time. This duality also yields a formula for the fixation probability of the beneficial type. Haldane's formula states that for a single selectively advantageous individual in a population of haploid individuals of size N, the probability of fixation is asymptotically (as N → ∞) equal to the selective advantage of haploids s_N divided by half of the offspring variance. For a class of offspring distributions in the domain of attraction of the Kingman coalescent we prove this asymptotics for sequences s_N obeying N^(−1) ≪ s_N ≪ N^(−1/2), which is a regime of "moderately weak selection". It turns out that for s_N ≪ N^(−2/3) the Cannings ancestral selection graph is so close to the ancestral selection graph of a Moran model that a suitable coupling argument allows the problem to be reduced asymptotically to the fixation probability in the Moran model, which can be computed explicitly.
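In the notation of the abstract, Haldane's formula amounts to the asymptotics (my transcription of the statement above, with σ² denoting the offspring variance):

```latex
P(\text{fixation}) \;\sim\; \frac{s_N}{\sigma^2/2} \;=\; \frac{2\,s_N}{\sigma^2}
\qquad (N \to \infty), \qquad N^{-1} \ll s_N \ll N^{-1/2}.
```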
Muller's ratchet, in its prototype version, models a haploid, asexual population whose size N is constant over the generations. Slightly deleterious mutations are acquired along the lineages at a constant rate, and individuals carrying fewer mutations have a selective advantage. The classical variant considers fitness-proportional selection, but other fitness schemes are conceivable as well. Inspired by the work of Etheridge et al. [EPW09], we propose a parameter scaling which fits well to the "near-critical" regime that was in the focus of [EPW09] (and in which the mutation-selection ratio diverges logarithmically as N → ∞). Using a Moran model, we investigate the "rule of thumb" given in [EPW09] for the click rate of the "classical ratchet" by putting it into the context of new results on the long-time evolution of the size of the best class of the ratchet with (binary) tournament selection, which (unlike that of the classical ratchet) follows an autonomous dynamics up to the time of its extinction. In [GSW23] it was discovered that the tournament ratchet has a hierarchy of dual processes which can be constructed on top of an ancestral selection graph with a Poisson decoration. For a regime in which the mutation-selection ratio remains bounded away from 1, this was used in [GSW23] to reveal the asymptotics of the click rates as well as that of the type-frequency profile between clicks. We describe how these ideas can be extended to the near-critical regime, in which the mutation-selection ratio of the tournament ratchet converges to 1 as N → ∞.
Highlights
• Reduced evoked theta activity in the deaf.
• Reduced theta-gamma and alpha-gamma cross-frequency couplings in the deaf.
• Stronger delta-alpha coupling in the deaf.
Abstract
Neurons within a neuronal network can be grouped by bottom-up and top-down influences using synchrony in neuronal oscillations. This creates the representation of perceptual objects from sensory features. Oscillatory activity can be differentiated into stimulus-phase-locked (evoked) and non-phase-locked (induced) components. The former is mainly determined by sensory input, the latter by higher-level (cortical) processing. Effects of auditory deprivation on cortical oscillations have been studied in congenitally deaf cats (CDCs) using cochlear implant (CI) stimulation. CI-induced alpha, beta, and gamma activity were compromised in the auditory cortex of CDCs. Furthermore, top-down information flow between secondary and primary auditory areas in hearing cats, conveyed by induced alpha oscillations, was lost in CDCs. Here we used the matching pursuit algorithm to assess components of such oscillatory activity in local field potentials recorded in primary field A1. In addition to the loss of induced alpha oscillations, we also found a loss of evoked theta activity in CDCs. The loss of theta and alpha activity in CDCs can be directly related to reduced high-frequency (gamma-band) activity due to cross-frequency coupling. Here we quantified such cross-frequency coupling in adult 1) hearing-experienced, acoustically stimulated cats (aHCs), 2) hearing-experienced cats after acute pharmacological deafening and subsequent CI stimulation (eHCs), and 3) electrically stimulated CDCs. We found significant cross-frequency coupling in all animal groups in > 70% of auditory-responsive sites. The predominant coupling in aHCs and eHCs was between theta/alpha phase and gamma power. In CDCs such coupling was lost and replaced by alpha oscillations coupling to delta/theta phase. Thus, alpha/theta oscillations synchronize high-frequency gamma activity only in hearing-experienced cats.
The absence of induced alpha and theta oscillations contributes to the loss of induced gamma power in CDCs, thereby signifying impaired local network activity.
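Phase-amplitude coupling of the kind described above can be quantified with measures such as Tort's modulation index. The sketch below illustrates that measure on synthetic phase and amplitude series (pure Python; the signal parameters and binning choice are my own assumptions, not the study's matching-pursuit pipeline):

```python
import math

def modulation_index(phase, amplitude, n_bins=18):
    """Tort modulation index (MI) for phase-amplitude coupling:
    bin the low-frequency phase, average the high-frequency amplitude
    per bin, normalize the bin means to a distribution P, and measure
    the deviation of P from uniform via its entropy, scaled to [0, 1]."""
    bin_sums = [0.0] * n_bins
    bin_counts = [0] * n_bins
    for ph, amp in zip(phase, amplitude):
        b = int(((ph % (2 * math.pi)) / (2 * math.pi)) * n_bins) % n_bins
        bin_sums[b] += amp
        bin_counts[b] += 1
    means = [s / c for s, c in zip(bin_sums, bin_counts)]
    total = sum(means)
    p = [m / total for m in means]
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (math.log(n_bins) - entropy) / math.log(n_bins)

# Synthetic example: theta phase at 6 Hz sampled at 1 kHz for 2 s.
fs, dur = 1000, 2.0
t = [i / fs for i in range(int(fs * dur))]
theta_phase = [2 * math.pi * 6 * ti for ti in t]
# Coupled: gamma amplitude waxes and wanes with theta phase.
amp_coupled = [1.0 + 0.8 * math.cos(ph) for ph in theta_phase]
# Uncoupled: constant gamma amplitude.
amp_flat = [1.0 for _ in t]

print(modulation_index(theta_phase, amp_coupled) >
      modulation_index(theta_phase, amp_flat))  # True
```

In practice the phase and amplitude series would be extracted from the recorded local field potentials (e.g. by filtering and a Hilbert transform) before applying such an index.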
Inorganic phosphate is one of the most abundant and essential nutrients in living organisms. It plays an indispensable role in energy metabolism and serves as a building block for major cellular components such as the backbones of DNA and RNA, the headgroups of phospholipids, and posttranslational modifications of many proteins. Disturbances in cellular phosphate homeostasis have a detrimental effect on the viability of cells. Therefore, both the import and the export of phosphate are strictly regulated in eukaryotic cells. In the eukaryotic model organism Saccharomyces cerevisiae, the uptake of phosphate is carried out either by transporters with high affinity or by transporters with low affinity, depending on the cytosolic phosphate concentration. While structures are available for homologues of the high-affinity transporters, no structures of low-affinity transporters have been solved so far. Interestingly, only the low-affinity transporters have a regulatory SPX domain, which is found in various proteins involved in phosphate homeostasis.
In this work, structures of Pho90 from Saccharomyces cerevisiae, a low-affinity phosphate transporter, were solved by cryo-EM, providing insights into its transport mechanism. The dimeric structure resembles the structures of proteins of the divalent anion sodium symporter (DASS) superfamily and of mammalian transporters of the solute carrier 13 (SLC13) family. The transmembrane domain of each protomer consists of 13 helical elements and can be subdivided into a scaffold domain and a transport domain. The structure of ScPho90 in the presence of phosphate shows the phosphate binding site within the transport domain in an outward-open conformation with a bound phosphate ion and two sodium ions. In the absence of phosphate, an asymmetric dimer structure was determined, with one protomer adopting an inward-open conformation. While the dimer contact and the scaffold domain are identical in both conformations, the transport domain is rotated by about 30° and shifted by 11 Å towards the cytoplasmic side, making the binding pocket accessible from the cytoplasm. Based on these findings and by comparison with known structures, a phosphate transport mechanism is proposed in the present work that involves substrate binding on the extracellular side, a conformational change through a rigid-body, "elevator-like" motion of the transport domain, and substrate release into the cytoplasm. The regulatory SPX domain is not well resolved in the ScPho90 structures, so no direct conclusions could be drawn about its regulatory mechanism. These findings provide new insights into the function and mechanism of eukaryotic low-affinity phosphate transporters.
While eukaryotic cells express various phosphate import proteins, most eukaryotes have only a single highly conserved and essential phosphate exporter. These exporters show no sequence homology to other transporters of known structure, but also possess a regulatory SPX domain. In this work, the structural basis for eukaryotic phosphate export is investigated by elucidating the structures of the homologous phosphate exporters Syg1 from Saccharomyces cerevisiae and Xpr1 from Homo sapiens, using cryo-EM. The structures of ScSyg1 and HsXpr1 show a conserved homodimeric structure and the transmembrane part of each protomer consists of 10 TM helices. Helix TM1 establishes the dimer contact by means of a glycine zipper motif, which is a known oligomerization motif. Helices TM2-5 form a hydrophobic pocket that has density for a lipid molecule. Whether the lipid binding into the hydrophobic pocket has an allosteric effect on the phosphate export activity or only serves protein stabilization is not known. Helices TM5-10 form a six-helix bundle, which constitutes a putative phosphate translocation pathway in its center. This bundle is formed by the protein sequence annotated as EXS domain.
The respective phosphate translocation pathways of ScSyg1 and HsXpr1 show structural differences. While the translocation pathway in HsXpr1 is accessible from the cytoplasm, in ScSyg1 it is closed by a large loop of the SPX domain. Interestingly, this loop is not conserved in higher eukaryotes and is therefore not present in HsXpr1. Another difference lies in the distinct conformations of helix TM9. In ScSyg1, TM9 adopts a kinked conformation, which results in the translocation pathway being open to the extracellular side. In contrast, TM9 adopts a straight conformation in HsXpr1, placing a highly conserved tryptophan residue in the middle of the translocation pathway. As a result, the translocation pathway in HsXpr1 is closed to the extracellular side.
Libra — a global virtual currency project initiated by Facebook — has been the subject of many controversial discussions since its announcement in June 2019. This paper provides a differentiated view on Libra, recognising that different development scenarios of Libra are conceivable. Libra could serve purely as an alternative payment system in combination with a dedicated payment token, the Libra coin. Alternatively, the Libra project could develop into a broader financial infrastructure for advanced financial services such as savings and loan products operating on the Libra Blockchain. Based on a comparison of the Libra architecture with other cryptocurrencies, the opportunities and challenges for the development of the respective Libra ecosystems are investigated from a commercial, regulatory and monetary policy perspective.
The importance of agile methods has increased in recent years, not only to manage IT projects but also to establish flexible and adaptive organisational structures, which are essential to deal with disruptive changes and build successful digital business strategies. This paper takes an industry-specific perspective by analysing the dissemination, objectives and relative popularity of agile frameworks in the German banking sector. The data provides insights into expectations and experiences associated with agile methods and indicates possible implementation hurdles and success factors. Our research provides the first comprehensive analysis of agile methods in the German banking sector. The comparison with a selected number of fintechs has revealed some differences between banks and fintechs. We found that almost all banks and fintechs apply agile methods in IT projects. However, fintechs have relatively more experience with agile methods than banks and use them more intensively. Scrum is the most relevant framework used in practice. Scaled agile frameworks are so far negligible in the German banking sector. Acceleration of projects is apparently the most important objective of deploying agile methods. In addition, agile methods can contribute to cost savings and lead to improved quality and innovation performance, though for banks it is evidently more challenging to reach their respective targets than for fintechs. Overall our findings suggest that German banks are still in a maturing process of becoming more agile and that there is room for an accelerated adoption of agile methods in general and scaled agile frameworks in particular.
Deep brain stimulation (DBS) is a neurosurgical procedure in which electrical impulses are applied to deep brain structures without damaging them. DBS is an established treatment option for the movement disorders of Parkinson's disease, essential tremor and dystonia. At the same time, there is growing interest in further applications for neurological and psychiatric disorders, so DBS can be expected to remain a growing therapeutic procedure in neurosurgery.
The DBS systems implanted to date are not MRI-compatible, or only to a limited extent.
Given the rising number of imaging examinations observed in recent years, MRI scans in particular, the question arises how imaging should be handled in this patient population and whether MRI-compatible DBS systems are necessary.
To this end, the present dissertation analysed how many patients fitted with a DBS system actually required an MRI after successful implantation and what consequences, if any, this had.
The patient data collected retrospectively for this purpose were drawn both from the digital patient record system and from telephone interviews with patients who had had a DBS system implanted for at least 12 months. It was recorded who underwent a CT or MRI examination and for what reason. In addition, an independent neurologist assessed these data as to whether an MRI would have been more appropriate than a CT.
Of the 54 participating patients, 28 underwent at least one CT or MRI examination. In 16 cases involving 14 of these patients, the question arose whether an MRI should be performed. Ultimately, seven MRI examinations were carried out on seven patients: three cranial and four spinal MRI examinations.
In seven cases involving six patients, the independent neurologist would have advised an MRI instead of the CT examination that was performed.
Of the MRI examinations performed, all cranial and two spinal MRI examinations led to conservative treatment; two of the spinal MRI examinations led to surgical treatment. No critical situations or sequelae occurred during the MRI examinations performed in this cohort. For reasons of patient safety, it is nevertheless recommended that, where possible, patients with a DBS implant undergo CT examinations.
From the data underlying this work it can be concluded that the use of MRI-compatible DBS devices is not strictly necessary, but that it should certainly be considered, particularly for younger patients.
The financial sector plays an important role in financing the green transformation. Various regulatory initiatives in the EU aim to improve transparency in relation to the sustainability of financial products and the sustainability of economic activities of non-financial and financial undertakings. For credit institutions, the Green Asset Ratio (GAR) has been established by the European regulatory authorities as a key performance indicator (KPI) for measuring the proportion of Taxonomy-aligned on-balance-sheet exposure in relation to the total assets. The breakdown of the total GAR by type of counterparty, environmental objective and type of asset provides in-depth information about the sustainability profile of a credit institution. This information, which has not been available to date, may also initiate discussions between management and shareholders or other stakeholders regarding the future sustainability strategy of credit institutions. This paper provides an overview of the regulatory background and the method of calculating the GAR along different dimensions. Finally, the potential benefits and limitations of the GAR are discussed.
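The headline calculation described above reduces to a simple ratio. The sketch below, using invented figures and illustrative counterparty buckets (not the regulatory templates), shows how the total GAR aggregates across one breakdown dimension:

```python
def green_asset_ratio(taxonomy_aligned_assets, total_covered_assets):
    """Headline GAR: Taxonomy-aligned on-balance-sheet exposure
    divided by total covered assets."""
    if total_covered_assets <= 0:
        raise ValueError("total covered assets must be positive")
    return taxonomy_aligned_assets / total_covered_assets

# Illustrative breakdown by counterparty type; all figures invented.
exposures_bn_eur = {
    "non_financial_corporates": (12.0, 80.0),  # (aligned, total)
    "households":               (9.0, 60.0),
    "financial_corporates":     (1.5, 40.0),
}
aligned = sum(a for a, _ in exposures_bn_eur.values())
total = sum(t for _, t in exposures_bn_eur.values())
print(f"GAR = {green_asset_ratio(aligned, total):.1%}")  # prints: GAR = 12.5%
```

The per-bucket ratios can be reported the same way, which is what makes the breakdown by counterparty, objective and asset type informative beyond the single headline figure.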
Advances in distributed ledger technology are leading to a growing decentralisation of financial services (“decentralised finance”) that can be offered largely without intermediation by financial institutions. An important driver of this development is the ongoing tokenisation of assets, payments and rights, which enables their digital representation as “crypto assets” on distributed ledgers. This article elaborates the foundations and fields of application of decentralised financial services with crypto assets that could challenge the established business models of financial institutions. This trend affects not only payment systems based on controversial cryptocurrencies such as Bitcoin, but also exchange platforms, capital markets solutions and corporate financing. A rapidly growing ecosystem of start-ups, tech companies and financial institutions is emerging, yet this ecosystem lacks a consistent regulatory framework. The European initiative MiCA (Markets in Crypto Assets) points in the right direction but needs to be adopted soon to ensure the future competitiveness of the European financial sector.
The financial sector plays an important role in supporting the green transformation of the European economy. A critical assessment of the current regulatory framework for sustainable finance in Europe leads to ambiguous results. Although the level of transparency on environmental, social and governance aspects of financial products has improved significantly, it is questionable whether the complex, mainly disclosure-oriented architecture is sufficient to mobilise more private capital into sustainable investments. It should be discussed whether a minimum taxonomy ratio or Green Asset Ratio has to be fulfilled to market a financial product as “green”. Furthermore, because of the high complexity of the regulation, it could be helpful for private investors to establish a simplified green rating, based on the taxonomy ratio, to facilitate the selection of green financial products.
With a notional amount outstanding of more than USD 500 trillion, the market for OTC derivatives is of vital importance for global financial stability. A growing proportion of these contracts are cleared via central counterparties (CCPs), which means that CCPs are gaining in importance as critical financial market infrastructures. At the same time, there is growing concern that a new "too big to fail" problem could arise, as the CCP industry is highly concentrated due to economies of scale. From a European perspective, it should be noted that the clearing of euro-denominated OTC derivatives mainly takes place in London, and hence outside the EU for the foreseeable future. For some time there has been a controversial discussion as to whether this can remain the case post-Brexit. CCPs that clear a significant proportion of euro OTC derivatives and are systemically relevant from an EU perspective should be subject to direct supervision by EU authorities and should be established in the EU. This would represent an important building block for a future Capital Markets Union in Europe, as regulatory or supervisory arbitrage in favour of systemically important third-country CCPs could be prevented. In addition, if a systemically relevant CCP handling a considerable portion of the euro OTC derivatives business were to run into serious difficulties, this could impact ECB monetary policy. This applies both to demand for central bank money and to the transmission of monetary policy measures, which can be significantly impaired, particularly in the event that the repo market or payment systems are disrupted. It is therefore essential for the ECB to be closely involved in the supervision of CCPs. Against this background, the draft amendment of EMIR (European Market Infrastructure Regulation) presented on 13 June 2017 is a step in the right direction.
In addition, there is an urgent need to introduce a recovery and resolution mechanism for CCPs in the EU to complement the existing single resolution mechanism (SRM) for banks in the eurozone. Only then can the diverse interdependencies between banks and CCPs be adequately taken into account in the recovery and resolution programmes required in a financial crisis.
The German federal government intended to alleviate the burden of increasing fuel prices by introducing a temporary reduction of energy taxes on gasoline and diesel. In order to evaluate the impact of this measure on consumer prices at the filling stations, the development of procurement costs for crude oil as well as the downstream development of refinery and distribution margins have to be taken into account. It turns out that about 80% of the tax reduction was passed on to end consumers on and around the effective date of the tax relief. However, within the first month the impact of the tax reduction was completely wiped out for diesel, as the gross margins of the mineral oil groups have substantially improved since then. For gasoline (E10), on the other hand, at least part of the impact can still be observed, as the initial margin improvement has since come down. For a detailed analysis, the German antitrust authority should look into the pricing algorithms of all 14,000 filling stations in Germany.
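The pass-through logic sketched in the abstract can be made concrete. The following is a minimal sketch with invented numbers, assuming pass-through is measured as the pump-price drop, adjusted for concurrent procurement-cost changes, relative to the size of the tax cut:

```python
def pass_through_rate(price_before, price_after, tax_cut, cost_change=0.0):
    """Share of an energy-tax cut passed on at the pump.

    The observed price drop is adjusted for the concurrent change in
    procurement costs (positive cost_change = crude costs rose), since
    absent the tax cut prices would have moved with costs.
    All values in euro cents per litre.
    """
    observed_drop = price_before - price_after
    return (observed_drop + cost_change) / tax_cut

# Invented illustrative numbers, not figures from the paper:
rate = pass_through_rate(price_before=210.0, price_after=184.0,
                         tax_cut=35.0, cost_change=2.0)
print(f"pass-through ≈ {rate:.0%}")  # prints: pass-through ≈ 80%
```

A rate persistently below 100% after the cost adjustment is exactly the margin improvement the abstract attributes to the mineral oil groups.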
More sustainability in Germany's benchmark index DAX: reform proposals in light of the Wirecard scandal
(2020)
As part of the reappraisal of the Wirecard scandal, changes to the criteria for inclusion in Germany's benchmark DAX index are also being discussed. The measures so far envisaged by Deutsche Börse point in the right direction but do not go far enough. A clear signal is needed that in future only companies that achieve at least a satisfactory level of sustainability in their business activities, as measured by an ESG (Environment, Social, Governance) risk score, can qualify for the DAX. A simulation shows that companies long viewed critically under ESG criteria would no longer belong to the DAX. This could channel more capital into sustainably operating companies and sectors.
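The proposed qualification rule can be illustrated as a simple index screen. This sketch uses invented companies, market capitalisations and ESG risk scores, and a hypothetical score ceiling; it is not the paper's simulation:

```python
# Hypothetical screening rule: only companies with an ESG risk score at or
# below a ceiling qualify; the index is then filled with the largest
# eligible companies. Names, market caps and scores are all invented.
candidates = [
    ("A AG", 310.0, 18.2),  # (name, market cap in bn EUR, ESG risk score)
    ("B SE", 250.0, 41.0),
    ("C AG", 180.0, 24.9),
    ("D SE", 150.0, 33.7),
    ("E AG", 120.0, 22.0),
]
ESG_RISK_CEILING = 30.0  # "at least satisfactory" sustainability
INDEX_SIZE = 3

eligible = [c for c in candidates if c[2] <= ESG_RISK_CEILING]
index = sorted(eligible, key=lambda c: c[1], reverse=True)[:INDEX_SIZE]
print([name for name, _, _ in index])  # prints: ['A AG', 'C AG', 'E AG']
```

Under such a screen, a large company with a poor score ("B SE" here) drops out in favour of smaller but more sustainable firms, which is the reallocation effect the abstract describes.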
We analyze the experimental data on nuclei and hypernuclei yields recently obtained by the STAR collaboration. The hybrid dynamical and statistical approaches which have been developed previously are able to describe the experimental data reasonably. We discuss the intriguing difference between the yields of normal nuclei and hypernuclei which may be related to the properties of hypermatter at subnuclear densities. New (hyper)nuclei could be detected via particle correlations. Such measurements are important to pin down the production mechanism.
Dieser Arbeit lag die Fragestellung zugrunde, welchen Einfluss die jeweilige Transfusionsstrategie auf AML-Patienten unter intensiver Induktionschemotherapie hat.
Dafür wurde ein am Universitätsklinikum Frankfurt zwischen 2007 und 2018 behandeltes Kollektiv von 352 Patienten untersucht. So erfolgte hier bis ins Jahr 2015 hinein die Transfusion eines EK ab einem Hb-Wert <8g/dl und nach Änderung des Transfusionstriggers ab einem Hb <7g/dl. AML-Patienten aus dem Jahr 2015 – dem Jahr der Änderung der Transfusionsregel – wurden von weiterer Untersuchung ausgeschlossen, um zwei klar abgrenzbare Kohorten erhalten zu können.
Es zeigte sich, dass die weniger restriktive Transfusionskohorte unter Induktionschemotherapie einen durchschnittlich um 1g/dl höheren Hb-Wert aufwies und früher als die restriktive Kohorte transfundiert wurde. Die Anzahl an Fiebertagen, CRP-Werte, Aufenthalte auf Intensivstation sowie die Dauer des Krankenhausaufenthaltes betrachtend, zeigte sich hingegen kein signifikanter Unterscheid zwischen beiden Kohorten.
Basierend auf unserer Arbeit ergeben sich keine Hinweise dafür, dass die restriktive Transfusionspraxis einer weniger restriktiven für AML-Patienten unterlegen ist. Limitierend auf die Aussagekraft der Ergebnisse wirken sich dabei die retrospektive Natur der Arbeit sowie die zeitliche Verschiebung der Behandlungszeiträume beider Kohorten aus.
Ergebnisse der bislang ausstehenden randomisierten Studien, die den Einfluss unterschiedlicher Transfusionsregimes auf Patienten mit hämatologischen Krankheiten untersuchen, sind in Bälde zu erwarten. Die bereits vorliegenden Pilotstudien und Ergebnisse der TRIST-Studie decken sich mit der von uns beobachteten Nicht-Unterlegenheit der restriktiven Transfusionspraxis für ein hämatoonkologisches Patientenkollektiv, sodass es abzuwarten gilt, ob sich dies auch in weiteren größeren randomisierten und kontrollierten Studien beweisen kann.
We analyze the experimental data on nuclei and hypernuclei yields recently obtained by the STAR collaboration. The hybrid dynamical and statistical approaches which have been developed previously are able to describe the experimental data reasonably. We discuss the intriguing difference between the yields of normal nuclei and hypernuclei which may be related to the properties of hypermatter at subnuclear densities. Most importantly new (hyper-)nuclei could be detected via particle correlations, and such measurements are relevant to pin down the production mechanism.
We report on new measurements of Cabibbo-suppressed semileptonic D+s decays using a 3.19 fb−1 sample of e+e− annihilation data collected at a center-of-mass energy of 4.178 GeV with the BESIII detector at the BEPCII collider. Our results include the branching fractions B(D+s→K0e+νe)=(3.25±0.38(stat.)±0.16(syst.))×10−3 and B(D+s→K∗0e+νe)=(2.37±0.26(stat.)±0.20(syst.))×10−3, which are much improved relative to previous measurements, and the first measurements of the hadronic form-factor parameters for these decays. For D+s→K0e+νe, we obtain f+(0)=0.720±0.084(stat.)±0.013(syst.), and for D+s→K∗0e+νe, we find the form-factor ratios rV=V(0)/A1(0)=1.67±0.34(stat.)±0.16(syst.) and r2=A2(0)/A1(0)=0.77±0.28(stat.)±0.07(syst.).
The process e+e−→pK0Sn¯K−+c.c. and its intermediate processes are studied for the first time, using data samples collected with the BESIII detector at BEPCII at center-of-mass energies of 3.773, 4.008, 4.226, 4.258, 4.358, 4.416, and 4.600 GeV, with a total integrated luminosity of 7.4 fb−1. The Born cross section of e+e−→pK0Sn¯K−+c.c. is measured at each center-of-mass energy, but no significant resonant structure in the measured cross-section line shape between 3.773 and 4.600 GeV is observed. No evident structure is detected in the pK−, nK0S, pK0S, nK+, pn¯, or K0SK− invariant mass distributions except for Λ(1520). The Born cross sections of e+e−→Λ(1520)n¯K0S+c.c. and e+e−→Λ(1520)p¯K++c.c. are measured, and the 90% confidence level upper limits on the Born cross sections of e+e−→Λ(1520)Λ¯(1520) are determined at the seven center-of-mass energies.
An amplitude analysis of the 𝐾𝑆𝐾𝑆 system produced in radiative 𝐽/𝜓 decays is performed using the (1310.6±7.0)×106 𝐽/𝜓 decays collected by the BESIII detector. Two approaches are presented. A mass-dependent analysis is performed by parametrizing the 𝐾𝑆𝐾𝑆 invariant mass spectrum as a sum of Breit-Wigner line shapes. Additionally, a mass-independent analysis is performed to extract a piecewise function that describes the dynamics of the 𝐾𝑆𝐾𝑆 system while making minimal assumptions about the properties and number of poles in the amplitude. The dominant amplitudes in the mass-dependent analysis include the 𝑓0(1710), 𝑓0(2200), and 𝑓′2(1525). The mass-independent results, which are made available as input for further studies, are consistent with those of the mass-dependent analysis and are useful for a systematic study of hadronic interactions. The branching fraction of radiative 𝐽/𝜓 decays to 𝐾𝑆𝐾𝑆 is measured to be (8.1±0.4)×10−4, where the uncertainty is systematic and the statistical uncertainty is negligible.
Using 16 energy points of e+e− annihilation data collected in the vicinity of the J/ψ resonance with the BESIII detector and with a total integrated luminosity of around 100 pb−1, we study the relative phase between the strong and electromagnetic amplitudes of J/ψ decays. The relative phase between J/ψ electromagnetic decay and the continuum process (e+e− annihilation without the J/ψ resonance) is confirmed to be zero by studying the cross section lineshape of μ+μ− production. The relative phase between J/ψ strong and electromagnetic decays is then measured to be (84.9 ± 3.6)◦ or (−84.7 ± 3.1)◦ for the 2(π+π−)π0 final state by investigating the interference pattern between the J/ψ decay and the continuum process. This is the first measurement of the relative phase between J/ψ strong and electromagnetic decays into a multihadron final state using the lineshape of the production cross section. We also study the production lineshape of the multihadron final state ηπ+π− with η → π+π−π0, which provides additional information about the phase between the J/ψ electromagnetic decay amplitude and the continuum process. Additionally, the branching fraction of J/ψ → 2(π+π−)π0 is measured to be (4.73 ± 0.44)% or (4.85 ± 0.45)%, and the branching fraction of J/ψ → ηπ+π− is measured to be (3.78 ± 0.68) × 10−4. Both of them are consistent with the world average values. The quoted uncertainties include both statistical and systematic uncertainties, which are mainly caused by the low statistics.
Bounding Dark Energy from the SPARC rotation curves: Data driven probe for galaxy virialization
(2024)
Dark Energy (DE) acts as a repulsive force that opposes gravitational attraction. Assuming that galaxies maintain a steady state over extended periods, an upper bound on DE can be estimated from its resistance to the attractive gravitational force of dark matter. Using the SPARC dataset, we fit the Navarro-Frenk-White (NFW) and Hernquist models to identify the galaxies best suited to these models. Introducing DE into these galaxies then establishes the upper limit on its repulsive force. This upper bound on DE sits around ρΛ ∼ 10−25 kg/m3, only two orders of magnitude higher than the value measured by Planck. We discuss the conditions for detecting DE in different systems and show that the upper bound from galaxies is consistent with that from other systems. The upper bound is of the same order of magnitude as ρ200=200ρc for both dark matter profiles. We also address the implications of future measurements for this upper bound and the conditions for detecting the impact of Λ on galactic scales.
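The two halo models fitted to the SPARC rotation curves have closed-form enclosed-mass profiles. The sketch below states the standard textbook formulas in code, together with the circular velocity they imply; this is an illustration of the ingredients, not the authors' fitting pipeline:

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def nfw_enclosed_mass(r, rho_s, r_s):
    """Mass inside radius r for an NFW profile
    rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)**2)."""
    x = r / r_s
    return 4.0 * math.pi * rho_s * r_s**3 * (math.log1p(x) - x / (1.0 + x))

def hernquist_enclosed_mass(r, m_tot, a):
    """Mass inside radius r for a Hernquist profile of total mass m_tot
    and scale length a: M(<r) = m_tot * r**2 / (r + a)**2."""
    return m_tot * r**2 / (r + a) ** 2

def circular_velocity(r, m_enclosed):
    """v_c(r) = sqrt(G * M(<r) / r), the quantity compared to the
    observed rotation curves."""
    return math.sqrt(G * m_enclosed / r)
```

A repulsive DE term would reduce the effective enclosed mass at large radii, which is what makes the outer rotation curve sensitive to an upper bound on ρΛ.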
Regulating IP exclusion/inclusion on a global scale: the example of copyright vs. AI training
(2024)
This article builds upon the literature on inclusion/inclusivity in IP law by applying these concepts to the example of the scraping and mining of copyright-protected content for the purpose of training an artificial intelligence (AI) system or model. Which mode of operation dominates in this technological area: exclusion, inclusion or even inclusivity? The features of AI training appear to call for universal and sustainable “inclusivity” instead of a mere voluntary “inclusion” of AI provider bots by copyright holders. As the overview on the copyright status of AI training activities in different jurisdictions and emerging laws on AI safety (such as the EU AI Act) demonstrates, the global regulatory landscape is, however, much too fragmented and dynamic to immediately jump to an inclusive global AI regime. For the time being, legally secure global AI training requires the voluntary cooperation between AI providers and copyright holders, and innovative techno-legal reasoning is needed on how to effectuate this inclusion.
Determining the phase structure of Quantum Chromodynamics (QCD) and its Equation of State (EOS) at densities and temperatures realized inside neutron stars and their mergers is a long-standing open problem. The holographic V-QCD framework provides a model for the EOS of dense and hot QCD, which describes the deconfinement phase transition between a dense baryonic and a quark matter phase. We use this model in fully general relativistic hydrodynamic (GRHD) simulations to study the formation of quark matter and the emitted gravitational wave signal of binary systems that are similar to the first ever observed neutron star merger event GW170817.
Species lists play an important role in biology and practical domains like conservation, legislation, biosecurity and trade regulation. However, their effective use by non-specialist scientific and societal users is sometimes hindered by disagreements between competing lists. While it is well-known that such disagreements exist, it remains unclear how prevalent they are, what their nature is, and what causes them. In this study, we argue that these questions should be investigated using methods based on taxon concept rather than methods based on Linnaean names, and use such a concept-based method to quantify disagreement about bird classification and investigate its relation to research effort. We found that there was disagreement about 38% of all groups of birds recognized as a species, more than three times as much as indicated by previous measures. Disagreement about the delimitation of bird groups was the most common kind of conflict, outnumbering disagreement about nomenclature and disagreement about rank. While high levels of conflict about rank were associated with lower levels of research effort, this was not the case for conflict about the delimitation of bird groups. This suggests that taxonomic disagreement cannot be resolved simply by increasing research effort.
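The concept-based approach argued for above can be illustrated with a toy comparison in which a taxon concept is a set of populations rather than a Linnaean name; the checklists, names and groupings below are invented, not the study's data or method:

```python
# Toy concept-based comparison: a taxon concept is represented as the
# frozenset of populations a name refers to in a given checklist.
list_a = {
    "Corvus corone": frozenset({"p1", "p2"}),
    "Corvus cornix": frozenset({"p3"}),
    "Pica pica":     frozenset({"p4"}),
}
list_b = {
    "Corvus corone":  frozenset({"p1", "p2", "p3"}),  # lumped into one species
    "Pica melanotos": frozenset({"p4"}),              # same group, other name
}

concepts_a, concepts_b = set(list_a.values()), set(list_b.values())

def name_of(checklist, concept):
    return next(n for n, c in checklist.items() if c == concept)

# Nomenclature conflict: same group of populations, different names.
nomen = [c for c in concepts_a & concepts_b
         if name_of(list_a, c) != name_of(list_b, c)]
# Delimitation conflict: a grouping recognized in only one checklist.
delim = concepts_a ^ concepts_b

print(len(nomen), len(delim))  # prints: 1 3
```

Note that a purely name-based comparison would miss the lumping entirely, since "Corvus corone" appears in both lists; comparing the underlying population sets is what surfaces the delimitation disagreement.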
ISOE-Newsletter Nr. 3/2024
(2024)
A district leads the way: how to manage groundwater well +++ How municipalities can link and promote sustainable nutrition and planetary health +++ Ecosystems under climate change: the significance of social-ecological tipping points for Namibia's savannas +++ Experiencing art, valuing biodiversity? A dance performance on the role of insects in the city +++ Reducing plastics in food packaging +++ Current posts on the ISOE blog +++ News from ISOE +++ ISOE in the media +++ Events +++ Publications
The STAR Collaboration presents measurements of the semi-inclusive distribution of charged-particle jets recoiling from energetic direct-photon γdir and neutral-pion (π0) triggers in p+p and central Au+Au collisions at √sNN = 200 GeV over a broad kinematic range, for jet resolution parameters R=0.2 and 0.5. Medium-induced jet yield suppression is observed to be larger for R=0.2 than for 0.5, reflecting the angular range of jet energy redistribution due to quenching. The magnitude of suppression is similar for γdir- and π0-triggered data, which constrains the color-charge and path-length dependence of jet quenching. Theoretical model calculations incorporating jet quenching do not fully describe the measurements.
Using an 𝑒+𝑒− collision data sample with a total integrated luminosity of 3.19 fb−1 collected with the BESIII detector at a center-of-mass energy of 4.178 GeV, the branching fraction of the inclusive decay of the 𝐷+𝑠 meson to final states including at least three charged pions is measured for the first time to be ℬ(𝐷+𝑠→𝜋+𝜋+𝜋−𝑋)=(32.81±0.35stat±0.63syst)%. In this measurement the charged pions from 𝐾0𝑆 meson decays are excluded. The partial branching fractions of 𝐷+𝑠→𝜋+𝜋+𝜋−𝑋 are also measured as a function of the 𝜋+𝜋+𝜋− invariant mass.
The process e+e−→Σ+Σ¯− is studied from threshold up to 3.04 GeV/c2 via the initial-state radiation technique using data with an integrated luminosity of 12.0 fb−1, collected at center-of-mass energies between 3.773 and 4.258 GeV with the BESIII detector at the BEPCII collider. The pair production cross sections and the effective form factors of Σ are measured in eleven Σ+Σ¯− invariant mass intervals from threshold to 3.04 GeV/c2. The results are consistent with the previous results from Belle and BESIII. Furthermore, the branching fractions of the decays J/ψ→Σ+Σ¯− and ψ(3686)→Σ+Σ¯− are determined and the obtained results are consistent with the previous results of BESIII.
The differential cross section for 𝑍0 production, measured as a function of the boson’s transverse momentum (𝑝T), provides important constraints on the evolution of the transverse momentum dependent parton distribution functions (TMDs). The transverse single spin asymmetry (TSSA) of the 𝑍0 is sensitive to one of the polarized TMDs, the Sivers function, which is predicted to have the opposite sign in 𝑝 + 𝑝 → 𝑊 ∕𝑍 + 𝑋 from that which enters in semi-inclusive deep inelastic scattering. In this Letter, the STAR Collaboration reports the first measurement of the 𝑍0∕𝛾∗ differential cross section as a function of its 𝑝T in 𝑝+𝑝 collisions at a center-of-mass energy of 510 GeV, together with the 𝑍0∕𝛾∗ total cross section. We also report the measurement of 𝑍0∕𝛾∗ TSSA in transversely polarized 𝑝+𝑝 collisions at 510 GeV.
In the world of the wild guys: a popular series in the shadow of the Russian-Ukrainian war
(2024)
At the turn of 2023/2024, a Russian television series achieved what would seem all but unimaginable during Russia's war against Ukraine: within a few days, "A Guy's Word. Blood on the Asphalt" ("Slowo pazana. Krow na asfalte", 2023) became the most popular series of the year on both sides of the trenches. Viewing and click figures reached record highs, and the title song "Pyjala" ('glass') by the Tatar band Aigel climbed to the top of various charts in both countries. In the Russian Federation, the series carried an "18+" age rating and was only available on the private streaming services Wink and START. Yet even while the eight episodes of the first season were being broadcast, from 9 November to 21 December 2023, the series spread at lightning speed via Telegram and other digital channels. Sentences such as "Guys don't apologise" or "Remember, you're a guy now, you're on the street now, and all around you are enemies" became catchphrases. Educators and politicians sounded the alarm when reports appeared in the press of brawls inspired by the series, both in Russia and in Ukraine. Even before the final episodes had aired, "A Guy's Word" had met with a broad media echo in both countries, with reviews ranging from enthusiastic acclaim to sheer horror and categorical rejection. In Ukraine, the discussion revolved above all around the question of whether the series was dangerous war propaganda simply because it came from the enemy country. In Russia, it was the supposed romanticisation of the criminal underworld that caused offence, although some critics also read the series as a subversive distorting mirror of the military aggression. But what kind of popular-culture work was it that generated so much attention and agitation?
Natural Language Processing (NLP) for big data requires an efficient and sophisticated infrastructure to complete tasks both quickly and correctly. Providing an intuitive and lightweight interaction with a framework that abstracts and simplifies complex tasks assists in reaching this goal. This bachelor thesis extends the NLP framework Docker Unified UIMA Interface (DUUI) with an API and a web-based graphical user interface to control and manage pipelines for the automated analysis of large quantities of natural language. The extension aims to lower the entry barrier into the field and to accelerate the creation and management of pipelines according to UIMA standards. Pipelines can be executed in the browser or via the web API directly and then monitored at the document level. The evaluation of usability and user experience indicates that the implementation benefits the framework by making its usage more user-friendly, lightweight, and intuitive while also making the management of pipelines more efficient.
Graph4Med: a web application and a graph database for visualizing and analyzing medical databases
(2022)
Background: Medical databases normally contain large amounts of data in a variety of forms. Although they grant significant insights into diagnosis and treatment, implementing data exploration into current medical databases is challenging since these are often based on a relational schema and cannot be used to easily extract information for cohort analysis and visualization. As a consequence, valuable information regarding cohort distribution or patient similarity may be missed. With the rapid advancement of biomedical technologies, new forms of data from methods such as Next Generation Sequencing (NGS) or chromosome microarray (array CGH) are constantly being generated; it can therefore be expected that the amount and complexity of medical data will rise and push relational database systems to their limits.
Description: We present Graph4Med, a web application that relies on a graph database obtained by transforming a relational database. Graph4Med provides a straightforward visualization and analysis of a selected patient cohort. Our use case is a database of pediatric Acute Lymphoblastic Leukemia (ALL). Alongside routine patient health records, it also contains results from the latest technologies, such as NGS data. We developed a suitable graph data schema to convert the relational data into a graph data structure and store it in Neo4j. We used NeoDash to build a dashboard for querying and displaying patient cohort analyses. In this way our tool (1) quickly displays an overview of patient cohort information such as distributions of gender, age, mutations (fusions) and diagnoses; (2) provides mutation (fusion) based similarity search displayed in a navigable graph; (3) generates an interactive graph of any selected patient and facilitates the identification of interesting patterns among patients.
Conclusion: We demonstrate the feasibility and advantages of a graph database for storing and querying medical databases. Our dashboard allows fast and interactive analysis and visualization of complex medical data. It is especially useful for patient similarity search based on mutations (fusions), of which vast amounts of data have been generated by NGS in recent years. It can reveal relationships and patterns in patient cohorts that are normally hard to grasp. Expanding Graph4Med to more medical databases will bring novel insights into diagnostics and research.
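The fusion-based similarity search described above can be sketched on a minimal in-memory stand-in for the graph database; the schema and example data below are illustrative, not Graph4Med's actual Neo4j model or patient data:

```python
from collections import defaultdict

# Minimal in-memory sketch of a Patient-[:HAS_FUSION]->Fusion graph;
# labels, IDs and fusions are illustrative only.
edges = [
    ("P1", "ETV6-RUNX1"),
    ("P2", "ETV6-RUNX1"),
    ("P2", "BCR-ABL1"),
    ("P3", "BCR-ABL1"),
]

fusions_of = defaultdict(set)
patients_with = defaultdict(set)
for patient, fusion in edges:
    fusions_of[patient].add(fusion)
    patients_with[fusion].add(patient)

def similar_patients(patient_id):
    """All patients sharing at least one fusion with patient_id,
    i.e. a two-hop traversal Patient -> Fusion -> Patient."""
    return sorted({other
                   for fusion in fusions_of[patient_id]
                   for other in patients_with[fusion]} - {patient_id})

print(similar_patients("P2"))  # prints: ['P1', 'P3']
```

In a graph database this lookup is a single two-hop traversal, whereas the equivalent relational query would join the patient and mutation tables twice, which is the efficiency argument made in the abstract.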
Monitoring woody cover by remote sensing is considered a key methodology towards sustainable management of trees in dryland forests. However, while modern very high resolution satellite (VHRS) sensors allow woodland mapping at the individual tree level, the historical perspective is often hindered by lack of appropriate image data. In this first study employing the newly accessible historical HEXAGON KH-9 stereo-panoramic camera images for environmental research, we propose their use for mapping trees in open-canopy conditions. The 2–4 feet resolution panchromatic HEXAGON satellite photographs were taken 1971–1986 within the American reconnaissance programs that are better known to the scientific community for their lower-resolution CORONA images. Our aim is to evaluate the potential of combining historical CORONA and HEXAGON with recent WorldView VHRS imagery for retrospective woodland change mapping on the tree level. We mapped all trees on 30 1-ha test sites in open-canopy argan woodlands in Morocco in the field and from the VHRS imagery for estimating changes of tree density and size between 1967/1972 and 2018. Prior to image interpretation, we used simulations based on unmanned aerial system (UAS) imagery for exemplarily examining the role of illumination, viewing geometry and image resolution on the appearance of trees and their shadows in the historical panchromatic images. We show that understanding these parameters is imperative for correct detection and size-estimation of tree crowns. Our results confirm that tree maps derived solely from VHRS image analysis generally underestimate the number of small trees and trees in clumped-canopy groups. Nevertheless, HEXAGON images compare remarkably well with WorldView images and have much higher tree-mapping potential than CORONA. By classifying the trees in three sizes, we were able to measure tree-cover changes on an ordinal scale. 
Although we found no clear trend of forest degradation or recovery, our argan forest sites show varying patterns of change, which are further analysed in Part B of our study. We conclude that the HEXAGON stereo-panoramic camera images, of which 670,000 worldwide will soon be available, open exciting opportunities for retrospective monitoring of trees in open-canopy conditions and other woody vegetation patterns back into the 1980s and 1970s.
Climate forecasts show that in many regions the temporal distribution of precipitation events will become less predictable. Root traits may play key roles in dealing with changes in precipitation predictability, but their functionally plastic responses, including transgenerational processes, are scarcely known. We investigated root trait plasticity of Papaver rhoeas with respect to higher versus lower intra-seasonal and inter-seasonal precipitation predictability (i.e., the degree of temporal autocorrelation among precipitation events) during a four-year outdoor multi-generation experiment. We first tested how the simulated predictability regimes affected intra-generational plasticity of root traits and allocation strategies of the ancestors, and investigated the selective forces acting on them. Second, we exposed three descendant generations to the same predictability regime experienced by their mothers or to a different one. We then investigated whether high inter-generational predictability causes root trait differentiation, whether transgenerational root plasticity existed and whether it was affected by the different predictability treatments. We found that the number of secondary roots, root biomass and root allocation strategies of ancestors were affected by changes in precipitation predictability, in line with intra-generational plasticity. Lower predictability induced a root response, possibly reflecting a fast-acquisitive strategy that increases water absorption from shallow soil layers. Ancestors' root traits were generally under selection, and the predictability treatments affected neither the strength nor the direction of selection. Transgenerational effects were detected in root biomass and root weight ratio (RWR). In the presence of lower predictability, descendants significantly reduced RWR compared to ancestors, leading to an increase in performance. This points to a change in root allocation that maintains or increases the descendants' fitness.
Moreover, transgenerational plasticity existed in maximum rooting depth and root biomass, and the less predictable treatment promoted the lowest coefficient of variation among descendants' treatments in five out of six root traits. This shows that the level of maternal predictability determines the variation in the descendants' responses, and suggests that lower phenotypic plasticity evolves in less predictable environments. Overall, our findings show that roots are plastic functional traits that respond rapidly to differences in precipitation predictability, and that the plasticity and adaptation of root traits may crucially determine how climate change will affect plants.
Ten years of sub\urban are a reason to celebrate. Thanks to sub\urban, critical interdisciplinary urban research in German has a place where we can discuss and theorize the manifold processes that shape cities at all spatial scales. What is no reason to celebrate, however, is that many of these processes contribute to our living in conditions "in which man is a debased, enslaved, forsaken, despicable being" (Marx 1976: 385). It still holds that radical critique is needed to "overthrow" these conditions (ibid.). And it still requires an understanding of capitalism in its concrete manifestations and in its entanglement with changing forms of domination: patriarchy, racism and nationalism, hostility towards homosexual, queer and trans people, and all the other forms of hierarchizing exclusion that make life hell for so many people (Arruzza/Bhattacharya/Fraser 2020; Brown 2018; Federici 2012; Harvey 2017). Radical critique questions these ruling conditions, which change over time and differ between places; it thereby works to enlighten us about them in order to change them in an emancipatory way, indeed to overcome them.
In recent decades, a rapid range expansion of the golden jackal (Canis aureus) towards Northern and Western Europe has been observed. The golden jackal is a medium-sized canid with a broad and flexible diet. Almost 200 different parasite species have been reported worldwide from C. aureus, including many parasites that are shared with dogs and cats and parasite species of public health concern. As parasites may follow the range shifts of their host, the range expansion of the golden jackal could be accompanied by changes in the parasite fauna in the new ecosystems. In the new distribution area, the golden jackal could affect ecosystem equilibrium, e.g., through changed competition situations or predation pressure. In a niche modeling approach, we project the future climatic habitat suitability of the golden jackal in Europe to assess whether climatic changes promote range expansion. We use an ensemble forecast based on six presence-absence algorithms to estimate the climatic suitability for C. aureus for different time periods up to the year 2100, considering different IPCC scenarios of future development. As predictor variables, we used six bioclimatic variables provided by WorldClim. Our results clearly indicate that areas with climatic conditions analogous to those of the current core distribution area of the golden jackal in Europe will strongly expand towards the north and the west in future decades. Thus, the observed range expansion may be favored by climate change. The occurrence of stable populations can be expected in Central Europe. With regard to biodiversity and public health concerns, the population and range dynamics of the golden jackal should be surveyed. Correlative niche models provide a useful and frequently applied tool for this purpose. The results can help to make monitoring more efficient by identifying areas with suitable habitat and thus a higher probability of occurrence.
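The ensemble forecast described above can be sketched as committee averaging over the individual model outputs. Everything below is illustrative: the six binary prediction vectors stand in for the six presence-absence algorithms, and the grid-cell values are made up.

```python
# Sketch of "committee averaging" over an ensemble of presence-absence models.
# The six prediction vectors below are made-up placeholders, not the study's
# actual model outputs.
def ensemble_suitability(predictions):
    """Fraction of models predicting presence (1) for each grid cell."""
    n_models = len(predictions)
    n_cells = len(predictions[0])
    return [sum(p[i] for p in predictions) / n_models for i in range(n_cells)]

# Six binary presence-absence predictions for four hypothetical grid cells
preds = [
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]
print(ensemble_suitability(preds))  # cell 3: all six models agree (1.0)
```

Cells with high committee agreement would then be flagged as climatically suitable habitat, which is how such maps can guide monitoring effort.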
Background: The assessment of therapeutic adherence and competence is essential to understand mechanisms that contribute to treatment outcome. Nevertheless, their assessment is often neglected in psychotherapy research.
Aims/Objective: To develop an adherence and a treatment-specific competence rating scale for Dialectical Behaviour Therapy for Posttraumatic Stress Disorder (DBT-PTSD), and to examine their psychometric properties. Global cognitive behavioural therapeutic competence and disorder-specific therapeutic competence were assessed using existing scales to confirm their psychometric properties in our sample of patients with PTSD and emotion regulation difficulties.
Method: Two rating scales were developed using an inductive procedure. 155 videotaped therapy sessions from a multicenter randomised controlled trial were rated by trained raters using these scales; 40 randomly chosen videotapes, involving eleven therapists and fourteen patients, were rated independently by two raters.
Results: Both the adherence scale (patient-level ICC = .98; αs = .65; αp = .75) and the treatment-specific competence scale (patient-level ICC = .98; αs = .78; αp = .82) for DBT-PTSD showed excellent interrater reliability and good reliability at the patient level. Content validity, including the relevance and appropriateness of all items, was confirmed by experts in DBT-PTSD for the new treatment-specific competence scale.
Conclusion: Our results indicate that both scales are reliable instruments. They will be useful to examine possible effects of adherence and treatment-specific competence on DBT-PTSD treatment outcome.
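As a rough illustration of the interrater statistic reported above, the sketch below computes ICC(2,1) (two-way random effects, absolute agreement, single rater, after Shrout & Fleiss) for a toy rating matrix. The exact ICC variant and the data used in the study are not specified here, so both are assumptions for illustration only.

```python
def icc2_1(x):
    """Intraclass correlation ICC(2,1): two-way random effects, absolute
    agreement, single rater (Shrout & Fleiss). Rows are subjects (here:
    videotapes), columns are raters. Example data are made up."""
    n, k = len(x), len(x[0])
    m = sum(v for row in x for v in row) / (n * k)
    row_means = [sum(row) / k for row in x]
    col_means = [sum(row[j] for row in x) / n for j in range(k)]
    ss_total = sum((v - m) ** 2 for row in x for v in row)
    ssr = k * sum((r - m) ** 2 for r in row_means)   # between-subjects
    ssc = n * sum((c - m) ** 2 for c in col_means)   # between-raters
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = (ss_total - ssr - ssc) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement over three tapes yield an ICC of 1.0
print(icc2_1([[1, 1], [2, 2], [3, 3]]))
```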
Measurement of inclusive charged-particle jet production in Au+Au collisions at √sNN = 200 GeV
(2021)
The STAR Collaboration at the Relativistic Heavy Ion Collider reports the first measurement of inclusive jet production in peripheral and central Au+Au collisions at √sNN = 200 GeV. Jets are reconstructed with the anti-kT algorithm using charged tracks with pseudorapidity |η| < 1.0 and transverse momentum 0.2 < p^ch_T,jet < 30 GeV/c, with jet resolution parameter R = 0.2, 0.3, and 0.4. The large background yield uncorrelated with the jet signal is observed to be dominated by statistical phase space, consistent with a previous coincidence measurement. This background is suppressed by requiring a high-transverse-momentum (high-pT) leading hadron in accepted jet candidates. The bias imposed by this requirement is assessed, and the pT region in which the bias is small is identified. Inclusive charged-particle jet distributions are reported in peripheral and central Au+Au collisions for 5 < p^ch_T,jet < 25 GeV/c and 5 < p^ch_T,jet < 30 GeV/c, respectively. The charged-particle jet inclusive yield is suppressed for central Au+Au collisions, compared to both the peripheral Au+Au yield from this measurement and to the pp yield calculated using the PYTHIA event generator. The magnitude of the suppression is consistent with that of inclusive hadron production at high pT, and that of semi-inclusive recoil jet yield when expressed in terms of energy loss due to medium-induced energy transport. Comparison of inclusive charged-particle jet yields for different values of R exhibits no significant evidence for medium-induced broadening of the transverse jet profile for R < 0.4 in central Au+Au collisions. The measured distributions are consistent with theoretical model calculations that incorporate jet quenching.
Measurement of inclusive charged-particle jet production in Au + Au collisions at √sNN=200 GeV
(2020)
The STAR Collaboration at the Relativistic Heavy Ion Collider reports the first measurement of inclusive jet production in peripheral and central Au+Au collisions at √sNN = 200 GeV. Jets are reconstructed with the anti-kT algorithm using charged tracks with pseudorapidity |η| < 1.0 and transverse momentum 0.2 < p^ch_T,jet < 30 GeV/c, with jet resolution parameter R = 0.2, 0.3, and 0.4. The large background yield uncorrelated with the jet signal is observed to be dominated by statistical phase space, consistent with a previous coincidence measurement. This background is suppressed by requiring a high-transverse-momentum (high-pT) leading hadron in accepted jet candidates. The bias imposed by this requirement is assessed, and the pT region in which the bias is small is identified. Inclusive charged-particle jet distributions are reported in peripheral and central Au+Au collisions for 5 < p^ch_T,jet < 25 GeV/c and 5 < p^ch_T,jet < 30 GeV/c, respectively. The charged-particle jet inclusive yield is suppressed for central Au+Au collisions, compared to both the peripheral Au+Au yield from this measurement and to the pp yield calculated using the PYTHIA event generator. The magnitude of the suppression is consistent with that of inclusive hadron production at high pT and that of semi-inclusive recoil jet yield when expressed in terms of energy loss due to medium-induced energy transport. Comparison of inclusive charged-particle jet yields for different values of R exhibits no significant evidence for medium-induced broadening of the transverse jet profile for R < 0.4 in central Au+Au collisions. The measured distributions are consistent with theoretical model calculations that incorporate jet quenching.
The main focus of this thesis is the application of the nonperturbative Functional Renormalization Group (FRG) to the study of low-energy effective models for Quantum Chromodynamics (QCD). The study of effective field theories and models is crucial for our understanding of physics, especially when we deal with fundamental interaction theories like QCD. In particular, the ultimate goal is the understanding of the critical properties of these models, so that we can gain insight into the actual critical phenomena of QCD, with a special focus on its chiral phase transition. The FRG method was chosen because it belongs to the class of functional non-perturbative methods and also has the advantage of linking physics at different energy scales. These features make the FRG perfectly compatible with the task of studying non-perturbative phenomena and in particular phase transitions, like the ones expected for strongly interacting matter. However, the functional nature of the FRG approach and of the Wetterich equation has the consequence that its exact resolution is hardly possible, and an ansatz for the effective action is generally needed. In this work we choose to adopt the local-potential approximation (LPA), which prescribes stopping at zeroth order in the expansion of the quantum effective action in derivative operators, including only the quantum effective potential. We exploited the key observation that the FRG flow equation can be cast, for specific models and truncation schemes, in the form of an advection-diffusion equation, possibly with a source term. This type of equation belongs to the class of problems faced in the context of viscous hydrodynamics. Therefore, an innovative approach to the solution of the FRG flow equation consists in choosing a method developed specifically for this class of hydrodynamic equations. In particular, the Kurganov-Tadmor finite-volume scheme is adopted.
Throughout this work we apply this scheme to the study of different physical systems, showing the reliability and the flexibility of this approach.
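To give a concrete flavor of the hydrodynamic approach, the sketch below advances a 1D advection-diffusion equation by one conservative finite-volume step using a first-order local Lax-Friedrichs flux, the building block of the Kurganov-Tadmor central scheme. It omits the MUSCL reconstruction and the fermionic source term that appear in the actual FRG flows, so it is only a schematic illustration, not the thesis's solver.

```python
def step(u, a, D, dx, dt):
    """One explicit finite-volume step for u_t + a*u_x = D*u_xx on a periodic
    grid, using a first-order local Lax-Friedrichs interface flux (the
    Kurganov-Tadmor scheme without its MUSCL reconstruction)."""
    n = len(u)
    flux = []
    for i in range(n):
        ul, ur = u[i], u[(i + 1) % n]
        adv = 0.5 * a * (ul + ur) - 0.5 * abs(a) * (ur - ul)  # upwinded advection
        dif = -D * (ur - ul) / dx                              # central diffusion
        flux.append(adv + dif)
    # Conservative update: cell value changes by the flux difference
    return [u[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

u = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
u = step(u, 1.0, 0.1, 1.0, 0.2)
print(sum(u))  # the conservative update preserves the total "mass"
```

Because each interface flux enters two neighbouring cells with opposite sign, the update conserves the integral of u exactly, which is the property that makes finite-volume schemes attractive for flow equations in conservative form.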
In the first part of the thesis, we discuss the well-known O(N) model, using the hydrodynamic formulation to solve the FRG flow equation in the LPA truncation. We focus on the critical behaviour of the system and calculate the corresponding critical exponents. Particular attention is given to the error estimation in the extraction of critical exponents, a necessary but not widely explored aspect. The results are compatible with others in the literature, obtained with different perturbative and nonperturbative methods, which validates the procedure. In the second part of the thesis, we introduce the quark-meson model as a low-energy effective model for QCD, with a specific focus on its chiral symmetry-breaking pattern and the subsequent dynamical quark-mass generation. The LPA flow equation is of the advection-diffusion type, with an extra source contribution due to the inclusion of fermionic degrees of freedom. We thus use the developed numerical techniques to derive the phase diagram of the model, which agrees with the one obtained with other techniques in the literature.
We also follow another possible route to the critical properties of the quark-meson model: so-called thermodynamic geometry. This approach interprets the parameter space of the system as a differential manifold, from which relevant information about phase transitions can be obtained via the Ricci scalar. We studied the chiral crossover by investigating the behavior of the Ricci scalar up to the critical point, finding a peak in the presence of the crossover. We then repeated this analysis in the chiral limit, where the phase transition is expected to be of second order. This geometric technique offers a different view of the chiral phase transition of QCD, since it is based on quantities that are influenced by higher-order moments of the thermodynamic potential, allowing for a more comprehensive analysis of the phase transition.
Finally, we exploit the numerical advancements to address the issue of the regulator choice in FRG calculations. This is one of the most delicate issues that arise when using approximations to solve the FRG flow equation, and it deserves extensive investigation. In particular, we performed a vacuum parameter study and used the RG consistency requirement to determine the impact of the choice of the regulator on the physical observables and on the phase diagram of the model. Through this study we develop a systematic method to compare the results obtained with different regulators. We show the importance of choosing an appropriate UV cutoff for the determination of UV-independent IR observables and, consequently, the impact that the truncation of the effective average action and the choice of the regulator have on the latter.
The STAR Collaboration reports measurements of the transverse single-spin asymmetry (TSSA) of inclusive π0 at center-of-mass energies (√s) of 200 GeV and 500 GeV in transversely polarized proton-proton collisions in the pseudorapidity region 2.7 to 4.0. The results at the two different energies show a continuous increase of the TSSA with Feynman-x, and, when compared to previous measurements, no dependence on √s from 19.4 GeV to 500 GeV is found. To investigate the underlying physics leading to this large TSSA, different topologies have been studied. π0 with no nearby particles tend to have a higher TSSA than inclusive π0. The TSSA for inclusive electromagnetic jets, sensitive to the Sivers effect in the initial state, is substantially smaller, but shows the same behavior as the inclusive π0 asymmetry as a function of Feynman-x. To investigate final-state effects, the Collins asymmetry of π0 inside electromagnetic jets has been measured. The Collins asymmetry is analyzed for its dependence on the π0 momentum transverse to the jet thrust axis and on the fraction of jet energy carried by the π0. The asymmetry was found to be small in each case for both center-of-mass energies. All the measurements are compared to QCD-based theoretical calculations for transverse-momentum-dependent parton distribution functions and fragmentation functions. Some discrepancies are found, indicating that new mechanisms might be involved.
We report a new measurement of transverse single-spin asymmetries for dijet production in collisions of polarized protons at √s = 200 GeV. Correlations between the proton spin and the transverse momenta of its partons, each perpendicular to the proton momentum direction, are probed at high Q^2 ≈ 160 GeV^2. The associated Sivers observable ⟨kT⟩, the average parton transverse momentum, is extracted using simple kinematics. Nonzero Sivers effects are observed for the first time in dijets from proton-proton collisions, but only when the jets are sorted by their net charge, which enhances the u- or d-quark contributions to separate data samples. This also enables a simple kinematic approach for determination of the individual partonic contributions to the observed asymmetries.
We report a new measurement of transverse single-spin asymmetries for dijet production in collisions of polarized protons at √s = 200 GeV. Possible correlations between the proton spin and the transverse momenta of its partons, mutually orthogonal, with each perpendicular to the proton momentum direction, are probed at high Q^2 ≈ 160 GeV^2. The associated Sivers observable ⟨kT⟩, the average parton transverse momentum, is extracted using simple kinematics. Nonzero Sivers effects are observed for the first time in proton-proton collisions, but only when the jets are sorted by their net charge, which enhances the u- or d-quark contributions to separate data samples. This also enables a determination of the individual partonic contributions to the observed asymmetries.
Given the threat posed by climate change, measures to reduce greenhouse gas emissions are urgently needed. Although the health sector accounts for 5 to 10% of national greenhouse gas emissions, sustainability has so far played only a minor role in German hospitals. Studies in recent years have shown that using reusable items instead of disposable items in anaesthesiology can offer an advantage in terms of CO2 emissions. At the same time, hospitals face the challenge of operating cost-efficiently. This doctoral thesis therefore determines the per-use CO2 emissions and costs of disposable breathing-circuit systems, disposable ventilation masks and disposable laryngoscope blades in the central operating theatre of University Hospital Frankfurt, and compares them with the per-use costs and CO2 emissions of the corresponding reusable alternatives. From this, a recommendation for the future use of reusable or disposable material is derived, contributing to sustainability in anaesthesiology.
Methodologically, a descriptive study was conducted. The data were collected from information provided by the product manufacturers, Host Energie GmbH, the purchasing department and the staff of the materials reprocessing unit of University Hospital Frankfurt, as well as through our own measurements. Costs per use were calculated from the actual purchase prices in 2022 for the disposable materials and from suppliers' quotations for the reusable products. Disposal costs were added on a weight basis. For the reusable items, reprocessing costs were also taken into account. CO2 emissions were calculated using the DEFRA conversion factors from the United Kingdom, which, given a known product weight, allow an approximate estimate of the greenhouse gas emissions of material production, disposal and reprocessing. No patient data were used, so neither an ethics vote nor a data protection vote was required.
The results show that all reusable items examined are cheaper per use than the equivalent disposable items, with the price difference being largest for the laryngoscope blades: the disposable blade costs EUR 2.66 per use, the reusable alternative EUR 0.12 per use. At the same time, the greenhouse gas emissions per use are lower for all reusable items examined than for the corresponding disposable items. Here, too, the difference is largest for the laryngoscope blades: a size-4 disposable laryngoscope blade generates 0.228 kg of CO2 equivalent per use, whereas the reusable alternative causes only 0.093 kg of CO2 equivalent.
In conclusion, using reusable breathing-circuit systems, reusable ventilation masks and reusable laryngoscope blades offers University Hospital Frankfurt both an economic and an ecological advantage over using the disposable items. Switching to reusable items in anaesthesiology thus has the potential not only to save costs but also to reduce the carbon footprint of the health sector.
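The reported per-use figures for laryngoscope blades translate directly into annual savings once a usage volume is assumed. The sketch below uses the per-use cost and CO2-equivalent figures from the abstract; the annual number of uses is a made-up placeholder for illustration.

```python
# Per-use figures for size-4 laryngoscope blades, as reported in the abstract
single_use = {"cost_eur": 2.66, "co2_kg": 0.228}
reusable = {"cost_eur": 0.12, "co2_kg": 0.093}

uses_per_year = 10_000  # hypothetical annual usage, for illustration only
cost_saving = (single_use["cost_eur"] - reusable["cost_eur"]) * uses_per_year
co2_saving = (single_use["co2_kg"] - reusable["co2_kg"]) * uses_per_year

print(f"Savings per year: {cost_saving:.2f} EUR, {co2_saving:.1f} kg CO2e")
```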
In response to pathogen infection, gasdermin (GSDM) proteins form membrane pores that induce a host cell death process called pyroptosis [1–3]. Studies of human and mouse GSDM pores reveal the functions and architectures of assemblies of 24–33 protomers [4–9], but the mechanism and evolutionary origin of membrane targeting and GSDM pore formation remain unknown. Here we determine a structure of a bacterial GSDM (bGSDM) pore and define a conserved mechanism of pore assembly. Engineering a panel of bGSDMs for site-specific proteolytic activation, we demonstrate that diverse bGSDMs form distinct pore sizes that range from smaller mammalian-like assemblies to exceptionally large pores containing >50 protomers. We determine a 3.3 Å cryo-EM structure of a Vitiosangium bGSDM in an active slinky-like oligomeric conformation and analyze bGSDM pores in a native lipid environment to create an atomic-level model of a full 52-mer bGSDM pore. Combining our structural analysis with molecular dynamics simulations and cellular assays, we define a stepwise model of GSDM pore assembly and demonstrate that pore formation is driven by local unfolding of membrane-spanning β-strand regions and pre-insertion of a covalently bound palmitoyl into the target membrane. These results yield insights into the diversity of GSDM pores found in nature and the function of an ancient post-translational modification in enabling a programmed host cell death process.
In response to pathogen infection, gasdermin (GSDM) proteins form membrane pores that induce a host cell death process called pyroptosis [1–3]. Studies of human and mouse GSDM pores reveal the functions and architectures of assemblies of 24–33 protomers [4–9], but the mechanism and evolutionary origin of membrane targeting and GSDM pore formation remain unknown. Here we determine a structure of a bacterial GSDM (bGSDM) pore and define a conserved mechanism of pore assembly. Engineering a panel of bGSDMs for site-specific proteolytic activation, we demonstrate that diverse bGSDMs form distinct pore sizes that range from smaller mammalian-like assemblies to exceptionally large pores containing >50 protomers. We determine a 3.3 Å cryo-EM structure of a Vitiosangium bGSDM in an active slinky-like oligomeric conformation and analyze bGSDM pores in a native lipid environment to create an atomic-level model of a full 52-mer bGSDM pore. Combining our structural analysis with molecular dynamics simulations and cellular assays, our results support a stepwise model of GSDM pore assembly and suggest that a covalently bound palmitoyl can leave a hydrophobic sheath and insert into the membrane before formation of the membrane-spanning β-strand regions. These results reveal the diversity of GSDM pores found in nature and explain the function of an ancient post-translational modification in enabling programmed host cell death.
Background: Alternative splicing is a key mechanism in eukaryotic cells to increase the effective number of functionally distinct gene products. Using bulk RNA sequencing, splicing variation has been studied both across human tissues and in genetically diverse individuals. This has identified disease-relevant splicing events, as well as associations between splicing and genomic variations, including sequence composition and conservation. However, variability in splicing between single cells from the same tissue and its determinants remain poorly understood.
Results: We applied parallel DNA methylation and transcriptome sequencing to differentiating human induced pluripotent stem cells to characterize splicing variation (exon skipping) and its determinants. Our results show that splicing rates in single cells can be accurately predicted based on sequence composition and other genomic features. We also identified a moderate but significant contribution from DNA methylation to splicing variation across cells. By combining sequence information and DNA methylation, we derived an accurate model (AUC = 0.85) for predicting different splicing modes of individual cassette exons. These explain conventional inclusion and exclusion patterns, but also more subtle modes of cell-to-cell variation in splicing. Finally, we identified and characterized associations between DNA methylation and splicing changes during cell differentiation.
Conclusions: Our study yields new insights into alternative splicing at the single-cell level and reveals a previously underappreciated effect of DNA methylation variation on splicing.
Background: Alternative splicing is a key regulatory mechanism in eukaryotic cells and increases the effective number of functionally distinct gene products. Using bulk RNA sequencing, splicing variation has been studied across human tissues and in genetically diverse populations. This has identified disease-relevant splicing events, as well as associations between splicing and genomic variations, including sequence composition and conservation. However, variability in splicing between single cells from the same tissue or cell type and its determinants remain poorly understood.
Results: We applied parallel DNA methylation and transcriptome sequencing to differentiating human induced pluripotent stem cells to characterize splicing variation (exon skipping) and its determinants. Our results show that variation in single-cell splicing can be accurately predicted based on local sequence composition and genomic features. We observe moderate but consistent contributions from local DNA methylation profiles to splicing variation across cells. A combined model built on sequence as well as DNA methylation information accurately predicts different splicing modes of individual cassette exons (AUC = 0.85). These categories include the conventional inclusion and exclusion patterns, but also more subtle modes of cell-to-cell variation in splicing. Finally, we identified and characterized associations between DNA methylation and splicing changes during cell differentiation.
Conclusions: Our study yields new insights into alternative splicing at the single-cell level and reveals a previously underappreciated link between DNA methylation variation and splicing.
Background: Alternative splicing is a key regulatory mechanism in eukaryotic cells and increases the effective number of functionally distinct gene products. Using bulk RNA sequencing, splicing variation has been studied across human tissues and in genetically diverse populations. This has identified disease-relevant splicing events, as well as associations between splicing and genomic features, including sequence composition and conservation. However, variability in splicing between single cells from the same tissue or cell type and its determinants remain poorly understood.
Results: We applied parallel DNA methylation and transcriptome sequencing to differentiating human induced pluripotent stem cells to characterize splicing variation (exon skipping) and its determinants. Our results show that variation in single-cell splicing can be accurately predicted based on local sequence composition and genomic features. We observe moderate but consistent contributions from local DNA methylation profiles to splicing variation across cells. A combined model built on genomic features as well as DNA methylation information accurately predicts different splicing modes of individual cassette exons. These categories include the conventional inclusion and exclusion patterns, but also more subtle modes of cell-to-cell variation in splicing. Finally, we identified and characterized associations between DNA methylation and splicing changes during cell differentiation.
Conclusions: Our study yields new insights into alternative splicing at the single-cell level and reveals a previously underappreciated link between DNA methylation variation and splicing.
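The AUC reported for the combined splicing model summarizes how well predicted scores separate the splicing categories. As a self-contained illustration of how such a value is computed, the sketch below evaluates an AUC via the Mann-Whitney statistic; the labels and scores are made up and unrelated to the study's data.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive is scored higher than a
    randomly chosen negative (ties count as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up binary labels and model scores
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```

An AUC of 0.85, as reported above, would mean the model ranks a random positive exon above a random negative one 85% of the time.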