Non-forest ecosystems, dominated by shrubs, grasses and herbaceous plants, provide ecosystem services including carbon sequestration and forage for grazing, yet are highly sensitive to climatic changes. Despite this, these ecosystems are poorly represented in remotely sensed biomass products and are undersampled by in-situ monitoring. Current global change threats emphasise the need for new tools to capture biomass change in non-forest ecosystems at appropriate scales. Here we assess whether canopy height inferred from drone photogrammetry allows the estimation of aboveground biomass (AGB) across low-stature plant species sampled through a global site network. We found that mean canopy height is strongly predictive of AGB across species, demonstrating that standardised photogrammetric approaches are generalisable across growth forms and environmental settings. Biomass per unit of height was similar within, but different among, plant functional types. We find that drone-based photogrammetry allows monitoring of AGB across large spatial extents and can advance understanding of understudied and vulnerable non-forested ecosystems across the globe.
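As a hedged illustration of the height-to-biomass calibration described above (all numbers here are synthetic and invented, not values from the study), such a relationship can be sketched as an ordinary least-squares fit of plot-level AGB on mean canopy height:

```python
import numpy as np

# Synthetic illustration only: plot-level mean canopy height (m) and
# aboveground biomass AGB (g/m^2); the 800 g/m^2-per-m slope is invented.
rng = np.random.default_rng(42)
height = rng.uniform(0.05, 1.5, 60)           # drone-derived canopy heights
agb = 800.0 * height + rng.normal(0, 30, 60)  # biomass with measurement noise

# Ordinary least-squares fit: AGB ~ slope * height + intercept
slope, intercept = np.polyfit(height, agb, 1)
pred = slope * height + intercept
r2 = 1.0 - ((agb - pred) ** 2).sum() / ((agb - agb.mean()) ** 2).sum()
print(f"slope = {slope:.0f} g/m^2 per m, R^2 = {r2:.2f}")
```

With noise small relative to the height signal, the fit recovers the assumed slope and a high R², mirroring the "strongly predictive" relationship the abstract reports.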
Surface temperature is a fundamental parameter of Earth’s climate. Its evolution through time is commonly reconstructed using the oxygen isotope and clumped isotope compositions of carbonate archives. However, reaction kinetics involved in the precipitation of carbonates can introduce inaccuracies in the derived temperatures. Here, we show that dual clumped isotope analyses, i.e., simultaneous ∆47 and ∆48 measurements on a single carbonate phase, can identify the origin and quantify the extent of these kinetic biases. Our results verify theoretical predictions and show that the isotopic disequilibrium commonly observed in speleothems and scleractinian coral skeletons is inherited from the dissolved inorganic carbon pool of their parent solutions. Further, we show that dual clumped isotope thermometry can achieve reliable palaeotemperature reconstructions devoid of kinetic bias. Analysis of a belemnite rostrum implies that it precipitated near isotopic equilibrium and confirms warmer-than-present temperatures at southern high latitudes during the Early Cretaceous.
Marie Holzman : Manuskripte
(2020)
Dielectrons are an excellent probe for the QCD matter created in ultra-relativistic heavy-ion collisions, since they are emitted during the whole evolution of the collision and do not interact strongly with the medium. To isolate the QGP signals, measurements of dielectron production in vacuum and of its modification due to the presence of cold nuclear matter are necessary. We present and discuss results from a low-magnetic-field detector setup in proton-proton collisions at √s = 13 TeV, as well as the measurement of dielectron production in pp, p-Pb, and Pb-Pb collisions at √sNN = 5 TeV.
Heavy quarks are useful probes to investigate the properties of the Quark-Gluon Plasma (QGP) produced in heavy-ion collisions at the LHC, since they are produced in initial hard scattering processes. To single out the signals that are characteristic of the QGP, it is nevertheless crucial to understand the primordial heavy-quark production in vacuum, and to disentangle hot from cold nuclear matter effects. Moreover, observations of collective effects in high-multiplicity pp and p-Pb collisions show surprising similarities with those in heavy-ion collisions. Heavy-flavour production in such collisions could give further insight into the underlying processes. The heavy-flavour production can be studied with e+e− pairs from correlated semileptonic decays of heavy-flavour hadrons. Compared to single heavy-flavour measurements, the dielectron yield contains information about the initial kinematical correlations between the charm and anti-charm quarks, which is otherwise not accessible, and is sensitive to soft heavy-flavour production. We report results on correlated e+e− pairs in pp collisions recorded by the ALICE detector at different collision energies. The production of heavy quarks is discussed by comparing the yield of dielectrons from heavy-flavour hadron decays as a function of invariant mass, pair transverse momentum and distance of closest approach to the primary vertex with different Monte Carlo event generators. The heavy-flavour production cross sections are also presented. Results from high-multiplicity pp collisions at √s=13 TeV and the status of the p-Pb analysis at √sNN=5.02 TeV are reported as well.
A comprehensive study of sillenite Bi12SiO20 single-crystal properties, including elastic stiffness and piezoelectric coefficients, dielectric permittivity, thermal expansion and molar heat capacity, is presented. Brillouin-interferometry measurements (up to 27 GPa), which were performed at high pressures for the first time, and ab initio calculations based on density functional theory (up to 50 GPa) show the stability of the sillenite structure in the investigated pressure range, in agreement with previous studies. Elastic stiffness coefficients c11 and c12 are found to increase continuously with pressure while c44 increases slightly for lower pressures and remains nearly constant above 15 GPa. Heat-capacity measurements were performed with a quasi-adiabatic calorimeter employing the relaxation method between 2 K and 395 K. No phase transition could be observed in this temperature interval. Standard molar entropy, enthalpy change and Debye temperature are extracted from the data. The results are found to be roughly half of the previous values reported in the literature. The discrepancy is attributed to the overestimation of the Debye temperature which was extracted from high-temperature data. Additionally, Debye temperatures obtained from mean sound velocities derived by Voigt-Reuss averaging are in agreement with our heat-capacity results. Finally, a complete set of electromechanical coefficients was deduced from the application of resonant ultrasound spectroscopy between 103 K and 733 K. No discontinuities in the temperature dependence of the coefficients are observed. High-temperature (up to 1100 K) resonant ultrasound spectra recorded for Bi12MO20 crystals revealed strong and reversible acoustic dissipation effects at 870 K, 960 K and 550 K for M = Si, Ge and Ti, respectively. Resonances with small contributions from the elastic shear stiffness c44 and the piezoelectric stress coefficient e123 are almost unaffected by this dissipation.
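For reference, a Debye temperature derived from a mean sound velocity, as mentioned above, follows the standard textbook relation (a generic formula, not necessarily the exact averaging scheme used in the study):

```latex
\Theta_D \;=\; \frac{\hbar\, v_m}{k_B}\left(6\pi^2 n\right)^{1/3}
```

where $v_m$ is the mean sound velocity (here obtainable from Voigt–Reuss averaging of the elastic constants) and $n$ is the number density of atoms, so stiffer, denser crystals yield higher $\Theta_D$.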
Using 2.93 fb−1 of 𝑒+𝑒− annihilation data collected at a center-of-mass energy √𝑠=3.773 GeV with the BESIII detector operating at the BEPCII collider, we search for the semileptonic 𝐷0(+) decays into a 𝑏1(1235)−(0) axial-vector meson for the first time. No significant signal is observed for either charge combination. The upper limits on the product branching fractions are ℬ(𝐷0→𝑏1(1235)−𝑒+𝜈𝑒)·ℬ(𝑏1(1235)−→𝜔𝜋−) < 1.12×10−4 and ℬ(𝐷+→𝑏1(1235)0𝑒+𝜈𝑒)·ℬ(𝑏1(1235)0→𝜔𝜋0) < 1.75×10−4 at the 90% confidence level.
Libra — a global virtual currency project initiated by Facebook — has been the subject of many controversial discussions since its announcement in June 2019. This paper provides a differentiated view on Libra, recognising that different development scenarios of Libra are conceivable. Libra could serve purely as an alternative payment system in combination with a dedicated payment token, the Libra coin. Alternatively, the Libra project could develop into a broader financial infrastructure for advanced financial services such as savings and loan products operating on the Libra Blockchain. Based on a comparison of the Libra architecture with other cryptocurrencies, the opportunities and challenges for the development of the respective Libra ecosystems are investigated from a commercial, regulatory and monetary policy perspective.
Mehr Nachhaltigkeit im deutschen Leitindex DAX : Reformvorschläge im Lichte des Wirecard-Skandals
(2020)
In the course of working through the Wirecard scandal, a change to the criteria for inclusion in Germany’s leading stock index, the DAX, is also under discussion. The measures envisaged so far by Deutsche Börse go in the right direction but do not reach far enough. A clear signal is needed that in future only companies that achieve at least a satisfactory level of sustainability in their business activities, measured by an ESG risk score (Environment, Social, Governance), can qualify for the DAX. A simulation shows that companies long viewed critically under ESG criteria would no longer belong to the DAX. In this way, more capital could flow into sustainably operating companies and sectors.
We report on the measurement of the Central Exclusive Production of charged particle pairs h+h− (h = π, K, p) with the STAR detector at RHIC in proton-proton collisions at √s = 200 GeV. The charged particle pairs produced in the reaction pp → p′ + h+h− + p′ are reconstructed from the tracks in the central detector and identified using the specific energy loss and the time of flight method, while the forward-scattered protons are measured in the Roman Pot system. Exclusivity of the event is guaranteed by requiring the transverse momentum balance of all four final-state particles. Differential cross sections are measured as functions of observables related to the central hadronic final state and to the forward-scattered protons. They are measured in a fiducial region corresponding to the acceptance of the STAR detector and determined by the central particles’ transverse momenta and pseudorapidities as well as by the forward-scattered protons’ momenta. This fiducial region roughly corresponds to the square of the four-momentum transfers at the proton vertices in the range 0.04 GeV2 < −t1, −t2 < 0.2 GeV2, invariant masses of the charged particle pairs up to a few GeV and pseudorapidities of the centrally-produced hadrons in the range |η| < 0.7. The measured cross sections are compared to phenomenological predictions based on the Double Pomeron Exchange (DPE) model. Structures observed in the mass spectra of π+π− and K+K− pairs are consistent with the DPE model, while angular distributions of pions suggest a dominant spin-0 contribution to π+π− production. For π+π− production, the fiducial cross section is extrapolated to the Lorentz-invariant region, which allows decomposition of the invariant mass spectrum into continuum and resonant contributions. The extrapolated cross section is well described by the continuum production and at least three resonances, the f0(980), f2(1270) and f0(1500), with a possible small contribution from the f0(1370). 
Fits to the extrapolated differential cross section as a function of t1 and t2 enable extraction of the exponential slope parameters in several bins of the invariant mass of π+π− pairs. These parameters are sensitive to the size of the interaction region.
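A minimal sketch (with assumed, illustrative numbers, not STAR data) of how such a slope parameter b can be extracted from dσ/dt ∝ exp(−b·|t|) via a log-linear fit:

```python
import numpy as np

# Assumed illustrative slope b (GeV^-2) over the quoted |t| range.
b_true = 5.0
t = np.linspace(0.04, 0.20, 9)        # |t| bin centers (GeV^2)
dsdt = 10.0 * np.exp(-b_true * t)     # ideal binned differential cross section

# ln(dsigma/dt) = ln(A) - b*|t| is linear in |t|, so fit a line in log space
slope, log_a = np.polyfit(t, np.log(dsdt), 1)
b_fit = -slope
print(f"b = {b_fit:.2f} GeV^-2")
```

On noise-free input the fit recovers the assumed slope exactly; with real binned data one would weight the fit by the per-bin uncertainties instead.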
Measurement of inclusive charged-particle jet production in Au + Au collisions at √sNN=200 GeV
(2020)
The STAR Collaboration at the Relativistic Heavy Ion Collider reports the first measurement of inclusive jet production in peripheral and central Au+Au collisions at √sNN = 200 GeV. Jets are reconstructed with the anti-kT algorithm using charged tracks with pseudorapidity |η| < 1.0 and transverse momentum 0.2 < p^ch_T,jet < 30 GeV/c, with jet resolution parameters R = 0.2, 0.3, and 0.4. The large background yield uncorrelated with the jet signal is observed to be dominated by statistical phase space, consistent with a previous coincidence measurement. This background is suppressed by requiring a high-transverse-momentum (high-pT) leading hadron in accepted jet candidates. The bias imposed by this requirement is assessed, and the pT region in which the bias is small is identified. Inclusive charged-particle jet distributions are reported in peripheral and central Au+Au collisions for 5 < p^ch_T,jet < 25 GeV/c and 5 < p^ch_T,jet < 30 GeV/c, respectively. The charged-particle jet inclusive yield is suppressed in central Au+Au collisions, compared both to the peripheral Au+Au yield from this measurement and to the pp yield calculated using the PYTHIA event generator. The magnitude of the suppression is consistent with that of inclusive hadron production at high pT and with that of the semi-inclusive recoil jet yield when expressed in terms of energy loss due to medium-induced energy transport. Comparison of inclusive charged-particle jet yields for different values of R exhibits no significant evidence for medium-induced broadening of the transverse jet profile for R < 0.4 in central Au+Au collisions. The measured distributions are consistent with theoretical model calculations that incorporate jet quenching.
Digital technologies facilitate the use of dynamic pricing, that is, prices that vary without notice for an essentially identical product. In public discussion, different forms of dynamic pricing are often conflated, which hampers a meaningful analysis of its advantages and disadvantages. The aim of this article is to present the economic foundations of dynamic pricing and to discuss and classify its possible forms. In addition, the advantages and disadvantages of dynamic pricing are assessed from the buyer’s and the seller’s perspective. Finally, implications for business research are discussed.
Marie Holzman, 1922–1941
(2020)
Marie Holzman, born on 22 April 1922 in Jena, was the elder daughter of Max Holzman (1889–1941), founder and owner of the bookselling and publishing house Pribačis, resident in Kaunas (Lithuania) from 1922/23, and of the painter and art educator Helene Czapski-Holzman (1891–1961), who came from Jena. After the German invasion of the Soviet Union, she was murdered in Kaunas on 29 October 1941. Her mother preserved two stories her daughter had translated from Lithuanian. The two manuscripts recently came to the German Exile Archive of the DNB in Frankfurt am Main.
The Born cross sections for the process e+e−→η′π+π− at center-of-mass energies between 2.00 and 3.08 GeV are reported with improved precision from an analysis of data samples collected with the BESIII detector operating at the BEPCII storage ring. An obvious structure is observed in the Born cross section line shape. Fitted as a Breit-Wigner resonance, it has a statistical significance of 6.3σ, a mass of M = (2108±46±25) MeV/c2, and a width of Γ = (138±36±30) MeV, where the uncertainties are statistical and systematic, respectively. These measured resonance parameters agree with the measurements of BABAR in e+e−→η′π+π− and of BESIII in e+e−→ωπ0 within two standard deviations.
We report a study of the processes e+e−→K+(D−sD∗0+D∗−sD0) based on e+e− annihilation samples collected with the BESIII detector operating at BEPCII at five center-of-mass energies ranging from 4.628 to 4.698 GeV with a total integrated luminosity of 3.7 fb−1. An excess over the known contributions of the conventional charmed mesons is observed near the D−sD∗0 and D∗−sD0 mass thresholds in the K+ recoil-mass spectrum for events collected at √s = 4.681 GeV. The structure matches a mass-dependent-width Breit-Wigner line shape, whose pole mass and width are determined as (3982.5+1.8−2.6±2.1) MeV/c2 and (12.8+5.3−4.4±3.0) MeV, respectively. The first uncertainties are statistical and the second are systematic. The significance of the resonance hypothesis is estimated to be 5.3σ over the pure contributions from the conventional charmed mesons. This is the first candidate of the charged hidden-charm tetraquark with strangeness, decaying into D−sD∗0 and D∗−sD0. However, the genuine properties of the excess need further exploration with more statistics.
Measurement of inclusive J/ψ polarization in p + p collisions at √s=200 GeV by the STAR experiment
(2020)
We report on new measurements of inclusive 𝐽/𝜓 polarization at midrapidity in 𝑝+𝑝 collisions at √𝑠=200 GeV by the STAR experiment at the Relativistic Heavy Ion Collider. The polarization parameters, 𝜆𝜃, 𝜆𝜙, and 𝜆𝜃𝜙, are measured as a function of transverse momentum (𝑝T) in both the helicity and Collins-Soper (CS) reference frames within 𝑝T<10 GeV/𝑐. Except for 𝜆𝜃 in the CS frame at the highest measured 𝑝T, all three polarization parameters are consistent with 0 in both reference frames without any strong 𝑝T dependence. Several model calculations are compared with data, and the one using the Color Glass Condensate effective field theory coupled with nonrelativistic QCD gives the best overall description of the experimental results, even though other models cannot be ruled out due to experimental uncertainties.
Ten hadronic final states of the ℎ𝑐 decays are investigated via the process 𝜓(3686)→𝜋0ℎ𝑐, using a data sample of (448.1±2.9)×106 𝜓(3686) events collected with the BESIII detector. The decay channel ℎ𝑐→𝐾+𝐾−𝜋+𝜋−𝜋0 is observed for the first time and has a measured significance of 6.0𝜎. The corresponding branching fraction is determined to be ℬ(ℎ𝑐→𝐾+𝐾−𝜋+𝜋−𝜋0)=(3.3±0.6±0.6)×10−3 (where the uncertainties are statistical and systematic, respectively). Evidence for the decays ℎ𝑐→𝜋+𝜋−𝜋0𝜂 and ℎ𝑐→𝐾0𝑆𝐾±𝜋∓𝜋+𝜋− is found with a significance of 3.6𝜎 and 3.8𝜎, respectively. The corresponding branching fractions (and upper limits) are obtained to be ℬ(ℎ𝑐→𝜋+𝜋−𝜋0𝜂)=(7.2±1.8±1.3)×10−3 (<1.8×10−2) and ℬ(ℎ𝑐→𝐾0𝑆𝐾±𝜋∓𝜋+𝜋−)=(2.8±0.9±0.5)×10−3 (<4.7×10−3). Upper limits on the branching fractions for the final states ℎ𝑐→𝐾+𝐾−𝜋0, 𝐾+𝐾−𝜂, 𝐾+𝐾−𝜋+𝜋−𝜂, 2(𝐾+𝐾−)𝜋0, 𝐾+𝐾−𝜋0𝜂, 𝐾0𝑆𝐾±𝜋∓, and 𝑝¯𝑝𝜋0𝜋0 are determined at a confidence level of 90%.
Using a dedicated data sample taken in 2018 on the J/ψ peak, we perform a detailed study of the trigger efficiencies of the BESIII detector. The efficiencies are determined from three representative physics processes, namely Bhabha scattering, dimuon production and generic hadronic events with charged particles. The combined efficiency of all active triggers approaches 100% in most cases, with uncertainties small enough not to affect most physics analyses.
Measurement of cross sections for e⁺e⁻ → μ⁺μ⁻ at center-of-mass energies from 3.80 to 4.60 GeV
(2020)
The observed cross sections for 𝑒+𝑒−→𝜇+𝜇− at energies from 3.8 to 4.6 GeV are measured using data samples taken with the BESIII detector operated at the BEPCII collider. We measure the muonic widths and determine the branching fractions of the charmonium states 𝜓(4040), 𝜓(4160), and 𝜓(4415) decaying to 𝜇+𝜇−, as well as making a first determination of the phase of the amplitudes. In addition, we observe evidence for a structure in the dimuon cross section near 4.220 GeV/𝑐2, which we denote as 𝑆(4220). Analyzing a coherent sum of amplitudes yields eight solutions, one of which gives a mass of M = 4216.7±8.9±4.1 MeV/𝑐2, a total width of Γ_tot = 47.2±22.8±10.5 MeV, and a muonic width of Γ_μμ = 1.53±1.26±0.54 keV, where the first uncertainties are statistical and the second systematic. Across the eight solutions the central values of the mass, total width, and muonic width range from 4212.8 to 4219.4 MeV/𝑐2, from 36.4 to 49.6 MeV, and from 1.09 to 1.53 keV, respectively. The statistical significance of the 𝑆(4220) signal is 3.9𝜎. Correcting the total dimuon cross section for radiative effects yields a statistical significance for this structure of 8.1𝜎.
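For context, resonance fits of this kind commonly use a relativistic Breit-Wigner cross section; one generic parameterization (not necessarily the exact amplitude model used in this analysis) for $e^+e^- \to R \to \mu^+\mu^-$ is:

```latex
\sigma_{\mathrm{BW}}(s) \;=\; \frac{12\pi\,\Gamma_{ee}\,\Gamma_{\mu\mu}}{\left(s - M^2\right)^2 + M^2\,\Gamma_{\mathrm{tot}}^2}
```

At the peak, $s = M^2$, this reduces to $12\pi\,\Gamma_{ee}\Gamma_{\mu\mu}/(M^2\Gamma_{\mathrm{tot}}^2)$, which is why fits to the dimuon cross section are directly sensitive to the muonic width.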
Cross sections of the process 𝑒+𝑒−→𝜋0𝜋0𝐽/𝜓 at center-of-mass energies between 3.808 and 4.600 GeV are measured with high precision by using 12.4 fb−1 of data samples collected with the BESIII detector operating at the BEPCII collider facility. A fit to the measured energy-dependent cross sections confirms the existence of the charmoniumlike state 𝑌(4220). The mass and width of the 𝑌(4220) are determined to be (4220.4±2.4±2.3) MeV/𝑐2 and (46.2±4.7±2.1) MeV, respectively, where the first uncertainties are statistical and the second systematic. The mass and width are consistent with those measured in the process 𝑒+𝑒−→𝜋+𝜋−𝐽/𝜓. The neutral charmonium-like state 𝑍𝑐(3900)0 is observed prominently in the 𝜋0𝐽/𝜓 invariant-mass spectrum, and, for the first time, an amplitude analysis is performed to study its properties. The spin-parity of 𝑍𝑐(3900)0 is determined to be 𝐽𝑃=1+, and the pole position is (3893.1±2.2±3.0)−𝑖(22.2±2.6±7.0) MeV/𝑐2, which is consistent with previous studies of electrically charged 𝑍𝑐(3900)±. In addition, cross sections of 𝑒+𝑒− → 𝜋0𝑍𝑐(3900)0 → 𝜋0𝜋0𝐽/𝜓 are extracted, and the corresponding line shape is found to agree with that of the 𝑌(4220).
Using 2.93 fb−1 of 𝑒+𝑒− collision data collected at a center-of-mass energy of 3.773 GeV with the BESIII detector, the first observation of the doubly Cabibbo-suppressed decay 𝐷+→𝐾+𝜋+𝜋−𝜋0 is reported. After removing decays that contain narrow intermediate resonances, including 𝐷+→𝐾+𝜂, 𝐷+→𝐾+𝜔, and 𝐷+→𝐾+𝜙, the branching fraction of the decay 𝐷+→𝐾+𝜋+𝜋−𝜋0 is measured to be (1.13±0.08stat±0.03syst)×10−3. The ratio of branching fractions of 𝐷+→𝐾+𝜋+𝜋−𝜋0 over 𝐷+→𝐾−𝜋+𝜋+𝜋0 is found to be (1.81±0.15)%, which corresponds to (6.28±0.52)tan4𝜃𝐶, where 𝜃𝐶 is the Cabibbo mixing angle. This ratio is significantly larger than the corresponding ratios for other doubly Cabibbo-suppressed decays. The asymmetry of the branching fractions of charge-conjugated decays 𝐷±→𝐾±𝜋±𝜋∓𝜋0 is also determined, and no evidence for 𝐶𝑃 violation is found. In addition, the first evidence for the 𝐷+→𝐾+𝜔 decay, with a statistical significance of 3.3𝜎, is presented and the branching fraction is measured to be ℬ(𝐷+→𝐾+𝜔) = (5.7+2.5−2.1stat±0.2syst)×10−5.
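As a quick consistency check of the quoted ratio (using the approximate value $\tan\theta_C \approx 0.231$, i.e. $\sin\theta_C \approx 0.225$, which is an assumption not quoted in the source):

```latex
\tan^4\theta_C \;\approx\; (0.231)^4 \;\approx\; 2.9\times 10^{-3},
\qquad
(6.28 \pm 0.52)\,\tan^4\theta_C \;\approx\; 1.8\%
```

consistent with the measured ratio of $(1.81\pm0.15)\%$ within rounding; for comparison, a "typical" doubly Cabibbo-suppressed decay is expected at roughly $\tan^4\theta_C \approx 0.3\%$ of its Cabibbo-favored counterpart.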
The process 𝑒+𝑒−→𝜙𝜂′ has been studied in detail for the first time using data samples collected with the BESIII detector at the BEPCII collider at center-of-mass energies from 2.05 to 3.08 GeV. A resonance with quantum numbers 𝐽𝑃𝐶=1−− is observed with mass 𝑀=(2177.5±4.8(stat)±19.5(syst)) MeV/𝑐2 and width Γ=(149.0±15.6(stat)±8.9(syst)) MeV, with a statistical significance larger than 10𝜎, including systematic uncertainties. If the observed structure is identified with the 𝜙(2170), the ratio of partial widths between the 𝜙𝜂′ measured by BESIII and the 𝜙𝜂 measured by BABAR is (ℬ(𝑅→𝜙𝜂)Γ(𝑅→𝑒+𝑒−))/(ℬ(𝑅→𝜙𝜂′)Γ(𝑅→𝑒+𝑒−)) = 0.23±0.10(stat)±0.18(syst), which is smaller than the prediction of the 𝑠¯𝑠𝑔 hybrid models by several orders of magnitude.
By analyzing a data sample corresponding to an integrated luminosity of 2.93 fb−1 collected at a center-of-mass energy of 3.773 GeV with the BESIII detector, we measure for the first time the absolute branching fraction of the 𝐷+→𝜂𝜇+𝜈𝜇 decay to be ℬ(𝐷+→𝜂𝜇+𝜈𝜇)=(10.4±1.0stat±0.5syst)×10−4. Using the world-averaged value of ℬ(𝐷+→𝜂𝑒+𝜈𝑒), the ratio of the two branching fractions is determined to be ℬ(𝐷+→𝜂𝜇+𝜈𝜇)/ℬ(𝐷+→𝜂𝑒+𝜈𝑒)=0.91±0.13(stat+syst), which agrees with the theoretical expectation of lepton flavor universality within uncertainty. By studying the differential decay rates in five four-momentum transfer intervals, we obtain the product of the hadronic form factor 𝑓𝜂+(0) and the 𝑐→𝑑 Cabibbo-Kobayashi-Maskawa matrix element |𝑉𝑐𝑑| to be 𝑓𝜂+(0)|𝑉𝑐𝑑|=0.087±0.008stat±0.002syst. Taking the input of |𝑉𝑐𝑑| from the global fit in the standard model, we determine 𝑓𝜂+(0)=0.39±0.04stat±0.01syst. On the other hand, using the value of 𝑓𝜂+(0) calculated in theory, we find |𝑉𝑐𝑑| = 0.242±0.022stat±0.006syst±0.033theory.
We report the first observation of the semimuonic decay 𝐷+→𝜔𝜇+𝜈𝜇 using an 𝑒+𝑒− collision data sample corresponding to an integrated luminosity of 2.93 fb−1 collected with the BESIII detector at a center-of-mass energy of 3.773 GeV. The absolute branching fraction of the 𝐷+→𝜔𝜇+𝜈𝜇 decay is measured to be ℬ𝐷+→𝜔𝜇+𝜈𝜇=(17.7±1.8stat±1.1syst)×10−4. Its ratio with the world average value of the branching fraction of the 𝐷+→𝜔𝑒+𝜈𝑒 decay probes lepton flavor universality and it is determined to be ℬ𝐷+→𝜔𝜇+𝜈𝜇/ℬPDG 𝐷+→𝜔𝑒+𝜈𝑒=1.05±0.14, in agreement with the standard model expectation within one standard deviation.
The processes 𝑒+𝑒−→𝐷+𝑠𝐷𝑠1(2460)− + c.c. and 𝑒+𝑒−→𝐷∗+𝑠𝐷𝑠1(2460)− + c.c. are studied for the first time using data samples collected with the BESIII detector at the BEPCII collider. The Born cross sections of 𝑒+𝑒−→𝐷+𝑠𝐷𝑠1(2460)− + c.c. at nine center-of-mass energies between 4.467 GeV and 4.600 GeV, and those of 𝑒+𝑒−→𝐷∗+𝑠𝐷𝑠1(2460)− + c.c. at √𝑠 = 4.590 GeV and 4.600 GeV, are measured. No obvious charmonium or charmoniumlike structure is seen in the measured cross sections.
Digital spatial processes have been widely explored and investigated in subject-specific geographic research. So far, however, this research has not been sufficiently reflected in classrooms or teacher education, and remains unconnected to notions of geographical digital literacy. Viral constructions of space – realities shaped in everyday life that are experienced and (re-)produced by students and teachers alike through social media – present an opportunity for Geography education to adapt to the digital society. This paper attempts to connect viral constructions of space, the digital society and the knowledge teachers need to include viral constructions of space in the classroom using Mishra and Koehler’s (2006) TPACK model, a well-established means for summarizing teachers’ technological, pedagogical and content knowledge for a specific topic. The paper focuses on content knowledge, identifies five sub-types of viral constructions of space, and extracts nine descriptors of teachers’ content knowledge. By focusing on content knowledge, the paper presents a starting point for future investigations of pedagogical and technological teacher knowledge as well as their intersections. It also raises awareness of viral constructions of space as both a new essential topic in the Geography classroom and a phenomenon already shaping learning environments for spatial acquisition.
Using a sample of 106 million 𝜓(3686) decays, 𝜓(3686)→𝛾𝜒𝑐𝐽(𝐽=0,1,2) and 𝜓(3686)→𝛾𝜒𝑐𝐽,𝜒𝑐𝐽→𝛾𝐽/𝜓(𝐽=1,2) events are utilized to study inclusive 𝜒𝑐𝐽→anything, 𝜒𝑐𝐽→hadrons, and 𝐽/𝜓→anything distributions, including distributions of the number of charged tracks, electromagnetic calorimeter showers, and 𝜋0s, and to compare them with distributions obtained from the BESIII Monte Carlo simulation. Information from each Monte Carlo simulated decay event is used to construct matrices connecting the detected distributions to the input predetection “produced” distributions. Assuming these matrices also apply to data, they are used to predict the analogous produced distributions of the decay events. Using these, the charged particle multiplicities are compared with results from MARK I. Further, comparison of the distributions of the number of photons in data with those in Monte Carlo simulation indicates that G-parity conservation should be taken into consideration in the simulation.
Using 2.93 fb−1 of 𝑒+𝑒− collision data taken at a center-of-mass energy of 3.773 GeV by the BESIII detector at the BEPCII, we measure the branching fractions of the singly Cabibbo-suppressed decays 𝐷→𝜔𝜋𝜋 to be ℬ(𝐷0→𝜔𝜋+𝜋−)=(1.33±0.16±0.12)×10−3 and ℬ(𝐷+→𝜔𝜋+𝜋0)=(3.87±0.83±0.25)×10−3, where the first uncertainties are statistical and the second ones systematic. The statistical significances are 12.9𝜎 and 7.7𝜎, respectively. The precision of ℬ(𝐷0→𝜔𝜋+𝜋−) is improved by a factor of 2.1 over prior measurements, and ℬ(𝐷+→𝜔𝜋+𝜋0) is measured for the first time. No significant signal for 𝐷0→𝜔𝜋0𝜋0 is observed, and the upper limit on the branching fraction is ℬ(𝐷0→𝜔𝜋0𝜋0)<1.10×10−3 at the 90% confidence level. The branching fractions of 𝐷→𝜂𝜋𝜋 are also measured and consistent with existing results.
We report an amplitude analysis and branching fraction measurement of the D+s→K+K−π+ decay, using a data sample of 3.19 fb−1 recorded with the BESIII detector at a center-of-mass energy of 4.178 GeV. We perform a model-independent partial wave analysis in the low K+K− mass region to determine the K+K− S-wave lineshape, followed by an amplitude analysis of our very pure high-statistics sample. The amplitude analysis provides an accurate determination of the detection efficiency, allowing us to measure the branching fraction B(D+s→K+K−π+)=(5.47±0.08stat±0.13sys)%.
Using 2.93 fb−1 of 𝑒+𝑒− collision data taken at a center-of-mass energy of 3.773 GeV with the BESIII detector, we report the first measurements of the absolute branching fractions of 14 hadronic 𝐷0(+) decays to exclusive final states with an 𝜂, namely 𝐷0→𝐾−𝜋+𝜂, 𝐾0𝑆𝜋0𝜂, 𝐾+𝐾−𝜂, 𝐾0𝑆𝐾0𝑆𝜂, 𝐾−𝜋+𝜋0𝜂, 𝐾0𝑆𝜋+𝜋−𝜂, 𝐾0𝑆𝜋0𝜋0𝜂, and 𝜋+𝜋−𝜋0𝜂; and 𝐷+→𝐾0𝑆𝜋+𝜂, 𝐾0𝑆𝐾+𝜂, 𝐾−𝜋+𝜋+𝜂, 𝐾0𝑆𝜋+𝜋0𝜂, 𝜋+𝜋+𝜋−𝜂, and 𝜋+𝜋0𝜋0𝜂. Among these decays, 𝐷0→𝐾−𝜋+𝜂 and 𝐷+→𝐾0𝑆𝜋+𝜂 have the largest branching fractions, ℬ(𝐷0→𝐾−𝜋+𝜂) = (1.853±0.025stat±0.031syst)% and ℬ(𝐷+→𝐾0𝑆𝜋+𝜂) = (1.309±0.037stat±0.031syst)%, respectively. The charge-parity asymmetries for the six decays with the highest event yields are determined, and no statistically significant charge-parity violation is found.
There has recently been a dramatic renewal of interest in hadron spectroscopy and charm physics. This renaissance has been driven in part by the discovery of a plethora of charmonium-like XYZ states at BESIII and B factories, and the observation of an intriguing proton-antiproton threshold enhancement and the possibly related X(1835) meson state at BESIII, as well as the threshold measurements of charm mesons and charm baryons.
We present a detailed survey of the important topics in tau-charm physics and hadron physics that can be further explored at BESIII during the remaining operation period of BEPCII. This survey will help in the optimization of the data-taking plan over the coming years, and provides physics motivation for the possible upgrade of BEPCII to higher luminosity.
This article takes the renowned study "Der Akt des Lesens" (1976) by Wolfgang Iser and its translation "The Act of Reading" (1978) as its starting point. The differences between the two texts are discussed in terms of Iser's own idea of translatability as a cultural practice, outlined in the short text "On Translatability". This theoretical frame will shed light on the decisions made in his own translations and will help to develop a conceptualization of self-translation as a practice inherent in cultural change. [...] I will propose a combination of two concepts, Iser's 'translatability' (in II.) and Lotman's notion of 'autocommunication' (III.), to suggest a concept of self-translation that entails three interrelated aspects: a) translation as a rewriting of the text as such, b) translation as continued work on one's argument, and c) the re-translation back to the original source as a manifestation of a change in one's thought structure (Änderungen der eigenen Denkstruktur, as one of Werner Heisenberg's papers is entitled, to which I will return in my conclusion (IV.)). Hence the focus is mainly systematic and conceptual; first, however, I will comment on my example of self- and re-translation and begin with a comparison of different versions of Iser's "Der Akt des Lesens" and the shorter texts that led to the actual monograph.
Taking as its example the popular book on the philosophy of science "Der Baum der Erkenntnis" ("El árbol del conocimiento"), published in Spanish in the 1980s by the Chilean biologists and neuroscientists Humberto Maturana and Francisco Varela, which became an important pillar of the sociologist Niklas Luhmann's theory of society and of social systems, this contribution shows how self-translation from science into the public sphere, on the one hand, and translation by others from one language into another, on the other (in this case from Spanish into German and English), produce interferences and frictions within the transfer of knowledge. First, an overview of the different forms of popularization as interdiscursive self-translation is given. The second part then discusses the interlingual translation work that slightly alters the Spanish original through differing linguistic and research contexts and incorporates it into the conceptual repertoire of the foreign language. Finally, a third part, drawing on Luhmann, identifies the principle of autopoiesis as a process that, already in "El árbol del conocimiento", is to be understood not simply as a transfer of knowledge, but as a transfer of the conditions under which knowledge arises in translating, from translating.
The pivotal position of the concept of translation arises not only from the inevitably multilingual, migratory and interdisciplinary thinking of the philosopher, psychoanalyst, economist and activist Cornelius Castoriadis. Rather, a concrete translational core exists in the two constitutive subjects of his philosophy: in the imaginary, on the one hand, and on the other in politics, which goes back to the imaginary constitution of society. Whether Castoriadis transfers his own text from French into Greek or describes the imaginative world of the capitalist present, both cases concern translation as a creative process within language, thought or politics.
Goldstein's career in the USA began with important publications in the new language and took place almost entirely in that foreign language; after his emigration, his bibliography lists only a handful of German essays. [...] At the level of the bibliographic record, then, this was a nearly complete transition into the foreign language amid seamlessly continuing publication activity, at whose seam stood the translation of his most important book. [...] The "Einführung in die Biologie unter besonderer Berücksichtigung der Erfahrungen am kranken Menschen" had become "A Holistic Approach to Biology Derived from Pathological Data in Man". The American book was no longer an 'Einführung', no longer an introduction; the subtitle already signalled a particular perspective on biology. [...] Goldstein's originality lies (so runs the first part of the thesis developed in this essay from a comparison of the German original with the English translation) in the fact that from his meticulous empirical observations he developed a new epistemological perspective on organismic processes, with which he also hoped to have overcome the old ontological debates about an essence peculiar to the phenomena of life and to living systems. Yet precisely this thrust is obscured by the decisive addition in the English translation when it characterizes this 'approach' as 'holistic'. For holism and wholeness typically stand for that ontological strand of vitalism against which Goldstein had repeatedly and explicitly turned, and which had left its traces in the book above all as a critique of Hans Driesch's doctrine of entelechy. Goldstein's epistemological reorientation of biology had also stood behind his distance from psychoanalysis, or rather had legitimized it theoretically, because like many of his neuropsychiatric colleagues he was unwilling to follow psychoanalysis in making sexual motives absolute. 'Drives' or 'instincts' could serve their purposes well enough as empirical descriptions of biological behaviour, but when they became entities in their own right, the step from epistemology to ontology had been taken, a step Goldstein sought to avoid precisely because his theory of the organism turned on the difference between these two scientific perspectives. The second part of the thesis is therefore that the English translation of Goldstein's book decisively paved the way for his reception as a holistic biophilosopher and thus as an ontological vitalist. This decisive shift, which actually contradicts the sharpening of 'Einführung' into 'approach', was reinforced by the main title, for there, conversely, the epistemological perspective inscribed in the (admittedly untranslatable) word 'Aufbau' was dropped, while the now free-standing word 'Organismus' did indeed seem to announce a discussion of its essence. [...] Against the background of the development of Goldstein's views, the English translation of the Organismus book appears as a turning point from a vitalism-sceptical, epistemological form of biophilosophy to an affirmatively vitalist, ontological position.
For Walter Benjamin, translating was not least a life experience. From 1933 at the latest, that is, from the time he lived as an exile in Paris, oscillating between German and French was his daily bread. Within his work, it is above all the self-translations that testify to the challenges, tensions and aporias of this permanent shifting. For Benjamin produced (a fact surprisingly seldom noted in the scholarship) numerous self-translations from German into French. [...] Benjamin wrote the exposé for the "Passagenwerk" in German in 1935 under the title "Paris, die Hauptstadt des XIX. Jahrhunderts" and translated it into French himself in 1939 under the title "Paris, Capitale du XIXème siècle: Exposé". My contribution is concerned with this text in its German and its French versions. I propose a comparative reading of the two exposés. In the passage from the German source text to the French text of the self-translation, as will become apparent, the form of presentation as well as the mode of argumentation changes in significant ways: in Benjamin's self-translation of the arcades exposé we can trace how German and French enter into a constellation and yet in no way merge into one another, since foreignness and difference constitute the paradigm of translation, as will also become clear in connection with Benjamin's early theory of translation. This is all the more astonishing in the case of self-translation, since here the authorial subject is supposedly identical. Self-translation thus represents a case of translation in which the presuppositions of Benjamin's theory of translation crystallize in exemplary fashion.
Before turning in more detail to the two exposés and to the form of the self-translation, I will therefore first offer some very brief general reflections on the relation between translation theory and translation practice in Walter Benjamin, then briefly introduce the two exposés (the German original of 1935 and the self-translation into French of 1939) and situate them historically within his œuvre. Against this background, a number of observations on the two exposés can then be discussed en détail.
Auf der harten Schulbank der Sprache : Leo Spitzers Bemerkungen über das Erlernen des Türkischen
(2020)
In what follows, I examine an essay by Leo Spitzer that has been received as a curious case of self-translation. This text, in which he reflects on his personal experiences of learning Turkish and then examines Turkish syntax from a comparative semantic perspective, was published almost simultaneously in a French and a Turkish version. In 1935 the French version of the essay on learning Turkish appeared in the Bulletin de la Société de linguistique de Paris: "En apprenant le turc. (Considérations psychologiques sur cette langue)" (literally: "Learning Turkish. Psychological observations on this language"). In the same period, in fact slightly earlier, a Turkish version of the piece appeared under the title "Türkçeyi Öğrenirken" ("Learning Turkish"), without a subtitle, published in three parts in the literary journal Varlık (Being) between April 1934 and January 1935. Interestingly, Spitzer appears to have translated the first part of the Turkish version himself, or to have written it directly in Turkish. The translation of the two further parts, however, was undertaken by his then assistant Sabahattin Eyüboğlu (1908−1973), who was actively involved in Turkey's translation policy of the 1930s. With this essay, the question of knowledge transfer in Istanbul as a matter of language and science policy takes on a curious, indeed carnivalesque character: a professor commissioned to introduce Western philology and scholarly culture into modern Turkey presents himself in the posture of a learner venturing a linguistic self-experiment. The choice of language as a tool of knowledge transmission is thereby called into question: French and Turkish, both learned and therefore comparable languages, which in this essay are at once object and medium of stylistic research.
My reflections on this topic are organized in three consecutive parts. After a brief introduction to Leo Spitzer's conception of language and his understanding of linguistics as stylistic research, I outline the historical situation of his stay in Istanbul in the years 1933 to 1936. In a second step, I examine the essay in question more closely with regard to questions of self-translation and ask what motivated Leo Spitzer to undertake this supposed self-translation. He was neither a Turkologist nor did he translate his Romance-philology works into Turkish himself: the philological studies he carried out during his exile in Istanbul appeared either in French or in German. Finally, I consider the significance of his decision to 'translate' precisely this essay on the Turkish language himself, in light of the rivalry between languages in the politics of scholarship, in this case between French and Turkish.
The complex and difficult question pursued in the present contribution is that of Heine's role as a self-translator, which directs the main focus onto the linguistic, that is, multilingual foundations of his work as a mediator. For describing Heine as a self-translator may well surprise, in contrast to associating him with the concept of cultural transfer. Heine's competence in French and the authorial status of his French writings touch on research questions that have been debated quite controversially in Heine criticism from its beginnings to the present day. In this connection, one would also have to reconsider the concept of translation, or self-translation, relevant to the analysis of Heine's practice of writing and publishing. Can one really speak, in Heine's case, of a sole German original and assign the French versions of his writings (as has often been done) a merely secondary status? In this problem complex, alongside the text-genetic and linguistic-translational aspects, the perspective of the poet's self-presentation, indeed self-marketing, also plays a significant role, insofar as one can speak of Heine's veritable self-staging as a bilingual author. My thesis in what follows will be that Heinrich Heine's thoroughly questionable status as a self-translator is to be understood, beyond normative criteria of translation and language competence, above all as a double (that is, binational and bilingual) 'auctoritas'. Put differently: beyond the complex question of textual genesis, the emphasis falls on the role Heine deliberately assumed as a direct actor in two national systems of knowledge. From this angle, not least, a remarkable nexus between the knowledge contents conveyed and their linguistic transfer becomes visible.
The aim is thus to show that Heine's decidedly anti-nationalist, cosmopolitan and universalist thinking between Germany and France finds its formal counterpart in an interlingual circulation of knowledge between the German and French languages, in whose medium the theories and theses he developed and conveyed are processually elaborated and continued. Such a view of Heine as a translingual writer has not always been sufficiently considered and appreciated by scholarship. As is generally the case with bicultural and bilingual authors and linguistically hybrid writing procedures, one is confronted with blind spots in research and with resistances rooted in national philology, which promote or imply a more or less symbolic 'monolingualization' of Heine's works. This circumstance concerns not only the German reception of Heine, but can regrettably also be observed in the interculturally oriented French Germanistik of recent times, as I aim to show in a concluding excursus.
Of Alessandro Manzoni's 'Italian' contribution to the European debate on Romanticism, the "Lettre à M.r C*** sur l'unité de temps et de lieu dans la tragédie", there is no Italian source version. Strictly speaking, the text in question is therefore not a self-translation but an act of writing in the foreign language French, for which its author opts with regard to subject and situation. At the same time, it is well known that Manzoni's writing cannot be separated from French politics, French scholarship and culture. [...] With the "Promessi Sposi", Manzoni invents language as literature and literature as language, as so-called national and world literature. The "Lettre à M. Chauvet" marks, as I aim to show in what follows, a kind of linguistic turning point at which language becomes a metaphorical exile and appears as contingent as it is necessary. The knowledge transfer initiated by the "Lettre à M. Chauvet" is conditioned by the specific linguistic and political situation of Italy in the first half of the nineteenth century. Only against this background does Manzoni's biographical, cultural and linguistic in-between position become intelligible. For only when the multilingual situation (dialect, French, standard written Italian) is measured against the medium of a codified written language (French) does that multilingualism become a deficiency (I.). In the French foreign language, the gap between written and spoken language becomes an epistemic interspace which, depending on the perspective of speaker and addressee, concerns different domains of knowledge. In terms of editorial philology, the only text Manzoni published in French becomes a problem of authorship: co-authorship still seems (or seems especially today) to constitute a philological stumbling block (II.).
At the level of its poetological subject, a comparison of the early draft ('Primo Sbozzo') with the printed version shows that in the course of its composition the text detaches itself more and more from its point of departure (Manzoni's tragedy "Il Conte di Carmagnola" and Victor Chauvet's review of it) and tends towards a future poetics of the "Promessi Sposi" (III.). The text's cultural in-between position allows the editor Fauriel, on the French side, to deploy Manzoni's stance deliberately as the critique of an 'outsider', as witnessed by the textual framing as well as by the misunderstandings cleared up in the correspondence; the poet Manzoni, for his part, in perfecting his French, is pushed ever more firmly up against the problem of the linguistic unintelligibility of written Italian (IV.). In sum, the "Lettre à M. Chauvet" can be described as an open text at which, depending on the perspective of production and reception, questions of poetics, the theory of the subject, and cultural and language critique intersect (V.).
The lives and works of the brothers Wilhelm (1767−1835) and Alexander (1769−1859) von Humboldt unfolded within manifold transfers of culture and knowledge. Both were, to differing degrees, scientific travellers, scholarly publicists, science policymakers, and servants of state and court. In the variety of these functions, they depended on being able to mediate between specialist, political and public communication, that is, on translating themselves, in an initially broad understanding of translation that encompasses, alongside linguistic transfer, the "necessity of cultural processes of translation" and points to an "always-already-being-translated" of cultures in their plurality and diversity. With the Humboldts, however, such cultural transfers prove to be bound to language in specific ways. The broadly translational character of their scholarly and political activity can therefore be narrowed down to the interlingual self-translations that are inseparably bound up with the multilingual genesis of each brother's complete works. [...] As a special case of cosmopolitan multilingualism, the history of Franco-German relations and entanglements has been emphasized for both Humboldts. For both, the French language held a central place: as a second language spoken from childhood and as the scholarly lingua franca of the period around 1800. It was always in play wherever Wilhelm and Alexander von Humboldt acted as self-translators. The direction of translation ran both from German into French and vice versa; what was translated ranged from complete texts of their own to sections of partly published, partly unpublished works, which in the other language could become source material for new writings.
In this way, bilingual textual corpora of various kinds came into being, as the following sets out through chronologically ordered examples, both with regard to the respective scholarly contexts and in a micrological view of the texts themselves. Two of the examples come from Wilhelm von Humboldt: a German-French conglomerate on aesthetics (II.) and a French-German one on ancient American languages (IV.); the other two from Alexander von Humboldt: the treatise on the geography of plants published in parallel in French and German (III.) and the French translation of the introduction, first published in German, to the multi-volume late work, the Kosmos (V.). In conclusion, the pragmatic and theoretical scope of the Humboldts' self-translations is assessed once more (VI.).
Schlegel wrote and published the bulk of his writings in German, but several important publications appeared in French and Latin. Schlegel's famous achievements as a translator and his multilingual publication practice suggest the conjecture that he acted as a self-translator, for instance to make his German-language writings accessible to an international audience. This, however, is not the case: either his writings were translated by third parties, with or without the author's involvement, or he himself wrote the text in question directly in the foreign language. This choice of language is, in my view, an essential feature of Schlegel's publication strategy. Schlegel neither translated his writings himself nor presented them as self-translations. And yet the phenomenon of self-translation can be observed in Schlegel. For he carried earlier thoughts, expressions and messages over into new writings composed in another language and adapted them to the target audience. Equivalent passages, be they individual sentences or entire paragraphs, thus appear in texts written in different languages. These self-translations mirror, on an interlingual level, the close intertextual interconnection of his seemingly disparate body of work, and they are closely tied to analogous intralingual procedures such as commenting, paraphrasing, summarizing and quoting, which shape Schlegel's manner of writing and working. His texts are marked by a strong self-referentiality that goes beyond the frequent and rarely acknowledged self-quotations and the adoption and further development of thoughts already formulated elsewhere. First, Schlegel's self-referential mode of writing is illustrated through a short bilingual textual comparison (I.). The main part of this contribution (II.) then examines Schlegel's self-translation on the basis of the "Comparaison entre la Phèdre de Racine et celle d'Euripide" and the Vienna lectures "Über dramatische Kunst und Literatur". A concluding section (III.) interrogates his multilingual publication practice and explains the significance of the choice of language for Schlegel's thinking.
In the first half of this article I explore Van Helmont's philosophy of language and translation, in part by contextualizing it within the sixteenth- and seventeenth-century traditions upon which he drew. Since Van Helmont is so explicit about the philosophy of language and translation that he developed, I investigate whether he put his philosophy into practice. The second half of this article therefore discusses Van Helmont's practices of using and translating between his two main languages, Dutch and Latin. The way in which he employed the languages in which he wrote raises questions about his practice of self-translation and his use of language. Did his mother tongue always figure as the first language into which his thoughts were translated, or could Latin, the first language of his profession, also have played that role? Van Helmont may have been switching primary languages for the different purposes of his writings. Before going into more detail about his philosophy and use of language, I briefly introduce this relatively unknown author to the reader.
Since Jacob Burckhardt's theses on individualism in the Renaissance, the 'self' has counted as a thoroughly discussed entity which, regardless of its diverse manifestations, has been regarded not only as the succinct hallmark of a threshold era but virtually as its sole product: the epoch between the late Middle Ages and the Enlightenment stands, quite simply, for the 'origin' of modern individuality. [...] The most prominent oratorical initiator who explicitly stands for a 'reformatio mundi' tellingly met with both at the hands of historiographical posterity: devoted glorification as a leader figure and apotheosis in the Promethean mould. To this day, Martin Luther counts not only as a 'rebel' or 'revolutionary' but above all as the 'creator of the German written language'. Here, at the latest, differentiation is required. It is precisely the person of the Wittenberg scholar, preacher and pastor that occasions a careful clarification of an early modern self-awareness. The Augustinian monk's self-understanding is at first grounded exclusively in theological traditions and is therefore to be associated above all with compounds such as 'self-examination', 'self-dialogue' ('soliloquium') and 'self-knowledge' ('cognitio sui'). These categories, however, are primarily and inseparably tied to the sinfulness of every single human being [...] And yet, in this catalogue of compounds, which seems rather alien to modernity, we do in fact also encounter 'self-translation': as practice and phenomenon as well as in the form of illuminating paraphrases. The precondition for this practice appears at first to be a development at the level of power politics.
With the decline of the universal rule borne by pope and emperor in Western Europe, with the waning of a post-antique claim to dominion ('translatio imperii') in the sense of the medieval idea of the Roman Empire, the rapidly advancing decline of Latin as the universal language of scholarship and administration is likewise set in motion. As the individual regional part-dominions strive for political autonomy, Europe's polyphonic linguistic culture also acquires a new standing: still largely pre-national, that is, rather territorial communities of interest take to conducting their affairs in their own idiom. That idiom can thereby step out of the shadow of an inadequacy despised as 'lay' and 'illiterate' and prove itself a fully valid counterpart of the ancient models. [...] A second novelty arrives at the same time. For the partisan handling of local and soon also bi- or multilateral affairs, the ancient precepts of rhetoric receive a new and very central function: the individual or collective subject (territorial power) appearing locally on behalf of particular interests articulates itself linguistically and persuasively, represents a local standpoint ('opinio' / 'point de vue') and seeks to move its listeners, through persuasion, to politically effective action. The author's text serves this end, but so does its purpose-directed translation. In the latter, a subject takes concrete shape not only as a mediator between parties, estates and interests, but also between languages and cultures.
Self-translation means that authors transfer their own texts from one language into another, thus acting as their own translators. In a self-translation, then, author and translator are identical. From this seemingly simple definition arises a series of complex research questions. They can be formulated around a number of guiding concepts, so as to systematize, at least in outline, the conversation undertaken in the present volume across the boundaries of philologies and disciplines.
Selbstübersetzungen sind faszinierende Erscheinungsformen mehr- und anderssprachigen Schreibens. Sie bewegen sich an der Grenze zwischen pragmatischem Nutzen und autorschaftlicher Strategie, sie eröffnen die Lizenz zum produktiven Weiterdenken, Umstellen und Fortsetzen des Ausgangstextes, und oft werfen sie die Frage auf, welche der so entstehenden Versionen das Original und welche die Übersetzung ist. Bisherige Untersuchungen zum Thema konzentrierten sich meist auf literarische, insbesondere poetische Selbstübersetzungen. Der vorliegende Band nimmt dagegen das Moment der Übertragung von Wissen in den Blick - und damit den spannungsvollen Zusammenhang von sprachlichem Transfer und Wissenstransfer. Die Beiträge widmen sich gelehrten und intellektuellen Selbstübersetzern aus fünf Jahrhunderten: Martin Luther, Jan Baptista van Helmont, August Wilhelm Schlegel, Wilhelm und Alexander von Humboldt, Alessandro Manzoni, Heinrich Heine, Leo Spitzer, Walter Benjamin, Kurt Goldstein, Eugen Rosenstock-Huessy, Cornelius Castoriadis, Humberto Maturana und Wolfgang Iser.
During the 2016–17 and 2018–19 running periods, the BESIII experiment collected 7.5 fb⁻¹ of e⁺e⁻ collision data at center-of-mass energies ranging from 4.13 to 4.44 GeV. These data samples are primarily used for the study of excited charmonium and charmoniumlike states. By analyzing the di-muon process e⁺e⁻ → (γISR/FSR) μ⁺μ⁻, we measure the center-of-mass energies of the data samples with a precision of 0.6 MeV. Through a run-by-run study, we find that the center-of-mass energies were stable throughout most of the data-taking period.
Metabolic differences between symbiont subpopulations in the deep-sea tubeworm Riftia pachyptila
(2020)
The hydrothermal vent tube worm Riftia pachyptila lives in intimate symbiosis with intracellular sulfur-oxidizing gammaproteobacteria. Although the symbiont population consists of a single 16S rRNA phylotype, bacteria in the same host animal exhibit a remarkable degree of metabolic diversity: They simultaneously utilize two carbon fixation pathways and various energy sources and electron acceptors. Whether these multiple metabolic routes are employed in the same symbiont cells, or rather in distinct symbiont subpopulations, was unclear. As Riftia symbionts vary considerably in cell size and shape, we enriched individual symbiont cell sizes by density gradient centrifugation in order to test whether symbiont cells of different sizes show different metabolic profiles. Metaproteomic analysis and statistical evaluation using clustering and random forests, supported by microscopy and flow cytometry, strongly suggest that Riftia symbiont cells of different sizes represent metabolically dissimilar stages of a physiological differentiation process: Small symbionts actively divide and may establish cellular symbiont-host interaction, as indicated by highest abundance of the cell division key protein FtsZ and highly abundant chaperones and porins in this initial phase. Large symbionts, on the other hand, apparently do not divide, but still replicate DNA, leading to DNA endoreduplication. Highest abundance of enzymes for CO2 fixation, carbon storage and biosynthesis in large symbionts indicates that in this late differentiation stage the symbiont’s metabolism is efficiently geared towards the production of organic material. We propose that this division of labor between smaller and larger symbionts benefits the productivity of the symbiosis as a whole.
Highlights
• German patients with LGS identified using the most specific algorithm to date.
• Prevalence of probable LGS with epilepsy diagnosis before age 6 was 6.5 per 100,000.
• High healthcare costs of €22,787 PPY; mostly due to inpatient and home nursing care.
• Costs were greater in patients prescribed rescue medications.
• Over 10 years, LGS patients had significant mortality vs. controls (2.88 vs. 0.01%).
Abstract
Objective: This retrospective study examined patients with probable Lennox-Gastaut syndrome (LGS) identified from German healthcare data.
Methods: This 10-year study (2007–2016) assessed healthcare insurance claims information from the Vilua Healthcare research database. A selection algorithm considering diagnoses and drug prescriptions identified patients with probable LGS. To increase the specificity of the identification algorithm, two populations were defined: all patients with probable LGS (broadly defined) and only those with a documented epilepsy diagnosis before 6 years of age (narrowly defined). This criterion was used because LGS typically has a peak seizure onset between 3 and 5 years of age. Primary analyses were prevalence and demographics; secondary analyses included healthcare costs, hospitalization rate and length of stay (LOS), medication use, and mortality.
Results: In the final year of the study, 545 patients with broadly defined probable LGS (mean [range] age: 31.4 [2–89] years; male: 53%) were identified. Using the narrowly defined probable LGS definition, the number of patients was reduced to 102 (mean [range] age: 7.4 [2–14] years; male: 52%). Prevalence of broadly defined and narrowly defined probable LGS was 39.2 and 6.5 per 100,000 people. During the 10-year study, 208 patients with narrowly defined probable LGS were identified and followed up for 1379 patient-years. The mean annual cost of healthcare was €22,787 per patient-year (PPY); greatest costs were attributable to inpatient care (33%), home nursing care (13%), and medication (10%). Mean annual healthcare costs were significantly greater for those with prescribed rescue medication (45% of patient-years) versus those without (€33,872 vs. €13,785 PPY, p < 0.001). Mean (standard deviation [SD]) annual hospitalization rate was 1.6 (2.0) PPY with mean (SD) annual LOS of 22.7 (46.0) days. Annual hospitalization rate was significantly greater in those who were prescribed rescue medication versus those who were not (2.2 [2.3] vs. 1.1 [1.6] PPY, p < 0.001). The mean (SD) number of different medications prescribed was 11.3 (7.3) PPY and 33.8 (17.0) over the entire observable time per patient (OET); antiepileptic drugs only accounted for 2.1 (1.1) of the medications prescribed PPY and 3.8 (2.0) OET. Over the 10-year study period, mortality in patients with narrowly defined probable LGS was significantly higher than the matched control population (six events [2.88%] vs. one event [0.01%], p < 0.001).
Conclusion: Annual healthcare costs incurred by patients with probable LGS in Germany were substantial, and mostly attributable to inpatient care, home nursing care, and medication. Patients prescribed rescue medication incurred significantly greater costs than those who were not. Patients with narrowly defined probable LGS had a higher mortality rate than the matched control population.
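The two-population selection described in the Methods can be sketched in a few lines of Python. This is a hypothetical illustration only: the field names and the simple boolean flag are invented, whereas the actual algorithm combined diagnosis codes and drug prescriptions from the claims database.

```python
# Hypothetical sketch of the broad vs. narrow LGS population selection.
# Field names and criteria encoding are invented for illustration.
patients = [
    {"id": 1, "probable_lgs": True,  "age_at_epilepsy_dx": 4},
    {"id": 2, "probable_lgs": True,  "age_at_epilepsy_dx": 9},
    {"id": 3, "probable_lgs": False, "age_at_epilepsy_dx": 3},
]

# Broadly defined: all patients flagged as probable LGS.
broad = [p for p in patients if p["probable_lgs"]]

# Narrowly defined: additionally require an epilepsy diagnosis before age 6,
# matching the peak seizure onset of LGS (3-5 years of age).
narrow = [p for p in broad if p["age_at_epilepsy_dx"] < 6]

print(len(broad), len(narrow))  # the narrow population is a subset of the broad one
```

The nesting of the two list comprehensions makes explicit that the narrow definition can only shrink, never extend, the broad population.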
100 Jahre Dieter Janz (100 years of Dieter Janz)
(2020)
20 April 2020 marks the centenary of Dieter Janz's birth. This issue of the Zeitschrift für Epileptologie is published in his honor, with the aim of tracing Dieter Janz's work over the last five decades and summarizing new findings on the syndrome named after him, Janz syndrome (juvenile myoclonic epilepsy).
Protein turnover, the net result of protein synthesis and degradation, enables cells to remodel their proteomes in response to internal and external cues. Previously, we analyzed protein turnover rates in cultured brain cells under basal neuronal activity and found that protein turnover is influenced by subcellular localization, protein function, complex association, cell type of origin, and by the cellular environment (Dörrbaum et al., 2018). Here, we advanced our experimental approach to quantify changes in protein synthesis and degradation, as well as the resulting changes in protein turnover or abundance in rat primary hippocampal cultures during homeostatic scaling. Our data demonstrate that a large fraction of the neuronal proteome shows changes in protein synthesis and/or degradation during homeostatic up- and down-scaling. More than half of the quantified synaptic proteins were regulated, including pre- as well as postsynaptic proteins with diverse molecular functions.
We examined the feedback between the major protein degradation pathway, the ubiquitin-proteasome system (UPS), and protein synthesis in rat and mouse neurons. When protein degradation was inhibited, we observed a coordinated, dramatic reduction in nascent protein synthesis in neuronal cell bodies and dendrites. The mechanism for translation inhibition involved the phosphorylation of eIF2α, surprisingly mediated by eIF2α kinase 1, also known as heme-regulated inhibitor (HRI). Under basal conditions, neuronal expression of HRI is barely detectable. Following proteasome inhibition, HRI protein levels increase owing to stabilization of HRI and enhanced translation, likely via the increased availability of tRNAs for its rare codons. Once expressed, HRI is constitutively active in neurons because endogenous heme levels are so low; HRI activity results in eIF2α phosphorylation and the consequent inhibition of translation. These data demonstrate a novel role for neuronal HRI, which senses and responds to compromised proteasome function to restore proteostasis.
Keystone mutualisms, such as corals, lichens or mycorrhizae, sustain fundamental ecosystem functions. Range dynamics of these symbioses are, however, inherently difficult to predict because host species may switch between different symbiont partners in different environments, thereby altering the range of the mutualism as a functional unit. Biogeographic models of mutualisms thus have to consider both the ecological amplitudes of various symbiont partners and the abiotic conditions that trigger symbiont replacement. To address this challenge, we here investigate 'symbiont turnover zones', defined as demarcated regions where symbiont replacement is most likely to occur, as indicated by overlapping abundances of symbiont ecotypes. Mapping the distribution of algal symbionts from two species of lichen-forming fungi along four independent altitudinal gradients, we detected an abrupt and consistent β-diversity turnover suggesting parallel niche partitioning. Modelling contrasting environmental response functions obtained from latitudinal distributions of algal ecotypes consistently predicted a confined altitudinal turnover zone. In all gradients this symbiont turnover zone is characterized by approximately 12°C average annual temperature and approximately 5°C mean temperature of the coldest quarter, marking the transition from Mediterranean to cool temperate bioregions. Integrating the conditions of symbiont turnover into biogeographic models of mutualisms is an important step towards a comprehensive understanding of biodiversity dynamics under ongoing environmental change.
Two-person neuroscience (2PN) is a recently introduced conceptual and methodological framework for investigating the neural basis of human social interaction through simultaneous neuroimaging of two or more subjects (hyperscanning). In this study, we adopted a 2PN approach and a multiple-brain connectivity model to investigate the neural basis of a form of cooperation called joint action. We hypothesized different intra-brain and inter-brain connectivity patterns when comparing the interpersonal properties of joint action with non-interpersonal conditions, with a focus on co-representation, a core ability at the basis of cooperation. 32 subjects were enrolled in dual-EEG recordings during a computerized joint action task including three conditions: one in which the dyad acted jointly to pursue a common goal (Joint), one in which each subject interacted with the PC (PC), and one in which each subject performed the task individually (Solo).
A combination of multiple-brain connectivity estimation and specific indices derived from graph theory allowed us to compare interpersonal with non-interpersonal conditions in four different frequency bands. Our results indicate that all the indices were modulated by the interaction and revealed significantly stronger integration of the multiple-subject networks in the Joint condition than in the PC and Solo conditions. A subsequent classification analysis showed that features based on multiple-brain indices discriminated social from non-social conditions better than single-subject indices. Taken together, our results suggest that multiple-brain connectivity can provide deeper insight into the neural basis of cooperation in humans.
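As a rough illustration of the kind of graph-theoretical integration index used in such analyses, the sketch below computes global efficiency (a common integration measure) for a tiny binary network. The adjacency matrices are invented for illustration and are not data from the study; they merely show how adding inter-brain edges raises the integration of a multiple-subject network.

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency of a binary undirected graph:
    the average of 1/shortest_path_length over all ordered node pairs
    (unreachable pairs contribute 0)."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        # Breadth-first search for shortest path lengths from src.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# Invented 4-node "multiple-brain" network: nodes 0-1 belong to one subject,
# nodes 2-3 to the other. In the Joint-like network, inter-brain edges
# (0-2 and 1-3) connect the two subjects' sub-networks.
solo  = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
joint = [[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]]
print(global_efficiency(solo), global_efficiency(joint))
```

The disconnected "solo" network scores lower than the inter-brain-connected "joint" network, which is the qualitative pattern a stronger integration of multiple-subject networks would produce.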
Bone vasculature provides protection and signals necessary to control stem cell quiescence and renewal [1]. Specifically, type H capillaries, which highly express Endomucin, constitute the endothelial niche supporting a microenvironment of osteoprogenitors and long-term hematopoietic stem cells [2–4]. The age-dependent decline in type H endothelial cells was shown to be associated with bone dysregulation and accumulation of hematopoietic stem cells, which display cell-intrinsic alterations and reduced functionality [3]. The regulation of bone vasculature by chronic diseases, such as heart failure, is unknown. Here, we describe the effects of myocardial infarction and post-infarction heart failure on the vascular bone cell composition. We demonstrate an age-independent loss of type H bone endothelium in heart failure after myocardial infarction in both mice and humans. Using single-cell RNA sequencing, we delineate the transcriptional heterogeneity of human bone marrow endothelium, showing increased expression of inflammatory genes, including IL1B and MYC, in ischemic heart failure. Inhibition of NLRP3-dependent IL-1β production partially prevents the post-myocardial infarction loss of type H vasculature in mice. These results provide a rationale for using anti-inflammatory therapies to prevent or reverse the deterioration of vascular bone function in ischemic heart disease.
We use the quantum null energy condition in strongly coupled two-dimensional field theories (QNEC2) as a diagnostic tool to study a variety of phase structures, including crossover, second-order and first-order phase transitions. We find a universal QNEC2 constraint for first-order phase transitions with kinked entanglement entropy and discuss in general the relation between the QNEC2 inequality and monotonicity of the Casini-Huerta c-function. We then focus on a specific example, the holographic dual of which is modelled by three-dimensional Einstein gravity plus a massive scalar field with one free parameter in the self-interaction potential. We study translation-invariant stationary states dual to domain walls and black branes. Depending on the value of the free parameter we find crossover, second-order and first-order phase transitions between such states, and the c-function either flows to zero or to a finite value in the infrared. Strikingly, evaluating QNEC2 for ground state solutions allows us to predict the existence of phase transitions at finite temperature.
We use holography to study the dynamics of a strongly-coupled gauge theory in four-dimensional de Sitter space with Hubble rate H. The gauge theory is non-conformal with a characteristic mass scale M. We solve Einstein’s equations numerically and determine the time evolution of homogeneous gauge theory states. If their initial energy density is high compared with H4 then the early-time evolution is well described by viscous hydrodynamics with a non-zero bulk viscosity. At late times the dynamics is always far from equilibrium. The asymptotic late-time state preserves the full de Sitter symmetry group and its dual geometry is a domain-wall in AdS5. The approach to this state is characterised by an emergent relation of the form P = w E that is different from the equilibrium equation of state in flat space. The constant w does not depend on the initial conditions but only on H/M and is negative if the ratio H/M is close to unity. The event and the apparent horizons of the late-time solution do not coincide with one another, reflecting its non-equilibrium nature. In between them lies an “entanglement horizon” that cannot be penetrated by extremal surfaces anchored at the boundary, which we use to compute the entanglement entropy of boundary regions. If the entangling region equals the observable universe then the extremal surface coincides with a bulk cosmological horizon that just touches the event horizon, while for larger regions the extremal surface probes behind the event horizon.
Background: Data on the arrhythmic burden of women at risk for sudden cardiac death are limited, especially in patients using the wearable cardioverter-defibrillator (WCD).
Objective: We aimed to characterize WCD compliance, atrial and ventricular arrhythmic burden, and WCD outcomes by sex in patients enrolled in the Prospective Registry of Patients Using the Wearable Cardioverter Defibrillator (WEARIT-II U.S. Registry).
Methods: In the WEARIT-II Registry, we stratified 2000 patients by sex into women (n = 598) and men (n = 1402). WCD wear time, ventricular and atrial arrhythmic events during WCD use, and implantable cardioverter-defibrillator (ICD) implantation rates at the end of WCD use were evaluated.
Results: The mean WCD wear time was similar in women and men (94 days vs 90 days; P = .145), with longer daily use in women (21.4 h/d vs 20.7 h/d; P = .001). Burden of ventricular tachycardia or ventricular fibrillation was higher in women, with 30 events per 100 patient-years compared with 18 events per 100 patient-years in men (P = .017), with similar findings for treated and non-treated ventricular tachycardia/ventricular fibrillation. Recurrent atrial arrhythmias/sustained ventricular tachycardia was also more frequent in women than in men (167 events per 100 patient-years vs 73 events per 100 patient-years; P = .042). However, ICD implantation rate at the end of WCD use was similar in both women and men (41% vs 39%; P = .448).
Conclusion: In the WEARIT-II Registry, we have shown a higher burden of ventricular and atrial arrhythmic events in women than in men. ICD implantation rates at the end of WCD use were similar. Our findings warrant monitoring women at risk for sudden cardiac death who have a high burden of atrial and ventricular arrhythmias while using the WCD.
Highlights
• Transparency of design, reference frames and support for action were found to support students' sense-making of LA dashboards.
• The higher the overall SRL score, the more relevant the three factors were perceived by learners.
• Learner goals affect how relevant students find reference frames.
• The SRL effect on the perceived relevance of transparency depends on learner goals.
Abstract
Unequal stakeholder engagement is a common pitfall of learning analytics adoption in higher education, leading to lower buy-in and flawed tools that fail to meet the needs of their target groups. With each design decision, we make assumptions about how learners will make sense of the visualisations, yet we know very little about how students make sense of dashboards and which aspects influence their sense-making. We investigated how learner goals and self-regulated learning (SRL) skills influence dashboard sense-making, following a mixed-methods research methodology: a qualitative pre-study followed up by an extensive quantitative study with 247 university students. We uncovered three latent variables for sense-making: transparency of design, reference frames, and support for action. SRL skills predict how relevant students find these constructs. Learner goals have a significant effect only on the perceived relevance of reference frames. Knowing which factors influence students' sense-making will lead to more inclusive and flexible designs that cater to the needs of both novice and expert learners.
This study aims to question the legitimacy of the criminalization of gambling in Brazil, with particular attention to a specifically Brazilian game of chance, the "animal game" (Jogo do bicho), which emerged at the end of the 19th century in the city of Rio de Janeiro, then the capital of imperial Brazil. It is a form of gambling that has spread throughout Brazil and has already been the subject of several academic studies in anthropology and sociology. The prohibition of this type of gambling, its criminalization, its great popularity, and its social tolerance are, however, reasons why the animal game in particular, and gambling in general, remains a neglected research subject of great interest in the legal field as well, especially in criminal law.
The main focus of this work is the analysis of the criminalization of gambling, which has been practised openly in Brazil for more than a century. This analysis questions the reasons for the criminalization and the legitimacy of the prohibition. To this end, the text is divided into six chapters, apart from the introduction and conclusion.
Chapter 1 describes the history of the Brazilian animal game and the origins of its prohibition and criminalization. Chapter 2 reports on the reality of criminal prosecution in this "field of criminality". Chapter 3 presents the corresponding offence of the Brazilian "Código Penal" within the systematic context of the Brazilian criminal code. Chapter 4 is devoted first to the constitutional limits of criminalization and then to an overview of the German gambling prohibition and gambling regulation. This inquiry is supplemented in Chapter 5, which places the criminal prohibition of gambling in the context of the debate on the demarcation between, and the interrelation of, law and morality. In the concluding Chapter 6, the (criminally sanctioned) gambling prohibition is confronted with the classical requirements of legitimation.
What can be read in the criminal-law literature of the first half of the 20th century to justify the prohibition points to a strong influence of moral arguments. These arguments have not lost their weight to this day, even if the advocates of maintaining criminalization try to disguise their ultimately moralistic ideology against gambling with arguments such as the criminality accompanying gambling, which is a consequence rather than a cause of the criminalization.
Decline in physical activity in the weeks preceding sustained ventricular arrhythmia in women
(2020)
Background: Heightened risk of cardiac arrest following physical exertion has been reported. Among patients with an implantable defibrillator, appropriate shocks for sustained ventricular arrhythmia were preceded by retrospective self-reports of engaging in mild-to-moderate physical activity. Previous studies evaluating the relationship between activity and sudden cardiac arrest lacked an objective measure of physical activity, and women were often underrepresented.
Objective: To determine the relationship between physical activity, recorded by accelerometer in a wearable cardioverter-defibrillator (WCD), and sustained ventricular arrhythmia among female patients.
Methods: A dataset of female adult patients prescribed a WCD for a diagnosis of myocardial infarction or dilated cardiomyopathy was compiled from a commercial database. Curve estimation, to include linear and nonlinear interpolation, was applied to physical activity as a function of time (days before arrhythmia).
Results: Among women who received an appropriate WCD shock for sustained ventricular arrhythmia (N = 120), a quadratic relationship between time and activity was present prior to shock. Physical activity increased from the beginning of the 30-day period up until day −16 (16 days before the ventricular arrhythmia), when activity began to decline.
Conclusion: For patients who received treatment for sustained ventricular arrhythmia, a decline in physical activity was found during the 2 weeks preceding the arrhythmic event. Device monitoring for a sustained decline in physical activity may be useful to identify patients at near-term risk of a cardiac arrest.
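The curve-estimation step described in the Methods, a quadratic fit of activity as a function of days before the event, can be sketched in Python. The activity data below are synthetic and invented for illustration (a noisy parabola peaking near day −16); only the fitting procedure itself reflects the abstract.

```python
import numpy as np

# Synthetic daily activity (arbitrary units) over the 30 days before the
# arrhythmic event, peaking around day -16 by construction (an assumption
# chosen to mirror the reported pattern, not registry data).
rng = np.random.default_rng(0)
days = np.arange(-30, 0)
true_curve = -0.05 * (days + 16) ** 2 + 40.0
activity = true_curve + rng.normal(0.0, 1.0, size=days.size)

# Second-order polynomial fit, as in classical curve estimation.
a, b, c = np.polyfit(days, activity, deg=2)

# Vertex of the fitted parabola = estimated day of peak activity,
# after which the fitted curve declines toward the event.
peak_day = -b / (2 * a)
print(f"fitted peak at day {peak_day:.1f}")
```

A concave fit (negative leading coefficient) with a vertex well before day 0 is the signature of the rise-then-decline pattern reported in the Results.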
Citizenship law legally enshrines notions of belonging and determines who has full rights in a society and who does not. For decades, migration to Germany was regarded as something temporary. Until the reform of 1999/2000, citizenship law was largely governed by "ius sanguinis", the principle of descent, which rests on a racist and völkisch conception of the state. The reform therefore meant more than a mere change in the law: it was an acknowledgement of Germany as a country of immigration and a transformation of the notion of German identity. In reaction to the reform plans, a heated, racist public debate erupted over precisely these questions, conducted under the polarizing slogan "Doppelpass" (dual passport). It was the loudest migration-policy debate of its time.
Shortly before this debate began, the right-wing terrorist group "Nationalsozialistischer Untergrund" (NSU) had gone underground to evade an arrest warrant. The NSU was a German neo-Nazi network centred on three terrorists. Over a period of twelve years they carried out a racist series of murders of nine people of Turkish, Kurdish, and Greek origin as well as three bomb attacks on migrant spaces, and they murdered a policewoman. They committed the first of their bomb attacks only one month after the reform was signed. A few months after the law came into force, they began their racist series of murders with the attack on Enver Şimşek.
Using the framework of historical-materialist policy analysis, this thesis examines the migration regime surrounding the citizenship reform of 1999/2000 and asks how the NSU can be situated within it.
On the basis of a literature review, the context analysis presents the relevant historical and structural factors of the debate and of the NSU. In the next step, an analysis of newspaper articles from the period identifies the relevant actors and groups them into four hegemony projects: neoliberal, social, left-liberal-alternative, and conservative. The course of the debate is then presented in four phases and reconstructed as a negotiation among the four hegemony projects. It emerges that no project was able to prevail fully and achieve hegemony, although they were represented in the media to varying degrees.
In a final step, the thesis considers connections between this migration-regime analysis and the NSU. It concludes that the NSU was not an actor in the migration regime surrounding the citizenship debate of 1998/99. Given the scant evidence of the NSU's specific views on citizenship law, no causal relationships can be established. Nevertheless, the thesis points to commonalities in the world views, assumptions, and migration-policy goals of the NSU, the conservative hegemony project, and parts of the population. In doing so, it contributes to understanding the NSU as a product and part of German society.
Attention-Deficit/Hyperactivity Disorder (ADHD) and obesity are frequently comorbid, genetically correlated, and share brain substrates. The biological mechanisms driving this association are unclear, but candidate systems, like dopaminergic neurotransmission and circadian rhythm, have been suggested. Our aim was to identify the biological mechanisms underpinning the genetic link between ADHD and obesity measures and investigate associations of overlapping genes with brain volumes. We tested the association of dopaminergic and circadian rhythm gene sets with ADHD, body mass index (BMI), and obesity (using GWAS data of N = 53,293, N = 681,275, and N = 98,697, respectively). We then conducted genome-wide ADHD–BMI and ADHD–obesity gene-based meta-analyses, followed by pathway enrichment analyses. Finally, we tested the association of ADHD–BMI overlapping genes with brain volumes (primary GWAS data N = 10,720–10,928; replication data N = 9428). The dopaminergic gene set was associated with both ADHD (P = 5.81 × 10⁻³) and BMI (P = 1.63 × 10⁻⁵); the circadian rhythm gene set was associated with BMI (P = 1.28 × 10⁻³). The genome-wide approach also implicated the dopaminergic system, as the Dopamine-DARPP32 Feedback in cAMP Signaling pathway was enriched in both ADHD–BMI and ADHD–obesity results. The ADHD–BMI overlapping genes were associated with putamen volume (P = 7.7 × 10⁻³; replication data P = 3.9 × 10⁻²), a brain region with volumetric reductions in ADHD and BMI and linked to inhibitory control. Our findings suggest that dopaminergic neurotransmission, partially through DARPP-32-dependent signaling and involving the putamen, is a key player underlying the genetic overlap between ADHD and obesity measures. Uncovering shared etiological factors underlying the frequently observed ADHD–obesity comorbidity may have important implications in terms of prevention and/or efficient treatment of these conditions.
Inhibitors against the NS3-4A protease of hepatitis C virus (HCV) have proven to be useful drugs in the treatment of HCV infection. Although variants have been identified with mutations that confer resistance to these inhibitors, the mutations do not restore replicative fitness and no secondary mutations that rescue fitness have been found. To gain insight into the molecular mechanisms underlying the lack of fitness compensation, we screened known resistance mutations in infectious HCV cell culture with different genomic backgrounds. We observed that the Q41R mutation of NS3-4A efficiently rescues the replicative fitness in cell culture for virus variants containing mutations at NS3-Asp168. To understand how the Q41R mutation rescues activity, we performed protease activity assays complemented by molecular dynamics simulations, which showed that protease-peptide interactions far outside the targeted peptide cleavage sites mediate substrate recognition by NS3-4A and support protease cleavage kinetics. These interactions shed new light on the mechanisms by which NS3-4A cleaves its substrates, viral polyproteins and a prime cellular antiviral adaptor protein, the mitochondrial antiviral signaling protein MAVS. Peptide binding is mediated by an extended hydrogen-bond network in NS3-4A that was effectively optimized for protease-MAVS binding in Asp168 variants with rescued replicative fitness from NS3-Q41R. In the protease harboring NS3-Q41R, the N-terminal cleavage products of MAVS retained high affinity to the active site, rendering the protease susceptible for potential product inhibition. Our findings reveal delicately balanced protease-peptide interactions in viral replication and immune escape that likely restrict the protease adaptive capability and narrow the virus evolutionary space.
Cryo-electron tomography combined with subtomogram averaging (StA) has yielded high-resolution structures of macromolecules in their native context. However, high-resolution StA is not commonplace due to beam-induced sample drift, images with poor signal-to-noise ratios (SNR), challenges in CTF correction, and limited particle number. Here we address these issues by collecting tilt series with a higher electron dose at the zero-degree tilt. Particles of interest are then located within reconstructed tomograms, processed by conventional StA, and then re-extracted from the high-dose images in 2D. Single particle analysis tools are then applied to refine the 2D particle alignment and generate a reconstruction. Use of our hybrid StA (hStA) workflow improved the resolution for tobacco mosaic virus from 7.2 to 4.4 Å and for the ion channel RyR1 in crowded native membranes from 12.9 to 9.1 Å. These resolution gains make hStA a promising approach for other StA projects aimed at achieving subnanometer resolution.
Cryo-electron tomography (cryo-ET) combined with subtomogram averaging (StA) enables structural determination of macromolecules in their native context. A few structures have been reported by StA at resolutions higher than 4.5 Å; however, all of these are from viral structural proteins or vesicle coats. Reaching high resolution for a broader range of samples is uncommon due to beam-induced sample drift, the poor signal-to-noise ratio (SNR) of images, challenges in CTF correction, and the limited number of particles. Here we propose a strategy to address these issues, consisting of a tomographic data collection scheme and a processing workflow. Tilt series are collected with a higher electron dose at the zero-degree tilt in order to increase SNR. Next, after performing StA conventionally, we extract 2D projections of the particles of interest from the higher-SNR images and use single-particle analysis tools to refine the particle alignment and generate a reconstruction. We benchmarked our proposed hybrid StA (hStA) workflow and improved the resolution for tobacco mosaic virus from 7.2 to 5.2 Å and the resolution for the ion channel RyR1 in crowded native membranes from 12.9 to 9.1 Å. We demonstrate that hStA can improve the resolution obtained by conventional StA and promises to be a useful tool for StA projects aiming at subnanometer resolution or higher.
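The hybrid StA pipeline described in the two abstracts above can be summarized as an ordered sequence of stages; a minimal sketch, using illustrative stage descriptions and a hypothetical function name that does not correspond to any actual cryo-EM software package:

```python
# Hypothetical outline of the hybrid StA (hStA) workflow summarized above.
# The function and stage names are illustrative only; no real package is implied.

def hsta_stages():
    """Return the hStA processing stages in order."""
    return [
        # extra electron dose at 0 degrees raises the SNR of that image
        "collect tilt series with a high-dose zero-degree tilt image",
        # conventional subtomogram averaging yields initial 3D alignments
        "reconstruct tomograms, locate particles, run conventional StA",
        # particles are re-extracted in 2D from the high-SNR zero-tilt image
        "re-extract 2D particle projections from the high-dose image",
        # single-particle tools refine alignment and produce the final map
        "refine with single-particle analysis and reconstruct",
    ]

for i, stage in enumerate(hsta_stages(), start=1):
    print(i, stage)
```

The key design idea is that the final refinement uses only the single high-dose 2D image per particle, sidestepping the low SNR and CTF-correction difficulties of the tilted images.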
Relationship between regional white matter hyperintensities and alpha oscillations in older adults
(2020)
Objective: To investigate whether regional white matter hyperintensities (WMHs) relate to alpha oscillations (AO) in a large population-based sample of elderly individuals.
Methods: We associated voxel-wise WMHs from high-resolution 3-Tesla MRI with neuronal alpha oscillations (AO) from resting-state multichannel EEG at sensor (N=907) and source space (N=855) in older participants of the LIFE-Adult study (60–80 years). In EEG, we computed relative alpha power (AP), individual alpha peak frequency (IAPF), as well as long-range temporal correlations (LRTC) that represent dynamic properties of the signal. We implemented whole-brain voxel-wise regression models to identify regions where parameters of AO were linked to probability of WMH occurrence. We further used mediation analyses to examine whether WMH volume mediated the relationship between age and AO.
Results: Higher prevalence of WMHs in the superior and posterior corona radiata was related to elevated relative AP, with strongest correlations in the bilateral occipital cortex, even after controlling for potential confounding factors. The age-related increase of relative AP in the right temporal brain region was shown to be mediated by total WMH volume.
Conclusion: A high relative AP corresponding to increased regional WMHs was not associated with age per se; rather, this relationship was mediated by WMHs. We argue that the WMH-associated increase of AP reflects a generalized and likely compensatory spread of AO leading to a larger number of synchronously recruited neurons. Our findings thus suggest that longitudinal EEG recordings might be sensitive enough to detect functional changes due to WMHs.
Relationship between regional white matter hyperintensities and alpha oscillations in older adults
(2020)
White matter hyperintensities (WMHs) in the cerebral white matter and attenuation of alpha oscillations (AO; 7–13 Hz) occur with advancing age. However, a crucial question remains whether changes in AO relate to aging per se or whether they rather reflect the impact of age-related neuropathology such as WMHs. In this study, using a large cohort (N=907) of elderly participants (60–80 years), we assessed relative alpha power (AP), individual alpha peak frequency (IAPF) and long-range temporal correlations (LRTC) from resting-state EEG. We further associated these parameters with voxel-wise WMHs from 3T MRI. We found that a higher prevalence of WMHs in the superior and posterior corona radiata was related to elevated relative AP, with the strongest correlations in the bilateral occipital cortex, even after controlling for potential confounding factors. In contrast, we observed no significant relation of the probability of WMH occurrence with IAPF and LRTC. We argue that the WMH-associated increase of AP reflects generalized and likely compensatory changes of AO leading to a larger number of synchronously recruited neurons.
Hypoxia inhibits ferritinophagy, increases mitochondrial ferritin, and protects from ferroptosis
(2020)
Highlights
• Hypoxia decreases NCOA4 transcription in primary human macrophages.
• NCOA4 mRNA is a target of miR-6862-5p.
• Lowering NCOA4 increases FTMT abundance under hypoxia.
• FTMT and FTH protect from ferroptosis.
• Tumor cells lack the hypoxic decrease of NCOA4 and fail to stabilize FTMT.
Abstract
Cellular iron, at the physiological level, is essential to maintain several metabolic pathways, while an excess of free iron may cause oxidative damage and/or provoke cell death. Consequently, iron homeostasis has to be tightly controlled. Under hypoxia, these regulatory mechanisms are not well understood for human macrophages. Hypoxic primary human macrophages reduced intracellular free iron and increased ferritin expression, including mitochondrial ferritin (FTMT), to store iron. In parallel, nuclear receptor coactivator 4 (NCOA4), a master regulator of ferritinophagy, decreased and was proven to directly regulate FTMT expression. Reduced NCOA4 expression resulted from a lower rate of hypoxic NCOA4 transcription combined with a microRNA 6862-5p-dependent degradation of NCOA4 mRNA, the latter being regulated by c-Jun N-terminal kinase (JNK). Pharmacological inhibition of JNK under hypoxia increased NCOA4 and prevented FTMT induction. FTMT and ferritin heavy chain (FTH) cooperated to protect macrophages from RSL-3-induced ferroptosis under hypoxia, as this form of cell death is linked to iron metabolism. In contrast, in HT1080 fibrosarcoma cells, which are sensitive to ferroptosis, NCOA4 and FTMT were not regulated. Our study helps to understand mechanisms of hypoxic FTMT regulation and to link ferritinophagy and macrophage sensitivity to ferroptosis.
The tremendous diversity of life in the ocean has proven to be a rich source of inspiration for drug discovery, with success rates for marine natural products up to 4 times higher than for other naturally derived compounds. Yet the marine biodiscovery pipeline is characterized by chronic underfunding, bottlenecks and, ultimately, untapped potential. For instance, a lack of taxonomic capacity means that, on average, 20 years pass between the discovery of new organisms and the formal publication of scientific names, a prerequisite to proceed with detecting and isolating promising bioactive metabolites. The need for “edge” research that can spur novel lines of discovery, together with lengthy, high-risk drug discovery processes, is poorly matched with research grant cycles. Here we propose five concrete pathways to broaden the biodiscovery pipeline and unlock the social and economic potential of the ocean genome for global benefit: (1) investing in fundamental research, even when the links to industry are not immediately apparent; (2) cultivating equitable collaborations between academia and industry that share both risks and benefits during these foundational research stages; (3) providing new opportunities for early-career researchers and under-represented groups to engage in high-risk research without risking their careers; (4) sharing data with global networks; and (5) protecting genetic diversity at its source through strong conservation efforts. The treasures of the ocean have provided fundamental breakthroughs in human health and still remain under-utilised for human benefit, yet that potential may be lost if we allow the biodiscovery pipeline to become blocked in a search for quick-fix solutions.
Macro-finance theory predicts that financial fragility builds up when volatility is low. This “volatility paradox” challenges traditional systemic risk measures. I explore a new dimension of systemic risk, spillover persistence, which is the average time horizon at which a firm’s losses increase future risk in the financial system. Using firm-level data covering more than 30 years and 50 countries, I document that persistence declines when fragility builds up: before crises, during stock market booms, and when banks take more risks. In contrast, persistence increases with loss amplification: during crises and fire sales. These findings support key predictions of recent macro-finance models.
Understanding effects of emotional valence and stress on children’s memory is important for educational and legal contexts. This study disentangles the effects of emotional content of to-be-remembered information (i.e., items differing in emotional valence and arousal), stress exposure, and associated cortisol secretion on children’s memory. We also examine whether girls’ memory is more affected by stress induction. 143 6-to-7-year-old children were randomly allocated to the Trier Social Stress Test for Children (n = 103) or a control condition (n = 40). 25 minutes after stressor onset, children incidentally encoded 75 objects varying in emotional valence (crossed with arousal) together with neutral scene backgrounds. We found that response-bias corrected memory was worse for low arousing negative items than neutral and positive items, with the latter two categories not being different from each other. Whilst boys’ memory was largely unaffected by stress, girls in the stress condition showed worse memory for negative items, especially the low arousing ones, than girls in the control condition. Girls, compared to boys, reported higher subjective stress increases following stress exposure, and had higher cortisol stress responses. Whilst a higher cortisol stress response was associated with better emotional memory in girls in the stress condition, boys’ memory was not associated with their cortisol secretion. Taken together, our study suggests that 6-to-7-year-old children, more so girls, show memory suppression for negative information. Girls’ memory for negative information, compared to boys, is also more strongly modulated by stress experience and the associated cortisol response.
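The abstract above does not state which response-bias correction was applied; one common choice in recognition memory research is the discrimination index Pr (hit rate minus false-alarm rate), sketched here purely as an illustration of the general idea:

```python
def discrimination_index(hits, misses, false_alarms, correct_rejections):
    """Pr = hit rate - false-alarm rate (two-high-threshold model).

    This is one common bias-corrected recognition score; the study above
    may have used a different correction.
    """
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - false_alarm_rate

# A child recognizing 30 of 40 old items, with 8 false alarms on 40 new
# items, scores a hit rate of 0.75 against a false-alarm rate of 0.20:
print(discrimination_index(30, 10, 8, 32))
```

Subtracting the false-alarm rate removes the inflation that a liberal "yes" bias would otherwise add to raw hit counts, which is why bias-corrected scores are preferred when groups may differ in response tendency.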
Aims: Acetylsalicylic acid (ASA) is widely used for the prevention of atherothrombotic events in patients with chronic coronary artery disease (CAD) and peripheral artery disease (PAD), but the risk of vascular events remains high. We aimed to identify randomised controlled trials (RCTs) on antithrombotic treatments in patients with chronic CAD or PAD.
Methods: Searches were conducted on MEDLINE, EMBASE, and CENTRAL on March 1st, 2018. This systematic review (SR) uses a narrative synthesis to summarize the evidence for the efficacy and safety of antiplatelet and anticoagulant therapies in the population of both chronic CAD or PAD patients.
Results: Four RCTs from 27 publications were included. Study groups included 15,603 to 27,395 patients. ASA alone was the most extensively studied (n = 3); other studies included rivaroxaban with or without ASA (n = 1), vorapaxar alone (n = 1), and clopidogrel with (n = 1) or without ASA (n = 1). Compared to ASA alone, clopidogrel alone and clopidogrel plus ASA presented similar efficacy with a comparable safety profile. Rivaroxaban plus ASA significantly reduced the risk of the composite of cardiovascular death, myocardial infarction, and stroke compared to ASA alone, although major bleeding increased with rivaroxaban plus ASA.
Conclusion: There is limited and heterogeneous evidence on the prevention of atherothrombotic events in patients with chronic CAD or PAD. Clopidogrel alone and clopidogrel plus ASA did not demonstrate superiority over ASA alone. A combination of rivaroxaban plus ASA may offer significant additional benefit in reducing cardiovascular outcomes, yet it may increase the risk of bleeding, compared to ASA alone.
Determination of a minimal postmortem interval via age estimation of necrophagous diptera has been restricted to the juvenile stages and the time until emergence of the adult fly, i.e. up to 2–6 weeks depending on species and temperature. Age estimation of adult flies could extend this period by adding the age of the fly to the time needed for complete development. In this context, pteridines are promising metabolites, as they accumulate in the eyes of flies with increasing age. We studied adults of the blow fly Lucilia sericata at constant temperatures of 16 °C and 25 °C up to an age of 25 days and estimated their pteridine levels by fluorescence spectroscopy. Age was given in accumulated degree days (ADD) across temperatures. Additionally, a mock case was set up to test the applicability of the method. Pteridine increases logarithmically with increasing ADD, but after 70–80 ADD the increase slows down and the curve approaches a maximum. Sex had a significant impact (p < 4.09 × 10−6) on pteridine fluorescence level, while body size and head width did not. The mock case demonstrated that a slight overestimation of the real age (in ADD) occurred in only two out of 30 samples. Age determination of L. sericata on the basis of pteridine levels seems to be limited to an age of about 70 ADD, but depending on the ambient temperature this could cover an additional 5–7 days after completion of metamorphosis.
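Accumulated degree days pool thermal exposure into a single age scale across rearing temperatures; a minimal sketch, assuming a development threshold (base temperature) of 0 °C, which the abstract does not specify:

```python
def accumulated_degree_days(temperature_c, days, base_temperature_c=0.0):
    """ADD = days * (rearing temperature - base temperature).

    The base temperature of 0 degC is an assumption for illustration;
    the actual developmental threshold is species-specific.
    """
    return days * max(temperature_c - base_temperature_c, 0.0)

# 25 days at 16 degC and 16 days at 25 degC both amount to 400 ADD,
# which is why fly age can be expressed on one scale across temperatures:
print(accumulated_degree_days(16, 25))  # 400.0
print(accumulated_degree_days(25, 16))  # 400.0
```

On this scale, the ~70 ADD ceiling reported above corresponds to roughly 4.4 days at 16 °C but only about 2.8 days at 25 °C, illustrating why the covered calendar time depends on ambient temperature.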
Cabozantinib (Cabometyx®) is a potent multikinase inhibitor targeting the vascular endothelial growth factor (VEGF) receptor 2, the mesenchymal-epithelial transition factor (MET) receptor, and the “anexelekto” (AXL) receptor tyrosine kinase. It is approved for the treatment of advanced hepatocellular carcinoma (HCC) after failure of sorafenib in Europe (since November 2018) and in the USA (since January 2019). The approval of cabozantinib was based on results of the randomized, placebo-controlled, phase 3 CELESTIAL trial in patients with unresectable HCC, who received one or two prior lines of treatment including sorafenib. At the second planned interim analysis, the trial was stopped, because the primary end point overall survival was clearly in favor for cabozantinib. Additionally, median progression-free survival was superior to placebo. The most common ≥ grade 3 relevant adverse events in patients with HCC treated with cabozantinib were palmar–plantar erythrodysesthesia, hypertension, fatigue, and diarrhea. In this review, current data on cabozantinib for the treatment of patients with advanced HCC, with a focus on the management of common adverse events and ongoing clinical trials, are discussed.
External linkages allow nascent ventures to access crucial resources during the process of new product development. Forming external linkages can substantially contribute to a venture’s performance. However, little is known about the paths of external linkage formation, or about the circumstances that drive the choice to pursue one rather than another path. This gap deserves further investigation because we do not know whether insights developed for incumbent firms also apply to nascent ventures. To address this gap, we explore a novel dataset of 370 venture creation processes. Using sequence analyses based on optimal matching techniques and cluster analyses, we reveal that nascent ventures pursue one of four distinct paths of linkage formation activities during new product development. Contrary to the findings of the strategy literature, we find that if nascent ventures engage in external linkages at all, they do not combine exploration- and exploitation-oriented linkages but form either exploration- or exploitation-oriented linkages. Additional regression analyses highlight the circumstances that lead nascent ventures to pursue one rather than another of these pathways. Taken together, our analyses point out that resource scarcity constitutes an important factor shaping the linkage formation activities of nascent ventures. Accordingly, we show that nascent ventures tend not to optimize by adding complementary knowledge to the firm’s knowledge base but rather to extend the existing knowledge base, a strategy which we call bricolage.
In recent decades, the assessment of instructional quality has grown into a popular and well-funded arm of educational research. The present study contributes to this field by exploring first impressions of untrained raters as an innovative approach to assessment. We apply the thin slice procedure to obtain ratings of instructional quality along the dimensions of cognitive activation, classroom management, and constructive support based on only 30 s of classroom observations. Ratings were compared to the longitudinal data of students taught in the videos to investigate the connections between the brief glimpses into instructional quality and student learning. In addition, we included samples of raters with different backgrounds (university students, middle school students and educational research experts) to understand the differences in thin slice ratings with respect to their predictive power regarding student learning. Results suggest that each group provides reliable ratings, as measured by a high degree of agreement between raters, as well as ratings that are predictive of students’ learning. Furthermore, we find that experts’ and middle school students’ ratings of classroom management and constructive support, respectively, explain unique components of variance in student test scores. This incremental validity can be explained by the implicit knowledge of experts and by an attunement to specific cues that is attributable to the students’ emotional involvement.
The tobacco plant species Nicotiana tabacum and Nicotiana rustica are of great economic importance. They are used to produce tobacco, which, together with alcohol, ranks among the most widely consumed recreational drugs worldwide. Because of its legality, its toxicity is still underestimated despite growing warnings and public education. The toxicity of the tobacco plant is primarily due to the alkaloid nicotine. Poisoning by the pure plant is rare because its appearance hardly invites consumption. More common is poisoning by, for example, swallowed cigarette butts, which can be very dangerous, especially for children. A further risk of poisoning arises during the tobacco harvest: nicotine is also absorbed through the skin and can thus cause Green Tobacco Sickness in tobacco plantation workers. In an emergency, no antidote exists. Activated charcoal should be given as quickly as possible to reduce absorption; otherwise, the nicotine must be removed from the body by gastric lavage. Preventive efforts should therefore draw greater attention to the dangers of tobacco.
The metasomatised continental mantle may play a key role in the generation of some ore deposits, in particular mineral systems enriched in platinum-group elements (PGE) and Au. The cratonic lithosphere is the longest-lived potential source for these elements, but the processes that facilitate their pre-concentration in the mantle and their later remobilisation to the crust are not yet well established. Here, we report new results on the petrography and the major-element, siderophile-element and chalcophile-element composition of native Ni, base metal sulphides (BMS), and spinels in a suite of well-characterised, highly metasomatised and weakly serpentinised peridotite xenoliths from the Bultfontein kimberlite in the Kaapvaal Craton, and integrate these data with published analyses. Pentlandite in polymict breccias (failed kimberlite intrusions at mantle depth) has lower trace-element contents (e.g., median total PGE 0.72 ppm) than pentlandite in phlogopite peridotites and Mica-Amphibole-Rutile-Ilmenite-Diopside (MARID) rocks (median 1.6 ppm). Spinel is an insignificant host for all elements except Zn, and BMS and native Ni typically account for <25% of the bulk-rock PGE and Au. High bulk-rock Te/S ratios suggest a role for PGE-bearing tellurides, which, along with other compounds of metasomatic origin, may host the missing As, Ag, Cd, Sb, Te and, in part, Bi that are unaccounted for by the main assemblage.
The close spatial relationship between BMS and metasomatic minerals (e.g., phlogopite, ilmenite) indicates that the lithospheric mantle beneath Bultfontein was resulphidised by metasomatism after initial melt depletion during stabilisation of the cratonic lithosphere. Newly-formed BMS are markedly PGE-poor, as total PGE contents are <4.2 ppm in pentlandite from seven samples, compared to >26 ppm in BMS in other peridotite xenoliths from the Kaapvaal craton. This represents a strong dilution of the original PGE abundances at the mineral scale, perhaps starting from precursor PGE alloy and small volumes of residual BMS. The latter may have been the precursor to native Ni, which occurs in an unusual Ni-enriched zone in a harzburgite and displays strongly variable, but overall high PGE abundances (up to 81 ppm). In strongly metasomatised peridotites, Au is enriched relative to Pd, and was probably added along with S. A combination of net introduction of S, Au +/− PGE from the asthenosphere and intra-lithospheric redistribution, in part sourced from subducted materials, during metasomatic events may have led to sulphide precipitation at ~80–120 km beneath Bultfontein. This process locally enhanced the metallogenic fertility of this lithospheric reservoir. Further mobilisation of the metal budget stored in these S-rich domains and upwards transport into the crust may require interaction with sulphide-undersaturated melts that can dissolve sulphides along with the metals they store.
Objectives: Lumbar spinal stenosis (LSS) and lumbar disc herniation (LDH) are often accompanied by frequently occurring leg cramps severely affecting patients’ life and sleep quality. Recent evidence suggests that neuromuscular electric stimulation (NMES) of cramp-prone muscles may prevent cramps in lumbar disorders.
Materials and Methods: Thirty-two men and women (63 ± 9 years) with LSS and/or LDH suffering from cramps were randomly allocated to four different groups. Unilateral stimulation of the gastrocnemius was applied twice a week over four weeks (3 × 6 × 5 sec stimulation trains at 30 Hz above the individual cramp threshold frequency [CTF]). Three groups received either 85%, 55%, or 25% of their maximum tolerated stimulation intensity, whereas one group only received pseudo-stimulation.
Results: The number of reported leg cramps decreased in the 25% (25 ± 14 to 7 ± 4; p = 0.002), 55% (24 ± 10 to 10 ± 11; p = 0.014) and 85%NMES (23 ± 17 to 1 ± 1; p < 0.001) group, whereas it remained unchanged after pseudo-stimulation (20 ± 32 to 19 ± 33; p > 0.999). In the 25% and 85%NMES group, this improvement was accompanied by an increased CTF (p < 0.001).
Conclusion: Regularly applied NMES of the calf muscles reduces leg cramps in patients with LSS/LDH even at low stimulation intensity.
We show explicit formulas for the evaluation of (possibly higher-order) fractional Laplacians (−Δ)^s of some functions supported on ellipsoids. In particular, we derive the explicit expression of the torsion function and give examples of s-harmonic functions. As an application, we infer that the weak maximum principle fails in eccentric ellipsoids for s ∈ (1, √3 + 3/2) in any dimension n ≥ 2. We build a counterexample in terms of the torsion function times a polynomial of degree 2. Using point inversion transformations, it follows that a variety of bounded and unbounded domains do not satisfy positivity preserving properties either, and we give some examples.
Highlights
• PUR, PVC and PLA microplastics affect life-history parameters of Daphnia magna.
• Natural kaolin particles are less toxic than microplastics.
• Microplastic toxicity is material-specific, e.g. PVC is most toxic on reproduction.
• In case of PVC, plastic chemicals are the main driver of microplastic toxicity.
• PLA bioplastics are similarly toxic as conventional plastics.
Abstract
Given the ubiquitous presence of microplastics in aquatic environments, an evaluation of their toxicity is essential. Microplastics are a heterogeneous set of materials that differ not only in particle properties, like size and shape, but also in chemical composition, including polymers, additives and side products. Thus far, it remains unknown whether the plastic chemicals or the particle itself are the driving factor for microplastic toxicity. To address this question, we exposed Daphnia magna for 21 days to irregular polyvinyl chloride (PVC), polyurethane (PUR) and polylactic acid (PLA) microplastics as well as to natural kaolin particles in high concentrations (10, 50, 100, 500 mg/L, ≤ 59 μm) and different exposure scenarios, including microplastics and microplastics without extractable chemicals as well as the extracted and migrating chemicals alone. All three microplastic types negatively affected the life-history of D. magna. However, this toxicity depended on the endpoint and the material. While PVC had the largest effect on reproduction, PLA reduced survival most effectively. The latter indicates that bio-based and biodegradable plastics can be as toxic as their conventional counterparts. The natural particle kaolin was less toxic than microplastics when comparing numerical concentrations. Importantly, the contribution of plastic chemicals to the toxicity was also plastic type-specific. While we can attribute effects of PVC to the chemicals used in the material, effects of PUR and PLA plastics were induced by the mere particle. Our study demonstrates that plastic chemicals can drive microplastic toxicity. This highlights the importance of considering the individual chemical composition of plastics when assessing their environmental risks. Our results suggest that less studied polymer types, like PVC and PUR, as well as bioplastics are of particular toxicological relevance and should get a higher priority in ecotoxicological studies.
Die Funken der Erlösung : journal on the translation of the novel "Die Jakobsbücher" ("The Books of Jacob") by Olga Tokarczuk
(2020)
"For our translation, all of these considerations mattered insofar as we had to decide which cultural-historical placements we would create and which connotations we wanted to evoke through the use of this or that particular word, and by which means it would be possible to also depict the processual nature of the story, the path that Jakob Frank and his company cover, in the sense of physical as well as cultural topography."
Deubiquitinases (DUBs) are vital for the regulation of ubiquitin signals, and both catalytic activity of and target recruitment by DUBs need to be tightly controlled. Here, we identify asparagine hydroxylation as a novel posttranslational modification involved in the regulation of Cezanne (also known as OTU domain–containing protein 7B (OTUD7B)), a DUB that controls key cellular functions and signaling pathways. We demonstrate that Cezanne is a substrate for factor inhibiting HIF1 (FIH1)- and oxygen-dependent asparagine hydroxylation. We found that FIH1 modifies Asn35 within the uncharacterized N-terminal ubiquitin-associated (UBA)-like domain of Cezanne (UBACez), which lacks conserved UBA domain properties. We show that UBACez binds Lys11-, Lys48-, Lys63-, and Met1-linked ubiquitin chains in vitro, establishing UBACez as a functional ubiquitin-binding domain. Our findings also reveal that the interaction of UBACez with ubiquitin is mediated via a noncanonical surface and that hydroxylation of Asn35 inhibits ubiquitin binding. Recently, it has been suggested that Cezanne recruitment to specific target proteins depends on UBACez. Our results indicate that UBACez can indeed fulfill this role as regulatory domain by binding various ubiquitin chain types. They also uncover that this interaction with ubiquitin, and thus with modified substrates, can be modulated by oxygen-dependent asparagine hydroxylation, suggesting that Cezanne is regulated by oxygen levels.
This contribution presents the specifics of reader-reception research conducted with qualitative content analysis. The focus is on literary reading. Analyses of text-reception documents carried out for research purposes in literature didactics pose a doubly hermeneutic challenge: the aim is to understand what readers understand in texts. This entails specific requirements for the analysis process. First, the scope of the context unit must be clarified; differentiated answers are needed here because the given context changes constantly during the reading process. Second, the research interest requires a particular kind of category, referred to in the literature as formal or analytic; a further differentiation between strictly formal and theory-based formal categories is proposed here. Third, it must be clarified whether the reconstructed reading activities are processes, or whether they allow conclusions about underlying dispositions. These requirements are discussed and approaches to addressing them are offered.
Highlights
• Explanation of mobility design and its practical, aesthetic and emblematic effects on travel behaviour.
• Review of recent studies on mobility design elements and the promotion of non-motorised travel.
• Discussion of research gaps and methodological challenges of data collection and comparability.
Abstract
To promote non-motorised travel, many travel behaviour studies acknowledge the importance of the built environment to modal choice, for example through its density or mix of uses. From a mobility design theory perspective, however, objects and environments affect human perceptions, assessments and behaviour in at least three different ways: through their practical, aesthetic and emblematic functions. This review of existing evidence argues that travel behaviour research has so far mainly focused on the practical function of the built environment. For that purpose, we systematically identified 56 relevant studies on the impacts of the built environment on non-motorised travel behaviour in the Web of Science database. Research on the practical design function primarily involves land use distribution, street network connectivity and the presence of walking and cycling facilities. Only a small number of papers address the aesthetic and emblematic functions. These show that the perceived attractiveness of an environment and evoked feelings of traffic safety increase the likelihood of walking and cycling. However, from a mobility design perspective, the results of the review indicate a gap regarding comprehensive research on the effects of the aesthetic and emblematic functions of the built environment. Further research involving these functions might contribute to a better understanding of how to promote non-motorised travel more effectively. Moreover, limitations related to survey techniques, regional distribution and the comparability of results were identified.
Assessment of individual therapeutic responses provides valuable information concerning treatment benefits in individual patients. We evaluated individual therapeutic responses as determined by the Disease Activity Score-28 joints critical difference for improvement (DAS28-dcrit) in rheumatoid arthritis (RA) patients treated with intravenous tocilizumab or comparator anti-tumor necrosis factor (TNF) agents. The previously published DAS28-dcrit value [DAS28 decrease (improvement) ≥ 1.8] was retrospectively applied to data from two studies of tocilizumab in RA, the 52-week ACT-iON observational study and the 24-week ADACTA randomized study. Data were compared within (not between) studies. DAS28 was calculated with erythrocyte sedimentation rate as the inflammatory marker. Stability of DAS28-dcrit responses and European League Against Rheumatism (EULAR) good responses was determined by evaluating repeated responses at subsequent timepoints. A logistic regression model was used to calculate p values for differences in response rates between active agents. Patient-reported outcomes (PROs; pain, global health, function, and fatigue) in DAS28-dcrit responder versus non-responder groups were compared with an ANCOVA model. DAS28-dcrit individual response rates were 78.2% in tocilizumab-treated patients and 58.2% in anti-TNF-treated patients at week 52 in the ACT-iON study (p = 0.0001) and 90.1% versus 59.1% at week 24 in the ADACTA study (p < 0.0001). DAS28-dcrit responses showed greater stability over time (up to 52 weeks) than EULAR good responses. For both active treatments, DAS28-dcrit responses were associated with statistically significant improvements in mean PRO values compared with non-responders. The DAS28-dcrit response criterion provides robust assessments of individual responses to RA therapy and may be useful for discriminating between active agents in clinical studies and guiding treat-to-target decisions in daily practice.
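The response criterion described above combines the standard DAS28-ESR formula with the published critical difference for improvement (decrease ≥ 1.8). A minimal sketch, using the widely published DAS28-ESR coefficients and entirely hypothetical patient values (the study's own data are not reproduced here):

```python
import math

def das28_esr(tjc28, sjc28, esr, gh):
    """DAS28 with erythrocyte sedimentation rate as the inflammatory marker.

    tjc28/sjc28: tender/swollen joint counts (0-28),
    esr: erythrocyte sedimentation rate (mm/h),
    gh: patient global health on a 0-100 mm visual analogue scale.
    Coefficients are the standard published DAS28-ESR weights.
    """
    return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr) + 0.014 * gh)

def dcrit_response(baseline, follow_up, dcrit=1.8):
    """Individual DAS28-dcrit response: improvement (decrease) >= 1.8."""
    return (baseline - follow_up) >= dcrit

# Hypothetical values for illustration only, not study data.
before = das28_esr(tjc28=12, sjc28=8, esr=40, gh=60)
after = das28_esr(tjc28=3, sjc28=1, esr=12, gh=25)
print(round(before, 2), round(after, 2), dcrit_response(before, after))
```

An improvement of this size would count the patient as an individual responder under the dcrit criterion, independently of the group-level comparisons reported in the abstract.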
Methoden
(2020)
Review of: Akremi, Leila, Nina Baur, Hubert Knoblauch and Boris Traue (eds.): Handbuch Interpretativ forschen. Weinheim, Basel: Beltz Juventa 2018. 961 pages. ISBN: 978-3-7799-3126-3. Price: €49.95.
Human RNF213, which encodes the protein mysterin, is a known susceptibility gene for moyamoya disease (MMD), a cerebrovascular condition with occlusive lesions and compensatory angiogenesis. Mysterin mutations, together with exposure to environmental trigger factors, lead to an elevated stroke risk from childhood onward. Mysterin is induced during cell stress and functions as a cytosolic AAA+ ATPase and ubiquitylation enzyme. Little is known, however, about the contexts in which mysterin is needed. Here, we found that genetic ablation of several mitochondrial matrix factors, such as the peptidase ClpP, the transcription factor Tfam, and the peptidase and AAA+ ATPase Lonp1, potently induces Rnf213 transcript expression in various organs, in parallel with other components of the innate immune system. In mouse fibroblasts and human endothelial cells in particular, Rnf213 levels showed prominent upregulation upon Poly(I:C)-triggered TLR3-mediated responses to dsRNA toxicity, as well as upon interferon gamma treatment. Only partial suppression of Rnf213 induction was achieved by C16, an antagonist of PKR (dsRNA-dependent protein kinase). Since dysfunctional mitochondria were recently reported to release immune-stimulatory dsRNA into the cytosol, our results suggest that mysterin becomes relevant when mitochondrial dysfunction or infections have triggered RNA-dependent inflammation. Thus, MMD has similarities with vasculopathies that involve altered nucleotide processing, such as Aicardi-Goutières syndrome or systemic lupus erythematosus. Furthermore, in MMD, the low penetrance of RNF213 mutations might be modified by dysfunctions in mitochondria or the TLR3 pathway.