Machine learning entails a broad range of techniques that have been widely used in science and engineering for decades. High-energy physics has also profited from the power of these tools for advanced analysis of collider data. Only recently has machine learning begun to be applied successfully in the domain of accelerator physics, as testified by the intense efforts deployed in this domain by several laboratories worldwide. This is also the case at CERN, where focused efforts have recently been devoted to applying machine learning techniques to beam dynamics studies at the Large Hadron Collider (LHC). This covers a wide spectrum of applications, from beam measurements and machine performance optimisation to the analysis of numerical data from tracking simulations of non-linear beam dynamics. In this paper, the LHC-related applications currently being pursued are presented and discussed in detail, with attention also paid to future developments.
Background: Patients undergoing allogeneic stem cell transplantation (aSCT) are at high risk of developing an invasive fungal disease (IFD). Optimisation of antifungal prophylaxis strategies may improve patient outcomes and reduce treatment costs.
Objectives: To analyse the clinical and economic impact of using continuous micafungin as antifungal prophylaxis.
Patients/Methods: We performed a single-centre evaluation comparing patients who received either oral posaconazole with micafungin as intravenous bridging as required (POS-MIC) to patients who received only micafungin (MIC) as antifungal prophylaxis after aSCT. Epidemiological, clinical and direct treatment cost data extracted from the Cologne Cohort of Neutropenic Patients (CoCoNut) were analysed.
Results: Three hundred and thirteen patients (97 and 216 patients in the POS-MIC and MIC groups, respectively) were included in the analysis. In the POS-MIC and MIC groups, median overall length of stay was 42 days (IQR: 35–52 days) vs 40 days (IQR: 35–49 days; p = .296), resulting in median overall costs of €42,964 (IQR: €35,040–€56,348) vs €43,291 (IQR: €37,281–€51,848; p = .993), respectively. Probable/proven IFD in the POS-MIC and MIC groups occurred in 5 patients (5%) vs 3 patients (1%; p = .051), respectively. Kaplan-Meier analysis showed improved outcomes of patients in the MIC group at day 100 (p = .037) and day 365 (p < .001) following aSCT.
Conclusions: Our study results demonstrate improved outcomes in the MIC group compared with the POS-MIC group, which can in part be explained by a tendency towards fewer probable/proven IFD events. The higher drug acquisition costs of micafungin did not translate into higher overall costs.
Surface temperature is a fundamental parameter of Earth’s climate. Its evolution through time is commonly reconstructed using the oxygen isotope and clumped isotope compositions of carbonate archives. However, reaction kinetics involved in the precipitation of carbonates can introduce inaccuracies in the derived temperatures. Here, we show that dual clumped isotope analyses, i.e., simultaneous ∆47 and ∆48 measurements on a single carbonate phase, can identify the origin and quantify the extent of these kinetic biases. Our results verify theoretical predictions and show that the isotopic disequilibrium commonly observed in speleothems and scleractinian coral skeletons is inherited from the dissolved inorganic carbon pool of their parent solutions. Further, we show that dual clumped isotope thermometry can achieve reliable palaeotemperature reconstructions, devoid of kinetic bias. Analysis of a belemnite rostrum implies that it precipitated near isotopic equilibrium and confirms warmer-than-present temperatures during the Early Cretaceous at southern high latitudes.
Heavy quarks are useful probes to investigate the properties of the Quark-Gluon Plasma (QGP) produced in heavy-ion collisions at the LHC, since they are produced in initial hard scattering processes. To single out the signals that are characteristic of the QGP, it is nevertheless crucial to understand the primordial heavy-quark production in vacuum, and to disentangle hot from cold nuclear matter effects. Moreover, observations of collective effects in high-multiplicity pp and p-Pb collisions show surprising similarities with those in heavy-ion collisions. Heavy-flavour production in such collisions could give further insight into the underlying processes. The heavy-flavour production can be studied with e+e− pairs from correlated semileptonic decays of heavy-flavour hadrons. Compared to single heavy-flavour measurements, the dielectron yield contains information about the initial kinematical correlations between the charm and anti-charm quarks, which is otherwise not accessible, and is sensitive to soft heavy-flavour production. We report results on correlated e+e− pairs in pp collisions recorded by the ALICE detector at different collision energies. The production of heavy quarks is discussed by comparing the yield of dielectrons from heavy-flavour hadron decays as a function of invariant mass, pair transverse momentum and distance of closest approach to the primary vertex with different Monte Carlo event generators. The heavy-flavour production cross sections are also presented. Results from high-multiplicity pp collisions at √s=13 TeV and the status of the p-Pb analysis at √sNN=5.02 TeV are reported as well.
A comprehensive study of sillenite Bi12SiO20 single-crystal properties, including elastic stiffness and piezoelectric coefficients, dielectric permittivity, thermal expansion and molar heat capacity, is presented. Brillouin-interferometry measurements (up to 27 GPa), which were performed at high pressures for the first time, and ab initio calculations based on density functional theory (up to 50 GPa) show the stability of the sillenite structure in the investigated pressure range, in agreement with previous studies. Elastic stiffness coefficients c11 and c12 are found to increase continuously with pressure while c44 increases slightly for lower pressures and remains nearly constant above 15 GPa. Heat-capacity measurements were performed with a quasi-adiabatic calorimeter employing the relaxation method between 2 K and 395 K. No phase transition could be observed in this temperature interval. Standard molar entropy, enthalpy change and Debye temperature are extracted from the data. The results are found to be roughly half of the previous values reported in the literature. The discrepancy is attributed to the overestimation of the Debye temperature which was extracted from high-temperature data. Additionally, Debye temperatures obtained from mean sound velocities derived by Voigt-Reuss averaging are in agreement with our heat-capacity results. Finally, a complete set of electromechanical coefficients was deduced from the application of resonant ultrasound spectroscopy between 103 K and 733 K. No discontinuities in the temperature dependence of the coefficients are observed. High-temperature (up to 1100 K) resonant ultrasound spectra recorded for Bi12MO20 crystals revealed strong and reversible acoustic dissipation effects at 870 K, 960 K and 550 K for M = Si, Ge and Ti, respectively. Resonances with small contributions from the elastic shear stiffness c44 and the piezoelectric stress coefficient e123 are almost unaffected by this dissipation.
Using 2.93 fb−1 of 𝑒+𝑒− annihilation data collected at a center-of-mass energy √𝑠=3.773 GeV with the BESIII detector operating at the BEPCII collider, we search for the semileptonic 𝐷0(+) decays into a 𝑏1(1235)−(0) axial-vector meson for the first time. No significant signal is observed for either charge combination. The upper limits on the product branching fractions are ℬ(𝐷0→𝑏1(1235)−𝑒+𝜈𝑒)·ℬ(𝑏1(1235)−→𝜔𝜋−)<1.12×10−4 and ℬ(𝐷+→𝑏1(1235)0𝑒+𝜈𝑒)·ℬ(𝑏1(1235)0→𝜔𝜋0)<1.75×10−4 at the 90% confidence level.
Libra — a global virtual currency project initiated by Facebook — has been the subject of many controversial discussions since its announcement in June 2019. This paper provides a differentiated view on Libra, recognising that different development scenarios of Libra are conceivable. Libra could serve purely as an alternative payment system in combination with a dedicated payment token, the Libra coin. Alternatively, the Libra project could develop into a broader financial infrastructure for advanced financial services such as savings and loan products operating on the Libra Blockchain. Based on a comparison of the Libra architecture with other cryptocurrencies, the opportunities and challenges for the development of the respective Libra ecosystems are investigated from a commercial, regulatory and monetary policy perspective.
More sustainability in the German benchmark index DAX: reform proposals in light of the Wirecard scandal
(2020)
As part of the reappraisal of the Wirecard scandal, changes to the criteria for inclusion in the German benchmark index DAX are also being discussed. The measures envisaged so far by Deutsche Börse point in the right direction but do not go far enough. A clear signal is needed that, in future, only companies that achieve at least a satisfactory level of sustainability in their business activities, as measured by an ESG risk score (Environment, Social, Governance), can qualify for the DAX. A simulation illustrates that companies long viewed critically under ESG criteria would no longer belong to the DAX. As a result, more capital could flow into sustainably operating companies and sectors.
We report on the measurement of the Central Exclusive Production of charged particle pairs h+h− (h = π, K, p) with the STAR detector at RHIC in proton-proton collisions at √s = 200 GeV. The charged particle pairs produced in the reaction pp → p′ + h+h− + p′ are reconstructed from the tracks in the central detector and identified using the specific energy loss and the time of flight method, while the forward-scattered protons are measured in the Roman Pot system. Exclusivity of the event is guaranteed by requiring the transverse momentum balance of all four final-state particles. Differential cross sections are measured as functions of observables related to the central hadronic final state and to the forward-scattered protons. They are measured in a fiducial region corresponding to the acceptance of the STAR detector and determined by the central particles’ transverse momenta and pseudorapidities as well as by the forward-scattered protons’ momenta. This fiducial region roughly corresponds to the square of the four-momentum transfers at the proton vertices in the range 0.04 GeV2 < −t1, −t2 < 0.2 GeV2, invariant masses of the charged particle pairs up to a few GeV and pseudorapidities of the centrally-produced hadrons in the range |η| < 0.7. The measured cross sections are compared to phenomenological predictions based on the Double Pomeron Exchange (DPE) model. Structures observed in the mass spectra of π+π− and K+K− pairs are consistent with the DPE model, while angular distributions of pions suggest a dominant spin-0 contribution to π+π− production. For π+π− production, the fiducial cross section is extrapolated to the Lorentz-invariant region, which allows decomposition of the invariant mass spectrum into continuum and resonant contributions. The extrapolated cross section is well described by the continuum production and at least three resonances, the f0(980), f2(1270) and f0(1500), with a possible small contribution from the f0(1370). 
Fits to the extrapolated differential cross section as a function of t1 and t2 enable extraction of the exponential slope parameters in several bins of the invariant mass of π+π− pairs. These parameters are sensitive to the size of the interaction region.
Measurement of inclusive charged-particle jet production in Au + Au collisions at √sNN=200 GeV
(2020)
The STAR Collaboration at the Relativistic Heavy Ion Collider reports the first measurement of inclusive jet production in peripheral and central Au+Au collisions at √𝑠𝑁𝑁=200 GeV. Jets are reconstructed with the anti-𝑘𝑇 algorithm using charged tracks with pseudorapidity |𝜂|<1.0 and transverse momentum 0.2<𝑝ch𝑇,jet<30 GeV/𝑐, with jet resolution parameter 𝑅=0.2, 0.3, and 0.4. The large background yield uncorrelated with the jet signal is observed to be dominated by statistical phase space, consistent with a previous coincidence measurement. This background is suppressed by requiring a high-transverse-momentum (high-𝑝𝑇) leading hadron in accepted jet candidates. The bias imposed by this requirement is assessed, and the 𝑝𝑇 region in which the bias is small is identified. Inclusive charged-particle jet distributions are reported in peripheral and central Au+Au collisions for 5<𝑝ch𝑇,jet<25 GeV/𝑐 and 5<𝑝ch𝑇,jet<30 GeV/𝑐, respectively. The charged-particle jet inclusive yield is suppressed for central Au+Au collisions, compared to both the peripheral Au+Au yield from this measurement and to the 𝑝𝑝 yield calculated using the PYTHIA event generator. The magnitude of the suppression is consistent with that of inclusive hadron production at high 𝑝𝑇 and that of semi-inclusive recoil jet yield when expressed in terms of energy loss due to medium-induced energy transport. Comparison of inclusive charged-particle jet yields for different values of 𝑅 exhibits no significant evidence for medium-induced broadening of the transverse jet profile for 𝑅 <0.4 in central Au+Au collisions. The measured distributions are consistent with theoretical model calculations that incorporate jet quenching.
Digital technologies facilitate the use of dynamic pricing, i.e. prices that vary without prior notice for an essentially identical product. In public debate, however, different forms of dynamic pricing are often conflated, which complicates a meaningful analysis of its advantages and disadvantages. The aim of this article is to present the economic foundations of dynamic pricing and to discuss and classify its possible forms. In addition, the advantages and disadvantages of dynamic pricing are assessed from the buyer's and the seller's perspectives. Finally, implications for business research are discussed.
Measurement of inclusive J/ψ polarization in p + p collisions at √s=200 GeV by the STAR experiment
(2020)
We report on new measurements of inclusive 𝐽/𝜓 polarization at midrapidity in 𝑝+𝑝 collisions at √𝑠=200 GeV by the STAR experiment at the Relativistic Heavy Ion Collider. The polarization parameters, 𝜆𝜃, 𝜆𝜙, and 𝜆𝜃𝜙, are measured as a function of transverse momentum (𝑝T) in both the helicity and Collins-Soper (CS) reference frames within 𝑝T<10 GeV/𝑐. Except for 𝜆𝜃 in the CS frame at the highest measured 𝑝T, all three polarization parameters are consistent with 0 in both reference frames without any strong 𝑝T dependence. Several model calculations are compared with data, and the one using the Color Glass Condensate effective field theory coupled with nonrelativistic QCD gives the best overall description of the experimental results, even though other models cannot be ruled out due to experimental uncertainties.
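For reference, the three parameters quoted above are conventionally defined through the dilepton decay angular distribution (this is the standard convention, not spelled out in the abstract itself):

```latex
\frac{d^{2}N}{d\cos\theta\, d\phi} \propto 1
  + \lambda_{\theta}\cos^{2}\theta
  + \lambda_{\phi}\sin^{2}\theta\cos 2\phi
  + \lambda_{\theta\phi}\sin 2\theta\cos\phi
```

where θ and φ are the polar and azimuthal angles of the positive muon in the chosen reference frame (helicity or Collins-Soper); λθ = λφ = λθφ = 0 corresponds to an isotropic, unpolarized distribution.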
Ten hadronic final states of the ℎ𝑐 decays are investigated via the process 𝜓(3686)→𝜋0ℎ𝑐, using a data sample of (448.1±2.9)×106 𝜓(3686) events collected with the BESIII detector. The decay channel ℎ𝑐→𝐾+𝐾−𝜋+𝜋−𝜋0 is observed for the first time and has a measured significance of 6.0𝜎. The corresponding branching fraction is determined to be ℬ(ℎ𝑐→𝐾+𝐾−𝜋+𝜋−𝜋0)=(3.3±0.6±0.6)×10−3 (where the uncertainties are statistical and systematic, respectively). Evidence for the decays ℎ𝑐→𝜋+𝜋−𝜋0𝜂 and ℎ𝑐→𝐾0𝑆𝐾±𝜋∓𝜋+𝜋− is found with a significance of 3.6𝜎 and 3.8𝜎, respectively. The corresponding branching fractions (and upper limits) are obtained to be ℬ(ℎ𝑐→𝜋+𝜋−𝜋0𝜂)=(7.2±1.8±1.3)×10−3 (<1.8×10−2) and ℬ(ℎ𝑐→𝐾0𝑆𝐾±𝜋∓𝜋+𝜋−)=(2.8±0.9±0.5)×10−3 (<4.7×10−3). Upper limits on the branching fractions for the final states ℎ𝑐→𝐾+𝐾−𝜋0, 𝐾+𝐾−𝜂, 𝐾+𝐾−𝜋+𝜋−𝜂, 2(𝐾+𝐾−)𝜋0, 𝐾+𝐾−𝜋0𝜂, 𝐾0𝑆𝐾±𝜋∓, and 𝑝¯𝑝𝜋0𝜋0 are determined at a confidence level of 90%.
Using a dedicated data sample taken in 2018 on the J/ψ peak, we perform a detailed study of the trigger efficiencies of the BESIII detector. The efficiencies are determined from three representative physics processes, namely Bhabha scattering, dimuon production and generic hadronic events with charged particles. The combined efficiency of all active triggers approaches 100% in most cases, with uncertainties small enough not to affect most physics analyses.
Measurement of cross sections for e⁺e⁻ → μ⁺μ⁻ at center-of-mass energies from 3.80 to 4.60 GeV
(2020)
The observed cross sections for 𝑒+𝑒−→𝜇+𝜇− at energies from 3.8 to 4.6 GeV are measured using data samples taken with the BESIII detector operated at the BEPCII collider. We measure the muonic widths and determine the branching fractions of the charmonium states 𝜓(4040), 𝜓(4160), and 𝜓(4415) decaying to 𝜇+𝜇−, as well as making a first determination of the phase of the amplitudes. In addition, we observe evidence for a structure in the dimuon cross section near 4.220 GeV/𝑐2, which we denote as 𝑆(4220). Analyzing a coherent sum of amplitudes yields eight solutions, one of which gives a mass of 𝑀𝑆(4220) = 4216.7±8.9±4.1 MeV/𝑐2, a total width of Γtot𝑆(4220) = 47.2±22.8±10.5 MeV, and a muonic width of Γ𝜇𝜇𝑆(4220) = 1.53±1.26±0.54 keV, where the first uncertainties are statistical and the second systematic. The eight solutions give the central values of the mass, total width and muonic width to be, respectively, in the range from 4212.8 to 4219.4 MeV/𝑐2, from 36.4 to 49.6 MeV, and from 1.09 to 1.53 keV. The statistical significance of the 𝑆(4220) signal is 3.9𝜎. Correcting the total dimuon cross section for radiative effects yields a statistical significance for this structure of 8.1𝜎.
Cross sections of the process 𝑒+𝑒−→𝜋0𝜋0𝐽/𝜓 at center-of-mass energies between 3.808 and 4.600 GeV are measured with high precision by using 12.4 fb−1 of data samples collected with the BESIII detector operating at the BEPCII collider facility. A fit to the measured energy-dependent cross sections confirms the existence of the charmoniumlike state 𝑌(4220). The mass and width of the 𝑌(4220) are determined to be (4220.4±2.4±2.3) MeV/𝑐2 and (46.2±4.7±2.1) MeV, respectively, where the first uncertainties are statistical and the second systematic. The mass and width are consistent with those measured in the process 𝑒+𝑒−→𝜋+𝜋−𝐽/𝜓. The neutral charmonium-like state 𝑍𝑐(3900)0 is observed prominently in the 𝜋0𝐽/𝜓 invariant-mass spectrum, and, for the first time, an amplitude analysis is performed to study its properties. The spin-parity of 𝑍𝑐(3900)0 is determined to be 𝐽𝑃=1+, and the pole position is (3893.1±2.2±3.0)−𝑖(22.2±2.6±7.0) MeV/𝑐2, which is consistent with previous studies of electrically charged 𝑍𝑐(3900)±. In addition, cross sections of 𝑒+𝑒− → 𝜋0𝑍𝑐(3900)0 → 𝜋0𝜋0𝐽/𝜓 are extracted, and the corresponding line shape is found to agree with that of the 𝑌(4220).
Using 2.93 fb−1 of 𝑒+𝑒− collision data collected at a center-of-mass energy of 3.773 GeV with the BESIII detector, the first observation of the doubly Cabibbo-suppressed decay 𝐷+→𝐾+𝜋+𝜋−𝜋0 is reported. After removing decays that contain narrow intermediate resonances, including 𝐷+→𝐾+𝜂, 𝐷+→𝐾+𝜔, and 𝐷+→𝐾+𝜙, the branching fraction of the decay 𝐷+→𝐾+𝜋+𝜋−𝜋0 is measured to be (1.13±0.08stat±0.03syst)×10−3. The ratio of branching fractions of 𝐷+→𝐾+𝜋+𝜋−𝜋0 over 𝐷+→𝐾−𝜋+𝜋+𝜋0 is found to be (1.81±0.15)%, which corresponds to (6.28±0.52)tan4𝜃𝐶, where 𝜃𝐶 is the Cabibbo mixing angle. This ratio is significantly larger than the corresponding ratios for other doubly Cabibbo-suppressed decays. The asymmetry of the branching fractions of charge-conjugated decays 𝐷±→𝐾±𝜋±𝜋∓𝜋0 is also determined, and no evidence for 𝐶𝑃 violation is found. In addition, the first evidence for the 𝐷+→𝐾+𝜔 decay, with a statistical significance of 3.3𝜎, is presented and the branching fraction is measured to be ℬ(𝐷+→𝐾+𝜔) = (5.7+2.5−2.1stat±0.2syst)×10−5.
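As a quick sanity check on the quoted conversion, the measured ratio expressed in units of tan⁴θC can be reproduced in a few lines (a sketch assuming sin θC ≈ 0.225, a PDG-like value not given in the abstract; the exact choice of θC shifts the result within the quoted uncertainty of 6.28±0.52):

```python
import math

# Measured ratio of the doubly Cabibbo-suppressed decay D+ -> K+pi+pi-pi0
# to the Cabibbo-favored D+ -> K-pi+pi+pi0 (value from the abstract)
ratio = 1.81e-2

# Assumed Cabibbo angle via sin(theta_C) ~ 0.225 (not stated in the abstract)
theta_c = math.asin(0.225)

# Express the ratio in units of tan^4(theta_C); lands near the quoted 6.28 +/- 0.52
print(round(ratio / math.tan(theta_c) ** 4, 2))
```

The point of the comparison in the abstract is that this value is several times larger than the naive tan⁴θC ≈ 0.003 suppression alone would suggest for other doubly Cabibbo-suppressed modes.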
The process 𝑒+𝑒−→𝜙𝜂′ has been studied in detail for the first time using data samples collected with the BESIII detector at the BEPCII collider at center-of-mass energies from 2.05 to 3.08 GeV. A resonance with quantum numbers 𝐽𝑃𝐶=1−− is observed with mass 𝑀=(2177.5±4.8(stat)±19.5(syst)) MeV/𝑐2 and width Γ=(149.0±15.6(stat)±8.9(syst)) MeV, with a statistical significance larger than 10𝜎, including systematic uncertainties. If the observed structure is identified with the 𝜙(2170), then the ratio of the 𝜙𝜂 partial width (measured by BABAR) to the 𝜙𝜂′ partial width (measured by BESIII), (ℬ𝑅𝜙𝜂Γ𝑅𝑒𝑒)/(ℬ𝑅𝜙𝜂′Γ𝑅𝑒𝑒)=0.23±0.10(stat)±0.18(syst), is smaller than the prediction of 𝑠¯𝑠𝑔 hybrid models by several orders of magnitude.
By analyzing a data sample corresponding to an integrated luminosity of 2.93 fb−1 collected at a center-of-mass energy of 3.773 GeV with the BESIII detector, we measure for the first time the absolute branching fraction of the 𝐷+→𝜂𝜇+𝜈𝜇 decay to be ℬ𝐷+→𝜂𝜇+𝜈𝜇=(10.4±1.0stat±0.5syst)×10−4. Using the world averaged value of ℬ𝐷+→𝜂𝑒+𝜈𝑒, the ratio of the two branching fractions is determined to be ℬ𝐷+→𝜂𝜇+𝜈𝜇/ℬ𝐷+→𝜂𝑒+𝜈𝑒=0.91±0.13(stat+syst), which agrees with the theoretical expectation of lepton flavor universality within uncertainty. By studying the differential decay rates in five four-momentum transfer intervals, we obtain the product of the hadronic form factor 𝑓𝜂+(0) and the 𝑐→𝑑 Cabibbo-Kobayashi-Maskawa matrix element |𝑉𝑐𝑑| to be 𝑓𝜂+(0)|𝑉𝑐𝑑|=0.087±0.008stat±0.002syst. Taking the input of |𝑉𝑐𝑑| from the global fit in the standard model, we determine 𝑓𝜂+(0)=0.39±0.04stat±0.01syst. On the other hand, using the value of 𝑓𝜂+(0) calculated in theory, we find |𝑉𝑐𝑑| = 0.242±0.022stat±0.006syst±0.033theory.
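The form-factor arithmetic in the last two sentences can likewise be checked directly (a sketch assuming |𝑉𝑐𝑑| ≈ 0.2245 for the global-fit input, a PDG-like value that the abstract does not quote explicitly):

```python
# Product measured from the differential decay rates (value from the abstract)
f_times_vcd = 0.087  # f_+^eta(0) * |V_cd|

# Assumed global-fit input |V_cd| ~ 0.2245 (not stated in the abstract)
vcd_fit = 0.2245

# Dividing out |V_cd| recovers the quoted form factor f_+^eta(0) ~ 0.39
print(round(f_times_vcd / vcd_fit, 2))
```

Running the converse direction with the quoted |𝑉𝑐𝑑| = 0.242 implies the theory form factor used was 0.087/0.242 ≈ 0.36, consistent with the internal arithmetic of the abstract.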
We report the first observation of the semimuonic decay 𝐷+→𝜔𝜇+𝜈𝜇 using an 𝑒+𝑒− collision data sample corresponding to an integrated luminosity of 2.93 fb−1 collected with the BESIII detector at a center-of-mass energy of 3.773 GeV. The absolute branching fraction of the 𝐷+→𝜔𝜇+𝜈𝜇 decay is measured to be ℬ𝐷+→𝜔𝜇+𝜈𝜇=(17.7±1.8stat±1.1syst)×10−4. Its ratio with the world average value of the branching fraction of the 𝐷+→𝜔𝑒+𝜈𝑒 decay probes lepton flavor universality and it is determined to be ℬ𝐷+→𝜔𝜇+𝜈𝜇/ℬPDG 𝐷+→𝜔𝑒+𝜈𝑒=1.05±0.14, in agreement with the standard model expectation within one standard deviation.
The processes 𝑒+𝑒−→𝐷+𝑠𝐷𝑠1(2460)−+c.c. and 𝑒+𝑒−→𝐷*+𝑠𝐷𝑠1(2460)−+c.c. are studied for the first time using data samples collected with the BESIII detector at the BEPCII collider. The Born cross sections of 𝑒+𝑒−→𝐷+𝑠𝐷𝑠1(2460)−+c.c. at nine center-of-mass energies between 4.467 GeV and 4.600 GeV and those of 𝑒+𝑒−→𝐷*+𝑠𝐷𝑠1(2460)−+c.c. at √𝑠=4.590 GeV and 4.600 GeV are measured. No obvious charmonium or charmoniumlike structure is seen in the measured cross sections.
Digital spatial processes have been widely explored and investigated in subject-specific geographic research. So far, however, this research has not been sufficiently reflected in classrooms or teacher education, and remains unconnected to notions of geographical digital literacy. Viral constructions of space – realities shaped in everyday life that are experienced and (re-)produced by students and teachers alike through social media – present an opportunity for Geography education to adapt to the digital society. This paper attempts to connect viral constructions of space, the digital society and the knowledge teachers need to include viral constructions of space in the classroom using Mishra and Koehler’s (2006) TPACK model, a well-established means for summarizing teachers’ technological, pedagogical and content knowledge for a specific topic. The paper focuses on content knowledge, identifies five sub-types of viral constructions of space, and extracts nine descriptors of teachers’ content knowledge. By focusing on content knowledge, the paper presents a starting point for future investigations of pedagogical and technological teacher knowledge as well as their intersections. It also raises awareness of viral constructions of space as both a new essential topic in the Geography classroom and a phenomenon already shaping learning environments for spatial acquisition.
Using a sample of 106 million 𝜓(3686) decays, 𝜓(3686)→𝛾𝜒𝑐𝐽(𝐽=0,1,2) and 𝜓(3686)→𝛾𝜒𝑐𝐽,𝜒𝑐𝐽→𝛾𝐽/𝜓(𝐽=1,2) events are utilized to study inclusive 𝜒𝑐𝐽→anything, 𝜒𝑐𝐽→hadrons, and 𝐽/𝜓→anything distributions, including distributions of the number of charged tracks, electromagnetic calorimeter showers, and 𝜋0s, and to compare them with distributions obtained from the BESIII Monte Carlo simulation. Information from each Monte Carlo simulated decay event is used to construct matrices connecting the detected distributions to the input predetection “produced” distributions. Assuming these matrices also apply to data, they are used to predict the analogous produced distributions of the decay events. Using these, the charged particle multiplicities are compared with results from MARK I. Further, comparison of the distributions of the number of photons in data with those in Monte Carlo simulation indicates that G-parity conservation should be taken into consideration in the simulation.
Using 2.93 fb−1 of 𝑒+𝑒− collision data taken at a center-of-mass energy of 3.773 GeV by the BESIII detector at the BEPCII, we measure the branching fractions of the singly Cabibbo-suppressed decays 𝐷→𝜔𝜋𝜋 to be ℬ(𝐷0→𝜔𝜋+𝜋−)=(1.33±0.16±0.12)×10−3 and ℬ(𝐷+→𝜔𝜋+𝜋0)=(3.87±0.83±0.25)×10−3, where the first uncertainties are statistical and the second ones systematic. The statistical significances are 12.9𝜎 and 7.7𝜎, respectively. The precision of ℬ(𝐷0→𝜔𝜋+𝜋−) is improved by a factor of 2.1 over prior measurements, and ℬ(𝐷+→𝜔𝜋+𝜋0) is measured for the first time. No significant signal for 𝐷0→𝜔𝜋0𝜋0 is observed, and the upper limit on the branching fraction is ℬ(𝐷0→𝜔𝜋0𝜋0)<1.10×10−3 at the 90% confidence level. The branching fractions of 𝐷→𝜂𝜋𝜋 are also measured and consistent with existing results.
Using 2.93 fb−1 of 𝑒+𝑒− collision data taken at a center-of-mass energy of 3.773 GeV with the BESIII detector, we report the first measurements of the absolute branching fractions of 14 hadronic 𝐷0(+) decays to exclusive final states with an 𝜂, namely 𝐷0→𝐾−𝜋+𝜂, 𝐾0𝑆𝜋0𝜂, 𝐾+𝐾−𝜂, 𝐾0𝑆𝐾0𝑆𝜂, 𝐾−𝜋+𝜋0𝜂, 𝐾0𝑆𝜋+𝜋−𝜂, 𝐾0𝑆𝜋0𝜋0𝜂, and 𝜋+𝜋−𝜋0𝜂; and 𝐷+→𝐾0𝑆𝜋+𝜂, 𝐾0𝑆𝐾+𝜂, 𝐾−𝜋+𝜋+𝜂, 𝐾0𝑆𝜋+𝜋0𝜂, 𝜋+𝜋+𝜋−𝜂, and 𝜋+𝜋0𝜋0𝜂. Among these decays, the 𝐷0→𝐾−𝜋+𝜂 and 𝐷+→𝐾0𝑆𝜋+𝜂 decays have the largest branching fractions, which are ℬ(𝐷0→𝐾−𝜋+𝜂) = (1.853±0.025stat±0.031syst)% and ℬ(𝐷+→𝐾0𝑆𝜋+𝜂) = (1.309±0.037stat±0.031syst)%, respectively. The charge-parity asymmetries for the six decays with the highest event yields are determined, and no statistically significant charge-parity violation is found.
There has recently been a dramatic renewal of interest in hadron spectroscopy and charm physics. This renaissance has been driven in part by the discovery of a plethora of charmonium-like XYZ states at BESIII and B factories, and the observation of an intriguing proton-antiproton threshold enhancement and the possibly related X(1835) meson state at BESIII, as well as the threshold measurements of charm mesons and charm baryons.
We present a detailed survey of the important topics in tau-charm physics and hadron physics that can be further explored at BESIII during the remaining operation period of BEPCII. This survey will help in the optimization of the data-taking plan over the coming years, and provides physics motivation for the possible upgrade of BEPCII to higher luminosity.
Highlights
• German patients with LGS identified using the most specific algorithm to date.
• Prevalence of probable LGS with epilepsy diagnosis before age 6 was 6.5 per 100,000.
• High healthcare costs of €22,787 PPY; mostly due to inpatient and home nursing care.
• Costs were greater in patients prescribed rescue medications.
• Over 10 years, LGS patients had significantly higher mortality vs. controls (2.88% vs. 0.01%).
Abstract
Objective: This retrospective study examined patients with probable Lennox-Gastaut syndrome (LGS) identified from German healthcare data.
Methods: This 10-year study (2007–2016) assessed healthcare insurance claims information from the Vilua Healthcare research database. A selection algorithm considering diagnoses and drug prescriptions identified patients with probable LGS. To increase the sensitivity of the identification algorithm, two populations were defined: all patients with probable LGS (broadly defined) and only those with a documented epilepsy diagnosis before 6 years of age (narrowly defined). This specific criterion was used as LGS typically has a peak seizure onset between age 3 and 5 years. Primary analyses were prevalence and demographics; secondary analyses included healthcare costs, hospitalization rate and length of stay (LOS), medication use, and mortality.
Results: In the final year of the study, 545 patients with broadly defined probable LGS (mean [range] age: 31.4 [2–89] years; male: 53%) were identified. Using the narrowly defined probable LGS definition, the number of patients was reduced to 102 (mean [range] age: 7.4 [2–14] years; male: 52%). Prevalence of broadly defined and narrowly defined probable LGS was 39.2 and 6.5 per 100,000 people. During the 10-year study, 208 patients with narrowly defined probable LGS were identified and followed up for 1379 patient-years. The mean annual cost of healthcare was €22,787 per patient-year (PPY); greatest costs were attributable to inpatient care (33%), home nursing care (13%), and medication (10%). Mean annual healthcare costs were significantly greater for those with prescribed rescue medication (45% of patient-years) versus those without (€33,872 vs. €13,785 PPY, p < 0.001). Mean (standard deviation [SD]) annual hospitalization rate was 1.6 (2.0) PPY with mean (SD) annual LOS of 22.7 (46.0) days. Annual hospitalization rate was significantly greater in those who were prescribed rescue medication versus those who were not (2.2 [2.3] vs. 1.1 [1.6] PPY, p < 0.001). The mean (SD) number of different medications prescribed was 11.3 (7.3) PPY and 33.8 (17.0) over the entire observable time per patient (OET); antiepileptic drugs only accounted for 2.1 (1.1) of the medications prescribed PPY and 3.8 (2.0) OET. Over the 10-year study period, mortality in patients with narrowly defined probable LGS was significantly higher than the matched control population (six events [2.88%] vs. one event [0.01%], p < 0.001).
Conclusion: Annual healthcare costs incurred by patients with probable LGS in Germany were substantial, and mostly attributable to inpatient care, home nursing care, and medication. Patients prescribed rescue medication incurred significantly greater costs than those who were not. Patients with narrowly defined probable LGS had a higher mortality rate than control populations.
100 Jahre Dieter Janz
(2020)
20 April 2020 marks the centenary of Dieter Janz’s birth. This issue of Zeitschrift für Epileptologie is published in his honor, with the aim of tracing the work of Dieter Janz over the last five decades and summarizing new findings on the Janz syndrome (Juvenile Myoclonic Epilepsy), which is named after him.
Protein turnover, the net result of protein synthesis and degradation, enables cells to remodel their proteomes in response to internal and external cues. Previously, we analyzed protein turnover rates in cultured brain cells under basal neuronal activity and found that protein turnover is influenced by subcellular localization, protein function, complex association, cell type of origin, and by the cellular environment (Dörrbaum et al., 2018). Here, we advanced our experimental approach to quantify changes in protein synthesis and degradation, as well as the resulting changes in protein turnover or abundance in rat primary hippocampal cultures during homeostatic scaling. Our data demonstrate that a large fraction of the neuronal proteome shows changes in protein synthesis and/or degradation during homeostatic up- and down-scaling. More than half of the quantified synaptic proteins were regulated, including pre- as well as postsynaptic proteins with diverse molecular functions.
We examined the feedback between the major protein degradation pathway, the ubiquitin-proteasome system (UPS), and protein synthesis in rat and mouse neurons. When protein degradation was inhibited, we observed a coordinate dramatic reduction in nascent protein synthesis in neuronal cell bodies and dendrites. The mechanism for translation inhibition involved the phosphorylation of eIF2α, surprisingly mediated by eIF2α kinase 1, or heme-regulated kinase inhibitor (HRI). Under basal conditions, neuronal expression of HRI is barely detectable. Following proteasome inhibition, HRI protein levels increase owing to stabilization of HRI and enhanced translation, likely via the increased availability of tRNAs for its rare codons. Once expressed, HRI is constitutively active in neurons because endogenous heme levels are so low; HRI activity results in eIF2α phosphorylation and the resulting inhibition of translation. These data demonstrate a novel role for neuronal HRI that senses and responds to compromised function of the proteasome to restore proteostasis.
Keystone mutualisms, such as corals, lichens or mycorrhizae, sustain fundamental ecosystem functions. Range dynamics of these symbioses are, however, inherently difficult to predict because host species may switch between different symbiont partners in different environments, thereby altering the range of the mutualism as a functional unit. Biogeographic models of mutualisms thus have to consider both the ecological amplitudes of various symbiont partners and the abiotic conditions that trigger symbiont replacement. To address this challenge, we here investigate 'symbiont turnover zones', defined as demarcated regions where symbiont replacement is most likely to occur, as indicated by overlapping abundances of symbiont ecotypes. Mapping the distribution of algal symbionts from two species of lichen-forming fungi along four independent altitudinal gradients, we detected an abrupt and consistent β-diversity turnover suggesting parallel niche partitioning. Modelling contrasting environmental response functions obtained from latitudinal distributions of algal ecotypes consistently predicted a confined altitudinal turnover zone. In all gradients this symbiont turnover zone is characterized by approximately 12°C average annual temperature and approximately 5°C mean temperature of the coldest quarter, marking the transition from Mediterranean to cool temperate bioregions. Integrating the conditions of symbiont turnover into biogeographic models of mutualisms is an important step towards a comprehensive understanding of biodiversity dynamics under ongoing environmental change.
Two-person neuroscience (2PN) is a recently introduced conceptual and methodological framework used to investigate the neural basis of human social interaction from simultaneous neuroimaging of two or more subjects (hyperscanning). In this study, we adopted a 2PN approach and a multiple-brain connectivity model to investigate the neural basis of a form of cooperation called joint action. We hypothesized different intra-brain and inter-brain connectivity patterns when comparing the interpersonal properties of joint action with non-interpersonal conditions, with a focus on co-representation, a core ability at the basis of cooperation. Thirty-two subjects were enrolled in dual-EEG recordings during a computerized joint action task including three conditions: one in which the dyad jointly acted to pursue a common goal (joint), one in which each subject interacted with the PC (PC), and one in which each subject performed the task individually (Solo).
A combination of multiple-brain connectivity estimation and specific indices derived from graph theory allowed us to compare interpersonal with non-interpersonal conditions in four different frequency bands. Our results indicate that all the indices were modulated by the interaction, and returned a significantly stronger integration of multiple-subject networks in the joint vs. PC and Solo conditions. A subsequent classification analysis showed that features based on multiple-brain indices led to better discrimination between social and non-social conditions than single-subject indices did. Taken together, our results suggest that multiple-brain connectivity can provide deeper insight into the neural basis of cooperation in humans.
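To illustrate the kind of graph-theoretic integration index used in such analyses, the sketch below computes global efficiency (mean inverse shortest-path length) for a toy two-subject network with and without inter-brain edges. The node layout and edge lists are hypothetical illustrations, not the study's EEG connectivity estimates:

```python
from collections import deque

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs (0 if unreachable)."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        # BFS shortest-path lengths from src in the unweighted graph
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

def build(edges, n=6):
    """Symmetric adjacency matrix from an undirected edge list."""
    adj = [[0] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = 1
    return adj

# Two 3-node "brains": nodes 0-2 (subject A) and 3-5 (subject B).
within = [(0, 1), (1, 2), (3, 4), (4, 5)]
inter = [(2, 3)]  # hypothetical inter-brain coupling, e.g. in the joint condition

solo = global_efficiency(build(within))           # two disconnected subnetworks
joint = global_efficiency(build(within + inter))  # integrated two-subject network
print(solo < joint)  # True: inter-brain links raise integration
```

Even a single inter-brain edge connects the two subject subnetworks and raises global efficiency, which matches the intuition behind the stronger integration reported for the joint condition.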
Background: Data on the arrhythmic burden of women at risk for sudden cardiac death are limited, especially in patients using the wearable cardioverter-defibrillator (WCD).
Objective: We aimed to characterize WCD compliance, atrial and ventricular arrhythmic burden, and WCD outcomes by sex in patients enrolled in the Prospective Registry of Patients Using the Wearable Cardioverter Defibrillator (WEARIT-II U.S. Registry).
Methods: In the WEARIT-II Registry, we stratified 2000 patients by sex into women (n = 598) and men (n = 1402). WCD wear time, ventricular and atrial arrhythmic events during WCD use, and implantable cardioverter-defibrillator (ICD) implantation rates at the end of WCD use were evaluated.
Results: The mean WCD wear time was similar in women and men (94 days vs 90 days; P = .145), with longer daily use in women (21.4 h/d vs 20.7 h/d; P = .001). Burden of ventricular tachycardia or ventricular fibrillation was higher in women, with 30 events per 100 patient-years compared with 18 events per 100 patient-years in men (P = .017), with similar findings for treated and non-treated ventricular tachycardia/ventricular fibrillation. Recurrent atrial arrhythmias/sustained ventricular tachycardia was also more frequent in women than in men (167 events per 100 patient-years vs 73 events per 100 patient-years; P = .042). However, ICD implantation rate at the end of WCD use was similar in both women and men (41% vs 39%; P = .448).
Conclusion: In the WEARIT-II Registry, we have shown a higher burden of ventricular and atrial arrhythmic events in women than in men. ICD implantation rates at the end of WCD use were similar. Our findings warrant monitoring women at risk for sudden cardiac death who have a high burden of atrial and ventricular arrhythmias while using the WCD.
Highlights
• Transparency of design, reference frames and support for action were found to support students' sense-making of LA dashboards.
• The higher the overall SRL score, the more relevant the three factors were perceived by learners.
• Learner goals affect how relevant students find reference frames.
• The SRL effect on the perceived relevance of transparency depends on learner goals.
Abstract
Unequal stakeholder engagement is a common pitfall in the adoption of learning analytics in higher education, leading to lower buy-in and flawed tools that fail to meet the needs of their target groups. With each design decision, we make assumptions about how learners will make sense of the visualisations, but we know very little about how students make sense of dashboards and which aspects influence their sense-making. We investigated how learner goals and self-regulated learning (SRL) skills influence dashboard sense-making, following a mixed-methods research methodology: a qualitative pre-study followed up by an extensive quantitative study with 247 university students. We uncovered three latent variables for sense-making: transparency of design, reference frames and support for action. SRL skills are predictors of how relevant students find these constructs. Learner goals have a significant effect only on the perceived relevance of reference frames. Knowing which factors influence students' sense-making will lead to more inclusive and flexible designs that cater to the needs of both novice and expert learners.
Decline in physical activity in the weeks preceding sustained ventricular arrhythmia in women
(2020)
Background: Heightened risk of cardiac arrest following physical exertion has been reported. Among patients with an implantable defibrillator, appropriate shocks for sustained ventricular arrhythmia have been preceded by retrospectively self-reported mild-to-moderate physical activity. Previous studies evaluating the relationship between activity and sudden cardiac arrest lacked an objective measure of physical activity, and women were often underrepresented.
Objective: To determine the relationship between physical activity, recorded by accelerometer in a wearable cardioverter-defibrillator (WCD), and sustained ventricular arrhythmia among female patients.
Methods: A dataset of female adult patients prescribed a WCD for a diagnosis of myocardial infarction or dilated cardiomyopathy was compiled from a commercial database. Curve estimation, to include linear and nonlinear interpolation, was applied to physical activity as a function of time (days before arrhythmia).
Results: Among women who received an appropriate WCD shock for sustained ventricular arrhythmia (N = 120), a quadratic relationship between time and activity was present prior to shock. Physical activity increased from the beginning of the 30-day period up until day -16 (16 days before the ventricular arrhythmia), when activity began to decline.
Conclusion: For patients who received treatment for sustained ventricular arrhythmia, a decline in physical activity was found during the 2 weeks preceding the arrhythmic event. Device monitoring for a sustained decline in physical activity may be useful to identify patients at near-term risk of a cardiac arrest.
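The curve-estimation step described above can be sketched as an ordinary least-squares quadratic fit whose vertex marks the activity turning point. The numbers below are synthetic and purely illustrative (constructed so the peak falls at day -16); they are not the registry data:

```python
def fit_quadratic(ts, ys):
    """Least-squares fit of y = a*t^2 + b*t + c via the 3x3 normal equations."""
    # Normal-equation matrix and right-hand side for the basis [t^2, t, 1]
    S = [[sum(t ** (i + j) for t in ts) for j in range(2, -1, -1)]
         for i in range(2, -1, -1)]
    r = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(2, -1, -1)]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(S[k][col]))
        S[col], S[piv] = S[piv], S[col]
        r[col], r[piv] = r[piv], r[col]
        for row in range(col + 1, 3):
            f = S[row][col] / S[col][col]
            for k in range(col, 3):
                S[row][k] -= f * S[col][k]
            r[row] -= f * r[col]
    # Back substitution
    coef = [0.0, 0.0, 0.0]
    for row in range(2, -1, -1):
        coef[row] = (r[row] - sum(S[row][k] * coef[k]
                                  for k in range(row + 1, 3))) / S[row][row]
    return coef  # [a, b, c]

# Synthetic daily activity over the 30 days before an event, peaking at day -16
days = list(range(-30, 0))
activity = [100 - 0.5 * (d + 16) ** 2 for d in days]
a, b, c = fit_quadratic(days, activity)
peak_day = -b / (2 * a)  # vertex of the fitted parabola
print(round(peak_day))   # -16, the day activity starts to decline
```

With real accelerometer data the fit would be noisy, but the vertex of the fitted parabola still estimates the day on which activity turns from rising to falling.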
Attention-Deficit/Hyperactivity Disorder (ADHD) and obesity are frequently comorbid, genetically correlated, and share brain substrates. The biological mechanisms driving this association are unclear, but candidate systems, like dopaminergic neurotransmission and circadian rhythm, have been suggested. Our aim was to identify the biological mechanisms underpinning the genetic link between ADHD and obesity measures and investigate associations of overlapping genes with brain volumes. We tested the association of dopaminergic and circadian rhythm gene sets with ADHD, body mass index (BMI), and obesity (using GWAS data of N = 53,293, N = 681,275, and N = 98,697, respectively). We then conducted genome-wide ADHD–BMI and ADHD–obesity gene-based meta-analyses, followed by pathway enrichment analyses. Finally, we tested the association of ADHD–BMI overlapping genes with brain volumes (primary GWAS data N = 10,720–10,928; replication data N = 9428). The dopaminergic gene set was associated with both ADHD (P = 5.81 × 10−3) and BMI (P = 1.63 × 10−5); the circadian rhythm was associated with BMI (P = 1.28 × 10−3). The genome-wide approach also implicated the dopaminergic system, as the Dopamine-DARPP32 Feedback in cAMP Signaling pathway was enriched in both ADHD–BMI and ADHD–obesity results. The ADHD–BMI overlapping genes were associated with putamen volume (P = 7.7 × 10−3; replication data P = 3.9 × 10−2)—a brain region with volumetric reductions in ADHD and BMI and linked to inhibitory control. Our findings suggest that dopaminergic neurotransmission, partially through DARPP-32-dependent signaling and involving the putamen, is a key player underlying the genetic overlap between ADHD and obesity measures. Uncovering shared etiological factors underlying the frequently observed ADHD–obesity comorbidity may have important implications in terms of prevention and/or efficient treatment of these conditions.
Inhibitors against the NS3-4A protease of hepatitis C virus (HCV) have proven to be useful drugs in the treatment of HCV infection. Although variants have been identified with mutations that confer resistance to these inhibitors, the mutations do not restore replicative fitness and no secondary mutations that rescue fitness have been found. To gain insight into the molecular mechanisms underlying the lack of fitness compensation, we screened known resistance mutations in infectious HCV cell culture with different genomic backgrounds. We observed that the Q41R mutation of NS3-4A efficiently rescues the replicative fitness in cell culture for virus variants containing mutations at NS3-Asp168. To understand how the Q41R mutation rescues activity, we performed protease activity assays complemented by molecular dynamics simulations, which showed that protease-peptide interactions far outside the targeted peptide cleavage sites mediate substrate recognition by NS3-4A and support protease cleavage kinetics. These interactions shed new light on the mechanisms by which NS3-4A cleaves its substrates, viral polyproteins and a prime cellular antiviral adaptor protein, the mitochondrial antiviral signaling protein MAVS. Peptide binding is mediated by an extended hydrogen-bond network in NS3-4A that was effectively optimized for protease-MAVS binding in Asp168 variants with rescued replicative fitness from NS3-Q41R. In the protease harboring NS3-Q41R, the N-terminal cleavage products of MAVS retained high affinity for the active site, rendering the protease susceptible to product inhibition. Our findings reveal delicately balanced protease-peptide interactions in viral replication and immune escape that likely restrict the protease adaptive capability and narrow the virus evolutionary space.
Cryo-electron tomography combined with subtomogram averaging (StA) has yielded high-resolution structures of macromolecules in their native context. However, high-resolution StA is not commonplace due to beam-induced sample drift, images with poor signal-to-noise ratios (SNR), challenges in CTF correction, and limited particle number. Here we address these issues by collecting tilt series with a higher electron dose at the zero-degree tilt. Particles of interest are then located within reconstructed tomograms, processed by conventional StA, and then re-extracted from the high-dose images in 2D. Single particle analysis tools are then applied to refine the 2D particle alignment and generate a reconstruction. Use of our hybrid StA (hStA) workflow improved the resolution for tobacco mosaic virus from 7.2 to 4.4 Å and for the ion channel RyR1 in crowded native membranes from 12.9 to 9.1 Å. These resolution gains make hStA a promising approach for other StA projects aimed at achieving subnanometer resolution.
Hypoxia inhibits ferritinophagy, increases mitochondrial ferritin, and protects from ferroptosis
(2020)
Highlights
• Hypoxia decreases NCOA4 transcription in primary human macrophages.
• NCOA4 mRNA is a target of miR-6862-5p.
• Lowering NCOA4 increases FTMT abundance under hypoxia.
• FTMT and FTH protect from ferroptosis.
• Tumor cells lack the hypoxic decrease of NCOA4 and fail to stabilize FTMT.
Abstract
Cellular iron, at the physiological level, is essential to maintain several metabolic pathways, while an excess of free iron may cause oxidative damage and/or provoke cell death. Consequently, iron homeostasis has to be tightly controlled. Under hypoxia, these regulatory mechanisms are not well understood in human macrophages. Hypoxic primary human macrophages reduced intracellular free iron and increased ferritin expression, including mitochondrial ferritin (FTMT), to store iron. In parallel, nuclear receptor coactivator 4 (NCOA4), a master regulator of ferritinophagy, decreased and was proven to directly regulate FTMT expression. Reduced NCOA4 expression resulted from a lower rate of hypoxic NCOA4 transcription combined with a microRNA-6862-5p-dependent degradation of NCOA4 mRNA, the latter being regulated by c-Jun N-terminal kinase (JNK). Pharmacological inhibition of JNK under hypoxia increased NCOA4 and prevented FTMT induction. FTMT and ferritin heavy chain (FTH) cooperated to protect macrophages from RSL-3-induced ferroptosis under hypoxia, as this form of cell death is linked to iron metabolism. In contrast, in HT1080 fibrosarcoma cells, which are sensitive to ferroptosis, NCOA4 and FTMT are not regulated. Our study helps to understand mechanisms of hypoxic FTMT regulation and links ferritinophagy to macrophage sensitivity to ferroptosis.
The tremendous diversity of life in the ocean has proven to be a rich source of inspiration for drug discovery, with success rates for marine natural products up to 4 times higher than other naturally derived compounds. Yet the marine biodiscovery pipeline is characterized by chronic underfunding, bottlenecks and, ultimately, untapped potential. For instance, a lack of taxonomic capacity means that, on average, 20 years pass between the discovery of new organisms and the formal publication of scientific names, a prerequisite to proceed with detecting and isolating promising bioactive metabolites. The need for “edge” research that can spur novel lines of discovery, and the length of high-risk drug discovery processes, are poorly matched with research grant cycles. Here we propose five concrete pathways to broaden the biodiscovery pipeline and open the social and economic potential of the ocean genome for global benefit: (1) investing in fundamental research, even when the links to industry are not immediately apparent; (2) cultivating equitable collaborations between academia and industry that share both risks and benefits for these foundational research stages; (3) providing new opportunities for early-career researchers and under-represented groups to engage in high-risk research without risking their careers; (4) sharing data with global networks; and (5) protecting genetic diversity at its source through strong conservation efforts. The treasures of the ocean have provided fundamental breakthroughs in human health and still remain under-utilised for human benefit, yet that potential may be lost if we allow the biodiscovery pipeline to become blocked in a search for quick-fix solutions.
Aims: Acetylsalicylic acid (ASA) is widely used for the prevention of atherothrombotic events in patients with chronic coronary artery disease (CAD) and peripheral artery disease (PAD), but the risk of vascular events remains high. We aimed at identifying randomised controlled trials (RCTs) on antithrombotic treatments in patients with chronic CAD or PAD.
Methods: Searches were conducted on MEDLINE, EMBASE, and CENTRAL on March 1st, 2018. This systematic review (SR) uses a narrative synthesis to summarize the evidence for the efficacy and safety of antiplatelet and anticoagulant therapies in patients with chronic CAD or PAD.
Results: Four RCTs from 27 publications were included. Study groups included 15,603 to 27,395 patients. ASA alone was the most extensively studied (n = 3); other studies included rivaroxaban with or without ASA (n = 1), vorapaxar alone (n = 1), and clopidogrel with (n = 1) or without ASA (n = 1). Clopidogrel alone and clopidogrel plus ASA compared to ASA presented similar efficacy with comparable safety profile. Rivaroxaban plus ASA significantly reduced the risk of the composite of cardiovascular death, myocardial infarction, and stroke compared to ASA alone, although major bleeding with rivaroxaban plus ASA increased.
Conclusion: There is limited and heterogeneous evidence on the prevention of atherothrombotic events in patients with chronic CAD or PAD. Clopidogrel alone and clopidogrel plus ASA did not demonstrate superiority over ASA alone. A combination of rivaroxaban plus ASA may offer significant additional benefit in reducing cardiovascular outcomes, yet it may increase the risk of bleeding, compared to ASA alone.
Determination of a minimal postmortem interval via age estimation of necrophagous Diptera has been restricted to the juvenile stages and the time until emergence of the adult fly, i.e. up until 2–6 weeks depending on species and temperature. Age estimation of adult flies could extend this period by adding the age of the fly to the time needed for complete development. In this context, pteridines are promising metabolites, as they accumulate in the eyes of flies with increasing age. We studied adults of the blow fly Lucilia sericata at constant temperatures of 16 °C and 25 °C up to an age of 25 days and estimated their pteridine levels by fluorescence spectroscopy. Age was given in accumulated degree days (ADD) across temperatures. Additionally, a mock case was set up to test the applicability of the method. Pteridine levels increase logarithmically with increasing ADD, but after 70–80 ADD the increase slows down and the curve approaches a maximum. Sex had a significant impact (p < 4.09 × 10−6) on pteridine fluorescence level, while body size and head width did not. The mock case demonstrated that a slight overestimation of the real age (in ADD) occurred in only two out of 30 samples. Age determination of L. sericata on the basis of pteridine levels seems to be limited to an age of about 70 ADD but, depending on the ambient temperature, this could cover an additional period of about 5–7 days after completion of metamorphosis.
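The accumulated-degree-day bookkeeping used to pool ages across rearing temperatures can be sketched as follows. The base (developmental threshold) temperature of 0 °C is an assumption for illustration only; in practice the species-specific threshold for L. sericata would be used:

```python
def accumulated_degree_days(daily_mean_temps_c, base_c=0.0):
    """Sum the daily mean temperatures above a base threshold (ADD)."""
    return sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)

# A fly kept 5 days at 16 degC, then 3 days at 25 degC (illustrative numbers;
# base temperature of 0 degC assumed here, not a species-specific value).
temps = [16.0] * 5 + [25.0] * 3
add = accumulated_degree_days(temps)
print(add)  # 155.0 ADD
```

Expressing age in ADD rather than calendar days is what lets observations at 16 °C and 25 °C fall on a single pteridine-accumulation curve.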
Cabozantinib (Cabometyx®) is a potent multikinase inhibitor targeting the vascular endothelial growth factor (VEGF) receptor 2, the mesenchymal-epithelial transition factor (MET) receptor, and the “anexelekto” (AXL) receptor tyrosine kinase. It is approved for the treatment of advanced hepatocellular carcinoma (HCC) after failure of sorafenib in Europe (since November 2018) and in the USA (since January 2019). The approval of cabozantinib was based on results of the randomized, placebo-controlled, phase 3 CELESTIAL trial in patients with unresectable HCC who had received one or two prior lines of treatment including sorafenib. At the second planned interim analysis, the trial was stopped because the primary end point, overall survival, was clearly in favor of cabozantinib. Additionally, median progression-free survival was superior to placebo. The most common grade ≥ 3 adverse events in patients with HCC treated with cabozantinib were palmar–plantar erythrodysesthesia, hypertension, fatigue, and diarrhea. In this review, current data on cabozantinib for the treatment of patients with advanced HCC are discussed, with a focus on the management of common adverse events and ongoing clinical trials.
External linkages allow nascent ventures to access crucial resources during the process of new product development. Forming external linkages can substantially contribute to a venture’s performance. However, little is known about the paths of external linkage formation, or about the circumstances that drive the choice to pursue one path rather than another. This gap deserves further investigation because we do not know whether insights developed for incumbent firms also apply to nascent ventures. To address this gap, we explore a novel dataset of 370 venture creation processes. Using sequence analyses based on optimal matching techniques and cluster analyses, we reveal that nascent ventures pursue one of four distinct paths of linkage formation activities during new product development. Contrary to the findings of the strategy literature, we find that if nascent ventures engage in external linkages at all, they do not combine exploration- and exploitation-oriented linkages but form either exploration- or exploitation-oriented linkages. Additional regression analyses highlight the circumstances that lead nascent ventures to pursue one path rather than another. Taken together, our analyses point out that resource scarcity constitutes an important factor shaping the linkage formation activities of nascent ventures. Accordingly, we show that nascent ventures tend not to optimize by adding complementary knowledge to the firm’s knowledge base but rather to extend the existing knowledge base, a strategy which we call bricolage.
In recent decades, the assessment of instructional quality has grown into a popular and well-funded arm of educational research. The present study contributes to this field by exploring first impressions of untrained raters as an innovative approach to assessment. We apply the thin-slice procedure to obtain ratings of instructional quality along the dimensions of cognitive activation, classroom management, and constructive support based on only 30 s of classroom observation. Ratings were compared to the longitudinal data of students taught in the videos to investigate the connections between these brief glimpses of instructional quality and student learning. In addition, we included samples of raters with different backgrounds (university students, middle school students and educational research experts) to understand the differences in thin-slice ratings with respect to their predictive power regarding student learning. Results suggest that each group provides reliable ratings, as measured by a high degree of agreement between raters, as well as ratings that are predictive of students’ learning. Furthermore, we find that experts’ and middle school students’ ratings of classroom management and constructive support, respectively, explain unique components of variance in student test scores. This incremental validity can be explained by the amount of implicit knowledge (experts) and by an attunement to specific cues attributable to emotional involvement (students).
The tobacco plant species Nicotiana tabacum and Nicotiana rustica are of great economic importance. They are used to produce tobacco, which, along with alcohol, is among the most widely consumed recreational drugs worldwide. Owing to its legal status, its toxicity is still underestimated despite growing warnings and public education. The toxicity of the tobacco plant is primarily due to the alkaloid nicotine. Poisoning by the plant itself is rare, because its appearance hardly invites consumption. Poisoning from swallowed cigarette butts, for example, is more common and can be particularly dangerous for children. A further risk of poisoning arises during the tobacco harvest: nicotine is also absorbed through the skin and can cause green tobacco sickness in tobacco plantation workers. In severe cases there is no antidote. Activated charcoal should be administered as quickly as possible to reduce absorption; otherwise the nicotine must be removed from the body by gastric lavage. Greater attention should therefore be drawn, preventively, to the dangers of tobacco.
The metasomatised continental mantle may play a key role in the generation of some ore deposits, in particular mineral systems enriched in platinum-group elements (PGE) and Au. The cratonic lithosphere is the longest-lived potential source for these elements, but the processes that facilitate their pre-concentration in the mantle and their later remobilisation to the crust are not yet well-established. Here, we report new results on the petrography, major-element, and siderophile- and chalcophile-element composition of native Ni, base metal sulphides (BMS), and spinels in a suite of well-characterised, highly metasomatised and weakly serpentinised peridotite xenoliths from the Bultfontein kimberlite in the Kaapvaal Craton, and integrate these data with published analyses. Pentlandite in polymict breccias (failed kimberlite intrusions at mantle depth) has lower trace-element contents (e.g., median total PGE 0.72 ppm) than pentlandite in phlogopite peridotites and Mica-Amphibole-Rutile-Ilmenite-Diopside (MARID) rocks (median 1.6 ppm). Spinel is an insignificant host for all elements except Zn, and BMS and native Ni account for typically <25% of the bulk-rock PGE and Au. High bulk-rock Te/S ratios suggest a role for PGE-bearing tellurides, which, along with other compounds of metasomatic origin, may host the As, Ag, Cd, Sb, Te and, in part, Bi that are unaccounted for by the main assemblage.
The close spatial relationship between BMS and metasomatic minerals (e.g., phlogopite, ilmenite) indicates that the lithospheric mantle beneath Bultfontein was resulphidised by metasomatism after initial melt depletion during stabilisation of the cratonic lithosphere. Newly-formed BMS are markedly PGE-poor, as total PGE contents are <4.2 ppm in pentlandite from seven samples, compared to >26 ppm in BMS in other peridotite xenoliths from the Kaapvaal craton. This represents a strong dilution of the original PGE abundances at the mineral scale, perhaps starting from precursor PGE alloy and small volumes of residual BMS. The latter may have been the precursor to native Ni, which occurs in an unusual Ni-enriched zone in a harzburgite and displays strongly variable, but overall high, PGE abundances (up to 81 ppm). In strongly metasomatised peridotites, Au is enriched relative to Pd, and was probably added along with S. A combination of net introduction of S and Au ± PGE from the asthenosphere and intra-lithospheric redistribution, in part sourced from subducted materials, during metasomatic events may have led to sulphide precipitation at ~80–120 km beneath Bultfontein. This process locally enhanced the metallogenic fertility of this lithospheric reservoir. Further mobilisation of the metal budget stored in these S-rich domains and upwards transport into the crust may require interaction with sulphide-undersaturated melts that can dissolve sulphides along with the metals they store.
Objectives: Lumbar spinal stenosis (LSS) and lumbar disc herniation (LDH) are often accompanied by frequently occurring leg cramps severely affecting patients’ life and sleep quality. Recent evidence suggests that neuromuscular electric stimulation (NMES) of cramp-prone muscles may prevent cramps in lumbar disorders.
Materials and Methods: Thirty-two men and women (63 ± 9 years) with LSS and/or LDH suffering from cramps were randomly allocated to four different groups. Unilateral stimulation of the gastrocnemius was applied twice a week over four weeks (3 × 6 × 5 sec stimulation trains at 30 Hz above the individual cramp threshold frequency [CTF]). Three groups received either 85%, 55%, or 25% of their maximum tolerated stimulation intensity, whereas one group only received pseudo-stimulation.
Results: The number of reported leg cramps decreased in the 25% (25 ± 14 to 7 ± 4; p = 0.002), 55% (24 ± 10 to 10 ± 11; p = 0.014) and 85%NMES (23 ± 17 to 1 ± 1; p < 0.001) group, whereas it remained unchanged after pseudo-stimulation (20 ± 32 to 19 ± 33; p > 0.999). In the 25% and 85%NMES group, this improvement was accompanied by an increased CTF (p < 0.001).
Conclusion: Regularly applied NMES of the calf muscles reduces leg cramps in patients with LSS/LDH even at low stimulation intensity.
We show explicit formulas for the evaluation of (possibly higher-order) fractional Laplacians (−Δ)^s of some functions supported on ellipsoids. In particular, we derive the explicit expression of the torsion function and give examples of s-harmonic functions. As an application, we infer that the weak maximum principle fails in eccentric ellipsoids for s ∈ (1, √3 + 3/2) in any dimension n ≥ 2. We build a counterexample in terms of the torsion function times a polynomial of degree 2. Using point inversion transformations, it follows that a variety of bounded and unbounded domains do not satisfy positivity preserving properties either and we give some examples.
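For reference, the operator in this abstract is the fractional Laplacian; in a common convention (the normalising constant varies across the literature) it is defined for s ∈ (0, 1) by the singular integral

```latex
(-\Delta)^s u(x) = c_{n,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^n} \frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy,
\qquad c_{n,s} = \frac{4^s\,\Gamma\!\left(\tfrac{n}{2}+s\right)}{\pi^{n/2}\,\left|\Gamma(-s)\right|},
```

and the higher orders s = m + σ, with m ∈ ℕ and σ ∈ (0, 1), are obtained by composition, (−Δ)^s = (−Δ)^m ∘ (−Δ)^σ.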
Highlights
• PUR, PVC and PLA microplastics affect life-history parameters of Daphnia magna.
• Natural kaolin particles are less toxic than microplastics.
• Microplastic toxicity is material-specific, e.g. PVC is most toxic on reproduction.
• In case of PVC, plastic chemicals are the main driver of microplastic toxicity.
• PLA bioplastics are as toxic as conventional plastics.
Abstract
Given the ubiquitous presence of microplastics in aquatic environments, an evaluation of their toxicity is essential. Microplastics are a heterogeneous set of materials that differ not only in particle properties, like size and shape, but also in chemical composition, including polymers, additives and side products. Thus far, it remains unknown whether the plastic chemicals or the particle itself is the driving factor for microplastic toxicity. To address this question, we exposed Daphnia magna for 21 days to irregular polyvinyl chloride (PVC), polyurethane (PUR) and polylactic acid (PLA) microplastics as well as to natural kaolin particles in high concentrations (10, 50, 100, 500 mg/L, ≤ 59 μm) and different exposure scenarios, including microplastics and microplastics without extractable chemicals as well as the extracted and migrating chemicals alone. All three microplastic types negatively affected the life history of D. magna. However, this toxicity depended on the endpoint and the material. While PVC had the largest effect on reproduction, PLA reduced survival most effectively. The latter indicates that bio-based and biodegradable plastics can be as toxic as their conventional counterparts. The natural particle kaolin was less toxic than microplastics when comparing numerical concentrations. Importantly, the contribution of plastic chemicals to the toxicity was also plastic type-specific. While we can attribute the effects of PVC to the chemicals used in the material, the effects of PUR and PLA plastics were induced by the particles themselves. Our study demonstrates that plastic chemicals can drive microplastic toxicity. This highlights the importance of considering the individual chemical composition of plastics when assessing their environmental risks. Our results suggest that less studied polymer types, like PVC and PUR, as well as bioplastics are of particular toxicological relevance and should be given higher priority in ecotoxicological studies.
Deubiquitinases (DUBs) are vital for the regulation of ubiquitin signals, and both catalytic activity of and target recruitment by DUBs need to be tightly controlled. Here, we identify asparagine hydroxylation as a novel posttranslational modification involved in the regulation of Cezanne (also known as OTU domain–containing protein 7B (OTUD7B)), a DUB that controls key cellular functions and signaling pathways. We demonstrate that Cezanne is a substrate for factor inhibiting HIF1 (FIH1)- and oxygen-dependent asparagine hydroxylation. We found that FIH1 modifies Asn35 within the uncharacterized N-terminal ubiquitin-associated (UBA)-like domain of Cezanne (UBACez), which lacks conserved UBA domain properties. We show that UBACez binds Lys11-, Lys48-, Lys63-, and Met1-linked ubiquitin chains in vitro, establishing UBACez as a functional ubiquitin-binding domain. Our findings also reveal that the interaction of UBACez with ubiquitin is mediated via a noncanonical surface and that hydroxylation of Asn35 inhibits ubiquitin binding. Recently, it has been suggested that Cezanne recruitment to specific target proteins depends on UBACez. Our results indicate that UBACez can indeed fulfill this role as regulatory domain by binding various ubiquitin chain types. They also uncover that this interaction with ubiquitin, and thus with modified substrates, can be modulated by oxygen-dependent asparagine hydroxylation, suggesting that Cezanne is regulated by oxygen levels.
This article presents specific features of reader-reception research conducted with qualitative content analysis. The focus is on literary reading. Analyses of text-reception documents carried out for research purposes in literature pedagogy face a doubly hermeneutic challenge: the aim is to understand what readers understand in texts. This entails specific requirements for the analysis process: First, the scope of the context unit must be clarified. Differentiated answers are needed here, because the given context changes constantly during the reading process. Second, the research interest requires a particular kind of category, referred to in the literature as formal or analytic. A further differentiation between strictly formal and theory-based formal categories is proposed here. Third, it must be clarified whether the reconstructed reading activities are processes, or whether they allow inferences about underlying dispositions. These requirements are discussed and approaches to addressing them are offered.
Highlights
• Explanation of mobility design and its practical, aesthetic and emblematic effects on travel behaviour.
• Review of recent studies on mobility design elements and the promotion of non-motorised travel.
• Discussion of research gaps and methodological challenges of data collection and comparability.
Abstract
To promote non-motorised travel, many travel behaviour studies acknowledge the importance of the built environment to modal choice, for example with its density or mix of uses. From a mobility design theory perspective, however, objects and environments affect human perceptions, assessments and behaviour in at least three different ways: by their practical, aesthetic and emblematic functions. This review of existing evidence argues that travel behaviour research has so far mainly focused on the practical function of the built environment. For that purpose, we systematically identified 56 relevant studies on the impacts of the built environment on non-motorised travel behaviour in the Web of Science database. Research on the practical design function primarily involves land use distribution, street network connectivity and the presence of walking and cycling facilities. Only a small number of papers address the aesthetic and emblematic functions. These show that the perceived attractiveness of an environment and evoked feelings of traffic safety increase the likelihood of walking and cycling. However, from a mobility design perspective, the results of the review indicate a gap regarding comprehensive research on the effects of the aesthetic and emblematic functions of the built environment. Further research involving these functions might contribute to a better understanding of how to promote non-motorised travel more effectively. Moreover, limitations related to survey techniques, regional distribution and the comparability of results were identified.
Assessment of individual therapeutic responses provides valuable information concerning treatment benefits in individual patients. We evaluated individual therapeutic responses as determined by the Disease Activity Score-28 joints critical difference for improvement (DAS28-dcrit) in rheumatoid arthritis (RA) patients treated with intravenous tocilizumab or comparator anti-tumor necrosis factor (TNF) agents. The previously published DAS28-dcrit value [DAS28 decrease (improvement) ≥ 1.8] was retrospectively applied to data from two studies of tocilizumab in RA, the 52-week ACT-iON observational study and the 24-week ADACTA randomized study. Data were compared within (not between) studies. DAS28 was calculated with erythrocyte sedimentation rate as the inflammatory marker. Stability of DAS28-dcrit responses and European League Against Rheumatism (EULAR) good responses was determined by evaluating repeated responses at subsequent timepoints. A logistic regression model was used to calculate p values for differences in response rates between active agents. Patient-reported outcomes (PROs; pain, global health, function, and fatigue) in DAS28-dcrit responder versus non-responder groups were compared with an ANCOVA model. DAS28-dcrit individual response rates were 78.2% in tocilizumab-treated patients and 58.2% in anti-TNF-treated patients at week 52 in the ACT-iON study (p = 0.0001) and 90.1% versus 59.1% at week 24 in the ADACTA study (p < 0.0001). DAS28-dcrit responses showed greater stability over time (up to 52 weeks) than EULAR good responses. For both active treatments, DAS28-dcrit responses were associated with statistically significant improvements in mean PRO values compared with non-responders. The DAS28-dcrit response criterion provides robust assessments of individual responses to RA therapy and may be useful for discriminating between active agents in clinical studies and guiding treat-to-target decisions in daily practice.
Human RNF213, which encodes the protein mysterin, is a known susceptibility gene for moyamoya disease (MMD), a cerebrovascular condition with occlusive lesions and compensatory angiogenesis. Mysterin mutations, together with exposure to environmental trigger factors, lead to an elevated stroke risk from childhood onwards. Mysterin is induced during cell stress and functions as a cytosolic AAA+ ATPase and ubiquitylation enzyme. Little is known about the contexts in which mysterin is needed. Here, we found that genetic ablation of several mitochondrial matrix factors, such as the peptidase ClpP, the transcription factor Tfam, as well as the peptidase and AAA+ ATPase Lonp1, potently induces Rnf213 transcript expression in various organs, in parallel with other components of the innate immune system. In mouse fibroblasts and human endothelial cells in particular, Rnf213 levels showed prominent upregulation upon Poly(I:C)-triggered TLR3-mediated responses to dsRNA toxicity, as well as upon interferon gamma treatment. Only partial suppression of Rnf213 induction was achieved by C16 as an antagonist of PKR (dsRNA-dependent protein kinase). Since dysfunctional mitochondria were recently reported to release immune-stimulatory dsRNA into the cytosol, our results suggest that mysterin becomes relevant when mitochondrial dysfunction or infections have triggered RNA-dependent inflammation. Thus, MMD has similarities with vasculopathies that involve altered nucleotide processing, such as Aicardi-Goutières syndrome or systemic lupus erythematosus. Furthermore, in MMD, the low penetrance of RNF213 mutations might be modified by dysfunctions in mitochondria or the TLR3 pathway.
Purpose: Neonatal surgery for abdominal wall defects is not performed in a centralized manner in Germany. The aim of this study was to investigate whether treatment for abdominal wall defects in Germany is equally effective compared to international results despite the decentralized care.
Methods: All newborn patients who were clients of the major statutory health insurance company in Germany between 2009 and 2013 and who had a diagnosis of gastroschisis or omphalocele were included. Mortality during the first year of life was analysed.
Results: The 316 patients with gastroschisis were classified as simple (82%) or complex (18%) cases. The main associated anomalies in the 197 patients with omphalocele were trisomy 18/21 (8%), cardiac anomalies (32%) and anomalies of the urinary tract (10%). Overall mortality was 4% for gastroschisis and 16% for omphalocele. Significant factors for non-survival were birth weight below 1500 g for both groups, complex gastroschisis, volvulus and anomalies of the blood supply to the intestine in gastroschisis, and female gender, trisomy 18/21 and lung hypoplasia in omphalocele.
Conclusions: Despite the fact that paediatric surgical care is organized in a decentralized manner in Germany, the mortality rates for gastroschisis and omphalocele are equal to those reported in international data.
A convex body is unconditional if it is symmetric with respect to reflections in all coordinate hyperplanes. We investigate unconditional lattice polytopes with respect to geometric, combinatorial, and algebraic properties. In particular, we characterize unconditional reflexive polytopes in terms of perfect graphs. As a prime example, we study the signed Birkhoff polytope. Moreover, we derive constructions for Gale-dual pairs of polytopes and we explicitly describe Gröbner bases for unconditional reflexive polytopes coming from partially ordered sets.
Purpose of Review: To provide an overview of current surgical peri-implantitis treatment options.
Recent Findings: Surgical procedures for peri-implantitis treatment include two main approaches: non-augmentative and augmentative therapy. Open flap debridement (OFD) and resective treatment are non-augmentative techniques that are indicated in the presence of horizontal bone loss in aesthetically nondemanding areas. Implantoplasty performed adjunctively at supracrestally and buccally exposed rough implant surfaces has been shown to efficiently attenuate soft tissue inflammation compared to control sites. However, this was followed by more pronounced soft tissue recession. Adjunctive augmentative measures are recommended at peri-implantitis sites exhibiting intrabony defects with a minimum depth of 3 mm and in the presence of keratinized mucosa. In more advanced cases with combined defect configurations, a combination of augmentative therapy and implantoplasty at exposed rough implant surfaces beyond the bony envelope is feasible.
Summary: For the time being, no particular surgical protocol or material can be considered as superior in terms of long-term peri-implant tissue stability.
Purpose of Review: Attention deficit hyperactivity disorder (ADHD) shows high heritability in formal genetic studies. In our review article, we provide an overview on common and rare genetic risk variants for ADHD and their link to clinical practice.
Recent findings: The formal heritability of ADHD is about 80% and therefore higher than that of most other psychiatric diseases. However, recent studies estimate the proportion of heritability based on single-nucleotide variants (SNPs) at 22%. It is a matter of debate which genetic mechanisms explain this huge difference. While common variants identified in the first mega-analyses of genome-wide association study data comprising several thousand patients provide initial genome-wide results that explain only little variance, the methodologically more difficult analyses of rare variants are still in their infancy. Some rare genetic syndromes show a higher prevalence of ADHD, indicating a potential role for a small number of patients. In contrast, polygenic risk scores (PRS) could potentially be applied to every patient. We give an overview of how PRS explain different behavioral phenotypes in ADHD and how they could be used for diagnosis and therapy prediction.
Summary: Knowledge about a patient’s genetic makeup is not yet mandatory for ADHD therapy or diagnosis. PRS, however, have been introduced successfully in other areas of clinical medicine, and their application in psychiatry will begin within the next years. In order to ensure competent advice for patients, knowledge of the current state of research is useful for psychiatrists.
Voting advice applications (VAAs) are online tools providing voting advice to their users. This voting advice is based on the match between the answers of the user and the answers of several political parties to a common questionnaire on political attitudes. To visualize this match, VAAs use a wide array of visualisations, the most popular of which are two-dimensional political maps. These maps show the position of both the political parties and the user in the political landscape, allowing the user to understand both their own position and their relation to the political parties. To construct these maps, VAAs require scales that represent the main underlying dimensions of the political space. This makes the correct construction of these scales important if the VAA aims to provide accurate and helpful voting advice. This paper presents three criteria that assess whether a VAA achieves this aim. To illustrate their usefulness, these three criteria—unidimensionality, reliability and quality—are used to assess the scales in the cross-national EUVox VAA, a VAA designed for the European Parliament elections of 2014. Using techniques from Mokken scaling analysis and categorical principal component analysis to capture the metrics, I find that most scales show low unidimensionality and reliability. Moreover, even while designers can—and sometimes do—use certain techniques to improve their scales, these improvements are rarely enough to overcome all of the problems regarding unidimensionality, reliability and quality. This leaves certain problems for the designers of VAAs and of similar types of online surveys.
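One of the three criteria, reliability, is conventionally checked with Cronbach's alpha. A minimal sketch, assuming hypothetical Likert-type item responses (the numbers below are illustrative, not EUVox data):

```python
# Hedged sketch: Cronbach's alpha as a simple reliability check for an attitude scale.
# The responses below are hypothetical 1-5 Likert answers, not data from any VAA.

def cronbach_alpha(items):
    """items: list of per-item response lists, all of equal length (one entry per respondent)."""
    k = len(items)        # number of items in the scale
    n = len(items[0])     # number of respondents

    def variance(xs):     # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var = sum(variance(it) for it in items)
    # Per-respondent total score across all items of the scale
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    total_var = variance(totals)
    return k / (k - 1) * (1 - item_var / total_var)

# Three hypothetical items whose answers mostly agree -> alpha should be high.
responses = [
    [1, 2, 4, 5, 3, 1, 5],
    [2, 2, 5, 4, 3, 1, 4],
    [1, 3, 4, 5, 2, 2, 5],
]
alpha = cronbach_alpha(responses)
```

A low alpha (conventionally below about 0.7) would flag the scale as unreliable; unidimensionality needs a separate check, e.g. via Loevinger's H coefficients from Mokken scaling.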
We use recent results by Bainbridge–Chen–Gendron–Grushevsky–Möller on compactifications of strata of abelian differentials to give a comprehensive solution to the realizability problem for effective tropical canonical divisors in equicharacteristic zero. Given a pair (Γ,D) consisting of a stable tropical curve Γ and a divisor D in the canonical linear system on Γ, we give a purely combinatorial condition to decide whether there is a smooth curve X over a non-Archimedean field whose stable reduction has Γ as its dual tropical curve together with an effective canonical divisor KX that specializes to D.
Inhomogeneous phases in the Gross-Neveu model in 1 + 1 dimensions at finite number of flavors
(2020)
We explore the thermodynamics of the 1+1-dimensional Gross-Neveu (GN) model at a finite number of fermion flavors Nf, finite temperature, and finite chemical potential using lattice field theory. In the limit Nf→∞ the model has been solved analytically in the continuum. In this limit three phases exist: a massive phase, in which a homogeneous chiral condensate breaks chiral symmetry spontaneously; a massless symmetric phase with vanishing condensate; and most interestingly an inhomogeneous phase with a condensate, which oscillates in the spatial direction. In the present work we use chiral lattice fermions (naive fermions and SLAC fermions) to simulate the GN model with 2, 8, and 16 flavors. The results obtained with both discretizations are in agreement. As in the limit Nf→∞, we find three distinct regimes in the phase diagram, characterized by a qualitatively different behavior of the two-point function of the condensate field. For Nf=8 we map out the phase diagram in detail and obtain an inhomogeneous region smaller than in the limit Nf→∞, where quantum fluctuations are suppressed. We also comment on the existence or absence of Goldstone bosons related to the breaking of translation invariance in 1+1 dimensions.
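For orientation, the continuum model being discretised here is the standard GN model; in a common convention its Lagrangian for Nf flavors reads

```latex
\mathcal{L} = \bar{\psi}\, i \gamma^{\mu} \partial_{\mu} \psi + \frac{g^{2}}{2} \left( \bar{\psi} \psi \right)^{2},
```

where ψ carries an implicit flavor index summed over the Nf species and the discrete chiral symmetry ψ → γ₅ψ forbids a fermion mass term. The chiral condensate ⟨ψ̄ψ⟩ is the order parameter whose spatial modulation defines the inhomogeneous phase discussed in the abstract.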
Erratum for: Cyclic AMP induces transactivation of the receptors for epidermal growth factor and nerve growth factor, thereby modulating activation of MAP kinase, Akt, and neurite outgrowth in PC12 cells.Journal of biological chemistry, 2002 Nov 15;277(46):43623-30. doi: 10.1074/jbc.M203926200. Epub 2002 Sep 5.
Type-II multiferroic materials, in which ferroelectric polarization is induced by inversion non-symmetric magnetic order, promise new and highly efficient multifunctional applications based on mutual control of magnetic and electric properties. However, to date this phenomenon is limited to low temperatures. Here we report a giant pressure-dependence of the multiferroic critical temperature in CuBr2: at 4.5 GPa it is enhanced from 73.5 to 162 K, to our knowledge the highest TC ever reported for non-oxide type-II multiferroics. This growth shows no sign of saturating and the dielectric loss remains small under these high pressures. We establish the structure under pressure and demonstrate a 60% increase in the two-magnon Raman energy scale up to 3.6 GPa. First-principles structural and magnetic energy calculations provide a quantitative explanation in terms of dramatically pressure-enhanced interactions between CuBr2 chains. These large, pressure-tuned magnetic interactions motivate structural control in cuprous halides as a route to applied high-temperature multiferroicity.
Deconfinement of Mott localized electrons into topological and spin–orbit-coupled Dirac fermions
(2020)
The interplay of electronic correlations, spin–orbit coupling and topology holds promise for the realization of exotic states of quantum matter. Models of strongly interacting electrons on honeycomb lattices have revealed rich phase diagrams featuring unconventional quantum states including chiral superconductivity and correlated quantum spin Hall insulators intertwining with complex magnetic order. Material realizations of these electronic states are, however, scarce or nonexistent. In this work, we propose and show that stacking 1T-TaSe2 into bilayers can deconfine electrons from a deep Mott insulating state in the monolayer to a system of correlated Dirac fermions subject to sizable spin–orbit coupling in the bilayer. 1T-TaSe2 develops a Star-of-David charge density wave pattern in each layer. When the Star-of-David centers belonging to two adjacent layers are stacked in a honeycomb pattern, the system realizes a generalized Kane–Mele–Hubbard model in a regime where Dirac semimetallic states are subject to significant Mott–Hubbard interactions and spin–orbit coupling. At charge neutrality, the system is close to a quantum phase transition between a quantum spin Hall and an antiferromagnetic insulator. We identify a perpendicular electric field and the twisting angle as two knobs to control topology and spin–orbit coupling in the system. Their combination can drive it across hitherto unexplored grounds of correlated electron physics, including a quantum tricritical point and an exotic first-order topological phase transition.
Cancer‐associated venous thromboembolism (VTE) is a frequent, potentially life‐threatening event that complicates cancer management. Anticoagulants are the cornerstone of therapy for the treatment and prevention of cancer‐associated thrombosis (CAT); factor Xa–inhibiting direct oral anticoagulants (DOACs; apixaban, edoxaban, and rivaroxaban), which have long been recommended for the treatment of VTE in patients without cancer, have been investigated in this setting. The first randomized comparisons of DOACs against low‐molecular‐weight heparin for the treatment of CAT indicated that DOACs are efficacious in this setting, with findings reflected in recent updates to published guidance on CAT treatment. However, the higher risk of bleeding events (particularly in the gastrointestinal tract) with DOACs highlights the need for appropriate patient selection. Further insights will be gained from additional studies that are ongoing or awaiting publication. The efficacy and safety of DOAC thromboprophylaxis in ambulatory patients with cancer at a high risk of VTE have also been assessed in placebo‐controlled randomized controlled trials of apixaban and rivaroxaban. Both studies showed efficacy benefits with DOACs, but both studies also showed a nonsignificant increase in major bleeding events while on treatment. This review summarizes the evidence base for rivaroxaban use in CAT, the patient profile potentially most suited to DOAC use, and ongoing controversies under investigation. We also describe ongoing studies from the CALLISTO (Cancer Associated thrombosis—expLoring soLutions for patients through Treatment and Prevention with RivarOxaban) program, which comprises several randomized clinical trials and real‐world evidence studies, including investigator‐initiated research.
We study in detail the nuclear aspects of a neutron-star merger in which deconfinement to quark matter takes place. For this purpose, we make use of the Chiral Mean Field (CMF) model, an effective relativistic model that includes self-consistent chiral symmetry restoration and deconfinement to quark matter and, for this reason, predicts the existence of different degrees of freedom depending on the local density/chemical potential and temperature. We then use the out-of-chemical-equilibrium finite-temperature CMF equation of state in full general-relativistic simulations to analyze which regions of different QCD phase diagrams are probed and which conditions, such as strangeness and entropy, are generated when a strong first-order phase transition appears. We also investigate the amount of electrons present in different stages of the merger and discuss how far from chemical equilibrium they can be and, finally, draw some comparisons with matter created in supernova explosions and heavy-ion collisions.
Evaluation of a rapid turn-over, fully-automated ADAMTS13 activity assay: a method comparison study
(2020)
Thrombotic thrombocytopenic purpura (TTP) is a life-threatening thrombotic microangiopathy caused by severely reduced activity of the von-Willebrand factor-cleaving protease ADAMTS13, mainly due to anti-ADAMTS-13 antibodies. Although several test systems for ADAMTS13 measurement exist, long turn-around times hamper the usability in daily practice. We performed a method comparison study for two commercially available ADAMTS13 assays and evaluated the agreement between the fully-automated rapid turn-over HemosIL AcuStar ADAMTS13 Activity assay and the manually performed TECHNOZYM ADAMTS-13 Activity assay. Twenty-four paired test samples derived from 10 consecutively recruited patients (n = 8, acquired TTP; n = 1, atypical hemolytic uremic syndrome; n = 1, control) were included, of which nine test samples were collected in case of clinically apparent TTP and 13 samples were collected from TTP patients in clinical remission. Overall correlation between the TECHNOZYM and AcuStar assay was good with a Pearson R of 0.93 (p < 0.001). Agreement between the assays assessed with the Passing–Bablok analysis was high, with an intercept of −2.56 (95% confidence interval [CI], −5.07 to −0.86) and a slope of 1.04 (95% CI 0.84–1.17). The absolute mean bias was 2.54% (standard difference [SD], 6.38%; 95% CI −10.0 to 15.05%). Intra-method reliability was high with an absolute mean bias of −0.13% (SD 3.21%; 95% CI −6.42 to 6.16%). The observer agreement for categorial thresholds (> or < 10% ADAMTS13 activity) was kappa = 0.82 (95% CI 0.59–1.0). Conclusively, overall agreement between the testing methods was sufficient and we support previously published data suggesting that the AcuStar assay is a valuable and accurate tool for ADAMTS13 activity testing and TTP diagnostics.
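The two core agreement statistics used in such a method comparison, Pearson correlation and mean bias (Bland–Altman style), can be sketched as follows; the paired ADAMTS13 activity values (%) below are made up for illustration and are not study data:

```python
# Hedged sketch of a method-comparison analysis between two paired assays.
# The activity values (%) are hypothetical, not the published study samples.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length paired series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_bias(xs, ys):
    """Average of the pairwise differences (method A minus method B)."""
    return sum(x - y for x, y in zip(xs, ys)) / len(xs)

technozym = [5, 8, 12, 45, 60, 72, 80, 95]   # hypothetical activity (%)
acustar   = [4, 9, 10, 48, 63, 70, 84, 97]   # hypothetical activity (%)

r = pearson_r(technozym, acustar)
bias = mean_bias(technozym, acustar)
```

The 95% limits of agreement quoted in the abstract then follow from the bias as bias ± 1.96 × SD of the differences.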
Corporate governance is the set of rules, be they legal or self-regulatory, practices and processes pursuant to which an insurance undertaking is administered. Good corporate governance is key not only to establishing oneself and succeeding in a competitive environment but also to safeguarding the interests of all stakeholders in an insurance undertaking. It is therefore not surprising that mandatory requirements on the administration of insurance undertakings have become rather prolific in recent years, in an attempt by regulators to protect especially policyholders against perceived risks hailing from improperly governed insurance undertakings. In Germany this has been regarded by many undertakings as an overly paternalistic approach of the legislator, especially considering that the German insurance sector has for decades if not centuries experienced a remarkably low number of insolvencies and that German insurers were neither the trigger nor (especially) endangered actors in the financial crisis commencing in 2007. Notwithstanding the true core of this criticism, namely that the insurance industry was to a certain degree taken hostage by the shortcomings within the banking sector, the reform of German Insurance Supervisory Law via implementation of the Solvency II system has brought many advances in the sense of better governance of insurance undertakings and has also brought to light many deficiencies that the administration of some insurance undertakings may have suffered from in the past, which are now more properly addressed.
Macrophages acquire anti-inflammatory and proresolving functions to facilitate resolution of inflammation and promote tissue repair. While alternatively activated macrophages (AAMs), also referred to as M2 macrophages, polarized by type 2 (Th2) cytokines IL-4 or IL-13 contribute to the suppression of inflammatory responses and play a pivotal role in wound healing, contemporaneous exposure to apoptotic cells (ACs) potentiates the expression of anti-inflammatory and tissue repair genes. Given that liver X receptors (LXRs), which coordinate sterol metabolism and immune cell function, play an essential role in the clearance of ACs, we investigated whether LXR activation following engulfment of ACs selectively potentiates the expression of Th2 cytokine-dependent genes in primary human AAMs. We show that AC uptake simultaneously upregulates LXR-dependent, but suppresses SREBP-2-dependent gene expression in macrophages, both of which are prevented by inhibiting Niemann–Pick C1 (NPC1)-mediated sterol transport from lysosomes. Concurrently, macrophages accumulate the sterol biosynthetic intermediates desmosterol, lathosterol, lanosterol, and dihydrolanosterol but not cholesterol-derived oxysterols. Using global transcriptome analysis, we identify anti-inflammatory and proresolving genes, including interleukin-1 receptor antagonist (IL1RN) and arachidonate 15-lipoxygenase (ALOX15), whose expression is selectively potentiated in macrophages upon concomitant exposure to ACs or LXR agonist T0901317 (T09) and Th2 cytokines. We show that priming macrophages via LXR activation enhances the cellular capacity to synthesize the inflammation-suppressing specialized proresolving mediator (SPM) precursors 15-HETE and 17-HDHA as well as resolvin D5. Silencing LXRα and LXRβ in macrophages attenuates the potentiation of ALOX15 expression by concomitant stimulation with ACs or T09 and IL-13.
Collectively, we identify a previously unrecognized mechanism of regulation whereby LXR integrates AC uptake to selectively shape Th2-dependent gene expression in AAMs.
Mongolian spots (MS) are congenital dermal conditions resulting from the migration of neural crest-derived melanocytes to the skin during embryogenesis. MS incidences are highly variable in different populations. Morphologically, MS present as hyperpigmented maculae of varying size and form, ranging from round spots of 1 cm in diameter to extensive discolorations covering predominantly the lower back and buttocks. Due to their coloring, which also depends on the skin type, MS may mimic hematoma, thus posing a challenge to the physician conducting examinations of children in cases of suspected child abuse. In the present study, MS incidences and distribution, as well as skin types, were documented in a cohort of 253 children examined on the basis of suspected child abuse. From these data, a classification scheme was derived to document MS and to help identify cases requiring recurrent examination for unambiguous interpretation of initial findings, alongside the main decisive factors for re-examination such as the general circumstances of the initial examination (e. g., experience of the examiner, lighting conditions) and given dermatological conditions of the patient (e. g., diaper rash).
Objective: Relative to urban populations, rural patients may have more limited access to care, which may delay bladder cancer (BCa) diagnosis and even compromise survival.
Methods: We tested the effect of residency status, defined according to the US Census Bureau (rural areas [RA, <2500 inhabitants] vs. urban clusters [UC, ≥2500 inhabitants] vs. urbanized areas [UA, ≥50,000 inhabitants]), on BCa stage at presentation, as well as on cancer-specific mortality (CSM) and other-cause mortality (OCM). Multivariate competing risks regression (CRR) models were fitted after matching of RA or UC with UA in stage-stratified analyses.
Results: Of 222,330 patients, 3496 (1.6%) resided in RA, 25,462 (11.5%) in UC and 193,372 (87%) in UA. Age, tumor stage, radical cystectomy rates and chemotherapy use were comparable between RA, UC and UA (all p > 0.05). At 10 years, RA was associated with the highest OCM, followed by UC and UA (30.9% vs. 27.7% vs. 25.6%, p < 0.01). Similarly, CSM was also marginally higher in RA or UC vs. UA (20.0% vs. 20.1% vs. 18.8%, p = 0.01). In stage-stratified, fully matched CRR analyses, increased OCM and CSM only applied to stage T1 BCa patients.
Conclusion: We did not observe meaningful differences in access to treatment or stage distribution according to residency status. However, RA, and to a lesser extent UC, residency status was associated with higher OCM and marginally higher CSM in T1N0M0 patients. This observation should be validated or refuted in further epidemiological investigations.
There is limited knowledge on the prevalence and risk factors of diabetic retinopathy (DR) in dialysis patients. We have investigated the association between diabetes mellitus and lipid-related biomarkers and retinopathy in hemodialysis patients. We reviewed 1,255 hemodialysis patients with type 2 diabetes mellitus (T2DM) who participated in the German Diabetes and Dialysis Study (4D Study). Associations between categorical clinical and biochemical variables and diabetic retinopathy were examined by logistic regression. On average, patients were 66 ± 8 years of age, 54% were male and the HbA1c was 6.7% ± 1.3%. DR, found in 71% of the patients, was significantly and positively associated with fasting glucose, HbA1c, time on dialysis, age, systolic blood pressure, body mass index and the prevalence of other microvascular diseases (e.g. neuropathy). Unexpectedly, DR was associated with high HDL cholesterol and high apolipoproteins AI and AII. Patients with coronary artery disease were less likely to have DR. DR was not associated with gender, smoking, diastolic blood pressure, VLDL cholesterol, triglycerides, and LDL cholesterol. In summary, the prevalence of DR in patients with type 2 diabetes mellitus requiring hemodialysis is higher than in T2DM patients who do not receive hemodialysis. DR was positively related to systolic blood pressure (BP), glucometabolic control, and, paradoxically, HDL cholesterol. These data suggest that glucose and blood pressure control may delay the development of DR in patients with diabetes mellitus on dialysis.
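The logistic-regression association analysis described above can be sketched as follows. This is a minimal illustration on synthetic data, not the 4D Study data: the distributions, coefficient values, and the Newton-Raphson fitting routine are all assumptions chosen for the sketch.

```python
import numpy as np

# Minimal sketch on synthetic data: logistic regression relating clinical
# variables to a binary diabetic retinopathy (DR) outcome. All distributions
# and coefficients below are illustrative assumptions, not 4D Study values.
rng = np.random.default_rng(0)
n = 2000
hba1c = rng.normal(6.7, 1.3, n)      # HbA1c in %, matching the cohort mean
sys_bp = rng.normal(145.0, 20.0, n)  # systolic BP in mmHg (assumed)

# Assumed positive associations, consistent with the reported direction
true_logit = -10.0 + 0.8 * hba1c + 0.03 * sys_bp
dr = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), hba1c, sys_bp])  # intercept + covariates
beta = np.zeros(3)
for _ in range(25):  # Newton-Raphson on the concave log-likelihood
    eta = np.clip(X @ beta, -30, 30)
    p = 1 / (1 + np.exp(-eta))
    grad = X.T @ (dr - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta[1:])  # per-unit odds ratios for HbA1c and systolic BP
print(odds_ratios)
```

An odds ratio above 1 for a covariate corresponds to the "significantly and positively associated" findings reported in the abstract.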
The genus Ebolavirus comprises some of the deadliest viruses for primates and humans, and associated disease outbreaks are increasing in Africa. Several lines of evidence suggest that bats are putative reservoir hosts and play a major role in the transmission cycle of these filoviruses. Thus, detailed knowledge about their distribution might improve risk estimations of where future disease outbreaks might occur. A MaxEnt niche modelling approach based on climatic variables and land cover was used to investigate the potential distribution of 9 bat species associated with Zaire ebolavirus. This viral species has led to major Ebola outbreaks in Africa and is known for causing high mortalities. Modelling results suggest suitable areas mainly near the coasts of West Africa, with extensions into Central Africa, where almost all of the 9 species studied find suitable habitat conditions. Previous spillover events and outbreak sites of the virus are covered by the modelled distribution of 3 bat species that have tested positive for the virus not only by serology but also by PCR. Modelling the habitat suitability of the bats is an important step that can benefit public information campaigns and may ultimately help control future outbreaks of the disease.
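The niche-modelling idea above can be sketched in simplified form. MaxEnt itself is a dedicated maximum-entropy method; as a rough stand-in, the sketch below contrasts presence cells against the rest of an environmental grid with a logistic model, which approximates the presence-background contrast. The covariates, coefficients, and grid are invented for illustration and are not from the study.

```python
import numpy as np

# Simplified sketch of a habitat-suitability model: presence records over a
# grid of environmental covariates, fitted with a logistic model as a crude
# stand-in for MaxEnt. All data below are synthetic, illustrative assumptions.
rng = np.random.default_rng(1)
n_cells = 4000
temp = rng.normal(25.0, 3.0, n_cells)      # e.g. mean annual temperature (assumed)
rain = rng.normal(1500.0, 400.0, n_cells)  # e.g. annual rainfall in mm (assumed)

# Assumed "true" preference: warm, wet cells are more suitable
true_logit = -2.0 + 0.4 * (temp - 25.0) + 0.003 * (rain - 1500.0)
presence = (rng.random(n_cells) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n_cells), temp, rain])
beta = np.zeros(3)
for _ in range(30):  # Newton-Raphson for the logistic log-likelihood
    eta = np.clip(X @ beta, -30, 30)
    p = 1 / (1 + np.exp(-eta))
    grad = X.T @ (presence - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

# Predicted suitability for every grid cell, in [0, 1]
suitability = 1 / (1 + np.exp(-np.clip(X @ beta, -30, 30)))
```

Mapping `suitability` back onto the grid gives the kind of potential-distribution surface the abstract describes, one per bat species.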
Current scholarly engagement with employees' experience of meaning focuses primarily on the problem of strain-induced loss of meaning. According to this view, a growing number of employees suffer from no longer being able to experience their work as meaningful. Such a perspective, however, loses sight of the subjective shaping and appropriation of work. This article turns to these aspects by asking to what extent different forms of the appropriation of work can be identified. Based on interviews with forty highly qualified employees, three different modes of appropriation, with their inherent ambivalences, are identified. Each mode stands for a specific view of one's own scope for shaping work and for a form of primary attribution of meaning in work. Three ideal types are distinguished ("progressive Sinngestaltung", "widerständige Sinnbewahrung" and "pragmatische Sinnbewahrung", i.e. progressive meaning-shaping, resistant meaning-preservation and pragmatic meaning-preservation), through which the heterogeneity and the ambivalences of the appropriation of professional work become apparent. The article thus provides insights into the subjective practices of making work meaningful and contributes to research on the interplay of work and subjectivity.
Objectives: To evaluate the in vitro efficacy of surgical and non-surgical air-polishing for implant surface decontamination.
Material and methods: One hundred eighty implants were distributed to three differently angulated bone defect models (30°, 60°, 90°). Biofilm was imitated using an indelible red color. Sixty implants were used for each defect, 20 each being air-polished with one of three glycine air powder abrasion (GAPA1–3) combinations. Within each group of 20 equally air-polished implants, surgical and non-surgical procedures (with/without a mucosa mask) were simulated. All implants were photographed to determine the uncleaned surface. Changes in surface morphology were assessed using scanning electron micrographs (SEM).
Results: Cleaning efficacy did not differ significantly between GAPA1–3 for surgical or non-surgical application. Within a given cleaning method, significant differences (p < 0.001) occurred for GAPA2 between 30° (11.77 ± 2.73%) and 90° (7.25 ± 1.42%) in the non-surgical simulation and between 30° (8.26 ± 1.02%) and 60° (5.02 ± 0.84%) in the surgical simulation. The surgical use of air-polishing (6.68 ± 1.66%) was significantly superior (p < 0.001) to the non-surgical use (10.13 ± 2.75%). SEM micrographs showed no surface damage after use of GAPA.
Conclusions: Air-polishing is an efficient, surface protective method for surgical and non-surgical implant surface decontamination in this in vitro model. No method resulted in a complete cleaning of the implant surface.
Clinical relevance: Air-polishing appears to be promising for implant surface decontamination regardless of the device.
Purpose: The COVID-19 pandemic influenced the social, industrial, and medical situation in all affected countries. Confinement measures included the suspension of scheduled non-emergency surgical procedures and outpatient clinics as well as general access restrictions to hospitals and medical practices. The aim of this retrospective study was to assess whether the obligatory confinement (lockdown) had an effect on the number of appendectomies during and after the lockdown period.
Methods: This retrospective study was based on anonymized nationwide administrative claims data of the German Local General Sickness Fund (AOK). Patients admitted for diseases of the appendix (ICD-10: K35-K38) or abdominal and pelvic pain (ICD-10: R10) who underwent an appendectomy (OPS: 5-470) were included. The study period covered the 6 weeks of the German lockdown (16 March–26 April 2020) as well as the 6 weeks before (03 February–15 March 2020) and after (27 April–07 June 2020). These periods were compared to the respective periods in 2018 and 2019.
Results: The overall number of appendectomies was significantly reduced during the lockdown period in 2020 compared to 2018 and 2019. This decrease affected only appendectomies for acute simple (ICD-10: K35.30, K35.8) and non-acute appendicitis (ICD-10: K36-K38, R10); the number of appendectomies for acute complex appendicitis remained unchanged. Female patients and patients aged 1–18 years showed the strongest decrease in case numbers.
Conclusion: The lockdown in Germany resulted in a decreased number of appendectomies. This affected mainly appendectomies in simple acute and non-acute appendicitis, but not complicated acute appendicitis. The study gives no evidence that the confinement measures resulted in a deterioration of medical care for appendicitis.
Background: Alterations in the SCN5A gene encoding the cardiac sodium channel Nav1.5 have been linked to a number of arrhythmia syndromes and diseases, including long-QT syndrome (LQTS), Brugada syndrome (BrS) and dilative cardiomyopathy (DCM), which may predispose to fatal arrhythmias and sudden death. We identified the heterozygous variant c.316A > G, p.(Ser106Gly) in a 35-year-old patient who survived cardiac arrest. In the present study, we aimed to investigate the functional impact of the variant to clarify its medical relevance.
Methods: Mutant as well as wild-type GFP-tagged Nav1.5 channels were expressed in HEK293 cells. We performed functional characterization experiments using the patch-clamp technique.
Results: Electrophysiological measurements indicated that the detected missense variant alters Nav1.5 channel function, leading to a gain-of-function effect. Cells expressing S106G channels show an increase in Nav1.5 current over the entire voltage window.
Conclusion: The results support the assumption that the detected sequence aberration alters Nav1.5 channel function and may predispose to cardiac arrhythmias and sudden cardiac death.
Objectives: To immunohistochemically characterize and correlate macrophage M1/M2 polarization status with disease severity at peri-implantitis sites.
Materials and methods: A total of twenty patients (n = 20 implants) diagnosed with peri-implantitis (i.e., bleeding on probing with or without suppuration, probing depths ≥ 6 mm, and radiographic marginal bone loss ≥ 3 mm) were included. The severity of peri-implantitis was classified according to established criteria (i.e., slight, moderate, and advanced). Granulation tissue biopsies were obtained during surgical therapy and prepared for immunohistological assessment and macrophage polarization characterization. Macrophages, M1, and M2 phenotypes were identified through immunohistochemical markers (i.e., CD68, CD80, and CD206) and quantified through histomorphometrical analyses.
Results: Macrophages exhibiting a positive CD68 expression occupied a mean proportion of 14.36% (95% CI 11.4–17.2) of the inflammatory connective tissue (ICT) area. Positive M1 (CD80) and M2 (CD206) macrophages occupied mean proportions of 7.07% (95% CI 5.9–9.4) and 5.22% (95% CI 3.8–6.6) of the ICT, respectively. The mean M1/M2 ratio was 1.56 (95% CI 1.12–1.9). Advanced peri-implantitis cases expressed a significantly higher M1 (%) when compared with M2 (%) expression. There was a significant correlation between CD68 (%) and M1 (%) expression and probing depth (PD) values.
Conclusion: The present immunohistochemical analysis suggests that macrophages constitute a considerable proportion of the inflammatory cellular composition at peri-implantitis sites, revealing a significantly higher expression of the M1 inflammatory phenotype at advanced peri-implantitis sites, which could play a critical role in disease progression.
Clinical relevance: Macrophages have critical functions in both homeostasis and disease. Bacteria might induce oral dysbiosis, unbalancing the host's immunological response and triggering inflammation around dental implants. The M1/M2 status could possibly reveal the underlying pathogenesis of peri-implantitis.
Respiratory complex I catalyzes electron transfer from NADH to ubiquinone (Q) coupled to vectorial proton translocation across the inner mitochondrial membrane. Despite recent progress in structure determination of this very large membrane protein complex, the coupling mechanism is a matter of ongoing debate and the function of accessory subunits surrounding the canonical core subunits is essentially unknown. Concerted rearrangements within a cluster of conserved loops of the central subunits NDUFS2 (β1-β2 loop), ND1 (TMH5-6 loop) and ND3 (TMH1-2 loop) were suggested to be critical for its proton pumping mechanism. Here, we show that stabilization of the TMH1-2 loop of ND3 by the accessory subunit LYRM6 (NDUFA6) is pivotal for energy conversion by mitochondrial complex I. We determined the high-resolution structure of the inactive F89A mutant of LYRM6 in eukaryotic complex I from the yeast Yarrowia lipolytica and found long-range structural changes affecting the entire loop cluster. In atomistic molecular dynamics simulations of the mutant, we observed conformational transitions in the loop cluster that disrupted a putative pathway for delivery of substrate protons required in Q redox chemistry. Our results elucidate in detail the essential role of accessory subunit LYRM6 for the function of eukaryotic complex I and offer clues on its redox-linked proton pumping mechanism.
Against the background of the increasing transformation of the urban living environment through gentrification, investor-friendly urban politics, the privatization of public spaces, cuts in public investment and the dismantling of democratic participation instruments, we asked ourselves: What could a solidary city of the future look like? Which counter-models to the currently dominant paradigms in urban development show us ways out of the supposed lack of alternatives towards a solidary practice at the neighborhood level? Within the framework of an applied critical geography, we aim to show that there is a multitude of projects and initiatives that break through the lack of creativity to which neoliberalism has conditioned us and that work on concrete ideas and their practical implementation. As a theoretical approach, we engage with utopias and their potential for political practice. Since we ourselves are active in urban-political groups, we use activist urban research as the methodological framework of our study. The result is a leaflet, the "Kompass für ein solidarisches Quartier" (compass for a solidary neighborhood), intended to serve as an activist tool and a source of ideas for the concrete implementation of transformative urban politics.
The production of K*(892)0 and ϕ(1020) in pp collisions at √s = 8 TeV was measured using Run 1 data collected by the ALICE collaboration at the LHC. The pT-differential yields d²N/dy dpT in the range 0 < pT < 20 GeV/c for K*0 and 0.4 < pT < 16 GeV/c for ϕ have been measured at midrapidity, |y| < 0.5. Moreover, improved measurements of the K*(892)0 and ϕ(1020) at √s = 7 TeV are presented. The collision energy dependence of pT distributions, pT-integrated yields and particle ratios in inelastic pp collisions are examined. The results are also compared with different collision systems. The values of the particle ratios are found to be similar to those measured at other LHC energies. In pp collisions a hardening of the particle spectra is observed with increasing energy, but at the same time it is also observed that the relative particle abundances are independent of the collision energy. The pT-differential yields of K*0 and ϕ in pp collisions at √s = 8 TeV are compared with the expectations of different Monte Carlo event generators.
The transverse momentum (pT) differential yields of (anti-)3He and (anti-)3H measured in p-Pb collisions at √sNN = 5.02 TeV with ALICE at the Large Hadron Collider (LHC) are presented. The ratios of the pT-integrated yields of (anti-)3He and (anti-)3H to the proton yields are reported, as well as the pT dependence of the coalescence parameters B3 for (anti-)3He and (anti-)3H. For (anti-)3He, the results obtained in four classes of the mean charged-particle multiplicity density are also discussed. These results are compared to predictions from a canonical statistical hadronization model and coalescence approaches. An upper limit on the total yield of anti-4He is determined.
The global polarization of the Λ and Λ̄ hyperons is measured for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV recorded with ALICE at the LHC. The results are reported differentially as a function of collision centrality and hyperon transverse momentum (pT) for the centrality range 5-50%, 0.5 < pT < 5 GeV/c, and rapidity |y| < 0.5. The hyperon global polarization averaged for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV is found to be consistent with zero, ⟨PH⟩ (%) ≈ −0.01 ± 0.05 (stat.) ± 0.03 (syst.) in the collision centrality range 15-50%, where the largest signal is expected. The results are compatible with expectations based on an extrapolation from measurements at lower collision energies at RHIC, hydrodynamical model calculations, and empirical estimates based on the collision energy dependence of directed flow, all of which predict global polarization values at LHC energies of the order of 0.01%.
The Quark Gluon Plasma (QGP) produced in ultra relativistic heavy-ion collisions at the Large Hadron Collider (LHC) can be studied by measuring the modifications of jets formed by hard scattered partons which interact with the medium. We studied these modifications via angular correlations of jets with charged hadrons for jets with momenta 20 < pT,jet < 40 GeV/c as a function of the associated particle momentum. The reaction plane fit (RPF) method is used in this analysis to remove the flow modulated background. The analysis of angular correlations for different orientations of the jet relative to the second order event plane allows for the study of the path length dependence of medium modifications to jets. We present the dependence of azimuthal angular correlations of charged hadrons with respect to the angle of the axis of a reconstructed jet relative to the event plane in Pb-Pb collisions at √sNN = 2.76 TeV. The dependence of particle yields associated with jets on the angle of the jet with respect to the event plane is presented. Correlations at different angles relative to the event plane are compared through ratios and differences of the yield. No dependence of the results on the angle of the jet with respect to the event plane is observed within uncertainties, which is consistent with no significant path length dependence of the medium modifications for this observable.
The first measurement at the LHC of charge-dependent directed flow (v1) relative to the spectator plane is presented for Pb-Pb collisions at √sNN = 5.02 TeV. Results are reported for charged hadrons and D0 mesons for the transverse momentum intervals pT > 0.2 GeV/c and 3 < pT < 6 GeV/c in the 5-40% and 10-40% centrality classes, respectively. The difference between the positively and negatively charged hadron v1 has a positive slope as a function of pseudorapidity η, dΔv1/dη = [1.68 ± 0.49 (stat.) ± 0.41 (syst.)] × 10⁻⁴. The same measurement for D0 and D̄0 mesons yields a positive value dΔv1/dη = [4.9 ± 1.7 (stat.) ± 0.6 (syst.)] × 10⁻¹, which is about three orders of magnitude larger than that of the charged hadrons. These measurements can provide new insights into the effects of the strong electromagnetic field and the initial tilt of matter created in non-central heavy-ion collisions on the dynamics of light (u, d, and s) and heavy (c) quarks. The large difference between the observed Δv1 of charged hadrons and D0 mesons may reflect a different sensitivity of the charm and light quarks to the early time dynamics of a heavy-ion collision. These observations challenge some of the recent theoretical calculations, which predicted a negative and an order of magnitude smaller value of dΔv1/dη for both light-flavour and charmed hadrons.
The first measurements of dielectron production at midrapidity (|ηc| < 0.8) in proton-proton and proton-lead collisions at √sNN = 5.02 TeV at the LHC are presented. The dielectron cross section is measured with the ALICE detector as a function of the invariant mass mee and the pair transverse momentum pT,ee in the ranges mee < 3.5 GeV/c² and pT,ee < 8.0 GeV/c, in both collision systems. In proton-proton collisions, the charm and beauty cross sections are determined at midrapidity from a fit to the data with two different event generators. This complements the existing dielectron measurements performed at √s = 7 and 13 TeV. The slope of the √s dependence of the three measurements is described by FONLL calculations. The dielectron cross section measured in proton-lead collisions is in agreement, within the current precision, with the expected dielectron production without any nuclear matter effects for e+e− pairs from open heavy-flavor hadron decays. For the first time at LHC energies, the dielectron production in proton-lead and proton-proton collisions is directly compared at the same √sNN via the dielectron nuclear modification factor RpPb. The measurements are compared to model calculations including cold nuclear matter effects, or additional sources of dielectrons from thermal radiation.
This article reports measurements of the pT-differential inclusive jet cross-section in pp collisions at √s = 5.02 TeV and the pT-differential inclusive jet yield in Pb-Pb 0-10% central collisions at √sNN = 5.02 TeV. Jets were reconstructed at mid-rapidity with the ALICE tracking detectors and electromagnetic calorimeter using the anti-kT algorithm. For pp collisions, we report jet cross-sections for jet resolution parameters R = 0.1–0.6 over the range 20 < pT,jet < 140 GeV/c, as well as the jet cross-section ratios of different R, and comparisons to two next-to-leading-order (NLO)-based theoretical predictions. For Pb-Pb collisions, we report the R = 0.2 and R = 0.4 jet spectra for 40 < pT,jet < 140 GeV/c and 60 < pT,jet < 140 GeV/c, respectively. The scaled ratio of jet yields observed in Pb-Pb to pp collisions, RAA, is constructed, and exhibits strong jet quenching and a clear pT-dependence for R = 0.2. No significant R-dependence of the jet RAA is observed within the uncertainties of the measurement. These results are compared to several theoretical predictions.
Mid-rapidity production of π±, K± and (anti-)protons measured by the ALICE experiment at the LHC, in Pb-Pb and inelastic pp collisions at √sNN = 5.02 TeV, is presented. The invariant yields are measured over a wide transverse momentum (pT) range from hundreds of MeV/c up to 20 GeV/c. The results in Pb-Pb collisions are presented as a function of the collision centrality, in the range 0−90%. The comparison of the pT-integrated particle ratios, i.e. proton-to-pion (p/π) and kaon-to-pion (K/π) ratios, with similar measurements in Pb-Pb collisions at √sNN = 2.76 TeV shows no significant energy dependence. Blast-wave fits of the pT spectra indicate that in the most central collisions radial flow is slightly larger at 5.02 TeV with respect to 2.76 TeV. Particle ratios (p/π, K/π) as a function of pT show pronounced maxima at pT ≈ 3 GeV/c in central Pb-Pb collisions. At high pT, particle ratios at 5.02 TeV are similar to those measured in pp collisions at the same energy and in Pb-Pb collisions at √sNN = 2.76 TeV. Using the pp reference spectra measured at the same collision energy of 5.02 TeV, the nuclear modification factors for the different particle species are derived. Within uncertainties, the nuclear modification factor is particle species independent at high pT and compatible with measurements at √sNN = 2.76 TeV. The results are compared to state-of-the-art model calculations, which are found to describe the observed trends satisfactorily.
In bioengineering, scaffold proteins have been increasingly used to recruit molecules to parts of a cell or to enhance the efficacy of biosynthetic or signalling pathways. For example, scaffolds can make weak or non-immunogenic small molecules immunogenic by attaching them to the scaffold; in this role, the scaffold is called a carrier. Here, we present the dodecin from Mycobacterium tuberculosis (mtDod) as a new scaffold protein. MtDod is a homododecameric complex of spherical shape, high stability and robust assembly, which allows the attachment of cargo at its surface. We show that mtDod, either directly loaded with cargo or equipped with domains for non-covalent and covalent loading of cargo, can be produced recombinantly in high quantity and quality in Escherichia coli. Fusions of mtDod with proteins of up to four times the size of mtDod, e.g. with monomeric superfolder green fluorescent protein creating a 437 kDa dodecamer, were successfully purified, demonstrating mtDod's ability to function as a recruitment hub. Further, mtDod variants equipped with SYNZIP and SpyCatcher domains for post-translational recruitment of cargo were prepared, of which the mtDod/SpyCatcher system proved particularly useful. In a case study, we finally show that mtDod-peptide fusions allow the production of antibodies against human heat shock proteins and the C-terminus of the heat shock cognate 70 interacting protein (CHIP).
Aim: The primary aim of this study was to analyze the frequency and characteristics of combined facial and peripheral trauma with subsequent hospitalization and treatment.
Materials and methods: The study included all patients with concomitant orthopedic-traumatological (OT) and craniomaxillofacial (CMF) injuries admitted to our level I trauma center in 2018. The data were collected by analysis of the institution's database and radiological reviews and included age, sex, injury type, and the weekday and time of presentation. All patients were examined and treated by a team of surgeons specialized in OT and CMF directly after presentation.
Results: A total of 1040 combined OT and CMF patients were identified. Mean age was 33.0 ± 26.2 years, and 67.3% (n = 700) were male. Primary presentation occurred most frequently on Sundays (n = 199) and between 7 and 8 pm (n = 74). 193 OT fractures were documented, among which cervical spine injuries were most frequent (n = 30). 365 facial and skull fractures were recorded. Of the 204 patients with fractures of the viscerocranium, 10.8% presented with at least one fracture of an extremity, 7.8% (16/204) with cervical spine fractures, 33.3% (68/204) with signs of closed brain trauma and 9.8% (20/204) with intracranial hemorrhage.
Discussion: The study shows a high frequency of combined facial and OT injuries and brain damage in a predominantly young and male cohort. Attendance by interdisciplinary teams of both CMF and OT surgeons specialized in cervical spine trauma surgery is highly advisable for adequate treatment.
Conclusion: Diagnostics and treatment should be performed by a highly specialized OT and CMF team, with a consulting neurosurgeon in a level-1 trauma center to avoid missed diagnoses and keep mortality low.
Hardly any other visual motif makes climate change as visible as melting glaciers. They therefore play a central role for climate research itself, for the popularization of its alarming findings, and for contemporary art, which, in light of these insights, is searching for an adequate new aesthetic. Accordingly, the cultural-studies engagement with glacier images has by now become extensive. Numerous exhibition catalogues and comprehensive studies trace their development from the early 17th century, to which the first pictorial representations are dated, to the present, in which glaciers and their disappearance have become an emblem of global warming. The heuristic of comparison plays an important role here: not only does it form the basis for classical art-historical studies concerned with the changing forms of expression and pictorial conventions of glacier images (for instance on a scale between idealization and realism). Moreover, and in particular, the process of disappearance itself depends on the comparative gaze, for only in this way does it reveal itself in its full drama. This essay, however, takes a different perspective: drawing terminologically on Jussi Parikka's 'media geology' and against the background of the broad field of media ecology, it outlines a "media glaciology" that understands glaciers themselves as media. In keeping with the research paradigm of comparative media studies, according to which specific medialities only become accessible from a media-comparative perspective, the essay asks how this "becoming-media" of glaciers takes place in and through comparison with other (technical) media. I focus temporally on the 19th and early 20th centuries and regionally on the Alpine glaciers, whose scientific study founded the discipline of glaciology.
Letters, the conversation of two absent parties, play a major role in many films. They are shown on screen or read aloud in voice-over; we see scenes of reading and writing that play with the ambiguity of the written word. According to Christina Bartz, the letter is "particularly compatible with film" "because of the communicative connection across temporal and spatial distances", since film likewise brings together what is spatially and temporally separated through montage. In contrast to film, however, the letter is not a mass medium but individual communication. Showing the medium of the letter, or replacing this historical medium with a more current one in a film, always also offers an opportunity for media reflection. In my contribution, I use two prominent examples to observe, first, how film adaptations of letter-based literary texts deal with letters and, second, how media reflection takes place through the theme of the letter. To this end, I present two melodramas in which letters, and the recognition and misrecognition that accompany them, play a central role: Max Ophüls' "Letter from an Unknown Woman" (USA 1948), the film adaptation of Stefan Zweig's novella "Brief einer Unbekannten" (1922), and "Atonement" (2007), the adaptation of Ian McEwan's novel of the same name from 2001.
The uncertainty in the field of contemporary art concerns not only the question of the quality of art but also that of the boundary between the art(work) and its respective outside. [...] Art that questions a conventional concept of the work (and is often rejected by the broad public) but is nevertheless situated and locatable, and therefore at least largely recognizable as art, will in the following be called Gegenwartskunst (contemporary art); art that is integrated into and intervenes in everyday life, and is sometimes not perceived as art at all, will be called Situationskunst (situational art). Contemporary art presupposes its autonomy and a clear boundary between art and non-art; situational art (which could be regarded as a radical form and thus as part of contemporary art) sows doubt about the autonomy of art, even if it frequently uses, or 'must' use, that autonomy as an argument against appeals or encroachments by politics, religion, or everyday reality. In both forms, which overlap in many cases, nothing is created any longer in the conventional sense ('poesis'); rather, something is found or, ultimately, 'simply' done ('praxis'). In both cases nothing is self-evident any more: in reception, at least at first, it is unclear whether we are dealing with art at all. In other words, at the moment of visiting an exhibition we cannot rely on our sense perceptions, our experience, or our implicit (prior) knowledge if we want to know what we are dealing with and what it is all about. We therefore need, not least, explanations and elucidations (which can in turn congeal into implicit knowledge), and that is one reason why contemporary art could be of interest to comparative literature. More on this later. The terms Gegenwartskunst and Situationskunst cover a very wide range of phenomena.
The following will therefore be a cursory sketch aimed primarily at those phenomena, and their commonalities, that are of interest to comparative literature. The focus is not on a precise analysis and interpretation of the phenomena themselves, but on the question of what, with regard to the discipline of comparative literature, would be rewarding to analyse and interpret. The phenomena and examples discussed below are in any case located at the periphery of comparative literature, with all the disadvantages that working in peripheries entails.
Inscriptions are forms characterised by a particular medial disposition. What distinguishes inscriptions, apart from their close relation to a material carrier, is their peculiar position on the threshold between writing and image. [...] The inscription's characteristic of exhibiting a word or a text as a visible sequence of signs was captured by the Italian epigrapher Armando Petrucci in the concept of 'scrittura esposta'. [...] If one tries to grasp more precisely the specific potency of the inscription touched on here, it makes sense to return first to the visual dimension. It is, one may assume, the inscription's capacity to appear as an image that allows it to enter the viewer's gaze and present itself before the viewer as an exposed figure. With this pictorial mode of appearance, the argument might continue, are linked aesthetic qualities of sensory impressiveness and presence that lend the inscription its characteristic expressive and declarative power. [...] This explanation, however, captures only one side of the inscription and its medial and reception-aesthetic constitution. What is special about the inscription is not exhausted by its character as an exhibited, exposed formation of signs. The inscription is not only 'esposta' but also 'scrittura'. The inscription's particular mode of design and effect thus does not rest solely on its pictorial disposition. The efficacy of the inscription, so the thesis proposed here, is owed to the fact that, even as it asserts itself as an exposed, striking, and widely visible figure, it simultaneously preserves its character as writing and displays that character no less clearly. Whoever looks at an inscription beholds in its pictorial design at the same time the visual form of a text, of a linguistic utterance.
Through its design as 'scrittura', the inscription thus appears in a form that is invested in a specific way with moments of power and authority. Writing is, after all, the medium in which, in a tradition reaching from antiquity into the modern era, the law, the recorded and materialised 'voice of the sovereign', confronts us. What is special about the inscription, one can provisionally conclude, seems to consist in the fact that it links the media of image and writing in a specific way. At work in it are medial and aesthetic qualities belonging partly to the image and partly to writing, and on this interplay also rests the peculiar potential for effect associated with this form of utterance. In what follows, the aim is to explore this interplay of pictorial and scriptural aspects more closely and, against this background, to examine the meaning and efficacy of inscriptional signs, particularly in political contexts.
The internet enters film in the most varied ways: digital formats such as web series, podcasts, or even tweets become, through a change of medium, the basis of filmic adaptations; filmic experiments with interactive and virtual technologies generate new media combinations situated between film and computer game; transmedial extensions continue film and series universes in digital space in various ways; and intermedial references, by imitating a digital aesthetic, narrate not (only) about the other medium but often also through it. Phenomena belonging to this last intermedial category, namely thematisation, evocation, and simulation, will be analysed here in the context of the depiction of the internet. Owing to the ubiquity of digital media in everyday life, newer technologies have for some years played a central role as reference media in many films and series. Filmic internet applications are visualised above all as graphical user interfaces, that is, as the point of contact between user and technical device, while the representation of the hardware usually appears secondary. The focus of this article is therefore not the depiction of computers and smartphones but the staging of networked systems, spaces, and communication structures. In this connection, particular attention is paid to intermedial evocations of the other medium through the imitation of digital aesthetics by means of film's own repertoire of forms, to simulated screen and desktop films, and to the depiction of the predominantly text- and sign-based digital culture through the integration of writing into the film image. The investigation begins with a consideration of visual metaphors and strategies for making virtual spaces visible.
Fremde Welten - eigene Welten : zur kategorisierenden Rolle von Abweichungen für Fiktionalität
(2020)
Even texts and films whose depicted worlds stand beyond any temporal or spatial relation to the world we know are necessarily products of their respective time of origin. This holds all the more for plots set in Earth's future or on planets far removed from Earth. And because all these films and texts are products of a very specific time and a very specific culture, their more or less foreign worlds must likewise be examined for analogies with, and references to, the actual world of their respective time of origin. The foreign worlds can then, indeed must, be read not only literally but also figuratively, as signs. The degree of explicitness and concreteness of the respective references may vary from text to text and from film to film, but in most cases the analogies to extratextual or extrafilmic realities will suffice to neutralise both the temporal and the spatial difference: the foreign world of the text or film turns out to be merely a mirror image, distorted image, or wishful image of the real, extratextual or extrafilmic world. In fantastic-utopian fictions, the circumstance that the depicted worlds deviate from contemporary reality and yet must be related back to that contemporary reality is the decisive defining feature. Accordingly, the primary purpose of the other worlds, parallel worlds, and future worlds constructed in these genres, worlds that differentiate themselves from, i.e. deviate from, spatiotemporal reality, is to offer projection surfaces for addressing issues of precisely that reality.
In this way, either topics that can scarcely or not at all be addressed in their original context are made addressable, or topics that can be addressed in their context are, through isolation from that original context, placed in a new light, accentuated differently, and thereby possibly also sharpened and critiqued. A rational-logical justification of the phenomena by which the depicted world deviates from extrafilmic or extratextual reality is not strictly necessary; rather, its presence merely constitutes the criterion distinguishing science fiction from other fantastic-utopian genres. In what follows, Tim Burton's remake of "Planet of the Apes" (USA 2001) will be used to show how a foreign world can be read as one's own world. Subsequently, and this constitutes, as it were, the 'novum' of this contribution, it will be discussed, against the background of a set-theoretic definition of fictionality, factuality, and fake, to what extent it makes sense to use deviation from the real world as a distinguishing feature of fictionality.