Objectives: To evaluate the in vitro efficacy of surgical and non-surgical air-polishing for implant surface decontamination.
Material and methods: One hundred eighty implants were distributed among three differently angulated bone defect models (30°, 60°, 90°). Biofilm was imitated using indelible red color. Sixty implants were used for each defect, 20 of which were air-polished with each of three different glycine air powder abrasion (GAPA1–3) combinations. Within each group of 20 equally air-polished implants, surgical and non-surgical (with/without mucosa mask) procedures were simulated. All implants were photographed to determine the uncleaned surface. Changes in surface morphology were assessed using scanning electron micrographs (SEM).
Results: Cleaning efficacy did not differ significantly between GAPA1–3 for either surgical or non-surgical application. Within a cleaning method, significant differences (p < 0.001) occurred for GAPA2 between 30° (11.77 ± 2.73%) and 90° (7.25 ± 1.42%) in the non-surgical simulation and between 30° (8.26 ± 1.02%) and 60° (5.02 ± 0.84%) in the surgical simulation. The surgical use of air-polishing (6.68 ± 1.66%) was significantly superior (p < 0.001) to the non-surgical use (10.13 ± 2.75%). SEM micrographs showed no surface damage after use of GAPA.
Conclusions: Air-polishing is an efficient, surface protective method for surgical and non-surgical implant surface decontamination in this in vitro model. No method resulted in a complete cleaning of the implant surface.
Clinical relevance: Air-polishing appears to be promising for implant surface decontamination regardless of the device.
External linkages allow nascent ventures to access crucial resources during the process of new product development. Forming external linkages can substantially contribute to a venture's performance. However, little is known about the paths of external linkage formation, as well as the circumstances that drive the choice to pursue one path rather than another. This gap deserves further investigation, because we do not know whether insights developed for incumbent firms also apply to nascent ventures. To address this gap, we explore a novel dataset of 370 venture creation processes. Using sequence analyses based on optimal matching techniques and cluster analyses, we reveal that nascent ventures pursue one of four distinct paths of linkage formation activities during new product development. Contrary to the findings of the strategy literature, we find that if nascent ventures engage in external linkages at all, they do not combine exploration- and exploitation-oriented linkages but form either exploration- or exploitation-oriented linkages. Additional regression analyses highlight the circumstances that lead nascent ventures to pursue one path rather than another. Taken together, our analyses point out that resource scarcity constitutes an important factor shaping the linkage formation activities of nascent ventures. Accordingly, we show that nascent ventures tend not to optimize by adding complementary knowledge to the firm's knowledge base but rather to extend the existing knowledge base—a strategy which we call bricolage.
Methoden
(2020)
Review of: Akremi, Leila, Nina Baur, Hubert Knoblauch and Boris Traue (eds.): Handbuch Interpretativ forschen. Weinheim, Basel: Beltz Juventa 2018. 961 pages. ISBN: 978-3-7799-3126-3. Price: EUR 49.95.
Purpose: Neonatal surgery for abdominal wall defects is not performed in a centralized manner in Germany. The aim of this study was to investigate whether treatment of abdominal wall defects in Germany is as effective as reported internationally, despite the decentralized care.
Methods: All newborn patients who were clients of the major statutory health insurance company in Germany between 2009 and 2013 and who had a diagnosis of gastroschisis or omphalocele were included. Mortality during the first year of life was analysed.
Results: The 316 patients with gastroschisis were classified as simple (82%) or complex (18%) cases. The main associated anomalies in the 197 patients with omphalocele were trisomy 18/21 (8%), cardiac anomalies (32%) and anomalies of the urinary tract (10%). Overall mortality was 4% for gastroschisis and 16% for omphalocele. Significant factors for non-survival were birth weight below 1500 g for both groups, complex gastroschisis, volvulus and anomalies of the blood supply to the intestine in gastroschisis, and female gender, trisomy 18/21 and lung hypoplasia in omphalocele.
Conclusions: Despite the fact that paediatric surgical care is organized in a decentralized manner in Germany, the mortality rates for gastroschisis and omphalocele are equal to those reported in international data.
We study in detail the nuclear aspects of a neutron-star merger in which deconfinement to quark matter takes place. For this purpose, we make use of the Chiral Mean Field (CMF) model, an effective relativistic model that includes self-consistent chiral symmetry restoration and deconfinement to quark matter and, for this reason, predicts the existence of different degrees of freedom depending on the local density/chemical potential and temperature. We then use the out-of-chemical-equilibrium finite-temperature CMF equation of state in full general-relativistic simulations to analyze which regions of different QCD phase diagrams are probed and which conditions, such as strangeness and entropy, are generated when a strong first-order phase transition appears. We also investigate the amount of electrons present in different stages of the merger and discuss how far from chemical equilibrium they can be and, finally, draw some comparisons with matter created in supernova explosions and heavy-ion collisions.
Mongolian spots (MS) are congenital dermal conditions resulting from the migration of neural crest-derived melanocytes to the skin during embryogenesis. The incidence of MS varies considerably among populations. Morphologically, MS present as hyperpigmented maculae of varying size and form, ranging from round spots of 1 cm in diameter to extensive discolorations covering predominantly the lower back and buttocks. Due to their coloring, which also depends on the skin type, MS may mimic hematomas, thus posing a challenge to the physician conducting examinations of children in cases of suspected child abuse. In the present study, MS incidence and distribution, as well as skin types, were documented in a collective of 253 children examined on the basis of suspected child abuse. From these data, a classification scheme was derived to document MS and to help identify cases requiring recurrent examination for unambiguous interpretation of initial findings, alongside the main decisive factors for re-examination such as the general circumstances of the initial examination (e.g., experience of the examiner, lighting conditions) and the dermatological conditions of the patient (e.g., diaper rash).
There is limited knowledge on the prevalence and risk factors of diabetic retinopathy (DR) in dialysis patients. We investigated the association between diabetes mellitus, lipid-related biomarkers and retinopathy in hemodialysis patients. We reviewed 1,255 hemodialysis patients with type 2 diabetes mellitus (T2DM) who participated in the German Diabetes and Dialysis Study (4D Study). Associations between clinical and biochemical variables and diabetic retinopathy were examined by logistic regression. On average, patients were 66 ± 8 years of age, 54% were male and the HbA1c was 6.7% ± 1.3%. DR, found in 71% of the patients, was significantly and positively associated with fasting glucose, HbA1c, time on dialysis, age, systolic blood pressure, body mass index and the prevalence of other microvascular diseases (e.g. neuropathy). Unexpectedly, DR was associated with high HDL cholesterol and high apolipoproteins AI and AII. Patients with coronary artery disease were less likely to have DR. DR was not associated with gender, smoking, diastolic blood pressure, VLDL cholesterol, triglycerides, or LDL cholesterol. In summary, the prevalence of DR in patients with T2DM requiring hemodialysis is higher than in patients with T2DM who do not receive hemodialysis. DR was positively related to systolic blood pressure, glucometabolic control, and, paradoxically, HDL cholesterol. These data suggest that glucose and blood pressure control may delay the development of DR in patients with diabetes mellitus on dialysis.
The invariant differential cross section of inclusive ω(782) meson production at midrapidity (|y| < 0.5) in pp collisions at √s = 7 TeV was measured with the ALICE detector at the LHC over a transverse momentum range of 2 < pT < 17 GeV/c. The ω meson was reconstructed via its ω→π+π−π0 decay channel. The measured ω production cross section is compared to various calculations: PYTHIA 8.2 Monash 2013 describes the data, while PYTHIA 8.2 Tune 4C overestimates the data by about 50%. A recent NLO calculation, which includes a model describing the fragmentation of the whole vector-meson nonet, describes the data within uncertainties below 6 GeV/c, while it overestimates the data by up to 50% for higher pT. The ω/π0 ratio is in agreement with previous measurements at lower collision energies and the PYTHIA calculations. In addition, the measurement is compatible with transverse mass scaling within the measured pT range and the ratio is constant with Cω/π0 = 0.67 ± 0.03 (stat) ± 0.04 (sys) above a transverse momentum of 2.5 GeV/c.
Pion–kaon femtoscopy and the lifetime of the hadronic phase in Pb−Pb collisions at √sNN = 2.76 TeV
(2020)
In this paper, the first femtoscopic analysis of pion-kaon correlations at the LHC is reported. The analysis was performed on the Pb-Pb collision data at √sNN = 2.76 TeV recorded with the ALICE detector. The non-identical particle correlations probe the spatio-temporal separation between sources of different particle species as well as the average source size of the emitting system. The sizes of the pion and kaon sources increase with centrality, and pions are emitted closer to the centre of the system and/or later than kaons. This is naturally expected in a system with strong radial flow and is qualitatively reproduced by hydrodynamic models. The ALICE data on pion-kaon emission asymmetry are consistent with a calculation using (3+1)-dimensional viscous hydrodynamics coupled to THERMINATOR 2, a statistical hadronization, resonance propagation, and decay code, with an additional time delay between 1 and 2 fm/c for kaons. The delay can be interpreted as evidence for a significant hadronic rescattering phase in heavy-ion collisions at the LHC.
Voting advice applications (VAAs) are online tools providing voting advice to their users. This voting advice is based on the match between the answers of the user and the answers of several political parties to a common questionnaire on political attitudes. To visualize this match, VAAs use a wide array of visualisations, the most popular of which are two-dimensional political maps. These maps show the position of both the political parties and the user in the political landscape, allowing the user to understand both their own position and their relation to the political parties. To construct these maps, VAAs require scales that represent the main underlying dimensions of the political space. This makes the correct construction of these scales important if the VAA aims to provide accurate and helpful voting advice. This paper presents three criteria that assess whether a VAA achieves this aim. To illustrate their usefulness, these three criteria—unidimensionality, reliability and quality—are used to assess the scales in the cross-national EUVox VAA, a VAA designed for the European Parliament elections of 2014. Using techniques from Mokken scaling analysis and categorical principal component analysis to capture the metrics, I find that most scales show low unidimensionality and reliability. Moreover, even while designers can—and sometimes do—use certain techniques to improve their scales, these improvements are rarely enough to overcome all of the problems regarding unidimensionality, reliability and quality. This leaves certain problems for the designers of VAAs and designers of similar types of online surveys.
Assessment of individual therapeutic responses provides valuable information concerning treatment benefits in individual patients. We evaluated individual therapeutic responses as determined by the Disease Activity Score-28 joints critical difference for improvement (DAS28-dcrit) in rheumatoid arthritis (RA) patients treated with intravenous tocilizumab or comparator anti-tumor necrosis factor (TNF) agents. The previously published DAS28-dcrit value [DAS28 decrease (improvement) ≥ 1.8] was retrospectively applied to data from two studies of tocilizumab in RA, the 52-week ACT-iON observational study and the 24-week ADACTA randomized study. Data were compared within (not between) studies. DAS28 was calculated with erythrocyte sedimentation rate as the inflammatory marker. Stability of DAS28-dcrit responses and European League Against Rheumatism (EULAR) good responses was determined by evaluating repeated responses at subsequent timepoints. A logistic regression model was used to calculate p values for differences in response rates between active agents. Patient-reported outcomes (PROs; pain, global health, function, and fatigue) in DAS28-dcrit responder versus non-responder groups were compared with an ANCOVA model. DAS28-dcrit individual response rates were 78.2% in tocilizumab-treated patients and 58.2% in anti-TNF-treated patients at week 52 in the ACT-iON study (p = 0.0001) and 90.1% versus 59.1% at week 24 in the ADACTA study (p < 0.0001). DAS28-dcrit responses showed greater stability over time (up to 52 weeks) than EULAR good responses. For both active treatments, DAS28-dcrit responses were associated with statistically significant improvements in mean PRO values compared with non-responders. The DAS28-dcrit response criterion provides robust assessments of individual responses to RA therapy and may be useful for discriminating between active agents in clinical studies and guiding treat-to-target decisions in daily practice.
Using a structural life-cycle model, we quantify the long-term impact of school closures during the Corona crisis on children affected at different ages and coming from households with different parental characteristics. In the model, public investment through schooling is combined with parental time and resource investments in the production of child human capital at different stages in the children's development process. We quantitatively characterize both the long-term earnings consequences on children from a Covid-19 induced loss of schooling, as well as the associated welfare losses. Due to self-productivity in the human capital production function, skill attainment at a younger stage of the life cycle raises skill attainment at later stages, and thus younger children are hurt more by the school closures than older children. We find that parental reactions reduce the negative impact of the school closures, but do not fully offset it. The negative impact of the crisis on children's welfare is especially severe for those with parents with low educational attainment and low assets. The school closures themselves are primarily responsible for the negative impact of the Covid-19 shock on the long-run welfare of the children, with the pandemic-induced income shock to parents playing a secondary role.
The global polarization of the Λ and Λ̄ hyperons is measured for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV recorded with the ALICE detector at the LHC. The results are reported differentially as a function of collision centrality and the hyperon's transverse momentum (pT) for the centrality range 5-50%, 0.5 < pT < 5 GeV/c, and rapidity |y| < 0.5. The hyperon global polarization averaged for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV is found to be consistent with zero, ⟨PH⟩ (%) ≈ 0.01 ± 0.06 (stat.) ± 0.03 (syst.), in the collision centrality range 15-50%, where the largest signal is expected. The results are compatible with expectations based on an extrapolation from measurements at lower collision energies at RHIC, hydrodynamical model calculations, and empirical estimates based on the collision-energy dependence of directed flow, all of which predict global polarization values at LHC energies of the order of 0.01%.
The global polarization of the Λ and Λ̄ hyperons is measured for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV recorded with the ALICE detector at the LHC. The results are reported differentially as a function of collision centrality and the hyperon's transverse momentum (pT) for the centrality range 5-50%, 0.5 < pT < 5 GeV/c, and rapidity |y| < 0.5. The hyperon global polarization averaged for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV is found to be consistent with zero, ⟨PH⟩ (%) ≈ −0.01 ± 0.05 (stat.) ± 0.03 (syst.), in the collision centrality range 15-50%, where the largest signal is expected. The results are compatible with expectations based on an extrapolation from measurements at lower collision energies at RHIC, hydrodynamical model calculations, and empirical estimates based on the collision-energy dependence of directed flow, all of which predict global polarization values at LHC energies of the order of 0.01%.
The inclusive J/ψ production in Pb-Pb collisions at the center-of-mass energy per nucleon pair √sNN = 5.02 TeV, measured with the ALICE detector at the CERN LHC, is reported. The J/ψ meson is reconstructed via the dimuon decay channel at forward rapidity (2.5 < y < 4) down to zero transverse momentum. The suppression of the J/ψ yield in Pb-Pb collisions with respect to binary-scaled pp collisions is quantified by the nuclear modification factor (RAA). The RAA at √sNN = 5.02 TeV is presented and compared with previous measurements at √sNN = 2.76 TeV as a function of the centrality of the collision, and of the J/ψ transverse momentum and rapidity. The inclusive J/ψ RAA shows a suppression increasing toward higher pT, with a steeper dependence for central collisions. The modification of the J/ψ average pT and pT² is also studied. Comparisons with the results of models based on a transport equation and on statistical hadronization are also carried out.
The production yield of the Λ(1520) baryon resonance is measured at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV with the ALICE detector at the LHC. The measurement is performed in the Λ(1520)→pK− (and charge conjugate) hadronic decay channel as a function of the transverse momentum (pT) and collision centrality. The pT-integrated production rate of Λ(1520) relative to Λ in central collisions is suppressed by about a factor of 2 with respect to peripheral collisions. This is the first observation of the suppression of a baryonic resonance at the LHC and the first 3σ evidence of Λ(1520) suppression within a single collision system. The measured Λ(1520)/Λ ratio in central collisions is smaller than the value predicted by the statistical hadronisation model calculations. The shape of the measured pT distribution and the centrality dependence of the suppression are reproduced by the EPOS3 Monte Carlo event generator. The measurement adds further support to the formation of a dense hadronic phase in the final stages of the evolution of the fireball created in heavy-ion collisions, lasting long enough to cause a significant reduction in the observable yield of short-lived resonances.
In bioengineering, scaffold proteins have been increasingly used to recruit molecules to parts of a cell, or to enhance the efficacy of biosynthetic or signalling pathways. For example, scaffolds can be used to make weak or non-immunogenic small molecules immunogenic by attaching them to the scaffold, in this role called carrier. Here, we present the dodecin from Mycobacterium tuberculosis (mtDod) as a new scaffold protein. MtDod is a homododecameric complex of spherical shape, high stability and robust assembly, which allows the attachment of cargo at its surface. We show that mtDod, either directly loaded with cargo or equipped with domains for non-covalent and covalent loading of cargo, can be produced recombinantly in high quantity and quality in Escherichia coli. Fusions of mtDod with proteins of up to four times the size of mtDod, e.g. with monomeric superfolder green fluorescent protein creating a 437 kDa dodecamer, were successfully purified, demonstrating mtDod's ability to function as a recruitment hub. Further, mtDod variants equipped with SYNZIP and SpyCatcher domains for the post-translational recruitment of cargo were prepared, of which the mtDod/SpyCatcher system proved particularly useful. In a case study, we finally show that mtDod-peptide fusions allow the production of antibodies against human heat shock proteins and the C-terminus of heat shock cognate 70 interacting protein (CHIP).
Two-particle correlation functions were measured for pp̄, pΛ̄, p̄Λ, and ΛΛ̄ pairs in Pb-Pb collisions at √sNN = 2.76 TeV and √sNN = 5.02 TeV recorded by the ALICE detector. From a simultaneous fit to all obtained correlation functions, real and imaginary components of the scattering lengths, as well as the effective ranges, were extracted for combined pΛ̄ and p̄Λ pairs and, for the first time, for ΛΛ̄ pairs. Effective averaged scattering parameters for heavier baryon-antibaryon pairs, not measured directly, are also provided. The results reveal similarly strong interaction between measured baryon-antibaryon pairs, suggesting that they all annihilate in the same manner at the same pair relative momentum k*. Moreover, the reported significant non-zero imaginary part and negative real part of the scattering length provide motivation for future baryon-antibaryon bound-state searches.
Introduction: Affective disorders are a major global burden, with approximately 15% of people worldwide suffering from some form of affective disorder. In most patients experiencing their first depressive episode, it cannot be distinguished whether the episode is due to bipolar disorder (BD) or major depressive disorder (MDD). Valid fluid biomarkers able to discriminate between the two disorders in a clinical setting are not yet available.
Material and Methods: Seventy depressed patients suffering from BD (bipolar I and II subtypes) and 42 patients with MDD were recruited, and blood samples were taken for proteomic analyses after 8 h of fasting. Proteomic profiles were analyzed using the Multiplex Immunoassay platform from Myriad Rules Based Medicine (Myriad RBM; Austin, Texas, USA). Human DiscoveryMAP™ was used to measure the concentration of various proteins, peptides, and small molecules. A multivariate predictive model was subsequently constructed to differentiate between BD and MDD.
Results: Based on the various proteomic profiles, the algorithm could discriminate depressed BD patients from MDD patients with an accuracy of 67%.
Discussion: The results of this preliminary study suggest that future discrimination between bipolar and unipolar depression in a single case could be possible, using predictive biomarker models based on blood proteomic profiling.
The genus Ebolavirus comprises some of the deadliest viruses for primates and humans, and associated disease outbreaks are increasing in Africa. Different evidence suggests that bats are putative reservoir hosts and play a major role in the transmission cycle of these filoviruses. Thus, detailed knowledge about their distribution might improve risk estimations of where future disease outbreaks might occur. A MaxEnt niche modelling approach based on climatic variables and land cover was used to investigate the potential distribution of 9 bat species associated with the Zaire ebolavirus. This viral species has led to major Ebola outbreaks in Africa and is known for causing high mortalities. Modelling results suggest suitable areas mainly near the coasts of West Africa with extensions into Central Africa, where almost all of the 9 species studied find suitable habitat conditions. Previous spillover events and outbreak sites of the virus are covered by the modelled distribution of 3 bat species that have tested positive for the virus not only by serology but also by PCR. Modelling the habitat suitability of the bats is an important step that can benefit public information campaigns and may ultimately help control future outbreaks of the disease.
First record of the orchid bee Euglossa dilemma (Hymenoptera: Apidae) in Hispaniola, the Antilles
(2020)
The occurrence of the orchid bee Euglossa dilemma Bembé and Eltz (Hymenoptera: Apidae) is recorded for the first time for the island of Hispaniola in the Greater Antilles. Males were observed visiting varieties of sweet basil plants (Ocimum basilicum Linnaeus (Lamiaceae)) to obtain fragrances used during courtship and reproduction. Our observations showed that the species is established in Hispaniola and that it does not require the presence of orchids for reproductive success, being able to adapt to new plant resources it finds in the areas it colonizes. These observations correspond to what was found in Florida, United States, where Euglossa dilemma was also recently introduced. It is not clear how the species was introduced to Hispaniola, but Euglossa dilemma is clearly an adventive species that is colonizing the Antilles in addition to peninsular Florida.
One of the key challenges for nuclear physics today is to understand from first principles the effective interaction between hadrons with different quark content. First successes have been achieved using techniques that solve the dynamics of quarks and gluons on discrete space-time lattices. Experimentally, the dynamics of the strong interaction have been studied by scattering hadrons off each other. Such scattering experiments are difficult or impossible for unstable hadrons, and so high-quality measurements exist only for hadrons containing up and down quarks. Here we demonstrate that measuring correlations in momentum space between hadron pairs produced in ultrarelativistic proton-proton collisions at the CERN Large Hadron Collider (LHC) provides a precise method with which to obtain the missing information on the interaction dynamics between any pair of unstable hadrons. Specifically, we discuss the case of the interaction of baryons containing strange quarks (hyperons). We demonstrate how, using precision measurements of p-Ω baryon correlations, the effect of the strong interaction for this hadron-hadron pair can be studied with precision similar to, and compared with, predictions from lattice calculations. The large number of hyperons identified in proton-proton collisions at the LHC, together with an accurate modelling of the small (approximately one femtometre) inter-particle distance and exact predictions for the correlation functions, enables a detailed determination of the short-range part of the nucleon-hyperon interaction.
One of the big challenges for nuclear physics today is to understand, starting from first principles, the effective interaction between hadrons with different quark content. First successes have been achieved utilizing techniques that solve the dynamics of quarks and gluons on discrete space-time lattices. Experimentally, the dynamics of the strong interaction have been studied by scattering hadrons off each other. Such scattering experiments are difficult or impossible for unstable hadrons, and hence high-quality measurements exist only for hadrons containing up and down quarks. In this work, we demonstrate that measuring correlations in momentum space between hadron pairs produced in ultrarelativistic proton–proton collisions at the CERN LHC provides a precise method to obtain the missing information on the interaction dynamics between any pair of unstable hadrons. Specifically, we discuss the case of the interaction of baryons containing strange quarks (hyperons). We demonstrate for the first time how, using precision measurements of p–Ω− correlations, the effect of the strong interaction for this hadron–hadron pair can be studied and compared with predictions from lattice calculations.
The Sparks of Redemption: A Journal on the Translation of Olga Tokarczuk's Novel "Die Jakobsbücher" (The Books of Jacob)
(2020)
"For our translation, all of these considerations were significant insofar as we had to decide which cultural-historical placements we would create and which connotations we wanted to evoke - through the use of this or that particular word - and by what means it would be possible to also depict the processual nature of the story: the path that Jakob Frank and his company travel, in the sense of physical as well as cultural topography."
Background: A prototype of a noninvasive glucometer combining skin excitation by a mid-infrared quantum cascade laser with photothermal detection was evaluated in glucose correlation tests involving 100 volunteers (41 people with diabetes and 59 healthy people).
Methods: Invasive reference measurements using a clinical glucometer and noninvasive measurements at a finger of the volunteer were simultaneously recorded in five-minute intervals starting from fasting glucose values for healthy subjects (low glucose values for diabetes patients) over a two-hour period. A glucose range from >50 to <350 mg/dL was covered. Machine learning algorithms were used to predict glucose values from the photothermal spectra. Data were analyzed for the average percent disagreement of the noninvasive measurements with the clinical reference measurement and visualized in consensus error grids.
Results: 98.8% (full data set) and 99.1% (improved algorithm) of glucose results were within Zones A and B of the grid, indicating the highest accuracy level. Less than 1% of the data were in Zone C, and none in Zone D or E. The mean and median percent differences between the invasive as a reference and the noninvasive method were 12.1% and 6.5%, respectively, for the full data set, and 11.3% and 6.4% with the improved algorithm.
Conclusions: Our results demonstrate that noninvasive blood glucose analysis combining mid-infrared spectroscopy and photothermal detection is feasible and comparable in accuracy with minimally invasive glucometers and finger pricking devices which use test strips. As a next step, a handheld version of the present device for diabetes patients is being developed.
Obstructive sleep apnea (OSA) is emerging as a global health epidemic, particularly due to the obesity pandemic. However, comprehensive prevalence data are still lacking, and global OSA research has not yet been structurally evaluated. Using the latest comprehensive age/gender-specific BMI and obesity data, a global landscape estimating the risk/burden of OSA was created. Results were presented in relation to an in-depth analysis of OSA research and countries' socioeconomic/scientific background. While the USA, Canada, and Japan are the highest-publishing countries on OSA, Iceland, Greece, and Israel appeared at the forefront when relating scientific output to socioeconomic parameters. Conversely, China, India, and Russia showed relatively low performance in these relations. Analysis of the estimated population at risk (EPR) of OSA showed the USA, China, India, and Brazil as the leading countries. Although the EPR and OSA research correlated strongly, major regional discrepancies between the estimated demand and actual research performance were identified, mainly in, but not limited to, developing nations. Our study highlights regional challenges/imbalances in the global activity on OSA and allows targeted measures to mitigate the burden of undiagnosed/untreated OSA. Furthermore, the inclusion of disadvantaged countries in international collaborations could stimulate local research efforts and provide valuable insights into the regional epidemiology of OSA.
Background: Fabry disease (FD), the second most prevalent lysosomal storage disorder, is classified as a rare disease. It often leads to significant quality-of-life impairments and premature death. Many cases remain undiagnosed due to the disease's rarity and heterogeneity. Furthermore, costs related to treatment often constitute a substantial financial burden for patients and health systems. While its epidemiology is still unclear, newborn screenings suggest that its actual prevalence is significantly higher than previously suspected. Methods: Based on well-established methodologies, this study gives an overview of the background and development of FD-related research and provides a critical view of future needs. Results: On the grounds of the benchmarking findings, an increasing research activity on FD can be observed. The most active publishing countries are the USA, several European countries, Japan, Taiwan, and South Korea. In general, high-income countries publish comparably more on FD than low- or middle-income economies. The countries' financial and infrastructural backgrounds are revealed as crucial factors for FD research activity. Conclusions: Overall, there is a need to foster FD research infrastructure in developing and emerging countries, with a focus on cost-intensive genetic research that is independent of the economic interests of big pharmaceutical companies.
Purpose: The Masquelet technique for the treatment of large bone defects is a two-stage procedure based on an induced membrane. The induced membrane differs significantly from mature periosteum; however, both play a crucial role in bone regeneration. As part of a histological and radiological post-evaluation of an earlier project, we analyzed the influence of the granule size of the bone void filler Herafill® on the development of periosteal regrowth in a critical size defect.
Methods: We compared three different sizes of Herafill® granules (Heraeus Medical GmbH, Wehrheim) in vivo in a rat femoral critical size defect (10 mm) treated with the induced membrane technique. After 8 weeks healing time, femurs were harvested and taken for histological and radiological analysis.
Results: A significantly increased regrowth of periosteum into the defect was found when small granules were used. Large granules showed a significantly increased occurrence of bone capping. Small granules led to a significant increase in callus formation in the vicinity of the membrane.
Conclusion: The size of Herafill® granules has significant impact on the development of periosteal-like structures around the defect using Masquelet’s induced membrane technique. Small granules show significantly increased regrowth of periosteum and improved bone formation adjacent to the induced membrane.
Needlestick injuries: a density-equalizing mapping and socioeconomic analysis of the global research
(2020)
Background: Needlestick injuries (NSI) have caused a deleterious effect on the physical and mental health of millions of health-care workers over the past decades, being responsible for occupational infections with viruses such as HIV or hepatitis C. Despite this heavy burden of disease, no concise studies on the global research landscape have been published so far.
Methods: We used the New Quality and Quantity Indices in Science platform to analyze global NSI research (n = 2987 articles) over the past 115 years using the Web of Science and parameters such as global versus country-specific research activities, semi-qualitative issues, and socioeconomic figures.
Results: Density-equalizing mapping showed that although a total of n = 106 countries participated in NSI research, large parts of Africa and South America were almost invisible regarding global participation in NSI research. Average citation rate (cr) analysis indicated a high rate for Switzerland (cr = 25.1), Italy (cr = 23.5), and Japan (cr = 19.2). Socioeconomic analysis revealed that the UK had the highest quotient QGDP, with 0.13 NSI-specific publications per billion US-$ of gross domestic product (GDP), followed by South Africa (QGDP = 0.12). Temporal analysis of HIV versus hepatitis research indicated that NSI-HIV research culminated in the early 1990s, whereas NSI-hepatitis research increased over the observed period from the 1980s until the last decade.
Conclusion: Although NSI research activity is generally increasing, the growth is asymmetrical from a global viewpoint. International strategies should be pursued that focus on NSI in non-industrialized areas of the world.
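The socioeconomic quotient QGDP used above is simply the number of subject-specific publications divided by a country's GDP in billions of US-$. A minimal sketch, with hypothetical input figures chosen only so the result matches the reported UK value of 0.13:

```python
def qgdp(publications, gdp_billion_usd):
    """Socioeconomic research quotient: subject-specific
    publications per billion US-$ of gross domestic product."""
    return publications / gdp_billion_usd

# Hypothetical illustration (not the study's raw data):
# a country with 364 NSI publications and a GDP of 2,800 billion US-$.
print(round(qgdp(364, 2800), 2))  # → 0.13
```

Normalizing publication counts by GDP in this way is what allows smaller economies such as South Africa to rank alongside large research producers.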
Stabilization exercise (SE) is an evidence-based treatment for chronic non-specific low back pain (LBP). However, the optimal dose-response relationship for maximal treatment success is still unknown. The purpose was to systematically review the dose-response relationship of stabilization exercise on pain and disability in patients with chronic non-specific LBP. A systematic review with meta-regression was conducted (Pubmed, Web of Knowledge, Cochrane). Eligibility criteria were RCTs on patients with chronic non-specific LBP, written in English/German and adopting a longitudinal core-specific/stabilizing/motor control exercise intervention with at least one outcome for pain intensity and/or disability. Meta-regressions (dependent variable: effect sizes (Cohen's d) of the interventions for pain and for disability; independent variables: training characteristics (duration, frequency, time per session)), controlling for (low) study quality (PEDro) and (low) sample sizes (n), were conducted to reveal the optimal dose required for therapy success. From the 3,415 studies initially identified, 50 studies (n = 2,786 LBP patients) were included. N = 1,239 patients received SE. Training duration was 7.0 ± 3.3 weeks, training frequency was 3.1 ± 1.8 sessions per week, with a mean training time of 44.6 ± 18.0 min per session. The meta-regressions' mean effect size was d = 1.80 (pain) and d = 1.70 (disability). Total R2 was 0.445 and 0.17. Moderate-quality evidence (R2 = 0.231) revealed that a training duration of 20 to 30 min elicited the largest effect (both for pain and disability, logarithmic association). Low-quality evidence (R2 = 0.125) revealed that training 3 to 5 times per week led to the largest effect of SE in patients with chronic non-specific LBP (inverted U-shaped association).
In patients with non-specific chronic LBP, stabilization exercise with a training frequency of 3 to 5 times per week (Grade C) and a training time of 20 to 30 min per session (Grade A) elicited the largest effect on pain and disability.
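The effect sizes that serve as the dependent variable in the meta-regressions above are Cohen's d values. A minimal sketch of the standard pooled-standard-deviation form of Cohen's d follows; the pain scores are invented for illustration and do not come from any of the included trials.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d with pooled standard deviation, the effect-size
    measure typically used as the dependent variable in
    meta-regressions of trial outcomes."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical pain scores (0-10 scale) before vs. after an SE programme.
pre  = [7, 6, 8, 7, 6, 7]
post = [4, 3, 5, 4, 3, 4]
print(round(cohens_d(pre, post), 2))
```

By convention, d around 0.8 or larger is considered a large effect, which puts the reported d = 1.80 (pain) and d = 1.70 (disability) well into that range.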
Ice particle activation and evolution have important atmospheric implications for cloud formation, initiation of precipitation and radiative interactions. The initial formation of atmospheric ice by heterogeneous ice nucleation requires the presence of a nucleating seed, an ice-nucleating particle (INP), to facilitate its first emergence. Unfortunately, only a few long-term measurements of INPs exist, and as a result, knowledge about geographic and seasonal variations of INP concentrations is sparse. Here we present data from nearly 2 years of INP measurements from four stations in different regions of the world: the Amazon (Brazil), the Caribbean (Martinique), central Europe (Germany) and the Arctic (Svalbard). The sites feature diverse geographical climates and ecosystems that are associated with dissimilar transport patterns, aerosol characteristics and levels of anthropogenic impact (ranging from near pristine to mostly rural). Interestingly, observed INP concentrations, which represent measurements in the deposition and condensation freezing modes, do not differ greatly from site to site but usually fall well within the same order of magnitude. Moreover, short-term variability overwhelms all long-term trends and/or seasonality in the INP concentration at all locations. An analysis of the frequency distributions of INP concentrations suggests that INPs tend to be well mixed and reflective of large-scale air mass movements. No universal physical or chemical parameter could be identified to be a causal link driving INP climatology, highlighting the complex nature of the ice nucleation process. Amazonian INP concentrations were mostly unaffected by the biomass burning season, even though aerosol concentrations increase by a factor of 10 from the wet to dry season. Caribbean INPs were positively correlated to parameters related to transported mineral dust, which is known to increase during the Northern Hemisphere summer. 
A wind sector analysis revealed the absence of an anthropogenic impact on average INP concentrations at the site in central Europe. Likewise, no Arctic haze influence was observed on INPs at the Arctic site, where low concentrations were generally measured. We consider the collected data to be a unique resource for the community that illustrates some of the challenges and knowledge gaps of the field in general, while specifically highlighting the need for more long-term observations of INPs worldwide.
Bioaerosols are considered to play a relevant role in atmospheric processes, but their sources, properties, and spatiotemporal distribution in the atmosphere are not yet well characterized. In the Amazon Basin, primary biological aerosol particles (PBAPs) account for a large fraction of coarse particulate matter, and fungal spores are among the most abundant PBAPs in this area as well as in other vegetated continental regions. Furthermore, PBAPs could also be important ice nuclei in Amazonia. Measurement data on the release of fungal spores under natural conditions, however, are sparse. Here we present an experimental approach to analyze and quantify the spore release from fungi and other spore-producing organisms under natural and laboratory conditions. For measurements under natural conditions, the samples were kept in their natural environment and a setup was developed to estimate the spore release numbers and sizes as well as the microclimatic factors temperature and air humidity in parallel to the mesoclimatic parameters net radiation, rain, and fog occurrence. For experiments in the laboratory, we developed a cuvette to assess the particle size and number of newly released fungal spores under controlled conditions, simultaneously measuring temperature and relative humidity inside the cuvette. Both approaches were combined with bioaerosol sampling techniques to characterize the released particles using microscopic methods. For fruiting bodies of the basidiomycetous species Rigidoporus microporus, the model species with which these techniques were tested, the highest frequency of spore release occurred in the range from 62 % to 96 % relative humidity. The results obtained for this model species reveal characteristic spore release patterns linked to environmental or experimental conditions, indicating that the moisture status of the sample may be a regulating factor, whereas temperature and light seem to play a minor role for this species.
The presented approach enables systematic studies aimed at the quantification and validation of spore emission rates and inventories, which can be applied to a regional mapping of cryptogamic organisms under given environmental conditions.
Purpose: In the clinical routine, detection of focal cortical dysplasia (FCD) by visual inspection is challenging. Still, information about the presence and location of FCD is highly relevant for prognostication and treatment decisions. Therefore, this study aimed to develop, describe and test a method for the calculation of synthetic anatomies using multiparametric quantitative MRI (qMRI) data and surface-based analysis, which allows for an improved visualization of FCD.
Materials and Methods: Quantitative T1-, T2- and PD-maps and conventional clinical datasets of patients with FCD and epilepsy were acquired. Tissue segmentation and delineation of the border between white matter and cortex was performed. In order to detect blurring at this border, a surface-based calculation of the standard deviation of each quantitative parameter (T1, T2, and PD) was performed across the cortex and the neighboring white matter for each cortical vertex. The resulting standard deviations combined with measures of the cortical thickness were used to enhance the signal of conventional FLAIR-datasets. The resulting synthetically enhanced FLAIR-anatomies were compared with conventional MRI-data utilizing regions of interest based analysis techniques.
Results: The synthetically enhanced FLAIR-anatomies showed higher signal levels than conventional FLAIR-data at the FCD sites (p = 0.005). In addition, the enhanced FLAIR-anatomies exhibited higher signal levels at the FCD sites than in the corresponding contralateral regions (p = 0.005). However, false positive findings occurred, so careful comparison with conventional datasets is mandatory.
Conclusion: Synthetically enhanced FLAIR-anatomies resulting from surface-based multiparametric qMRI-analyses have the potential to improve the visualization of FCD and, accordingly, the treatment of the respective patients.
Cortical changes in epilepsy patients with focal cortical dysplasia: new insights with T2 mapping
(2020)
Background: In epilepsy patients with focal cortical dysplasia (FCD) as the epileptogenic focus, global cortical signal changes are generally not visible on conventional MRI. However, epileptic seizures or antiepileptic medication might affect normal-appearing cerebral cortex and lead to subtle damage. Purpose: To investigate cortical properties outside FCD regions with T2-relaxometry. Study Type: Prospective study. Subjects: Sixteen patients with epilepsy and FCD and 16 age-/sex-matched healthy controls. Field Strength/Sequence: 3T, fast spin-echo T2-mapping, fluid-attenuated inversion recovery (FLAIR), and synthetic T1-weighted magnetization-prepared rapid acquisition of gradient-echoes (MP-RAGE) datasets derived from T1-maps. Assessment: Reconstruction of the white matter and cortical surfaces based on MP-RAGE structural images was performed to extract cortical T2 values, excluding lesion areas. Three independent raters confirmed that morphological cortical/juxtacortical changes in the conventional FLAIR datasets outside the FCD areas were definitely absent for all patients. Averaged global cortical T2 values were compared between groups. Furthermore, group comparisons of regional cortical T2 values were performed using a surface-based approach. Tests for correlations with clinical parameters were carried out. Statistical Tests: General linear model analysis, permutation simulations, paired and unpaired t-tests, and Pearson correlations. Results: Cortical T2 values were increased outside FCD regions in patients (83.4 ± 2.1 msec, control group 81.4 ± 2.1 msec, P = 0.01). T2 increases were widespread, affecting mainly frontal, but also parietal and temporal regions of both hemispheres. Significant correlations were not observed (P ≥ 0.55) between cortical T2 values in the patient group and the number of seizures in the last 3 months or the number of anticonvulsive drugs in the medical history. 
Data Conclusion: Widespread increases in cortical T2 in FCD-associated epilepsy patients were found, suggesting that structural epilepsy in patients with FCD is not only a symptom of a focal cerebral lesion, but also leads to global cortical damage not visible on conventional MRI. Evidence Level: 2. Technical Efficacy: Stage 3. J. MAGN. RESON. IMAGING 2020;52:1783–1789.
Background: Austria has recently been embroiled in the complex debate on the legalization of measures to end life prematurely. Empirical data on end-of-life decisions made by Austrian physicians barely exists. This study is the first in Austria aimed at finding out how physicians generally approach and make end-of-life therapy decisions.
Methods: The European end-of-life decisions (EURELD) questionnaire, translated and adapted by Schildmann et al., was used to conduct this cross-sectional postal survey. Questions on palliative care training, legal issues, and use of and satisfaction with palliative care were added. All Austrian specialists in hematology and oncology, a representative sample of doctors specialized in internal medicine, and a sample of general practitioners, were invited to participate in this anonymous postal survey.
Results: Five hundred forty-eight questionnaires (response rate: 10.4%) were evaluated. 88.3% of participants had treated a patient who had died in the previous 12 months. 23% of respondents had an additional qualification in palliative medicine. The cause of death in 53.1% of patients was cancer, and 44.8% died at home. In 86.3% of cases, pain and/or symptom relief had been intensified. Further treatment had been withheld by 60.0%, and an existing treatment discontinued by 49.1% of respondents. In five cases, the respondents had prescribed, provided or administered a drug which had resulted in death. 51.3% of physicians said they would never carry out physician-assisted suicide (PAS), while 30.3% could imagine doing so under certain conditions. 38.5% of respondents supported the current prohibition of PAS, 23.9% opposed it, and 33.2% were undecided. 52.4% of physicians felt the legal situation with respect to measures to end life prematurely was ambiguous. An additional qualification in palliative medicine had no influence on measures taken, or attitudes towards PAS.
Conclusions: The majority of doctors perform symptom control in terminally ill patients. PAS is frequently requested but rarely carried out. Attending physicians felt the legal situation was ambiguous. Physicians should therefore receive training in current legislation relating to end-of-life choices and medical decisions. The data collected in this survey will help political decision-makers provide the necessary legal framework for end-of-life medical care.
Objectives: To review systematically the past 10 years of research activity into the healthcare experiences (HCX) of patients with chronic heart failure (CHF) in Germany, in order to identify research foci and gaps and make recommendations for future research. Design: In this scoping review, six databases and grey literature sources were systematically searched for articles reporting HCX of patients with CHF in Germany that were published between 2008 and 2018. Extracted results were summarised using quantitative and qualitative descriptive analysis. Results: Of the 18 studies (100%) that met the inclusion criteria, most were observational studies (60%) that evaluated findings quantitatively (60%). HCX were often concerned with patient information, global satisfaction as well as relationships and communication between patients and providers and generally covered ambulatory care, hospital care and rehabilitation services. Overall, the considerable heterogeneity of the included studies’ outcomes only permitted relatively trivial levels of synthesis. Conclusion: In Germany, research on HCX of patients with CHF is characterised by missing, inadequate and insufficient information. Future research would benefit from qualitative analyses, evidence syntheses, longitudinal analyses that investigate HCX throughout the disease trajectory, and better reporting of sociodemographic data. Furthermore, research should include studies that are based on digital data, reports of experiences gained in under-investigated yet patient-relevant healthcare settings and include more female subjects.
Objectives: The ongoing coronavirus pandemic is challenging, especially in severely affected patients who require intubation and sedation. Although the potential benefits of sedation with volatile anesthetics in coronavirus disease 2019 patients are currently being discussed, the use of isoflurane in patients with coronavirus disease 2019–induced acute respiratory distress syndrome has not yet been reported. Design: We performed a retrospective analysis of critically ill patients with hypoxemic respiratory failure requiring mechanical ventilation. Setting: The study was conducted with patients admitted between April 4 and May 15, 2020 to our ICU. Patients: We included five patients who were previously diagnosed with severe acute respiratory syndrome coronavirus 2 infection. Intervention: Even with high doses of several IV sedatives, the targeted level of sedation could not be achieved. Therefore, the sedation regimen was switched to inhalational isoflurane. Clinical data were recorded using a patient data management system. We recorded demographical data, laboratory results, ventilation variables, sedative dosages, sedation level, prone positioning, duration of volatile sedation and outcomes. Measurements & Main Results: Mean age (four men, one woman) was 53.0 (± 12.7) years. The mean duration of isoflurane sedation was 103.2 (± 66.2) hours. Our data demonstrate a substantial improvement in the oxygenation ratio when using isoflurane sedation. Deep sedation as assessed by the Richmond Agitation and Sedation Scale was rapidly and closely controlled in all patients, and the subsequent discontinuation of IV sedation was possible within the first 30 minutes. No adverse events were detected. Conclusions: Our findings demonstrate the feasibility of isoflurane sedation in five patients suffering from severe coronavirus disease 2019 infection. Volatile isoflurane was able to achieve the required deep sedation and reduced the need for IV sedation.
Purpose: To investigate cortical thickness and cortical quantitative T2 values as imaging markers of microstructural tissue damage in patients with unilateral high-grade internal carotid artery occlusive disease (ICAOD).
Methods: A total of 22 patients with ≥70% stenosis (mean age 64.8 years) and 20 older healthy control subjects (mean age 70.8 years) underwent structural magnetic resonance imaging (MRI) and high-resolution quantitative (q)T2 mapping. Generalized linear mixed models (GLMM) controlling for age and white matter lesion volume were employed to investigate the effect of ICAOD on imaging parameters of cortical microstructural integrity in multivariate analyses.
Results: There was a significant main effect (p < 0.05) of the group (patients/controls) on both cortical thickness and cortical qT2 values with cortical thinning and increased cortical qT2 in patients compared to controls, irrespective of the hemisphere. The presence of upstream carotid stenosis had a significant main effect on cortical qT2 values (p = 0.01) leading to increased qT2 in the poststenotic hemisphere, which was not found for cortical thickness. The GLMM showed that in general cortical thickness was decreased and cortical qT2 values were increased with increasing age (p < 0.05).
Conclusion: Unilateral high-grade carotid occlusive disease is associated with widespread cortical thinning and prolongation of cortical qT2, presumably reflecting hypoperfusion-related microstructural cortical damage similar to accelerated aging of the cerebral cortex. Cortical thinning and increase of cortical qT2 seem to reflect different aspects and different pathophysiological states of cortical degeneration. Quantitative T2 mapping might be a sensitive imaging biomarker for early cortical microstructural damage.
An important measure in pain research is the intensity of nociceptive stimuli and their cortical representation. However, there is evidence of different cerebral representations of nociceptive stimuli, including the finding that the cortical areas recruited during processing of intranasal nociceptive chemical stimuli extend beyond the traditional trigeminal areas. Therefore, the aim of this study was to investigate the major cerebral representations of stimulus intensity associated with intranasal chemical trigeminal stimulation. Trigeminal stimulation was achieved with carbon dioxide presented to the nasal mucosa. Using a single-blinded, randomized crossover design, 24 subjects received nociceptive stimuli with two different stimulation paradigms, depending on the just noticeable differences in the stimulus strengths applied. Stimulus-related brain activations were recorded using functional magnetic resonance imaging with an event-related design. Brain activations increased significantly with increasing stimulus intensity, with the largest cluster at the right Rolandic operculum and a global maximum in a smaller cluster at the left lower frontal orbital lobe. Region of interest analyses additionally supported an activation pattern correlated with the stimulus intensity at the piriform cortex as an area of special interest with the trigeminal input. The results support the piriform cortex, in addition to the secondary somatosensory cortex, as a major area of interest for stimulus strength-related brain activation in pain models using trigeminal stimuli. This makes both areas a primary objective to be observed in human experimental pain settings where trigeminal input is used to study effects of analgesics.
Introduction: From the beginning of the coronavirus pandemic until August 19, 2020, more than 21,989,366 cases had been reported worldwide, 228,495 of them in Germany alone, including 12,648 children aged 0–14. In many countries, the proportion of infected children in the total population is comparatively low; in addition, children often have no or milder symptoms and are less likely to transmit the pathogen to adults than the other way round. Based on the registration data in Frankfurt am Main, Germany, the symptoms of children in comparison with adults and the likely routes of transmission are presented below.
Materials and methods: The documentation of the mandatory reports includes personal data (name, date of birth, gender, place of residence), disease characteristics (date of report, date of onset of the disease, symptoms), possible contact persons (family, others) and, among other things, possible activity or care in children’s community facilities. All reports were reviewed, especially with regard to likely transmission routes.
Results: From March 1 to July 31, 2020, 1,977 infected people were reported, including 138 children between the ages of 0 and 14 years. Children had fewer and milder symptoms than adults. None of the children experienced severe respiratory symptoms or the need for ventilation. 62% of the children had no symptoms at all (19% adults), 5% of the children were hospitalized (24% adults), and none of the children died (3.8% adults).
After excluding a cluster of 34 children from refugee accommodations and 14 children from a parish, 78% of the remaining 90 children had been infected by an adult within the family, and only 4% were likely to have a reverse transmission route. In 5.5% of cases, transmission in a community facility was likely.
Discussion: The results of the registration data from Frankfurt am Main, Germany confirm the results published in other countries: Children are less likely to become infected, and if infected, their symptoms are less severe than in adults, and they are apparently not the main drivers of virus transmission. Therefore, scientific medical associations strongly recommend reopening schools.
Keystone mutualisms, such as corals, lichens or mycorrhizae, sustain fundamental ecosystem functions. Range dynamics of these symbioses are, however, inherently difficult to predict because host species may switch between different symbiont partners in different environments, thereby altering the range of the mutualism as a functional unit. Biogeographic models of mutualisms thus have to consider both the ecological amplitudes of various symbiont partners and the abiotic conditions that trigger symbiont replacement. To address this challenge, we here investigate 'symbiont turnover zones', defined as demarcated regions where symbiont replacement is most likely to occur, as indicated by overlapping abundances of symbiont ecotypes. Mapping the distribution of algal symbionts from two species of lichen-forming fungi along four independent altitudinal gradients, we detected an abrupt and consistent β-diversity turnover suggesting parallel niche partitioning. Modelling contrasting environmental response functions obtained from latitudinal distributions of algal ecotypes consistently predicted a confined altitudinal turnover zone. In all gradients this symbiont turnover zone is characterized by approximately 12°C average annual temperature and approximately 5°C mean temperature of the coldest quarter, marking the transition from Mediterranean to cool temperate bioregions. Integrating the conditions of symbiont turnover into biogeographic models of mutualisms is an important step towards a comprehensive understanding of biodiversity dynamics under ongoing environmental change.
The internet enters film in the most varied ways: digital formats such as web series, podcasts, or even tweets become, through a change of medium, the basis of filmic adaptations; filmic experiments with interactive and virtual technologies generate new media combinations situated between film and computer game; transmedia extensions continue film and series universes in digital space in various ways; and intermedial references, by imitating a digital aesthetic, narrate not (only) about the other medium but often also through it. Phenomena belonging to this last intermedial category (thematization, evocation, or simulation) are analyzed here in the context of the depiction of the internet. Owing to the ubiquity of digital media in everyday life, newer technologies have for some years played a central role as reference media in many films and series. Filmic internet applications are visualized above all as graphical user interfaces, as the interface between user and technical device; the representation of the hardware usually appears secondary. The focus of this article is therefore not on the depiction of computers and smartphones but on the staging of networked systems, spaces, and communication structures. In this context, particular attention is paid to intermedial evocations of the other medium through the imitation of digital aesthetics by means of film's repertoire of forms, to simulated screen and desktop films, and to the depiction of the predominantly writing- and sign-based digital culture through the integration of writing into the film image. The study begins with a consideration of visual metaphors and strategies for making virtual spaces visible.
Inscriptions are forms characterized by a particular medial disposition. What distinguishes inscriptions, apart from their close relation to a material carrier, is their peculiar position on the threshold between writing and image. [...] The Italian epigrapher Armando Petrucci captured the inscription's characteristic of exhibiting a word or a text as a visible sequence of signs in the concept of 'scrittura esposta'. [...] If one attempts to grasp more precisely the specific potency of the inscription touched on here, it makes sense to return first to the visual dimension. It is, one may assume, the inscription's ability to appear as an image that allows it to enter the viewer's gaze and to present itself as an exposed figure before the eyes. With this pictorial mode of appearance, the argument might continue, are associated aesthetic qualities of sensory vividness and presence that lend the inscription its characteristic expressive power. [...] This explanation, however, captures only one side of the inscription and its medial and reception-aesthetic constitution. What is special about the inscription is not exhausted by its character as an exhibited, exposed formation of signs. The inscription is not only 'esposta' but equally 'scrittura'. Its particular mode of design and effect therefore does not rest on its pictorial disposition alone. The efficacy of the inscription, so the thesis proposed here, derives from the fact that, even as it asserts itself as an exposed, striking, and widely visible figure, it at the same time preserves its character as writing and displays it no less clearly. Whoever contemplates an inscription beholds in its pictorial design at the same time the visual form of a text, of a linguistic utterance.
Through its design as 'scrittura', the inscription thus appears in a form invested in a specific way with moments of power and authority. For writing is the medium in which, in a tradition reaching from antiquity into the modern era, the law, the recorded and materialized 'voice of the sovereign', confronts us. What is special about the inscription thus seems, as a provisional conclusion, to lie in the fact that it links the media of image and writing in a specific way. Medial and aesthetic qualities are at work in it that belong partly to the image and partly to writing. On this interplay also rests the peculiar potential for effect associated with this form of utterance. In what follows, the aim is to explore this interplay of pictorial and scriptural aspects more closely and, against this background, to examine the significance and efficacy of inscriptional signs, particularly in political contexts.
The uncertainty in the field of contemporary art concerns not only the question of the quality of art but also that of the boundary between the art(work) and its respective outside. [...] Art that questions a conventional concept of the work (and is often rejected by the broad public) but is nonetheless situated and locatable and therefore, at least for the most part, recognizable as art will in the following be called Gegenwartskunst (contemporary art); art that is integrated into and intervenes in everyday life and is sometimes not perceived as art will be called Situationskunst (situational art). Contemporary art presupposes its autonomy and a clear boundary between art and non-art; situational art (which could be regarded as a radical variant and thus as part of contemporary art) sows doubt about the autonomy of art, even if it frequently uses, or 'must' use, that autonomy as an argument against appeals or encroachments by politics, religion or everyday reality. In both forms, which overlap in many cases, nothing is created any longer in the conventional sense ('poiesis'); rather, something is found or ultimately 'simply' done ('praxis'). In both cases nothing is self-evident any more: in reception it is, at least at first, unclear whether we are dealing with art at all. In other words, at the moment of visiting an exhibition we cannot rely on our sensory perceptions, our experience or our implicit (prior) knowledge if we want to know what we are dealing with and what it is all about. We therefore need, not least, explanations and elucidations (which can in turn congeal into implicit knowledge), and this is one reason why contemporary art could be of interest to comparative literature. More on this below. The terms Gegenwartskunst and Situationskunst cover a very wide range of phenomena.
What follows will therefore be a cursory sketch aimed primarily at those phenomena, and their commonalities, that are of interest to comparative literature. At its center stands not a detailed analysis and interpretation of individual phenomena but the question of what, with regard to the discipline of comparative literature, would be worth analyzing and interpreting. The phenomena and examples discussed below are in any case located on the periphery of the discipline, with all the disadvantages that working in peripheries entails.
Letters, the conversation of two absent parties, play a major role in many films. They are shown on screen or read aloud in voice-over; we see scenes of reading and writing that play with the ambiguity of the written word. According to Christina Bartz, the letter is "particularly compatible with film" "because of its communicative connection across temporal and spatial distances", since film likewise brings together what is spatially and temporally separated through montage. Unlike film, however, the letter is not a mass medium but a form of individual communication. Showing the medium of the letter, or replacing this historical medium with a more current one in a film, always also offers an opportunity for media reflection. Using two prominent examples, this contribution observes, first, how cinematic adaptations of letter-based literary texts handle letters and, second, how media reflection takes place through the theme of the letter. I present two melodramas in which letters, and the recognition and misrecognition that accompany them, play a central role: Max Ophüls' "Letter from an Unknown Woman" (USA 1948), the adaptation of Stefan Zweig's novella "Brief einer Unbekannten" (1922), and "Atonement" (2007), the adaptation of Ian McEwan's novel of the same name from 2001.
Few visual motifs make climate change as visible as melting glaciers. They therefore play a central role in climate research itself, in the popularization of its alarming findings, and in contemporary art, which in light of these insights is searching for an adequate new aesthetic. Accordingly, the engagement of cultural studies with glacier imagery has by now become extensive. Numerous exhibition catalogues and comprehensive studies trace its development from the early 17th century, to which the first pictorial depictions are dated, to the present, in which glaciers and their disappearance have become an emblem of global warming. The heuristic of comparison plays an important role here: not only does it form the basis for classical art-historical investigations attentive to changing forms of expression and conventions of depiction in glacier images (for instance on a scale between idealization and realism); moreover, and in particular, the process of disappearance itself depends on the comparative gaze, for only in this way does it reveal itself in its full drama. This essay, however, takes a different perspective: conceptually following Jussi Parikka's 'media geology', and against the background of the broad field of media ecology, it outlines a 'media glaciology' that understands glaciers themselves as media. In keeping with the media-comparative research paradigm that specific medialities disclose themselves only from a media-comparing perspective, it asks how this 'becoming-media' of glaciers takes place in and through comparison with other (technical) media. I concentrate temporally on the 19th and early 20th centuries and regionally on the Alpine glaciers, whose scientific study founded the discipline of glaciology.
The elliptic flow (v2) of (anti-)3He is measured in Pb–Pb collisions at √sNN = 5.02 TeV in the transverse-momentum (pT) range of 2–6 GeV/c for the centrality classes 0–20%, 20–40%, and 40–60% using the event-plane method. This measurement is compared to that of pions, kaons, and protons at the same center-of-mass energy. A clear mass ordering is observed at low pT, as expected from relativistic hydrodynamics. The violation of the scaling of v2 with the number of constituent quarks at low pT, already observed for identified hadrons and deuterons at LHC energies, is confirmed also for (anti-)3He. The elliptic flow of (anti-)3He is underestimated by the Blast-Wave model and overestimated by a simple coalescence approach based on nucleon scaling. The elliptic flow of (anti-)3He measured in the centrality classes 0–20% and 20–40% is well described by a more sophisticated coalescence model where the phase-space distributions of protons and neutrons are generated using the iEBE-VISHNU hybrid model with AMPT initial conditions.
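The event-plane method and the constituent-quark scaling referenced in this abstract can be summarized by their standard defining relations (these are textbook definitions, not formulas taken from the measurement itself):

```latex
% Azimuthal particle distribution expanded in harmonics relative to the
% symmetry plane Psi_n; v2 is the second-harmonic coefficient, corrected
% in the event-plane method by the event-plane resolution R2.
\[
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n\ge 1} v_n \cos\!\big(n(\varphi - \Psi_n)\big),
\qquad
v_2 = \frac{\big\langle \cos\!\big(2(\varphi - \Psi_2)\big)\big\rangle}{R_2}.
\]
% Constituent-quark (NCQ) scaling, whose violation at low pT is reported
% above, predicts that v2/nq versus pT/nq collapses onto a common curve
% (nq = 9 for 3He, treated as 3 nucleons of 3 quarks each).
```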
Macro-finance theory predicts that financial fragility builds up when volatility is low. This “volatility paradox” challenges traditional systemic risk measures. I explore a new dimension of systemic risk, spillover persistence, which is the average time horizon at which a firm’s losses increase future risk in the financial system. Using firm-level data covering more than 30 years and 50 countries, I document that persistence declines when fragility builds up: before crises, during stock market booms, and when banks take more risks. In contrast, persistence increases with loss amplification: during crises and fire sales. These findings support key predictions of recent macro-finance models.
The impact of appropriately and inappropriately applied statistical metrics for verifying the State of Control of pharmaceutical manufacturing has been reviewed from an auditor’s perspective. Good and bad statistical practices are presented to help manufacturers appreciate the risks of using these metrics. The conclusions are: (1) control charts should be used instead of line/run charts for trend analysis; (2) Ppk is the preferred capability index (but still with the ambition to bring processes into statistical control); (3) process capability indices should be shown along with their respective control charts; (4) the Manufacturing State of Control in which the product/process lies should be determined; (5) an effective Control Strategy can only be implemented if the Manufacturing State of Control is understood; (6) when presenting data, what is truly representative of the product/process should be considered, not just the average; and (7) management should align with ICH Q10 more effectively to provide statistical resources for their personnel.
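Conclusion (2) above names Ppk as the preferred capability index. As a rough numerical sketch of what that index measures, Ppk compares the distance from the process mean to the nearest specification limit against three overall standard deviations. The helper function, batch data, and specification limits below are hypothetical illustrations, not taken from the review:

```python
import statistics

def ppk(data, lsl, usl):
    """Overall process capability index:
    Ppk = min(USL - mean, mean - LSL) / (3 * overall sigma)."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)  # overall (sample) standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical assay results (% label claim) with specification limits 95-105
batch = [99.2, 100.1, 98.7, 101.3, 99.8, 100.5, 99.0, 100.9]
print(round(ppk(batch, lsl=95.0, usl=105.0), 2))
```

Note the deliberate use of the overall (long-term) standard deviation here; the review's point is that Ppk reflects actual long-term variation, while a process should still be brought into statistical control as verified on the accompanying control chart.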
Linking epigenetic signature and metabolic phenotype in IDH mutant and IDH wildtype diffuse glioma
(2020)
Aims: Changes in metabolism are known to contribute to tumour phenotypes. If and how metabolic alterations in brain tumours contribute to patient outcome is still poorly understood. Epigenetics impacts metabolism and mitochondrial function. The aim of this study is the characterisation of metabolic features in molecular subgroups of isocitrate dehydrogenase mutant (IDHmut) and isocitrate dehydrogenase wildtype (IDHwt) gliomas. Methods: We employed DNA methylation pattern analyses with a special focus on metabolic genes, large-scale metabolism panel immunohistochemistry (IHC), qPCR-based determination of mitochondrial DNA copy number, and determination of immune cell content using IHC and deconvolution of DNA methylation data. We analysed molecularly characterised gliomas (n = 57) by in-depth DNA methylation analysis and a cohort of primary and recurrent gliomas (n = 22) for mitochondrial DNA copy number, and validated these results in a large glioma cohort (n = 293). Finally, we investigated the potential of metabolic markers in Bevacizumab (Bev)-treated gliomas (n = 29). Results: DNA methylation patterns of metabolic genes successfully distinguished the molecular subtypes of IDHmut and IDHwt gliomas. Promoter methylation of lactate dehydrogenase A negatively correlated with protein expression and was associated with IDHmut gliomas. Mitochondrial DNA copy number was increased in IDHmut tumours and did not change in recurrent tumours. Hierarchical clustering based on metabolism panel IHC revealed distinct subclasses of IDHmut and IDHwt gliomas with an impact on patient outcome. Further quantification of these markers allowed for the prediction of survival under anti-angiogenic therapy. Conclusion: A mitochondrial signature was associated with increased survival in all analyses, which could indicate tumour subgroups with specific metabolic vulnerabilities.
Simple Summary: Targeted therapies are of growing interest to physicians in cancer treatment. These drugs target specific genes and proteins involved in the growth and survival of cancer cells. Brain tumor therapy is complicated by the fact that not all drugs can penetrate the blood-brain barrier and reach their target. We explored magnetic resonance spectroscopy, a non-invasive method, for monitoring drug penetration and its effects in live animals bearing brain tumors. We were able to show the presence of the investigated drug in mouse brains and its on-target activity.
Abstract: Background: BAY1436032 is a fluorine-containing inhibitor of the R132X-mutant isocitrate dehydrogenase (mIDH1). It inhibits the mIDH1-mediated production of 2-hydroxyglutarate (2-HG) in glioma cells. We investigated the brain penetration of BAY1436032 and its effects using 1H/19F-Magnetic Resonance Spectroscopy (MRS). Methods: 19F-Nuclear Magnetic Resonance (NMR) Spectroscopy was conducted on serum samples from patients treated with BAY1436032 (NCT02746081 trial) in order to analyze the 19F spectroscopic signal patterns and concentration-time dynamics of the protein-bound inhibitor and to facilitate its identification in in vivo MRS experiments. Thereafter, 30 mice were implanted with three glioma cell lines (LNT-229, LNT-229 IDH1-R132H, GL261). Mice bearing the IDH-mutated glioma cells received 5 days of treatment with BAY1436032 between the baseline and follow-up 1H/19F-MRS scans. All other animals underwent a single scan after BAY1436032 administration. Mouse brains were analyzed by liquid chromatography with tandem mass spectrometry (LC-MS/MS). Results: Evaluation of the 1H-MRS data showed a decrease in 2-HG/total creatine (tCr) ratios from the baseline to the post-treatment scans in the mIDH1 murine model. The whole-brain concentration of BAY1436032, as determined by 19F-MRS, was similar to the total brain tissue concentration determined by LC-MS/MS, with a signal loss due to protein binding. The intratumoral drug concentration, as determined by LC-MS/MS, was not statistically different in models with or without R132X-mutant IDH1 expression. Conclusions: Non-invasive monitoring of mIDH1 inhibition by BAY1436032 in mIDH1 gliomas is feasible.
The production of light neutral mesons in AA collisions probes the physics of the Quark-Gluon Plasma (QGP), which is formed in heavy-ion collisions at the LHC. More specifically, the centrality-dependent neutral meson spectra in AA collisions, compared to the corresponding spectra in minimum-bias pp collisions scaled with the number of hard collisions, provide information on the energy loss of partons traversing the QGP. The measurement allows the predictions of theoretical model calculations to be tested with high precision. In addition, the decays of the π0 and η mesons are the dominant backgrounds for all direct photon measurements. Therefore, pushing the limits of the precision of neutral meson production measurements is key to learning about the temperature and space-time evolution of the QGP.
In the ALICE experiment, neutral mesons can be detected via their decay into two photons. The latter can be reconstructed using the two calorimeters, EMCal and PHOS, or via conversions in the detector material. The excellent momentum resolution of conversion photons down to very low pT, together with the high reconstruction efficiency and triggering capability of the calorimeters at high pT, allows us to measure the pT-dependent invariant yield of light neutral mesons over a wide kinematic range.
Combining state-of-the-art reconstruction techniques with the high statistics delivered by the LHC in Run 2 gives us the opportunity to enhance the precision of our measurements. In these proceedings, new preliminary ALICE Run 2 results for neutral meson production in pp and Pb–Pb collisions at LHC energies are presented.
Nature affects human well-being in multiple ways. However, the association between species diversity and human well-being at larger spatial scales remains largely unexplored. Here, we examine the relationship between species diversity and human well-being at the continental scale, while controlling for other known drivers of well-being. We related socio-economic data from more than 26,000 European citizens across 26 countries with macroecological data on species diversity and nature characteristics for Europe. Human well-being was measured as self-reported life satisfaction and species diversity as the species richness of several taxonomic groups (e.g. birds, mammals and trees). Our results show that bird species richness is positively associated with life satisfaction across Europe. We found a relatively strong relationship, indicating that the effect of bird species richness on life satisfaction may be of similar magnitude to that of income. We discuss two non-exclusive pathways for this relationship: the direct multisensory experience of birds, and beneficial landscape properties that promote both bird diversity and people's well-being. Based on these results, this study argues that management actions for the protection of birds and the landscapes that support them would benefit humans. We suggest that political and societal decision-making should consider the critical role of species diversity for human well-being.
Introduction: Recommendations for venous thromboembolism (VTE) and deep venous thrombosis (DVT) prophylaxis using graduated compression stockings (GCS) are historically based and have been critically examined in recent publications. Existing guidelines are inconclusive as to whether the general use of GCS should be recommended.
Patients/Methods: 24,273 in-patients (general surgery and orthopedic patients) undergoing surgery between 2006 and 2016 were included in a retrospective single-center analysis. From January 2006 to January 2011, perioperative GCS were employed in addition to drug prophylaxis; from February 2011 to March 2016, patients received drug prophylaxis alone. In accordance with German guidelines, all patients received venous thromboembolism prophylaxis with weight-adapted LMWH. Risk stratification (low risk, moderate risk, high risk) was based on the guideline of the American College of Chest Physicians. Data analysis was performed before and after propensity matching (PM). The defined primary endpoint was the incidence of symptomatic or fatal pulmonary embolism (PE). The secondary endpoint was the incidence of deep venous thrombosis (DVT).
Results: After risk stratification (low risk n = 16,483; moderate risk n = 4464; high risk n = 3326), a total of 24,273 patients were analyzed. Prior to PM, the relative risk for the occurrence of a PE or DVT was not increased by abstaining from GCS. After PM, two groups of 11,312 patients each were formed, one with and one without GCS application. When comparing the two groups, the relative risk (RR) for the occurrence of a pulmonary embolism was: low risk 0.99 [95% CI 0.998–1.000]; moderate risk 0.999 [95% CI 0.95–1.003]; high risk 0.996 [95% CI 0.992–1.000] (p > 0.05). The incidence of PE in the total group receiving LMWH alone was 0.1% (n = 16); in the total group receiving LMWH + GCS, it was 0.3% (n = 29). The RR after PM was 0.999 [95% CI 0.998–1.00].
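The relative risks quoted in these results come from comparing event counts between two groups. A minimal sketch of that calculation with a Wald-type 95% confidence interval is given below; the function and the plugged-in denominators are illustrative assumptions (the study's matched analysis is more involved than this), with the event counts loosely echoing the n = 16 vs. n = 29 figures above:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of group A vs group B with a 95% Wald confidence interval."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR), Wald approximation
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - 1.96 * se)
    upper = math.exp(math.log(rr) + 1.96 * se)
    return rr, lower, upper

# Hypothetical 2x2 comparison: 16 vs. 29 events in two groups of 11,312 each
rr, lower, upper = relative_risk(16, 11312, 29, 11312)
print(f"RR = {rr:.2f} [95% CI {lower:.2f}-{upper:.2f}]")
```

With these toy counts the interval spans 1.0, i.e. the difference would not be statistically significant, which is the same qualitative reading the authors give for their matched comparison.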
Conclusion: In contrast to prior studies with only small numbers of patients, our trial shows in a large group of patients at moderate and high risk of developing VTE that abstaining from GCS use does not increase the incidence of symptomatic or fatal PE or of symptomatic DVT.
Over the last 15 years, the Diagnostic Center of Acute Leukemia (DCAL) at Frankfurt University has diagnosed and elucidated the Mixed Lineage Leukemia (MLL) recombinome, identifying >100 MLL fusion partners. Analysis of all these different events showed that balanced chromosomal translocations comprise the majority of cases (~70%), while other types of genetic rearrangements (3-way translocations, spliced fusions, 11q inversions, interstitial deletions, or insertions of chromosomal fragments into other chromosomes) account for about 30%. In nearly all of these complex cases, functional fusion proteins can be produced by transcription, splicing and translation. With a few exceptions (10 out of 102 fusion genes that were per se out-of-frame), all these genetic rearrangements produced a direct MLL fusion gene, and in 94% of cases an additional reciprocal fusion gene. So far, 114 patients (out of 2454, ~5%) have been diagnosed with only the reciprocal fusion allele, displaying no MLL-X allele. The fact that so many MLL rearrangements bear at least two fusion alleles, together with our findings that several direct MLL fusions were either out-of-frame or missing, raises the question of the function and importance of reciprocal MLL fusions. Recent findings also demonstrate the presence of reciprocal MLL fusions in sarcoma patients. Here, we discuss the role of reciprocal MLL fusion proteins in leukemogenesis and beyond.
Two-person neuroscience (2PN) is a recently introduced conceptual and methodological framework for investigating the neural basis of human social interaction through simultaneous neuroimaging of two or more subjects (hyperscanning). In this study, we adopted a 2PN approach and a multiple-brain connectivity model to investigate the neural basis of a form of cooperation called joint action. We hypothesized different intra-brain and inter-brain connectivity patterns when comparing the interpersonal properties of joint action with non-interpersonal conditions, with a focus on co-representation, a core ability at the basis of cooperation. Thirty-two subjects were enrolled in dual-EEG recordings during a computerized joint action task comprising three conditions: one in which the dyad acted jointly to pursue a common goal (Joint), one in which each subject interacted with the PC (PC), and one in which each subject performed the task individually (Solo).
A combination of multiple-brain connectivity estimation and specific indices derived from graph theory allowed us to compare interpersonal with non-interpersonal conditions in four different frequency bands. Our results indicate that all the indices were modulated by the interaction and showed a significantly stronger integration of multiple-subject networks in the Joint condition than in the PC and Solo conditions. A subsequent classification analysis showed that features based on multiple-brain indices led to better discrimination between social and non-social conditions than single-subject indices. Taken together, our results suggest that multiple-brain connectivity can provide deeper insight into the neural basis of cooperation in humans.
Inhibitors against the NS3-4A protease of hepatitis C virus (HCV) have proven to be useful drugs in the treatment of HCV infection. Although variants have been identified with mutations that confer resistance to these inhibitors, the mutations do not restore replicative fitness and no secondary mutations that rescue fitness have been found. To gain insight into the molecular mechanisms underlying the lack of fitness compensation, we screened known resistance mutations in infectious HCV cell culture with different genomic backgrounds. We observed that the Q41R mutation of NS3-4A efficiently rescues the replicative fitness in cell culture for virus variants containing mutations at NS3-Asp168. To understand how the Q41R mutation rescues activity, we performed protease activity assays complemented by molecular dynamics simulations, which showed that protease-peptide interactions far outside the targeted peptide cleavage sites mediate substrate recognition by NS3-4A and support protease cleavage kinetics. These interactions shed new light on the mechanisms by which NS3-4A cleaves its substrates, viral polyproteins and a prime cellular antiviral adaptor protein, the mitochondrial antiviral signaling protein MAVS. Peptide binding is mediated by an extended hydrogen-bond network in NS3-4A that was effectively optimized for protease-MAVS binding in Asp168 variants with rescued replicative fitness from NS3-Q41R. In the protease harboring NS3-Q41R, the N-terminal cleavage products of MAVS retained high affinity to the active site, rendering the protease susceptible for potential product inhibition. Our findings reveal delicately balanced protease-peptide interactions in viral replication and immune escape that likely restrict the protease adaptive capability and narrow the virus evolutionary space.
Highlights
• Explanation of mobility design and its practical, aesthetic and emblematic effects on travel behaviour.
• Review of recent studies on mobility design elements and the promotion of non-motorised travel.
• Discussion of research gaps and methodological challenges of data collection and comparability.
Abstract
To promote non-motorised travel, many travel behaviour studies acknowledge the importance of the built environment for modal choice, for example through its density or mix of uses. From the perspective of mobility design theory, however, objects and environments affect human perceptions, assessments and behaviour in at least three different ways: through their practical, aesthetic and emblematic functions. This review of existing evidence argues that travel behaviour research has so far mainly focused on the practical function of the built environment. For that purpose, we systematically identified 56 relevant studies on the impacts of the built environment on non-motorised travel behaviour in the Web of Science database. Research on the practical design function primarily involves land use distribution, street network connectivity and the presence of walking and cycling facilities. Only a small number of papers address the aesthetic and emblematic functions. These show that the perceived attractiveness of an environment and evoked feelings of traffic safety increase the likelihood of walking and cycling. However, from a mobility design perspective, the results of the review indicate a gap regarding comprehensive research on the effects of the aesthetic and emblematic functions of the built environment. Further research involving these functions might contribute to a better understanding of how to promote non-motorised travel more effectively. Moreover, limitations related to survey techniques, regional distribution and the comparability of results were identified.
This research examines the impact of online display advertising and paid search advertising relative to offline advertising on firm performance and firm value. Using proprietary data on annualized advertising expenditures for 1651 firms spanning seven years, we document that both display advertising and paid search advertising exhibit positive effects on firm performance (measured by sales) and firm value (measured by Tobin's q). Paid search advertising has a more positive effect on sales than offline advertising, consistent with paid search being closest to the actual purchase decision and having enhanced targeting abilities. Display advertising exhibits a relatively more positive effect on Tobin's q than offline advertising, consistent with its long-term effects. The findings suggest heterogeneous economic benefits across different types of advertising, with direct implications for managers in analyzing advertising effectiveness and external stakeholders in assessing firm performance.
The US Treasury recently permitted deferred longevity income annuities to be included in pension plan menus as a default payout solution, yet little research has investigated whether more people should convert some of the $18 trillion they hold in employer-based defined contribution plans into lifelong income streams. We investigate this innovation using a calibrated lifecycle consumption and portfolio choice model embodying realistic institutional considerations. Our welfare analysis shows that defaulting a modest portion of retirees’ 401(k) assets (over a threshold) is an attractive way to enhance retirement security, enhancing welfare by up to 20% of retiree plan accruals.
In this paper a new method of experimental data analysis, the Particle-Set Identification method, is presented. The method allows the reconstruction of moments of the multiplicity distributions of identified particles. The difficulty the method copes with stems from incomplete particle identification: a particle's mass is frequently determined with a resolution that does not allow a unique determination of the particle type. Within this method, the moments of order k are calculated from the mean multiplicities of k-particle sets of a given type. The Particle-Set Identification method remains valid even in the case of correlations between mass measurements for different particles. This distinguishes it from the Identity method, introduced by us previously to solve the problem of incomplete particle identification in studies of particle fluctuations.
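The counting relation underlying such set-based moment reconstruction can be illustrated for the idealized case of complete identification: the mean number of distinct k-particle sets per event equals the k-th factorial moment divided by k!, so for k = 2 the second moment follows from mean pair counts via ⟨N(N−1)⟩ = 2⟨pairs⟩. The toy sketch below checks this identity on made-up multiplicities; the actual method additionally folds in identification probabilities, which this sketch does not attempt:

```python
import statistics

def second_moment_from_pairs(multiplicities):
    """Reconstruct <N^2> from mean single and pair counts per event:
    <N(N-1)> = 2 * <number of distinct pairs>, hence <N^2> = 2*<pairs> + <N>."""
    mean_n = statistics.mean(multiplicities)
    mean_pairs = statistics.mean([n * (n - 1) / 2 for n in multiplicities])
    return 2 * mean_pairs + mean_n

# Toy per-event multiplicities of one (fully identified) particle type
events = [3, 5, 2, 4, 6, 3]
direct = statistics.mean([n ** 2 for n in events])
# The set-based reconstruction reproduces the directly computed moment
assert abs(second_moment_from_pairs(events) - direct) < 1e-9
```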
Deubiquitinases (DUBs) are vital for the regulation of ubiquitin signals, and both catalytic activity of and target recruitment by DUBs need to be tightly controlled. Here, we identify asparagine hydroxylation as a novel posttranslational modification involved in the regulation of Cezanne (also known as OTU domain–containing protein 7B (OTUD7B)), a DUB that controls key cellular functions and signaling pathways. We demonstrate that Cezanne is a substrate for factor inhibiting HIF1 (FIH1)- and oxygen-dependent asparagine hydroxylation. We found that FIH1 modifies Asn35 within the uncharacterized N-terminal ubiquitin-associated (UBA)-like domain of Cezanne (UBACez), which lacks conserved UBA domain properties. We show that UBACez binds Lys11-, Lys48-, Lys63-, and Met1-linked ubiquitin chains in vitro, establishing UBACez as a functional ubiquitin-binding domain. Our findings also reveal that the interaction of UBACez with ubiquitin is mediated via a noncanonical surface and that hydroxylation of Asn35 inhibits ubiquitin binding. Recently, it has been suggested that Cezanne recruitment to specific target proteins depends on UBACez. Our results indicate that UBACez can indeed fulfill this role as regulatory domain by binding various ubiquitin chain types. They also uncover that this interaction with ubiquitin, and thus with modified substrates, can be modulated by oxygen-dependent asparagine hydroxylation, suggesting that Cezanne is regulated by oxygen levels.
Hypoxia inhibits ferritinophagy, increases mitochondrial ferritin, and protects from ferroptosis
(2020)
Highlights
• Hypoxia decreases NCOA4 transcription in primary human macrophages.
• NCOA4 mRNA is a target of miR-6862-5p.
• Lowering NCOA4 increases FTMT abundance under hypoxia.
• FTMT and FTH protect from ferroptosis.
• Tumor cells lack the hypoxic decrease of NCOA4 and fail to stabilize FTMT.
Abstract
Cellular iron, at physiological levels, is essential for maintaining several metabolic pathways, while an excess of free iron may cause oxidative damage and/or provoke cell death. Consequently, iron homeostasis has to be tightly controlled. Under hypoxia, these regulatory mechanisms in human macrophages are not well understood. Hypoxic primary human macrophages reduced intracellular free iron and increased ferritin expression, including mitochondrial ferritin (FTMT), to store iron. In parallel, nuclear receptor coactivator 4 (NCOA4), a master regulator of ferritinophagy, decreased and was shown to directly regulate FTMT expression. Reduced NCOA4 expression resulted from a lower rate of hypoxic NCOA4 transcription combined with microRNA-6862-5p-dependent degradation of NCOA4 mRNA, the latter being regulated by c-Jun N-terminal kinase (JNK). Pharmacological inhibition of JNK under hypoxia increased NCOA4 and prevented FTMT induction. FTMT and ferritin heavy chain (FTH) cooperated to protect macrophages from RSL-3-induced ferroptosis under hypoxia, as this form of cell death is linked to iron metabolism. In contrast, in HT1080 fibrosarcoma cells, which are sensitive to ferroptosis, NCOA4 and FTMT were not regulated. Our study helps to understand the mechanisms of hypoxic FTMT regulation and links ferritinophagy to macrophage sensitivity to ferroptosis.
Highlights
• PUR, PVC and PLA microplastics affect life-history parameters of Daphnia magna.
• Natural kaolin particles are less toxic than microplastics.
• Microplastic toxicity is material-specific, e.g. PVC is most toxic on reproduction.
• In case of PVC, plastic chemicals are the main driver of microplastic toxicity.
• PLA bioplastics are similarly toxic as conventional plastics.
Abstract
Given the ubiquitous presence of microplastics in aquatic environments, an evaluation of their toxicity is essential. Microplastics are a heterogeneous set of materials that differ not only in particle properties, like size and shape, but also in chemical composition, including polymers, additives and side products. Thus far, it remains unknown whether the plastic chemicals or the particles themselves are the driving factor of microplastic toxicity. To address this question, we exposed Daphnia magna for 21 days to irregular polyvinyl chloride (PVC), polyurethane (PUR) and polylactic acid (PLA) microplastics as well as to natural kaolin particles in high concentrations (10, 50, 100, 500 mg/L, ≤ 59 μm) and different exposure scenarios, including microplastics and microplastics without extractable chemicals as well as the extracted and migrating chemicals alone. All three microplastic types negatively affected the life-history of D. magna. However, this toxicity depended on the endpoint and the material. While PVC had the largest effect on reproduction, PLA reduced survival most effectively. The latter indicates that bio-based and biodegradable plastics can be as toxic as their conventional counterparts. The natural particle kaolin was less toxic than microplastics when comparing numerical concentrations. Importantly, the contribution of plastic chemicals to the toxicity was also plastic type-specific. While we can attribute the effects of PVC to the chemicals used in the material, effects of PUR and PLA plastics were induced by the particles themselves. Our study demonstrates that plastic chemicals can drive microplastic toxicity. This highlights the importance of considering the individual chemical composition of plastics when assessing their environmental risks. Our results suggest that less studied polymer types, like PVC and PUR, as well as bioplastics are of particular toxicological relevance and should receive higher priority in ecotoxicological studies.
Decline in physical activity in the weeks preceding sustained ventricular arrhythmia in women
(2020)
Background: Heightened risk of cardiac arrest following physical exertion has been reported. Among patients with an implantable defibrillator, appropriate shocks for sustained ventricular arrhythmia were preceded, by retrospective self-report, by engagement in mild-to-moderate physical activity. Previous studies evaluating the relationship between activity and sudden cardiac arrest lacked an objective measure of physical activity, and women were often underrepresented.
Objective: To determine the relationship between physical activity, recorded by accelerometer in a wearable cardioverter-defibrillator (WCD), and sustained ventricular arrhythmia among female patients.
Methods: A dataset of female adult patients prescribed a WCD for a diagnosis of myocardial infarction or dilated cardiomyopathy was compiled from a commercial database. Curve estimation, including linear and nonlinear interpolation, was applied to physical activity as a function of time (days before arrhythmia).
Results: Among women who received an appropriate WCD shock for sustained ventricular arrhythmia (N = 120), a quadratic relationship between time and activity was present prior to shock. Physical activity increased from the beginning of the 30-day period up until day -16 (16 days before the ventricular arrhythmia), when activity began to decline.
Conclusion: For patients who received treatment for sustained ventricular arrhythmia, a decline in physical activity was found during the 2 weeks preceding the arrhythmic event. Device monitoring for a sustained decline in physical activity may be useful to identify patients at near-term risk of a cardiac arrest.
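The curve-estimation step described above can be illustrated with a short sketch: fitting a second-order polynomial to a synthetic activity series and reading off the day of peak activity from the vertex of the fitted parabola. All numbers here are illustrative, not taken from the study.

```python
import numpy as np

# Illustrative daily activity proxy (synthetic, not study data):
# activity rises, peaks around day -16, then declines toward the event.
rng = np.random.default_rng(0)
days = np.arange(-30, 0)                    # days before the arrhythmic event
trend = 5000 - 8.0 * (days + 16) ** 2       # hypothetical quadratic trend
activity = trend + rng.normal(0, 200, days.size)

# Curve estimation with a second-order polynomial in time
b2, b1, b0 = np.polyfit(days, activity, deg=2)
peak_day = -b1 / (2 * b2)                   # vertex of the fitted parabola
print(round(peak_day))                      # near -16 for this synthetic trend
```

For real WCD accelerometer data the same fit would flag the onset of a sustained activity decline preceding the arrhythmic event.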
Entorhinal-retrosplenial circuits for allocentric-egocentric transformation of boundary coding
(2020)
Spatial navigation requires landmark coding from two perspectives, relying on viewpoint-invariant and self-referenced representations. The brain encodes information within each reference frame, but their interactions and functional dependency remain unclear. Here we investigate the relationship between neurons in the rat's retrosplenial cortex (RSC) and medial entorhinal cortex (MEC) that increase firing near boundaries of space. Border cells in RSC specifically encode walls, but not objects, and are sensitive to the animal’s direction to nearby borders. These egocentric representations are generated independent of visual or whisker sensation but are affected by inputs from MEC, which contains allocentric spatial cells. Pharmaco- and optogenetic inhibition of MEC led to a disruption of border coding in RSC, but not vice versa, indicating an allocentric-to-egocentric transformation. Finally, RSC border cells fire prospectively relative to the animal’s next motion, unlike those in MEC, revealing the MEC-RSC pathway as an extended border coding circuit that implements coordinate transformation to guide navigation behavior.
Aim: To assess volumetric tissue changes at peri‐implantitis sites following combined surgical therapy of peri‐implantitis over a 6‐month follow‐up period.
Materials and Methods: Twenty patients (n = 28 implants) diagnosed with peri‐implantitis underwent access flap surgery, implantoplasty at supracrestally or bucally exposed implant surfaces and augmentation at intra‐bony components using a natural bone mineral and application of a native collagen membrane during clinical routine treatments. The peri‐implant region of interest (ROI) was intra‐orally scanned pre‐operatively (S0), and after 1 (S1) and 6 (S2) months following surgical therapy. Digital files were converted to standard tessellation language (STL) format for superimposition and assessment of peri‐implant volumetric variations between time points. The change in thickness was assessed at a standardized ROI, subdivided into three equidistant sections (i.e. marginal, medial and apical). Peri‐implant soft tissue contour area (STCA) (mm2) and its corresponding contraction rates (%) were also assessed.
Results: Peri‐implant tissues revealed a mean thickness change (loss) of −0.11 and −0.28 mm at 1 and 6 months. S0 to S1 volumetric variations pointed to a thickness change of −0.46, 0.08 and 0.4 mm at marginal, medial and apical regions, respectively. S0 to S2 analysis exhibited corresponding thickness changes of −0.61, −0.25 and −0.09 mm, respectively. The thickness differences between the areas were statistically significant at both time periods. The mean peri‐implant STCA totalled 189.2, 175 and 158.9 mm2 at S0, S1 and S2, showing a significant STCA contraction rate of 7.9% from S0 to S1 and of 18.5% from S0 to S2. Linear regression analysis revealed a significant association between the pre‐operative width of keratinized mucosa (KM) and STCA contraction rate.
Conclusions: The peri‐implant mucosa undergoes considerable volumetric changes after combined surgical therapy. However, tissue contraction appears to be influenced by the width of KM.
In resource-limited or point-of-care settings, rapid diagnostic tests (RDTs) that aim to simultaneously detect HIV antibodies and p24 capsid (p24CA) antigen with high sensitivity can offer important alternatives for screening for early infections. We evaluated the performance of the antibody and antigen components of the old and the novel version of the Determine™ HIV-1/2 Ag/Ab Combo RDT in parallel with quantifications in a fourth-generation antigen/antibody immunoassay (4G-EIA), a p24CA antigen immunoassay (p24CA-EIA), immunoblots, and nucleic acid quantification. We included plasma samples of acute, treatment-naïve HIV-1 infections (Fiebig stages I–VI, subtypes A1, B, C, F, CRF02_AG, CRF02_AE, URF) or chronic HIV-1 and HIV-2 infections. The tests’ antigen component was also evaluated for a panel of subtype B HIV-1 transmitted/founder (T/F) viruses, HIV-2 strains, and HIV-2 primary isolates. Furthermore, we assessed the analytical sensitivity of the RDTs to detect p24CA using a highly purified HIV-1NL4-3 p24CA standard. We found that 77% of plasma samples from acutely infected, immunoblot-negative HIV-1 patients in Fiebig stages II–III were identified by the new RDT, while only 25% scored positive in the old RDT. Both RDTs reacted to all samples from chronically HIV-1-infected and acutely HIV-1-infected patients with positive immunoblots. All specimens from chronically infected HIV-2 patients scored positive in the new RDT. Of note, the sensitivity of the RDTs to detect recombinant p24CA from a subtype B virus ranged between 50 and 200 pg/mL, mirrored also by the detection of HIV-1 T/F viruses only at antigen concentrations tenfold higher than suggested by the manufacturer. The RDTs failed to recognize any of the HIV-2 viruses tested. Our results indicate that the new version of the Determine™ HIV-1/2 Ag/Ab Combo displays an increased sensitivity to detect HIV-1 p24CA-positive, immunoblot-negative plasma samples compared to the precursor version.
The sensitivity of 4G-EIA and p24CA-EIA to detect the major structural HIV antigen, and thus to diagnose acute infections prior to seroconversion, is still superior.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
Mitochondria have a central role in regulating a range of cellular activities and host responses upon bacterial infection. Multiple pathogens affect mitochondria dynamics and functions to influence their intracellular survival or evade host immunity. On the other side, major host responses elicited against infections are directly dependent on mitochondrial functions, thus placing mitochondria centrally in maintaining homeostasis upon infection. In this review, we summarize how different bacteria and viruses impact morphological and functional changes in host mitochondria and how this manipulation can influence microbial pathogenesis as well as the host cell metabolism and immune responses.
In these proceedings, we review our recent work using a deep convolutional neural network (CNN) to identify the nature of the QCD transition in a hybrid modeling of heavy-ion collisions. Within this hybrid model, a viscous hydrodynamic model is coupled with a hadronic cascade “after-burner”. As a binary classification setup, we employ two different types of equations of state (EoS) of the hot medium in the hydrodynamic evolution. The resulting final-state pion spectra in the transverse momentum and azimuthal angle plane are fed to the neural network as input data in order to distinguish the different EoS. To probe the effects of fluctuations in the event-by-event spectra, we explore different scenarios for the input data and compare them in a systematic way. We observe a clear hierarchy in the predictive power when the network is fed with the event-by-event, cascade-coarse-grained, and event-fine-averaged spectra. The carefully trained neural network can extract high-level features from pion spectra to identify the nature of the QCD transition in a realistic simulation scenario.
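The input representation described above, event-wise pion spectra binned in the transverse momentum and azimuthal angle plane, can be sketched as follows. The binning choices and the toy pion sample are assumptions for illustration, not the settings used in the work.

```python
import numpy as np

# Sketch: turn a list of final-state pions into the 2D (pT, phi) histogram
# that would be fed to a CNN as one input "image" (binning is illustrative).
def pion_spectrum_image(pt, phi, pt_bins=15, phi_bins=48, pt_max=2.0):
    """Event-by-event pion spectrum as a (pt_bins, phi_bins) array."""
    hist, _, _ = np.histogram2d(
        pt, phi,
        bins=(pt_bins, phi_bins),
        range=((0.0, pt_max), (-np.pi, np.pi)),
    )
    return hist

rng = np.random.default_rng(1)
pt = rng.exponential(0.4, size=2000)      # toy thermal-like pT spectrum
phi = rng.uniform(-np.pi, np.pi, 2000)    # toy isotropic azimuthal angles
image = pion_spectrum_image(pt, phi)
print(image.shape)                        # (15, 48): one pixel map per event
```

Stacking one such array per event yields the input tensor for a binary EoS classifier.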
Previous studies reported on the safety and applicability of mesenchymal stem/stromal cells (MSCs) to ameliorate pulmonary inflammation in acute respiratory distress syndrome (ARDS). Thus, multiple clinical trials assessing the potential of MSCs for COVID-19 treatment are underway. Yet, as SARS-inducing coronaviruses infect stem/progenitor cells, it is unclear whether MSCs could be infected by SARS-CoV-2 upon transplantation to COVID-19 patients. We found that MSCs from bone marrow, amniotic fluid, and adipose tissue carry angiotensin-converting enzyme 2 and transmembrane protease serine subtype 2 at low levels on the cell surface under steady-state and inflammatory conditions. We did not observe SARS-CoV-2 infection or replication in MSCs at steady state, under inflammatory conditions, or in direct contact with SARS-CoV-2-infected Caco-2 cells. Further, indoleamine 2,3-dioxygenase 1 production in MSCs was not impaired in the presence of SARS-CoV-2. We show that MSCs are resistant to SARS-CoV-2 infection and retain their immunomodulation potential, supporting their potential applicability for COVID-19 treatment.
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
The striking similarities that have been observed between high-multiplicity proton-proton (pp) collisions and heavy-ion collisions can be explored through multiplicity-differential measurements of identified hadrons in pp collisions. With these measurements, it is possible to study mechanisms such as collective flow that determine the shapes of hadron transverse momentum (pT) spectra, to search for possible modifications of the yields of short-lived hadronic resonances due to scattering effects in an extended hadron-gas phase, and to investigate different explanations provided by phenomenological models for enhancement of strangeness production with increasing multiplicity. In this paper, these topics are addressed through measurements of the K∗(892)0 and φ(1020) mesons at midrapidity in pp collisions at √s = 13 TeV as a function of the charged-particle multiplicity. The results include the pT spectra, pT-integrated yields, mean transverse momenta, and the ratios of the yields of these resonances to those of longer-lived hadrons. Comparisons with results from other collision systems and energies, as well as predictions from phenomenological models, are also discussed.
The inclusive J/ψ meson production in Pb–Pb collisions at a center-of-mass energy per nucleon–nucleon collision of √sNN = 5.02 TeV at midrapidity (|y| < 0.9) is reported by the ALICE Collaboration. The measurements are performed in the dielectron decay channel, as a function of event centrality and J/ψ transverse momentum pT, down to pT = 0. The J/ψ mean transverse momentum 〈pT〉 and rAA ratio, defined as 〈pT2〉PbPb/〈pT2〉pp, are evaluated. Both observables show a centrality dependence decreasing towards central (head-on) collisions. The J/ψ nuclear modification factor RAA exhibits a strong pT dependence with a large suppression at high pT and an increase to unity for decreasing pT. When integrating over the measured momentum range pT < 10 GeV/c, the J/ψ RAA shows a weak centrality dependence. Each measurement is compared with results at lower center-of-mass energies and with ALICE measurements at forward rapidity, as well as to theory calculations. All reported features of the J/ψ production at low pT are consistent with a dominant contribution to the J/ψ yield originating from charm quark (re)combination.
This paper presents the first measurements of the charge-independent (CI) and charge-dependent (CD) two-particle transverse momentum correlators G2^CI and G2^CD in Pb–Pb collisions at √sNN = 2.76 TeV by the ALICE collaboration. The two-particle transverse momentum correlator G2 was introduced as a measure of the momentum current transfer between neighboring system cells. The correlators are measured as a function of pair separation in pseudorapidity (Δη) and azimuth (Δφ) and as a function of collision centrality. From peripheral to central collisions, the correlator G2^CI exhibits a longitudinal broadening while undergoing a monotonic azimuthal narrowing. By contrast, G2^CD exhibits a narrowing along both dimensions. These features are not reproduced by models such as HIJING and AMPT. However, the observed narrowing of the correlators from peripheral to central collisions is expected to result from the stronger transverse flow profiles produced in more central collisions, and the longitudinal broadening is predicted to be sensitive to momentum currents and the shear viscosity per unit of entropy density η/s of the matter produced in the collisions. The observed broadening is found to be consistent with the hypothesized lower bound of η/s and is in qualitative agreement with values obtained from anisotropic flow measurements.
This Letter presents the first direct investigation of the p–Σ0 interaction, using the femtoscopy technique in high-multiplicity pp collisions at √s = 13 TeV measured by the ALICE detector. The Σ0 is reconstructed via its decay channel to Λγ, and the subsequent decay of Λ to pπ−. The photon is detected via its conversion in material to e+e− pairs, exploiting the capability of the ALICE detector to measure electrons at low transverse momenta. The measured p–Σ0 correlation indicates a shallow strong interaction. The comparison of the data to several theoretical predictions, obtained employing the Correlation Analysis Tool using the Schrödinger Equation (CATS) and the Lednický–Lyuboshits approach, shows that the current experimental precision does not yet allow discrimination between different models, as is the case for the available scattering and hypernuclei data. Nevertheless, the p–Σ0 correlation function is found to be sensitive to the strong interaction and driven by the interplay of the different spin and isospin channels. This pioneering study demonstrates the feasibility of a femtoscopic measurement in the p–Σ0 channel, and with the larger data samples expected in LHC Run 3 and Run 4, the p–Σ0 interaction will be constrained with high precision.
Multiplicity dependence of light (anti-)nuclei production in p–Pb collisions at √sNN =5.02 TeV
(2020)
The measurement of the deuteron and anti-deuteron production in the rapidity range −1 < y < 0 as a function of transverse momentum and event multiplicity in p–Pb collisions at √sNN = 5.02 TeV is presented. (Anti-)deuterons are identified via their specific energy loss dE/dx and via their time-of-flight. Their production in p–Pb collisions is compared to pp and Pb–Pb collisions and is discussed within the context of thermal and coalescence models. The ratio of integrated yields of deuterons to protons (d/p) shows a significant increase as a function of the charged-particle multiplicity of the event, starting from values similar to those observed in pp collisions at low multiplicities and approaching those observed in Pb–Pb collisions at high multiplicities. The mean transverse particle momenta are extracted from the deuteron spectra and the values are similar to those obtained for p and Λ particles. Thus, deuteron spectra do not follow mass ordering. This behaviour is in contrast to the trend observed for non-composite particles in p–Pb collisions. In addition, the production of the rare 3He and anti-3He nuclei has been studied. The spectrum corresponding to all non-single-diffractive p–Pb collisions is obtained in the rapidity window −1 < y < 0 and the pT-integrated yield dN/dy is extracted. It is found that the yields of protons, deuterons, and 3He, normalised by the spin degeneracy factor, follow an exponential decrease with mass number.
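The final observation, an exponential decrease of yields with mass number, means that ln(dN/dy) is linear in A, so the suppression ("penalty") factor per added nucleon can be read off a straight-line fit. The yields in this sketch are hypothetical placeholders, not the measured values.

```python
import numpy as np

# Exponential decrease of yields with mass number A: ln(dN/dy) is linear
# in A. The yields below are hypothetical placeholders for illustration.
A = np.array([1, 2, 3])                     # proton, deuteron, 3He
yields = np.array([1.0, 2.0e-3, 4.0e-6])    # illustrative spin-normalised dN/dy

slope, intercept = np.polyfit(A, np.log(yields), deg=1)
penalty = np.exp(-slope)                    # suppression per added nucleon
print(round(penalty))                       # 500 for these placeholder yields
```

With the measured yields, the slope of this fit quantifies how strongly nucleus production is penalised for each additional nucleon.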
The ALICE collaboration at the CERN LHC reports novel measurements of jet substructure in pp collisions at √s = 7 TeV and central Pb–Pb collisions at √sNN = 2.76 TeV. Jet substructure of track-based jets is explored via iterative declustering and grooming techniques. We present the measurement of the momentum sharing of two-prong substructure exposed via grooming, the zg, and its dependence on the opening angle, in both pp and Pb–Pb collisions. We also present the measurement of the distribution of the number of branches obtained in the iterative declustering of the jet, which is interpreted as the number of its hard splittings. In Pb–Pb collisions, we observe a suppression of symmetric splittings at large opening angles and an enhancement of splittings at small opening angles relative to pp collisions, with no significant modification of the number of splittings. The results are compared to predictions from various Monte Carlo event generators to test the role of important concepts in the evolution of the jet in the medium such as colour coherence.
ϒ production in p–Pb interactions is studied at the centre-of-mass energy per nucleon–nucleon collision √sNN = 8.16 TeV with the ALICE detector at the CERN LHC. The measurement is performed reconstructing bottomonium resonances via their dimuon decay channel, in the centre-of-mass rapidity intervals 2.03 < ycms < 3.53 and −4.46 < ycms < −2.96, down to zero transverse momentum. In this work, results on the ϒ(1S) production cross section as a function of rapidity and transverse momentum are presented. The corresponding nuclear modification factor shows a suppression of the ϒ(1S) yields with respect to pp collisions, both at forward and backward rapidity. This suppression is stronger in the low transverse momentum region and shows no significant dependence on the centrality of the interactions. Furthermore, the ϒ(2S) nuclear modification factor is evaluated, suggesting a suppression similar to that of the ϒ(1S). A first measurement of the ϒ(3S) has also been performed. Finally, results are compared with previous ALICE measurements in p–Pb collisions at √sNN = 5.02 TeV and with theoretical calculations.
Measurement of groomed jet substructure observables in p+p collisions at √s = 200 GeV with STAR
(2020)
In this letter, measurements of the shared momentum fraction (zg) and the groomed jet radius (Rg), as defined in the SoftDrop algorithm, are reported in p+p collisions at √s = 200 GeV collected by the STAR experiment. These substructure observables are differentially measured for jets of varying resolution parameters from R = 0.2 − 0.6 in the transverse momentum range 15 < pT,jet < 60 GeV/c. These studies show that, in the pT,jet range accessible at √s = 200 GeV and with increasing jet resolution parameter and jet transverse momentum, the zg distribution asymptotically converges to the DGLAP splitting kernel for a quark radiating a gluon. The groomed jet radius measurements reflect a momentum-dependent narrowing of the jet structure for jets of a given resolution parameter, i.e., the larger the pT,jet, the narrower the first splitting. For the first time, these fully corrected measurements are compared to Monte Carlo generators with leading order QCD matrix elements and leading log in the parton shower, and to state-of-the-art theoretical calculations at next-to-leading-log accuracy. We observe that PYTHIA 6 with parameters tuned to reproduce RHIC measurements is able to quantitatively describe data, whereas PYTHIA 8 and HERWIG 7, tuned to reproduce LHC data, are unable to provide a simultaneous description of both zg and Rg, resulting in opportunities for fine parameter tuning of these models for p+p collisions at RHIC energies. We also find that the theoretical calculations without non-perturbative corrections are able to qualitatively describe the trend in data for jets of large resolution parameters at high pT,jet, but fail at small jet resolution parameters and low jet transverse momenta.
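For reference, the SoftDrop quantities measured here have a compact definition: at each declustering step, zg is the momentum fraction carried by the softer prong, and the splitting is kept if zg exceeds zcut (ΔR/R)^β. A minimal sketch with illustrative prong kinematics (the specific pT and angle values are assumptions, not data):

```python
# SoftDrop condition for a single declustering step; zcut and beta follow
# the standard algorithm, while the prong kinematics below are illustrative.
def soft_drop_step(pt1, pt2, delta_r, jet_r, zcut=0.1, beta=0.0):
    """Return (zg, rg, passed): shared momentum fraction, groomed jet
    radius, and whether zg > zcut * (delta_r / jet_r) ** beta."""
    zg = min(pt1, pt2) / (pt1 + pt2)
    passed = zg > zcut * (delta_r / jet_r) ** beta
    return zg, delta_r, passed

zg, rg, ok = soft_drop_step(pt1=24.0, pt2=8.0, delta_r=0.3, jet_r=0.4)
print(zg, ok)   # 0.25 True (for beta = 0 the condition reduces to zg > zcut)
```

The first splitting that passes this condition defines the reported zg and Rg of the jet.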
We report on the measurement of the size of the particle-emitting source from two-baryon correlations with ALICE in high-multiplicity pp collisions at √s = 13 TeV. The source radius is studied with low relative momentum p–p, p̄–p̄, p–Λ, and p̄–Λ̄ pairs as a function of the pair transverse mass mT, considering for the first time in a quantitative way the effect of strong resonance decays. After correcting for this effect, the radii extracted for pairs of different particle species agree. This indicates that protons, antiprotons, Λs, and Λ̄s originate from the same source. Within the measured mT range (1.1–2.2) GeV/c2 the invariant radius of this common source varies between 1.3 and 0.85 fm. These results provide a precise reference for studies of the strong hadron–hadron interactions and for the investigation of collective properties in small colliding systems.
Multiplicity dependence of inclusive J/ψ production at midrapidity in pp collisions at √s = 13 TeV
(2020)
Measurements of the inclusive J/ψ yield as a function of charged-particle pseudorapidity density dNch/dη in pp collisions at √s = 13 TeV with ALICE at the LHC are reported. The J/ψ meson yield is measured at midrapidity (|y| < 0.9) in the dielectron channel, for events selected based on the charged-particle multiplicity at midrapidity (|η| < 1) and at forward rapidity (−3.7 < η < −1.7 and 2.8 < η < 5.1); both observables are normalized to their corresponding averages in minimum bias events. The increase of the normalized J/ψ yield with normalized dNch/dη is significantly stronger than linear and dependent on the transverse momentum. The data are compared to theoretical predictions, which describe the observed trends well, albeit not always quantitatively.
Investigation of the linear and mode-coupled flow harmonics in Au+Au collisions at √sNN = 200 GeV
(2020)
Flow harmonics (vn) of the Fourier expansion for the azimuthal distributions of hadrons are commonly employed to quantify the azimuthal anisotropy of particle production relative to the collision symmetry planes. While lower-order Fourier coefficients (v2 and v3) are more directly related to the corresponding eccentricities of the initial state, the higher-order flow harmonics (vn>3) can be induced by a mode-coupled response to the lower-order anisotropies, in addition to a linear response to the same-order anisotropies. These higher-order flow harmonics and their linear and mode-coupled contributions can be used to more precisely constrain the initial conditions and the transport properties of the medium in theoretical models. The multiparticle azimuthal cumulant method is used to measure the linear and mode-coupled contributions to the higher-order anisotropic flow, the mode-coupled response coefficients, and the correlations of the event plane angles for charged particles as functions of centrality and transverse momentum in Au+Au collisions at nucleon-nucleon center-of-mass energy √sNN = 200 GeV. The results are compared to similar LHC measurements as well as to several viscous hydrodynamic calculations with varying initial conditions.
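The multiparticle cumulant idea behind these measurements can be illustrated in a toy form: sample particles with a known v2 modulation of the azimuthal distribution and recover it from the two-particle Q-vector cumulant, v2{2} = sqrt(⟨⟨cos 2Δφ⟩⟩). The event sample, multiplicities, and input v2 below are synthetic, chosen only to make the recovery visible.

```python
import numpy as np

# Toy two-particle cumulant v2{2}: sample azimuthal angles with a known
# v2 modulation (symmetry plane fixed at zero) and recover it via Q-vectors.
rng = np.random.default_rng(2)
v2_true, n_events, mult = 0.08, 500, 300

num, den = 0.0, 0.0
for _ in range(n_events):
    # accept-reject sampling from f(phi) proportional to 1 + 2 v2 cos(2 phi)
    phi = rng.uniform(-np.pi, np.pi, 4 * mult)
    keep = rng.uniform(0.0, 1 + 2 * v2_true, phi.size) < 1 + 2 * v2_true * np.cos(2 * phi)
    phi = phi[keep][:mult]
    m = phi.size
    q2 = np.sum(np.exp(2j * phi))           # second-harmonic Q-vector
    num += abs(q2) ** 2 - m                 # sum of cos 2(phi_i - phi_j) over pairs
    den += m * (m - 1)

v2_est = np.sqrt(num / den)                 # v2{2} = sqrt(<<cos 2(dphi)>>)
print(round(v2_est, 3))                     # close to 0.08 up to statistics
```

Higher-order and mode-coupled harmonics are extracted analogously from multiparticle (four- and six-particle) Q-vector combinations.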
Monoterpenes and their monoterpenoid derivatives form a subclass of terpene(oid)s. They are widely used in medicines/pharmaceuticals, as flavor and fragrance compounds, or in agriculture, and are also considered as future biofuels. However, for many of these substances, extraction from natural sources poses challenges, such as low concentrations in the raw material or diminishing natural sources. Furthermore, many of the structurally more complex terpenoids cannot be chemically synthesized in an economic way. Therefore, microbial production provides an attractive alternative, taking advantage of the often distinct regio- and stereoselectivity of enzymatic reactions. However, monoterpenes and monoterpenoids are challenging products for industrial biotechnology processes due to their pronounced cytotoxicity, which complicates production in microorganisms compared to longer-chain terpenes (sesquiterpenes, diterpenes, etc.).
The aim of this thesis was to generate a biotechnological complement to fossil-resource-based chemical processes for industrial monoterpenoid production, creating a starting point for the further development of a microbial cell factory based on the microbe Pseudomonas putida KT2440. This production organism should be able to conduct a whole-cell biocatalysis to selectively oxyfunctionalize monoterpene hydrocarbons, using renewable industrial by-products and waste streams as raw material for monoterpenoid production (Figure 1). As a model substance, the production of (-)-menthol was addressed due to its industrial significance. (-)-Menthol is one of the world’s most widely used flavor and fragrance compounds by volume as well as a medical component, with an annual production volume of over 30,000 tons. One approach to (-)-menthol production from renewable resources is a biotechnological(-chemical) two-step conversion (Figure 1), starting from (+)-limonene, a by-product of the citrus fruit processing industry.
The thesis project was divided into three parts. In the first part, enzymes (limonene-3-hydroxylases) were to be identified that can convert (+)-limonene into the precursor of (-)-menthol, (+)-trans-isopiperitenol. To counteract product toxicity, in the second part, the tolerance of the intended production organism P. putida KT2440 towards monoterpenes and their monoterpenoid derivatives was to be increased. Finally, in the third part, the identified hydroxylase enzymes were to be expressed in the improved P. putida KT2440 strain to create a whole-cell biocatalyst for the first reaction step of a two-step (-)-menthol production, starting from (+)-limonene.
To achieve these objectives, different genetic/molecular biology and analytical methods were applied. In this way, two cytochrome P450 monooxygenase enzymes from the fungi Aureobasidium pullulans and Hormonema carpetanum could be identified and functionally expressed in Pichia pastoris, which can catalyze the intended hydroxylation reaction on (+)-limonene with high stereo- and regioselectivity. A further characterization of the enzyme from A. pullulans showed that apart from (+)-limonene the protein can also hydroxylate (−)-limonene, α- and β-pinene, as well as 3-carene.
Furthermore, within this thesis, mechanisms of microbial monoterpenoid resistance of P. putida could be identified. It was shown that the different monoterpenes and monoterpenoids tested have very different toxicity levels and that mainly the Ttg efflux pumps of P. putida GS1 are responsible for the tolerance to many of these compounds. Based on these results, a P. putida KT2440 strain with increased resistance to various monoterpenoids, including isopiperitenol, could then be generated, which can be used as a host organism for the further development of monoterpenoid-producing cell factories.
While within the scope of this work the heterologous expression of the fungal gene in prokaryotic cells in a functional form could not be realized despite different approaches, the identified enzymes, the monoterpenoid-tolerant P. putida strain and a plasmid developed for heterologous gene expression in P. putida provide a starting point for the further design of a microbial cell factory for biotechnological monoterpenoid production.
miRNA biogenesis is tightly regulated to avoid dysfunction and consequent disease development. Here, we describe modulation of miRNA processing as a novel noncanonical function of the 5-lipoxygenase (5-LO) enzyme in monocytic cells. In differentiated Mono Mac 6 (MM6) cells, we found an in situ interaction of 5-LO with Dicer, a key enzyme in miRNA biogenesis. RNA sequencing of small noncoding RNAs revealed a functional impact: knockout of 5-LO altered the expression profile of several miRNAs. Effects of 5-LO could be observed at two levels. qPCR analyses thus indicated that (a) 5-LO promotes the transcription of the evolutionarily conserved miR-99b/let-7e/miR-125a cluster and (b) the 5-LO-Dicer interaction downregulates the processing of pre-let-7e, resulting in an increase in miR-125a and miR-99b levels by 5-LO without concomitant changes in let-7e levels in differentiated MM6 cells. Our observations suggest that 5-LO regulates the miRNA profile by modulating the Dicer-mediated processing of distinct pre-miRNAs. 5-LO inhibits the formation of let-7e, which is a well-known inducer of cell differentiation, but promotes the generation of miR-99b and miR-125a, known to induce cell proliferation and the maintenance of leukemic stem cell functions.
"Wie kann ich die Čechen differenzieren? In städtische u. ländliche (Machar u. Brezina)?" ("How can I differentiate the Czechs? Into urban and rural (Machar and Brezina)?"), Hugo von Hofmannsthal uncertainly asked Hermann Bahr while drafting the editorial plan for the "Österreichische Bibliothek". As far as Czech literature is concerned, the question may seem somewhat naive, yet it shows that Hofmannsthal had at least some knowledge of two prominent representatives of early Czech literary modernism. The poet and feuilletonist Josef Svatopluk Machar (1864–1942) had lived in Vienna since 1889. Bahr had met him in July 1892 and collaborated with him in founding the weekly "Die Zeit"; even after Bahr's withdrawal from that journal in 1899, Machar served as an important link to Czech writers and politicians, including T. G. Masaryk. It was not hard to come across Machar's name in the German-Austrian press at the beginning of the 20th century. His conflicts with the Catholic Church, which he provoked with his feuilletons, poems, and lectures, were mentioned frequently. Moreover, he had become the most translated Czech poet. The reason for this was not only the quality of his work and his growing popularity among Czech readers; several Czech writers could have matched that. The main reason was rather that he lived in Vienna and that a number of his friends there had translated him. One of them was Emil Saudek, who drew Hofmannsthal's attention to the second of the Czech poets mentioned above, the Symbolist Otokar Březina. This hitherto little-known circumstance is examined in what follows.
The text edited and translated here forms part of a collective review ("Poésie") in which Teodor de Wyzewa discussed new volumes of poetry in the February 1887 issue of the "Revue indépendante".
For Wyzewa they are the occasion for a fundamental reflection on Symbolism, insofar as it is under discussion as a new mode of writing poetry and, as he argues, the concept of the symbol, to which the name of the new movement (already current in 1887) refers, stands in need of clarification.
The focus of this study is the historiography of Frederick I Barbarossa's campaigns of 1154-1158 in Lombardy. While the highly educated bishop Otto of Freising has attracted lively scholarly interest, his two contemporaries who wrote independent accounts of the events have remained largely neglected by research. By comparing the 'Gesta' of Bishop Otto of Freising, the 'Libellus' of Otto Morena of Lodi, and the 'Narratio' of an anonymous writer from Milan, this study brings out the authors' intentions and asks to what extent the contradictory accounts can be understood as "alternative facts".
After an outline of the concept of "alternative facts", which gained attention in the course of Donald Trump's presidency and is understood here as a conscious or unconscious distortion, of the modern reception of Barbarossa, and of the temporal and spatial context, the authors' "starting positions" are considered. The genesis of the 'Gesta' and its relationship to Otto's first work are disputed. It emerges that the positions of Otto of Freising and Otto Morena suggest a pro-imperial intent, while that of the Milanese author suggests an anti-imperial one.
A close reading of the prefaces and prologues of the works reveals the authors' self-declared intentions. That Otto of Freising's 'Gesta' draws on an account of deeds written by or on behalf of Barbarossa, together with his praise of the emperor, points to a coloured portrayal. Otto Morena likewise shows a strong attachment to the emperor, which must raise doubts about the neutrality of his work. The anonymous author from Milan expressly professes to write for the benefit of posterity and places the destruction of Milan in 1162 at the end of a long-standing narrative of victimhood. Even though explicit attacks on the emperor are absent, strong doubts about the neutrality of his account are warranted.
The examination of the events of the year 1154 reveals "alternative" accounts: Otto of Freising's account follows the imperial source and is written in the emperor's interest, as is Otto Morena's, who in addition emphasizes the role of Lodi. The Milanese "counter-account", by contrast, blames negative events exclusively on Barbarossa.
Otto of Freising emphasizes the long-planned imperial coronation in Rome and the campaign against the Normans as the starting point of the first Italian expedition. Otto Morena places the beginning of the dispute between Barbarossa and the Milanese at the assembly of the court in Constance, where the complaints of two men of Lodi allegedly gave rise to Frederick's first Italian expedition. The anonymous Milanese accuses Barbarossa of having set out with the aim of military subjugation.
Otto of Freising adopted Barbarossa's account of an attempted bribery by the Milanese, whose consuls subsequently led his march through desolate country, which Otto Morena also reports. The Milanese writer passes over this in silence and tells instead of mistreatment of the Milanese by the royal retinue. He stylizes the storming of the castle of Rosate as an unprovoked act of violence, whereas the writers from Lodi and Freising argue in its justification.
The independently transmitted 'Conventio', concluded in 1158 between the city and the emperor after the siege of Milan, contained, alongside penal provisions, the recognition of the emperor's sovereignty while preserving the communal form of government. Whereas Otto Morena rendered its provisions only very incompletely, suggesting that he did not know them, the Milanese anonymous, through targeted omissions and falsification of its provisions, once again produced "alternative facts" and created the impression of a return to the years before Barbarossa, when the emperor was far away.
A close examination of the 'lex omnis iurisdictio' established at the diet of Roncaglia in 1158 makes clear that, contrary to previous scholarly opinion, it did not constitute a breach of the 'Conventio'. Confronting the accounts of the events of January 1159 in Milan with the eyewitness report of Vincent of Prague shows that Otto Morena again reports only briefly. The anonymous author, by contrast, delivers an "alternative" account according to which the emperor's envoys had come to break the law. This tendentiousness is also evident in the capture of the castle of Trezzo, for which a report by Otto's former chaplain Rahewin survives.
The accounts reveal that their authors intended to deploy their texts deliberately and thus became producers of "alternative facts". For the historian, this demonstrates once more the importance of a source-critical method, as Johannes Fried impressively advocated in his "Memorik".
Radar technology in the millimeter-wave frequency band offers many interesting features for wind park surveillance, such as structural monitoring of rotor blades or the detection of bats and birds in the vicinity of wind turbines (WTs). Currently, the majority of WTs are subject to shutdown algorithms intended to minimize animal fatalities caused by direct collision with the rotor blades or by barotrauma. The presence of rain is an important parameter in the definition of those algorithms, together with wind speed, temperature, time of day, and season of the year. A Ka-band frequency-modulated continuous-wave radar (33.4-36.0 GHz) installed on the tower of a 2-MW WT was used during a field study. We observed characteristic rain-induced patterns based on the range-Doppler algorithm. To better understand those signatures, we developed a laboratory experiment and implemented a numerical modeling framework. Experimental and numerical results for rain detection and classification are presented and discussed here. Building on this work, a bat- and bird-friendly adaptive WT control can be developed that improves WT efficiency in periods of rain while reducing animal mortality.
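At its core, the range-Doppler processing mentioned above is a pair of FFTs over the fast-time and slow-time axes of the FMCW beat signal: the first resolves range, the second Doppler velocity. The sketch below is illustrative only; the chirp and sample counts, window choices, and the synthetic point target are assumptions for demonstration, not the parameters of the radar used in the study.

```python
import numpy as np

def range_doppler_map(iq):
    """Compute a range-Doppler magnitude map from FMCW beat-signal data.

    iq : complex array of shape (n_chirps, n_samples), one row per chirp.
    Range FFT runs along fast time (samples within a chirp), Doppler FFT
    along slow time (across chirps). Illustrative sketch only.
    """
    n_chirps, n_samples = iq.shape
    # Window to suppress sidelobes, then FFT along fast time -> range bins
    rng_fft = np.fft.fft(iq * np.hanning(n_samples), axis=1)
    # Window and FFT along slow time -> Doppler bins, centred on zero velocity
    win_d = np.hanning(n_chirps)[:, None]
    rd = np.fft.fftshift(np.fft.fft(rng_fft * win_d, axis=0), axes=0)
    return 20.0 * np.log10(np.abs(rd) + 1e-12)  # magnitude in dB

# Synthetic example: a single point target yields one localized peak
n_chirps, n_samples = 64, 128
t = np.arange(n_samples)             # fast-time sample index
c = np.arange(n_chirps)[:, None]     # slow-time chirp index
# Beat signal with normalized range frequency 0.2 and Doppler frequency 0.1
beat = np.exp(2j * np.pi * (0.2 * t + 0.1 * c))
rdm = range_doppler_map(beat)
peak = np.unravel_index(np.argmax(rdm), rdm.shape)
```

Rain, by contrast, fills many range-Doppler cells with distributed echoes rather than a single localized peak, which is the kind of signature such a map can separate from discrete targets like bats or birds.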
Unresolved inflammation maintained by release of danger‐associated molecular patterns, particularly high‐mobility group box‐1 (HMGB1), is crucial for hepatocellular carcinoma (HCC) pathogenesis. To further characterize interactions between leucocytes and necrotic cancerous tissue, a cellular model of necroinflammation was studied in which murine Raw 264.7 macrophages or primary splenocytes were exposed to necrotic lysates (N‐lys) of murine hepatoma cells or primary hepatocytes. In comparison to those derived from primary hepatocytes, N‐lys from hepatoma cells were highly active, inducing in macrophages efficient expression of inflammatory cytokines like C‐X‐C motif ligand‐2, tumor necrosis factor‐α, interleukin (IL)‐6 and IL‐23‐p19. This activity was associated with higher levels of HMGB1 in hepatoma cells and was curbed by pharmacological blockage of the receptor for advanced glycation end product (RAGE)/HMGB1 axis or the mitogen‐activated protein kinase ERK1/2 pathway. Analysis of murine splenocytes furthermore demonstrated that N‐lys did not contain functionally relevant amounts of TLR4 agonists. Finally, N‐lys derived from hepatoma cells supported inflammatory splenic Th17 and Th1 polarization as detected by IL‐17, IL‐22 or interferon‐γ production. Altogether, a straightforwardly applicable model was established which allows for biochemical characterization of immunoregulation by HCC necrosis in cell culture. The data presented indicate a remarkable inflammatory capacity of necrotic hepatoma cells that, at least partly, depends on the RAGE/HMGB1 axis and may shape immunological properties of the HCC microenvironment.
Cryo-electron tomography combined with subtomogram averaging (StA) has yielded high-resolution structures of macromolecules in their native context. However, high-resolution StA is not commonplace due to beam-induced sample drift, images with poor signal-to-noise ratios (SNR), challenges in CTF correction, and limited particle number. Here we address these issues by collecting tilt series with a higher electron dose at the zero-degree tilt. Particles of interest are then located within reconstructed tomograms, processed by conventional StA, and then re-extracted from the high-dose images in 2D. Single particle analysis tools are then applied to refine the 2D particle alignment and generate a reconstruction. Use of our hybrid StA (hStA) workflow improved the resolution for tobacco mosaic virus from 7.2 to 4.4 Å and for the ion channel RyR1 in crowded native membranes from 12.9 to 9.1 Å. These resolution gains make hStA a promising approach for other StA projects aimed at achieving subnanometer resolution.
The blood-brain barrier (BBB) protects the brain microenvironment from external damage. It is formed by endothelial cells (ECs) lining the brain vessels, which express tight junctions and show reduced transcytosis, resulting in very low paracellular and transcellular passage of substances, respectively (low permeability). The specific BBB phenotype is maintained by Wnt molecules secreted by astrocytes (ACs) that bind to receptors in ECs and start a molecular cascade in which β-catenin translocates to the nucleus and activates the transcription of BBB genes.
An increasing number of studies report BBB dysfunction in Alzheimer’s disease (AD), although the topic is currently under debate. AD is a neurodegenerative condition characterized by brain deposits of Aβ aggregates and Tau neurofibrillary tangles. The aetiology of AD is unknown, although around 5% of all AD cases have a genetic origin. Mutations in APP or PSEN1/2 can lead to Aβ over-production and accumulation, causing familial AD. There is no cure for AD, all clinical trials having failed in recent years. Consequently, I studied the role of the BBB in AD, aiming to investigate whether BBB dysfunction occurs in AD and to identify, by transcriptomic analysis, novel gene regulation occurring at the BBB in AD. The final objective was to evaluate the potential of the identified BBB genes as therapeutic targets.
I used transgenic mice expressing the human APP Swedish, Dutch and Iowa mutations under the control of the neuronal promoter Thy1 (Thy1-APPSwDI) as an AD model. In this AD mouse model, I detected Aβ deposits and memory loss by immunofluorescence (IF) and behavioural tests. Importantly, I identified an increase of BBB permeability to 3-4 kDa dextrans in 6-month-old, 9- to 12-month-old, and 18-month-or-older AD mice compared to age-matched wild-type (WT) controls, indicating BBB dysfunction in AD mice.
To study the BBB transcriptional changes in AD, I sequenced the RNA from brain microvessels (MBMVs) of 6- and 18-month-old AD and WT mice, as well as from FACS-sorted ECs, mural cells (MuCs), ACs, and microglia (MG), in collaboration with GenXPro, a company specialized in 3’ RNA sequencing. No transcriptomic datasets of ECs and MuCs in the context of AD are currently publicly available, making this likely the first study to sequence those cell types in an AD model.
The analysis of sequencing data from MBMVs and ECs revealed a repression of Wnt/β-catenin signalling and an increase of inflammatory genes such as Ccl3 in ECs, which could explain the BBB dysfunction observed in AD mice. Furthermore, the sequencing data from MuCs identified a set of 11 genes strongly regulated in both the 6- and 18-month AD groups. Three of those 11 genes are known to be involved in inflammatory processes, suggesting that inflammation plays an important role in MuCs and ECs during AD.
Some MG genes up-regulated in AD, such as Trem2 and Apoe, are well established from published sequencing data. These genes were also found in the FACS-sorted MG data, validating the AD model and, with it, the other novel sequenced datasets. Importantly, one of the genes most strongly regulated in AD in the MBMV and MG samples was Dkk2, a member of the Dickkopf family of secreted proteins known to be involved in Wnt signalling modulation. A dual luciferase reporter assay proved that Dkk2 is a Wnt inhibitor. A preliminary immunohistochemistry examination of DKK2 in human brain autopsy tissue from an AD patient and an age-matched control revealed stronger DKK2 immunoreactivity in the AD brain.
In order to answer the question whether a rescue of BBB function would ameliorate AD symptoms, I made use of a tamoxifen-inducible transgenic mouse line to activate the Wnt/β-catenin pathway specifically in ECs, leading to a gain of function (GOF) condition (Cdh5-CreERT2+/–/Ctnnb1(Ex3)fl/fl). This mouse line was then crossed with the AD line, creating AD/GOF and AD/control groups.
AD/GOF mice performed better in a Y-maze memory test than AD/controls when the Wnt/β-catenin pathway was induced before AD onset, indicating a protective effect. Moreover, this finding implies that preserving BBB function protects the brain from AD-related toxicity, pointing to an important role of the brain vasculature in AD and to its potential as a therapeutic target.
Recurrent cortical network dynamics plays a crucial role for sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for the understanding of recurrent neural computation, it often requires manual adjustments of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being a mathematical and relatively complex quantity, the spectral radius is not readily accessible to biological neural networks, which generally adhere to the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g. membrane potentials), or transmitted via synaptic connectivity. We present two synaptic scaling rules for echo state networks that solely rely on locally accessible variables. Both rules work online, in the presence of a continuous stream of input signals. The first rule, termed flow control, is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires the separability of external and recurrent input currents. We gained further insight into the adaptation dynamics of flow control by using a mean field approximation on the variances of neural activities that allowed us to describe the interplay between network activity and adaptation as a two-dimensional dynamical system. The second rule that we considered, variance control, directly regulates the variance of neural activities by locally scaling the recurrent synaptic weights. The target set point of this homeostatic mechanism is dynamically determined as a function of the variance of the locally measured external input. This functional relation was derived from the same mean-field approach that was used to describe the approximate dynamics of flow control.
The effectiveness of the presented mechanisms was tested numerically using different external input protocols. The network performance after adaptation was evaluated by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control can reliably regulate the spectral radius under different input statistics, but precise tuning is negatively affected by interneural correlations. Furthermore, flow control showed a consistent task performance over a wide range of input strengths/variances. Variance control, on the other hand, did not yield the desired spectral radii with the same precision. Moreover, task performance was less consistent across different input strengths.
Given the better performance and simpler mathematical form of flow control, we concluded that a local control of the spectral radius via an implicit adaptation scheme is a realistic alternative to approaches using classical “set point” homeostatic feedback controls of neural firing.
Author summary How can a neural network control its recurrent synaptic strengths such that network dynamics are optimal for sequential information processing? An important quantity in this respect, the spectral radius of the recurrent synaptic weight matrix, is a non-local quantity. Therefore, a direct calculation of the spectral radius is not feasible for biological networks. However, we show that there exists a local and biologically plausible adaptation mechanism, flow control, which allows the spectral radius of the recurrent weights to be controlled while the network is operating under the influence of external inputs. Flow control is based on a theorem of random matrix theory, which is applicable if inter-synaptic correlations are weak. We apply the new adaptation rule to echo state networks tasked with performing a time-delayed XOR operation on random binary input sequences. We find that flow-controlled networks can adapt to a wide range of input strengths while retaining essentially constant task performance.
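The idea behind flow control can be illustrated with a minimal echo state network sketch: each neuron compares its squared recurrent membrane potential with the squared target flow (the target spectral radius times its own activity) and multiplicatively adjusts its incoming recurrent weights, using only locally available quantities. The update rule, learning rate, and network parameters below are illustrative assumptions for demonstration, not the exact formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
W = rng.normal(0.0, 2.0 / np.sqrt(N), (N, N))  # unscaled recurrent weights, spectral radius ~ 2
a = np.ones(N)                                  # local per-neuron scaling of incoming weights
eps = 2e-3                                      # adaptation rate
rho_t = 1.0                                     # target spectral radius

def spectral_radius(a, W):
    # Global diagnostic only; the adaptation itself never computes this.
    return np.max(np.abs(np.linalg.eigvals(a[:, None] * W)))

rho_before = spectral_radius(a, W)

x = rng.uniform(-0.1, 0.1, N)                   # neural activities
for _ in range(5000):
    ext = rng.normal(0.0, 0.5, N)               # external drive, separable from recurrence
    rec = (a[:, None] * W) @ x                  # recurrent membrane potential per neuron
    # Flow control (schematic): on average the squared recurrent input should
    # equal rho_t^2 times the squared presynaptic activity; nudge each neuron's
    # scaling factor toward that balance using only local quantities.
    a *= 1.0 + eps * (rho_t**2 * x**2 - rec**2)
    x = np.tanh(rec + ext)

rho_after = spectral_radius(a, W)
```

Under weak interneural correlations, the fixed point of this stochastic update pins the spectral radius of the effective weight matrix near the target value, here 1.0, even though no neuron ever evaluates the eigenvalue spectrum itself.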