Highlights
• German patients with LGS were identified using the most specific algorithm to date.
• Prevalence of probable LGS with epilepsy diagnosis before age 6 was 6.5 per 100,000.
• High healthcare costs of €22,787 PPY; mostly due to inpatient and home nursing care.
• Costs were greater in patients prescribed rescue medications.
• Over 10 years, LGS patients had significantly higher mortality vs. controls (2.88% vs. 0.01%).
Abstract
Objective: This retrospective study examined patients with probable Lennox-Gastaut syndrome (LGS) identified from German healthcare data.
Methods: This 10-year study (2007–2016) assessed healthcare insurance claims information from the Vilua Healthcare research database. A selection algorithm considering diagnoses and drug prescriptions identified patients with probable LGS. To increase the sensitivity of the identification algorithm, two populations were defined: all patients with probable LGS (broadly defined) and only those with a documented epilepsy diagnosis before 6 years of age (narrowly defined). This specific criterion was used as LGS typically has a peak seizure onset between age 3 and 5 years. Primary analyses were prevalence and demographics; secondary analyses included healthcare costs, hospitalization rate and length of stay (LOS), medication use, and mortality.
Results: In the final year of the study, 545 patients with broadly defined probable LGS (mean [range] age: 31.4 [2–89] years; male: 53%) were identified. Using the narrowly defined probable LGS definition, the number of patients was reduced to 102 (mean [range] age: 7.4 [2–14] years; male: 52%). Prevalence of broadly defined and narrowly defined probable LGS was 39.2 and 6.5 per 100,000 people. During the 10-year study, 208 patients with narrowly defined probable LGS were identified and followed up for 1379 patient-years. The mean annual cost of healthcare was €22,787 per patient-year (PPY); greatest costs were attributable to inpatient care (33%), home nursing care (13%), and medication (10%). Mean annual healthcare costs were significantly greater for those with prescribed rescue medication (45% of patient-years) versus those without (€33,872 vs. €13,785 PPY, p < 0.001). Mean (standard deviation [SD]) annual hospitalization rate was 1.6 (2.0) PPY with mean (SD) annual LOS of 22.7 (46.0) days. Annual hospitalization rate was significantly greater in those who were prescribed rescue medication versus those who were not (2.2 [2.3] vs. 1.1 [1.6] PPY, p < 0.001). The mean (SD) number of different medications prescribed was 11.3 (7.3) PPY and 33.8 (17.0) over the entire observable time per patient (OET); antiepileptic drugs only accounted for 2.1 (1.1) of the medications prescribed PPY and 3.8 (2.0) OET. Over the 10-year study period, mortality in patients with narrowly defined probable LGS was significantly higher than the matched control population (six events [2.88%] vs. one event [0.01%], p < 0.001).
Conclusion: Annual healthcare costs incurred by patients with probable LGS in Germany were substantial, and mostly attributable to inpatient care, home nursing care, and medication. Patients prescribed with rescue medication incurred significantly greater costs than those who were not. Patients with narrowly defined probable LGS had a higher mortality rate versus control populations.
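As a worked illustration of the arithmetic behind the reported figures, the sketch below reproduces the prevalence and per-patient-year (PPY) cost calculations. The population denominator of ~1.39 million insured persons is an assumption back-calculated for illustration only; it is not stated in the abstract.

```python
# Hypothetical sketch of period-prevalence and cost-per-patient-year (PPY)
# arithmetic. The population denominator is an assumption, not a study value.

def prevalence_per_100k(cases: int, population: int) -> float:
    """Period prevalence expressed per 100,000 people."""
    return cases / population * 100_000

def cost_per_patient_year(total_cost_eur: float, patient_years: float) -> float:
    """Mean annual healthcare cost per patient-year (PPY)."""
    return total_cost_eur / patient_years

# 545 broadly defined cases in an assumed population of ~1.39 million
# insured persons yields the reported 39.2 per 100,000:
print(round(prevalence_per_100k(545, 1_390_000), 1))  # 39.2

# Conversely, a mean of 22,787 EUR PPY over the 1379 observed patient-years
# implies total costs of roughly 31.4 million EUR:
print(round(cost_per_patient_year(31_423_273, 1379)))  # 22787
```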
100 Jahre Dieter Janz
(2020)
20 April 2020 marks the centenary of Dieter Janz’s birth. This issue of Zeitschrift für Epileptologie is published in his honor with the aim of tracing the work of Dieter Janz over the last five decades and summarizing new findings on the Janz syndrome (Juvenile Myoclonic Epilepsy), which is named after him.
Protein turnover, the net result of protein synthesis and degradation, enables cells to remodel their proteomes in response to internal and external cues. Previously, we analyzed protein turnover rates in cultured brain cells under basal neuronal activity and found that protein turnover is influenced by subcellular localization, protein function, complex association, cell type of origin, and by the cellular environment (Dörrbaum et al., 2018). Here, we advanced our experimental approach to quantify changes in protein synthesis and degradation, as well as the resulting changes in protein turnover or abundance in rat primary hippocampal cultures during homeostatic scaling. Our data demonstrate that a large fraction of the neuronal proteome shows changes in protein synthesis and/or degradation during homeostatic up- and down-scaling. More than half of the quantified synaptic proteins were regulated, including pre- as well as postsynaptic proteins with diverse molecular functions.
We examined the feedback between the major protein degradation pathway, the ubiquitin-proteasome system (UPS), and protein synthesis in rat and mouse neurons. When protein degradation was inhibited, we observed a coordinated, dramatic reduction in nascent protein synthesis in neuronal cell bodies and dendrites. The mechanism for translation inhibition involved the phosphorylation of eIF2α, surprisingly mediated by eIF2α kinase 1, the heme-regulated inhibitor kinase (HRI). Under basal conditions, neuronal expression of HRI is barely detectable. Following proteasome inhibition, HRI protein levels increase owing to stabilization of HRI and enhanced translation, likely via the increased availability of tRNAs for its rare codons. Once expressed, HRI is constitutively active in neurons because endogenous heme levels are so low; HRI activity results in eIF2α phosphorylation and the resulting inhibition of translation. These data demonstrate a novel role for neuronal HRI that senses and responds to compromised function of the proteasome to restore proteostasis.
Keystone mutualisms, such as corals, lichens or mycorrhizae, sustain fundamental ecosystem functions. Range dynamics of these symbioses are, however, inherently difficult to predict because host species may switch between different symbiont partners in different environments, thereby altering the range of the mutualism as a functional unit. Biogeographic models of mutualisms thus have to consider both the ecological amplitudes of various symbiont partners and the abiotic conditions that trigger symbiont replacement. To address this challenge, we here investigate 'symbiont turnover zones', defined as demarcated regions where symbiont replacement is most likely to occur, as indicated by overlapping abundances of symbiont ecotypes. Mapping the distribution of algal symbionts from two species of lichen-forming fungi along four independent altitudinal gradients, we detected an abrupt and consistent β-diversity turnover suggesting parallel niche partitioning. Modelling contrasting environmental response functions obtained from latitudinal distributions of algal ecotypes consistently predicted a confined altitudinal turnover zone. In all gradients this symbiont turnover zone is characterized by approximately 12°C average annual temperature and approximately 5°C mean temperature of the coldest quarter, marking the transition from Mediterranean to cool temperate bioregions. Integrating the conditions of symbiont turnover into biogeographic models of mutualisms is an important step towards a comprehensive understanding of biodiversity dynamics under ongoing environmental change.
Two-person neuroscience (2PN) is a recently introduced conceptual and methodological framework used to investigate the neural basis of human social interaction from simultaneous neuroimaging of two or more subjects (hyperscanning). In this study, we adopted a 2PN approach and a multiple-brain connectivity model to investigate the neural basis of a form of cooperation called joint action. We hypothesized different intra-brain and inter-brain connectivity patterns when comparing the interpersonal properties of joint action with non-interpersonal conditions, with a focus on co-representation, a core ability at the basis of cooperation. Thirty-two subjects were enrolled in dual-EEG recordings during a computerized joint action task including three conditions: one in which the dyad jointly acted to pursue a common goal (joint), one in which each subject interacted with the PC (PC), and one in which each subject performed the task individually (Solo).
A combination of multiple-brain connectivity estimation and specific indices derived from graph theory allowed us to compare interpersonal with non-interpersonal conditions in four different frequency bands. Our results indicate that all the indices were modulated by the interaction and returned a significantly stronger integration of multiple-subject networks in the joint vs. PC and Solo conditions. A subsequent classification analysis showed that features based on multiple-brain indices led to better discrimination between social and non-social conditions than single-subject indices. Taken together, our results suggest that multiple-brain connectivity can provide deeper insight into the neural basis of cooperation in humans.
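The kind of integration index involved can be illustrated with a toy example. This is a minimal sketch, not the authors' estimator: real multiple-brain networks are built from spectral connectivity estimates of EEG data, and the node labels and edges below are entirely hypothetical. The sketch merges the channels of two subjects into one graph and compares global efficiency between a "joint" network containing inter-brain edges and a "solo" network without them.

```python
from collections import deque
from itertools import combinations

def global_efficiency(nodes, edges):
    """Mean inverse shortest-path length over all node pairs.
    Unreachable pairs contribute 0, so a fragmented network scores lower."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def bfs_distances(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        return dist

    total, n_pairs = 0.0, 0
    for a, b in combinations(nodes, 2):
        n_pairs += 1
        d = bfs_distances(a).get(b)
        if d:
            total += 1.0 / d
    return total / n_pairs

# Four EEG channels per subject (A1..A4 and B1..B4), same intra-brain wiring:
nodes = ["A1", "A2", "A3", "A4", "B1", "B2", "B3", "B4"]
intra = [("A1", "A2"), ("A2", "A3"), ("A3", "A4"),
         ("B1", "B2"), ("B2", "B3"), ("B3", "B4")]
inter = [("A2", "B2"), ("A3", "B3")]  # inter-brain edges (joint condition only)

joint_eff = global_efficiency(nodes, intra + inter)
solo_eff = global_efficiency(nodes, intra)
print(joint_eff > solo_eff)  # True: inter-brain edges integrate the network
```

The comparison captures the qualitative finding: adding inter-brain links yields a more strongly integrated multiple-subject network.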
Background: Data on the arrhythmic burden of women at risk for sudden cardiac death are limited, especially in patients using the wearable cardioverter-defibrillator (WCD).
Objective: We aimed to characterize WCD compliance, atrial and ventricular arrhythmic burden, and WCD outcomes by sex in patients enrolled in the Prospective Registry of Patients Using the Wearable Cardioverter Defibrillator (WEARIT-II U.S. Registry).
Methods: In the WEARIT-II Registry, we stratified 2000 patients by sex into women (n = 598) and men (n = 1402). WCD wear time, ventricular and atrial arrhythmic events during WCD use, and implantable cardioverter-defibrillator (ICD) implantation rates at the end of WCD use were evaluated.
Results: The mean WCD wear time was similar in women and men (94 days vs 90 days; P = .145), with longer daily use in women (21.4 h/d vs 20.7 h/d; P = .001). Burden of ventricular tachycardia or ventricular fibrillation was higher in women, with 30 events per 100 patient-years compared with 18 events per 100 patient-years in men (P = .017), with similar findings for treated and non-treated ventricular tachycardia/ventricular fibrillation. Recurrent atrial arrhythmias/sustained ventricular tachycardia was also more frequent in women than in men (167 events per 100 patient-years vs 73 events per 100 patient-years; P = .042). However, ICD implantation rate at the end of WCD use was similar in both women and men (41% vs 39%; P = .448).
Conclusion: In the WEARIT-II Registry, we have shown a higher burden of ventricular and atrial arrhythmic events in women than in men. ICD implantation rates at the end of WCD use were similar. Our findings warrant monitoring women at risk for sudden cardiac death who have a high burden of atrial and ventricular arrhythmias while using the WCD.
Highlights
• Transparency of design, reference frames and support for action were found to support students' sense-making of LA dashboards.
• The higher the overall SRL score, the more relevant the three factors were perceived by learners.
• Learner goals affect how relevant students find reference frames.
• The SRL effect on the perceived relevance of transparency depends on learner goals.
Abstract
Unequal stakeholder engagement is a common pitfall of learning analytics adoption in higher education, leading to lower buy-in and flawed tools that fail to meet the needs of their target groups. With each design decision, we make assumptions about how learners will make sense of the visualisations, but we know very little about how students make sense of dashboards and which aspects influence their sense-making. We investigated how learner goals and self-regulated learning (SRL) skills influence dashboard sense-making using a mixed-methods research design: a qualitative pre-study followed up by an extensive quantitative study with 247 university students. We uncovered three latent variables for sense-making: transparency of design, reference frames and support for action. SRL skills predict how relevant students find these constructs. Learner goals have a significant effect only on the perceived relevance of reference frames. Knowing which factors influence students' sense-making will lead to more inclusive and flexible designs that cater to the needs of both novice and expert learners.
Decline in physical activity in the weeks preceding sustained ventricular arrhythmia in women
(2020)
Background: Heightened risk of cardiac arrest following physical exertion has been reported. Among patients with an implantable defibrillator, an appropriate shock for sustained ventricular arrhythmia was preceded by a retrospective self-report of engaging in mild-to-moderate physical activity. Previous studies evaluating the relationship between activity and sudden cardiac arrest lacked an objective measure of physical activity and women were often underrepresented.
Objective: To determine the relationship between physical activity, recorded by accelerometer in a wearable cardioverter-defibrillator (WCD), and sustained ventricular arrhythmia among female patients.
Methods: A dataset of female adult patients prescribed a WCD for a diagnosis of myocardial infarction or dilated cardiomyopathy was compiled from a commercial database. Curve estimation, to include linear and nonlinear interpolation, was applied to physical activity as a function of time (days before arrhythmia).
Results: Among women who received an appropriate WCD shock for sustained ventricular arrhythmia (N = 120), a quadratic relationship between time and activity was present prior to shock. Physical activity increased from the beginning of the 30-day period up until day −16 (16 days before the ventricular arrhythmia), after which activity began to decline.
Conclusion: For patients who received treatment for sustained ventricular arrhythmia, a decline in physical activity was found during the 2 weeks preceding the arrhythmic event. Device monitoring for a sustained decline in physical activity may be useful to identify patients at near-term risk of a cardiac arrest.
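The curve estimation described above amounts to fitting activity as a quadratic function of time and locating its turning point. In the sketch below, the coefficients are invented for illustration; only the peak at day −16 mirrors the reported finding.

```python
def quadratic_vertex(a: float, b: float) -> float:
    """Turning point of y = a*t**2 + b*t + c, i.e. where dy/dt = 2*a*t + b = 0."""
    return -b / (2 * a)

# Hypothetical fitted curve: activity(t) = -0.5*t**2 - 16*t + 40,
# with t = days relative to the shock (t < 0 before the event).
peak_day = quadratic_vertex(a=-0.5, b=-16.0)
print(peak_day)  # -16.0: activity rises until 16 days before the arrhythmia
```

With a downward-opening parabola (a < 0), the vertex marks the day on which activity stops rising and begins its decline toward the event.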
Attention-Deficit/Hyperactivity Disorder (ADHD) and obesity are frequently comorbid, genetically correlated, and share brain substrates. The biological mechanisms driving this association are unclear, but candidate systems, like dopaminergic neurotransmission and circadian rhythm, have been suggested. Our aim was to identify the biological mechanisms underpinning the genetic link between ADHD and obesity measures and investigate associations of overlapping genes with brain volumes. We tested the association of dopaminergic and circadian rhythm gene sets with ADHD, body mass index (BMI), and obesity (using GWAS data of N = 53,293, N = 681,275, and N = 98,697, respectively). We then conducted genome-wide ADHD–BMI and ADHD–obesity gene-based meta-analyses, followed by pathway enrichment analyses. Finally, we tested the association of ADHD–BMI overlapping genes with brain volumes (primary GWAS data N = 10,720–10,928; replication data N = 9428). The dopaminergic gene set was associated with both ADHD (P = 5.81 × 10−3) and BMI (P = 1.63 × 10−5); the circadian rhythm was associated with BMI (P = 1.28 × 10−3). The genome-wide approach also implicated the dopaminergic system, as the Dopamine-DARPP32 Feedback in cAMP Signaling pathway was enriched in both ADHD–BMI and ADHD–obesity results. The ADHD–BMI overlapping genes were associated with putamen volume (P = 7.7 × 10−3; replication data P = 3.9 × 10−2)—a brain region with volumetric reductions in ADHD and BMI and linked to inhibitory control. Our findings suggest that dopaminergic neurotransmission, partially through DARPP-32-dependent signaling and involving the putamen, is a key player underlying the genetic overlap between ADHD and obesity measures. Uncovering shared etiological factors underlying the frequently observed ADHD–obesity comorbidity may have important implications in terms of prevention and/or efficient treatment of these conditions.
Inhibitors against the NS3-4A protease of hepatitis C virus (HCV) have proven to be useful drugs in the treatment of HCV infection. Although variants have been identified with mutations that confer resistance to these inhibitors, the mutations do not restore replicative fitness and no secondary mutations that rescue fitness have been found. To gain insight into the molecular mechanisms underlying the lack of fitness compensation, we screened known resistance mutations in infectious HCV cell culture with different genomic backgrounds. We observed that the Q41R mutation of NS3-4A efficiently rescues the replicative fitness in cell culture for virus variants containing mutations at NS3-Asp168. To understand how the Q41R mutation rescues activity, we performed protease activity assays complemented by molecular dynamics simulations, which showed that protease-peptide interactions far outside the targeted peptide cleavage sites mediate substrate recognition by NS3-4A and support protease cleavage kinetics. These interactions shed new light on the mechanisms by which NS3-4A cleaves its substrates, viral polyproteins and a prime cellular antiviral adaptor protein, the mitochondrial antiviral signaling protein MAVS. Peptide binding is mediated by an extended hydrogen-bond network in NS3-4A that was effectively optimized for protease-MAVS binding in Asp168 variants with rescued replicative fitness from NS3-Q41R. In the protease harboring NS3-Q41R, the N-terminal cleavage products of MAVS retained high affinity to the active site, rendering the protease susceptible for potential product inhibition. Our findings reveal delicately balanced protease-peptide interactions in viral replication and immune escape that likely restrict the protease adaptive capability and narrow the virus evolutionary space.
Cryo-electron tomography combined with subtomogram averaging (StA) has yielded high-resolution structures of macromolecules in their native context. However, high-resolution StA is not commonplace due to beam-induced sample drift, images with poor signal-to-noise ratios (SNR), challenges in CTF correction, and limited particle number. Here we address these issues by collecting tilt series with a higher electron dose at the zero-degree tilt. Particles of interest are then located within reconstructed tomograms, processed by conventional StA, and then re-extracted from the high-dose images in 2D. Single particle analysis tools are then applied to refine the 2D particle alignment and generate a reconstruction. Use of our hybrid StA (hStA) workflow improved the resolution for tobacco mosaic virus from 7.2 to 4.4 Å and for the ion channel RyR1 in crowded native membranes from 12.9 to 9.1 Å. These resolution gains make hStA a promising approach for other StA projects aimed at achieving subnanometer resolution.
Hypoxia inhibits ferritinophagy, increases mitochondrial ferritin, and protects from ferroptosis
(2020)
Highlights
• Hypoxia decreases NCOA4 transcription in primary human macrophages.
• NCOA4 mRNA is a target of miR-6862-5p.
• Lowering NCOA4 increases FTMT abundance under hypoxia.
• FTMT and FTH protect from ferroptosis.
• Tumor cells lack the hypoxic decrease of NCOA4 and fail to stabilize FTMT.
Abstract
Cellular iron, at the physiological level, is essential to maintain several metabolic pathways, while an excess of free iron may cause oxidative damage and/or provoke cell death. Consequently, iron homeostasis has to be tightly controlled. Under hypoxia, these regulatory mechanisms are not well understood in human macrophages. Hypoxic primary human macrophages reduced intracellular free iron and increased ferritin expression, including mitochondrial ferritin (FTMT), to store iron. In parallel, nuclear receptor coactivator 4 (NCOA4), a master regulator of ferritinophagy, decreased and was proven to directly regulate FTMT expression. Reduced NCOA4 expression resulted from a lower rate of hypoxic NCOA4 transcription combined with a microRNA-6862-5p-dependent degradation of NCOA4 mRNA, the latter being regulated by c-Jun N-terminal kinase (JNK). Pharmacological inhibition of JNK under hypoxia increased NCOA4 and prevented FTMT induction. FTMT and ferritin heavy chain (FTH) cooperated to protect macrophages from RSL-3-induced ferroptosis under hypoxia, as this form of cell death is linked to iron metabolism. In contrast, in HT1080 fibrosarcoma cells, which are sensitive to ferroptosis, NCOA4 and FTMT were not regulated. Our study helps to understand mechanisms of hypoxic FTMT regulation and to link ferritinophagy to macrophage sensitivity to ferroptosis.
The tremendous diversity of life in the ocean has proven to be a rich source of inspiration for drug discovery, with success rates for marine natural products up to 4 times higher than other naturally derived compounds. Yet the marine biodiscovery pipeline is characterized by chronic underfunding, bottlenecks and, ultimately, untapped potential. For instance, a lack of taxonomic capacity means that, on average, 20 years pass between the discovery of new organisms and the formal publication of scientific names, a prerequisite to proceed with detecting and isolating promising bioactive metabolites. The need for “edge” research that can spur novel lines of discovery, and the length of high-risk drug discovery processes, are poorly matched with research grant cycles. Here we propose five concrete pathways to broaden the biodiscovery pipeline and open the social and economic potential of the ocean genome for global benefit: (1) investing in fundamental research, even when the links to industry are not immediately apparent; (2) cultivating equitable collaborations between academia and industry that share both risks and benefits for these foundational research stages; (3) providing new opportunities for early-career researchers and under-represented groups to engage in high-risk research without risking their careers; (4) sharing data with global networks; and (5) protecting genetic diversity at its source through strong conservation efforts. The treasures of the ocean have provided fundamental breakthroughs in human health and still remain under-utilised for human benefit, yet that potential may be lost if we allow the biodiscovery pipeline to become blocked in a search for quick-fix solutions.
Aims: Acetylsalicylic acid (ASA) is widely used for the prevention of atherothrombotic events in patients with chronic coronary artery disease (CAD) and peripheral artery disease (PAD), but the risk of vascular events remains high. We aimed to identify randomised controlled trials (RCTs) on antithrombotic treatments in patients with chronic CAD or PAD.
Methods: Searches were conducted in MEDLINE, EMBASE, and CENTRAL on March 1st, 2018. This systematic review (SR) uses a narrative synthesis to summarize the evidence for the efficacy and safety of antiplatelet and anticoagulant therapies in patients with chronic CAD or PAD.
Results: Four RCTs from 27 publications were included. Study groups included 15,603 to 27,395 patients. ASA alone was the most extensively studied (n = 3); other studies included rivaroxaban with or without ASA (n = 1), vorapaxar alone (n = 1), and clopidogrel with (n = 1) or without ASA (n = 1). Clopidogrel alone and clopidogrel plus ASA compared to ASA presented similar efficacy with comparable safety profile. Rivaroxaban plus ASA significantly reduced the risk of the composite of cardiovascular death, myocardial infarction, and stroke compared to ASA alone, although major bleeding with rivaroxaban plus ASA increased.
Conclusion: There is limited and heterogeneous evidence on the prevention of atherothrombotic events in patients with chronic CAD or PAD. Clopidogrel alone and clopidogrel plus ASA did not demonstrate superiority over ASA alone. A combination of rivaroxaban plus ASA may offer significant additional benefit in reducing cardiovascular outcomes, yet it may increase the risk of bleeding, compared to ASA alone.
Determination of a minimal postmortem interval via age estimation of necrophagous diptera has been restricted to the juvenile stages and the time until emergence of the adult fly, i.e. up until 2–6 weeks depending on species and temperature. Age estimation of adult flies could extend this period by adding the age of the fly to the time needed for complete development. In this context pteridines are promising metabolites, as they accumulate in the eyes of flies with increasing age. We studied adults of the blow fly Lucilia sericata at constant temperatures of 16 °C and 25 °C up to an age of 25 days and estimated their pteridine levels by fluorescence spectroscopy. Age was given in accumulated degree days (ADD) across temperatures. Additionally, a mock case was set up to test the applicability of the method. Pteridine increases logarithmically with increasing ADD, but after 70–80 ADD the increase slows down and the curve approaches a maximum. Sex had a significant impact (p < 4.09 × 10−6) on pteridine fluorescence level, while body-size and head-width did not. The mock case demonstrated that a slight overestimation of the real age (in ADD) only occurred in two out of 30 samples. Age determination of L. sericata on the basis of pteridine levels seems to be limited to an age of about 70 ADD, but depending on the ambient temperature this could cover an extra amount of time of about 5–7 days after completion of the metamorphosis.
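The accumulated-degree-day (ADD) bookkeeping used above can be sketched as follows. This is a minimal illustration only: the developmental threshold temperature is an assumption (the study reports age directly in ADD across its two rearing temperatures).

```python
def accumulated_degree_days(days: float, temp_c: float,
                            base_temp_c: float = 0.0) -> float:
    """ADD: time multiplied by the temperature excess above a developmental
    threshold (base_temp_c); no degree days accrue below the threshold."""
    return days * max(temp_c - base_temp_c, 0.0)

# A fly kept 25 days at each of the two study temperatures, with an
# assumed 0 degC threshold for illustration:
print(accumulated_degree_days(25, 16))  # 400.0
print(accumulated_degree_days(25, 25))  # 625.0
```

Expressing age in ADD rather than days is what lets the pteridine curve be pooled across the two rearing temperatures.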
Cabozantinib (Cabometyx®) is a potent multikinase inhibitor targeting the vascular endothelial growth factor (VEGF) receptor 2, the mesenchymal-epithelial transition factor (MET) receptor, and the “anexelekto” (AXL) receptor tyrosine kinase. It is approved for the treatment of advanced hepatocellular carcinoma (HCC) after failure of sorafenib in Europe (since November 2018) and in the USA (since January 2019). The approval of cabozantinib was based on results of the randomized, placebo-controlled, phase 3 CELESTIAL trial in patients with unresectable HCC, who received one or two prior lines of treatment including sorafenib. At the second planned interim analysis, the trial was stopped, because the primary end point overall survival was clearly in favor for cabozantinib. Additionally, median progression-free survival was superior to placebo. The most common ≥ grade 3 relevant adverse events in patients with HCC treated with cabozantinib were palmar–plantar erythrodysesthesia, hypertension, fatigue, and diarrhea. In this review, current data on cabozantinib for the treatment of patients with advanced HCC, with a focus on the management of common adverse events and ongoing clinical trials, are discussed.
External linkages allow nascent ventures to access crucial resources during the process of new product development. Forming external linkages can substantially contribute to a venture’s performance. However, little is known about the paths of external linkage formation, or about the circumstances that drive the choice to pursue one path rather than another. This gap deserves further investigation because we do not know whether insights developed for incumbent firms also apply to nascent ventures. To address this gap, we explore a novel dataset of 370 venture creation processes. Using sequence analyses based on optimal matching techniques and cluster analyses, we reveal that nascent ventures pursue one of four distinct paths of linkage formation activities during new product development. Contrary to the findings of the strategy literature, we find that if nascent ventures engage in external linkages at all, they do not combine exploration- and exploitation-oriented linkages but form either exploration- or exploitation-oriented linkages. Additional regression analyses highlight the circumstances that lead nascent ventures to pursue one path rather than another. Taken together, our analyses show that resource scarcity constitutes an important factor shaping the linkage formation activities of nascent ventures. Accordingly, we show that nascent ventures tend not to optimize by adding complementary knowledge to the firm’s knowledge base but rather to extend the existing knowledge base, a strategy which we call bricolage.
In recent decades, the assessment of instructional quality has grown into a popular and well-funded arm of educational research. The present study contributes to this field by exploring first impressions of untrained raters as an innovative approach to assessment. We apply the thin-slice procedure to obtain ratings of instructional quality along the dimensions of cognitive activation, classroom management, and constructive support based on only 30 s of classroom observation. Ratings were compared to the longitudinal data of students taught in the videos to investigate the connection between these brief glimpses of instructional quality and student learning. In addition, we included samples of raters with different backgrounds (university students, middle school students and educational research experts) to understand differences in thin-slice ratings with respect to their predictive power for student learning. Results suggest that each group provides reliable ratings, as measured by a high degree of agreement between raters, as well as ratings that are predictive of students’ learning. Furthermore, we find that experts’ and middle school students’ ratings of classroom management and constructive support, respectively, explain unique components of variance in student test scores. This incremental validity can be explained by the amount of implicit knowledge (experts) and an attunement to specific cues attributable to emotional involvement (students).
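One simple way to quantify agreement between raters is a mean pairwise correlation across all rater pairs. This is a hedged sketch only: the study may well use a different reliability coefficient (e.g. an intraclass correlation), and the ratings below are invented.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def mean_pairwise_agreement(ratings):
    """Average correlation over all rater pairs (one score list per rater)."""
    pairs = list(combinations(ratings, 2))
    return sum(pearson(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical raters scoring five 30-second clips on a 1-5 scale:
raters = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 5],
]
print(round(mean_pairwise_agreement(raters), 2))  # ~0.73, i.e. high agreement
```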
The tobacco species Nicotiana tabacum and Nicotiana rustica are of great economic importance. They are used to produce tobacco, which, alongside alcohol, ranks among the most widely consumed recreational drugs worldwide. Because it is legal, its toxicity is still underestimated despite increasing warnings and education. The toxicity of the tobacco plant is mainly attributable to the alkaloid nicotine. Poisoning by the plant itself is rare because its appearance hardly invites consumption. More common, by contrast, is poisoning from, for example, swallowed cigarette butts, which can be very dangerous, especially for children. A further risk of poisoning arises during tobacco harvesting: nicotine is also absorbed through the skin and can thus cause Green Tobacco Sickness in tobacco plantation workers. In serious cases, no antidote exists. Activated charcoal should be given as quickly as possible to reduce absorption; otherwise, the nicotine must be removed from the body by gastric lavage. Preventively, greater attention should therefore be drawn to the dangers of tobacco.
The metasomatised continental mantle may play a key role in the generation of some ore deposits, in particular mineral systems enriched in platinum-group elements (PGE) and Au. The cratonic lithosphere is the longest-lived potential source for these elements, but the processes that facilitate their pre-concentration in the mantle and their later remobilisation to the crust are not yet well-established. Here, we report new results on the petrography, major-element, and siderophile- and chalcophile-element composition of native Ni, base metal sulphides (BMS), and spinels in a suite of well-characterised, highly metasomatised and weakly serpentinised peridotite xenoliths from the Bultfontein kimberlite in the Kaapvaal Craton, and integrate these data with published analyses. Pentlandite in polymict breccias (failed kimberlite intrusions at mantle depth) has lower trace-element contents (e.g., median total PGE 0.72 ppm) than pentlandite in phlogopite peridotites and Mica-Amphibole-Rutile-Ilmenite-Diopside (MARID) rocks (median 1.6 ppm). Spinel is an insignificant host for all elements except Zn, and BMS and native Ni account for typically <25% of the bulk-rock PGE and Au. High bulk-rock Te/S suggests a role for PGE-bearing tellurides, which, along with other compounds of metasomatic origin, may host the missing As, Ag, Cd, Sb, Te and, in part, Bi that are unaccounted for by the main assemblage.
The close spatial relationship between BMS and metasomatic minerals (e.g., phlogopite, ilmenite) indicates that the lithospheric mantle beneath Bultfontein was resulphidised by metasomatism after initial melt depletion during stabilisation of the cratonic lithosphere. Newly-formed BMS are markedly PGE-poor, as total PGE contents are <4.2 ppm in pentlandite from seven samples, compared to >26 ppm in BMS in other peridotite xenoliths from the Kaapvaal craton. This represents a strong dilution of the original PGE abundances at the mineral scale, perhaps starting from precursor PGE alloy and small volumes of residual BMS. The latter may have been the precursor to native Ni, which occurs in an unusual Ni-enriched zone in a harzburgite and displays strongly variable, but overall high PGE abundances (up to 81 ppm). In strongly metasomatised peridotites, Au is enriched relative to Pd, and was probably added along with S. A combination of net introduction of S, Au +/− PGE from the asthenosphere and intra-lithospheric redistribution, in part sourced from subducted materials, during metasomatic events may have led to sulphide precipitation at ~80–120 km beneath Bultfontein. This process locally enhanced the metallogenic fertility of this lithospheric reservoir. Further mobilisation of the metal budget stored in these S-rich domains and upwards transport into the crust may require interaction with sulphide-undersaturated melts that can dissolve sulphides along with the metals they store.
Objectives: Lumbar spinal stenosis (LSS) and lumbar disc herniation (LDH) are often accompanied by frequent leg cramps that severely affect patients’ quality of life and sleep. Recent evidence suggests that neuromuscular electric stimulation (NMES) of cramp-prone muscles may prevent cramps in lumbar disorders.
Materials and Methods: Thirty-two men and women (63 ± 9 years) with LSS and/or LDH suffering from cramps were randomly allocated to four different groups. Unilateral stimulation of the gastrocnemius was applied twice a week over four weeks (3 × 6 × 5 sec stimulation trains at 30 Hz above the individual cramp threshold frequency [CTF]). Three groups received either 85%, 55%, or 25% of their maximum tolerated stimulation intensity, whereas one group only received pseudo-stimulation.
Results: The number of reported leg cramps decreased in the 25% (25 ± 14 to 7 ± 4; p = 0.002), 55% (24 ± 10 to 10 ± 11; p = 0.014) and 85%NMES (23 ± 17 to 1 ± 1; p < 0.001) group, whereas it remained unchanged after pseudo-stimulation (20 ± 32 to 19 ± 33; p > 0.999). In the 25% and 85%NMES group, this improvement was accompanied by an increased CTF (p < 0.001).
Conclusion: Regularly applied NMES of the calf muscles reduces leg cramps in patients with LSS/LDH even at low stimulation intensity.
We show explicit formulas for the evaluation of (possibly higher-order) fractional Laplacians (-△)ˢ of some functions supported on ellipsoids. In particular, we derive the explicit expression of the torsion function and give examples of s-harmonic functions. As an application, we infer that the weak maximum principle fails in eccentric ellipsoids for s ∈ (1; √3 + 3/2) in any dimension n ≥ 2. We build a counterexample in terms of the torsion function times a polynomial of degree 2. Using point inversion transformations, it follows that a variety of bounded and unbounded domains do not satisfy positivity preserving properties either and we give some examples.
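For background, the operator evaluated in this abstract is standardly defined, for s ∈ (0, 1), by the singular integral below, and for general s > 0 via the Fourier multiplier |ξ|^{2s}; these are textbook definitions, not formulas specific to the paper:

```latex
(-\Delta)^{s} u(x)
  = c_{n,s}\,\mathrm{P.V.}\!\int_{\mathbb{R}^n}
    \frac{u(x)-u(y)}{|x-y|^{\,n+2s}}\,dy ,
\qquad
\widehat{(-\Delta)^{s} u}\,(\xi) = |\xi|^{2s}\,\hat{u}(\xi),
```

where c_{n,s} is a positive normalisation constant depending only on n and s. The Fourier-multiplier form is the one that extends to the higher-order range s > 1 considered in the paper.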
Highlights
• PUR, PVC and PLA microplastics affect life-history parameters of Daphnia magna.
• Natural kaolin particles are less toxic than microplastics.
• Microplastic toxicity is material-specific, e.g. PVC is most toxic on reproduction.
• In case of PVC, plastic chemicals are the main driver of microplastic toxicity.
• PLA bioplastics can be as toxic as conventional plastics.
Abstract
Given the ubiquitous presence of microplastics in aquatic environments, an evaluation of their toxicity is essential. Microplastics are a heterogeneous set of materials that differ not only in particle properties, like size and shape, but also in chemical composition, including polymers, additives and side products. Thus far, it remains unknown whether the plastic chemicals or the particle itself is the driving factor of microplastic toxicity. To address this question, we exposed Daphnia magna for 21 days to irregular polyvinyl chloride (PVC), polyurethane (PUR) and polylactic acid (PLA) microplastics as well as to natural kaolin particles in high concentrations (10, 50, 100, 500 mg/L, ≤ 59 μm) and different exposure scenarios, including microplastics and microplastics without extractable chemicals as well as the extracted and migrating chemicals alone. All three microplastic types negatively affected the life-history of D. magna. However, this toxicity depended on the endpoint and the material. While PVC had the largest effect on reproduction, PLA reduced survival most strongly. The latter indicates that bio-based and biodegradable plastics can be as toxic as their conventional counterparts. The natural particle kaolin was less toxic than microplastics when comparing numerical concentrations. Importantly, the contribution of plastic chemicals to the toxicity was also plastic type-specific. While we can attribute the effects of PVC to the chemicals used in the material, the effects of PUR and PLA plastics were induced by the particles themselves. Our study demonstrates that plastic chemicals can drive microplastic toxicity. This highlights the importance of considering the individual chemical composition of plastics when assessing their environmental risks. Our results suggest that less studied polymer types, like PVC and PUR, as well as bioplastics are of particular toxicological relevance and should be given higher priority in ecotoxicological studies.
Deubiquitinases (DUBs) are vital for the regulation of ubiquitin signals, and both catalytic activity of and target recruitment by DUBs need to be tightly controlled. Here, we identify asparagine hydroxylation as a novel posttranslational modification involved in the regulation of Cezanne (also known as OTU domain–containing protein 7B (OTUD7B)), a DUB that controls key cellular functions and signaling pathways. We demonstrate that Cezanne is a substrate for factor inhibiting HIF1 (FIH1)- and oxygen-dependent asparagine hydroxylation. We found that FIH1 modifies Asn35 within the uncharacterized N-terminal ubiquitin-associated (UBA)-like domain of Cezanne (UBACez), which lacks conserved UBA domain properties. We show that UBACez binds Lys11-, Lys48-, Lys63-, and Met1-linked ubiquitin chains in vitro, establishing UBACez as a functional ubiquitin-binding domain. Our findings also reveal that the interaction of UBACez with ubiquitin is mediated via a noncanonical surface and that hydroxylation of Asn35 inhibits ubiquitin binding. Recently, it has been suggested that Cezanne recruitment to specific target proteins depends on UBACez. Our results indicate that UBACez can indeed fulfill this role as regulatory domain by binding various ubiquitin chain types. They also uncover that this interaction with ubiquitin, and thus with modified substrates, can be modulated by oxygen-dependent asparagine hydroxylation, suggesting that Cezanne is regulated by oxygen levels.
This article presents specific features of reader-reception research conducted with qualitative content analysis. The focus is on literary reading. Analyses of documents of text reception carried out for research purposes in literature didactics pose a doubly hermeneutic challenge: the aim is to understand what readers understand in texts. This entails specific requirements for the analysis process. First, the scope of the context unit must be clarified; differentiated answers are needed here because the given context changes continuously during the reading process. Second, the research interest requires a particular type of category, referred to in the literature as formal or analytic. A further differentiation between strictly formal and theory-based formal categories is proposed here. Third, it must be clarified whether the reconstructed reading activities are processes or whether they allow inferences about underlying dispositions. These requirements are discussed and approaches to addressing them are offered.
Highlights
• Explanation of mobility design and its practical, aesthetic and emblematic effects on travel behaviour.
• Review of recent studies on mobility design elements and the promotion of non-motorised travel.
• Discussion of research gaps and methodological challenges of data collection and comparability.
Abstract
To promote non-motorised travel, many travel behaviour studies acknowledge the importance of the built environment to modal choice, for example with its density or mix of uses. From a mobility design theory perspective, however, objects and environments affect human perceptions, assessments and behaviour in at least three different ways: by their practical, aesthetic and emblematic functions. This review of existing evidence will argue that travel behaviour research has so far mainly focused on the practical function of the built environment. For that purpose, we systematically identified 56 relevant studies on the impacts of the built environment on non-motorised travel behaviour in the Web of Science database. Research on the practical design function primarily involves land use distribution, street network connectivity and the presence of walking and cycling facilities. Only a small number of papers address the aesthetic and emblematic functions. These show that the perceived attractiveness of an environment and evoked feelings of traffic safety increase the likelihood of walking and cycling. However, from a mobility design perspective, the results of the review indicate a gap regarding comprehensive research on the effects of the aesthetic and emblematic functions of the built environment. Further research involving these functions might contribute to a better understanding of how to promote non-motorised travel more effectively. Moreover, limitations related to survey techniques, regional distribution and the comparability of results were identified.
Assessment of individual therapeutic responses provides valuable information concerning treatment benefits in individual patients. We evaluated individual therapeutic responses as determined by the Disease Activity Score-28 joints critical difference for improvement (DAS28-dcrit) in rheumatoid arthritis (RA) patients treated with intravenous tocilizumab or comparator anti-tumor necrosis factor (TNF) agents. The previously published DAS28-dcrit value [DAS28 decrease (improvement) ≥ 1.8] was retrospectively applied to data from two studies of tocilizumab in RA, the 52-week ACT-iON observational study and the 24-week ADACTA randomized study. Data were compared within (not between) studies. DAS28 was calculated with erythrocyte sedimentation rate as the inflammatory marker. Stability of DAS28-dcrit responses and European League Against Rheumatism (EULAR) good responses was determined by evaluating repeated responses at subsequent timepoints. A logistic regression model was used to calculate p values for differences in response rates between active agents. Patient-reported outcomes (PROs; pain, global health, function, and fatigue) in DAS28-dcrit responder versus non-responder groups were compared with an ANCOVA model. DAS28-dcrit individual response rates were 78.2% in tocilizumab-treated patients and 58.2% in anti-TNF-treated patients at week 52 in the ACT-iON study (p = 0.0001) and 90.1% versus 59.1% at week 24 in the ADACTA study (p < 0.0001). DAS28-dcrit responses showed greater stability over time (up to 52 weeks) than EULAR good responses. For both active treatments, DAS28-dcrit responses were associated with statistically significant improvements in mean PRO values compared with non-responders. The DAS28-dcrit response criterion provides robust assessments of individual responses to RA therapy and may be useful for discriminating between active agents in clinical studies and guiding treat-to-target decisions in daily practice.
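For orientation, the DAS28 (ESR) score underlying the DAS28-dcrit criterion combines tender and swollen joint counts, ESR, and patient global health. A minimal sketch using the standard published DAS28-ESR formula and the abstract's response threshold of 1.8; the patient values below are hypothetical:

```python
import math

# Sketch: standard DAS28-ESR score and the DAS28-dcrit response
# criterion (improvement >= 1.8, as stated in the abstract).
# The example joint counts, ESR and global-health values are invented.

def das28_esr(tjc28, sjc28, esr, gh):
    """DAS28 with ESR: tender/swollen joint counts (0-28),
    ESR in mm/h, patient global health GH on a 0-100 VAS."""
    return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr) + 0.014 * gh)

def dcrit_responder(das28_baseline, das28_followup, dcrit=1.8):
    """Individual response: DAS28 decrease (improvement) >= dcrit."""
    return (das28_baseline - das28_followup) >= dcrit

baseline = das28_esr(tjc28=12, sjc28=8, esr=40, gh=60)  # hypothetical patient
followup = das28_esr(tjc28=3, sjc28=2, esr=15, gh=30)
print(round(baseline, 2), round(followup, 2),
      dcrit_responder(baseline, followup))
```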
Human RNF213, which encodes the protein mysterin, is a known susceptibility gene for moyamoya disease (MMD), a cerebrovascular condition with occlusive lesions and compensatory angiogenesis. Mysterin mutations, together with exposure to environmental trigger factors, lead to an elevated stroke risk from childhood onward. Mysterin is induced during cell stress and functions as a cytosolic AAA+ ATPase and ubiquitylation enzyme. Little is known about the contexts in which mysterin is needed. Here, we found that genetic ablation of several mitochondrial matrix factors, such as the peptidase ClpP, the transcription factor Tfam, as well as the peptidase and AAA+ ATPase Lonp1, potently induces Rnf213 transcript expression in various organs, in parallel with other components of the innate immune system. In mouse fibroblasts and human endothelial cells in particular, Rnf213 levels showed prominent upregulation upon Poly(I:C)-triggered TLR3-mediated responses to dsRNA toxicity, as well as upon interferon gamma treatment. Only partial suppression of Rnf213 induction was achieved by C16 as an antagonist of PKR (dsRNA-dependent protein kinase). Since dysfunctional mitochondria were recently reported to release immune-stimulatory dsRNA into the cytosol, our results suggest that mysterin becomes relevant when mitochondrial dysfunction or infections have triggered RNA-dependent inflammation. Thus, MMD has similarities with vasculopathies that involve altered nucleotide processing, such as Aicardi-Goutières syndrome or systemic lupus erythematosus. Furthermore, in MMD, the low penetrance of RNF213 mutations might be modified by dysfunctions in mitochondria or the TLR3 pathway.
Purpose: Neonatal surgery for abdominal wall defects is not performed in a centralized manner in Germany. The aim of this study was to investigate whether treatment for abdominal wall defects in Germany is equally effective compared to international results despite the decentralized care.
Methods: All newborn patients who were clients of the major statutory health insurance company in Germany between 2009 and 2013 and who had a diagnosis of gastroschisis or omphalocele were included. Mortality during the first year of life was analysed.
Results: The 316 patients with gastroschisis were classified as simple (82%) or complex (18%) cases. The main associated anomalies in the 197 patients with omphalocele were trisomy 18/21 (8%), cardiac anomalies (32%) and anomalies of the urinary tract (10%). Overall mortality was 4% for gastroschisis and 16% for omphalocele. Significant factors for non-survival were birth weight below 1500 g for both groups, complex gastroschisis, volvulus and anomalies of the blood supply to the intestine in gastroschisis, and female gender, trisomy 18/21 and lung hypoplasia in omphalocele.
Conclusions: Despite the fact that paediatric surgical care is organized in a decentralized manner in Germany, the mortality rates for gastroschisis and omphalocele are equal to those reported in international data.
A convex body is unconditional if it is symmetric with respect to reflections in all coordinate hyperplanes. We investigate unconditional lattice polytopes with respect to geometric, combinatorial, and algebraic properties. In particular, we characterize unconditional reflexive polytopes in terms of perfect graphs. As a prime example, we study the signed Birkhoff polytope. Moreover, we derive constructions for Gale-dual pairs of polytopes and we explicitly describe Gröbner bases for unconditional reflexive polytopes coming from partially ordered sets.
Purpose of Review: To provide an overview of current surgical peri-implantitis treatment options.
Recent Findings: Surgical procedures for peri-implantitis treatment include two main approaches: non-augmentative and augmentative therapy. Open flap debridement (OFD) and resective treatment are non-augmentative techniques that are indicated in the presence of horizontal bone loss in aesthetically nondemanding areas. Implantoplasty performed adjunctively at supracrestally and buccally exposed rough implant surfaces has been shown to efficiently attenuate soft tissue inflammation compared to control sites. However, this was followed by more pronounced soft tissue recession. Adjunctive augmentative measures are recommended at peri-implantitis sites exhibiting intrabony defects with a minimum depth of 3 mm and in the presence of keratinized mucosa. In more advanced cases with combined defect configurations, a combination of augmentative therapy and implantoplasty at exposed rough implant surfaces beyond the bony envelope is feasible.
Summary: For the time being, no particular surgical protocol or material can be considered as superior in terms of long-term peri-implant tissue stability.
Purpose of Review: Attention deficit hyperactivity disorder (ADHD) shows high heritability in formal genetic studies. In our review article, we provide an overview on common and rare genetic risk variants for ADHD and their link to clinical practice.
Recent Findings: The formal heritability of ADHD is about 80% and therefore higher than that of most other psychiatric diseases. However, recent studies estimate the proportion of heritability based on single-nucleotide variants (SNPs) at 22%. It is a matter of debate which genetic mechanisms explain this huge difference. While common variants in the first mega-analyses of genome-wide association study data comprising several thousand patients have yielded the first genome-wide results, explaining only little variance, the methodologically more difficult analyses of rare variants are still in their infancy. Some rare genetic syndromes show a higher prevalence of ADHD, indicating a potential role for a small number of patients. In contrast, polygenic risk scores (PRS) could potentially be applied to every patient. We give an overview of how PRS explain different behavioral phenotypes in ADHD and how they could be used for diagnosis and therapy prediction.
Summary: Knowledge about a patient’s genetic makeup is not yet mandatory for ADHD therapy or diagnosis. PRS, however, have been introduced successfully in other areas of clinical medicine, and their application in psychiatry will begin within the next years. In order to ensure competent advice for patients, knowledge of the current state of research is useful for psychiatrists.
Voting advice applications (VAAs) are online tools providing voting advice to their users. This voting advice is based on the match between the answers of the user and the answers of several political parties to a common questionnaire on political attitudes. To visualize this match, VAAs use a wide array of visualisations, the most popular of which are two-dimensional political maps. These maps show the position of both the political parties and the user in the political landscape, allowing the user to understand both their own position and their relation to the political parties. To construct these maps, VAAs require scales that represent the main underlying dimensions of the political space. This makes the correct construction of these scales important if the VAA aims to provide accurate and helpful voting advice. This paper presents three criteria that assess whether a VAA achieves this aim. To illustrate their usefulness, these three criteria—unidimensionality, reliability and quality—are used to assess the scales in the cross-national EUVox VAA, a VAA designed for the European Parliament elections of 2014. Using techniques from Mokken scaling analysis and categorical principal component analysis to measure these criteria, I find that most scales show low unidimensionality and reliability. Moreover, even while designers can—and sometimes do—use certain techniques to improve their scales, these improvements are rarely enough to overcome all of the problems regarding unidimensionality, reliability and quality. This leaves certain problems for the designers of VAAs and of similar types of online surveys.
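Scale reliability of the kind discussed here is classically quantified with Cronbach's alpha (the paper additionally uses Mokken scaling, which this sketch does not cover). A minimal self-contained example with invented response data:

```python
# Sketch: Cronbach's alpha, a standard reliability coefficient for
# attitude scales such as those used in VAAs. Data are hypothetical.

def cronbach_alpha(items):
    """`items` is a list of item columns (one list of scores per item)."""
    k = len(items)
    n = len(items[0])
    means = [sum(col) / n for col in items]
    item_var = [sum((x - m) ** 2 for x in col) / (n - 1)
                for col, m in zip(items, means)]
    totals = [sum(col[i] for col in items) for i in range(n)]
    tmean = sum(totals) / n
    total_var = sum((t - tmean) ** 2 for t in totals) / (n - 1)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_var) / total_var)

# Five respondents answering three hypothetical attitude items (1-5):
responses = [
    [4, 5, 3, 2, 4],  # item 1
    [4, 4, 3, 2, 5],  # item 2
    [5, 5, 2, 1, 4],  # item 3
]
print(round(cronbach_alpha(responses), 2))
```

Values near 1 indicate high internal consistency; low alpha is one symptom of the reliability problems the paper reports.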
We use recent results by Bainbridge–Chen–Gendron–Grushevsky–Möller on compactifications of strata of abelian differentials to give a comprehensive solution to the realizability problem for effective tropical canonical divisors in equicharacteristic zero. Given a pair (Γ,D) consisting of a stable tropical curve Γ and a divisor D in the canonical linear system on Γ, we give a purely combinatorial condition to decide whether there is a smooth curve X over a non-Archimedean field whose stable reduction has Γ as its dual tropical curve together with an effective canonical divisor KX that specializes to D.
Inhomogeneous phases in the Gross-Neveu model in 1 + 1 dimensions at finite number of flavors
(2020)
We explore the thermodynamics of the 1+1-dimensional Gross-Neveu (GN) model at a finite number of fermion flavors Nf, finite temperature, and finite chemical potential using lattice field theory. In the limit Nf→∞ the model has been solved analytically in the continuum. In this limit three phases exist: a massive phase, in which a homogeneous chiral condensate breaks chiral symmetry spontaneously; a massless symmetric phase with vanishing condensate; and, most interestingly, an inhomogeneous phase with a condensate that oscillates in the spatial direction. In the present work we use chiral lattice fermions (naive fermions and SLAC fermions) to simulate the GN model with 2, 8, and 16 flavors. The results obtained with both discretizations are in agreement. As in the limit Nf→∞, we find three distinct regimes in the phase diagram, characterized by a qualitatively different behavior of the two-point function of the condensate field. For Nf=8 we map out the phase diagram in detail and obtain an inhomogeneous region that is smaller than in the limit Nf→∞, in which quantum fluctuations are suppressed. We also comment on the existence or absence of Goldstone bosons related to the breaking of translation invariance in 1+1 dimensions.
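For reference, the Gross-Neveu model studied here is defined by the standard textbook Lagrangian (this is the conventional form, not reproduced from the paper itself):

```latex
\mathcal{L}
  = \sum_{j=1}^{N_f} \bar{\psi}_j \, i\gamma^{\mu}\partial_{\mu}\, \psi_j
  + \frac{g^{2}}{2}\,\Big(\sum_{j=1}^{N_f} \bar{\psi}_j \psi_j\Big)^{2},
```

where the chiral condensate ⟨ψ̄ψ⟩ is the order parameter distinguishing the three phases described above: homogeneous and nonzero (massive phase), zero (symmetric phase), or spatially oscillating (inhomogeneous phase).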
Erratum for: Cyclic AMP induces transactivation of the receptors for epidermal growth factor and nerve growth factor, thereby modulating activation of MAP kinase, Akt, and neurite outgrowth in PC12 cells.Journal of biological chemistry, 2002 Nov 15;277(46):43623-30. doi: 10.1074/jbc.M203926200. Epub 2002 Sep 5.
Type-II multiferroic materials, in which ferroelectric polarization is induced by inversion non-symmetric magnetic order, promise new and highly efficient multifunctional applications based on mutual control of magnetic and electric properties. However, to date this phenomenon is limited to low temperatures. Here we report giant pressure-dependence of the multiferroic critical temperature in CuBr2: at 4.5 GPa it is enhanced from 73.5 to 162 K, to our knowledge the highest TC ever reported for non-oxide type-II multiferroics. This growth shows no sign of saturating and the dielectric loss remains small under these high pressures. We establish the structure under pressure and demonstrate a 60% increase in the two-magnon Raman energy scale up to 3.6 GPa. First-principles structural and magnetic energy calculations provide a quantitative explanation in terms of dramatically pressure-enhanced interactions between CuBr2 chains. These large, pressure-tuned magnetic interactions motivate structural control in cuprous halides as a route to applied high-temperature multiferroicity.
Deconfinement of Mott localized electrons into topological and spin–orbit-coupled Dirac fermions
(2020)
The interplay of electronic correlations, spin–orbit coupling and topology holds promise for the realization of exotic states of quantum matter. Models of strongly interacting electrons on honeycomb lattices have revealed rich phase diagrams featuring unconventional quantum states including chiral superconductivity and correlated quantum spin Hall insulators intertwining with complex magnetic order. Material realizations of these electronic states are, however, scarce or nonexistent. In this work, we propose and show that stacking 1T-TaSe2 into bilayers can deconfine electrons from a deep Mott insulating state in the monolayer to a system of correlated Dirac fermions subject to sizable spin–orbit coupling in the bilayer. 1T-TaSe2 develops a Star-of-David charge density wave pattern in each layer. When the Star-of-David centers belonging to two adjacent layers are stacked in a honeycomb pattern, the system realizes a generalized Kane–Mele–Hubbard model in a regime where Dirac semimetallic states are subject to significant Mott–Hubbard interactions and spin–orbit coupling. At charge neutrality, the system is close to a quantum phase transition between a quantum spin Hall and an antiferromagnetic insulator. We identify a perpendicular electric field and the twisting angle as two knobs to control topology and spin–orbit coupling in the system. Their combination can drive it across hitherto unexplored grounds of correlated electron physics, including a quantum tricritical point and an exotic first-order topological phase transition.
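The Kane–Mele–Hubbard model mentioned above has, in its standard textbook form, the Hamiltonian below (the paper's generalization may include further terms; this is background, not the paper's own model):

```latex
H = -t \sum_{\langle ij\rangle,\sigma} c^{\dagger}_{i\sigma} c_{j\sigma}
  + i\lambda_{\mathrm{SO}} \sum_{\langle\langle ij\rangle\rangle,\sigma\sigma'}
      \nu_{ij}\, s^{z}_{\sigma\sigma'}\, c^{\dagger}_{i\sigma} c_{j\sigma'}
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow},
```

where t is the nearest-neighbour hopping on the honeycomb lattice, λ_SO the spin–orbit coupling on next-nearest-neighbour bonds with ν_ij = ±1 depending on the orientation of the hop, and U the on-site Hubbard repulsion. The competition between λ_SO (favouring the quantum spin Hall phase) and U (favouring antiferromagnetism) underlies the quantum phase transition described in the abstract.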
Cancer‐associated venous thromboembolism (VTE) is a frequent, potentially life‐threatening event that complicates cancer management. Anticoagulants are the cornerstone of therapy for the treatment and prevention of cancer‐associated thrombosis (CAT); factor Xa–inhibiting direct oral anticoagulants (DOACs; apixaban, edoxaban, and rivaroxaban), which have long been recommended for the treatment of VTE in patients without cancer, have been investigated in this setting. The first randomized comparisons of DOACs against low‐molecular‐weight heparin for the treatment of CAT indicated that DOACs are efficacious in this setting, with findings reflected in recent updates to published guidance on CAT treatment. However, the higher risk of bleeding events (particularly in the gastrointestinal tract) with DOACs highlights the need for appropriate patient selection. Further insights will be gained from additional studies that are ongoing or awaiting publication. The efficacy and safety of DOAC thromboprophylaxis in ambulatory patients with cancer at a high risk of VTE have also been assessed in placebo‐controlled randomized controlled trials of apixaban and rivaroxaban. Both studies showed efficacy benefits with DOACs, but both studies also showed a nonsignificant increase in major bleeding events while on treatment. This review summarizes the evidence base for rivaroxaban use in CAT, the patient profile potentially most suited to DOAC use, and ongoing controversies under investigation. We also describe ongoing studies from the CALLISTO (Cancer Associated thrombosis—expLoring soLutions for patients through Treatment and Prevention with RivarOxaban) program, which comprises several randomized clinical trials and real‐world evidence studies, including investigator‐initiated research.
We study in detail the nuclear aspects of a neutron-star merger in which deconfinement to quark matter takes place. For this purpose, we make use of the Chiral Mean Field (CMF) model, an effective relativistic model that includes self-consistent chiral symmetry restoration and deconfinement to quark matter and, for this reason, predicts the existence of different degrees of freedom depending on the local density/chemical potential and temperature. We then use the out-of-chemical-equilibrium finite-temperature CMF equation of state in full general-relativistic simulations to analyze which regions of different QCD phase diagrams are probed and which conditions, such as strangeness and entropy, are generated when a strong first-order phase transition appears. We also investigate the amount of electrons present in different stages of the merger and discuss how far from chemical equilibrium they can be and, finally, draw some comparisons with matter created in supernova explosions and heavy-ion collisions.
Evaluation of a rapid turn-over, fully-automated ADAMTS13 activity assay: a method comparison study
(2020)
Thrombotic thrombocytopenic purpura (TTP) is a life-threatening thrombotic microangiopathy caused by severely reduced activity of the von-Willebrand factor-cleaving protease ADAMTS13, mainly due to anti-ADAMTS13 antibodies. Although several test systems for ADAMTS13 measurement exist, long turn-around times hamper their usability in daily practice. We performed a method comparison study for two commercially available ADAMTS13 assays and evaluated the agreement between the fully-automated rapid turn-over HemosIL AcuStar ADAMTS13 Activity assay and the manually performed TECHNOZYM ADAMTS-13 Activity assay. Twenty-four paired test samples derived from 10 consecutively recruited patients (n = 8, acquired TTP; n = 1, atypical hemolytic uremic syndrome; n = 1, control) were included; nine test samples were collected in case of clinically apparent TTP and 13 samples were collected from TTP patients in clinical remission. Overall correlation between the TECHNOZYM and AcuStar assays was good, with a Pearson R of 0.93 (p < 0.001). Passing–Bablok analysis showed high agreement between the assays, with an intercept of −2.56 (95% confidence interval [CI], −5.07 to −0.86) and a slope of 1.04 (95% CI, 0.84–1.17). The absolute mean bias was 2.54% (standard deviation [SD], 6.38%; 95% CI, −10.0 to 15.05%). Intra-method reliability was high, with an absolute mean bias of −0.13% (SD, 3.21%; 95% CI, −6.42 to 6.16%). The observer agreement for categorical thresholds (ADAMTS13 activity above or below 10%) was kappa = 0.82 (95% CI, 0.59–1.0). In conclusion, overall agreement between the testing methods was sufficient and we support previously published data suggesting that the AcuStar assay is a valuable and accurate tool for ADAMTS13 activity testing and TTP diagnostics.
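The bias intervals reported in this abstract follow the usual limits-of-agreement form (mean bias ± 1.96 · SD). A minimal sketch that reproduces the reported intervals from the stated mean biases and SDs, up to rounding (z = 1.96 is assumed; the original analysis may have used a slightly different multiplier):

```python
# Sketch: 95% limits of agreement (Bland-Altman style) from a mean
# bias and its SD. Inputs are the values reported in the abstract.

def limits_of_agreement(mean_bias, sd, z=1.96):
    """Return the (lower, upper) 95% limits of agreement."""
    return (mean_bias - z * sd, mean_bias + z * sd)

# Inter-method comparison: mean bias 2.54%, SD 6.38%
lo, hi = limits_of_agreement(2.54, 6.38)
# Intra-method reliability: mean bias -0.13%, SD 3.21%
lo2, hi2 = limits_of_agreement(-0.13, 3.21)
print(round(lo, 2), round(hi, 2), round(lo2, 2), round(hi2, 2))
```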
Corporate governance is the set of rules, be they legal or self-regulatory, practices and processes pursuant to which an insurance undertaking is administered. Good corporate governance is key not only to establishing oneself and succeeding in a competitive environment but also to safeguarding the interests of all stakeholders in an insurance undertaking. It is therefore not surprising that mandatory requirements on the administration of insurance undertakings have proliferated in recent years, in an attempt by regulators to protect policyholders in particular against perceived risks arising from improperly governed insurance undertakings. In Germany this has been regarded by many undertakings as an overly paternalistic approach by the legislator, especially considering that the German insurance sector has for decades, if not centuries, experienced a remarkably low number of insolvencies, and that German insurers were neither the trigger of, nor especially endangered actors in, the financial crisis commencing in 2007. Notwithstanding the kernel of truth in this criticism, namely that the insurance industry was to a certain degree taken hostage by the shortcomings within the banking sector, the reform of German insurance supervisory law via implementation of the Solvency II system has brought many advances in the sense of better governance of insurance undertakings and has also brought to light many deficiencies from which the administration of some insurance undertakings may have suffered in the past, and which are now more properly addressed.
Macrophages acquire anti-inflammatory and proresolving functions to facilitate resolution of inflammation and promote tissue repair. While alternatively activated macrophages (AAMs), also referred to as M2 macrophages, polarized by type 2 (Th2) cytokines IL-4 or IL-13 contribute to the suppression of inflammatory responses and play a pivotal role in wound healing, contemporaneous exposure to apoptotic cells (ACs) potentiates the expression of anti-inflammatory and tissue repair genes. Given that liver X receptors (LXRs), which coordinate sterol metabolism and immune cell function, play an essential role in the clearance of ACs, we investigated whether LXR activation following engulfment of ACs selectively potentiates the expression of Th2 cytokine-dependent genes in primary human AAMs. We show that AC uptake simultaneously upregulates LXR-dependent, but suppresses SREBP-2-dependent gene expression in macrophages, both of which are prevented by inhibiting Niemann–Pick C1 (NPC1)-mediated sterol transport from lysosomes. Concurrently, macrophages accumulate the sterol biosynthetic intermediates desmosterol, lathosterol, lanosterol, and dihydrolanosterol but not cholesterol-derived oxysterols. Using global transcriptome analysis, we identify anti-inflammatory and proresolving genes, including interleukin-1 receptor antagonist (IL1RN) and arachidonate 15-lipoxygenase (ALOX15), whose expression is selectively potentiated in macrophages upon concomitant exposure to ACs or LXR agonist T0901317 (T09) and Th2 cytokines. We show that priming macrophages via LXR activation enhances the cellular capacity to synthesize the inflammation-suppressing specialized proresolving mediator (SPM) precursors 15-HETE and 17-HDHA as well as resolvin D5. Silencing LXRα and LXRβ in macrophages attenuates the potentiation of ALOX15 expression by concomitant stimulation with ACs or T09 and IL-13.
Collectively, we identify a previously unrecognized mechanism of regulation whereby LXR integrates AC uptake to selectively shape Th2-dependent gene expression in AAMs.
Mongolian spots (MS) are congenital dermal conditions resulting from the migration of neural crest-derived melanocytes to the skin during embryogenesis. MS incidence is highly variable across populations. Morphologically, MS present as hyperpigmented maculae of varying size and form, ranging from round spots of 1 cm in diameter to extensive discolorations covering predominantly the lower back and buttocks. Due to their coloring, which also depends on the skin type, MS may mimic hematomas, thus posing a challenge to physicians examining children in cases of suspected child abuse. In the present study, MS incidence and distribution, as well as skin types, were documented in a collective of 253 children examined on the basis of suspected child abuse. From these data, a classification scheme was derived to document MS and to help identify cases requiring recurrent examination for unambiguous interpretation of initial findings, alongside the main decisive factors for re-examination such as the general circumstances of the initial examination (e.g., experience of the examiner, lighting conditions) and given dermatological conditions of the patient (e.g., diaper rash).
Objective: Relative to urban populations, rural patients may have more limited access to care, which may undermine timely bladder cancer (BCa) diagnosis and even survival.
Methods: We tested the effect of residency status (rural areas [RA < 2500 inhabitants] vs. urban clusters [UC ≥ 2500 inhabitants] vs. urbanized areas [UA, ≥50,000 inhabitants]) on BCa stage at presentation, as well as on cancer-specific mortality (CSM) and other cause mortality (OCM), according to the US Census Bureau definition. Multivariate competing risks regression (CRR) models were fitted after matching of RA or UC with UA in stage-stratified analyses.
Results: Of 222,330 patients, 3496 (1.6%) resided in RA, 25,462 (11.5%) in UC and 193,372 (87%) in UA. Age, tumor stage, radical cystectomy rates and chemotherapy use were comparable between RA, UC and UA (all p > 0.05). At 10 years, RA was associated with the highest OCM, followed by UC and UA (30.9% vs. 27.7% vs. 25.6%, p < 0.01). Similarly, CSM was marginally higher in RA and UC vs. UA (20.0% vs. 20.1% vs. 18.8%, p = 0.01). In stage-stratified, fully matched CRR analyses, increased OCM and CSM applied only to stage T1 BCa patients.
Conclusion: We did not observe meaningful differences in access to treatment or stage distribution according to residency status. However, RA, and to a lesser extent UC, residency status was associated with higher OCM and marginally higher CSM in T1N0M0 patients. This observation should be validated or refuted in additional epidemiological investigations.
There is limited knowledge on the prevalence and risk factors of diabetic retinopathy (DR) in dialysis patients. We investigated the association between diabetes mellitus, lipid-related biomarkers and retinopathy in hemodialysis patients. We reviewed 1,255 hemodialysis patients with type 2 diabetes mellitus (T2DM) who participated in the German Diabetes and Dialysis Study (4D Study). Associations between clinical and biochemical variables and diabetic retinopathy were examined by logistic regression. On average, patients were 66 ± 8 years of age, 54% were male, and HbA1c was 6.7% ± 1.3%. DR, found in 71% of the patients, was significantly and positively associated with fasting glucose, HbA1c, time on dialysis, age, systolic blood pressure, body mass index and the prevalence of other microvascular diseases (e.g. neuropathy). Unexpectedly, DR was associated with high HDL cholesterol and high apolipoproteins AI and AII. Patients with coronary artery disease were less likely to have DR. DR was not associated with gender, smoking, diastolic blood pressure, VLDL cholesterol, triglycerides, or LDL cholesterol. In summary, the prevalence of DR in patients with T2DM requiring hemodialysis is higher than in T2DM patients who do not receive hemodialysis. DR was positively related to systolic blood pressure, glucometabolic control and, paradoxically, HDL cholesterol. These data suggest that glucose and blood pressure control may delay the development of DR in patients with diabetes mellitus on dialysis.
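Associations of this kind are commonly reported as odds ratios from logistic models. As a minimal illustration of the underlying arithmetic, an unadjusted odds ratio with a Woolf 95% confidence interval can be computed from a 2×2 table; all counts below are invented for illustration and are not taken from the 4D Study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # 2x2 table: rows = exposure (e.g. high HDL yes/no), cols = outcome (DR yes/no)
    #             DR+   DR-
    # exposed      a     b
    # unexposed    c     d
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf's log-OR standard error
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts for illustration only
or_, lo, hi = odds_ratio_ci(a=300, b=100, c=590, d=265)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A full logistic regression would additionally adjust the estimate for covariates such as age, HbA1c and blood pressure; the 2×2 version only shows where an unadjusted odds ratio comes from.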
The genus Ebolavirus comprises some of the deadliest viruses for primates and humans, and associated disease outbreaks are increasing in Africa. Different lines of evidence suggest that bats are putative reservoir hosts and play a major role in the transmission cycle of these filoviruses. Thus, detailed knowledge about their distribution might improve risk estimations of where future disease outbreaks might occur. A MaxEnt niche modelling approach based on climatic variables and land cover was used to investigate the potential distribution of 9 bat species associated with Zaire ebolavirus. This viral species has led to major Ebola outbreaks in Africa and is known for causing high mortality. Modelling results suggest suitable areas mainly near the coasts of West Africa, with extensions into Central Africa, where almost all of the 9 species studied find suitable habitat conditions. Previous spillover events and outbreak sites of the virus are covered by the modelled distribution of 3 bat species that have tested positive for the virus not only by serology but also by PCR. Modelling the habitat suitability of the bats is an important step that can benefit public information campaigns and may ultimately help control future outbreaks of the disease.
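The overlap step described above (the region where almost all of the 9 species find suitable habitat conditions) amounts to thresholding each per-species suitability surface and counting species per grid cell. The toy grids and the fixed 0.5 threshold below are illustrative assumptions; real MaxEnt output is a continuous suitability per cell, typically thresholded per species (e.g. at the 10th-percentile training presence) rather than at a fixed value.

```python
# Sketch: combine per-species suitability rasters into a species-richness map.
THRESH = 0.5  # illustrative fixed threshold, not a MaxEnt default

def richness_map(suitability_rasters, thresh=THRESH):
    # suitability_rasters: list of equally shaped 2D lists with values in [0, 1]
    rows, cols = len(suitability_rasters[0]), len(suitability_rasters[0][0])
    out = [[0] * cols for _ in range(rows)]
    for raster in suitability_rasters:
        for i in range(rows):
            for j in range(cols):
                if raster[i][j] >= thresh:
                    out[i][j] += 1  # one more species finds this cell suitable
    return out

# Three toy 2x2 "species" suitability rasters
s1 = [[0.9, 0.2], [0.7, 0.1]]
s2 = [[0.8, 0.6], [0.3, 0.0]]
s3 = [[0.6, 0.4], [0.9, 0.2]]
print(richness_map([s1, s2, s3]))  # -> [[3, 1], [2, 0]]: cell (0,0) suits all three
```

In practice this stacking is done on georeferenced rasters (e.g. with rasterio or a GIS), but the counting logic is the same.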
Current scholarly engagement with employees' experience of meaning in work focuses above all on the problem of stress-related loss of meaning. According to this view, more and more employees suffer from no longer being able to experience their work as meaningful. Such a perspective, however, loses sight of the subjective shaping and appropriation of work. This article turns to these aspects by asking to what extent different forms of the appropriation of work can be identified. On the basis of interviews with forty highly qualified employees, three distinct modes of appropriation, each with its inherent ambivalences, are identified. Each mode stands for a specific view of one's own scope for shaping work and for a form of primary attribution of meaning in work. Three ideal types are distinguished, "progressive meaning-shaping" ("progressive Sinngestaltung"), "resistant meaning-preservation" ("widerständige Sinnbewahrung") and "pragmatic meaning-preservation" ("pragmatische Sinnbewahrung"), through which the heterogeneity and the ambivalences of the appropriation of professional work become apparent. The article thus provides insights into the subjective practices of making work meaningful and contributes to research on the interplay of work and subjectivity.
Objectives: To evaluate the in vitro efficacy of surgical and non-surgical air-polishing for implant surface decontamination.
Material and methods: One hundred eighty implants were distributed across three differently angulated bone defect models (30°, 60°, 90°). Biofilm was imitated using indelible red dye. Sixty implants were used for each defect, 20 of which were air-polished with each of three different glycine air powder abrasion (GAPA1–3) combinations. Within each group of 20 identically air-polished implants, surgical and non-surgical procedures (without/with a mucosa mask) were simulated. All implants were photographed to determine the uncleaned surface. Changes in surface morphology were assessed using scanning electron micrographs (SEM).
Results: Cleaning efficacy did not show any significant differences between GAPA1–3 for surgical and non-surgical application. Within a given cleaning method, significant (p < 0.001) differences occurred for GAPA2 between 30° (11.77 ± 2.73%) and 90° (7.25 ± 1.42%) in the non-surgical simulation and between 30° (8.26 ± 1.02%) and 60° (5.02 ± 0.84%) in the surgical simulation. The surgical use of air-polishing (6.68 ± 1.66%) was significantly superior (p < 0.001) to the non-surgical use (10.13 ± 2.75%). SEM micrographs showed no surface damage after use of GAPA.
Conclusions: Air-polishing is an efficient, surface-protective method for surgical and non-surgical implant surface decontamination in this in vitro model. No method, however, resulted in complete cleaning of the implant surface.
Clinical relevance: Air-polishing appears to be promising for implant surface decontamination regardless of the device.
Purpose: The COVID-19 pandemic had multiple influences on the social, industrial, and medical situation in all affected countries. Obligatory medical confinement measures included the suspension of scheduled non-emergency surgical procedures and outpatient clinics as well as overall access restrictions to hospitals and medical practices. The aim of this retrospective study was to assess whether the obligatory confinement (lockdown) had an effect on the number of appendectomies during and after the period of lockdown.
Methods: This retrospective study was based on anonymized nationwide administrative claims data of the German Local General Sickness Fund (AOK). Patients admitted for diseases of the appendix (ICD-10: K35-K38) or abdominal and pelvic pain (ICD-10: R10) who underwent an appendectomy (OPS: 5-470) were included. The study period included the 6 weeks of the German lockdown (16 March–26 April 2020) as well as the 6 weeks before (03 February–15 March 2020) and after (27 April–07 June 2020). These periods were compared to the respective periods in 2018 and 2019.
Results: The overall number of appendectomies was significantly reduced during the lockdown period in 2020 compared to 2018 and 2019. This decrease affected only appendectomies due to acute simple (ICD-10: K35.30, K35.8) and non-acute appendicitis (ICD-10: K36-K38, R10). Numbers of appendectomies for acute complex appendicitis remained unchanged. Female patients and patients aged 1–18 years showed the strongest decrease in case numbers.
Conclusion: The lockdown in Germany resulted in a decreased number of appendectomies. This affected mainly appendectomies in simple acute and non-acute appendicitis, but not complicated acute appendicitis. The study gives no evidence that the confinement measures resulted in a deterioration of medical care for appendicitis.
Background: Alterations in the SCN5A gene encoding the cardiac sodium channel Nav1.5 have been linked to a number of arrhythmia syndromes and diseases including long-QT syndrome (LQTS), Brugada syndrome (BrS) and dilated cardiomyopathy (DCM), which may predispose to fatal arrhythmias and sudden death. We identified the heterozygous variant c.316A > G, p.(Ser106Gly) in a 35-year-old patient who survived cardiac arrest. In the present study, we aimed to investigate the functional impact of the variant to clarify its medical relevance.
Methods: Mutant as well as wild-type GFP-tagged Nav1.5 channels were expressed in HEK293 cells. We performed functional characterization experiments using the patch-clamp technique.
Results: Electrophysiological measurements indicated that the detected missense variant alters Nav1.5 channel function, leading to a gain-of-function effect. Cells expressing S106G channels show an increase in Nav1.5 current over the entire voltage window.
Conclusion: The results support the assumption that the detected sequence aberration alters Nav1.5 channel function and may predispose to cardiac arrhythmias and sudden cardiac death.
Objectives: To immunohistochemically characterize and correlate macrophage M1/M2 polarization status with disease severity at peri-implantitis sites.
Materials and methods: A total of twenty patients (n = 20 implants) diagnosed with peri-implantitis (i.e., bleeding on probing with or without suppuration, probing depths ≥ 6 mm, and radiographic marginal bone loss ≥ 3 mm) were included. The severity of peri-implantitis was classified according to established criteria (i.e., slight, moderate, and advanced). Granulation tissue biopsies were obtained during surgical therapy and prepared for immunohistological assessment and macrophage polarization characterization. Macrophages, M1, and M2 phenotypes were identified through immunohistochemical markers (i.e., CD68, CD80, and CD206) and quantified through histomorphometrical analyses.
Results: Macrophages exhibiting a positive CD68 expression occupied a mean proportion of 14.36% (95% CI 11.4–17.2) of the inflammatory connective tissue (ICT) area. Positive M1 (CD80) and M2 (CD206) macrophages occupied a mean value of 7.07% (95% CI 5.9–9.4) and 5.22% (95% CI 3.8–6.6) of the ICT, respectively. The mean M1/M2 ratio was 1.56 (95% CI 1.12–1.9). Advanced peri-implantitis cases expressed a significantly higher M1 (%) when compared with M2 (%) expression. There was a significant correlation between CD68 (%) and M1 (%) expression and probing depth (PD) values.
Conclusion: The present immunohistochemical analysis suggests that macrophages constitute a considerable proportion of the inflammatory cellular composition at peri-implantitis sites, revealing a significantly higher expression of the M1 inflammatory phenotype at advanced peri-implantitis sites, which could possibly play a critical role in disease progression.
Clinical relevance: Macrophages have critical functions in establishing homeostasis and in disease. Bacteria might induce oral dysbiosis, unbalancing the host's immunological response and triggering inflammation around dental implants. M1/M2 status could possibly reveal the underlying pathogenesis of peri-implantitis.
Respiratory complex I catalyzes electron transfer from NADH to ubiquinone (Q) coupled to vectorial proton translocation across the inner mitochondrial membrane. Despite recent progress in structure determination of this very large membrane protein complex, the coupling mechanism is a matter of ongoing debate and the function of accessory subunits surrounding the canonical core subunits is essentially unknown. Concerted rearrangements within a cluster of conserved loops of central subunits NDUFS2 (β1-β2S2 loop), ND1 (TMH5-6ND1 loop) and ND3 (TMH1-2ND3 loop) were suggested to be critical for its proton pumping mechanism. Here, we show that stabilization of the TMH1-2ND3 loop by accessory subunit LYRM6 (NDUFA6) is pivotal for energy conversion by mitochondrial complex I. We determined the high-resolution structure of inactive mutant F89ALYRM6 of eukaryotic complex I from the yeast Yarrowia lipolytica and found long-range structural changes affecting the entire loop cluster. In atomistic molecular dynamics simulations of the mutant, we observed conformational transitions in the loop cluster that disrupted a putative pathway for delivery of substrate protons required in Q redox chemistry. Our results elucidate in detail the essential role of accessory subunit LYRM6 for the function of eukaryotic complex I and offer clues on its redox-linked proton pumping mechanism.
Against the background of the increasing transformation of the urban environment through gentrification, investor-friendly urban policy, privatization of public spaces, cuts in public investment and the dismantling of democratic participation instruments, we asked ourselves: what might a solidary city of the future look like? What counter-models to the currently dominant paradigms in urban development show us ways out of the supposed lack of alternatives towards a practice of solidarity at the neighborhood level? Within the framework of an applied critical geography, we want to show that there is a multitude of projects and initiatives that break through the lack of creativity to which neoliberalism has conditioned us and that work on concrete ideas and their practical implementation. As a theoretical approach, we engage with utopias and their potential for political practice. Since we are ourselves involved in urban-policy groups, we use activist urban research as the methodological framework of our study. The result is a leaflet, the "Kompass für ein solidarisches Quartier" (compass for a solidary neighborhood), intended to serve as an activist tool and source of ideas for the concrete implementation of transformative urban policy.
The production of K∗(892)0 and ϕ(1020) in pp collisions at √s = 8 TeV was measured using Run 1 data collected by the ALICE collaboration at the LHC. The pT-differential yields d²N/(dy dpT) in the range 0 < pT < 20 GeV/c for K∗0 and 0.4 < pT < 16 GeV/c for ϕ have been measured at midrapidity, |y| < 0.5. Moreover, improved measurements of K∗(892)0 and ϕ(1020) production at √s = 7 TeV are presented. The collision energy dependence of pT distributions, pT-integrated yields and particle ratios in inelastic pp collisions is examined. The results are also compared with different collision systems. The values of the particle ratios are found to be similar to those measured at other LHC energies. In pp collisions a hardening of the particle spectra is observed with increasing energy, but at the same time the relative particle abundances are observed to be independent of the collision energy. The pT-differential yields of K∗0 and ϕ in pp collisions at √s = 8 TeV are compared with the expectations of different Monte Carlo event generators.
The transverse momentum (pT) differential yields of (anti-)3He and (anti-)3H measured in p-Pb collisions at √sNN = 5.02 TeV with ALICE at the Large Hadron Collider (LHC) are presented. The ratios of the pT-integrated yields of (anti-)3He and (anti-)3H to the proton yields are reported, as well as the pT dependence of the coalescence parameters B3 for (anti-)3He and (anti-)3H. For (anti-)3He, the results obtained in four classes of the mean charged-particle multiplicity density are also discussed. These results are compared to predictions from a canonical statistical hadronization model and coalescence approaches. An upper limit on the total yield of anti-4He is determined.
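For reference, the coalescence parameter B_A quoted above (here A = 3) relates the invariant yield of a nucleus with mass number A to that of its constituent nucleons, in the standard convention (neutron spectra assumed equal to proton spectra):

```latex
B_A \;=\; E_A \frac{\mathrm{d}^3 N_A}{\mathrm{d}p_A^3} \Big/ \left( E_p \frac{\mathrm{d}^3 N_p}{\mathrm{d}p_p^3} \right)^{\!A} ,
\qquad \vec{p}_A = A\,\vec{p}_p ,
```

so B3 is evaluated with the proton spectrum taken at one third of the nucleus momentum, and its magnitude reflects the coalescence probability of three nucleons close in phase space.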
The global polarization of the Λ and anti-Λ hyperons is measured for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV recorded with the ALICE detector at the LHC. The results are reported differentially as a function of collision centrality and the hyperon transverse momentum (pT) for the centrality range 5-50%, 0.5 < pT < 5 GeV/c, and rapidity |y| < 0.5. The hyperon global polarization averaged for Pb-Pb collisions at √sNN = 2.76 and 5.02 TeV is found to be consistent with zero, ⟨PH⟩ (%) ≈ −0.01 ± 0.05 (stat.) ± 0.03 (syst.) in the collision centrality range 15-50%, where the largest signal is expected. The results are compatible with expectations based on an extrapolation from measurements at lower collision energies at RHIC, hydrodynamic model calculations, and empirical estimates based on the collision energy dependence of directed flow, all of which predict global polarization values at LHC energies of the order of 0.01%.
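The polarization P_H is accessible experimentally because the hyperon's parity-violating weak decay emits the daughter proton preferentially along the hyperon spin. In the standard form, the angular distribution of the daughter in the hyperon rest frame reads

```latex
\frac{\mathrm{d}N}{\mathrm{d}\cos\theta^{*}} \;\propto\; 1 + \alpha_{\mathrm{H}}\, P_{\mathrm{H}} \cos\theta^{*} ,
```

where θ* is the angle between the daughter momentum and the direction of the system angular momentum (normal to the reaction plane), and α_H is the hyperon decay parameter; fitting this distribution yields P_H.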
The Quark Gluon Plasma (QGP) produced in ultrarelativistic heavy-ion collisions at the Large Hadron Collider (LHC) can be studied by measuring the modifications of jets formed by hard scattered partons which interact with the medium. We studied these modifications via angular correlations of jets with charged hadrons for jets with momenta 20 < pT,jet < 40 GeV/c as a function of the associated particle momentum. The reaction plane fit (RPF) method is used in this analysis to remove the flow-modulated background. The analysis of angular correlations for different orientations of the jet relative to the second-order event plane allows for the study of the path-length dependence of medium modifications to jets. We present the azimuthal angular correlations of charged hadrons with respect to the axis of a reconstructed jet, as a function of the angle of the jet relative to the event plane, in Pb-Pb collisions at √sNN = 2.76 TeV. The dependence of particle yields associated with jets on the angle of the jet with respect to the event plane is presented. Correlations at different angles relative to the event plane are compared through ratios and differences of the yields. No dependence of the results on the angle of the jet with respect to the event plane is observed within uncertainties, which is consistent with no significant path-length dependence of the medium modifications for this observable.
The first measurement at the LHC of charge-dependent directed flow (v1) relative to the spectator plane is presented for Pb-Pb collisions at √sNN = 5.02 TeV. Results are reported for charged hadrons and D0 mesons for the transverse momentum intervals pT > 0.2 GeV/c and 3 < pT < 6 GeV/c in the 5-40% and 10-40% centrality classes, respectively. The difference between the positively and negatively charged hadron v1 has a positive slope as a function of pseudorapidity η, dΔv1/dη = [1.68 ± 0.49 (stat.) ± 0.41 (syst.)] × 10⁻⁴. The same measurement for D0 and D̄0 mesons yields a positive value dΔv1/dη = [4.9 ± 1.7 (stat.) ± 0.6 (syst.)] × 10⁻¹, which is about three orders of magnitude larger than that of the charged hadrons. These measurements can provide new insights into the effects of the strong electromagnetic field and the initial tilt of matter created in non-central heavy-ion collisions on the dynamics of light (u, d, and s) and heavy (c) quarks. The large difference between the observed Δv1 of charged hadrons and D0 mesons may reflect a different sensitivity of the charm and light quarks to the early-time dynamics of a heavy-ion collision. These observations challenge some of the recent theoretical calculations, which predicted a negative and an order of magnitude smaller value of dΔv1/dη for both light-flavour and charmed hadrons.
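Directed flow is the first Fourier harmonic of the azimuthal particle distribution; relative to the spectator plane it takes the standard form

```latex
v_1 \;=\; \bigl\langle \cos(\varphi - \Psi_{\mathrm{SP}}) \bigr\rangle ,
\qquad \Delta v_1 \;=\; v_1^{\,+} - v_1^{\,-} ,
```

where φ is the particle azimuthal angle, Ψ_SP the spectator-plane angle, and Δv1 the difference between positively and negatively charged particles (or between D0 and D̄0); the quantity quoted above is the slope of Δv1 versus pseudorapidity.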
The first measurements of dielectron production at midrapidity (|ηe| < 0.8) in proton-proton and proton-lead collisions at √sNN = 5.02 TeV at the LHC are presented. The dielectron cross section is measured with the ALICE detector as a function of the invariant mass mee and the pair transverse momentum pT,ee in the ranges mee < 3.5 GeV/c² and pT,ee < 8.0 GeV/c, in both collision systems. In proton-proton collisions, the charm and beauty cross sections are determined at midrapidity from a fit to the data with two different event generators. This complements the existing dielectron measurements performed at √s = 7 and 13 TeV. The slope of the √s dependence of the three measurements is described by FONLL calculations. The dielectron cross section measured in proton-lead collisions is in agreement, within the current precision, with the expected dielectron production without any nuclear matter effects for e+e− pairs from open heavy-flavor hadron decays. For the first time at LHC energies, dielectron production in proton-lead and proton-proton collisions is directly compared at the same √sNN via the dielectron nuclear modification factor RpPb. The measurements are compared to model calculations including cold nuclear matter effects or additional sources of dielectrons from thermal radiation.
This article reports measurements of the pT-differential inclusive jet cross-section in pp collisions at √s = 5.02 TeV and the pT-differential inclusive jet yield in Pb-Pb 0-10% central collisions at √sNN = 5.02 TeV. Jets were reconstructed at mid-rapidity with the ALICE tracking detectors and electromagnetic calorimeter using the anti-kT algorithm. For pp collisions, we report jet cross-sections for jet resolution parameters R = 0.1–0.6 over the range 20 < pT,jet < 140 GeV/c, as well as the jet cross-section ratios of different R, and comparisons to two next-to-leading-order (NLO)-based theoretical predictions. For Pb-Pb collisions, we report the R = 0.2 and R = 0.4 jet spectra for 40 < pT,jet < 140 GeV/c and 60 < pT,jet < 140 GeV/c, respectively. The scaled ratio of jet yields observed in Pb-Pb to pp collisions, RAA, is constructed, and exhibits strong jet quenching and a clear pT-dependence for R = 0.2. No significant R-dependence of the jet RAA is observed within the uncertainties of the measurement. These results are compared to several theoretical predictions.
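The nuclear modification factor used here compares the jet yield in Pb-Pb collisions to the jet cross section in pp, scaled by the average nuclear overlap function, in the standard definition

```latex
R_{AA} \;=\; \frac{1}{\langle T_{AA} \rangle}\,
\frac{\mathrm{d}^2 N^{\mathrm{jet}}_{AA}/\mathrm{d}p_{\mathrm{T}}\,\mathrm{d}\eta}
     {\mathrm{d}^2 \sigma^{\mathrm{jet}}_{pp}/\mathrm{d}p_{\mathrm{T}}\,\mathrm{d}\eta} ,
```

so R_AA = 1 in the absence of nuclear effects, while values below unity, as observed here, indicate suppression of the jet yield (jet quenching) in the medium.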
Mid-rapidity production of π±, K± and (anti-)p measured by the ALICE experiment at the LHC, in Pb-Pb and inelastic pp collisions at √sNN = 5.02 TeV, is presented. The invariant yields are measured over a wide transverse momentum (pT) range from hundreds of MeV/c up to 20 GeV/c. The results in Pb-Pb collisions are presented as a function of the collision centrality, in the range 0–90%. The comparison of the pT-integrated particle ratios, i.e. proton-to-pion (p/π) and kaon-to-pion (K/π) ratios, with similar measurements in Pb-Pb collisions at √sNN = 2.76 TeV shows no significant energy dependence. Blast-wave fits of the pT spectra indicate that in the most central collisions radial flow is slightly larger at 5.02 TeV with respect to 2.76 TeV. Particle ratios (p/π, K/π) as a function of pT show pronounced maxima at pT ≈ 3 GeV/c in central Pb-Pb collisions. At high pT, particle ratios at 5.02 TeV are similar to those measured in pp collisions at the same energy and in Pb-Pb collisions at √sNN = 2.76 TeV. Using the pp reference spectra measured at the same collision energy of 5.02 TeV, the nuclear modification factors for the different particle species are derived. Within uncertainties, the nuclear modification factor is particle-species independent at high pT and compatible with measurements at √sNN = 2.76 TeV. The results are compared to state-of-the-art model calculations, which are found to describe the observed trends satisfactorily.
In bioengineering, scaffold proteins have been increasingly used to recruit molecules to parts of a cell, or to enhance the efficacy of biosynthetic or signalling pathways. For example, scaffolds can be used to make weak or non-immunogenic small molecules immunogenic by attaching them to the scaffold; in this role the scaffold is called a carrier. Here, we present the dodecin from Mycobacterium tuberculosis (mtDod) as a new scaffold protein. MtDod is a homododecameric complex of spherical shape, high stability and robust assembly, which allows the attachment of cargo at its surface. We show that mtDod, either directly loaded with cargo or equipped with domains for non-covalent and covalent loading of cargo, can be produced recombinantly in high quantity and quality in Escherichia coli. Fusions of mtDod with proteins of up to four times the size of mtDod, e.g. with monomeric superfolder green fluorescent protein creating a 437 kDa dodecamer, were successfully purified, showing mtDod's ability to function as a recruitment hub. Further, mtDod equipped with SYNZIP and SpyCatcher domains for post-translational recruitment of cargo was prepared, of which the mtDod/SpyCatcher system proved to be particularly useful. In a case study, we finally show that mtDod-peptide fusions allow the production of antibodies against human heat shock proteins and the C-terminus of heat shock cognate 70 interacting protein (CHIP).
Aim: The primary aim of this study was to analyze the frequency and characteristics of combined facial and peripheral trauma with consecutive hospitalization and treatment.
Materials and methods: The study included all patients with concomitant orthopedic-traumatological (OT) and craniomaxillofacial (CMF) injuries admitted to our level I trauma center in 2018. The data were collected by analysis of the institution's database and radiological reviews and included age, sex, injury type, and weekday and time of presentation. All patients were examined and treated by a team of surgeons specialized in OT and CMF directly after presentation.
Results: A total of 1040 combined OT and CMF patients were identified. Mean age was 33.0 ± 26.2 years; 67.3% (n = 700) were male. Primary presentation occurred most frequently on Sundays (n = 199) and between 7 and 8 pm (n = 74). A total of 193 OT fractures were documented, of which cervical spine injuries were most frequent (n = 30). 365 facial and skull fractures were recorded. Of the 204 patients with fractures of the viscerocranium, 10.8% presented with at least one fracture of an extremity, 7.8% (16/204) with cervical spine fractures, 33.3% (68/204) with signs of closed brain trauma and 9.8% (20/204) with intracranial hemorrhage.
Discussion: The study shows a high frequency of combined facial and OT injuries and brain damage in a predominantly young and male cohort. Attendance by interdisciplinary teams of both CMF and OT surgeons specialized in cervical spine trauma surgery is highly advisable for adequate treatment.
Conclusion: Diagnostics and treatment should be performed by a highly specialized OT and CMF team, with a consulting neurosurgeon in a level-1 trauma center to avoid missed diagnoses and keep mortality low.
Few visual motifs make climate change as visible as melting glaciers. They therefore play a central role in climate research itself, in the popularization of its alarming findings, and in contemporary art, which, in light of these insights, searches for an adequate new aesthetic. Accordingly, the engagement of cultural studies with glacier imagery has by now become extensive. Numerous exhibition catalogues and comprehensive studies trace its development from the early 17th century, to which the first pictorial representations are dated, up to the present, in which glaciers and their disappearance have become an emblem of global warming. The heuristic of comparison serves an important function here: not only does it form the basis for classical art-historical investigations whose focus is the changing forms of expression and pictorial conventions of glacier images (for instance, on a scale between idealization and realism). Moreover, and in particular, the process of disappearance itself depends on the comparative gaze, for only in this way does it reveal itself in its full drama. This essay, however, takes a different perspective: drawing terminologically on Jussi Parikka's 'media geology' and against the background of the broad field of media ecology, it outlines a 'media glaciology' that understands glaciers themselves as media. Entirely in line with the media-comparatist research paradigm that specific medialities disclose themselves only from a media-comparative perspective, it pursues the question of how this 'becoming-media' of glaciers takes place in and through comparison with other (technical) media. I concentrate temporally on the 19th and early 20th centuries and regionally on the Alpine glaciers, whose scientific study founded the discipline of glaciology.
Letters, the conversation of two absent parties, play a major role in many films. They are shown on screen or read aloud in voice-over; we see scenes of reading and writing that play with the ambiguity of the written word. According to Christina Bartz, the letter is 'particularly compatible with film' 'because of its communicative connection across temporal and spatial distances', since film likewise brings together what is spatially and temporally separated through montage. Unlike film, however, the letter is not a mass medium but individual communication. Showing the medium of the letter, or replacing this historical medium with a more current one in film, always also offers an opportunity for media reflection. In this contribution I use two prominent examples to observe, first, how filmic adaptations of letter-based literary texts handle letters and, second, how media reflection takes place through the theme of the letter. To this end I present two melodramas in which letters, and the recognition and misrecognition that accompany them, play a central role: Max Ophüls' "Letter from an Unknown Woman" (USA 1948), the film adaptation of Stefan Zweig's novella "Brief einer Unbekannten" (1922), and "Atonement" (2007), the adaptation of Ian McEwan's novel of the same name from 2001.
The uncertainty in the field of contemporary art concerns not only the question of the quality of art but also that of the boundary between art(work) and its respective outside. [...] Art that questions the conventional concept of the work (and is often rejected by the broad public) but is nevertheless situated and locatable, and therefore at least largely recognizable as art, will be called contemporary art in what follows; art that is integrated into and intervenes in everyday life, and is sometimes not perceived as art at all, will be called situation art. Contemporary art presupposes its autonomy and a clear boundary between art and non-art; situation art (which could be regarded as a radical form and thus as part of contemporary art) sows doubt about the autonomy of art, even if it frequently uses, or 'must' use, that autonomy as an argument against appeals or encroachments from politics, religion, or everyday reality. In both forms, which overlap in many cases, nothing is created any longer in the conventional sense ('poiesis'); rather, something is found or, ultimately, 'simply' done ('praxis'). In both cases nothing is self-evident anymore: in reception it is, at least at first, unclear whether we are dealing with art at all. In other words, at the moment of visiting an exhibition we cannot rely on our sense perceptions, our experience, or our implicit (prior) knowledge if we want to know what we are dealing with and what it is all about. We therefore need explanations and commentary (which can in turn congeal into implicit knowledge) - and that is one reason why contemporary art could be of interest to comparative literature. More on this later. The terms contemporary art and situation art cover a very wide range of phenomena.
What follows will therefore be a cursory sketch aimed primarily at those phenomena, and their commonalities, that are of interest to comparative literature. The focus is not on a precise analysis and interpretation of phenomena, but on the question of what would be interesting for analysis and interpretation with regard to the discipline of comparative literature. The phenomena and examples discussed below are in any case located on the periphery of comparative literature, with all the disadvantages that working in peripheries entails.
Inscriptions are forms characterized by a particular medial disposition. What distinguishes inscriptions, apart from their close connection to a material carrier, is their peculiar position on the threshold between writing and image. [...] The inscription's characteristic of exhibiting a word or a text as a visible sequence of signs was captured by the Italian epigrapher Armando Petrucci in the notion of 'scrittura esposta'. [...] If one tries to grasp more precisely the specific power of the inscription touched on here, it makes sense to return first to the visual dimension. It is, one may assume, the inscription's ability to appear as an image that allows it to enter the viewer's gaze and to present itself as an exposed figure before the eyes. With this pictorial mode of appearance, the argument might continue, are associated aesthetic qualities of sensory vividness and presence that lend the inscription its characteristic expressive and declarative power. [...] This explanation, however, captures only one side of the inscription and its medial and reception-aesthetic constitution. What is special about the inscription is not exhausted by its character as an exhibited, exposed formation of signs. The inscription is not only 'esposta' but equally 'scrittura'. The particular mode of design and effect of the inscription thus does not rest on its pictorial disposition alone. The efficacy of the inscription is owed, so the thesis proposed here, to the fact that, even as it asserts itself as an exposed, striking, and widely visible figure, it simultaneously preserves its character as writing and displays this character no less distinctly. Whoever contemplates an inscription beholds in its pictorial design at the same time the visual form of a text, of a linguistic utterance.
Through its design as 'scrittura', the inscription thus appears in a form invested in a specific way with moments of power and authority. Writing is, after all, the medium in which, in a tradition reaching from antiquity to the modern era, the law, the recorded and materialized 'voice of the sovereign', confronts us. What is special about the inscription therefore seems, we may provisionally conclude, to consist in its linking the media of image and writing in a specific way. In it, medial and aesthetic qualities are at work that belong partly to the image and partly to writing. On this interplay also rests the peculiar potential for effect associated with this form of utterance. In what follows, the aim is to explore this interaction of pictorial and scriptural aspects more closely and, against this background, to examine the significance and efficacy of inscribed signs, particularly in political contexts.
The internet enters film in the most varied ways: digital formats such as web series, podcasts, or even tweets become, in a change of medium, the basis of filmic adaptations; filmic experiments with interactive and virtual technologies generate new media combinations situated between film and computer game; transmedia extensions continue film and series universes in digital space in various ways; and intermedial references, by imitating a digital aesthetic, narrate not (only) about the other medium but often also through it. Phenomena belonging to this last intermedial category - thematization, evocation, or simulation - are analyzed here in the context of the depiction of the internet. Owing to the ubiquity of digital media in everyday life, newer technologies have for some years played a central role as reference media in many films and series. Filmic internet applications are visualized above all as graphical user interfaces, as the interface between user and technical device, while the representation of the hardware usually appears secondary. The focus of this article is therefore not the depiction of computers and smartphones but the staging of networked systems, spaces, and communication structures. In this context, particular attention is paid to intermedial evocations of the other medium through the imitation of digital aesthetics by means of film's repertoire of forms, to simulated screen and desktop films, and to the depiction of the predominantly writing- and sign-based digital culture through the integration of writing into the film image. The investigation begins with a consideration of visual metaphors and strategies for making virtual spaces visible.
Foreign worlds - own worlds : on the categorizing role of deviations for fictionality
(2020)
Even texts and films whose depicted worlds lie beyond any temporal or spatial relation to the world we know are necessarily products of their respective time of origin. This holds all the more for plots set in Earth's future or on planets far away from Earth. And because all these films and texts are the product of a very specific time and a very specific culture, their more or less foreign worlds must likewise be examined for analogies and references to the actual world of their respective time of origin. The foreign worlds can then - indeed must - be read not only literally but also figuratively, as signs. The degree of explicitness and concreteness of the respective references may vary from text to text and from film to film, but in most cases the analogies to the extratextual or extrafilmic realities will suffice to neutralize both the temporal and the spatial difference: the foreign world of the text or film turns out to be merely a mirror image, distorted image, or wishful image of the real, extratextual or extrafilmic world. In fantastic-utopian fictions, the fact that the depicted worlds deviate from contemporary reality and yet must be related to that contemporary reality is the decisive defining feature. Accordingly, the primary purpose of the other-worlds, parallel worlds, and future worlds constructed in these genres - worlds that differentiate themselves from, i.e. deviate from, spatiotemporal reality - is to offer projection surfaces for addressing issues of precisely that reality.
In this way, either topics that can hardly or not at all be addressed in their original context are to be made addressable, or topics that can be addressed in their context are, by being isolated from that original context, placed in a new light, accented differently, and thus possibly also rendered more precise and criticized. A rational-logical justification of the phenomena in the depicted world that deviate from extrafilmic or extratextual reality is not strictly necessary; its presence merely constitutes the criterion distinguishing science fiction from other fantastic-utopian genres. In what follows, Tim Burton's remake of "Planet of the Apes" (USA 2001) serves to show how a foreign world can be read as one's own world. Subsequently - and this constitutes, as it were, the 'novum' of this contribution - it will be discussed, against the background of a set-theoretical definition of fictionality, factuality, and fake, to what extent it makes sense to use deviation from the real world as a distinguishing feature of fictionality.
By now it may almost have been forgotten how Facebook named and organized its main playground until the autumn of 2011: at the center of the application was a light gray wall to which information could be pinned in numerous formats and variants. What previously served as a two-dimensionally organized, static format with limited space, automatically displacing older entries, was replaced in the autumn of 2011 by an organizational form that the company chose - not without numerous implications - to call a 'Timeline'. In contrast to its predecessor, this new structure offers a dynamic, column-like format organized upward toward an open future, with an origin bounded only at the bottom: an arrangement that keeps all older entries immediately visible and retrievable for the visitor. Why, one might ask, did Facebook make this change? What are the reasons for replacing a largely static surface with a scrollable, dynamic timeline? [...] It would be tempting to suppose that such a change of medium was preceded primarily by aesthetic or fashion-driven considerations. [...] In what follows I argue against this all-too-lightweight assumption of a purely aesthetic, attention-seeking, or entirely arbitrary change on the company's part, in order to put forward several structural reasons why this switch nevertheless proved advisable for Facebook above all, and not merely profitable in advertising terms. Such a justification is mentioned neither in Zuckerberg's keynote of September 2011 nor in the fine print of Facebook's terms. To gauge the true motives for an intervention that must be characterized as quite profound, a media-historical and media-theoretical perspective proves extraordinarily helpful.
To that end, let me first take a somewhat longer run-up, with the help of a timeline of my own: a timeline of the history of the timeline, which shows what a timeline actually accomplishes.
Author figures have been present in the audiovisual medium since the beginning of film history - both as references to actually existing historical authors and as fictional writers created for the respective narration together with the works they have written. It remains an intriguing question why the audiovisual medium takes such a great and long-lived interest in processes of imagining, writing, and publishing, an interest that goes through particular boom phases during periods of challenge for the medium - such as the most recent wave around the millennium, which coincided with the increasing digitalization of film. This connection between changing medial conditions and the interest in author figures can now also be found in the medium's serial narrative format. The recent transformation of serial audiovisual storytelling took place in particular through the rise of streaming services and the resulting availability of all previously released episodes of a series, detached from broadcasters' airtimes. Whereas viewers previously had to wait a week for individual episodes, with (apart from the special case of the rerun) only that particular episode available, it is now possible via 'binge watching' to view large portions of a series at once. This opens up the possibility of telling complex and strictly continuous storylines in the serial format. [...] In the context of this nesting of narrative arcs, the depiction of authorship in the serial format also faces demands that deviate markedly from those of the feature-film format and can be formulated as the following hypotheses: 1. Owing to a series' length and its subdivision into individual episodes, each with its own small climaxes, the genesis of a single work cannot be the sole focus of the narration.
Authors in the serial narrative format must take on new works again and again, engage with them in ever new ways, and prove themselves anew through their completion. 2. The creative potential and the act of a work's genesis are timed within the framework of the narrative arcs. [...] 3. As a continuation of the last boom wave in the feature-film format, a markedly intensified transmedial practice can be observed. [...] How these three assumptions manifest themselves in individual examples will be shown by means of three different types of author figures in American series.
This contribution aims to explore and define more precisely the particular role and function of recipients in transmedia storytelling. Within a historicization of the phenomenon of 'transmedia storytelling', an epoch-spanning comparison of different historical varieties of transmedia narratives reveals shared specifics of these narrative forms and makes recurring characteristic patterns of reception behavior visible. It becomes apparent that the readers, viewers, or users of transmedia narratives frequently act as observers of a higher order, whose interest is not solely, and often not primarily, directed at the level of content, the what of the story. Rather, the recipients' attention concentrates increasingly on the how, on the manner in which the narrative is conveyed in the various medial contexts.
"The art of making beautiful prints in less than an hour" : the darkroom in filmic reflection
(2020)
With astonishing persistence, the darkroom has found its way into fiction film across the decades: as a visual and narrative topos, as a setting, and as a stable iconic ensemble - and this quite independently of the darkroom's actual historical relevance to photographic production processes in the respective period in which the films are set. More than that: insofar as even today films with a clear contemporary setting thematize and reflect on the darkroom, the simultaneity of the non-simultaneous appears here as well. Even in films that matter-of-factly pay homage to the primacy of digital photography and its materialization on the most diverse displays, the darkroom and the processes carried out in it remain present. In what follows, the most prominent filmic narratives and motif configurations of this connection will therefore be traced, along with the question of the reasons for film's evident fascination with precisely this part of photographic image genesis. It will become apparent that, notwithstanding the relative breadth of relevant uses of the motif, recurring narrative and iconic patterns can be identified that assign the darkroom, as a setting, a limited corpus of functionalizations and chains of action, constructing it in each case as a different semantic space. In keeping with the media-comparatist premise that the reflection of another medium always also entails a self-reflection of one's own medium, it will further be observed which figurations of other- and self-observation film performs in its handling of the darkroom motif.
Background: A prototype of a noninvasive glucometer combining skin excitation by a mid-infrared quantum cascade laser with photothermal detection was evaluated in glucose correlation tests including 100 volunteers (41 people with diabetes and 59 healthy people).
Methods: Invasive reference measurements using a clinical glucometer and noninvasive measurements on a finger of each volunteer were recorded simultaneously at five-minute intervals over a two-hour period, starting from fasting glucose values for healthy subjects (low glucose values for diabetes patients). A glucose range from >50 to <350 mg/dL was covered. Machine learning algorithms were used to predict glucose values from the photothermal spectra. The data were analyzed for the average percent disagreement of the noninvasive measurements with the clinical reference measurement and visualized in consensus error grids.
Results: 98.8% (full data set) and 99.1% (improved algorithm) of glucose results were within Zones A and B of the grid, indicating the highest accuracy level. Less than 1% of the data were in Zone C, and none in Zone D or E. The mean and median percent differences between the invasive as a reference and the noninvasive method were 12.1% and 6.5%, respectively, for the full data set, and 11.3% and 6.4% with the improved algorithm.
Conclusions: Our results demonstrate that noninvasive blood glucose analysis combining mid-infrared spectroscopy and photothermal detection is feasible and comparable in accuracy with minimally invasive glucometers and finger pricking devices which use test strips. As a next step, a handheld version of the present device for diabetes patients is being developed.
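The mean and median percent-difference statistics reported in this abstract can be sketched as below; the paired readings are hypothetical illustration values, not data from the study:

```python
import statistics

def percent_differences(reference, noninvasive):
    """Absolute percent difference of each noninvasive reading
    relative to its paired invasive reference reading."""
    return [abs(n - r) / r * 100.0 for r, n in zip(reference, noninvasive)]

# Hypothetical paired glucose readings in mg/dL (illustration only).
ref = [90.0, 120.0, 180.0, 250.0]
nonin = [99.0, 114.0, 198.0, 240.0]

diffs = percent_differences(ref, nonin)
mean_pd = statistics.mean(diffs)      # average percent disagreement
median_pd = statistics.median(diffs)  # median percent disagreement
```

Applied to the study's full data set, these two statistics correspond to the reported 12.1% mean and 6.5% median differences.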
We analyze the behavior of cumulants of conserved charges in a subvolume of a thermal system with exact global conservation laws by extending a recently developed subensemble acceptance method (SAM) [1] to multiple conserved charges. Explicit expressions for all diagonal and off-diagonal cumulants up to sixth order that relate them to the grand canonical susceptibilities are obtained. The derivation is presented for an arbitrary equation of state with an arbitrary number of different conserved charges. The global conservation effects cancel out in any ratio of two second order cumulants, in any ratio of two third order cumulants, as well as in a ratio of strongly intensive measures Σ and ∆ involving any two conserved charges, making all these quantities particularly suitable for theory-to-experiment comparisons in heavy-ion collisions. We also show that the same cancellation occurs in correlators of a conserved charge, like the electric charge, with any non-conserved quantity such as net proton or net kaon number. The main results of the SAM are illustrated in the framework of the hadron resonance gas model. We also elucidate how net-proton and net-Λ fluctuations are affected by conservation of electric charge and strangeness in addition to baryon number.
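The cancellation described above can be illustrated with a sketch based on the published single-conserved-charge SAM scalings (the paper derives the general multi-charge coefficients); here α is the subvolume fraction and β = 1 − α:

```latex
% SAM: subvolume cumulants expressed through grand-canonical (gc) ones,
% with subvolume fraction \alpha = V_1/V and \beta = 1 - \alpha:
\kappa_2[B] = \alpha\beta\,\kappa_2^{\mathrm{gc}}[B], \qquad
\kappa_{11}[B,Q] = \alpha\beta\,\kappa_{11}^{\mathrm{gc}}[B,Q], \qquad
\kappa_3[B] = \alpha\beta(\beta-\alpha)\,\kappa_3^{\mathrm{gc}}[B].
% The common factor \alpha\beta drops out of any ratio of two
% second-order cumulants (and \alpha\beta(\beta-\alpha) of third-order ones):
\frac{\kappa_2[B]}{\kappa_{11}[B,Q]}
  = \frac{\kappa_2^{\mathrm{gc}}[B]}{\kappa_{11}^{\mathrm{gc}}[B,Q]}.
```

This is why such ratios are singled out in the abstract as particularly suitable for theory-to-experiment comparisons.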
In this Letter, we report the first measurement of the antideuteron inelastic cross section at low particle momenta, covering the range 0.3 ≤ p < 4 GeV/c. The measurement is carried out using p-Pb collisions at a center-of-mass energy per nucleon-nucleon pair of √sNN = 5.02 TeV, recorded with the ALICE detector at the CERN LHC and utilizing the detector material as an absorber for antideuterons and antiprotons. The extracted raw primary antiparticle-to-particle ratios are compared to the results of detailed ALICE simulations based on the GEANT4 toolkit for the propagation of antiparticles through the detector material. The analysis of the raw primary (anti)proton spectra serves as a benchmark for this study, since their hadronic interaction cross sections are well constrained experimentally. The first measurement of the antideuteron inelastic cross section averaged over the ALICE detector material with atomic mass numbers ⟨A⟩ = 17.4 and 31.8 is obtained. The measured inelastic cross section points to a possible excess over the Glauber model parameterization, by up to a factor of 2.1, in the lowest momentum interval of 0.3 ≤ p < 0.47 GeV/c. This result is relevant for the understanding of antimatter propagation and of the contributions to antinuclei production from cosmic-ray interactions within the interstellar medium. In addition, the momentum range covered by this measurement is of particular importance for evaluating signal predictions for indirect dark-matter searches.
The measurements of the (anti)deuteron elliptic flow (v2) and the first measurements of its triangular flow (v3) in Pb-Pb collisions at a center-of-mass energy per nucleon-nucleon pair of √sNN = 5.02 TeV are presented. A mass ordering at low transverse momentum (pT) is observed when comparing these measurements with those of other identified hadrons, as expected from relativistic hydrodynamics. The measured (anti)deuteron v2 lies between the predictions of the simple coalescence and blast-wave models, which provide a good description of the data only for more peripheral and for more central collisions, respectively. The mass-number scaling, which is violated for v2, is approximately valid for the (anti)deuteron v3. The measured v2 and v3 are also compared with the predictions of a coalescence approach with phase-space distributions of nucleons generated by iEBE-VISHNU with AMPT initial conditions coupled with UrQMD, and of a dynamical model based on relativistic hydrodynamics coupled to the hadronic afterburner SMASH. The model predictions are consistent with the data within the uncertainties in mid-central collisions, while a deviation is observed in central centrality intervals.
This paper presents isolated photon-hadron correlations using pp and p-Pb data collected by the ALICE detector at the LHC. For photons with |η| < 0.67 and 12 < pT < 40 GeV/c, the associated yield of charged particles in the range |η| < 0.80 and 0.5 < pT < 10 GeV/c is presented. These momenta are much lower than in previous measurements at the LHC. No significant difference between pp and p-Pb is observed, with PYTHIA 8.2 describing both data sets within uncertainties. This measurement constrains nuclear effects on parton fragmentation in p-Pb collisions and provides a benchmark for future studies of Pb-Pb collisions.
Scattering studies with low-energy kaon-proton femtoscopy in proton-proton collisions at the LHC
(2020)
The study of the strength and behaviour of the antikaon-nucleon (K̄N) interaction constitutes one of the key focuses of the strangeness sector in low-energy Quantum Chromodynamics (QCD). In this letter a unique high-precision measurement of the strong interaction between kaons and protons, close to and above the kinematic threshold, is presented. The femtoscopic measurements of the correlation function at low pair-frame relative momentum of (K+p ⊕ K−p̄) and (K−p ⊕ K+p̄) pairs measured in pp collisions at √s = 5, 7 and 13 TeV are reported. A structure observed around a relative momentum of 58 MeV/c in the measured correlation function of (K−p ⊕ K+p̄) with a significance of 4.4σ constitutes the first experimental evidence for the opening of the (K̄0n ⊕ K0n̄) isospin-breaking channel due to the mass difference between charged and neutral kaons. The measured correlation functions have been compared to Jülich and Kyoto models in addition to the Coulomb potential. The high-precision data at low relative momenta presented in this work prove femtoscopy to be a powerful complementary tool to scattering experiments and provide new constraints above the K̄N threshold for low-energy QCD chiral models.
The Izu–Bonin–Mariana volcanic arc is situated at a convergent plate margin where subduction initiation triggered the formation of MORB-like forearc basalts as a result of decompression melting and near-trench spreading. International Ocean Discovery Program (IODP) Expedition 352 recovered samples within the forearc basalt stratigraphy that contained unusual macroscopic globular textures hosted in andesitic glass (Unit 6, Hole 1440B). It is unclear how these andesites, which are unique in a stratigraphic sequence dominated by forearc basalts, and the globular textures therein may have formed. Here, we present detailed textural evidence, major and trace element analyses, as well as B and Sr isotope compositions, to investigate the genesis of these globular andesites. The samples consist of K2O-rich basaltic globules set in a glassy groundmass of andesitic composition. Between these two textural domains lies a likely hydrated interface of devitrified glass, which, based on textural evidence, seems to be genetically linked to the formation of the globules. The andesitic groundmass is Cl-rich (ca. 3000 µg/g), whereas the globules and the interface are Cl-poor (ca. 300 µg/g). Concentrations of fluid-mobile trace elements also appear to be fractionated, in that the globules show enrichments in B, K, Rb, Cs, and Tl, but not in Ba and W, relative to the andesitic groundmass, whereas the interface shows depletions in the latter but is enriched in the former. Interestingly, the globules and the andesitic groundmass have identical Sr isotopic compositions within analytical uncertainty (87Sr/86Sr of 0.70580 ± 10), indicating that they likely formed from the same source. However, the globules show high δ11B (ca. +7‰), whereas their host andesites are isotopically lighter (ca. −1‰), potentially indicating that whatever process led to their formation either introduced heavier B isotopes to the globules or induced stable isotope fractionation of B between the globules and their groundmass.
Based on the bulk of the textural information and geochemical data obtained from these samples, we conclude that these andesites likely formed as a result of the assimilation of shallowly altered oceanic crust (AOC) during forearc basaltic magmatism. Assimilation likely introduced radiogenic Sr, as well as heavier B isotopes, to comparatively unradiogenic and low-δ11B forearc basalt parental magmas (average 87Sr/86Sr of 0.703284). Moreover, the globular textures are consistent with their formation being the result of fluid-melt immiscibility, potentially induced by the rapid release of water from assimilated AOC, whose escape likely formed the interface. If the globular textures present in these samples are indeed the result of fluid-melt immiscibility, then this process led to significant trace element and stable isotope fractionation. The textures and chemical compositions of the globules highlight the need for future experimental studies aimed at investigating the exsolution process with respect to potential trace element and isotopic fractionation in arc magmas that may not have been previously considered.
1H, 13C, and 15N backbone chemical shift assignments of coronavirus-2 non-structural protein Nsp10
(2020)
The international Covid19-NMR consortium aims at the comprehensive spectroscopic characterization of SARS-CoV-2 RNA elements and proteins and will provide NMR chemical shift assignments of the molecular components of this virus. The SARS-CoV-2 genome encodes approximately 30 different proteins. Four of these proteins are involved in forming the viral envelope or in the packaging of the RNA genome and are therefore called structural proteins. The other proteins fulfill a variety of functions during the viral life cycle and comprise the so-called non-structural proteins (nsps). Here, we report the near-complete NMR resonance assignment for the backbone chemical shifts of the non-structural protein 10 (nsp10). Nsp10 is part of the viral replication-transcription complex (RTC). It aids in synthesizing and modifying the genomic and subgenomic RNAs. Via its interaction with nsp14, it ensures transcriptional fidelity of the RNA-dependent RNA polymerase, and through its stimulation of the methyltransferase activity of nsp16, it aids in synthesizing the RNA cap structures which protect the viral RNAs from being recognized by the innate immune system. Both of these functions can be potentially targeted by drugs. Our data will aid in performing additional NMR-based characterizations, and provide a basis for the identification of possible small molecule ligands interfering with nsp10 exerting its essential role in viral replication.
The FUBP1-FUSE complex is an essential component of a transcriptional molecular machinery that is necessary for the tight regulation of expression of many key genes, including c-Myc and p21. FUBP1 utilizes its four articulated KH modules, which function cooperatively, for FUSE nucleotide binding. To understand the molecular mechanisms fundamental to this intermolecular interaction, we present a set of crystal structures, as well as ssDNA-binding characterization of the FUBP1 KH domains. All KH1-4 motifs were highly topologically conserved and were able to interact with FUSE individually and independently. Nevertheless, differences in nucleotide-binding properties among the four KH domains were evident, including a higher nucleotide-binding potency for KH3 as well as diverse nucleotide sequence preferences. Variations in the amino acid composition at the side of the binding cleft responsible for nucleobase recognition resulted in diverse cleft shapes and electrostatic charge interactions, which might feasibly be a contributing factor to the different nucleotide-binding propensities among KH1-4. Nonetheless, the conservation of structure and nucleotide-binding properties in all four KH motifs is essential for the cooperativity of the multiple KH modules present in FUBP1, which together achieve nanomolar affinity for the FUSE interaction. The comprehensive structural comparison and ssDNA-binding characteristics of all four KH domains presented here provide molecular insights at a fundamental level that might be beneficial for elucidating the mechanisms of the FUBP1-FUSE interaction.
One of the key challenges for nuclear physics today is to understand from first principles the effective interaction between hadrons with different quark content. First successes have been achieved using techniques that solve the dynamics of quarks and gluons on discrete space-time lattices [1,2]. Experimentally, the dynamics of the strong interaction have been studied by scattering hadrons off each other. Such scattering experiments are difficult or impossible for unstable hadrons [3–6], and so high-quality measurements exist only for hadrons containing up and down quarks [7]. Here we demonstrate that measuring correlations in momentum space between hadron pairs [8–12] produced in ultrarelativistic proton–proton collisions at the CERN Large Hadron Collider (LHC) provides a precise method with which to obtain the missing information on the interaction dynamics between any pair of unstable hadrons. Specifically, we discuss the case of the interaction of baryons containing strange quarks (hyperons). We demonstrate how, using precision measurements of proton–omega baryon correlations, the effect of the strong interaction for this hadron–hadron pair can be studied with precision similar to, and compared with, predictions from lattice calculations [13,14]. The large number of hyperons identified in proton–proton collisions at the LHC, together with accurate modelling [15] of the small (approximately one femtometre) inter-particle distance and exact predictions for the correlation functions, enables a detailed determination of the short-range part of the nucleon–hyperon interaction.
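The momentum-space correlation technique described in this abstract is conventionally summarized by the Koonin–Pratt relation, a standard expression in femtoscopy that the abstract does not spell out explicitly:

```latex
C(k^{*}) = \int \mathrm{d}^{3}r \; S(r) \, \left| \psi(k^{*}, r) \right|^{2}
```

Here S(r) is the particle-emitting source function, ψ(k*, r) is the relative two-particle wave function encoding the final-state (strong) interaction, and k* is the relative momentum in the pair rest frame; C(k*) → 1 in the absence of final-state interactions, so deviations from unity carry the interaction information the abstract refers to.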
The first evidence of spin alignment of vector mesons (K*0 and ϕ) in heavy-ion collisions at the Large Hadron Collider (LHC) is reported. The spin density matrix element ρ00 is measured at midrapidity (|y| < 0.5) in Pb-Pb collisions at a center-of-mass energy (√sNN) of 2.76 TeV with the ALICE detector. ρ00 values are found to be less than 1/3 (1/3 implies no spin alignment) at low transverse momentum (pT < 2 GeV/c) for K*0 and ϕ at a level of 3σ and 2σ, respectively. No significant spin alignment is observed for the K0S meson (spin = 0) in Pb-Pb collisions or for the vector mesons in pp collisions. The measured spin alignment is unexpectedly large but qualitatively consistent with the expectation from models that attribute it to a polarization of quarks in the presence of angular momentum in heavy-ion collisions and a subsequent hadronization by the process of recombination.
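The element ρ00 quoted above is conventionally extracted from the angular distribution of the vector-meson decay daughters; this is the standard relation used in spin-alignment analyses (the choice of quantization axis, e.g. the normal to the production plane, is specific to the analysis and not stated here):

```latex
\frac{\mathrm{d}N}{\mathrm{d}\cos\theta^{*}} \propto \left(1-\rho_{00}\right) + \left(3\rho_{00}-1\right)\cos^{2}\theta^{*}
```

θ* is the angle between the quantization axis and the daughter momentum in the meson rest frame; for ρ00 = 1/3 the cos²θ* term vanishes and the distribution is isotropic, which is why 1/3 corresponds to no spin alignment.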
Background: The adequate allocation of inpatient care resources requires assumptions about the need for health care and how this need will be met. However, in current practice, these assumptions are often based on outdated methods (e.g. Hill-Burton Formula). This study evaluated floating catchment area (FCA) methods, which have been applied as measures of spatial accessibility, focusing on their ability to predict the need for health care in the inpatient sector in Germany.
Methods: We tested three FCA methods (enhanced (E2SFCA), modified (M2SFCA) and integrated (iFCA)) for their accuracy in predicting hospital visits for six medical diagnoses (atrial flutter/fibrillation, heart failure, femoral fracture, gonarthrosis, stroke, and epilepsy) at the national level in Germany. We further used the closest-provider approach for benchmarking purposes. The predicted visits were compared with the actual visits for all six diagnoses using a correlation analysis and a maximum error from the actual visits of ± 5%, ± 10% and ± 15%.
Results: The analysis of 229 million distances between hospitals and population locations revealed a high and significant correlation of predicted with actual visits for all three FCA methods across all six diagnoses, up to ρ = 0.79 (p < 0.001). Overall, all FCA methods showed a substantially higher correlation with actual hospital visits than the closest-provider approach (up to ρ = 0.51; p < 0.001). Allowing a 5% error of the absolute values, the analysis revealed up to 13.4% correctly predicted hospital visits using the FCA methods (15% error: up to 32.5% correctly predicted hospital visits). Finally, the potential of the FCA methods was revealed by using the actual hospital visits as the measure of hospital attractiveness, which returned very strong correlations with the actual hospital visits, up to ρ = 0.99 (p < 0.001).
Conclusion: We were able to demonstrate the impact of FCA measures on the prediction of hospital visits in non-emergency settings, and their superiority over commonly used methods (i.e. the closest-provider approach). However, hospital beds were inadequate as the measure of hospital attractiveness, resulting in low accuracy of the predicted hospital visits. More reliable measures must be integrated within the proposed methods. Still, this study strengthens the case for FCA methods in health care planning beyond their original application in measuring spatial accessibility.
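The two-step floating catchment area family of methods referenced in this abstract follows a common computational pattern. Below is a minimal sketch of the enhanced variant (E2SFCA) with a Gaussian distance-decay weight, one common choice; all function names, toy numbers and the catchment parameter are illustrative and not taken from the study:

```python
import math

def gaussian_weight(d, d0):
    """Gaussian distance-decay weight; zero beyond the catchment size d0."""
    if d > d0:
        return 0.0
    return math.exp(-0.5 * (d / d0) ** 2)

def e2sfca(populations, supplies, dist, d0):
    """E2SFCA sketch.

    populations: {location_id: population}
    supplies:    {provider_id: capacity, e.g. hospital beds}
    dist:        {(location_id, provider_id): distance}
    Returns {location_id: accessibility score}.
    """
    # Step 1: supply-to-demand ratio R_j for each provider,
    # demand being the distance-weighted population in its catchment.
    R = {}
    for j, capacity in supplies.items():
        demand = sum(populations[i] * gaussian_weight(dist[(i, j)], d0)
                     for i in populations)
        R[j] = capacity / demand if demand > 0 else 0.0
    # Step 2: accessibility A_i at each population location is the
    # distance-weighted sum of reachable provider ratios.
    return {i: sum(R[j] * gaussian_weight(dist[(i, j)], d0)
                   for j in supplies)
            for i in populations}
```

For example, a single location of 1000 people served by one co-located hospital with 100 beds yields an accessibility score of 0.1 beds per person; the study's predicted visits would then be derived by allocating demand in proportion to such weighted ratios.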
This research examines the impact of online display advertising and paid search advertising relative to offline advertising on firm performance and firm value. Using proprietary data on annualized advertising expenditures for 1651 firms spanning seven years, we document that both display advertising and paid search advertising exhibit positive effects on firm performance (measured by sales) and firm value (measured by Tobin's q). Paid search advertising has a more positive effect on sales than offline advertising, consistent with paid search being closest to the actual purchase decision and having enhanced targeting abilities. Display advertising exhibits a relatively more positive effect on Tobin's q than offline advertising, consistent with its long-term effects. The findings suggest heterogeneous economic benefits across different types of advertising, with direct implications for managers in analyzing advertising effectiveness and external stakeholders in assessing firm performance.
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
The current outbreak of the highly infectious COVID-19 respiratory disease is caused by the novel coronavirus SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2). To fight the pandemic, the search for promising viral drug targets has become a cross-border common goal of the international biomedical research community. Within the international Covid19-NMR consortium, scientists support drug development against SARS-CoV-2 by providing publicly available NMR data on viral proteins and RNAs. The coronavirus nucleocapsid protein (N protein) is an RNA-binding protein involved in viral transcription and replication. Its primary function is the packaging of the viral RNA genome. The highly conserved architecture of the coronavirus N protein consists of an N-terminal RNA-binding domain (NTD), followed by an intrinsically disordered Serine/Arginine (SR)-rich linker and a C-terminal dimerization domain (CTD). Besides its involvement in oligomerization, the CTD of the N protein (N-CTD) is also able to bind to nucleic acids by itself, independent of the NTD. Here, we report the near-complete NMR backbone chemical shift assignments of the SARS-CoV-2 N-CTD to provide the basis for downstream applications, in particular site-resolved drug binding studies.
Background: The aim of this study was to collect standard reference values for body weight distribution and maximum pressure distribution in healthy adults aged 18–65 years and to investigate the influence of constitutional parameters on these measures.
Methods: A total of 416 healthy subjects (208 male / 208 female) aged between 18 and 65 years (mean 38.3 ± 14.1 years) participated in this study, conducted from 2015 to 2019 in Heidelberg. The age-specific evaluation is based on 4 age groups (G1, 18–30 years; G2, 31–40 years; G3, 41–50 years; G4, 51–65 years). A pressure measuring plate, FDM-S (Zebris, Isny, Germany), was used to record the body weight distribution and maximum pressure distribution of the left and right foot and the left and right forefoot/rearfoot, respectively.
Results: Body weight distribution between the left (50.07%) and right (50.12%) foot was balanced. There was a higher load on the rearfoot (left 54.14%; right 55.09%) than on the forefoot (left 45.49%; right 44.26%). The maximum pressure in the rearfoot was higher than in the forefoot (rearfoot left 9.60 N/cm2; rearfoot right 9.51 N/cm2; forefoot left 8.23 N/cm2; forefoot right 8.59 N/cm2). With increasing age, the load on the left foot shifted from the rearfoot to the forefoot, as did the maximum pressure (p ≤ 0.02 and 0.03; weak effect size). With increasing BMI, the body weight shifted to the left and right rearfoot (p ≤ 0.001, weak effect size). As BMI increased, so did the maximum pressure in all areas (p ≤ 0.001 and 0.03, weak to moderate effect size). There were significant differences in weight and maximum pressure distribution in the forefoot and rearfoot between the age groups, especially between younger (18–40 years) and older (41–65 years) subjects.
Discussion: Healthy individuals aged 18 to 65 years were found to have a balanced side-to-side weight distribution, with an approximately 20% greater load on the rearfoot. Age and BMI were found to influence the weight and maximum pressure distribution, especially between younger and older subjects. The collected standard reference values allow comparisons with other studies and can serve as a guideline in clinical practice and scientific studies.
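The percentages reported in this abstract relate to each other in a simple way: left/right shares are fractions of the total plate force, while the fore/rear split is computed per foot. A minimal sketch, with hypothetical force readings chosen only to illustrate the arithmetic (not data from the study):

```python
def load_distribution(forces):
    """Compute load percentages from four plate-zone forces (in N).

    forces: dict with keys 'left_fore', 'left_rear',
            'right_fore', 'right_rear'.
    Returns left/right share of total load and the rearfoot
    share within each foot, all in percent.
    """
    total = sum(forces.values())
    left = forces['left_fore'] + forces['left_rear']
    right = forces['right_fore'] + forces['right_rear']
    return {
        'left_pct': 100 * left / total,        # share of body weight on left foot
        'right_pct': 100 * right / total,      # share of body weight on right foot
        'left_rear_pct': 100 * forces['left_rear'] / left,    # rearfoot share, left
        'right_rear_pct': 100 * forces['right_rear'] / right, # rearfoot share, right
    }
```

With, say, 220 N on the left rearfoot and 180 N on the left forefoot, the left rearfoot share is 55%, matching the order of magnitude of the ~54–55% rearfoot loads reported above.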
Selective sympathetic and parasympathetic pathways that act on target organs represent the terminal actors in the neurobiology of homeostasis and often become compromised during a range of neurodegenerative and traumatic disorders. Here, we delineate several neurotransmitter and neuromodulator phenotypes found in diverse parasympathetic and sympathetic ganglia in humans and rodent species. The comparative approach reveals evolutionarily conserved and non-conserved phenotypic marker constellations. A developmental analysis examining the acquisition of selected neurotransmitter properties has provided a detailed, but still incomplete, understanding of the origins of a set of noradrenergic and cholinergic sympathetic neuron populations, found in the cervical and trunk region. A corresponding analysis examining cholinergic and nitrergic parasympathetic neurons in the head, and a range of pelvic neuron populations, with noradrenergic, cholinergic, nitrergic, and mixed transmitter phenotypes, remains open. Of particular interest are the molecular mechanisms and nuclear processes that are responsible for the correlated expression of the various genes required to achieve the noradrenergic phenotype, the segregation of cholinergic locus gene expression, and the regulation of genes that are necessary to generate a nitrergic phenotype. Unraveling the neuron population-specific expression of adhesion molecules, which are involved in axonal outgrowth, pathway selection, and synaptic organization, will advance the study of target-selective autonomic pathway generation.
Flavin-based electron bifurcation is a long-hidden mechanism of energetic coupling present mainly in anaerobic bacteria and archaea that suffer from energy limitations in their environment. Electron bifurcation saves precious cellular ATP and enables lithotrophic life of acetate-forming (acetogenic) bacteria that grow on H2 + CO2 by the only pathway that combines CO2 fixation with ATP synthesis, the Wood–Ljungdahl pathway. The energy barrier for the endergonic reduction of NADP+, an electron carrier in the Wood–Ljungdahl pathway, with NADH as reductant is overcome by an electron-bifurcating, ferredoxin-dependent transhydrogenase (Nfn), but many acetogens lack nfn genes. We have purified a ferredoxin-dependent NADH:NADP+ oxidoreductase from Sporomusa ovata, characterized the enzyme biochemically and identified the encoding genes. These studies led to the identification of a novel, Sporomusa-type Nfn (Stn), built from existing enzyme modules such as those of the soluble [FeFe] hydrogenase, that is widespread in acetogens and other anaerobic bacteria.
Replacement of a stenotic aortic valve immediately reduces the ventricular-to-aortic gradient and is expected to improve diastolic and systolic left ventricular function over the long term. However, the hemodynamic changes immediately after valve implantation are so far poorly understood. In this pilot study, we performed an invasive pressure-volume loop analysis to describe the early hemodynamic changes after transcatheter aortic valve implantation (TAVI) with self-expandable prostheses. Invasive left ventricular pressure-volume loop analysis was performed in 8 patients with aortic stenosis (mean age 81.3 years) before and immediately after transfemoral TAVI with a self-expandable valve system (St. Jude Medical Portico Valve). Parameters for global hemodynamics, afterload, contractility and the interaction of the cardiovascular system were analyzed. Left ventricular ejection fraction (53.9% vs. 44.8%, p = 0.018), preload recruitable stroke work (68.5 vs. 44.8 mmHg, p = 0.012) and end-systolic elastance (3.55 vs. 2.17, p = 0.036), the latter two being markers of myocardial contractility, declined significantly compared to baseline. As a sign of impaired diastolic function, TAU, a preload-independent measure of isovolumic relaxation (37.3 vs. 41.8 ms, p = 0.018), and end-diastolic pressure (13.1 vs. 16.4 mmHg, p = 0.015) rose after valve implantation. In contrast, a smaller ratio of end-systolic to arterial elastance (ventricular-arterial coupling) indicates an improvement of global cardiovascular energy efficiency (1.40 vs. 0.97, p = 0.036). Arterial elastance correlated strongly with the number of conducted rapid ventricular pacings (Pearson correlation coefficient, r = 0.772, p = 0.025). Invasive left ventricular pressure-volume loop analysis revealed impaired systolic and diastolic function in the early phase after TAVI with a self-expandable valve for the treatment of severe aortic stenosis. In contrast, we found indications of an early improvement of global cardiovascular energy efficiency.
This study was performed to identify Peronosclerospora species found in Indonesia based on sequence analysis of the cox2 gene. In total, 26 isolates of Peronosclerospora were investigated in this study. They were obtained from 7 provinces in Indonesia, namely Lampung, Jawa Timur, Jawa Barat, Sumatera Utara, Jawa Tengah, Yogyakarta, and Sulawesi Selatan. Sequence analysis of cox2 and phylogenetic inference were performed on all 26 isolates. A set of primers developed in this study, PCOX2F and PCOX2R, was used for PCR amplification. Phylogenetic analyses showed that all the Indonesian isolates were divided into two groups. Group I contained 13 isolates: 9 obtained from Lampung, 3 from Sumatera Utara, and 1 from Jawa Barat. Group II consisted of 13 isolates: 7 from Jawa Timur, 2 from Jawa Tengah, 1 from Yogyakarta, and 3 from Sulawesi Selatan. All the members of Group I clustered with the ex-type sequence of P. australiensis, whereas all members of Group II formed a sister clade to isolates obtained from Timor-Leste and may represent P. maydis.
The nuclear factor kappa B (NFκB) signaling pathway plays an important role in liver homeostasis and cancer development. Tax1-binding protein 1 (Tax1BP1) is a regulator of the NFκB signaling pathway, but its role in the liver and in hepatocellular carcinoma (HCC) is presently unknown. Here we investigated the role of Tax1BP1 in liver cells and in murine models of HCC and liver fibrosis. We applied the diethylnitrosamine (DEN) model of experimental hepatocarcinogenesis in Tax1BP1+/+ and Tax1BP1−/− mice. The numbers and subsets of non-parenchymal liver cells in Tax1BP1+/+ and Tax1BP1−/− mice were determined, and activation of NFκB and stress-induced signaling pathways was assessed. Differential expression of mRNA and miRNA was determined. Tax1BP1−/− mice showed increased numbers of inflammatory cells in the liver. Furthermore, sustained activation of the NFκB signaling pathway was found in hepatocytes, as well as increased transcription of proinflammatory cytokines in Kupffer cells isolated from Tax1BP1−/− mice. Several mRNAs and miRNAs that are regulators of inflammation or are involved in cancer development or progression were found to be differentially expressed in livers of Tax1BP1−/− mice. Furthermore, Tax1BP1−/− mice developed more HCCs than their Tax1BP1+/+ littermates. We conclude that Tax1BP1 protects from liver cancer development by limiting proinflammatory signaling.