Ice particle activation and evolution have important atmospheric implications for cloud formation, initiation of precipitation and radiative interactions. The initial formation of atmospheric ice by heterogeneous ice nucleation requires the presence of a nucleating seed, an ice-nucleating particle (INP), to facilitate its first emergence. Unfortunately, only a few long-term measurements of INPs exist, and as a result, knowledge about geographic and seasonal variations of INP concentrations is sparse. Here we present data from nearly 2 years of INP measurements from four stations in different regions of the world: the Amazon (Brazil), the Caribbean (Martinique), central Europe (Germany) and the Arctic (Svalbard). The sites feature diverse geographical climates and ecosystems that are associated with dissimilar transport patterns, aerosol characteristics and levels of anthropogenic impact (ranging from near pristine to mostly rural). Interestingly, observed INP concentrations, which represent measurements in the deposition and condensation freezing modes, do not differ greatly from site to site but usually fall well within the same order of magnitude. Moreover, short-term variability overwhelms all long-term trends and/or seasonality in the INP concentration at all locations. An analysis of the frequency distributions of INP concentrations suggests that INPs tend to be well mixed and reflective of large-scale air mass movements. No universal physical or chemical parameter could be identified to be a causal link driving INP climatology, highlighting the complex nature of the ice nucleation process. Amazonian INP concentrations were mostly unaffected by the biomass burning season, even though aerosol concentrations increase by a factor of 10 from the wet to dry season. Caribbean INPs were positively correlated to parameters related to transported mineral dust, which is known to increase during the Northern Hemisphere summer. 
A wind sector analysis revealed the absence of an anthropogenic impact on average INP concentrations at the site in central Europe. Likewise, no Arctic haze influence was observed on INPs at the Arctic site, where low concentrations were generally measured. We consider the collected data to be a unique resource for the community that illustrates some of the challenges and knowledge gaps of the field in general, while specifically highlighting the need for more long-term observations of INPs worldwide.
Bioaerosols are considered to play a relevant role in atmospheric processes, but their sources, properties, and spatiotemporal distribution in the atmosphere are not yet well characterized. In the Amazon Basin, primary biological aerosol particles (PBAPs) account for a large fraction of coarse particulate matter, and fungal spores are among the most abundant PBAPs in this area as well as in other vegetated continental regions. Furthermore, PBAPs could also be important ice nuclei in Amazonia. Measurement data on the release of fungal spores under natural conditions, however, are sparse. Here we present an experimental approach to analyze and quantify the spore release from fungi and other spore-producing organisms under natural and laboratory conditions. For measurements under natural conditions, the samples were kept in their natural environment and a setup was developed to estimate the spore release numbers and sizes as well as the microclimatic factors temperature and air humidity in parallel with the mesoclimatic parameters net radiation, rain, and fog occurrence. For experiments in the laboratory, we developed a cuvette to assess the particle size and number of newly released fungal spores under controlled conditions, simultaneously measuring temperature and relative humidity inside the cuvette. Both approaches were combined with bioaerosol sampling techniques to characterize the released particles using microscopic methods. For fruiting bodies of the basidiomycetous species Rigidoporus microporus, the model species for which these techniques were tested, the highest frequency of spore release occurred in the range from 62 % to 96 % relative humidity. The results obtained for this model species reveal characteristic spore release patterns linked to environmental or experimental conditions, indicating that the moisture status of the sample may be a regulating factor, whereas temperature and light seem to play a minor role for this species.
The presented approach enables systematic studies aimed at the quantification and validation of spore emission rates and inventories, which can be applied to a regional mapping of cryptogamic organisms under given environmental conditions.
Purpose: In the clinical routine, detection of focal cortical dysplasia (FCD) by visual inspection is challenging. Still, information about the presence and location of FCD is highly relevant for prognostication and treatment decisions. Therefore, this study aimed to develop, describe and test a method for the calculation of synthetic anatomies using multiparametric quantitative MRI (qMRI) data and surface-based analysis, which allows for an improved visualization of FCD.
Materials and Methods: Quantitative T1-, T2- and PD-maps and conventional clinical datasets of patients with FCD and epilepsy were acquired. Tissue segmentation and delineation of the border between white matter and cortex were performed. In order to detect blurring at this border, a surface-based calculation of the standard deviation of each quantitative parameter (T1, T2, and PD) was performed across the cortex and the neighboring white matter for each cortical vertex. The resulting standard deviations combined with measures of the cortical thickness were used to enhance the signal of conventional FLAIR-datasets. The resulting synthetically enhanced FLAIR-anatomies were compared with conventional MRI-data utilizing region-of-interest-based analysis techniques.
Results: The synthetically enhanced FLAIR-anatomies showed higher signal levels than conventional FLAIR-data at the FCD sites (p = 0.005). In addition, the enhanced FLAIR-anatomies exhibited higher signal levels at the FCD sites than in the corresponding contralateral regions (p = 0.005). However, false positive findings occurred, so careful comparison with conventional datasets is mandatory.
Conclusion: Synthetically enhanced FLAIR-anatomies resulting from surface-based multiparametric qMRI-analyses have the potential to improve the visualization of FCD and, accordingly, the treatment of the respective patients.
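The surface-based enhancement described in the Methods can be sketched in a few lines. The array names, sampling scheme, and weighting below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_depths = 1000, 7  # hypothetical mesh vertices, sampling depths per vertex

# Hypothetical qMRI values sampled along each vertex normal, spanning the
# cortex and neighboring white matter (units: ms for T1/T2, a.u. for PD).
qmaps = {
    "T1": 900 + 80 * rng.standard_normal((n_vertices, n_depths)),
    "T2": 80 + 5 * rng.standard_normal((n_vertices, n_depths)),
    "PD": 0.8 + 0.05 * rng.standard_normal((n_vertices, n_depths)),
}
thickness = 2.5 + 0.3 * rng.standard_normal(n_vertices)  # cortical thickness (mm)
flair = 100 + 10 * rng.standard_normal(n_vertices)       # conventional FLAIR per vertex

def zscore(x):
    """Standardize to zero mean, unit variance."""
    return (x - x.mean()) / x.std()

# Per-vertex standard deviation across the ribbon for each parameter: a low
# value suggests blurring of the grey/white border, one of the FCD markers
# described above (hence the negative sign).
blur_score = -sum(zscore(qmaps[p].std(axis=1)) for p in qmaps)

# Combine the blurring score with a cortical-thickness term into an
# enhancement weight (weights and sign conventions here are arbitrary).
enhancement = blur_score + zscore(thickness)
synthetic_flair = flair * (1 + 0.1 * np.clip(enhancement, 0, None))
```

Vertices whose qMRI profile is unusually homogeneous across the grey/white border, or whose cortex is unusually thick, receive a boosted FLAIR value; all other vertices are left unchanged.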
Cortical changes in epilepsy patients with focal cortical dysplasia: new insights with T2 mapping
(2020)
Background: In epilepsy patients with focal cortical dysplasia (FCD) as the epileptogenic focus, global cortical signal changes are generally not visible on conventional MRI. However, epileptic seizures or antiepileptic medication might affect normal-appearing cerebral cortex and lead to subtle damage. Purpose: To investigate cortical properties outside FCD regions with T2-relaxometry. Study Type: Prospective study. Subjects: Sixteen patients with epilepsy and FCD and 16 age-/sex-matched healthy controls. Field Strength/Sequence: 3T, fast spin-echo T2-mapping, fluid-attenuated inversion recovery (FLAIR), and synthetic T1-weighted magnetization-prepared rapid acquisition of gradient-echoes (MP-RAGE) datasets derived from T1-maps. Assessment: Reconstruction of the white matter and cortical surfaces based on MP-RAGE structural images was performed to extract cortical T2 values, excluding lesion areas. Three independent raters confirmed that morphological cortical/juxtacortical changes in the conventional FLAIR datasets outside the FCD areas were definitely absent for all patients. Averaged global cortical T2 values were compared between groups. Furthermore, group comparisons of regional cortical T2 values were performed using a surface-based approach. Tests for correlations with clinical parameters were carried out. Statistical Tests: General linear model analysis, permutation simulations, paired and unpaired t-tests, and Pearson correlations. Results: Cortical T2 values were increased outside FCD regions in patients (83.4 ± 2.1 msec, control group 81.4 ± 2.1 msec, P = 0.01). T2 increases were widespread, affecting mainly frontal, but also parietal and temporal regions of both hemispheres. Significant correlations were not observed (P ≥ 0.55) between cortical T2 values in the patient group and the number of seizures in the last 3 months or the number of anticonvulsive drugs in the medical history. 
Data Conclusion: Widespread increases in cortical T2 in FCD-associated epilepsy patients were found, suggesting that structural epilepsy in patients with FCD is not only a symptom of a focal cerebral lesion, but also leads to global cortical damage not visible on conventional MRI. Evidence Level: 2. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2020;52:1783–1789.
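The reported group difference (83.4 ± 2.1 msec in 16 patients vs. 81.4 ± 2.1 msec in 16 controls, P = 0.01) can be checked back-of-envelope with a pooled-variance unpaired t-test. This is only a plausibility sketch; the study's actual analysis used a general linear model with permutation simulations:

```python
import math

# Summary statistics reported above (global cortical T2, msec)
m1, s1, n1 = 83.4, 2.1, 16   # patients
m2, s2, n2 = 81.4, 2.1, 16   # controls

# Pooled standard deviation under the equal-variance assumption
sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Unpaired t-statistic and degrees of freedom
t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
df = n1 + n2 - 2

print(f"t = {t:.2f}, df = {df}")
# t ≈ 2.69 with df = 30, above the two-tailed 5% critical value of ~2.04,
# consistent with the reported P = 0.01.
```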
Background: Austria has recently been embroiled in the complex debate on the legalization of measures to end life prematurely. Empirical data on end-of-life decisions made by Austrian physicians barely exists. This study is the first in Austria aimed at finding out how physicians generally approach and make end-of-life therapy decisions.
Methods: The European end-of-life decisions (EURELD) questionnaire, translated and adapted by Schildmann et al., was used to conduct this cross-sectional postal survey. Questions on palliative care training, legal issues, and use of and satisfaction with palliative care were added. All Austrian specialists in hematology and oncology, a representative sample of doctors specialized in internal medicine, and a sample of general practitioners, were invited to participate in this anonymous postal survey.
Results: Five hundred forty-eight questionnaires (response rate: 10.4%) were evaluated. 88.3% of participants had treated a patient who had died in the previous 12 months. 23% of respondents had an additional qualification in palliative medicine. The cause of death in 53.1% of patients was cancer, and 44.8% died at home. In 86.3% of cases, pain and/or symptom relief had been intensified. Further treatment had been withheld by 60.0%, and an existing treatment discontinued by 49.1% of respondents. In 5 cases, the respondents had prescribed, provided or administered a drug which had resulted in death. 51.3% of physicians said they would never carry out physician-assisted suicide (PAS), while 30.3% could imagine doing so under certain conditions. 38.5% of respondents supported the current prohibition of PAS, 23.9% opposed it, and 33.2% were undecided. 52.4% of physicians felt the legal situation with respect to measures to end life prematurely was ambiguous. An additional qualification in palliative medicine had no influence on measures taken, or attitudes towards PAS.
Conclusions: The majority of doctors perform symptom control in terminally ill patients. PAS is frequently requested but rarely carried out. Attending physicians felt the legal situation was ambiguous. Physicians should therefore receive training in current legislation relating to end-of-life choices and medical decisions. The data collected in this survey will help political decision-makers provide the necessary legal framework for end-of-life medical care.
Objectives: To review systematically the past 10 years of research activity into the healthcare experiences (HCX) of patients with chronic heart failure (CHF) in Germany, in order to identify research foci and gaps and make recommendations for future research. Design: In this scoping review, six databases and grey literature sources were systematically searched for articles reporting HCX of patients with CHF in Germany that were published between 2008 and 2018. Extracted results were summarised using quantitative and qualitative descriptive analysis. Results: Of the 18 studies (100%) that met the inclusion criteria, most were observational studies (60%) that evaluated findings quantitatively (60%). HCX were often concerned with patient information, global satisfaction as well as relationships and communication between patients and providers and generally covered ambulatory care, hospital care and rehabilitation services. Overall, the considerable heterogeneity of the included studies’ outcomes only permitted relatively trivial levels of synthesis. Conclusion: In Germany, research on HCX of patients with CHF is characterised by missing, inadequate and insufficient information. Future research would benefit from qualitative analyses, evidence syntheses, longitudinal analyses that investigate HCX throughout the disease trajectory, and better reporting of sociodemographic data. Furthermore, research should include studies that are based on digital data, reports of experiences gained in under-investigated yet patient-relevant healthcare settings and include more female subjects.
Objectives: The ongoing coronavirus pandemic is challenging, especially in severely affected patients who require intubation and sedation. Although the potential benefits of sedation with volatile anesthetics in coronavirus disease 2019 patients are currently being discussed, the use of isoflurane in patients with coronavirus disease 2019–induced acute respiratory distress syndrome has not yet been reported. Design: We performed a retrospective analysis of critically ill patients with hypoxemic respiratory failure requiring mechanical ventilation. Setting: The study was conducted with patients admitted between April 4 and May 15, 2020 to our ICU. Patients: We included five patients who were previously diagnosed with severe acute respiratory syndrome coronavirus 2 infection. Intervention: Even with high doses of several IV sedatives, the targeted level of sedation could not be achieved. Therefore, the sedation regimen was switched to inhalational isoflurane. Clinical data were recorded using a patient data management system. We recorded demographical data, laboratory results, ventilation variables, sedative dosages, sedation level, prone positioning, duration of volatile sedation and outcomes. Measurements & Main Results: Mean age (four men, one woman) was 53.0 (± 12.7) years. The mean duration of isoflurane sedation was 103.2 (± 66.2) hours. Our data demonstrate a substantial improvement in the oxygenation ratio when using isoflurane sedation. Deep sedation as assessed by the Richmond Agitation and Sedation Scale was rapidly and closely controlled in all patients, and the subsequent discontinuation of IV sedation was possible within the first 30 minutes. No adverse events were detected. Conclusions: Our findings demonstrate the feasibility of isoflurane sedation in five patients suffering from severe coronavirus disease 2019 infection. Volatile isoflurane was able to achieve the required deep sedation and reduced the need for IV sedation.
Purpose: To investigate cortical thickness and cortical quantitative T2 values as imaging markers of microstructural tissue damage in patients with unilateral high-grade internal carotid artery occlusive disease (ICAOD).
Methods: A total of 22 patients with ≥70% stenosis (mean age 64.8 years) and 20 older healthy control subjects (mean age 70.8 years) underwent structural magnetic resonance imaging (MRI) and high-resolution quantitative (q)T2 mapping. Generalized linear mixed models (GLMM) controlling for age and white matter lesion volume were employed to investigate the effect of ICAOD on imaging parameters of cortical microstructural integrity in multivariate analyses.
Results: There was a significant main effect (p < 0.05) of the group (patients/controls) on both cortical thickness and cortical qT2 values with cortical thinning and increased cortical qT2 in patients compared to controls, irrespective of the hemisphere. The presence of upstream carotid stenosis had a significant main effect on cortical qT2 values (p = 0.01) leading to increased qT2 in the poststenotic hemisphere, which was not found for cortical thickness. The GLMM showed that in general cortical thickness was decreased and cortical qT2 values were increased with increasing age (p < 0.05).
Conclusion: Unilateral high-grade carotid occlusive disease is associated with widespread cortical thinning and prolongation of cortical qT2, presumably reflecting hypoperfusion-related microstructural cortical damage similar to accelerated aging of the cerebral cortex. Cortical thinning and increase of cortical qT2 seem to reflect different aspects and different pathophysiological states of cortical degeneration. Quantitative T2 mapping might be a sensitive imaging biomarker for early cortical microstructural damage.
An important measure in pain research is the intensity of nociceptive stimuli and their cortical representation. However, there is evidence of different cerebral representations of nociceptive stimuli, including the fact that cortical areas recruited during processing of intranasal nociceptive chemical stimuli included those outside the traditional trigeminal areas. Therefore, the aim of this study was to investigate the major cerebral representations of stimulus intensity associated with intranasal chemical trigeminal stimulation. Trigeminal stimulation was achieved with carbon dioxide presented to the nasal mucosa. Using a single‐blinded, randomized crossover design, 24 subjects received nociceptive stimuli with two different stimulation paradigms, depending on the just noticeable differences in the stimulus strengths applied. Stimulus‐related brain activations were recorded using functional magnetic resonance imaging with event‐related design. Brain activations increased significantly with increasing stimulus intensity, with the largest cluster at the right Rolandic operculum and a global maximum in a smaller cluster at the left lower frontal orbital lobe. Region of interest analyses additionally supported an activation pattern correlated with the stimulus intensity at the piriform cortex as an area of special interest with the trigeminal input. The results support the piriform cortex, in addition to the secondary somatosensory cortex, as a major area of interest for stimulus strength‐related brain activation in pain models using trigeminal stimuli. This makes both areas a primary objective to be observed in human experimental pain settings where trigeminal input is used to study effects of analgesics.
Introduction: From the beginning of the coronavirus pandemic until August 19, 2020, more than 21,989,366 cases had been reported worldwide – 228,495 in Germany alone, including 12,648 children aged 0–14. In many countries, the proportion of infected children in the total population is comparatively low; in addition, children often have no or milder symptoms and are less likely to transmit the pathogen to adults than the other way round. Based on the registration data from Frankfurt am Main, Germany, the symptoms of children in comparison with adults and the likely routes of transmission are presented below.
Materials and methods: The documentation of the mandatory reports includes personal data (name, date of birth, gender, place of residence), disease characteristics (date of report, date of onset of the disease, symptoms), possible contact persons (family, others) and, among other things, possible attendance at or care in children’s community facilities. All reports were reviewed, especially with regard to likely transmission routes.
Results: From March 1 to July 31, 2020, 1,977 infected people were reported, including 138 children between the ages of 0 and 14 years. Children had fewer and milder symptoms than adults. None of the children experienced severe respiratory symptoms or the need for ventilation. 62% of the children had no symptoms at all (19% adults), 5% of the children were hospitalized (24% adults), and none of the children died (3.8% adults).
After excluding a cluster of 34 children from refugee accommodations and 14 children from a parish, 78% of the remaining 90 children had been infected by an adult within the family, and only 4% were likely to have a reverse transmission route. In 5.5% of cases, transmission in a community facility was likely.
Discussion: The results of the registration data from Frankfurt am Main, Germany confirm the results published in other countries: Children are less likely to become infected, and if infected, their symptoms are less severe than in adults, and they are apparently not the main drivers of virus transmission. Therefore, scientific medical associations strongly recommend reopening schools.
Keystone mutualisms, such as corals, lichens or mycorrhizae, sustain fundamental ecosystem functions. Range dynamics of these symbioses are, however, inherently difficult to predict because host species may switch between different symbiont partners in different environments, thereby altering the range of the mutualism as a functional unit. Biogeographic models of mutualisms thus have to consider both the ecological amplitudes of various symbiont partners and the abiotic conditions that trigger symbiont replacement. To address this challenge, we here investigate 'symbiont turnover zones', defined as demarcated regions where symbiont replacement is most likely to occur, as indicated by overlapping abundances of symbiont ecotypes. Mapping the distribution of algal symbionts from two species of lichen-forming fungi along four independent altitudinal gradients, we detected an abrupt and consistent β-diversity turnover suggesting parallel niche partitioning. Modelling contrasting environmental response functions obtained from latitudinal distributions of algal ecotypes consistently predicted a confined altitudinal turnover zone. In all gradients this symbiont turnover zone is characterized by approximately 12°C average annual temperature and approximately 5°C mean temperature of the coldest quarter, marking the transition from Mediterranean to cool temperate bioregions. Integrating the conditions of symbiont turnover into biogeographic models of mutualisms is an important step towards a comprehensive understanding of biodiversity dynamics under ongoing environmental change.
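To illustrate how contrasting environmental response functions can pin down a turnover zone, the sketch below crosses two hypothetical logistic abundance curves for a cool-adapted and a warm-adapted symbiont ecotype. The midpoint and slope values are invented for illustration (placed near the ~12 °C transition reported above) and are not the fitted models from the study:

```python
import numpy as np

def logistic(x, midpoint, slope):
    """Standard logistic response function."""
    return 1 / (1 + np.exp(-slope * (x - midpoint)))

# Hypothetical relative abundances of two algal ecotypes as a function of
# mean annual temperature (°C); parameters are illustrative only.
temps = np.linspace(0, 25, 2501)
cool_ecotype = 1 - logistic(temps, midpoint=12.0, slope=1.5)  # declines with warming
warm_ecotype = logistic(temps, midpoint=12.0, slope=1.5)      # increases with warming

# Turnover zone: region where both ecotypes co-occur at appreciable abundance
zone = temps[(cool_ecotype > 0.1) & (warm_ecotype > 0.1)]

# Crossover: temperature at which the two abundances are equal
crossover = temps[np.argmin(np.abs(cool_ecotype - warm_ecotype))]

print(f"crossover at {crossover:.1f} °C, zone {zone.min():.1f}-{zone.max():.1f} °C")
```

The same recipe generalizes to fitted response curves: the overlap of the two functions delimits the zone, and its width depends on how steep the responses are around the transition.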
The internet enters film in the most varied ways: digital formats such as web series, podcasts, or even tweets become, through a change of medium, the basis of filmic adaptations; filmic experiments with interactive and virtual technologies generate new media combinations situated between film and computer game; transmedia extensions continue film and series universes in digital space in various ways; and intermedial references, by imitating a digital aesthetic, narrate not (only) about the other medium but often also through it. Phenomena belonging to this last intermedial category, namely thematization, evocation, and simulation, are analyzed here in the context of the depiction of the internet. Owing to the ubiquity of digital media in everyday life, newer technologies have for some years played a central role as reference media in many films and series. Filmic internet applications are visualized above all as graphical user interfaces, as the point of interaction between user and technical device, while the representation of the hardware usually appears secondary. The focus of this article is therefore not the depiction of computers and smartphones, but the staging of networked systems, spaces, and communication structures. In this context, particular attention is paid to intermedial evocations of the other medium through the imitation of digital aesthetics by means of film's own repertoire of forms, to simulated screen and desktop films, and to the depiction of the predominantly text- and sign-based digital culture through the integration of writing into the film image. The investigation begins with a consideration of visual metaphors and of strategies for making virtual spaces visible.
Inscriptions are forms characterized by a particular medial disposition. What distinguishes inscriptions, apart from their close relation to a material carrier, is their peculiar position on the threshold between writing and image. [...] The inscription's characteristic of exhibiting a word or a text as a visible sequence of signs was captured by the Italian epigrapher Armando Petrucci in the concept of 'scrittura esposta'. [...] If one attempts to grasp more precisely the specific potency of the inscription touched on here, it makes sense to return first to the visual dimension. It is, one may assume, the inscription's capacity to appear as an image that allows it to enter the viewer's gaze and to present itself as an exposed figure before the eyes. With this pictorial mode of appearance, the argument might continue, are associated aesthetic qualities of sensory impressiveness and presence that lend the inscription its characteristic expressive and declarative power. [...] This explanation, however, captures only one side of the inscription and its medial and reception-aesthetic constitution. What is special about the inscription is not exhausted by its character as an exhibited, exposed formation of signs. The inscription is not only 'esposta' but equally 'scrittura'. The particular mode of design and effect of the inscription thus does not rest on its pictorial disposition alone. The inscription's efficacy, so the thesis proposed here, owes itself to the circumstance that, even as it asserts itself as an exposed, striking, and widely visible form, it at the same time preserves its character as writing and displays that character no less clearly. Whoever contemplates an inscription beholds in its pictorial design at the same time the visual form of a text, of a linguistic utterance.
Through its design as 'scrittura', the inscription thus appears in a form invested in a specific way with moments of power and authority. Writing is, after all, the medium in which, in a tradition reaching from antiquity into the modern era, the law, the recorded and materialized 'voice of the sovereign', confronts us. What is special about the inscription thus seems, as a preliminary conclusion, to consist in its linking the media of image and writing in a specific way. In it, medial and aesthetic qualities are at work that belong partly to the image and partly to writing. On this interplay also rests the peculiar potential for effect associated with this form of utterance. In what follows, this interplay of pictorial and scriptural aspects will be explored more closely, and against this background the meaning and efficacy of inscribed signs will be examined, particularly in political contexts.
Die Verunsicherung auf dem Feld zeitgenössischer Kunst berührt nicht nur die Frage nach der Qualität von Kunst, sondern auch jene der Grenze zwischen Kunst(werk) und ihrem (bzw. seinem) jeweiligen Außen. [...] Kunst, die einen herkömmlichen Werkbegriff in Frage stellt (und vom breiten Publikum oft abgelehnt wird), aber doch verortet und verortbar und daher, zumindest weitestgehend, als Kunst erkennbar ist, soll im folgenden Gegenwartskunst genannt werden, die in den Alltag integrierte und intervenierende und manchmal nicht als Kunst wahrgenommene Kunst als Situationskunst. Gegenwartskunst setzt ihre Autonomie und eine klare Grenze zwischen Kunst und Nicht-Kunst voraus, Situationskunst (die man als eine radikale Ausformung und somit als Teil der Gegenwartskunst ansehen könnte) sät Zweifel an der Kunstautonomie, auch wenn sie diese häufig als Argument gegen Anrufungen oder Übergriffe von Politik, Religion oder Alltagswirklichkeit verwendet bzw. verwenden 'muss'. Bei beiden Formen, die sich in vielen Fällen überschneiden, wird im herkömmlichen Sinne nichts mehr erschaffen ('poesis'), sondern etwas gefunden bzw. letztlich 'einfach' etwas getan ('praxis'). In beiden Fällen versteht sich nichts mehr von selbst: Es ist in der Rezeption - zumindest im ersten Moment - unklar, ob wir es überhaupt mit Kunst zu tun haben. In anderen Worten: Wir können uns im Moment des Ausstellungsbesuches also nicht auf unsere Sinneswahrnehmungen, auf unsere Erfahrung und auf unser implizites (Vor-)Wissen verlassen, wenn wir wissen wollen, womit wir es zu tun haben und was das alles soll. Wir benötigen also nicht zuletzt Erklärungen und Erläuterungen (die wieder zu implizitem Wissen gerinnen können) - und das ist ein Grund, warum zeitgenössische Kunst für die Komparatistik interessant sein könnte. Davon wird noch zu sprechen sein. Die Begriffe Gegenwarts- und Situationskunst decken einen sehr weiten Bereich von Phänomenen ab. 
The following will therefore be a cursory sketch aimed primarily at those phenomena, and their commonalities, that are of interest to comparative literature. At its center stands not a precise analysis and interpretation of phenomena, but the question of what, with regard to the discipline of comparative literature, would be worth analyzing and interpreting. The phenomena and examples discussed below lie, in any case, at the periphery of comparative literature, with all the disadvantages that working in peripheries entails.
Letters, the conversation of two absent parties, play a major role in many films. They are shown on screen or read aloud in voice-over; we see scenes of reading and writing that play with the ambiguity of the written word. According to Christina Bartz, the letter is "particularly compatible with film" "because of the communicative connection across temporal and spatial distances", film likewise bringing together, through montage, what is spatially and temporally separated. In contrast to film, however, the letter is not a mass medium but individual communication. Showing the medium of the letter, or replacing this historical medium with a more current one in film, always also offers the possibility of media reflection. In this contribution I want to observe, using two prominent examples, first how film adaptations of letter-based literary texts deal with letters, and second how media reflection takes place by way of the letter motif. To this end I present two melodramas in which letters, and the recognition and misrecognition that accompany them, play a central role: Max Ophüls' "Letter from an Unknown Woman" (USA 1948), the film version of Stefan Zweig's novella "Brief einer Unbekannten" (1922), and "Atonement" (2007), the adaptation of Ian McEwan's novel of the same name from 2001.
Hardly any other visual motif makes climate change as visible as melting glaciers. They therefore play a central role in climate research itself, in the popularization of its alarming findings, and in contemporary art, which in light of these insights is searching for an adequate new aesthetic. Accordingly, the cultural-studies engagement with glacier images has by now become extensive. Numerous exhibition catalogues and comprehensive studies trace their development from the early 17th century, to which the first pictorial representations are dated, up to the present, in which glaciers and their disappearance have become an emblem of global warming. The heuristic of comparison plays an important part here: not only does it form the basis for classically art-historical investigations concerned with the changing forms of expression and pictorial conventions of glacier images (say, on a scale between idealization and realism). Moreover, and in particular, the process of disappearance itself depends on the comparative gaze, for only in this way does it reveal itself in its full drama. This essay, however, takes a different perspective: borrowing conceptually from Jussi Parikka's 'media geology', and against the background of the broad field of media ecology, it outlines a "media glaciology" that understands glaciers themselves as media. In keeping with the media-comparative research paradigm that specific medialities disclose themselves only from a media-comparative perspective, the essay pursues the question of how this "becoming-media" of glaciers takes place in and through comparison with other (technical) media. I concentrate temporally on the 19th and early 20th centuries and regionally on the Alpine glaciers, whose scientific study founded the discipline of glaciology.
The elliptic flow (v2) of (anti-)3He is measured in Pb–Pb collisions at √sNN = 5.02 TeV in the transverse-momentum (pT) range of 2–6 GeV/c for the centrality classes 0–20%, 20–40%, and 40–60% using the event-plane method. This measurement is compared to that of pions, kaons, and protons at the same center-of-mass energy. A clear mass ordering is observed at low pT, as expected from relativistic hydrodynamics. The violation of the scaling of v2 with the number of constituent quarks at low pT, already observed for identified hadrons and deuterons at LHC energies, is confirmed also for (anti-)3He. The elliptic flow of (anti-)3He is underestimated by the Blast-Wave model and overestimated by a simple coalescence approach based on nucleon scaling. The elliptic flow of (anti-)3He measured in the centrality classes 0–20% and 20–40% is well described by a more sophisticated coalescence model where the phase-space distributions of protons and neutrons are generated using the iEBE-VISHNU hybrid model with AMPT initial conditions.
Macro-finance theory predicts that financial fragility builds up when volatility is low. This “volatility paradox” challenges traditional systemic risk measures. I explore a new dimension of systemic risk, spillover persistence, which is the average time horizon at which a firm’s losses increase future risk in the financial system. Using firm-level data covering more than 30 years and 50 countries, I document that persistence declines when fragility builds up: before crises, during stock market booms, and when banks take more risks. In contrast, persistence increases with loss amplification: during crises and fire sales. These findings support key predictions of recent macro-finance models.
The impact of appropriately and inappropriately applied statistical metrics used to verify the State of Control of pharmaceutical manufacturing has been reviewed from an auditor’s perspective. Good and bad statistical practices are presented to help manufacturers appreciate the risks of using these metrics. The conclusions are: (1) control charts should be used instead of line/run charts for trend analysis; (2) Ppk is the preferred capability index (but still with an ambition to get processes into statistical control); (3) process capability indices should be shown along with their respective control charts; (4) the Manufacturing State of Control in which the product/process lies should be determined; (5) an effective Control Strategy can only be implemented if the Manufacturing State of Control is understood; (6) when presenting data, consider what is truly representative of the product/process rather than the average; and (7) management should align with ICH Q10 more effectively to provide statistical resources for their personnel.
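Conclusion (2) above refers to the process performance index Ppk, which is computed from the overall (long-term) sample standard deviation, in contrast to Cpk's within-subgroup sigma estimate. As a minimal illustrative sketch, with made-up assay values and specification limits that are not taken from the reviewed paper:

```python
import statistics

def ppk(samples, lsl, usl):
    """Process performance index Ppk, computed from the overall
    (long-term) sample standard deviation; Cpk instead uses a
    within-subgroup sigma estimate."""
    mean = statistics.mean(samples)
    s = statistics.stdev(samples)  # overall sample standard deviation
    # Distance of the mean to the nearer specification limit, in units of 3 sigma:
    return min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))

# Hypothetical batch assay values with specification limits 90-110:
batch = [98.2, 99.5, 101.1, 100.4, 99.0, 100.8, 99.7, 100.2]
print(f"Ppk = {ppk(batch, lsl=90.0, usl=110.0):.2f}")
```

A Ppk well above 1.33 is conventionally read as a capable process, though, as the review stresses, the index is only meaningful alongside a control chart showing the process is in statistical control.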
Linking epigenetic signature and metabolic phenotype in IDH mutant and IDH wildtype diffuse glioma
(2020)
Aims: Changes in metabolism are known to contribute to tumour phenotypes. If and how metabolic alterations in brain tumours contribute to patient outcome is still poorly understood. Epigenetics impacts metabolism and mitochondrial function. The aim of this study is a characterisation of metabolic features in molecular subgroups of isocitrate dehydrogenase mutant (IDHmut) and isocitrate dehydrogenase wildtype (IDHwt) gliomas. Methods: We employed DNA methylation pattern analyses with a special focus on metabolic genes, large-scale metabolism panel immunohistochemistry (IHC), qPCR-based determination of mitochondrial DNA copy number and immune cell content using IHC and deconvolution of DNA methylation data. We analysed molecularly characterised gliomas (n = 57) for in-depth DNA methylation analysis, a cohort of primary and recurrent gliomas (n = 22) for mitochondrial copy number and validated these results in a large glioma cohort (n = 293). Finally, we investigated the potential of metabolic markers in Bevacizumab (Bev)-treated gliomas (n = 29). Results: DNA methylation patterns of metabolic genes successfully distinguished the molecular subtypes of IDHmut and IDHwt gliomas. Promoter methylation of lactate dehydrogenase A negatively correlated with protein expression and was associated with IDHmut gliomas. Mitochondrial DNA copy number was increased in IDHmut tumours and did not change in recurrent tumours. Hierarchical clustering based on metabolism panel IHC revealed distinct subclasses of IDHmut and IDHwt gliomas with an impact on patient outcome. Further quantification of these markers allowed for the prediction of survival under anti-angiogenic therapy. Conclusion: A mitochondrial signature was associated with increased survival in all analyses, which could indicate tumour subgroups with specific metabolic vulnerabilities.
Simple Summary: Targeted therapies are of growing interest to physicians in cancer treatment. These drugs target specific genes and proteins involved in the growth and survival of cancer cells. Brain tumor therapy is complicated by the fact that not all drugs can penetrate the blood brain barrier and reach their target. We explored the non-invasive method, Magnetic Resonance Spectroscopy, for monitoring drug penetration and its effects in live animals bearing brain tumors. We were able to show the presence of the investigated drug in mouse brains and its on-target activity.
Abstract: Background: BAY1436032 is a fluorine-containing inhibitor of the R132X-mutant isocitrate dehydrogenase (mIDH1). It inhibits the mIDH1-mediated production of 2-hydroxyglutarate (2-HG) in glioma cells. We investigated brain penetration of BAY1436032 and its effects using 1H/19F-Magnetic Resonance Spectroscopy (MRS). Methods: 19F-Nuclear Magnetic Resonance (NMR) Spectroscopy was conducted on serum samples from patients treated with BAY1436032 (NCT02746081 trial) in order to analyze 19F spectroscopic signal patterns and concentration-time dynamics of the protein-bound inhibitor and to facilitate its identification in in vivo MRS experiments. Thereafter, 30 mice were implanted with three glioma cell lines (LNT-229, LNT-229 IDH1-R132H, GL261). Mice bearing the IDH-mutated glioma cells received 5 days of treatment with BAY1436032 between the baseline and follow-up 1H/19F-MRS scans. All other animals underwent a single scan after BAY1436032 administration. Mouse brains were analyzed by liquid chromatography with tandem mass spectrometry (LC-MS/MS). Results: Evaluation of the 1H-MRS data showed a decrease in 2-HG/total creatine (tCr) ratios from the baseline to post-treatment scans in the mIDH1 murine model. Whole-brain concentration of BAY1436032, as determined by 19F-MRS, was similar to the total brain tissue concentration determined by LC-MS/MS, with a signal loss due to protein binding. Intratumoral drug concentration, as determined by LC-MS/MS, was not statistically different in models with or without R132X-mutant IDH1 expression. Conclusions: Non-invasive monitoring of mIDH1 inhibition by BAY1436032 in mIDH1 gliomas is feasible.
The production of light neutral mesons in AA collisions probes the physics of the Quark-Gluon Plasma (QGP), which is formed in heavy-ion collisions at the LHC. More specifically, the centrality-dependent neutral meson spectra in AA collisions, compared to their spectra in minimum-bias pp collisions scaled with the number of hard collisions, provide information on the energy loss of partons traversing the QGP. The measurement allows testing the predictions of theoretical model calculations with high precision. In addition, the decays of the π0 and η mesons are the dominant backgrounds for all direct photon measurements. Therefore, pushing the limits of the precision of neutral meson production measurements is key to learning about the temperature and space-time evolution of the QGP.
In the ALICE experiment neutral mesons can be detected via their decay into two photons. The latter can be reconstructed using the two calorimeters EMCal and PHOS or via conversions in the detector material. The excellent momentum resolution of the conversion photons down to very low pT and the high reconstruction efficiency and triggering capability of calorimeters at high pT, allow us to measure the pT dependent invariant yield of light neutral mesons over a wide kinematic range.
Combining state-of-the-art reconstruction techniques with the high statistics delivered by the LHC in Run 2 gives us the opportunity to enhance the precision of our measurements. In these proceedings, new preliminary ALICE Run 2 results for neutral meson production in pp and Pb–Pb collisions at LHC energies are presented.
Nature affects human well-being in multiple ways. However, the association between species diversity and human well-being at larger spatial scales remains largely unexplored. Here, we examine the relationship between species diversity and human well-being at the continental scale, while controlling for other known drivers of well-being. We related socio-economic data from more than 26,000 European citizens across 26 countries with macroecological data on species diversity and nature characteristics for Europe. Human well-being was measured as self-reported life-satisfaction and species diversity as the species richness of several taxonomic groups (e.g. birds, mammals and trees). Our results show that bird species richness is positively associated with life-satisfaction across Europe. We found a relatively strong relationship, indicating that the effect of bird species richness on life-satisfaction may be of similar magnitude to that of income. We discuss two, non-exclusive pathways for this relationship: the direct multisensory experience of birds, and beneficial landscape properties which promote both bird diversity and people's well-being. Based on these results, this study argues that management actions for the protection of birds and the landscapes that support them would benefit humans. We suggest that political and societal decision-making should consider the critical role of species diversity for human well-being.
Introduction: Recommendations for venous thromboembolism and deep venous thrombosis (DVT) prophylaxis using graduated compression stockings (GCS) are historically based and have been critically examined in recent publications. Existing guidelines are inconclusive as to whether the general use of GCS should be recommended.
Patients/Methods: 24 273 in-patients (general surgery and orthopedic patients) undergoing surgery between 2006 and 2016 were included in a retrospective single-center analysis. From January 2006 to January 2011, perioperative GCS were employed in addition to drug prophylaxis, and from February 2011 to March 2016 patients received drug prophylaxis alone. In accordance with German guidelines, all patients received venous thromboembolism prophylaxis with weight-adapted LMWH. Risk stratification (low risk, moderate risk, high risk) was based on the guideline of the American College of Chest Physicians. Data analysis was performed before and after propensity matching (PM). The defined primary endpoint was the incidence of symptomatic or fatal pulmonary embolism (PE). A secondary endpoint was the incidence of deep venous thrombosis (DVT).
Results: After risk stratification (low risk n = 16 483; moderate risk n = 4464; high risk n = 3326), a total of 24 273 patients were analyzed. Before PM, the relative risk for the occurrence of a PE or DVT was not increased by abstaining from GCS. After PM, two groups of 11 312 patients each, one with and one without GCS application, were formed. When comparing the two groups, the relative risk (RR) for the occurrence of a pulmonary embolism was: Low Risk 0.99 [CI95% 0.998–1.000]; Moderate Risk 0.999 [CI95% 0.95–1.003]; High Risk 0.996 [CI95% 0.992–1.000] (p > 0.05). The incidence of PE in the group receiving LMWH alone was 0.1% (n = 16). In the group receiving LMWH + GCS, the incidence was 0.3% (n = 29). The RR after PM was 0.999 [CI95% 0.998–1.00].
Conclusion: In comparison to prior studies with only small numbers of patients, our trial of a large group of patients at moderate and high risk of developing VTE supports the view that abstaining from GCS use does not increase the incidence of symptomatic or fatal PE or of symptomatic DVT.
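The RR [CI95% ...] figures reported in the results follow the standard two-by-two table construction. As a minimal sketch, with purely hypothetical event counts rather than the study's matched data, a relative risk and its 95% confidence interval under the usual log-normal approximation can be computed as:

```python
import math

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Relative risk of an event in an exposed group (a events among n1)
    versus a control group (c events among n2), with a confidence
    interval from the log-normal approximation."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # standard error of ln(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts for illustration only:
rr, lo, hi = relative_risk_ci(a=12, n1=10000, c=15, n2=10000)
print(f"RR = {rr:.2f} [CI95% {lo:.2f}-{hi:.2f}]")
```

A CI that spans 1.0, as here, corresponds to the study's non-significant finding (p > 0.05) that omitting GCS did not increase risk.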
Over the last 15 years the Diagnostic Center of Acute Leukemia (DCAL) at Frankfurt University has diagnosed and elucidated the Mixed Lineage Leukemia (MLL) recombinome, identifying >100 MLL fusion partners. When analyzing all these different events, balanced chromosomal translocations were found to comprise the majority of cases (~70%), while other types of genetic rearrangements (3-way translocations, spliced fusions, 11q inversions, interstitial deletions or insertion of chromosomal fragments into other chromosomes) account for about 30%. In nearly all of these complex cases, functional fusion proteins can be produced by transcription, splicing and translation. With a few exceptions (10 out of 102 fusion genes which were per se out-of-frame), all these genetic rearrangements produced a direct MLL fusion gene, and in 94% of cases an additional reciprocal fusion gene. So far, 114 patients (out of 2454, ~5%) have been diagnosed only with the reciprocal fusion allele, displaying no MLL-X allele. The fact that so many MLL rearrangements bear at least two fusion alleles, together with our findings that several direct MLL fusions were either out-of-frame or missing, raises the question of the function and importance of reciprocal MLL fusions. Recent findings also demonstrate the presence of reciprocal MLL fusions in sarcoma patients. Here, we discuss the role of reciprocal MLL fusion proteins in leukemogenesis and beyond.
Two-person neuroscience (2PN) is a recently introduced conceptual and methodological framework for investigating the neural basis of human social interaction through simultaneous neuroimaging of two or more subjects (hyperscanning). In this study, we adopted a 2PN approach and a multiple-brain connectivity model to investigate the neural basis of a form of cooperation called joint action. We hypothesized different intra-brain and inter-brain connectivity patterns when comparing the interpersonal properties of joint action with non-interpersonal conditions, with a focus on co-representation, a core ability at the basis of cooperation. 32 subjects were enrolled in dual-EEG recordings during a computerized joint action task including three conditions: one in which the dyad jointly acted to pursue a common goal (joint), one in which each subject interacted with the PC (PC), and one in which each subject performed the task individually (Solo).
A combination of multiple-brain connectivity estimation and specific indices derived from graph theory allowed us to compare interpersonal with non-interpersonal conditions in four different frequency bands. Our results indicate that all the indices were modulated by the interaction and returned a significantly stronger integration of multiple-subject networks in the joint vs. PC and Solo conditions. A subsequent classification analysis showed that features based on multiple-brain indices led to a better discrimination between social and non-social conditions than single-subject indices. Taken together, our results suggest that multiple-brain connectivity can provide deeper insight into the neural basis of cooperation in humans.
Inhibitors against the NS3-4A protease of hepatitis C virus (HCV) have proven to be useful drugs in the treatment of HCV infection. Although variants have been identified with mutations that confer resistance to these inhibitors, the mutations do not restore replicative fitness and no secondary mutations that rescue fitness have been found. To gain insight into the molecular mechanisms underlying the lack of fitness compensation, we screened known resistance mutations in infectious HCV cell culture with different genomic backgrounds. We observed that the Q41R mutation of NS3-4A efficiently rescues the replicative fitness in cell culture for virus variants containing mutations at NS3-Asp168. To understand how the Q41R mutation rescues activity, we performed protease activity assays complemented by molecular dynamics simulations, which showed that protease-peptide interactions far outside the targeted peptide cleavage sites mediate substrate recognition by NS3-4A and support protease cleavage kinetics. These interactions shed new light on the mechanisms by which NS3-4A cleaves its substrates, viral polyproteins and a prime cellular antiviral adaptor protein, the mitochondrial antiviral signaling protein MAVS. Peptide binding is mediated by an extended hydrogen-bond network in NS3-4A that was effectively optimized for protease-MAVS binding in Asp168 variants with rescued replicative fitness from NS3-Q41R. In the protease harboring NS3-Q41R, the N-terminal cleavage products of MAVS retained high affinity to the active site, rendering the protease susceptible for potential product inhibition. Our findings reveal delicately balanced protease-peptide interactions in viral replication and immune escape that likely restrict the protease adaptive capability and narrow the virus evolutionary space.
Highlights
• Explanation of mobility design and its practical, aesthetic and emblematic effects on travel behaviour.
• Review of recent studies on mobility design elements and the promotion of non-motorised travel.
• Discussion of research gaps and methodological challenges of data collection and comparability.
Abstract
To promote non-motorised travel, many travel behaviour studies acknowledge the importance of the built environment to modal choice, for example its density or mix of uses. From a mobility design theory perspective, however, objects and environments affect human perceptions, assessments and behaviour in at least three different ways: through their practical, aesthetic and emblematic functions. This review of existing evidence argues that travel behaviour research has so far mainly focused on the practical function of the built environment. For that purpose, we systematically identified 56 relevant studies on the impacts of the built environment on non-motorised travel behaviour in the Web of Science database. Research on the practical design function primarily involves land use distribution, street network connectivity and the presence of walking and cycling facilities. Only a small number of papers address the aesthetic and emblematic functions. These show that the perceived attractiveness of an environment and evoked feelings of traffic safety increase the likelihood of walking and cycling. However, from a mobility design perspective, the results of the review indicate a gap regarding comprehensive research on the effects of the aesthetic and emblematic functions of the built environment. Further research involving these functions might contribute to a better understanding of how to promote non-motorised travel more effectively. Moreover, limitations related to survey techniques, regional distribution and the comparability of results were identified.
This research examines the impact of online display advertising and paid search advertising relative to offline advertising on firm performance and firm value. Using proprietary data on annualized advertising expenditures for 1651 firms spanning seven years, we document that both display advertising and paid search advertising exhibit positive effects on firm performance (measured by sales) and firm value (measured by Tobin's q). Paid search advertising has a more positive effect on sales than offline advertising, consistent with paid search being closest to the actual purchase decision and having enhanced targeting abilities. Display advertising exhibits a relatively more positive effect on Tobin's q than offline advertising, consistent with its long-term effects. The findings suggest heterogeneous economic benefits across different types of advertising, with direct implications for managers in analyzing advertising effectiveness and external stakeholders in assessing firm performance.
The US Treasury recently permitted deferred longevity income annuities to be included in pension plan menus as a default payout solution, yet little research has investigated whether more people should convert some of the $18 trillion they hold in employer-based defined contribution plans into lifelong income streams. We investigate this innovation using a calibrated lifecycle consumption and portfolio choice model embodying realistic institutional considerations. Our welfare analysis shows that defaulting a modest portion of retirees’ 401(k) assets (over a threshold) into such annuities is an attractive way to enhance retirement security, raising welfare by up to 20% of retiree plan accruals.
In this paper a new method of experimental data analysis, the Particle-Set Identification method, is presented. The method makes it possible to reconstruct the moments of the multiplicity distribution of identified particles. The difficulty it addresses stems from incomplete particle identification: a particle's mass is frequently measured with a resolution that does not allow a unique determination of the particle type. Within this method the moments of order k are calculated from the mean multiplicities of k-particle sets of a given type. The Particle-Set Identification method remains valid even in the case of correlations between mass measurements for different particles. This distinguishes it from the Identity method introduced by us previously to solve the problem of incomplete particle identification in studies of particle fluctuations.
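To make the counting idea concrete: in the ideal case of perfect identification (which the method generalizes beyond), the k-th order factorial moment of a multiplicity distribution equals the mean number of ordered k-particle sets per event. A minimal sketch of that underlying identity, not of the full Particle-Set Identification procedure:

```python
def factorial_moment(event_multiplicities, k):
    """Estimate the k-th factorial moment <N(N-1)...(N-k+1)> by counting
    ordered k-particle sets in each event and averaging over events."""
    total = 0
    for n in event_multiplicities:
        sets = 1
        for i in range(k):
            sets *= (n - i)  # number of ordered k-tuples: n! / (n-k)!
        total += sets
    return total / len(event_multiplicities)

# Events with 2, 3 and 1 particles of the given type; k = 2 gives the
# mean number of ordered pairs per event, i.e. <N(N-1)>:
print(factorial_moment([2, 3, 1], k=2))
```

In the paper's setting the per-event set counts are replaced by identification-probability-weighted sums over measured particles, which is where the treatment of incomplete identification enters.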
Deubiquitinases (DUBs) are vital for the regulation of ubiquitin signals, and both catalytic activity of and target recruitment by DUBs need to be tightly controlled. Here, we identify asparagine hydroxylation as a novel posttranslational modification involved in the regulation of Cezanne (also known as OTU domain–containing protein 7B (OTUD7B)), a DUB that controls key cellular functions and signaling pathways. We demonstrate that Cezanne is a substrate for factor inhibiting HIF1 (FIH1)- and oxygen-dependent asparagine hydroxylation. We found that FIH1 modifies Asn35 within the uncharacterized N-terminal ubiquitin-associated (UBA)-like domain of Cezanne (UBACez), which lacks conserved UBA domain properties. We show that UBACez binds Lys11-, Lys48-, Lys63-, and Met1-linked ubiquitin chains in vitro, establishing UBACez as a functional ubiquitin-binding domain. Our findings also reveal that the interaction of UBACez with ubiquitin is mediated via a noncanonical surface and that hydroxylation of Asn35 inhibits ubiquitin binding. Recently, it has been suggested that Cezanne recruitment to specific target proteins depends on UBACez. Our results indicate that UBACez can indeed fulfill this role as regulatory domain by binding various ubiquitin chain types. They also uncover that this interaction with ubiquitin, and thus with modified substrates, can be modulated by oxygen-dependent asparagine hydroxylation, suggesting that Cezanne is regulated by oxygen levels.
Hypoxia inhibits ferritinophagy, increases mitochondrial ferritin, and protects from ferroptosis
(2020)
Highlights
• Hypoxia decreases NCOA4 transcription in primary human macrophages.
• NCOA4 mRNA is a target of miR-6862-5p.
• Lowering NCOA4 increases FTMT abundance under hypoxia.
• FTMT and FTH protect from ferroptosis.
• Tumor cells lack the hypoxic decrease of NCOA4 and fail to stabilize FTMT.
Abstract
Cellular iron, at the physiological level, is essential to maintain several metabolic pathways, while an excess of free iron may cause oxidative damage and/or provoke cell death. Consequently, iron homeostasis has to be tightly controlled. Under hypoxia these regulatory mechanisms are not well understood for human macrophages. Hypoxic primary human macrophages reduced intracellular free iron and increased ferritin expression, including mitochondrial ferritin (FTMT), to store iron. In parallel, nuclear receptor coactivator 4 (NCOA4), a master regulator of ferritinophagy, decreased and was proven to directly regulate FTMT expression. Reduced NCOA4 expression resulted from a lower rate of hypoxic NCOA4 transcription combined with a microRNA-6862-5p-dependent degradation of NCOA4 mRNA, the latter being regulated by c-jun N-terminal kinase (JNK). Pharmacological inhibition of JNK under hypoxia increased NCOA4 and prevented FTMT induction. FTMT and ferritin heavy chain (FTH) cooperated to protect macrophages from RSL-3-induced ferroptosis under hypoxia, as this form of cell death is linked to iron metabolism. In contrast, in HT1080 fibrosarcoma cells, which are sensitive to ferroptosis, NCOA4 and FTMT are not regulated. Our study helps to understand mechanisms of hypoxic FTMT regulation and to link ferritinophagy and macrophage sensitivity to ferroptosis.
Highlights
• PUR, PVC and PLA microplastics affect life-history parameters of Daphnia magna.
• Natural kaolin particles are less toxic than microplastics.
• Microplastic toxicity is material-specific, e.g. PVC is most toxic on reproduction.
• In case of PVC, plastic chemicals are the main driver of microplastic toxicity.
• PLA bioplastics are similarly toxic as conventional plastics.
Abstract
Given the ubiquitous presence of microplastics in aquatic environments, an evaluation of their toxicity is essential. Microplastics are a heterogeneous set of materials that differ not only in particle properties, like size and shape, but also in chemical composition, including polymers, additives and side products. Thus far, it remains unknown whether the plastic chemicals or the particle itself are the driving factor for microplastic toxicity. To address this question, we exposed Daphnia magna for 21 days to irregular polyvinyl chloride (PVC), polyurethane (PUR) and polylactic acid (PLA) microplastics as well as to natural kaolin particles in high concentrations (10, 50, 100, 500 mg/L, ≤ 59 μm) and different exposure scenarios, including microplastics with and without extractable chemicals as well as the extracted and migrating chemicals alone. All three microplastic types negatively affected the life-history of D. magna. However, this toxicity depended on the endpoint and the material. While PVC had the largest effect on reproduction, PLA reduced survival most effectively. The latter indicates that bio-based and biodegradable plastics can be as toxic as their conventional counterparts. The natural particle kaolin was less toxic than microplastics when comparing numerical concentrations. Importantly, the contribution of plastic chemicals to the toxicity was also plastic type-specific. While we can attribute the effects of PVC to the chemicals used in the material, the effects of PUR and PLA plastics were induced by the particles themselves. Our study demonstrates that plastic chemicals can drive microplastic toxicity. This highlights the importance of considering the individual chemical composition of plastics when assessing their environmental risks. Our results suggest that less studied polymer types, like PVC and PUR, as well as bioplastics are of particular toxicological relevance and should be given higher priority in ecotoxicological studies.
Decline in physical activity in the weeks preceding sustained ventricular arrhythmia in women
(2020)
Background: A heightened risk of cardiac arrest following physical exertion has been reported. Among patients with an implantable defibrillator, appropriate shocks for sustained ventricular arrhythmia have been preceded by retrospective self-reports of engaging in mild-to-moderate physical activity. Previous studies evaluating the relationship between activity and sudden cardiac arrest lacked an objective measure of physical activity and often underrepresented women.
Objective: To determine the relationship between physical activity, recorded by accelerometer in a wearable cardioverter-defibrillator (WCD), and sustained ventricular arrhythmia among female patients.
Methods: A dataset of female adult patients prescribed a WCD for a diagnosis of myocardial infarction or dilated cardiomyopathy was compiled from a commercial database. Curve estimation, including linear and nonlinear interpolation, was applied to physical activity as a function of time (days before arrhythmia).
Results: Among women who received an appropriate WCD shock for sustained ventricular arrhythmia (N = 120), a quadratic relationship between time and activity was present prior to shock. Physical activity increased from the beginning of the 30-day period up until day −16 (16 days before the ventricular arrhythmia), when activity began to decline.
Conclusion: For patients who received treatment for sustained ventricular arrhythmia, a decline in physical activity was found during the 2 weeks preceding the arrhythmic event. Device monitoring for a sustained decline in physical activity may be useful to identify patients at near-term risk of a cardiac arrest.
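The curve-estimation step described in this abstract (a quadratic fit of activity as a function of days before the event, with a peak around day −16) can be illustrated with a minimal sketch. The data below are synthetic and purely illustrative; they are not taken from the study.

```python
import numpy as np

# Days before the arrhythmic event (day -30 .. day -1) and synthetic
# daily activity values following a quadratic trend. The peak is placed
# at day -16 by construction, mimicking the pattern reported above.
days = np.arange(-30, 0)
activity = -0.5 * (days + 16) ** 2 + 400

# Quadratic curve estimation: fit activity as a function of time.
a, b, c = np.polyfit(days, activity, deg=2)

# Vertex of the fitted parabola = day of peak activity.
peak_day = -b / (2 * a)
print(round(peak_day))  # -16 for this synthetic data
```

A negative leading coefficient `a` together with a vertex inside the observation window is what distinguishes the rise-then-decline pattern from a purely linear trend.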
Entorhinal-retrosplenial circuits for allocentric-egocentric transformation of boundary coding
(2020)
Spatial navigation requires landmark coding from two perspectives, relying on viewpoint-invariant and self-referenced representations. The brain encodes information within each reference frame, but their interactions and functional dependency remain unclear. Here we investigate the relationship between neurons in the rat's retrosplenial cortex (RSC) and medial entorhinal cortex (MEC) that increase firing near boundaries of space. Border cells in RSC specifically encode walls, but not objects, and are sensitive to the animal's direction to nearby borders. These egocentric representations are generated independently of visual or whisker sensation but are affected by inputs from MEC, which contains allocentric spatial cells. Pharmaco- and optogenetic inhibition of MEC led to a disruption of border coding in RSC, but not vice versa, indicating an allocentric-to-egocentric transformation. Finally, RSC border cells fire prospectively relative to the animal's next motion, unlike those in MEC, revealing the MEC-RSC pathway as an extended border coding circuit that implements coordinate transformation to guide navigation behavior.
Aim: To assess volumetric tissue changes at peri‐implantitis sites following combined surgical therapy of peri‐implantitis over a 6‐month follow‐up period.
Materials and Methods: Twenty patients (n = 28 implants) diagnosed with peri‐implantitis underwent access flap surgery, implantoplasty at supracrestally or bucally exposed implant surfaces and augmentation at intra‐bony components using a natural bone mineral and application of a native collagen membrane during clinical routine treatments. The peri‐implant region of interest (ROI) was intra‐orally scanned pre‐operatively (S0), and after 1 (S1) and 6 (S2) months following surgical therapy. Digital files were converted to standard tessellation language (STL) format for superimposition and assessment of peri‐implant volumetric variations between time points. The change in thickness was assessed at a standardized ROI, subdivided into three equidistant sections (i.e. marginal, medial and apical). Peri‐implant soft tissue contour area (STCA) (mm2) and its corresponding contraction rates (%) were also assessed.
Results: Peri‐implant tissues revealed a mean thickness change (loss) of −0.11 and −0.28 mm at 1 and 6 months, respectively. S0 to S1 volumetric variations pointed to a thickness change of −0.46, 0.08 and 0.4 mm at the marginal, medial and apical regions, respectively. S0 to S2 analysis exhibited corresponding thickness changes of −0.61, −0.25 and −0.09 mm, respectively. The thickness differences between the areas were statistically significant at both time periods. The mean peri‐implant STCA totalled 189.2, 175 and 158.9 mm2 at S0, S1 and S2, showing a significant STCA contraction rate of 7.9% from S0 to S1 and of 18.5% from S0 to S2. Linear regression analysis revealed a significant association between the pre‐operative width of keratinized mucosa (KM) and the STCA contraction rate.
Conclusions: The peri‐implant mucosa undergoes considerable volumetric changes after combined surgical therapy. However, tissue contraction appears to be influenced by the width of KM.
In resource-limited or point-of-care settings, rapid diagnostic tests (RDTs) that aim to simultaneously detect HIV antibodies and p24 capsid (p24CA) antigen with high sensitivity can be important alternatives for screening for early infections. We evaluated the performance of the antibody and antigen components of the old and new versions of the Determine™ HIV-1/2 Ag/Ab Combo RDT in parallel to quantifications in a fourth-generation antigen/antibody immunoassay (4G-EIA), a p24CA antigen immunoassay (p24CA-EIA), immunoblots, and nucleic acid quantification. We included plasma samples of acute, treatment-naïve HIV-1 infections (Fiebig stages I–VI, subtypes A1, B, C, F, CRF02_AG, CRF02_AE, URF) or chronic HIV-1 and HIV-2 infections. The tests' antigen component was also evaluated for a panel of subtype B HIV-1 transmitted/founder (T/F) viruses, HIV-2 strains and HIV-2 primary isolates. Furthermore, we assessed the analytical sensitivity of the RDTs to detect p24CA using a highly purified HIV-1NL4-3 p24CA standard. We found that 77% of plasma samples from acutely infected, immunoblot-negative HIV-1 patients in Fiebig stages II–III were identified by the new RDT, while only 25% scored positive in the old RDT. Both RDTs reacted to all samples from chronically HIV-1-infected and acutely HIV-1-infected patients with positive immunoblots. All specimens from chronically infected HIV-2 patients scored positive in the new RDT. Of note, the sensitivity of the RDTs to detect recombinant p24CA from a subtype B virus ranged between 50 and 200 pg/mL, mirrored also by the detection of HIV-1 T/F viruses only at antigen concentrations tenfold higher than suggested by the manufacturer. The RDTs failed to recognize any of the HIV-2 viruses tested. Our results indicate that the new version of the Determine™ HIV-1/2 Ag/Ab Combo displays an increased sensitivity to detect HIV-1 p24CA-positive, immunoblot-negative plasma samples compared to the precursor version.
The sensitivity of 4G-EIA and p24CA-EIA to detect the major structural HIV antigen, and thus to diagnose acute infections prior to seroconversion, is still superior.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects showed some persistence on the timescale of minutes. Further, gamma increases were specific to the presented stimulus location. Importantly, repetition effects on gamma and on firing rates generalized to natural images. These findings suggest that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters, both for generating efficient stimulus responses and possibly for memory formation.
Mitochondria have a central role in regulating a range of cellular activities and host responses upon bacterial infection. Multiple pathogens affect mitochondrial dynamics and functions to influence their intracellular survival or evade host immunity. On the other hand, major host responses elicited against infections are directly dependent on mitochondrial functions, placing mitochondria centrally in the maintenance of homeostasis upon infection. In this review, we summarize how different bacteria and viruses impact morphological and functional changes in host mitochondria and how this manipulation can influence microbial pathogenesis as well as host cell metabolism and immune responses.
In this proceeding, we review our recent work using a deep convolutional neural network (CNN) to identify the nature of the QCD transition in a hybrid modeling of heavy-ion collisions. Within this hybrid model, a viscous hydrodynamic model is coupled with a hadronic cascade "after-burner". As a binary classification setup, we employ two different types of equations of state (EoS) of the hot medium in the hydrodynamic evolution. The resulting final-state pion spectra in the transverse momentum and azimuthal angle plane are fed to the neural network as input data in order to distinguish the different EoS. To probe the effects of fluctuations in the event-by-event spectra, we explore different scenarios for the input data and compare them in a systematic way. We observe a clear hierarchy in the predictive power when the network is fed with the event-by-event, cascade-coarse-grained and event-fine-averaged spectra. The carefully trained neural network can extract high-level features from pion spectra to identify the nature of the QCD transition in a realistic simulation scenario.
Previous studies reported on the safety and applicability of mesenchymal stem/stromal cells (MSCs) to ameliorate pulmonary inflammation in acute respiratory distress syndrome (ARDS). Thus, multiple clinical trials assessing the potential of MSCs for COVID-19 treatment are underway. Yet, as SARS-inducing coronaviruses infect stem/progenitor cells, it is unclear whether MSCs could be infected by SARS-CoV-2 upon transplantation to COVID-19 patients. We found that MSCs from bone marrow, amniotic fluid, and adipose tissue carry angiotensin-converting enzyme 2 and transmembrane protease serine subtype 2 at low levels on the cell surface under steady-state and inflammatory conditions. We did not observe SARS-CoV-2 infection or replication in MSCs at steady state, under inflammatory conditions, or in direct contact with SARS-CoV-2-infected Caco-2 cells. Further, indoleamine 2,3-dioxygenase 1 production in MSCs was not impaired in the presence of SARS-CoV-2. We show that MSCs are resistant to SARS-CoV-2 infection and retain their immunomodulation potential, supporting their potential applicability for COVID-19 treatment.
Knowledge of consumers' willingness to pay (WTP) is a prerequisite to profitable price-setting. To gauge consumers' WTP, practitioners often rely on a direct single question approach in which consumers are asked to explicitly state their WTP for a product. Despite its popularity among practitioners, this approach has been found to suffer from hypothetical bias. In this paper, we propose a rigorous method that improves the accuracy of the direct single question approach. Specifically, we systematically assess the hypothetical biases associated with the direct single question approach and explore ways to de-bias it. Our results show that by using the de-biasing procedures we propose, we can generate a de-biased direct single question approach that is accurate enough to be useful for managerial decision-making. We validate this approach with two studies in this paper.
The striking similarities that have been observed between high-multiplicity proton-proton (pp) collisions and heavy-ion collisions can be explored through multiplicity-differential measurements of identified hadrons in pp collisions. With these measurements, it is possible to study mechanisms such as collective flow that determine the shapes of hadron transverse momentum (pT) spectra, to search for possible modifications of the yields of short-lived hadronic resonances due to scattering effects in an extended hadron-gas phase, and to investigate different explanations provided by phenomenological models for enhancement of strangeness production with increasing multiplicity. In this paper, these topics are addressed through measurements of the K∗(892)0 and φ(1020) mesons at midrapidity in pp collisions at √s = 13 TeV as a function of the charged-particle multiplicity. The results include the pT spectra, pT-integrated yields, mean transverse momenta, and the ratios of the yields of these resonances to those of longer-lived hadrons. Comparisons with results from other collision systems and energies, as well as predictions from phenomenological models, are also discussed.
The inclusive J/ψ meson production in Pb–Pb collisions at a center-of-mass energy per nucleon–nucleon collision of √sNN = 5.02 TeV at midrapidity (|y| < 0.9) is reported by the ALICE Collaboration. The measurements are performed in the dielectron decay channel, as a function of event centrality and J/ψ transverse momentum pT, down to pT = 0. The J/ψ mean transverse momentum 〈pT〉 and the rAA ratio, defined as 〈pT2〉PbPb/〈pT2〉pp, are evaluated. Both observables show a centrality dependence, decreasing towards central (head-on) collisions. The J/ψ nuclear modification factor RAA exhibits a strong pT dependence, with a large suppression at high pT and an increase to unity for decreasing pT. When integrating over the measured momentum range pT < 10 GeV/c, the J/ψ RAA shows a weak centrality dependence. Each measurement is compared with results at lower center-of-mass energies and with ALICE measurements at forward rapidity, as well as to theory calculations. All reported features of the J/ψ production at low pT are consistent with a dominant contribution to the J/ψ yield originating from charm quark (re)combination.
This paper presents the first measurements of the charge-independent (CI) and charge-dependent (CD) two-particle transverse momentum correlators G2^CI and G2^CD in Pb–Pb collisions at √sNN = 2.76 TeV by the ALICE collaboration. The two-particle transverse momentum correlator G2 was introduced as a measure of the momentum current transfer between neighboring system cells. The correlators are measured as a function of pair separation in pseudorapidity (Δη) and azimuth (Δφ) and as a function of collision centrality. From peripheral to central collisions, the correlator G2^CI exhibits a longitudinal broadening while undergoing a monotonic azimuthal narrowing. By contrast, G2^CD exhibits a narrowing along both dimensions. These features are not reproduced by models such as HIJING and AMPT. However, the observed narrowing of the correlators from peripheral to central collisions is expected to result from the stronger transverse flow profiles produced in more central collisions, and the longitudinal broadening is predicted to be sensitive to momentum currents and the shear viscosity per unit of entropy density η/s of the matter produced in the collisions. The observed broadening is found to be consistent with the hypothesized lower bound of η/s and is in qualitative agreement with values obtained from anisotropic flow measurements.
This Letter presents the first direct investigation of the p–Σ0 interaction, using the femtoscopy technique in high-multiplicity pp collisions at √s = 13 TeV measured by the ALICE detector. The Σ0 is reconstructed via the decay channel to Λγ, and the subsequent decay of Λ to pπ−. The photon is detected via conversion in material to e+e− pairs, exploiting the capability of the ALICE detector to measure electrons at low transverse momenta. The measured p–Σ0 correlation indicates a shallow strong interaction. The comparison of the data to several theoretical predictions, obtained employing the Correlation Analysis Tool using the Schrödinger Equation (CATS) and the Lednický–Lyuboshits approach, shows that the current experimental precision does not yet allow discrimination between different models, as is the case for the available scattering and hypernuclei data. Nevertheless, the p–Σ0 correlation function is found to be sensitive to the strong interaction, which is driven by the interplay of the different spin and isospin channels. This pioneering study demonstrates the feasibility of a femtoscopic measurement in the p–Σ0 channel, and with the larger data samples expected in LHC Run 3 and Run 4, the p–Σ0 interaction will be constrained with high precision.
Multiplicity dependence of light (anti-)nuclei production in p–Pb collisions at √sNN =5.02 TeV
(2020)
The measurement of deuteron and anti-deuteron production in the rapidity range −1 < y < 0 as a function of transverse momentum and event multiplicity in p–Pb collisions at √sNN = 5.02 TeV is presented. (Anti-)deuterons are identified via their specific energy loss dE/dx and via their time-of-flight. Their production in p–Pb collisions is compared to pp and Pb–Pb collisions and is discussed within the context of thermal and coalescence models. The ratio of integrated yields of deuterons to protons (d/p) shows a significant increase as a function of the charged-particle multiplicity of the event, starting from values similar to those observed in pp collisions at low multiplicities and approaching those observed in Pb–Pb collisions at high multiplicities. The mean transverse momenta are extracted from the deuteron spectra and the values are similar to those obtained for p and Λ particles. Thus, deuteron spectra do not follow mass ordering. This behaviour is in contrast to the trend observed for non-composite particles in p–Pb collisions. In addition, the production of the rare 3He and anti-3He nuclei has been studied. The spectrum corresponding to all non-single diffractive p–Pb collisions is obtained in the rapidity window −1 < y < 0 and the pT-integrated yield dN/dy is extracted. It is found that the yields of protons, deuterons, and 3He, normalised by the spin degeneracy factor, follow an exponential decrease with mass number.
The ALICE collaboration at the CERN LHC reports novel measurements of jet substructure in pp collisions at √s = 7 TeV and central Pb–Pb collisions at √sNN = 2.76 TeV. Jet substructure of track-based jets is explored via iterative declustering and grooming techniques. We present the measurement of the momentum sharing of two-prong substructure exposed via grooming, the zg, and its dependence on the opening angle, in both pp and Pb–Pb collisions. We also present the measurement of the distribution of the number of branches obtained in the iterative declustering of the jet, which is interpreted as the number of its hard splittings. In Pb–Pb collisions, we observe a suppression of symmetric splittings at large opening angles and an enhancement of splittings at small opening angles relative to pp collisions, with no significant modification of the number of splittings. The results are compared to predictions from various Monte Carlo event generators to test the role of important concepts in the evolution of the jet in the medium such as colour coherence.
ϒ production in p–Pb interactions is studied at the centre-of-mass energy per nucleon–nucleon collision √sNN = 8.16 TeV with the ALICE detector at the CERN LHC. The measurement is performed reconstructing bottomonium resonances via their dimuon decay channel, in the centre-of-mass rapidity intervals 2.03 < ycms < 3.53 and −4.46 < ycms < −2.96, down to zero transverse momentum. In this work, results on the ϒ(1S) production cross section as a function of rapidity and transverse momentum are presented. The corresponding nuclear modification factor shows a suppression of the ϒ(1S) yields with respect to pp collisions, both at forward and backward rapidity. This suppression is stronger in the low transverse momentum region and shows no significant dependence on the centrality of the interactions. Furthermore, the ϒ(2S) nuclear modification factor is evaluated, suggesting a suppression similar to that of the ϒ(1S). A first measurement of the ϒ(3S) has also been performed. Finally, results are compared with previous ALICE measurements in p–Pb collisions at √sNN = 5.02 TeV and with theoretical calculations.
Measurement of groomed jet substructure observables in p+p collisions at √s = 200 GeV with STAR
(2020)
In this letter, measurements of the shared momentum fraction (zg) and the groomed jet radius (Rg), as defined in the SoftDrop algorithm, are reported in p+p collisions at √s = 200 GeV collected by the STAR experiment. These substructure observables are differentially measured for jets of varying resolution parameters from R = 0.2 to 0.6 in the transverse momentum range 15 < pT,jet < 60 GeV/c. These studies show that, in the pT,jet range accessible at √s = 200 GeV and with increasing jet resolution parameter and jet transverse momentum, the zg distribution asymptotically converges to the DGLAP splitting kernel for a quark radiating a gluon. The groomed jet radius measurements reflect a momentum-dependent narrowing of the jet structure for jets of a given resolution parameter, i.e., the larger the pT,jet, the narrower the first splitting. For the first time, these fully corrected measurements are compared to Monte Carlo generators with leading-order QCD matrix elements and leading-log parton showers, and to state-of-the-art theoretical calculations at next-to-leading-log accuracy. We observe that PYTHIA 6 with parameters tuned to reproduce RHIC measurements is able to quantitatively describe the data, whereas PYTHIA 8 and HERWIG 7, tuned to reproduce LHC data, are unable to provide a simultaneous description of both zg and Rg, presenting opportunities for fine parameter tuning of these models for p+p collisions at RHIC energies. We also find that theoretical calculations without non-perturbative corrections are able to qualitatively describe the trend in the data for jets of large resolution parameters at high pT,jet, but fail at small jet resolution parameters and low jet transverse momenta.
We report on the measurement of the size of the particle-emitting source from two-baryon correlations with ALICE in high-multiplicity pp collisions at √s = 13 TeV. The source radius is studied with low relative momentum p–p, p̄–p̄, p–Λ, and p̄–Λ̄ pairs as a function of the pair transverse mass mT, considering for the first time in a quantitative way the effect of strong resonance decays. After correcting for this effect, the radii extracted for pairs of different particle species agree. This indicates that protons, antiprotons, Λs, and Λ̄s originate from the same source. Within the measured mT range (1.1–2.2) GeV/c2 the invariant radius of this common source varies between 1.3 and 0.85 fm. These results provide a precise reference for studies of the strong hadron–hadron interactions and for the investigation of collective properties in small colliding systems.
Multiplicity dependence of inclusive J/ψ production at midrapidity in pp collisions at √s = 13 TeV
(2020)
Measurements of the inclusive J/ψ yield as a function of charged-particle pseudorapidity density dNch/dη in pp collisions at √s = 13 TeV with ALICE at the LHC are reported. The J/ψ meson yield is measured at midrapidity (|y| < 0.9) in the dielectron channel, for events selected based on the charged-particle multiplicity at midrapidity (|η| < 1) and at forward rapidity (−3.7 < η < −1.7 and 2.8 < η < 5.1); both observables are normalized to their corresponding averages in minimum bias events. The increase of the normalized J/ψ yield with normalized dNch/dη is significantly stronger than linear and dependent on the transverse momentum. The data are compared to theoretical predictions, which describe the observed trends well, albeit not always quantitatively.
Investigation of the linear and mode-coupled flow harmonics in Au+Au collisions at √sNN = 200 GeV
(2020)
Flow harmonics (vn) of the Fourier expansion for the azimuthal distributions of hadrons are commonly employed to quantify the azimuthal anisotropy of particle production relative to the collision symmetry planes. While the lower-order Fourier coefficients (v2 and v3) are more directly related to the corresponding eccentricities of the initial state, the higher-order flow harmonics (vn>3) can be induced by a mode-coupled response to the lower-order anisotropies, in addition to a linear response to the same-order anisotropies. These higher-order flow harmonics and their linear and mode-coupled contributions can be used to more precisely constrain the initial conditions and the transport properties of the medium in theoretical models. The multiparticle azimuthal cumulant method is used to measure the linear and mode-coupled contributions to the higher-order anisotropic flow, the mode-coupled response coefficients, and the correlations of the event plane angles for charged particles as functions of centrality and transverse momentum in Au+Au collisions at nucleon–nucleon center-of-mass energy √sNN = 200 GeV. The results are compared to similar LHC measurements as well as to several viscous hydrodynamic calculations with varying initial conditions.
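The Fourier expansion referred to in this abstract is the standard decomposition of the single-particle azimuthal distribution:

```latex
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[\,n\left(\varphi - \Psi_n\right)\right]
```

where φ is the particle azimuthal angle, Ψn are the symmetry-plane angles, and the coefficients vn are the flow harmonics, with v2 and v3 denoting elliptic and triangular flow.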
Monoterpenes and their monoterpenoid derivatives form a subclass of terpene(oid)s. They are widely used in medicines/pharmaceuticals, as flavor and fragrance compounds, or in agriculture, and are also considered as future biofuels. However, for many of these substances, extraction from natural sources poses challenges, for example because they occur at low concentrations in the raw material or because the natural sources are diminishing. Furthermore, many of the structurally more complex terpenoids cannot be chemically synthesized in an economic way. Therefore, microbial production provides an attractive alternative, taking advantage of the often distinct regio- and stereoselectivity of enzymatic reactions. However, monoterpenes and monoterpenoids are challenging products for industrial biotechnology processes due to their pronounced cytotoxicity, which complicates production in microorganisms compared to longer-chain terpenes (sesquiterpenes, diterpenes, etc.).
The aim of this thesis was to create a biotechnological complement to fossil-resource-based chemical processes for industrial monoterpenoid production, and thereby a starting point for the further development of a microbial cell factory based on the microbe Pseudomonas putida KT2440. This production organism should be able to conduct a whole-cell biocatalysis to selectively oxyfunctionalize monoterpene hydrocarbons, using renewable industrial by-products and waste streams as raw material for monoterpenoid production (Figure 1). The production of (-)-menthol was addressed as a model case due to its industrial significance. (-)-Menthol is one of the world's most widely used flavor and fragrance compounds by volume as well as a medical component, with an annual production volume of over 30,000 tons. An approach for (-)-menthol production from renewable resources could be a biotechnological(-chemical) two-step conversion (Figure 1), starting from (+)-limonene, a by-product of the citrus fruit processing industry.
The thesis project was divided into three parts. In the first part, enzymes (limonene-3-hydroxylases) were to be identified that can convert (+)-limonene into the precursor of (-)-menthol, (+)-trans-isopiperitenol. To counteract product toxicity, in the second part, the tolerance of the intended production organism P. putida KT2440 towards monoterpenes and their monoterpenoid derivatives was to be increased. Finally, in the third part, the identified hydroxylase enzymes were to be expressed in the improved P. putida KT2440 strain to create a whole-cell biocatalyst for the first reaction step of a two-step (-)-menthol production, starting from (+)-limonene.
To achieve these objectives, different genetic/molecular biology and analytical methods were applied. In this way, two cytochrome P450 monooxygenase enzymes from the fungi Aureobasidium pullulans and Hormonema carpetanum were identified and functionally expressed in Pichia pastoris; both can catalyze the intended hydroxylation reaction on (+)-limonene with high stereo- and regioselectivity. Further characterization of the enzyme from A. pullulans showed that, apart from (+)-limonene, the protein can also hydroxylate (−)-limonene, α- and β-pinene, as well as 3-carene.
Furthermore, within this thesis, mechanisms of microbial monoterpenoid resistance of P. putida could be identified. It was shown that the different monoterpenes and monoterpenoids tested have very different toxicity levels and that mainly the Ttg efflux pumps of P. putida GS1 are responsible for the tolerance to many of these compounds. Based on these results, a P. putida KT2440 strain with increased resistance to various monoterpenoids, including isopiperitenol, could then be generated, which can be used as a host organism for the further development of monoterpenoid-producing cell factories.
Although the functional heterologous expression of the fungal gene in prokaryotic cells could not be realized within the scope of this work despite different approaches, the identified enzymes, the monoterpenoid-tolerant P. putida strain and a plasmid developed for heterologous gene expression in P. putida provide a starting point for the further design of a microbial cell factory for biotechnological monoterpenoid production.
The miRNA biogenesis is tightly regulated to avoid dysfunction and consequent disease development. Here, we describe the modulation of miRNA processing as a novel noncanonical function of the 5-lipoxygenase (5-LO) enzyme in monocytic cells. In differentiated Mono Mac 6 (MM6) cells, we found an in situ interaction of 5-LO with Dicer, a key enzyme in miRNA biogenesis. RNA sequencing of small noncoding RNAs revealed a functional impact: knockout of 5-LO altered the expression profile of several miRNAs. Effects of 5-LO could be observed at two levels. qPCR analyses indicated that (a) 5-LO promotes the transcription of the evolutionarily conserved miR-99b/let-7e/miR-125a cluster and (b) the 5-LO-Dicer interaction downregulates the processing of pre-let-7e, resulting in an increase in miR-125a and miR-99b levels by 5-LO without concomitant changes in let-7e levels in differentiated MM6 cells. Our observations suggest that 5-LO regulates the miRNA profile by modulating the Dicer-mediated processing of distinct pre-miRNAs. 5-LO inhibits the formation of let-7e, a well-known inducer of cell differentiation, but promotes the generation of miR-99b and miR-125a, which are known to induce cell proliferation and the maintenance of leukemic stem cell functions.
"How can I differentiate the Czechs? Into urban and rural (Machar and Brezina)?" Hugo von Hofmannsthal asked Hermann Bahr uncertainly while drafting the editorial plan for the "Österreichische Bibliothek". As far as Czech literature is concerned, the question may seem somewhat naive, yet it shows that Hofmannsthal had at least some knowledge of two prominent representatives of early Czech literary modernism. The poet and feuilletonist Josef Svatopluk Machar (1864–1942) had lived in Vienna since 1889. Bahr had met him in July 1892 and collaborated with him on the founding of the weekly "Die Zeit"; even after Bahr's withdrawal from this journal in 1899, Machar served as an important link to Czech writers and politicians, including T. G. Masaryk. It was not hard to come across Machar's name in the German-Austrian press at the beginning of the 20th century. His conflicts with the Catholic Church, which he provoked with his feuilletons, poems and lectures, were mentioned frequently. Moreover, he had become the most-translated Czech poet. The reason for this was not only the quality of his work and his growing popularity among Czech readers; several Czech writers could have matched that. The main reason was rather that he lived in Vienna and that a number of his friends there had translated him. One of them was Emil Saudek, who drew Hofmannsthal's attention to the second of the Czech poets mentioned above, the Symbolist Otokar Březina. This hitherto little-known circumstance is examined in what follows.
The text edited and translated here is part of a collective review ("Poésie") in which Teodor de Wyzewa discusses new volumes of poetry in the February 1887 issue of the "Revue indépendante".
For Wyzewa, these volumes occasion a fundamental reflection on Symbolism, insofar as it is under discussion as a new mode of writing poetry and, as he argues, the concept of the symbol, to which the name of the new movement (already current in 1887) refers, is in need of clarification.
This study focuses on the historiography of Friedrich I Barbarossa's campaigns of 1154–1158 in Lombardy. While the highly educated Bishop Otto of Freising has attracted lively scholarly interest, his two contemporaries who wrote independent accounts of the events have remained largely neglected by research. By comparing the 'Gesta' of Bishop Otto of Freising, the 'Libellus' of Otto Morena of Lodi and the 'Narratio' of an anonymous writer from Milan, this study identifies the authors' intentions and asks to what extent the contradictory accounts can be understood as "alternative facts".
After an outline of the concept of "alternative facts", which attracted attention in the course of Donald Trump's presidency and is understood here as a deliberate or unintentional distortion, of the modern reception of Barbarossa, and of the temporal and spatial setting, the authors' "starting positions" are considered. The genesis of the 'Gesta' and its relation to Otto's first work are disputed. It emerges that the positions of Otto of Freising and Otto Morena suggest a pro-imperial intent, while that of the Milanese author suggests an anti-imperial one.
A close reading of the works' prefaces and prologues reveals the intentions the authors themselves professed. The reliance of Otto of Freising's 'Gesta' on a report of deeds written by, or on behalf of, Barbarossa, as well as his praise of the emperor, point to a coloured account. Otto Morena likewise displays a strong attachment to the emperor, which must raise doubts about the neutrality of his work. The anonymous author from Milan expressly professes to write for the benefit of posterity and places the destruction of Milan in 1162 as the endpoint of a long-reaching narrative of victimhood. Even though explicit attacks on the emperor are absent, strong doubts about the neutrality of his account are warranted.
The examination of the events of 1154 reveals "alternative" accounts: Otto of Freising's narrative follows the imperial template and is written in the emperor's favour, as is Otto Morena's, who moreover emphasises the role of Lodi. The Milanese "counter-account", by contrast, blames negative events exclusively on Barbarossa.
Otto of Freising emphasises the long-planned imperial coronation in Rome and the campaign against the Normans as the starting point of the first Italian expedition. Otto Morena places the beginning of the dispute between Barbarossa and the Milanese at the assembly of the court in Konstanz, where complaints by two men of Lodi allegedly gave rise to Friedrich's first Italian expedition. The anonymous author from Milan accuses Barbarossa of having set out with the aim of military subjugation.
Otto of Freising adopted Barbarossa's account of an attempted bribery by the Milanese, whose consuls subsequently led his march through desolate landscapes, something Otto Morena also reports. The Milanese writer passes over this and tells instead of mistreatment of the Milanese by the royal retinue. He styles the storming of the castle of Rosate as an unfounded act of violence, while the writers from Lodi and Freising argue in its justification.
The independently transmitted 'Conventio', concluded between the city and the emperor in 1158 after the siege of Milan, contained, besides penal provisions, the recognition of the emperor's sovereignty while preserving the communal form of government. While Otto Morena reproduced its provisions only very incompletely, suggesting that he did not know them, the Milanese anonymous, through targeted omissions and falsification of its provisions, once again delivered "alternative facts" and created the impression of a return to the "emperor-distant" years before Barbarossa.
A close examination of the 'lex omnis iurisdictio' established at the diet of Roncaglia in 1158 makes clear that, contrary to the prevailing view in research, it did not constitute a breach of the 'Conventio'. Confronting the accounts of the events of January 1159 in Milan with the eyewitness report of Vincent of Prague shows that Otto Morena again reports only briefly. The anonymous author, by contrast, delivers an "alternative" account according to which the emperor's envoys had come to break the law. This tendentiousness also becomes apparent in the capture of the castle of Trezzo, for which a report by Otto's former chaplain Rahewin survives.
The accounts reveal that their authors intended to deploy their texts purposefully and thus became producers of "alternative facts". For the historian this demonstrates once more the importance of a source-critical method, as Johannes Fried impressively advocated in his "Memorik".
Radar technology in the millimeter-wave frequency band offers many interesting features for wind park surveillance, such as structural monitoring of rotor blades or the detection of bats and birds in the vicinity of wind turbines (WTs). Currently, the majority of WTs are affected by shutdown algorithms to minimize animal fatalities via direct collision with the rotor blades or barotrauma effects. The presence of rain is an important parameter in the definition of those algorithms together with wind speed, temperature, time of the day, and season of the year. A Ka-band frequency-modulated continuous-wave radar (33.4-36.0 GHz) installed at the tower of a 2-MW WT was used during a field study. We have observed characteristic rain-induced patterns, based on the range-Doppler algorithm. To better understand those signatures, we have developed a laboratory experiment and implemented a numerical modeling framework. Experimental and numerical results for rain detection and classification are presented and discussed here. Based on this article, a bat- and bird-friendly adaptive WT control can be developed for improved WT efficiency in periods of rain and, at the same time, reduced animal mortality.
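The rain signatures described above are obtained from range-Doppler processing of the FMCW radar data. As a hedged illustration of that standard technique (not the study's actual pipeline; all parameters and the synthetic scatterer below are hypothetical), a range-Doppler map can be computed with a windowed 2-D FFT over fast time (range) and slow time (Doppler):

```python
import numpy as np

def range_doppler_map(beat_signals):
    """Compute a range-Doppler magnitude map from an FMCW data matrix.

    beat_signals: 2-D array (n_chirps, n_samples) of beat-signal samples.
    The fast-time FFT resolves range, the slow-time FFT resolves Doppler.
    """
    # Window both dimensions to suppress FFT sidelobes.
    win_fast = np.hanning(beat_signals.shape[1])
    win_slow = np.hanning(beat_signals.shape[0])[:, None]
    windowed = beat_signals * win_fast * win_slow
    # Range FFT along fast time, then Doppler FFT along slow time.
    range_fft = np.fft.fft(windowed, axis=1)
    rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(rd_map)

# Synthetic example: one point scatterer at range bin 40 with a Doppler shift.
n_chirps, n_samples = 64, 128
t = np.arange(n_samples) / n_samples
chirp_idx = np.arange(n_chirps)[:, None]
f_range, f_doppler = 40.0, 10.0 / n_chirps   # hypothetical on-grid frequencies
signal = np.exp(2j * np.pi * (f_range * t + f_doppler * chirp_idx))
rd = range_doppler_map(signal)
doppler_bin, range_bin = np.unravel_index(np.argmax(rd), rd.shape)
```

Distributed rain clutter would appear in such a map as an extended pattern across many range and Doppler cells, in contrast to the single peak of a point target.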
Unresolved inflammation maintained by release of danger‐associated molecular patterns, particularly high‐mobility group box‐1 (HMGB1), is crucial for hepatocellular carcinoma (HCC) pathogenesis. To further characterize interactions between leucocytes and necrotic cancerous tissue, a cellular model of necroinflammation was studied in which murine Raw 264.7 macrophages or primary splenocytes were exposed to necrotic lysates (N‐lys) of murine hepatoma cells or primary hepatocytes. In comparison to those derived from primary hepatocytes, N‐lys from hepatoma cells were highly active, inducing in macrophages efficient expression of inflammatory cytokines like C‐X‐C motif ligand‐2, tumor necrosis factor‐α, interleukin (IL)‐6 and IL‐23‐p19. This activity was associated with higher levels of HMGB1 in hepatoma cells and was curbed by pharmacological blockage of the receptor for advanced glycation end product (RAGE)/HMGB1 axis or the mitogen‐activated protein kinase ERK1/2 pathway. Analysis of murine splenocytes furthermore demonstrated that N‐lys did not contain functionally relevant amounts of TLR4 agonists. Finally, N‐lys derived from hepatoma cells supported inflammatory splenic Th17 and Th1 polarization as detected by IL‐17, IL‐22 or interferon‐γ production. Altogether, a straightforwardly applicable model was established which allows for biochemical characterization of immunoregulation by HCC necrosis in cell culture. The data presented indicate a remarkably inflammatory capacity of necrotic hepatoma cells that, at least partly, depends on the RAGE/HMGB1 axis and may shape the immunological properties of the HCC microenvironment.
Cryo-electron tomography combined with subtomogram averaging (StA) has yielded high-resolution structures of macromolecules in their native context. However, high-resolution StA is not commonplace due to beam-induced sample drift, images with poor signal-to-noise ratios (SNR), challenges in CTF correction, and limited particle number. Here we address these issues by collecting tilt series with a higher electron dose at the zero-degree tilt. Particles of interest are then located within reconstructed tomograms, processed by conventional StA, and then re-extracted from the high-dose images in 2D. Single particle analysis tools are then applied to refine the 2D particle alignment and generate a reconstruction. Use of our hybrid StA (hStA) workflow improved the resolution for tobacco mosaic virus from 7.2 to 4.4 Å and for the ion channel RyR1 in crowded native membranes from 12.9 to 9.1 Å. These resolution gains make hStA a promising approach for other StA projects aimed at achieving subnanometer resolution.
The blood-brain barrier (BBB) protects the brain microenvironment from external damage. It is formed by endothelial cells (ECs) lining the brain vessels, expressing tight junctions and having reduced transcytosis, resulting in a very low paracellular and transcellular passage of substances, respectively (low permeability). The specific BBB phenotype is maintained by Wnt molecules secreted by astrocytes (ACs) that bind to receptors in ECs, and start a molecular cascade that leads to β-catenin translocating to the nucleus and activating the transcription of BBB genes.
An increasing number of studies report BBB dysfunction in Alzheimer’s disease (AD), although the topic is currently under debate. AD is a neurodegenerative condition characterized by brain depositions of Aβ aggregates and Tau neurofibrillary tangles. The aetiology of AD is unknown, although around 5% of all AD cases have a genetic origin. Mutations in APP or PSEN1/2 can lead to Aβ over-production and accumulation, causing familial AD. There is no cure for AD, as all clinical trials have failed in recent years. Consequently, I studied the role of the BBB in AD, aiming to investigate whether BBB dysfunction occurs in AD, and to identify by transcriptomic analysis novel gene regulations taking place at the BBB in AD. The final objective was to evaluate the potential of the identified BBB genes as therapeutic targets.
I used transgenic mice expressing the human APP mutations Swedish, Dutch and Iowa under the control of the neuronal promoter Thy1 (Thy1-APPSwDI) as an AD model. In this AD mouse model, I could detect Aβ deposits and memory loss by immunofluorescence (IF) and behavioural tests. Importantly, I identified an increase of BBB permeability to 3-4 kDa dextrans in 6-month-old, 9-12-month-old, and 18-month-or-older AD mice compared to age-matched wild-type controls (WT), indicating BBB dysfunction in AD mice.
In order to study the BBB transcriptional changes in AD, I sequenced the RNA from brain microvessels (MBMVs) of 6- and 18-month-old AD and WT mice, as well as from FACS-sorted ECs, mural cells (MuCs), ACs, and microglia (MG), in collaboration with GenXPro, a company specialized in 3’ RNA sequencing. Currently, no transcriptomic datasets of ECs and MuCs are publicly available, suggesting that this is the first study sequencing those cell types in the context of AD.
The analysis of sequencing data from MBMVs and ECs revealed a Wnt/β-catenin repression and an increase of inflammatory genes like Ccl3 in ECs, which could explain the BBB dysfunction observed in AD mice. Furthermore, the sequencing data from MuCs identified a set of 11 genes strongly regulated in both the 6- and 18-month AD groups. Three of those 11 genes are known to be involved in inflammatory processes, indicating that inflammation affects MuCs as well as ECs and plays an important role in both during AD.
Thanks to published sequencing data, some up-regulated MG genes in AD are well known and recognized, such as Trem2 and Apoe. Those genes were found in the FACS-sorted MG data as well, validating the AD model and with it, the other novel sequenced datasets. Importantly, one of the most strongly AD-regulated genes in MBMV and MG samples was Dkk2, a member of the Dickkopf family of secreted proteins known to be involved in Wnt signalling modulation. Importantly, a dual luciferase reporter assay proved that Dkk2 is a Wnt inhibitor. A preliminary immunohistochemistry examination of DKK2 in human brain autopsy tissue from an AD patient and age-matched control revealed a stronger DKK2 immunoreactivity in the AD brain.
In order to answer the question whether a rescue of BBB function would ameliorate AD symptoms, I made use of a tamoxifen-inducible transgenic mouse line to activate the Wnt/β-catenin pathway specifically in ECs, leading to a gain of function (GOF) condition (Cdh5-CreERT2+/–/Ctnnb1(Ex3)fl/fl). This mouse line was then crossed with the AD line, creating AD/GOF and AD/control groups.
AD/GOF mice performed better in a Y-Maze memory test than AD/controls when the Wnt/β-catenin pathway was induced before AD onset, indicating a protective effect. Moreover, the finding implies that shielding BBB functioning in AD further protects the brain from AD toxic effects, suggesting an important role of brain vasculature in AD and its potential as therapeutic target.
Recurrent cortical network dynamics plays a crucial role for sequential information processing in the brain. While the theoretical framework of reservoir computing provides a conceptual basis for the understanding of recurrent neural computation, it often requires manual adjustments of global network parameters, in particular of the spectral radius of the recurrent synaptic weight matrix. Being a mathematical and relatively complex quantity, the spectral radius is not readily accessible to biological neural networks, which generally adhere to the principle that information about the network state should either be encoded in local intrinsic dynamical quantities (e.g. membrane potentials), or transmitted via synaptic connectivity. We present two synaptic scaling rules for echo state networks that solely rely on locally accessible variables. Both rules work online, in the presence of a continuous stream of input signals. The first rule, termed flow control, is based on a local comparison between the mean squared recurrent membrane potential and the mean squared activity of the neuron itself. It is derived from a global scaling condition on the dynamic flow of neural activities and requires the separability of external and recurrent input currents. We gained further insight into the adaptation dynamics of flow control by using a mean field approximation on the variances of neural activities that allowed us to describe the interplay between network activity and adaptation as a two-dimensional dynamical system. The second rule that we considered, variance control, directly regulates the variance of neural activities by locally scaling the recurrent synaptic weights. The target set point of this homeostatic mechanism is dynamically determined as a function of the variance of the locally measured external input. This functional relation was derived from the same mean-field approach that was used to describe the approximate dynamics of flow control.
The effectiveness of the presented mechanisms was tested numerically using different external input protocols. The network performance after adaptation was evaluated by training the network to perform a time delayed XOR operation on binary sequences. As our main result, we found that flow control can reliably regulate the spectral radius under different input statistics, but precise tuning is negatively affected by interneural correlations. Furthermore, flow control showed a consistent task performance over a wide range of input strengths/variances. Variance control, on the other side, did not yield the desired spectral radii with the same precision. Moreover, task performance was less consistent across different input strengths.
Given the better performance and simpler mathematical form of flow control, we concluded that a local control of the spectral radius via an implicit adaptation scheme is a realistic alternative to approaches using classical “set point” homeostatic feedback controls of neural firing.
Author summary How can a neural network control its recurrent synaptic strengths such that network dynamics are optimal for sequential information processing? An important quantity in this respect, the spectral radius of the recurrent synaptic weight matrix, is a non-local quantity. Therefore, a direct calculation of the spectral radius is not feasible for biological networks. However, we show that there exist a local and biologically plausible adaptation mechanism, flow control, which allows to control the recurrent weight spectral radius while the network is operating under the influence of external inputs. Flow control is based on a theorem of random matrix theory, which is applicable if inter-synaptic correlations are weak. We apply the new adaption rule to echo-state networks having the task to perform a time-delayed XOR operation on random binary input sequences. We find that flow-controlled networks can adapt to a wide range of input strengths while retaining essentially constant task performance.
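The flow-control idea described in this abstract can be illustrated with a minimal echo-state-network sketch. The following is an assumption-laden toy version (random recurrent matrix, per-neuron gain factors, a hypothetical adaptation rate `eps` and target radius `R_target`), not the authors' implementation: each neuron scales its recurrent input so that its mean squared recurrent contribution matches the target radius squared times the mean squared network activity, using only locally available quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
R_target = 1.0    # desired spectral radius (assumption)
eps = 0.001       # adaptation rate (assumption)

# Random recurrent matrix with spectral radius ~1 (circular law),
# random input weights, and per-neuron gain factors to be adapted.
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W_in = rng.normal(0.0, 1.0, N)
gain = np.ones(N)
x = rng.uniform(-0.1, 0.1, N)

for _ in range(5000):
    u = rng.uniform(-1.0, 1.0, N)        # continuous random input stream
    rec = gain * (W @ x)                 # locally scaled recurrent input
    x_sq = np.mean(x**2)                 # mean squared activity before update
    x = np.tanh(rec + W_in * u)
    # Flow-control-like rule: nudge each gain so that the squared recurrent
    # input matches R_target^2 times the mean squared activity.
    gain += eps * gain * (R_target**2 * x_sq - rec**2)

# Effective spectral radius of the adapted recurrent weights.
radius = np.max(np.abs(np.linalg.eigvals(gain[:, None] * W)))
```

As the abstract notes, inter-neural correlations limit the precision of such tuning, so the resulting radius lands near, rather than exactly at, the target.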
We explore the phase structure of the 1+1 dimensional Gross-Neveu model at finite number of fermion flavors using lattice field theory. Besides a chirally symmetric phase and a homogeneously broken phase we find evidence for the existence of an inhomogeneous phase, where the condensate is a spatially oscillating function. Our numerical results include a crude μ-T phase diagram.
Standard monitoring of heart rate, blood pressure and arterial oxygen saturation during endoscopy is recommended by current guidelines on procedural sedation. A number of studies indicated a reduction of hypoxic (art. oxygenation < 90% for > 15 s) and severe hypoxic events (art. oxygenation < 85%) by additional use of capnography. Therefore, the U.S. and European guidelines comment that additional capnography monitoring can be considered in long or deep sedation. The Integrated Pulmonary Index® (IPI) is an algorithm-based monitoring parameter that combines oxygenation measured by pulse oximetry (art. oxygenation, heart rate) and ventilation measured by capnography (respiratory rate, apnea > 10 s, partial pressure of end-tidal carbon dioxide [PetCO2]). The aim of this paper was to analyze the value of the IPI as a parameter to monitor the respiratory status in patients receiving propofol sedation during the PEG procedure. Patients presenting for PEG placement under sedation were randomized 1:1 into either a standard monitoring group (SM) or a capnography monitoring group including IPI (IM). Heart rate, blood pressure and arterial oxygen saturation were monitored in SM. In IM, additional monitoring was performed measuring PetCO2, respiratory rate and IPI. Capnography and IPI values were recorded for all patients but were only visible to the endoscopic team for the IM group. IPI values range between 1 and 10 (10 = normal; 8–9 = within normal range; 7 = close to normal range, requires attention; 5–6 = requires attention and may require intervention; 3–4 = requires intervention; 1–2 = requires immediate intervention). Results on capnography versus standard monitoring of the same study population were published previously. A total of 147 patients (74 in SM and 73 in IM) were included in the present study. Hypoxic events occurred in 62 patients (42%) and severe hypoxic events in 44 patients (29%), respectively. Baseline characteristics were equally distributed in both groups.
IPI = 1, IPI < 7 as well as the parameters PetCO2 = 0 mmHg and apnea > 10 s had a high sensitivity for hypoxic and severe hypoxic events, respectively (IPI = 1: 81%/81% [hypoxic/severe hypoxic event], IPI < 7: 82%/88%, PetCO2: 69%/68%, apnea > 10 s: 84%/84%). All four parameters had a low specificity for both hypoxic and severe hypoxic events (IPI = 1: 13%/12%, IPI < 7: 7%/7%, PetCO2: 29%/27%, apnea > 10 s: 7%/7%). In multivariate analysis, only SM and PetCO2 = 0 mmHg were independent risk factors for hypoxia. IPI (IPI = 1 and IPI < 7) as well as the individual parameters PetCO2 = 0 mmHg and apnea > 10 s allow a fast and convenient conclusion on patients’ respiratory status in a morbid patient population. Sensitivity is good for most parameters, but specificity is poor. In conclusion, IPI can be a useful metric to assess respiratory status during propofol-sedation in PEG-placement. However, IPI was not superior to PetCO2 and apnea > 10 s.
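The sensitivity and specificity figures reported above follow from standard 2x2 confusion-table arithmetic. As a sketch, the counts below are hypothetical and only loosely modeled on the reported proportions (62 of 147 patients with hypoxic events, ~81% sensitivity and ~13% specificity for IPI = 1); they are not taken from the study's raw data:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from a 2x2 confusion table.

    tp/fn: alarm present/absent among patients WITH the event;
    fp/tn: alarm present/absent among patients WITHOUT the event.
    """
    sensitivity = tp / (tp + fn)   # fraction of events flagged by the alarm
    specificity = tn / (tn + fp)   # fraction of non-events left unflagged
    return sensitivity, specificity

# Illustrative counts: 62 patients with hypoxic events, 85 without.
sens, spec = sensitivity_specificity(tp=50, fp=74, fn=12, tn=11)
```

With such counts, sensitivity is about 0.81 and specificity about 0.13, matching the qualitative picture of a sensitive but unspecific alarm parameter.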
Aims: Stroke is a major complication after transcatheter aortic valve implantation (TAVI). Although multifactorial, it remains unknown whether the valve deployment system itself has an impact on the incidence of early stroke. We performed a meta- and network analysis to investigate the 30-day stroke incidence of self-expandable (SEV) and balloon-expandable (BEV) valves after transfemoral TAVI.
Methods and results: Overall, 2723 articles directly comparing the performance of SEV and BEV after transfemoral TAVI were screened, of which 9 were included (3086 patients). Random effects models were used for meta- and network meta-analysis based on a frequentist framework. The thirty-day incidence of stroke was 1.8% with SEV and 3.1% with BEV (risk ratio 0.62, 95% confidence interval (CI) 0.49–0.80, p = 0.004). Treatment ranking based on network analysis (P-score) revealed CoreValve with the best performance for 30-day stroke incidence (75.2%), whereas SAPIEN had the worst (19.0%). However, network analysis showed no inferiority of SAPIEN compared with CoreValve (odds ratio 2.24, 95% CI 0.70–7.2).
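A risk ratio with its confidence interval, as reported above, is conventionally computed on the log scale with a normal approximation. The sketch below uses hypothetical event counts chosen only to reproduce the reported incidence rates of roughly 1.8% versus 3.1% (the study's pooled estimate additionally weights individual trials, which is not shown here):

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs. group B with an approximate 95% CI
    (log-normal approximation, standard in meta-analysis)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    # Standard error of log(RR) for a single 2x2 table.
    se_log = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Illustrative counts: 27/1500 strokes with SEV vs. 46/1500 with BEV.
rr, lo, hi = risk_ratio_ci(27, 1500, 46, 1500)
```

With these illustrative counts the point estimate is about 0.59, in the same range as the pooled risk ratio of 0.62 reported by the meta-analysis.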
Conclusion: Our analysis indicates higher 30-day stroke incidence after transfemoral TAVI with BEV compared to SEV. We could not find evidence for superiority of a specific valve system. More randomized controlled trials with head-to-head comparison of SEV and BEV are needed to address this open question.
Congenital diaphragmatic hernia (CDH) is a relatively common and life-threatening birth defect, characterized by incomplete formation of the diaphragm. Because CDH herniation occurs at the same time as preacinar airway branching, normal lung development becomes severely disrupted, resulting almost invariably in pulmonary hypoplasia. Despite various research efforts over the past decades, the pathogenesis of CDH and associated lung hypoplasia remains poorly understood. With the advent of molecular techniques, transgenic animal models of CDH have generated a large number of candidate genes, thus providing a novel basis for future research and treatment. This review article offers a comprehensive overview of genes and signaling pathways implicated in CDH etiology, whilst also discussing strengths and limitations of transgenic animal models in relation to the human condition.
The Rnf complex is a Na+ coupled respiratory enzyme in a fermenting bacterium, Thermotoga maritima
(2020)
rnf genes are widespread in bacteria, and biochemical and genetic data are in line with the hypothesis that they encode a membrane-bound enzyme that oxidizes reduced ferredoxin and reduces NAD and vice versa, coupled to ion transport across the cytoplasmic membrane. The Rnf complex is of critical importance in many bacteria for energy conservation but also for reverse electron transport to drive ferredoxin reduction. However, the enzyme had never been purified and thus ion transport could not be demonstrated. Here, we have purified the Rnf complex from the anaerobic, fermenting thermophilic bacterium Thermotoga maritima and show that it is a primary Na+ pump. These studies provide proof that the Rnf complex is indeed an ion (Na+) translocating, respiratory enzyme. Together with a Na+-F1FO ATP synthase it builds a simple, two-limb respiratory chain in T. maritima. The physiological role of electron transport phosphorylation in a fermenting bacterium is discussed.
For large isospin asymmetries, perturbation theory predicts the quantum chromodynamic (QCD) ground state to be a superfluid phase of u and d¯ Cooper pairs. This phase, which is denoted as the Bardeen-Cooper-Schrieffer (BCS) phase, is expected to be smoothly connected to the standard phase with Bose-Einstein condensation (BEC) of charged pions at μI≥mπ/2 by an analytic crossover. A first hint for the existence of the BCS phase, which is likely characterised by the presence of both deconfinement and charged pion condensation, comes from the lattice observation that the deconfinement crossover smoothly penetrates into the BEC phase. To further scrutinize the existence of the BCS phase, in this article we investigate the complex spectrum of the massive Dirac operator in 2+1-flavor QCD at nonzero temperature and isospin chemical potential. The spectral density near the origin is related to the BCS gap via a generalization of the Banks-Casher relation to the case of complex Dirac eigenvalues (derived for the zero-temperature, high-density limits of QCD at nonzero isospin chemical potential).
Accurate measurement of the standard 235U(n,f) cross section from thermal to 170 keV neutron energy
(2020)
An accurate measurement of the 235U(n,f) cross section from thermal to 170 keV neutron energy has recently been performed at the n_TOF facility at CERN using 6Li(n,t)4He and 10B(n,α)7Li as references. This measurement was carried out in order to investigate a possible overestimation of the 235U fission cross section evaluation provided by the most recent libraries between 10 and 30 keV. A custom experimental apparatus based on in-beam silicon detectors was used, and a Monte Carlo simulation in GEANT4 was employed to characterize the setup and calculate the detectors' efficiency. The results revealed an overestimation in the interval between 9 and 18 keV, and the new data may be used to decrease the uncertainty of the 235U(n,f) cross section in the keV region.
Two-particle correlation functions were measured for pp̄, pΛ̄, p̄Λ, and ΛΛ̄ pairs in Pb–Pb collisions at √sNN = 2.76 TeV and √sNN = 5.02 TeV recorded by the ALICE detector. From a simultaneous fit to all obtained correlation functions, real and imaginary components of the scattering lengths, as well as the effective ranges, were extracted for combined pΛ̄ and p̄Λ pairs and, for the first time, for ΛΛ̄ pairs. Effective averaged scattering parameters for heavier baryon–antibaryon pairs, not measured directly, are also provided. The results reveal similarly strong interaction between measured baryon–antibaryon pairs, suggesting that they all annihilate in the same manner at the same pair relative momentum k∗. Moreover, the reported significant non-zero imaginary part and negative real part of the scattering length provide motivation for future baryon–antibaryon bound state searches.
Measurements of K∗(892)0 and φ(1020) resonance production in Pb–Pb and pp collisions at √sNN = 5.02 TeV with the ALICE detector at the Large Hadron Collider are reported. The resonances are measured at midrapidity (|y| < 0.5) via their hadronic decay channels and the transverse momentum (pT) distributions are obtained for various collision centrality classes up to pT = 20 GeV/c. The pT-integrated yield ratio K∗(892)0/K in Pb–Pb collisions shows significant suppression relative to pp collisions and decreases towards more central collisions. In contrast, the φ(1020)/K ratio does not show any suppression. Furthermore, the measured K∗(892)0/K ratio in central Pb–Pb collisions is significantly suppressed with respect to the expectations based on a thermal model calculation, while the φ(1020)/K ratio agrees with the model prediction. These measurements are an experimental demonstration of rescattering of K∗(892)0 decay products in the hadronic phase of the collisions. The K∗(892)0/K yield ratios in Pb–Pb and pp collisions are used to estimate the time duration between chemical and kinetic freeze-out, which is found to be ∼ 4–7 fm/c for central collisions. The pT-differential ratios of K∗(892)0/K, φ(1020)/K, K∗(892)0/π , φ(1020)/π , p/K∗(892)0 and p/φ(1020) are also presented for Pb–Pb and pp collisions at √sNN = 5.02 TeV. These ratios show that the rescattering effect is predominantly a low-pT phenomenon.
Accurate neutron capture cross section data for minor actinides (MAs) are required to estimate the production and transmutation rates of MAs in light water reactors with a high burnup, in critical fast reactors such as Gen-IV systems, and in other innovative reactor systems such as accelerator-driven systems (ADS). Capture reactions on 244Cm open the path to the formation of heavier Cm isotopes and of heavier elements such as Bk and Cf. In addition, 244Cm accounts for nearly 50% of the total actinide decay heat in irradiated reactor fuels with a high burnup, even after three years of cooling.
Experimental data for this isotope are very scarce due to the difficulties of producing isotopically enriched samples and because the high intrinsic activity of the samples requires the use of neutron facilities with a high instantaneous flux. The only two previous experimental datasets for this neutron capture cross section were obtained in 1969, using a nuclear explosion, and more recently at J-PARC in 2010. The neutron capture cross sections have now been measured at n_TOF with the same samples used in the previous experiments at J-PARC. The samples were measured in n_TOF Experimental Area 2 (EAR-2) with three C6D6 detectors and also in Experimental Area 1 (EAR-1) with the Total Absorption Calorimeter (TAC). Preliminary results assessing the quality and limitations of these new experimental datasets are presented for the experiments in both areas. Preliminary yields of both measurements will be compared with evaluated libraries for the first time.
233U is the fissile nucleus of the Th-U fuel cycle with a particularly small neutron capture cross section, which is on average about one order of magnitude lower than its fission cross section. Hence, the measurement of the 233U(n,γ) cross section relies on a method to accurately distinguish between capture and fission γ-rays. A measurement of the 233U α-ratio has been performed at the n_TOF facility at CERN using a so-called fission-tagging setup, coupling n_TOF's Total Absorption Calorimeter with a novel fission chamber to tag the fission γ-rays. The experimental setup is described and essential parts of the analysis are discussed. Finally, a preliminary 233U α-ratio is presented.
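The α-ratio referred to above is, in the standard notation (spelled out here as an assumption, since the abstract does not define it), the energy-dependent ratio of the capture to the fission cross section:

```latex
\alpha(E_n) \;=\; \frac{\sigma_{(n,\gamma)}(E_n)}{\sigma_{(n,f)}(E_n)}
```

For 233U this ratio is therefore of order 0.1 on average, consistent with the capture cross section lying about one order of magnitude below the fission cross section.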
We have measured the capture cross sections of the 155Gd and 157Gd isotopes between 0.025 eV and 1 keV. The capture events were recorded by an array of four C6D6 detectors, and the capture yield was deduced by exploiting the total energy detection principle in combination with the Pulse Height Weighting Technique. Because of the large cross section around thermal neutron energy, four metallic samples of different thicknesses were used to prevent problems related to self-shielding. The samples were isotopically enriched, with a cross-contamination from the other isotope of less than 1.14%. The capture yield was analyzed with an R-matrix code to describe the cross section in terms of resonance parameters. Near thermal energies, the results differ significantly from evaluations and from previous time-of-flight experiments. The data from the present measurement at n_TOF are publicly available in the experimental nuclear reaction database EXFOR.
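The total energy detection principle mentioned above can be summarized as follows (a standard textbook formulation, assumed here rather than quoted from the source): the weighting function W is chosen so that the weighted efficiency for a single γ-ray becomes proportional to its energy,

```latex
\varepsilon_\gamma(E_\gamma) \;=\; \int W(E_d)\, R(E_d;E_\gamma)\,\mathrm{d}E_d \;\propto\; E_\gamma ,
```

where R is the detector response to deposited energy E_d. For small efficiencies, the probability of detecting a capture cascade then becomes proportional to the total cascade energy, independent of the de-excitation path, which is what makes low-efficiency C6D6 detectors usable for capture yield measurements.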
Feasibility, design and sensitivity studies on innovative nuclear reactors that could address the issue of nuclear waste transmutation using fuels enriched in minor actinides require high-accuracy cross section data for a variety of neutron-induced reactions from thermal energies to several tens of MeV. The isotope 241Am (T1/2 = 433 years) is present in high-level nuclear waste (HLW), representing about 1.8% of the actinide mass in spent PWR UOx fuel. Its importance increases with cooling time due to additional production from the β-decay of 241Pu, which has a half-life of 14.3 years. The production rate of 241Am in conventional reactors, its further accumulation through the decay of 241Pu and its destruction through transmutation/incineration are very important parameters for the design of any recycling solution. In the present work, the 241Am(n,f) reaction cross-section was measured using Micromegas detectors at Experimental Area 2 of the n_TOF facility at CERN. For the measurement, the 235U(n,f) and 238U(n,f) reference reactions were used to determine the neutron flux. An overview of the experimental setup and the adopted data analysis techniques is given along with preliminary results.
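When the flux is determined with reference reactions measured in the same neutron beam, the flux itself cancels in the cross-section ratio. A minimal sketch of this standard normalization (the function name and all numbers are illustrative, not taken from the measurement):

```python
def cross_section_from_reference(c_x, c_ref, sigma_ref, n_ref, n_x,
                                 eff_ref=1.0, eff_x=1.0):
    """Cross section of the isotope under study relative to a reference
    reaction measured in the same beam, so the neutron flux cancels:
      sigma_x = sigma_ref * (C_x/C_ref) * (N_ref/N_x) * (eps_ref/eps_x)
    C: background-subtracted counts in a given energy bin,
    N: sample areal densities (atoms/barn), eps: detection efficiencies."""
    return sigma_ref * (c_x / c_ref) * (n_ref / n_x) * (eff_ref / eff_x)

# illustrative numbers: half the reference counts, half the areal density
print(cross_section_from_reference(c_x=1500, c_ref=3000, sigma_ref=1.2,
                                   n_ref=2e-4, n_x=1e-4))
```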
The (n,γ) cross sections of the gadolinium isotopes play an important role in the study of stellar nucleosynthesis. In particular, among the isotopes heavier than Fe, 154Gd together with 152Gd have the peculiarity of being mainly produced by the slow neutron capture process, the so-called s-process, since they are shielded against the β-decay chains from the r-process region by their stable samarium isobars. Such a quasi-pure s-process origin makes them crucial for testing the robustness of stellar models in galactic chemical evolution (GCE). According to recent models, the 154Gd and 152Gd abundances are expected to be 15-20% lower than that of the reference un-branched s-process isotope 150Sm. The close correlation between stellar abundances and neutron capture cross sections called for an accurate measurement of the 154Gd cross section in order to reduce the uncertainty attributable to the nuclear physics input and potentially rule out one of the possible causes of the present discrepancies between observations and model predictions. To this end, the neutron capture cross section of 154Gd was measured over a wide neutron energy range (from thermal up to a few keV) with high resolution in the first experimental area (EAR1) of the neutron time-of-flight facility n_TOF at CERN. In this contribution, after a brief description of the motivation and of the experimental setup used in the measurement, the preliminary results of the 154Gd neutron capture reaction as well as their astrophysical implications are presented.
Monte Carlo simulations and n-p differential scattering data measured with Proton Recoil Telescopes
(2020)
The neutron-induced fission cross section of 235U, a standard at thermal energy and between 0.15 MeV and 200 MeV, plays a crucial role in nuclear technology applications. The long-standing need for improved cross section data above 20 MeV and the lack of experimental data above 200 MeV motivated a new experimental campaign at the n_TOF facility at CERN. The measurement was performed in 2018 in Experimental Area 1 (EAR1), located 185 m from the neutron-producing target (the experiment is presented by A. Manna et al. in a contribution to this conference). The 235U(n,f) cross section from 20 MeV up to about 1 GeV has been measured relative to the 1H(n,n)1H reaction, which is considered the primary reference in this energy region. The neutron flux impinging on the 235U sample (a key quantity for determining the fission events) was obtained by detecting recoil protons originating from n-p scattering in a C2H4 sample. Two Proton Recoil Telescopes (PRTs), consisting of several layers of solid-state detectors and fast plastic scintillators, were located out of the neutron beam at proton scattering angles of 25.07° and 20.32°. The PRTs exploit the ΔE-E technique for particle identification, a basic requirement for the rejection of charged particles from neutron-induced reactions in carbon. Extensive Monte Carlo simulations were performed to characterize proton transport through the different slabs of silicon and scintillation detectors, to optimize the experimental setup and to deduce the efficiency of the whole PRT detector. In this work we compare measured data collected with the PRTs with a full Monte Carlo simulation based on the Geant4 toolkit.
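At a time-of-flight facility, the neutron kinetic energy follows from the flight path and the measured flight time; at the energies discussed above, the relativistic form is mandatory. A generic sketch of this kinematic relation (not the experiment's actual reconstruction code):

```python
import math

C_LIGHT = 299_792_458.0    # speed of light in m/s
M_NEUTRON = 939.56542      # neutron rest mass in MeV

def neutron_energy_mev(flight_path_m, tof_s):
    """Relativistic neutron kinetic energy from time of flight:
    beta = L/(t*c), gamma = 1/sqrt(1 - beta^2), E = m_n*(gamma - 1)."""
    beta = flight_path_m / (tof_s * C_LIGHT)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return M_NEUTRON * (gamma - 1.0)

# a neutron covering the ~185 m EAR1 flight path in 1 microsecond
print(round(neutron_energy_mev(185.0, 1e-6), 1))
```

As a sanity check, a 2200 m/s thermal neutron over the same path gives back the familiar 0.0253 eV.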
Since the start of its operation in 2001, based on an idea of Prof. Carlo Rubbia [1], the neutron time-of-flight facility of CERN, n_TOF, has become one of the foremost neutron facilities in the world for wide-energy-spectrum neutron cross section measurements. Thanks to the combination of excellent neutron energy resolution and high instantaneous neutron flux available in the two experimental areas, the second of which was constructed in 2014, n_TOF is providing a wealth of new data on neutron-induced reactions of interest for nuclear astrophysics, advanced nuclear technologies and medical applications. The unique features of the facility will continue to be exploited in the future to perform challenging new measurements addressing the still open issues and long-standing questions in the field of neutron physics. In this document the main characteristics of the n_TOF facility and their relevance for neutron studies in the different areas of research are outlined, addressing the possible future contribution of n_TOF in the fields of nuclear astrophysics, nuclear technologies and medical applications. In addition, the future perspectives of the facility are described, including the upgrade of the spallation target, the setup of an imaging installation and the construction of a new irradiation area.
Neutron-induced fission cross sections of isotopes involved in the nuclear fuel cycle are vital for the design and safe operation of advanced nuclear systems. Such experimental data can also provide additional constraints for the adjustment of nuclear model parameters used in the evaluation process, resulting in the further development of fission models. In the present work, the 237Np(n,f) cross section was studied at the EAR2 vertical beam-line at CERN's n_TOF facility, over a wide range of neutron energies, from meV to MeV, using the time-of-flight technique and a set-up based on Micromegas detectors, in an attempt to provide accurate experimental data. Preliminary results in the 200 keV – 14 MeV neutron energy range as well as the experimental procedure, including a description of the facility and the data handling and analysis, will be presented.
We have measured the γ-rays following neutron capture on 240Pu and 244Cm at the n_TOF facility at CERN with the Total Absorption Calorimeter (TAC) and with C6D6 organic scintillators. The TAC is made of 40 BaF2 crystals operating in coincidence and covering almost the entire solid angle, which makes it possible to obtain information on the energy spectra and the multiplicity of the measured capture γ-ray cascades. Additional information is obtained from the C6D6 detectors. We have analyzed the measured data in order to draw conclusions about the Photon Strength Functions (PSFs) of 241Pu and 245Cm below their neutron separation energies. The analysis has been performed by fitting the PSFs to the experimental results, using the differential evolution method, in order to find neutron capture cascades capable of simultaneously reproducing a wide variety of deposited-energy spectra.
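The differential evolution method named above is a stochastic, population-based minimizer. A minimal, self-contained sketch of the algorithm itself (the common rand/1/bin scheme), applied here to a toy chi-square surface rather than to actual PSF data:

```python
import random

def differential_evolution(f, bounds, pop_size=20, f_weight=0.8, cr=0.9,
                           generations=200, seed=1):
    """Minimal differential-evolution minimizer (rand/1/bin scheme):
    each trial vector adds the weighted difference of two random
    population members to a third, mixes it with the current member
    via binomial crossover, and replaces it only on improvement."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [f(p) for p in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < cr:
                    v = pop[a][d] + f_weight * (pop[b][d] - pop[c][d])
                    lo, hi = bounds[d]
                    v = min(max(v, lo), hi)  # keep within bounds
                else:
                    v = pop[i][d]
                trial.append(v)
            cost = f(trial)
            if cost < costs[i]:
                pop[i], costs[i] = trial, cost
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]

# toy "fit": recover the minimum of a quadratic chi-square surface
best, cost = differential_evolution(
    lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.5) ** 2,
    bounds=[(-10, 10), (-10, 10)])
print([round(x, 2) for x in best])
```

Its appeal for PSF fitting is that it needs no gradients of the cost function, which here involves a full cascade simulation per evaluation.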
The study of neutron-induced reactions on actinides is of considerable importance for the design of advanced nuclear systems and alternative fuel cycles. Specifically, 230Th is produced from the α-decay of 234U as a byproduct of the 232Th/233U fuel cycle, so accurate knowledge of its fission cross section is strongly required. However, only a few experimental datasets exist in the literature, with large deviations among them, covering the energy range from 0.2 to 25 MeV. In addition, the study of the 230Th(n,f) cross-section is of great interest for research on the fission process related to the structure of the fission barriers. Previous measurements have revealed a large resonance at En = 715 keV and additional fine structures, but with large discrepancies among the measured cross-section values. This contribution presents preliminary results of the 230Th(n,f) cross-section measurements at the CERN n_TOF facility. The high-purity targets of the natural but very rare isotope 230Th were produced at JRC-Geel in Belgium. The measurements were performed at both experimental areas (EAR-1 and EAR-2) of the n_TOF facility, covering a very broad energy range from thermal up to at least 100 MeV. The experimental setup was based on Micromegas detectors, with the 235U(n,f) and 238U(n,f) reaction cross-sections used as references.
New measurements of the 7Be(n,α)4He and 7Be(n,p)7Li reaction cross sections from thermal to keV neutron energies have recently been performed at CERN/n_TOF. Based on the new experimental results, astrophysical reaction rates have been derived for both reactions, including a proper evaluation of their uncertainties in the thermal energy range of interest for big bang nucleosynthesis studies. The new estimate of the 7Be destruction rate based on these results yields a decrease in the predicted cosmological 7Li abundance that is insufficient to provide a viable solution to the cosmological lithium problem.
The design and operation of innovative nuclear systems requires better knowledge of the capture and fission cross sections of the Pu isotopes. For capture on 242Pu, a reduction of the uncertainty in the fast region down to 8-12% is required. Moreover, aiming at improving the evaluation of the fast energy range in terms of average parameters, the OECD NEA High Priority Request List (HPRL) requests high-resolution capture measurements with improved accuracy below 2 keV. The current uncertainties also affect the thermal point, where previous experiments deviate from each other by 20%. A fruitful collaboration between JGU Mainz and HZ Dresden-Rossendorf within the EC CHANDA project resulted in a 242Pu sample consisting of a stack of seven fission-like targets totalling 95(4) mg of 242Pu electrodeposited on thin (11.5 μm) aluminum backings. This contribution presents the results of a set of measurements of the 242Pu(n,γ) cross section from thermal to 500 keV combining different neutron beams and techniques. The thermal point was determined at the Budapest Research Reactor by means of Neutron Activation Analysis and Prompt Gamma Analysis, and the resolved (1 eV - 4 keV) and unresolved (1 - 500 keV) resonance regions were measured using a set of four Total Energy detectors at CERN n_TOF-EAR1.
Setup for the measurement of the 235U(n,f) cross section relative to n-p scattering up to 1 GeV
(2020)
The neutron-induced fission of 235U is extensively used as a reference for neutron fluence measurements in various applications, ranging from the investigation of the biological effectiveness of high-energy neutrons to the measurement of high-energy neutron cross sections of relevance for accelerator-driven nuclear systems. Despite its widespread use, no data exist on neutron-induced fission of 235U above 200 MeV. The neutron facility n_TOF offers the possibility to improve this situation. The measurement of 235U(n,f) relative to the differential n-p scattering cross-section was carried out in September 2018 with the aim of providing accurate and precise cross section data in the energy range from 10 MeV up to 1 GeV. In such measurements, Recoil Proton Telescopes (RPTs) are used to measure the neutron flux, while the fission events are detected and counted with dedicated detectors. In this paper the measurement campaign and the experimental set-up are illustrated.
We determine the magnetic susceptibility of thermal QCD matter by means of first principles lattice simulations using staggered quarks with physical masses. A novel method is employed that only requires simulations at zero background field, thereby circumventing problems related to magnetic flux quantization. After a careful continuum limit extrapolation, diamagnetic behavior (negative susceptibility) is found at low temperatures and strong paramagnetism (positive susceptibility) at high temperatures. We revisit the decomposition of the magnetic susceptibility into spin- and orbital-angular-momentum-related contributions. The spin term — related to the normalization of the photon lightcone distribution amplitude at zero temperature — is calculated non-perturbatively and extrapolated to the continuum limit. Having access to both the full magnetic susceptibility and the spin term, we calculate the orbital angular momentum contribution for the first time. The results reveal the opposite of what might be expected based on a free fermion picture. We provide a simple parametrization of the temperature- and magnetic field-dependence of the QCD equation of state that can be used in phenomenological studies.
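In the usual lattice convention (the notation here is assumed, since the abstract does not fix it), the susceptibility is the leading response of the free energy density f to the background field, and the decomposition discussed above reads:

```latex
\chi \;=\; -\left.\frac{\partial^{2} f}{\partial (eB)^{2}}\right|_{eB=0}\,,
\qquad
\chi \;=\; \chi_{\mathrm{spin}} + \chi_{\mathrm{ang}}\,,
```

with χ > 0 signalling paramagnetism and χ < 0 diamagnetism, matching the high- and low-temperature behavior reported above.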
Central nervous hyperarousal is a key component of current pathophysiological concepts of chronic insomnia disorder. However, open questions remain regarding its exact nature and the mechanisms linking hyperarousal to sleep disturbance. Here, we aimed to study waking-state hyperarousal in insomnia from the perspective of resting-state vigilance dynamics. The Vigilance Algorithm Leipzig (VIGALL) was developed to investigate resting-state vigilance dynamics, and it has revealed, for example, enhanced vigilance stability in depressive patients. We hypothesized that patients with insomnia also show a more stable vigilance regulation. Thirty-four unmedicated patients with chronic insomnia and 25 healthy controls participated in a twenty-minute resting-state electroencephalography (EEG) measurement following a night of polysomnography. Insomnia patients showed enhanced EEG vigilance stability compared to controls. The pattern of vigilance hyperstability differed from that reported previously in depressive patients. Vigilance hyperstability was also present in insomnia patients showing only mildly reduced sleep efficiency. In this subgroup, vigilance hyperstability correlated with measures of disturbed sleep continuity and arousal. Our data indicate that insomnia disorder is characterized by hyperarousal at night as well as during daytime.
The transcription factor ∆Np63 is a master regulator of epithelial cell identity and essential for the survival of squamous cell carcinoma (SCC) of lung, head and neck, oesophagus, cervix and skin. Here, we report that the deubiquitylase USP28 stabilizes ∆Np63 and maintains elevated ∆Np63 levels in SCC by counteracting its proteasome‐mediated degradation. Impaired USP28 activity, either genetically or pharmacologically, abrogates the transcriptional identity and suppresses growth and survival of human SCC cells. CRISPR/Cas9‐engineered in vivo mouse models establish that endogenous USP28 is strictly required for both induction and maintenance of lung SCC. Our data strongly suggest that targeting ∆Np63 abundance via inhibition of USP28 is a promising strategy for the treatment of SCC tumours.
Introduction: Cancer patients tend to prefer oral instead of parenteral chemotherapy. To date, there is little evidence on the medication adherence in cancer patients. We investigated medication adherence to tyrosine kinase inhibitors in patients suffering from non-small cell lung cancer. Methods: Tyrosine kinase inhibitor adherence was measured electronically by MEMS® (medication event monitoring system) over at least six months. Adherence rates were calculated in terms of Dosing Compliance, Timing Compliance, Taking Compliance, and Drug Holidays. Patients were dichotomized as adherent when Dosing Compliance and Timing Compliance were ≥80%, Taking Compliance ranged between 90 and 110%, and <1 Drug Holiday was registered. Quality of life was assessed by two questionnaires (EORTC QLQ-C30 version 3.0, EORTC QLQ-LC13) at three time points. Adverse drug events were reported via patient diaries. Results: Out of 32 patients enrolled, data from 23 patients were evaluable. Median Dosing Compliance, Taking Compliance, and Timing Compliance adherence rates of tyrosine kinase inhibitor intake amounted to 100%, 98%, and 99%, respectively; Drug Holidays were observed in three patients. Four patients were dichotomized as non-adherent. Three of them had a twice-daily tyrosine kinase inhibitor regimen. Median quality of life scores amounted to 67 (max. 100) and remained unchanged over the study period. Fatigue and rash were the most frequently reported adverse drug events. Conclusion: Medication adherence of non-small cell lung cancer patients treated with tyrosine kinase inhibitors was extraordinarily high and is likely to support the effectiveness of tyrosine kinase inhibitor treatment and a good quality of life over a long period of time. Adherence facilitating information and education is especially relevant for patients taking tyrosine kinase inhibitors in a twice-daily regimen.
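The dichotomization rule stated in the Methods can be written down directly. A small sketch (the function name is ours; the thresholds are exactly those given above):

```python
def is_adherent(dosing_pct, timing_pct, taking_pct, drug_holidays):
    """A patient counts as adherent when Dosing Compliance and Timing
    Compliance are >= 80%, Taking Compliance lies within 90-110%,
    and fewer than one Drug Holiday was registered."""
    return (dosing_pct >= 80.0
            and timing_pct >= 80.0
            and 90.0 <= taking_pct <= 110.0
            and drug_holidays < 1)

# the reported median rates (dosing 100%, timing 99%, taking 98%),
# assuming no registered drug holiday
print(is_adherent(100, 99, 98, 0))
```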
This essay explores the problem of legitimation crises in deliberative systems. For some time now, theorists of deliberative democracy have started to embrace a “systemic approach.” But if deliberative democracy is to be understood in the context of a system of multiple moving parts, then we must confront the possibility that that system’s dynamics may admit of breakdowns, contradictions, and tendencies toward crisis. Yet such crisis potentials remain largely unexplored in deliberative theory. The present article works toward rectifying this lacuna, using the 2016 Brexit and Trump votes as examples of a particular kind of “legitimation crisis” that results in a sequence of failures in the deliberative system. Drawing on recent work of Rainer Forst, I identify this particular kind of legitimation crisis as a “justification crisis.”
Stored and cooled highly charged ions offer unprecedented capabilities for precision studies in the realm of atomic physics, nuclear structure and astrophysics [1]. After the successful investigation of the 96Ru(p,γ)97Rh reaction cross section in 2009 [2], the first measurement of the 124Xe(p,γ)125Cs reaction cross section was performed with decelerated, fully ionized 124Xe ions in 2016 at the Experimental Storage Ring (ESR) of GSI [3]. Using a Double-Sided Silicon Strip Detector introduced directly into the ultra-high-vacuum environment of the storage ring, the 125Cs proton-capture products were successfully detected. The cross section was measured at five different energies between 5.5 and 8 AMeV, on the high-energy tail of the Gamow window for hot, explosive scenarios such as supernovae and X-ray binaries. Elastic scattering on the H2 gas-jet target is the major source of background for counting the (p,γ) events. Monte Carlo simulations show that an additional slit system in the ESR, in combination with the energy information of the Si detector, will enable background-free measurements of the proton-capture products. The corresponding hardware is being prepared and will increase the sensitivity of the method tremendously.