Bacteria of the genera Photorhabdus and Xenorhabdus produce a plethora of natural products to support their similar symbiotic lifecycles. For many of these compounds, the specific bioactivities are unknown. One common challenge in natural product research when trying to prioritize research efforts is the rediscovery of identical (or highly similar) compounds from different strains. Linking genome sequence to metabolite production can help in overcoming this problem. However, sequences are typically not available for entire collections of organisms. Here we perform a comprehensive metabolic screening using HPLC-MS data associated with a 114-strain collection (58 Photorhabdus and 56 Xenorhabdus) from across Thailand and explore the metabolic variation among the strains, matched with several abiotic factors. We utilize machine learning in order to rank the importance of individual metabolites in determining all given metadata. With this approach, we were able to prioritize metabolites in the context of natural product investigations, leading to the identification of previously unknown compounds. The top three highest-ranking features were associated with Xenorhabdus and attributed to the same chemical entity, cyclo(tetrahydroxybutyrate). This work addresses the need for prioritization in high-throughput metabolomic studies and demonstrates the viability of such an approach in future research.
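The machine-learning ranking step described above can be sketched with a random-forest feature importance, one common way to prioritize metabolite features against metadata such as genus. The feature matrix, labels, and the informative feature index below are synthetic stand-ins, not data from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_strains, n_features = 114, 200              # 114 strains, synthetic MS features
X = rng.normal(size=(n_strains, n_features))  # stand-in feature intensities
genus = rng.integers(0, 2, size=n_strains)    # 0 = Photorhabdus, 1 = Xenorhabdus
X[:, 3] += 2.0 * genus                        # make feature 3 genus-informative

# rank features by how much they contribute to predicting the metadata label
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, genus)
ranking = np.argsort(clf.feature_importances_)[::-1]
print(ranking[:3])                            # feature 3 should rank first
```

Features that consistently rank highest across metadata variables would then be the first candidates for structure elucidation, analogous to the top-ranking features attributed to cyclo(tetrahydroxybutyrate) above.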
The D-meson spectral density at finite temperature is obtained within a self-consistent coupled-channel approach. For the bare meson-baryon interaction, a separable potential is taken, whose parameters are fixed by the position and width of the Lambda_c (2593) resonance. The quasiparticle peak stays close to the free D-meson mass, indicating a small change in the effective mass at finite density and temperature. However, the considerable width of the spectral density implies physics beyond the quasiparticle approach. Our results indicate that the medium modifications for D-mesons in nucleus-nucleus collisions at FAIR (GSI) will predominantly affect the width and not, as previously expected, the mass.
We obtain the D-meson spectral density at finite temperature for the conditions of density and temperature expected at FAIR. We perform a self-consistent coupled-channel calculation taking, as a bare interaction, a separable potential model. The Lambda_c (2593) resonance is generated dynamically. We observe that the D-meson spectral density develops a sizeable width while the quasiparticle peak stays close to the free position. The consequences for the D-meson production at FAIR are discussed.
We have calculated the D-meson spectral density at finite temperature within a self-consistent coupled-channel approach that generates dynamically the Lambda_c (2593) resonance. We find a small mass shift for the D-meson in this hot and dense medium while the spectral density develops a sizeable width. The reduced attraction felt by the D-meson in hot and dense matter together with the large width observed have important consequences for the D-meson production in the future CBM experiment at FAIR.
Using full 3+1-dimensional general-relativistic hydrodynamic simulations of equal- and unequal-mass neutron-star binaries with properties consistent with those inferred from the inspiral of GW170817, we perform a detailed study of the quark-formation processes that could take place after merger. We use three equations of state consistent with current pulsar observations, derived from a novel finite-temperature framework based on V-QCD, a non-perturbative gauge/gravity model for Quantum Chromodynamics. In this way, we identify three different post-merger stages at which mixed baryonic and quark matter, as well as pure quark matter, are generated. A phase-transition-triggered collapse already ≲10 ms after the merger reveals that the softest version of our equations of state is inconsistent with the expected second-long post-merger lifetime of GW170817. Our results underline the impact that multi-messenger observations of binary neutron-star mergers can have in constraining the equation of state of nuclear matter, especially in its most extreme regimes.
Post-merger gravitational-wave signal from neutron-star binaries: a new look at an old problem
(2023)
The spectral properties of the post-merger gravitational-wave signal from a binary of neutron stars encode a variety of information about the features of the system and of the equation of state describing matter around and above nuclear saturation density. Characterising the properties of such a signal is an “old” problem, which first emerged when a number of frequencies were shown to be related to the properties of the binary through “quasi-universal” relations. Here we take a new look at this old problem by computing the properties of the signal in terms of the Weyl scalar ψ4. In this way, and using a database of more than 100 simulations, we provide the first evidence for a new instantaneous frequency, f_0^ψ4, associated with the instant of quasi time-symmetry in the post-merger dynamics, which also follows a quasi-universal relation. We also derive a new quasi-universal relation for the merger frequency f_mer^h, which provides a description of the data that is four times more accurate than previous expressions while requiring fewer fitting coefficients. Finally, consistently with the findings of numerous studies before ours, and using an enlarged ensemble of binary systems, we point out that the ℓ = 2, m = 1 gravitational-wave mode could become comparable with the traditional ℓ = 2, m = 2 mode on sufficiently long timescales, with strain amplitudes in a ratio |h_21|/|h_22| ∼ 0.1–1 under generic orientations of the binary, which could be measured by present detectors for signals with large signal-to-noise ratio, or by third-generation detectors for generic signals, should no collapse occur.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can spread from symptomatic patients with COVID-19, but also from asymptomatic individuals. Therefore, robust surveillance and timely interventions are essential for the control of virus spread within the community. In this regard, the frequency of testing and the speed of reporting, not test sensitivity alone, play a crucial role. In order to reduce costs and meet the expanding demands of real-time RT-PCR (rRT-PCR) testing for SARS-CoV-2, complementary assays, such as rapid antigen tests, have been developed. Rigorous analysis under varying conditions is required to assess the clinical performance of these tests and to ensure reproducible results. We evaluated the sensitivity and specificity of a recently licensed rapid antigen test using 137 clinical samples in two institutions. Test sensitivity was between 88.2% and 89.6% when applied to samples with viral loads typically seen in infectious patients. Of 32 rRT-PCR-positive samples, 19 demonstrated infectivity in cell culture, and 84% of these samples were reactive with the antigen test. Seven full-genome-sequenced SARS-CoV-2 isolates and SARS-CoV-1 were detected with this antigen test, with no cross-reactivity against other common respiratory viruses. Numerous antigen tests are available for SARS-CoV-2 testing, and their performance in detecting infectious individuals may vary. Head-to-head comparison, along with cell culture testing for infectivity, may prove useful to identify better-performing antigen tests. The antigen test analyzed in this study is easy to use, inexpensive, and scalable. It can be helpful in monitoring infection trends and thus has the potential to reduce transmission.
Several recent studies investigated the rhythmic nature of cognitive processes that lead to perception and behavioral report. These studies used different methods, and there has not yet been agreement on a general standard. Here, we present a way to test and quantitatively compare these methods. We simulated behavioral data from a typical experiment and analyzed these data with several methods. We applied the main methods found in the literature, namely sine-wave fitting, the Discrete Fourier Transform (DFT) and the Least Square Spectrum (LSS). DFT and LSS can be applied both to the averaged accuracy time course and to single trials. LSS is mathematically equivalent to DFT in the case of regular sampling, but not in the case of irregular sampling, which is the more common one. LSS additionally offers the possibility of taking into account a weighting factor that affects the strength of the rhythm, such as arousal. Statistical inferences were made either on the investigated sample (fixed-effect) or on the population (random-effect) of simulated participants. Multiple comparisons across frequencies were corrected using False Discovery Rate, Bonferroni, or the Max-Based approach. To perform a quantitative comparison, we calculated the Sensitivity, Specificity and D-prime of the investigated analysis methods and statistical approaches. Within the investigated parameter range, single-trial methods had higher Sensitivity and D-prime than the methods based on the averaged accuracy time course. This effect was further increased for a simulated rhythm of higher frequency. If an additional (observable) factor influenced detection performance, adding this factor as a weight in the LSS further improved Sensitivity and D-prime. For multiple comparison correction, the Max-Based approach provided the highest Specificity and D-prime, closely followed by the Bonferroni approach.
Given a fixed total number of trials, the random-effect approach had higher D-prime when trials were distributed over a larger number of participants, even though this left fewer trials per participant. Finally, we present the idea of using a dampened sinusoidal oscillator instead of a simple sinusoidal function, to further improve the fit to behavioral rhythmicity observed after a reset event.
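The DFT applied to an averaged accuracy time course, one of the methods compared above, can be sketched on simulated behavioral data. The sampling rate, rhythm frequency, modulation depth, and trial count below are illustrative choices, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_rhythm = 60.0, 8.0                      # probe rate and rhythm (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)                 # 1 s of probe times
p_hit = 0.5 + 0.2 * np.sin(2 * np.pi * f_rhythm * t)  # rhythmic detection probability
trials = rng.random((500, t.size)) < p_hit    # 500 simulated yes/no trials
acc = trials.mean(axis=0)                     # averaged accuracy time course

spec = np.abs(np.fft.rfft(acc - acc.mean()))  # DFT amplitude spectrum (DC removed)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(freqs[np.argmax(spec)])                 # recovers the 8 Hz rhythm
```

A single-trial variant would fit the spectrum to the binary `trials` matrix directly instead of averaging first, which is what gives the single-trial methods their sensitivity advantage in the comparison above.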
Several studies have probed perceptual performance at different times after a self-paced motor action and found frequency-specific modulations of perceptual performance phase-locked to the action. Such action-related modulation has been reported for various frequencies and modulation strengths. In an attempt to establish a basic effect at the population level, we had a relatively large number of participants (n=50) perform a self-paced button press followed by a detection task at threshold, and we applied both fixed- and random-effects tests. The combined data of all trials and participants surprisingly did not show any significant action-related modulation. However, based on previous studies, we explored the possibility that such modulation depends on the participant’s internal state. Indeed, when we split trials based on performance in neighboring trials, then trials in periods of low performance showed an action-related modulation at ≈17 Hz. When we split trials based on the performance in the preceding trial, we found that trials following a “miss” showed an action-related modulation at ≈17 Hz. Finally, when we split participants based on their false-alarm rate, we found that participants with no false alarms showed an action-related modulation at ≈17 Hz. All these effects were significant in random-effects tests, supporting an inference on the population. Together, these findings indicate that action-related modulations are not always detectable. However, the results suggest that specific internal states such as lower attentional engagement and/or higher decision criterion are characterized by a modulation in the beta-frequency range.
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
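A minimal sketch of the kind of competition described above: two synapses exchange receptors with a shared dendritic pool, and because each synapse's binding flux scales with its current size, draining the pool rescales all synapses by the same factor, preserving their ratio (multiplicative scaling). All rates and initial values are illustrative, not the paper's model parameters.

```python
import numpy as np

k_on, k_off = 1.0, 1.0          # attachment / detachment rates (illustrative)
w = np.array([1.0, 2.0])        # receptors bound at two synapses (1:2 ratio)
pool = 2.0                      # free receptors in the local dendritic pool
dt = 0.001
for _ in range(20000):
    # binding flux is proportional to both pool size and current synapse size
    flux = k_on * pool * w - k_off * w
    w = w + dt * flux
    pool -= dt * flux.sum()     # receptors leaving the pool bind to synapses

# pool relaxes to k_off/k_on = 1; the freed receptors scale both synapses
# by the same factor, so the 1:2 ratio is preserved (multiplicative scaling)
print(w, pool)
```

The same structure also yields the transient heterosynaptic effect mentioned above: potentiating one synapse transiently drains the pool, and the smaller the pool, the larger the relative dip at its neighbors.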
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
The present article proposes a re-reading of what "inclusion" into the sphere of the historical actually means in modern European historical discourse. It argues that this re-reading permits challenging a powerful but problematic norm of ontological homogeneity as something to be achieved in and by historical discourse. At least some of the more conceptually profound challenges that accounts of "deep history" - of very distant pasts - pose to historical discourse have to do with pursuits of this norm. Historical theory has the potential to respond to some of these challenges and to turn them back on the practice of accounting for deep time in historical writing. The argument proceeds, in a first step, by analyzing the ties between modern European mortuary cultures and historical writing. In a second step, the history of humanitarian moralities is brought to bear on the analysis, in order to make visible, thirdly, the fractured presences of deep time in modern-era and contemporary historical writing. The fractures in question emerge, the article argues, from the ontological heterogeneity of historical knowledge. In the end, a position beyond ontological homogeneity is adumbrated.
Cryo-electron tomography (cryo-ET) is a powerful method to elucidate subcellular architecture and to structurally analyse biomolecules in situ by subtomogram averaging (STA). Specimen thickness is a key factor affecting cryo-ET data quality. Cells that are too thick for transmission imaging can be thinned by cryo-focused-ion-beam (cryo-FIB) milling. However, optimal specimen thickness for cryo-ET on lamellae has not been systematically investigated. Furthermore, the ions used to ablate material can cause damage in the lamellae, thereby reducing STA resolution. Here, we systematically benchmark the resolution depending on lamella thickness and the depth of the particles within the sample. Up to ca. 180 nm, lamella thickness does not negatively impact resolution. This shows that there is no need to generate very thin lamellae and thickness can be chosen such that it captures major cellular features. Furthermore, we show that gallium-ion-induced damage extends to depths of up to 30 nm from either lamella surface.
Generating predictions about environmental regularities, relying on these predictions, and updating them when they are violated by incoming sensory evidence are considered crucial functions of our cognitive system for remaining adaptive. The violation of a prediction can result in a prediction error (PE), which affects subsequent memory processing. In our preregistered studies, we examined the effects of different levels of PE on episodic memory. Participants were asked to generate predictions about the associations between sequentially presented cue-target pairs, which were later violated with individual items at three PE levels, namely low, medium, and high PE. Afterwards, participants were asked to provide old/new judgments on the items with confidence ratings, and to retrieve the paired cues. Our results indicated better recognition memory for the low PE level than for the medium and high PE levels, suggesting a memory congruency effect. On the other hand, there was no evidence of a memory benefit for the high PE level. Together, these novel and coherent findings strongly suggest that high PE does not guarantee better memory.
The spike (S) protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the major focus for vaccine development. We combine cryo-electron tomography, subtomogram averaging and molecular dynamics simulations to structurally analyze S in situ. Compared to recombinant S, the viral S is more heavily glycosylated and occurs predominantly in a closed pre-fusion conformation. We show that the stalk domain of S contains three hinges that give the globular domain unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and the development of safe vaccines. The large-scale tomography data set of SARS-CoV-2 used for this study is sufficient to resolve structural features to below 5 Å and is publicly available at EMPIAR-10453.
mRNA localization to subcellular compartments has been reported across all kingdoms of life and it is generally believed to promote asymmetric protein synthesis and localization. In striking contrast to previous observations, we show that in S. cerevisiae the B-type cyclin CLB2 mRNA is localized and translated in the yeast bud, while the Clb2 protein, a key regulator of mitosis progression, is concentrated in the mother nucleus. Using single-molecule RNA imaging in fixed (smFISH) and living cells (MS2 system), we show that the CLB2 mRNA is transported to the yeast bud by the She2-She3 complex, via an mRNA ZIP-code situated in the coding sequence. In CLB2 mRNA localization mutants, Clb2 protein synthesis in the bud is decreased resulting in changes in cell cycle distribution and genetic instability. Altogether, we propose that CLB2 mRNA localization acts as a sensor for bud development to couple cell growth and cell cycle progression, revealing a novel function for mRNA localization.
While early-career researchers are trained strategically and on a sound scholarly basis, complete with various examinations (Bachelor's, Master's, doctorate, possibly also habilitation), nothing even remotely comparable exists for teaching. The usual "qualification" of novice teachers mostly takes place "on the job" (cf. Conradi, 1983), i.e., through their own trial and error after observing other teachers during their own studies. Under good conditions, the teacher has attended continuing-education courses on good teaching beforehand or alongside. A strategic embedding of these personnel-development measures, as is intended on the research side, does not exist. This contribution presents possible formats and elaborates on one of them as an example.
Knowledge is limited as to how prior SARS-CoV-2 infection influences cellular and humoral immunity after booster-vaccination with bivalent BA.4/5-adapted mRNA-vaccines, and whether vaccine-induced immunity correlates with subsequent infection. In this observational study, individuals with prior infection (n=64) showed higher vaccine-induced anti-spike IgG antibodies and neutralizing titers, but the relative increase was significantly higher in non-infected individuals (n=63). In general, both groups showed higher neutralizing activity towards the parental strain than towards Omicron subvariants BA.1, BA.2 and BA.5. In contrast, CD4 or CD8 T-cell levels towards spike from the parental strain and the Omicron subvariants, and cytokine expression profiles were similar irrespective of prior infection. Breakthrough infections occurred more frequently among previously non-infected individuals, who had significantly lower vaccine-induced spike-specific neutralizing activity and CD4 T-cell levels. Thus, the magnitude of vaccine-induced neutralizing activity and specific CD4 T-cells after bivalent vaccination may serve as a correlate for protection in previously non-infected individuals.
Motivated by the wealth of proposals and realizations of nontrivial topological phases in EuCd2As2, such as a Weyl semimetallic state and the recently discussed semimetallic versus semiconductor behavior in this system, we analyze in this work the role of the delicate interplay of Eu magnetism, strain and pressure on the realization of such phases. For that we invoke a combination of a group theoretical analysis with ab initio density functional theory calculations and uncover a rich phase diagram with various non-trivial topological phases beyond a Weyl semimetallic state, such as axion and topological crystalline insulating phases, and discuss their realization.
Scanning tunneling microscopy (STM) is perhaps the most promising way to directly detect the superconducting gap size and structure in the canonical unconventional superconductor Sr2RuO4. However, in many cases, researchers have reported being unable to detect the gap at all in simple STM conductance measurements. Recently, an investigation of this issue on various local topographic structures on a Sr-terminated surface found that superconducting spectra appeared only in the region of small nanoscale canyons, corresponding to the removal of one RuO surface layer. Here, we analyze the electronic structure of various possible surface structures using first-principles methods, and argue that bulk conditions favorable for superconductivity can be achieved when removal of the RuO layer locally suppresses the RuO6 octahedral rotation. We further propose alternative terminations to the most frequently reported Sr termination at which surface superconductivity should be observable.
Lattice strains of appropriate symmetry have served as an excellent tool to explore the interaction of superconductivity in the iron-based superconductors with nematic and stripe spin-density wave (SSDW) order, which are both closely tied to an orthorhombic distortion. In this work, we contribute to a broader understanding of the coupling of strain to superconductivity and competing normal-state orders by studying CaKFe4As4 under large, in-plane strains of B1g and B2g symmetry. In contrast to the majority of iron-based superconductors, pure CaKFe4As4 exhibits superconductivity with relatively high transition temperature of Tc∼35 K in proximity of a non-collinear, tetragonal, hedgehog spin-vortex crystal (SVC) order. Through experiments, we demonstrate an anisotropic in-plane strain response of Tc, which is reminiscent of the behavior of other pnictides with nematicity. However, our calculations suggest that in CaKFe4As4, this anisotropic response correlates with the one of the SVC fluctuations, highlighting the close interrelation of magnetism and high-Tc superconductivity. By suggesting moderate B2g strains as an effective parameter to change the stability of SVC and SSDW, we outline a pathway to a unified phase diagram of iron-based superconductivity.
Aim: Replicate the analysis conducted by Prof. Dr. Alexander W. Schmidt-Catran (Goethe University Frankfurt), Prof. Dr. Malcolm Fairbrother (Umea University), and Prof. Dr. Hans-Jürgen Andreß (University of Cologne) that was published in a special issue on Cross-National Comparative Research in the German academic journal Kölner Zeitschrift für Soziologie und Sozialpsychologie in 2019. Result: Almost all calculations, tables and graphs from Schmidt-Catran et al. (2019) could be replicated sufficiently well in R.
MicroRNAs (miRNAs) are critical post-transcriptional regulators in many biological processes. They act by guiding RNA-induced silencing complexes to miRNA response elements (MREs) in target mRNAs, inducing translational inhibition and/or mRNA degradation. Functional MREs are expected to predominantly occur in the 3' untranslated region and involve perfect base-pairing of the miRNA seed. Here, we generate a high-resolution map of miR-181a/b-1 (miR-181) MREs to define the targeting rules of miR-181 in developing murine T-cells. By combining a multi-omics approach with computational high-resolution analyses, we uncover novel miR-181 targets and demonstrate that miR-181 acts predominantly through RNA destabilization. Importantly, we discover an alternative seed match and identify a distinct set of targets with repeat elements in the coding sequence which are targeted by miR-181 and mediate translational inhibition. In conclusion, deep profiling of MREs in primary cells is critical to expand physiologically relevant targetomes and establish context-dependent miRNA targeting rules.
Key Points:
* Deep profiling identifies novel targets of miR-181 associated with global gene regulation.
* miR-181 MREs in repeat elements in the coding sequence act through translational inhibition.
* High-resolution analysis reveals an alternative seed match in functional MREs.
In this paper, we investigate a wide range of features for their usefulness in the resolution of nominal coreference, both as hard constraints (i.e., completely removing elements from the list of possible candidates) and as soft constraints (where an accumulation of soft-constraint violations makes it less likely that a candidate is chosen as the antecedent). We present a state-of-the-art system based on such constraints, with weights estimated with a maximum entropy model, using lexical information to resolve cases of coreferent bridging.
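The hard/soft constraint scheme described above can be sketched as a candidate scorer: hard constraints filter candidates outright, while weighted soft-constraint violations lower a candidate's score. The constraint names, attributes, and weights below are invented for illustration and are not the system's actual feature set.

```python
def pick_antecedent(mention, candidates, weights):
    # hard constraints: remove incompatible candidates completely
    pool = [c for c in candidates
            if c["number"] == mention["number"] and c["gender"] == mention["gender"]]

    # soft constraints: each violation subtracts its weight from the score
    def score(c):
        s = 0.0
        if c["distance"] > 2:        # distant antecedents are dispreferred
            s -= weights["distance"]
        if not c["is_subject"]:      # non-subject antecedents are dispreferred
            s -= weights["subject"]
        return s

    return max(pool, key=score) if pool else None

cands = [
    {"id": "A", "number": "sg", "gender": "f", "distance": 1, "is_subject": True},
    {"id": "B", "number": "sg", "gender": "f", "distance": 3, "is_subject": False},
    {"id": "C", "number": "pl", "gender": "f", "distance": 1, "is_subject": True},
]
m = {"number": "sg", "gender": "f"}
best = pick_antecedent(m, cands, {"distance": 1.0, "subject": 0.5})
print(best["id"])  # "A": C fails a hard constraint, B accumulates violations
```

In the actual system, such weights would be estimated from annotated data with a maximum entropy model rather than hand-set as here.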
When a statistical parser is trained on one treebank, one usually tests it on another portion of the same treebank, partly because a comparable annotation format is needed for testing. But the user of a parser may not be interested in parsing sentences from the same newspaper over and over, or may even want syntactic annotations for a slightly different text type. Gildea (2001), for instance, found that a parser trained on the WSJ portion of the Penn Treebank performs less well on the Brown corpus (the subset that is available in the PTB bracketing format) than a parser that has been trained only on the Brown corpus, although the latter has only half as many sentences as the former. Additionally, a parser trained on both the WSJ and Brown corpora performs less well on the Brown corpus than on the WSJ one. This leads us to the following questions, which we would like to address in this paper: - Is there a difference in the usefulness of techniques for improving parser performance between the same-corpus and the different-corpus case? - Are different types of parsers (rule-based and statistical) equally sensitive to corpus variation? To address these questions, we compared the quality of the parses of a hand-crafted constraint-based parser and a statistical PCFG-based parser trained on a treebank of German newspaper text.
Using a qualitative analysis of disagreements from a referentially annotated newspaper corpus, we show that, in coreference annotation, vague referents are prone to greater disagreement. We show how potentially problematic cases can be dealt with in a way that is practical even for larger-scale annotation, considering a real-world example from newspaper text.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail, and we then propose a generalisation of Poesio, Reyle and Stevenson’s Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement and argue that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgariff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
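The decorrelation step can be sketched with a truncated SVD of a word-by-feature co-occurrence matrix. This is only an illustration of the technique: the tiny matrix below is invented, and the real model operates on Web-scale counts.

```python
import numpy as np

def decorrelate(cooc, k):
    # Truncated SVD projects the correlated, topic-skewed feature space
    # onto k latent dimensions; row i becomes a k-dimensional word vector.
    u, s, _ = np.linalg.svd(cooc, full_matrices=False)
    return u[:, :k] * s[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

On a toy matrix whose first two feature columns are redundant, two distributionally similar words stay close in the latent space while an unrelated word stays distant.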
We adopt Markert and Nissim (2005)’s approach of using the World Wide Web to resolve cases of coreferent bridging for German and discuss the strengths and weaknesses of this approach. As the general approach of using surface patterns to get information on ontological relations between lexical items has only been tried on English, it is also interesting to see whether the approach works for German as well as it does for English and what differences between these languages need to be accounted for. We also present a novel approach for combining several patterns that yields an ensemble that outperforms the best-performing single patterns in terms of both precision and recall.
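A pattern ensemble of the kind described can be sketched as a weighted vote over surface-pattern matches. The pattern names, weights, and threshold here are invented for illustration; the paper's actual combination scheme may differ.

```python
def ensemble_accepts(pattern_hits, weights, threshold=0.5):
    # pattern_hits: {pattern: True if the surface pattern matched the
    # candidate word pair on the Web}. Weights (hypothetical) reflect
    # each pattern's reliability; a normalised weighted vote decides.
    score = sum(weights[p] for p, hit in pattern_hits.items() if hit)
    return score / sum(weights.values()) >= threshold
```

Combining patterns this way lets a reliable pattern plus a weak confirming pattern accept a pair that neither would license alone at high precision.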
In the past, a divide could be seen between ’deep’ parsers on the one hand, which construct a semantic representation from their input but usually have significant coverage problems, and more robust parsers on the other hand, which are usually based on a (statistical) model derived from a treebank and have larger coverage, but leave the problem of semantic interpretation to the user. More recently, approaches have emerged that combine the robustness of data-driven (statistical) models with more detailed linguistic interpretation, such that the output can be used for deeper semantic analysis. Cahill et al. (2002) use a PCFG-based parsing model in combination with a set of principles and heuristics to derive functional (f-)structures of Lexical-Functional Grammar (LFG). They show that the derived functional structures have a better quality than those generated by a parser based on a state-of-the-art hand-crafted LFG grammar. Advocates of Dependency Grammar usually point out that dependencies already are a semantically meaningful representation (cf. Menzel, 2003). However, parsers based on dependency grammar normally create underspecified representations with respect to certain phenomena such as coordination, apposition and control structures. In these areas they are too "shallow" to be directly used for semantic interpretation. In this paper, we adopt an approach similar to that of Cahill et al. (2002), using a dependency-based analysis to derive functional structure, and demonstrate the feasibility of this approach using German data. A major focus of our discussion is on the treatment of coordination and other potentially underspecified structures in the dependency input. F-structure is one of the two core levels of syntactic representation in LFG (Bresnan, 2001).
Independently of surface order, it encodes abstract syntactic functions that constitute predicate argument structure and other dependency relations such as subject, predicate, adjunct, but also further semantic information such as the semantic type of an adjunct (e.g. directional). Normally f-structure is captured as a recursive attribute value matrix, which is isomorphic to a directed graph representation. Figure 5 depicts an example target f-structure. As mentioned earlier, these deeper-level dependency relations can be used to construct logical forms as in the approaches of van Genabith and Crouch (1996), who construct underspecified discourse representations (UDRSs), and Spreyer and Frank (2005), who have robust minimal recursion semantics (RMRS) as their target representation. We therefore think that f-structures are a suitable target representation for automatic syntactic analysis in a larger pipeline of mapping text to interpretation. In this paper, we report on the conversion from dependency structures to f-structure. Firstly, we evaluate the f-structure conversion in isolation, starting from hand-corrected dependencies based on the TüBa-D/Z treebank and Versley (2005)'s conversion. Secondly, we start from tokenized text to evaluate the combined process of automatic parsing (using Foth and Menzel (2006)'s parser) and f-structure conversion. As a test set, we randomly selected 100 sentences from TüBa-D/Z which we annotated using a scheme very close to that of the TiGer Dependency Bank (Forst et al., 2004). In the next section, we sketch dependency analysis, the underlying theory of our input representations, and introduce four different representations of coordination. We also describe Weighted Constraint Dependency Grammar (WCDG), the dependency parsing formalism that we use in our experiments. Section 3 characterises the conversion of dependencies to f-structures.
Our evaluation is presented in section 4, and finally, section 5 summarises our results and gives an overview of problems remaining to be solved.
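The core of a dependency-to-f-structure conversion can be sketched as embedding each dependent's attribute-value matrix under its head, keyed by the grammatical function. This is a toy sketch: the label inventory (SUBJ, OBJ) and the flat token encoding are assumptions, not the TiGer/TüBa-D/Z schemes, and it ignores coordination, the very case the paper focuses on.

```python
def deps_to_fstructure(tokens, deps):
    # tokens: {id: lemma}; deps: list of (head_id, grammatical_function,
    # dependent_id) triples. Each token starts as a minimal AVM with a
    # PRED attribute; dependencies nest AVMs under their heads.
    fs = {i: {"PRED": lemma} for i, lemma in tokens.items()}
    roots = set(tokens)
    for head, func, dep in deps:
        fs[head][func] = fs[dep]   # embed the dependent's AVM in the head's
        roots.discard(dep)
    (root,) = roots                # exactly one unattached token remains
    return fs[root]
```

Because the resulting nested dictionary is isomorphic to a directed graph, it mirrors the recursive attribute-value-matrix view of f-structure described above.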
Dual coding theories of knowledge suggest that meaning is represented in the brain by a double code, which comprises language-derived representations in the Anterior Temporal Lobe and sensory-derived representations in perceptual and motor regions. This approach predicts that concrete semantic features should activate both codes, whereas abstract features rely exclusively on the linguistic code. Using magnetoencephalography (MEG), we adopted a temporally resolved multiple regression approach to identify the contribution of abstract and concrete semantic predictors to the underlying brain signal. Results evidenced early involvement of anterior-temporal and inferior-frontal brain areas in both abstract and concrete semantic information encoding. At later stages, occipito-temporal regions showed greater responses to concrete compared to abstract features. The present findings shed new light on the temporal dynamics of abstract and concrete semantic representations in the brain and suggest that the meaning of concrete words is processed first with a transmodal/linguistic code, housed in frontotemporal brain systems, and only later with an imagistic/sensorimotor code in perceptual and motor regions.
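Temporally resolved multiple regression amounts to fitting, at every time point, an ordinary least-squares model of the signal across trials on the semantic predictors. A minimal sketch with synthetic data (the data, predictor count, and time resolution are invented for illustration):

```python
import numpy as np

def timepoint_betas(signal, predictors):
    # signal: (n_trials, n_times) measured amplitudes;
    # predictors: (n_trials, n_predictors) semantic variables per trial.
    # An intercept column is added, then one OLS fit per time point.
    X = np.column_stack([np.ones(len(predictors)), predictors])
    betas = [np.linalg.lstsq(X, signal[:, t], rcond=None)[0]
             for t in range(signal.shape[1])]
    return np.array(betas)   # shape: (n_times, 1 + n_predictors)
```

The resulting beta time courses show when each predictor begins to explain variance in the signal, which is what allows early linguistic-code effects to be distinguished from later sensorimotor ones.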
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to image large numbers of dendritic spines simultaneously in 3D volumes of neural tissue. Exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines, yet no currently available automated method comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized on the spine detection task. This transfer learning approach is data efficient while achieving a high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
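The transfer-learning idea — a frozen pretrained feature extractor plus a small head trained on the target task — can be sketched without any deep-learning framework. The "backbone" below is faked with a fixed random projection (an assumption purely for illustration; the authors use a pretrained convolutional network), but the division of labour is the same: only the small head is optimised on the labelled spine data.

```python
import numpy as np

def frozen_features(images):
    # Stand-in for a frozen, pretrained backbone: a fixed (seeded)
    # random projection of the flattened image followed by a
    # nonlinearity. In the real pipeline these would be CNN activations.
    flat = images.reshape(len(images), -1)
    proj = 0.1 * np.random.default_rng(42).standard_normal(
        (flat.shape[1], flat.shape[1]))
    return np.tanh(flat @ proj)

def train_head(feats, labels, lr=1.0, steps=500):
    # Only this logistic-regression head is fitted on the target task,
    # which is what makes the approach data efficient.
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w
```

Because the expensive feature extractor is reused unchanged, a few hundred labelled examples suffice to fit the head.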
The thrombopoietin receptor agonist eltrombopag was successfully used against human cytomegalovirus (HCMV)-associated thrombocytopenia refractory to immunomodulatory and antiviral drugs. This benefit was ascribed to effects of eltrombopag on megakaryocytes. Here, we tested whether eltrombopag may also exert direct antiviral effects. Therapeutic eltrombopag concentrations inhibited HCMV replication in human fibroblasts and adult mesenchymal stem cells infected with six different virus strains and drug-resistant clinical isolates. Eltrombopag also synergistically increased the anti-HCMV activity of the mainstay drug ganciclovir. Time-of-addition experiments suggested that eltrombopag interferes with HCMV replication after virus entry. Eltrombopag was effective in thrombopoietin receptor-negative cells, and addition of Fe3+ prevented the anti-HCMV effects, indicating that it inhibits HCMV replication via iron chelation. This may be of particular interest for the treatment of cytopenias after haematopoietic stem cell transplantation, as HCMV reactivation is a major reason for transplantation failure. Since therapeutic eltrombopag concentrations are effective against drug-resistant viruses and synergistically increase the effects of ganciclovir, eltrombopag is also a drug repurposing candidate for the treatment of therapy-refractory HCMV disease.
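One common way to quantify the kind of synergy claimed above is the Bliss independence model; this is an assumption for illustration only, as the abstract does not state which synergy framework (e.g. Bliss, Loewe additivity, Chou-Talalay) the study used.

```python
def bliss_excess(inhib_a, inhib_b, inhib_combo):
    # Fractional inhibitions in [0, 1]. Under Bliss independence, two
    # independently acting drugs are expected to give
    # fa + fb - fa*fb combined inhibition; a positive excess of the
    # observed combination over this expectation suggests synergy.
    expected = inhib_a + inhib_b - inhib_a * inhib_b
    return inhib_combo - expected
```

For example, if each drug alone inhibits 50% and 40% of replication, independence predicts 70%; an observed 90% combined inhibition yields a positive excess consistent with synergy.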
Weak function word shift (2004)
The fact that object shift only affects weak pronouns in mainland Scandinavian is seen as an instance of a more general observation that can be made in all Germanic languages: weak function words tend to avoid the edges of larger prosodic domains. This generalisation has been formulated within Optimality Theory in terms of alignment constraints on prosodic structure by Selkirk (1996), in explaining the distribution of prosodically strong and weak forms of English function words, especially modal verbs, prepositions and pronouns. But a purely phonological account fails to integrate the syntactic licensing conditions for object shift in an appropriate way. The standard semantico-syntactic accounts of object shift, on the other hand, fail to explain why it is only weak pronouns that undergo object shift. This paper develops an Optimality-theoretic model of the syntax-phonology interface based on the interaction of syntactic and prosodic factors. The account can successfully be applied to further related phenomena in English and German.
This paper argues for a particular architecture of OT syntax. This architecture has three core features: i) it is bidirectional: the usual production-oriented optimisation (called ‘first optimisation’ here) is accompanied by a second step that checks the recoverability of an underlying form; ii) this underlying form already contains a full-fledged syntactic specification; iii) the procedure checking for recoverability, in particular, makes crucial use of semantic and pragmatic factors. The first section motivates the basic architecture. The second section shows with two examples how contextual factors are integrated. The third section examines the implications for learning theory, and the fourth section concludes with a broader discussion of the advantages and disadvantages of the proposed model.
This paper is part of a research project on OT Syntax and the typology of the free relative (FR) construction. It concentrates on the details of an OT analysis and some of its consequences for OT syntax. I will not present a general discussion of the phenomenon and the many controversial issues it is famous for in generative syntax.
The aim of this paper is the exploration of an optimality theoretic architecture for syntax that is guided by the concept of "correspondence": syntax is understood as the mechanism of "translating" underlying representations into a surface form. In minimalism, this surface form is called "Phonological Form" (PF). Both semantic and abstract syntactic information are reflected by the surface form. The empirical domain in which this architecture is tested is that of minimal link effects, especially in the case of "wh"-movement. The OT constraints require the surface form to reflect the underlying semantic and syntactic representations as maximally as possible. The means by which underlying relations and properties are encoded are precedence, adjacency, surface morphology and prosodic structure. Information that is not encoded in one of these ways remains unexpressed, and gets lost unless it is recoverable via the context. Different kinds of information are often expressed by the same means. The resulting conflicts are resolved by the relative ranking of the relevant correspondence constraints.
The argument that I have tried to elaborate in this paper is that the conceptual problem behind the traditional competence/performance distinction does not go away, even if we abandon its original Chomskyan formulation. It returns as the question of the relation between the model of the grammar and the results of empirical investigations: the question of empirical verification. The theoretical concept of markedness is argued to be an ideal correlate of gradience. Optimality Theory, being based on markedness, is a promising framework for the task of bridging the gap between model and empirical world. However, this task requires not only a model of grammar, but also a theory of the methods chosen in empirical investigations and of how their results are interpreted, and a theory of how to derive predictions for these particular empirical investigations from the model. Stochastic Optimality Theory is one possible formulation of a proposal that derives empirical predictions from an OT model. However, I hope to have shown that it is not enough to take frequency distributions and relative acceptabilities at face value and simply construct some Stochastic OT model that fits the facts. These facts first of all need to be interpreted, and those factors that the grammar has to account for must be sorted out from those about which grammar should have nothing to say. This task, to my mind, is more complicated than the picture that a simplistic application of (not only) Stochastic OT might draw.
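The mechanism by which Stochastic OT derives frequency distributions can be sketched directly: each constraint's ranking value is perturbed by Gaussian noise at every evaluation, so closely ranked constraints sometimes swap, and different candidates win on different evaluations. The constraint names, ranking values, and violation profiles below are invented for illustration.

```python
import random

def evaluate(candidates, ranking, noise=2.0, rng=random):
    # Perturb each ranking value with Gaussian noise, then evaluate in
    # standard OT fashion: filter candidates by fewest violations of
    # each constraint, from highest-ranked down.
    order = sorted(ranking, key=lambda c: ranking[c] + rng.gauss(0, noise),
                   reverse=True)
    pool = dict(candidates)
    for c in order:
        fewest = min(v[c] for v in pool.values())
        pool = {cand: v for cand, v in pool.items() if v[c] == fewest}
        if len(pool) == 1:
            break
    return next(iter(pool))
```

With two constraints only 1.5 ranking units apart, repeated evaluation yields a skewed but variable output distribution, which is exactly the kind of "fact" the paper argues must be interpreted rather than fitted blindly.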
The regeneration of hadronic resonances is discussed for heavy ion collisions at SPS and SIS-300 energies. The time evolution of Delta, rho and phi resonances is investigated. Special emphasis is put on resonance regeneration after chemical freeze-out. The emission time spectra of experimentally detectable resonances are explored.
We predict transverse and longitudinal momentum spectra and yields of rho0 and omega mesons reconstructed from hadron correlations in C+C reactions at 2 AGeV. The rapidity and pT distributions for reconstructable rho0 mesons differ strongly from the primary distributions, while the omega distributions are only weakly modified. We discuss the temporal and spatial distributions of the particles emitted in the hadron channel. Finally, we report on the mass shift of the rho0 due to its coupling to the N*(1520), which is observable in both the di-lepton and pi pi channels. Our calculations can be tested with the HADES experiment at GSI, Darmstadt.
The establishment and maintenance of protected areas (PAs) is viewed as a key action in delivering post-2020 biodiversity targets. PAs often need to meet a multitude of objectives, ranging from biodiversity protection to ecosystem service provision and climate change mitigation. As available land and conservation funding are limited, optimizing resources by selecting the most beneficial PAs is vital. Here we present a decision support tool that enables a flexible approach to PA selection on a global scale, allowing different conservation objectives to be weighted and prioritized according to user-specified preferences. We apply the tool across 1347 terrestrial PAs and highlight frequent trade-offs among different objectives, e.g., between biodiversity protection and ecosystem integrity. These results indicate that decision makers must usually decide among conflicting objectives. To assist with this, our decision support tool provides an explicitly value-based approach that can help resolve such conflicts by considering divergent societal and political demands and values.
The establishment and maintenance of protected areas (PAs) is viewed as a key action in delivering post-2020 biodiversity targets. PAs often need to meet multiple objectives, ranging from biodiversity protection to ecosystem service provision and climate change mitigation, but available land and conservation funding is limited. Therefore, optimizing resources by selecting the most beneficial PAs is vital. Here, we advocate for a flexible and transparent approach to selecting protected areas based on multiple objectives, and illustrate this with a decision support tool on a global scale. The tool allows weighting and prioritization of different conservation objectives according to user-specified preferences, as well as real-time comparison of the selected areas that result from such different priorities. We apply the tool across 1347 terrestrial PAs and highlight frequent trade-offs among different objectives, e.g., between species protection and ecosystem integrity. Outputs indicate that decision makers frequently face trade-offs among conflicting objectives. Nevertheless, we show that transparent decision-support tools can reveal synergies and trade-offs associated with PA selection, thereby helping to illuminate and resolve land-use conflicts embedded in divergent societal and political demands and values.
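The weighting-and-prioritisation idea behind such a tool can be sketched as a weighted sum over normalised objective scores. This is a schematic illustration: the area names, objectives, scores, and the simple additive aggregation are assumptions, and the published tool may aggregate objectives differently.

```python
def rank_areas(areas, weights):
    # areas: {name: {objective: normalised score in [0, 1]}}
    # weights: user-specified priorities over objectives. Areas are
    # ranked by their weighted total, highest first.
    def total(name):
        return sum(weights[o] * areas[name][o] for o in weights)
    return sorted(areas, key=total, reverse=True)
```

Re-ranking the same areas under different weight profiles makes the trade-offs explicit: an area that tops the list under biodiversity-focused weights can fall behind under, say, carbon-focused weights.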
Ongoing climate change is a major threat to biodiversity and impacts on species distributions and abundances are already evident. Heterogenous responses of species due to varying abiotic tolerances and dispersal abilities have the potential to further amplify or ameliorate these impacts through changes in species assemblages. Here we investigate the impacts of climate change on terrestrial bird distributions and, subsequently, on species richness as well as on different aspects of phylogenetic diversity of species assemblages across the globe. We go beyond previous work by disentangling the potential impacts on assemblage phylogenetic diversity of species gains vs. losses under climate change and compare the projected impacts to randomized assemblage changes.
We show that climate change might not only affect species numbers and composition of global species assemblages but could also have profound impacts on assemblage phylogenetic diversity, which, across extensive areas, differ significantly from random changes. Both the projected impacts on phylogenetic diversity and on phylogenetic structure vary greatly across the globe. Projected increases in the evolutionary history contained within species assemblages, associated with either increasing phylogenetic diversification or clustering, are most frequent at high northern latitudes. By contrast, projected declines in evolutionary history, associated with increasing phylogenetic over-dispersion or homogenisation, occur across all continents.
The projected widespread changes in the phylogenetic structure of species assemblages show that changes in species richness do not fully reflect the potential threat from climate change to ecosystems. Our results indicate that the most severe changes to the phylogenetic diversity and structure of species assemblages are likely to be caused by species range shifts rather than range reductions and extinctions. Our findings highlight the importance of considering diverse measures in climate impact assessments and the value of integrating species-specific responses into assessments of entire community changes.
Abstract
The endoplasmic reticulum (ER) is a key organelle of membrane biogenesis and crucial for the folding of both membrane and secretory proteins. Sensors of the unfolded protein response (UPR) monitor the unfolded protein load in the ER and convey effector functions for maintaining ER homeostasis. Aberrant compositions of the ER membrane, referred to as lipid bilayer stress, are equally potent activators of the UPR. How the distinct signals from lipid bilayer stress and unfolded proteins are processed by the conserved UPR transducer Ire1 remains unknown. Here, we have generated a functional, cysteine-less variant of Ire1 and performed systematic cysteine crosslinking experiments in native membranes to establish its transmembrane architecture in signaling-active clusters. We show that the transmembrane helices of two neighboring Ire1 molecules adopt an X-shaped configuration independent of the primary cause for ER stress. This suggests that different forms of stress converge in a common, signaling-active transmembrane architecture of Ire1.
Summary
The endoplasmic reticulum (ER) is a hotspot of lipid biosynthesis and crucial for the folding of membrane and secretory proteins. The unfolded protein response (UPR) controls the size and folding capacity of the ER. The conserved UPR transducer Ire1 senses both unfolded proteins and aberrant lipid compositions to mount adaptive responses. Using a biochemical assay to study Ire1 in signaling-active clusters, Väth et al. provide evidence that the neighboring transmembrane helices of clustered Ire1 form an ‘X’ irrespective of the primary cause of ER stress. Hence, different forms of ER stress converge in a common, signaling-active transmembrane architecture of Ire1.
Bisphenols and phthalates, chemicals frequently used in plastic products, promote obesity in cell and animal models. However, these well-known metabolism disrupting chemicals (MDCs) represent only a minute fraction of all compounds found in plastics. To gain a comprehensive understanding of plastics as a source of exposure to MDCs, we characterized all chemicals present in 34 everyday products using nontarget high-resolution mass spectrometry and analyzed their joint adipogenic activities by high-content imaging. We detected 55,300 chemical features and tentatively identified 629 unique compounds, including 11 known MDCs. Importantly, chemicals that induced proliferation, growth, and triglyceride accumulation in 3T3-L1 adipocytes were found in one third of the products. Since the majority did not target peroxisome proliferator-activated receptor γ, the effects are likely to be caused by unknown MDCs. Our study demonstrates that daily-use plastics contain potent mixtures of MDCs and can, therefore, be a relevant yet underestimated environmental factor contributing to obesity.
Teaser: Plastics contain a potent mixture of chemicals promoting adipogenesis, a key process in developing obesity.
With the emergence of immunotherapies, the understanding of functional HLA class I antigen presentation to T cells is more relevant than ever. Current knowledge on antigen presentation is based on decades of research in a wide variety of cell types with varying antigen presentation machinery (APM) expression patterns, proteomes and HLA haplotypes. This diversity complicates the establishment of individual APM contributions to antigen generation, selection and presentation. Therefore, we generated a novel Panel of APM Knockout Cell lines (PAKC) from the same genetic origin. After CRISPR/Cas9 genome-editing of ten individual APM components in a human cell line, we derived clonal cell lines and confirmed their knockout status and phenotype. We then show how PAKC will accelerate research on the functional interplay between APM components and their role in antigen generation and presentation. This will lead to improved understanding of peptide-specific T cell responses in infection, cancer and autoimmunity.