The establishment and maintenance of protected areas (PAs) is viewed as a key action in delivering post-2020 biodiversity targets. PAs often need to meet a multitude of objectives, ranging from biodiversity protection to ecosystem service provision and climate change mitigation. As available land and conservation funding are limited, optimizing resources by selecting the most beneficial PAs is vital. Here we present a decision support tool that enables a flexible approach to PA selection on a global scale, allowing different conservation objectives to be weighted and prioritized according to user-specified preferences. We apply the tool across 1347 terrestrial PAs and highlight frequent trade-offs among different objectives, e.g., between biodiversity protection and ecosystem integrity. These results indicate that decision makers must usually decide among conflicting objectives. To assist this, our decision support tool provides an explicitly value-based approach that can help resolve such conflicts by considering divergent societal and political demands and values.
The regeneration of hadronic resonances is discussed for heavy ion collisions at SPS and SIS-300 energies. The time evolution of the Delta, rho and phi resonances is investigated. Special emphasis is put on resonance regeneration after chemical freeze-out. The emission time spectra of experimentally detectable resonances are explored.
We predict transverse and longitudinal momentum spectra and yields of rho0 and omega mesons reconstructed from hadron correlations in C+C reactions at 2 AGeV. The rapidity and pT distributions for reconstructable rho0 mesons differ strongly from the primary distribution, while the omega distributions are only weakly modified. We discuss the temporal and spatial distributions of the particles emitted in the hadron channel. Finally, we report on the mass shift of the rho0 due to its coupling to the N*(1520), which is observable in both the di-lepton and the pi pi channel. Our calculations can be tested with the HADES experiment at GSI, Darmstadt.
Weak function word shift
(2004)
The fact that object shift only affects weak pronouns in mainland Scandinavian is seen as an instance of a more general observation that can be made in all Germanic languages: weak function words tend to avoid the edges of larger prosodic domains. This generalisation has been formulated within Optimality Theory in terms of alignment constraints on prosodic structure by Selkirk (1996) in explaining the distribution of prosodically strong and weak forms of English function words, especially modal verbs, prepositions and pronouns. But a purely phonological account fails to integrate the syntactic licensing conditions for object shift in an appropriate way. The standard semantico-syntactic accounts of object shift, on the other hand, fail to explain why it is only weak pronouns that undergo object shift. This paper develops an Optimality-theoretic model of the syntax-phonology interface which is based on the interaction of syntactic and prosodic factors. The account can successfully be applied to further related phenomena in English and German.
This paper argues for a particular architecture of OT syntax. This architecture has three core features: i) it is bidirectional: the usual production-oriented optimisation (called ‘first optimisation’ here) is accompanied by a second step that checks the recoverability of an underlying form; ii) this underlying form already contains a full-fledged syntactic specification; iii) the procedure checking for recoverability, in particular, makes crucial use of semantic and pragmatic factors. The first section motivates the basic architecture. The second section shows, with two examples, how contextual factors are integrated. The third section examines its implications for learning theory, and the fourth section concludes with a broader discussion of the advantages and disadvantages of the proposed model.
This paper is part of a research project on OT Syntax and the typology of the free relative (FR) construction. It concentrates on the details of an OT analysis and some of its consequences for OT syntax. I will not present a general discussion of the phenomenon and the many controversial issues it is famous for in generative syntax.
The aim of this paper is the exploration of an optimality theoretic architecture for syntax that is guided by the concept of "correspondence": syntax is understood as the mechanism of "translating" underlying representations into a surface form. In minimalism, this surface form is called "Phonological Form" (PF). Both semantic and abstract syntactic information are reflected by the surface form. The empirical domain in which this architecture is tested is that of minimal link effects, especially in the case of "wh"-movement. The OT constraints require the surface form to reflect the underlying semantic and syntactic representations as closely as possible. The means by which underlying relations and properties are encoded are precedence, adjacency, surface morphology and prosodic structure. Information that is not encoded in one of these ways remains unexpressed, and gets lost unless it is recoverable via the context. Different kinds of information are often expressed by the same means. The resulting conflicts are resolved by the relative ranking of the relevant correspondence constraints.
The argument that I have tried to elaborate in this paper is that the conceptual problem behind the traditional competence/performance distinction does not go away, even if we abandon its original Chomskyan formulation. It returns as the question of the relation between the model of the grammar and the results of empirical investigations, i.e. the question of empirical verification. The theoretical concept of markedness is argued to be an ideal correlate of gradience. Optimality Theory, being based on markedness, is a promising framework for the task of bridging the gap between model and empirical world. However, this task requires not only a model of grammar, but also a theory of the methods that are chosen in empirical investigations and how their results are interpreted, and a theory of how to derive predictions for these particular empirical investigations from the model. Stochastic Optimality Theory is one possible formulation of a proposal that derives empirical predictions from an OT model. However, I hope to have shown that it is not enough to take frequency distributions and relative acceptabilities at face value and simply construct some Stochastic OT model that fits the facts. These facts first of all need to be interpreted, and those factors that the grammar has to account for must be sorted out from those about which grammar should have nothing to say. This task, to my mind, is more complicated than the picture that a simplistic application of (not only) Stochastic OT might draw.
The thrombopoietin receptor agonist eltrombopag was successfully used against human cytomegalovirus (HCMV)-associated thrombocytopenia refractory to immunomodulatory and antiviral drugs. These effects were ascribed to effects of eltrombopag on megakaryocytes. Here, we tested whether eltrombopag may also exert direct antiviral effects. Therapeutic eltrombopag concentrations inhibited HCMV replication in human fibroblasts and adult mesenchymal stem cells infected with six different virus strains and drug-resistant clinical isolates. Eltrombopag also synergistically increased the anti-HCMV activity of the mainstay drug ganciclovir. Time-of-addition experiments suggested that eltrombopag interferes with HCMV replication after virus entry. Eltrombopag was effective in thrombopoietin receptor-negative cells, and addition of Fe3+ prevented the anti-HCMV effects, indicating that it inhibits HCMV replication via iron chelation. This may be of particular interest for the treatment of cytopenias after haematopoietic stem cell transplantation, as HCMV reactivation is a major reason for transplantation failure. Since therapeutic eltrombopag concentrations are effective against drug-resistant viruses and synergistically increase the effects of ganciclovir, eltrombopag is also a drug repurposing candidate for the treatment of therapy-refractory HCMV disease.
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to simultaneously image large numbers of dendritic spines in 3D volumes of neural tissue. Exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines, yet currently no automated method for spine detection comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized on the spine detection task. This transfer learning approach is data efficient while achieving a high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
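The abstract above mentions pooling five expert annotators to build a labelled ground-truth set. A minimal sketch of how such annotations might be merged by majority vote is shown below; the function name, the vote threshold, and the toy spine identifiers are illustrative assumptions, not details from the paper.

```python
from collections import Counter

def consensus_labels(annotations, min_votes=3):
    """Merge per-annotator spine detections into a consensus set.

    annotations: list of sets, one per annotator, each containing hashable
    spine identifiers (e.g. rounded 3D coordinates). A candidate spine is
    accepted if at least `min_votes` annotators marked it.
    """
    votes = Counter()
    for ann in annotations:
        votes.update(ann)
    return {spine for spine, n in votes.items() if n >= min_votes}

# Five hypothetical annotators label candidate spines A-D with some disagreement.
annotators = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "C", "D"},
    {"A", "B", "C"},
    {"B", "D"},
]
print(sorted(consensus_labels(annotators)))  # A:4, B:4, C:3, D:2 votes -> ['A', 'B', 'C']
```

Requiring a majority (3 of 5) keeps spines most experts agree on while discarding idiosyncratic detections, which is one simple way to "account for the variability in human spine detection".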
Dual coding theories of knowledge suggest that meaning is represented in the brain by a double code, which comprises language-derived representations in the Anterior Temporal Lobe and sensory-derived representations in perceptual and motor regions. This approach predicts that concrete semantic features should activate both codes, whereas abstract features rely exclusively on the linguistic code. Using magnetoencephalography (MEG), we adopted a temporally resolved multiple regression approach to identify the contribution of abstract and concrete semantic predictors to the underlying brain signal. Results evidenced early involvement of anterior-temporal and inferior-frontal brain areas in both abstract and concrete semantic information encoding. At later stages, occipito-temporal regions showed greater responses to concrete compared to abstract features. The present findings shed new light on the temporal dynamics of abstract and concrete semantic representations in the brain and suggest that word meaning is processed first with a transmodal/linguistic code, housed in frontotemporal brain systems, and only later with an imagistic/sensorimotor code in perceptual and motor regions.
In the past, a divide could be seen between ’deep’ parsers on the one hand, which construct a semantic representation out of their input, but usually have significant coverage problems, and more robust parsers on the other hand, which are usually based on a (statistical) model derived from a treebank and have larger coverage, but leave the problem of semantic interpretation to the user. More recently, approaches have emerged that combine the robustness of data-driven (statistical) models with more detailed linguistic interpretation such that the output could be used for deeper semantic analysis. Cahill et al. (2002) use a PCFG-based parsing model in combination with a set of principles and heuristics to derive functional (f-)structures of Lexical-Functional Grammar (LFG). They show that the derived functional structures have a better quality than those generated by a parser based on a state-of-the-art hand-crafted LFG grammar. Advocates of Dependency Grammar usually point out that dependencies already are a semantically meaningful representation (cf. Menzel, 2003). However, parsers based on dependency grammar normally create underspecified representations with respect to certain phenomena such as coordination, apposition and control structures. In these areas they are too "shallow" to be directly used for semantic interpretation. In this paper, we adopt a similar approach to Cahill et al. (2002), using a dependency-based analysis to derive functional structure, and demonstrate the feasibility of this approach using German data. A major focus of our discussion is on the treatment of coordination and other potentially underspecified structures of the dependency data input. F-structure is one of the two core levels of syntactic representation in LFG (Bresnan, 2001).
Independently of surface order, it encodes abstract syntactic functions that constitute predicate-argument structure and other dependency relations such as subject, predicate, adjunct, but also further semantic information such as the semantic type of an adjunct (e.g. directional). Normally, f-structure is represented as a recursive attribute-value matrix, which is isomorphic to a directed graph representation. Figure 5 depicts an example target f-structure. As mentioned earlier, these deeper-level dependency relations can be used to construct logical forms as in the approaches of van Genabith and Crouch (1996), who construct underspecified discourse representations (UDRSs), and Spreyer and Frank (2005), who have robust minimal recursion semantics (RMRS) as their target representation. We therefore think that f-structures are a suitable target representation for automatic syntactic analysis in a larger pipeline of mapping text to interpretation. In this paper, we report on the conversion from dependency structures to f-structures. Firstly, we evaluate the f-structure conversion in isolation, starting from hand-corrected dependencies based on the TüBa-D/Z treebank and Versley (2005)'s conversion. Secondly, we start from tokenized text to evaluate the combined process of automatic parsing (using Foth and Menzel (2006)'s parser) and f-structure conversion. As a test set, we randomly selected 100 sentences from TüBa-D/Z which we annotated using a scheme very close to that of the TiGer Dependency Bank (Forst et al., 2004). In the next section, we sketch dependency analysis, the underlying theory of our input representations, and introduce four different representations of coordination. We also describe Weighted Constraint Dependency Grammar (WCDG), the dependency parsing formalism that we use in our experiments. Section 3 characterises the conversion of dependencies to f-structures.
Our evaluation is presented in Section 4, and finally, Section 5 summarises our results and gives an overview of problems remaining to be solved.
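The abstract above describes converting flat dependency analyses into nested f-structure attribute-value matrices. The sketch below illustrates the basic idea of that mapping with plain dictionaries; the labels, the tiny German example, and the function itself are illustrative assumptions and do not reproduce the paper's actual conversion, which handles far more phenomena (coordination, apposition, control).

```python
def deps_to_fstructure(tokens, deps):
    """Build a nested attribute-value matrix from a dependency analysis.

    tokens: {token_id: word}
    deps:   list of (head_id, label, dependent_id); head_id 0 marks the root.
    Each token gets a PRED feature; dependents are embedded under their
    head's grammatical-function label, yielding a recursive structure.
    """
    fs = {tid: {"PRED": word} for tid, word in tokens.items()}
    root = None
    for head, label, dep in deps:
        if head == 0:
            root = dep                  # sentence root becomes the outermost AVM
        else:
            fs[head][label] = fs[dep]   # nest the dependent's AVM under its head
    return fs[root]

# "Peter liest das Buch" (Peter reads the book), with hypothetical labels.
tokens = {1: "Peter", 2: "liest", 3: "das", 4: "Buch"}
deps = [(0, "ROOT", 2), (2, "SUBJ", 1), (2, "OBJ", 4), (4, "SPEC", 3)]
result = deps_to_fstructure(tokens, deps)
print(result["PRED"], result["SUBJ"]["PRED"], result["OBJ"]["PRED"])  # liest Peter Buch
```

Because the per-token dictionaries are shared by reference, nesting works regardless of the order in which dependency edges are processed, mirroring how an attribute-value matrix is isomorphic to a directed graph.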
In this paper, we investigate the usefulness of a wide range of features for the resolution of nominal coreference, both as hard constraints (i.e. completely removing elements from the list of possible candidates) and as soft constraints (where an accumulation of violations of soft constraints makes it less likely that a candidate is chosen as the antecedent). We present a state-of-the-art system based on such constraints and weights estimated with a maximum entropy model, using lexical information to resolve cases of coreferent bridging.
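The hard/soft constraint distinction above can be sketched in a few lines: hard constraints filter the candidate list outright, while soft-constraint violations contribute weighted terms to a log-linear (maximum-entropy-style) score. The feature names, weights, and toy mentions below are hypothetical, chosen only to illustrate the mechanism, not taken from the described system.

```python
import math

def hard_filter(mention, candidate):
    # A hard constraint removes a candidate entirely, e.g. number disagreement.
    return mention["number"] == candidate["number"]

def score(mention, candidate, weights):
    # Soft constraints: each violation adds its (negative) weight to the score.
    s = 0.0
    if mention["gender"] != candidate["gender"]:
        s += weights["gender_mismatch"]
    s += weights["distance"] * candidate["sentence_distance"]
    return s

def resolve(mention, candidates, weights):
    viable = [c for c in candidates if hard_filter(mention, c)]
    if not viable:
        return None
    scores = [score(mention, c, weights) for c in viable]
    z = sum(math.exp(s) for s in scores)          # softmax normalisation,
    probs = [math.exp(s) / z for s in scores]     # as in a maxent model
    return viable[max(range(len(viable)), key=probs.__getitem__)]

weights = {"gender_mismatch": -2.0, "distance": -0.5}
mention = {"number": "sg", "gender": "f"}
candidates = [
    {"id": "the law", "number": "sg", "gender": "n", "sentence_distance": 1},
    {"id": "the minister", "number": "sg", "gender": "f", "sentence_distance": 2},
    {"id": "the talks", "number": "pl", "gender": "n", "sentence_distance": 0},
]
print(resolve(mention, candidates, weights)["id"])  # -> the minister
```

"the talks" is removed by the hard number constraint; among the survivors, the gender-matching candidate wins despite being further away, showing how weighted soft constraints trade off against each other.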
When a statistical parser is trained on one treebank, one usually tests it on another portion of the same treebank, partly because a comparable annotation format is needed for testing. But the user of a parser may not be interested in parsing sentences from the same newspaper over and over, or may even want syntactic annotations for a slightly different text type. Gildea (2001), for instance, found that a parser trained on the WSJ portion of the Penn Treebank performs less well on the Brown corpus (the subset that is available in the PTB bracketing format) than a parser that has been trained only on the Brown corpus, although the latter has only half as many sentences as the former. Additionally, a parser trained on both the WSJ and Brown corpora performs less well on the Brown corpus than on the WSJ one. This leads us to the following questions that we would like to address in this paper: - Is there a difference in the usefulness of techniques that are used to improve parser performance between the same-corpus and the different-corpus case? - Are different types of parsers (rule-based and statistical) equally sensitive to corpus variation? To address these questions, we compared the quality of the parses of a hand-crafted constraint-based parser and a statistical PCFG-based parser that was trained on a treebank of German newspaper text.
Using a qualitative analysis of disagreements from a referentially annotated newspaper corpus, we show that, in coreference annotation, vague referents are prone to greater disagreement. We show how potentially problematic cases can be dealt with in a way that is practical even for larger-scale annotation, considering a real-world example from newspaper text.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail, and we then propose a generalisation of Poesio, Reyle and Stevenson’s Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, arguing that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgarriff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge, such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
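The decorrelation step mentioned above, truncated singular value decomposition over a co-occurrence matrix, can be illustrated on toy data. The words, counts, and the choice of two latent dimensions below are made-up assumptions; the point is only the mechanics of projecting words into a low-rank space where topically skewed raw features are decorrelated.

```python
import numpy as np

# Toy word-by-feature co-occurrence counts (rows: words, columns: context features).
counts = np.array([
    [4.0, 3.0, 0.0, 1.0],   # "dog"
    [3.0, 4.0, 0.0, 0.0],   # "cat"
    [0.0, 0.0, 5.0, 4.0],   # "car"
])

# Truncated SVD: keep the top-k singular dimensions as latent word vectors.
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]      # low-rank representations, one row per word

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In the reduced space, "dog" should remain closer to "cat" than to "car".
print(cosine(word_vecs[0], word_vecs[1]) > cosine(word_vecs[0], word_vecs[2]))
```

The SVD concentrates the dominant, correlated feature directions into a few orthogonal dimensions, which is what makes the resulting space less sensitive to the skewed topic distribution of raw Web counts.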
We adopt Markert and Nissim (2005)’s approach of using the World Wide Web to resolve cases of coreferent bridging for German and discuss the strengths and weaknesses of this approach. As the general approach of using surface patterns to get information on ontological relations between lexical items has only been tried on English, it is also interesting to see whether the approach works for German as well as it does for English and what differences between these languages need to be accounted for. We also present a novel approach for combining several patterns that yields an ensemble that outperforms the best-performing single patterns in terms of both precision and recall.
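One simple way such a pattern ensemble could work is shown below: each surface pattern is queried for a word pair, and the pair is accepted when enough patterns return enough evidence. The pattern strings, hit counts, and thresholds are hypothetical; the paper's actual combination method is not specified in this abstract.

```python
def ensemble_is_related(anaphor, antecedent, pattern_counts,
                        min_patterns=2, min_hits=5):
    """Decide whether two expressions stand in an ontological relation.

    pattern_counts: {pattern_template: web hit count for this word pair}.
    A pattern 'fires' when its hit count reaches min_hits; the pair is
    accepted when at least min_patterns patterns fire.
    """
    firing = [p for p, n in pattern_counts.items() if n >= min_hits]
    return len(firing) >= min_patterns

# Hypothetical German patterns and counts for the pair (BMW, Autohersteller).
counts = {
    "X und andere Y": 120,   # "X and other Y"
    "Y wie X": 40,           # "Y such as X"
    "X ist ein Y": 2,        # "X is a Y" (too rare here to fire)
}
print(ensemble_is_related("BMW", "Autohersteller", counts))  # -> True
```

Requiring agreement between patterns filters out spurious single-pattern matches (improving precision), while pooling several patterns catches pairs any one pattern misses (improving recall), which is the intuition behind an ensemble outperforming each single pattern.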
MicroRNAs (miRNAs) are critical post-transcriptional regulators in many biological processes. They act by guiding RNA-induced silencing complexes to miRNA response elements (MREs) in target mRNAs, inducing translational inhibition and/or mRNA degradation. Functional MREs are expected to predominantly occur in the 3’ untranslated region and involve perfect base-pairing of the miRNA seed. Here, we generate a high-resolution map of miR-181a/b-1 (miR-181) MREs to define the targeting rules of miR-181 in developing murine T-cells. By combining a multi-omics approach with computational high-resolution analyses, we uncover novel miR-181 targets and demonstrate that miR-181 acts predominantly through RNA destabilization. Importantly, we discover an alternative seed match and identify a distinct set of targets with repeat elements in the coding sequence which are targeted by miR-181 and mediate translational inhibition. In conclusion, deep profiling of MREs in primary cells is critical to expand physiologically relevant targetomes and establish context-dependent miRNA targeting rules.
Key Points:
* Deep profiling identifies novel targets of miR-181 associated with global gene regulation.
* miR-181 MREs in repeat elements in the coding sequence act through translational inhibition.
* High-resolution analysis reveals an alternative seed match in functional MREs.
Aim: Replicate the analysis conducted by Prof. Dr. Alexander W. Schmidt-Catran (Goethe University Frankfurt), Prof. Dr. Malcolm Fairbrother (Umea University), and Prof. Dr. Hans-Jürgen Andreß (University of Cologne) that was published in a special issue on Cross-National Comparative Research in the German academic journal Kölner Zeitschrift für Soziologie und Sozialpsychologie in 2019. Result: Almost all calculations, tables and graphs from Schmidt-Catran et al. (2019) could be replicated sufficiently well in R.
The cosmological implications of the Covariant Canonical Gauge Theory of Gravity (CCGG) are investigated. CCGG is a Palatini theory derived from first principles using the canonical transformation formalism in the covariant Hamiltonian formulation. The Einstein-Hilbert theory is thereby extended by a quadratic Riemann-Cartan term in the Lagrangian. Moreover, the requirement of covariant conservation of the stress-energy tensor leads to the necessary presence of torsion. In the Friedmann universe, this promotes the cosmological constant to a time-dependent function and gives rise to a geometrical correction with the equation of state of dark radiation. The resulting cosmology, compatible with the ΛCDM parameter set, encompasses bounce and bang scenarios with graceful exits into the late dark energy era. Testing those scenarios against low-z observations shows that CCGG is a viable theory.
Lattice strains of appropriate symmetry have served as an excellent tool to explore the interaction of superconductivity in the iron-based superconductors with nematic and stripe spin-density wave (SSDW) order, which are both closely tied to an orthorhombic distortion. In this work, we contribute to a broader understanding of the coupling of strain to superconductivity and competing normal-state orders by studying CaKFe4As4 under large, in-plane strains of B1g and B2g symmetry. In contrast to the majority of iron-based superconductors, pure CaKFe4As4 exhibits superconductivity with a relatively high transition temperature of Tc ∼ 35 K in proximity to a non-collinear, tetragonal, hedgehog spin-vortex crystal (SVC) order. Through experiments, we demonstrate an anisotropic in-plane strain response of Tc, which is reminiscent of the behavior of other pnictides with nematicity. However, our calculations suggest that in CaKFe4As4 this anisotropic response correlates with that of the SVC fluctuations, highlighting the close interrelation of magnetism and high-Tc superconductivity. By suggesting moderate B2g strains as an effective parameter to change the relative stability of SVC and SSDW order, we outline a pathway to a unified phase diagram of iron-based superconductivity.
Scanning tunneling microscopy (STM) is perhaps the most promising way to directly detect the superconducting gap size and structure in the canonical unconventional superconductor Sr2RuO4. However, in many cases, researchers have reported being unable to detect the gap at all in simple STM conductance measurements. Recently, an investigation of this issue on various local topographic structures on a Sr-terminated surface found that superconducting spectra appeared only in the region of small nanoscale canyons, corresponding to the removal of one RuO surface layer. Here, we analyze the electronic structure of various possible surface structures using first-principles methods, and argue that bulk conditions favorable for superconductivity can be achieved when removal of the RuO layer suppresses the RuO4 octahedral rotation locally. We further propose alternative terminations to the most frequently reported Sr termination at which superconductivity should be observed at the surface.
Motivated by the wealth of proposals and realizations of nontrivial topological phases in EuCd2As2, such as a Weyl semimetallic state and the recently discussed semimetallic versus semiconductor behavior in this system, we analyze in this work the role of the delicate interplay of Eu magnetism, strain and pressure on the realization of such phases. For that we invoke a combination of a group theoretical analysis with ab initio density functional theory calculations and uncover a rich phase diagram with various non-trivial topological phases beyond a Weyl semimetallic state, such as axion and topological crystalline insulating phases, and discuss their realization.
Knowledge is limited as to how prior SARS-CoV-2 infection influences cellular and humoral immunity after booster-vaccination with bivalent BA.4/5-adapted mRNA-vaccines, and whether vaccine-induced immunity correlates with subsequent infection. In this observational study, individuals with prior infection (n=64) showed higher vaccine-induced anti-spike IgG antibodies and neutralizing titers, but the relative increase was significantly higher in non-infected individuals (n=63). In general, both groups showed higher neutralizing activity towards the parental strain than towards Omicron subvariants BA.1, BA.2 and BA.5. In contrast, CD4 or CD8 T-cell levels towards spike from the parental strain and the Omicron subvariants, and cytokine expression profiles were similar irrespective of prior infection. Breakthrough infections occurred more frequently among previously non-infected individuals, who had significantly lower vaccine-induced spike-specific neutralizing activity and CD4 T-cell levels. Thus, the magnitude of vaccine-induced neutralizing activity and specific CD4 T-cells after bivalent vaccination may serve as a correlate for protection in previously non-infected individuals.
While junior academics in the research domain receive strategic, scientifically grounded training complete with various examinations (Bachelor's, Master's, doctorate, possibly also habilitation), nothing even remotely comparable exists in the domain of teaching. The usual "qualification" of novice teachers mostly takes place "on the job" (cf. Conradi, 1983), i.e. through their own trial and error after observing other teachers during their own studies. Under good conditions, the teacher has attended continuing-education courses on good teaching beforehand or alongside their teaching. A strategic embedding of these staff development measures, as is intended on the research side, does not exist. This contribution presents possible forms of such embedding and elaborates on one of them by way of example.
mRNA localization to subcellular compartments has been reported across all kingdoms of life and is generally believed to promote asymmetric protein synthesis and localization. In striking contrast to previous observations, we show that in S. cerevisiae the B-type cyclin CLB2 mRNA is localized and translated in the yeast bud, while the Clb2 protein, a key regulator of mitosis progression, is concentrated in the mother nucleus. Using single-molecule RNA imaging in fixed (smFISH) and living cells (MS2 system), we show that the CLB2 mRNA is transported to the yeast bud by the She2-She3 complex, via an mRNA ZIP-code situated in the coding sequence. In CLB2 mRNA localization mutants, Clb2 protein synthesis in the bud is decreased, resulting in changes in cell cycle distribution and genetic instability. Altogether, we propose that CLB2 mRNA localization acts as a sensor for bud development to couple cell growth and cell cycle progression, revealing a novel function for mRNA localization.
The spike (S) protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the major focus for vaccine development. We combine cryo-electron tomography, subtomogram averaging and molecular dynamics simulations to structurally analyze S in situ. Compared to recombinant S, the viral S is more heavily glycosylated and occurs predominantly in a closed pre-fusion conformation. We show that the stalk domain of S contains three hinges that give the globular domain unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and the development of safe vaccines. The large-scale tomography dataset of SARS-CoV-2 used for this study is sufficient to resolve structural features to below 5 Å and is publicly available at EMPIAR-10453.
Generating predictions about environmental regularities, relying on these predictions, and updating them when incoming sensory evidence violates them are considered crucial functions of our cognitive system for adaptive behavior. The violation of a prediction can result in a prediction error (PE), which affects subsequent memory processing. In our preregistered studies, we examined the effects of different levels of PE on episodic memory. Participants were asked to generate predictions about the associations between sequentially presented cue-target pairs, which were later violated with individual items at three PE levels: low, medium, and high. Afterwards, participants were asked to provide old/new judgments on the items with confidence ratings and to retrieve the paired cues. Our results indicated better recognition memory for the low-PE level than for the medium and high levels, suggesting a memory congruency effect. In contrast, there was no evidence of a memory benefit for the high-PE level. Together, these findings strongly suggest that high PE does not guarantee better memory.
Cryo-electron tomography (cryo-ET) is a powerful method to elucidate subcellular architecture and to structurally analyse biomolecules in situ by subtomogram averaging (STA). Specimen thickness is a key factor affecting cryo-ET data quality. Cells that are too thick for transmission imaging can be thinned by cryo-focused-ion-beam (cryo-FIB) milling. However, optimal specimen thickness for cryo-ET on lamellae has not been systematically investigated. Furthermore, the ions used to ablate material can cause damage in the lamellae, thereby reducing STA resolution. Here, we systematically benchmark the resolution depending on lamella thickness and the depth of the particles within the sample. Up to ca. 180 nm, lamella thickness does not negatively impact resolution. This shows that there is no need to generate very thin lamellae and thickness can be chosen such that it captures major cellular features. Furthermore, we show that gallium-ion-induced damage extends to depths of up to 30 nm from either lamella surface.
The present article proposes a re-reading of what "inclusion" into the sphere of the historical actually means in modern European historical discourse. It argues that this re-reading permits challenging a powerful but problematic norm of ontological homogeneity as something to be achieved in and by historical discourse. At least some of the more conceptually profound challenges that accounts of "deep history" - of very distant pasts - pose to historical discourse have to do with pursuits of this norm. Historical theory has the potential to respond to some of these challenges and to turn them back on the practice of accounting for deep time in historical writing. The argument proceeds, in a first step, by analyzing the ties between modern European mortuary cultures and historical writing. In a second step, the history of humanitarian moralities is brought to bear on the analysis, in order to make visible, thirdly, the fractured presences of deep time in modern-era and contemporary historical writing. The fractures in question emerge, the article argues, from the ontological heterogeneity of historical knowledge. In the end, a position beyond ontological homogeneity is adumbrated.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
Several studies have probed perceptual performance at different times after a self-paced motor action and found frequency-specific modulations of perceptual performance phase-locked to the action. Such action-related modulation has been reported for various frequencies and modulation strengths. In an attempt to establish a basic effect at the population level, we had a relatively large number of participants (n=50) perform a self-paced button press followed by a detection task at threshold, and we applied both fixed- and random-effects tests. The combined data of all trials and participants surprisingly did not show any significant action-related modulation. However, based on previous studies, we explored the possibility that such modulation depends on the participant’s internal state. Indeed, when we split trials based on performance in neighboring trials, then trials in periods of low performance showed an action-related modulation at ≈17 Hz. When we split trials based on the performance in the preceding trial, we found that trials following a “miss” showed an action-related modulation at ≈17 Hz. Finally, when we split participants based on their false-alarm rate, we found that participants with no false alarms showed an action-related modulation at ≈17 Hz. All these effects were significant in random-effects tests, supporting an inference on the population. Together, these findings indicate that action-related modulations are not always detectable. However, the results suggest that specific internal states such as lower attentional engagement and/or higher decision criterion are characterized by a modulation in the beta-frequency range.
Several recent studies investigated the rhythmic nature of cognitive processes that lead to perception and behavioral report. These studies used different methods, and there has not yet been an agreement on a general standard. Here, we present a way to test and quantitatively compare these methods. We simulated behavioral data from a typical experiment and analyzed these data with several methods. We applied the main methods found in the literature, namely sine-wave fitting, the Discrete Fourier Transform (DFT) and the Least Square Spectrum (LSS). DFT and LSS can be applied both on the averaged accuracy time course and on single trials. LSS is mathematically equivalent to DFT in the case of regular, but not irregular sampling - which is more common. LSS additionally offers the possibility to take into account a weighting factor which affects the strength of the rhythm, such as arousal. Statistical inferences were done either on the investigated sample (fixed-effect) or on the population (random-effect) of simulated participants. Multiple comparisons across frequencies were corrected using False-Discovery-Rate, Bonferroni, or the Max-Based approach. To perform a quantitative comparison, we calculated Sensitivity, Specificity and D-prime of the investigated analysis methods and statistical approaches. Within the investigated parameter range, single-trial methods had higher sensitivity and D-prime than the methods based on the averaged-accuracy-time-course. This effect was further increased for a simulated rhythm of higher frequency. If an additional (observable) factor influenced detection performance, adding this factor as weight in the LSS further improved Sensitivity and D-prime. For multiple comparison correction, the Max-Based approach provided the highest Specificity and D-prime, closely followed by the Bonferroni approach. 
Given a fixed total number of trials, the random-effect approach had a higher D-prime when trials were distributed over a larger number of participants, even though this meant fewer trials per participant. Finally, we present the idea of using a damped sinusoidal oscillator instead of a simple sinusoidal function, to further improve the fit to the behavioral rhythmicity observed after a reset event.
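The two core quantities compared above can be sketched compactly. The following is a minimal, hypothetical Python illustration (not the authors' code): a DFT amplitude spectrum of an averaged accuracy time course, and the D-prime sensitivity index used to rank the analysis methods. The log-linear correction for extreme hit/false-alarm rates is one common convention, assumed here.

```python
import numpy as np
from statistics import NormalDist

def dft_amplitude(accuracy, times):
    """Amplitude spectrum of an accuracy time course via the DFT.
    With regular sampling this is equivalent to the least-square
    spectrum (LSS); with irregular sampling the two differ."""
    n = len(accuracy)
    demeaned = np.asarray(accuracy, float) - np.mean(accuracy)
    freqs = np.fft.rfftfreq(n, d=times[1] - times[0])
    amps = np.abs(np.fft.rfft(demeaned)) * 2.0 / n
    return freqs, amps

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from detection counts; a log-linear
    correction keeps the z-scores finite at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A simulated 8 Hz accuracy rhythm sampled at 100 Hz shows a clear DFT peak:
t = np.arange(0.0, 1.0, 0.01)
freqs, amps = dft_amplitude(0.5 + 0.4 * np.sin(2 * np.pi * 8 * t), t)
```

Weighting single trials by an observable factor (e.g., arousal) before the fit is what the weighted LSS adds over this plain DFT.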
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can spread from symptomatic patients with COVID-19, but also from asymptomatic individuals. Therefore, robust surveillance and timely interventions are essential for the control of virus spread within the community. In this regard the frequency of testing and speed of reporting, but not the test sensitivity alone, play a crucial role. In order to reduce the costs and meet the expanding demands in real-time RT-PCR (rRT-PCR) testing for SARS-CoV-2, complementary assays, such as rapid antigen tests, have been developed. Rigorous analysis under varying conditions is required to assess the clinical performance of these tests and to ensure reproducible results. We evaluated the sensitivity and specificity of a recently licensed rapid antigen test using 137 clinical samples in two institutions. Test sensitivity was between 88.2-89.6% when applied to samples with viral loads typically seen in infectious patients. Of 32 rRT-PCR positive samples, 19 demonstrated infectivity in cell culture, and 84% of these samples were reactive with the antigen test. Seven full-genome sequenced SARS-CoV-2 isolates and SARS-CoV-1 were detected with this antigen test, with no cross-reactivity against other common respiratory viruses. Numerous antigen tests are available for SARS-CoV-2 testing and their performance to detect infectious individuals may vary. Head-to-head comparison along with cell culture testing for infectivity may prove useful to identify better performing antigen tests. The antigen test analyzed in this study is easy-to-use, inexpensive, and scalable. It can be helpful in monitoring infection trends and thus has potential to reduce transmission.
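The standard 2x2-table metrics underlying such evaluations can be computed directly; the sketch below is a generic Python illustration with hypothetical counts, not the study's actual data.

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Standard test metrics from a 2x2 table comparing an index test
    (e.g., a rapid antigen test) against a reference (e.g., rRT-PCR)."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv}

# Hypothetical example: 30 true positives, 4 false negatives, no false positives.
perf = diagnostic_performance(tp=30, fp=0, fn=4, tn=100)
```

Because predictive values depend on prevalence, head-to-head comparisons of antigen tests are usually reported as sensitivity/specificity, as in the study above.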
Post-merger gravitational-wave signal from neutron-star binaries: a new look at an old problem
(2023)
The spectral properties of the post-merger gravitational-wave signal from a binary of neutron stars encode a variety of information about the features of the system and of the equation of state describing matter around and above nuclear saturation density. Characterising the properties of such a signal is an "old" problem, which first emerged when a number of frequencies were shown to be related to the properties of the binary through "quasi-universal" relations. Here we take a new look at this old problem by computing the properties of the signal in terms of the Weyl scalar ψ4. In this way, and using a database of more than 100 simulations, we provide the first evidence for a new instantaneous frequency, f_0^{ψ4}, associated with the instant of quasi time-symmetry in the post-merger dynamics, which also follows a quasi-universal relation. We also derive a new quasi-universal relation for the merger frequency f_mer^h, which provides a description of the data that is four times more accurate than previous expressions while requiring fewer fitting coefficients. Finally, consistently with the findings of numerous studies before ours, and using an enlarged ensemble of binary systems, we point out that the ℓ = 2, m = 1 gravitational-wave mode could become comparable with the traditional ℓ = 2, m = 2 mode on sufficiently long timescales, with strain amplitudes in a ratio |h_21|/|h_22| ∼ 0.1−1 under generic orientations of the binary. This could be measured by present detectors for signals with large signal-to-noise ratio, or by third-generation detectors for generic signals, should no collapse occur.
Using full 3+1 dimensional general-relativistic hydrodynamic simulations of equal- and unequal-mass neutron-star binaries with properties that are consistent with those inferred from the inspiral of GW170817, we perform a detailed study of the quark-formation processes that could take place after merger. We use three equations of state consistent with current pulsar observations derived from a novel finite-temperature framework based on V-QCD, a non-perturbative gauge/gravity model for Quantum Chromodynamics. In this way, we identify three different post-merger stages at which mixed baryonic and quark matter, as well as pure quark matter, are generated. A phase transition triggered collapse already ≲10ms after the merger reveals that the softest version of our equations of state is actually inconsistent with the expected second-long post-merger lifetime of GW170817. Our results underline the impact that multi-messenger observations of binary neutron-star mergers can have in constraining the equation of state of nuclear matter, especially in its most extreme regimes.
The D-meson spectral density at finite temperature is obtained within a self-consistent coupled-channel approach. For the bare meson-baryon interaction, a separable potential is taken, whose parameters are fixed by the position and width of the Lambda_c (2593) resonance. The quasiparticle peak stays close to the free D-meson mass, indicating a small change in the effective mass for finite density and temperature. However, the considerable width of the spectral density implies physics beyond the quasiparticle approach. Our results indicate that the medium modifications for the D-mesons in nucleus-nucleus collisions at FAIR (GSI) will be dominantly on the width and not, as previously expected, on the mass.
We obtain the D-meson spectral density at finite temperature for the conditions of density and temperature expected at FAIR. We perform a self-consistent coupled-channel calculation taking, as a bare interaction, a separable potential model. The Lambda_c (2593) resonance is generated dynamically. We observe that the D-meson spectral density develops a sizeable width while the quasiparticle peak stays close to the free position. The consequences for the D-meson production at FAIR are discussed.
We have calculated the D-meson spectral density at finite temperature within a self-consistent coupled-channel approach that generates dynamically the Lambda_c (2593) resonance. We find a small mass shift for the D-meson in this hot and dense medium while the spectral density develops a sizeable width. The reduced attraction felt by the D-meson in hot and dense matter together with the large width observed have important consequences for the D-meson production in the future CBM experiment at FAIR.
Bacteria of the genera Photorhabdus and Xenorhabdus produce a plethora of natural products to support their similar symbiotic lifecycles. For many of these compounds, the specific bioactivities are unknown. One common challenge in natural product research when trying to prioritize research efforts is the rediscovery of identical (or highly similar) compounds from different strains. Linking genome sequence to metabolite production can help in overcoming this problem. However, sequences are typically not available for entire collections of organisms. Here we perform a comprehensive metabolic screening using HPLC-MS data associated with a 114-strain collection (58 Photorhabdus and 56 Xenorhabdus) from across Thailand and explore the metabolic variation among the strains, matched with several abiotic factors. We utilize machine learning in order to rank the importance of individual metabolites in determining all given metadata. With this approach, we were able to prioritize metabolites in the context of natural product investigations, leading to the identification of previously unknown compounds. The top three highest-ranking features were associated with Xenorhabdus and attributed to the same chemical entity, cyclo(tetrahydroxybutyrate). This work addresses the need for prioritization in high-throughput metabolomic studies and demonstrates the viability of such an approach in future research.
Spontaneous brain activity builds the foundation for human cognitive processing during external demands. Neuroimaging studies based on functional magnetic resonance imaging (fMRI) identified specific characteristics of spontaneous (intrinsic) brain dynamics to be associated with individual differences in general cognitive ability, i.e., intelligence. However, fMRI research is inherently limited by low temporal resolution, thus preventing conclusions about neural fluctuations within the range of milliseconds. Here, we used resting-state electroencephalographic (EEG) recordings from 144 healthy adults to test whether individual differences in intelligence (Raven's Advanced Progressive Matrices scores) can be predicted from the complexity of temporally highly resolved intrinsic brain signals. We compared different operationalizations of brain signal complexity (multiscale entropy, Shannon entropy, Fuzzy entropy, and specific characteristics of microstates) regarding their relation to intelligence. The results indicate that associations between brain signal complexity measures and intelligence are of small effect sizes (r ~ .20) and vary across different spatial and temporal scales. Specifically, higher intelligence scores were associated with lower complexity in local aspects of neural processing, and less activity in task-negative brain regions belonging to the default-mode network. Finally, we combined multiple measures of brain signal complexity to show that individual intelligence scores can be significantly predicted with a multimodal model within the sample (10-fold cross-validation) as well as in an independent sample (external replication, N = 57). In sum, our results highlight the temporal and spatial dependency of associations between intelligence and intrinsic brain dynamics, proposing multimodal approaches as promising means for future neuroscientific research on complex human traits.
Significance Statement: Spontaneous brain activity builds the foundation for intelligent processing - the ability of humans to adapt to various cognitive demands. Using resting-state EEG, we extracted multiple aspects of temporally highly resolved intrinsic brain dynamics to investigate their relationship with individual differences in intelligence. Single associations were of small effect sizes and varied critically across spatial and temporal scales. However, combining multiple measures in a multimodal, cross-validated prediction model allowed us to significantly predict individual intelligence scores in unseen participants. Our study adds to a growing body of research suggesting that observable associations between complex human traits and neural parameters might be rather small, and proposes multimodal prediction approaches as a promising tool for deriving robust brain-behavior relations despite limited sample sizes.
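The entropy measures named above can be illustrated compactly. The sketch below is a simplified, hypothetical Python version (not the study's pipeline): Shannon entropy of a signal's amplitude histogram, and a simplified variant of sample entropy, the quantity that multiscale entropy evaluates after coarse-graining the signal at several time scales.

```python
import numpy as np

def shannon_entropy(signal, n_bins=32):
    """Shannon entropy (bits) of the signal's amplitude distribution."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log(0) is defined as 0
    return float(-(p * np.log2(p)).sum())

def sample_entropy(signal, m=2, r=0.2):
    """Simplified sample entropy: -log of the conditional probability that
    sequences matching for m points (within tolerance r * SD) still match
    at m + 1 points. Lower values indicate a more regular signal."""
    x = np.asarray(signal, float)
    tol = r * x.std()

    def match_pairs(length):
        win = np.lib.stride_tricks.sliding_window_view(x, length)
        dist = np.max(np.abs(win[:, None] - win[None, :]), axis=-1)
        n = len(win)
        return (np.sum(dist <= tol) - n) / 2.0  # exclude self-matches

    b, a = match_pairs(m), match_pairs(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")
```

A regular signal (e.g., a sine wave) yields a lower sample entropy than white noise of the same length, which is the sense in which lower complexity reflects more predictable local dynamics.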
Background: School attendance during the SARS-CoV-2 pandemic is intensely debated. Modelling studies suggest that school closures contribute to community transmission reduction. However, data among school-attending students and staff are scarce. In November 2020, we examined SARS-CoV-2 infections and seroreactivity in 24 randomly selected school classes and connected households in Berlin, Germany.
Methods: Students and school staff were examined, oro-nasopharyngeal swabs and blood samples collected, and SARS-CoV-2 infection and IgG antibodies detected by RT-PCR and ELISA. Household members performed self-swabs. Individual and institutional infection prevention and control measures were assessed. Classes with SARS-CoV-2 infection and connected household members were re-tested after one week.
Findings: 1119 participants were examined, including 177 primary and 175 secondary school students, 142 staff, and 625 household members. Participants reported mainly cold symptoms (19·4%). SARS-CoV-2 infection occurred in eight of 24 classes, affecting 1-2 individuals each. Infection prevalence was 2·7% (95%CI; 1·2-5·0%; 9/338), 1·4% (0·2-5·1%; 2/140), and 2·3% (1·3-3·8%; 14/611) among students, staff and household members, respectively, including quarantined persons. Six of nine infected students were asymptomatic. Prevalence increased with inconsistent facemask use in school, walking to school, and case contacts outside school. IgG antibodies were detected in 2·0% (0·8-4·1%; 7/347), 1·4% (0·2-5·0%; 2/141) and 1·4% (0·6-2·7%; 8/576), respectively. For three of nine households with infection(s) detected at cross-sectional assessment, an origin in school seemed possible. After one week, no school-related secondary infections appeared in affected classes; the attack rate in connected households was 1·1%.
Interpretation: These data suggest that school attendance under preventive measures is feasible, provided the measures are rigorously implemented. In balancing the threats and benefits of open versus closed schools during the pandemic, parents and society need to consider possible spill-overs into their households. Deeper insight is needed into the infection risks of being a schoolchild as compared to those of attending school.
Parkinson disease (PD), one of the most common neurodegenerative disorders, is believed to be driven by toxic α-synuclein aggregates that eventually result in the selective loss of vulnerable neuron populations, prominent among them the nigrostriatal dopamine (DA) neurons in the lateral substantia nigra (l-SN). How α-synuclein aggregates initiate a pathophysiological cascade selectively in vulnerable neurons is still unclear. Here, we show that exposure to low nanomolar concentrations of α-synuclein aggregates (i.e., fibrils), but not its monomeric forms, acutely and selectively disrupted the electrical pacemaker function of the DA subpopulation most vulnerable in PD: only dorsolateral-striatum-projecting l-SN DA neurons were electrically silenced by α-synuclein aggregates, while neither neighboring DA neurons in the medial SN projecting to the dorsomedial striatum nor mesolimbic DA neurons in the ventral tegmental area (VTA) were affected. Moreover, we demonstrate that functional K-ATP channels containing the Kir6.2 subunit in DA neurons are necessary to mediate this acute pacemaker disruption by α-synuclein aggregates. Our study thus identifies a molecularly defined target that quickly translates the presence of α-synuclein aggregates into an immediate impairment of essential neuronal function. This constitutes a novel candidate process for how a protein-aggregation-driven sequence in PD is initiated, one that might eventually lead to selective neurodegeneration.
German version: "Expertise als soziale Institution: Die Internalisierung Dritter in den Vertrag." In: Gert Brüggemeier (ed.), Liber Amicorum Eike Schmidt. Müller, Heidelberg, 2005, 303-334.
German version: "Vertragswelten: Das Recht in der Fragmentierung von private governance regimes." Rechtshistorisches Journal 17, 1998, 234-265. Italian version: "Mondi contrattuali: Discourse rights nel diritto privato." In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Naples, 1999, 113-142. Portuguese version: "Mundos contratuais: o direito na fragmentação de regimes de private governance." In: Gunther Teubner, Direito, Sistema, Policontexturalidade. Editora Unimep, Piracicaba, São Paulo, Brazil, 2005, 269-298.
Salt-inducible kinases (SIKs) are key metabolic regulators. Imbalance of SIK function is associated with the development of diverse cancers, including breast, gastric and ovarian cancer. Chemical tools to clarify the roles of SIK in different diseases are, however, sparse and are generally characterized by poor kinome-wide selectivity. Here, we have adapted the pyrido[2,3-d]pyrimidin-7-one-based PAK inhibitor G-5555 for the targeting of SIK, by exploiting differences in the back-pocket region of these kinases. Optimization was supported by high-resolution crystal structures of G-5555 bound to the known off-targets MST3 and MST4, leading to a chemical probe, MRIA9, with dual SIK/PAK activity and excellent selectivity over other kinases. Furthermore, we show that MRIA9 sensitizes ovarian cancer cells to treatment with the mitotic agent paclitaxel, confirming earlier data from genetic knockdown studies and suggesting a combination therapy with SIK inhibitors and paclitaxel for the treatment of paclitaxel-resistant ovarian cancer.
Music, like language, is characterized by hierarchically organized structure that unfolds over time. Music listening therefore requires not only the tracking of notes and beats but also internally constructing high-level musical structures or phrases and anticipating incoming contents. Unlike for language, mechanistic evidence for online musical segmentation and prediction at a structural level is sparse. We recorded neurophysiological data from participants listening to music in its original forms as well as in manipulated versions with locally or globally reversed harmonic structures. We discovered a low-frequency neural component that modulated the neural rhythms of beat tracking and reliably parsed musical phrases. We next identified phrasal phase precession, suggesting that listeners established structural predictions from ongoing listening experience to track phrasal boundaries. The data point to brain mechanisms that listeners use to segment continuous music at the phrasal level and to predict abstract structural features of music.
The purpose of this paper is to describe the TüBa-D/Z treebank of written German and to compare it to the independently developed TIGER treebank (Brants et al., 2002). Both treebanks, TIGER and TüBa-D/Z, use an annotation framework that is based on phrase structure grammar and that is enhanced by a level of predicate-argument structure. The comparison between the annotation schemes of the two treebanks focuses on the different treatments of free word order and discontinuous constituents in German as well as on differences in phrase-internal annotation.
Fungi play pivotal roles in ecosystem functioning, but little is known about their global patterns of diversity, endemicity, vulnerability to global change drivers and conservation priority areas. We applied the high-resolution PacBio sequencing technique to identify fungi based on a long DNA marker that revealed a high proportion of hitherto unknown fungal taxa. We used a Global Soil Mycobiome consortium dataset to test relative performance of various sequencing depth standardization methods (calculation of residuals, exclusion of singletons, traditional and SRS rarefaction, use of Shannon index of diversity) to find optimal protocols for statistical analyses. Altogether, we used six global surveys to infer these patterns for soil-inhabiting fungi and their functional groups. We found that residuals of log-transformed richness (including singletons) against log-transformed sequencing depth yields significantly better model estimates compared with most other standardization methods. With respect to global patterns, fungal functional groups differed in the patterns of diversity, endemicity and vulnerability to main global change predictors. Unlike α-diversity, endemicity and global-change vulnerability of fungi and most functional groups were greatest in the tropics. Fungi are vulnerable mostly to drought, heat, and land cover change. Fungal conservation areas of highest priority include wetlands and moist tropical ecosystems.
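The residual-based standardization that performed best can be sketched simply. This is a hypothetical Python illustration of the stated approach (ordinary least squares on log-log axes), not the consortium's code:

```python
import numpy as np

def depth_standardized_richness(richness, depth):
    """Residuals of log(richness) regressed on log(sequencing depth).
    An OLS line is fit on log-log axes; the per-sample residuals serve
    as depth-standardized richness estimates, uncorrelated with depth
    by construction."""
    log_r = np.log(np.asarray(richness, float))
    log_d = np.log(np.asarray(depth, float))
    slope, intercept = np.polyfit(log_d, log_r, 1)
    return log_r - (intercept + slope * log_d)
```

Unlike rarefaction, this retains all reads, including singletons, which is the property the comparison above favors.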
Can prediction error explain predictability effects on the N1 during picture-word verification?
(2023)
Do early effects of predictability in visual word recognition reflect prediction error? Electrophysiological research investigating word processing has demonstrated predictability effects in the N1, or first negative component of the event-related potential (ERP). However, findings regarding the magnitude of effects and potential interactions of predictability with lexical variables have been inconsistent. Moreover, past studies have typically used categorical designs with relatively small samples and relied on by-participant analyses. Nevertheless, reports have generally shown that predicted words elicit less negative-going (i.e., lower amplitude) N1s, a pattern consistent with a simple predictive coding account. In our preregistered study, we tested this account via the interaction between prediction magnitude and certainty. A picture-word verification paradigm was implemented in which pictures were followed by tightly matched picture-congruent or picture-incongruent written nouns. The predictability of target (picture-congruent) nouns was manipulated continuously based on norms of association between a picture and its name. ERPs from 68 participants revealed a pattern of effects opposite to that expected under a simple predictive coding framework.
Can prediction error explain predictability effects on the N1 during picture-word verification?
(2024)
Do early effects of predictability in visual word recognition reflect prediction error? Electrophysiological research investigating word processing has demonstrated predictability effects in the N1, or first negative component of the event-related potential (ERP). However, findings regarding the magnitude of effects and potential interactions of predictability with lexical variables have been inconsistent. Moreover, past studies have typically used categorical designs with relatively small samples and relied on by-participant analyses. Nevertheless, reports have generally shown that predicted words elicit less negative-going (i.e., lower amplitude) N1s, a pattern consistent with a simple predictive coding account. In our preregistered study, we tested this account via the interaction between prediction magnitude and certainty. A picture-word verification paradigm was implemented in which pictures were followed by tightly matched picture-congruent or picture-incongruent written nouns. The predictability of target (picture-congruent) nouns was manipulated continuously based on norms of association between a picture and its name. ERPs from 68 participants revealed a pattern of effects opposite to that expected under a simple predictive coding framework.
Zinc finger (ZnF) domains appear in a range of structural contexts and, despite their small size, achieve varying target specificities, covering single-stranded and double-stranded DNA and RNA as well as proteins. Combined with other RNA-binding domains, ZnFs enhance the affinity and specificity of RNA-binding proteins (RBPs). The ZnF-containing immunoregulatory RBP Roquin initiates mRNA decay, thereby controlling the adaptive immune system. Its unique ROQ domain shape-specifically recognizes stem-looped cis-elements in mRNA 3’-untranslated regions (UTRs). The N-terminus of Roquin contains a RING domain for protein-protein interactions and a ZnF, which was suggested to play an essential role in RNA decay by Roquin. The ZnF domain boundaries, its RNA motif preference and its interplay with the ROQ domain have remained elusive, not least due to the lack of high-resolution data for this challenging protein. We provide the solution structure of the Roquin-1 ZnF and use an RBNS-NMR pipeline to show that the ZnF recognizes AU-rich elements (AREs). We systematically refine the contributions of adenines in a poly(U) background to specific complex formation. With the simultaneous binding of ROQ and ZnF to a natural target transcript of Roquin, our study for the first time suggests how Roquin integrates RNA shape and sequence specificity through the ROQ-ZnF tandem.
Nuclear pore complexes (NPCs) constitute giant channels within the nuclear envelope that mediate nucleocytoplasmic exchange. NPC diameter is thought to be regulated by nuclear envelope tension, but how such diameter changes are physiologically linked to cell differentiation, where mechanical properties of nuclei are remodeled and nuclear mechanosensing occurs, remains unstudied. Here we used cryo-electron tomography to show that NPCs dilate during differentiation of mouse embryonic stem cells into neural progenitors. In Nup133-deficient cells, which are known to display impaired neural differentiation, NPCs however fail to dilate. By analyzing the architectures of individual NPCs with template matching, we revealed that the Nup133-deficient NPCs are structurally heterogeneous and frequently disintegrate, resulting in the formation of large nuclear envelope openings. We propose that the elasticity of the NPC scaffold mechanically safeguards the nuclear envelope. Our studies provide a molecular explanation for how genetic perturbation of scaffolding components of macromolecular complexes causes tissue-specific phenotypes.
Membrane receptors are central to cell-cell communication. Receptor clustering at the plasma membrane modulates physiological responses, and mesoscale receptor organization is critical for downstream signaling. Spatially restricted cluster formation of the neuropeptide Y2 hormone receptor (Y2R) was observed in vivo; however, the relevance of this confinement is not fully understood. Here, we controlled Y2R clustering in situ by a chelator nanotool. Due to the multivalent interaction, we observed a dynamic exchange in the microscale confined regions. Fast Y2R enrichment in clustered areas triggered a ligand-independent downstream signaling determined by an increase in cytosolic calcium, cell spreading, and migration. We revealed that the cell response to ligand-induced activation was amplified when cells were pre-clustered by the nanotool. Ligand-independent signaling by clustering differed from ligand-induced activation in the binding of arrestin-3 as downstream effector, which was recruited to the confined regions only in the presence of the ligand. This approach enables in situ clustering of membrane receptors and raises the possibility to explore different modalities of receptor activation.
Cryo-electron tomography (cryo-ET) combined with subtomogram averaging (StA) enables structure determination of macromolecules in their native context. A few structures have been reported by StA at resolutions better than 4.5 Å; however, all of these are from viral structural proteins or vesicle coats. Reaching high resolution for a broader range of samples remains uncommon due to beam-induced sample drift, the poor signal-to-noise ratio (SNR) of images, challenges in CTF correction, and the limited number of particles. Here we propose a strategy to address these issues, consisting of a tomographic data-collection scheme and a processing workflow. Tilt series are collected with a higher electron dose at zero-degree tilt in order to increase the SNR. Next, after performing StA conventionally, we extract 2D projections of the particles of interest from the higher-SNR images and use single-particle analysis tools to refine the particle alignment and generate a reconstruction. We benchmarked our proposed hybrid StA (hStA) workflow and improved the resolution for tobacco mosaic virus from 7.2 to 5.2 Å and for the ion channel RyR1 in crowded native membranes from 12.9 to 9.1 Å. These results demonstrate that hStA can improve upon the resolution obtained by conventional StA and promises to be a useful tool for StA projects aiming at subnanometer resolution or higher.
Dendrites display a striking variety of neuronal type-specific morphologies, but the mechanisms and principles underlying such diversity remain elusive. A major player in defining the morphology of dendrites is the neuronal cytoskeleton, including evolutionarily conserved actin-modulatory proteins (AMPs). Still, we lack a clear understanding of how AMPs might support developmental phenomena such as neuron-type specific dendrite dynamics. To address precisely this level of in vivo specificity, we concentrated on a defined neuronal type, the class III dendritic arborisation (c3da) neuron of Drosophila larvae, displaying actin-enriched short terminal branchlets (STBs). Computational modelling reveals that the main branches of c3da neurons follow a general growth model based on optimal wiring, but the STBs do not. Instead, model STBs are defined by a short reach and a high affinity to grow towards the main branches. We thus concentrated on c3da STBs and developed new methods to quantitatively describe dendrite morphology and dynamics based on in vivo time-lapse imaging of mutants lacking individual AMPs. In this way, we extrapolated the role of these AMPs in defining STB properties. We propose that dendrite diversity is supported by the combination of a common step, refined by a neuron type-specific second level. For c3da neurons, we present a molecular model of how the combined action of multiple AMPs in vivo define the properties of these second level specialisations, the STBs.
To be published in J. Phys. G - Proceedings of SQM 2004: We review the results from the various hydrodynamical and transport models on the collective flow observables from AGS to RHIC energies. A critical discussion of the present status of the CERN experiments on hadron collective flow is given. We emphasize the importance of the flow excitation function from 1 to 50 A·GeV: here the hydrodynamic model has predicted the collapse of the v2-flow at ~10 A·GeV; at 40 A·GeV it has recently been observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as evidence for a first-order phase transition at high baryon density rho_B. Moreover, the connection of the elliptic flow v2 to jet suppression is examined. It is proven experimentally that the collective flow is not faked by minijet fragmentation. Additionally, detailed transport studies show that the away-side jet suppression can only partially (< 50%) be due to hadronic rescattering. Furthermore, the change in sign of v1 and v2 close to beam rapidity is related to the occurrence of a high-density first-order phase transition in the RHIC data at 62.5, 130 and 200 A·GeV.
A critical discussion of the present status of the CERN experiments on charm dynamics and hadron collective flow is given. We emphasize the importance of the flow excitation function from 1 to 50 A·GeV: here the hydrodynamic model has predicted the collapse of the v1-flow and of the v2-flow at 10 A·GeV; at 40 A·GeV it has recently been observed by the NA49 collaboration. Since hadronic rescattering models predict much larger flow than observed at this energy, we interpret this observation as potential evidence for a first-order phase transition at high baryon density µB. A detailed discussion of the collective flow as a barometer for the equation of state (EoS) of hot dense matter at RHIC follows. Here, hadronic rescattering models can explain < 30% of the observed elliptic flow, v2, for pT > 2 GeV/c. This is interpreted as evidence for the production of superdense matter at RHIC with initial pressure far above hadronic pressure, p > 1 GeV/fm^3. We suggest that the fluctuations in the flow, v1 and v2, should be measured in the future, since ideal hydrodynamics predicts that they are larger than 50% due to initial-state fluctuations. Furthermore, the QGP coefficient of viscosity may be determined experimentally from the observed fluctuations. The connection of v2 to jet suppression is examined. It is proven experimentally that the collective flow is not faked by minijet fragmentation. Additionally, detailed transport studies show that the away-side jet suppression can only partially (< 50%) be due to hadronic rescattering. We finally propose upgrades and second-generation experiments at RHIC which inspect the first-order phase transition in the fragmentation region, i.e. at µB ≈ 400 MeV (y ≈ 4-5), where the collapse of the proton flow should be seen in analogy to the 40 A·GeV data.
The study of jet-wake-riding potentials and bow shocks caused by jets in the QGP formed at RHIC can give further information on the equation of state (EoS) and transport coefficients of the quark-gluon plasma (QGP).
Results from various theoretical approaches and ideas presented at this exciting meeting (summary talk at the 5th International Conference on Physics and Astrophysics of Quark Gluon Plasma (ICPAQGP - 2005)) are reviewed. I also point towards future directions, in particular hydrodynamic behaviour induced by jets traveling through the quark-gluon plasma, which might be worth looking at in more detail.
The experimental signatures of TeV-mass black hole (BH) formation in heavy ion collisions at the LHC are examined. We find that black hole production results in a complete disappearance of all very high-p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs and those with complete black hole decay. Due to the rather moderate luminosity in the first year of LHC running, the best chance for the observation of BHs or BHRs at this early stage will be by ionizing tracks in the ALICE TPC. Finally, we point out that stable BHRs would be interesting candidates for energy production by conversion of mass to Hawking radiation.
The production of Large Extra Dimension (LXD) black holes (BHs), with a new fundamental mass scale of M_f = 1 TeV, has been predicted to occur at the Large Hadron Collider, LHC, at the formidable rate of 10^8 per year in p-p collisions at full energy, 14 TeV, and at full luminosity. We show that such LXD-BH formation will be experimentally observable at the LHC by the complete disappearance of all very high-p_t (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f = 1 TeV. We suggest to complement this clear cut-off signal at M > 2*500 GeV in the di-jet correlation function by detecting the subsequent Hawking-decay products of the LXD-BHs: either multiple high-energy (> 100 GeV) SM mono-jets (i.e. with the away-side jet missing), sprayed off the evaporating BHs isentropically into all directions, or the thermalization of the multiple overlapping Hawking radiation in a Heckler-Kapusta plasma. Microcanonical quantum-statistical calculations of the Hawking evaporation process for these LXD-BHs show that cold black hole remnants (BHRs) of mass ~ M_f remain as the ashes of these spectacular di-jet-suppressed events. Strong di-jet suppression is also expected with heavy ion beams at the LHC, due to quark-gluon-plasma-induced jet attenuation at medium to low jet energies, p_t < 200 GeV. The (mono-)jets in these events can be used to trigger for tsunami-like emission of secondary compressed QCD matter at well-defined Mach angles, both on the trigger side and on the away-side (missing) jet. The Mach shock angles allow for a direct measurement of both the equation of state (EoS) and the speed of sound c_s via the supersonic bang in the "big bang" matter. We discuss the importance of the underlying strong collective flow - the gluon storm - of the QCD matter for the formation and evolution of these Mach shock cones.
We predict a significant deformation of Mach shocks from the gluon storm in central Au+Au collisions at RHIC and LHC energies, as compared to the case of weakly coupled jets propagating through a static medium. A possible complete stopping of pT > 50 GeV jets at the LHC within 2-3 fm yields nonlinear high-density Mach shocks in the quark-gluon plasma, which can be studied in the complex emission and disintegration pattern of the possibly supercooled matter. We report on first full 3-dimensional fluid-dynamical studies of the strong effects of a first-order phase transition on the evolution and the tsunami-like Mach shock emission of the QCD matter.
In Arabidopsis thaliana, the stem cell niche (SCN) within the root apical meristem (RAM) is maintained by an intricate regulatory network that ensures optimal growth and high developmental plasticity. Yet, many aspects of this regulatory network of stem cell quiescence and replenishment are still not fully understood. Here, we investigate the interplay of the key transcription factors (TFs) BRASSINOSTEROID AT VASCULAR AND ORGANIZING CENTRE (BRAVO), PLETHORA 3 (PLT3) and WUSCHEL-RELATED HOMEOBOX 5 (WOX5) involved in SCN maintenance. Phenotypical analysis of mutants involving these TFs uncovers their combinatorial regulation of cell fates and divisions in the SCN. Moreover, interaction studies employing fluorescence resonance energy transfer fluorescence lifetime imaging microscopy (FRET-FLIM) in combination with novel analysis methods allowed us to quantify protein-protein interaction (PPI) affinities as well as higher-order complex formation of these TFs. We integrated our experimental results into a computational model, suggesting that cell-type-specific profiles of protein complexes and characteristic complex formation, which also depends on prion-like domains in PLT3, contribute to the intricate regulation of the SCN. We propose that these unique protein complex ‘signatures’ could serve as a read-out for cell specificity, thereby adding another layer to the sophisticated regulatory network that balances stem cell maintenance and replenishment in the Arabidopsis root.
Complexome profiling is an emerging ‘omics approach that systematically interrogates the composition of protein complexes (the complexome) of a sample, by combining biochemical separation of native protein complexes with mass-spectrometry based quantitation proteomics. The resulting fractionation profiles hold comprehensive information on the abundance and composition of the complexome, and have a high potential for reuse by experimental and computational researchers. However, the lack of a central resource that provides access to these data, reported with adequate descriptions and an analysis tool, has limited their reuse. Therefore, we established the ComplexomE profiling DAta Resource (CEDAR, www3.cmbi.umcn.nl/cedar/), an openly accessible database for depositing and exploring mass spectrometry data from complexome profiling studies. Compatibility and reusability of the data is ensured by a standardized data and reporting format containing the “minimum information required for a complexome profiling experiment” (MIACE). The data can be accessed through a user-friendly web interface, as well as programmatically using the REST API portal. Additionally, all complexome profiles available on CEDAR can be inspected directly on the website with the profile viewer tool that allows the detection of correlated profiles and inference of potential complexes. In conclusion, CEDAR is a unique, growing and invaluable resource for the study of protein complex composition and dynamics across biological systems.
The ALICE experiment at the LHC investigates the properties of the hot and dense nuclear matter created in heavy-ion collisions. By comparing the particle production in pp and p-Pb collisions, possible nuclear initial state effects can be isolated. Measurements of the ω meson pT-spectra in pp and p-Pb collisions not only allow for a determination of the nuclear modification factor RpPb, but also provide insight into the fragmentation process and serve as vital input for decay background simulations for direct photons. In this contribution, measurements of the ω meson production in pp and p-Pb collisions at √sNN=5.02 TeV are presented. This includes the signal extraction and various corrections of the ω meson yields, leading to their production cross sections and the first measured nuclear modification factor RpPb of the ω meson at LHC energies.
Strangeness enhancement is discussed as a feature specific to relativistic nuclear collisions which create a fireball of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter. The hadron gas at the instant of its formation captures conditions directly at the QCD phase boundary at top SPS and RHIC energy, chiefly the critical temperature and energy density.
A steep maximum occurs in the Wroblewski ratio between strange and non-strange quarks created in central nucleus-nucleus collisions, of about A = 200, at the lower SPS energy √s ≈ 7 GeV. By analyzing hadronic multiplicities within the grand canonical statistical hadronization model, this maximum is shown to occur at a baryochemical potential of about 450 MeV. In comparison, recent QCD lattice calculations at finite baryochemical potential suggest a steep maximum of the light quark susceptibility at similar µB, indicative of the "critical fluctuations" expected to occur at or near the QCD critical endpoint. This endpoint has not been firmly pinned down but should occur in the interval 300 MeV < µB^c < 700 MeV. It is argued that central collisions within the low SPS energy range should exhibit a turning point between compression/heating and expansion/cooling at energy density, temperature and µB close to the suspected critical point, whereas from top SPS to RHIC energy the primordial dynamics create a turning point far above in ε and T, and far below in µB, and at lower AGS energies the dynamical trajectory stays below the phase boundary. Thus, the observed sharp strangeness maximum might coincide with the critical √s at which the dynamics settles at, or near, the QCD endpoint.
A selection of recent data referring to Pb+Pb collisions at the SPS CERN energy of 158 GeV per nucleon is presented which might describe the state of highly excited strongly interacting matter both above and below the deconfinement to hadronization (phase) transition predicted by lattice QCD. A tentative picture emerges in which a partonic state is indeed formed in central Pb+Pb collisions which hadronizes at about T = 185 MeV, and expands its volume more than tenfold, cooling to about 120 MeV before hadronic collisions cease. We suggest further that all SPS collisions, from central S+S onward, reach that partonic phase, the maximum energy density increasing with more massive collision systems.
Hadronic yields and yield ratios observed in Pb+Pb collisions at the SPS energy of 158 GeV per nucleon are known to resemble a thermal equilibrium population at T=180 +/- 10 MeV, also observed in elementary e+ + e- to hadron data at LEP. We argue that this is the universal consequence of the QCD parton to hadron phase transition populating the maximum entropy state. This state is shown to survive the hadronic rescattering and expansion phase, freezing in right after hadronization due to the very rapid longitudinal and transverse expansion that is inferred from Bose-Einstein pion correlation analysis of central Pb+Pb collisions.
With new data available from the SPS at 40 and 80 GeV/A, I review the systematics of bulk hadron multiplicities, with prime focus on strangeness production. The classical concept of strangeness enhancement in central AA collisions is reviewed in view of the statistical hadronization model, which suggests understanding strangeness enhancement as arising chiefly in the transition from the canonical to the grand canonical version of that model; i.e., enhancement results from the fading away of canonical suppression. The model also captures the striking strangeness maximum observed in the vicinity of √s ≈ 8 GeV. A puzzle remains in the understanding of the apparent grand canonical order at the lower SPS and at AGS energies.
Relativistic nucleus-nucleus collisions create a "fireball" of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic, perhaps coherent degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter.
Path integration is a sensorimotor computation that can be used to infer latent dynamical states by integrating self-motion cues. We studied the influence of sensory observation (visual/vestibular) and latent control dynamics (velocity/acceleration) on human path integration using a novel motion-cueing algorithm. Sensory modality and control dynamics were both varied randomly across trials, as participants controlled a joystick to steer to a memorized target location in virtual reality. Visual and vestibular steering cues allowed comparable accuracies only when participants controlled their acceleration, suggesting that vestibular signals, on their own, fail to support accurate path integration in the absence of sustained acceleration. Nevertheless, performance in all conditions reflected a failure to fully adapt to changes in the underlying control dynamics, a result that was well explained by a bias in the dynamics estimation. This work demonstrates how an incorrect internal model of control dynamics affects navigation in volatile environments in spite of continuous sensory feedback.
Under natural conditions, the visual system often sees a given input repeatedly. This provides an opportunity to optimize processing of the repeated stimuli. Stimulus repetition has been shown to strongly modulate neuronal-gamma band synchronization, yet crucial questions remained open. Here we used magnetoencephalography in 30 human subjects and find that gamma decreases across ~10 repetitions and then increases across further repetitions, revealing plastic changes of the activated neuronal circuits. Crucially, changes induced by one stimulus did not affect responses to other stimuli, demonstrating stimulus specificity. Changes partially persisted when the inducing stimulus was repeated after 25 minutes of intervening stimuli. They were strongest in early visual cortex and increased interareal feedforward influences. Our results suggest that early visual cortex gamma synchronization enables adaptive neuronal processing of recurring stimuli. These and previously reported changes might be due to an interaction of oscillatory dynamics with established synaptic plasticity mechanisms.
According to his own understanding, Jürgen Habermas’ Theory of Communicative Action offers a new account of the normative foundations of critical theory. Habermas’ motivating insight is that neither a transcendental or metaphysical solution to the problem of normativity, nor a merely hermeneutic reconstruction of historically given norms, is sufficient to clarify the normative foundations of critical theory. In response to this insight, Habermas develops a novel account of normativity which locates the normative demands upon which critical theory draws within the socially instituted practice of communicative understanding. Although Habermas has claimed otherwise, this new foundation for critical theory constitutes a novel and innovative form of “immanent critique”. To argue for and to clarify this claim, I offer, in section 1, a formal account of immanent critique and distinguish between two different ways of carrying out such a critique. In section 2, I examine Habermas’ rejection of the first, hermeneutic option. Against this background, I then show, in section 3, that the Theory of Communicative Action attempts to formulate an immanent critique of contemporary societies according to a second, “practice-based” model. However, because Habermas, as I will argue in section 4, commits himself to an implausibly narrow view in regard to one central element of such a model – the social ontology of immanent normativity – his normative critique cannot develop its full potential (section 5).
The significance of John McDowell's philosophical program, which already undertakes a revolutionary reorientation within theoretical philosophy, can only be fully appreciated once its consequences for practical philosophy are also taken into view. Mind and World admittedly starts primarily from dilemmas of epistemology. But McDowell's proposal to abandon the equation of external nature with the meaning-free realm of natural laws in favor of a conception of reasons in the world opens up such a novel perspective on the nature of moral judgments that it almost seems as if McDowell's theoretical program had been designed with this gain for practical philosophy in mind.
Spatial attention increases both inter-areal synchronization and spike rates across the visual hierarchy. To investigate whether these attentional changes reflect distinct or common mechanisms, we performed simultaneous laminar recordings of identified cell classes in macaque V1 and V4. Enhanced V4 spike rates were expressed by both excitatory neurons and fast-spiking interneurons, and were most prominent and arose earliest in time in superficial layers, consistent with a feedback modulation. By contrast, V1-V4 gamma-synchronization reflected feedforward communication and surprisingly engaged only fast-spiking interneurons in the V4 input layer. In mouse visual cortex, we found a similar motif for optogenetically identified inhibitory-interneuron classes. Population decoding analyses further indicate that feedback-related increases in spike rates encoded attention more reliably than feedforward-related increases in synchronization. These findings reveal distinct, cell-type-specific feedforward and feedback pathways for the attentional modulation of inter-areal synchronization and spike rates, respectively.
Dissociation rates of J/psi's with comoving mesons: thermal versus nonequilibrium scenario.
(1998)
We study J/psi dissociation processes in hadronic environments. The validity of a thermal meson gas ansatz is tested by confronting it with an alternative, nonequilibrium scenario. Heavy ion collisions are simulated in the framework of the microscopic transport model UrQMD, taking into account the production of charmonium states through hard parton-parton interactions and subsequent rescattering with hadrons. The thermal gas and microscopic transport scenarios are shown to be very dissimilar. Estimates of J/psi survival probabilities based on thermal models of comover interactions in heavy ion collisions are therefore not reliable.
Charmonium production and absorption in heavy ion collisions is studied with the Ultrarelativistic Quantum Molecular Dynamics model. We compare the scenario of universal and time-independent color-octet dissociation cross sections with one of distinct color-singlet J/psi, psi' and chi_c states, evolving from small, color-transparent configurations to their asymptotic sizes. The measured J/psi production cross sections in pA and AB collisions at SPS energies are consistent with both purely hadronic scenarios. The predicted rapidity dependence of J/psi suppression can be used to discriminate between the two experimentally. The importance of interactions with secondary hadrons and the applicability of thermal reaction kinetics to J/psi absorption are investigated. We discuss the effect of nuclear stopping and the role of leading hadrons. The dependence of the psi'/J/psi ratio on the model assumptions and the possible influence of refeeding processes is also studied.
We study J/psi suppression in AB collisions assuming that the charmonium states evolve from small, color transparent configurations. Their interaction with nucleons and nonequilibrated, secondary hadrons is simulated using the microscopic model UrQMD. The Drell-Yan lepton pair yield and the J/psi/Drell-Yan ratio are calculated as a function of the neutral transverse energy in Pb+Pb collisions at 160 GeV and found to be in reasonable agreement with existing data.
Measured hadron yields from relativistic nuclear collisions can be equally well understood in two physically distinct models, namely a static thermal hadronic source vs. a time-dependent, nonequilibrium hadronization off a quark-gluon plasma droplet. Due to the time-dependent particle evaporation off the hadronic surface in the latter approach, the hadron ratios change (by factors of ≲ 5) in time. Final particle yields reflect time averages over the actual thermodynamic properties of the system at a certain stage of the evolution. Calculated hadron, strangelet and (anti-)cluster yields as well as freeze-out times are presented for different systems. Due to strangeness distillation the system moves rapidly out of the T, µq plane into the µs-sector.
The deconfinement transition region between hadronic matter and quark-gluon plasma is studied for finite volumes. Assuming simple model equations of state and a first order phase transition, we find that fluctuations in finite volumes hinder a sharp separation between the two phases around the critical temperature, leading to a rounding of the phase transition. For reaction volumes expected in heavy ion experiments, the softening of the equation of state is reduced considerably. This is especially true when the requirement of exact color-singletness is included in the QGP equation of state.
We present a RQMD calculation of antiproton yields and their momentum distribution in Ne + NaF collisions at 2 GeV/u. The antiprotons can be produced below threshold due to multi-step excitations for which meson-baryon interactions play a considerable role. In this system the annihilation probability for an initially produced antiproton is predicted to be about 65%.
We calculate the evolution of quark-gluon-plasma droplets during hadronization in a thermodynamical model. It is speculated that cooling as well as strangeness enrichment allow for the formation of strangelets even at a very high initial entropy per baryon, S/A_init ≈ 500, and low initial baryon numbers of A_B^init ≈ 30. It is shown that a droplet with vanishing initial chemical potential of strange quarks and a very moderate chemical potential of up/down quarks immediately charges up with strangeness. Baryon densities of ≈ 2 ρ0 and strange chemical potentials of µs > 350 MeV are reached if strangelets are stable. The importance of net baryon and net strangeness fluctuations for possible strangelet formation at RHIC and LHC is emphasized.
A study of secondary Drell-Yan production in nuclear collisions is presented for SPS energies. In addition to the lepton pairs produced in the initial collisions of the projectile and target nucleons, we consider the potentially high dilepton yield from hard valence antiquarks in produced mesons and antibaryons. We calculate the secondary Drell-Yan contributions taking the collision spectrum of hadrons from the microscopic model URQMD. The contributions from meson-baryon interactions, small in hadron-nucleus interactions, are found to be substantial in nucleus-nucleus collisions at low dilepton masses. Preresonance collisions of partons may further increase the yields.
We study the thermodynamic properties of infinite nuclear matter with the Ultrarelativistic Quantum Molecular Dynamics (URQMD), a semiclassical transport model, running in a box with periodic boundary conditions. It appears that the energy density rises faster than T^4 at high temperatures of T ≈ 200-300 MeV. This indicates an increase in the number of degrees of freedom. Moreover, we have calculated direct photon production in Pb+Pb collisions at 160 GeV/u within this model. The direct photon slope from the microscopic calculation equals that from a hydrodynamical calculation without a phase transition in the equation of state of the photon source.
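Why a faster-than-T^4 rise signals extra degrees of freedom follows from the Stefan-Boltzmann relation for an ideal massless gas, ε = g π²/30 T⁴ (natural units): at fixed T, ε/T⁴ is proportional to the effective degeneracy g. A minimal sketch, not taken from the URQMD code (function and variable names are illustrative):

```python
import math

def g_eff(energy_density, T):
    # Invert the Stefan-Boltzmann relation eps = g * pi^2/30 * T^4
    # (natural units, massless ideal gas) to get the effective
    # number of degrees of freedom at temperature T.
    return 30.0 * energy_density / (math.pi**2 * T**4)

T = 0.2  # GeV
# Massless pion gas: g = 3; ideal two-flavour QGP: g = 16 + (7/8)*21 = 37.
eps_pion = 3.0 * math.pi**2 / 30.0 * T**4
eps_qgp = 37.0 * math.pi**2 / 30.0 * T**4
```

A rise of ε/T⁴ between the two limits, as seen in the box calculation, is the standard indicator of liberated partonic degrees of freedom.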
In the framework of RQMD we investigate antiproton observables in massive heavy ion collisions at AGS energies and compare to preliminary results of the E878 collaboration. We focus here on the considerable influence of the real part of an antinucleon-nucleus optical potential on the p̄ momentum spectra. Pacs-numbers: 14.20.Dh, 25.70.-z
We want to draw the attention to the dynamics of a (finite) hadronizing quark matter drop. Strange and antistrange quarks do not hadronize at the same time for a baryon-rich system [1]. Both the hadronic and the quark matter phases enter the strange sector f_s ≠ 0 of the phase diagram almost immediately, which has up to now been neglected in almost all calculations of the time evolution of the system. Therefore it seems questionable whether final particle yields reflect the actual thermodynamic properties of the system at a certain stage of the evolution. We pay special attention to the possible formation of exotic states, namely strangelets (multistrange quark clusters). They may exist as (meta-)stable exotic isomers of nuclear matter [2]. It was speculated that strange matter might also exist as metastable exotic multi-strange (baryonic) objects (MEMOs [3]). The possible creation in heavy ion collisions of long-lived remnants of the quark-gluon plasma, cooled and charged up with strangeness by the emission of pions and kaons, was proposed in [1,4,5]. Strangelets can serve as signatures for the creation of a quark-gluon plasma. Currently, experiments are carried out both at the BNL-AGS and at the CERN-SPS to search for MEMOs and strangelets, e.g. by the E864, E878 and NA52 collaborations [9].
Bleaching-independent, whole-cell, 3D and multi-color STED imaging with exchangeable fluorophores
(2018)
We demonstrate bleaching-independent STED microscopy using fluorogenic labels that reversibly bind to their target structure. A constant exchange of labels guarantees the removal of photobleached fluorophores and their replacement by intact fluorophores, thereby circumventing bleaching-related limitations of STED super-resolution imaging in fixed and living cells. Foremost, we achieve a constant labeling density and demonstrate a fluorescence signal for long and theoretically unlimited acquisition times. Using this concept, we demonstrate whole-cell, 3D, multi-color and live cell STED microscopy with up to 100 min acquisition time.
Coarse-grained modeling has become an important tool to supplement experimental measurements, allowing access to spatio-temporal scales beyond all-atom based approaches. The GōMartini model combines structure- and physics-based coarse-grained approaches, balancing computational efficiency and accurate representation of protein dynamics with the capabilities of studying proteins in different biological environments. This paper introduces an enhanced GōMartini model, which combines a virtual-site implementation of Gō models with Martini 3. The implementation has been extensively tested by the community since the release of the new version of Martini. This work demonstrates the capabilities of the model in diverse case studies, ranging from protein-membrane binding to protein-ligand interactions and AFM force profile calculations. The model is also versatile, as it can address recent inaccuracies reported in the Martini protein model. Lastly, the paper discusses the advantages, limitations, and future perspectives of the Martini 3 protein model and its combination with Gō models.
The traditional view on coding in the cortex is that populations of neurons primarily convey stimulus information through the spike count. However, given the speed of sensory processing, it has been hypothesized that sensory encoding may rely on the spike-timing relationships among neurons. Here, we use a recently developed method based on Optimal Transport Theory called SpikeShip to study the encoding of natural movies by high-dimensional ensembles of neurons in visual cortex. SpikeShip is a generic measure of dissimilarity between spike train patterns based on the relative spike-timing relations among all neurons, with computational complexity similar to the spike count. We compared spike-count and spike-timing codes in populations of up to N > 8000 neurons from six visual areas during natural video presentations. Using SpikeShip, we show that temporal spiking sequences convey substantially more information about natural movies than population spike-count vectors when the neural population size exceeds about 200 neurons. Remarkably, encoding through temporal sequences did not show representational drift either within or between blocks. By contrast, population firing rates showed better coding performance when there were few active neurons. Furthermore, the population firing rate showed memory across frames and formed a continuous trajectory across time. In contrast to temporal spiking sequences, population firing rates exhibited substantial drift across repetitions and between blocks. These findings suggest that spike counts and temporal sequences constitute two different coding schemes with distinct information about natural movies.
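SpikeShip itself compares relative spike timing across all neurons simultaneously; the underlying optimal-transport idea, however, can be illustrated in one dimension, where the optimal matching between two equally weighted spike trains simply pairs spikes in sorted order. The sketch below is a toy transport cost under that assumption, not the SpikeShip measure (names are illustrative):

```python
def transport_cost(train_a, train_b):
    # 1-D optimal transport between two spike trains with equal spike
    # counts and uniform weights: the optimal coupling matches spikes
    # in sorted order, so the cost is the mean absolute time shift.
    assert len(train_a) == len(train_b), "toy version: equal spike counts"
    a, b = sorted(train_a), sorted(train_b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# An identical firing pattern shifted by a constant lag has a transport
# cost equal to that lag, while the raw spike count is unchanged.
base = [0.01, 0.05, 0.12]     # spike times in seconds (illustrative)
shifted = [t + 0.02 for t in base]
```

This is the sense in which a transport-based dissimilarity is sensitive to timing structure that spike-count vectors ignore; the full method additionally factors out a global shift to isolate the relative timing relations among neurons.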