Post-merger gravitational-wave signal from neutron-star binaries: a new look at an old problem
(2023)
The spectral properties of the post-merger gravitational-wave signal from a binary of neutron stars encode a variety of information about the features of the system and of the equation of state describing matter around and above nuclear saturation density. Characterising the properties of such a signal is an “old” problem, which first emerged when a number of frequencies were shown to be related to the properties of the binary through “quasi-universal” relations. Here we take a new look at this old problem by computing the properties of the signal in terms of the Weyl scalar ψ4. In this way, and using a database of more than 100 simulations, we provide the first evidence for a new instantaneous frequency, f_0^{ψ4}, associated with the instant of quasi time-symmetry in the post-merger dynamics, and which also follows a quasi-universal relation. We also derive a new quasi-universal relation for the merger frequency f_mer^h, which provides a description of the data that is four times more accurate than previous expressions while requiring fewer fitting coefficients. Finally, consistently with the findings of numerous studies before ours, and using an enlarged ensemble of binary systems, we point out that the ℓ = 2, m = 1 gravitational-wave mode could become comparable with the traditional ℓ = 2, m = 2 mode on sufficiently long timescales, with strain amplitudes in a ratio |h_21|/|h_22| ∼ 0.1 − 1 under generic orientations of the binary, which could be measured by present detectors for signals with large signal-to-noise ratio or by third-generation detectors for generic signals, should no collapse occur.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can spread from symptomatic patients with COVID-19, but also from asymptomatic individuals. Therefore, robust surveillance and timely interventions are essential for the control of virus spread within the community. In this regard, the frequency of testing and the speed of reporting, rather than test sensitivity alone, play a crucial role. In order to reduce costs and meet the expanding demand for real-time RT-PCR (rRT-PCR) testing for SARS-CoV-2, complementary assays, such as rapid antigen tests, have been developed. Rigorous analysis under varying conditions is required to assess the clinical performance of these tests and to ensure reproducible results. We evaluated the sensitivity and specificity of a recently licensed rapid antigen test using 137 clinical samples in two institutions. Test sensitivity was between 88.2% and 89.6% when applied to samples with viral loads typically seen in infectious patients. Of 32 rRT-PCR positive samples, 19 demonstrated infectivity in cell culture, and 84% of these samples were reactive with the antigen test. Seven full-genome-sequenced SARS-CoV-2 isolates and SARS-CoV-1 were detected with this antigen test, with no cross-reactivity against other common respiratory viruses. Numerous antigen tests are available for SARS-CoV-2 testing, and their performance in detecting infectious individuals may vary. Head-to-head comparison, along with cell-culture testing for infectivity, may prove useful to identify better-performing antigen tests. The antigen test analyzed in this study is easy to use, inexpensive, and scalable. It can be helpful in monitoring infection trends and thus has the potential to reduce transmission.
Several recent studies investigated the rhythmic nature of cognitive processes that lead to perception and behavioral report. These studies used different methods, and there has not yet been an agreement on a general standard. Here, we present a way to test and quantitatively compare these methods. We simulated behavioral data from a typical experiment and analyzed these data with several methods. We applied the main methods found in the literature, namely sine-wave fitting, the Discrete Fourier Transform (DFT) and the Least Square Spectrum (LSS). DFT and LSS can be applied both on the averaged accuracy time course and on single trials. LSS is mathematically equivalent to the DFT in the case of regular sampling, but not in the case of irregular sampling, which is the more common one. LSS additionally offers the possibility to take into account a weighting factor which affects the strength of the rhythm, such as arousal. Statistical inferences were done either on the investigated sample (fixed-effect) or on the population (random-effect) of simulated participants. Multiple comparisons across frequencies were corrected using False-Discovery-Rate, Bonferroni, or the Max-Based approach. To perform a quantitative comparison, we calculated Sensitivity, Specificity and D-prime of the investigated analysis methods and statistical approaches. Within the investigated parameter range, single-trial methods had higher sensitivity and D-prime than the methods based on the averaged accuracy time course. This effect was further increased for a simulated rhythm of higher frequency. If an additional (observable) factor influenced detection performance, adding this factor as weight in the LSS further improved Sensitivity and D-prime. For multiple comparison correction, the Max-Based approach provided the highest Specificity and D-prime, closely followed by the Bonferroni approach. Given a fixed total number of trials, the random-effect approach had higher D-prime when trials were distributed over a larger number of participants, even though this meant fewer trials per participant. Finally, we present the idea of using a damped sinusoidal oscillator instead of a simple sinusoidal function, to further improve the fit to behavioral rhythmicity observed after a reset event.
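The DFT/LSS comparison described above can be sketched in a few lines of Python, using `scipy.signal.lombscargle` as the LSS implementation; the 8 Hz rhythm, the sampling rate, and the one-second window are illustrative choices, not parameters from the study:

```python
import numpy as np
from scipy.signal import lombscargle

# Simulated accuracy time course: an 8 Hz rhythm in hit probability,
# regularly sampled at 60 Hz over one second (illustrative values).
f_true = 8.0
t = np.arange(0.0, 1.0, 1.0 / 60)
sig = 0.5 + 0.2 * np.sin(2 * np.pi * f_true * t)

# DFT of the mean-removed signal
freqs = np.fft.rfftfreq(t.size, d=1.0 / 60)
dft_power = np.abs(np.fft.rfft(sig - sig.mean())) ** 2

# LSS (Lomb-Scargle) on the same data; lombscargle expects
# angular frequencies, and we skip the f = 0 bin
ang = 2 * np.pi * freqs[1:]
lss_power = lombscargle(t, sig - sig.mean(), ang)

peak_dft = freqs[np.argmax(dft_power)]
peak_lss = freqs[1:][np.argmax(lss_power)]
```

With regular sampling, both spectra peak at the simulated 8 Hz; rerunning the LSS after dropping samples (irregular sampling) is where the two methods diverge.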
Several studies have probed perceptual performance at different times after a self-paced motor action and found frequency-specific modulations of perceptual performance phase-locked to the action. Such action-related modulation has been reported for various frequencies and modulation strengths. In an attempt to establish a basic effect at the population level, we had a relatively large number of participants (n=50) perform a self-paced button press followed by a detection task at threshold, and we applied both fixed- and random-effects tests. The combined data of all trials and participants surprisingly did not show any significant action-related modulation. However, based on previous studies, we explored the possibility that such modulation depends on the participant’s internal state. Indeed, when we split trials based on performance in neighboring trials, trials in periods of low performance showed an action-related modulation at ≈17 Hz. When we split trials based on performance in the preceding trial, we found that trials following a “miss” showed an action-related modulation at ≈17 Hz. Finally, when we split participants based on their false-alarm rate, we found that participants with no false alarms showed an action-related modulation at ≈17 Hz. All these effects were significant in random-effects tests, supporting an inference on the population. Together, these findings indicate that action-related modulations are not always detectable. However, the results suggest that specific internal states, such as lower attentional engagement and/or a higher decision criterion, are characterized by a modulation in the beta-frequency range.
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
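The competition for a shared receptor pool described above can be illustrated with a minimal simulation. All quantities here (rate constants, per-synapse "slot" counts, pool size) are hypothetical choices for illustration, not the paper's fitted model:

```python
import numpy as np

# Hypothetical rates: receptor attachment from the pool and detachment back into it
k_on, k_off = 1.0, 0.1
s = np.array([1.0, 2.0, 3.0])   # relative "slot" counts of three nearby synapses
p = 10.0                        # free receptors in the shared dendritic pool
w = np.zeros(3)                 # receptors currently bound at each synapse

# Forward-Euler integration of dw_i/dt = k_on * p * s_i - k_off * w_i,
# with the pool losing exactly what the synapses gain (receptors conserved)
dt, steps = 0.001, 50_000
for _ in range(steps):
    dw = k_on * p * s - k_off * w
    w += dt * dw
    p -= dt * dw.sum()

ratios = w / s                  # equal across synapses at steady state
```

In this toy model the steady-state receptor numbers are proportional to the slot counts (`w_i = (k_on / k_off) * p * s_i`), so a synapse's share of the pool scales multiplicatively with its size, mirroring the scaling behavior described above; shrinking the pool `p` weakens all synapses at once, the transient heterosynaptic effect.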
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
The present article proposes a re-reading of what "inclusion" into the sphere of the historical actually means in modern European historical discourse. It argues that this re-reading makes it possible to challenge a powerful but problematic norm of ontological homogeneity as something to be achieved in and by historical discourse. At least some of the more conceptually profound challenges that accounts of "deep history" - of very distant pasts - pose to historical discourse have to do with the pursuit of this norm. Historical theory has the potential to respond to some of these challenges and to turn them back on the practice of accounting for deep times in historical writing. The argument proceeds, in a first step, by analyzing the ties between modern European mortuary cultures and historical writing. In a second step, the history of humanitarian moralities is brought to bear on the analysis in order to make visible, thirdly, the fractured presences of deep time in modern-era and contemporary historical writing. The fractures in question emerge, the article argues, from the ontological heterogeneity of historical knowledge. In the end, a position beyond ontological homogeneity is adumbrated.
Cryo-electron tomography (cryo-ET) is a powerful method to elucidate subcellular architecture and to structurally analyse biomolecules in situ by subtomogram averaging (STA). Specimen thickness is a key factor affecting cryo-ET data quality. Cells that are too thick for transmission imaging can be thinned by cryo-focused-ion-beam (cryo-FIB) milling. However, optimal specimen thickness for cryo-ET on lamellae has not been systematically investigated. Furthermore, the ions used to ablate material can cause damage in the lamellae, thereby reducing STA resolution. Here, we systematically benchmark the resolution depending on lamella thickness and the depth of the particles within the sample. Up to ca. 180 nm, lamella thickness does not negatively impact resolution. This shows that there is no need to generate very thin lamellae and thickness can be chosen such that it captures major cellular features. Furthermore, we show that gallium-ion-induced damage extends to depths of up to 30 nm from either lamella surface.
Generating predictions about environmental regularities, relying on these predictions, and updating them when incoming sensory evidence violates them are considered crucial functions of our cognitive system for being adaptive in the future. The violation of a prediction can result in a prediction error (PE), which affects subsequent memory processing. In our preregistered studies, we examined the effects of different levels of PE on episodic memory. Participants were asked to generate predictions about the associations between sequentially presented cue-target pairs, which were later violated with individual items at three PE levels, namely low, medium, and high PE. Afterwards, participants were asked to provide old/new judgments on the items with confidence ratings, and to retrieve the paired cues. Our results indicated better recognition memory for the low PE level than for the medium and high PE levels, suggesting a memory congruency effect. On the other hand, there was no evidence of a memory benefit for the high PE level. Together, these novel and coherent findings strongly suggest that high PE does not guarantee better memory.