Frankfurt Institute for Advanced Studies (FIAS)
The novel coronavirus (SARS-CoV-2), identified in China at the end of December 2019 and causing the disease COVID-19, has meanwhile led to outbreaks all over the globe, with about 571,700 confirmed cases and about 26,500 deaths as of March 28th, 2020. We present here the preliminary results of a mathematical study intended to inform decisions on the possible application or lifting of control measures in Germany. The developed mathematical models allow us to study the spread of COVID-19 among the population in Germany and to assess the impact of non-pharmaceutical interventions.
The novel coronavirus (SARS-CoV-2), identified in China at the end of December 2019 and causing the disease COVID-19, has meanwhile led to outbreaks all over the globe, with about 2.2 million confirmed cases and more than 150,000 deaths as of April 17, 2020 [37]. In view of the most recent information on testing activity [32], we present here an update of our initial work [4]. In this work, mathematical models have been developed to study the spread of COVID-19 among the population in Germany and to assess the impact of non-pharmaceutical interventions. Systems of differential equations of SEIR type are extended here to account for undetected infections, as well as for stages of infection and age groups. The models are calibrated on data until April 5; data from April 6 to 14 are used for model validation. We simulate different possible strategies for the mitigation of the current outbreak, slowing down the spread of the virus and thus reducing the peak in daily diagnosed cases, the demand for hospitalization or intensive care unit admissions, and eventually the number of fatalities. Our results suggest that a partial (and gradual) lifting of the introduced control measures could soon be possible if accompanied by further increased testing activity, strict isolation of detected cases, and reduced contact with risk groups.
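The SEIR-type extension described in this abstract can be illustrated with a minimal sketch. All compartments and parameter values below are assumptions for illustration (the paper's calibrated, age-structured model is not reproduced here); the only change to a textbook SEIR model is the split of the infectious stage into detected and undetected compartments:

```python
from scipy.integrate import solve_ivp

# Illustrative values only (not the paper's calibrated parameters).
N = 83_000_000        # population of Germany (approx.)
beta = 0.4            # transmission rate per day
sigma = 1 / 5.5       # inverse incubation period
gamma = 1 / 7.0       # inverse infectious period
p_detect = 0.3        # assumed fraction of infections that get detected

def seir_undetected(t, y):
    S, E, I_d, I_u, R = y
    new_exposed = beta * S * (I_d + I_u) / N
    return [
        -new_exposed,                                # dS/dt
        new_exposed - sigma * E,                     # dE/dt
        p_detect * sigma * E - gamma * I_d,          # detected infectious
        (1 - p_detect) * sigma * E - gamma * I_u,    # undetected infectious
        gamma * (I_d + I_u),                         # recovered/removed
    ]

y0 = [N - 100, 50, 20, 30, 0]
sol = solve_ivp(seir_undetected, (0, 120), y0, rtol=1e-8, atol=1)
```

In the full models, the detection fraction, contact rates, and additional infection stages and age groups would be calibrated against reported case data rather than fixed by hand.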
To understand the neural mechanisms underlying brain function, neuroscientists aim to quantify causal interactions between neurons, for instance by perturbing the activity of neuron A and measuring the effect on neuron B. Recently, manipulating neuronal activity using light-sensitive opsins (optogenetics) has increased the specificity of neural perturbation. However, widefield optogenetic interventions usually perturb multiple neurons at once, producing a confound: any of the stimulated neurons can have affected the postsynaptic neuron, making it challenging to discern which neurons produced the causal effect. Here, we show how such confounds produce large biases in interpretations. We explain how confounding can be reduced by combining instrumental variables (IV) and difference-in-differences (DiD) techniques from econometrics. Combined, these methods can estimate (causal) effective connectivity by exploiting the weak, approximately random signal resulting from the interaction between stimulation and the absolute refractory period of the neuron. In simulated neural networks, we find that estimates using ideas from IV and DiD outperform naive techniques, suggesting that methods from causal inference can be useful for disentangling neural interactions in the brain.
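The IV logic sketched in this abstract can be demonstrated on toy data. Everything below is a hypothetical simulation, not the paper's network model: a shared confound drives both neurons, while the (approximately random) refractory state of neuron A serves as the instrument, so a Wald ratio removes the bias of the naive regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Toy trial-level simulation (illustrative assumptions throughout).
confound = rng.normal(size=n)              # shared drive onto both neurons
refractory = rng.integers(0, 2, size=n)    # 1 if neuron A was refractory at stimulation
# Neuron A spikes unless suppressed by its refractory state; the confound
# also drives A, which is what biases the naive estimate.
a_spike = (0.5 * confound + rng.normal(size=n) - 1.5 * refractory > -0.5).astype(float)
true_effect = 0.8
b_rate = true_effect * a_spike + 1.0 * confound + rng.normal(size=n)

# Naive regression of B on A absorbs the shared confound and is biased upward.
naive = np.cov(a_spike, b_rate)[0, 1] / np.var(a_spike, ddof=1)

# IV: the refractory state is approximately random and affects B only
# through A, so the Wald ratio Cov(B, Z) / Cov(A, Z) recovers the effect.
iv = np.cov(b_rate, refractory)[0, 1] / np.cov(a_spike, refractory)[0, 1]
```

The Wald estimate lands near the simulated effect of 0.8, while the naive slope is inflated by the confound; the paper's full approach additionally brings in DiD-style corrections, which are not sketched here.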
A key competence for open-ended learning is the formation of increasingly abstract representations useful for driving complex behavior. Abstract representations ignore specific details and facilitate generalization. Here we consider the learning of abstract representations in a multi-modal setting with two or more input modalities. We treat the problem as a lossy compression problem and show that generic lossy compression of multimodal sensory input naturally extracts abstract representations that tend to strip away modality-specific details and preferentially retain information that is shared across the different modalities. Furthermore, we propose an architecture to learn abstract representations by identifying and retaining only the information that is shared across multiple modalities while discarding any modality-specific information.
Recent advances in artificial neural networks have enabled the quick development of new learning algorithms, which, among other things, pave the way to novel robotic applications. Traditionally, robots are programmed by human experts to accomplish pre-defined tasks. Such robots must operate in a controlled environment to guarantee repeatability, are designed to solve one unique task, and require costly hours of development. In developmental robotics, researchers try to artificially imitate the way living beings acquire their behavior by learning. Learning algorithms are key to conceiving versatile and robust robots that can adapt to their environment and solve multiple tasks efficiently. In particular, Reinforcement Learning (RL) studies the acquisition of skills through teaching via rewards. In this thesis, we will introduce RL and present recent advances in RL applied to robotics. We will review Intrinsically Motivated (IM) learning, a special form of RL, and we will apply in particular the Active Efficient Coding (AEC) principle to the learning of active vision. We also propose an overview of Hierarchical Reinforcement Learning (HRL), another special form of RL, and apply its principle to a robotic manipulation task.
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (< 1.5% of a 10 min resting-state fMRI scan) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather reflected across the entire length of the brain connectivity time series.
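The cofluctuation framework this abstract builds on (CMEP itself is not reproduced here) rests on edge time series: element-wise products of z-scored regional signals, whose root-sum-square (RSS) amplitude per frame defines high-cofluctuation frames. A sketch on random stand-in data, with all sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
T, R = 600, 20                       # frames and regions (toy sizes)
ts = rng.normal(size=(T, R))         # stand-in for regional fMRI time series

# Edge co-fluctuation time series: element-wise products of z-scored signals.
z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
i, j = np.triu_indices(R, k=1)
edge_ts = z[:, i] * z[:, j]          # shape (T, number of region pairs)

# Root-sum-square amplitude per frame; the largest values mark
# "high-cofluctuation" frames.
rss = np.sqrt((edge_ts ** 2).sum(axis=1))
top16 = np.argsort(rss)[-16:]        # e.g. the 16 highest-amplitude frames

# Connectivity estimated from only those frames vs. the full scan.
fc_top = np.corrcoef(z[top16].T)
fc_full = np.corrcoef(z.T)
```

The study's finding is that intelligence prediction does not require the `top16`-style high-RSS frames specifically; temporally distributed frames carry the relevant information.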
Very-long-baseline interferometry (VLBI) observations of active galactic nuclei at millimetre wavelengths have the power to reveal the launching and initial collimation region of extragalactic radio jets, down to 10–100 gravitational radii (rg ≡ GM/c²) scales in nearby sources. Centaurus A is the closest radio-loud source to Earth. It bridges the gap in mass and accretion rate between the supermassive black holes (SMBHs) in Messier 87 and our Galactic Centre. A large southern declination of −43° has, however, prevented VLBI imaging of Centaurus A below a wavelength of 1 cm thus far. Here we show the millimetre VLBI image of the source, which we obtained with the Event Horizon Telescope at 228 GHz. Compared with previous observations, we image the jet of Centaurus A at a tenfold higher frequency and sixteen times sharper resolution and thereby probe sub-lightday structures. We reveal a highly collimated, asymmetrically edge-brightened jet as well as the fainter counterjet. We find that the source structure of Centaurus A resembles the jet in Messier 87 on ~500 rg scales remarkably well. Furthermore, we identify the location of Centaurus A’s SMBH with respect to its resolved jet core at a wavelength of 1.3 mm and conclude that the source’s event horizon shadow should be visible at terahertz frequencies. This location further supports the universal scale invariance of black holes over a wide range of masses.
The cortical networks that underlie behavior exhibit an orderly functional organization at local and global scales, which is readily evident in the visual cortex of carnivores and primates [1-6]. Here, neighboring columns of neurons represent the full range of stimulus orientations and contribute to distributed networks spanning several millimeters [2,7-11]. However, the principles governing functional interactions that bridge this fine-scale functional architecture and distant network elements are unclear, and the emergence of these network interactions during development remains unexplored. Here, by using in vivo wide-field and 2-photon calcium imaging of spontaneous activity patterns in mature ferret visual cortex, we find widespread and specific modular correlation patterns that accurately predict the local structure of visually-evoked orientation columns from the spontaneous activity of neurons that lie several millimeters away. The large-scale networks revealed by correlated spontaneous activity show abrupt ‘fractures’ in continuity that are in tight register with evoked orientation pinwheels. Chronic in vivo imaging demonstrates that these large-scale modular correlation patterns and fractures are already present at early stages of cortical development and predictive of the mature network structure. Silencing feed-forward drive through either retinal or thalamic blockade does not affect network structure, suggesting a cortical origin for this large-scale correlated activity, despite the immaturity of long-range horizontal network connections in the early cortex. Using a circuit model containing only local connections, we demonstrate that such a circuit is sufficient to generate large-scale correlated activity, while also producing correlated networks showing strong fractures, a reduced dimensionality, and an elongated local correlation structure, all in close agreement with our empirical data.
These results demonstrate the precise local and global organization of cortical networks revealed through correlated spontaneous activity and suggest that local connections in early cortical circuits may generate structured long-range network correlations that underlie the subsequent formation of visually-evoked distributed functional networks.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this ‘nature-nurture transform’ using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
The development of binocular vision is an active learning process comprising the development of disparity tuned neurons in visual cortex and the establishment of precise vergence control of the eyes. We present a computational model for the learning and self-calibration of active binocular vision based on the Active Efficient Coding framework, an extension of classic efficient coding ideas to active perception. Under normal rearing conditions, the model develops disparity tuned neurons and precise vergence control, allowing it to correctly interpret random dot stereograms. Under altered rearing conditions modeled after neurophysiological experiments, the model qualitatively reproduces key experimental findings on changes in binocularity and disparity tuning. Furthermore, the model makes testable predictions regarding how altered rearing conditions impede the learning of precise vergence control. Finally, the model predicts a surprising new effect: impaired vergence control affects the statistics of orientation tuning in visual cortical neurons.
Mounting evidence suggests that perception depends on a largely feedforward brain network. However, the discrepancy between (i) the latency of the corresponding feedforward responses (150-200 ms) and (ii) the time it takes human subjects to recognize brief images (often >500 ms) suggests that recurrent neuronal activity is critical to visual processing. Here, we use magnetoencephalography to localize, track and decode the feedforward and recurrent responses elicited by brief presentations of variably-ambiguous letters and digits. We first confirm that these stimuli trigger, within the first 200 ms, a feedforward response in the ventral and dorsal cortical pathways. The subsequent activity is distributed across temporal, parietal and prefrontal cortices and leads to a slow and incremental cascade of representations culminating in action-specific motor signals. We introduce an analytical framework to show that these brain responses are best accounted for by a hierarchy of recurrent neural assemblies. An accumulation of computational delays across specific processing stages explains subjects’ reaction times. Finally, the slow convergence of neural representations towards perceptual categories is quickly followed by all-or-none motor decision signals. Together, these results show how recurrent processes generate, over extended time periods, a cascade of hierarchical decisions that ultimately predicts subjects’ perceptual reports.
The spike protein (S) of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the primary focus for vaccine development. In this study, we combined cryo–electron tomography, subtomogram averaging, and molecular dynamics simulations to structurally analyze S in situ. Compared with the recombinant S, the viral S was more heavily glycosylated and occurred mostly in the closed prefusion conformation. We show that the stalk domain of S contains three hinges, giving the head unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and potentially to the development of safe vaccines.
The spike (S) protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the major focus for vaccine development. We combine cryo-electron tomography, subtomogram averaging and molecular dynamics simulations to structurally analyze S in situ. Compared to recombinant S, the viral S is more heavily glycosylated and occurs predominantly in a closed pre-fusion conformation. We show that the stalk domain of S contains three hinges that give the globular domain unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and the development of safe vaccines. The large-scale tomography data set of SARS-CoV-2 used for this study is sufficient to resolve structural features to below 5 Ångström and is publicly available at EMPIAR-10453.
Abstract
The primary immunological target of COVID-19 vaccines is the SARS-CoV-2 spike (S) protein. S is exposed on the viral surface and mediates viral entry into the host cell. To identify possible antibody binding sites, we performed multi-microsecond molecular dynamics simulations of a 4.1 million atom system containing a patch of viral membrane with four full-length, fully glycosylated and palmitoylated S proteins. By mapping steric accessibility, structural rigidity, sequence conservation, and generic antibody binding signatures, we recover known epitopes on S and reveal promising epitope candidates for structure-based vaccine design. We find that the extensive and inherently flexible glycan coat shields a surface area larger than expected from static structures, highlighting the importance of structural dynamics. The protective glycan shield and the high flexibility of its hinges give the stalk overall low epitope scores. Our computational epitope-mapping procedure is general and should thus prove useful for other viral envelope proteins whose structures have been characterized.
Author summary
The SARS-CoV-2 virus has caused a global health crisis. The spike protein exposed at its surface is key for infection and the primary antibody target. However, spike is covered by highly mobile glycan molecules that could impair antibody binding. To identify accessible epitopes, we performed molecular dynamics simulations of an atomistic model of glycosylated spike embedded in a membrane. By combining extensive simulations with bioinformatics analyses, we recovered known antibody binding sites and identified several epitope candidates as targets for further vaccine development.
Neural computations emerge from recurrent neural circuits that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded neural activity features. However, it is challenging to predict which spiking network connectivity configurations and neural properties can generate fundamental operational states and specific experimentally reported nonlinear cortical computations. Theoretical descriptions for the computational state of cortical spiking circuits are diverse, including the balanced state, where excitatory and inhibitory inputs balance almost perfectly, or the inhibition-stabilized network (ISN) state, where the excitatory part of the circuit is unstable. It remains an open question whether these states can co-exist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We established a mapping between the stabilized supralinear network (SSN) and spiking activity which allowed us to pinpoint the location in parameter space where these activity regimes occur. Notably, we found that biologically sized spiking networks can have irregular asynchronous activity that does not require strong excitation-inhibition balance or large feedforward input, and we showed that the dynamic firing rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
Autophagosome biogenesis requires a localized perturbation of lipid membrane dynamics and a unique protein-lipid conjugate. Autophagy-related (ATG) proteins catalyze this biogenesis on cellular membranes, but the underlying molecular mechanism remains unclear. Focusing on the final step of the protein-lipid conjugation reaction, ATG8/LC3 lipidation, we show how membrane association of the conjugation machinery is organized and fine-tuned at the atomistic level. Amphipathic α-helices in ATG3 proteins (AHATG3) are found to have low hydrophobicity and to be less bulky. Molecular dynamics simulations reveal that AHATG3 regulates the dynamics and accessibility of the thioester bond of the ATG3∼LC3 conjugate to lipids, allowing covalent lipidation of LC3. Live cell imaging shows that the transient membrane association of ATG3 with autophagic membranes is governed by the less bulky and less hydrophobic character of AHATG3. Collectively, the unique properties of AHATG3 facilitate protein-lipid bilayer association, leading to the remodeling of the lipid bilayer required for the formation of autophagosomes.
Human lymph nodes play a central part in immune defense against infectious agents and tumor cells. Lymphoid follicles are spherical compartments of the lymph node, mainly filled with B cells. B cells are cellular components of the adaptive immune system. In the course of a specific immune response, lymphoid follicles pass through different morphological differentiation stages. The morphology and the spatial distribution of lymphoid follicles can sometimes be associated with a particular causative agent and development stage of a disease. We report a new approach for the automatic detection of follicular regions in histological whole slide images of tissue sections immuno-stained with actin. The method is divided into two phases: (1) shock filter-based detection of transition points and (2) segmentation of follicular regions. Follicular regions in 10 whole slide images were manually annotated by visual inspection, and sample surveys were conducted by an expert pathologist. The results of our method were validated by comparison with the manual annotation. On average, we achieved a Zijdenbos similarity index of 0.71, with a standard deviation of 0.07.
Afterimages result from prolonged exposure to still visual stimuli. They are best detectable when viewed against uniform backgrounds and can persist for multiple seconds. Consequently, the dynamics of afterimages appear to be slow by their very nature. On the contrary, we report here that about 50% of an afterimage's intensity can be erased rapidly, within less than a second. The prerequisite is that subjects view rich visual content to erase the afterimage; fast erasure of afterimages does not occur if subjects view a blank screen. Moreover, we find evidence that fast removal of afterimages is a skill learned with practice, as our subjects were always more effective in cleaning up afterimages in later parts of the experiment. These results can be explained by a tri-level hierarchy of adaptive mechanisms, as proposed by the theory of practopoiesis.
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in postexposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
The COVID-19 pandemic is a major public health threat with unanswered questions regarding the role of the immune system in the severity of the disease. In this paper, based on antibody kinetic data of patients with different disease severity, topological data analysis highlights clear differences in the shape of antibody dynamics between three groups of patients: non-severe, severe, and one intermediate case of severity. Subsequently, different mathematical models were developed to quantify the dynamics between the different severity groups. The best model was the one with the lowest median value of the Akaike Information Criterion for all groups of patients. Although high IgG levels have been reported in severe patients, our findings suggest that IgG antibodies in severe patients may be less effective than those in non-severe patients due to early B cell production and early activation of the seroconversion process from IgM to IgG antibody.
Untangling the cell immune response dynamic for severe and critical cases of SARS-CoV-2 infection
(2021)
COVID-19 is a global pandemic that continues to cause high death tolls worldwide. Clinical evidence suggests that COVID-19 patients can be classified as non-severe, severe and critical cases. In particular, studies have highlighted the relationship between lymphopenia and the severity of the illness, where CD8+ T cells have the lowest levels in critical cases. In this work, we aim to elucidate the key parameters that determine whether the course of the disease deviates from a severe to a critical case. To this end, several mathematical models are proposed to represent the dynamics of the immune response in patients with SARS-CoV-2 infection. The best model had a good fit to the reported experimental data, with parameter values in accordance with those found in the literature. Our results suggest that a rapid proliferation of CD8+ T cells is decisive in the severity of the disease.
Tracking influenza A virus infection in the lung from hematological data with machine learning
(2022)
The tracking of pathogen burden and host responses with minimally invasive methods during respiratory infections is central for monitoring disease development and guiding treatment decisions. Utilizing a standardized murine model of respiratory Influenza A virus (IAV) infection, we developed and tested different supervised machine learning models to predict viral burden and immune response markers, i.e. cytokines and leukocytes in the lung, from hematological data. We independently performed in vivo infection experiments to acquire extensive data for training and testing the models. We show here that lung viral load, neutrophil counts, cytokines like IFN-γ and IL-6, and other lung infection markers can be predicted from hematological data. Furthermore, feature analysis of the models shows that blood granulocytes and platelets play a crucial role in prediction and are highly involved in the immune response against IAV. The proposed in silico tools pave the way towards improved tracking and monitoring of influenza infections, and possibly other respiratory infections, based on minimally invasively obtained hematological parameters.
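As a schematic of the supervised setting described above, the sketch below fits a ridge regression from toy "blood" features to a toy "lung" marker. All names, sizes, and coefficients are illustrative stand-ins; the study's actual models and data are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 8

# Stand-ins: p hematological features (e.g. leukocyte or platelet counts)
# predicting one lung marker; the weights and noise level are made up.
X = rng.normal(size=(n, p))
true_w = np.array([1.5, -0.8, 0.0, 0.6, 0.0, 0.0, 0.0, 0.3])
y = X @ true_w + 0.1 * rng.normal(size=n)

# Ridge regression in closed form: w = (X'X + lam*I)^-1 X'y.
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
pred = X @ w_hat
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The magnitudes of `w_hat` play the role of the feature analysis mentioned in the abstract, indicating which blood parameters carry predictive weight; the study evaluates several such supervised models against held-out infection experiments.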
Abstract
Co-infections by multiple pathogens have important implications in many aspects of health, epidemiology and evolution. However, how to disentangle the contributing factors of the immune response when two infections take place at the same time is largely unexplored. Using data sets of the immune response during influenza-pneumococcal co-infection in mice, we employ here topological data analysis to simplify and visualise high dimensional data sets.
We identified persistent shapes of the simplicial complexes of the data in the three infection scenarios: single viral infection, single bacterial infection, and co-infection. The immune response was found to be distinct for each of the infection scenarios, and we uncovered that the immune response during the co-infection has three phases and two transition points. During the first phase, its dynamics is inherited from its response to the primary (viral) infection. The immune response then undergoes an early transition (a few hours post co-infection) and modulates its response to finally react against the secondary (bacterial) infection. Between 18 and 26 hours post co-infection, the nature of the immune response changes again and no longer resembles either of the single infection scenarios.
Author summary
The mapper algorithm is a topological data analysis technique used for the qualitative analysis, simplification and visualisation of high dimensional data sets. It generates a low-dimensional image that captures topological and geometric information of the data set in high dimensional space, which can highlight groups of data points of interest and can guide further analysis and quantification.
To understand how the immune system evolves during the co-infection between viruses and bacteria, and the role of specific cytokines as contributing factors for these severe infections, we use Topological Data Analysis (TDA) along with an extensive semi-unsupervised grid search over parameter values and k-nearest-neighbour analysis.
We find persistent shapes of the data in the three infection scenarios: single viral infection, single bacterial infection, and co-infection. The immune response is shown to be distinct for each of the infection scenarios, and we uncover that the immune response during the co-infection has three phases and two transition points, a previously unknown property of the dynamics of the immune response during co-infection.
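The mapper algorithm described in the author summary condenses to three steps: cover the range of a filter function with overlapping intervals, cluster the points falling into each interval, and connect clusters that share points. A minimal one-dimensional sketch (with an identity filter and gap-based single-linkage clustering; real applications cluster in the original high-dimensional space, and the paper's filter choices are not reproduced):

```python
import numpy as np

def mapper_1d(filter_values, n_intervals=6, overlap=0.3, gap=1.0):
    """Minimal mapper: overlapping interval cover of the filter range,
    per-interval clustering (splitting at gaps larger than `gap`),
    and edges between clusters that share points."""
    lo, hi = filter_values.min(), filter_values.max()
    length = (hi - lo) / n_intervals
    nodes = []                                   # each node: a set of point indices
    for k in range(n_intervals):
        a = lo + k * length - overlap * length
        b = lo + (k + 1) * length + overlap * length
        idx = np.where((filter_values >= a) & (filter_values <= b))[0]
        if idx.size == 0:
            continue
        order = idx[np.argsort(filter_values[idx])]
        cluster = [order[0]]
        for u, v in zip(order, order[1:]):
            if filter_values[v] - filter_values[u] > gap:
                nodes.append(set(cluster))
                cluster = []
            cluster.append(v)
        nodes.append(set(cluster))
    edges = {(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]}
    return nodes, edges

# Two well-separated clouds along the filter axis should yield two
# disconnected pieces in the mapper graph.
vals = np.concatenate([np.linspace(0.0, 1.0, 50), np.linspace(2.5, 3.5, 50)])
nodes, edges = mapper_1d(vals)
```

The resulting node-and-edge graph is the "low-dimensional image" of the data set; in the co-infection analysis, persistent pieces of this graph across parameter choices correspond to the reported phases of the immune response.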
Learning in the eyes: specific changes in gaze patterns track explicit and implicit visual learning
(2020)
What is the link between eye movements and sensory learning? Although some theories have argued for a permanent and automatic interaction between what we know and where we look, which continuously modulates human information-gathering behavior during both implicit and explicit learning, there exists surprisingly little evidence supporting such an ongoing interaction. We used a pure form of implicit learning called visual statistical learning and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, eye movement patterns systematically changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount of knowledge the observers acquired. Our results provide the first evidence for an ongoing and specific interaction between hitherto accumulated knowledge and eye movements during both implicit and explicit learning.
How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities, which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (< 3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast and against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not achieve significant prediction of intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather reflected across the entire length of the brain connectivity time series.
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
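The competition described in this abstract can be sketched with a toy model (illustrative rate constants, not the authors' exact equations): each synapse takes up receptors from a shared, conserved dendritic pool at its own rate a[i] and releases them at a common rate b. At steady state each synapse holds receptors in proportion to its uptake rate, and enlarging the pool scales all efficacies multiplicatively, as the abstract describes.

```python
def simulate_pool(a, total=100.0, b=1.0, dt=0.001, steps=20000):
    """Synapses bind receptors from a shared pool at rates a[i] and
    release them at rate b; the total receptor number is conserved.
    Forward-Euler integration of dr_i/dt = a_i*pool - b*r_i."""
    r = [0.0] * len(a)
    for _ in range(steps):
        pool = total - sum(r)  # receptors currently unbound
        r = [ri + dt * (ai * pool - b * ri) for ri, ai in zip(r, a)]
    return r
```

Doubling `total` doubles every synapse's steady-state receptor number while leaving their ratios unchanged, i.e. fast multiplicative scaling driven purely by the pool size.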
Bacteria of the genera Photorhabdus and Xenorhabdus produce a plethora of natural products to support their similar symbiotic lifecycles. For many of these compounds, the specific bioactivities are unknown. One common challenge in natural product research when trying to prioritize research efforts is the rediscovery of identical (or highly similar) compounds from different strains. Linking genome sequence to metabolite production can help in overcoming this problem. However, sequences are typically not available for entire collections of organisms. Here we perform a comprehensive metabolic screening using HPLC-MS data associated with a 114-strain collection (58 Photorhabdus and 56 Xenorhabdus) from across Thailand and explore the metabolic variation among the strains, matched with several abiotic factors. We utilize machine learning in order to rank the importance of individual metabolites in determining all given metadata. With this approach, we were able to prioritize metabolites in the context of natural product investigations, leading to the identification of previously unknown compounds. The top three highest-ranking features were associated with Xenorhabdus and attributed to the same chemical entity, cyclo(tetrahydroxybutyrate). This work addresses the need for prioritization in high-throughput metabolomic studies and demonstrates the viability of such an approach in future research.
Antimicrobial resistance is a major threat to global health and food security today. Scheduling cycling therapies by targeting phenotypic states associated with specific mutations can help us to eradicate pathogenic variants in chronic infections. In this paper, we introduce a logistic switching model to abstract mutation networks of collateral resistance. We found particular conditions under which the unstable zero-equilibrium of the logistic maps can be stabilized through a switching signal. That is, persistent populations can be eradicated through tailored switching regimens.
Starting from an optimal-control formulation, the switching policies show their potential in the stabilization of the zero-equilibrium for dynamics governed by logistic maps. However, such switching strategies deserve a specific characterization in terms of limit behaviour. Ultimately, we use evolutionary and control algorithms to find both optimal and sub-optimal switching policies. Simulation results show the applicability of Parrondo's Paradox to design cycling therapies against drug resistance.
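The flavour of this result can be reproduced with a toy two-variant model (illustrative growth rates, not the parameter sets from the paper): under either drug alone one variant is sensitive (r < 1) and the other persists (r > 1), but alternating drugs exposes each variant near zero to a net per-cycle multiplier of 0.5 × 1.6 = 0.8 < 1, so the switching schedule stabilizes the zero-equilibrium and drives both populations to extinction.

```python
def step(x, r):
    # logistic update x -> r*x*(1-x) for each variant's population fraction
    return [ri * xi * (1 - xi) for ri, xi in zip(r, x)]

R_A = (0.5, 1.6)  # drug A: variant 1 sensitive, variant 2 resistant
R_B = (1.6, 0.5)  # drug B: collateral sensitivity, roles reversed

def simulate(schedule, x0=(0.2, 0.2), steps=200):
    """Iterate the switched logistic maps under a drug schedule,
    given as a function from step index to 'A' or 'B'."""
    x = list(x0)
    for k in range(steps):
        x = step(x, R_A if schedule(k) == 'A' else R_B)
    return x
```

Constant therapy leaves the resistant variant at a positive fixed point, whereas alternating every step eradicates both variants, a Parrondo-style outcome in which cycling two individually failing therapies succeeds.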
We propose a generalized modeling framework for the kinetic mechanisms of transcriptional riboswitches. The formalism accommodates time-dependent transcription rates and changes of metabolite concentration and permits incorporation of variations in transcription rate depending on transcript length. We derive explicit analytical expressions for the fraction of transcripts that determine repression or activation of gene expression, pause site location and its slowing down of transcription for the case of the (2’dG)-sensing riboswitch from Mesoplasma florum. Our modeling challenges the current view on the exclusive importance of metabolite binding to transcripts containing only the aptamer domain. Numerical simulations of transcription proceeding in a continuous manner under time-dependent changes of metabolite concentration further suggest that rapid modulations in concentration result in a reduced dynamic range for riboswitch function regardless of transcription rate, while a combination of slow modulations and small transcription rates ensures a wide range of finely tuneable regulatory outcomes.
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs for treatment and prevention of influenza and its complications is largely debatable. Here, we developed a transparent mathematical modelling setting to analyse the impact of NAIs on influenza disease at within-host and population level. Analytical and simulation results indicate that even assuming unrealistically high efficacies for NAIs, drug intake starting on the onset of symptoms has a negligible effect on an individual's viral load and symptoms score. Increasing NAIs doses does not provide a better outcome as is generally believed. Considering Tamiflu's pandemic regimen for prophylaxis, different multiscale simulation scenarios reveal modest reductions in epidemic size despite high investments in stockpiling. Our results question the use of NAIs in general to treat influenza as well as the respective stockpiling by regulatory authorities.
The successful elimination of bacteria such as Streptococcus pneumoniae from a host involves the coordination between different parts of the immune system. Previous studies have explored the effects of the initial pneumococcal load (bacterial dose) on different representations of innate immunity, finding that pathogenic outcomes can vary with the size of the bacterial dose. However, others lend support to the notion of dose-independent factors contributing to bacterial clearance. In this paper, we seek to provide a deeper understanding of the immune responses associated with the pneumococcus. To this end, we formulate a model that realizes an abstraction of the innate-regulatory immune host response. Stability and bifurcation analyses of the model reveal the following trichotomy of pneumococcal outcomes determined by the bifurcation parameters: (i) dose-independent clearance; (ii) dose-independent persistence; and (iii) dose-limited clearance. Bistability, where the bacteria-free equilibrium co-stabilizes with the most substantial steady-state bacterial load, is the specific result behind dose-limited clearance. The trichotomy of pneumococcal outcomes here described integrates all previously observed bacterial fates into a unified framework.
The COVID-19 pandemic has underlined the impact of emergent pathogens as a major threat to human health. The development of quantitative approaches to advance comprehension of the current outbreak is urgently needed to tackle this severe disease. In this work, several mathematical models are proposed to represent SARS-CoV-2 dynamics in infected patients. Considering different starting times of infection, parameter sets that represent the infectivity of SARS-CoV-2 are computed and compared with those of other viral infections that can also cause pandemics.
Based on the target cell model, the SARS-CoV-2 infecting time between susceptible cells (a mean of approximately 30 days) is much slower than those reported for Ebola (about 3 times slower) and influenza (60 times slower). The within-host reproductive number for SARS-CoV-2 is consistent with the values for influenza infection (1.7-5.35). The model that best fit the data included immune responses, suggesting a slow cell response peaking between 5 and 10 days post onset of symptoms. The model with an eclipse phase, a latent period before cells become productively infected, was not supported. Interestingly, both the target cell model and the model with immune responses predict that the virus may replicate very slowly in the first days after infection, and it could be below detection levels during the first 4 days post infection. A quantitative comprehension of SARS-CoV-2 dynamics and the estimation of standard parameters of viral infections is the key contribution of this pioneering work.
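The target cell-limited model referred to above has a standard form; the sketch below integrates it with forward Euler using generic illustrative parameters, not the fitted values from this study.

```python
def target_cell_model(T0=1e7, I0=0.0, V0=1.0,
                      beta=1e-7, delta=1.0, p=10.0, c=5.0,
                      days=50, dt=0.01):
    """Forward-Euler integration of the basic target cell-limited model:
       dT/dt = -beta*T*V        (target cells get infected)
       dI/dt = beta*T*V - d*I   (infected cells produced, then cleared)
       dV/dt = p*I - c*V        (virions produced, then cleared)
    Returns the viral-load trajectory."""
    T, I, V = T0, I0, V0
    traj = [V]
    for _ in range(int(days / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
        traj.append(V)
    return traj
```

With these illustrative parameters the within-host reproductive number is R0 = p·beta·T0/(delta·c) = 2, so the viral load grows from a low inoculum, peaks as target cells deplete, and then declines, the qualitative behaviour the abstract describes for SARS-CoV-2.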
The severity of the COVID-19 pandemic, caused by the SARS-CoV-2 coronavirus, calls for the urgent development of a vaccine. The primary immunological target is the SARS-CoV-2 spike (S) protein. S is exposed on the viral surface to mediate viral entry into the host cell. To identify possible antibody binding sites not shielded by glycans, we performed multi-microsecond molecular dynamics simulations of a 4.1 million atom system containing a patch of viral membrane with four full-length, fully glycosylated and palmitoylated S proteins. By mapping steric accessibility, structural rigidity, sequence conservation and generic antibody binding signatures, we recover known epitopes on S and reveal promising epitope candidates for vaccine development. We find that the extensive and inherently flexible glycan coat shields a surface area larger than expected from static structures, highlighting the importance of structural dynamics in epitope mapping.
In particle collider experiments, elementary particle interactions with large momentum transfer produce quarks and gluons (known as partons) whose evolution is governed by the strong force, as described by the theory of quantum chromodynamics (QCD) [1]. These partons subsequently emit further partons in a process that can be described as a parton shower [2], which culminates in the formation of detectable hadrons. Studying the pattern of the parton shower is one of the key experimental tools for testing QCD. This pattern is expected to depend on the mass of the initiating parton, through a phenomenon known as the dead-cone effect, which predicts a suppression of the gluon spectrum emitted by a heavy quark of mass mQ and energy E, within a cone of angular size mQ/E around the emitter [3]. Previously, a direct observation of the dead-cone effect in QCD had not been possible, owing to the challenge of reconstructing the cascading quarks and gluons from the experimentally accessible hadrons. We report the direct observation of the QCD dead cone by using new iterative declustering techniques [4,5] to reconstruct the parton shower of charm quarks. This result confirms a fundamental feature of QCD. Furthermore, the measurement of a dead-cone angle constitutes a direct experimental observation of the non-zero mass of the charm quark, which is a fundamental constant in the standard model of particle physics.
Spike count correlations (SCCs) are ubiquitous in sensory cortices, are characterized by rich structure and arise from structured internal interactions. Yet, most theories of visual perception focus exclusively on the mean responses of individual neurons. Here, we argue that feedback interactions in primary visual cortex (V1) establish the context in which individual neurons process complex stimuli and that changes in visual context give rise to stimulus-dependent SCCs. Measuring V1 population responses to natural scenes in behaving macaques, we show that the fine structure of SCCs is stimulus-specific and variations in response correlations across stimuli are independent of variations in response means. Moreover, we demonstrate that stimulus-specificity of SCCs in V1 can be directly manipulated by controlling the high-order structure of synthetic stimuli. We propose that stimulus-specificity of SCCs is a natural consequence of hierarchical inference where inferences on the presence of high-level image features modulate inferences on the presence of low-level features.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. We hypothesize that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we show that attention enhances the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Attentional enhancement of stimulus decodability from population responses occurs in low dimensional spaces, as revealed by principal component analysis, suggesting an alignment between the attentional and the natural stimulus variance. Moreover, natural scenes produce stimulus-specific oscillatory responses in V1, whose power undergoes a global shift from low to high frequencies with attention. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
In meditation practices that involve focused attention to a specific object, novice practitioners often experience moments of distraction (i.e., mind wandering). Previous studies have investigated the neural correlates of mind wandering during meditation practice through Electroencephalography (EEG) using linear metrics (e.g., oscillatory power). However, their results are not fully consistent. Since the brain is known to be a chaotic/nonlinear system, it is possible that linear metrics cannot fully capture complex dynamics present in the EEG signal. In this study, we assess whether nonlinear EEG signatures can be used to characterize mind wandering during breath focus meditation in novice practitioners. For that purpose, we adopted an experience sampling paradigm in which 25 participants were iteratively interrupted during meditation practice to report whether they were focusing on the breath or thinking about something else. We compared the complexity of EEG signals during mind wandering and breath focus states using three different algorithms: Higuchi's fractal dimension (HFD), Lempel-Ziv complexity (LZC), and Sample entropy (SampEn). Our results showed that EEG complexity was generally reduced during mind wandering relative to breath focus states. We conclude that EEG complexity metrics are appropriate to disentangle mind wandering from breath focus states in novice meditation practitioners, and therefore, they could be used in future EEG neurofeedback protocols to facilitate meditation practice.
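Of the three complexity measures named above, Lempel-Ziv complexity is the simplest to sketch. The toy implementation below counts the phrases in an LZ76-style parsing of a binarised signal; binarising around the median is a common choice, and the study's exact pipeline may differ.

```python
def lempel_ziv_complexity(s):
    """Number of phrases in a simple LZ76-style parsing of string s:
    extend the current phrase while it already occurs earlier in the
    sequence, then start a new phrase and increment the count."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def binarise(signal):
    """Median-split binarisation of a numeric signal."""
    m = sorted(signal)[len(signal) // 2]
    return ''.join('1' if x > m else '0' for x in signal)
```

A strictly periodic sequence parses into very few phrases, whereas an irregular one needs many; in this sense a reduced complexity during mind wandering corresponds to more regular, more compressible EEG activity.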
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results would suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model is able to reproduce a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely-connected artificial neural networks, which have the potential to be more computationally efficient than their fully-connected counterparts and more closely resemble the architectures of biological systems. We here present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely-connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely-used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
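The normalisation itself is simple: divide each afferent weight by the unit's number of nonzero afferents, so that a unit's drive reflects the proportion rather than the absolute number of its active inputs. A minimal sketch follows (hypothetical weight matrix; in practice this would be folded into a sparse layer's forward pass):

```python
def dendritic_normalise(weights):
    """Divide each postsynaptic unit's afferent weights (one row per
    unit) by that unit's number of nonzero afferents, mimicking the
    size-invariance of dendritic responses to distributed inputs."""
    out = []
    for row in weights:
        n = sum(1 for w in row if w != 0.0)
        out.append([w / n if n else 0.0 for w in row])
    return out
```

Units with many afferents are scaled down and sparsely innervated units are left comparatively strong, which is the property the abstract credits for the improved learning performance in sparse networks.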
Dendritic spines are crucial for excitatory synaptic transmission as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither induction of extrinsic synaptic plasticity nor blockage of presynaptic activity degrades the lognormal-like distribution, but each changes its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient for the generation, while a combination of intrinsic and extrinsic synaptic plasticity maintains the lognormal-like distribution of spine sizes.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of stochastic growth and retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve efficient dendritic trees both in terms of wire and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable as seen in their voltage responses to current injections in the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results remain true for the entire range of possible dendritic morphologies irrespective of any particular arborisation complexity. Also, spiking models result in morphology invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. It rather provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
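The length-invariance claim is easy to verify numerically in a passive compartmental model. The sketch below relaxes a uniform cable carrying the same synaptic conductance density in every compartment (illustrative conductances in arbitrary units, not fitted to any cell) and shows that the steady-state depolarisation is identical for short and long dendrites:

```python
def steady_voltage(n_comp, g_leak=1.0, g_syn=0.2, e_syn=60.0,
                   g_axial=50.0, dt=0.005, steps=10000):
    """Relax a sealed-end passive cable of n_comp compartments, each
    with leak conductance g_leak and synaptic conductance g_syn toward
    reversal e_syn, to steady state; returns the voltage at one end."""
    v = [0.0] * n_comp
    for _ in range(steps):
        new = []
        for i, vi in enumerate(v):
            i_axial = 0.0
            if i > 0:
                i_axial += g_axial * (v[i - 1] - vi)
            if i < n_comp - 1:
                i_axial += g_axial * (v[i + 1] - vi)
            new.append(vi + dt * (-g_leak * vi - g_syn * (vi - e_syn) + i_axial))
        v = new
    return v[0]
```

Because the input is spatially uniform, the axial currents vanish at steady state and every compartment settles at g_syn·e_syn/(g_leak + g_syn) = 10 here, regardless of cable length, a toy version of the dendritic constancy described above.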
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex
(2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons; finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with densities of neurite higher than is necessary to reliably ensure the possible implementation of any given connection. We show that potential contacts allow the particular implementation of connectivity at a subcellular level.
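An estimate with the stated dependencies can be sketched as follows; the prefactor of 2 and the exact functional form are illustrative assumptions, not necessarily the equation identified in the paper, which keeps only the proportionalities the abstract names (linear in each arbour's cable length and in maximum spine length, inverse in the overlapping volume):

```python
def expected_potential_contacts(cable_a, cable_b, spine_len, volume):
    """Illustrative estimate of potential anatomical contacts between
    two arbours with cable lengths cable_a and cable_b sharing an
    overlap volume, where a contact is possible within spine_len.
    The prefactor 2 is a hypothetical constant."""
    return 2.0 * spine_len * cable_a * cable_b / volume
```

With, say, 1000 and 2000 um of cable, 2 um spines and a shared volume of 1e6 um^3, this predicts on the order of ten candidate contacts, the kind of excess over required connectivity the abstract discusses.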
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
Abstract Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: Trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing-dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
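The core claim above, that sampling-like variability can arise in a fully deterministic network, can be illustrated with a minimal sketch. The following toy implementation is an assumption-laden caricature, not the original SORN: network sizes, learning rates, and the binary input pattern are invented for illustration. It combines binary McCulloch-Pitts units with an STDP-like rule, synaptic normalization, and intrinsic plasticity, and two runs with the same seed are bit-for-bit identical, so all apparent response variability is deterministic.

```python
import random

def run_sorn(steps=1000, n_e=30, n_i=6, h_ip=0.1,
             eta_stdp=0.004, eta_ip=0.01, seed=0):
    """Toy deterministic SORN-like network of binary McCulloch-Pitts units."""
    rng = random.Random(seed)
    # sparse excitatory->excitatory weights; dense inhibitory couplings
    w_ee = [[rng.random() if rng.random() < 0.1 else 0.0 for _ in range(n_e)]
            for _ in range(n_e)]
    w_ei = [[rng.random() / n_i for _ in range(n_i)] for _ in range(n_e)]
    w_ie = [[rng.random() / n_e for _ in range(n_e)] for _ in range(n_i)]
    t_e = [rng.uniform(0.0, 0.5) for _ in range(n_e)]  # excitatory thresholds
    t_i = [rng.uniform(0.0, 0.5) for _ in range(n_i)]  # inhibitory thresholds
    x = [rng.randint(0, 1) for _ in range(n_e)]        # excitatory states
    y = [0] * n_i                                      # inhibitory states
    rates = []
    for step in range(steps):
        # deterministic, spatio-temporally varying binary input sequence
        u = [1.0 if (i + step) % 13 == 0 else 0.0 for i in range(n_e)]
        x_new = [int(sum(w_ee[i][j] * x[j] for j in range(n_e))
                     - sum(w_ei[i][k] * y[k] for k in range(n_i))
                     + u[i] >= t_e[i]) for i in range(n_e)]
        y = [int(sum(w_ie[k][j] * x_new[j] for j in range(n_e)) >= t_i[k])
             for k in range(n_i)]
        for i in range(n_e):           # STDP: pre-before-post potentiates,
            for j in range(n_e):       # post-before-pre depresses
                if w_ee[i][j] > 0.0:
                    w_ee[i][j] = max(0.0, w_ee[i][j] + eta_stdp *
                                     (x_new[i] * x[j] - x[i] * x_new[j]))
        for i in range(n_e):           # synaptic normalization (homeostatic)
            s = sum(w_ee[i])
            if s > 0.0:
                w_ee[i] = [w / s for w in w_ee[i]]
        for i in range(n_e):           # intrinsic plasticity toward rate h_ip
            t_e[i] += eta_ip * (x_new[i] - h_ip)
        x = x_new
        rates.append(sum(x) / n_e)
    return w_ee, t_e, rates

w_ee, t_e, rates = run_sorn()
mean_rate = sum(rates[-300:]) / 300  # settles near the homeostatic target
```

Despite the irregular-looking activity, rerunning `run_sorn()` with the same seed reproduces the identical spike history, which is the sense in which the abstract's "trial-to-trial variability without intrinsic noise" should be read.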
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models, comparing those with all 15 ion channels to reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the number of ion channels artificially in the reduced models recovered these advantages, confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
Background Corticospinal excitability depends on the current brain state. The recent development of real-time EEG-triggered transcranial magnetic stimulation (EEG-TMS) allows studying this relationship in a causal fashion. Specifically, it has been shown that corticospinal excitability is higher during the scalp surface negative EEG peak compared to the positive peak of µ-oscillations in sensorimotor cortex, as indexed by larger motor evoked potentials (MEPs) for fixed stimulation intensity.
Objective We further characterize the effect of µ-rhythm phase on the MEP input-output (IO) curve by measuring the degree of excitability modulation across a range of stimulation intensities. We furthermore seek to optimize stimulation parameters to enable discrimination of functionally relevant EEG-defined brain states.
Methods A real-time EEG-TMS system was used to trigger MEPs during instantaneous brain-states corresponding to µ-rhythm surface positive and negative peaks with five different stimulation intensities covering an individually calibrated MEP IO curve in 15 healthy participants.
Results MEP amplitude is modulated by µ-phase across a wide range of stimulation intensities, with larger MEPs at the surface negative peak. The largest relative MEP-modulation was observed for weak intensities, the largest absolute MEP-modulation for intermediate intensities. These results indicate a leftward shift of the MEP IO curve during the µ-rhythm negative peak.
Conclusion The choice of stimulation intensity influences the observed degree of corticospinal excitability modulation by µ-phase. Lower stimulation intensities enable more efficient differentiation of EEG µ-phase-defined brain states.
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to co-ordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
Epilepsy can have many different causes and its development (epileptogenesis) involves a bewildering complexity of interacting processes. Here, we present a first-of-its-kind computational model to better understand the role of neuroimmune interactions in the development of acquired epilepsy. Our model describes the interactions between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model is validated using data from animal models that mimic human epileptogenesis caused by infection, status epilepticus, and blood-brain barrier disruption. The mathematical model successfully explains characteristic features of epileptogenesis such as its paradoxically long timescales (up to decades) despite short and transient injuries, or its dependence on the intensity of an injury. Furthermore, stochasticity in the model captures the variability of epileptogenesis outcomes in individuals exposed to identical injury. Notably, in line with the concept of degeneracy, our simulations reveal multiple routes towards epileptogenesis with neuronal loss as a sufficient but non-necessary component. We show that our framework allows for in silico predictions of therapeutic strategies, providing information on injury-specific therapeutic targets and optimal time windows for intervention.
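The two signature features highlighted above, long epileptogenesis timescales despite a brief insult and a dependence on injury intensity, can be reproduced by even a very small nonlinear ODE system. The following two-variable caricature is purely illustrative and is not the paper's model: the variables, parameters, and thresholds are invented assumptions chosen to make the positive feedback bistable.

```python
def epileptogenesis(injury_amp, t_end=200.0, dt=0.01):
    """Euler-integrated toy caricature of injury-driven epileptogenesis.

    `inflammation` decays but is boosted by seizure burden (positive
    feedback); `remodeling` integrates inflammation and drives seizures
    through a sigmoidal dose-response. All constants are illustrative.
    """
    a, b, c, d, k = 2.0, 1.0, 2.0, 1.0, 1.0  # feedback and decay constants
    inflammation = 0.0
    remodeling = 0.0
    t = 0.0
    while t < t_end:
        seizure_burden = remodeling ** 2 / (remodeling ** 2 + k ** 2)
        injury = injury_amp if t < 1.0 else 0.0   # brief, transient insult
        inflammation += dt * (injury + a * seizure_burden - b * inflammation)
        remodeling += dt * (c * inflammation - d * remodeling)
        t += dt
    return remodeling

chronic = epileptogenesis(5.0)  # strong transient injury: runaway remodeling
benign = epileptogenesis(0.1)   # weak injury of identical duration: recovery
```

A strong injury pushes the system past the unstable fixed point of the feedback loop, so remodeling keeps growing long after the insult has ended, while a weak injury of the same duration relaxes back to baseline, a minimal analogue of the intensity dependence described in the abstract.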
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to image large numbers of dendritic spines simultaneously in 3D volumes of neural tissue. Exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines; however, no current automated method for spine detection comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized on the spine detection task. This transfer learning approach is data-efficient while achieving a high detection precision. To train and validate the model we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
Models of perceptual decision making have historically been designed to maximally explain behaviour and brain activity, independently of their ability to actually perform tasks. More recently, performance-optimized models have been shown to correlate with brain responses to images and thus present a complementary approach to understanding perceptual processes. In the present study, we compare how these two approaches account for the spatio-temporal organization of neural responses elicited by ambiguous visual stimuli. Forty-six healthy human subjects performed perceptual decisions on briefly flashed stimuli constructed from ambiguous characters. The stimuli were designed to have 7 orthogonal properties, ranging from low-level sensory properties (e.g. the spatial location of the stimulus) to conceptual (whether the stimulus is a letter or a digit) and task-level properties (i.e. the required hand movement). Magnetoencephalography source and decoding analyses revealed that these 7 levels of representation are sequentially encoded by the cortical hierarchy and actively maintained until the subject responds. This hierarchy appeared poorly correlated with normative, drift-diffusion, and 5-layer convolutional neural network (CNN) models optimized to accurately categorize alphanumeric characters, but partially matched the sequence of activations of 3 of 6 state-of-the-art CNNs trained for natural image labeling (VGG-16, VGG-19, MobileNet). Additionally, we identify several systematic discrepancies between these CNNs and brain activity, revealing the importance of single-trial learning and recurrent processing. Overall, our results strengthen the notion that performance-optimized algorithms can converge towards the computational solution implemented by the human visual system, and open possible avenues to improve artificial perceptual decision making.
Polarization of Λ and ¯Λ hyperons along the beam direction in Pb-Pb collisions at √sNN=5.02 TeV
(2022)
The polarization of the Λ and ¯Λ hyperons along the beam (z) direction, Pz, has been measured in Pb-Pb collisions at √sNN=5.02 TeV recorded with ALICE at the Large Hadron Collider (LHC). The main contribution to Pz comes from elliptic flow-induced vorticity and can be characterized by the second Fourier sine coefficient Pz,s2=⟨Pz sin(2φ−2Ψ2)⟩, where φ is the hyperon azimuthal emission angle and Ψ2 is the elliptic flow plane angle. We report the measurement of Pz,s2 for different collision centralities and, in the 30%–50% centrality interval, as a function of the hyperon transverse momentum and rapidity. The Pz,s2 is positive, similar to the measurement by the STAR Collaboration in Au-Au collisions at √sNN=200 GeV, but with a somewhat smaller amplitude in semicentral collisions. This is the first experimental evidence of a nonzero hyperon Pz in Pb-Pb collisions at the LHC. The comparison of the measured Pz,s2 with hydrodynamic model calculations shows sensitivity to the competing contributions from thermal vorticity and the recently found shear-induced vorticity, as well as to whether the polarization is acquired in the quark-gluon plasma or the hadronic phase.
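The Fourier-coefficient observable defined in the abstract can be illustrated on synthetic data: generate hyperons whose longitudinal polarization is modulated as Pz = A sin(2φ − 2Ψ2) plus measurement noise, and recover Pz,s2 = ⟨Pz sin(2φ − 2Ψ2)⟩, which for a pure sine modulation averages to A/2. The amplitude, noise level, and event-plane angle below are arbitrary illustration values, not ALICE numbers.

```python
import math
import random

rng = random.Random(1)
PSI2 = 0.3       # event-plane angle Psi_2 (radians); assumed known here
A_TRUE = 0.002   # assumed amplitude of the sin(2(phi - Psi_2)) modulation
N = 200_000      # number of synthetic hyperons

# synthetic hyperons: emission angle phi and a noisy P_z "measurement"
phi = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
pz = [A_TRUE * math.sin(2.0 * (p - PSI2)) + rng.gauss(0.0, 0.05)
      for p in phi]

# second Fourier sine coefficient: P_{z,s2} = <P_z sin(2*phi - 2*Psi_2)>
pzs2 = sum(z * math.sin(2.0 * (p - PSI2)) for p, z in zip(phi, pz)) / N
# for the pure sine modulation above, this estimator converges to A_TRUE / 2
```

The event-average projects out the second sine harmonic even when the per-hyperon noise is an order of magnitude larger than the signal, which is why such small polarizations are measurable at all.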
Two types of particles exist in the atmosphere, primary and secondary particles. While primary particles such as soot, mineral dust, sea salt particles or pollen are introduced directly as particles into the atmosphere, secondary particles are formed in the atmosphere by condensation of gases. The formation of such new aerosol particles takes place frequently and at a broad variety of atmospheric conditions and geographic locations. A considerable fraction of the atmospheric particles is formed by such nucleation processes. The newly formed particles may grow by condensation to sizes where they are large enough to act as cloud condensation nuclei and therefore may affect cloud properties. The fundamental processes of aerosol nucleation are described and typical atmospheric observations are discussed. Two recent studies are introduced that potentially change our current understanding of atmospheric nucleation substantially.
ALICE (A Large Ion Collider Experiment) is one of the four large-scale experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real-time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature, and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data-taking period at the LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction, including a new flat data model and a new data-flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
The dynamics of strange pseudoscalar and vector mesons in hot and dense nuclear matter is studied within a chiral unitary framework in coupled channels. Our results set up the starting point for implementations in microscopic transport approaches of heavy-ion collisions, particularly at the conditions of the forthcoming experiments at GSI/FAIR and NICA-Dubna. In the K̄ N sector we focus on the calculation of (off-shell) transition rates for the most relevant binary reactions involved in strangeness production close to threshold energies, with special attention to the excitation of sub-threshold hyperon resonances and isospin effects (e.g. K̄ p vs K̄ n). We also give an overview of recent theoretical developments regarding the dynamics of strange vector mesons (K*, K̄* and ϕ) in the nuclear medium, in connection with experimental activity from heavy-ion collisions and nuclear production reactions. We emphasize the role of hadronic decay modes and the excitation of hyperon resonances as the driving mechanisms modifying the properties of vector mesons.
We introduce a top-down stylized model to analyse the impact of a transition to a European power system based only on wind and solar power. Wind and solar power generation is calculated from high-resolution weather data. Based on the country-specific electricity demand alone, we introduce a model of the conventional power system that facilitates simple spatio-temporal modelling of its macroscopic behavior without direct reference to the underlying technological, economical, and political developments in the system. Using this model, we find that wind and solar power generation can replace conventional power generation and power capacity to a large degree if power transmission across the continent is made possible.
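The kind of top-down balancing such a model performs can be sketched in a few lines: given demand, wind, and solar time series, scale the renewable mix to the average demand and read the conventional backup energy and capacity off the residual load. The sinusoidal profiles and the 60/40 wind/solar mix below are illustrative placeholders, not the study's high-resolution weather and demand data.

```python
import math

HOURS = 24 * 365
# stylized, normalized profiles; placeholders for real weather/demand data
demand = [1.0 + 0.2 * math.sin(2.0 * math.pi * h / 24.0) for h in range(HOURS)]
solar = [max(0.0, math.sin(2.0 * math.pi * ((h % 24) - 6) / 24.0))
         for h in range(HOURS)]                      # daylight half-sine
wind = [0.5 + 0.4 * math.sin(2.0 * math.pi * h / (24.0 * 90.0))
        for h in range(HOURS)]                       # slow seasonal swing

MIX = 0.6  # assumed wind share of the renewable mix
renewable = [MIX * w + (1.0 - MIX) * s for w, s in zip(wind, solar)]
# scale so average renewable generation equals average demand ("100%" scenario)
scale = sum(demand) / sum(renewable)
renewable = [scale * g for g in renewable]

# residual load: covered by conventional plants when positive,
# surplus renewable generation (curtailment or export) when negative
residual = [d - g for d, g in zip(demand, renewable)]
backup_energy = sum(r for r in residual if r > 0.0)
curtailment = sum(-r for r in residual if r < 0.0)
backup_capacity = max(residual)          # conventional capacity still needed
backup_share = backup_energy / sum(demand)
```

Because average renewable generation is scaled to average demand, the annual backup energy exactly equals the annual curtailment; the interesting quantities are how large that mismatch is and how much conventional capacity the worst hour still requires.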
Fluctuations of anisotropic flow in lead-lead collisions at LHC energies arising in the HYDJET++ model are studied. It is shown that intrinsic fluctuations of the flow, which appear mainly because of fluctuations of particle multiplicity, momenta, and coordinates, are insufficient to match the measured experimental data, provided the eccentricity of the freeze-out hypersurface is fixed at any given impact parameter b. However, when variations of the eccentricity in HYDJET++ are taken into account, the agreement between the model results and the data is drastically improved. Both model calculations and the data are filtered through the unfolding procedure. This procedure eliminates the non-flow fluctuations to a high degree, thus indicating a dynamical origin of the flow fluctuations in the HYDJET++ event generator.
We apply the HYDJET++ model, which contains the treatment of both soft and hard processes, to study heavy-ion collisions at LHC energies. The interplay of parametrised hydrodynamics and jets describes many features of the development of particle anisotropic flow, including the break-up of the mass hierarchy of elliptic and triangular flow, the falloff of the flow at a certain transverse momentum, and the violation of the number-of-constituent-quark (NCQ) scaling at LHC energies compared to lower ones. Other signals, such as long-range dihadron correlations (ridge) and event-by-event (EbyE) fluctuations of the flow, are also discussed. Model calculations demonstrate a good agreement with the available experimental data.
Preface
(2012)
The production of charmonia in antiproton-nucleus reactions at plab = 3−10 GeV/c is studied within the Glauber model and the generalized eikonal approximation. The main reaction channel is charmonium formation in an antiproton-proton collision. The target-mass dependence of the charmonium transparency ratio makes it possible to determine the charmonium-nucleon cross section. The polarization effects in the production of χc2 states are evaluated.
We study primary and secondary reactions induced by 600 MeV proton beams in monolithic cylindrical targets made of natural tungsten and uranium by using Monte Carlo simulations with the Geant4 toolkit [1–3]. Bertini intranuclear cascade model, Binary cascade model and IntraNuclear Cascade Liège (INCL) with ABLA model [4] were used as calculational options to describe nuclear reactions. Fission cross sections, neutron multiplicity and mass distributions of fragments for 238U fission induced by 25.6 and 62.9 MeV protons are calculated and compared to recent experimental data [5]. Time distributions of neutron leakage from the targets and heat depositions are calculated.
We found that a true ternary fission with formation of a heavy third fragment (a new kind of radioactivity) is quite possible for superheavy nuclei due to the strong shell effects leading to a three-body clusterization with the two doubly magic tin-like cores. The three-body quasifission process could be even more pronounced for giant nuclear systems formed in collisions of heavy actinide nuclei. In this case a three-body clusterization might be proved experimentally by detection of two coincident lead-like fragments in low-energy U+U collisions.
Using an advanced version of the hadron resonance gas model, we have found several remarkable irregularities at chemical freeze-out. The most prominent of them are two sets of highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon, the total pion number per baryon, and the thermal pion number per baryon, which we found at center-of-mass energies of 3.6-4.9 GeV and 7.6-10 GeV. The low-energy set of quasi-plateaus was predicted long ago. On the basis of the generalized shock-adiabat model, we demonstrate that the low-energy correlated quasi-plateaus give evidence for the anomalous thermodynamic properties of the mixed phase at its boundary to the quark-gluon plasma. The question is whether the high-energy correlated quasi-plateaus are also related to some kind of mixed phase. To answer this question, we employ the results of a systematic meta-analysis of the quality of data description of 10 existing event generators of nucleus-nucleus collisions in the range of center-of-mass collision energies from 3.1 GeV to 17.3 GeV. These generators are divided into two groups: the first group includes generators which account for quark-gluon plasma formation during nuclear collisions, while the second group includes generators which do not assume quark-gluon plasma formation in such collisions. Comparing the quality of data description of more than a hundred different data sets of strange hadrons by these two groups of generators, we find two regions of equal quality of data description, located at center-of-mass collision energies of 4.3-4.9 GeV and 10-13.5 GeV. We interpret these two regions as regions of hadron-quark-gluon mixed phase formation. Such a conclusion is strongly supported by the irregularities in the collision energy dependence of the experimental ratios of the Lambda hyperon number per proton and the positive kaon number per Lambda hyperon.
Although it is at present unclear whether these regions belong to the same mixed phase or not, there are arguments that the most probable collision energy range to probe the (tri)critical endpoint of the QCD phase diagram is 12-14 GeV.
Cysteine cross-linking in native membranes establishes the transmembrane architecture of Ire1
(2021)
The ER is a key organelle of membrane biogenesis and crucial for the folding of both membrane and secretory proteins. Sensors of the unfolded protein response (UPR) monitor the unfolded protein load in the ER and convey effector functions for maintaining ER homeostasis. Aberrant compositions of the ER membrane, referred to as lipid bilayer stress, are equally potent activators of the UPR. How the distinct signals from lipid bilayer stress and unfolded proteins are processed by the conserved UPR transducer Ire1 remains unknown. Here, we have generated a functional, cysteine-less variant of Ire1 and performed systematic cysteine cross-linking experiments in native membranes to establish its transmembrane architecture in signaling-active clusters. We show that the transmembrane helices of two neighboring Ire1 molecules adopt an X-shaped configuration independent of the primary cause for ER stress. This suggests that different forms of stress converge in a common, signaling-active transmembrane architecture of Ire1.
One of the important consequences of Hagedorn's statistical bootstrap model is the prediction of a limiting temperature Tcrit for hadron systems, colloquially known as the Hagedorn temperature. According to Hagedorn, this effect should be observed in hadron spectra obtained in infinite equilibrated nuclear matter rather than in relativistic heavy-ion collisions. We present results of microscopic model calculations for infinite nuclear matter, simulated by a box with periodic boundary conditions. The limiting temperature indeed appears in the model calculations. Its origin is traced to strings and many-body decays of resonances.
These proceedings will cover various studies of hadronic resonances within the UrQMD transport model. After a brief explanation of the model, various observables will be highlighted and the chances for resonance reconstruction in hadronic channels will be discussed. Possible signals of chiral symmetry restoration will be investigated for feasibility.
We propose an effective theory of SU(3) gluonic matter where interactions between color-electric and color-magnetic gluons are constrained by the center and scale symmetries. Through matching to the dimensionally reduced magnetic theories, the magnetic gluon condensate qualitatively changes its thermal behavior above the critical temperature. We discuss its phenomenological consequences for the thermodynamics, in particular the dynamical breaking of scale invariance.
Resonances from PHSD
(2012)
The multi-strange baryon and vector meson resonance production in relativistic nucleus-nucleus collisions is studied within the parton-hadron-string dynamics (PHSD) approach, which incorporates explicit partonic degrees-of-freedom in terms of strongly interacting quasiparticles (quarks and gluons) in line with an equation-of-state from lattice QCD, as well as the dynamical hadronization and hadronic collision dynamics in the final reaction phase. We find a significant effect of the partonic phase on the production of multi-strange antibaryons at SPS energies, due to a slightly enhanced pair production from massive time-like gluon decay and a larger formation of antibaryons in the hadronization process. We furthermore obtain visible in-medium effects in the low-mass dilepton sector from dynamical vector-meson spectral functions from SIS to SPS energies, whereas at RHIC and LHC energies such medium effects become more moderate. In the intermediate-mass regime from 1.1 to 3 GeV, pronounced traces of the partonic degrees of freedom are found at SPS energies, which supersede the hadronic (multi-meson) channels as well as the correlated and uncorrelated semi-leptonic D-meson decays. The dilepton production from the strongly interacting quark-gluon plasma (sQGP) becomes visible already at top SPS energies and more pronounced at RHIC and LHC energies.
The so-called Pygmy Dipole Resonance, an additional structure of low-lying electric dipole strength, has attracted strong interest in the last years. Different experimental approaches have been used in the last decade in order to investigate this new interesting nuclear excitation mode. In this contribution an overview on the available experimental data is given.
The advent of improved experimental and theoretical techniques has brought a lot of attention to the electric dipole (E1) response of atomic nuclei in the last decade. The extensive studies have led to the observation and interpretation of a concentration of E1 strength energetically below the Giant Dipole Resonance in many nuclei. This phenomenon is commonly denoted as Pygmy Dipole Resonance (PDR). This contribution will summarize the most important results obtained using different experimental probes, define the challenges to gain a deeper understanding of the excitations, and discuss the newest experimental developments.
In this article we focus on the appearance of the hadron-quark phase transition and the formation of strange matter in the interior region of a hypermassive neutron star, and its connection with the spectral properties of the emitted gravitational waves (GWs). A strong hadron-quark phase transition might give rise to a mass-radius relation with a twin-star shape, and we show in this article that a twin-star collapse followed by a twin-star oscillation is feasible. If such a twin-star collapse were to happen during the postmerger phase, it would be imprinted in the GW signal.
We present results on hadronic resonance production in high-energy nuclear collisions from the UrQMD hybrid model. In particular, we are interested in the effect of the final hadronic stage on the properties of resonances observable at RHIC and LHC experiments. We investigate whether these observable properties can be used to pinpoint the transition energy density from the QGP phase to the hadronic phase.
Dirac spectrum representations of the Polyakov loop fluctuations are derived on the temporally odd-number lattice, where the temporal length is odd with periodic boundary conditions. We investigate the Polyakov loop fluctuations based on these analytical relations. It is found, both semi-analytically and numerically, that the low-lying Dirac eigenmodes contribute little to the Polyakov loop fluctuations, which are a sensitive probe of quark deconfinement. Our results suggest no direct one-to-one correspondence between quark confinement and chiral symmetry breaking in QCD.
Predictions of popular cosmic ray interaction models for some basic characteristics of cosmic ray-induced extensive air showers are analyzed in view of experimental data on proton-proton collisions, obtained at the Large Hadron Collider. The differences between the results are traced down to different approaches for the treatment of hadronic interactions, implemented in those models. Potential measurements by LHC and cosmic ray experiments, which could be able to discriminate between the alternative approaches, are proposed.
I review the state of the art concerning the treatment of high-energy cosmic ray interactions in the atmosphere, discussing in some detail the underlying physical concepts and the possibilities to constrain the latter by current and future measurements at the Large Hadron Collider. The relation of basic characteristics of hadronic interactions to the properties of nuclear-electromagnetic cascades induced by primary cosmic rays in the atmosphere is addressed.
We discuss the behavior of dynamically-generated charmed baryonic resonances in matter within a unitarized coupled-channel model consistent with heavy-quark spin symmetry. We analyze the implications for the formation of D-meson bound states in nuclei and the propagation of D mesons in heavy-ion collisions from RHIC to FAIR energies.
The STAR experiment has provided an excellent machinery for studying strange matter for more than two decades. Recently, we developed the express procedure, which allows online monitoring of the collected physics data. The high quality of express calibration and reconstruction provides a unique possibility to run the express production and observe strange particles, including mesons, hyperons, resonances, and even hypernuclei, almost in real time.
The STAR Beam Energy Scan II program, including fixed-target Au+Au collisions taken in 2018–2021, is particularly suited to study hypernuclei. Light hypernuclei are expected to be abundantly produced in low energy heavy-ion collisions. Measurements of hypernuclei production and their properties will provide information on the hyperon-nucleon interactions, which are essential ingredients for understanding the nuclear matter equation of state at high net-baryon densities, such as those found inside neutron stars.
With the heavy fragment trigger introduced for the 2021 data taking, we were able to run the express production at the STAR High Level Trigger farm. The collected data were sufficient to observe the decay process ⁵ΛHe → ⁴He + p + π− with more than 11σ significance, measure the binding energy as a function of hypernucleus mass, and study hypernuclei decay properties with the Dalitz plot technique.
Impact of low-energy multipole excitations and pygmy resonances on radiative nucleon captures
(2016)
Nuclear structure theory is considered in the framework of the development of a microscopic model for nucleon-capture astrophysical applications. In particular, microscopically obtained strength functions from a theoretical method incorporating density functional theory and the quasiparticle-phonon model are used as an input in a statistical reaction model. The approach is applied in systematic investigations of the impact of low-energy multipole excitations and pygmy resonances on dipole photoabsorption and radiative neutron- and proton-capture cross sections of key s- and r-process nuclei, which are discussed in comparison with experiment. For the cases of the short-lived isotopes 89Zr and 91Mo, theoretical predictions are made.
Dilepton production in heavy-ion collisions at top SPS energy is investigated within a coarse-graining approach that combines an underlying microscopic evolution of the nuclear reaction with the application of medium-modified spectral functions. Extracting the local energy and baryon density for a grid of small space-time cells and going to each cell's rest frame makes it possible to determine the local temperature and chemical potential by applying an equation of state. This allows for the calculation of thermal dilepton emission. We apply and compare two different spectral functions for the ρ: a hadronic many-body calculation and an approach that uses empirical scattering amplitudes. Quantitatively good agreement of the model calculations with the data from the NA60 collaboration is achieved for both spectral functions, but in detail the hadronic many-body approach leads to a better description, especially of the broadening around the pole mass of the ρ and of the low-mass excess. We further show that the presence of a pion chemical potential significantly influences the dilepton yield.
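The per-cell step of the coarse-graining procedure described above can be sketched as follows. This is a minimal, illustrative sketch under simplifying assumptions (collective, dust-like flow of the cell content); the function name and cell bookkeeping are hypothetical, and in the actual approach the local temperature and chemical potential are then obtained by inverting a tabulated equation of state on the extracted (ε, n_B).

```python
import numpy as np

def cell_rest_frame(E, p, n_B_lab, V=1.0):
    """Boost one space-time cell to its local rest frame.

    E, p    : summed energy and 3-momentum of all particles in the cell
    n_B_lab : net baryon density in the computational frame
    V       : cell volume in the computational frame

    Returns (eps_rest, n_B_rest, gamma): rest-frame energy density,
    rest-frame net baryon density, and the collective Lorentz factor.
    """
    p = np.asarray(p, dtype=float)
    m_inv = np.sqrt(E * E - p.dot(p))   # invariant mass of the cell content
    gamma = E / m_inv                   # collective Lorentz factor of the flow
    eps_rest = m_inv / (gamma * V)      # rest-frame energy density (dust limit)
    n_B_rest = n_B_lab / gamma          # Lorentz-corrected baryon density
    return eps_rest, n_B_rest, gamma

# Example: a cell whose content moves collectively along z
eps, nB, g = cell_rest_frame(E=5.0, p=[0.0, 0.0, 3.0], n_B_lab=0.5)
```

Given (eps, nB) for every cell, the thermal dilepton rate is then integrated over the space-time grid with the chosen in-medium ρ spectral function.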
Due to their penetrating nature, electromagnetic probes, i.e., lepton-antilepton pairs (dileptons) and photons, are unique tools to gain insight into the nature of the hot and dense medium of strongly-interacting particles created in relativistic heavy-ion collisions, including hints at the nature of the restoration of the chiral symmetry of QCD. Of particular interest are the spectral properties of the electromagnetic current-correlation function of these particles within the dense and/or hot medium. The related theoretical investigations of the in-medium properties of the involved particles in both the partonic and hadronic part of the QCD phase diagram underline the importance of a proper understanding of the properties of various hadron resonances in the medium.
Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection online on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms optimized for parallel computations is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely, the CBM experiment.
Background: Organoids are morphologically heterogeneous three-dimensional cell culture systems and serve as an ideal model for understanding the principles of collective cell behaviour in mammalian organs during development, homeostasis, regeneration, and pathogenesis. To investigate the underlying cell organisation principles of organoids, we imaged hundreds of pancreas and cholangiocarcinoma organoids in parallel using light sheet and bright-field microscopy for up to 7 days.
Results: We quantified organoid behaviour at the single-cell (microscale), individual-organoid (mesoscale), and entire-culture (macroscale) levels. At single-cell resolution, we monitored formation, monolayer polarisation, and degeneration and identified diverse behaviours, including lumen expansion and decline (size oscillation), migration, rotation, and multi-organoid fusion. Detailed quantification of individual organoids led to a mechanical 3D agent-based model. A derived scaling law and simulations support the hypothesis that size oscillations depend on organoid properties and cell division dynamics, which is confirmed by bright-field microscopy analysis of entire cultures.
Conclusion: Our multiscale analysis provides a systematic picture of the diversity of cell organisation in organoids by identifying and quantifying the core regulatory principles of organoid morphogenesis.
The search for short-lived particles is usually the final stage in the chain of event reconstruction; it precedes event selection in online mode and physics analysis in offline mode. Most often such short-lived particles are neutral, and they are found and reconstructed through the charged daughter particles resulting from their decay.
The use of the missing mass method makes it possible to find and analyze also decays of charged short-lived particles, when one of the daughter particles is neutral and is not registered in the detector system. One of the best-known examples of such decays is the decay Σ− → nπ−.
In this paper, we discuss in detail the missing mass method, which was implemented as part of the KF Particle Finder package for the search and analysis of short-lived particles, and describe the use of the method in the STAR experiment (BNL, USA).
The method was used to search for pion (π± → μ±ν) and kaon (K± → μ±ν and K± → π±π0) decays online on the HLT farm in the express production chain. An important feature of the express production chain in the STAR experiment is that it allows one to start calibration, production, and analysis of the data immediately after receiving them.
Here, the particular features and results of the real-time application of the method within the express processing of data obtained in the BES-II program at a beam energy of 3.85 GeV/nucleon in fixed-target mode are presented and discussed.
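The kinematic core of the missing mass method can be sketched in a few lines for the Σ− → nπ− example: the charged mother (Σ−) and charged daughter (π−) tracks are measured, and the unobserved neutral daughter is reconstructed as the "missing" four-momentum. This is an illustrative sketch, not the actual KF Particle Finder implementation; the function name and mass constants are assumptions (masses in GeV, PDG-like values).

```python
import math

M_SIGMA = 1.19745   # GeV, Sigma- mass hypothesis for the mother track
M_PION  = 0.13957   # GeV, pi- mass hypothesis for the daughter track

def missing_mass(p_mother, m_mother, p_daughter, m_daughter):
    """Invariant mass of the unobserved neutral daughter.

    p_mother, p_daughter : measured 3-momenta (GeV/c) of the charged
    mother and charged daughter tracks; masses are the hypotheses
    assigned to the two tracks.
    """
    e_mother = math.sqrt(m_mother**2 + sum(x * x for x in p_mother))
    e_daughter = math.sqrt(m_daughter**2 + sum(x * x for x in p_daughter))
    e_miss = e_mother - e_daughter
    p_miss = [a - b for a, b in zip(p_mother, p_daughter)]
    return math.sqrt(e_miss**2 - sum(x * x for x in p_miss))
```

A candidate is accepted if the missing mass lies in a window around the neutron mass; the same function applies to π± → μ±ν and K± → μ±ν with the neutrino as the unobserved particle.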
The production of hypernuclei is investigated for p̅+A→Λ‾+ΛA reactions in a covariant meson exchange approach. Besides the conventional pseudo-scalar (K) and vector (K*) channels, we study for the first time also contributions from the correlated πK scalar channel, described by the κ/K*0 meson. Initial and final state interactions are considered by eikonal theory. The total and angular differential cross sections of the coherent process p̅+AZ→ΛA(Z−1)+Λ̅ are evaluated at beam momenta of 1.5–20 GeV/c within the meson exchange model with bound proton and Λ-hyperon wave functions. It is shown that the shape of the beam momentum dependence of the hypernucleus production cross sections with various discrete Λ states is strongly sensitive to the presence of the scalar κ meson exchange in the p̅p→Λ̅Λ amplitude. This can be used as a clean test of the exchange of the scalar πK correlation in coherent p̅A reactions.
The present status in the field of strange mesons in nuclei and neutron stars is reviewed. In particular, the K̅N interaction, which is governed by the presence of the Λ(1405), is analyzed and the formation of the K̅NN bound state is discussed. Moreover, the properties of K̅ in dense nuclear matter are studied, in connection with strangeness production in nuclear collisions and kaon condensation in neutron stars.
Future operation of the CBM detector requires ultra-fast analysis of the continuous stream of data from all subdetector systems. Determining the inter-system time shifts among individual detector systems in the existing prototype experiment mCBM is an essential step for data processing and, in particular, for stable data taking. Based on the raw measurements from all detector systems, the corresponding time correlations can be obtained at the digital level by evaluating the differences in time stamps. If the relevant systems are stable during data taking and sufficient digital measurements are available, the distribution of time differences displays a clear peak. Up to now, the processed time differences have been stored in histograms and the maximum peak is determined only after the evaluation of all timeslices of a run, leading to significant run times. The results presented here demonstrate the stability of the synchronicity of the mCBM systems. Furthermore, it is illustrated that relatively small amounts of raw measurements are sufficient to evaluate the corresponding time correlations among individual mCBM detectors, thus enabling fast online monitoring of these systems in future online data processing.
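The peak-finding idea above can be sketched as follows: histogram all pairwise time-stamp differences between two detector systems and take the most populated bin as the inter-system shift. This is a hedged illustration, not the mCBM code; the function name, window, and binning are assumptions, and a real implementation would restrict the pairing to one timeslice at a time.

```python
import numpy as np

def inter_system_shift(ts_a, ts_b, window=1000.0, bin_width=1.0):
    """Estimate the time shift between systems A and B.

    ts_a, ts_b : raw time stamps of the two systems (same units);
    only differences within +-window enter the histogram.
    Returns the centre of the most populated bin.
    """
    ts_a = np.asarray(ts_a, dtype=float)
    ts_b = np.asarray(ts_b, dtype=float)
    # all pairwise time differences, then keep those inside the window
    diffs = (ts_a[:, None] - ts_b[None, :]).ravel()
    diffs = diffs[np.abs(diffs) <= window]
    edges = np.arange(-window, window + bin_width, bin_width)
    counts, edges = np.histogram(diffs, bins=edges)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])   # bin centre of the peak
```

Uncorrelated pairs form a flat combinatorial background, while time-correlated measurements from the two systems pile up in a single bin, so even a modest number of raw measurements suffices to locate the shift.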
We discuss in some detail the physics content of the new model, QGSJET-III-01, focusing on major problems related to the treatment of semihard processes in the very high energy limit. Special attention has been paid to the main improvement compared to the QGSJET-II model, which is related to a phenomenological treatment of leading power corrections corresponding to final parton rescattering off soft gluons. In particular, this allowed us to use a separation scale between soft and hard parton physics twice smaller than in the previous model version, QGSJET-II-04. Preliminary results obtained with the new model are also presented.
We present the novel finite-temperature FSU2H* equation-of-state model that covers a wide range of temperatures and lepton fractions for the conditions in proto-neutron stars, neutron star mergers and supernovae. The temperature effects on the thermodynamical observables and the composition of the neutron star core are stronger when the hyperonic degrees of freedom are considered. We pay special attention to the temperature and density dependence of the thermal index in the presence of hyperons and conclude that the true thermal effects cannot be reproduced with the use of a constant Γ law.
We review the composition and the equation of state of the hyperonic core of neutron stars at finite temperature within a relativistic mean-field approach. We make use of the new FSU2H∗ model, which is built upon the FSU2H scheme by improving the Ξ potential according to the recent analysis of Ξ atoms, and we extend it to include finite-temperature corrections. The calculations are done for a wide range of densities, temperatures and charge fractions, thus exploring the different conditions that can be found in proto-neutron stars, binary merger remnants and supernova explosions. The inclusion of hyperons has a strong effect on the composition and the equation of state at finite temperature, which consequently would lead to significant changes in the properties and evolution of hot neutron stars.