Frankfurt Institute for Advanced Studies (FIAS)
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
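The core idea of the delay-sensitive transfer entropy — scan candidate delays u in I(X_{t-u}; Y_t | Y_{t-1}) and take the maximizing u — can be sketched for binary time series. This is an illustrative plug-in estimator on a toy coupled system, not the authors' implementation (which targets embedded continuous neural data):

```python
import numpy as np

def delayed_transfer_entropy(x, y, u):
    """Plug-in estimate of TE_u = I(X_{t-u}; Y_t | Y_{t-1}) for binary
    series: source past at lag u, target conditioned on its immediately
    preceding state, in keeping with Wiener's principle."""
    t0 = max(u, 1)
    yt = y[t0:]                 # target Y_t
    yp = y[t0 - 1:-1]           # target past Y_{t-1}
    xu = x[t0 - u:len(x) - u]   # source past X_{t-u}
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                p_abc = np.mean((yt == a) & (yp == b) & (xu == c))
                if p_abc == 0:
                    continue
                p_bc = np.mean((yp == b) & (xu == c))
                p_ab = np.mean((yt == a) & (yp == b))
                p_b = np.mean(yp == b)
                te += p_abc * np.log2(p_abc * p_b / (p_bc * p_ab))
    return te

# Toy system: y copies x with a true delay of 3 steps, 25% noise.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 20000)
flip = rng.random(20000) < 0.25
y = np.where(flip, rng.integers(0, 2, 20000), np.roll(x, 3))

te_by_delay = {u: delayed_transfer_entropy(x, y, u) for u in range(1, 7)}
estimated_delay = max(te_by_delay, key=te_by_delay.get)  # expected: 3
```

The scan peaks at the true interaction delay because only at u = 3 does the source past carry information about the target beyond the target's own history.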
The timing of feedback to early visual cortex in the perception of long-range apparent motion
(2008)
When two visual stimuli are presented one after another in different locations, they are often perceived as a single moving object. Feedback from the human motion complex hMT/V5+ to V1 has been hypothesized to play an important role in this illusory perception of motion. We measured event-related responses to illusory motion stimuli of varying apparent motion (AM) content and retinal location using electroencephalography. Detectable cortical stimulus processing started around 60 ms poststimulus in area V1. This component was insensitive to AM content and sequential stimulus presentation. Sensitivity to AM content was observed starting around 90 ms after the second stimulus of a sequence and most likely originated in area hMT/V5+. This AM-sensitive response was insensitive to retinal stimulus position. The stimulus-sequence-related response became sensitive to retinal stimulus position only at a longer latency of 110 ms. We interpret our findings as evidence for feedback from area hMT/V5+ or a related motion-processing area to early visual cortices (V1, V2, V3).
This thesis will first introduce in more detail the Bayesian theory and its use in integrating multiple information sources. I will briefly discuss models and their relation to the dynamics of an environment, and how to combine multiple alternative models. Following that, I will discuss the experimental findings on multisensory integration in humans and animals. I start with psychophysical results from various forms of tasks and setups which show that the brain uses and combines information from multiple cues. Specifically, the discussion will focus on the finding that humans integrate this information in a way that is close to the theoretically optimal performance. Special emphasis will be put on results about the developmental aspects of cue integration, highlighting experiments showing that children do not perform in line with the Bayesian predictions. This section also includes a short summary of experiments on how subjects handle multiple alternative environmental dynamics. I will also cover neurobiological findings of cells receiving input from multiple receptors, both in dedicated brain areas and in primary sensory areas. I will proceed with an overview of existing theories and computational models of multisensory integration. This will be followed by a discussion of reinforcement learning (RL). First I will present the original theory, including the two main approaches, model-free and model-based reinforcement learning. The important variables will be introduced as well as different algorithmic implementations. Secondly, a short review of the mapping of those theories onto brain and behaviour will be given. I mention the most influential papers that showed correlations between the activity in certain brain regions and RL variables, most prominently between dopaminergic neurons and temporal difference errors.
I will try to motivate why I think that this theory can help to explain the development of near-optimal cue integration in humans. The next main chapter will introduce our model that learns to solve the task of audio-visual orienting. Many of the results in this section have been published in [Weisswange et al. 2009b, Weisswange et al. 2011]. The model agent starts without any knowledge of the environment and acts based on predictions of rewards, which are adapted according to the reward signaling the quality of the performed action. I will show that after training this model performs similarly to the prediction of a Bayesian observer. The model can also deal with more complex environments in which multiple underlying generative models are possible (i.e., it performs causal inference). In these experiments I use different formulations of Bayesian observers for comparison with our model, and find that it is most similar to the fully optimal observer doing model averaging. Additional experiments using various alterations of the environment show the ability of the model to react to changes in the input statistics without explicitly representing probability distributions. I will close the chapter with a discussion of the benefits and shortcomings of the model. The thesis continues with a report on an application of the learning algorithm introduced before to two real-world cue integration tasks on a robotic head. For these tasks our system outperforms a commonly used approximation to Bayesian inference, reliability-weighted averaging. The approximation is attractive because of its computational simplicity, but it relies on certain assumptions that are usually controlled for in a laboratory setting and often do not hold for real-world data. This chapter is based on the paper [Karaoguz et al. 2011]. Our second modeling approach addresses the neuronal substrates of the learning process for cue integration.
I again use a reward-based training scheme, but this time implemented as a modulation of synaptic plasticity mechanisms in a recurrent network of binary threshold neurons. I start the chapter with an additional introduction discussing recurrent networks and especially the various forms of neuronal plasticity that I will use in the model. The performance on a task similar to that of chapter 3 will be presented together with an analysis of the influence of different plasticity mechanisms on it. Again benefits and shortcomings as well as the general potential of the method will be discussed. I will close the thesis with a general conclusion and some ideas about possible future work.
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e., weight uncertain cues according to their respective reliabilities, and can even do so when reliabilities are changing. We also consider the case of causal inference, where multimodal signals can originate from one or from multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
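The mechanism can be illustrated with a minimal stand-in: a tabular, model-free learner (plain one-step Q-learning on a discretized cue space — a deliberate simplification, not the paper's actual architecture) receives two noisy position cues of unequal reliability and is rewarded for orienting to the true location. The noise levels and grid size below are arbitrary toy choices:

```python
import numpy as np

N = 5                       # discrete positions 0..4
SIG_V, SIG_A = 0.3, 1.5     # visual cue is more reliable than auditory
rng = np.random.default_rng(1)

def noisy_cue(s, sigma):
    """Discretized noisy observation of the true position s."""
    return int(np.clip(np.rint(s + rng.normal(0.0, sigma)), 0, N - 1))

Q = np.zeros((N, N, N))     # action values Q[cue_v, cue_a, action]
alpha, eps = 0.05, 0.1
for _ in range(150_000):
    s = rng.integers(N)                         # true target position
    cv, ca = noisy_cue(s, SIG_V), noisy_cue(s, SIG_A)
    a = rng.integers(N) if rng.random() < eps else int(np.argmax(Q[cv, ca]))
    r = 1.0 if a == s else 0.0                  # reward for correct orienting
    Q[cv, ca, a] += alpha * (r - Q[cv, ca, a])  # one-step (bandit) Q update

# With conflicting cues, the trained agent sides with the reliable cue.
choice = int(np.argmax(Q[1, 3]))    # visual says 1, auditory says 3
```

After training, the greedy choice for conflicting cues lands near the reliable visual cue, approximating reliability weighting without ever representing a probability distribution explicitly.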
Background: Objects in our environment are often partly occluded, yet we effortlessly perceive them as whole and complete. This phenomenon is called visual amodal completion. Psychophysical investigations suggest that the process of completion starts from a representation of the (visible) physical features of the stimulus and ends with a completed representation of the stimulus. The goal of our study was to investigate both stages of the completion process by localizing both brain regions involved in processing the physical features of the stimulus and brain regions representing the completed stimulus.
Results: Using fMRI adaptation we reveal clearly distinct regions in the visual cortex of humans involved in processing of amodal completion: early visual cortex - presumably V1 - processes the local contour information of the stimulus, whereas regions in the inferior temporal cortex represent the completed shape. Furthermore, our data suggest that at the level of inferior temporal cortex the original local contour information is not preserved but replaced by the representation of the amodally completed percept.
Conclusion: These findings provide neuroimaging evidence for a multiple-step theory of amodal completion and further insights into the neuronal correlates of visual perception.
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation-selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers, independently of the overall network size. These results suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model reproduces a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
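The placement idea can be sketched with a deliberately stripped-down t-SNE (one global Gaussian bandwidth instead of a perplexity search, no momentum or early exaggeration, unlike full implementations or the authors' MAPStoolbox): neurons whose connectivity-derived distances are small should land near each other in the 2-D layout. The toy "circuit" below, with distances given by circular orientation differences, is purely illustrative:

```python
import numpy as np

def tsne_embed(D, dim=2, iters=1000, lr=50.0, seed=0):
    """Minimal t-SNE on a precomputed distance matrix: Gaussian input
    affinities with a single global bandwidth, Student-t output kernel,
    plain gradient descent on the Kullback-Leibler loss."""
    n = D.shape[0]
    P = np.exp(-D**2 / (2 * np.median(D)**2))
    np.fill_diagonal(P, 0.0)
    P = np.maximum((P + P.T) / (2 * P.sum()), 1e-12)
    rng = np.random.default_rng(seed)
    Y = rng.normal(scale=1e-2, size=(n, dim))
    for _ in range(iters):
        diff = Y[:, None, :] - Y[None, :, :]
        w = 1.0 / (1.0 + np.sum(diff**2, axis=-1))    # Student-t kernel
        np.fill_diagonal(w, 0.0)
        Q = np.maximum(w / w.sum(), 1e-12)
        grad = 4.0 * np.sum(((P - Q) * w)[:, :, None] * diff, axis=1)
        Y -= lr * grad
    return Y

# Toy circuit: 60 neurons with circular orientation preferences;
# "connectivity distance" grows with orientation difference.
theta = np.linspace(0, np.pi, 60, endpoint=False)
d_orient = np.abs(theta[:, None] - theta[None, :])
d_orient = np.minimum(d_orient, np.pi - d_orient)     # circular distance
emb = tsne_embed(d_orient)
d_embed = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
```

In the resulting layout, embedding distances correlate with orientation distances, i.e. the placement orders the common feature (orientation) smoothly across the map, which is the ingredient the generative model uses to produce pinwheel-like structure.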
Synchronous neuronal firing has been proposed as a potential neuronal code. To determine whether synchronous firing is really involved in different forms of information processing, one needs to directly compare the amount of synchronous firing due to various factors, such as different experimental or behavioral conditions. In order to address this issue, we present an extended version of the previously published method, NeuroXidence. The improved method incorporates bi- and multivariate testing to determine whether different factors result in synchronous firing occurring above the chance level. We demonstrate through the use of simulated data sets that bi- and multivariate NeuroXidence reliably and robustly detects joint-spike-events across different factors.
We study the effects of the isovector-scalar delta meson on the equation of state (EOS) of neutron-star matter in strong magnetic fields. The EOS of neutron-star matter and the nucleon effective masses are calculated in the framework of Lagrangian field theory, solved within the mean-field approximation. The numerical results show that the delta field leads to a remarkable splitting of the proton and neutron effective masses. The strength of the delta field decreases with increasing magnetic field and becomes negligible at ultrastrong fields. The proton effective mass is strongly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. After including the delta field, the EOS turns out to be stiffer at B < 10^15 G but becomes softer at stronger magnetic fields. The anomalous-magnetic-moment (AMM) terms affect the system only at ultrastrong magnetic fields (B > 10^19 G). In the range of 10^15 G - 10^18 G the properties of neutron-star matter are found to be similar to those without magnetic fields.
How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (< 3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast and against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not achieve significant prediction of intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather reflected across the entire length of the brain connectivity time series.
Highlights
• Brain connectivity states identified by cofluctuation strength.
• CMEP as new method to robustly predict human traits from brain imaging data.
• Network-identifying connectivity ‘events’ are not predictive of cognitive ability.
• Sixteen temporally independent fMRI time frames allow for significant prediction.
• Neuroimaging-based assessment of cognitive ability requires sufficient scan lengths.
Abstract
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (< 1.5% of 10 min resting-state fMRI) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather reflected across the entire length of the brain connectivity time series.
EEG microstate periodicity explained by rotating phase patterns of resting-state alpha oscillations
(2020)
Spatio-temporal patterns in electroencephalography (EEG) can be described by microstate analysis, a discrete approximation of the continuous electric field patterns produced by the cerebral cortex. Resting-state EEG microstates are largely determined by alpha frequencies (8-12 Hz) and we recently demonstrated that microstates occur periodically with twice the alpha frequency.
To understand the origin of microstate periodicity, we analyzed the analytic amplitude and the analytic phase of resting-state alpha oscillations independently. In continuous EEG data we found rotating phase patterns organized around a small number of phase singularities which varied in number and location. The spatial rotation of phase patterns occurred with the underlying alpha frequency. Phase rotors coincided with periodic microstate motifs involving the four canonical microstate maps. The analytic amplitude showed no oscillatory behaviour and was almost static across time intervals of 1-2 alpha cycles, resulting in the global pattern of a standing wave.
In n=23 healthy adults, time-lagged mutual information analysis of microstate sequences derived from amplitude and phase signals of awake eyes-closed EEG records showed that only the phase component contributed to the periodicity of microstate sequences. Phase sequences showed mutual information peaks at multiples of 50 ms and the group average had a main peak at 100 ms (10 Hz), whereas amplitude sequences had a slow and monotonous information decay. This result was confirmed by an independent approach combining temporal principal component analysis (tPCA) and autocorrelation analysis.
We reproduced our observations in a generic model of EEG oscillations composed of coupled non-linear oscillators (Stuart-Landau model). Phase-amplitude dynamics similar to experimental EEG occurred when the oscillators underwent a supercritical Hopf bifurcation, a common feature of many computational models of the alpha rhythm.
These findings explain our previous description of periodic microstate recurrence and its relation to the time scale of alpha oscillations. Moreover, our results corroborate the predictions of computational models and connect experimentally observed EEG patterns to properties of critical oscillator networks.
What is the energy function guiding behavior and learning? Representation-based approaches like maximum entropy, generative models, sparse coding, or slowness principles can account for unsupervised learning of biologically observed structure in sensory systems from raw sensory data. However, they do not relate to behavior. Behavior-based approaches like reinforcement learning explain animal behavior in well-described situations. However, they rely on high-level representations which they cannot extract from raw sensory data. Combining multiple goal functions seems the methodology of choice to understand the complexity of the brain. But what is the set of possible goals? ...
A deep convolutional neural network (CNN) is developed to study symmetry energy (Esym(ρ)) effects by learning the mapping between the symmetry energy and the two-dimensional (transverse momentum and rapidity) distributions of protons and neutrons in heavy-ion collisions. Supervised training is performed with a labeled dataset from ultrarelativistic quantum molecular dynamics (UrQMD) model simulations. It is found that, by using proton spectra on an event-by-event basis as input, the accuracy for classifying the soft and stiff Esym(ρ) is about 60% due to large event-by-event fluctuations, while by using event-summed proton spectra as input, the classification accuracy increases to 98%. The accuracies for the 5-label (5 different Esym(ρ)) classification task are about 58% and 72% using proton and neutron spectra, respectively. For the regression task, the mean absolute errors (MAE), which measure the average magnitude of the absolute differences between the predicted and actual L (the slope parameter of Esym(ρ)), are about 20.4 and 14.8 MeV using proton and neutron spectra, respectively. Fingerprints of the density-dependent nuclear symmetry energy on the transverse momentum and rapidity distributions of protons and neutrons can thus be identified by the convolutional neural network.
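The jump from ~60% (event-by-event) to ~98% (event-summed) accuracy is driven by averaging away per-event fluctuations. A toy stand-in makes the statistical point — a nearest-centroid classifier on synthetic Gaussian "spectra" (not UrQMD output, not a CNN; all numbers below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n_bins, n_events = 20, 100

# Two EOS classes give slightly different mean "spectra"; per-event
# fluctuations are large compared with that mean difference.
mean_soft = 1.0 + 0.05 * np.sin(np.linspace(0, np.pi, n_bins))
mean_stiff = mean_soft + 0.02

def sample_events(mean, n):
    return mean + rng.normal(0.0, 0.3, size=(n, n_bins))

# Nearest-centroid classifier fitted on training events
c_soft = sample_events(mean_soft, 2000).mean(axis=0)
c_stiff = sample_events(mean_stiff, 2000).mean(axis=0)

def classify(spectra):
    d_soft = np.linalg.norm(spectra - c_soft, axis=-1)
    d_stiff = np.linalg.norm(spectra - c_stiff, axis=-1)
    return (d_stiff < d_soft).astype(int)       # 1 = "stiff"

# Event-by-event input: classify single noisy events
acc_single = 0.5 * ((classify(sample_events(mean_soft, 2000)) == 0).mean()
                    + (classify(sample_events(mean_stiff, 2000)) == 1).mean())

# Event-summed input: average n_events events before classifying
def summed(mean, n):
    return sample_events(mean, n * n_events).reshape(n, n_events, n_bins).mean(axis=1)

acc_summed = 0.5 * ((classify(summed(mean_soft, 1000)) == 0).mean()
                    + (classify(summed(mean_stiff, 1000)) == 1).mean())
```

Averaging n events shrinks the per-bin noise by a factor of √n, so the same tiny class difference becomes easily separable — the same reason the event-summed CNN input performs so much better in the paper.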
The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, have produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network’s changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network’s sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects.
This suggests that STDP, IP, and SN may be the driving forces behind our ability to learn complex action sequences.
Infants' poor motor abilities limit their interaction with their environment and render studying infant cognition notoriously difficult. Exceptions are eye movements, which reach high accuracy early, but generally do not allow manipulation of the physical environment. In this study, real-time eye tracking is used to put 6- and 8-month-old infants in direct control of their visual surroundings to study the fundamental problem of discovery of agency, i.e. the ability to infer that certain sensory events are caused by one's own actions. We demonstrate that infants quickly learn to perform eye movements to trigger the appearance of new stimuli and that they anticipate the consequences of their actions in as few as 3 trials. Our findings show that infants can rapidly discover new ways of controlling their environment. We suggest that gaze-contingent paradigms offer effective new ways for studying many aspects of infant learning and cognition in an interactive fashion and provide new opportunities for behavioral training and treatment in infants.
Spherical harmonics coefficients for ligand-based virtual screening of cyclooxygenase inhibitors
(2011)
Background: Molecular descriptors are essential for many applications in computational chemistry, such as ligand-based similarity searching. Spherical harmonics have previously been suggested as comprehensive descriptors of molecular structure and properties. We investigate a spherical harmonics descriptor for shape-based virtual screening. Methodology/Principal Findings: We introduce and validate a partially rotation-invariant three-dimensional molecular shape descriptor based on the norm of spherical harmonics expansion coefficients. Using this molecular representation, we parameterize molecular surfaces, i.e., isosurfaces of spatial molecular property distributions. We validate the shape descriptor in a comprehensive retrospective virtual screening experiment. In a prospective study, we virtually screen a large compound library for cyclooxygenase inhibitors, using a self-organizing map as a pre-filter and the shape descriptor for candidate prioritization. Conclusions/Significance: 12 compounds were tested in vitro for direct enzyme inhibition and in a whole blood assay. Active compounds containing a triazole scaffold were identified as direct cyclooxygenase-1 inhibitors. This outcome corroborates the usefulness of spherical harmonics for representation of molecular shape in virtual screening of large compound collections. The combination of pharmacophore and shape-based filtering of screening candidates proved to be a straightforward approach to finding novel bioactive chemotypes with minimal experimental effort.
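The rotation-invariant core of such a descriptor — per-degree norms of spherical-harmonics expansion coefficients of a function on the sphere — can be sketched in a few lines. This is a minimal illustration with real spherical harmonics hard-coded up to degree l = 2 and a synthetic "surface" function; the actual descriptor expands to much higher degree and parameterizes real molecular property isosurfaces:

```python
import numpy as np

# Real spherical harmonics up to degree l = 2, as functions of the
# Cartesian coordinates (x, y, z) of points on the unit sphere.
HARMONICS = {
    0: [lambda x, y, z: 0.282095 * np.ones_like(z)],
    1: [lambda x, y, z: 0.488603 * y,
        lambda x, y, z: 0.488603 * z,
        lambda x, y, z: 0.488603 * x],
    2: [lambda x, y, z: 1.092548 * x * y,
        lambda x, y, z: 1.092548 * y * z,
        lambda x, y, z: 0.315392 * (3 * z**2 - 1),
        lambda x, y, z: 1.092548 * x * z,
        lambda x, y, z: 0.546274 * (x**2 - y**2)],
}

def shape_descriptor(f_vals, x, y, z, weights):
    """Per-degree norms of the spherical-harmonics coefficients of a
    function sampled on a quadrature grid; the norm over orders m of a
    given degree l is invariant under rotation of the function."""
    return np.array([
        np.sqrt(sum(np.sum(weights * f_vals * Y(x, y, z))**2 for Y in Ys))
        for Ys in HARMONICS.values()
    ])

# Quadrature grid on the sphere (theta x phi), weight sin(theta) dtheta dphi.
n_t, n_p = 60, 120
theta = (np.arange(n_t) + 0.5) * np.pi / n_t
phi = np.arange(n_p) * 2 * np.pi / n_p
T, P = np.meshgrid(theta, phi, indexing="ij")
X, Yc, Z = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)
W = np.sin(T) * (np.pi / n_t) * (2 * np.pi / n_p)

# A lumpy "surface" function and a copy rotated about z by 30 degrees
f = 1 + 0.5 * X + 0.3 * X * Yc
f_rot = np.roll(f, n_p // 12, axis=1)   # 30 deg = shift of n_p/12 phi columns

d1 = shape_descriptor(f, X, Yc, Z, W)
d2 = shape_descriptor(f_rot, X, Yc, Z, W)
```

Individual coefficients change under rotation, but within each degree l they mix by an orthogonal transformation, so the per-degree norms d1 and d2 agree — which is what lets the descriptor compare molecular shapes without aligning them first.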
We compiled an NMR data set consisting of exact nuclear Overhauser enhancement (eNOE) distance limits, residual dipolar couplings (RDCs) and scalar (J) couplings for GB3, which together form one of the largest and most diverse data sets for the structural characterization of a protein to date. All data have small experimental errors, which are carefully estimated. We use the data in the research article Vogeli et al., 2015, Complementarity and congruence between exact NOEs and traditional NMR probes for spatial decoding of protein dynamics, J. Struct. Biol., 191(3), 306–317, doi:10.1016/j.jsb.2015.07.008 [1] for cross-validation in multiple-state structural ensemble calculations. We advocate this set as an ideal test case for molecular dynamics simulations and structure calculations.
Cysteine cross-linking in native membranes establishes the transmembrane architecture of Ire1
(2021)
The ER is a key organelle of membrane biogenesis and crucial for the folding of both membrane and secretory proteins. Sensors of the unfolded protein response (UPR) monitor the unfolded protein load in the ER and convey effector functions for maintaining ER homeostasis. Aberrant compositions of the ER membrane, referred to as lipid bilayer stress, are equally potent activators of the UPR. How the distinct signals from lipid bilayer stress and unfolded proteins are processed by the conserved UPR transducer Ire1 remains unknown. Here, we have generated a functional, cysteine-less variant of Ire1 and performed systematic cysteine cross-linking experiments in native membranes to establish its transmembrane architecture in signaling-active clusters. We show that the transmembrane helices of two neighboring Ire1 molecules adopt an X-shaped configuration independent of the primary cause for ER stress. This suggests that different forms of stress converge in a common, signaling-active transmembrane architecture of Ire1.
We derive the relation between cumulants of a conserved charge measured in a subvolume of a thermal system and the corresponding grand-canonical susceptibilities, taking into account exact global conservation of that charge. The derivation is presented for an arbitrary equation of state, with the assumption that the subvolume is sufficiently large to be close to the thermodynamic limit. Our framework – the subensemble acceptance method (SAM) – quantifies the effect of global conservation laws and is an important step toward a direct comparison between cumulants of conserved charges measured in central heavy ion collisions and theoretical calculations of grand-canonical susceptibilities, such as lattice QCD. As an example, we apply our formalism to net-baryon fluctuations at vanishing baryon chemical potentials as encountered in collisions at the LHC and RHIC.
We analyze the behavior of cumulants of conserved charges in a subvolume of a thermal system with exact global conservation laws by extending a recently developed subensemble acceptance method (SAM) [1] to multiple conserved charges. Explicit expressions for all diagonal and off-diagonal cumulants up to sixth order that relate them to the grand canonical susceptibilities are obtained. The derivation is presented for an arbitrary equation of state with an arbitrary number of different conserved charges. The global conservation effects cancel out in any ratio of two second order cumulants, in any ratio of two third order cumulants, as well as in a ratio of strongly intensive measures Σ and ∆ involving any two conserved charges, making all these quantities particularly suitable for theory-to-experiment comparisons in heavy-ion collisions. We also show that the same cancellation occurs in correlators of a conserved charge, like the electric charge, with any non-conserved quantity such as net proton or net kaon number. The main results of the SAM are illustrated in the framework of the hadron resonance gas model. We also elucidate how net-proton and net-Λ fluctuations are affected by conservation of electric charge and strangeness in addition to baryon number.
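As a concrete illustration of why the conservation effects cancel in cumulant ratios, the leading SAM relations for the second and third cumulants take an equation-of-state-independent form (quoted here as a sketch from the SAM literature, with α denoting the fraction of the system covered by the subvolume):

```latex
% Subvolume cumulants of a conserved charge B vs. their
% grand-canonical (gce) counterparts, \alpha = V_{\rm sub}/V:
\kappa_2[B] = (1-\alpha)\,\kappa_2^{\mathrm{gce}}[B],
\qquad
\kappa_3[B] = (1-\alpha)(1-2\alpha)\,\kappa_3^{\mathrm{gce}}[B].
% The \alpha-dependent prefactors drop out of any ratio of two
% same-order cumulants:
\frac{\kappa_2[B_1]}{\kappa_2[B_2]}
  = \frac{\kappa_2^{\mathrm{gce}}[B_1]}{\kappa_2^{\mathrm{gce}}[B_2]}.
```

Because every second-order cumulant carries the same prefactor (1−α), and every third-order cumulant the same prefactor (1−α)(1−2α), such ratios are insensitive to the size of the acceptance window.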
The centrality dependence of the p/π ratio measured by the ALICE Collaboration in 5.02 TeV Pb-Pb collisions indicates a statistically significant suppression with the increase of the charged particle multiplicity once the centrality-correlated part of the systematic uncertainty is eliminated from the data. We argue that this behavior can be attributed to baryon annihilation in the hadronic phase. By implementing the BB¯↔5π reaction within a generalized partial chemical equilibrium framework, we estimate the annihilation freeze-out temperature at different centralities, which decreases with increasing charged particle multiplicity and yields Tann=132±5 MeV in 0-5% most central collisions. This value is considerably below the hadronization temperature of Thad∼160 MeV but above the thermal (kinetic) freeze-out temperature of Tkin∼100 MeV. Baryon annihilation reactions thus remain relevant in the initial stage of the hadronic phase but freeze out before (pseudo-)elastic hadronic scatterings. One experimentally testable consequence of this picture is a suppression of various light nuclei to proton ratios in central collisions of heavy ions.
We estimate the feeddown contributions from decays of unstable A=4 and A=5 nuclei to the final yields of protons, deuterons, tritons, 3He, and 4He produced in relativistic heavy-ion collisions at √sNN > 2.4 GeV, using the statistical model. The feeddown effects do not exceed 5% at LHC and top RHIC energies due to the large penalty factors involved, but are substantial at intermediate collision energies. We observe large feeddown contributions for tritons, 3He, and 4He at √sNN ≲ 10 GeV, where they may account for as much as 70% of the final yield at the lower end of the collision energies considered. Sizable (>10%) effects for deuteron yields are observed at √sNN ≲ 4 GeV. The results suggest that the excited nuclei feeddown cannot be neglected in the ongoing and future analysis of light nuclei production at intermediate collision energies, including HADES and CBM experiments at FAIR, NICA at JINR, RHIC beam energy scan and fixed-target programmes, and NA61/SHINE at CERN. We further show that the freeze-out curve in the T-μB plane itself is affected significantly by the light nuclei at high baryochemical potential.
Dendrites form predominantly binary trees that are exquisitely embedded in the networks of the brain. While neuronal computation is known to depend on the morphology of dendrites, their underlying topological blueprint remains unknown. Here, we used a centripetal branch ordering scheme originally developed to describe river networks—the Horton-Strahler order (SO)—to examine hierarchical relationships of branching statistics in reconstructed and model dendritic trees. We report on a number of universal topological relationships with SO that are true for all binary trees and distinguish those from SO-sorted metric measures that appear to be cell type-specific. The latter are therefore potential new candidates for categorising dendritic tree structures. Interestingly, we find a faithful correlation of branch diameters with centripetal branch orders, indicating a possible functional importance of SO for dendritic morphology and growth. Also, simulated local voltage responses to synaptic inputs are strongly correlated with SO. In summary, our study identifies important SO-dependent measures in dendritic morphology that are relevant for neural function while at the same time it describes other relationships that are universal for all dendrites.
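The Horton-Strahler order itself is simple to compute on a binary tree: a terminal segment has order 1, and at each branch point the order increases by one only when both children have equal order. A minimal sketch (the nested-tuple tree representation is purely illustrative, not the authors' reconstruction format):

```python
# Horton-Strahler order (SO) of a binary tree.
# Tree representation (hypothetical, for illustration only): a node is a
# (left, right) tuple of subtrees, and None marks a leaf (terminal segment).

def strahler(node):
    if node is None:                       # leaf: order 1
        return 1
    left, right = node
    a, b = strahler(left), strahler(right)
    # equal child orders merge into the next order; otherwise the
    # larger order propagates unchanged toward the root
    return a + 1 if a == b else max(a, b)

# One branch point whose left subtree branches once more:
tree = ((None, None), None)
print(strahler(tree))  # -> 2
```

The same recursion applies to reconstructed dendrites once their branch points are extracted, since dendritic trees are predominantly binary.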
These proceedings will cover various studies of hadronic resonances within the UrQMD transport model. After a brief explanation of the model, various observables will be highlighted and the chances for resonance reconstruction in hadronic channels will be discussed. The feasibility of detecting possible signals of chiral symmetry restoration will also be assessed.
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to simultaneously image large numbers of dendritic spines in 3D volumes of neural tissue, and exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines. However, no currently available automated method for spine detection comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library, and then optimized on the spine detection task. This transfer learning approach is data efficient while achieving a high detection precision. To train and validate the model we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches a near human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance, even without any retraining or adjustment of model parameters.
Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain’s activity is thought to be internally generated and, hence, quantifying stimulus-response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model-based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity on electrophysiological data, based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor-level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction.
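For a flavor of what transfer entropy measures, here is a minimal plug-in estimator for discrete time series with history length 1. This is a toy sketch, not the estimator or toolbox used in the study (which works on continuous electrophysiological signals with embedded states and significance testing); it computes TE(X→Y) = Σ p(y₁,y₀,x₀) log₂[p(y₁|y₀,x₀)/p(y₁|y₀)]:

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y), in bits, for discrete series
    with history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_now, x_now)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_next, y_now)
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]             # p(y1 | y0, x0)
        p_hist = pairs_yy[(y1, y0)] / singles_y[y0] # p(y1 | y0)
        te += p_joint * math.log2(p_full / p_hist)
    return te

# X drives Y with a one-step lag: y[t] = x[t-1], x random bits.
random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]
print(round(transfer_entropy(x, y), 2))  # close to 1 bit: X determines Y
print(round(transfer_entropy(y, x), 2))  # close to 0: no influence Y -> X
```

The asymmetry of the two estimates is the point: TE is directional, which is what makes it usable as a test for effective connectivity.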
Background The synchrony hypothesis postulates that precise temporal synchronization of different pools of neurons conveys information that is not contained in their firing rates. The synchrony hypothesis has been supported by experimental findings demonstrating that millisecond-precise synchrony of neuronal oscillations across well separated brain regions plays an essential role in visual perception and other higher cognitive tasks [1]. Although more evidence is accumulating in favour of its role as a binding mechanism of distributed neural responses, the physical and anatomical substrate for such a dynamic and precise synchrony, especially zero-lag even in the presence of non-negligible delays, remains unclear. Here we propose a simple network motif that naturally accounts for zero-lag synchronization for a wide range of temporal delays [3]. We demonstrate that zero-lag synchronization between two distant neurons or neural populations can be achieved by relaying the dynamics via a third mediating single neuron or population. Methods We simulated the dynamics of two Hodgkin-Huxley neurons that interact with each other via an intermediate third neuron. The synaptic coupling was mediated through alpha-functions. Individual temporal delays of the arrival of pre-synaptic potentials were modelled by a gamma distribution. The strength of the synchronization and the phase difference between each individual pair were derived by cross-correlation of the membrane potentials. Results In the regular spiking regime the two outer neurons consistently synchronize with zero phase lag irrespective of the initial conditions. This robust zero-lag synchronization naturally arises as a consequence of the relay and redistribution of the dynamics performed by the central neuron. This result is independent of whether the coupling is excitatory or inhibitory and can be maintained for arbitrarily long time delays (see Fig. 1).
Conclusion We have presented a simple and extremely robust network motif able to account for the isochronous synchronization of distant neural elements in a natural way. As opposed to other possible mechanisms of neural synchronization, neither inhibitory coupling, gap junctions nor precise tuning of morphological parameters are required to obtain zero-lag synchronized neuronal oscillation.
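The symmetry argument behind the relay motif can be illustrated with a far simpler model than Hodgkin-Huxley neurons: three delay-coupled phase oscillators in a chain, where the outer nodes interact only through the middle one. This is a sketch under strong simplifying assumptions (Kuramoto-type phase dynamics, a single fixed delay, zero intrinsic frequency in the rotating frame), not the study's spiking simulation; its point is only that the two outer units receive identical delayed input and therefore lock at zero lag:

```python
import math

def simulate(tau=0.2, K=1.0, T=60.0, dt=0.001):
    """Three delay-coupled phase oscillators in a chain 0-1-2:
    dtheta_i/dt = K * sum_j sin(theta_j(t - tau) - theta_i(t)),
    with the outer nodes 0 and 2 coupled only to the relay node 1.
    Returns the final phase lag between the two outer oscillators."""
    d = int(tau / dt)                       # delay in integration steps
    steps = int(T / dt)
    # phase history buffer; distinct initial phases for the three nodes
    hist = [[0.5, 2.0, 4.0] for _ in range(d + 1)]
    for _ in range(steps):
        th = hist[-1]                       # current phases theta(t)
        old = hist[-1 - d]                  # delayed phases theta(t - tau)
        dth = [
            K * math.sin(old[1] - th[0]),                           # 0 <- relay
            K * (math.sin(old[0] - th[1]) + math.sin(old[2] - th[1])),
            K * math.sin(old[1] - th[2]),                           # 2 <- relay
        ]
        hist.append([th[i] + dt * dth[i] for i in range(3)])
        hist.pop(0)
    th = hist[-1]
    # lag between the outer pair, wrapped to (-pi, pi]
    return math.atan2(math.sin(th[0] - th[2]), math.cos(th[0] - th[2]))

print(abs(simulate()) < 1e-3)  # outer pair converges to zero lag
```

Because nodes 0 and 2 see the same delayed relay signal, their phase difference obeys a purely contracting equation near the locked state, independent of the delay value, which mirrors the delay-robustness reported for the spiking model.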
This short paper gives a brief overview of the manifestly covariant canonical gauge gravity (CCGG) that is rooted in the De Donder-Weyl Hamiltonian formulation of relativistic field theories, and the proven methodology of the canonical transformation theory. That framework derives, from a few basic physical and mathematical assumptions, equations describing generic matter and gravity dynamics, with the spin connection emerging as a Yang-Mills-type gauge field. While the interaction of any matter field with spacetime is fixed just by the transformation property of that field, a concrete gravity ansatz is introduced by the choice of the free (kinetic) gravity Hamiltonian. The key elements of this approach are discussed and its implications for particle dynamics and cosmology are presented. New insights: Anomalous Pauli coupling of spinors to curvature and torsion of spacetime, spacetime with (A)dS ground state, inertia, torsion and geometrical vacuum energy, zero-energy balance of the Universe leading to a vanishing cosmological constant and torsional dark energy.
A modification of the Einstein–Hilbert theory, the Covariant Canonical Gauge Gravity (CCGG), leads to a cosmological constant that represents the energy of the space–time continuum when deformed from its (A)dS ground state to a flat geometry. CCGG is based on the canonical transformation theory in the De Donder–Weyl (DW) Hamiltonian formulation. That framework modifies the Einstein–Hilbert Lagrangian of the free gravitational field by a quadratic Riemann–Cartan concomitant. The theory predicts the total energy-momentum of the system of space–time and matter to vanish, in line with the conjecture of a “Zero-Energy-Universe” going back to Lorentz (1916) and Levi-Civita (1917). Consequently, a flat geometry can only exist in the presence of matter, where the bulk vacuum energy of matter, regardless of its value, is eliminated by the vacuum energy of space–time. The observed cosmological constant Λobs is found to be merely a small correction attributable to deviations from a flat geometry and effects of complex dynamical geometry of space–time, namely torsion and possibly also vacuum fluctuations. That quadratic extension of General Relativity, anticipated already in 1918 by Einstein, thus provides a significant and natural contribution to resolving the “cosmological constant problem”.
The cosmological implications of the Covariant Canonical Gauge Theory of Gravity (CCGG) are investigated. CCGG is a Palatini theory derived from first principles using the canonical transformation formalism in the covariant Hamiltonian formulation. The Einstein-Hilbert theory is thereby extended by a quadratic Riemann-Cartan term in the Lagrangian. Moreover, the requirement of covariant conservation of the stress-energy tensor leads to the necessary presence of torsion. In the Friedmann universe this promotes the cosmological constant to a time-dependent function and gives rise to a geometrical correction with the EOS of dark radiation. The resulting cosmology, compatible with the ΛCDM parameter set, encompasses bounce and bang scenarios with graceful exits into the late dark energy era. Testing these scenarios against low-z observations shows that CCGG is a viable theory.
Neural oscillations at low- and high-frequency ranges are a fundamental feature of large-scale networks. Recent evidence has indicated that schizophrenia is associated with abnormal amplitude and synchrony of oscillatory activity, in particular, at high (beta/gamma) frequencies. These abnormalities are observed during task-related and spontaneous neuronal activity which may be important for understanding the pathophysiology of the syndrome. In this paper, we shall review the current evidence for impaired beta/gamma-band oscillations and their involvement in cognitive functions and certain symptoms of the disorder. In the first part, we will provide an update on neural oscillations during normal brain functions and discuss underlying mechanisms. This will be followed by a review of studies that have examined high-frequency oscillatory activity in schizophrenia and discuss evidence that relates abnormalities of oscillatory activity to disturbed excitatory/inhibitory (E/I) balance. Finally, we shall identify critical issues for future research in this area.
Following the discovery of context-dependent synchronization of oscillatory neuronal responses in the visual system, the role of neural synchrony in cortical networks has been expanded to provide a general mechanism for the coordination of distributed neural activity patterns. In the current paper, we present an update of the status of this hypothesis through summarizing recent results from our laboratory that suggest important new insights regarding the mechanisms, function and relevance of this phenomenon. In the first part, we present recent results derived from animal experiments and mathematical simulations that provide novel explanations and mechanisms for zero and near-zero phase lag synchronization. In the second part, we shall discuss the role of neural synchrony for expectancy during perceptual organization and its role in conscious experience. This will be followed by evidence that indicates that in addition to supporting conscious cognition, neural synchrony is abnormal in major brain disorders, such as schizophrenia and autism spectrum disorders. We conclude this paper with suggestions for further research as well as with critical issues that need to be addressed in future studies.
Bardeen black hole chemistry
(2019)
In the present paper we try to connect the Bardeen black hole with the concept of the recently proposed black hole chemistry. We study thermodynamic properties of the regular black hole with an anti-de Sitter background. The negative cosmological constant Λ plays the role of the positive thermodynamic pressure of the system. After studying the thermodynamic variables, we derive the corresponding equation of state and we show that a neutral Bardeen-anti-de Sitter black hole has phenomenology similar to that of a Van der Waals fluid. This is equivalent to saying that the system exhibits criticality and a first-order small/large black hole phase transition reminiscent of the liquid/gas coexistence.
We examine the thermodynamic behavior of a static neutral regular (non-singular) black hole enclosed in a finite isothermal cavity. The cavity enclosure allows us to investigate black hole systems in a canonical or a grand canonical ensemble. Here we derive the reduced action for the general metric of a regular black hole in a cavity by considering a canonical ensemble. The new expression for the action contains quantum corrections at short distances and reduces to the action of a singular black hole in a cavity at large distances. We apply this formalism to the noncommutative Schwarzschild black hole in order to study the phase structure of the system. We find a possible small/large stable regular black hole transition inside the cavity that exists neither for a classical Schwarzschild black hole in a cavity nor for the asymptotically flat regular black hole without the cavity. This phase transition appears similar to the liquid/gas transition of a Van der Waals gas.
The spike (S) protein of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the major focus for vaccine development. We combine cryo electron tomography, subtomogram averaging and molecular dynamics simulations to structurally analyze S in situ. Compared to recombinant S, the viral S is more heavily glycosylated and occurs predominantly in a closed pre-fusion conformation. We show that the stalk domain of S contains three hinges that give the globular domain unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and the development of safe vaccines. The large-scale tomography data set of SARS-CoV-2 used for this study is sufficient to resolve structural features to below 5 Å and is publicly available at EMPIAR-10453.
The spike protein (S) of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is required for cell entry and is the primary focus for vaccine development. In this study, we combined cryo–electron tomography, subtomogram averaging, and molecular dynamics simulations to structurally analyze S in situ. Compared with the recombinant S, the viral S was more heavily glycosylated and occurred mostly in the closed prefusion conformation. We show that the stalk domain of S contains three hinges, giving the head unexpected orientational freedom. We propose that the hinges allow S to scan the host cell surface, shielded from antibodies by an extensive glycan coat. The structure of native S contributes to our understanding of SARS-CoV-2 infection and potentially to the development of safe vaccines.
Human Transformer2-beta (hTra2-beta) is an important member of the serine/arginine-rich protein family, and contains one RNA recognition motif (RRM). It controls the alternative splicing of several pre-mRNAs, including those of the calcitonin/calcitonin gene-related peptide (CGRP), the survival motor neuron 1 (SMN1) protein and the tau protein. Accordingly, the RRM of hTra2-beta specifically binds to two types of RNA sequences [the CAA and (GAA)2 sequences]. We determined the solution structure of the hTra2-beta RRM (spanning residues Asn110–Thr201), which not only has a canonical RRM fold, but also an unusual alignment of the aromatic amino acids on the beta-sheet surface. We then solved the complex structure of the hTra2-beta RRM with the (GAA)2 sequence, and found that the AGAA tetra-nucleotide was specifically recognized through hydrogen-bond formation with several amino acids on the N- and C-terminal extensions, as well as stacking interactions mediated by the unusually aligned aromatic rings on the beta-sheet surface. Further NMR experiments revealed that the hTra2-beta RRM recognizes the CAA sequence when it is integrated in the stem-loop structure. This study indicates that the hTra2-beta RRM recognizes two types of RNA sequences in different RNA binding modes.
Impact of low-energy multipole excitations and pygmy resonances on radiative nucleon captures
(2016)
Nuclear structure theory is considered in the framework of the development of a microscopic model for astrophysical nucleon-capture applications. In particular, microscopically obtained strength functions from a theoretical method incorporating density functional theory and the quasiparticle-phonon model are used as input in a statistical reaction model. The approach is applied in systematic investigations of the impact of low-energy multipole excitations and pygmy resonances on dipole photoabsorption and radiative neutron- and proton-capture cross sections of key s- and r-process nuclei, which is discussed in comparison with experiment. For the cases of the short-lived isotopes 89Zr and 91Mo, theoretical predictions are made.
Two generic mechanisms for emergence of direction selectivity coexist in recurrent neural networks
(2013)
Poster presentation: Twenty Second Annual Computational Neuroscience Meeting: CNS*2013. Paris, France. 13-18 July 2013.
In the mammalian visual cortex, the time-averaged response of many neurons is maximal for stimuli moving in a particular direction. Such a direction selective response is not found in LGN, upstream of the visual processing pathway, suggesting that cortical networks play a strong role in the generation of direction selectivity. Here we investigate the mechanisms for the emergence of direction selectivity in the recurrent networks of nonlinear firing rate neurons in layer 4 of V1 receiving the input from LGN. In the model the LGN inputs are characterized by different receptive field positions, and their relative temporal phase shifts are reversed for the stimuli moving in the opposite direction. We propose that two distinct mechanisms result in the neuronal direction selective response in these recurrent networks. The first one is a result of nonlinear feed-forward summation of several time-shifted inputs. The second mechanism is based on the competition between neurons for firing in a winner-take-all regime. Both mechanisms rely on inhibitory interactions in the connectivity matrix of lateral connections, but the second one involves inhibitory loops. Typically, the first mechanism results in lower selectivity values than the second, but the time-course of acquiring direction selective response is faster for the first mechanism. Importantly, the two mechanisms have different input frequency tuning. The first mechanism, based on the nonlinear summation, results in a relatively narrow tuning curve around the preferred frequency of the stimulus in the case of a moving grating. In contrast, the direction selectivity arising from the second mechanism depends only weakly on the input frequency, i.e. has a broader tuning curve. These differences allow us to provide a recipe for identifying in experiments which of the two mechanisms is used by a given direction selective neuron.
We then analyze how the statistics of the connections in the random recurrent networks affect the relative contributions from these two mechanisms and determine the distributions of the direction selectivity values. We identify the motifs in the connectivity matrix that are required for each mechanism and show that the minimal conditions for both mechanisms are met in a very broad set of random recurrent networks with sufficiently strong inhibitory connections. Thus, we propose that these mechanisms coexist in generic recurrent networks with inhibition. Our results may account for the recent experimental observations that direction selectivity is present in dark-reared mice and ferrets [1,2]. They can also explain the emergence of direction selectivity in species lacking a spatially organized direction selectivity map.
Poster presentation at The Twenty Third Annual Computational Neuroscience Meeting: CNS*2014 Québec City, Canada. 26-31 July 2014: We study random strongly heterogeneous recurrent networks of firing rate neurons, introducing the notion of cohorts: groups of co-active neurons which compete for firing with one another and whose presence depends sensitively on the structure of the input. The identities of neurons recruited to and dropped from an active cohort change smoothly with varying input features. We search for network parameter regimes in which the activation of cohorts is robust yet easily switchable by the external input and which exhibit large repertoires of different cohorts. We apply these networks to model the emergence of orientation and direction selectivity in visual cortex. We feed these random networks with a set of harmonic inputs that vary across neurons only in their temporal phase, mimicking the feedforward drive due to a moving grating stimulus. The relationship between the phases that carries the information about the orientation of the stimulus determines which cohort of neurons is activated. As a result the individual neurons acquire non-monotonic orientation tuning curves which are characterized by high orientation and direction selectivity. This mechanism of emergence for direction selectivity differs from the classical motion detector scheme, which is based on the nonlinear summation of the time-shifted inputs. In our model these two mechanisms coexist in the same network, but can be distinguished by their different frequency and contrast dependences. In general, the mechanism we are studying here converts a temporal phase sequence into population activity and could therefore also be used to extract and represent various other relevant stimulus features.
Working memory and conscious perception are thought to share similar brain mechanisms, yet recent reports of non-conscious working memory challenge this view. Combining visual masking with magnetoencephalography, we investigate the reality of non-conscious working memory and dissect its neural mechanisms. In a spatial delayed-response task, participants reported the location of a subjectively unseen target above chance-level after several seconds. Conscious perception and conscious working memory were characterized by similar signatures: a sustained desynchronization in the alpha/beta band over frontal cortex, and a decodable representation of target location in posterior sensors. During non-conscious working memory, such activity vanished. Our findings contradict models that identify working memory with sustained neural firing, but are compatible with recent proposals of ‘activity-silent’ working memory. We present a theoretical framework and simulations showing how slowly decaying synaptic changes allow cell assemblies to go dormant during the delay, yet be retrieved above chance-level after several seconds.
The fundamental structure of cortical networks arises early in development prior to the onset of sensory experience. However, how endogenously generated networks respond to the onset of sensory experience, and how they form mature sensory representations with experience remains unclear. Here we examine this "nature-nurture transform" using in vivo calcium imaging in ferret visual cortex. At eye-opening, visual stimulation evokes robust patterns of cortical activity that are highly variable within and across trials, severely limiting stimulus discriminability. Initial evoked responses are distinct from spontaneous activity of the endogenous network. Visual experience drives the development of low-dimensional, reliable representations aligned with spontaneous activity. A computational model shows that alignment of novel visual inputs and recurrent cortical networks can account for the emergence of reliable visual representations.
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
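The qualitative behavior described here can be reproduced with a very small simulation. The following sketch is one possible minimal instantiation of a shared-pool competition model, with entirely hypothetical parameter values and dynamics chosen for illustration (binding from a common pool proportional to each synapse's slot count, first-order unbinding, and pool turnover); it is not the authors' published model. Potentiating one synapse transiently depletes the pool and depresses its neighbor, which then recovers, i.e. the heterosynaptic effect is transient:

```python
def simulate(T=200.0, dt=0.01, bump_at=100.0):
    """Euler integration of a two-synapse pool-competition model.
    w[i]: receptors bound at synapse i; p: free receptors in the local
    dendritic pool; s[i]: receptor slots ('demand') at synapse i.
    All parameter values are hypothetical."""
    beta, delta = 0.1, 1.0        # binding / unbinding rates
    gamma, lam = 5.0, 1.0         # pool production / degradation
    s = [1.0, 1.0]
    w = [0.0, 0.0]
    p = gamma / lam               # pool starts at its steady state
    trace = []                    # record (t, w[1]) to see the dip at synapse 1
    for k in range(int(T / dt)):
        t = k * dt
        if t >= bump_at:
            s[0] = 3.0            # potentiate synapse 0: triple its slots
        bind = [beta * p * si for si in s]
        dw = [bind[i] - delta * w[i] for i in range(2)]
        dp = gamma - lam * p + delta * sum(w) - sum(bind)
        w = [w[i] + dt * dw[i] for i in range(2)]
        p += dt * dp
        trace.append((t, w[1]))
    return trace

trace = simulate()
print(round(trace[-1][1], 2))  # -> 0.5 : synapse 1 returns to baseline
```

In this toy model the steady-state pool level is set by production and degradation alone, so the untouched synapse recovers fully after the transient dip; the depth of the dip grows as the pool shrinks, matching the prediction that the amount of heterosynaptic plasticity is inversely related to pool size.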