How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (< 3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast and against previous expectations, individuals’ network-defining time frames of particularly high cofluctuation do not achieve significant prediction of intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from a few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but is rather reflected across the entire length of the brain connectivity time series.
Changes in the efficacies of synapses are thought to be the neurobiological basis of learning and memory. The efficacy of a synapse depends on its current number of neurotransmitter receptors. Recent experiments have shown that these receptors are highly dynamic, moving back and forth between synapses on time scales of seconds and minutes. This suggests spontaneous fluctuations in synaptic efficacies and a competition of nearby synapses for available receptors. Here we propose a mathematical model of this competition of synapses for neurotransmitter receptors from a local dendritic pool. Using minimal assumptions, the model produces a fast multiplicative scaling behavior of synapses. Furthermore, the model explains a transient form of heterosynaptic plasticity and predicts that its amount is inversely related to the size of the local receptor pool. Overall, our model reveals logistical tradeoffs during the induction of synaptic plasticity due to the rapid exchange of neurotransmitter receptors between synapses.
Bacteria of the genera Photorhabdus and Xenorhabdus produce a plethora of natural products to support their similar symbiotic lifecycles. For many of these compounds, the specific bioactivities are unknown. One common challenge in natural product research when trying to prioritize research efforts is the rediscovery of identical (or highly similar) compounds from different strains. Linking genome sequence to metabolite production can help in overcoming this problem. However, sequences are typically not available for entire collections of organisms. Here we perform a comprehensive metabolic screening using HPLC-MS data associated with a 114-strain collection (58 Photorhabdus and 56 Xenorhabdus) from across Thailand and explore the metabolic variation among the strains, matched with several abiotic factors. We utilize machine learning in order to rank the importance of individual metabolites in determining all given metadata. With this approach, we were able to prioritize metabolites in the context of natural product investigations, leading to the identification of previously unknown compounds. The top three highest-ranking features were associated with Xenorhabdus and attributed to the same chemical entity, cyclo(tetrahydroxybutyrate). This work addresses the need for prioritization in high-throughput metabolomic studies and demonstrates the viability of such an approach in future research.
Antimicrobial resistance is a major threat to global health and food security today. Scheduling cycling therapies by targeting phenotypic states associated with specific mutations can help us to eradicate pathogenic variants in chronic infections. In this paper, we introduce a logistic switching model to abstract mutation networks of collateral resistance. We find particular conditions under which the unstable zero-equilibrium of the logistic maps can be stabilized through a switching signal; that is, persistent populations can be eradicated through tailored switching regimens.
Starting from an optimal-control formulation, the switching policies show their potential to stabilize the zero-equilibrium of dynamics governed by logistic maps. However, such switching strategies deserve a specific characterization of their limit behaviour. Ultimately, we use evolutionary and control algorithms to find both optimal and sub-optimal switching policies. Simulation results show the applicability of Parrondo’s Paradox to the design of cycling therapies against drug resistance.
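The eradication-by-cycling idea can be illustrated with a minimal sketch (not the paper's model): two resistant phenotypes with collateral sensitivity follow logistic maps whose growth rate depends on the drug applied. All rates here are hypothetical.

```python
def logistic_step(x, r):
    """One iteration of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

# Under drug A, phenotype 1 grows (r > 1) while phenotype 2 decays (r < 1);
# under drug B the roles reverse. Near zero, one A-B alternation multiplies
# both populations by 1.2 * 0.3 = 0.36 < 1, so cycling stabilizes the
# zero-equilibrium even though each drug alone leaves one phenotype persisting.
RATES = {"A": (1.2, 0.3), "B": (0.3, 1.2)}

def simulate(schedule, x0=(0.1, 0.1)):
    x1, x2 = x0
    for drug in schedule:
        r1, r2 = RATES[drug]
        x1, x2 = logistic_step(x1, r1), logistic_step(x2, r2)
    return x1, x2

mono_a = simulate("A" * 100)    # monotherapy: phenotype 1 persists
cycling = simulate("AB" * 50)   # switching regimen: both are driven to zero
```

In the Parrondo spirit, neither constant schedule eradicates both phenotypes, but the alternating schedule does.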
We propose a generalized modeling framework for the kinetic mechanisms of transcriptional riboswitches. The formalism accommodates time-dependent transcription rates and changes of metabolite concentration and permits incorporation of variations in transcription rate depending on transcript length. We derive explicit analytical expressions for the fraction of transcripts that determine repression or activation of gene expression, pause site location and its slowing down of transcription for the case of the (2’dG)-sensing riboswitch from Mesoplasma florum. Our modeling challenges the current view on the exclusive importance of metabolite binding to transcripts containing only the aptamer domain. Numerical simulations of transcription proceeding in a continuous manner under time-dependent changes of metabolite concentration further suggest that rapid modulations in concentration result in a reduced dynamic range for riboswitch function regardless of transcription rate, while a combination of slow modulations and small transcription rates ensures a wide range of finely tuneable regulatory outcomes.
Stockpiling neuraminidase inhibitors (NAIs) such as oseltamivir and zanamivir is part of a global effort to be prepared for an influenza pandemic. However, the contribution of NAIs to the treatment and prevention of influenza and its complications remains debated. Here, we developed a transparent mathematical modelling setting to analyse the impact of NAIs on influenza disease at the within-host and population levels. Analytical and simulation results indicate that even assuming unrealistically high efficacies for NAIs, drug intake starting at the onset of symptoms has a negligible effect on an individual's viral load and symptom score. Increasing NAI doses does not provide a better outcome, contrary to general belief. Considering Tamiflu's pandemic regimen for prophylaxis, different multiscale simulation scenarios reveal modest reductions in epidemic size despite high investments in stockpiling. Our results question the use of NAIs in general to treat influenza, as well as the respective stockpiling by regulatory authorities.
The successful elimination of bacteria such as Streptococcus pneumoniae from a host involves the coordination between different parts of the immune system. Previous studies have explored the effects of the initial pneumococcal load (bacterial dose) on different representations of innate immunity, finding that pathogenic outcomes can vary with the size of the bacterial dose. However, others lend support to the notion of dose-independent factors contributing to bacterial clearance. In this paper, we seek to provide a deeper understanding of the immune responses associated with the pneumococcus. To this end, we formulate a model that realizes an abstraction of the innate-regulatory immune host response. Stability and bifurcation analyses of the model reveal the following trichotomy of pneumococcal outcomes determined by the bifurcation parameters: (i) dose-independent clearance; (ii) dose-independent persistence; and (iii) dose-limited clearance. Bistability, where the bacteria-free equilibrium co-stabilizes with the most substantial steady-state bacterial load, is the specific result behind dose-limited clearance. The trichotomy of pneumococcal outcomes here described integrates all previously observed bacterial fates into a unified framework.
The COVID-19 pandemic has underlined the impact of emergent pathogens as a major threat to human health. The development of quantitative approaches to advance comprehension of the current outbreak is urgently needed to tackle this severe disease. In this work, several mathematical models are proposed to represent SARS-CoV-2 dynamics in infected patients. Considering different starting times of infection, parameter sets that represent the infectivity of SARS-CoV-2 are computed and compared with those of other viral infections that can also cause pandemics.
Based on the target cell model, the time for SARS-CoV-2 to spread between susceptible cells (a mean of approximately 30 days) is much longer than reported for Ebola (about 3 times slower) and influenza (60 times slower). The within-host reproductive number for SARS-CoV-2 is consistent with the values for influenza infection (1.7-5.35). The model that best fit the data included immune responses, suggesting a slow cell response peaking between 5 and 10 days post onset of symptoms. The model with an eclipse phase, a latent period before cells become productively infected, was not supported. Interestingly, both the target cell model and the model with immune responses predict that the virus may replicate very slowly in the first days after infection and could be below detection levels during the first 4 days post infection. A quantitative comprehension of SARS-CoV-2 dynamics and the estimation of standard parameters of viral infections are the key contributions of this pioneering work.
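The target cell-limited model referenced here is a standard three-variable system; a minimal simulation sketch follows, with illustrative parameter values rather than the paper's fitted estimates.

```python
# Target cell-limited model of within-host viral dynamics:
#   dT/dt = -beta * T * V              (susceptible target cells)
#   dI/dt =  beta * T * V - delta * I  (productively infected cells)
#   dV/dt =  p * I - c * V             (free virus)
# Parameters below are illustrative only (within-host R0 = beta*p*T0/(delta*c)).
beta, delta, p, c = 1e-6, 0.6, 5.0, 2.0
T, I, V = 1e6, 0.0, 1.0
dt, days = 0.001, 30.0

trajectory = []
for _ in range(int(days / dt)):
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    trajectory.append(V)

peak = max(trajectory)  # viral load rises, peaks, then resolves as T depletes
```

With these numbers the within-host reproductive number is about 4.2, inside the influenza-like range quoted in the abstract.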
The severity of the COVID-19 pandemic, caused by the SARS-CoV-2 coronavirus, calls for the urgent development of a vaccine. The primary immunological target is the SARS-CoV-2 spike (S) protein. S is exposed on the viral surface to mediate viral entry into the host cell. To identify possible antibody binding sites not shielded by glycans, we performed multi-microsecond molecular dynamics simulations of a 4.1 million atom system containing a patch of viral membrane with four full-length, fully glycosylated and palmitoylated S proteins. By mapping steric accessibility, structural rigidity, sequence conservation and generic antibody binding signatures, we recover known epitopes on S and reveal promising epitope candidates for vaccine development. We find that the extensive and inherently flexible glycan coat shields a surface area larger than expected from static structures, highlighting the importance of structural dynamics in epitope mapping.
Spike count correlations (SCCs) are ubiquitous in sensory cortices, are characterized by rich structure and arise from structured internal interactions. Yet, most theories of visual perception focus exclusively on the mean responses of individual neurons. Here, we argue that feedback interactions in primary visual cortex (V1) establish the context in which individual neurons process complex stimuli and that changes in visual context give rise to stimulus-dependent SCCs. Measuring V1 population responses to natural scenes in behaving macaques, we show that the fine structure of SCCs is stimulus-specific and variations in response correlations across stimuli are independent of variations in response means. Moreover, we demonstrate that stimulus-specificity of SCCs in V1 can be directly manipulated by controlling the high-order structure of synthetic stimuli. We propose that stimulus-specificity of SCCs is a natural consequence of hierarchical inference where inferences on the presence of high-level image features modulate inferences on the presence of low-level features.
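Spike count correlations are simply trial-to-trial (noise) correlations of spike counts for a repeated stimulus; the following sketch computes them on synthetic data (a shared latent fluctuation stands in for internal interactions; this is not the paper's macaque data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: 5 neurons, 200 repeated trials of one stimulus.
# A shared latent input induces correlated trial-to-trial count fluctuations.
n_neurons, n_trials = 5, 200
shared = rng.normal(size=n_trials)                # common fluctuation
private = rng.normal(size=(n_neurons, n_trials))  # independent noise
rates = 10.0 + 2.0 * shared + 1.0 * private       # trial-wise firing rates
counts = rng.poisson(np.clip(rates, 0.1, None))   # spike counts per trial

# SCC matrix: Pearson correlation of counts across trials, computed for a
# single stimulus so that mean responses are factored out.
scc = np.corrcoef(counts)
```

The shared input produces positive off-diagonal correlations; stimulus dependence, as in the abstract, would appear as changes in this matrix across stimuli.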
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. We hypothesize that attentional and contextual top-down signals interact in V1, in a manner that primarily benefits the representation of natural visual stimuli, rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we show that attention enhances the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Attentional enhancement of stimulus decodability from population responses occurs in low dimensional spaces, as revealed by principal component analysis, suggesting an alignment between the attentional and the natural stimulus variance. Moreover, natural scenes produce stimulus-specific oscillatory responses in V1, whose power undergoes a global shift from low to high frequencies with attention. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
In meditation practices that involve focused attention to a specific object, novice practitioners often experience moments of distraction (i.e., mind wandering). Previous studies have investigated the neural correlates of mind wandering during meditation practice through Electroencephalography (EEG) using linear metrics (e.g., oscillatory power). However, their results are not fully consistent. Since the brain is known to be a chaotic/nonlinear system, it is possible that linear metrics cannot fully capture complex dynamics present in the EEG signal. In this study, we assess whether nonlinear EEG signatures can be used to characterize mind wandering during breath focus meditation in novice practitioners. For that purpose, we adopted an experience sampling paradigm in which 25 participants were iteratively interrupted during meditation practice to report whether they were focusing on the breath or thinking about something else. We compared the complexity of EEG signals during mind wandering and breath focus states using three different algorithms: Higuchi’s fractal dimension (HFD), Lempel-Ziv complexity (LZC), and Sample entropy (SampEn). Our results showed that EEG complexity was generally reduced during mind wandering relative to breath focus states. We conclude that EEG complexity metrics are appropriate to disentangle mind wandering from breath focus states in novice meditation practitioners, and therefore, they could be used in future EEG neurofeedback protocols to facilitate meditation practice.
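To illustrate one of the three metrics, the sketch below implements a simplified Lempel-Ziv-style complexity that counts novel "phrases" in a median-binarized signal. This is a sketch of the idea only, not the exact LZ76 algorithm typically used in EEG studies.

```python
import math
import random

def binarize(signal):
    """Binarize around the median, a common preprocessing step for LZC."""
    med = sorted(signal)[len(signal) // 2]
    return "".join("1" if x > med else "0" for x in signal)

def lz_phrase_count(s):
    """Simplified Lempel-Ziv complexity: the number of distinct phrases
    produced by sequential parsing (each phrase is the shortest string
    not seen before). A sketch, not the exact LZ76 variant."""
    phrases, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

random.seed(0)
regular = [math.sin(2 * math.pi * i / 200) for i in range(1000)]  # oscillation
irregular = [random.random() for _ in range(1000)]                # noise

c_regular = lz_phrase_count(binarize(regular))
c_irregular = lz_phrase_count(binarize(irregular))
# A more regular signal yields fewer novel phrases, i.e. lower complexity,
# analogous to the reduced complexity reported during mind wandering.
```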
Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.
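The proposed normalisation amounts to dividing each unit's summed input by its number of afferents, so units respond to the proportion, not the count, of active inputs. A minimal numpy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def sparse_layer_response(weights, mask, x, normalise=False):
    """Response of a sparsely connected linear layer.

    weights: (n_out, n_in) weight matrix
    mask:    (n_out, n_in) binary connectivity (1 = synapse exists)
    If `normalise`, each unit's drive is divided by its in-degree, so the
    response tracks the proportion of active afferents, not their number.
    """
    w = weights * mask
    drive = w @ x
    if normalise:
        in_degree = mask.sum(axis=1)
        drive = drive / np.maximum(in_degree, 1)
    return drive

n_in = 1000
x = np.ones(n_in)            # all afferents active
w = np.ones((2, n_in))
# Unit 0 has 100 afferents, unit 1 has 500: without normalisation the
# larger unit responds 5x more strongly to the same proportion of input.
mask = np.zeros((2, n_in))
mask[0, :100] = 1
mask[1, :500] = 1

raw = sparse_layer_response(w, mask, x)
norm = sparse_layer_response(w, mask, x, normalise=True)
```

After normalisation both units give the same response to fully active inputs, regardless of their in-degree.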
Orientation hypercolumns in the visual cortex are delimited by the repeating pinwheel patterns of orientation selective neurons. We design a generative model for visual cortex maps that reproduces such orientation hypercolumns as well as ocular dominance maps while preserving retinotopy. The model uses a neural placement method based on t-distributed stochastic neighbour embedding (t-SNE) to create maps that order common features in the connectivity matrix of the circuit. We find that, in our model, hypercolumns generally appear with fixed cell numbers independently of the overall network size. These results would suggest that existing differences in absolute pinwheel densities are a consequence of variations in neuronal density. Indeed, available measurements in the visual cortex indicate that pinwheels consist of a constant number of ∼30,000 neurons. Our model is able to reproduce a large number of characteristic properties known for visual cortex maps. We provide the corresponding software in our MAPStoolbox for Matlab.
Dendritic spines are crucial for excitatory synaptic transmission, as the size of a spine head correlates with the strength of its synapse. The distribution of spine head sizes follows a lognormal-like distribution with more small spines than large ones. We analysed the impact of synaptic activity and plasticity on the spine size distribution in adult-born hippocampal granule cells from rats with induced homo- and heterosynaptic long-term plasticity in vivo and in CA1 pyramidal cells from Munc13-1/Munc13-2 knockout mice with completely blocked synaptic transmission. Neither the induction of extrinsic synaptic plasticity nor the blockage of presynaptic activity degrades the lognormal-like distribution, but each changes its mean, variance and skewness. The skewed distribution develops early in the life of the neuron. Our findings and their computational modelling support the idea that intrinsic synaptic plasticity is sufficient to generate the lognormal-like distribution of spine sizes, while a combination of intrinsic and extrinsic synaptic plasticity maintains it.
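The lognormal-like right skew described here means most spines are small with a heavy tail of large ones, so the mean exceeds the median. A quick sketch with illustrative parameters (not fitted to the paper's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spine head sizes drawn from a lognormal distribution;
# mu and sigma are illustrative, not the paper's fitted values.
sizes = rng.lognormal(mean=-1.0, sigma=0.8, size=10_000)

mean_size = sizes.mean()
median_size = np.median(sizes)
# For a lognormal, mean = exp(mu + sigma^2 / 2) > exp(mu) = median,
# and the standardized third moment (skewness) is positive.
skewness = ((sizes - mean_size) ** 3).mean() / sizes.std() ** 3
```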
Achieving functional neuronal dendrite structure through sequential stochastic growth and retraction
(2020)
Class I ventral posterior dendritic arborisation (c1vpda) proprioceptive sensory neurons respond to contractions in the Drosophila larval body wall during crawling. Their dendritic branches run along the direction of contraction, possibly a functional requirement to maximise membrane curvature during crawling contractions. Although the molecular machinery of dendritic patterning in c1vpda has been extensively studied, the process leading to the precise elaboration of their comb-like shapes remains elusive. Here, to link dendrite shape with its proprioceptive role, we performed long-term, non-invasive, in vivo time-lapse imaging of c1vpda embryonic and larval morphogenesis to reveal a sequence of differentiation stages. We combined computer models and dendritic branch dynamics tracking to propose that distinct sequential phases of targeted growth and stochastic retraction achieve dendritic trees that are efficient in terms of both wiring and function. Our study shows how dendrite growth balances structure–function requirements, shedding new light on general principles of self-organisation in functionally specialised dendrites.
The way in which dendrites spread within neural tissue determines the resulting circuit connectivity and computation. However, a general theory describing the dynamics of this growth process does not exist. Here we obtain the first time-lapse reconstructions of neurons in living fly larvae over the entirety of their developmental stages. We show that these neurons expand in a remarkably regular stretching process that conserves their shape. Newly available space is filled optimally, a direct consequence of constraining the total amount of dendritic cable. We derive a mathematical model that predicts one time point from the previous and use this model to predict dendrite morphology of other cell types and species. In summary, we formulate a novel theory of dendrite growth based on detailed developmental experimental data that optimises wiring and space filling and serves as a basis to better understand aspects of coverage and connectivity for neural circuit formation.
Reducing neuronal size results in less cell membrane and therefore lower input conductance. Smaller neurons are thus more excitable as seen in their voltage responses to current injections in the soma. However, the impact of a neuron’s size and shape on its voltage responses to synaptic activation in dendrites is much less understood. Here we use analytical cable theory to predict voltage responses to distributed synaptic inputs and show that these are entirely independent of dendritic length. For a given synaptic density, a neuron’s response depends only on the average dendritic diameter and its intrinsic conductivity. These results remain true for the entire range of possible dendritic morphologies irrespective of any particular arborisation complexity. Also, spiking models result in morphology-invariant numbers of action potentials that encode the percentage of active synapses. Interestingly, in contrast to spike rate, spike times do depend on dendrite morphology. In summary, a neuron’s excitability in response to synaptic inputs is not affected by total dendrite length. It rather provides a homeostatic input-output relation that specialised synapse distributions, local non-linearities in the dendrites and synaptic plasticity can modulate. Our work reveals a new fundamental principle of dendritic constancy that has consequences for the overall computation in neural circuits.
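The core of the scaling argument can be caricatured in a single compartment: at fixed synapse density, both the total synaptic conductance and the leak conductance grow with membrane area, so their ratio, and hence the depolarisation, is size-independent. This is a sketch of the intuition with invented parameters, not the paper's cable-theory derivation.

```python
def steady_state_voltage(area_um2, syn_density=1.0, g_syn=0.5e-9,
                         e_syn=0.06, g_leak_per_um2=1e-9):
    """Steady-state depolarisation (V) of a single-compartment neuron with
    synapses at a fixed density per unit area (illustrative parameters)."""
    g_syn_total = syn_density * area_um2 * g_syn  # scales with area
    g_leak = g_leak_per_um2 * area_um2            # also scales with area
    # The area cancels in the conductance-divider ratio below.
    return g_syn_total * e_syn / (g_syn_total + g_leak)

v_small = steady_state_voltage(area_um2=1_000.0)
v_large = steady_state_voltage(area_um2=50_000.0)
```

A 50-fold larger membrane area yields the same depolarisation for the same proportion of active synapses.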
Excess neuronal branching allows for innervation of specific dendritic compartments in cortex
(2019)
The connectivity of cortical microcircuits is a major determinant of brain function; defining how activity propagates between different cell types is key to scaling our understanding of individual neuronal behaviour to encompass functional networks. Furthermore, the integration of synaptic currents within a dendrite depends on the spatial organisation of inputs, both excitatory and inhibitory. We identify a simple equation to estimate the number of potential anatomical contacts between neurons, finding a linear increase in potential connectivity with cable length and maximum spine length, and a decrease with overlapping volume. This enables us to predict the mean number of candidate synapses for reconstructed cells, including those realistically arranged. We identify an excess of putative connections in cortical data, with neurite densities higher than necessary to reliably ensure the possible implementation of any given connection. We show that potential contacts allow the particular implementation of connectivity at a subcellular level.
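The stated scaling (linear in each cable length and in spine reach, inversely proportional to the shared volume) matches a Peters'-rule-style estimate; the sketch below uses an illustrative geometric prefactor, not the paper's equation.

```python
def expected_potential_contacts(axon_len_um, dend_len_um,
                                spine_reach_um, overlap_vol_um3):
    """Order-of-magnitude estimate of potential anatomical contacts between
    two neurites: linear in each cable length and in spine reach, and
    inversely proportional to the overlapping volume. The prefactor 2.0 is
    an assumed geometric constant for illustration only."""
    return 2.0 * spine_reach_um * axon_len_um * dend_len_um / overlap_vol_um3

base = expected_potential_contacts(4_000, 3_000, 2.0, 1e8)
double_dendrite = expected_potential_contacts(4_000, 6_000, 2.0, 1e8)
```

Doubling the dendritic cable length doubles the expected number of potential contacts, as the abstract's linearity implies.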
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, also in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
Abstract Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.
In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.
We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.
Author Summary Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
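The central point, that a fully deterministic recurrent network can produce variable responses to a repeated stimulus, can be seen in a toy binary threshold network far simpler than the SORN model itself (weights and thresholds below are arbitrary, and no plasticity is included):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
W = rng.normal(0, 1.0, size=(n, n)) / np.sqrt(n)  # fixed recurrent weights
theta = 0.5                                       # firing threshold

def run(x0, inputs):
    """Deterministic update of binary McCulloch-Pitts units:
    x_{t+1} = step(W x_t + u_t - theta). No noise anywhere."""
    x = x0.copy()
    states = []
    for u in inputs:
        x = (W @ x + u > theta).astype(float)
        states.append(x.copy())
    return np.array(states)

x0 = (rng.random(n) > 0.5).astype(float)
stimulus = np.zeros(n)
stimulus[:5] = 1.0  # the same input pulse, delivered every 10 steps
inputs = [stimulus if t % 10 == 0 else np.zeros(n) for t in range(100)]

run1 = run(x0, inputs)
run2 = run(x0, inputs)  # identical trajectory: the dynamics are noise-free
responses = run1[::10]  # population states right after each identical pulse
```

The trajectory is exactly reproducible from the same initial state, yet the responses to the identical pulse typically differ across presentations because the internal state differs, which is the deterministic "trial-to-trial variability" the abstract describes.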
The electrical and computational properties of neurons in our brains are determined by a rich repertoire of membrane-spanning ion channels and elaborate dendritic trees. However, the precise reason for this inherent complexity remains unknown. Here, we generated large stochastic populations of biophysically realistic hippocampal granule cell models comparing those with all 15 ion channels to their reduced but functional counterparts containing only 5 ion channels. Strikingly, valid parameter combinations in the full models were more frequent and more stable in the face of perturbations to channel expression levels. Scaling up the numbers of ion channels artificially in the reduced models recovered these advantages confirming the key contribution of the actual number of ion channel types. We conclude that the diversity of ion channels gives a neuron greater flexibility and robustness to achieve target excitability.
Background Corticospinal excitability depends on the current brain state. The recent development of real-time EEG-triggered transcranial magnetic stimulation (EEG-TMS) allows studying this relationship in a causal fashion. Specifically, it has been shown that corticospinal excitability is higher during the scalp surface negative EEG peak compared to the positive peak of µ-oscillations in sensorimotor cortex, as indexed by larger motor evoked potentials (MEPs) for fixed stimulation intensity.
Objective We further characterize the effect of µ-rhythm phase on the MEP input-output (IO) curve by measuring the degree of excitability modulation across a range of stimulation intensities. We furthermore seek to optimize stimulation parameters to enable discrimination of functionally relevant EEG-defined brain states.
Methods A real-time EEG-TMS system was used to trigger MEPs during instantaneous brain states corresponding to µ-rhythm surface positive and negative peaks, with five different stimulation intensities covering an individually calibrated MEP IO curve in 15 healthy participants.
Results MEP amplitude is modulated by µ-phase across a wide range of stimulation intensities, with larger MEPs at the surface negative peak. The largest relative MEP-modulation was observed for weak intensities, the largest absolute MEP-modulation for intermediate intensities. These results indicate a leftward shift of the MEP IO curve during the µ-rhythm negative peak.
Conclusion The choice of stimulation intensity influences the observed degree of corticospinal excitability modulation by µ-phase. Lower stimulation intensities enable more efficient differentiation of EEG µ-phase-defined brain states.
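Experiments of this kind hinge on estimating the instantaneous µ-rhythm phase, typically by band-pass filtering around 8-13 Hz and taking the analytic-signal (Hilbert) phase. Below is a minimal offline sketch on a synthetic signal; a real-time EEG-TMS pipeline additionally needs causal filtering and forward phase prediction, which this omits.

```python
import numpy as np

fs = 1000.0                      # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)

rng = np.random.default_rng(4)
# Synthetic sensorimotor signal: a 10 Hz mu-rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)

# Crude FFT band-pass around the mu band (8-13 Hz); offline only.
n = eeg.size
freqs = np.fft.fftfreq(n, 1 / fs)
spectrum = np.fft.fft(eeg)
spectrum[(np.abs(freqs) < 8) | (np.abs(freqs) > 13)] = 0
mu = np.fft.ifft(spectrum).real

# Analytic signal via the FFT construction of the Hilbert transform.
h = np.zeros(n)
h[0] = 1
h[1:n // 2] = 2
h[n // 2] = 1
analytic = np.fft.ifft(np.fft.fft(mu) * h)
phase = np.angle(analytic)       # instantaneous phase (rad)
envelope = np.abs(analytic)      # instantaneous mu amplitude
# A trigger would fire when `phase` crosses the value corresponding to the
# surface negative peak of the mu-rhythm.
```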
Active efficient coding explains the development of binocular vision and its failure in amblyopia
(2020)
The development of vision during the first months of life is an active process that comprises the learning of appropriate neural representations and the learning of accurate eye movements. While it has long been suspected that the two learning processes are coupled, there is still no widely accepted theoretical framework describing this joint development. Here we propose a computational model of the development of active binocular vision to fill this gap. The model is based on a new formulation of the Active Efficient Coding theory, which proposes that eye movements, as well as stimulus encoding, are jointly adapted to maximize the overall coding efficiency. Under healthy conditions, the model self-calibrates to perform accurate vergence and accommodation eye movements. It exploits disparity cues to deduce the direction of defocus, which leads to co-ordinated vergence and accommodation responses. In a simulated anisometropic case, where the refraction power of the two eyes differs, an amblyopia-like state develops, in which the foveal region of one eye is suppressed due to inputs from the other eye. After correcting for refractive errors, the model can only reach healthy performance levels if receptive fields are still plastic, in line with findings on a critical period for binocular vision development. Overall, our model offers a unifying conceptual framework for understanding the development of binocular vision.
Epilepsy can have many different causes and its development (epileptogenesis) involves a bewildering complexity of interacting processes. Here, we present a first-of-its-kind computational model to better understand the role of neuroimmune interactions in the development of acquired epilepsy. Our model describes the interactions between neuroinflammation, blood-brain barrier disruption, neuronal loss, circuit remodeling, and seizures. Formulated as a system of nonlinear differential equations, the model is validated using data from animal models that mimic human epileptogenesis caused by infection, status epilepticus, and blood-brain barrier disruption. The mathematical model successfully explains characteristic features of epileptogenesis such as its paradoxically long timescales (up to decades) despite short and transient injuries, or its dependence on the intensity of an injury. Furthermore, stochasticity in the model captures the variability of epileptogenesis outcomes in individuals exposed to identical injury. Notably, in line with the concept of degeneracy, our simulations reveal multiple routes towards epileptogenesis with neuronal loss as a sufficient but not necessary component. We show that our framework allows for in silico predictions of therapeutic strategies, providing information on injury-specific therapeutic targets and optimal time windows for intervention.
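The paradoxical combination of a short insult and a very long latent period can be illustrated with a generic bistable toy system. The sketch below is emphatically not the authors' model (their framework couples neuroinflammation, blood-brain barrier disruption, neuronal loss and circuit remodeling); it only shows how a brief super-threshold perturbation of a slowly evolving bistable variable yields a latency far longer than the injury itself, while a slightly weaker insult produces no progression at all:

```python
# Generic bistable toy system (NOT the authors' model): x = 0 is the healthy
# state, x = 1 the epileptic state, a = 0.3 an unstable threshold, and eps
# sets the slow intrinsic timescale of disease progression.
def simulate(pulse_amp, t_end=400.0, dt=0.01, a=0.3, eps=0.05):
    x, t, latency = 0.0, 0.0, None
    while t < t_end:
        injury = pulse_amp if t < 1.0 else 0.0       # brief transient insult
        dxdt = eps * x * (1 - x) * (x - a) + injury * (1 - x)
        x += dt * dxdt
        t += dt
        if latency is None and x > 0.9:
            latency = t                               # time to reach "epilepsy"
    return x, latency

x_weak, _ = simulate(0.1)           # sub-threshold injury: system recovers
x_strong, latency = simulate(0.5)   # super-threshold injury: slow progression
print(f"weak injury -> x = {x_weak:.3f}; strong injury -> x = {x_strong:.3f}, "
      f"latency ~ {latency:.0f} time units after a 1-unit insult")
```

The latency to the epileptic attractor here is two orders of magnitude longer than the insult, loosely mirroring the injury-intensity dependence and long timescales described above.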
Dendritic spines are considered a morphological proxy for excitatory synapses, rendering them a target of many different lines of research. Over recent years, it has become possible to image large numbers of dendritic spines simultaneously in 3D volumes of neural tissue. Exploiting such datasets requires new tools for the fully automated detection and analysis of large numbers of spines, yet no automated method currently comes close to the detection performance reached by human experts. Here, we developed an efficient analysis pipeline to detect large numbers of dendritic spines in volumetric fluorescence imaging data. The core of our pipeline is a deep convolutional neural network, which was pretrained on a general-purpose image library and then optimized for the spine detection task. This transfer learning approach is data efficient while achieving high detection precision. To train and validate the model, we generated a labelled dataset using five human expert annotators to account for the variability in human spine detection. The pipeline enables fully automated dendritic spine detection and reaches near human-level detection performance. Our method for spine detection is fast, accurate and robust, and thus well suited for large-scale datasets with thousands of spines. The code is easily applicable to new datasets, achieving high detection performance even without any retraining or adjustment of model parameters.
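The transfer-learning recipe described above (freeze a pretrained feature extractor, train only a task-specific head on the small labelled dataset) can be sketched without any deep learning framework. In the toy below, a fixed, frozen feature map stands in for the pretrained convolutional backbone, and a logistic-regression head is trained on synthetic stand-in labels; all data and functions are hypothetical, not from the paper's pipeline:

```python
import math, random

random.seed(0)

# "Pretrained backbone": a fixed, frozen feature map (stand-in for the
# convolutional layers pretrained on a general-purpose image library).
def backbone(x):
    return [math.tanh(x[0] + x[1]), math.tanh(x[0] - x[1]), 1.0]  # incl. bias

# Synthetic stand-in for labelled spine / non-spine image patches.
data = []
for _ in range(400):
    x = [random.uniform(-2, 2), random.uniform(-2, 2)]
    data.append((x, 1 if x[0] + x[1] > 0 else 0))

# Trainable head: logistic regression on the frozen features.
w = [0.0, 0.0, 0.0]

def predict(x):
    z = sum(wi * fi for wi, fi in zip(w, backbone(x)))
    return 1 / (1 + math.exp(-z))

for _ in range(200):                      # gradient descent on the head only
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        err = predict(x) - y
        for i, fi in enumerate(backbone(x)):
            grad[i] += err * fi
    w = [wi - 0.1 * gi / len(data) for wi, gi in zip(w, grad)]

acc = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the head's few parameters are updated, very little labelled data is needed, which is the data-efficiency argument made above.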
Treatments for amblyopia focus on vision therapy and patching of one eye. Predicting the success of these methods remains difficult, however. Recent research has used binocular rivalry to monitor visual cortical plasticity during occlusion therapy, leading to a successful prediction of the recovery rate of the amblyopic eye. The underlying mechanisms and their relation to neural homeostatic plasticity are not known. Here we propose a spiking neural network to explain the effect of short-term monocular deprivation on binocular rivalry. The model reproduces perceptual switches as observed experimentally. When one eye is occluded, inhibitory plasticity changes the balance between the eyes and leads to longer dominance periods for the eye that has been deprived. The model suggests that homeostatic inhibitory plasticity is a critical component of the observed effects and might play an important role in the recovery from amblyopia.
Models of perceptual decision making have historically been designed to maximally explain behaviour and brain activity independently of their ability to actually perform tasks. More recently, performance-optimized models have been shown to correlate with brain responses to images and thus present a complementary approach to understanding perceptual processes. In the present study, we compare how these two approaches account for the spatio-temporal organization of neural responses elicited by ambiguous visual stimuli. Forty-six healthy human subjects performed perceptual decisions on briefly flashed stimuli constructed from ambiguous characters. The stimuli were designed to have 7 orthogonal properties, ranging from low sensory levels (e.g. spatial location of the stimulus) to conceptual (whether the stimulus is a letter or a digit) and task levels (i.e. the required hand movement). Magnetoencephalography source and decoding analyses revealed that these 7 levels of representation are sequentially encoded by the cortical hierarchy and actively maintained until the subject responds. This hierarchy appeared poorly correlated with normative, drift-diffusion, and 5-layer convolutional neural network (CNN) models optimized to accurately categorize alphanumeric characters, but partially matched the sequence of activations of three of six state-of-the-art CNNs trained for natural image labeling (VGG-16, VGG-19, MobileNet). Additionally, we identify several systematic discrepancies between these CNNs and brain activity, revealing the importance of single-trial learning and recurrent processing. Overall, our results strengthen the notion that performance-optimized algorithms can converge towards the computational solution implemented by the human visual system, and open possible avenues to improve artificial perceptual decision making.
In the last decades, energy modelling has supported energy planning by offering insights into the dynamics between energy access, resource use, and sustainable development. Especially in recent years, there has been an attempt to strengthen the science-policy interface and increase the involvement of society in energy planning processes. This has, both in the EU and worldwide, led to the development of open-source and transparent energy modelling practices. This paper describes the role of an open-source energy modelling tool in the energy planning process and highlights its importance for society. Specifically, it describes the existence and characteristics of the relationship between developing an open-source, freely available tool and its application, dissemination and use for policy making. Using the example of the Open Source energy Modelling System (OSeMOSYS), this work focuses on practices that were established within the community and that made the framework's development and application both relevant and scientifically grounded.
Keywords: Energy system modelling tool, Open-source software, Model-based public policy, Software development practice, Outreach practice
Event-by-event fluctuations of the net baryon number and electric charge in nucleus-nucleus collisions are studied in Pb+Pb at SPS energies within the HSD transport model. We reveal an important role of the fluctuations in the number of target nucleon participants. They strongly influence all measured fluctuations even in samples of events with a rather rigid centrality trigger. This fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the multiplicity fluctuations as a function of collision centrality in fixed kinematical regions of the projectile and target hemispheres. The HSD results for the event-by-event fluctuations of electric charge in central Pb+Pb collisions at 20, 30, 40, 80 and 158 A GeV are in good agreement with the NA49 experimental data and considerably larger than expected in a quark-gluon plasma. This demonstrates that the distortions of the initial fluctuations by the hadronization phase and, in particular, by the final resonance decays dominate the observable fluctuations.
We obtain the D-meson spectral density at finite temperature for the conditions of density and temperature expected at FAIR. We perform a self-consistent coupled-channel calculation taking, as a bare interaction, a separable potential model. The Lambda_c (2593) resonance is generated dynamically. We observe that the D-meson spectral density develops a sizeable width while the quasiparticle peak stays close to the free position. The consequences for the D-meson production at FAIR are discussed.
We examine experimental signatures of TeV-mass black hole formation in heavy ion collisions at the LHC. We find that the black hole production results in a complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking-decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs and those with complete black hole decay. Finally we point out that a Heckler-Kapusta-Hawking plasma may form from the emitted mono-jets. In this context we present new simulation data of Mach shocks and of the evolution of initial conditions until the freeze-out.
We propose to measure azimuthal correlations of heavy-flavor hadrons to address the status of thermalization at the partonic stage of light quarks and gluons in high-energy nuclear collisions. In particular, we show that hadronic interactions at the late stage cannot significantly disturb the initial back-to-back azimuthal correlations of DDbar pairs. Thus, a decrease or the complete absence of these initial correlations does indicate frequent interactions of heavy-flavor quarks and also light partons in the partonic stage, which are essential for the early thermalization of light partons.
The experimental signatures of TeV-mass black hole (BH) formation in heavy ion collisions at the LHC are examined. We find that the black hole production results in a complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking-decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs and those with complete black hole decay. Due to the rather moderate luminosity in the first year of LHC running, the best chance for the observation of BHs or BHRs at this early stage will be via ionizing tracks in the ALICE TPC. Finally we point out that stable BHRs would be interesting candidates for energy production by conversion of mass to Hawking radiation.
The concept of Large Extra Dimensions (LED) provides a way of solving the Hierarchy Problem which concerns the weakness of gravity compared with the strong and electro-weak forces. A consequence of LED is that miniature Black Holes (mini-BHs) may be produced at the Large Hadron Collider in p+p collisions. The present work uses the CHARYBDIS mini-BH generator code to simulate the hadronic signal which might be expected in a mid-rapidity particle tracking detector from the decay of these exotic objects if indeed they are produced. An estimate is also given for Pb+Pb collisions.
We develop a 1+1 dimensional hydrodynamical model for central heavy-ion collisions at ultrarelativistic energies. Deviations from Bjorken's scaling are taken into account by implementing finite-size profiles for the initial energy density. The calculated rapidity distributions of pions, kaons and antiprotons in central Au+Au collisions at the c.m. energy 200 AGeV are compared with experimental data of the BRAHMS Collaboration. The sensitivity of the results to the choice of the equation of state, the parameters of the initial state and the freeze-out conditions is investigated. The best fit of the experimental data is obtained for a soft equation of state and Gaussian-like initial profiles of the energy density.
We have calculated the D-meson spectral density at finite temperature within a self-consistent coupled-channel approach that generates dynamically the Lambda_c (2593) resonance. We find a small mass shift for the D-meson in this hot and dense medium while the spectral density develops a sizeable width. The reduced attraction felt by the D-meson in hot and dense matter together with the large width observed have important consequences for the D-meson production in the future CBM experiment at FAIR.
The production of Large Extra Dimension (LXD) Black Holes (BHs), with a new, fundamental mass scale of M_f = 1 TeV, has been predicted to occur at the Large Hadron Collider, LHC, at the formidable rate of 10^8 per year in p-p collisions at full energy, 14 TeV, and at full luminosity. We show that such LXD-BH formation will be experimentally observable at the LHC by the complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f = 1 TeV. We suggest to complement this clear cut-off signal at M > 2*500 GeV in the di-jet correlation function by detecting the subsequent Hawking-decay products of the LXD-BHs, namely either multiple high-energy (> 100 GeV) SM mono-jets (i.e. with the away-side jet missing), sprayed off the evaporating BHs isotropically into all directions, or the thermalization of the multiple overlapping Hawking radiation in a Heckler-Kapusta-Hawking plasma. Microcanonical quantum statistical calculations of the Hawking evaporation process for these LXD-BHs show that cold black hole remnants (BHRs) of mass ~ M_f remain as the ashes of these spectacular di-jet-suppressed events. Strong di-jet suppression is also expected with heavy ion beams at the LHC, due to quark-gluon-plasma-induced jet attenuation at medium to low jet energies, p_T < 200 GeV. The (mono-)jets in these events can be used to trigger on Tsunami emission of secondary compressed QCD matter at well-defined Mach angles, both on the trigger side and on the away-side (missing) jet. The Mach shock angles allow for a direct measurement of both the equation of state (EoS) and the speed of sound c_s via the supersonic bang in the "big bang" matter. We discuss the importance of the underlying strong collective flow - the gluon storm - of the QCD matter for the formation and evolution of these Mach shock cones.
We predict a significant deformation of Mach shocks from the gluon storm in central Au+Au collisions at RHIC and LHC energies, as compared to the case of weakly coupled jets propagating through a static medium. A possible complete stopping of p_T > 50 GeV jets at the LHC within 2-3 fm yields nonlinear high-density Mach shocks in the quark-gluon plasma, which can be studied in the complex emission and disintegration pattern of the possibly supercooled matter. We report on first full 3-dimensional fluid dynamical studies of the strong effects of a first-order phase transition on the evolution and the Tsunami-like Mach shock emission of the QCD matter.
The rapidity dependence of the single and double neutron-to-proton ratios of nucleon emission from the isospin-asymmetric but mass-symmetric reactions Zr+Ru and Ru+Zr in the energy range 100-800 A MeV and impact parameter range 0-8 fm is investigated. A reaction system with isospin asymmetry and mass symmetry has the advantage of simultaneously revealing the dependence on the symmetry energy and on the degree of isospin equilibration. We find that the beam-energy and impact-parameter dependence of the slope parameter of the double neutron-to-proton ratio (F_D) as a function of rapidity is quite sensitive to the density dependence of the symmetry energy, especially at energies E_b ~ 400 A MeV and reduced impact parameters around 0.5. Here the symmetry energy effect on F_D is enhanced as compared to the single neutron-to-proton ratio. The degree of equilibrium with respect to isospin (isospin mixing) in terms of F_D is addressed and its dependence on the symmetry energy is also discussed.
The transverse momentum dependence of the anisotropic flow v_2 for pi, K, nucleon, Lambda, Xi and Omega is studied for Au+Au collisions at sqrt s_NN = 200 GeV within two independent string-hadron transport approaches (RQMD and UrQMD). Although both models reach only 60% of the absolute magnitude of the measured v_2, they both predict the particle type dependence of v_2, as observed by the RHIC experiments: v_2 exhibits a hadron-mass hierarchy (HMH) in the low p_T region and a number-of-constituent-quark (NCQ) dependence in the intermediate p_T region. The failure of the hadronic models to reproduce the absolute magnitude of the observed v_2 indicates that transport calculations of heavy ion collisions at RHIC must incorporate interactions among quarks and gluons in the early, hot and dense phase. The presence of an NCQ scaling in the string-hadron model results suggests that the particle-type dependencies observed in heavy-ion collisions at intermediate p_T are related to the hadronic cross sections in vacuum rather than to the hadronization process itself, as suggested by quark recombination models.
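Number-of-constituent-quark (NCQ) scaling, as discussed above, means that v_2/n_q collapses onto a single curve when plotted against p_T/n_q. A minimal sketch with a hypothetical quark-level flow curve (illustration only, not RQMD/UrQMD output) shows the construction:

```python
import math

def quark_v2(pt):
    # Hypothetical quark-level flow curve: linear rise saturating at 0.08.
    return 0.08 * math.tanh(pt / 0.8)

def hadron_v2(pt, n_q):
    # NCQ-scaling ansatz: v2_hadron(pt) = n_q * v2_quark(pt / n_q)
    return n_q * quark_v2(pt / n_q)

# Scaled curves v2/n_q vs pt/n_q collapse onto the single quark-level curve:
for pt_q in (0.5, 1.0, 1.5):
    meson = hadron_v2(2 * pt_q, 2) / 2    # e.g. pi, K (n_q = 2)
    baryon = hadron_v2(3 * pt_q, 3) / 3   # e.g. p, Lambda, Xi, Omega (n_q = 3)
    print(f"pt/n_q = {pt_q:.1f}: v2/n_q meson = {meson:.4f}, baryon = {baryon:.4f}")
```

Any hadron curve built from the same quark-level function collapses exactly by construction; in data the collapse is only approximate, which is what makes it a diagnostic of the underlying degrees of freedom.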
Several observables of unbound nucleons that are, to some extent, sensitive to the medium modifications of nucleon-nucleon elastic cross sections in neutron-rich intermediate-energy heavy ion collisions are investigated. The effect of the splitting of neutron and proton effective masses on the cross sections is discussed. It is found that the transverse flow as a function of rapidity, Q_zz as a function of momentum, and the ratio R_t/l of the half-width of the transverse to that of the longitudinal rapidity distribution are very sensitive to the medium modifications of the cross sections. The transverse momentum distribution of two-nucleon correlation functions does not yield information on the in-medium cross section.
Elliptic flow analysis at RHIC with the Lee-Yang Zeroes method in a relativistic transport approach
(2006)
The Lee-Yang zeroes method is applied to study elliptic flow (v_2) in Au+Au collisions at sqrt s = 200 A GeV with the UrQMD model. In this transport approach, the true event plane is known, and both nonflow effects and event-by-event v_2 fluctuations are present. Although the low resolution prohibits applying the method to the most central and to peripheral collisions, the integral and differential elliptic flow from the Lee-Yang zeroes method agree very well with the exact v_2 values for semi-central collisions.
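The core of the Lee-Yang zeroes method can be demonstrated with a toy Monte Carlo; the sketch below is not the UrQMD analysis above, and the multiplicity, v_2 and event counts are invented. The first minimum r_0 of the modulus of the generating function |G(ir)| = |<exp(ir Q)>|, with Q = sum_j cos 2(phi_j - theta), estimates the integrated flow via V_2 = M v_2 ~ j_01/r_0, where j_01 ~ 2.405 is the first zero of the Bessel function J_0:

```python
import math, cmath, random

random.seed(7)

J01 = 2.40483                      # first zero of the Bessel function J0
M, V2_TRUE, N_EVENTS = 200, 0.08, 5000

def sample_phi(psi):
    # Rejection-sample an azimuth from dN/dphi ~ 1 + 2 v2 cos 2(phi - psi).
    while True:
        phi = random.uniform(0.0, 2.0 * math.pi)
        accept = (1 + 2 * V2_TRUE * math.cos(2 * (phi - psi))) / (1 + 2 * V2_TRUE)
        if random.random() < accept:
            return phi

# One flow-vector projection Q = sum_j cos(2 phi_j) per event (lab angle theta = 0).
qs = []
for _ in range(N_EVENTS):
    psi = random.uniform(0.0, math.pi)        # random reaction-plane angle
    qs.append(sum(math.cos(2 * sample_phi(psi)) for _ in range(M)))

def g_mod(r):
    # |G(ir)|: modulus of the event-averaged generating function.
    return abs(sum(cmath.exp(1j * r * q) for q in qs) / N_EVENTS)

# Scan a window chosen, by construction, to bracket only the first minimum.
r0 = min((r / 1000.0 for r in range(80, 211)), key=g_mod)
v2_est = J01 / (r0 * M)
print(f"estimated v2 = {v2_est:.3f} (true value {V2_TRUE})")
```

With these toy numbers the estimate lands close to the input v_2; in the transport-model study the analogous comparison is against the exact v_2 known from the true event plane.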
We propose to measure correlations of heavy-flavor hadrons to address the status of thermalization at the partonic stage of light quarks and gluons in high-energy nuclear collisions, shown on the example of azimuthal correlations of D-Dbar pairs. We show that hadronic interactions at the late stage cannot disturb these correlations significantly. Thus, a decrease or the complete absence of these initial correlations indicates frequent interactions of heavy-flavor quarks in the partonic stage. Therefore, early thermalization of light quarks is likely to be reached.
The pion source as seen through HBT correlations at RHIC energies is investigated within the UrQMD approach. We find that the calculated transverse momentum, centrality, and system size dependence of the Pratt-HBT radii R_L and R_S is reasonably well in line with experimental data. The predicted R_O values in central heavy ion collisions are larger than the experimental data. The corresponding quantity sqrt(R_O^2 - R_S^2) of the pion emission source is somewhat larger than experimental estimates.
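In the standard out-side-long parametrization, the quantity sqrt(R_O^2 - R_S^2) quoted above is commonly interpreted through R_O^2 - R_S^2 ~ beta_T^2 * dtau^2, i.e. as a measure of the pion emission duration. A small sketch with hypothetical radii (not the UrQMD values from this study) makes the conversion explicit:

```python
import math

def emission_duration(r_out, r_side, beta_t):
    """Emission duration dtau (fm/c) from R_O^2 - R_S^2 = beta_T^2 * dtau^2.

    r_out, r_side in fm; beta_t is the pair transverse velocity in units of c.
    """
    diff2 = r_out**2 - r_side**2
    if diff2 < 0:
        raise ValueError("R_O < R_S: no real emission-duration solution")
    return math.sqrt(diff2) / beta_t

# Hypothetical numbers for illustration: R_O = 5.2 fm, R_S = 4.4 fm, beta_T = 0.8
print(f"dtau ~ {emission_duration(5.2, 4.4, 0.8):.2f} fm/c")
```

An R_O prediction that exceeds the data, as reported above, thus translates into an overestimated emission duration.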
We introduce a smooth mapping of some discrete space-time symmetries into quasi-continuous ones. Such transformations are related to q-deformations of the dilations of Euclidean space and to non-commutative space. We work out two examples of Hamiltonian invariance under such symmetries. The Schrodinger equation for a free particle is investigated in such a non-commutative plane and a connection with anyonic statistics is found.
Results from various theoretical approaches and ideas presented at this exciting meeting (summary talk at the 5th International Conference on Physics and Astrophysics of Quark Gluon Plasma (ICPAQGP - 2005)) are reviewed. I also point towards future directions, in particular hydrodynamic behaviour induced by jets traveling through the quark-gluon plasma, which might be worth looking at in more detail.
We calculate thermal photon and neutral pion spectra in ultrarelativistic heavy-ion collisions in the framework of three-fluid hydrodynamics. Both spectra are quite sensitive to the equation of state used. In particular, within our model, recent data for S + Au at 200 AGeV can only be understood if a scenario with a phase transition (possibly to a quark-gluon plasma) is assumed. Results for Au+Au at 11 AGeV and Pb + Pb at 160 AGeV are also presented.
Event-by-event multiplicity fluctuations in nucleus-nucleus collisions are studied within the HSD and UrQMD transport models. The scaled variances of negative, positive, and all charged hadrons in Pb+Pb at 158 AGeV are analyzed in comparison to the data from the NA49 Collaboration. We find a dominant role of the fluctuations in the nucleon participant number for the final hadron multiplicity fluctuations. This fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the final multiplicity fluctuations as a function of collision centrality. The analysis reveals surprising effects in the recent NA49 data which indicate a rather strong mixing of the projectile and target hadron production sources even in peripheral collisions.
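The scaled variance used in such analyses is omega = Var(N)/<N>, which equals 1 for a Poisson distribution; superimposed sources with a fluctuating participant number drive omega above this baseline. A minimal sketch with toy multiplicities (not HSD/UrQMD output):

```python
import random

def scaled_variance(mults):
    # omega = Var(N) / <N>; omega = 1 for a Poisson distribution.
    n = len(mults)
    mean = sum(mults) / n
    var = sum((m - mean) ** 2 for m in mults) / n
    return var / mean

random.seed(1)
# Toy sample: binomial multiplicities (nearly Poissonian, omega ~ 1 - p).
events = [sum(random.random() < 0.02 for _ in range(1000)) for _ in range(3000)]
print(f"omega = {scaled_variance(events):.2f}")
```

Mixing event classes with different mean multiplicity, e.g. from a fluctuating number of participant nucleons, broadens the overall distribution and inflates omega, which is the effect the transport-model analysis above isolates.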
The medium modifications of kaon and antikaon masses, compatible with low-energy KN scattering data, are studied in a chiral SU(3) model. The mutual interactions with baryons in hot hadronic matter and the effects of the baryonic Dirac sea on the K (Kbar) masses are examined. The in-medium masses from the chiral SU(3) effective model are compared to those from chiral perturbation theory. Furthermore, the influence of these in-medium effects on kaon rapidity distributions and transverse energy spectra, as well as on the K, Kbar flow pattern in heavy-ion collision experiments at 1.5 to 2 A·GeV, is investigated within the HSD transport approach. Detailed predictions on the transverse momentum and rapidity dependence of the directed flow v1 and the elliptic flow v2 are provided for Ni+Ni at 1.93 A·GeV within the various models, which can be used to determine the in-medium K± properties experimentally in the near future.
Charmonium production and suppression in heavy-ion collisions at relativistic energies is investigated within different models, i.e. the comover absorption model, the threshold suppression model, the statistical coalescence model and the HSD transport approach. In HSD, the charmonium dissociation cross sections with mesons are described by a simple phase-space parametrization including an effective coupling strength |M_i|^2 for the charmonium states i = chi_c, J/psi, psi'. This allows the inclusion of the backward channels for charmonium reproduction through DDbar channels, which are missed in the comover absorption and threshold suppression models, by employing detailed balance without introducing any new parameters. It is found that all approaches yield a reasonable description of J/psi suppression in S+U and Pb+Pb collisions at SPS energies. However, they differ significantly in the psi'/J/psi ratio versus centrality at SPS and especially at RHIC energies. These pronounced differences can be exploited in future measurements at RHIC to distinguish the hadronic rescattering scenarios from quark coalescence close to the QGP phase boundary.