Glia, the helper cells of the brain, are essential for maintaining neural resilience over time and across varying challenges: by reacting to changes in neuronal health, glia carefully balance the repair or disposal of injured neurons. Malfunction of these interactions is implicated in many neurodegenerative diseases. We present a reductionist model that mimics repair-or-dispose decisions to generate a hypothesis for the cause of disease onset. The model assumes four tissue states: healthy tissue, challenged tissue, primed tissue at risk of acute damage propagation, and chronic neurodegeneration. We discuss analogies to the progression stages observed in the most common neurodegenerative conditions and to experimental observations of the cellular signaling pathways of glia-neuron crosstalk. The model suggests that the onset of neurodegeneration can result as a compromise between two conflicting goals: short-term resilience to stressors versus long-term prevention of tissue damage.
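The four tissue states above can be pictured as a toy stochastic state machine. This is purely an illustrative sketch: the state names come from the abstract, but every transition and probability below is an invented assumption, not the authors' published model.

```python
import random

# Hypothetical four-state sketch of a repair-or-dispose tissue model.
# All transition probabilities are illustrative assumptions.
STATES = ["healthy", "challenged", "primed", "degenerating"]

# TRANSITIONS[state] -> list of (next_state, probability)
TRANSITIONS = {
    "healthy":      [("healthy", 0.90), ("challenged", 0.10)],
    "challenged":   [("healthy", 0.60),      # glial repair succeeds
                     ("primed", 0.35),       # repeated stress primes the tissue
                     ("challenged", 0.05)],
    "primed":       [("healthy", 0.20),      # disposal of injured neurons resolves risk
                     ("degenerating", 0.30), # acute damage propagates
                     ("primed", 0.50)],
    "degenerating": [("degenerating", 1.0)], # chronic state is absorbing
}

def step(state, rng):
    """Draw the next tissue state from the transition table."""
    next_states, probs = zip(*TRANSITIONS[state])
    return rng.choices(next_states, weights=probs, k=1)[0]

def simulate(n_steps=200, seed=1):
    """Simulate one tissue trajectory starting from the healthy state."""
    rng = random.Random(seed)
    state = "healthy"
    trajectory = [state]
    for _ in range(n_steps):
        state = step(state, rng)
        trajectory.append(state)
    return trajectory
```

Making "degenerating" absorbing mirrors the tension named in the abstract: transitions tuned for quick short-term recovery can still funnel probability into the irreversible chronic state over long times.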
Oscillations play a critical role in cognitive phenomena and have been observed in many brain regions. Experimental evidence indicates that certain classes of neurons exhibit properties that could promote oscillations, such as subthreshold resonance and electrical gap junctions. Typically, these two properties are studied separately, and it is not clear which is the dominant determinant of global network rhythms. Our aim is to provide an analytical understanding of how these two effects destabilize the fluctuation-driven state, in which neurons fire irregularly, and lead to the emergence of global synchronous oscillations. Here we show how the oscillation frequency is shaped by single-neuron resonance and by electrical and chemical synapses. The presence of both gap junctions and subthreshold resonance is necessary for the emergence of oscillations. Our results agree with several experimental observations, such as network responses to oscillatory inputs, and offer a much-needed conceptual link connecting a collection of disparate effects observed in networks.
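Subthreshold resonance, one of the two single-cell properties discussed above, can be illustrated with a minimal linear resonator: a leaky membrane coupled to one slow feedback conductance. The parameters below are illustrative assumptions, not values from the paper; the point is only that the impedance magnitude peaks at a nonzero frequency.

```python
import math

# Membrane admittance Y(w) = g_L + i*w*C + g_1/(1 + i*w*tau_1);
# impedance |Z(f)| = 1/|Y| peaks at f > 0, the signature of resonance.
G_L  = 10e-9    # leak conductance (S) -- illustrative value
C    = 200e-12  # membrane capacitance (F)
G_1  = 20e-9    # slow feedback conductance (S)
TAU1 = 0.1      # feedback time constant (s)

def impedance(f_hz):
    """Magnitude of the membrane impedance at frequency f_hz (ohms)."""
    w = 2 * math.pi * f_hz
    y = complex(G_L, w * C) + G_1 / complex(1, w * TAU1)
    return 1 / abs(y)

def resonance_frequency(f_max=50.0, df=0.05):
    """Locate the impedance peak on a frequency grid [0, f_max] Hz."""
    freqs = [i * df for i in range(int(f_max / df) + 1)]
    return max(freqs, key=impedance)
```

The slow conductance suppresses the low-frequency response while the capacitance suppresses the high-frequency response, so the cell preferentially amplifies inputs near the peak; in a network, such a single-cell preference can seed the frequency of a global rhythm.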
To crack the neural code and read out the information that neural spikes convey, it is essential to understand how the information is encoded and how much of it is available for decoding. To this end, it is indispensable to derive from first principles a minimal set of spike features that contains the complete information content of a neuron. Here we present such a complete set of coding features. We show that temporal pairwise spike correlations fully determine the information conveyed by a single spiking neuron with finite temporal memory and stationary spike statistics. We reveal that interspike-interval temporal correlations, which are often neglected, can significantly change the total information. Our findings provide a conceptual link between numerous disparate observations and recommend shifting the focus of future studies from firing rates to pairwise spike correlation functions as the primary determinants of neural information.
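The interspike-interval (ISI) correlations highlighted above can be estimated directly from a spike train. As a hedged sketch (not the paper's derivation), the toy generator below produces ISIs with a known negative lag-1 serial correlation via a moving-average construction, T_i = mean + e_i + theta*e_(i-1), whose lag-1 correlation is theta/(1+theta^2); all parameters are invented for illustration.

```python
import random

def generate_isis(n, mean_isi=0.02, theta=-0.5, noise=0.004, seed=0):
    """ISIs T_i = mean + e_i + theta*e_{i-1}; lag-1 corr = theta/(1+theta^2)."""
    rng = random.Random(seed)
    e_prev = rng.uniform(-noise, noise)
    isis = []
    for _ in range(n):
        e = rng.uniform(-noise, noise)
        isis.append(mean_isi + e + theta * e_prev)
        e_prev = e
    return isis

def serial_correlation(isis, lag=1):
    """Pearson correlation between T_i and T_{i+lag}."""
    n = len(isis) - lag
    m = sum(isis) / len(isis)
    cov = sum((isis[i] - m) * (isis[i + lag] - m) for i in range(n)) / n
    var = sum((t - m) ** 2 for t in isis) / len(isis)
    return cov / var
```

A renewal-process description of this train (shuffled ISIs) would erase the lag-1 structure entirely, which is exactly the kind of often-neglected correlation the abstract argues can change the total information.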
The intensity and the features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity rescales the activity of the population without changing its selectivity for the stimulus features; the cortical representation of the stimulus is therefore intensity-invariant. This emergence of network-invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though (i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and (ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the nonlinear network behavior. In this study we investigate whether invariant representations are consistent with a variety of synaptic states in balanced networks. Using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity. In particular, we demonstrate that an effective power-law synaptic transformation at the population level is necessary for invariance. Within a range of firing rates, purely depressing short-term synapses fulfill this condition, and in this case the network is contrast-invariant. In contrast, facilitating short-term plasticity generally narrows the network selectivity. We find that facilitating and depressing short-term plasticity can be combined to approximate a power law that leads to contrast invariance. These results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
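The role of the power law above has a short worked demonstration. If the population response is r(theta; c) = k * (c * s(theta))^alpha, then normalizing by the peak cancels every factor of the contrast c, so the tuning-curve shape is contrast-invariant. The Gaussian tuning profile and the constants below are illustrative assumptions, not the paper's fitted parameters.

```python
import math

def tuning(theta, preferred=0.0, width=0.5):
    """Feedforward selectivity profile s(theta): a Gaussian bump."""
    return math.exp(-((theta - preferred) ** 2) / (2 * width ** 2))

def population_response(thetas, contrast, k=2.0, alpha=2.2):
    """Power-law transformation of the contrast-scaled input drive."""
    return [k * (contrast * tuning(th)) ** alpha for th in thetas]

def normalized(rates):
    """Divide out the peak response; a power law makes this contrast-free."""
    peak = max(rates)
    return [r / peak for r in rates]
```

Because (c*s)^alpha / (c*s_max)^alpha = (s/s_max)^alpha, any exponent alpha preserves the normalized shape, whereas a saturating or expansive-then-saturating transfer function would broaden or narrow it with contrast, which is the failure mode attributed to facilitation in the abstract.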
The prevalence and specificity of local protein synthesis during neuronal synaptic plasticity
(2021)
To supply proteins to their vast volume, neurons localize mRNAs and ribosomes in dendrites and axons. While local protein synthesis is required for synaptic plasticity, the abundance and distribution of ribosomes and nascent proteins near synapses remain elusive. Here, we quantified the occurrence of local translation and visualized the range of synapses supplied by nascent proteins during basal and plastic conditions. We detected dendritic ribosomes and nascent proteins at single-molecule resolution using DNA-PAINT and metabolic labeling. Both ribosomes and nascent proteins positively correlated with synapse density. Ribosomes were detected at ~85% of synapses with ~2 translational sites per synapse; ~50% of the nascent protein was detected near synapses. The amount of locally synthesized protein detected at a synapse correlated with its spontaneous Ca2+ activity. A multifold increase in synaptic nascent protein was evident following both local and global plasticity at respective scales, albeit with substantial heterogeneity between neighboring synapses.
Neural computations emerge from recurrent neural circuits that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded features of neural activity. However, it is challenging to predict which spiking network connectivity configurations and neural properties can generate fundamental operational states and the specific nonlinear cortical computations reported experimentally. Theoretical descriptions of the computational state of cortical spiking circuits are diverse, including the balanced state, where excitatory and inhibitory inputs balance almost perfectly, and the inhibition-stabilized network (ISN) state, where the excitatory part of the circuit is unstable on its own. It remains an open question whether these states can coexist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We established a mapping between the stabilized supralinear network (SSN) and spiking activity, which allowed us to pinpoint the location in parameter space where these activity regimes occur. Notably, we found that biologically sized spiking networks can exhibit irregular asynchronous activity that requires neither a strong excitation-inhibition balance nor large feedforward input, and we showed that the dynamic firing-rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
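The stabilized supralinear network (SSN) referenced above has a compact two-population rate form, dr/dt = (-r + k*[W r + h]_+^n)/tau with a supralinear exponent n > 1. The sketch below integrates that standard form with Euler steps; the weights and gain constants are commonly used illustrative values, not the parameters of the spiking mapping in the paper.

```python
def simulate_ssn(h, w_ee=1.25, w_ei=0.65, w_ie=1.2, w_ii=0.5,
                 k=0.04, n=2.0, tau_e=0.02, tau_i=0.01,
                 dt=1e-4, t_max=1.0):
    """Euler-integrate the E/I SSN rate equations; return (r_E, r_I) at t_max.

    h     : feedforward input drive to both populations
    k, n  : supralinear power-law gain r = k * [input]_+^n
    """
    r_e = r_i = 0.0
    for _ in range(int(t_max / dt)):
        z_e = max(w_ee * r_e - w_ei * r_i + h, 0.0)  # rectified E input
        z_i = max(w_ie * r_e - w_ii * r_i + h, 0.0)  # rectified I input
        r_e += dt * (-r_e + k * z_e ** n) / tau_e
        r_i += dt * (-r_i + k * z_i ** n) / tau_i
    return r_e, r_i
```

At weak drive the expansive gain makes the response grow faster than linearly in h, while recurrent inhibition keeps the dynamics at a stable fixed point despite the supralinear nonlinearity; this loose-balance behavior is what makes the SSN a useful reference point for the spiking regimes discussed in the abstract.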