2 publications · 2022 · English · full text available
Neural computations emerge from recurrent neural circuits comprising hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about network structure and reproduce recorded features of neural activity. However, it is challenging to predict which spiking-network connectivity configurations and neural properties can generate fundamental operational states and the specific nonlinear cortical computations reported experimentally. Theoretical descriptions of the computational state of cortical spiking circuits are diverse, including the balanced state, where excitatory and inhibitory inputs balance almost perfectly, and the inhibition-stabilized network (ISN) state, where the excitatory part of the circuit is unstable on its own. It remains an open question whether these states can coexist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking-network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We established a mapping between the stabilized supralinear network (SSN) and spiking activity, which allowed us to pinpoint the locations in parameter space where these activity regimes occur. Notably, we found that biologically sized spiking networks can have irregular asynchronous activity that does not require strong excitation-inhibition balance or large feedforward input, and we showed that dynamic firing-rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.
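The stabilized supralinear network mentioned above is a rate model with a rectified power-law transfer function. A minimal sketch of its dynamics, tau * dr/dt = -r + k * [W r + h]_+^n, is given below; all parameter values (k, n, W, tau) are illustrative choices in the spirit of standard two-population SSN models, not values taken from the paper.

```python
import numpy as np

# Minimal 2-population (E, I) stabilized supralinear network (SSN) sketch.
# Rate dynamics: tau * dr/dt = -r + k * [W @ r + h]_+ ** n
# All parameters are illustrative, not taken from the paper.
k, n = 0.01, 2.0                     # gain and exponent of the power law
tau = np.array([0.02, 0.01])         # time constants (E, I) in seconds
W = np.array([[2.5, -1.3],           # W[i, j]: weight from population j to i
              [2.4, -1.0]])          # inhibition dominates: det(W) > 0

def simulate(h, dt=1e-4, T=0.5):
    """Euler-integrate the SSN to steady state for constant input h."""
    r = np.zeros(2)
    for _ in range(int(T / dt)):
        drive = np.maximum(W @ r + h, 0.0) ** n   # rectified supralinear drive
        r += dt / tau * (-r + k * drive)
    return r

r_low = simulate(h=np.array([2.0, 2.0]))     # weak feedforward input
r_high = simulate(h=np.array([20.0, 20.0]))  # strong feedforward input
```

Because the effective gain k * n * drive**(n-1) grows with activity, the same network behaves nearly linearly at weak input and becomes strongly inhibition-stabilized at strong input, which is the mechanism the mapping to spiking activity exploits.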
The intensity and features of sensory stimuli are encoded in the activity of cortical neurons. In the visual and piriform cortices, stimulus intensity rescales the activity of the population without changing its selectivity for stimulus features; the cortical representation of the stimulus is therefore intensity invariant. The emergence of such network-invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though (i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and (ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the nonlinear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. Using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity. In particular, we demonstrate that an effective power-law synaptic transformation at the population level is necessary for invariance. Within a range of firing rates, purely depressing short-term synapses fulfill this condition, and in this case the network is contrast invariant. In contrast, facilitating short-term plasticity generally narrows the network selectivity. We found that facilitating and depressing short-term plasticity can be combined to approximate a power law that leads to contrast invariance. These results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
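The role of the power law in this second abstract can be illustrated directly: if the population transfer is r = A * h**n, scaling the input by a contrast factor c multiplies every output by c**n, so the normalized tuning curve is unchanged, whereas a saturating (non-power-law) transfer distorts it. The tuning curve g, the exponent, and the saturating alternative below are hypothetical choices for illustration, not the paper's synapse models.

```python
import numpy as np

# Input tuning over a feature dimension (illustrative Gaussian profile).
theta = np.linspace(-np.pi, np.pi, 181)
g = np.exp(-theta**2 / 0.5)

def normalized(r):
    """Tuning shape: response divided by its peak."""
    return r / r.max()

def power_law(h, A=1.0, n=2.0):
    # r = A * h**n: contrast c scales output by c**n, shape preserved.
    return A * h**n

def saturating(h, h_half=1.0):
    # Saturating transfer (illustrative): shape broadens with contrast.
    return h / (h + h_half)

low, high = 0.5 * g, 4.0 * g   # same feature tuning, two intensities

# Power law: normalized tuning identical at both contrasts (invariant).
inv_err = np.max(np.abs(normalized(power_law(low)) - normalized(power_law(high))))

# Saturating transfer: normalized tuning changes with contrast (not invariant).
sat_err = np.max(np.abs(normalized(saturating(low)) - normalized(saturating(high))))
```

Here `inv_err` is zero up to floating point while `sat_err` is substantial, which is the population-level condition the abstract states: only an effective power-law transformation yields contrast-invariant selectivity.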