Rhythmic neural spiking and attentional sampling arising from cortical receptive field interactions
(2018)
Summary: Growing evidence suggests that distributed spatial attention may invoke theta (3-9 Hz) rhythmic sampling processes. The neuronal basis of such attentional sampling is, however, not fully understood. Here we show, using array recordings in visual cortical area V4 of two awake macaques, that presenting separate visual stimuli to the excitatory center and suppressive surround of neuronal receptive fields elicits rhythmic multi-unit activity (MUA) at 3-6 Hz. This neuronal rhythm did not depend on small fixational eye movements. In the context of a distributed spatial attention task, during which the monkeys detected a spatially and temporally uncertain target, reaction times (RTs) exhibited similar rhythmic fluctuations. RTs were fast or slow depending on whether the target occurred during high or low MUA, resulting in rhythmic MUA-RT cross-correlations at theta frequencies. These findings suggest that theta-rhythmic neuronal activity arises from competitive receptive field interactions and that this rhythm may subserve attentional sampling.
Highlights:
* Center-surround interactions induce theta-rhythmic MUA of visual cortex neurons
* The MUA rhythm does not depend on small fixational eye movements
* Reaction time fluctuations lock to the neuronal rhythm under distributed attention
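The reported MUA-RT relationship can be illustrated with a toy cross-correlation analysis. Everything below (sampling rate, the 4 Hz frequency, noise level) is our own simulation, not the study's data or analysis code; it only shows how rhythmic MUA-RT cross-correlations at theta frequencies would look:

```python
import numpy as np

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)                 # 10 s of simulated "trial time"
mua = np.sin(2 * np.pi * 4 * t)              # 4 Hz MUA rhythm (theta range)
# fast RTs (negative deviations) when the target lands on high MUA
rt = -np.sin(2 * np.pi * 4 * t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)

def xcorr(a, b, max_lag):
    """Normalised cross-correlation of a[t] with b[t + lag]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.mean(a[max(0, -l):len(a) - max(0, l)]
                                   * b[max(0, l):len(b) - max(0, -l)])
                           for l in lags])

lags, cc = xcorr(mua, rt, int(0.5 * fs))     # lags up to +/- 0.5 s
# the cross-correlogram itself oscillates; find its dominant frequency
freqs = np.fft.rfftfreq(cc.size, 1 / fs)
peak_freq = freqs[np.argmax(np.abs(np.fft.rfft(cc - cc.mean())))]
```

With this construction the MUA-RT cross-correlogram oscillates at roughly the 4 Hz of the underlying rhythm, mirroring the paper's theta-band result.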
Highlights:
* Microstimulation of visual area V4 improves visual stimulus detection
* Effects of V4 microstimulation extend to the other hemifield
* Microstimulation effects are time dependent and consistent with attention dynamics
Summary: Neuronal activity in visual area V4 is well known to be modulated by selective attention, and there are reports of V4 lesions leading to attentional deficits. However, it remains unclear whether V4 microstimulation can elicit attentional benefits. To test this, we performed local microstimulation in area V4 and explored its spatial and temporal dynamics in two macaque monkeys performing a visual detection task. Microstimulation was delivered via chronically implanted multi-electrode arrays. We found that microstimulation increases average performance by 35% and reduces luminance detection thresholds by 30%. This benefit critically depends on the onset of microstimulation relative to the stimulus, consistent with known dynamics of endogenous attention. These results show that local microstimulation of V4 can improve behavior and highlight the critical role of V4 in attention.
Convolutional neural networks (CNNs) are among the most successful computer vision systems for object recognition. Furthermore, CNNs have major applications in understanding the nature of visual representations in the human brain. Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans'. Specifically, there is a major debate about whether CNNs primarily rely on surface regularities of objects, or whether they are capable of exploiting the spatial arrangement of features, similar to humans. Here, we develop a novel feature-scrambling approach to explicitly test whether CNNs use the spatial arrangement of features (i.e. object parts) to classify objects. We combine this approach with a systematic manipulation of the effective receptive field sizes of CNNs, as well as minimal recognizable configurations (MIRCs) analysis. In contrast to much previous literature, we provide evidence that CNNs are in fact capable of using relatively long-range spatial relationships for object classification. Moreover, the extent to which CNNs use spatial relationships depends heavily on the dataset, e.g. texture vs. sketch. In fact, CNNs even use different strategies for different classes within heterogeneous datasets (ImageNet), suggesting that CNNs have a continuous spectrum of classification strategies. Finally, we show that CNNs learn the spatial arrangement of features only up to an intermediate level of granularity, which suggests that intermediate rather than global shape features provide the optimal trade-off between sensitivity and specificity in object classification. These results provide novel insights into the nature of CNN representations and the extent to which they rely on the spatial arrangement of features for object classification.
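The core manipulation behind such a feature-scrambling test can be sketched as follows. This is our own minimal construction, not the authors' pipeline: split an image into a grid of non-overlapping patches and permute them. Coarse grids destroy the global arrangement of parts while preserving local features; finer grids progressively destroy local structure too:

```python
import numpy as np

def scramble_patches(img, grid, rng):
    """Permute the non-overlapping grid x grid patches of a 2D image."""
    h, w = img.shape
    ph, pw = h // grid, w // grid
    patches = [img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw].copy()
               for r in range(grid) for c in range(grid)]
    order = rng.permutation(len(patches))
    out = np.empty_like(img)
    for k, idx in enumerate(order):
        r, c = divmod(k, grid)
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = patches[idx]
    return out

rng = np.random.default_rng(0)
img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in for an image
scrambled = scramble_patches(img, grid=2, rng=rng)
# pixel content is preserved; only the spatial arrangement of parts changes
```

A classifier that still recognizes coarsely scrambled images cannot be relying on the global arrangement of those parts, which is the logic the abstract describes.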
In a dynamic environment, the already limited information that human working memory can maintain needs to be constantly updated to optimally guide behaviour. Indeed, previous studies showed that working memory representations are continuously transformed during delay periods leading up to a response. This goes hand in hand with the removal of task-irrelevant items. However, does such removal also include veridical, original stimuli as they were prior to transformation? Here we aimed to assess the neural representation of task-relevant transformed representations, compared to the no-longer-relevant veridical representations from which they originated. We applied multivariate pattern analysis to electroencephalographic data during maintenance of orientation gratings with and without mental rotation. During maintenance, we perturbed the representational network by means of a visual impulse stimulus and were thus able to decode veridical as well as imaginary, transformed orientation gratings from impulse-driven activity. On the one hand, the impulse response reflected only task-relevant (cued), but not task-irrelevant (uncued) items, suggesting that the latter were quickly discarded from working memory. On the other hand, even though the original cued orientation gratings were no longer task-relevant after mental rotation, these items continued to be represented alongside the rotated ones, in different representational formats. This seemingly inefficient use of scarce working memory capacity was associated with reduced probe response times and may thus serve to increase precision and flexibility in guiding behaviour in dynamic environments.
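The decoding step of such a multivariate pattern analysis can be illustrated with a toy nearest-centroid classifier. All data below are synthetic stand-ins of our own making (channel counts, class templates, noise level are assumptions), not the EEG recordings from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_trials = 32, 100

orientations = rng.integers(0, 2, n_trials)        # two orientation classes
templates = rng.standard_normal((2, n_channels))   # class-specific "impulse" patterns
data = templates[orientations] + rng.standard_normal((n_trials, n_channels))

# split trials into train/test halves and fit one centroid per class
train, test = np.arange(0, 50), np.arange(50, 100)
centroids = np.stack([data[train][orientations[train] == k].mean(axis=0)
                      for k in (0, 1)])
# classify each test trial by its nearest class centroid
dists = np.linalg.norm(data[test][:, None] - centroids[None], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = np.mean(pred == orientations[test])
```

Above-chance accuracy on held-out trials is the evidence, in this scheme, that the perturbed network still carries a decodable trace of the remembered (or rotated) orientation.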
We explore the potential of optically-pumped magnetometers (OPMs) to infer the laminar origins of neural activity non-invasively. OPM sensors can be positioned closer to the scalp than conventional cryogenic MEG sensors, opening an avenue to higher spatial resolution when combined with high-precision forward modelling. By simulating the forward model projection of single dipole sources onto OPM sensor arrays with varying sensor densities and measurement axes, and employing sparse source reconstruction approaches, we find that laminar inference with OPM arrays is possible at relatively low sensor counts at moderate to high signal-to-noise ratios (SNR). We observe improvements in laminar inference with increasing spatial sampling densities and number of measurement axes. Surprisingly, moving sensors closer to the scalp is less advantageous than anticipated, and even detrimental at high SNRs. Biases towards both the superficial and deep surfaces at very low SNRs, and a notable bias towards the deep surface when combining empirical Bayesian beamformer (EBB) source reconstruction with a whole-brain analysis, pose further challenges. Adequate SNR, achieved through appropriate trial numbers and shielding, as well as precise co-registration, is crucial for reliable laminar inference with OPMs.
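The basic proximity advantage of on-scalp sensors can be illustrated with the textbook infinite-homogeneous-medium field of a current dipole, B(r) = mu0/(4*pi) * Q x (r - r_q) / |r - r_q|^3. This is not the study's high-precision forward model, and the geometry below (source depth, sensor stand-offs) is invented for illustration:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(q, r_q, r):
    """Infinite-medium magnetic field of current dipole q at r_q, seen at r."""
    d = r - r_q
    return MU0 / (4 * np.pi) * np.cross(q, d) / np.linalg.norm(d) ** 3

q = np.array([0.0, 1e-8, 0.0])       # 10 nA*m dipole moment (typical order)
r_q = np.array([0.0, 0.0, 0.07])     # source 7 cm above head centre (assumed)
opm = np.array([0.0, 0.0, 0.10])     # on-scalp OPM ~3 cm from the source
squid = np.array([0.0, 0.0, 0.12])   # cryogenic sensor ~5 cm from the source

b_opm = np.linalg.norm(dipole_field(q, r_q, opm))
b_squid = np.linalg.norm(dipole_field(q, r_q, squid))
# for this radial geometry the field magnitude falls off as 1/distance^2,
# so the closer OPM sees a (5/3)^2 ~ 2.8x larger signal
```

The abstract's point is that, despite this raw signal gain, proximity alone does not guarantee better laminar inference once source reconstruction biases and SNR enter the picture.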
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e. convey information beyond what is available from the isolated signals: any two brain signals can either share common information (redundancy) or encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five common awake marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance, feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
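The co-information measure can be made concrete on toy discrete variables. This is our own worked example, not the ECoG analysis: CoI(X;Y;Z) = I(X;Z) + I(Y;Z) - I(X,Y;Z), with positive values indicating redundancy and negative values synergy.

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution of samples."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def mutual_info(xs, zs):
    return entropy(xs) + entropy(zs) - entropy(list(zip(xs, zs)))

def co_information(xs, ys, zs):
    return (mutual_info(xs, zs) + mutual_info(ys, zs)
            - mutual_info(list(zip(xs, ys)), zs))

# all four equiprobable joint states of two binary sources
pairs = [(x, y) for x in (0, 1) for y in (0, 1)]
xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]

# XOR target: neither X nor Y alone predicts Z, but together they do
coi_xor = co_information(xs, ys, [x ^ y for x, y in pairs])   # -1 bit: synergy
# Copy target: X alone already carries everything Z does
coi_copy = co_information(xs, xs, xs)                         # +1 bit: redundancy
```

In the abstract's terms, ERP and BB signals whose joint statistics resemble the XOR case carry synergistic prediction error information.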
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. Here, we hypothesized that attentional and contextual top-down signals interact in V1 in a manner that primarily benefits the representation of natural visual stimuli, which are rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we found that attention enhanced the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Population analysis revealed that neuronal responses converged to a low-dimensional subspace for natural but not for synthetic images. Moreover, we determined that the attentional enhancement in stimulus decodability was captured by the dominant low-dimensional subspace, suggesting an alignment between the attentional and natural stimulus variance. The alignment was pronounced for late evoked responses but not for early transient responses of V1 neurons, supporting the notion that top-down feedback was required. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
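What it means for population responses to "converge to a low-dimensional subspace" aligned with stimulus variance can be sketched with simulated data and PCA. Everything here (neuron and trial counts, the single shared axis, noise levels) is our own assumed construction, not the recorded V1 data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200

# a single population axis carries both the stimulus signal and shared
# trial-to-trial variability; the rest is private noise per neuron
signal_axis = rng.standard_normal(n_neurons)
signal_axis /= np.linalg.norm(signal_axis)
labels = rng.integers(0, 2, n_trials)                 # two "scenes"
shared = rng.standard_normal(n_trials)                # shared fluctuations
responses = (np.outer(2 * labels - 1, signal_axis)        # stimulus signal
             + 1.5 * np.outer(shared, signal_axis)        # aligned variability
             + 0.3 * rng.standard_normal((n_trials, n_neurons)))  # private noise

# PCA via SVD on mean-centred responses
centred = responses - responses.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
var_explained = s ** 2 / np.sum(s ** 2)

# the dominant PC captures a large variance share and aligns with the
# stimulus-coding axis, mirroring the reported subspace alignment
alignment = abs(vt[0] @ signal_axis)
```

In this toy setting, decoding the scene label from the top principal component alone works nearly as well as from the full population, which is the sense in which attentional decoding gains can be "captured by the dominant low-dimensional subspace".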
Anticipating future events is a key computational task for neuronal networks. Experimental evidence suggests that reliable temporal sequences in neural activity play a functional role in the association and anticipation of events in time. However, how neurons can differentiate and anticipate multiple spike sequences remains largely unknown. We implement a learning rule based on predictive processing, where neurons exclusively fire for the initial, unpredictable inputs in a spiking sequence, leading to an efficient representation with reduced post-synaptic firing. Combining this mechanism with inhibitory feedback leads to sparse firing in the network, enabling neurons to selectively anticipate different sequences in the input. We demonstrate that intermediate levels of inhibition are optimal to decorrelate neuronal activity and to enable the prediction of future inputs. Notably, each sequence is independently encoded in the sparse, anticipatory firing of the network. Overall, our results demonstrate that the interplay of self-supervised predictive learning rules and inhibitory feedback enables fast and efficient classification of different input sequences.
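The core idea that neurons fire only for the unpredictable start of a learned sequence can be captured in a deliberately simplified sketch. This toy transition-learning unit is our own construction (no spiking dynamics, no inhibitory population), meant only to illustrate the "fire for unpredicted inputs" principle:

```python
from collections import defaultdict

class PredictiveUnit:
    """Fires for inputs its current context does not predict."""

    def __init__(self):
        self.transitions = defaultdict(set)  # context -> predicted next inputs
        self.prev = None

    def reset(self):
        """Blank period between sequence presentations: context is lost."""
        self.prev = None

    def step(self, x, learn=True):
        predicted = self.prev is not None and x in self.transitions[self.prev]
        if learn and self.prev is not None:
            self.transitions[self.prev].add(x)   # learn the transition
        self.prev = x
        return not predicted                     # fire only if unpredicted

unit = PredictiveUnit()
for _ in range(3):               # repeated presentations of the sequence A-B-C
    unit.reset()
    for x in "ABC":
        unit.step(x)

unit.reset()
firing = [unit.step(x, learn=False) for x in "ABC"]
# firing == [True, False, False]: after learning, only the sequence onset
# is unpredicted, giving the sparse, reduced post-synaptic firing the
# abstract describes
```

Separate units of this kind, each tuned to a different sequence's onset, would then independently signal which sequence is unfolding, analogous to the selective anticipation reported above.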