The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. Previous studies have shown that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. Using functional magnetic resonance imaging, we investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants performed an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback from higher visual areas, especially the motion-sensitive human MT/V5 complex.
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. Here, we hypothesized that attentional and contextual top-down signals interact in V1 in a manner that primarily benefits the representation of natural visual stimuli, which are rich in high-order statistical structure. Recording from two macaques engaged in a spatial attention task, we found that attention enhanced the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Population analysis revealed that neuronal responses converged to a low-dimensional subspace for natural but not for synthetic images. Critically, we determined that the attentional enhancement in stimulus decodability was captured by the dominant low-dimensional subspace, suggesting an alignment between the attentional and natural-stimulus variance. The alignment was pronounced for late evoked responses but not for early transient responses of V1 neurons, supporting the notion that top-down feedback was required. We argue that attention and perception share top-down pathways, which mediate hierarchical interactions optimized for natural vision.
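The subspace-alignment idea above can be made concrete with a toy sketch. All data, dimensions, and function names below are illustrative assumptions, not the study's analysis pipeline: we extract the dominant axis of simulated population responses by power iteration and check how much of a hypothetical modulation vector that axis captures.

```python
import random

random.seed(2)

DIM, N = 8, 500  # illustrative neuron count and trial count

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical dominant "natural-scene" response axis (unit vector).
axis = [1.0] + [0.0] * (DIM - 1)

# Evoked responses: strong variance along that axis plus isotropic noise.
samples = []
for _ in range(N):
    g = random.gauss(0.0, 3.0)
    samples.append([g * a + random.gauss(0.0, 1.0) for a in axis])

def covariance(xs):
    mean = [sum(col) / len(xs) for col in zip(*xs)]
    cov = [[0.0] * DIM for _ in range(DIM)]
    for x in xs:
        d = [xi - mi for xi, mi in zip(x, mean)]
        for i in range(DIM):
            for j in range(DIM):
                cov[i][j] += d[i] * d[j] / len(xs)
    return cov

def dominant_axis(cov, iters=300):
    """Power iteration: dominant eigenvector (the k=1 dominant subspace)."""
    v = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    for _ in range(iters):
        w = [dot(row, v) for row in cov]
        n = dot(w, w) ** 0.5
        v = [x / n for x in w]
    return v

v1 = dominant_axis(covariance(samples))

# An attentional gain change applied along the same axis (by assumption)
# is well captured by the dominant subspace; an orthogonal change is not.
attn_aligned = axis
attn_orthogonal = [0.0, 1.0] + [0.0] * (DIM - 2)
print(abs(dot(v1, attn_aligned)), abs(dot(v1, attn_orthogonal)))
```

The point of the sketch is only the geometry: when attentional modulation lies in the same low-dimensional subspace that carries most of the stimulus-evoked variance, the projection onto that subspace captures the modulation almost entirely.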
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system it would allow for ultrafast processing of visual scenes.
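A rank-order decoder of the kind described can be sketched on entirely synthetic data; the templates, jitter level, and helper names below are my own assumptions. Each stimulus is defined only by the order in which neurons respond, and a global latency shift (mimicking a change in absolute timing) leaves that order, and hence the decoder, unaffected.

```python
import random

random.seed(0)

N_NEURONS, N_STIM, N_TRIALS = 12, 3, 20

# Each stimulus is defined by a characteristic latency template:
# the *order* in which neurons respond, not their absolute timing.
templates = [random.sample(range(N_NEURONS), N_NEURONS) for _ in range(N_STIM)]

def trial_latencies(stim, jitter=0.3):
    """Synthetic single-trial latencies: template order plus noise, plus a
    global shift that changes absolute timing but not the rank order."""
    shift = random.uniform(0.0, 5.0)
    return [templates[stim][n] + shift + random.gauss(0.0, jitter)
            for n in range(N_NEURONS)]

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rank correlation (no ties in this synthetic setting)."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

def decode_by_rank(lat):
    # Pick the stimulus whose template best matches the response order.
    sims = [spearman(lat, templates[s]) for s in range(N_STIM)]
    return sims.index(max(sims))

correct = 0
for stim in range(N_STIM):
    for _ in range(N_TRIALS):
        correct += decode_by_rank(trial_latencies(stim)) == stim
accuracy = correct / (N_STIM * N_TRIALS)
print(f"rank-order decoding accuracy: {accuracy:.2f}")
```

Because the decoder compares ranks rather than latencies, it needs only enough of the response to establish the order, which is one intuition for why sequence decoders can work on shorter intervals than rate decoders.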
To investigate the involvement of primary visual cortex (V1) in working memory (WM), parallel multisite recordings of multiunit activity were obtained from monkey V1 while the animals performed a delayed match-to-sample (DMS) task. During the delay period, V1 population firing-rate vectors maintained a lingering trace of the sample stimulus that could be reactivated by intervening impulse stimuli that enhanced neuronal firing. This fading trace of the sample did not require active engagement of the monkeys in the DMS task and likely reflects the intrinsic dynamics of recurrent cortical networks in lower visual areas. This renders an active, attention-dependent involvement of V1 in the maintenance of working-memory contents unlikely. By contrast, population responses to the test stimulus depended on the probabilistic contingencies between sample and test stimuli: responses to tests that matched expectations were reduced, which agrees with concepts of predictive coding.
When a visual stimulus is repeated, average neuronal responses typically decrease, yet they might maintain or even increase their impact through increased synchronization. Previous work has found that many repetitions of a grating lead to increasing gamma-band synchronization. Here, we show in awake macaque area V1 that both repetition-related reductions in firing rate and increases in gamma are specific to the repeated stimulus. These effects show some persistence on the timescale of minutes. Gamma increases are specific to the presented stimulus location. Further, repetition effects on gamma and on firing rates generalize to images of natural objects. These findings support the notion that gamma-band synchronization subserves the adaptive processing of repeated stimulus encounters.
Individual differences in perception are widespread. Considering inter-individual variability, synesthetes experience stable additional sensations, whereas schizophrenia patients suffer perceptual deficits in, e.g., perceptual organization (alongside hallucinations and delusions). Is there a unifying principle explaining inter-individual variability in perception? There is good reason to believe that perceptual experience results from inferential processes whereby sensory evidence is weighted by prior knowledge about the world. Different perceptual phenotypes may result from different precision weighting of sensory evidence and prior knowledge. We tested this hypothesis by comparing visibility thresholds in a perceptual hysteresis task across medicated schizophrenia patients (N = 20), synesthetes (N = 20), and controls (N = 26). Participants rated the subjective visibility of stimuli embedded in noise while we parametrically manipulated the availability of sensory evidence. Additionally, precise long-term priors in synesthetes were leveraged by presenting either synesthesia-inducing or neutral stimuli. Schizophrenia patients showed increased visibility thresholds, consistent with an overreliance on sensory evidence. In contrast, synesthetes exhibited lowered thresholds exclusively for synesthesia-inducing stimuli, suggesting high-precision long-term priors. Additionally, in both synesthetes and schizophrenia patients, explicit short-term priors introduced during the hysteresis experiment lowered thresholds but did not normalize perception. Our results imply that distinct perceptual phenotypes might result from differences in the precision afforded to prior beliefs and sensory evidence, respectively.
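The precision-weighting account invoked here can be illustrated with the textbook Gaussian cue-combination formula; the numbers below are purely illustrative and not fitted to any group. The posterior mean weights prior and sensory evidence by their precisions (inverse variances), so a low-precision prior leaves the percept dominated by the senses, while a high-precision prior pulls it toward expectation.

```python
def precision_weighted(mu_prior, var_prior, mu_sens, var_sens):
    """Posterior mean/variance for two Gaussian sources, each
    weighted by its precision (inverse variance)."""
    w_prior = 1.0 / var_prior
    w_sens = 1.0 / var_sens
    mu = (w_prior * mu_prior + w_sens * mu_sens) / (w_prior + w_sens)
    var = 1.0 / (w_prior + w_sens)
    return mu, var

# Same sensory evidence under three illustrative precision regimes.
mu_s, var_s = 1.0, 1.0                                        # sensory evidence
balanced, _ = precision_weighted(0.0, 1.0, mu_s, var_s)       # equal precisions
sensory_led, _ = precision_weighted(0.0, 10.0, mu_s, var_s)   # weak prior
prior_led, _ = precision_weighted(0.0, 0.1, mu_s, var_s)      # strong prior

print(balanced, sensory_led, prior_led)  # 0.5, ~0.91, ~0.09
```

On this caricature, an "overreliance on sensory evidence" corresponds to the weak-prior regime, and a high-precision long-term prior (as hypothesized for synesthesia-inducing stimuli) to the strong-prior regime.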
Solving the problem of consciousness remains one of the biggest challenges in modern science. One key step towards understanding consciousness is to empirically narrow down the neural processes associated with the subjective experience of a particular content. To unravel these neural correlates of consciousness (NCC), a common scientific strategy is to compare perceptual conditions in which consciousness of a particular content is present with those in which it is absent, and to determine differences in measures of brain activity (the so-called "contrastive analysis"). However, this comparison appears not to reveal exclusively the NCC, as the NCC proper can be confounded with prerequisites for and consequences of conscious processing of the particular content. This implies that previous results cannot be unequivocally interpreted as reflecting the neural correlates of conscious experience. Here we review evidence supporting this conjecture and suggest experimental strategies to untangle the NCC from the prerequisites and consequences of conscious experience in order to further develop the otherwise valid and valuable contrastive methodology.
The brain adapts to the sensory environment. For example, simple sensory exposure can modify the response properties of early sensory neurons. How these changes affect the overall encoding and maintenance of stimulus information across neuronal populations remains unclear. We perform parallel recordings in the primary visual cortex of anesthetized cats and find that brief, repetitive exposure to structured visual stimuli enhances stimulus encoding by decreasing the selectivity and increasing the range of the neuronal responses that persist after stimulus presentation. Low-dimensional projection methods and simple classifiers demonstrate that visual exposure increases the segregation of persistent neuronal population responses into stimulus-specific clusters. These observed refinements preserve the representational details required for stimulus reconstruction and are detectable in post-exposure spontaneous activity. Assuming response facilitation and recurrent network interactions as the core mechanisms underlying stimulus persistence, we show that the exposure-driven segregation of stimulus responses can arise through strictly local plasticity mechanisms, even in the absence of firing rate changes. Our findings provide evidence for the existence of an automatic, unguided optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.
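The projection-plus-classifier analysis can be caricatured with a nearest-centroid sketch on synthetic population vectors. The patterns, noise levels, and the modeling of exposure as a simple reduction of trial-to-trial noise are all my assumptions; the point is only that tighter stimulus-specific clusters translate directly into higher classification accuracy.

```python
import random

random.seed(1)

N_NEURONS, N_STIM, N_TRIALS = 20, 4, 30

# Hypothetical stimulus-specific "persistent activity" patterns.
patterns = [[random.gauss(0.0, 1.0) for _ in range(N_NEURONS)]
            for _ in range(N_STIM)]

def trials(noise):
    """Synthetic persistent responses: pattern + trial-to-trial noise.
    Exposure is modeled, by assumption, as reducing that noise."""
    data = []
    for s in range(N_STIM):
        for _ in range(N_TRIALS):
            data.append((s, [p + random.gauss(0.0, noise)
                             for p in patterns[s]]))
    return data

def nearest_centroid_accuracy(data):
    # Centroid per stimulus, then classify each trial by closest centroid.
    cent = {}
    for s in range(N_STIM):
        vs = [v for lab, v in data if lab == s]
        cent[s] = [sum(col) / len(vs) for col in zip(*vs)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    hits = sum(min(cent, key=lambda s: dist(v, cent[s])) == lab
               for lab, v in data)
    return hits / len(data)

pre = nearest_centroid_accuracy(trials(noise=2.5))
post = nearest_centroid_accuracy(trials(noise=1.0))
print(f"pre-exposure {pre:.2f}, post-exposure {post:.2f}")
```

A nearest-centroid rule is deliberately the simplest possible readout: if even this classifier improves, the segregation is present in the raw population geometry rather than being an artifact of a powerful decoder.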