Efficient processing of the visual environment necessitates the integration of incoming sensory evidence with concurrent contextual inputs and mnemonic content from our past experiences. To delineate how this integration takes place in the brain, we studied modulations of feedback neural patterns in non-stimulated areas of the early visual cortex in humans (i.e., V1 and V2). Using functional magnetic resonance imaging and multivariate pattern analysis, we show that both concurrent contextual and time-distant mnemonic information coexist in V1/V2 as feedback signals. The extent to which mnemonic information is reinstated in V1/V2 depends on whether the information is retrieved episodically or semantically. These results demonstrate that our stream of visual experience contains not just information from the visual surroundings but also memory-based predictions generated internally by the brain.
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities, which strongly rely on the interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (< 1.5% of a 10-min resting-state fMRI scan) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, such as network-defining high-cofluctuation states, but is rather reflected across the entire length of the brain connectivity time series.
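The cofluctuation states described above are commonly derived from an edge time series: the element-wise product of z-scored regional signals, with each frame's amplitude summarized as the root sum of squares across all region pairs. Below is a minimal sketch of that standard preprocessing step only, not of the CMEP framework itself; function names are illustrative.

```python
def zscore(ts):
    """Z-score a regional time series (population standard deviation)."""
    n = len(ts)
    mu = sum(ts) / n
    sd = (sum((x - mu) ** 2 for x in ts) / n) ** 0.5
    return [(x - mu) / sd for x in ts]

def cofluctuation_frames(region_ts):
    """Per-frame cofluctuation amplitude: root sum of squares of the
    pairwise products of z-scored regional signals (the 'edge time
    series'); peaks mark high-cofluctuation states."""
    z = [zscore(ts) for ts in region_ts]
    n_regions, n_frames = len(z), len(z[0])
    amplitudes = []
    for t in range(n_frames):
        total = sum((z[i][t] * z[j][t]) ** 2
                    for i in range(n_regions)
                    for j in range(i + 1, n_regions))
        amplitudes.append(total ** 0.5)
    return amplitudes
```

Frames with the largest amplitudes would be the "network-defining" high-cofluctuation states; the eigenvector-based prediction step of CMEP then operates on connectivity estimates built from selected frames.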
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1,000 short (3s) naturalistic video clips of visual events across ten human subjects. We use the videos' extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events and identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures the temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
Understanding effects of emotional valence and stress on children’s memory is important for educational and legal contexts. This study disentangles the effects of emotional content of to-be-remembered information (i.e., items differing in emotional valence and arousal), stress exposure, and associated cortisol secretion on children’s memory. We also examine whether girls’ memory is more affected by stress induction. 143 6-to-7-year-old children were randomly allocated to the Trier Social Stress Test for Children (n = 103) or a control condition (n = 40). 25 minutes after stressor onset, children incidentally encoded 75 objects varying in emotional valence (crossed with arousal) together with neutral scene backgrounds. We found that response-bias corrected memory was worse for low arousing negative items than neutral and positive items, with the latter two categories not being different from each other. Whilst boys’ memory was largely unaffected by stress, girls in the stress condition showed worse memory for negative items, especially the low arousing ones, than girls in the control condition. Girls, compared to boys, reported higher subjective stress increases following stress exposure, and had higher cortisol stress responses. Whilst a higher cortisol stress response was associated with better emotional memory in girls in the stress condition, boys’ memory was not associated with their cortisol secretion. Taken together, our study suggests that 6-to-7-year-old children, more so girls, show memory suppression for negative information. Girls’ memory for negative information, compared to boys, is also more strongly modulated by stress experience and the associated cortisol response.
Rhythmic neural spiking and attentional sampling arising from cortical receptive field interactions
(2018)
Summary: Growing evidence suggests that distributed spatial attention may invoke theta (3-9 Hz) rhythmic sampling processes. The neuronal basis of such attentional sampling is, however, not fully understood. Here we show, using array recordings in visual cortical area V4 of two awake macaques, that presenting separate visual stimuli to the excitatory center and suppressive surround of neuronal receptive fields elicits rhythmic multi-unit activity (MUA) at 3-6 Hz. This neuronal rhythm did not depend on small fixational eye movements. In the context of a distributed spatial attention task, during which the monkeys detected a spatially and temporally uncertain target, reaction times (RT) exhibited similar rhythmic fluctuations. RTs were fast or slow depending on whether the target occurred during high or low MUA, resulting in rhythmic MUA-RT cross-correlations at theta frequencies. These findings suggest that theta-rhythmic neuronal activity arises from competitive receptive field interactions and that this rhythm may subserve attentional sampling.
Highlights:
* Center-surround interactions induce theta-rhythmic MUA of visual cortex neurons
* The MUA rhythm does not depend on small fixational eye movements
* Reaction time fluctuations lock to the neuronal rhythm under distributed attention
Can prediction error explain predictability effects on the N1 during picture-word verification?
(2024)
Do early effects of predictability in visual word recognition reflect prediction error? Electrophysiological research investigating word processing has demonstrated predictability effects in the N1, or first negative component of the event-related potential (ERP). However, findings regarding the magnitude of effects and potential interactions of predictability with lexical variables have been inconsistent. Moreover, past studies have typically used categorical designs with relatively small samples and relied on by-participant analyses. Nevertheless, reports have generally shown that predicted words elicit less negative-going (i.e., lower amplitude) N1s, a pattern consistent with a simple predictive coding account. In our preregistered study, we tested this account via the interaction between prediction magnitude and certainty. A picture-word verification paradigm was implemented in which pictures were followed by tightly matched picture-congruent or picture-incongruent written nouns. The predictability of target (picture-congruent) nouns was manipulated continuously based on norms of association between a picture and its name. ERPs from 68 participants revealed a pattern of effects opposite to that expected under a simple predictive coding framework.
The hippocampal-dependent memory system and striatal-dependent memory system modulate reinforcement learning depending on feedback timing in adults, but their contributions during development remain unclear. In a 2-year longitudinal study, 6-to-7-year-old children performed a reinforcement learning task in which they received feedback immediately or with a short delay following their response. Children’s learning was found to be sensitive to feedback timing modulations in their reaction time and inverse temperature parameter, which quantifies value-guided decision-making. They showed longitudinal improvements towards more optimal value-based learning, and their hippocampal volume showed protracted maturation. Better delayed model-derived learning covaried with larger hippocampal volume longitudinally, in line with the adult literature. In contrast, a larger striatal volume in children was associated with both better immediate and delayed model-derived learning longitudinally. These findings show, for the first time, an early hippocampal contribution to the dynamic development of reinforcement learning in middle childhood, with neurally less differentiated and more cooperative memory systems than in adults.
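The inverse temperature parameter mentioned above comes from the softmax choice rule standard in reinforcement-learning models, typically paired with a delta-rule value update. The sketch below illustrates those two standard components only; it is not the authors' model code, and the names are illustrative.

```python
from math import exp

def softmax_choice_probs(values, beta):
    """Probability of choosing each option given its learned value;
    the inverse temperature beta scales how deterministically choices
    follow values (beta = 0 yields random choice)."""
    exps = [exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def update_value(q, reward, alpha):
    """Delta-rule update: move the value estimate toward the received
    reward by a fraction alpha (the learning rate)."""
    return q + alpha * (reward - q)
```

A higher fitted beta thus indicates more strongly value-guided decision-making, which is the sense in which the study uses it as a learning measure.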
Metacognition plays a pivotal role in human development. The ability to realize that we do not know something, or meta-ignorance, emerges after approximately five years of age. We aimed to identify the brain systems that underlie the developmental emergence of this ability in a preschool sample.
Twenty-four children aged between five and six years answered questions under three conditions of a meta-ignorance task, twice each. In the critical partial knowledge condition, an experimenter first showed two toys to a child, then announced that she would place one of them in a box behind a screen, out of the child's sight. The experimenter then asked the child whether she knew which toy was in the box.
Children who answered the metacognitive question correctly both times in the partial knowledge condition (n = 9) showed greater cortical thickness in a cluster within the left medial orbitofrontal cortex than children who did not (n = 15). Further, seed-based functional connectivity analyses of the brain at resting state revealed that this region is functionally connected to the medial orbitofrontal gyrus, posterior cingulate gyrus and precuneus, and mid- and inferior temporal gyri.
These findings suggest that the default mode network, critically through its prefrontal regions, supports the introspective processing that gives rise to metacognitive monitoring, allowing children to explicitly report their own ignorance.
Spontaneous brain activity builds the foundation for human cognitive processing during external demands. Neuroimaging studies based on functional magnetic resonance imaging (fMRI) have identified specific characteristics of spontaneous (intrinsic) brain dynamics associated with individual differences in general cognitive ability, i.e., intelligence. However, fMRI research is inherently limited by low temporal resolution, thus preventing conclusions about neural fluctuations within the range of milliseconds. Here, we used resting-state electroencephalographic (EEG) recordings from 144 healthy adults to test whether individual differences in intelligence (Raven's Advanced Progressive Matrices scores) can be predicted from the complexity of temporally highly resolved intrinsic brain signals. We compared different operationalizations of brain signal complexity (multiscale entropy, Shannon entropy, fuzzy entropy, and specific characteristics of microstates) regarding their relation to intelligence. The results indicate that associations between brain signal complexity measures and intelligence are of small effect size (r ~ .20) and vary across spatial and temporal scales. Specifically, higher intelligence scores were associated with lower complexity in local aspects of neural processing and less activity in task-negative brain regions belonging to the default mode network. Finally, we combined multiple measures of brain signal complexity to show that individual intelligence scores can be significantly predicted with a multimodal model within the sample (10-fold cross-validation) as well as in an independent sample (external replication, N = 57). In sum, our results highlight the temporal and spatial dependency of associations between intelligence and intrinsic brain dynamics, proposing multimodal approaches as promising means for future neuroscientific research on complex human traits.
Significance Statement: Spontaneous brain activity builds the foundation for intelligent processing, the ability of humans to adapt to various cognitive demands. Using resting-state EEG, we extracted multiple aspects of temporally highly resolved intrinsic brain dynamics to investigate their relationship with individual differences in intelligence. Single associations were of small effect size and varied critically across spatial and temporal scales. However, combining multiple measures in a multimodal cross-validated prediction model allowed us to significantly predict individual intelligence scores in unseen participants. Our study adds to a growing body of research suggesting that observable associations between complex human traits and neural parameters may be rather small, and proposes multimodal prediction approaches as a promising tool for deriving robust brain-behavior relations despite limited sample sizes.
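Of the complexity measures compared above, sample entropy (the per-scale building block of multiscale entropy) is the easiest to illustrate: it is the negative log of the conditional probability that segments matching within a tolerance for m points also match for m + 1 points. A self-contained sketch with illustrative parameters, not the study's exact pipeline:

```python
from math import log
import random

def sample_entropy(signal, m=2, r=0.2):
    """Sample entropy: -log of the conditional probability that
    segments matching within tolerance for m points also match for
    m + 1 points; higher values indicate a less regular signal."""
    n = len(signal)
    mu = sum(signal) / n
    sd = (sum((x - mu) ** 2 for x in signal) / n) ** 0.5
    tol = r * sd  # tolerance as a fraction of the signal's SD

    def match_count(length):
        templates = [signal[i:i + length] for i in range(n - m)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if all(abs(a - b) <= tol
                       for a, b in zip(templates[i], templates[j])):
                    hits += 1
        return hits

    b, a = match_count(m), match_count(m + 1)
    return -log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly regular signal has near-zero sample entropy; noise does not.
random.seed(1)
periodic = [0.0, 1.0] * 30
noisy = [random.random() for _ in range(60)]
```

Multiscale entropy repeats this computation on progressively coarse-grained (window-averaged) copies of the signal, yielding one entropy value per temporal scale.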
Adaptive threshold estimation procedures sample close to a subject's perceptual threshold by dynamically adapting the stimulation based on the subject's performance. Yet perceptual thresholds depend not only on the observer's sensory capabilities but also on any bias in their expectations and response preferences, which distorts the precision of threshold estimates. Within the framework of signal detection theory (SDT), independent estimates of an observer's sensitivity and internal processing bias can be delineated from threshold estimates. While this approach is commonly available for estimation procedures employing the method of constant stimuli (MCS), correction procedures for adaptive methods (AM) are only rarely applied. In this article, we introduce a new AM that takes individual biases into account and allows for a bias-corrected assessment of subjects' sensitivity. This novel AM is validated with simulations and compared to a typical MCS procedure, for which the implementation of bias correction has been previously demonstrated.
Comparing the AM and MCS demonstrates the viability of the presented AM. Beyond its feasibility, the simulation results reveal both advantages and limitations of the proposed AM. The procedure has considerable practical implications, in particular for the design of shaping procedures in sensory training experiments, in which task difficulty has to be constantly adapted to an observer's performance to improve training efficiency.
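The SDT quantities underlying such bias correction, sensitivity (d') and criterion (c), can be computed from hit and false-alarm counts with the standard formulas d' = z(H) - z(FA) and c = -(z(H) + z(FA)) / 2. A minimal sketch of those textbook formulas (illustrative, not the article's adaptive procedure), including a log-linear correction so extreme rates stay finite:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(H) - z(FA) and criterion c = -(z(H) + z(FA)) / 2.
    Adding 0.5 to each cell (log-linear correction) keeps the z-scores
    finite when a hit or false-alarm rate would otherwise be 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(h) - z(fa)
    criterion = -(z(h) + z(fa)) / 2
    return d_prime, criterion
```

A negative criterion indicates a liberal response bias (a tendency to report "signal present"), a positive one a conservative bias; d' is the bias-free sensitivity estimate.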
In a dynamic environment, the already limited information that human working memory can maintain needs to be constantly updated to optimally guide behaviour. Indeed, previous studies showed that working memory representations are continuously being transformed during delay periods leading up to a response. This goes hand-in-hand with the removal of task-irrelevant items. However, does such removal also include veridical, original stimuli, as they were prior to transformation? Here we aimed to assess the neural representation of task-relevant transformed representations, compared to the no-longer-relevant veridical representations they originated from. We applied multivariate pattern analysis to electroencephalographic data during maintenance of orientation gratings with and without mental rotation. During maintenance, we perturbed the representational network by means of a visual impulse stimulus, and were thus able to successfully decode veridical as well as imaginary, transformed orientation gratings from impulse-driven activity. On the one hand, the impulse response reflected only task-relevant (cued), but not task-irrelevant (uncued) items, suggesting that the latter were quickly discarded from working memory. By contrast, even though the original cued orientation gratings were also no longer task-relevant after mental rotation, these items continued to be represented next to the rotated ones, in different representational formats. This seemingly inefficient use of scarce working memory capacity was associated with reduced probe response times and may thus serve to increase precision and flexibility in guiding behaviour in dynamic environments.
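Multivariate pattern analysis of the kind applied above to impulse-driven EEG activity can be illustrated with a deliberately minimal stand-in: a leave-one-out nearest-centroid decoder. The study's actual classifier will have been more sophisticated; this sketch only conveys the logic of cross-validated decoding, and the names are illustrative.

```python
def nearest_centroid_decode(trials, labels):
    """Leave-one-out decoding accuracy: classify each held-out trial
    (a feature vector, e.g. channel amplitudes) by the nearest class
    centroid computed from the remaining trials."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    correct = 0
    for i, trial in enumerate(trials):
        centroids = {}
        for lab in set(labels):
            rest = [t for j, t in enumerate(trials)
                    if j != i and labels[j] == lab]
            centroids[lab] = [sum(col) / len(rest) for col in zip(*rest)]
        prediction = min(centroids, key=lambda lab: dist(trial, centroids[lab]))
        correct += prediction == labels[i]
    return correct / len(trials)
```

Above-chance accuracy in held-out trials is what licenses the claim that an orientation (veridical or rotated) is represented in the perturbation-evoked activity.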
We explore the potential of optically pumped magnetometers (OPMs) to infer the laminar origins of neural activity non-invasively. OPM sensors can be positioned closer to the scalp than conventional cryogenic MEG sensors, opening an avenue to higher spatial resolution when combined with high-precision forward modelling. By simulating the forward-model projection of single dipole sources onto OPM sensor arrays with varying sensor densities and measurement axes, and employing sparse source reconstruction approaches, we find that laminar inference with OPM arrays is possible at relatively low sensor counts at moderate to high signal-to-noise ratios (SNR). We observe improvements in laminar inference with increasing spatial sampling density and number of measurement axes. Surprisingly, moving sensors closer to the scalp is less advantageous than anticipated, and even detrimental at high SNRs. Biases towards both the superficial and deep surfaces at very low SNRs, and a notable bias towards the deep surface when combining empirical Bayesian beamformer (EBB) source reconstruction with a whole-brain analysis, pose further challenges. Adequate SNR through appropriate trial numbers and shielding, as well as precise co-registration, is crucial for reliable laminar inference with OPMs.
An important question concerning inter-areal communication in the cortex is whether these interactions are synergistic, i.e., whether they convey information beyond what isolated signals can: any two brain signals can either share common information (redundancy) or encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretical measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy, with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires strong, long-distance feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
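Co-information has a compact definition for discrete variables: I(X;Y;Z) = I(X;Z) + I(Y;Z) - I(X,Y;Z), where positive values indicate redundancy and negative values synergy. A toy sketch on discretized data (the estimator actually needed for continuous ECoG signals is necessarily more involved; names are illustrative):

```python
from collections import Counter
from math import log2

def entropy(*seqs):
    """Shannon entropy (bits) of the joint distribution of the sequences."""
    joint = list(zip(*seqs))
    n = len(joint)
    return -sum((c / n) * log2(c / n) for c in Counter(joint).values())

def co_information(x, y, z):
    """I(X;Y;Z) = I(X;Z) + I(Y;Z) - I(X,Y;Z): positive values mean x and y
    carry redundant information about z, negative values mean synergy."""
    i_xz = entropy(x) + entropy(z) - entropy(x, z)
    i_yz = entropy(y) + entropy(z) - entropy(y, z)
    i_xy_z = entropy(x, y) + entropy(z) - entropy(x, y, z)
    return i_xz + i_yz - i_xy_z

# XOR is the canonical synergistic case: neither input alone tells us
# anything about the output, but together they determine it exactly.
x = [0, 0, 1, 1]
y = [0, 1, 0, 1]
z = [a ^ b for a, b in zip(x, y)]
```

For the XOR example the co-information is -1 bit (pure synergy); for three identical copies of a fair binary signal it is +1 bit (pure redundancy).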
Natural scene responses in the primary visual cortex are modulated simultaneously by attention and by contextual signals about scene statistics stored across the connectivity of the visual processing hierarchy. Here, we hypothesized that attentional and contextual top-down signals interact in V1 in a manner that primarily benefits the representation of natural visual stimuli, which are rich in higher-order statistical structure. Recording from two macaques engaged in a spatial attention task, we found that attention enhanced the decodability of stimulus identity from population responses evoked by natural scenes but, critically, not by synthetic stimuli in which higher-order statistical regularities were eliminated. Population analysis revealed that neuronal responses converged to a low-dimensional subspace for natural but not for synthetic images. Critically, we determined that the attentional enhancement in stimulus decodability was captured by the dominant low-dimensional subspace, suggesting an alignment between the attentional and natural-stimulus variance. The alignment was pronounced for late evoked responses but not for the early transient responses of V1 neurons, supporting the notion that top-down feedback was required. We argue that attention and perception share top-down pathways that mediate hierarchical interactions optimized for natural vision.
Context information supports serial dependence of multiple visual objects across memory episodes
(2019)
Visual perception operates in an object-based manner, integrating associated features via attention. Working memory allows flexible access to a limited number of currently relevant objects, even when they are occluded or physically no longer present. Recently, it has been shown that we compensate for small changes of an object's feature over memory episodes, which can support its perceptual stability. This phenomenon, termed 'serial dependence', has mostly been studied in situations comprising only a single relevant object. However, since we are typically confronted with situations in which several objects have to be perceived and held in working memory, the central question of how we selectively create temporal stability for several objects has remained unanswered. As different objects can be distinguished by their accompanying context features, such as their color or temporal position, we tested whether serial dependence is supported by the congruence of context features across memory episodes. Specifically, we asked participants to remember the motion directions of two sequentially presented colored dot fields per trial. At the end of a trial, one motion direction was cued for continuous report either by its color (Experiment 1) or serial position (Experiment 2). We observed serial dependence, i.e., an attractive bias of currently memorized motion directions toward previously memorized ones, which was clearly enhanced when items had the same color or serial position across trials. This bias was particularly pronounced for the context feature used for cueing and for the target of the previous trial. Together, these findings demonstrate that the coding of current object representations depends on previous representations, especially when they share similar content and context features. Apparently, the binding of content and context features is not completely erased after a memory episode but is carried over to subsequent episodes.
As this reflects temporal dependencies in natural settings, the present findings reveal a mechanism that integrates corresponding bundles of content and context features to support stable representations of individualized objects over time.
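The attractive bias described above is conventionally quantified by fitting a first-derivative-of-Gaussian curve to response errors as a function of the previous-minus-current feature difference. The following sketch of that model curve is an assumption about the analysis, not code from the study; it is normalized so the amplitude parameter equals the peak bias.

```python
from math import exp, sqrt

def dog(delta, amplitude, width):
    """First-derivative-of-Gaussian serial-dependence curve: signed
    response error as a function of the previous-minus-current feature
    difference delta. The sqrt(2) * exp(0.5) * width factor normalizes
    the curve so that `amplitude` equals the peak bias."""
    return (amplitude * sqrt(2) * exp(0.5) * width
            * delta * exp(-(width * delta) ** 2))
```

The fitted amplitude then summarizes the strength of serial dependence per condition, e.g. comparing trials with matching versus non-matching context features.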