We tested 6–7-year-olds, 18–22-year-olds, and 67–74-year-olds on an associative memory task consisting of knowledge-congruent and knowledge-incongruent object–scene pairs that were highly familiar to all age groups. We compared the three age groups on their memory congruency effect (i.e., better memory for knowledge-congruent associations) and on a schema bias score, which measures a participant's tendency to commit knowledge-congruent memory errors. We found that prior knowledge similarly benefited memory for items encoded in a congruent context in all age groups. For associative memory, however, older adults and, to a lesser extent, children overrelied on their prior knowledge, as indicated by both an enhanced congruency effect and an enhanced schema bias. Functional magnetic resonance imaging (fMRI) performed during memory encoding revealed an age-independent memory × congruency interaction in the ventromedial prefrontal cortex (vmPFC). Furthermore, the magnitude of vmPFC recruitment correlated positively with the schema bias. These findings suggest that older adults are most prone to rely on their prior knowledge for episodic memory decisions, but that children can also rely heavily on prior knowledge with which they are well acquainted. Furthermore, the fMRI results suggest that the vmPFC plays a key role in the assimilation of new information into existing knowledge structures across the entire lifespan: vmPFC recruitment leads to better memory for knowledge-congruent information but also to a heightened susceptibility to commit knowledge-congruent memory errors, particularly in children and older adults.
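To make the two behavioral measures concrete, here is a minimal sketch of how a congruency effect and a schema bias score could be computed from recognition responses. The function names and the exact operationalization (hit-rate difference for the congruency effect; excess rate of congruent over incongruent false alarms for the schema bias) are illustrative assumptions, not the authors' published scoring procedure.

```python
import numpy as np

def congruency_effect(hits_congruent, hits_incongruent):
    """Memory congruency effect: associative hit rate for knowledge-congruent
    pairs minus the hit rate for knowledge-incongruent pairs."""
    return np.mean(hits_congruent) - np.mean(hits_incongruent)

def schema_bias(fa_congruent_lures, fa_incongruent_lures):
    """Schema bias: excess tendency to falsely endorse recombined pairs that
    fit prior knowledge over recombined pairs that do not."""
    return np.mean(fa_congruent_lures) - np.mean(fa_incongruent_lures)

# Example: one participant's trial-level responses (1 = 'old' judgment)
hits_c, hits_i = np.array([1, 1, 1, 0]), np.array([1, 0, 0, 1])
fa_c, fa_i = np.array([1, 1, 0, 0]), np.array([0, 0, 1, 0])
print(congruency_effect(hits_c, hits_i))  # 0.25
print(schema_bias(fa_c, fa_i))            # 0.25
```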
Pathophysiological models are urgently needed for personalized treatment of mental disorders. However, most candidate neural markers of psychopathology suffer from low interpretability, which prohibits reverse inference from brain measures to clinical symptoms and traits. Neural signatures, i.e., multivariate brain patterns trained to be both sensitive and specific to a construct of interest, might alleviate this problem but are rarely applied to mental disorders. We tested whether previously developed neural signatures for negative affect and discrete emotions distinguish between healthy individuals and those with mental disorders characterized by emotion dysregulation, i.e., Borderline Personality Disorder (BPD) and complex Post-traumatic Stress Disorder (cPTSD). In three different fMRI studies, a total sample of 192 women (49 BPD, 62 cPTSD, 81 healthy controls) was shown pictures of scenes with negative or neutral content. Based on pathophysiological models, we hypothesized higher negative and lower positive reactivity of neural emotion signatures in participants with emotion dysregulation. The expression of neural signatures differed strongly between neutral and negative pictures (average Cohen's d = 1.17). Nevertheless, a mega-analysis on individual participant data showed no differences in the reactivity of neural signatures between participants with and without emotion dysregulation. Confidence intervals ruled out even small effect sizes in the hypothesized direction, a null result further supported by Bayes factors. Overall, these results support the validity of neural signatures for emotional states during fMRI tasks but raise important questions concerning their link to individual differences in emotion dysregulation.
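As a rough illustration of how such a neural signature is applied, the sketch below computes signature expression as the dot product between a signature's voxel weights and a participant's condition-specific activation map, and derives a reactivity score as the negative-minus-neutral difference. The array shapes and simulated data are assumptions for illustration; the actual signatures (e.g., PINES-style patterns for negative affect) are precomputed weight maps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 5000
signature_weights = rng.standard_normal(n_voxels)  # stand-in for a trained signature

def signature_expression(beta_map, weights):
    """Scalar pattern expression: dot product of an activation map
    with the signature's voxel weights."""
    return float(beta_map @ weights)

# Per participant: expression for negative vs. neutral pictures
beta_negative = rng.standard_normal(n_voxels)
beta_neutral = rng.standard_normal(n_voxels)
reactivity = (signature_expression(beta_negative, signature_weights)
              - signature_expression(beta_neutral, signature_weights))
# A group analysis would then compare these reactivity scores between
# participants with and without emotion dysregulation.
```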
Most current models assume that the perceptual and cognitive processes of visual word recognition and reading operate on neuronally coded, domain-general, low-level visual representations, typically representations of oriented lines. Here we demonstrate, consistent with neurophysiological theories of Bayesian-like predictive neural computation, that prior visual knowledge of words may be used to 'explain away' redundant and highly expected parts of the visual percept. Subsequent processing stages accordingly operate on an optimized representation of the visual input, the orthographic prediction error, which highlights only the visual information relevant for word identification. We show that this optimized representation is related to orthographic word characteristics, accounts for word recognition behavior, and is processed early in the visual processing stream, i.e., in V4 and before 200 ms after word onset. Based on these findings, we propose that prior visual-orthographic knowledge is used to optimize the representation of visually presented words, which in turn allows for highly efficient reading processes.
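One way to formalize the proposed 'explaining away' is to subtract a knowledge-based prediction from each percept, leaving only the informative residual. In the sketch below, the prediction is simply the pixel-wise mean over all known word images; treat this operationalization, the array shapes, and the random stand-in images as assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_pixels = 1000, 64 * 32
# Stand-in for grayscale renderings of all known words (one row per word)
word_images = (rng.random((n_words, n_pixels)) > 0.8).astype(float)

# Prediction from prior visual-orthographic knowledge: the expected visual
# input given that some known word will appear (pixel-wise mean image).
knowledge_prediction = word_images.mean(axis=0)

# Orthographic prediction error: redundant, highly expected parts of the
# percept are removed; what remains is the word-identifying information.
orthographic_prediction_error = word_images - knowledge_prediction
```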
Can prediction error explain predictability effects on the N1 during picture-word verification?
(2023)
Do early effects of predictability in visual word recognition reflect prediction error? Electrophysiological research investigating word processing has demonstrated predictability effects in the N1, or first negative component of the event-related potential (ERP). However, findings regarding the magnitude of effects and potential interactions of predictability with lexical variables have been inconsistent. Moreover, past studies have typically used categorical designs with relatively small samples and relied on by-participant analyses. Nevertheless, reports have generally shown that predicted words elicit less negative-going (i.e., lower amplitude) N1s, a pattern consistent with a simple predictive coding account. In our preregistered study, we tested this account via the interaction between prediction magnitude and certainty. A picture-word verification paradigm was implemented in which pictures were followed by tightly matched picture-congruent or picture-incongruent written nouns. The predictability of target (picture-congruent) nouns was manipulated continuously based on norms of association between a picture and its name. ERPs from 68 participants revealed a pattern of effects opposite to that expected under a simple predictive coding framework.
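Under the simple predictive coding account that the study tests, N1 amplitude should track prediction error: it should decrease with predictability for picture-congruent words and increase with predictability (i.e., prior certainty) for picture-incongruent words. The toy simulation below only illustrates that qualitative interaction; the linear form and units are assumptions, not a fitted model.

```python
import numpy as np

# Normative picture-name association strength (the continuous predictor)
predictability = np.linspace(0.0, 1.0, 11)

# Prediction-error-like quantities (arbitrary units):
n1_congruent = 1.0 - predictability    # expected word: error shrinks with predictability
n1_incongruent = 1.0 + predictability  # unexpected word: stronger prior -> larger error

for p, c, i in zip(predictability, n1_congruent, n1_incongruent):
    print(f"predictability={p:.1f}  congruent={c:.1f}  incongruent={i:.1f}")
```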
Efficient processing of the visual environment necessitates the integration of incoming sensory evidence with concurrent contextual inputs and mnemonic content from our past experiences. To delineate how this integration takes place in the brain, we studied modulations of feedback neural patterns in non-stimulated areas of the early visual cortex in humans (V1 and V2). Using functional magnetic resonance imaging and multivariate pattern analysis, we show that both concurrent contextual and time-distant mnemonic information coexist in V1/V2 as feedback signals. The extent to which mnemonic information is reinstated in V1/V2 depends on whether the information is retrieved episodically or semantically. These results demonstrate that our stream of visual experience contains not just information from the visual surroundings but also memory-based predictions generated internally by the brain.
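A minimal sketch of the decoding logic: if a linear classifier can read out the occluded content (contextual or mnemonic) from voxel patterns in retinotopically non-stimulated V1/V2, that information must arrive via feedback rather than feedforward stimulation. The data shapes, classifier choice, and simulated arrays below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 300))   # trials x voxels from non-stimulated V1/V2
y = rng.integers(0, 2, 120)           # label of the occluded content on each trial

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
# Accuracy reliably above chance (0.5, e.g., assessed against a permutation
# null) would indicate that feedback signals in V1/V2 carry the information.
```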
Word familiarity and predictive context facilitate visual word processing, leading to faster recognition times and reduced neuronal responses. Previously, models with and without top-down connections, comprising lexical-semantic, pre-lexical (e.g., orthographic/phonological), and visual processing levels, were successful in accounting for these facilitation effects. Here we systematically assessed context-based facilitation with a repetition priming task and explicitly dissociated pre-lexical and lexical processing levels using a pseudoword familiarization procedure. Experiment 1 investigated the temporal dynamics of neuronal facilitation effects with magnetoencephalography (MEG; N = 38 human participants), while Experiment 2 assessed behavioral facilitation effects (N = 24 human participants). Across all stimulus conditions, MEG demonstrated context-based facilitation across multiple time windows starting at 100 ms in occipital brain areas, indicating context-based facilitation at an early visual processing level. In both experiments, we furthermore found an interaction of context and lexical familiarity, such that stimuli with associated meaning showed the strongest context-dependent facilitation in brain activation and behavior. Using MEG, this facilitation effect could be localized to the left anterior temporal lobe at around 400 ms, indicating within-level (i.e., exclusively lexical-semantic) facilitation but no top-down effects on earlier processing stages. Increased pre-lexical familiarity (in pseudowords familiarized through training) did not significantly enhance or reduce context effects. We conclude that context-based facilitation is achieved within visual and lexical processing levels. Finally, by testing alternative hypotheses derived from mechanistic accounts of repetition suppression, we suggest that the facilitatory context effects found here are implemented by a predictive coding mechanism.
The outstanding speed of language comprehension necessitates a highly efficient implementation of cognitive-linguistic processes. The domain-general theory of Predictive Coding suggests that our brain solves this problem by continuously forming linguistic predictions about expected upcoming input. The neurophysiological implementation of these predictive linguistic processes, however, is not yet understood. Here, we use EEG (human participants, both sexes) to investigate the existence and nature of online-generated, category-level semantic representations during sentence processing. We conducted two experiments in which some nouns – embedded in a predictive spoken sentence context – were unexpectedly delayed by 1 second. Target nouns were either abstract/concrete (Experiment 1) or animate/inanimate (Experiment 2). We hypothesized that if neural prediction error signals following (temporary) omissions carry specific information about the stimulus, the semantic category of the upcoming target word is encoded in brain activity prior to its presentation. Using time-generalized multivariate pattern analysis, we demonstrate significant decoding of word category from silent periods directly preceding the target word, in both experiments. This provides direct evidence for predictive coding during sentence processing, i.e., that information about a word can be encoded in brain activity before it is perceived. While the same semantic contrast could also be decoded from EEG activity elicited by isolated words (Experiment 1), the identified neural patterns did not generalize to pre-stimulus delay period activity in sentences. Our results not only indicate that the brain processes language predictively, but also demonstrate the nature and sentence-specificity of category-level semantic predictions preactivated during sentence comprehension.
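The time-generalization analysis can be sketched as training a decoder at each time point and testing it at every other time point, yielding a train-time by test-time accuracy matrix; above-chance decoding at test times inside the silent pre-target delay then indexes pre-activated category information. Shapes, the classifier, and the simulated data below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 40
X = rng.standard_normal((n_trials, n_channels, n_times))  # EEG epochs
y = rng.integers(0, 2, n_trials)                          # e.g., abstract vs. concrete

train, test = np.arange(100), np.arange(100, 200)
scores = np.zeros((n_times, n_times))
for t_tr in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train, :, t_tr], y[train])
    for t_te in range(n_times):
        scores[t_tr, t_te] = clf.score(X[test, :, t_te], y[test])
# Above-chance cells with test times inside the silent delay (before the
# target word appears) would indicate pre-activated semantic category
# information, i.e., prediction rather than stimulus-evoked processing.
```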
From early to middle childhood, brain regions that underlie memory consolidation undergo profound maturational changes. However, few empirical studies have directly related age-related differences in brain structure to memory consolidation processes. The present study examined system-level memory consolidation of intentionally studied object-location associations after one night of sleep (short delay) and after two weeks (long delay) in typically developing 5-to-7-year-old children (n = 50) and young adults (n = 39). Behavioural differences in memory consolidation were related to structural brain measures. Our results showed that children, in comparison to young adults, consolidate correctly learnt object-location associations less robustly over both short and long delays. Moreover, using the partial least squares correlation method, a unique multivariate profile comprising specific neocortical (prefrontal, parietal, and occipital), cerebellar, and hippocampal subfield structures was found to be associated with variation in short-delay memory consolidation. A different multivariate profile, comprising a reduced set of brain structures, mainly neocortical (prefrontal, parietal, and occipital) and selected hippocampal subfield structures (CA1-2 and subiculum), was associated with variation in long-delay memory consolidation. Taken together, the results suggest that multivariate structural patterns of unique sets of brain regions are related to variation in short- and long-delay memory consolidation across children and young adults.
RESEARCH HIGHLIGHTS
* Short- and long-delay memory consolidation is less robust in children than in young adults
* The short-delay brain profile comprises hippocampal, cerebellar, and neocortical regions
* The long-delay brain profile comprises neocortical and selected hippocampal regions
* Brain profiles differ between children and young adults
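For the abstract above, a schematic of the partial least squares correlation (PLSC) approach: correlate each structural measure with the behavioral consolidation score and decompose the resulting cross-correlation matrix by SVD; the left singular vector is the multivariate brain profile (saliences). All variable names and the single-behavior simplification are assumptions; with one behavioral variable, the profile reduces to the normalized correlation vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_regions = 89, 24                        # e.g., 50 children + 39 adults
brain = rng.standard_normal((n_subjects, n_regions))  # z-scored structural measures
behavior = rng.standard_normal((n_subjects, 1))       # z-scored consolidation score

R = brain.T @ behavior / n_subjects   # region-by-behavior cross-correlation matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)
brain_profile = U[:, 0]               # saliences: each region's weight in the profile
latent_brain = brain @ brain_profile  # per-subject expression of the profile
# In practice, significance is assessed by permuting the behavioral scores and
# the reliability of the saliences by bootstrap resampling (omitted here).
```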
An important question concerning inter-areal communication in the cortex is whether such interactions are synergistic: brain signals can either share common information (redundancy) or encode complementary information that is only available when both signals are considered together (synergy). Here, we dissociated cortical interactions sharing common information from those encoding complementary information during prediction error processing. To this end, we computed co-information, an information-theoretic measure that distinguishes redundant from synergistic information among brain signals. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics encoded redundant and synergistic information during auditory prediction error processing. In both tasks, we observed multiple patterns of synergy across the entire cortical hierarchy, with distinct dynamics. The information conveyed by ERPs and BB signals was highly synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the spatio-temporal patterns of synergy and redundancy observed in the experimental results and further demonstrated that the emergence of synergy between auditory and frontal regions requires strong, long-distance feedback and feedforward connections. These results indicate that the distributed representations of prediction error signals across the cortical hierarchy can be highly synergistic.
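Co-information makes the redundancy/synergy distinction computable. For two signals X and Y carrying information about a stimulus S, CoI(X; Y; S) = I(X; S) + I(Y; S) - I(X,Y; S): positive values indicate redundant encoding, negative values synergistic encoding. The sketch below implements this for discrete joint distributions; the XOR example is the textbook case of pure synergy. How the continuous ERP/BB signals are discretized in practice is not modeled here.

```python
import numpy as np
from itertools import product

def mutual_info(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

def co_information(pxys):
    """CoI(X;Y;S) = I(X;S) + I(Y;S) - I(X,Y;S) for a table p(x, y, s).
    Positive -> redundancy dominates; negative -> synergy dominates."""
    pxs = pxys.sum(axis=1)                    # p(x, s)
    pys = pxys.sum(axis=0)                    # p(y, s)
    pxy_s = pxys.reshape(-1, pxys.shape[2])   # p((x, y), s)
    return mutual_info(pxs) + mutual_info(pys) - mutual_info(pxy_s)

# XOR example: S is only decodable from X and Y jointly (pure synergy).
p = np.zeros((2, 2, 2))
for x, y in product(range(2), repeat=2):
    p[x, y, x ^ y] = 0.25
print(co_information(p))  # -1.0 bit
```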