150 Psychology
13 documents (preprints, 2019, English, full text available)
Context information supports serial dependence of multiple visual objects across memory episodes
(2019)
Visual perception operates in an object-based manner, integrating associated features via attention. Working memory allows flexible access to a limited number of currently relevant objects, even when they are occluded or physically no longer present. Recently, it has been shown that we compensate for small changes of an object’s features across memory episodes, which can support its perceptual stability. This phenomenon was termed ‘serial dependence’ and has mostly been studied in situations comprising only a single relevant object. However, since we are typically confronted with situations in which several objects have to be perceived and held in working memory, the central question of how we selectively create temporal stability for several objects has remained unsolved. As different objects can be distinguished by their accompanying context features, such as their color or temporal position, we tested whether serial dependence is supported by the congruence of context features across memory episodes. Specifically, we asked participants to remember the motion directions of two sequentially presented colored dot fields per trial. At the end of a trial, one motion direction was cued for continuous report, either by its color (Experiment 1) or by its serial position (Experiment 2). We observed serial dependence, i.e., an attractive bias of currently memorized toward previously memorized motion directions, which was clearly enhanced when items shared the same color or serial position across trials. This bias was particularly pronounced for the context feature that was used for cueing and for the target of the previous trial. Together, these findings demonstrate that the coding of current object representations depends on previous representations, especially when they share similar content and context features. Apparently, the binding of content and context features is not completely erased after a memory episode but is carried over to subsequent episodes. As this reflects temporal dependencies in natural settings, the present findings reveal a mechanism that integrates corresponding bundles of content and context features to support stable representations of individualized objects over time.
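The attractive bias described above can be quantified by relating each trial's signed report error to the circular distance between the previous and the current motion direction. A minimal sketch on synthetic data (all variable names and the toy attraction strength are illustrative assumptions, not taken from the study):

```python
import numpy as np

def serial_dependence_bias(prev_dirs, curr_dirs, responses):
    """Signed response error relative to the circular distance (degrees)
    between the previous and the current motion direction. Positive
    'attraction' values mean the report was pulled toward the previous
    direction."""
    wrap = lambda a: (a + 180.0) % 360.0 - 180.0   # map angles to (-180, 180]
    error = wrap(responses - curr_dirs)            # signed report error
    delta = wrap(prev_dirs - curr_dirs)            # previous relative to current
    attraction = np.sign(delta) * error            # fold toward previous item
    return delta, attraction

# toy data with a built-in attraction toward the previous direction
rng = np.random.default_rng(0)
curr = rng.uniform(0, 360, 1000)
prev = rng.uniform(0, 360, 1000)
wrap = lambda a: (a + 180.0) % 360.0 - 180.0
resp = curr + 0.1 * wrap(prev - curr) + rng.normal(0, 2, 1000)

delta, attraction = serial_dependence_bias(prev, curr, resp)
near = np.abs(delta) < 45                          # small inter-trial distances
print(attraction[near].mean() > 0)                 # positive -> attractive bias
```

In the study's design, the same computation would be run separately for same-context versus different-context trial pairs to test whether the bias is larger when color or serial position repeats.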
Successful consolidation of associative memories relies on the coordinated interplay of slow oscillations and sleep spindles during non-rapid eye movement (NREM) sleep, enabling the transfer of labile information from the hippocampus to permanent memory stores in the neocortex. During senescence, the decline of the structural and functional integrity of the hippocampus and neocortical regions is paralleled by changes of the physiological events that stabilize and enhance associative memories during NREM sleep. However, the currently available evidence remains inconclusive as to whether, and under which circumstances, aging impacts memory consolidation. By tracing the encoding quality of single memories in individual participants, we demonstrate that previous learning determines the extent of age-related impairments in memory consolidation. Specifically, the detrimental effects of aging on memory maintenance were greatest for mnemonic contents of medium encoding quality, whereas memory gain of weakly encoded memories did not differ by age. Using multivariate techniques, we identified profiles of alterations in sleep physiology and brain structure characteristic for increasing age. Importantly, while both ‘aged’ sleep and ‘aged’ brain structure profiles were associated with reduced memory maintenance, inter-individual differences in neither sleep nor structural brain integrity qualified as the driving force behind age differences in sleep-dependent consolidation in the present study.
Age-related memory decline is associated with changes in neural functioning but little is known about how aging affects the quality of information representation in the brain. Whereas a long-standing hypothesis of the aging literature links cognitive impairments to less distinct neural representations in old age, memory studies have shown that high similarity between activity patterns benefits memory performance for the respective stimuli. Here, we addressed this apparent conflict by investigating between-item representational similarity in 50 younger (19–27 years old) and 63 older (63–75 years old) human adults (male and female) who studied scene-word associations using a mnemonic imagery strategy while electroencephalography was recorded. We compared the similarity of spatiotemporal frequency patterns elicited during encoding of items with different subsequent memory fate. Compared to younger adults, older adults’ memory representations were more similar to each other but items that elicited the most similar activity patterns early in the encoding trial were those that were best remembered by older adults. In contrast, young adults’ memory performance benefited from decreased similarity between earlier and later periods in the encoding trials, which might reflect their better success in forming unique memorable mental images of the joint picture–word pair. Our results advance the understanding of the representational properties that give rise to memory quality as well as how these properties change in the course of aging.
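The between-item similarity measure central to this abstract is, in essence, the mean pairwise correlation between items' spatiotemporal activity patterns. A minimal sketch on synthetic data (the group labels, array sizes, and mixing weights are illustrative assumptions, not the study's actual EEG features):

```python
import numpy as np

def between_item_similarity(patterns):
    """Mean pairwise Pearson correlation between items' spatiotemporal
    activity patterns, given an (items x features) matrix."""
    z = (patterns - patterns.mean(1, keepdims=True)) / patterns.std(1, keepdims=True)
    r = z @ z.T / patterns.shape[1]          # item-by-item correlation matrix
    iu = np.triu_indices_from(r, k=1)        # unique off-diagonal pairs
    return r[iu].mean()

rng = np.random.default_rng(1)
shared = rng.normal(size=200)                # component shared across items
# "older-like" items lean more on the shared component -> higher similarity
old_like = shared * 0.8 + rng.normal(size=(40, 200)) * 0.6
young_like = shared * 0.3 + rng.normal(size=(40, 200)) * 0.95

print(between_item_similarity(old_like) > between_item_similarity(young_like))  # True
```

The study's subsequent-memory analysis would then compare this similarity between later-remembered and later-forgotten items within each age group, rather than across the two toy groups shown here.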
We studied oscillatory mechanisms of memory formation in 48 younger and 51 older adults in an intentional associative memory task with cued recall. While older adults showed lower memory performance than younger adults, we found subsequent memory effects (SME) in alpha/beta and theta frequency bands in both age groups. Using logistic mixed effect models, we investigated whether interindividual differences in structural integrity of key memory regions could account for interindividual differences in the strength of the SME. Structural integrity of inferior frontal gyrus (IFG) and hippocampus was reduced in older adults. SME in the alpha/beta band were modulated by the cortical thickness of IFG, in line with its hypothesized role for deep semantic elaboration. Importantly, this structure–function relationship did not differ by age group. However, older adults were more frequently represented among the participants with low cortical thickness and consequently weaker SME in the alpha band. Thus, our results suggest that differences in the structural integrity of the IFG contribute not only to interindividual, but also to age differences in memory formation.
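The core of such an analysis is a logistic model predicting single-trial recall from an oscillatory SME feature and a structural covariate. The study used logistic mixed-effects models; the sketch below is a fixed-effects simplification on synthetic data (the coefficient values and variable names are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy single-trial data: alpha/beta power change at encoding (SME feature)
# and IFG cortical thickness; recall probability rises with both.
rng = np.random.default_rng(2)
n = 2000
alpha_sme = rng.normal(size=n)          # trial-wise alpha/beta power change (z-scored)
thickness = rng.normal(size=n)          # IFG cortical thickness (z-scored)
logit = 0.8 * alpha_sme + 0.5 * thickness
recalled = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([alpha_sme, thickness])
model = LogisticRegression().fit(X, recalled)
print(model.coef_[0, 0] > 0, model.coef_[0, 1] > 0)   # recovers positive weights
```

A full mixed-effects treatment would additionally include random intercepts and slopes per participant (e.g., via R's lme4 or a Bayesian GLMM), which this fixed-effects toy omits.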
Mounting evidence suggests that perception depends on a largely feedforward brain network. However, the discrepancy between (i) the latency of the corresponding feedforward responses (150–200 ms) and (ii) the time it takes human subjects to recognize brief images (often >500 ms) suggests that recurrent neuronal activity is critical to visual processing. Here, we use magnetoencephalography to localize, track and decode the feedforward and recurrent responses elicited by brief presentations of variably ambiguous letters and digits. We first confirm that these stimuli trigger, within the first 200 ms, a feedforward response in the ventral and dorsal cortical pathways. The subsequent activity is distributed across temporal, parietal and prefrontal cortices and leads to a slow and incremental cascade of representations culminating in action-specific motor signals. We introduce an analytical framework to show that these brain responses are best accounted for by a hierarchy of recurrent neural assemblies. An accumulation of computational delays across specific processing stages explains subjects’ reaction times. Finally, the slow convergence of neural representations towards perceptual categories is quickly followed by all-or-none motor decision signals. Together, these results show how recurrent processes generate, over extended time periods, a cascade of hierarchical decisions that ultimately predicts subjects’ perceptual reports.
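Tracking when a stimulus category becomes decodable, as in this abstract, typically means training a classifier at each time point of the sensor data. A minimal sliding-estimator sketch on synthetic data (array sizes, the "onset" sample, and the effect size are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# toy MEG: letter-vs-digit label becomes decodable only after "stimulus onset"
rng = np.random.default_rng(5)
n_trials, n_sensors, n_times = 200, 30, 40
labels = rng.integers(0, 2, n_trials)
meg = rng.normal(size=(n_trials, n_sensors, n_times))
meg[:, :5, 20:] += labels[:, None, None] * 1.0   # class signal from sample 20 on

# decode the label separately at each time point (5-fold cross-validation)
acc = [cross_val_score(LogisticRegression(max_iter=1000),
                       meg[:, :, t], labels, cv=5).mean()
       for t in range(n_times)]
print(np.mean(acc[:20]) < np.mean(acc[20:]))     # decodable only after onset
```

In practice this is what MNE-Python's SlidingEstimator wraps; the latency at which accuracy rises above chance at each processing stage is what supports the accumulating-delays argument in the abstract.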
BOLD signatures of sleep
(2019)
Sleep can be distinguished from wake by changes in brain electrical activity, typically assessed using electroencephalography (EEG). The hallmarks of non-rapid-eye-movement sleep are two major EEG events: slow waves and spindles. Here we sought to identify possible signatures of sleep in brain hemodynamic activity, using simultaneous fMRI-EEG. We found that, during the transition from wake to sleep, blood-oxygen-level-dependent (BOLD) activity evolved from a mixed-frequency pattern to one dominated by two distinct oscillations: a low-frequency (~0.05 Hz) oscillation prominent in light sleep and a high-frequency (~0.17 Hz) oscillation in deep sleep. The two BOLD oscillations correlated with the occurrences of spindles and slow waves, respectively. They were detectable across the whole brain, cortically and subcortically, but had different regional distributions and opposite onset patterns. These spontaneous BOLD oscillations provide fMRI signatures of basic sleep processes, which may be employed to study human sleep at a spatial resolution and brain coverage not achievable using EEG.
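Detecting an oscillation like the ~0.05 Hz "light sleep" rhythm in a BOLD time course amounts to finding a spectral peak at very low frequencies. A minimal sketch on a synthetic signal (the TR, duration, and signal-to-noise ratio are illustrative assumptions):

```python
import numpy as np
from scipy.signal import welch

# synthetic BOLD time course: TR = 1 s, ten minutes, with a 0.05 Hz rhythm
fs = 1.0                                   # one volume per second
t = np.arange(0, 600, 1 / fs)
bold = np.sin(2 * np.pi * 0.05 * t) \
     + 0.5 * np.random.default_rng(3).normal(size=t.size)

# Welch power spectral density; long segments for low-frequency resolution
f, psd = welch(bold, fs=fs, nperseg=256)
peak = f[np.argmax(psd)]
print(abs(peak - 0.05) < 0.01)             # peak lands near 0.05 Hz
```

Note that resolving a 0.05 Hz peak requires segments spanning several oscillation cycles (here 256 s), which is why long, artifact-free sleep recordings matter for this kind of analysis.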
Afterimages result from prolonged exposure to still visual stimuli. They are best detectable when viewed against uniform backgrounds and can persist for multiple seconds. Consequently, the dynamics of afterimages appear to be slow by their very nature. To the contrary, we report here that about 50% of an afterimage’s intensity can be erased rapidly, within less than a second. The prerequisite is that subjects view rich visual content to erase the afterimage; fast erasure of afterimages does not occur if subjects view a blank screen. Moreover, we find evidence that fast removal of afterimages is a skill learned with practice, as our subjects were always more effective in cleaning up afterimages in later parts of the experiment. These results can be explained by a tri-level hierarchy of adaptive mechanisms, as has been proposed by the theory of practopoiesis.
Word familiarity and predictive context facilitate visual word processing, leading to faster recognition times and reduced neuronal responses. Previously, models with and without top-down connections, including lexical-semantic, pre-lexical (e.g., orthographic/phonological), and visual processing levels were successful in accounting for these facilitation effects. Here we systematically assessed context-based facilitation with a repetition priming task and explicitly dissociated pre-lexical and lexical processing levels using a pseudoword familiarization procedure. Experiment 1 investigated the temporal dynamics of neuronal facilitation effects with magnetoencephalography (MEG; N=38 human participants) while Experiment 2 assessed behavioral facilitation effects (N=24 human participants). Across all stimulus conditions, MEG demonstrated context-based facilitation across multiple time windows starting at 100 ms, in occipital brain areas. This finding indicates context-based facilitation at an early visual processing level. In both experiments, we furthermore found an interaction of context and lexical familiarity, such that stimuli with associated meaning showed the strongest context-dependent facilitation in brain activation and behavior. Using MEG, this facilitation effect could be localized to the left anterior temporal lobe at around 400 ms, indicating within-level (i.e., exclusively lexical-semantic) facilitation but no top-down effects on earlier processing stages. Increased pre-lexical familiarity (in pseudowords familiarized through training) did not enhance or reduce context effects significantly. We conclude that context-based facilitation is achieved within visual and lexical processing levels. Finally, by testing alternative hypotheses derived from mechanistic accounts of repetition suppression, we suggest that the facilitatory context effects found here are implemented using a predictive coding mechanism.
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, where the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has developed that can account for classic EEG findings of language, i.e., the ease vs. difficulty of integrating a word with its sentence context. However, models of semantics have to account not only for context-based word processing, but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English, German) read isolated words, we encode and decode word vectors taken from the family of prediction-based word2vec algorithms. We find that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models perform better than a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
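The encoding analysis described in this abstract fits a (regularized) linear map from word vectors to the neural response pattern and evaluates it out of sample. A minimal ridge-regression sketch on synthetic data (the vector dimensionality, sensor count, and the linear ground-truth map are illustrative assumptions standing in for real word2vec vectors and ERP patterns):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n_words, dim, n_sensors = 300, 50, 32
vectors = rng.normal(size=(n_words, dim))        # stand-in for word2vec vectors
W = rng.normal(size=(dim, n_sensors)) * 0.3      # assumed linear vector->EEG map
eeg = vectors @ W + rng.normal(size=(n_words, n_sensors))  # e.g., 300-500 ms window

# cross-validated encoding: predict each sensor's response from the vectors
pred = cross_val_predict(Ridge(alpha=1.0), vectors, eeg, cv=5)
r = [np.corrcoef(pred[:, s], eeg[:, s])[0, 1] for s in range(n_sensors)]
print(float(np.mean(r)) > 0)                     # above-chance encoding accuracy
```

The decoding direction reverses the mapping (EEG patterns predicting vector dimensions), and the model comparison in the study pits different vector spaces (word2vec variants vs. a WordNet-derived baseline) against each other on the same held-out words.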
Mental imagery provides an essential simulation tool for remembering the past and planning the future, with its strength affecting both cognition and mental health. Research suggests that neural activity spanning prefrontal, parietal, temporal, and visual areas supports the generation of mental images. Exactly how this network controls the strength of visual imagery remains unknown. Here, brain imaging and transcranial magnetic phosphene data show that lower resting activity and excitability levels in early visual cortex (V1-V3) predict stronger sensory imagery. Electrically decreasing visual cortex excitability using tDCS increases imagery strength, demonstrating a causal role of visual cortex excitability in controlling visual imagery. These data suggest a neurophysiological mechanism of cortical excitability involved in controlling the strength of mental images.