Background: Standardized neuropsychological testing serves to quantify cognitive impairment in multiple sclerosis (MS) patients. However, the exact mechanism by which cognitive dysfunction translates into difficulties with everyday tasks has remained unclear. To address this question, we tested whether MS patients with intact versus impaired information processing speed, as measured by the Symbol Digit Modalities Test (SDMT), differ in their visual search behavior during ecologically valid tasks reflecting everyday activities.
Methods: Forty-three patients with relapsing-remitting MS were enrolled in an eye-tracking experiment consisting of a visual search task with naturalistic images. Patients were grouped as “impaired” or “unimpaired” according to their SDMT performance. Reaction time, accuracy, and eye-tracking parameters were measured.
Results: The groups did not differ in age, gender, or visual acuity. Patients with impaired SDMT performance (cut-off: SDMT z-score < −1.5) needed more time to find and fixate the target (q = 0.006). They spent less time fixating the target (q = 0.042). Impaired patients had slower reaction times and were less accurate (both q = 0.0495), even after controlling for patients' upper extremity function. Exploratory analysis revealed that unimpaired patients had higher accuracy than impaired patients, particularly when the announced target was in an unexpected location (p = 0.037). Correlational analysis suggested that SDMT performance is inversely linked to the time to first fixation of the target only when the announced target was in its expected location (r = −0.498, p = 0.003 vs. r = −0.212, p = 0.229).
Conclusion: Dysfunctional visual search behavior may be one of the mechanisms translating cognitive deficits into difficulties in everyday tasks in MS patients. Our results suggest that cognitively impaired patients search their visual environment less efficiently and this is particularly evident when top-down processes have to be employed.
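The grouping step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the study's analysis code; only the z-score cut-off of −1.5 comes from the abstract, and all function and variable names are hypothetical.

```python
# Hypothetical sketch of splitting patients into "impaired" vs. "unimpaired"
# by SDMT z-score. Only the cut-off value (-1.5) is taken from the abstract.

def group_by_sdmt(z_scores, cutoff=-1.5):
    """Split patients into impaired vs. unimpaired by SDMT z-score.

    Returns two lists of patient indices: (impaired, unimpaired).
    """
    impaired = [i for i, z in enumerate(z_scores) if z < cutoff]
    unimpaired = [i for i, z in enumerate(z_scores) if z >= cutoff]
    return impaired, unimpaired

# Four made-up z-scores: patients 0 and 2 fall below the cut-off
impaired, unimpaired = group_by_sdmt([-2.0, -0.3, -1.6, 0.8])
```

The group-level comparisons (reaction time, fixation measures) would then be run between these two index sets, with multiple-comparison correction producing the q-values reported above.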
Motives motivate human behavior. Most behaviors are driven by more than one motive, yet it is unclear how different motives interact and how such motive combinations affect the neural computation of the behaviors they drive. To answer this question, we induced two prosocial motives simultaneously (multi-motive condition) and separately (single motive conditions). After the different motive inductions, participants performed the same choice task in which they allocated points in favor of the other person (prosocial choice) or in favor of themselves (egoistic choice). We used fMRI to assess prosocial choice-related brain responses and drift diffusion modeling to specify how motive combinations affect individual components of the choice process. Our results showed that the combination of the two motives in the multi-motive condition increased participants’ choice biases prior to the behavior itself. On the neural level, these changes in initial prosocial bias were associated with neural responses in the bilateral dorsal striatum. In contrast, the efficiency of the prosocial decision process was comparable between the multi-motive and the single-motive conditions. These findings provide insights into the computation of prosocial choices in complex motivational states, the motivational setting that drives most human behaviors.
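Drift diffusion modeling, as used in this study, decomposes a choice into separable components: a starting-point bias (where evidence accumulation begins) and a drift rate (how efficiently evidence accumulates). A minimal simulation sketch of that distinction; all parameter values and names are hypothetical, not taken from the study:

```python
import math
import random

def ddm_trial(drift, start_bias, threshold=1.0, noise=1.0, dt=0.001,
              max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial as a noisy random walk.

    start_bias shifts the starting point toward the 'prosocial' boundary
    (analogous to the initial choice bias the abstract links to the
    multi-motive condition); drift is the evidence-accumulation rate
    (the efficiency of the decision process). Returns (choice, rt).
    """
    rng = rng or random.Random()
    x, t = start_bias, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    choice = "prosocial" if x >= threshold else "egoistic"
    return choice, t

# With noise switched off the walk is deterministic: a strong initial
# bias toward the prosocial boundary produces a fast prosocial choice.
choice, rt = ddm_trial(drift=2.0, start_bias=0.99, noise=0.0)
```

In this framing, the reported result corresponds to the multi-motive condition shifting `start_bias` while leaving `drift` comparable across conditions.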
Objective: Research on visual working memory has shown that individual stimulus features are processed in both specialized sensory regions and higher cortical areas. Much less evidence exists for auditory working memory. Here, a main distinction has been proposed between the processing of spatial and non-spatial sound features. Our aim was to examine feature-specific activation patterns in auditory working memory.
Methods: We collected fMRI data while 28 healthy adults performed an auditory delayed match-to-sample task. Stimuli were abstract sounds characterized by both spatial and non-spatial information, i.e., interaural time delay and central frequency, respectively. In separate recording blocks, subjects had to memorize either the spatial or non-spatial feature, which had to be compared with a probe sound presented after a short delay. We performed both univariate and multivariate comparisons between spatial and non-spatial task blocks.
Results: Processing of spatial sound features elicited a higher activity in a small cluster in the superior parietal lobe than did sound pattern processing, whereas there was no significant activation difference for the opposite contrast. The multivariate analysis was applied using a whole-brain searchlight approach to identify feature-selective processing. The task-relevant auditory feature could be decoded from multiple brain regions including the auditory cortex, posterior temporal cortex, middle occipital gyrus, and extended parietal and frontal regions.
Conclusion: In summary, the lack of large univariate activation differences between spatial and non-spatial processing could be attributable to the identical stimulation in both tasks. In contrast, the whole-brain multivariate analysis identified feature-specific activation patterns in widespread cortical regions. This suggests that areas beyond the auditory dorsal and ventral streams contribute to working memory processing of auditory stimulus features.
Aging is accompanied by unisensory decline. To compensate, older adults may increasingly rely on two complementary strategies. First, they integrate more information from different sensory organs. Second, according to the predictive coding (PC) model, we form “templates” (internal models or “priors”) of the environment through our experiences, and their greater life experience may lead older adults to rely on these templates more than younger adults do. Multisensory integration and predictive coding would be effective strategies for the perception of near-threshold stimuli, which may, however, come at the cost of integrating irrelevant information. Both strategies can be studied in multisensory illusions because these require the integration of different sensory information, as well as an internal model of the world that can take precedence over sensory input. Here, we elicited a classic multisensory illusion, the sound-induced flash illusion, in younger (mean: 27 years, N = 25) and older (mean: 67 years, N = 28) adult participants while recording the magnetoencephalogram. Older adults perceived more illusions than younger adults. Older adults had increased pre-stimulus beta-band activity compared to younger adults, as predicted by microcircuit theories of predictive coding, which suggest that priors and predictions are linked to beta-band activity. Transfer entropy analysis and dynamic causal modeling of pre-stimulus magnetoencephalography data revealed a stronger illusion-related modulation of cross-modal connectivity from auditory to visual cortices in older compared to younger adults. We interpret this as the neural correlate of increased reliance on a cross-modal predictive template in older adults leading to the illusory percept.
Context information supports serial dependence of multiple visual objects across memory episodes
(2020)
Serial dependence is thought to promote perceptual stability by compensating for small changes of an object’s appearance across memory episodes. So far, it has been studied in situations that comprised only a single object. The question of how we selectively create temporal stability of several objects remains unsolved. In a memory task, objects can be differentiated by their to-be-memorized feature (content) as well as accompanying discriminative features (context). We test whether congruent context features, in addition to content similarity, support serial dependence. In four experiments, we observe a stronger serial dependence between objects that share the same context features across trials. Apparently, the binding of content and context features is not erased but rather carried over to the subsequent memory episode. As this reflects temporal dependencies in natural settings, our findings reveal a mechanism that integrates corresponding content and context features to support stable representations of individualized objects over time.
Attention selects relevant information regardless of whether it is physically present or internally stored in working memory. Perceptual research has shown that attentional selection of external information is better conceived as rhythmic prioritization than as stable allocation. Here we tested this principle using information processing of internal representations held in working memory. Participants memorized four spatial positions that formed the endpoints of two objects. One of the positions was cued for a delayed match-non-match test. When uncued positions were probed, participants responded faster to uncued positions located on the same object as the cued position than to those located on the other object, revealing object-based attention in working memory. Manipulating the interval between cue and probe at a high temporal resolution revealed that reaction times oscillated at a theta rhythm of 6 Hz. Moreover, oscillations showed an anti-phase relationship between memorized but uncued positions on the same versus other object as the cued position, suggesting that attentional prioritization fluctuated rhythmically in an object-based manner. Our results demonstrate the highly rhythmic nature of attentional selection in working memory. Moreover, the striking similarity between rhythmic attentional selection of mental representations and perceptual information suggests that attentional oscillations are a general mechanism of information processing in human cognition. These findings have important implications for current, attention-based models of working memory.
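The rhythm analysis described in this abstract amounts to looking for spectral power at ~6 Hz in the reaction-time time course sampled across cue–probe intervals. A toy sketch of that idea on synthetic data; the sampling step, amplitudes, and names are all hypothetical:

```python
import math

def amplitude_at(samples, dt, freq):
    """Fourier amplitude of a detrended, evenly sampled series at `freq` Hz."""
    n = len(samples)
    mean = sum(samples) / n
    re = sum((s - mean) * math.cos(2 * math.pi * freq * i * dt)
             for i, s in enumerate(samples))
    im = sum((s - mean) * math.sin(2 * math.pi * freq * i * dt)
             for i, s in enumerate(samples))
    return 2.0 * math.hypot(re, im) / n

# Synthetic reaction times sampled every 20 ms over 1 s: a 500 ms mean
# modulated by a +/- 50 ms oscillation at 6 Hz, mimicking the theta
# rhythm reported in the abstract.
dt = 0.02
rts = [0.5 + 0.05 * math.sin(2 * math.pi * 6 * i * dt) for i in range(50)]
```

Evaluating `amplitude_at(rts, dt, 6.0)` recovers the 0.05 s modulation, while off-rhythm frequencies such as 3 Hz yield near-zero amplitude; the anti-phase relationship between objects in the study corresponds to a 180° phase shift of this 6 Hz component.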
Objective: To determine whether the performance of multiple sclerosis (MS) patients in the sound-induced flash illusion (SiFi), a multisensory perceptual illusion, would reflect their cognitive impairment.
Methods: We performed the SiFi task as well as an extensive neuropsychological testing in 95 subjects [39 patients with relapse-remitting MS (RRMS), 16 subjects with progressive multiple sclerosis (PMS) and 40 healthy control subjects (HC)].
Results: MS patients reported the multisensory SiFi more frequently than HC. In contrast, there were no group differences in the control conditions. Notably, patients with the progressive type of MS continued to perceive the illusion at stimulus onset asynchronies (SOA) more than three times longer than the SOA at which the illusion had already broken down for healthy controls. Furthermore, MS patients' degree of cognitive impairment, measured with a broad neuropsychological battery encompassing tests of memory, attention, executive functions, and fluency, was predicted by their performance in the SiFi task at the longest SOA of 500 ms.
Conclusions: These findings support the notion that MS patients exhibit an altered multisensory perception in the SiFi task and that their susceptibility to the perceptual illusion is negatively correlated with their neuropsychological test performance. Since MS lesions affect white matter tracts and cortical regions which seem to be involved in the transfer and processing of both crossmodal and cognitive information, this might be one possible explanation for our findings. SiFi might be considered as a brief, non-expensive, language- and education-independent screening test for cognitive deficits in MS patients.
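Behaviorally, the SiFi measure boils down to the proportion of one-flash/two-beep trials on which two flashes are reported, computed separately per stimulus onset asynchrony. A hypothetical scoring sketch (trial fields and values are illustrative; only the 500 ms SOA figure comes from the abstract):

```python
def illusion_rate(trials, soa_ms):
    """Proportion of 1-flash / 2-beep trials at the given SOA (ms) on
    which two flashes were reported, i.e., the illusion occurred.

    Each trial is a dict; all field names here are hypothetical.
    Returns None if no illusion trials exist at that SOA.
    """
    relevant = [t for t in trials
                if t["soa_ms"] == soa_ms
                and t["flashes"] == 1 and t["beeps"] == 2]
    if not relevant:
        return None
    return sum(t["reported_flashes"] == 2 for t in relevant) / len(relevant)

# Made-up trials: at SOA 500 ms, 2 of 3 illusion trials elicit the illusion;
# the last trial is a 1-flash/1-beep control and is excluded.
trials = [
    {"soa_ms": 500, "flashes": 1, "beeps": 2, "reported_flashes": 2},
    {"soa_ms": 500, "flashes": 1, "beeps": 2, "reported_flashes": 2},
    {"soa_ms": 500, "flashes": 1, "beeps": 2, "reported_flashes": 1},
    {"soa_ms": 500, "flashes": 1, "beeps": 1, "reported_flashes": 1},
]
```

A per-patient rate at the longest SOA, computed this way, would be the quantity the abstract reports as predictive of neuropsychological performance.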
Objective: Many cancer patients complain about cognitive dysfunction. While cognitive deficits have been attributed to the side effects of chemotherapy, there is evidence for impairment at disease onset, prior to cancer-directed therapy. Further debated issues concern the relationship between self-reported complaints and objective test performance and the role of psychological distress.
Method: We assessed performance on neuropsychological tests of attention and memory and obtained estimates of subjective distress and quality of life in 27 breast cancer patients and 20 healthy controls. Testing in patients took place shortly after the initial diagnosis, but prior to subsequent therapy.
Results: While patients showed elevated distress, cognitive performance differed on a few subtests only. Patients showed slower processing speed and poorer verbal memory than controls. Objective and self-reported cognitive function were unrelated, and psychological distress correlated more strongly with subjective complaints than with neuropsychological test performance.
Conclusion: This study provides further evidence of limited cognitive deficits in cancer patients prior to the onset of adjuvant therapy. Self-reported cognitive deficits seem more closely related to psychological distress than to objective test performance.
Context information supports serial dependence of multiple visual objects across memory episodes
(2019)
Visual perception operates in an object-based manner, by integrating associated features via attention. Working memory allows a flexible access to a limited number of currently relevant objects, even when they are occluded or physically no longer present. Recently, it has been shown that we compensate for small changes of an object’s feature over memory episodes, which can support its perceptual stability. This phenomenon was termed ‘serial dependence’ and has mostly been studied in situations that comprised only a single relevant object. However, since we are typically confronted with situations where several objects have to be perceived and held in working memory, the central question of how we selectively create temporal stability of several objects has remained unsolved. As different objects can be distinguished by their accompanying context features, like their color or temporal position, we tested whether serial dependence is supported by the congruence of context features across memory episodes. Specifically, we asked participants to remember the motion directions of two sequentially presented colored dot fields per trial. At the end of a trial one motion direction was cued for continuous report either by its color (Experiment 1) or serial position (Experiment 2). We observed serial dependence between current and past motion directions, i.e., an attractive bias of currently memorized objects toward previously memorized ones, which was clearly enhanced when items shared the same color or serial position across trials. This bias was particularly pronounced for the context feature that was used for cueing and for the target of the previous trial. Together, these findings demonstrate that coding of current object representations depends on previous representations, especially when they share similar content and context features. Apparently the binding of content and context features is not completely erased after a memory episode, but it is carried over to subsequent episodes.
As this reflects temporal dependencies in natural settings, the present findings reveal a mechanism that integrates corresponding bundles of content and context features to support stable representations of individualized objects over time.
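A common way to quantify the attractive bias described here is to fold each trial's report error by the sign of the previous-minus-current stimulus difference: a positive folded error means the report was pulled toward the previous item. A minimal sketch under that convention (hypothetical names; degrees as units; not the study's actual analysis):

```python
def signed_diff(a, b):
    """Smallest signed angular difference a - b in degrees, in (-180, 180]."""
    d = (a - b + 180.0) % 360.0 - 180.0
    return 180.0 if d == -180.0 else d

def serial_dependence(prev_stims, cur_stims, reports):
    """Mean report error folded toward the previous trial's direction.

    Positive values indicate attraction toward the previous motion
    direction (serial dependence); negative values indicate repulsion.
    """
    folded = []
    for prev, cur, rep in zip(prev_stims, cur_stims, reports):
        err = signed_diff(rep, cur)
        toward_prev = signed_diff(prev, cur)
        if toward_prev != 0.0:
            folded.append(err if toward_prev > 0 else -err)
    return sum(folded) / len(folded) if folded else 0.0

# Reports nudged 2 degrees toward the previous trial's direction on
# both made-up trials, so the folded bias comes out at +2 degrees.
bias = serial_dependence([30.0, 80.0], [10.0, 100.0], [12.0, 98.0])
```

The context-congruence effect reported above would then appear as a larger bias when this measure is computed over trial pairs sharing the same color or serial position than over incongruent pairs.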
Multisensory integration strongly depends on the temporal proximity between two inputs. In the audio-visual domain, stimulus pairs with delays up to a few hundred milliseconds can be perceived as simultaneous and integrated into a unified percept. Previous research has shown that the size of this temporal window of integration can be narrowed by feedback-guided training on an audio-visual simultaneity judgment task. Yet, it has remained uncertain how the neural network that processes audio-visual asynchronies is affected by the training. In the present study, participants were trained on a 2-interval forced choice audio-visual simultaneity judgment task. We recorded their neural activity with magnetoencephalography in response to three different stimulus onset asynchronies (0 ms, each participant’s individual binding window, 300 ms) before and one day after training. The Individual Window stimulus onset asynchrony condition was derived by assessing each participant’s point of subjective simultaneity. Training improved performance in both asynchronous stimulus onset conditions (300 ms, Individual Window). Furthermore, beta-band amplitude (12–30 Hz) increased from the pre- to the post-training session. This increase moved across central, parietal, and temporal sensors during the time window of 80–410 ms post-stimulus onset. Considering the putative role of beta oscillations in carrying feedback from higher to lower cortical areas, these findings suggest that enhanced top-down modulation of sensory processing is responsible for the improved temporal acuity after training. As beta oscillations can be assumed to also preferentially support neural communication over longer conduction delays, the widespread topography of our effect could indicate that training modulates not only processing within primary sensory cortex, but rather the communication within a large-scale network.
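The reported beta-band effect corresponds, in outline, to averaging spectral amplitude across the 12–30 Hz band per sensor and comparing sessions. A crude DFT-based sketch of that band measure; this is not the study's MEG pipeline, and all names and signal values are illustrative:

```python
import math

def beta_amplitude(signal, fs, f_lo=12.0, f_hi=30.0):
    """Mean DFT amplitude of an evenly sampled series within 12-30 Hz.

    fs is the sampling rate in Hz; frequency bins outside the band are
    ignored. A real pipeline would use windowing or wavelets instead.
    """
    n = len(signal)
    mean = sum(signal) / n
    amps = []
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum((s - mean) * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum((s - mean) * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            amps.append(2.0 * math.hypot(re, im) / n)
    return sum(amps) / len(amps) if amps else 0.0

# A 20 Hz oscillation (inside the beta band) yields a larger band
# amplitude than a 5 Hz oscillation (outside it).
fs, n = 200.0, 200
beta_sig = [math.sin(2 * math.pi * 20.0 * i / fs) for i in range(n)]
slow_sig = [math.sin(2 * math.pi * 5.0 * i / fs) for i in range(n)]
```

Comparing this quantity per sensor between the pre- and post-training recordings, over successive post-stimulus time windows, would yield the kind of pre-to-post amplitude increase and its sensor topography described above.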