Electroencephalography (EEG) has been used for decades to identify neurocognitive processes related to intelligence. Evidence is accumulating for associations with neural markers of higher-order cognitive processes (e.g., working memory); however, whether these associations are specific to complex processes or extend to earlier processing stages remains unclear. Addressing these issues has implications for improving our understanding of intelligence and its neural correlates. The mismatch negativity (MMN) is an event-related brain potential (ERP) that is elicited when, within a series of frequent standard stimuli, rare deviant stimuli are presented. As stimuli are typically presented outside the focus of attention, the MMN is thought to capture automatic pre-attentive discrimination processes. However, the MMN and its relation to intelligence have been studied almost exclusively in the auditory domain, preventing conclusions about the involvement of automatic discrimination processes in humans’ dominant sensory modality, vision. Electroencephalography was recorded from 50 healthy participants during a passive visual oddball task that presented simple sequence violations as well as deviations within a more complex hidden pattern. Signed area amplitudes and fractional area latencies of the visual mismatch negativity (vMMN) were calculated with and without Laplacian transformation. Correlations between vMMN and intelligence (Raven’s Advanced Progressive Matrices) were of negligible to small effect sizes, differed critically between measurement approaches, and Bayes factors provided anecdotal to substantial evidence for the absence of an association. We discuss differences between the auditory and visual MMN, the implications of different measurement approaches, and offer recommendations for further research in this evolving field.
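The two vMMN scoring approaches named above, signed area amplitude and fractional area latency, can be illustrated with a short sketch (Python/NumPy; the function names, measurement window, and polarity convention are our illustrative assumptions, not the study's actual code):

```python
import numpy as np

def signed_area_amplitude(diff_wave, times, t_start, t_end, sign=-1):
    """Signed area amplitude: within the measurement window, sum only the
    samples whose sign matches the expected component polarity
    (negative for the vMMN) and scale by the sampling step."""
    mask = (times >= t_start) & (times <= t_end)
    seg = diff_wave[mask]
    dt = np.mean(np.diff(times[mask]))            # sampling step in seconds
    signed = np.where(np.sign(seg) == sign, seg, 0.0)
    return np.sum(signed) * dt                    # area in µV·s

def fractional_area_latency(diff_wave, times, t_start, t_end,
                            frac=0.5, sign=-1):
    """Fractional area latency: the time point at which the cumulative
    signed area reaches a fraction (e.g., 50%) of the total signed area
    in the window."""
    mask = (times >= t_start) & (times <= t_end)
    seg = np.where(np.sign(diff_wave[mask]) == sign, diff_wave[mask], 0.0)
    cum = np.cumsum(np.abs(seg))
    if cum[-1] == 0:
        return np.nan                             # no activity of the expected polarity
    idx = np.searchsorted(cum, frac * cum[-1])
    return times[mask][idx]
```

Unlike peak-based measures, these area-based scores integrate over the whole window, which is one reason different measurement approaches can yield critically different correlations, as the abstract reports.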
Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from “accidental” viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string side of a guitar), this effect is reduced by meaningful scene context information. In the present study we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and to establish accidental and canonical viewpoints empirically. In Experiment 1, we presented 3D models of objects from six viewpoints (0°, 60°, 120°, 180°, 240°, 300°) in colour (Experiment 1a) and in grayscale (Experiment 1b) in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on performance in Experiments 1a and 1b, we determined canonical (0° rotation) and non-canonical (120° rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task; however, the objects were now paired with scene backgrounds that were either consistent (e.g., a cup in a kitchen) or inconsistent (e.g., a guitar in a bathroom) with the object. Viewpoint interacted significantly with scene consistency: object recognition was less affected by viewpoint when consistent scene information was provided than when the information was inconsistent. Our results show that viewpoint dependence and scene context effects generalise to depth-rotated 3D objects. This supports the important role object-scene processing plays in object constancy.
Objects that are congruent with a scene are recognised more efficiently than objects that are incongruent. Further, semantic integration of incongruent objects elicits a stronger N300/N400 EEG component. Yet, the time course and mechanisms of how contextual information supports access to semantic object information are unclear. We used computational modelling and EEG to test how context influences semantic object processing. Using representational similarity analysis, we established that EEG patterns dissociated between objects in congruent and incongruent scenes from around 300 ms. By modelling semantic processing of objects using independently normed properties, we confirmed that the onset of semantic processing is similar for congruent and incongruent objects (∼150 ms). Critically, after ∼275 ms, we found a difference in the duration of semantic integration, which lasted longer for incongruent than for congruent objects. These results constrain our understanding of how contextual information supports access to semantic object information.
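The representational similarity analysis mentioned above can be sketched roughly as follows: at each time point, a neural representational dissimilarity matrix (RDM) over conditions is compared with a model RDM. This is a minimal NumPy illustration; the function name and the use of Pearson rather than the more common rank correlation are our simplifications, not the study's pipeline:

```python
import numpy as np

def rsa_timecourse(eeg, model_rdm):
    """eeg: (conditions, channels, time) array of condition-average patterns.
    For each time point, build a neural RDM (correlation distance between
    condition patterns across channels) and correlate its upper triangle
    with the model RDM."""
    n_cond = eeg.shape[0]
    iu = np.triu_indices(n_cond, k=1)
    model_vec = model_rdm[iu]
    rho = np.empty(eeg.shape[2])
    for t in range(eeg.shape[2]):
        neural_rdm = 1.0 - np.corrcoef(eeg[:, :, t])  # correlation distance
        rho[t] = np.corrcoef(neural_rdm[iu], model_vec)[0, 1]
    return rho
```

A time course of model fit like this is what lets one say that EEG patterns dissociate between congruent and incongruent scenes "from around 300 ms": the model correlation rises above chance from that latency onward.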
Spontaneous brain activity builds the foundation for human cognitive processing during external demands. Neuroimaging studies based on functional magnetic resonance imaging (fMRI) have identified specific characteristics of spontaneous (intrinsic) brain dynamics that are associated with individual differences in general cognitive ability, i.e., intelligence. However, fMRI research is inherently limited by low temporal resolution, preventing conclusions about neural fluctuations within the range of milliseconds. Here, we used resting-state electroencephalographic (EEG) recordings from 144 healthy adults to test whether individual differences in intelligence (Raven’s Advanced Progressive Matrices scores) can be predicted from the complexity of temporally highly resolved intrinsic brain signals. We compared different operationalizations of brain signal complexity (multiscale entropy, Shannon entropy, fuzzy entropy, and specific characteristics of microstates) regarding their relation to intelligence. The results indicate that associations between brain signal complexity measures and intelligence are of small effect sizes (r ~ .20) and vary across different spatial and temporal scales. Specifically, higher intelligence scores were associated with lower complexity in local aspects of neural processing and with less activity in task-negative brain regions belonging to the default mode network. Finally, we combined multiple measures of brain signal complexity to show that individual intelligence scores can be significantly predicted with a multimodal model within the sample (10-fold cross-validation) as well as in an independent sample (external replication, N = 57). In sum, our results highlight the temporal and spatial dependency of associations between intelligence and intrinsic brain dynamics and propose multimodal approaches as a promising means for future neuroscientific research on complex human traits.
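Multiscale entropy, one of the complexity measures listed above, coarse-grains the signal at increasing temporal scales and computes sample entropy on each coarse-grained series. A rough NumPy sketch follows; note that, unlike conventions that fix the tolerance r from the original (scale-1) series, this simplified version recomputes r from each coarse-grained series:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: negative log of the conditional probability that
    template sequences matching for m points (within tolerance r, Chebyshev
    distance) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist < r)
        return count
    a, b = count_matches(m + 1), count_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, scales=range(1, 6), m=2, r_factor=0.2):
    """Coarse-grain by non-overlapping averaging at each scale, then compute
    sample entropy on each coarse-grained series."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s], dtype=float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r_factor))
    return out
```

Scale-dependence is built into the measure, which matches the abstract's finding that complexity–intelligence associations vary across temporal scales.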
Significance Statement Spontaneous brain activity builds the foundation for intelligent processing: the ability of humans to adapt to various cognitive demands. Using resting-state EEG, we extracted multiple aspects of temporally highly resolved intrinsic brain dynamics to investigate their relationship with individual differences in intelligence. Single associations were of small effect sizes and varied critically across spatial and temporal scales. However, combining multiple measures in a multimodal cross-validated prediction model allowed us to significantly predict individual intelligence scores in unseen participants. Our study adds to a growing body of research suggesting that observable associations between complex human traits and neural parameters might be rather small, and it proposes multimodal prediction approaches as a promising tool for deriving robust brain-behavior relations despite limited sample sizes.
How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (< 3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast, and against previous expectations, individuals’ network-defining time frames of particularly high cofluctuation do not significantly predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from a few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, such as network-defining high-cofluctuation states, but is rather reflected across the entire length of the brain connectivity time series.
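Cofluctuation as used above is commonly computed from edge time series: the element-wise product of z-scored regional signals for every pair of regions, whose root-sum-of-squares per time frame indexes how high or low a cofluctuation state is. A minimal sketch (NumPy; the function names are ours, and this does not reproduce the CMEP framework itself):

```python
import numpy as np

def cofluctuation_amplitude(ts):
    """ts: (time, regions) array of regional fMRI signals.
    Returns the per-frame cofluctuation amplitude and the edge time series
    (element-wise products of z-scored signals for every region pair)."""
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    i, j = np.triu_indices(ts.shape[1], k=1)
    edges = z[:, i] * z[:, j]                     # shape: (time, n_edges)
    amplitude = np.sqrt(np.sum(edges ** 2, axis=1))
    return amplitude, edges

def fc_from_frames(edges, frame_idx):
    """Functional connectivity estimated from a subset of time frames by
    averaging the edge time series over the selected frames."""
    return edges[frame_idx].mean(axis=0)
```

Averaging the edge time series over all frames recovers the ordinary Pearson functional connectivity matrix, which is why connectivity can also be estimated from small subsets of frames, such as the 20 temporally separated frames used for prediction in the abstract.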