Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from “accidental” viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string side of a guitar), this effect is reduced by meaningful scene context information. In the present study we investigated whether these findings, established using photographic images, generalise to 3D models of objects. Using 3D models further allowed us to probe a broad range of viewpoints and to establish accidental and canonical viewpoints empirically. In Experiment 1, we presented 3D models of objects from six viewpoints (0°, 60°, 120°, 180°, 240°, 300°), in colour (1a) and in grayscale (1b), in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on performance in Experiments 1a and 1b, we determined canonical (0° rotation) and non-canonical (120° rotation) viewpoints for the stimuli. In Experiment 2, participants again performed a sequential matching task, but now the objects were paired with scene backgrounds that were either consistent (e.g., a cup in a kitchen) or inconsistent (e.g., a guitar in a bathroom) with the object. Viewpoint interacted significantly with scene consistency: object recognition was less affected by viewpoint when consistent rather than inconsistent scene information was provided. Our results show that viewpoint-dependence and scene context effects generalise to depth-rotated 3D objects. This supports the important role that object-scene processing plays in object constancy.
Electroencephalography (EEG) has been used for decades to identify neurocognitive processes related to intelligence. Evidence is accumulating for associations with neural markers of higher-order cognitive processes (e.g., working memory); however, whether these associations are specific to complex processes or also extend to earlier processing stages remains unclear. Addressing these issues has implications for improving our understanding of intelligence and its neural correlates. The mismatch negativity (MMN) is an event-related brain potential (ERP) that is elicited when rare deviant stimuli are presented within a series of frequent standard stimuli. As stimuli are typically presented outside the focus of attention, the MMN is thought to capture automatic pre-attentive discrimination processes. However, the MMN and its relation to intelligence have largely been studied only in the auditory domain, preventing conclusions about the involvement of automatic discrimination processes in humans’ dominant sensory modality, vision. Electroencephalography was recorded from 50 healthy participants during a passive visual oddball task that presented simple sequence violations as well as deviations within a more complex hidden pattern. Signed area amplitudes and fractional area latencies of the visual mismatch negativity (vMMN) were calculated with and without Laplacian transformation. Correlations between vMMN and intelligence (Raven’s Advanced Progressive Matrices) were of negligible to small effect sizes, differed critically between measurement approaches, and Bayes factors provided anecdotal to substantial evidence for the absence of an association. We discuss differences between the auditory and visual MMN, the implications of different measurement approaches, and offer recommendations for further research in this evolving field.
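The signed-area amplitude and fractional-area latency measures mentioned in the abstract above can be sketched in a few lines of NumPy. This is a minimal illustration of the general scoring approach only; the function names, the measurement window, and the assumption of uniformly sampled data are ours, not the authors' pipeline.

```python
import numpy as np

def signed_area_amplitude(diff_wave, times, t_min, t_max):
    """Area of the negative-going part of a difference wave
    (deviant minus standard) inside a measurement window.
    Assumes uniformly sampled time points."""
    mask = (times >= t_min) & (times <= t_max)
    seg = diff_wave[mask]
    dt = times[1] - times[0]
    return np.sum(seg[seg < 0]) * dt  # negative by construction

def fractional_area_latency(diff_wave, times, t_min, t_max, frac=0.5):
    """Time at which the cumulative negative area reaches a given
    fraction (default 50%) of the total negative area in the window."""
    mask = (times >= t_min) & (times <= t_max)
    seg = np.clip(diff_wave[mask], None, 0.0)  # keep negative part only
    cum = np.cumsum(-seg)                      # monotonically increasing
    idx = np.searchsorted(cum, frac * cum[-1])
    return times[mask][idx]
```

Unlike a simple peak measure, both quantities integrate over the whole window, which is one reason different measurement approaches can yield diverging correlations.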
Objects that are congruent with a scene are recognised more efficiently than objects that are incongruent. Further, semantic integration of incongruent objects elicits a stronger N300/N400 EEG component. Yet, the time course and mechanisms by which contextual information supports access to semantic object information are unclear. We used computational modelling and EEG to test how context influences semantic object processing. Using representational similarity analysis, we established that EEG patterns dissociated between objects in congruent and incongruent scenes from around 300 ms. By modelling semantic processing of objects using independently normed properties, we confirm that the onset of semantic processing is similar for congruent and incongruent objects (∼150 ms). Critically, after ∼275 ms, we find a difference in the duration of semantic integration, which lasts longer for incongruent than for congruent objects. These results constrain our understanding of how contextual information supports access to semantic object information.
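Representational similarity analysis of the kind described above can be illustrated with a small NumPy sketch: build a representational dissimilarity matrix (RDM) from condition-wise EEG patterns at a given time point and rank-correlate its upper triangle with a model RDM. The function names and the tie-ignoring rank computation are illustrative assumptions, not the study's code.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between condition patterns (rows: conditions, columns: features)."""
    return 1.0 - np.corrcoef(patterns)

def upper(m):
    """Vectorise the upper triangle of an RDM (diagonal excluded)."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(a, b):
    """Spearman correlation as Pearson correlation of ranks
    (ignores ties, acceptable for continuous dissimilarities)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]
```

Computing a neural RDM per time point and comparing it against a model RDM built from normed semantic properties yields the kind of time-resolved similarity trace the abstract refers to.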
Spontaneous brain activity builds the foundation for human cognitive processing during external demands. Neuroimaging studies based on functional magnetic resonance imaging (fMRI) identified specific characteristics of spontaneous (intrinsic) brain dynamics to be associated with individual differences in general cognitive ability, i.e., intelligence. However, fMRI research is inherently limited by low temporal resolution, thus preventing conclusions about neural fluctuations within the range of milliseconds. Here, we used resting-state electroencephalographic (EEG) recordings from 144 healthy adults to test whether individual differences in intelligence (Raven’s Advanced Progressive Matrices scores) can be predicted from the complexity of temporally highly resolved intrinsic brain signals. We compared different operationalizations of brain signal complexity (multiscale entropy, Shannon entropy, Fuzzy entropy, and specific characteristics of microstates) regarding their relation to intelligence. The results indicate that associations between brain signal complexity measures and intelligence are of small effect sizes (r ~ .20) and vary across different spatial and temporal scales. Specifically, higher intelligence scores were associated with lower complexity in local aspects of neural processing, and less activity in task-negative brain regions belonging to the default mode network. Finally, we combined multiple measures of brain signal complexity to show that individual intelligence scores can be significantly predicted with a multimodal model within the sample (10-fold cross-validation) as well as in an independent sample (external replication, N = 57). In sum, our results highlight the temporal and spatial dependency of associations between intelligence and intrinsic brain dynamics, proposing multimodal approaches as promising means for future neuroscientific research on complex human traits.
Significance Statement Spontaneous brain activity builds the foundation for intelligent processing - the ability of humans to adapt to various cognitive demands. Using resting-state EEG, we extracted multiple aspects of temporally highly resolved intrinsic brain dynamics to investigate their relationship with individual differences in intelligence. Single associations were of small effect sizes and varied critically across spatial and temporal scales. However, combining multiple measures in a multimodal cross-validated prediction model allowed us to significantly predict individual intelligence scores in unseen participants. Our study adds to a growing body of research suggesting that observable associations between complex human traits and neural parameters might be rather small, and proposes multimodal prediction approaches as a promising tool for deriving robust brain-behavior relations despite limited sample sizes.
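Two of the complexity measures named in the abstract above, Shannon entropy and the coarse-graining step underlying multiscale entropy, can be sketched as follows. This is a minimal NumPy illustration with an arbitrary bin count and arbitrary scales, not the authors' implementation.

```python
import numpy as np

def shannon_entropy(signal, n_bins=32):
    """Shannon entropy (bits) of the signal's amplitude distribution;
    the bin count is an arbitrary analysis choice."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def coarse_grain(signal, scale):
    """Average non-overlapping windows of length `scale` -- the
    coarse-graining step used in multiscale entropy."""
    n = len(signal) // scale
    return signal[:n * scale].reshape(n, scale).mean(axis=1)

def entropy_profile(signal, scales=(1, 2, 4, 8)):
    """Entropy of the coarse-grained signal at several time scales,
    giving a scale-resolved complexity profile."""
    return [shannon_entropy(coarse_grain(signal, s)) for s in scales]
```

Because the profile is computed per scale (and, in practice, per electrode), associations with intelligence can differ across temporal and spatial scales, as the abstract reports.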
How much data do we need? Lower bounds of brain activation states to predict human cognitive ability
(2022)
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Despite their low frequency of occurrence, states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture (derived from resting-state fMRI) and to be highly subject-specific. However, it is currently unclear whether such network-defining states of high cofluctuation also contribute to individual variations in cognitive abilities – which strongly rely on the interactions among distributed brain regions. By introducing CMEP, an eigenvector-based prediction framework, we show that functional connectivity estimates from as few as 20 temporally separated time frames (< 3% of a 10 min resting-state fMRI scan) are significantly predictive of individual differences in intelligence (N = 281, p < .001). In contrast, and against previous expectations, individuals’ network-defining time frames of particularly high cofluctuation do not achieve significant prediction of intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from few time frames of highest brain connectivity, temporally distributed information is necessary to extract information about cognitive abilities from functional connectivity time series. This information, however, is not restricted to specific connectivity states, like network-defining high-cofluctuation states, but rather is reflected across the entire length of the brain connectivity time series.
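The frame-wise cofluctuation amplitude that defines such high- and low-cofluctuation states is commonly computed from the edge time series, i.e., the products of z-scored regional signals. A minimal NumPy sketch, assuming a time-by-regions BOLD matrix; this illustrates the general construction, not CMEP itself.

```python
import numpy as np

def cofluctuation_amplitude(ts):
    """ts: time x regions BOLD matrix. Z-score each region, form the
    edge time series (pairwise products per frame), and return the
    root-sum-square cofluctuation amplitude of every frame."""
    z = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    i, j = np.triu_indices(ts.shape[1], k=1)
    edge_ts = z[:, i] * z[:, j]              # time x edges
    return np.sqrt((edge_ts ** 2).sum(axis=1))

def fc_from_frames(ts, frame_idx):
    """Functional connectivity estimated from a subset of frames,
    e.g., the top (or bottom) frames by cofluctuation amplitude."""
    return np.corrcoef(ts[frame_idx].T)
```

Selecting frames by amplitude and recomputing connectivity from only those frames is what makes it possible to compare predictions based on high-cofluctuation states against temporally distributed frames.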
From age 5 to 7, there are remarkable improvements in children’s cognitive abilities (“5–7 shift”). In many countries, including Germany, formal schooling begins in this age range. It is, thus, unclear to what extent exposure to formal schooling contributes to the “5–7 shift.” In this longitudinal study, we investigated whether schooling acts as a catalyst of maturation. We tested 5-year-old children who were born close to the official cutoff date for school entry and who were still attending a play-oriented kindergarten. One year later, the children were tested again. Some of the children had experienced their first year of schooling whereas the others had remained in kindergarten. Using 2 functional magnetic resonance imaging tasks that assessed episodic memory formation (i.e., subsequent memory effect), we found that children relied strongly on the medial temporal lobe (MTL) at both time points but not on the prefrontal cortex (PFC). In contrast, older children and adults typically show subsequent memory effects in both MTL and PFC. Both groups of children improved in memory performance, but there were neither longitudinal changes nor group differences in neural activation. We conclude that successful memory formation in this age group relies more heavily on the MTL than in older age groups.
To a crucial extent, the efficiency of reading results from the fact that visual word recognition is faster in predictive contexts. Predictive coding models suggest that this facilitation results from pre-activation of predictable stimulus features across multiple representational levels before stimulus onset. Still, it is not sufficiently understood which aspects of the rich set of linguistic representations that are activated during reading – visual, orthographic, phonological, and/or lexical-semantic – contribute to context-dependent facilitation. To investigate in detail which linguistic representations are pre-activated in a predictive context and how they affect subsequent stimulus processing, we combined a well-controlled repetition priming paradigm, including words and pseudowords (i.e., pronounceable nonwords), with behavioral and magnetoencephalography measurements. For statistical analysis, we used linear mixed modeling, which we found to have higher statistical power than conventional multivariate pattern decoding analysis. Behavioral data from 49 participants indicate that word predictability (i.e., context present vs. absent) facilitated orthographic and lexical-semantic, but not visual or phonological processes. Magnetoencephalography data from 38 participants show sustained activation of orthographic and lexical-semantic representations in the interval before processing the predicted stimulus, suggesting selective pre-activation at multiple levels of linguistic representation as proposed by predictive coding. However, we found more robust lexical-semantic representations when processing predictable in contrast to unpredictable letter strings, and pre-activation effects mainly resembled brain responses elicited when processing the expected letter string. This finding suggests that pre-activation did not result in ‘explaining away’ predictable stimulus features, but rather in a ‘sharpening’ of brain responses involved in word processing.
Probing the association between resting state brain network dynamics and psychological resilience
(2021)
Abstract
This study aimed to replicate a previously reported negative correlation between node flexibility and psychological resilience, i.e., the ability to retain mental health in the face of stress and adversity. To this end, we used multiband resting-state BOLD fMRI (TR = .675 sec) from 52 participants who had filled out three psychological questionnaires assessing resilience. Time-resolved functional connectivity was calculated by performing a sliding window approach on averaged time series parcellated according to different established atlases. Multilayer modularity detection was performed to track network reconfigurations over time, and node flexibility was calculated as the number of times a node changes community assignment. In addition, node promiscuity (the fraction of communities a node participates in) and node degree (as proxy for time-varying connectivity) were calculated to extend previous work. We found no substantial correlations between resilience and node flexibility. We observed a small number of correlations between the two other brain measures and resilience scores that were, however, distributed very inconsistently across brain measures, temporal sampling choices, and parcellation schemes. This heterogeneity calls into question the existence of previously postulated associations between resilience and brain network flexibility and highlights how results may be influenced by specific analysis choices.
Author Summary We tested the replicability and generalizability of a previously proposed negative association between dynamic brain network reconfigurations derived from multilayer modularity detection (node flexibility) and psychological resilience. Using multiband resting-state BOLD fMRI data and exploring several parcellation schemes, sliding window approaches, and temporal resolutions of the data, we could not replicate previously reported findings regarding the association between node flexibility and resilience. By extending this work to other measures of brain dynamics (node promiscuity, node degree), we observed a rather inconsistent pattern of correlations with resilience that varies strongly across analysis choices. We conclude that further research is needed to understand the network neuroscience basis of mental health and discuss several reasons that may account for the variability in results.
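Given community labels from multilayer modularity detection, the node flexibility and node promiscuity measures described above reduce to simple counts. A minimal NumPy sketch under the assumption of a layers-by-nodes label matrix; the function names are ours, not the study's code.

```python
import numpy as np

def node_flexibility(assignments):
    """assignments: layers x nodes matrix of community labels.
    Flexibility of a node = fraction of consecutive layer
    transitions in which its community label changes."""
    changes = assignments[1:] != assignments[:-1]
    return changes.mean(axis=0)

def node_promiscuity(assignments):
    """Fraction of all detected communities that each node
    participates in at least once across layers."""
    n_comms = len(np.unique(assignments))
    return np.array([len(np.unique(assignments[:, k])) / n_comms
                     for k in range(assignments.shape[1])])
```

Because both measures are computed downstream of windowing, parcellation, and modularity parameters, each of those analysis choices can shift the resulting correlations with resilience.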
Many cross-sectional findings suggest that volumes of specific hippocampal subfields increase in middle childhood and early adolescence. In contrast, a small number of available longitudinal studies observed decreased volumes in most subfields over this age range. Further, it remains unknown whether structural changes in development are associated with corresponding gains in children’s memory. Here we report cross-sectional age differences in children’s hippocampal subfield volumes together with longitudinal developmental trajectories and their relationships with memory performance. In two waves, 109 healthy participants aged 6 to 10 years (wave 1: mean age = 7.25 years; wave 2: mean age = 9.27 years) underwent high-resolution magnetic resonance imaging to assess hippocampal subfield volumes, and completed cognitive tasks assessing hippocampus-dependent memory processes. We found that cross-sectional age associations and longitudinal developmental trends in hippocampal subfield volumes were highly discrepant, both by subfield and in direction. Further, volumetric changes were largely unrelated to changes in memory, with the exception that increases in subiculum volume were associated with gains in spatial memory. Importantly, the observed longitudinal patterns of brain-cognition coupling could not be inferred from cross-sectional findings. We discuss potential sources of these discrepancies. This study underscores that children’s structural brain development and its relationship to cognition cannot be inferred from cross-sectional age comparisons.
Highlights
The subiculum undergoes volumetric increase between 6 and 10 years of age
Change across two years in CA1-2 and DG-CA3 was not observed in this age window
Change across two years did not reflect age differences spanning two years
Cross-sectional and longitudinal slopes in stark contrast for hippocampal subfields
Longitudinal brain-cognition coupling cannot be inferred from cross-sectional data
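The contrast between cross-sectional and longitudinal slopes highlighted above can be made concrete with a toy sketch: a between-child regression of volume on age can even have the opposite sign to the mean within-child change. Illustrative NumPy code with hypothetical values, not the study's analysis.

```python
import numpy as np

def cross_sectional_slope(age, volume):
    """Slope of volume on age across different children at one wave
    (ordinary least-squares fit)."""
    return np.polyfit(age, volume, 1)[0]

def longitudinal_slope(vol_wave1, vol_wave2, interval_years):
    """Mean within-child volume change per year between two waves."""
    return np.mean((vol_wave2 - vol_wave1) / interval_years)
```

If older children happen to have larger volumes at wave 1 (positive cross-sectional slope) while each child's volume shrinks between waves (negative longitudinal slope), the two estimates diverge in sign, which is why longitudinal coupling cannot be read off cross-sectional data.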
Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum between about 4.3 and 5.5 Hz, reflecting the production and processing of linguistic information chunks (syllables, words) every ∼200 ms. Interestingly, ∼200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we demonstrate that German readers sample written text at ∼5 Hz. A subsequent meta-analysis with 142 studies from 14 languages replicates this result, but also shows that sampling frequencies vary across languages between 3.9 Hz and 5.2 Hz, and that this variation systematically depends on the complexity of the writing systems (character-based vs. alphabetic systems, orthographic transparency). Finally, we demonstrate empirically a positive correlation between speech spectrum and eye-movement sampling in low-skilled readers. Based on this convergent evidence, we propose that during reading, our brain’s linguistic processing systems imprint a preferred processing rate, i.e., the rate of spoken language production and perception, onto the oculomotor system.
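The oculomotor sampling rate discussed above follows directly from the mean fixation duration (f = 1/duration). A one-line sketch with hypothetical durations; e.g., a mean fixation duration of 200 ms corresponds to a 5 Hz sampling rate.

```python
import numpy as np

def sampling_frequency(fixation_durations_ms):
    """Oculomotor sampling rate (Hz) implied by a reader's mean
    fixation duration: f = 1000 / mean duration in milliseconds."""
    return 1000.0 / np.mean(fixation_durations_ms)
```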