Highlights
• Brain connectivity states identified by cofluctuation strength.
• CMEP as a new method to robustly predict human traits from brain imaging data.
• Network-identifying connectivity ‘events’ are not predictive of cognitive ability.
• Sixteen temporally independent fMRI time frames allow for significant prediction.
• Neuroimaging-based assessment of cognitive ability requires sufficient scan lengths.
Abstract
Human functional brain connectivity can be temporally decomposed into states of high and low cofluctuation, defined as coactivation of brain regions over time. Rare states of particularly high cofluctuation have been shown to reflect fundamentals of intrinsic functional network architecture and to be highly subject-specific. However, it is unclear whether such network-defining states also contribute to individual variations in cognitive abilities, which strongly rely on interactions among distributed brain regions. By introducing CMEP, a new eigenvector-based prediction framework, we show that as few as 16 temporally separated time frames (< 1.5% of 10 min of resting-state fMRI) can significantly predict individual differences in intelligence (N = 263, p < .001). Against previous expectations, individuals' network-defining time frames of particularly high cofluctuation do not predict intelligence. Multiple functional brain networks contribute to the prediction, and all results replicate in an independent sample (N = 831). Our results suggest that although fundamentals of person-specific functional connectomes can be derived from a few time frames of highest connectivity, temporally distributed information is necessary to extract information about cognitive abilities. This information is not restricted to specific connectivity states, such as network-defining high-cofluctuation states, but is instead reflected across the entire length of the brain connectivity time series.
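The "cofluctuation" referred to above is commonly operationalized as an edge time series: the element-wise product of z-scored regional signals, with frames ranked by their root-sum-square amplitude. The following is a minimal sketch on synthetic data of how high-cofluctuation frames can be identified and used to estimate connectivity; the array sizes are illustrative, and this is not the CMEP framework itself:

```python
import numpy as np

rng = np.random.default_rng(0)
ts = rng.standard_normal((600, 100))  # 600 time frames x 100 brain regions (synthetic)

# z-score each region's time series
z = (ts - ts.mean(axis=0)) / ts.std(axis=0)

# edge time series: cofluctuation of every region pair at every frame
i, j = np.triu_indices(z.shape[1], k=1)
ets = z[:, i] * z[:, j]               # shape (frames, edges)

# rank frames by overall cofluctuation amplitude (root sum of squares)
rss = np.sqrt((ets ** 2).sum(axis=1))
top16 = np.argsort(rss)[-16:]         # the 16 highest-cofluctuation frames

# a connectivity estimate computed from only those frames
fc_top = np.corrcoef(z[top16].T)      # (100, 100) region-by-region matrix
```

Sorting by `rss` and keeping the tail selects the "events" of the abstract; replacing `top16` with any other 16 temporally separated frame indices gives the sparse-sampling comparison the study describes.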
Metacognition plays a pivotal role in human development. The ability to realize that we do not know something, or meta-ignorance, emerges at approximately five years of age. We searched for the brain systems that underlie the developmental emergence of this ability in a preschool sample.
Twenty-four children aged between five and six years answered questions under three conditions. In the critical partial knowledge condition, an experimenter first showed two toys to a child, then announced that she would place one of them in a box, out of the child's sight. The experimenter then asked the child whether they knew which toy was in the box.
Children who gave consistently correct answers to this question (n = 9) showed greater cortical thickness in a cluster within left medial orbitofrontal cortex than children who did not (n = 15). Further, seed-based functional connectivity analyses of resting-state data revealed that this region is functionally connected to the medial orbitofrontal gyrus, posterior cingulate gyrus and precuneus, and mid- and inferior temporal gyri.
This finding suggests that the default mode network, critically through its prefrontal regions, supports the introspective processing that gives rise to metacognitive monitoring, allowing children to explicitly report their own ignorance.
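The seed-based resting-state analysis described above amounts to correlating the seed region's mean time series with every other voxel. A minimal sketch on synthetic data (the array sizes, seed mask, and variable names are illustrative, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((240, 5000))   # 240 volumes x 5000 voxels (synthetic resting state)
seed = data[:, :50].mean(axis=1)          # mean time series of a hypothetical seed mask

# correlate the seed with every voxel (Pearson r via z-scored dot product)
dz = (data - data.mean(axis=0)) / data.std(axis=0)
sz = (seed - seed.mean()) / seed.std()
fc_map = dz.T @ sz / len(seed)            # one correlation value per voxel

# Fisher z-transform, the usual step before group-level statistics
fc_z = np.arctanh(np.clip(fc_map, -0.999999, 0.999999))
```

Voxels whose `fc_z` survives a group-level threshold form the "functionally connected" regions reported in such analyses.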
Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants in two fMRI datasets (event-related and block-design), acquired while participants viewed emotionally valent faces and performed a gender discrimination task and an emotional task, respectively.
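The sparsity-inducing core of the approach above, an L1-regularized linear SVM, can be sketched with scikit-learn. The synthetic "edge" features, group sizes, and hyperparameters below are illustrative; the full framework (Gaussian graphical model features, the stability optimization) goes beyond this sketch:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
# synthetic "connectivity" features: 60 subjects x 300 edge features,
# with a handful of genuinely discriminative edges
X = rng.standard_normal((60, 300))
y = np.repeat([0, 1], 30)                 # e.g. controls vs. patients (illustrative)
X[y == 1, :5] += 1.5                      # group difference confined to 5 edges

# L1 penalty drives most edge weights to exactly zero -> a sparse,
# interpretable connectivity pattern
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0])   # the surviving edges
```

Because only the edges in `selected` carry nonzero weight, the discriminative pattern can be read directly as a small set of brain connections rather than a dense whole-brain weight map.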
Inter-areal coherence has been hypothesized as a mechanism for inter-areal communication. Indeed, empirical studies have observed an increase in inter-areal coherence with attention. Yet, the mechanisms underlying changes in coherence remain largely unknown. Both attention and stimulus salience are associated with shifts in the peak frequency of gamma oscillations in V1, which suggests that the frequency of oscillations may play a role in facilitating changes in inter-areal communication and coherence. In this study, we used computational modeling to investigate how the peak frequency of a sender influences inter-areal coherence. We show that changes in the magnitude of coherence are largely determined by the peak frequency of the sender. However, the pattern of coherence depends on the intrinsic properties of the receiver, specifically whether the receiver integrates or resonates with its synaptic inputs. Because resonant receivers are frequency-selective, resonance has been proposed as a mechanism for selective communication. However, the pattern of coherence changes produced by a resonant receiver is inconsistent with empirical studies. By contrast, an integrator receiver does produce the pattern of coherence with frequency shifts in the sender observed in empirical studies. These results indicate that coherence can be a misleading measure of inter-areal interactions. This led us to develop a new measure of inter-areal interactions, which we refer to as Explained Power. We show that Explained Power maps directly to the signal transmitted by the sender, as filtered by the receiver, and thus provides a method to quantify the true signals transmitted between sender and receiver. Together, these findings provide a model of changes in inter-areal coherence and Granger causality as a result of frequency shifts.
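The coherence measure discussed above quantifies frequency-resolved coupling between a sender and a receiver signal. A minimal sketch with `scipy.signal.coherence` on synthetic signals shows the basic computation (the 60 Hz rhythm, delay, and gain are illustrative; this does not implement the study's model or its Explained Power measure):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(3)
fs = 1000                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# "sender": a gamma-band (60 Hz) oscillation plus noise
sender = np.sin(2 * np.pi * 60 * t) + rng.standard_normal(t.size)

# "receiver": delayed, attenuated copy of the sender plus local noise
delay = 5                                  # samples (a 5 ms conduction delay, illustrative)
receiver = 0.5 * np.roll(sender, delay) + rng.standard_normal(t.size)

# magnitude-squared coherence as a function of frequency
f, coh = coherence(sender, receiver, fs=fs, nperseg=1024)
peak_f = f[np.argmax(coh)]                 # coherence peaks near the sender's rhythm
```

Shifting the sender's oscillation frequency and re-running this computation reproduces, in miniature, the kind of frequency-dependent coherence changes the study models.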
Congenitally blind individuals have been shown to activate the visual cortex during non-visual tasks. The neuronal mechanisms of such cross-modal activation are not fully understood. Here, we used an auditory working memory training paradigm in congenitally blind and sighted adults. We hypothesized that the visual cortex becomes integrated into auditory working memory networks after these networks have been challenged by training. We investigated the spectral profile of the functional networks that mediate cross-modal reorganization following visual deprivation. A training-induced integration of the visual cortex into task-related networks in congenitally blind individuals was expected to manifest as changes in long-range functional connectivity (imaginary coherency) in the theta, beta, and gamma bands between the visual cortex and working memory networks. Magnetoencephalographic data were recorded in congenitally blind and sighted individuals during resting state as well as during a voice-based working memory task; the task was performed before and after working memory training with either auditory or tactile stimuli, or a control condition. Auditory working memory training strengthened theta-band (2.5-5 Hz) connectivity in the sighted and beta-band (17.5-22.5 Hz) connectivity in the blind. In sighted participants, theta-band connectivity increased between brain areas typically involved in auditory working memory (inferior frontal, superior temporal, and insular cortex). In blind participants, beta-band networks largely emerged during the training, and connectivity increased between brain areas involved in auditory working memory and, as predicted, the visual cortex. Our findings highlight long-range connectivity as a key mechanism of functional reorganization following congenital blindness and provide new insights into the spectral characteristics of functional network connectivity.
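The connectivity metric named above, imaginary coherency, is the imaginary part of the cross-spectrum normalized by the power spectra; it discounts zero-lag coupling, which in MEG largely reflects field spread rather than genuine interaction. A minimal sketch on synthetic signals (the 20 Hz rhythm and the pi/4 phase lag are illustrative, not the study's data):

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(4)
fs = 600
t = np.arange(0, 30, 1 / fs)

# two signals sharing a beta-band (20 Hz) rhythm with a phase lag
x = np.sin(2 * np.pi * 20 * t) + rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 20 * t - np.pi / 4) + rng.standard_normal(t.size)

# coherency = cross-spectrum normalized by the two power spectra
f, sxy = csd(x, y, fs=fs, nperseg=1024)
_, sxx = welch(x, fs=fs, nperseg=1024)
_, syy = welch(y, fs=fs, nperseg=1024)
coherency = sxy / np.sqrt(sxx * syy)
icoh = np.abs(coherency.imag)              # imaginary coherency

beta = (f >= 17.5) & (f <= 22.5)           # the beta band reported for blind participants
beta_peak = icoh[beta].max()
```

Had the shared rhythm been coupled at zero lag, `coherency.imag` would vanish at 20 Hz, which is exactly why the measure is preferred for sensor- and source-space MEG connectivity.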
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps defined a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated from a data-driven analysis.
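Spatial ICA, as used above, treats voxels as samples and time points as mixed channels, decomposing the data into spatially independent maps with associated time courses. A minimal sketch with scikit-learn's `FastICA` on synthetic data (the array sizes, three sources, and noise level are illustrative; group-level ICA additionally concatenates subjects, which this sketch omits):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
# synthetic "fMRI": 200 time points x 2000 voxels, mixing 3 spatial sources
n_t, n_v, n_src = 200, 2000, 3
maps = rng.standard_normal((n_src, n_v)) ** 3   # sparse-ish, non-Gaussian spatial maps
tcs = rng.standard_normal((n_t, n_src))         # associated time courses
data = tcs @ maps + 0.1 * rng.standard_normal((n_t, n_v))

# spatial ICA: voxels are the samples, time points the mixed channels
ica = FastICA(n_components=n_src, random_state=0, max_iter=1000)
est_maps = ica.fit_transform(data.T).T          # (components, voxels) spatial maps
est_tcs = ica.mixing_                           # (time, components) time courses
```

In the study's workflow, the recovered `est_maps` would be labeled (A, V, AV) by inspecting their spatial layouts, and `est_tcs` by relating the time courses to the stimulation protocol.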