MPI für Hirnforschung
Analyzing non-invasive recordings of electroencephalography (EEG) and magnetoencephalography (MEG) directly in sensor space, using the signal from individual sensors, is a convenient and standard way of working with this type of data. However, volume conduction introduces considerable challenges for sensor space analysis. While the general idea of signal mixing due to volume conduction in EEG/MEG is recognized, the implications have not yet been clearly exemplified. Here, we illustrate how different types of activity overlap on the level of individual sensors. We show spatial mixing in the context of alpha rhythms, which are known to have generators in different areas of the brain. Using simulations with a realistic 3D head model and lead field, together with data analysis of a large resting-state EEG dataset, we show that electrode signals can be differentially affected by spatial mixing, as quantified by a sensor complexity measure. While prominent occipital alpha rhythms result in less heterogeneous spatial mixing on posterior electrodes, central electrodes show a diversity of rhythms present. This makes the individual contributions, such as the sensorimotor mu-rhythm and temporal alpha rhythms, hard to disentangle from the dominant occipital alpha. Additionally, we show how strong occipital rhythms can contribute the majority of activity to frontal channels, potentially compromising analyses that are conducted solely in sensor space. We also outline specific consequences of signal mixing for frequently used assessments of power, power ratios, and connectivity profiles in basic research and for neurofeedback applications. With this work, we hope to illustrate the effects of volume conduction in a concrete way, such that the practical illustrations provided may help EEG researchers evaluate whether sensor space is an appropriate choice for their topic of investigation.
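The core mechanism described here — instantaneous linear mixing of several alpha generators into each electrode through the lead field — can be illustrated with a minimal numpy sketch. Everything below is a toy stand-in: the three sources, the random lead-field gains, and the entropy-based "complexity" score are hypothetical illustrations, not the realistic head model or the exact sensor complexity measure used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: three alpha-band sources (e.g. occipital alpha,
# sensorimotor mu, temporal alpha) projected to 5 sensors through a
# made-up lead field. Real analyses use a realistic 3D head model.
fs = 250
t = np.arange(0, 2, 1 / fs)                      # 2 s at 250 Hz
sources = np.vstack([
    3.0 * np.sin(2 * np.pi * 10 * t),            # strong occipital alpha
    1.0 * np.sin(2 * np.pi * 11 * t + 1.0),      # sensorimotor mu
    0.8 * np.sin(2 * np.pi * 9 * t + 2.0),       # temporal alpha
])
leadfield = rng.uniform(0.1, 1.0, size=(5, 3))   # hypothetical gains
sensors = leadfield @ sources                    # instantaneous mixing

# Per-sensor "complexity": entropy of the relative variance each source
# contributes to a sensor -- a simple stand-in for the sensor
# complexity measure mentioned in the abstract.  A sensor dominated by
# one source scores near 0; heterogeneous mixing scores near log(3).
contrib = (leadfield ** 2) * sources.var(axis=1)
p = contrib / contrib.sum(axis=1, keepdims=True)
complexity = -(p * np.log(p)).sum(axis=1)
print(complexity)
```

The sketch makes the abstract's point concrete: every sensor row of `sensors` is a weighted sum of all three rhythms, so a "frontal" channel can still be dominated by the occipital generator whenever its lead-field gain for that source is large.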
Most current models assume that the perceptual and cognitive processes of visual word recognition and reading operate upon neuronally coded domain-general low-level visual representations – typically oriented line representations. We here demonstrate, consistent with neurophysiological theories of Bayesian-like predictive neural computations, that prior visual knowledge of words may be utilized to ‘explain away’ redundant and highly expected parts of the visual percept. Subsequent processing stages, accordingly, operate upon an optimized representation of the visual input, the orthographic prediction error, highlighting only the visual information relevant for word identification. We show that this optimized representation is related to orthographic word characteristics, accounts for word recognition behavior, and is processed early in the visual processing stream, i.e., in V4 and before 200 ms after word-onset. Based on these findings, we propose that prior visual-orthographic knowledge is used to optimize the representation of visually presented words, which in turn allows for highly efficient reading processes.
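The idea of "explaining away" expected visual input can be sketched in a few lines. This is a hypothetical toy: word images are random pixel vectors, and the visual-orthographic prior is approximated as the lexicon mean, so the residual plays the role of the orthographic prediction error described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy lexicon: 100 "word images", each a 64-pixel vector (hypothetical).
lexicon = rng.random((100, 64))

# Prior visual-orthographic knowledge, here crudely approximated as the
# average appearance of words in the lexicon.
prior = lexicon.mean(axis=0)

# Orthographic prediction error: the percept with the redundant,
# expected part explained away, leaving word-specific information.
word = lexicon[0]
prediction_error = word - prior

# The residual carries less redundant energy than the raw percept,
# illustrating why later stages can operate on an optimized code.
print(np.sum(prediction_error ** 2), np.sum(word ** 2))
```

The design point is simply that subtracting a shared prediction removes the component common to all words, so downstream processing handles a sparser, more informative signal.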
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, where the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has developed that can account for classic EEG findings of language, that is, the ease versus difficulty of integrating a word with its sentence context. However, models of semantics have to account not only for context-based word processing, but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English and German) read isolated words, we encoded and decoded word vectors taken from the family of prediction-based Word2vec algorithms. We found that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models perform better than a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
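The encoding direction described in the abstract — predicting the neural pattern from a word's position in vector space — can be sketched with a small ridge regression. All numbers below are synthetic stand-ins: the 50-dimensional "word vectors", the 32-channel "EEG" amplitudes (imagine the 300-500 ms window), and the regularization strength are assumptions, not the study's actual data or pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 200 words with 50-dim vectors (Word2vec stand-ins)
# and 32-channel EEG amplitudes generated from a hidden linear map
# plus noise.
n_words, dim, n_chan = 200, 50, 32
vectors = rng.standard_normal((n_words, dim))
true_map = rng.standard_normal((dim, n_chan)) * 0.2
eeg = vectors @ true_map + rng.standard_normal((n_words, n_chan))

# Fit a ridge-regularized encoding model on training words, then
# predict the neural pattern of held-out words from their vectors.
train, test = slice(0, 150), slice(150, 200)
lam = 10.0
X, Y = vectors[train], eeg[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ Y)
pred = vectors[test] @ W

# Evaluate: per-channel correlation between predicted and observed
# held-out activity; above-chance mean correlation indicates that
# vector-space position carries information about the neural pattern.
r = np.array([np.corrcoef(pred[:, c], eeg[test][:, c])[0, 1]
              for c in range(n_chan)])
print(r.mean())
```

Decoding works the same way with the roles of `vectors` and `eeg` swapped; the abstract's comparison against a taxonomic baseline would substitute WordNet-derived features for `vectors` in the same fit-and-correlate loop.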