MPI für Hirnforschung
Previous reports of improved oral reading performance for dyslexic children, but not for regular readers, when between-letter spacing was enlarged led to the proposal of a dyslexia-specific deficit in visual crowding. In this context, however, it is also critical to understand how letter spacing affects visual word recognition and reading in unimpaired readers. Adopting an individual differences approach, the present study accordingly examined whether wider letter spacing also improves reading performance for non-impaired adults during silent reading, and whether there is an association between letter spacing and crowding sensitivity. We report eye movement data from 24 German students who silently read texts presented with either normal or wider letter spacing. Foveal and parafoveal crowding sensitivity were estimated using two independent tests. Wider spacing reduced first fixation durations, gaze durations, and total fixation time for all participants, with slower readers showing stronger effects. However, wider letter spacing also reduced skipping probabilities and elicited more fixations, especially for faster readers. In terms of words read per minute, wider letter spacing did not provide a benefit, and faster readers in particular were slowed down. Neither foveal nor parafoveal crowding sensitivity correlated with the observed letter-spacing effects. In conclusion, wide letter spacing reduces single-word processing time in typically developed readers during silent reading, but affects reading rates negatively since more words must be fixated. We tentatively propose that wider letter spacing reinforces serial letter processing in slower readers, but disrupts parallel processing of letter chunks in faster readers. These effects of letter spacing do not seem to be mediated by individual differences in crowding sensitivity.
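The eye-movement measures named above (first fixation duration, gaze duration, total fixation time, and skipping probability) follow standard definitions in reading research. A minimal illustrative computation over a chronological fixation sequence is sketched below; this is not the authors' analysis pipeline, and the function and variable names are hypothetical.

```python
# Illustrative sketch (not the study's actual pipeline): standard
# eye-movement reading measures from a fixation sequence, where each
# fixation is a (word_index, duration_ms) pair in chronological order.

from collections import defaultdict

def reading_measures(fixations, n_words):
    first_fix = {}              # first fixation duration per word
    gaze = defaultdict(int)     # summed first-pass fixation time per word
    total = defaultdict(int)    # total fixation time per word
    first_pass_done = set()     # words whose first pass has ended
    prev_word = None
    for word, dur in fixations:
        total[word] += dur
        if word not in first_fix:
            first_fix[word] = dur          # first fixation on this word
        if word not in first_pass_done:
            gaze[word] += dur              # still in first pass
        if prev_word is not None and word != prev_word:
            first_pass_done.add(prev_word) # leaving a word ends its first pass
        prev_word = word
    # skipping probability: proportion of words never fixated at all
    skipping = 1 - len(total) / n_words
    return first_fix, dict(gaze), dict(total), skipping
```

For example, two fixations on word 0, one on word 1, then a regression back to word 0 yields a gaze duration for word 0 that includes only the initial two fixations, while its total fixation time includes the refixation as well.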
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, where the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has emerged that can account for classic EEG findings of language, that is, the ease versus difficulty of integrating a word with its sentence context. However, models of semantics must not only account for context-based word processing, but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English and German) read isolated words, we encoded and decoded word vectors taken from the family of prediction-based Word2vec algorithms. We found that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models perform better than a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
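The encoding direction described above, predicting the neural activity pattern evoked by a word from its position in vector space, is commonly implemented as a cross-validated regularized regression. The sketch below illustrates this with a closed-form ridge regression and leave-one-word-out evaluation; the data shapes, function names, and regularization choice are illustrative assumptions, not the published analysis.

```python
# Hedged sketch of an encoding model: map distributional word vectors
# X (n_words, n_dims) to EEG response patterns Y (n_words, n_features)
# with ridge regression, scored by leave-one-word-out correlation.

import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge solution W of min ||XW - Y||^2 + alpha ||W||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def encoding_score(X, Y, alpha=1.0):
    """Mean correlation between predicted and observed patterns,
    with each word held out in turn."""
    n = X.shape[0]
    scores = []
    for i in range(n):
        mask = np.arange(n) != i
        W = ridge_fit(X[mask], Y[mask], alpha)
        pred = X[i] @ W                      # predicted EEG pattern
        scores.append(np.corrcoef(pred, Y[i])[0, 1])
    return float(np.mean(scores))
```

On simulated data where the EEG patterns really are a noisy linear function of the word vectors, this score approaches 1; at chance it hovers around 0, which is the usual sanity check for such encoding analyses.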
Most current models assume that the perceptual and cognitive processes of visual word recognition and reading operate upon neuronally coded domain-general low-level visual representations – typically oriented line representations. We here demonstrate, consistent with neurophysiological theories of Bayesian-like predictive neural computations, that prior visual knowledge of words may be utilized to ‘explain away’ redundant and highly expected parts of the visual percept. Subsequent processing stages, accordingly, operate upon an optimized representation of the visual input, the orthographic prediction error, highlighting only the visual information relevant for word identification. We show that this optimized representation is related to orthographic word characteristics, accounts for word recognition behavior, and is processed early in the visual processing stream, i.e., in V4 and before 200 ms after word-onset. Based on these findings, we propose that prior visual-orthographic knowledge is used to optimize the representation of visually presented words, which in turn allows for highly efficient reading processes.
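One way to make the 'explaining away' idea concrete is to treat the prediction as the visual structure shared across known words, and the orthographic prediction error as whatever remains of a word's image after that expected structure is subtracted. The sketch below is an assumed formalization for illustration only; the function names are hypothetical.

```python
# Conceptual sketch (assumed formalization): the orthographic prediction
# error as the residual of a word's pixel image after subtracting the
# pixel-wise mean image over a set of known words ("prior knowledge").

import numpy as np

def orthographic_prediction_error(word_img, lexicon_imgs):
    """word_img: (h, w) array; lexicon_imgs: (n, h, w) array of known words.
    The prediction is the pixel-wise mean over the lexicon; the error
    keeps only the visual information that deviates from it."""
    prediction = lexicon_imgs.mean(axis=0)
    return word_img - prediction
```

Pixels that are identical across all known words (the redundant, highly expected parts of the percept) cancel out, so the residual highlights only the information that discriminates this word from others.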
To characterize the role of the left-ventral occipito-temporal cortex (lvOT) during reading in a quantitatively explicit and testable manner, we propose the lexical categorization model (LCM). The LCM assumes that lvOT optimizes linguistic processing by allowing fast meaning access when words are familiar and by filtering out orthographic strings without meaning. The LCM successfully simulates benchmark results from functional brain imaging. Empirically, using functional magnetic resonance imaging, we demonstrate that quantitative LCM simulations predict lvOT activation across three studies better than alternative models. In addition, we found that word-likeness, which is assumed to be the input to the LCM, is represented posterior to lvOT. In contrast, a dichotomous word/non-word contrast, which is assumed to be the LCM’s output, could be localized to upstream frontal brain regions. Finally, we found that training lexical categorization results in more efficient reading. Thus, we propose a ventral-visual-stream processing framework for reading involving word-likeness extraction followed by lexical categorization, before meaning extraction.
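A common way to express the core intuition of such a categorization account is that engagement peaks where the word/non-word decision is hardest, i.e., for strings of intermediate word-likeness, and drops for clear words and clear non-words. The sketch below illustrates this with the binary entropy of the decision; treating the categorization uncertainty as this entropy, and the mapping from word-likeness to a probability, are illustrative assumptions, not the LCM's published equations.

```python
# Hedged sketch: categorization uncertainty as the binary entropy of the
# word/non-word decision, given an (assumed) probability p_word that a
# letter string is a word, derived from its word-likeness.

import math

def categorization_uncertainty(p_word):
    """Binary entropy H(p) in bits; maximal (1.0) at p_word = 0.5,
    i.e., for strings of intermediate word-likeness, and 0 for
    unambiguous words (p = 1) or non-words (p = 0)."""
    if p_word in (0.0, 1.0):
        return 0.0
    return -(p_word * math.log2(p_word)
             + (1 - p_word) * math.log2(1 - p_word))
```

Under this reading, a clearly word-like string and a clearly illegal string both produce low uncertainty, while pseudowords near the category boundary produce the highest values, the inverted-U profile that a categorization account predicts for lvOT activation.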