MPI für empirische Ästhetik
Year of publication: 2021 · Language: English
Keywords: adults, animacy, children, picture naming, rhythm, word order
In this study, we investigated the impact of two constraints on the linear order of constituents in German preschool children’s and adults’ speech production: a rhythmic one (*LAPSE, militating against sequences of unstressed syllables) and a semantic one (ANIM, requiring animate referents to be named before inanimate ones). Participants were asked to produce coordinated bare noun phrases in response to picture stimuli (e.g., Delfin und Planet, ‘dolphin and planet’) without any predefined word order. Overall, children and adults preferentially produced animate items before inanimate ones, confirming the findings of Prat-Sala, Shillcock, and Sorace (2000). In the group of preschoolers, the strength of the animacy effect correlated positively with age. Furthermore, the order of the conjuncts was affected by the rhythmic constraint, such that disrhythmic sequences, i.e., stress lapses, were avoided. In both groups, the latter result was significant when the two stimulus pictures did not differ in animacy. In sum, our findings suggest a stronger influence of animacy than of rhythmic well-formedness on conjunct ordering for German-speaking children and adults, in line with findings by McDonald, Bock, and Kelly (1993), who investigated English-speaking adults.
Across languages, the speech signal is characterized by a predominant modulation of the amplitude spectrum between about 4.3 and 5.5 Hz, reflecting the production and processing of linguistic information chunks (syllables, words) every ∼200 ms. Interestingly, ∼200 ms is also the typical duration of eye fixations during reading. Prompted by this observation, we demonstrate that German readers sample written text at ∼5 Hz. A subsequent meta-analysis with 142 studies from 14 languages replicates this result, but also shows that sampling frequencies vary across languages between 3.9 Hz and 5.2 Hz, and that this variation systematically depends on the complexity of the writing systems (character-based vs. alphabetic systems, orthographic transparency). Finally, we demonstrate empirically a positive correlation between speech spectrum and eye-movement sampling in low-skilled readers. Based on this convergent evidence, we propose that during reading, our brain’s linguistic processing systems imprint a preferred processing rate, i.e., the rate of spoken language production and perception, onto the oculomotor system.
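The link between fixation duration and sampling frequency in the abstract above is simply a reciprocal relation. A minimal sketch (illustrative only; the function name and example values are ours, not the study's data) shows how a ∼200 ms fixation corresponds to a ∼5 Hz sampling rate, and what fixation durations the reported 3.9–5.2 Hz cross-language range would imply:

```python
def sampling_rate_hz(mean_fixation_s: float) -> float:
    """Sampling frequency (Hz) implied by a mean fixation duration (s)."""
    return 1.0 / mean_fixation_s

# A ~200 ms fixation corresponds to a ~5 Hz sampling rate.
print(sampling_rate_hz(0.200))  # → 5.0

# The cross-language range reported (3.9-5.2 Hz) would correspond to
# mean fixation durations of roughly 256 ms down to 192 ms.
print(round(1.0 / 3.9, 3))  # → 0.256
print(round(1.0 / 5.2, 3))  # → 0.192
```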
Music, like language, is characterized by hierarchically organized structure that unfolds over time. Music listening therefore requires not only the tracking of notes and beats but also internally constructing high-level musical structures or phrases and anticipating incoming contents. Unlike for language, mechanistic evidence for online musical segmentation and prediction at a structural level is sparse. We recorded neurophysiological data from participants listening to music in its original forms as well as in manipulated versions with locally or globally reversed harmonic structures. We discovered a low-frequency neural component that modulated the neural rhythms of beat tracking and reliably parsed musical phrases. We next identified phrasal phase precession, suggesting that listeners established structural predictions from ongoing listening experience to track phrasal boundaries. The data point to brain mechanisms that listeners use to segment continuous music at the phrasal level and to predict abstract structural features of music.