It has been shown that visual cues play a crucial role in the perception of vowels and consonants. Conflicting consonantal stimuli presented in the visual and auditory modalities can even result in the emergence of a third perceptual unit (the McGurk effect). From a developmental point of view, several studies report that newborns can associate the image of a face uttering a given vowel with the auditory signal corresponding to that vowel; visual cues are thus used by newborns. Despite the large number of studies carried out with adult speakers and newborns, very little work has been conducted with preschool-aged children. This contribution aims to describe the use of auditory and visual cues by 4- and 5-year-old French Canadian speakers, compared to adult speakers, in the identification of voiced consonants. Audiovisual recordings were made of a French Canadian speaker uttering the sequences [aba], [ada], [aga], [ava], [ibi], [idi], [igi], [ivi]. The acoustic and visual signals were extracted and analysed to obtain conflicting and non-conflicting stimuli between the two modalities. The resulting stimuli were presented as a perceptual test to eight 4- and 5-year-old French Canadian speakers and ten adults in three conditions: visual-only, auditory-only, and audiovisual. Results show that, even though visual cues have a significant effect on stimulus identification for both adults and children, children are less sensitive to visual cues in the audiovisual condition. Such results shed light on the role of multimodal perception in the emergence and refinement of the phonological system in children.
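For illustration only, the sketch below shows one way such a congruent/conflicting audiovisual stimulus set and the three presentation conditions could be enumerated. The file names and the build_stimuli helper are hypothetical assumptions, not taken from the study.

```python
from itertools import product

# Consonants and vowel contexts mentioned in the abstract.
CONSONANTS = ["b", "d", "g", "v"]
VOWELS = ["a", "i"]

def build_stimuli():
    """Cross audio and video tracks of VCV sequences to obtain congruent
    (same consonant) and conflicting (different consonant) audiovisual
    stimuli, plus unimodal control conditions.
    File names are hypothetical placeholders, not the study's materials."""
    stimuli = []
    for vowel in VOWELS:
        for audio_c, video_c in product(CONSONANTS, repeat=2):
            audio = f"{vowel}{audio_c}{vowel}.wav"   # e.g. aba.wav
            video = f"{vowel}{video_c}{vowel}.mp4"   # e.g. aga.mp4
            stimuli.append({
                "condition": "audiovisual",
                "congruent": audio_c == video_c,
                "audio": audio,
                "video": video,
            })
        # Unimodal controls: auditory-only and visual-only presentations.
        for c in CONSONANTS:
            stimuli.append({"condition": "auditory-only",
                            "audio": f"{vowel}{c}{vowel}.wav", "video": None})
            stimuli.append({"condition": "visual-only",
                            "audio": None, "video": f"{vowel}{c}{vowel}.mp4"})
    return stimuli

if __name__ == "__main__":
    items = build_stimuli()
    # 2 vowels * (16 audiovisual pairs + 4 auditory-only + 4 visual-only) = 48
    print(len(items), "stimuli")
    print(items[0])
```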
This paper adopts a new perspective on speech errors within the framework of Articulatory Phonology, as proposed by Goldstein et al. (in prep.). On the basis of kinematic evidence, their work has demonstrated that speech errors are not restricted to categorical exchanges of position between segmental units; rather, the gestures that compose segments can exhibit errors ranging from zero to maximal in magnitude.
Here we report results from two perceptual experiments that use stimuli selected solely on the basis of their articulatory properties, covering a range of errorful gestural activations. The outcome of the perceptual experiments suggests that different segments show different degrees of vulnerability to (subsegmental) speech errors: while listeners reliably detected errors for some segments, for other segments responses to errorful and non-errorful tokens were not distinct. The data suggest that, at least for some error types, an asymmetric error distribution arises in perception, while production itself is not asymmetric. However, for error types involving segments whose gestural compositions stand in a subset relationship to each other (as described below), asymmetries may indeed originate in production, due to the overall dominance of a gestural intrusion bias observed in the production data of Goldstein et al. (in prep.).
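As a purely illustrative sketch, one common way to quantify whether listeners distinguish errorful from non-errorful tokens is signal-detection sensitivity (d′); the abstract does not specify the analysis used, and the segments and counts below are invented for the example.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for detecting errorful tokens.
    A log-linear correction keeps rates away from 0 and 1 so the
    z-transform stays finite. Illustrative only."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical per-segment counts: (hits, misses, false alarms, correct rejections).
example_counts = {
    "segment A": (38, 2, 3, 37),    # errors detected reliably -> high d'
    "segment B": (22, 18, 19, 21),  # errorful vs. non-errorful not distinct -> d' near 0
}
for segment, counts in example_counts.items():
    print(segment, round(d_prime(*counts), 2))
```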