Linguistics classification

Year of publication
- 2005 (3)
Document Type
- Part of a Book (3)
Language
- English (3)
Has Fulltext
- yes (3)
Is part of the Bibliography
- no (3)
Keywords
- Language acquisition (3)
- Child (2)
- Speech comprehension (2)
- Auditory phonetics (1)
- Cochlear implant (1)
- Computational linguistics (1)
- Computer simulation (1)
- Canadian French (1)
- French (1)
- Hearing impairment (1)
It has been shown that visual cues play a crucial role in the perception of vowels and consonants. Conflicting consonantal stimuli presented in the visual and auditory modalities can even result in the emergence of a third perceptual unit (McGurk effect). From a developmental point of view, several studies report that newborns can associate the image of a face uttering a given vowel with the auditory signal corresponding to this vowel; visual cues are thus used by newborns. Despite the large number of studies carried out with adult speakers and newborns, very little work has been conducted with preschool-aged children. This contribution aims to describe the use of auditory and visual cues by 4- and 5-year-old French Canadian speakers, compared to adult speakers, in the identification of voiced consonants. Audiovisual recordings of a French Canadian speaker uttering the sequences [aba], [ada], [aga], [ava], [ibi], [idi], [igi], [ivi] were carried out. The acoustic and visual signals were extracted and analysed so that conflicting and non-conflicting stimuli between the two modalities could be obtained. The resulting stimuli were presented as a perceptual test to eight 4- and 5-year-old French Canadian speakers and ten adults in three conditions: visual-only, auditory-only, and audiovisual. Results show that, even though visual cues have a significant effect on the identification of the stimuli for both adults and children, children are less sensitive to visual cues in the audiovisual condition. Such results shed light on the role of multimodal perception in the emergence and refinement of the phonological system in children.
While the perilinguistic child is endowed with predispositions for the categorical perception of phonetic features, their adaptation to the native language results from a long evolution from the end of the first year of life up to adolescence. This evolution entails a better discrimination between phonological categories, a concomitant reduction of the discrimination between within-category variants, and a higher precision of the perceptual boundaries between categories. The first objective of the present study was to assess the relative importance of these modifications by comparing the perceptual performances of a group of 11 children, aged 8 to 11 years, with those of their mothers. Our second objective was to explore the functional implications of categorical perception by comparing the performances of a group of 8 deaf children equipped with a cochlear implant with those of normal-hearing chronological-age controls. The results showed that the categorical boundary was slightly more precise, and categorical perception consistently stronger, in adults than in normal-hearing children. Those among the deaf children who were able to discriminate minimal distinctions between syllables displayed categorical perception performances equivalent to those of the normal-hearing controls. In conclusion, the late effect of age on the categorical perception of speech seems to be anchored in a fairly mature phonological system, as evidenced by the fairly high precision of categorical boundaries in pre-adolescents. These late developments have functional implications for speech perception in difficult conditions, as suggested by the relationship between categorical perception and speech intelligibility in children with cochlear implants.
The goal of our current project is to build a system that can learn to imitate a version of a spoken utterance using an articulatory speech synthesiser. The approach is informed and inspired by knowledge of early infant speech development. Thus we expect our system to reproduce and exploit the utility of infant behaviours such as listening, vocal play, babbling and word imitation. We expect our system to develop a relationship between the sound-making capabilities of its vocal tract and the phonetic/phonological structure of imitated utterances. At the heart of our approach is the learning of an inverse model that relates acoustic and motor representations of speech. The acoustic-to-auditory mapping uses an auditory filter bank and a self-organizing phase of learning. The inverse model from auditory to vocal-tract control parameters is estimated using a babbling phase, in which the vocal tract is essentially driven in a random manner, much like the babbling phase of speech acquisition in infants. The complete system can be used to imitate simple utterances through a direct mapping from sound to control parameters. Our initial results show that this procedure works well for sounds generated by the system's own voice. Further work is needed to build a phonological control level and achieve better performance with real speech.
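The babble-then-invert loop described in this abstract can be illustrated in miniature. The sketch below is purely hypothetical: `toy_synthesiser` is a two-parameter stand-in for the real articulatory synthesiser (which the abstract does not specify), the nearest-neighbour lookup is one simple choice of inverse model among many, and all names and numbers are illustrative.

```python
import math
import random

def toy_synthesiser(p1, p2):
    """Hypothetical articulatory synthesiser: maps two vocal-tract
    control parameters in [0, 1] to two formant-like values (Hz)."""
    f1 = 300.0 + 500.0 * p1    # pseudo first formant
    f2 = 900.0 + 1600.0 * p2   # pseudo second formant
    return (f1, f2)

def babble(n=2000, seed=0):
    """Babbling phase: drive the vocal tract at random and record
    (acoustic output, motor command) pairs, as the abstract describes."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        p1, p2 = rng.random(), rng.random()   # random motor commands
        pairs.append((toy_synthesiser(p1, p2), (p1, p2)))
    return pairs

def inverse_model(target, pairs):
    """Inverse model by nearest neighbour: return the motor command
    whose acoustic result lies closest to the target sound."""
    def dist(a, b):
        # Crude weighting so the wider F2 range does not dominate.
        return math.hypot(a[0] - b[0], (a[1] - b[1]) / 3.0)
    return min(pairs, key=lambda ap: dist(ap[0], target))[1]

# Imitation of the system's own voice: hear a sound, recover the
# control parameters, and re-synthesise.
pairs = babble()
target = toy_synthesiser(0.4, 0.7)
motor = inverse_model(target, pairs)
echo = toy_synthesiser(*motor)
```

With a few thousand babbled samples the echoed formants land close to the target, mirroring the abstract's observation that the procedure works well for sounds generated by the system's own voice; imitating real external speech would require the auditory filter-bank front end and phonological control level the authors mention.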