Linguistics
The present article is a follow-up to our investigation of labiodentals in German and Dutch (Hamann & Sennema 2005), in which we examined the perception of the Dutch three-way labiodental contrast by German listeners without any knowledge of Dutch and by German learners of Dutch. The results of that study suggested that the German voiced labiodental fricative /v/ is perceptually closer to the Dutch approximant /ʋ/ than to the corresponding Dutch voiced labiodental fricative /v/. These perceptual indications are confirmed by the acoustic findings of the present study: German /v/ has a harmonicity median and a centre of gravity similar to those of Dutch /ʋ/, but differs from Dutch /v/ in these parameters. With respect to the acoustic parameter of duration, however, German /v/ lies closer to Dutch /v/ than to Dutch /ʋ/.
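As a rough illustration of one of these parameters, the spectral centre of gravity can be computed as the amplitude-weighted mean frequency of a segment's spectrum. The following minimal sketch assumes a mono signal and uses invented noise input rather than the study's recordings (Praat-style weighting options are omitted):

```python
# Minimal sketch: spectral centre of gravity (CoG) of a fricative segment,
# computed as the amplitude-weighted mean frequency of the spectrum.
# `x` and `fs` are illustrative inputs, not data from the study.
import numpy as np

def centre_of_gravity(x, fs):
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

fs = 16000
x = np.random.randn(int(0.05 * fs))              # 50 ms of noise as a stand-in
print(f"CoG: {centre_of_gravity(x, fs):.0f} Hz") # near fs/4 for white noise
```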
(Non)retroflexivity of Slavic affricates and its motivation : Evidence from Polish and Czech <č>
(2005)
The goal of this paper is two-fold. First, it revises the common assumption that the affricate <č> denotes /t͡ʃ/ in all Slavic languages. On the basis of experimental results, it is shown that Slavic <č> stands for two different sounds: /t͡ʃ/, as e.g. in Czech, and /ʈʂ/, as in Polish.
The second goal of the paper is to show that this difference is not accidental but motivated by perceptual relations among sibilants. In Polish, /t͡ʃ/ changed to /ʈʂ/, thus lowering its sibilant tonality and creating a better perceptual distance from /tɕ/, whereas in Czech /t͡ʃ/ did not turn into /ʈʂ/, as it already displayed sufficient perceptual distance from the only other affricate in the inventory, namely the alveolar /t͡s/. Finally, an analysis of the Czech and Polish affricate inventories is offered.
While the perilinguistic child is endowed with predispositions for the categorical perception of phonetic features, their adaptation to the native language results from a long evolution, from the end of the first year of life up to adolescence. This evolution entails better discrimination between phonological categories, a concomitant reduction of discrimination between within-category variants, and higher precision of the perceptual boundaries between categories. The first objective of the present study was to assess the relative importance of these modifications by comparing the perceptual performances of a group of 11 children, aged 8 to 11 years, with those of their mothers. Our second objective was to explore the functional implications of categorical perception by comparing the performances of a group of 8 deaf children equipped with a cochlear implant with those of normal-hearing chronological-age controls. The results showed that the categorical boundary was slightly more precise, and categorical perception consistently more pronounced, in adults than in normal-hearing children. Those among the deaf children who were able to discriminate minimal distinctions between syllables displayed categorical perception performances equivalent to those of normal-hearing controls. In conclusion, the late effect of age on the categorical perception of speech seems to be anchored in a fairly mature phonological system, as evidenced by the fairly high precision of categorical boundaries in pre-adolescents. These late developments have functional implications for speech perception in difficult conditions, as suggested by the relationship between categorical perception and speech intelligibility in children with cochlear implants.
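The precision of a categorical boundary is commonly quantified by fitting a logistic psychometric function to identification responses; the boundary is the 50% point and a steeper slope indicates a more precise boundary. The sketch below uses invented proportions for a hypothetical seven-step continuum and is not the authors' analysis:

```python
# Sketch: estimating a categorical boundary and its precision by fitting a
# logistic psychometric function to identification data (values invented).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)                       # hypothetical 7-step continuum
p_id = np.array([0.02, 0.05, 0.15, 0.55, 0.90, 0.97, 0.99])

(boundary, slope), _ = curve_fit(logistic, steps, p_id, p0=[4.0, 1.0])
print(f"boundary at step {boundary:.2f}, slope {slope:.2f}")
# A steeper slope corresponds to a sharper, more precise boundary.
```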
This paper describes the processing of MRI and CT images needed to develop a 3D linear articulatory model of the velum. The 3D surface defining each organ of the vocal and nasal tracts is extracted from MRI and CT images recorded from a subject uttering a corpus of artificially sustained French vowels and consonants. First, the 2D contours of the organs were manually extracted from the corresponding images, expanded into 3D contours, and aligned in a common 3D coordinate system. Then, for each organ, a generic mesh was chosen and fitted by elastic deformation to each of the 46 3D shapes of the corpus. This finally yielded a set of organ surfaces sampled with the same number of 3D vertices for each articulation, which is appropriate for Principal Component Analysis or linear decomposition. The analysis of these data uncovered two main uncorrelated articulatory degrees of freedom for the velum's movement; the associated parameters are used to control the model. In particular, we investigated the question of a possible correlation between jaw/tongue movement and velum movement, and did not find more correlation than that already present in the corpus.
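To make the linear-decomposition step concrete, here is a minimal sketch: each fitted mesh, sampled with the same number of vertices, is flattened into one row, and PCA then extracts the main degrees of freedom. Shapes and data are placeholders, not the authors' meshes:

```python
# Sketch of the linear decomposition: one flattened mesh per articulation,
# PCA yielding the main articulatory degrees of freedom.
# Random data stands in for the 46 fitted velum meshes.
import numpy as np
from sklearn.decomposition import PCA

n_artic, n_vertices = 46, 500                      # counts are illustrative
meshes = np.random.randn(n_artic, n_vertices * 3)  # x, y, z per vertex

pca = PCA(n_components=2)
params = pca.fit_transform(meshes)          # two control parameters per shape
print(pca.explained_variance_ratio_)        # variance captured per component
```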
This paper contributes to the understanding of vocal-fold oscillation during phonation. In order to test theoretical models of phonation, a new experimental set-up using a deformable vocal-fold replica is presented. The replica is shown to be able to produce self-sustained oscillations under controlled experimental conditions, so that different parameters, such as those related to elasticity, to acoustical coupling, or to the subglottal pressure, can be studied quantitatively. In this work we focused on the oscillation fundamental frequency and on the upstream pressure needed either to start (onset threshold) or to end (offset threshold) oscillations in the presence of a downstream acoustical resonator. As an example, it is shown how these data can be used to test the theoretical predictions of a simple one-mass model.
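As a toy illustration of the onset-threshold idea tested with such a replica, the sketch below reduces a one-mass model to a damped oscillator in which the subglottal pressure P_s contributes negative damping: oscillation grows once alpha * P_s exceeds the mechanical losses r. All constants are invented, not fitted to the replica:

```python
# Toy reduction of a one-mass vocal-fold model: aerodynamic energy input is
# modelled as negative damping proportional to subglottal pressure P_s, so
# self-sustained oscillation starts once alpha * P_s exceeds the losses r.
m, k, r, alpha = 0.1, 100.0, 0.2, 0.1     # mass, stiffness, damping, coupling
dt, steps = 1e-4, 50000                   # 5 s of simulated time

def peak_amplitude(P_s):
    x, v, peak = 1e-3, 0.0, 0.0           # small initial displacement
    for i in range(steps):
        a = -(k * x + (r - alpha * P_s) * v) / m
        v += a * dt                       # semi-implicit Euler step
        x += v * dt
        if i > 0.8 * steps:               # track amplitude near the end
            peak = max(peak, abs(x))
    return peak

for P_s in (1.0, 3.0):                    # onset threshold here is r/alpha = 2
    state = "grows" if peak_amplitude(P_s) > 1e-3 else "decays"
    print(f"P_s = {P_s}: oscillation {state}")
```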
A visual articulatory model and its application to therapy of speech disorders : a pilot study
(2005)
A visual articulatory model based on static MRI data of isolated sounds, and its application in the therapy of speech disorders, is described. The model is capable of generating video sequences of articulatory movements or still images of articulatory target positions within the midsagittal plane. On the basis of this model, (1) a visual stimulation technique for the therapy of patients suffering from speech disorders and (2) a rating test for the visual recognition of speech movements were developed. Results indicate that patients produce recognition rates above chance level even without any training, and that patients are capable of significantly increasing their recognition rate over the course of therapy.
In order to investigate the articulatory processes underlying the hasty and mumbled speech of clutterers, kinematic variability was analysed by means of electromagnetic midsagittal articulography (EMMA). In contrast to stutterers, clutterers improve their intelligibility by concentrating on their speech task. Variability is an important criterion in comparable studies of stuttering and is discussed in terms of the stability of the speech motor system. The aim of the current study was to analyse the spatial and temporal variability in the speech of three clutterers and three control speakers, all native speakers of German. The speech material consisted of repetitive CV syllables and foreign words, because clutterers have the most severe problems with long words that have a complex syllable structure. The results showed a higher quotient of variation for clutterers in foreign-word production; for the syllable-repetition task, no significant differences between clutterers and controls were found. The extremely large and variable displacements were interpreted as a strategy that helps clutterers to improve the intelligibility of their speech.
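Assuming the "quotient of variation" is the familiar coefficient of variation (standard deviation divided by mean), it can be computed over repeated productions as in this minimal sketch; the displacement values are invented, not EMMA measurements from the study:

```python
# Sketch: quotient (coefficient) of variation across repeated productions,
# one way to quantify kinematic variability. Values (mm) are invented.
import numpy as np

displacements = np.array([8.1, 9.4, 7.6, 10.2, 8.8])  # peak displacements
qov = displacements.std(ddof=1) / displacements.mean()
print(f"quotient of variation: {qov:.3f}")
```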
This paper summarizes our research efforts in the functional modelling of the relationship between the acoustic properties of vowels and perceived vowel quality. Our model is trained on 164 short steady-state stimuli. We measured F1 and F2, and additionally F0, since the effect of F0 on perceived vowel height is evident. Forty phonetically skilled subjects judged vowel quality using the Cardinal Vowel diagram. The main focus is on refining the model and describing its transformation properties between the F1/F2 formant chart and the Cardinal Vowel diagram. An evaluation of the model on 48 additional vowels showed its generalizability and confirmed that it predicts perceived vowel quality with sufficient accuracy.
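The abstract does not specify the model's form; as a minimal stand-in, one could fit a linear map from (F0, F1, F2) to positions in the Cardinal Vowel diagram, as in this sketch with invented training data:

```python
# Hypothetical stand-in for such a model: a linear map from (F0, F1, F2)
# to (x, y) positions in the Cardinal Vowel diagram. Data are invented;
# the authors' actual model form is not specified in the abstract.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
acoustics = rng.uniform([100, 250, 600], [300, 900, 2800], size=(164, 3))
judged_xy = rng.uniform(0, 1, size=(164, 2))    # placeholder diagram positions

model = LinearRegression().fit(acoustics, judged_xy)
print(model.predict([[120, 300, 2300]]))        # predicted diagram position
```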
We measure face deformations during speech production using a motion-capture system, which provides 3D coordinate data for about 60 markers glued to the speaker's face. An arbitrary orthogonal factor analysis followed by a principal component analysis (together called a guided PCA) of the data showed that the first six factors explain about 90% of the variance for each of our three speakers. The six derived factors therefore allow us to analyze efficiently, or to reconstruct with reasonable accuracy, the observed face deformations. Since these factors can be interpreted in articulatory terms, they can reveal underlying articulatory organizations. The comparison of lip gestures in terms of the data-derived factors suggests that these speakers maneuver the lips differently to achieve the contrast between /s/ and /R/. Such inter-speaker variability can occur because the acoustic contrast of these fricatives is shaped not only by the lip tube but also by cavities inside the mouth, such as the sublingual cavity. In other words, this tube and these cavities can acoustically compensate for each other to produce the required acoustic properties.
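The variance check behind the "six factors, ~90%" figure can be sketched as follows, with plain PCA standing in for the guided PCA and random data standing in for the motion-capture recordings:

```python
# Sketch of the variance check: stack per-frame 3D coordinates of ~60
# markers into rows and ask how much variance six components capture.
# Random data stands in for the recordings, so the ratios will be low;
# ~0.9 cumulative is what the paper reports for real data.
import numpy as np
from sklearn.decomposition import PCA

frames = np.random.randn(2000, 60 * 3)      # 2000 frames, 60 markers (x, y, z)
pca = PCA(n_components=6).fit(frames)
print(pca.explained_variance_ratio_.cumsum())
```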
The contribution of von Kempelen's "Mechanism of Speech" to the 'phonetic sciences' will be analyzed with respect to his theoretical reasoning on speech and speech production on the one hand, and on the other in connection with the practical insights gained during his struggle to construct a speaking machine. Whereas in his theoretical considerations von Kempelen focuses on the natural functioning of the speech organs (cf. his membraneous glottis model), in constructing his speaking machine he clearly orientates himself towards the auditory result (cf. the bagpipe model used instead as the sound generator of the speaking machine). Concerning vowel production, his theoretical description remains questionable, but his practical insight that vowels, and speech sounds in general, are only perceived correctly in connection with their surrounding sounds – i.e. the discovery of coarticulation – is clearly a milestone in the development of the phonetic sciences: he therefore dispensed with the Kratzenstein tubes, although they might have been based on more thorough acoustic modelling.
Finally, von Kempelen's model of speech production will be discussed in relation to the later debate on the acoustic nature of vowels (Willis and Wheatstone as well as von Helmholtz and Hermann in the 19th century, and Stumpf, Chiba & Kajiyama as well as Fant and Ungeheuer in the 20th century).
In order to understand the functional morphology of the human voice-producing system, we need data on the vocal tract anatomy of other mammalian species. The larynges and vocal tracts of four species of Artiodactyla were investigated in combination with acoustic analyses of their respective calls. Different evolutionary specializations of laryngeal characters may lead to similar effects on sound production. In the investigated species, such specializations are: the elongation and mass increase of the vocal folds, the volume increase of the laryngeal vestibulum through an enlarged thyroid cartilage, and the formation of laryngeal ventricles. Both the elongation of the vocal folds and the increase of the oscillating masses lower the fundamental frequency, whereas the influence of an increased volume of the laryngeal vestibulum on sound production remains unclear. The anatomical and acoustic results are presented together with considerations of the habitats and mating systems of the respective species.
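The inverse dependence of fundamental frequency on vocal-fold length and oscillating mass is often illustrated with the ideal-string approximation used in voice research (not a formula from this paper):

f0 = 1/(2L) · √(σ/ρ)

where L is the length of the vibrating folds, σ the longitudinal tissue stress, and ρ the tissue density. Longer folds (larger L) and a greater oscillating mass per unit length (larger ρ for given dimensions) both lower f0, which is the pattern the laryngeal specializations described above exploit.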
Low-dimensional, speaker-independent linear vocal tract parametrizations can be obtained using the 3-mode PARAFAC factor-analysis procedure first introduced by Harshman et al. (1977) and discussed in a series of subsequent papers in the Journal of the Acoustical Society of America (Jackson (1988), Nix et al. (1996), Hoole (1999), Zheng et al. (2003)). Nevertheless, some important questions have been left unanswered; for example, none of the papers using this method has provided a consistent interpretation of the terms usually referred to as "speaker weights". This study explores what influences their reliability, as a first step towards a consistent interpretation. With this in mind, we undertook a systematic comparison of the classical PARAFAC1 algorithm with a relaxed version of it, PARAFAC2. This comparison was carried out on two different corpora acquired with the articulograph, which varied in vowel qualities, consonantal contexts, and the paralinguistic features of accent and speech rate. The difference between the two statistical approaches can be described roughly as follows: in PARAFAC1, observation units pertain to the same set of variables and the observation units are comparable; in PARAFAC2, observations pertain to the same set of variables, but the observation units are not comparable. The latter situation is easily conceivable in the setting we are describing: our operationalization relies on the comparability of fleshpoint data acquired from different speakers, which need not be a good assumption, owing to influences such as sensor placement and morphological conditions.
In particular, the comparison between the two approaches is carried out by means of so-called "leverages" on the different component matrices. Leverages originate in regression analysis, are calculated as v = diag(A(AᵀA)⁻¹Aᵀ), and deliver information on how "influential" a particular loading matrix is for the model. This analysis could in principle be carried out component by component, but we confined ourselves to effects on the global factor structure. For vowels, the most influential loadings are those for the tense cognates of non-palatal vowels. For speakers, the most prominent result is the relative absence of effects of the paralinguistic variables. Results generally indicate that the model specification (i.e. PARAFAC1 or PARAFAC2) has rather little influence on the vowel and subject components. The patterns for the articulators indicate strong differences between speakers with respect to the most influential measurement, as revealed by PARAFAC2: in particular, the most influential y-contribution is the tongue back for some speakers and the tongue dorsum for others. With respect to the speaker weights, again, the leverage patterns are very similar for both PARAFAC versions. These patterns converge with the results of the loading plots, where the articulator profiles seem to be most altered by the use of PARAFAC2. These findings are generally interpreted as evidence for the reliability of the PARAFAC1 speaker weights.
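The leverage formula is a direct matrix computation; a minimal sketch, with a random loading matrix standing in for a fitted PARAFAC component matrix:

```python
# Sketch of the leverage computation v = diag(A (A'A)^-1 A') for a loading
# matrix A; high values flag influential rows. A is random here, standing
# in for a fitted PARAFAC component matrix.
import numpy as np

A = np.random.randn(10, 3)                  # 10 observations, 3 components
leverages = np.diag(A @ np.linalg.inv(A.T @ A) @ A.T)
print(leverages)                            # each value lies between 0 and 1
```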
Four speakers repeated fifteen sentences containing 'pVp' syllables (V being /a/, /i/ or /u/) eight times. The 'pVp' syllables were located in final, penultimate, and antepenultimate position relative to the Intonational Phrase (IP) boundary. They were embedded in lexical words of one to three syllables and were either word-initial or word-final. Results show that the closer a vowel in word-final position is to the IP boundary, the longer its duration and the higher its fundamental frequency; it is also characterised by larger lip-opening gestures. The potential reduction or coarticulation of vowels in word-initial position compared with their counterparts in word-final position is discussed.
A survey of 170 Tibeto-Burman languages showed 69 with a distinction between inclusive and exclusive first-person plural pronouns, 18 of which also show the inclusive-exclusive distinction in the dual. Only the Kiranti languages and some Chin languages have inclusive-exclusive in the person marking. Of the forms of the pronouns involved in the inclusive-exclusive opposition, the exclusive form is usually less marked and historically prior to the inclusive form, and we find that the distinction cannot be reconstructed for Proto-Tibeto-Burman or for mid-level groupings. Only the Kiranti group has marking of the distinction that can be reconstructed to the proto level, and this is also reflected in the person-marking system.
Typology and complexity
(2005)
For the Workshop I was asked to talk about complexity in language from a typological perspective. My way of approaching this topic was to ask myself some questions and then see where the answers led. The first one was, of course: "What sort of system are we looking at complexity in – what kind of system is language?"
Chao Yuen Ren (1892–1982)
(2005)
Y. R. Chao is easily the most famous linguist to have come out of China. Born before the end of the last dynasty in China, he received a traditional Confucian education, but was also one of the first Chinese people to be sent to the West for training in modern Western science (under the Boxer Indemnity Fund). The remarkable breadth and scope of his studies included physics, mathematics, linguistics, musical and literary composition, and translation, and he was a pioneer in many of these fields.
Articulatory token-to-token variability depends not only on linguistic aspects such as the phoneme inventory of a given language but also on speaker-specific morphological and motor constraints. As has been noted previously (Perkell (1997), Mooshammer et al. (2004)), speakers with coronally high, "dome-shaped" palates exhibit more articulatory variability than speakers with coronally low, "flat" palates. One explanation is based on perception-oriented control by the speaker: the influence of articulatory variation on the cross-sectional area, and consequently on the acoustics, should be greater for flat palates than for dome-shaped ones. This should force speakers with flat palates to place their tongue very precisely, whereas speakers with dome-shaped palates might tolerate greater variability. A second explanation could be a greater amount of lateral linguo-palatal contact for flat palates, holding the tongue in position. In this study, both hypotheses were tested.
In order to investigate the influence of palate shape on the variability of the acoustic output, a modelling study was carried out. In parallel, an EPG experiment was conducted to investigate the relationship between palate shape, articulatory variability, and linguo-palatal contact.
Results from the modelling study suggest that the acoustic variability resulting from a given amount of articulatory variability is higher for flat palates than for dome-shaped ones. Results from the EPG experiment with 20 speakers show that (1) speakers with a flat palate exhibit very low articulatory variability, whereas variability differs among speakers with a dome-shaped palate, (2) there is less articulatory variability where there is extensive linguo-palatal contact, and (3) there is no relationship between the amount of lateral linguo-palatal contact and palate shape. The results suggest a relationship between token-to-token variability and palate shape; however, the two parameters do not simply correlate. Rather, speakers with a flat palate always have low variability, because of constraints on the variability range of the acoustic output, whereas speakers with a dome-shaped palate may choose their degree of variability. Since linguo-palatal contact and variability correlate, linguo-palatal contact is assumed to be a means of reducing articulatory variability.
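The contact-variability relationship mentioned last is the kind of result a simple correlation test establishes; a sketch with invented per-speaker values (not the study's measurements):

```python
# Sketch: correlating per-speaker articulatory variability with the amount
# of linguo-palatal contact. The 20 values per variable are invented.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
contact = rng.uniform(0.2, 0.8, 20)                   # contact proportion
variability = 1.0 - contact + rng.normal(0, 0.1, 20)  # toy inverse relation
r, p = pearsonr(contact, variability)
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: more contact, less variability
```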
The author presents MASSY, the MODULAR AUDIOVISUAL SPEECH SYNTHESIZER. The system combines two approaches to visual speech synthesis. Two control models are implemented: a data-based di-viseme model and a rule-based dominance model, both of which produce control commands in a parameterized articulation space. Analogously, two visualization methods are implemented: an image-based (video-realistic) face model and a 3D synthetic head. Both face models can be driven by both the data-based and the rule-based articulation model.
The high-level visual speech synthesis generates a sequence of control commands for the visible articulation. For every virtual articulator (articulation parameter), the 3D synthetic face model defines a set of displacement vectors for the vertices of the 3D objects of the head. The vertices of the 3D synthetic head are then moved by linear combinations of these displacement vectors to visualize articulation movements. For the image-based video synthesis, a single reference image is deformed to fit the facial properties derived from the control commands; facial feature points and facial displacements have to be defined for the reference image. The algorithm can also use an image database with appropriately annotated facial properties, and an example database was built automatically from video recordings. Both the 3D synthetic face and the image-based face generate visual speech that is capable of increasing the intelligibility of audible speech.
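The vertex-deformation scheme is a plain linear combination; a minimal sketch, with mesh shapes and parameter names that are illustrative rather than MASSY's actual identifiers:

```python
# Sketch of the visualization step: vertex positions are the neutral mesh
# plus a linear combination of per-parameter displacement vectors.
# Shapes and parameter names are illustrative, not MASSY's own.
import numpy as np

n_vertices = 1000
neutral = np.zeros((n_vertices, 3))              # neutral 3D head mesh
displacements = {                                # one field per articulator
    "jaw_opening": np.random.randn(n_vertices, 3) * 0.01,
    "lip_rounding": np.random.randn(n_vertices, 3) * 0.01,
}

def deform(params):
    """Apply articulation parameter values to the neutral mesh."""
    mesh = neutral.copy()
    for name, value in params.items():
        mesh += value * displacements[name]
    return mesh

frame = deform({"jaw_opening": 0.7, "lip_rounding": 0.2})
print(frame.shape)                               # one deformed mesh frame
```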
Other well-known image-based audiovisual speech synthesis systems, such as MIKETALK and VIDEO REWRITE, concatenate pre-recorded single images or video sequences, respectively. Parametric talking heads such as BALDI control a parametric face with a parametric articulation model. The presented system demonstrates the compatibility of parametric and data-based approaches to visual speech synthesis.
The goal of our current project is to build a system that can learn to imitate a version of a spoken utterance using an articulatory speech synthesiser. The approach is informed and inspired by knowledge of early infant speech development; thus we expect our system to reproduce and exploit the utility of infant behaviours such as listening, vocal play, babbling, and word imitation. We expect our system to develop a relationship between the sound-making capabilities of its vocal tract and the phonetic/phonological structure of imitated utterances. At the heart of our approach is the learning of an inverse model that relates acoustic and motor representations of speech. The acoustic-to-auditory mapping uses an auditory filter bank and a self-organizing phase of learning. The inverse model from auditory representations to vocal tract control parameters is estimated during a babbling phase, in which the vocal tract is essentially driven in a random manner, much like the babbling phase of speech acquisition in infants. The complete system can be used to imitate simple utterances through a direct mapping from sound to control parameters. Our initial results show that this procedure works well for sounds generated by the system's own voice. Further work is needed to build a phonological control level and to achieve better performance with real speech.
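The babbling-based inverse model can be sketched in its simplest form: drive a synthesiser with random motor parameters, store the resulting (auditory, motor) pairs, and invert by nearest neighbour. Here `synthesise` is a hypothetical stand-in for the articulatory synthesiser and all dimensions are invented:

```python
# Sketch of the babbling phase and nearest-neighbour inverse model.
# `synthesise` is a hypothetical placeholder mapping motor parameters to
# auditory features; the real system uses an articulatory synthesiser.
import numpy as np

def synthesise(motor):
    return np.tanh(motor @ np.array([[0.8, 0.2], [0.3, 0.9]]))

rng = np.random.default_rng(1)
motor_samples = rng.uniform(-1, 1, size=(5000, 2))   # random babbling
auditory_samples = synthesise(motor_samples)         # remembered outcomes

def imitate(target_auditory):
    """Inverse model: pick the motor command whose sound is closest."""
    i = np.argmin(np.linalg.norm(auditory_samples - target_auditory, axis=1))
    return motor_samples[i]

target = synthesise(np.array([[0.5, -0.3]]))[0]      # a sound to imitate
print(imitate(target))                               # recovered motor params
```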
It is one of the most hotly debated issues in loanword phonology whether loanword adaptations are phonologically or phonetically driven. This paper addresses this issue and aims to demonstrate that only the acceptance of both a phonological and a phonetic approximation stance can adequately account for the data found in Japanese. This point will be exemplified with the adaptation of German and French mid front rounded vowels in Japanese. It will be argued that the adaptation of German /œ/ and /ø/ as Japanese /e/ is phonologically grounded, whereas the adaptation of French /œ/ and /ø/ as Japanese /u/ is phonetically grounded. This asymmetry in the adaptation of German and French mid front rounded vowels, together with further examples of loans in Japanese, leads to the conclusion that both strategies of loanword adaptation occur in languages. It will be shown that not only perception but also the influence of orthography, of conventions, and of knowledge of the source language plays a role in the adaptation process.