Low-dimensional and speaker-independent linear vocal tract parametrizations can be obtained using the 3-mode PARAFAC factor analysis procedure first introduced by Harshman et al. (1977) and discussed in a series of subsequent papers in the Journal of the Acoustical Society of America (Jackson (1988), Nix et al. (1996), Hoole (1999), Zheng et al. (2003)). Nevertheless, some important questions have been left unanswered; for example, none of the papers using this method has provided a consistent interpretation of the terms usually referred to as "speaker weights". This study explores what influences their reliability, as a first step towards a consistent interpretation. With this in mind, we undertook a systematic comparison of the classical PARAFAC1 algorithm with a relaxed version of it, PARAFAC2. The comparison was carried out on two articulographic corpora which varied in vowel qualities, consonantal contexts, and the paralinguistic features accent and speech rate. The difference between the two statistical approaches can roughly be described as follows: in PARAFAC1, observations pertain to the same set of variables and the observation units are comparable; in PARAFAC2, observations pertain to the same set of variables, but the observation units are not comparable. The latter situation is easily conceived in the present setting: our operationalization relies on the comparability of fleshpoint data acquired from different speakers, an assumption that need not hold because of influences such as sensor placement and morphological conditions.
In particular, the comparison between the two approaches is carried out by means of so-called "leverages" on the different component matrices. Leverages originate in regression analysis, are calculated as v = diag(A(AᵀA)⁻¹Aᵀ), and deliver information on how "influential" the individual loadings of a matrix A are for the model. This analysis could potentially be carried out component by component, but we confined ourselves to effects on the global factor structure. For vowels, the most influential loadings are those for the tense cognates of non-palatal vowels. For speakers, the most prominent result is the relative absence of effects of the paralinguistic variables. Results generally indicate that there is rather little influence of the model specification (i.e. PARAFAC1 or PARAFAC2) on the vowel and subject components. The patterns for the articulators indicate that there are strong differences between speakers with respect to the most influential measurement as revealed by PARAFAC2: in particular, the most influential y-contribution is the tongue-back for some speakers and the tongue-dorsum for others. With respect to the speaker weights, again, the leverage patterns are very similar for both PARAFAC versions. These patterns converge with the results of the loading plots, where the articulator profiles seem to be most altered by the use of PARAFAC2. These findings are, in general, interpreted as evidence for the reliability of the PARAFAC1 speaker weights.
This work investigates laryngeal and supralaryngeal correlates of the voicing contrast in alveolar obstruent production in German. It further studies the laryngeal-oral co-ordination observed in such productions. Three positions of the obstruents are taken into account: the stressed syllable-initial position, the post-stressed intervocalic position, and the post-stressed word-final position. For the latter, the phonological rule of final devoicing applies in German. The different positions are chosen in order to test the following hypotheses:
1. The presence/absence of glottal opening is not a consistent correlate of the voicing contrast in German.
2. Supralaryngeal correlates are also involved in the contrast.
3. Supralaryngeal correlates can compensate for the lack of distinction in laryngeal adjustment.
Including the word-final position is motivated by the question of whether neutralization in this position is complete or whether some articulatory residue of the contrast can be found.
Two experiments are carried out. The first experiment investigates glottal abduction in co-ordination with tongue-palate contact patterns by means of simultaneous recordings of transillumination, fiberoptic films and Electropalatography (EPG). The second experiment focuses on supralaryngeal correlates of alveolar stops, studied by means of Electromagnetic Articulography (EMA) simultaneously with EPG. Three German native speakers participated in both recordings. The results provide evidence that the first hypothesis holds true for alveolar stops when the different positions are taken into account. It is also confirmed for fricative production, since both voiceless and voiced fricatives are realised with glottal abduction most of the time. Additionally, supralaryngeal correlates are involved in the voicing contrast in two respects. First, laryngeal and supralaryngeal movements are well synchronised in voiceless obstruent production, particularly in the stressed position. Second, supralaryngeal correlates occur especially in the post-stressed intervocalic position. Results are discussed with respect to the phonetics-phonology interface, the role of timing and its possible control, interarticulatory co-ordination, and stress as 'localised hyperarticulation'.
This special issue of the ZAS Papers in Linguistics contains a collection of papers from the French-German Thematic Summerschool on "Cognitive and physical models of speech production, and speech perception and of their interaction".
Organized by Susanne Fuchs (ZAS Berlin), Jonathan Harrington (IPdS Kiel), Pascal Perrier (ICP Grenoble) and Bernd Pompino-Marschall (HUB and ZAS Berlin) and funded by the German-French University in Saarbrücken, this summerschool was held from September 19th to 24th, 2004 on the coast of the Baltic Sea at the Heimvolkshochschule Lubmin (Germany), with 45 participants from Germany, France, Great Britain, Italy and Canada. The scientific program of the summerschool, which is reprinted at the end of this volume, included 11 keynote presentations by invited speakers, 21 oral presentations and a poster session (8 presentations). The names and addresses of all participants are also given in the back matter of this volume.
All participants were offered the opportunity to publish an extended version of their presentation in the ZAS Papers in Linguistics. All submitted papers underwent review and editing by external experts and the organizers of the summerschool. As is typical for a summerschool, the papers present work in progress, work at a more advanced stage, or tutorials. They are ordered alphabetically by first author's name, which fortunately means that this special issue opens with the paper that won the award for best pre-doctoral presentation: Sophie Dupont, Jérôme Aubin and Lucie Ménard, "A study of the McGurk effect in 4 and 5-year-old French Canadian children".
It has been shown that visual cues play a crucial role in the perception of vowels and consonants. Conflicting consonantal stimuli presented in the visual and auditory modalities can even result in the emergence of a third perceptual unit (McGurk effect). From a developmental point of view, several studies report that newborns can associate the image of a face uttering a given vowel to the auditory signal corresponding to this vowel; visual cues are thus used by the newborns. Despite the large number of studies carried out with adult speakers and newborns, very little work has been conducted with preschool-aged children. This contribution is aimed at describing the use of auditory and visual cues by 4 and 5-year-old French Canadian speakers, compared to adult speakers, in the identification of voiced consonants. Audiovisual recordings of a French Canadian speaker uttering the sequences [aba], [ada], [aga], [ava], [ibi], [idi], [igi], [ivi] have been carried out. The acoustic and visual signals have been extracted and analysed so that conflicting and non-conflicting stimuli, between the two modalities, were obtained. The resulting stimuli were presented as a perceptual test to eight 4 and 5-year-old French Canadian speakers and ten adults in three conditions: visual-only, auditory-only, and audiovisual. Results show that, even though the visual cues have a significant effect on the identification of the stimuli for adults and children, children are less sensitive to visual cues in the audiovisual condition. Such results shed light on the role of multimodal perception in the emergence and the refinement of the phonological system in children.
In this paper the issue of the nature of the representations of the speech production task in the speaker's brain is addressed in a production-perception interaction framework. Since speech is produced to be perceived, it is hypothesized that its production is associated for the speaker with the generation of specific physical characteristics that are for the listeners the objects of speech perception. Hence, in the first part of the paper, four reference theories of speech perception are presented, in order to guide and to constrain the search for possible correlates of the speech production task in the physical space: the Acoustic Invariance Theory, the Adaptive Variability Theory, the Motor Theory and the Direct-Realist Theory. Possible interpretations of these theories in terms of representations of the speech production task are proposed and analyzed. In a second part, a few selected experimental studies are presented, which shed some light on this issue. In the conclusion, on the basis of the joint analysis of theoretical and experimental aspects presented in the paper, it is proposed that representations of the speech production task are multimodal, and that a hierarchy exists among the different modalities, the acoustic modality having the highest level of priority. It is also suggested that these representations are not associated with invariant characteristics, but with regions of the acoustic, orosensory and motor control spaces.
A fundamental question in the study of speech concerns the invariance of the ultimate percepts, or features. The present paper gives an overview of the non-invariance problem and offers some hints towards a solution. Examination of various data on place and voicing perception suggests the following points. Features correspond to natural boundaries between sounds, which are included in the infant's predispositions for speech perception. Adult percepts arise from couplings and contextual interactions between features. Both couplings and interactions contribute to invariance. But this comes at the expense of profound qualitative changes in perceptual boundaries, implying that features are neither independently nor invariantly perceived. The question then is to understand the principles which guide feature couplings and interactions during perceptual development. The answer might reside in the fact that: (1) adult boundaries converge to a single point of the perceptual space, suggesting a context-free central reference; (2) this point corresponds to the neutral vocoid, suggesting the reference is related to production; (3) at this point perceptual boundaries correspond to the natural ones, suggesting the reference is anchored in predispositions for feature perception. In sum, perceptual invariance seems to be grounded in a radial representation of the vocal tract around a singular point at which boundaries are context-free, natural and coincide with the neutral vocoid.
This paper presents the results of Open Quotient (OQ) measurements in EGG signals of young (18 to 30 years old) and elderly (59 to 82 years old) male and female speakers. The paper further presents quantitative results on the relation between the OQ and the perception of a speaker's age. Higgins & Saxman (1991) found a decreased OQEGG with increasing age for females, whereas the OQEGG in sustained vowel material increased for males as the speakers' age increased. In Linville (2002), however, the spectral amplitudes in the region of F0 (obtained by LTAS measurements of read speech material) increased with increasing age independent of gender; this can be interpreted indirectly as an increasing OQ. We measured the OQEGG not only for sustained vowels but also in vowels taken from isolated words. In order to analyse the relation between breathiness, in terms of an increased OQ, and the mean perceived age per stimulus, a perception test was carried out in which listeners were asked to estimate the speaker's age based on sustained /a/-vowel stimuli varying in vocal effort (soft - normal - loud) during production. The results indicated the following: (i) The decreased OQ for elderly females originally found by Higgins & Saxman is not apparent in our data for sustained /a/-vowels. For our female speakers no significant difference between the OQ of young and old speakers was found; for elderly males, however, we also found an increasing OQ with increasing age. (ii) In addition, a statistically significant increase of the OQEGG occurs for the group of elderly males for the vowels from the word material. (iii) Our results show a strong positive relation between perceived age and OQ in male voices. Regarding (i) and (ii), at least the male speakers' voices become more breathy as age increases. Considering (iii), increased breathiness may contribute to the listener's perception of increased age.
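As an illustration of the measurement itself, the OQEGG can be estimated per glottal cycle as the fraction of the period during which the folds are open. The sketch below uses a simple level-threshold criterion on one normalized EGG cycle; the 35% threshold and the synthetic signal are assumptions for illustration only, since published studies differ in the exact criterion:

```python
import numpy as np

def open_quotient(egg_cycle, threshold=0.35):
    """Estimate the Open Quotient for one glottal cycle of an EGG signal.

    Samples where vocal-fold contact (high EGG amplitude) stays below
    `threshold` of the cycle's peak-to-peak range count as the open
    phase.  The 35% level is one common choice, not a universal one.
    """
    x = np.asarray(egg_cycle, dtype=float)
    lo, hi = x.min(), x.max()
    level = lo + threshold * (hi - lo)
    open_samples = np.sum(x < level)          # samples in the open phase
    return open_samples / len(x)              # open phase / period

# Synthetic cycle: contact (high) for 40% of the period, open for 60%.
cycle = np.concatenate([np.ones(40), np.zeros(60)])
oq = open_quotient(cycle)                     # expected ≈ 0.6
```

A breathier voice keeps the glottis open longer per cycle, so an increased OQ value is what the perception test above relates to perceived age.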
Studying kinematic behavior in speech production is an indispensable and fruitful methodology for describing, for instance, phonemic contrasts, allophonic variations, and prosodic effects in articulatory movements. More intriguingly, kinematic behavior is also interpreted with respect to its underlying control mechanisms. Several interpretations have been borrowed from motor control studies of arm, eye, and limb movements. They either explain kinematics with respect to fine-tuned control by the Central Nervous System (CNS), or they take into account a combination of influences arising from motor control strategies at the CNS level and from the complex physical properties of the peripheral speech apparatus. We assume that the latter is more realistic and ecological. The aims of this article are: first, to show, via a literature review of the so-called '1/3 power law' in human arm motor control, that this debate is of prime importance in human motor control research in general; second, to study a number of speech-specific examples offering a fruitful framework to address this issue. However, it is also suggested that speech motor control differs from general motor control principles in the sense that it uses specific physical properties, such as vocal tract limitations, aerodynamics and biomechanics, in order to produce the relevant sounds. Third, experimental and modelling results are described supporting the idea that these three properties are crucial in shaping speech kinematics for selected speech phenomena. Hence, caution should be taken when interpreting kinematic results based on experimental data alone.
Syllable cut is said to be a phonologically distinctive feature in languages where, as in German, the difference in vowel quantity is accompanied by a difference in vowel quality. There have been several attempts to find the corresponding phonetic correlates of syllable cut, of which the energy measurements of vowels by Spiekermann (2000) proved appropriate for explaining the difference between long (smoothly cut) and short (abruptly cut) vowels: in smoothly cut vowels, a larger number of peaks was counted in the energy contour, these peaks were located further back than in abruptly cut segments, and the overall energy was more constant throughout the entire nucleus. On this basis, we set out to compare German, a syllable cut language, with Hungarian, where the feature was not expected to be relevant. However, the phonetic correlates of syllable cut found in this study do not entirely confirm Spiekermann's results. It seems that the energy features of vowels are more strongly connected to their duration than to their quality.
This study reports on the results of an airflow experiment that measured the duration of airflow and the amount of air from the release of a stop to the beginning of a following vowel in stop-vowel sequences of German. The sequences involved coronal, labial and velar voiced and voiceless stops followed by the vocoids /j, i:, ı, ɛ, ʊ, a/. The experiment tested the influence of three factors (voicing of the stop, place of stop articulation, and the following vocoid context) on the duration and amount of air, as a possible explanation for assibilation processes. The results show that voiceless stops are associated with a longer duration and more air in the release phase than voiced ones. For the influence of the vocoids, a significant difference could be established between /j/ and all other vocoids for the duration of the release phase. This difference could not be found for the amount of air over this duration. The place of articulation had only restricted influence: velars resulted in a significantly longer duration of the release phase compared to non-velars, but a significant difference in the amount of air between the places of articulation could not be found.
The present article is a follow-up study of the investigation of labiodentals in German and Dutch by Hamann & Sennema (2005), where we looked at the perception of the Dutch labiodental three-way contrast by German listeners without any knowledge of Dutch and German learners of Dutch. The results of this previous study suggested that the German voiced labiodental fricative /v/ is perceptually closer to the Dutch approximant /ʋ/ than to the corresponding Dutch voiced labiodental fricative /v/. These perceptual indications are attested by the acoustic findings in the present study. German /v/ has a similar harmonicity median and a similar centre of gravity to Dutch /ʋ/, but differs from Dutch /v/ in these parameters. With respect to the acoustic parameter of duration, German /v/ lies closer to the Dutch /v/ than to the Dutch /ʋ/.
(Non)retroflexivity of Slavic affricates and its motivation: Evidence from Polish and Czech <č>
(2005)
The goal of this paper is two-fold. First, it revises the common assumption that the affricate <č> denotes /t͡ʃ/ for all Slavic languages. On the basis of experimental results it is shown that Slavic <č> stands for two sounds: /t͡ʃ/ as e.g. in Czech and /ʈʂ/ as in Polish.
The second goal of the paper is to show that this difference is not accidental but motivated by perceptual relations among sibilants. In Polish, /t͡ʃ/ changed to /ʈʂ/, thus lowering its sibilant tonality and creating a better perceptual distance to /tɕ/, whereas in Czech /t͡ʃ/ did not turn into /ʈʂ/, as it already displayed sufficient perceptual distance to the only other affricate in the inventory, namely the alveolar /t͡s/. Finally, an analysis of the Czech and Polish affricate inventories is offered.
While the perilinguistic child is endowed with predispositions for the categorical perception of phonetic features, their adaptation to the native language results from a long evolution from the end of the first year of life up to adolescence. This evolution entails a better discrimination between phonological categories, a concomitant reduction of the discrimination between within-category variants, and a higher precision of perceptual boundaries between categories. The first objective of the present study was to assess the relative importance of these modifications by comparing the perceptual performances of a group of 11 children, aged 8 to 11 years, with those of their mothers. Our second objective was to explore the functional implications of categorical perception by comparing the performances of a group of 8 deaf children equipped with a cochlear implant with those of normal-hearing chronological-age controls. The results showed that the categorical boundary was slightly more precise and that categorical perception was consistently stronger in adults than in normal-hearing children. Those among the deaf children who were able to discriminate minimal distinctions between syllables displayed categorical perception performances equivalent to those of normal-hearing controls. In conclusion, the late effect of age on the categorical perception of speech seems to be anchored in a fairly mature phonological system, as evidenced by the fairly high precision of categorical boundaries in pre-adolescents. These late developments have functional implications for speech perception in difficult conditions, as suggested by the relationship between categorical perception and speech intelligibility in children with cochlear implants.
This paper describes the processing of the MRI and CT images needed for developing a 3D linear articulatory model of the velum. The 3D surface that defines each organ constituting the vocal and nasal tracts is extracted from MRI and CT images recorded of a subject uttering a corpus of artificially sustained French vowels and consonants. First, the 2D contours of the organs were manually extracted from the corresponding images, expanded into 3D contours, and aligned in a common 3D coordinate system. Then, for each organ, a generic mesh was chosen and fitted by elastic deformation to each of the 46 3D shapes of the corpus. This finally resulted in a set of organ surfaces sampled with the same number of 3D vertices for each articulation, which is appropriate for Principal Component Analysis or linear decomposition. The analysis of these data has uncovered two main uncorrelated articulatory degrees of freedom for velum movement. The associated parameters are used to control the model. We have in particular investigated the question of a possible correlation between jaw/tongue movement and velum movement, and have not found more correlation than that already present in the corpus.
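Because every shape is sampled with the same vertices, the linear decomposition step reduces to a standard PCA over flattened meshes. The sketch below is a generic illustration with random stand-in data (not the paper's MRI/CT meshes): each shape becomes one row, and the leading components correspond to the uncovered degrees of freedom:

```python
import numpy as np

def pca(shapes, n_components=2):
    """PCA of identically sampled mesh shapes.

    `shapes` is an (n_shapes, n_vertices * 3) array: each row is one
    articulation's mesh with its (x, y, z) vertex coordinates flattened.
    Returns the mean shape, the leading components, and per-shape scores.
    """
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data yields components sorted by explained variance.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]            # (n_components, n_vertices*3)
    scores = Xc @ components.T                # control parameters per shape
    return mean, components, scores

rng = np.random.default_rng(0)
# 46 hypothetical velum shapes with 100 vertices each (stand-in data).
shapes = rng.normal(size=(46, 300))
mean, comps, scores = pca(shapes, n_components=2)
```

A shape is then resynthesized as `mean + scores[i] @ comps`, which is exactly how the two associated parameters can drive the model.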
This paper contributes to the understanding of vocal fold oscillation during phonation. In order to test theoretical models of phonation, a new experimental set-up using a deformable vocal fold replica is presented. The replica is shown to produce self-sustained oscillations under controlled experimental conditions. Thus, different parameters, such as those related to elasticity, to acoustical coupling or to the subglottal pressure, can be studied quantitatively. In this work we focused on the oscillation fundamental frequency and on the upstream pressure needed either to start (onset threshold) or to stop (offset threshold) oscillations in the presence of a downstream acoustical resonator. As an example, it is shown how these data can be used to test the theoretical predictions of a simple one-mass model.
A visual articulatory model and its application to the therapy of speech disorders: a pilot study
(2005)
A visual articulatory model based on static MRI data of isolated sounds, and its application in the therapy of speech disorders, is described. The model is capable of generating video sequences of articulatory movements or still images of articulatory target positions within the midsagittal plane. On the basis of this model, (1) a visual stimulation technique for the therapy of patients suffering from speech disorders and (2) a rating test for the visual recognition of speech movements were developed. Results indicate that patients produce recognition rates above chance level even without any training, and that patients are able to increase their recognition rate significantly over the time course of therapy.
In order to investigate the articulatory processes of the hasty and mumbled speech of clutterers, the kinematic variability was analysed by means of electromagnetic midsagittal articulography (EMMA). In contrast to stutterers, clutterers improve their intelligibility by concentrating on their speech task. Variability is an important criterion in comparable studies of stuttering and is discussed in terms of the stability of the speech motor system. The aim of the current study was to analyse the spatial and temporal variability in the speech of three clutterers and three control speakers. All speakers were native speakers of German. The speech material consisted of repetitive CV-syllables and foreign words, because clutterers have the most severe problems with long words which have a complex syllable structure. The results showed a higher quotient of variation for clutterers in the foreign word production. For the syllable repetition task, no significant differences between clutterers and controls were found. The extremely large and variable displacements were interpreted as a strategy that helps clutterers to improve the intelligibility of their speech.
This paper summarizes our research efforts in functional modelling of the relationship between the acoustic properties of vowels and perceived vowel quality. Our model is trained on 164 short steady-state stimuli. We measured F1, F2, and additionally F0 since the effect of F0 on perceptual vowel height is evident. 40 phonetically skilled subjects judged vowel quality using the Cardinal Vowel diagram. The main focus is on refining the model and describing its transformation properties between the F1/F2 formant chart and the Cardinal Vowel diagram. An evaluation of the model based on 48 additional vowels showed the generalizability of the model and confirmed that it predicts perceived vowel quality with sufficient accuracy.
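One minimal way to learn such a transformation from the F0/F1/F2 space to diagram coordinates is an affine least-squares map. The paper's actual model is more refined than this, and the training values below are invented purely for illustration:

```python
import numpy as np

def fit_vowel_map(feats, chart_xy):
    """Least-squares affine map from (F0, F1, F2) to chart coordinates.

    `feats` is (n, 3) in Hz, `chart_xy` is (n, 2) with judged positions
    in a vowel diagram.  Returns a (4, 2) weight matrix W; predictions
    are obtained as [F0, F1, F2, 1] @ W.
    """
    F = np.column_stack([feats, np.ones(len(feats))])   # affine term
    W, *_ = np.linalg.lstsq(F, chart_xy, rcond=None)
    return W

# Hypothetical training data: 4 stimuli with judged chart positions.
feats = np.array([[120.0, 300.0, 2300.0],
                  [120.0, 700.0, 1200.0],
                  [200.0, 350.0,  800.0],
                  [200.0, 600.0, 1800.0]])
chart = np.array([[0.0, 0.0], [0.5, 1.0], [1.0, 0.1], [0.6, 0.8]])
W = fit_vowel_map(feats, chart)
pred = np.column_stack([feats, np.ones(4)]) @ W
```

Including F0 as a predictor is what lets such a map capture the F0 effect on perceived vowel height that the abstract mentions.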
We measure face deformations during speech production using a motion capture system, which provides 3D coordinate data for about 60 markers glued to the speaker's face. An arbitrary orthogonal factor analysis followed by a principal component analysis (together called a guided PCA) of the data showed that the first 6 factors explain about 90% of the variance for each of our 3 speakers. The 6 derived factors therefore allow us to efficiently analyze, or to reconstruct with reasonable accuracy, the observed face deformations. Since these factors can be interpreted in articulatory terms, they can reveal underlying articulatory organizations. The comparison of lip gestures in terms of data-derived factors suggests that these speakers maneuver the lips differently to achieve the contrast between /s/ and /R/. Such inter-speaker variability can occur because the acoustic contrast of these fricatives is shaped not only by the lip tube but also by cavities inside the mouth, such as the sublingual cavity. In other words, the tube and the cavities can acoustically compensate for each other to produce the required acoustic properties.
The contribution of von Kempelen's "Mechanism of Speech" to the 'phonetic sciences' will be analyzed with respect to his theoretical reasoning on speech and speech production on the one hand and, on the other, in connection with the practical insights gained during his struggle to construct a speaking machine. Whereas in his theoretical considerations von Kempelen's view is focussed on the natural functioning of the speech organs – cf. his membranous glottis model – in constructing his speaking machine he clearly orientates himself towards the auditory result – cf. the bagpipe model used instead for the sound generator of the speaking machine. Concerning vowel production, his theoretical description remains questionable, but his practical insight that vowels and speech sounds in general are only perceived correctly in connection with their surrounding sounds – i.e. the discovery of coarticulation – is clearly a milestone in the development of the phonetic sciences: he therefore dispenses with the Kratzenstein tubes, although they might have been based on more thorough acoustic modelling.
Finally, von Kempelen's model of speech production will be discussed in relation to the discussion of the acoustic nature of vowels afterwards [Willis and Wheatstone as well as von Helmholtz and Hermann in the 19th century and Stumpf, Chiba & Kajiyama as well as Fant and Ungeheuer in the 20th century].
In order to understand the functional morphology of the human voice producing system, we are in need of data on the vocal tract anatomy of other mammalian species. The larynges and vocal tracts of four species of Artiodactyla were investigated in combination with acoustic analyses of their respective calls. Different evolutionary specializations of laryngeal characters may lead to similar effects on sound production. In the investigated species, such specializations are: the elongation and mass increase of the vocal folds, the volume increase of the laryngeal vestibulum by an enlarged thyroid cartilage and the formation of laryngeal ventricles. Both the elongation of the vocal folds and the increase of the oscillating masses lower the fundamental frequency. The influence of an increased volume of the laryngeal vestibulum on sound production remains unclear. The anatomical and acoustic results are presented together with considerations about the habitats and the mating systems of the respective species.
Four speakers produced 8 repetitions of 15 sentences containing 'pVp' syllables (V being /a/, /i/ or /u/). The 'pVp' syllables were located in final, penultimate and antepenultimate position relative to the Intonational Phrase (IP) boundary. They were embedded in lexical words of 1-3 syllables and were either word-initial or word-final. Results show that the closer a vowel in word-final position is to the IP boundary, the longer its duration and the higher its fundamental frequency; it is also characterised by larger lip opening gestures. The potential reduction or coarticulation of vowels in word-initial position compared to their counterparts in word-final position is discussed.
Articulatory token-to-token variability depends not only on linguistic aspects, like the phoneme inventory of a given language, but also on speaker-specific morphological and motor constraints. As has been noted previously (Perkell (1997), Mooshammer et al. (2004)), speakers with coronally high "domeshaped" palates exhibit more articulatory variability than speakers with coronally low "flat" palates. One explanation for this is based on perception-oriented control by the speaker: the influence of articulatory variation on the cross-sectional area, and consequently on the acoustics, should be greater for flat palates than for domeshaped ones. This should force speakers with flat palates to place their tongue very precisely, whereas speakers with domeshaped palates might tolerate a greater variability. A second explanation could be a greater amount of lateral linguo-palatal contact for flat palates, holding the tongue in position. In this study both hypotheses were tested.
In order to investigate the influence of palate shape on the variability of the acoustic output, a modelling study was carried out. In parallel, an EPG experiment was conducted to investigate the relationship between palate shape, articulatory variability and linguo-palatal contact.
Results from the modelling study suggest that the acoustic variability resulting from a certain amount of articulatory variability is higher for flat palates than for domeshaped ones. Results from the EPG experiment with 20 speakers show that (1) speakers with a flat palate exhibit very low articulatory variability whereas speakers with a domeshaped palate vary, (2) there is less articulatory variability when there is extensive linguo-palatal contact, and (3) there is no relationship between the amount of lateral linguo-palatal contact and palate shape. The results suggest that there is a relationship between token-to-token variability and palate shape; however, the two parameters do not simply correlate: speakers with a flat palate always have low variability because of constraints on the variability range of the acoustic output, whereas speakers with a domeshaped palate may choose their degree of variability. Since linguo-palatal contact and variability correlate, it is assumed that linguo-palatal contact is a means of reducing articulatory variability.
The author presents MASSY, the MODULAR AUDIOVISUAL SPEECH SYNTHESIZER. The system combines two approaches to visual speech synthesis. Two control models are implemented: a data-based di-viseme model and a rule-based dominance model, both of which produce control commands in a parameterized articulation space. Analogously, two visualization methods are implemented: an image-based (video-realistic) face model and a 3D synthetic head. Both face models can be driven by either the data-based or the rule-based articulation model.
The high-level visual speech synthesis generates a sequence of control commands for the visible articulation. For every virtual articulator (articulation parameter), the 3D synthetic face model defines a set of displacement vectors for the vertices of the 3D objects of the head. The vertices of the 3D synthetic head are then moved by linear combinations of these displacement vectors to visualize articulation movements. For the image-based video synthesis, a single reference image is deformed to fit the facial properties derived from the control commands. Facial feature points and facial displacements have to be defined for the reference image. The algorithm can also use an image database with appropriately annotated facial properties; an example database was built automatically from video recordings. Both the 3D synthetic face and the image-based face generate visual speech that is capable of increasing the intelligibility of audible speech.
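The vertex-blending idea described above can be illustrated with a minimal sketch. This is not MASSY's implementation; the function and parameter names, the toy mesh, and the displacement values are all hypothetical, and only the core operation (adding a weighted sum of per-parameter displacement vectors to a base mesh) follows the description:

```python
import numpy as np

def blend_vertices(base_vertices, displacement_sets, weights):
    """Move mesh vertices by a weighted linear combination of
    per-parameter displacement vectors (one set per articulation
    parameter, e.g. jaw opening or lip rounding)."""
    out = base_vertices.astype(float).copy()
    for disp, w in zip(displacement_sets, weights):
        out += w * disp
    return out

# toy example: a 3-vertex mesh and two hypothetical articulation parameters
base = np.zeros((3, 3))
jaw  = np.array([[0.0, -1.0, 0.0], [0.0, -0.5, 0.0], [0.0, 0.0, 0.0]])
lips = np.array([[0.2,  0.0, 0.0], [0.0,  0.0, 0.0], [-0.2, 0.0, 0.0]])

# one animation frame: jaw half open, lips fully rounded
frame = blend_vertices(base, [jaw, lips], [0.5, 1.0])
```

Animating speech then amounts to varying the weight vector over time according to the control commands.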
Other well-known image-based audiovisual speech synthesis systems, such as MIKETALK and VIDEO REWRITE, concatenate pre-recorded single images or video sequences, respectively. Parametric talking heads such as BALDI control a parametric face with a parametric articulation model. The presented system demonstrates the compatibility of parametric and data-based approaches to visual speech synthesis.
The goal of our current project is to build a system that can learn to imitate a version of a spoken utterance using an articulatory speech synthesiser. The approach is informed and inspired by knowledge of early infant speech development. Thus we expect our system to reproduce and exploit the utility of infant behaviours such as listening, vocal play, babbling and word imitation. We expect our system to develop a relationship between the sound-making capabilities of its vocal tract and the phonetic/phonological structure of imitated utterances. At the heart of our approach is the learning of an inverse model that relates acoustic and motor representations of speech. The acoustic-to-auditory mapping uses an auditory filter bank and a self-organizing phase of learning. The inverse model from auditory to vocal tract control parameters is estimated using a babbling phase, in which the vocal tract is essentially driven in a random manner, much like the babbling phase of speech acquisition in infants. The complete system can be used to imitate simple utterances through a direct mapping from sound to control parameters. Our initial results show that this procedure works well for sounds generated by its own voice. Further work is needed to build a phonological control level and achieve better performance with real speech.
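The babbling-based inverse-model estimation can be sketched in a few lines. This is an illustrative toy, not the authors' system: the forward model `vocal_tract` is a stand-in for an articulatory synthesiser plus auditory filter bank, and the inverse model is reduced to a nearest-neighbour lookup over babbled (motor, acoustic) pairs rather than a self-organizing learner:

```python
import numpy as np

rng = np.random.default_rng(0)

def vocal_tract(motor):
    """Stand-in forward model: maps a motor parameter vector to an
    'acoustic' feature vector. Purely illustrative."""
    return np.tanh(motor @ np.array([[1.0, 0.3], [0.2, 1.0]]))

# babbling phase: drive the tract with random motor commands and
# record the resulting sounds
motor_samples = rng.uniform(-1, 1, size=(500, 2))
acoustic_samples = np.array([vocal_tract(m) for m in motor_samples])

def imitate(target_acoustic):
    """Inverse model via nearest neighbour in acoustic space: return
    the babbled motor command whose sound is closest to the target."""
    dists = np.linalg.norm(acoustic_samples - target_acoustic, axis=1)
    return motor_samples[np.argmin(dists)]

# imitating a sound from its own voice should recover motor
# parameters close to those that produced it
true_motor = np.array([0.4, -0.2])
estimated_motor = imitate(vocal_tract(true_motor))
```

As in the paper's initial results, such a scheme works best for targets generated by the system's own voice, since real speech need not lie on the babbled manifold.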
It is one of the most highly debated issues in loanword phonology whether loanword adaptations are phonologically or phonetically driven. This paper addresses this issue and aims at demonstrating that only the acceptance of both a phonological and a phonetic approximation stance can adequately account for the data found in Japanese. This point will be exemplified with the adaptation of German and French mid front rounded vowels in Japanese. It will be argued that the adaptation of German /œ/ and /ø/ as Japanese /e/ is phonologically grounded, whereas the adaptation of French /œ/ and /ø/ as Japanese /u/ is phonetically grounded. This asymmetry in the adaptation process of German and French mid front rounded vowels, together with further examples of loans in Japanese, leads to the conclusion that both strategies of loanword adaptation occur in languages. It will be shown that not only perception, but also the influence of orthography, of conventions and of knowledge of the source language plays a role in the adaptation process.
The purpose of this dissertation is to defend the idea that the empirical responsibilities of binding theory can be handled in a more psychologically and historically realistic way when assigned to the field of pragmatics. In particular, I wish to show that Optimality Theory (OT) (Prince & Smolensky, 1993), the stochastic OT and Gradual Learning Algorithm of Boersma (1998), the recoverability-based OT of Wilson (2001) and Buchwald et al. (2002), the bidirectional OT of Blutner (2000b), and the Bidirectional Gradual Learning Algorithm of Jäger (2003a) can all participate in a formal framework in which one can spell out and justify the idea that the distributional behavior of bound pronouns and reflexives is a pragmatic phenomenon.
This study investigates supralaryngeal mechanisms of the two-way voicing contrast among German velar stops and the three-way contrast among Korean velar stops, both in intervocalic position. Articulatory data obtained via electromagnetic articulography from three Korean speakers, and acoustic recordings of three Korean and three German speakers, are analysed. In both languages the voicing contrast was found to be created by more than one mechanism. However, for Korean velar stops in intervocalic position stop closure duration is the most important parameter, whereas for German it is closure voicing. The results support the phonological description proposed by Kohler (1984).
War and death in business: some remarks on the nature of conceptualisation in the field of economy
(2005)
This paper proposes an annotation scheme that encodes honorifics (respectful words). Honorifics are used extensively in Japanese, reflecting the social relationship (e.g. social rank and age) of the referents. This referential information is vital for resolving zero pronouns and improving machine translation output. Annotating honorifics is a complex task that involves identifying a predicate with honorifics, assigning ranks to the referents of the predicate, calibrating the ranks, and connecting referents with their predicates.
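The annotation steps listed above could be represented by a record like the following. This is a hypothetical sketch, not the paper's actual scheme; the class name, field names, and example values are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class HonorificAnnotation:
    """Hypothetical record for one honorific-marked predicate:
    identify the predicate, rank its referents, link them."""
    predicate: str
    honorific_type: str                  # e.g. "respectful" or "humble"
    referent_ranks: dict = field(default_factory=dict)  # referent -> relative rank
    links: list = field(default_factory=list)           # (referent, role) pairs

# example: the respectful verb "ossharu" ('say') marks its
# subject as socially superior to the speaker
ann = HonorificAnnotation(
    predicate="ossharu",
    honorific_type="respectful",
    referent_ranks={"sensei": 1, "watashi": 0},
    links=[("sensei", "subject")],
)
```

A resolver for zero pronouns could then prefer candidate referents whose rank is consistent with the honorific type of the governing predicate.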
The Deep Linguistic Processing with HPSG Initiative (DELPH-IN) provides the infrastructure needed to produce open-source semantic-transfer-based machine translation systems. We have made available a prototype Japanese-English machine translation system built from existing resources, including parsers, generators, bidirectional grammars and a transfer engine.
While the sortal constraints associated with Japanese numeral classifiers are well-studied, less attention has been paid to the details of their syntax. We describe an analysis implemented within a broad-coverage HPSG that handles an intricate set of numeral classifier construction types and compositionally relates each to an appropriate semantic representation, using Minimal Recursion Semantics.
Typology and complexity
(2005)
For the Workshop I was asked to talk about complexity in language from a typological perspective. My way of approaching this topic was to ask myself some questions, and then see where the answers led. The first one was of course, "What sort of system are we looking at complexity in - what kind of system is language?"
A survey of 170 Tibeto-Burman languages showed 69 with a distinction between inclusive and exclusive first-person plural pronouns, 18 of which also show inclusive-exclusive in the dual. Only the Kiranti languages and some Chin languages have inclusive-exclusive in the person marking. Of the forms of the pronouns involved in the inclusive-exclusive opposition, the exclusive form is usually less marked and historically prior to the inclusive form, and we find that the distinction cannot be reconstructed for Proto-Tibeto-Burman or for mid-level groupings. Only the Kiranti group has marking of the distinction that can be reconstructed to the proto level, and this is also reflected in the person-marking system.
Chao Yuen Ren (1892–1982)
(2005)
Y. R. Chao is easily the most famous linguist to have come out of China. Born before the end of the last dynasty in China, he received a traditional Confucian education, but was also one of the first Chinese people to be sent to the West for training in modern Western science (under the Boxer Indemnity Fund). The remarkable breadth and scope of his studies included physics, mathematics, linguistics, musical and literary composition, and translation, and he was a pioneer in many of these fields.
Erdvilas Jakulis’ thorough, detailed and comprehensive study (2004) is an important contribution to our reconstruction of the Balto-Slavic verbal system. The following remarks are intended to complement his findings from a Slavic perspective. Jakulis demonstrates that the type of Lith. tekèti, teka ‘flow’ is largely of East Baltic provenance. He finds it difficult to identify the same type in Old Prussian.
It is gratifying to see that Jay Jasanoff has now (2004) adopted my theory that "the Balto-Slavic acute was a kind of stød or broken tone" (p. 172), which I have been advocating since 1973. Unfortunately, his acceptance of my view is not based on an evaluation of the comparative evidence (for which see Kortlandt 1985a) but on his desire to derive Balto-Slavic “acute” and "circumflex" syllables from the "bimoric" and "trimoric" long vowels which he assumes for Proto-Germanic as the reflexes of the Indo-European "acute" and "circumflex" tones of the neogrammarians. Since the original "circumflex" was limited to Indo-European VHV-sequences, Jasanoff proposes a whole series of additional lengthenings yielding "hyperlong" vowels in Germanic, Baltic and Slavic, which still do not suffice to eliminate the counter-evidence (cf. Kortlandt 2004b: 14). The reason for this failure is his unwillingness to recognize that lengthened grade vowels are circumflex in Balto-Slavic (cf. Kortlandt 1997a).
The highly successful conference on Balto-Slavic accentology organized by Mate Kapović and Ranko Matasović has given much food for thought. It has clarified the extent of fundamental disagreements as well as established areas of common interest where the evidence seems to be ambiguous. In the following I shall comment upon some of the papers presented at the conference which are directly relevant to my own research.
Holger Pedersen's "Études lituaniennes" reflects the issues under discussion at the time of its publication (1933). Its five unequal chapters deal with the following topics: I. The Lithuanian future and its Indo-European origins: the sigmatic formation, the 3rd person zero ending, the short root vowels e and a, the shortening and metatony in the 3rd person, and the future participle. II. The accentuation of nouns in Lithuanian: accentual mobility in the Indo-European consonant stems and its absence in the o-stems, the origins of accentual mobility in Lithuanian nominal paradigms, the accentuation of separate case forms, and accentual peculiarities of the adjective. III. The acute tone of the root in consonant stems. IV. The past active participle. V. Secondary vocalic alternations: new vowel length and new acute tone.