Linguistik
This paper discusses the nature of habits in the use of languages. It is well-known that the habits of one's first language can influence the acquisition of a second language. This paper discusses the less well-known phenomenon of how an acquired second language can influence one's first language, and explains this influence by reference to the nature of communicative behavior.
This paper discusses the typology of focus structure types (variation of information structuring in the clause) and how information structure can be used to explain all of the word order patterns in Chinese without reference to grammatical relations.
Low-dimensional and speaker-independent linear vocal tract parametrizations can be obtained using the 3-mode PARAFAC factor analysis procedure first introduced by Harshman et al. (1977) and discussed in a series of subsequent papers in the Journal of the Acoustical Society of America (Jackson (1988), Nix et al. (1996), Hoole (1999), Zheng et al. (2003)). Nevertheless, some important questions have been left unanswered; for example, none of the papers using this method has provided a consistent interpretation of the terms usually referred to as "speaker weights". This study explores what influences their reliability, as a first step towards their consistent interpretation. With this in mind, we undertook a systematic comparison of the classical PARAFAC1 algorithm with a relaxed version of it, PARAFAC2. This comparison was carried out on two different corpora acquired with the articulograph, which varied in vowel qualities, consonantal contexts, and the paralinguistic features accent and speech rate. The difference between these statistical approaches can roughly be described as follows: in PARAFAC1, observations pertain to the same set of variables and the observation units are comparable; in PARAFAC2, observations pertain to the same set of variables, but the observation units are not comparable. Such a situation is easily conceived in the setting we are describing: our operationalization relies on the comparability of fleshpoint data acquired from different speakers, which need not be a good assumption owing to influences like sensor placement and morphological conditions.
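The trilinear decomposition underlying this comparison can be sketched as a minimal PARAFAC1 fit by alternating least squares. The following is a generic numpy illustration, not the implementation used in the study; the array layout (e.g. speakers x vowels x articulator coordinates) and the rank are hypothetical:

```python
import numpy as np

def parafac1_als(X, rank, n_iter=500, seed=0):
    """Minimal PARAFAC1 via alternating least squares (ALS).

    X : 3-way array, e.g. speakers x vowels x articulator coordinates.
    Returns factor matrices A, B, C such that
    X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r].
    """
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each update solves a linear least-squares problem with the other
        # two factor matrices held fixed (MTTKRP computed via einsum).
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

def reconstruct(A, B, C):
    """Rebuild the 3-way array from the factor matrices."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)
```

In this layout the rows of A would play the role of the "speaker weights": one weight vector per speaker scaling a common set of vowel and articulator factors. PARAFAC2 relaxes this model by letting one mode's factors vary across speakers, subject to a fixed cross-product constraint.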
In particular, the comparison between the two approaches is carried out by means of so-called "leverages" on the different component matrices. Leverages originate in regression analysis, are calculated as v = diag(A(AᵀA)⁻¹Aᵀ), and deliver information on how "influential" a particular loading matrix is for the model. This analysis could in principle be carried out component by component, but we confined ourselves to effects on the global factor structure. For vowels, the most influential loadings are those for the tense cognates of non-palatal vowels. For speakers, the most prominent result is the relative absence of effects of the paralinguistic variables. Results generally indicate that the model specification (i.e. PARAFAC1 or PARAFAC2) has very little influence on the vowel and subject components. The patterns for the articulators indicate strong differences between speakers with respect to the most influential measurement as revealed by PARAFAC2: in particular, the most influential y-contribution is the tongue back for some speakers and the tongue dorsum for others. With respect to the speaker weights, again, the leverage patterns are very similar for both PARAFAC versions. These patterns converge with the results of the loading plots, where the articulator profiles seem to be most altered by the use of PARAFAC2. These findings are generally interpreted as evidence for the reliability of the PARAFAC1 speaker weights.
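The leverage computation is simply the diagonal of the hat matrix familiar from regression. A minimal sketch in generic numpy (the loading matrix here is an arbitrary stand-in, not data from the study):

```python
import numpy as np

def leverages(A):
    """Leverages of the rows of a loading matrix A: diag(A (A^T A)^-1 A^T).

    Computed via a thin QR decomposition for numerical stability; for a
    full-column-rank A this equals the diagonal of the hat matrix.
    """
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)
```

Each leverage lies between 0 and 1 and the values sum to the number of components, so a row (e.g. an individual vowel or speaker) with leverage near 1 dominates the fitted factor structure.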
This work investigates laryngeal and supralaryngeal correlates of the voicing contrast in alveolar obstruent production in German. It further studies the laryngeal-oral co-ordination observed for such productions. Three different positions of the obstruents are taken into account: the stressed syllable-initial position, the post-stressed intervocalic position, and the post-stressed word-final position. For the latter, the phonological rule of final devoicing applies in German. The different positions are chosen in order to test the following hypotheses:
1. The presence/absence of glottal opening is not a consistent correlate of the voicing contrast in German.
2. Supralaryngeal correlates are also involved in the contrast.
3. Supralaryngeal correlates can compensate for the lack of distinction in laryngeal adjustment.
Including the word-final position is motivated by the question of whether neutralization in word-final position is complete or whether some articulatory residue of the contrast can be found.
Two experiments were carried out. The first investigates glottal abduction in co-ordination with tongue-palate contact patterns by means of simultaneous recordings of transillumination, fiberoptic films and electropalatography (EPG). The second focuses on supralaryngeal correlates of alveolar stops, studied by means of electromagnetic articulography (EMA) simultaneously with EPG. Three German native speakers participated in both recordings. The results provide evidence that the first hypothesis holds true for alveolar stops when the different positions are taken into account. It is also confirmed for fricative production, since voiceless and voiced fricatives alike are realised with glottal abduction most of the time. Additionally, supralaryngeal correlates are involved in the voicing contrast in two respects. First, laryngeal and supralaryngeal movements are well synchronised in voiceless obstruent production, particularly in the stressed position. Second, supralaryngeal correlates occur especially in the post-stressed intervocalic position. The results are discussed with respect to the phonetics-phonology interface, the role of timing and its possible control, interarticulatory co-ordination, and stress as 'localised hyperarticulation'.
This special issue of the ZAS Papers in Linguistics contains a collection of papers from the French-German Thematic Summerschool on "Cognitive and physical models of speech production, and speech perception and of their interaction".
Organized by Susanne Fuchs (ZAS Berlin), Jonathan Harrington (IPdS Kiel), Pascal Perrier (ICP Grenoble) and Bernd Pompino-Marschall (HUB and ZAS Berlin), and funded by the German-French University in Saarbrücken, this summerschool was held from September 19th to 24th, 2004, on the Baltic Sea coast at the Heimvolkshochschule Lubmin (Germany), with 45 participants from Germany, France, Great Britain, Italy and Canada. The scientific program of the summerschool, which is reprinted at the end of this volume, included 11 keynote presentations by invited speakers, 21 oral presentations and a poster session (8 presentations). The names and addresses of all participants are also given in the back matter of this volume.
All participants were offered the opportunity to publish an extended version of their presentation in the ZAS Papers in Linguistics. All submitted papers underwent a review and editing procedure by external experts and the organizers of the summerschool. As is typical of a summerschool, the papers present works in progress, works at a more advanced stage, or tutorials. They are ordered alphabetically by first author's name, which fortunately means that this special issue opens with the paper that won the award for best pre-doctoral presentation: Sophie Dupont, Jérôme Aubin and Lucie Ménard with "A study of the McGurk effect in 4 and 5-year-old French Canadian children".
It has been shown that visual cues play a crucial role in the perception of vowels and consonants. Conflicting consonantal stimuli presented in the visual and auditory modalities can even result in the emergence of a third perceptual unit (the McGurk effect). From a developmental point of view, several studies report that newborns can associate the image of a face uttering a given vowel with the auditory signal corresponding to that vowel; visual cues are thus used by newborns. Despite the large number of studies carried out with adult speakers and newborns, very little work has been conducted with preschool-aged children. This contribution aims at describing the use of auditory and visual cues by 4- and 5-year-old French Canadian speakers, compared to adult speakers, in the identification of voiced consonants. Audiovisual recordings of a French Canadian speaker uttering the sequences [aba], [ada], [aga], [ava], [ibi], [idi], [igi], [ivi] were carried out. The acoustic and visual signals were extracted and analysed so that conflicting and non-conflicting stimuli between the two modalities were obtained. The resulting stimuli were presented as a perceptual test to eight 4- and 5-year-old French Canadian speakers and ten adults in three conditions: visual-only, auditory-only, and audiovisual. Results show that, even though visual cues have a significant effect on the identification of the stimuli for both adults and children, children are less sensitive to visual cues in the audiovisual condition. Such results shed light on the role of multimodal perception in the emergence and refinement of the phonological system in children.
In this paper the nature of the representations of the speech production task in the speaker's brain is addressed in a production-perception interaction framework. Since speech is produced to be perceived, it is hypothesized that its production is associated, for the speaker, with the generation of specific physical characteristics that are, for the listeners, the objects of speech perception. Hence, in the first part of the paper, four reference theories of speech perception are presented in order to guide and constrain the search for possible correlates of the speech production task in the physical space: the Acoustic Invariance Theory, the Adaptive Variability Theory, the Motor Theory and the Direct-Realist Theory. Possible interpretations of these theories in terms of representations of the speech production task are proposed and analyzed. In the second part, a few selected experimental studies are presented which shed some light on this issue. In the conclusion, on the basis of the joint analysis of the theoretical and experimental aspects presented in the paper, it is proposed that representations of the speech production task are multimodal and that a hierarchy exists among the different modalities, with the acoustic modality having the highest priority. It is also suggested that these representations are associated not with invariant characteristics but with regions of the acoustic, orosensory and motor control spaces.
A fundamental question in the study of speech concerns the invariance of the ultimate percepts, or features. The present paper gives an overview of the noninvariance problem and offers some hints towards a solution. Examination of various data on place and voicing perception suggests the following points. Features correspond to natural boundaries between sounds, which are included in the infant's predispositions for speech perception. Adult percepts arise from couplings and contextual interactions between features. Both couplings and interactions contribute to invariance, but this comes at the expense of profound qualitative changes in perceptual boundaries, implying that features are neither independently nor invariantly perceived. The question then is to understand the principles which guide feature couplings and interactions during perceptual development. The answer might reside in the following facts: (1) adult boundaries converge to a single point of the perceptual space, suggesting a context-free central reference; (2) this point corresponds to the neutral vocoid, suggesting the reference is related to production; (3) at this point perceptual boundaries correspond to the natural ones, suggesting the reference is anchored in predispositions for feature perception. In sum, perceptual invariance seems to be grounded in a radial representation of the vocal tract around a singular point at which boundaries are context-free, natural and coincide with the neutral vocoid.
This paper presents the results of Open Quotient (OQ) measurements in EGG signals of young (18 to 30 years old) and elderly (59 to 82 years old) male and female speakers. The paper further presents quantitative results on the relation between the OQ and the perception of a speaker's age. Higgins & Saxman (1991) found a decreased OQEGG with increasing age for females, whereas the OQEGG in sustained vowel material increased for males as the speakers' age increased. In Linville (2002), however, the spectral amplitudes in the region of F0 (obtained by LTAS measurements of read speech material) increased with increasing age independently of gender; this could be interpreted indirectly as an increasing OQ. We measured the OQEGG not only for sustained vowels, but also in vowels taken from isolated words. In order to analyse the relation between breathiness, in terms of an increased OQ, and the mean perceived age per stimulus, a perception test was carried out in which listeners were asked to estimate the speaker's age based on sustained /a/-vowel stimuli varying in vocal effort (soft - normal - loud) during production. The results indicated the following: (i) The decreased OQ for elderly females originally found by Higgins & Saxman is not apparent in our data for sustained /a/-vowels. For our female speakers no significant difference between the OQ of young and old speakers was found; for elderly males, however, we also found an increasing OQ with increasing age. (ii) In addition, a statistically significant increase in OQEGG occurs for the group of elderly males for the vowels from the word material. (iii) Our results show a strong positive relation between perceived age and OQ in male voices. Regarding (i) and (ii), at least the male speakers' voices become more breathy as age increases. Considering (iii), increased breathiness may contribute to the listener's perception of increased age.
Studying kinematic behavior in speech production is an indispensable and fruitful methodology for describing, for instance, phonemic contrasts, allophonic variations, and prosodic effects in articulatory movements. More intriguingly, kinematics is also interpreted with respect to its underlying control mechanisms. Several interpretations have been borrowed from motor control studies of arm, eye, and limb movements. They either explain kinematics with respect to fine-tuned control by the Central Nervous System (CNS), or they take into account a combination of influences arising from motor control strategies at the CNS level and from the complex physical properties of the peripheral speech apparatus. We assume that the latter is more realistic and ecologically valid. The aims of this article are: first, to show, via a literature review related to the so-called '1/3 power law' in human arm motor control, that this debate is of prime importance in human motor control research in general; second, to study a number of speech-specific examples offering a fruitful framework for addressing this issue. However, it is also suggested that speech motor control differs from general motor control principles in the sense that it exploits specific physical properties such as vocal tract limitations, aerodynamics and biomechanics in order to produce the relevant sounds. Third, experimental and modelling results are described supporting the idea that these three properties are crucial in shaping speech kinematics for selected speech phenomena. Hence, caution should be taken when interpreting kinematic results based on experimental data alone.
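The '1/3 power law' referred to above states that tangential velocity scales with the minus-one-third power of trajectory curvature. On an ellipse traced with harmonic components the relation holds exactly, which a short numerical check can illustrate (a generic sketch for readers unfamiliar with the law, not an analysis from the paper; the ellipse axes are arbitrary):

```python
import numpy as np

# Elliptic trajectory traced with harmonic components: x = a cos(t), y = b sin(t).
a, b = 3.0, 1.0
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
dx, dy = -a * np.sin(t), b * np.cos(t)        # velocity components (analytic)
ddx, ddy = -a * np.cos(t), -b * np.sin(t)     # acceleration components (analytic)

v = np.hypot(dx, dy)                          # tangential velocity
kappa = np.abs(dx * ddy - dy * ddx) / v**3    # trajectory curvature

# Fit the exponent beta in v = K * kappa**beta on a log-log scale.
beta, logK = np.polyfit(np.log(kappa), np.log(v), 1)
```

For this trajectory the cross term |dx*ddy - dy*ddx| equals the constant a*b, so v = (ab)^(1/3) * kappa^(-1/3) holds exactly and the fitted exponent beta comes out at -1/3; empirical arm movements only approximate this relation.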
Syllable cut is said to be a phonologically distinctive feature in some languages in which a difference in vowel quantity is accompanied by a difference in vowel quality, as in German. There have been several attempts to find the corresponding phonetic correlates of syllable cut, of which the energy measurements of vowels by Spiekermann (2000) proved appropriate for explaining the difference between long (smoothly cut) and short (abruptly cut) vowels: in smoothly cut vowels, a larger number of peaks was counted in the energy contour, these peaks were located further back than in abruptly cut segments, and the overall energy was more constant throughout the entire nucleus. On this basis, we set out to compare German, as a syllable cut language, with Hungarian, where the feature was not expected to be relevant. However, the phonetic correlates of syllable cut found in this study do not entirely confirm Spiekermann's results. It seems that the energy features of vowels are more strongly connected to their duration than to their quality.
This study reports the results of an airflow experiment that measured the duration of airflow and the amount of air from the release of a stop to the beginning of the following vowel in stop-vowel sequences of German. The sequences involved coronal, labial and velar voiced and voiceless stops followed by the vocoids /j, i:, ı, ɛ, ʊ, a/. The experiment tested the influence of three factors (voicing of the stop, place of stop articulation, and the following vocoid context) on the duration and amount of air as a possible explanation for assibilation processes. The results show that voiceless stops are associated with a longer duration and more air in the release phase than voiced ones. For the influence of the vocoids, a significant difference in the duration of the release phase could be established between /j/ and all other vocoids; no such difference was found for the amount of air over this duration. The place of articulation had only a restricted influence: velars resulted in a significantly longer release phase than non-velars, but no significant difference in the amount of air between the places of articulation was found.
The present article is a follow-up to the investigation of labiodentals in German and Dutch by Hamann & Sennema (2005), where we looked at the perception of the Dutch labiodental three-way contrast by German listeners without any knowledge of Dutch and by German learners of Dutch. The results of this previous study suggested that the German voiced labiodental fricative /v/ is perceptually closer to the Dutch approximant /ʋ/ than to the corresponding Dutch voiced labiodental fricative /v/. These perceptual indications are supported by the acoustic findings of the present study. German /v/ has a similar harmonicity median and a similar centre of gravity to Dutch /ʋ/, but differs from Dutch /v/ in these parameters. With respect to the acoustic parameter of duration, however, German /v/ lies closer to Dutch /v/ than to Dutch /ʋ/.
(Non)retroflexivity of Slavic affricates and its motivation: Evidence from Polish and Czech <č>
(2005)
The goal of this paper is two-fold. First, it revises the common assumption that the affricate <č> denotes /t͡ʃ/ for all Slavic languages. On the basis of experimental results it is shown that Slavic <č> stands for two sounds: /t͡ʃ/ as e.g. in Czech and /ʈʂ/ as in Polish.
The second goal of the paper is to show that this difference is not accidental but motivated by perceptual relations among sibilants. In Polish, /t͡ʃ/ changed to /ʈʂ/, thus lowering its sibilant tonality and creating a better perceptual distance to /tɕ/, whereas in Czech /t͡ʃ/ did not turn into /ʈʂ/, as it already displayed sufficient perceptual distance to the only other affricate present in the inventory, namely the alveolar /t͡s/. Finally, an analysis of the Czech and Polish affricate inventories is offered.
While the perilinguistic child is endowed with predispositions for the categorical perception of phonetic features, their adaptation to the native language results from a long evolution from the end of the first year of life up to adolescence. This evolution entails better discrimination between phonological categories, a concomitant reduction of the discrimination between within-category variants, and higher precision of the perceptual boundaries between categories. The first objective of the present study was to assess the relative importance of these modifications by comparing the perceptual performance of a group of 11 children, aged 8 to 11 years, with that of their mothers. Our second objective was to explore the functional implications of categorical perception by comparing the performance of a group of 8 deaf children equipped with cochlear implants with that of normal-hearing chronological-age controls. The results showed that the categorical boundary was slightly more precise and that categorical perception was consistently stronger in adults than in normal-hearing children. Those deaf children who were able to discriminate minimal distinctions between syllables displayed categorical perception performance equivalent to that of the normal-hearing controls. In conclusion, the late effect of age on the categorical perception of speech seems to be anchored in a fairly mature phonological system, as evidenced by the fairly high precision of categorical boundaries in pre-adolescents. These late developments have functional implications for speech perception in difficult conditions, as suggested by the relationship between categorical perception and speech intelligibility in children with cochlear implants.
This paper describes the processing of MRI and CT images needed for developing a 3D linear articulatory model of the velum. The 3D surface that defines each organ constituting the vocal and nasal tracts is extracted from MRI and CT images recorded of a subject uttering a corpus of artificially sustained French vowels and consonants. First, the 2D contours of the organs were manually extracted from the corresponding images, expanded into 3D contours, and aligned in a common 3D coordinate system. Then, for each organ, a generic mesh was chosen and fitted by elastic deformation to each of the 46 3D shapes of the corpus. This finally resulted in a set of organ surfaces sampled with the same number of 3D vertices for each articulation, which is appropriate for Principal Component Analysis or linear decomposition. The analysis of these data uncovered two main uncorrelated articulatory degrees of freedom for the velum's movement; the associated parameters are used to control the model. We have in particular investigated the question of a possible correlation between jaw/tongue movement and velum movement, and have not found more correlation than that present in the corpus.
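The linear decomposition step can be sketched generically: stack each articulation's mesh vertices into one row vector and run PCA across articulations. The following numpy illustration is hypothetical (the dimensions and data are made up, not the study's), but the structure matches the described pipeline of 46 uniformly sampled meshes:

```python
import numpy as np

def pca_modes(meshes, n_modes=2):
    """PCA of vertex coordinates across articulations.

    meshes : array (n_articulations, n_vertices * 3), each row the
             flattened 3D vertex coordinates of one fitted generic mesh.
    Returns (mean shape, deformation modes, per-articulation parameters).
    """
    mean = meshes.mean(axis=0)
    centered = meshes - mean
    # The SVD of the centered data yields the principal deformation modes.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    modes = Vt[:n_modes]              # orthonormal deformation modes
    params = centered @ modes.T       # control parameters (scores)
    return mean, modes, params

def synthesize(mean, modes, p):
    """Reconstruct a mesh shape from control parameters p."""
    return mean + p @ modes
```

In the study's terms, the two uncorrelated degrees of freedom for the velum would correspond to the first two modes, and the scores to the parameters controlling the model.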
This paper contributes to the understanding of vocal fold oscillation during phonation. In order to test theoretical models of phonation, a new experimental set-up using a deformable vocal folds replica is presented. The replica is shown to be able to produce self-sustained oscillations under controlled experimental conditions. Different parameters, such as those related to elasticity, to acoustical coupling or to the subglottal pressure, can therefore be studied quantitatively. In this work we focused on the oscillation fundamental frequency and on the upstream pressure required either to start (onset threshold) or to stop (offset threshold) oscillations in the presence of a downstream acoustical resonator. As an example, it is shown how these data can be used to test the theoretical predictions of a simple one-mass model.
A visual articulatory model and its application to the therapy of speech disorders: a pilot study
(2005)
A visual articulatory model based on static MRI data of isolated sounds and its application in the therapy of speech disorders is described. The model is capable of generating video sequences of articulatory movements or still images of articulatory target positions within the midsagittal plane. On the basis of this model, (1) a visual stimulation technique for the therapy of patients suffering from speech disorders and (2) a rating test for the visual recognition of speech movements were developed. Results indicate that patients produce recognition rates above chance level even without any training, and that patients are capable of significantly increasing their recognition rate over the course of therapy.