This paper builds on Zwicky's (1986) notion of shape condition, that is, a rule that specifies the phonological shape of inflected forms "by reference to triggers at least some of which lie outside the syntactic word". Zwicky observes that "many rules traditionally classified as external sandhi rules are [shape conditions]". They are not phonological rules in the usual sense, since they only apply to specific lexical items and are active within syntactic rather than phonological domains.
Shape conditions are problematic in many standard grammar architectures. On the one hand, they seem to be constraints on lexical entries, while on the other hand, they make reference to the syntactic context. Hayes (1990) has sketched a theory of "precompiled phrasal phonology" in which allomorph choice is conditioned by subcategorization frames in lexical entries. However, his approach is not formalized in any detail, and moreover makes the implicit claim that the relation between a shape condition target and its triggers can be equated with the syntactic relation between a lexical head and its complement. Although this assumption holds good for the Hausa phenomena he addresses, we do not believe that it holds in general.
HPSG appears to offer a promising framework for formalizing something like Hayes' approach, but the standard machinery also makes it hard to distinguish a shape condition trigger from a complement. In order to overcome this difficulty, we develop the notion of phonological context: a feature of signs which allows us to condition allomorphic alternation in terms of (i) the phonological edges, and (ii) the syntactic properties of an expression's immediate syntactic sisters. We show how our analysis deals with four illustrative cases: the indefinite article alternation in English, syncretic liaison forms for possessive pronouns in French, Hausa verb-final vowel shortening, and soft mutation in Welsh nouns.
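The first illustrative case, the English indefinite article, can be sketched very schematically: the allomorph is chosen from the left phonological edge of the article's sister. This is a minimal toy sketch, not the paper's actual HPSG feature geometry; the orthographic vowel test is a crude stand-in for real phonological representations.

```python
# Toy sketch of a "phonological context" condition: the indefinite
# article's shape is determined by the left phonological edge of its
# syntactic sister, not by the article's own lexical entry alone.
# (Orthographic vowels approximate vowel-initial phonology here.)

VOWELS = set("aeiou")

def indefinite_article(sister: str) -> str:
    """Pick the allomorph 'a'/'an' from the sister's left edge."""
    return "an" if sister and sister[0].lower() in VOWELS else "a"

assert indefinite_article("apple") == "an"
assert indefinite_article("pear") == "a"
```

The point of the sketch is only that the conditioning factor lies outside the inflected word itself, which is what makes shape conditions awkward for purely lexical treatments.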
In this paper we focus on the similarities tying together the second segment of an onset cluster and a singleton coda segment. We offer a proposal based on Baertsch (2002) accounting for this similarity and show how it captures a number of observations which have defied previous explanation. In accounting for the similarity of patterning between the second member of an onset and a coda consonant, we propose to augment Prince & Smolensky's (P&S, 1993/2002) Margin Hierarchy so as to distinguish between structural positions that prefer low sonority and those that prefer high sonority. P&S's Margin Hierarchy, which gives preference to segments of low sonority, applies to singleton onsets; this is our M1 hierarchy. Our proposed M2 hierarchy applies both to the second member of an onset and to a singleton coda. The M2 hierarchy differs from the M1 hierarchy in giving preference to consonants of high sonority. Splitting the Margin Hierarchy into the M1 and M2 hierarchies allows us to explain typological, phonotactic, and acquisitional observations that have defied previous explanation. In Section 2 of this paper, we briefly provide background on the links that tie together the second member of an onset and a singleton coda. In Section 3, we review P&S's Margin Hierarchy, showing that it becomes problematic when extended to coda consonants. We then offer our proposal for a split margin hierarchy. Section 4 extends the split margin approach to complex onsets. We then show how it is able to account for various typological, phonotactic, and acquisitional observations. In Section 5, we will conclude the paper by briefly sketching how the split margin approach enables us to analyze syllable contact phenomena without requiring a specific syllable contact constraint (or additional hierarchy) or reference to an external sonority scale.
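The split-margin idea can be illustrated with a toy cost computation. The sonority scale and the way violations are counted below are invented for illustration and are not P&amp;S's or Baertsch's actual constraint formalization; the sketch only shows the two opposite preferences.

```python
# Toy split margin hierarchy. M1 (singleton onsets, first members of
# clusters) penalizes HIGH sonority; M2 (second onset members and
# singleton codas) penalizes LOW sonority. Scale values are assumptions.

SONORITY = {"t": 1, "s": 2, "n": 3, "l": 4, "r": 5, "w": 6}

def m1_violations(seg):
    """M1: cost rises with sonority (onsets prefer low sonority)."""
    return SONORITY[seg]

def m2_violations(seg):
    """M2: cost falls as sonority rises (codas and second onset
    members prefer high sonority)."""
    return max(SONORITY.values()) - SONORITY[seg]

def onset_cluster_cost(c1, c2):
    """Complex onset: first member evaluated by M1, second by M2."""
    return m1_violations(c1) + m2_violations(c2)

# /tr/ (low-sonority C1 + high-sonority C2) beats /rt/:
assert onset_cluster_cost("t", "r") < onset_cluster_cost("r", "t")
# As a singleton coda, the nasal /n/ is preferred over the stop /t/:
assert m2_violations("n") < m2_violations("t")
```

The shared M2 evaluation of second onset members and singleton codas is what captures their similar patterning in this sketch.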
The unfolding discussion will focus on the internal representation of turbulent sounds in the phonology of German, as well as pinpoint the special status of the prime defining the quality of turbulence. It will also be argued that this prime is capable of entering into special types of licensing relations, which results in specific phonetic manifestations of forms. We shall compare the effects of two processes attested in German, consonant degemination and spirantisation, with a view to revealing the role of the turbulence-defining element in the two operations. Furthermore, our attention will be focused on the workings of the Obligatory Contour Principle which, as will be shown below, exerts a decisive impact on prime interplay and consequently on the phonetic realization of sounds and words. We shall see that segmental identity is contingent on the language-specific interpretation of inter-element bonds.
Given the importance of prime autonomy in determining the manifestation of sounds, we begin with a brief outline of the fundamental segment structure principles offered by the theory of Phonological Government.
Metrical phonology in HPSG
(2006)
This paper proposes a new approach to the prosody-syntax interface in HPSG. Previous approaches to prosody in HPSG (Klein, 2000; Haji-Abdolhosseini, 2003) represent prosodic information by constructing metrical constituent structure in the tradition of Selkirk (1980) and Liberman and Prince (1977). One drawback of this approach is that it does not allow for a direct representation of purely metrical constraints, which are relegated to an unformalized performance component. By contrast, so-called 'grid-only' approaches (Prince, 1983; Selkirk, 1984; Delais-Roussarie, 2000) use a single data structure, a metrical grid, to encode both prosodic constraints resulting from syntax and constraints of a rhythmic nature.
We first review relevant data from French showing that prosodic constituency is much less constrained by syntactic structure than is predicted by existing approaches. In all but very short utterances, many different prosodic groupings are possible for a given sentence with a determinate information structure, and rhythmic factors determine a preference ordering on the possible groupings. We then present an HPSG implementation of the metrical grid, and propose minimal syntactic constraints on relative prominence, leaving room for non-categorical rhythmic constraints to choose between alternatives. We finish by discussing the interaction of the metrical grid with the rest of the prosodic grammar.
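The kind of grid over which rhythmic preferences can be stated directly is easy to sketch as a plain data structure. This is a minimal illustration; the syllables and prominence values are invented, and this is not the paper's actual HPSG encoding.

```python
# A metrical grid as (syllable, prominence) pairs, where prominence is
# the height of the column of grid marks over that syllable. A rhythmic
# constraint like *CLASH can then be stated directly on the grid.
from typing import List, Tuple

Grid = List[Tuple[str, int]]

def clashes(grid: Grid) -> int:
    """Count stress clashes: adjacent syllables both with prominence >= 2."""
    return sum(1 for (_, p1), (_, p2) in zip(grid, grid[1:])
               if p1 >= 2 and p2 >= 2)

def prefer(a: Grid, b: Grid) -> Grid:
    """Rhythmic preference: choose the grid with fewer clashes."""
    return a if clashes(a) <= clashes(b) else b

# Classic English example: stress retraction in 'thirteen men'
# avoids the clash between 'teen' and 'men'.
clashing  = [("thir", 1), ("teen", 2), ("men", 2)]
retracted = [("thir", 2), ("teen", 1), ("men", 2)]
assert prefer(clashing, retracted) == retracted
```

Stating the constraint on the grid itself, rather than on prosodic constituent structure, is exactly what the grid-only architecture permits.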
This study investigates supralaryngeal mechanisms of the two-way voicing contrast among German velar stops and the three-way contrast among Korean velar stops, both in intervocalic position. Articulatory data obtained via electromagnetic articulography of three Korean speakers and acoustic recordings of three Korean and three German speakers are analysed. It was found that in both languages the voicing contrast is created by more than one mechanism. However, for Korean velar stops in intervocalic position, stop closure duration is the most important parameter; for German, it is closure voicing. The results support the phonological description proposed by Kohler (1984).
Articulatory token-to-token variability depends not only on linguistic aspects like the phoneme inventory of a given language but also on speaker-specific morphological and motor constraints. As has been noted previously (Perkell, 1997; Mooshammer et al., 2004), speakers with coronally high "dome-shaped" palates exhibit more articulatory variability than speakers with coronally low "flat" palates. One explanation for this is based on perception-oriented control by the speaker: the influence of articulatory variation on the cross-sectional area, and consequently on the acoustics, should be greater for flat palates than for dome-shaped ones. This should force speakers with flat palates to place their tongue very precisely, whereas speakers with dome-shaped palates might tolerate greater variability. A second explanation could be a greater amount of lateral linguo-palatal contact for flat palates holding the tongue in position. In this study both hypotheses were tested.
In order to investigate the influence of palate shape on the variability of the acoustic output, a modelling study was carried out. In parallel, an EPG experiment was conducted in order to investigate the relationship between palate shape, articulatory variability and linguo-palatal contact.
Results from the modelling study suggest that the acoustic variability resulting from a given amount of articulatory variability is higher for flat palates than for dome-shaped ones. Results from the EPG experiment with 20 speakers show that (1) speakers with a flat palate exhibit very low articulatory variability whereas speakers with a dome-shaped palate vary, (2) there is less articulatory variability when there is a lot of linguo-palatal contact, and (3) there is no relationship between the amount of lateral linguo-palatal contact and palate shape. The results suggest that there is a relationship between token-to-token variability and palate shape; however, it is not that the two parameters correlate, but rather that speakers with a flat palate always have low variability because of constraints on the variability range of the acoustic output, whereas speakers with a dome-shaped palate may choose the degree of variability. Since linguo-palatal contact and variability correlate, it is assumed that linguo-palatal contact is a means of reducing articulatory variability.
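The two measurements underlying result (2) can be sketched as follows. The numbers are fabricated, and the use of the standard deviation as the variability measure and electrode counts as the contact measure are assumptions for illustration, not the study's exact procedure.

```python
# Token-to-token variability as the standard deviation of an articulator
# coordinate across repetitions, correlated per speaker with mean
# linguo-palatal contact (e.g. number of activated EPG electrodes).
import statistics

def variability(positions):
    """Token-to-token variability: SD of a coordinate (mm) across tokens."""
    return statistics.stdev(positions)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fabricated per-speaker values: mean contact (electrodes) vs. variability (mm).
contact = [42, 38, 30, 25, 20, 15]
variab  = [0.3, 0.4, 0.6, 0.8, 1.1, 1.3]

r = pearson(contact, variab)
assert r < -0.9  # more contact goes with less variability in this toy data
```

A strongly negative per-speaker correlation of this kind is what supports reading linguo-palatal contact as a bracing mechanism that limits variability.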
Mechanisms of contrasting Korean velar stops: A catalogue of acoustic and articulatory parameters
(2003)
The Korean stop system exhibits a three-way distinction in velar stops among /g/, /k'/ and /kh/. If the differentiation is regarded as being based on voicing, such a system is rather unusual, because even a two-way distinction between a voiced and a voiceless unaspirated stop is easily lost in the languages of the world, especially in the case of velar stops. One possibility for maintaining this distinction is that supralaryngeal characteristics like articulator velocity, duration of surrounding vowels or stop closure duration are involved. The aim of the present study is to set up a catalogue of parameters which are involved in the distinction of Korean velar stops in intervocalic position.
Two Korean speakers were recorded via electromagnetic articulography. The word material consisted of VCV sequences where V is one of the three vowels /a/, /i/ or /u/ and C one of the Korean velars /g/, /k'/ or /kh/. Articulatory and acoustic signals were analysed. It turned out that the distinction is only partly built on laryngeal parameters and that supralaryngeal characteristics differ for the three stops. Another result is that the voicing contrast is not a matter of a single parameter; rather, there is always a set of parameters involved. Furthermore, speakers seem to have a certain freedom in the choice of these parameters.
Several articulatory strategies are available during the production of /u/, all resulting in a similar acoustic output. /u/ has two main constrictions, at the velum and at the lips. A perturbation of either constriction can be compensated at the other one, e.g. a wider constriction at the velum by more lip protrusion, or a wider lip opening by more tongue retraction. This study investigates whether speakers use this relation under perturbation. Six speakers were provided with palatal prostheses which were worn for two weeks. Speakers were instructed to make a serious attempt to produce normal speech. Their speech was recorded via EMA and acoustics several times over the adaptation period. Formant values of /u/-productions were measured. Velar constriction width and lip protrusion were estimated. For four speakers a correlation between constriction width and lip protrusion was found. A negative correlation between lip protrusion and F1 or F2 could sometimes be observed, but no correlation occurred between constriction size and either of the formants. The results show that under perturbation speakers use motor-equivalent strategies in order to adapt. The correlation between constriction size and lip protrusion is stronger than in studies investigating unperturbed speech. This could be because under perturbation speakers are inclined to try out several strategies in order to reach the acoustic target, and the co-variability might thus be greater.
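The motor-equivalence logic of the two-constriction trade-off can be illustrated with a toy acoustic model. The linear formant function and all numbers below are invented for illustration; only the directions of the effects follow the description above.

```python
# Toy motor-equivalence illustration: two different articulatory settings
# that trade off against each other yield the same acoustic output.
# The linear "acoustic model" is a fabricated stand-in, not a real
# vocal-tract model; only the sign of each effect is meaningful.

def toy_f2(constriction_mm, protrusion_mm):
    """Hypothetical F2 (Hz) for /u/: a wider velar constriction raises F2,
    more lip protrusion lowers it (invented coefficients)."""
    return 900 + 40 * constriction_mm - 40 * protrusion_mm

# A prosthesis widens the constriction from 4 mm to 6 mm; the speaker
# compensates with 2 mm more protrusion and keeps F2 unchanged.
baseline    = toy_f2(constriction_mm=4, protrusion_mm=8)
compensated = toy_f2(constriction_mm=6, protrusion_mm=10)
assert baseline == compensated  # same acoustic target, different articulation
```

This is why a correlation between constriction width and lip protrusion, with no correlation between constriction size and the formants, is the expected signature of motor-equivalent adaptation.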