For more than 60 years, ONE article has dominated the landscape of historical umlaut phonology: a four-page sketch of Old High German umlaut by W. Freeman Twaddell. Keller (1978: 160) calls this theory 'one of the finest achievements of American linguists'. Similar praise recurs throughout the literature, and the article remains THE cornerstone of the umlaut debate to this day (see Krygier 1997, Schulte 1998).
In the last few years, together with several colleagues – Anthony Buccini, Garry Davis, David Fertig, Dave Holsinger, Robert Howell, Regina Smith – we have developed a new approach, which we call "ingenerate umlaut". "Ingenerate" here means roughly 'preprogrammed, inherent, innate' and signals that we seek the roots of umlaut in phonetics – more precisely, in coarticulation. We also believe that the gradual unfolding of the process can be seen in the "exceptions" to umlaut, in other words precisely in the umlautless forms which, in the Twaddellian tradition, must be regarded as arbitrary products of analogy.
In this paper, I discuss four different verb forms in Ndebele (a Nguni Bantu language spoken mainly in Zimbabwe) – the imperative, reduplicated, future and participial. I show that while all four are subject to minimality restrictions, minimality is satisfied differently in each of these morphological contexts. To account for this, I argue that in Ndebele (as in other Bantu languages) Word and RED are not the only constituents which must satisfy minimality: the Stem is also subject to minimality conditions in some morphological contexts. This paper, then, provides additional arguments for the proposal that the Phonological Word is not the only sub-lexical morpho-prosodic constituent. Further, I argue that, although Word, RED and Stem are all subject to the same minimality constraint – they must all be minimally bisyllabic – this does not follow from a single 'generalized' constraint. Instead, I argue, contra recent work within Generalized Template Theory (see, e.g., McCarthy & Prince 1994, 1995a, 1999; Urbanczyk 1995, 1996; and Walker 2000), that a distinct minimality constraint must be formalized for each of these morpho-prosodic constituents.
The distribution of trimoraic syllables in German and English as evidence for the phonological word
(2000)
In the present article I discuss the distribution of trimoraic syllables in German and English. The reason I have chosen to analyze these two languages together is that the data in both languages are strikingly similar. However, although the basic generalization in (1) holds for both German and English, we will see below that trimoraic syllables do not have an identical distribution in both languages.
In the present study I make the following theoretical claims. First, I argue that the three environments in (1) have a property in common: they all describe the right edge of a phonological word (or prosodic word; henceforth pword). From a formal point of view, I argue that a constraint I dub the THIRD MORA RESTRICTION (henceforth TMR), which ensures that trimoraic syllables surface at the end of a pword, is active in German and English. According to my proposal trimoraic syllables cannot occur morpheme-internally because monomorphemic grammatical words like garden are parsed as single pwords. Second, I argue that the TMR refers crucially to moraic structure. In particular, underlined strings like the ones in (1) will be shown to be trimoraic; neither skeletal positions nor the subsyllabic constituent rhyme are necessary. Third, the TMR will be shown to be violated in certain (predictable) pword-internal cases, as in Monde and chamber; I account for such facts in an Optimality-Theoretic analysis (henceforth OT; Prince & Smolensky 1993) by ranking various markedness constraints among themselves or by ranking them ahead of the TMR. Fourth, I hold that the TMR describes a concrete level of grammar, which I refer to below as the 'surface' representation. In this respect, my treatment differs significantly from the one proposed for English by Borowsky (1986, 1989), in which the English facts are captured in a Lexical Phonology model by ordering the relevant constraint at level 1 in the lexicon.
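The gist of the TMR can be illustrated with a toy check. This is only a sketch of the idea under assumed representations: a pword is reduced to a list of per-syllable mora counts, and `tmr_satisfied` is a hypothetical helper, not the author's formalism.

```python
# Toy check of the Third Mora Restriction (TMR): a trimoraic syllable
# may surface only at the right edge of a phonological word (pword).
# Representations are assumed: a pword is reduced to a list of
# per-syllable mora counts.

def tmr_satisfied(pword):
    """pword: list of mora counts, one per syllable, left to right."""
    # every non-final syllable must be at most bimoraic; only the
    # pword-final syllable may carry a third mora
    return all(moras <= 2 for moras in pword[:-1])

# 'garden' parsed as a single pword, syllables 'gar' (2 moras) and
# 'den' (1 mora): no trimoraic syllable, so the TMR is satisfied.
print(tmr_satisfied([2, 1]))  # True
# a hypothetical trimoraic syllable in pword-medial position violates it
print(tmr_satisfied([3, 1]))  # False
# a trimoraic syllable at the pword's right edge is permitted
print(tmr_satisfied([2, 3]))  # True
```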
Identity effects in phonology are deviations from regular phonological form (i.e. canonical patterns) which are due to the relatedness between words. More specifically, identity effects are those deviations which serve to enhance similarity in the surface phonological form of morphologically related words. In rule-based generative phonology the effects in question are described by means of the cycle. For example, the stress on the second syllable in cond[ɛ]nsation as opposed to the stresslessness of the second syllable in comp[ǝ]nsation is described by applying the stress rules initially to the stems, thereby yielding condénse and cómpensàte. Subsequently the stress rules are reapplied to the affixed words, with the initial stress assignment (i.e. stress on the second syllable in condense, but not in compensate) leaving its mark in the output form (cf. Chomsky and Halle 1968). A second example is provided by words like lie[p]los 'unloving' in German, which show the effects of neutralization in coda position (i.e. only voiceless obstruents may occur in coda position) even though the obstruent should 'regularly' be syllabified in head position (i.e. bl is a well-formed syllable head in German). Here the stem is syllabified on an initial cycle, obstruent devoicing applies (i.e. lie[p]), and this structure is left intact when affixation applies (i.e. lie[p]los) (cf. Hall 1992). As a result the stem of lie[p]los is identical to the base lie[p].
I argue in this study that consonantal strength shifts can be explained through positional bans on features, expressed over positions marked as weak at a given level of prosodic structure, usually the metrical foot. This approach might be characterized as "templatic" in the sense that it seeks to explain positional restrictions and distributional patterns relative to independently motivated, fixed prosodic elements. In this sense, it follows Dresher & Lahiri's (1991) idea of metrical coherence in phonological systems, namely, "[T]hat grammars adhere to syllabic templates and metrical patterns of limited types, and that these patterns persist across derivations and are available to a number of different processes ..." (251). [...] The study is structured as follows: section 1 presents a typology of distributional asymmetries based on data from unrelated languages, demonstrating that the stress foot of each of these languages determines the contexts of neutralization and weakening of stops. Section 2 elaborates the notion of a template, exploring some of its formal properties, while section 3 presents templatic analyses of data from English and German. Section 4 explores the properties of weak positions, especially weak onsets, in more detail, including discussion of templates in phonological acquisition. Section 5 summarizes and concludes the study.
The aim of this paper is to show what role prosodic constituents, especially the foot and the prosodic word, play in Polish phonology. The focus is placed on their function in the representation of extrasyllabic consonants in word-initial, word-medial, and word-final positions.
The paper is organized as follows. In the first section, I show that the foot and the prosodic word are well-motivated prosodic constituents in Polish prosody. In the second part, I discuss consonant clusters in Polish, focussing on segments that are not parsed into a syllable due to violations of the Sonority Sequencing Generalisation, i.e. extrasyllabic segments. Finally, I analyze possible representations of the extrasyllabic consonants and conclude that both the foot and the prosodic word play a crucial role in terms of licensing. My proposal differs from the ones by Rubach and Booij (1990b) and Rubach (1997) in that I argue that the word-initial sonorants traditionally called extrasyllabic are licensed by the foot and not by the prosodic word (cf. Rubach and Booij (1990b)) or the syllable (cf. Rubach (1997)). For my analysis I adopt the framework of Optimality Theory, cf. McCarthy and Prince (1993), Prince and Smolensky (1993), in which derivational levels are abandoned and only surface representations are evaluated by means of universal constraints.
In this work, I examine a set of languages which appear to require resyllabification postlexically; in less derivational terms, a word's syllabification in isolation differs from its syllabification in a phrase-internal context. Although many researchers, myself included, have examined such cases individually over the years, I bring together several examples here to see what features they share and how an Optimality Theory analysis improves upon rule-based derivational approaches.
The purpose of this paper is to provide a unified (i.e. independent of lexical categories) account of Persian stress. I show that by differentiating word- and phrase-level stress rules, one can account for the superficial differences exemplified in (1) above and many of the stipulations suggested by previous scholars. The paper is organized as follows. In section 1, I look at nouns and adjectives and propose a rule that would account for their stress pattern. In section 2, I extend the stress rule to verbs and show the problem this category poses to our generalization. The main proposal of this paper is discussed in section 3. I introduce the phrasal stress rule in Persian and show that by differentiating word-level and phrase-level stress rules, one can come to a unified account of Persian stress. Section 4 deals with some problematic cases for the proposed generalization and discusses some tentative solutions and their theoretical consequences. Section 5 concludes the paper.
This article presents new experimental data on the phonetics of syllabic /l/ and syllabic /n/ in Southern British English and then proposes a new phonological account of their behaviour. Previous analyses (Chomsky and Halle 1968:354, Gimson 1989, Gussmann 1991 and Wells 1995) have proposed that syllabic /l/ and syllabic /n/ should be analysed in a uniform manner. Data presented here, however, show that syllabic /l/ and syllabic /n/ behave in very different ways, and in light of this, a unitary analysis is not justified. Instead, a proposal is made that syllabic /l/ and syllabic /n/ have different phonological structures, and that these different phonological structures explain their different phonetic behaviours.
This article is organised as follows: first, a general background is given to the phenomenon of syllabic consonants, both cross-linguistically and specifically in Southern British English. In §3, a set of experiments designed to elicit syllabic consonants is described, and in §4 the results of these experiments are presented. §5 contains a discussion of data published by earlier authors concerning syllabic consonants in English. In §6 a theoretical phonological framework is set out, and in §7 the results of the experiments are analysed in the light of this framework. In the concluding section, some outstanding issues are addressed and several areas for further research are suggested.
It has been hypothesized that sounds which are less perceptible are more likely to be altered than more salient sounds, the rationale being that the loss of information resulting from a change in a sound which is difficult to perceive is not as great as the loss resulting from a change in a more salient sound. Kohler (1990) suggested that the tendency to reduce articulatory movements is countered by perceptual and social constraints, finding that fricatives are relatively resistant to reduction in colloquial German. Kohler hypothesized that this is due to the perceptual salience of fricatives, a hypothesis which was supported by the results of a perception experiment by Hura, Lindblom, and Diehl (1992). These studies showed that the relative salience of speech sounds is relevant to explaining phonological behavior. An additional factor is the impact of different acoustic environments on the perceptibility of speech sounds. Steriade (1997) found that voicing contrasts are more common in positions where more cues to voicing are available. The P-map, proposed by Steriade (2001a, b), allows the representation of varying salience of segments in different contexts. Many researchers have posited a relationship between speech perception and phonology. The purpose of this paper is to provide experimental evidence for this relationship, drawing on the case of Turkish /h/ deletion.
This article deals with the Tashlhiyt dialect of Berber (henceforth TB) spoken in the southern part of Morocco. In TB, words may consist entirely of consonants without vowels, and sometimes of only voiceless obstruents, e.g. tft#tstt "you rolled it (fem)". In this study we carried out acoustic, video-endoscopic and phonological analyses to answer the following question: is schwa, which may function as a syllable nucleus, a segment at the level of phonetic representation in TB? Video-endoscopic films were made of one male native speaker of TB producing a list of forms consisting entirely of voiceless obstruents. The same list was produced by seven male native speakers of TB for the acoustic analysis. The phonological analysis is based on the behaviour of vowels with respect to the phonological rule of assibilation. This study shows the absence of schwa vowels in forms consisting of voiceless obstruents.
The current paper explores these two sorts of phonetic explanations of the relationship between syllabic position and the voicing contrast in American English. It has long been observed that the contrast between, for example, /p/ and /b/ is expressed differently depending on the position of the stop with respect to the vowel. Preceding a vowel within a syllable, the contrast is largely one of aspiration: /p/ is aspirated, while /b/ is voiceless, or in some dialects voiced or even an implosive. Following a vowel within a syllable, /p/ and /b/ both tend to lack voicing in the closure, and the contrast is expressed largely by dynamic differences in the transition between the previous vowel and the stop. Here, vowel and closure duration are negatively correlated, such that /p/ has a shorter vowel and a longer closure duration. This difference is often enhanced by the addition of glottalization to /p/. Beyond these differences, there are further differences connected to higher-level organization involving stress and foot edges. To make the current discussion more tractable, we will restrict ourselves to the two conditions (CV and VC) laid out above.
In this study, cross-dialectal variation in the use of the acoustic cues of VOT and F0 to mark the laryngeal contrast in Korean stops is examined in Chonnam Korean and Seoul Korean. Prior experimental results (Han & Weitzman, 1970; Hardcastle, 1973; Jun, 1993 & 1998; Kim, C., 1965) show that pitch values in the vowel onset following the target stop consonants play a supplementary role to VOT in designating the three contrastive laryngeal categories. F0 contours are determined in part by the intonational system of a language, which raises the question of how the intonational system interacts with phonological contrasts. Intonational differences might be linked to dissimilar patterns in using the complementary acoustic cues of VOT and F0. This hypothesis is tested with six Korean speakers, three of Seoul Korean and three of Chonnam Korean. The results show that Chonnam Korean maintains more of a three-way VOT distinction and a two-way distinction in F0 distribution, in comparison to Seoul Korean, which shows more of a three-way F0 distribution and a two-way VOT distinction. The two acoustic cues are complementary in that one cue is rather faithful in marking the three-way contrast, while the other cue marks the contrast less distinctively. These variations also appear not to be completely arbitrary, but linked to the phonological characteristics of the dialects. Chonnam Korean, in which the initial tonal realization in the accentual phrase is expected to be more salient, tends to minimize the F0 perturbation effect from the preceding consonants by tolerating more overlap in F0 distribution; its three-way distribution of VOT can, as compensation, also be understood as a durational sensitivity. Lacking these characteristics, Seoul Korean shows relatively more overlapping distribution in VOT and more three-way separation in F0 distribution.
In the research field initiated by Liljencrants & Lindblom in 1972, we illustrate the possibility of giving substance to phonology, predicting the structure of phonological systems from nonphonological principles, be they listener-oriented (perceptual contrast and stability) or speaker-oriented (articulatory contrast and economy). For vowel systems we proposed the Dispersion-Focalisation Theory (Schwartz et al., 1997b). With the DFT, we can predict vowel systems using two competing perceptual constraints weighted with two parameters, λ and α respectively. The first aims at increasing auditory distances between vowel spectra (dispersion); the second aims at increasing the perceptual salience of each spectrum through formant proximities (focalisation). We also introduced new variants based on research in physics – namely, the phase space (λ, α) and the polymorphism of a given phase, or superstructures in phonological organisations (Vallée et al., 1999) – which allow us to generate 85.6% of the 342 UPSID systems with 3 to 7 vowel qualities. No similar theory for consonants seems to exist yet. We therefore present in detail a typology of consonants, and then suggest ways to explain the predominance of plosives over fricatives and of voiceless over voiced consonants by i) comparing them with language acquisition data at the babbling stage and looking at the capacity to acquire relatively different linguistic systems in relation to the main degrees of freedom of the articulators; ii) showing that the places "preferred" for each manner are at least partly conditioned by the morphological constraints that facilitate or complicate, make possible or impossible, the needed articulatory gestures, e.g. the complexity of the articulatory control for voicing and the aerodynamics of fricatives. A rather strict coordination between the glottis and the oral constriction is needed to produce acceptable voiced fricatives (Mawass et al., 2000).
We determine that the region where the combinations of Ag (glottal area) and Ac (constriction area) values result in a balance between the voice and noise components is indeed very narrow. We thus demonstrate that some of the main tendencies in the phonological vowel and consonant structures of the world's languages can be explained partly by sensorimotor constraints, and argue that phonology can actually take part in a theory of Perception-for-Action-Control.
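The competition between dispersion and focalisation can be illustrated with a toy cost function. The 2-D vowel space, the exact form of both terms, and the default weights below are illustrative assumptions standing in for the published DFT model, not a reimplementation of it.

```python
import itertools

# Toy sketch of a Dispersion-Focalisation style cost for a vowel
# system. Assumptions: vowels are points in a 2-D (F1, F2) perceptual
# space; the dispersion term sums inverse squared pairwise distances;
# the focalisation term rewards formant proximity within a vowel. The
# weights lam and alpha correspond only loosely to the lambda and
# alpha of the DFT.

def dft_cost(vowels, lam=1.0, alpha=0.3):
    """vowels: list of (f1, f2) pairs in normalized (Bark-like) units."""
    # dispersion: crowded systems (small inter-vowel distances) cost more
    dispersion = sum(
        1.0 / ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)
        for a, b in itertools.combinations(vowels, 2)
    )
    # focalisation: spectra with close formants are more salient,
    # so formant proximity lowers the cost
    focalisation = -sum(1.0 / (1.0 + abs(f2 - f1)) for f1, f2 in vowels)
    return lam * dispersion + alpha * focalisation

# A spread-out, /i a u/-like 3-vowel system should cost less than a
# crowded one.
spread = [(0.2, 2.3), (0.8, 1.3), (0.3, 0.7)]
crowded = [(0.2, 2.3), (0.25, 2.2), (0.3, 2.1)]
print(dft_cost(spread) < dft_cost(crowded))  # True
```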
Arguing against Bhat’s (1974) claim that retroflexion cannot be correlated with retraction, the present article illustrates that retroflexes are always retracted, though retraction is not claimed to be a sufficient criterion for retroflexion. The cooccurrence of retraction with retroflexion is shown to have two further implications: first, that non-velarized retroflexes do not exist, and second, that secondary palatalization of retroflexes is phonetically impossible. The process of palatalization is shown to trigger a change in the primary place of articulation to non-retroflex. Phonologically, retraction has to be represented by the feature specification [+back] for all retroflex segments.
Consonants exhibit more variation in their phonetic realization than is typically acknowledged, but that variation is linguistically constrained. Acoustic analysis of both read and spontaneous speech reveals that consonants are not necessarily realized with the manner of articulation they would have in careful citation form. Although the variation is wider than one would imagine, it is limited by the phoneme inventory. The phoneme inventory of the language restricts the range of variation to protect the system of phonemic contrast. That is, consonants may stray phonetically into unfilled areas of the language's sound space. Listeners are seldom consciously aware of the consonant variation, and perceive the consonants phonemically as in their citation forms. A better understanding of surface phonetic consonant variation can help make predictions in theoretical domains and advances in applied domains.
Data on lingual movement, dorsopalatal contact and F2 frequency presented in previous papers of ours (Recasens, 2002; Recasens and Pallarès, 2001; Recasens, Pallarès and Fontdevila, 1997) suggest that the degree of articulatory constraint (DAC) model largely accounts for the extent and direction of tongue dorsum coarticulation in VCV and CC sequences. A goal of this investigation is to verify the predictions of this model with respect to jaw V-to-V effects in VCV sequences, using articulatory movement data collected with electromagnetic articulometry (EMA).
One of the most important insights of Optimality Theory (Prince & Smolensky 1993) is that phonological processes can be reduced to the interaction between faithfulness and universal markedness principles. In the most constrained version of the theory, all phonological processes should be thus reducible. This hypothesis is tested by alternations that appear to be phonological but in which universal markedness principles appear to play no role. If we are to pursue the claim that all phonological processes depend on the interaction of faithfulness and markedness, then processes that are not dependent on markedness must lie outside phonology. In this paper I will examine a group of such processes, the initial consonant mutations of the Celtic languages, and argue that they belong entirely to the morphology of the languages, not the phonology.
In this paper we focus on the similarities tying together the second segment of an onset cluster and a singleton coda segment. We offer a proposal based on Baertsch (2002) accounting for this similarity and show how it captures a number of observations which have defied previous explanation. In accounting for the similarity of patterning between the second member of an onset and a coda consonant, we propose to augment Prince & Smolensky's (P&S, 1993/2002) Margin Hierarchy so as to distinguish between structural positions that prefer low sonority and those that prefer high sonority. P&S's Margin Hierarchy, which gives preference to segments of low sonority, applies to singleton onsets; this is our M1 hierarchy. Our proposed M2 hierarchy applies both to the second member of an onset and to a singleton coda. The M2 hierarchy differs from the M1 hierarchy in giving preference to consonants of high sonority. Splitting the Margin Hierarchy into the M1 and M2 hierarchies allows us to explain typological, phonotactic, and acquisitional observations that have defied previous explanation. In Section 2 of this paper, we briefly provide background on the links that tie together the second member of an onset and a singleton coda. In Section 3, we review P&S's Margin Hierarchy, showing that it becomes problematic when extended to coda consonants. We then offer our proposal for a split margin hierarchy. Section 4 extends the split margin approach to complex onsets. We then show how it is able to account for various typological, phonotactic, and acquisitional observations. In Section 5, we will conclude the paper by briefly sketching how the split margin approach enables us to analyze syllable contact phenomena without requiring a specific syllable contact constraint (or additional hierarchy) or reference to an external sonority scale.
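The division of labour between the M1 and M2 hierarchies can be sketched with toy penalty functions. The sonority scale and the numeric "penalties" below are illustrative assumptions, not the authors' constraint rankings.

```python
# Toy illustration of the split Margin Hierarchy: the M1 position
# (a singleton onset, or the first member of an onset cluster) prefers
# LOW sonority, while the M2 position (the second member of an onset
# cluster, or a singleton coda) prefers HIGH sonority. The sonority
# scale and penalties are illustrative assumptions.

SONORITY = {'p': 1, 's': 2, 'n': 3, 'r': 4, 'a': 5}

def m1_penalty(seg):
    # higher sonority is worse in M1 (singleton-onset) position
    return SONORITY[seg]

def m2_penalty(seg):
    # lower sonority is worse in M2 (second onset slot / coda) position
    return max(SONORITY.values()) - SONORITY[seg]

# /pr/ pairs a low-sonority M1 with a high-sonority M2: a cheap onset.
print(m1_penalty('p') + m2_penalty('r'))  # 1 + 1 = 2
# /rp/ reverses both preferences and is penalized accordingly.
print(m1_penalty('r') + m2_penalty('p'))  # 4 + 4 = 8
```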