This article deals with the Tashlhiyt dialect of Berber (henceforth TB), spoken in the southern part of Morocco. In TB, words may consist entirely of consonants, without vowels, and sometimes of only voiceless obstruents, e.g. tft#tstt "you rolled it (fem.)". In this study we carried out acoustic, video-endoscopic and phonological analyses to answer the following question: is schwa, which may function as a syllable nucleus, a segment at the level of phonetic representation in TB? Video-endoscopic films were made of one male native speaker of TB producing a list of forms consisting entirely of voiceless obstruents. The same list was produced by seven male native speakers of TB for the acoustic analysis. The phonological analysis is based on the behaviour of vowels with respect to the phonological rule of assibilation. The study shows the absence of schwa vowels in forms consisting of voiceless obstruents.
What governs phonology
(2000)
The goal of this paper is to survey the accent systems of the indigenous languages of Africa. Although roughly one third of the world’s languages are spoken in Africa, the continent has tended to be underrepresented in earlier stress and accent typology surveys, such as Hyman (1977); the present survey aims to fill that gap. This study of African languages makes two main contributions to the typology of accent. First, it confirms Hyman's (1977) earlier finding that the (stem-)initial and penultimate positions are the most common positions, cross-linguistically, to be assigned main stress. Further, it shows that not only stress but also tone and segment distribution can define prominence asymmetries which are best analyzed in terms of accent.
I argue in this study that consonantal strength shifts can be explained through positional bans on features, expressed over positions marked as weak at a given level of prosodic structure, usually the metrical foot. This approach might be characterized as "templatic" in the sense that it seeks to explain positional restrictions and distributional patterns relative to independently motivated, fixed prosodic elements. In this sense, it follows Dresher & Lahiri's (1991) idea of metrical coherence in phonological systems, namely, "[T]hat grammars adhere to syllabic templates and metrical patterns of limited types, and that these patterns persist across derivations and are available to a number of different processes ... " (251). [...] The study is structured as follows: section 1 presents a typology of distributional asymmetries based on data from unrelated languages, demonstrating that the stress foot of each of these languages determines the contexts of neutralization and weakening of stops. Section 2 elaborates the notion of a template, exploring some of its formal properties, while section 3 presents templatic analyses of data from English and German. Section 4 explores the properties of weak positions, especially weak onsets, in more detail, including discussion of templates in phonological acquisition. Section 5 summarizes and concludes the study.
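The weak-position ban described above can be rendered as a toy procedure, using American English /t/-flapping as the illustrative weakening process; the left-to-right trochaic footing and the syllable strings below are simplified assumptions, not the study's actual analyses.

```python
# A toy rendering of a positional ban stated over the weak position of
# a trochaic foot: an onset /t/ loses its stop features (surfaces as a
# flap) in the weak branch of the foot.

def parse_trochees(syllables):
    """Group syllables into (strong, weak) trochaic feet, left to right;
    a lone final syllable forms a degenerate one-member foot."""
    return [tuple(syllables[i:i + 2]) for i in range(0, len(syllables), 2)]

def weaken(syllables):
    """Positional ban: an onset /t/ may not keep its stop features in a
    weak foot position, so it surfaces as a flap there."""
    out = []
    for foot in parse_trochees(syllables):
        out.append(foot[0])                        # strong position: faithful
        for syl in foot[1:]:                       # weak position(s)
            out.append(syl.replace("t", "ɾ", 1))   # lenite the onset stop
    return out

# 'atom': strong [a] + weak [tom] -> the medial /t/ flaps.
print(weaken(["a", "tom"]))  # -> ['a', 'ɾom']
```

The point of the sketch is that the environment of weakening is stated over foot structure, not over segmental context alone.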
The present study, based on a typological survey of ca. 70 languages, offers a systematization of consonantal insertions by classifying them into three main types: grammatical, phonetic, and prosodic insertions. The three epenthesis types differ essentially from each other in terms of preferred sounds, domains of application, the role of segmental context, their cross-linguistic frequency, the extent of variation, and their phonetic explanation.
The present investigation differs significantly from other analyses of consonantal epenthesis in that it invokes neither markedness nor the diachronic status of the processes under discussion. Instead, it considers the different nature of the epenthetic segments by referring to the representational levels and/or domains relevant for their appearance.
It has been hypothesized that sounds which are less perceptible are more likely to be altered than more salient sounds, the rationale being that the loss of information resulting from a change in a sound which is difficult to perceive is not as great as the loss resulting from a change in a more salient sound. Kohler (1990) suggested that the tendency to reduce articulatory movements is countered by perceptual and social constraints, finding that fricatives are relatively resistant to reduction in colloquial German. Kohler hypothesized that this is due to the perceptual salience of fricatives, a hypothesis which was supported by the results of a perception experiment by Hura, Lindblom, and Diehl (1992). These studies showed that the relative salience of speech sounds is relevant to explaining phonological behavior. An additional factor is the impact of different acoustic environments on the perceptibility of speech sounds. Steriade (1997) found that voicing contrasts are more common in positions where more cues to voicing are available. The P-map, proposed by Steriade (2001a, b), allows the representation of varying salience of segments in different contexts. Many researchers have posited a relationship between speech perception and phonology. The purpose of this paper is to provide experimental evidence for this relationship, drawing on the case of Turkish /h/ deletion.
In this article we propose that there are two universal properties for phonological stop assibilations, namely (i) assibilations cannot be triggered by /i/ unless they are also triggered by /j/, and (ii) voiced stops cannot undergo assibilations unless voiceless ones do. The article presents typological evidence from assibilations in 45 languages supporting both (i) and (ii). It is argued that assibilations are to be captured in the Optimality Theoretic framework by ranking markedness constraints grounded in perception which penalize sequences like [ti] ahead of a faith constraint which militates against the change from /t/ to some sibilant sound. The occurring language types predicted by (i) and (ii) will be shown to involve permutations of the rankings between several different markedness constraints and the one faith constraint. The article demonstrates that there exist several logically possible assibilation types which are ruled out because they would involve illicit rankings.
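The ranking logic described above can be sketched as a small evaluation procedure; the constraint names (*TI, IDENT) and the candidate sets below are illustrative assumptions, not the article's actual formalization.

```python
# A minimal OT evaluation sketch for stop assibilation, assuming a
# hypothetical perceptually grounded markedness constraint *TI and a
# faithfulness constraint IDENT against changing /t/ to a sibilant.

def star_ti(underlying, candidate):
    """Markedness: one violation per perceptually weak [ti]/[tj] sequence."""
    return candidate.count("ti") + candidate.count("tj")

def ident(underlying, candidate):
    """Faithfulness: one violation per segment changed from the input."""
    return sum(1 for u, c in zip(underlying, candidate) if u != c)

def evaluate(underlying, candidates, ranking):
    """Return the candidate with the lexicographically smallest violation
    profile, i.e. the optimal output under the given constraint ranking."""
    return min(candidates,
               key=lambda cand: tuple(c(underlying, cand) for c in ranking))

# *TI >> IDENT: assibilation applies, /ti/ surfaces as [si].
print(evaluate("ti", ["ti", "si"], [star_ti, ident]))  # -> si
# IDENT >> *TI: the faithful candidate [ti] wins instead.
print(evaluate("ti", ["ti", "si"], [ident, star_ti]))  # -> ti
```

Permuting the ranking between the markedness constraints and the single faithfulness constraint is what generates the attested typology; rankings that would be needed for the unattested types are, on the article's account, illicit.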
The purpose of this paper is to provide a unified (i.e. independent of lexical categories) account of Persian stress. I show that by differentiating word- and phrase-level stress rules, one can account for the superficial differences exemplified in (1) above and many of the stipulations suggested by previous scholars. The paper is organized as follows. In Section 1, I look at nouns and adjectives and propose a rule that accounts for their stress pattern. In Section 2, I extend the stress rule to verbs and show the problem this category poses for the generalization. The main proposal of this paper is discussed in Section 3: I introduce the phrasal stress rule in Persian and show that by differentiating word-level and phrase-level stress rules, one can arrive at a unified account of Persian stress. Section 4 deals with some problematic cases for the proposed generalization and discusses some tentative solutions and their theoretical consequences. Section 5 concludes the paper.
Working in the research field initiated by Liljencrants & Lindblom in 1972, we illustrate the possibility of giving substance to phonology, predicting the structure of phonological systems from nonphonological principles, be they listener-oriented (perceptual contrast and stability) or speaker-oriented (articulatory contrast and economy). For vowel systems we proposed the Dispersion-Focalisation Theory (Schwartz et al., 1997b). With the DFT, we can predict vowel systems using two competing perceptual constraints weighted by two parameters, λ and α respectively. The first aims at increasing auditory distances between vowel spectra (dispersion); the second aims at increasing the perceptual salience of each spectrum through formant proximities (focalisation). We also introduced new variants based on concepts from physics, namely the phase space (λ, α) and the polymorphism of a given phase, or superstructures in phonological organisation (Vallée et al., 1999), which allow us to generate 85.6% of the 342 UPSID systems with 3 to 7 vowel qualities. No comparable theory yet exists for consonants. We therefore present a detailed typology of consonants and then suggest ways to explain the predominance of plosives over fricatives and of voiceless over voiced consonants by i) comparing them with language-acquisition data at the babbling stage and examining the capacity to acquire rather different linguistic systems in relation to the main degrees of freedom of the articulators; and ii) showing that the places “preferred” for each manner are at least partly conditioned by the morphological constraints that facilitate or complicate, make possible or impossible, the needed articulatory gestures, e.g. the complexity of the articulatory control for voicing and the aerodynamics of fricatives. A rather strict coordination between the glottis and the oral constriction is needed to produce acceptable voiced fricatives (Mawass et al., 2000).
We determine that the region where combinations of Ag (glottal area) and Ac (constriction area) values result in a balance between the voice and noise components is indeed very narrow. We thus demonstrate that some of the main tendencies in the phonological vowel and consonant structures of the world’s languages can be explained partly by sensorimotor constraints, and argue that phonology can indeed take part in a theory of Perception-for-Action-Control.
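The dispersion/focalisation competition can be illustrated with a toy scoring function; the energy terms, the (λ, α) weights, and the Bark-scale formant values below are simplified assumptions for illustration, not the actual DFT equations of Schwartz et al. (1997b).

```python
# A toy Dispersion-Focalisation score: lower energy = better system.
# Dispersion penalizes pairs of vowels with similar spectra;
# focalisation rewards vowels whose formants converge (salience).

from itertools import combinations

def dft_energy(system, lam=1.0, alpha=0.3):
    """`system` maps vowel symbols to illustrative (F1, F2) values in Bark."""
    # Dispersion term: inverse squared distance between every vowel pair.
    dispersion = sum(1.0 / ((f1a - f1b) ** 2 + (f2a - f2b) ** 2)
                     for (f1a, f2a), (f1b, f2b)
                     in combinations(system.values(), 2))
    # Focalisation term: reward F1-F2 proximity within each vowel.
    focalisation = sum(1.0 / (f2 - f1) ** 2 for f1, f2 in system.values())
    return lam * dispersion - alpha * focalisation

# Illustrative formant values for two 3-vowel systems: a peripheral
# /i a u/ system versus a crowded front-heavy /i e a/ system.
peripheral = {"i": (2.5, 14.0), "a": (7.0, 10.5), "u": (2.8, 6.0)}
crowded    = {"i": (2.5, 14.0), "e": (4.0, 12.5), "a": (7.0, 10.5)}
print(dft_energy(peripheral) < dft_energy(crowded))  # -> True
```

The dispersed system scores lower, matching the cross-linguistic preference for peripheral /i a u/; varying λ and α moves the system through the phase space mentioned above.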
In this paper we focus on the similarities tying together the second segment of an onset cluster and a singleton coda segment. We offer a proposal based on Baertsch (2002) accounting for this similarity and show how it captures a number of observations which have defied previous explanation. In accounting for the similarity of patterning between the second member of an onset and a coda consonant, we propose to augment Prince & Smolensky's (P&S, 1993/2002) Margin Hierarchy so as to distinguish between structural positions that prefer low sonority and those that prefer high sonority. P&S's Margin Hierarchy, which gives preference to segments of low sonority, applies to singleton onsets; this is our M1 hierarchy. Our proposed M2 hierarchy applies both to the second member of an onset and to a singleton coda. The M2 hierarchy differs from the M1 hierarchy in giving preference to consonants of high sonority. Splitting the Margin Hierarchy into the M1 and M2 hierarchies allows us to explain typological, phonotactic, and acquisitional observations that have defied previous explanation. In Section 2 of this paper, we briefly provide background on the links that tie together the second member of an onset and a singleton coda. In Section 3, we review P&S's Margin Hierarchy, showing that it becomes problematic when extended to coda consonants. We then offer our proposal for a split margin hierarchy. Section 4 extends the split margin approach to complex onsets. We then show how it is able to account for various typological, phonotactic, and acquisitional observations. Section 5 concludes the paper by briefly sketching how the split margin approach enables us to analyze syllable contact phenomena without requiring a specific syllable contact constraint (or additional hierarchy) or reference to an external sonority scale.
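The split-margin idea can be sketched as two cost functions over a single sonority scale, with M1 preferring low sonority and M2 preferring high sonority; the scale values and cost arithmetic below are illustrative assumptions, not Baertsch's (2002) actual hierarchies.

```python
# A sketch of the M1/M2 split: one sonority scale, two opposite
# preferences. M1 governs singleton onsets; M2 governs the second
# member of an onset cluster and singleton codas.

SONORITY = {"t": 1, "s": 2, "n": 3, "l": 4, "r": 5, "w": 6}

def m1_cost(seg):
    """M1: lower sonority is better, so cost rises with sonority."""
    return SONORITY[seg]

def m2_cost(seg):
    """M2: higher sonority is better, so cost falls with sonority."""
    return max(SONORITY.values()) - SONORITY[seg]

def onset_cluster_cost(c1, c2):
    """A complex onset fills an M1 slot followed by an M2 slot."""
    return m1_cost(c1) + m2_cost(c2)

# Stop + liquid [tr] is a better complex onset than liquid + stop [rt]:
print(onset_cluster_cost("t", "r") < onset_cluster_cost("r", "t"))  # -> True
# A coda fills an M2 slot, so sonorant codas beat obstruent codas:
print(m2_cost("n") < m2_cost("t"))  # -> True
```

Because the same M2 cost applies to both positions, the sketch captures the patterning shared by second onset members and codas without any constraint specific to either position.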