The present study, based on a typological survey of ca. 70 languages, offers a systematization of consonantal insertions by classifying them into three main types: grammatical, phonetic, and prosodic insertions. The three epenthesis types differ essentially from each other in their preferred sounds, domains of application, the role of segmental context, their cross-linguistic occurrence, the extent of variation, and their phonetic explanation.
The present investigation differs significantly from other analyses of consonantal epenthesis in that it invokes neither markedness nor the diachronic status of the processes under discussion. Instead, it considers the different nature of the epenthetic segments by referring to the representational levels and/or domains relevant for their appearance.
It has been hypothesized that sounds which are less perceptible are more likely to be altered than more salient sounds, the rationale being that the loss of information resulting from a change in a sound which is difficult to perceive is not as great as the loss resulting from a change in a more salient sound. Kohler (1990) suggested that the tendency to reduce articulatory movements is countered by perceptual and social constraints, finding that fricatives are relatively resistant to reduction in colloquial German. Kohler hypothesized that this is due to the perceptual salience of fricatives, a hypothesis which was supported by the results of a perception experiment by Hura, Lindblom, and Diehl (1992). These studies showed that the relative salience of speech sounds is relevant to explaining phonological behavior. An additional factor is the impact of different acoustic environments on the perceptibility of speech sounds. Steriade (1997) found that voicing contrasts are more common in positions where more cues to voicing are available. The P-map, proposed by Steriade (2001a, b), allows the representation of varying salience of segments in different contexts. Many researchers have posited a relationship between speech perception and phonology. The purpose of this paper is to provide experimental evidence for this relationship, drawing on the case of Turkish /h/ deletion.
This paper summarizes our research efforts in functional modelling of the relationship between the acoustic properties of vowels and perceived vowel quality. Our model is trained on 164 short steady-state stimuli. We measured F1, F2, and additionally F0 since the effect of F0 on perceptual vowel height is evident. 40 phonetically skilled subjects judged vowel quality using the Cardinal Vowel diagram. The main focus is on refining the model and describing its transformation properties between the F1/F2 formant chart and the Cardinal Vowel diagram. An evaluation of the model based on 48 additional vowels showed the generalizability of the model and confirmed that it predicts perceived vowel quality with sufficient accuracy.
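The transformation the abstract describes, from the F1/F2 formant chart to Cardinal Vowel diagram coordinates, can be illustrated in a much-reduced form as an affine map fitted by least squares. This is a hypothetical sketch only: the four training vowels, their formant values, and the diagram coordinates below are invented placeholders, not the study's 164 stimuli, and the real model is more refined than a single linear map.

```python
import numpy as np

# Invented (F0, F1, F2) measurements in Hz for four vowel-like points;
# F0 is included because of its reported effect on perceived vowel height.
acoustic = np.array([
    [130.0, 280.0, 2250.0],   # [i]-like
    [110.0, 280.0,  700.0],   # [u]-like
    [120.0, 700.0, 1300.0],   # [a]-like
    [125.0, 450.0, 1900.0],   # [e]-like
])
# Invented target coordinates in the Cardinal Vowel diagram:
# x = backness (0 front, 1 back), y = height (0 low, 1 high).
diagram = np.array([
    [0.0, 1.0],
    [1.0, 1.0],
    [0.6, 0.0],
    [0.2, 0.66],
])

# Fit an affine map with ordinary least squares: [F0 F1 F2 1] @ W ~ [x y]
X = np.hstack([acoustic, np.ones((len(acoustic), 1))])
W, *_ = np.linalg.lstsq(X, diagram, rcond=None)

def predict(f0, f1, f2):
    """Predict (backness, height) in the vowel diagram from formant values."""
    return np.array([f0, f1, f2, 1.0]) @ W

print(predict(130.0, 280.0, 2250.0))
```

With four training points and four parameters per output dimension, the fit interpolates the training data exactly; evaluating the model on held-out vowels, as the paper does with 48 additional stimuli, is what actually tests generalizability.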
In this article we propose that there are two universal properties for phonological stop assibilations, namely (i) assibilations cannot be triggered by /i/ unless they are also triggered by /j/, and (ii) voiced stops cannot undergo assibilations unless voiceless ones do. The article presents typological evidence from assibilations in 45 languages supporting both (i) and (ii). It is argued that assibilations are to be captured in the Optimality Theoretic framework by ranking markedness constraints grounded in perception which penalize sequences like [ti] ahead of a faith constraint which militates against the change from /t/ to some sibilant sound. The occurring language types predicted by (i) and (ii) will be shown to involve permutations of the rankings between several different markedness constraints and the one faith constraint. The article demonstrates that there exist several logically possible assibilation types which are ruled out because they would involve illicit rankings.
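The ranking logic described above, a perceptually grounded markedness constraint dominating a faithfulness constraint, can be sketched as lexicographic comparison of violation counts. The constraint names (`*TI`, `IDENT(strident)`), the candidate strings, and the violation-counting heuristics below are illustrative assumptions, not the article's actual constraint set.

```python
# Toy Optimality-Theoretic evaluation for assibilation.
# *TI penalises stop + high-vocoid sequences ([ti], [tj]);
# IDENT(strident) penalises the unfaithful change /t/ -> [ts].

def violations(candidate):
    """Count constraint violations for a candidate surface form."""
    return {
        "*TI": candidate.count("ti") + candidate.count("tj"),
        "IDENT(strident)": candidate.count("ts"),
    }

def optimal(candidates, ranking):
    """Pick the candidate whose violation profile, read in ranking
    order, is lexicographically smallest."""
    return min(candidates, key=lambda c: [violations(c)[con] for con in ranking])

# *TI >> IDENT(strident): the assibilated candidate wins
print(optimal(["ti", "tsi"], ["*TI", "IDENT(strident)"]))   # tsi
# Reversed ranking: the faithful candidate wins
print(optimal(["ti", "tsi"], ["IDENT(strident)", "*TI"]))   # ti
```

Permuting the ranking between several such markedness constraints and the single faithfulness constraint is what generates the typology of occurring (and excludes the non-occurring) assibilation types.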
This paper describes the processing of MRI and CT images needed for developing a 3D linear articulatory model of the velum. The 3D surface that defines each organ constituting the vocal and nasal tracts is extracted from MRI and CT images recorded on a subject uttering a corpus of artificially sustained French vowels and consonants. First, the 2D contours of the organs were manually extracted from the corresponding images, expanded into 3D contours, and aligned in a common 3D coordinate system. Then, for each organ, a generic mesh was chosen and fitted by elastic deformation to each of the 46 3D shapes of the corpus. This finally resulted in a set of organ surfaces sampled with the same number of 3D vertices for each articulation, which is appropriate for Principal Component Analysis or linear decomposition. The analysis of these data has uncovered two main uncorrelated articulatory degrees of freedom for the velum's movement. The associated parameters are used to control the model. We have in particular investigated the question of a possible correlation between jaw/tongue movement and velum movement, and have found no more correlation than that already present in the corpus.
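Because every articulation is sampled with the same number of 3D vertices, each mesh can be flattened into one row of a data matrix and decomposed by PCA. The sketch below illustrates only this analysis step, on synthetic data built to have two underlying degrees of freedom; the shapes and values are placeholders, not the actual 46 French articulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_articulations, n_vertices = 46, 50

# Simulate meshes driven by two latent control parameters plus small noise:
# each row is one articulation's vertex coordinates, flattened to length 3V.
basis = rng.normal(size=(2, n_vertices * 3))
controls = rng.normal(size=(n_articulations, 2))
meshes = controls @ basis + 0.01 * rng.normal(size=(n_articulations, n_vertices * 3))

# PCA via SVD of the centred data matrix
centred = meshes - meshes.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(f"variance explained by 2 components: {explained[:2].sum():.3f}")

# Projections onto the two leading components: the parameters that
# would serve as control inputs to a linear articulatory model.
scores = centred @ Vt[:2].T
```

In the paper's setting, finding that two components account for most of the variance is what licenses the claim of two main uncorrelated articulatory degrees of freedom for the velum.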
"The documentation of... descriptive generalizations is sometimes clearer and more accessible when expressed in terms of a detailed formal reconstruction, but only in the rare and happy case that the formalism fits the data so well that the resulting account is clearer and easier to understand than the list of categories of facts that it encodes.... [If not], subsequent scholars must often struggle to decode a description in an out-of-date formal framework so as to work back to... the facts.... which they can re-formalize in a new way. Having experienced this struggle often ourselves, we have decided to accommodate our successors by providing them directly with a plainer account." (Akinlabi & Liberman 2000:24)
The purpose of this paper is to provide a unified (i.e. independent of lexical categories) account of Persian stress. I show that by differentiating word- and phrase-level stress rules, one can account for the superficial differences exemplified in (1) above and many of the stipulations suggested by previous scholars. The paper is organized as follows. In section 1, I look at nouns and adjectives and propose a rule that would account for their stress pattern. In section 2, I extend the stress rule to verbs and show the problem this category poses to our generalization. The main proposal of this paper is discussed in section 3. I introduce the phrasal stress rule in Persian and show that by differentiating word-level and phrase-level stress rules, one can come to a unified account of Persian stress. Section 4 deals with some problematic cases for the proposed generalization and discusses some tentative solutions and their theoretical consequences. Section 5 concludes the paper.
In the research field initiated by Lindblom & Liljencrants in 1972, we illustrate the possibility of giving substance to phonology, predicting the structure of phonological systems with nonphonological principles, be they listener-oriented (perceptual contrast and stability) or speaker-oriented (articulatory contrast and economy). For vowel systems we proposed the Dispersion-Focalisation Theory (Schwartz et al., 1997b). With the DFT, we can predict vowel systems using two competing perceptual constraints weighted with two parameters, λ and α respectively. The first aims at increasing auditory distances between vowel spectra (dispersion); the second aims at increasing the perceptual salience of each spectrum through formant proximities (focalisation). We also introduced new variants based on research in physics, namely phase space (λ,α) and polymorphism of a given phase, or superstructures in phonological organisations (Vallée et al., 1999), which allow us to generate 85.6% of the 342 UPSID systems with 3 to 7 vowel qualities. No similar theory for consonants seems to exist yet. Therefore we present in detail a typology of consonants, and then suggest ways to explain the predominance of plosives over fricatives and of voiceless over voiced consonants by i) comparing them with language acquisition data at the babbling stage and looking at the capacity to acquire relatively different linguistic systems in relation to the main degrees of freedom of the articulators; ii) showing that the places "preferred" for each manner are at least partly conditioned by the morphological constraints that facilitate or complicate, make possible or impossible the needed articulatory gestures, e.g. the complexity of the articulatory control for voicing and the aerodynamics of fricatives. A rather strict coordination between the glottis and the oral constriction is needed to produce acceptable voiced fricatives (Mawass et al., 2000).
We determine that the region where the combinations of Ag (glottal area) and Ac (constriction area) values result in a balance between the voice and noise components is indeed very narrow. We thus demonstrate that some of the main tendencies in the phonological vowel and consonant structures of the world's languages can be explained at least partly by sensorimotor constraints, and argue that phonology can indeed take part in a theory of Perception-for-Action-Control.
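The competition between dispersion and focalisation described above can be sketched as an energy function over a candidate vowel inventory. The formulas below are deliberate simplifications of the DFT (the actual theory works in an auditory space with specific distance and focalisation measures); the weights `lam` and `alpha` and the (F1, F2) values are illustrative assumptions only.

```python
# Toy dispersion-focalisation energy: lower energy = better system.
# Dispersion penalises close vowel pairs; focalisation rewards vowels
# whose formants are close together (focal vowels).

def dft_energy(vowels, lam=1.0, alpha=0.3):
    """vowels: list of (F1, F2) in Bark-like arbitrary units."""
    # dispersion term: sum of inverse squared pairwise distances
    dispersion = 0.0
    for i in range(len(vowels)):
        for j in range(i + 1, len(vowels)):
            d2 = sum((a - b) ** 2 for a, b in zip(vowels[i], vowels[j]))
            dispersion += 1.0 / d2
    # focalisation term: reward (negative cost) for F1/F2 proximity
    focalisation = -sum(1.0 / abs(f2 - f1) for f1, f2 in vowels)
    return lam * dispersion + alpha * focalisation

# a spread-out /i a u/-like triangle vs. a crowded three-vowel system
spread = [(3.0, 13.0), (7.5, 9.0), (3.5, 6.0)]
crowded = [(3.0, 13.0), (3.5, 12.5), (4.0, 12.0)]
print(dft_energy(spread) < dft_energy(crowded))  # True
```

Varying (λ, α) over a phase space and minimising such an energy for each inventory size is, schematically, how the theory generates its predicted vowel systems.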
The purpose of this paper is to show how WH questions interact with the complex tonal phenomena which we summarized and illustrated in Hyman & Katamba (2010). As will be seen, WH questions have interesting syntactic and tonal properties of their own, including a WH-specific intonation. The paper is structured as follows: After an introduction in §1, we successively discuss non-subject WH questions (§2), subject WH questions (§3), and clefted WH questions (§4). We then briefly present a tense which is specifically limited to WH questions (§5), and conclude with a brief summary in §6.
Since the advent of nonlinear phonology, many linguists have either assumed or argued explicitly that many languages have words in which one or more segments do not belong structurally to the syllable. Three adjectives commonly used to describe such consonants are 'extrasyllabic', 'extrametrical', and 'stray'. Other authors refer to such segments as belonging to the 'appendix'. [...] Various non-linear representations have been proposed to express the 'extrasyllabicity' of segments [...]. The ones I am concerned with in the present article analyze [...] consonants [...] structurally as being outside of the syllable [...]. For transparency I ignore here both subsyllabic constituency and the higher-level prosodic constituents to which the stray consonants are sometimes assumed to attach. For reasons to be made clear below, I refer to syllables [...] in which the stray consonant is situated outside of the syllable as abstract syllables.