This article deals with the Tashlhiyt dialect of Berber (henceforth TB) spoken in the southern part of Morocco. In TB, words may consist entirely of consonants without vowels and sometimes of only voiceless obstruents, e.g. tft#tstt "you rolled it (fem)". In this study we have carried out acoustic, video-endoscopic and phonological analyses to answer the following question: is schwa, which may function as a syllable nucleus, a segment at the level of phonetic representations in TB? Video-endoscopic films were made of one male native speaker of TB, producing a list of forms consisting entirely of voiceless obstruents. The same list was produced by 7 male native speakers of TB for the acoustic analysis. The phonological analysis is based on the behaviour of vowels with respect to the phonological rule of assibilation. This study shows the absence of schwa vowels in forms consisting of voiceless obstruents.
This paper shows that several typologically unrelated languages share the tendency to avoid voiced sibilant affricates. This tendency is explained by appealing to the phonetic properties of the sounds, and in particular to their aerodynamic characteristics. On the basis of experimental evidence it is shown that conflicting air pressure requirements for maintaining voicing and frication are responsible for the avoidance of voiced affricates. In particular, the air pressure released from the stop phase of the affricate is too high to maintain voicing, which in consequence leads to a devoicing of the frication part.
What governs phonology
(2000)
The goal of this paper is to survey the accent systems of the indigenous languages of Africa. Although roughly one third of the world's languages are spoken in Africa, this continent has tended to be underrepresented in earlier stress and accent typology surveys, like Hyman (1977). The present survey aims to fill that gap. Two main contributions to the typology of accent are made by this study of African languages. First, it confirms Hyman's (1977) earlier finding that (stem-)initial and penult are the most common positions, cross-linguistically, to be assigned main stress. Further, it shows that not only stress but also tone and segment distribution can define prominence asymmetries which are best analyzed in terms of accent.
This paper presents a preliminary survey of the positions and prosodies associated with Wh-questions in two Bantu languages spoken in Malawi. The paper shows that the two languages are similar in requiring focused subjects to be clefted. Both also require 'which' questions and 'because of what' questions to be clefted or fronted. However, for other non-subjects Tumbuka rather uniformly imposes an IAV (immediately after the verb) requirement, while Chewa does not. In both languages, we found a strong tendency for there to be a prosodic phrase break following the Wh-word. In Tumbuka, this break follows from the general phrasing algorithm of the language, while in Chewa, I propose that the break can be best understood as following from the inherent prominence of Wh-words.
West Slavic accentuation
(2009)
At the time of the earliest reconstructible dialectal divergences, which belong to the Late Middle Slavic period of my chronology (stages 7.0 - 8.0 of Kortlandt 1989a, 2003, 2008), the West Slavic languages represented the most conservative part of the Slavic dialects (cf. Kortlandt 1982b: 191 and 2003: 231).
I argue in this study that consonantal strength shifts can be explained through positional bans on features, expressed over positions marked as weak at a given level of prosodic structure, usually the metrical foot. This approach might be characterized as "templatic" in the sense that it seeks to explain positional restrictions and distributional patterns relative to independently motivated, fixed prosodic elements. In this sense, it follows Dresher & Lahiri's (1991) idea of metrical coherence in phonological systems, namely, "[T]hat grammars adhere to syllabic templates and metrical patterns of limited types, and that these patterns persist across derivations and are available to a number of different processes ... " (251). [...] The study is structured as follows: section 1 presents a typology of distributional asymmetries based on data from unrelated languages, demonstrating that the stress foot of each of these languages determines the contexts of neutralization and weakening of stops. Section 2 elaborates the notion of a template, exploring some of its formal properties, while section 3 presents templatic analyses of data from English and German. Section 4 explores the properties of weak positions, especially weak onsets, in more detail, including discussion of templates in phonological acquisition. Section 5 summarizes and concludes the study.
Vowel dispersion in Truku
(2004)
This study investigates the dispersion of vowel space in Truku, an endangered Austronesian language in Taiwan. Adaptive Dispersion (Liljencrants and Lindblom, 1972; Lindblom, 1986, 1990) proposes that the distinctive sounds of a language tend to be positioned in phonetic space in a way that maximizes perceptual contrast. For example, languages with large vowel inventories tend to expand the overall acoustic vowel space. Adaptive Dispersion predicts that the distance between the point vowels will increase with the size of a language's vowel inventory. Thus, the available acoustic vowel space is utilized in a way that maintains maximal auditory contrast.
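The prediction in the last sentence can be made concrete with a toy computation. This is a minimal sketch; the formant values below are illustrative placeholders, not Truku measurements or data from the study.

```python
import math

def point_vowel_perimeter(vowels):
    """Perimeter of the /i a u/ point-vowel triangle in (F1, F2) space (Hz)."""
    i, a, u = vowels["i"], vowels["a"], vowels["u"]
    return math.dist(i, a) + math.dist(a, u) + math.dist(u, i)

# Illustrative (invented) formant values for a three- and a five-vowel
# inventory; the larger inventory places its point vowels more peripherally.
three_vowel = {"i": (280, 2300), "a": (750, 1300), "u": (310, 800)}
five_vowel = {"i": (250, 2500), "e": (450, 2100), "a": (800, 1300),
              "o": (480, 900), "u": (300, 700)}

# Adaptive Dispersion predicts the larger inventory stretches the triangle.
print(point_vowel_perimeter(three_vowel) < point_vowel_perimeter(five_vowel))
```

Note that the perimeter of the point-vowel triangle, rather than the mean pairwise distance over all vowels, is the quantity that tracks the prediction: adding mid vowels can lower the mean pairwise distance even while the acoustic space expands.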
The contribution of von Kempelen's "Mechanism of Speech" to the 'phonetic sciences' will be analyzed with respect to his theoretical reasoning on speech and speech production on the one hand and on the other in connection with his practical insights during his struggle in constructing a speaking machine. Whereas in his theoretical considerations von Kempelen's view is focussed on the natural functioning of the speech organs – cf. his membraneous glottis model – in constructing his speaking machine he clearly orientates himself towards the auditory result – cf. the bag pipe model for the sound generator used for the speaking machine instead. Concerning vowel production his theoretical description remains questionable, but his practical insight that vowels and speech sounds in general are only perceived correctly in connection with their surrounding sounds – i.e. the discovery of coarticulation – is clearly a milestone in the development of the phonetic sciences: He therefore dispenses with the Kratzenstein tubes, although they might have been based on more thorough acoustic modelling.
Finally, von Kempelen's model of speech production will be discussed in relation to the discussion of the acoustic nature of vowels afterwards [Willis and Wheatstone as well as von Helmholtz and Hermann in the 19th century and Stumpf, Chiba & Kajiyama as well as Fant and Ungeheuer in the 20th century].
Dutch has a three-way contrast in labiodental sounds, which causes problems for native speakers of German in their acquisition of Dutch, since German contrasts only two labiodentals. The present study investigates the perception of the Dutch labiodental fricative system by German L2 learners of Dutch and shows that native Germans with no or little knowledge of the Dutch language categorize the Dutch labiodental voiced fricative and approximant as their native voiced fricative. Advanced learners, however, succeed in acquiring a category for the voiced fricative, illustrating that plasticity in the perception of a second language develops with the amount of exposure to the language.
The present study argues that variation across listeners in the perception of a non-native contrast is due to two factors: the listener-specific weighting of auditory dimensions and the listener-specific construction of new segmental representations. The interaction of both factors is shown to take place in the perception grammar, which can be modelled within an OT framework. These points are illustrated with the acquisition of the Dutch three-member labiodental contrast [V v f] by German learners of Dutch, focussing on four types of learners from the perception study by Hamann and Sennema (2005a).
In this paper, we report on an experiment showing how the introduction of prosodic information from detailed syntactic structures into synthetic speech leads to better disambiguation of structurally ambiguous sentences. Using modifier attachment (MA) ambiguities and subject/object fronting (OF) in German as test cases, we show that prosody which is automatically generated from deep syntactic information provided by an HPSG generator can lead to considerable disambiguation effects, and can even override a strong semantics-driven bias. The architecture used in the experiment, consisting of the LKB generator running a large-scale grammar for German, a syntax-prosody interface module, and the speech synthesis system MARY is shown to be a valuable platform for testing hypotheses in intonation studies.
The present study, based on a typological survey of ca. 70 languages, offers a systematization of consonantal insertions by classifying them into three main types: grammatical, phonetic, and prosodic insertions. The three epenthesis types essentially differ from each other in terms of preferred sounds, domains of application, the role of segmental context, their occurrence cross-linguistically, the extent of variation and phonetic explication.
The present investigation is significantly different from other analyses of consonantal epentheses in the sense that it neither invokes markedness nor diachronic state of the processes under discussion. Instead, it considers the different nature of the epenthetic segments by referring to the representational levels and/or domains which are relevant for their appearance.
It has been hypothesized that sounds which are less perceptible are more likely to be altered than more salient sounds, the rationale being that the loss of information resulting from a change in a sound which is difficult to perceive is not as great as the loss resulting from a change in a more salient sound. Kohler (1990) suggested that the tendency to reduce articulatory movements is countered by perceptual and social constraints, finding that fricatives are relatively resistant to reduction in colloquial German. Kohler hypothesized that this is due to the perceptual salience of fricatives, a hypothesis which was supported by the results of a perception experiment by Hura, Lindblom, and Diehl (1992). These studies showed that the relative salience of speech sounds is relevant to explaining phonological behavior. An additional factor is the impact of different acoustic environments on the perceptibility of speech sounds. Steriade (1997) found that voicing contrasts are more common in positions where more cues to voicing are available. The P-map, proposed by Steriade (2001a, b), allows the representation of varying salience of segments in different contexts. Many researchers have posited a relationship between speech perception and phonology. The purpose of this paper is to provide experimental evidence for this relationship, drawing on the case of Turkish /h/ deletion.
This paper summarizes our research efforts in functional modelling of the relationship between the acoustic properties of vowels and perceived vowel quality. Our model is trained on 164 short steady-state stimuli. We measured F1, F2, and additionally F0 since the effect of F0 on perceptual vowel height is evident. 40 phonetically skilled subjects judged vowel quality using the Cardinal Vowel diagram. The main focus is on refining the model and describing its transformation properties between the F1/F2 formant chart and the Cardinal Vowel diagram. An evaluation of the model based on 48 additional vowels showed the generalizability of the model and confirmed that it predicts perceived vowel quality with sufficient accuracy.
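The mapping described above can be sketched as a toy function from formant values to coordinates in a vowel diagram. All coefficients below are hypothetical placeholders, not the fitted parameters of the model in the paper; the sketch only illustrates the general idea that perceptual height tracks the F1-F0 distance while backness tracks F2.

```python
import math

def vowel_quality(f0, f1, f2):
    """Toy mapping from formants (Hz) to (height, backness) in a unit square.

    Height uses the F1-F0 interval in semitones (reflecting the effect of F0
    on perceived vowel height); backness uses log-scaled F2. The constants
    are hypothetical, chosen only to keep typical vowels inside [0, 1].
    """
    semitones_above_f0 = 12 * math.log2(f1 / f0)
    height = max(0.0, min(1.0, 1.0 - (semitones_above_f0 - 6) / 18))
    backness = max(0.0, min(1.0, math.log2(f2 / 600) / 2))
    return height, backness  # 1.0 = close, 1.0 = front

h_i, b_i = vowel_quality(120, 280, 2300)  # [i]-like stimulus
h_a, b_a = vowel_quality(120, 750, 1300)  # [a]-like stimulus
```

On any such mapping, an [i]-like stimulus should come out both closer and fronter than an [a]-like one; the actual model in the paper is fitted to the judgements of the 40 trained listeners rather than hand-set.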
In this article we propose that there are two universal properties for phonological stop assibilations, namely (i) assibilations cannot be triggered by /i/ unless they are also triggered by /j/, and (ii) voiced stops cannot undergo assibilations unless voiceless ones do. The article presents typological evidence from assibilations in 45 languages supporting both (i) and (ii). It is argued that assibilations are to be captured in the Optimality Theoretic framework by ranking markedness constraints grounded in perception which penalize sequences like [ti] ahead of a faith constraint which militates against the change from /t/ to some sibilant sound. The occurring language types predicted by (i) and (ii) will be shown to involve permutations of the rankings between several different markedness constraints and the one faith constraint. The article demonstrates that there exist several logically possible assibilation types which are ruled out because they would involve illicit rankings.
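The effect of ranking a markedness constraint against [ti] above a faithfulness constraint can be illustrated with a toy OT evaluator. Constraint names, candidates, and the string-based violation counts are illustrative simplifications, not taken from the article.

```python
def violations(candidate, underlying):
    """Violation counts for two toy constraints."""
    return {
        "*TI": candidate.count("ti"),  # markedness: penalise surface [ti]
        "IDENT": sum(1 for u, s in zip(underlying, candidate) if u != s),  # faithfulness
    }

def optimal(underlying, candidates, ranking):
    """Strict constraint ranking as lexicographic comparison of profiles."""
    profiles = {c: violations(c, underlying) for c in candidates}
    return min(candidates, key=lambda c: tuple(profiles[c][con] for con in ranking))

# With *TI >> IDENT the stop assibilates; with the reverse ranking it survives.
print(optimal("ti", ["ti", "si"], ranking=("*TI", "IDENT")))  # si
print(optimal("ti", ["ti", "si"], ranking=("IDENT", "*TI")))  # ti
```

Lexicographic tuple comparison captures strict domination: a single violation of the highest-ranked constraint outweighs any number of violations further down, which is exactly what lets permutations of the ranking generate the attested assibilation types.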
This paper describes the processing of MRI and CT images needed for developing a 3D linear articulatory model of the velum. The 3D surface that defines each organ of the vocal and nasal tracts is extracted from MRI and CT images recorded on a subject uttering a corpus of artificially sustained French vowels and consonants. First, the 2D contours of the organs were manually extracted from the corresponding images, expanded into 3D contours, and aligned in a common 3D coordinate system. Then, for each organ, a generic mesh was chosen and fitted by elastic deformation to each of the 46 3D shapes of the corpus. This finally resulted in a set of organ surfaces sampled with the same number of 3D vertices for each articulation, which is appropriate for Principal Component Analysis or linear decomposition. The analysis of these data has uncovered two main uncorrelated articulatory degrees of freedom for the velum's movement. The associated parameters are used to control the model. We have in particular investigated the question of a possible correlation between jaw/tongue movement and velum movement, and found no correlation beyond that present in the corpus.
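The linear-decomposition step can be sketched on synthetic data. The array sizes mirror the description above (46 articulations, vertices sampled identically across shapes), but the random data are stand-ins, not the MRI/CT corpus; only the PCA-by-SVD mechanics are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 46 articulations, each a mesh of 200 vertices in 3D,
# flattened to one row, driven by two underlying degrees of freedom plus noise.
n_shapes, n_vertices = 46, 200
latent = rng.normal(size=(n_shapes, 2))            # two articulatory parameters
basis = rng.normal(size=(2, n_vertices * 3))       # their effect on vertices
shapes = latent @ basis + 0.01 * rng.normal(size=(n_shapes, n_vertices * 3))

# PCA via SVD of the centred data matrix.
centred = shapes - shapes.mean(axis=0)
_, s, _ = np.linalg.svd(centred, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

print(round(explained[:2].sum(), 3))
```

When two uncorrelated parameters genuinely drive the vertex data, the first two components absorb nearly all the variance, which is the pattern the paper reports for velum movement.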
"The documentation of... descriptive generalizations is sometimes clearer and more accessible when expressed in terms of a detailed formal reconstruction, but only in the rare and happy case that the formalism fits the data so well that the resulting account is clearer and easier to understand than the list of categories of facts that it encodes.... [If not], subsequent scholars must often struggle to decode a description in an out-of-date formal framework so as to work back to... the facts.... which they can re-formalize in a new way. Having experienced this struggle often ourselves, we have decided to accommodate our successors by providing them directly with a plainer account." (Akinlabi & Liberman 2000:24)
The purpose of this paper is to provide a unified (i.e. independent of lexical categories) account of Persian stress. I show that by differentiating word- and phrase-level stress rules, one can account for the superficial differences exemplified in (1) above and many of the stipulations suggested by previous scholars. The paper is organized as follows. In section 1, I look at nouns and adjectives and propose a rule that would account for their stress pattern. In section 2, I extend the stress rule to verbs and show the problem this category poses to our generalization. The main proposal of this paper is discussed in section 3. I introduce the phrasal stress rule in Persian and show that by differentiating word-level and phrase-level stress rules, one can come to a unified account of Persian stress. Section 4 deals with some problematic cases for the proposed generalization and discusses some tentative solutions and their theoretical consequences. Section 5 concludes the paper.
In the research field initiated by Liljencrants & Lindblom in 1972, we illustrate the possibility of giving substance to phonology, predicting the structure of phonological systems with nonphonological principles, be they listener-oriented (perceptual contrast and stability) or speaker-oriented (articulatory contrast and economy). We proposed for vowel systems the Dispersion-Focalisation Theory (Schwartz et al., 1997b). With the DFT, we can predict vowel systems using two competing perceptual constraints weighted with two parameters, λ and α respectively. The first aims at increasing auditory distances between vowel spectra (dispersion); the second aims at increasing the perceptual salience of each spectrum through formant proximities (focalisation). We also introduced new variants based on research in physics - namely, phase space (λ,α) and polymorphism of a given phase, or superstructures in phonological organisations (Vallée et al., 1999) - which allow us to generate 85.6% of 342 UPSID systems from 3- to 7-vowel qualities. No similar theory for consonants seems to exist yet. Therefore we present in detail a typology of consonants, and then suggest ways to explain the predominance of plosives over fricatives and of voiceless over voiced consonants by i) comparing them with language acquisition data at the babbling stage and looking at the capacity to acquire relatively different linguistic systems in relation with the main degrees of freedom of the articulators; ii) showing that the places "preferred" for each manner are at least partly conditioned by the morphological constraints that facilitate or complicate, make possible or impossible the needed articulatory gestures, e.g. the complexity of the articulatory control for voicing and the aerodynamics of fricatives. A rather strict coordination between the glottis and the oral constriction is needed to produce acceptable voiced fricatives (Mawass et al., 2000).
We determine that the region where the combinations of Ag (glottal area) and Ac (constriction area) values results in a balance between the voice and noise components is indeed very narrow. We thus demonstrate that some of the main tendencies in the phonological vowel and consonant structures of the world’s languages can be explained partly by sensorimotor constraints, and argue that actually phonology can take part in a theory of Perception-for-Action-Control.
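The competition between dispersion and focalisation described above can be sketched as an energy function over candidate vowel systems. This is a deliberately simplified version: the roles given to λ and α below are approximations of the published definitions, and the Bark-scale coordinates are invented for illustration.

```python
from itertools import combinations
import math

def dft_energy(system, lam=0.3, alpha=0.3):
    """Schematic Dispersion-Focalisation energy (lower is better).

    `system` maps vowel labels to (F1, F2) in Bark. The dispersion term sums
    inverse-square inter-vowel distances, with lam down-weighting the F2 axis;
    the focalisation term rewards F1-F2 proximity within each vowel. Both
    terms are simplifications of Schwartz et al.'s formulation.
    """
    def dist(a, b):
        return math.sqrt((a[0] - b[0]) ** 2 + lam * (a[1] - b[1]) ** 2)

    dispersion = sum(1.0 / dist(a, b) ** 2
                     for a, b in combinations(system.values(), 2))
    focalisation = -alpha * sum(1.0 / (f2 - f1) ** 2
                                for f1, f2 in system.values())
    return dispersion + focalisation

# Invented Bark values: a peripheral /i a u/ system vs. a crowded central one.
peripheral = {"i": (2.5, 13.5), "a": (7.5, 10.5), "u": (3.0, 6.0)}
crowded = {"v1": (4.0, 9.0), "v2": (4.5, 9.5), "v3": (5.0, 10.0)}

print(dft_energy(peripheral) < dft_energy(crowded))
```

Minimising such an energy over candidate systems is what lets the theory derive inventory shapes from perceptual principles rather than stipulating them.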
This paper addresses remarks made by Flemming (2003) to the effect that his analysis of the interaction between retroflexion and vowel backness is superior to that of Hamann (2003b). While Hamann maintained that retroflex articulations are always back, Flemming adduces phonological as well as phonetic evidence to prove that retroflex consonants can be non-back and even front (i.e. palatalised). The present paper, however, shows that the phonetic evidence fails under closer scrutiny. A closer consideration of the phonological evidence shows, by making a principled distinction between articulatory and perceptual drives, that a reanalysis of Flemming’s data in terms of unviolated retroflex backness is not only possible but also simpler with respect to the number of language-specific stipulations.
The study investigates the contribution of tactile and auditory feedback in the adaptation of /s/ towards a palatal prosthesis. Five speakers were recorded via electromagnetic articulography, at first without the prosthesis, then with the prosthesis and auditory feedback masked, and finally with the prosthesis and auditory feedback available. Tongue position, jaw position and acoustic centre of gravity of productions of the sound were measured. The results show that the initial adaptation attempts without auditory feedback are dependent on the prosthesis type and directed towards reaching the original tongue palate contact pattern. Speakers with a prosthesis which retracted the alveolar ridge retracted the tongue. Speakers with a prosthesis which did not change the place of the alveolar ridge did not retract the tongue. All speakers lowered the jaw. In a second adaptation step with auditory feedback available speakers reorganised tongue and jaw movements in order to produce more subtle acoustic characteristics of the sound such as the high amplitude noise which is typical for sibilants.
The purpose of this paper is to show how WH questions interact with the complex tonal phenomena which we summarized and illustrated in Hyman & Katamba (2010). As will be seen, WH questions have interesting syntactic and tonal properties of their own, including a WH-specific intonation. The paper is structured as follows: After an introduction in §1, we successively discuss non-subject WH questions (§2), subject WH questions (§3), and clefted WH questions (§4). We then briefly present a tense which is specifically limited to WH questions (§5), and conclude with a brief summary in §6.
The paper considers a phenomenon in Korean where ambiguity in the written language is resolved prosodically. An LFG analysis is provided which extends the proposals of Mycock and Lowe (2013) to Korean, based on experimental evidence on the prosodic expression of focus in Korean which challenges the phrase-boundary based account of Jun and Oh (1996), and suggests that considering expanded pitch range may give a more robust account of focus expression.
Since the advent of nonlinear phonology many linguists have either assumed or argued explicitly that many languages have words in which one or more segments do not belong structurally to the syllable. Three adjectives commonly used to describe such consonants are 'extrasyllabic', 'extrametrical' or 'stray'. Other authors refer to such segments as belonging to the 'appendix'. [...] Various non-linear representations have been proposed to express the 'extrasyllabicity' of segments [...]. The ones I am concerned with in the present article analyze [...] consonants [...] structurally as being outside of the syllable [...]. For transparency I ignore here both subsyllabic constituency as well as higher level prosodic constituents to which the stray consonants are sometimes assumed to attach. For reasons to be made clear below I refer to syllables [...] in which the stray consonant is situated outside of the syllable, as abstract syllables.
In this paper we focus on the similarities tying together the second segment of an onset cluster and a singleton coda segment. We offer a proposal based on Baertsch (2002) accounting for this similarity and show how it captures a number of observations which have defied previous explanation. In accounting for the similarity of patterning between the second member of an onset and a coda consonant, we propose to augment Prince & Smolensky's (P&S, 1993/2002) Margin Hierarchy so as to distinguish between structural positions that prefer low sonority and those that prefer high sonority. P&S's Margin Hierarchy, which gives preference to segments of low sonority, applies to singleton onsets; this is our M1 hierarchy. Our proposed M2 hierarchy applies both to the second member of an onset and to a singleton coda. The M2 hierarchy differs from the M1 hierarchy in giving preference to consonants of high sonority. Splitting the Margin Hierarchy into the M1 and M2 hierarchies allows us to explain typological, phonotactic, and acquisitional observations that have defied previous explanation. In Section 2 of this paper, we briefly provide background on the links that tie together the second member of an onset and a singleton coda. In Section 3, we review P&S's Margin Hierarchy, showing that it becomes problematic when extended to coda consonants. We then offer our proposal for a split margin hierarchy. Section 4 extends the split margin approach to complex onsets. We then show how it is able to account for various typological, phonotactic, and acquisitional observations. In Section 5, we will conclude the paper by briefly sketching how the split margin approach enables us to analyze syllable contact phenomena without requiring a specific syllable contact constraint (or additional hierarchy) or reference to an external sonority scale.
The unfolding discussion will focus on the internal representation of turbulent sounds in the phonology of German as well as pinpoint the special status of the prime defining the quality of turbulence. It will also be argued that this prime is capable of entering into special types of licensing relations, which results in specific phonetic manifestations of forms. We shall compare the effects of two processes attested in German: consonant degemination and spirantisation with a view to revealing the role of the turbulence-defining element in the two operations. Furthermore, our attention will be focused on the workings of the Obligatory Contour Principle which, as will be shown below, exerts decisive impact on prime interplay and consequently the phonetic realization of sounds and words. We shall see that segmental identity is contingent on the language-specific interpretation of inter-element bonds.
Aware of the importance of prime autonomy in determining the manifestation of sounds, let us start with a brief outline of the fundamental segment structure principles offered by the theory of Phonological Government.
This paper proposes a representation for syllable structure in HPSG, building on previous work by Bird and Klein (1994), Höhle (1999), and Crysmann (2002). Instead of mapping segments into a separate part of the sign where syllables are represented structurally, information about syllabification is encoded directly in the list of segments, the core of the PHONOLOGY value. Higher level prosodic phenomena can operate on a more abstract representation of the sequence of syllables derived from the syllabified segments list. The approach is illustrated with analyses of some word-boundary phenomena conditioned by syllable structure in French.
The morpho-syntax of relative clauses in Sotho-Tswana is relatively well-described in the literature. Prosodic characteristics, such as tone, have received far less attention in the existing descriptions. After reviewing the basic morpho-syntactic and semantic features of relative clauses in Tswana, the current paper sets out to present and discuss prosodic aspects. These comprise tone specifications of relative clause markers such as the demonstrative pronoun that acts as the relative pronoun, relative agreement concords and the relative suffix. Further prosodic aspects dealt with in the current article are tone alternations at the juncture of relative pronoun and head noun, and finally the tone patterns of the finite verbs in the relative clause. The article aims at providing the descriptive basis from which to arrive at generalizations concerning the prosodic phrasing of relative clauses in Tswana.
Símákonde is an Eastern Bantu language (P23) spoken by immigrant Mozambican communities in Zanzibar and on the Tanzanian mainland. Like other Makonde dialects and other Eastern and Southern Bantu languages (Hyman 2009), it has lost the historical Proto-Bantu vowel length contrast and now has a regular phrase-final stress rule, which causes a predictable bimoraic lengthening of the penultimate syllable of every Prosodic Phrase. The study of the prosody/syntax interface in Símákonde relative clauses requires taking into account the following elements: the relationship between the head and the relative verb, the conjoint/disjoint verbal distinction and the various phrasing patterns of Noun Phrases. Within Símákonde noun phrases, depending on the nature of the modifier, three different phrasing situations are observed: a modifier or modifiers may (i) be required to phrase with the head noun, (ii) be required to phrase separately, or (iii) optionally phrase with the head noun.
This paper tests three current theories of the phonology-syntax interface – Truckenbrodt (1995), Pak (2008) and Cheng & Downing (2007, 2009) – on the prosody of relative clauses in Chewa. Relative clauses, especially restrictive relative clauses, provide an ideal data set for comparing these theories, as they each make distinct predictions about the optimal phrasing. We show that the asymmetrical phase-edge based approach developed to account for similar Zulu prosodic phrasing by Cheng & Downing also best accounts for the Chewa data.
This article presents new experimental data on the phonetics of syllabic /l/ and syllabic /n/ in Southern British English and then proposes a new phonological account of their behaviour. Previous analyses (Chomsky and Halle 1968:354, Gimson 1989, Gussmann 1991 and Wells 1995) have proposed that syllabic /l/ and syllabic /n/ should be analysed in a uniform manner. The data presented here, however, show that syllabic /l/ and syllabic /n/ behave in very different ways, and in light of this, a unitary analysis is not justified. Instead, a proposal is made that syllabic /l/ and syllabic /n/ have different phonological structures, and that these different phonological structures explain their different phonetic behaviours.
This article is organised as follows: first, a general background is given to the phenomenon of syllabic consonants, both cross-linguistically and specifically in Southern British English. In §3 a set of experiments designed to elicit syllabic consonants is described, and in §4 the results of these experiments are presented. §5 contains a discussion of data published by earlier authors concerning syllabic consonants in English. In §6 a theoretical phonological framework is set out, and in §7 the results of the experiments are analysed in the light of this framework. In the concluding section, some outstanding issues are addressed and several areas for further research are suggested.
At the outset of this dissertation one might pose the question why retroflex consonants should still be of interest for phonetics and for phonological theory since ample work on this segmental class already exists. Bhat (1973) conducted a quite extensive study on retroflexion that treated the geographical spread of this class, some phonological processes its members can undergo, and the phonetic motivation for these processes. Furthermore, several phonological representations of retroflexes have been proposed in the framework of Feature Geometry, as in work by Sagey (1986), Pulleyblank (1989), Gnanadesikan (1993), and Clements (2001). Most recently, Steriade (1995, 2001) has discussed the perceptual cues of retroflexes and has argued that the distribution of these cues can account for the phonotactic restrictions on retroflexes and their assimilatory behaviour. Purely phonetically oriented studies such as Dixit (1990) and Simonsen, Moen & Cowen (2000) have shown the large articulatory variation that can be found for retroflexes and hint at the insufficiency of existing definitions.
This article examines the motivation for phonological stop assibilations (e.g. /t/ realized as [ts], [s] or [tʃ] before /i/) from a phonetic perspective. Hall & Hamann (2003) posit the following two implications: (a) assibilation cannot be triggered by /i/ unless it is also triggered by /j/, and (b) voiced stops cannot undergo assibilation unless voiceless ones do. In the following study we present the results of three acoustic experiments with native speakers of German and Polish which support implications (a) and (b). In our experiments we measured the friction phase after the /t d/ release before the onset of the following high front vocoid for four speakers each of German and Polish. We found that the friction phase of /tj/ was significantly longer than that of /ti/, and that the friction phase of /t/ in the assibilation context was significantly longer than that of /d/.
This exercise explores the historical relationship between tone, aspiration, prefixes and stem initial consonants in Tibetan. (The stem-initial consonant is underlined in those words that have prefixes or initial clusters; [ts], [tsh], [tç], [tçh], etc., all count as single consonants.) Other phonetic developments are also explored.
S.R. Ramsey writes (1979: 162): "The patterning of tone marks in Old Kyoto texts divides the vocabulary into virtually the same classes as those arrived at by comparing the accent distinctions found in the modern dialects. This means that the Old Kyoto dialect had a pitch system similar to that of proto-Japanese. The standard language of the Heian period may not actually be the ancestor of all the dialects of Japan, but at least as far as the accent system is concerned, it is close enough to the proto system to be used as a working model. The significance of this fact is important: It means that each of the dialects included in the comparison has as much to tell, at least potentially, as any other dialect about Old Kyoto accent."
The current paper explores these two sorts of phonetic explanations of the relationship between syllabic position and the voicing contrast in American English. It has long been observed that the contrast between, for example, /p/ and /b/ is expressed differently depending on the position of the stop with respect to the vowel. Preceding a vowel within a syllable, the contrast is largely one of aspiration: /p/ is aspirated, while /b/ is voiceless, or in some dialects voiced or even an implosive. Following a vowel within a syllable, both /p/ and /b/ tend to lack voicing in the closure, and the contrast is expressed largely by dynamic differences in the transition between the previous vowel and the stop. Here, vowel and closure duration are negatively correlated, such that /p/ has a shorter vowel and a longer closure duration. This difference is often enhanced by the addition of glottalization to /p/. Beyond these, there are further differences connected to higher-level organization involving stress and foot edges. To make the current discussion more tractable, we will restrict ourselves to the two conditions (CV and VC) laid out above.
Articulatory token-to-token variability depends not only on linguistic aspects such as the phoneme inventory of a given language but also on speaker-specific morphological and motor constraints. As has been noted previously (Perkell 1997, Mooshammer et al. 2004), speakers with coronally high "dome-shaped" palates exhibit more articulatory variability than speakers with coronally low "flat" palates. One explanation is based on perception-oriented control by the speaker: the influence of articulatory variation on the cross-sectional area, and consequently on the acoustics, should be greater for flat palates than for dome-shaped ones. This should force speakers with flat palates to place their tongue very precisely, whereas speakers with dome-shaped palates might tolerate greater variability. A second explanation could be a greater amount of lateral linguo-palatal contact for flat palates, holding the tongue in position. In this study both hypotheses were tested.
In order to investigate the influence of palate shape on the variability of the acoustic output, a modelling study was carried out. In parallel, an EPG experiment was conducted to investigate the relationship between palate shape, articulatory variability and linguo-palatal contact.
Results from the modelling study suggest that the acoustic variability resulting from a given amount of articulatory variability is higher for flat palates than for dome-shaped ones. Results from the EPG experiment with 20 speakers show that (1) speakers with a flat palate exhibit very low articulatory variability, whereas speakers with a dome-shaped palate vary; (2) there is less articulatory variability when there is extensive linguo-palatal contact; and (3) there is no relationship between the amount of lateral linguo-palatal contact and palate shape. The results suggest a relationship between token-to-token variability and palate shape; however, the two parameters do not simply correlate. Rather, speakers with a flat palate always show low variability because of constraints on the variability range of the acoustic output, whereas speakers with a dome-shaped palate may choose their degree of variability. Since linguo-palatal contact and variability correlate, it is assumed that linguo-palatal contact is a means of reducing articulatory variability.
The Indo-Uralic verb
(2002)
C.C. Uhlenbeck made a distinction between two components of Proto-Indo-European, which he called A and B (1935a: 133ff.). The first component comprises pronouns, verbal roots, and derivational suffixes, and may be compared with Uralic, whereas the second component contains isolated words, such as numerals and most underived nouns, which have a different source. The wide attestation of the Indo-European numerals must be attributed to the development of trade resulting from the increased mobility which was the primary cause of the Indo-European expansions. Numerals do not belong to the basic vocabulary of a neolithic culture, as is clear from their absence in Proto-Uralic (cf. also Collinder 1965: 112) and from the spread of Chinese numerals throughout East Asia. Though Uhlenbeck objects to the term “substratum” for his B complex, I think that it is a perfectly appropriate denomination.
One of the most important insights of Optimality Theory (Prince & Smolensky 1993) is that phonological processes can be reduced to the interaction between faithfulness and universal markedness principles. In the most constrained version of the theory, all phonological processes should be thus reducible. This hypothesis is tested by alternations that appear to be phonological but in which universal markedness principles appear to play no role. If we are to pursue the claim that all phonological processes depend on the interaction of faithfulness and markedness, then processes that are not dependent on markedness must lie outside phonology. In this paper I will examine a group of such processes, the initial consonant mutations of the Celtic languages, and argue that they belong entirely to the morphology of the languages, not the phonology.
This study examines the trajectories of dorsal tongue movements during symmetrical /VCa/-sequences, where /V/ was one of the Hungarian long or short vowels /i, a, u/ and C was either the voiceless palatal or the voiceless velar stop. The general aims of this study were to deliver a data-driven account of (a) the evidence for the division between dorsality and coronality and (b) the potential role coarticulatory factors could play in the relative frequency of velar palatalization processes in genetically unrelated languages. The results suggest a clear-cut demarcation between the behaviour of purely dorsal velars and coronal palatals. Moreover, factors arising from general movement economy might contribute to the palatalization processes mentioned.
In this paper we provide an account of the historical development of Polish and Russian sibilants. The arguments provided here are of theoretical interest because they show that (i) certain allophonic rules are driven by the need to keep contrasts perceptually distinct, (ii) (unconditioned) sound changes result from needs of perceptual distinctiveness, and (iii) perceptual distinctiveness can be extended to a class of consonants, i.e. the sibilants. The analysis is cast within Dispersion Theory by providing phonetic and typological data supporting the perceptual distinctiveness claims we make.