The paper considers a phenomenon in Korean where ambiguity in the written language is resolved prosodically. An LFG analysis is provided which extends the proposals of Mycock and Lowe (2013) to Korean, based on experimental evidence on the prosodic expression of focus in Korean which challenges the phrase-boundary based account of Jun and Oh (1996), and suggests that considering expanded pitch range may give a more robust account of focus expression.
This paper explores the use of HPSG for modeling historical phonological change and grammaticalization, focusing on the evolution of the pronunciation of word-final consonants in Modern French. The diachronic evidence is presented in detail, and interpreted as two main transitions, first from Old French to Middle French, then from Middle French to the modern language. The data show how the loss of final consonants, originally a phonological development in Middle French, gave rise to the grammaticalized external sandhi phenomenon known as consonant liaison in modern French. The stages of development are analyzed formally as a succession of HPSG lexical schemas in which phonological representations are determined by reference to the immediately following phonological context.
The paper presents an approach to HPSG phonology that accounts for underlying forms of phonemes. It shows some of the issues arising in monostratal analyses of phonology, and proposes a solution based on a notion of underlying representations. The approach presented, partly inspired by Optimality Theory, resolves cases of neutralisation and opacity by formulating constraints which either restrict the surface representation or relate it to the underlying form.
This paper proposes a representation for syllable structure in HPSG, building on previous work by Bird and Klein (1994), Höhle (1999), and Crysmann (2002). Instead of mapping segments into a separate part of the sign where syllables are represented structurally, information about syllabification is encoded directly in the list of segments, the core of the PHONOLOGY value. Higher-level prosodic phenomena can operate on a more abstract representation of the sequence of syllables derived from the syllabified segments list. The approach is illustrated with analyses of some word-boundary phenomena conditioned by syllable structure in French.
In this paper, we report on an experiment showing how the introduction of prosodic information from detailed syntactic structures into synthetic speech leads to better disambiguation of structurally ambiguous sentences. Using modifier attachment (MA) ambiguities and subject/object fronting (OF) in German as test cases, we show that prosody which is automatically generated from deep syntactic information provided by an HPSG generator can lead to considerable disambiguation effects, and can even override a strong semantics-driven bias. The architecture used in the experiment, consisting of the LKB generator running a large-scale grammar for German, a syntax-prosody interface module, and the speech synthesis system MARY is shown to be a valuable platform for testing hypotheses in intonation studies.
Metrical phonology in HPSG
(2006)
This paper proposes a new approach to the prosody-syntax interface in HPSG. Previous approaches to prosody in HPSG (Klein, 2000; Haji-Abdolhosseini, 2003) represent prosodic information by constructing metrical constituent structure in the tradition of Liberman and Prince (1977) and Selkirk (1980). One drawback of this approach is that it does not allow for a direct representation of purely metrical constraints, which are relegated to an unformalized performance component. By contrast, so-called 'grid only' approaches (Prince, 1983; Selkirk, 1984; Delais-Roussarie, 2000) use a single data structure, a metrical grid, to encode prosodic constraints resulting from syntax and constraints of a rhythmic nature.
We first review relevant data from French showing that prosodic constituency is much less constrained by syntactic structure than is predicted by existing approaches. In all but very short utterances, many different prosodic groupings are possible for a given sentence with a determinate information structure, and rhythmic factors determine a preference ordering on the possible groupings. We then present an HPSG implementation of the metrical grid, and propose minimal syntactic constraints on relative prominence, leaving room for noncategorical rhythmic constraints to choose between alternatives. We finish by discussing the interaction of the metrical grid with the rest of the prosodic grammar.
This paper presents a descriptive overview of liaison, giving an idea of the scope of the phenomenon and possible approaches to its analysis. As for the contextual conditions on liaison, in many cases, the traditional notions of obligatory and prohibited liaison do not reflect speakers' actual behavior. It turns out that general syntactic constraints cannot determine the systematic presence or absence of liaison at a given word boundary. At best, specific constraints can be formulated to target particular classes of constructions. To express such constraints, I propose a system of representation in the framework of HPSG. The use of EDGE features (introduced by Miller (1992) for a GPSG treatment of French) provides the necessary link between phrasal descriptions and the properties of phrase-peripheral elements.
This paper builds on Zwicky's (1986) notion of shape condition, that is, a rule that specifies the phonological shape of inflected forms "by reference to triggers at least some of which lie outside the syntactic word". Zwicky observes that "many rules traditionally classified as external sandhi rules are [shape conditions]". They are not phonological rules in the usual sense, since they only apply to specific lexical items and are active within syntactic rather than phonological domains.
Shape conditions are problematic in many standard grammar architectures. On the one hand, they seem to be constraints on lexical entries, while on the other hand, they make reference to the syntactic context. Hayes (1990) has sketched a theory of "precompiled phrasal phonology" in which allomorph choice is conditioned by subcategorization frames in lexical entries. However, his approach is not formalized in any detail, and moreover makes the implicit claim that the relation between a shape condition target and its triggers can be equated with the syntactic relation between a lexical head and its complement. Although this assumption holds good for the Hausa phenomena he addresses, we do not believe that it holds in general.
HPSG appears to offer a promising framework for formalizing something like Hayes' approach, but the standard machinery also makes it hard to distinguish a shape condition trigger from a complement. In order to overcome this difficulty, we develop the notion of phonological context: a feature of signs which allows us to condition allomorphic alternation in terms of (i) the phonological edges, and (ii) the syntactic properties of an expression's immediate syntactic sisters. We show how our analysis deals with four illustrative cases: the indefinite article alternation in English, syncretic liaison forms for possessive pronouns in French, Hausa verb-final vowel shortening, and soft mutation in Welsh nouns.
This paper offers an extensive analysis of the reflexes of the Proto-Indo-European word-initial cluster *sk- in Proto-Slavic. It is argued that the regular reflex of this cluster is Proto-Slavic *x-, but that *sk- was analogically re-introduced in a great number of cases under the influence of prefixed forms and cases where forms with and without the so-called "s-mobile" co-existed in Slavic. This conclusion is in accordance with the fact that *x- < *sk- is far more common in derivationally isolated words that do not occur with prefixes.
This paper presents doublets in the phonology and accentuation of a Kajkavian dialect in central Croatia, where all three major Croatian groups of dialects meet. Inconsistencies in the vowel and consonant systems are also noted. The second part considers the accentual system, its units and their distribution. Many fluctuations were noted, even with respect to retractions and special Kajkavian features. These are explained through influences from neighbouring local dialects and from the urban dialect of Karlovac and Standard Croatian.
The existence of complex clauses in the Amazonian language Pirahã has been controversially debated. We present a novel analysis of field data demonstrating the existence of complex clauses in Pirahã. The data concern the tone of the morpheme 'sai' and stem from a field experiment in which a second-language speaker of Pirahã presented sentences and Pirahã speakers were asked to correct them by saying the correct sentence aloud. The experimental items contained the morpheme 'sai' in two different clausal environments: a nominalizer and a conditional environment, according to Everett's (1986) description. Our phonetic analysis shows an effect of clausal environment on the pitch of 'sai': the native Pirahã speakers pronounced conditional 'sai' with lower pitch than nominalizer 'sai'. We show furthermore that the experimenter's pitch on 'sai' shows the opposite pattern from that of the native Pirahã speakers, and hence the Pirahã speakers' pitch could not just have been copied. The effect of the clausal environment on the tone of 'sai' can be explained by a complex clause analysis of Pirahã, while existing alternative proposals do not explain the difference.
This exercise explores the historical relationship between tone, aspiration, prefixes and stem initial consonants in Tibetan. (The stem-initial consonant is underlined in those words that have prefixes or initial clusters; [ts], [tsh], [tç], [tçh], etc., all count as single consonants.) Other phonetic developments are also explored.
As work like McCarthy (2002: 128) notes, pre-Optimality Theory (OT) phonology was primarily concerned with representations and theories of subsegmental structure. In contrast, the role of representations and choice of structural models has received little attention in OT. Some central representational issues of the pre-OT era have, in fact, become moot in OT (McCarthy 2002: 128). Further, as work like Baković (2007) notes, even for assimilatory processes where representation played a central role in the pre-OT era, constraint interaction now carries the main explanatory burden. Indeed, relatively few studies in OT (e.g., Rose 2000; Hargus & Beavert 2006; Huffmann 2005, 2007; Morén 2006) have argued for the importance of phonological representations. This paper intends to contribute to this work by reanalyzing a set of processes related to vowel harmony in Shimakonde, a Bantu language spoken in Mozambique and Tanzania. These processes are of particular interest, as Liphola’s (2001) study argues that they are derivationally opaque and so not amenable to an OT analysis. I show that the opacity disappears given the proper choice of representations for vowel features and a metrical harmony domain.
The collection of papers in this volume presents results of a collaborative project between the School of Oriental and African Studies (SOAS) in London, the Zentrum für allgemeine Sprachwissenschaft, Typologie und Universalienforschung (ZAS) in Berlin, and the University of Leiden. All three institutions have a strong interest in the linguistics of Bantu languages, and in 2003 decided to set up a network to compare results and to provide a platform for on-going discussion of different topics on which their research interests converged. The project received funding from the British Academy International Networks Programme, and from 2003 to 2006 seven meetings were held at the institutions involved under the title Bantu Grammar: Description and Theory, indicating the shared belief that current research in Bantu is best served by combining the description of new data with theoretically informed analysis. During the life-time of the network, and partly in conjunction with it, larger externally funded Bantu research projects have been set up at all institutions: projects on word-order and morphological marking and on phrasal phonology in Leiden, on pronominal reference, agreement and clitics in Romance and Bantu at SOAS, and on focus in Southern Bantu languages at ZAS. The papers in this volume provide a sampling of the work developed within the network and show, or so we think, how fruitful the sharing of ideas over the last three years has been. While the current British Academy-funded network is coming to an end in 2006, we hope that the cooperative structures we have established will continue to develop - and be expanded - in the future, providing many future opportunities to exchange findings and ideas about Bantu linguistics.
Low-dimensional and speaker-independent linear vocal tract parametrizations can be obtained using the 3-mode PARAFAC factor analysis procedure first introduced by Harshman et al. (1977) and discussed in a series of subsequent papers in the Journal of the Acoustical Society of America (Jackson (1988), Nix et al. (1996), Hoole (1999), Zheng et al. (2003)). Nevertheless, some questions of importance have been left unanswered; e.g., none of the papers using this method has provided a consistent interpretation of the terms usually referred to as "speaker weights". This study attempts an exploration of what influences their reliability as a first step towards their consistent interpretation. With this in mind, we undertook a systematic comparison of the classical PARAFAC1 algorithm with a relaxed version of it, PARAFAC2. This comparison was carried out on two different corpora acquired by the articulograph, which varied in vowel qualities, consonantal contexts, and the paralinguistic features accent and speech rate. The difference between these statistical approaches can grossly be described as follows: In PARAFAC1, observation units pertain to the same set of variables and the observation units are comparable. In PARAFAC2, observations pertain to the same set of variables, but observation units are not comparable. Such a situation can easily be conceived in the setting we are describing: the operationalization we chose relies on the comparability of fleshpoint data acquired from different speakers, which need not be a good assumption due to influences like sensor placement and morphological conditions.
In particular, the comparison between the two different approaches is carried out by means of so-called "leverages" on different component matrices, a notion originating in regression analysis, calculated as v = diag(A(AᵀA)⁻¹Aᵀ) and delivering information on how "influential" a particular loading matrix is for the model. This analysis could potentially be carried out component by component, but we confined ourselves to effects on the global factor structure. For vowels, the most influential loadings are those for the tense cognates of non-palatal vowels. For speakers, the most prominent result is the relative absence of effects of the paralinguistic variables. Results generally indicate that there is quite little influence of the model specification (i.e. PARAFAC1 or PARAFAC2) on vowel and subject components. The patterns for the articulators indicate that there are strong differences between speakers with respect to the most influential measurement as revealed by PARAFAC2: In particular, the most influential y-contribution is the tongue-back for some talkers and the tongue-dorsum for other speakers. With respect to the speaker weights, again, the leverage patterns are very similar for both PARAFAC-versions. These patterns converge with the results of the loading plots, where the articulator profiles seem to be most altered by the use of PARAFAC2. These findings, in general, are interpreted as evidence for the reliability of the PARAFAC1 speaker weights.
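The leverage computation used here is standard regression machinery, so it can be sketched in a few lines of NumPy. The loading matrix below is random, purely for illustration (it is not data from the study); only the formula v = diag(A(AᵀA)⁻¹Aᵀ) is taken from the text:

```python
import numpy as np

def leverages(A: np.ndarray) -> np.ndarray:
    """Leverages v = diag(A (A^T A)^{-1} A^T) for a loading matrix A.

    Each entry indicates how influential the corresponding row
    (e.g. a vowel, speaker, or articulator loading) is for the model.
    """
    # Pseudo-inverse for numerical stability if A^T A is ill-conditioned.
    G = np.linalg.pinv(A.T @ A)
    # diag(A G A^T) without forming the full hat matrix.
    return np.einsum('ij,jk,ik->i', A, G, A)

# Hypothetical loading matrix: 5 observation units, 3 components.
A = np.random.default_rng(0).normal(size=(5, 3))
v = leverages(A)
# For a full-column-rank A, the leverages sum to the number of components.
print(v.sum())  # ≈ 3.0
```

The einsum avoids building the n×n hat matrix, which matters only for large loading matrices but keeps the computation explicit.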
This work investigates laryngeal and supralaryngeal correlates of the voicing contrast in alveolar obstruent production in German. It further studies the laryngeal-oral co-ordination observed for such productions. Three different positions of the obstruents are taken into account: the stressed, syllable-initial position, the post-stressed intervocalic position, and the post-stressed, word-final position. For the latter the phonological rule of final devoicing applies in German. The different positions are chosen in order to study the following hypotheses:
1. The presence/absence of glottal opening is not a consistent correlate of the voicing contrast in German.
2. Supralaryngeal correlates are also involved in the contrast.
3. Supralaryngeal correlates can compensate for the lack of distinction in laryngeal adjustment.
Including the word final position is motivated by the question whether neutralization in word final position would be complete or whether some articulatory residue of the contrast can be found.
Two experiments are carried out. The first experiment investigates glottal abduction in co-ordination with tongue-palate contact patterns by means of simultaneous recordings of transillumination, fiberoptic films and Electropalatography (EPG). The second experiment focuses on supralaryngeal correlates of alveolar stops studied by means of Electromagnetic Articulography (EMA) simultaneously with EPG. Three German native speakers participated in both recordings. Results of this study provide evidence that the first hypothesis holds true for alveolar stops when different positions are taken into account. In fricative production it is also confirmed since voiceless and voiced fricatives are most of the time realised with glottal abduction. Additionally, supralaryngeal correlates are involved in the voicing contrast under two perspectives. First, laryngeal and supralaryngeal movements are well synchronised in voiceless obstruent production, particularly in the stressed position. Second, supralaryngeal correlates occur especially in the post-stressed intervocalic position. Results are discussed with respect to the phonetics-phonology interface, to the role of timing and its possible control, to the interarticulatory co-ordination, and to stress as 'localised hyperarticulation'.
This special issue of the ZAS Papers in Linguistics contains a collection of papers from the French-German Thematic Summerschool on "Cognitive and physical models of speech production, and speech perception and of their interaction".
Organized by Susanne Fuchs (ZAS Berlin), Jonathan Harrington (IPdS Kiel), Pascal Perrier (ICP Grenoble) and Bernd Pompino-Marschall (HUB and ZAS Berlin), and funded by the German-French University in Saarbrücken, this summerschool was held from September 19th to 24th, 2004, on the Baltic Sea coast at the Heimvolkshochschule Lubmin (Germany), with 45 participants from Germany, France, Great Britain, Italy and Canada. The scientific program of the summerschool, reprinted at the end of this volume, included 11 keynote presentations by invited speakers, 21 oral presentations and a poster session (8 presentations). The names and addresses of all participants are also given in the back matter of this volume.
All participants were offered the opportunity to publish an extended version of their presentation in the ZAS Papers in Linguistics. All submitted papers underwent a review and editing procedure by external experts and the organizers of the summerschool. As is usual for a summerschool, the papers present work in progress, work at a more advanced stage, or tutorials. They are ordered alphabetically by their first author's name, which fortunately means that this special issue starts out with the paper that won the award for best pre-doctoral presentation: Sophie Dupont, Jérôme Aubin and Lucie Ménard with "A study of the McGurk effect in 4 and 5-year-old French Canadian children".
It has been established since Kanerva’s work that focus conditions phrasing – directly or indirectly – in several other Bantu languages, e.g. Chimwiini (Kisseberth 2007, Downing 2002, Kisseberth & Abasheikh 2004), Xhosa (Jokweni 1995, Zerbian 2004), Chitumbuka (Downing 2006, 2007), Zulu (Cheng & Downing 2006, Downing 2007), Bemba (Kula 2007), etc.
In this paper, I will argue that focus also conditions phrasing in Shingazidja, a Bantu language spoken on Grande Comore (or Ngazidja, the largest island of the Comoros).
Many works have been dedicated to the tonology of Shingazidja. The bases of the system were first identified by Tucker & Bryan (1970) and reanalyzed by Philippson (1988). Later, Cassimjee & Kisseberth (1989, 1992, 1993, 1998) provided a very convincing analysis of the whole system of the language, and my own research (Patin 2007a) shows a great correspondence with their results. However, little attention has been paid by these authors or others (Jouannet 1989, Rey 1990, Philippson 2005) to the phonology-pragmatics interface, especially to the relation between focus and phrasing. This paper thus proposes to explore this question. It will be claimed that focus, besides syntax, has an influence on phrasing in Shingazidja.
Tone as a distinctive feature used to differentiate not only words but also clause types, is a characteristic feature of Bantu languages. In this paper we show that Bemba relatives can be marked with a low tone in place of a segmental relative marker. This low tone strategy of relativization, which imposes a restrictive reading of relatives, manifests a specific phonological phrasing that can be differentiated from that of non-restrictives. The paper shows that the resultant phonological phrasing favours a head-raising analysis of relativization. In this sense, phonology can be shown to inform syntactic analyses.
We present the results of an experimental study which targets prosodic correlates of subclausal quotation marks. We found that written sentences containing passages enclosed by quotation marks were read aloud in a manner that significantly differs in prosody from spoken realizations of corresponding disquoted counterparts. However, we also observed that such prosodic marking of subclausal quotation was not strong enough to survive subsequent back-translation into written language: there was no correlation between the presence/absence of quotation marks in the original written examples and the presence/absence of quotation marks in corresponding back-translations from oral renditions. We investigated three different kinds of uses of quotation marks and found no systematic difference between them with respect to prosodic marking.
Rate effects on aerodynamics of intervocalic stops: evidence from real speech data and model data
(2008)
This paper is a first attempt towards a better understanding of the aerodynamic properties of speech production and their potential control. In recent years, studies on intraoral pressure in speech have been rather rare; most studies concern airflow development instead. However, intraoral pressure is a crucial factor for analysing the production of various sounds.
In this paper, we focus on the intraoral pressure development during the production of intervocalic stops.
Two experimental methodologies are presented and compared with each other: real speech data recorded from four German native speakers, and model data obtained from a mechanical replica which makes it possible to reproduce the main physical mechanisms occurring during phonation. The two methods are applied to a study of the influence of speech rate on aerodynamic properties.
The unfolding discussion will focus on the internal representation of turbulent sounds in the phonology of German as well as pinpoint the special status of the prime defining the quality of turbulence. It will also be argued that this prime is capable of entering into special types of licensing relations, which results in specific phonetic manifestations of forms. We shall compare the effects of two processes attested in German, consonant degemination and spirantisation, with a view to revealing the role of the turbulence-defining element in the two operations. Furthermore, our attention will be focused on the workings of the Obligatory Contour Principle which, as will be shown below, exerts decisive impact on prime interplay and consequently the phonetic realization of sounds and words. We shall see that segmental identity is contingent on the language-specific interpretation of inter-element bonds.
Aware of the importance of prime autonomy in determining the manifestation of sounds, let us start with a brief outline of the fundamental segment structure principles offered by the theory of Phonological Government.
One of the most important insights of Optimality Theory (Prince & Smolensky 1993) is that phonological processes can be reduced to the interaction between faithfulness and universal markedness principles. In the most constrained version of the theory, all phonological processes should be thus reducible. This hypothesis is tested by alternations that appear to be phonological but in which universal markedness principles appear to play no role. If we are to pursue the claim that all phonological processes depend on the interaction of faithfulness and markedness, then processes that are not dependent on markedness must lie outside phonology. In this paper I will examine a group of such processes, the initial consonant mutations of the Celtic languages, and argue that they belong entirely to the morphology of the languages, not the phonology.
The papers in this volume were presented at the eleventh meeting of the Austronesian Formal Linguistics Association (AFLA 11), held from April 23-25 at the Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany. The conference was organized by Hans-Martin Gärtner, Joachim Sabel, and myself, as part of the research project Clause Structure and Adjuncts in Austronesian Languages. We gratefully acknowledge the financial support by the German Research Foundation (Deutsche Forschungsgemeinschaft). We would like to thank Wayan Arka, Abigail Cohn, Laura Downing, Silke Hamann, S J Hannahs, Ray Harlow, Nikolaus Himmelmann, Yuchau E. Hsiao, Lillian Huang, Ed Keenan, Glyne Piggott, Charles Randriamasimanana, Joszef Szakos, Barbara Stiebels, Jane Tang, Lisa Travis, Naomi Tsukida, Sam Wang, Elizabeth Zeitoun, Kie Ross Zuraw, and Marzena Zygis for reviewing the abstracts. We are thankful to Mechthild Bernhard, Jenny Ehrhardt, Fabienne Fritzsche, Theódóra Torfadóttir and Tue Trinh for their help during the conference. I would like to thank Theódóra for providing essential editorial assistance.
This article presents new experimental data on the phonetics of syllabic /l/ and syllabic /n/ in Southern British English and then proposes a new phonological account of their behaviour. Previous analyses (Chomsky and Halle 1968:354, Gimson 1989, Gussmann 1991 and Wells 1995) have proposed that syllabic /l/ and syllabic /n/ should be analysed in a uniform manner. The data presented here, however, show that syllabic /l/ and syllabic /n/ behave in very different ways, and in light of this a unitary analysis is not justified. Instead, a proposal is made that syllabic /l/ and syllabic /n/ have different phonological structures, and that these different phonological structures explain their different phonetic behaviours.
This article is organised as follows: first, a general background is given to the phenomenon of syllabic consonants, both cross-linguistically and specifically in Southern British English. In §3 a set of experiments designed to elicit syllabic consonants is described, and in §4 the results of these experiments are presented. §5 contains a discussion of data published by earlier authors concerning syllabic consonants in English. In §6 a theoretical phonological framework is set out, and in §7 the results of the experiments are analysed in the light of this framework. In the concluding section, some outstanding issues are addressed and several areas for further research are suggested.
It has been hypothesized that sounds which are less perceptible are more likely to be altered than more salient sounds, the rationale being that the loss of information resulting from a change in a sound which is difficult to perceive is not as great as the loss resulting from a change in a more salient sound. Kohler (1990) suggested that the tendency to reduce articulatory movements is countered by perceptual and social constraints, finding that fricatives are relatively resistant to reduction in colloquial German. Kohler hypothesized that this is due to the perceptual salience of fricatives, a hypothesis which was supported by the results of a perception experiment by Hura, Lindblom, and Diehl (1992). These studies showed that the relative salience of speech sounds is relevant to explaining phonological behavior. An additional factor is the impact of different acoustic environments on the perceptibility of speech sounds. Steriade (1997) found that voicing contrasts are more common in positions where more cues to voicing are available. The P-map, proposed by Steriade (2001a, b), allows the representation of varying salience of segments in different contexts. Many researchers have posited a relationship between speech perception and phonology. The purpose of this paper is to provide experimental evidence for this relationship, drawing on the case of Turkish /h/ deletion.
This article deals with the Tashlhiyt dialect of Berber (henceforth TB) spoken in the southern part of Morocco. In TB, words may consist entirely of consonants without vowels and sometimes of only voiceless obstruents, e.g. tft#tstt "you rolled it (fem)". In this study we have carried out acoustic, video-endoscopic and phonological analyses to answer the following question: is schwa, which may function as syllabic, a segment at the level of phonetic representations in TB? Video-endoscopic films were made of one male native speaker of TB, producing a list of forms consisting entirely of voiceless obstruents. The same list was produced by 7 male native speakers of TB for the acoustic analysis. The phonological analysis is based on the behaviour of vowels with respect to the phonological rule of assibilation. This study shows the absence of schwa vowels in forms consisting of voiceless obstruents.
The current paper explores these two sorts of phonetic explanations of the relationship between syllabic position and the voicing contrast in American English. It has long been observed that the contrast between, for example, /p/ and /b/ is expressed differently, depending on the position of the stop with respect to the vowel. Preceding a vowel within a syllable, the contrast is largely one of aspiration. /p/ is aspirated, while /b/ is voiceless, or in some dialects voiced or even an implosive. Following a vowel within a syllable, both /p/ and /b/ tend to lack voicing in the closure and the contrast is expressed largely by dynamic differences in the transition between the previous vowel and the stop. Here, vowel and closure duration are negatively correlated such that /p/ has a shorter vowel and longer closure duration. This difference is often enhanced by the addition of glottalization to /p/. In addition to these differences, there are further differences connected to higher-level organization involving stress and feet edges. To make the current discussion more tractable, we will restrict ourselves to the two conditions (CV and VC) laid out above.
In this study, cross-dialectal variation in the use of the acoustic cues of VOT and F0 to mark the laryngeal contrast in Korean stops is examined in Chonnam Korean and Seoul Korean. Prior experimental results (Han & Weitzman, 1970; Hardcastle, 1973; Jun, 1993 & 1998; Kim, C., 1965) show that pitch values at the onset of the vowel following the target stop play a supplementary role to VOT in distinguishing the three contrastive laryngeal categories. F0 contours are determined in part by the intonational system of a language, which raises the question of how the intonational system interacts with phonological contrasts. Intonational differences might be linked to dissimilar patterns in the use of the complementary acoustic cues of VOT and F0. This hypothesis is tested with six Korean speakers, three from Seoul and three from Chonnam. The results show that Chonnam Korean exhibits more of a 3-way VOT distinction and a 2-way F0 distribution, whereas Seoul Korean shows more of a 3-way F0 distribution and a 2-way VOT distinction. The two acoustic cues are complementary in that one cue marks the 3-way contrast rather faithfully while the other marks it less distinctively. These variations also appear not to be completely arbitrary, but linked to the phonological characteristics of the dialects. Chonnam Korean, in which the initial tonal realization of the accentual phrase is expected to be more salient, tends to minimize the F0 perturbation effect of the preceding consonants by tolerating more overlap in the F0 distribution; the 3-way VOT distribution of Chonnam Korean can then be understood as a compensatory durational sensitivity. Lacking these characteristics, Seoul Korean shows a relatively more overlapping VOT distribution and a more clearly 3-way F0 distribution.
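The notion of a cue supporting a "3-way" versus "2-way" distinction can be made concrete with a small sketch. The Python snippet below uses invented, purely illustrative VOT ranges (not the study's measurements) and counts how many categories remain distinct once categories with overlapping cue ranges are merged:

```python
# Illustrative sketch: how many laryngeal categories does a single acoustic
# cue keep apart? Categories whose cue ranges overlap are merged; a cue gives
# a full 3-way distinction only if no pair of ranges overlaps.

def ranges_overlap(a, b):
    """True if the (min, max) ranges a and b overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

def n_way(categories):
    """Number of distinct groups left after merging categories whose
    cue ranges overlap (3 = full 3-way contrast)."""
    groups = [{name} for name in categories]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                if any(ranges_overlap(categories[x], categories[y])
                       for x in groups[i] for y in groups[j]):
                    groups[i] |= groups.pop(j)
                    merged = True
                    break
            if merged:
                break
    return len(groups)

# Hypothetical VOT ranges in ms (fortis, lenis, aspirated) for two dialects.
chonnam_vot = {"fortis": (5, 15), "lenis": (30, 60), "aspirated": (70, 110)}
seoul_vot   = {"fortis": (5, 15), "lenis": (40, 90), "aspirated": (60, 120)}

print(n_way(chonnam_vot))  # 3 (all ranges disjoint: 3-way VOT distinction)
print(n_way(seoul_vot))    # 2 (lenis/aspirated overlap: 2-way distinction)
```

The same function could be applied along the F0 dimension, where on this logic the dialects would show the mirror-image pattern.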
In this paper, I discuss four different verb forms in Ndebele (a Nguni Bantu language spoken mainly in Zimbabwe): the imperative, reduplicated, future and participial. I show that while all four are subject to minimality restrictions, minimality is satisfied differently in each of these morphological contexts. To account for this, I argue that in Ndebele (as in other Bantu languages) Word and RED are not the only constituents which must satisfy minimality: the Stem is also subject to minimality conditions in some morphological contexts. This paper, then, provides additional arguments for the proposal that the Phonological Word is not the only sub-lexical morpho-prosodic constituent. Further, I argue that, although Word, RED and Stem are all subject to the same minimality constraint (they must all be minimally bisyllabic), this does not follow from a single 'generalized' constraint. Instead, I argue, contra recent work within Generalized Template Theory (see, e.g., McCarthy & Prince 1994, 1995a, 1999; Urbanczyk 1995, 1996; Walker 2000), that a distinct minimality constraint must be formalized for each of these morpho-prosodic constituents.
Much work on the interaction of prosody and focus assumes that, crosslinguistically, there is a necessary correlation between the position of main sentence stress (or accent) and focus, and that an intonational pitch change on the focused element is a primary correlate of focus. In this paper, I discuss primary data from three Bantu languages – Chichewa, Durban Zulu and Chitumbuka – and show that in all three languages phonological re-phrasing, not stress, is the main prosodic correlate of focus and that lengthening, not pitch movement, is the main prosodic correlate of phrasing. This result is of interest for the typology of intonation in illustrating languages where intonation has limited use and where, notably, intonation does not highlight focused information in the way we might expect from European stress languages.
This study is an electropalatographic investigation of clusters composed of /n/ or /l/ followed by the (alveolo)palatal consonants /ʎ, ɲ/ or by dental /t/ in three Catalan dialects, i.e., Majorcan, Valencian and Eastern. Data show that articulatory blending through superposition occurs in the palatalizing environment except when C1 is highly constrained (e.g., dark /l/) or C2 is purely palatal and therefore, produced at a distant articulatory location from C1. Contrary to previous descriptions in the literature, data for /nt, lt/ reveal that blending through superposition rather than assimilation is at work. The implications of these data for theories of speech production are discussed.
Glottal marking of vowel-initial German words by glottalization and glottal stop insertion was investigated as a function of speech rate, word type (content vs. function words), word accent, phrasal position and the following vowel. The analysed material consisted of speeches by Konrad Adenauer, Thomas Mann and Richard von Weizsäcker. The investigation shows not only that the left boundary of accented syllables (including phrasal stress boundaries) and of lexical words favours glottal stops/glottalization, but also that the segmental level has a strong impact on these insertion processes. Specifically, the results show that low vowels, in contrast to non-low ones, favour glottal stops/glottalization even in non-accented syllables and function words.
Introduction
(2006)
The papers in this volume reflect a number of broad themes which have emerged during the meetings of the project as particularly relevant for current Bantu linguistics. [...] The papers show that approaches to Bantu linguistics have also developed in new directions since this foundational work. For example, interaction of phonological phrasing with syntax and word order on the one hand, and with information structure on the other, is more prominent in the papers here than in earlier literature. Quite generally, the role of information structure for the understanding of Bantu syntax has become more important, in particular with respect to the expression of topic and focus, but also for the analysis of more central syntactic concerns such as questions and relative clauses. This, of course, relates to a wider development in linguistic theory to incorporate notions of topic and focus into core syntactic analysis, and it is not surprising that work on Bantu languages and on linguistic theory are closely related to each other in this respect. Another noteworthy development is the increasing interest in variation among Bantu languages which reflects the fact that more empirical evidence from more Bantu languages has become available over the last decade or so. The picture that emerges from this research is that morpho-syntactic variation in Bantu is rich and complex, and that there is strong potential to link this research to research on micro-variation in European (and other) languages, and to the study of morpho-syntactic variables, or parameters, more generally.
The present paper offers a summary of the results of two earlier experiments (Nawrocki and Gonet 2004; Nawrocki 2004), in which acoustic properties of the voiceless velar fricative phoneme /x/ in Southern Polish were investigated.
As found in both studies (Nawrocki and Gonet 2004; Nawrocki 2004), speakers of both genders favour glottal articulation, with partial or full voicing. Word-final contexts are decisively in favour of [x]. The word-initial, prevocalic position seems to allow quite a number of allophonic variants of /x/: [x], [ɦ], [ç] and, additionally, the voiceless glottal, pharyngeal or epiglottal [h]/[ħ]/[ʜ]. Another factor taken into account is the coarticulatory effect of the vocalic context on the choice of articulation. Based on the results of the experiments, a reformulated allophonic composition is proposed for Polish /x/, one which makes room for previously unconsidered pharyngeal and glottal allophones.
In order to inspect the acoustic properties of the allophones of Polish /x/ further, their static and dynamic spectral features are compared to those of phonetically similar sounds in other languages where they have the status of independent phonemes. Special attention is paid to the distribution of spectral peaks and their intensity. The fact that in Polish there are no 'back' fricative phonemes that would contrast with /x/ creates a wide range of acceptable allophonic articulations that cannot be challenged from either articulatory or perceptual points of view.
In this paper, we investigate two pairs of structures in German and English: German Weak Pronoun Left Dislocation and English Topicalization, on the one hand, and German and English Hanging Topic Left Dislocation, on the other. We review the prosodic, lexical, syntactic, and discourse evidence that places the former two structures into one class and the latter two into another, taking this evidence to show that dislocates in the former class are syntactically integrated into their 'host' sentences while those in the latter class are not. From there, we show that the most straightforward way to account for this difference in 'integration' is to take the dislocates in the latter structures to be 'orphans', phrases that are syntactically independent of the phrases with which they are associated, providing additional empirical and theoretical support for this analysis — which, we point out, has a number of antecedents in the literature.
The phenomenon of phonological opacity has been the subject of much debate in recent years. Scholars opposed to the Optimality Theory (OT) research program argue that opacity proves OT must be false, while the solutions proposed within OT, such as sympathy theory and stratal OT, have proved unsatisfying to many OT proponents, who find these proposals inconsistent with the parallelist approach to phonological processes otherwise characteristic of OT. In this paper I reexamine one of the best-known cases of opacity, that found in three processes of Tiberian Hebrew (TH), and argue that these processes only appear to be opaque because previous analyses have treated them as pure phonology rather than as an interaction between phonology and morphology. Once it is recognized that certain words of TH are lexically marked to end in a syllabic trochee, and that the goal of paradigm uniformity exerts grammatical pressure on the phonology, the three processes no longer present a problem for parallelist OT. The results suggest the possibility that all crosslinguistic instances of apparent opacity can be explained in terms of the phonology-morphology interface and that purely phonological opacity does not exist. If this claim is true, then parallelist OT can be defended against its detractors without the need for additional mechanisms like sympathy theory and stratal OT.
This study examines the trajectories of dorsal tongue movements during symmetrical /VCa/ sequences, where /V/ was one of the Hungarian long or short vowels /i, a, u/ and C was either the voiceless palatal or the voiceless velar stop. The general aims of this study were to deliver a data-driven account of (a) the evidence for the division between dorsality and coronality and (b) the potential role coarticulatory factors could play in the relative frequency of velar palatalization processes in genetically unrelated languages. Results suggest a clear-cut demarcation between the behaviour of purely dorsal velars and coronal palatals. Moreover, factors arising from general movement economy might contribute to the palatalization processes mentioned.
The present study offers an Optimality-Theoretic analysis of the syllabification of intervocalic consonants and glides in Modern English. It will be argued that the proposed syllabifications fall out from universal markedness constraints – all of which derive motivation from other languages – and a language-specific ranking. The analysis offered below is therefore an alternative to the traditional rule-based analyses of English syllabification, e.g. Kahn (1976), Borowsky (1986), Giegerich (1992, 1999) and to the Optimality-Theoretic treatment proposed by Hammond (1999), whose analysis requires several language-specific constraints which apparently have no cross-linguistic motivation.
This paper investigates how syntax and focus interact in deriving the phonological phrasing of utterances in Xhosa, a Bantu language spoken in South Africa. Although the influence of syntax on phrasing is uncontroversial, a purely syntactic analysis cannot account for all the data reported for Xhosa by Jokweni (1995). Focus influences the phrasing in that it inserts a phonological phrase-boundary after the focused constituent. This generalization can account for the variation found in the phrasing of adverbials.
The findings are dealt with in an OT-based framework following Truckenbrodt's work on Chichewa (1995, 1999) which is extended to the phrasing of adjuncts.
In this paper, I argue that this apparent problem is accounted for by the interaction of constraints. For the fixed segment [ɛ] in Cɛ-reduplication, I argue that [ɛ] is the second least marked vowel in Palauan, which appears when the default vowel [ǝ] cannot appear. I show that the Palauan facts are not only consistent with the proposals of Urbanczyk (1999) and Alderete et al. (1999), but actually provide support for their claims. In the following section, I discuss Urbanczyk's (1999) arguments concerning ROOT faithfulness in reduplication and possible asymmetries between affix reduplicants and root reduplicants. In Section 3, I introduce Palauan reduplication and discuss Finer's (1986) observations on the resulting state verb (RSV) form. I show that the RSV forms support the classification of Cɛ-reduplicants as affixes and CVCV-reduplicants as roots. In Section 4, I discuss the shape and vowel quality of the two reduplicants. The CVCV-reduplicant has three variants: CǝCǝ, CǝC and CV. I explain this variation, illustrating why [ǝ] appears in the first two variants. I then discuss the shape and vowel quality of the Cɛ-reduplicant, arguing that the fixed segment [ɛ] in Cɛ-reduplication is a special case of TETU (the emergence of the unmarked). I show that root faithfulness constraints are crucial in determining the shape and vowel quality of the reduplicants. Section 5 concludes.
Ida'an-Begak is a Western Malayo-Polynesian language spoken by approximately 6,000 people on the east coast of Sabah, Malaysia (Borneo), and belongs to the Sabahan subgroup of the North Borneo subgroup (Blust 1998). Ida'an-Begak has three dialects: Ida'an, spoken in the villages of Segama to the west of Lahad Datu; Ida'an Sungai, spoken in the Kinabatangan and Sandakan districts; and Begak, spoken in Ulu Tungku, to the east of Lahad Datu (Banker 1984). Moody (1993) deals with Ida'an; this paper concentrates on the Begak dialect. In this paper I will present new data gathered in the field and provide an analysis of the allomorphy. The study is based on spontaneous data as well as examples elicited from my language informants.
The goal of this paper is to survey the accent systems of the indigenous languages of Africa. Although roughly one third of the world’s languages are spoken in Africa, this continent has tended to be underrepresented in earlier stress and accent typology surveys, like Hyman (1977). This one aims to fill that gap. Two main contributions to the typology of accent are made by this study of African languages. First, it confirms Hyman's (1977) earlier finding that (stem-)initial and penult are the most common positions, cross-linguistically, to be assigned main stress. Further, it shows that not only stress but also tone and segment distribution can define prominence asymmetries which are best analyzed in terms of accent.
This paper evaluates trills [r] and their palatalized counterparts [rʲ] from the point of view of markedness. It is argued that [r]s are unmarked in comparison to [rʲ]s, which follows from the examination of the following parameters: (a) frequency of occurrence, (b) articulatory and aerodynamic characteristics, (c) perceptual features, (d) emergence in the process of language acquisition, (e) stability from a diachronic point of view, (f) phonotactic distribution, and (g) implications.
Several markedness aspects of [r]s and [rʲ]s are analyzed on the basis of Slavic languages, which offer excellent material for the evaluation of trills. Their phonetic characteristics, incorporated into phonetically grounded constraints, are employed in a phonological OT analysis of r-palatalization in two selected languages: Polish and Czech.
Vowel dispersion in Truku
(2004)
This study investigates the dispersion of vowel space in Truku, an endangered Austronesian language in Taiwan. Adaptive Dispersion (Liljencrants and Lindblom, 1972; Lindblom, 1986, 1990) proposes that the distinctive sounds of a language tend to be positioned in phonetic space in a way that maximizes perceptual contrast. For example, languages with large vowel inventories tend to expand the overall acoustic vowel space. Adaptive Dispersion predicts that the distance between the point vowels will increase with the size of a language's vowel inventory. Thus, the available acoustic vowel space is utilized in a way that maintains maximal auditory contrast.
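The core prediction of Adaptive Dispersion can be illustrated with a toy computation. The sketch below uses hypothetical formant values, not Truku measurements, and operationalizes dispersion as the area of the triangle spanned by the point vowels /i, a, u/ in the F1 x F2 plane; a larger inventory is predicted to yield a larger area:

```python
# Illustrative sketch (hypothetical formant values): vowel-space dispersion
# measured as the area of the /i, a, u/ point-vowel triangle in F1 x F2 space.

def triangle_area(p1, p2, p3):
    """Area of the triangle with vertices p1..p3 (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# (F1, F2) in Hz for the point vowels of two hypothetical vowel systems.
small_inventory = {"i": (300, 2200), "a": (700, 1300), "u": (350, 900)}
large_inventory = {"i": (250, 2500), "a": (850, 1200), "u": (300, 700)}

for name, vs in [("small", small_inventory), ("large", large_inventory)]:
    area = triangle_area(vs["i"], vs["a"], vs["u"])
    print(name, round(area))  # the "large" system spans a larger area
```

Real studies typically normalize formants (e.g. to Bark or mel scales) before comparing areas across speakers; the raw-Hz version here is only meant to show the shape of the prediction.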
This paper presents preliminary results of a phonetic and phonological study of the Ntcheu dialect of Chichewa spoken by Al Mtenje (one of the co-authors). This study confirms Kanerva's (1990) work on Nkhotakota Chichewa showing that phonological re-phrasing is the primary cue to information structure in this language. It expands on Kanerva's work in several ways. First, we show that focus phrasing has intonational correlates, namely, the manipulation of downdrift and pause. Further, we show that there is a correlation between pitch prominence and discourse prominence at the left and right periphery which conditions dislocation to these positions. Finally, we show that focus and syntax are not the only factors which condition phonological phrasing in Chichewa.
The current study focuses on the prosodic realization of negators in Saisiyat, an endangered aboriginal language of Taiwan, and compares its prosodic realization of negation with that of English. The results of this study indicate that sentential subjects are the most acoustically prominent items in the Saisiyat negative sentences measured. This contrasts sharply with the English experimental sentences, in which the negator itself was the most acoustically prominent item. These findings suggest that Saisiyat is a pitch-accent language; thus, the presence of negators does not significantly change the prosodic parameters of surrounding words. English, in contrast, is an intonation language, so the presence of negation results in substantial prosodic modification. This suggests that the phenomenon of negation is universally prominent; however, languages with different prosodic systems will adopt different strategies for realizing prominence.
This study focuses on a detailed description and analysis of the phonetic structures of Paiwan, an aboriginal language spoken in Taiwan with around 53,000 speakers. Paiwan, a member of the Austronesian language family, is not typologically related to the other languages, such as Mandarin and Taiwanese, spoken in its geographically contiguous districts. Earlier work on phonological features of Paiwan (Chang, 1999; Tseng, 2003) sought an account in terms of segments and isolated facts about reduplication and stress, without accounting for the possible roles of phrase-level and sentence-level prosodic structures. The Government Teaching Material (1993) listed 25 consonants and 4 vowels, without any description of phonetic features or phonological rules. Chang's (2000) reference grammar included 22 consonants and 4 vowels, with a very brief description of 5 phonological rules applying to single words. Regional diversity and 25 consonants are mentioned in Pulaluyan's (2002) teaching material; however, no description of phonological rules is found there.
Syllable cut is said to be a phonologically distinctive feature in languages, such as German, where a difference in vowel quantity is accompanied by a difference in vowel quality. There have been several attempts to find the corresponding phonetic correlates of syllable cut, among which the energy measurements of vowels by Spiekermann (2000) proved appropriate for explaining the difference between long (smoothly cut) and short (abruptly cut) vowels: in smoothly cut vowels, a larger number of peaks was counted in the energy contour, the peaks were located further back than in abruptly cut segments, and the overall energy was more constant throughout the entire nucleus. On this basis, we set out to compare German, a syllable cut language, with Hungarian, where the feature was not expected to be relevant. However, the phonetic correlates of syllable cut found in this study do not entirely confirm Spiekermann's results. It seems that the energy features of vowels are more strongly connected to their duration than to their quality.
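Spiekermann-style energy measurements can be shown schematically. The sketch below uses toy contours (not the study's data) and counts local maxima in a vowel's energy contour, the kind of peak count used above to distinguish smoothly from abruptly cut vowels:

```python
# Illustrative sketch (toy energy contours): smoothly cut vowels show more,
# and later, peaks in the energy contour than abruptly cut ones.

def energy_peaks(contour):
    """Return the indices of local maxima in an energy contour."""
    return [i for i in range(1, len(contour) - 1)
            if contour[i - 1] < contour[i] >= contour[i + 1]]

# Hypothetical frame-by-frame energy values across a vowel nucleus.
smooth = [0.2, 0.5, 0.7, 0.6, 0.8, 0.9, 0.85, 0.9, 0.7, 0.4]    # plateau-like
abrupt = [0.3, 0.9, 0.6, 0.4, 0.3, 0.2, 0.15, 0.1, 0.08, 0.05]  # early fall

print(len(energy_peaks(smooth)))  # 3 peaks, spread over the nucleus
print(len(energy_peaks(abrupt)))  # 1 early peak, then monotone decay
```

A real implementation would compute the contour from short-time RMS energy of the speech signal and would likely smooth it before peak-picking; the peak-counting step itself is as simple as shown.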
This study reports on the results of an airflow experiment that measured the duration of airflow and the amount of air from the release of a stop to the beginning of a following vowel in stop-vowel sequences of German. The sequences involved coronal, labial and velar, voiced and voiceless stops followed by the vocoids /j, i:, ı, ɛ, ʊ, a/. The experiment tested the influence of three factors (voicing of the stop, place of stop articulation, and the following vocoid context) on the duration and amount of air, as a possible explanation for assibilation processes. The results show that voiceless stops are associated with a longer duration and more air in the release phase than voiced ones. For the influence of the vocoids, a significant difference could be established between /j/ and all other vocoids for the duration of the release phase. This difference could not be found for the amount of air over this duration. The place of articulation had only a restricted influence. Velars resulted in a significantly longer duration of the release phase compared to non-velars. A significant difference in the amount of air between the places of articulation could not be found.
The present article is a follow-up study of the investigation of labiodentals in German and Dutch by Hamann & Sennema (2005), where we looked at the perception of the Dutch labiodental three-way contrast by German listeners without any knowledge of Dutch and German learners of Dutch. The results of this previous study suggested that the German voiced labiodental fricative /v/ is perceptually closer to the Dutch approximant /ʋ/ than to the corresponding Dutch voiced labiodental fricative /v/. These perceptual indications are attested by the acoustic findings in the present study. German /v/ has a similar harmonicity median and a similar centre of gravity to Dutch /ʋ/, but differs from Dutch /v/ in these parameters. With respect to the acoustic parameter of duration, German /v/ lies closer to the Dutch /v/ than to the Dutch /ʋ/.
(Non)retroflexivity of Slavic affricates and its motivation: Evidence from Polish and Czech <č>
(2005)
The goal of this paper is two-fold. First, it revises the common assumption that the affricate <č> denotes /t͡ʃ/ for all Slavic languages. On the basis of experimental results it is shown that Slavic <č> stands for two sounds: /t͡ʃ/ as e.g. in Czech and /ʈʂ/ as in Polish.
The second goal of the paper is to show that this difference is not accidental but it is motivated by perceptual relations among sibilants. In Polish, /t͡ʃ/ changed to /ʈʂ/ thus lowering its sibilant tonality and creating a better perceptual distance to /tɕ/, whereas in Czech /t͡ʃ/ did not turn to /ʈʂ/, as the former displayed sufficient perceptual distance to the only affricate present in the inventory, namely, the alveolar /t͡s/. Finally, an analysis of Czech and Polish affricate inventories is offered.
The distribution of trimoraic syllables in German and English as evidence for the phonological word
(2000)
In the present article I discuss the distribution of trimoraic syllables in German and English. The reason I have chosen to analyze these two languages together is that the data in both languages are strikingly similar. However, although the basic generalization in (1) holds for both German and English, we will see below that trimoraic syllables do not have an identical distribution in both languages.
In the present study I make the following theoretical claims. First, I argue that the three environments in (1) have a property in common: they all describe the right edge of a phonological word (or prosodic word; henceforth pword). From a formal point of view, I argue that a constraint I dub the THIRD MORA RESTRICTION (henceforth TMR), which ensures that trimoraic syllables surface at the end of a pword, is active in German and English. According to my proposal trimoraic syllables cannot occur morpheme-internally because monomorphemic grammatical words like garden are parsed as single pwords. Second, I argue that the TMR refers crucially to moraic structure. In particular, underlined strings like the ones in (1) will be shown to be trimoraic; neither skeletal positions nor the subsyllabic constituent rhyme are necessary. Third, the TMR will be shown to be violated in certain (predictable) pword-internal cases, as in Monde and chamber; I account for such facts in an Optimality-Theoretic analysis (henceforth OT; Prince & Smolensky 1993) by ranking various markedness constraints among themselves or by ranking them ahead of the TMR. Fourth, I hold that the TMR describes a concrete level of grammar, which I refer to below as the 'surface' representation. In this respect, my treatment differs significantly from the one proposed for English by Borowsky (1986, 1989), in which the English facts are captured in a Lexical Phonology model by ordering the relevant constraint at level 1 in the lexicon.
Identity effects in phonology are deviations from regular phonological form (i.e. canonical patterns) which are due to the relatedness between words. More specifically, identity effects are those deviations which have the function to enhance similarity in the surface phonological form of morphologically related words. In rule-based generative phonology the effects in question are described by means of the cycle. For example, the stress on the second syllable in cond[ɛ]nsation as opposed to the stresslessness of the second syllable in comp[ǝ]nsation is described by applying the stress rules initially to the stems, thereby yielding condénse and cómpensàte. Subsequently the stress rules are reapplied to the affixed words, with the initial stress assignment (i.e. stress on the second syllable in condense, but not in compensate) leaving its mark in the output form (cf. Chomsky and Halle 1968). A second example are words like lie[p]los 'unloving' in German, which shows the effects of neutralization in coda position (i.e. only voiceless obstruents may occur in coda position) even though the obstruent should 'regularly' be syllabified in head position (i.e. /bl/ is a well-formed syllable head in German). Here the stem is syllabified on an initial cycle, obstruent devoicing applies (i.e. lie[p]), and this structure is left intact when affixation applies (i.e. lie[p]los) (cf. Hall 1992). As a result the stem of lie[p]los is identical to the base lie[p].
The aim of this paper is to show what role prosodic constituents, especially the foot and the prosodic word play in Polish phonology. The focus is placed on their function in the representation of extrasyllabic consonants in word-initial, word-medial, and word-final positions.
The paper is organized as follows. In the first section, I show that the foot and the prosodic word are well-motivated prosodic constituents in Polish prosody. In the second part, I discuss consonant clusters in Polish, focussing on segments that are not parsed into a syllable due to violations of the Sonority Sequencing Generalisation, i.e. extrasyllabic segments. Finally, I analyze possible representations of the extrasyllabic consonants and conclude that both the foot and the prosodic word play a crucial role in terms of licensing. My proposal differs from the ones by Rubach and Booij (1990b) and Rubach (1997) in that I argue that the word-initial sonorants traditionally called extrasyllabic are licensed by the foot and not by the prosodic word (cf. Rubach and Booij (1990b)) or the syllable (cf. Rubach (1997)). For my analysis I adopt the framework of Optimality Theory, cf. McCarthy and Prince (1993), Prince and Smolensky (1993), in which derivational levels are abandoned and only surface representations are evaluated by means of universal constraints.
In this work, I examine a set of languages which appear to require resyllabification postlexically; in less derivational terms, a word's syllabification in isolation differs from its syllabification in a phrase-internal context. Although many people, myself included, have been looking at such cases in isolation over the years, I bring together several examples here to see what features they share and how an Optimality Theory analysis improves upon rule-based derivational approaches.
The purpose of this paper is to provide a unified (i.e. independent of lexical categories) account of Persian stress. I show that by differentiating word- and phrase-level stress rules, one can account for the superficial differences exemplified in (1) above and many of the stipulations suggested by previous scholars. The paper is organized as follows. In section 1, I look at nouns and adjectives and propose a rule that would account for their stress pattern. In section 2, I extend the stress rule to verbs and show the problem this category poses to our generalization. The main proposal of this paper is discussed in section 3. I introduce the phrasal stress rule in Persian and show that by differentiating word-level and phrase-level stress rules, one can come to a unified account of Persian stress. Section 4 deals with some problematic cases for the proposed generalization and discusses some tentative solutions and their theoretical consequences. Section 5 concludes the paper.
While the perilinguistic child is endowed with predispositions for the categorical perception of phonetic features, their adaptation to the native language results from a long evolution from the end of the first year of life up to adolescence. This evolution entails better discrimination between phonological categories, a concomitant reduction of discrimination between within-category variants, and higher precision of the perceptual boundaries between categories. The first objective of the present study was to assess the relative importance of these modifications by comparing the perceptual performance of a group of 11 children, aged 8 to 11 years, with that of their mothers. Our second objective was to explore the functional implications of categorical perception by comparing the performance of a group of 8 deaf children equipped with cochlear implants with that of normal-hearing chronological-age controls. The results showed that the categorical boundary was slightly more precise and that categorical perception was consistently stronger in adults than in normal-hearing children. Those deaf children who were able to discriminate minimal distinctions between syllables displayed categorical perception performance equivalent to that of normal-hearing controls. In conclusion, the late effect of age on the categorical perception of speech seems to be anchored in a fairly mature phonological system, as evidenced by the fairly high precision of categorical boundaries in pre-adolescents. These late developments have functional implications for speech perception in difficult conditions, as suggested by the relationship between categorical perception and speech intelligibility in children with cochlear implants.
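The precision of a categorical boundary can be operationalized in a simple way. The sketch below uses hypothetical identification data (not the study's results) and estimates the boundary as the linearly interpolated 50% crossover of an identification function; the steeper the function, the more sharply the boundary is defined:

```python
# Illustrative sketch (hypothetical identification data): locate the
# categorical boundary as the 50% crossover of an identification function.

def boundary_50(stimuli, prop_b):
    """Stimulus value where the proportion of 'B' responses first
    crosses 0.5, by linear interpolation between adjacent steps."""
    for i in range(len(stimuli) - 1):
        x0, x1 = stimuli[i], stimuli[i + 1]
        p0, p1 = prop_b[i], prop_b[i + 1]
        if p0 < 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None  # function never crosses 0.5

# VOT continuum (ms) and hypothetical proportions of 'pa' responses.
steps = [0, 10, 20, 30, 40, 50, 60]
adult = [0.02, 0.05, 0.10, 0.45, 0.90, 0.97, 0.99]  # steep identification curve
child = [0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.95]  # shallower slope

print(round(boundary_50(steps, adult), 1))  # 31.1
print(round(boundary_50(steps, child), 1))  # 30.0
```

Studies of this kind usually fit a sigmoid (e.g. logistic) to the identification data and read off both the crossover and the slope; linear interpolation is just the minimal version of the same idea.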
Four speakers each repeated 15 sentences 8 times; the sentences contained 'pVp' syllables (V being /a/, /i/ and /u/). The 'pVp' syllables were located in final, penultimate and antepenultimate position relative to the Intonational Phrase (IP) boundary. They were embedded in lexical words of 1-3 syllables and were either word-initial or word-final. Results show that the closer the vowel in word-final position is to the IP boundary, the longer the duration and the higher the fundamental frequency of the vowel; it is also characterised by larger lip opening gestures. The potential reduction or coarticulation of vowels in word-initial position compared to their counterparts in word-final position is discussed.
This paper examines how questions, both Wh-questions and yes-no questions, are phrased in Chimwiini, a Bantu language spoken in southern Somalia. Questions do not require any special phrasing principles, but Wh-questions do provide much evidence in support of the principle Align-Foc R, which requires that focused or emphasized words/constituents be located at the end of a phonological phrase. Question words and enclitics are always focused and thus appear at the end of a phrase. Although questions do not require any new phrasing principles, they do display complex accentual (tonal) behavior. This paper attempts to provide an account of these accentual phenomena.
This paper presents a preliminary survey of the positions and prosodies associated with Wh-questions in two Bantu languages spoken in Malawi. The paper shows that the two languages are similar in requiring focused subjects to be clefted. Both also require 'which' questions and 'because of what' questions to be clefted or fronted. However, for other non-subjects Tumbuka rather uniformly imposes an IAV (immediately after the verb) requirement, while Chewa does not. In both languages, we found a strong tendency for there to be a prosodic phrase break following the Wh-word. In Tumbuka, this break follows from the general phrasing algorithm of the language, while in Chewa, I propose that the break can be best understood as following from the inherent prominence of Wh-words.
This paper sketches the morphosyntactic and prosodic properties of questions in Fipa, discussing three varieties: Milanzi, Nkansi and Kwa. The general word order and morphological patterns relevant to question structures are outlined and different types of wh-question constructions are described and tentatively linked to the prosodic features of Fipa questions.
The purpose of this paper is to show how WH questions interact with the complex tonal phenomena which we summarized and illustrated in Hyman & Katamba (2010). As will be seen, WH questions have interesting syntactic and tonal properties of their own, including a WH-specific intonation. The paper is structured as follows: After an introduction in §1, we successively discuss non-subject WH questions (§2), subject WH questions (§3), and clefted WH questions (§4). We then briefly present a tense which is specifically limited to WH questions (§5), and conclude with a brief summary in §6.
This questionnaire is intended as an aid to eliciting different question types, including yes/no questions, alternative questions, and wh-questions on a range of constituents. We have taken care to include examples that allow one to test for common Bantu phenomena, such as a subject/non-subject asymmetry in wh-questions and an obligatory immediately-after-the-verb (IAV) position for questioning verb complements. The questionnaire is intended as a guide only, as every language will have its own set of possibilities and complications. At the end of the questionnaire is a checklist. While we had Bantu languages in mind in devising the questionnaire, we hope it will also be useful to linguists with an interest in question constructions in other languages.
This paper tests three current theories of the phonology-syntax interface – Truckenbrodt (1995), Pak (2008) and Cheng & Downing (2007, 2009) – on the prosody of relative clauses in Chewa. Relative clauses, especially restrictive relative clauses, provide an ideal data set for comparing these theories, as they each make distinct predictions about the optimal phrasing. We show that the asymmetrical phase-edge based approach developed to account for similar Zulu prosodic phrasing by Cheng & Downing also best accounts for the Chewa data.
In Nłeʔkepmxcin, consonant-heavy inventories, lengthy obstruent clusters and widespread glottalization can make potential F0 cues to prosodic phrase boundaries (e.g. boundary tones or declination reset) difficult to observe phonetically. In this paper, I explore a test that exploits one behaviour of phrase-final consonant clusters to test for prosodic phrasing in Nłeʔkepmxcin clauses. Final /t/ of the 1pl marker kt is aspirated when phrase-final, but not phrase-internally. Use of this test suggests that Thompson Salish speakers parse verbs, arguments and adjuncts into separate phonological phrases. However, complex verbal predicates and complex noun phrases are parsed as single phonological phrases. Implications are discussed, especially with regard to findings that (absence of) pitch accent is not employed to signal the informational categories of Focus and Givenness, even though Nłeʔkepmxcin is a stress language.
The aim of this paper is to explain how the Tooro system, which phonologically lacks tone, has come into being. This is done by comparatively examining the tone system of each language and by looking closely at the differences among the Haya, Ankole and Nyoro systems (Kiga data being insufficient) in order to identify phonetic reasons for the tone changes.
"The documentation of... descriptive generalizations is sometimes clearer and more accessible when expressed in terms of a detailed formal reconstruction, but only in the rare and happy case that the formalism fits the data so well that the resulting account is clearer and easier to understand than the list of categories of facts that it encodes.... [If not], subsequent scholars must often struggle to decode a description in an out-of-date formal framework so as to work back to... the facts.... which they can re-formalize in a new way. Having experienced this struggle often ourselves, we have decided to accommodate our successors by providing them directly with a plainer account." (Akinlabi & Liberman 2000:24)
Articulatory token-to-token variability depends not only on linguistic aspects like the phoneme inventory of a given language but also on speaker-specific morphological and motor constraints. As has been noted previously (Perkell (1997), Mooshammer et al. (2004)), speakers with coronally high "dome-shaped" palates exhibit more articulatory variability than speakers with coronally low "flat" palates. One explanation for this is based on perception-oriented control by the speaker. The influence of articulatory variation on the cross-sectional area, and consequently on the acoustics, should be greater for flat palates than for dome-shaped ones. This should force speakers with flat palates to place their tongue very precisely, whereas speakers with dome-shaped palates might tolerate a greater variability. A second explanation could be a greater amount of lateral linguo-palatal contact for flat palates holding the tongue in position. In this study both hypotheses were tested.
In order to investigate the influence of palate shape on the variability of the acoustic output, a modelling study was carried out. In parallel, an EPG experiment was conducted in order to investigate the relationship between palate shape, articulatory variability and linguo-palatal contact.
Results from the modelling study suggest that the acoustic variability resulting from a certain amount of articulatory variability is higher for flat palates than for dome-shaped ones. Results from the EPG experiment with 20 speakers show that (1) speakers with a flat palate exhibit very low articulatory variability whereas speakers with a dome-shaped palate vary, (2) there is less articulatory variability when there is extensive linguo-palatal contact, and (3) there is no relationship between the amount of lateral linguo-palatal contact and palate shape. The results suggest that there is a relationship between token-to-token variability and palate shape; however, the two parameters do not simply correlate. Rather, speakers with a flat palate always have low variability because of constraints on the variability range of the acoustic output, whereas speakers with a dome-shaped palate may choose their degree of variability. Since linguo-palatal contact and variability correlate, it is assumed that linguo-palatal contact is a means of reducing articulatory variability.
Mechanisms of contrasting Korean velar stops: A catalogue of acoustic and articulatory parameters
(2003)
The Korean stop system exhibits a three-way distinction in velar stops among /g/, /k'/ and /kh/. If the differentiation is regarded as being based on voicing, such a system is rather unusual because even a two-way distinction between a voiced and a voiceless unaspirated velar stop is easily lost in the languages of the world, especially in the case of velar stops. One possibility for maintaining this distinction is that supralaryngeal characteristics like the articulators' velocity, the duration of surrounding vowels or stop closure duration are involved. The aim of the present study is to set up a catalogue of parameters which are involved in the distinction of Korean velar stops in intervocalic position.
Two Korean speakers were recorded via Electromagnetic Articulography. The word material consisted of VCV-sequences where V is one of the three vowels /a/, /i/ or /u/ and C one of the Korean velars /g/, /k'/ or /kh/. Articulatory and acoustic signals were analysed. It turned out that the distinction is only partly built on laryngeal parameters and that supralaryngeal characteristics differ for the three stops. Another result is that the voicing contrast is not a matter of a single parameter; rather, a set of parameters is always involved. Furthermore, speakers seem to have a certain freedom in the choice of these parameters.
It is one of the most highly debated issues in loanword phonology whether loanword adaptations are phonologically or phonetically driven. This paper addresses this issue and aims at demonstrating that only the acceptance of both a phonological as well as a phonetic approximation stance can adequately account for the data found in Japanese. This point will be exemplified with the adaptation of German and French mid front rounded vowels in Japanese. It will be argued that the adaptation of German /œ/ and /ø/ as Japanese /e/ is phonologically grounded, whereas the adaptation of French /œ/ and /ø/ as Japanese /u/ is phonetically grounded. This asymmetry in the adaptation process of German and French mid front rounded vowels, together with further examples of loans in Japanese, leads to the conclusion that both strategies of loanword adaptation occur in languages. It will be shown that not only perception, but also the influence of orthography, of conventions and of knowledge of the source language play a role in the adaptation process.
In this article I reanalyze sibilant inventories of Slavic languages by taking into consideration acoustic, perceptual and phonological evidence. The main goal of this study is to show that perception is an important factor which determines the shape of sibilant inventories. The improvement of perceptual contrast essentially contributes to creating new sibilant inventories by (i) changing the place of articulation of existing phonemes, (ii) merging sibilants that are perceptually very close, or (iii) deleting them.
It has also been shown that the symbol š traditionally used in Slavic linguistics corresponds to two sounds in the IPA system: it stands for a postalveolar sibilant (ʃ) in some Slavic languages, e.g. Bulgarian, Czech, Slovak and some Serbian and Croatian dialects, whereas in others, like Polish, Russian and Lower Sorbian, it functions as a retroflex (ʂ). This discrepancy is motivated by the fact that ʃ is not optimal in terms of maintaining sufficient perceptual contrast with other sibilants such as s and ɕ. If ʃ occurs together with s (and sʲ) there is a considerable perceptual distance between them, but if it occurs with ɕ in an inventory, the distance is much smaller. Therefore, the strategy most languages follow is the change from a postalveolar to a retroflex sibilant.
In this paper we provide an account of the historical development of Polish and Russian sibilants. The arguments provided here are of theoretical interest because they show that (i) certain allophonic rules are driven by the need to keep contrasts perceptually distinct, (ii) (unconditioned) sound changes result from needs of perceptual distinctiveness, and (iii) perceptual distinctiveness can be extended to a class of consonants, i.e. the sibilants. The analysis is cast within Dispersion Theory by providing phonetic and typological data supporting the perceptual distinctiveness claims we make.
In this article we propose that there are two universal properties for phonological stop assibilations, namely (i) assibilations cannot be triggered by /i/ unless they are also triggered by /j/, and (ii) voiced stops cannot undergo assibilations unless voiceless ones do. The article presents typological evidence from assibilations in 45 languages supporting both (i) and (ii). It is argued that assibilations are to be captured in the Optimality Theoretic framework by ranking markedness constraints grounded in perception which penalize sequences like [ti] ahead of a faith constraint which militates against the change from /t/ to some sibilant sound. The occurring language types predicted by (i) and (ii) will be shown to involve permutations of the rankings between several different markedness constraints and the one faith constraint. The article demonstrates that there exist several logically possible assibilation types which are ruled out because they would involve illicit rankings.
The present study examines a particular kind of rule blockage – referred to below as an 'anti-structure-preservation effect'. An anti-structure-preservation effect occurs if some language has a process which is preempted from going into effect if some sequence of sounds [XY] would occur on the surface, even though other words in the language have [XY] sequences (which are underlyingly /XY/). It will be argued below that anti-structure-preservation effects can be captured in Optimality Theory in terms of a general ranking involving FAITH and MARKEDNESS constraints and that individual languages invoke a specific instantiation of this ranking. A significant point made below is that while anti-structure-preservation effects can be handled straightforwardly in terms of constraint rankings, they typically require ad hoc rule-specific conditions in rule-based approaches.
Glide formation, a process whereby an underlying high front vowel is realized as a palatal glide, is shown to occur only in unstressed prevocalic position in German, and to be blocked by specific surface restrictions such as *ji and *ʁj. Traditional descriptions of glide formation (including derivational as well as Optimality theoretic approaches) refer to the syllable in order to capture its conditions. The present study illustrates that glide formation (plus the distribution of long and short tense /i/) in German can better be captured in a Functional Phonology account (Boersma 1998) which makes reference to stress instead of the syllable and thus overcomes problems of former approaches.
The main thesis of this dissertation is that Northern Sotho makes no obligatory use of grammatical means to mark focus, neither in syntax nor in prosody or morphology. Nevertheless, the language structures an utterance according to information-structural aspects. Constituents that are given in the discourse are either deleted, pronominalized, or moved to the right or left edge of the sentence. These (morpho-)syntactic processes interact in such a way that the focused constituent often appears finally in its clause. Although the final position is not a designated focus position, awareness of this tendency is crucial for understanding a morphological alternation that appears on the verb in Northern Sotho and has been discussed in the literature in connection with focus.
Although Northern Sotho thus lacks a direct grammatical expression of formal F(ocus)-marking, F-marking is nevertheless crucial for the grammar of this language: focused logical subjects cannot appear in the canonical preverbal position. Instead, they appear either postverbally or in a cleft sentence, depending on the valence of the verb. Although Northern Sotho shows a correspondence of complex form with complex meaning in the use of clefts for objects, this correspondence does not hold for logical subjects.
The present dissertation models the above findings within the theoretical framework of Optimality Theory (OT). Syntactic in situ focus and the absence of prosodic focus marking can be captured with uncontroversial constraints. For the ungrammaticality of focused logical subjects in preverbal position, this work proposes the modification of a constraint from the literature that is of crucial importance in Northern Sotho. The form-meaning correspondence, like other phenomena of pragmatic division of labour, is treated within weakly bidirectional Optimality Theory.
We focus in this paper on two prosodic phenomena in Chimwiini: vowel length and accent (or High tone). Vowel length is determined in part by a lexical distinction between long and short vowels, and also by various morphophonemic processes that derive long vowels. Accent is penult in the default case, but final under certain morphosyntactic conditions. In order to account for the distribution of vowel length and the location of accents in a Chimwiini sentence, it is necessary to segment sentences into a sequence of phonological phrases. This paper examines the phonological phrasing of both canonical relative clauses and what we refer to as "pseudo-relative" clauses. An account of relative clause phrasing is of critical importance in Chimwiini due to the extensive use of pseudo-relatives in the language. Close examination of the pseudo-relatives reveals that their phrasing is not exactly the same as the phrasing of canonical relative clauses.
Símákonde is an Eastern Bantu language (P23) spoken by immigrant Mozambican communities in Zanzibar and on the Tanzanian mainland. Like other Makonde dialects and other Eastern and Southern Bantu languages (Hyman 2009), it has lost the historical Proto-Bantu vowel length contrast and now has a regular phrase-final stress rule, which causes a predictable bimoraic lengthening of the penultimate syllable of every Prosodic Phrase. The study of the prosody/syntax interface in Símákonde relative clauses requires taking into account the following elements: the relationship between the head and the relative verb, the conjoint/disjoint verbal distinction and the various phrasing patterns of noun phrases. Within Símákonde noun phrases, depending on the nature of the modifier, three different phrasing situations are observed: a modifier or modifiers may (i) be required to phrase with the head noun, (ii) be required to phrase separately, or (iii) optionally phrase with the head noun.
The interaction between syntax and phonology has been an area of interesting empirical research and theoretical debate in recent years, particularly the question of the extent to which syntactic structure influences phonological phrasing. It has generally been observed that the edges of the major syntactic constituents (XPs) tend to coincide with prosodic phrase boundaries, thus resulting in XPs like subject NPs, object NPs, topic NPs, VPs etc. forming separate phonological phrases. Within Optimality Theoretic (OT) accounts, this fact has been attributed to a number of well-motivated general alignment constraints. Studies on relative clauses in Bantu and other languages have significantly contributed to this area of inquiry, where a number of parametric variations have been observed with regard to prosodic phrasing. In some languages, XPs which are heads of relatives form separate phonological phrases, while in others they phrase with the relative clauses. This paper makes a contribution to this topic by discussing the phrasing of relatives in Ciwandya (a Bantu language spoken in Malawi and Tanzania). It shows that XPs which are heads of restrictive relative clauses phrase with their relative verbs, regardless of whether they are subjects, objects or adjuncts. A variety of syntactic constructions are used to illustrate this fact. The discussion also confirms what has been generally observed in other Bantu languages concerning restrictive relatives with clefts and non-restrictive relative clauses: in both cases, the heads of the relatives phrase separately. The paper adopts an OT analysis which has been well articulated and defended in Cheng & Downing (2007, 2010, to appear) and Downing & Mtenje (2010, 2011) to account for these phenomena in Ciwandya.
The morpho-syntax of relative clauses in Sotho-Tswana is relatively well-described in the literature. Prosodic characteristics, such as tone, have received far less attention in the existing descriptions. After reviewing the basic morpho-syntactic and semantic features of relative clauses in Tswana, the current paper sets out to present and discuss prosodic aspects. These comprise tone specifications of relative clause markers such as the demonstrative pronoun that acts as the relative pronoun, relative agreement concords and the relative suffix. Further prosodic aspects dealt with in the current article are tone alternations at the juncture of relative pronoun and head noun, and finally the tone patterns of the finite verbs in the relative clause. The article aims at providing the descriptive basis from which to arrive at generalizations concerning the prosodic phrasing of relative clauses in Tswana.
Relative clauses in Haya
(2010)
This paper gives an overview of the morphology and syntax of Haya relative clause constructions. It extends previous work on this topic (Duranti, 1977) by incorporating data from a number of different dialects and by introducing new data on locative relative clauses. The dialects discussed in addition to the Kihanja data from Byarushengo et al. (1977) include Kiziba, Muleba and Bugabo dialects. Nyambo data taken from Rugemalira (2005) is also compared to Haya in places. The focus of the discussion is on the grammaticality of pronominal elements attached to the verb that refer back to the relativized entity with different types of relativized constituents in Haya. It is shown that there are differences between subjects, objects and locatives in terms of this kind of morphology inside the relative clause, as well as differences between these kinds of morphemes and resumptive pronouns.
Introduction
(2011)
In spite of this long history, most work to date on the phonology-syntax interface in Bantu languages suffers from limitations, due to the range of expertise required: intonation, phonology, syntax. Quite generally, intonational studies on African languages are extremely rare. Most of the existing data has not been the subject of careful phonetic analysis, whether of the prosody of neutral sentences or of questions or other focus structures. There are important gaps in our knowledge of Bantu syntax which in turn limit our understanding of the phonology-syntax interface. Recent developments in syntactic theory have provided a new way of thinking about the type of syntactic information that phonology can refer to and have raised new questions: Do only syntactic constituent edges condition prosodic phrasing? Do larger domains such as syntactic phases, or even other factors, like argument and adjunct distinctions, play a role? Further, earlier studies looked at a limited range of syntactic constructions. Little research exists on the phonology of focus or of sentences with non-canonical word order in Bantu languages. Both the prosody and the syntax of complex sentences, questions and dislocations are understudied for Bantu languages. Our project aims to remedy these gaps in our knowledge by bringing together a research team with all the necessary expertise. Further, by undertaking the intonational, phonological and syntactic analysis of several languages we can investigate whether there is any correlation among differences in morphosyntactic and prosodic properties that might also explain differences in phrasing and intonation. It will also allow us to investigate whether there are cross-linguistically common prosodic patterns for particular morpho-syntactic structures.
Introduction
(2010)
The papers in this volume were originally presented at the Bantu Relative Clause workshop held in Paris on 8-9 January 2010, which was organized by the French-German cooperative project on the Phonology/Syntax Interface in Bantu Languages (BANTU PSYN). This project, which is funded by the ANR and the DFG, comprises three research teams, based in Berlin, Paris and Lyon. [...] This range of expertise is essential to realizing the goals of our project. Because Bantu languages have a rich phrasal phonology, they have played a central role in the development of theories of the phonology-syntax interface ever since the seminal work from the 1970s on Chimwiini (Kisseberth & Abasheikh 1974) and Haya (Byarushengo et al. 1976). Indeed, half the papers in Inkelas & Zec’s (1990) collection of papers on the phonology-syntax interface deal with Bantu languages. They have naturally played an important role in current debates comparing indirect and direct reference theories of the phonology-syntax interface. Indirect reference theories (e.g., Nespor & Vogel 1986; Selkirk 1986, 1995, 2000, 2009; Kanerva 1990; Truckenbrodt 1995, 1999, 2005, 2007) propose that phonology is not directly conditioned by syntactic information. Rather, the interface is mediated by phrasal prosodic constituents like Phonological Phrase and Intonation Phrase, which need not match any syntactic constituent. In contrast, direct reference theories (e.g., Kaisse 1985; Odden 1995, 1996; Pak 2008; Seidl 2001) argue that phrasal prosodic constituents are superfluous, as phonology can – indeed, must – refer directly to syntactic structure.
This study examines articulatory and acoustic inter-speaker variability in the production of the German vowels /i/, /u/ and /a/. Our subjects are 3 monozygotic twin pairs (2 female and 1 male pair) and 2 dizygotic female twin pairs. All of them were born, raised and are still living in Berlin and see their twin brother or sister regularly. We assume that monozygotic twins, who are genetically identical and share the same physiology, should be more similar in their articulation than dizygotic twins, but that the shared time and social environment of twins, regardless of their genetic similarity, also plays a crucial role in their acoustic similarity. Articulatory measurements were made with EMA (Electromagnetic Articulography) and the target positions of the produced vowels were analyzed. Additionally, the formants F1-F4 of each vowel were measured and compared within the twin pairs. Our data seem to point to the importance of a shared environment and the strong influence of learning over the anatomical identity of the monozygotic twins in the production of vowels. However, additional results suggest (1) the impact of physiology on the production of a vowel following a velar consonant and (2) the interaction of physiology and stress in inter-speaker variability.
We show that wh-words are a tool for investigating the prosodic structure of Bàsàa. Our claim is that the end of an Intonation Phrase (IP) can be identified by the presence of a long vowel on the wh-word. We propose that wh-words, which sometimes surface as CV́ and sometimes as CV́V́, are underlyingly of the CV́ form and introduce a floating H. Whenever the association of this floating H with the first tone-bearing unit that follows the wh-word is prevented by the presence of an IP boundary, a mora is created on the wh-word in order to realize the floating H. We briefly discuss the interface approach to Immediately After the Verb (IAV) focus (Costa and Kula, 2008) and show that Bàsàa wh-questions and answers do not support this hypothesis. Finally, Bàsàa fronted wh-phrases, just like Hausa’s fronted foci (Leben et al., 1989), seem to support the idea that intonational effects are also at play in this tone language.
Since the advent of nonlinear phonology many linguists have either assumed or argued explicitly that many languages have words in which one or more segment does not belong structurally to the syllable. Three commonly employed adjectives used to describe such consonants are 'extrasyllabic', 'extrametrical' or 'stray'. Other authors refer to such segments as belonging to the 'appendix'. [...] Various non-linear representations have been proposed to express the 'extrasyllabicity' of segments [...]. The ones I am concerned with in the present article analyze [...] consonants [...] structurally as being outside of the syllable [...]. For transparency I ignore here both subsyllabic constituency as well as higher level prosodic constituents to which the stray consonants are sometimes assumed to attach. For reasons to be made clear below I refer to syllables [...] in which the stray consonant is situated outside of the syllable, as abstract syllables.
In this paper we focus on the similarities tying together the second segment of an onset cluster and a singleton coda segment. We offer a proposal based on Baertsch (2002) accounting for this similarity and show how it captures a number of observations which have defied previous explanation. In accounting for the similarity of patterning between the second member of an onset and a coda consonant, we propose to augment Prince & Smolensky's (P&S, 1993/2002) Margin Hierarchy so as to distinguish between structural positions that prefer low sonority and those that prefer high sonority. P&S's Margin Hierarchy, which gives preference to segments of low sonority, applies to singleton onsets; this is our M1 hierarchy. Our proposed M2 hierarchy applies both to the second member of an onset and to a singleton coda. The M2 hierarchy differs from the M1 hierarchy in giving preference to consonants of high sonority. Splitting the Margin Hierarchy into the M1 and M2 hierarchies allows us to explain typological, phonotactic, and acquisitional observations that have defied previous explanation. In Section 2 of this paper, we briefly provide background on the links that tie together the second member of an onset and a singleton coda. In Section 3, we review P&S's Margin Hierarchy, showing that it becomes problematic when extended to coda consonants. We then offer our proposal for a split margin hierarchy. Section 4 extends the split margin approach to complex onsets. We then show how it is able to account for various typological, phonotactic, and acquisitional observations. In Section 5, we will conclude the paper by briefly sketching how the split margin approach enables us to analyze syllable contact phenomena without requiring a specific syllable contact constraint (or additional hierarchy) or reference to an external sonority scale.
All of the papers in the volume except one (Kaji) take up some aspect of relative clause construction in some Bantu language. Kaji’s paper aims to account for how Tooro (J12; western Uganda) lost phonological tone through a comparative study of the tone systems of other western Uganda Bantu languages. The other papers examine a range of ways of forming relative clauses, often including non-restrictive relatives and clefts, in a wide range of languages representing a variety of prosodic systems.
The contribution of von Kempelen's "Mechanism of Speech" to the 'phonetic sciences' will be analyzed with respect to his theoretical reasoning on speech and speech production on the one hand and, on the other, in connection with the practical insights he gained during his struggle to construct a speaking machine. Whereas in his theoretical considerations von Kempelen's view is focussed on the natural functioning of the speech organs – cf. his membraneous glottis model – in constructing his speaking machine he clearly orientates himself towards the auditory result – cf. the bagpipe model he used instead as the sound generator for the speaking machine. Concerning vowel production his theoretical description remains questionable, but his practical insight that vowels and speech sounds in general are only perceived correctly in connection with their surrounding sounds – i.e. the discovery of coarticulation – is clearly a milestone in the development of the phonetic sciences: he therefore dispenses with the Kratzenstein tubes, although they might have been based on more thorough acoustic modelling.
Finally, von Kempelen's model of speech production will be discussed in relation to the subsequent debate on the acoustic nature of vowels [Willis and Wheatstone as well as von Helmholtz and Hermann in the 19th century, and Stumpf, Chiba & Kajiyama as well as Fant and Ungeheuer in the 20th century].
This paper presents the results of Open Quotient measurements in EGG signals of young (18 to 30 years old) and elderly (59 to 82 years old) male and female speakers. The paper further presents quantitative results on the relation between the OQ and the perception of a speaker's age. Higgins & Saxman (1991) found a decreased OQEGG with increasing age for females, whereas the OQEGG in sustained vowel material increased for males as the speakers' age increased. In Linville (2002), however, the spectral amplitudes in the region of F0 (obtained by LTAS measurements of read speech material) increased with increasing age independent of gender; this could be interpreted indirectly as an increasing OQ. We measured the OQEGG not only for sustained vowels, but also in vowels taken from isolated words. In order to analyse the relation between breathiness in terms of an increased OQ and the mean perceived age per stimulus, a perception test was carried out in which listeners were asked to estimate the speakers' age based on sustained /a/-vowel stimuli varying in vocal effort (soft - normal - loud) during production. The results indicated the following: (i) The decreased OQ for elderly females originally found by Higgins & Saxman is not apparent in our data for sustained /a/-vowels. For our female speakers no significant difference between the OQ of young and old speakers was found; for elderly males, however, we also found an increasing OQ with increasing age. (ii) In addition, a statistically significant increase in OQEGG occurs for the group of elderly males for the vowels from the word material. (iii) Our results show a strong positive relation between perceived age and OQ in male voices. Regarding (i) and (ii), at least the male speakers' voices become more breathy as age increases. Considering (iii), increased breathiness may contribute to the listener's perception of increased age.
In order to understand the functional morphology of the human voice-producing system, we need data on the vocal tract anatomy of other mammalian species. The larynges and vocal tracts of four species of Artiodactyla were investigated in combination with acoustic analyses of their respective calls. Different evolutionary specializations of laryngeal characters may lead to similar effects on sound production. In the investigated species, such specializations are: the elongation and mass increase of the vocal folds, the volume increase of the laryngeal vestibulum by an enlarged thyroid cartilage, and the formation of laryngeal ventricles. Both the elongation of the vocal folds and the increase of the oscillating masses lower the fundamental frequency. The influence of an increased volume of the laryngeal vestibulum on sound production remains unclear. The anatomical and acoustic results are presented together with considerations about the habitats and the mating systems of the respective species.
We measure face deformations during speech production using a motion capture system, which provides 3D coordinate data of about 60 markers glued on the speaker's face. An arbitrary orthogonal factor analysis followed by a principal component analysis (together called a guided PCA) of the data showed that the first 6 factors explain about 90% of the variance for each of our 3 speakers. The 6 derived factors therefore allow us to analyze efficiently, or to reconstruct with reasonable accuracy, the observed face deformations. Since these factors can be interpreted in articulatory terms, they can reveal underlying articulatory organizations. The comparison of lip gestures in terms of data-derived factors suggests that these speakers maneuver the lips differently to achieve contrast between /s/ and /R/. Such inter-speaker variability can occur because the acoustic contrast of these fricatives is shaped not only by the lip tube but also by cavities inside the mouth such as the sublingual cavity. In other words, the lip tube and these cavities can acoustically compensate for each other to produce the required acoustic properties.
Data on lingual movement, dorsopalatal contact and F2 frequency presented in previous papers of ours (Recasens, 2002; Recasens and Pallarès, 2001; Recasens, Pallarès and Fontdevila, 1997) suggest that the degree of articulatory constraint (DAC) model accounts to a large extent for the extent and direction of tongue dorsum coarticulation in VCV and CC sequences. A goal of this investigation is to verify the predictions of this model with respect to jaw V-to-V effects in VCV sequences using articulatory movement data collected with electromagnetic articulometry (EMA).
Arguing against Bhat's (1974) claim that retroflexion cannot be correlated with retraction, the present article illustrates that retroflexes are always retracted, though retraction is not claimed to be a sufficient criterion for retroflexion. The co-occurrence of retraction with retroflexion is shown to have two further implications: first, that non-velarized retroflexes do not exist, and second, that secondary palatalization of retroflexes is phonetically impossible. The process of palatalization is shown to trigger a change in the primary place of articulation to non-retroflex. Phonologically, retraction has to be represented by the feature specification [+back] for all retroflex segments.
The present article illustrates that the specific articulatory and aerodynamic requirements for voiced but not voiceless alveolar or dental stops can cause tongue tip retraction and tongue mid lowering and thus retroflexion of front coronals. This retroflexion is shown to have occurred diachronically in the three typologically unrelated languages Dhao (Malayo-Polynesian), Thulung (Sino-Tibetan), and Afar (East-Cushitic). In addition to the diachronic cases, we provide synchronic data for retroflexion from an articulatory study with four speakers of German, a language usually described as having alveolar stops. With these combined data we supply evidence that voiced retroflex stops (as the only retroflex segments in a language) did not necessarily emerge from implosives, as argued by Haudricourt (1950), Greenberg (1970), Bhat (1973), and Ohala (1983). Instead, we propose that the voiced front coronal plosive /d/ is generally articulated in a way that favours retroflexion, that is, with a smaller and more retracted place of articulation and a lower tongue and jaw position than /t/.