Linguistics Classification
This paper is concerned with developing Joan Bybee's proposals regarding the nature of grammatical meaning and synthesizing them with Paul Hopper's concept of grammar as emergent. The basic question is this: How much of grammar may be modeled in terms of grammaticalization? In contradistinction to Heine, Claudi & Hünnemeyer (1991), who propose a fairly broad and unconstrained framework for grammaticalization, we try to present a fairly specific and constrained theory of grammaticalization in order to get a more precise idea of the potential and the problems of this approach. Thus, while Heine et al. (1991:25) expand – without discussion – the traditional notion of grammaticalization to the clause level, and even include non-segmental structure (such as word order), we will here adhere to a strictly 'element-bound' view of grammaticalization: where no grammaticalized element exists, there is no grammaticalization. Despite this fairly restricted concept of grammaticalization, we will attempt to corroborate the claim that essential aspects of grammar may be understood and modeled in terms of grammaticalization. The approach is essentially theoretical (practical applications will, hopefully, follow soon), and many issues are only mentioned rather than discussed in detail. The paper presupposes familiarity with the basic facts of grammaticalization and does not present any new facts.
This paper is concerned with anticausative verbs (or verb-forms), or anticausatives for short. [...] [C]ausative/non-causative pairs with a marked non-causative are quite frequent in the languages of the world. However, so far they have not received sufficient attention in general and typological linguistics, a fact which is also manifested in the absence of a generally recognized term for this phenomenon […]. This paper therefore deals with the most important properties of anticausatives (particularly the semantic conditions on them), their relationship to other areas of grammar, as well as their historical development in different languages. The grammatical domain of transitivity, valence, and voice, to which the anticausative belongs, occupies a central position in grammar, and consequently the present discussion should be of considerable interest to general comparative (or typological) linguists.
It is the aim of this paper to present and elaborate a new solution to the old syntactic problems connected with the Latin gerundive and gerund, two verbal categories which have been interpreted variously either as adjective (or participle) or as noun (or infinitive). These questions have been much discussed for quite a number of years […], but for the most part from a philological or purely diachronic point of view. All these linguists try to explain the peculiarities of these categories and their syntax by showing that the gerund is historically prior to the gerundive. [...] It is our thesis […] that in order to arrive at a unified account of gerundive and gerund, we do not have to go back to prehistoric times. Even in the classical language, gerund and gerundive represent the same category, in the sense that the gerund can be shown to be a special case of the gerundive. Additional evidence from a parallel construction in Hindi is adduced to make the Latin facts more plausible. It is only in the post-classical language that certain tendencies which had already shown up in Old Latin poetry become stronger and finally lead to a reanalysis of the gerundive and a split into two distinct syntactic constructions. The propositional meaning of the gerundive in its attributive use is explained with reference to a conflict between syntactic and cognitive principles. Special constructions which are the effects of such conflicts can be found in other parts of grammar. Languages differ with respect to the degree of syntacticization (or conventionalization) of these special constructions.
Traditionally, parsers are evaluated against gold standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (20 sentences per phenomenon) from both the TiGer and TüBa-D/Z resources. This yields a comparable test suite of 2 × 100 sentences, which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC, for the key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two different treebanks. In the remaining part of the paper we present the test suite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our classification of potential parser errors.
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of i) the string languages it generates (weak generative capacity, WGC) or ii) the tree languages it generates (strong generative capacity, SGC). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic, since the set of trees that a grammar formalism for natural languages should be able to generate is difficult to determine. The concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate-argument dependencies as a criterion of adequacy for natural languages.