Progress toward distinguishing clearly between generative and model-theoretic syntactic frameworks has not been smooth or swift, and the obfuscatory term 'constraint-based' has not helped. This paper reviews some elementary subregular formal language theory relevant to comparing description languages for model-theoretic grammars, generalizes the results to trees, and points out that HPSG linguists have maintained an unacknowledged and perhaps unintended allegiance to the idea of strictly local description: unbounded dependencies, in particular, are still being conceptualized in terms of plugging together local tree parts annotated with the SLASH feature. Adopting a description language with quantifiers holds out the prospect of eliminating the need for the SLASH feature. We need to ask whether that would be a good idea. Binding domain phenomena might tell us. More work of both descriptive and mathematical sorts is needed before the answer is clear.
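The notion of strictly local description invoked above can be made concrete. The following is a minimal sketch, with an invented toy grammar, of membership in a strictly 2-local (SL-2) string language: such a language is defined by the set of adjacent symbol pairs it permits (including word-boundary markers), and a string belongs to the language iff every bigram it contains is licensed.

```python
# Sketch of strictly 2-local (SL-2) membership testing.
# The grammar below is a toy example of my own, not taken from the paper.

def sl2_accepts(string, licensed_bigrams, boundary="#"):
    """Return True iff every adjacent pair in #string# is licensed."""
    padded = boundary + string + boundary
    return all(pair in licensed_bigrams
               for pair in zip(padded, padded[1:]))

# Toy grammar over {a, b}: 'b' may never immediately follow 'b'.
bigrams = {("#", "a"), ("#", "b"), ("a", "a"), ("a", "b"),
           ("b", "a"), ("a", "#"), ("b", "#")}

print(sl2_accepts("abab", bigrams))  # True
print(sl2_accepts("abba", bigrams))  # False: ('b', 'b') is not licensed
```

The point of the construction is that acceptance is decided purely by inspecting bounded local windows, which is exactly the kind of description the paper argues HPSG's SLASH-based treatment of unbounded dependencies still presupposes.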
This paper describes four areas in which grammar engineers and theoretical linguists can interact: using grammar engineering to confirm linguistic hypotheses; linguistic issues highlighted by grammar engineering; implementation capabilities guiding theoretical analyses; and insights into architectural issues. It is my hope that we will see more work in these areas in the future and more collaboration between grammar engineers and theoretical linguists. This is an area in which HPSG and LFG have a distinct advantage, given the strong communities and resources available.
This paper points out flaws in the semantics for lexical rule specifications developed in Meurers (2001). Under certain circumstances, words fail to be licit inputs to a rule according to this semantics, even though inspection of the rule's specification suggests they should be. The reason, it is shown, is that the decision whether the properties of a path should be transferred from the input of a rule to its output is made by considering each path and its properties in isolation, ignoring the 'non-local' effects that transferring those properties can have. Furthermore, the semantics is insensitive to the possible shapes of inputs to the rule, which likewise allows inputs of certain shapes to be unexpectedly rejected. An alternative semantics is developed that does not suffer from these deficits.
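The idea of 'transfer' at issue can be sketched in miniature. The following is my own simplification, using flat dictionaries rather than typed feature structures: a lexical rule's output consists of the rule's explicit specifications, with all unspecified properties carried over from the input word. The paper's criticism targets precisely the fact that this transfer decision is made path by path, in isolation.

```python
# Simplified sketch of per-path transfer in lexical rule application.
# Feature structures are flat dicts here; names are illustrative only.

def apply_lexical_rule(word, rule_spec):
    """Output = rule's explicit specifications, plus transferred input features."""
    output = dict(rule_spec)              # explicit specifications take priority
    for path, value in word.items():      # each path considered in isolation
        output.setdefault(path, value)    # transfer only unspecified paths
    return output

verb = {"HEAD": "verb", "VFORM": "base", "AUX": "-"}
passive_rule = {"VFORM": "passive"}       # rule mentions only VFORM

print(apply_lexical_rule(verb, passive_rule))
# {'VFORM': 'passive', 'HEAD': 'verb', 'AUX': '-'}
```

In a real typed feature system, transferred paths can interact (e.g. through appropriateness conditions or structure sharing), which is exactly where deciding transfer per path, as above, can misfire.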
The foundations of today's part-of-speech classifications reach back to antiquity: already in that era, Dionysius Thrax established a scheme of eight parts of speech, namely nouns, verbs, adjectives, articles, pronouns, prepositions, adverbs and conjunctions. This number varies across the grammatical frameworks of our own time: the generative approach, for instance, works with four parts of speech, whereas Bergenholtz/Schaeder (1977) list no fewer than 51 distinct parts of speech plus 5 lexeme classes. These wide fluctuations in the assumed number of parts of speech alone illustrate the general difficulty of delimiting parts of speech by their criteria.
The quotation "Denn sie gliedern sich in Stämme wie die Menschen" ("For they divide into tribes, like human beings") from Érik Orsenna's "Die Grammatik ist ein sanftes Lied" introduces the title of this thesis and at the same time marks an interface between literary studies and linguistics, and grammar in particular. As a metalinguistic narrative, Orsenna's tale engages in literary form with language and its grammar. In the present thesis I am primarily concerned with analysing the criteria for the classification of parts of speech and their literary representation and elaboration in Orsenna's text about the words that live together in tribes in the city of words and can be joined into sentences in a factory. Orsenna's original text is a narrative written in French. Its translator, Caroline Vollmann, adapted the text to the conditions and particular phenomena of the German language. For this reason, I refer to Orsenna and Vollmann as the authors throughout this thesis.
Since the representation of the parts of speech in Orsenna and Vollmann is realised primarily through metaphors, with the words, as "tribes" in a city, being assigned human attributes, I pay particular attention to the foundations of Lakoff and Johnson's cognitive theory of metaphor. To ensure as scientifically sound a basis as possible for the analysis of criteria for part-of-speech classification, I have selected three grammars as points of comparison for the subsequent analysis of Orsenna and Vollmann's text. This provides a syntactically as well as morphologically and semantically oriented perspective on the object of study. From the grammars of Hentschel/Weydt (2003), Helbig/Buscha (2005) and Boettcher (2009), a catalogue of criteria will be compiled in the course of the thesis, which can then be applied in a further step to the analysis of the part-of-speech classification of the literary text.
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of i) the string languages it generates (weak generative capacity, WGC) or ii) the tree languages it generates (strong generative capacity, SGC). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic, since the set of trees a grammar formalism for natural languages should be able to generate is difficult to determine: the concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate-argument dependencies as a criterion of adequacy for natural languages.
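The WGC/SGC contrast can be made concrete with a toy illustration of my own (not taken from the paper): two grammars that generate exactly the same strings, and are therefore weakly equivalent, while assigning them different trees, so that only strong generative capacity distinguishes them.

```python
# Two toy grammars for the string language a^n (n >= 1), encoded as
# nested tuples (label, children...). Illustrative only.

def yield_of(tree):
    """The string (frontier) of a tree."""
    if isinstance(tree, str):
        return tree
    return "".join(yield_of(child) for child in tree[1:])

# Grammar 1 (right-branching): S -> a S | a
def right_tree(n):
    return ("S", "a") if n == 1 else ("S", "a", right_tree(n - 1))

# Grammar 2 (left-branching): S -> S a | a
def left_tree(n):
    return ("S", "a") if n == 1 else ("S", left_tree(n - 1), "a")

# Weakly equivalent: identical string yields for every n ...
print(yield_of(right_tree(3)), yield_of(left_tree(3)))  # aaa aaa
# ... but strongly distinct: different structures over those strings.
print(right_tree(2))  # ('S', 'a', ('S', 'a'))
print(left_tree(2))   # ('S', ('S', 'a'), 'a')
```

The paper's further move is to compare formalisms not on strings or trees alone but on strings paired with predicate-argument dependencies, which this sketch does not model.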
This study outlines the formation of the category of "modal verb" within the grammaticography of German from the beginnings in the 16th century up to its "canonization" in the first half of the 20th century, also showing certain parallels to the treatment of modal verbs in the grammaticography of Portuguese. It also describes the influence German grammaticography had on the formation of this category in the grammaticography of Portuguese.
In this paper, topic and focus effects at both the left and right periphery are argued to be epiphenomena of general properties of tree growth. We incorporate Korean into this account as a prototypical verb-final language and show how long- and short-distance scrambling form part of this general picture. Multiple long-distance scrambling effects emerge as a consequence of the feeding relationship between different forms of structural underspecification. We also show how the array of effects at the right periphery, in verb-final and other language types alike, can be explained with the same concepts of tree growth. In particular, the Right Roof Constraint, a well-known but little-understood constraint, is an immediate consequence of compositionality constraints as articulated in this system.
The present study examines a particular kind of rule blockage, referred to below as an 'anti-structure-preservation effect'. An anti-structure-preservation effect occurs when a language has a process that is preempted from going into effect if some sequence of sounds [XY] would occur on the surface, even though other words in the language contain [XY] sequences (which are underlyingly /XY/). It will be argued below that anti-structure-preservation effects can be captured in Optimality Theory in terms of a general ranking involving FAITH and MARKEDNESS constraints, with individual languages invoking a specific instantiation of this ranking. A significant point made below is that while anti-structure-preservation effects can be handled straightforwardly in terms of constraint rankings, they typically require ad hoc, rule-specific conditions in rule-based approaches.
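The ranking-based evaluation invoked above can be sketched in the abstract. What follows is a minimal illustration of OT-style evaluation under strict constraint domination, with a toy candidate set and constraint definitions of my own; it shows only the general mechanism of ranked, violation-based comparison, not the specific FAITH/MARKEDNESS ranking the study proposes.

```python
# Sketch of Optimality-Theoretic evaluation: candidates are compared on a
# ranked constraint list, lexicographically from the highest-ranked down.
# Constraints and candidates are invented for illustration.

def optimal(candidates, ranked_constraints):
    """Return the candidate whose violation profile is lexicographically best."""
    def profile(cand):
        return tuple(constraint(cand) for constraint in ranked_constraints)
    return min(candidates, key=profile)

def star_xy(cand):            # a MARKEDNESS constraint: *XY, one mark per 'xy'
    return cand.count("xy")

def make_faith(underlying):   # a FAITH constraint: one mark per changed segment
    return lambda cand: sum(a != b for a, b in zip(underlying, cand))

underlying = "xya"
candidates = ["xya", "xia"]                    # faithful vs. repaired output
ranking = [star_xy, make_faith(underlying)]    # MARKEDNESS >> FAITH

print(optimal(candidates, ranking))  # 'xia': *XY outranks FAITH here
```

With the ranking reversed (FAITH >> *XY), the faithful candidate 'xya' would win instead, which is the sense in which individual languages instantiate the general ranking differently.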
Simplicity as a methodological orientation applies to linguistic theory just as to any other field of research: 'Occam's razor' is the label for the basic heuristic maxim according to which an adequate analysis must ultimately be reduced to indispensable specifications. In this sense, conceptual economy has been a strict and stimulating guideline in the development of Generative Grammar from the very beginning. Halle's (1959) argument discarding the level of taxonomic phonemics in order to unify two otherwise separate phonological processes is an early characteristic example; a more general notion is that of an evaluation metric introduced in Chomsky (1957, 1975), which relates the relative simplicity of alternative linguistic descriptions systematically to the quest for explanatory adequacy of the theory underlying the descriptions to be evaluated. Further proposals along these lines include the theory of markedness developed in Chomsky and Halle (1968), Kean (1975, 1981), and others, the notion of underspecification proposed e.g. in Archangeli (1984) and Farkas (1990), and the concept of default values and related notions. An important step promoting this general orientation was the idea of Principles and Parameters developed in Chomsky (1981, 1986), which reduced the notion of language-particular rule systems to universal principles, subject merely to parametrization with restricted options, largely related to properties of particular lexical items. On this account, the notion of a simplicity metric is to be dispensed with, as competing analyses of relevant data are now supposed to be essentially excluded by the restrictive system of principles.
This paper is concerned with developing Joan Bybee's proposals regarding the nature of grammatical meaning and synthesizing them with Paul Hopper's concept of grammar as emergent. The basic question is this: How much of grammar may be modeled in terms of grammaticalization? In contradistinction to Heine, Claudi & Hünnemeyer (1991), who propose a fairly broad and unconstrained framework for grammaticalization, we try to present a fairly specific and constrained theory of grammaticalization in order to get a more precise idea of the potential and the problems of this approach. Thus, while Heine et al. (1991:25) expand – without discussion – the traditional notion of grammaticalization to the clause level, and even include non-segmental structure (such as word order), we will here adhere to a strictly 'element-bound' view of grammaticalization: where no grammaticalized element exists, there is no grammaticalization. Despite this fairly restricted concept of grammaticalization, we will attempt to corroborate the claim that essential aspects of grammar may be understood and modeled in terms of grammaticalization. The approach is essentially theoretical (practical applications will, hopefully, follow soon) and many issues are just mentioned and not discussed in detail. The paper presupposes a familiarity with the basic facts of grammaticalization and it does not present any new facts.