Linguistics
In the Grammar Department of the Institut für Deutsche Sprache (IDS), Mannheim, a new project is currently being developed: a Grammar of German in European Comparison (Grammatik des Deutschen im europäischen Vergleich, GDE). This project continues the contrastive tradition of the IDS, yet is also innovative in many respects. Before presenting the project in detail, I shall trace the line back to the contrastive grammars. Polish Germanists, of all readerships, surely need no special reminder of the tradition of contrastive grammar writing: this tradition, inseparably linked with the name of Ulrich Engel, has only just culminated in the newly published German-Polish contrastive grammar. In the field of contrastive grammars for language pairs of which German is one element, the IDS thus commands a comparatively rich tradition. Contrastive grammars for the pairs German – French (Zemb 1978), German – Serbo-Croatian, German – Spanish (Cartagena/Gauger 1989) and German – Romanian (Engel et al. 1993) were produced at the IDS or in cooperation with it. For the pair English – German, Hawkins 1986 offers a typological-comparative grammar. The German-Polish contrastive grammar, compiled under the direction of Ulrich Engel, appeared in 1999. Abraham 1994 and Glinz 1994 confront German, with quite different emphases, with several other European languages. At the Humboldt University in Berlin, preparatory work on a German-Russian contrastive grammar is currently under way (an initiative of Wolfgang Gladrow and Michail Kotin). The task of a 'grammar of German in the European context' has thus been amply prepared.
Die Ressource "Wissen" rückte in den letzten Jahrzehnten als Quelle wissenschaftlicher Innovation immer stärker ins Zentrum des Interesses. Diese Fokussierung mündete in eine Selbstreflexion der Wissenschaft und der wissenschaftlichen Disziplinen: Thematisiert werden vor allem die Art und Weise, wie Wissen gewonnen wird, sowie die damit zusammenhängende Frage nach der Konstruktion von Wissenschaftlichkeit, womit das Bewusstsein gleichzeitig auf die mehr und mehr sich auflösende Abgrenzung zwischen den Disziplinen beziehungsweise zwischen den drei hauptsächlichen Wissenschaftskulturen, von Natur-, Geistes- und Kultur- sowie Sozialwissenschaften gelenkt wird. Innerhalb und außerhalb der Universitäten bildeten und bilden sich nicht immer klar verortbare "trading zones" (Gallison 1997), in denen neue Formen und Techniken der Wissensproduktion und Wissensvermittlung geprüft, geübt und teilweise auch institutionalisiert werden. ...
Is language the key to number? This article argues that the human language faculty provides the cognitive equipment that enables humans to develop a systematic number concept. Crucially, this concept is based on non-iconic representations that involve relations between relations: relations between numbers are linked with relations between objects. In contrast to this, language-independent numerosity concepts provide only iconic representations. The pattern of forming relations between relations lies at the heart of our language faculty, suggesting that it is language that enables humans to make the step from these iconic representations, which we share with other species, to a generalised concept of number.
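The pattern of "relations between relations" can be made concrete with a small worked example. The sketch below is our own illustration, not the author's: the order relation among two numbers (7 > 5) is linked to the order relation among two collections of objects, so the collections are compared via the numbers assigned to them (non-iconically) rather than by pairing the objects themselves (iconically).

```python
# Toy illustration (ours, not the author's) of a relation between
# relations: the order among the numbers 7 and 5 mirrors the order
# among two collections of objects.
A = {"a1", "a2", "a3", "a4", "a5", "a6", "a7"}   # seven objects
B = {"b1", "b2", "b3", "b4", "b5"}               # five objects

# Non-iconic comparison: the collections are compared via the numbers
# assigned to them, not by pairing up the objects themselves.
assert (len(A) > len(B)) == (7 > 5)
```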
Der TUSNELDA-Standard: ein Korpusannotierungsstandard zur Unterstützung linguistischer Forschung (The TUSNELDA standard: a corpus annotation standard in support of linguistic research)
(2001)
The use of standards for the annotation of larger collections of electronic texts (corpora) is a prerequisite for any later reuse of these corpora. This article presents a corpus annotation standard that takes into account the requirements of investigating a wide range of linguistic phenomena. The standard was developed in the SFB 441 at the University of Tübingen. It builds on existing standards, in particular CES and TEI, which prove to be in part too elaborate and insufficiently restrictive, and in part not expressive enough, to meet the needs of corpus-based linguistic research.
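To make the idea of TEI/CES-style corpus annotation concrete, here is a minimal sketch of reading such markup with Python's standard library. The element and attribute names are illustrative placeholders only, not the actual TUSNELDA markup, which is defined in the SFB 441 documentation.

```python
# Hedged sketch: reading a TEI-style annotated sentence. The tags
# <s> and <w pos="..."> are generic placeholders, NOT the TUSNELDA
# standard's own element inventory.
import xml.etree.ElementTree as ET

SAMPLE = """
<s id="s1">
  <w pos="ART">Die</w>
  <w pos="NN">Verwendung</w>
  <w pos="APPR">von</w>
  <w pos="NN">Standards</w>
</s>
"""

sentence = ET.fromstring(SAMPLE)
for w in sentence.iter("w"):
    print(w.text, w.get("pos"))
```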
Weak function word shift
(2004)
The fact that object shift only affects weak pronouns in mainland Scandinavian is seen as an instance of a more general observation that can be made in all Germanic languages: weak function words tend to avoid the edges of larger prosodic domains. This generalisation has been formulated within Optimality Theory in terms of alignment constraints on prosodic structure by Selkirk (1996) in explaining the distribution of prosodically strong and weak forms of English function words, especially modal verbs, prepositions and pronouns. But a purely phonological account fails to integrate the syntactic licensing conditions for object shift in an appropriate way. The standard semantico-syntactic accounts of object shift, on the other hand, fail to explain why it is only weak pronouns that undergo object shift. This paper develops an optimality-theoretic model of the syntax-phonology interface which is based on the interaction of syntactic and prosodic factors. The account can successfully be applied to further related phenomena in English and German.
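The core mechanism of such an account, ranked violable constraints selecting an optimal candidate, can be sketched in a few lines. The constraint names and violation counts below are invented for illustration; they are not Selkirk's or the paper's actual tableau.

```python
# Minimal sketch of optimality-theoretic evaluation with invented
# constraints. Constraints are ranked; the winner is the candidate
# whose violation profile is lexicographically smallest.
RANKING = ["ALIGN-PROSODIC", "STAY", "WEAK-PRONOUN-SHIFT"]  # hypothetical names

candidates = {
    # candidate: {constraint: number of violations}
    "object in situ": {"ALIGN-PROSODIC": 1, "STAY": 0, "WEAK-PRONOUN-SHIFT": 1},
    "object shifted": {"ALIGN-PROSODIC": 0, "STAY": 1, "WEAK-PRONOUN-SHIFT": 0},
}

def profile(violations):
    """Violation vector, ordered by the constraint ranking."""
    return tuple(violations.get(c, 0) for c in RANKING)

winner = min(candidates, key=lambda cand: profile(candidates[cand]))
print(winner)  # "object shifted": the higher-ranked ALIGN outweighs STAY
```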
This paper argues for a particular architecture of OT syntax. This architecture has three core features: i) it is bidirectional: the usual production-oriented optimisation (called 'first optimisation' here) is accompanied by a second step that checks the recoverability of an underlying form; ii) this underlying form already contains a full-fledged syntactic specification; iii) especially the procedure checking for recoverability makes crucial use of semantic and pragmatic factors. The first section motivates the basic architecture. The second section shows with two examples how contextual factors are integrated. The third section examines its implications for learning theory, and the fourth section concludes with a broader discussion of the advantages and disadvantages of the proposed model.
This paper is part of a research project on OT Syntax and the typology of the free relative (FR) construction. It concentrates on the details of an OT analysis and some of its consequences for OT syntax. I will not present a general discussion of the phenomenon and the many controversial issues it is famous for in generative syntax.
The aim of this paper is the exploration of an optimality-theoretic architecture for syntax that is guided by the concept of "correspondence": syntax is understood as the mechanism of "translating" underlying representations into a surface form. In minimalism, this surface form is called "Phonological Form" (PF). Both semantic and abstract syntactic information are reflected by the surface form. The empirical domain in which this architecture is tested is that of minimal link effects, especially in the case of "wh"-movement. The OT constraints require the surface form to reflect the underlying semantic and syntactic representations as closely as possible. The means by which underlying relations and properties are encoded are precedence, adjacency, surface morphology and prosodic structure. Information that is not encoded in one of these ways remains unexpressed, and gets lost unless it is recoverable via the context. Different kinds of information are often expressed by the same means. The resulting conflicts are resolved by the relative ranking of the relevant correspondence constraints.
The argument that I have tried to elaborate in this paper is that the conceptual problem behind the traditional competence/performance distinction does not go away, even if we abandon its original Chomskyan formulation. It returns as the question about the relation between the model of the grammar and the results of empirical investigations: the question of empirical verification. The theoretical concept of markedness is argued to be an ideal correlate of gradience. Optimality Theory, being based on markedness, is a promising framework for the task of bridging the gap between model and empirical world. However, this task requires not only a model of grammar, but also a theory of the methods that are chosen in empirical investigations and how their results are interpreted, and a theory of how to derive predictions for these particular empirical investigations from the model. Stochastic Optimality Theory is one possible formulation of a proposal that derives empirical predictions from an OT model. However, I hope to have shown that it is not enough to take frequency distributions and relative acceptabilities at face value and simply construct some Stochastic OT model that fits the facts. These facts first of all need to be interpreted, and those factors that the grammar has to account for must be sorted out from those about which grammar should have nothing to say. This task, to my mind, is more complicated than the picture that a simplistic application of (not only) Stochastic OT might suggest.
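How Stochastic OT derives frequency predictions can be shown with a short sketch of Boersma-style evaluation: each constraint carries a ranking value on a continuous scale, Gaussian evaluation noise is added at each evaluation, and the resulting order decides the winner. Repeating the evaluation yields an output distribution. The constraint names and values below are invented for illustration.

```python
# Sketch of Boersma-style Stochastic OT evaluation with invented
# constraints. Close ranking values plus evaluation noise produce
# variable outputs with characteristic frequencies.
import random
from collections import Counter

RANKING_VALUES = {"C1": 100.0, "C2": 98.0}  # close values -> variation
NOISE_SD = 2.0                              # evaluation noise

candidates = {
    "output A": {"C1": 0, "C2": 1},
    "output B": {"C1": 1, "C2": 0},
}

def evaluate():
    noisy = {c: v + random.gauss(0, NOISE_SD) for c, v in RANKING_VALUES.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)  # highest-ranked first
    prof = lambda cand: tuple(candidates[cand][c] for c in order)
    return min(candidates, key=prof)

# Roughly a 3:1 split for these values, since C2 only occasionally
# outranks C1 under the noise.
print(Counter(evaluate() for _ in range(10_000)))
```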
In the past, a divide could be seen between 'deep' parsers on the one hand, which construct a semantic representation out of their input but usually have significant coverage problems, and more robust parsers on the other hand, which are usually based on a (statistical) model derived from a treebank and have larger coverage, but leave the problem of semantic interpretation to the user. More recently, approaches have emerged that combine the robustness of data-driven (statistical) models with more detailed linguistic interpretation, such that the output can be used for deeper semantic analysis. Cahill et al. (2002) use a PCFG-based parsing model in combination with a set of principles and heuristics to derive functional (f-)structures of Lexical-Functional Grammar (LFG). They show that the derived functional structures have a better quality than those generated by a parser based on a state-of-the-art hand-crafted LFG grammar. Advocates of Dependency Grammar usually point out that dependencies already are a semantically meaningful representation (cf. Menzel, 2003). However, parsers based on dependency grammar normally create underspecified representations with respect to certain phenomena such as coordination, apposition and control structures. In these areas they are too "shallow" to be directly used for semantic interpretation.

In this paper, we adopt an approach similar to that of Cahill et al. (2002), using a dependency-based analysis to derive functional structure, and demonstrate the feasibility of this approach using German data. A major focus of our discussion is on the treatment of coordination and other potentially underspecified structures in the dependency data input.

F-structure is one of the two core levels of syntactic representation in LFG (Bresnan, 2001). Independently of surface order, it encodes abstract syntactic functions that constitute predicate-argument structure and other dependency relations such as subject, predicate and adjunct, but also further semantic information such as the semantic type of an adjunct (e.g. directional). Normally f-structure is represented as a recursive attribute-value matrix, which is isomorphic to a directed graph representation. Figure 5 depicts an example target f-structure. As mentioned earlier, these deeper-level dependency relations can be used to construct logical forms, as in the approaches of van Genabith and Crouch (1996), who construct underspecified discourse representations (UDRSs), and Spreyer and Frank (2005), who have robust minimal recursion semantics (RMRS) as their target representation. We therefore think that f-structures are a suitable target representation for automatic syntactic analysis in a larger pipeline of mapping text to interpretation.

In this paper, we report on the conversion from dependency structures to f-structure. Firstly, we evaluate the f-structure conversion in isolation, starting from hand-corrected dependencies based on the TüBa-D/Z treebank and Versley (2005)'s conversion. Secondly, we start from tokenized text to evaluate the combined process of automatic parsing (using Foth and Menzel (2006)'s parser) and f-structure conversion. As a test set, we randomly selected 100 sentences from TüBa-D/Z, which we annotated using a scheme very close to that of the TiGer Dependency Bank (Forst et al., 2004). In the next section, we sketch dependency analysis, the underlying theory of our input representations, and introduce four different representations of coordination.
We also describe Weighted Constraint Dependency Grammar (WCDG), the dependency parsing formalism that we use in our experiments. Section 3 characterises the conversion of dependencies to f-structures. Our evaluation is presented in section 4, and finally, section 5 summarises our results and gives an overview of problems remaining to be solved.
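The basic shape of such a conversion, labelled dependency edges mapped onto a recursive attribute-value matrix, can be sketched as follows. The dependency labels and the mapping rule are invented placeholders, not the paper's actual conversion rules for TüBa-D/Z.

```python
# Toy sketch of dependency-to-f-structure conversion: each labelled
# edge becomes an attribute of a nested attribute-value structure.
# Labels (SUBJ, OBJ, SPEC) are illustrative, not the real rule set.

# (head, dependent, label) triples for "Peter sieht den Mann"
deps = [
    ("sieht", "Peter", "SUBJ"),
    ("sieht", "Mann", "OBJ"),
    ("Mann", "den", "SPEC"),
]

def f_structure(head):
    """Recursively build an attribute-value matrix as a nested dict."""
    fs = {"PRED": head}
    for h, d, label in deps:
        if h == head:
            fs[label] = f_structure(d)
    return fs

print(f_structure("sieht"))
# {'PRED': 'sieht', 'SUBJ': {'PRED': 'Peter'},
#  'OBJ': {'PRED': 'Mann', 'SPEC': {'PRED': 'den'}}}
```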
In this paper, we investigate a wide range of features for their usefulness in the resolution of nominal coreference, both as hard constraints (i.e. completely removing elements from the list of possible candidates) and as soft constraints (where an accumulation of violations of soft constraints makes it less likely that a candidate is chosen as the antecedent). We present a state-of-the-art system based on such constraints and weights estimated with a maximum entropy model, using lexical information to resolve cases of coreferent bridging.
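The division of labour between hard and soft constraints can be sketched as below. The features, weights and the logistic scoring function are invented for illustration; the paper's actual system estimates its weights with a maximum entropy model over a much richer feature set.

```python
# Hedged sketch of hard vs. soft constraints in antecedent selection.
# Feature names and weights are hypothetical, not the paper's model.
import math

def hard_filter(mention, candidate):
    # Hard constraint: remove candidates outright, e.g. on number clash.
    return mention["number"] == candidate["number"]

WEIGHTS = {"same_head": 2.0, "distance": -0.3}  # invented weights

def score(mention, candidate):
    feats = {
        "same_head": float(mention["head"] == candidate["head"]),
        "distance": float(mention["pos"] - candidate["pos"]),
    }
    z = sum(WEIGHTS[f] * v for f, v in feats.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic link, maxent-style

def resolve(mention, candidates):
    survivors = [c for c in candidates if hard_filter(mention, c)]
    return max(survivors, key=lambda c: score(mention, c), default=None)

mention = {"head": "Mann", "number": "sg", "pos": 10}
candidates = [
    {"head": "Mann", "number": "sg", "pos": 4},
    {"head": "Frau", "number": "pl", "pos": 8},  # removed by the hard filter
]
print(resolve(mention, candidates))
```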
We adopt Markert and Nissim (2005)'s approach of using the World Wide Web to resolve cases of coreferent bridging for German and discuss the strengths and weaknesses of this approach. As the general approach of using surface patterns to obtain information on ontological relations between lexical items has only been tried on English, it is also interesting to see whether the approach works as well for German as it does for English, and what differences between these languages need to be accounted for. We also present a novel approach to combining several patterns that yields an ensemble which outperforms the best-performing single patterns in terms of both precision and recall.
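The pattern-based idea can be sketched in a few lines: instantiate surface patterns such as "X und andere Y" (the German analogue of "X and other Ys") with an anaphor/antecedent pair and compare normalised hit counts. The patterns, the fake counts and the averaging combination below are our own illustration, not the paper's actual patterns or its ensemble method.

```python
# Sketch of surface-pattern scoring for coreferent bridging. The
# `hit_count` function returns made-up numbers for demonstration;
# a real system would query a web search engine instead.
PATTERNS = [
    "{x} und andere {y}",   # "X and other Ys"
    "{y} wie {x}",          # "Ys such as X"
]

def hit_count(query: str) -> int:
    fake = {  # invented counts, for the demo only
        "BMW und andere Autohersteller": 120,
        "Autohersteller wie BMW": 450,
        "BMW": 100_000,
    }
    return fake.get(query, 0)

def ensemble_score(anaphor: str, antecedent: str) -> float:
    scores = []
    for p in PATTERNS:
        hits = hit_count(p.format(x=antecedent, y=anaphor))
        base = hit_count(antecedent)  # normalise by antecedent frequency
        scores.append(hits / base if base else 0.0)
    return sum(scores) / len(scores)

print(ensemble_score("Autohersteller", "BMW"))
```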
When a statistical parser is trained on one treebank, it is usually tested on another portion of the same treebank, partly because a comparable annotation format is needed for testing. But the user of a parser may not be interested in parsing ever more sentences from the same newspaper, and may even want syntactic annotations for a slightly different text type. Gildea (2001), for instance, found that a parser trained on the WSJ portion of the Penn Treebank performs less well on the Brown corpus (the subset that is available in the PTB bracketing format) than a parser that has been trained only on the Brown corpus, although the latter has only half as many sentences as the former. Additionally, a parser trained on both the WSJ and Brown corpora performs less well on the Brown corpus than on the WSJ one. This leads us to the following questions that we would like to address in this paper:
- Is there a difference in the usefulness of techniques that are used to improve parser performance between the same-corpus and the different-corpus case?
- Are different types of parsers (rule-based and statistical) equally sensitive to corpus variation?
To address these questions, we compared the quality of the parses of a hand-crafted constraint-based parser and a statistical PCFG-based parser that was trained on a treebank of German newspaper text.
Using a qualitative analysis of disagreements from a referentially annotated newspaper corpus, we show that, in coreference annotation, vague referents are prone to greater disagreement. We show how potentially problematic cases can be dealt with in a way that is practical even for larger-scale annotation, considering a real-world example from newspaper text.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson's Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved allows problematic cases to be tackled in a more principled fashion than would be possible using only pre-theoretic intuitions.
Distributional approximations to lexical semantics are very useful not only in helping the creation of lexical semantic resources (Kilgarriff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge, such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004) or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
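The decorrelation step can be sketched as follows: build a word-by-feature co-occurrence matrix and reduce it with a truncated SVD, so that correlated (e.g. topically skewed) features collapse into fewer dimensions. The tiny matrix below is invented; a real model would use counts from Web corpora over both pattern-based and window-based features.

```python
# Minimal sketch of SVD-based decorrelation of a distributional
# feature space; all counts are invented for illustration.
import numpy as np

words = ["Hund", "Katze", "Auto"]
features = ["bellt", "miaut", "fährt", "Tier"]
counts = np.array([
    [50.,  0.,  1., 30.],   # Hund
    [ 0., 40.,  0., 28.],   # Katze
    [ 1.,  0., 60.,  2.],   # Auto
])

U, S, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2                                  # keep the top-k dimensions
vectors = U[:, :k] * S[:k]             # reduced word vectors

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(vectors[0], vectors[1]))  # Hund/Katze: relatively similar
print(cos(vectors[0], vectors[2]))  # Hund/Auto: less similar
```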
The purpose of this paper is to describe the TüBa-D/Z treebank of written German and to compare it to the independently developed TIGER treebank (Brants et al., 2002). Both treebanks, TIGER and TüBa-D/Z, use an annotation framework that is based on phrase structure grammar and that is enhanced by a level of predicate-argument structure. The comparison between the annotation schemes of the two treebanks focuses on the different treatments of free word order and discontinuous constituents in German as well as on differences in phrase-internal annotation.
The following account aims, on the one hand, to show by example how far the Swiss German dialects and the German standard language can diverge in pronunciation, morphology, syntax and vocabulary, but on the other hand always also to point out commonalities. Certain features of dialectal language structure are often prematurely taken to be peculiarities of the dialect, although the same features can also be found in spoken Standard German. In many cases, then, the differences are not between dialect and standard language, but between spoken language and written language. [completely revised for a second edition]
The language situation of German-speaking Switzerland, where the dialects make up the greater part of spoken language reality, offers a wide field for research on spoken language. The strong position of the dialects and their largely oral transmission make them interesting for research on language change. Whereas the study of language change was long restricted to reconstructing spoken language from written sources, historical records of spoken language have become available since spoken language began to be captured in scientifically reflected transcripts and since sound archiving became possible. The primary form of language can thus be taken into account. For although sound change was long the central domain of language historiography, and language historiography largely proceeded from the "primacy of speech" (Sonderegger 1979, 11), it long had to make do with written sources that represented only reflexes of spoken language.
Anyone who has looked around German-Swiss IRC chat channels will immediately have noticed that, alongside the standard language, dialect is frequently used. An analysis of variety use suggests itself. The question arises: what does linguistic norm mean in a communicative space in which the requirement to write German merely means not to write French, Italian, Turkish, Serbian, Portuguese and so on, where the standard language is thus only one of the accepted varieties? What does linguistic norm mean where Bernese German with /l/-vocalisation occurs alongside Valais German with archaic full vowels in unstressed syllables, and where a standard-language [a:] may be written as ‹a, ah, aa, o, oh› or ‹oo›? The question of a descriptive norm is pursued here by presenting possible ways of writing down individual features and comparing their use in regional and supra-regional chat rooms. From current usage, an attempt is then made to derive implicit norms.