Linguistik-Klassifikation
Document Type
- Preprint (46)
- Conference Proceeding (17)
- Book (5)
- Part of a Book (5)
- Article (4)
- Working Paper (3)
Language
- English (80)
Has Fulltext
- yes (80)
Is part of the Bibliography
- no (80)
Keywords
- Computerlinguistik (28)
- Japanisch (15)
- Deutsch (13)
- Maschinelle Übersetzung (8)
- Multicomponent Tree Adjoining Grammar (8)
- Syntaktische Analyse (8)
- Satzanalyse (4)
- Semantik (4)
- German (3)
- Grammatik (3)
- Lexicalized Tree Adjoining Grammar (3)
- Range Concatenation Grammar (3)
- Syntax (3)
- Transkription (3)
- Tree Adjoining Grammar (3)
- Dialog (2)
- Englisch (2)
- Frage (2)
- Grammaires d’Arbres Adjoints (2)
- Höflichkeitsform (2)
- Kongress (2)
- MCTAG (2)
- Numerale (2)
- Software (2)
- Suchmaschine (2)
- Tree Adjoining Grammar (2)
- Tree Description Grammar (2)
- speech tagging (2)
- Arabisch (1)
- Automatentheorie (1)
- Automatische Spracherkennung (1)
- Benutzeroberfläche (1)
- Coreference annotation (1)
- Datenstruktur (1)
- Description Tree Grammar (1)
- Formale Sprache (1)
- Formalismes syntaxiques (1)
- Fremdsprache (1)
- Generic NLP Architecture (1)
- Gesprochene Sprache (1)
- HPSG Parsing (1)
- IE (1)
- Korean (1)
- Koreanisch (1)
- Korpus <Linguistik> (1)
- LTAG (1)
- Lexical Resource Semantics (1)
- Morphologie (1)
- Numerus (1)
- Online-Publikation (1)
- Ontologie <Wissensverarbeitung> (1)
- Partikel (1)
- Präposition (1)
- Romanian (1)
- Rumänisch (1)
- SYNtax-based Reference Annotation (1)
- Satzanalyse (1)
- Shallow NLP (1)
- Simple Range Concatenation Grammar (1)
- Sloppiness (1)
- Syntactic formalisms (1)
- TUSNELDA (1)
- Tarragona <2008> (1)
- Tree-Adjoining Grammar (1)
- Tübingen <2007> (1)
- Unordered Vector Grammar with Dominance Link (1)
- Vagueness (1)
- Word Sense Disambiguation (1)
- XML (1)
- allemand (1)
- brouillage d’arguments (1)
- chunk parsing (1)
- computational semantics (1)
- coréen (1)
- formalismes grammaticaux (1)
- german (1)
- grammaires d’arbres (1)
- grammar formalism (1)
- lexicalized tree-adjoining grammar (1)
- memory-based learning (1)
- metagrammars (1)
- multicomponent rewriting (1)
- métagrammaires (1)
- ordre des mots (1)
- quantifier scope (1)
- robust parsing (1)
- role labeling (1)
- scrambling (1)
- similarity-based learning (1)
- time annotation (1)
- tree-based grammars (1)
- treebanking (1)
- underspecification (1)
- word order (1)
Institute
- Extern (80)
Some requirements for a VERBMOBIL system capable of processing Japanese dialogue input have been explored. Based on a pilot study in the VERBMOBIL domain, dialogues between two participants and a professional Japanese interpreter were analyzed with respect to a very typical and frequent feature: zero pronouns. Zero pronouns in Japanese texts or dialogues, like overt pronouns in English texts or dialogues, are an important element of discourse coherence. For translation, this difference in pronoun use is a case of translation mismatch: information not explicitly expressed in the source language is needed in the target language. (Verb argument positions that are normally obligatory in English are frequently omitted in Japanese. Furthermore, Japanese verbs are not marked for the features needed to select pronouns in English.)
In this paper, we introduce an extension of the XMG system (eXtensible MetaGrammar) that allows for the description of Multi-Component Tree Adjoining Grammars. In particular, we introduce the XMG formalism and its implementation, and show how the latter makes it possible to extend the system relatively easily to different target formalisms, thus opening the way towards multi-formalism.
In recent years, research in parsing has extended in several new directions. One of these directions is concerned with parsing languages other than English. Treebanks have become available for many European languages, but also for Arabic, Chinese, and Japanese. However, parsing results on these treebanks have been shown to depend on the types of treebank annotations used. Another direction in parsing research is the development of dependency parsers. Dependency parsing profits from the non-hierarchical nature of dependency relations, so lexical information can be included in the parsing process in a much more natural way. Machine-learning-based approaches in particular are very successful. The results achieved by these dependency parsers are very competitive, although comparisons are difficult because of the differences in annotation. For English, the Penn Treebank has been converted to dependencies; for this version, Nivre et al. report an accuracy rate of 86.3%, as compared to an F-score of 92.1 for Charniak's parser. The Penn Chinese Treebank is also available in both a constituent and a dependency representation. The best results reported for parsing experiments with this treebank are an F-score of 81.8 for the constituent version and an accuracy of 79.8% for the dependency version. The general trend in comparisons between constituent and dependency parsers is that the dependency parser performs slightly worse than the constituent parser. The only exception occurs for German, where F-scores for constituent parses including grammatical functions range between 51.4 and 75.3, depending on the treebank, NEGRA or TüBa-D/Z. The dependency parser based on a converted version of TüBa-D/Z, in contrast, reached an accuracy of 83.4%, i.e. 12 percentage points better than the best constituent analysis including grammatical functions.
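The abstract above contrasts two evaluation metrics: PARSEVAL F-score for constituent parses and attachment accuracy for dependency parses. The following is a minimal sketch of how the two scores are computed; the bracket spans, head indices, and function names are invented for illustration, not taken from the treebanks discussed.

```python
def parseval_f(gold_brackets, test_brackets):
    """Unlabeled PARSEVAL F-score over sets of (start, end) spans."""
    correct = len(gold_brackets & test_brackets)
    precision = correct / len(test_brackets)
    recall = correct / len(gold_brackets)
    return 2 * precision * recall / (precision + recall)

def attachment_accuracy(gold_heads, test_heads):
    """Share of tokens whose predicted head matches the gold head."""
    hits = sum(g == t for g, t in zip(gold_heads, test_heads))
    return hits / len(gold_heads)

# Toy gold/test brackets for a 5-token sentence, and per-token head
# indices (0 = root) for a 6-token sentence -- illustrative data only.
gold_b = {(0, 5), (0, 2), (2, 5), (3, 5)}
test_b = {(0, 5), (0, 2), (2, 5), (2, 4)}
gold_h = [2, 2, 0, 5, 5, 2]
test_h = [2, 2, 0, 5, 3, 2]

print(round(parseval_f(gold_b, test_b), 3))           # 0.75
print(round(attachment_accuracy(gold_h, test_h), 3))  # 0.833
```

Because the two metrics count different things (matching spans vs. matching head attachments), a percentage on one scale is not directly comparable to a percentage on the other, which is why the abstract stresses that such comparisons are difficult.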
This paper profiles significant differences in syntactic distribution and differences in word class frequencies for two treebanks of spoken and written German: the TüBa-D/S, a treebank of transliterated spontaneous dialogues, and the TüBa-D/Z treebank of newspaper articles published in the German daily newspaper 'die tageszeitung' (taz). The approach can be used more generally as a means of distinguishing and classifying language corpora of different genres.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson's Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved makes it possible to tackle problematic cases in a more principled fashion than pre-theoretic intuitions alone would allow.
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks. Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances. The TüSBL parser extends current chunk parsing techniques by a tree-construction component that extends partial chunk parses to complete tree structures, including recursive phrase structure as well as function-argument structure. TüSBL's tree-construction algorithm relies on techniques from memory-based learning that allow similarity-based classification of a given input structure relative to a pre-stored set of tree instances from a fully annotated treebank. A quantitative evaluation of TüSBL has been conducted using a semi-automatically constructed treebank of German that consists of approximately 67,000 fully annotated sentences. The basic PARSEVAL measures were used, although they were developed for parsers that have as their main goal a complete analysis spanning the entire input. This runs counter to the basic philosophy underlying TüSBL, which has as its main goal robustness of partially analyzed structures.
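The memory-based learning step described above can be reduced to a simple idea: store annotated instances, then classify new input by retrieving the most similar stored instance. The sketch below illustrates only that idea; the feature tuples, labels, and overlap measure are invented stand-ins, not the actual TüSBL features or tree instances.

```python
def overlap(a, b):
    """Number of feature positions on which two instances agree."""
    return sum(x == y for x, y in zip(a, b))

def classify(instance, memory):
    """Return the label of the most similar stored instance
    (first one wins on ties)."""
    best = max(memory, key=lambda item: overlap(instance, item[0]))
    return best[1]

# Hypothetical instance base: (chunk tag, left context, right context)
# mapped to a tree-construction decision.
memory = [
    (("NC", "VC", "PC"), "object"),
    (("NC", "START", "VC"), "subject"),
    (("PC", "NC", "VC"), "modifier"),
]

print(classify(("NC", "START", "NC"), memory))  # subject
```

A real memory-based learner would use weighted feature metrics and k nearest neighbours rather than a single best match, but the retrieval-instead-of-rule-induction principle is the same.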
In this paper, we present an open-source parsing environment (Tübingen Linguistic Parsing Architecture, TuLiPA) which uses Range Concatenation Grammar (RCG) as a pivot formalism, thus opening the way to the parsing of several mildly context-sensitive formalisms. This environment currently supports tree-based grammars (namely Tree-Adjoining Grammars (TAG) and Multi-Component Tree-Adjoining Grammars with Tree Tuples (TT-MCTAG)) and allows computation not only of syntactic structures, but also of the corresponding semantic representations. It is used for the development of a tree-based grammar for German.
Tree-local MCTAG with shared nodes: an analysis of word order variation in German and Korean
(2004)
Tree Adjoining Grammars (TAG) are known not to be powerful enough to deal with scrambling in free word order languages. The TAG variants proposed so far to account for scrambling are not entirely satisfying. Therefore, an alternative extension of TAG based on the notion of node sharing is introduced. Considering data from German and Korean, it is shown that this TAG extension can adequately analyse scrambling, also in combination with extraposition and topicalization.
Quantitative evaluation of parsers has traditionally centered around the PARSEVAL measures of crossing brackets, (labeled) precision, and (labeled) recall. However, it is well known that these measures do not give an accurate picture of the quality of a parser's output. Furthermore, we will show that they are especially unsuited for partial parsers. In recent years, research has concentrated on dependency-based evaluation measures. We will show in this paper that such a dependency-based evaluation scheme is particularly suitable for partial parsers. TüBa-D, the treebank used here for evaluation, contains all the necessary dependency information, so the conversion of trees into a dependency structure does not have to rely on heuristics. The dependency representations are therefore not only reliable, they are also linguistically motivated and can be used for linguistic purposes.
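The conversion described above is straightforward when, as in TüBa-D, every constituent already records its lexical head: each token attaches to the head of the smallest constituent properly containing it, with no heuristic head-finding rules. The sketch below assumes a toy tree encoding invented for this example, not the actual TüBa-D export format.

```python
def to_dependencies(node, parent_head=0, deps=None):
    """Convert a head-annotated constituent tree to a head map.

    node = (head_token_index, children); a child is either a token
    index (leaf) or another (head, children) tuple. Returns a dict
    mapping each token index to its head (0 = root).
    """
    if deps is None:
        deps = {}
    head, children = node
    deps[head] = parent_head            # phrase head attaches upward
    for child in children:
        if isinstance(child, int):
            if child != head:           # non-head token in this phrase
                deps[child] = head
        else:
            to_dependencies(child, head, deps)
    return deps

# "Peter sees the house": S headed by 'sees' (token 2),
# containing an NP headed by 'house' (token 4).
tree = (2, [1, 2, (4, [3, 4])])
print(to_dependencies(tree))  # {2: 0, 1: 2, 4: 2, 3: 4}
```

Because the heads come from the annotation itself, the resulting dependencies inherit the treebank's linguistic decisions rather than those of a conversion heuristic, which is the reliability point the abstract makes.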