We philologists have it easy when it comes to talking. We watch as others, most of whom do not belong to our guild, carry the immense abundance of written work from its respective original language into all possible languages, and we relate to this as interested spectators. We have every reason to take pleasure in it: without this cross-border exchange of goods and ideas, the field on which we graze would remain narrower and more parceled out than the authors' intentions and the subject matter itself would require. We can (provided we have the necessary overview) praise what the translators have accomplished: the correspondences they have discovered or invented; the power, suppleness, and variety of modulation that they have first activated in their target languages, with thousands of illuminating finds or with the whole tone and flow of their translations. If we feel up to it, we can meddle in their craft and translate individual passages or entire works ourselves. We can criticize them where the translations presented to us seem too flat, or where they fall short of the original, in substance or style, more than necessary; we can make suggestions for improvement. When we quote translations and find it necessary to modify them, we move in a gray zone between respect for the translator, delight in yet further recognized potentials of the text, and the urge to bring everything we have read out of the original, as far as possible, to our listeners or readers in our own language.
Recent approaches to Word Sense Disambiguation (WSD) generally fall into two classes: (1) information-intensive approaches and (2) information-poor approaches. Our hypothesis is that for memory-based learning (MBL), a reduced amount of data is more beneficial than the full range of features used in the past. Our experiments show that MBL combined with a restricted set of features and a feature selection method that minimizes the feature set leads to competitive results, outperforming all systems that participated in the SENSEVAL-3 competition on the Romanian data. Thus, with this specific method, a tightly controlled feature set improves the accuracy of the classifier, reaching 74.0% in the fine-grained and 78.7% in the coarse-grained evaluation.
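The feature-minimization idea behind this abstract can be illustrated with a toy sketch (not the authors' actual system): a 1-nearest-neighbor classifier over symbolic features, combined with greedy backward elimination that drops a feature whenever doing so does not hurt held-out accuracy. All data and function names below are hypothetical.

```python
# Toy sketch of memory-based learning (1-NN over symbolic features)
# with greedy backward feature elimination. Illustrative only; the
# data and names are hypothetical, not the SENSEVAL-3 setup.

def knn_predict(train, features, instance):
    """Predict the label of the nearest training instance, comparing
    only the selected feature positions (symbolic overlap count)."""
    def overlap(x):
        return sum(x[i] == instance[i] for i in features)
    best = max(train, key=lambda ex: overlap(ex[0]))
    return best[1]

def accuracy(train, test, features):
    hits = sum(knn_predict(train, features, x) == y for x, y in test)
    return hits / len(test)

def backward_selection(train, dev):
    """Greedily remove features as long as dev accuracy does not drop,
    yielding a minimal feature set with no loss on held-out data."""
    features = set(range(len(train[0][0])))
    best_acc = accuracy(train, dev, features)
    improved = True
    while improved:
        improved = False
        for f in sorted(features):
            candidate = features - {f}
            acc = accuracy(train, dev, candidate)
            if acc >= best_acc:     # smaller set, no loss: keep it
                features, best_acc = candidate, acc
                improved = True
                break
    return features, best_acc

# Toy data: feature 0 is informative, features 1 and 2 are noise.
train = [(("a", "p", "q"), "L1"), (("b", "p", "q"), "L2")]
dev   = [(("a", "r", "s"), "L1"), (("b", "r", "s"), "L2")]
selected, acc = backward_selection(train, dev)
```

On this toy data the procedure discards both noise features and keeps only the informative one, mirroring the paper's observation that a tightly controlled feature set can match or beat the full set.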
This paper profiles significant differences in syntactic distribution and in word class frequencies for two treebanks of spoken and written German: the TüBa-D/S, a treebank of transcribed spontaneous dialogs, and the TüBa-D/Z, a treebank of newspaper articles published in the German daily "die tageszeitung" (taz). The approach can be used more generally as a means of distinguishing and classifying language corpora of different genres.
This paper provides an overview of current research on a hybrid and robust parsing architecture for the morphological, syntactic, and semantic annotation of German text corpora. The novel contribution of this research lies not in the individual parsing modules, each of which relies on state-of-the-art algorithms and techniques. Rather, what is new about the present approach is the combination of these modules into a single architecture. This combination provides a means to significantly optimize the performance of each component, resulting in an increased accuracy of annotation.
This paper reports on the SYN-RA (SYNtax-based Reference Annotation) project, an on-going project of annotating German newspaper texts with referential relations. The project has developed an inventory of anaphoric and coreference relations for German in the context of a unified, XML-based annotation scheme for combining morphological, syntactic, semantic, and anaphoric information. The paper discusses how this unified annotation scheme relates to other formats currently discussed in the literature, in particular the annotation graph model of Bird and Liberman (2001) and the pie-in-the-sky scheme for semantic annotation.
The purpose of this paper is to describe recent developments in the morphological, syntactic, and semantic annotation of the TüBa-D/Z treebank of German. The TüBa-D/Z annotation scheme is derived from the Verbmobil treebank of spoken German [4, 10], but has been extended along various dimensions to accommodate the characteristics of written texts. TüBa-D/Z uses as its data source the "die tageszeitung" (taz) newspaper corpus. The Verbmobil treebank annotation scheme distinguishes four levels of syntactic constituency: the lexical level, the phrasal level, the level of topological fields, and the clausal level. The primary ordering principle of a clause is the inventory of topological fields, which characterize the word order regularities among different clause types of German and which are widely accepted among descriptive linguists of German [3, 6]. The TüBa-D/Z annotation relies on a context-free backbone (i.e. proper trees without crossing branches) of phrase structure combined with edge labels that specify the grammatical function of the phrase in question. The syntactic annotation scheme of the TüBa-D/Z is described in more detail in [12, 11]. TüBa-D/Z currently comprises approximately 15 000 sentences, with approximately 7 000 further sentences in the correction phase. The latter will be released along with an updated version of the existing treebank before the end of this year. The treebank is available in an XML format, in the NEGRA export format [1], and in the Penn treebank bracketing format. The XML format contains all types of information described above, the NEGRA export format contains all sentence-internal information, while the Penn treebank format includes only those layers of information that can be expressed as pure tree structures. Over the course of the last year, more fine-grained linguistic annotations have been added along the following dimensions:
1. the basic Stuttgart-Tübingen tagset (STTS) [9] labels have been enriched with relevant features of inflectional morphology,
2. named entity information has been encoded as part of the syntactic annotation, and
3. a set of anaphoric and coreference relations has been added to link referentially dependent noun phrases.
In the following sections, we will describe each of these innovations in turn and demonstrate how the additional annotations can be incorporated into one comprehensive annotation scheme.
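To make the idea of one comprehensive scheme concrete, the following sketch combines several annotation layers (POS tag plus inflectional morphology, named entities, coreference links) on one token sequence in XML. The element and attribute names are hypothetical illustrations, not the actual TüBa-D/Z format or DTD.

```python
# Sketch of combining multiple annotation layers in a single XML
# scheme: POS + morphology as attributes, with optional named-entity
# and coreference attributes on the same <word> elements.
# All element/attribute names are hypothetical, not TüBa-D/Z's.
import xml.etree.ElementTree as ET

def build_sentence(tokens):
    """tokens: list of dicts with 'form', 'pos', 'morph',
    and optional 'ne' (entity type) and 'coref' (antecedent id)."""
    sent = ET.Element("sentence")
    for i, tok in enumerate(tokens):
        w = ET.SubElement(sent, "word", id=f"w{i}",
                          pos=tok["pos"], morph=tok["morph"])
        w.text = tok["form"]
        if "ne" in tok:            # named-entity layer
            w.set("ne", tok["ne"])
        if "coref" in tok:         # anaphoric link to an antecedent
            w.set("coref", tok["coref"])
    return sent

# Hypothetical example: a pronoun linked back to a proper name.
sent = build_sentence([
    {"form": "Maria", "pos": "NE",    "morph": "nsf", "ne": "PER"},
    {"form": "sieht", "pos": "VVFIN", "morph": "3sis"},
    {"form": "sie",   "pos": "PPER",  "morph": "asf", "coref": "w0"},
])
xml_string = ET.tostring(sent, encoding="unicode")
```

The design point is that each layer remains optional and independently queryable, while all layers share the same token ids, so sentence-internal and cross-sentence relations can coexist in one file.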
Part-of-speech tagging is generally performed with Markov models based on bigrams or trigrams. While Markov models concentrate strongly on the left context of a word, many languages require the inclusion of the right context for correct disambiguation. We show for German that the best results are reached by a combination of left and right context. If only left context is available, then changing the direction of analysis and going from right to left improves the results. In a version of MBT (Daelemans et al., 1996) with default parameter settings, the inclusion of the right context improved POS tagging accuracy from 94.00% to 96.08%, thus corroborating our hypothesis. The version with optimized parameters reaches 96.73%.
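The contrast between left-only and symmetric context can be sketched as a feature-extraction step: for each focus word, collect a window of neighboring words as features for the classifier. This is an illustrative sketch only, not the MBT implementation; the boundary symbol and window sizes are assumptions.

```python
# Sketch of context-window feature extraction for a memory-based
# tagger: left-only context vs. a window that also includes the
# right context. Illustrative; not the MBT implementation.

def windows(words, left=2, right=0):
    """For each position, return (context features, focus word).
    left/right give the number of neighboring words included;
    '_' is an assumed sentence-boundary symbol."""
    pad = ["_"]
    padded = pad * left + words + pad * right
    feats = []
    for i, w in enumerate(words):
        j = i + left
        ctx = tuple(padded[j - left:j] + padded[j + 1:j + 1 + right])
        feats.append((ctx, w))
    return feats

sent = ["die", "Katze", "schläft"]
left_only = windows(sent, left=2, right=0)   # Markov-style left context
both      = windows(sent, left=1, right=1)   # symmetric window
```

Reversing the direction of analysis, as the abstract describes for the left-only case, amounts to running the same extraction over the reversed sentence, so the "left" context then supplies what was originally the right neighbor.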
This paper addresses the problem of constraints for relative quantifier scope, in particular in inverse linking readings where certain scope orders are excluded. We show how to account for such restrictions in the Tree Adjoining Grammar (TAG) framework by adopting a notion of flexible composition. In the semantics we use for TAG, we introduce quantifier sets that group quantifiers that are "glued" together in the sense that no other quantifier can scopally intervene between them. The flexible composition approach allows us to obtain the desired quantifier sets and thereby the desired constraints for quantifier scope.
For a good decade now, Germany has been waiting: waiting for literature, for the great Berlin novel, for the great post-reunification novel. And despite various novels that have taken reunification and Berlin as their subject, whether by Günter Grass or Thomas Brussig, the waiting continues; apparently no author can get it right, with entertaining storytelling demanded on the one hand, or a portrayal at the height of modern narrative art on the other. But perhaps the alternative is falsely posed: could not an artfully written novel with precise and varied language and sophisticated narrative structures also be entertaining? After all, Döblin's by no means simple novel "Berlin Alexanderplatz" is also a pleasure to read, comparable to Joyce's "Ulysses" or Pynchon's "Gravity's Rainbow". Now, such novels are hard to repeat, since any imitation of the style would fall under suspicion of being plagiarism or copy. Something similar would thus always be something different: novel, artificial, and in that respect a more precise image of its time than the multitude of plain novels that tell of Berlin or reunification. Recently, voices have been multiplying in the German feuilleton that discern a certain, corresponding art of storytelling in Ulrich Peltzer, which is why the opportunity is taken here to walk through his three most recent publications ["Stefan Martinez", "Alle oder keiner", "Bryant Park"] in order to trace their development, with one question in the back of the mind: might one of the long-awaited great Berlin novels already be at hand here?
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of i) the string languages it generates (weak generative capacity (WGC)) or ii) the tree languages it generates (strong generative capacity (SGC)). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic, since the sets of trees a grammar formalism for natural languages should be able to generate are difficult to determine. The concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate-argument dependencies as a criterion of adequacy for natural languages.