Linguistics classification
Document Type
- Preprint (48)
- Conference Proceeding (27)
- Part of a Book (9)
- Article (5)
- Book (5)
- Working Paper (4)
Language
- English (98)
Has Fulltext
- yes (98)
Is part of the Bibliography
- no (98)
Keywords
- Computerlinguistik (33)
- Japanisch (15)
- Deutsch (13)
- Syntaktische Analyse (10)
- Maschinelle Übersetzung (8)
- Multicomponent Tree Adjoining Grammar (8)
- Lexicalized Tree Adjoining Grammar (5)
- Semantik (5)
- Satzanalyse (4)
- Transkription (4)
- German (3)
- Grammatik (3)
- Implementierung <Informatik> (3)
- Parser (3)
- Range Concatenation Grammar (3)
- Software (3)
- Syntax (3)
- Tree Adjoining Grammar (3)
- Dialog (2)
- Englisch (2)
- Frage (2)
- Grammaires d’Arbres Adjoints (2)
- Höflichkeitsform (2)
- Kongress (2)
- Korpus <Linguistik> (2)
- MCTAG (2)
- Numerale (2)
- Suchmaschine (2)
- Tree Adoining Grammar (2)
- Tree Description Grammar (2)
- speech tagging (2)
- Akustische Phonetik (1)
- Arabisch (1)
- Artikulatorische Phonetik (1)
- Auditive Phonetik (1)
- Automatentheorie (1)
- Automatische Spracherkennung (1)
- Automatische Sprachproduktion (1)
- Benutzeroberfläche (1)
- Chinesisch (1)
- Computersimulation (1)
- Coreference annotation (1)
- Datenstruktur (1)
- Description Tree Grammar (1)
- Ellipse <Linguistik> (1)
- Experimentelle Phonetik (1)
- Formale Sprache (1)
- Formalismes syntaxiques (1)
- Fremdsprache (1)
- Fuzzy-Logik (1)
- Gebärdensprache (1)
- Generic NLP Architecture (1)
- Gesprochene Sprache (1)
- Grammatiktheorie (1)
- HPSG Parsing (1)
- Head-driven phrase structure grammar (1)
- IE (1)
- Kategorialgrammatik (1)
- Kontextfreie Grammatik (1)
- Korean (1)
- Koreanisch (1)
- LTAG (1)
- Lautsprache (1)
- Lexical Resource Semantics (1)
- Lexical Ressource Semantics (1)
- Lexikalisch funktionale Grammatik (1)
- Minimal Recursion Semantics (1)
- Mittelchinesisch (1)
- Morphologie (1)
- Numerus (1)
- Online-Publikation (1)
- Ontologie <Wissensverarbeitung> (1)
- Partikel (1)
- Präposition (1)
- Romanian (1)
- Rumänisch (1)
- SYNtax-based Reference Annotation (1)
- Satzanlyse (1)
- Shallow NLP (1)
- Simple Range Concatenation Grammar (1)
- Sloppiness (1)
- Speicherverwaltung (1)
- Spracherwerb (1)
- Sprachstatistik (1)
- Syntactic formalisms (1)
- TUSNELDA (1)
- Tarragona <2008> (1)
- Tree-Adjoining Grammar (1)
- Tübingen <2007> (1)
- Unordered Vector Grammar with Dominance Link (1)
- Unterspezifikation (1)
- Vagheit (1)
- Vagueness (1)
- Visualisierung (1)
- Word Sense Disambiguation (1)
- XML (1)
- allemand (1)
- brouillage d’arguments (1)
- chunk parsing (1)
- computational semantics (1)
- coréen (1)
- formalismes grammaticaux (1)
- german (1)
- grammaires d’arbres (1)
- grammar formalism (1)
- lexicalized tree-adjoining grammar (1)
- memory-based learning (1)
- metagrammars (1)
- multicomponent rewriting (1)
- métagrammaires (1)
- ordre des mots (1)
- quantifier scope (1)
- robust parsing (1)
- role labeling (1)
- scrambling (1)
- similarity-based learning (1)
- time annotation (1)
- tree-based grammars (1)
- treebanking (1)
- underspecification (1)
- word order (1)
Institute
- Extern (80)
Tree-local MCTAG with shared nodes: an analysis of word order variation in German and Korean
(2004)
Tree Adjoining Grammars (TAG) are known not to be powerful enough to deal with scrambling in free word order languages. The TAG-variants proposed so far in order to account for scrambling are not entirely satisfying. Therefore, an alternative extension of TAG is introduced based on the notion of node sharing. Considering data from German and Korean, it is shown that this TAG-extension can adequately analyse scrambling data, also in combination with extraposition and topicalization.
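The adjunction operation that TAG and its extensions build on can be sketched in a few lines. The following is a minimal, illustrative implementation of plain adjunction only (the tree encoding and the example sentence are invented here; the paper's node-sharing extension is not reproduced):

```python
# Minimal sketch of TAG adjunction: trees are (label, children) tuples,
# and an auxiliary tree has exactly one foot node, written e.g. "VP*".

def adjoin(tree, aux, target_label):
    """Adjoin auxiliary tree `aux` at the first node labelled `target_label`."""
    label, children = tree
    if label == target_label:
        # The subtree rooted here moves to the foot node of `aux`.
        return _replace_foot(aux, tree)
    return (label, [adjoin(c, aux, target_label) for c in children])

def _replace_foot(aux, subtree):
    label, children = aux
    if label.endswith("*"):          # foot node, e.g. "VP*"
        return subtree
    return (label, [_replace_foot(c, subtree) for c in children])

def leaves(tree):
    label, children = tree
    if not children:
        return [label]
    return [w for c in children for w in leaves(c)]

# Initial tree for "Peter schlaeft", auxiliary tree for the adverb "oft":
initial = ("S", [("NP", [("Peter", [])]),
                 ("VP", [("V", [("schlaeft", [])])])])
aux = ("VP", [("ADV", [("oft", [])]), ("VP*", [])])

result = adjoin(initial, aux, "VP")
print(" ".join(leaves(result)))   # Peter oft schlaeft
```

The auxiliary tree is spliced in at the VP node, with the original VP subtree lowered to the foot node; scrambling analyses hinge on how such operations are constrained across elementary trees.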
Quantitative evaluation of parsers has traditionally centered around the PARSEVAL measures of crossing brackets, (labeled) precision, and (labeled) recall. However, it is well known that these measures do not give an accurate picture of the quality of a parser's output. Furthermore, we will show that they are especially unsuited for partial parsers. In recent years, research has concentrated on dependency-based evaluation measures. We will show in this paper that such a dependency-based evaluation scheme is particularly suitable for partial parsers. TüBa-D, the treebank used here for evaluation, contains all the necessary dependency information, so the conversion of trees into a dependency structure does not have to rely on heuristics. The dependency representations are therefore not only reliable but also linguistically motivated and can be used for linguistic purposes.
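Dependency-based evaluation can be sketched with the standard attachment scores (the example sentence and labels are invented, not taken from TüBa-D): each token carries a head index and a dependency label, and the unlabelled attachment score (UAS) checks heads only while the labelled attachment score (LAS) also checks the label.

```python
# Sketch of dependency-based evaluation: tokens are (head_index, label).

def attachment_scores(gold, predicted):
    """Return (UAS, LAS) for two equally long token sequences."""
    assert len(gold) == len(predicted)
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, predicted)) / n
    las = sum(g == p for g, p in zip(gold, predicted)) / n
    return uas, las

# "Die Katze schläft", verb as root (head index 0), with one label error:
gold = [(2, "DET"), (3, "SUBJ"), (0, "ROOT")]
pred = [(2, "DET"), (3, "OBJ"),  (0, "ROOT")]

uas, las = attachment_scores(gold, pred)
print(uas, las)   # UAS 1.0, LAS 2/3
```

A partial parser that attaches only some tokens can still be scored token by token under this scheme, which is what makes it attractive for the evaluation described above.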
The purpose of this paper is to describe the TüBa-D/Z treebank of written German and to compare it to the independently developed TIGER treebank (Brants et al., 2002). Both treebanks, TIGER and TüBa-D/Z, use an annotation framework that is based on phrase structure grammar and that is enhanced by a level of predicate-argument structure. The comparison between the annotation schemes of the two treebanks focuses on the different treatments of free word order and discontinuous constituents in German as well as on differences in phrase-internal annotation.
This paper proposes a corpus encoding standard that meets the needs of linguistic research using a variety of linguistic data structures. The standard was developed in SFB 441, a research project at the University of Tuebingen. The principal concern of SFB 441 is the empirical data structures that feed into linguistic theory building. SFB 441 consists of several projects, most of which are building corpora to empirically investigate various linguistic phenomena in various languages (e.g. modal verbs in German, forms of address and politeness in Russian). These corpora will form the components of the "Tuebingen collection of reusable, empirical, linguistic data structures (TUSNELDA)". The TUSNELDA annotation standard aims at providing a uniform encoding scheme for all subcorpora and texts of TUSNELDA such that they can be processed with uniform standardized tools. To guarantee maximal reusability, we use XML for encoding. Previous SGML standards for text encoding were provided by the Text Encoding Initiative (TEI) and the Expert Advisory Group on Language Engineering Standards (Corpus Encoding Standard, CES). The TUSNELDA standard is based on TEI and XCES (the XML version of CES) but takes into account the specific needs of the SFB projects, i.e. the peculiarities of the examined languages and linguistic phenomena.
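The payoff of a uniform XML encoding is that every subcorpus can be queried with the same standard tools. The fragment below is purely illustrative (it mimics a TEI-style word-level markup; it is not the actual TUSNELDA schema) and shows a query for German modal verbs:

```python
# Illustrative TEI-style fragment (invented, not the TUSNELDA schema):
# uniform XML markup lets one standard tool query every subcorpus.
import xml.etree.ElementTree as ET

sample = """
<text lang="de">
  <s id="s1">
    <w pos="VMFIN">kann</w>
    <w pos="PPER">er</w>
    <w pos="VVINF">kommen</w>
  </s>
</text>
"""

root = ET.fromstring(sample)
modal_verbs = [w.text for w in root.iter("w") if w.get("pos") == "VMFIN"]
print(modal_verbs)   # ['kann']
```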
Particles fulfill several distinct, central roles in the Japanese language. They can mark arguments as well as adjuncts, and they can be purely functional or carry semantic content. There is, however, no straightforward mapping from particles to functions: 'ga', for example, can mark the subject, the object, or an adjunct of a sentence. Particles can co-occur. Verbal arguments that could be identified by particles can be dropped from the Japanese sentence. And finally, in spoken language particles are often omitted. A proper treatment of particles is thus necessary to make an analysis of Japanese sentences possible. Our treatment is based on an empirical investigation of 800 dialogues. We set up a type hierarchy of particles motivated by their subcategorizational and modificational behaviour. This type hierarchy is part of the Japanese syntax in VERBMOBIL.
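A type hierarchy with ambiguous lexical entries can be sketched as follows (all class names and lexicon entries are invented for illustration; this is not the VERBMOBIL grammar):

```python
# Hypothetical sketch of a particle type hierarchy: an ambiguous particle
# such as 'ga' is listed under several types in the lexicon.

class Particle:                pass
class CaseParticle(Particle):      pass   # marks verbal arguments
class ModifyingParticle(Particle): pass   # marks adjuncts

lexicon = {
    "ga": [CaseParticle, ModifyingParticle],   # subject, object, or adjunct
    "wo": [CaseParticle],                      # object marker
}

def possible_types(form):
    """Return all particle types a surface form may instantiate."""
    return lexicon.get(form, [])

print([t.__name__ for t in possible_types("ga")])
# ['CaseParticle', 'ModifyingParticle']
```

The point of organising particles as subtypes of a common supertype is that grammar rules can constrain either the general class or a specific subtype, which mirrors the subcategorizational and modificational distinctions mentioned above.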
The ACL 2008 Workshop on Parsing German featured a shared task whose goal was to find reasons for the radically different behavior of parsers on the different treebanks and between constituent and dependency representations. In this paper, we describe the task and the data sets. In addition, we provide an overview of the test results and a first analysis.
The research performed in the DeepThought project aims at demonstrating the potential of deep linguistic processing when combined with shallow methods for robustness. Classical information retrieval is extended by high-precision concept indexing and relation detection. On the basis of this approach, the feasibility of three ambitious applications will be demonstrated, namely: precise information extraction for business intelligence; email response management for customer relationship management; creativity support for document production and collective brainstorming. Common to these applications, and the basis for their development, is the XML-based, RMRS-enabled core architecture framework that will be described in detail in this paper. The framework is not limited to the applications envisaged in the DeepThought project, but can also be employed e.g. to generate and make use of XML standoff annotation of documents and linguistic corpora, and in general for a wide range of NLP-based applications and research purposes.
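The standoff-annotation idea mentioned above keeps annotations in a separate layer that points into the base text by character offsets, so the source document is never modified. The following sketch uses an invented annotation format, not DeepThought's actual RMRS markup:

```python
# Sketch of XML standoff annotation: entities reference the base text
# by character offsets (format invented for illustration).
import xml.etree.ElementTree as ET

text = "Siemens acquired the startup."
standoff = """
<annotations>
  <entity start="0" end="7" type="ORG"/>
  <entity start="21" end="28" type="ORG"/>
</annotations>
"""

root = ET.fromstring(standoff)
spans = [(e.get("type"), text[int(e.get("start")):int(e.get("end"))])
         for e in root.iter("entity")]
print(spans)   # [('ORG', 'Siemens'), ('ORG', 'startup')]
```

Because the offsets live outside the text, several annotation layers (syntax, semantics, named entities) can coexist over the same document without conflicting markup.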
The Conference on Computational Natural Language Learning features a shared task, in which participants train and test their learning systems on the same data sets. In 2007, as in 2006, the shared task has been devoted to dependency parsing, this year with both a multilingual track and a domain adaptation track. In this paper, we define the tasks of the different tracks and describe how the data sets were created from existing treebanks for ten languages. In addition, we characterize the different approaches of the participating systems, report the test results, and provide a first analysis of these results.
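Shared-task dependency data is conventionally distributed one token per line with tab-separated columns. The sketch below reads a simplified version of such a format (the column set is reduced for illustration; the actual CoNLL shared-task format has more fields):

```python
# Sketch of reading CoNLL-style dependency data: one token per line,
# tab-separated columns (here simplified to id, form, head, deprel).

sample = """\
1\tDie\t2\tDET
2\tKatze\t3\tSUBJ
3\tschläft\t0\tROOT
"""

tokens = []
for line in sample.strip().splitlines():
    idx, form, head, deprel = line.split("\t")
    tokens.append({"id": int(idx), "form": form,
                   "head": int(head), "deprel": deprel})

# Head index 0 marks the root of the dependency tree:
root_word = next(t["form"] for t in tokens if t["head"] == 0)
print(root_word)   # schläft
```

Converting ten existing treebanks into one such shared format is what lets every participating system train and test on identical data.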
MED (Media EDitor) is a program designed to facilitate the transcription of digitized sound files into text files. It was written by Hans Drexler and Daan Broeder, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands. [...] The aim of MED is to facilitate the transcription of sound into text using a single program. It works on the principle of the coexistence and interaction of two basic elements, the waveform display window and the text window. [...] This means that you no longer need to use both a sound editor and a word processor at the same time in order to transcribe digitized speech files. Instead, you can directly type the sound you hear (and see) via MED into the text window. Furthermore, you can directly link sound portions of the waveform display window to text portions of the text window, so that you can easily locate and listen to the original source of your transcription once the links have been set. In this way, the waveform display window and the text window virtually interact with each other.
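The linking idea behind such a tool amounts to storing, for each transcript segment, the time span of the audio it was typed from, so the original sound can be located again later. The data structure below is invented for illustration and is not MED's file format:

```python
# Sketch of time-aligned transcription links: each text segment stores
# the start/end time (in seconds) of the audio portion it transcribes.

segments = [
    {"start": 0.00, "end": 1.45, "text": "hello and welcome"},
    {"start": 1.45, "end": 3.10, "text": "to the recording"},
]

def segment_at(time, segments):
    """Return the transcript segment covering a point in time, if any."""
    for seg in segments:
        if seg["start"] <= time < seg["end"]:
            return seg
    return None

print(segment_at(2.0, segments)["text"])   # to the recording
```

Given such links, clicking a stretch of the waveform can jump to the corresponding text, and vice versa, which is exactly the interaction between the two windows described above.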