Linguistik
New evidence is provided for a grammatical principle that singles out contrastive focus (Rooth 1996; Truckenbrodt 1995) and distinguishes it from discourse-new “informational” focus. Since the prosody of discourse-given constituents may also be distinguished from discourse-new, a three-way distinction in representation is motivated. It is assumed that an F-feature marks just contrastive focus (Jackendoff 1972; Rooth 1992), and that a G-feature marks discourse-given constituents (Féry and Samek-Lodovici 2006), while discourse-new is unmarked. A crucial argument for G-marking comes from second occurrence focus (SOF) prosody, which arguably derives from a syntactic representation where SOF is both F-marked and G-marked. This analysis relies on a new G-Marking Condition specifying that a contrastive focus may be G-marked only if the focus semantic value of its scope is discourse-given, i.e. only if the contrast itself is given.
This article takes stock of the basic notions of Information Structure (IS). It first provides a general characterization of IS — following Chafe (1976) — within a communicative model of Common Ground (CG), which distinguishes between CG content and CG management. IS is concerned with those features of language that relate to the local CG. Second, this paper defines and discusses the notions of Focus (as indicating alternatives) and its various uses, Givenness (as indicating that a denotation is already present in the CG), and Topic (as specifying what a statement is about). It also proposes a new notion, Delimitation, which comprises contrastive topics and frame setters, and indicates that the current conversational move does not entirely satisfy the local communicative needs. Finally, it points out that rhetorical structuring partly belongs to IS.
We investigate methods to improve the recall in coreference resolution by also trying to resolve those definite descriptions where no earlier mention of the referent shares the same lexical head (coreferent bridging). The problem, which is notably harder than identifying coreference relations among mentions which have the same lexical head, has been tackled with several rather different approaches, and we attempt to provide a meaningful classification along with a quantitative comparison. Based on the different merits of the methods, we discuss possibilities to improve them and show how they can be effectively combined.
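The distinction the abstract draws can be made concrete with a minimal sketch (the helper names here are hypothetical, not from the paper): a naive same-head baseline links an anaphor to earlier mentions sharing its lexical head, and fails exactly on the coreferent-bridging cases the article targets, where no earlier mention of the referent shares that head.

```python
# Hypothetical same-head coreference baseline. It resolves mentions whose
# lexical head matches an earlier mention, but returns nothing for
# coreferent-bridging cases ("the vehicle" referring back to "a red car").

def lexical_head(mention: str) -> str:
    """Crude approximation: treat the last token as the head noun."""
    return mention.lower().split()[-1]

def same_head_antecedents(anaphor: str, candidates: list[str]) -> list[str]:
    """Return earlier mentions that share the anaphor's lexical head."""
    head = lexical_head(anaphor)
    return [c for c in candidates if lexical_head(c) == head]

earlier = ["a red car", "the driver"]
# The same-head baseline finds an antecedent for "the car" ...
print(same_head_antecedents("the car", earlier))      # ['a red car']
# ... but not for "the vehicle": a coreferent-bridging case.
print(same_head_antecedents("the vehicle", earlier))  # []
```

Recall improvements of the kind the paper surveys come precisely from adding knowledge (lexical, distributional, or pattern-based) that can bridge the gap left empty in the second call.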
The Conference on Computational Natural Language Learning features a shared task, in which participants train and test their learning systems on the same data sets. In 2007, as in 2006, the shared task has been devoted to dependency parsing, this year with both a multilingual track and a domain adaptation track. In this paper, we define the tasks of the different tracks and describe how the data sets were created from existing treebanks for ten languages. In addition, we characterize the different approaches of the participating systems, report the test results, and provide a first analysis of these results.
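The shared-task data sets described above are distributed in the tab-separated 10-column CoNLL format (ID, FORM, LEMMA, CPOSTAG, POSTAG, FEATS, HEAD, DEPREL, PHEAD, PDEPREL), where HEAD 0 marks the root. As a minimal sketch, a reader for one sentence block might look like this (the toy sentence and function name are illustrative, not task data):

```python
# Minimal reader for one sentence in the tab-separated 10-column CoNLL
# dependency format; extracts (FORM, HEAD, DEPREL) per token.

sample = (
    "1\tJohn\tjohn\tN\tNNP\t_\t2\tSUBJ\t_\t_\n"
    "2\tsleeps\tsleep\tV\tVBZ\t_\t0\tROOT\t_\t_\n"
)

def read_conll(block: str) -> list[tuple[str, int, str]]:
    """Parse a sentence block into (form, head, deprel) triples."""
    rows = []
    for line in block.strip().splitlines():
        cols = line.split("\t")
        # cols[1] = FORM, cols[6] = HEAD (0 = root), cols[7] = DEPREL
        rows.append((cols[1], int(cols[6]), cols[7]))
    return rows

for form, head, deprel in read_conll(sample):
    print(form, head, deprel)
```

Systems in both tracks consume and produce this column layout, which is what makes training and testing on the same data sets directly comparable across participants.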