Distributional approximations to lexical semantics are very useful not only for creating lexical semantic resources (Kilgarriff et al., 2004; Snow et al., 2006), but also when applied directly in tasks that benefit from broad-coverage semantic knowledge, such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004), or semantic role labeling (Gordon and Swanson, 2007). We present a model built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, applying singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
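The decorrelation step mentioned above can be illustrated with a minimal sketch: the toy co-occurrence counts, the choice of k, and the cosine comparison below are all illustrative assumptions, not the abstract's actual model, which is built from Web-scale counts.

```python
import numpy as np

# Hypothetical toy word-by-context co-occurrence matrix
# (rows: words, columns: context features).
counts = np.array([
    [4.0, 3.0, 0.0, 1.0],
    [3.0, 4.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Singular value decomposition; keeping only the top-k dimensions
# decorrelates the feature space and discards directions dominated
# by skewed topic distributions.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)
k = 2
reduced = U[:, :k] * s[:k]  # k-dimensional word vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words 0 and 1 share contexts, so their reduced vectors end up
# closer to each other than to words 2 and 3.
print(cosine(reduced[0], reduced[1]) > cosine(reduced[0], reduced[2]))
```

In a real setting the matrix would hold millions of word-feature counts and k would be chosen in the hundreds; the mechanics of the truncation are the same.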
Parsing coordinations
(2009)
The present paper is concerned with statistical parsing of constituent structures in German. It presents four experiments aimed at improving the parsing of coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser with gold scopes for each conjunct, 3) reranking the parser output for all conjunct scopes that are permissible with regard to clause structure, and 4) reranking a combination of the parses from experiments 1 and 3. The experiments show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses increases the F-score from 69.76 for the baseline to 74.69. While this F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment yields higher recall (75.48% vs. 73.69%) and the third higher precision (75.43% vs. 73.26%). Combining the two methods yields the best result, an F-score of 76.69.