Bochumer linguistische Arbeitsberichte : BLA
Hrsg.: Stefanie Dipper ; Björn Rothstein
18
The Shared Task on Source and Target Extraction from Political Speeches (STEPS) first ran in 2014 and is organized by the Interest Group on German Sentiment Analysis (IGGSA). This volume presents the proceedings of the workshop of the second iteration of the shared task. The workshop was held at KONVENS 2016 at Ruhr-University Bochum on September 22, 2016.
As in the first edition, the main focus of STEPS was on fine-grained sentiment analysis; the task offered a full task as well as two subtasks for the extraction of Subjective Expressions and/or their respective Sources and Targets.
In order to make the task more accessible, the annotation schema was revised for this year’s edition and an adjudicated gold standard was used for the evaluation. In contrast to the pilot task, this iteration provided training data for the participants, opening the Shared Task for systems based on machine learning approaches.
The gold standard as well as the evaluation tool have been made publicly available to the research community via the STEPS website.
We would like to thank the GSCL for their financial support in annotating the 2014 test data, which were available as training data in this iteration. A special thanks also goes to Stephanie Köser for her support in preparing and carrying out the annotation of this year's test data. Finally, we would like to thank all the participants for their contributions and discussions at the workshop.
17
NLP4CMC III : 3rd workshop on natural language processing for computer-mediated communication
(2016)
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and of the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information that are directly or indirectly provided in the blog or can be derived by means of statistical analysis or computational tools. At this point, only directly available information (e.g., the title of the blog post, the name of the blogger) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions, as well as for the development of NLP methods and techniques (e.g., for authorship detection).
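The discourse structure described above (blog posts with comments, linked to bloggers and commentators) can be sketched as a minimal data model. All class and field names below are illustrative assumptions, not the project's actual annotation schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Comment:
    author: str  # commentator name (directly available information)
    text: str
    replies: List["Comment"] = field(default_factory=list)  # nested discussion thread

@dataclass
class BlogPost:
    title: str    # directly available, e.g. from the SciLogs page
    blogger: str  # name of the blogger
    text: str
    comments: List[Comment] = field(default_factory=list)

# A toy instance: one post with a threaded comment,
# where the blogger replies to a commentator.
post = BlogPost(
    title="Example post",
    blogger="Blogger A",
    text="...",
    comments=[
        Comment(
            author="Commentator B",
            text="...",
            replies=[Comment(author="Blogger A", text="...")],
        )
    ],
)
print(len(post.comments))  # 1
```

Indirectly available or computed information (the second and third feature types mentioned above) could later be added as further fields on these classes.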
13
This paper deals with spelling normalization of historical texts with regard to further processing with modern part-of-speech taggers. Different methods for this task are presented and evaluated on a set of historical German texts from the 15th–18th century, and specific problems inherent to the processing of historical data are discussed. A chain combination using word-based and character-based techniques is shown to be best for normalization, while POS tagging of normalized data is shown to benefit from ignoring punctuation marks. Using these techniques, when 500 manually normalized tokens are used as training data for the normalization, the tagging accuracy of a manuscript from the 15th century can be raised from 28.65% to 76.27%.
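The chain combination of word-based and character-based techniques can be illustrated with a minimal sketch: a memorized word-level lookup (as could be learned from the manually normalized training tokens) is tried first, with a character-level rewrite fallback for unseen tokens. The lexicon entries and rewrite rules below are invented for illustration and are not taken from the paper:

```python
# Word-based component: an exact mapping from historical to modern
# spellings, as could be memorized from manually normalized tokens.
WORD_LOOKUP = {"vnnd": "und", "jn": "in"}

# Character-based component: context-free rewrite rules applied at the
# word start; a crude stand-in for a learned character-level model.
# (v -> u and j -> i are common early German spelling correspondences,
# but these particular rules are only illustrative.)
CHAR_RULES = [("v", "u"), ("j", "i")]

def normalize(token: str) -> str:
    token_lower = token.lower()
    # 1) Word-based: an exact match in the learned lexicon wins.
    if token_lower in WORD_LOOKUP:
        return WORD_LOOKUP[token_lower]
    # 2) Character-based fallback for tokens unseen in training.
    for old, new in CHAR_RULES:
        if token_lower.startswith(old):
            return new + token_lower[1:]
    return token_lower

print(normalize("vnnd"))  # -> "und" (word-based lookup)
print(normalize("vmb"))   # -> "umb" (character-based fallback)
```

The normalized tokens would then be passed to a modern POS tagger, with punctuation marks optionally filtered out, as the evaluation above suggests.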
9
The comprehension and production of single words involve a variety of processing stages. Which stages need to be accessed differs depending on whether objects (pictures in an experimental environment) or words are to be named. Naming tasks are often employed in psycholinguistic studies in order to provide insight into the function of mental processes during word production. Differences in naming latencies and naming accuracy between words suggest that the retrieval of some lexical items is easier or more difficult than that of others. The relative ease of word retrieval has been found to be strongly influenced by properties of these words, such as familiarity and written or spoken frequency.
Exploring which variables affect naming speed and accuracy will yield more information about the storage and processing of words in general. If a variable has a discernible effect on a specific experimental task, the localization of this effect is of interest for psycholinguistic research, because finding the locus of the effect can help specify models of speech production with respect to which processes occur at which stage of lexical retrieval. Additionally, identifying which variables influence language processing is essential in order to control for these variables when necessary. Otherwise, variance in naming latencies could not be attributed to the variable under test, because other, uncontrolled variables could have altered the results.
5
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories as to how this process works. We give a short overview of the most influential theories before focusing on the most widely accepted one, the Perceptual Loop Theory of monitoring by Levelt (1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Based on data from both experimental and aphasiological studies, we develop a model, building on Levelt (1983), which suggests that internal monitoring might in fact make use of both perception and production mechanisms.
3
The article discusses the methodology adopted for a cross-linguistic synchronic and diachronic corpus study on indefinites. The study covered five indefinite expressions, each in a different language. The main goal of the study was to verify the distribution of these indefinites synchronically and to trace their historical development. The methodology we used is a form of functional labeling which combines both context (syntax) and meaning (semantics), taking Haspelmath's (1997) functional map as a starting point. In the article we identify Haspelmath's functions with logico-semantic interpretations and propose a binary branching decision tree assigning each instance of an indefinite exactly one function in the map.
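A binary branching decision tree of this kind can be sketched as a cascade of yes/no questions over logico-semantic context features, each instance ending in exactly one function of the map. The feature names and branch order below are illustrative assumptions, not the article's actual tree; only the function labels themselves come from Haspelmath's (1997) map:

```python
# Each occurrence of an indefinite is described by binary context
# features; the tree asks one yes/no question per node and terminates
# in exactly one function label. Because the branches are exhaustive
# and mutually exclusive, every instance receives a single function.

def assign_function(ctx: dict) -> str:
    if ctx.get("in_scope_of_negation"):
        return "direct negation"
    if ctx.get("in_question"):
        return "question"
    if ctx.get("in_conditional"):
        return "conditional"
    if ctx.get("speaker_knows_referent"):
        return "specific known"
    if ctx.get("referent_exists"):
        return "specific unknown"
    return "irrealis non-specific"

# Example: an indefinite occurring inside a polar question.
print(assign_function({"in_question": True}))  # -> "question"
```

Ordering the questions fixes the priority among contexts (here, negation is checked before question marking), which is what guarantees that overlapping contexts still yield a unique label.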