Linguistik
The database will be based on the results of analyses of relevant large corpora of spoken German. In order to analyse large corpora, however, it is necessary to develop automatic procedures for analysing variation; with traditional manual methods, building a corpus-based database is hardly feasible. The actual variation project was therefore preceded by a small pilot study intended to test the possibilities of automatic analysis. It addressed the question of whether regional varieties of German can be investigated with automatic speech recognition techniques, i.e. whether a reliable transcription of the regional varieties can be produced automatically. This pilot study on automatic transcription drew on SPRAT (Speech Recognition and Alignment Tool), a system already in place at the IDS that is used for alignment (text-to-audio synchronization). Within the pilot study, this system was modified and its automatic transcription was evaluated in a series of tests (cf. section 3). The aim of the present contribution is to present the results of this pilot study. First, however, a short excursus will clarify what kind of system the IDS aligner SPRAT is.
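Evaluating an automatic transcription against a manual reference transcription is commonly done with edit-distance-based measures such as word error rate. The abstract does not specify which metrics the SPRAT evaluation used, so the following is only a minimal illustrative sketch in Python:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: Levenshtein distance over word sequences,
    normalized by the length of the reference transcription."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for the edit distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

A hypothesis with one substituted word out of three, for instance, yields a WER of one third.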
Trubetzkoy's recognition of a delimitative function of phonology, serving to signal boundaries between morphological units, is expressed in terms of alignment constraints in Optimality Theory, where the relevant constraints require specific morphological boundaries to coincide with phonological structure (Trubetzkoy 1936, 1939, McCarthy & Prince 1993). The approach pursued in the present article is to investigate the distribution of phonological boundary signals to gain insight into the criteria underlying morphological analysis. The evidence from English and Swedish suggests that necessary and sufficient conditions for word-internal morphological analysis concern the recognizability of head constituents, which include the rightmost members of compounds and head affixes. The claim is that the stability of word-internal boundary effects in historical perspective cannot in general be sufficiently explained in terms of memorization and imitation of phonological word form. Rather, these effects indicate a morphological parsing mechanism based on the recognition of word-internal head constituents. Head affixes can be shown to contrast systematically with modifying affixes with respect to syntactic function, semantic content, and prosodic properties. That is, head affixes, which cannot be omitted, often lack inherent meaning and have relatively unmarked boundaries, which can be obscured entirely under specific phonological conditions. By contrast, modifying affixes, which can be omitted, consistently have inherent meaning and have stronger boundaries, which resist prosodic fusion in all phonological contexts. While these correlations are hardly specific to English and Swedish, it remains to be investigated to what extent they hold cross-linguistically. The observation that some of the constituents identified on the basis of prosodic evidence lack inherent meaning raises the issue of compositionality.
I will argue that certain systematic aspects of word meaning cannot be captured with reference to the syntagmatic level, but require reference to the paradigmatic level instead. The assumption is then that there are two dimensions of morphological analysis: syntagmatic analysis, which centers on the criteria for decomposing words in terms of labelled constituents, and paradigmatic analysis, which centers on the criteria for establishing relations among (whole) words in the mental lexicon. While meaning is intrinsically connected with paradigmatic analysis (e.g. base relations, oppositeness), it is not essential to syntagmatic analysis.
The aim of the subproject is the thematic indexing of the corpora, both to allow the compilation of topic-specific virtual subcorpora and to disambiguate readings, for example, on the basis of subject-related frequency distributions. The starting point is the construction of a taxonomy of subject-area topics. This is done in a semi-automatic procedure that combines text mining (document clustering) with the manual mapping of clusters onto an external ontology. It is argued that the resulting taxonomy is both more intuitive and more objective than existing, purely manual approaches. It is equally suitable for manual and for automatic classification; for the latter, a Naive Bayes text classifier is motivated and evaluated on a classified corpus of almost two billion words.
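A Naive Bayes text classifier of the kind motivated above fits in a few lines. The sketch below is a minimal multinomial model with add-one (Laplace) smoothing, not the evaluated system itself; the class names and toy training data are invented:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesTextClassifier:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""

    def fit(self, documents, labels):
        self.class_counts = Counter(labels)          # documents per class
        self.word_counts = defaultdict(Counter)      # word frequencies per class
        self.vocab = set()
        for doc, label in zip(documents, labels):
            for word in doc.split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        self.total_docs = len(documents)
        return self

    def predict(self, document):
        scores = {}
        for label, n_docs in self.class_counts.items():
            score = math.log(n_docs / self.total_docs)       # log prior
            total = sum(self.word_counts[label].values())
            for word in document.split():
                # add-one smoothed log likelihood of the word given the class
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)
```

Smoothing matters here because a topic taxonomy induces many classes, and most words never occur in most classes.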
Dictionaries contain lexicographic data whose occurrence is restricted to certain geographical areas, subject fields, professions, etc. It is part of the duties of the lexicographer to give an account of such deviations to ensure a successful retrieval of the information on the part of the user. This contribution presents a discussion of labelling issues in the Dictionnaire Français–Mpongwé. Although the main focus is on the presentation of different types of labelling as well as problems in labelling, textual condensation procedures and mediostructural representations (together with some aspects of the user perspective) are also critically evaluated. It is shown that these procedures reveal some inconsistencies which are not accounted for in the outer texts (front matter and back matter texts) of the dictionary. Finally, suggestions are made for the improvement of the access structure of this dictionary.
Humanity can be divided into two groups: those who, in unfamiliar places, verbally adapt and absorb the local language as if it had always been their own, and those who keep their own language in a new place and let themselves be influenced only barely, or not at all, by their new linguistic surroundings. Hence one usually notices whether someone in Solothurn, say, has kept their Bern, Basel, or Thurgau dialect, or one observes, perhaps with a certain scepticism, that a newcomer slowly adopts the Solothurn dialect and gradually seems to unlearn their own. ...
This article combines a brief introduction to a particular philosophical theory of "time" with a demonstration of how this theory has been implemented in a Literary Studies oriented Humanities Computing project. The aim of the project was to create a model of text-based time cognition and to design customized markup and text analysis tools that help to understand "how time works": more precisely, how narratively organised and communicated information motivates readers to generate the mental image of a chronologically organized world. The approach presented is based on the unitary model of time originally proposed by McTaggart, who distinguished between two perspectives on time, the so-called A- and B-series. The first step towards a functional Humanities Computing implementation of this theoretical approach was the development of TempusMarker, a software tool providing automatic and semi-automatic markup routines for the tagging of temporal expressions in natural language texts. In the second step we discuss the principles underlying TempusParser, an analytical tool that can reconstruct the temporal order of events through an algorithm-driven process of analysis and recombination of textual segments, during which the "time stamp" of each segment, as indicated by the temporal tags, is interpreted.
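Automatic markup of temporal expressions of the kind TempusMarker performs can be illustrated with a simple pattern-based tagger. The patterns and the `<time>` tag below are invented for illustration only; TempusMarker's actual rule set and tag inventory are not reproduced here:

```python
import re

# A deliberately tiny, illustrative set of patterns for English
# temporal expressions; a real rule set would be far richer.
PATTERNS = [
    r"\b(?:yesterday|today|tomorrow)\b",
    r"\bin \d{4}\b",
    r"\b(?:before|after|while|until)\b",
]

def tag_temporal(text):
    """Wrap every match of the temporal patterns in <time>...</time> tags."""
    combined = re.compile("|".join(PATTERNS), re.IGNORECASE)
    return combined.sub(lambda m: "<time>" + m.group(0) + "</time>", text)
```

An analytical tool in the spirit of TempusParser could then read such tags back and use them to reorder the tagged segments chronologically.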
This paper provides an analysis of an alternative strategy to A′-movement in both German and Dutch where the extracted constituent is preceded by a preposition and a coreferential pronoun appears in the extraction site. The construction has properties of both binding and movement: whereas reconstruction effects suggest movement out of the embedded clause, there is strong evidence that the operator constituent is linked to an A-position in the matrix clause. This paradox is resolved by assuming a Control-like approach that involves movement from the embedded clause into a theta-position in the matrix clause with subsequent short A′-movement. The coreferential pronoun is interpreted as a resumptive heading a Big-DP which hosts the antecedent in its specifier.
Fronting of a non-finite VP across a finite main verb, akin to German "VP topicalization", can also be found in Czech and Polish. The paper discusses evidence from large corpora for this process and some of its properties, both syntactic and information-structural. Based on this case, criteria for more user-friendly searching and retrieval of corpus data in syntactic research are developed.
This paper describes the creation and preparation of TUSNELDA, a collection of corpus data built for linguistic research. The collection contains a number of linguistically annotated corpora which differ in various respects such as language, text type and data type, encoded annotation levels, and the linguistic theories underlying the annotation. The paper focuses on this variation on the one hand and, on the other, on how these heterogeneous data are integrated into a single resource.
Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several “semiotic layers”, modalities of information such as syntax, discourse structure, gesture, and intonation. We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices. Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting.
Elke Kasimir's paper (in this volume) argues against employing the notion of Givenness in the explanation of accent assignment. I will claim that the arguments against Givenness put forward by Kasimir are inconclusive because they beg the question of the role of Givenness. It is concluded that, more generally, arguments against Givenness as a diagnostic for information-structural partitions should not be accepted offhand, since the notion of Givenness of discourse referents is (a) theoretically simple, (b) readily observable and quantifiable, and (c) cognitively significant.
The semantics of ellipsis
(2005)
There are four phenomena that are particularly troublesome for theories of ellipsis: the existence of sloppy readings when the relevant pronouns cannot possibly be bound; an ellipsis being resolved in such a way that an ellipsis site in the antecedent is not understood in the way it was there; an ellipsis site drawing material from two or more separate antecedents; and ellipsis with no linguistic antecedent. These cases are accounted for by means of a new theory that involves copying syntactically incomplete antecedent material, together with an analysis of silent VPs and NPs that makes them higher-order definite descriptions that can be bound into.
This paper discusses the use of XSLT stylesheets as a filtering mechanism for refining the results of user queries on treebanks. The discussion is within the context of the TIGER treebank, the associated search engine and query language, but the general ideas can apply to any search engine for XML-encoded treebanks. It will be shown that important classes of linguistic phenomena can be accessed by applying relatively simple XSLT templates to the output of a query, effectively simulating the universal quantifier for a subset of the query language.
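The universal-quantifier filtering described above can be sketched without XSLT machinery: a match survives only if every relevant node in it satisfies a condition (in XSLT this would be a template matching, say, `match[not(t[@pos != 'NN'])]`). The sketch below is written in Python over an invented, simplified result format rather than TIGER's actual output, but the filtering logic is the same:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified query-result format: each <match> holds
# the terminal nodes <t> of one matched subtree.
RESULTS = """<matches>
  <match id="m1"><t pos="NN"/><t pos="NN"/></match>
  <match id="m2"><t pos="NN"/><t pos="VVFIN"/></match>
</matches>"""

def keep_if_all(xml_text, tag, attr, value):
    """Keep only those <match> elements in which EVERY descendant
    with the given tag carries attr=value (universal quantification)."""
    root = ET.fromstring(xml_text)
    kept = [m for m in root.findall("match")
            if all(t.get(attr) == value for t in m.iter(tag))]
    return [m.get("id") for m in kept]
```

Applied to the toy result set, only the match whose terminals are all tagged NN survives the filter.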
In order to investigate the empirical properties of focus, it is necessary to diagnose focus (or: "what is focused") in particular linguistic examples. It is often taken for granted that the application of a single diagnostic tool, the so-called question-answer test, which roughly says that whatever a question asks for is focused in the answer, is a fool-proof test for focus. This paper investigates one class of examples where such uncritical belief in the question-answer test has led to the assumption of rather complex focus projection rules: in these examples, pitch accent placement has been claimed to depend on whether certain parts of the focused constituents are given or not. It is demonstrated that such focus projection rules are unnecessarily complex and in turn require the assumption of unnecessarily complicated meaning rules, not to mention the difficulty of giving a precise semantic/pragmatic definition of the allegedly involved givenness property. For the sake of the argument, an alternative analysis is put forward which relies solely on alternative sets, following Mats Rooth's work, and avoids any recourse to givenness. As it turns out, this alternative analysis is not only simpler but also makes better predictions in a critical case.
This paper investigates the structural properties of morphosyntactically marked focus constructions, focussing on the often-neglected non-focal sentence part in African tone languages. Based on new empirical evidence from five Gur and Kwa languages, we claim that these focus expressions have to be analysed as biclausal constructions even though they do not represent clefts containing restrictive relative clauses. First, we relativize the partly overgeneralized assumptions about structural correspondences between the out-of-focus part and relative clauses, and second, we show that our data do in fact support the hypothesis of a clause-coordinating pattern as present in clause sequences in narration. It is argued that we are dealing with a non-accidental, systematic feature and that grammaticalization may conceal such basic narrative structures.
In this paper, we present the Multiple Annotation approach, which solves two problems: the problem of annotating overlapping structures, and the problem that occurs when documents should be annotated according to different, possibly heterogeneous tag sets. This approach has many advantages: it is based on XML, the modeling of alternative annotations is possible, each level can be viewed separately, and new levels can be added at any time. The files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) are described. These representations serve as a base for several applications.
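The "text as implicit link" idea can be sketched as standoff annotation: each level lives in its own file and refers to the shared base text only through character offsets, so new levels can be added without touching existing ones. The layer format and element names below are invented for illustration; they are not the format proposed in the paper:

```python
import xml.etree.ElementTree as ET

BASE_TEXT = "Peter sleeps"

# Two independent annotation levels over the same base text,
# linked only implicitly through character offsets.
POS_LAYER = """<layer name="pos">
  <span start="0" end="5" tag="NE"/>
  <span start="6" end="12" tag="VVFIN"/>
</layer>"""

SYN_LAYER = """<layer name="syntax">
  <span start="0" end="12" tag="S"/>
  <span start="0" end="5" tag="NP"/>
</layer>"""

def merge_layers(text, *layer_docs):
    """View the separate layer files as one interrelated unit:
    collect every span from every layer and attach the covered text."""
    merged = []
    for doc in layer_docs:
        layer = ET.fromstring(doc)
        for span in layer.findall("span"):
            start, end = int(span.get("start")), int(span.get("end"))
            merged.append((layer.get("name"), span.get("tag"), text[start:end]))
    return merged
```

Because the layers never reference each other directly, overlapping structures (here, the NP and the POS spans it contains) coexist without conflict.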
Heterogeneity and standardization in data, use, and annotation : a diachronic corpus of German
(2005)
This paper describes the standardization problems that come up in a diachronic corpus: it has to cope with differing standards with regard to diplomaticity, annotation, and header information. Such highly heterogeneous texts must be standardized to allow for comparative research without (too much) loss of information.