Linguistik
Among syntacticians there is a general tendency to regard the freedom of positioning adverbials in German as even greater than the freedom of positioning arguments. How the positional freedom of arguments in the middle field of a German sentence should be captured theoretically has long been a matter of controversy. The central question is whether all serializations of the arguments are base-generated, or whether there is one distinguished serialization of the arguments, a so-called basic order, from which all other argument serializations are derived by a derivational operation, i.e. movement. These fundamental questions also arise for the positioning options of adverbials, even though for adverbials they have been raised and discussed far less frequently.
Buli is an Oti-Volta tone language spoken in Northern Ghana. This paper outlines the basic features of its tonal system and explores whether, and in what way, pitch or phonemic tone is employed as a means of indicating the pragmatic category of focus. We examine cases with focus-related surface tone changes as well as cases where pitch could help to disambiguate between broad and narrow foci. It is argued that focus is not consistently encoded by pitch or tone. Parallel findings for the closely related languages Kɔnni and Dagbani suggest that the apparent lack of significant prosodic focus signals in Buli might pertain to a larger group of tonal languages of the Gur family.
The present article illustrates that the specific articulatory and aerodynamic requirements for voiced but not voiceless alveolar or dental stops can cause tongue tip retraction and tongue mid lowering and thus retroflexion of front coronals. This retroflexion is shown to have occurred diachronically in the three typologically unrelated languages Dhao (Malayo-Polynesian), Thulung (Sino-Tibetan), and Afar (East-Cushitic). In addition to the diachronic cases, we provide synchronic data for retroflexion from an articulatory study with four speakers of German, a language usually described as having alveolar stops. With these combined data we supply evidence that voiced retroflex stops (as the only retroflex segments in a language) did not necessarily emerge from implosives, as argued by Haudricourt (1950), Greenberg (1970), Bhat (1973), and Ohala (1983). Instead, we propose that the voiced front coronal plosive /d/ is generally articulated in a way that favours retroflexion, that is, with a smaller and more retracted place of articulation and a lower tongue and jaw position than /t/.
Where does the newly awakened interest in linguistic correctness come from? Where does the pronounced linguistic insecurity come from that makes even many highly educated people wish to be instructed by language guardians about what is most intimately their own, namely their mother tongue? Although answers to these questions ultimately remain speculative, I venture the thesis that one cause is the spelling reform, which a large part of the population still has not accepted, and which on the whole has led neither to simplification nor to greater uniformity; which, on the other hand, did set in motion a public reflection and discussion on linguistic correctness. In any case, this insecurity is a fact that linguists should not ignore.
Starting point: the critique of the "two-worlds model". The fundamental linguistic distinction between "language" and "speech" has again been intensively discussed and criticized in recent debates on the mediality of language. Can this conceptual pair, which has shaped entire schools of thought and is almost sacrosanct in linguistics, still be meaningfully maintained at all? Or must it at least be redefined, perhaps even abandoned entirely? Has the distinction between linguistic competence and performance going back to Chomsky not reduced itself ad absurdum, now that linguistic cognitivism of Chomskyan provenance has completely lost sight of language as a living phenomenon, as a medium of human communication? Does not even the seemingly harmless linguistic differentiation between a language rule and its application lead to a misleading and inappropriate reification of language? ...
The medium of (oral) language is mostly disregarded (or overlooked) in contemporary media theories. This "ignoring of language" in media studies is often accompanied by an inadequate transport model of communication, and it converges with an "ignoring of mediality" in mentalistic theories of language. In the present article it is argued that this misleading opposition of language and media can only be overcome if one regards oral language, not just written language, as a medium of the human mind. In my argument I draw on Wittgenstein's conception of language games to show how Wittgenstein's ideas can help us clear up the problem of the mediality of language, and to show to what extent the mentalistic conception of Chomskyan provenance cannot do justice to the phenomenon of language.
In this paper, we argue that difficulties in the definition of coreference itself contribute to lower inter-annotator agreement in certain cases. Data from a large referentially annotated corpus serves to corroborate this point, using a quantitative investigation to assess which effects or problems are likely to be the most prominent. Several examples where such problems occur are discussed in more detail. We then propose a generalisation of Poesio, Reyle and Stevenson's Justified Sloppiness Hypothesis to provide a unified model for these cases of disagreement, and argue that a deeper understanding of the phenomena involved allows us to tackle problematic cases in a more principled fashion than would be possible using only pre-theoretic intuitions.
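Although the abstract does not name an agreement metric, chance-corrected coefficients such as Cohen's kappa are the standard way to quantify inter-annotator agreement; the following is a generic sketch, not the authors' actual procedure.

```python
from collections import Counter

def cohens_kappa(ann1, ann2):
    """Chance-corrected agreement between two annotators over parallel label lists."""
    assert len(ann1) == len(ann2)
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    # Expected agreement if both annotators labelled at random with their own marginals
    expected = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical annotators marking candidate mentions as coreferent (C) or not (N)
a = ["C", "C", "N", "N", "C", "N"]
b = ["C", "N", "N", "N", "C", "N"]
kappa = cohens_kappa(a, b)
```

A kappa well below 1 despite high raw agreement is exactly the kind of signal the paper traces back to unclear cases in the definition of coreference itself.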
Traditionally, parsers are evaluated against gold standard test data. This can cause problems if there is a mismatch between the data structures and representations used by the parser and the gold standard. A particular case in point is German, for which two treebanks (TiGer and TüBa-D/Z) are available with highly different annotation schemes for the acquisition of (e.g.) PCFG parsers. The differences between the TiGer and TüBa-D/Z annotation schemes make fair and unbiased parser evaluation difficult [7, 9, 12]. The resource (TEPACOC) presented in this paper takes a different approach to parser evaluation: instead of providing evaluation data in a single annotation scheme, TEPACOC uses comparable sentences and their annotations for 5 selected key grammatical phenomena (20 sentences per phenomenon) from both the TiGer and TüBa-D/Z resources. This provides a comparable test suite of 2 × 100 sentences, which allows us to evaluate TiGer-trained parsers against the TiGer part of TEPACOC, and TüBa-D/Z-trained parsers against the TüBa-D/Z part of TEPACOC for key phenomena, instead of comparing them against a single (and potentially biased) gold standard. To overcome the problem of inconsistency in human evaluation and to bridge the gap between the two different annotation schemes, we provide an extensive error classification, which enables us to compare parser output across the two different treebanks. In the remaining part of the paper we present the test suite and describe the grammatical phenomena covered in the data. We discuss the different annotation strategies used in the two treebanks to encode these phenomena and present our error classification of potential parser errors.
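The error classification itself is treebank-specific, but the bracket-based scoring that underlies most parser evaluation can be sketched generically. The following PARSEVAL-style labelled-bracket evaluation is a minimal illustration, not the TEPACOC methodology itself.

```python
from collections import Counter

def bracket_prf(gold, pred):
    """PARSEVAL-style precision/recall/F over labelled spans (label, start, end).
    Multiset intersection ensures duplicate spans are counted correctly."""
    g, p = Counter(gold), Counter(pred)
    correct = sum((g & p).values())
    precision = correct / sum(p.values())
    recall = correct / sum(g.values())
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Toy example: the parser gets the VP span wrong
gold = [("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)]
pred = [("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)]
p, r, f = bracket_prf(gold, pred)
```

Because such scores depend directly on the annotation scheme's bracketing decisions, two treebanks with different schemes yield numbers that are not directly comparable; that is the bias TEPACOC is designed to avoid.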
We present several parsing algorithms for Range Concatenation Grammars (RCG), including a new Earley-type algorithm, within the paradigm of deductive parsing. Our work is motivated by the recent interest in this type of grammar and fills a gap in the existing literature.
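RCG parsing itself is beyond a short sketch, but the deductive, chart-based style the abstract refers to can be illustrated with a minimal Earley-type recognizer for ordinary context-free grammars; this is a simplified stand-in, not the RCG algorithm described in the paper.

```python
def earley_recognize(grammar, start, tokens):
    """Agenda-driven Earley recognizer for a CFG without epsilon productions.
    grammar: dict mapping a nonterminal to a list of right-hand sides (tuples);
    any symbol not in the dict is treated as a terminal.
    Chart items are (lhs, rhs, dot, origin)."""
    n = len(tokens)
    chart = [set() for _ in range(n + 1)]
    chart[0] = {(start, rhs, 0, 0) for rhs in grammar[start]}
    for i in range(n + 1):
        agenda = list(chart[i])
        while agenda:
            lhs, rhs, dot, origin = agenda.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:  # predictor: expand a nonterminal at position i
                    for prod in grammar[sym]:
                        item = (sym, prod, 0, i)
                        if item not in chart[i]:
                            chart[i].add(item)
                            agenda.append(item)
                elif i < n and tokens[i] == sym:  # scanner: match a terminal
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:  # completer: advance items waiting for this nonterminal
                for l2, r2, d2, o2 in list(chart[origin]):
                    if d2 < len(r2) and r2[d2] == lhs:
                        item = (l2, r2, d2 + 1, o2)
                        if item not in chart[i]:
                            chart[i].add(item)
                            agenda.append(item)
    return any(lhs == start and origin == 0 and dot == len(rhs)
               for lhs, rhs, dot, origin in chart[n])

# Toy grammar: S -> "a" S | "a"
toy = {"S": [("a", "S"), ("a",)]}
```

The predictor/scanner/completer steps correspond directly to the inference rules of a deductive parsing system; an RCG variant would replace spans with tuples of ranges.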
In recent decades, the resource "knowledge" has moved ever more strongly into the centre of interest as a source of scientific innovation. This focus has led to a self-reflection of science and of the scientific disciplines: the main topics are the way in which knowledge is obtained and the related question of how scientificity is constructed, which at the same time directs attention to the increasingly dissolving boundaries between the disciplines, or between the three main scientific cultures of the natural sciences, the humanities and cultural studies, and the social sciences. Inside and outside the universities, "trading zones" (Galison 1997) that cannot always be clearly located have formed and continue to form, in which new forms and techniques of knowledge production and knowledge transfer are tested, practised, and in part also institutionalized. ...
Distributional approximations to lexical semantics are very useful not only in supporting the creation of lexical semantic resources (Kilgariff et al., 2004; Snow et al., 2006), but also when directly applied in tasks that can benefit from large-coverage semantic knowledge, such as coreference resolution (Poesio et al., 1998; Gasperin and Vieira, 2004; Versley, 2007), word sense disambiguation (McCarthy et al., 2004), or semantic role labeling (Gordon and Swanson, 2007). We present a model that is built from Web-based corpora using both shallow patterns for grammatical and semantic relations and a window-based approach, using singular value decomposition to decorrelate the feature space, which is otherwise too heavily influenced by the skewed topic distribution of Web corpora.
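As a rough illustration of the decorrelation step, here is how truncated singular value decomposition can be applied to a toy word-by-context co-occurrence matrix; the data and dimensions are invented and do not reflect the authors' Web-based model.

```python
import numpy as np

# Toy word-by-context co-occurrence counts (columns: purrs, pet, engine)
words = ["cat", "dog", "car"]
M = np.array([
    [10.0, 8.0,  0.0],   # cat
    [ 9.0, 9.0,  1.0],   # dog
    [ 0.0, 1.0, 12.0],   # car
])

# Truncated SVD: keep k latent dimensions; rows of U[:, :k] * S[:k] are
# word vectors in a decorrelated, lower-dimensional space
k = 2
U, S, Vt = np.linalg.svd(M, full_matrices=False)
vectors = U[:, :k] * S[:k]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_cat_dog = cosine(vectors[0], vectors[1])
sim_cat_car = cosine(vectors[0], vectors[2])
```

In the reduced space, similarity reflects shared latent dimensions rather than raw feature overlap, which dampens the influence of a few dominant (e.g. topically skewed) features.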
Parsing coordinations
(2009)
The present paper is concerned with statistical parsing of constituent structures in German. The paper presents four experiments that aim at improving parsing performance on coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser with gold scopes for each conjunct, 3) reranking the parser output for all possible conjunct scopes that are permissible with regard to clause structure. Experiment 4 reranks a combination of parses from experiments 1 and 3. The experiments presented show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses results in an increase in F-score from 69.76 for the baseline to 74.69. While this F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment results in higher recall (75.48% vs. 73.69%) and the third in higher precision (75.43% vs. 73.26%). Combining the two methods yields the best result, with an F-score of 76.69.
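The reranking step can be sketched abstractly as selecting, from an n-best list, the candidate that maximizes a linear model over parse features; the feature names and weights below are invented for illustration and are not the paper's actual model.

```python
def rerank(nbest, weights):
    """Pick the highest-scoring candidate from an n-best list.
    Each candidate is (parse, feature_dict); score = linear model over features."""
    def score(feats):
        return sum(weights.get(f, 0.0) * v for f, v in feats.items())
    return max(nbest, key=lambda cand: score(cand[1]))[0]

# Hypothetical n-best list: the second parse has a lower parser probability
# but a feature rewarding balanced conjuncts in a coordination
nbest = [
    ("(S (NP ...) (VP ...))",           {"parser_logprob": -20.5, "conj_balanced": 0.0}),
    ("(S (NP ...) (VP (CVP ...)))",     {"parser_logprob": -21.0, "conj_balanced": 1.0}),
]
weights = {"parser_logprob": 1.0, "conj_balanced": 2.0}
best = rerank(nbest, weights)
```

The point of reranking is exactly this: features that a PCFG cannot express locally (such as properties of whole coordinate structures) can overturn the parser's own ranking.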
Trubetzkoy's recognition of a delimitative function of phonology, serving to signal boundaries between morphological units, is expressed in terms of alignment constraints in Optimality Theory, where the relevant constraints require specific morphological boundaries to coincide with phonological structure (Trubetzkoy 1936, 1939, McCarthy & Prince 1993). The approach pursued in the present article is to investigate the distribution of phonological boundary signals to gain insight into the criteria underlying morphological analysis. The evidence from English and Swedish suggests that necessary and sufficient conditions for word-internal morphological analysis concern the recognizability of head constituents, which include the rightmost members of compounds and head affixes. The claim is that the stability of word-internal boundary effects in historical perspective cannot in general be sufficiently explained in terms of memorization and imitation of phonological word form. Rather, these effects indicate a morphological parsing mechanism based on the recognition of word-internal head constituents. Head affixes can be shown to contrast systematically with modifying affixes with respect to syntactic function, semantic content, and prosodic properties. That is, head affixes, which cannot be omitted, often lack inherent meaning and have relatively unmarked boundaries, which can be obscured entirely under specific phonological conditions. By contrast, modifying affixes, which can be omitted, consistently have inherent meaning and have stronger boundaries, which resist prosodic fusion in all phonological contexts. While these correlations are hardly specific to English and Swedish, it remains to be investigated to what extent they hold cross-linguistically. The observation that some of the constituents identified on the basis of prosodic evidence lack inherent meaning raises the issue of compositionality.
I will argue that certain systematic aspects of word meaning cannot be captured with reference to the syntagmatic level, but require reference to the paradigmatic level instead. The assumption is then that there are two dimensions of morphological analysis: syntagmatic analysis, which centers on the criteria for decomposing words in terms of labelled constituents, and paradigmatic analysis, which centers on the criteria for establishing relations among (whole) words in the mental lexicon. While meaning is intrinsically connected with paradigmatic analysis (e.g. base relations, oppositeness) it is not essential to syntagmatic analysis.
Linguists from Germany, Austria, and Switzerland repeatedly note with astonishment that gender-fair language use has become far more established in public and everyday life in "little" Switzerland than in the other German-speaking countries. This assessment is to be examined here and, if it proves correct, substantiated. In addition, as a first step towards further studies, we formulate theses that offer explanations of what this development can be traced back to. With this article we give, by means of selected concrete examples, an insight into the situation as it currently presents itself in Switzerland. From a sociolinguistic perspective, we concentrate on a first survey of the discussion in the media, the institutionalization, and the attitudes that shape the specific linguistic situation in German-speaking Switzerland. Our study is framed by the considerations of Schräpel (SCHRÄPEL 1986), who examines the controversy over non-sexist language as a particular phenomenon of language change. Language change in progress is, on the one hand, easier to capture than change that lies further in the past; on the other hand, the abundance of available material also makes it harder to maintain an overview and to recognize tendencies clearly. For this reason we do not evaluate our data quantitatively but concentrate on giving typical examples for various aspects, and thus on presenting the state of the public discussion and the range of opinions held. It would be tempting to analyse the present material in more general terms as well, under the headings "language criticism" or "attitudes". However, this is not at the centre of our research question, which is why for some examples we refer to corresponding studies (e.g. BLAUBERGS 1980, SCHOENTHAL 1989).
Intimacy and gender: on the syntax and pragmatics of forms of address in 20th-century love letters
(2000)
The division of the lifeworld into a private sphere and a public sphere would seem to accommodate the localization of intimacy. It appears, however, that intimacy cannot be assigned to one clearly delimited domain, but must rather be understood as a relational category. Historical comparison in particular (cf. CORBIN 1992) permits neither uniformly spatial or bodily nor aesthetic criteria for delimiting intimacy. ...
The late 19th and early 20th century sets itself clearly apart from the epistemological concepts of the preceding period: while, put very simply, philosophy had until then believed it could locate the possibility of knowledge either in the subjective or in the objective dimension, with the function of language in the process of cognition hardly being questioned, a tendency becomes apparent at the turn of the century that, on the one hand, questions or at least thematizes the adequacy of linguistic mediation and, on the other hand, reflects anew on the traditional modes of cognition or even turns its back on them.
This paper describes the creation and preparation of TUSNELDA, a collection of corpus data built for linguistic research. This collection contains a number of linguistically annotated corpora which differ in various respects, such as language, text type / data type, encoded annotation levels, and the linguistic theories underlying the annotation. The paper focuses on this variation on the one hand, and on how these heterogeneous data are integrated into a single resource on the other.
We adopt Markert and Nissim (2005)'s approach of using the World Wide Web to resolve cases of coreferent bridging for German and discuss the strengths and weaknesses of this approach. As the general approach of using surface patterns to obtain information on ontological relations between lexical items has only been tried on English, it is also interesting to see whether the approach works as well for German as it does for English, and what differences between these languages need to be accounted for. We also present a novel approach for combining several patterns that yields an ensemble which outperforms the best-performing single patterns in terms of both precision and recall.
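The abstract does not specify how the patterns are combined; a minimal weighted-vote sketch of a pattern ensemble might look as follows (pattern names and weights are hypothetical).

```python
def ensemble_predict(pattern_hits, weights, threshold=0.5):
    """Combine binary pattern indicators with per-pattern weights.
    pattern_hits: dict pattern_name -> True/False for one candidate pair.
    Returns True if the normalized weighted vote reaches the threshold."""
    total = sum(weights.values())
    score = sum(w for p, w in weights.items() if pattern_hits.get(p, False))
    return score / total >= threshold

# Hypothetical surface patterns testing whether X is-a Y
# ("X and other Y", "Y such as X", apposition "X, a Y")
weights = {"and_other": 0.9, "such_as": 0.8, "apposition": 0.4}
hits = {"and_other": True, "such_as": False, "apposition": True}
decision = ensemble_predict(hits, weights)
```

A combination of this kind can raise recall (any strong pattern may fire) while the threshold keeps precision up, which is the trade-off the abstract claims the ensemble improves on.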
When a statistical parser is trained on one treebank, it is usually tested on another portion of the same treebank, partly because a comparable annotation format is needed for testing. But the user of a parser may not be interested in parsing sentences from the same newspaper over and over, and may even want syntactic annotations for a slightly different text type. Gildea (2001), for instance, found that a parser trained on the WSJ portion of the Penn Treebank performs less well on the Brown corpus (the subset available in the PTB bracketing format) than a parser trained only on the Brown corpus, although the latter has only half as many sentences as the former. Additionally, a parser trained on both the WSJ and Brown corpora performs less well on the Brown corpus than on the WSJ one. This leads us to the following questions, which we would like to address in this paper:
- Is there a difference in the usefulness of techniques used to improve parser performance between the same-corpus and the different-corpus case?
- Are different types of parsers (rule-based and statistical) equally sensitive to corpus variation?
To address these questions, we compared the quality of the parses of a hand-crafted constraint-based parser and of a statistical PCFG-based parser trained on a treebank of German newspaper text.
In the past, a divide could be seen between 'deep' parsers on the one hand, which construct a semantic representation out of their input, but usually have significant coverage problems, and more robust parsers on the other hand, which are usually based on a (statistical) model derived from a treebank and have larger coverage, but leave the problem of semantic interpretation to the user. More recently, approaches have emerged that combine the robustness of data-driven (statistical) models with more detailed linguistic interpretation, such that the output can be used for deeper semantic analysis. Cahill et al. (2002) use a PCFG-based parsing model in combination with a set of principles and heuristics to derive functional (f-)structures of Lexical-Functional Grammar (LFG). They show that the derived functional structures are of better quality than those generated by a parser based on a state-of-the-art hand-crafted LFG grammar. Advocates of Dependency Grammar usually point out that dependencies already are a semantically meaningful representation (cf. Menzel, 2003). However, parsers based on dependency grammar normally create underspecified representations with respect to certain phenomena such as coordination, apposition, and control structures. In these areas they are too "shallow" to be directly used for semantic interpretation. In this paper, we adopt an approach similar to Cahill et al. (2002), using a dependency-based analysis to derive functional structure, and demonstrate the feasibility of this approach using German data. A major focus of our discussion is on the treatment of coordination and other potentially underspecified structures in the dependency input. F-structure is one of the two core levels of syntactic representation in LFG (Bresnan, 2001).
Independently of surface order, it encodes abstract syntactic functions that constitute predicate argument structure and other dependency relations such as subject, predicate, adjunct, but also further semantic information such as the semantic type of an adjunct (e.g. directional). Normally f-structure is represented as a recursive attribute-value matrix, which is isomorphic to a directed graph representation. Figure 5 depicts an example target f-structure. As mentioned earlier, these deeper-level dependency relations can be used to construct logical forms, as in the approaches of van Genabith and Crouch (1996), who construct underspecified discourse representations (UDRSs), and Spreyer and Frank (2005), who have robust minimal recursion semantics (RMRS) as their target representation. We therefore think that f-structures are a suitable target representation for automatic syntactic analysis in a larger pipeline of mapping text to interpretation. In this paper, we report on the conversion from dependency structures to f-structures. Firstly, we evaluate the f-structure conversion in isolation, starting from hand-corrected dependencies based on the TüBa-D/Z treebank and Versley (2005)'s conversion. Secondly, we start from tokenized text to evaluate the combined process of automatic parsing (using Foth and Menzel (2006)'s parser) and f-structure conversion. As a test set, we randomly selected 100 sentences from TüBa-D/Z, which we annotated using a scheme very close to that of the TiGer Dependency Bank (Forst et al., 2004). In the next section, we sketch dependency analysis, the underlying theory of our input representations, and introduce four different representations of coordination. We also describe Weighted Constraint Dependency Grammar (WCDG), the dependency parsing formalism that we use in our experiments. Section 3 characterises the conversion of dependencies to f-structures.
Our evaluation is presented in Section 4, and finally, Section 5 summarises our results and gives an overview of problems remaining to be solved.
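F-structures, being recursive attribute-value matrices, map naturally onto nested dictionaries. The following hand-built example (for "Sie sieht ihn", "she sees him") is purely illustrative and is not the conversion code described in the paper.

```python
# A hand-built f-structure for "Sie sieht ihn" ("she sees him"), as a nested
# attribute-value matrix encoded with dictionaries.
fstructure = {
    "PRED": "sehen<SUBJ, OBJ>",
    "TENSE": "present",
    "SUBJ": {"PRED": "pro", "PERS": 3, "NUM": "sg", "CASE": "nom"},
    "OBJ":  {"PRED": "pro", "PERS": 3, "NUM": "sg", "CASE": "acc"},
}

def resolve(fs, path):
    """Follow a list of attributes through a nested f-structure."""
    for attr in path:
        fs = fs[attr]
    return fs

subj_case = resolve(fstructure, ["SUBJ", "CASE"])
```

The nesting directly mirrors the directed-graph view mentioned in the abstract: attributes are edge labels, and embedded matrices are the nodes they point to.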