Linguistics Classification
In this paper I use the formal framework of minimalist grammars to implement a version of the traditional approach to ellipsis as 'deletion under syntactic (derivational) identity', which, in conjunction with canonical analyses of voice phenomena, immediately allows for voice mismatches in verb phrase ellipsis but not in sluicing. This approach to ellipsis is naturally implemented in a parser by threading a state, encoding a set of possible antecedent derivation contexts, through the derivation tree. Similarities between ellipsis and pronominal resolution are easily stated in these terms. In the context of this implementation, two approaches to ellipsis in the transformational community emerge as equivalent descriptions at different levels: the LF-copying approach to ellipsis resolution is best seen as a description of the parser, whereas the phonological deletion approach is a description of the underlying relation between form and meaning.
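As a schematic illustration of the state-threading idea (all names are invented for this sketch, and simple label equality stands in for derivational identity), the following Python fragment threads a set of antecedent contexts through a derivation tree and resolves ellipsis sites against the contexts accumulated so far:

```python
# A minimal sketch, not the paper's parser: thread a set of antecedent
# derivation contexts left-to-right through a derivation tree; an
# ellipsis site resolves only against contexts already seen.

class Node:
    def __init__(self, label, children=(), ellipsis=False):
        self.label, self.children, self.ellipsis = label, children, ellipsis

def resolve(node, contexts):
    """Return the updated context set; fail if an ellipsis site has
    no matching antecedent context in the threaded state."""
    if node.ellipsis and node.label not in contexts:
        raise ValueError(f"unresolved ellipsis site: {node.label}")
    for child in node.children:
        contexts = resolve(child, contexts)
    return contexts | {node.label}

# "John left, and Mary did [leave] too": the elided VP must match an
# antecedent VP context accumulated earlier in the traversal.
tree = Node("and", [Node("vp:leave"), Node("vp:leave", ellipsis=True)])
print(sorted(resolve(tree, set())))  # ['and', 'vp:leave']
```

Note that, as in the pronominal case, resolution here is a property of how the parser consumes the tree, not of the tree itself.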
In this paper, we report on a transformation scheme that turns a Categorial Grammar, more specifically a Combinatory Categorial Grammar (CCG; see Baldridge, 2002), into a derivation- and meaning-preserving typed feature structure (TFS) grammar.
We describe the main idea, which can be traced back at least to work by Karttunen (1986), Uszkoreit (1986), Bouma (1988), and Calder et al. (1988). We then show how a typed representation of complex categories can be extended by further constraints, such as modes, and indicate how the lambda semantics of the combinators is mapped into a TFS representation, using unification to perform alpha-conversion and beta-reduction (Barendregt, 1984). We also present first findings from runtime measurements, showing that the PET system, originally developed for the HPSG grammar framework, outperforms the OpenCCG parser by a factor of 8–10 in the time domain and a factor of 4–5 in the space domain.
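As a toy illustration of the general idea (not the paper's actual encoding; the feature names `dir`, `arg`, and `result` are invented here), complex CCG categories can be represented as nested feature structures, with combinatory rules modelled as unification steps:

```python
# A minimal sketch, assuming a dict-based stand-in for typed feature
# structures; real TFS grammars add types, reentrancy, and variables.

def unify(a, b):
    """Unify two feature structures (dicts or atoms); None on failure."""
    if a is None or b is None:
        return None
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None
    out = dict(a)
    for k, v in b.items():
        out[k] = unify(out[k], v) if k in out else v
        if out[k] is None:
            return None
    return out

# The transitive-verb category (S\NP)/NP as a feature structure.
tv = {"dir": "/", "arg": {"cat": "np"},
      "result": {"dir": "\\", "arg": {"cat": "np"},
                 "result": {"cat": "s"}}}

def forward_apply(fun, arg):
    """Forward application X/Y Y => X, as unification of the
    function's argument slot with the argument category."""
    if fun.get("dir") != "/" or unify(fun["arg"], arg) is None:
        return None
    return fun["result"]

print(forward_apply(tv, {"cat": "np"}))  # the S\NP structure
```

In the same spirit, beta-reduction can be traded for unification by letting the lambda-bound variable and its argument share one structure, which is the move the abstract alludes to.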
We consider two alternatives for memory management in typed-feature-structure-based parsers, identifying structural properties of grammar signatures that may have predictive value for the consequences of those alternatives. We define these properties, summarize the results of a number of experiments on artificially constructed signatures concerning the relative rank of their asymptotic parse-time costs, and experimentally consider how the properties impact memory management.
The process of turning a hand-written HPSG theory into a working computational grammar requires complex considerations. Two leading platforms are available for implementing HPSG grammars: the LKB and TRALE. These platforms are based on different approaches, distinct in their underlying logics and implementation details. This paper adopts the perspective of a computational linguist whose goal is to implement an HPSG theory. It focuses on ten dimensions relevant to HPSG grammar implementation and examines, compares, and evaluates the means that the two approaches provide for implementing them. The paper concludes that the approaches occupy opposite positions on two axes: faithfulness to the hand-written theory and computational accessibility. The choice between them depends largely on the grammar writer's preferences regarding these properties.
We present a novel well-formedness condition for underspecified semantic representations which requires that every correct MRS representation must be a net. We argue that (almost) all correct MRS representations are indeed nets, and apply this condition to identify a set of eleven rules in the English Resource Grammar (ERG) with bugs in their semantics component. Thus we demonstrate that the net test is useful in grammar debugging.
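As a rough illustration of what such a well-formedness test involves, the toy sketch below checks only that the fragments of an underspecified representation form a connected dominance graph. Connectedness is one necessary ingredient of nethood, not the full net condition (which also constrains how fragments may connect), and all names in the example are invented:

```python
# A toy sketch, assuming MRS fragments abstracted to node labels and
# handle constraints abstracted to undirected dominance edges.
from collections import defaultdict

def is_connected(nodes, edges):
    """Depth-first connectivity check over fragments and edges."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

# Hypothetical fragments of "every dog barks".
fragments = {"every", "dog", "bark"}
dominance = [("every", "dog"), ("every", "bark")]
print(is_connected(fragments, dominance))  # True
```

A rule whose semantics leaves some fragment unconnected would already fail this weaker check, which is the flavour of bug the net test surfaces.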
During the past fifty years, sign languages have been recognised as genuine languages with their own syntax and distinctive phonology. For sign languages, phonetic description characterises the manual and non-manual aspects of signing. The non-manual aspects relate to facial expression and upper-torso position. The manual components characterise hand shape, orientation and position, and hand/arm movement in the three-dimensional space around the signer's body. These phonetic characterisations can be notated as HamNoSys descriptions of signs, which have an executable interpretation that can drive an avatar.
The HPSG sign language generation component of a prototype text-to-sign-language system is described. Emphasis is placed on the assimilation of sign language morphological features to generate signs that respect positional agreement in signing space.
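As a purely hypothetical sketch of what positional agreement might look like computationally (the feature names and loci below are invented for illustration, not the system's actual representation), a directional verb sign can inherit the start and end points of its movement path from the loci assigned to its arguments in signing space:

```python
# A hypothetical sketch: a directional verb agrees positionally with
# its arguments by copying their signing-space loci into its path.

REFERENT_LOCI = {"MARY": "ipsi-left", "JOHN": "contra-right"}  # invented loci

def agree(verb_sign, subj, obj):
    """Fill the verb's movement path from its arguments' loci."""
    sign = dict(verb_sign)
    sign["start"] = REFERENT_LOCI[subj]  # movement begins at subject locus
    sign["end"] = REFERENT_LOCI[obj]     # ... and ends at object locus
    return sign

give = {"lemma": "GIVE", "handshape": "flat-O"}
print(agree(give, "MARY", "JOHN"))
# {'lemma': 'GIVE', 'handshape': 'flat-O',
#  'start': 'ipsi-left', 'end': 'contra-right'}
```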
The project WBLUX (Wortbildung des moselfränkisch-luxemburgischen Raumes) at the University of Luxembourg investigates Luxembourgish word formation across different text types and genres. Achieving this goal requires the compilation of an annotated corpus. This article gives an example of the benefits of using a corpus annotated with parts of speech, lemmata, and word-formation affixes for analysing the productivity of selected Luxembourgish word-formation affixes. It then describes how such a corpus can be built from a technical point of view, covering the choice of corpus format and database platform and the design of the programs needed for annotating word formation itself. The article also suggests new corpus-linguistic approaches to word-formation research, such as analysing the usage of word-formation bases across the entire corpus or performing context analysis to determine the semantic functions of each suffix.
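As an illustration of the kind of productivity analysis such an annotated corpus supports (my own sketch, not the project's code), Baayen's productivity measure P = V1/N divides the hapax legomena containing an affix by all tokens containing it; the word forms below are invented toy data:

```python
# A minimal sketch of Baayen-style morphological productivity:
# P = hapaxes-with-affix / tokens-with-affix.
from collections import Counter

def productivity(tokens, suffix):
    """Hapax-based productivity of a suffix over a token list."""
    hits = [t for t in tokens if t.endswith(suffix)]
    if not hits:
        return 0.0
    counts = Counter(hits)
    hapaxes = sum(1 for c in counts.values() if c == 1)
    return hapaxes / len(hits)

# Toy corpus; in practice the suffix match would use the corpus's
# word-formation annotation rather than raw string endings.
corpus = ["Kandheet", "Fräiheet", "Fräiheet", "Gesondheet",
          "Haus", "Kanner", "Schéinheet"]
print(productivity(corpus, "heet"))  # 3 hapaxes / 5 tokens = 0.6
```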
In this paper, I revisit the arguments against the use of fuzzy logic in linguistics (or, more generally, against a truth-functional account of vagueness). In part, this is an exercise in explaining to fuzzy logicians why linguists have shown little interest in their research paradigm. But the paper contains more than this interdisciplinary service effort: this seems an opportune time for revisiting the arguments against fuzzy logic in linguistics, since three recent developments affect them. First, the formal apparatus of fuzzy logic has been made more general since the 1970s, specifically by Hajek [6], and this may make it possible to define operators in a way that renders fuzzy logic more suitable for linguistic purposes. Secondly, recent research in philosophy has examined variations of fuzzy logic ([18, 19]). Since the goals of linguistic semantics sometimes seem closer to those of some branches of philosophy of language than to the goals of mathematical logic, fuzzy logic work in philosophy may mark the right time to reexamine fuzzy logic from a linguistic perspective as well. Finally, the reasoning used to exclude fuzzy logic from linguistics has been tied to the intuition that p and not p is a contradiction. This intuition seems dubious, however, especially when p contains a vague predicate: one can easily think of circumstances where 'What I did was smart and not smart.' or 'Bea is both tall and not tall.' don't sound like senseless contradictions. In fact, some recent experimental work that I describe below has shown that contradictions of classical logic aren't always felt to be contradictory by speakers. So it is important to see to what extent the argument against fuzzy logic depends on a specific stance on the semantics of contradictions. In sum, there are three good reasons to take another look at fuzzy logic for linguistic purposes.
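To make the point about p and not p concrete, here is a minimal worked sketch assuming the standard Zadeh min/max operators (one of several operator choices that Hajek's more general framework subsumes): for a borderline case with truth degree 0.5, the conjunction of a sentence with its own negation receives degree 0.5 rather than 0, which is exactly why truth-functional accounts need not treat 'Bea is both tall and not tall.' as a senseless contradiction.

```python
# Zadeh operators: negation as 1 - v, conjunction as min.

def f_not(v):
    return 1.0 - v

def f_and(a, b):
    return min(a, b)

# Suppose "Bea is tall" holds to degree 0.5 (a borderline case).
tall = 0.5
print(f_and(tall, f_not(tall)))  # 0.5, not 0.0 as in classical logic
```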
This contribution is based on the research project DICONALE, which aims to create a conceptually oriented bilingual dictionary with online access for verbal lexemes of German and Spanish. The goal of this contribution is to present the most relevant properties of the planned dictionary, exemplified by two verb lexemes from the conceptual field of COGNITION. Besides describing the paradigmatic sense relations that the field elements bear to one another, particular emphasis is placed on the syntagmatic content and expression structures and on the contrastive analysis. The contribution attempts, on the one hand, to offer an overview of the most important characteristics of the dictionary and, on the other, to demonstrate the relevance of such criteria for present-day German-Spanish contrastive lexicography.