Turning a hand-written HPSG theory into a working computational grammar requires complex considerations. Two leading platforms are available for implementing HPSG grammars: the LKB and TRALE. These platforms are based on different approaches, distinct in their underlying logics and implementation details. This paper adopts the perspective of a computational linguist whose goal is to implement an HPSG theory. It focuses on ten dimensions relevant to HPSG grammar implementation and examines, compares, and evaluates the means the two approaches provide for implementing them. The paper concludes that the approaches occupy opposite positions on two axes: faithfulness to the hand-written theory and computational accessibility. The choice between them depends largely on the grammar writer's preferences regarding these properties.
We present a novel well-formedness condition for underspecified semantic representations which requires every correct MRS representation to be a net. We argue that (almost) all correct MRS representations are indeed nets, and apply this condition to identify a set of eleven rules in the English Resource Grammar (ERG) with bugs in their semantics component. We thus demonstrate that the net test is useful in grammar debugging.
Over the past fifty years, sign languages have been recognised as genuine languages with their own syntax and distinctive phonology. In the case of sign languages, phonetic description characterises the manual and non-manual aspects of signing. The latter relate to facial expression and upper-torso position. The manual components characterise hand shape, orientation and position, and hand/arm movement in three-dimensional space around the signer's body. These phonetic characterisations can be notated as HamNoSys descriptions of signs, which have an executable interpretation to drive an avatar.
The HPSG sign language generation component of a prototype text-to-sign-language system is described. Emphasis is placed on the assimilation of sign language morphological features to generate signs that respect positional agreement in signing space.
The project WBLUX (Wortbildung des moselfränkisch-luxemburgischen Raumes) at the University of Luxembourg investigates Luxembourgish word formation across different text sorts and genres. Achieving this goal requires the compilation of an annotated corpus. This article gives an example of the benefits of using a corpus annotated with parts of speech, lemmata and word-formation affixes for analysing the productivity of selected Luxembourgish word-formation affixes. It then describes how such a corpus can be built from a technical point of view, including the choice of corpus format and database platform and the design of the programs needed for the word-formation annotation process. The article also suggests new corpus-linguistic approaches to word-formation research, such as analysing the usage of word-formation bases across the entire corpus or performing context analysis to determine the semantic functions of each suffix.
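The kind of productivity analysis the article describes can be sketched as follows. The annotation format and the Luxembourgish examples below are invented for illustration, and the measure computed (Baayen's hapax-based P) is one common choice, not necessarily the one the project uses:

```python
from collections import Counter

def affix_productivity(tokens):
    """For each affix, compute token count, type count, and Baayen's
    productivity measure P = hapaxes / tokens.

    `tokens` is a list of (lemma, affix) pairs, one per corpus token
    (a hypothetical flattening of the corpus annotations).
    """
    by_affix = {}
    for lemma, affix in tokens:
        by_affix.setdefault(affix, Counter())[lemma] += 1
    stats = {}
    for affix, lemmas in by_affix.items():
        n_tokens = sum(lemmas.values())
        hapaxes = sum(1 for c in lemmas.values() if c == 1)
        stats[affix] = {
            "tokens": n_tokens,
            "types": len(lemmas),
            "P": hapaxes / n_tokens,
        }
    return stats

# Toy corpus: two Luxembourgish suffixes, invented token counts.
corpus = [
    ("Fräiheet", "-heet"), ("Fräiheet", "-heet"), ("Schéinheet", "-heet"),
    ("Bäckerei", "-erei"), ("Spillerei", "-erei"),
]
print(affix_productivity(corpus))
```

Many types that each occur only once (hapaxes) push P towards 1 and indicate a productive affix; a few high-frequency, lexicalised types push P towards 0.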
In this paper, I revisit the arguments against the use of fuzzy logic in linguistics (or, more generally, against a truth-functional account of vagueness). In part, this is an exercise in explaining to fuzzy logicians why linguists have shown little interest in their research paradigm. But the paper goes beyond this interdisciplinary service effort: this seems an opportune time to revisit the arguments against fuzzy logic in linguistics, because three recent developments affect them. First, the formal apparatus of fuzzy logic has been made more general since the 1970s, in particular by Hajek [6], and this may make it possible to define operators in a way that renders fuzzy logic more suitable for linguistic purposes. Secondly, recent research in philosophy has examined variations of fuzzy logic ([18, 19]). Since the goals of linguistic semantics sometimes seem closer to those of some branches of philosophy of language than to the goals of mathematical logic, fuzzy-logic work in philosophy may mark the right time to re-examine fuzzy logic from a linguistic perspective as well. Finally, the reasoning used to exclude fuzzy logic from linguistics has been tied to the intuition that p and not p is a contradiction. This intuition seems dubious, however, especially when p contains a vague predicate. For instance, one can easily think of circumstances where 'What I did was smart and not smart.' or 'Bea is both tall and not tall.' do not sound like senseless contradictions. Indeed, some recent experimental work that I describe below has shown that contradictions of classical logic are not always felt to be contradictory by speakers. So it is important to see to what extent the argument against fuzzy logic depends on a specific stance on the semantics of contradictions. In sum, there are three good reasons to take another look at fuzzy logic for linguistic purposes.
This contribution is based on the research project DICONALE, which aims to create a conceptually oriented, bilingual dictionary with online access for verbal lexemes of German and Spanish. The aim of this contribution is to present the most relevant features of the planned dictionary, using two verb lexemes from the conceptual field of COGNITION as examples. In addition to describing the paradigmatic sense relations of the field elements to one another, particular emphasis is placed on the syntagmatic content and expression structures and on the contrastive analysis. The contribution attempts, on the one hand, to offer an overview of the dictionary's most important characteristics and, on the other, to demonstrate the relevance of such criteria for present-day German-Spanish contrastive lexicography.
This contribution describes the development of a semantic dictionary of German for natural language processing systems within the project "Compreno" at the Russian IT company ABBYY. A brief overview of other electronic resources for German is given, and their differences from the project dictionary are analysed. Using several examples, current problems of computational lexicography (word-sense discrimination, compound analysis, etc.) and their possible solutions within the project dictionary are discussed.
This paper discusses an attempt to write a computer program that would properly model the phonological development of Chinese from Middle Chinese to Modern Peking Mandarin, using the rules in Chen 1976. Several problems are encountered, the most significant being that the rules cannot apply in the same order for all lexical items. The significance of this in terms of the implementation of sound change is briefly discussed.
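The kind of implementation the paper describes can be sketched minimally as sound-change rules applied to each lexical item in a fixed global order. The two rules below are invented toys, not Chen's (1976) actual rules; the paper's central finding is precisely that a single global ordering like this fails for some lexical items, so per-item orderings would be needed:

```python
import re

# Toy sound-change rules over a phonological string, applied in order.
# These are illustrative inventions, not Chen's (1976) rule set.
RULES = [
    (r"k(?=i)", "tɕ"),  # palatalisation of /k/ before /i/
    (r"p$", ""),        # loss of word-final /p/
]

def apply_rules(form, rules):
    """Apply each sound-change rule to `form`, in the given fixed order."""
    for pattern, replacement in rules:
        form = re.sub(pattern, replacement, form)
    return form

print(apply_rules("kip", RULES))  # -> "tɕi"
```

Modelling the paper's problem case would mean finding items where swapping two rules in RULES yields the attested reflex while the global order does not.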
This special issue of the ZAS Papers in Linguistics contains a collection of papers from the French-German Thematic Summer School on "Cognitive and physical models of speech production, and speech perception and of their interaction".
Organized by Susanne Fuchs (ZAS Berlin), Jonathan Harrington (IPdS Kiel), Pascal Perrier (ICP Grenoble) and Bernd Pompino-Marschall (HUB and ZAS Berlin), and funded by the German-French University in Saarbrücken, the summer school was held from September 19th to 24th, 2004, on the Baltic Sea coast at the Heimvolkshochschule Lubmin (Germany), with 45 participants from Germany, France, Great Britain, Italy and Canada. The scientific program of the summer school, reprinted at the end of this volume, included 11 keynote presentations by invited speakers, 21 oral presentations and a poster session (8 presentations). The names and addresses of all participants are also given in the back matter of this volume.
All participants were offered the opportunity to publish an extended version of their presentation in the ZAS Papers in Linguistics. All submitted papers underwent review and editing by external experts and the organizers of the summer school. As is typical for a summer school, the papers present works in progress, works at a more advanced stage, or tutorials. They are ordered alphabetically by first author's name, which happily means that this special issue opens with the paper that won the award for best pre-doctoral presentation: Sophie Dupont, Jérôme Aubin and Lucie Ménard, "A study of the McGurk effect in 4 and 5-year-old French Canadian children".
The author presents MASSY, the MODULAR AUDIOVISUAL SPEECH SYNTHESIZER. The system combines two approaches to visual speech synthesis. Two control models are implemented: a data-based di-viseme model and a rule-based dominance model, both of which produce control commands in a parameterized articulation space. Analogously, two visualization methods are implemented: an image-based (video-realistic) face model and a 3D synthetic head. Both face models can be driven by both the data-based and the rule-based articulation model.
The high-level visual speech synthesis generates a sequence of control commands for the visible articulation. For every virtual articulator (articulation parameter), the 3D synthetic face model defines a set of displacement vectors for the vertices of the 3D objects of the head. The vertices of the 3D synthetic head are then moved by linear combinations of these displacement vectors to visualize articulation movements. For the image-based video synthesis, a single reference image is deformed to fit the facial properties derived from the control commands. Facial feature points and facial displacements have to be defined for the reference image. The algorithm can also use an image database with appropriately annotated facial properties; an example database was built automatically from video recordings. Both the 3D synthetic face and the image-based face generate visual speech capable of increasing the intelligibility of audible speech.
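The per-vertex deformation described above can be sketched as follows. This is a minimal illustration in plain Python; the parameter name, the data layout, and the weight range are assumptions for the sketch, not MASSY's actual interface:

```python
def deform(base_vertices, displacement_sets, weights):
    """Move each mesh vertex by a weighted (linear) combination of
    per-parameter displacement vectors.

    base_vertices:     list of (x, y, z) rest positions
    displacement_sets: {parameter: list of (dx, dy, dz), one per vertex}
    weights:           {parameter: activation, e.g. in [0, 1]} taken
                       from the high-level control commands
    """
    result = []
    for i, (x, y, z) in enumerate(base_vertices):
        for name, disps in displacement_sets.items():
            w = weights.get(name, 0.0)
            dx, dy, dz = disps[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        result.append((x, y, z))
    return result

# Two vertices, one hypothetical articulation parameter.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
disp = {"jaw_opening": [(0.0, -1.0, 0.0), (0.0, -0.5, 0.0)]}
print(deform(verts, disp, {"jaw_opening": 0.5}))
# -> [(0.0, -0.5, 0.0), (1.0, -0.25, 0.0)]
```

Because the combination is linear, several articulation parameters can be active at once and their vertex displacements simply sum.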
Other well-known image-based audiovisual speech synthesis systems, such as MIKETALK and VIDEO REWRITE, concatenate pre-recorded single images or video sequences, respectively. Parametric talking heads such as BALDI control a parametric face with a parametric articulation model. The presented system demonstrates the compatibility of parametric and data-based approaches to visual speech synthesis.
The goal of our current project is to build a system that can learn to imitate a version of a spoken utterance using an articulatory speech synthesiser. The approach is informed and inspired by knowledge of early infant speech development. We thus expect our system to reproduce and exploit the utility of infant behaviours such as listening, vocal play, babbling and word imitation, and to develop a relationship between the sound-making capabilities of its vocal tract and the phonetic/phonological structure of imitated utterances. At the heart of our approach is the learning of an inverse model that relates acoustic and motor representations of speech. The acoustic-to-auditory mapping uses an auditory filter bank and a self-organizing phase of learning. The inverse model from auditory representations to vocal-tract control parameters is estimated using a babbling phase, in which the vocal tract is driven essentially at random, much like the babbling phase of speech acquisition in infants. The complete system can be used to imitate simple utterances through a direct mapping from sound to control parameters. Our initial results show that this procedure works well for sounds generated by the system's own voice. Further work is needed to build a phonological control level and to achieve better performance with real speech.
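The babble-then-invert loop can be sketched as follows. The nearest-neighbour lookup and the toy linear "synthesiser" below are illustrative assumptions, standing in for the self-organizing learning and the articulatory synthesiser of the actual system:

```python
import random

def babble(synthesise, n_samples, n_params):
    """Babbling phase: drive the vocal tract with random control
    parameters and record (acoustic, motor) pairs."""
    memory = []
    for _ in range(n_samples):
        motor = [random.random() for _ in range(n_params)]
        memory.append((synthesise(motor), motor))
    return memory

def invert(memory, target_acoustics):
    """Inverse model: return the stored motor command whose acoustic
    consequence is nearest to the target (nearest-neighbour lookup)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, motor = min(memory, key=lambda pair: dist(pair[0], target_acoustics))
    return motor

# Toy stand-in for an articulatory synthesiser: acoustic features are
# a fixed linear function of two motor parameters.
def toy_synth(motor):
    return [2.0 * motor[0] + motor[1], motor[1] - motor[0]]

random.seed(0)
memory = babble(toy_synth, 500, 2)
target = toy_synth([0.3, 0.7])       # utterance in the system's own voice
motor = invert(memory, target)       # recovered control parameters
# re-synthesising from `motor` approximates the target acoustics
```

As in the abstract, imitation works well here precisely because the target was produced by the system's own "voice"; generalising to real speech would require a richer inverse model than a lookup table.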
The contribution first addresses the question of what advantages electronic dictionaries have over traditionally printed dictionaries. It then presents three online machine translation programs (Babelfish, Google Übersetzer, Bing Translator). Sample texts are translated with these programs, and the quality of each translation is then assessed. Finally, the contribution discusses the consequences that the possibilities of machine translation can be expected to have for German studies abroad. It turns out that machine translation programs may well have serious implications for the philological disciplines in the future.
For some time now, it has been observable that the toolkit of a learner of German as a foreign language (DaF) […] no longer includes grammars and dictionaries in the classical sense. At all levels and in all user situations, looking things up in printed works is being replaced by consulting the most diverse materials freely accessible via the internet. […] It thus appears that, particularly in the DaF field, printed reference works will soon belong to the relics of another age. But just as with the use of printed dictionaries, the newly emerging online look-up techniques (Engelberg/Lemnitzer 2009, 111) mean that the DaF learner needs sufficient information and training in order to select, for each user situation, the most appropriate search option in the best-suited consultation system. […] This applies equally to print and online resources, although precisely with internet dictionaries the risk of losing one's orientation during a search query ("lost in hyperspace") can be heightened (cf. Haß/Schmitz 2010, 4). It is therefore the task of teachers to provide the corresponding orientation and assistance. Unfortunately, it must be noted that in the DaF field the necessary lexicographic competence is not sufficiently taught, which is often due not least to the inadequate lexicographic training of DaF teachers. The aim of this contribution is therefore to present, in broad outline, some freely accessible internet dictionaries (IWB) for the German language and to comment on their usefulness in various user situations in the DaF field, in order to make it easier for DaF learners and teachers to choose, according to their respective needs, from what has meanwhile become a rather confusing range of offerings.
Following the criteria proposed by Engelberg/Lemnitzer (⁴2009, 220ff.) and Storrer (2010) and the evaluation grid for assessing online dictionaries by Kemmer (2010), various current internet dictionaries of contemporary German are to be assessed. For the dictionary typology I follow the proposals of Engelberg/Lemnitzer (⁴2009), but within this framework I restrict the scope to bilingual internet dictionaries, specific monolingual DaF internet dictionaries, and some modularised general-language dictionary portals in which various internet dictionaries are linked to one another.
Scholars in the fields of Education, Applied Linguistics and Language Teacher Education today insist on the great importance of including Information and Communication Technologies (ICTs) in initial teacher education, as well as on the need to promote the development of future teachers' critical-reflective thinking. Taking as theoretical premises studies on the characteristics of the information society, on virtual environments and on teacher education, this paper aims to discuss the possibilities offered by the Moodle learning platform in the initial education of teachers of German. To this end, we present different ways of using virtual environments and the tools available in them, which have proved to be of great value in the education of undergraduate teacher trainees, both in the German language and during their initial teaching practice. These experiences point to the inestimable value of virtual environments in accompanying trainees in the process of learning the language and in their first teaching experiences.
In order to face the difficult conditions of international competition, small and medium-sized enterprises need not only modern information technologies and a commercial presence in the multimedia, graphics-intensive part of the internet, but also a web presence adapted to their customers. In this spirit, this contribution addresses the economic necessity of a contrastive hypertext grammar. In recent years, thanks to the growing importance of the internet as a trading platform, a grammatical subdiscipline has emerged that could make a considerable contribution to the business optimisation of small and medium-sized enterprises: contrastive hypertext grammar. We examine here the question of how one might proceed in a contrastive hypertext-grammatical study.
The purpose of this article is to report on the work carried out during the research project "O trabalho de tradutor como fonte para a constituição de base de dados" (The translator's work as a source for the constitution of a database). Through the restoration, organization and digitization of the personal glossary and part of the books containing the translations made by the late public translator Gustavo Lohnefink, this research project intends to construct a digital database of German-Portuguese technical terms which could then be used by other translators. To achieve this, a specific methodology had to be developed, which can serve as a starting point for the treatment and recovery of other similarly organized data collections.
This paper investigates the dynamics of text-image interplay as exemplified by various text types applied to second language teaching and translation didactics. Based on examples of texts from the fields of science, technology, literature and language teaching, the authors attempt to assess both successful and unsuccessful instances of the application of iconic resources in text production. Some didactic consequences are discussed.