Linguistics
Keywords
- Computational linguistics (20)
- Information structure (19)
- German (16)
- Phonetics (13)
- Japanese (10)
- Machine translation (9)
- English (7)
- Grammar (7)
- Nungish (6)
- Tibeto-Burman languages (6)
In this paper we describe SOBA, a sub-component of the SmartWeb multi-modal dialog system. SOBA is a component for ontology-based information extraction from soccer web pages for the automatic population of a knowledge base that can be used for domain-specific question answering. SOBA realizes a tight connection between the ontology, the knowledge base and the information extraction component. The originality of SOBA lies in the fact that it extracts information from heterogeneous sources such as tabular structures, text and image captions in a semantically integrated way. In particular, it stores extracted information in a knowledge base and in turn uses the knowledge base to interpret and link newly extracted information with respect to already existing entities.
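The extract-store-link cycle described in this abstract can be sketched as follows. This is a minimal illustration, not SOBA's actual data model: the dictionary-based knowledge base, the entity names and the attribute keys are all invented for the example.

```python
# Hypothetical sketch: link newly extracted soccer entities against a KB.
kb = {}  # (type, normalized name) -> entity record

def link_or_create(name, etype, attrs):
    """Link an extracted mention to an existing KB entity, or create one."""
    key = (etype, name.lower())
    entity = kb.get(key)
    if entity is None:
        entity = {"name": name, "type": etype, "attrs": dict(attrs)}
        kb[key] = entity
    else:
        entity["attrs"].update(attrs)  # merge information from the new source
    return entity

# Two extractions of the same (invented) player, one from a match table
# and one from an image caption, end up linked to a single entity:
link_or_create("Miroslav Klose", "Player", {"goals": 2})
e = link_or_create("Miroslav Klose", "Player", {"club": "Werder Bremen"})
```

The point of the sketch is the second call: because the mention resolves to an existing entity, information from the heterogeneous sources is aggregated rather than duplicated.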
As has been noted previously, speakers with coronally low, "flat" palates exhibit less articulatory variability than speakers with coronally high, "dome-shaped" palates. This phenomenon is investigated by means of a tongue model and an EPG experiment. The results show that acoustic variability depends on the shape of the vocal tract: the same articulatory variability leads to more acoustic variability if the palate is flat than if it is dome-shaped. Furthermore, speakers with dome-shaped palates show more articulatory variability than speakers with flat palates. The results are explained by different control strategies on the part of the speakers: speakers with flat palates reduce their articulatory variability in order to keep their acoustic variability low.
Temporal development of compensation strategies for perturbed palate shape in German /S/-production
(2006)
The palate shape of four speakers was changed by a prosthesis which either lowered the palate or retracted the alveolar ridge. Subjects wore the prosthesis for two weeks and were recorded several times via EMA. Results of the articulatory measurements show that speakers use different compensation methods at different stages of the adaptation. They lower the tongue immediately after the insertion of the prosthesis; other compensation methods, such as lip protrusion, are only acquired after longer practice periods. The results are interpreted as supporting the existence of different mappings between motor commands, vocal tract shape and the auditory-acoustic target.
Several articulatory strategies are available during the production of /u/, all resulting in a similar acoustic output. /u/ has two main constrictions, at the velum and at the lips. A perturbation of either constriction can be compensated at the other one, e.g. a wider constriction at the velum by more lip protrusion, a wider lip opening by more tongue retraction. This study investigates whether speakers use this relation under perturbation. Six speakers were provided with palatal prostheses which were worn for two weeks. Speakers were instructed to make a serious attempt to produce normal speech. Their speech was recorded via EMA and acoustics several times over the adaptation period. Formant values of the /u/-productions were measured, and velar constriction width and lip protrusion were estimated. For four speakers a correlation between constriction width and lip protrusion was found. A negative correlation between lip protrusion and F1 or F2 could sometimes be observed, but no correlation occurred between constriction size and either of the formants. The results show that under perturbation speakers use motor-equivalent strategies in order to adapt. The correlation between constriction size and lip protrusion is stronger than in studies investigating unperturbed speech. This could be because, under perturbation, speakers are inclined to try out several strategies in order to reach the acoustic target, and the co-variability might thus be greater.
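The correlation analysis between velar constriction width and lip protrusion reported above can be sketched with a plain Pearson coefficient. The measurement values below are invented for illustration and are not data from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-token measurements for one speaker's /u/ productions:
constriction_width_mm = [4.1, 5.0, 3.6, 4.8, 5.4, 3.9]   # velar constriction
lip_protrusion_mm     = [11.2, 9.8, 12.0, 10.1, 9.5, 11.6]

r = pearson_r(constriction_width_mm, lip_protrusion_mm)
# A negative r is the motor-equivalent trading relation: a wider velar
# constriction is compensated by more lip protrusion.
```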
A two-week perturbation EMA experiment was carried out with palatal prostheses. Articulatory effort for five speakers was assessed by means of peak acceleration and jerk during the tongue-tip gestures from /t/ towards /i, e, o, y, u/. After a period of no change, speakers showed an increase in these values; towards the end of the experiment the values decreased. The results are interpreted as three phases of carrying out changes in the internal model. At first, the complete production system is shifted in relation to the palatal change; afterwards, speakers explore different production mechanisms, which involves more articulatory effort. This second phase can be seen as a training phase during which several articulatory strategies are explored. In the third phase speakers start to select an optimal movement strategy to produce the sounds, so that the values decrease.
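Peak acceleration and jerk can be estimated from sampled articulator positions by repeated finite differencing. The sampling rate and the position trace below are illustrative assumptions, not data from the experiment:

```python
def derivative(signal, dt):
    """Approximate time derivative by forward differences."""
    return [(b - a) / dt for a, b in zip(signal, signal[1:])]

def peak_abs(signal):
    """Peak magnitude of a signal, sign ignored."""
    return max(abs(v) for v in signal)

# Hypothetical tongue-tip positions (mm) sampled at 200 Hz during one gesture:
dt = 1.0 / 200.0
pos = [0.0, 0.2, 0.9, 2.1, 3.6, 4.8, 5.4, 5.6]

vel = derivative(pos, dt)    # velocity, mm/s
acc = derivative(vel, dt)    # acceleration, mm/s^2
jerk = derivative(acc, dt)   # jerk, mm/s^3

peak_acceleration = peak_abs(acc)
peak_jerk = peak_abs(jerk)
```

In practice EMA trajectories would first be low-pass filtered, since differentiation amplifies measurement noise; the sketch omits that step.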
The study investigates the contribution of tactile and auditory feedback in the adaptation of /s/ towards a palatal prosthesis. Five speakers were recorded via electromagnetic articulography, at first without the prosthesis, then with the prosthesis and auditory feedback masked, and finally with the prosthesis and auditory feedback available. Tongue position, jaw position and the acoustic centre of gravity of productions of the sound were measured. The results show that the initial adaptation attempts without auditory feedback depend on the prosthesis type and are directed towards reaching the original tongue-palate contact pattern: speakers with a prosthesis which retracted the alveolar ridge retracted the tongue, whereas speakers with a prosthesis which did not change the place of the alveolar ridge did not. All speakers lowered the jaw. In a second adaptation step, with auditory feedback available, speakers reorganised tongue and jaw movements in order to produce the more subtle acoustic characteristics of the sound, such as the high-amplitude noise which is typical of sibilants.
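The acoustic centre of gravity used as a measure here is the amplitude-weighted mean frequency of the spectrum. A minimal sketch, where the bin frequencies and magnitudes are invented for illustration:

```python
def spectral_cog(freqs_hz, magnitudes):
    """First spectral moment: the magnitude-weighted mean frequency in Hz."""
    total = sum(magnitudes)
    return sum(f * m for f, m in zip(freqs_hz, magnitudes)) / total

# Illustrative magnitude spectrum of an /s/-like noise band:
freqs = [2000.0, 4000.0, 6000.0, 8000.0]
mags  = [0.1, 0.4, 1.0, 0.5]

cog = spectral_cog(freqs, mags)
# A retracted /s/ would shift energy downwards and lower this value.
```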
The Deep Linguistic Processing with HPSG Initiative (DELPH-IN) provides the infrastructure needed to produce open-source semantic-transfer-based machine translation systems. We have made available a prototype Japanese-English machine translation system built from existing resources, including parsers, generators, bidirectional grammars and a transfer engine.
Metaphors not only govern our everyday lives, for instance when we speak of the wheel of history or the stage of life; they also provide useful orientation in many areas of science, from the physicists' black holes to the computer metaphor of the brain in cognitive science. One such metaphor is the interpretation of language as a tool.
We present an effort for the development of multilingual named entity grammars in a unification-based finite-state formalism (SProUT). Following an extended version of the MUC7 standard, we have developed Named Entity Recognition grammars for German, Chinese, Japanese, French, Spanish, English, and Czech. The grammars recognize person names, organizations, geographical locations, currency, time and date expressions. Subgrammars and gazetteers are shared as much as possible for the grammars of the different languages. Multilingual corpora from the business domain are used for grammar development and evaluation. The annotation format (named entity and other linguistic information) is described. We present an evaluation tool which provides detailed statistics and diagnostics, allows for partial matching of annotations, and supports user-defined mappings between different annotation and grammar output formats.
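Precision/recall evaluation with optional partial matching of annotations, as provided by the evaluation tool described above, can be sketched as follows. The `(start, end, type)` span representation and the example annotations are assumptions for illustration, not the tool's actual formats:

```python
def overlaps(a, b):
    """True if two half-open (start, end, ...) spans overlap."""
    return a[0] < b[1] and b[0] < a[1]

def prf(gold, pred, partial=False):
    """Precision, recall and F1 over (start, end, type) spans.
    With partial=True, any overlapping span of the same type counts."""
    if partial:
        tp = sum(1 for p in pred if any(p[2] == g[2] and overlaps(p, g) for g in gold))
        recalled = sum(1 for g in gold if any(p[2] == g[2] and overlaps(p, g) for p in pred))
    else:
        gold_set = set(gold)
        tp = sum(1 for p in pred if p in gold_set)
        recalled = tp
    precision = tp / len(pred) if pred else 0.0
    recall = recalled / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical annotations: the second prediction has the right type but
# a slightly wrong boundary, so it scores only under partial matching.
gold = [(0, 2, "PER"), (5, 7, "ORG")]
pred = [(0, 2, "PER"), (5, 6, "ORG")]
```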
While the sortal constraints associated with Japanese numeral classifiers are well-studied, less attention has been paid to the details of their syntax. We describe an analysis implemented within a broad-coverage HPSG grammar that handles an intricate set of numeral classifier construction types and compositionally relates each to an appropriate semantic representation, using Minimal Recursion Semantics.