Linguistik
Year of publication: 2005 · Document type: part of a book · 54 documents, all with fulltext
Most frequent keywords: Artikulation (10), Artikulatorische Phonetik (10), Artikulator (8), Akustische Phonetik (4), Deutsch (4), Phonetik (4), Auditive Phonetik (3), Französisch (3), Konsonant (3), Kontrastive Phonetik (3)
In many languages, a passive-like meaning may be obtained through a non-canonical passive construction. The get passive (1b) in English, the se faire passive (2b) in French and the kriegen passive (3b) in German represent typical manifestations. This squib focuses on the behavior of the get-passive in English and discusses a number of restrictions associated with it, as well as the status of get.
Articulatory token-to-token variability depends not only on linguistic aspects such as the phoneme inventory of a given language but also on speaker-specific morphological and motor constraints. As has been noted previously (Perkell 1997; Mooshammer et al. 2004), speakers with coronally high "dome-shaped" palates exhibit more articulatory variability than speakers with coronally low "flat" palates. One explanation for this is based on perception-oriented control by the speaker: the influence of articulatory variation on the cross-sectional area, and consequently on the acoustics, should be greater for flat palates than for dome-shaped ones. This should force speakers with flat palates to place their tongue very precisely, whereas speakers with dome-shaped palates might tolerate greater variability. A second explanation could be a greater amount of lateral linguo-palatal contact for flat palates, holding the tongue in position. In this study, both hypotheses were tested.
In order to investigate the influence of palate shape on the variability of the acoustic output, a modelling study was carried out. In parallel, an EPG experiment was conducted to investigate the relationship between palate shape, articulatory variability and linguo-palatal contact.
Results from the modelling study suggest that the acoustic variability resulting from a given amount of articulatory variability is higher for flat palates than for dome-shaped ones. Results from the EPG experiment with 20 speakers show that (1) speakers with a flat palate exhibit very low articulatory variability whereas speakers with a dome-shaped palate vary, (2) there is less articulatory variability where there is a large amount of linguo-palatal contact, and (3) there is no relationship between the amount of lateral linguo-palatal contact and palate shape. The results suggest a relationship between token-to-token variability and palate shape; however, the two parameters do not simply correlate. Rather, speakers with a flat palate always show low variability, because the acoustic output constrains their range of permissible variation, whereas speakers with a dome-shaped palate may choose their degree of variability. Since linguo-palatal contact and variability correlate, it is assumed that linguo-palatal contact is a means of reducing articulatory variability.
It is one of the most highly debated issues in loanword phonology whether loanword adaptations are phonologically or phonetically driven. This paper addresses this issue and aims at demonstrating that only the acceptance of both a phonological and a phonetic approximation stance can adequately account for the data found in Japanese. This point is exemplified with the adaptation of German and French mid front rounded vowels in Japanese. It will be argued that the adaptation of German /œ/ and /ø/ as Japanese /e/ is phonologically grounded, whereas the adaptation of French /œ/ and /ø/ as Japanese /u/ is phonetically grounded. This asymmetry in the adaptation of German and French mid front rounded vowels, together with further examples of loans in Japanese, leads to the conclusion that both strategies of loanword adaptation occur in languages. It will be shown that not only perception but also the influence of orthography, of conventions, and of knowledge of the source language plays a role in the adaptation process.
It has been shown that visual cues play a crucial role in the perception of vowels and consonants. Conflicting consonantal stimuli presented in the visual and auditory modalities can even result in the emergence of a third perceptual unit (the McGurk effect). From a developmental point of view, several studies report that newborns can associate the image of a face uttering a given vowel with the auditory signal corresponding to that vowel; visual cues are thus used by newborns. Despite the large number of studies carried out with adult speakers and newborns, very little work has been conducted with preschool-aged children. This contribution aims at describing the use of auditory and visual cues by 4- and 5-year-old French Canadian speakers, compared to adult speakers, in the identification of voiced consonants. Audiovisual recordings of a French Canadian speaker uttering the sequences [aba], [ada], [aga], [ava], [ibi], [idi], [igi], [ivi] were carried out. The acoustic and visual signals were extracted and analysed so that conflicting and non-conflicting stimuli between the two modalities were obtained. The resulting stimuli were presented as a perceptual test to eight 4- and 5-year-old French Canadian speakers and ten adults in three conditions: visual-only, auditory-only, and audiovisual. Results show that, even though visual cues have a significant effect on the identification of the stimuli for both adults and children, children are less sensitive to visual cues in the audiovisual condition. Such results shed light on the role of multimodal perception in the emergence and refinement of the phonological system in children.
The semantics of ellipsis
(2005)
There are four phenomena that are particularly troublesome for theories of ellipsis: the existence of sloppy readings when the relevant pronouns cannot possibly be bound; an ellipsis being resolved in such a way that an ellipsis site in the antecedent is not understood in the way it was there; an ellipsis site drawing material from two or more separate antecedents; and ellipsis with no linguistic antecedent. These cases are accounted for by means of a new theory that involves copying syntactically incomplete antecedent material and an analysis of silent VPs and NPs that makes them into higher order definite descriptions that can be bound into.
The author presents MASSY, the MODULAR AUDIOVISUAL SPEECH SYNTHESIZER. The system combines two approaches to visual speech synthesis. Two control models are implemented: a (data-based) di-viseme model and a (rule-based) dominance model, both of which produce control commands in a parameterized articulation space. Analogously, two visualization methods are implemented: an image-based (video-realistic) face model and a 3D synthetic head. Both face models can be driven by both the data-based and the rule-based articulation model.
The high-level visual speech synthesis generates a sequence of control commands for the visible articulation. For every virtual articulator (articulation parameter), the 3D synthetic face model defines a set of displacement vectors for the vertices of the 3D objects of the head. The vertices of the 3D synthetic head are then moved by linear combinations of these displacement vectors to visualize articulation movements. For the image-based video synthesis, a single reference image is deformed to fit the facial properties derived from the control commands. Facial feature points and facial displacements have to be defined for the reference image. The algorithm can also use an image database with appropriately annotated facial properties; an example database was built automatically from video recordings. Both the 3D synthetic face and the image-based face generate visual speech that is capable of increasing the intelligibility of audible speech.
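The vertex-deformation scheme described above can be illustrated with a minimal sketch. This is not the MASSY implementation; the function name, array shapes, and the toy "jaw"/"lip" parameters are assumptions for illustration. It shows only the core idea: each articulation parameter contributes a displacement field, and the deformed mesh is the base mesh plus a weighted linear combination of those fields.

```python
import numpy as np

def deform_mesh(base_vertices, displacement_vectors, weights):
    """Deform a mesh by a linear combination of displacement fields.

    base_vertices:        (V, 3) rest positions of the V mesh vertices
    displacement_vectors: (P, V, 3) one displacement field per
                          articulation parameter
    weights:              (P,) current articulation parameter values
    """
    # weighted sum over the parameter axis -> one (V, 3) offset field
    offsets = np.tensordot(weights, displacement_vectors, axes=1)
    return base_vertices + offsets

# toy example: 2 vertices, 2 hypothetical parameters (jaw opening, lip
# protrusion); values are made up purely to show the mechanics
base = np.zeros((2, 3))
disp = np.array([
    [[0.0, -1.0, 0.0], [0.0, -0.5, 0.0]],  # "jaw" field lowers vertices
    [[0.5,  0.0, 0.0], [0.0,  0.0, 0.3]],  # "lip" field pushes forward
])
out = deform_mesh(base, disp, np.array([1.0, 0.5]))
print(out)
```

Because the deformation is linear in the weights, a control model only has to output a trajectory of parameter values over time; interpolating the weights frame by frame yields smooth articulation movements.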
Other well-known image-based audiovisual speech synthesis systems, such as MIKETALK and VIDEO REWRITE, concatenate pre-recorded single images or video sequences, respectively. Parametric talking heads such as BALDI control a parametric face with a parametric articulation model. The presented system demonstrates the compatibility of parametric and data-based approaches to visual speech synthesis.
This paper investigates the structural properties of morphosyntactically marked focus constructions, focussing on the often neglected non-focal sentence part in African tone languages. Based on new empirical evidence from five Gur and Kwa languages, we claim that these focus expressions have to be analysed as biclausal constructions even though they do not represent clefts containing restrictive relative clauses. First, we relativize the partly overgeneralized assumptions about structural correspondences between the out-of-focus part and relative clauses, and second, we show that our data do in fact support the hypothesis of a clause coordinating pattern as present in clause sequences in narration. It is argued that we deal with a non-accidental, systematic feature and that grammaticalization may conceal such basic narrative structures.
Studying kinematic behavior in speech production is an indispensable and fruitful methodology for describing, for instance, phonemic contrasts, allophonic variation, and prosodic effects in articulatory movements. More intriguingly, kinematics is also interpreted with respect to its underlying control mechanisms. Several interpretations have been borrowed from motor control studies of arm, eye, and limb movements. They either explain kinematics in terms of fine-tuned control by the Central Nervous System (CNS), or they take into account a combination of influences arising from motor control strategies at the CNS level and from the complex physical properties of the peripheral speech apparatus. We assume that the latter is more realistic and ecological. The aims of this article are: first, to show, via a literature review of the so-called '1/3 power law' in human arm motor control, that this debate is of central importance in human motor control research in general; second, to study a number of speech-specific examples offering a fruitful framework for addressing this issue. It is also suggested, however, that speech motor control differs from general motor control principles in that it exploits specific physical properties such as vocal tract limitations, aerodynamics and biomechanics in order to produce the relevant sounds. Third, experimental and modelling results are described supporting the idea that these three properties are crucial in shaping speech kinematics for selected speech phenomena. Hence, caution should be taken when interpreting kinematic results based on experimental data alone.
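The '1/3 power law' mentioned above states that in curved hand movements the tangential velocity scales with the radius of curvature as v(t) = K · R(t)^(1/3) (equivalently, angular velocity scales with curvature to the 2/3 power). A standard textbook illustration, not taken from this paper, is that an ellipse traced at constant angular frequency satisfies the law exactly; the sketch below verifies the exponent numerically by a log-log regression.

```python
import numpy as np

# Ellipse traced at constant angular frequency: x = a*cos(wt), y = b*sin(wt).
# For this motion, v = K * R**(1/3) holds exactly, with K = w * (a*b)**(1/3).
a, b, omega = 2.0, 1.0, 1.5
t = np.linspace(0.0, 2 * np.pi / omega, 2000, endpoint=False)

# analytic tangential velocity and radius of curvature for this trajectory
s2 = a**2 * np.sin(omega * t)**2 + b**2 * np.cos(omega * t)**2
v = omega * np.sqrt(s2)          # tangential velocity |(dx/dt, dy/dt)|
R = s2**1.5 / (a * b)            # radius of curvature of the ellipse

# fit the exponent beta in log v = log K + beta * log R
beta, logK = np.polyfit(np.log(R), np.log(v), 1)
print(round(beta, 3))  # → 0.333
```

The fitted exponent comes out at 1/3 to numerical precision, which is the signature the motor control literature looks for in recorded arm (and, debatably, speech) kinematics; the open question the article discusses is whether such regularities reflect CNS planning or fall out of peripheral biomechanics.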
Modern society is shaped by changes in the epistemic and institutional structures of science, which in turn trigger change in other areas of society. In this context, as in the development of modern science generally, language, the cultural phenomenon of "scientific language", plays an eminent role; indeed, a "linguistic subdiscipline of research on scientific language" has established itself in recent decades (cf. KRETZENBACHER 1992: 1; HESS-LÜTTICH 1998). "Wissenschaft", however, seems to me to be an (interculturally) problematic concept, if only because the word (together with its derivatives such as Wissenschaftler, wissenschaftlich, Wissenschaftlichkeit) is strongly culture-bound (cf. CLYNE/KREUTZ 2003: 60); the German term Wissenschaft, for instance, does not correspond to the English science. English can undoubtedly look back on an unrivalled career as the universal language of science: researchers, German-speaking ones included, increasingly publish their most important findings in English. English today accounts for over 90 percent of scientific publications worldwide, while only a few percent of scientific publishing remains German-language. The number of scientific conferences (even within the German-speaking world) that admit English as the sole conference language is also steadily increasing. Moreover, ever more lectures, and indeed entire degree programmes, at otherwise German-speaking universities are offered in English. "Top-level research speaks English", as Hubert Markl, later president of the Max-Planck-Gesellschaft, laconically observed as early as twenty years ago (source: DUZ, 22/2002, p. 12).
Nevertheless, attention is repeatedly drawn, often somewhat euphorically, to East-Central, Eastern and South-Eastern Europe, regions that traditionally counted as a refuge of German, among other things as a language of science, and to a large extent still do. The "Physikalische Zeitschrift der Sowjetunion", which appeared in German from 1932 to 1937, may serve as an example. Within this interesting and at the same time highly complex field of tension, the present contribution addresses the topic of 'languages in the sciences' as media of thought and presentation. On the one hand, the problem of the multilingualism of the sciences (with particular attention to German) in the multilingual, multicultural and culturally sensitive contact area of Central and Eastern Europe will be discussed; on the other hand, since other subregions, such as Romania, are also represented at our conference, the particular focus will be on Hungary. The main aim of the discussion is to trace diachronically the development of the scientific languages active in this region, to document from multiple perspectives, partly with the help of empirical data, the current situation regarding languages in academic teaching, research languages (i.e. languages of research communication) and publication languages, and to reflect on current tendencies.