Linguistics
Refine
Year of publication
- 2001 (102)
Document Type
- Part of a Book (56)
- Article (17)
- Working Paper (8)
- Conference Proceeding (7)
- Preprint (5)
- Review (4)
- Book (3)
- Diploma Thesis (1)
- Report (1)
Language
- English (102)
Has Fulltext
- yes (102)
Is part of the Bibliography
- no (102)
Keywords
- Syntax (30)
- Predicate (18)
- Semantics (18)
- English (12)
- Information structure (10)
- German (8)
- Sentence accent (8)
- Chinese (5)
- Contrastive linguistics (5)
- Russian (5)
Institute
It is often assumed that the goal of typology is to define the notion 'possible human language'. This view, which I call the Universalist Typology view, is shared, for example, by virtually all contributors to Bynon & Shibatani's 1995 volume Approaches to Language Typology, and by Moravcsik in her review of this volume in Linguistic Typology 1 (p. 105). In the following I claim that this assumption is fundamentally mistaken. To clarify the theoretical status of what is meant by 'possible human language', I argue here for a distinction between typological theory (theoretical typology) and grammatical theory (theoretical syntax and theoretical morphology) as distinct subdisciplines of linguistics.
The role of migration and language contact in the development of the Sino-Tibetan language family
(2001)
This paper is part of a research project on OT Syntax and the typology of the free relative (FR) construction. It concentrates on the details of an OT analysis and some of its consequences for OT syntax. I will not present a general discussion of the phenomenon and the many controversial issues it is famous for in generative syntax.
I discuss the status of WH-words for interrogative interpretations, and show that the derivation of constituent questions evolves from a specific interplay of syntactic and semantic representations with pragmatics. I argue that WH-pronouns are not ‘interrogative’. Rather, they are underspecified elements; due to this underspecification, WH-words can form a constitutive part not only of interrogative, but also of exclamative and declarative clauses. WH-words introduce a variable of a particular conceptual domain into the semantic representation. Accordingly, they have to be specified for interpretation. Different WH-contexts give rise to different interpretations. In a cross-linguistic overview, I discuss the characteristic elements contributing to the derivation of interrogatives. I argue that specific particles or their phonologically empty counterparts in the head of CP contribute the interrogative aspect. The speech act of ‘asking’ is then carried out via an intonational contour that identifies a question. By default, this intonational contour operates on interrogative sentences; however, other sentence formats – in particular, those of declarative sentences – are possible as well. The distinction of (a) grammatical (syntactic, semantic and phonological) sentence formats for interrogative and declarative sentences, and (b) intonational contours serving the discrimination of speech acts like questions and assertions, can be related to psychological and neurological evidence.
What role does language play in the development of numerical cognition? In the present paper I argue that the evolution of symbolic thinking (as a basis for language) laid the grounds for the emergence of a systematic concept of number. This concept is grounded in the notion of an infinite sequence and encompasses number assignments that can focus on cardinal aspects ("three pencils"), ordinal aspects ("the third runner"), and even nominal aspects ("bus #3"). I show that these number assignments are based on a specific association of relational structures, and that it is the human language faculty that provides a cognitive paradigm for such an association, suggesting that language played a pivotal role in the evolution of systematic numerical cognition.
In linguistics and the philosophy of language, the mass/count distinction has traditionally been regarded as a bi-partition on the nominal domain, where typical instances are nouns like "beef" (mass) vs. "cow" (count). In the present paper, we argue that this partition reveals a system that is based on both syntactic features and conceptual features, and present experimental evidence suggesting that the discrimination of the two kinds of features has a psychological reality.
In the present paper, I will discuss the semantic structure of nouns and nominal number markers. In particular, I will discuss the question of whether it is possible to account for the syntactic and semantic formation of nominals in a parallel way; that is, I will try to give a compositional account of nominal semantics. The framework that I will use is "two-level semantics". The semantic representations (SRs) and their type-theoretical basis will account for general cross-linguistic characteristics of nouns and nominal number and will show interdependencies between noun classes, number marking and cardinal constructions. While the analysis will give a unified account of bare nouns (like dog / water), it will distinguish between the different kinds of nominal terms (like a dog / dogs / water). Following the proposal, the semantic operations underlying the formation of the SR are basically the same for DPs as for CPs. Hence, from such an analysis, independent semantic arguments can be derived for a structural parallelism of nominals and sentences - that is, for the "sentential aspect" of noun phrases. I will first give a sketch of the theoretical background. I will then discuss the cross-linguistic combinatorial potential of nominal constructions, that is, the potential of nouns and number markers to combine with other elements and form complex expressions. This will lead to a general type-theoretical classification for the elements in question. In the next step, I will model the referential potential of nominal constructions. Together with the combinatorial potential, this will give us semantic representations for the basic elements involved in nominal constructions. In an overview, I will summarize our modeling of nouns and nominal number. I will then discuss in an outlook the "sentential aspect" of noun phrases.
A model is proposed that interprets a variety of connected speech processes as resulting from prosodic modulations at different tiers of functional speech motor control along the hypo-hyper dimension [10]. The general background of the model is given by the trichotomy of A-, B- and C-prosodic phenomena [15] that together constitute the acoustic makeup of any speech utterance (with regard to their respective time domains at the utterance/phrase level, the syllabic level and the segmental level).
The first printed newspapers in the modern sense of the word appeared in the seventeenth century. They were weekly publications which contained regular reports by correspondents from all over Europe, mainly on political matters. Although the new medium as such was innovative in its general organization, the individual news items were produced by following text patterns which already had a history of their own. The article reports recent research on the emerging constellation of text types in the first two German newspapers, the Aviso and the Relation of the year 1609. It focuses on delineating a prototype-based typology of the relevant text types and on tracing these forms of presentation of news items back to earlier genres and media like chronicles, handwritten newsletters, printed pamphlets and biannual news collections. The general interest of this line of research as a contribution to historical pragmatics lies in the attempt to see historical text types in an evolutionary perspective, taking into account the context of text production and, as far as possible, the reactions of the reading public.
Generative grammar
(2001)
Generative Grammar is the label of the most influential research program in linguistics and related fields in the second half of the 20th century. Initiated by a short book, Noam Chomsky's Syntactic Structures (1957), it became one of the driving forces among the disciplines jointly called the cognitive sciences. The term generative grammar refers to an explicit, formal characterization of the (largely implicit) knowledge determining the formal aspect of all kinds of language behavior. The program had a strong mentalist orientation right from the beginning, documented, e.g., in a fundamental critique of Skinner's Verbal Behavior (1957) by Chomsky (1959), arguing that behaviorist stimulus-response theories could in no way account for the complexities of ordinary language use. The "Generative Enterprise", as the program was called in 1982, went through a number of stages, each of which was accompanied by discussions of specific problems and consequences within the narrower domain of linguistics as well as the wider range of related fields, such as ontogenetic development, psychology of language use, or biological evolution. Four stages of the Generative Enterprise can be marked off for expository purposes.
Syntax-semantics interface
(2001)
Intermediate cumulation
(2001)
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks. Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances. Such larger structures are not only desirable for a deeper syntactic analysis; they also constitute a necessary prerequisite for assigning function-argument structure. The present paper offers a similarity-based algorithm for assigning functional labels such as subject, object, head, complement, etc. to complete syntactic structures on the basis of pre-chunked input. The evaluation of the algorithm has concentrated on measuring the quality of functional labels. It was performed on a German and an English treebank using two different annotation schemes at the level of function-argument structure. The results of 89.73% correct functional labels for German and 90.40% for English validate the general approach.
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks. Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances. The TüSBL parser extends current chunk parsing techniques by a tree-construction component that extends partial chunk parses to complete tree structures including recursive phrase structure as well as function-argument structure. TüSBL's tree-construction algorithm relies on techniques from memory-based learning that allow similarity-based classification of a given input structure relative to a pre-stored set of tree instances from a fully annotated treebank. A quantitative evaluation of TüSBL has been conducted using a semi-automatically constructed treebank of German that consists of approx. 67,000 fully annotated sentences. The basic PARSEVAL measures were used, although they were developed for parsers that have as their main goal a complete analysis that spans the entire input. This runs counter to the basic philosophy underlying TüSBL, which has as its main goal robustness of partially analyzed structures.
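The memory-based classification underlying this line of work can be illustrated with a minimal sketch. This is not TüSBL's actual implementation; the chunk features (chunk category, clause position, preceding chunk category), the stored instances, and the labels are hypothetical, chosen only to show how similarity-based voting over stored treebank instances assigns a functional label to a new chunk.

```python
from collections import Counter

# Hypothetical stored instances: (chunk category, position in clause,
# preceding chunk category) paired with a functional label. Illustrative
# only -- not drawn from the treebanks used in the papers above.
TRAINING = [
    (("NC", 0, "NONE"), "subject"),
    (("VC", 1, "NC"), "head"),
    (("NC", 2, "VC"), "object"),
    (("PC", 3, "NC"), "complement"),
    (("NC", 0, "NONE"), "subject"),
    (("NC", 2, "VC"), "object"),
]

def overlap(a, b):
    # Feature-overlap similarity, the simplest metric used in
    # memory-based learning: count matching feature values.
    return sum(1 for x, y in zip(a, b) if x == y)

def classify(instance, k=3):
    # Rank stored instances by similarity to the input and take a
    # majority vote among the k nearest neighbours.
    ranked = sorted(TRAINING, key=lambda ex: overlap(instance, ex[0]),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(classify(("NC", 2, "VC")))   # → object
print(classify(("NC", 0, "NONE"))) # → subject
```

The design point this sketch captures is that no rules are induced: classification is pure lookup plus voting over stored exemplars, which is what makes the approach robust on partially analyzed input.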