Linguistics
Refine
Year of publication
- 2004 (7)
Document Type
- Part of a Book (3)
- Conference Proceeding (2)
- Article (1)
- Preprint (1)
Language
- English (7)
Has Fulltext
- yes (7)
Is part of the Bibliography
- no (7)
Keywords
- Relativsatz (relative clause) (7)
Institute
- Extern (1)
How the left-periphery of a wh-relative clause determines its syntactic and semantic relationships
(2004)
This paper discusses a certain class of German relative clauses which are characterized by a wh-expression overtly realized at the left periphery of the clause. While investigating empirical and theoretical issues regarding this class of relatives, it argues that a wh-relative clause relates syntactically to a functionally complete sentential projection and semantically to entities of various kinds that are abstracted from the matrix clause. It is shown that this grammatical behaviour can clearly be attributed to the properties of the elements positioned at the left periphery of a wh-relative clause. Finally, a lexically-based analysis couched in the framework of HPSG is given that accounts for the data presented.
We argue that Malagasy (and related W. Austronesian languages!) has a positive setting for a macro-parameter RICH VOICE MORPHOLOGY which builds complex predicates that code the theta role of their argument: S = [[PreN(6) + (X)] + DP]. Manifestations of this parameter are: (1) Case and theta role are assigned in situ in nuclear clauses with no movement or co-indexing to a topic position. (2) Relative Clauses (and other "extraction" structures) satisfy the "Subjects Only" constraint, again with no movement or indexing. (3) UTAH is freely violated, as theta role assignment derives from compositional semantic interpretation. Predicates resemble lexical Ns in assigning case directly to arguments without using Prepositions and in combining directly with Dets to form DPs that include tense and negation (Keenan 1995, 2000). The major Predicate-Argument type is modeled on the Noun+Possessor one, not the Verb+Object one.
This paper takes a close look at the properties of Hungarian relative clauses that occur in the left periphery of the main clause, preceding a (pro)nominal associate. It will be shown that these left-peripheral relative clauses differ in many ways from relative clauses dislocated on the right periphery, as well as from relative clauses embedded under a (pro)nominal head. To capture the precise syntax of these left-peripheral clauses, they will be compared to ordinary left-dislocated items, with which they have some properties in common. Despite the surface similarities between the two, however, there are a few decisive aspects of behaviour, most notably distributional properties and connectivity effects, which argue against taking left-peripheral relatives as cases of clausal left-dislocates in Hungarian. Instead, one is led to consider these as correlative clauses, on the basis of the properties they share with well-established correlatives in languages like Hindi.
This paper discusses a special kind of syntax-semantics mismatch: a noun with a relative clause is interpreted as if it were a complement clause. An analysis in terms of Lexical Resource Semantics is developed which provides a uniform account for "normal" relative clauses and for the discussed type of relative clause.
The interpretation of traces
(2004)
This paper argues that parts of the lexical content of an A-bar moved phrase must be interpreted in the base position of movement. The argument is based on a study of deletion of a phrase that contains the base position of movement. I show that deletion licensing is sensitive to the content of the moved phrase. In this way, I corroborate and extend conclusions based on Condition C reconstruction by N. Chomsky and D. Fox. My result provides semantic evidence for the existence of traces and gives semantic content to the A/A-bar distinction.
Relative clauses (RCs) in Persian are head-modifying constituents, all typically introduced by the invariant complementizer ke. Persian RCs are Unbounded Dependency Constructions (UDCs), containing either a gap or a resumptive pronoun (RP). In some positions only gaps are allowed, and in other positions only RPs. There are also some positions where both gaps and RPs are alternatively allowed. Illustrating the striking similarities between Persian gaps and RPs, I will provide a unified HPSG approach that handles the dependency between the licensing structure and the gap/RP with a single mechanism, using only the SLASH feature. Similar to Pollard and Sag's (1994) approach to the bottom of the dependency, I will assume a special sign at the bottom. However, my sign may have a nonempty PHON value. I will introduce a feature called GAPTYPE, which is a NONLOCAL feature whose value can be either trace or rp. I will introduce two constraints to capture the pattern of distribution of RPs and traces. At the top of the dependency, I will bind the nonempty SLASH at the complementizer point. I will propose a lexical entry for the complementizer ke that will account for the binding of SLASH by the feature BIND, which has a nonempty set as value.
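The mechanism described in the abstract can be sketched as a toy feature-structure model. The sign at the bottom of the dependency carries a GAPTYPE feature (trace or rp); unlike a trace, an RP has a nonempty PHON value, and the complementizer ke binds the SLASH value via BIND. Note that the position classes in ALLOWED below are hypothetical placeholders for illustration only, not the paper's actual constraint inventory.

```python
# Toy model of the SLASH-based dependency described in the abstract.
# The ALLOWED table is an illustrative assumption, NOT the paper's
# actual distributional constraints for Persian.
ALLOWED = {
    "subject":   {"trace"},        # gap only (assumed for illustration)
    "object":    {"trace", "rp"},  # both allowed (assumed)
    "possessor": {"rp"},           # RP only (assumed)
}

def bottom_sign(position, gaptype):
    """Build the special sign at the bottom of the dependency."""
    if gaptype not in ALLOWED[position]:
        raise ValueError(f"{gaptype} not licensed in {position} position")
    return {
        "PHON": ("pron",) if gaptype == "rp" else (),  # RPs are overt, traces are not
        "SLASH": {"NP"},                               # dependency percolates upward
        "GAPTYPE": gaptype,
    }

def ke(clause_sign):
    """Lexical entry for the complementizer ke: binds the nonempty SLASH via BIND."""
    return {
        "PHON": ("ke",) + clause_sign["PHON"],
        "SLASH": set(),                  # dependency discharged at the top
        "BIND": clause_sign["SLASH"],
    }
```

A single licensing mechanism thus covers both realizations: the GAPTYPE value and the PHON value differ, but SLASH percolation and its binding at ke are identical for gaps and RPs.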
The argument that I tried to elaborate on in this paper is that the conceptual problem behind the traditional competence/performance distinction does not go away, even if we abandon its original Chomskyan formulation. It returns as the question about the relation between the model of the grammar and the results of empirical investigations – the question of empirical verification. The theoretical concept of markedness is argued to be an ideal correlate of gradience. Optimality Theory, being based on markedness, is a promising framework for the task of bridging the gap between model and empirical world. However, this task requires not only a model of grammar, but also a theory of the methods that are chosen in empirical investigations and how their results are interpreted, and a theory of how to derive predictions for these particular empirical investigations from the model. Stochastic Optimality Theory is one possible formulation of a proposal that derives empirical predictions from an OT model. However, I hope to have shown that it is not enough to take frequency distributions and relative acceptabilities at face value, and simply construct some Stochastic OT model that fits the facts. These facts first of all need to be interpreted, and those factors that the grammar has to account for must be sorted out from those about which grammar should have nothing to say. This task, to my mind, is more complicated than the picture that a simplistic application of (not only) Stochastic OT might draw.
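The evaluation procedure that Stochastic OT uses to derive frequency distributions can be sketched as follows: each constraint has a ranking value on a continuous scale, Gaussian noise is added at every evaluation, and standard OT evaluation applies to the resulting ranking. The candidate and constraint names below are placeholders; only the evaluation scheme itself is from Stochastic OT.

```python
import random

def stochastic_ot_winner(candidates, constraints, noise_sd=2.0, rng=None):
    """One Stochastic OT evaluation (Boersma-style).

    candidates:  dict mapping candidate name -> dict of constraint -> violation count
    constraints: dict mapping constraint name -> ranking value on a continuous scale
    """
    rng = rng or random.Random()
    # Perturb each ranking value with Gaussian noise, then rank high-to-low.
    ranking = sorted(constraints,
                     key=lambda c: constraints[c] + rng.gauss(0, noise_sd),
                     reverse=True)
    remaining = list(candidates)
    # Standard OT evaluation: each constraint in turn filters out candidates
    # with more violations than the current minimum.
    for c in ranking:
        fewest = min(candidates[cand].get(c, 0) for cand in remaining)
        remaining = [cand for cand in remaining
                     if candidates[cand].get(c, 0) == fewest]
        if len(remaining) == 1:
            break
    return remaining[0]
```

Running many evaluations yields an output frequency distribution: constraints with well-separated ranking values behave almost categorically, while constraints whose values lie close together produce variable outputs at rates determined by the overlap of their noise distributions — which is exactly the kind of model that, as argued above, should not simply be fitted to raw frequency data.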