430 Germanic Languages; German
The Smurf comics series is famous, among other things, for the so-called "smurf language", in which words or parts of words can be replaced by "smurf". We will argue that this "smurfing" has the properties of placeholding. Based on data from German translations of Smurf comics, we will provide a formalization of smurfing in German which can be generalized to a theory of placeholder expressions.
This paper presents an incremental approach to verb clusters in German which radically differs from standard HPSG accounts. While the common assumption is that the verbs in subordinate clauses form clusters and accumulate all their valence requirements on a SUBCAT list, the assumption in this paper is that the arguments in verb-final clauses are encapsulated incrementally into syntactic and semantic structures before the verbs are attached. The proposed analysis is in line with psycholinguistic findings. A grammar fragment of German demonstrating an implementation of the analysis is presented.
The paper addresses verbal agreement in German Sign Language from a constraint-based perspective. Based on Meir's Agreement Morphology Principles, it presents an HPSG analysis of plain, regular, and backwards agreement verbs that models the interaction between phonological (manual) features and syntactico-semantic relationships within a verbal sign by well-defined lexical restrictions. We argue that a sign-based declarative analysis can provide an elegant approach to agreement in sign language since it makes it possible to exploit cross-modular constraints within the grammar, and hence permits a direct manipulation of all relevant phonological features of a verb depending on its syntactic and semantic properties.
This paper presents a new analysis of quirky subjects according to which quirky subjects bear multiple grammatical relations and hence differ syntactically from regular subjects. This contrasts with the standard analysis of quirky subjects according to which quirky subjects are regular subjects bearing lexical case and therefore differ only morphologically from regular subjects. Based on the behavior of quirky subjects in Faroese and German, I argue that the syntactic account is superior. Faroese shows that the case borne by a quirky subject is not lexical, whereas German shows that quirky subjects are not regular subjects to begin with. The behavior of quirky subjects in Icelandic, on which the standard analysis is based, is argued to be the result of a morphosyntactic peculiarity of Icelandic.
This paper discusses recent LFG proposals on resultative and benefactive constructions. I show that neither resultative nor benefactive constructions are fully fixed and that this flexibility requires traces or a stipulation of constructional templates at several unrelated places in the grammar, something that is not necessary in lexical approaches. A second part of the paper deals with the active/passive alternation and shows that language-internal generalizations are missed if constraints are assumed to be contributed by phrase structure rules. A third part examines the parallel constructions in German and shows that cross-linguistic generalizations are not captured by phrasal approaches.
I argue for a new type of non-standard constituent in German: a modifier-collocational cluster. This type of cluster combines (i) a modifier and (ii) a PP from a light-verb construction (known in German as a Funktionsverbgefüge, FVG) or a bare noun. Such strings are found in German in initial (prefield) position in certain cases of apparent multiple fronting. We are dealing with a syntax-semantics mismatch here, since the modifier does not semantically modify the element with which it first combines syntactically. I show that the modifier is a collocate both of its co-prefield element and of the verb. I propose a schema which lexically licenses the building of such clusters, and I show how we can encode information about what I refer to as collocational selection in the lexical entries of the types of lexemes involved in these multi-word strings. The analysis can be seen as lexical but does not require lexical storage of phrasal elements.
We show how the variation in the passive in Danish, English, and German can be accounted for. The dimensions in which the three languages differ are:
- the existence of a morphological passive in Danish
- a subject requirement in Danish and English, resulting in expletive insertion in impersonal constructions in Danish and the absence of impersonal passives in English
- the possibility of promoting the secondary object to subject in Danish
The differences are accounted for by differences in the structural/lexical case distinction and by mapping processes that insert expletives in Danish. The passive in general is accounted for by a lexical rule that is uniform across languages and hence captures the generalization regarding passive.
The present article discusses several aspects of the so-called correlate-es construction in German. This complex clausal construction can be identified by a correlative nominal element es ('it') occurring in the matrix clause and a right-peripheral full clausal argument linked to es. The article supports the hypothesis that correlative es has a Janus-faced nature between an expletive and a referential meaning. This is why existing approaches are not sufficient to capture the properties of the discussed construction in its entirety. The first part of the article sums up the common view on correlative es, including the empirical properties of the construction as well as a brief survey of the relevant previous approaches to correlative es. Based on new empirical data, the second part of the article shows that none of these accounts captures all relevant facts of the correlate-es construction, because existing approaches usually ignore that the realization of correlative es is verb-class dependent. Hence, a new constraint-based analysis is developed that takes both empirical observations into account: the verb-class dependence and the Janus-faced nature.
This paper deals with expletives that are inserted into clauses for structural reasons. We focus on the Germanic languages Danish, German, and Yiddish. In Danish and Yiddish, expletives are inserted in preverbal position in certain wh-clauses: in Danish, such insertion is observed when the subject is locally extracted from an SVO configuration in non-assertive clauses. In Yiddish, wh-clauses are formed from a wh-phrase and a V2 clause. If no other element is fronted in the embedded V2 clause, an expletive is inserted in non-assertive clauses in order to meet the V3 requirement for embedded clauses. In addition to embedded wh-clauses, declarative V2 clauses also allow the insertion of an expletive. In Danish, the expletive fills the subject position and is not necessarily fronted. In German and Yiddish, the expletive has to occur in fronted position. In contrast to Danish and Yiddish, German does not insert expletives into embedded wh-clauses; they are inserted only into declarative V2 clauses in order to fulfill the V2 requirement without having to front another constituent. In this paper we try to provide an account that captures the commonalities between the three languages while being able to account for the differences.
This paper investigates the information-structural characteristics of extraposed subjects in Early New High German (ENHG). Based on new quantitative data from a parsed corpus of ENHG, I will argue that unlike objects, subjects in ENHG have two motivations for extraposing. First, subjects may extrapose in order to receive narrow focus, which is the pattern Bies (1996) has shown for object extraposition in ENHG. Secondly, however, subjects may extrapose in order to receive a default sentence accent, which is most visible in the case of presentational constructions. This motivation does not affect objects, which may achieve the same prosodic goal without having to extrapose. The study has two major consequences: (1) subject extraposition in ENHG demonstrates that there is not necessarily a one-to-one correspondence between syntactic structure and information structural effect (cf. Féry 2007); and (2) the overall phenomenon of DP extraposition in ENHG fits into a broader set of crosslinguistic focus phenomena which demonstrate a subject-object asymmetry (cf. Hartmann and Zimmermann 2007, Skopeteas and Fanselow 2010), raising important questions about the relationship between argument structure and information structural notions.
Coherence generally refers to a kind of predicate formation where a verb forms a complex predicate with the head of its infinitival complement. Adjectives taking infinitival complements have also been shown to allow coherence, but the exact conditions for coherence with adjectives appear not to have been addressed in the literature. Based on a corpus study (supplemented with grammaticality judgements from native speakers), we show that adjectives fall into three semantically and syntactically defined classes correlating with their ability to construct coherently: non-factive and non-gradable adjectives allow coherence, factive and gradable adjectives do not allow coherence, and non-factive and gradable adjectives are tolerated with coherence. On the basis of previous work on coherence in German, we argue that coherence allows the infinitival complement of a verb or an adjective to be "split up", so that the head and a dependent of this head are associated with different information structural functions. In this respect coherence patterns with extraction structures, where the extracted constituent has an information structural function different from the constituent from which it is extracted. Following literature on the information structural basis of extraction islands, we show how the lack of coherence with factive adjectives follows from their complements' being information structurally backgrounded, while the infinitival complements of non-factive adjectives tend toward a higher degree of fusion with the matrix clause. We also show that coherence is observed with attributive adjectives as well, arguing that coherence is not a distinctly verbal property. Finally, we provide an analysis of coherence with adjectives within HPSG.
This paper addresses information-structural restrictions on the occurrence of what is known as "multiple fronting" in German. Multiple fronting involves the realization of (what appears to be) more than one constituent in the first position of main clause declaratives, a clause type that otherwise respects the verb-second constraint of German. Relying on a large body of naturally occurring instances of multiple fronting with the surrounding discourse context, we show that in certain contexts, multiple fronting is fully grammatical in German, in contrast to what has sometimes been claimed previously. Examination of this data reveals two different patterns, which we analyze in terms of two distinct constructions, each instantiating a specific pairing of form, meaning, and contextual appropriateness.
In this paper we investigate German idioms which contain phraseologically fixed clauses (PCls). To provide a comprehensive HPSG theory of PCls, we extend the idiom theory of Soehn (2006) in such a way that it can distinguish different degrees of regularity in idiomatic expressions. An in-depth analysis of two characteristic PCls shows how our two-dimensional theory of idiomatic expressions can be applied and illustrates the scope of the theory.
On predication (2009)
This paper discusses copula constructions in English, German, and Danish and argues that a uniform analysis of all copula constructions is inappropriate. I provide evidence from German that there should be a raising variant of the copula in addition to an identificational copula. A unary schema is provided that maps referential NPs that can be used as arguments onto predicational NPs. Data from Danish shows that predicational NPs can be subjects in specificational structures. An account for such specificational structures is provided and the different behaviour of predicational and specificational structures with regard to question tags is explained. A similar contrast can be found in German left dislocation structures, which follows from the assumptions made in this paper.
A modified treatment of complex predicate formation allows for a reduction of selectional features (that is, the abolition of xcomp or vcomp) and for a uniform treatment of predicational phrases in copula constructions and resultative secondary predicates. This yields an account of constituent order variants that remained unexplained by earlier analyses.
The paper discusses the so-called adverbial use of the wh-pronoun was ('what'), which establishes a non-standard interrogative construction type in German. It argues that the adverbial use of was ('what') is based on the lexical properties of a categorially deficient pronoun was ('what'), which bears a causal meaning. In addition, adverbial was ('what') differs from canonical argument was ('what') in that it is analyzed as a functor which is generated in clause-initial position.
Drawing on empirical facts mainly provided by d'Avis (2001), it is shown that was ('what') behaves ambivalently with regard to the wh-property: on the one hand, was ('what') can introduce an interrogative clause, but on the other hand it cannot license wh-phrases in situ. While formally analyzing the data against the background of existing accounts of wh-interrogatives couched in the framework of Head-driven Phrase Structure Grammar, an analysis is developed that separates two pieces of information in order to keep track of the wh-information percolating in an interrogative clause. Whereas the WH value models wh-fronting and pied-piping phenomena, the QUE value links syntactic and semantic information and thus keeps track of wh-phrases in situ.
Preposition-noun combinations (PNCs) are compositional and productive, but not fully regular. In school grammars and many theoretical approaches, PNCs are neglected, but they have recently been addressed in an HPSG analysis by Baldwin et al. (2006). After discussing some basic properties of PNCs, we show that statistical methods can be employed to prove that PNCs are indeed productive and compositional, which in turn implies that PNCs should receive a syntactic analysis. Such an analysis, however, is impeded by the limited regularity of the construction. We point out why adding semantic conditions to syntactic schemata might be necessary but not sufficient, and then turn to a framework which allows the derivation of syntactic (and semantic) generalizations from linguistic data without recourse to introspective judgments.
This paper is a follow-up to Müller (2006). It contains some comments on suggestions about the interaction of phrasal Constructions with constituent order that Adele Goldberg has made on various occasions. In addition, the paper discusses various HPSG analyses of particle verbs that assume lexical representations including phonologically specified parts of particle-verb lexical entries. A recent phrasal analysis of resultatives (Haugereid, 2007) is discussed as well, and it is pointed out that control constructions pose problems for phrasal analyses that do not assume empty elements but require that the subject be realized in a phrasal configuration.
Licenser rules were originally introduced in Müller (1999) as part of a grammar based on discontinuous constituents. We propose licenser rules as a means to avoid underspecified empty elements in grammars with continuous constituents. We applied them to a verb-movement analysis of the German main clause with a right sentence bracket and to complement extraposition. To reduce the number of unnecessary hypotheses, we extended the licenser-rule concept with a licenser-binding technique. We compared the licenser-rule approach to an approach based on underspecified traces with respect to processing performance. In our experiment, the use of licenser rules reduced parse time by a factor of 13.5.
In this paper, we report on an experiment showing how the introduction of prosodic information from detailed syntactic structures into synthetic speech leads to better disambiguation of structurally ambiguous sentences. Using modifier attachment (MA) ambiguities and subject/object fronting (OF) in German as test cases, we show that prosody which is automatically generated from deep syntactic information provided by an HPSG generator can lead to considerable disambiguation effects, and can even override a strong semantics-driven bias. The architecture used in the experiment, consisting of the LKB generator running a large-scale grammar for German, a syntax-prosody interface module, and the speech synthesis system MARY, is shown to be a valuable platform for testing hypotheses in intonation studies.
This paper presents an overview of a proposed linearisation grammar, which relies solely upon information residing in lexical heads to constrain word order. Word order information, which encompasses discontinuity as well as linear precedence conditions, is explicitly encoded as part of the feature structure of lexical heads, thus dispensing with a separate LP specification or 'phenogrammatical' layer standardly posited for linearisation. Instead, such lexicon-originated word order constraints are enforced in projections, propagated upwards and accumulated in the compound PHON feature, which represents phonological yields in an underspecified manner. Though limited somewhat in generative capacity, this approach covers the key phenomena that motivated linearisation grammars and offers a simpler alternative to the standard DOM-oriented theory.