Linguistik
Refine
Year of publication
Document Type
- Part of a Book (591)
- Conference Proceeding (395)
- Article (377)
- Working Paper (117)
- Preprint (97)
- Report (32)
- Book (26)
- Doctoral Thesis (16)
- Part of Periodical (16)
- Review (16)
Language
- English (1688) (remove)
Is part of the Bibliography
- no (1688)
Keywords
- Syntax (114)
- Englisch (109)
- Deutsch (86)
- Spracherwerb (80)
- Semantik (71)
- Phonologie (64)
- Informationsstruktur (51)
- Phonetik (48)
- Japanisch (43)
- Thema-Rhema-Gliederung (42)
- Wortstellung (40)
- Sprachtest (36)
- Sprachtypologie (36)
- Sinotibetische Sprachen (35)
- Morphologie (34)
- Computerlinguistik (33)
- Intonation <Linguistik> (32)
- Koreanisch (32)
- Polnisch (31)
- Verb (31)
- Formale Semantik (30)
- Generative Transformationsgrammatik (30)
- Russisch (30)
- Metapher (29)
- Prädikat (29)
- Kontrastive Linguistik (28)
- Prosodie (28)
- Relativsatz (28)
- Linguistik (27)
- Optimalitätstheorie (27)
- Chinesisch (26)
- Bantusprachen (25)
- Nominalisierung (24)
- Pragmatik (24)
- Pronomen (23)
- Lexikologie (22)
- Grammatik (21)
- Nominalphrase (21)
- Koordination <Linguistik> (20)
- Adjunkt <Linguistik> (19)
- Artikulation (19)
- Bedeutungswandel (19)
- Tibetobirmanische Sprachen (18)
- Vergleichende Sprachwissenschaft (18)
- Grammatiktheorie (17)
- Slawische Sprachen (17)
- Topikalisierung (17)
- Anapher <Syntax> (16)
- Kasus (15)
- Niederländisch (15)
- Rezension (15)
- Griechisch (14)
- Indogermanische Sprachen (14)
- Morphosyntax (14)
- German (13)
- Interrogativsatz (13)
- Morphologie <Linguistik> (13)
- Nungisch (13)
- Sprachverstehen (13)
- focus (13)
- Adverb (12)
- Artikulatorische Phonetik (12)
- Bindungstheorie <Linguistik> (12)
- Kindersprache (12)
- Negation (12)
- Passiv (12)
- Referenzidentität (12)
- Aspekt (11)
- Klitisierung (11)
- Kongress (11)
- Sprachliche Universalien (11)
- Thematische Relation (11)
- Valenz <Linguistik> (11)
- Arabisch (10)
- Aspekt <Linguistik> (10)
- Bulgarisch (10)
- Ellipse <Linguistik> (10)
- Extraposition (10)
- Französisch (10)
- Freier Relativsatz (10)
- Kopula (10)
- Maschinelle Übersetzung (10)
- Neugriechisch (10)
- Nomen (10)
- Syntaktische Analyse (10)
- Tagalog (10)
- Tempus (10)
- Bedeutung (9)
- Head-driven phrase structure grammar (9)
- Multicomponent Tree Adjoining Grammar (9)
- Präposition (9)
- Qiang-Sprache (9)
- Quantor (9)
- Retroflex (9)
- Satzakzent (9)
- Topik (9)
- Adjektiv (8)
- Argumentstruktur (8)
- Artikulator (8)
- Dänisch (8)
- Generative Grammatik (8)
- Konsonant (8)
- Mittelenglisch (8)
- Oberflächenstruktur <Linguistik> (8)
- Reflexivpronomen (8)
- Silbe (8)
- Sprachstatistik (8)
- Syntaktische Kongruenz (8)
- Tiefenstruktur (8)
- Ungarisch (8)
- Zischlaut (8)
- syntax (8)
- Adverbiale (7)
- Akustische Phonetik (7)
- Baltoslawische Sprachen (7)
- Chewa-Sprache (7)
- Diskursanalyse (7)
- Ergativ (7)
- Genus verbi (7)
- Hilfsverb (7)
- Kognitive Linguistik (7)
- Konnotation (7)
- Lautwandel (7)
- Spezifität (7)
- Verschlusslaut (7)
- Vokal (7)
- information structure (7)
- prosody (7)
- Übersetzung (7)
- Argument <Linguistik> (6)
- Aufsatzsammlung (6)
- Beschränkung <Linguistik> (6)
- Drung (6)
- Grammatische Kategorie (6)
- Italienisch (6)
- Kontrastive Phonetik (6)
- Kontrastive Syntax (6)
- Lexicalized Tree Adjoining Grammar (6)
- Lokativ (6)
- Malagassi-Sprache (6)
- Norwegisch (6)
- Palatalisierung (6)
- Persisch (6)
- Personalpronomen (6)
- Plural (6)
- Possessivität (6)
- Prädikation (6)
- Präsupposition (6)
- Quantifizierung <Linguistik> (6)
- Raising (6)
- Referenz <Linguistik> (6)
- Satzanalyse (6)
- Spanisch (6)
- Transitivität (6)
- Türkisch (6)
- Verbalnomen (6)
- Affrikata (5)
- Demonstrativpronomen (5)
- Dialog (5)
- Diskursrepräsentationstheorie (5)
- Ergänzung <Linguistik> (5)
- Erzählen, pragm. (5)
- Extraktion <Linguistik> (5)
- Flexion (5)
- Frage (5)
- Fremdsprachenlernen (5)
- Genitiv (5)
- Gradpartikel (5)
- Infinitkonstruktion (5)
- Japanese (5)
- Kommunikationsanalyse (5)
- Kompositum (5)
- Konjunktion (5)
- Kontrastive Grammatik (5)
- Kontrastive Phonologie (5)
- Kroatisch (5)
- Lehnwort (5)
- Markiertheit (5)
- Metonymie (5)
- Modifikation <Linguistik> (5)
- Numerale (5)
- Phrasenstruktur (5)
- Portugiesisch (5)
- Range Concatenation Grammar (5)
- Reibelaut (5)
- Rumänisch (5)
- Satz (5)
- Satztyp (5)
- Skopus (5)
- Soziolinguistik (5)
- Sprachlehrbuch (5)
- Sprachwandel (5)
- Stimmhaftigkeit (5)
- Stimmlosigkeit (5)
- Suffix (5)
- Unterspezifikation (5)
- Uralische Sprachen (5)
- givenness (5)
- topic (5)
- Ableitung <Linguistik> (4)
- Affix (4)
- Amerikanisches Englisch (4)
- Auditive Phonetik (4)
- Austronesische Sprachen (4)
- Baltische Sprachen (4)
- Bewegungsverb (4)
- Definitheit (4)
- Deklination (4)
- Diachronie (4)
- Dialektologie (4)
- Distribution <Linguistik> (4)
- Evolutionstheorie (4)
- Experimentelle Phonetik (4)
- Frau (4)
- Funktionale Kategorie (4)
- Hebräisch (4)
- Interrogativpronomen (4)
- Irisch (4)
- Isländisch (4)
- Koartikulation (4)
- Konjugation (4)
- Korpus <Linguistik> (4)
- Kymrisch (4)
- Litauisch (4)
- Methodologie (4)
- Modalität <Linguistik> (4)
- Modalverb (4)
- Morphem (4)
- Morphonologie (4)
- Nebensatz (4)
- Negativer Polaritätsausdruck (4)
- Neurolinguistik (4)
- Nicht-restriktiver Relativsatz (4)
- Numerus (4)
- Objekt (4)
- Partikelverb (4)
- Perfekt (4)
- Possessivkonstruktion (4)
- Proto-Tibetobirmanisch (4)
- Referenzsemantik (4)
- Scrambling (4)
- Serialverb-Konstruktion (4)
- Spaltsatz (4)
- Sprachkontakt (4)
- Symposium (4)
- Textlinguistik (4)
- Transkription (4)
- Unbestimmtheit (4)
- Universalgrammatik (4)
- Verbalphrase (4)
- Wortbildung (4)
- alternative semantics (4)
- lexical semantics (4)
- Afrikanische Sprachen (3)
- Anlaut (3)
- Auslaut (3)
- Baskisch (3)
- Belhare (3)
- Cahuilla-Sprache (3)
- Croatian (3)
- Deixis (3)
- Deskriptivität (3)
- Ergänzungsfragesatz (3)
- Erzählen (3)
- Finnisch (3)
- Fremdsprachenunterricht (3)
- Frühneuhochdeutsch (3)
- Gerundium (3)
- Gesprochene Sprache (3)
- Grammatikalisation (3)
- Grammatische Relation (3)
- Hindi (3)
- Historische Sprachwissenschaft (3)
- Hypotaxe (3)
- Implikatur (3)
- Indogermanisch (3)
- Infinitiv (3)
- Inkorporation <Linguistik> (3)
- Instrumental (3)
- Inuktitut (3)
- Inversion <Grammatik> (3)
- Junktur (3)
- Kausativ (3)
- Konstruktionsgrammatik (3)
- Kontrolle <Linguistik> (3)
- Konversation (3)
- Konversion <Linguistik> (3)
- Körperteil (3)
- Lexikographie (3)
- Literary translation (3)
- Mehrsprachigkeit (3)
- Modifikator (3)
- Neutralisation <Linguistik> (3)
- Patholinguistik (3)
- Phonem (3)
- Phraseologie (3)
- Postulat (3)
- Proto-Indo-European (3)
- Prädikativsatz (3)
- Romanische Sprachen (3)
- Satzglied (3)
- Satzsemantik (3)
- Satzteil (3)
- Schwedisch (3)
- Schweizerdeutsch (3)
- Software (3)
- Spieltheorie (3)
- Spracherwerb, biling. (3)
- Sprachlogik (3)
- Sprachproduktion (3)
- Sprachtheorie (3)
- Sprachwahrnehmung (3)
- Stimmgebung (3)
- Subkategorisierung (3)
- Swahili (3)
- Thai (3)
- Tongaisch (3)
- Tonologie (3)
- Tree Adjoining Grammar (3)
- Tschechisch (3)
- Velar (3)
- Verwandtschaftsbezeichnung (3)
- Wirtschaft (3)
- Wortakzent (3)
- Wortfeld (3)
- Zweitsprachenerwerb (3)
- adverbial quantification (3)
- aspect (3)
- conjunction (3)
- contrastive focus (3)
- counterfactuals (3)
- hrvatski (3)
- intonation (3)
- linguistics (3)
- negation (3)
- pragmatics (3)
- reconstruction (3)
- relative clauses (3)
- scalar implicature (3)
- sociolinguistics (3)
- tense (3)
- word order (3)
- Aerodynamik (2)
- Albanisch (2)
- Allomorph (2)
- Altenglisch (2)
- Antonym (2)
- Argument linking (2)
- Aspiration <Linguistik> (2)
- Aufforderungssatz (2)
- Auslassung (2)
- Ausrufesatz (2)
- Australische Sprachen (2)
- Aymara (2)
- Bantu (2)
- Bedeutungsverschlechterung (2)
- Berbersprachen (2)
- Big mess construction (2)
- Biolinguistik (2)
- Brasilien (2)
- Chatten <Kommunikation> (2)
- Consecutio temporum (2)
- Denominativ (2)
- Determinator (2)
- Deutschunterricht (2)
- Deverbativ (2)
- Dialekt (2)
- Diboov zakon (2)
- Direct speech representation (2)
- Diskontinuität (2)
- Dybo’s law (2)
- Emotion (2)
- English (2)
- Epenthese (2)
- Evidentialität (2)
- Expletiv (2)
- Faktiv (2)
- Feldlinguistik (2)
- Finite Verbform (2)
- Finnish (2)
- Focus (2)
- Fokus <Linguistik> (2)
- Funktionsverbgefüge (2)
- Ganda-Sprache (2)
- Genus (2)
- Geschlechterforschung (2)
- Gestik (2)
- Grammaires d’Arbres Adjoints (2)
- Hausa-Sprache (2)
- Herausstellung (2)
- Hobongan (2)
- Höflichkeit (2)
- Höflichkeitsform (2)
- Implementierung <Informatik> (2)
- Indefinitpronomen (2)
- Information structure (2)
- Intonation (2)
- Kajkavian (2)
- Kanuri-Sprache (2)
- Katalanisch (2)
- Keltische Sprachen (2)
- Kernspintomographie (2)
- Khoisan (2)
- Kiezdeutsch (2)
- Kind (2)
- Kiranti (2)
- Klassifikator <Linguistik> (2)
- Kleidung (2)
- Kognitionswissenschaft (2)
- Kompositionalität (2)
- Konditionalsatz (2)
- Konjunktiv (2)
- Kontamination <Wortbildung> (2)
- Kontext (2)
- Kontrastive Semantik (2)
- Konversationsanalyse (2)
- Korean (2)
- Korrelativsatz (2)
- L2 (2)
- Laryngal (2)
- Lautmalerei (2)
- Lautsprache (2)
- Lexical Resource Semantics (2)
- Lexik (2)
- Liaison (2)
- Logische Partikel (2)
- Lokalbezeichnung (2)
- MCTAG (2)
- Makonde-Sprache (2)
- Mauricien (2)
- Metrische Phonologie (2)
- Milieu, Soziolinguistik (2)
- Modalität (2)
- Modus (2)
- Multimodalität (2)
- Nama-Sprache (2)
- Namenkunde (2)
- Nichtlineare Phonologie (2)
- Nominalkompositum (2)
- Nominalsatz (2)
- Nominativ (2)
- Objektsatz (2)
- Paiwan (2)
- Palatal (2)
- Palatographie (2)
- Parasitic gap (2)
- Parataxe (2)
- Parser (2)
- Partikel (2)
- Partizipation (2)
- Philippinen-Austronesisch (2)
- Phrasenkompositum (2)
- Phrasenstrukturgrammatik (2)
- Polarität (2)
- Political correctness (2)
- Postposition (2)
- Preußisch (2)
- Pro-Form (2)
- Proto-Slavic (2)
- Psycholinguistik (2)
- Relevanz <Linguistik> (2)
- Restriktiver Relativsatz (2)
- Resultativ (2)
- Salish-Sprache (2)
- Serbisch (2)
- Sotho (2)
- Sprachkompetenz (2)
- Sprachlehrforschung (2)
- Sprachliches Merkmal (2)
- Sprachverarbeitung (2)
- Sprachverarbeitung <Psycholinguistik> (2)
- Sprechtempo (2)
- Spurtheorie (2)
- Stimmband (2)
- Subjekt (2)
- Subjekt <Linguistik> (2)
- Substantiv (2)
- Suchmaschine (2)
- Temporalsatz (2)
- Tharaka (2)
- Tibetobirmanische Sprachen; Sinotibetische Sprachen (2)
- Tier (2)
- Tiere (2)
- Tod (2)
- Tough-construction (2)
- Tree Adjoining Grammar (2)
- Tree Description Grammar (2)
- Tswana-Sprache (2)
- Tumbuka-Sprache (2)
- Type-Token-Relation (2)
- Typologie (2)
- Unterrichtstechnologie (2)
- Vagheit (2)
- Verben (2)
- Verbloser Satz (2)
- Vergangenheitstempus (2)
- Vietnamese (2)
- Visualisierung (2)
- Vorname (2)
- Wissenschaftsgeschichte (2)
- Yoruba-Sprache (2)
- Zulu (2)
- Zulu-Sprache (2)
- accentuation (2)
- akcentuacija (2)
- case (2)
- cleft constructions (2)
- comparatives (2)
- constructed dialogue (2)
- contrast (2)
- conversational dialogue (2)
- corpus linguistics (2)
- cyclicity (2)
- dass (2)
- definite descriptions (2)
- discourse (2)
- discourse particles (2)
- domain restriction (2)
- double access (2)
- explicitation (2)
- fictional dialogue (2)
- fictionality (2)
- focus ambiguity (2)
- focus intonation (2)
- focus movement (2)
- focus types (2)
- grammaticalization (2)
- hybridity (2)
- hypothetical speech (2)
- identity (2)
- kajkavski (2)
- kinds (2)
- language acquisition (2)
- language change (2)
- language contact (2)
- linguistic approaches to dialogue (2)
- maximize presupposition (2)
- morphological focus marking (2)
- narrative (2)
- narrative structure (2)
- narratology (2)
- parsing (2)
- performance (2)
- presupposition (2)
- presupposition projection (2)
- presuppositions (2)
- processing (2)
- pronoun (2)
- quantification (2)
- quantifiers (2)
- relative clause (2)
- relevance theory (2)
- retranslation (2)
- rhetorical approaches to dialogue in narrative (2)
- scope (2)
- scope of focus (2)
- scrambling (2)
- second occurrence focus (2)
- semantics (2)
- speech acts (2)
- speech tagging (2)
- stylistics (2)
- subjectivity (2)
- telicity (2)
- topicalization (2)
- translation universals (2)
- type composition logic (2)
- typology (2)
- underspecification (2)
- uniqueness (2)
- unreliable narration (2)
- wh-question (2)
- Čakavian (2)
- čakavski (2)
- Štokavian (2)
- štokavski (2)
- "Rabbit" tetralogy (1)
- "The Sisters" (1)
- (Morpho)syntactic focus strategy (1)
- (implicit) prosody (1)
- (non-)gradable predicate (1)
- (un)conditionals (1)
- -tari (1)
- -toka (1)
- 18. stoljeće (1)
- 20th century (1)
- A Touch of Frost (1)
- A-bar movement (1)
- Abduktion <Logik> (1)
- Adamaua-Ost-Sprachen (1)
- Adjective (1)
- Adjektivphrase (1)
- Adversativsatz (1)
- Afar (1)
- Affigierung (1)
- Affirmativer Polaritätsausdruck (1)
- Afro-Asiatic (1)
- Ainu-Sprache (1)
- Akan-Sprache (1)
- Akkusativ mit Infinitiv (1)
- Aktionsart (1)
- Akustische Spektrographie (1)
- Albanian literature (1)
- Alemannic dialects (1)
- Algorithmus (1)
- Alsace (1)
- Altaisch (1)
- Alternative Questions (1)
- Alternativfragen (1)
- Altfranzösisch (1)
- Altgriechisch (1)
- Altitalienisch (1)
- Alveolar (1)
- Ambiguität (1)
- American sign language (1)
- Anatolische Sprachen (1)
- Anführungszeichen (1)
- Angewandte Linguistik (1)
- Anglismus (1)
- Annotation (1)
- Antikausativ (1)
- Applikativ (1)
- Apposition (1)
- Appraisal Theory (1)
- Aramäisch (1)
- Arzt-Patient-Interaktion (1)
- Aschanti-Sprache (1)
- Asia (1)
- Asymmetrie (1)
- Attischer Dialekt (1)
- Attribut (1)
- Austronesian (1)
- Automatentheorie (1)
- Automatische Spracherkennung (1)
- Automatische Sprachproduktion (1)
- Autosegmentale Phonologie (1)
- Auxiliarkomplex (1)
- Bahasa Indonesia (1)
- Bairisch (1)
- Bantoid (1)
- Barnes (1)
- Basaa-Sprache (1)
- Baushi (1)
- Bedeutungswandel (1)
- Bedeutungsunterschied (1)
- Bedrohte Sprache (1)
- Belebtheit <Grammatik> (1)
- Belharisch (1)
- Bemba-Sprache (1)
- Benutzernamen (1)
- Benutzeroberfläche (1)
- Berlin <2001> (1)
- Bezug / Linguistik (1)
- Bibliografie (1)
- Bibliographie (1)
- Binarismus (1)
- Binding (1)
- Binominal noun phrase (1)
- Broad focus (1)
- Burgenland Croatian (1)
- COCA (1)
- Cantonese (1)
- Caryl Churchill (1)
- Casus obliquus (1)
- Cayuga-Sprache (1)
- Chaostheorie (1)
- Chinese (1)
- Chomsky (1)
- Chomsky, Noam (1)
- Christianus C. (1)
- Clitic Doubling (1)
- Clitic-Doubling (1)
- Closure (1)
- Cochlear-Implantat (1)
- Cognate object / Inneres Objekt (1)
- Cognition (1)
- Cognitive Linguistics (1)
- Computersimulation (1)
- Computertomographie (1)
- Computervermittelte Kommunikation (1)
- Conceptual Metaphor (1)
- Coreference annotation (1)
- Croatian dialectology (1)
- Cross-dialectal Diversity (1)
- Cultural Model (1)
- Daqan (1)
- Datenbanksystem (1)
- Datenstruktur (1)
- Death in Venice (1)
- Demokratische Republik Kongo (1)
- Dentallaut (1)
- Description Tree Grammar (1)
- Determination <Linguistik> (1)
- Determinativ (1)
- Deutsch als Fremdsprache (1)
- Deutschland (1)
- Deutschlandbild (1)
- Diocese of Senj and Modruš (Krbava) (1)
- Direktes Objekt (1)
- Disambiguierung (1)
- Discourse analysis (1)
- Discourse mediation (1)
- Disjunktion <Logik> (1)
- Diskontinuierliches Element (1)
- Diskurs (1)
- Ditransitives Verb (1)
- Dolmetschen (1)
- Doppelter Akkusativ (1)
- Doppelter Nominativ (1)
- Downstep (1)
- Doyle (1)
- Dreisprachigkeit (1)
- Dutch (1)
- Dybo's law (1)
- Dyboov zakon (1)
- EKoti (1)
- Edith Wharton (1)
- Edward (1)
- Ehe <Motiv> (1)
- Einbettung <Linguistik> (1)
- Eindeutigkeit (1)
- Einführung (1)
- Elektroglottographie (1)
- Elektromagnetische Artikulographie (1)
- Elision (1)
- Empirische Linguistik (1)
- Enatthembo (1)
- Endkonsonant (1)
- Englischunterricht (1)
- English translation (1)
- Epistemic Containment Principle (ECP) (1)
- Ereignissemantik (1)
- Ergebnis (1)
- Erkenntnistheorie (1)
- Erzählperspektive (1)
- Erzählstrategie (1)
- Erzähltheorie (1)
- Eskimo (1)
- Essen (1)
- Estnisch (1)
- Estonian (1)
- Etymologie (1)
- Euphemismus (1)
- Europa (1)
- Everyday language (1)
- Evidenz (1)
- Evolution of Language (1)
- Ewe-Sprache (1)
- Existentialsatz (1)
- Expressivität <Linguistik> (1)
- F-marking (1)
- Facework (1)
- Fachsprache (1)
- Familie (1)
- Fang-Kuei (1)
- Faïza Guène (1)
- Feministische Literaturwissenschaft (1)
- Fictional dialogue (1)
- Finnish language (1)
- Fipa (1)
- Focus ambiguity (1)
- Focus marker (1)
- Fokus (1)
- Foodo (1)
- Formale Sprache (1)
- Formalismes syntaxiques (1)
- Fragebogen (1)
- Frankfurt <Main, 2003> (1)
- Frankokanadisch (1)
- Fremdsprache (1)
- French translation (1)
- Frost at Christmas (1)
- Frühneuenglisch (1)
- Funktionalismus <Linguistik> (1)
- Funktionsverb (1)
- Futur (1)
- Fuzzy-Logik (1)
- Fußball (1)
- G-marking (1)
- Ga-Sprache (1)
- Galician (1)
- Galicisch (1)
- Gallizismus (1)
- Galloitalienisch (1)
- Gapping (1)
- Gebundenes Morphem (1)
- Gebärdensprache (1)
- Gefühl (1)
- Gemination (1)
- Genderlinguistik (1)
- Generalisierte Phrasenstrukturgrammatik (1)
- Generic NLP Architecture (1)
- Generische Aussage (1)
- Generizität (1)
- Genuswechsel (1)
- Geografie (1)
- Georgisch (1)
- German language -- Spoken German (1)
- German language -- Study and teaching (1)
- Germanisch (1)
- Germanismus (1)
- Gerundivum (1)
- Geschehensverb (1)
- Geschichte (1)
- Geschlechtergerechte Sprache (1)
- Geschlechtsunterschied (1)
- Geschmack (1)
- Gesellschaft für Semantik (1)
- Gestalt theory (1)
- Gewalt (1)
- Gleitlaut (1)
- Glossar (1)
- Glottalisierung (1)
- Glottisverschlusslaut (1)
- Glue Semantics (1)
- Glück (1)
- Gold (1)
- Gotisch (1)
- Gradadverb (1)
- Grammatikalisierung (1)
- Grammatikalität (1)
- Grammatikunterricht (1)
- Grammatische Person (1)
- Grammatisches Subjekt (1)
- Greek child speech (1)
- Greek child-directed speech (1)
- Greek language acquisition (1)
- Gujarati (1)
- Gur (1)
- Gälisch-Schottisch (1)
- HPSG Parsing (1)
- HTP (1)
- Hakha Chin (Lai) (1)
- Halbvokal (1)
- Halbī (1)
- Halkomelem (1)
- Handedness (1)
- Hausa (1)
- Haya (1)
- Henry James (1)
- Herero-Sprache (1)
- Heterogenität (1)
- Hethitisch (1)
- Hirnfunktion (1)
- Historische Phonetik (1)
- Historische Phonologie (1)
- Historische Syntax (1)
- Holocene (1)
- Homofon (1)
- Honorativ (1)
- Hypertext (1)
- Höflichkeit, Sprachstil (1)
- Hörstörung (1)
- Hörverstehen (1)
- IE (1)
- Icelandic Family Sagas (1)
- Identity (1)
- Ikon (1)
- Illokutiver Akt (1)
- Immediate Dominance/Linear Precedence (1)
- Immersion (1)
- Impersonale (1)
- Inchoativ (1)
- Indianersprachen (1)
- Indien (1)
- Indirect translation (1)
- Indirekte Rede (1)
- Indirekter Interrogativsatz (1)
- Indogermanistik (1)
- Infix (1)
- Insel <Linguistik> (1)
- Insertion <Linguistik> (1)
- Intensionale Logik (1)
- Interferenz (1)
- Interkulturelles Lernen (1)
- Interlinearversion (1)
- Internationale Migration (1)
- Interpretation (1)
- Intervention Effects (1)
- Interventionseffekte (1)
- Inuit-Sprache (1)
- Ireland (1)
- Jahrestagung (1)
- Jakutisch (1)
- James Joyce (1)
- Je suis Charlie (1)
- Jean / Siebenkäs (1)
- Jiddisch (1)
- Johann Wolfgang von Goethe (1)
- John Updike (1)
- Jugendsprache (1)
- Juxtaposition (1)
- Kaingáng (1)
- Kamerun (1)
- Kanada (1)
- Kantonesisch (1)
- Kategorialgrammatik (1)
- Katze (1)
- Kaukasische Sprachen (1)
- Kehlkopf (1)
- Kette <Linguistik> (1)
- Kikuyu (1)
- Kleinkind (1)
- Kognition (1)
- Kognitive Entwicklung (1)
- Kognitive Semantik (1)
- Kollokationen (1)
- Komitativ (1)
- Kommunikation (1)
- Komoren (1)
- Komparation (1)
- Komparativ (1)
- Komplementierer (1)
- Komponentenanalyse (1)
- Komponentenanalyse <Linguistik> (1)
- Komposition <Wortbildung> (1)
- Konditional (1)
- Konfiguration <Linguistik> (1)
- Kongo-Sprache (1)
- Kongressbericht (1)
- Konkomba (1)
- Konnektionismus (1)
- Konsekutivsatz (1)
- Konsonantengruppe (1)
- Konstruktion <Linguistik> (1)
- Kontrafaktischer Satz (1)
- Kontrastive Linguistik, Vergleichende Sprachwissenschaft (1)
- Kontrastive Morphologie (1)
- Kontrastive Pragmatik (1)
- Konvergenztheorie (1)
- Kopulasatz (1)
- Korrelat (1)
- Koti (1)
- Krankheit (1)
- Kreativität (1)
- Krieg (1)
- Kuanua (1)
- Kurdish (1)
- Kutenai (1)
- Kwa-Sprachen (1)
- KwaNdebele (1)
- LFG (1)
- LTAG (1)
- Language Mapping (1)
- Language Perception (1)
- Language acquisition (1)
- Latein (1)
- Latin (1)
- Lautgesetz (1)
- Lautsymbolik (1)
- Lautwahrnehmung (1)
- Learning strategies (1)
- Lebensmittel (1)
- Lehnprägung (1)
- Lehnübersetzung (1)
- Lehrbuch (1)
- Lehrerbildung (1)
- Leipzig <2001> (1)
- Lerntheorie (1)
- Lettisch (1)
- Lexical Resource Semantics (1)
- Lexikalisierung (1)
- Li (1)
- Lied (1)
- Linguistic change (1)
- Linksversetzung (1)
- Linksverzweigende Konstruktion (1)
- Literarischer Dialog (1)
- Literary dialogue (1)
- Literary pragmatics (1)
- Logische Form <Linguistik> (1)
- Logopädie (1)
- Lokale Präposition (1)
- London <1990> (1)
- Luiseño-Sprache (1)
- Luxemburgisch (1)
- Makua-Sprache (1)
- Malawi (1)
- Malaysia (1)
- Mandarin (1)
- Mandarin Chinese (1)
- Mann (1)
- Manx (1)
- Marker <Linguistik> (1)
- MaxElide (1)
- Maya-Sprache (1)
- Mazateco (1)
- Mboshi-Sprache (1)
- Mediality (1)
- Medien (1)
- Mediensprache, Fernsehen (1)
- Medium (1)
- Mehrteiliges Prädikat (1)
- Mehrworteinheit (1)
- Melanesische Sprachen (1)
- Mental Model Construction (1)
- Mentalism (1)
- Metaphor (1)
- Metatonie (1)
- Metre (1)
- Migration (1)
- Mikronesische Sprachen (1)
- Minimal Recursion Semantics (1)
- Mittelchinesisch (1)
- Mittelfranzösisch (1)
- Mobile Telekommunikation (1)
- Modalpartikel (1)
- Mohawk (1)
- Mongolisch (1)
- Montague-Grammatik (1)
- More <Linguistik> (1)
- Mosambik (1)
- Move-alpha (1)
- Mozambique (1)
- Moçambique (1)
- Mukrī (1)
- Multiple Spell-Out (1)
- Mura-Sprache (1)
- Musical rhythm (1)
- Mythologie (1)
- Mögliche Welt (1)
- Mögliche-Welten-Semantik (1)
- Mündlichkeit (1)
- NP-deletion (1)
- Namengebung (1)
- Narrative discourse (1)
- Nasal (1)
- Nativismus, Linguistik (1)
- Natürliche Morphologie (1)
- Negative Polarity Items (1)
- Negativpolaritätselemente (1)
- Neo-Latin (1)
- Neologismus (1)
- Newari (1)
- Nias-Sprache (1)
- Nicht-kanonisches Subjekt (1)
- Nicht-Übersetzbarkeit (1)
- Nicknamen (1)
- Niger Delta (1)
- Niger-Kongo-Sprachen (1)
- Nilosaharanische Sprachen (1)
- Nilotische Sprachen (1)
- Niue-Sprache (1)
- Niwchisch (1)
- Noam (1)
- Noam Chomsky (1)
- Nomen actionis (1)
- Nomen agentis (1)
- Nonverbale Mittel (1)
- Nootka (1)
- Notwendigkeit (1)
- Nullmorphem (1)
- Numerativ (1)
- Nähen (1)
- Objekt <Linguistik> (1)
- Obstruent (1)
- Online-Publikation (1)
- Ono <Papuasprachen> (1)
- Onomastik (1)
- Ontologie <Wissensverarbeitung> (1)
- Opaker Kontext (1)
- Opposition <Linguistik> (1)
- Ortsadverb (1)
- Oslo <1999> (1)
- P600 (1)
- PCFG (1)
- Palauisch (1)
- Palaung (1)
- Parameter, Linguistik (1)
- Parametrisierung (1)
- Parenthese (1)
- Partitiv (1)
- Partizip (1)
- Partizip Perfekt (1)
- Pedersen, Holger (1)
- Performance/competence (1)
- Periphrastische Konjugation (1)
- Perspektivierung (1)
- Pferd (1)
- Phonologische Opposition (1)
- Phrasenmarker (1)
- Phrasing (1)
- Pirahã (1)
- Pitch Reset (1)
- Plusquamperfekt (1)
- Pocken (1)
- Polabisch (1)
- Politik (1)
- Politische Rede (1)
- Poltern (1)
- Portugiesisch / Brasilien (1)
- Portuguese (1)
- Position of Antecedent strategy (1)
- Postcolonial writing (1)
- Postmoderne (1)
- Potsdam <2002> (1)
- Potsdam <2004> (1)
- Pragmalinguistik (1)
- Pro-Drop-Parameter (1)
- Produktivität <Linguistik> (1)
- Pronominalization (1)
- Proportionalsatz (1)
- Prosody (1)
- Prototyp <Linguistik> (1)
- Prädikatsnomen (1)
- Präfix (1)
- Präpositionalphrase (1)
- Präsentisches Perfekt (1)
- Pseudokoordination (1)
- Pseudopartitiv (1)
- Psiphänomen (1)
- Q-adverbs (1)
- Quelle (1)
- Question Under Discussion (QUD) (1)
- Rattenfängerkonstruktion (1)
- Reception Theory (1)
- Reduktion <Linguistik> (1)
- Reduplikation (1)
- Reflexivierung (1)
- Reflexivsatz (1)
- Regelordnung (1)
- Reihenfolge (1)
- Rekonstruktion (1)
- Relativpronomen (1)
- Relevanztheorie (1)
- Religion (1)
- Reziprozität (1)
- Reziprozität <Linguistik> (1)
- Rhetorik (1)
- Richtungsangabe (1)
- Road movie (1)
- Robust Minimal Recursion Semantics (1)
- Role and Reference Grammar (1)
- Romanian (1)
- Romanistik (1)
- Rufname (1)
- Russennorwegisch (1)
- Rückfrage (1)
- SDRT (1)
- SYNtax-based Reference Annotation (1)
- Saharanische Sprachen (1)
- Samoanisch (1)
- Sapir (1)
- Satzadverb (1)
- Satzanalyse (1)
- Satzellipse (1)
- Satzverbindung (1)
- Schlegel, Friedrich von (1)
- Schmerz (1)
- Schottisch (1)
- Schriftlichkeit (1)
- Schugnī (1)
- Schwa (1)
- Selbsteinschätzung (1)
- Selkupisch (1)
- Semantics (1)
- Semantische Analyse (1)
- Semantische Kongruenz (1)
- Semantische Lizenzierung (1)
- Semasiologie (1)
- Semiotik (1)
- Semitische Sprachen (1)
- Senjska i Modruška (Krbavska) biskupija (1)
- Senufo (1)
- Serbian (1)
- Shallow NLP (1)
- Shanghai (1)
- Silbenstruktur (1)
- Silbentrennung (1)
- Simple Range Concatenation Grammar (1)
- Sinn und Bedeutung (1)
- Sino-Tibetan (1)
- Skandinavische Sprachen (1)
- Slang (1)
- Slavic accentology (1)
- Sloppiness (1)
- Slovakisch (1)
- Slovene (1)
- Slovene neo-cirkumflex (1)
- Slovenian (1)
- Slowenisch (1)
- Sluicing <Linguistik> (1)
- Sm'algyax (1)
- Sonant (1)
- Sonorität (1)
- Soranī (1)
- Southeast Asia (1)
- Southern Ndebele (1)
- Soziale Medien (1)
- Soziolekt (1)
- Sozioonomastik (1)
- Spam (1)
- Speicherverwaltung (1)
- Sprachdaten (1)
- Sprache (1)
- Sprachgebrauch (1)
- Sprachgeschichte (1)
- Sprachphilosophie (1)
- Sprachpurismus (1)
- Sprachtod (1)
- Sprachvariante (1)
- Sprechakt (1)
- Sprechakte (1)
- Sprechaktklassifikation (1)
- Speech Acts (1)
- Stangov zakon (1)
- Stang’s law (1)
- Stativ <Grammatik> (1)
- Steigerungspartikel (1)
- Stereotypie (1)
- Stilistik (1)
- Stochastik (1)
- Strukturelle Grammatik (1)
- Strukturelle Phonologie (1)
- Strukturelle Semantik (1)
- Student (1)
- Suffixbildung (1)
- Suppire (1)
- Suppire-Sprache (1)
- Suppletivismus (1)
- Swedish language (1)
- Synchronie (1)
- Synonymie (1)
- Syntactic formalisms (1)
- Syntaxbaum (1)
- Südafrika (1)
- TUSNELDA (1)
- Tadschikisch (1)
- Taiwan-Austronesisch (1)
- Taiwanesisch (1)
- Tarragona <2008> (1)
- Technologie (1)
- Teilsatz (1)
- Temperatur (1)
- Temporaladverb (1)
- Test (1)
- Texttypologie (1)
- Thailändisch (1)
- Theorie (1)
- Theory of mind (1)
- Thetik (1)
- Thomas Mann (1)
- Tibetisch (1)
- Tibetobirmanische Sprachen; Nungisch (1)
- Tiersymbolik (1)
- Tigrinisch (1)
- Tiwa (1)
- Todesanzeigen (1)
- Ton <Phonologie> (1)
- Tone language (1)
- Tonhöhe (1)
- Topic/Comment (1)
- Tourismus (1)
- Touristeninformation (1)
- Transitives Verb (1)
- Tree Tuple (1)
- Tree-Adjoining Grammar (1)
- Tree-adjoining grammar (1)
- Tschuktschisch (1)
- Tswana (1)
- Tukangbesi (1)
- Tungusisch (1)
- Turkish (1)
- Tätigkeitsverb (1)
- Tübingen <2007> (1)
- Türkei (1)
- Tōrwālī (1)
- Uganda <West> (1)
- Uhlenbeck (1)
- Umgangssprache (1)
- Universität (1)
- Unordered Vector Grammar with Dominance Link (1)
- Unregelmäßiges Verb (1)
- Unterricht (1)
- Urdu (1)
- Urslawisch (1)
- Usability (1)
- VP-ellipsis (1)
- Vagueness (1)
- Valenz <Linguistik> (1)
- Van Wijkov zakon (1)
- Van Wijk’s law (1)
- Vedisch (1)
- Venetisch (1)
- Verbalisierung (1)
- Verbalkomplex (1)
- Verbalkompositum (1)
- Verbum sentiendi (1)
- Vergleich (1)
- Vergleichssatz (1)
- Versprecher (1)
- Vietnamesisch (1)
- Vokaldehnung (1)
- Vokalharmonie (1)
- Vokativ (1)
- Volksliteratur (1)
- Vorlesen (1)
- W-Bewegung (1)
- W-Fragen (1)
- Wahnsinn (1)
- Wakash-Sprachen (1)
- Wambaya (1)
- West Africa, Scotland (1)
- Westfriesisch (1)
- Wh-Questions (1)
- Wh-question (1)
- WhatsApp (1)
- Winterson (1)
- Wittgenstein (1)
- Wittgenstein, Ludwig (1)
- Wolfgang von Kempelen (1)
- Word Sense Disambiguation (1)
- World Englishes (1)
- Wortart (1)
- Wortfamilie (1)
- Worthäufigkeit (1)
- Wortlänge (1)
- Wortschatz, Spracherwerb (1)
- Wortverbindung (1)
- Wortwahl (1)
- Wunsch (1)
- Wörterbuch (1)
- XML (1)
- Xhosa (1)
- Zahlbegriff (1)
- Zeit (1)
- Zeitbewusstsein (1)
- Zeitschrift (1)
- Zentralisierung <Linguistik> (1)
- Zentralkhoisan-Sprachen (1)
- Zitat (1)
- Zunge (1)
- Zusammenbildung (1)
- Zustandsverb (1)
- Zweitstellung (1)
- Zwillingsformel (1)
- Zwillingsforschung (1)
- acceptability (1)
- accessibility (1)
- accounts (1)
- acquisition (1)
- acute (1)
- ad hominem moves (1)
- adaptation (1)
- adjectival antonyms (1)
- adjectives (1)
- adjectives of completeness (1)
- adverbs of frequency (1)
- adverbs of quantity (1)
- affect (1)
- agree (1)
- agreement mismatch (1)
- akut (1)
- alignment in communication structural coupling (1)
- allemand (1)
- also (1)
- alternative questions (1)
- alternative semantics presupposition projection (1)
- amounts (1)
- animacy (1)
- announcements (1)
- anticausatives (1)
- appositives (1)
- argument dislocation (1)
- argument/adjunct focus (1)
- as-phrases (1)
- assertion (1)
- assertions (1)
- at-issue content (1)
- atomicity (1)
- attitude reports (1)
- auditory language processing (1)
- authentic dialogue (1)
- authenticity (1)
- autobiographical writing (1)
- auxiliaries (1)
- auxiliary selection (1)
- ba-Konstruktion (1)
- background particles (1)
- be (1)
- bias (1)
- bilingual word processing (1)
- binding (1)
- bleiben (1)
- boundaries (1)
- breadth of focus (1)
- bridge principles (1)
- britischer Film (1)
- brouillage d’arguments (1)
- canonical visitations (1)
- causal dependence (1)
- causal sufficiency (1)
- causality (1)
- causatives (1)
- change of state verb (1)
- change of state verbs (1)
- character profiles (1)
- characterisation (1)
- choice functions (1)
- chunk parsing (1)
- classifiers (1)
- cleft (1)
- clefts (1)
- clitic doubling (1)
- co-reference (1)
- code-mixing (1)
- code-switching (1)
- coercion (1)
- coercions (1)
- cognitive approaches to language and literature (1)
- cognitive deixis (1)
- cognitive poetics (1)
- cognitive turn (1)
- coherence relations (1)
- common ground (1)
- comparable corpus French-Dutch (1)
- comparative constructions (1)
- compensatory lengthening (1)
- compensatory role of congruence (1)
- complementation (1)
- complex speech acts (1)
- compounding (1)
- compounds (1)
- computational semantics (1)
- conceptual metaphors (1)
- conditionals (1)
- confidence interval (1)
- constituency (1)
- contemplation (1)
- contrastive topic (1)
- conventional implicatures (1)
- conversation analysis (1)
- conversational implicatures (1)
- cornering (1)
- corpora (1)
- corpus analysis (1)
- corpus-based methodology (1)
- correction (1)
- corrective focus (1)
- coréen (1)
- counterfactual (1)
- counteridenticals (1)
- covert variables (1)
- creation predicate (1)
- crime fiction (1)
- critical sociolinguistics (1)
- crkveni jezik (1)
- crosslinguistic semantics (1)
- cultur (1)
- cultural references (1)
- culturally bound items (1)
- dance semantics (1)
- de dicto (1)
- de-accenting (1)
- decomposition (1)
- defaults (1)
- definiteness (1)
- definites (1)
- degree achievement (1)
- degrees (1)
- dehumanization (1)
- deixis (1)
- deleted t (1)
- deontic modals (1)
- depiction verbs (1)
- determiners (1)
- diachronic change (1)
- dialect (1)
- dialogism (1)
- dialogue (1)
- differential verbal comparatives (1)
- digital fiction (1)
- dijelovi rečenice (1)
- diplomatic transcript (1)
- direct discourse (1)
- direct speech (1)
- direct speech representation (1)
- direct vs. indirect causation (1)
- discourse analysis (1)
- discourse coherence (1)
- discourse expectability (1)
- discourse presentation (1)
- discourse structure (1)
- disjoint reference (1)
- disjunction (1)
- distributional semantics (1)
- dominance (1)
- donkey sentences (1)
- doseg (1)
- dream reports (1)
- duetting (1)
- dynamics of controversy (1)
- e-mail scam (1)
- early Germanic (1)
- early modern english (1)
- ecclesiastical language (1)
- educational proposals (1)
- effort (1)
- eighteenth century (1)
- ellipses (1)
- ellipsis (1)
- embedded clauses (1)
- embedded implicature (1)
- embedding (1)
- emphasis (1)
- empirical (1)
- engagement (1)
- engleski (1)
- english (1)
- enough (1)
- entailment (1)
- epistemic 'modals' (1)
- epistemic expressions (1)
- epistemic indefinites (1)
- epistemic modals (1)
- epp (1)
- ergativity (1)
- ethnic minority writers (1)
- event semantics (1)
- events (1)
- ever free relatives (1)
- evidentiality (1)
- ex-situ focus (1)
- exhaustive identification (1)
- exhaustivity (1)
- expectation (1)
- experimental linguistics (1)
- experimental semantics (1)
- experiments (1)
- explicit performatives (1)
- explizite Performative (1)
- extreme nouns (1)
- face-work (1)
- factivity (1)
- familiarity (1)
- features (1)
- felicity conditions (1)
- ficar (1)
- fictional creatures (1)
- fieldwork (1)
- firsthand experience (1)
- focalization (1)
- focus anaphoricity (1)
- focus asymmetries (1)
- focus constructions (1)
- focus copula (1)
- focus marker (1)
- focus marking (1)
- focus meaning (1)
- focus particles (1)
- focus position (1)
- focus type (1)
- foregrounding (1)
- formalismes grammaticaux (1)
- frame semantics (1)
- frame theory (1)
- free choice (1)
- free direct speech (1)
- free indirect discourse (1)
- free indirect speech (1)
- free-choice (1)
- function words (1)
- functional sentence perspective (1)
- funkcionalistički pogled na rečenicu (1)
- future (1)
- ge <Morphem> (1)
- generic quantifier (1)
- genetic encoding (1)
- german (1)
- gesture (1)
- gestures (1)
- glagolska akcentuacija (1)
- glagolski pridjev radni (1)
- gradable adjectives (1)
- gradience grammar (1)
- gradišćanski hrvatski (1)
- gramatika uloga i referenci (1)
- grammaires d’arbres (1)
- grammar acquisition (1)
- grammar formalism (1)
- grammatical aspect (1)
- grammaticality (1)
- grief (1)
- group chats (1)
- habitual (1)
- habituals (1)
- handles (1)
- hard cases (1)
- have (1)
- heritage language development (1)
- heritage speakers (1)
- heterolingualism (1)
- hierarchies (1)
- hierarchy (1)
- higher-order quantification (1)
- historical pragmatics (1)
- history (1)
- hrvatska dijalektologija (1)
- iconic semantics (1)
- illusion of authenticity (1)
- imbrication (1)
- imperatives (1)
- imperfective (1)
- implicated presupposition (1)
- implicatives (1)
- implicature (1)
- impoliteness (1)
- imposters (1)
- imprecision (1)
- indefinite pronouns (1)
- indexicality (1)
- indirect speech (1)
- indirect translation (1)
- individual variation (1)
- infants (1)
- inferencing task (1)
- infinitives (1)
- informal language learning (1)
- information management (1)
- informational focus (1)
- input (1)
- intensification (1)
- intensification scale (1)
- intensifiers (1)
- intensional quantifiers (1)
- intensional transitives (1)
- intensity (1)
- intenzifikacija (1)
- intenzifikatori (1)
- interactional roles (1)
- interpretation (1)
- interrogatives (1)
- interrupting (1)
- intervention effect (1)
- intonation (language) (1)
- kanonske vizitacije (1)
- kind reference (1)
- knowledge (1)
- kompenzacijska uloga sročnosti (1)
- kompenzacijsko duljenje (1)
- l-participle (1)
- language ecology (1)
- language pedagogy (1)
- language planning (1)
- language policy (1)
- latinski jezik (1)
- lexical aspect (1)
- lexical causative verbs (1)
- lexical tone (1)
- lexical-functional grammar (1)
- lexicalized tree-adjoining grammar (1)
- line (1)
- linear order (1)
- linguistic creativity (1)
- linguistic networks graph distance measures (1)
- linguistic repertoires (1)
- linguistic variation (1)
- linking elements (1)
- literary corpus (1)
- literary linguistics (1)
- literary pragmatics (1)
- literary translation (1)
- literature (1)
- ljestvica pojačajnosti (1)
- loanwords (1)
- local context (1)
- logical form (1)
- long wh-movement (1)
- machine translation (1)
- macroroles (1)
- makrouloge (1)
- manner implicature (1)
- manuscript transcription (1)
- maximality (1)
- maximizers (1)
- mediational repertoire (1)
- memory-based learning (1)
- mention-some (1)
- metagrammars (1)
- metalinguistic negation (1)
- metre (1)
- metrics (1)
- middle english (1)
- migrant writing (1)
- migrants’ language (1)
- miners puzzle (1)
- minor and minority literatures (1)
- modal flavor (1)
- modal inferences (1)
- modal particles (1)
- modality (1)
- modalne čestice (1)
- modalnost (1)
- modification (1)
- monotonicity (1)
- morphological derivation (1)
- movement (1)
- multi-ethnolect (1)
- multicomponent rewriting (1)
- multilingualism (1)
- multimodal analysis (1)
- multimodal narratives (1)
- multiple encoding (1)
- mundane technology use (1)
- mutual information of graphs (1)
- métagrammaires (1)
- narrator discourse (1)
- natural language (1)
- natural language metaphysics (1)
- natural speech processes (1)
- naturalization (1)
- ne (1)
- necessary (but not necessarily sufficient) causes (1)
- negative polar questions (1)
- negative polarity item (NPI) (1)
- negative prefix (1)
- negative strengthening (1)
- negative-islands (1)
- neo cirkumfleks (1)
- neo-circumflex (1)
- neodređene zamjenice (1)
- neologisms (1)
- niječni prefiks (1)
- njemački (1)
- nominal nominal (1)
- non-intersective adjectives (1)
- non-restrictive relative clause (1)
- non-specific transparent (1)
- non-standard features (1)
- normalization (1)
- norms (1)
- not-at-issue content (1)
- novolatinski jezik (1)
- null subjects (1)
- number construction (1)
- number neutrality (1)
- numerals (1)
- obligatory control (1)
- obvezna kontrola (1)
- odds ratio (1)
- old english (1)
- onomastics (1)
- operator movement (1)
- optional classifiers (1)
- oral narratives (1)
- ordinary conversation (1)
- ordre des mots (1)
- overlapping (1)
- overlapping hierarchies (1)
- paradigm uniformity (1)
- particles (1)
- partition (1)
- partitives (1)
- parts of the sentence (1)
- passive (1)
- passives (1)
- pathologischer Spracherwerb (1)
- perception (statement-question matching) (1)
- perception of deletion (1)
- perfect (1)
- performative modality (1)
- performativity (1)
- person agreement (1)
- person splits (1)
- personal reference (1)
- perspective (1)
- phase (1)
- phi-features (1)
- phonological status (1)
- phonological word (1)
- phonology (1)
- physical structure vs. textual structure (1)
- picture semantics (1)
- pirahã (1)
- pitch accent (1)
- play script (1)
- plurality (1)
- poetic form (1)
- pojačajnost (1)
- polarity focus (1)
- political speech (1)
- polymedia (1)
- polyphony (1)
- post-focus reduction (1)
- pp modification (1)
- pragmatic enrichment (1)
- pragmatic inference (1)
- praslavenski (1)
- predicate focus (1)
- predicates of personal taste (1)
- preference predicates (1)
- prefix (1)
- prepositions (1)
- presentational constructions (1)
- presuppositional implicatures (1)
- prijedlozi (1)
- priming (1)
- probabilistic theories of causation (1)
- probabilities (1)
- probability (1)
- progressive (1)
- projection (1)
- prominence (1)
- pronoun movement (1)
- pronouns (1)
- properties (1)
- prosodic focus (1)
- prosodic integration (1)
- prosodic phrasing (1)
- prosodic prominence (1)
- protoslavenski jezik (1)
- psycholinguistics (1)
- publicness (1)
- quantificational variability (1)
- quantifier processing (1)
- quantifier scope (1)
- quantity (1)
- question formation (1)
- reaction time (1)
- reader-response (1)
- reasoning errors (1)
- recursivity (1)
- red riječi (1)
- reference resolution in production and comprehension (1)
- referential expression (1)
- refugees (1)
- regional profiles (1)
- register (1)
- register variation (1)
- relational adjectives (1)
- representation (1)
- representation accuracy (1)
- responsive predicates (1)
- restrictive relative clause (1)
- resultative (1)
- resumptive pronouns (1)
- rhetorical relations (1)
- rhythmic aptitude (1)
- robust parsing (1)
- role labeling (1)
- root classes (1)
- salience (1)
- scalar changes (1)
- scalar diversity (1)
- scalar enrichment (1)
- scalar implicatures (1)
- scalar inferences (1)
- scale structure (1)
- schisming (1)
- secondary focus (1)
- segment reconstruction (1)
- self-naming (1)
- semantic types (1)
- semantic variability (1)
- semantics annual meeting (1)
- semantics/pragmatics interface (1)
- sentence-final particles (1)
- sex-/gender-neutral language (1)
- silence (1)
- similarity (1)
- similarity approach (1)
- similarity-based learning (1)
- simplification (1)
- sintaksa (1)
- situation variables (1)
- situations (1)
- slavenska akcentologija (1)
- slavenski neocirkumfleks (1)
- slovenski (1)
- smartphone-based language practices (1)
- smartphones (1)
- so <Wort> (1)
- social media (1)
- social media narratives (1)
- sociolect (1)
- sociology of language (1)
- spectatorship (1)
- speech reports (1)
- speech rhythm (1)
- speech segmentation (1)
- speeded verification (1)
- spirituality (1)
- split antecedent (1)
- spoken discourse (1)
- stance (1)
- standard solution (1)
- standardization (1)
- storytelling (1)
- strategies (1)
- street culture (1)
- stress patterns (1)
- style (1)
- subject inversion (1)
- subjunctive conditionals (1)
- sufficient (but not necessarily necessary) causes (1)
- support (1)
- syllogisms (1)
- symmetric predicate (1)
- syntactic decomposition (1)
- syntactic focus marking (1)
- tag questions (1)
- technical vocabulary (1)
- television drama (1)
- temporal gradation (1)
- temporal limitation (1)
- temporal/modal operators (1)
- tense semantics (1)
- tense switches (1)
- tension (1)
- terminology (1)
- terms of address (1)
- theatre (1)
- theory of controversy (1)
- time (1)
- time annotation (1)
- tone (1)
- tone (language) (1)
- tone languages (1)
- tones (1)
- too (1)
- topic affixes (1)
- topic markers (1)
- topic-comment (1)
- traces (1)
- translation (1)
- translation procedures and techniques (1)
- translation strategies (1)
- translation studies (1)
- tree-based grammars (1)
- treebanking (1)
- treebanks (1)
- trochee (1)
- turn-taking (1)
- type shifting (1)
- type-shifting (1)
- typification (1)
- unalternative semantics (1)
- universal presupposition projection (1)
- universal quantifiers (1)
- update semantics (1)
- usernames (1)
- van Wijk's law (1)
- van Wijkov zakon (1)
- variational linguistics (1)
- varieties of English (1)
- verb placement (1)
- verb-initial language (1)
- verb-second (1)
- verba dicendi (1)
- verbal accentuation (1)
- verlan (1)
- visual representations (1)
- weak free adjuncts (1)
- werden <Wort> (1)
- wh-questions (1)
- wh-scope (1)
- wh-interrogatives (1)
- wide scope indefinites (1)
- word formation (1)
- word order in Italian and Greek (1)
- word order variation (1)
- working memory (1)
- Österreichisches Deutsch (1)
- čestice (1)
- ē-osnove (1)
- ē–stems (1)
Institute
- Extern (141)
- Institut für Deutsche Sprache (IDS) Mannheim (97)
- Neuere Philologien (26)
- Sprachwissenschaften (8)
- Medizin (2)
- Sprach- und Kulturwissenschaften (2)
- Gesellschaftswissenschaften (1)
- Informatik (1)
- SFB 268 (1)
The problem of vocalization, or diacritization, is essential to many tasks in Arabic NLP. Arabic is generally written without short vowels, which leads to one written form having several pronunciations, each pronunciation carrying its own meaning(s). In the experiments reported here, we define vocalization as a classification problem in which we decide, for each character in the unvocalized word, whether it is followed by a short vowel. We investigate the importance of different types of context. Our results show that the combination of memory-based learning with only a word-internal context leads to a word error rate of 6.64%. If a lexical context is added, the results deteriorate slowly.
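The per-character classification setup described here can be illustrated with a minimal memory-based (nearest-neighbor) sketch: store every training instance and classify new characters by feature overlap with the most similar stored window. The words and labels below are invented toy data; the actual experiments use real Arabic corpora and a full memory-based learner with feature weighting.

```python
# Minimal sketch of vocalization as per-character classification with
# memory-based (1-nearest-neighbor) learning. Toy data, not real Arabic.

def window(word, i, size=2):
    """Fixed-width character window around position i, '_' as padding."""
    padded = "_" * size + word + "_" * size
    return tuple(padded[i:i + 2 * size + 1])

def train(pairs):
    """pairs: list of (unvocalized word, per-character vowel labels).
    Memory-based learning simply stores all instances."""
    memory = []
    for word, labels in pairs:
        for i, label in enumerate(labels):
            memory.append((window(word, i), label))
    return memory

def classify(memory, word, i):
    """1-NN: pick the stored instance with the largest feature overlap."""
    feats = window(word, i)
    best = max(memory, key=lambda m: sum(a == b for a, b in zip(m[0], feats)))
    return best[1]
```

With romanized stand-ins, `classify` predicts the vowel label that followed the most similar character context in training.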
In syntax, the current trend is towards lexicalized grammar formalisms. It is widely accepted that dividing words into word classes serves as a labor-saving mechanism, but at the same time it discards all detailed information on the idiosyncratic behavior of words, and that is exactly the type of information that may be necessary in order to parse a sentence. For learning approaches, however, lexicalized grammars represent a challenge for the very reason that they include so much detailed and specific information, which is difficult to learn. This paper presents an algorithm for learning a link grammar of German. The problem of data sparseness is tackled by using all the available information from partial parses as well as from an existing grammar fragment and a tagger. This is a report on work in progress, so no representative results are available yet.
This paper presents a comparative study of probabilistic treebank parsing of German, using the Negra and TüBa-D/Z treebanks. Experiments with the Stanford parser, which uses a factored PCFG and dependency model, show that, contrary to previous claims for other parsers, lexicalization of PCFG models boosts parsing performance for both treebanks. The experiments also show that there is a big difference in parsing performance depending on whether the parser is trained on Negra or on TüBa-D/Z. Parser performance for the models trained on TüBa-D/Z is comparable to parsing results for English with the Stanford parser trained on the Penn treebank. This comparison at least suggests that German is not harder to parse than its West Germanic neighbor language English.
How to compare treebanks
(2008)
Recent years have seen an increasing interest in developing standards for linguistic annotation, with a focus on the interoperability of the resources. This effort, however, requires a profound knowledge of the advantages and disadvantages of linguistic annotation schemes in order to avoid importing the flaws and weaknesses of existing encoding schemes into the new standards. This paper addresses the question of how to compare syntactically annotated corpora and gain insights into the usefulness of specific design decisions. We present an exhaustive evaluation of two German treebanks with crucially different encoding schemes. We evaluate three different parsers trained on the two treebanks and compare results using EVALB, the Leaf-Ancestor metric, and a dependency-based evaluation. Furthermore, we present TePaCoC, a new testsuite for the evaluation of parsers on complex German grammatical constructions. The testsuite provides a well thought-out error classification, which enables us to compare parser output for parsers trained on treebanks with different encoding schemes and provides interesting insights into the impact of treebank annotation schemes on specific constructions like PP attachment or non-constituent coordination.
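EVALB-style evaluation, as mentioned in the abstract, reduces each parse to labeled spans and scores them against the gold tree. A minimal sketch of labeled bracketing F1, with hand-written span tuples standing in for real parser output:

```python
# Sketch of labeled bracketing F1 (EVALB-style): each parse becomes a
# multiset of (label, start, end) spans compared against the gold spans.
from collections import Counter

def bracket_f1(gold_spans, test_spans):
    gold, test = Counter(gold_spans), Counter(test_spans)
    matched = sum((gold & test).values())  # multiset intersection
    prec = matched / sum(test.values())
    rec = matched / sum(gold.values())
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

Real evaluation adds details this sketch omits (parameter files, punctuation handling, label equivalences), but the core score is this harmonic mean of span precision and recall.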
In the last decade, the Penn treebank has become the standard data set for evaluating parsers. The fact that most parsers are evaluated solely on this specific data set leaves unanswered the question of how much these results depend on the annotation scheme of the treebank. In this paper, we investigate the influence that different decisions in the annotation schemes of treebanks have on parsing. The investigation rests on the comparison of two similar treebanks of German, NEGRA and TüBa-D/Z, which are subsequently modified to allow a comparison of the differences. The results show that deleted unary nodes and a flat phrase structure have a negative influence on parsing quality, while a flat clause structure has a positive influence.
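One of the annotation-scheme differences studied here, the deletion of unary nodes, can be sketched as a simple tree rewrite. The tree encoding (nested `(label, children)` tuples) and the labels below are invented for illustration; the study itself transforms full NEGRA/TüBa-D/Z trees.

```python
# Sketch: deleting (collapsing) unary nonterminal nodes in a constituent
# tree. Trees are (label, children) tuples; leaves are plain strings.

def collapse_unary(tree):
    if isinstance(tree, str):
        return tree
    label, children = tree
    # While the node dominates exactly one nonterminal child, splice that
    # child out, keeping the parent label (the unary node is deleted).
    while len(children) == 1 and not isinstance(children[0], str):
        children = children[0][1]
    return (label, [collapse_unary(c) for c in children])
```

Applied to a tree like `(S (NP (NN Haus)) (VP (V steht)))`, the unary NN and V layers disappear, yielding the flatter structure whose effect on parsing the paper measures.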
Transforming constituent-based annotation into dependency-based annotation has been shown to work for different treebanks and annotation schemes (e.g. Lin (1995) has transformed the Penn treebank, and Kübler and Telljohann (2002) the Tübinger Baumbank des Deutschen (TüBa-D/Z)). These ventures are usually triggered by the conflict between theory-neutral annotation, which targets most needs of a wider audience, and theory-specific annotation, which provides more fine-grained information for a smaller audience. As a compromise, it has been pointed out that treebanks can be designed to support more than one theory from the start (Nivre, 2003). We argue that information can also be added to an existing annotation scheme so that it supports additional theory-specific annotations. We also argue that such a transformation is useful for improving and extending the original annotation scheme with respect to both ambiguous annotation and annotation errors. We show this by analysing problems that arise when generating dependency information from the constituent-based TüBa-D/Z.
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks. Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances. Such larger structures are not only desirable for a deeper syntactic analysis. They also constitute a necessary prerequisite for assigning function-argument structure. The present paper offers a similarity-based algorithm for assigning functional labels such as subject, object, head, complement, etc. to complete syntactic structures on the basis of pre-chunked input. The evaluation of the algorithm has concentrated on measuring the quality of functional labels. It was performed on a German and an English treebank, using two different annotation schemes at the level of function-argument structure. The results of 89.73% correct functional labels for German and 90.40% for English validate the general approach.
In this paper, we investigate the role of sub-optimality in training data for part-of-speech tagging. In particular, we examine to what extent the size of the training corpus and certain types of errors in it affect the performance of the tagger. We distinguish four types of errors: if a word is assigned a wrong tag, this tag can belong to the ambiguity class of the word (i.e. to the set of possible tags for that word) or not; furthermore, the major syntactic category (e.g. "N" or "V") can be correctly assigned (e.g. if a finite verb is classified as an infinitive) or not (e.g. if a verb is classified as a noun). We empirically explore the decrease in performance that each of these error types causes for different sizes of the training set. Our results show that those types of errors that are easier to eliminate have a particularly negative effect on performance. Thus, it is worthwhile concentrating on the elimination of these types of errors, especially if the training corpus is large.
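The error typology above suggests a controlled-corruption setup: replace a fraction of gold tags, either with another tag from the word's ambiguity class or with an arbitrary wrong tag from the full tagset. The sketch below is an illustrative assumption about how such corruption could be implemented, not the authors' actual code; tagset and ambiguity classes are toy examples.

```python
# Sketch: inject tagging errors into gold training data at a given rate,
# either within the word's ambiguity class or from the whole tagset.
import random

def corrupt(tagged, ambig, tagset, rate, within_class, rng):
    """tagged: list of (word, gold_tag); ambig: word -> possible tags."""
    out = []
    for word, tag in tagged:
        if rng.random() < rate:
            pool = ambig.get(word, []) if within_class else tagset
            wrong = [t for t in pool if t != tag]
            if wrong:  # only corrupt if an alternative tag exists
                tag = rng.choice(wrong)
        out.append((word, tag))
    return out
```

Training a tagger on `corrupt(...)` output at varying rates and corpus sizes would reproduce the kind of performance curves the study reports.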
Prepositional phrase (PP) attachment is one of the major sources of errors in traditional statistical parsers. The reason lies in the type of information necessary for resolving structural ambiguities. For parsing, it is assumed that distributional information about parts of speech and phrases is sufficient for disambiguation. For PP attachment, in contrast, lexical information is needed. The problem of PP attachment has sparked much interest ever since Hindle and Rooth (1993) formulated it in a way that can be easily handled by machine learning approaches: in their approach, PP attachment is reduced to the decision between noun and verb attachment, and the relevant information is reduced to the two possible attachment sites (the noun and the verb) and the preposition of the PP. Brill and Resnik (1994) extended the feature set to the now standard 4-tuple, which also contains the noun inside the PP. Among the many publications on the problem of PP attachment, Volk (2001; 2002) describes the only system for German. He uses a combination of supervised and unsupervised methods. The supervised method is based on the back-off model by Collins and Brooks (1995); the unsupervised part consists of heuristics such as "If there is a support verb construction present, choose verb attachment". Volk trains his back-off model on the Negra treebank (Skut et al., 1998) and extracts frequencies for the heuristics from the "Computerzeitung". The latter also serves as the test data set. Consequently, it is difficult to compare Volk's results to other results for German, including the results presented here, since he not only uses a combination of supervised and unsupervised learning but also performs domain adaptation. Most researchers working on PP attachment seem to be satisfied with a PP attachment system; we have found hardly any work on integrating the results of such approaches into actual parsers. The only exceptions are Mehl et al.
(1998) and Foth and Menzel (2006), both working with German data. Mehl et al. report a slight improvement of PP attachment from 475 correct PPs out of 681 PPs for the original parser to 481 PPs. Foth and Menzel report an improvement of overall accuracy from 90.7% to 92.2%. Both integrate statistical attachment preferences into a parser. First, we will investigate whether dependency parsing, which generally uses lexical information, shows the same performance on PP attachment as an independent PP attachment classifier does. Then we will investigate an approach that allows the integration of PP attachment information into the output of a parser without having to modify the parser: The results of an independent PP attachment classifier are integrated into the parse of a dependency parser for German in a postprocessing step.
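The standard 4-tuple formulation can be sketched as a tiny classifier that, loosely following the back-off idea credited to Collins and Brooks (1995), falls back from the full (verb, noun, preposition, noun) quadruple to the preposition alone when the quadruple is unseen. Data and counts below are invented; a real back-off model uses several intermediate levels and smoothed probabilities.

```python
# Sketch: PP attachment as a decision between verb ('V') and noun ('N')
# attachment from (v, n1, p, n2) tuples, with one level of back-off.
from collections import Counter

def train_counts(examples):
    """examples: list of ((v, n1, p, n2), label) with label 'V' or 'N'."""
    full, prep = Counter(), Counter()
    for (v, n1, p, n2), label in examples:
        full[(v, n1, p, n2, label)] += 1
        prep[(p, label)] += 1
    return full, prep

def attach(full, prep, v, n1, p, n2):
    # Level 1: the full quadruple, if it was seen in training.
    scores = {lab: full[(v, n1, p, n2, lab)] for lab in "VN"}
    if not sum(scores.values()):
        # Level 2: back off to the preposition alone.
        scores = {lab: prep[(p, lab)] for lab in "VN"}
    if not sum(scores.values()):
        return "N"  # default: noun attachment
    return max(scores, key=scores.get)
```

A postprocessing integration like the one proposed here would overwrite the parser's PP head choices with the classifier's decisions on the extracted tuples.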
This report explores the question of compatibility between annotation projects including translating annotation formalisms to each other or to common forms. Compatibility issues are crucial for systems that use the results of multiple annotation projects. We hope that this report will begin a concerted effort in the field to track the compatibility of annotation schemes for part of speech tagging, time annotation, treebanking, role labeling and other phenomena.
This paper reports on the SYN-RA (SYNtax-based Reference Annotation) project, an ongoing project of annotating German newspaper texts with referential relations. The project has developed an inventory of anaphoric and coreference relations for German in the context of a unified, XML-based annotation scheme for combining morphological, syntactic, semantic, and anaphoric information. The paper discusses how this unified annotation scheme relates to other formats currently discussed in the literature, in particular the annotation graph model of Bird and Liberman (2001) and the pie-in-the-sky scheme for semantic annotation.
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks. Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances. The TüSBL parser extends current chunk parsing techniques by a tree-construction component that extends partial chunk parses to complete tree structures, including recursive phrase structure as well as function-argument structure. TüSBL's tree-construction algorithm relies on techniques from memory-based learning that allow similarity-based classification of a given input structure relative to a pre-stored set of tree instances from a fully annotated treebank. A quantitative evaluation of TüSBL has been conducted using a semi-automatically constructed treebank of German that consists of approx. 67,000 fully annotated sentences. The basic PARSEVAL measures were used, although they were developed for parsers that have as their main goal a complete analysis spanning the entire input. This runs counter to the basic philosophy underlying TüSBL, which has as its main goal robustness of partially analyzed structures.
This paper provides an overview of current research on a hybrid and robust parsing architecture for the morphological, syntactic and semantic annotation of German text corpora. The novel contribution of this research lies not in the individual parsing modules, each of which relies on state-of-the-art algorithms and techniques. Rather what is new about the present approach is the combination of these modules into a single architecture. This combination provides a means to significantly optimize the performance of each component, resulting in an increased accuracy of annotation.
A lot of interest has recently been paid to constraint-based definitions and extensions of Tree Adjoining Grammars (TAG). Examples are the so-called quasi-trees, D-Tree Grammars and Tree Description Grammars (TDG). The latter are grammars consisting of a set of formulas denoting trees. TDGs are derivation-based: in each derivation step, a conjunction is built of the old formula, a formula of the grammar, and additional equivalences between node names of the two formulas. This formalism is more powerful than TAG. TDGs offer the advantages of MC-TAG and D-Tree Grammars for natural languages, and they allow underspecification. The problem, however, is that TDGs might be unnecessarily powerful for natural languages. To address this problem, this paper proposes local TDGs, a restricted version of TDGs. Local TDGs still have the advantages of TDGs, but they are semilinear and therefore more appropriate for natural languages. First, the notion of semilinearity is defined. Then local TDGs are introduced, and, finally, the semilinearity of local Tree Description Languages is proven.
This paper proposes a compositional semantics for lexicalized tree adjoining grammars (LTAG). Tree-local multicomponent derivations allow the semantic contribution of a lexical item to be separated into one component contributing to the predicate-argument structure and a second component contributing to scope semantics. Based on this idea, a syntax-semantics interface is presented in which the compositional semantics depends only on the derivation structure. It is shown that the derivation structure allows an appropriate amount of underspecification. This is illustrated by investigating underspecified representations for quantifier scope ambiguities and related phenomena such as adjunct scope and island constraints.
A hierarchy of local TDGs
(1998)
Many recent variants of Tree Adjoining Grammars (TAG) allow an underspecification of the parent relation between nodes in a tree, i.e. they do not deal with fully specified trees as is the case with TAG. Such TAG variants are, for example, Description Tree Grammars (DTG), Unordered Vector Grammars with Dominance Links (UVG-DL), a definition of TAG via so-called quasi-trees, and Tree Description Grammars (TDG). The last TAG variant, local TDG, is an extension of TAG generating tree descriptions. Local TDGs even allow an underspecification of the dominance relation between node names and thereby provide the possibility to generate underspecified representations for structural ambiguities such as quantifier scope ambiguities. This abstract deals with formal properties of local TDGs. A hierarchy of local TDGs is established, together with a pumping lemma for local TDGs of a certain rank.
Tree-local MCTAG with shared nodes: an analysis of word order variation in German and Korean
(2004)
Tree Adjoining Grammars (TAG) are known not to be powerful enough to deal with scrambling in free word order languages. The TAG-variants proposed so far in order to account for scrambling are not entirely satisfying. Therefore, an alternative extension of TAG is introduced based on the notion of node sharing. Considering data from German and Korean, it is shown that this TAG-extension can adequately analyse scrambling data, also in combination with extraposition and topicalization.
In this paper, we present an open-source parsing environment (Tübingen Linguistic Parsing Architecture, TuLiPA) which uses Range Concatenation Grammar (RCG) as a pivot formalism, thus opening the way to the parsing of several mildly context-sensitive formalisms. This environment currently supports tree-based grammars (namely Tree-Adjoining Grammars (TAG) and Multi-Component Tree-Adjoining Grammars with Tree Tuples (TT-MCTAG)) and allows computation not only of syntactic structures, but also of the corresponding semantic representations. It is used for the development of a tree-based grammar for German.
This paper proposes a corpus encoding standard that meets the needs of linguistic research using a variety of linguistic data structures. The standard was developed in SFB 441, a research project at the University of Tuebingen. The principal concern of SFB 441 are the empirical data structures which feed into linguistic theory building. SFB 441 consists of several projects, most of which are building corpora to empirically investigate various linguistic phenomena in various languages (e.g. modal verbs in German, forms of address and politeness in Russian). These corpora will form the components of the "Tuebingen collection of reusable, empirical, linguistic data structures (TUSNELDA)". The TUSNELDA annotation standard aims at providing a uniform encoding scheme for all subcorpora and texts of TUSNELDA such that they can be processed with uniform standardized tools. To guarantee maximal reusability we use XML for encoding. Previous SGML standards for text encoding were provided by the Text Encoding Initiative (TEI) and the Expert Advisory Group on Language Engineering Standards (Corpus Encoding Standard, CES). The TUSNELDA standard is based on TEI and XCES (XML version of CES) but takes into account the specific needs of the SFB projects, i.e. the peculiarities of the examined languages and linguistic phenomena.
Existing analyses of German scrambling phenomena within TAG-related formalisms all use non-local variants of TAG. However, there are good reasons to prefer local grammars, in particular with respect to the use of the derivation structure for semantics. Therefore this paper proposes to use local TDGs, a TAG-variant generating tree descriptions that shows a local derivation structure. However the construction of minimal trees for the derived tree descriptions is not subject to any locality constraint. This provides just the amount of non-locality needed for an adequate analysis of scrambling. To illustrate this a local TDG for some German scrambling data is presented.
This paper develops a framework for TAG (Tree Adjoining Grammar) semantics that brings together ideas from different recent approaches. Then, within this framework, an analysis of scope is proposed that accounts for the different scopal properties of quantifiers, adverbs, raising verbs and attitude verbs. Finally, including situation variables in the semantics, different situation binding possibilities are derived for different types of quantificational elements.
This paper presents an LTAG analysis of reflexives like himself and reciprocals like each other. These items need to find a c-commanding antecedent from which they retrieve (part of) their own denotation and with which they syntactically agree. The relation between anaphoric item and antecedent must satisfy the following important locality conditions (Chomsky 1981).
Relative quantifier scope in German depends, in contrast to English, very much on word order. The scope possibilities of a quantifier are determined by its surface position, its base position and the type of the quantifier. In this paper we propose a multicomponent analysis for German quantifiers computing the scope of the quantifier, in particular its minimal nuclear scope, depending on the syntactic configuration it occurs in.
This paper investigates the relation between TT-MCTAG, a formalism used in computational linguistics, and RCG. RCGs are known to describe exactly the class PTIME; simple RCGs have even been shown to be equivalent to linear context-free rewriting systems, i.e., to be mildly context-sensitive. TT-MCTAG has been proposed to model free word order languages. In general, it is NP-complete. In this paper, we put an additional limitation on the derivations licensed in TT-MCTAG. We show that TT-MCTAG with this additional limitation can be transformed into equivalent simple RCGs. This result is interesting for theoretical reasons (since it shows that TT-MCTAG in this limited form is mildly context-sensitive) and also for practical reasons: we use the proposed transformation from TT-MCTAG to RCG in an actual parser that we have implemented.
This paper sets up a framework for LTAG (Lexicalized Tree Adjoining Grammar) semantics that brings together ideas from different recent approaches addressing some shortcomings of TAG semantics based on the derivation tree. Within this framework, several sample analyses are proposed, and it is shown that the framework makes it possible to analyze data that have been claimed to be problematic for derivation tree based LTAG semantics approaches.
LTAG semantics for questions
(2004)
This paper presents a compositional semantic analysis of interrogative clauses in LTAG (Lexicalized Tree Adjoining Grammar) that captures the scopal properties of wh- and non-wh-quantificational elements. It is shown that the present approach derives the correct semantics for examples claimed to be problematic for LTAG semantic approaches based on the derivation tree. The paper further provides an LTAG semantics for embedded interrogatives.
Our paper aims at capturing the distribution of negative polarity items (NPIs) within lexicalized Tree Adjoining Grammar (LTAG). The condition under which an NPI can occur in a sentence is for it to be in the scope of a negation with no quantifiers scopally intervening. We model this restriction within a recent framework for LTAG semantics based on semantic unification. The proposed analysis provides features that signal the presence of a negation in the semantics and that specify its scope. We extend our analysis to modelling the interaction of NPI licensing and neg raising constructions.
This paper addresses the problem of constraints for relative quantifier scope, in particular in inverse linking readings where certain scope orders are excluded. We show how to account for such restrictions in the Tree Adjoining Grammar (TAG) framework by adopting a notion of flexible composition. In the semantics we use for TAG we introduce quantifier sets that group quantifiers that are "glued" together in the sense that no other quantifier can scopally intervene between them. The flexible composition approach allows us to obtain the desired quantifier sets and thereby the desired constraints for quantifier scope.
In this paper we will explore the similarities and differences between two feature logic-based approaches to the composition of semantic representations. The first approach is formulated for Lexicalized Tree Adjoining Grammar (LTAG, Joshi and Schabes 1997), the second is Lexical Resource Semantics (LRS, Richter and Sailer 2004) and was first defined in Head-driven Phrase Structure Grammar. The two frameworks have several common characteristics that make them easy to compare: 1. They use languages of two-sorted type theory for semantic representations. 2. They allow underspecification. LTAG uses scope constraints while LRS provides component-of constraints. 3. They use feature logics for computing semantic representations. 4. They are designed for computational applications. By comparing the two frameworks we will also point out some characteristics and advantages of feature logic-based semantic computation in general.
TT-MCTAG lets one abstract away from the relative order of co-complements in the final derived tree, which is more appropriate than classic TAG when dealing with flexible word order in German. In this paper, we present the analyses for sentential complements, i.e., wh-extraction, that-complementation and bridging, and we work out the crucial differences between these and respective accounts in XTAG (for English) and V-TAG (for German).
In this paper we propose a compositional semantics for lexicalized tree-adjoining grammar (LTAG). Tree-local multicomponent derivations allow separation of the semantic contribution of a lexical item into one component contributing to the predicate argument structure and a second component contributing to scope semantics. Based on this idea a syntax-semantics interface is presented where the compositional semantics depends only on the derivation structure. It is shown that the derivation structure (and indirectly the locality of derivations) allows an appropriate amount of underspecification. This is illustrated by investigating underspecified representations for quantifier scope ambiguities and related phenomena such as adjunct scope and island constraints.
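The general idea behind underspecified scope representations of this kind can be sketched independently of LTAG: instead of listing readings, one states dominance constraints ("operator a outscopes b") and the readings are exactly the operator orders consistent with them. The toy enumerator below (illustrative only; the operator names and constraint format are not from the paper) shows how adding one constraint, e.g. for an island effect, prunes the set of readings:

```python
from itertools import permutations

def readings(operators, constraints):
    """Enumerate scope orders (outermost first) consistent with
    dominance constraints: a pair (a, b) means 'a outscopes b'."""
    result = []
    for order in permutations(operators):
        pos = {op: i for i, op in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in constraints):
            result.append(order)
    return result

# Both quantifiers must outscope the verbal core: two readings remain,
# every > some > core and some > every > core.
ambiguous = readings(["every", "some", "core"],
                     [("every", "core"), ("some", "core")])

# One extra dominance edge (some > every) disambiguates to one reading.
constrained = readings(["every", "some", "core"],
                       [("every", "core"), ("some", "core"),
                        ("some", "every")])
```

Real underspecification formalisms solve such constraint sets symbolically rather than by brute-force enumeration, but the contract is the same: one compact description, a set of fully specified readings as its solutions.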
In this paper, we introduce an extension of the XMG system (eXtensible MetaGrammar) in order to allow for the description of Multi-Component Tree Adjoining Grammars. In particular, we introduce the XMG formalism and its implementation, and show how the latter makes it possible to extend the system relatively easily to different target formalisms, thus opening the way towards multi-formalism.
Developing linguistic resources, in particular grammars, is known to be a complex task in itself, because of (among other things) redundancy and consistency issues. Furthermore, some languages can prove hard to describe because of specific characteristics, e.g. the free word order in German. In this context, we present (i) a framework for describing tree-based grammars, and (ii) an actual fragment of a core multicomponent tree-adjoining grammar with tree tuples (TT-MCTAG) for German developed using this framework. This framework combines a metagrammar compiler and a parser based on range concatenation grammar (RCG) to check, respectively, the consistency and the correctness of the grammar. The German grammar being developed within this framework already deals with a wide range of scrambling and extraction phenomena.
This paper compares two approaches to computational semantics, namely semantic unification in Lexicalized Tree Adjoining Grammars (LTAG) and Lexical Resource Semantics (LRS) in HPSG. There are striking similarities between the frameworks that make them comparable in many respects. We will exemplify the differences and similarities by looking at several phenomena. We will show, first of all, that many intuitions about the mechanisms of semantic computations can be implemented in similar ways in both frameworks. Secondly, we will identify some aspects in which the frameworks intrinsically differ due to more general differences between the approaches to formal grammar adopted by LTAG and HPSG.
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of i) the string languages it generates (weak generative capacity (WGC)) or ii) the tree languages it generates (strong generative capacity (SGC)). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic since the sets of trees a grammar formalism for natural languages should be able to generate are difficult to determine. The concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate argument dependencies as a criterion for adequacy for natural languages.
In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.
Multicomponent Tree Adjoining Grammars (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG however is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. Looking only at the result of a derivation (i.e., the derived tree and the derivation tree), this simultaneity is no longer visible and therefore cannot be checked. That is, this way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. Therefore, in this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees the MCTAG licenses.
Multicomponent Tree Adjoining Grammars (MCTAG) is a formalism that has been shown to be useful for many natural language applications. The definition of MCTAG however is problematic since it refers to the process of the derivation itself: a simultaneity constraint must be respected concerning the way the members of the elementary tree sets are added. This way of characterizing MCTAG does not allow one to abstract away from the concrete order of derivation. In this paper, we propose an alternative definition of MCTAG that characterizes the trees in the tree language of an MCTAG via the properties of the derivation trees (in the underlying TAG) the MCTAG licenses. This definition gives a better understanding of the formalism, it allows a more systematic comparison of different types of MCTAG, and, furthermore, it can be exploited for parsing.
The present work reports two experiments on brain electric correlates of cognitive and emotional functions. (1) Studying paranormal belief, 35-channel resting EEG (10 believers and 13 skeptics) was analyzed with "Low Resolution Electromagnetic Tomography" (LORETA) in seven frequency bands. LORETA gravity centers of all bands shifted to the left in believers vs. skeptics, and showed that believers had stronger left fronto-temporo-parietal activity than skeptics. Self-rating of affective attitude showed believers to be less negative than skeptics. The observed EEG lateralization agreed with the 'valence hypothesis' that posits predominant left hemispheric processing for positive emotions. (2) Studying emotions, positive and negative emotion words were presented to 21 subjects while "Event-Related Potentials" (ERPs) were recorded. During word presentation (450 ms), 13 microstates (steps of information processing) were identified. Three microstates showed different potential maps for positive vs. negative words; LORETA functional imaging showed stronger activity in microstate #4 (106-122 ms) for positive words right anterior, for negative words left central; in #6 (138-166 ms) for positive words left anterior, for negative words left posterior; in #7 (166-198 ms) for positive words right anterior, for negative words right central. In conclusion: during word processing, the extraction of emotion content starts as early as 106 ms after stimulus onset; the brain identifies emotion content repeatedly in three separate, brief microstate epochs; and this processing of emotion content in the three microstates involves different brain mechanisms to represent the distinction between positive and negative valence.
This paper examines the development of periphrastic constructions involving auxiliary "have" and "be" with a past participle in the history of English, on the basis of parsed electronic corpora. It is argued that the two constructions represented distinct syntactic and semantic structures: while the one with have developed into a true perfect in the course of Middle English, the one with be remained a stative resultative throughout its history. In this way, it is explained why the be construction was rarely or never used in a number of contexts, including past counterfactuals, iteratives, duratives, certain kinds of infinitives and various other utterance types that cannot be characterized as perfects of result. When the construction with have became a true perfect, it was used in such contexts, regardless of the identity of the main verb, leading to the appearance of have with verbs like come which had previously only taken be. Crucially, however, have was not spreading at the expense of be, as the be perfect had never been used in such contexts, but rather at the expense of the old simple past. At least until the end of the Early Modern English period, the shift in the relative frequency of have and be perfects is to be explained in terms of the expansion of the former into new contexts, while the latter remained stable. A formal analysis is proposed, taking as its starting point a comparison with German which shows that the older English be perfect indeed behaves more like the German stative passive than its haben and sein perfects.
In this paper, we will argue for a novel analysis of the auxiliary alternation in Early English, its development and subsequent loss which has broader consequences for the way that auxiliary selection is looked at cross-linguistically. We will present evidence that the choice of auxiliaries accompanying past participles in Early English differed in several significant respects from that in the familiar modern European languages. Specifically, while the construction with have became a full-fledged perfect by some time in the ME period, that with be was actually a stative resultative, which it remained until it was lost. We will show that this accounts for some otherwise surprising restrictions on the distribution of BE in Early English and allows a better understanding of the spread of HAVE through late ME and EModE. Perhaps more importantly, the Early English facts also provide insight into the genesis of the kind of auxiliary selection found in German, Dutch and Italian. Our analysis of them furthermore suggests a promising strategy for explaining cross-linguistic variation in auxiliary selection in terms of variation in the syntactico-semantic structure of the perfect. In this introductory section, we will first provide some background on the historical situation we will be discussing, then we will lay out the main claims for which we will be arguing in the paper.
In this paper I seek to account for the productive word-formation process resulting in the current proliferation of un-nouns, the semi-legitimate offspring of Humpty Dumpty's un-birthday present (1871) and 7-Up's commercial incarnation as The Un-Cola (1968), a construction that can be linked to the more well-established categories of un-adjectives and un-verbs, whose formation constraints we will also examine. Drawing on a large corpus of novel un-nouns I have assembled in collaboration with Beth Levin, presented in the Appendices to this paper, I will invoke Rosch's prototype semantics and Aristotle's notion of PRIVATIVE opposites, defined in terms of a marked exception to a general class property, to generalize across the different categories of un-words. It will be argued that a given un-noun refers either to an element just outside a given category with whose members it shares a salient function (e.g. un-cola) or to a peripheral member of a given category (an unhotel is a hotel, but not a good exemplar of the class, not a HOTEL hotel).
The retreat of BE as perfect auxiliary in the history of English is examined. Corpus data are presented showing that the initial advance of HAVE was most closely connected to a restriction against BE in past counterfactuals. Other factors which have been reported to favor the spread of HAVE are either dependent on the counterfactual effect, or significantly weaker in comparison. It is argued that the effect can be traced to the semantics of the BE perfect, which denoted resultativity rather than anteriority proper. Related data from other older Germanic and Romance languages are presented, and finally implications for existing theories of auxiliary selection stemming from the findings presented are discussed.
In the course of the ME period, HAVE began to encroach on territory previously held by BE. According to Rydén and Brorström (1987) and Kytö (1997), this occurred especially in iterative and durational contexts, in the perfect infinitive and in modal constructions. In Early Modern English (henceforth EModE), BE was increasingly restricted to the most common intransitives come and go, before disappearing entirely in the 18th and 19th centuries. This development raises a number of questions, both historical and theoretical. First, why did HAVE start spreading at the expense of BE in the first place? Second, why was the change conditioned by the factors mentioned by Rydén and Brorström (1987) and Kytö (1997)? Third, why did the change take on the order of 800 years to go to completion? Fourth, what implications does the change have for general theories of auxiliary selection? In this paper we will try to answer the first question by focusing on one of the earliest clearly identifiable advances of HAVE onto BE territory – its first appearance with the verb come, which for a number of reasons is an ideal verb to focus on. First, come is by far the most common intransitive verb, so we get large enough numbers for statistical analysis. Second, clauses containing the past participle of come with a form of BE are unambiguous perfects: they cannot be passives, and they did not continue into modern English with a stative reading like he is gone. Third, and perhaps most importantly, come selected BE categorically in the early stages of English, so the first examples we find with HAVE are clear evidence for innovation. We will present evidence from a corpus study showing that the first spread of HAVE was due to a ban on auxiliary BE in certain types of counterfactual perfects, and will propose an account for that ban in terms of Iatridou's (2000) Exclusion theory of counterfactuals.
It has often been noticed that one syntactic argument position can be realized by elements which seem to realize different thematic roles. This is notably the case with the external argument position of verbs of change of state, which licenses volitional agents, instruments or natural forces/causers, showing the generality and abstractness of the external argument relation. (1) a. John broke the window (Agent) b. The hammer broke the window (Instrument) c. The storm broke the window (Causer) In order to capture this generality, Van Valin & Wilkins (1996) and Ramchand (2003), among others, have proposed that the thematic role of the external argument position is in fact underspecified. The relevant notion is that of an effector (in Van Valin & Wilkins) or of an abstract causer/initiator (in Ramchand). In this paper we argue against a total underspecification of the external argument relation. While we agree that (1b) does not instantiate an instrument theta role in subject position, we argue that a complete underspecification of the external theta-position is not feasible, and that two types of external theta roles have to be distinguished, Agents and Causers. Our arguments are based on languages where Agents and Causers show morpho-syntactic independence (section 2.1) and on the behavior of instrument subjects in English, Dutch, German and Greek (sections 2.2 and 3). We show that instrument subjects are either Agent- or Causer-like. In section 4 we give an analysis of how arguments realizing these thematic notions are introduced into the syntax.
Verbs, nouns and affixation
(2008)
What explains the rich patterns of deverbal nominalization? Why do some nouns have argument structure, while others do not? We seek a solution in which properties of deverbal nouns are composed from properties of verbs, properties of nouns, and properties of the morphemes that relate them. The theory of each, plus the theory of how they combine, should give the explanation. In exploring this, we investigate properties of two theories of nominalization. In one, the verb-like properties of deverbal nouns result from verbal syntactic structure (a "structural model"; see, for example, van Hout & Roeper 1998; Fu, Roeper and Borer 1993, 2001, to appear; Alexiadou 2001, to appear). According to the structural hypothesis, some nouns contain VPs and/or verbal functional layers. In the other theory, the verbal properties of deverbal nouns result from the event structure and argument structure of the DPs that they head. By "event structure" we mean a representation of the elements and structure of a linguistic event, not a representation of the world. We refer to this view as the "event model". According to the event model hypothesis, all derived nouns are represented with the same syntactic structure, the difference lying in argument structure – which in turn is critically related to event structure, in the way sketched in Grimshaw (1990) and Siloni (1997), among others. In pursuing these lines of analysis, and at least to some extent disentangling their properties, we reach the conclusion that, with respect to a core set of phenomena, the two theories are remarkably similar – specifically, they achieve success with the same problems, and must resort to the same stipulations to address the remaining issues that we discuss (although the stipulations are couched in different forms).
In many languages, a passive-like meaning may be obtained through a noncanonical passive construction. The get passive (1b) in English, the se faire passive (2b) in French and the kriegen passive (3b) in German represent typical manifestations. This squib focuses on the behavior of the get-passive in English and discusses a number of restrictions associated with it as well as the status of get.
In this paper we investigate Greek, an optional clitic doubling language not subject to Kayne's generalization (Jaeggli 1982), and we argue that in this language, doubled DPs are in A-positions. We propose that Greek clitics are formal features that move, permitting DPs in argument positions. This leads to a typology according to which there are two types of clitic/agreement languages (configurational and nonconfigurational ones), depending upon whether clitics are instantiations of formal features or not.
The paper is structured as follows. Section 2.1 introduces the basic classes of adjectives that constitute the factual core of the paper. Section 2.2 summarizes in greater detail the X° and the XP movement approaches to word order variation within the DP. Section 3 briefly discusses problems for both approaches. Sections 4.1, 5.1, and 5.2 draw from Alexiadou (2001) and contain a discussion of Greek DS and its relevance for a re-analysis of the word order variation in the Romance DP. Section 4.2 introduces refinements to Alexiadou & Wilder (1998) and Alexiadou (2001). Section 5.3 discusses certain issues that arise from the analysis of postnominal adjectives in Romance as involving raising of XPs. Section 6 discusses phenomena found in other languages which at first sight seem similar to DS. However, I show that double definiteness in e.g. Hebrew, Scandinavian or other Balkan languages constitutes a different type of phenomenon from Greek DS, thus making a distinction between determiners that introduce CPs (Greek) and those that are merely morphological/agreement markers (Hebrew, Scandinavian, Albanian).
Class features as probes
(2008)
In this article, we address (i) the form and (ii) the function of inflection class features in minimalist grammar. The empirical evidence comes from noun inflection systems involving fusional markers in German, Greek, and Russian. As for (i), we argue (based on instances of transparadigmatic syncretism) that class features are not privative; rather, class information must be decomposed into more abstract, binary features. Concerning (ii), we propose that class features qualify as the very device that brings about fusional inflection: they are uninterpretable in syntax and act as probes on stems, with matching inflection markers as goals, and thus trigger morphological Agree operations that merge stem and inflection marker before syntax is reached.
The goal of this paper is to re-examine the status of the condition in (1) proposed in Alexiadou and Anagnostopoulou (2001; henceforth A&A 2001), in view of recent developments in syntactic theory. (1) The subject-in-situ generalization (SSG) By Spell-Out, vP can contain only one argument with a structural Case feature. We argue that (1) is a more general condition than previously recognized, and that the domain of its application is parametrized. More specifically, based on a comparison between Indo-European (IE) and Khoisan languages, we argue that (1) supports an interpretation of the EPP as a general principle, and not as a property of T. Viewed this way, the SSG is a condition that forces dislocation of arguments as a consequence of a constraint on Case checking.
In this paper we compare the distribution of PPs introducing external arguments in nominalizations with PPs introducing external arguments in the verbal domain. We show that several mismatches exist between the behavior of PPs in nominalizations and PPs in the verbal domain. This leads us to suggest that while PPs in the verbal domain are licensed by functional structure alone, within the nominal domain, PPs can also be licensed via an interplay of the encyclopaedic meaning of the root involved and the properties of the preposition itself. This second mechanism kicks in in the absence of functional structure.
Structuring participles
(2008)
In this paper we discuss three types of adjectival participles in Greek, ending in -tos and –menos, and provide a further argument for the view that finer distinctions are necessary in the domain of participles (Kratzer 2001, Embick 2004). We further compare Greek stative participles to their German (and English) counterparts. We propose that a number of semantic as well as syntactic differences shown by these derive from differences in their respective morpho-syntactic composition.
In this paper we investigate the distribution of PPs related to external arguments (agent, causer, instrument, causing event) in Greek. We argue that their distribution supports an analysis according to which agentive/instrument PPs and causer PPs are licensed by distinct functional heads. We argue against a conceivable alternative analysis, which links agentivity and causation to the prepositions themselves. We furthermore identify a particular type of Voice head in Greek anticausatives, realised by non-active Voice morphology.
In the recent literature there is growing interest in the morpho-syntactic encoding of hierarchical effects. The paper investigates one domain where such effects are attested: ergative splits conditioned by person. This type of split is then compared to hierarchical effects in direct-inverse alternations. On the basis of two case studies (Lummi instantiating an ergative split person language and Passamaquoddy an inverse language) we offer an account that makes no use of hierarchies as a primitive. We propose that the two language types differ as far as the location of person features is concerned. In inverse systems person features are located exclusively in T, while in ergative systems they are located in T and a particular type of v. A consequence of our analysis is that Case checking in split and inverse systems is guided by the presence/absence of specific phi-features. This in turn provides evidence for a close connection between Case and phi-features, reminiscent of Chomsky's (2000, 2001) Agree.
On the role of syntactic locality in morphological processes : the case of (Greek) derived nominals
(2008)
The paper is structured as follows. In section 2, I briefly summarize the facts on English and Greek nominalizations. In section 3, I discuss English nominal derivation in some detail. In section 4, I turn to the question of licensing of AS in nominals. In section 5, I turn to the issue of the optionality of licensing of AS in the nominal system.
The causative/anticausative alternation has been the topic of much typological and theoretical discussion in the linguistic literature. This alternation is characterized by verbs with transitive and intransitive uses, such that the transitive use of a verb V means roughly "cause to V-intransitive" (see Levin 1993). The discussion revolves around two issues: the first one concerns the similarities and differences between the anticausative and the passive, and the second one concerns the derivational relationship, if any, between the transitive and intransitive variant. With respect to the second issue, a number of approaches have been developed. Since the approach according to which each variant is assigned an independent lexical entry has been judged conceptually unsatisfactory, it was concluded that the two variants have to be derivationally related. The question then is which one of the two is basic and where this derivation takes place in the grammar. Our contribution to this discussion is to argue against derivational approaches to the causative/anticausative alternation. We focus on the distribution of PPs related to external arguments (agent, causer, instrument, causing event) in passives and anticausatives of English, German and Greek, and on the set of verbs undergoing the causative/anticausative alternation in these languages. We argue that the crosslinguistic differences in these two domains provide evidence against both causativization and detransitivization analyses of the causative/anticausative alternation. We offer an approach to this alternation which builds on a syntactic decomposition of change of state verbs into a Voice and a CAUS component. Crosslinguistic variation in passives and anticausatives depends on properties of Voice and its combinations with CAUS and various types of roots.
This paper deals with the variable position of adjectives in the Romanian DP. As all other Romance languages, Romanian allows for adjectives to appear in both prenominal and post-nominal position. In addition, however, Romanian has a third pattern: the so-called cel construction, in which the adjective in the post-nominal position is preceded by a determiner-like element, cel. This pattern is superficially similar to Determiner Spreading in Greek. In this paper we contrast the cel construction to Greek DS and discuss the similarities and differences between the two. We then present an analysis of cel as involving an appositive specification clause, building on de Vries (2002). We argue that the same structure is also involved in the context of nominal ellipsis, the second environment in which cel is found.
A commonly held view in the literature on Scrambling and Clitic Doubling is that both constructions are sensitive to Specificity. For this reason Sportiche (1992) proposes to unify the two, an approach which has become quite standard in the relevant literature ever since. However, the claim that clitic doubling is the counterpart of Germanic scrambling has never been substantiated. In this paper we present extensive evidence from Greek that Clitic Doubling has common formal properties with Germanic Scrambling/Object Shift. Our evidence consists mainly of binding facts observed when doubling takes place, which seem, at first sight, to be completely unexpected. On closer inspection, however, it turns out that these facts are strongly reminiscent of the effects showing up in Germanic scrambling. We propose that these properties can be derived under a theory of clitic constructions along the lines of Sportiche (1992) implemented in the framework of Chomsky (1995). Finally, we suggest that the crosslinguistic distribution of Scrambling as opposed to Clitic Doubling should be linked to a parameter relating to properties of Agr: Move/Merge XP vs. Move/Merge X° to Agr. We show that this parameter unifies the behaviour of subjects and objects within a language and across languages. The paper is organised as follows. In section 2 we present evidence from binding, interpretational and prosodic effects that doubling and scrambling display very similar properties. In section 3 we present Sportiche's account and point out some problems for it. In section 4 we present our proposal.
The limits of Cushitic
(1980)
The objects of this study are the genetic subgrouping and historical reconstruction of Cushitic. Applying the criterion of shared linguistic innovations, the following conclusions are possible: (1) Ik is not a Cushitic language, indeed not even an Afroasiatic one. (2) It is by no means certain that the Burji-Sidamo group (Rift Valley Cushitic) forms a genetic branch, East Cushitic, together with Lowland Cushitic. The Burji-Sidamo group may instead be most closely related to Agaw and form a different genetic branch with it, Highland Cushitic. (3) The Iraqw group, and with it presumably all of South Cushitic, belongs to Lowland Cushitic and does not constitute an independent branch of Cushitic. (4) Although Beja is undoubtedly an Afroasiatic language, it has not been reliably demonstrated that it belongs to Cushitic. Its exact position relative to Cushitic (the branch most closely related to Cushitic, or not even that?) remains to be clarified. The discussion and argumentation rest on reconstructions of the verbal system and of case, on a systematic comparison of the determination elements and the genitive morphemes, and on other syntactic and morphological features. Some principles of linguistic typology are also drawn upon. These are preliminary results.
Theories of cognition that are based on information processing and representation are reactive (Rosen, 1985) or backwards looking, not anticipatory. In a previous article (Thibault, 2005a), I looked at the reasons why humans and bonobos do not need an innate language faculty in order to be minded, languaging beings. The present article takes up some of the questions explored there, but asks, conversely, what sort of a minded agent has language, and what kind of account of language, and more broadly of meaning, do we need to explain minded, languaged agents and the activities they participate in? Following Rosen (1985), I also take up and further develop a point first raised in Thibault (2004a: 187) on language as an anticipatory system, rather than a reactively ‘representational’ one (see also Bickhard, 2005).
The paper focuses on business negotiation in settings in which participants from different mother-tongue backgrounds choose French, English and/or German as one of their languages of communication. A general scheme of the action pattern of buying and selling will be sketched out which allows us to analyze specific courses of verbal action according to their communicative functions within the negotiation process. In particular, the discourse of business communication is to be specified as a decision-making process on the part of the buyer which is executed in a step-by-step order, and which is open to the application of a bundle of the seller's strategies, tactics, and communicative techniques. In international negotiations, effects of unobserved miscommunication include, among others, far-stretched communicative circles, prolongation of negotiation time, non-functional explanations and several other repetitive structures.
1. Languages of Trade and Commerce - Languages of Communication
2. Communication in a Buy-Sell Context Is Patterned
2.1. Entering the Pattern
2.2. The Main Phase
2.3. The Bidding Phase
2.4. The Specific Conditions
2.5. Negotiating the Contract
3. The Central Point
3.1. The Buyer's Decision-Making Process
3.2. Decision-Making and Role-Playing
3.3. Intercultural Difference of the Decision-Making Process
4. Bridging the Buyer's Gap of Knowledge
5. The Language of Trade and Commerce
6. The Needs of Further Research: Data
References
What are the similarities and differences in the loss of grammatical systems across individual languages? To answer this question, I examine structural consequences of language attrition and the correspondences between language-particular and cross-linguistic phenomena under circumstances of severe attrition. However, the very formulation of this approach, involving "severe attrition", already warrants some clarification. It leads to the formulation of two collateral questions. First, how can the level of language attrition be quantified? Second, which structural features are diagnostic of the decline of grammar? I present data on structural change in six attrited languages as compared to non-attrited control languages and demonstrate that there is significant parallelism in structural change across languages. Next, I show a correlation between levels of grammatical and lexical loss and introduce a simple test allowing us to measure the level of attrition.
There is an inexhaustible stream of theoretical work on aspect. More than 20 major books of a general nature have come out during the past few years, not to mention the vast number of shorter articles. The theoretical proposals found in these works are often radically different. What is the state of the art in this highly controversial area? To what extent can the "ordinary working linguist" profit from the flood of theoretical proposals? This paper started out as a review article on five recent books on aspect. These reviews are incorporated here into a general assessment of contemporary aspect theories. We will classify different approaches to aspect and try to sort out their theoretical primitives. The paper concludes with a brief summary pointing out the most urgent desiderata for a typologically adequate approach to aspect.
With one group generally constituting the autochthonous host - representing the core population in the centre - immigrant groups tend to reside in separate ethnic wards and even work in wards/quartiers identified with their ethno-specific crafts and trades - and often named after them. The socio-linguistic survey will therefore use available and new maps and ethno-linguistic statistics: for the former, the urban surveys by the Max Lock Company of north-eastern Nigeria have been of great help, but have to be updated; for the latter, various censuses had to be supplemented by more recent information. With ethno-linguistic wards constituting enclaves which can only interact through a language or languages in common, we can apply the general model of the triglottic configuration by positing x territorial and y immigrant, ethnic languages of solidarity; one general urban community language or lingua franca of interaction; and the official language of authority and administration. This language of authority was formerly a local aristolect (Kanuri or Fulfulde), but is now mostly an exolect - English or French. This short presentation concerns ongoing work in urban socio-linguistics developed in Maiduguri over some 15 years.
This paper argues that short (clause-internal) scrambling to a pre-subject position has A properties in Japanese but A'-properties in German, while long scrambling (scrambling across sentence boundaries) from finite clauses, which is possible in Japanese but not in German, has A'-properties throughout. It is shown that these differences between German and Japanese can be traced back to parametric variation of phrase structure and the parameterized properties of functional heads. Due to the properties of Agreement, sentences in Japanese may contain multiple (Agro- and Agrs-) specifiers whereas German does not allow for this. In Japanese, a scrambled element may be located in a Spec AgrP, i.e. an A- or L-related position, whereas scrambled NPs in German can only appear in an AgrP-adjoined (broadly-L-related) position, which only has A'-properties. Given our assumption that successive cyclic adjunction is generally impossible, elements in German may not be long scrambled because a scrambled element that is moved to an adjunction site inside an embedded clause may not move further. In Japanese, long distance scrambling out of finite CPs is possible since scrambling may proceed in a successive cyclic manner via embedded Spec- (AgrP) positions. Our analysis of the differences between German and Japanese scrambling provides us with an account of further contrasts between the two languages such as the existence of surprising asymmetries between German and Japanese remnant-movement phenomena, and the fact that unlike German, Japanese freely allows wh-scrambling. Investigation of the properties of Japanese wh-movement also leads us to the formulation of the "Wh-cluster Hypothesis", which implies that Japanese is an LF multiple wh-fronting language.
In this paper I discuss the properties of particle verbs in light of a proposal about syntactic projection. In section 2 I suggest that projection involves functional structure in two important ways: (i) only functional phrases can be complements, and (ii) lexical heads that take complements and project must be inflected. In section 3, I show that the structure of particle verbs is not uniform with respect to (i) and (ii). On the one hand, a particle always combines with an inflected verb; in this respect, particle verbs look like verb-complement constructions. On the other hand, the particle is not a functional phrase and therefore is not a proper complement, which makes the combination of the particle and the verb look more like a morphologically complex verb. I argue that syntactic rules can in fact interpret the node dominating the particle and the verb as a projection and as a complex head. In section 4, I show that many of the characteristic properties of particle verbs in the Germanic languages follow from the fact that they are structural hybrids.
Expletives as features
(2000)
Expletives have always been a central topic of theoretical debate and subject to different analyses within the different stages of the Principles and Parameters theory (see Chomsky 1981, 1986, 1995; Lasnik 1992, 1995; Frampton and Gutman 1997; among others). However, most analyses center on the question of how to explain the behavior of expletives in A-chains (such as there in English or það in Icelandic). No account relates wh-expletives (as one finds them in so-called partial wh-movement constructions in languages such as Hungarian, Romani, and German) to expletives in A-chains. In this paper, I argue that the framework of the Minimalist Program opens up the possibility of accounting for expletive-associate relations in A-/A'-chains in a unified manner. The main idea of the unitary analysis is that an expletive is an overtly realized feature bundle that is (sub)extracted from its associate DP. There in an expletive-associate chain is a moved D-feature which originates inside the associate DP. Similarly, in A'-chains, the wh-expletive originates as a focus-/wh-feature in the wh-phrase with which it is associated. This analysis provides evidence for the feature-checking theory in Chomsky (1995). The paper is organized as follows. Section 2 contains the discussion of expletive there. In section 3 I suggest an analysis for wh-expletives, and I also explore whether this analysis can be extended to relations between X°-categories such as auxiliary and participle complexes.
In this paper I show that Clitic Climbing (CC) in Spanish and Long Scrambling (LS) in German (and Polish) are (im-)possible out of the same environments. For an explanation of this fact I propose a feature-oriented analysis of incorporation phenomena. The idea is that restructuring is a phenomenon of syntactic incorporation. In German and Polish, Agro incorporates covertly into the matrix clause and licenses LS out of the infinitival into the matrix clause. Similarly, the clitic in Spanish, which is analysed as an Agro-head, incorporates into the matrix clause. I argue that this movement is necessary for reasons of feature-checking, i.e. for checking of a [+R]- or Restructuring-feature. In section 2 I discuss several differences between CC and LS. For example, the proposed analysis correctly predicts that clitics, in contrast to scrambled phrases, are subject to several serialization restrictions. Throughout the paper I use the term restructuring only in a descriptive sense, in order to describe the phenomenon in question.
Plural semantics for natural language understanding : a computational proof-theoretic approach
(2005)
The semantics of natural language plurals poses a number of intricate problems – both from a formal and a computational perspective. In this thesis I investigate problems of representing, disambiguating and reasoning with plurals from a computational perspective. The work defines a computationally suitable representation for important plural constructions, proposes a tractable resolution algorithm for semantic plural ambiguities, and integrates an automatic reasoning component for plurals. My solution combines insights from formal semantics, computational linguistics and automated theorem proving and is based on the following main ideas. Whereas many existing approaches to plural semantics work on a model-theoretic basis using higher-order representation languages, I propose a proof-theoretic approach to plural semantics based on a flat first-order semantic representation language, thus showing that a trade-off between expressive power and logical tractability can be found. The problem of automatic disambiguation of plurals is tackled by a deliberate decision to drastically reduce recourse to contextual knowledge for disambiguation but to rely instead on structurally available and thus computationally manageable information. A further central aspect of the solution lies in carefully drawing the borderline between real ambiguity and mere indeterminacy in the interpretation of plural noun phrases. As a practical result of my computational proof-theoretic approach to plural semantics I can use my methods to perform automated reasoning with plurals by applying advanced first-order theorem provers and model-generators available off-the-shelf. The results are prototypically implemented within the two logic-oriented natural language understanding applications DRoPs and Attempto. DRoPs provides an automatic plural disambiguation component for uncontrolled natural language whereas Attempto works with a constructive disambiguation strategy for controlled natural language.
Both systems provide tools for the automated analysis of technical texts allowing users, for example, to automatically detect inconsistencies, to perform question answering, to check whether a conjecture follows from a text or to find equivalences and redundancies.
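The collective/distributive ambiguity at the heart of plural disambiguation can be illustrated by evaluating both readings against a tiny finite first-order model. The following Python sketch is purely illustrative: the function names, predicates and example model are invented for this listing and are not part of the DRoPs or Attempto implementations.

```python
# Toy finite-model illustration of the distributive vs. collective
# readings of plural predication. Hypothetical sketch, not the thesis's
# actual proof-theoretic machinery.

def lift_distributive(pred):
    """Distributive reading: every atomic member of the group satisfies pred."""
    return lambda group: all(pred(x) for x in group)

def lift_collective(pred_of_group):
    """Collective reading: the predicate holds of the group as a whole."""
    return lambda group: pred_of_group(group)

# A small model: "the children" denotes a plural individual (a set of atoms).
children = {"ann", "ben", "cara"}

# "smile" is naturally distributive: it holds of atomic individuals.
smiles = {"ann", "ben", "cara"}
smile = lambda x: x in smiles

# "gather" is naturally collective: it holds of groups, not of atoms.
gather = lambda group: len(group) >= 2

# "The children smiled": distributive lift over an atomic predicate.
print(lift_distributive(smile)(children))   # True: each child smiled

# "The children gathered": collective predicate applied to the group.
print(lift_collective(gather)(children))    # True: the group gathered

# Misanalysing "gather" distributively fails: no single atom can gather.
print(lift_distributive(lambda x: gather({x}))(children))  # False
```

The contrast in the last two lines is the kind of reading distinction a plural disambiguation component must resolve before handing a formula to a theorem prover.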
The volume is a collection of papers given at the conference “sub8 -- Sinn und Bedeutung”, the eighth annual conference of the Gesellschaft für Semantik, held at the Johann-Wolfgang-Goethe-Universität, Frankfurt (Germany) in September 2003. During this conference, experts presented and discussed various aspects of semantics. The very different topics included in this book provide insight into fields of ongoing semantics research.
Dutch nominalised infinitives have been notoriously difficult to analyse, partly because they seem to show mixed verbal and nominal properties interspersed across the structure. In this paper, it is argued that at least two types of such infinitives should be distinguished, one which contains a high level of verbal functional structure, and one that differs at least in not projecting TP. On the basis of this distinction it is possible to show that Dutch nominalised infinitives have much more predictable properties than could previously be identified. They show evidence of conforming to a model of analysing mixed categories in terms of category switch within the constituent. In order to account for the seemingly interspersed nature of nominal and verbal properties in Dutch nominalised infinitives, I propose that Dutch of-phrases (van-phrases) may merge inside the VP, provided they have access to nominal functional structure for feature checking. I will show that if D° is filled by a special type of non-deictic demonstrative, van-phrases may even occur in SpecDP.