The verb ‘rise’ can be used not only with property-denoting nouns like ‘temperature’ but also with NPs like ‘a Titan’ or ‘China’. Whereas in the former case the change triggered by a rising event is directly related to the subject (its current value increases), this does not hold for ‘a Titan’ or ‘China’. In this case it is a property of these objects, say their height or their political power, whose value increases. Furthermore, as these examples show, ‘rise’ does not target one particular property. These data led Cooper (2010) to conclude that it is presumably not possible (i) “to extract a single general meaning of words which covers all the particular meanings of the word in context”, and (ii) “to determine once and for all the set of particular contextually determined meanings of a word”. In this article we present a solution to the two problems raised by ‘rise’ within a frame theory. ‘Rise’ is analyzed as a scalar verb that does not lexicalize a complete scale in its meaning; rather, it is underspecified with respect to the dimension (property) parameter of the scale. The set of admissible properties is determined by a constraint on the value ranges of properties. If the property is not uniquely determined by the subject, the comprehender uses probabilistic reasoning based on world knowledge and discourse information to defeasibly infer the most likely candidates from this set (second problem).
The first problem is solved not by introducing bare objects into the discourse representation but by introducing pairs, each consisting of an object and an associated frame component that collects the object information contributed by the discourse. Changes triggered by events like the one denoted by ‘rise’ are modelled as update operations on the frame component, while the object component is left unchanged.
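The object/frame-pair idea above can be pictured with a minimal sketch. The class and attribute names (DiscourseObject, FrameComponent, the "political_power" dimension) are illustrative assumptions, not the authors' formalism; the point is only that an update from a ‘rise’ event mutates the frame component while the object component stays fixed.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DiscourseObject:
    """Object component: a bare discourse referent, never mutated."""
    name: str

@dataclass
class FrameComponent:
    """Frame component: collects property information the discourse contributes."""
    properties: dict = field(default_factory=dict)

    def update(self, dimension: str, delta: float) -> None:
        # A 'rise' event increases the value on some dimension of the frame;
        # only the frame changes, the paired object component does not.
        self.properties[dimension] = self.properties.get(dimension, 0) + delta

# A discourse pair for 'China rises' (values invented for illustration)
china = DiscourseObject("China")
frame = FrameComponent({"political_power": 5})

frame.update("political_power", 2)   # the rising event
assert china.name == "China"                     # object unchanged
assert frame.properties["political_power"] == 7  # frame updated
```

The frozen dataclass enforces the asymmetry in code: any attempt to reassign an attribute of the object component raises an error, so all discourse-driven change is forced through the frame.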
This paper investigates the meaning adaptability of change of state (CoS) verbs. It argues that both coercion and underspecification are necessary mechanisms in order to properly account for the semantic adaptability observable for CoS verbs in combination with their complements. This type of meaning adaptability has received little formal attention to date, although some recent work has already led the way on this topic (Spalek, 2014; Lukassek and Spalek, 2016; Asher et al., 2017). Our paper is part of a cross-linguistic case study of German einfrieren and Spanish congelar (‘freeze’). We model the meaning adaptability of this test case within Type Composition Logic (TCL) (Asher, 2011). We build on Asher’s coercion mechanism and introduce an additional mechanism for underspecification that exploits the fine-grained type system in TCL.
In this paper I seek to account for the productive word-formation process resulting in the current proliferation of un-nouns, the semi-legitimate offspring of Humpty Dumpty’s un-birthday present (1871) and 7-Up’s commercial incarnation as The Un-Cola (1968), a construction that can be linked to the better-established categories of un-adjectives and un-verbs, whose formation constraints we will also examine. Drawing on a large corpus of novel un-nouns assembled in collaboration with Beth Levin and presented in the Appendices to this paper, I will invoke Rosch’s prototype semantics and Aristotle’s notion of PRIVATIVE opposites, defined in terms of a marked exception to a general class property, to generalize across the different categories of un-words. It will be argued that a given un-noun refers either to an element just outside a given category with whose members it shares a salient function (e.g. un-cola) or to a peripheral member of a given category (an unhotel is a hotel, but not a good exemplar of the class: not a HOTEL hotel).
Counter to the often assumed division of labour between content and function words, we argue that both types of words have lexical content in addition to their logical content. We propose that the difference between the two types of words is a difference in degree. We conducted a preliminary study of quantificational determiners with methods from Distributional Semantics, a computational approach to natural language semantics. Our findings have implications for both distributional and formal semantics. For distributional semantics, they indicate a possible avenue for tapping into the meaning of function words. For formal semantics, they bring to light the context-sensitive, lexical aspects of function words that can be recovered from the data even when these aspects are not overtly marked. Such pervasive context-sensitivity has profound implications for how we think about meaning in natural language.
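The core distributional move behind such a study can be sketched as follows: represent each determiner by a vector of co-occurrence counts over context words and compare the vectors by cosine similarity. The vectors below are invented toy data, not the paper's corpus or method; the sketch only shows the kind of computation involved.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two co-occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy co-occurrence counts for three quantificational determiners
# over a four-word context vocabulary (counts are invented).
vectors = {
    "every": [10, 2, 8, 1],
    "all":   [9, 3, 7, 2],
    "some":  [2, 9, 1, 8],
}

sim_every_all = cosine(vectors["every"], vectors["all"])
sim_every_some = cosine(vectors["every"], vectors["some"])

# Universal determiners pattern together in this toy space:
assert sim_every_all > sim_every_some
```

On the toy counts, ‘every’ and ‘all’ come out far more similar to each other than either is to ‘some’, which is the sort of graded, usage-derived structure a distributional study can recover even for function words.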