This paper revisits the question of whether propositions in situation semantics must be persistent (Kratzer (1989)). It shows that ignoring persistence causes empirical problems to theories which use quantification over minimal situations as a solution for donkey anaphora (Elbourne (2005)), while at the same time modifying these theories to incorporate persistence makes them incompatible with the use of situations for contextual restriction (Kratzer (2004)).
The paper investigates the interaction of focus and adverbial quantification in Hausa, a Chadic tone language spoken in West Africa. The discussion focuses on similarities and differences between intonation and tone languages concerning the way in which adverbial quantifiers (AQs) and focus particles (FPs) associate with focus constituents. It is shown that the association of AQs with focused elements does not differ fundamentally in intonation and tone languages such as Hausa, despite the fact that focus marking in Hausa works quite differently. This may hint at the existence of a universal mechanism behind the interpretation of adverbial quantifiers across languages. From a theoretical perspective, the Hausa data can be taken as evidence in favour of pragmatic approaches to the focus-sensitivity of AQs, such as Beaver & Clark (2003).
Russian predicate cleft constructions have the surprising property of being associated with adversative clauses of the opposite polarity. I argue that clefts are associated with adversative clauses because they have the semantics of S-Topics in Büring's (1997, 2000) sense of the term. It is shown that the polarity of the adversative clause is obligatorily opposed to that of the cleft because the use of a cleft gives rise to a relevance-based pragmatic scale. The ordering principle according to which these scale
Dealing with alternatives
(2006)
Traditionally, pure additive particles and scalar additive particles are both characterized by an existential presupposition. They differ insofar as the set of alternatives that is built is unordered for the former, and ordered for the latter, which carry the so-called scalar presupposition. As a result, the two characterisations cannot be cumulated, an impossibility that is at odds with the fact that several languages exhibit this combination of readings for a single item. The discussion of Italian neanche '(n)either/(not) even', an item that can both be additive and scalar, allows us to expose the connection between the oppositions non-ordered vs ordered set of alternatives and verified vs accommodated existential presupposition by adding content to the traditional view that the set of alternatives is made up of 'relevant' items in the context. The question of how to characterise this item is set against the backdrop of a more general discussion of the network of additive particles found in Italian.
This paper looks at sentences with "quantificational indefinites," discussed by Diesing (1992) and others. I propose that these sentences generate sets of alternatives of the form {p, not p and it's possible that p}, which restrict the quantification by an extension of familiar focus principles. For example, in the sentence "I usually read a book about slugs" (on the relevant reading), "usually" quantifies over pairs <x,t> such that x is a book about slugs, t is a time interval, and one alternative is true from the set {I read x at t, I can but do not read x at t}. In addition to accounting for a well-known contrast between creation and non-creation verbs, this also explains a second contrast that Diesing’s analysis cannot account for.
The expressions few and a few are typically considered to be separate quantifiers. I challenge this assumption, showing that with the appropriate definition of few, a few can be derived compositionally as a + few. The core of the analysis is a proposal that few has a denotation as a one-place predicate which incorporates a negation operator. From this, argument interpretations can be derived for expressions such as few students and a few students, differing only in the scope of negation. I show that this approach adequately captures the interpretive differences between few and a few. I further show that other such pairs are blocked by a constraint against the vacuous application of a.
Volume II of II
Volume I of II
This paper presents two experimental studies investigating the processing of presupposed content. Both studies employ the German additive particle auch (too). In the first study, participants were given a questionnaire containing bi-clausal, ambiguous sentences with 'auch' in the second clause. The presupposition introduced by auch was only satisfied on one of the two readings of the sentence, and this reading corresponded to a syntactically dispreferred parse of the sentence. The prospect of having the auch-presupposition satisfied made participants choose this syntactically dispreferred reading more frequently than in a control condition. The second study used the self-paced-reading paradigm and compared the reading times on clauses containing auch, which differed in whether the presupposition of auch was satisfied or not. Participants read the clause more slowly when the presupposition was not satisfied. It is argued that the two studies show that presuppositions play an important role in online sentence comprehension and affect the choice of syntactic analysis. Some theoretical implications of these findings for semantic theory and dynamic accounts of presuppositions as well as for theories of semantic processing are discussed.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
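The call-by-need discipline of the calculus above can be illustrated with explicit memoizing suspensions. The sketch below is a hypothetical illustration of the sharing behaviour that distinguishes call-by-need from call-by-name, not an implementation of the calculus or its proof technique; all names are invented for the example.

```python
class Thunk:
    """A shared, memoizing suspension: call-by-need in miniature."""
    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:          # evaluate at most once
            self.value = self.compute()
            self.evaluated = True
            self.compute = None         # drop the closure, keep the value
        return self.value

calls = []
def expensive():
    calls.append(1)                     # record each actual evaluation
    return 6 * 7

# letrec-style sharing: one binding, two uses
t = Thunk(expensive)
result = t.force() + t.force()

print(result)      # 84
print(len(calls))  # 1 -- the body ran only once, unlike call-by-name
```

Under call-by-name the two uses of `t` would evaluate the body twice; the memoizing update is exactly the sharing that call-by-need adds.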
We show on an abstract level that contextual equivalence in non-deterministic program calculi defined by may- and must-convergence is maximal in the following sense. Using also all the test predicates generated by the Boolean, forall- and existential closure of may- and must-convergence does not change the contextual equivalence. The situation is different if may- and total must-convergence is used, where an expression totally must-converges if all reductions are finite and terminate with a value: There is an infinite sequence of test-predicates generated by the Boolean, forall- and existential closure of may- and total must-convergence, which also leads to an infinite sequence of different contextual equalities.
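The convergence predicates in question can be sketched concretely over a finite non-deterministic reduction graph. The graph, the helper names, and the expression labels below are all illustrative; the abstract works on arbitrary program calculi, not on finite graphs.

```python
def may_converge(succ, values, e, seen=None):
    """There exists a reduction sequence from e to a value."""
    seen = seen or set()
    if e in values:
        return True
    if e in seen:
        return False
    seen.add(e)
    return any(may_converge(succ, values, n, seen) for n in succ.get(e, []))

def total_must_converge(succ, values, e, path=None):
    """All reduction sequences from e are finite and end in a value."""
    path = path or set()
    if e in path:
        return False                    # a cycle means an infinite reduction
    nxt = succ.get(e, [])
    if not nxt:
        return e in values              # irreducible: must be a value
    return all(total_must_converge(succ, values, n, path | {e}) for n in nxt)

# a tiny non-deterministic reduction graph (names are illustrative)
succ = {'e0': ['v', 'e1'], 'e1': ['e1'], 'e2': ['v']}
values = {'v'}

print(may_converge(succ, values, 'e0'))         # True:  e0 -> v
print(total_must_converge(succ, values, 'e0'))  # False: e0 -> e1 -> e1 -> ...
print(total_must_converge(succ, values, 'e2'))  # True
```

Note that `e0` may-converges but does not totally must-converge; it is exactly this gap between the two notions that the abstract's infinite hierarchy of test predicates exploits.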
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
This article develops a Gricean account for the computation of scalar implicatures in cases where one scalar term is in the scope of another. It shows that a cross-product of two quantitative scales yields the appropriate scale for many such cases. One exception is cases involving disjunction. For these, I propose an analysis that makes use of a novel, partially ordered quantitative scale for disjunction and capitalizes on the idea that implicatures may have different epistemic status.
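The cross-product of two quantitative scales mentioned above can be sketched as a pointwise ordering on pairs of scalar items. The scale contents and the notion of "stronger" used below (componentwise at least as strong, and not identical) are a simplified illustration of the idea, not the article's analysis; in particular the partially ordered scale it proposes for disjunction is not modelled here.

```python
from itertools import product

# Horn scales, listed from weakest to strongest (contents are illustrative)
SCALES = {'some': ('some', 'most', 'all'), 'or': ('or', 'and')}

def rank(scale, item):
    return SCALES[scale].index(item)

def stronger_alternatives(pair):
    """Alternatives to a nested scalar pair that are strictly stronger
    in the pointwise (cross-product) order of the two scales."""
    (s1, i1), (s2, i2) = pair
    alts = []
    for a1, a2 in product(SCALES[s1], SCALES[s2]):
        if (rank(s1, a1) >= rank(s1, i1) and rank(s2, a2) >= rank(s2, i2)
                and (a1, a2) != (i1, i2)):
            alts.append((a1, a2))
    return alts

# "Some students sang or danced": the stronger points of the product scale
# range from ('some', 'and') up to ('all', 'and')
print(stronger_alternatives((('some', 'some'), ('or', 'or'))))
```

Negating each of these stronger alternatives is then the Gricean step that yields the nested scalar implicatures.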
The interpretation of traces
(2004)
This paper argues that parts of the lexical content of an A-bar moved phrase must be interpreted in the base position of movement. The argument is based on a study of deletion of a phrase that contains the base position of movement. I show that deletion licensing is sensitive to the content of the moved phrase. In this way, I corroborate and extend conclusions based on Condition C reconstruction by N. Chomsky and D. Fox. My result provides semantic evidence for the existence of traces and gives semantic content to the A/A-bar distinction.
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
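The futures CHF models can be approximated in ordinary code by a value whose computation starts concurrently and is blocked on only at first use. The sketch below uses Python threads and is only a loose analogy to implicit futures via unsafeInterleaveIO, not the semantics of CHF; the helper name `future` is invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# An implicit-future analogue: start the computation now, block on first use.
pool = ThreadPoolExecutor()

def future(compute):
    handle = pool.submit(compute)       # computation runs concurrently
    return handle.result                # calling this forces the future

def slow_sum():
    return sum(range(10))

f = future(slow_sum)                    # runs alongside the main thread
print(f())                              # forcing blocks until ready -> 45
pool.shutdown()
```

The conservativity result says, roughly, that pure code such as `slow_sum` keeps all its equivalences when run inside such a concurrent context.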
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented, which is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are: Meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning on nonterminating expressions and functions, mutual recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefix could be allowed.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. It is designed to perform automatic induction proofs and can also deal with partial functions. This paper provides a reconstruction of the corresponding logic and semantics using the standard treatment of undefinedness which adapts and improves the VeriFun-logic by allowing reasoning on nonterminating expressions and functions. Equality of expressions is defined as contextual equivalence based on observing termination in all closing contexts. The reconstruction shows that several restrictions of the VeriFun framework can easily be removed, by natural generalizations: mutual recursive functions, abstractions in the data values, and formulas with arbitrary quantifier prefix can be formulated. The main results of this paper are: an extended set of deduction rules usable in VeriFun under the adapted semantics is proved to be correct, i.e. they respect the observational equivalence in all extensions of a program. We also show that certain classes of theorems are conservative under extensions, like universally quantified equations. Also other special classes of theorems are analyzed for conservativity.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. This paper provides a reconstruction of the corresponding logic when partial functions are permitted. Typing is polymorphic for the definition of functions but monomorphic for terms in formulas. Equality of terms is defined as contextual equivalence based on observing termination in all contexts. The reconstruction also allows several generalizations of the functional language like mutual recursive functions and abstractions in the data values. The main results are: Correctness of several program transformations for all extensions of a program, which have a potential usage in a deduction system. We also proved that universally quantified equations are conservative, i.e. if a universally quantified equation is valid w.r.t. a program P, then it remains valid if the program is extended by new functions and/or new data types.
We show how Sestoft’s abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF – a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is modularly constructed by first adding monadic IO-actions to the machine and then in a second step we add concurrency. Our main result is that the abstract machine coincides with the original operational semantics of CHF, w.r.t. may- and should-convergence.
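The heap-and-stack shape of such a machine can be sketched for the pure lazy core alone. The toy below is in the spirit of Sestoft's mark-1 machine (let-allocated heap, update markers, blackholing); it covers neither the monadic IO layer nor the concurrency of CHF, and the term encoding is invented for the example.

```python
# A toy lazy abstract machine in the spirit of Sestoft's mark-1 machine.
# Terms (applications are to variables, as in the let-normalized calculus):
#   ('var', x) | ('lam', x, body) | ('app', fun, x) | ('let', [(x, e), ...], body)

def subst(term, old, new):
    """Replace variable `old` by variable `new` (names assumed fresh)."""
    tag = term[0]
    if tag == 'var':
        return ('var', new) if term[1] == old else term
    if tag == 'lam':
        return term if term[1] == old else ('lam', term[1], subst(term[2], old, new))
    if tag == 'app':
        return ('app', subst(term[1], old, new), new if term[2] == old else term[2])
    binds = [(x, subst(e, old, new)) for x, e in term[1]]
    return ('let', binds, subst(term[2], old, new))

def run(term):
    heap, stack = {}, []
    while True:
        tag = term[0]
        if tag == 'let':
            heap.update(dict(term[1]))          # allocate the bindings
            term = term[2]
        elif tag == 'app':
            stack.append(('arg', term[2]))      # push the argument variable
            term = term[1]
        elif tag == 'var':
            stack.append(('upd', term[1]))      # push an update marker
            term = heap.pop(term[1])            # blackholing: binding removed
        else:                                   # 'lam': a weak head normal form
            if stack and stack[-1][0] == 'upd':
                heap[stack.pop()[1]] = term     # update: share the value
            elif stack and stack[-1][0] == 'arg':
                term = subst(term[2], term[1], stack.pop()[1])
            else:
                return term

# let z = \w.w in (\y.y) z   evaluates to \w.w
prog = ('let', [('z', ('lam', 'w', ('var', 'w')))],
        ('app', ('lam', 'y', ('var', 'y')), 'z'))
print(run(prog))   # ('lam', 'w', ('var', 'w'))
```

The paper's extension adds monadic IO actions to such a machine first, and concurrency second, keeping each layer separately understandable.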
Modifiability by almost has been used as a test for the quantificational force of a DP without stating the meaning of almost explicitly. The aim of this paper is to give a semantics for almost applying across categories and to evaluate the validity of the almost test as a diagnosis for universal quantifiers. It is argued that almost is similar to other cross-categorial modifiers such as at least or exactly in referring to alternatives ordered on a scale. I propose that almost evaluates alternatives in which the modified expression is replaced by a value close by on the corresponding Horn scale. It is shown that a semantics for almost that refers to scalar alternatives derives the correct truth conditions for almost and explains selectional restrictions. At the same time, taking the semantics of almost seriously invalidates the almost test as a simple diagnosis for the nature of quantifiers.
There is an elegant account, proposed by Beaver and Condoravdi (2003), that assumes that the temporal connectives before and after are converses (i.e., they are analyzed by means of a unified lexical schema), and that explains away their different logical and veridical behavior by appealing to other factors. There is an elegant explanation that connects the licensing of Polarity Items to informational strengthening requirements: Polarity Items are viewed as existentials that lead to a widening of the domain of quantification, and they are predicted to be legitimate only when this widening leads to a stronger statement (roughly, in downward monotone contexts). My plan is to connect these two approaches by proposing an amendment to the definition Beaver and Condoravdi presented for before and after, one that is meant to account also for their Polarity Item licensing behavior.
Multiple modals construction
(2006)
Modal items of different semantic types can only be combined in a specific order. Epistemic items, for instance, cannot be embedded under deontic ones. I'll argue that this fact cannot be explained by current semantic theories of modality. A solution to this problem will be developed in an update semantics framework. On the semantic side, a distinction will be drawn between circumstantial information about the world and information about duties, and I'll use Nuyts' notion of m-performativity to account for certain uses of the modal items.
Functions of English "man"
(2006)
This paper discusses the semantics of the English particle man. It is shown that this particle does different things when used sentence-initially and sentence-finally. The sentence-initial use is further shown to separate into two distinct intonational types with different semantic content. A formal semantics is proposed for these types.
If we want to develop a semantic analysis for explicit performatives such as I promise you to free Willy, we are faced with the following puzzle: In order to account for the speech act expressed by the performative verb, one can assume that the so-called performative clause is purely performative and provides the illocutionary force of the speech act whose content is given by the semantic object denoted by the complement clause. Yet under this perspective, the performative clause (that is, next to the performative verb, the indexicals I and you that refer to the speaker and to the addressee of the utterance context) is semantically invisible and does not contribute its meaning compositionally to the meaning of the entire explicit performative sentence. Conversely, if we account for the truth-conditional contribution of the performative clause and deny that the meaning of the performative verb is purely performative, then we have to find a way to account for the speech act expressed by the performative verb. Of course, there is already the widely accepted and very appealing indirectness account for explicit performative utterances developed by Bach & Harnish (1979). Roughly, Bach and Harnish solve this puzzle by deriving the performativity by means of a pragmatic inference process. According to them, the important speech act performed by means of the utterance of the explicit performative sentence is a kind of conventionalized indirect speech act. However, the boundary between semantics and pragmatics can be drawn in many different ways. Therefore, I think there could be other perspectives regarding the interface between the truth-functional treatment of declarative explicit performative sentences and the speech acts that are performed with their utterances and expressed by the performative verbs.
Hence, this thesis undertakes the experiment of developing a further analysis and checking its consequences for the semantics and pragmatics of explicit performative utterances and for the interface that emerges. Briefly, the experiment runs as follows: First, I develop an analysis for explicit performative sentences framed by parenthetical structures such as in (1)(a). In a second step, this parenthetical analysis is applied to the proper Austinian explicit performative sentences in (1)(b). (1) a. Tomorrow, I promise you this, I will teach them Tyrolean songs. b. I promise you that I will teach them Tyrolean songs. Analyzing first the explicit performatives framed by parenthetical structures has the advantage that we are faced with two utterances of two main clauses. In (1)(a) there is the utterance of the host sentence Tomorrow I will teach them Tyrolean songs, and the utterance of the explicit parenthetical I promise you this, where the demonstrative this refers to the utterance of Tomorrow I will teach them Tyrolean songs. Since speakers perform speech acts with utterances of main clauses, I assume that the meaning of the explicit parenthetical I promise you this specifies that the actual illocutionary force of the utterance of Tomorrow I will teach them Tyrolean songs is the illocutionary force of a promise. Hence, instead of deriving an indirect illocutionary force by means of a pragmatic inference schema, we can deal with an ordinary direct speech act that is performed with the utterance of the host sentence. This kind of analysis stresses the particular discourse function of explicit performative utterances. Performative verbs are used whenever the contextual information is not sufficient to determine the illocutionary force of the corresponding implicit speech act. The resulting consequences of the parenthetical analysis are interesting since they cast a different light on performative verbs.
Surprisingly, the performative verbs are not performative at all. They do not constitute the execution of a speech act, but are execution-supporting. Instead of constituting the particular illocutionary force, they merely specify the illocutionary force of the utterance of the host sentence. For instance, the speaker utters the explicit parenthetical I promise you this to specify what he is simultaneously doing. Hence the speaker does not succeed in performing the promise simply because he is uttering I promise you this. Rather, by means of the information conveyed by the utterance of I promise you this, the potential illocutionary forces of the utterance of the host sentence are disambiguated. Thus, it is not the case that explicit parentheticals are trivially true when uttered. Their function is more complex. Their self-verifying property (‘saying so makes it so’) is explained by means of disambiguation. Furthermore, according to the parenthetical analysis, instead of being purely performative, the performative verbs compositionally contribute their meanings to the truth conditions of the entire explicit performative sentence. Together with its consequences, this analysis is applied to the proper Austinian performatives, which display subordination. I assume that regardless of their structure, explicit performatives always behave semantically and pragmatically as the parenthetical analysis predicts.
The aim of this paper is to investigate Rizzi's (2001) recent claim that in combien constructions full movement correlates with a specific or D-linking interpretation of the nominal (see also Obenauer, 1994) while the in-situ option corresponds to focus of the noun. On the one hand, it is argued that the notion of specificity or D-linking for the raised nominal is too strong, while on the other hand it is shown that the stranded nominal is not a focus, but a topic, albeit of a special kind. It is also argued that there is a dedicated postverbal position for this kind of topic and that the nominal has all the properties of an incorporated nominal: it is interpreted as an asserted background topic. In the final part of the article, some time is spent discussing the pragmatics and the modality involved in discontinuous structures, and showing that the stranded nominal is interpreted inside the VP/below the event variable.
We propose a compositional analysis for sentences of the kind "You only have to go to the North End to get good cheese", referred to as the Sufficiency Modal Construction in the recent literature. We argue that the SMC is ambiguous depending on the kind of ordering induced by only. So is the exceptive construction – its cross-linguistic counterpart. Only is treated as inducing either a 'comparative possibility' scale or an 'implication-based' partial order on propositions. The properties of the 'comparative possibility' scale explain the absence of the prejacent presupposition that is usually associated with only. By integrating the scalarity into the semantics of the SMC, we explain the polarity facts observed in both variants of the construction. The sufficiency meaning component is argued to be due to a pragmatic inference.
How the left-periphery of a wh-relative clause determines its syntactic and semantic relationships
(2004)
This paper discusses a certain class of German relative clauses which are characterized by a wh-expression overtly realized at the left periphery of the clause. While investigating empirical and theoretical issues regarding this class of relatives, it argues that a wh-relative clause relates syntactically to a functionally complete sentential projection and semantically to entities of various kinds that are abstracted from the matrix clause. What is shown is that this grammatical behaviour can clearly be attributed to the properties of the elements positioned at the left periphery of a wh-relative clause. Finally, a lexically-based analysis couched in the framework of HPSG is given that accounts for the data presented.
Relational data exchange deals with translating relational data according to a given specification. This problem is one of the many tasks that arise in data integration, for example, in data restructuring, in ETL (Extract-Transform-Load) processes used for updating data warehouses, or in data exchange between different, possibly independently created, applications. Systems for relational data exchange have existed for several decades now. Motivated by their experiences with one of those systems, Fagin, Kolaitis, Miller, and Popa (2003) studied fundamental and algorithmic issues arising in relational data exchange. One of these issues is how to answer queries that are posed against the target schema (i.e., against the result of the data exchange) so that the answers are consistent with the source data. For monotonic queries, the certain answers semantics proposed by Fagin, Kolaitis, Miller, and Popa (2003) is appropriate. For many non-monotonic queries, however, the certain answers semantics was shown to yield counter-intuitive results. This thesis deals with computing the certain answers for monotonic queries on the one hand, and on the other hand, it deals with the issue of which semantics are appropriate for answering non-monotonic queries, and how hard it is to evaluate non-monotonic queries under these semantics. As shown by Fagin, Kolaitis, Miller, and Popa (2003), computing the certain answers for unions of conjunctive queries - a subclass of the monotonic queries - basically reduces to computing universal solutions, provided the data transformation is specified by a set of tgds (tuple-generating dependencies) and egds (equality-generating dependencies). If M is such a specification and S is a source database, then T is called a solution for S under M if T is a possible result of translating S according to M. Intuitively, universal solutions are most general solutions.
Since the above-mentioned work by Fagin, Kolaitis, Miller, and Popa, it had remained open whether it is decidable if a source database has a universal solution under a given data exchange specification. In this thesis, we show that this problem is undecidable. More precisely, we construct a specification M that consists of tgds only so that it is undecidable whether a given source database has a universal solution under M. From the proof it also follows that it is undecidable whether the chase procedure - by which universal models can be obtained - terminates on a given source database and the set of tgds in M. The above results in particular strengthen results of Deutsch, Nash, and Remmel (2008). Concerning the issue of which semantics are appropriate for answering non-monotonic queries, we study several semantics for answering such queries. All of these semantics are based on the closed world assumption (CWA). First, the CWA-semantics of Libkin (2006) are extended so that they can be applied to specifications consisting of tgds and egds. The key is to extend the concept of CWA-solution, on which the CWA-semantics are based. CWA-solutions are characterized as universal solutions that are derivable from the source database using a suitably controlled version of the chase procedure. In particular, if CWA-solutions exist, then there is a minimal CWA-solution that is unique up to isomorphism: the core of the universal solutions introduced by Fagin, Kolaitis, and Popa (2003). We show that evaluation of a query under some of the CWA-semantics reduces to computing the certain answers to the query on the minimal CWA-solution. The CWA-semantics resolve some of the known problems with answering non-monotonic queries. There are, however, two natural properties that are not possessed by the CWA-semantics. On the one hand, queries may be answered differently with respect to data exchange specifications that are logically equivalent.
On the other hand, there are queries whose answer under the CWA-semantics intuitively contradicts the information derivable from the source database and the data exchange specification. To find an alternative semantics, we first test several CWA-based semantics from the area of deductive databases for their suitability regarding non-monotonic query answering in relational data exchange. More precisely, we focus on the CWA-semantics by Reiter (1978), the GCWA-semantics (Minker 1982), the EGCWA-semantics (Yahya, Henschen 1985) and the PWS-semantics (Chan 1993). It turns out that these semantics are either too weak or too strong, or do not possess the desired properties. Finally, based on the GCWA-semantics we develop the GCWA*-semantics which intuitively possesses the desired properties. For monotonic queries, some of the CWA-semantics as well as the GCWA*-semantics coincide with the certain answers semantics, that is, results obtained for the certain answers semantics carry over to those semantics. When studying the complexity of evaluating non-monotonic queries under the above-mentioned semantics, we focus on the data complexity, that is, the complexity when the data exchange specification and the query are fixed. We show that in many cases, evaluating non-monotonic queries is hard: co-NP- or NP-complete, or even undecidable. For example, evaluating conjunctive queries with at least one negative literal under simple specifications may be co-NP-hard. Notice, however, that this result only says that there is such a query and such a specification for which the problem is hard, but not that the problem is hard for all such queries and specifications. On the other hand, we identify a broad class of queries - the class of universal queries - which can be evaluated in polynomial time under the GCWA*-semantics, provided the data exchange specification is suitably restricted. 
More precisely, we show that universal queries can be evaluated on the core of the universal solutions, independent of the source database and the specification.
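The chase procedure at the heart of the thesis can be sketched naively: repeatedly find a homomorphism from a tgd's body into the instance and, if the head is not yet satisfied, add the head atoms, inventing labeled nulls for existential head variables. The encoding of atoms, the relation names, and the round bound below are all illustrative; the chase need not terminate in general, which is exactly the undecidability issue discussed above.

```python
from itertools import count

def homomorphisms(atoms, instance, h=None):
    """All extensions of h mapping the variables in `atoms` into `instance`."""
    h = h or {}
    if not atoms:
        yield dict(h)
        return
    rel, args = atoms[0]
    for tup in instance.get(rel, set()):
        g, ok = dict(h), True
        for v, c in zip(args, tup):
            if g.setdefault(v, c) != c:
                ok = False
                break
        if ok:
            yield from homomorphisms(atoms[1:], instance, g)

def chase(source, tgds, max_rounds=10):
    """Naive chase: fire each tgd whose head is unsatisfied, inventing
    labeled nulls for existential head variables."""
    inst = {rel: set(tups) for rel, tups in source.items()}
    nulls = count()
    for _ in range(max_rounds):
        changed = False
        for body, head in tgds:
            for h in list(homomorphisms(body, inst)):
                if any(True for _ in homomorphisms(head, inst, h)):
                    continue                    # head already satisfied
                g = dict(h)
                for rel, args in head:
                    for v in args:
                        if v not in g:          # existential variable -> null
                            g[v] = f'N{next(nulls)}'
                    inst.setdefault(rel, set()).add(tuple(g[v] for v in args))
                changed = True
        if not changed:
            return inst                         # a universal solution
    return inst

# tgd: Emp(e) -> exists d. WorksIn(e, d) /\ Dept(d)
tgds = [([('Emp', ('e',))], [('WorksIn', ('e', 'd')), ('Dept', ('d',))])]
target = chase({'Emp': {('alice',)}}, tgds)
print(sorted(target))   # ['Dept', 'Emp', 'WorksIn']
```

When this loop reaches a fixed point, the resulting instance is a universal solution; the labeled null `N0` standing for alice's unknown department is what makes it "most general".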
This paper proposes a new strategy for accounting for the narrow scope readings of quantificational contrastive topics in Hungarian, which is based on a consideration of the types of questions that declaratives with such contrastive topics can be uttered as partial or complete congruent answers to. The meaning of the declaratives with contrastive topics will be represented with the help of the structured meaning approach to matching questions proposed in Krifka 2002.
In a recent contribution to a long-standing discussion in semantics as to whether the neo-Davidsonian analysis should be extended to stative predicates or not, Maienborn (2004, 2005) proposes to distinguish two types of statives; one of them is said to have a referential argument of the Davidsonian type, the other not. As one of her arguments for making such a distinction, Maienborn observes that manner modification seems to be supported only by certain statives but to be excluded by others (thus linking the issue to the use of manner modification as one major argument in favour of event semantics, cf. Parsons 1990). In this paper, it is argued that the absence of manner modification with Maienborn's second group of statives is actually due to a failure of conceptual construal: modification of a predicate is ruled out whenever its internal conceptual structure is too poor to provide a construal for the modifier; hence, the effects observed by Maienborn reduce to the fact that eventive predicates have a more complex conceptual substructure than stative ones. Hence, the issue of manner modification with statives is shown to be orthogonal to questions of logical form and event semantics. The explanatory power of the conceptual approach is demonstrated with a case study on predicates of light emission, adapting the representation format of Barsalou's (1992) frame model.
Russian and Spanish each have two variants of the predicational copular sentence. In Russian, the variation concerns the case of the predicate phrase, which can be nominative or instrumental, while in Spanish, the variation involves the choice of the copular verb, either ser or estar. It is shown that the choice of the particular variant of copular sentence in both languages depends on the speaker’s perspective, i.e., on whether or not the predication is linked to a specific topic situation.
Mention some of all
(2006)
In the interpretation of natural language one may distinguish three types of dynamics: there are the acts or moves that are made; there are structural relations between subsequent moves; and there is the reasoning interlocutors engage in about the beliefs and intentions of the participants in a particular language game. Building on some of the formalisms developed to account for the first two types of dynamics, I will generalize and formalize Gricean insights into the third type, and show by means of a case study that such a formalization allows a direct account of an apparent ambiguity: the 'exhaustive' versus the 'mention some' interpretation of questions and their answers. While the principles which I sketch, like those of Grice, are motivated by assumptions of rationality and cooperativity, they do not presuppose these assumptions to be always warranted.
Fronting a noun phrase changes the focus structure of a sentence. Therefore, it may affect truth conditions, since some operators, in particular quantificational adverbs, are sensitive to focus. However, the position of the quantificational adverb itself, hence its informational status, is usually assumed not to have any semantic effect. In this paper I discuss a reading of some quantificational adverbs, the relative reading, which disappears if the adverb is fronted. I propose that this reading relies not only on focus, but on B-accent (fall-rise intonation) as well. A fronted Q-adverb is usually pronounced with a B-accent; since only one element can be B-accented, this means that the scope of the adverb contains no B-accented material, hence no relative readings. Thus, the effects of fronting range more widely than is usually assumed, and quantificational adverbs are a useful tool with which to investigate these effects.
The paper investigates the interpretation of the Romanian subjunctive B (subjB) mood when it is embedded under the propositional attitude verb crede (believe). SubjB is analyzed as a single package of three distinct presuppositions: temporal de se, dissociation and propositional de se. I show that subjB is the temporal analogue of null PRO in the individual domain: it allows only for a de se reading. Dissociation enables us to show that subjB always takes scope over a negation embedded in a belief report. Propositional de se derives this empirical generalization. The introduction of centered propositions (generalizing centered worlds), together with propositional de se, dissociation and the belief 'introspection' principles, derives the fact that subjB belief reports (unlike their indicative counterparts) are infelicitous with embedded probabil.
Dog after dog revisited
(2006)
This paper presents a compositional semantic analysis of pluractional adverbial modifiers like 'dog after dog' and 'one dog after the other'. We propose a division of labour according to which much of the semantics is carried by a family of plural operators. The adverbial itself contributes a semantics that we call pseudoreciprocal.
Complex focus versus double focus: investigations on multiple focus interpretations in Hungarian
(2006)
The main aim of this paper is to point out several problems with the semantic analysis of Hungarian focus interpretation and 'only'. The interpretation of Hungarian identificational/exhaustive focus and 'only' is problematic for current semantic analyses, since classical analyses identify 'only' with an exhaustivity operator. In this paper I will discuss multiple focus constructions and question-answer pairs in Hungarian to show that such a view cannot be applied to Hungarian exhaustive focus. In addition, I will discuss possible interpretations of Hungarian sentences containing multiple prosodic foci: complex focus versus double focus. My claim is that in order to interpret multiple focus (in Hungarian) we have to take into consideration the different intonation patterns, the occurrence of 'only', and the syntactic structure as well.