We show how Sestoft's abstract machine for lazy evaluation of purely functional programs can be extended to evaluate expressions of the calculus CHF, a process calculus that models Concurrent Haskell extended by imperative and implicit futures. The abstract machine is constructed modularly: first, monadic IO-actions are added to the machine; in a second step, concurrency is added. Our main result is that the abstract machine coincides with the original operational semantics of CHF w.r.t. may- and should-convergence.
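To give a concrete picture of the kind of machine being extended, here is a minimal Python sketch of a Sestoft-style abstract machine for lazy evaluation, covering only the pure core (before IO-actions and concurrency are added). The term encoding, rule selection, and helper names are ours, not the paper's; it is an illustration of the general technique, not the machine defined there.

```python
# A minimal sketch of a Sestoft-style machine for lazy evaluation.
# Terms (applications are restricted to variables, as in Sestoft's machine):
#   ('var', x) | ('lam', x, body) | ('app', e, x) | ('let', {x: e, ...}, body)
# A state is (heap, control, stack); stack entries are ('arg', x) or ('upd', x).
# Assumes source variable names do not start with '_' (reserved for heap names).

import itertools

fresh = itertools.count()

def subst(e, old, new):
    """Rename `old` to `new` in e; heap names are globally fresh, so no capture."""
    kind = e[0]
    if kind == 'var':
        return ('var', new) if e[1] == old else e
    if kind == 'lam':
        return e if e[1] == old else ('lam', e[1], subst(e[2], old, new))
    if kind == 'app':
        return ('app', subst(e[1], old, new), new if e[2] == old else e[2])
    if kind == 'let':
        binds, body = e[1], e[2]
        if old in binds:                        # shadowed by the let
            return e
        return ('let', {x: subst(b, old, new) for x, b in binds.items()},
                subst(body, old, new))

def run(e):
    heap, ctrl, stack = {}, e, []
    while True:
        kind = ctrl[0]
        if kind == 'let':                       # allocate fresh heap bindings
            renamed = {x: f'_{next(fresh)}' for x in ctrl[1]}
            body = ctrl[2]
            for x, b in ctrl[1].items():
                for old, new in renamed.items():
                    b = subst(b, old, new)
                heap[renamed[x]] = b
            for old, new in renamed.items():
                body = subst(body, old, new)
            ctrl = body
        elif kind == 'app':                     # push argument pointer
            stack.append(('arg', ctrl[2]))
            ctrl = ctrl[1]
        elif kind == 'var':                     # enter a thunk, leave update marker
            stack.append(('upd', ctrl[1]))
            ctrl = heap.pop(ctrl[1])            # black-holing: re-entry raises
        elif kind == 'lam':
            if not stack:
                return heap, ctrl               # weak head normal form reached
            tag, x = stack.pop()
            if tag == 'arg':                    # beta-reduce with the pointer
                ctrl = subst(ctrl[2], ctrl[1], x)
            else:                               # update the heap with the value
                heap[x] = ctrl
```

For example, `run(('let', {'id': ('lam', 'y', ('var', 'y'))}, ('app', ('var', 'id'), 'id')))` evaluates `let id = \y.y in id id` to the value `\y.y`, updating the heap binding for `id` along the way.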
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
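The notion of a future (a placeholder for a value that is still being computed concurrently, and whose consumer blocks when the value is demanded) can be illustrated in Python, whose standard library provides explicit futures. This is only a loose analogy, not CHF or Haskell: `unsafeInterleaveIO` defers an IO action until its result is demanded (lazy futures), whereas this Python handle starts the computation eagerly and merely synchronizes on demand.

```python
# Hedged sketch: an explicit "future" using Python's concurrent.futures,
# as a rough analogue of a concurrent thread whose result is a placeholder.
from concurrent.futures import ThreadPoolExecutor
import time

_pool = ThreadPoolExecutor()

def future(action, *args):
    """Start `action` in a concurrent thread and return a handle to its result."""
    return _pool.submit(action, *args)

def force(f):
    """Demand the future's value, blocking until the thread delivers it."""
    return f.result()

def slow_square(n):
    time.sleep(0.05)          # stands in for a long-running computation
    return n * n

f = future(slow_square, 7)    # evaluation proceeds concurrently...
answer = force(f) + 1         # ...and is synchronized only on demand
```

The point of the conservativity result above is that, with implicit futures added in this disciplined way, equivalences of the pure functional fragment survive; with arbitrary `unsafeInterleaveIO` they do not.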
Relational data exchange deals with translating relational data according to a given specification. This problem is one of the many tasks that arise in data integration, for example, in data restructuring, in ETL (Extract-Transform-Load) processes used for updating data warehouses, or in data exchange between different, possibly independently created, applications. Systems for relational data exchange have existed for several decades. Motivated by their experiences with one of those systems, Fagin, Kolaitis, Miller, and Popa (2003) studied fundamental and algorithmic issues arising in relational data exchange. One of these issues is how to answer queries that are posed against the target schema (i.e., against the result of the data exchange) so that the answers are consistent with the source data. For monotonic queries, the certain answers semantics proposed by Fagin, Kolaitis, Miller, and Popa (2003) is appropriate. For many non-monotonic queries, however, the certain answers semantics was shown to yield counter-intuitive results. This thesis deals with computing the certain answers for monotonic queries on the one hand, and on the other hand with the question of which semantics are appropriate for answering non-monotonic queries, and how hard it is to evaluate non-monotonic queries under these semantics. As shown by Fagin, Kolaitis, Miller, and Popa (2003), computing the certain answers for unions of conjunctive queries - a subclass of the monotonic queries - basically reduces to computing universal solutions, provided the data transformation is specified by a set of tgds (tuple-generating dependencies) and egds (equality-generating dependencies). If M is such a specification and S is a source database, then T is called a solution for S under M if T is a possible result of translating S according to M. Intuitively, universal solutions are most general solutions.
Since the above-mentioned work by Fagin, Kolaitis, Miller, and Popa, it has been open whether it is decidable if a source database has a universal solution under a given data exchange specification. In this thesis, we show that this problem is undecidable. More precisely, we construct a specification M consisting of tgds only such that it is undecidable whether a given source database has a universal solution under M. From the proof it also follows that it is undecidable whether the chase procedure - by which universal models can be obtained - terminates on a given source database and the set of tgds in M. These results in particular strengthen results of Deutsch, Nash, and Remmel (2008). Concerning the question of which semantics are appropriate for answering non-monotonic queries, we study several semantics for answering such queries. All of these semantics are based on the closed world assumption (CWA). First, the CWA-semantics of Libkin (2006) are extended so that they can be applied to specifications consisting of tgds and egds. The key is to extend the concept of CWA-solution, on which the CWA-semantics are based. CWA-solutions are characterized as universal solutions that are derivable from the source database using a suitably controlled version of the chase procedure. In particular, if CWA-solutions exist, then there is a minimal CWA-solution that is unique up to isomorphism: the core of the universal solutions introduced by Fagin, Kolaitis, and Popa (2003). We show that evaluation of a query under some of the CWA-semantics reduces to computing the certain answers to the query on the minimal CWA-solution. The CWA-semantics resolve some of the known problems with answering non-monotonic queries. There are, however, two natural properties that are not possessed by the CWA-semantics. On the one hand, queries may be answered differently with respect to data exchange specifications that are logically equivalent.
On the other hand, there are queries whose answer under the CWA-semantics intuitively contradicts the information derivable from the source database and the data exchange specification. To find an alternative semantics, we first test several CWA-based semantics from the area of deductive databases for their suitability regarding non-monotonic query answering in relational data exchange. More precisely, we focus on the CWA-semantics by Reiter (1978), the GCWA-semantics (Minker 1982), the EGCWA-semantics (Yahya, Henschen 1985) and the PWS-semantics (Chan 1993). It turns out that these semantics are either too weak or too strong, or do not possess the desired properties. Finally, based on the GCWA-semantics we develop the GCWA*-semantics which intuitively possesses the desired properties. For monotonic queries, some of the CWA-semantics as well as the GCWA*-semantics coincide with the certain answers semantics, that is, results obtained for the certain answers semantics carry over to those semantics. When studying the complexity of evaluating non-monotonic queries under the above-mentioned semantics, we focus on the data complexity, that is, the complexity when the data exchange specification and the query are fixed. We show that in many cases, evaluating non-monotonic queries is hard: co-NP- or NP-complete, or even undecidable. For example, evaluating conjunctive queries with at least one negative literal under simple specifications may be co-NP-hard. Notice, however, that this result only says that there is such a query and such a specification for which the problem is hard, but not that the problem is hard for all such queries and specifications. On the other hand, we identify a broad class of queries - the class of universal queries - which can be evaluated in polynomial time under the GCWA*-semantics, provided the data exchange specification is suitably restricted. 
More precisely, we show that universal queries can be evaluated on the core of the universal solutions, independent of the source database and the specification.
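The chase procedure discussed above can be sketched in a few lines of Python. The fact and tgd encodings are our own, chosen for illustration; and since chase termination is undecidable, as the thesis shows, the loop takes a step bound and reports whether a fixed point (i.e., a universal solution) was actually reached.

```python
# A small sketch of the (standard) chase with tgds over a database of facts
# (relation, tuple). A tgd is a pair (body_atoms, head_atoms); an atom is
# (relation, variable_names). Head variables absent from the body are
# existential and are instantiated with fresh labeled nulls N0, N1, ...
import itertools

def homomorphisms(atoms, db, h=None):
    """All extensions of h mapping the variables of `atoms` into `db`."""
    h = h or {}
    if not atoms:
        yield h
        return
    rel, args = atoms[0]
    for (r, tup) in db:
        if r != rel or len(tup) != len(args):
            continue
        h2 = dict(h)
        if all(h2.setdefault(a, v) == v for a, v in zip(args, tup)):
            yield from homomorphisms(atoms[1:], db, h2)

def chase(db, tgds, max_steps=100):
    db = set(db)
    nulls = itertools.count()
    for _ in range(max_steps):
        fired = False
        for body, head in tgds:
            for h in list(homomorphisms(body, db)):
                # the tgd is applicable iff h does not extend to the head
                if any(True for _ in homomorphisms(head, db, dict(h))):
                    continue
                h = dict(h)
                for _, args in head:            # invent fresh labeled nulls
                    for a in args:
                        if a not in h:
                            h[a] = f'N{next(nulls)}'
                db |= {(r, tuple(h[a] for a in args)) for r, args in head}
                fired = True
        if not fired:
            return db, True                     # fixed point: universal solution
    return db, False                            # step bound hit; maybe nonterminating
```

For instance, chasing the fact Emp(ann) with the tgd Emp(x) -> exists y. Mgr(x, y) terminates and adds Mgr(ann, N0), whereas adding the tgd Mgr(x, y) -> exists z. Mgr(y, z) makes the chase run forever, so the sketch gives up at the step bound.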
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an isomorphism between the respective term-models. We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented, which is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are: Meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning on nonterminating expressions and functions, mutual recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefix could be allowed.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. It is designed to perform automatic induction proofs and can also deal with partial functions. This paper provides a reconstruction of the corresponding logic and semantics using the standard treatment of undefinedness, which adapts and improves the VeriFun-logic by allowing reasoning on nonterminating expressions and functions. Equality of expressions is defined as contextual equivalence based on observing termination in all closing contexts. The reconstruction shows that several restrictions of the VeriFun framework can easily be removed by natural generalizations: mutual recursive functions, abstractions in the data values, and formulas with arbitrary quantifier prefix can be formulated. The main results of this paper are: an extended set of deduction rules usable in VeriFun under the adapted semantics is proved to be correct, i.e., the rules respect observational equivalence in all extensions of a program. We also show that certain classes of theorems, such as universally quantified equations, are conservative under extensions. Other special classes of theorems are analyzed for conservativity as well.
The interactive verification system VeriFun is based on a polymorphic call-by-value functional language and on a first-order logic with initial model semantics w.r.t. constructors. This paper provides a reconstruction of the corresponding logic when partial functions are permitted. Typing is polymorphic for the definition of functions but monomorphic for terms in formulas. Equality of terms is defined as contextual equivalence based on observing termination in all contexts. The reconstruction also allows several generalizations of the functional language, such as mutual recursive functions and abstractions in the data values. The main results are: correctness of several program transformations for all extensions of a program, which have potential use in a deduction system. We also prove that universally quantified equations are conservative, i.e., if a universally quantified equation is valid w.r.t. a program P, then it remains valid if the program is extended by new functions and/or new data types.
This note shows that in non-deterministic extended lambda calculi with letrec, the tool of applicative (bi)simulation is in general not usable for contextual equivalence, by giving a counterexample adapted from data flow analysis. It is also shown that there is a flaw in a lemma and a theorem concerning finite simulation in a conference paper by the first two authors.
If we want to develop a semantic analysis for explicit performatives such as I promise you to free Willy, we are faced with the following puzzle: In order to account for the speech act expressed by the performative verb, one can assume that the so-called performative clause is purely performative and provides the illocutionary force of the speech act whose content is given by the semantic object denoted by the complement clause. Yet under this perspective, the performative clause (that is, next to the performative verb, the indexicals I and you, which refer to the speaker and the addressee of the utterance context) is semantically invisible and does not compositionally contribute its meaning to the meaning of the entire explicit performative sentence. Conversely, if we account for the truth-conditional contribution of the performative clause and deny that the meaning of the performative verb is purely performative, then we have to find a way to account for the speech act expressed by the performative verb. Of course, there is already the widely accepted and very appealing indirectness account of explicit performative utterances developed by Bach & Harnish (1979). Roughly, Bach and Harnish solve this puzzle by deriving the performativity by means of a pragmatic inference process. According to them, the important speech act performed by means of the utterance of the explicit performative sentence is a kind of conventionalized indirect speech act. However, the boundary between semantics and pragmatics can be drawn in many different ways. Therefore, I think there could be other perspectives on the interface between the truth-functional treatment of declarative explicit performative sentences and the speech acts that are performed with their utterances and expressed by the performative verbs.
Hence, this thesis undertakes the experiment of developing a further analysis and of examining its consequences for the semantics and pragmatics of explicit performative utterances and for the new interface that emerges. Briefly, the experiment runs as follows: First, I develop an analysis for explicit performative sentences framed by parenthetical structures, such as (1)(a). In a second step, this parenthetical analysis is applied to the proper Austinian explicit performative sentences, as in (1)(b). (1) a. Tomorrow, I promise you this, I will teach them Tyrolean songs. b. I promise you that I will teach them Tyrolean songs. Analyzing explicit performatives framed by parenthetical structures first has the advantage that we are faced with two utterances of two main clauses. In (1)(a) there is the utterance of the host sentence Tomorrow I will teach them Tyrolean songs, and the utterance of the explicit parenthetical I promise you this, where the demonstrative this refers to the utterance of Tomorrow I will teach them Tyrolean songs. Since speakers perform speech acts with utterances of main clauses, I assume that the meaning of the explicit parenthetical I promise you this specifies that the actual illocutionary force of the utterance of Tomorrow I will teach them Tyrolean songs is that of a promise. Hence, instead of deriving an indirect illocutionary force by means of a pragmatic inference schema, we can deal with an ordinary direct speech act that is performed with the utterance of the host sentence. This kind of analysis stresses the particular discourse function of explicit performative utterances: performative verbs are used whenever the contextual information is not sufficient to determine the illocutionary force of the corresponding implicit speech act. The resulting consequences of the parenthetical analysis are interesting, since they cast a different light on performative verbs.
Surprisingly, the performative verbs are not performative at all. They do not constitute the execution of a speech act, but are execution-supporting. Instead of constituting the particular illocutionary force, they merely specify the illocutionary force of the utterance of the host sentence. For instance, the speaker utters the explicit parenthetical I promise you this to specify what he is simultaneously doing. Hence the speaker does not succeed in performing the promise simply because he is uttering I promise you this. Rather, by means of the information conveyed by the utterance of I promise you this, the potential illocutionary forces of the utterance of the host sentence are disambiguated. Thus, it is not the case that explicit parentheticals are trivially true when uttered. Their function is more complex. Their self-verifying property ('saying so makes it so') is explained by means of disambiguation. Furthermore, according to the parenthetical analysis, instead of being purely performative, the performative verbs compositionally contribute their meanings to the truth conditions of the entire explicit performative sentence. Together with its consequences, this analysis is applied to the proper Austinian performatives, which display subordination. I assume that regardless of their structure, explicit performatives always semantically and pragmatically behave as the parenthetical analysis predicts.
We show on an abstract level that contextual equivalence in non-deterministic program calculi defined by may- and must-convergence is maximal in the following sense: using, in addition, all the test predicates generated by the Boolean, universal, and existential closure of may- and must-convergence does not change the contextual equivalence. The situation is different if may- and total must-convergence are used, where an expression totally must-converges if all its reductions are finite and terminate with a value: there is an infinite sequence of test predicates generated by the Boolean, universal, and existential closure of may- and total must-convergence, which in turn leads to an infinite sequence of different contextual equivalences.
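For finite reduction systems the convergence predicates can be made concrete. The following Python sketch uses an encoding of our own (a successor map over states plus a set of terminal value states) and reads "must" as should-convergence, i.e., every reachable state can still reach a value; these choices are illustrative assumptions, not the abstract's definitions.

```python
# may-, must- (should-), and total must-convergence for a finite
# nondeterministic reduction system, given as a successor map `step`
# and a set `values` of terminal value states.
def reachable(step, s):
    seen, todo = set(), [s]
    while todo:
        x = todo.pop()
        if x in seen:
            continue
        seen.add(x)
        todo.extend(step.get(x, []))
    return seen

def may(step, values, s):
    """Some reduction sequence from s reaches a value."""
    return bool(reachable(step, s) & values)

def must(step, values, s):
    """Every reachable state can still reach a value (should-convergence)."""
    return all(may(step, values, x) for x in reachable(step, s))

def total_must(step, values, s):
    """All reductions from s are finite and end in a value: every reachable
    terminal state is a value, and no infinite reduction path exists."""
    seen = reachable(step, s)
    for x in seen:
        if not step.get(x) and x not in values:
            return False                 # a stuck non-value is reachable
    # detect infinite paths by repeatedly removing states all of whose
    # successors are already removed; leftovers can reduce forever
    succ = {x: list(step.get(x, [])) for x in seen}
    changed = True
    while changed:
        changed = False
        for x in list(succ):
            if all(y not in succ for y in succ[x]):
                del succ[x]
                changed = True
    return not succ
```

The one-state system `step = {'a': ['a', 'v']}` with value `v` separates the notions exactly as in the abstract: `a` both may- and must-converges (a value stays reachable along every path), but it does not totally must-converge, because the loop `a -> a` is an infinite reduction.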
Russian predicate cleft constructions have the surprising property of being associated with adversative clauses of the opposite polarity. I argue that clefts are associated with adversative clauses because they have the semantics of S-Topics in Büring's (1997, 2000) sense of the term. It is shown that the polarity of the adversative clause is obligatorily opposed to that of the cleft because the use of a cleft gives rise to a relevance-based pragmatic scale. The ordering principle according to which these scale
Dog after dog revisited
(2006)
This paper presents a compositional semantic analysis of pluractional adverbial modifiers like 'dog after dog' and 'one dog after the other'. We propose a division of labour according to which much of the semantics is carried by a family of plural operators. The adverbial itself contributes a semantics that we call pseudoreciprocal.
In a recent contribution to a long-standing discussion in semantics as to whether the neo-Davidsonian analysis should be extended to stative predicates or not, Maienborn (2004, 2005) proposes to distinguish two types of statives; one of them is said to have a referential argument of the Davidsonian type, the other not. As one of her arguments for making such a distinction, Maienborn observes that manner modification seems to be supported only by certain statives but to be excluded by others (thus linking the issue to the use of manner modification as one major argument in favour of event semantics, cf. Parsons 1990). In this paper, it is argued that the absence of manner modification with Maienborn's second group of statives is actually due to a failure of conceptual construal: modification of a predicate is ruled out whenever its internal conceptual structure is too poor to provide a construal for the modifier; hence, the effects observed by Maienborn reduce to the fact that eventive predicates have a more complex conceptual substructure than stative ones. Hence, the issue of manner modification with statives is shown to be orthogonal to questions of logical form and event semantics. The explanatory power of the conceptual approach is demonstrated with a case study on predicates of light emission, adapting the representation format of Barsalou's (1992) frame model.
Complex focus versus double focus : investigations on multiple focus interpretations in Hungarian
(2006)
The main aim of this paper is to point out several problems with the semantic analysis of Hungarian focus interpretation and 'only'. For current semantic analyses the interpretation of Hungarian identificational/exhaustive focus and 'only' is problematic, since in classical semantic analyses 'only' is identified with an exhaustivity operator. In this paper I will discuss multiple focus constructions and question-answer pairs in Hungarian to show that such a view cannot be applied to Hungarian exhaustive focus. In addition, I will discuss possible interpretations of Hungarian sentences containing multiple prosodic foci: complex focus versus double focus. My claim is that in order to interpret multiple focus (in Hungarian) we have to take into consideration the different intonation patterns, the occurrence of 'only', and the syntactic structure as well.
This paper looks at sentences with "quantificational indefinites," discussed by Diesing (1992) and others. I propose that these sentences generate sets of alternatives of the form {p, not p and it's possible that p}, which restrict the quantification by an extension of familiar focus principles. For example, in the sentence "I usually read a book about slugs" (on the relevant reading), "usually" quantifies over pairs <x,t> such that x is a book about slugs, t is a time interval, and one alternative is true from the set {I read x at t, I can but do not read x at t}. In addition to accounting for a well-known contrast between creation and non-creation verbs, this also explains a second contrast that Diesing’s analysis cannot account for.
Russian and Spanish each have two variants of the predicational copular sentence. In Russian, the variation concerns the case of the predicate phrase, which can be nominative or instrumental, while in Spanish, the variation involves the choice of the copular verb, either ser or estar. It is shown that the choice of the particular variant of copular sentence in both languages depends on the speaker’s perspective, i.e., on whether or not the predication is linked to a specific topic situation.
There is an elegant account, proposed by Beaver and Condoravdi (2003), that assumes that the temporal connectives before and after are converses (i.e., they are analyzed by means of a unified lexical schema), and that explains away their different logical and veridical behavior by appealing to other factors. There is an elegant explanation that connects the licensing of Polarity Items to informational strengthening requirements: Polarity Items are viewed as existentials that lead to a widening of the domain of quantification, and they are predicted to be legitimate only when this widening leads to a stronger statement (roughly, in downward monotone contexts). My plan is to connect these two approaches by proposing an amendment to the definition Beaver and Condoravdi presented for before and after, one that is also meant to account for their Polarity Item licensing behavior.
Mention some of all
(2006)
In the interpretation of natural language one may distinguish three types of dynamics: there are the acts or moves that are made; there are structural relations between subsequent moves; and interlocutors reason about the beliefs and intentions of the participants in a particular language game. Building on some of the formalisms developed to account for the first two types of dynamics, I will generalize and formalize Gricean insights into the third type, and show by means of a case study that such a formalization allows a direct account of an apparent ambiguity: the ‘exhaustive’ versus the ‘mention some’ interpretation of questions and their answers. While the principles which I sketch, like those of Grice, are motivated by assumptions of rationality and cooperativity, they do not presuppose these assumptions to be always warranted.