Papers on pragmasemantics
(2009)
Optimality theory as used in linguistics (Prince & Smolensky, 1993/2004; Smolensky & Legendre, 2006) and cognitive psychology (Gigerenzer & Selten, 2001) is a theoretical framework that aims to integrate constraint-based knowledge representation systems, generative grammar, cognitive skills, and aspects of neural network processing. In recent years, considerable progress has been made in overcoming the artificial separation between the linguistic disciplines on the one hand, which are mainly concerned with describing natural language competence, and the psychological disciplines on the other, which are interested in actual language performance.
The semantics and pragmatics of natural language is a research topic that calls for an integration of philosophical, linguistic, and psycholinguistic aspects, including their neural underpinnings. Recent work on experimental pragmatics in particular (e.g. Noveck & Sperber, 2005; Garrett & Harnish, 2007) has shown that real progress in pragmatics is not possible without data from all available domains, including data from language acquisition and from actual generation and comprehension performance. Using the optimality-theoretic framework to realize this integration is a conceivable research programme.
Game theoretic pragmatics is a relatively young development in pragmatics. The idea of viewing communication as a strategic interaction between speaker and hearer is not new; it is already present in Grice's (1975) classical paper on conversational implicatures. What game theory offers is a mathematical framework in which strategic interaction can be described precisely. It is a leading paradigm in economics, as witnessed by a series of Nobel prizes in the field, and it is of growing importance to other disciplines of the social sciences. In linguistics, its main applications so far have been in pragmatics and theoretical typology. For pragmatics, game theory promises a firm foundation and a rigor that will hopefully allow pragmatic phenomena to be studied with the same precision as that achieved in formal semantics.
The development of game theoretic pragmatics is closely connected to the development of bidirectional optimality theory (Blutner, 2000). It can be easily seen that the game theoretic notion of a Nash equilibrium and the optimality theoretic notion of a strongly optimal form-meaning pair are closely related to each other. The main impulse that bidirectional optimality theory gave to research on game theoretic pragmatics stemmed from serious empirical problems that resulted from interpreting the principle of weak optimality as a synchronic interpretation principle.
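The parallel can be made concrete with a toy sketch (a hypothetical illustration, not drawn from any of the collected papers): given a joint cost over form-meaning pairs, a pair is strongly optimal iff no unilateral switch of the form or of the meaning is cheaper, just as no unilateral deviation improves a Nash equilibrium.

```python
def strongly_optimal(cost, forms, meanings):
    """Strong bidirectional optimality (Blutner, 2000): (f, m) is optimal
    iff no alternative form for the same meaning and no alternative
    meaning for the same form has strictly lower cost -- the analogue
    of a Nash equilibrium with no profitable unilateral deviation."""
    return [(f, m) for f in forms for m in meanings
            if all(cost[(f2, m)] >= cost[(f, m)] for f2 in forms)
            and all(cost[(f, m2)] >= cost[(f, m)] for m2 in meanings)]

# Toy markedness costs: a marked form and a rare meaning each cost 1.
forms = ["unmarked", "marked"]
meanings = ["stereotypical", "rare"]
cost = {(f, m): (f == "marked") + (m == "rare")
        for f in forms for m in meanings}
```

On these toy costs only the pair ("unmarked", "stereotypical") is strongly optimal; weak optimality would additionally recover ("marked", "rare"), which is exactly the kind of divergence between the two notions alluded to above.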
In this volume, we have collected papers that are concerned with several aspects of game and optimality theoretic approaches to pragmatics.
Motivated by the question of the correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence, which also yields soundness of the encodings with respect to a contextually defined notion of program equivalence. While these translations encode blocking into queuing and waiting, we also describe an adequate encoding of buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore, we demonstrate that our correctness concept applies to the whole compilation process from high-level to low-level concurrent languages by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
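The flavour of such an encoding can be sketched in Python (a loose illustration of the general future/handle idea, not the paper's actual translation): a buffer becomes a stream of futures, so that a blocking get turns into waiting on the head future, while put fulfils that future through the write-end handle.

```python
from concurrent.futures import Future
import threading

class FutureChannel:
    """Unbounded FIFO buffer encoded as a stream of futures: `_front`
    is the read end (a future for the rest of the stream); `_back`
    plays the role of the handle that fulfils the next cell."""

    def __init__(self):
        tail = Future()
        self._front = tail
        self._back = tail
        self._get_lock = threading.Lock()
        self._put_lock = threading.Lock()

    def put(self, value):
        with self._put_lock:
            new_tail = Future()
            # Fulfil the current tail with (value, rest-of-stream) and
            # move the handle forward -- "blocking" becomes "queuing".
            self._back.set_result((value, new_tail))
            self._back = new_tail

    def get(self):
        with self._get_lock:
            value, rest = self._front.result()  # waits while empty
            self._front = rest
            return value
```

A consumer calling get on an empty channel simply waits on an as-yet-unfulfilled future until some producer's put resolves it via the handle.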
We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus’ semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as nondeterminism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe’s proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
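The idea of pre-evaluation can be illustrated on a toy nondeterministic language (a hypothetical miniature, far simpler than the call-by-need calculus of the paper): compute the set of possible answers of an expression up to a finite depth, and take simulation at that depth to be inclusion of answer sets.

```python
def pre_eval(expr, depth):
    """Answer sets of a toy nondeterministic language up to a finite
    depth: expressions are ints, ("choice", a, b) or ("add", a, b);
    the answer set approximates the may-convergence observations."""
    if depth == 0:
        return set()
    if isinstance(expr, int):
        return {expr}
    op, a, b = expr
    if op == "choice":
        return pre_eval(a, depth - 1) | pre_eval(b, depth - 1)
    if op == "add":
        return {x + y
                for x in pre_eval(a, depth - 1)
                for y in pre_eval(b, depth - 1)}
    raise ValueError(f"unknown operator {op!r}")

def simulates(e1, e2, depth):
    """e2 simulates e1 at this depth: every answer of e1 is one of e2."""
    return pre_eval(e1, depth) <= pre_eval(e2, depth)
```

In the real calculus the answers are computed by rewriting under binders, but the shape of the check is the same: simulation succeeding at a finite depth witnesses a may-convergence relationship between the two expressions.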
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules, and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which makes simulation available as a proof tool. Unfortunately, a direct proof appears to be impossible.
The correctness proof proceeds by defining another calculus L that comprises complex variants of the copy, case and seq reductions using variable-binding chains. This complex calculus has well-behaved diagrams and permits a proof that the transformations are correct and that the simple calculus LS, the calculus L, and the calculus LA all induce the same contextual preorder.
In contrast to the US and, more recently, Europe, Japan appears to be unsuccessful in establishing new industries. An oft-cited example is Japan's practical invisibility in the global business software sector. The literature has ascribed Japan's weakness – or conversely, America's strength – to the specific institutional settings and competences of actors within the respective national innovation system. It has additionally been argued that, unlike the American innovation system with its proven ability to give birth to new industries, the inherent path dependency of the Japanese innovation system makes innovation and the establishment of new industries quite difficult. However, there are two notable weaknesses underlying current propositions postulating that only certain innovation systems enable the creation of new industries: first, they mistakenly confound context-specific with general empirical observations; and second, they grossly underestimate – or altogether fail to examine – the dynamics within innovation systems. This paper will show that it is precisely the dynamics within innovation systems – dynamics founded on the concept of path plasticity – that have enabled Japan to charge forward as a global leader in highly innovative fields: the game software sector as well as the biotechnology industry.
European scholars, colonial administrators, missionaries, bibliophiles and others were the main collectors of Malay books in the nineteenth century, both in manuscript and in printed form. Among them were many names well known in the field of Malay literature and culture, such as Raffles, Marsden, Crawfurd, Klinkert, van der Tuuk, von Dewall, Roorda, Favre, Maxwell, Overbeck, Wilkinson and Skeat, to name only a few. Their collections were often handed over to public libraries, where they form an important part of the relevant Oriental or Southeast Asian manuscript collections.
Knowledge of the intellectual culture of the Malay Peninsula, and of the Malay world in general, has therefore depended very much on these manuscripts and printed books, which were often collected by chance or in a rather unsystematic way. The collections strongly reflect the interests of their administrator or philologist collectors: court histories, genealogies of aristocratic lineages, law collections (adat-istiadat as well as undang-undang) and prose belles-lettres form the vast bulk of these collections, while Islamic religious texts and the poetic forms popular in the 19th century (especially syair) are fairly underrepresented. Malay manuscripts and books located in religious institutions such as mosques or pondok/pesantren schools have not been searched for; to this day there are more or less no systematic studies of these collections. Since, according to some statistics, religious texts make up about 20% of all existing Malay manuscripts, their neglect by European scholars leads to a distorted view of the literary culture in the Malay language.
The selection of features for classification, clustering and approximation is an important task in pattern recognition, data mining and soft computing. For real-valued features, this contribution shows how feature selection for a high number of features can be implemented using mutual information. In particular, the common problem in mutual information computation of estimating joint probabilities for many dimensions from only a few samples is addressed by using the Rényi mutual information of order two as the computational basis. For this, the Grassberger-Takens correlation integral is used, which was developed for estimating probability densities in chaos theory. Additionally, an adaptive procedure for computing the hypercube size is introduced, and for real-world applications the treatment of missing values is included. The computation procedure is accelerated by exploiting the ranking of the set of real feature values, especially for time series. As an example, a small blackbox-glassbox example shows how the relevant features and their time lags are determined in a time series even when the input feature time series determine the output nonlinearly. A more realistic example from the chemical industry shows that this enables a better approximation of the input-output mapping than the best neural network approach developed for an international contest. Thanks to the computationally efficient implementation, mutual information becomes an attractive tool for feature selection even for a high number of real-valued features.
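The core estimator can be sketched in a few lines (a simplified illustration, not the authors' implementation; the adaptive hypercube size, missing-value handling and ranking-based acceleration are all omitted): the order-2 Rényi entropy is estimated from the correlation integral, i.e. the fraction of sample pairs falling inside an eps-hypercube, and an additive mutual-information score is then used to rank features.

```python
import numpy as np

def renyi_h2(data, eps):
    """Order-2 Rényi entropy estimate via the correlation integral:
    -log of the fraction of sample pairs within eps in the max-norm,
    i.e. pairs inside an eps-hypercube (self-pairs are kept for
    simplicity; they also keep the fraction strictly positive)."""
    data = np.asarray(data, dtype=float)
    if data.ndim == 1:
        data = data[:, None]
    # Pairwise max-norm distances between all samples, shape (N, N).
    dist = np.max(np.abs(data[:, None, :] - data[None, :, :]), axis=-1)
    return -np.log(np.mean(dist <= eps))

def renyi_mi2(x, y, eps):
    """Additive order-2 score I2 = H2(x) + H2(y) - H2(x, y), used here
    only to rank candidate features against a target, not as an exact
    mutual-information value."""
    xy = np.column_stack([x, y])
    return renyi_h2(x, eps) + renyi_h2(y, eps) - renyi_h2(xy, eps)
```

A feature that depends on the target should score clearly higher than an irrelevant one, which is all that ranking-based selection needs.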