Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows decidability of stratified context unification as well as of satisfiability of one-step rewrite constraints.
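As a small illustration (a generic textbook-style example, not taken from the paper itself), a context unification problem asks for instantiations of context variables — terms with exactly one hole — that make both sides of an equation syntactically equal:

```latex
% C is a context variable, f a unary function symbol, a a constant:
C(C(a)) \doteq f(f(a))
% Instantiating C with the context f(\cdot) solves the equation,
% since then C(C(a)) = f(f(a)).
```

Over this signature the only contexts are of the form $f^n(\cdot)$, and $C(C(a)) = f^{2n}(a)$ forces $n = 1$, so the solution is unique here; in general, context unification problems may have infinitely many solutions.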
In the EU there are longstanding and ongoing pressures towards a tax that is levied at the EU level to substitute for national contributions. We discuss conditions under which such a transition can make sense, starting from what we call a "decentralization theorem of taxation" that is analogous to Oates' (1972) famous result that, in the absence of spill-over effects and economies of scale, decentralized public good provision weakly dominates central provision. We then drop assumptions that turn out to be unnecessary for this result. While spill-over effects of taxation may call for central rules for taxation, as long as spill-over effects do not depend on the intra-regional distribution of the tax burden, decentralized taxation plus tax coordination is found to be superior to a union-wide tax.
The merchant language of the Georgian Jews deserves scholarly attention for several reasons. The political and social developments of the last fifty years have caused the extinction of this very interesting form of communication, as most Georgian Jews have emigrated to Israel. In a natural interaction, the type of language described in this article can be found very rarely, if at all. Records of this communication have been preserved in various contexts and received different levels of scholarly attention. Our interest concerns the linguistic aspects as well as the classification.
In the following paper we argue that the specific merchant language of Georgian Jews belongs to the pragmatic phenomenon of “very indirect language.” The use of mostly Hebrew lexemes in Georgian conversation leads to an unfounded assumption that the speakers are equally competent in Hebrew and Georgian. It is reported that a high level of linguistic competence in Hebrew does not guarantee understanding of the Jewish merchant language. In the Georgian context, the decisive factors are membership in the professional interest group of merchants and residential membership in the Jewish community. These factors seem to be equivalent, because Jewish members of other professional groups (and those from outside the particular urban residential area) have difficulties in following the language that are similar to those of the Georgian majority. We describe the pragmatic structure of interactions conducted with the help of the merchant language and take into account the purpose of the language’s use or the intention of the speakers. Relevant linguistic examples are analysed and their sociocultural contexts explained.
This paper deals with the proposed use of sovereign credit ratings in the "Basel Accord on Capital Adequacy" (Basel II) and considers its potential effect on emerging market financing. It provides a first investigation of the consequences of the planned revisions for the two central aspects of international bank credit flows: the impact on capital costs and the volatility of credit supply across the risk spectrum of borrowers. The empirical findings cast doubt on the usefulness of credit ratings in determining commercial banks' capital adequacy ratios, since the standardized approach to credit risk would lead to more divergence rather than convergence between investment-grade and speculative-grade borrowers. This conclusion is based on the lateness and cyclical determination of credit rating agencies' sovereign risk assessments and the continuing incentives for short-term rather than long-term interbank lending ingrained in the proposed Basel II framework.
This paper examines optimal environmental policy when external financing is costly for firms. We introduce emission externalities and industry equilibrium in the Holmström and Tirole (1997) model of corporate finance. While a cap-and-trade system optimally governs both firms' abatement activities (internal emission margin) and industry size (external emission margin) when firms have sufficient internal funds, external financing constraints introduce a wedge between these two objectives. When a sector is financially constrained in the aggregate, the optimal cap is strictly above the Pigouvian benchmark and emission allowances should be allocated below market prices. When a sector is not financially constrained in the aggregate, a cap that is below the Pigouvian benchmark optimally shifts market share to less polluting firms and, moreover, there should be no "grandfathering" of emission allowances. With financial constraints and heterogeneity across firms or sectors, a uniform policy, such as a single cap-and-trade system, is typically not optimal.
Unquestionably, every competent speaker has at some point been in doubt about which form is correct or appropriate and should be used (in the standard language) when faced with two or more almost identical competing variants of words, word forms or sentence and phrase structures (e.g. German "Pizzas/Pizzen/Pizze" 'pizzas', Dutch "de drie mooiste/mooiste drie stranden" 'the three most beautiful/most beautiful three beaches', Swedish "större än jag/mig" 'taller than I/me'). Such linguistic uncertainties or "cases of doubt" (cf. i.a. Klein 2003, 2009, 2018; Müller & Szczepaniak 2017; Schmitt, Szczepaniak & Vieregge 2019; Stark 2019 as well as the useful data collections of Duden vol. 9, Taaladvies.net, Språkriktighetsboken etc.) occur systematically even among native speakers and do not necessarily coincide with the difficulties of second language learners. In present-day German, most grammatical uncertainties occur in the domains of inflection (nominal plural formation, genitive singular allomorphy of strong masc./neut. nouns, inflectional variation of weak masc. nouns, strong/weak adjectival inflection and comparison forms, strong/weak verb forms, perfect auxiliary selection) and word-formation (linking elements in compounds, separability of complex verbs). As for syntax, doubts often arise in connection with case choice (pseudo-partitive constructions, prepositional case government) and agreement (especially due to coordination or appositional structures). This contribution presents a contrastive approach to morphological and syntactic uncertainties in contemporary Germanic languages (mostly German, Dutch, and Swedish) in order to obtain a broader and more fine-grained typology of grammatical instabilities and their causes.
As will be discussed, most doubts of competent speakers - a problem also for general linguistic theory - can be attributed to processes of language change in progress, to language or variety contact, to gaps and rule conflicts in the grammar of every language, or to psycholinguistic conditions of language processing. Our main concerns are which (kinds of) critical areas are shared or differ within Germanic (and, conversely, in which areas no doubts arise), which of the established (cross-linguistically valid) explanatory approaches apply to which phenomena and, ultimately, whether the new data reveal further lines of explanation for the empirically observable (standard) variation.
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.
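The futures described above can be sketched in ordinary Concurrent Haskell, where an MVar plays the role of the synchronizing variable holding a future's result (a minimal illustrative encoding; the name `future` and this explicit construction are our own, not the implicit futures of the calculus CHF):

```haskell
import Control.Concurrent      (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- A future: start the computation in its own thread and hand back
-- an MVar that will eventually hold the result.
future :: IO a -> IO (MVar a)
future act = do
  v <- newEmptyMVar
  _ <- forkIO (act >>= putMVar v)
  return v

main :: IO ()
main = do
  f <- future (return (2 + 3 :: Int))
  r <- takeMVar f   -- takeMVar blocks until the future is filled
  print r
```

Reading the result with `takeMVar` is the synchronization point: the consumer blocks until the producing thread has written the value, mirroring how a monadic future delivers its result on demand.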
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective way of performing strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences, the main measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
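To illustrate the kind of property such a strictness analysis establishes (a generic example, not taken from the paper), consider a function that is strict in its accumulator: every call eventually demands the accumulator's value, so it may safely be evaluated eagerly. Here `seq` makes that eager evaluation explicit:

```haskell
-- sumAcc is strict in acc: its value is always needed in the end.
-- A strictness analyser may prove this and allow eager evaluation;
-- seq forces the accumulator so no chain of thunks builds up.
sumAcc :: Int -> [Int] -> Int
sumAcc acc []       = acc
sumAcc acc (x : xs) = let acc' = acc + x
                      in acc' `seq` sumAcc acc' xs

main :: IO ()
main = print (sumAcc 0 [1 .. 100])
```

Without the forcing, lazy evaluation would accumulate the unevaluated sum `(...((0 + 1) + 2)... + 100)`; strictness information justifies evaluating `acc'` immediately without changing the result.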
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and the real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model we find that the nominal-to-real transformation is not rejected for the money relation whereas the wealth relation cannot be expressed in real terms.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (nondeterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving the correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams, we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering <=c is also analysed: Ω is not a least element, and <=c already implies contextual equivalence w.r.t. may-convergence.
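The bottom-avoiding behaviour of amb can be approximated in plain Concurrent Haskell by racing two threads and taking whichever result arrives first (a hypothetical encoding for illustration only; in the calculus amb is a primitive with its own reduction rules, and here a slow computation merely stands in for a diverging one):

```haskell
import Control.Concurrent      (forkIO, killThread, threadDelay)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- amb a b: run both computations concurrently and return the result
-- of whichever finishes first. A non-terminating argument is thus
-- avoided whenever the other argument converges (bottom-avoidance).
amb :: IO a -> IO a -> IO a
amb a b = do
  v  <- newEmptyMVar
  ta <- forkIO (a >>= putMVar v)
  tb <- forkIO (b >>= putMVar v)
  r  <- takeMVar v
  killThread ta
  killThread tb
  return r

main :: IO ()
main = do
  let slow = threadDelay 1000000 >> return (0 :: Int)  -- stands in for bottom
  r <- amb slow (return 42)
  print r  -- the converging branch supplies the answer
```

The race makes the choice locally bottom-avoiding in the sense of the abstract: the overall computation converges as long as at least one branch does.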