Central banks have faced a succession of crises in recent years, as well as a number of structural factors such as the transition to a greener economy, demographic developments, digitalisation and possibly increased onshoring. These suggest that the future inflation environment will differ from the one we know. Uncertainty about important macroeconomic variables, and in particular about inflation dynamics, will therefore likely remain high.
This paper reviews social network analysis (SNA) as a method for biographical research, which is a novel contribution. We argue that applying SNA in biographical research, through standardized data collection and the visualization of networks, can open up participants’ interpretations of relations throughout their lives and allows a creative, innovative form of data collection that is responsive to participants’ own meanings and associations while still enabling systematic data analysis. The paper critically discusses the analytical potential of this method in biographical research, together with its limitations.
We present an empirical study focusing on the estimation of a fundamental multi-factor model for a universe of European stocks. Following the approach of the BARRA model, we adopt a cross-sectional methodology. The proportion of explained variance in the weekly regressions ranges from 7.3% to 66.3%, with a mean of 32.9%. For each factor we report the percentage of weeks in which it had a statistically significant influence on stock returns. The best explanatory power, apart from the dominant country factors, was found for the statistical constructs "success" and "variability in markets".
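A minimal sketch of one weekly cross-sectional (BARRA-style) regression of the kind described above, in Python; the universe size, factor count and data are hypothetical placeholders, not the paper's European stock universe.

```python
import numpy as np
import statsmodels.api as sm

def weekly_cross_section(returns, exposures):
    """One cross-sectional regression of stock returns on factor exposures.

    returns   : (n_stocks,) array of one week's stock returns
    exposures : (n_stocks, n_factors) array of factor exposures
    Returns the estimated factor returns and the R^2 of the cross-section.
    """
    X = sm.add_constant(exposures)            # intercept plus factor exposures
    fit = sm.OLS(returns, X).fit()
    return fit.params, fit.rsquared

# Hypothetical week: 500 stocks, 10 factors
rng = np.random.default_rng(0)
B = rng.normal(size=(500, 10))                               # exposures
r = B @ rng.normal(scale=0.01, size=10) + rng.normal(scale=0.02, size=500)
factor_returns, r2 = weekly_cross_section(r, B)
print(f"explained variance (R^2) this week: {r2:.1%}")
```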
We focus on the role of social media as a high-frequency, unfiltered mass information transmission channel and on how its use for government communication affects aggregate stock markets. To measure this effect, we concentrate on one of the most prominent Twitter users, the 45th President of the United States, Donald J. Trump. We analyze around 1,400 of his tweets related to the US economy and classify them by topic and textual sentiment using machine learning algorithms. We investigate whether the tweets contain relevant information for financial markets, i.e., whether they affect market returns, volatility, and trading volumes. Using high-frequency data, we find that Trump’s tweets are most often a reaction to pre-existing market trends and therefore do not provide material new information that would influence prices or trading. We show that past market information can help predict Trump’s decision to tweet about the economy.
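A minimal sketch of the kind of tweet classification step described above, using TF-IDF features and a logistic regression classifier from scikit-learn; the tweets, labels and model choice are hypothetical illustrations, not the authors' corpus or algorithms.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled tweets (placeholder data, not the paper's sample)
tweets = [
    "Jobs numbers are fantastic!",
    "Stock market at an all-time high",
    "Trade talks are going very badly",
    "The Fed is raising rates way too fast",
]
sentiment = ["positive", "positive", "negative", "negative"]

# TF-IDF features feeding a simple linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, sentiment)
print(clf.predict(["Great news for the US economy"]))
```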
This paper solves a dynamic model of households' mortgage decisions incorporating labor income, house price, inflation, and interest rate risk. It uses a zero-profit condition for mortgage lenders to solve for equilibrium mortgage rates given borrower characteristics and optimal decisions. The model quantifies the effects of adjustable vs. fixed mortgage rates, loan-to-value ratios, and mortgage affordability measures on mortgage premia and default. Heterogeneity in borrowers' labor income risk is important for explaining the higher default rates on adjustable-rate mortgages during the recent US housing downturn, and the variation in mortgage premia with the level of interest rates.
The Inuit inhabit a vast area of land that is, from a European point of view, most inhospitable, stretching from the northeastern tip of Asia to the east coast of Greenland. Inuit peoples have never been numerous, and their settlements are scattered over enormous distances. Nevertheless, from an ethnological point of view, all Inuit peoples shared a distinct culture, featuring sea mammal and caribou hunting, sophisticated survival skills, and technical and social devices, including the sharing of essential goods and strategies for minimizing and controlling aggression.
On average, "young" people underestimate whereas "old" people overestimate their chances to survive into the future. We adopt a Bayesian learning model of ambiguous survival beliefs which replicates these patterns. The model is embedded within a non-expected utility model of life-cycle consumption and saving. Our analysis shows that agents with ambiguous survival beliefs (i) save less than originally planned, (ii) exhibit undersaving at younger ages, and (iii) hold larger amounts of assets in old age than their rational expectations counterparts who correctly assess their survival probabilities. Our ambiguity-driven model therefore simultaneously accounts for three important empirical findings on household saving behavior.
Based on a cognitive notion of neo-additive capacities reflecting likelihood insensitivity with respect to survival chances, we construct a Choquet Bayesian learning model over the life-cycle that generates a motivational notion of neo-additive survival beliefs expressing ambiguity attitudes. We embed these neo-additive survival beliefs as decision weights in a Choquet expected utility life-cycle consumption model and calibrate it with data on subjective survival beliefs from the Health and Retirement Study. Our quantitative analysis shows that agents with calibrated neo-additive survival beliefs (i) save less than originally planned, (ii) exhibit undersaving at younger ages, and (iii) hold larger amounts of assets in old age than their rational expectations counterparts who correctly assess their survival chances. Our neo-additive life-cycle model can therefore simultaneously accommodate three important empirical findings on household saving behavior.
We consider an imperfectly competitive loan market in which a local relationship lender has an information advantage vis-à-vis distant transaction lenders. Competitive pressure from the transaction lenders prevents the local lender from extracting the full surplus from projects, so that she inefficiently rejects marginally profitable projects. Collateral mitigates the inefficiency by increasing the local lender’s payoff from precisely those marginal projects that she inefficiently rejects. The model predicts that, controlling for observable borrower risk, collateralized loans are more likely to default ex post, which is consistent with the empirical evidence. The model also predicts that borrowers for whom local lenders have a relatively smaller information advantage face higher collateral requirements, and that technological innovations that narrow the information advantage of local lenders, such as small business credit scoring, lead to a greater use of collateral in lending relationships. JEL classification: D82; G21. Keywords: Collateral; Soft information; Loan market competition; Relationship lending
As part of the Next Generation EU (NGEU) program, the European Commission has pledged to issue up to EUR 250 billion of the NGEU bonds as green bonds, in order to confirm its commitment to sustainable finance and to support the transition towards a greener Europe. The EU is thereby not only entering the green bond market but is also set to become one of the biggest green bond issuers. Consequently, financial market participants are eager to know what to expect from the EU as a new green bond issuer and whether a negative green bond premium, a so-called Greenium, can be expected for the NGEU green bonds. This research paper formulates an expectation with regard to a potential Greenium for the NGEU green bonds by conducting interviews with 15 sustainable finance experts and analyzing the public green bond market from September 2014 until June 2021 with respect to a potential green bond premium and its underlying drivers. The regression results confirm the existence of a significant Greenium (-0.7 bps) in the public green bond market and show that the Greenium increases for supranational issuers with AAA rating, such as the EU. Moreover, the green bond premium is influenced by issuer sector and credit rating, whereas issue size and modified duration have no significant effect. Overall, the evaluated expert interviews and regression analysis lead to an expected Greenium for the NGEU green bonds of up to -4 bps, with the potential to increase further in the secondary market.
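A stylized sketch of a green bond premium regression of the type reported above, in Python; the data are synthetically generated placeholders, so variable names and coefficient values are illustrative only and will not reproduce the paper's -0.7 bps estimate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical matched-bond sample: yield_diff = green minus conventional yield (bps)
rng = np.random.default_rng(1)
n = 200
bonds = pd.DataFrame({
    "sector": rng.choice(["supranational", "agency", "corporate"], n),
    "rating": rng.choice(["AAA", "AA", "BBB"], n),
    "issue_size": rng.uniform(0.2, 3.0, n),        # EUR bn
    "mod_duration": rng.uniform(2.0, 12.0, n),
})
bonds["yield_diff"] = (-0.7 - 1.5 * (bonds["sector"] == "supranational")
                       + rng.normal(scale=1.0, size=n))

# Greenium regression: spread on issuer sector, rating, issue size and duration
model = smf.ols("yield_diff ~ C(sector) + C(rating) + issue_size + mod_duration",
                data=bonds).fit()
print(model.params)
```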
We examine how U.S. monetary policy affects the international activities of U.S. banks. We access a rarely studied U.S. bank-level dataset to assess, at a quarterly frequency, how changes in the U.S. Federal funds rate (before the crisis) and quantitative easing (after the onset of the crisis) affect changes in cross-border claims by U.S. banks across countries, maturities and sectors, as well as changes in claims by their foreign affiliates. We find robust evidence consistent with the existence of a potent global bank lending channel. In response to changes in U.S. monetary conditions, U.S. banks strongly adjust their cross-border claims in both the pre- and post-crisis periods. However, we also find that U.S. bank affiliate claims respond mainly to host country monetary conditions.
Futures markets are a potentially valuable source of information about market expectations. Exploiting this information has proved difficult in practice, because the presence of a time-varying risk premium often renders the futures price a poor measure of the market expectation of the price of the underlying asset. Even though the expectation in principle may be recovered by adjusting the futures price by the estimated risk premium, a common problem in applied work is that there are as many measures of market expectations as there are estimates of the risk premium. We propose a general solution to this problem that allows us to uniquely pin down the best possible estimate of the market expectation for any set of risk premium estimates. We illustrate this approach by solving the long-standing problem of how to recover the market expectation of the price of crude oil. We provide a new measure of oil price expectations that is considerably more accurate than the alternatives and more economically plausible. We discuss implications of our analysis for the estimation of economic models of energy-intensive durables, for the debate on speculation in oil markets, and for oil price forecasting.
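The adjustment described above can be written compactly. In generic notation (not necessarily the authors'), the market expectation of the spot price at horizon h is the futures price net of the risk premium,

\[ E_t[S_{t+h}] = F_t^{(h)} - RP_t^{(h)}, \]

so every estimate of the risk premium implies a different measure of the market expectation; pinning down a unique best estimate for any set of risk premium estimates is the problem the paper addresses.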
The human mind may produce prototypization within virtually any realm of cognition and behavior. A "comparative prototype-typology" might prove to be an interesting field of study, perhaps a new subfield of semiotics. This, however, would presuppose a clear view of the samenesses and differences of prototypization in these various fields. It seems realistic for the time being that the linguist first confine himself to describing prototypization within the realm of language proper. The literature on prototypes has grown steadily over the past ten years or so. I confine myself to mentioning the volume on Noun Classes and Categorization, edited by C. Craig (1986), which contains a wealth of factual information on the subject, along with some theoretical vistas. By and large, however, linguistic prototype research is still basically in a taxonomic stage, which, of course, represents the precondition for moving beyond it. The procedure is largely per ostensionem, and by accumulating examples of prototypes. We still lack a comprehensive prototype theory. The following pages are intended not to provide such a theory, but to take the first steps in this direction. Section 2 will feature some elements of a functional theory of prototypes. They have been developed by this author within the frame of the UNITYP model of research on language universals and typology. Section 3 will discuss prototypization with regard to selected phenomena from a wide range of levels of analysis: phonology, morphosyntax, speech acts, and the lexicon. Prototypization will finally be studied within one of the universal dimensions, that of APPREHENSION, the linguistic representation of the concepts of objects, as proposed by Seiler (1986).
We selectively survey, unify and extend the literature on realized volatility of financial asset returns. Rather than focusing exclusively on characterizing the properties of realized volatility, we progress by examining economically interesting functions of realized volatility, namely realized betas for equity portfolios, relating them both to their underlying realized variance and covariance parts and to underlying macroeconomic fundamentals.
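In standard notation (not necessarily the authors'), the realized beta of an equity portfolio i on day t, computed from intraday portfolio returns r_{i,j} and market returns r_{m,j}, is the ratio of the realized covariance to the realized market variance:

\[ \hat{\beta}_{i,t} = \frac{\sum_{j} r_{i,j}\, r_{m,j}}{\sum_{j} r_{m,j}^{2}} . \]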
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus’ semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as nondeterminism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe’s proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
A version of this paper was originally written for a plenary session about "The Futures of Ethnography" at the 1998 EASA conference in Frankfurt/Main. In the preparation of the paper, I sent out some questions to my former fellow researchers by e-mail. I thank Douglas Anthony, Jan-Patrick Heiß, Alaine Hutson, Matthias Krings, and Brian Larkin for their answers.
Often adopting a feminist perspective, the sociological literature on migrant domestic services (MDS) does not make explicit which feminist paradigm it speaks from. This article situates this literature within ongoing debates in feminist theory, in particular the tension between materialist and poststructuralist approaches. It then discusses the empirical relevance of each of these two paradigms, using the results of original research into the personalization of employment relationships in MDS.
The contribution proposes a new way of making sense of the diversity of feminist theories, distinguishing between modern and postmodern approaches. Since the 1980s, feminist theory in the US and Western Europe has undergone a ‘postmodern turn’, which renders previous typologies much less attuned to recent developments in the field. The article then examines which paradigms are implicit in the sociological literature on MDS. Initially, personalization in MDS was mainly seen in materialist terms, as a way to maximize the quantity and quality of labour (including emotional labour) extracted from domestic workers. The emergence of postmodern approaches in feminist theory set off a progressive shift in the MDS literature. First, this literature showed that personalization also fulfils identity functions for employers and workers; it then widened its focus to include the affective dimensions of domestic labour (not to be confused with emotional labour). The final section shows how modern and postmodern feminist approaches can be combined within a single study, using the example of original research on personalization in MDS in Belgium and Poland. In particular, the contribution shows that the distinction between the material functions of personalization on the one hand and its emotional/identity functions on the other is not empirically operative. Indeed, migrant domestic workers generally use emotional/identity categories to frame material questions, and vice versa. This final part shows that, rather than representing incompatible approaches, modern and postmodern feminisms complement each other, in this case yielding a fuller picture of personalization processes in MDS.
This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies the martingale method. More precisely, we construct the non-separable value function by formalizing the optimal constrained terminal wealth to be a (conjectured) contingent claim on the optimal non-constrained terminal wealth. This is relevant by itself, but also opens up the opportunity to derive new solutions to constrained problems. As a second contribution, we thus derive new results for non-strict constraints on the shortfall of intermediate wealth and/or consumption.
This paper considers a trading game in which sequentially arriving liquidity traders either opt for a market order or for a limit order. One class of traders is considered to have an extended trading horizon, implying that their impatience is linked to their trading orientation. More specifically, sellers are considered to have a trading horizon of two periods, whereas buyers only have a single-period trading scope (the extended buyer-horizon case is completely symmetric). Clearly, as the life span of their submitted limit orders is longer, this setting implies that sellers are granted a natural advantage in supplying liquidity. This benefit is hampered, however, by the direct competition arising between consecutively arriving sellers. Closed-form characterizations of the order submission strategies are obtained when solving for the equilibrium of this dynamic game. These allow us to examine how these forces affect traders' order placement decisions. Further, the analysis yields insight into the dynamic process of price formation and into the market clearing process of a non-intermediated, order-driven market.
The article, which summarizes key findings of my German book ‘Die Gemeinfreiheit. Begriff, Funktion, Dogmatik’ (‘The Public Domain: Theory, Function, Doctrine’), asks whether there are any provisions or principles under German and EU law that protect the public domain from interference by the legislature, courts and private parties. In order to answer this question, it is necessary to step out of the intellectual property (IP) system and to analyze this body of law from the outside, and, even more important, to develop a positive legal conception of the public domain as such. By giving the public domain a proper doctrinal place in the legal system, the structural asymmetry between heavily theorized and protected IP rights on the one hand and a neglected public domain on the other is countered. The overarching normative purpose is to develop a framework for a balanced IP system, which can only be achieved if the public domain forms an integral part of the overall regulation of information.
Recent models with liquidity constraints and impatience emphasize that consumers use savings to buffer income fluctuations. When wealth is below an optimal target, consumers try to increase their buffer stock of wealth by saving more. When it is above target, they increase consumption. This important implication of the buffer stock model of saving has not been subject to direct empirical testing. We derive from the model an appropriate theoretical restriction and test it using data on working-age individuals drawn from the 2002 and 2004 Italian Surveys of Household Income and Wealth. One of the most appealing features of the survey is that it has data on the amount of wealth held for precautionary purposes, which we interpret as target wealth in a buffer stock model. The test results do not support buffer stock behavior, even among population groups that are more likely, a priori, to display such behavior. The saving behavior of young households is instead consistent with models in which impatience, relative to prudence, is not as high as in buffer stock models. JEL Classification: D91
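One stylized way to express the target-reversion implication described above (an illustration in generic notation, not the paper's exact testable restriction): with target wealth w* and current wealth w_t, buffer stock behavior implies

\[ E_t[\Delta w_{t+1}] \approx \kappa \,(w^{*} - w_t), \qquad \kappa > 0, \]

so households below target save more and households above target consume more.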
The Stanford Project on Language Universals began its activities in October 1967 and brought them to an end in August 1976. Its directors were Joseph H. Greenberg and Charles A. Ferguson. The Cologne Project on Language Universals and Typology [with particular reference to functional aspects], abbreviated UNITYP, had its early beginnings in 1972, but deployed its full activities from 1976 onwards and is still operating. This writer, who is the principal investigator, had the privilege of collaborating with the Stanford Project during spring of 1976. […] One of the leading Greenbergian ideas, that of implicational generalizations, has been integrated as a fundamental principle in the construction of continua and of universal dimensions as proposed by UNITYP. It is hoped that the following considerations on numeral systems will be apt to bear witness to this situation. They would be unthinkable without Greenberg’s pioneering work on "Generalizations about numeral systems" (Greenberg 1978: 249 ff., henceforth referred to as Greenberg, NS). Further work on this domain and on other comparable domains almost inevitably leads one to the view that generalizations of the Greenberg type have a functional significance and that a dimensional framework is apt to bring this to the fore. This is the view of linguistic behaviour as being purposeful, and of language as a problem-solving device. The problem consists in the linguistic representation of cognitive-conceptual ideas. The solution is represented by the corresponding linguistic structures in their diversity, and the task of the linguist consists in reconstructing the program and subprograms underlying the process of problem-solving. It is claimed that the construct of continua and of universal dimensions makes these programs intelligible.
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4,428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation. Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
Artificial drainage of agricultural land, for example with ditches or drainage tubes, is used to avoid waterlogging and to manage high groundwater tables. Among other impacts, it influences nutrient balances by increasing leaching losses and by decreasing denitrification. To simulate terrestrial transport of nitrogen on the global scale, a digital global map of artificially drained agricultural areas was developed. The map depicts the percentage of each 5’ by 5’ grid cell that is equipped for artificial drainage. Information on artificial drainage in countries or sub-national units was mainly derived from international inventories. Distribution to grid cells was based, for most countries, on the "Global Croplands Dataset" of Ramankutty et al. (1998) and the "Digital Global Map of Irrigation Areas" of Siebert et al. (2005). For some European countries the CORINE land cover dataset was used instead of the two datasets mentioned above. Maps with outlines of artificially drained areas were available for 6 countries. The global drainage area on the map is 167 million hectares. For only 11 out of the 116 countries with information on artificially drained areas could sub-national information be taken into account. Due to this coarse spatial resolution of the data sources, we recommend using the map of artificially drained areas only for continental- to global-scale assessments. This documentation describes the dataset, the data sources and the map generation, and it discusses the data uncertainty.
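A minimal sketch of the kind of proportional allocation step described above (spreading a national or subnational total across grid cells by cropland weight), in Python; the function name, numbers and capping rule are simplified illustrations, not the exact procedure used to generate the map.

```python
import numpy as np

def distribute_to_cells(unit_total_ha, cropland_share, cell_area_ha):
    """Spread a subnational drained-area total across grid cells in
    proportion to each cell's cropland area (simplified illustration)."""
    weights = cropland_share * cell_area_ha
    weights = weights / weights.sum()
    allocated = unit_total_ha * weights
    return np.minimum(allocated, cell_area_ha)   # a cell cannot exceed its own area

# Hypothetical unit: 120,000 ha of drained land, four grid cells of 70,000 ha each
print(distribute_to_cells(120_000,
                          np.array([0.8, 0.5, 0.1, 0.0]),
                          np.full(4, 70_000.0)))
```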
The emergence of Capitalism is said always to lead to extreme changes in the structure of a society. This view implies that Capitalism is a universal and unique concept that needs an explicit institutional framework and that does not discriminate between, say, a German or a US Capitalism. In contrast, this work argues that the ‘ideal type’ of Capitalism in a Weberian sense does not exist. It will be demonstrated that Capitalism is not a concept that shapes a uniform institutional framework within every society, constructing a specific economic system. Rather, depending on the institutional environment, family structures in particular, different forms of Capitalism arise. To exemplify this, the networking (Guanxi) Capitalism of contemporary China will be presented, where social institutions known from the past were reinforced for successful development. It will be argued that especially the change, destruction and creation of family and kinship structures are key factors that determined the further development and success of the Chinese economy and the type of Capitalism arising there. In contrast to Weber, it will be argued that Capitalism does not necessarily lead to a process of destruction of traditional structures and to large-scale enterprises under rational, bureaucratic management, without leaving space for socio-cultural structures like family businesses. Flexible global production increasingly favours small business production over larger corporations. Small Chinese family firms are able to respond to rapidly changing market conditions and motivate maximum efforts for modest pay. The structure of the Chinese family has proved to be very persistent over time and able to accommodate diverse economic and political environments while maintaining its core identity. This implies that Chinese Capitalism may be an entirely new economic system, based on Guanxi and the family.
Context unification is a variant of second-order unification and also a generalization of string unification. Currently it is not known whether context unification is decidable. An expressive fragment of context unification is stratified context unification. Recently, it turned out that stratified context unification and one-step rewrite constraints are equivalent. This paper contains a description of a decision algorithm SCU for stratified context unification together with a proof of its correctness, which shows decidability of stratified context unification as well as of satisfiability of one-step rewrite constraints.
In the EU there are longstanding and ongoing pressures towards a tax that is levied at the EU level to substitute for national contributions. We discuss conditions under which such a transition can make sense, starting from what we call a "decentralization theorem of taxation" that is analogous to Oates' (1972) famous result that, in the absence of spill-over effects and economies of scale, decentralized public good provision weakly dominates central provision. We then drop assumptions that turn out to be unnecessary for this result. While spill-over effects of taxation may call for central rules for taxation, as long as spill-over effects do not depend on the intra-regional distribution of the tax burden, decentralized taxation plus tax coordination is found to be superior to a union-wide tax.
The merchant language of the Georgian Jews deserves scholarly attention for several reasons. The political and social developments of the last fifty years have caused the extinction of this very interesting form of communication, as most Georgian Jews have emigrated to Israel. In a natural interaction, the type of language described in this article can be found very rarely, if at all. Records of this communication have been preserved in various contexts and received different levels of scholarly attention. Our interest concerns the linguistic aspects as well as the classification.
In the following paper we argue that the specific merchant language of Georgian Jews belongs to the pragmatic phenomenon of “very indirect language.” The use of mostly Hebrew lexemes in Georgian conversation leads to an unfounded assumption that the speakers are equally competent in Hebrew and Georgian. It is reported that a high level of linguistic competence in Hebrew does not guarantee understanding of the Jewish merchant language. In the Georgian context, the decisive factors are membership in the professional interest group of merchants and residential membership in the Jewish community. These factors seem to be equivalent, because Jewish members of other professional groups (and those from outside the particular urban residential area) have difficulties in following the language that are similar to those of the Georgian majority. We describe the pragmatic structure of interactions conducted with the help of the merchant language and take into account the purpose of the language’s use or the intention of the speakers. Relevant linguistic examples are analysed and their sociocultural contexts explained.
This paper deals with the proposed use of sovereign credit ratings in the "Basel Accord on Capital Adequacy" (Basel II) and considers its potential effect on emerging markets financing. In a first attempt, it investigates the consequences of the planned revisions for two central aspects of international bank credit flows: the impact on capital costs and the volatility of credit supply across the risk spectrum of borrowers. The empirical findings cast doubt on the usefulness of credit ratings in determining commercial banks' capital adequacy ratios, since the standardized approach to credit risk would lead to more divergence rather than convergence between investment-grade and speculative-grade borrowers. This conclusion is based on the lateness and cyclical determination of credit rating agencies' sovereign risk assessments and the continuing incentives for short-term rather than long-term interbank lending ingrained in the proposed Basel II framework.
This paper examines optimal environmental policy when external financing is costly for firms. We introduce emission externalities and industry equilibrium in the Holmström and Tirole (1997) model of corporate finance. While a cap-and-trade system optimally governs both firms' abatement activities (internal emission margin) and industry size (external emission margin) when firms have sufficient internal funds, external financing constraints introduce a wedge between these two objectives. When a sector is financially constrained in the aggregate, the optimal cap is strictly above the Pigouvian benchmark and emission allowances should be allocated below market prices. When a sector is not financially constrained in the aggregate, a cap that is below the Pigouvian benchmark optimally shifts market share to less polluting firms and, moreover, there should be no "grandfathering" of emission allowances. With financial constraints and heterogeneity across firms or sectors, a uniform policy, such as a single cap-and-trade system, is typically not optimal.
Every competent speaker has undoubtedly, at some point, been in doubt about which of two or more almost identical competing variants of words, word forms or sentence and phrase structures is correct or appropriate and should be used (in the standard language), e.g. German "Pizzas/Pizzen/Pizze" 'pizzas', Dutch "de drie mooiste/mooiste drie stranden" 'the three most beautiful/most beautiful three beaches', Swedish "större än jag/mig" 'taller than I/me'. Such linguistic uncertainties or "cases of doubt" (cf. i.a. Klein 2003, 2009, 2018; Müller & Szczepaniak 2017; Schmitt, Szczepaniak & Vieregge 2019; Stark 2019, as well as the useful collections of data in Duden vol. 9, Taaladvies.net, Språkriktighetsboken etc.) occur systematically also among native speakers, and they do not necessarily coincide with the difficulties of second language learners. In present-day German, most grammatical uncertainties occur in the domains of inflection (nominal plural formation, genitive singular allomorphy of strong masc./neut. nouns, inflectional variation of weak masc. nouns, strong/weak adjectival inflection and comparison forms, strong/weak verb forms, perfect auxiliary selection) and word-formation (linking elements in compounds, separability of complex verbs). As for syntax, there are often doubts in connection with case choice (pseudo-partitive constructions, prepositional case government) and agreement (especially due to coordination or appositional structures). This contribution presents a contrastive approach to morphological and syntactic uncertainties in contemporary Germanic languages (mostly German, Dutch, and Swedish) in order to obtain a broader and more fine-grained typology of grammatical instabilities and their causes. As will be discussed, most doubts of competent speakers (a problem also for general linguistic theory) can be attributed to processes of language change in progress, to language or variety contact, to gaps and rule conflicts in the grammar of every language, or to psycholinguistic conditions of language processing. Our main concerns are which (kinds of) common or different critical areas there are within Germanic (and, on the other hand, in which areas there are no doubts), which of the established (cross-linguistically valid) explanatory approaches apply to which phenomena and, ultimately, whether the new data reveal further lines of explanation for the empirically observable (standard) variation.
In this paper we analyze the semantics of a higher-order functional language with concurrent threads, monadic IO and synchronizing variables as in Concurrent Haskell. To assure declarativeness of concurrent programming we extend the language by implicit, monadic, and concurrent futures. As semantic model we introduce and analyze the process calculus CHF, which represents a typed core language of Concurrent Haskell extended by concurrent futures. Evaluation in CHF is defined by a small-step reduction relation. Using contextual equivalence based on may- and should-convergence as program equivalence, we show that various transformations preserve program equivalence. We establish a context lemma easing those correctness proofs. An important result is that call-by-need and call-by-name evaluation are equivalent in CHF, since they induce the same program equivalence. Finally we show that the monad laws hold in CHF under mild restrictions on Haskell’s seq-operator, which for instance justifies the use of the do-notation.
This paper proves correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective way to perform strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences; the main measure is the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and the real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model we find that the nominal-to-real transformation is not rejected for the money relation whereas the wealth relation cannot be expressed in real terms.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering <=c is also analysed: Ω is not a least element, and <=c already implies contextual equivalence w.r.t. may-convergence.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb, which is locally bottom-avoiding. We use a small-step operational semantics in the form of a normal order reduction. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into an arbitrary program context, their termination behaviour is the same. We use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We develop different proof tools for proving correctness of program transformations. We provide a context lemma for may- as well as must-convergence which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. In contrast to other approaches, our syntax as well as semantics does not make use of a heap for sharing expressions. Instead we represent these expressions explicitly via letrec-bindings.
A call on art investments
(2010)
The art market has seen boom and bust in recent years and, despite the downturn, has received more attention from investors given the low interest rate environment following the financial crisis. However, participation has been reserved for a few investors, and the hedging of exposures remains difficult. This paper proposes to overcome these problems by introducing a call option on an art index, derived from one of the most comprehensive data sets of art market transactions. The option allows investors to optimize their exposure to art. For pricing purposes, non-tradability of the art index is acknowledged and option prices are derived in an equilibrium setting as well as by replication arguments. In the former, option prices depend on the attractiveness of gaining exposure to a previously non-traded risk. This setting further overcomes the problem of art market exposures being difficult to hedge. Results in the replication case are primarily driven by the ability to reduce residual hedging risk. Even if this is not entirely possible, the replication approach serves as a pricing benchmark for investors who are significantly exposed to art and try to hedge their art exposure by selling a derivative. JEL Classification: G11, G13, Z11
I present a new business cycle model in which decision making follows a simple mental process motivated by neuroeconomics. Decision makers first compute the value of two different options and then choose the option that offers the highest value, but with errors. The resulting model is highly tractable and intuitive. A demand function in levels replaces the traditional Euler equation. As a result, even liquid consumers can have a large marginal propensity to consume. The interest rate affects consumption through the cost of borrowing and not through intertemporal substitution. I discuss the implications for stimulus policies.
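A standard way to formalize "choose the option with the highest value, but with errors" (an illustration, not necessarily the paper's exact specification) is a logit choice rule: with option values V_1 and V_2 and a noise parameter \sigma > 0,

\[ P(\text{choose option } 1) = \frac{\exp(V_1/\sigma)}{\exp(V_1/\sigma) + \exp(V_2/\sigma)}, \]

so the higher-value option is chosen more often, and almost always as \sigma shrinks.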
In May 2008, cyclone Nargis swept across Myanmar/Burma, leaving devastation; 140,000 people were killed. The autocratically ruled country, however, rejected disaster relief as interference in its internal affairs and refused to allow medicine and food into the country. In view of this situation, the French foreign minister Kouchner urged the UN to act on the basis of the Responsibility to Protect (R2P).
This act of securitization, however, stands in contrast to the media coverage, as Gabi Schlag examines in this paper. The visual material from the disaster area in particular tells a different story. The photos in the BBC.com coverage of the topic form a visual narrative that does not suggest a need for outside help, but rather a controlled, level-headed response by local forces. This contrast points to the proverbial power of images, which pre-structure the respective conditions of possibility for action.
Consumers purchase energy in many forms. Sometimes energy goods are consumed directly, for instance, in the form of gasoline used to operate a vehicle, electricity to light a home, or natural gas to heat a home. At other times, the cost of energy is embodied in the prices of goods and services that consumers buy, say when purchasing an airline ticket or when buying online garden furniture made from plastic to be delivered by mail. Previous research has focused on quantifying the pass-through of the price of crude oil or the price of motor gasoline to U.S. inflation. Neither approach accounts for the fact that percent changes in refined product prices need not be proportionate to the percent change in the price of oil, that not all energy is derived from oil, and that the correlation of price shocks across energy markets is far from one. This paper develops a vector autoregressive model that quantifies the joint impact of shocks to several energy prices on headline and core CPI inflation. Our analysis confirms that focusing on gasoline price shocks alone will underestimate the inflationary pressures emanating from the energy sector, but not enough to overturn the conclusion that much of the observed increase in headline inflation in 2021 and 2022 reflected non-energy price shocks.
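A minimal sketch of estimating a VAR of the kind described above with statsmodels; the series are random placeholders standing in for monthly percent changes in energy prices and CPI inflation, not the paper's data or identification scheme.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder monthly data (20 years) for energy price changes and headline inflation
rng = np.random.default_rng(2)
data = pd.DataFrame(
    rng.normal(size=(240, 4)),
    columns=["gasoline", "natural_gas", "electricity", "headline_cpi"],
)

model = VAR(data)
results = model.fit(maxlags=12, ic="aic")   # lag order chosen by AIC
irf = results.irf(24)                       # impulse responses over 24 months
print(results.summary())
```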
We introduce a regularization and blocking estimator for well-conditioned high-dimensional daily covariances using high-frequency data. Using the Barndorff-Nielsen, Hansen, Lunde, and Shephard (2008a) kernel estimator, we estimate the covariance matrix block-wise and regularize it. A data-driven grouping of assets of similar trading frequency ensures the reduction of data loss due to refresh time sampling. In an extensive simulation study mimicking the empirical features of the S&P 1500 universe we show that the ’RnB’ estimator yields efficiency gains and outperforms competing kernel estimators for varying liquidity settings, noise-to-signal ratios, and dimensions. An empirical application of forecasting daily covariances of the S&P 500 index confirms the simulation results.
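As an illustration of the regularization step only, a generic Ledoit-Wolf shrinkage applied to placeholder daily returns; this is not the paper's blocking-and-regularization ('RnB') realized kernel estimator and does not use high-frequency data.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Placeholder daily returns for 50 assets over 500 days
rng = np.random.default_rng(3)
returns = rng.normal(scale=0.01, size=(500, 50))

# Shrinkage towards a well-conditioned target keeps the estimate invertible
lw = LedoitWolf().fit(returns)
cov = lw.covariance_
print(np.linalg.cond(cov))   # condition number of the regularized estimate
```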
This policy letter provides an overview of the strengths, weaknesses, risks and opportunities of the upcoming comprehensive risk assessment, a euro area-wide evaluation of bank balance sheets and business models. If carried out properly, the 2014 comprehensive assessment will lead the euro area into a new era of banking supervision. Policy makers in euro area countries are now under severe pressure to define a credible backstop framework for banks. This framework, as the author argues, needs to be a broad, quasi-European system of mutually reinforcing backstops.
We collect data on the size distribution of all U.S. corporate businesses for 100 years. We document that corporate concentration (e.g., asset share or sales share of the top 1%) has increased persistently over the past century. Rising concentration was stronger in manufacturing and mining before the 1970s, and stronger in services, retail, and wholesale after the 1970s. Furthermore, rising concentration in an industry aligns closely with investment intensity in research and development and information technology. Industries with higher increases in concentration also exhibit higher output growth. The long-run trends of rising corporate concentration indicate increasingly stronger economies of scale.
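A minimal sketch of computing a top-1% concentration share of the kind documented above, for a hypothetical cross-section of firm assets (placeholder data, not the century-long U.S. dataset):

```python
import numpy as np

def top_share(values, pct=0.01):
    """Share of the total held by the top pct fraction of firms (e.g. the top 1%)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # largest first
    k = max(1, int(np.ceil(pct * len(v))))               # firms in the top group
    return v[:k].sum() / v.sum()

# Hypothetical firm assets drawn from a heavy-tailed distribution
rng = np.random.default_rng(4)
assets = rng.pareto(1.5, size=10_000) + 1.0
print(f"Top 1% asset share: {top_share(assets):.1%}")
```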
June 4th, 2013 marks the formal launch of the third generation of the Equator Principles (EP III) and the tenth anniversary of the EPs, reason enough for evaluating the EPs initiative from an economic ethics and business ethics perspective. In particular, this essay deals with the following questions: What are the EPs and where are they going? What has been achieved so far by the EPs? What are the strengths and weaknesses of the EPs? Which necessary reform steps need to be adopted in order to further strengthen the EPs framework? Can the EPs be regarded as a role model in the field of sustainable finance and CSR? The paper is structured as follows: The first chapter defines the term EPs and introduces the keywords related to the EPs framework. The second chapter gives a brief overview of the history of the EPs. The third chapter discusses the Equator Principles Association, the governing, administering, and managing institution behind the EPs. The fourth chapter summarizes the main features and characteristics of the newly released third generation of the EPs. The fifth chapter critically evaluates EP III from an economic ethics and business ethics perspective. The paper concludes with a summary of the main findings.
The term structure of interest rates is crucial for the transmission of monetary policy to financial markets and the macroeconomy. Disentangling the impact of monetary policy on the components of interest rates, expected short rates and term premia, is essential to understanding this channel. To accomplish this, we provide a quantitative structural model with endogenous, time-varying term premia that are consistent with empirical findings. News about future policy, in contrast to unexpected policy shocks, has quantitatively significant effects on term premia along the entire term structure. This provides a plausible explanation for partly contradictory estimates in the empirical literature.
Motivated by the U.S. events of the 2000s, we address whether a too low for too long interest rate policy may generate a boom-bust cycle. We simulate anticipated and unanticipated monetary policies in state-of-the-art DSGE models and in a model with bond financing via a shadow banking system, in which the bond spread is calibrated for normal and optimistic times. Our results suggest that the U.S. boom-bust was caused by the combination of (i) too low for too long interest rates, (ii) excessive optimism and (iii) a failure of agents to anticipate the extent of the abnormally favorable conditions.