Motivated by the U.S. events of the 2000s, we address whether a too-low-for-too-long interest rate policy may generate a boom-bust cycle. We simulate anticipated and unanticipated monetary policies in state-of-the-art DSGE models and in a model with bond financing via a shadow banking system, in which the bond spread is calibrated for normal and optimistic times. Our results suggest that the U.S. boom-bust was caused by the combination of (i) too-low-for-too-long interest rates, (ii) excessive optimism, and (iii) a failure of agents to anticipate the extent of the abnormally favorable conditions.
In this paper, I introduce lumpy micro-level capital adjustment into a sticky information general equilibrium model. Lumpy adjustment arises because of inattentiveness in capital investment decisions instead of the more common assumption of non-convex adjustment costs. The model features inattentiveness as the only source of stickiness. I find that the model with lumpy investment yields business cycle dynamics which differ substantially from those of an otherwise identical model with frictionless investment and are much more consistent with the empirical evidence. These results therefore strengthen the case in favour of the relevance of microeconomic investment lumpiness for the business cycle.
This paper outlines relatively easy-to-implement reforms for the supervision of transnational banking groups in the EU that are based not primarily on legal form but on the actual risk structures of the pertinent financial institutions. The proposal also pays close attention to the economics of public administration and international relations in allocating competences among national and supranational supervisory bodies. Before detailing its own proposition, the paper looks into the relationship between sovereign debt and banking crises that drives regulatory reactions to the financial turmoil in the Euro area. These initiatives inter alia affirm effective prudential supervision as a pivotal element of crisis prevention. In order to arrive at a more informed idea of which determinants, apart from a perceived appetite for regulatory arbitrage, drive banks’ organizational choices, the paper scrutinizes the merits of either a branch or a subsidiary structure for the cross-border business of financial institutions. In doing so, it also considers the policy-makers’ perspective. The analysis shows that no one-size-fits-all organizational structure is available and concludes that banks’ choices should generally not be second-guessed, particularly because they are subject to (some) market discipline. The analysis proceeds by describing and evaluating how competences in prudential supervision are currently allocated among national and supranational supervisory authorities. To assess the findings, the appraisal adopts insights from the economics of public administration and international relations. It argues that the supervisory architecture has to be more closely aligned with bureaucrats’ incentives and that inefficient requirements to cooperate and share information should be reduced. Contrary to a widespread perception, shifting responsibility to a supranational authority cannot solve all the problems identified.
Resting on these foundations, the last part of the paper sketches an alternative solution that builds on far-reaching mutual recognition of national supervisory regimes and allocates competences in line with supervisors’ incentives and the risks inherent in cross-border banking groups.
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus, equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of the behavioral preorder, we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting the infinite trees that emerge by unfolding the letrec expressions. The second translation encodes letrec expressions using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions in both calculi. A further result of this paper is an isomorphism between the calculi mentioned, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
Power and law in enlightened absolutism : Carl Gottlieb Svarez' theoretical and practical approach
(2012)
The term Enlightened Absolutism reflects a certain tension between its two components. This tension is in a way a continuation of the dichotomy between power on the one hand and law on the other. The present paper provides an analysis of these two concepts from the perspective of Carl Gottlieb Svarez, who, in his position as a high-ranking Prussian civil servant and legal reformer, had unparalleled influence on the legislative history of the
Prussian states towards the end of the 18th century. Working side by side with Johann Heinrich Casimir von Carmer, who held the post of Prussian minister of justice from 1779 to 1798, Svarez was able to put his talent for reform and legislation to use. From 1780 to 1794 he was primarily responsible for the elaboration of the codification of Prussian private law – the “Allgemeines Landrecht für die Preußischen Staaten” of 1794. In the present paper, Svarez’ approach to the relation between law and power is analysed on two levels. Firstly, on the theoretical level, the reformer’s thoughts and reflections, as laid down in his numerous works, papers and memoranda, are discussed. Secondly, on the practical level, the paper explores to what extent he implemented his ideas in Prussian legal reality.
Rare Earth Elements (REEs) have become the new strategic economic weapon of the modern age. Used in the manufacture of products ranging from mobile phones to jet fighter engines, REEs have become the “oil” of today in terms of economic and strategic importance. Currently, 95% of the REEs mined globally are mined in China, giving China a monopoly on the industry. Deng Xiaoping foresaw the importance of REEs in 1992 when he commented: “as there is oil in the Middle East, there is rare earth in China.” Recently, China temporarily stopped exports of REEs to Japan, the EU and the US as an unofficial response to various political and economic disputes. This stoppage raised concerns about the reliability of China as a supplier of REEs. Using the theory of neo-mercantilism, this paper analyzes China’s actions in the REE market and their economic and political implications. It concludes with a look at how countries are trying to position themselves away from a dependency on China.
Japan's quest for energy security : risks and opportunities in a changing geopolitical landscape
(2011)
For much of the 20th century, economic growth was fueled by cheap oil-based energy supply. Due to increasing resource constraints, however, the political and strategic importance of oil has become a significant part of energy and foreign policy making in East and Southeast Asian countries. In Japan, the rise of China’s economic and military power is a source of considerable concern. To enhance energy security, the Japanese government has recently amended its energy regulatory framework, which reveals high political awareness of the risks resulting from the looming shortage of key resources and the competition over access to them. Understanding that national energy security is a politically and economically sensitive area with a clear international dimension affecting everyday life is critical in shaping a nation’s energy future.
It has often been asked whether today's Japan will be able to move into new and promising industries, or whether it is locked into an innovation system with an inherent inability to give birth to new industries. One argument reasons that the thick institutional complementarities among labour, innovation, and finance in its enterprises and the public sector favour industrial development in sectors of intermediate uncertainty, while making it difficult to move into areas of major uncertainty. In this paper, we present the case of the silver industry or, somewhat more prosaically, the 60+ or even 50+ industry, for which most would agree that Japan has indeed become a lead market and lead producer on the global market. For an institutional economist, the case of the silver industry is particularly interesting, because Japan's success is based on the cooperation of existing actors, the enterprise and public sectors in particular, which helped overcome the information uncertainties and asymmetries involved in the new market by relying on several established mechanisms developed well before. In that sense, Japan's silver industry presents a case of what we propose to call successful institutional path activation with the effect of innovative market creation, instead of the problematic lock-in effects usually associated with the term path dependence.
The emergence of Capitalism is said always to lead to extreme changes in the structure of a society. This view implies that Capitalism is a universal and unique concept that needs an explicit institutional framework and does not discriminate between a German or a US Capitalism. In contrast, this work argues that the ‘ideal type’ of Capitalism in a Weberian sense does not exist. It will be demonstrated that Capitalism is not a concept that shapes a uniform institutional framework within every society, constructing a specific economic system. Rather, depending on the institutional environment – family structures in particular – different forms of Capitalism arise. To exemplify this, the networking (Guanxi) Capitalism of contemporary China will be presented, where social institutions known from the past were reinforced for successful development. It will be argued that especially the change, destruction and creation of family and kinship structures are key factors that determined the further development and success of the Chinese economy and the type of Capitalism arising there. In contrast to Weber, it will be argued that Capitalism does not necessarily lead to a process of destruction of traditional structures and to large-scale enterprises under rational, bureaucratic management that leaves no space for socio-cultural structures like family businesses. Flexible global production increasingly favours small-business production over larger corporations. Small Chinese family firms are able to respond to rapidly changing market conditions and motivate maximum efforts for modest pay. The structure of the Chinese family has proved to be very persistent over time and able to accommodate diverse economic and political environments while maintaining its core identity. This implies that Chinese Capitalism may be an entirely new economic system, based on Guanxi and the family.
In contrast to the US and, more recently, Europe, Japan appears to be unsuccessful in establishing new industries. An oft-cited example is Japan's practical invisibility in the global business software sector. The literature has ascribed Japan's weakness – or conversely, America's strength – to the specific institutional settings and competences of actors within the respective national innovation system. It has additionally been argued that, unlike the American innovation system, with its proven ability to give birth to new industries, the inherent path dependency of the Japanese innovation system makes innovation and the establishment of new industries quite difficult. However, there are two notable weaknesses underlying current propositions postulating that only certain innovation systems enable the creation of new industries: first, they mistakenly confound context-specific with general empirical observations; and second, they grossly underestimate – or altogether fail to examine – the dynamics within innovation systems. This paper will show that it is precisely these dynamics – dynamics founded on the concept of path plasticity – that have enabled Japan to charge forward as a global leader in highly innovative fields: the game software sector as well as the biotechnology industry.
European scholars, colonial administrators, missionaries, bibliophiles and others were the main collectors of Malay books in the nineteenth century, both in manuscript and in printed form. Among them were many names well known in the field of Malay literature and culture, such as Raffles, Marsden, Crawfurd, Klinkert, van der Tuuk, von Dewall, Roorda, Favre, Maxwell, Overbeck, Wilkinson and Skeat, to name only a few. Their collections were often handed over to public libraries, where they form an important part of the relevant Oriental or Southeast Asian manuscript collections.
Therefore, knowledge of the intellectual culture of the Malay Peninsula and the Malay World in general has depended very much on these manuscripts and printed books, often collected by chance or in a rather unsystematic way. The collections strongly reflect the interests of their administrative or philologist collectors: court histories, genealogies of aristocratic lineages, law collections (adat-istiadat as well as undang-undang) and prose belles-lettres form the vast bulk of these collections, while Islamic religious texts and poetic forms popular in the 19th century (especially syair) are fairly underrepresented. Malay manuscripts and books located in religious institutions like mosques or pondok/pesantren schools have not been searched for; to this day there are more or less no systematic studies of these collections. Since by some estimates religious texts make up about 20% of all existing Malay manuscripts, their neglect by European scholars has led to a distorted view of the literary culture in the Malay language.
In the aftermath of the global financial crisis and great recession, many countries face substantial deficits and growing debts. In the United States, federal government outlays as a ratio to GDP rose substantially from about 19.5 percent before the crisis to over 24 percent after the crisis. In this paper we consider a fiscal consolidation strategy that brings the budget to balance by gradually reducing this spending ratio over time to the level that prevailed prior to the crisis. A crucial issue is the impact of such a consolidation strategy on the economy. We use structural macroeconomic models to estimate this impact, focusing primarily on a dynamic stochastic general equilibrium model with price and wage rigidities and adjustment costs. We separate out the impact of reductions in government purchases and transfers, and we allow for a reduction in both distortionary taxes and government debt relative to the baseline of no consolidation. According to the model simulations, GDP rises in the short run upon announcement and implementation of this fiscal consolidation strategy and remains higher than the baseline in the long run. We explore the role of the mix of expenditure cuts and tax reductions as well as gradualism in achieving this policy outcome. Finally, we conduct sensitivity studies regarding the type of model used and its parameterization.
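The consolidation experiment can be made concrete with a toy phase-out path: each year the outlays-to-GDP ratio moves a fixed fraction of the remaining distance from the post-crisis level back to the pre-crisis level. The 25% adjustment speed and 12-year horizon below are illustrative assumptions, not the paper's calibration.

```python
def consolidation_path(start=24.0, target=19.5, speed=0.25, years=12):
    """Spending ratio (% of GDP) under a geometric phase-out:
    each year, close a fixed fraction of the remaining gap to target."""
    path = [start]
    for _ in range(years):
        path.append(path[-1] + speed * (target - path[-1]))
    return path

path = consolidation_path()
print(path[1])    # ratio after the first year of adjustment
print(path[-1])   # approaches the 19.5% pre-crisis level
```

A smaller `speed` makes the consolidation more gradual, which is the dimension of gradualism the paper explores.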
The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as a laboratory, this paper explores the design of robust policy guides that aim to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model database to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
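As a minimal sketch, a first-difference rule of the kind described above adjusts the lagged policy rate in response to the inflation gap and the output growth gap with a common coefficient; the coefficient of 0.5 below is illustrative, not the paper's estimate.

```python
def difference_rule(i_lag, inflation, inflation_target,
                    output_growth, potential_growth, coeff=0.5):
    """First-difference interest rate rule: change the policy rate from
    its lagged value by a common response to the inflation gap and the
    gap between output growth and potential growth."""
    return (i_lag
            + coeff * (inflation - inflation_target)
            + coeff * (output_growth - potential_growth))

# Inflation 1 point above target, growth 0.5 points below potential:
print(difference_rule(2.0, 3.0, 2.0, 1.5, 2.0))  # 2.25
```

Because such a rule responds to growth rates rather than to the level of the output gap, it needs no real-time gap estimate, which helps explain its robustness to the mismeasurement problem the paper documents.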
We argue that the U.S. personal saving rate’s long stability (1960s–1980s), subsequent steady decline (1980s–2007), and recent substantial rise (2008–2011) can be interpreted using a parsimonious ‘buffer stock’ model of consumption in the presence of labor income uncertainty and credit constraints. Saving in the model is affected by the gap between ‘target’ and actual wealth, with the target determined by credit conditions and uncertainty. An estimated structural version of the model suggests that increased credit availability accounts for most of the long-term saving decline, while fluctuations in wealth and uncertainty capture the bulk of the business-cycle variation.
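The buffer-stock mechanism can be illustrated with a stylized rule in which saving responds to the gap between target and actual wealth, and the target falls with credit availability and rises with uncertainty. Every functional form and number below is an illustrative assumption, not the paper's estimated structural model.

```python
def target_wealth(credit, uncertainty):
    """Hypothetical target wealth (in years of income): looser credit
    lowers the target, more income uncertainty raises it."""
    return 4.0 - 2.0 * credit + 3.0 * uncertainty

def saving_rate(wealth, credit, uncertainty, baseline=0.05, speed=0.02):
    """Saving rate rises when actual wealth is below its target."""
    return baseline + speed * (target_wealth(credit, uncertainty) - wealth)

# Easier credit lowers target wealth and hence the saving rate:
print(saving_rate(3.0, credit=0.2, uncertainty=0.1))
print(saving_rate(3.0, credit=0.8, uncertainty=0.1))
```

In this reduced form, the long saving decline maps to a rising `credit` index, while a crisis-driven jump in `uncertainty` pushes the target, and hence saving, back up.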
This paper investigates whether preference interactions can explain why risk preferences change over time and across contexts. We conduct an experiment in which subjects accept or reject gambles involving real money gains and losses. We introduce within-subject variation by alternating subjectively liked and disliked music in the background. We find that favourite music increases risk-taking, and disliked music suppresses risk-taking, compared to a baseline of no music. Several theories in psychology propose mechanisms by which mood affects risk-taking, but none of them fully explains our results. The results are, however, consistent with preference complementarities that extend to risk preference.
Remarks on deixis
(1992)
The prevailing conception of deixis is oriented to the idea of 'concrete' physical and perceptual characteristics of the situation of speech. Signs standardly adduced as typical deictics are I, you, here, now, this, that. I and you are defined as meaning "the person producing the utterance in question" and "the person spoken to", here and now as meaning "where the speaker is at utterance time" and "at the moment the utterance is made" (also, "at the place/time of the speech exchange"); similarly, the meanings of this and that are as a rule defined via proximity to speaker's physical location. The elements used in such definitions form the conceptual framework of most of the general characterisations of deixis in the literature. [...] There is much in the literature, of course, that goes far beyond this framework. A great variety of elements, mostly with very abstract meanings, have been found to share deictic characteristics although they do not fit into the personnel-place-time-of-utterance schema. The adequacy of that schema is also called into question by many observations to the effect that the use of such standard deictics as here, now, this, that cannot really be accounted for on its basis, and by the far-reaching possibilities of orienting deictics to reference points in situations other than the situation of speech, to 'deictic centers' other than the speaker. [...] Analyses along the lines of the standard conception regularly acknowledge the existence of deviations from the assumed basic meanings. One traditional solution attributes them to speaker's "subjectivity", or to differences between "physical" and "psychological" space or time; in a similar vein, metaphorical extensions may be said to be at play, or a distinction between prototypical and non-prototypical meanings invoked.
Quite apart from the question of the relative merits of these explanatory principles, which I do not wish to discuss here, the problem with all such accounts is that the definitions of the assumed basic meanings themselves are founded on axiom rather than analysis of situated use. The logical alternative, of course, is to set out for more abstract and comprehensive meaning definitions from the start. In fact, a number of recent, discourse-oriented, treatments of the demonstratives proceed this way; they view those elements as processing instructions rather than signs with inherently spatial denotation (Isard 1975, Hawkins 1978, Kirsner 1979, Linde 1979, Ehlich 1982).
Oppositeness, i.e. the relation between opposites or contraries or contradictories, has a fundamental role in human cognition. In the various domains of intellectual and psychological activity we find ordering schemas that are based, in one way or another, on the cognitive figure of oppositeness. It is therefore not surprising that the figure and its corresponding ordering schemas show their reflexes in the languages of the world. [...] We shall be dealing with oppositeness in the sense that a linguistically untrained native speaker, when asked what would be the opposite of 'long', can come up with some such answer as 'short', and likewise intuitively grasp the relation between 'man' and 'woman', 'come' and 'go', 'up' and 'down', etc. Thinking that much of the vocabulary of a language is organized in such opposite pairs, we must recognize that this is an important faculty, and we are curious to know how this is done, what are the underlying conceptual-cognitive structures and processes, and how they are encoded in the languages of the world. We shall leave out of consideration such oppositions as singular vs. plural, present vs. past, voiced vs. unvoiced, oppositions that the linguist states by means of a metalanguage which is itself derived from a concept of oppositeness as manifested by the examples which I gave earlier. Our approach will connect with earlier versions of the UNITYP framework. However, as a novel feature, and, hopefully, as an improvement, we shall apply some sort of a division of labor. We shall first try to reconstruct the conceptual-cognitive content of oppositeness and to keep it separate from the discussion of its reflexes in the individual languages. We shall find that a dimensional ordering of content in PARAMETERS and a continuum of TECHNIQUES is possible already on the conceptual-cognitive level. In order to keep it distinct from the level of linguistic encoding we shall use a separate terminology, graphically marked by capital letters.
Why should we engage in language universals research and language typology? What do we want to explain? It is a fact that languages differ significantly and considerably; indeed, no one would deny that they have something in common – how else could they be labelled 'language'? There is obviously unity among them, no matter how vaguely felt and for whatever reasons: scientific, practical, moral, etc. Neither diversity per se nor unity per se is what we want to explain. There is no reason whatsoever to consider either one of them as primary and the other as derived. What we do want to explain is "equivalence in difference" – cf. our motto – which manifests itself, among other things, in the translatability from one language to another, the learnability of any language, and language change – all of which presuppose that speakers intuitively find their way from diversity to unity. This is a highly salient property which deserves to be brought into our consciousness. Generally, then, our basic goal is to explain the way in which language-specific facts are connected with a unitarian concept of language – "die Sprache" – "le langage".
The Stanford Project on Language Universals began its activities in October 1967 and brought them to an end in August 1976. Its directors were Joseph H. Greenberg and Charles A. Ferguson. The Cologne Project on Language Universals and Typology [with particular reference to functional aspects], abbreviated UNITYP, had its early beginnings in 1972, but deployed its full activities from 1976 onwards and is still operating. This writer, who is the principal investigator, had the privilege of collaborating with the Stanford Project during spring of 1976. […] One of the leading Greenbergian ideas, that of implicational generalizations, has been integrated as a fundamental principle into the construction of continua and universal dimensions as proposed by UNITYP. It is hoped that the following considerations on numeral systems will bear witness to this. They would be unthinkable without Greenberg’s pioneering work on "Generalizations about numeral systems" (Greenberg 1978: 249 ff., henceforth referred to as Greenberg, NS). Further work on this domain and on other comparable domains almost inevitably leads one to the view that generalizations of the Greenberg type have a functional significance and that a dimensional framework is apt to bring this to the fore. This is the view of linguistic behaviour as purposeful, and of language as a problem-solving device. The problem consists in the linguistic representation of cognitive-conceptual ideas. The solution is represented by the corresponding linguistic structures in their diversity, and the task of the linguist consists in reconstructing the program and subprograms underlying the process of problem-solving. It is claimed that the construct of continua and universal dimensions makes these programs intelligible.