We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus’ semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as nondeterminism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe’s proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for observational semantics of deterministic and non-deterministic higher-order calculi that use a small-step reduction semantics. The generic (sharing) context lemmas are provided for may- as well as two variants of must-convergence, which hold in a broad class of extended process- and extended lambda calculi, if the calculi satisfy certain natural conditions. As a guideline, the proofs of the context lemmas are valid in call-by-need calculi, in call-by-value calculi if substitution is restricted to variable-by-variable and in process calculi like variants of the π-calculus. For calculi employing beta-reduction using a call-by-name or call-by-value strategy or similar reduction rules, some iu-variants of ciu-theorems are obtained from our context lemmas. Our results reestablish several context lemmas already proved in the literature, and also provide some new context lemmas as well as some new variants of the ciu-theorem. To make the results widely applicable, we use a higher-order abstract syntax that allows untyped calculi as well as certain simple typing schemes. The approach may lead to a unifying view of higher-order calculi, reduction, and observational equality.
This paper proves several generic variants of context lemmas and thus contributes to improving the tools to develop observational semantics that is based on a reduction semantics for a language. The context lemmas are provided for may- as well as two variants of must-convergence and a wide class of extended lambda calculi, which satisfy certain abstract conditions. The calculi must have a form of node sharing, e.g. plain beta reduction is not permitted. There are two variants: weakly sharing calculi, where beta-reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi, where beta-reduction is completely replaced by a sharing variant. The calculi must obey three abstract assumptions, which are in general easily recognizable given the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. The scope of the generic context lemmas comprises not only call-by-need calculi, but also call-by-value calculi with a form of built-in sharing. Investigations into other, new variants of extended lambda calculi with sharing, where the language or the reduction rules and/or strategy varies, will be simplified by our result, since specific context lemmas are immediately derivable from the generic context lemma, provided our abstract conditions are met.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering ≤c is also analysed: Ω is not a least element, and ≤c already implies contextual equivalence w.r.t. may-convergence.
We develop a proof method to show that in a (deterministic) lambda calculus with letrec and equipped with contextual equivalence the call-by-name and the call-by-need evaluation are equivalent, and also that the unrestricted copy-operation is correct. Given a let-binding x = t, the copy-operation replaces an occurrence of the variable x by the expression t, regardless of the form of t. This gives an answer to unresolved problems in several papers, it adds a strong method to the tool set for reasoning about contextual equivalence in higher-order calculi with letrec, and it enables a class of transformations that can be used as optimizations. The method can be used in different kind of lambda calculi with cyclic sharing. Probably it can also be used in non-deterministic lambda calculi if the variable x is “deterministic”, i.e., has no interference with non-deterministic executions. The main technical idea is to use a restricted variant of the infinitary lambda-calculus, whose objects are the expressions that are unrolled w.r.t. let, to define the infinite developments as a reduction calculus on the infinite trees and showing a standardization theorem.
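The contrast between the two evaluation strategies, and the effect of the copy-operation, can be made concrete outside the calculus. The following Python sketch (my own illustration, not the paper's formalism) models call-by-need as a memoized thunk and call-by-name as re-evaluation at every occurrence, using a counter as an observational probe: both strategies observe the same value, only the evaluation effort differs.

```python
# Illustration of copy vs. sharing: a let-binding x = t is either shared
# (call-by-need: t evaluated at most once) or copied to every occurrence
# of x (call-by-name: t re-evaluated at each use). The observable result
# is identical, which is what contextual equivalence asserts.

calls = {"n": 0}

def t():                       # the bound expression, with a side counter
    calls["n"] += 1
    return 21

def call_by_need():
    calls["n"] = 0
    memo = []                  # shared thunk: evaluated at most once
    def x():
        if not memo:
            memo.append(t())
        return memo[0]
    return x() + x(), calls["n"]     # value 42, t evaluated once

def call_by_name():
    calls["n"] = 0
    return t() + t(), calls["n"]     # value 42, t evaluated twice
```

Both calls return the value 42; they differ only in how often `t` runs, which no program context can observe in a pure setting.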
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules; and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which enables the proof tool of simulation. Unfortunately, a direct proof appears to be impossible.
The correctness proof is by defining another calculus L comprising the complex variants of copy, case-reduction and seq-reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and allows a proof of correctness of transformations, and that the simple calculus LS, the calculus L, and the calculus LA all have an equivalent contextual preorder.
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
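For intuition, a future is a value whose computation starts concurrently and is blocked on only when demanded. A rough Python analogue (threads instead of Haskell's unsafeInterleaveIO; an illustration only, not the CHF calculus) is:

```python
# A minimal future: the computation runs in a separate thread as soon as
# the future is created; force() blocks until the result is available.

import threading

class Future:
    def __init__(self, compute):
        self._result = None
        self._done = threading.Event()
        def run():
            self._result = compute()
            self._done.set()
        threading.Thread(target=run, daemon=True).start()

    def force(self):           # demanding the future's value
        self._done.wait()
        return self._result

f = Future(lambda: sum(range(10)))
# f.force() blocks until the thread finishes, then returns 45
```

In CHF the demand is implicit in evaluation rather than an explicit `force`, which is exactly what makes the conservativity result non-trivial.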
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees, which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
Panel Sample Selection Models
The empirical evidence currently available in the literature regarding the effects of a country's IMF program participation on its output growth is rather inconclusive. In this paper we propose and estimate a panel data sample selection model featuring state dependence. As in this model the output growth effects of program participation can be conditional on the realization of a state variable (conditional pooling), our framework may reconcile previous empirical evidence based on models without state-dependent effects. We find that the effects of IMF program participation on output growth vary systematically with an index reflecting a country's institutional record, and that output growth effects of program participation are significantly positive only if the program participation is coupled with sufficient improvement of the institutional record.
Although oil price shocks have long been viewed as one of the leading candidates for explaining U.S. recessions, surprisingly little is known about the extent to which oil price shocks explain recessions. We provide the first formal analysis of this question with special attention to the possible role of net oil price increases in amplifying the transmission of oil price shocks. We quantify the conditional recessionary effect of oil price shocks in the net oil price increase model for all episodes of net oil price increases since the mid-1970s. Compared to the linear model, the cumulative effect of oil price shocks over the course of the next two years is much larger in the net oil price increase model. For example, oil price shocks explain a 3% cumulative reduction in U.S. real GDP in the late 1970s and early 1980s and a 5% cumulative reduction during the financial crisis. An obvious concern is that some of these estimates are an artifact of net oil price increases being correlated with other variables that explain recessions. We show that the explanatory power of oil price shocks largely persists even after augmenting the nonlinear model with a measure of credit supply conditions, of the monetary policy stance and of consumer confidence. There is evidence, however, that the conditional fit of the net oil price increase model is worse on average than the fit of the corresponding linear model, suggesting much smaller cumulative effects of oil price shocks, of at most 1%, for these episodes.
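For reference, the net oil price increase underlying such nonlinear models is, roughly, the excess of the current price over its maximum during the preceding k periods, floored at zero (the Hamilton-style censored measure; k typically covers one to three years of monthly data). A minimal sketch with made-up numbers:

```python
# Net price increase: max(0, p_t - max(p_{t-k}, ..., p_{t-1})).
# The prices below are invented solely to illustrate the transformation.

def net_increase(prices, k=3):
    out = []
    for t, p in enumerate(prices):
        window = prices[max(0, t - k):t]     # the k preceding observations
        ref = max(window) if window else p   # no history: increase is zero
        out.append(max(0.0, p - ref))
    return out

prices = [10.0, 12.0, 11.0, 13.0, 9.0, 14.0]
# net_increase(prices, k=3) -> [0.0, 2.0, 0.0, 1.0, 0.0, 1.0]
```

The censoring at zero is what makes the model nonlinear: price declines and recoveries below the recent maximum carry no shock.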
Efforts to control bank risk address the wrong problem in the wrong way. They presume that the financial crisis was caused by CEOs who failed to supervise risk-taking employees. The responses focus on executive pay, believing that executives will bring non-executives into line—using incentives to manage risk-taking—once their own pay is regulated. What they overlook is the effect on non-executive pay of the competition for talent. Even if executive pay is regulated, and executives act in the bank’s best interests, they will still be trapped into providing incentives that encourage risk-taking by non-executives due to the negative externality that arises from that competition. Greater risk-taking can increase short-term profits and, in turn, the amount a non-executive receives, potentially at the expense of long-term bank value. Non-executives, therefore, have an incentive to incur significant risk upfront so long as they can depart for a new employer before any losses materialize. The result is an upward spiral in compensation—reducing an executive’s ability to set non-executive pay and the ability of any one bank to adjust compensation to reflect risk-taking and long-term outcomes. New regulation must address the tension between compensation and competition. Regulators should take account of the effect of competition on market-wide levels of pay, including by non-banks who compete for talent. The ability of non-executives to jump from a bank employer to another financial firm should also be limited. In addition, banks should be required to include a long-term equity component in non-executive pay, with subsequent employers being restricted from compensating a new employee for any losses she incurs related to her prior work.
We examine trust and trustworthiness of individuals with varying professional preferences and experiences. Our subjects study business and economics in Frankfurt, the financial center of Germany and continental Europe. In the trust game, subjects with a high interest in working in the financial industry return 25 percent less than subjects with a low interest. We find no evidence that the extent of professional experience in the financial industry has a negative impact on trustworthiness. We also do not find any evidence that the financial industry screens out less trustworthy individuals in the hiring process. In a prediction game that is strategically equivalent to the trust game, the amount sent by first-movers was significantly smaller when the second-mover indicated a high interest in working in finance. These results suggest that the financial industry attracts less trustworthy individuals, which may contribute to the current lack of trust in its employees.
In the wake of the Global Financial Crisis that started in 2007, policymakers were forced to respond quickly and forcefully to a recession caused not by short-term factors, but rather by an over-accumulation of debt by sovereigns, banks, and households: a so-called “balance sheet recession.” Though the nature of the crisis was understood relatively early on, policy prescriptions for how to deal with its consequences have continued to diverge. This paper gives a short overview of the prescriptions, the remaining challenges and key lessons for monetary policy.
In a contribution prepared for the Athens Symposium on “Banking Union, Monetary Policy and Economic Growth”, Otmar Issing describes forward guidance by central banks as the culmination of the idea of guiding expectations by pure communication. In practice, he argues, forward guidance has proved a misguided idea. What is presented as state-of-the-art monetary policy is an example of pretence of knowledge. Forward guidance tries to give the impression of a kind of rule-based monetary policy. De facto, however, it is an overambitious discretionary approach which, to be successful, would need much more (or rather better) information than is currently available. In Issing's view, communication must be clear and honest about the limits of monetary policy in a world of uncertainty.
We investigate the relationship between anchoring and the emergence of bubbles in experimental asset markets. We show that setting a visual anchor at the fundamental value (FV) in the first period only is sufficient to eliminate or to significantly reduce bubbles in laboratory asset markets. If no FV-anchor is set, bubble-crash patterns emerge. Our results indicate that bubbles in laboratory environments are primarily sparked in the first period. If prices are initiated around the FV, they stay close to the FV over the entire trading horizon. Our insights can be related to initial public offerings and the interaction between prices set on pre-opening markets and subsequent intra-day price dynamics.
The observed hump-shaped life-cycle pattern in individuals' consumption cannot be explained by the classical consumption-savings model. We explicitly solve a model with utility of both consumption and leisure and with educational decisions affecting future wages. We show that optimal consumption is hump-shaped and determine the peak age. The hump results from consumption and leisure being substitutes and from the implicit price of leisure being decreasing over time; more leisure means less education, which lowers future wages, and the present value of foregone wages decreases with age. Consumption is hump-shaped whether the wage is hump-shaped or increasing over life.
This paper provides a systematic analysis of individual attitudes towards ambiguity, based on laboratory experiments. The design of the analysis allows us to capture individual behavior across various levels of ambiguity, ranging from low to high. Attitudes towards risk and attitudes towards ambiguity are disentangled, providing pure measures of ambiguity aversion. Ambiguity aversion is captured in several ways, i.e. as a discount factor net of a risk premium, and as an estimated parameter in a generalized utility function. We find that ambiguity aversion varies across individuals, and with the level of ambiguity, being most prominent for intermediate levels. Around one third of subjects show no aversion, one third show maximum aversion, and one third show intermediate levels of ambiguity aversion, while there is almost no ambiguity seeking. While most theoretical work on ambiguity builds on maxmin expected utility (MEU), our results provide evidence that MEU does not adequately capture individual attitudes towards ambiguity for the majority of individuals. Instead, our results support models that allow for intermediate levels of ambiguity aversion. Moreover, we find risk aversion to be statistically unrelated to ambiguity aversion on average. Taken together, the results support the view that ambiguity is an important and distinct argument in decision making under uncertainty.
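As a reminder of the benchmark these results speak to: under maxmin expected utility, an ambiguous act is valued by its worst expected utility over a set of priors. A toy sketch with made-up numbers and an assumed square-root utility:

```python
# Maxmin expected utility: evaluate an act under every prior in the set
# and take the minimum. All numbers and the utility function are
# illustrative assumptions, not taken from the paper.

def expected_utility(prior, payoffs, u=lambda x: x ** 0.5):
    return sum(p * u(x) for p, x in zip(prior, payoffs))

def meu(priors, payoffs):
    return min(expected_utility(p, payoffs) for p in priors)

payoffs = [0.0, 100.0]                            # bad / good outcome
priors = [[0.3, 0.7], [0.5, 0.5], [0.7, 0.3]]     # ambiguity: a set of priors
# meu(priors, payoffs) is driven by the worst prior [0.7, 0.3],
# giving roughly 0.3 * 10 = 3.0
```

Intermediate ambiguity attitudes, which the data favor, would instead mix the worst and best priors rather than putting all weight on the minimum.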
Motivated by the question whether sound and expressive applicative similarities for program calculi with should-convergence exist, this paper investigates expressive applicative similarities for the untyped call-by-value lambda-calculus extended with McCarthy's ambiguous choice operator amb. Soundness of the applicative similarities w.r.t. contextual equivalence based on may- and should-convergence is proved by adapting Howe's method to should-convergence. As usual for nondeterministic calculi, similarity is not complete w.r.t. contextual equivalence, which requires a rather complex counterexample as a witness. Also the call-by-value lambda-calculus with the weaker nondeterministic construct erratic choice is analyzed and sound applicative similarities are provided. This justifies the expectation that also for more expressive and call-by-need higher-order calculi there are sound and powerful similarities for should-convergence.
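The two observations can be made concrete on a toy syntax: with expressions built from values, a diverging constant, and binary erratic choice, an expression may-converges if some reduction path reaches a value and should-converges if every path does. A small Python sketch (my own toy language, not the paper's calculus):

```python
# Toy nondeterministic expressions: Val (a result), Bot (divergence),
# Choice (erratic binary choice). May-convergence asks for SOME
# converging path, should-convergence for ALL paths converging.

from dataclasses import dataclass

@dataclass(frozen=True)
class Val:
    v: int

@dataclass(frozen=True)
class Bot:
    pass

@dataclass(frozen=True)
class Choice:
    left: object
    right: object

def may(e):
    """Some evaluation path reaches a value."""
    if isinstance(e, Val):
        return True
    if isinstance(e, Bot):
        return False
    return may(e.left) or may(e.right)

def should(e):
    """Every evaluation path reaches a value."""
    if isinstance(e, Val):
        return True
    if isinstance(e, Bot):
        return False
    return should(e.left) and should(e.right)

e = Choice(Bot(), Val(1))
# may(e) is True, should(e) is False: one branch diverges
```

Note that amb, being bottom-avoiding, would should-converge on `Choice(Bot(), Val(1))`; the sketch models the weaker erratic choice, which is exactly the distinction the abstract draws between the two constructs.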
The pi-calculus is a well-analyzed model for mobile processes and mobile computations.
While many other process and lambda calculi that serve as core languages of higher-order concurrent and/or functional programming languages use a contextual semantics observing the termination behavior of programs in all program contexts, the traditional program equivalences in the pi-calculus are bisimulations and barbed testing equivalences, which observe the communication capabilities of processes under reduction and in contexts.
There is a distance between these two approaches to program equivalence which makes it hard to compare the pi-calculus with other languages. In this paper we contribute to bridging this gap by investigating a contextual semantics of the synchronous pi-calculus with replication and without sums.
To transfer contextual equivalence to the pi-calculus we add a process Stop as constant which indicates success and is used as the base to define and analyze the contextual equivalence which observes may- and should-convergence of processes.
We show as a main result that contextual equivalence in the pi-calculus with Stop conservatively extends barbed testing equivalence in the (Stop-free) pi-calculus. This implies that results on contextual equivalence can be directly transferred to the (Stop-free) pi-calculus with barbed testing equivalence.
We analyze the contextual ordering, prove some nontrivial process equivalences, and provide proof tools for showing contextual equivalences. Among them are a context lemma, and new notions of sound applicative similarities for may- and should-convergence.
Motivated by our experience in analyzing properties of translations between programming languages with observational semantics, this paper clarifies the notions, the relevant questions, and the methods, constructs a general framework, and provides several tools for proving various correctness properties of translations like adequacy and full abstractness. The presented framework can directly be applied to the observational equivalences derived from the operational semantics of programming calculi, and also to other situations, and thus has a wide range of applications.
Our motivation is the question whether the lazy lambda calculus, a pure lambda calculus with the leftmost outermost rewriting strategy, considered under observational semantics, or extensions thereof, are an adequate model for semantic equivalences in real-world purely functional programming languages, in particular for a pure core language of Haskell. We explore several extensions of the lazy lambda calculus: addition of a seq-operator, addition of data constructors and case-expressions, and their combination, focusing on conservativity of these extensions. In addition to untyped calculi, we study their monomorphically and polymorphically typed versions. For most of the extensions we obtain non-conservativity which we prove by providing counterexamples. However, we prove conservativity of the extension by data constructors and case in the monomorphically typed scenario.
We study consumption-portfolio and asset pricing frameworks with recursive preferences and unspanned risk. We show that in both cases, portfolio choice and asset pricing, the value function of the investor/representative agent can be characterized by a specific semilinear partial differential equation. To date, the solution to this equation has mostly been approximated by Campbell-Shiller techniques, without addressing general issues of existence and uniqueness. We develop a novel approach that rigorously constructs the solution by a fixed point argument. We prove that under regularity conditions a solution exists and establish a fast and accurate numerical method to solve consumption-portfolio and asset pricing problems with recursive preferences and unspanned risk. Our setting is not restricted to affine asset price dynamics. Numerical examples illustrate our approach.
We study self- and cross-excitation of shocks in the Eurozone sovereign CDS market. We adopt a multivariate setting with credit default intensities driven by mutually exciting jump processes, to capture the salient features observed in the data, in particular, the clustering of high default probabilities both in time (over days) and in space (across countries). The feedback between jump events and the intensity of these jumps is the key element of the model. We derive closed-form formulae for CDS prices, and estimate the model by matching theoretical prices to their empirical counterparts. We find evidence of self-excitation and asymmetric cross-excitation. Using impulse-response analysis, we assess the impact of shocks and a potential policy intervention not just on a single country under scrutiny but also, through the effect on cross-excitation risk which generates systemic sovereign risk, on other interconnected countries.
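The mutually exciting intensity dynamics can be illustrated by simulation. The following Python sketch uses Ogata's thinning algorithm with made-up parameters (an illustration of the mechanism, not the paper's estimated model): every event raises the jump intensities of both components, and the excitation decays exponentially.

```python
# Bivariate mutually exciting (Hawkes) process on [0, T] via Ogata's
# thinning. mu[i]: baseline intensity of component i; alpha[i][j]:
# excitation of component i by an event of component j; beta: decay rate.
# All parameter values below are invented for illustration.

import math
import random

def simulate_hawkes(T, mu, alpha, beta, seed=0):
    rng = random.Random(seed)
    events = [[], []]

    def intensities(t):
        return [mu[i] + sum(alpha[i][j] * math.exp(-beta * (t - s))
                            for j in range(2) for s in events[j])
                for i in range(2)]

    t = 0.0
    while True:
        lam_bar = sum(intensities(t))        # valid bound: intensity decays
        t += rng.expovariate(lam_bar)        # next candidate event time
        if t > T:
            break
        lam = intensities(t)
        if rng.random() * lam_bar <= sum(lam):       # accept candidate
            # attribute the event to a component proportionally
            if rng.random() * sum(lam) < lam[0]:
                events[0].append(t)
            else:
                events[1].append(t)
    return events

ev = simulate_hawkes(T=50.0, mu=[0.2, 0.2],
                     alpha=[[0.5, 0.3], [0.1, 0.5]], beta=1.0)
```

With these parameters the branching matrix alpha/beta has spectral radius below one, so the process is stationary; injecting an extra jump into one component and re-simulating gives a crude analogue of the impulse-response exercise the abstract describes.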
Exit strategies
(2014)
We study alternative scenarios for exiting the post-crisis fiscal and monetary accommodation using a macromodel where banks choose their capital structure and are subject to runs. Under a Taylor rule, the post-crisis interest rate hits the zero lower bound (ZLB) and remains there for several years. In that condition, pre-announced and fast fiscal consolidations dominate alternative strategies incorporating various degrees of gradualism and surprise, based on output and inflation performance and bank stability. We also examine an alternative monetary strategy in which the interest rate does not reach the ZLB; the benefits from fiscal consolidation persist, but are more nuanced.
We study the behavioral underpinnings of adopting cash versus electronic payments in retail transactions. A novel theoretical and experimental framework is developed to primarily assess the impact of sellers’ service fees and buyers’ rewards from using electronic payments. Buyers and sellers face a coordination problem, independently choosing a payment method before trading. In the experiment, sellers readily adopt electronic payments but buyers do not. Eliminating service fees or introducing rewards significantly boosts the adoption of electronic payments. Hence, buyers’ incentives play a pivotal role in the diffusion of electronic payments but monetary incentives cannot fully explain their adoption choices. Findings from this experiment complement empirical findings based on surveys and field data.
This note proposes a new set-up for the fund backing the Single Resolution Mechanism (SRM). The proposed fund is a Multi-Tier Resolution Fund (MTRF), restricting the joint and several supranational liability to a limited range of losses, bounded by national liability at the upper and the lower end. The layers are, in ascending order: a national fund (first losses), a European fund (second losses), the national budget (third losses), and the ESM (fourth losses, as a backstop for sovereigns). The system works like a reinsurance scheme, providing clear limits to European-level joint liability and thereby containing moral hazard. At the same time, it allows for some degree of risk sharing, which is important for financial stability if shocks to the financial system are exogenous (e.g., of a supranational macroeconomic nature). The text has four parts. Section A describes the operation of the Multi-Tier Resolution Fund, assuming the fund capital to be fully paid in ("Steady State"). Section B deals with the build-up phase of the fund capital ("Build-up"). Section C discusses how the proposal deals with the apparent incentive conflicts. The final Section D summarizes open questions which need further thought ("Open Questions").
Securities transaction tax in France: impact on market quality and inter-market price coordination
(2014)
The general concept of a securities transaction tax (STT) is controversial among academics and politicians. While theoretical research is quite advanced, empirical guidance in a fragmented market context is still scarce. Negative effects on market liquidity and market efficiency are theoretically predicted, but have not yet been empirically tested. In light of the agreement of eleven EU member states to implement an STT, this study aims to give a comprehensive overview of the effects of the STT introduced in France in 2012 on liquidity demand, liquidity supply, volatility and inter-market information transmission. The results show that the STT has led to a decline in liquidity demand, has had a detrimental effect on liquidity supply and has negatively influenced the efficiency of inter-market information transmission. However, no effect on volatility can be observed.
In the United States, on April 1, 2014, the set of rules commonly known as the "Volcker Rule", prohibiting proprietary trading activities in banks, became effective. The implementation of this rule took more than three years, as “proprietary trading” is an inherently vague concept, overlapping strongly with genuinely economically useful activities such as market-making. As a result, the final Rule is a complex and lengthy combination of prohibitions and exemptions.
In January 2014, the European Commission put forward its proposal on banking structural reform. The proposal includes a Volcker-like provision, prohibiting large, systemically relevant financial institutions from engaging in proprietary trading or hedge-fund-related business. This paper draws lessons from the US implementation process for the Volcker Rule for the European regulatory process.
Financial innovation is, as usual, faster than regulation. New forms of speculation and intermediation are rapidly emerging. Largely as a result of the evaporation of trust in financial intermediation, so-called peer-to-peer intermediation is playing an exponentially growing role. The most prominent example at the moment is Bitcoin.
If one expects that shocks in these markets could also destabilize traditional financial markets, then it will be necessary to extend regulatory measures to these innovations as well.
This policy letter provides an overview of the strengths, weaknesses, risks and opportunities of the upcoming comprehensive risk assessment, a euro area-wide evaluation of bank balance sheets and business models. If carried out properly, the 2014 comprehensive assessment will lead the euro area into a new era of banking supervision. Policy makers in euro area countries are now under severe pressure to define a credible backstop framework for banks. This framework, as the author argues, needs to be a broad, quasi-European system of mutually reinforcing backstops.
This article discusses the recent proposal for debt restructuring in the euro zone by Pierre Paris and Charles Wyplosz. It argues that the plan cannot realize the promised debt relief without producing moral hazard. Ester Faia revisits the Redemption Fund proposed in November 2011 by the German Council of Economic Experts and argues that this plan, to date, still remains the most promising path towards successful debt restructuring in Europe.
On November 8, 2013, several members of the British House of Lords’ Subcommittee A conducted a hearing at the ECB in Frankfurt, Germany, on “Genuine Economic and Monetary Union and its Implications for the UK”. Professors Otmar Issing and Jan Pieter Krahnen were called as expert witnesses.
The testimony began with a general discussion of the elements considered necessary for a functioning internal market. Do economic union and monetary union require a fiscal union or even a political union, beyond the elements of the banking union currently being prepared? In this context, the critique of the German current account surplus and the international expectations that Germany stimulate internal demand to support growth in crisis countries were also discussed.
With regard to the monetary union, the members of the subcommittee asked for an assessment of how European nations and the banking industry would have fared in the banking crisis that followed the Lehman collapse had there not been a common currency. Given the important role that the ECB has played in the course of crisis management, the members further asked for an evaluation of the ECB's OMT program and whether the monetary union needs common debt instruments that would allow the ECB to buy EU liabilities, comparable to the Fed buying US Treasury bonds. Finally, the ECB's dual role in monetary policy and banking supervision was an issue touched on by several questions.
In many cases, the dire situation of public finances calls into question the very soundness of sovereigns and prompts corrective actions with far-reaching consequences. In this context, European authorities responded with several measures on different fronts, for instance by passing the "Fiscal Compact", which entered into force on January 1, 2013. Of critical importance in this framework is the assessment of a country's situation by way of statistical measures, in order to take corrective actions when called for according to the letter of the law. If these statistics are not correct, there is a risk of imposing draconian measures on countries that do not really need them.
Before the 2007–09 crisis, standard risk measurement methods substantially underestimated the threat to the financial system. One reason was that these methods didn’t account for how closely commercial banks, investment banks, hedge funds, and insurance companies were linked. As financial conditions worsened in one type of institution, the effects spread to others. A new method that more accurately accounts for these spillover effects suggests that hedge funds may have been central in generating systemic risk during the crisis.
Social impact bonds are a special type of bond whose purpose is to provide long-term funds to projects with a social impact. Especially in the UK and the US, these bonds are increasingly being used to raise funds for government projects. Their return depends on the social improvements achieved. Especially in times of crisis, governments lack the funds to prevent the social consequences of recessions. Faia argues that the European Union should develop an equivalent of the British Social Finance Ltd. to finance projects for social improvement.
Northerners are unwilling to invest in a South they perceive as unwilling to undertake necessary structural reforms, while Southerners are unwilling to invest in their own countries in a climate of austerity and policy uncertainty imposed, in their view, by the North. The result is a vicious cycle of mistrust. However, as the author argues, big steps in the direction of reforms may provide just enough thrust to break out of this vicious cycle, propel southern countries, and especially Greece, to a much happier future, and promote the chances of more balanced economic performance in North and South.
Social Security rules that determine retirement, spousal, and survivor benefits, along with benefit adjustments according to the age at which these are claimed, open up a complex set of financial options for household decisions. These rules influence optimal household asset allocation, insurance, and work decisions, subject to life cycle demographic shocks, such as marriage, divorce, and children. Our model-based research generates a wealth profile and a low and stable equity fraction consistent with empirical evidence. We confirm predictions that wives will claim retirement benefits earlier than husbands, while life insurance is mainly purchased by younger men. Our policy simulations imply that eliminating survivor benefits would sharply reduce claiming differences by sex while dramatically increasing men’s life insurance purchases.
One of the motivations for establishing a European banking union was the desire to break the ties between national regulators and domestic financial institutions in order to prevent regulatory capture. However, supervisory authority over the financial sector at the national level can also have valuable public benefits. The aim of this policy letter is to detail these public benefits in order to counter discussions that focus only on conflicts of interest. It is informed by an analysis of how financial institutions interacted with policy-makers in the design of national bank rescue schemes in response to the banking crisis of 2008. Using this information, it discusses the possible benefits of close cooperation between financial institutions and regulators and analyzes these benefits in the light of a European banking union.
This paper makes a conceptual contribution to the effect of monetary policy on financial stability. We develop a microfounded network model with endogenous network formation to analyze the impact of central banks' monetary policy interventions on systemic risk. Banks choose their portfolio, including their borrowing and lending decisions on the interbank market, to maximize profit subject to regulatory constraints in an asset-liability framework. Systemic risk arises in the form of multiple bank defaults driven by common shock exposure on asset markets, direct contagion via the interbank market, and firesale spirals. The central bank injects or withdraws liquidity on the interbank markets to achieve its desired interest rate target. A tension arises between the beneficial effects of stabilized interest rates and increased loan volume and the detrimental effects of higher risk taking incentives. We find that central bank supply of liquidity quite generally increases systemic risk.
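As a stylized illustration of the direct-contagion channel named in this abstract (one of the three sources of systemic risk, alongside common shocks and fire-sale spirals), the sketch below iterates a simple default cascade on an interbank exposure matrix. The function, its parameters, and the flat recovery-rate assumption are illustrative and much simpler than the paper's microfounded model.

```python
def cascade(capital, exposures, shock, recovery=0.4):
    """Illustrative default cascade on an interbank network.

    capital[i]      : equity buffer of bank i
    exposures[i][j] : amount bank i has lent to bank j
    shock[i]        : initial asset-side loss hitting bank i
    A bank defaults once cumulative losses exceed its capital; its
    creditors then write off (1 - recovery) of their claims on it.
    Returns a list flagging which banks defaulted once the cascade settles.
    """
    n = len(capital)
    losses = list(shock)
    defaulted = [False] * n
    while True:
        newly = [i for i in range(n)
                 if not defaulted[i] and losses[i] > capital[i]]
        if not newly:
            return defaulted
        for j in newly:
            defaulted[j] = True
            # contagion: each creditor i loses part of its claim on j
            for i in range(n):
                losses[i] += exposures[i][j] * (1 - recovery)
```

With three banks in a credit chain, for example, an initial shock that defaults bank 0 can pull down a thinly capitalized creditor bank 1, while a well-capitalized bank 2 absorbs the second-round loss and survives.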
This paper explores the consequences of consumer education for prices and welfare in retail financial markets when some consumers are naive about shrouded add-on prices and firms try to exploit this naivety. Allowing for different information and pricing strategies, we show that education is unlikely to push firms to disclose prices to all consumers, which would be socially efficient. Instead, price discrimination emerges as a new equilibrium. Further, owing to a feedback effect on prices, education that benefits consumers who become sophisticated may harm consumers who stay naive, and even the group of all consumers as a whole.
This paper investigates the role of monetary policy in the collapse in the long-term real interest rates in the decade before the onset of the financial crisis using a sample of five advanced economies (United States, United Kingdom, the euro area, Sweden and Canada). The results from an estimated panel VAR with monthly data show that, while monetary policy shocks had negligible effects on long-term real interest rates, shocks to the long-term real interest rates had a one-to-one effect on the short nominal rate.
This paper empirically tests the role of bank lending tightening in non-financial corporate (NFC) bond issuance in the eurozone. Utilizing a unique data set provided by the ECB Bank Lending Survey, we capture the "pure" credit supply effect on corporate external financing. We find that tightened credit standards positively affect NFC bond issuance: a 1pp increase in banks reporting considerable tightening on loans leads to around a 7% increase in firms' bond issuance in the eurozone. Focusing on a spectrum of aspects contributing to bank credit tightening, we document that banks' balance sheet constraints, as well as their perception of risk, lead to significantly higher NFC bond issuance. In addition, we show that stricter lending conditions, such as wider margins, higher collateral requirements and covenants, also significantly increase NFC bond issuance volumes. Furthermore, the impact of bank credit tightening on firms' bond issuance is observable in core eurozone countries but not in peripheral countries, which is partially due to the underdevelopment of debt capital markets in the periphery.
This paper investigates the determinants of value and growth investing in a large administrative panel of Swedish residents over the 1999-2007 period. We document strong relationships between a household’s portfolio tilt and the household’s financial and demographic characteristics. Value investors have higher financial and real estate wealth, lower leverage, lower income risk, lower human capital, and are more likely to be female than the average growth investor. Households actively migrate to value stocks over the life-cycle and, at higher frequencies, dynamically offset the passive variations in the value tilt induced by market movements. We verify that these results are not driven by cohort effects, financial sophistication, biases toward popular or professionally close stocks, or unobserved heterogeneity in preferences. We relate these household-level results to some of the leading explanations of the value premium.
We analyze the risk premium on bank bonds at origination with a special focus on the role of implicit and explicit public guarantees and the systemic relevance of the issuing institutions. By looking at the asset swap spread on 5,500 bonds, we find that explicit guarantees and sovereign creditworthiness have a substantial effect on the risk premium. In addition, while large institutions still enjoy lower issuance costs linked to the TBTF framework, we find evidence of enhanced market discipline for systemically important banks, which have faced an increased premium on bond placements since the onset of the financial crisis.
We examine the impact of so-called "Crisis Contracts" on bank managers' risk-taking incentives and on the probability of banking crises. Under a Crisis Contract, managers are required to contribute a pre-specified share of their past earnings to finance public rescue funds when a crisis occurs. This can be viewed as a retroactive tax that is levied only when a crisis occurs and that leads to a form of collective liability for bank managers. We develop a game-theoretic model of a banking sector whose shareholders have limited liability, so that society at large will suffer losses if a crisis occurs. Without Crisis Contracts, the managers' and shareholders' interests are aligned, and managers take more than the socially optimal level of risk. We investigate how the introduction of Crisis Contracts changes the equilibrium level of risk-taking and the remuneration of bank managers. We establish conditions under which the introduction of Crisis Contracts will reduce the probability of a banking crisis and improve social welfare. We explore how Crisis Contracts and capital requirements can supplement each other and we show that the efficacy of Crisis Contracts is not undermined by attempts to hedge.
Banks can deal with their liquidity risk by holding liquid assets (self-insurance), by participating in interbank markets (coinsurance), or by using flexible financing instruments, such as bank capital (risk-sharing). We use a simple model to show that undiversifiable liquidity risk, i.e. the liquidity risk that banks are unable to coinsure on interbank markets, represents an important risk factor affecting their capital structures. Banks facing higher undiversifiable liquidity risk hold more capital. We posit that empirically banks that are more exposed to undiversifiable liquidity risk are less active on interbank markets. Therefore, we test for the existence of a negative relationship between bank capital and interbank market activity and find support in a large sample of U.S. commercial banks.
From the late Middle Ages to early modern times (ca. 1200-1600), the Lübeck City Council was the most important courthouse in the Baltic. About 100 cities and towns on its shores lived according to the law of Lübeck. The paper deals with the old theory that Imperial law, i.e. mainly the learned Ius commune, was generally rejected by the council on the grounds of its foreign nature. The paper rejects this view with the help of eight case studies. Rather spectacular statements against Imperial law do exist, but a closer look reveals that they have to be seen in the light of a specific practical context. They must not be confounded with general statements, in which the council had no interest. Its attitude towards learned law was flexible and purely pragmatic.
I analyze critical illness insurance in a consumption-investment model over the life cycle. I numerically solve a model with stochastic mortality risk and health shock risk. These shocks are interpreted as critical illnesses and can negatively affect the expected remaining lifetime, health expenses, and income. To hedge the health-expense effect of a shock, the agent can purchase critical illness insurance. My results highlight that critical illness insurance is strongly desired by the agents. With an insurance profit of 20%, nearly all agents purchase the insurance during the working stage of the life cycle and more than 50% of the agents purchase it during retirement. With an insurance profit of 200%, still nearly all working agents purchase the insurance, whereas there is little demand in the retirement stage.
I numerically solve realistically calibrated life cycle consumption-investment problems in continuous time featuring stochastic mortality risk driven by jumps, unspanned labor income, short-sale and liquidity constraints, and a simple insurance contract. I compare models with a deterministic and a stochastic hazard rate of death to a model without mortality risk. Mortality risk has only minor effects on the optimal controls early in the life cycle, but it becomes crucial in later years. A diffusive component in the hazard rate of death has no significant impact, whereas a jump component is desired by the agent and influences optimal controls and wealth evolution. The insurance is used to ensure the optimal bequest, so that there is no accidental bequest. In the absence of the insurance, most of the bequest is accidental.
We explore the sources of household balance sheet adjustment following the collapse of the housing market in 2006. First, we use microdata from the Federal Reserve Board's Senior Loan Officer Opinion Survey to document that banks cumulatively tightened consumer lending standards more in counties that experienced a house price boom in the mid-2000s than in non-boom counties. We then use the idea that renters, unlike homeowners, did not experience an adverse wealth shock when the housing market collapsed to examine the relative importance of two explanations for the observed deleveraging and the sluggish pickup in consumption after 2008. One possibility is that households optimally adjusted to lower wealth by reducing their demand for debt and, implicitly, their demand for consumption. Alternatively, banks may have been more reluctant to lend in areas with pronounced real estate declines. Our evidence is consistent with the second explanation. Renters with low risk scores, compared to homeowners in the same markets, reduced their levels of nonmortgage debt and credit card debt more in counties where house prices fell more. The contrast suggests that the observed reductions in aggregate borrowing were driven more by cutbacks in the provision of credit than by a demand-based response to lower housing wealth.
This paper solves a dynamic model of households' mortgage decisions incorporating labor income, house price, inflation, and interest rate risk. It uses a zero-profit condition for mortgage lenders to solve for equilibrium mortgage rates given borrower characteristics and optimal decisions. The model quantifies the effects of adjustable vs. fixed mortgage rates, loan-to-value ratios, and mortgage affordability measures on mortgage premia and default. Heterogeneity in borrowers' labor income risk is important for explaining the higher default rates on adjustable-rate mortgages during the recent US housing downturn, and the variation in mortgage premia with the level of interest rates.
The paper analyses the relationship between deposit insurance, debt-holder monitoring, bank charter values, and risk taking for European banks. Utilising cross-sectional and time series variation in the existence of deposit insurance schemes in the EU, we find that the establishment of explicit deposit insurance significantly reduces the risk taking of banks. This finding stands in contrast to most of the previous empirical literature. It supports the hypothesis that in the absence of deposit insurance, European banking systems have been characterised by strong implicit insurance operating through the expectation of public intervention at times of distress. Hence the introduction of an explicit system may imply a de facto reduction in the scope of the safety net. This finding provides a new perspective on the effects of deposit insurance on risk taking. Unless the absence of any safety net is credible, the introduction of deposit insurance serves to explicitly limit the safety net and, hence, moral hazard. We also test further hypotheses regarding the interaction between deposit insurance and monitoring, charter values and "too-big-to-fail." We find that banks with lower charter values and more subordinated debt reduce risk taking more after the introduction of explicit deposit insurance, in support of the notion that charter values and subordinated debt may mitigate moral hazard. Finally, large banks (as measured in relation to the banking system as a whole) do not change their risk taking in response to the introduction of deposit insurance, which suggests that the introduction of explicit deposit insurance does not mitigate "too-big-to-fail" problems.
This paper uses the co-incidence of extreme shocks to banks’ risk to examine within country and across country contagion among large EU banks. Banks’ risk is measured by the first difference of weekly distances to default and abnormal returns. Using Monte Carlo simulations, the paper examines whether the observed frequency of large shocks experienced by two or more banks simultaneously is consistent with the assumption of a multivariate normal or a student t distribution. Further, the paper proposes a simple metric, which is used to identify contagion from one bank to another and identify “systemically important” banks in the EU.
Using a normalized CES function with factor-augmenting technical progress, we estimate a supply-side system of the US economy from 1953 to 1998. Avoiding potential estimation biases that have occurred in earlier studies and putting a high emphasis on the consistency of the data set required by the estimated system, we obtain robust results not only for the aggregate elasticity of substitution but also for the parameters of labor- and capital-augmenting technical change. We find that the elasticity of substitution is significantly below unity and that the growth rates of technical progress show an asymmetric pattern in which labor-augmenting technical progress grows exponentially, while that of capital is hyperbolic or logarithmic.
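For reference, a normalized CES production function with factor-augmenting technical progress of the kind estimated here can be written as follows; the notation is generic and the paper's exact specification may differ:

```latex
Y_t = Y_0 \left[ \pi_0 \left( \frac{\Gamma^K_t K_t}{\Gamma^K_0 K_0} \right)^{\frac{\sigma - 1}{\sigma}}
      + (1 - \pi_0) \left( \frac{\Gamma^L_t L_t}{\Gamma^L_0 L_0} \right)^{\frac{\sigma - 1}{\sigma}}
      \right]^{\frac{\sigma}{\sigma - 1}}
```

Here $\sigma$ is the aggregate elasticity of substitution, $\pi_0$ is the capital income share at the normalization point (subscript $0$), and $\Gamma^K_t$, $\Gamma^L_t$ index capital- and labor-augmenting technical progress. Normalization pins the function down at a benchmark point, which is what allows the elasticity and the two technical-progress paths to be estimated jointly; the finding that $\sigma$ is significantly below unity corresponds to $\sigma < 1$ in this expression.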
Recent empirical studies of the inflation-growth relationship underline that inflation has negative growth effects even at relatively modest rates. Most contributions to monetary growth theory, however, have difficulties explaining such a pattern. This paper shows that the problem can be overcome by establishing a link between monetary instability and the aggregate elasticity of factor substitution. Several microeconomic justifications can be found for a negative influence of inflation on factor substitution. It turns out that, already in a simple neoclassical monetary growth model, this effect is usually strong enough to question the superneutrality benchmark result in the steady state and to dominate all potential positive effects of inflation along the convergence path. From a more general perspective, the paper contributes to a better integration of institutional change into aggregate models of economic growth.
June 4th, 2013 marks the formal launch of the third generation of the Equator Principles (EP III) and the tenth anniversary of the EPs – enough reasons for evaluating the EP initiative from an economic ethics and business ethics perspective. In particular, this essay deals with the following questions: What are the EPs and where are they going? What has been achieved so far by the EPs? What are the strengths and weaknesses of the EPs? Which necessary reform steps need to be adopted in order to further strengthen the EP framework? Can the EPs be regarded as a role model in the field of sustainable finance and CSR? The paper is structured as follows: The first chapter defines the term EPs and introduces the keywords related to the EP framework. The second chapter gives a brief overview of the history of the EPs. The third chapter discusses the Equator Principles Association, the governing, administering, and managing institution behind the EPs. The fourth chapter summarizes the main features and characteristics of the newly released third generation of the EPs. The fifth chapter critically evaluates the EP III from an economic ethics and business ethics perspective. The paper concludes with a summary of the main findings.
In ‘Strafe für fremde Schuld’ Harald Maihold uncovered how a doctrine of surrogate punishment in the legal treatises of the Salamanca school gradually gave way to the principle of guilt. This meant that punishment eventually could only be inflicted upon a culprit and no longer upon an innocent person. We use René Girard’s philosophy of (the disruption of) scapegoat mechanisms and sacrifice to develop a coherent interpretation not only of how this institution of surrogate punishment functioned, how it selected its victims and how it was legitimated, but also of the theology that formed its background. We argue that most of what surrogate punishment is about can be grasped in two words: sacrificial logic. The elimination of surrogation from criminal law would then correspond to the rejection of this logic, an evolution which could be interpreted as a desacralisation or secularisation of criminal law under the influence of the emerging principle of guilt.
Noumenal Power
(2014)
In political or social philosophy, we speak about power all the time. Yet the meaning of this important concept is rarely made explicit, especially in the context of normative discussions. But as with many other concepts, once one considers it more closely, fundamental problems arise, such as whether a power relation is necessarily a relation of subordination and domination. In the following, I suggest a novel understanding of what power is and what it means to exercise it.
Francisco Suárez (1548-1617) and Rodrigo Arriaga (1592-1667) on the state of innocence and community
(2014)
Recent scholarship on late-scholastic thought has stressed a Jesuit discontinuity from Thomism. While Aquinas’ Aristotelian thesis located the political sphere in the state of innocence, Jesuit thought on community formation is said to have referred to ‘fallen’ and ‘pure’ nature. In this piece, I trace one particular narrative: In the hypothetical, lasting state of innocence (if original sin had not occurred), Aquinas identified the political community, but not the institution of the sacraments. Two celebrated Jesuit scholastics, Francisco Suárez and Rodrigo Arriaga, challenged the latter claim and defended the naturalness of spiritual alongside temporal power. This effectively allowed them to connect ‘nature’ to ‘utility’ and ‘necessity’ without tying their claims to the supernatural teleology. To them, the state of innocence remained relevant for politics, albeit in a way that challenged the Thomist account.
In this paper, we study the effect of proportional transaction costs on consumption-portfolio decisions and asset prices in a dynamic general equilibrium economy with a financial market that has a single-period bond and two risky stocks, one of which incurs the transaction cost. Our model has multiple investors with stochastic labor income, heterogeneous beliefs, and heterogeneous Epstein-Zin-Weil utility functions. The transaction cost gives rise to endogenous variations in liquidity. We show how equilibrium in this incomplete-markets economy can be characterized and solved for in a recursive fashion. We have three main findings. One, costs for trading a stock lead to a substantial reduction in the trading volume of that stock, but have only a small effect on the trading volume of the other stock and the bond. Two, even in the presence of stochastic labor income and heterogeneous beliefs, transaction costs have only a small effect on the consumption decisions of investors, and hence, on equity risk premia and the liquidity premium. Three, the effects of transaction costs on quantities such as the liquidity premium are overestimated in partial equilibrium relative to general equilibrium.
This paper studies the life cycle consumption-investment-insurance problem of a family. The wage earner faces the risk of a health shock that significantly increases his probability of dying. The family can buy term life insurance with realistic features. In particular, the available contracts are long term, so that decisions are sticky and can only be revised at significant cost. Furthermore, a revision is only possible as long as the insured person is healthy. A second important and realistic feature of our model is that the labor income of the wage earner is unspanned. We document that the combination of unspanned labor income and the stickiness of insurance decisions reduces insurance demand significantly. This is because an income shock induces the need to reduce insurance coverage, since premia become less affordable. Since such a reduction is costly and families anticipate these potential costs, they buy less protection at all ages. In particular, young families stay away from life insurance markets altogether.
The financial services industry worldwide has undergone major transformation since the late 1970s. Technological advancements in information processing and communication facilitated financial innovation and narrowed traditional distinctions in financial products and services, allowing them to become close substitutes for one another. The deregulation process in many major economies prior to the recent financial crisis blurred the traditional lines of demarcation between the distinct types of financial institutions, exposing those firms to new competitors in their traditional business areas, while the increasing globalization of financial markets fostered the provision of financial services across national borders. Against this backdrop, a trend toward consolidation across financial sectors as well as across national borders increasingly manifested itself since the 1990s. These developments in the financial markets further intensified competition in the financial services industry and induced financial institutions to redefine their business strategies in search of higher profitability and growth opportunities. Consolidation across distinct financial sectors, i.e. financial conglomeration, in particular became a popular business strategy in light of the potential operational synergies and diversification benefits it can offer. This trend spurred the growth of diversified financial groups, the so-called financial conglomerates, which commingle banking, securities, and insurance activities under one corporate umbrella. Still today, large, complex financial conglomerates are represented among the major players in financial markets worldwide, and their activities cut not only across the traditional boundaries of the banking, securities, and insurance sectors but also across national borders.
Notwithstanding the economic benefits that conglomeration may produce as a business strategy, the emergence of financial conglomerates also exacerbated existing and created new prudential risks in the financial system. The mixing of a variety of financial products and services under one corporate roof and the generally large and complex group structure of financial conglomerates expose such organizations to specific group risks such as contagion and arbitrage risk as well as systemic risk. When realized, these risks may not only cause the failure of an entire financial group but also threaten the stability of the financial system as a whole, as evidenced by the events during the recent financial crisis of 2007-2009.
Following the experience of the global financial crisis, central banks have been asked to undertake unprecedented responsibilities. Governments and the public appear to have high expectations that monetary policy can provide solutions to problems that do not necessarily fit in the realm of traditional monetary policy. This paper examines three broad public policy goals that may overburden monetary policy: full employment; fiscal sustainability; and financial stability. While central banks have a crucial position in public policy, the appropriate policy mix also involves other institutions, and overreliance on monetary policy to achieve these goals is bound to disappoint. Central bank policies that facilitate postponement of needed policy actions by governments may also have longer-term adverse consequences that could outweigh more immediate benefits. Overburdening monetary policy may eventually diminish and compromise the independence and credibility of the central bank, thereby reducing its effectiveness in preserving price stability and contributing to crisis management.
Banks' financial distress, lending supply and consumption expenditure : [version december 2013]
(2014)
The paper employs a unique identification strategy that links survey data on household consumption expenditure to bank-level data in order to estimate the effects of bank financial distress on consumer credit and consumption expenditures. Specifically, we show that households whose banks were more exposed to funding shocks report significantly lower levels of non-mortgage liabilities compared to a matched sample of households. The reduced access to credit, however, does not result in lower levels of consumption. Instead, we show that households compensate by drawing down liquid assets. Only households without the ability to draw on liquid assets reduce consumption. The results are consistent with consumption smoothing in the face of a temporary adverse lending supply shock. The results contrast with recent evidence on the real effects of finance on firms' investment, where even temporary adverse credit supply shocks are associated with significant real effects.
This paper tests whether an increase in insured deposits causes banks to become more risky. We use variation introduced by the U.S. Emergency Economic Stabilization Act in October 2008, which increased the deposit insurance coverage from $100,000 to $250,000 per depositor and bank. For some banks, the amount of insured deposits increased significantly; for others, it was a minor change. Our analysis shows that the more affected banks increase their investments in risky commercial real estate loans and become more risky relative to unaffected banks following the change. This effect is most pronounced for affected banks with low capitalization.
We introduce a new measure of systemic risk, the change in the conditional joint probability of default, which assesses the effects of the interdependence in the financial system on the general default risk of sovereign debtors. We apply our measure to examine the fragility of the European financial system during the ongoing sovereign debt crisis. Our analysis documents an increase in systemic risk contributions in the euro area during the post-Lehman global recession and especially after the beginning of the euro area sovereign debt crisis. We also find a considerable potential for cascade effects from small to large euro area sovereigns. When we investigate the effect of sovereign default on the European Union banking system, we find that bigger banks, banks with riskier activities, with poor asset quality, and funding and liquidity constraints tend to be more vulnerable to a sovereign default. Surprisingly, an increase in leverage does not seem to influence systemic vulnerability.
We show that market discipline, defined as the extent to which firm-specific risk characteristics are reflected in market prices, eroded during the recent financial crisis in 2008. We design a novel test of changes in market discipline based on the relation between firm-specific risk characteristics and debt-to-equity hedge ratios. We find that market discipline already weakened after the rescue of Bear Stearns before disappearing almost entirely after the failure of Lehman Brothers. The effect is stronger for investment banks and large financial institutions, while there is no comparable effect for non-financial firms.
Inflation differentials in the euro area have been persistent since the adoption of the single currency. This paper analyzes the impact of product and labor market regulation on inflation in a sample of 11 countries. The results show that, after the adoption of the euro, product market deregulation has a relevant and significant effect on the level of inflation, while higher labor market regulation increases the responsiveness of inflation to the output gap.
We propose an iterative procedure to efficiently estimate models with complex log-likelihood functions and a potentially high number of parameters relative to the number of observations. Given consistent but inefficient estimates of sub-vectors of the parameter vector, the procedure yields computationally tractable, consistent and asymptotically efficient estimates of all parameters. We show asymptotic normality and derive the estimator's asymptotic covariance as a function of the number of iteration steps. To mitigate the curse of dimensionality in highly parameterized models, we combine the procedure with a penalization approach that yields sparsity and reduces model complexity. Small sample properties of the estimator are illustrated for two time series models in a simulation study. In an empirical application, we use the proposed method to estimate the connectedness between companies by extending the approach of Diebold and Yilmaz (2014) to a high-dimensional non-Gaussian setting.
Even though fiscal sovereignty still counts as a fundamental principle of government, global and regional economic integration as well as increasing levels of sovereign debt severely limit governments’ tax policy choices. In particular the redistributive function of taxation has suffered in the pursuit of economic competitiveness. As inequality rises and attention is directed again at taxation as a means for redistribution, international cooperation appears as an avenue to enable redistribution through taxation. Yet one of the predominant international institutions dealing with tax matters – the OECD – with its focus on economic growth and competitiveness and the resulting tax policy advice prevents rather than promotes national and international debates on taxation as a question of social justice. The paper argues that questions of taxation need to be perceived as questions of social justice and thus as questions of politics, and not merely of economics. Only if taxation is not considered a mere economic instrument can a ‘political economy’ be maintained. The paper addresses the three objectives of taxation – revenue generation, redistribution and regulation – and how they are affected as governments aim for fiscal consolidation, concluding that governments’ power to freely pursue and calibrate these objectives has come to appear as a myth rather than the core of sovereignty. It then demonstrates how the OECD’s tax policy advice and cooperation in tax matters react to the constraints on governmental taxation powers, and how they aim at economic growth and competitiveness to the detriment of (other) ideas of social justice. The paper concludes with a call for (re)integrating social and global justice concerns into debates on taxation.
The article introduces a research project, financed by the Academy of Sciences and Literature Mainz, that began in 2013 and will extend over an 18-year period. It aims to produce a historical-semantic dictionary elucidating central terms of the School of Salamanca's discourses and their significance for modern political theory and jurisprudence. The project's foundation will be a digital corpus of important texts from the School of Salamanca, which will be linked to the dictionary's online version. By making the source corpus accessible in searchable full text (as well as in high-quality digital images), the project is creating a new research tool with exciting possibilities for further investigations. The dictionary will be a valuable source of information for the interdisciplinary research carried out in this field.
Sovereign bond risk premiums
(2013)
Credit risk has become an important factor driving government bond returns. We therefore introduce an asset pricing model which exploits information contained in both forward interest rates and forward CDS spreads. Our empirical analysis covers euro-zone countries with German government bonds as credit risk-free assets. We construct a market factor from the first three principal components of the German forward curve as well as a common and a country-specific credit factor from the principal components of the forward CDS curves. We find that predictability of risk premiums of sovereign euro-zone bonds improves substantially if the market factor is augmented by a common and an orthogonal country-specific credit factor. While the common credit factor is significant for most countries in the sample, the country-specific factor is significant mainly for peripheral euro-zone countries. Finally, we find that during the current crisis period, market and credit risk premiums of government bonds are negative over long subintervals, a finding that we attribute to the presence of financial repression in euro-zone countries.
This paper takes a novel approach to estimating bankruptcy costs by inference from market prices of equity and put options using a dynamic structural model of capital structure. This approach avoids the selection bias of looking at firms in or near default and therefore permits theories of ex ante capital structure determination to be tested. We identify significant cross-sectional variation in bankruptcy costs across industries and relate these to specific firm characteristics. We find that asset volatility and growth options have significant positive impacts, while tangibility and size have negative impacts. Our estimate of bankruptcy costs has a significantly negative impact on leverage ratios. This negative impact is in addition to that of other firm characteristics such as asset intangibility and asset volatility. The results provide strong support for the tradeoff theory of capital structure.
We study to what extent firms spread out their debt maturity dates across time, which we call "granularity of corporate debt." We consider the role of debt granularity using a simple model in which a firm's inability to roll over expiring debt causes inefficiencies, such as costly asset sales or underinvestment. Since multiple small asset sales are less costly than a single large one, firms may diversify debt rollovers across maturity dates. We construct granularity measures using data on corporate bond issuers for the 1991-2011 period and establish a number of novel findings. First, there is substantial variation in granularity in that many firms have either very concentrated or highly dispersed maturity structures. Second, our model's predictions are consistent with observed variation in granularity. Corporate debt maturities are more dispersed for larger and more mature firms, for firms with better investment opportunities, with higher leverage ratios, and with lower levels of current cash flows. We also show that during the recent financial crisis, firms with valuable investment opportunities in particular implemented more dispersed maturity structures. Finally, granularity plays an important role for bond issuances, because we document that newly issued corporate bond maturities complement pre-existing bond maturity profiles.
We consider an economy where individuals privately choose effort and trade competitively priced securities that pay off with effort-determined probability. We show that if insurance against a negative shock is sufficiently incomplete, then standard functional form restrictions ensure that individual objective functions are optimized by an effort and insurance combination that is unique and satisfies first- and second-order conditions. Modeling insurance incompleteness in terms of costly production of private insurance services, we characterize the constrained inefficiency arising in general equilibrium from competitive pricing of nonexclusive financial contracts.
We propose a new classification of consumption goods into nondurable goods, durable goods and a new class which we call “memorable” goods. A good is memorable if a consumer can draw current utility from its past consumption experience through memory. We construct a novel consumption-savings model in which a consumer has a well-defined preference ordering over both nondurable goods and memorable goods. Memorable goods consumption differs from nondurable goods consumption in that current memorable goods consumption may also impact future utility through the accumulation process of the stock of memory. In our model, households optimally choose a lumpy profile of memorable goods consumption even in a frictionless world. Using Consumer Expenditure Survey data, we then document levels and volatilities of different groups of consumption goods expenditures, as well as their expenditure patterns, and show that the expenditure patterns on memorable goods indeed differ significantly from those on nondurable and durable goods. Finally, we empirically evaluate our model’s predictions with respect to the welfare cost of consumption fluctuations and conduct an excess-sensitivity test of the consumption response to predictable income changes. We find that (i) the welfare cost of household-level consumption fluctuations may be overstated by 1.7 percentage points (11.9% as opposed to 13.6% of permanent consumption) if memorable goods are not appropriately accounted for; (ii) the finding of excess sensitivity of consumption documented in important papers of the literature might be entirely due to the presence of memorable goods.
There is mounting evidence that retail investors make predictable, costly investment mistakes, including underinvestment, naïve diversification, and payment of excessive fund fees. Over the past thirty-five years, however, participant-directed 401(k) plans have largely replaced professionally managed pension plans, requiring unsophisticated retail investors to navigate the financial markets themselves. Policy-makers have struggled with regulatory interventions designed to improve the quality of investment decisions without a clear understanding of the reasons for investor mistakes. Absent such an understanding, it is difficult to design effective regulatory responses. This article offers a first step in understanding the investor decision-making process. We use an internet-based experiment to disentangle possible explanations for inefficient investment decisions. The experiment employs a simplified construct of an employee’s allocation among the options in a retirement plan coupled with technology that enables us to collect data on the specific information that investors choose to view. In addition to collecting general information about the process by which investors choose among mutual fund options, we employ an experimental manipulation to test the effect of an instruction on the importance of mutual fund fees. Pairing this instruction with simplified fee disclosure allows us to distinguish between motivation-limits and cognition-limits as explanations for the widespread findings that investors ignore fees in their investment decisions. Our results offer partial but limited grounds for optimism. On the one hand, within our simplified experimental construct, our subjects allocated more money, on average, to higher-value funds. Furthermore, subjects who received the fees instruction paid closer attention to mutual fund fees and allocated their investments into funds with lower fees. 
On the other hand, the effects of even a blunt fees instruction were limited, and investors were unable to identify and avoid clearly inferior fund options. In addition, our results suggest that excessive, naïve diversification strategies are driving many investment decisions. Although our findings are preliminary, they suggest valuable avenues for future research and important implications for regulation of retail investing.
The substantial variation in the real price of oil since 2003 has renewed interest in the question of how to forecast monthly and quarterly oil prices. There also has been increased interest in the link between financial markets and oil markets, including the question of whether financial market information helps forecast the real price of oil in physical markets. An obvious advantage of financial data in forecasting oil prices is their availability in real time on a daily or weekly basis. We investigate whether mixed-frequency models may be used to take advantage of these rich data sets. We show that, among a range of alternative high-frequency predictors, especially changes in U.S. crude oil inventories produce substantial and statistically significant real-time improvements in forecast accuracy. The preferred MIDAS model reduces the MSPE by as much as 16 percent compared with the no-change forecast and has statistically significant directional accuracy as high as 82 percent. This MIDAS forecast also is more accurate than a mixed-frequency real-time VAR forecast, but not systematically more accurate than the corresponding forecast based on monthly inventories. We conclude that typically not much is lost by ignoring high-frequency financial data in forecasting the monthly real price of oil.
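The two evaluation metrics used here, MSPE reduction relative to the no-change forecast and directional accuracy, can be sketched in a few lines. This is a minimal illustration of the metrics only; the price series and model forecasts below are invented placeholders, not data from the paper.

```python
# Sketch of the two oil-price forecast evaluation metrics: MSPE reduction
# vs. the no-change (random walk) forecast, and directional accuracy.
# All numbers below are illustrative, not taken from the study.

def mspe(forecasts, actuals):
    """Mean squared prediction error."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

def directional_accuracy(forecasts, actuals, last_prices):
    """Share of periods in which the forecast predicts the sign of the change."""
    hits = sum(
        1 for f, a, p in zip(forecasts, actuals, last_prices)
        if (f - p) * (a - p) > 0
    )
    return hits / len(actuals)

# Hypothetical oil prices and one-step-ahead model forecasts.
last_prices = [100.0, 102.0, 101.0, 105.0]   # price at each forecast origin
actuals     = [102.0, 101.0, 105.0, 103.0]   # realized next-period price
model_fc    = [101.5, 101.2, 103.0, 103.5]   # some model's forecasts

# The no-change forecast simply repeats the last observed price.
no_change_fc = last_prices

mspe_model = mspe(model_fc, actuals)
mspe_nc = mspe(no_change_fc, actuals)
mspe_reduction = 1 - mspe_model / mspe_nc    # e.g. 0.16 means a 16% reduction
da = directional_accuracy(model_fc, actuals, last_prices)
```

A "16 percent MSPE reduction" in the abstract's sense corresponds to `mspe_reduction == 0.16` in this notation.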
Model case procedures have some fundamentals in common with collective redress in civil law countries. This is particularly true in the field of investor protection which is highly regulated and marked by resulting enforcement failures, which led the German legislator to the enactment of the KapMuG and its recent amendment which highlight exemplary elements of model case procedure. A survey of the ongoing activities of the European Union in the area of collective redress and of its repercussions on the member state level therefore forms a suitable basis for the following analysis of the 2012 amendment of the KapMuG. It clearly brings into focus a shift from sector-specific regulation with an emphasis on the cross-border aspect of protecting consumers towards a “coherent approach” strengthening the enforcement of EU law. As a result, regulatory policy and collective redress are two sides of the same coin today. With respect to the KapMuG such a development brings about some tension between its aim to aggregate small individual claims as efficiently as possible and the dominant role of individual procedural rights in German civil procedure. This conflict can be illustrated by some specific rules of the KapMuG: its scope of application, the three-tier procedure of a model case procedure, the newly introduced notification of claims and the new opt-out settlement under the amended §§ 17-19.
We propose the realized systemic risk beta as a measure for financial companies’ contribution to systemic risk given network interdependence between firms’ tail risk exposures. Conditional on statistically pre-identified network spillover effects and market as well as balance sheet information, we define the realized systemic risk beta as the total time-varying marginal effect of a firm’s Value-at-Risk (VaR) on the system’s VaR. Statistical inference reveals a multitude of relevant risk spillover channels and determines companies’ systemic importance in the U.S. financial system. Our approach can be used to monitor companies’ systemic importance, allowing for transparent macroprudential supervision.
We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
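The copula step described above, mapping residuals into a Gaussian domain, can be sketched with a rank-based probability integral transform followed by the standard normal quantile function. This is only a sketch of the transform under simplifying assumptions; the residual series is invented, whereas a real application would use estimated multiplicative-error-model residuals.

```python
# Sketch of the Gaussian-copula transform: map (non-negative) residuals to
# uniforms via the empirical CDF, then to Gaussians via the normal inverse
# CDF. The residual series here is illustrative, not estimated from data.
from statistics import NormalDist

def empirical_cdf_ranks(x):
    """Rank-based probability integral transform, u_i = rank_i / (n + 1)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)       # (n+1) keeps u strictly inside (0, 1)
    return u

def to_gaussian_domain(x):
    """Map a series into the Gaussian domain via the Gaussian copula."""
    nd = NormalDist()               # standard normal
    return [nd.inv_cdf(u) for u in empirical_cdf_ranks(x)]

# Hypothetical MEM residuals (non-negative, as for volumes or durations).
residuals = [0.4, 1.2, 0.9, 2.5, 0.7]
z = to_gaussian_domain(residuals)
```

The transformed series `z` is what a DCC-GARCH specification for the time-varying (co-)variances would then be fitted to.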
Does it pay to invest in art? A selection-corrected returns perspective : [draft october 15, 2013]
(2013)
This paper shows the importance of correcting for sample selection when investing in illiquid assets with endogenous trading. Using a large sample of 20,538 paintings that were sold repeatedly at auction between 1972 and 2010, we find that paintings with higher price appreciation are more likely to trade. This strongly biases estimates of returns. The selection-corrected average annual index return is 6.5 percent, down from 10 percent for traditional uncorrected repeat sales regressions, and Sharpe Ratios drop from 0.24 to 0.04. From a pure financial perspective, passive index investing in paintings is not a viable investment strategy once selection bias is accounted for. Our results have important implications for other illiquid asset classes that trade endogenously.
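The selection mechanism behind this bias, paintings with higher appreciation are more likely to return to auction, so averaging over observed resales only overstates returns, can be illustrated with a small simulation. All parameters below (return distribution, selection rule) are invented for illustration and are not the paper's estimates.

```python
# Illustration of selection bias in repeat-sales returns: if the probability
# of resale increases with a painting's return, the average return computed
# from observed resales exceeds the population average. Parameters invented.
import random

random.seed(42)

N = 100_000
# Hypothetical annual returns for the full population of paintings.
population_returns = [random.gauss(0.05, 0.20) for _ in range(N)]

def resale_probability(r):
    """Hypothetical selection rule: higher returns are more likely to trade."""
    return min(1.0, max(0.0, 0.5 + 1.0 * r))

observed = [r for r in population_returns
            if random.random() < resale_probability(r)]

mean_all = sum(population_returns) / len(population_returns)
mean_observed = sum(observed) / len(observed)
# mean_observed > mean_all: the naive repeat-sales average is biased upward.
```

The gap between `mean_observed` and `mean_all` plays the role of the 10 percent uncorrected versus 6.5 percent selection-corrected index return in the abstract.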
The 2011 European short sale ban on financial stocks: a cure or a curse? : [version 31 july 2013]
(2013)
Did the August 2011 European short sale bans on financial stocks accomplish their goals? In order to answer this question, we use stock options’ implied volatility skews to proxy for investors’ risk aversion. We find that on ban announcement day, risk aversion levels rose for all stocks but more so for the banned financial stocks. The banned stocks’ volatility skews remained elevated during the ban but dropped for the other unbanned stocks. We show that it is the imposition of the ban itself that led to the increase in risk aversion rather than other causes such as information flow, options trading volumes, or stock specific factors. Substitution effects were minimal, as banned stocks’ put trading volumes and put-call ratios declined during the ban. We argue that although the ban succeeded in curbing further selling pressure on financial stocks by redirecting trading activity towards index options, this result came at the cost of increased risk aversion and some degree of market failure.
We show that the presence of high frequency trading (HFT) has significantly mitigated the frequency and severity of end-of-day price dislocation, counter to recent concerns expressed in the media. The effect of HFT is more pronounced on days when end-of-day price dislocation is more likely to be the result of market manipulation, such as option expiry dates and the end of the month. Moreover, the effect of HFT is more pronounced than the role of trading rules, surveillance, enforcement and legal conditions in curtailing the frequency and severity of end-of-day price dislocation. We show our findings are robust to different proxies of the start of HFT by trade size, cancellation of orders, and co-location.
We examine the impact of stock exchange trading rules and surveillance on the frequency and severity of suspected insider trading cases in 22 stock exchanges around the world over the period January 2003 through June 2011. Using new indices for market manipulation, insider trading, and broker-agency conflict based on the specific provisions of the trading rules of each stock exchange, along with surveillance to detect non-compliance with such rules, we show that more detailed exchange trading rules and surveillance over time and across markets significantly reduce the number of cases, but increase the profits per case.
We use responses to survey questions in the 2010 Italian Survey of Household Income and Wealth that ask consumers how much of an unexpected transitory income change they would consume. We find that the marginal propensity to consume (MPC) is 48 percent on average, and that there is substantial heterogeneity in the distribution. We find that households with low cash-on-hand exhibit a much higher MPC than affluent households, which is in agreement with models with precautionary savings where income risk plays an important role. The results have important implications for the evaluation of fiscal policy, and for predicting household responses to tax reforms and redistributive policies. In particular, we find that a debt-financed increase in transfers of 1 percent of national disposable income targeted to the bottom decile of the cash-on-hand distribution would increase aggregate consumption by 0.82 percent. Furthermore, we find that redistributing 1% of national disposable income from the top to the bottom decile of the income distribution would boost aggregate consumption by 0.33%.
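The aggregation behind these policy numbers, multiplying a transfer by the MPCs of those who receive and those who finance it, can be sketched as follows. The decile MPCs below are hypothetical placeholders, not the estimates from the Italian survey data; only the accounting logic is from the text.

```python
# Sketch of how a targeted transfer maps into aggregate consumption via
# heterogeneous MPCs. The decile MPCs below are hypothetical placeholders,
# not the estimates from the Italian Survey of Household Income and Wealth.

# Hypothetical MPC by cash-on-hand decile (decile 1 = lowest cash-on-hand).
mpc_by_decile = [0.90, 0.75, 0.65, 0.55, 0.48, 0.42, 0.35, 0.28, 0.20, 0.12]

def effect_targeted_transfer(transfer_share, recipient_decile):
    """Debt-financed transfer of `transfer_share` of national disposable
    income to one decile: consumption rises by transfer x recipients' MPC."""
    return transfer_share * mpc_by_decile[recipient_decile - 1]

def effect_redistribution(share, from_decile, to_decile):
    """Redistribution: recipients spend at their MPC, payers cut back at
    theirs; the net effect is the MPC difference times the amount shifted."""
    return share * (mpc_by_decile[to_decile - 1] - mpc_by_decile[from_decile - 1])

# A debt-financed transfer of 1% of disposable income to the bottom decile:
effect_transfer = effect_targeted_transfer(0.01, 1)
# Redistributing 1% of disposable income from the top to the bottom decile:
effect_redis = effect_redistribution(0.01, 10, 1)
```

With the paper's estimated MPC profile in place of these placeholders, the two expressions reproduce the 0.82 percent and 0.33 percent aggregate consumption effects quoted in the abstract.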
Prior research suggests that those who rely on intuition rather than effortful reasoning when making decisions are less averse to risk and ambiguity. The evidence is largely correlational, however, leaving open the question of the direction of causality. In this paper, we present experimental evidence of causation running from reliance on intuition to risk and ambiguity preferences. We directly manipulate participants’ predilection to rely on intuition and find that enhancing reliance on intuition lowers the probability of being ambiguity averse by 30 percentage points and increases risk tolerance by about 30 percent in the experimental sub-population where we would a priori expect the manipulation to be successful (males).
Investment in financial literacy, social security and portfolio choice : [version may 21, 2013]
(2013)
We present an intertemporal portfolio choice model where individuals invest in financial literacy, save, allocate their wealth between a safe and a risky asset, and receive a pension when they retire. Financial literacy affects the excess return and the cost of stock market participation. Since literacy depreciates over time and has a cost related to current consumption, investors simultaneously choose how much to save, the portfolio allocation, and the optimal investment in literacy. The latter depends on the household's resources and preference parameters, on how much financial literacy affects the returns on risky assets and the stock market participation cost, and on the returns on social security wealth. The model implies one should observe a positive correlation between stock market participation (and risky asset share, conditional on participation) and financial literacy, and a negative correlation between the generosity of the social security system and financial literacy. The model also implies that the stock of financial literacy accumulated early in life is positively correlated with the individual's wealth and portfolio allocations later in life. Using microeconomic cross-country data, we find support for these predictions.
The U.S. Energy Information Administration (EIA) regularly publishes monthly and quarterly forecasts of the price of crude oil for horizons up to two years, which are widely used by practitioners. Traditionally, such out-of-sample forecasts have been largely judgmental, making them difficult to replicate and justify. An alternative is the use of real-time econometric oil price forecasting models. We investigate the merits of constructing combinations of six such models. Forecast combinations have received little attention in the oil price forecasting literature to date. We demonstrate that over the last 20 years suitably constructed real-time forecast combinations would have been systematically more accurate than the no-change forecast at horizons up to 6 quarters or 18 months. MSPE reduction may be as high as 12% and directional accuracy as high as 72%. The gains in accuracy are robust over time. In contrast, the EIA oil price forecasts not only tend to be less accurate than no-change forecasts, but are much less accurate than our preferred forecast combination. Moreover, including EIA forecasts in the forecast combination systematically lowers the accuracy of the combination forecast. We conclude that suitably constructed forecast combinations should replace traditional judgmental forecasts of the price of oil.
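A minimal version of the forecast-combination idea, averaging several model forecasts with equal weights and comparing the combination's MSPE to the no-change benchmark, might look like this. The prices and model forecasts are invented; the paper's combinations also consider more refined weighting schemes than the equal weights shown here.

```python
# Sketch of an equal-weighted forecast combination and its MSPE relative to
# the no-change forecast. Prices and model forecasts are illustrative only.

def mspe(forecasts, actuals):
    """Mean squared prediction error."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

last_prices = [90.0, 95.0, 93.0]            # price at each forecast origin
actuals     = [95.0, 93.0, 97.0]            # realized next-period prices

# Forecasts from three hypothetical models for the same three periods.
model_forecasts = [
    [96.0, 92.0, 96.0],
    [93.0, 94.0, 98.0],
    [97.0, 91.0, 95.0],
]

# Equal-weighted combination: average the models' forecasts period by period.
combo = [sum(f) / len(f) for f in zip(*model_forecasts)]

mspe_combo = mspe(combo, actuals)
mspe_no_change = mspe(last_prices, actuals)
mspe_reduction = 1 - mspe_combo / mspe_no_change
```

Adding a systematically inaccurate forecast (such as the EIA judgmental forecast in the abstract) to `model_forecasts` raises `mspe_combo` and lowers `mspe_reduction`, which is the mechanism behind the paper's recommendation to exclude it.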
Are product spreads useful for forecasting? An empirical evaluation of the Verleger hypothesis
(2013)
Notwithstanding a resurgence in research on out-of-sample forecasts of the price of oil in recent years, there is one important approach to forecasting the real price of oil which has not been studied systematically to date. This approach is based on the premise that demand for crude oil derives from the demand for refined products such as gasoline or heating oil. Oil industry analysts such as Philip Verleger and financial analysts widely believe that there is predictive power in the product spread, defined as the difference between suitably weighted refined product market prices and the price of crude oil. Our objective is to evaluate this proposition. We derive from first principles a number of alternative forecasting model specifications involving product spreads and compare these models to the no-change forecast of the real price of oil. We show that not all product spread models are useful for out-of-sample forecasting, but some models are, even at horizons between one and two years. The most accurate model is a time-varying parameter model of gasoline and heating oil spot spreads that allows the marginal product market to change over time. We document MSPE reductions as high as 20% and directional accuracy as high as 63% at the two-year horizon, making product spread models a good complement to forecasting models based on economic fundamentals, which work best at short horizons.
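A spread-based forecast of the kind described above can be sketched as a recursive (expanding-window) regression of future oil price growth on the current product spread, evaluated against the no-change benchmark. Everything below is simulated for illustration; the spread's predictive loading, the sample sizes, and the one-month horizon are assumptions, and the paper's actual models (including the time-varying parameter specification) are richer than this fixed-coefficient sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly data (illustration only): a weighted refined-product
# spread over crude, in dollars per barrel.
T = 200
spread = rng.normal(5, 1, T)

# Assume crude price growth over the next month loads on today's spread.
growth = 0.02 * (spread - spread.mean()) + rng.normal(0, 0.01, T)

# Recursive OLS: re-estimate on an expanding window, forecast one step ahead.
preds, actuals = [], []
for t in range(60, T):                          # hold out the first 60 obs
    X = np.column_stack([np.ones(t), spread[:t]])
    beta, *_ = np.linalg.lstsq(X, growth[:t], rcond=None)
    preds.append(beta[0] + beta[1] * spread[t])
    actuals.append(growth[t])
preds, actuals = np.array(preds), np.array(actuals)

mspe_spread = np.mean((preds - actuals) ** 2)
mspe_nochange = np.mean(actuals ** 2)           # no-change: zero growth
print(f"MSPE ratio (spread model / no-change): {mspe_spread / mspe_nochange:.2f}")
```

The expanding window mimics real-time forecasting: at each date only past data enter the estimation, so the MSPE comparison is genuinely out-of-sample.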
U.S. retail food price increases in recent years may seem large in nominal terms, but after adjusting for inflation they have been quite modest, even after the change in U.S. biofuel policies in 2006. In contrast, increases in the real prices of corn, soybeans, wheat and rice received by U.S. farmers have been more substantial and can be linked in part to increases in the real price of oil. That link, however, appears largely driven by common macroeconomic determinants of the prices of oil and agricultural commodities rather than the pass-through from higher oil prices. We show that there is no evidence that corn ethanol mandates have created a tight link between oil and agricultural markets. Rather, increases in food commodity prices not associated with changes in global real activity appear to reflect a wide range of idiosyncratic shocks, ranging from changes in biofuel policies to poor harvests. Increases in agricultural commodity prices in turn contribute little to U.S. retail food price increases, because of the small cost share of agricultural products in food prices. There is no evidence that oil price shocks have caused more than a negligible increase in retail food prices in recent years. Nor is there evidence for the prevailing wisdom that oil-price-driven increases in the cost of food processing, packaging, transportation and distribution are responsible for higher retail food prices. Finally, there is no evidence that oil-market-specific events or, for that matter, U.S. biofuel policies help explain the evolution of the real price of rice, which is perhaps the single most important food commodity for many developing countries.
We investigate the theoretical impact of including two empirically grounded insights in a dynamic life cycle portfolio choice model. The first is to recognize that, when managing their own financial wealth, investors incur opportunity costs in terms of current and future human capital accumulation, particularly if human capital is acquired via learning by doing. The second is to incorporate age-varying efficiency patterns in financial decision-making. Both enhancements produce inactivity in portfolio adjustment patterns consistent with empirical evidence. We also analyze individuals' optimal choice between self-managing their wealth and delegating the task to a financial advisor. Delegation proves most valuable to the young and the old. Our calibrated model quantifies the welfare gains from including investment time and money costs, as well as delegation, in a life cycle setting.