We investigate methods and tools for analyzing translations between programming languages with respect to observational semantics. The behavior of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus’ semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as nondeterminism, makes known approaches to prove that simulation implies contextual equivalence, such as Howe’s proof technique, inapplicable in this setting. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions.
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for observational semantics of deterministic and non-deterministic higher-order calculi that use a small-step reduction semantics. The generic (sharing) context lemmas are provided for may- as well as two variants of must-convergence, which hold in a broad class of extended process- and extended lambda calculi, if the calculi satisfy certain natural conditions. As a guideline, the proofs of the context lemmas are valid in call-by-need calculi, in call-by-value calculi if substitution is restricted to variable-by-variable substitution, and in process calculi like variants of the π-calculus. For calculi employing beta-reduction using a call-by-name or call-by-value strategy or similar reduction rules, some iu-variants of ciu-theorems are obtained from our context lemmas. Our results reestablish several context lemmas already proved in the literature, and also provide some new context lemmas as well as some new variants of the ciu-theorem. To make the results widely applicable, we use a higher-order abstract syntax that allows untyped calculi as well as certain simple typing schemes. The approach may lead to a unifying view of higher-order calculi, reduction, and observational equality.
This paper proves several generic variants of context lemmas and thus contributes to improving the tools for developing an observational semantics based on a reduction semantics for a language. The context lemmas are provided for may- as well as two variants of must-convergence and a wide class of extended lambda calculi, which satisfy certain abstract conditions. The calculi must have a form of node sharing, e.g. plain beta reduction is not permitted. There are two variants: weakly sharing calculi, where beta-reduction is only permitted for arguments that are variables, and strongly sharing calculi, which roughly correspond to call-by-need calculi, where beta-reduction is completely replaced by a sharing variant. The calculi must obey three abstract assumptions, which are in general easily recognizable given the syntax and the reduction rules. The generic context lemmas have as instances several context lemmas already proved in the literature for specific lambda calculi with sharing. The scope of the generic context lemmas comprises not only call-by-need calculi, but also call-by-value calculi with a form of built-in sharing. Investigations into other, new variants of extended lambda calculi with sharing, where the language or the reduction rules and/or strategy varies, will be simplified by our result, since specific context lemmas are immediately derivable from the generic context lemma, provided our abstract conditions are met.
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation and a non-deterministic operator amb that is locally bottom-avoiding. We use a small-step operational semantics in the form of a single-step rewriting system that defines a (non-deterministic) normal order reduction. This strategy can be made fair by adding resources for bookkeeping. As equational theory we use contextual equivalence, i.e. terms are equal if, plugged into any program context, their termination behaviour is the same, where we use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We show that we can drop the fairness condition for equational reasoning, since the valid equations w.r.t. normal order reduction are the same as for fair normal order reduction. We develop different proof tools for proving correctness of program transformations; in particular, a context lemma for may- as well as must-convergence is proved, which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. We also prove a standardisation theorem for fair normal order reduction. The structure of the ordering <=c is also analysed: Ω is not a least element, and <=c already implies contextual equivalence w.r.t. may-convergence.
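The combination of may- and must-convergence used as the observation in the amb calculus above can be illustrated on a toy nondeterministic expression language. The following Python sketch (not the paper's calculus; all names are invented for illustration) collects the possible outcomes of an erratic binary choice and checks the two convergence notions:

```python
# Toy illustration: may- and must-convergence for a nondeterministic
# expression tree with answers (Val), divergence (Bot), and erratic choice.
from dataclasses import dataclass


@dataclass
class Val:            # a converged answer
    v: int

@dataclass
class Bot:            # a diverging computation
    pass

@dataclass
class Choice:         # erratic nondeterministic choice between two branches
    left: object
    right: object


def outcomes(e):
    """Collect all possible evaluation outcomes of an expression."""
    if isinstance(e, Val):
        return {("val", e.v)}
    if isinstance(e, Bot):
        return {("bot", None)}
    return outcomes(e.left) | outcomes(e.right)


def may_converge(e):
    # some evaluation path reaches an answer
    return any(tag == "val" for tag, _ in outcomes(e))


def must_converge(e):
    # every evaluation path reaches an answer
    return all(tag == "val" for tag, _ in outcomes(e))


ex = Choice(Val(1), Bot())
print(may_converge(ex), must_converge(ex))   # True False
```

The example shows why both observations are needed for non-deterministic calculi: `Choice(Val(1), Bot())` may-converges but does not must-converge, so the two notions distinguish programs that a may-only semantics would equate.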
We develop a proof method to show that in a (deterministic) lambda calculus with letrec and equipped with contextual equivalence the call-by-name and the call-by-need evaluation are equivalent, and also that the unrestricted copy-operation is correct. Given a let-binding x = t, the copy-operation replaces an occurrence of the variable x by the expression t, regardless of the form of t. This gives an answer to unresolved problems in several papers, it adds a strong method to the tool set for reasoning about contextual equivalence in higher-order calculi with letrec, and it enables a class of transformations that can be used as optimizations. The method can be used in different kinds of lambda calculi with cyclic sharing. Probably it can also be used in non-deterministic lambda calculi if the variable x is “deterministic”, i.e., has no interference with non-deterministic executions. The main technical idea is to use a restricted variant of the infinitary lambda-calculus, whose objects are the expressions unrolled w.r.t. let, to define the infinite developments as a reduction calculus on the infinite trees, and to show a standardization theorem.
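The operational difference between the two strategies whose equivalence is claimed above can be made concrete with a small sketch. The Python fragment below (an illustration under invented names, not the paper's proof method) contrasts call-by-name, which re-evaluates an argument thunk at every use, with call-by-need, which memoises the first result; both return the same value:

```python
# Sketch: call-by-name vs call-by-need on the same argument thunk.
# The observable value agrees; only the number of evaluations differs.

count = {"n": 0}          # how often the argument was evaluated

def expensive():
    count["n"] += 1
    return 21

def call_by_name(thunk):
    return thunk() + thunk()          # argument evaluated at every use

def call_by_need(thunk):
    cache = []
    def force():
        if not cache:
            cache.append(thunk())     # evaluated at most once, then shared
        return cache[0]
    return force() + force()

count["n"] = 0
print(call_by_name(expensive), count["n"])   # 42 2
count["n"] = 0
print(call_by_need(expensive), count["n"])   # 42 1
```

This is exactly the sense in which call-by-need is an implementation of call-by-name: the sharing is invisible to any context that only observes results, which is what the contextual-equivalence result formalises.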
The goal of this report is to prove correctness of a considerable subset of transformations w.r.t. contextual equivalence in an extended lambda-calculus LS with case, constructors, seq, let, and choice, with a simple set of reduction rules; and to argue that an approximation calculus LA is equivalent to LS w.r.t. the contextual preorder, which enables the proof tool of simulation. Unfortunately, a direct proof appears to be impossible.
The correctness proof is by defining another calculus L comprising the complex variants of copy, case-reduction and seq-reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and allows a proof of correctness of transformations, and that the simple calculus LS, the calculus L, and the calculus LA all have an equivalent contextual preorder.
The calculus CHF models Concurrent Haskell extended by concurrent, implicit futures. It is a process calculus with concurrent threads, monadic concurrent evaluation, and includes a pure functional lambda-calculus which comprises data constructors, case-expressions, letrec-expressions, and Haskell’s seq. Futures can be implemented in Concurrent Haskell using the primitive unsafeInterleaveIO, which is available in most implementations of Haskell. Our main result is conservativity of CHF, that is, all equivalences of pure functional expressions are also valid in CHF. This implies that compiler optimizations and transformations from pure Haskell remain valid in Concurrent Haskell even if it is extended by futures. We also show that this is no longer valid if Concurrent Haskell is extended by the arbitrary use of unsafeInterleaveIO.
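The way futures can be built from a forked thread plus a lazily deferred read, as with forkIO and unsafeInterleaveIO in Concurrent Haskell, can be mimicked in Python. The sketch below (all names invented for illustration) starts the computation concurrently and blocks only when the future is forced:

```python
# Sketch of an implicit future: the computation runs in its own thread,
# and forcing the future blocks until the result is available. This mirrors
# the forkIO + unsafeInterleaveIO implementation pattern, not CHF itself.

import threading

def future(compute):
    box = {}
    done = threading.Event()

    def worker():
        box["result"] = compute()
        done.set()

    threading.Thread(target=worker, daemon=True).start()

    def force():
        done.wait()               # block until the concurrent thread finished
        return box["result"]
    return force

f = future(lambda: 6 * 7)   # starts computing immediately in the background
print(f())                  # forcing waits for, then returns, the result: 42
```

Forcing is idempotent here, since the event stays set and the boxed result is reused, which matches the single-assignment behaviour expected of a future.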
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq-operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by exploiting infinite trees, which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
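The idea behind the second translation, replacing letrec by fixpoint combinators, can be sketched for a pair of mutually recursive bindings. The Python fragment below (an illustration of the knot-tying idea, not the paper's encoding; Python is strict, so the functionals return functions rather than values) ties a two-binding letrec into a single recursive knot:

```python
# Sketch: `letrec x = F(x, y); y = G(x, y)` expressed without letrec,
# by closing two definitions over each other, fixpoint-combinator style.

def fix_pair(f, g):
    """Tie the recursive knot for two mutually dependent bindings."""
    def x(*args):
        return f(x, y)(*args)
    def y(*args):
        return g(x, y)(*args)
    return x, y

# Mutual recursion: evenness and oddness defined in terms of each other.
even, odd = fix_pair(
    lambda e, o: lambda n: True if n == 0 else o(n - 1),
    lambda e, o: lambda n: False if n == 0 else e(n - 1),
)
print(even(10), odd(10))   # True False
```

The generalisation to n bindings (a multi-fixpoint combinator over an n-tuple of functionals) follows the same pattern.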
Panel Sample Selection Models
The empirical evidence currently available in the literature regarding the effects of a country's IMF program participation on its output growth is rather inconclusive. In this paper we propose and estimate a panel data sample selection model featuring state dependence. As in this model the output growth effects of program participation can be conditional on the realization of a state variable (conditional pooling), our framework may reconcile previous empirical evidence based on models without state-dependent effects. We find that the effects of IMF program participation on output growth vary systematically with an index reflecting a country's institutional record, and that output growth effects of program participation are significantly positive only if the program participation is coupled with sufficient improvement of the institutional record.
Although oil price shocks have long been viewed as one of the leading candidates for explaining U.S. recessions, surprisingly little is known about the extent to which oil price shocks explain recessions. We provide the first formal analysis of this question with special attention to the possible role of net oil price increases in amplifying the transmission of oil price shocks. We quantify the conditional recessionary effect of oil price shocks in the net oil price increase model for all episodes of net oil price increases since the mid-1970s. Compared to the linear model, the cumulative effect of oil price shocks over the course of the next two years is much larger in the net oil price increase model. For example, oil price shocks explain a 3% cumulative reduction in U.S. real GDP in the late 1970s and early 1980s and a 5% cumulative reduction during the financial crisis. An obvious concern is that some of these estimates are an artifact of net oil price increases being correlated with other variables that explain recessions. We show that the explanatory power of oil price shocks largely persists even after augmenting the nonlinear model with a measure of credit supply conditions, of the monetary policy stance and of consumer confidence. There is evidence, however, that the conditional fit of the net oil price increase model is worse on average than the fit of the corresponding linear model, suggesting that for these episodes the cumulative effects of oil price shocks are much smaller, at most 1%.
Efforts to control bank risk address the wrong problem in the wrong way. They presume that the financial crisis was caused by CEOs who failed to supervise risk-taking employees. The responses focus on executive pay, believing that executives will bring non-executives into line—using incentives to manage risk-taking—once their own pay is regulated. What they overlook is the effect on non-executive pay of the competition for talent. Even if executive pay is regulated, and executives act in the bank’s best interests, they will still be trapped into providing incentives that encourage risk-taking by non-executives due to the negative externality that arises from that competition. Greater risk-taking can increase short-term profits and, in turn, the amount a non-executive receives, potentially at the expense of long-term bank value. Non-executives, therefore, have an incentive to incur significant risk upfront so long as they can depart for a new employer before any losses materialize. The result is an upward spiral in compensation—reducing an executive’s ability to set non-executive pay and the ability of any one bank to adjust compensation to reflect risk-taking and long-term outcomes. New regulation must address the tension between compensation and competition. Regulators should take account of the effect of competition on market-wide levels of pay, including by non-banks who compete for talent. The ability of non-executives to jump from a bank employer to another financial firm should also be limited. In addition, banks should be required to include a long-term equity component in non-executive pay, with subsequent employers being restricted from compensating a new employee for any losses she incurs related to her prior work.
We examine trust and trustworthiness of individuals with varying professional preferences and experiences. Our subjects study business and economics in Frankfurt, the financial center of Germany and continental Europe. In the trust game, subjects with a high interest in working in the financial industry return 25 percent less than subjects with a low interest. We find no evidence that the extent of professional experience in the financial industry has a negative impact on trustworthiness. We also do not find any evidence that the financial industry screens out less trustworthy individuals in the hiring process. In a prediction game that is strategically equivalent to the trust game, the amount sent by first-movers was significantly smaller when the second-mover indicated a high interest in working in finance. These results suggest that the financial industry attracts less trustworthy individuals, which may contribute to the current lack of trust in its employees.
In the wake of the Global Financial Crisis that started in 2007, policymakers were forced to respond quickly and forcefully to a recession caused not by short-term factors, but rather by an over-accumulation of debt by sovereigns, banks, and households: a so-called “balance sheet recession.” Though the nature of the crisis was understood relatively early on, policy prescriptions for how to deal with its consequences have continued to diverge. This paper gives a short overview of the prescriptions, the remaining challenges and key lessons for monetary policy.
In a contribution prepared for the Athens Symposium on “Banking Union, Monetary Policy and Economic Growth”, Otmar Issing describes forward guidance by central banks as the culmination of the idea of guiding expectations by pure communication. In practice, he argues, forward guidance has proved a misguided idea. What is presented as state-of-the-art monetary policy is an example of the pretence of knowledge. Forward guidance tries to give the impression of a kind of rule-based monetary policy. De facto, however, it is an overambitious discretionary approach which, to be successful, would need much more (or rather better) information than is currently available. In Issing's view, communication must be clear and honest about the limits of monetary policy in a world of uncertainty.
We investigate the relationship between anchoring and the emergence of bubbles in experimental asset markets. We show that setting a visual anchor at the fundamental value (FV) in the first period only is sufficient to eliminate or to significantly reduce bubbles in laboratory asset markets. If no FV-anchor is set, bubble-crash patterns emerge. Our results indicate that bubbles in laboratory environments are primarily sparked in the first period. If prices are initiated around the FV, they stay close to the FV over the entire trading horizon. Our insights can be related to initial public offerings and the interaction between prices set on pre-opening markets and subsequent intra-day price dynamics.
The observed hump-shaped life-cycle pattern in individuals' consumption cannot be explained by the classical consumption-savings model. We explicitly solve a model with utility of both consumption and leisure and with educational decisions affecting future wages. We show that optimal consumption is hump-shaped and determine the peak age. The hump results from consumption and leisure being substitutes and from the implicit price of leisure decreasing over time; more leisure means less education, which lowers future wages, and the present value of foregone wages decreases with age. Consumption is hump-shaped whether the wage is hump-shaped or increasing over life.
This paper provides a systematic analysis of individual attitudes towards ambiguity, based on laboratory experiments. The design of the analysis allows us to capture individual behavior across various levels of ambiguity, ranging from low to high. Attitudes towards risk and attitudes towards ambiguity are disentangled, providing pure measures of ambiguity aversion. Ambiguity aversion is captured in several ways, i.e. as a discount factor net of a risk premium, and as an estimated parameter in a generalized utility function. We find that ambiguity aversion varies across individuals, and with the level of ambiguity, being most prominent for intermediate levels. Around one third of subjects show no aversion, one third show maximum aversion, and one third show intermediate levels of ambiguity aversion, while there is almost no ambiguity seeking. While most theoretical work on ambiguity builds on maxmin expected utility (MEU), our results provide evidence that MEU does not adequately capture individual attitudes towards ambiguity for the majority of individuals. Instead, our results support models that allow for intermediate levels of ambiguity aversion. Moreover, we find risk aversion to be statistically unrelated to ambiguity aversion on average. Taken together, the results support the view that ambiguity is an important and distinct argument in decision making under uncertainty.
Motivated by the question whether sound and expressive applicative similarities for program calculi with should-convergence exist, this paper investigates expressive applicative similarities for the untyped call-by-value lambda-calculus extended with McCarthy's ambiguous choice operator amb. Soundness of the applicative similarities w.r.t. contextual equivalence based on may- and should-convergence is proved by adapting Howe's method to should-convergence. As usual for nondeterministic calculi, similarity is not complete w.r.t. contextual equivalence, which requires a rather complex counterexample as a witness. Also the call-by-value lambda-calculus with the weaker nondeterministic construct erratic choice is analyzed and sound applicative similarities are provided. This justifies the expectation that also for more expressive and call-by-need higher-order calculi there are sound and powerful similarities for should-convergence.
The pi-calculus is a well-analyzed model for mobile processes and mobile computations.
While many other process and lambda calculi that serve as core languages of higher-order concurrent and/or functional programming languages use a contextual semantics observing the termination behavior of programs in all program contexts, the traditional program equivalences in the pi-calculus are bisimulations and barbed testing equivalences, which observe the communication capabilities of processes under reduction and in contexts.
There is a distance between these two approaches to program equivalence which makes it hard to compare the pi-calculus with other languages. In this paper we contribute to bridging this gap by investigating a contextual semantics of the synchronous pi-calculus with replication and without sums.
To transfer contextual equivalence to the pi-calculus we add a process Stop as constant which indicates success and is used as the base to define and analyze the contextual equivalence which observes may- and should-convergence of processes.
We show as a main result that contextual equivalence in the pi-calculus with Stop conservatively extends barbed testing equivalence in the (Stop-free) pi-calculus. This implies that results on contextual equivalence can be directly transferred to the (Stop-free) pi-calculus with barbed testing equivalence.
We analyze the contextual ordering, prove some nontrivial process equivalences, and provide proof tools for showing contextual equivalences. Among them are a context lemma, and new notions of sound applicative similarities for may- and should-convergence.
Motivated by our experience in analyzing properties of translations between programming languages with observational semantics, this paper clarifies the notions, the relevant questions, and the methods, constructs a general framework, and provides several tools for proving various correctness properties of translations like adequacy and full abstractness. The presented framework can directly be applied to the observational equivalences derived from the operational semantics of programming calculi, and also to other situations, and thus has a wide range of applications.
Our motivation is the question whether the lazy lambda calculus, a pure lambda calculus with the leftmost outermost rewriting strategy, considered under observational semantics, or extensions thereof, are an adequate model for semantic equivalences in real-world purely functional programming languages, in particular for a pure core language of Haskell. We explore several extensions of the lazy lambda calculus: addition of a seq-operator, addition of data constructors and case-expressions, and their combination, focusing on conservativity of these extensions. In addition to untyped calculi, we study their monomorphically and polymorphically typed versions. For most of the extensions we obtain non-conservativity which we prove by providing counterexamples. However, we prove conservativity of the extension by data constructors and case in the monomorphically typed scenario.
We study consumption-portfolio and asset pricing frameworks with recursive preferences and unspanned risk. We show that in both cases, portfolio choice and asset pricing, the value function of the investor/representative agent can be characterized by a specific semilinear partial differential equation. To date, the solution to this equation has mostly been approximated by Campbell-Shiller techniques, without addressing general issues of existence and uniqueness. We develop a novel approach that rigorously constructs the solution by a fixed point argument. We prove that under regularity conditions a solution exists and establish a fast and accurate numerical method to solve consumption-portfolio and asset pricing problems with recursive preferences and unspanned risk. Our setting is not restricted to affine asset price dynamics. Numerical examples illustrate our approach.
We study self- and cross-excitation of shocks in the Eurozone sovereign CDS market. We adopt a multivariate setting with credit default intensities driven by mutually exciting jump processes, to capture the salient features observed in the data, in particular, the clustering of high default probabilities both in time (over days) and in space (across countries). The feedback between jump events and the intensity of these jumps is the key element of the model. We derive closed-form formulae for CDS prices, and estimate the model by matching theoretical prices to their empirical counterparts. We find evidence of self-excitation and asymmetric cross-excitation. Using impulse-response analysis, we assess the impact of shocks and a potential policy intervention not just on a single country under scrutiny but also, through the effect on cross-excitation risk which generates systemic sovereign risk, on other interconnected countries.
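The self-excitation mechanism, an intensity that jumps at each event and then decays back towards its base level, can be sketched as follows (all parameters and values are invented for illustration; this is not the paper's estimated model):

```python
# Illustrative sketch of a self-exciting (Hawkes-style) default intensity:
# each past event at time t_i contributes delta * exp(-kappa * (t - t_i)),
# so events temporarily raise the probability of further events.

import math

def intensity(t, base, delta, kappa, event_times):
    """lambda(t) = base + sum over past events of delta * exp(-kappa (t - t_i))."""
    return base + sum(
        delta * math.exp(-kappa * (t - ti)) for ti in event_times if ti <= t
    )

events = [1.0, 1.2]                          # two clustered shocks
lam_quiet = intensity(0.9, 0.5, 2.0, 3.0, events)    # before any event: base only
lam_excited = intensity(1.25, 0.5, 2.0, 3.0, events) # just after the cluster
print(lam_quiet < lam_excited)               # clustering raises the intensity: True
```

This feedback from events to intensity is what lets such models reproduce the clustering of high default probabilities in time; cross-excitation adds analogous terms driven by events in other countries.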
Exit strategies
(2014)
We study alternative scenarios for exiting the post-crisis fiscal and monetary accommodation using a macromodel where banks choose their capital structure and are subject to runs. Under a Taylor rule, the post-crisis interest rate hits the zero lower bound (ZLB) and remains there for several years. In that condition, pre-announced and fast fiscal consolidations dominate - based on output and inflation performance and bank stability - alternative strategies incorporating various degrees of gradualism and surprise. We also examine an alternative monetary strategy in which the interest rate does not reach the ZLB; the benefits from fiscal consolidation persist, but are more nuanced.
We study the behavioral underpinnings of adopting cash versus electronic payments in retail transactions. A novel theoretical and experimental framework is developed to primarily assess the impact of sellers’ service fees and buyers’ rewards from using electronic payments. Buyers and sellers face a coordination problem, independently choosing a payment method before trading. In the experiment, sellers readily adopt electronic payments but buyers do not. Eliminating service fees or introducing rewards significantly boosts the adoption of electronic payments. Hence, buyers’ incentives play a pivotal role in the diffusion of electronic payments but monetary incentives cannot fully explain their adoption choices. Findings from this experiment complement empirical findings based on surveys and field data.
This note proposes a new set-up for the fund backing the Single Resolution Mechanism (SRM). The proposed fund is a Multi-Tier Resolution Fund (MTRF), restricting the joint and several supranational liability to a limited range of losses, bounded by national liability at the upper and the lower end. The layers are, in ascending order: a national fund (first losses), a European fund (second losses), the national budget (third losses), the ESM (fourth losses, as a backstop for sovereigns). The system works like a reinsurance scheme, providing clear limits to European-level joint liability and thereby containing moral hazard. At the same time, it allows for some degree of risk sharing, which is important for financial stability if shocks to the financial system are exogenous (e.g., of a supranational macroeconomic nature). The text has four parts. Section A describes the operation of the Multi-Tier Resolution Fund, assuming the fund capital to be fully paid in ("Steady State"). Section B deals with the build-up phase of the fund capital ("Build-up"). Section C discusses how the proposal deals with the apparent incentive conflicts. The final Section D summarizes open questions that need further thought ("Open Questions").
Securities transaction tax in France: impact on market quality and inter-market price coordination
(2014)
The general concept of a securities transaction tax (STT) is controversial among academics and politicians. While theoretical research is quite advanced, empirical guidance in a fragmented market context is still scarce. Negative effects on market liquidity and market efficiency are theoretically predicted, but have not yet been empirically tested. In light of the agreement of eleven European member states to implement an STT, this study gives a comprehensive overview of the effects of the STT introduced in France in 2012 on liquidity demand, liquidity supply, volatility, and inter-market information transmission. The results show that the STT has led to a decline in liquidity demand, has had a detrimental effect on liquidity supply, and has negatively influenced the efficiency of inter-market information transmission. However, no effect on volatility is observed.
In the United States, on April 1, 2014, the set of rules commonly known as the "Volcker Rule", prohibiting proprietary trading activities in banks, became effective. The implementation of this rule took more than three years, as “proprietary trading” is an inherently vague concept, overlapping strongly with genuinely economically useful activities such as market-making. As a result, the final Rule is a complex and lengthy combination of prohibitions and exemptions.
In January 2014, the European Commission put forward its proposal on banking structural reform. The proposal includes a Volcker-like provision prohibiting large, systemically relevant financial institutions from engaging in proprietary trading or hedge fund-related business. This paper draws lessons from the US implementation process for the Volcker Rule that are relevant to the European regulatory process.
Financial innovation is, as usual, faster than regulation. New forms of speculation and intermediation are rapidly emerging. Largely as a result of the evaporation of trust in traditional financial intermediation, so-called peer-to-peer intermediation is playing an exponentially increasing role. The most prominent example at the moment is Bitcoin.
If one expects that shocks in these markets could also destabilize traditional financial markets, then regulatory measures will have to be extended to these innovations as well.
This policy letter provides an overview of the strengths, weaknesses, risks and opportunities of the upcoming comprehensive risk assessment, a euro area-wide evaluation of bank balance sheets and business models. If carried out properly, the 2014 comprehensive assessment will lead the euro area into a new era of banking supervision. Policy makers in euro area countries are now under severe pressure to define a credible backstop framework for banks. This framework, as the author argues, needs to be a broad, quasi-European system of mutually reinforcing backstops.
This article discusses the recent proposal for debt restructuring in the euro zone by Pierre Pâris and Charles Wyplosz. It argues that the plan cannot realize the promised debt relief without producing moral hazard. Ester Faia revisits the Redemption Fund proposed in November 2011 by the German Council of Economic Experts and argues that this plan, to date, still remains the most promising path towards successful debt restructuring in Europe.
On November 8, 2013, several members of the British House of Lords’ Subcommittee A conducted a hearing at the ECB in Frankfurt, Germany, on “Genuine Economic and Monetary Union and its Implications for the UK”. Professors Otmar Issing and Jan Pieter Krahnen were called as expert witnesses.
The testimony began with a general discussion of the elements considered necessary for a functioning internal market. Do economic union and monetary union require a fiscal union or even a political union, beyond the elements of the banking union currently being prepared? In this context, the critique of the German current account surplus and the international expectations that Germany stimulate internal demand to support growth in crisis countries were also discussed.
With regard to the monetary union, the members of the subcommittee asked for an assessment of how European nations and the banking industry would have fared in the banking crisis that followed the Lehman collapse, had there not been a common currency. Given the important role that the ECB has played in the course of the crisis management, the members further asked for an evaluation of the ECB's OMT program, and whether the monetary union needs common debt instruments that would allow the ECB to buy EU liabilities, comparable to the Fed buying US Treasury bonds. Finally, the dual role of the ECB in monetary policy and banking supervision was an issue touched on by several questions.
In many cases, the dire situation of public finances calls into question the very soundness of sovereigns and prompts corrective actions with far-reaching consequences. In this context, European authorities responded with several measures on different fronts, for instance by passing the "Fiscal Compact", which entered into force on January 1, 2013. Of critical importance in this framework is the assessment of a country's situation by way of statistical measures, in order to take corrective actions when called for according to the letter of the law. If these statistics are not correct, there is a risk of imposing draconian measures on countries that do not really need them.
Before the 2007–09 crisis, standard risk measurement methods substantially underestimated the threat to the financial system. One reason was that these methods didn’t account for how closely commercial banks, investment banks, hedge funds, and insurance companies were linked. As financial conditions worsened in one type of institution, the effects spread to others. A new method that more accurately accounts for these spillover effects suggests that hedge funds may have been central in generating systemic risk during the crisis.
Social impact bonds are a special type of bond whose purpose is to provide long term funds to projects with a social impact. Especially in the UK and in the US these bonds are increasingly being used to raise funds to finance government projects. Their return depends on the social improvements achieved. Especially in times of crisis, governments lack funds to prevent the social consequences of recessions. Faia argues that the European Union should develop an equivalent to the British Social Finance Ltd. to finance projects for social improvement.
Northerners are unwilling to invest in a South they perceive as unwilling to undertake necessary structural reforms, and Southerners are unwilling to invest in their own countries in a climate of austerity and policy uncertainty imposed, in their view, by the North. This results in a vicious cycle of mistrust. However, as the author argues, big steps in the direction of reforms may provide just enough thrust to break out of this vicious cycle, propel southern countries – and especially Greece – to a much happier future, and promote the chances for more balanced economic performance in North and South.
Social Security rules that determine retirement, spousal, and survivor benefits, along with benefit adjustments according to the age at which these are claimed, open up a complex set of financial options for household decisions. These rules influence optimal household asset allocation, insurance, and work decisions, subject to life cycle demographic shocks, such as marriage, divorce, and children. Our model-based research generates a wealth profile and a low and stable equity fraction consistent with empirical evidence. We confirm predictions that wives will claim retirement benefits earlier than husbands, while life insurance is mainly purchased by younger men. Our policy simulations imply that eliminating survivor benefits would sharply reduce claiming differences by sex while dramatically increasing men’s life insurance purchases.
One of the motivations for establishing a European banking union was the desire to break the ties between national regulators and domestic financial institutions in order to prevent regulatory capture. However, supervisory authority over the financial sector at the national level can also have valuable public benefits. The aim of this policy letter is to detail these public benefits in order to counter discussions that focus only on conflicts of interest. It is informed by an analysis of how financial institutions interacted with policy-makers in the design of national bank rescue schemes in response to the banking crisis of 2008. Using this information, it discusses the possible benefits of close cooperation between financial institutions and regulators and analyzes these in the wake of a European banking union.
This paper makes a conceptual contribution to the effect of monetary policy on financial stability. We develop a microfounded network model with endogenous network formation to analyze the impact of central banks' monetary policy interventions on systemic risk. Banks choose their portfolio, including their borrowing and lending decisions on the interbank market, to maximize profit subject to regulatory constraints in an asset-liability framework. Systemic risk arises in the form of multiple bank defaults driven by common shock exposure on asset markets, direct contagion via the interbank market, and firesale spirals. The central bank injects or withdraws liquidity on the interbank markets to achieve its desired interest rate target. A tension arises between the beneficial effects of stabilized interest rates and increased loan volume and the detrimental effects of higher risk taking incentives. We find that central bank supply of liquidity quite generally increases systemic risk.
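The direct-contagion channel described in the abstract (defaults propagating through interbank exposures) can be sketched with a minimal round-based default cascade. This is a generic Furfine-style illustration under an assumed loss given default, not the paper's microfounded network model, and all numbers are hypothetical.

```python
def default_cascade(equity, exposures, shock, lgd=0.6):
    """Round-based default cascade over an interbank network (illustrative).

    equity:    initial equity per bank
    exposures: exposures[i][j] = interbank claim of bank i on bank j
    shock:     initial asset-side loss per bank (common shock exposure)
    lgd:       assumed loss given default on interbank claims
    Returns the set of banks that have defaulted once the cascade settles.
    """
    n = len(equity)
    eq = [equity[i] - shock[i] for i in range(n)]
    defaulted = set()
    while True:
        newly = [i for i in range(n) if eq[i] < 0 and i not in defaulted]
        if not newly:
            return defaulted
        for j in newly:
            defaulted.add(j)
            # surviving creditors of j write down their claims on j
            for i in range(n):
                if i not in defaulted:
                    eq[i] -= lgd * exposures[i][j]

# hypothetical 3-bank chain: bank 0 lends to bank 1, bank 1 lends to bank 2;
# a shock to bank 2 wipes out its equity and the loss cascades back up the chain
hit = default_cascade(equity=[1.0, 1.0, 1.0],
                      exposures=[[0, 2, 0], [0, 0, 2], [0, 0, 0]],
                      shock=[0.0, 0.0, 1.5])
```

In this example all three banks fail even though only bank 2 is shocked, which is the sense in which network structure, not just the initial shock, drives systemic risk; the fire-sale spiral mentioned in the abstract would add a further price-mediated loss channel that this sketch omits.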
This paper explores the consequences of consumer education for prices and welfare in retail financial markets when some consumers are naive about shrouded add-on prices and firms try to exploit this. Allowing for different information and pricing strategies, we show that education is unlikely to push firms to disclose prices to all consumers, which would be socially efficient. Instead, price discrimination emerges as a new equilibrium. Further, due to a feedback on prices, education that is good for consumers who become sophisticated may be bad for consumers who stay naive, and even for the group of all consumers as a whole.
This paper investigates the role of monetary policy in the collapse in the long-term real interest rates in the decade before the onset of the financial crisis using a sample of five advanced economies (United States, United Kingdom, the euro area, Sweden and Canada). The results from an estimated panel VAR with monthly data show that, while monetary policy shocks had negligible effects on long-term real interest rates, shocks to the long-term real interest rates had a one-to-one effect on the short nominal rate.
This paper empirically tests the role of bank lending tightening on non-financial corporate (NFC) bond issuance in the eurozone. Utilizing a unique data set provided by the ECB Bank Lending Survey, we capture the "pure" credit supply effect on corporate external financing. We find that tightened credit standards positively affect NFC bond issuance: a one-percentage-point increase in banks reporting considerable tightening on loans leads to around a 7% increase in firms' bond issuance in the eurozone. Focusing on a spectrum of aspects contributing to bank credit tightening, we document that banks' balance sheet constraints, as well as the perception of risk, lead to significantly higher NFC bond issuance. In addition, we show that stricter lending conditions, such as wider margins, higher collateral requirements, and covenants, also significantly increase NFC bond issuance volumes. Furthermore, the impact of bank credit tightening on firms' bond issuance is observable mainly in core eurozone countries and not in peripheral countries, partially due to the underdevelopment of debt capital markets in the peripheral countries.
This paper investigates the determinants of value and growth investing in a large administrative panel of Swedish residents over the 1999-2007 period. We document strong relationships between a household’s portfolio tilt and the household’s financial and demographic characteristics. Value investors have higher financial and real estate wealth, lower leverage, lower income risk, lower human capital, and are more likely to be female than the average growth investor. Households actively migrate to value stocks over the life-cycle and, at higher frequencies, dynamically offset the passive variations in the value tilt induced by market movements. We verify that these results are not driven by cohort effects, financial sophistication, biases toward popular or professionally close stocks, or unobserved heterogeneity in preferences. We relate these household-level results to some of the leading explanations of the value premium.