This paper is the first to conduct an incentive-compatible experiment using real monetary payoffs to test the hypothesis of probabilistic insurance, which states that willingness to pay for insurance decreases sharply in the presence of even small default probabilities as compared to a risk-free insurance contract. In our experiment, 181 participants state their willingness to pay for insurance contracts with different levels of default risk. We find that the willingness to pay sharply decreases with increasing default risk. Our results hence strongly support the hypothesis of probabilistic insurance. Furthermore, we study the impact of customer reaction to default risk on an insurer’s optimal solvency level using our experimentally obtained data on insurance demand. We show that an insurer should choose to be default-free rather than having even a very small default probability. This risk strategy is also optimal when assuming substantial transaction costs for risk management activities undertaken to achieve the maximum solvency level.
We analyze the risk premium on bank bonds at origination with a special focus on the role of implicit and explicit public guarantees and the systemic relevance of the issuing institutions. By looking at the asset swap spread on 5,500 bonds, we find that explicit guarantees and sovereign creditworthiness have a substantial effect on the risk premium. In addition, while large institutions still enjoy lower issuance costs linked to the TBTF framework, we find evidence of enhanced market discipline for systemically important banks, which face, since the onset of the financial crisis, an increased premium on bond placements.
One of the motivations for establishing a European banking union was the desire to break the ties between national regulators and domestic financial institutions in order to prevent regulatory capture. However, supervisory authority over the financial sector at the national level can also have valuable public benefits. The aim of this policy letter is to detail these public benefits in order to counter discussions that focus only on conflicts of interest. It is informed by an analysis of how financial institutions interacted with policy-makers in the design of national bank rescue schemes in response to the banking crisis of 2008. Using this information, it discusses the possible benefits of close cooperation between financial institutions and regulators and analyzes these in the context of a European banking union.
The recent decline in euro area inflation has triggered new calls for additional monetary stimulus by the ECB in order to counter the threat of a self-reinforcing deflation and recession spiral. This note reviews the available evidence on inflation expectations, output gaps and other factors driving current inflation through the lens of the Phillips curve. It also draws a comparison to the Japanese experience with deflation in the late 1990s and the evidence from Japan concerning the output-inflation nexus at low trend inflation. The note concludes from this evidence that the risk of a self-reinforcing deflation remains very small. Thus, the ECB should await the impact of the long-term refinancing operations decided in June, which have the potential to induce substantial monetary accommodation once implemented for the first time in September.
The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes to utilize a missing observations consistent Kalman filter in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models.
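The key ingredient mentioned in this abstract, a Kalman filter that remains consistent when some observables are missing, can be sketched in a few lines: only the observed rows of the measurement equation enter the update step. This is a minimal illustration under assumed state-space matrices, not the authors' estimator.

```python
import numpy as np

def kf_step(a, P, y, Z, H, T, Q):
    """One filter step for the state-space model
       y_t = Z a_t + e_t,  a_{t+1} = T a_t + u_t.
    Missing observations are encoded as NaN in y and simply
    dropped from the measurement equation before updating."""
    obs = ~np.isnan(y)                      # which observables are present
    if obs.any():
        Zo, yo = Z[obs], y[obs]             # observed rows only
        Ho = H[np.ix_(obs, obs)]
        v = yo - Zo @ a                     # prediction error on observed rows
        F = Zo @ P @ Zo.T + Ho
        K = P @ Zo.T @ np.linalg.inv(F)
        a, P = a + K @ v, P - K @ Zo @ P    # update with available data only
    return T @ a, T @ P @ T.T + Q          # predict next state

# illustrative one-dimensional state, two observables, second one missing
a, P = np.zeros(1), np.eye(1)
Z = np.array([[1.0], [0.5]]); H = 0.1 * np.eye(2)
T = np.array([[0.9]]); Q = np.array([[0.2]])
y = np.array([1.2, np.nan])
a, P = kf_step(a, P, y, Z, H, T, Q)
```

Running the prediction errors v and their variances F through the Gaussian density at each step yields the predictive likelihood for whichever subset of variables is treated as observed.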
This paper contrasts the recent European initiatives on regulating corporate groups with alternative approaches to the phenomenon. In doing so it pays particular regard to the German codified law on corporate groups as the polar opposite to the piecemeal approach favored by E.U. legislation.
It finds that the European Commission’s proposal to submit (significant) related party transactions to enhanced transparency, outside fairness review, and ex ante shareholder approval is both flawed in its design and based on contestable assumptions on informed voting of institutional investors. In particular, the contemplated exemption for transactions with wholly owned subsidiaries allows controlling shareholders to circumvent the rule extensively. Moreover, vesting voting rights with (institutional) investors will not lead to the informed assessment that is hoped for, because these investors will rationally abstain from active monitoring and rely on proxy advisory firms instead whose competency to analyze non-routine significant related party transactions is questionable.
The paper further delineates that the proposed recognition of an overriding interest of the group requires strong counterbalances to adequately protect minority shareholders and creditors. Hence, if the Commission chooses to go down this route it might end up with a comprehensive regulation that is akin to the unpopular Ninth Company Law Directive in spirit, though not in content. The latter prediction is corroborated by the pertinent parts of the proposal for a European Model Company Act.
How special are they? Targeting systemic risk by regulating shadow banking (October 5, 2014)
(2014)
This essay argues that at least some of the financial stability concerns associated with shadow banking can be addressed by an approach to financial regulation that imports its functional foundations more vigorously into the interpretation and implementation of existing rules. It shows that the general policy goals of prudential banking regulation remain constant over time despite dramatic transformations in the financial and technological landscape. Moreover, these overarching policy goals also legitimize intervention in the shadow banking sector. On these grounds, this essay encourages a more normative construction of available rules that potentially limits both the scope for regulatory arbitrage and the need for ever more rapid updates and a constant increase in the complexity of the regulatory framework. By tying the regulatory treatment of financial innovation closely to existing prudential rules and their underlying policy rationales, the proposed approach potentially ends the socially wasteful race between hare and tortoise that characterizes the relation between regulators and a highly dynamic industry. In doing so it does not generally hamper market participants’ efficient discoveries where disintermediation proves socially beneficial. Instead, it only weeds out rent-seeking circumventions of existing rules and standards.
We examine both the degree and the structural stability of inflation persistence at different quantiles of the conditional inflation distribution. Previous research focused exclusively on persistence at the conditional mean of the inflation rate. As economic theory provides reasons for inflation persistence to differ across conditional quantiles, this is a potentially severe constraint. Conventional studies of inflation persistence cannot identify changes in persistence at selected quantiles that leave persistence at the median of the distribution unchanged. Based on post-war US data we indeed find robust evidence for a structural break in persistence at all quantiles of the inflation process in the early 1980s. While prior to the 1980s inflation was not mean reverting, quantile autoregression based unit root tests suggest that since the end of the Volcker disinflation the unit root can be rejected at every quantile of the conditional inflation distribution.
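The idea of measuring persistence at different conditional quantiles can be sketched with a crude grid-search quantile autoregression: for each candidate slope, pick the intercept that minimizes the check loss at quantile tau and keep the slope with the lowest loss. This is an illustration on simulated data with assumed parameters, not the authors' estimator or test.

```python
import numpy as np

def quantile_ar1_persistence(x, tau, rhos=np.linspace(-0.5, 1.2, 341)):
    """Estimate AR(1) persistence of series x at quantile tau by
    minimizing the quantile (check) loss over a grid of slopes."""
    y, x_lag = x[1:], x[:-1]
    best_rho, best_loss = None, np.inf
    for rho in rhos:
        u = y - rho * x_lag
        # the loss-minimizing intercept at quantile tau is the tau-quantile of u
        e = u - np.quantile(u, tau)
        loss = np.sum(e * (tau - (e < 0)))   # check loss rho_tau(e)
        if loss < best_loss:
            best_rho, best_loss = rho, loss
    return best_rho

# simulated AR(1) with persistence 0.7 at every quantile (Gaussian errors)
rng = np.random.default_rng(0)
eps = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.7 * x[t - 1] + eps[t]

rho_median = quantile_ar1_persistence(x, 0.5)
```

For a structural-break analysis like the one described above, the same estimate would be computed on sub-samples (pre- and post-1980s) and compared across quantiles.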
Obstetrical care as a matter of time: ultrasound screening in anticipatory regimes of pregnancy
(2014)
This article explores the ways in which ultrasound screening influences the temporal dimensions of prevention in the obstetrical management of pregnancy. Drawing on praxeographic perspectives and empirically based on participant observation of ultrasound examinations in obstetricians’ offices, it asks how ultrasound scanning facilitates anticipatory modes of pregnancy management, and investigates the entanglement of different notions of time and temporality in the highly risk-oriented modes of prenatal care in Germany. Arguing that the paradoxical temporality of prevention – acting now in the name of the future – is intensified by ultrasound screening, I show how the attribution of risk regarding foetal growth in prenatal check-ups is based on the fragmentation of procreative time and ask how time standards come into play, how pregnancy is located in calendrical time, and how notions of foetal time and the everyday life times of pregnant women clash during negotiations between obstetricians and pregnant women about the determination of the due date. By analysing temporality as a practical accomplishment via technological devices such as ultrasound, the paper contributes to debates in feminist STS studies on the role of time in reproduction technologies and the management of pregnancy and birth in contemporary societies.
We examine the effects of credit default swaps (CDS), a major type of over-the-counter derivative, on the corporate liquidity management of the reference firms. CDS help firms to access the credit market since the lenders can hedge their credit risk more easily using these contracts. However, CDS-protected creditors can be tougher in debt renegotiations and less willing to support distressed borrowers, causing some firms to become more cautious. Consequently, we find that firms hold significantly more cash after the inception of CDS trading on their debt. The increase in cash holdings by CDS firms is more pronounced for financially constrained firms and firms facing higher refinancing risk. Moreover, bank relationships and outstanding credit facilities intensify the CDS effect on cash holding. Finally, firms with greater financial expertise hold more cash when their debt is referenced by CDS. These findings suggest that CDS, which are primarily a risk management tool for lenders, induce firms to adopt more conservative liquidity policies.
Robustness, validity, and significance of the ECB's asset quality review and stress test exercise
(2014)
As we are moving toward a eurozone banking union, the European Central Bank (ECB) is going to take over the regulatory oversight of 128 banks in November 2014. To that end, the ECB conducted a comprehensive assessment of these banks, which included an asset quality review (AQR) and a stress test. The fundamental question is how accurately the financial condition of these banks will have been assessed by the ECB when it commences its regulatory oversight. Can the comprehensive assessment lead to a full repair of banks’ balance sheets, so that the ECB takes over financially sound banks, and is the necessary regulation in place to facilitate this? Overall, the evidence presented in this paper, based on the design of the comprehensive assessment as well as our own stress test exercises, suggests that the ECB’s assessment might not comprehensively deal with the problems in the financial sector, and risks may remain that will pose substantial threats to financial stability in the eurozone.
Efforts to control bank risk address the wrong problem in the wrong way. They presume that the financial crisis was caused by CEOs who failed to supervise risk-taking employees. The responses focus on executive pay, believing that executives will bring non-executives into line—using incentives to manage risk-taking—once their own pay is regulated. What they overlook is the effect on non-executive pay of the competition for talent. Even if executive pay is regulated, and executives act in the bank’s best interests, they will still be trapped into providing incentives that encourage risk-taking by non-executives due to the negative externality that arises from that competition. Greater risk-taking can increase short-term profits and, in turn, the amount a non-executive receives, potentially at the expense of long-term bank value. Non-executives, therefore, have an incentive to incur significant risk upfront so long as they can depart for a new employer before any losses materialize. The result is an upward spiral in compensation—reducing an executive’s ability to set non-executive pay and the ability of any one bank to adjust compensation to reflect risk-taking and long-term outcomes. New regulation must address the tension between compensation and competition. Regulators should take account of the effect of competition on market-wide levels of pay, including by non-banks who compete for talent. The ability of non-executives to jump from a bank employer to another financial firm should also be limited. In addition, banks should be required to include a long-term equity component in non-executive pay, with subsequent employers being restricted from compensating a new employee for any losses she incurs related to her prior work.
Motivated by the question of whether sound and expressive applicative similarities for program calculi with should-convergence exist, this paper investigates expressive applicative similarities for the untyped call-by-value lambda-calculus extended with McCarthy's ambiguous choice operator amb. Soundness of the applicative similarities w.r.t. contextual equivalence based on may- and should-convergence is proved by adapting Howe's method to should-convergence. As usual for nondeterministic calculi, similarity is not complete w.r.t. contextual equivalence, which requires a rather complex counterexample as a witness. Also the call-by-value lambda-calculus with the weaker nondeterministic construct erratic choice is analyzed and sound applicative similarities are provided. This justifies the expectation that there are sound and powerful similarities for should-convergence also for more expressive call-by-need higher-order calculi.
I analyze critical illness insurance in a consumption-investment model over the life cycle. I numerically solve a model with stochastic mortality risk and health shock risk. These shocks are interpreted as critical illnesses and can negatively affect the expected remaining lifetime, health expenses, and income. In order to hedge the health expense effect of a shock, the agent has the option to purchase critical illness insurance. My results highlight that critical illness insurance is strongly desired by the agents. With an insurance profit of 20%, nearly all agents purchase the insurance in the working stage of the life cycle and more than 50% of the agents purchase the insurance during retirement. With an insurance profit of 200%, still nearly all working agents purchase the insurance, whereas there is little demand in the retirement stage.
I numerically solve realistically calibrated life cycle consumption-investment problems in continuous time, featuring stochastic mortality risk driven by jumps, unspanned labor income, short-sale and liquidity constraints, and a simple insurance product. I compare models with deterministic and stochastic hazard rates of death to a model without mortality risk. Mortality risk has only minor effects on the optimal controls early in the life cycle but becomes crucial in later years. A diffusive component in the hazard rate of death has no significant impact, whereas a jump component is desired by the agent and influences optimal controls and wealth evolution. The insurance is used to secure the optimal bequest, such that there is no accidental bequest. In the absence of the insurance, the largest part of bequests is accidental.
The pi-calculus is a well-analyzed model for mobile processes and mobile computations.
While many other process and lambda calculi that serve as core languages of higher-order concurrent and/or functional programming languages use a contextual semantics observing the termination behavior of programs in all program contexts, traditional program equivalences in the pi-calculus are bisimulations and barbed testing equivalences, which observe the communication capabilities of processes under reduction and in contexts.
There is a distance between these two approaches to program equivalence which makes it hard to compare the pi-calculus with other languages. In this paper we contribute to bridging this gap by investigating a contextual semantics of the synchronous pi-calculus with replication and without sums.
To transfer contextual equivalence to the pi-calculus we add a process Stop as a constant which indicates success and is used as the base to define and analyze the contextual equivalence which observes may- and should-convergence of processes.
We show as a main result that contextual equivalence in the pi-calculus with Stop conservatively extends barbed testing equivalence in the (Stop-free) pi-calculus. This implies that results on contextual equivalence can be directly transferred to the (Stop-free) pi-calculus with barbed testing equivalence.
We analyze the contextual ordering, prove some nontrivial process equivalences, and provide proof tools for showing contextual equivalences. Among them are a context lemma, and new notions of sound applicative similarities for may- and should-convergence.
We outline a procedure for consistent estimation of marginal and joint default risk in the euro area financial system. We interpret the latter risk as the intrinsic financial system fragility and derive several systemic fragility indicators for euro area banks and sovereigns, based on CDS prices. Our analysis documents that although the fragility of the euro area banking system had started to deteriorate before Lehman Brothers' bankruptcy filing, investors did not expect the crisis to affect euro area sovereigns' solvency until September 2008. Since then, and especially after November 2009, joint sovereign default risk has outpaced the rise of systemic risk within the banking system.
This country report was prepared for the 19th World Congress of the International Academy of Comparative Law in Vienna in 2014. It is structured as a questionnaire and provides an overview of the legal framework for Free and Open Source Software (FOSS) and other alternative license models, such as Creative Commons, under German law. The first set of questions addresses the applicable statutory provisions and the reported case law in this area. The second section concerns contractual issues, in particular with regard to the interpretation and validity of open content licenses. The third section deals with copyright aspects of open content models, for example regarding revocation rights and rights to equitable remuneration. The final set of questions pertains to patent, trademark and competition law issues of open content licenses.
This paper distils three lessons for bank regulation from the experience of the 2009-12 euro-area financial crisis. First, it highlights the key role that sovereign debt exposures of banks have played in the feedback loop between bank and fiscal distress, and inquires how the regulation of banks’ sovereign exposures in the euro area should be changed to mitigate this feedback loop in the future. Second, it explores the relationship between the forbearance of non-performing loans by European banks and the tendency of EU regulators to rescue rather than resolve distressed banks, and asks to what extent the new regulatory framework of the euro-area “banking union” can be expected to mitigate excessive forbearance and facilitate resolution of insolvent banks. Finally, the paper highlights that capital requirements based on the ratio of Tier-1 capital to banks’ risk-weighted assets were massively gamed by large banks, which engaged in various forms of regulatory arbitrage to minimize their capital charges while expanding leverage. This argues in favor of relying on a set of simpler and more robust indicators to determine banks’ capital shortfall, such as book and market leverage ratios.
Has economic research been helpful in dealing with the financial crises of the early 2000s? On the whole, the answer is negative, although there are bright spots. Economists largely failed to predict both crises, mainly because most of them were not analytically equipped to understand them, in spite of their recurrence in the last 25 years. In the pre-crisis period, however, there were important exceptions – theoretical and empirical strands of research that laid the basis for our current thinking about financial crises. Since 2008, a flurry of new studies offered several different interpretations of the US crisis: to some extent, they point to potentially complementary factors, but disagree on their relative importance, and therefore on policy recommendations. Research on the euro debt crisis has so far been much more limited: even Europe-based researchers – including CEPR ones – have often directed their attention more to the US crisis than to that occurring on their doorstep. In terms of impact on policy and regulatory reform, the record is uneven. On the one hand, the swift and massive liquidity provision by central banks in the wake of both crises is, at least partly, to be credited to previous research on the role of central banks as lenders of last resort in crises and on the real effects of bank lending and monetary policy. On the other hand, economists have had limited impact on the reform of prudential and security market regulation. In part, this is due to their neglect of important regulatory choices, which policy-makers are therefore left to take without the guidance of academic research-based analysis.
Especially in developing countries, credit constraints are often perceived as one of the most important market frictions constraining firm innovation and growth. Huge amounts of public money are devoted to the removal of such constraints, but their effectiveness is still the subject of an intense policy debate. This paper contributes to this debate by analysing the effects of loans from the Brazilian Development Bank (BNDES). It finds that, before receiving BNDES support, granted firms are indeed more credit constrained than comparable non-granted firms. It also finds that BNDES support allows granted firms to achieve the same level of performance as similar non-granted firms that are not credit constrained. However, it does not allow granted firms to outperform similar non-granted ones.
What would be the economic effects of the UK leaving the European Union on living standards of British people? We focus on the effects of trade on welfare net of lower fiscal transfers to the EU. We use a standard quantitative static general equilibrium trade model with multiple sectors, countries and intermediates, as in Costinot and Rodriguez-Clare (2013). Static losses range between 1.13% and 3.09% of GDP, depending on the assumptions used in our counterfactual scenarios. Including dynamic effects could more than double such losses.
This chapter discusses whether and how 'new quantitative trade models' (NQTMs) can be fruitfully applied to quantify the welfare effects of trade liberalization, thus shedding light on the trade-related effects of further European integration. On the one hand, it argues that NQTMs indeed have the potential to supplement traditional 'computable general equilibrium' (CGE) analysis thanks to their tight connection between theory and data, appealing micro-theoretical foundations, and enhanced attention to the estimation of structural parameters. On the other hand, further work is still needed in order to fully exploit such potential.
What happened in Cyprus? The economic consequences of the last communist government in Europe
(2014)
This paper reviews developments in the Cypriot economy following the introduction of the euro on 1 January 2008 and leading to the economic collapse of the island five years later. The main cause of the collapse is identified with the election of a communist government in February 2008, within two months of the introduction of the euro, and its subsequent choices for action and inaction on economic policy matters. The government allowed a rapid deterioration of public finances, and despite repeated warnings, damaged the country's creditworthiness and lost market access in May 2011. The destruction of the island's largest power station in July 2011 subsequently threw the economy into recession. Together with the intensification of the euro area crisis in the summer and fall of 2011, these events weakened the banking system, which was vulnerable due to its exposure in Greece. Rather than deal with its fiscal crisis, the government secured a loan from the Russian government that allowed it to postpone action until after the February 2013 election. Rather than protecting the banking system, the government imposed losses on banks, and the communist party coordinated a campaign against them, using it as a platform for the February 2013 election. The strategy succeeded in delaying resolution of the crisis and avoiding short-term political cost for the communist party before the election, but also in precipitating a catastrophe right after the election.
Are rules and boundaries sufficient to limit harmful central bank discretion? Lessons from Europe
(2014)
Marvin Goodfriend’s (2014) insightful, informative and provocative work explains concisely and convincingly why the Fed needs rules and boundaries. This paper reviews the broader institutional design problem regarding the effectiveness of the central bank in practice and confirms the need for rules and boundaries. The framework proposed for improving the Fed incorporates key elements that have already been adopted in the European Union. The case of ELA provision by the ECB and the Central Bank of Cyprus to Marfin-Laiki Bank during the crisis, however, suggests that the existence of rules and boundaries may not be enough to limit harmful discretion. During a crisis, novel interpretations of the legal authority of the central bank may be introduced to create a grey area that might be exploited to justify harmful discretionary decisions even in the presence of rules and boundaries. This raises the question of how to ensure that rules and boundaries are respected in practice.
We analyze the differential impact of domestic and foreign monetary policy on the local supply of bank credit in domestic and foreign currencies. We use a novel supervisory dataset from Hungary that records all bank lending to firms, including its currency denomination. Accounting for time-varying firm-specific heterogeneity in loan demand, we find that a lower domestic interest rate expands the supply of credit in the domestic but not in the foreign currency. A lower foreign interest rate, on the other hand, expands lending by weakly versus highly capitalized banks relatively more in the foreign than in the domestic currency.
Inflation differentials in the euro area have been persistent since the adoption of the single currency. This paper analyzes the impact of product and labor market regulation on inflation in a sample of 11 countries. The results show that, after the adoption of the euro, product market deregulation has a relevant and significant effect on the level of inflation, while higher labor market regulation increases the responsiveness of inflation to the output gap.
This paper investigates the role of monetary policy in the collapse in the long-term real interest rates in the decade before the onset of the financial crisis using a sample of five advanced economies (United States, United Kingdom, the euro area, Sweden and Canada). The results from an estimated panel VAR with monthly data show that, while monetary policy shocks had negligible effects on long-term real interest rates, shocks to the long-term real interest rates had a one-to-one effect on the short nominal rate.
Riley's (1979) reactive equilibrium concept addresses problems of equilibrium existence in competitive markets with adverse selection. The game-theoretic interpretation of the reactive equilibrium concept in Engers and Fernandez (1987) yields the Rothschild-Stiglitz (1976)/Riley (1979) allocation as an equilibrium allocation; however, multiplicity of equilibria emerges. In this note we embed the reactive equilibrium's logic in a dynamic market context with active consumers. We show that the Riley/Rothschild-Stiglitz contracts constitute the unique equilibrium allocation in any pure-strategy subgame perfect Nash equilibrium.
Europe's debt crisis casts doubt on the effectiveness of fiscal austerity in highly integrated economies. Closed-economy models overestimate its effectiveness, because they underestimate tax-base elasticities and ignore cross-country tax externalities. In contrast, we study tax responses to debt shocks in a two-country model with endogenous utilization that captures those externalities and matches the capital-tax-base elasticity. Quantitative results show that unilateral capital tax hikes cannot restore fiscal solvency in Europe, and have large negative (positive) effects at "home" ("abroad"). Restoring solvency via either Nash competition or cooperation reduces (increases) capital (labor) taxes significantly, and leaves countries with larger debt shocks preferring autarky.
In my paper I take issue with proponents of ‘intersectionality’ who believe that a theoretical concept cannot or should not be detached from its original context of invention. Instead, I argue that the traveling of theory in a global context automatically involves appropriations, amendments, and changes to the original meaning. However, I reject the idea that ‘intersectionality’ can be used as a free-floating signifier; on the contrary, it has to be embedded in the respective (historical, social, cultural) context in which it is used. I will start by mapping some of the current debates engaging with the pros and cons of the global implementation of the concept (the controversy about master categories, the dispute about the centrality of ‘race’, and the argument about the amendment of categories). I will then turn to my own use of ‘intersectionality’ as a methodological tool (elaborated in Lutz and Davis 2005). Here, we shifted attention from how structures of racism, class discrimination and sexism determine individuals’ identities and practices to how individuals continually and flexibly negotiate their multiple and converging identities in the context of everyday life. Introducing the term doing intersectionality, we explored how individuals creatively and often in surprising ways draw upon various aspects of their multiple identities as a resource to gain control over their lives.
In my paper I will show how ‘gender’ or ‘ethnicity’ are invariably linked to structures of domination, but can also mobilize or deconstruct disempowering discourses, even undermine and transform oppressive practices.
This paper investigates extensions of the method of endogenous gridpoints (ENDGM) introduced by Carroll (2006) to higher dimensions with more than one continuous endogenous state variable. We compare three different categories of algorithms: (i) the conventional method with exogenous grids (EXOGM), (ii) the pure method of endogenous gridpoints (ENDGM) and (iii) a hybrid method (HYBGM). ENDGM comes along with Delaunay interpolation on irregular grids. Comparison of methods is done by evaluating speed and accuracy. We find that HYBGM and ENDGM both dominate EXOGM. In an infinite horizon model, ENDGM also always dominates HYBGM. In a finite horizon model, the choice between HYBGM and ENDGM depends on the number of gridpoints in each dimension. With less than 150 gridpoints in each dimension ENDGM is faster than HYBGM, and vice versa. For a standard choice of 25 to 50 gridpoints in each dimension, ENDGM is 1.4 to 1.7 times faster than HYBGM in the finite horizon version and 2.4 to 2.5 times faster in the infinite horizon version of the model.
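The core of the endogenous gridpoint idea can be illustrated compactly. Below is a minimal one-dimensional sketch (not the paper's multi-dimensional setting), assuming CRRA utility, deterministic income, a terminal rule c(m) = m, and illustrative parameter values; none of these choices are taken from the paper.

```python
# One backward-induction step of Carroll's (2006) endogenous gridpoint
# method (ENDGM) for a deterministic consumption-savings problem with
# CRRA utility. All parameter values are illustrative.
beta, R, gamma, y = 0.96, 1.03, 2.0, 1.0

def egm_step(a_grid, c_next):
    """Given next-period consumption policy c_next(m) on cash-on-hand m,
    return the endogenous cash-on-hand grid and today's consumption.
    The Euler equation c^(-gamma) = beta*R*c'^(-gamma) is inverted in
    closed form at each end-of-period asset gridpoint a', so no
    root-finding is needed."""
    m_grid, c_grid = [], []
    for a in a_grid:
        c_nxt = c_next(R * a + y)                        # consumption tomorrow
        c = (beta * R * c_nxt ** (-gamma)) ** (-1.0 / gamma)
        m_grid.append(a + c)                             # endogenous gridpoint
        c_grid.append(c)
    return m_grid, c_grid

# Terminal period: consume everything, c(m) = m.
m, c = egm_step([0.5, 1.0, 2.0], lambda m: m)
```

The computational advantage over an exogenous-grid method is visible in `egm_step`: the Euler equation is inverted analytically per gridpoint, and the grid over cash-on-hand `m` comes out endogenously, which is exactly what makes the irregular (Delaunay-interpolated) grids of the multi-dimensional ENDGM necessary.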
On 23 July 2014, the U.S. Securities and Exchange Commission (SEC) passed the “Money Market Reform: Amendments to Form PF,” designed to prevent investor runs on money market mutual funds such as those experienced in institutional prime funds following the bankruptcy of Lehman Brothers. The present article evaluates the reform choices in the U.S. and draws conclusions for the proposed EU regulation of money market funds.
This paper investigates the impact of news media sentiment on financial market returns and volatility in the long term. We hypothesize that the way the media formulate and present news to the public produces different perceptions and, thus, induces different investor behavior. To analyze such framing effects we distinguish between optimistic and pessimistic news frames. We construct a monthly media sentiment indicator by taking the ratio of the number of newspaper articles that contain predetermined negative words to the number of newspaper articles that contain predetermined positive words in the headline and/or the lead paragraph. Our results indicate that pessimistic news media sentiment is positively related to global market volatility and negatively related to global market returns 12 to 24 months in advance. We show that our media sentiment indicator reflects very well the financial market crises and pricing bubbles over the past 20 years.
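The indicator construction described above reduces to a simple monthly ratio. A schematic sketch follows, in which the word lists, the article data and the helper `monthly_sentiment_ratio` are illustrative placeholders — the paper's actual predetermined word lists are not given in the abstract.

```python
from collections import Counter

# Hypothetical word lists; the paper's predetermined lists are not
# reproduced in the abstract.
NEGATIVE_WORDS = {"crisis", "crash", "recession", "fear"}
POSITIVE_WORDS = {"boom", "rally", "growth", "optimism"}

def contains_any(text, words):
    """True if any of the given words appears in the text."""
    return bool(set(text.lower().split()) & words)

def monthly_sentiment_ratio(articles):
    """articles: list of (month, headline_or_lead) pairs.
    Returns {month: pessimistic-article count / optimistic-article count}."""
    neg, pos = Counter(), Counter()
    for month, text in articles:
        if contains_any(text, NEGATIVE_WORDS):
            neg[month] += 1
        if contains_any(text, POSITIVE_WORDS):
            pos[month] += 1
    return {m: neg[m] / pos[m] for m in pos if pos[m] > 0}

articles = [
    ("2008-10", "Markets crash as fear spreads"),
    ("2008-10", "Recession worries deepen the crisis"),
    ("2008-10", "A lone rally amid the gloom"),
]
ratios = monthly_sentiment_ratio(articles)  # {'2008-10': 2.0}
```

A ratio above one marks a pessimistic month; the paper relates the resulting monthly series to market returns and volatility 12 to 24 months ahead.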
The record-breaking prices observed in the art market over the last three years raise the question of whether we are experiencing a speculative bubble. Given the difficulty of determining the fundamental value of artworks, we apply a right-tailed unit root test with forward recursive regressions (SADF test) to detect explosive behaviors directly in the time series of four different art market segments (“Impressionist and Modern”, “Post-war and Contemporary”, “American”, and “Latin American”) for the period from 1970 to 2013. We identify two historical speculative bubbles and find an explosive movement in today’s “Post-war and Contemporary” and “American” fine art market segments.
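The SADF procedure computes unit-root statistics over forward-expanding windows with a fixed start date and takes their supremum; large positive values signal explosive behavior. A self-contained sketch follows, using a plain Dickey-Fuller regression without lag augmentation (a simplification of the test actually used) and simulated series in place of art-index data.

```python
import random

def df_tstat(y):
    """Dickey-Fuller t-statistic (no lag augmentation) from the OLS
    regression dy_t = a + b*y_{t-1} + e; large positive values signal
    explosive (bubble-like) dynamics."""
    n = len(y) - 1
    x = y[:-1]
    dy = [y[i + 1] - y[i] for i in range(n)]
    mx, md = sum(x) / n, sum(dy) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (di - md) for xi, di in zip(x, dy)) / sxx
    a = md - b * mx
    resid = [di - a - b * xi for xi, di in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)
    return b / (s2 / sxx) ** 0.5

def sadf(y, min_window):
    """Supremum ADF: start date fixed, end date expands forward."""
    return max(df_tstat(y[:r]) for r in range(min_window, len(y) + 1))

# Simulated illustration: a random walk (unit root) vs. a mildly
# explosive process with autoregressive root 1.05.
random.seed(0)
rw, bubble = [0.0], [1.0]
for _ in range(120):
    rw.append(rw[-1] + random.gauss(0, 1))
    bubble.append(1.05 * bubble[-1] + random.gauss(0, 1))
```

On such data the explosive series produces a far larger SADF statistic than the random walk, which is the basis for dating bubble episodes in the four art-index series.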
This chapter analyzes the risk and return characteristics of investments in artists from the Middle East and Northern Africa (MENA) region over the sample period 2000 to 2012. With hedonic regression modeling we create an annual index that is based on 3,544 paintings created by 663 MENA artists. Our empirical results show that investing in such a hypothetical index provides strong financial returns. While the results show an exponential growth in sales since 2006, the geometric annual return of the MENA art index is a stable 13.9 percent over the whole period. We conclude that investing in MENA paintings would have been profitable but also note that we examined the performance of an emerging art market that has so far only seen an upward trend without any correction.
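The reported 13.9 percent is a geometric (compound) annual growth rate. A quick sketch of the computation, with illustrative index levels:

```python
def geometric_annual_return(index_start, index_end, years):
    """Compound annual growth rate implied by two index levels."""
    return (index_end / index_start) ** (1.0 / years) - 1

# A 13.9% geometric return over the 12 years from 2000 to 2012
# implies the index multiplied roughly 4.8-fold over the period.
implied_growth = (1 + 0.139) ** 12
```

Unlike an arithmetic mean of yearly returns, the geometric rate depends only on the start and end index levels, which is why a single stable figure can summarize the whole 2000–2012 period.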
SAFE Professor Michalis Haliassos was a member of the National Council for Research and Technology (ESET) established by the Government of Greece for the period 2010-2013. The council, consisting of eleven scientists from a range of disciplines, has now published its communiqué "National Strategic Framework for Research and Innovation 2014-2020".
To promote the advancement of research, technology and innovation in Greece, the strategic plan proposed by the authors seeks to identify areas of existing research strength and excellence that can be further advanced to become engines for progress and growth in Greece, as well as flaws inherent to the present system. The authors stress the need to address current constraints to growth, which include the declining education system; the confusion and weaknesses of R&D governance and management; the discontinuities and inefficiencies of resource allocation and investment; the lack of adaptation to clearly-defined national priorities; and the inadequate opportunities and funding for high-quality research and development to flourish. They stress the need for prioritisation and efficient allocation; stability of the policy frame; predictability of planning; provision of opportunity; recognition of excellence; and responsiveness to current and future needs.
This assessment concept paper provides a methodological approach for the formative assessment and summative assessment of GIZ’s International Water Stewardship Programme (IWaSP) and its component partnerships. IWaSP promotes partnerships between the private sector (corporations and SMEs), the public sector and society to tackle shared water risks and to manage water equitably to meet competing demands. This evaluative assessment concept describes the generic approach of the assessment, the cycle for the assessment of partnerships, the country coordination and the programme.
The overall goal of the assessment is to provide evidence for taxpayers in the donor countries and for citizens in the partnership countries. It also aims to examine the relevance of the programme’s approach, its underlying assumptions, and the heterogeneity of stakeholders and their specific interests. Since the assessment is also formative feedback to GIZ and IWaSP stakeholders, it aims to guide the future implementation of the partnerships and the programme.
The assessment is guided by several generic principles: assessing for learning (formative assessment); assessment of learning (summative assessment); iteration; structuring complex problems; unblocking results; and conformity with other assessment criteria set out by the OECD Development Assistance Committee (DAC) and GIZ’s Capacity Works success factors (GTZ 2010).
These generic criteria are adapted to the three levels of the IWaSP structure. First, the assessment cycle for partnerships includes the validation of stakeholders (mapping), the analysis of secondary literature, face-to-face interviews and a process for feeding back the findings. Generic tools are provided to guide the assessment, such as a list of key documents and an interview guide. Partnerships will undergo a baseline, an interim assessment and a final assessment. As progress varies across individual IWaSP partnerships, the steps taken by each partnership to assess shared water risks and to prioritise and agree interventions are expected to differ slightly. In response to these differences, the sequencing and content of the assessment may need to be adapted for the different partnerships.
Second, the country-level assessment considers issues such as the coordination of partnerships within a country, scoping strategies, and interaction between partnership and the programme. Information gathered during the partnership assessment feeds into the country-level assessment.
Third, the assessment cycle for the programme involves a document and monitoring-plan analysis and reflection on the different perspectives of the programme staff, country staff and external stakeholders.
The final section is concerned with reporting. Several annexes are provided relating to the organisation and preparation of the assessment, including question guidelines and analysis procedures.
This paper provides a systematic analysis of individual attitudes towards ambiguity, based on laboratory experiments. The design of the analysis allows us to capture individual behavior across various levels of ambiguity, ranging from low to high. Attitudes towards risk and attitudes towards ambiguity are disentangled, providing pure measures of ambiguity aversion. Ambiguity aversion is captured in several ways, i.e. as a discount factor net of a risk premium, and as an estimated parameter in a generalized utility function. We find that ambiguity aversion varies across individuals, and with the level of ambiguity, being most prominent for intermediate levels. Around one third of subjects show no aversion, one third show maximum aversion, and one third show intermediate levels of ambiguity aversion, while there is almost no ambiguity seeking. While most theoretical work on ambiguity builds on maxmin expected utility, our results provide evidence that MEU does not adequately capture individual attitudes towards ambiguity for the majority of individuals. Instead, our results support models that allow for intermediate levels of ambiguity aversion. Moreover, we find risk aversion to be statistically unrelated to ambiguity aversion on average. Taken together, the results support the view that ambiguity is an important and distinct argument in decision making under uncertainty.
A recent proposal by the Financial Stability Board (FSB) suggests a new risk capital buffer for globally operating systemically important financial institutions. The suggested metric, “Total Loss Absorbing Capacity” (TLAC), is composed of Tier-1 capital and loss absorbing debt. In a crisis situation, “bail-in-able” debt is to be written down or converted into equity. Jan Krahnen argues that the credibility of bail-in, in the case of systemically important financial institutions, hinges crucially on the design of TLAC and the requirements that will be placed on loss absorbing “bail-in-able” debt. The fear of direct systemic consequences through bail-in could be overcome, if a holding ban were placed on the “bail-in-bonds” of financial institutions. The holding ban would stipulate that these bonds cannot be held by other institutions within the banking sector.
We study consumption-portfolio and asset pricing frameworks with recursive preferences and unspanned risk. We show that in both cases, portfolio choice and asset pricing, the value function of the investor/representative agent can be characterized by a specific semilinear partial differential equation. To date, the solution to this equation has mostly been approximated by Campbell-Shiller techniques, without addressing general issues of existence and uniqueness. We develop a novel approach that rigorously constructs the solution by a fixed point argument. We prove that under regularity conditions a solution exists and establish a fast and accurate numerical method to solve consumption-portfolio and asset pricing problems with recursive preferences and unspanned risk. Our setting is not restricted to affine asset price dynamics. Numerical examples illustrate our approach.
In this paper, we propose a novel approach on how to estimate systemic risk and identify its key determinants. For all US financial companies with publicly traded equity options, we extract their option-implied value-at-risks (VaRs) and measure the spillover effects between individual company VaRs and the option-implied VaR of a US financial index. First, we study the spillover effect of increasing company risks on the financial sector. Second, we analyze which companies are most affected if the tail risk of the financial sector increases. We find that key accounting and market valuation metrics such as size, leverage, balance sheet composition, market-to-book ratio and earnings have a significant influence on the systemic risk profile of a financial institution. In contrast to earlier studies, the employed panel vector autoregression (PVAR) estimator allows for a causal interpretation of the results.
This paper studies the life cycle consumption-investment-insurance problem of a family. The wage earner faces the risk of a health shock that significantly increases his probability of dying. The family can buy term life insurance with realistic features. In particular, the available contracts are long term so that decisions are sticky and can only be revised at significant costs. Furthermore, a revision is only possible as long as the insured person is healthy. A second important and realistic feature of our model is that the labor income of the wage earner is unspanned. We document that the combination of unspanned labor income and the stickiness of insurance decisions reduces the insurance demand significantly. This is because an income shock induces the need to reduce the insurance coverage, since premia become less affordable. Since such a reduction is costly and families anticipate these potential costs, they buy less protection at all ages. In particular, young families stay away from life insurance markets altogether.
The observed hump-shaped life-cycle pattern in individuals' consumption cannot be explained by the classical consumption-savings model. We explicitly solve a model with utility of both consumption and leisure and with educational decisions affecting future wages. We show optimal consumption is hump shaped and determine the peak age. The hump results from consumption and leisure being substitutes and from the implicit price of leisure being decreasing over time; more leisure means less education, which lowers future wages, and the present value of foregone wages decreases with age. Consumption is hump shaped whether the wage is hump shaped or increasing over life.
Advertising arbitrage
(2014)
Speculators often advertise arbitrage opportunities in order to persuade other investors and thus accelerate the correction of mispricing. We show that in order to minimize the risk and the cost of arbitrage an investor who identifies several mispriced assets optimally advertises only one of them, and overweights it in his portfolio; a risk-neutral arbitrageur invests only in this asset. The choice of the asset to be advertised depends not only on mispricing but also on its "advertisability" and the accuracy of future news about it. When several arbitrageurs identify the same arbitrage opportunities, their decisions are strategic complements: they invest in the same asset and advertise it. Then, multiple equilibria may arise, some of which are inefficient: arbitrageurs may correct small mispricings while failing to eliminate large ones. Finally, prices react more strongly to the ads of arbitrageurs with a successful track record, and reputation-building induces high-skill arbitrageurs to advertise more than others.
Most simulated micro-founded macro models use solely consumer-demand aggregates in order to estimate deep economy-wide preference parameters, which are useful for policy evaluation. The underlying demand-aggregation properties that this approach requires should be easy to empirically disprove: since household-consumption choices differ for households with more members, aggregation can be rejected if appropriate data violate an affine equation regarding how much individuals benefit from within-household sharing of goods. We develop a survey method that tests the validity of this equation, without utility-estimation restrictions via models. Surprisingly, in six countries, this equation is not rejected, lending support to using consumer-demand aggregates.
This paper explores consequences of consumer education on prices and welfare in retail financial markets when some consumers are naive about shrouded add-on prices and firms try to exploit this. Allowing for different information and pricing strategies, we show that education is unlikely to push firms to disclose prices to all consumers, which would be socially efficient. Instead, price discrimination emerges as a new equilibrium. Further, due to a feedback on prices, education that is good for consumers who become sophisticated may be bad for consumers who stay naive and even for the group of all consumers as a whole.
We characterize optimal redistribution in a dynastic family model with human capital. We show how a government can improve the trade-off between equality and incentives by changing the amount of observable human capital. We provide an intuitive decomposition for the wedge between human-capital investment in the laissez faire and the social optimum. This wedge differs from the wedge for bequests because human capital carries risk: its returns depend on the non-diversifiable risk of children's ability. Thus, human capital investment is encouraged more than bequests in the social optimum if human capital is a bad hedge for consumption risk.
In this paper we argue that very high marginal labor income tax rates are an effective tool for social insurance even when households have preferences with high labor supply elasticity, make dynamic savings decisions, and policies have general equilibrium effects. To make this point we construct a large scale Overlapping Generations Model with uninsurable labor productivity risk, show that it has a wealth distribution that matches the data well, and then use it to characterize fiscal policies that achieve a desired degree of redistribution in society. We find that marginal tax rates on the top 1% of the earnings distribution of close to 90% are optimal. We document that this result is robust to plausible variation in the labor supply elasticity and holds regardless of whether social welfare is measured at the steady state only or includes transitional generations.
Although oil price shocks have long been viewed as one of the leading candidates for explaining U.S. recessions, surprisingly little is known about the extent to which oil price shocks explain recessions. We provide the first formal analysis of this question with special attention to the possible role of net oil price increases in amplifying the transmission of oil price shocks. We quantify the conditional recessionary effect of oil price shocks in the net oil price increase model for all episodes of net oil price increases since the mid-1970s. Compared to the linear model, the cumulative effect of oil price shocks over the course of the next two years is much larger in the net oil price increase model. For example, oil price shocks explain a 3% cumulative reduction in U.S. real GDP in the late 1970s and early 1980s and a 5% cumulative reduction during the financial crisis. An obvious concern is that some of these estimates are an artifact of net oil price increases being correlated with other variables that explain recessions. We show that the explanatory power of oil price shocks largely persists even after augmenting the nonlinear model with a measure of credit supply conditions, of the monetary policy stance and of consumer confidence. There is evidence, however, that the conditional fit of the net oil price increase model is worse on average than the fit of the corresponding linear model, suggesting much smaller cumulative effects of oil price shocks for these episodes of at most 1%.
This article examines how the shale oil revolution has shaped the evolution of U.S. crude oil and gasoline prices. It puts the evolution of shale oil production into historical perspective, highlights uncertainties about future shale oil production, and cautions against the view that the U.S. may become the next Saudi Arabia. It then reviews the role of the ban on U.S. crude oil exports, of capacity constraints in refining and transporting crude oil, of differences in the quality of conventional and unconventional crude oil, and of the recent regional fragmentation of the global market for crude oil for the determination of U.S. oil and gasoline prices. It discusses the reasons for the persistent wedge between U.S. crude oil prices and global crude oil prices in recent years and for the fact that domestic oil prices below global levels need not translate to lower U.S. gasoline prices. It explains why the shale oil revolution unlike the shale gas revolution is unlikely to stimulate a boom in oil-intensive manufacturing industries. It also explores the implications of shale oil production for the transmission of oil price shocks to the U.S. economy.
This paper empirically tests the role of bank lending tightening on non-financial corporate (NFC) bond issuance in the eurozone. By utilizing a unique data set provided by the ECB Bank Lending Survey, we capture the "pure" credit supply effect on corporate external financing. We find that tightened credit standards positively affect the NFC bond issuance: A 1pp increase in banks reporting considerable tightening on loans leads to around a 7% increase in firms' bond issuance in the eurozone. Focusing on a spectrum of aspects contributing to bank credit tightening, we document that banks' balance sheet constraints, as well as the perception of risk lead to significantly higher NFC bond issuance. In addition, we show that stricter lending conditions, such as wider margins, higher collateral requirements and covenants significantly increase NFC bond issuance volumes too. Furthermore, the impact of bank credit tightening on firms' bond issuance is particularly observable in core eurozone countries and not in peripheral countries. This is partially due to the underdevelopment of debt capital markets in the peripheral countries.
Loudness in the novel
(2014)
The novel is composed entirely of voices: the most prominent among them is typically that of the narrator, which is regularly intermixed with those of the various characters. In reading through a novel, the reader "hears" these heterogeneous voices as they occur in the text. When the novel is read out loud, the voices are audibly heard. They are also heard, however, when the novel is read silently: in this latter case, the voices are not verbalized for others to hear, but acoustically created and perceived in the mind of the reader. Simply put: sound, in the context of the novel, is fundamentally a product of the novel’s voices. This conception of sound mechanics may at first seem unintuitive—sound seems to be the product of oral reading—but it is only by starting with the voice that one can fully appreciate sound’s function in the novel. Moreover, such a conception of sound mechanics finds affirmation in the works of both Mikhail Bakhtin and Elaine Scarry: "In the novel," writes Bakhtin, "we can always hear voices (even while reading silently to ourselves)."
After the Global Financial Crisis a controversial rush to fiscal austerity followed in many countries. Yet research on the effects of austerity on macroeconomic aggregates was and still is unsettled, mired in the difficulty of identifying multipliers from observational data. This paper reconciles seemingly disparate estimates of multipliers within a unified and state-contingent framework. We achieve identification of causal effects with new propensity-score based methods for time series data. Using this novel approach, we show that austerity is always a drag on growth, and especially so in depressed economies: a one percent of GDP fiscal consolidation translates into 4 percent lower real GDP after five years when implemented in the slump rather than the boom. We illustrate our findings with a counterfactual evaluation of the impact of the U.K. government’s shift to austerity policies in 2010 on subsequent growth.
On November 8, 2013, several members of the British House of Lords’ Subcommittee A conducted a hearing at the ECB in Frankfurt, Germany, on “Genuine Economic and Monetary Union and its Implications for the UK”. Professors Otmar Issing and Jan Pieter Krahnen were called as expert witnesses.
The testimony began with a general discussion on the elements considered necessary for a functioning internal market. Do economic union and monetary union require a fiscal union or even a political union, beyond the elements of the banking union currently being prepared? In this context, also the critique of the German current account surplus and the international expectations that Germany stimulate internal demand to support growth in crisis countries, were discussed.
With regard to the monetary union, the members of the subcommittee asked for an assessment of how European nations and the banking industry would have fared in the banking crisis that followed the Lehman collapse, had there not been a common currency. Given the important role that the ECB has played in the course of the crisis management, the members further asked for an evaluation of the OMT-program of the ECB and also whether the monetary union is in need of common debt instruments, in order to provide the ECB with the possibility of buying EU liabilities, comparable to the Fed buying US Treasury bonds. Finally, the dual role of the ECB for monetary policy and banking supervision was an issue touched on by several questions.
In a contribution prepared for the Athens Symposium on “Banking Union, Monetary Policy and Economic Growth”, Otmar Issing describes forward guidance by central banks as the culmination of the idea of guiding expectations by pure communication. In practice, he argues, forward guidance has proved a misguided idea. What is presented as state of the art monetary policy is an example of pretence of knowledge. Forward guidance tries to give the impression of a kind of rule-based monetary policy. De facto, however, it is an overambitious discretionary approach which, to be successful, would need much more (or rather better) information than is currently available. In Issing's view, communication must be clear and honest about the limits of monetary policy in a world of uncertainty.
In the wake of the Global Financial Crisis that started in 2007, policymakers were forced to respond quickly and forcefully to a recession caused not by short-term factors, but rather by an over-accumulation of debt by sovereigns, banks, and households: a so-called “balance sheet recession.” Though the nature of the crisis was understood relatively early on, policy prescriptions for how to deal with its consequences have continued to diverge. This paper gives a short overview of the prescriptions, the remaining challenges and key lessons for monetary policy.
Can a tightening of the bank resolution regime lead to more prudent bank behavior? This policy paper reviews arguments for why this could be the case and presents evidence linking changes in bank resolution regimes with bank risk-taking. The authors find that the tightening of bank resolution in the U.S. (i.e., the introduction of the Orderly Liquidation Authority) significantly decreased overall risk-taking of the most affected banks. This effect, however, does not hold for the largest and most systemically important banks – too-big-to-fail seems to be unresolved. Building on the insights from the U.S. experience, the authors derive principles for effective resolution regimes and evaluate the emerging resolution regime for Europe.
Social Security rules that determine retirement, spousal, and survivor benefits, along with benefit adjustments according to the age at which these are claimed, open up a complex set of financial options for household decisions. These rules influence optimal household asset allocation, insurance, and work decisions, subject to life cycle demographic shocks, such as marriage, divorce, and children. Our model-based research generates a wealth profile and a low and stable equity fraction consistent with empirical evidence. We confirm predictions that wives will claim retirement benefits earlier than husbands, while life insurance is mainly purchased by younger men. Our policy simulations imply that eliminating survivor benefits would sharply reduce claiming differences by sex while dramatically increasing men’s life insurance purchases.
US data and new stockholding data from fifteen European countries and China exhibit a common pattern: stockholding shares increase in household income and wealth. Yet, there is a multitude of numbers to match through models. Using a single utility function across households (parsimony), we suggest a strategy for fitting stockholding numbers, while replicating that saving rates increase in wealth, too. The key is introducing subsistence consumption to an Epstein-Zin-Weil utility function, creating endogenous risk-aversion differences across rich and poor. A closed-form solution for the model with insurable labor-income risk serves as a calibration guide for numerical simulations with uninsurable labor-income risk.
How much additional tax revenue can the government generate by increasing labor income taxes? In this paper we provide a quantitative answer to this question, and study the importance of the progressivity of the tax schedule for the ability of the government to generate tax revenues. We develop a rich overlapping generations model featuring an explicit family structure, extensive and intensive margins of labor supply, endogenous accumulation of labor market experience as well as standard intertemporal consumption-savings choices in the presence of uninsurable idiosyncratic labor productivity risk. We calibrate the model to US macro, micro and tax data and characterize the labor income tax Laffer curve under the current choice of the progressivity of the labor income tax code as well as when varying progressivity. We find that more progressive labor income taxes significantly reduce tax revenues. For the US, converting to a flat tax code raises the peak of the Laffer curve by 6%, whereas converting to a tax system with progressivity similar to Denmark would lower the peak by 7%. We also show that, relative to a representative agent economy, tax revenues are less sensitive to the progressivity of the tax code in our economy. This finding is due to the fact that labor supply of two earner households is less elastic (along the intensive margin) and the endogenous accumulation of labor market experience makes labor supply of females less elastic (along the extensive margin) to changes in tax progressivity.
This paper analyzes how on-the-job search (OJS) by an agent impacts the moral hazard problem in a repeated principal-agent relationship. OJS is found to constitute a source of agency costs because efficient search incentives require that the agent receives all gains from trade. Further, the optimal incentive contract with OJS matches the design of empirically observed compensation contracts more accurately than models that ignore OJS. In particular, the optimal contract entails excessive performance pay plus efficiency wages. Efficiency wages reduce the opportunity costs of work effort and hence serve as a complement to bonuses. Thus, the model offers a novel explanation for the use of efficiency wages. When allowing for renegotiation, the model generates wage and turnover dynamics that are consistent with empirical evidence. I argue that the model contributes to explaining the concomitant rise in the use of performance pay and in competition for high-skill workers during the last three decades.
This paper studies the effect of graduating from college on lifetime earnings. We develop a quantitative model of college choice with uncertain graduation. Departing from much of the literature, we model in detail how students progress through college. This allows us to parameterize the model using transcript data. College transcripts reveal substantial and persistent heterogeneity in students’ credit accumulation rates that are strongly related to graduation outcomes. From this data, the model infers a large ability gap between college graduates and high school graduates that accounts for 54% of the college lifetime earnings premium.
Research results confirm the existence of various forms of international tax planning by multinational firms. Prominent examples of firms employing tax avoidance strategies are Amazon, Google and Starbucks. The increasing availability of administrative data for Europe has enabled researchers to study behavioural responses of European multinationals to taxation. The present paper summarizes what we can learn from these recent studies in general and about German multinationals in particular.
We propose an iterative procedure to efficiently estimate models with complex log-likelihood functions and a potentially large number of parameters relative to the number of observations. Given consistent but inefficient estimates of sub-vectors of the parameter vector, the procedure yields computationally tractable, consistent and asymptotically efficient estimates of all parameters. We show the asymptotic normality and derive the estimator's asymptotic covariance as a function of the number of iteration steps. To mitigate the curse of dimensionality in highly parameterized models, we combine the procedure with a penalization approach that yields sparsity and reduces model complexity. Small sample properties of the estimator are illustrated for two time series models in a simulation study. In an empirical application, we use the proposed method to estimate the connectedness between companies by extending the approach of Diebold and Yilmaz (2014) to a high-dimensional non-Gaussian setting.
We develop a model of managerial compensation structure and asset risk choice. The model provides predictions about how inside debt features affect the relation between credit spreads and compensation components. First, inside debt reduces credit spreads only if it is unsecured. Second, inside debt exerts important indirect effects on the role of equity incentives: when inside debt is large and unsecured, equity incentives increase credit spreads; when inside debt is small or secured, this effect is weakened or reversed. We test our model on a sample of U.S. public firms with traded CDS contracts, finding evidence supportive of our predictions. To alleviate endogeneity concerns, we also show that our results are robust to using an instrumental variable approach.
When markets are incomplete, social security can partially insure against idiosyncratic and aggregate risks. We incorporate both risks into an analytically tractable model with two overlapping generations and demonstrate that they interact over the life-cycle. The interactions appear even though the two risks are orthogonal, and they amplify the welfare consequences of introducing social security. On the one hand, the interactions increase the welfare benefits from insurance. On the other hand, they can increase or decrease the welfare costs from crowding out of capital formation. Despite this ambiguity in the crowding-out channel, the net effect of the two channels is positive: the interactions of risks increase the total welfare benefits of social security.
The Eurozone fiscal crisis has created pressure for institutional harmonization, but skeptics argue that cultural predispositions can prevent convergence in behavior. Our paper derives a robust cultural classification of European countries and utilizes unique data on natives and immigrants to Sweden. Classification based on genetic distance or on Hofstede’s cultural dimensions fails to identify a single ‘southern’ culture but points to a ‘northern’ culture. Significant differences in financial behavior are found across cultural groups, controlling for household characteristics. Financial behavior tends to converge with longer exposure to common institutions, but is slowed down by longer exposure to original institutions.
Northerners are not willing to invest in a South they perceive as unwilling to undertake necessary structural reforms, nor are Southerners willing to invest in their countries in a climate of austerity and policy uncertainty imposed, in their view, by the North. This results in a vicious cycle of mistrust. However, as the author argues, big steps in the direction of reforms may provide just enough thrust to break out of this vicious cycle, propel southern countries – and especially Greece – to a much happier future, and promote the chances for more balanced economic performance in North and South.
Francisco Suárez (1548-1617) and Rodrigo Arriaga (1592-1667) on the state of innocence and community
(2014)
Recent scholarship on late-scholastic thought has stressed a Jesuit discontinuity from Thomism. While Aquinas’ Aristotelian thesis located the political sphere in the state of innocence, Jesuit thought on community formation is said to have referred to ‘fallen’ and ‘pure’ nature. In this piece, I trace one particular narrative: In the hypothetical, lasting state of innocence (if original sin had not occurred), Aquinas identified the political community, but not the institution of the sacraments. Two celebrated Jesuit scholastics, Francisco Suárez and Rodrigo Arriaga, challenged the latter claim and defended the naturalness of spiritual alongside temporal power. This effectively allowed them to connect ‘nature’ to ‘utility’ and ‘necessity’ without tying their claims to the supernatural teleology. To them, the state of innocence remained relevant for politics, albeit in a way that challenged the Thomist account.
This is a chapter for a forthcoming volume Oxford Handbook of Financial Regulation (Oxford University Press 2014) (eds. Eilís Ferran, Niamh Moloney, and Jennifer Payne). It provides an overview of EU financial regulation from the first banking directive up until the most recent developments in the aftermath of the financial crisis, focusing on the multiple layers of multi-level governance and their characteristic conceptual difficulties. The paper therefore discusses the need to accommodate cross-border capital flows following from the EU internal market and the resulting regulatory strategies. This includes a brief overview of the principle of home country control and the ensuing Financial Services Action Plan. Accommodating cross-border capital flows and regulating them necessarily requires an orchestration of the underlying supervisory structures, which is therefore also discussed. In the aftermath of the financial crisis of 2007-09 an additional aspect of necessary orchestration has emerged, namely the need to control systemic risk. Specific attention is paid to microprudential supervision by the newly established European Supervisory Authorities and to macroprudential supervision in the European Banking Union, the latter's underlying drivers and the accompanying Single Supervisory Mechanism, including the SSM's institutional framework, its rationales, and the Single Resolution Mechanism closely linked to it.
Expressivist theories of punishment, according to which a penal sanction articulates or expresses a certain meaning to the offender, to the victim and to society, are becoming increasingly prominent alongside the traditional theories of punishment as retribution or deterrence. What these theories have in common is the idea that conveying this meaning requires a communicative act, and that the penal sanction is such an act. This article argues that pure communicative theories of punishment face great difficulties in generating any justification for hard treatment. One challenge is that certain types of sanctions – in particular, hard treatment – restrict the communicative opportunities of the incarcerated individual, which generates a paradox: it turns punishment into a communicative act of non-communication. Moreover, all practices of hard treatment potentially become unnecessary if expressing the moral message of censure constitutes a kind of action in itself – a treatment of the offender embedded in a communicative relationship between offender, victim and society – such that we may think of the history of punishment as a development in which hard treatment becomes ever less necessary for conveying the message.
One of the leading methods of estimating the structural parameters of DSGE models is the VAR-based impulse response matching estimator. The existing asymptotic theory for this estimator does not cover situations in which the number of impulse response parameters exceeds the number of VAR model parameters. Situations in which this order condition is violated arise routinely in applied work. We establish the consistency of the impulse response matching estimator in this situation, we derive its asymptotic distribution, and we show how this distribution can be approximated by bootstrap methods. Our methods of inference remain asymptotically valid when the order condition is satisfied, regardless of whether the usual rank condition for the application of the delta method holds. Our analysis sheds new light on the choice of the weighting matrix and covers both weakly and strongly identified DSGE model parameters. We also show that under our assumptions special care is needed to ensure the asymptotic validity of Bayesian methods of inference. A simulation study suggests that the frequentist and Bayesian point and interval estimators we propose are reasonably accurate in finite samples. We also show that using these methods may affect the substantive conclusions in empirical work.
Concepts of legal capacity and legal subjectivity have developed gradually through intermediate stages. Accordingly, there are numerous types of legal subjects and partial legal subjects, and ever-new types can develop, at the latest once the law confronts new social and technological challenges. Today such challenges seem to be making themselves felt especially in the field of information and communication technologies. Their specific communicative conditions, resulting from the technological networking of social communication, have a particularly pronounced influence on legal attributions of identity and action, and hence above all on issues of liability in electronic commerce. Here in particular it is becoming increasingly difficult to distinguish concrete human actors and, for example, to identify them as authors of declarations of intent or even as individually responsible agencies of legal transgressions. The communicative processes in this area appear instead as new kinds of chains of effects whose actors seem to be socio-technical ensembles of people and things, whereby the artificial components of these hybrid human–thing linkages can sometimes even be represented as driving forces and independent agents.
We explore the sources of household balance sheet adjustment following the collapse of the housing market in 2006. First, we use microdata from the Federal Reserve Board’s Senior Loan Officer Opinion Survey to document that banks cumulatively tightened consumer lending standards more in counties that experienced a house price boom in the mid-2000s than in non-boom counties. We then use the idea that renters, unlike homeowners, did not experience an adverse wealth shock when the housing market collapsed to examine the relative importance of two explanations for the observed deleveraging and the sluggish pickup in consumption after 2008. First, households may have optimally adjusted to lower wealth by reducing their demand for debt and implicitly, their demand for consumption. Alternatively, banks may have been more reluctant to lend in areas with pronounced real estate declines. Our evidence is consistent with the second explanation. Renters with low risk scores, compared to homeowners in the same markets, reduced their levels of nonmortgage debt and credit card debt more in counties where house prices fell more. The contrast suggests that the observed reductions in aggregate borrowing were more driven by cutbacks in the provision of credit than by a demand-based response to lower housing wealth.
Before the 2007–09 crisis, standard risk measurement methods substantially underestimated the threat to the financial system. One reason was that these methods didn’t account for how closely commercial banks, investment banks, hedge funds, and insurance companies were linked. As financial conditions worsened in one type of institution, the effects spread to others. A new method that more accurately accounts for these spillover effects suggests that hedge funds may have been central in generating systemic risk during the crisis.
On average, "young" people underestimate whereas "old" people overestimate their chances to survive into the future. We adopt a Bayesian learning model of ambiguous survival beliefs which replicates these patterns. The model is embedded within a non-expected utility model of life-cycle consumption and saving. Our analysis shows that agents with ambiguous survival beliefs (i) save less than originally planned, (ii) exhibit undersaving at younger ages, and (iii) hold larger amounts of assets in old age than their rational expectations counterparts who correctly assess their survival probabilities. Our ambiguity-driven model therefore simultaneously accounts for three important empirical findings on household saving behavior.
Consumption-based asset pricing with rare disaster risk : a simulated method of moments approach
(2014)
The rare disaster hypothesis suggests that the extraordinarily high postwar U.S. equity premium resulted because investors ex ante demanded compensation for unlikely but calamitous risks that they happened not to incur. Although convincing in theory, empirical tests of the rare disaster explanation are scarce. We estimate a disaster-including consumption-based asset pricing model (CBM) using a combination of the simulated method of moments and bootstrapping. We consider several methodological alternatives that differ in the moment matches and the way to account for disasters in the simulated consumption growth and return series. Whichever specification is used, the estimated preference parameters are of an economically plausible size, and the estimation precision is much higher than in previous studies that use the canonical CBM. Our results thus provide empirical support for the rare disaster hypothesis, and help reconcile the nexus between real economy and financial markets implied by the consumption-based asset pricing paradigm.
The long-run consumption risk (LRR) model is a promising approach to resolve prominent asset pricing puzzles. The simulated method of moments (SMM) provides a natural framework to estimate its deep parameters, but caveats concern model solubility and weak identification. We propose a two-step estimation strategy that combines GMM and SMM, and for which we elicit informative macroeconomic and financial moment matches from the LRR model structure. In particular, we exploit the persistent serial correlation of consumption and dividend growth and the equilibrium conditions for market return and risk-free rate, as well as the model-implied predictability of the risk-free rate. We match analytical moments when possible and simulated moments when necessary and determine the crucial factors required for both identification and reasonable estimation precision. A simulation study – the first in the context of long-run risk modeling – delineates the pitfalls associated with SMM estimation of a non-linear dynamic asset pricing model. Our study provides a blueprint for successful estimation of the LRR model.
Securities transaction tax in France: impact on market quality and inter-market price coordination
(2014)
The general concept of a Securities Transaction Tax (STT) is controversial among academics and politicians. While theoretical research is quite advanced, empirical guidance in a fragmented market context is still scarce. Possible negative effects for market liquidity and market efficiency are theoretically predicted, but have not been empirically tested yet. In light of the agreement of eleven European member states to implement an STT, this study aims to give a comprehensive overview of the effects of the STT introduced in France in 2012 on liquidity demand, liquidity supply, volatility and inter-market information transmission. The results show that the STT has led to a decline in liquidity demand, has had a detrimental effect on liquidity supply and negatively influences the inter-market information transmission efficiency. However, no effect on volatility can be observed.
This paper uses laboratory experiments to provide a systematic analysis of how different presentation formats affect individuals' investment decisions. The results indicate that the type of presentation as well as personal characteristics influence both the consistency of decisions and the riskiness of investment choices. However, while personal characteristics have a larger impact on consistency, the chosen risk level is determined more by framing effects. On the level of personal characteristics, participants' decisions show that better financial literacy and a better understanding of the presentation format enhance consistency and thus decision quality. Moreover, female participants on average make less consistent decisions and tend to prefer less risky alternatives. On the level of framing dimensions, subjects choose riskier investments when possible outcomes are shown in absolute values rather than rates of return and when the loss potential is less obvious. In particular, reducing the emphasis on downside risk and upside potential simultaneously leads to a substantial increase in risk taking.
We examine trust and trustworthiness of individuals with varying professional preferences and experiences. Our subjects study business and economics in Frankfurt, the financial center of Germany and continental Europe. In the trust game, subjects with a high interest in working in the financial industry return 25 percent less than subjects with a low interest. We find no evidence that the extent of professional experience in the financial industry has a negative impact on trustworthiness. We also do not find any evidence that the financial industry screens out less trustworthy individuals in the hiring process. In a prediction game that is strategically equivalent to the trust game, the amount sent by first-movers is significantly smaller when the second-mover indicates a high interest in working in finance. These results suggest that the financial industry attracts less trustworthy individuals, which may contribute to the current lack of trust in its employees.
In this study prepared for the ECON Committee of the European Parliament, Gellings, Jungbluth and Langenbucher present a graphic overview of core legislation in the area of economic and financial services in Europe. The mapping overview can serve as background for further deliberations. The study covers legislation in force, proposals and other relevant provisions in fourteen policy areas, i.e. banking, securities markets and investment firms, market infrastructure, insurance and occupational pensions, payment services, consumer protection in financial services, the European System of Financial Supervision, European Monetary Union, euro bills and coins, statistics, competition, taxation, commerce and company law, accounting and auditing.
Regulation of investor access to financial products is often based on product familiarity indicated by previous use. The underlying premise that lack of familiarity with a product class causes unwarranted participation is difficult to test. This paper uses household-level data from the ‘experiment’ of German reunification, which exogenously offered East Germans access to capitalist products unfamiliar to them. We compare the evolution of post-unification participation of former East and West Germans in financial products, controlling for relevant household characteristics. We vary familiarity differentials by considering (i) both unfamiliar ‘capitalist’ products (stocks, bonds, and consumer credit) and ones available in the East (savings accounts and life insurance); and (ii) cohorts with different exposure to capitalism. We find that East Germans participated immediately in unfamiliar risky securities, at rates comparable to West Germans of similar characteristics. They phased out disproportionate participation in previously familiar assets as familiarity with capitalist products grew. They were more likely to use consumer debt, partly to catch up with richer new peers. We find no signs of abrupt participation drops that could suggest mistakes or regret related to lack of familiarity.
We use a unique data set from the Trade Reporting and Compliance Engine (TRACE) to study liquidity effects in the US structured product market. Our main contribution is the analysis of the relation between the accuracy in measuring liquidity and the potential degree of disclosure. Having access to all relevant trading information, we provide evidence that transaction cost measures that use dealer specific information such as trader identity and trade direction can be efficiently proxied by measures that use less detailed information. This finding is important for all market participants in the context of OTC markets, as it fosters our understanding of the information contained in transaction data. Thus, our results provide guidance for improving transparency while maintaining trader confidentiality. In addition, we analyze liquidity in the structured product market in general and show that securities that are mainly institutionally traded, guaranteed by a federal authority, or have low credit risk, tend to be more liquid.
This essay reviews a cornerstone of the European Banking Union project, the resolution of systemically important banks. The focus is on the inherent conflict between a possible intervention by resolution authorities, conditional on a crisis situation, and effective prevention prior to a crisis. Moreover, the paper discusses the rules for bail-in debt and conversion rules for different layers of debt. Finally, some organizational requirements to achieve effective resolution results will be analyzed.
Noumenal Power
(2014)
In political or social philosophy, we speak about power all the time. Yet the meaning of this important concept is rarely made explicit, especially in the context of normative discussions. But as with many other concepts, once one considers it more closely, fundamental problems arise, such as whether a power relation is necessarily a relation of subordination and domination. In the following, I suggest a novel understanding of what power is and what it means to exercise it.
The Solvency II standard formula employs an approximate Value-at-Risk approach to define risk-based capital requirements. This paper investigates how the standard formula's stock risk calibration influences the equity position and investment strategy of a shareholder-value-maximizing insurer with limited liability. The capital requirement for stock risks is determined by multiplying a regulation-defined stock risk parameter by the value of the insurer's stock portfolio. Intuitively, a higher stock risk parameter should reduce risky investments as well as insolvency risk. However, we find that the default probability does not necessarily decrease when investment risk is reduced by increasing the stock risk parameter. We also find that, depending on the precise interaction between assets and liabilities, some insurers will invest conservatively whereas others will prefer a very risky investment strategy, and a slight change of the stock risk parameter may lead from a conservative to a high-risk asset allocation.
Since the 2008 financial crisis, in which the Reserve Primary Fund “broke the buck,” money market funds (MMFs) have been the subject of ongoing policy debate. Many commentators view MMFs as a key contributor to the crisis because widespread redemption demands during the days following the Lehman bankruptcy contributed to a freeze in the credit markets. In response, MMFs were deemed a component of the nefarious shadow banking industry and targeted for regulatory reform. The Securities and Exchange Commission's (SEC) misguided 2014 reforms risk exacerbating MMF fragility while crippling large segments of the MMF industry.
Determining the appropriate approach to MMF reform has been difficult. Bank regulators supported requiring MMFs to trade at a floating net asset value (NAV) rather than a stable $1 share price. By definition, a floating NAV prevents MMFs from breaking the buck but is unlikely to eliminate the risk of large redemptions in a time of crisis. Other reform proposals have similar shortcomings. More fundamentally, the SEC's reforms may substantially reduce the utility of MMFs for many investors, which could, in turn, affect the availability of short-term credit.
The shape of MMF reform has been influenced by a turf war among regulators as the SEC has battled with bank regulators both about the need for additional reforms and about the structure and timing of those reforms. Bank regulators have been influential in shaping the terms of the debate by using banking rhetoric to frame the narrative of MMF fragility. This rhetoric masks a critical difference between banks and MMFs – asset segregation. Unlike banks, MMF sponsors have assets and operations that are separate from the assets of the MMF itself. This difference has caused the SEC to mistake sponsor support as a weakness rather than a key stability-enhancing feature. As a result, the SEC mistakenly adopted reforms that burden sponsor support instead of encouraging it.
As this article explains, required sponsor support offers a novel and simple regulatory solution to MMF fragility. Accordingly, this article proposes that the SEC require MMF sponsors explicitly to guarantee the $1 share price. Taking sponsor support out of the shadows embraces rather than ignores the advantage that MMFs offer over banks through asset partitioning. At the same time, sponsor support harnesses market discipline as a constraint against MMF risk-taking and moral hazard.
While distribution conflicts over natural resources were central to the debates on a New International Economic Order, during the last decades the specific distribution conflicts surrounding natural resource exploitation no longer have been at the core of international law. In this paper I trace the developments in the relationship between international law and resource distribution conflicts. First, I argue that the New International Economic Order favored the political resolution of distribution conflicts over natural resources and envisaged international distribution conflicts to be addressed by the political organs of international institutions within legal procedures. Second, I show how the NIEO was surpassed by a different order that relied largely on the market as a distribution mechanism for raw materials, and how international institutions and international law played a crucial role in the establishment of this order by promoting the privatization of natural resource exploitation and protecting foreign direct investment and trade. Third, with reference to the copper industry in Zambia, I illustrate how international investment law, and more broadly international economic law, is shaping (and affecting the resolution of) distribution conflicts not only between but also within States. I conclude with a call for a renewed focus on an international law of resource conflicts to allow for their political resolution, given the countermoves we can observe with respect to international investment law and the persistence of (violent) conflicts over natural resource exploitation within States.
Even though fiscal sovereignty still counts as a fundamental principle of government, global and regional economic integration as well as increasing levels of sovereign debt severely limit governments' tax policy choices. In particular the redistributive function of taxation has suffered in the pursuit of economic competitiveness. As inequality rises and attention is directed again at taxation as a means for redistribution, international cooperation appears as an avenue to enable redistribution through taxation. Yet one of the predominant international institutions dealing with tax matters – the OECD – with its focus on economic growth and competitiveness and the resulting tax policy advice, prevents rather than promotes national and international debates on taxation as a question of social justice. The paper argues that questions of taxation need to be perceived as questions of social justice and thus as questions of politics, not merely of economics. Only if taxation is not considered a mere economic instrument can a ‘political economy’ be maintained. The paper addresses the three objectives of taxation (revenue generation, redistribution and regulation) and how they are affected as governments aim for fiscal consolidation, concluding that governments' power to freely pursue and calibrate these objectives has come to appear rather as a myth than as the core of sovereignty. It then demonstrates how the OECD's tax policy advice and cooperation in tax matters react to the constraints on governmental taxation powers; how they aim at economic growth and competitiveness to the detriment of (other) ideas of social justice. The paper concludes with a call for (re)integrating social and global justice concerns into debates on taxation.
Financial innovation is, as usual, faster than regulation. New forms of speculation and intermediation are rapidly emerging. Largely as a result of the evaporation of trust in financial intermediation, an exponentially increasing role is being played by so-called peer-to-peer intermediation. The most prominent example at the moment is Bitcoin.
If one expects that shocks in these markets could destabilize also traditional financial markets, then it will be necessary to extend regulatory measures also to these innovations.
This article discusses the recent proposal for debt restructuring in the euro zone by Pierre Paris and Charles Wyplosz. It argues that the plan cannot realize the promised debt relief without producing moral hazard. Ester Faia revisits the Redemption Fund proposed in November 2011 by the German Council of Economic Experts and argues that this plan, to date, still remains the most promising path towards successful debt restructuring in Europe.
Social impact bonds are a special type of bond whose purpose is to provide long term funds to projects with a social impact. Especially in the UK and in the US these bonds are increasingly being used to raise funds to finance government projects. Their return depends on the social improvements achieved. Especially in times of crisis, governments lack funds to prevent the social consequences of recessions. Faia argues that the European Union should develop an equivalent to the British Social Finance Ltd. to finance projects for social improvement.
In the United States, on April 1, 2014, the set of rules commonly known as the "Volcker Rule", prohibiting proprietary trading activities in banks, became effective. The implementation of this rule took more than three years, as “proprietary trading” is an inherently vague concept, overlapping strongly with genuinely economically useful activities such as market-making. As a result, the final Rule is a complex and lengthy combination of prohibitions and exemptions.
In January 2014, the European Commission put forward its proposal on banking structural reform. The proposal includes a Volcker-like provision, prohibiting large, systemically relevant financial institutions from engaging in proprietary trading or hedge fund-related business. This paper draws lessons from the US implementation process for the Volcker Rule for the European regulatory process.
The article introduces a research project financed by the Academy of Sciences and Literature Mainz that began in 2013 and will extend over an 18-year period. It aims at producing a historical-semantic dictionary elucidating central terms of the School of Salamanca's discourses and their significance for modern political theory and jurisprudence. The project's foundation will be a digital corpus of important texts from the School of Salamanca, which will be linked to the dictionary's online version. By making the source corpus accessible in searchable full text (as well as in high-quality digital images), the project is creating a new research tool with exciting possibilities for further investigations. The dictionary will be a valuable source of information for the interdisciplinary research carried out in this field.
Greater firm-level transparency through enhanced disclosure provides outside stakeholders, such as stock investors and policyholders, with more information on an insurer's risk situation. The disclosure of the insurer's risk-taking can have negative effects on, for example, its stock performance and insurance demand when stock investors and policyholders are risk-averse. Insurers that are concerned about these potential ex post adverse effects of risk-taking under greater transparency are thus inclined to limit their risks ex ante. In other words, improved firm-level transparency can reduce insurers' risk-taking incentives. This article investigates empirically the relationship between firm-level transparency and insurers' strategies on capitalization and risky investments. Exploring the disclosure levels and the risk behavior of 52 European stock insurance companies from 2005 to 2012, the results show that insurers tend to hold more equity capital in anticipation of greater transparency, and this capital-holding strategy is consistent across different types of insurance businesses. When considering the influence of improved transparency on the investment policy of insurers, the results are mixed for different types of insurers.
This article explores life insurance consumption in 31 European countries from 2003 to 2012 and investigates the extent to which market transparency can affect life insurance demand. The cross-country evidence for the entire sample period shows that greater market transparency, which resolves asymmetric information, can generate a higher demand for life insurance. However, when considering the financial crisis period (2008-2012) separately, the results suggest a negative impact of enhanced market transparency on life insurance consumption. The mixed findings imply a trade-off between the reduction in adverse selection under greater market transparency and the possible negative effects on life insurance consumption during the crisis period due to more effective market discipline. Furthermore, this article studies the extent to which transparency can influence the reaction of life insurance demand to bad market outcomes, i.e., low solvency ratios or low profitability. The results indicate that markets with bad outcomes generate higher life insurance demand under greater transparency than markets that also experience bad outcomes but are less transparent.