This paper uses laboratory experiments to provide a systematic analysis of how different presentation formats affect individuals' investment decisions. The results indicate that the type of presentation as well as personal characteristics influence both the consistency of decisions and the riskiness of investment choices. However, while personal characteristics have a larger impact on consistency, the chosen risk level is determined more by framing effects. On the level of personal characteristics, participants' decisions show that better financial literacy and a better understanding of the presentation format enhance consistency and thus decision quality. Moreover, female participants on average make less consistent decisions and tend to prefer less risky alternatives. On the level of framing dimensions, subjects choose riskier investments when possible outcomes are shown in absolute values rather than rates of return and when the loss potential is less obvious. In particular, reducing the emphasis on downside risk and upside potential simultaneously leads to a substantial increase in risk taking.
This paper is the first to conduct an incentive-compatible experiment using real monetary payoffs to test the hypothesis of probabilistic insurance, which states that willingness to pay for insurance decreases sharply in the presence of even small default probabilities as compared to a risk-free insurance contract. In our experiment, 181 participants state their willingness to pay for insurance contracts with different levels of default risk. We find that the willingness to pay sharply decreases with increasing default risk. Our results hence strongly support the hypothesis of probabilistic insurance. Furthermore, we study the impact of customer reaction to default risk on an insurer's optimal solvency level using our experimentally obtained data on insurance demand. We show that an insurer should choose to be default-free rather than having even a very small default probability. This risk strategy is also optimal when assuming substantial transaction costs for risk management activities undertaken to achieve the maximum solvency level.
The Solvency II standard formula employs an approximate Value-at-Risk approach to define risk-based capital requirements. This paper investigates how the standard formula's stock risk calibration influences the equity position and investment strategy of a shareholder-value-maximizing insurer with limited liability. The capital requirement for stock risk is determined by multiplying a regulation-defined stock risk parameter by the value of the insurer's stock portfolio. Intuitively, a higher stock risk parameter should reduce risky investments as well as insolvency risk. However, we find that the default probability does not necessarily decrease when investment risk is reduced (by increasing the stock risk parameter). We also find that, depending on the precise interaction between assets and liabilities, some insurers will invest conservatively, whereas others will prefer a very risky investment strategy, and a slight change in the stock risk parameter may tip an insurer from a conservative to a high-risk asset allocation.
Greater firm-level transparency through enhanced disclosure provides more information regarding the risk situation of an insurer to its outside stakeholders such as stock investors and policyholders. The disclosure of the insurer's risk-taking can result in negative influences on, for example, its stock performance and insurance demand when stock investors and policyholders are risk-averse. Insurers that are concerned about the potential ex post adverse effects of risk-taking under greater transparency are thus inclined to limit their risks ex ante. In other words, improved firm-level transparency can reduce insurers' risk-taking incentives. This article investigates empirically the relationship between firm-level transparency and insurers' strategies on capitalization and risky investments. An examination of the disclosure levels and risk behavior of 52 European stock insurance companies from 2005 to 2012 shows that insurers tend to hold more equity capital in anticipation of greater transparency, and this capital-holding strategy is consistent across different types of insurance businesses. When considering the influence of improved transparency on the investment policy of insurers, the results are mixed for different types of insurers.
This article explores life insurance consumption in 31 European countries from 2003 to 2012 and aims to investigate the extent to which market transparency can affect life insurance demand. The cross-country evidence for the entire sample period shows that greater market transparency, which resolves asymmetric information, can generate a higher demand for life insurance. However, when considering the financial crisis period (2008-2012) separately, the results suggest a negative impact of enhanced market transparency on life insurance consumption. The mixed findings imply a trade-off between the reduction in adverse selection under greater market transparency and the possible negative effects on life insurance consumption during the crisis period due to more effective market discipline. Furthermore, this article studies the extent to which transparency can influence the reaction of life insurance demand to bad market outcomes: i.e., low solvency ratios or low profitability. The results indicate that the markets with bad outcomes generate higher life insurance demand under greater transparency compared to the markets that also experience bad outcomes but are less transparent.
Loudness in the novel
(2014)
The novel is composed entirely of voices: the most prominent among them is typically that of the narrator, which is regularly intermixed with those of the various characters. In reading through a novel, the reader "hears" these heterogeneous voices as they occur in the text. When the novel is read out loud, the voices are audibly heard. They are also heard, however, when the novel is read silently: in this latter case, the voices are not verbalized for others to hear, but acoustically created and perceived in the mind of the reader. Simply put: sound, in the context of the novel, is fundamentally a product of the novel’s voices. This conception of sound mechanics may at first seem unintuitive—sound seems to be the product of oral reading—but it is only by starting with the voice that one can fully appreciate sound’s function in the novel. Moreover, such a conception of sound mechanics finds affirmation in the works of both Mikhail Bakhtin and Elaine Scarry: "In the novel," writes Bakhtin, "we can always hear voices (even while reading silently to ourselves)."
In my paper I take issue with proponents of 'intersectionality' who believe that a theoretical concept cannot, or should not, be detached from its original context of invention. Instead, I argue that the traveling of theory in a global context automatically involves appropriations, amendments and changes to the original meaning. However, I reject the idea that 'intersectionality' can be used as a free-floating signifier; on the contrary, it has to be embedded in the respective (historical, social, cultural) context in which it is used. I will start by mapping some of the current debates engaging with the pros and cons of the global implementation of the concept (the controversy about master categories, the dispute about the centrality of 'race', and the argument about the amendment of categories). I will then turn to my own use of 'intersectionality' as a methodological tool (elaborated in Lutz and Davis 2005). Here, we shifted attention from how structures of racism, class discrimination and sexism determine individuals' identities and practices to how individuals continually and flexibly negotiate their multiple and converging identities in the context of everyday life. Introducing the term 'doing intersectionality', we explored how individuals creatively, and often in surprising ways, draw upon various aspects of their multiple identities as a resource to gain control over their lives.
In my paper I will show how ‘gender’ or ‘ethnicity’ are invariably linked to structures of domination, but can also mobilize or deconstruct disempowering discourses, even undermine and transform oppressive practices.
Obstetrical care as a matter of time: ultrasound screening in anticipatory regimes of pregnancy
(2014)
This article explores the ways in which ultrasound screening influences the temporal dimensions of prevention in the obstetrical management of pregnancy. Drawing on praxeographic perspectives and empirically based on participant observation of ultrasound examinations in obstetricians’ offices, it asks how ultrasound scanning facilitates anticipatory modes of pregnancy management, and investigates the entanglement of different notions of time and temporality in the highly risk-oriented modes of prenatal care in Germany. Arguing that the paradoxical temporality of prevention – acting now in the name of the future – is intensified by ultrasound screening, I show how the attribution of risk regarding foetal growth in prenatal check-ups is based on the fragmentation of procreative time and ask how time standards come into play, how pregnancy is located in calendrical time, and how notions of foetal time and the everyday life times of pregnant women clash during negotiations between obstetricians and pregnant women about the determination of the due date. By analysing temporality as a practical accomplishment via technological devices such as ultrasound, the paper contributes to debates in feminist STS studies on the role of time in reproduction technologies and the management of pregnancy and birth in contemporary societies.
This assessment concept paper provides a methodological approach for the formative and summative assessment of GIZ's International Water Stewardship Programme (IWaSP) and its component partnerships. IWaSP promotes partnerships between the private sector (corporations and SMEs), the public sector and society to tackle shared water risks and to manage water equitably to meet competing demands. This evaluative assessment concept describes the generic approach of the assessment and the assessment cycles for partnerships, country coordination and the programme.
The overall goal of the assessment is to provide evidence for taxpayers in the donor countries and for citizens in the partnership countries. It also aims to examine the relevance of the programme’s approach, its underlying assumptions, and the heterogeneity of stakeholders and their specific interests. Since the assessment is also formative feedback to GIZ and IWaSP stakeholders, it aims to guide the future implementation of the partnerships and the programme.
The assessment is guided by several generic principles: assessing for learning (formative assessment); assessment of learning (summative assessment); iteration; structuring complex problems; unblocking results; and conformity with other assessment criteria set out by the OECD's Development Assistance Committee (DAC) and GIZ's Capacity Works success factors (GTZ 2010).
These generic criteria are adapted to the three levels of the IWaSP structure. First, the assessment cycle for partnerships includes the validation of stakeholders (mapping), the analysis of secondary literature, face-to-face interviews and a process for feeding back the findings. Generic tools are provided to guide the assessment, such as a list of key documents and an interview guide. Partnerships will undergo a baseline, an interim assessment and a final assessment. As progress varies across individual IWaSP partnerships, the steps taken by each partnership to assess shared water risks and to prioritise and agree interventions are expected to differ slightly. In response to these differences, the sequencing and content of the assessment may need to be adapted for the different partnerships.
Second, the country-level assessment considers issues such as the coordination of partnerships within a country, scoping strategies, and the interaction between the partnerships and the programme. Information gathered during the partnership assessment feeds into the country-level assessment.
Third, the assessment cycle for the programme involves an analysis of documents and the monitoring plan, as well as reflection on the different perspectives of programme staff, country staff and external stakeholders.
The final section is concerned with reporting. Several annexes are provided relating to the organisation and preparation of the assessment, including question guidelines and analysis procedures.
The papers in this volume take up some aspects of the preverbal domain(s) in Bantu languages. They were originally presented at the workshop BantuSynPhonIS: Preverbal Domain(s), held at the Center for General Linguistics (ZAS) in Berlin on 14-15 November 2014. The workshop was co-organized by ZAS (Fatima Hamlaoui & Tonjes Veenstra) and Humboldt University (Tom Güldemann, Yukiko Morimoto and Ines Fiedler).
Europe's debt crisis casts doubt on the effectiveness of fiscal austerity in highly integrated economies. Closed-economy models overestimate its effectiveness, because they underestimate tax-base elasticities and ignore cross-country tax externalities. In contrast, we study tax responses to debt shocks in a two-country model with endogenous utilization that captures those externalities and matches the capital-tax-base elasticity. Quantitative results show that unilateral capital tax hikes cannot restore fiscal solvency in Europe, and have large negative (positive) effects at "home" ("abroad"). Restoring solvency via either Nash competition or Cooperation reduces (increases) capital (labor) taxes significantly, and leaves countries with larger debt shocks preferring autarky.
After the Global Financial Crisis a controversial rush to fiscal austerity followed in many countries. Yet research on the effects of austerity on macroeconomic aggregates was and still is unsettled, mired by the difficulty of identifying multipliers from observational data. This paper reconciles seemingly disparate estimates of multipliers within a unified and state-contingent framework. We achieve identification of causal effects with new propensity-score based methods for time series data. Using this novel approach, we show that austerity is always a drag on growth, and especially so in depressed economies: a one percent of GDP fiscal consolidation translates into 4 percent lower real GDP after five years when implemented in the slump rather than the boom. We illustrate our findings with a counterfactual evaluation of the impact of the U.K. government’s shift to austerity policies in 2010 on subsequent growth.
Austerity
(2014)
We shed light on the function, properties and optimal size of austerity using the standard sovereign model augmented to include incomplete information about credit risk. Austerity is defined as the shortfall of consumption from the level desired by a country and supported by its repayment capacity. We find that austerity serves as a tool for securing a more favorable loan package; that it is associated with over-investment even when investment does not create collateral; and that low-risk borrowers may prefer more severe austerity to less severe austerity. These findings imply that the amount of fresh funds obtained by a sovereign is not a reliable measure of the austerity suffered, and that austerity may actually be associated with higher growth. Our analysis accommodates costly signalling for gaining credibility and also assigns a novel role to spending multipliers in the determination of optimal austerity.
Does austerity pay off?
(2014)
Policy makers often implement austerity measures when the sustainability of public finances is in doubt and, hence, sovereign yield spreads are high. Is austerity successful in bringing about a reduction in yield spreads? We employ a new panel data set which contains sovereign yield spreads for 31 emerging and advanced economies and estimate the effects of cuts in government consumption on yield spreads and economic activity. The conditions under which austerity takes place are crucial. During times of fiscal stress, spreads rise in response to the spending cuts, at least in the short run. In contrast, austerity pays off if conditions are more benign.
In this paper, we investigate how the introduction of complex, model-based capital regulation affected credit risk of financial institutions. Model-based regulation was meant to enhance the stability of the financial sector by making capital charges more sensitive to risk. Exploiting the staggered introduction of the model-based approach in Germany and the richness of our loan-level data set, we show that (1) internal risk estimates employed for regulatory purposes systematically underpredict actual default rates by 0.5 to 1 percentage points; (2) both default rates and loss rates are higher for loans that were originated under the model-based approach, while corresponding risk-weights are significantly lower; and (3) interest rates are higher for loans originated under the model-based approach, suggesting that banks were aware of the higher risk associated with these loans and priced them accordingly. Further, we document that large banks benefited from the reform as they experienced a reduction in capital charges and consequently expanded their lending at the expense of smaller banks that did not introduce the model-based approach. Counter to the stated objectives, the introduction of complex regulation adversely affected the credit risk of financial institutions. Overall, our results highlight the pitfalls of complex regulation and suggest that simpler rules may increase the efficacy of financial regulation.
We show that the correct experiment to evaluate the effects of a fiscal adjustment is the simulation of a multi-year fiscal plan rather than of individual fiscal shocks. Simulation of fiscal plans adopted by 16 OECD countries over a 30-year period supports the hypothesis that the effects of consolidations depend on their design. Fiscal adjustments based upon spending cuts are much less costly, in terms of output losses, than tax-based ones, and have especially low output costs when they consist of permanent rather than stop-and-go changes in taxes and spending. The difference between tax-based and spending-based adjustments appears not to be explained by accompanying policies, including monetary policy. It is mainly due to the different response of business confidence and private investment.
This paper investigates the risk channel of monetary policy on the asset side of banks’ balance sheets. We use a factor-augmented vector autoregression (FAVAR) model to show that aggregate lending standards of U.S. banks, such as their collateral requirements for firms, are significantly loosened in response to an unexpected decrease in the Federal Funds rate. Based on this evidence, we reformulate the costly state verification (CSV) contract to allow for an active financial intermediary, embed it in a New Keynesian dynamic stochastic general equilibrium (DSGE) model, and show that – consistent with our empirical findings – an expansionary monetary policy shock implies a temporary increase in bank lending relative to borrower collateral. In the model, this is accompanied by a higher default rate of borrowers.
Are rules and boundaries sufficient to limit harmful central bank discretion? Lessons from Europe
(2014)
Marvin Goodfriend's (2014) insightful, informative and provocative work explains concisely and convincingly why the Fed needs rules and boundaries. This paper reviews the broader institutional design problem regarding the effectiveness of the central bank in practice and confirms the need for rules and boundaries. The framework proposed for improving the Fed incorporates key elements that have already been adopted in the European Union. The case of ELA provision by the ECB and the Central Bank of Cyprus to Marfin-Laiki Bank during the crisis, however, suggests that the existence of rules and boundaries may not be enough to limit harmful discretion. During a crisis, novel interpretations of the legal authority of the central bank may be introduced to create a grey area that might be exploited to justify harmful discretionary decisions even in the presence of rules and boundaries. This raises the question of how to ensure that rules and boundaries are respected in practice.
This country report was prepared for the 19th World Congress of the International Academy of Comparative Law in Vienna in 2014. It is structured as a questionnaire and provides an overview of the legal framework for Free and Open Source Software (FOSS) and other alternative license models, such as Creative Commons, under German law. The first set of questions addresses the applicable statutory provisions and the reported case law in this area. The second section concerns contractual issues, in particular with regard to the interpretation and validity of open content licenses. The third section deals with copyright aspects of open content models, for example regarding revocation rights and rights to equitable remuneration. The final set of questions pertains to patent, trademark and competition law issues of open content licenses.
Concepts of legal capacity and legal subjectivity have developed gradually through intermediate stages. Accordingly, there are numerous types of legal subjects and partial legal subjects, and ever-new types can develop, at the latest once the law confronts new social and technological challenges. Today such challenges seem to be making themselves felt especially in the field of information and communication technologies. Their specific communicative conditions resulting from the technological networking of social communication have a particularly pronounced influence on legal attributions of identity and action, and hence above all on issues of liability in electronic commerce. Here in particular it is becoming increasingly difficult to distinguish concrete human actors and, for example, to identify them as authors of declarations of intent or even as individually responsible agencies of legal transgressions. The communicative processes in this area appear instead as new kinds of chains of effects whose actors seem to be socio-technical ensembles of people and things rather than individuals, whereby the artificial components of these hybrid human-thing linkages can sometimes even be represented as driving forces and independent agents.
This article examines how the shale oil revolution has shaped the evolution of U.S. crude oil and gasoline prices. It puts the evolution of shale oil production into historical perspective, highlights uncertainties about future shale oil production, and cautions against the view that the U.S. may become the next Saudi Arabia. It then reviews the role of the ban on U.S. crude oil exports, of capacity constraints in refining and transporting crude oil, of differences in the quality of conventional and unconventional crude oil, and of the recent regional fragmentation of the global market for crude oil for the determination of U.S. oil and gasoline prices. It discusses the reasons for the persistent wedge between U.S. crude oil prices and global crude oil prices in recent years and for the fact that domestic oil prices below global levels need not translate to lower U.S. gasoline prices. It explains why the shale oil revolution unlike the shale gas revolution is unlikely to stimulate a boom in oil-intensive manufacturing industries. It also explores the implications of shale oil production for the transmission of oil price shocks to the U.S. economy.
One of the leading methods of estimating the structural parameters of DSGE models is the VAR-based impulse response matching estimator. The existing asymptotic theory for this estimator does not cover situations in which the number of impulse response parameters exceeds the number of VAR model parameters. Situations in which this order condition is violated arise routinely in applied work. We establish the consistency of the impulse response matching estimator in this situation, we derive its asymptotic distribution, and we show how this distribution can be approximated by bootstrap methods. Our methods of inference remain asymptotically valid when the order condition is satisfied, regardless of whether the usual rank condition for the application of the delta method holds. Our analysis sheds new light on the choice of the weighting matrix and covers both weakly and strongly identified DSGE model parameters. We also show that under our assumptions special care is needed to ensure the asymptotic validity of Bayesian methods of inference. A simulation study suggests that the frequentist and Bayesian point and interval estimators we propose are reasonably accurate in finite samples. We also show that using these methods may affect the substantive conclusions in empirical work.
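To make the matching idea concrete, here is a minimal, self-contained sketch of an impulse response matching objective, using a toy model whose responses decay geometrically; the function names (toy_irf, distance), the synthetic "VAR" responses and the identity weighting matrix are illustrative assumptions, not the paper's specification.

```python
# Sketch of impulse response matching: choose theta to minimize the weighted
# distance between VAR-estimated and model-implied impulse responses.
import numpy as np
from scipy.optimize import minimize

H = 12  # number of matched impulse response horizons

def toy_irf(theta, H=H):
    """Impulse responses implied by a toy model: psi_h = b * rho**h."""
    b, rho = theta
    return b * rho ** np.arange(H)

# Pretend these came from an estimated VAR (here: true model plus noise).
rng = np.random.default_rng(0)
irf_hat = toy_irf([1.0, 0.8]) + 0.02 * rng.standard_normal(H)

W = np.eye(H)  # weighting matrix; the paper studies this choice in detail

def distance(theta):
    d = irf_hat - toy_irf(theta)
    return d @ W @ d

res = minimize(distance, x0=[0.5, 0.5], method="Nelder-Mead")
print("estimated (b, rho):", res.x)
```

Note that H = 12 response coefficients are matched with only two model parameters, the kind of situation (more impulse response parameters than model parameters) whose asymptotics the paper addresses.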
On 23 July 2014, the U.S. Securities and Exchange Commission (SEC) passed the “Money Market Reform: Amendments to Form PF,” designed to prevent investor runs on money market mutual funds such as those experienced in institutional prime funds following the bankruptcy of Lehman Brothers. The present article evaluates the reform choices in the U.S. and draws conclusions for the proposed EU regulation of money market funds.
Can a tightening of the bank resolution regime lead to more prudent bank behavior? This policy paper reviews arguments for why this could be the case and presents evidence linking changes in bank resolution regimes with bank risk-taking. The authors find that the tightening of bank resolution in the U.S. (i.e., the introduction of the Orderly Liquidation Authority) significantly decreased overall risk-taking of the most affected banks. This effect, however, does not hold for the largest and most systemically important banks – too-big-to-fail seems to be unresolved. Building on the insights from the U.S. experience, the authors derive principles for effective resolution regimes and evaluate the emerging resolution regime for Europe.
A recent proposal by the Financial Stability Board (FSB) suggests a new risk capital buffer for globally operating systemically important financial institutions. The suggested metric, “Total Loss Absorbing Capacity” (TLAC), is composed of Tier-1 capital and loss absorbing debt. In a crisis situation, “bail-in-able” debt is to be written down or converted into equity. Jan Krahnen argues that the credibility of bail-in, in the case of systemically important financial institutions, hinges crucially on the design of TLAC and the requirements that will be placed on loss absorbing “bail-in-able” debt. The fear of direct systemic consequences through bail-in could be overcome if a holding ban were placed on the “bail-in-bonds” of financial institutions. The holding ban would stipulate that these bonds cannot be held by other institutions within the banking sector.
We characterize optimal redistribution in a dynastic family model with human capital. We show how a government can improve the trade-off between equality and incentives by changing the amount of observable human capital. We provide an intuitive decomposition for the wedge between human-capital investment in the laissez faire and the social optimum. This wedge differs from the wedge for bequests because human capital carries risk: its returns depend on the non-diversifiable risk of children's ability. Thus, human capital investment is encouraged more than bequests in the social optimum if human capital is a bad hedge for consumption risk.
Money is more than memory
(2014)
Impersonal exchange is the hallmark of an advanced society. One key institution for impersonal exchange is money, which economic theory considers just a primitive arrangement for monitoring past conduct in society. If so, then a public record of past actions — or memory — supersedes the function performed by money. This intriguing theoretical postulate remains untested. In an experiment, we show that the suggested functional equality between money and memory does not translate into an empirical equivalence. Monetary systems perform a richer set of functions than just revealing past behaviors, which proves to be crucial in promoting large-scale cooperation.
Emotions-at-risk: an experimental investigation into emotions, option prices and risk perception
(2014)
This paper experimentally investigates how emotions are associated with option prices and risk perception. Using a binary lottery, we find evidence that the emotion ‘surprise’ plays a significant role in the negative correlation between lottery returns and estimates of the price of a put option. Our findings shed new light on various existing theories on emotions and affect. We find gratitude, admiration, and joy to be positively associated with risk perception, although the affect heuristic predicts a negative association. In contrast with the predictions of the appraisal tendency framework (ATF), we document a negative correlation between option price and surprise for lottery winners. Finally, the results show that the option price is not associated with risk perception as commonly used in psychology.
This chapter analyzes the risk and return characteristics of investments in artists from the Middle East and Northern Africa (MENA) region over the sample period 2000 to 2012. Using hedonic regression modeling, we create an annual index based on 3,544 paintings created by 663 MENA artists. Our empirical results show that investing in such a hypothetical index provides strong financial returns. While the results show an exponential growth in sales since 2006, the geometric annual return of the MENA art index is a stable 13.9 percent over the whole period. We conclude that investing in MENA paintings would have been profitable, but also note that we examined the performance of an emerging art market that has so far seen only an upward trend without any correction.
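As an illustration of how such an index is constructed, the following sketch runs a hedonic regression of log prices on artwork characteristics and sale-year dummies and recovers an index from the exponentiated year coefficients; the synthetic data and column names (price, area, year) are assumptions for demonstration, not the chapter's dataset.

```python
# Sketch of a hedonic price index: regress log price on characteristics plus
# year dummies; the year-dummy coefficients trace out the quality-adjusted index.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "year": rng.integers(2000, 2013, n),   # sale year
    "area": rng.uniform(0.2, 4.0, n),      # canvas size (illustrative trait)
})
# Synthetic log prices: a size effect, a market-wide yearly trend, and noise
df["log_price"] = (0.5 * np.log(df["area"])
                   + 0.13 * (df["year"] - 2000)
                   + 0.4 * rng.standard_normal(n))

# Hedonic regression with year fixed effects
fit = smf.ols("log_price ~ np.log(area) + C(year)", data=df).fit()

# Index values relative to the base year (2000 = 1.0)
index = {int(y): float(np.exp(fit.params.get(f"C(year)[T.{y}]", 0.0)))
         for y in sorted(df["year"].unique())}
print(index)
```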
The record-breaking prices observed in the art market over the last three years raise the question of whether we are experiencing a speculative bubble. Given the difficulty of determining the fundamental value of artworks, we apply a right-tailed unit root test with forward recursive regressions (SADF test) to detect explosive behavior directly in the time series of four different art market segments (“Impressionist and Modern”, “Post-war and Contemporary”, “American”, and “Latin American”) for the period from 1970 to 2013. We identify two historical speculative bubbles and find an explosive movement in today’s “Post-war and Contemporary” and “American” fine art market segments.
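For readers unfamiliar with the procedure, here is a minimal sketch of a forward recursive right-tailed unit root statistic in the spirit of the SADF test, using statsmodels' adfuller for each expanding window; the minimum window length and the toy series are illustrative assumptions, and in practice right-tail critical values are obtained by simulation.

```python
# Sketch of a forward recursive (SADF-style) test: compute the ADF statistic
# over expanding samples and take the supremum; explosive behavior pushes the
# statistic into the right tail.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sadf(series, min_window=40):
    """Sup of ADF statistics over forward-expanding windows."""
    stats = []
    for end in range(min_window, len(series) + 1):
        adf_stat = adfuller(series[:end], regression="c", autolag="AIC")[0]
        stats.append(adf_stat)
    return max(stats)

# Toy illustration: a random walk versus a mildly explosive series
rng = np.random.default_rng(2)
walk = np.cumsum(rng.standard_normal(200))
explosive = 1.02 ** np.arange(200) + rng.standard_normal(200)
print("random walk SADF:", round(sadf(walk), 2))
print("explosive  SADF:", round(sadf(explosive), 2))
```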
This paper investigates the impact of news media sentiment on financial market returns and volatility in the long-term. We hypothesize that the way the media formulate and present news to the public produces different perceptions and, thus, incurs different investor behavior. To analyze such framing effects we distinguish between optimistic and pessimistic news frames. We construct a monthly media sentiment indicator by taking the ratio of the number of newspaper articles that contain predetermined negative words to the number of newspaper articles that contain predetermined positive words in the headline and/or the lead paragraph. Our results indicate that pessimistic news media sentiment is positively related to global market volatility and negatively related to global market returns 12 to 24 months in advance. We show that our media sentiment indicator reflects very well the financial market crises and pricing bubbles over the past 20 years.
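A minimal sketch of such a monthly sentiment ratio appears below; the word lists and headlines are illustrative assumptions, whereas a real application would rely on curated dictionaries and a full news archive.

```python
# Sketch of a media sentiment indicator: the ratio of articles containing
# predetermined negative words to articles containing predetermined positive
# words in the headline, computed per month.
NEGATIVE = {"crisis", "recession", "crash", "fear", "losses"}
POSITIVE = {"rally", "growth", "boom", "optimism", "gains"}

def sentiment_ratio(headlines):
    """Ratio of pessimistic to optimistic articles in one month."""
    n_neg = sum(any(w in h.lower().split() for w in NEGATIVE) for h in headlines)
    n_pos = sum(any(w in h.lower().split() for w in POSITIVE) for h in headlines)
    return n_neg / n_pos if n_pos else float("inf")

month = [
    "Markets rally on growth hopes",
    "Banking crisis fears resurface",
    "Recession worries weigh on stocks",
]
print(sentiment_ratio(month))  # 2 pessimistic / 1 optimistic = 2.0
```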
Since the 2008 financial crisis, in which the Reserve Primary Fund “broke the buck,” money market funds (MMFs) have been the subject of ongoing policy debate. Many commentators view MMFs as a key contributor to the crisis because widespread redemption demands during the days following the Lehman bankruptcy contributed to a freeze in the credit markets. In response, MMFs were deemed a component of the nefarious shadow banking industry and targeted for regulatory reform. The Securities and Exchange Commission’s (SEC) misguided 2014 reforms responded by potentially exacerbating MMF fragility while crippling large segments of the MMF industry.
Determining the appropriate approach to MMF reform has been difficult. Bank regulators supported requiring MMFs to trade at a floating net asset value (NAV) rather than a stable $1 share price. By definition, a floating NAV prevents MMFs from breaking the buck, but it is unlikely to eliminate the risk of large redemptions in a time of crisis. Other reform proposals have similar shortcomings. More fundamentally, the SEC’s reforms may substantially reduce the utility of MMFs for many investors, which could, in turn, affect the availability of short-term credit.
The shape of MMF reform has been influenced by a turf war among regulators as the SEC has battled with bank regulators both about the need for additional reforms and about the structure and timing of those reforms. Bank regulators have been influential in shaping the terms of the debate by using banking rhetoric to frame the narrative of MMF fragility. This rhetoric masks a critical difference between banks and MMFs – asset segregation. Unlike banks, MMF sponsors have assets and operations that are separate from the assets of the MMF itself. This difference has caused the SEC to mistake sponsor support as a weakness rather than a key stability-enhancing feature. As a result, the SEC mistakenly adopted reforms that burden sponsor support instead of encouraging it.
As this article explains, required sponsor support offers a novel and simple regulatory solution to MMF fragility. Accordingly, this article proposes that the SEC require MMF sponsors explicitly to guarantee the $1 share price. Taking sponsor support out of the shadows embraces rather than ignores the advantage that MMFs offer over banks through asset partitioning. At the same time, sponsor support harnesses market discipline as a constraint against MMF risk-taking and moral hazard.
How much additional tax revenue can the government generate by increasing labor income taxes? In this paper we provide a quantitative answer to this question and study the importance of the progressivity of the tax schedule for the government's ability to generate tax revenues. We develop a rich overlapping generations model featuring an explicit family structure, extensive and intensive margins of labor supply, endogenous accumulation of labor market experience, as well as standard intertemporal consumption-savings choices in the presence of uninsurable idiosyncratic labor productivity risk. We calibrate the model to US macro, micro and tax data and characterize the labor income tax Laffer curve under the current progressivity of the labor income tax code as well as when varying progressivity. We find that more progressive labor income taxes significantly reduce tax revenues. For the US, converting to a flat tax code raises the peak of the Laffer curve by 6%, whereas converting to a tax system with progressivity similar to Denmark's would lower the peak by 7%. We also show that, relative to a representative agent economy, tax revenues are less sensitive to the progressivity of the tax code in our economy. This finding is due to the fact that the labor supply of two-earner households is less elastic (along the intensive margin) and that the endogenous accumulation of labor market experience makes the labor supply of females less elastic (along the extensive margin) to changes in tax progressivity.
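The peaked-revenue logic behind a Laffer curve can be illustrated with a deliberately stylized static labor supply choice; this sketch is not the paper's overlapping generations economy, and all parameter values are assumptions.

```python
# Stylized Laffer curve: with utility c - l**(1+phi)/(1+phi) and budget
# c = (1-tau)*w*l, the first-order condition gives l = ((1-tau)*w)**(1/phi),
# so revenue tau*w*l(tau) first rises and then falls in the tax rate.
import numpy as np

w, phi = 1.0, 2.0          # wage; phi = inverse Frisch elasticity

def labor(tau):
    return ((1 - tau) * w) ** (1 / phi)   # labor supply from the FOC

def revenue(tau):
    return tau * w * labor(tau)

taus = np.linspace(0.0, 0.99, 100)
peak = taus[np.argmax(revenue(taus))]
print(f"revenue-maximizing flat tax rate: {peak:.2f}")  # phi/(1+phi) = 0.67
```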
US data and new stockholding data from fifteen European countries and China exhibit a common pattern: stockholding shares increase in household income and wealth. Yet there is a multitude of such numbers for models to match. Using a single utility function across households (parsimony), we suggest a strategy for fitting stockholding numbers while replicating that saving rates, too, increase in wealth. The key is introducing subsistence consumption into an Epstein-Zin-Weil utility function, creating endogenous risk-aversion differences across rich and poor. A closed-form solution for the model with insurable labor-income risk serves as a calibration guide for numerical simulations with uninsurable labor-income risk.
There has been a considerable debate about whether disaster models can rationalize the equity premium puzzle. This is because empirically disasters are not single extreme events, but long-lasting periods in which moderate negative consumption growth realizations cluster. Our paper proposes a novel way to explain this stylized fact. By allowing for consumption drops that can spark an economic crisis, we introduce a new economic channel that combines long-run and short-run risk. First, we document that our model can match consumption data of several countries. Second, it generates a large equity risk premium even if consumption drops are of moderate size.
We analyze the implications of the structure of a network for asset prices in a general equilibrium model. Networks are represented via self- and mutually exciting jump processes, and the representative agent has Epstein-Zin preferences. Our approach provides a flexible and tractable unifying foundation for asset pricing in networks. The model endogenously generates results in accordance with, e.g., the robust-yet-fragile feature of financial networks shown in Acemoglu, Ozdaglar, and Tahbaz-Salehi (2014) and the positive centrality premium documented in Ahern (2013). We also show that models with simpler preference assumptions cannot generate all these findings simultaneously.
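The self-exciting jump processes underlying this network representation can be sketched as follows, using Ogata's thinning algorithm for a univariate Hawkes process with an exponential kernel; all parameter values are illustrative assumptions, and the paper's multivariate, mutually exciting specification is correspondingly richer.

```python
# Sketch of a self-exciting (Hawkes) jump process: each jump temporarily
# raises the arrival intensity of further jumps, which then decays.
import numpy as np

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, T=50.0, seed=3):
    """Intensity: lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta*(t - t_i))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        # Intensity decays between events, so its current value is an upper bound
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)             # candidate jump time
        lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar and t < T:  # accept w.p. lam/lam_bar
            events.append(t)
    return events

jumps = simulate_hawkes()
print(f"{len(jumps)} jumps; clustering reflects the self-excitation")
```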
Using data from the US Health and Retirement Study, we study the causal effect of increased health insurance coverage through Medicare and the associated reduction in health-related background risk on financial risk-taking. Given the onset of Medicare at age 65, we identify our effect of interest using a regression discontinuity approach. We find that getting Medicare coverage induces stockholding for those with at least some college education, but not for their less-educated counterparts. Hence, our results indicate that a reduction in background risk induces financial risk-taking in individuals for whom informational and pecuniary stock market participation costs are relatively low.
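A minimal sketch of the regression discontinuity logic is given below on synthetic data: a local linear regression with separate slopes on either side of the age-65 cutoff recovers the jump in stockholding at Medicare onset; the variable names and the size of the jump are assumptions, not HRS estimates.

```python
# Sketch of a regression discontinuity at age 65: the coefficient on the
# treatment dummy estimates the jump in the outcome at the cutoff.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
age = rng.uniform(55, 75, n)
medicare = (age >= 65).astype(float)            # coverage begins at 65
# Synthetic outcome: a 0.08 jump in stockholding probability at the cutoff
p = 0.2 + 0.01 * (age - 65) + 0.08 * medicare
stockholder = (rng.uniform(size=n) < p).astype(float)

df = pd.DataFrame({"stockholder": stockholder, "a": age - 65, "d": medicare})
# Local linear regression with separate slopes on either side of the cutoff
fit = smf.ols("stockholder ~ d + a + d:a", data=df).fit()
print("estimated jump at age 65:", round(fit.params["d"], 3))
```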
We examine both the degree and the structural stability of inflation persistence at different quantiles of the conditional inflation distribution. Previous research focused exclusively on persistence at the conditional mean of the inflation rate. As economic theory provides reasons for inflation persistence to differ across conditional quantiles, this is a potentially severe constraint. Conventional studies of inflation persistence cannot identify changes in persistence at selected quantiles that leave persistence at the median of the distribution unchanged. Based on post-war US data we indeed find robust evidence for a structural break in persistence at all quantiles of the inflation process in the early 1980s. While prior to the 1980s inflation was not mean reverting, quantile autoregression based unit root tests suggest that since the end of the Volcker disinflation the unit root can be rejected at every quantile of the conditional inflation distribution.
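To illustrate the core estimator, the following sketch measures AR(1) persistence at several conditional quantiles with statsmodels' QuantReg on a synthetic inflation series; the series and the quantile choices are illustrative assumptions.

```python
# Sketch of a quantile autoregression: the slope on lagged inflation at
# quantile q measures persistence in that part of the conditional distribution.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
# Synthetic AR(1) "inflation" with persistence 0.9
pi = np.zeros(400)
for t in range(1, 400):
    pi[t] = 0.9 * pi[t - 1] + rng.standard_normal()

y, x = pi[1:], sm.add_constant(pi[:-1])
for q in (0.1, 0.5, 0.9):
    rho_q = sm.QuantReg(y, x).fit(q=q).params[1]   # slope on lagged inflation
    print(f"persistence at the {q:.0%} quantile: {rho_q:.2f}")
```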
The European Central Bank (ECB) has finalized its comprehensive assessment of the solvency of the largest banks in the euro area and on October 26 disclosed the results of this assessment. In the present paper, Acharya and Steffen compare the outcomes of the ECB's assessment to their own benchmark stress tests conducted for 39 publicly listed financial institutions that are also included in the ECB's regulatory review. The authors identify a negative correlation between their benchmark estimates for capital shortfalls and the regulatory capital shortfall, but a positive correlation between their benchmark estimates for losses under stress in the banking book and in the trading book. They conclude that the regulatory stress test outcomes are potentially heavily affected by the discretion of national regulators in measuring what counts as capital, and especially by the use of risk-weighted assets in calculating the prudential capital requirement.
Robustness, validity, and significance of the ECB's asset quality review and stress test exercise
(2014)
As we are moving toward a eurozone banking union, the European Central Bank (ECB) is going to take over the regulatory oversight of 128 banks in November 2014. To that end, the ECB conducted a comprehensive assessment of these banks, which included an asset quality review (AQR) and a stress test. The fundamental question is how accurately the financial condition of these banks will have been assessed by the ECB when it commences its regulatory oversight. Can the comprehensive assessment lead to a full repair of banks’ balance sheets, so that the ECB takes over financially sound banks, and is the necessary regulation in place to facilitate this? Overall, the evidence presented in this paper, based on the design of the comprehensive assessment as well as the authors’ own stress test exercises, suggests that the ECB’s assessment might not comprehensively deal with the problems in the financial sector, and risks may remain that will pose substantial threats to financial stability in the eurozone.
On average, "young" people underestimate whereas "old" people overestimate their chances to survive into the future. We adopt a Bayesian learning model of ambiguous survival beliefs which replicates these patterns. The model is embedded within a non-expected utility model of life-cycle consumption and saving. Our analysis shows that agents with ambiguous survival beliefs (i) save less than originally planned, (ii) exhibit undersaving at younger ages, and (iii) hold larger amounts of assets in old age than their rational expectations counterparts who correctly assess their survival probabilities. Our ambiguity-driven model therefore simultaneously accounts for three important empirical findings on household saving behavior.
This paper investigates extensions of the method of endogenous gridpoints (ENDGM) introduced by Carroll (2006) to higher dimensions with more than one continuous endogenous state variable. We compare three different categories of algorithms: (i) the conventional method with exogenous grids (EXOGM), (ii) the pure method of endogenous gridpoints (ENDGM) and (iii) a hybrid method (HYBGM). ENDGM involves Delaunay interpolation on irregular grids. The methods are compared in terms of speed and accuracy. We find that HYBGM and ENDGM both dominate EXOGM. In an infinite horizon model, ENDGM also always dominates HYBGM. In a finite horizon model, the choice between HYBGM and ENDGM depends on the number of gridpoints in each dimension: with fewer than 150 gridpoints in each dimension ENDGM is faster than HYBGM, and vice versa. For a standard choice of 25 to 50 gridpoints in each dimension, ENDGM is 1.4 to 1.7 times faster than HYBGM in the finite horizon version and 2.4 to 2.5 times faster in the infinite horizon version of the model.
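For orientation, here is a minimal one-dimensional endogenous gridpoint step in the spirit of Carroll (2006) for a CRRA consumption-savings problem; the parameters and income grid are assumptions, and the handling of the borrowing constraint is omitted for brevity.

```python
# Sketch of the endogenous gridpoint method: fix a grid over end-of-period
# assets a', invert the Euler equation to get consumption, and recover the
# endogenous cash-on-hand grid m = c + a' without any root-finding.
import numpy as np

beta, R, gamma = 0.96, 1.03, 2.0
a_grid = np.linspace(0.0, 10.0, 50)                    # end-of-period assets
y, prob = np.array([0.7, 1.3]), np.array([0.5, 0.5])   # income shocks

def egm_step(m_next, c_next):
    """Update the policy c(m) given next period's policy on grid m_next."""
    # Next-period consumption at m' = R*a' + y', by linear interpolation
    c_prime = np.array([np.interp(R * a_grid + yi, m_next, c_next) for yi in y])
    # Expected marginal utility, then invert u'(c) = c**(-gamma)
    emu = prob @ c_prime ** (-gamma)
    c = (beta * R * emu) ** (-1.0 / gamma)
    m = c + a_grid                # endogenous cash-on-hand gridpoints
    return m, c

# Iterate to approximate convergence, starting from consuming everything
m, c = np.linspace(0.1, 12.0, 50), np.linspace(0.1, 12.0, 50)
for _ in range(500):
    m, c = egm_step(m, c)
print("consumption at m=5:", round(float(np.interp(5.0, m, c)), 3))
```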
When markets are incomplete, social security can partially insure against idiosyncratic and aggregate risks. We incorporate both risks into an analytically tractable model with two overlapping generations and demonstrate that they interact over the life-cycle. The interactions appear even though the two risks are orthogonal, and they amplify the welfare consequences of introducing social security. On the one hand, the interactions increase the welfare benefits from insurance. On the other hand, they can increase or decrease the welfare costs from crowding out of capital formation. Despite this ambiguous effect on crowding out, the net effect of the two channels is positive, hence the interactions of risks increase the total welfare benefits of social security.
We outline a procedure for consistent estimation of marginal and joint default risk in the euro area financial system. We interpret the latter risk as the intrinsic financial system fragility and derive several systemic fragility indicators for euro area banks and sovereigns, based on CDS prices. Our analysis documents that although the fragility of the euro area banking system had started to deteriorate before Lehman Brothers' bankruptcy filing, investors did not expect the crisis to affect euro area sovereigns' solvency until September 2008. Since then, and especially after November 2009, joint sovereign default risk has outpaced the rise of systemic risk within the banking system.
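As a simple point of reference for how default risk can be read off CDS prices, the following sketch uses the textbook constant-hazard approximation; it is not the paper's consistent joint estimation procedure, and the recovery rate is an assumption.

```python
# Sketch of CDS-implied marginal default risk under a constant hazard rate:
# spread ~ hazard * (1 - recovery), so PD(T) = 1 - exp(-hazard * T).
import math

def cds_implied_pd(spread_bps, horizon_years, recovery=0.4):
    hazard = (spread_bps / 1e4) / (1.0 - recovery)
    return 1.0 - math.exp(-hazard * horizon_years)

# e.g. a 5-year CDS quoted at 300 basis points
print(f"5y default probability: {cds_implied_pd(300, 5):.1%}")  # about 22%
```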
This paper studies the effect of graduating from college on lifetime earnings. We develop a quantitative model of college choice with uncertain graduation. Departing from much of the literature, we model in detail how students progress through college. This allows us to parameterize the model using transcript data. College transcripts reveal substantial and persistent heterogeneity in students’ credit accumulation rates that are strongly related to graduation outcomes. From this data, the model infers a large ability gap between college graduates and high school graduates that accounts for 54% of the college lifetime earnings premium.
This paper distils three lessons for bank regulation from the experience of the 2009-12 euro-area financial crisis. First, it highlights the key role that sovereign debt exposures of banks have played in the feedback loop between bank and fiscal distress, and inquires how the regulation of banks’ sovereign exposures in the euro area should be changed to mitigate this feedback loop in the future. Second, it explores the relationship between the forbearance of non-performing loans by European banks and the tendency of EU regulators to rescue rather than resolve distressed banks, and asks to what extent the new regulatory framework of the euro-area “banking union” can be expected to mitigate excessive forbearance and facilitate resolution of insolvent banks. Finally, the paper highlights that capital requirements based on the ratio of Tier-1 capital to banks’ risk-weighted assets were massively gamed by large banks, which engaged in various forms of regulatory arbitrage to minimize their capital charges while expanding leverage. This argues in favor of relying on a set of simpler and more robust indicators to determine banks’ capital shortfall, such as book and market leverage ratios.
We study a model where some investors ("hedgers") are bad at information processing, while others ("speculators") have superior information-processing ability and trade purely to exploit it. The disclosure of financial information induces a trade externality: if speculators refrain from trading, hedgers do the same, depressing the asset price. Market transparency reinforces this mechanism, by making speculators' trades more visible to hedgers. As a consequence, issuers will oppose both the disclosure of fundamentals and trading transparency. Issuers may either under- or over-provide information compared to the socially efficient level if speculators have more bargaining power than hedgers, while they never under-provide it otherwise. When hedgers have low financial literacy, forbidding their access to the market may be socially efficient.
Most simulated micro-founded macro models use solely consumer-demand aggregates in order to estimate deep economy-wide preference parameters, which are useful for policy evaluation. The underlying demand-aggregation properties that this approach requires should be easy to disprove empirically: since household-consumption choices differ for households with more members, aggregation can be rejected if appropriate data violate an affine equation regarding how much individuals benefit from within-household sharing of goods. We develop a survey method that tests the validity of this equation without utility-estimation restrictions via models. Surprisingly, in six countries, this equation is not rejected, lending support to using consumer-demand aggregates.
Riley's (1979) reactive equilibrium concept addresses problems of equilibrium existence in competitive markets with adverse selection. The game-theoretic interpretation of the reactive equilibrium concept in Engers and Fernandez (1987) yields the Rothschild-Stiglitz (1976)/Riley (1979) allocation as an equilibrium allocation; however, multiplicity of equilibria emerges. In this note we embed the reactive equilibrium's logic in a dynamic market context with active consumers. We show that the Riley/Rothschild-Stiglitz contracts constitute the unique equilibrium allocation in any pure-strategy subgame perfect Nash equilibrium.