How much additional tax revenue can the government generate by increasing labor income taxes? In this paper we provide a quantitative answer to this question and study how the progressivity of the tax schedule shapes the government's ability to generate tax revenues. We develop a rich overlapping generations model featuring an explicit family structure, extensive and intensive margins of labor supply, endogenous accumulation of labor market experience, as well as standard intertemporal consumption-savings choices in the presence of uninsurable idiosyncratic labor productivity risk. We calibrate the model to US macro, micro, and tax data and characterize the labor income tax Laffer curve under the current progressivity of the labor income tax code as well as under varying progressivity. We find that more progressive labor income taxes significantly reduce tax revenues. For the US, converting to a flat tax code raises the peak of the Laffer curve by 6%, whereas converting to a tax system with progressivity similar to Denmark's would lower the peak by 7%. We also show that, relative to a representative agent economy, tax revenues are less sensitive to the progressivity of the tax code in our economy. This finding is due to the fact that labor supply of two-earner households is less elastic (along the intensive margin) and that the endogenous accumulation of labor market experience makes female labor supply less elastic (along the extensive margin) to changes in tax progressivity.
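The shape of a labor income tax Laffer curve can be illustrated with a deliberately simple flat-tax example. This is a toy sketch under quasi-linear utility with hypothetical parameters, far removed from the paper's overlapping generations economy:

```python
import numpy as np

# Toy Laffer curve under a flat labor-income tax. Household utility is
# u(c, h) = c - chi * h**(1 + 1/phi) / (1 + 1/phi) with c = (1 - tau) * w * h.
# All parameter values are hypothetical and only illustrate the hump shape.
w, chi, phi = 1.0, 1.0, 1.0          # wage, disutility weight, Frisch elasticity

def hours(tau):
    # labor supply FOC: (1 - tau) * w = chi * h**(1/phi)
    return ((1.0 - tau) * w / chi) ** phi

def revenue(tau):
    return tau * w * hours(tau)

taus = np.linspace(0.0, 1.0, 10001)
peak_tau = taus[np.argmax(revenue(taus))]
# analytic peak for this toy model: tau* = 1 / (1 + phi)
```

With a Frisch elasticity of 1 the toy revenue curve is tau * (1 - tau), peaking at a 50% tax rate; raising the elasticity moves the peak left, which is the sense in which more elastic labor supply constrains revenue.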
US data and new stockholding data from fifteen European countries and China exhibit a common pattern: stockholding shares increase in household income and wealth. Yet this pattern implies a multitude of numbers for models to match. Using a single utility function across households (parsimony), we suggest a strategy for fitting stockholding numbers while also replicating that saving rates increase in wealth. The key is introducing subsistence consumption into an Epstein-Zin-Weil utility function, creating endogenous risk-aversion differences between rich and poor. A closed-form solution for the model with insurable labor-income risk serves as a calibration guide for numerical simulations with uninsurable labor-income risk.
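The mechanism, subsistence consumption generating endogenous risk-aversion differences across rich and poor, can be sketched in its simplest special case. The snippet below uses plain CRRA felicity rather than the paper's Epstein-Zin-Weil recursion, and the parameter values are hypothetical:

```python
# Risk-aversion channel from subsistence consumption, CRRA special case:
# u(c) = (c - cbar)**(1 - gamma) / (1 - gamma) for c > cbar.
# Relative risk aversion is then RRA(c) = -c * u''(c) / u'(c)
#                                       = gamma * c / (c - cbar),
# which is large near subsistence and falls toward gamma as c grows.
gamma, cbar = 2.0, 0.5   # hypothetical curvature and subsistence level

def relative_risk_aversion(c):
    return gamma * c / (c - cbar)
```

A poor household consuming 1.0 has risk aversion 4.0 here, while a rich household consuming 10.0 is close to the asymptotic value of 2.0, which is the sense in which the poor endogenously shy away from stocks.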
There has been a considerable debate about whether disaster models can rationalize the equity premium puzzle. This is because empirically disasters are not single extreme events, but long-lasting periods in which moderate negative consumption growth realizations cluster. Our paper proposes a novel way to explain this stylized fact. By allowing for consumption drops that can spark an economic crisis, we introduce a new economic channel that combines long-run and short-run risk. First, we document that our model can match consumption data of several countries. Second, it generates a large equity risk premium even if consumption drops are of moderate size.
We analyze the implications of the structure of a network for asset prices in a general equilibrium model. Networks are represented via self- and mutually exciting jump processes, and the representative agent has Epstein-Zin preferences. Our approach provides a flexible and tractable unifying foundation for asset pricing in networks. The model endogenously generates results in accordance with, e.g., the robust-yet-fragile feature of financial networks shown in Acemoglu, Ozdaglar, and Tahbaz-Salehi (2014) and the positive centrality premium documented in Ahern (2013). We also show that models with simpler preference assumptions cannot generate all these findings simultaneously.
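The self-exciting jump processes that represent networks here can be simulated with Ogata's thinning algorithm. The sketch below is a univariate special case with hypothetical parameters, not the paper's mutually exciting multivariate system:

```python
import numpy as np

# Univariate self-exciting (Hawkes) process with exponential decay:
# intensity lambda(t) = mu + sum over past events s of alpha * exp(-beta * (t - s)).
# Parameters are hypothetical; branching ratio alpha / beta = 0.4 < 1 keeps
# the process stationary.
mu, alpha, beta = 0.5, 0.8, 2.0
rng = np.random.default_rng(1)

def simulate_hawkes(T):
    """Ogata thinning: propose events at an upper-bound rate and accept each
    with probability (true intensity) / (bound). Between events the intensity
    only decays, so the intensity at the last step is a valid bound."""
    t, events = 0.0, []
    lam = mu                                  # intensity at the current time
    while t < T:
        lam_bar = lam                         # upper bound until next event
        t += rng.exponential(1.0 / lam_bar)   # propose the next event time
        lam = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam / lam_bar and t < T:
            events.append(t)
            lam += alpha                      # self-excitation: intensity jumps
    return np.array(events)

events = simulate_hawkes(200.0)
```

Clustering is visible directly in the output: accepted events raise the intensity, making further events more likely in their immediate aftermath, the analogue of shocks propagating through the network.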
Using data from the US Health and Retirement Study, we study the causal effect of increased health insurance coverage through Medicare and the associated reduction in health-related background risk on financial risk-taking. Given the onset of Medicare at age 65, we identify our effect of interest using a regression discontinuity approach. We find that getting Medicare coverage induces stockholding for those with at least some college education, but not for their less-educated counterparts. Hence, our results indicate that a reduction in background risk induces financial risk-taking in individuals for whom informational and pecuniary stock market participation costs are relatively low.
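The identification idea, a regression discontinuity at the age-65 Medicare threshold, can be sketched on simulated data. The data-generating process, jump size, and bandwidth below are hypothetical illustrations, not estimates from the HRS:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
age = rng.uniform(55, 75, n)
eligible = (age >= 65).astype(float)
# hypothetical DGP: smooth age trend in stockholding plus a 5 pp jump at 65
p_stock = 0.20 + 0.005 * (age - 65) + 0.05 * eligible
holds_stock = (rng.uniform(size=n) < p_stock).astype(float)

def rd_effect(x, y, cutoff=65.0, bandwidth=3.0):
    """Local linear regression on each side of the cutoff; the discontinuity
    estimate is the difference between the two fitted intercepts at the cutoff."""
    def intercept_at_cutoff(mask):
        X = np.column_stack([np.ones(mask.sum()), x[mask] - cutoff])
        coef, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        return coef[0]
    below = (x >= cutoff - bandwidth) & (x < cutoff)
    above = (x >= cutoff) & (x <= cutoff + bandwidth)
    return intercept_at_cutoff(above) - intercept_at_cutoff(below)

effect = rd_effect(age, holds_stock)
```

Fitting separate local linear trends on each side, rather than comparing raw means, is what keeps the smooth age profile from contaminating the estimated jump at eligibility.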
We examine both the degree and the structural stability of inflation persistence at different quantiles of the conditional inflation distribution. Previous research focused exclusively on persistence at the conditional mean of the inflation rate. As economic theory provides reasons for inflation persistence to differ across conditional quantiles, this is a potentially severe constraint. Conventional studies of inflation persistence cannot identify changes in persistence at selected quantiles that leave persistence at the median of the distribution unchanged. Based on post-war US data, we indeed find robust evidence for a structural break in persistence at all quantiles of the inflation process in the early 1980s. While prior to the 1980s inflation was not mean reverting, quantile-autoregression-based unit root tests suggest that since the end of the Volcker disinflation the unit root can be rejected at every quantile of the conditional inflation distribution.
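A stripped-down version of quantile-autoregression persistence estimation can be sketched as follows. The simulated series has constant persistence by construction (so all quantile slopes should agree), everything is hypothetical, and the paper's tests and inference are far more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate an AR(1) "inflation" series with persistence rho = 0.8
T, rho = 5000, 0.8
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = rho * pi[t - 1] + rng.standard_normal()
y, x = pi[1:], pi[:-1]

def check_loss(u, tau):
    # Koenker-Bassett check function: tau * u for u >= 0, (tau - 1) * u for u < 0
    return np.sum(u * (tau - (u < 0)))

def qar_slope(y, x, tau):
    """Quantile-AR(1) slope by profiled grid search: for each candidate slope
    the optimal intercept is the tau-quantile of the residuals."""
    grid = np.linspace(0.0, 1.2, 241)
    losses = [check_loss(y - b * x - np.quantile(y - b * x, tau), tau)
              for b in grid]
    return grid[int(np.argmin(losses))]

slopes = {tau: qar_slope(y, x, tau) for tau in (0.1, 0.5, 0.9)}
```

With i.i.d. Gaussian errors the estimated persistence is roughly 0.8 at every quantile; the paper's point is precisely that in real inflation data the slopes, and breaks in them, need not coincide across quantiles.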
The European Central Bank (ECB) has finalized its comprehensive assessment of the solvency of the largest banks in the euro area and on October 26 disclosed the results of this assessment. In the present paper, Acharya and Steffen compare the outcomes of the ECB's assessment to their own benchmark stress tests conducted for 39 publicly listed financial institutions that are also included in the ECB's regulatory review. The authors identify a negative correlation between their benchmark estimates for capital shortfalls and the regulatory capital shortfall, but a positive correlation between their benchmark estimates for losses under stress both in the banking book and in the trading book. They conclude that the regulatory stress test outcomes are potentially heavily affected by the discretion of national regulators in measuring what counts as capital, and especially by the use of risk-weighted assets in calculating the prudential capital requirement.
Robustness, validity, and significance of the ECB's asset quality review and stress test exercise
(2014)
As we are moving toward a eurozone banking union, the European Central Bank (ECB) is going to take over the regulatory oversight of 128 banks in November 2014. To that end, the ECB conducted a comprehensive assessment of these banks, which included an asset quality review (AQR) and a stress test. The fundamental question is how accurately the financial condition of these banks will have been assessed by the ECB when it commences its regulatory oversight. Can the comprehensive assessment lead to a full repair of banks’ balance sheets, so that the ECB takes over financially sound banks, and is the necessary regulation in place to facilitate this? Overall, the evidence presented in this paper, based on the design of the comprehensive assessment as well as on the authors’ own stress test exercises, suggests that the ECB’s assessment might not comprehensively deal with the problems in the financial sector, and that risks may remain that pose substantial threats to financial stability in the eurozone.
On average, "young" people underestimate whereas "old" people overestimate their chances to survive into the future. We adopt a Bayesian learning model of ambiguous survival beliefs which replicates these patterns. The model is embedded within a non-expected utility model of life-cycle consumption and saving. Our analysis shows that agents with ambiguous survival beliefs (i) save less than originally planned, (ii) exhibit undersaving at younger ages, and (iii) hold larger amounts of assets in old age than their rational expectations counterparts who correctly assess their survival probabilities. Our ambiguity-driven model therefore simultaneously accounts for three important empirical findings on household saving behavior.
This paper investigates extensions of the method of endogenous gridpoints (ENDGM) introduced by Carroll (2006) to higher dimensions with more than one continuous endogenous state variable. We compare three categories of algorithms: (i) the conventional method with exogenous grids (EXOGM), (ii) the pure method of endogenous gridpoints (ENDGM), and (iii) a hybrid method (HYBGM). ENDGM requires Delaunay interpolation on the resulting irregular grids. We compare the methods by evaluating speed and accuracy. We find that HYBGM and ENDGM both dominate EXOGM. In an infinite horizon model, ENDGM also always dominates HYBGM. In a finite horizon model, the choice between HYBGM and ENDGM depends on the number of gridpoints in each dimension: with fewer than 150 gridpoints in each dimension ENDGM is faster than HYBGM, and vice versa. For a standard choice of 25 to 50 gridpoints in each dimension, ENDGM is 1.4 to 1.7 times faster than HYBGM in the finite horizon version and 2.4 to 2.5 times faster in the infinite horizon version of the model.
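The core trick of Carroll's method can be sketched in one dimension. The sketch assumes a minimal deterministic consumption-savings model with CRRA utility and hypothetical parameters, far simpler than the multi-dimensional setting the paper studies:

```python
import numpy as np

# One-dimensional endogenous gridpoints (Carroll 2006), minimal sketch.
# Instead of root-finding the Euler equation on an exogenous cash-on-hand
# grid, fix a grid over END-of-period assets a', invert the Euler equation
# analytically, and let the current cash-on-hand gridpoints fall out.
beta, R, gamma, y = 0.95, 1.02, 2.0, 1.0   # discount, gross return, CRRA, income

u_prime = lambda c: c ** (-gamma)
u_prime_inv = lambda v: v ** (-1.0 / gamma)

a_grid = np.linspace(1e-6, 10.0, 100)      # exogenous end-of-period asset grid

def interp_policy(m, m_grid, c_grid):
    # prepend (0, 0) so that below the first gridpoint the borrowing
    # constraint c = m is (approximately) enforced
    return np.interp(m, np.append(0.0, m_grid), np.append(0.0, c_grid))

m_grid = a_grid + y                         # initial endogenous grid
c_grid = m_grid.copy()                      # initial guess: consume everything

for _ in range(5000):
    m_next = R * a_grid + y                 # cash-on-hand tomorrow for each a'
    c_next = interp_policy(m_next, m_grid, c_grid)
    # invert the Euler equation u'(c) = beta * R * u'(c'): no root-finding
    c_new = u_prime_inv(beta * R * u_prime(c_next))
    m_new = c_new + a_grid                  # ENDOGENOUS gridpoints fall out directly
    err = np.max(np.abs(c_new - interp_policy(m_new, m_grid, c_grid)))
    m_grid, c_grid = m_new, c_new
    if err < 1e-9:
        break
```

Each iteration costs one interpolation and one closed-form inversion per gridpoint. In one dimension the endogenous grid stays sorted, so linear interpolation suffices; in higher dimensions the endogenous gridpoints form the irregular clouds that force the Delaunay interpolation discussed in the abstract.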