Learning and equilibrium selection in a monetary overlapping generations model with sticky prices
(2003)
We study adaptive learning in a monetary overlapping generations model with sticky prices and monopolistic competition for the case where learning agents observe current endogenous variables. Observability of current variables is essential for informational consistency of the learning setup with the model setup, but it generates multiple temporary equilibria when prices are flexible and prevents a straightforward construction of the learning dynamics. Sticky prices overcome this problem by avoiding simultaneity between prices and price expectations. Adaptive learning then robustly selects the determinate (monetary) steady state, independently of the degree of imperfect competition. The indeterminate (non-monetary) steady state and non-stationary equilibria are never stable. Stability in a deterministic version of the model may differ because perfect foresight equilibria can be the limit of restricted perceptions equilibria of the stochastic economy with vanishing noise and thereby inherit different stability properties. This discontinuity at zero shock variance suggests analyzing learning in stochastic models.
This paper considers a sticky price model with a cash-in-advance constraint where agents forecast inflation rates with the help of econometric models. Agents use least squares learning to estimate two competing models, of which one is consistent with rational expectations once learning is complete. When past performance governs the choice of forecast model, agents may prefer to use the inconsistent forecast model, which generates an equilibrium where forecasts are inefficient. While average output and inflation turn out the same as under rational expectations, higher moments differ substantially: output and inflation show persistence, inflation responds sluggishly to nominal disturbances, and the dynamic correlations of output and inflation match U.S. data surprisingly well.
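The least squares learning mechanism referenced in the abstracts above can be sketched with a recursive least squares (RLS) update. The AR(1) law of motion, the decreasing gain, and all parameter values below are illustrative assumptions, not the papers' calibrations.

```python
import numpy as np

def rls_update(theta, R, x, y, gain):
    """One recursive least squares step: update the moment matrix R and
    the coefficient estimate theta given regressors x and outcome y."""
    R = R + gain * (np.outer(x, x) - R)
    theta = theta + gain * np.linalg.solve(R, x * (y - x @ theta))
    return theta, R

# Illustrative example: agents learn the law of motion
# pi_t = a + b * pi_{t-1} + noise by re-estimating (a, b) every period.
rng = np.random.default_rng(0)
a_true, b_true = 1.0, 0.5
theta = np.zeros(2)          # initial beliefs about (a, b)
R = np.eye(2)                # initial moment matrix
pi_prev = 2.0
for t in range(1, 5001):
    pi = a_true + b_true * pi_prev + 0.1 * rng.standard_normal()
    x = np.array([1.0, pi_prev])
    theta, R = rls_update(theta, R, x, pi, 1.0 / (t + 1))
    pi_prev = pi

print(theta)  # beliefs converge toward the true values (1.0, 0.5)
```

With the decreasing gain 1/(t+1), beliefs settle down to the least squares estimates; a constant gain instead would keep beliefs perpetually responsive, which is the variant often used to generate persistent learning dynamics.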
This paper compares Bayesian decision theory with robust decision theory where the decision maker optimizes with respect to the worst state realization. For a class of robust decision problems there exists a sequence of Bayesian decision problems whose solutions converge towards the robust solution. It is shown that the limiting Bayesian problem displays infinite risk aversion and that decisions are insensitive (robust) to the precise assignment of prior probabilities. This holds independently of whether the preference for robustness is global or restricted to local perturbations around some reference model.
We study optimal nominal demand policy in an economy with monopolistic competition and flexible prices when firms have imperfect common knowledge about the shocks hitting the economy. Parametrizing firms' information imperfections by a (Shannon) capacity parameter that constrains the amount of information flowing to each firm, we study how policy that minimizes a quadratic objective in output and prices depends on this parameter. When price setting decisions of firms are strategic complements, for a large range of capacity values optimal policy nominally accommodates mark-up shocks in the short run. This finding is robust to the policy maker observing shocks imperfectly or being uncertain about firms' capacity parameter. With persistent mark-up shocks, accommodation may increase in the medium term but decreases in the long run, thereby generating a hump-shaped price response and a slow reduction in output. When prices are strategic substitutes, by contrast, policy tends to react restrictively to mark-up shocks. However, rational expectations equilibria may then not exist with small amounts of imperfect common knowledge.
Ignoring the existence of the zero lower bound on nominal interest rates considerably understates the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with lower bound, calibrated to the U.S. economy, suggests that low values for the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and deflation are much larger than in the case with policy commitment and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL Classification: E31, E52
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL Classification: C63, E31, E52
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not imply positive average inflation rates in equilibrium. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks.
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Classification: C62, D83, D84, E31
Motivated by the observation that survey expectations of stock returns are inconsistent with rational return expectations under real-world probabilities, we investigate whether alternative expectations hypotheses entertained in the asset pricing literature are consistent with the survey evidence. We empirically test (1) the notion that survey forecasts constitute rational but risk-neutral forecasts of future returns, and (2) the notion that survey forecasts are ambiguity averse/robust forecasts of future returns. We find that these alternative hypotheses are also strongly rejected by the data, albeit for different reasons. Hypothesis (1) is rejected because survey return forecasts are not in line with risk-free interest rates and because survey expected excess returns are predictable. Hypothesis (2) is rejected because agents are not always pessimistic about future returns but often display overly optimistic return expectations. We speculate as to what kind of expectations theories might be consistent with the available survey evidence.
Optimal trend inflation
(2017)
We present a sticky-price model incorporating heterogeneous firms and systematic firm-level productivity trends. Aggregating the model in closed form, we show that it delivers radically different predictions for the optimal inflation rate than canonical sticky-price models featuring homogeneous firms:
(1) the optimal steady-state inflation rate generically differs from zero and,
(2) inflation optimally responds to productivity disturbances.
Using micro data from the US Census Bureau to estimate the inflation-relevant productivity trends at the firm level, we find that the optimal US inflation rate is positive. It was slightly above 2 percent in the year 1986, but continuously declined thereafter, reaching about 1 percent in the year 2013.
We analytically characterize optimal monetary policy for an augmented New Keynesian model with a housing sector. In a setting where the private sector has rational expectations about future housing prices and inflation, optimal monetary policy can be characterized without making reference to housing price developments: commitment to a 'target criterion' that refers to inflation and the output gap only is optimal, as in the standard model without a housing sector. When the policymaker is concerned with potential departures of private sector expectations from rational ones and seeks to choose a policy that is robust against such possible departures, then the optimal target criterion must also depend on housing prices. In the empirically realistic case where housing is subsidized and where monopoly power causes output to fall short of its optimal level, the robustly optimal target criterion requires the central bank to 'lean against' housing prices: following unexpected housing price increases, policy should adopt a stance that is projected to undershoot its normal targets for inflation and the output gap, and similarly aim to overshoot those targets in the case of unexpected declines in housing prices. The robustly optimal target criterion does not require that policy distinguish between 'fundamental' and 'non-fundamental' movements in housing prices.
In the secondary art market, artists play no active role. This allows us to isolate cultural influences on the demand for female artists’ work from supply-side factors. Using 1.5 million auction transactions in 45 countries, we document a 47.6% gender discount in auction prices for paintings. The discount is higher in countries with greater gender inequality. In experiments, participants are unable to guess the gender of an artist simply by looking at a painting and they vary in their preferences for paintings associated with female artists. Women's art appears to sell for less because it is made by women.
In this paper, we develop a state-dependent sensitivity value-at-risk (SDSVaR) approach that enables us to quantify the direction, size, and duration of risk spillovers among financial institutions as a function of the state of financial markets (tranquil, normal, and volatile). Within a system of quantile regressions for four sets of major financial institutions (commercial banks, investment banks, hedge funds, and insurance companies) we show that while small during normal times, equivalent shocks lead to considerable spillover effects in volatile market periods. Commercial banks and, especially, hedge funds appear to play a major role in the transmission of shocks to other financial institutions. Using daily data, we can trace out the spillover effects over time in a set of impulse response functions and find that they reach their peak after 10 to 15 days.
Credit boom detection methodologies (such as the threshold method) lack robustness as they are based on univariate detrending analysis and resort to ratios of credit to real activity. I propose a quantitative indicator to detect atypical behavior of credit from a multivariate system - a monetary VAR. This methodology explicitly accounts for endogenous interactions between credit, asset prices and real activity and detects atypical credit expansions and contractions in the Euro Area, Japan and the U.S. robustly and in a timely manner. The analysis also proves useful in real time.
This paper investigates the risk channel of monetary policy on the asset side of banks’ balance sheets. We use a factor-augmented vector autoregression (FAVAR) model to show that aggregate lending standards of U.S. banks, such as their collateral requirements for firms, are significantly loosened in response to an unexpected decrease in the Federal Funds rate. Based on this evidence, we reformulate the costly state verification (CSV) contract to allow for an active financial intermediary, embed it in a New Keynesian dynamic stochastic general equilibrium (DSGE) model, and show that – consistent with our empirical findings – an expansionary monetary policy shock implies a temporary increase in bank lending relative to borrower collateral. In the model, this is accompanied by a higher default rate of borrowers.
We find that on average consumers chose the contract that ex post minimized their net costs. A substantial fraction of consumers (about 40%) still chose the ex post sub-optimal contract, with some incurring hundreds of dollars of avoidable interest costs. Nonetheless, the probability of choosing the sub-optimal contract declines with the dollar magnitude of the potential error, and consumers with larger errors were more likely to subsequently switch to the optimal contract. Thus most of the errors appear not to have been very costly, with the exception that a small minority of consumers persists in holding substantially sub-optimal contracts without switching. JEL Classification: G11, G21, E21, E51
The reaction of consumer spending and debt to tax rebates – evidence from consumer credit data
(2008)
We use a new panel dataset of credit card accounts to analyze how consumers responded to the 2001 Federal income tax rebates. We estimate the monthly response of credit card payments, spending, and debt, exploiting the unique, randomized timing of the rebate disbursement. We find that, on average, consumers initially saved some of the rebate, by increasing their credit card payments and thereby paying down debt. But soon afterwards their spending increased, counter to the canonical Permanent-Income model. Spending rose most for consumers who were initially most likely to be liquidity constrained, whereas debt declined most (so saving rose most) for unconstrained consumers. More generally, the results suggest that there can be important dynamics in consumers’ response to “lumpy” increases in income like tax rebates, working in part through balance sheet (liquidity) mechanisms.
Even as online advertising continues to grow, a central question remains: Who to target? Yet, advertisers know little about how to select from the hundreds of audience segments for targeting (and combinations thereof) for a profitable online advertising campaign. Utilizing insights from a field experiment on Facebook (Study 1), we develop a model that helps advertisers solve the cold-start problem of selecting audience segments for targeting. Our model enables advertisers to calculate the break-even performance of an audience segment to make a targeted ad campaign at least as profitable as an untargeted one. Advertisers can use this novel model to decide whether to test specific audience segments in their campaigns (e.g., in randomized controlled trials). We apply our model to data from the Spotify ad platform to study the profitability of different audience segments (Study 2). Approximately half of those audience segments require the click-through rate to double compared to an untargeted campaign, which is unrealistically high for most ad campaigns. Our model also shows that narrow segments require a lift that is likely not attainable, specifically when the data quality of these segments is poor. We confirm this theoretical finding in an empirical study (Study 3): A decrease in data quality due to Apple’s introduction of the App Tracking Transparency (ATT) framework more negatively affects the click-through rate of narrow (versus broad) audience segments.
Despite ample evidence that customers exhibit higher discount rates than firms, it is not clear how differences in discount rates affect optimal prices, profits, and welfare for complementary products (which could be goods or services). We show for complementary products that higher discount rates of customers do not increase profit or consumer surplus. Firms, including banks, would be advised to seek to reduce excessive discount rates among consumers.
Modeling short-term interest rates as following regime-switching processes has become increasingly popular. Theoretically, regime-switching models are able to capture rational expectations of infrequently occurring discrete events. Technically, they allow for potential time-varying stationarity. After discussing both aspects with reference to the recent literature, this paper provides estimations of various univariate regime-switching specifications for the German three-month money market rate, and bivariate specifications additionally including the term spread. However, the main contribution is a multi-step out-of-sample forecasting competition. It turns out that forecasts are improved substantially when allowing for state dependence. In particular, the informational content of the term spread for future short rate changes can be exploited optimally within a multivariate regime-switching framework.
This study uses Markov-switching models to evaluate the informational content of the term structure as a predictor of recessions in eight OECD countries. The empirical results suggest that for all countries the term spread is sensibly modelled as a two-state regime-switching process. Moreover, our simple univariate model turns out to be a filter that accurately transforms term spread changes into turning point predictions. The term structure is confirmed to be a reliable recession indicator. However, the results of probit estimations show that the Markov-switching filter does not significantly improve the forecasting ability of the spread.
In this study a regime-switching approach is applied to estimate the chartist and fundamentalist (c&f) exchange rate model originally proposed by Frankel and Froot (1986). The c&f model is tested against alternative regime-switching specifications using likelihood ratio tests. Nested atheoretical models, like the popular segmented trends model suggested by Engel and Hamilton (1990), are rejected in favour of the multi-agent model. Moreover, the c&f regime-switching model seems to describe the data much better than a competing regime-switching GARCH(1,1) model. Finally, our findings turn out to be relatively robust when estimating the model in subsamples. The empirical results suggest that the model is able to explain daily DM/Dollar forward exchange rate dynamics from 1982 to 1998.
A common prediction of macroeconomic models of credit market frictions is that the tightness of financial constraints is countercyclical. As a result, theory implies a negative collateralizability premium; that is, capital that can be used as collateral to relax financial constraints provides insurance against aggregate shocks and commands a lower risk compensation compared with non-collateralizable assets. We show that a long-short portfolio constructed using a novel measure of asset collateralizability generates an average excess return of around 8% per year. We develop a general equilibrium model with heterogeneous firms and financial constraints to quantitatively account for the collateralizability premium.
Most insurers in the European Union determine their regulatory capital requirements based on the standard formula of Solvency II. However, there is evidence that the standard formula inaccurately reflects insurers’ risk situation and may provide misleading steering incentives. In the second pillar, Solvency II requires insurers to perform a so-called “Own Risk and Solvency Assessment” (ORSA). In their ORSA, insurers must establish their own risk measurement approaches, including those based on scenarios, in order to derive suitable risk assessments and address shortcomings of the standard formula. The idea of this paper is to identify scenarios in such a way that the standard formula in connection with the ORSA provides a reliable basis for risk management decisions. Using an innovative method for scenario identification, our approach allows for a simple but relatively precise assessment of marginal and even non-marginal portfolio changes. We numerically evaluate the proposed approach in the context of market risk employing an internal model from the academic literature and the Solvency Capital Requirement (SCR) calculation under Solvency II.
Gradient capital allocation, also known as Euler allocation, is a technique used to redistribute diversified capital requirements among different segments of a portfolio. The method is commonly employed to identify dominant risks, assess the risk-adjusted profitability of segments, and install limit systems. However, capital allocation can be misleading in all these applications because it only accounts for the current portfolio composition and ignores how diversification effects may change with a portfolio restructuring. This paper proposes enhancing the gradient capital allocation by adding “orthogonal convexity scenarios” (OCS). OCS identify risk concentrations that potentially drive portfolio risk and become relevant after restructuring. OCS have strong ties with principal component analysis (PCA), but they are a more general concept and compatible with common empirical patterns of risk drivers being fat-tailed and increasingly dependent in market downturns. We illustrate possible applications of OCS in terms of risk communication and risk limits.
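Euler allocation as described above can be made concrete for a Gaussian, standard-deviation-based risk measure, where the gradient is available in closed form. The portfolio exposures, covariance matrix, and quantile multiplier below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def euler_allocation(w, cov, z=2.58):
    """Gradient (Euler) allocation for a Gaussian VaR-style risk measure
    rho(w) = z * sqrt(w' cov w). Each segment receives w_i * d rho / d w_i;
    because rho is homogeneous of degree one, the contributions sum to rho(w)."""
    sigma = np.sqrt(w @ cov @ w)
    grad = z * (cov @ w) / sigma        # marginal capital per unit of exposure
    return z * sigma, w * grad

w = np.array([100.0, 50.0, 30.0])       # segment exposures (illustrative)
cov = np.array([[0.04, 0.01, 0.00],     # return covariance matrix (illustrative)
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
total, contrib = euler_allocation(w, cov)
print(total, contrib)  # the contributions add up to the total by Euler's theorem
```

The paper's point is visible even here: the gradient is evaluated at the current `w`, so the allocation says nothing about how diversification changes after a material restructuring, which is the gap the proposed OCS are meant to fill.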
We show that the presence of high frequency trading (HFT) has significantly mitigated the frequency and severity of end-of-day price dislocation, counter to recent concerns expressed in the media. The effect of HFT is more pronounced on days when end-of-day price dislocation is more likely to be the result of market manipulation, namely on option expiry dates and at the end of the month. Moreover, the effect of HFT is more pronounced than the role of trading rules, surveillance, enforcement and legal conditions in curtailing the frequency and severity of end-of-day price dislocation. We show our findings are robust to different proxies of the start of HFT by trade size, cancellation of orders, and co-location.
We examine the impact of stock exchange trading rules and surveillance on the frequency and severity of suspected insider trading cases in 22 stock exchanges around the world over the period January 2003 through June 2011. Using new indices for market manipulation, insider trading, and broker-agency conflict based on the specific provisions of the trading rules of each stock exchange, along with surveillance to detect non-compliance with such rules, we show that more detailed exchange trading rules and surveillance over time and across markets significantly reduce the number of cases, but increase the profits per case.
In this paper we assess the implications of sunk costs and product differentiation for the pricing decisions of multinational firms. For this purpose we use a modified version of Salop's model of spatial competition. The model yields clear-cut predictions regarding the effects of exchange rate shocks on market structure and on pass-through. The main results are as follows: shocks within the band of inaction do not affect market structure. The upper bound of this range rises as the industry ratio of sunk to fixed costs increases. As fixed costs and product heterogeneity jointly increase, the lower bound drops. Outside of the range, depreciations cause one or several of the foreign brands closest to the home brand to exit. This decreases the overall responsiveness of prices to exchange rate shocks. Large appreciations induce entry and increase the elasticity of prices. This asymmetry implies larger positive than negative PPP deviations. When accounting for price changes in foreign markets, strategic pricing behaviour is no longer sufficient to generate real exchange rate variability. Incomplete pass-through obtains if and only if the domestic firms have a smaller market share abroad. With large nominal exchange rate shocks a hysteresis result obtains if and only if sunk costs are non-zero. JEL Classification: C33, E31
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under "the opportunistic approach to disinflation" a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. JEL Classification: E31, E52, E58, E61. July 2005.
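The contrast between the linear and the opportunistic rule can be sketched as two stylized reaction functions. The coefficients, the inflation target, and the band width below are illustrative assumptions, not the paper's calibration.

```python
def linear_rule(pi, gap, pi_star=2.0, r_star=2.0, a=1.5, b=0.5):
    """Conventional linear rule: react to the inflation gap everywhere."""
    return r_star + pi + a * (pi - pi_star) + b * gap

def opportunistic_rule(pi, gap, pi_star=2.0, r_star=2.0, a=1.5, b=0.5, band=1.0):
    """Stylized opportunistic rule: inside the inflation band, concentrate
    on output stabilization; outside it, react aggressively to inflation."""
    infl_gap = pi - pi_star
    if abs(infl_gap) <= band:
        return r_star + pi + b * gap          # let inflation drift within the band
    return r_star + pi + 2.0 * a * infl_gap + b * gap

print(linear_rule(2.5, 0.0))         # 5.25: responds even to small deviations
print(opportunistic_rule(2.5, 0.0))  # 4.5: no inflation response inside the band
print(opportunistic_rule(4.0, 0.0))  # 12.0: aggressive response outside the band
```

Feeding both rules the same simulated shock paths, as the paper does with a rational expectations model, makes the trade-off visible: the opportunistic rule tolerates more inflation variability inside the band in exchange for smoother output.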
Recent empirical research has found that the strong short-term relationship between monetary aggregates and US real output and inflation, as outlined in the classical study by M. Friedman and Schwartz, has mostly disappeared since the early 1980s. In light of the B. Friedman and Kuttner (1992) information value approach, we reevaluate the vanishing relationship between US monetary aggregates and these macroeconomic fundamentals by taking into account the international currency feature of the US dollar. Using official US data for foreign flows constructed by Porter and Judson (1996), we find that domestic money (the currency component of M1 corrected for foreign holdings of dollars) contains valuable information about future movements of US real output and inflation. The statistical evidence provided here thus suggests that Friedman and Schwartz's stylized facts can be reestablished once the focus of analysis is back on domestic monetary aggregates. This version: August 2001. JEL Classification: E3, E4, E5
Research on interbank networks and systemic importance is starting to recognise that the web of exposures linking banks' balance sheets is more complex than the single-layer-of-exposure paradigm. We use data on exposures between large European banks, broken down by both maturity and instrument type, to characterise the main features of the multiplex structure of the network of large European banks. This multiplex network exhibits positively correlated multiplexity and a high similarity between layers, as evidenced both by standard similarity analyses and by core-periphery analyses of the different layers. We propose measures of systemic importance that fit the case in which banks are connected through an arbitrary number of layers (be it by instrument, maturity or a combination of both). Such measures allow the global systemic importance index of any bank to be decomposed into the contributions of each of the sub-networks, providing a useful tool for banking regulators and supervisors. We use the dataset of exposures between large European banks to illustrate the proposed measures.
We uncover a new channel for spillovers of funding dry-ups. The 2016 US money market fund (MMF) reform exogenously reduced unsecured MMF funding for some banks. We use novel data to trace those banks to a platform for corporate deposit funding. We show that intensified competition for corporate deposits spilled the funding squeeze over to other banks with no MMF exposure. These banks paid more for deposits, and their pool of funding providers deteriorated. Moreover, their lending volumes and margins declined, and their stocks underperformed. Our results suggest that banks' competitiveness in funding markets affects their competitiveness in lending markets.
We present a network model of the interbank market in which optimizing risk averse banks lend to each other and invest in non-liquid assets. Market clearing takes place through a tâtonnement process which yields the equilibrium price, while traded quantities are determined by means of a matching algorithm. We compare three alternative matching algorithms: maximum entropy, closest matching and random matching. Contagion occurs through liquidity hoarding, interbank interlinkages and fire sale externalities. The resulting network configurations exhibit a core-periphery structure, dis-assortative behavior and a low clustering coefficient. We measure systemic importance by means of network centrality and input-output metrics, and contributions to systemic risk by means of Shapley values. Within this framework we analyze the effects of prudential policies on the stability/efficiency trade-off. Liquidity requirements unequivocally decrease systemic risk, but at the cost of lower efficiency (measured by aggregate investment in non-liquid assets); equity requirements tend to reduce risk (hence increase stability) without significantly reducing overall investment.
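The maximum entropy matching mentioned above is commonly implemented by scaling an outer-product prior to given margins (the RAS algorithm). The margins below are illustrative assumptions, and this sketch deliberately omits the model's optimizing banks and price formation.

```python
import numpy as np

def ras(assets, liabs, n_iter=500):
    """Maximum-entropy-style estimate of a bilateral exposure matrix:
    start from the outer product of the margins, forbid self-lending,
    and rescale rows and columns (RAS) until both margins are matched.
    Requires assets.sum() == liabs.sum()."""
    X = np.outer(assets, liabs) / liabs.sum()
    np.fill_diagonal(X, 0.0)            # a bank cannot lend to itself
    for _ in range(n_iter):
        X *= (assets / X.sum(axis=1))[:, None]   # match row sums (assets)
        X *= (liabs / X.sum(axis=0))[None, :]    # match column sums (liabilities)
    return X

assets = np.array([40.0, 25.0, 20.0, 15.0])   # interbank assets per bank (assumed)
liabs = np.array([30.0, 30.0, 25.0, 15.0])    # interbank liabilities per bank (assumed)
X = ras(assets, liabs)
print(X.round(2))  # rows sum to assets, columns to liabilities, zero diagonal
```

Maximum entropy spreads exposures as evenly as the margins allow, so it tends to understate concentration; that is precisely why the paper contrasts it with closest and random matching, which generate sparser, more core-periphery-like networks.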
In many cases, the dire situation of public finances calls into question the very soundness of sovereigns and prompts corrective actions with far-reaching consequences. In this context, European authorities responded with several measures on different fronts, for instance by passing the "Fiscal Compact", which entered into force on January 1, 2013. Of critical importance in this framework is the assessment of a country’s situation by way of statistical measures, in order to take corrective actions when called for according to the letter of the law. If these statistics are not correct, there is a risk of imposing draconian measures on countries that do not really need them.
The implications of delegating fiscal decision-making power to sub-national governments have become an area of significant interest over the past two decades, in the expectation that these reforms will lead to better and more efficient provision of public goods and services. The move towards decentralization has, however, not been homogeneously implemented on the revenue and expenditure side: decentralization has materialized more substantially on the latter than on the former, creating "vertical fiscal imbalances". These imbalances measure the extent to which sub-national governments’ expenditures are financed through their own revenues. This mismatch between own revenues and expenditures may have negative consequences for public finances performance, for example by softening the budget constraint of sub-national governments. Using a large sample of countries covering a long time period from the IMF’s Government Finance Statistics Yearbook, this paper is the first to examine the effects of vertical fiscal imbalances on fiscal performance through the accumulation of government debt. Our findings suggest that vertical fiscal imbalances are indeed relevant in explaining government debt accumulation, and call for a degree of caution when promoting fiscal decentralization.
[Conference report] Making finance sustainable: Ten years Equator Principles – success or letdown?
(2013)
In 2003, a number of banks adopted the Equator Principles (EPs), a voluntary Code of Conduct based on the International Finance Corporation’s (IFC) performance standards, to ensure the ecological and social sustainability of project finance. These so-called Equator Principles Financial Institutions (EPFIs) commit to requiring their borrowers to adopt sustainable management plans for the environmental and social risks associated with their projects. The Principles apply to the project finance business segment of the banks and cover projects with a total cost of US $10 million or more. While developing countries long relied on the World Bank and other public assistance to finance infrastructure projects, recent years have seen a shift to private funding. NGOs were frustrated by this shift, as they had spent their resources pressuring public financial institutions to incorporate environmental and social standards into their project finance activities. However, after NGO pressure shifted to private financial institutions, the latter adopted the EPs for fear of reputational risks. NGOs had laid down their own, more ambitious ideas about sustainable finance in the Collevecchio Declaration on Financial Institutions and Sustainability. Legally speaking, the EPs are a self-regulatory soft-law instrument. However, they have a hard-law dimension, as the Equator Banks require their borrowers to comply with the EPs through covenants in the loan contracts that may trigger a default in case of violation. ...
Since the outbreak of the financial crisis, the macro-prudential policy paradigm has gained increasing prominence (Bank of England, 2009; Bernanke, 2011). The dynamics of this shift in the economic discourse, and the reasons this shift did not take place prior to the crisis, have not been addressed systematically. This paper investigates the evolution of the economic discourse on systemic risk and banking regulation to better understand these changes and their timing. Further, we use our sample to ask whether, and if so why, economic regulatory studies failed to recommend reliable banking regulation prior to the crisis. Through a discourse analysis, we establish that the economic discourse on banking regulation has not been suitable for providing the knowledge basis required for a dynamically reliable banking regulation, and we identify the underlying reasons for this failure. These reasons include the obsession of the economic discourse with optimization and particular forms of formalism, especially partial equilibrium analysis. Further, the economic discourse on banking regulation excludes historical and practitioners’ discourses and ignores weak signals. We point out that, post-crisis, these epistemological failures of the economic discourse on banking regulation were not sufficiently recognized, and that recent attempts to conceptualize systemic risk as a negative externality and thus to price it point to the persistence of formalism, equilibrium thinking and optimization, with their attendant dangers.
We develop a novel empirical approach to identify the effectiveness of policies against a pandemic. The essence of our approach is the insight that epidemic dynamics are best tracked over stages, rather than over time. We use a normalization procedure that makes the pre-policy paths of the epidemic identical across regions. The procedure uncovers regional variation in the stage of the epidemic at the time of policy implementation. This variation delivers clean identification of the policy effect based on the epidemic path of a leading region that serves as a counterfactual for other regions. We apply our method to evaluate the effectiveness of the nationwide stay-home policy enacted in Spain against the Covid-19 pandemic. We find that the policy saved 15.9% of lives relative to the number of deaths that would have occurred had it not been for the policy intervention. Its effectiveness evolves with the epidemic and is larger when implemented at earlier stages.
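The stage-normalization idea can be sketched as follows (the region names, death paths, and threshold below are made up for illustration, not the Spanish data): each region's cumulative path is re-indexed so that stage 0 is the day a common pre-policy milestone is crossed, making regions comparable at equal epidemic stages rather than equal calendar dates.

```python
def align_by_stage(series, threshold):
    """Re-index each region's cumulative-death path by epidemic stage.

    Stage 0 is the first day the path crosses `threshold`, so pre-policy
    paths become comparable across regions regardless of calendar time."""
    aligned = {}
    for region, path in series.items():
        t0 = next(t for t, v in enumerate(path) if v >= threshold)
        aligned[region] = {t - t0: v for t, v in enumerate(path)}
    return aligned

# Two hypothetical regions that reach the same stage on different days:
paths = {"early": [1, 5, 12, 30, 70], "late": [0, 1, 5, 12, 30, 70]}
stages = align_by_stage(paths, threshold=5)
```

After alignment, the leading region's path at a given stage can serve as a counterfactual for a lagging region at the same stage.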
We show that the correct experiment to evaluate the effects of a fiscal adjustment is the simulation of a multi-year fiscal plan rather than of individual fiscal shocks. Simulation of fiscal plans adopted by 16 OECD countries over a 30-year period supports the hypothesis that the effects of consolidations depend on their design. Fiscal adjustments based upon spending cuts are much less costly, in terms of output losses, than tax-based ones and have especially low output costs when they consist of permanent rather than stop-and-go changes in taxes and spending. The difference between tax-based and spending-based adjustments appears not to be explained by accompanying policies, including monetary policy. It is mainly due to the different response of business confidence and private investment.
We develop a methodology to identify and rank “systemically important financial institutions” (SIFIs). Our approach is consistent with that followed by the Financial Stability Board (FSB) but, unlike the latter, it is free of judgment and based entirely on publicly available data, thus filling the gap between the official views of the regulator and those that market participants can form with their own information set. We apply the methodology to annual data on three samples of banks (global, EU and euro area) for the years 2007-2012. We examine the evolution of the SIFIs over time and document the shifts in the relative weights of the major geographic areas. We also discuss the implications of the 2013 update of the identification methodology proposed by the FSB.
Some have argued that recent increases in credit risk transfer are desirable because they improve the diversification of risk. Others have suggested that they may be undesirable if they increase the risk of financial crises. Using a model with banking and insurance sectors, we show that credit risk transfer can be beneficial when banks face uniform demand for liquidity. However, when they face idiosyncratic liquidity risk and hedge this risk in an interbank market, credit risk transfer can be detrimental to welfare. It can lead to contagion between the two sectors and increase the risk of crises. JEL Classification: G21, G22
When liquidity plays an important role as in times of financial crisis, asset prices in some markets may reflect the amount of liquidity available in the market rather than the future earning power of the asset. Mark-to-market accounting is not a desirable way to assess the solvency of a financial institution in such circumstances. We show that a shock in the insurance sector can cause the current value of banks’ assets to be less than the current value of their liabilities so the banks are insolvent. In contrast, if historic cost accounting is used, banks are allowed to continue and can meet all their future liabilities. Mark-to-market accounting can thus lead to contagion where none would occur with historic cost accounting. JEL Classification: G21, G22, M41
Market discipline for financial institutions can be imposed not only from the liability side, as has often been stressed in the literature on the use of subordinated debt, but also from the asset side. This will be particularly true if good lending opportunities are in short supply, so that banks have to compete for projects. In such a setting, borrowers may demand that banks commit to monitoring by requiring that they use some of their own capital in lending, thus creating an asset market-based incentive for banks to hold capital. Borrowers can also provide banks with incentives to monitor by allowing them to reap some of the benefits from the loans, which accrue only if the loans are in fact paid off. Since borrowers do not fully internalize the cost of raising capital to the banks, the level of capital demanded by market participants may be above the one chosen by a regulator, even when capital is a relatively costly source of funds. This implies that capital requirements may not be binding, as recent evidence seems to indicate. JEL Classification: G21, G38
We consider the advantages and disadvantages of stakeholder-oriented firms that are concerned with employees and suppliers as well as shareholders compared to shareholder-oriented firms. Societies with stakeholder-oriented firms have higher prices, lower output, and can have greater firm value than shareholder-oriented societies. In some circumstances, firms may voluntarily choose to be stakeholder-oriented because this increases their value. Consumers who prefer to buy from stakeholder firms can also enforce a stakeholder society. With globalization, entry by stakeholder firms is relatively more attractive than entry by shareholder firms for all societies. JEL Classification: D02, D21, G34, L13, L21
Banking and markets
(2001)
This paper integrates a number of recent themes in the literature on banking and asset markets – optimal risk sharing, limited market participation, asset-price volatility, market liquidity, and financial crises – in a general-equilibrium theory of the financial system. A complex financial system comprises both financial markets and financial institutions. Financial institutions can take the form of intermediaries or banks. Banks, unlike intermediaries, are subject to runs, but crises do not imply market failure. We show that a sophisticated financial system – a system with complete markets for aggregate risk and limited market participation – is incentive-efficient if the institutions take the form of intermediaries, or else constrained-efficient if they take the form of banks. We also consider an economy in which the markets for aggregate risks are incomplete. In this context, there is a role for prudential regulation: regulating liquidity can improve welfare.
In this paper we propose a way forward towards increased financial resilience in times of growing disagreement concerning open borders, free trade and global regulatory standards. In light of these concerns, financial resilience remains a highly valued policy objective. We wish to contribute by suggesting an agenda of concrete, doable steps supporting an enhanced level of resilience, combined with a deeper understanding of its relevance in the public domain.
First, remove inconsistencies across regulatory rules and territorial regimes, and ensure their credibility concerning implementation. Second, discourage the use of financial regulatory standards as means of international competition. Third, give more weight to pedagogically explaining the established regulatory standards in public, to strengthen their societal backing.
We study the information flow from the ECB on policy dates since its inception, using tick data. We show that three factors capture essentially all of the variation in the yield curve, but that these are different factors with different variance shares in the window that contains the policy decision announcement and the window that contains the press conference. We also show that the QE-related policy factor has been dominant in the recent period and that Forward Guidance and QE effects have been very persistent on the longer end of the yield curve. We further show that the responses of broad and banking stock indices to monetary policy surprises depended on the perceived nature of the surprises. We find no evidence of asymmetric responses of financial markets to positive and negative surprises, in contrast to the literature on asymmetric real effects of monetary policy. Lastly, we show how to implement our methodology for any policy-related news release, such as policymaker speeches. To carry out the analysis, we construct the Euro Area Monetary Policy Event-Study Database (EA-MPD). This database, which contains intraday asset price changes around the policy decision announcement as well as around the press conference, is a contribution in its own right, and we expect it to become the standard in monetary policy research for the euro area.
We investigate whether government credit guarantee schemes, extensively used at the onset of the Covid-19 pandemic, led to substitution of non-guaranteed with guaranteed credit rather than fully adding to the supply of lending. We study this issue using unique euro-area credit register data, matched with supervisory bank data, and establish two main findings. First, guaranteed loans were mostly extended to small but comparatively creditworthy firms in sectors severely affected by the pandemic, borrowing from large, liquid and well-capitalized banks. Second, guaranteed loans partially substituted pre-existing non-guaranteed debt. For firms borrowing from multiple banks, the substitution mainly arises from the lending behavior of the bank extending guaranteed loans. Substitution was highest for funding granted to riskier and smaller firms in sectors more affected by the pandemic, and borrowing from larger and stronger banks. Overall, the evidence indicates that government guarantees contributed to the continued extension of credit to relatively creditworthy firms hit by the pandemic, but also benefited banks’ balance sheets to some extent.
Using novel monthly data for 226 euro-area banks from 2007 to 2015, we investigate the determinants of changes in banks’ sovereign exposures and their effects during and after the crisis. First, public, bailed out and poorly capitalized banks responded to sovereign stress by purchasing domestic public debt more than other banks, with public banks’ purchases growing especially in coincidence with the largest ECB liquidity injections. Second, bank exposures significantly amplified the transmission of risk from the sovereign and its impact on lending. This amplification of the impact on lending does not appear to arise from spurious correlation or reverse causality.
In this paper we develop empirical measures for the strength of spillover effects. Modifying and extending the framework by Diebold and Yilmaz (2011), we quantify spillovers between sovereign credit markets and banks in the euro area. Spillovers are estimated recursively from a vector autoregressive model of daily CDS spread changes, with exogenous common factors. We account for interdependencies between sovereign and bank CDS spreads and we derive generalised impulse response functions. Specifically, we assess the systemic effect of an unexpected shock to the creditworthiness of a particular sovereign or country-specific bank index on other sovereign or bank CDSs between October 2009 and July 2012. Channels of transmission from or to sovereigns and banks are aggregated into a Contagion index (CI). This index is disentangled into four components, the average potential spillover: i) amongst sovereigns, ii) amongst banks, iii) from sovereigns to banks, and iv) vice versa. We highlight the impact of policy-related events along the different components of the contagion index. The systemic contribution of each sovereign or banking group is quantified as the net spillover weight in the total net-spillover measure. Finally, the estimated time-varying interdependence between banks and sovereigns highlights the evolution of their strong nexus.
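As a rough illustration of the underlying machinery (a minimal sketch with a known VAR(1), not the paper's specification, which recursively estimates a VAR on CDS spread changes with exogenous common factors): the Diebold-Yilmaz total spillover index is built from a generalized forecast-error-variance decomposition, with the off-diagonal shares measuring cross-variable spillovers.

```python
import numpy as np

def gfevd_spillover(Phi, Sigma, H):
    """Generalized FEVD of a VAR(1) y_t = Phi y_{t-1} + e_t, e_t ~ (0, Sigma),
    and the Diebold-Yilmaz total spillover index: the share of H-step
    forecast-error variance attributable to shocks to other variables."""
    n = Phi.shape[0]
    num = np.zeros((n, n))
    den = np.zeros(n)
    for h in range(H):
        Ah = np.linalg.matrix_power(Phi, h)   # MA coefficients of a VAR(1)
        # (e_i' A_h Sigma e_j)^2 / sigma_jj, accumulated over horizons
        num += (Ah @ Sigma) ** 2 / np.diag(Sigma)
        den += np.diag(Ah @ Sigma @ Ah.T)     # total i-th forecast-error variance
    theta = num / den[:, None]
    theta /= theta.sum(axis=1, keepdims=True)  # row-normalize (generalized shocks)
    spillover = 100 * (theta.sum() - np.trace(theta)) / n
    return theta, spillover

# Toy system: variable 2 spills over into variable 1, but not vice versa.
Phi = np.array([[0.5, 0.2], [0.0, 0.5]])
Sigma = np.eye(2)
theta, spill = gfevd_spillover(Phi, Sigma, H=10)
```

Directional "from"/"to" measures and the net spillover weights mentioned in the abstract are row and column sums of the off-diagonal entries of `theta`.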
As we increasingly rely on search engines as an important source of information to support our decisions, search engines have become an important venue for firms to attract attention and secure the longevity of their operations. This article discusses the results of our empirical studies on how to capture a firm’s visibility in organic search and how it affects the firm's short- and long-term financial performance.
To increase the information security awareness among their workforce and to achieve secure information systems, decision-makers employ measures of information security, such as security policies or associated training and educational programs. However, these measures might stress employees. This is true if, for instance, information security measures are perceived as difficult to understand, as an invasion of privacy, or if they give rise to conflicts of interest. Consequently, a multi-faceted perspective on employees' struggle with information security is discussed.
Volatility forecasting
(2005)
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1.
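As a minimal illustration of the GARCH paradigm surveyed here (the parameter values below are arbitrary, chosen only for the example): a GARCH(1,1) filter with multi-step variance forecasts that mean-revert towards the unconditional level omega / (1 - alpha - beta).

```python
def garch11_forecast(returns, omega, alpha, beta, horizon):
    """One-pass GARCH(1,1) filter plus multi-step variance forecasts.

    Filter: sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t.
    Forecast recursion: f_{h+1} = omega + (alpha + beta) * f_h, which
    converges to the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = omega / (1 - alpha - beta)  # initialize at unconditional variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    forecasts = []
    f = sigma2
    for _ in range(horizon):
        forecasts.append(f)
        f = omega + (alpha + beta) * f
    return forecasts

# Calm sample: forecasts rise back towards the unconditional variance (2.0).
calm = garch11_forecast([0.0] * 20, omega=0.1, alpha=0.05, beta=0.9, horizon=5)
# Sample ending in a large shock: forecasts decay back towards 2.0 from above.
shock = garch11_forecast([0.0] * 5 + [10.0], omega=0.1, alpha=0.05, beta=0.9, horizon=5)
```

The geometric decay of the forecast towards the unconditional variance is the hallmark of covariance-stationary GARCH dynamics (alpha + beta < 1).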
What do academics have to offer market risk management practitioners in financial institutions? Current industry practice largely follows one of two extremely restrictive approaches: historical simulation or RiskMetrics. In contrast, we favor flexible methods based on recent developments in financial econometrics, which are likely to produce more accurate assessments of market risk. Clearly, the demands of real-world risk management in financial institutions - in particular, real-time risk tracking in very high-dimensional situations - impose strict limits on model complexity. Hence we stress parsimonious models that are easily estimated, and we discuss a variety of practical approaches for high-dimensional covariance matrix modeling, along with what we see as some of the pitfalls and problems in current practice. In so doing, we hope to encourage further dialog between the academic and practitioner communities, stimulating the development of improved market risk management technologies that draw on the best of both worlds.
A rapidly growing literature has documented important improvements in volatility measurement and forecasting performance through the use of realized volatilities constructed from high-frequency returns coupled with relatively simple reduced-form time series modeling procedures. Building on recent theoretical results from Barndorff-Nielsen and Shephard (2003c,d) for related bi-power variation measures involving the sum of high-frequency absolute returns, the present paper provides a practical framework for non-parametrically measuring the jump component in realized volatility measurements. Exploiting these ideas for a decade of high-frequency five-minute returns for the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find the jump component of the price process to be distinctly less persistent than the continuous sample path component. Explicitly including the jump measure as an additional explanatory variable in an easy-to-implement reduced form model for realized volatility results in highly significant jump coefficient estimates at the daily, weekly and quarterly forecast horizons. As such, our results hold promise for improved financial asset allocation, risk management, and derivatives pricing, by separate modeling, forecasting and pricing of the continuous and jump components of total return variability.
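The bipower-variation split described above can be sketched as follows (the return series below are synthetic, not the DM/$, S&P500, or Treasury data): realized variance captures total variation including jumps, while bipower variation is robust to jumps, so their difference (truncated at zero) estimates the jump component.

```python
import numpy as np

def jump_component(returns):
    """Split realized variance into continuous and jump parts using
    bipower variation (Barndorff-Nielsen & Shephard):
      RV   = sum r_t^2
      BV   = (pi/2) * sum |r_t| * |r_{t-1}|
      jump = max(RV - BV, 0)."""
    r = np.asarray(returns, dtype=float)
    rv = np.sum(r ** 2)
    bv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    jump = max(rv - bv, 0.0)
    return rv, bv, jump

# Smooth day: 100 identical five-minute returns, no jump detected.
rv1, bv1, j1 = jump_component([0.01] * 100)
# Day with one large return: a positive jump component is extracted.
rv2, bv2, j2 = jump_component([0.01] * 99 + [0.5])
```

Because a single large return enters `BV` only through products with adjacent small returns, `BV` stays close to the continuous variation while `RV` absorbs the jump.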
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally-expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over-and-above the direct news announcement effects. JEL Classification: F3, F4, G1, C5
We selectively survey, unify and extend the literature on realized volatility of financial asset returns. Rather than focusing exclusively on characterizing the properties of realized volatility, we progress by examining economically interesting functions of realized volatility, namely realized betas for equity portfolios, relating them both to their underlying realized variance and covariance parts and to underlying macroeconomic fundamentals.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL Classification: C1, G1
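The realized beta in question is a simple nonlinear function of realized moments, as a minimal sketch shows (the returns below are synthetic, not the paper's equity data): over a window of high-frequency returns, beta is the realized covariance with the market divided by the realized market variance.

```python
import numpy as np

def realized_beta(asset_returns, market_returns):
    """Realized beta: realized covariance of the asset with the market
    divided by realized market variance, over one window of
    high-frequency returns."""
    a = np.asarray(asset_returns, dtype=float)
    m = np.asarray(market_returns, dtype=float)
    return np.sum(a * m) / np.sum(m ** 2)

# Synthetic check: an asset whose returns are exactly 1.5x the market's.
market = np.array([0.001, -0.002, 0.0015, 0.0005, -0.001])
asset = 1.5 * market
beta = realized_beta(asset, market)
```

This ratio structure is why beta can be I(0) even when its highly persistent numerator and denominator are individually fractionally integrated.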
We extend the classical "martingale-plus-noise" model for high-frequency prices by an error correction mechanism originating from prevailing mispricing. The speed of price reversal is a natural measure for informational efficiency. The strength of the price reversal relative to the signal-to-noise ratio determines the signs of the return serial correlation and the bias in standard realized variance estimates. We derive the model’s properties and locally estimate it based on mid-quote returns of the NASDAQ 100 constituents. There is evidence of mildly persistent local regimes of positive and negative serial correlation, arising from lagged feedback effects and sluggish price adjustments. The model performance is decidedly superior to existing stylized microstructure models. Finally, we document intraday periodicities in the speed of price reversion and noise-to-signal ratios.
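The link between price reversal and return serial correlation can be illustrated by simulating a stylized version of this model class (the parameters are arbitrary and this is the textbook martingale-plus-mean-reverting-mispricing setup, not the paper's estimated specification): a fast reversal of the mispricing induces negative serial correlation in observed returns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, kappa = 50_000, 0.9            # kappa: speed of reversal of the mispricing

w = rng.normal(0.0, 1.0, n)       # efficient-price innovations (martingale part)
u = rng.normal(0.0, 1.0, n)       # microstructure noise feeding the mispricing
s = np.zeros(n)                   # mispricing, reverting at rate kappa
for t in range(1, n):
    s[t] = (1 - kappa) * s[t - 1] + u[t]

m = np.cumsum(w)                  # efficient (martingale) log price
p = m + s                         # observed mid-quote = efficient price + mispricing
r = np.diff(p)                    # observed returns

# First-order serial correlation of observed returns: negative when
# the reversal is fast relative to the signal-to-noise ratio.
acf1 = np.corrcoef(r[1:], r[:-1])[0, 1]
```

With a slow reversal and autocorrelated noise the same mechanism can flip the sign, which is the regime-switching behavior the abstract documents.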
This paper analyses two reasons why inflation may interfere with price adjustment so as to create inefficiencies in resource allocation at low rates of inflation. The first argument is that the higher the rate of inflation the lower the likelihood that downward nominal rigidities are binding (the Tobin argument), which implies a non-linear Phillips curve. The second argument is that low inflation strengthens nominal price rigidities and thus impairs the flexibility of the price system, resulting in a less efficient resource allocation. It is argued that inflation can be too low from a welfare point of view due to the presence of nominal rigidities, but the quantitative importance is an open question.
We investigate the characteristics of infrastructure as an asset class from an investment perspective of a limited partner. While non-U.S. institutional investors gain exposure to infrastructure assets through a mix of direct investments and private fund vehicles, U.S. investors predominantly invest in infrastructure through private funds. We find that the stream of cash flows delivered by private infrastructure funds to institutional investors is very similar to that delivered by other types of private equity, as reflected by the frequency and amounts of net cash flows. U.S. public pension funds perform worse than other institutional investors in their infrastructure fund investments, although they are exposed to underlying deals with very similar project stage, concession terms, ownership structure, industry, and geographical location. By selecting funds that invest in projects with poor financial performance, U.S. public pension funds have created an implicit subsidy to infrastructure as an asset class, which we estimate within the range of $730 million to $3.16 billion per year depending on the benchmark.
Shallow meritocracy
(2023)
Meritocracies aspire to reward hard work and promise not to judge individuals by the circumstances into which they were born. However, circumstances often shape the choice to work hard. I show that people's merit judgments are "shallow" and insensitive to this effect. They hold others responsible for their choices, even if these choices have been shaped by unequal circumstances. In an experiment, US participants judge how much money workers deserve for the effort they exert. Unequal circumstances disadvantage some workers and discourage them from working hard. Nonetheless, participants reward the effort of disadvantaged and advantaged workers identically, regardless of the circumstances under which choices are made. For some participants, this reflects their fundamental view regarding fair rewards. For others, the neglect results from the uncertain counterfactual. They understand that circumstances shape choices but do not correct for this because the counterfactual—what would have happened under equal circumstances—remains uncertain.
We document the individual willingness to act against climate change and study the role of social norms in a large sample of US adults. Individual beliefs about social norms positively predict pro-climate donations, comparable in strength to universal moral values and economic preferences such as patience and reciprocity. However, we document systematic misperceptions of social norms. Respondents vastly underestimate the prevalence of climate-friendly behaviors and norms. Correcting these misperceptions in an experiment causally raises individual willingness to act against climate change as well as individual support for climate policies. The effects are strongest for individuals who are skeptical about the existence and threat of global warming.
This paper shows that support for climate action is high across survey participants from all EU countries in three dimensions: (1) Participants are willing to contribute personally to combating climate change, (2) they approve of pro-climate social norms, and (3) they demand government action. In addition, there is a significant perception gap where individuals underestimate others' willingness to contribute to climate action by over 10 percentage points, influencing their own willingness to act. Policymakers should recognize the broad support for climate action among European citizens and communicate this effectively to counteract the vocal minority opposed to it.
Investors' return expectations are pivotal in stock markets, but the reasoning behind these expectations remains a black box for economists. This paper sheds light on economic agents' mental models -- their subjective understanding -- of the stock market, drawing on surveys with the US general population, US retail investors, US financial professionals, and academic experts. Respondents make return forecasts in scenarios describing stale news about the future earnings streams of companies, and we collect rich data on respondents' reasoning. We document three main results. First, inference from stale news is rare among academic experts but common among households and financial professionals, who believe that stale good news lead to persistently higher expected returns in the future. Second, while experts refer to the notion of market efficiency to explain their forecasts, households and financial professionals reveal a neglect of equilibrium forces. They naively equate higher future earnings with higher future returns, neglecting the offsetting effect of endogenous price adjustments. Third, a series of experimental interventions demonstrate that these naive forecasts do not result from inattention to trading or price responses but reflect a gap in respondents' mental models -- a fundamental unfamiliarity with the concept of equilibrium.
Speculative news on corporate takeovers may hurt productivity because uncertainty and threat of job loss cause anxiety, distraction, and reduced collaboration and morale among employees and managers. Using a panel of OECD-headquartered firms, we show that firm productivity temporarily declines upon announcements of speculative takeover rumors that do not materialize. This productivity dip is more pronounced for targets and for firms in countries with weaker employee rights and less long-term orientation. Abnormal stock returns mirror these results. The evidence fosters our understanding of potential real effects of speculative financial news and the costs of takeover threats.
This paper studies the impact of the concentration of control, the type of controlling shareholder and the dividend tax preference of the controlling shareholder on dividend policy for a panel of 220 German firms over 1984-2005. While the concentration of control does not have an effect on the dividend payout, there is strong evidence that the type of controlling shareholder matters as family controlled firms have high dividend payouts whereas bank controlled firms have low dividend payouts. However, there is no evidence that the dividend preference of the large shareholder has an impact on the dividend decision. JEL Classification: G32, G35. Keywords: Dividend Policy, Payout Policy, Lintner Dividend Model, Tax Clientele Effects, Corporate Governance
Recent advances in natural language processing have contributed to the development of market sentiment measures through text content analysis in news providers and social media. The effectiveness of these sentiment variables depends on the implemented techniques and the type of source on which they are based. In this paper, we investigate the impact of the release of public financial news on the S&P 500. Using automatic labeling techniques based on either stock index returns or dictionaries, we formulate a classification problem based on long short-term memory neural networks to extract alternative proxies of investor sentiment. Our findings provide evidence that these sentiments affect the market on a 20-minute time frame. We find that dictionary-based sentiment provides meaningful results compared with sentiment based on stock index returns, which partly fails in the mapping process between news and financial returns.
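The dictionary-based labeling step mentioned above can be illustrated with a minimal sketch. The word lists and headlines below are hypothetical placeholders, and the paper's actual dictionaries and LSTM classifier are not reproduced here:

```python
# Illustrative sketch (not the authors' implementation): dictionary-based
# sentiment labeling of news headlines. POSITIVE/NEGATIVE are toy word
# lists, not a published financial sentiment dictionary.

POSITIVE = {"beat", "growth", "surge", "record", "upgrade"}
NEGATIVE = {"miss", "loss", "plunge", "default", "downgrade"}

def label_headline(text: str) -> int:
    """Return +1 (positive), -1 (negative) or 0 (neutral) by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

labels = [label_headline(h) for h in [
    "Earnings beat forecasts and shares surge",
    "Bank reports record loss after default",
    "Index unchanged in quiet session",
]]
print(labels)  # [1, -1, 0]
```

Labels produced this way could then serve as targets for a supervised text classifier, in the spirit of the approach the abstract describes.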
Discussions about the banking union have restarted. Its success so far is limited: national banking sectors are still overwhelmingly exposed to their own countries’ economies, cross-border banking has not increased, and capital and liquidity remain locked within national boundaries. The policy letter highlights that the current debate, centered on sovereign exposures and deposit insurance, misses critical underlying problems in the supervision and resolution frameworks. The ECB supervisors’ efforts to facilitate cross-border banking have been hampered by national ringfencing. The resolution framework is not up to its task: the limited powers of the SRB, prohibitive access conditions and the limited size of the Single Resolution Fund constrain its effectiveness. The lack of a coherent European insolvency framework creates an uneven regulatory playing field and incentives to bypass European rules. The new Commission and European Parliament, together with the new ECB leadership, provide a unique opportunity to address these shortcomings and make the banking union work.
There is much discussion today about a possible digital euro (PDE). Is this attention exaggerated? Are “central bank digital currencies” (CBDCs) “a solution in search of a problem”, as some have argued? This article summarizes the main facts about the PDE and concludes that, if the decision on adoption had to be taken today, the arguments against would outweigh those in favor. However, there may be future circumstances in which having a CBDC ready for use can indeed be useful. Therefore, preparing is a good thing, even if the odds of its usefulness in normal conditions are slim.
In its first ten years (2014-2023), the banking union was successful in its prudential agenda but failed spectacularly in its underlying objective: establishing a single banking market in the euro area. This goal is now more important than ever, and easier to attain than at any time in the last decade. To make progress, cross-border banks should receive a specific treatment within general banking union legislation. Suggestions are made on how to make such regulatory carve-out effective and legally sound.
This policy note summarizes our assessment of financial sanctions against Russia. We order the options by increasing severity: (1) the widely discussed SWIFT exclusions, followed by (2) blocking of correspondent banking relationships with Russian banks, including the Central Bank, alongside secondary sanctions, and (3) a full blacklisting of the ‘real’ export-import flows underlying the financial transactions. We assess option (1) as less impactful than often believed, yet sending a strong signal of EU unity; option (2) as an effective way to isolate the Russian banking system, particularly if secondary sanctions are in place to avoid workarounds. Option (3) represents possibly the most effective way to apply economic and financial pressure, interrupting trade relationships.
We assess, through VAR evidence, the effects of monetary policy on banks’ risk exposure and find the presence of a risk-taking channel. A model combining fragile banks prone to risk mis-incentives and credit constrained firms, whose collateral fluctuations generate a balance sheet channel, is used to rationalize the evidence. A monetary expansion increases bank leverage, with two consequences: on the one hand, this exacerbates risk exposure; on the other, the risk spiral depresses output, thereby dampening the conventional amplification effect of the financial accelerator.
We assess the effects of monetary policy on bank risk to verify the existence of a risk-taking channel - monetary expansions inducing banks to assume more risk. We first present VAR evidence confirming that this channel exists and tends to concentrate on the bank funding side. Then, to rationalize this evidence we build a macro model where banks subject to runs endogenously choose their funding structure (deposits vs. capital) and risk level. A monetary expansion increases bank leverage and risk. In turn, higher bank risk in steady state increases asset price volatility and reduces equilibrium output.
Exit strategies
(2010)
We study alternative scenarios for exiting the post-crisis fiscal and monetary accommodation using the model of Angeloni and Faia (2010), which combines a standard DSGE framework with a fragile banking sector, suitably modified and calibrated for the euro area. Credibly announced and fast fiscal consolidations dominate – based on simple criteria – alternative strategies incorporating various degrees of gradualism and surprise. The fiscal adjustment should be based on spending cuts or else be relatively skewed towards consumption taxes. The phasing out of monetary accommodation should be simultaneous or slightly delayed. We also find that, contrary to widespread belief, Basel III may well have an expansionary macroeconomic effect.
Keywords: Exit Strategies, Debt Consolidation, Fiscal Policy, Monetary Policy, Capital Requirements, Bank Runs
JEL Classification: G01, E63, H12
Exit strategies
(2014)
We study alternative scenarios for exiting the post-crisis fiscal and monetary accommodation using a macromodel where banks choose their capital structure and are subject to runs. Under a Taylor rule, the post-crisis interest rate hits the zero lower bound (ZLB) and remains there for several years. In that condition, pre-announced and fast fiscal consolidations dominate - based on output and inflation performance and bank stability - alternative strategies incorporating various degrees of gradualism and surprise. We also examine an alternative monetary strategy in which the interest rate does not reach the ZLB; the benefits from fiscal consolidation persist, but are more nuanced.
We present new statistical indicators of the structure and performance of US banks from 1990 to today, geographically disaggregated at the level of individual counties. The constructed data set (20 indicators for some 3150 counties over 31 years, for a total of about 2 million data points) conveys a detailed picture of how the geography of US banking has evolved in the last three decades. We consider the data as a stepping stone to understand the role banks and banking policies may have played in mitigating, or exacerbating, the rise of poverty and inequality in certain US regions.
We examine the impact of so-called "Crisis Contracts" on bank managers' risk-taking incentives and on the probability of banking crises. Under a Crisis Contract, managers are required to contribute a pre-specified share of their past earnings to finance public rescue funds when a crisis occurs. This can be viewed as a retroactive tax that is levied only when a crisis occurs and that leads to a form of collective liability for bank managers. We develop a game-theoretic model of a banking sector whose shareholders have limited liability, so that society at large will suffer losses if a crisis occurs. Without Crisis Contracts, the managers' and shareholders' interests are aligned, and managers take more than the socially optimal level of risk. We investigate how the introduction of Crisis Contracts changes the equilibrium level of risk-taking and the remuneration of bank managers. We establish conditions under which the introduction of Crisis Contracts will reduce the probability of a banking crisis and improve social welfare. We explore how Crisis Contracts and capital requirements can supplement each other and we show that the efficacy of Crisis Contracts is not undermined by attempts to hedge.
Four years after the Panama Papers scandal, tax avoidance remains an urgent moral-political problem. Moving beyond both the academic and policy mainstream, I advocate the “democratization of tax enforcement,” by which I mean systematic efforts to make tax avoiders accountable to the judgment of ordinary citizens. Both individual oligarchs and multinational corporations have access to sophisticated tax avoidance strategies that impose significant fiscal costs on democracies and exacerbate preexisting distributive and political inequalities. Yet much contemporary tax sheltering occurs within the letter of the law, rendering criminal sanctions ineffective. In response, I argue for the creation of Citizen Tax Juries, deliberative minipublics empowered to scrutinize tax avoiders, demand accountability, and facilitate concrete reforms. This proposal thus responds to the wider aspiration, within contemporary democratic theory, to secure more popular control over essential economic processes.