Refine
Year of publication
- 2022 (79)
Document Type
- Working Paper (79)
Language
- English (79)
Has Fulltext
- yes (79)
Is part of the Bibliography
- no (79)
Keywords
- AI borrower classification (2)
- AI enabled credit scoring (2)
- Artificial Intelligence (2)
- Banking Union (2)
- Big Data (2)
- COVID-19 (2)
- ESG (2)
- FinTech (2)
- Financial Regulation (2)
- Performance (2)
Institute
- Center for Financial Studies (CFS) (79)
Financial ties between drug companies and medical researchers are thought to bias results published in medical journals. To enable readers to account for such bias, most medical journals require authors to disclose potential conflicts of interest. For such policies to be effective, conflict disclosure must modify readers’ beliefs. We therefore examine whether disclosure of financial ties with industry reduces article citations, indicating a discount. A challenge to estimating this effect is selection, as drug companies may seek out higher-quality authors as consultants or fund their studies, generating a positive correlation between disclosed conflicts and citations. Our analysis confirms this positive association. Including observable controls for article and author quality attenuates but does not eliminate this relation. To tease out whether other researchers discount articles with conflicts, we perform three tests. First, we show that the positive association is weaker for review articles, which are more susceptible to bias. Second, we examine article recommendations to family physicians by medical experts, who choose from articles that are a priori more homogenous in quality. Here, we find a significantly negative association between disclosure and expert recommendations, consistent with discounting. Third, we conduct an analysis within author and article, exploiting journal policy changes that result in conflict disclosure by an author. We examine the effect of this disclosure on citations to a previously published article by the same author. This analysis reveals a negative citation effect. Overall, we find evidence that disclosures negatively affect citations, consistent with the notion that other researchers discount articles with disclosed conflicts.
The author proposes a Differential-Independence Mixture Ensemble (DIME) sampler for the Bayesian estimation of macroeconomic models. It allows sampling from particularly challenging, high-dimensional black-box posterior distributions which may also be computationally expensive to evaluate. DIME is a “Swiss Army knife”, combining the advantages of a broad class of gradient-free global multi-start optimizers with the properties of a Markov chain Monte Carlo (MCMC) method. This includes fast burn-in and convergence absent any prior numerical optimization or initial guesses, good performance for multimodal distributions, a large number of chains (the “ensemble”) running in parallel, and an endogenous proposal density generated from the state of the full ensemble which respects the bounds of the prior distribution. The author shows that the number of parallel chains scales well with the number of necessary ensemble iterations.
DIME is used to estimate a medium-scale heterogeneous agent New Keynesian (“HANK”) model with liquid and illiquid assets, for the first time also including the households’ preference parameters in the estimation. The results mildly point towards a less accentuated role of household heterogeneity for empirical macroeconomic dynamics.
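A generic ensemble step of this kind, in which each chain's proposal is generated from the current state of the other chains via a differential-evolution move (in the spirit of ter Braak, 2006), can be sketched as follows. This is an illustrative building block of ensemble MCMC, not the DIME algorithm itself; function and parameter names are hypothetical.

```python
import numpy as np

def de_mcmc_step(ensemble, log_prob, gamma=None, eps=1e-7, rng=None):
    """One differential-evolution proposal step for an ensemble sampler.

    Each chain i proposes a move along the difference of two other
    randomly chosen chains, so the proposal density is generated
    endogenously from the state of the full ensemble.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = ensemble.shape
    # 2.38 / sqrt(2d) is the classic near-optimal DE-MC scaling.
    gamma = 2.38 / np.sqrt(2 * d) if gamma is None else gamma
    new = ensemble.copy()
    lp = np.array([log_prob(x) for x in ensemble])
    for i in range(n):
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        prop = ensemble[i] + gamma * (ensemble[a] - ensemble[b]) \
               + eps * rng.standard_normal(d)
        lp_prop = log_prob(prop)
        # Metropolis accept/reject keeps the target invariant.
        if np.log(rng.random()) < lp_prop - lp[i]:
            new[i], lp[i] = prop, lp_prop
    return new
```

Because the proposal only needs log-posterior evaluations, the scheme is gradient-free and the chains can be evaluated in parallel, which is the property the abstract emphasizes.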
European banks have substantial investments in assets that are measured without directly observable market prices (mark-to-model). Financial disclosures of these value estimates lack standardization and are hard to compare across banks. These comparability concerns are concentrated in large European banks that extensively rely on level 3 estimates with the most unobservable inputs. Although the relevant balance sheet positions only represent a small fraction of these large banks’ total assets (2.9%), their value equals a significant fraction of core equity tier 1 (48.9%). Incorrect valuations thus have the potential to impact financial stability. 85% of these bank assets are under direct ECB supervision. Prudential regulation requires value adjustments that are apt to shield capital against valuation risk. Yet, stringent enforcement is critical for achieving this objective.
This document was provided by the Economic Governance Support Unit at the request of the ECON Committee.
The great financial crisis and the euro area crisis led to a substantial reform of financial safety nets across Europe and – critically – to the introduction of supranational elements. Specifically, a supranational supervisor was established for the euro area, with discrete arrangements for supervisory competences and tasks depending on the systemic relevance of supervised credit institutions. A resolution mechanism was created to allow the frictionless resolution of large financial institutions. This resolution mechanism has now been complemented with a funding instrument.
While much more progress has been achieved than most observers could imagine 12 years ago, the banking union remains unfinished, with important gaps and deficiencies. The experience of the past years, especially in the area of crisis management and resolution, has provided impetus for reform discussions, as reflected most recently in the Eurogroup statement of 16 June 2022.
This Policy Insight looks primarily at the current and the desired state of the banking union project. The key underlying question, and the focus here, is the level of ambition and how it is matched with effective legal and regulatory tools. Specifically, two questions will structure the discussions:
What would be a reasonable definition and rationale for a ‘complete’ banking union? And what legal reforms would be required to achieve it?
Banking union is a case of a new remit of EU-level policy that so far has been established on the basis of long pre-existing treaty stipulations, namely, Article 127(6) TFEU (for banking supervision) and Article 114 TFEU (for crisis management and deposit insurance). Could its completion be similarly carried out through secondary law? Or would a more comprehensive overhaul of the legal architecture be required to ensure legal certainty and legitimacy?
This article compares the three initial safety nets spanned by the European Union in response to the Covid-19 crisis: SURE, the Pandemic Crisis Support, and the European Guarantee Fund. It compares their design regarding scope, generosity, target groups, implementation, the types of solidarity and conditionality, and asks how they reflect on core-periphery relations in the EU. The article finds that the most important factor in all three instruments is risk-sharing between member states, even though SURE and the EGF display elements of fiscal solidarity. Finally, the article shows that Euro crisis countries from the South are the main recipients of financial aid, while Central and East European countries receive significantly less assistance and core countries in the North and West have no need for them.
Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic restrictions in terms of deep parameters. For given deep parameters, structural equations are solved for "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in a LREM. The sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate a LREM in terms of deep parameters. Forwardly-estimated deep parameters determine forwardly-estimated RES equations that Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called economic identifying restrictions on deep parameters of forwardly-estimated LREMs "incredible", because he considered in-sample fits of forwardly-estimated OVAR equations inadequate and out-of-sample policy predictions of forwardly-estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and directly using the directly-estimated OVAR equations to make policy predictions. However, if assumed or predicted out-of-sample policy variables in directly-made policy predictions differ significantly from in-sample values, then, the out-of-sample policy predictions won't satisfy Lucas's critique.
If directly-estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then, identification 2 derived in the paper can linearly "inversely" estimate the underlying RES equations from the directly-estimated OVAR equations and the inversely-estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly-estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely-estimated RES equations to make accurate (credible) out-of-sample policy predictions, then, he should consider the inversely-estimated RES equations to be credible. Thus, inversely-estimated RES equations by identification 2 can reconcile Lucas's advocacy for making policy predictions with RES equations and Sims's advocacy for directly estimating OVAR equations.
The paper also derives identification 1 of structural coefficients from RES coefficients that contributes mainly by showing that directly estimated reduced-form OVAR equations can have underlying LREM-structural equations.
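In illustrative notation (not the paper's own), the forward mapping from deep parameters to OVAR coefficients described above can be written as:

```latex
% Structural LREM, coefficients restricted by deep parameters \theta:
A(\theta)\, y_t = B(\theta)\, \mathbb{E}_t\, y_{t+1} + C(\theta)\, y_{t-1} + D(\theta)\, x_t
% Rational-expectations solution (RES) for the endogenous variables:
y_t = G(\theta)\, y_{t-1} + H(\theta)\, x_t
% VAR for the exogenous variables:
x_t = \Phi\, x_{t-1} + \varepsilon_t
% Substituting the exogenous VAR into the RES gives the endogenous VARX;
% stacking both blocks yields the overall VAR (OVAR):
\begin{pmatrix} y_t \\ x_t \end{pmatrix}
=
\begin{pmatrix} G(\theta) & H(\theta)\Phi \\ 0 & \Phi \end{pmatrix}
\begin{pmatrix} y_{t-1} \\ x_{t-1} \end{pmatrix}
+
\begin{pmatrix} H(\theta)\,\varepsilon_t \\ \varepsilon_t \end{pmatrix}
```

Forward estimation fits the deep parameters through this mapping; inverse estimation in the sense of identification 2 runs the last step backwards, recovering the RES coefficients from directly estimated OVAR coefficients.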
In more and more situations, artificially intelligent algorithms have to model humans’ (social) preferences on whose behalf they increasingly make decisions. They can learn these preferences through the repeated observation of human behavior in social encounters. In such a context, do individuals adjust the selfishness or prosociality of their behavior when it is common knowledge that their actions produce various externalities through the training of an algorithm? In an online experiment, we let participants’ choices in dictator games train an algorithm. Thereby, they create an externality on future decision making of an intelligent system that affects future participants. We show that individuals who are aware of the consequences of their training on the payoffs of a future generation behave more prosocially, but only when they bear the risk of being harmed themselves by future algorithmic choices. In that case, the externality of artificial intelligence training induces a significantly higher share of egalitarian decisions in the present.
Using granular supervisory data from Germany, we investigate the impact of unconventional monetary policies via central banks’ purchase of corporate bonds. While this policy results in a loosening of credit market conditions as intended by policy makers, we document two unintended side effects. First, banks that are more exposed to borrowers benefiting from the bond purchases now lend more to high-risk firms with no access to bond markets. Since more loan write-offs arise from these firms and banks are not compensated for this risk by higher interest rates, we document a drop in bank profitability. Second, the policy impacts the allocation of loans among industries. Affected banks reallocate loans from investment grade firms active on bond markets to mainly real estate firms without investment grade rating. Overall, our findings suggest that central banks’ quantitative easing via the corporate bond markets has the potential to contribute to both banking sector instability and real estate bubbles.
Financial literacy affects wealth accumulation, and pension planning plays a key role in this relationship. In a large field experiment, we employ a digital pension aggregation tool to confront a treatment group with a simplified overview of their current pension claims across all pillars of the pension system. We combine survey and administrative bank data to measure the effects on actual saving behavior. Access to the tool decreases pension uncertainty for treated individuals. Average savings increase - especially for the financially less literate. We conclude that simplification of pension information can potentially reduce disparities in pension planning and savings behavior.
The financial sector plays an important role in financing the green transformation of the European economy. A critical assessment of the current regulatory framework for sustainable finance in Europe leads to ambiguous results. Although the level of transparency on ESG aspects of financial products has been significantly improved, it is questionable whether the complex, mainly disclosure-oriented architecture is sufficient to mobilise more private capital into sustainable investments. It should be discussed whether a minimum Taxonomy ratio or Green Asset Ratio has to be fulfilled to market a financial product as “green”. Furthermore, because of the high complexity of the regulation, it could be helpful for the understanding of private investors to establish a simplified green rating, based on the Taxonomy ratio, to facilitate the selection of green financial products.
This policy note summarizes our assessment of financial sanctions against Russia. We see an increase in sanctions severity starting from (1) the widely discussed SWIFT exclusions, followed by (2) blocking of correspondent banking relationships with Russian banks, including the Central Bank, alongside secondary sanctions, and (3) a full blacklisting of the ‘real’ export-import flows underlying the financial transactions. We assess option (1) as being less impactful than often believed yet sending a strong signal of EU unity; option (2) as an effective way to isolate the Russian banking system, particularly if secondary sanctions are in place, to avoid workarounds. Option (3) represents possibly the most effective way to apply economic and financial pressure, interrupting trade relationships.
For the academic audience, this paper presents the outcome of a well-identified, large change in the monetary policy rule from the lens of a standard New Keynesian model and asks whether the model properly captures the effects. For policymakers, it presents a cautionary tale of the dismal effects of ignoring basic macroeconomics. The Turkish monetary policy experiment of the past decade, stemming from a belief of the government that higher interest rates cause higher inflation, provides an unfortunately clean exogenous variance in the policy rule. The mandate to keep rates low, and the frequent policymaker turnover orchestrated by the government to enforce this, led to the Taylor principle not being satisfied and eventually a negative coefficient on inflation in the policy rule. In such an environment, was the exchange rate still a random walk? Was inflation anchored? Does the “standard model” suffice to explain the broad contours of macroeconomic outcomes in an emerging economy with large identifying variance in the policy rule? There are no surprises for students of open-economy macroeconomics; the answers are no, no, and yes.
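The role of the Taylor principle here can be illustrated with a textbook flexible-price sketch: combining a Fisher equation i_t = r + E[π_{t+1}] with a rule i_t = r + φπ_t gives E[π_{t+1}] = φπ_t. This is a deliberately minimal illustration of the determinacy logic, not the paper's New Keynesian model; all names and parameter values are hypothetical.

```python
import numpy as np

def inflation_path(phi, pi0=0.02, periods=12):
    """Iterate E[pi_{t+1}] = phi * pi_t from an initial deviation pi0.

    With |phi| > 1 any nonzero deviation explodes, so the only bounded
    path is pi = 0: the Taylor principle pins inflation down. With
    |phi| < 1 (or negative, as in the episode the paper studies), every
    deviation decays to a bounded path, so expectations are not anchored
    to a unique equilibrium.
    """
    path = [pi0]
    for _ in range(periods):
        path.append(phi * path[-1])
    return np.array(path)
```

For example, an initial 2% deviation under phi = 1.5 grows past 250% within a year of monthly iterations, while under phi = 0.5 it decays toward zero, leaving the initial deviation itself undetermined.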
Despite the impressive success of deep neural networks in many application areas, neural network models have so far not been widely adopted in the context of volatility forecasting. In this work, we aim to bridge the conceptual gap between established time series approaches, such as the Heterogeneous Autoregressive (HAR) model (Corsi, 2009), and state-of-the-art deep neural network models. The newly introduced HARNet is based on a hierarchy of dilated convolutional layers, which facilitates an exponential growth of the receptive field of the model in the number of model parameters. HARNets allow for an explicit initialization scheme such that before optimization, a HARNet yields identical predictions as the respective baseline HAR model. Particularly when considering the QLIKE error as a loss function, we find that this approach significantly stabilizes the optimization of HARNets. We evaluate the performance of HARNets with respect to three different stock market indexes. Based on this evaluation, we formulate clear guidelines for the optimization of HARNets and show that HARNets can substantially improve upon the forecasting accuracy of their respective HAR baseline models. In a qualitative analysis of the filter weights learnt by a HARNet, we report clear patterns regarding the predictive power of past information. Among information from the previous week, yesterday and the day before, yesterday's volatility contributes by far the most to today's realized volatility forecast. Moreover, within the previous month, the importance of single weeks diminishes almost linearly when moving further into the past.
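The HAR baseline that a HARNet is initialized to reproduce regresses today's realized volatility on yesterday's RV and its trailing weekly (5-day) and monthly (22-day) averages (Corsi, 2009). A minimal plain-numpy sketch, with hypothetical function names:

```python
import numpy as np

def fit_har(rv):
    """OLS fit of the HAR model: rv[t] on [1, rv[t-1], 5-day mean, 22-day mean].

    Returns coefficients [intercept, daily, weekly, monthly].
    """
    rv = np.asarray(rv, dtype=float)
    X, y = [], []
    for t in range(22, len(rv)):
        X.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    X, y = np.array(X), np.array(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def har_forecast(rv, beta):
    """One-step-ahead realized volatility forecast from fitted coefficients."""
    rv = np.asarray(rv, dtype=float)
    x = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return float(x @ beta)
```

The three lookback horizons correspond to the dilations of the convolutional layers in HARNet, which is why the explicit initialization scheme can reproduce the baseline exactly before training.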
In a parsimonious regime switching model, we find strong evidence that expected consumption growth varies over time. Adding inflation as a second variable, we uncover two states in which expected consumption growth is low, one with high and one with negative expected inflation. Embedded in a general equilibrium asset pricing model with learning, these dynamics replicate the observed time variation in stock return volatilities and stock-bond return correlations. They also provide an alternative derivation for a measure of time-varying disaster risk suggested by Wachter (2013), implying that both the disaster and the long-run risk paradigm can be extended towards explaining movements in the stock-bond correlation.
This paper examines optimal environmental policy when external financing is costly for firms. We introduce emission externalities and industry equilibrium in the Holmström and Tirole (1997) model of corporate finance. While a cap-and-trade system optimally governs both firms' abatement activities (internal emission margin) and industry size (external emission margin) when firms have sufficient internal funds, external financing constraints introduce a wedge between these two objectives. When a sector is financially constrained in the aggregate, the optimal cap is strictly above the Pigouvian benchmark and emission allowances should be allocated below market prices. When a sector is not financially constrained in the aggregate, a cap that is below the Pigouvian benchmark optimally shifts market share to less polluting firms and, moreover, there should be no "grandfathering" of emission allowances. With financial constraints and heterogeneity across firms or sectors, a uniform policy, such as a single cap-and-trade system, is typically not optimal.
We study liquidity provision by competitive high-frequency trading firms (HFTs) in a dynamic trading model with private information. Liquidity providers face adverse selection risk from trading with privately informed investors and from trading with other HFTs that engage in latency arbitrage upon public information. The impact of the two different sources of risk depends on the details of the market design. We determine equilibrium transaction costs in continuous limit order book (CLOB) markets and under frequent batch auctions (FBA). In the absence of informed trading, FBA dominates CLOB just as in Budish et al. (2015). Surprisingly, this result no longer holds with privately informed investors. We show that FBA allows liquidity providers to charge markups and earn profits – even under risk neutrality and perfect competition. A slight variation of the FBA design removes the inefficiency by allowing traders to submit orders conditional on auction excess demand.
There have been numerous attempts to reform the Economic and Monetary Union (EMU) after the Great Recession; however, reform success varies greatly among sub-fields. Additionally, the political science research community has engaged a diverse set of theory-driven explanations, causal mechanisms, and variables to explain the respective reform success. This article takes stock of reform policies in the EMU from two angles. First, it outlines distinct theoretical approaches that seek to explain success and failure of reform proposals and second, it surveys how they explain policy output and policy outcome in four policy subfields: financial stabilization, economic governance, financial solidarity, and cooperative dissolution. Finally, the article develops a set of explanatory factors from the existing literature that will be used for a Qualitative Comparative Analysis (QCA).
The sixth sanction package of the European Union in the context of the aggression against Ukraine excludes Sberbank, the largest Russian bank, from the SWIFT network. The increasing use of SWIFT as a tool for sanctions stimulates the rollout of alternative payment information systems by the governments of Russia and China. This policy white paper describes the alternatives at hand, as well as their advantages and disadvantages. Careful reflection about these issues is particularly important, given the call for an “Economic Article 5” tabled for the next NATO meeting. Finally, the white paper highlights the need for institutional reforms, if policymakers decide to return SWIFT to the status of a global public good after the war.
Liquidity derivatives
(2022)
It is well established that investors price market liquidity risk. Yet, there exists no financial claim contingent on liquidity. We propose a contract to hedge uncertainty over future transaction costs, detailing potential buyers and sellers. Introducing liquidity derivatives in Brunnermeier and Pedersen (2009) improves financial stability by mitigating liquidity spirals. We simulate liquidity option prices for a panel of NYSE stocks spanning 2000 to 2020 by fitting a stochastic process to their bid-ask spreads. These contracts reduce the exposure to liquidity factors. Their prices provide a novel illiquidity measure reflecting cross-sectional commonalities. Finally, stock returns spread significantly along simulated prices.
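The simulation approach described — fitting a stochastic process to bid-ask spreads and pricing a claim on future transaction costs — can be sketched by Monte Carlo as follows. The mean-reverting square-root dynamics, parameter values, and call-style payoff are illustrative assumptions for exposition, not the paper's fitted model or contract design.

```python
import numpy as np

def price_liquidity_option(s0, strike, kappa=5.0, theta=0.01, sigma=0.02,
                           r=0.02, maturity=0.25, n_paths=20000,
                           n_steps=63, seed=0):
    """Monte Carlo price of a call on the bid-ask spread at maturity.

    The spread follows a mean-reverting square-root (CIR-type) process,
    simulated with an Euler scheme floored at zero; the payoff is
    max(spread_T - strike, 0), discounted at rate r.
    """
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    s = np.full(n_paths, float(s0))
    for _ in range(n_steps):
        dw = rng.standard_normal(n_paths) * np.sqrt(dt)
        drift = kappa * (theta - s) * dt
        diffusion = sigma * np.sqrt(np.maximum(s, 0.0)) * dw
        s = np.maximum(s + drift + diffusion, 0.0)  # keep spreads nonnegative
    payoff = np.maximum(s - strike, 0.0)
    return float(np.exp(-r * maturity) * payoff.mean())
```

A buyer of such a contract (say, a fund expecting to liquidate a position) locks in compensation if transaction costs spike, which is the hedging role the abstract assigns to liquidity derivatives.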
With Big Data, decisions made by machine learning algorithms depend on training data generated by many individuals. In an experiment, we identify the effect of varying individual responsibility for the moral choices of an artificially intelligent algorithm. Across treatments, we manipulated the sources of training data and thus the impact of each individual’s decisions on the algorithm. Diffusing such individual pivotality for algorithmic choices increased the share of selfish decisions and weakened revealed prosocial preferences. This does not result from a change in the structure of incentives. Rather, our results show that Big Data offers an excuse for selfish behavior through lower responsibility for one’s and others’ fate.