Working Paper
We employ a proprietary transaction-level dataset for Germany to examine how capital requirements affect the liquidity of corporate bonds. Using the 2011 European Banking Authority capital exercise, which mandated certain banks to increase regulatory capital, we find that affected banks reduce their inventory holdings, pre-arrange more trades, and execute smaller average trade sizes. While non-bank-affiliated dealers increase their market-making activity, they are unable to bridge this gap: aggregate liquidity declines. Our results are stronger for banks with a higher capital shortfall, for non-investment-grade bonds, and for bonds for which the affected banks were the dominant market makers.
We develop a two-sector incomplete-markets integrated assessment model to analyze the effectiveness of green quantitative easing (QE) in complementing fiscal policies for climate change mitigation. We model green QE through an outstanding stock of private assets held by a monetary authority and its portfolio allocation between a clean and a dirty sector of production. Green QE leads to a partial crowding out of private capital in the green sector and to a modest reduction of the global temperature of 0.04 degrees Celsius by 2100. A moderate global carbon tax of 50 USD per tonne of carbon is four times more effective.
Many people do not understand the concepts of life expectancy and longevity risk, potentially leading them to under-save for retirement or to not purchase longevity insurance, which in turn could reduce wellbeing at older ages. We investigate alternative ways to increase the salience of both concepts, allowing us to assess whether these change people’s perceptions and financial decision-making. Using randomly assigned vignettes providing subjects with information about either life expectancy or longevity, we show that merely prompting people to think about financial decisions changes their perceptions regarding subjective survival probabilities. Moreover, this information also boosts respondents’ interest in saving and demand for longevity insurance. In particular, longevity information influences both subjective survival probabilities and financial decisions, while life expectancy information influences only annuity choices. We provide some evidence that many people are simply unaware of longevity risk.
When the COVID-19 crisis struck, banks using internal-ratings-based (IRB) models quickly recognized the increase in risk and reduced lending more than banks using a standardized approach. This effect is not driven by borrowers’ quality or by banks in countries with credit booms before the pandemic. The higher risk sensitivity of IRB models does not always result in lower credit provision when risk intensifies. Certain features of the IRB models, such as the use of a downturn Loss Given Default parameter, can increase banks’ resilience and preserve their intermediation capacity even during downturns. Affected borrowers were unable to fully insulate themselves and reduced corporate investment.
Previous studies document a relationship between gambling activity at the aggregate level and investments in securities with lottery-like features. We combine data on individual gambling consumption with portfolio holdings and trading records to examine whether gambling and trading act as substitutes or complements. We find that gamblers are more likely than the average investor to hold lottery stocks, but significantly less likely than active traders who do not gamble. Our results suggest that gambling behavior across domains is less relevant than other portfolio characteristics that predict investing in high-risk and high-skew securities, and that gambling on and off the stock market act as substitutes to satisfy the same need, e.g., sensation seeking.
Colocation services offered by stock exchanges enable market participants to execute large orders at substantially lower cost and with less sensitivity to transacting against high-frequency traders. However, these benefits manifest only for orders executed on the colocated brokers' own behalf, whereas customers' order execution costs are substantially higher. Analyses of individual order executions indicate that customer orders originating from colocated brokers are less actively monitored and achieve inferior execution quality. This suggests that brokers do not make effective use of their technology, possibly due to agency frictions or poor algorithm selection and parameter choice by customers.
The leading premium
(2022)
In this paper, we consider conditional measures of lead-lag relationships between aggregate growth and industry-level cash-flow growth in the US. Our results show that firms in leading industries earn an average annualized return 3.6% higher than that of firms in lagging industries. Using both time-series and cross-sectional tests, we estimate an annual pure timing premium ranging from 1.2% to 1.7%. This finding can be rationalized in a model in which (a) agents price growth news shocks, and (b) leading industries provide valuable resolution of uncertainty about the growth prospects of lagging industries.
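The lead-lag classification underlying this premium can be illustrated with a toy calculation. The sketch below is a hedged illustration, not the paper's estimator: it classifies an industry as leading or lagging by the lag at which its growth series is most correlated with aggregate growth, using synthetic data.

```python
# Hedged toy sketch -- NOT the paper's estimator. It classifies an
# industry as "leading" or "lagging" by the lag k at which its growth
# series is most correlated with aggregate growth. Data are synthetic.

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lead(industry, aggregate, max_lag=4):
    """Lag k maximizing corr(industry[t], aggregate[t+k]).
    Positive k means the industry leads the aggregate by k periods."""
    scores = {}
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            x, y = industry[:len(industry) - k], aggregate[k:]
        else:
            x, y = industry[-k:], aggregate[:len(aggregate) + k]
        scores[k] = corr(x, y)
    return max(scores, key=scores.get)

# Synthetic check: an industry whose growth equals aggregate growth
# shifted two periods ahead should be identified as leading with k = 2.
aggregate = [0.1, 0.3, -0.2, 0.5, 0.0, 0.4, -0.1, 0.2, 0.3, -0.3, 0.1, 0.6]
industry = aggregate[2:] + [0.0, 0.0]
assert best_lead(industry, aggregate) == 2
```

A portfolio sorted on such a statistic, long leading and short lagging industries, is the intuition behind the timing premium the abstract reports.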
Advances in machine learning (ML) have led organizations to increasingly implement predictive decision aids intended to improve employees’ decision-making performance. While such systems improve organizational efficiency in many contexts, they may be a double-edged sword when there is a danger of system discontinuance. According to cognitive theories, the provision of ML-based predictions can adversely affect the development of decision-making skills, a deficit that comes to light when people lose access to the system. The purpose of this study is to put this assertion to the test. Using a novel experiment specifically tailored to deal with organizational obstacles and endogeneity concerns, we show that the initial provision of ML decision aids can latently prevent the development of decision-making skills, which becomes apparent when the system is later discontinued. We also find that the degree to which individuals 'blindly' trust observed predictions determines the ultimate performance drop in the post-discontinuance phase. Our results suggest that making clear to people that ML decision aids are imperfect can be beneficial, especially if there is a realistic danger of (temporary) system discontinuance.
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics. Broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historic training data shape algorithms, biases distort results, and data as well as model quality are not always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of anti-discrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both the disparate treatment and the disparate impact doctrines. The paper concludes with a suggestion to reorient the discussion and attempts to outline the contours of fair lending law in the age of AI.
Many nations incentivize retirement saving by letting workers defer taxes on pension contributions, imposing them when retirees withdraw their funds. Using a dynamic life cycle model, we show how ‘Rothification’, that is, taxing 401(k) contributions rather than payouts, alters saving, investment, consumption, and Social Security claiming patterns. We find that taxing pension contributions instead of withdrawals leads to delayed retirement, somewhat lower lifetime tax payments, and relatively small reductions in consumption. Indeed, the two tax regimes generate quite similar relative inequality metrics: the relative consumption inequality ratio under TEE (taxed-exempt-exempt) is only four percent higher than in the EET (exempt-exempt-taxed) case. Moreover, results indicate that the Gini measures for lifetime consumption, cash on hand, and 401(k) assets are also strikingly similar under the EET and TEE regimes, differing by only 1-4 percent. While tax payments are higher early in life under the TEE regime, they are slightly lower in the long run. Moreover, higher EET tax payments are also accompanied by higher volatility. We therefore find few reasons for policymakers to favor either tax approach on egalitarian or revenue-enhancing grounds.
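The EET/TEE contrast can be made concrete with a standard back-of-the-envelope calculation (a hedged sketch, not from the paper; all numbers are hypothetical): with a single constant tax rate, taxing contributions (TEE) and taxing withdrawals (EET) yield identical net retirement wealth, so differences between the regimes arise mainly from tax-rate variation over the life cycle and from behavioral responses.

```python
# Back-of-the-envelope sketch (not from the paper; numbers hypothetical):
# net retirement wealth from one pension contribution under the two regimes.

def eet_wealth(gross, r, years, tax_at_withdrawal):
    """EET (exempt-exempt-taxed): invest the gross amount, tax the payout."""
    return gross * (1 + r) ** years * (1 - tax_at_withdrawal)

def tee_wealth(gross, r, years, tax_at_contribution):
    """TEE ('Rothification'): tax the contribution first, then invest."""
    return gross * (1 - tax_at_contribution) * (1 + r) ** years

# With one constant tax rate (25%), both regimes give identical net wealth.
assert abs(eet_wealth(100.0, 0.04, 30, 0.25)
           - tee_wealth(100.0, 0.04, 30, 0.25)) < 1e-9

# If the tax rate in retirement (15%) is below the working-life rate (25%),
# EET comes out ahead -- one channel through which lifetime tax payments
# can differ across the regimes.
assert eet_wealth(100.0, 0.04, 30, 0.15) > tee_wealth(100.0, 0.04, 30, 0.25)
```

This algebraic near-equivalence is consistent with the abstract's finding that the two regimes produce strikingly similar inequality metrics.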
We analyze how market fragmentation affects the market quality of SME and other less actively traded stocks. Compared to large stocks, they are less likely to be traded on multiple venues and show low levels of fragmentation, if any. Concerning the impact of fragmentation on market quality, we find evidence for a hockey-stick effect: fragmentation has no effect for infrequently traded stocks, a negative effect on the liquidity of slightly more active stocks, and increasing benefits for the liquidity of large and actively traded stocks. Consequently, being traded on multiple venues is not necessarily harmful to SME stock market quality.
The authors propose a new method to forecast macroeconomic variables that combines two existing approaches to mixed-frequency data in DSGE models. The first approach estimates the DSGE model at a quarterly frequency and uses higher-frequency auxiliary data only for forecasting. The second transforms the quarterly state space into a monthly frequency. Their algorithm combines the advantages of these two approaches. They compare the new method with the existing ones using both simulated and real-world data. With simulated data, the new method outperforms all other methods, including forecasts from the standard quarterly model. With real-world data, incorporating auxiliary variables as in their method substantially decreases forecasting errors for recessions, but casting the model in a monthly frequency delivers better forecasts in normal times.
We investigate the impact of uneven transparency regulation across countries and industries on the location of economic activity. Using two distinct sources of regulatory variation (the varying extent of financial-reporting requirements and the staggered introduction of electronic business registers in Europe), we consistently document that direct exposure to transparency regulation is negatively associated with the focal industry’s economic activity in terms of inputs (e.g., employment) and outputs (e.g., production). By contrast, we find that indirect exposure to supplier and customer industries’ transparency regulation is positively associated with the focal industry’s economic activity. Our evidence suggests that uneven transparency regulation can reallocate economic activity from regulated toward unregulated countries and industries, distorting the location of economic activity.
To ensure the credibility of market discipline induced by bail-in, neither retail investors nor peer banks should appear prominently among the investor base of banks’ loss absorbing capital. Empirical evidence on bank-level data provided by the German Federal Financial Supervisory Authority raises a few red flags. Our list of policy recommendations encompasses disclosure policy, data sharing among supervisors, information transparency on holdings of bail-inable debt for all stakeholders, threshold values, and a well-defined upper limit for any bail-in activity. This document was provided by the Economic Governance Support Unit at the request of the ECON Committee.
European banks have substantial investments in assets that are measured without directly observable market prices (mark-to-model). Financial disclosures of these value estimates lack standardization and are hard to compare across banks. These comparability concerns are concentrated in large European banks that extensively rely on level 3 estimates with the most unobservable inputs. Although the relevant balance sheet positions only represent a small fraction of these large banks’ total assets (2.9%), their value equals a significant fraction of core equity tier 1 (48.9%). Incorrect valuations thus have the potential to impact financial stability. 85% of these bank assets are under direct ECB supervision. Prudential regulation requires value adjustments that are apt to shield capital against valuation risk. Yet, stringent enforcement is critical for achieving this objective. This document was provided by the Economic Governance Support Unit at the request of the ECON Committee.
Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic restrictions in terms of deep parameters. For given deep parameters, structural equations are solved for "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in a LREM. The sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate a LREM in terms of deep parameters. Forwardly-estimated deep parameters determine forwardly-estimated RES equations that Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called economic identifying restrictions on deep parameters of forwardly-estimated LREMs "incredible", because he considered in-sample fits of forwardly-estimated OVAR equations inadequate and out-of-sample policy predictions of forwardly-estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and using these directly-estimated OVAR equations to make policy predictions. However, if assumed or predicted out-of-sample policy variables in directly-made policy predictions differ significantly from in-sample values, then the out-of-sample policy predictions will not satisfy Lucas's critique.
If directly-estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then identification 2, derived in the paper, can linearly "inversely" estimate the underlying RES equations from the directly-estimated OVAR equations, and the inversely-estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly-estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely-estimated RES equations to make accurate (credible) out-of-sample policy predictions, then he should consider the inversely-estimated RES equations to be credible. Thus, inversely-estimated RES equations obtained by identification 2 can reconcile Lucas's advocacy for making policy predictions with RES equations and Sims's advocacy for directly estimating OVAR equations.
The paper also derives identification 1, of structural coefficients from RES coefficients, whose main contribution is to show that directly estimated reduced-form OVAR equations can have underlying LREM-structural equations.
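The forward and inverse mappings described above can be illustrated with a textbook scalar example. This is a hedged sketch under simplifying assumptions, not the paper's identification results: a one-equation LREM with an AR(1) exogenous driver, where the forward map sends deep parameters to the reduced-form coefficient and the inverse map recovers a structural coefficient from it.

```python
# Hedged textbook-scale illustration -- NOT the paper's identifications.
# Scalar LREM:  y_t = a * E_t[y_{t+1}] + b * x_t,  with exogenous AR(1)
# x_t = rho * x_{t-1} + e_t.  Guessing y_t = c * x_t and using
# E_t[x_{t+1}] = rho * x_t gives  c = a*c*rho + b,  i.e. the RES/VARX
# coefficient  c = b / (1 - a*rho).

def res_coefficient(a, b, rho):
    """Forward map: deep parameters (a, b, rho) -> reduced-form coefficient c."""
    assert abs(a * rho) < 1  # keeps the guessed solution well defined
    return b / (1 - a * rho)

def invert_for_b(c, a, rho):
    """Inverse map: recover the structural coefficient b from c and (a, rho)."""
    return c * (1 - a * rho)

a, b, rho = 0.9, 1.0, 0.5
c = res_coefficient(a, b, rho)  # = 1 / (1 - 0.45)
assert abs(invert_for_b(c, a, rho) - b) < 1e-12
```

In this toy case the inverse map needs a and rho to be known; in the paper's general setting, identification 2 plays the analogous role of recovering RES equations from directly-estimated OVAR coefficients.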
Short sale bans may improve market quality during crises: new evidence from the 2020 Covid crash
(2022)
In theory, banning short selling stabilizes stock prices but undermines pricing efficiency and has ambiguous impacts on market liquidity. Empirical studies find mixed and conflicting results. This paper leverages cross-country policy variation during the 2020 Covid crisis to assess the differential impacts of bans on stock liquidity, prices, and volatility. Results suggest that bans improved liquidity and stabilized prices for illiquid stocks but temporarily diminished liquidity for highly liquid stocks. The findings support theories in which short sale bans may improve liquidity by selectively filtering out informed (and potentially predatory) traders. Thus, policies that target the most illiquid stocks may deliver better overall market quality than uniform short sale bans imposed on all stocks.
With open banking, consumers take greater control over their own financial data and share it at their discretion. Using a rich set of loan application data from the largest German FinTech lender in consumer credit, this paper studies what characterizes borrowers who share data and assesses the impact of sharing on loan application outcomes. I show that riskier borrowers share data more readily, which subsequently leads to an increase in the probability of loan approval and a reduction in interest rates. The effects hold across all credit risk profiles but are most pronounced at the two ends of the spectrum: borrowers with lower credit scores see a larger increase in loan approval rates, while borrowers with higher credit scores obtain a larger reduction in interest rates. I also find that standard variables used in credit scoring explain substantially less variation in loan application outcomes when customers share data. Overall, these findings suggest that open banking improves financial inclusion, and they also provide policy implications for regulators engaged in the adoption or extension of open banking policies.
With free product delivery now virtually standard in e-commerce, product returns pose a major challenge for online retailers and society. For retailers, product returns involve significant transportation, labor, disposal, and administrative costs. From a societal perspective, product returns contribute to greenhouse gas emissions and packaging disposal and often waste natural resources. Reducing product returns has therefore become a key challenge. This paper develops and validates a novel smart green nudging approach to tackle the problem of product returns during customers’ online shopping processes. We combine a green nudge with a novel data enrichment strategy and a modern causal machine learning method. We first run a large-scale randomized field experiment in the online shop of a German fashion retailer to test the efficacy of a novel green nudge. Subsequently, we fuse the data from about 50,000 customers with publicly available aggregate data to create what we call enriched digital footprints and train a causal machine learning system capable of optimizing the administration of the green nudge. We report two main findings: First, our field study shows that the large-scale deployment of a simple, low-cost green nudge can significantly reduce product returns while increasing retailer profits. Second, we show how a causal machine learning system trained on the enriched digital footprint can amplify the effectiveness of the green nudge by “smartly” administering it only to certain types of customers. Overall, this paper demonstrates how combining a low-cost marketing instrument, a privacy-preserving data enrichment strategy, and a causal machine learning method can create a win-win situation from both an environmental and an economic perspective by simultaneously reducing product returns and increasing retailers’ profits.
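The targeting logic of such a system can be sketched in a few lines. The following is illustrative only: the segments, outcomes, and the simple two-model ("T-learner") estimator are assumptions for the sketch, not the paper's actual system or data.

```python
# Illustrative sketch only -- segments, data, and the simple two-model
# ("T-learner") logic below are assumptions, not the paper's system.
# Idea: from randomized experiment data, estimate the nudge's effect on
# the return probability per customer segment, then administer the nudge
# only where it is predicted to cut returns.

records = [
    # (segment, nudged, returned) -- synthetic experiment outcomes
    ("frequent_returner", 1, 0), ("frequent_returner", 1, 0),
    ("frequent_returner", 1, 1), ("frequent_returner", 0, 1),
    ("frequent_returner", 0, 1), ("frequent_returner", 0, 0),
    ("rare_returner", 1, 0), ("rare_returner", 1, 0),
    ("rare_returner", 1, 0), ("rare_returner", 0, 0),
    ("rare_returner", 0, 0), ("rare_returner", 0, 0),
]

def return_rate(segment, nudged):
    """Outcome model per treatment arm: here, a simple group mean."""
    hits = [r for s, n, r in records if s == segment and n == nudged]
    return sum(hits) / len(hits)

def estimated_effect(segment):
    """T-learner step: difference between treated and control predictions."""
    return return_rate(segment, 1) - return_rate(segment, 0)

def should_nudge(segment):
    """Administer the nudge only where it is predicted to reduce returns."""
    return estimated_effect(segment) < 0

assert should_nudge("frequent_returner")   # nudge cuts returns here
assert not should_nudge("rare_returner")   # no measurable benefit here
```

A production system would replace the group means with flexible outcome models fit on the enriched digital footprint, but the targeting rule (nudge where the estimated effect is favorable) is the same.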