Wirtschaftswissenschaften
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance: invisible prime applicants perform better under AI than under traditional metrics, and broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historical training data shape algorithms, biases distort results, and data and model quality are not always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of anti-discrimination law: received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both disparate treatment and disparate impact doctrine. The paper concludes with a suggestion to reorient the discussion and an attempt to outline the contours of fair lending law in the age of AI.
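The tension the abstract describes between causation-focused doctrine and outcome-based testing can be made concrete with the "four-fifths rule", a conventional disparate-impact screen from US practice. This sketch is not taken from the paper; the approval data and function names are hypothetical. The point is that the rule flags outcomes without asking what caused them:

```python
# Hypothetical illustration of the "four-fifths rule", a common
# disparate-impact screen (not the paper's own method).

def selection_rate(decisions):
    """Share of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def four_fifths_check(group_a, group_b, threshold=0.8):
    """Return (ratio, flag): ratio of the lower selection rate to the
    higher one. A ratio below `threshold` is conventionally treated as
    evidence of disparate impact, regardless of which input caused it."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    ratio = min(ra, rb) / max(ra, rb)
    return ratio, ratio < threshold

# Mock loan approvals for two applicant groups (1 = approved).
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% approval rate
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% approval rate
ratio, flagged = four_fifths_check(group_a, group_b)
print(round(ratio, 2), flagged)  # 0.5 True
```

Because the screen works purely on realized approval rates, it sidesteps the causal questions that, per the abstract, traditional doctrine is built around.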
Many nations incentivize retirement saving by letting workers defer taxes on pension contributions, imposing them when retirees withdraw their funds. Using a dynamic life cycle model, we show how ‘Rothification’ – that is, taxing 401(k) contributions rather than payouts – alters saving, investment, consumption, and Social Security claiming patterns. We find that taxing pension contributions instead of withdrawals leads to delayed retirement, somewhat lower lifetime tax payments, and relatively small reductions in consumption. Indeed, the two tax regimes generate quite similar relative inequality metrics: the relative consumption inequality ratio under TEE is only four percent higher than in the EET case. Moreover, results indicate that the Gini measures are also strikingly similar under the EET and the TEE regimes for lifetime consumption, cash on hand, and 401(k) assets, differing by only 1-4 percent. While tax payments are higher early in life under the TEE regime, they are slightly lower in the long run. Moreover, higher EET tax payments are also accompanied by higher volatility. We therefore find few reasons for policymakers to favor either tax approach on egalitarian or revenue-enhancing grounds.
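The mechanics behind the similarity of the two regimes can be seen in a stylized single-contribution calculation. This toy example is not the paper's life cycle model; the contribution, return, horizon, and tax rates below are invented:

```python
# Stylized comparison of EET (tax-deferred 401(k)) and TEE ("Roth")
# treatment of a single contribution; all numbers are invented.

def eet_wealth(contribution, r, years, tax_ret):
    """EET: contribute pre-tax, pay tax on withdrawal at the retirement rate."""
    return contribution * (1 + r) ** years * (1 - tax_ret)

def tee_wealth(contribution, r, years, tax_work):
    """TEE: pay tax up front at the working-life rate, withdraw tax-free."""
    return contribution * (1 - tax_work) * (1 + r) ** years

# With equal tax rates the two regimes deliver the same net wealth,
# so differences hinge on rate differences and the timing of payments.
same = abs(eet_wealth(100, 0.04, 30, 0.25)
           - tee_wealth(100, 0.04, 30, 0.25)) < 1e-9
print(same)  # True

# A lower retirement-age rate favors EET; tax revenue arrives later.
print(round(eet_wealth(100, 0.04, 30, 0.20), 2))
print(round(tee_wealth(100, 0.04, 30, 0.25), 2))
```

This timing-versus-rates trade-off is consistent with the abstract's finding of broadly similar consumption and inequality outcomes across the two regimes.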
We analyze how market fragmentation affects the market quality of SME and other less actively traded stocks. Compared to large stocks, they are less likely to be traded on multiple venues and, where they are, show only low levels of fragmentation. Concerning the impact of fragmentation on market quality, we find evidence for a hockey stick effect: fragmentation has no effect for infrequently traded stocks, a negative effect on the liquidity of slightly more active stocks, and increasing benefits for the liquidity of large and actively traded stocks. Consequently, being traded on multiple venues is not necessarily harmful for SME stock market quality.
The authors propose a new method to forecast macroeconomic variables that combines two existing approaches to mixed-frequency data in DSGE models. The first existing approach estimates the DSGE model at a quarterly frequency and uses higher-frequency auxiliary data only for forecasting. The second transforms a quarterly state space into a monthly frequency. Their algorithm combines the advantages of these two existing approaches. They compare the new method with the existing methods using both simulated and real-world data. With simulated data, the new method outperforms all other methods, including forecasts from the standard quarterly model. With real-world data, incorporating auxiliary variables as in their method substantially decreases forecasting errors for recessions, but casting the model at a monthly frequency delivers better forecasts in normal times.
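Why monthly auxiliary data can sharpen a quarterly forecast is illustrated below with a toy bridge-equation device. This is a standard mixed-frequency idea, not the authors' algorithm; the data-generating process, the AR(1) coefficient, and all magnitudes are invented:

```python
import numpy as np

# Toy mixed-frequency setup (invented, not the paper's model): a monthly
# indicator follows an AR(1); the quarterly target is its within-quarter
# average. We compare a quarterly-only forecast with one that exploits
# the first monthly observation of the quarter being forecast.
rng = np.random.default_rng(0)
rho, n_months = 0.8, 240
m = np.zeros(n_months)
for t in range(1, n_months):
    m[t] = rho * m[t - 1] + rng.normal(scale=0.5)
q = m.reshape(-1, 3).mean(axis=1)          # quarterly target series

# Quarterly-only forecast: AR(1) on the quarterly series, estimated by OLS.
phi = (q[:-1] @ q[1:]) / (q[:-1] @ q[:-1])
err_ar = q[1:] - phi * q[:-1]

# "Bridge" forecast: the first month of each target quarter is observed;
# the remaining two months are projected with the (known) monthly AR(1).
m_first = m[3::3]                          # first month of quarters 1..end
bridge = m_first * (1 + rho + rho**2) / 3
err_bridge = q[1:] - bridge

rmse = lambda e: float(np.sqrt(np.mean(e**2)))
print(round(rmse(err_ar), 3), round(rmse(err_bridge), 3))
```

In this toy, the bridge forecast has the smaller error because it conditions on information from inside the target quarter, the same intuition behind using higher-frequency auxiliary data in the abstract.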
We investigate the impact of uneven transparency regulation across countries and industries on the location of economic activity. Using two distinct sources of regulatory variation (the varying extent of financial-reporting requirements and the staggered introduction of electronic business registers in Europe), we consistently document that direct exposure to transparency regulation is negatively associated with the focal industry’s economic activity in terms of inputs (e.g., employment) and outputs (e.g., production). By contrast, we find that indirect exposure to supplier and customer industries’ transparency regulation is positively associated with the focal industry’s economic activity. Our evidence suggests that uneven transparency regulation can reallocate economic activity from regulated toward unregulated countries and industries, distorting the location of economic activity.
To ensure the credibility of market discipline induced by bail-in, neither retail investors nor peer banks should appear prominently among the investor base of banks’ loss absorbing capital. Empirical evidence on bank-level data provided by the German Federal Financial Supervisory Authority raises a few red flags. Our list of policy recommendations encompasses disclosure policy, data sharing among supervisors, information transparency on holdings of bail-inable debt for all stakeholders, threshold values, and a well-defined upper limit for any bail-in activity. This document was provided by the Economic Governance Support Unit at the request of the ECON Committee.
European banks have substantial investments in assets that are measured without directly observable market prices (mark-to-model). Financial disclosures of these value estimates lack standardization and are hard to compare across banks. These comparability concerns are concentrated in large European banks that extensively rely on level 3 estimates with the most unobservable inputs. Although the relevant balance sheet positions represent only a small fraction of these large banks’ total assets (2.9%), their value equals a significant fraction of core equity tier 1 (48.9%). Incorrect valuations thus have the potential to impact financial stability. 85% of these bank assets are under direct ECB supervision. Prudential regulation requires value adjustments that are apt to shield capital against valuation risk. Yet stringent enforcement is critical for achieving this objective. This document was provided by the Economic Governance Support Unit at the request of the ECON Committee.
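The leverage between the two percentages in the text can be shown with back-of-the-envelope arithmetic. Only the 2.9% and 48.9% ratios come from the abstract; the bank's balance-sheet figures below are invented:

```python
# Sensitivity of CET1 to level 3 valuation errors, implied by the ratios
# in the text (2.9% of total assets, 48.9% of CET1). The hypothetical
# bank's size is invented for illustration.
total_assets = 1000.0          # hypothetical bank, in bn EUR
level3 = 0.029 * total_assets  # 29.0 bn of mark-to-model (level 3) assets
cet1 = level3 / 0.489          # CET1 implied by the 48.9% ratio, ~59.3 bn

for error in (0.05, 0.10, 0.20):     # assumed overvaluation of level 3 assets
    hit = error * level3 / cet1      # fraction of CET1 wiped out
    print(f"{error:.0%} valuation error erodes {hit:.1%} of CET1")
```

Because level 3 positions are small relative to assets but large relative to capital, each percentage point of valuation error translates into roughly half a percentage point of CET1, which is the financial-stability concern the abstract raises.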
Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic theory in terms of deep parameters. For given deep parameters, structural equations are solved for the "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in an LREM. This sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate an LREM in terms of deep parameters. Forwardly estimated deep parameters determine forwardly estimated RES equations of the kind Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called the economic identifying restrictions on deep parameters of forwardly estimated LREMs "incredible", because he considered the in-sample fits of forwardly estimated OVAR equations inadequate and the out-of-sample policy predictions of forwardly estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and using these directly estimated OVAR equations to make policy predictions. However, if the assumed or predicted out-of-sample policy variables in such directly made policy predictions differ significantly from in-sample values, then the out-of-sample policy predictions will not satisfy Lucas's critique.
If directly estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then identification 2, derived in the paper, can linearly "inversely" estimate the underlying RES equations from the directly estimated OVAR equations, and the inversely estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely estimated RES equations to make accurate (credible) out-of-sample policy predictions, then he should consider the inversely estimated RES equations to be credible. Thus, inversely estimated RES equations obtained by identification 2 can reconcile Lucas's advocacy of making policy predictions with RES equations and Sims's advocacy of directly estimating OVAR equations.
The paper also derives identification 1, which recovers structural coefficients from RES coefficients; its main contribution is to show that directly estimated reduced-form OVAR equations can have underlying LREM-structural equations.
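The forward mapping from deep parameters to OVAR coefficients, and the spirit of the inverse step, can be illustrated with the textbook one-equation model x_t = a·E_t[x_{t+1}] + b·z_t with an AR(1) exogenous process z_t. This toy model and its parameter values are illustrative only; they are not the paper's identification results:

```python
# Toy forward and inverse mappings for the model (values are made up):
#   structural:  x_t = a*E_t[x_{t+1}] + b*z_t,   z_t = rho*z_{t-1} + eps_t
#   RES:         x_t = c*z_t  with  c = b / (1 - a*rho)
#   OVAR:        x_t = c*rho*z_{t-1} + c*eps_t,  z_t = rho*z_{t-1} + eps_t

def res_coefficient(a, b, rho):
    """Solve the structural equation for the rational-expectations solution.
    Guess x_t = c*z_t; then E_t[x_{t+1}] = c*rho*z_t, and matching
    coefficients gives c = b / (1 - a*rho), provided |a*rho| < 1."""
    assert abs(a * rho) < 1, "no unique stationary solution"
    return b / (1 - a * rho)

def ovar_coefficients(a, b, rho):
    """Forward mapping: deep parameters (a, b, rho) to the reduced-form
    OVAR lag coefficients of (x_t, z_t) on z_{t-1}."""
    c = res_coefficient(a, b, rho)
    return c * rho, rho

def inverse_res(p_x, rho):
    """Toy analogue of the 'inverse' step: recover the RES coefficient c
    from reduced-form OVAR coefficients, since p_x = c*rho."""
    return p_x / rho

a, b, rho = 0.5, 1.0, 0.8
c = res_coefficient(a, b, rho)             # 1 / (1 - 0.4)
p_x, p_z = ovar_coefficients(a, b, rho)
print(round(c, 4), round(p_x, 4))          # 1.6667 1.3333
print(abs(inverse_res(p_x, p_z) - c) < 1e-12)  # True: round trip recovers c
```

In this toy case, the RES coefficient is exactly recoverable from the reduced-form coefficients, which is the reconciling move the abstract attributes to identification 2; the paper's actual identification conditions for multivariate systems are, of course, more involved.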