Private equity fund managers are typically required to invest their own money alongside the fund. We examine how this coinvestment affects the acquisition strategy of leveraged buyout funds. In a simple model, where the investment and capital structure decisions are made simultaneously, we show that a higher coinvestment induces managers to choose less risky firms and use more leverage. We test these predictions in a unique sample of private equity investments in Norway, where the fund manager's taxable wealth is publicly available. Consistent with the model, portfolio company risk decreases and leverage ratios increase with the coinvestment fraction of the manager's wealth. Moreover, funds requiring a relatively high coinvestment tend to spread their capital over a larger number of portfolio firms, consistent with a more conservative investment policy.
We identify strong cross-border institutions as a driver for the globalization of innovation. Using 67 million patents from over 100 patent offices, we introduce novel measures of innovation diffusion and collaboration. Exploiting staggered bilateral investment treaties as shocks to cross-border property rights and contract enforcement, we show that signatory countries increase technology adoption and sourcing from each other. They also increase R&D collaborations. These interactions result in technological convergence. The effects are particularly strong for process innovation, and for countries that are technological laggards or have weak domestic institutions. Increased inter-firm rather than intra-firm foreign investment is the key channel.
This paper documents that resource reallocation across firms is an important mechanism through which creditor rights affect real outcomes. I exploit the staggered adoption of an international convention that provides globally consistent strong creditor protection for aircraft finance. After this reform, country-level productivity in the aviation sector increases by 12%, driven mostly by across-firm reallocation. Productive airlines borrow more, expand, and adopt new technology at the expense of unproductive ones. Such reallocation is facilitated by (i) easier and quicker asset redeployment; and (ii) the influx of foreign financiers offering innovative financial products to improve credit allocative efficiency. I further document an increase in competition and an improvement in the breadth and the quality of products available to consumers.
This paper utilizes a comprehensive worker-firm panel for the Netherlands to quantify the impact of ICT capital-skill complementarity on the finance wage premium after the Global Financial Crisis. We apply additive worker and firm fixed-effect models to account for unobserved worker and firm heterogeneity and show that firm fixed effects correct for a downward bias in the estimated finance wage premium. Our results indicate a sizable finance wage premium for both fixed and full hourly wages. The complementarity between ICT capital spending and the share of high-skill workers at the firm level reduces the full-wage premium considerably and the fixed-wage premium almost entirely.
We propose a shrinkage and selection methodology specifically designed for network inference using high-dimensional data, through a regularised linear regression model with a Spike-and-Slab prior on the parameters. The approach extends to the case where the error terms are heteroscedastic by adding an ARCH-type equation through an approximate Expectation-Maximisation algorithm. The proposed model accounts for two sets of covariates. The first set contains predetermined variables which are not penalised in the model (i.e., the autoregressive component and common factors), while the second set contains all the (lagged) financial institutions in the system, each included with a given probability. The financial linkages are expressed in terms of inclusion probabilities, resulting in a weighted directed network whose adjacency matrix is built “row by row”. In the empirical application, we estimate the network over time using a rolling-window approach on 1,248 world financial firms (banks, insurers, brokers and other financial services), both active and dead, from 29 December 2000 to 6 October 2017 at a weekly frequency. Findings show that over time the shape of the out-degree distribution exhibits the typical behavior of financial stress indicators and represents a significant predictor of market returns at the first lag (one week) and the fourth lag (one month).
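The row-by-row network construction and the out-degree measure described in this abstract can be sketched as follows. This is a toy illustration only: the inclusion-probability matrix is hard-coded here, whereas in the paper it comes from the Spike-and-Slab posterior of each institution's regression.

```python
# Toy sketch: build a weighted directed network "row by row" from
# inclusion probabilities and compute each node's weighted out-degree.
# The probability matrix below is illustrative, not from the paper.

def build_network(incl_probs, threshold=0.5):
    """Keep edge i -> j only if its inclusion probability clears the
    threshold; the edge weight is that probability (no self-loops)."""
    return [[p if (i != j and p >= threshold) else 0.0
             for j, p in enumerate(row)]
            for i, row in enumerate(incl_probs)]

def out_degrees(adj):
    """Weighted out-degree: row sums of the adjacency matrix."""
    return [sum(row) for row in adj]

# Rows = source institution, columns = target institution.
probs = [
    [0.0,   0.875, 0.25],
    [0.5,   0.0,   0.75],
    [0.125, 0.375, 0.0],
]
adj = build_network(probs)
print(out_degrees(adj))  # [0.875, 1.25, 0.0]
```

The third institution transmits no shocks under this threshold; in the paper, the cross-sectional shape of exactly this out-degree distribution is tracked over rolling windows as a stress indicator.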
The disposition effect is implicitly assumed to be constant over time. However, drivers of the disposition effect (preferences and beliefs) are rather countercyclical. We use individual investor trading data covering several boom and bust periods (2001-2015). We show that the disposition effect is countercyclical, i.e., it is higher in bust than in boom periods. Our findings are driven by individuals being 25% more likely to realize gains in bust than in boom periods. These changes in investors’ selling behavior can be linked to changes in investors’ risk aversion and in their beliefs across financial market cycles.
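For reference, the standard measure behind statements like "higher disposition effect in busts" is the Odean-style proportion of gains realized minus the proportion of losses realized. A minimal sketch, with entirely made-up counts:

```python
# Minimal sketch of the disposition-effect measure: the proportion of
# gains realized (PGR) minus the proportion of losses realized (PLR).
# All counts below are hypothetical, purely for illustration.

def disposition_effect(realized_gains, paper_gains,
                       realized_losses, paper_losses):
    pgr = realized_gains / (realized_gains + paper_gains)
    plr = realized_losses / (realized_losses + paper_losses)
    return pgr - plr

# Hypothetical bust-period counts: gains are realized more readily.
de_bust = disposition_effect(50, 50, 20, 80)  # PGR 0.50, PLR 0.20
# Hypothetical boom-period counts.
de_boom = disposition_effect(40, 60, 25, 75)  # PGR 0.40, PLR 0.25
print(de_bust > de_boom)  # True: a countercyclical pattern
```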
Using German and US brokerage data we find that investors are more likely to sell speculative stocks trading at a gain. Investors’ gain realizations are monotonically increasing in a stock’s speculativeness. This translates into a high disposition effect for speculative and a much lower disposition effect for non-speculative stocks. Our findings hold across asset classes (stocks, passive, and active funds) and explain cross-sectional differences in investor selling behavior which previous literature attributed primarily to investor demographics. Our results are robust to rank or attention effects and can be linked to realization utility and rolling mental account.
A tale of one exchange and two order books: effects of fragmentation in the absence of competition
(2018)
Exchanges nowadays routinely operate multiple, almost identically structured limit order markets for the same security. We study the effects of such fragmentation on market performance using a dynamic model where agents trade strategically across two identically-organized limit order books. We show that fragmented markets, in equilibrium, offer higher welfare to intermediaries at the expense of investors with intrinsic trading motives, and lower liquidity than consolidated markets. Consistent with our theory, we document improvements in liquidity and lower profits for liquidity providers when Euronext, in 2009, consolidated its order flow for stocks traded across two country-specific and identically-organized order books into a single order book. Our results suggest that competition in market design, not fragmentation, drives previously documented improvements in market quality when new trading venues emerge; in the absence of such competition, market fragmentation is harmful.
The Russian war of aggression against Ukraine since 24 February 2022 has intensified the discussion of Europe’s reliance on energy imports from Russia. A ban on Russian imports of oil, natural gas and coal has already been imposed by the United States, while the United Kingdom plans to cease imports of oil and coal from Russia by the end of 2022. The German Federal Government is currently opposing an energy embargo against Russia. However, the Federal Ministry for Economic Affairs and Climate Action is working on a strategy to reduce energy imports from Russia. In this paper, the authors give an overview of the German and European reliance on energy imports from Russia with a focus on gas imports and discuss price effects, alternative suppliers of natural gas, and the potential for saving and replacing natural gas. They also provide an overview of estimates of the consequences on the economic outlook if the conflict intensifies.
Using granular supervisory data from Germany, we investigate the impact of unconventional monetary policies via central banks’ purchase of corporate bonds. While this policy results in a loosening of credit market conditions as intended by policy makers, we document two unintended side effects. First, banks that are more exposed to borrowers benefiting from the bond purchases now lend more to high-risk firms with no access to bond markets. Since more loan write-offs arise from these firms and banks are not compensated for this risk by higher interest rates, we document a drop in bank profitability. Second, the policy impacts the allocation of loans among industries. Affected banks reallocate loans from investment grade firms active on bond markets to mainly real estate firms without investment grade rating. Overall, our findings suggest that central banks’ quantitative easing via the corporate bond markets has the potential to contribute to both banking sector instability and real estate bubbles.
Climate risk has become a major concern for financial institutions and financial markets. Yet, climate policy is still in its infancy and contributes to increased uncertainty. For example, the lack of a sufficiently high carbon price and the variety of definitions for green activities lower the value of existing and new capital, and complicate risk management. This column argues that it would be welfare-enhancing if policy changes were to follow a predictable longer-term path. Accordingly, the authors suggest a role for financial regulation in the transition.
This paper investigates systemic risk in the insurance industry. We first analyze the systemic contribution of the insurance industry vis-à-vis other industries by applying three measures, namely the linear Granger causality test, conditional value at risk and marginal expected shortfall, to three groups, namely banks, insurers and non-financial companies listed in Europe over the last 14 years. We then analyze the determinants of the systemic risk contribution within the insurance industry by using balance-sheet-level data in a broader sample. Our evidence suggests that i) the insurance industry shows a persistent systemic relevance over time and plays a subordinate role in causing systemic risk compared to banks, and that ii) within the industry, those insurers which engage more in non-insurance-related activities tend to pose more systemic risk. In addition, we are among the first to provide empirical evidence on the role of diversification as a potential determinant of systemic risk in the insurance industry. Finally, we confirm that size is also a significant driver of systemic risk, whereas price-to-book ratio and leverage display counterintuitive results.
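One of the three measures named above, marginal expected shortfall, has a simple mechanical core: average an institution's return over the days when the market return falls into its worst q-fraction. A hedged sketch with made-up return series (the paper applies this to listed European banks, insurers and non-financials):

```python
# Hedged sketch of marginal expected shortfall (MES). The two return
# series below are illustrative stand-ins, not data from the paper.

def mes(firm_returns, market_returns, q=0.05):
    n = len(market_returns)
    k = max(1, int(n * q))  # number of worst market days to average over
    worst = sorted(range(n), key=lambda i: market_returns[i])[:k]
    return sum(firm_returns[i] for i in worst) / k

market = [0.01, -0.04, 0.02, -0.01, 0.00, 0.03, -0.06, 0.01, 0.02, -0.02]
firm   = [0.00, -0.03, 0.01, -0.02, 0.01, 0.02, -0.05, 0.00, 0.01, -0.01]
# With q = 0.2 the two worst market days are indices 6 and 1.
print(mes(firm, market, q=0.2))  # about -0.04
```

A more negative MES indicates that the institution loses more precisely when the market is in distress, i.e., a larger systemic contribution.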
During the last IAIS Global Seminar in June 2017, the IAIS disclosed the agenda for a gradual shift in the systemic risk assessment methodology from the current Entity-Based Approach (EBA) to a new Activity-Based Approach (ABA). The EBA, which was developed in the aftermath of the 2008/2009 financial crisis, defines a list of Global Systemically Important Insurers (G-SIIs) based on a pre-defined set of criteria related to the size of the institution. These G-SIIs are subject to additional regulatory requirements since their distress or disorderly failure would potentially cause significant disruption to the global financial system and economic activity. Even if size is still a necessary element of a systemic risk assessment, the strong emphasis put on the too-big-to-fail approach in insurance, i.e. the EBA, might be partially missing the underlying nature of systemic risk in insurance. Not only certain activities, including insurance activities such as life or non-life lines of business, but also common exposures and certain managerial practices, such as leverage or funding structures, tend to contribute to the systemic risk of insurers but are not covered by the current EBA (Berdin and Sottocornola, 2015). Therefore, we very much welcome the general development of the systemic risk assessment methodology, even if several important questions still need to be answered.
A stochastic forward-looking model to assess the profitability and solvency of European insurers
(2016)
In this paper, we develop an analytical framework for conducting forward-looking assessments of the profitability and solvency of the main euro area insurance sectors. We model the balance sheet of an insurance company encompassing both life and non-life business and calibrate it using country-level data to make it representative of the major euro area insurance markets. Then, we project this representative balance sheet forward under stochastic capital markets, stochastic mortality developments and stochastic claims. The model highlights the potential threats to insurers' solvency and profitability stemming from a sustained period of low interest rates, particularly in those markets which are largely exposed to reinvestment risks due to the relatively high guarantees and generous profit participation schemes. The model also shows how the resilience of insurers to adverse financial developments heavily depends on the diversification of their business mix. Finally, the model identifies potential negative spillovers between life and non-life business through the redistribution of capital within groups.
Low interest rates are becoming a threat to the stability of the life insurance industry, especially in countries such as Germany, where products with relatively high guaranteed returns sold in the past still represent a prominent share of the total portfolio. This contribution aims to assess and quantify the effects of the current low interest rate phase on the balance sheet of a representative German life insurer, given the current asset allocation and the outstanding liabilities. To do so, we generate a stochastic term structure of interest rates as well as stock market returns to simulate investment returns of a stylized life insurance business portfolio in a multi-period setting. Based on empirically calibrated parameters, we can observe the evolution of the life insurers' balance sheet over time with a special focus on their solvency situation. To account for different scenarios and in order to check the robustness of our findings, we calibrate different capital market settings and different initial situations of capital endowment. Our results suggest that a prolonged period of low interest rates would markedly affect the solvency situation of life insurers, leading to relatively high cumulative probability of default for less capitalized companies.
This paper examines how networks of professional contacts contribute to the development of the careers of executives of North American and European companies. We build a dynamic model of career progression in which career moves may both depend upon existing networks and contribute to the development of future networks. We test the theory on an original dataset of nearly 73,000 executives in over 10,000 firms. In principle, professional networks could be relevant both because they are rewarded by the employer and because they facilitate job mobility. Our econometric analysis suggests that, although there is a substantial positive correlation between network size and executive compensation, with an elasticity of around 20%, almost all of this is due to unobserved individual characteristics. The true causal impact of networks on compensation is closer to an elasticity of 1 or 2% on average, all of it due to an enhanced probability of moving to a higher-paid job. And there appear to be strongly diminishing returns to network size.
Coming early to the party
(2017)
We examine the strategic behavior of High Frequency Traders (HFTs) during the pre-opening phase and the opening auction of the NYSE-Euronext Paris exchange. HFTs actively participate, and profitably extract information from the order flow. They also post "flash crash" orders to gain time priority. They make profits on their last-second orders; however, so do others, suggesting that there is no speed advantage. HFTs lead price discovery, and neither harm nor improve liquidity. They "come early to the party" and enjoy it (make profits); however, they also help others enjoy the party (improve market quality) and do not have privileges (their speed advantage is not crucial).
Do competition and incentives offered to designated market makers (DMMs) improve market liquidity? Using data from NYSE Euronext Paris, we show that an exogenous increase in competition among DMMs leads to a significant decrease in quoted and effective spreads, mainly through a reduction in adverse selection costs. In contrast, changes in incentives, through small changes in rebates and requirements for DMMs, do not have any tangible effect on market liquidity. Our results are of relevance for designing optimal contracts between exchanges and DMMs and for regulatory market oversight.
We study whether the presence of low-latency traders (including high-frequency traders (HFTs)) in the pre-opening period contributes to market quality, defined by price discovery and liquidity provision, in the opening auction. We use a unique dataset from the Tokyo Stock Exchange (TSE) based on server-IDs and find that HFTs dynamically alter their presence in different stocks and on different days. In spite of the lack of immediate execution, about one quarter of HFTs participate in the pre-opening period, and contribute significantly to market quality in the pre-opening period, the opening auction that ensues and the continuous trading period. Their contribution is largely different from that of the other HFTs during the continuous period.
This paper analyses whether the post-crisis regulatory reforms developed by global standard-setting bodies have created appropriate incentives for different types of market participants to centrally clear Over-The-Counter (OTC) derivative contracts. Beyond documenting the observed facts, we analyze four main drivers of the decision to clear: 1) the liquidity and riskiness of the reference entity; 2) the credit risk of the counterparty; 3) the clearing member’s portfolio net exposure with the Central Counterparty Clearing House (CCP); and 4) post-trade transparency. We use confidential European trade repository data on single-name Sovereign Credit Default Swap (CDS) transactions and show that, of all the transactions reported in 2016 on Italian, German and French Sovereign CDS, 48% were centrally cleared, 42% were not cleared despite being eligible for central clearing, and 9% of the contracts were not clearable because they did not satisfy certain CCP clearing criteria. However, there is a large difference between CCP clearing members, which clear about 53% of their transactions, and non-clearing members, even those subject to counterparty risk capital requirements, which almost never clear their trades. Moreover, we find that diverse factors explain clearing members’ decision to clear different CDS contracts: for Italian CDS, counterparty credit risk exposures matter most for the decision to clear, while for French and German CDS, margin costs are the most important factor. Clearing members use clearing to reduce their exposures to the CCP and largely clear contracts when at least one of the traders has a high counterparty credit risk.
We show that High Frequency Traders (HFTs) are not beneficial to the stock market during flash crashes. They actually consume liquidity when it is most needed, even when they are rewarded by the exchange to provide immediacy. The behavior of HFTs exacerbates the transient price impact, unrelated to fundamentals, typically observed during a flash crash. Slow traders provide liquidity instead of HFTs, taking advantage of the discounted price. We thus uncover a trade-off between the greater liquidity and efficiency provided by HFTs in normal times, and the disruptive consequences of their trading activity during distressed times.
This paper gives an overview of the German banking system and the current challenges it is facing. It starts with an overview of the so-called ‘Three-Pillar Banking System’ and a detailed description of the current structure of the banking system in Germany. A brief comparison of the banking system in Germany with those of other European countries points out its uniqueness. The consequences of the financial crisis of 2007/2008 and further challenges for the German banking system are discussed, as well as the ongoing debate around the question whether the strong government involvement should be sustained.
In this paper we investigate the implications of providing loan officers with a compensation structure that rewards loan volume and penalizes poor performance versus a fixed wage unrelated to performance. We study detailed transaction information for more than 45,000 loans issued by 240 loan officers of a large commercial bank in Europe. We examine the three main activities that loan officers perform: monitoring, originating, and screening. We find that when the performance of their portfolio deteriorates, loan officers increase their effort to monitor existing borrowers, reduce loan origination, and approve a higher fraction of loan applications. These loans, however, are of above-average quality. Consistent with the theoretical literature on multitasking in incomplete contracts, we show that loan officers neglect activities that are not directly rewarded under the contract, but are in the interest of the bank. In addition, while the response by loan officers constitutes a rational response to a time allocation problem, their reaction to incentives appears myopic in other dimensions.
In this paper, we investigate how the introduction of complex, model-based capital regulation affected credit risk of financial institutions. Model-based regulation was meant to enhance the stability of the financial sector by making capital charges more sensitive to risk. Exploiting the staggered introduction of the model-based approach in Germany and the richness of our loan-level data set, we show that (1) internal risk estimates employed for regulatory purposes systematically underpredict actual default rates by 0.5 to 1 percentage points; (2) both default rates and loss rates are higher for loans that were originated under the model-based approach, while corresponding risk-weights are significantly lower; and (3) interest rates are higher for loans originated under the model-based approach, suggesting that banks were aware of the higher risk associated with these loans and priced them accordingly. Further, we document that large banks benefited from the reform as they experienced a reduction in capital charges and consequently expanded their lending at the expense of smaller banks that did not introduce the model-based approach. Counter to the stated objectives, the introduction of complex regulation adversely affected the credit risk of financial institutions. Overall, our results highlight the pitfalls of complex regulation and suggest that simpler rules may increase the efficacy of financial regulation.
Using loan-level data from Germany, we investigate how the introduction of model-based capital regulation affected banks’ ability to absorb shocks. The objective of this regulation was to enhance financial stability by making capital requirements responsive to asset risk. Our evidence suggests that banks ‘optimized’ model-based regulation to lower their capital requirements. Banks systematically underreported risk, with underreporting being more pronounced for banks with higher gains from it. Moreover, large banks benefited from the regulation at the expense of smaller banks. Overall, our results suggest that sophisticated rules may have undesired effects if strategic misbehavior is difficult to detect.
In this paper, we examine how the institutional design affects the outcome of bank bailout decisions. In the German savings bank sector, distress events can be resolved by local politicians or a state-level association. We show that decisions by local politicians with close links to the bank are distorted by personal considerations: While distress events per se are not related to the electoral cycle, the probability of local politicians injecting taxpayers’ money into a bank in distress is 30 percent lower in the year directly preceding an election. Using the electoral cycle as an instrument, we show that banks that are bailed out by local politicians experience less restructuring and perform considerably worse than banks that are supported by the savings bank association. Our findings illustrate that larger distance between banks and decision makers reduces distortions in the decision making process, which has implications for the design of bank regulation and supervision.
We investigate the default probability, recovery rates and loss distribution of a portfolio of securitised loans granted to Italian small and medium enterprises (SMEs). To this end, we use loan level data information provided by the European DataWarehouse platform and employ a logistic regression to estimate the company default probability. We include loan-level default probabilities and recovery rates to estimate the loss distribution of the underlying assets. We find that bank securitised loans are less risky, compared to the average bank lending to small and medium enterprises.
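The loss-distribution step described in this abstract can be sketched as a simple Monte Carlo: each loan defaults with its estimated probability and, on default, loses exposure times one minus the recovery rate. The default probabilities and recovery rates below are hard-coded stand-ins; in the paper they come from a logistic regression on loan-level data.

```python
import random

# Toy sketch of a portfolio loss distribution for securitised loans.
# (default probability, recovery rate, exposure) triples are made up.

def simulate_losses(loans, n_sims=10_000, seed=42):
    rng = random.Random(seed)
    return [sum(exposure * (1 - recovery)
                for pd_, recovery, exposure in loans
                if rng.random() < pd_)
            for _ in range(n_sims)]

loans = [(0.02, 0.6, 100.0), (0.05, 0.4, 50.0), (0.01, 0.8, 200.0)]
# Analytic expected loss as a sanity check on the simulation.
analytic_el = sum(pd_ * (1 - rr) * exp_ for pd_, rr, exp_ in loans)
losses = simulate_losses(loans)
print(round(analytic_el, 2))  # 2.7
```

Tail quantiles of `losses` (e.g. the 99th percentile) then give the kind of portfolio risk figures a comparison of securitised versus average SME lending would rest on.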
The great financial crisis and the euro area crisis led to a substantial reform of financial safety nets across Europe and – critically – to the introduction of supranational elements. Specifically, a supranational supervisor was established for the euro area, with discrete arrangements for supervisory competences and tasks depending on the systemic relevance of supervised credit institutions. A resolution mechanism was created to allow the frictionless resolution of large financial institutions. This resolution mechanism has been now complemented with a funding instrument.
While much more progress has been achieved than most observers could imagine 12 years ago, the banking union remains unfinished with important gaps and deficiencies. The experience over the past years, especially in the area of crisis management and resolution, has provided impetus for reform discussions, as reflected most lately in the Eurogroup statement of 16 June 2022.
This Policy Insight looks primarily at the current and the desired state of the banking union project. The key underlying question, and the focus here, is the level of ambition and how it is matched with effective legal and regulatory tools. Specifically, two questions will structure the discussions:
What would be a reasonable definition and rationale for a ‘complete’ banking union? And what legal reforms would be required to achieve it?
Banking union is a case of a new remit of EU-level policy that so far has been established on the basis of long pre-existing treaty stipulations, namely, Article 127(6) TFEU (for banking supervision) and Article 114 TFEU (for crisis management and deposit insurance). Could its completion be similarly carried out through secondary law? Or would a more comprehensive overhaul of the legal architecture be required to ensure legal certainty and legitimacy?
Mindfully Resisting the Bandwagon – IT Implementation and Its Consequences in the Financial Crisis
(2013)
Although the “financial meltdown” between 2007 and 2009 can be substantially attributed to herding behaviour in the subprime market for credit default swaps, a “mindless” IT implementation at participating financial services providers played a major role in facilitating the underlying bandwagon. The problem was a discrepancy between two core complementary capabilities: (1) the (economic-rationalistic) ability to execute financial transactions (to comply with the herd) in milliseconds and (2) the required contextualized mindfulness capabilities to comprehend the implications of the transactions being executed and the associated IT innovation decisions that enabled these transactions.
The global financial crisis (as well as the European sovereign debt crisis) has led to a substantial redesign of rules and institutions, aiming in particular at underwriting financial stability. At the same time, the crisis generated renewed interest in properly appraising systemic financial vulnerabilities. Employing the most recent data and applying a variety of only recently developed methods, we provide an assessment of indicators of financial stability within the Euro Area. Taking a “functional” approach, we analyze comprehensively all financial intermediary activities, regardless of the institutional roof, banks or non-bank (shadow) banks, under which they are conducted. Our results reveal a declining role of banks (and a commensurate increase in non-bank banking). These structural shifts (between institutions) are coincident with regulatory and supervisory reforms (implemented or firmly anticipated) as well as a non-standard monetary policy environment. They might, unintentionally, imply a rise in systemic risk. Overall, however, our analyses suggest that financial imbalances have been reduced over the course of recent years. Hence, the financial intermediation sector has become more resilient. Nonetheless, existing (equity) buffers would probably not suffice to face substantial volatility shocks.
Non-bank (balance sheet) based financial intermediation has become considerably more important over the last couple of decades. For the U.S., this trend has been discussed ever since the mid-1990s. As a consequence, traditional monetary transmission mechanisms, mainly operating through bank balance sheets, have apparently become less relevant. This applies in particular to the bank lending channel. Concurrently, recent theoretical and empirical work has uncovered a "risk-taking channel" of monetary policy. This mechanism is not confined to traditional banks but has been found to operate across the spectrum of financial intermediaries and intermediation devices, including securitization and collateralized lending/borrowing. In addition, recent empirical evidence suggests that the increasing importance of shadow-banking activities might have given rise to a so-called "waterbed effect": a mediating mechanism that dampens or counteracts the reactions typically expected from monetary policy impulses. Employing flow-of-funds data, we document for the Euro Area, too, that a trend towards non-bank (not necessarily more 'market'-based) intermediation has occurred. This is, however, a fairly recent development, substantially weaker than in the U.S. Nonetheless, analyzing the response of Euro Area bank and non-bank financial intermediaries to monetary policy impulses, we find some notable behavioral differences between mainly deposit-funded and more 'market'-based financial intermediaries. We also detect, inter alia, the existence of a (still) fairly weak, but potentially policy-relevant, "waterbed" effect.
Euro area shadow banking activities in a low-interest-rate environment: a flow-of-funds perspective
(2016)
Very low policy rates as well as the substantial redesign of rules and supervisory institutions have changed background conditions for the Euro Area’s financial intermediary sector substantially. Both policy initiatives have been targeted at improving societal welfare. And their potential side effects (or costs) have been discussed intensively, in academic as well as policy circles. Very low policy rates (and correspondingly low market rates) are likely to whet investors’ risk taking incentives. Concurrently, the tightened regulatory framework, in particular for banks, increases the comparative attractiveness of the less regulated, so-called shadow banking sector. Employing flow-of-funds data for the Euro Area’s non-bank banking sector we take stock of recent developments in this part of the financial sector. In addition, we examine to which extent low interest rates have had an impact on investment behavior. Our results reveal a declining role of banks (and, simultaneously, an increase in non-bank banking). Overall intermediation activity, hence, has remained roughly at the same level. Moreover, our findings also suggest that non-bank banks have tended to take positions in riskier assets (particularly in equities). In line with this observation, balance-sheet based risk measures indicate a rise in sector-specific risks in the non-bank banking sector (when narrowly defined).
High-frequency changes in interest rates around FOMC announcements are an important tool for identifying the effects of monetary policy on asset prices and the macroeconomy. However, some recent studies have questioned both the exogeneity and the relevance of these monetary policy surprises as instruments, especially for estimating the macroeconomic effects of monetary policy shocks. For example, monetary policy surprises are correlated with macroeconomic and financial data that are publicly available prior to the FOMC announcement. The authors address these concerns in two ways: First, they expand the set of monetary policy announcements to include speeches by the Fed Chair, which essentially doubles the number and importance of announcements in their dataset. Second, they explain the predictability of the monetary policy surprises in terms of the "Fed response to news" channel of Bauer and Swanson (2021) and account for it by orthogonalizing the surprises with respect to macroeconomic and financial data. Their subsequent reassessment of the effects of monetary policy yields two key results: First, estimates of the high-frequency effects on financial markets are largely unchanged. Second, estimates of the macroeconomic effects of monetary policy are substantially larger and more significant than what most previous empirical studies have found.
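The orthogonalization step mentioned in this abstract amounts to residualizing the raw surprises on pre-announcement observables. A minimal sketch on entirely synthetic data; the variable names, predictor set, and coefficients are illustrative assumptions, not the authors' code or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical pre-announcement predictors (e.g. employment growth, stock
# returns, yield-curve slope) and a raw surprise correlated with them
predictors = rng.normal(size=(n, 3))
raw_surprise = predictors @ np.array([0.3, -0.2, 0.1]) + rng.normal(scale=0.5, size=n)

# OLS of the surprise on the predictors; the residuals are the
# orthogonalized surprises, uncorrelated with the predictors by construction
X = np.column_stack([np.ones(n), predictors])
beta, *_ = np.linalg.lstsq(X, raw_surprise, rcond=None)
orthogonal_surprise = raw_surprise - X @ beta

print(f"max |predictor-residual covariance|: {np.abs(predictors.T @ orthogonal_surprise).max():.2e}")
```

The residual series can then replace the raw surprises as an instrument, since any component predictable from public pre-announcement data has been projected out.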
Recent regulatory measures such as the European Union's AI Act require artificial intelligence (AI) systems to be explainable. As such, understanding how explainability impacts human-AI interaction, and pinpointing the specific circumstances and groups affected, is imperative. In this study, we devise a formal framework and conduct an empirical investigation involving real estate agents to explore the complex interplay between explainability of and delegation to AI systems. On an aggregate level, our findings indicate that real estate agents display a higher propensity to delegate apartment evaluations to an AI system when its workings are explainable, thereby surrendering control to the machine. However, at an individual level, we detect considerable heterogeneity. Agents possessing extensive domain knowledge are generally more inclined to delegate decisions to AI and minimize their effort when provided with explanations. Conversely, agents with limited domain knowledge only exhibit this behavior when explanations correspond with their preconceived notions regarding the relationship between apartment features and listing prices. Our results illustrate that the introduction of explainability in AI systems may transfer decision-making control from humans to AI under the veil of transparency, which has notable implications for policy makers and practitioners that we discuss.
This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users' weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of confirmation bias. Trust in the prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely superficially influence decisions but really changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, the current regulatory efforts that aim at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information in XAI systems. Overall, our findings put assertions that XAI is the silver bullet solving all of AI systems' (black box) problems into perspective.
Using experimental data from a comprehensive field study, we explore the causal effects of algorithmic discrimination on economic efficiency and social welfare. We harness economic, game-theoretic, and state-of-the-art machine learning concepts that allow us to overcome the central challenge of missing counterfactuals, which generally impedes assessing the economic downstream consequences of algorithmic discrimination. This way, we are able to precisely quantify downstream efficiency and welfare ramifications, which provides us with a unique opportunity to assess whether the introduction of an AI system is actually desirable. Our results highlight that AI systems' capability to enhance welfare critically depends on the degree of inherent algorithmic bias. While an unbiased system in our setting outperforms humans and creates substantial welfare gains, the positive impact steadily decreases and ultimately reverses the more biased an AI system becomes. We show that this relation is particularly concerning in selective-labels environments, i.e., settings where outcomes are only observed if decision-makers take a particular action so that the data are selectively labeled, because commonly used technical performance metrics such as precision are prone to be deceptive. Finally, our results show that continued learning, by creating feedback loops, can remedy algorithmic discrimination and its associated negative effects over time.
Advances in machine learning (ML) have led organizations to increasingly implement predictive decision aids intended to improve employees' decision-making performance. While such systems improve organizational efficiency in many contexts, they can be a double-edged sword when there is a danger of system discontinuance. Following cognitive theories, the provision of ML-based predictions can adversely affect the development of decision-making skills, a deficit that comes to light when people lose access to the system. The purpose of this study is to put this assertion to the test. Using a novel experiment specifically tailored to deal with organizational obstacles and endogeneity concerns, we show that the initial provision of ML decision aids can latently prevent the development of decision-making skills, which later becomes apparent when the system is discontinued. We also find that the degree to which individuals 'blindly' trust observed predictions determines the ultimate performance drop in the post-discontinuance phase. Our results suggest that making it clear to people that ML decision aids are imperfect can have benefits, especially if there is a reasonable danger of (temporary) system discontinuances.
In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence stands central. Using behavioral economic paradigms and structural models, we investigate GPT's cooperativeness in human interactions and assess its rational, goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT's behavior is not random; it displays a level of goal-oriented rationality surpassing human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a drive for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal-orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
Incentives, self-selection, and coordination of motivated agents for the production of social goods
(2021)
We study, theoretically and empirically, the effects of incentives on the self-selection and coordination of motivated agents to produce a social good. Agents join teams where they allocate effort to either generate individual monetary rewards (selfish effort) or contribute to the production of a social good with positive effort complementarities (social effort). Agents differ in their motivation to exert social effort. Our model predicts that lowering incentives for selfish effort in one team increases social good production by selectively attracting and coordinating motivated agents. We test this prediction in a lab experiment allowing us to cleanly separate the selection effect from other effects of low incentives. Results show that social good production more than doubles in the low-incentive team, but only if self-selection is possible. Our analysis highlights the important role of incentives in the matching of motivated agents engaged in social good production.
The use of artificial intelligence (AI) technologies opens up many opportunities but also entails many risks, particularly in the financial industry. This whitepaper provides an overview of the current state of the application and regulation of AI technologies in the financial industry and discusses the opportunities and risks of AI. AI has numerous fields of application in the financial industry. These include chatbots, intelligent assistants for customers, automated high-frequency trading, automated fraud detection, compliance monitoring, facial recognition software for customer identification, and much more. Financial supervisory authorities, too, are increasingly deploying AI applications to examine large and complex volumes of data (big data) for patterns in an automated and scalable manner and to fulfill their supervisory duties.
Regulating AI in the financial industry is a balancing act. On the one hand, there is a need to ensure flexibility so as not to stifle innovation or fall behind in international competition. In this context, strict requirements can act as a barrier to the successful (further) development of AI applications in the financial industry. On the other hand, personal rights must be protected and decision-making processes must remain traceable. The lack of explainability and interpretability of AI models arises primarily from the opacity of the majority of today's AI applications, in which the nature of the inputs and outputs is observable and understandable, but the exact processing steps in between are not (the black-box principle).
This tension is also evident in the current regulatory approach of various authorities. On the one hand, the positive aspects of AI are emphasized, such as gains in efficiency and effectiveness as well as improvements in profitability and quality (Bundesregierung, 2019), or new methods of risk analysis in financial market regulation (BaFin, 2018a). On the other hand, it is pointed out that humans must always remain responsible for decisions made by AI (Art. 22 EU GDPR) and that the democratic framework of the rule of law must be upheld (FinTechRat, 2017).
Looking ahead, we see a need to develop international regulations further in a principles-based, harmonized, and technology-neutral manner, without slowing the development of new AI-based business models. In the global competition, Europe should take a pioneering role in regulating the use of AI and thereby export its democratic values of digital freedom, self-determination, and the right to information worldwide. Funding programs should place a stronger focus on the development of sustainable and responsible AI in banks. This includes, in particular, the (further) development of broadly applicable methods that make it possible to provide human-interpretable explanations for generated outputs and to counteract problems such as the black-box principle.
From the perspective of companies in the financial industry, cooperating with BigTech firms could make sense in order to jointly exploit the technology's potential as fully as possible. A common semantic metadata model for describing the data generated in the financial industry would also be useful. In the future, artificial intelligence systems could take data from social networks into account or negotiate smart contracts. One of the greatest challenges ahead will be recruiting suitable personnel.
How does group identity affect belief formation? To address this question, we conduct a series of online experiments with a representative sample of individuals in the US. Using the setting of the 2020 US presidential election, we find evidence of intergroup preference across three distinct components of the belief formation cycle: a biased prior belief, avoidance of outgroup information sources, and a belief-updating process that places greater (less) weight on prior (new) information. We further find that an intervention reducing the salience of information sources decreases outgroup information avoidance by 50%. In a social learning context in wave 2, we find participants place 33% more weight on ingroup than outgroup guesses. Through two waves of interventions, we identify source utility as the mechanism driving group effects in belief formation. Our analyses indicate that our observed effects are driven by groupy participants who exhibit stable and consistent intergroup preferences in both allocation decisions and belief formation across all three waves. These results suggest that policymakers could reduce the salience of group and partisan identity associated with a policy to decrease outgroup information avoidance and increase policy uptake.
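The over-weighting of prior information described in this abstract can be illustrated numerically against a Bayesian benchmark. A hypothetical sketch: the precisions, the `prior_tilt` distortion parameter, and all numbers are assumptions for illustration, not estimates from the study:

```python
# Normal-normal belief updating: the Bayesian posterior mean is a
# precision-weighted average of the prior mean and the new signal.
prior_mean, prior_prec = 0.40, 4.0   # e.g. belief about a vote share
signal, signal_prec = 0.60, 4.0      # new information from an outgroup source

def update(prior_mean, prior_prec, signal, signal_prec, prior_tilt=1.0):
    # prior_tilt > 1 overweights the prior relative to Bayes' rule,
    # mimicking under-reaction to (outgroup) information
    w_prior = prior_tilt * prior_prec
    return (w_prior * prior_mean + signal_prec * signal) / (w_prior + signal_prec)

bayes = update(prior_mean, prior_prec, signal, signal_prec)                  # 0.50
biased = update(prior_mean, prior_prec, signal, signal_prec, prior_tilt=3.0) # 0.45
# the distorted posterior stays closer to the prior than the Bayesian one
```

With equal precisions the Bayesian posterior lands midway between prior and signal; tilting the prior weight pulls the posterior back toward the prior, which is the under-weighting of new information the abstract describes.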
Using a novel experimental design, I test how exposure to information about a group's relative performance causally affects members' level of identification and thereby their propensity to harm affiliates of comparison groups. I find that being informed about either a high or a poor relative performance of the ingroup similarly fosters identification. Stronger ingroup identification creates increased hostility against the group of comparison. In cases where participants learn about a poor relative performance, there appears to be an additional direct level effect further elevating hostile discrimination. My findings shed light on a specific channel through which social media may contribute to intergroup fragmentation and polarization.
This paper aims at an improved understanding of the relationship between monetary policy and racial inequality. We investigate the distributional effects of monetary policy in a unified framework, linking monetary policy shocks both to earnings and wealth differentials between black and white households. Specifically, we show that, although a more accommodative monetary policy increases employment of black households more than white households, the overall effects are small. At the same time, an accommodative monetary policy shock exacerbates the wealth difference between black and white households, because black households own fewer of the financial assets that appreciate in value. Over multi-year time horizons, the employment effects are substantially smaller than the countervailing portfolio effects. We conclude that there is little reason to think that accommodative monetary policy plays a significant role in reducing racial inequities in the way often discussed. On the contrary, it may well accentuate inequalities for extended periods.
We examine how a firm's investment behavior affects the investment of a neighboring firm. Economic theory yields ambiguous predictions regarding the direction of firm peer effects, and consistent with earlier work, we find in OLS analyses that firms display similar investment behavior within an area. Exploiting time variation in increases of U.S. states' corporate income taxes and utilizing heterogeneity in firms' exposure to these tax increases, we identify the causal impact of local firms' investments. Using this as an instrumental variable in a 2SLS estimation, we find that an increase in local firms' investment reduces the investment of a local peer firm. This effect is more pronounced if local competition among firms is stronger, supporting theories in which firm investments are strategic substitutes due to competition.
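The 2SLS strategy this abstract describes, instrumenting peer investment with tax exposure, can be sketched on synthetic data. The data-generating process, coefficient values, and variable names below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic data: tax exposure shifts peer investment but affects own
# investment only through it; 'confound' is an unobserved local shock
tax_exposure = rng.normal(size=n)
confound = rng.normal(size=n)
peer_investment = 0.8 * tax_exposure + confound + rng.normal(size=n)
own_investment = -0.5 * peer_investment + confound + rng.normal(size=n)

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# First stage: project the endogenous regressor on the instrument
Z = np.column_stack([np.ones(n), tax_exposure])
fitted_peer = Z @ ols(Z, peer_investment)

# Second stage: regress own investment on the first-stage fitted values
X2 = np.column_stack([np.ones(n), fitted_peer])
beta_2sls = ols(X2, own_investment)[1]
# beta_2sls estimates the causal effect (-0.5 here) despite the confounder
```

A naive OLS of own on peer investment would be biased upward by the shared local shock; the two-stage procedure recovers the negative causal effect because the instrument is uncorrelated with the confounder.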
What are the aggregate and distributional consequences of the relationship between an individual's social network and financial decisions? Motivated by several well-documented facts about the influence of social connections on financial decisions, we build and calibrate a model of stock market participation with a social network that emphasizes the interplay between connectivity and network structure. Since connections to informed agents help spread information, there is a pivotal role for factors that determine sorting among agents. An increase in the average number of connections raises the average participation rate, mostly due to richer agents. A higher degree of sorting benefits richer agents by creating clusters where information spreads more efficiently. We show empirical evidence consistent with the importance of connectivity and sorting. We discuss several new avenues for future research into the aggregate impact of peer effects in finance.
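The connectivity-and-sorting mechanism described in this abstract can be caricatured with a toy diffusion simulation. This is not the paper's calibrated model; the group sizes, link probabilities, and seeding rule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400  # agents; the first half are "rich"

def diffusion(p_within, p_across, rounds=1):
    # sorting = higher link probability within a wealth group than across
    rich = np.arange(n) < n // 2
    same = rich[:, None] == rich[None, :]
    links = rng.random((n, n)) < np.where(same, p_within, p_across)
    links = np.triu(links, 1)
    links = links | links.T
    informed = rich & (rng.random(n) < 0.3)  # 30% of rich agents start informed
    for _ in range(rounds):
        # word of mouth: an agent becomes informed if any neighbor is informed
        informed = informed | (links & informed).any(axis=1)
    return informed[rich].mean(), informed[~rich].mean()

rich_mixed, poor_mixed = diffusion(0.02, 0.02)      # no sorting
rich_sorted, poor_sorted = diffusion(0.035, 0.005)  # strong sorting
# sorting widens the information (participation) gap between rich and poor
```

Because the informed seeds sit in the rich group, sorting concentrates their links inside a rich cluster: information circulates efficiently there while poorer agents, with few cross-group ties, are left out, which is the clustering effect the abstract describes.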
Peer effects can lead to better financial outcomes or help propagate financial mistakes across social networks. Using unique data on peer relationships and portfolio composition, we show considerable overlap in investment portfolios when an investor recommends their brokerage to a peer. We argue that this is strong evidence of peer effects and show that peer effects lead to better portfolio quality. Peers become more likely to invest in funds when their recommenders also invest, improving portfolio diversification compared to the average investor and various placebo counterfactuals. Our evidence suggests that social networks can provide good advice in settings where individuals are personally connected.
Liquidity derivatives
(2022)
It is well established that investors price market liquidity risk. Yet, there exists no financial claim contingent on liquidity. We propose a contract to hedge uncertainty over future transaction costs, detailing potential buyers and sellers. Introducing liquidity derivatives into the model of Brunnermeier and Pedersen (2009) improves financial stability by mitigating liquidity spirals. We simulate liquidity option prices for a panel of NYSE stocks spanning 2000 to 2020 by fitting a stochastic process to their bid-ask spreads. These contracts reduce the exposure to liquidity factors. Their prices provide a novel illiquidity measure reflecting cross-sectional commonalities. Finally, stock returns significantly spread along simulated prices.
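The simulation exercise this abstract describes, fitting a stochastic process to bid-ask spreads and pricing a claim on future transaction costs, can be sketched as follows. The AR(1) specification, the strike, and every parameter below are assumptions for illustration, not the paper's methodology:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic spread history: a mean-reverting AR(1) around a long-run level
true_phi, true_mu, sigma = 0.9, 0.02, 0.002
spread = np.empty(1000)
spread[0] = true_mu
for t in range(1, 1000):
    spread[t] = true_mu + true_phi * (spread[t - 1] - true_mu) + sigma * rng.normal()

# Fit the AR(1) by OLS: s_t = a + phi * s_{t-1} + eps
X = np.column_stack([np.ones(999), spread[:-1]])
(a, phi), *_ = np.linalg.lstsq(X, spread[1:], rcond=None)
resid_sd = np.std(spread[1:] - X @ np.array([a, phi]))

# Monte Carlo the spread 20 days ahead and price a call-style liquidity
# option paying max(spread_T - strike, 0): a hedge against rising costs
strike, horizon, n_paths = 0.025, 20, 20000
s = np.full(n_paths, spread[-1])
for _ in range(horizon):
    s = a + phi * s + resid_sd * rng.normal(size=n_paths)
price = np.mean(np.maximum(s - strike, 0.0))
print(f"simulated liquidity option price: {price:.5f}")
```

Repeating this fit-and-simulate loop stock by stock yields a panel of simulated option prices whose cross-section can then be related to returns, in the spirit of the exercise the abstract describes.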
Industry classification groups firms into finer partitions to help investments and empirical analysis. To overcome the well-documented limitations of existing industry definitions, like their stale nature and coarse categories for firms with multiple operations, we employ a clustering approach on 69 firm characteristics and allocate companies to novel economic sectors maximizing the within-group explained variation. Such sectors are dynamic yet stable, and represent a superior investment set compared to standard classification schemes for portfolio optimization and for trading strategies based on within-industry mean-reversion, which give rise to a latent risk factor significantly priced in the cross-section. We provide a new metric to quantify feature importance for clustering methods, finding that size drives differences across classical industries while book-to-market and financial liquidity variables matter for clustering-based sectors.
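The clustering idea described in this abstract can be sketched with a plain k-means (Lloyd's algorithm) on synthetic firm characteristics; the paper's actual algorithm, its 69 real characteristics, and the number of sectors may differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic standardized firm characteristics with 3 latent "sectors"
centers = np.array([[2.0, 0.0], [-2.0, 1.5], [0.0, -2.0]])
features = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])

def kmeans(X, k, init_idx, iters=50):
    centroids = X[init_idx].copy()
    for _ in range(iters):
        # assign each firm to its nearest centroid
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as its group's mean
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# initialize with one firm from each latent sector to avoid empty clusters
labels, centroids = kmeans(features, k=3, init_idx=[0, 50, 100])

# within-group explained variation: 1 - (within-group SS / total SS)
ssw = sum(((features[labels == j] - centroids[j]) ** 2).sum() for j in range(3))
sst = ((features - features.mean(axis=0)) ** 2).sum()
explained = 1 - ssw / sst
print(f"explained variation: {explained:.2f}")
```

The `explained` statistic is one way to operationalize "maximizing the within-group explained variation": a good partition makes firms inside a sector look alike, so the within-group sum of squares is small relative to the total.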
We investigate the relationship between anchoring and the emergence of bubbles in experimental asset markets. We show that setting a visual anchor at the fundamental value (FV) in the first period only is sufficient to eliminate or to significantly reduce bubbles in laboratory asset markets. If no FV-anchor is set, bubble-crash patterns emerge. Our results indicate that bubbles in laboratory environments are primarily sparked in the first period. If prices are initiated around the FV, they stay close to the FV over the entire trading horizon. Our insights can be related to initial public offerings and the interaction between prices set on pre-opening markets and subsequent intra-day price dynamics.