Doing safe by doing good : ESG investing and corporate social responsibility in the U.S. and Europe
(2019)
This paper examines the profitability of investing according to environmental, social and governance (ESG) criteria in the U.S. and Europe. Based on data from 2003 to 2017, we show that a portfolio long in stocks with the highest ESG scores and short in those with the lowest scores yields a significantly negative abnormal return. Interestingly, this is caused by the strong positive return of firms with the lowest ESG activity. As we find that increasing ESG scores reduce firm risk (particularly downside risk), this hints at an insurance-like character of corporate social responsibility: firms with low ESG activity need to offer a corresponding risk premium. The perception of ESG as insurance is shown to be stronger in more volatile capital markets for U.S. firms, but not for European firms. Socially responsible investment may therefore be of varying attractiveness in different market phases.
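The long-short portfolio construction described in this abstract can be sketched as follows. This is a minimal illustration on invented data: the scores, returns, and quantile cutoff are all assumptions, not the paper's sample.

```python
# Hypothetical sketch of a long-short ESG strategy: sort firms by ESG score,
# go long the top quantile and short the bottom quantile, and compute the
# spread return. All inputs below are made-up illustrative numbers.
def long_short_return(scores, returns, quantile=0.2):
    """Return of a portfolio long the top and short the bottom ESG quantile."""
    ranked = sorted(zip(scores, returns))        # ascending by ESG score
    k = max(1, int(len(ranked) * quantile))      # number of firms per leg
    low = [r for _, r in ranked[:k]]             # lowest-ESG leg (short side)
    high = [r for _, r in ranked[-k:]]           # highest-ESG leg (long side)
    return sum(high) / len(high) - sum(low) / len(low)

scores = [10, 35, 50, 70, 90]                    # hypothetical ESG scores
returns = [0.08, 0.05, 0.04, 0.03, 0.02]         # hypothetical period returns
spread = long_short_return(scores, returns)      # negative, as in the abstract
```

With the toy numbers above, the low-ESG leg outperforms the high-ESG leg, so the spread is negative, mirroring the pattern the abstract reports.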
Open-end real estate funds are of particular importance in the German bank-dominated financial system. However, recently the German open-end fund industry came under severe distress, which triggered a broad discussion of required regulatory interventions. This paper gives a detailed description of the institutional structure of these funds and of the events that led to the crisis. Furthermore, it applies recent banking theory to open-end real estate funds in order to understand why the open-end fund structure was so prevalent in Germany. Based on these theoretical insights we evaluate the various policy recommendations that have been raised.
This paper examines the effect of imperfect labor market competition on the efficiency of compensation schemes in a setting with moral hazard, private information and risk-averse agents. Two vertically differentiated firms compete for agents by offering contracts with fixed and variable payments. Vertical differentiation between firms leads to endogenous, type-dependent exit options for agents. In contrast to screening models with perfect competition, we find that the existence of equilibria does not depend on whether the least-cost separating allocation is interim efficient. Rather, vertical differentiation allows the inferior firm to offer (cross-)subsidizing fixed payments even above the interim efficient level. We further show that the efficiency of variable pay depends on the degree of competition for agents: For small degrees of competition, low-ability agents are under-incentivized and exert too little effort. For large degrees of competition, high-ability agents are over-incentivized and bear too much risk. For intermediate degrees of competition, however, contracts are second-best despite private information.
This study examines the role of actual and perceived financial sophistication (i.e., financial literacy and confidence) for individuals' wealth accumulation. Using survey data from the German SAVE initiative, we find strong gender- and education-related differences in the distribution of the two variables and their effects on wealth: While financial literacy rises with formal education, confidence increases with education for men but decreases for women, so that women become strongly underconfident with higher education while men remain overconfident. Regarding wealth accumulation, we show that financial literacy has a positive effect that is stronger for women than for men and that is increasing (decreasing) in education for women (men). Confidence, however, supports only highly-educated men's wealth. When considering different channels for wealth accumulation, we observe that financial literacy is more important for current financial market participation, whereas confidence is more strongly associated with future-oriented financial planning. Overall, we demonstrate that highly-educated men's wealth levels benefit from their overconfidence via all financial decisions considered, but highly-educated women's financial planning suffers from their underconfidence. This may impair their wealth levels in old age.
We analyze the market reaction to the sentiment of the CEO speech at the Annual General Meeting (AGM). As the AGM is typically preceded by several information disclosures, the CEO speech may be expected to contribute only marginally to investors’ decision-making. Surprisingly, however, we observe from the transcripts of 338 CEO speeches of German corporates between 2008 and 2016 that their sentiment is significantly related to abnormal stock returns and trading volumes following the AGM. Using a novel business-specific German dictionary based on Loughran and McDonald (2011), we find a negative association of the post-AGM returns with the speeches’ negativity and a positive association with the speeches’ relative positivity (i.e. positivity relative to negativity). Relative positivity moreover corresponds with a lower trading volume in a short time window surrounding the AGM. Investors hence seem to perceive the sentiment of CEO speeches at AGMs as a valuable indicator of future firm performance.
We analyze the market reaction to the sentiment of the CEO speech at the Annual General Meeting (AGM). As the AGM is typically preceded by several information disclosures, the CEO speech may be expected to contribute only marginally to investors’ decision making. Surprisingly, however, we observe from the transcripts of 338 CEO speeches of German corporates between 2008 and 2016 that their sentiment is significantly related to abnormal stock returns and trading volume around the AGM. By adapting a finance-specific German dictionary based on Loughran and McDonald (2011), we find a negative association of the post-AGM returns with the speeches’ negativity and a positive association with the speeches’ relative positivity (i.e. positivity relative to negativity). Relative positivity moreover corresponds with a lower trading volume around the AGM. Investors hence seem to perceive the sentiment of CEO speeches at AGMs as a valuable indicator of future firm performance. Our results are robust against different weighting schemes and our dictionary appears to be better suited to grasp the sentiment of German business documents compared to general dictionaries.
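The dictionary-based sentiment measures used in the CEO-speech studies above can be sketched as simple term counting: negativity as the share of negative words in a text, and relative positivity as the balance of positive over negative words. The word lists below are toy stand-ins, not the actual Loughran and McDonald (2011) dictionary or its German adaptation.

```python
# Illustrative term-counting sentiment in the Loughran-McDonald spirit.
# NEGATIVE/POSITIVE are tiny hypothetical word lists for demonstration only.
NEGATIVE = {"loss", "decline", "risk"}
POSITIVE = {"growth", "profit", "success"}

def sentiment(text):
    """Return (negativity, relative positivity) of a whitespace-tokenized text."""
    words = text.lower().split()
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    negativity = neg / len(words)                       # share of negative words
    # positivity relative to negativity, as defined in the abstracts above
    rel_positivity = (pos - neg) / (pos + neg) if pos + neg else 0.0
    return negativity, rel_positivity

negativity, rel_pos = sentiment("strong growth and profit despite one loss")
```

In practice such measures are computed over full speech transcripts with dictionaries of thousands of terms, and the abstracts note that term weighting schemes matter; the sketch only shows the basic ratio definitions.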
We examine firms’ simultaneous choice of investment, debt financing and liquidity in a large sample of US corporates between 1980 and 2014. We partition the sample according to the firms’ financial constraints and their needs to hedge against future shortfalls in operating income. In contrast to earlier work, our joint estimation approach shows that cash flows affect the corporate decisions of unconstrained firms more strongly than those of constrained firms. Investment-cash flow sensitivities are particularly intense for unconstrained firms with high hedging needs. Investment opportunities (as proxied by Q), however, play a larger role for constrained firms with the effects being strongest in case of low hedging needs. Interestingly, constrained firms with low hedging needs are found to employ more debt to finance their investment opportunities and build up significant cash holdings at the same time. Our results hence indicate overinvestment behavior for unconstrained firms but no underinvestment for constrained firms if they have low hedging needs.
In this paper, we propose a model of credit rating agencies using the global games framework to incorporate information and coordination problems. We introduce a refined utility function of a credit rating agency that, in addition to reputation maximization, also embeds aspects of competition and feedback effects of the rating on the rated firms. Apart from hinting at explanations for several hypotheses with regard to agencies' optimal rating assessments, our model suggests that the existence of rating agencies may decrease the incidence of multiple equilibria. If investors have discretionary power over the precision of their private information, we can prove that public rating announcements and private information collection are complements rather than substitutes in order to secure uniqueness of equilibrium. In this respect, rating agencies may spark off a virtuous circle that increases the efficiency of the market outcome.
This paper studies the use of performance pricing (PP) provisions in debt contracts and compares accounting-based with rating-based pricing designs. We find that rating-based provisions are used by volatile-growth borrowers and allow for stronger spread increases over the credit period. Accounting-based provisions are employed by opaque-growth borrowers and stipulate stronger spread reductions. Further, a higher spread-increase potential in rating-based contracts lowers the spread at the loan’s inception and improves the borrower’s performance later on. In contrast, a higher spread-decrease potential in accounting-based contracts lowers the initial spread and raises the borrower’s leverage afterwards. The evidence indicates that rating-based contracts are indeed employed for different reasons than accounting-based contracts: the former to signal a borrower’s quality, the latter to mitigate investment inefficiencies.
We examine how a firm's investment behavior affects the investment of a neighboring firm. Economic theory yields ambiguous predictions regarding the direction of firm peer effects, and consistent with earlier work, we find in OLS analyses that firms display similar investment behavior within an area. Exploiting time-variation in increases of U.S. states' corporate income taxes and utilizing heterogeneity in firms' exposure to increases in corporate income tax rates, we identify the causal impact of local firms' investments. Using this as an instrumental variable in a 2SLS estimation, we find that an increase in local firms' investment reduces the investment of a local peer firm. This effect is more pronounced if local competition among firms is stronger and supports theories that firm investments are strategic substitutes due to competition.
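The instrumental-variable step described in this abstract can be illustrated with a minimal two-stage least squares sketch on simulated data. Everything here is an assumption for illustration: the variable names, the data-generating process, and the coefficient values are invented and are not the paper's data or estimates.

```python
import numpy as np

# Minimal 2SLS sketch: instrument z (think: tax-exposure proxy) shifts the
# endogenous regressor x (peers' investment), which affects the outcome y
# (a firm's own investment). The confounder u biases plain OLS; the
# instrument recovers the true causal coefficient of -0.5.
def tsls(y, x, z):
    """Two-stage least squares with a constant; returns the slope on x."""
    Z = np.column_stack([np.ones_like(z), z])
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)    # first stage: x on z
    x_hat = Z @ gamma                                # fitted (exogenous) part of x
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # second stage: y on x_hat
    return beta[1]

rng = np.random.default_rng(0)
z = rng.normal(size=500)                 # instrument
u = rng.normal(size=500)                 # unobserved confounder
x = 0.8 * z + u                          # endogenous regressor
y = -0.5 * x + u + 0.5 * rng.normal(size=500)   # true causal effect: -0.5
beta = tsls(y, x, z)                     # close to -0.5 despite the confounder
```

Because u enters both x and y, an OLS regression of y on x would be biased toward zero or flip sign; the 2SLS estimate stays near the true coefficient.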
Public employees in many developing economies earn much higher wages than similar private-sector workers. These wage premia may reflect an efficient return to effort or unobserved skills, or an inefficient rent causing labor misallocation. To distinguish these explanations, we exploit the Kenyan government’s algorithm for hiring eighteen thousand new teachers in 2010 in a regression discontinuity design. Fuzzy regression discontinuity estimates yield a civil-service wage premium of over 100 percent (not attributable to observed or unobserved skills), but no effect on motivation, suggesting rent-sharing as the most plausible explanation for the wage premium.
From 1963 through 2015, idiosyncratic risk (IR) is high when market risk (MR) is high. We show that the positive relation between IR and MR is highly stable through time and is robust across exchanges, firm size, liquidity, and market-to-book groupings. Though stock liquidity affects the strength of the relation, the relation is strong for the most liquid stocks. The relation has roots in fundamentals as higher market risk predicts greater idiosyncratic earnings volatility and as firm characteristics related to the ability of firms to adjust to higher uncertainty help explain the strength of the relation. Consistent with the view that growth options provide a hedge against macroeconomic uncertainty, we find evidence that the relation is weaker for firms with more growth options.
This paper aims at an improved understanding of the relationship between monetary policy and racial inequality. We investigate the distributional effects of monetary policy in a unified framework, linking monetary policy shocks both to earnings and wealth differentials between black and white households. Specifically, we show that, although a more accommodative monetary policy increases employment of black households more than white households, the overall effects are small. At the same time, an accommodative monetary policy shock exacerbates the wealth difference between black and white households, because black households own less financial assets that appreciate in value. Over multi-year time horizons, the employment effects are substantially smaller than the countervailing portfolio effects. We conclude that there is little reason to think that accommodative monetary policy plays a significant role in reducing racial inequities in the way often discussed. On the contrary, it may well accentuate inequalities for extended periods.
This paper investigates the financial contracting behavior of German venture capitalists against the results of recent theoretical work on the design of venture capital contracts, especially with regard to the use of convertible securities. First, we identify a special feature of the German market, namely that public-private partnership agencies require significantly lower returns than private and young venture capitalists. The latter are most likely to follow their North American counterparts by refinancing themselves with closed-end funds. Second, with regard to financing practices it is shown that the use of convertibles, relative to other instruments, is influenced by the anticipated severity of agency problems. JEL classification: C24; G24; G32
During the 1980s and early 1990s, the importance of small firm growth and industrial districts in Italy became the focus of a large number of regional development studies. According to this literature, successful industrial districts are characterized by intensive cooperation and market producer-user interaction between small and medium-sized, flexibly specialized firms (Piore and Sabel, 1984; Scott, 1988). In addition, specialized local labor markets develop which are complemented by a variety of supportive institutions and a tradition of collaboration based on trust relations (Amin and Robins, 1990; Amin and Thrift, 1995). It has also been emphasized that industrial districts are deeply embedded into the socio-institutional structures within their particular regions (Grabher, 1993). Many case studies have attempted to find evidence that the regional patterns identified in Italy are a reflection of a general trend in industrial development rather than just being historical exceptions. Silicon Valley, which is focused on high technology production, has been identified as being one such production complex similar to those in Italy (see, for instance, Hayter, 1997). However, some remarkable differences do exist in the institutional context of this region, as well as its particular social division of labor (Markusen, 1996). Even though critics, such as Amin and Robins (1990), emphasized quite early that the Italian experience could not easily be applied to other socio-cultural settings, many studies have classified other high technology regions in the U.S. as being industrial districts, such as Boston's Route 128 area. Too much attention has been paid to the performance of small and medium-sized firms and the regional level of industrial production in the ill-fated debate regarding industrial districts (Martinelli and Schoenberger, 1991). Harrison (1997) has provided substantial evidence that large firms continue to dominate the global economy.
This does not, however, imply that a de-territorialization of economic growth is necessarily taking place as globalization tendencies continue (Storper, 1997; Maskell and Malmberg, 1998). In the case of Boston, it has been misleading to define its regional economy as being an industrial district. Neither have small and medium-sized firms been decisive in the development of the Route 128 area nor has the region developed a tradition of close communication between vertically-disintegrated firms (Dorfman, 1983; Bathelt, 1991a). Saxenian (1994) found that Boston's economy contrasted sharply with that of an industrial district. Specifically, the region has been dominated by large, vertically-integrated high technology firms which are reliant on proprietary technologies and autarkic firm structures. Several studies have tried to compare the development of the Route 128 region to Silicon Valley. These studies have shown that both regions developed into major agglomerations of high technology industries in the post-World War II period. Due to their different traditions, structures and practices, Silicon Valley and Route 128 have followed divergent development paths which have resulted in a different regional specialization (Dorfman, 1983; Saxenian, 1985; Kenney and von Burg, 1999). In the mid-1970s, both regions were almost equally important in terms of the size of their high technology sectors. Since then, however, Silicon Valley has become more important and now has the largest agglomeration of leading-edge technologies in the U.S. (Saxenian, 1994). Saxenian (1994) argues that the superior performance of high technology industries in Silicon Valley over those in Boston is based on different organizational patterns and manufacturing cultures which are embedded in those socio-institutional traditions which are particular to each region. Despite the fact that Saxenian (1994) has been criticized for basing her conclusions on weak empirical research (i.e.
Harrison, 1997; Markusen, 1998), she offers a convincing explanation as to why the development paths of both regions have differed. Saxenian's (1994) study does not, however, identify which structures and processes have enabled both regions to overcome economic crises. In the case of the Boston economy, high technology industries have proven that they are capable of readjusting and rejuvenating their product and process structures in such a way that further innovation and growth is stimulated. This is also exemplified by the region's recent economic development. In the late 1980s, Boston experienced an economic decline when the minicomputer industry lost its competitive basis and defense expenditures were drastically reduced. The number of high technology manufacturing jobs decreased by more than 45,000 between 1987 and 1995. By the mid-1990s, however, the regional economy began to recover. The rapidly growing software sector compensated for some of the losses experienced in manufacturing. In this paper, I aim to identify the forces behind this economic recovery. I will investigate whether high technology firms have uncovered new ways to overcome the crisis and the extent to which they have given up their focus on self-reliance and autarkic structures. The empirical findings will also be discussed in the context of the recent debate about the importance of regional competence and collective learning (Storper, 1997; Maskell and Malmberg, 1998). There is a growing body of literature which suggests that some regional economies can develop into learning economies which are based on intra-regional production linkages, interactive technological learning processes, flexibility and proximity (Storper, 1992; Lundvall and Johnson, 1994; Gregersen and Johnson, 1997). In the next section of this paper, I will discuss some of the theoretical issues regarding localized learning processes, learning economies and learning regions (see, also, Bathelt, 1999). I will then describe the methodology used. What follows is a brief overview of how Boston's economy has specialized in high technology production. The main part of the paper will then focus on recent trends in Boston's high technology industries.
It will be shown that the high technology economy consists of different subsectors which are not tied to a single technological development path. The various subsectors are, at least partially, dependent on different forces and unrelated processes. There is, however, tentative evidence which suggests that cooperative behavior and collective learning in supplier-producer-user relations have become important factors in securing reproductivity in the regional structure. The importance of these trends will be discussed in the conclusions.
Using a novel experimental design, I test how exposure to information about a group’s relative performance causally affects the members’ level of identification and thereby their propensity to harm affiliates of comparison groups. I find that being informed about either a high or a poor relative performance of the ingroup similarly fosters identification. Stronger ingroup identification creates increased hostility against the group of comparison. In cases where participants learn about poor relative performance, there appears to be a direct level effect additionally elevating hostile discrimination. My findings shed light on a specific channel through which social media may contribute to intergroup fragmentation and polarization.
How does group identity affect belief formation? To address this question, we conduct a series of online experiments with a representative sample of individuals in the US. Using the setting of the 2020 US presidential election, we find evidence of intergroup preference across three distinct components of the belief formation cycle: a biased prior belief, avoidance of outgroup information sources, and a belief-updating process that places greater (less) weight on prior (new) information. We further find that an intervention reducing the salience of information sources decreases outgroup information avoidance by 50%. In a social learning context in wave 2, we find participants place 33% more weight on ingroup than outgroup guesses. Through two waves of interventions, we identify source utility as the mechanism driving group effects in belief formation. Our analyses indicate that our observed effects are driven by groupy participants who exhibit stable and consistent intergroup preferences in both allocation decisions and belief formation across all three waves. These results suggest that policymakers could reduce the salience of group and partisan identity associated with a policy to decrease outgroup information avoidance and increase policy uptake.
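The asymmetric belief-updating pattern described in this abstract, with more weight on the prior when new information comes from the outgroup, can be sketched as a weighted average of prior and signal. The weights below are illustrative assumptions, not the paper's estimates.

```python
# Toy sketch of source-dependent belief updating: the posterior is a weighted
# mean of the prior and a new signal, with a larger prior weight (i.e., more
# discounting of the signal) when the signal comes from the outgroup.
def update_belief(prior, signal, source_ingroup):
    """Weighted-average posterior; hypothetical weights for illustration."""
    w_prior = 0.5 if source_ingroup else 0.8   # outgroup signals get less weight
    return w_prior * prior + (1 - w_prior) * signal

ingroup_posterior = update_belief(prior=0.4, signal=0.9, source_ingroup=True)
outgroup_posterior = update_belief(prior=0.4, signal=0.9, source_ingroup=False)
# the same signal moves beliefs less when it comes from the outgroup
```

The gap between the two posteriors is the kind of source effect the abstract attributes to source utility.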
Incentives, self-selection, and coordination of motivated agents for the production of social goods
(2021)
We study, theoretically and empirically, the effects of incentives on the self-selection and coordination of motivated agents to produce a social good. Agents join teams where they allocate effort to either generate individual monetary rewards (selfish effort) or contribute to the production of a social good with positive effort complementarities (social effort). Agents differ in their motivation to exert social effort. Our model predicts that lowering incentives for selfish effort in one team increases social good production by selectively attracting and coordinating motivated agents. We test this prediction in a lab experiment allowing us to cleanly separate the selection effect from other effects of low incentives. Results show that social good production more than doubles in the low-incentive team, but only if self-selection is possible. Our analysis highlights the important role of incentives in the matching of motivated agents engaged in social good production.
In current discussions on large language models (LLMs) such as GPT, understanding their ability to emulate facets of human intelligence stands central. Using behavioral economic paradigms and structural models, we investigate GPT’s cooperativeness in human interactions and assess its rational goal-oriented behavior. We discover that GPT cooperates more than humans and has overly optimistic expectations about human cooperation. Intriguingly, additional analyses reveal that GPT’s behavior isn’t random; it displays a level of goal-oriented rationality surpassing human counterparts. Our findings suggest that GPT hyper-rationally aims to maximize social welfare, coupled with a striving for self-preservation. Methodologically, our research highlights how structural models, typically employed to decipher human behavior, can illuminate the rationality and goal-orientation of LLMs. This opens a compelling path for future research into the intricate rationality of sophisticated, yet enigmatic artificial agents.
Advances in Machine Learning (ML) led organizations to increasingly implement predictive decision aids intended to improve employees’ decision-making performance. While such systems improve organizational efficiency in many contexts, they might be a double-edged sword when there is the danger of a system discontinuance. Following cognitive theories, the provision of ML-based predictions can adversely affect the development of decision-making skills that come to light when people lose access to the system. The purpose of this study is to put this assertion to the test. Using a novel experiment specifically tailored to deal with organizational obstacles and endogeneity concerns, we show that the initial provision of ML decision aids can latently prevent the development of decision-making skills which later becomes apparent when the system gets discontinued. We also find that the degree to which individuals 'blindly' trust observed predictions determines the ultimate performance drop in the post-discontinuance phase. Our results suggest that making it clear to people that ML decision aids are imperfect can have its benefits especially if there is a reasonable danger of (temporary) system discontinuances.
Using experimental data from a comprehensive field study, we explore the causal effects of algorithmic discrimination on economic efficiency and social welfare. We harness economic, game-theoretic, and state-of-the-art machine learning concepts that allow us to overcome the central challenge of missing counterfactuals, which generally impedes assessing the economic downstream consequences of algorithmic discrimination. This way, we are able to precisely quantify downstream efficiency and welfare ramifications, which provides us with a unique opportunity to assess whether the introduction of an AI system is actually desirable. Our results highlight that an AI system’s capability to enhance welfare critically depends on the degree of its inherent algorithmic bias. While an unbiased system in our setting outperforms humans and creates substantial welfare gains, the positive impact steadily decreases and ultimately reverses the more biased an AI system becomes. We show that this relation is particularly concerning in selective-labels environments, i.e., settings where outcomes are observed only if decision-makers take a particular action so that the data are selectively labeled, because commonly used technical performance metrics such as precision are prone to be deceptive. Finally, our results show that continued learning, by creating feedback loops, can remedy algorithmic discrimination and its associated negative effects over time.
Recent regulatory measures such as the European Union’s AI Act require artificial intelligence (AI) systems to be explainable. As such, understanding how explainability impacts human-AI interaction, and pinpointing the specific circumstances and groups affected, is imperative. In this study, we devise a formal framework and conduct an empirical investigation involving real estate agents to explore the complex interplay between explainability of and delegation to AI systems. On an aggregate level, our findings indicate that real estate agents display a higher propensity to delegate apartment evaluations to an AI system when its workings are explainable, thereby surrendering control to the machine. However, at an individual level, we detect considerable heterogeneity. Agents possessing extensive domain knowledge are generally more inclined to delegate decisions to AI and minimize their effort when provided with explanations. Conversely, agents with limited domain knowledge only exhibit this behavior when explanations correspond with their preconceived notions regarding the relationship between apartment features and listing prices. Our results illustrate that the introduction of explainability in AI systems may transfer decision-making control from humans to AI under the veil of transparency, which has notable implications for policy makers and practitioners that we discuss.
This paper explores the interplay of feature-based explainable AI (XAI) techniques, information processing, and human beliefs. Using a novel experimental protocol, we study the impact of providing users with explanations about how an AI system weighs inputted information to produce individual predictions (LIME) on users’ weighting of information and beliefs about the task-relevance of information. On the one hand, we find that feature-based explanations cause users to alter their mental weighting of available information according to the observed explanations. On the other hand, explanations lead to asymmetric belief adjustments that we interpret as a manifestation of confirmation bias. Trust in prediction accuracy plays an important moderating role for XAI-enabled belief adjustments. Our results show that feature-based XAI does not merely influence decisions superficially but genuinely changes internal cognitive processes, bearing the potential to manipulate human beliefs and reinforce stereotypes. Hence, current regulatory efforts aimed at enhancing algorithmic transparency may benefit from going hand in hand with measures ensuring the exclusion of sensitive personal information from XAI systems. Overall, our findings put into perspective assertions that XAI is the silver bullet solving all of AI systems’ (black box) problems.
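The feature-based explanation technique the abstract refers to (LIME) fits a weighted linear surrogate model around a single prediction and reads off local feature weights. The following is a minimal sketch with a synthetic black-box predictor; the function, sampling scheme, and numbers are hypothetical illustrations, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque predictor: nonlinear in x0, linear in x1,
    # and it ignores x2 entirely.
    return np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]

def local_linear_explanation(instance, predict, n_samples=5000, width=0.3):
    """LIME-style sketch: perturb around `instance`, weight the perturbed
    samples by proximity, and fit a weighted linear surrogate whose
    coefficients act as local feature weights."""
    X = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    y = predict(X)
    # Proximity kernel: perturbations closer to the instance count more.
    w = np.exp(-np.sum((X - instance) ** 2, axis=1) / (2.0 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.column_stack([np.ones(n_samples), X - instance])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # drop the intercept; keep per-feature weights

weights = local_linear_explanation(np.array([0.1, 0.0, 0.0]), black_box)
```

The recovered weights mirror the local behavior of the black box: a large weight on x0, roughly 0.5 on x1, and close to zero on the ignored x2.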
Conditional yield skewness is an important summary statistic of the state of the economy. It exhibits pronounced variation over the business cycle and with the stance of monetary policy, and a tight relationship with the slope of the yield curve. Most importantly, variation in yield skewness has substantial forecasting power for future bond excess returns, high-frequency interest rate changes around FOMC announcements, and consensus survey forecast errors for the ten-year Treasury yield. The COVID pandemic did not disrupt these relations: historically high skewness correctly anticipated the run-up in long-term Treasury yields starting in late 2020. The connection between skewness, survey forecast errors, excess returns, and departures of yields from normality is consistent with a theoretical framework where one of the agents has biased beliefs.
The authors estimate perceptions about the Fed's monetary policy rule from panel data on professional forecasts of interest rates and macroeconomic conditions. The perceived dependence of the federal funds rate on economic conditions is time-varying and cyclical: high during tightening episodes but low during easings. Forecasters update their perceptions about the policy rule in response to monetary policy actions, measured by high-frequency interest rate surprises, suggesting that forecasters have imperfect information about the rule. The perceived rule impacts asset prices crucial for monetary policy transmission, driving how interest rates respond to macroeconomic news and explaining term premia in long-term interest rates.
High-frequency changes in interest rates around FOMC announcements are an important tool for identifying the effects of monetary policy on asset prices and the macroeconomy. However, some recent studies have questioned both the exogeneity and the relevance of these monetary policy surprises as instruments, especially for estimating the macroeconomic effects of monetary policy shocks. For example, monetary policy surprises are correlated with macroeconomic and financial data that are publicly available prior to the FOMC announcement. The authors address these concerns in two ways: First, they expand the set of monetary policy announcements to include speeches by the Fed Chair, which essentially doubles the number and importance of announcements in their dataset. Second, they explain the predictability of the monetary policy surprises in terms of the “Fed response to news” channel of Bauer and Swanson (2021) and account for it by orthogonalizing the surprises with respect to macroeconomic and financial data. Their subsequent reassessment of the effects of monetary policy yields two key results: First, estimates of the high-frequency effects on financial markets are largely unchanged. Second, estimates of the macroeconomic effects of monetary policy are substantially larger and more significant than what most previous empirical studies have found.
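The orthogonalization step described above amounts to regressing the raw surprises on pre-announcement observables and keeping the residual, i.e. the component unpredictable from public information. A minimal sketch with simulated data (all series and coefficients here are hypothetical, not the authors' actual variables):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: raw high-frequency policy surprises that are partly
# predictable from macro/financial information available before each
# FOMC announcement.
n = 200
predictors = rng.normal(size=(n, 3))
true_shock = rng.normal(size=n)
raw_surprise = predictors @ np.array([0.4, -0.2, 0.1]) + true_shock

def orthogonalize(surprise, X):
    """Regress surprises on pre-announcement observables (with intercept)
    and keep the OLS residual as the orthogonalized surprise."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, surprise, rcond=None)
    return surprise - A @ beta

clean = orthogonalize(raw_surprise, predictors)
```

By construction the residual is uncorrelated with the predictors, so the cleaned surprise tracks only the genuinely unanticipated shock.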
High-frequency changes in interest rates around FOMC announcements are a standard method of measuring monetary policy shocks. However, some recent studies have documented puzzling effects of these shocks on private-sector forecasts of GDP, unemployment, or inflation that are opposite in sign to what standard macroeconomic models would predict. This evidence has been viewed as supportive of a “Fed information effect” channel of monetary policy, whereby an FOMC tightening (easing) communicates that the economy is stronger (weaker) than the public had expected.
The authors show that these empirical results are also consistent with a “Fed response to news” channel, in which incoming, publicly available economic news causes both the Fed to change monetary policy and the private sector to revise its forecasts. They provide substantial new evidence that distinguishes between these two channels and strongly favors the latter; for example, regressions that include the previously omitted public macroeconomic news, high-frequency stock market responses to Fed announcements, and a new survey that they conduct of individual Blue Chip forecasters all indicate that the Fed and the private sector are simply responding to the same public news, and that there is little if any role for a “Fed information effect”.
It is commonly believed that the response of the price of corn ethanol (and hence of the price of corn) to shifts in biofuel policies operates in part through market expectations and shifts in storage demand, yet to date it has proved difficult to measure these expectations and to empirically evaluate this view. We utilize a recently proposed methodology to estimate the market’s expectations of the prices of ethanol, unfinished motor gasoline and crude oil at horizons from three months to one year. We quantify the extent to which price changes were anticipated by the market, the extent to which they were unanticipated, and how the risk premium in these markets has evolved. We show that the Renewable Fuel Standard (RFS) is likely to have increased ethanol price expectations by as much as $1.45 in the year before and the year after the implementation of the RFS began. Our analysis of the term structure of expectations provides support for the view that a shift in ethanol storage demand starting in 2005 caused an increase in the price of ethanol. There is no conclusive evidence that the tightening of the RFS in 2008 shifted market expectations, but our analysis suggests that policy uncertainty about how to deal with the blend wall raised the risk premium in the ethanol futures market in mid-2013 by as much as 50 cents at longer horizons. Finally, we present evidence against a tight link from ethanol price expectations to corn price expectations and hence to storage demand for corn in 2005-06.
The substantial variation in the real price of oil since 2003 has renewed interest in the question of how to forecast monthly and quarterly oil prices. There also has been increased interest in the link between financial markets and oil markets, including the question of whether financial market information helps forecast the real price of oil in physical markets. An obvious advantage of financial data in forecasting oil prices is their availability in real time on a daily or weekly basis. We investigate whether mixed-frequency models may be used to take advantage of these rich data sets. We show that, among a range of alternative high-frequency predictors, especially changes in U.S. crude oil inventories produce substantial and statistically significant real-time improvements in forecast accuracy. The preferred MIDAS model reduces the MSPE by as much as 16 percent compared with the no-change forecast and has statistically significant directional accuracy as high as 82 percent. This MIDAS forecast also is more accurate than a mixed-frequency real-time VAR forecast, but not systematically more accurate than the corresponding forecast based on monthly inventories. We conclude that typically not much is lost by ignoring high-frequency financial data in forecasting the monthly real price of oil.
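The MIDAS idea of mapping high-frequency predictors into a low-frequency regression is typically implemented with a parsimonious lag polynomial. A minimal sketch of the exponential Almon weighting commonly used in MIDAS models, applied to hypothetical weekly inventory-change data (parameter values and data are illustrative assumptions, not the paper's estimates):

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

# Illustration: compress 12 weekly lags of (hypothetical) U.S. crude oil
# inventory changes into a single regressor for a monthly price equation.
rng = np.random.default_rng(2)
weights = exp_almon_weights(12, 0.1, -0.05)
weekly_inventory_changes = rng.normal(size=12)
midas_regressor = weights @ weekly_inventory_changes
```

The two theta parameters govern the shape of the lag profile, so a flexible decay pattern is estimated with only two free parameters regardless of how many weekly lags enter the regression.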
Futures markets are a potentially valuable source of information about market expectations. Exploiting this information has proved difficult in practice, because the presence of a time-varying risk premium often renders the futures price a poor measure of the market expectation of the price of the underlying asset. Even though the expectation in principle may be recovered by adjusting the futures price by the estimated risk premium, a common problem in applied work is that there are as many measures of market expectations as there are estimates of the risk premium. We propose a general solution to this problem that allows us to uniquely pin down the best possible estimate of the market expectation for any set of risk premium estimates. We illustrate this approach by solving the long-standing problem of how to recover the market expectation of the price of crude oil. We provide a new measure of oil price expectations that is considerably more accurate than the alternatives and more economically plausible. We discuss implications of our analysis for the estimation of economic models of energy-intensive durables, for the debate on speculation in oil markets, and for oil price forecasting.
Some observers have conjectured that oil supply shocks in the United States and in other countries are behind the plunge in the price of oil since June 2014. Others have suggested that a major shock to oil price expectations occurred when in late November 2014 OPEC announced that it would maintain current production levels despite the steady increase in non-OPEC oil production. Both conjectures are perfectly reasonable ex ante, yet we provide quantitative evidence that neither explanation appears supported by the data. We show that more than half of the decline in the price of oil was predictable in real time as of June 2014 and therefore must have reflected the cumulative effects of earlier oil demand and supply shocks. Among the shocks that occurred after June 2014, the most influential shock resembles a negative shock to the demand for oil associated with a weakening economy in December 2014. In contrast, there is no evidence of any large positive oil supply shocks between June and December. We conclude that the difference in the evolution of the price of oil, which declined by 44% over this period, compared with other commodity prices, which on average only declined by about 5%-15%, reflects oil-market specific developments that took place prior to June 2014.
U.S. retail food price increases in recent years may seem large in nominal terms, but after adjusting for inflation have been quite modest even after the change in U.S. biofuel policies in 2006. In contrast, increases in the real prices of corn, soybeans, wheat and rice received by U.S. farmers have been more substantial and can be linked in part to increases in the real price of oil. That link, however, appears largely driven by common macroeconomic determinants of the prices of oil and agricultural commodities rather than the pass-through from higher oil prices. We show that there is no evidence that corn ethanol mandates have created a tight link between oil and agricultural markets. Rather, increases in food commodity prices not associated with changes in global real activity appear to reflect a wide range of idiosyncratic shocks ranging from changes in biofuel policies to poor harvests. Increases in agricultural commodity prices in turn contribute little to U.S. retail food price increases, because of the small cost share of agricultural products in food prices. There is no evidence that oil price shocks have caused more than a negligible increase in retail food prices in recent years. Nor is there evidence for the prevailing wisdom that oil-price driven increases in the cost of food processing, packaging, transportation and distribution are responsible for higher retail food prices. Finally, there is no evidence that oil-market specific events or for that matter U.S. biofuel policies help explain the evolution of the real price of rice, which is perhaps the single most important food commodity for many developing countries.
The U.S. Energy Information Administration (EIA) regularly publishes monthly and quarterly forecasts of the price of crude oil for horizons up to two years, which are widely used by practitioners. Traditionally, such out-of-sample forecasts have been largely judgmental, making them difficult to replicate and justify. An alternative is the use of real-time econometric oil price forecasting models. We investigate the merits of constructing combinations of six such models. Forecast combinations have received little attention in the oil price forecasting literature to date. We demonstrate that over the last 20 years suitably constructed real-time forecast combinations would have been systematically more accurate than the no-change forecast at horizons up to 6 quarters or 18 months. MSPE reduction may be as high as 12% and directional accuracy as high as 72%. The gains in accuracy are robust over time. In contrast, the EIA oil price forecasts not only tend to be less accurate than no-change forecasts, but are much less accurate than our preferred forecast combination. Moreover, including EIA forecasts in the forecast combination systematically lowers the accuracy of the combination forecast. We conclude that suitably constructed forecast combinations should replace traditional judgmental forecasts of the price of oil.
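An equal-weight forecast combination of the kind evaluated above can be sketched in a few lines. The simulated price process and the six "models" below are hypothetical stand-ins, not the paper's actual forecasting models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a mean-reverting price series as a stand-in for the real price
# of oil (purely hypothetical numbers).
T = 4000
price = np.empty(T)
price[0] = 50.0
for t in range(1, T):
    price[t] = 0.5 * price[t - 1] + 25.0 + rng.normal(scale=2.0)

actual = price[1:]
no_change = price[:-1]  # the no-change benchmark forecast

# Six "models": each observes the conditional mean plus idiosyncratic noise.
cond_mean = 0.5 * price[:-1] + 25.0
models = cond_mean[:, None] + rng.normal(scale=1.0, size=(T - 1, 6))
combination = models.mean(axis=1)  # equal-weight forecast combination

def mspe(forecast):
    """Mean squared prediction error against the realized series."""
    return np.mean((actual - forecast) ** 2)
```

In this stylized setting the equal-weight combination beats both the no-change benchmark and the average individual model, because averaging diversifies away the idiosyncratic errors of the individual forecasts.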
It has been forty years since the oil crisis of 1973/74. This crisis has been one of the defining economic events of the 1970s and has shaped how many economists think about oil price shocks. In recent years, a large literature on the economic determinants of oil price fluctuations has emerged. Drawing on this literature, we first provide an overview of the causes of all major oil price fluctuations between 1973 and 2014. We then discuss why oil price fluctuations remain difficult to predict, despite economists’ improved understanding of oil markets. Unexpected oil price fluctuations are commonly referred to as oil price shocks. We document that, in practice, consumers, policymakers, financial market participants and economists may have different oil price expectations, and that what may be surprising to some need not be equally surprising to others.
Although there is much interest in the future retail price of gasoline among consumers, industry analysts, and policymakers, it is widely believed that changes in the price of gasoline are essentially unforecastable given publicly available information. We explore a range of new forecasting approaches for the retail price of gasoline and compare their accuracy with the no-change forecast. Our key finding is that substantial reductions in the mean-squared prediction error (MSPE) of gasoline price forecasts are feasible in real time at horizons up to two years, as are substantial increases in directional accuracy. The most accurate individual model is a VAR(1) model for real retail gasoline and Brent crude oil prices. Even greater reductions in MSPEs are possible by constructing a pooled forecast that assigns equal weight to five of the most successful forecasting models. Pooled forecasts have lower MSPE than the EIA gasoline price forecasts and the gasoline price expectations in the Michigan Survey of Consumers. We also show that as much as 39% of the decline in gas prices between June and December 2014 was predictable.
Are product spreads useful for forecasting? An empirical evaluation of the Verleger hypothesis
(2013)
Notwithstanding a resurgence in research on out-of-sample forecasts of the price of oil in recent years, there is one important approach to forecasting the real price of oil which has not been studied systematically to date. This approach is based on the premise that demand for crude oil derives from the demand for refined products such as gasoline or heating oil. Oil industry analysts such as Philip Verleger and financial analysts widely believe that there is predictive power in the product spread, defined as the difference between suitably weighted refined product market prices and the price of crude oil. Our objective is to evaluate this proposition. We derive from first principles a number of alternative forecasting model specifications involving product spreads and compare these models to the no-change forecast of the real price of oil. We show that not all product spread models are useful for out-of-sample forecasting, but some models are, even at horizons between one and two years. The most accurate model is a time-varying parameter model of gasoline and heating oil spot spreads that allows the marginal product market to change over time. We document MSPE reductions as high as 20% and directional accuracy as high as 63% at the two-year horizon, making product spread models a good complement to forecasting models based on economic fundamentals, which work best at short horizons.
Whatever it takes to understand a central banker : embedding their words using neural networks
(2023)
Dictionary approaches are at the forefront of current techniques for quantifying central bank communication. In this paper, the authors propose a novel language model that is able to capture subtleties of messages, such as one of the most famous sentences in central bank communication, when ECB President Mario Draghi stated that "within [its] mandate, the ECB is ready to do whatever it takes to preserve the euro".
The authors utilize a text corpus that is unparalleled in size and diversity in the central bank communication literature, as well as introduce a novel approach to text quantification from computational linguistics. This allows them to provide high-quality central bank-specific textual representations and demonstrate their applicability by developing an index that tracks deviations in the Fed's communication towards inflation targeting. Their findings indicate that these deviations in communication significantly impact monetary policy actions, substantially reducing the reaction to inflation deviations in the US.
This article presents a structural overview of corporate disclosure in Germany against the background of a rapidly evolving European market. Professor Baums first makes the theoretical case for mandatory disclosure and outlines the standard regulatory elements of market transparency. He then turns to German law and illustrates both how it attempts to meet the principal theoretical demands of disclosure and how it should be improved. The article also presents in some detail the actual channels of corporate disclosure used in Germany and the manner in which German law now fits into the overall development of the broader European Community scheme, as well as the contemplated changes and improvements at both the national and the supranational level.
The paper was submitted to the conference on company law reform at the University of Cambridge, July 4th, 2002. Since the introduction of corporation laws in the individual German states during the first half of the 19th century, Germany has repeatedly amended and reformed its company law. Such reforms and amendments were prompted in part by stock exchange fraud and the collapse of large corporations, but also by routine adjustment of the law to changing commercial and societal conditions. During the last ten years, a series of significant changes to German company law led one commentator to speak of a "company law in permanent reform". Two years ago, the German Federal Chancellor established a Regierungskommission Corporate Governance ("Government Commission on Corporate Governance") and instructed it to examine the German corporate governance system and German company law as a whole, and to formulate recommendations for reform.
Universal banking means that banks are permitted to offer all of the various kinds of financial services. This includes classical banking activities like the credit and deposit business, as well as investment services, placement and brokerage of securities, and even insurance activities, trading in real estate and others. German universal banks also hold stock in nonfinancial firms and offer to vote their clients' shares in other firms. This paper deals with universal banks and their role in the investment business, more specifically, their links with investment companies and their various roles as shareholders and providers of financial services to such companies. Banks and investment companies have, as financial intermediaries, one trait in common: they both transform capital of investors (depositors and shareholders of investment funds, respectively) into funds (loans and equity or debt securities, respectively) that are channeled to other firms. So why should regulation forbid combining these transformation tasks in one institution or group, and why should the law not allow banks to establish investment companies and provide all kinds of financial services to them in addition to their banking services? German banking and investment company law have answered these questions in the affirmative. This paper argues that the existing regulation is not a sound and recommendable one. The paper is organized as follows: Sections II–V identify four areas where the combination of banking and investment might either harm the shareholders of the investment funds and/or negatively affect other constituencies such as the shareholders of the banking institution. These sections will at the same time explore whether there are institutional or regulatory provisions in place or market forces at work that adequately protect investors and the other constituencies in question. Concluding remarks follow (VI.).
The corporate governance systems in Europe differ markedly. Economists tend to use stylized models and distinguish between the Anglo-American, the German and the Latinist model. In this view, for instance, the Austrian, Dutch, German, and Swiss systems are said to be variations of one model. For lawyers the picture is, of course, much more detailed, as particular rules may vary even where common principles prevail. Many comparative studies of these differences have been undertaken in the meantime. I do not want to add another study but to address a different question: are there, as a consequence of growing internationalization, globalization of markets and technological change, also tendencies of convergence among our corporate governance systems? My answer will be in two parts. As corporate governance systems are traditionally shaped mainly by legislation, the first part will analyze the influence of economic and technological change on the rule-setting process itself. How does this process react to the fundamental environmental change? That includes a short analysis of the solution of centralized harmonization of company law within the EU as well as the question of whether EU-wide competition between national corporate law legislators can be observed or be expected in the future. The second part will then turn to the national level. It deals with actual tendencies of convergence or, more correctly, of approach by the German corporate governance system to the Anglo-American one.
The article describes the legal structure of the Daimler-Chrysler merger. It asks why this specific structure rather than another cheaper way was chosen. This leads to the more general question of the pros and cons of mandatory corporate law as a regulatory device. The article advocates an "optional" approach: The legislator should offer various menus or sets of binding rules among which the parties may choose. (JEL: ...)
The previous proposal for a company law directive on takeovers in 1990 was rejected in Germany almost unanimously for several different reasons. The new "slimmed down" draft proposal, in the light of the subsidiarity principle, takes the different approaches to investor protection in the various member states better into account. Notably, the most controversial principle of the previous draft, viz. the mandatory bid rule as the only means of investor protection in case of a change of control, has been given up. Therefore a much higher degree of acceptance seems likely. The Bundesrat (upper house) and the industry associations have already expressed their consent; the Bundestag (Federal Parliament) will deal with the proposal shortly. The technique of a "frame directive" leaves ample leeway for the member states. That will shift the discussion back to the national level and there will lead to the question as to how to make use of this leeway (cf. II, III, below) rather than to a debate about principles as in the past. It seems likely that criticism will confine itself to more technical questions (cf. IV, below).