Optimal investment decisions by institutional investors require accurate predictions of stock market developments. Motivated by previous research that revealed the unsatisfactory performance of existing stock market prediction models, this study proposes a novel prediction approach. Our proposed system combines Artificial Intelligence (AI) with data from Virtual Investment Communities (VICs) and leverages VICs’ ability to support the process of predicting stock markets. An empirical study with two different models using real data shows the potential of the AI-based system with VICs information as an instrument for stock market predictions. VICs can be a valuable addition, but our results indicate that this type of data is only helpful in certain market phases.
Artificial Intelligence (AI) and Machine Learning (ML) are currently hot topics in industry and business practice, while management-oriented research disciplines seem reluctant to adopt these sophisticated data analytics methods as research instruments. Even the Information Systems (IS) discipline, with its close connections to Computer Science, seems to be conservative when conducting empirical research endeavors. To assess the magnitude of the problem and to understand its causes, we conducted a bibliographic review of publications in high-level IS journals. We reviewed 1,838 articles that matched our keyword queries in journals from the AIS senior scholar basket, Electronic Markets and Decision Support Systems (Ranked B). In addition, we conducted a survey among IS researchers (N = 110). Based on the findings from our sample, we evaluate different potential causes that could explain why ML methods are rather underrepresented in top-tier journals and discuss how the IS discipline could successfully incorporate ML methods in research undertakings.
This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop in a human-machine collaboration. For such cases we propose the use of a Contradiction Matrix to assess the overlap and the contradictions of human and machine predictions. We show in a small-scaled user study with experts in the area of pneumology (1) that machine-learning based systems can classify X-rays with respect to diseases with a meaningful accuracy, (2) humans partly use contradictions to reconsider their initial diagnosis, and (3) that this leads to a higher overlap between human and machine diagnoses at the end of the collaboration situation. We argue that disclosure of information on diagnosis uncertainty can be beneficial to make the human expert reconsider her or his initial assessment which may ultimately result in a deliberate agreement. In the light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing the differences in reasoning and learning processes of humans and intelligent systems in mind, we argue that interdisciplinary research teams have the best chances at tackling this undertaking and generating valuable insights.
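The Contradiction Matrix described above can be pictured as a cross-tabulation of human and machine labels, where off-diagonal cells are the contradictions that prompt the expert to reconsider. The sketch below is a minimal, hypothetical rendering under that reading; the label names and example data are invented, not the study's.

```python
from collections import Counter

def contradiction_matrix(human_preds, machine_preds):
    """Cross-tabulate human vs. machine predictions: diagonal cells are
    agreements, off-diagonal cells are contradictions worth revisiting."""
    counts = Counter(zip(human_preds, machine_preds))
    labels = sorted(set(human_preds) | set(machine_preds))
    return {(h, m): counts.get((h, m), 0) for h in labels for m in labels}

# Hypothetical example: an expert's calls vs. a classifier on five X-rays
human = ["disease", "healthy", "disease", "healthy", "disease"]
machine = ["disease", "disease", "disease", "healthy", "healthy"]
matrix = contradiction_matrix(human, machine)
# Cases in off-diagonal cells, e.g. (human="healthy", machine="disease"),
# are the ones the expert is asked to reconsider.
```

In this toy example, two of five cases are contradictions; in the study's workflow these are exactly the cases where disclosing the machine's diagnosis (and its uncertainty) may lead the expert to a deliberate agreement.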
We focus on the role of social media as a high-frequency, unfiltered mass information transmission channel and how its use for government communication affects the aggregate stock markets. To measure this effect, we concentrate on one of the most prominent Twitter users, the 45th President of the United States, Donald J. Trump. We analyze around 1,400 of his tweets related to the US economy and classify them by topic and textual sentiment using machine learning algorithms. We investigate whether the tweets contain relevant information for financial markets, i.e. whether they affect market returns, volatility, and trading volumes. Using high-frequency data, we find that Trump’s tweets are most often a reaction to pre-existing market trends and therefore do not provide material new information that would influence prices or trading. We show that past market information can help predict Trump’s decision to tweet about the economy.
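The study classifies tweets by topic and sentiment with machine learning algorithms; as a purely illustrative stand-in (not the authors' classifier), a minimal lexicon-based scorer shows the general idea of mapping tweet text to a sentiment label. The word lists are hypothetical.

```python
# Toy lexicon-based sentiment scoring of tweet text. Illustrative only:
# the word lists are invented, and the paper uses trained ML classifiers.
POSITIVE = {"great", "strong", "record", "winning", "booming"}
NEGATIVE = {"weak", "bad", "failing", "disaster", "losing"}

def sentiment(text):
    """Label text positive/negative/neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A trained classifier replaces the fixed lexicon with weights learned from labeled tweets, but the output, a per-tweet sentiment label that can be aligned with high-frequency market data, has the same shape.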
We develop a two-sector incomplete markets integrated assessment model to analyze the effectiveness of green quantitative easing (QE) in complementing fiscal policies for climate change mitigation. We model green QE through an outstanding stock of private assets held by a monetary authority and its portfolio allocation between a clean and a dirty sector of production. Green QE leads to a partial crowding out of private capital in the green sector and to a modest reduction of the global temperature by 0.04 degrees Celsius until 2100. A moderate global carbon tax of 50 USD per tonne of carbon is four times more effective.
The ECB’s Outright Monetary Transactions (OMT) program, launched in summer 2012, indirectly recapitalized periphery country banks through its positive impact on the value of sovereign bonds. However, the regained stability of the European banking sector has not fully transferred into economic growth. We show that zombie lending behavior of banks that still remained undercapitalized after the OMT announcement is an important reason for this development. As a result, there was no positive impact on real economic activity like employment or investment. Instead, firms mainly used the newly acquired funds to build up cash reserves. Finally, we document that creditworthy firms in industries with a high prevalence of zombie firms suffered significantly from the credit misallocation, which slowed down the economic recovery.
We investigate the transmission of central bank liquidity to bank deposits and loan spreads in Europe over the January 2006 to June 2010 period. We find evidence consistent with an impaired transmission channel due to bank risk. Central bank liquidity does not translate into lower loan spreads for high-risk banks, even as it lowers deposit rates for both high-risk and low-risk banks. This adversely affects the balance sheets of high-risk bank borrowers, leading to lower payouts, lower capital expenditures, and lower employment. Overall, our results suggest that banks’ capital constraints at the time of an easing of monetary policy pose a challenge to the effectiveness of the bank lending channel and the effectiveness of the central bank as a lender of last resort.
The European Central Bank (ECB) has finalized its comprehensive assessment of the solvency of the largest banks in the euro area and on October 26 disclosed the results of this assessment. In the present paper, Acharya and Steffen compare the outcomes of the ECB's assessment to their own benchmark stress tests conducted for 39 publicly listed financial institutions that are also included in the ECB's regulatory review. The authors identify a negative correlation between their benchmark estimates for capital shortfalls and the regulatory capital shortfall, but a positive correlation between their benchmark estimates for losses under stress both in the banking book and in the trading book. They conclude that the regulatory stress test outcomes are potentially heavily affected by the discretion of national regulators in measuring what is capital, and especially by the use of risk-weighted assets in calculating the prudential capital requirement.
We develop a dynamic recursive model where political and economic decisions interact, to study how excessive debt-GDP ratios affect the political sustainability of prudent fiscal policies. Rent seeking groups make political decisions – to cooperate (or not) – on the allocation of fiscal budgets (including rents) and the issuance of sovereign debt. A classic commons problem triggers collective fiscal impatience and excessive debt issuing, leading to a vicious circle of high borrowing costs and sovereign default. We analytically characterize debt-GDP thresholds that foster cooperation among rent seeking groups and avoid default. Our analysis and application help in understanding the politico-economic sustainability of sovereign rescues, emphasizing the need for fiscal targets and possible debt haircuts. We provide a calibrated example that quantifies the threshold debt-GDP ratio at 137%, remarkably close to the target set for private sector involvement in the case of Greece.
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not imply positive average inflation rates in equilibrium. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks.
On Reforming Deposit Insurance: Elements of an Incentive-Compatible European Reinsurance Scheme
(2020)
Bank deposits of up to 100,000 euros are de jure protected against losses to the same degree everywhere in the euro area. De facto, the value of this statutory guarantee depends, among other things, on the funding level of the national insurance fund and the relative size of the banking sector in an economy. Ensuring homogeneous deposit protection and completing the Banking Union requires a common European deposit insurance scheme. The implicit risk sharing that currently exists in the euro area is undesirable from a regulatory-policy perspective. Moreover, an explicit and credible second-tier backstop can prevent incentives to take on excessive risks before any losses materialize. This article therefore argues for a two-stage, strictly subsidiary reinsurance model: national primary insurers would cover a fixed share of the insured amount, while the European reinsurer would cover the remainder on a subordinated basis. The reinsurer would provide this liquidity support in the form of cash advances. Because liability remains at the national level, risks are shared but not mutualized. Market-based premiums must reflect not only a bank's individual risk weight but also country-specific risk factors. Finally, the reinsurer needs extensive supervisory rights in order to ensure at all times that the primary insurers remain able to meet their national liability obligations.
Motivated by the observation that survey expectations of stock returns are inconsistent with rational return expectations under real-world probabilities, we investigate whether alternative expectations hypotheses entertained in the asset pricing literature are consistent with the survey evidence. We empirically test (1) the notion that survey forecasts constitute rational but risk-neutral forecasts of future returns, and (2) the notion that survey forecasts are ambiguity averse/robust forecasts of future returns. We find that these alternative hypotheses are also strongly rejected by the data, albeit for different reasons. Hypothesis (1) is rejected because survey return forecasts are not in line with risk-free interest rates and because survey expected excess returns are predictable. Hypothesis (2) is rejected because agents are not always pessimistic about future returns but often display overly optimistic return expectations. We speculate as to what kind of expectations theories might be consistent with the available survey evidence.
Optimal trend inflation
(2017)
We present a sticky-price model incorporating heterogeneous firms and systematic firm-level productivity trends. Aggregating the model in closed form, we show that it delivers radically different predictions for the optimal inflation rate than canonical sticky-price models featuring homogeneous firms:
(1) the optimal steady-state inflation rate generically differs from zero and,
(2) inflation optimally responds to productivity disturbances.
Using micro data from the US Census Bureau to estimate the inflation-relevant productivity trends at the firm level, we find that the optimal US inflation rate is positive. It was slightly above 2 percent in the year 1986, but continuously declined thereafter, reaching about 1 percent in the year 2013.
We analytically characterize optimal monetary policy for an augmented New Keynesian model with a housing sector. In a setting where the private sector has rational expectations about future housing prices and inflation, optimal monetary policy can be characterized without making reference to housing price developments: commitment to a 'target criterion' that refers to inflation and the output gap only is optimal, as in the standard model without a housing sector. When the policymaker is concerned with potential departures of private sector expectations from rational ones and seeks to choose a policy that is robust against such possible departures, then the optimal target criterion must also depend on housing prices. In the empirically realistic case where housing is subsidized and where monopoly power causes output to fall short of its optimal level, the robustly optimal target criterion requires the central bank to 'lean against' housing prices: following unexpected housing price increases, policy should adopt a stance that is projected to undershoot its normal targets for inflation and the output gap, and similarly aim to overshoot those targets in the case of unexpected declines in housing prices. The robustly optimal target criterion does not require that policy distinguish between 'fundamental' and 'non-fundamental' movements in housing prices.
In the secondary art market, artists play no active role. This allows us to isolate cultural influences on the demand for female artists’ work from supply-side factors. Using 1.5 million auction transactions in 45 countries, we document a 47.6% gender discount in auction prices for paintings. The discount is higher in countries with greater gender inequality. In experiments, participants are unable to guess the gender of an artist simply by looking at a painting and they vary in their preferences for paintings associated with female artists. Women's art appears to sell for less because it is made by women.
In this paper, we develop a state-dependent sensitivity value-at-risk (SDSVaR) approach that enables us to quantify the direction, size, and duration of risk spillovers among financial institutions as a function of the state of financial markets (tranquil, normal, and volatile). Within a system of quantile regressions for four sets of major financial institutions (commercial banks, investment banks, hedge funds, and insurance companies) we show that while small during normal times, equivalent shocks lead to considerable spillover effects in volatile market periods. Commercial banks and, especially, hedge funds appear to play a major role in the transmission of shocks to other financial institutions. Using daily data, we can trace out the spillover effects over time in a set of impulse response functions and find that they reach their peak after 10 to 15 days.
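The core of the SDSVaR idea, risk measures that change with the state of the market, can be illustrated with a deliberately simplified stand-in: instead of the paper's system of quantile regressions, the sketch below just computes an empirical lower quantile of one institution's returns separately per market state. State labels and return figures are invented for illustration.

```python
# Simplified illustration of a state-dependent VaR: the paper estimates
# quantile regressions across institutions; here we merely take empirical
# quantiles of returns within each (hypothetical) market state.
def empirical_quantile(xs, q):
    """Empirical q-quantile by order statistic (simple index rule)."""
    ys = sorted(xs)
    idx = max(0, min(len(ys) - 1, int(q * len(ys))))
    return ys[idx]

def state_dependent_var(returns, states, q=0.05):
    """Lower return quantile (a crude VaR) per market state."""
    by_state = {}
    for r, s in zip(returns, states):
        by_state.setdefault(s, []).append(r)
    return {s: empirical_quantile(rs, q) for s, rs in by_state.items()}
```

Even this crude version makes the abstract's point visible: the same tail quantile computed in a volatile state sits far below the one computed in tranquil times, which is why spillover estimates differ so sharply across states.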
Credit boom detection methodologies (such as the threshold method) lack robustness, as they are based on univariate detrending analysis and resort to ratios of credit to real activity. I propose a quantitative indicator to detect atypical behavior of credit from a multivariate system - a monetary VAR. This methodology explicitly accounts for endogenous interactions between credit, asset prices and real activity and detects atypical credit expansions and contractions in the Euro Area, Japan and the U.S. robustly and in a timely manner. The analysis also proves useful in real time.
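The notion of "atypical" credit behavior, deviations from what the estimated system predicts, can be sketched with a much simpler one-variable stand-in for the monetary VAR: fit an AR(1) to a credit series and flag periods whose residuals exceed a threshold. The data and threshold below are invented for illustration.

```python
# One-variable stand-in for the paper's monetary VAR: fit an AR(1) by
# ordinary least squares and flag periods with unusually large residuals.
def ar1_fit(series):
    """Return (alpha, beta) for the OLS fit y_t = alpha + beta * y_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    return alpha, beta

def flag_atypical(series, threshold):
    """Indices (1-based within the series) where |residual| > threshold."""
    alpha, beta = ar1_fit(series)
    residuals = [y - (alpha + beta * x) for x, y in zip(series[:-1], series[1:])]
    return [i + 1 for i, r in enumerate(residuals) if abs(r) > threshold]
```

The actual indicator conditions on asset prices and real activity as well, so a credit expansion is flagged only when it is atypical relative to the whole system, not merely relative to its own past.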
This paper investigates the risk channel of monetary policy on the asset side of banks’ balance sheets. We use a factor-augmented vector autoregression (FAVAR) model to show that aggregate lending standards of U.S. banks, such as their collateral requirements for firms, are significantly loosened in response to an unexpected decrease in the Federal Funds rate. Based on this evidence, we reformulate the costly state verification (CSV) contract to allow for an active financial intermediary, embed it in a New Keynesian dynamic stochastic general equilibrium (DSGE) model, and show that – consistent with our empirical findings – an expansionary monetary policy shock implies a temporary increase in bank lending relative to borrower collateral. In the model, this is accompanied by a higher default rate of borrowers.
Even as online advertising continues to grow, a central question remains: Whom to target? Yet, advertisers know little about how to select from the hundreds of audience segments for targeting (and combinations thereof) for a profitable online advertising campaign. Utilizing insights from a field experiment on Facebook (Study 1), we develop a model that helps advertisers solve the cold-start problem of selecting audience segments for targeting. Our model enables advertisers to calculate the break-even performance of an audience segment to make a targeted ad campaign at least as profitable as an untargeted one. Advertisers can use this novel model to decide whether to test specific audience segments in their campaigns (e.g., in randomized controlled trials). We apply our model to data from the Spotify ad platform to study the profitability of different audience segments (Study 2). Approximately half of those audience segments require the click-through rate to double compared to an untargeted campaign, which is unrealistically high for most ad campaigns. Our model also shows that narrow segments require a lift that is likely not attainable, specifically when the data quality of these segments is poor. We confirm this theoretical finding in an empirical study (Study 3): A decrease in data quality due to Apple’s introduction of the App Tracking Transparency (ATT) framework more negatively affects the click-through rate of narrow (versus broad) audience segments.
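The break-even logic behind the abstract's "click-through rate must double" finding can be shown with back-of-the-envelope arithmetic; this is a simplified reading (cost-per-click parity under hypothetical CPM prices), not the paper's full model.

```python
# Break-even sketch: if targeted impressions cost more per thousand (CPM),
# the targeted campaign's click-through rate (CTR) must rise by at least the
# same factor to keep cost-per-click from exceeding the untargeted campaign's.
# All numbers below are hypothetical, not from the paper's data.
def breakeven_ctr_lift(cpm_untargeted, cpm_targeted):
    """Minimum multiplicative CTR lift for cost-per-click parity."""
    return cpm_targeted / cpm_untargeted

def is_profitable(ctr_untargeted, ctr_targeted, cpm_untargeted, cpm_targeted):
    """True if the realized CTR lift meets or beats the break-even lift."""
    return ctr_targeted / ctr_untargeted >= breakeven_ctr_lift(
        cpm_untargeted, cpm_targeted)

# E.g. if targeted impressions cost twice as much (CPM 4.0 vs. 2.0),
# the CTR must at least double for targeting to break even.
```

Under this simplified reading, a segment whose targeted CPM is twice the untargeted CPM needs a CTR lift of 2x, which matches the magnitude the abstract describes as unrealistically high for most campaigns.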