Working Paper
The Multilingual Assessment Instrument for Narratives (MAIN) was designed to assess narrative skills in children who acquire one or more languages from birth or from an early age. MAIN is suitable for children aged 3 to 10 and evaluates both comprehension and production of narratives. Its design allows for the assessment of several languages in the same child, as well as for different elicitation modes: Model Story, Retelling, and Telling. MAIN contains four parallel stories, each with a carefully designed six-picture sequence. The stories are controlled for cognitive and linguistic complexity, parallelism in macrostructure and microstructure, and cultural appropriateness and robustness. The instrument was developed on the basis of extensive piloting with more than 550 monolingual and bilingual children aged 3 to 10, across 15 different languages and language combinations. Even though MAIN has not yet been norm-referenced, its standardized procedures can be used for evaluation, intervention and research purposes. MAIN is currently available in the following languages: English, Afrikaans, Albanian, Basque, Bulgarian, Croatian, Cypriot Greek, Danish, Dutch, Estonian, Finnish, French, German, Greek, Hebrew, Icelandic, Italian, Lithuanian, Norwegian, Polish, Russian, Spanish, Standard Arabic, Swedish, Turkish, Vietnamese, and Welsh.
Doing safe by doing good : ESG investing and corporate social responsibility in the U.S. and Europe
(2019)
This paper examines the profitability of investing according to environmental, social and governance (ESG) criteria in the U.S. and Europe. Based on data from 2003 to 2017, we show that a portfolio long in stocks with the highest ESG scores and short in those with the lowest scores yields a significantly negative abnormal return. Interestingly, this is caused by the strong positive return of firms with the lowest ESG activity. As we find that increasing ESG scores reduce firm risk (particularly downside risk), this hints at an insurance-like character of corporate social responsibility: firms with low ESG activity need to offer a corresponding risk premium. This insurance-like perception of ESG is stronger in more volatile capital markets for U.S. firms, but not for European firms. Socially responsible investment may therefore be of varying attractiveness in different market phases.
Do household inflation expectations affect consumption-savings decisions? We link survey data on quantitative inflation expectations to administrative data on income and wealth. We document that households with higher inflation expectations save less. Estimating panel data models with year and household fixed effects, we find that a one percentage point increase in a household's inflation expectation over time is associated with a 250-400 euro reduction in the household's change in net worth per year on average. We also document that households with higher inflation expectations are more likely to acquire a car and acquire higher-value cars. In addition, we provide a quantitative model of household-level inflation expectations.
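The panel estimate described above can be illustrated with a minimal sketch. The function name and the simulated panel below are hypothetical, not the paper's data; the within (double-demeaning) transformation used here removes unit and time fixed effects exactly for balanced panels.

```python
import numpy as np
import pandas as pd

def twoway_fe_slope(df, y, x, unit, time):
    """Slope of y on x after removing unit and time fixed effects via the
    within (double-demeaning) transformation; exact for balanced panels."""
    def within(col):
        s = df[col].astype(float)
        return (s
                - s.groupby(df[unit]).transform("mean")
                - s.groupby(df[time]).transform("mean")
                + s.mean())
    yw, xw = within(y), within(x)
    return float((xw * yw).sum() / (xw * xw).sum())

# Simulated balanced panel with household and year effects and a true
# slope of -300 (euro change in net worth per pp of expected inflation).
rng = np.random.default_rng(0)
n_households, n_years, beta = 50, 10, -300.0
df = pd.DataFrame({
    "hh": np.repeat(np.arange(n_households), n_years),
    "year": np.tile(np.arange(n_years), n_households),
})
df["exp_infl"] = rng.normal(2.0, 1.0, len(df))
hh_fe = rng.normal(0, 500, n_households)[df["hh"]]
year_fe = rng.normal(0, 200, n_years)[df["year"]]
df["d_networth"] = hh_fe + year_fe + beta * df["exp_infl"]

est = twoway_fe_slope(df, "d_networth", "exp_infl", "hh", "year")
```

With no idiosyncratic noise the within-transformed outcome is an exact linear function of the within-transformed regressor, so the estimator recovers the slope exactly.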
We propose a simple modification of the time series filter by Hamilton (2018) that yields reliable and economically meaningful real-time output gap estimates. The original filter relies on 8-quarter-ahead forecast errors of a simple autoregression of log real GDP. While this approach yields a cyclical component of GDP that is hardly revised with new incoming data due to the one-sided filtering approach, it does not cover typical business cycle frequencies evenly: short business cycles are muted and medium-length business cycles are amplified. Further, the estimated trend is as volatile as GDP itself and can thus hardly be interpreted as potential GDP. A simple modification based on the mean of 4- to 12-quarter-ahead forecast errors shares the favorable real-time properties of the Hamilton filter, but leads to a much better coverage of typical business cycle frequencies and a smooth estimated trend. Based on output growth and inflation forecasts and a comparison to revised output gap estimates from policy institutions, we find that real-time output gaps based on the modified Hamilton filter are economically much more meaningful measures of the business cycle than those based on other simple statistical trend-cycle decomposition techniques such as the HP or the Bandpass filter.
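The construction is straightforward to sketch. In the Hamilton (2018) filter, the cyclical component at each date is the h-step-ahead forecast error from regressing y[t+h] on a constant and four lags of y; the modification averages those errors over h = 4, …, 12. The function names below are illustrative, and the simulated series stands in for quarterly log real GDP:

```python
import numpy as np

def hamilton_cycle(y, h):
    """Hamilton (2018): the cyclical component at date t+h is the forecast
    error from regressing y[t+h] on a constant and y[t], y[t-1], y[t-2], y[t-3]."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    X = np.column_stack([np.ones(T - h - 3)]
                        + [y[3 - j: T - h - j] for j in range(4)])
    target = y[3 + h:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta           # errors dated 3+h .. T-1

def modified_hamilton_cycle(y, horizons=range(4, 13)):
    """Modification: mean of the 4- to 12-quarter-ahead forecast errors,
    trimmed so every horizon covers the same dates."""
    s0 = 3 + max(horizons)             # first date covered by all horizons
    errs = [hamilton_cycle(y, h)[s0 - (3 + h):] for h in horizons]
    return np.mean(errs, axis=0)       # cyclical component dated s0 .. len(y)-1

# Example: a simulated random walk with drift as "log real GDP"
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.005, 0.01, 200))
gap = modified_hamilton_cycle(y)       # length 200 - 15 = 185
```

Because each horizon's forecast uses only data available at the forecast origin, the resulting cycle estimate is one-sided and therefore hardly revised as new observations arrive.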
We analyze cyclical co-movement in credit, house prices, equity prices, and long-term interest rates across 17 advanced economies. Using a time-varying multi-level dynamic factor model and more than 130 years of data, we analyze the dynamics of co-movement at different levels of aggregation and compare recent developments to earlier episodes such as the early era of financial globalization from 1880 to 1913 and the Great Depression. We find that joint global dynamics across various financial quantities and prices as well as variable-specific global co-movements are important to explain fluctuations in the data. From a historical perspective, global co-movement in financial variables is not a new phenomenon, but its importance has increased for some variables since the 1980s. For equity prices, global cycles currently play a historically unprecedented role, explaining more than half of the fluctuations in the data. Global cycles in credit and housing have become much more pronounced and longer, but their importance in explaining dynamics has only increased for some economies including the US, the UK and Nordic European countries. We also include GDP in the analysis and find an increasing role for a global business cycle.
There is substantial disagreement about the consequences of the Tax Cuts and Jobs Act (TCJA) of 2017, which constitutes the most extensive tax reform in the United States in more than 30 years. Using a large-scale two-country dynamic general equilibrium model with nominal rigidities, we find that the TCJA increases GDP by about 2% in the medium-run and by about 2.5% in the long-run. The short-run impact depends crucially on the degree and costs of variable capital utilization, with GDP effects ranging from 1 to 3%. At the same time, the TCJA does not pay for itself. In our analysis, the reform decreases tax revenues and raises the debt-to-GDP ratio by about 15 percentage points in the medium-run until 2025. We show that combining the TCJA with spending cuts can dampen the increase in government indebtedness without reducing its expansionary effect.
We investigate the transmission of central bank liquidity to bank deposits and loan spreads in Europe over the January 2006 to June 2010 period. We find evidence consistent with an impaired transmission channel due to bank risk. Central bank liquidity does not translate into lower loan spreads for high-risk banks, even as it lowers deposit rates for both high-risk and low-risk banks. This adversely affects the balance sheets of high-risk bank borrowers, leading to lower payouts, lower capital expenditures, and lower employment. Overall, our results suggest that banks’ capital constraints at the time of an easing of monetary policy pose a challenge to the effectiveness of the bank lending channel and the effectiveness of the central bank as a lender of last resort.
Job loss expectations, durable consumption and household finances : evidence from linked survey data
(2019)
Job security is important for durable consumption and household savings. In surveys, workers express a probability that they will lose their job in the next 12 months. To assess the empirical content of these probabilities, we link the survey data to administrative data on labor market outcomes. Workers predict job loss quite well, in particular those whose job loss is followed by unemployment. Workers with higher job loss expectations acquire cheaper cars, and are less likely to buy new cars. In line with models of precautionary saving, higher job loss expectations are associated with more savings and less exposure to risky assets.
We study how the informativeness of stock prices changes with the presence of high-frequency trading (HFT). Our estimate is based on the staggered start of HFT participation in a panel of international exchanges. With HFT presence, market prices are a less reliable predictor of future cash flows and investment, even more so for longer horizons. Further, firm-level idiosyncratic volatility decreases, and the holdings and trades by institutional investors deviate less from the market-capitalization weighted portfolio as a benchmark. Our results document that the informativeness of prices decreases subsequent to the start of HFT. These findings are consistent with theoretical models of HFTs' ability to anticipate informed order flow, resulting in decreased incentives to acquire fundamental information.
In this note, we first highlight different developments for banks under direct ECB supervision within the SSM that may prompt further investigation by supervisors. We find that banks that were weakly capitalized at the start of direct ECB supervision (1) still face elevated levels of non-performing loans, (2) are less cost-efficient and (3) have reduced their share of subordinated debt financing over recent years. We then stress the importance of continuous, ongoing cost-benefit analysis regarding banking supervision in Europe. We also encourage processes that question existing supervisory practices to ensure lean and efficient banking supervision. Finally, we underline the need for continuous and intensified coordination among regulatory bodies in the Banking Union, since the efficacy of European bank supervision rests on its interplay with many different institutions.
This document was requested by the European Parliament's Committee on Economic and Monetary Affairs. It was originally published on the European Parliament’s webpage.
Do competition and incentives offered to designated market makers (DMMs) improve market liquidity? Using data from NYSE Euronext Paris, we show that an exogenous increase in competition among DMMs leads to a significant decrease in quoted and effective spreads, mainly through a reduction in adverse selection costs. In contrast, changes in incentives, through small changes in rebates and requirements for DMMs, do not have any tangible effect on market liquidity. Our results are of relevance for designing optimal contracts between exchanges and DMMs and for regulatory market oversight.
We show that banks facing relatively high locally non-diversifiable risks in their home region expanded more across states than banks not facing such risks following branching deregulation in the 1990s and 2000s. These banks with high locally non-diversifiable risks also benefit relatively more from deregulation in terms of higher bank stability. Further, these banks expand more into counties where risks are relatively high and positively correlated with risks in their home region, suggesting that they not only diversify but also build on their expertise in local risks when they expand into new regions.
Self-control failure is among the major pathologies (Baumeister et al., 1994) affecting individual investment decisions, yet one that has hardly been measurable in empirical research. We use cigarette addiction identified from checking account transactions to proxy for low self-control and compare over 5,000 smokers to 14,000 nonsmokers. Smokers self-directing their investments trade more frequently, exhibit more biases and achieve lower portfolio returns. We also find that smokers, some of whom might be aware of their limited self-control, exhibit a higher propensity than nonsmokers to delegate decision making to professional advisors and fund managers. We document that such precommitments work successfully.
We explore space improvements in LRP, a polymorphically typed call-by-need functional core language. A relaxed space measure is chosen for the maximal size usage during an evaluation. It abstracts from the details of the implementation via abstract machines, but takes garbage collection into account and can thus be seen as a realistic approximation of space usage. The results are: a context lemma for space-improving translations and for space equivalences; all but one reduction rule of the calculus are shown to be space improvements, and the exceptional one, the copy-rule, is shown to increase space only moderately.
Several further program transformations are shown to be space improvements or space equivalences; in particular, the translation into machine expressions is a space equivalence. These results are a step forward in making predictions about the change in runtime space behavior of optimizing transformations in call-by-need functional languages.
The synchronous pi-calculus is translated into a core language of Concurrent Haskell extended by futures (CHF). The translation simulates the synchronous message-passing of the pi-calculus by sending messages and adding synchronization using Concurrent Haskell's mutable shared-memory locations (MVars). The semantic criterion is a contextual semantics of the pi-calculus and of CHF using may- and should-convergence as observations. The results are equivalence with respect to the observations, full abstraction of the translation of closed processes, and adequacy of the translation on open processes. The translation transports the semantics of the pi-calculus processes under rather strong criteria, since error-free programs are translated into error-free ones, and programs without non-deterministic error possibilities are also translated into programs without non-deterministic error-possibilities. This investigation shows that CHF embraces the expressive power and the concurrency capabilities of the pi-calculus.
Distributed ledger technology, especially in the form of publicly coordinated validation networks such as Ethereum and Bitcoin with their own monetary circles, provides a revealing litmus test for current financial regulatory schemes. The paper highlights the interrelation between distributed coordination and the emission of virtual currency to make sense of the function of the new monetary phenomenon. It then argues for regulating financial services built on the technology to ensure integrity standards. In this respect, it is useful to gear the development of a regulatory scheme towards existing financial regulatory principles. However, future measures by regulators must take the distributed nature of the platforms into account by relying on a “regulated self-regulation” of the community. Finally, the article focuses on the shortcomings of the current EU regulatory regimes, especially the frameworks governing financial services, payment services and electronic money.
Exploiting the natural experiment of German reunification, we examine how consumers adapt to a new environment in their macroeconomic forecasting. We document that East Germans expect higher inflation and make larger forecast errors than West Germans even decades after reunification. Differences in consumption baskets, financial literacy, risk aversion or trust in the central bank cannot fully account for these patterns. We find most support for the explanation that East Germans, who were used to a strong norm of zero inflation, persistently overadjusted the level of their expectations in the face of the initial inflation shock in reunified Germany. Our findings suggest that large changes in the economic environment can permanently impede people's ability to form accurate macroeconomic expectations, with an important role for the interaction of old norms and new experiences around the event.
Policymakers attach an important role to the macroeconomic outlook of households. Using a representative online panel from the U.S., the authors provide individuals with different professional forecasts about the likelihood of a recession and examine how their macroeconomic expectations causally affect their personal economic prospects and their behavior. The authors find that groups with the largest exposure to aggregate risk, such as individuals working in cyclical industries, are most likely to respond to an improved macroeconomic outlook, while a large fraction of the population is unlikely to react.
This paper uses unique administrative data and a quasi-field experiment of exogenous allocation in Sweden to estimate medium- and longer-run effects on financial behavior from exposure to financially literate neighbors. It contributes evidence of causal impact of exposure and of a social multiplier of financial knowledge, but also of unfavorable distributional aspects of externalities. Exposure promotes saving in private retirement accounts and stockholding, especially when neighbors have economics or business education, but only for educated households and when interaction possibilities are substantial. Findings point to transfer of knowledge rather than mere imitation or effects through labor, education, or mobility channels.
The recent sovereign debt crisis in the Eurozone was characterized by a monetary policy constrained by the zero lower bound (ZLB) on nominal interest rates and by several countries facing high risk spreads on their sovereign bonds. How is the government spending multiplier affected by such an economic environment? While prominent results in the academic literature point to high government spending multipliers at the ZLB, higher public indebtedness is often associated with small government spending multipliers. I develop a DSGE model with leverage-constrained banks that captures both features of this economic environment, the ZLB and fiscal stress. In this model, I analyze the effects of government spending shocks. I find that not only are multipliers large at the ZLB, but the presence of fiscal stress can even increase their size. For longer durations of the ZLB, multipliers in this model can be considerably larger than one.
JEL Classification: E32, E44, E62
Recently, Fuest and Sinn (2018) have demanded a change of rules for the Eurozone’s Target 2 payment system, claiming it would violate the Statutes of the European System of Central Banks and of the European Central Bank. The authors present a stylized model based on a set of macro-economic assumptions, and show that Target 2 may lead to loss sharing among national central banks (NCBs), thus violating the no risk-sharing requirement laid out by the Eurosystem Statutes.
In this note, I present an augmented model that incorporates essential features of the micro- and macroprudential regulatory and supervisory regime that today is hard-wired into Europe’s banking system. The model shows that the original no-risk-sharing principle is not necessarily violated during a financial crisis of a member state. Moreover, it shows that under a banking union regime, financial crisis asset value losses at or below the 99.9th percentile are borne by private investors, not by taxpayers, and particularly not by central banks.
Therefore, policy conclusions from the micro-founded model differ significantly from those suggested by Fuest and Sinn (2018).
We propose a shrinkage and selection methodology specifically designed for network inference from high-dimensional data, using a regularised linear regression model with a Spike-and-Slab prior on the parameters. The approach extends to the case where the error terms are heteroscedastic by adding an ARCH-type equation through an approximate Expectation-Maximisation algorithm. The proposed model accounts for two sets of covariates. The first set contains predetermined variables that are not penalised in the model (i.e., the autoregressive component and common factors), while the second set contains all the (lagged) financial institutions in the system, each included with a given probability. The financial linkages are expressed in terms of inclusion probabilities, resulting in a weighted directed network where the adjacency matrix is built “row by row”. In the empirical application, we estimate the network over time using a rolling-window approach on 1248 world financial firms (banks, insurers, brokers and other financial services), both active and dead, from 29 December 2000 to 6 October 2017 at a weekly frequency. Findings show that over time the shape of the out-degree distribution exhibits the typical behavior of financial stress indicators and represents a significant predictor of market returns at the first lag (one week) and the fourth lag (one month).
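The last step of such a procedure — mapping inclusion probabilities into a weighted directed network and reading off out-degrees — can be sketched as follows. The function name and the 0.5 thresholding rule are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def weighted_out_degrees(incl_prob, tau=0.5):
    """Turn a matrix of posterior inclusion probabilities into a weighted
    directed adjacency matrix, built row by row: row i holds the probability
    that each lagged firm j enters firm i's equation, i.e. an edge j -> i.
    The out-degree of firm j is then the column sum of its retained weights."""
    A = np.where(incl_prob >= tau, incl_prob, 0.0)  # keep likely edges only
    np.fill_diagonal(A, 0.0)                        # drop self-loops
    return A.sum(axis=0)

# Toy example with three firms
incl = np.array([[0.9, 0.2, 0.7],
                 [0.6, 0.1, 0.8],
                 [0.4, 0.9, 0.3]])
deg = weighted_out_degrees(incl)  # [0.6, 0.9, 1.5]
```

Repeating this over rolling estimation windows yields the time series of out-degree distributions whose shape the abstract links to financial stress.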
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and the real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model we find that the nominal-to-real transformation is not rejected for the money relation whereas the wealth relation cannot be expressed in real terms.
This paper examines how networks of professional contacts contribute to the development of the careers of executives of North American and European companies. We build a dynamic model of career progression in which career moves may both depend upon existing networks and contribute to the development of future networks. We test the theory on an original dataset of nearly 73,000 executives in over 10,000 firms. In principle, professional networks could be relevant both because they are rewarded by the employer and because they facilitate job mobility. Our econometric analysis suggests that, although there is a substantial positive correlation between network size and executive compensation, with an elasticity of around 20%, almost all of this is due to unobserved individual characteristics. The true causal impact of networks on compensation is closer to an elasticity of 1 or 2% on average, all of it due to an enhanced probability of moving to a higher-paid job. And there appear to be strongly diminishing returns to network size.
Using a unique confidential contract level dataset merged with firm-level asset price data, we find robust evidence that firms' stock market valuations and employment levels respond more to monetary policy announcements the higher the degree of wage rigidity. Data on the renegotiations of collective bargaining agreements allow us to construct an exogenous measure of wage rigidity. We also find that the amplification induced by wage rigidity is stronger for firms with high labor intensity and low profitability, providing evidence of distributional consequences of monetary policy. We rationalize the evidence through a model in which firms in different sectors feature different degrees of wage rigidity due to staggered renegotiations vis-a-vis unions.
This paper analyzes the effect of financial constraints on firms' corporate social responsibility. Exploiting heterogeneity in firms' exposure to a monetary policy shock in the U.S., which reduced financial constraints for some firms, I find that firms increase their environmental responsibility. I use facility-level data to account for unobservable time-varying influences on pollution and find that toxic emissions decrease when parent companies are more exposed to the monetary policy shock. I further find that these facilities are also more likely to implement pollution abatement activities. Examining within-parent company heterogeneity I find that pollution abatement investments center on facilities at greater risk of facing additional costs due to environmental regulation. The findings are consistent with the idea that a reduction in financial constraints reduces pollution as it allows firms to implement pollution abatement measures.
Households buy life insurance as part of their liquidity management. The option to surrender such a policy can serve as a buffer when a household faces a liquidity need. In this study, we investigate empirically which individual and household-specific sociodemographic factors influence the surrender behavior of life insurance policyholders. Based on the Socio-Economic Panel (SOEP), an ongoing wide-ranging representative longitudinal study of around 11,000 private households in Germany, we construct a proxy to identify life insurance surrender in the data. We use this proxy to conduct fixed-effects regressions and support the results with survival analyses. We find that life events that may impose a liquidity shock on the household, such as the birth of a child or a divorce, increase the likelihood of surrendering an existing life insurance policy for an average household in the panel. The acquisition of a dwelling and unemployment are further aspects that can foster life insurance surrender. Our results are robust with respect to different models and hold when conditioning on region-specific trends; they vary, however, across age groups. Our analyses contribute to the existing literature supporting the emergency fund hypothesis. The findings obtained in this study can help life insurers and regulators to detect and understand industry-specific challenges of demographic change.
Higher capital ratios are believed to improve system-wide financial stability through three main channels: (i) higher loss-absorption capacity, (ii) lower moral hazard, (iii) stabilization of the financial cycle if capital ratios are increased during good times. We examine these mechanisms in a laboratory asset market experiment with indebted participants. We find support for the loss-absorption channel: higher capital ratios reduce the bankruptcy rate. However, we do not find support for the moral hazard channel. Higher capital ratios (insignificantly) increase asset price bubbles, an aggregate measure of excessive risk-taking. Additional evidence suggests that bankruptcy aversion explains this surprising result. Finally, the evidence supports the idea that higher capital ratios in good times stabilize the financial cycle.
Whither artificial intelligence? Debating the policy challenges of the upcoming transformation
(2018)
The School of Salamanca, and Iberian late Scholasticism in general, had the merit of transposing the wisdom of medieval scholasticism into the coordinates of early modernity. Due to the economic growth that followed the discovery of America, economic terms and moral problems became a central focus for moral theologians. In this article, I consider important key economic concepts that deliver a surprising wealth of insights into the modernization brought about by the leading scholars of the time. Social mobility, the principle of majority decision, the inviolability of property, human rights of the person, limited political power of the pope, and other key concepts that were decisive for the development of democracy and modernity are to be found in the works of the School of Salamanca in connection with economic issues.
Distributed ledger technologies rely on consensus protocols confronting traders with random waiting times until the transfer of ownership is accomplished. This time consuming settlement process exposes arbitrageurs to price risk and imposes limits to arbitrage. We derive theoretical arbitrage boundaries under general assumptions and show that they increase with expected latency, latency uncertainty, spot volatility, and risk aversion. Using high-frequency data from the Bitcoin network, we estimate arbitrage boundaries due to settlement latency of on average 124 basis points, covering 88% of the observed cross-exchange price differences. Settlement through decentralized systems thus induces non-trivial frictions affecting market efficiency and price formation.
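The comparative statics stated above can be illustrated with a stylized sketch that is not the paper's model: assume an arbitrageur with exponential utility (risk aversion gamma) who locks in a price gap but bears price risk while settlement takes a random time tau, during which the price diffuses with volatility sigma. The break-even gap is then d* = (1/gamma) · log E[exp(gamma² sigma² tau / 2)], which rises with expected latency, with latency uncertainty (by Jensen's inequality), with volatility, and with risk aversion:

```python
import numpy as np

def arbitrage_boundary(gamma, sigma, tau_draws):
    """Break-even price gap for an exponential-utility arbitrageur facing a
    random settlement latency tau (array of draws), during which the price
    change is normal with mean zero and variance sigma**2 * tau:
    d* = (1/gamma) * log E[exp(gamma**2 * sigma**2 * tau / 2)]."""
    return np.log(np.mean(np.exp(0.5 * gamma**2 * sigma**2 * tau_draws))) / gamma

gamma, sigma = 2.0, 0.1
tau_base  = np.full(10_000, 1.0)        # deterministic one-period latency
tau_slow  = np.full(10_000, 2.0)        # higher expected latency
tau_risky = np.tile([0.5, 1.5], 5_000)  # same mean latency, but uncertain

b_base  = arbitrage_boundary(gamma, sigma, tau_base)   # = 0.01 here
b_slow  = arbitrage_boundary(gamma, sigma, tau_slow)   # wider: longer wait
b_risky = arbitrage_boundary(gamma, sigma, tau_risky)  # wider: Jensen effect
```

Under these illustrative parameters the baseline boundary is 100 basis points, in the same order of magnitude as the 124 basis points the paper estimates from Bitcoin data, though the parameter values here are arbitrary.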
Much ado about nothing : a study of differential pricing and liquidity of short and long term bonds
(2018)
Are yields of long-maturity bonds distorted by demand pressure of clientele investors, regulatory effects, or default, flight-to-safety or liquidity premiums? Using data on German nominal bonds between 2005 and 2015, we study the differential pricing and liquidity of short and long maturity bonds. We find statistically significant, but economically negligible segmentation in yields and some degree of liquidity segmentation of short-term versus long-term bonds. These results have important policy implications for the €17.5 trillion European pension and insurance industries: long maturity bond yields seem appropriate for the valuation of long-term liabilities.
A number of recent studies have concluded that consumer spending patterns over the month are closely linked to the timing of income receipt. This correlation is interpreted as evidence of hyperbolic discounting. I re-examine patterns of spending in the diary sample of the U.S. Consumer Expenditure Survey, incorporating information on the timing of the main consumption commitment for most households - their monthly rent or mortgage payment. I find that non-durable and food spending increase by 30-48% on the day housing payments are made, with smaller increases in the days after. Moreover, households with weekly, biweekly and monthly income streams but the same timing of rent/mortgage payments have very similar consumption patterns. Exploiting variation in income, I find that households with extra liquidity decrease non-durable spending around housing payments, especially those households with a large budget share of housing.
A recent US Treasury regulation allowed deferred longevity income annuities to be included in pension plan menus as a default payout solution, yet little research has investigated whether more people should convert some of the $15 trillion they hold in employer-based defined contribution plans into lifelong income streams. We investigate this innovation using a calibrated lifecycle consumption and portfolio choice model embodying realistic institutional considerations. Our welfare analysis shows that defaulting a small portion of retirees’ 401(k) assets (over a threshold) is an attractive way to enhance retirement security, enhancing welfare by up to 20% of retiree plan accruals.
We provide the first partner tenure and rotation analysis for a large cross-section of U.S. publicly listed firms over an extended period. We analyze the effects on audit quality as well as economic tradeoffs with respect to audit hours and fees. On average, we find no evidence for audit quality declines over the tenure cycle and, consistent with the former, little support for fresh-look benefits after five-year mandatory rotations. Nevertheless, partner rotations have significant economic consequences. We find increases in audit fees and decreases in audit hours over the tenure cycle, which differ by partner experience, client size, and competitiveness of the local audit market. Our findings are consistent with efforts by the audit firms to minimize disruptions and audit failures around mandatory rotations. We also analyze special circumstances, such as audit firm or audit team switches and early partner rotations. We show that these situations are more disruptive and more likely to exhibit audit quality effects. In particular, we find that low quality audits give rise to early engagement partner rotations and in this sense have (career) consequences for partners.
Manipulative communications touting stocks are common in capital markets around the world. Although the price distortions created by so-called “pump-and-dump” schemes are well known, little is known about the investors in these frauds. By examining 421 “pump-and-dump” schemes between 2002 and 2015 and a proprietary set of trading records for over 110,000 individual investors from a major German bank, we provide evidence on the participation rate, magnitude of the investments, losses, and the characteristics of the individuals who invest in such schemes. Our evidence suggests that participation is quite common and involves sizable losses, with nearly 6% of active investors participating in at least one “pump-and-dump” and an average loss of nearly 30%. Moreover, we identify several distinct types of investors, some of which should not be viewed as falling prey to these frauds. We also show that portfolio composition and past trading behavior can better explain participation in touted stocks than demographics. Our analysis offers insights into the challenges associated with designing effective investor protection against market manipulation.
An important question in banking is how strict supervision affects bank lending and in turn local business activity. Forcing banks to recognize losses could choke off lending and amplify local economic woes, especially after financial crises. But stricter supervision could also lead to changes in how banks assess loans and manage their loan portfolios. Estimating such effects is challenging. We exploit the extinction of the thrift regulator (OTS) – a large change in prudential supervision, affecting ten percent of all U.S. depository institutions. Using this event, we analyze economic links between strict supervision, bank lending and business activity. We first show that the OTS replacement indeed resulted in stricter supervision of former OTS banks. We then analyze the lending effects of this regulatory change and show that former OTS banks increase small business lending by approximately 10 percent. This increase stems primarily from well capitalized banks and those more affected by the new regime. These findings suggest that stricter supervision operates not only through capital but can also overcome frictions in bank management, leading to more lending and a reallocation of loans. Consistent with the latter, we find increases in business entry and exit in counties with greater exposure to OTS banks.
The use of evidence and economic analysis in policymaking is on the rise, and accounting standard setting and financial regulation are no exception. This article discusses the promise of evidence-based policymaking in accounting and financial markets as well as the challenges and opportunities for research supporting this endeavor. In principle, using sound theory and robust empirical evidence should lead to better policies and regulations. But despite its obvious appeal and substantial promise, evidence-based policymaking is easier demanded than done. It faces many challenges related to the difficulty of providing relevant causal evidence, lack of data, the reliability of published research, and the transmission of research findings. Overcoming these challenges requires substantial infrastructure investments for generating and disseminating relevant research. To illustrate this point, I draw parallels to the rise of evidence-based medicine. The article provides several concrete suggestions for the research process and the aggregation of research findings if scientific evidence is to inform policymaking. I discuss how policymakers can foster and support policy-relevant research, chiefly by providing and generating data. The article also points to potential pitfalls when research becomes increasingly policy-oriented.
We examine whether the economy can be insured against banking crises with deposit and loan contracts contingent on macroeconomic shocks. We study banking competition and show that the private sector insures the banking system through such contracts, and banking crises are avoided, provided that failed banks are not bailed out. When risks are large, banks may shift part of them to depositors. In contrast, when banks are bailed out by the next generation, depositors receive non-contingent contracts with high interest rates, while entrepreneurs obtain loan contracts that demand high repayment in good times and low repayment in bad times. As a result, the present generation overinvests, and banks generate large macroeconomic risks for future generations, even if the underlying productivity risk is small or zero. We conclude that a joint policy package of orderly default procedures and contingent contracts is a promising way to reduce the threat of a fragile banking system.
Following the introduction of the one-child policy in China, the capital-labor (K/L) ratio of China increased relative to that of India, and, simultaneously, FDI inflows relative to GDP for China versus India declined. These observations are explained in the context of a simple neoclassical OLG paradigm. The adjustment mechanism works as follows: the reduction in the growth rate of the (urban) labor force due to the one-child policy permanently increases the capital per worker inherited from the previous generation. The resulting increase in China's (domestic K)/L thus "crowds out" the need for FDI in China relative to India. Our paper is a contribution to the nascent literature exploring demographic transitions and their effects on FDI flows.
Based on OECD evidence, equity/housing-price busts and credit crunches are followed by substantial increases in public consumption. These increases in unproductive public spending lead to increases in distortionary marginal taxes, a policy in sharp contrast with presumably optimal Keynesian fiscal stimulus after a crisis. Here we claim that this seemingly adverse policy selection is optimal under rational learning about the frequency of rare capital-value busts. Bayesian updating after a bust implies massive belief jumps toward pessimism, with investors and policymakers believing that busts will be arriving more frequently in the future. Lowering taxes would be like kicking a sick horse to make it stand up and run, since pessimistic markets would be unwilling to invest enough under any temporarily generous tax regime.
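The belief-jump mechanism the abstract describes can be made concrete with a minimal Beta-Bernoulli sketch. This is an illustrative assumption on my part, not the paper's learning model; the prior parameters and observation counts below are hypothetical.

```python
# Hedged sketch (not the paper's model): Bayesian updating of the perceived
# frequency of rare capital-value busts under a Beta-Bernoulli scheme.
# Prior Beta(a, b) encodes an initial belief that busts arrive roughly
# once per 100 periods; parameter values are illustrative assumptions.

def posterior_mean(busts, periods, a=1.0, b=99.0):
    """Posterior mean bust probability under a Beta(a, b) prior
    after observing `busts` busts in `periods` periods."""
    return (a + busts) / (a + b + periods)

# Belief after 50 calm periods with no bust observed.
before = posterior_mean(busts=0, periods=50)

# Belief after one bust occurs in period 51.
after = posterior_mean(busts=1, periods=51)

# A single rare event moves the belief sharply toward pessimism.
jump = after - before
```

Because busts are rare, one observation nearly doubles the posterior bust probability, which is the kind of discrete jump toward pessimism the argument relies on.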
We present empirical evidence on the heterogeneity in monetary policy transmission across countries with different home ownership rates. We use household-level data together with shocks to the policy rate identified from high-frequency data. We find that housing tenure reacts more strongly to unexpected changes in the policy rate in Germany and Switzerland –the OECD countries with the lowest home ownership rates– compared with existing evidence for the U.S. An unexpected decrease in the policy rate by 25 basis points increases the home ownership rate by 0.8 percentage points in Germany and by 0.6 percentage points in Switzerland. The response of non-housing consumption in Switzerland is less heterogeneous across renters and mortgagors, and has a different pattern across age groups than in the U.S. We discuss economic explanations for these findings and implications for monetary policy.
In 1983, Brian Henderson published an article that examined various types of narrative structure in film, including flashbacks and flashforwards. After analyzing a whole spectrum of techniques capable of effecting a transition between past and present – blurs, fades, dissolves, and so on – he concluded: "Our discussions indicate that cinema has not (yet) developed the complexity of tense structures found in literary works". His "yet" (in parentheses) was an instance of laudable caution, as very soon – in some ten–fifteen years – the situation would change drastically, and temporal twists would become a trademark of a new genre that has not (yet) acquired a standardized name: "modular narratives", "puzzle films", and "complex films" are among the labels used.
Asset transaction prices sampled at high frequency are much staler than one might expect, in the sense that they frequently lack new updates and thus show zero returns. In this paper, we propose a theoretical framework for formalizing this phenomenon. It hinges on the existence of a latent continuous-time stochastic process pt valued in the open interval (0, 1), which represents at any point in time the probability of the occurrence of a zero return. Using a standard infill asymptotic design, we develop an inferential theory for nonparametrically testing the null hypothesis that pt is constant over one day. Under the alternative, which encompasses a semimartingale model for pt, we develop non-parametric inferential theory for the probability of staleness that includes the estimation of various integrated functionals of pt and its quadratic variation. Using a large dataset of stocks, we provide empirical evidence that the null of a constant probability of staleness is strongly rejected. We then show that the variability of pt is mainly driven by transaction volume and is almost unaffected by bid-ask spread and realized volatility.
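The hypothesis that the staleness probability pt is constant within a day can be illustrated with a crude finite-sample analogue. This is my own simplification, not the paper's infill-asymptotic test: it compares zero-return proportions in two halves of a trading day with a two-sample z-statistic, and the counts used are hypothetical.

```python
import math

# Hedged sketch (an illustration, not the paper's test): a two-sample
# z-statistic for whether the probability of a zero return (staleness)
# is the same in two halves of a trading day.

def two_proportion_z(zeros1, n1, zeros2, n2):
    """z-statistic for equality of two zero-return proportions."""
    p1, p2 = zeros1 / n1, zeros2 / n2
    pooled = (zeros1 + zeros2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: morning half has 30 zero returns out of 100
# intraday returns; afternoon half has 60 out of 100.
z = two_proportion_z(30, 100, 60, 100)
reject_constancy = abs(z) > 1.96  # reject at the 5% level
```

With these hypothetical counts the statistic is far in the rejection region, mirroring the paper's finding that constancy of the staleness probability is rejected in the data.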
Through the lens of market participants' objective to minimize counterparty risk, we provide an explanation for the reluctance to clear derivative trades in the absence of a central clearing obligation. We develop a comprehensive understanding of the benefits and potential pitfalls with respect to a single market participant's counterparty risk exposure when moving from a bilateral to a clearing architecture for derivative markets. Previous studies suggest that central clearing is beneficial for single market participants in the presence of a sufficiently large number of clearing members. We show that three elements can render central clearing harmful for a market participant's counterparty risk exposure regardless of the number of its counterparties: 1) correlation across and within derivative classes (i.e., systematic risk), 2) collateralization of derivative claims, and 3) loss sharing among clearing members. Our results have substantial implications for the design of derivatives markets, and highlight that recent central clearing reforms might not incentivize market participants to clear derivatives.
A tale of one exchange and two order books: effects of fragmentation in the absence of competition
(2018)
Exchanges nowadays routinely operate multiple, almost identically structured limit order markets for the same security. We study the effects of such fragmentation on market performance using a dynamic model where agents trade strategically across two identically-organized limit order books. We show that fragmented markets, in equilibrium, offer higher welfare to intermediaries at the expense of investors with intrinsic trading motives, and lower liquidity than consolidated markets. Consistent with our theory, we document improvements in liquidity and lower profits for liquidity providers when Euronext, in 2009, consolidated its order flow for stocks traded across two country-specific and identically-organized order books into a single order book. Our results suggest that competition in market design, not fragmentation, drives previously documented improvements in market quality when new trading venues emerge; in the absence of such competition, market fragmentation is harmful.
This paper presents new evidence on the expectation formation process from a Dutch household survey. Households become too optimistic about their future income after their income has improved, consistent with the over-extrapolation of their experience. We show that this effect of experience is persistent and that households over-extrapolate income losses more than income gains. Furthermore, older households over-extrapolate more, suggesting that they did not learn over time to form more accurate expectations. Finally, we study the relationship between expectation errors and consumption. We find that more over-optimistic households intend to consume more and subsequently report higher consumption, even though they do not consume as much as they intended to. These results suggest that over-extrapolation hurts consumers and amplifies business cycles.
Popularity/Prestige
(2018)
What is the canon? Usually this question is just a proxy for something like, "Which works are in the canon?" But the first question is not just a concise version of the second, or at least it doesn’t have to be. Instead, it can ask what the structure of the canon is - in other words, when things are in the canon, what are they in? This question came to the fore during the project that resulted in Pamphlet 11. The members of that group were looking for morphological differences between the canon and the archive. The latter they define, straightforwardly and capaciously, as "that portion of published literature that has been preserved—in libraries and elsewhere." The canon is a slipperier concept; the authors speak instead of multiple canons, like the books preserved in the Chadwyck-Healey Nineteenth-Century Fiction Collection, the constituents of the six different "best-twentieth century novels" lists analyzed by Mark Algee-Hewitt and Mark McGurl in Pamphlet 8, authors included in the British Dictionary of National Biography, and so forth. [...] This last conundrum points the way out of these difficulties and into a workable model of the structure of the canon. It suggests two different ways of entering the canon: being read by many and being prized by an elite few—or, to use the terms arrived at in Pamphlet 11, popularity and prestige. With these two dimensions, we arrive at a canonical space [...].
The propagation of regional shocks in housing markets: evidence from oil price shocks in Canada
(2018)
Shocks to the demand for housing that originate in one region may seem important only for that regional housing market. We provide evidence that such shocks can also affect housing markets in other regions. Our analysis focuses on the response of Canadian housing markets to oil price shocks. Oil price shocks constitute an important source of exogenous regional variation in income in Canada because oil production is highly geographically concentrated. We document that, at the national level, real oil price shocks account for 11% of the variability in real house price growth over time. At the regional level, we find that unexpected increases in the real price of oil raise housing demand and real house prices not only in oil-producing regions, but also in other regions. We develop a theoretical model of the propagation of real oil price shocks across regions that helps understand this finding. The model differentiates between oil-producing and non-oil-producing regions and incorporates multiple sectors, trade between provinces, government redistribution, and consumer spending on fuel. We empirically confirm the model prediction that oil price shocks are propagated to housing markets in non-oil-producing regions by the government redistribution of oil revenue and by increased interprovincial trade.
We analytically characterize optimal monetary policy for an augmented New Keynesian model with a housing sector. In a setting where the private sector has rational expectations about future housing prices and inflation, optimal monetary policy can be characterized without making reference to housing price developments: commitment to a 'target criterion' that refers to inflation and the output gap only is optimal, as in the standard model without a housing sector. When the policymaker is concerned with potential departures of private sector expectations from rational ones and seeks to choose a policy that is robust against such possible departures, then the optimal target criterion must also depend on housing prices. In the empirically realistic case where housing is subsidized and where monopoly power causes output to fall short of its optimal level, the robustly optimal target criterion requires the central bank to 'lean against' housing prices: following unexpected housing price increases, policy should adopt a stance that is projected to undershoot its normal targets for inflation and the output gap, and similarly aim to overshoot those targets in the case of unexpected declines in housing prices. The robustly optimal target criterion does not require that policy distinguish between 'fundamental' and 'non-fundamental' movements in housing prices.
We establish that the labor market helps discipline asset managers via the impact of fund liquidations on their careers. Using hand-collected data on 1,948 professionals, we find that top managers working for funds liquidated after persistently poor relative performance suffer demotion coupled with a significant loss in imputed compensation. Scarring effects are absent when liquidations are preceded by normal relative performance or involve mid-level employees. Seen through the lens of a model with moral hazard and adverse selection, these results can be ascribed to reputation loss rather than bad luck. The findings suggest that performance-induced liquidations supplement compensation-based incentives.
In talent-intensive jobs, workers’ quality is revealed by their performance. This enhances productivity and earnings, but also increases layoff risk. Firms cannot insure workers against this risk if they compete fiercely for talent. In this case, the more risk-averse workers will choose less quality-revealing jobs. This lowers expected productivity and salaries. Public unemployment insurance corrects this inefficiency, enhancing employment in talent-sensitive industries, consistent with international evidence. Unemployment insurance dominates legal restrictions on firms’ dismissals, which penalize more talent-sensitive firms and thus depress expected productivity. Finally, unemployment insurance fosters education, by encouraging investment in risky human capital that enhances talent discovery.
We assess the relationship between finance and growth over the period 1980-2014. We estimate a cross-country growth regression for 48 countries during 20 periods of 15 years starting in 1980 (to 1995) and ending in 1999 (to 2014). We use OLS and IV estimations and we find that: 1) overall financial development had a positive effect on economic growth during all periods of our sample, i.e., we confirm that from 1980 to 2014 financial services provided by the various financial systems were significant (to various degrees) for firm creation, industrial expansion and economic growth; but that, 2) the structure of financial markets was particularly relevant for economic growth until the financial crisis; while 3) the structure of the banking sector has played a major role since then; and finally that, 4) the legal system is the primary determinant of the effectiveness of the overall financial system in facilitating innovation and growth in (almost) all of our sample period. Hence, overall our results suggest that the relationship between finance and growth matters but also that it varies over time in strength and in sector origination.
JEL Classification: O16, G16, G20.
Motivated by the observation that survey expectations of stock returns are inconsistent with rational return expectations under real-world probabilities, we investigate whether alternative expectations hypotheses entertained in the asset pricing literature are consistent with the survey evidence. We empirically test (1) the notion that survey forecasts constitute rational but risk-neutral forecasts of future returns, and (2) the notion that survey forecasts are ambiguity averse/robust forecasts of future returns. We find that these alternative hypotheses are also strongly rejected by the data, albeit for different reasons. Hypothesis (1) is rejected because survey return forecasts are not in line with risk-free interest rates and because survey expected excess returns are predictable. Hypothesis (2) is rejected because agents are not always pessimistic about future returns, but instead often display overly optimistic return expectations. We speculate as to what kind of expectations theories might be consistent with the available survey evidence.
Europe is a key normative power. Its legitimacy as a force for ensuring the reign of rule of law in international relations is unparalleled. It also packs an economic punch. In data protection and the fight against cybercrime, European norms have been successfully globalized. The time is right to take the next step: Europe must now become the international normative leader for developing a new deal on internet governance. To ensure this, European powers should commit to rules that work in security, economic development and human rights on the internet and implement them in a reinvigorated IGF.
This paper argues that the introduction of the Banking Recovery and Resolution Directive (BRRD) improved market discipline in the European bank market for unsecured debt. The different impact of the BRRD on bank bonds provides a quasi-natural experiment that allows us to study the effect of the BRRD within banks using a difference-in-differences approach. Identification is based on the fact that (otherwise identical) bonds of a given bank maturing before 2016 are explicitly protected from BRRD bail-in. The empirical results are consistent with the hypothesis that debt holders actively monitor banks and that the BRRD diminished bail-out expectations. Bank bonds subject to BRRD bail-in carry a 10 basis points bail-in premium in terms of the yield spread. While there is some evidence that the bail-in premium is more pronounced for non-GSIB banks and banks domiciled in peripheral European countries, weak capitalization is the main driver.
The authors relax the standard assumption in the dynamic stochastic general equilibrium (DSGE) literature that exogenous processes are governed by AR(1) processes and estimate ARMA(p,q) orders and parameters of exogenous processes. Methodologically, they contribute to the Bayesian DSGE literature by using Reversible Jump Markov Chain Monte Carlo (RJMCMC) to sample from the unknown ARMA orders and their associated parameter spaces of varying dimensions.
In estimating the technology process in the neoclassical growth model using postwar US GDP data, they cast considerable doubt on the standard AR(1) assumption in favor of higher order processes. They find that the posterior concentrates density on hump-shaped impulse responses for all endogenous variables, consistent with alternative empirical estimates and the rigidities behind many richer structural models. Sampling from noninvertible MA representations, a negative response of hours to a positive technology shock is contained within the posterior credible set. While the posterior contains significant uncertainty regarding the exact order, the results are insensitive to the choice of data filter; this contrasts with the authors’ ARMA estimates of GDP itself, which vary significantly depending on the choice of HP or first difference filter.
What institutional arrangements for an independent central bank with a price stability mandate promote good policy outcomes when unconventional policies become necessary? Unconventional monetary policy poses challenges. The large scale asset purchases needed to counteract the zero lower bound on nominal interest rates have uncomfortable fiscal and distributional consequences and require central banks to assume greater risks on their balance sheets.
In his paper, Athanasios Orphanides draws lessons from the experience of the Bank of Japan (BoJ) since the late 1990s for the institutional design of independent central banks. He comes to the conclusion that lack of clarity on the precise definition of price stability, coupled with concerns about the legitimacy of large balance sheet expansions, hinders policy: It encourages the central bank to eschew the decisive quantitative easing needed to reflate the economy and instead to accommodate too-low inflation. The BoJ’s experience with the zero lower bound suggests important benefits from a clear definition of price stability as a symmetric 2% goal for inflation, which the Bank adopted in 2013.
The paper uses an example to illustrate the importance of consistency between the empirical measurement and the concept of variables in estimated macroeconomic models. Since standard New Keynesian models do not account for demographic trends and sectoral shifts, the authors propose adjusting the hours worked per capita series used to estimate such models to enhance the consistency between the data and the model. Without this adjustment, low frequency shifts in hours lead to unreasonable trends in the output gap, caused by the close link between hours and the output gap in such models.
The retirement wave of baby boomers, for example, lowers U.S. aggregate hours per capita, which leads to erroneous permanently negative output gap estimates following the Great Recession. After correcting hours for changes in the age composition, the estimated output gap instead closes gradually in the years following the Great Recession.
Financial market interactions can lead to large and persistent booms and recessions. Instability is an inherent threat to economies with speculative financial markets. A central bank’s interest rate setting can amplify the expectation feedback in the financial market and this can lead to unstable dynamics and excess volatility. The paper suggests that policy institutions may be well-advised to handle tools like asset price targeting with care since such instruments might add a structural link between asset prices and macroeconomic aggregates. Neither stock prices nor stock indices are good indicators on which to base decisions.
The level of capital tax gains has high explanatory power regarding the question of what drives economic inequality. On this basis, the authors develop a simple, yet micro-founded portfolio selection model to explain the dynamics of wealth inequality given empirical tax series in the US. The results emphasize that the level and the transition speed of wealth inequality depend crucially on the degree of capital taxation. The projections indicate that, continuing on the present path of capital taxation in the US, the gap between rich and poor will shrink, whereas “massive” tax cuts would further increase the degree of wealth concentration.
We investigate the characteristics of infrastructure as an asset class from an investment perspective of a limited partner. While non-U.S. institutional investors gain exposure to infrastructure assets through a mix of direct investments and private fund vehicles, U.S. investors predominantly invest in infrastructure through private funds. We find that the stream of cash flows delivered by private infrastructure funds to institutional investors is very similar to that delivered by other types of private equity, as reflected by the frequency and amounts of net cash flows. U.S. public pension funds perform worse than other institutional investors in their infrastructure fund investments, although they are exposed to underlying deals with very similar project stage, concession terms, ownership structure, industry, and geographical location. By selecting funds that invest in projects with poor financial performance, U.S. public pension funds have created an implicit subsidy to infrastructure as an asset class, which we estimate within the range of $730 million to $3.16 billion per year depending on the benchmark.
Direct financing of consumer credit by individual investors or non-bank institutions through marketplace lending is a relatively new phenomenon in financial markets. The emergence of online platforms has made this type of financial intermediation widely available. This paper analyzes the performance of marketplace lending using proprietary cash flow data for each individual loan from the largest platform, Lending Club. While individual loan characteristics would be important for amateur investors holding a few loans, sophisticated lenders, including institutional investors, usually form broad portfolios to benefit from diversification. We find high risk-adjusted performance of approximately 40 basis points per month for these basic loan portfolios. This abnormal performance indicates that Lending Club, and similar marketplace lenders, are likely to attract capital to finance a growing share of the consumer credit market. In the absence of a competitive response from traditional credit providers, these loans lower costs to the ultimate borrowers and increase returns for the ultimate lenders.
We study the relevance of signaling and marketing as explanations for the discount control mechanisms that a closed-end fund may choose to adopt in its prospectus. These policies are designed to narrow the potential gap between share price and net asset value, measured by the fund’s discount. The two most common discount control mechanisms are explicit discretion to repurchase shares based on the magnitude of the fund discount and mandatory continuation votes that provide shareholders the opportunity to liquidate the fund. We find very limited evidence that a discount control mechanism serves as a costly signal of information. Funds with mandatory voting are not more likely to delist than the rest of the CEFs in general or whenever the fund discount is large. Similarly, funds that explicitly discuss share repurchases as a potential response do not subsequently buy back shares more often when discounts do increase. Instead, the existence of these policies is more consistent with marketing explanations because the policies are associated with an increased probability of issuing more equity in subsequent periods.
This paper investigates how biases in macroeconomic forecasts are associated with economic surprises and market responses across asset classes around US data announcements. We find that the skewness of the distribution of economic forecasts is a strong predictor of economic surprises, suggesting that forecasters behave strategically (rational bias) and possess private information. Our results also show that consensus forecasts of US macroeconomic releases embed anchoring. Under these conditions, both economic surprises and the returns of assets that are sensitive to macroeconomic conditions are predictable. Our findings indicate that local equities and bond markets are more predictable than foreign markets, currencies and commodities. Economic surprises are found to link to asset returns very distinctively through the stages of the economic cycle, whereas they strongly depend on economic releases being inflation- or growth-related. Yet, when forecasters fail to correctly forecast the direction of economic surprises, regret becomes a relevant cognitive bias to explain asset price responses. We find that the behavioral and rational biases encountered in US economic forecasting also exist in continental Europe, the United Kingdom and Japan, albeit to a lesser extent.
In the secondary art market, artists play no active role. This allows us to isolate cultural influences on the demand for female artists’ work from supply-side factors. Using 1.5 million auction transactions in 45 countries, we document a 47.6% gender discount in auction prices for paintings. The discount is higher in countries with greater gender inequality. In experiments, participants are unable to guess the gender of an artist simply by looking at a painting and they vary in their preferences for paintings associated with female artists. Women's art appears to sell for less because it is made by women.
While record-setting prices at art auctions receive headline news coverage, artists typically do not receive any direct proceeds from those sales. Early-stage creative work in any field is perennially difficult to value, but the valuation, reward, and incentivization of artistic labor are particularly fraught. A core challenge in studying the real return on artists' work is the extreme difficulty of accessing data from when an artwork was first sold. Galleries keep private records that are difficult to access and to match to public auction results. This paper, for the first time, uses archivally sourced primary-market records for the artists Jasper Johns and Robert Rauschenberg. Although this approach restricts the size of the data set, it yields much more accurate returns on art than typical regression and hedonic models. We find that if Johns and Rauschenberg had retained 10% equity in their work when it was first sold, the returns to them when the work was resold at auction would have outperformed the US S&P 500 by between 2 and 986 times. The implications of this work extend to policy recommendations regarding secondary art market sales, entrepreneurial strategies using blockchain technology, and how we compensate creative work.
We study the introduction of single-market liquidity provider incentives in fragmented securities markets. Specifically, we investigate whether fee rebates for liquidity providers enhance liquidity on the introducing market and thereby increase its competitiveness and market share. Further, we analyze whether single-market liquidity provider incentives increase the overall market liquidity available to market participants. To this end, we measure the specific liquidity contribution of individual markets to aggregate liquidity in the fragmented market environment. While the liquidity and market share of the venue introducing incentives increase, we find no significant effect on the turnover and liquidity of the market as a whole.
Reliability and relevance of fair values: private equity investments and investee fundamentals
(2018)
We directly test the reliability and relevance of fair values reported by listed private equity firms (LPEs), where the unit of account for the fair value measurement (FVM) is an investment stake in an individual investee company. FVMs are observable for multiple investment stakes, fair values are economically important, and granular data on the investee economic fundamentals that should underpin fair values are available in public disclosures. We find that LPE fund managers determine valuations based on accounting-based fundamentals (equity book value and net income) that are in line with those investors derive for listed companies. Additionally, our findings suggest that LPE fund managers apply a lower valuation weight to investee net income if direct market inputs are unobservable during investment value estimation. We interpret these findings as evidence that LPE fund managers do not mechanically apply market valuation weights for publicly traded investees when determining valuations of non-listed ones. We also document that the judgments LPE fund managers apply when determining investee valuations appear to be perceived as reliable by their investors.
We study the impact of transparency on liquidity in OTC markets. We do so by providing an analysis of liquidity in a corporate bond market without trade transparency (Germany), and comparing our findings to a market with full post-trade disclosure (the U.S.). We employ a unique regulatory dataset of transactions of German financial institutions from 2008 until 2014 to find that: First, overall trading activity is much lower in the German market than in the U.S. Second, similar to the U.S., the determinants of German corporate bond liquidity are in line with search theories of OTC markets. Third, surprisingly, frequently traded German bonds have transaction costs that are 39-61 bp lower than a matched sample of bonds in the U.S. Our results support the notion that, while market liquidity is generally higher in transparent markets, a sub-set of bonds could be more liquid in more opaque markets because of investors "crowding" their demand into a small number of more actively traded securities.
This paper analyzes how the combination of borrowing constraints and idiosyncratic risk affects the equity premium in an overlapping generations economy. I find that introducing a zero-borrowing constraint in an economy without idiosyncratic risk increases the equity premium by 70 percent, which means that the mechanism described in Constantinides, Donaldson, and Mehra (2002) is dampened because of the large number of generations and production. With social security the effect of the zero-borrowing constraint is a lot weaker. More surprisingly, when I introduce idiosyncratic labor income risk in an economy without a zero-borrowing constraint, the equity premium increases by 50 percent, even though the income shocks are independent of aggregate risk and are not permanent. The reason is that idiosyncratic risk makes the endogenous natural borrowing limits much tighter, so that they have a similar effect to an exogenously imposed zero-borrowing constraint. This intuition is confirmed when I add idiosyncratic risk in an economy with a zero-borrowing constraint: neither the equity premium nor the Sharpe ratio change, because the zero-borrowing constraint is already tighter than the natural borrowing limits that result when idiosyncratic risk is added.
We propose a spatiotemporal approach for modeling risk spillovers using time-varying proximity matrices based on observable financial networks and introduce a new bilateral specification. We study covariance stationarity and identification of the model, and analyze consistency and asymptotic normality of the quasi-maximum-likelihood estimator. We show how to isolate risk channels and discuss how to compute target exposures that reduce system variance. An empirical analysis of Euro-area cross-country holdings shows that Italy and Ireland are key players in spreading risk, that France and Portugal are the major risk receivers, and uncovers Spain's non-trivial role as a risk middleman.
We show that bond purchases undertaken in the context of quantitative easing efforts by the European Central Bank created a large mispricing between the market for German and Italian government bonds and their respective futures contracts. On top of the direct effect of the buying pressure on bond prices, we show three indirect effects through which the scarcity of bonds resulting from the asset purchases drove a wedge between the futures contracts and the underlying bonds: the deterioration of bond market liquidity, the increased bond specialness on the repurchase agreement market, and the greater uncertainty about bond availability as collateral.
We study the role of various trader types in providing liquidity in spot and futures markets based on complete order-book and transactions data as well as cross-market trader identifiers from the National Stock Exchange of India for a single large stock. During normal times, short-term traders who carry little inventory overnight are the primary intermediaries in both spot and futures markets, and changes in futures prices Granger-cause changes in spot prices. However, during two days of fast crashes, Granger-causality ran both ways. Both crashes were due to large-scale selling by foreign institutional investors in the spot market. Buying by short-term traders and cross-market traders was insufficient to stop the crashes. Mutual funds, patient traders with better trade-execution quality who were initially slow to move in, eventually bought sufficient quantities leading to price recovery in both markets. Our findings suggest that market stability requires the presence of well-capitalized standby liquidity providers.
An important assumption underlying the designation of some insurers as systemically important is that their overlapping portfolio holdings can result in common selling. We measure the overlap in holdings using cosine similarity, and show that insurers with more similar portfolios have larger subsequent common sales. This relationship can be magnified for some insurers when they are regulatory capital constrained or markets are under stress. When faced with an exogenous liquidity shock, insurers with greater portfolio similarity have even larger common sales that impact prices. Our measure can be used by regulators to predict which institutions may contribute most to financial instability through the asset liquidation channel of risk transmission.
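The portfolio-overlap measure described above, cosine similarity between insurers' holdings vectors, can be sketched as follows. The insurer labels and holdings figures below are hypothetical, purely for illustration; a value near 1 means two insurers hold nearly the same asset mix:

```python
import numpy as np

# Hypothetical holdings: rows = insurers, columns = amounts held in each asset
holdings = np.array([
    [100.0, 50.0, 0.0, 25.0],   # insurer A
    [90.0, 60.0, 10.0, 0.0],    # insurer B
    [0.0, 5.0, 200.0, 80.0],    # insurer C
])

def cosine_similarity(u, v):
    """Cosine of the angle between two holdings vectors (1 = identical mix)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Pairwise similarity matrix across insurers
n = holdings.shape[0]
sim = np.array([[cosine_similarity(holdings[i], holdings[j])
                 for j in range(n)] for i in range(n)])

print(np.round(sim, 3))  # A and B overlap heavily; C holds a different book
```

Because the measure depends only on the direction of the holdings vector, not its size, it captures similarity of portfolio composition independently of insurer scale.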
This paper investigates inertia within and across banks in retail deposit markets using detailed panel data on consumer choices and account characteristics. In a structural choice model, I find that the costs of inertia are around one third higher for switching accounts across banks than for switching within banks. Observable proxies of bank-level switching costs (number and type of additional financial products) explain most of this cost premium, while online banking usage reduces inertia. Consistent with theory, I provide evidence that banks incorporate inertia in their pricing, as older accounts pay lower rates than comparable newer accounts. Counterfactual policies reducing inertia shift market share to more competitive smaller banks, but eliminating inertia within banks alone already yields large potential gains in consumer surplus. This suggests that facilitating bank switching alone might be insufficient to improve consumer choices.
In recent years, European financial regulation has undergone a tremendous reorientation with respect to the shadow banking system, manifested first and foremost in its reframing as market-based finance. Although shadow banking was initially identified as a source of systemic risk, certain regulatory initiatives not only fell far short of the envisaged changes but, on the contrary, were substantially modified so that they now aim at revitalizing these activities. This post-crisis reorientation of European regulatory agency on shadow banking, from curtailing it to facilitating resilient market-based finance, has puzzled academic observers, dismissed by some as mere rebranding or taken as a sign of regulatory capture. On the contrary, this paper documents the central role of regulatory agency in shadow banking's reconfiguration. It does so by analyzing the European initiatives concerning the regulation of Asset-Backed Commercial Paper (ABCP) and another prime example of shadow banking, Money Market Mutual Funds (MMFs). Based on documentary analysis and expert interviews, we trace how the recently published EU frameworks for MMFs and ABCP were designed (in particular the STS, CRR and MMF regulations of 2017). Furthermore, we show how they were transformed in such a way that their final versions allow the shadow banking chain linking MMFs, the ABCP market and, arguably, the regular banking system to be re-established. This transformation is driven by a new form of pro-active European regulatory agency which aims at creating a regulatory infrastructure able to sustain the orderly flow of real-economy debt. Far from being captured by the industry, regulators acted consciously and in cooperation with private actors in order to maintain a channel for credit creation outside of bank credit, a task made more complicated by the rushed, politicized final negotiations coupled with technical complexity.
The paper thereby contributes to a new strand of literature that sees the creation and reconfiguration of the shadow banking system as characterized by the active and conscious role of state actors.
We propose a unified framework to measure the effects of different reforms of the pension system on retirement ages and macroeconomic indicators in the face of demographic change. A rich overlapping generations (OLG) model is built and endogenous retirement decisions are explicitly modeled within a public pension system. Heterogeneity with respect to consumption preferences, wage profiles, and survival rates is embedded in the model. Besides the expected direct effects of these reforms on the behavior of households, we observe that feedback effects do occur. Results suggest that individual retirement decisions are strongly influenced by numerous incentives produced by the pension system and macroeconomic variables, such as the statutory eligibility age, adjustment rates, the presence of a replacement rate, and interest rates. Those decisions, in turn, have several impacts on the macro-economy which can create feedback cycles working through equilibrium effects on interest rates and wages. Taken together, these reform scenarios have strong implications for the sustainability of pension systems. Because of the rich nature of our unified model framework, we are able to rank the reform proposals according to several individual and macroeconomic measures, thereby providing important support for policy recommendations on pension systems.
This paper investigates the determinants of the idiosyncratic volatility puzzle by allowing for linkages across asset returns. The first contribution of the paper is to show that portfolios sorted by increasing indegree, computed on a network based on Granger-causality tests, have lower expected returns that are not related to idiosyncratic volatility. Secondly, empirical evidence indicates that stocks with higher idiosyncratic volatility have lower exposure to the indegree risk factor.
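The indegree sort described above can be sketched in a few lines. Assume the pairwise Granger-causality tests have already been run; the ticker names and directed edges below are hypothetical, and a stock's indegree simply counts how many other stocks Granger-cause it:

```python
from collections import defaultdict

# Hypothetical directed edges (i, j): returns of stock i Granger-cause stock j
edges = [("AAA", "BBB"), ("CCC", "BBB"), ("AAA", "CCC"),
         ("DDD", "BBB"), ("BBB", "DDD")]

stocks = ["AAA", "BBB", "CCC", "DDD"]

# Indegree = number of incoming edges (stocks that Granger-cause this one)
indegree = defaultdict(int)
for stock in stocks:
    indegree[stock] = 0
for src, dst in edges:
    indegree[dst] += 1

# Rank stocks by increasing indegree, as in the paper's portfolio sorts
ranking = sorted(stocks, key=lambda s: indegree[s])
print(ranking)  # → ['AAA', 'CCC', 'DDD', 'BBB']
```

In an actual application, each edge would come from rejecting the null of no Granger causality between a pair of return series; the sort itself is unchanged.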
We examine how a firm's investment behavior affects the investment of neighboring firms. Economic theory yields ambiguous predictions regarding the direction of firm peer effects, and consistent with earlier work, we find in OLS analyses that firms within an area display similar investment behavior. Exploiting time-variation in the rise of U.S. states' corporate income taxes and utilizing heterogeneity in firms' exposure to increases in corporate income tax rates, we identify the causal impact of local firms' investments. Using this as an instrumental variable in a 2SLS estimation, we find that an increase in local firms' investment reduces the investment of a local peer firm. This effect is more pronounced where local competition among firms is stronger, and it supports theories that firm investments are strategic substitutes due to competition.
We use minutes from 17,000 financial advisory sessions and corresponding client portfolio data to study how active client involvement affects advisor recommendations and portfolio outcomes. We find that advisors confronted with acquiescent clients stick to their standards and recommend expensive but well diversified mutual fund portfolios. However, if clients take an active role in the meetings, advisors deviate markedly from their standards, resulting in poorer portfolio diversification and lower Sharpe ratios. Our findings that advisors cater to client requests parallel the phenomenon of doctors prescribing antibiotics to insistent patients even if inappropriate, and imply that pandering diminishes the quality of advice.
This paper provides a complete characterization of optimal contracts in principal-agent settings where the agent's action has persistent effects. We model general information environments via the stochastic process of the likelihood-ratio. The martingale property of this performance metric captures the information benefit of deferral. Costs of deferral may result from both the agent's relative impatience as well as her consumption smoothing needs. If the relatively impatient agent is risk neutral, optimal contracts take a simple form in that they only reward maximal performance for at most two payout dates. If the agent is additionally risk-averse, optimal contracts stipulate rewards for a larger selection of dates and performance states: The performance hurdle to obtain the same level of compensation is increasing over time whereas the pay-performance sensitivity is declining.
A growing body of literature shows the importance of financial literacy in households' financial decisions. However, fewer studies focus on understanding the determinants of financial literacy. Our paper fills this gap by analyzing a specific determinant, the educational system, to explain the heterogeneity in financial literacy scores across Germany. We suggest that the lower financial literacy observed in East Germany is partially caused by a different institutional framework experienced during the Cold War, more specifically, by the socialist educational system of the GDR which affected specific cohorts of individuals. By exploiting the unique set-up of the German reunification, we identify education as a channel through which institutions and financial literacy are related in the German context.
How demanding and consistent is the 2018 stress test design in comparison to previous exercises?
(2018)
Bank regulators have the discretion to discipline banks by executing enforcement actions to ensure that banks correct deficiencies regarding safe and sound banking principles. We highlight the trade-offs regarding the execution of enforcement actions for financial stability. Following this, we provide an overview of the differences in the legal frameworks governing supervisors' execution of enforcement actions in the Banking Union and the United States. After discussing work on the effect of enforcement actions on bank behaviour and the real economy, we present data on the evolution of enforcement actions and monetary penalties by U.S. regulators. We conclude by noting the importance of supervisors levying efficient monetary penalties and by stressing that a division of competences among different regulators should not lead to a loss of efficiency in the execution of enforcement actions.
A new governance architecture for European financial markets? Towards a European supervision of CCPs
(2018)
Does the new European outlook on financial markets, as voiced by the EU Commission since the launch of the Capital Markets Union, imply a movement of the EU towards an alignment of market integration and direct supervision of common rules? This paper sets out to answer this question for the case of common supervision for Central Counterparties (CCPs) in the European Union. Those entities gained crucial importance post-crisis due to new regulation requiring the mandatory clearing of standardized derivative contracts, transforming clearing houses into central nodes for cross-border financial transactions. While the EU-wide regulatory framework EMIR, enacted in 2012, stipulates common regulatory requirements, it still relies on home-country supervision of those rules, arguably leading to regulatory as well as supervisory arbitrage. The regulatory reform to stabilize the OTC derivatives market thus replicated at its center a governance flaw that had been identified as one of the major causes of the gravity of the financial crisis in the EU: the coupling of intense competition based on private risk-management systems with national supervision of European rules. This paper traces the history of this problem awareness and asks which factors account for the fact that only in 2017 did serious negotiations ensue at the EU level envisioning a common supervision of CCPs to fix the flawed system of governance. Analyzing this shift in the European governance architecture, we argue that Brexit has opened a window of opportunity for a centralization of CCP supervision. Brexit aligns the urgency of the problem with the material interests of crucial political stakeholders, in particular Germany and France, creating the possibility of a grand European bargain.
Improving the financial conditions of individuals requires an understanding of the mechanisms through which bad financial decision-making leads to worse financial outcomes. From a theoretical point of view, a key candidate for inducing mistakes in financial decision-making is so-called present-biased preferences, one of the cornerstones of behavioral economics. According to theory, present-biased households should behave systematically differently when it comes to consumption and saving decisions, as they should be more prone to spending too much and saving too little.
In this policy letter we show how high-frequency financial transaction data available in digitized form allows us to precisely categorize individual financial decision-making as present-biased or not. Using this categorization, we find that one out of five individuals in our sample exhibits present bias and that this present-biased behavior is associated with a stronger use of overdrafts. As overdrafts represent a particularly expensive form of short-term borrowing, their systematic use can be interpreted as a measure of suboptimal financial decision-making. Overall, our results indicate that the combination of economic theory and Big Data is able to generate valuable insights with applications for policy makers and businesses alike.
The object of this study is one of the most ambitious projects of twentieth-century art history: Aby Warburg's 'Atlas Mnemosyne', conceived in the summer of 1926 – when the first mention of a 'Bilderatlas', or "atlas of images", occurs in his journal – and truncated three years later, unfinished, by his sudden death in October 1929. Mnemosyne consisted of a series of large black panels, about 170x140 cm., on which were attached black-and-white photographs of paintings, sculptures, book pages, stamps, newspaper clippings, tarot cards, coins, and other types of images. Warburg kept changing the order of the panels and the position of the images until the very end, and three main versions of the Atlas have been recorded: one from 1928 (the "1-43 version", with 682 images); one from the early months of 1929, with 71 panels and 1050 images; and the one Warburg was working on at the time of his death, also known as the "1-79 version", with 63 panels and 971 images (which is the one we will examine). But Warburg was planning to have more panels – possibly many more – and there is no doubt that Mnemosyne is a dramatically unfinished and controversial object of study.
Patterns and interpretation
(2017)
One thing for sure: digitization has completely changed the literary archive. People like me used to work on a few hundred nineteenth-century novels; today, we work on thousands of them; tomorrow, hundreds of thousands. This has had a major effect on literary history, obviously enough, but also on critical methodology; because, when we work on 200,000 novels instead of 200, we are not doing the same thing, 1,000 times bigger; we are doing a different thing. The new scale changes our relationship to our object, and in fact 'it changes the object itself'.
The Emotions of London
(2016)
A few years ago, a group formed by Ben Allen, Cameron Blevins, Ryan Heuser, and Matt Jockers decided to use topic modeling to extract geographical information from nineteenth-century novels. Though the study was eventually abandoned, it had revealed that London-related topics had become significantly more frequent in the course of the century, and when some of us were later asked to design a crowd-sourcing experiment, we decided to add a further dimension to those early findings, and see whether London place-names could become the cornerstone for an emotional geography of the city.
Literature, measured
(2016)
There comes a moment, in digital humanities talks, when someone raises their hand and says: "Ok. Interesting. But is it really new?" Good question... And let's leave aside the obvious lines of defense, such as "but the field is still only at its beginning!", or "and traditional literary criticism, is that always new?" All true, and all irrelevant; because the digital humanities have presented themselves as a radical break with the past, and must therefore produce evidence of such a break. And the evidence, let's be frank, is not strong. What is there, moreover, comes in a variety of forms, beginning with the slightly paradoxical fact that, in a new approach, not everything has to be new. When "Network Theory, Plot Analysis" pointed out, in passing, that a network of Hamlet had Hamlet at its center, the New York Times gleefully mentioned the passage as an unmistakable sign of stupidity. Maybe; but the point, of course, was not to present Hamlet's centrality as a surprise; it was exactly the opposite: had the new approach not found Hamlet at the center of the play, its plausibility would have disintegrated. Before using network theory for dramatic analysis, I had to test it, and prove that it corroborated the main results of previous research.
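The sanity check described above, that a character network of Hamlet should place Hamlet at its center, can be sketched with a toy network. The edge list below is a small hypothetical subset of the play's interactions (an edge links two characters who exchange words), and degree is one simple notion of "center":

```python
from collections import Counter

# Hypothetical subset of Hamlet's interaction network: an edge links two
# characters who exchange words in some scene (no pair is repeated).
edges = [
    ("Hamlet", "Horatio"), ("Hamlet", "Gertrude"), ("Hamlet", "Claudius"),
    ("Hamlet", "Ophelia"), ("Hamlet", "Polonius"), ("Hamlet", "Ghost"),
    ("Claudius", "Gertrude"), ("Claudius", "Polonius"),
    ("Polonius", "Ophelia"), ("Horatio", "Ghost"),
]

# Degree = number of edges incident to a character; since no edge repeats,
# this equals the number of distinct interlocutors.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

center, links = degree.most_common(1)[0]
print(center, links)  # → Hamlet 6
```

If the most central node were anyone but the protagonist, that would be grounds to doubt the network representation, which is exactly the corroboration test the passage describes.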
Of the novelties introduced by digitization in the study of literature, the size of the archive is probably the most dramatic: we used to work on a couple of hundred nineteenth-century novels, and now we can analyze thousands of them, tens of thousands, tomorrow hundreds of thousands. It's a moment of euphoria, for quantitative literary history: like having a telescope that makes you see entirely new galaxies. And it's a moment of truth: so, have the digital skies revealed anything that changes our knowledge of literature? This is not a rhetorical question. In the famous 1958 essay in which he hailed "the advent of a quantitative history" that would "break with the traditional form of nineteenth-century history", Fernand Braudel mentioned as its typical materials "demographic progressions, the movement of wages, the variations in interest rates [...] productivity [...] money supply and demand." These were all quantifiable entities, clearly enough; but they were also completely new objects compared to the study of legislation, military campaigns, political cabinets, diplomacy, and so on. It was this double shift that changed the practice of history; not quantification alone. In our case, though, there is no shift in materials: we may end up studying 200,000 novels instead of 200; but, they're all still novels. Where exactly is the novelty?
Different scales, different features. It's the main difference between the thesis we have presented here, and the one that has so far dominated the study of the paragraph. By defining it as "a sentence writ large", or, symmetrically, as "a short discourse", previous research was implicitly asserting the irrelevance of scale: sentence, paragraph, and discourse were all equally involved in the "development of one topic". We have found the exact opposite: 'scale is directly correlated to the differentiation of textual functions'. By this, we don't simply mean that the scale of sentences or paragraphs allows us to "see" style or themes more clearly. This is true, but secondary. Paragraphs allow us to "see" themes, because themes fully "exist" only at the scale of the paragraph. Ours is not just an epistemological claim, but an ontological one: if style and themes and episodes exist in the form they do, it's because writers work at different scales – and do different things according to the level at which they are operating.
Loudness in the novel
(2014)
The novel is composed entirely of voices: the most prominent among them is typically that of the narrator, which is regularly intermixed with those of the various characters. In reading through a novel, the reader "hears" these heterogeneous voices as they occur in the text. When the novel is read out loud, the voices are audibly heard. They are also heard, however, when the novel is read silently: in this latter case, the voices are not verbalized for others to hear, but acoustically created and perceived in the mind of the reader. Simply put: sound, in the context of the novel, is fundamentally a product of the novel's voices. This conception of sound mechanics may at first seem unintuitive—sound seems to be the product of oral reading—but it is only by starting with the voice that one can fully appreciate sound's function in the novel. Moreover, such a conception of sound mechanics finds affirmation in the works of both Mikhail Bakhtin and Elaine Scarry: "In the novel," writes Bakhtin, "we can always hear voices (even while reading silently to ourselves)."
The concept of length, the concept is synonymous, the concept is nothing more than, the proper definition of a concept ... Forget programs and visions; the operational approach refers specifically to concepts, and in a very specific way: it describes the process whereby concepts are transformed into a series of operations—which, in their turn, allow us to measure all sorts of objects. Operationalizing means building a bridge from concepts to measurement, and then to the world. In our case: from the concepts of literary theory, through some form of quantification, to literary texts.