Working Paper
This paper uses unique administrative data and a quasi-field experiment of exogenous allocation in Sweden to estimate medium- and longer-run effects on financial behavior from exposure to financially literate neighbors. It contributes evidence of a causal impact of exposure and of a social multiplier of financial knowledge, but also of unfavorable distributional aspects of these externalities. Exposure promotes saving in private retirement accounts and stockholding, especially when neighbors have economics or business education, but only for educated households and when interaction possibilities are substantial. Findings point to a transfer of knowledge rather than mere imitation or effects through labor, education, or mobility channels.
The recent sovereign debt crisis in the Eurozone was characterized by monetary policy constrained by the zero lower bound (ZLB) on nominal interest rates and by several countries facing high risk spreads on their sovereign bonds. How is the government spending multiplier affected by such an economic environment? While prominent results in the academic literature point to high government spending multipliers at the ZLB, higher public indebtedness is often associated with small government spending multipliers. I develop a DSGE model with leverage-constrained banks that captures both features of this economic environment, the ZLB and fiscal stress. In this model, I analyze the effects of government spending shocks. I find that not only are multipliers large at the ZLB, but the presence of fiscal stress can even increase their size. For longer durations of the ZLB, multipliers in this model can be considerably larger than one.
JEL Classification: E32, E44, E62
Recently, Fuest and Sinn (2018) have demanded a change of rules for the Eurozone's TARGET2 payment system, claiming that the current rules violate the Statute of the European System of Central Banks and of the European Central Bank. The authors present a stylized model based on a set of macroeconomic assumptions and show that TARGET2 may lead to loss sharing among national central banks (NCBs), thus violating the no-risk-sharing requirement laid out by the Eurosystem Statute.
In this note, I present an augmented model that incorporates essential features of the micro- and macroprudential regulatory and supervisory regime that today is hard-wired into Europe’s banking system. The model shows that the original no-risk-sharing principle is not necessarily violated during a financial crisis of a member state. Moreover, it shows that under a banking union regime, financial crisis asset value losses at or below the 99.9th percentile are borne by private investors, not by taxpayers, and particularly not by central banks.
Therefore, policy conclusions from the micro-founded model differ significantly from those suggested by Fuest and Sinn (2018).
We propose a shrinkage and selection methodology specifically designed for network inference from high-dimensional data, using a regularised linear regression model with a Spike-and-Slab prior on the parameters. The approach extends to the case where the error terms are heteroscedastic by adding an ARCH-type equation through an approximate Expectation-Maximisation algorithm. The proposed model accounts for two sets of covariates. The first set contains predetermined variables which are not penalised in the model (i.e., the autoregressive component and common factors), while the second set contains all the (lagged) financial institutions in the system, each included with a given probability. The financial linkages are expressed in terms of inclusion probabilities, resulting in a weighted directed network whose adjacency matrix is built "row by row". In the empirical application, we estimate the network over time using a rolling-window approach on 1,248 world financial firms (banks, insurers, brokers, and other financial services), both active and dead, from 29 December 2000 to 6 October 2017 at a weekly frequency. Findings show that over time the shape of the out-degree distribution exhibits the typical behavior of financial stress indicators and represents a significant predictor of market returns at the first lag (one week) and the fourth lag (one month).
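The row-by-row construction of the adjacency matrix can be sketched in a few lines. The Python example below uses Lasso selection as a computationally cheap stand-in for the Spike-and-Slab prior (so inclusion is binary rather than probabilistic and, unlike in the paper, all regressors are penalised); the function and variable names are illustrative, not from the paper.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def estimate_network(returns, common_factors):
    """Build a directed adjacency matrix row by row.

    returns:        (T, N) array of institution returns
    common_factors: (T, K) array of common controls

    Row i flags which lagged institutions help predict institution i.
    The paper's Spike-and-Slab prior would put posterior inclusion
    probabilities here; Lasso selection is a cheap stand-in.
    """
    T, N = returns.shape
    A = np.zeros((N, N))
    X = np.hstack([returns[:-1], common_factors[1:]])  # lagged peers + factors
    for i in range(N):
        y = returns[1:, i]
        coef = LassoCV(cv=5).fit(X, y).coef_
        A[i, :] = (np.abs(coef[:N]) > 1e-8).astype(float)  # selected peers
    np.fill_diagonal(A, 0.0)
    return A

rng = np.random.default_rng(0)
R = rng.standard_normal((300, 20))    # 20 institutions, 300 weeks
F = rng.standard_normal((300, 2))     # 2 common factors
A = estimate_network(R, F)
print("out-degree per institution:", A.sum(axis=0))
```

In the paper's Bayesian version, row i would instead hold posterior inclusion probabilities, yielding a weighted rather than binary network.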
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model, we find that the nominal-to-real transformation is not rejected for the money relation, whereas the wealth relation cannot be expressed in real terms.
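For readers who want to experiment with this kind of cointegration analysis, the sketch below runs a Johansen trace test on simulated data with statsmodels; the variable names are placeholders, and the setup deliberately omits the paper's I(2) specification, broken trends, and dummies.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(1)
T = 300
# A shared stochastic trend makes (log) real M3 and income cointegrated,
# a crude stand-in for a money-demand relation.
trend = np.cumsum(rng.standard_normal(T))
data = pd.DataFrame({
    "m3_real":  trend + 0.3 * rng.standard_normal(T),
    "income":   trend + 0.3 * rng.standard_normal(T),
    "own_rate": 0.1 * np.cumsum(rng.standard_normal(T)),
})

# det_order=0: constant term; k_ar_diff=2: two lagged differences.
res = coint_johansen(data, det_order=0, k_ar_diff=2)
for r, (stat, cv95) in enumerate(zip(res.lr1, res.cvt[:, 1])):
    print(f"H0: rank <= {r}:  trace stat {stat:7.2f}   95% cv {cv95:6.2f}")
```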
This paper examines how networks of professional contacts contribute to the development of the careers of executives of North American and European companies. We build a dynamic model of career progression in which career moves may both depend upon existing networks and contribute to the development of future networks. We test the theory on an original dataset of nearly 73,000 executives in over 10,000 firms. In principle, professional networks could be relevant both because they are rewarded by the employer and because they facilitate job mobility. Our econometric analysis suggests that, although there is a substantial positive correlation between network size and executive compensation, with an elasticity of around 20%, almost all of this is due to unobserved individual characteristics. The true causal impact of networks on compensation is closer to an elasticity of 1 or 2% on average, all of it due to an enhanced probability of moving to a higher-paid job. And there appear to be strongly diminishing returns to network size.
Using a unique confidential contract level dataset merged with firm-level asset price data, we find robust evidence that firms' stock market valuations and employment levels respond more to monetary policy announcements the higher the degree of wage rigidity. Data on the renegotiations of collective bargaining agreements allow us to construct an exogenous measure of wage rigidity. We also find that the amplification induced by wage rigidity is stronger for firms with high labor intensity and low profitability, providing evidence of distributional consequences of monetary policy. We rationalize the evidence through a model in which firms in different sectors feature different degrees of wage rigidity due to staggered renegotiations vis-a-vis unions.
This paper analyzes the effect of financial constraints on firms' corporate social responsibility. Exploiting heterogeneity in firms' exposure to a monetary policy shock in the U.S., which reduced financial constraints for some firms, I find that firms increase their environmental responsibility. I use facility-level data to account for unobservable time-varying influences on pollution and find that toxic emissions decrease when parent companies are more exposed to the monetary policy shock. I further find that these facilities are also more likely to implement pollution abatement activities. Examining within-parent-company heterogeneity, I find that pollution abatement investments center on facilities at greater risk of facing additional costs due to environmental regulation. The findings are consistent with the idea that a reduction in financial constraints reduces pollution as it allows firms to implement pollution abatement measures.
Households buy life insurance as part of their liquidity management. The option to surrender such a policy can serve as a buffer when a household faces a liquidity need. In this study, we investigate empirically which individual and household-specific sociodemographic factors influence the surrender behavior of life insurance policyholders. Based on the Socio-Economic Panel (SOEP), an ongoing wide-ranging representative longitudinal study of around 11,000 private households in Germany, we construct a proxy to identify life insurance surrender in the data. We use this proxy to conduct fixed-effects regressions and support the results with survival analyses. We find that life events that may impose a liquidity shock on the household, such as the birth of a child or a divorce, increase the likelihood that an average household in the panel surrenders an existing life insurance policy. The acquisition of a dwelling and unemployment are further factors that can foster life insurance surrender. Our results are robust across different models and hold when conditioning on region-specific trends; they vary, however, across age groups. Our analyses contribute to the existing literature supporting the emergency fund hypothesis. The findings obtained in this study can help life insurers and regulators detect and understand industry-specific challenges of demographic change.
Higher capital ratios are believed to improve system-wide financial stability through three main channels: (i) higher loss-absorption capacity, (ii) lower moral hazard, (iii) stabilization of the financial cycle if capital ratios are increased during good times. We examine these mechanisms in a laboratory asset market experiment with indebted participants. We find support for the loss-absorption channel: higher capital ratios reduce the bankruptcy rate. However, we do not find support for the moral hazard channel. Higher capital ratios (insignificantly) increase asset price bubbles, an aggregate measure of excessive risk-taking. Additional evidence suggests that bankruptcy aversion explains this surprising result. Finally, the evidence supports the idea that higher capital ratios in good times stabilize the financial cycle.
Whither artificial intelligence? Debating the policy challenges of the upcoming transformation
(2018)
The School of Salamanca, and Iberian late Scholasticism in general, had the merit of transposing the wisdom of medieval scholasticism into the coordinates of early modernity. Due to the economic growth that followed the discovery of America, economic terms and moral problems became a central focus for moral theologians. In this article, I consider important key economic concepts that deliver a surprising wealth of insights into the modernization brought about by the leading scholars of the time. Social mobility, the principle of majority decision, the inviolability of property, human rights of the person, limited political power of the pope, and other key concepts that were decisive for the development of democracy and modernity are to be found in the works of the School of Salamanca in connection with economic issues.
Distributed ledger technologies rely on consensus protocols confronting traders with random waiting times until the transfer of ownership is accomplished. This time consuming settlement process exposes arbitrageurs to price risk and imposes limits to arbitrage. We derive theoretical arbitrage boundaries under general assumptions and show that they increase with expected latency, latency uncertainty, spot volatility, and risk aversion. Using high-frequency data from the Bitcoin network, we estimate arbitrage boundaries due to settlement latency of on average 124 basis points, covering 88% of the observed cross-exchange price differences. Settlement through decentralized systems thus induces non-trivial frictions affecting market efficiency and price formation.
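To fix ideas, here is a stylized numerical sketch of a latency-driven arbitrage boundary for a mean-variance trader: the required price gap is modeled as a risk-aversion multiple of the mean plus dispersion of the price-risk exposure over random settlement latency. The functional form and all parameter values are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def arbitrage_boundary(sigma, latency_mean, latency_std, risk_aversion,
                       n_sims=100_000, seed=0):
    """Stylized minimum log-price gap that makes a cross-exchange trade
    worthwhile when settlement takes a random time tau.

    Price risk over the latency has std sigma * sqrt(tau); the boundary
    is taken as risk_aversion * (mean + std) of that exposure, so it
    rises with volatility, expected latency, and latency uncertainty --
    an illustrative choice, not the paper's exact formula.
    """
    rng = np.random.default_rng(seed)
    # Lognormal latency calibrated to the requested mean and std.
    var = np.log(1.0 + (latency_std / latency_mean) ** 2)
    mu = np.log(latency_mean) - var / 2.0
    tau = rng.lognormal(mu, np.sqrt(var), n_sims)
    exposure = sigma * np.sqrt(tau)
    return risk_aversion * (exposure.mean() + exposure.std())

# Hypothetical numbers: 1% volatility per sqrt(hour), 1h mean latency.
b = arbitrage_boundary(sigma=0.01, latency_mean=1.0, latency_std=0.5,
                       risk_aversion=2.0)
print(f"illustrative boundary: {b * 1e4:.0f} basis points")
```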
Much ado about nothing: a study of differential pricing and liquidity of short- and long-term bonds
(2018)
Are yields of long-maturity bonds distorted by demand pressure of clientele investors, regulatory effects, or default, flight-to-safety or liquidity premiums? Using data on German nominal bonds between 2005 and 2015, we study the differential pricing and liquidity of short and long maturity bonds. We find statistically significant, but economically negligible segmentation in yields and some degree of liquidity segmentation of short-term versus long-term bonds. These results have important policy implications for the €17.5 trillion European pension and insurance industries: long maturity bond yields seem appropriate for the valuation of long-term liabilities.
A number of recent studies have concluded that consumer spending patterns over the month are closely linked to the timing of income receipt. This correlation is interpreted as evidence of hyperbolic discounting. I re-examine patterns of spending in the diary sample of the U.S. Consumer Expenditure Survey, incorporating information on the timing of the main consumption commitment for most households - their monthly rent or mortgage payment. I find that non-durable and food spending increase by 30-48% on the day housing payments are made, with smaller increases in the days after. Moreover, households with weekly, biweekly and monthly income streams but the same timing of rent/mortgage payments have very similar consumption patterns. Exploiting variation in income, I find that households with extra liquidity decrease non-durable spending around housing payments, especially those households with a large budget share of housing.
A recent US Treasury regulation allowed deferred longevity income annuities to be included in pension plan menus as a default payout solution, yet little research has investigated whether more people should convert some of the $15 trillion they hold in employer-based defined contribution plans into lifelong income streams. We investigate this innovation using a calibrated lifecycle consumption and portfolio choice model embodying realistic institutional considerations. Our welfare analysis shows that defaulting a small portion of retirees' 401(k) assets (over a threshold) into such annuities is an attractive way to enhance retirement security, raising welfare by up to 20% of retiree plan accruals.
We provide the first partner tenure and rotation analysis for a large cross-section of U.S. publicly listed firms over an extended period. We analyze the effects on audit quality as well as economic tradeoffs with respect to audit hours and fees. On average, we find no evidence for audit quality declines over the tenure cycle and, consistent with the former, little support for fresh-look benefits after five-year mandatory rotations. Nevertheless, partner rotations have significant economic consequences. We find increases in audit fees and decreases in audit hours over the tenure cycle, which differ by partner experience, client size, and competitiveness of the local audit market. Our findings are consistent with efforts by the audit firms to minimize disruptions and audit failures around mandatory rotations. We also analyze special circumstances, such as audit firm or audit team switches and early partner rotations. We show that these situations are more disruptive and more likely to exhibit audit quality effects. In particular, we find that low quality audits give rise to early engagement partner rotations and in this sense have (career) consequences for partners.
Manipulative communications touting stocks are common in capital markets around the world. Although the price distortions created by so-called “pump-and-dump” schemes are well known, little is known about the investors in these frauds. By examining 421 “pump-and-dump” schemes between 2002 and 2015 and a proprietary set of trading records for over 110,000 individual investors from a major German bank, we provide evidence on the participation rate, magnitude of the investments, losses, and the characteristics of the individuals who invest in such schemes. Our evidence suggests that participation is quite common and involves sizable losses, with nearly 6% of active investors participating in at least one “pump-and-dump” and an average loss of nearly 30%. Moreover, we identify several distinct types of investors, some of which should not be viewed as falling prey to these frauds. We also show that portfolio composition and past trading behavior can better explain participation in touted stocks than demographics. Our analysis offers insights into the challenges associated with designing effective investor protection against market manipulation.
An important question in banking is how strict supervision affects bank lending and, in turn, local business activity. Forcing banks to recognize losses could choke off lending and amplify local economic woes, especially after financial crises. But stricter supervision could also lead to changes in how banks assess loans and manage their loan portfolios. Estimating such effects is challenging. We exploit the extinction of the thrift regulator, the Office of Thrift Supervision (OTS) – a large change in prudential supervision affecting ten percent of all U.S. depository institutions. Using this event, we analyze economic links between strict supervision, bank lending and business activity. We first show that the OTS replacement indeed resulted in stricter supervision of former OTS banks. We then analyze the lending effects of this regulatory change and show that former OTS banks increased small business lending by approximately 10 percent. This increase stems primarily from well-capitalized banks and those more affected by the new regime. These findings suggest that stricter supervision operates not only through capital but can also overcome frictions in bank management, leading to more lending and a reallocation of loans. Consistent with the latter, we find increases in business entry and exit in counties with greater exposure to OTS banks.
The use of evidence and economic analysis in policymaking is on the rise, and accounting standard setting and financial regulation are no exception. This article discusses the promise of evidence-based policymaking in accounting and financial markets as well as the challenges and opportunities for research supporting this endeavor. In principle, using sound theory and robust empirical evidence should lead to better policies and regulations. But despite its obvious appeal and substantial promise, evidence-based policymaking is easier demanded than done. It faces many challenges related to the difficulty of providing relevant causal evidence, lack of data, the reliability of published research, and the transmission of research findings. Overcoming these challenges requires substantial infrastructure investments for generating and disseminating relevant research. To illustrate this point, I draw parallels to the rise of evidence-based medicine. The article provides several concrete suggestions for the research process and the aggregation of research findings if scientific evidence is to inform policymaking. I discuss how policymakers can foster and support policy-relevant research, chiefly by providing and generating data. The article also points to potential pitfalls when research becomes increasingly policy-oriented.
We examine whether the economy can be insured against banking crises with deposit and loan contracts contingent on macroeconomic shocks. We study banking competition and show that the private sector insures the banking system through such contracts, and banking crises are avoided, provided that failed banks are not bailed out. When risks are large, banks may shift part of them to depositors. In contrast, when banks are bailed out by the next generation, depositors receive non-contingent contracts with high interest rates, while entrepreneurs obtain loan contracts that demand high repayment in good times and low repayment in bad times. As a result, the present generation overinvests, and banks generate large macroeconomic risks for future generations, even if the underlying productivity risk is small or zero. We conclude that a joint policy package of orderly default procedures and contingent contracts is a promising way to reduce the threat of a fragile banking system.
Following the introduction of the one-child policy in China, the capital-labor (K/L) ratio of China increased relative to that of India, and, simultaneously, FDI inflows relative to GDP for China versus India declined. These observations are explained in the context of a simple neoclassical OLG paradigm. The adjustment mechanism works as follows: the reduction in the growth rate of the (urban) labor force due to the one-child policy permanently increases the capital per worker inherited from the previous generation. The resulting increase in China's (domestic K)/L thus "crowds out" the need for FDI in China relative to India. Our paper is a contribution to the nascent literature exploring demographic transitions and their effects on FDI flows.
Based on OECD evidence, equity/housing-price busts and credit crunches are followed by substantial increases in public consumption. These increases in unproductive public spending lead to increases in distortionary marginal taxes, a policy in sharp contrast with presumably optimal Keynesian fiscal stimulus after a crisis. Here we claim that this seemingly adverse policy selection is optimal under rational learning about the frequency of rare capital-value busts. Bayesian updating after a bust implies massive belief jumps toward pessimism, with investors and policymakers believing that busts will arrive more frequently in the future. Lowering taxes would be akin to kicking a sick horse to make it stand up and run: pessimistic markets would be unwilling to invest enough under any temporarily generous tax regime.
We present empirical evidence on the heterogeneity in monetary policy transmission across countries with different home ownership rates. We use household-level data together with shocks to the policy rate identified from high-frequency data. We find that housing tenure reacts more strongly to unexpected changes in the policy rate in Germany and Switzerland (the OECD countries with the lowest home ownership rates) compared with existing evidence for the U.S. An unexpected decrease in the policy rate by 25 basis points increases the home ownership rate by 0.8 percentage points in Germany and by 0.6 percentage points in Switzerland. The response of non-housing consumption in Switzerland is less heterogeneous across renters and mortgagors, and has a different pattern across age groups than in the U.S. We discuss economic explanations for these findings and implications for monetary policy.
In 1983, Brian Henderson published an article that examined various types of narrative structure in film, including flashbacks and flashforwards. After analyzing a whole spectrum of techniques capable of effecting a transition between past and present – blurs, fades, dissolves, and so on – he concluded: "Our discussions indicate that cinema has not (yet) developed the complexity of tense structures found in literary works". His "yet" (in parentheses) was an instance of laudable caution, as very soon – in some ten–fifteen years – the situation would change drastically, and temporal twists would become a trademark of a new genre that has not (yet) acquired a standardized name: "modular narratives", "puzzle films", and "complex films" are among the labels used.
Asset transaction prices sampled at high frequency are much staler than one might expect, in the sense that they frequently lack new updates and thus show zero returns. In this paper, we propose a theoretical framework for formalizing this phenomenon. It hinges on the existence of a latent continuous-time stochastic process p_t valued in the open interval (0, 1), which represents at any point in time the probability of the occurrence of a zero return. Using a standard infill asymptotic design, we develop an inferential theory for nonparametrically testing the null hypothesis that p_t is constant over one day. Under the alternative, which encompasses a semimartingale model for p_t, we develop nonparametric inferential theory for the probability of staleness that includes the estimation of various integrated functionals of p_t and its quadratic variation. Using a large dataset of stocks, we provide empirical evidence that the null of a constant probability of staleness is clearly rejected. We then show that the variability of p_t is mainly driven by transaction volume and is almost unaffected by bid-ask spread and realized volatility.
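A minimal sketch of the object under study: the zero-return indicator and a rolling estimate of the staleness probability p_t, with a naive check of whether local estimates move more than sampling noise allows. This is illustrative only; the paper develops formal infill-asymptotic tests.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 5_000                       # high-frequency observations in one day
t = np.linspace(0.0, 1.0, n)
p_t = 0.3 + 0.2 * np.sin(2 * np.pi * t)   # latent, time-varying staleness

stale = rng.random(n) < p_t
moves = 1e-4 * rng.standard_normal(n)
returns = np.where(stale, 0.0, moves)     # stale ticks show zero returns

zero_ind = (returns == 0.0).astype(float)
window = 500
p_hat = pd.Series(zero_ind).rolling(window, center=True).mean()

# Under a constant p_t, local estimates fluctuate only by sampling noise
# of order sqrt(p * (1 - p) / window); a larger spread hints at time
# variation in staleness.
p_bar = zero_ind.mean()
noise = np.sqrt(p_bar * (1.0 - p_bar) / window)
print(f"overall estimate:      {p_bar:.3f}")
print(f"rolling-estimate std:  {p_hat.std():.3f}  (sampling noise ~ {noise:.3f})")
```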
Through the lens of market participants' objective to minimize counterparty risk, we provide an explanation for the reluctance to clear derivative trades in the absence of a central clearing obligation. We develop a comprehensive understanding of the benefits and potential pitfalls with respect to a single market participant's counterparty risk exposure when moving from a bilateral to a clearing architecture for derivative markets. Previous studies suggest that central clearing is beneficial for single market participants in the presence of a sufficiently large number of clearing members. We show that three elements can render central clearing harmful for a market participant's counterparty risk exposure regardless of the number of its counterparties: 1) correlation across and within derivative classes (i.e., systematic risk), 2) collateralization of derivative claims, and 3) loss sharing among clearing members. Our results have substantial implications for the design of derivatives markets, and highlight that recent central clearing reforms might not incentivize market participants to clear derivatives.
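The netting trade-off behind this argument can be illustrated numerically. The sketch below simulates exposures driven by a common systematic factor and compares expected counterparty exposure under bilateral netting (across classes, within each counterparty) with a regime in which one derivative class moves to a CCP (netting across counterparties, within that class). It is a stylized illustration in the spirit of this literature; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
N, sims = 10, 50_000    # counterparties, Monte Carlo draws
beta = 0.8              # loading on the systematic factor (hypothetical)

# Two derivative classes per counterparty; one common factor induces
# correlation across counterparties and classes (systematic risk).
common = rng.standard_normal((sims, 1, 1))
idio = rng.standard_normal((sims, N, 2))
exposures = beta * common + np.sqrt(1 - beta**2) * idio

# Bilateral regime: net the two classes within each counterparty.
bilateral = np.maximum(exposures.sum(axis=2), 0.0).sum(axis=1)

# Move class 0 to a CCP: multilateral netting across counterparties for
# class 0; class 1 stays bilateral, so cross-class netting is lost.
central = (np.maximum(exposures[:, :, 0].sum(axis=1), 0.0)
           + np.maximum(exposures[:, :, 1], 0.0).sum(axis=1))

print(f"E[exposure], all bilateral: {bilateral.mean():6.2f}")
print(f"E[exposure], class 0 CCP :  {central.mean():6.2f}")
# Rerun with beta near 0 vs near 1: high systematic risk erodes the
# CCP's multilateral netting benefit, so clearing can raise exposure.
```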
A tale of one exchange and two order books: effects of fragmentation in the absence of competition
(2018)
Exchanges nowadays routinely operate multiple, almost identically structured limit order markets for the same security. We study the effects of such fragmentation on market performance using a dynamic model where agents trade strategically across two identically-organized limit order books. We show that fragmented markets, in equilibrium, offer higher welfare to intermediaries at the expense of investors with intrinsic trading motives, and lower liquidity than consolidated markets. Consistent with our theory, we document improvements in liquidity and lower profits for liquidity providers when Euronext, in 2009, consolidated its order flow for stocks traded across two country-specific and identically-organized order books into a single order book. Our results suggest that competition in market design, not fragmentation, drives previously documented improvements in market quality when new trading venues emerge; in the absence of such competition, market fragmentation is harmful.
This paper presents new evidence on the expectation formation process from a Dutch household survey. Households become too optimistic about their future income after their income has improved, consistent with over-extrapolation of their experience. We show that this effect of experience is persistent and that households over-extrapolate income losses more than income gains. Furthermore, older households over-extrapolate more, suggesting that they did not learn over time to form more accurate expectations. Finally, we study the relationship between expectation errors and consumption. We find that more over-optimistic households intend to consume more and subsequently report higher consumption, even though they do not consume as much as they intended to. These results suggest that over-extrapolation hurts consumers and amplifies business cycles.
Popularity/Prestige
(2018)
What is the canon? Usually this question is just a proxy for something like, "Which works are in the canon?" But the first question is not just a concise version of the second, or at least it doesn’t have to be. Instead, it can ask what the structure of the canon is - in other words, when things are in the canon, what are they in? This question came to the fore during the project that resulted in Pamphlet 11. The members of that group were looking for morphological differences between the canon and the archive. The latter they define, straightforwardly and capaciously, as "that portion of published literature that has been preserved—in libraries and elsewhere." The canon is a slipperier concept; the authors speak instead of multiple canons, like the books preserved in the Chadwyck-Healey Nineteenth-Century Fiction Collection, the constituents of the six different "best twentieth-century novels" lists analyzed by Mark Algee-Hewitt and Mark McGurl in Pamphlet 8, authors included in the British Dictionary of National Biography, and so forth. [...] This last conundrum points the way out of these difficulties and into a workable model of the structure of the canon. It suggests two different ways of entering the canon: being read by many and being prized by an elite few—or, to use the terms arrived at in Pamphlet 11, popularity and prestige. With these two dimensions, we arrive at a canonical space [...].
The propagation of regional shocks in housing markets: evidence from oil price shocks in Canada
(2018)
Shocks to the demand for housing that originate in one region may seem important only for that regional housing market. We provide evidence that such shocks can also affect housing markets in other regions. Our analysis focuses on the response of Canadian housing markets to oil price shocks. Oil price shocks constitute an important source of exogenous regional variation in income in Canada because oil production is highly geographically concentrated. We document that, at the national level, real oil price shocks account for 11% of the variability in real house price growth over time. At the regional level, we find that unexpected increases in the real price of oil raise housing demand and real house prices not only in oil-producing regions, but also in other regions. We develop a theoretical model of the propagation of real oil price shocks across regions that helps explain this finding. The model differentiates between oil-producing and non-oil-producing regions and incorporates multiple sectors, trade between provinces, government redistribution, and consumer spending on fuel. We empirically confirm the model prediction that oil price shocks are propagated to housing markets in non-oil-producing regions by the government redistribution of oil revenue and by increased interprovincial trade.
We analytically characterize optimal monetary policy for an augmented New Keynesian model with a housing sector. In a setting where the private sector has rational expectations about future housing prices and inflation, optimal monetary policy can be characterized without making reference to housing price developments: commitment to a 'target criterion' that refers to inflation and the output gap only is optimal, as in the standard model without a housing sector. When the policymaker is concerned with potential departures of private sector expectations from rational ones and seeks to choose a policy that is robust against such possible departures, then the optimal target criterion must also depend on housing prices. In the empirically realistic case where housing is subsidized and where monopoly power causes output to fall short of its optimal level, the robustly optimal target criterion requires the central bank to 'lean against' housing prices: following unexpected housing price increases, policy should adopt a stance that is projected to undershoot its normal targets for inflation and the output gap, and similarly aim to overshoot those targets in the case of unexpected declines in housing prices. The robustly optimal target criterion does not require that policy distinguish between 'fundamental' and 'non-fundamental' movements in housing prices.
We establish that the labor market helps discipline asset managers via the impact of fund liquidations on their careers. Using hand-collected data on 1,948 professionals, we find that top managers working for funds liquidated after persistently poor relative performance suffer demotion coupled with a significant loss in imputed compensation. Scarring effects are absent when liquidations are preceded by normal relative performance or involve mid-level employees. Seen through the lens of a model with moral hazard and adverse selection, these results can be ascribed to reputation loss rather than bad luck. The findings suggest that performance-induced liquidations supplement compensation-based incentives.
In talent-intensive jobs, workers’ quality is revealed by their performance. This enhances productivity and earnings, but also increases layoff risk. Firms cannot insure workers against this risk if they compete fiercely for talent. In this case, the more risk-averse workers will choose less quality-revealing jobs. This lowers expected productivity and salaries. Public unemployment insurance corrects this inefficiency, enhancing employment in talent-sensitive industries, consistent with international evidence. Unemployment insurance dominates legal restrictions on firms’ dismissals, which penalize more talent-sensitive firms and thus depress expected productivity. Finally, unemployment insurance fosters education, by encouraging investment in risky human capital that enhances talent discovery.
We assess the relationship between finance and growth over the period 1980-2014. We estimate a cross-country growth regression for 48 countries during 20 periods of 15 years starting in 1980 (to 1995) and ending in 1999 (to 2014). We use OLS and IV estimations and we find that: 1) overall financial development had a positive effect on economic growth during all periods of our sample, i.e., we confirm that from 1980 to 2014 financial services provided by the various financial systems were significant (to various degrees) for firm creation, industrial expansion and economic growth; but that, 2) the structure of financial markets was particularly relevant for economic growth until the financial crisis; while 3) the structure of the banking sector has played a major role since then; and finally that, 4) the legal system is the primary determinant of the effectiveness of the overall financial system in facilitating innovation and growth in (almost) all of our sample period. Hence, overall our results suggest that the relationship between finance and growth matters but also that it varies over time in strength and in sector origination.
JEL Classification: O16, G16, G20.
Motivated by the observation that survey expectations of stock returns are inconsistent with rational return expectations under real-world probabilities, we investigate whether alternative expectations hypotheses entertained in the asset pricing literature are consistent with the survey evidence. We empirically test (1) the notion that survey forecasts constitute rational but risk-neutral forecasts of future returns, and (2) the notion that survey forecasts are ambiguity-averse/robust forecasts of future returns. We find that these alternative hypotheses are also strongly rejected by the data, albeit for different reasons. Hypothesis (1) is rejected because survey return forecasts are not in line with risk-free interest rates and because survey expected excess returns are predictable. Hypothesis (2) is rejected because agents are not always pessimistic about future returns and instead often display overly optimistic return expectations. We speculate as to what kind of expectations theories might be consistent with the available survey evidence.
Europe is a key normative power. Its legitimacy as a force for ensuring the rule of law in international relations is unparalleled. It also packs an economic punch. In data protection and the fight against cybercrime, European norms have been successfully globalized. The time is right to take the next step: Europe must now become the international normative leader in developing a new deal on internet governance. To ensure this, European powers should commit to rules that work in security, economic development and human rights on the internet and implement them in a reinvigorated Internet Governance Forum (IGF).
This paper argues that the introduction of the Bank Recovery and Resolution Directive (BRRD) improved market discipline in the European bank market for unsecured debt. The differential impact of the BRRD on bank bonds provides a quasi-natural experiment that allows us to study the effect of the BRRD within banks using a difference-in-differences approach. Identification is based on the fact that (otherwise identical) bonds of a given bank maturing before 2016 are explicitly protected from BRRD bail-in. The empirical results are consistent with the hypothesis that debt holders actively monitor banks and that the BRRD diminished bail-out expectations. Bank bonds subject to BRRD bail-in carry a 10 basis point bail-in premium in terms of the yield spread. While there is some evidence that the bail-in premium is more pronounced for non-GSIB banks and banks domiciled in peripheral European countries, weak capitalization is the main driver.
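The within-bank identification lends itself to a compact regression sketch. The simulated example below, using statsmodels, compares yield spreads of bail-in-able versus grandfathered bonds of the same bank with bank fixed effects; the data, column names, and the 10 bp "true" premium are hypothetical stand-ins echoing the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_banks = 200
bank_level = 20.0 * rng.standard_normal(n_banks)  # bank-specific spread level

# Two otherwise identical bonds per bank: one maturing before 2016
# (grandfathered, bail_in = 0) and one maturing later (bail_in = 1).
rows = [{"bank": b,
         "bail_in": d,
         "spread_bp": 80.0 + bank_level[b] + 10.0 * d
                      + 5.0 * rng.standard_normal()}
        for b in range(n_banks) for d in (0, 1)]
df = pd.DataFrame(rows)

# Bank fixed effects absorb anything common to both bonds of a bank,
# so the bail_in coefficient is identified within banks.
fit = smf.ols("spread_bp ~ bail_in + C(bank)", data=df).fit()
print(f"estimated bail-in premium: {fit.params['bail_in']:.1f} bp "
      f"(simulated true value: 10 bp)")
```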
The authors relax the standard assumption in the dynamic stochastic general equilibrium (DSGE) literature that exogenous processes are governed by AR(1) processes and estimate the ARMA(p,q) orders and parameters of exogenous processes. Methodologically, they contribute to the Bayesian DSGE literature by using Reversible Jump Markov Chain Monte Carlo (RJMCMC) to sample from the unknown ARMA orders and their associated parameter spaces of varying dimensions.
In estimating the technology process in the neoclassical growth model using postwar US GDP data, they cast considerable doubt on the standard AR(1) assumption in favor of higher-order processes. They find that the posterior concentrates density on hump-shaped impulse responses for all endogenous variables, consistent with alternative empirical estimates and the rigidities behind many richer structural models. Sampling from noninvertible MA representations, a negative response of hours to a positive technology shock is contained within the posterior credible set. While the posterior contains significant uncertainty regarding the exact order, the results are insensitive to the choice of data filter; this contrasts with the authors' ARMA estimates of GDP itself, which vary significantly depending on the choice of HP or first-difference filter.
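RJMCMC itself is too involved for a short example, but the order-selection problem it tackles can be illustrated with a plain information-criterion search over ARMA(p,q) orders in statsmodels. This is a frequentist stand-in for the authors' Bayesian procedure, run on simulated data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

# Simulate an ARMA(2,1) series as a stand-in for a technology process.
ar = np.array([1.0, -0.6, -0.2])   # lag polynomial 1 - 0.6L - 0.2L^2
ma = np.array([1.0, 0.4])          # lag polynomial 1 + 0.4L
y = ArmaProcess(ar, ma).generate_sample(nsample=500)

# Grid search over (p, q), scoring each fit by BIC.
best = None
for p in range(4):
    for q in range(4):
        bic = ARIMA(y, order=(p, 0, q)).fit().bic
        if best is None or bic < best[0]:
            best = (bic, p, q)
print(f"BIC-preferred order: ARMA({best[1]},{best[2]})")
```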
What institutional arrangements for an independent central bank with a price stability mandate promote good policy outcomes when unconventional policies become necessary? Unconventional monetary policy poses challenges. The large-scale asset purchases needed to counteract the zero lower bound on nominal interest rates have uncomfortable fiscal and distributional consequences and require central banks to assume greater risks on their balance sheets.
In his paper, Athanasios Orphanides draws lessons from the experience of the Bank of Japan (BoJ) since the late 1990s for the institutional design of independent central banks. He concludes that a lack of clarity on the precise definition of price stability, coupled with concerns about the legitimacy of large balance sheet expansions, hinders policy: it encourages the central bank to eschew the decisive quantitative easing needed to reflate the economy and instead to accommodate too-low inflation. The BoJ’s experience with the zero lower bound suggests important benefits from a clear definition of price stability as a symmetric 2% goal for inflation, which the Bank adopted in 2013.
The paper uses an example to illustrate the importance of consistency between the empirical measurement of variables and their conceptual counterparts in estimated macroeconomic models. Since standard New Keynesian models do not account for demographic trends and sectoral shifts, the authors propose adjusting the hours worked per capita used to estimate such models accordingly, to enhance the consistency between the data and the model. Without this adjustment, low-frequency shifts in hours lead to unreasonable trends in the output gap, caused by the close link between hours and the output gap in such models.
The retirement wave of baby boomers, for example, lowers U.S. aggregate hours per capita, which leads to erroneous permanently negative output gap estimates following the Great Recession. After correcting hours for changes in the age composition, the estimated output gap instead closes gradually in the years following the Great Recession.
Financial market interactions can lead to large and persistent booms and recessions. Instability is an inherent threat to economies with speculative financial markets. A central bank’s interest rate setting can amplify the expectation feedback in the financial market and this can lead to unstable dynamics and excess volatility. The paper suggests that policy institutions may be well-advised to handle tools like asset price targeting with care since such instruments might add a structural link between asset prices and macroeconomic aggregates. Neither stock prices nor indices are a good indicator to base decisions on.
The level of capital tax gains has high explanatory power for the question of what drives economic inequality. On this basis, the authors develop a simple yet micro-founded portfolio selection model to explain the dynamics of wealth inequality given empirical tax series in the US. The results emphasize that the level and the transition speed of wealth inequality depend crucially on the degree of capital taxation. The projections predict that, continuing on the present path of capital taxation in the US, the gap between rich and poor is expected to shrink, whereas “massive” tax cuts would further increase the degree of wealth concentration.
We investigate the characteristics of infrastructure as an asset class from the investment perspective of a limited partner. While non-U.S. institutional investors gain exposure to infrastructure assets through a mix of direct investments and private fund vehicles, U.S. investors predominantly invest in infrastructure through private funds. We find that the stream of cash flows delivered by private infrastructure funds to institutional investors is very similar to that delivered by other types of private equity, as reflected by the frequency and amounts of net cash flows. U.S. public pension funds perform worse than other institutional investors in their infrastructure fund investments, although they are exposed to underlying deals with very similar project stage, concession terms, ownership structure, industry, and geographical location. By selecting funds that invest in projects with poor financial performance, U.S. public pension funds have created an implicit subsidy to infrastructure as an asset class, which we estimate within the range of $730 million to $3.16 billion per year depending on the benchmark.
Direct financing of consumer credit by individual investors or non-bank institutions through an implementation of marketplace lending is a relatively new phenomenon in financial markets. The emergence of online platforms has made this type of financial intermediation widely available. This paper analyzes the performance of marketplace lending using proprietary cash flow data for each individual loan from the largest platform, Lending Club. While individual loan characteristics would be important for amateur investors holding a few loans, sophisticated lenders, including institutional investors, usually form broad portfolios to benefit from diversification. We find high risk-adjusted performance of approximately 40 basis points per month for these basic loan portfolios. This abnormal performance indicates that Lending Club, and similar marketplace lenders, are likely to attract capital to finance a growing share of the consumer credit market. In the absence of a competitive response from traditional credit providers, these loans lower costs to the ultimate borrowers and increase returns for the ultimate lenders.
We study the relevance of signaling and marketing as explanations for the discount control mechanisms that a closed-end fund (CEF) may choose to adopt in its prospectus. These policies are designed to narrow the potential gap between share price and net asset value, measured by the fund’s discount. The two most common discount control mechanisms are explicit discretion to repurchase shares based on the magnitude of the fund discount and mandatory continuation votes that provide shareholders the opportunity to liquidate the fund. We find very limited evidence that a discount control mechanism serves as a costly signal of information. Funds with mandatory voting are not more likely to delist than the rest of the CEFs in general or whenever the fund discount is large. Similarly, funds that explicitly discuss share repurchases as a potential response do not subsequently buy back shares more often when discounts do increase. Instead, the existence of these policies is more consistent with marketing explanations, because they are associated with an increased probability of issuing more equity in subsequent periods.
This paper investigates how biases in macroeconomic forecasts are associated with economic surprises and market responses across asset classes around US data announcements. We find that the skewness of the distribution of economic forecasts is a strong predictor of economic surprises, suggesting that forecasters behave strategically (rational bias) and possess private information. Our results also show that consensus forecasts of US macroeconomic releases embed anchoring. Under these conditions, both economic surprises and the returns of assets that are sensitive to macroeconomic conditions are predictable. Our findings indicate that local equities and bond markets are more predictable than foreign markets, currencies and commodities. Economic surprises link to asset returns very distinctively through the stages of the economic cycle, and they depend strongly on whether economic releases are inflation- or growth-related. Yet, when forecasters fail to correctly forecast the direction of economic surprises, regret becomes a relevant cognitive bias in explaining asset price responses. We find that the behavioral and rational biases encountered in US economic forecasting also exist in Continental Europe, the United Kingdom and Japan, albeit to a lesser extent.
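A hedged sketch of the headline predictive exercise: compute the skewness of the cross-section of forecasts for each release and regress the subsequent surprise on it. The data-generating process below, in which a small informed subgroup shades its forecasts, is a hypothetical illustration of why skewness can predict surprises.

```python
import numpy as np
from scipy.stats import skew
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_releases, n_forecasters = 200, 30

# A small informed subgroup shades its forecasts toward its private
# signal, skewing the forecast distribution without moving the median.
signal = rng.standard_normal(n_releases)
forecasts = rng.standard_normal((n_releases, n_forecasters))
forecasts[:, :5] += 1.5 * signal[:, None]
actual = signal + 0.5 * rng.standard_normal(n_releases)

surprise = actual - np.median(forecasts, axis=1)   # consensus = median
fc_skew = skew(forecasts, axis=1)

fit = sm.OLS(surprise, sm.add_constant(fc_skew)).fit()
print(f"slope of surprise on forecast skewness: {fit.params[1]:.2f} "
      f"(t = {fit.tvalues[1]:.1f})")
```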
In the secondary art market, artists play no active role. This allows us to isolate cultural influences on the demand for female artists’ work from supply-side factors. Using 1.5 million auction transactions in 45 countries, we document a 47.6% gender discount in auction prices for paintings. The discount is higher in countries with greater gender inequality. In experiments, participants are unable to guess the gender of an artist simply by looking at a painting and they vary in their preferences for paintings associated with female artists. Women's art appears to sell for less because it is made by women.
While record-making prices at art auctions receive headline news coverage, artists typically do not receive any direct proceeds from those sales. Early-stage creative work in any field is perennially difficult to value, but the valuation, reward, and incentivization of artistic labor are particularly fraught. A core challenge in studying the real return on artists’ work is the extreme difficulty of accessing data from when an artwork was first sold. Galleries keep private records that are difficult to access and to match to public auction results. This paper, for the first time, uses archivally sourced primary market records for the artists Jasper Johns and Robert Rauschenberg. Although this approach restricts the size of the data set, it yields much more accurate returns on art than typical regression and hedonic models. We find that if Johns and Rauschenberg had retained 10% equity in their work when it was first sold, the returns to them when the work was resold at auction would have outperformed the US S&P 500 by between 2 and 986 times. This work opens up broad policy recommendations with regard to secondary art market sales, entrepreneurial strategies using blockchain technology, and implications for how we compensate creative work.
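The headline comparison reduces to simple arithmetic on two sale prices and an index return. The short worked sketch below uses entirely hypothetical numbers (the actual figures come from the archival gallery records used in the paper) to show the computation.

```python
# Hypothetical retained-equity calculation (all numbers made up).
primary_price = 900          # first gallery sale, USD
auction_price = 1_800_000    # later resale at auction, USD
years = 40                   # holding period

# A 10% retained stake scales both legs equally, so the multiple on the
# stake equals the multiple on the artwork itself.
art_multiple = (0.10 * auction_price) / (0.10 * primary_price)

# Benchmark: the same initial amount compounding at a hypothetical 7%/yr.
index_multiple = 1.07 ** years

print(f"art multiple:   {art_multiple:9,.0f}x")
print(f"index multiple: {index_multiple:9,.0f}x")
print(f"art vs index:   {art_multiple / index_multiple:9,.0f}x")
```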
We study the introduction of single-market liquidity provider incentives in fragmented securities markets. Specifically, we investigate whether fee rebates for liquidity providers enhance liquidity on the introducing market and thereby increase its competitiveness and market share. Further, we analyze whether single-market liquidity provider incentives increase the overall market liquidity available to market participants. To this end, we measure the specific liquidity contribution of individual markets to the aggregate liquidity in the fragmented market environment. While liquidity and market share of the venue introducing incentives increase, we find no significant effect on turnover and liquidity of the whole market.
Reliability and relevance of fair values: private equity investments and investee fundamentals
(2018)
We directly test the reliability and relevance of fair values reported by listed private equity firms (LPEs), where the unit of account for the fair value measurement (FVM) attribute is an investment stake in an individual investee company. FVMs are observable for multiple investment stakes, fair values are economically important, and granular data on the investee economic fundamentals that should underpin fair values are available in public disclosures. We find that LPE fund managers determine valuations based on accounting-based fundamentals—equity book value and net income—in line with those investors derive for listed companies. Additionally, our findings suggest that LPE fund managers apply a lower valuation weight to investee net income if direct market inputs are unobservable during investment value estimation. We interpret these findings as evidence that LPE fund managers do not mechanically apply the market valuation weights of publicly traded investees when determining valuations of non-listed ones. We also document that the judgments that LPE fund managers apply when determining investee valuations appear to be perceived as reliable by their investors.
We study the impact of transparency on liquidity in OTC markets. We do so by providing an analysis of liquidity in a corporate bond market without trade transparency (Germany) and comparing our findings to a market with full post-trade disclosure (the U.S.). We employ a unique regulatory dataset of transactions of German financial institutions from 2008 until 2014 and find the following: First, overall trading activity is much lower in the German market than in the U.S. Second, similar to the U.S., the determinants of German corporate bond liquidity are in line with search theories of OTC markets. Third, surprisingly, frequently traded German bonds have transaction costs that are 39-61 bp lower than those of a matched sample of bonds in the U.S. Our results support the notion that, while market liquidity is generally higher in transparent markets, a subset of bonds could be more liquid in more opaque markets because investors "crowd" their demand into a small number of more actively traded securities.
This paper analyzes how the combination of borrowing constraints and idiosyncratic risk affects the equity premium in an overlapping generations economy. I find that introducing a zero-borrowing constraint in an economy without idiosyncratic risk increases the equity premium by 70 percent, which means that the mechanism described in Constantinides, Donaldson, and Mehra (2002) is dampened because of the large number of generations and production. With social security the effect of the zero-borrowing constraint is a lot weaker. More surprisingly, when I introduce idiosyncratic labor income risk in an economy without a zero-borrowing constraint, the equity premium increases by 50 percent, even though the income shocks are independent of aggregate risk and are not permanent. The reason is that idiosyncratic risk makes the endogenous natural borrowing limits much tighter, so that they have a similar effect to an exogenously imposed zero-borrowing constraint. This intuition is confirmed when I add idiosyncratic risk in an economy with a zero-borrowing constraint: neither the equity premium nor the Sharpe ratio change, because the zero-borrowing constraint is already tighter than the natural borrowing limits that result when idiosyncratic risk is added.
We propose a spatiotemporal approach for modeling risk spillovers using time-varying proximity matrices based on observable financial networks and introduce a new bilateral specification. We study covariance stationarity and identification of the model, and analyze consistency and asymptotic normality of the quasi-maximum-likelihood estimator. We show how to isolate risk channels and discuss how to compute target exposures able to reduce system variance. An empirical analysis of euro-area cross-country holdings shows that Italy and Ireland are key players in spreading risk, France and Portugal are the major risk receivers, and we uncover Spain's non-trivial role as a risk middleman.
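A minimal simulation of the kind of dynamics such a model captures: shocks propagate through a time-varying, row-normalised proximity matrix W_t built from (here synthetic) cross-holdings. The single-spillover-parameter recursion below is a simplification of the paper's bilateral specification, and all inputs are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 5, 200    # countries, periods
rho = 0.4        # spillover strength (illustrative)

def proximity():
    """Row-normalised matrix from synthetic cross-holdings at one date."""
    H = np.abs(rng.standard_normal((N, N)))
    np.fill_diagonal(H, 0.0)
    return H / H.sum(axis=1, keepdims=True)

y = np.zeros((T, N))
for t in range(1, T):
    W = proximity()                       # time-varying proximity matrix
    y[t] = rho * W @ y[t - 1] + rng.standard_normal(N)

print("per-country variance of simulated risk:", y.var(axis=0).round(2))
```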
We show that bond purchases undertaken in the context of quantitative easing efforts by the European Central Bank created a large mispricing between the market for German and Italian government bonds and their respective futures contracts. On top of the direct effect of the buying pressure on bond prices, we show three indirect effects through which the scarcity of bonds resulting from the asset purchases drove a wedge between the futures contracts and the underlying bonds: the deterioration of bond market liquidity, the increased bond specialness on the repurchase agreement market, and the greater uncertainty about bond availability as collateral.
We study the role of various trader types in providing liquidity in spot and futures markets based on complete order-book and transactions data as well as cross-market trader identifiers from the National Stock Exchange of India for a single large stock. During normal times, short-term traders who carry little inventory overnight are the primary intermediaries in both spot and futures markets, and changes in futures prices Granger-cause changes in spot prices. However, during two days of fast crashes, Granger-causality ran both ways. Both crashes were due to large-scale selling by foreign institutional investors in the spot market. Buying by short-term traders and cross-market traders was insufficient to stop the crashes. Mutual funds, patient traders with better trade-execution quality who were initially slow to move in, eventually bought sufficient quantities leading to price recovery in both markets. Our findings suggest that market stability requires the presence of well-capitalized standby liquidity providers.
An important assumption underlying the designation of some insurers as systemically important is that their overlapping portfolio holdings can result in common selling. We measure the overlap in holdings using cosine similarity, and show that insurers with more similar portfolios have larger subsequent common sales. This relationship can be magnified for some insurers when they are regulatory capital constrained or markets are under stress. When faced with an exogenous liquidity shock, insurers with greater portfolio similarity have even larger common sales that impact prices. Our measure can be used by regulators to predict which institutions may contribute most to financial instability through the asset liquidation channel of risk transmission.
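The similarity measure itself is straightforward to compute. A minimal sketch with hypothetical holdings data:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows: insurers; columns: par amount held per security (hypothetical).
holdings = np.array([
    [100.0,  50.0,   0.0,  10.0],
    [ 90.0,  60.0,   5.0,   0.0],
    [  0.0,   5.0, 120.0,  80.0],
])

sim = cosine_similarity(holdings)   # pairwise portfolio similarity
print(np.round(sim, 2))
# Insurers 0 and 1 hold near-identical portfolios (similarity close to 1),
# making them the natural candidates for correlated selling under stress.
```

In the paper's application, the rows would be insurers' security-level holdings; the sketch only illustrates the computation.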
This paper investigates inertia within and across banks in retail deposit markets using detailed panel data on consumer choices and account characteristics. In a structural choice model, I find that the costs of inertia are around one third higher for switching accounts across banks than for switching within banks. Observable proxies of bank-level switching costs (the number and type of additional financial products) explain most of this cost premium, while online banking usage reduces inertia. Consistent with theory, I provide evidence that banks incorporate inertia into their pricing, as older accounts pay lower rates than comparable newer accounts. Counterfactual policies reducing inertia shift market share to more competitive smaller banks, but eliminating inertia within banks alone already yields large potential gains in consumer surplus. This suggests that facilitating bank switching alone might be insufficient to improve consumer choices.
In recent years, European financial regulation has undergone a marked reorientation with respect to the shadow banking system, manifested first and foremost in its reframing as market-based finance. Although shadow banking was initially identified as a source of systemic risk, regulatory initiatives not only fell far short of the envisaged changes but were substantially modified so that they now aim at revitalizing these activities. This post-crisis reorientation of European regulatory agency, from curtailing shadow banking to facilitating resilient market-based finance, has puzzled academic observers, dismissed by some as mere rebranding or taken as a sign of regulatory capture. Against both readings, this paper documents the central role of regulatory agency in shadow banking's reconfiguration. It does so by analyzing the European initiatives concerning the regulation of Asset-Backed Commercial Paper (ABCP) and another prime example of shadow banking, Money Market Mutual Funds (MMFs). Based on documentary analysis and expert interviews, we trace how the recently published EU frameworks for MMFs and ABCP were designed (in particular the STS, CRR, and MMF regulations of 2017). Furthermore, we show how they were transformed in such a way that their final versions re-establish the shadow banking chain linking MMFs, the ABCP market, and, arguably, the regular banking system. This transformation is driven by a new form of proactive European regulatory agency that aims at creating a regulatory infrastructure able to sustain the orderly flow of real-economy debt. Far from being captured by industry, regulators acted consciously and in cooperation with private actors in order to maintain a channel for credit creation outside of bank credit, a task made more complicated by rushed, politicized final negotiations coupled with technical complexity. This paper thereby contributes to a new strand of literature that sees the creation and reconfiguration of the shadow banking system as characterized by the active and conscious role of state actors.
We propose a unified framework to measure the effects of different reforms of the pension system on retirement ages and macroeconomic indicators in the face of demographic change. A rich overlapping generations (OLG) model is built, and endogenous retirement decisions are explicitly modeled within a public pension system. Heterogeneity with respect to consumption preferences, wage profiles, and survival rates is embedded in the model. Besides the expected direct effects of these reforms on the behavior of households, we observe that feedback effects occur. Results suggest that individual retirement decisions are strongly influenced by numerous incentives produced by the pension system and by macroeconomic variables, such as the statutory eligibility age, adjustment rates, the presence of a replacement rate, and interest rates. Those decisions, in turn, have several impacts on the macroeconomy, which can create feedback cycles working through equilibrium effects on interest rates and wages. Taken together, these reform scenarios have strong implications for the sustainability of pension systems. Because of the rich nature of our unified model framework, we are able to rank the reform proposals according to several individual and macroeconomic measures, thereby providing important support for policy recommendations on pension systems.
The paper investigates the determinants of the idiosyncratic volatility puzzle by allowing for linkages across asset returns. The first contribution of the paper is to show that portfolios sorted by increasing indegree, computed on a network estimated through pairwise Granger causality tests, have lower expected returns that are not related to idiosyncratic volatility. Secondly, empirical evidence indicates that stocks with higher idiosyncratic volatility have lower exposure to the indegree risk factor.
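A hedged sketch of the network construction described: given a matrix of pairwise Granger-causality p-values (computed elsewhere, e.g. with statsmodels as in the example above), draw a directed edge i -> j when stock i Granger-causes stock j at the 5% level, then sort stocks by indegree. All inputs below are synthetic placeholders.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n_stocks = 20
# pvals[i, j]: p-value of the test "stock i Granger-causes stock j" (synthetic here)
pvals = rng.uniform(size=(n_stocks, n_stocks))

G = nx.DiGraph()
G.add_nodes_from(range(n_stocks))
for i in range(n_stocks):
    for j in range(n_stocks):
        if i != j and pvals[i, j] < 0.05:
            G.add_edge(i, j)  # edge i -> j: i Granger-causes j

indegree = dict(G.in_degree())
# sort stocks into portfolios by increasing indegree, as in the abstract
ranked = sorted(indegree, key=indegree.get)
quintiles = np.array_split(ranked, 5)
```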
We examine how a firm's investment behavior affects the investment of neighboring firms. Economic theory yields ambiguous predictions regarding the direction of firm peer effects, and, consistent with earlier work, we find in OLS analyses that firms within an area display similar investment behavior. Exploiting time variation in the rise of U.S. states' corporate income taxes and heterogeneity in firms' exposure to increases in corporate income tax rates, we identify the causal impact of local firms' investments. Using this exposure as an instrumental variable in a 2SLS estimation, we find that an increase in local firms' investment reduces the investment of a local peer firm. This effect is more pronounced where local competition among firms is stronger and supports theories in which firm investments are strategic substitutes due to competition.
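The 2SLS step could be sketched as follows with the linearmodels package; the variable names and data-generating process below are purely illustrative, not the paper's data.

```python
import numpy as np
import pandas as pd
from linearmodels.iv import IV2SLS  # pip install linearmodels

rng = np.random.default_rng(2)
n = 2000
# instrument: peers' exposure to state corporate income tax increases (hypothetical)
tax_exposure = rng.normal(size=n)
peer_invest = -0.4 * tax_exposure + rng.normal(size=n)  # endogenous regressor
invest = -0.3 * peer_invest + rng.normal(size=n)        # outcome: own investment

df = pd.DataFrame({"invest": invest, "peer_invest": peer_invest,
                   "tax_exposure": tax_exposure})
# bracket notation: [endogenous ~ instruments]
res = IV2SLS.from_formula("invest ~ 1 + [peer_invest ~ tax_exposure]", df).fit(
    cov_type="robust")
print(res.summary)
```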
We use minutes from 17,000 financial advisory sessions and corresponding client portfolio data to study how active client involvement affects advisor recommendations and portfolio outcomes. We find that advisors confronted with acquiescent clients stick to their standards and recommend expensive but well-diversified mutual fund portfolios. However, if clients take an active role in the meetings, advisors deviate markedly from their standards, resulting in poorer portfolio diversification and lower Sharpe ratios. Our finding that advisors cater to client requests parallels the phenomenon of doctors prescribing antibiotics to insistent patients even when inappropriate, and implies that pandering diminishes the quality of advice.
This paper provides a complete characterization of optimal contracts in principal-agent settings where the agent's action has persistent effects. We model general information environments via the stochastic process of the likelihood ratio. The martingale property of this performance metric captures the information benefit of deferral. Costs of deferral may result both from the agent's relative impatience and from her consumption-smoothing needs. If the relatively impatient agent is risk-neutral, optimal contracts take a simple form in that they reward only maximal performance on at most two payout dates. If the agent is additionally risk-averse, optimal contracts stipulate rewards for a larger selection of dates and performance states: the performance hurdle required to obtain the same level of compensation increases over time, whereas the pay-performance sensitivity declines.
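The martingale property invoked here is standard; in generic notation (which need not match the paper's), the performance metric is the likelihood ratio of the measure induced by the agent's action against a reference measure:

```latex
\ell_t = \frac{d\mathbb{P}^{a}}{d\mathbb{P}^{0}}\Big|_{\mathcal{F}_t},
\qquad
\mathbb{E}^{\mathbb{P}^{0}}\!\left[\,\ell_{t+1} \mid \mathcal{F}_t\,\right] = \ell_t .
```

Because the likelihood ratio is a martingale under the reference measure, waiting can only (weakly) refine the principal's information about the action, which is the information benefit of deferral mentioned above.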
A growing body of literature shows the importance of financial literacy in households' financial decisions. However, fewer studies focus on understanding the determinants of financial literacy. Our paper fills this gap by analyzing a specific determinant, the educational system, to explain the heterogeneity in financial literacy scores across Germany. We suggest that the lower financial literacy observed in East Germany is partially caused by a different institutional framework experienced during the Cold War, more specifically, by the socialist educational system of the GDR which affected specific cohorts of individuals. By exploiting the unique set-up of the German reunification, we identify education as a channel through which institutions and financial literacy are related in the German context.
How demanding and consistent is the 2018 stress test design in comparison to previous exercises?
(2018)
Bank regulators have the discretion to discipline banks by executing enforcement actions to ensure that banks correct deficiencies regarding safe and sound banking principles. We highlight the trade-offs that the execution of enforcement actions entails for financial stability. Following this, we provide an overview of the differences in the legal framework governing supervisors' execution of enforcement actions in the Banking Union and the United States. After discussing work on the effects of enforcement actions on bank behaviour and the real economy, we present data on the evolution of enforcement actions and monetary penalties issued by U.S. regulators. We conclude by noting the importance of supervisors levying efficient monetary penalties and by stressing that a division of competences among different regulators should not lead to a loss of efficiency in the execution of enforcement actions.
A new governance architecture for European financial markets? Towards a European supervision of CCPs
(2018)
Does the new European outlook on financial markets, as voiced by the EU Commission since the launch of the Capital Markets Union, imply a movement of the EU towards an alignment of market integration and direct supervision of common rules? This paper sets out to answer this question for the case of common supervision of Central Counterparties (CCPs) in the European Union. These entities have gained crucial importance post-crisis due to new regulation requiring the mandatory clearing of standardized derivative contracts, which transforms clearing houses into central nodes for cross-border financial transactions. While the EU-wide regulatory framework EMIR, enacted in 2012, stipulates common regulatory requirements, it still relies on home-country supervision of those rules, arguably leading to regulatory as well as supervisory arbitrage. The regulatory reform to stabilize the OTC derivatives market thus replicated at its center a governance flaw that had been identified as one of the major causes of the severity of the financial crisis in the EU: the coupling of intense competition based on private risk-management systems with national supervision of European rules. This paper traces the history of this problem awareness and asks why serious negotiations at the EU level envisioning a common supervision of CCPs, to fix the flawed system of governance, ensued only in 2017. Analyzing this shift in the European governance architecture, we argue that Brexit has opened a window of opportunity for a centralization of CCP supervision: it aligns the urgency of the problem with the material interests of crucial political stakeholders, in particular Germany and France, creating the possibility of a grand European bargain.
Improving the financial conditions of individuals requires an understanding of the mechanisms through which bad financial decision-making leads to worse financial outcomes. From a theoretical point of view, a key candidate for inducing mistakes in financial decision-making is so-called present bias, one of the cornerstones of behavioral economics. According to theory, present-biased households should behave systematically differently when it comes to consumption and saving decisions, as they should be more prone to spending too much and saving too little.
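For reference, present-biased preferences are commonly formalized via quasi-hyperbolic (beta-delta) discounting (Laibson, 1997); a standard statement:

```latex
U_t = u(c_t) + \beta \sum_{k=1}^{\infty} \delta^{k}\, u(c_{t+k}),
\qquad 0 < \beta < 1, \; 0 < \delta \le 1 ,
```

where beta below one discounts all future periods relative to the present over and above exponential discounting, generating exactly the over-spending and under-saving described above.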
In this policy letter we show how high-frequency financial transaction data available in digitized form allows us to categorize individual financial decision-making precisely as present-biased or not. Using this categorization, we find that one out of five individuals in our sample exhibits present bias and that this present-biased behavior is associated with stronger use of overdrafts. As overdrafts represent a particularly expensive form of short-term borrowing, their systematic use can be interpreted as a measure of suboptimal financial decision-making. Overall, our results indicate that the combination of economic theory and Big Data can generate valuable insights with applications for policy makers and businesses alike.
The object of this study is one of the most ambitious projects of twentieth-century art history: Aby Warburg's 'Atlas Mnemosyne', conceived in the summer of 1926 – when the first mention of a 'Bilderatlas', or "atlas of images", occurs in his journal – and cut short three years later, unfinished, by his sudden death in October 1929. Mnemosyne consisted of a series of large black panels, about 170x140 cm, on which were attached black-and-white photographs of paintings, sculptures, book pages, stamps, newspaper clippings, tarot cards, coins, and other types of images. Warburg kept changing the order of the panels and the position of the images until the very end, and three main versions of the Atlas have been recorded: one from 1928 (the "1-43 version", with 682 images); one from the early months of 1929, with 71 panels and 1,050 images; and the one Warburg was working on at the time of his death, also known as the "1-79 version", with 63 panels and 971 images (the one we will examine). But Warburg was planning to have more panels – possibly many more – and there is no doubt that Mnemosyne is a dramatically unfinished and controversial object of study.
Patterns and interpretation
(2017)
One thing for sure: digitization has completely changed the literary archive. People like me used to work on a few hundred nineteenth-century novels; today, we work on thousands of them; tomorrow, hundreds of thousands. This has had a major effect on literary history, obviously enough, but also on critical methodology; because, when we work on 200,000 novels instead of 200, we are not doing the same thing, 1,000 times bigger; we are doing a different thing. The new scale changes our relationship to our object, and in fact 'it changes the object itself'.
The Emotions of London
(2016)
A few years ago, a group formed by Ben Allen, Cameron Blevins, Ryan Heuser, and Matt Jockers decided to use topic modeling to extract geographical information from nineteenth-century novels. Though the study was eventually abandoned, it had revealed that London-related topics had become significantly more frequent in the course of the century, and when some of us were later asked to design a crowd-sourcing experiment, we decided to add a further dimension to those early findings, and see whether London place-names could become the cornerstone for an emotional geography of the city.
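Topic modeling of the kind mentioned can be sketched in a few lines with scikit-learn; the toy passages below are placeholders for novel excerpts, not the project's actual corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# toy stand-ins for passages from nineteenth-century novels
docs = [
    "fog on the Thames and the lamps of London Bridge",
    "the crowded streets of the City, omnibuses and clerks",
    "a quiet parsonage in the Yorkshire countryside",
    "harvest fields and the village green at dusk",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# top words per topic: ideally one urban cluster and one rural cluster
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-5:][::-1]
    print(k, [terms[i] for i in top])
```

In a setup like this, the frequency of London-related topics across a dated corpus is what would let one trace their rise over the century.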
Literature, measured
(2016)
There comes a moment, in digital humanities talks, when someone raises a hand and says: "Ok. Interesting. But is it really new?" Good question... And let's leave aside the obvious lines of defense, such as "but the field is still only at its beginning!", or "and traditional literary criticism, is that always new?" All true, and all irrelevant; because the digital humanities have presented themselves as a radical break with the past, and must therefore produce evidence of such a break. And the evidence, let's be frank, is not strong. What is there, moreover, comes in a variety of forms, beginning with the slightly paradoxical fact that, in a new approach, not everything has to be new. When "Network Theory, Plot Analysis" pointed out, in passing, that a network of Hamlet had Hamlet at its center, the New York Times gleefully mentioned the passage as an unmistakable sign of stupidity. Maybe; but the point, of course, was not to present Hamlet's centrality as a surprise; it was exactly the opposite: had the new approach not found Hamlet at the center of the play, its plausibility would have disintegrated. Before using network theory for dramatic analysis, I had to test it, and prove that it corroborated the main results of previous research.
Of the novelties introduced by digitization in the study of literature, the size of the archive is probably the most dramatic: we used to work on a couple of hundred nineteenth-century novels, and now we can analyze thousands of them, tens of thousands, tomorrow hundreds of thousands. It's a moment of euphoria for quantitative literary history: like having a telescope that makes you see entirely new galaxies. And it's a moment of truth: so, have the digital skies revealed anything that changes our knowledge of literature? This is not a rhetorical question. In the famous 1958 essay in which he hailed "the advent of a quantitative history" that would "break with the traditional form of nineteenth-century history", Fernand Braudel mentioned as its typical materials "demographic progressions, the movement of wages, the variations in interest rates [...] productivity [...] money supply and demand." These were all quantifiable entities, clearly enough; but they were also completely new objects compared to the study of legislation, military campaigns, political cabinets, diplomacy, and so on. It was this double shift that changed the practice of history; not quantification alone. In our case, though, there is no shift in materials: we may end up studying 200,000 novels instead of 200; but they're all still novels. Where exactly is the novelty?
Different scales, different features. It's the main difference between the thesis we have presented here and the one that has so far dominated the study of the paragraph. By defining it as "a sentence writ large", or, symmetrically, as "a short discourse", previous research was implicitly asserting the irrelevance of scale: sentence, paragraph, and discourse were all equally involved in the "development of one topic". We have found the exact opposite: 'scale is directly correlated to the differentiation of textual functions'. By this, we don't simply mean that the scale of sentences or paragraphs allows us to "see" style or themes more clearly. This is true, but secondary. Paragraphs allow us to "see" themes, because themes fully "exist" only at the scale of the paragraph. Ours is not just an epistemological claim, but an ontological one: if style and themes and episodes exist in the form they do, it's because writers work at different scales – and do different things according to the level at which they are operating.
Loudness in the novel
(2014)
The novel is composed entirely of voices: the most prominent among them is typically that of the narrator, which is regularly intermixed with those of the various characters. In reading through a novel, the reader "hears" these heterogeneous voices as they occur in the text. When the novel is read out loud, the voices are audibly heard. They are also heard, however, when the novel is read silently: in this latter case, the voices are not verbalized for others to hear, but acoustically created and perceived in the mind of the reader. Simply put: sound, in the context of the novel, is fundamentally a product of the novel's voices. This conception of sound mechanics may at first seem unintuitive—sound seems to be the product of oral reading—but it is only by starting with the voice that one can fully appreciate sound's function in the novel. Moreover, such a conception of sound mechanics finds affirmation in the works of both Mikhail Bakhtin and Elaine Scarry: "In the novel," writes Bakhtin, "we can always hear voices (even while reading silently to ourselves)."
The concept of length, the concept is synonymous, the concept is nothing more than, the proper definition of a concept ... Forget programs and visions; the operational approach refers specifically to concepts, and in a very specific way: it describes the process whereby concepts are transformed into a series of operations—which, in their turn, allow us to measure all sorts of objects. Operationalizing means building a bridge from concepts to measurement, and then to the world. In our case: from the concepts of literary theory, through some form of quantification, to literary texts.
We would study not style as such, but style 'at the scale of the sentence': the lowest level, it seemed, at which style as a distinct phenomenon became visible. Implicitly, we were defining style as a combination of smaller linguistic units, which made it, in consequence, particularly sensitive to changes in scale—from words to clauses to whole sentences.
The nineteenth century in Britain saw tumultuous changes that reshaped the fabric of society and altered the course of modernization. It also saw the rise of the novel to the height of its cultural power as the most important literary form of the period. This paper reports on a long-term experiment in tracing such macroscopic changes in the novel during this crucial period. Specifically, we present findings on two interrelated transformations in novelistic language that reveal a systemic concretization in language and fundamental change in the social spaces of the novel. We show how these shifts have consequences for setting, characterization, and narration as well as implications for the responsiveness of the novel to the dramatic changes in British society.
This paper has a second strand as well. This project was simultaneously an experiment in developing quantitative and computational methods for tracing changes in literary language. We wanted to see how far quantifiable features such as word usage could be pushed toward the investigation of literary history. Could we leverage quantitative methods in ways that respect the nuance and complexity we value in the humanities? To this end, we present a second set of results, the techniques and methodological lessons gained in the course of designing and running this project.
If there is one thing to be learned from David Foster Wallace, it is that cultural transmission is a tricky game. This was a problem Wallace confronted as a literary professional, a university-based writer during what Mark McGurl has called the Program Era. But it was also a philosophical issue he grappled with on a deep level as he struggled to combat his own loneliness through writing. This fundamental concern with literature as a social, collaborative enterprise has also gained some popularity among scholars of contemporary American literature, particularly McGurl and James English: both critics explore the rules by which prestige or cultural distinction is awarded to authors (English; McGurl). Their approach requires a certain amount of empirical work, since these claims move beyond the individual experience of the text into forms of collective reading and cultural exchange influenced by social class, geographical location, education, ethnicity, and other factors. Yet McGurl and English's groundbreaking work is limited by the very forms of exclusivity they analyze: the protective bubble of creative writing programs in the academy and the elite economy of prestige surrounding literary prizes, respectively. To really study the problem of cultural transmission, we need to look beyond the symbolic markets of prestige to the real market, the site of mass literary consumption, where authors succeed or fail based on their ability to speak to that most diverse and complicated of readerships: the general public. Unless we study what I call the social lives of books, we make the mistake of keeping literature in the same ascetic laboratory that Wallace tried to break out of with his intense authorial focus on popular culture, mass media, and everyday life.
In the last few years, literary studies have experienced what we could call the rise of quantitative evidence. This had happened before, of course, without producing lasting effects, but this time it's probably going to be different, because this time we have digital databases and automated data retrieval. As Michel and Lieberman's recent article on "Culturomics" made clear, the width of the corpus and the speed of the search have increased beyond all expectations: today, we can replicate in a few minutes investigations that took a giant like Leo Spitzer months and years of work. When it comes to phenomena of language and style, we can do things that previous generations could only dream of.
When it comes to language and style. But if you work on novels or plays, style is only part of the picture. What about plot – how can that be quantified? This paper is the beginning of an answer, and the beginning of the beginning is network theory. This is a theory that studies connections within large groups of objects: the objects can be just about anything – banks, neurons, film actors, research papers, friends... – and are usually called nodes or vertices; their connections are usually called edges; and the analysis of how vertices are linked by edges has revealed many unexpected features of large systems, the most famous one being the so-called "small-world" property, or "six degrees of separation": the uncanny rapidity with which one can reach any vertex in the network from any other vertex. The theory proper requires a level of mathematical intelligence which I unfortunately lack, and it typically uses vast quantities of data that will also be missing from my paper. But this is only the first in a series of studies we're doing at the Stanford Literary Lab; and even at this early stage, a few things emerge.
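A minimal sketch of the vertex-and-edge formalism just described, using networkx and a handful of illustrative Hamlet interactions (the edge list is a toy, not the paper's data):

```python
import networkx as nx

# toy speaker-to-speaker edges, loosely inspired by Hamlet
edges = [
    ("Hamlet", "Horatio"), ("Hamlet", "Claudius"), ("Hamlet", "Gertrude"),
    ("Hamlet", "Ophelia"), ("Hamlet", "Ghost"), ("Claudius", "Gertrude"),
    ("Claudius", "Polonius"), ("Polonius", "Ophelia"), ("Horatio", "Ghost"),
]
G = nx.Graph(edges)

print(nx.degree_centrality(G))             # Hamlet should sit at the center
print(nx.average_shortest_path_length(G))  # short paths: the "small-world" flavor
```

With real play data, the edge list would record which characters exchange speech, and centrality and path-length statistics become quantitative handles on plot.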
This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin – which tried to establish whether computer-generated algorithms could "recognize" literary genres. You take 'David Copperfield', run it through a program without any human input – "unsupervised", as the expression goes – and ... can the program figure out whether it's a gothic novel or a 'Bildungsroman'? The answer is, fundamentally, Yes: but a Yes with so many complications that it is necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.
We develop a model that reproduces the average return and volatility spread between sin and non-sin stocks. Our investors do not necessarily boycott sin companies. Rather, they are open to investing in any company while trading off dividends against ethicalness. We show that when dividends and ethicalness are complementary goods and investors are sufficiently risk-averse, the model predicts that the dividend share of sin companies exhibits a positive relation with the future return and volatility spreads. Our empirical analysis supports the model's predictions.
In the last decade, central bank interventions, flights to safety, and the shift in derivatives clearing have resulted in exceptionally high demand for high-quality liquid assets, such as German treasuries, in the securities lending market, alongside traditional repo market activity. Despite the high demand, realizable securities lending income has remained economically negligible for most beneficial owners. We provide empirical evidence of pricing inefficiencies in the non-transparent, oligopolistic securities lending market for German treasuries from 2006 to 2015. Consistent with the theory of Duffie, Gârleanu, and Pedersen (2005), we find that the interests of less connected market participants are underrepresented; this is evident in the longer-maturity segment, where lenders are more likely to be conservative passive investors, such as pension funds and insurance firms. The low price elasticity in this segment prevents these beneficial owners from fully capitalizing on the additional income from securities lending, giving rise to important negative welfare implications.
In this study we investigate which economic ideas were prevalent in the post-crisis macroprudential discourse in order to understand the availability of ideas for reform-minded agents. We base our analysis on new findings in the field of ideational shifts and regulatory science, which posit that change agents engage with new ideas pragmatically and strategically in their effort to have their economic ideas institutionalized. We argue that in these epistemic battles over new regulation, scientific backing by academia is the key resource determining the outcome. We show that the reforms implemented internationally follow this pattern. In our analysis we contrast the entire discourse on systemic risk and macroprudential regulation with Borio's initial 2003 proposal for a macroprudential framework. We find that mostly cross-sectional measures targeted at increasing the resilience of the financial system, rather than inter-temporal measures dampening the financial cycle, have been implemented. We provide evidence for the lack of support for new macroprudential thinking within academia and argue that this is partially responsible for the absence of anti-cyclical macroprudential regulation. Most worryingly, the financial cycle is largely absent from the academic discourse and is only tacitly assumed, rather than fully fleshed out, in technocratic discourses, pointing to the possibility that no anti-cyclical measures will be forthcoming.
This paper gives an account of the unmaking of Soviet workers at the Vernissage in Armenia. I argue that the unmaking of Soviet workers, first, is the irrelevance of Soviet workers as workers once they lost their jobs after the collapse of the Soviet Union and came to the Vernissage to trade. During the Soviet period, private trade was forbidden, and the Soviet government persecuted people who dared to engage in it. Consequently, many people grew up thinking of trade as a criminal activity that was non-productive and parasitic, as opposed to productive work that facilitated the modernization of the USSR. After the dissolution of the USSR, when trade was liberalized and many former Soviet workers were pushed into trade as they lost their jobs, it still retained its quality of not being “real” work, to borrow Roberman’s (2013) wording. Even 25 years after the dissolution of the USSR, former Soviet workers at the Vernissage still want to be identified with their former Soviet occupations and not with trade. However, now engaged in trade, former Soviet workers came up with a “new” way of establishing identity and hierarchy—through production. I describe this “new” way as “the identification game”; employing it, I demonstrate how former Soviet workers at the Vernissage identify and represent themselves as masters, whose work is productive and intellectual. In doing so, they single out resellers, people who resell the work of other masters, by implying that their work is parasitic and selfish. However, this “identification game” is reified only by the older generation of traders, former Soviet workers. The younger generation of traders at the Vernissage, which does not have any experience of being Soviet workers, is disengaged from it, thus undermining the Soviet view of trade as not “real” work and making it irrelevant in the postsocialist era. Thus, I contend that the unmaking of Soviet workers consists in, first, their irrelevance as workers in a postsocialist period, and second, the irrelevance of their ideas about trade as not “real” work. Furthermore, to support my depiction of a master who engages in “the identification game” and a younger-generation trader who is disengaged from it, I give two ethnographic portraits of traders at the Vernissage. I assert that the disengagement of a younger generation of traders at the Vernissage signals a change in the perception of trade as “real” work and runs parallel to the unmaking of Soviet workers.
In the context of Brexit, changes to the regulatory architecture for CCPs that empower the European securities markets regulator are under way to prevent the threat of a regulatory race to the bottom. However, this empowerment currently leaves the national supervision of common European rules within the EU intact. This policy letter argues that supervisory arbitrage is as much a threat within the EU as outside of it, which is why a common supervision of CCP rules in the EU is called for. The paper traces the origins of the current set-up and criticizes the EU Commission's current regulatory proposal as too cumbersome, while discussing possible ways forward to achieve European supervision. In contrast to the Commission's current proposal, we call for unified supervision within ESMA, combined with a European fiscal backstop.
This policy letter provides evidence for the crucial importance of the initial regulatory treatment of a financial innovation for its further development by exploring the emergence and initial legal framing of off-balance-sheet leasing in Germany. In the absence of a legal framework, lease contracts emerged as an innovative social practice of off-balance-sheet financing. However, this lack of legal framing impeded the development of the innovation, as it also created legal uncertainties. This changed with the initial legal framing of leasing in the 1970s, which eliminated those legal uncertainties: off-balance-sheet leasing entered a stunning period of growth, while laying the foundation for regulatory resilience against efforts to abandon the off-balance-sheet treatment of leases. Since the initial legal framing is crucial for the further development of a financial innovation, we propose the French approach to the initial vetting of new financial products, in which principles-based rules are aligned with the capability of regulators to intervene even when a financial innovation complies with the letter of the law. In this way, regulators could police the frontier of financial innovation and weed out products that are entirely or mainly driven by regulatory arbitrage considerations, while maintaining their beneficial elements.
While the debate about the need for and merits of cryptocurrency regulation is ongoing, the unprecedented price hikes of cryptocurrencies towards the end of 2017 triggered a somewhat unexpected sort of regulation in the form of public statements by governments and financial supervisors. It kicked in rather quickly and turned out to be much more effective than imagined. These interventions can be identified as one of the main factors that drove asset prices down, thereby preventing destabilizing bubbles. The experience of the supervisory response to the cryptocurrency bubble of the past months holds important insights for any prospective regulation of cryptocurrencies. First, public statements are a highly effective regulatory tool in the short term because they manage market expectations, an effect well known as forward guidance in monetary policy. So far, the legal framework in the EU takes insufficient account of the regulatory role of public statements. Second, regulation needs to keep up with the remarkable speed of fintech innovation. Some regulators have addressed the challenge by adopting a 'sandbox' approach. However, the 'sandbox' approach clearly calls for international cooperation. To achieve a balance between safety and innovation, international cooperation should emulate the experimental character of sandboxes. One could conceive of a 'sandbox for regulators', an arrangement that would facilitate the exchange of information on regulatory initiatives among authorities as well as the coordination of communication and forward guidance.
To estimate the demand for labor, we use a combination of detailed employment data and the outcomes of procurement auctions, and compare the employment of the winner of an auction with the employment of the second-ranked firm (i.e. the runner-up). Assuming similar ex-ante winning probabilities for both firms, we may view winning an auction as an exogenous shock to a firm's production and its demand for labor. We utilize daily data from almost 900 construction firms and about 3,000 auctions in Austria from 2006 to 2009. Our main results show that the winning firm significantly increases labor demand in the weeks following an auction, but only in the years before the recent economic crisis. It employs about 80 workers more after the auction than the runner-up firm. Most of the adjustment takes place within one month of the demand shock. Winners predominantly fire fewer workers after winning than runner-up firms do. In the crisis, however, firms do not employ more workers than their competitors after winning an auction. We discuss explanations such as labor hoarding and crisis-induced productivity improvements, as well as implications for fiscal and stimulus policy in the crisis.
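At its core, the winner-versus-runner-up comparison amounts to regressing post-auction employment changes on a winner dummy. A synthetic-data sketch (the 80-worker effect mirrors the abstract's estimate; everything else is made up):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 6000  # two firms per auction: winner and runner-up
df = pd.DataFrame({"winner": np.tile([1, 0], n // 2)})
# post-auction employment change, with a synthetic 80-worker winner effect
df["d_employment"] = 80 * df["winner"] + rng.normal(0, 40, n)

res = smf.ols("d_employment ~ winner", data=df).fit(cov_type="HC1")
print(res.params["winner"])  # recovers the ~80-worker employment gap
```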
Departing from the principle of absolute priority, CoCo bonds are particularly exposed to bank losses despite not conferring ownership rights. This paper shows the link between adverse CoCo design and CoCo yields, confirming the existence of market monitoring in designated bail-in debt. Specifically, focusing on the write-down feature as the loss-absorption mechanism in CoCo debt, I find a yield premium on this feature relative to equity-conversion CoCo bonds, as predicted by theoretical models. Moreover, and consistent with theories of moral hazard, I find this premium to be largest when existing incentives for opportunistic behavior are largest, while it is non-existent when moral hazard is perceived to be small. The findings show that write-down CoCo bonds introduce a moral hazard problem in banks. At the same time, they support the idea of CoCo investors acting as monitors, which is a prerequisite for a meaningful role of CoCo debt in banks' regulatory capital mix.
Bargaining with a bank
(2018)
This paper examines bargaining as a mechanism to resolve information problems. To guide the analysis, I develop a parsimonious model of a credit negotiation between a bank and firms with varying levels of impatience. In equilibrium, impatient firms accept the bank’s offer immediately, while patient firms wait and negotiate price adjustments. I test the empirical predictions using a hand-collected dataset on credit line negotiations. Firms signing the bank’s offer right away draw down their line of credit after origination and default more than late signers. Late signers negotiate price adjustments more frequently, and, consistent with the model, these adjustments predict better ex post performance.
We show that time-varying volatility of volatility is a significant risk factor affecting the cross-section and the time series of index and VIX option returns, beyond volatility risk itself. Volatility and volatility-of-volatility measures, identified model-free from option price data as the VIX and VVIX indices, respectively, are only weakly related to each other. Delta-hedged index and VIX option returns are negative on average, and are more negative for strategies that are more exposed to volatility and volatility-of-volatility risks. Volatility and volatility of volatility significantly and negatively predict future delta-hedged option payoffs. The evidence is consistent with a no-arbitrage model featuring time-varying market volatility and volatility-of-volatility factors, both of which carry a negative market price of risk.
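For reference, one common construction of delta-hedged option gains (in the spirit of Bakshi and Kapadia, 2003; the paper's exact construction may differ) is:

```latex
\Pi_{t,\,t+\tau}
= O_{t+\tau} - O_t
- \int_t^{t+\tau} \Delta_u \,\mathrm{d}S_u
- \int_t^{t+\tau} r_u \,\big( O_u - \Delta_u S_u \big)\,\mathrm{d}u ,
```

the payoff from holding the option O, continuously delta-hedging with the underlying S, and financing the residual position at the riskless rate r; a negative average gain indicates a negative price of the hedged risks.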
This paper studies the distributional consequences of systematic variation in expenditure shares and prices. Using European Union Household Budget Surveys and Harmonized Index of Consumer Prices data, we construct household-specific price indices and reveal the existence of pro-rich inflation in Europe. In particular, over the period 2001-15, the consumption bundles of the poorest deciles in 25 European countries became, on average, 10.5 percentage points more expensive than those of the richest decile. We find that ignoring this differential inflation across the distribution underestimates the change in the Gini coefficient (based on consumption expenditure) by up to 0.03 points. Cross-country heterogeneity in this change is large enough to alter the inequality ranking of numerous countries. The average inflation effect we detect is almost as large as the change in the standard Gini measure over the period of interest.
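Household-specific price indices of this type are typically expenditure-share-weighted averages of good-level price changes; a minimal numpy sketch with hypothetical shares and price relatives:

```python
import numpy as np

# hypothetical expenditure shares over three goods (rows: poorer, richer household)
w = np.array([[0.50, 0.30, 0.20],   # poorer decile: more food and energy
              [0.20, 0.30, 0.50]])  # richer decile: more services
# hypothetical cumulative price relatives per good over the period
pi = np.array([1.30, 1.15, 1.05])

hh_index = w @ pi  # Laspeyres-type household-specific indices
print(hh_index)    # the poorer household's bundle became relatively more expensive
```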
The paper analyses the contagion channels of the European financial system through the stochastic block model (SBM). The model groups financial institutions with homogeneous connectivity patterns and describes the shock transmission mechanisms of the financial network in a compact way. We analyse the global financial crisis and the European sovereign debt crisis and show that the network exhibits a strong community structure, with two main blocks acting as shock spreader and shock receiver, respectively. Moreover, we provide evidence of the prominent role played by insurers in the spread of systemic risk in both crises. Finally, we demonstrate that policy interventions focused on institutions with inter-community linkages (community bridges) are more effective than those based on classical connectedness measures and consequently represent a better early-warning indicator for predicting future financial losses.
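To fix ideas, a two-block stochastic block model with an asymmetric spreader/receiver structure can be simulated with networkx; the block sizes and probabilities below are illustrative only.

```python
import networkx as nx

sizes = [10, 10]            # block 0: shock spreaders, block 1: shock receivers
p = [[0.40, 0.30],          # dense links within block 0 and from block 0 to block 1 ...
     [0.05, 0.40]]          # ... but sparse links back from block 1 to block 0
G = nx.stochastic_block_model(sizes, p, directed=True, seed=42)

# "community bridges": edges crossing the two blocks
partition = G.graph["partition"]  # list of node sets, one per block
block = {v: b for b, nodes in enumerate(partition) for v in nodes}
bridges = [(u, v) for u, v in G.edges() if block[u] != block[v]]
print(len(bridges))
```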
The increase in alternative working arrangements has sparked a debate weighing the positive impact of increased flexibility against the negative impact of decreased financial security. We study the prevalence and determinants of intermediated work in order to document the relative importance of the arguments for and against this recent labor market trend. We link individual participation and loss data from a Federal Trade Commission settlement with a multi-level marketing (MLM) firm to detailed county-level information. Participation is greater in middle-income areas and in areas where female labor market non-participation is higher, suggesting that flexibility offers real benefits. However, losses from MLM participation are higher in areas with lower education levels and higher income inequality, suggesting that the downsides of alternative work are particularly severe for certain demographics. Our results illustrate that the advantages and disadvantages of alternative work arrangements accrue to different groups.