We show that bond purchases undertaken in the context of the European Central Bank's quantitative easing efforts created a large mispricing between German and Italian government bonds and their respective futures contracts. On top of the direct effect of the buying pressure exerted on bond prices, we show three indirect effects through which the scarcity of bonds resulting from the asset purchases drove a wedge between the futures contracts and the underlying bonds: the deterioration of bond market liquidity, increased bond specialness on the repurchase agreement market, and greater uncertainty about bond availability as collateral.
We study the role of various trader types in providing liquidity in spot and futures markets based on complete order-book and transactions data as well as cross-market trader identifiers from the National Stock Exchange of India for a single large stock. During normal times, short-term traders who carry little inventory overnight are the primary intermediaries in both spot and futures markets, and changes in futures prices Granger-cause changes in spot prices. However, during two days of fast crashes, Granger-causality ran both ways. Both crashes were due to large-scale selling by foreign institutional investors in the spot market. Buying by short-term traders and cross-market traders was insufficient to stop the crashes. Mutual funds, patient traders with better trade-execution quality who were initially slow to move in, eventually bought sufficient quantities leading to price recovery in both markets. Our findings suggest that market stability requires the presence of well-capitalized standby liquidity providers.
An important assumption underlying the designation of some insurers as systemically important is that their overlapping portfolio holdings can result in common selling. We measure the overlap in holdings using cosine similarity, and show that insurers with more similar portfolios have larger subsequent common sales. This relationship can be magnified for some insurers when they are regulatory capital constrained or markets are under stress. When faced with an exogenous liquidity shock, insurers with greater portfolio similarity have even larger common sales that impact prices. Our measure can be used by regulators to predict which institutions may contribute most to financial instability through the asset liquidation channel of risk transmission.
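The overlap measure named here, cosine similarity between holding vectors, can be sketched in a few lines. This is a minimal illustration: the toy holdings, weights, and insurer names are hypothetical, not the paper's data.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two portfolio holding vectors.

    Each entry is the value held in one asset; identical portfolios
    score 1.0, fully disjoint portfolios score 0.0.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy example: two insurers holding the same three bonds in similar weights.
insurer_1 = [100.0, 50.0, 25.0]
insurer_2 = [90.0, 60.0, 20.0]
print(round(cosine_similarity(insurer_1, insurer_2), 3))  # → 0.992
```

High similarity flags the pairs of institutions whose forced sales would most likely hit the same assets at the same time.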
This paper investigates inertia within and across banks in retail deposit markets using detailed panel data on consumer choices and account characteristics. In a structural choice model, I find that costs of inertia are around one third higher for switching accounts across compared to switching within banks. Observable proxies of bank-level switching costs (number and type of additional financial products) explain most of this cost premium, while online banking usage reduces inertia. Consistent with theory, I provide evidence that banks incorporate inertia in their pricing as older accounts pay lower rates than comparable newer accounts. Counterfactual policies reducing inertia shift market share to more competitive smaller banks, but only eliminating inertia within banks already results in high potential gains in consumer surplus. This suggests that facilitating bank switching alone might be insufficient to improve consumer choices.
In recent years, European financial regulation has undergone a marked reorientation with respect to the shadow banking system, manifested first and foremost in its reframing as market-based finance. Although shadow banking was initially identified as a source of systemic risk, certain regulatory initiatives not only fell far short of the envisaged changes but, quite to the contrary, were substantially modified so that they now aim at revitalizing these activities. This reorientation of European regulatory agency on shadow banking post-crisis, from curtailing it to facilitating resilient market-based finance, has been a cause of irritation among academic observers, dismissed by some as mere rebranding or taken as a sign of regulatory capture. To the contrary, this paper documents the central role of regulatory agency in shadow banking's reconfiguration. It does so by analyzing the European initiatives concerning the regulation of Asset-Backed Commercial Paper (ABCP) and another prime example of shadow banking, Money Market Mutual Funds (MMFs). Based on documentary analysis and expert interviews, we trace how the recently published EU frameworks for MMFs and ABCP were designed (in particular the STS, CRR and MMF regulations of 2017). Furthermore, we show how they were transformed in such a way that their final versions allow the re-establishment of the shadow banking chain linking MMFs, the ABCP market and, arguably, the regular banking system. This transformation is driven by a new form of pro-active European regulatory agency that aims at creating a regulatory infrastructure able to sustain the orderly flow of real-economy debt. Far from being captured by the industry, regulators acted consciously and in cooperation with private actors in order to maintain a channel for credit creation outside of bank credit, a task made more complicated by rushed, politicized final negotiations coupled with technical complexity.
This paper thereby contributes to a new strand of literature that sees the creation and reconfiguration of the shadow banking system as characterized by the active and conscious role of state actors.
We propose a unified framework to measure the effects of different reforms of the pension system on retirement ages and macroeconomic indicators in the face of demographic change. A rich overlapping generations (OLG) model is built and endogenous retirement decisions are explicitly modeled within a public pension system. Heterogeneity with respect to consumption preferences, wage profiles, and survival rates is embedded in the model. Besides the expected direct effects of these reforms on the behavior of households, we observe that feedback effects do occur. Results suggest that individual retirement decisions are strongly influenced by numerous incentives produced by the pension system and macroeconomic variables, such as the statutory eligibility age, adjustment rates, the presence of a replacement rate, and interest rates. Those decisions, in turn, have several impacts on the macro-economy which can create feedback cycles working through equilibrium effects on interest rates and wages. Taken together, these reform scenarios have strong implications for the sustainability of pension systems. Because of the rich nature of our unified model framework, we are able to rank the reform proposals according to several individual and macroeconomic measures, thereby providing important support for policy recommendations on pension systems.
The paper investigates the determinants of the idiosyncratic volatility puzzle by allowing for linkages across asset returns. The first contribution of the paper is to show that portfolios sorted by increasing indegree, computed on a network estimated from pairwise Granger causality tests, have lower expected returns that are not related to idiosyncratic volatility. Secondly, empirical evidence indicates that stocks with higher idiosyncratic volatility have lower exposure to the indegree risk factor.
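The indegree sorting described above can be sketched as follows. This is a hedged illustration: the p-value matrix and the 5% threshold are hypothetical choices, not the paper's estimates.

```python
def indegree_from_pvalues(pvals, alpha=0.05):
    """Directed network from pairwise Granger tests.

    pvals[i][j] is the p-value of the test "returns of stock i
    Granger-cause returns of stock j"; an edge i -> j is drawn when
    the null of no causality is rejected at level alpha.
    Returns each stock's indegree (number of incoming edges).
    """
    n = len(pvals)
    indeg = [0] * n
    for i in range(n):
        for j in range(n):
            if i != j and pvals[i][j] < alpha:
                indeg[j] += 1
    return indeg

# Hypothetical 3-stock p-value matrix (diagonal entries are ignored).
p = [[1.00, 0.01, 0.20],
     [0.03, 1.00, 0.04],
     [0.50, 0.02, 1.00]]
print(indegree_from_pvalues(p))  # → [1, 2, 1]
```

Stocks would then be sorted into portfolios by this indegree before comparing expected returns across bins.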
We examine how a firm's investment behavior affects the investment of neighboring firms. Economic theory yields ambiguous predictions regarding the direction of firm peer effects, and consistent with earlier work, we find in OLS analyses that firms within an area display similar investment behavior. Exploiting time variation in the rise of U.S. states' corporate income taxes and heterogeneity in firms' exposure to increases in corporate income tax rates, we identify the causal impact of local firms' investments. Using this as an instrumental variable in a 2SLS estimation, we find that an increase in local firms' investment reduces the investment of a local peer firm. This effect is more pronounced where local competition among firms is stronger and supports theories in which firm investments are strategic substitutes due to competition.
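The 2SLS logic can be illustrated generically. This is a sketch with simulated data: the variable names and the data-generating process are hypothetical stand-ins, not the paper's tax-based instrument.

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Generic 2SLS with one endogenous regressor x and one instrument z.

    Stage 1: regress x on z (plus a constant) and form fitted values.
    Stage 2: regress y on the fitted values to recover the causal slope.
    """
    Z = np.column_stack([np.ones_like(z), z])
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma                          # first-stage fitted values
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]                             # slope on the instrumented regressor

# Simulated example: true effect of x on y is -0.5 (strategic substitutes);
# u is a confounder that biases naive OLS, z shifts x exogenously.
rng = np.random.default_rng(0)
z = rng.normal(size=5000)
u = rng.normal(size=5000)
x = z + u + rng.normal(size=5000)
y = -0.5 * x + u + rng.normal(size=5000)
print(two_stage_least_squares(y, x, z))  # close to -0.5
```

Because z moves x but enters y only through x, the second-stage slope is purged of the confounding that contaminates a plain OLS regression of y on x.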
We use minutes from 17,000 financial advisory sessions and corresponding client portfolio data to study how active client involvement affects advisor recommendations and portfolio outcomes. We find that advisors confronted with acquiescent clients stick to their standards and recommend expensive but well diversified mutual fund portfolios. However, if clients take an active role in the meetings, advisors deviate markedly from their standards, resulting in poorer portfolio diversification and lower Sharpe ratios. Our findings that advisors cater to client requests parallel the phenomenon of doctors prescribing antibiotics to insistent patients even if inappropriate, and imply that pandering diminishes the quality of advice.
This paper provides a complete characterization of optimal contracts in principal-agent settings where the agent's action has persistent effects. We model general information environments via the stochastic process of the likelihood-ratio. The martingale property of this performance metric captures the information benefit of deferral. Costs of deferral may result from both the agent's relative impatience as well as her consumption smoothing needs. If the relatively impatient agent is risk neutral, optimal contracts take a simple form in that they only reward maximal performance for at most two payout dates. If the agent is additionally risk-averse, optimal contracts stipulate rewards for a larger selection of dates and performance states: The performance hurdle to obtain the same level of compensation is increasing over time whereas the pay-performance sensitivity is declining.
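The martingale property invoked for the likelihood-ratio process can be written in one line (standard notation; the symbols are illustrative and not taken verbatim from the paper):

```latex
% With \ell_t the likelihood ratio of observed performance under the
% recommended action, and \mathcal{F}_t the information at date t:
\mathbb{E}\left[\,\ell_{t+s} \mid \mathcal{F}_t\,\right] = \ell_t,
\qquad s \ge 0 .
```

Deferring pay lets the principal condition on later realizations of this process without biasing its expected level, which is the information benefit of deferral the abstract refers to.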
A growing body of literature shows the importance of financial literacy in households' financial decisions. However, fewer studies focus on understanding the determinants of financial literacy. Our paper fills this gap by analyzing a specific determinant, the educational system, to explain the heterogeneity in financial literacy scores across Germany. We suggest that the lower financial literacy observed in East Germany is partially caused by a different institutional framework experienced during the Cold War, more specifically, by the socialist educational system of the GDR which affected specific cohorts of individuals. By exploiting the unique set-up of the German reunification, we identify education as a channel through which institutions and financial literacy are related in the German context.
How demanding and consistent is the 2018 stress test design in comparison to previous exercises?
(2018)
Bank regulators have the discretion to discipline banks by executing enforcement actions to ensure that banks correct deficiencies regarding safe and sound banking principles. We highlight the trade-offs that the execution of enforcement actions entails for financial stability. We then provide an overview of the differences in the legal frameworks governing supervisors' execution of enforcement actions in the Banking Union and the United States. After discussing work on the effect of enforcement actions on bank behaviour and the real economy, we present data on the evolution of enforcement actions and monetary penalties by U.S. regulators. We conclude by noting the importance of supervisors levying efficient monetary penalties and by stressing that a division of competences among different regulators should not lead to a loss of efficiency in the execution of enforcement actions.
A new governance architecture for European financial markets? Towards a European supervision of CCPs
(2018)
Does the new European outlook on financial markets, as voiced by the EU Commission since the launch of the Capital Markets Union, imply a movement of the EU towards an alignment of market integration and direct supervision of common rules? This paper sets out to answer this question for the case of common supervision of Central Counterparties (CCPs) in the European Union. These entities gained crucial importance post-crisis due to new regulation requiring the mandatory clearing of standardized derivative contracts, transforming clearing houses into central nodes for cross-border financial transactions. While the EU-wide regulatory framework EMIR, enacted in 2012, stipulates common regulatory requirements, it still relies on home-country supervision of those rules, arguably leading to regulatory as well as supervisory arbitrage. The regulatory reform to stabilize the OTC derivatives market thus replicated at its center a governance flaw that had been identified as one of the major causes of the gravity of the financial crisis in the EU: the coupling of intense competition based on private risk-management systems with national supervision of European rules. This paper traces the history of this problem awareness and asks which factors account for the fact that only in 2017 did serious negotiations ensue at the EU level envisioning a common supervision of CCPs to fix the flawed system of governance. Analyzing this shift in the European governance architecture, we argue that Brexit has opened a window of opportunity for a centralization of CCP supervision. Brexit aligns the urgency of the problem with the material interests of crucial political stakeholders, in particular Germany and France, opening the possibility of a grand European bargain.
Improving the financial conditions of individuals requires an understanding of the mechanisms through which bad financial decision-making leads to worse financial outcomes. From a theoretical point of view, a key candidate for inducing mistakes in financial decision-making is so-called present-biased preferences, one of the cornerstones of behavioral economics. According to theory, present-biased households should behave systematically differently when it comes to consumption and saving decisions, as they should be more prone to spending too much and saving too little.
In this policy letter we show how high-frequency financial transaction data available in digitized form allows us to precisely categorize individual financial decision-making as present-biased or not. Using this categorization, we find that one out of five individuals in our sample exhibits present bias and that this present-biased behavior is associated with a stronger use of overdrafts. As overdrafts represent a particularly expensive way of short-term borrowing, their systematic use can be interpreted as a measure of suboptimal financial decision-making. Overall, our results indicate that the combination of economic theory and Big Data can generate valuable insights with applications for policy makers and businesses alike.
The object of this study is one of the most ambitious projects of twentieth-century art history: Aby Warburg's 'Atlas Mnemosyne', conceived in the summer of 1926 – when the first mention of a 'Bilderatlas', or "atlas of images", occurs in his journal – and truncated three years later, unfinished, by his sudden death in October 1929. Mnemosyne consisted of a series of large black panels, about 170 × 140 cm, on which were attached black-and-white photographs of paintings, sculptures, book pages, stamps, newspaper clippings, tarot cards, coins, and other types of images. Warburg kept changing the order of the panels and the position of the images until the very end, and three main versions of the Atlas have been recorded: one from 1928 (the "1-43 version", with 682 images); one from the early months of 1929, with 71 panels and 1050 images; and the one Warburg was working on at the time of his death, also known as the "1-79 version", with 63 panels and 971 images (which is the one we will examine). But Warburg was planning to have more panels – possibly many more – and there is no doubt that Mnemosyne is a dramatically unfinished and controversial object of study.
Patterns and interpretation
(2017)
One thing for sure: digitization has completely changed the literary archive. People like me used to work on a few hundred nineteenth-century novels; today, we work on thousands of them; tomorrow, hundreds of thousands. This has had a major effect on literary history, obviously enough, but also on critical methodology; because, when we work on 200,000 novels instead of 200, we are not doing the same thing, 1,000 times bigger; we are doing a different thing. The new scale changes our relationship to our object, and in fact 'it changes the object itself'.
The Emotions of London
(2016)
A few years ago, a group formed by Ben Allen, Cameron Blevins, Ryan Heuser, and Matt Jockers decided to use topic modeling to extract geographical information from nineteenth-century novels. Though the study was eventually abandoned, it had revealed that London-related topics had become significantly more frequent in the course of the century, and when some of us were later asked to design a crowd-sourcing experiment, we decided to add a further dimension to those early findings, and see whether London place-names could become the cornerstone for an emotional geography of the city.
Literature, measured
(2016)
There comes a moment, in digital humanities talks, when someone raises a hand and says: "Ok. Interesting. But is it really new?" Good question... And let's leave aside the obvious lines of defense, such as "but the field is still only at its beginning!", or "and traditional literary criticism, is that always new?" All true, and all irrelevant; because the digital humanities have presented themselves as a radical break with the past, and must therefore produce evidence of such a break. And the evidence, let's be frank, is not strong. What is there, moreover, comes in a variety of forms, beginning with the slightly paradoxical fact that, in a new approach, not everything has to be new. When "Network Theory, Plot Analysis" pointed out, in passing, that a network of Hamlet had Hamlet at its center, the New York Times gleefully mentioned the passage as an unmistakable sign of stupidity. Maybe; but the point, of course, was not to present Hamlet's centrality as a surprise; it was exactly the opposite: had the new approach not found Hamlet at the center of the play, its plausibility would have disintegrated. Before using network theory for dramatic analysis, I had to test it, and prove that it corroborated the main results of previous research.
Of the novelties introduced by digitization in the study of literature, the size of the archive is probably the most dramatic: we used to work on a couple of hundred nineteenth-century novels, and now we can analyze thousands of them, tens of thousands, tomorrow hundreds of thousands. It's a moment of euphoria, for quantitative literary history: like having a telescope that makes you see entirely new galaxies. And it's a moment of truth: so, have the digital skies revealed anything that changes our knowledge of literature? This is not a rhetorical question. In the famous 1958 essay in which he hailed "the advent of a quantitative history" that would "break with the traditional form of nineteenth-century history", Fernand Braudel mentioned as its typical materials "demographic progressions, the movement of wages, the variations in interest rates [...] productivity [...] money supply and demand." These were all quantifiable entities, clearly enough; but they were also completely new objects compared to the study of legislation, military campaigns, political cabinets, diplomacy, and so on. It was this double shift that changed the practice of history; not quantification alone. In our case, though, there is no shift in materials: we may end up studying 200,000 novels instead of 200; but, they're all still novels. Where exactly is the novelty?
Different scales, different features. It’s the main difference between the thesis we have presented here, and the one that has so far dominated the study of the paragraph. By defining it as "a sentence writ large", or, symmetrically, as "a short discourse", previous research was implicitly asserting the irrelevance of scale: sentence, paragraph, and discourse were all equally involved in the "development of one topic". We have found the exact opposite: 'scale is directly correlated to the differentiation of textual functions'. By this, we don't simply mean that the scale of sentences or paragraphs allows us to "see" style or themes more clearly. This is true, but secondary. Paragraphs allow us to "see" themes, because themes fully "exist" only at the scale of the paragraph. Ours is not just an epistemological claim, but an ontological one: if style and themes and episodes exist in the form they do, it's because writers work at different scales – and do different things according to the level at which they are operating.
Loudness in the novel
(2014)
The novel is composed entirely of voices: the most prominent among them is typically that of the narrator, which is regularly intermixed with those of the various characters. In reading through a novel, the reader "hears" these heterogeneous voices as they occur in the text. When the novel is read out loud, the voices are audibly heard. They are also heard, however, when the novel is read silently: in this latter case, the voices are not verbalized for others to hear, but acoustically created and perceived in the mind of the reader. Simply put: sound, in the context of the novel, is fundamentally a product of the novel’s voices. This conception of sound mechanics may at first seem unintuitive—sound seems to be the product of oral reading—but it is only by starting with the voice that one can fully appreciate sound’s function in the novel. Moreover, such a conception of sound mechanics finds affirmation in the works of both Mikhail Bakhtin and Elaine Scarry: "In the novel," writes Bakhtin, "we can always hear voices (even while reading silently to ourselves)."
The concept of length, the concept is synonymous, the concept is nothing more than, the proper definition of a concept ... Forget programs and visions; the operational approach refers specifically to concepts, and in a very specific way: it describes the process whereby concepts are transformed into a series of operations—which, in their turn, allow us to measure all sorts of objects. Operationalizing means building a bridge from concepts to measurement, and then to the world. In our case: from the concepts of literary theory, through some form of quantification, to literary texts.
We would study not style as such, but style 'at the scale of the sentence': the lowest level, it seemed, at which style as a distinct phenomenon became visible. Implicitly, we were defining style as a combination of smaller linguistic units, which made it, in consequence, particularly sensitive to changes in scale—from words to clauses to whole sentences.
The nineteenth century in Britain saw tumultuous changes that reshaped the fabric of society and altered the course of modernization. It also saw the rise of the novel to the height of its cultural power as the most important literary form of the period. This paper reports on a long-term experiment in tracing such macroscopic changes in the novel during this crucial period. Specifically, we present findings on two interrelated transformations in novelistic language that reveal a systemic concretization in language and fundamental change in the social spaces of the novel. We show how these shifts have consequences for setting, characterization, and narration as well as implications for the responsiveness of the novel to the dramatic changes in British society.
This paper has a second strand as well. This project was simultaneously an experiment in developing quantitative and computational methods for tracing changes in literary language. We wanted to see how far quantifiable features such as word usage could be pushed toward the investigation of literary history. Could we leverage quantitative methods in ways that respect the nuance and complexity we value in the humanities? To this end, we present a second set of results, the techniques and methodological lessons gained in the course of designing and running this project.
If there is one thing to be learned from David Foster Wallace, it is that cultural transmission is a tricky game. This was a problem Wallace confronted as a literary professional, a university-based writer during what Mark McGurl has called the Program Era. But it was also a philosophical issue he grappled with on a deep level as he struggled to combat his own loneliness through writing. This fundamental concern with literature as a social, collaborative enterprise has also gained some popularity among scholars of contemporary American literature, particularly McGurl and James English: both critics explore the rules by which prestige or cultural distinction is awarded to authors (English; McGurl). Their approach requires a certain amount of empirical work, since these claims move beyond the individual experience of the text into forms of collective reading and cultural exchange influenced by social class, geographical location, education, ethnicity, and other factors. Yet McGurl and English's groundbreaking work is limited by the very forms of exclusivity they analyze: the protective bubble of creative writing programs in the academy and the elite economy of prestige surrounding literary prizes, respectively. To really study the problem of cultural transmission, we need to look beyond the symbolic markets of prestige to the real market, the site of mass literary consumption, where authors succeed or fail based on their ability to speak to that most diverse and complicated of readerships: the general public. Unless we study what I call the social lives of books, we make the mistake of keeping literature in the same ascetic laboratory that Wallace tried to break out of with his intense authorial focus on popular culture, mass media, and everyday life.
In the last few years, literary studies have experienced what we could call the rise of quantitative evidence. This had happened before of course, without producing lasting effects, but this time it’s probably going to be different, because this time we have digital databases, and automated data retrieval. As Michel’s and Lieberman’s recent article on "Culturomics" made clear, the width of the corpus and the speed of the search have increased beyond all expectations: today, we can replicate in a few minutes investigations that took a giant like Leo Spitzer months and years of work. When it comes to phenomena of language and style, we can do things that previous generations could only dream of.
When it comes to language and style. But if you work on novels or plays, style is only part of the picture. What about plot – how can that be quantified? This paper is the beginning of an answer, and the beginning of the beginning is network theory. This is a theory that studies connections within large groups of objects: the objects can be just about anything – banks, neurons, film actors, research papers, friends... – and are usually called nodes or vertices; their connections are usually called edges; and the analysis of how vertices are linked by edges has revealed many unexpected features of large systems, the most famous one being the so-called "small-world" property, or "six degrees of separation": the uncanny rapidity with which one can reach any vertex in the network from any other vertex. The theory proper requires a level of mathematical intelligence which I unfortunately lack; and it typically uses vast quantities of data which will also be missing from my paper. But this is only the first in a series of studies we’re doing at the Stanford Literary Lab; and then, even at this early stage, a few things emerge.
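The two network quantities mentioned here, a vertex's degree and the shortest-path distance behind the "degrees of separation", can be computed in a few lines of plain code. The toy character network below is illustrative only, not data from the paper.

```python
from collections import deque

def degree(graph):
    """Number of edges incident to each vertex."""
    return {v: len(nbrs) for v, nbrs in graph.items()}

def shortest_path_length(graph, source, target):
    """Breadth-first search: number of edges on a shortest path."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if v == target:
            return dist[v]
        for w in graph[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return None  # target unreachable from source

# Toy character network: an edge joins two characters who exchange
# words in some scene (edges chosen for illustration).
play = {
    "Hamlet":   {"Claudius", "Gertrude", "Horatio", "Ghost"},
    "Claudius": {"Hamlet", "Gertrude"},
    "Gertrude": {"Hamlet", "Claudius"},
    "Horatio":  {"Hamlet"},
    "Ghost":    {"Hamlet"},
}
print(degree(play)["Hamlet"])                           # → 4
print(shortest_path_length(play, "Ghost", "Claudius"))  # → 2
```

Even on a toy graph the protagonist's high degree and the short distances through him show why a plot network concentrates around its center.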
This paper is the report of a study conducted by five people – four at Stanford, and one at the University of Wisconsin – which tried to establish whether computer-generated algorithms could "recognize" literary genres. You take 'David Copperfield', run it through a program without any human input – "unsupervised", as the expression goes – and ... can the program figure out whether it's a gothic novel or a 'Bildungsroman'? The answer is, fundamentally, Yes: but a Yes with so many complications that it is necessary to look at the entire process of our study. These are new methods we are using, and with new methods the process is almost as important as the results.
We develop a model that reproduces the average return and volatility spread between sin and non-sin stocks. Our investors do not necessarily boycott sin companies. Rather, they are open to invest in any company while trading off dividends against ethicalness. We show that when dividends and ethicalness are complementary goods and investors are sufficiently risk averse, the model predicts that the dividend share of sin companies exhibits a positive relation with the future return and volatility spreads. Our empirical analysis supports the model's predictions.
In the last decade, central bank interventions, flights to safety, and the shift in derivatives clearing have resulted in exceptionally high demand for high-quality liquid assets, such as German treasuries, in the securities lending market, alongside traditional repo market activity. Despite the high demand, realizable securities lending income has remained economically negligible for most beneficial owners. We provide empirical evidence of pricing inefficiencies in the non-transparent, oligopolistic securities lending market for German treasuries from 2006 to 2015. Consistent with the theory of Duffie, Gârleanu and Pedersen (2005), we find that less connected market participants' interests are underrepresented, evident in the longer maturity segment, where lenders are more likely to be conservative passive investors, such as pension funds and insurance firms. The low price elasticity in this segment prevents these beneficial owners from fully capitalizing on the additional income from securities lending, giving rise to important negative welfare implications.
In this study we investigate which economic ideas were prevalent in the post-crisis macroprudential discourse in order to understand the availability of ideas for reform-minded agents. We base our analysis on new findings in the field of ideational shifts and regulatory science, which posit that change-agents engage with new ideas pragmatically and strategically in their effort to have their economic ideas institutionalized. We argue that in these epistemic battles over new regulation, scientific backing by academia is the key resource determining the outcome. We show that the reforms recently implemented internationally follow this pattern. In our analysis we contrast the entire discourse on systemic risk and macroprudential regulation with Borio's initial 2003 proposal for a macroprudential framework. We find that mostly cross-sectional measures targeted at increasing the resilience of the financial system, rather than inter-temporal measures dampening the financial cycle, have been implemented. We provide evidence for the lack of support for new macroprudential thinking within academia and argue that this is partially responsible for the absence of anti-cyclical macroprudential regulation. Most worryingly, the financial cycle is largely absent from the academic discourse and is only tacitly assumed rather than fully fleshed out in technocratic discourses, pointing to the possibility that no anti-cyclical measures will be forthcoming.
This paper gives an account of the unmaking of Soviet workers at the Vernissage in Armenia. I argue that the unmaking of Soviet workers, first, is the irrelevance of Soviet workers as workers once they lost their jobs after the collapse of the Soviet Union and came to the Vernissage to trade. During the Soviet period, private trade was forbidden, and the Soviet government persecuted people who dared to engage in it. Consequently, many people grew up thinking of trade as a criminal activity that was non-productive and parasitic, as opposed to productive work that facilitated the modernization of the USSR. After the dissolution of the USSR, when trade was liberalized and many former Soviet workers were pushed into trade as they lost their jobs, it still retained its quality of not being “real” work, to borrow Roberman’s (2013) wording. Even 25 years after the dissolution of the USSR, former Soviet workers at the Vernissage still want to be identified with their former Soviet occupations and not with trade. However, now engaged in trade, former Soviet workers came up with a “new” way of establishing identity and hierarchy—through production. I describe this “new” way as “the identification game”; employing it, I demonstrate how former Soviet workers at the Vernissage identify and represent themselves as masters, whose work is productive and intellectual. In doing so, they single out resellers, people who resell the work of other masters, by implying that their work is parasitic and selfish. However, this “identification game” is reified only by the older generation of traders, former Soviet workers. The younger generation of traders at the Vernissage, which does not have any experience of being Soviet workers, is disengaged from it, thus undermining the Soviet view of trade as not “real” work and making it irrelevant in the postsocialist era. 
Thus, I contend that the unmaking of Soviet workers consists in, first, their irrelevance as workers in a postsocialist period, and second, the irrelevance of their ideas about trade as not “real” work. Furthermore, to support my depiction of a master who engages in “the identification game” and a younger-generation trader who is disengaged from it, I give two ethnographic portraits of traders at the Vernissage. I assert that the disengagement of a younger generation of traders at the Vernissage signals a change in the perception of trade as “real” work and runs parallel to the unmaking of Soviet workers.
In the context of Brexit, changes to the regulatory architecture of CCPs that empower the European securities markets regulator are under way to prevent the threat of a regulatory race to the bottom. However, this empowerment currently leaves the national supervision of common European rules within the EU intact. This policy letter argues that supervisory arbitrage is as much a threat within the EU as outside of it, which is why common supervision of CCP rules in the EU is called for. The paper traces the origins of the current set-up, criticizes the EU Commission’s current regulatory proposal as too cumbersome, and discusses possible ways forward to achieve European supervision. In contrast to the Commission’s current proposal, we call for unified supervision within ESMA, combined with a European fiscal backstop.
This policy letter provides evidence for the crucial importance of the initial regulatory treatment for the further development of financial innovations by exploring the emergence and initial legal framing of off-balance-sheet leasing in Germany. In the absence of a legal framework, lease contracts emerged as an innovative social practice of off-balance-sheet financing. However, this missing legal framing impeded the development of the innovation, as it also created legal uncertainties. This changed with the initial legal framing of leasing in the 1970s, which eliminated those uncertainties: off-balance-sheet leasing entered a stunning period of growth, and the framing laid the foundation for regulatory resilience against efforts to abandon the off-balance-sheet treatment of leases. As the initial legal framing is crucial for the further development of a financial innovation, we propose the French approach to the initial validation of new financial products, in which principles-based rules are aligned with the capabilities of regulators to intervene even when a financial innovation complies with the letter of the law. In this way, regulators could police the frontier of financial innovation and weed out products that are entirely or mainly driven by regulatory arbitrage considerations, while maintaining their beneficial elements.
While the debate about the needs and merits of cryptocurrency regulation is ongoing, the unprecedented price hikes of cryptocurrencies towards the end of 2017 triggered a somewhat unexpected sort of regulation in the form of public statements by governments and financial supervisors. It kicked in rather quickly and turned out to be much more effective than imagined. These interventions can be identified as one of the main factors that drove asset prices down, thereby preventing destabilizing bubbles. The supervisory response to the cryptocurrency bubble of the past months holds important insights for any prospective regulation of cryptocurrencies. First, public statements are a highly effective regulatory tool in the short term, as they manage market expectations, a mechanism well known as forward guidance in monetary policy. So far, the legal framework in the EU takes insufficient account of the regulatory role of public statements. Second, regulation needs to keep up with the remarkable speed of fintech innovation. Some regulators have addressed the challenge by adopting a ‘sandbox’ approach. However, the ‘sandbox’ approach clearly calls for international cooperation. To achieve a balance between safety and innovation, international cooperation should emulate the experimental character of sandboxes. One could conceive of a ‘sandbox for regulators’, an arrangement that would facilitate not only the exchange of information on regulatory initiatives among authorities but also the coordination of communication and forward guidance.
To estimate demand for labor, we use a combination of detailed employment data and the outcomes of procurement auctions, and compare the employment of the winner of an auction with the employment of the second-ranked firm (i.e. the runner-up). Assuming similar ex-ante winning probabilities for both firms, we can view winning an auction as an exogenous shock to a firm’s production and its demand for labor. We utilize daily data from almost 900 construction firms and about 3,000 auctions in Austria over the period 2006 to 2009. Our main results show that the winning firm significantly increases labor demand in the weeks following an auction, but only in the years before the recent economic crisis. It employs about 80 workers more after the auction than the runner-up firm, and most of the adjustment takes place within one month after the demand shock. Winners also predominantly fire fewer workers after winning than runner-up firms. In the crisis, however, firms do not employ more workers than their competitors after winning an auction. We discuss explanations such as labor hoarding and crisis-induced productivity improvements, as well as implications for fiscal and stimulus policy in the crisis.
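The winner-versus-runner-up comparison can be sketched in a few lines of Python; the firms, employment figures and the resulting gap below are purely illustrative, not the paper's data:

```python
# Hypothetical sketch of the winner-vs-runner-up comparison; all numbers invented.

def winner_runner_up_gap(auctions):
    """Average difference in post-auction employment changes between
    the winning firm and the runner-up firm across auctions."""
    gaps = []
    for a in auctions:
        winner_change = a["winner_post"] - a["winner_pre"]
        runner_change = a["runner_post"] - a["runner_pre"]
        gaps.append(winner_change - runner_change)
    return sum(gaps) / len(gaps)

auctions = [
    {"winner_pre": 120, "winner_post": 200, "runner_pre": 110, "runner_post": 112},
    {"winner_pre": 80, "winner_post": 165, "runner_pre": 90, "runner_post": 95},
]
print(winner_runner_up_gap(auctions))  # → 79.0
```

The identifying assumption, as in the paper's design, is that winner and runner-up had similar ex-ante winning probabilities, so this difference isolates the labor demand shock.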
Departing from the principle of absolute priority, CoCo bonds are particularly exposed to bank losses despite not carrying ownership rights. This paper shows the link between adverse CoCo design and CoCo yields, confirming the existence of market monitoring in designated bail-in debt. Specifically, focusing on the write-down feature as the loss absorption mechanism in CoCo debt, I find a yield premium on this feature relative to equity-conversion CoCo bonds, as predicted by theoretical models. Moreover, and consistent with theories of moral hazard, I find this premium to be largest when incentives for opportunistic behavior are strongest, while it is non-existent when moral hazard is perceived to be small. The findings show that write-down CoCo bonds introduce a moral hazard problem in banks. At the same time, they support the idea of CoCo investors acting as monitors, which is a prerequisite for a meaningful role of CoCo debt in banks' regulatory capital mix.
Bargaining with a bank
(2018)
This paper examines bargaining as a mechanism to resolve information problems. To guide the analysis, I develop a parsimonious model of a credit negotiation between a bank and firms with varying levels of impatience. In equilibrium, impatient firms accept the bank’s offer immediately, while patient firms wait and negotiate price adjustments. I test the empirical predictions using a hand-collected dataset on credit line negotiations. Firms signing the bank’s offer right away draw down their line of credit after origination and default more than late signers. Late signers negotiate price adjustments more frequently, and, consistent with the model, these adjustments predict better ex post performance.
We show that time-varying volatility of volatility is a significant risk factor affecting both the cross-section and the time-series of index and VIX option returns, beyond volatility risk itself. Volatility and volatility-of-volatility measures, identified model-free from option price data as the VIX and VVIX indices, respectively, are only weakly related to each other. Delta-hedged index and VIX option returns are negative on average, and are more negative for strategies more exposed to volatility and volatility-of-volatility risks. Volatility and volatility of volatility significantly and negatively predict future delta-hedged option payoffs. The evidence is consistent with a no-arbitrage model featuring time-varying market volatility and volatility-of-volatility factors, both of which carry a negative market price of risk.
This paper studies the distributional consequences of systematic variation in expenditure shares and prices. Using European Union Household Budget Surveys and Harmonized Index of Consumer Prices data, we construct household-specific price indices and reveal the existence of pro-rich inflation in Europe. In particular, over the period 2001-15, the consumption bundles of the poorest deciles in 25 European countries have, on average, become 10.5 percentage points more expensive than those of the richest decile. We find that ignoring this differential inflation across the distribution underestimates the change in the Gini coefficient (based on consumption expenditure) by up to 0.03 points. Cross-country heterogeneity in this change is large enough to alter the inequality ranking of numerous countries. The average inflation effect we detect is almost as large as the change in the standard Gini measure over the period of interest.
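A household-specific inflation rate of the kind used here can be illustrated with a minimal sketch, assuming a simple expenditure-share-weighted price index; the categories, shares and price changes are invented for illustration:

```python
def household_inflation(shares, price_changes):
    """Expenditure-share-weighted inflation rate for one household.
    shares: category -> budget share (shares sum to 1)
    price_changes: category -> price change over the period."""
    return sum(shares[c] * price_changes[c] for c in shares)

price_changes = {"food": 0.04, "rent": 0.03, "travel": 0.01}
poor = {"food": 0.5, "rent": 0.4, "travel": 0.1}   # heavier food and rent shares
rich = {"food": 0.2, "rent": 0.3, "travel": 0.5}

print(round(household_inflation(poor, price_changes), 4))  # → 0.033
print(round(household_inflation(rich, price_changes), 4))  # → 0.022
```

When prices of necessities rise faster than those of the goods richer households consume more of, the poorer household's index grows faster, which is the pro-rich inflation pattern the paper documents.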
The paper analyses the contagion channels of the European financial system using the stochastic block model (SBM). The model groups homogeneous connectivity patterns among financial institutions and describes the shock transmission mechanisms of financial networks in a compact way. We analyse the global financial crisis and the European sovereign debt crisis and show that the network exhibits a strong community structure, with two main blocks acting as shock spreader and receiver, respectively. Moreover, we provide evidence of the prominent role played by insurers in the spread of systemic risk in both crises. Finally, we demonstrate that policy interventions focused on institutions with inter-community linkages (community bridges) are more effective than those based on classical connectedness measures and consequently represent a better early-warning indicator for predicting future financial losses.
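A minimal sketch of the stochastic block model idea, assuming the standard generative form in which the probability of an edge depends only on the block memberships of its endpoints (the block sizes and probabilities below are illustrative):

```python
import random

def sample_sbm(sizes, p, seed=0):
    """Sample an undirected stochastic block model graph.
    sizes: number of nodes per block; p[a][b]: edge probability
    between a node in block a and a node in block b."""
    rng = random.Random(seed)
    block = [b for b, n in enumerate(sizes) for _ in range(n)]
    n = len(block)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p[block[i]][block[j]]]
    return block, edges

# Two blocks with dense within-block and sparse between-block connectivity,
# mimicking a spreader community weakly linked to a receiver community.
block, edges = sample_sbm([4, 4], [[0.9, 0.1], [0.1, 0.9]], seed=1)
within = sum(block[i] == block[j] for i, j in edges)
print(len(edges), within)  # most sampled edges fall within blocks
```

In the paper the inference runs the other way: block memberships are estimated from the observed financial network, which is what reveals the spreader and receiver communities.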
The increase in alternative working arrangements has sparked a debate over the positive impact of increased flexibility against the negative impact of decreased financial security. We study the prevalence and determinants of intermediated work in order to document the relative importance of the arguments for and against this recent labor market trend. We link data on individual participation and losses from a Federal Trade Commission settlement with a Multi-Level Marketing firm with detailed county-level information. Participation is greater in middle-income areas and in areas where female labor market non-participation is higher, suggesting that flexibility offers real benefits. However, losses from MLM participation are higher in areas with lower education levels and higher income inequality, suggesting that the downsides of alternative work are particularly high in certain demographics. Our results illustrate that the advantages and disadvantages of alternative work arrangements accrue to different groups.
We develop a simple theoretical model to motivate testable hypotheses about how peer-to-peer (P2P) platforms compete with banks for loans. The model predicts that (i) P2P lending grows when some banks face exogenously higher regulatory costs; (ii) P2P loans are riskier than bank loans; and (iii) the risk-adjusted interest rates on P2P loans are lower than those on bank loans. We confront these predictions with data on P2P lending and the consumer bank credit market in Germany and find empirical support. Overall, our analysis indicates that P2P lenders are bottom fishing when regulatory shocks create a competitive disadvantage for some banks.
Despite the worldwide popularity of financial education interventions, studies on their economic effects report mixed results. With a focus on the effect on disadvantaged groups, we review both the theoretical and empirical findings in order to understand why this discrepancy exists. The survey first highlights that it is necessary to distinguish between the concepts of, and the relationships between, financial education, financial literacy and financial behavior in order to identify the true effects of financial education. The review addresses possible biases caused by third factors such as numeracy. Next, we review theories of financial literacy, which make clear that the effect of financial education interventions is heterogeneous across the population. Last, we look closely at the main empirical studies on financial education targeted at migrants and immigrants, low-income earners and the young, and compare their methodologies. There seems to be a positive effect on the short-term financial knowledge and awareness of the young, but there is no evidence of lasting behavioral effects in adulthood. Studies on the financial behavior of migrants and immigrants show almost no effect of financial education.
This paper investigates the effect of conventional and unconventional (e.g. Quantitative Easing, QE) monetary policy interventions on the insurance industry. We first analyze the impact of the last QE programme launched by the European Central Bank (ECB) on the stock performance of 166 (re)insurers by constructing an event study around the announcement date. We then enlarge the scope by looking at monetary policy surprise effects on the same sample of (re)insurers over a timeframe of 12 years, extending the analysis to the Credit Default Swap (CDS) market. In the second part of the paper, by building a set of balance-sheet-based indices, we identify the characteristics of (re)insurers that determine their sensitivity to monetary policy actions. Our evidence suggests that a single intervention extrapolated from the comprehensive strategy cannot be used to estimate the effect of monetary policy intervention on the market. With respect to the impact of monetary policies, we show how the effect of interventions changes over time. Until September 2010, expansionary monetary policy interventions that generated an instantaneous reduction of interest rates moved stock prices in the same direction. The effect turned positive during the European sovereign debt crisis, but faded away in 2014-2015. This pattern is confirmed by the impact on the CDS market. With regard to the determinants of these effects, our analysis suggests that sensitivity is mainly driven by asset allocation, and in particular by exposure to fixed income assets.
This paper studies heterogeneity in the reaction to rank feedback. In a laboratory experiment, individuals take part in a series of dynamic real-effort contests with intermediate feedback. To solve the identification problem in estimating the causal effect of rank feedback on subsequent effort provision, we implement a random multiplier in the first round of each contest; the realization of this multiplier then serves as a valid instrument for rank feedback. While rank feedback has a robust average effect on subsequent effort provision, an explicit analysis of between-subject heterogeneity reveals that a substantial fraction of participants in fact react in exactly the opposite direction to what the aggregate results indicate. We further show that this heterogeneity has consequences for overall outcomes, and argue that heterogeneous sensitivities to rank feedback could have implications for the design of various policies in education and organizations.
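The instrumental-variable step can be sketched with the textbook Wald/2SLS estimator for a single instrument; the variable names and numbers below are illustrative, not the experiment's data:

```python
def iv_slope(y, x, z):
    """Instrumental-variable (Wald/2SLS) estimate of the slope of y on x,
    using z as the instrument: cov(z, y) / cov(z, x)."""
    n = len(y)
    my, mx, mz = sum(y) / n, sum(x) / n, sum(z) / n
    cov_zy = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    cov_zx = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x))
    return cov_zy / cov_zx

# z: random multiplier (instrument), x: rank feedback, y: subsequent effort.
z = [1, 2, 3, 4]
x = [2, 4, 6, 8]      # feedback shifted exogenously by the multiplier
y = [5, 9, 13, 17]    # effort responds to feedback with slope 2
print(iv_slope(y, x, z))  # → 2.0
```

The point of the instrument is that only the variation in feedback induced by the random multiplier, which is exogenous by construction, is used to estimate the effect on effort.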
With a notional amount outstanding of more than USD 500 trillion, the market for OTC derivatives is of vital importance for global financial stability. A growing proportion of these contracts are cleared via central counterparties (CCPs), which means that CCPs are gaining in importance as critical financial market infrastructures. At the same time, there is growing concern that a new "too big to fail" problem could arise, as the CCP industry is highly concentrated due to economies of scale. From a European perspective, it should be noted that the clearing of euro-denominated OTC derivatives mainly takes place in London, hence outside the EU in the foreseeable future. For some time there has been a controversial discussion as to whether this can remain the case post Brexit.
CCPs, which clear a significant proportion of euro OTC derivatives and are systemically relevant from an EU perspective, should be subject to direct supervision by EU authorities and should be established in the EU. This would represent an important building block for a future Capital Markets Union in Europe, as regulatory or supervisory arbitrage in favour of systemically important third-country CCPs could be prevented. In addition, if a systemically relevant CCP handling a considerable portion of the euro OTC derivatives business were to run into serious difficulties, this may impact ECB monetary policy. This applies both to demand for central bank money and to the transmission of monetary policy measures, which can be significantly impaired, particularly in the event that the repo market or payment systems are disrupted. It is therefore essential for the ECB to be closely involved in the supervision of CCPs. Against this background, the draft amendment of EMIR (European Market Infrastructure Regulation) presented on 13 June 2017 is a step in the right direction. In addition, there is an urgent need to introduce a recovery and resolution mechanism for CCPs in the EU to complement the existing Single Resolution Mechanism (SRM) for banks in the eurozone. Only then can the diverse interdependencies between banks and CCPs be adequately taken into account in the recovery and resolution programmes required in a financial crisis.
We investigate whether and how the shift from discretionary forward-looking provisioning to the restrictive incurred loss approach under International Financial Reporting Standards (IFRS) in the European Union (EU) affects the cross-country comparability and predictive ability of loan loss allowances. Given bank supervisors’ keen interest in comparable and adequate loan loss allowances, we also examine the role of supervisors in determining financial statement effects around IFRS adoption. We find that the application of the incurred loss approach has led to more comparable loan loss allowances. However, some differences persist in countries where supervisors were reluctant to enforce the incurred loss approach. Our results also suggest that the predictive ability of loan loss allowances improved following IFRS adoption. Finally, in supplemental analyses we document that increased comparability of loan loss allowances is associated with the cross-country convergence of the risk sensitivity of bank leverage indicating an improvement in the effectiveness of market discipline in the EU.
The article is designed to introduce and analyze authoritarian constitutionalism as an important phenomenon in its own right, not merely a deficient or deviant version of liberal constitutionalism; it is therefore not adequate to dismiss it as sham or window-dressing. Instead, its crucial features – participation as complicity, power as property and the cult of immediacy – are related to the basic assumption that authoritarian constitutions are texts with a purpose, which warrants careful analysis of their domestic and transnational audiences.
Even though the importance of micro data transparency is a well-established fact, European institutions still lag behind the US when it comes to the provision of financial market data to academics. In this Policy Letter we discuss five types of micro data that are crucial for monitoring (systemic) risk in the financial system and for identifying and understanding inter-linkages in financial markets, and which thus have important implications for policymakers and regulatory authorities. We come to the conclusion that for all five areas of micro data outlined in this Policy Letter (bank balance sheet data, asset portfolio data, market transaction data, high-frequency market data and central bank data), the benefits of increased transparency greatly outweigh potential downsides. Hence, European policymakers would do well to follow the US example and close the sizeable gap in micro data transparency. In most cases, the relevant data are already collected (at least at the national level) but are simply not made available to academics, for reasons that are partly incomprehensible. Overcoming these obstacles could foster financial stability in Europe and ensure a level playing field with US regulators and policymakers.
Does economic policy uncertainty affect household stockholding? To answer this question we create a novel measure of household exposure to economic policy uncertainty news by combining survey information on the hours a household spends reading newspapers and the frequency of such news in the popular press during the household’s pre-interview period. After controlling for household fixed effects, month-year fixed effects and time-varying cognitive skills, we find that households with higher exposure to economic policy uncertainty news are less likely to invest in stocks, whether held directly or through mutual funds. This effect is independent of the market volatility index and of household (first-moment) expectations about the stock market index.
Germany Inc. was an idiosyncratic form of industrial organization that put financial institutions at the center. This paper argues that the consumption of private benefits in related party transactions by these key agents can be understood as compensation for their coordinating and monitoring function in Germany Inc. As a consequence, legal tools apt to curb tunneling remained weak in Germany from the perspective of outside shareholders. While banks were in a position to use their firm-level knowledge and influence to limit rent-seeking by other related parties, their own behavior was not subject to meaningful controls. With the dismantling of Germany Inc., banks ceased their monitoring function and left an unprecedented void with regard to related party transactions. Hence, a “traditionalist” stance which opposes law reform for related party transactions in Germany negatively affects capital market development, growth opportunities and ultimately social welfare.
We characterize the optimal linear tax on capital in an Overlapping Generations model with two period lived households facing uninsurable idiosyncratic labor income risk. The Ramsey government internalizes the general equilibrium feedback of private precautionary saving. For logarithmic utility our full analytical solution of the Ramsey problem shows that the optimal aggregate saving rate is independent of income risk. The optimal time-invariant tax on capital is increasing in income risk. Its sign depends on the extent of risk and on the Pareto weight of future generations. If the Ramsey tax rate that maximizes steady state utility is positive, then implementing this tax rate permanently generates a Pareto-improving transition even if the initial equilibrium is dynamically efficient. We generalize our results to Epstein-Zin-Weil utility and show that the optimal steady state saving rate is increasing in income risk if and only if the intertemporal elasticity of substitution is smaller than 1.
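The log-utility independence result can be illustrated with a stripped-down two-period saving problem (a sketch under textbook assumptions, not the paper's full Ramsey setup): a household with income y saves a and faces multiplicative return risk η in the second period.

```latex
\max_{a}\ \ln(y-a)+\beta\,\mathbb{E}\!\left[\ln\!\bigl((1+r)\,a\,\eta\bigr)\right]
 = \ln(y-a)+\beta\ln\bigl((1+r)\,a\bigr)+\beta\,\mathbb{E}[\ln\eta]
```

Because the risk term enters additively, the first-order condition 1/(y-a) = β/a pins down the saving rate a/y = β/(1+β) independently of the distribution of η, which is the mechanism behind the result that the optimal aggregate saving rate is independent of income risk under logarithmic utility.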
This paper investigates the roles psychological biases play in empirically estimated deviations between subjective survival beliefs (SSBs) and objective survival probabilities (OSPs). We model deviations between SSBs and OSPs through age-dependent inverse S-shaped probability weighting functions (PWFs), as documented in experimental prospect theory. Our estimates suggest that the implied measure of cognitive weakness (likelihood insensitivity) and the implied measure of motivational bias (relative pessimism) both increase with age. We document that direct measures of cognitive weakness and motivational attitudes share these trends. Our regression analyses confirm that these factors play strong quantitative roles in the formation of subjective survival beliefs. In particular, cognitive weakness is an increasingly important contributor to the overestimation of survival chances in old age.
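Inverse S-shaped probability weighting functions of this kind are commonly parameterized with the Prelec (1998) form; whether the paper uses exactly this parameterization is an assumption here, and the parameter values below are illustrative:

```python
import math

def prelec(p, alpha=0.65, beta=1.0):
    """Prelec (1998) probability weighting function
    w(p) = exp(-beta * (-ln p) ** alpha).
    alpha < 1 yields the inverse S shape: small probabilities are
    overweighted and large probabilities are underweighted."""
    return math.exp(-beta * (-math.log(p)) ** alpha)

print(round(prelec(0.05), 3))  # small survival chance is overweighted (> 0.05)
print(round(prelec(0.95), 3))  # large survival chance is underweighted (< 0.95)
```

Lower alpha flattens the curve (more likelihood insensitivity), while beta shifts it (more or less pessimism), which is how the two psychological channels in the abstract map into the parameters.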
This paper is the national report for Germany prepared for the 20th General Congress of the International Academy of Comparative Law 2018. It gives an overview of the regulation of crowdfunding in Germany and the typical design of crowdfunding campaigns under this legal framework. After a brief survey of market data, it delineates the classification of crowdfunding transactions in German contract law and their treatment under the applicable conflict of laws regime. It then turns to the relevant rules in prudential banking regulation and capital market law. It highlights disclosure requirements that flow both from the contractual obligations of the initiators of campaigns vis-à-vis contributors and from securities regulation (the prospectus regime). After sketching the most important duties of the parties involved in crowdfunding, the report also looks at the key features of the respective transactions’ tax treatment.
This paper revisits the macroeconomic effects of the large-scale asset purchase programmes launched by the Federal Reserve and the Bank of England from 2008. Using a Bayesian VAR, we investigate the macroeconomic impact of shocks to asset purchase announcements and assess changes in their effectiveness based on subsample analysis. The results suggest that the early asset purchase programmes had significant positive macroeconomic effects, while those of the subsequent ones were weaker and in part not significantly different from zero. The reduced effectiveness seems to reflect in part better anticipation of asset purchase programmes over time, since we find significant positive macroeconomic effects when we consider shocks to survey expectations of the Federal Reserve’s last asset purchase programme. Finally, in all estimations we find a significant and persistent positive impact of asset purchase shocks on stock prices.
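The core of such an exercise is a VAR estimated on macro and announcement-shock series; the minimal two-variable sketch below uses plain OLS, whereas a Bayesian VAR would additionally place priors on the coefficients (the data here are noise-free and invented):

```python
def var1_coeffs(series):
    """OLS estimate of a bivariate VAR(1) without intercept, y_t = B y_{t-1} + e_t.
    series: list of (y1, y2) tuples. Returns B as a 2x2 nested list."""
    X, Y = series[:-1], series[1:]          # lagged and current observations
    xtx = [[sum(x[i] * x[j] for x in X) for j in range(2)] for i in range(2)]
    xty = [[sum(x[i] * y[j] for x, y in zip(X, Y)) for j in range(2)] for i in range(2)]
    det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
    inv = [[xtx[1][1] / det, -xtx[0][1] / det],
           [-xtx[1][0] / det, xtx[0][0] / det]]
    bt = [[sum(inv[i][k] * xty[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]                # B' = (X'X)^{-1} X'Y
    return [[bt[j][i] for j in range(2)] for i in range(2)]

# Noise-free data generated from a known coefficient matrix B:
B = [[0.5, 0.1], [0.0, 0.8]]
series = [(1.0, 1.0)]
for _ in range(8):
    y1, y2 = series[-1]
    series.append((B[0][0] * y1 + B[0][1] * y2, B[1][0] * y1 + B[1][1] * y2))
print(var1_coeffs(series))  # recovers B up to floating-point error
```

Impulse responses to an asset purchase announcement shock are then traced by iterating the estimated B forward from a one-off shock to the announcement equation.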
Coordination of circuit breakers? Volume migration and volatility spillover in fragmented markets
(2018)
We study circuit breakers in a fragmented, multi-market environment and investigate whether coordination of circuit breakers is necessary to ensure their effectiveness. In doing so, we analyze 2,337 volatility interruptions on Deutsche Boerse and examine whether a volume migration and an accompanying volatility spillover to alternative venues that continue trading can be observed. Contrary to the prevailing theoretical rationale, trading volume on alternative venues significantly decreases during circuit breakers on the main market, and we do not find any evidence of volatility spillover. Moreover, we show that the market share of the main market increases sharply during a circuit breaker. Surprisingly, this effect is amplified at higher levels of fragmentation. We identify high-frequency trading as a major reason for the vanishing trading activity on the alternative venues and provide empirical evidence that coordination of circuit breakers is not essential for their effectiveness as long as market participants shift to the dominant venue during market stress.
We investigate different designs of circuit breakers implemented on European trading venues and examine their effectiveness in managing excess volatility and preserving liquidity. Specifically, we empirically analyze volatility and liquidity around volatility interruptions implemented on the German and Spanish stock markets, which differ in specific design parameters. We find that volatility interruptions in general significantly decrease volatility in the post-interruption phase. However, this decrease in volatility comes at the cost of decreased liquidity. Regarding design parameters, we find that tighter price ranges and shorter durations help volatility interruptions achieve their goals.
I present a new business cycle model in which decision making follows a simple mental process motivated by neuroeconomics. Decision makers first compute the value of two different options and then choose the option that offers the highest value, but with errors. The resulting model is highly tractable and intuitive. A demand function in levels replaces the traditional Euler equation. As a result, even liquid consumers can have a large marginal propensity to consume. The interest rate affects consumption through the cost of borrowing and not through intertemporal substitution. I discuss the implications for stimulus policies.
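A value comparison with errors of this kind is naturally captured by a logit (softmax) choice rule; treating this as the paper's exact specification is an assumption, and sigma below is an illustrative noise scale:

```python
import math

def choice_prob(v_a, v_b, sigma=1.0):
    """Probability of choosing option A when its computed value is compared
    with option B's under logit-type errors of scale sigma
    (a two-option softmax)."""
    return 1.0 / (1.0 + math.exp(-(v_a - v_b) / sigma))

print(choice_prob(1.0, 1.0))                       # equal values -> 0.5
print(round(choice_prob(2.0, 1.0, sigma=0.5), 3))  # better option chosen more often
```

As sigma shrinks, choice becomes deterministic value maximization; as it grows, choice approaches a coin flip, which is the sense in which the comparison is made "with errors".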
This paper analyses whether the post-crisis regulatory reforms developed by global standard-setting bodies have created appropriate incentives for different types of market participants to centrally clear Over-The-Counter (OTC) derivative contracts. Beyond documenting the observed facts, we analyze four main drivers of the decision to clear: 1) the liquidity and riskiness of the reference entity; 2) the credit risk of the counterparty; 3) the clearing member’s portfolio net exposure with the Central Counterparty Clearing House (CCP); and 4) post-trade transparency. We use confidential European trade repository data on single-name sovereign Credit Default Swap (CDS) transactions and show that, of all the transactions reported in 2016 on Italian, German and French sovereign CDS, 48% were centrally cleared, 42% were not cleared despite being eligible for central clearing, and 9% of the contracts were not clearable because they did not satisfy certain CCP clearing criteria. However, there is a large difference between CCP clearing members, which clear about 53% of their transactions, and non-clearing members, which almost never clear their trades, even when they are subject to counterparty risk capital requirements. Moreover, we find that different factors explain clearing members’ decision to clear different CDS contracts: for Italian CDS, counterparty credit risk exposures matter most for the decision to clear, while for French and German CDS, margin costs are the most important factor. Clearing members use clearing to reduce their exposures to the CCP and largely clear contracts when at least one of the traders has a high counterparty credit risk.
Digitalization expands the possibilities for corporations to reduce taxes, mainly, but not exclusively, by allowing improved planning of where profits can be shifted. Against this background, the European Commission and several countries are emphatically demanding and designing new tax instruments. However, a selective turn away from internationally accepted principles of international taxation will raise more questions than it answers. While there are good reasons to think about a fundamental regime switch in international corporate taxation, there are also good arguments against ad hoc measures that selectively target the relatively small market of Google and Facebook and raise only negligible tax revenues.
Monetary policy and prudential supervision – from functional separation to a holistic approach?
(2018)
When prudential supervision was put in the hands of the European Central Bank (ECB), it was the political understanding that the ECB should follow a policy of meticulous separation between monetary policy and financial supervision. However, the financial crisis showed that monetary policy and prudential supervision deeply affect each other and that an overly strict separation might generate systemic risk. As a consequence, the prevalent model of “functional separation” – central banking and financial supervision in separate entities – has been questioned, and calls for a more holistic approach have increased.
This policy letter argues that, from a legal perspective, such a holistic approach would be in conformity with the current legal framework of the Economic and Monetary Union. Although the realization of a holistic approach might intensify doubts about democratic legitimation under the framework of the ESCB, the independence of the ECB should not be given up. Since viable alternatives that would protect monetary policy against the time-inconsistency problem, and thereby render central bank independence moot, do not seem to be available, and given the great importance of the independence of the European institutions for European integration, democratic control over the ECB should be strengthened instead of stripping the ECB of its independence.
I analyze the real effects of the quality of judicial enforcement by showing that an increase in the average duration of civil proceedings reduces firms' employment. I exploit a reorganization of court districts in Italy as an exogenous shock to court productivity and, using an instrumental variable approach, estimate an elasticity of employment to average trial length between -0.24 and -0.29. These results are very different from OLS estimates, which do not control for endogeneity, and suggest that stronger law enforcement eases financing constraints. The effects are more pronounced in highly levered and more financially dependent firms, and appear to affect mainly firms in less financially developed areas. Revenues respond more slowly than employment to the reform, and wages fall as the judiciary improves. There is no evidence of effects on capital structure or profitability. These results offer a more complete picture of the interplay between legal institutions and real economic outcomes.
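To make the elasticity estimate concrete, the following arithmetic sketch uses the midpoint of the reported range (-0.24 to -0.29); the midpoint value and the constant-elasticity approximation are my illustrative assumptions, not the paper's preferred specification.

```python
def employment_change(pct_change_trial_length, elasticity=-0.265):
    """Approximate percent change in firm employment for a given percent
    change in average civil-trial length, using the midpoint of the
    paper's estimated elasticity range (constant-elasticity sketch)."""
    return elasticity * pct_change_trial_length

# A 10% increase in average trial length implies roughly a 2.7% drop
# in employment under this approximation.
```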
This paper aims to analyze the effects of financial constraints and the financial crisis on the financing and investment policies of newly founded firms. The analysis thereby adds important new insights on a crucial segment of the economy. We make use of a large and comprehensive data set of French firms founded in the years 2004-2006, i.e. well before the financial crisis. Our panel data analysis shows that the global financial crisis imposed a shock (mostly demand-driven) on the financing as well as on the investments of these firms. Moreover, we find that financially constrained firms use less external debt financing and invest smaller amounts. They also rely on less trade credit. With regard to bank financing, newly founded firms which are more financially constrained accumulate less bank debt and repay initial bank debt more slowly than their financially unconstrained counterparts. Finally, we find that financially constrained firms are affected to a smaller degree by the financial crisis than their less financially constrained counterparts.
Automated deduction in higher-order program calculi, where properties of transformation rules are demanded, or confluence or other equational properties are requested, can often be done by syntactically computing overlaps (critical pairs) of reduction rules and transformation rules. Since higher-order calculi have alpha-equivalence as their fundamental equivalence, the reasoning procedure must deal with it. We define ASD1-unification problems, which are higher-order equational unification problems employing variables for atoms, expressions and contexts, with additional distinct-variable constraints, and which have to be solved w.r.t. alpha-equivalence. Our proposal is to extend nominal unification to solve these unification problems. We construct the nominal unification algorithm NomUnifyASC. We show that NomUnifyASC is sound and complete for this problem class, and outputs a set of unifiers with constraints in nondeterministic polynomial time if the final constraints are satisfiable. We also show that solvability of the output constraints can be decided in NEXPTIME, and for a fixed number of context-variables in NP. For terms without context-variables and atom-variables, NomUnifyASC runs in polynomial time, is unitary, and extends the classical problem by permitting distinct-variable constraints.
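The core ingredients that nominal techniques build on are the atom-swapping operation and a swapping-based alpha-equivalence check. The toy sketch below illustrates those two ingredients on a bare lambda-term representation; it is not the paper's NomUnifyASC algorithm, and the representation and names are my own.

```python
def swap(a, b, term):
    """Apply the transposition (a b) to every atom in a term.
    Terms: atoms are strings, abstractions are ('lam', atom, body),
    applications are ('app', f, x)."""
    if isinstance(term, str):
        return b if term == a else (a if term == b else term)
    if term[0] == 'lam':
        return ('lam', swap(a, b, term[1]), swap(a, b, term[2]))
    return ('app', swap(a, b, term[1]), swap(a, b, term[2]))

def atoms_free(term, bound=frozenset()):
    """Free atoms of a term."""
    if isinstance(term, str):
        return set() if term in bound else {term}
    if term[0] == 'lam':
        return atoms_free(term[2], bound | {term[1]})
    return atoms_free(term[1], bound) | atoms_free(term[2], bound)

def alpha_eq(s, t):
    """Alpha-equivalence via swappings: two abstractions with distinct
    binders are compared by swapping the binders in one body, provided
    the first binder is not free in the other body."""
    if isinstance(s, str) or isinstance(t, str):
        return s == t
    if s[0] != t[0]:
        return False
    if s[0] == 'app':
        return alpha_eq(s[1], t[1]) and alpha_eq(s[2], t[2])
    a, sbody = s[1], s[2]
    b, tbody = t[1], t[2]
    if a == b:
        return alpha_eq(sbody, tbody)
    return a not in atoms_free(tbody) and alpha_eq(sbody, swap(a, b, tbody))
```

For example, `('lam', 'x', 'x')` and `('lam', 'y', 'y')` are alpha-equivalent, while `('lam', 'x', 'y')` and `('lam', 'y', 'x')` are not.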
We propose a model for measuring the runtime of concurrent programs by the minimal number of evaluation steps. The focus of this paper is on improvements: program transformations that improve this number in every context, where we distinguish between sequential and parallel improvements, for one or for several processors, respectively. We apply the methods to CHF, a model of Concurrent Haskell extended by futures. The language CHF is a typed higher-order functional language with concurrent threads, monadic IO and MVars as synchronizing variables. We show that all deterministic reduction rules and 15 further program transformations are sequential and parallel improvements. We also show that the introduction of deterministic parallelism is a parallel improvement, and that its inverse is a sequential improvement, provided it is applicable. This is a step towards more automated precomputation of concurrent programs at compile time that is also formally proven to be correctly optimizing.
The publication of the Liikanen Group's final report in October 2012 was surrounded by high expectations regarding the implementation of the reform plans through the proposed measures that reacted to the financial and sovereign debt crises. The recommendations mainly focused on introducing a mild version of banking separation and the creation of the preconditions for bail-in measures. In this article, we present an overview of the regulatory reforms, to which the financial sector has been subject over the past years in accordance with the concepts laid out in the Liikanen Report. It becomes clear from our assessment that more specific steps have yet to be taken before the agenda is accomplished. In particular, bail-in rules must be implemented more consistently. Beyond the question of the required minimum, the authors develop the notion of a maximum amount of liabilities subject to bail-in. The combination of both components leads to a three-layer structure of bank capital: a bail-in tranche, a deposit-insured bailout tranche, and an intermediate run-endangered mezzanine tranche. The size and treatment of the latter must be put to a political debate that weighs the costs and benefits of a further increase in financial stability beyond that achieved through loss-bearing of the bail-in tranche.
New provisioning rules introduced by IFRS 9 are expected to reduce the procyclicality of provisioning. Heterogeneity among banks in the procyclicality of provisioning may not only reflect the formal accounting rules, but also variation in discretionary provisioning policies. This paper presents empirical evidence on the heterogeneity of provisioning procyclicality among significant banks that are directly supervised by the ECB. In particular, this paper finds that provisioning is relatively procyclical at banks that have i) high loans-to-assets ratios, ii) high shares of non-interest income in total operating income, iii) low capitalization rates, and iv) low total assets. Supervisory guidance provided to banks on how to implement IFRS 9 has mostly been of a qualitative nature, and may prove inadequate to prevent an undesirably wide future variation in provisioning among EU banks.
This paper was provided at the request of the Committee on Economic and Monetary Affairs of the European Parliament and commissioned and drafted under the responsibility of the Economic Governance Support Unit (EGOV) of the European Parliament. It was originally published on the European Parliament’s webpage.
Coming (great) events cast their (long) shadow before. As the financial crisis gave birth to the creation of the European System of Financial Supervision (ESFS), the imminent Brexit now serves as an impulse to rather extensively reorganize it. Pursuant to the preferences of the Commission—as revealed in its draft for a regulation amending the regulations founding the European Supervisory Authorities (ESA)—the supervision (and regulation) of the financial sectors should be further centralized and integrated, and additional powers should be given to the ESAs. To a large degree these alterations are intended to adjust the competences of the European Securities and Markets Authority (ESMA) to better meet its new objectives under the Capital Markets Union (“CMU”). Given that an equivalent to the CMU or the Banking Union—in the sense of a European Insurance Union—is not yet on the horizon for the insurance sector (or the occupational pensions sector), one could prima facie take the view that insurance supervision and regulation is once again taken captive by the necessity of regulatory reforms stemming from other financial sectors. However, even if that is partially the case, the outcome of the intended reforms might still be advantageous for the insurance sector and an important step in the right direction. Therefore, it needs to be intensively discussed.
At this stage, some of the most prominent envisioned changes to the structure, tasks and powers of the European Insurance and Occupational Pensions Authority (EIOPA) and their necessity, usefulness or counter-productivity still have to be examined.
During the last IAIS Global Seminar in June 2017, the IAIS disclosed the agenda for a gradual shift in the systemic risk assessment methodology from the current Entity-Based Approach (EBA) to a new Activity-Based Approach (ABA). The EBA, which was developed in the aftermath of the 2008/2009 financial crisis, defines a list of Global Systemically Important Insurers (G-SIIs) based on a pre-defined set of criteria related to the size of the institution. These G-SIIs are subject to additional regulatory requirements since their distress or disorderly failure would potentially cause significant disruption to the global financial system and economic activity. Even if size is still a needed element of a systemic risk assessment, the strong emphasis put on the too-big-to-fail approach in insurance, i.e. the EBA, might be partially missing the underlying nature of systemic risk in insurance. Not only certain activities, including insurance activities such as life or non-life lines of business, but also common exposures or certain managerial practices such as leverage or funding structures, tend to contribute to the systemic risk of insurers but are not covered by the current EBA (Berdin and Sottocornola, 2015). Therefore, we very much welcome the general development of the systemic risk assessment methodology, even if several important questions still need to be answered.
This Chapter explores how an environment of persistent low returns influences saving, investing, and retirement behaviors, as compared to what in the past had been thought of as more “normal” financial conditions. Our calibrated lifecycle dynamic model with realistic tax, minimum distribution, and Social Security benefit rules produces results that agree with observed saving, work, and claiming age behavior of U.S. households. In particular, our model generates a large peak at the earliest claiming age at 62, as in the data. Also in line with the evidence, our baseline results show a smaller second peak at the (system-defined) Full Retirement Age of 66. In the context of a zero-return environment, we show that workers will optimally devote more of their savings to non-retirement accounts and less to 401(k) accounts, since the relative appeal of investing in taxable versus tax-qualified retirement accounts is lower in a low return setting. Finally, we show that people claim Social Security benefits later in a low interest rate environment.
This paper studies the long-run effects of credit market disruptions on real firm outcomes and how these effects depend on nominal wage rigidities at the firm level. I trace out the long-run investment and growth trajectories of firms which are more adversely affected by a transitory shock to aggregate credit supply. Affected firms exhibit a temporary investment gap for two years following the shock, resulting in a persistent accumulated growth gap. I show that affected firms with a higher degree of wage rigidity exhibit a steeper drop in investment and grow more slowly than affected firms with more flexible wages.
We shed new light on the macroeconomic effects of rising temperatures. In the data, a shock to global temperature dampens expenditures in research and development (R&D). We rationalize this empirical evidence within a stochastic endogenous growth model, featuring temperature risk and growth sustained through innovations. In line with the novel evidence in the data, temperature shocks undermine economic growth via a drop in R&D. Moreover, in our endogenous growth setting temperature risk generates non-negligible welfare costs (i.e., 11% of lifetime utility). An active government, which is committed to a zero fiscal deficit policy, can offset the welfare costs of global temperature risk by subsidizing the aggregate capital investment with one-fifth of total public spending.
After the Lehman-Brothers collapse, the stock index has exceeded its pre-Lehman-Brothers peak by 36% in real terms. Seemingly, markets have been demanding more stocks instead of bonds. Yet, instead of observing higher bond rates, paradoxically, bond rates have been persistently negative after the Lehman-Brothers collapse. To explain this paradox, we suggest that, in the post-Lehman-Brothers period, investors changed their perceptions on disasters, thinking that disasters occur once every 30 years on average, instead of disasters occurring once every 60 years. In our asset-pricing calibration exercise, this rise in perceived market fragility alone can explain the drop in both bond rates and price-dividend ratios observed after the Lehman-Brothers collapse, which indicates that markets mostly demanded bonds instead of stocks.
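The mechanism behind the paradox can be illustrated with a bare-bones rare-disaster bond-pricing calculation in the spirit of Rietz/Barro models: raising the perceived disaster frequency raises the demand for safe bonds and pushes the model's risk-free rate down. All parameters below are illustrative assumptions, not the paper's calibration.

```python
import math

def riskfree_rate(p_disaster, growth=0.02, disaster_size=0.3,
                  beta=0.98, gamma=3.0):
    """One-period risk-free rate in a toy rare-disaster endowment economy:
    1/Rf = beta * E[(C'/C)^(-gamma)], where consumption grows at `growth`
    but shrinks by `disaster_size` with probability `p_disaster`
    (all parameter values are illustrative)."""
    sdf_mean = beta * math.exp(-gamma * growth) * (
        (1 - p_disaster) + p_disaster * (1 - disaster_size) ** (-gamma))
    return 1.0 / sdf_mean - 1.0

# Doubling the perceived disaster frequency (once every 60 years ->
# once every 30 years) lowers the model risk-free rate:
r_calm    = riskfree_rate(1 / 60)
r_fragile = riskfree_rate(1 / 30)
```

In this toy calibration the shift from a 1/60 to a 1/30 annual disaster probability cuts the risk-free rate by several percentage points, which is the qualitative channel the abstract describes.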
In the context of the upcoming Brexit, a relocation of the clearing of euro-OTC derivatives for EU-based firms is the subject of controversial discussion. The opponents of a relocation argue that a relocation would cause additional costs for market participants of up to USD 100 bn over a period of 5 years. This paper shows that this cost estimate is fairly unrealistic and that relocation costs would amount to approximately USD 0.6 bn p.a., which translates to cumulative costs of around USD 3.2 bn for a transition period of 5 years. In light of the strategic importance of systemically relevant CCPs for the financial stability of the eurozone, the potential relocation costs should not be a decision criterion.
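A flat 0.6 bn per year over 5 years gives 3.0 bn, so the cumulative figure of about 3.2 bn presumably embeds some growth or adjustment over the transition period. The sketch below reconciles the two numbers under a purely illustrative 3% annual growth assumption of my own; the paper may use a different mechanism.

```python
def cumulative_cost(annual_cost=0.6, years=5, growth=0.03):
    """Cumulative relocation cost in USD bn over the transition period,
    letting the annual cost grow at an illustrative rate (the text reports
    ~0.6 bn p.a. and ~3.2 bn cumulative over 5 years)."""
    return sum(annual_cost * (1 + growth) ** t for t in range(years))

# cumulative_cost() is close to the reported ~3.2 bn figure.
```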
We establish a benchmark result for the relationship between the loanable funds and the money-creation approach to banking. In particular, we show that both processes yield the same allocations when there is no uncertainty and thus no bank default. In such cases, using the much simpler loanable funds approach as a shortcut does not imply any loss of generality.
This paper presents new evidence on the expectation formation process of firms from a survey of the German manufacturing sector. It focuses on their expectations about future business conditions, which enter the widely followed economic sentiment index and are an important determinant of firms' employment and investment decisions. We find that firms extrapolate their experience too much and make predictable forecasting errors. Moreover, firms do not seem to anticipate the upcoming reversals at business cycle peaks and troughs, which causes suboptimal adjustment of investment and employment and affects their inventories and profits. However, the impact on expectation errors decreases with the size and age of the firm, as firms learn to reduce their extrapolation bias over time.
We propose a long-run risk model with stochastic volatility, a time-varying mean reversion level of volatility, and jumps in the state variables. The special feature of our model is that the jump intensity is not affine in the conditional variance but driven by a separate process. We show that this separation of jump risk from volatility risk is needed to match the empirically weak link between the level and the slope of the implied volatility smile for S&P 500 options.
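The key modeling choice, a jump intensity driven by its own process rather than being affine in the conditional variance, can be sketched with a small Euler discretization in which the variance and the jump intensity follow separate mean-reverting dynamics. Every parameter and functional form below is illustrative, not the paper's calibration.

```python
import math
import random

def simulate_returns(T=252, dt=1 / 252, seed=0):
    """Euler sketch of daily returns where the jump intensity `lam`
    mean-reverts independently of the variance `v`, so jump risk is
    separated from volatility risk (illustrative parameters)."""
    rng = random.Random(seed)
    v, lam = 0.04, 2.0  # initial variance and jump intensity (per year)
    returns = []
    for _ in range(T):
        # separate mean-reverting (square-root style) dynamics
        v += 5.0 * (0.04 - v) * dt + 0.3 * math.sqrt(max(v, 0.0) * dt) * rng.gauss(0, 1)
        lam += 3.0 * (2.0 - lam) * dt + 0.5 * math.sqrt(max(lam, 0.0) * dt) * rng.gauss(0, 1)
        # Poisson-style jump arrival governed by lam, not by v
        jump = -0.05 if rng.random() < max(lam, 0.0) * dt else 0.0
        returns.append(math.sqrt(max(v, 0.0) * dt) * rng.gauss(0, 1) + jump)
    return returns
```

Because jumps arrive according to `lam` rather than `v`, calm-volatility periods can still carry elevated jump risk, which is the feature the abstract argues is needed to match the implied volatility smile.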
Why do banks issue contingent convertible debt? To answer this question, we study comprehensive data covering all issues by publicly traded banks in Europe of contingent convertible bonds (CoCos) that count as additional tier 1 capital (AT1). We find that banks with lower asset volatility are more likely to issue AT1 CoCos than their riskier counterparts, but that CDS spreads do not react following issue announcements. Our estimates therefore suggest that agency costs play a crucial role in banks' ability to successfully issue CoCos. The agency costs may be higher for CoCos than for equity, which would explain why riskier or weakly capitalized banks issue equity rather than CoCos.
Empirical evidence suggests that investments in research and development (R&D) by older and larger firms are more spread out internationally than R&D investments by younger and smaller firms. In this paper, I explore the quantitative implications of this type of heterogeneity by assuming that incumbents, i.e. current monopolists engaging in incremental innovation, have a higher degree of internationalization in their R&D technologies than entrants, i.e. new firms engaging in radical innovation, in a two-country endogenous growth general equilibrium model. In particular, this assumption allows the model to break the perfect correlation between incumbents’ and entrants’ innovation probabilities and to match the empirical counterpart exactly.
What processes transform (im)mobile individuals into ‘migrants’ and geographic movements across political-territorial borders into ‘migration’? To address this question, the article develops the doing migration approach, which combines perspectives from social constructivism, praxeology and the sociologies of knowledge and culture. ‘Doing migration’ starts with the processes of social attribution that differentiate between ‘migrants’ and ‘non-migrants’. Embedded in institutional, organizational and interactional routines these attributions generate unique social orders of migration. By illustrating these conceptual ideas, the article provides insights into the elements of the contemporary European order of ‘migration’. Its institutional routines contribute to the emergence of a European migration regime that involves narratives of economization, securitization and humanitarization. The organizational routines of the European migration order involve surveillance and diversity management, which have disciplining effects on those defined as ‘migrants’. The routines of everyday face-to-face interactions produce various micro-forms of doing ‘migration’ through stigmatization and othering, but they also provide opportunities to resist a social attribution as ‘migrant’.
This paper reviews social network analysis (SNA) as a method for biographical research, which is a novel contribution. We argue that applying SNA in biographical research, through standardized data collection as well as the visualization of networks, can open up participants' interpretations of relations throughout their lives, and allows a creative and innovative way of data collection that is responsive to participants' own meanings and associations while enabling the researchers to conduct systematic data analysis. The paper discusses the analytical potential of SNA in biographical research and critically examines the efficacy of this method, together with its limitations.
Public employees in many developing economies earn much higher wages than similar private-sector workers. These wage premia may reflect an efficient return to effort or unobserved skills, or an inefficient rent causing labor misallocation. To distinguish between these explanations, we exploit the Kenyan government's algorithm for hiring eighteen thousand new teachers in 2010 in a regression discontinuity design. Fuzzy regression discontinuity estimates yield a civil-service wage premium of over 100 percent (not attributable to observed or unobserved skills) but no effect on motivation, suggesting rent-sharing as the most plausible explanation for the wage premium.
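The logic of the regression discontinuity design can be illustrated with a deliberately naive sharp-RD estimator: compare mean outcomes just above and just below the hiring cutoff. This is a toy illustration of the design only; the paper uses a fuzzy RD estimator, and the function name and bandwidth choice are my own.

```python
def rd_estimate(running, outcome, cutoff=0.0, bandwidth=1.0):
    """Naive sharp regression-discontinuity estimate: difference in mean
    outcomes for observations just above vs just below the cutoff of the
    running variable (toy sketch, not the paper's fuzzy-RD estimator)."""
    above = [y for x, y in zip(running, outcome)
             if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in zip(running, outcome)
             if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)
```

With hypothetical data where outcomes jump from about 1.0 below the cutoff to about 2.0 above it, the estimator recovers a discontinuity of roughly 1.0.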
The Global Irrigation Model (GIM) is used within the framework of the global hydrological model WaterGAP to calculate monthly irrigation crop water use. Results on a 0.5-degree grid include irrigation consumptive use (ICU) and, via division by irrigation efficiencies, irrigation water withdrawal (IWU). The model distinguishes up to two cropping periods of rice and non-rice crops, each grown for 150 days, using a grid of area equipped for irrigation (AEI). The historical development of AEI and of the fraction of area actually irrigated (AAI) was previously considered via scaling of cell-specific results with country-specific factors for each year. In this study, GIM was adapted to use the new Historical Irrigation Data set (HID) with cell-specific AEI for 14 time slices between 1900 and 2005. AEI grids were temporally interpolated and, using the optional grid of AAI/AEI, results for the years 1901-2014 were generated (runs "HID-ACT"). Thus, the new installation or abandonment of irrigation infrastructure in individual grid cells can be represented in a spatially explicit manner. For the evaluated years 1910, 1960, 1995, and 2005, ICU from HID-ACT was superior to the country-specific scaled results (run "HID-ACTHIST") in representing the historical development of the spatial pattern. Compared to US state-level reference data, spatial patterns were better, while country totals were not always better. Calculating the cropping periods requires 30-year climate means, the choice of which is relevant: four periods chosen before 1981-2010 all resulted in considerable, persistent changes in the ICU spatial pattern and in varying percentage changes in country totals, possibly because climate change was already present in those periods.
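The temporal interpolation of AEI between HID time slices can be sketched for a single grid cell as simple linear interpolation; the function below is an illustrative reconstruction of that step, not the actual GIM/WaterGAP code, and the slice years and values are hypothetical.

```python
def interpolate_aei(year, slices):
    """Linear temporal interpolation of area equipped for irrigation (AEI)
    between HID time slices for one grid cell. `slices` maps slice years
    to AEI values; years outside the range are clamped to the endpoints."""
    years = sorted(slices)
    if year <= years[0]:
        return slices[years[0]]
    if year >= years[-1]:
        return slices[years[-1]]
    for y0, y1 in zip(years, years[1:]):
        if y0 <= year <= y1:
            w = (year - y0) / (y1 - y0)
            return (1 - w) * slices[y0] + w * slices[y1]

# e.g. AEI in 1955 halfway between hypothetical slices at 1950 and 1960:
aei_1955 = interpolate_aei(1955, {1950: 100.0, 1960: 140.0})  # -> 120.0
```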
The paper provides an overview and an economic analysis of the development of the corporate governance of German banks since the 1950s, highlighting peculiarities – as seen from the meanwhile prevailing standard model perspective – of the German case. These peculiarities refer to the specific German notion and legal-institutional regime of corporate governance in general as well as to the specific three-pillar structure of the German banking system.
The most striking changes in the corporate governance of German banks during the past 50 years occurred in the case of the large shareholder-owned banks. For them, capital markets have become an important element of corporate governance, and their former orientation towards the interests of a broadly defined set of stakeholders has largely been replaced by a one-sided concentration on shareholders’ interests. In contrast, the corporate governance regimes of the smaller local public savings banks and the local cooperative banks have remained virtually unchanged. They acknowledge a broader horizon of stakeholder interests and put an emphasis on monitoring.
The Great Financial Crisis, beginning in 2007, has led to a considerable reassessment in the academic and political debate on bank governance. On an international level, it has revived the older notion that, in view of their high leverage and their innate complexity, banks are “special”, and that bank corporate governance is special as well and needs to be seen in this light, not least because research indicates that banks with a strong and one-sided shareholder orientation – and thus with what appears to be the best corporate governance according to the standard model – have suffered most in the crisis. In the German case, the crisis has shown that the smaller local banks survived much better than the large private and public banks, whose funding strongly depends on wholesale markets. This may point to certain advantages of their governance and ownership regimes. But the differences in performance during the crisis years may also, or even more so, be a consequence of the business models of large versus small banks rather than of their different governance regimes.
This study provides a graphic overview on core legislation in the area of economic and financial services. The presentation essentially covers the areas within the responsibility of the Economic and Monetary Affairs Committee (ECON); hence it starts with core ECON areas but also displays neighbouring areas of other Committees' competences which are closely connected to and impacting on ECON's work. It shows legislation in force, proposals and other relevant provisions on banking, securities markets and investment firms, market infrastructure, insurance and occupational pensions, payment services, consumer protection in financial services, the European System of Financial Supervision, European Monetary Union, euro bills and coins and statistics, competition, taxation, commerce and company law, accounting and auditing. Moreover, it notes selected provisions that might become relevant in the upcoming Article 50 TEU negotiations.
The German savings and cooperative banks of the 19th century were precursors of modern microfinance. They provided access to financial services for the majority of the German population, which was formerly excluded from bank funding. Furthermore, they did this at low costs for themselves and affordable prices for their clients. By creating networks of financially viable and stable financial institutions covering the entire country, they contributed significantly to building a sound and “inclusive” financial infrastructure in Germany. A look back at the history of German savings and cooperative banks and combining these experiences with the lessons learned from modern microfinance can guide current policy and be valuable for present and future models of microfinance business.
Effective market discipline incentivizes financial institutions to limit their risk-taking behavior, making it a key element of financial regulation. However, without adequate incentives to monitor and control the risk-taking behavior of financial institutions, market discipline erodes. As a consequence, bailing out financial institutions, as happened on an unprecedented scale during the recent financial crisis, may impose indirect costs on financial stability if investors' bailout expectations change. Analyzing US data covering the period between 2004 and 2014, Hett and Schmidt (2017) find that market participants adjusted their bailout expectations in response to government interventions, undermining market discipline mechanisms. Given these findings, policymakers need to take into account the potential effects on market discipline when deciding about public support for troubled financial institutions in the future. Considering the parallelism of events and public responses during the financial crisis as well as the recent developments at Italian banks, these results not only concern the US but also have important implications for European financial markets and policymakers.
On 15 August 2017, the Bundesverfassungsgericht (BVerfG) referred the case against the European Central Bank’s policy of Quantitative Easing (QE) to the European Court of Justice (ECJ). The author argues that this event differs in several aspects from the OMT case in 2015 – in content as well as in form. The BVerfG recognizes that it is a legitimate goal of the ECB’s monetary policy to bring inflation up close to 2%, and that the instrument employed for QE is one of monetary policy. However, it doubts whether the sheer volume of QE would not distort the character of the program as one of monetary policy. The ECJ will now have to clarify the extent to which the ECJ’s findings in its OMT judgment are relevant for QE as well as the standard of review applicable to monetary policy. The author raises the questions of whether the principle of democracy under German constitutional law can actually provide the standard by which the ECB is to be measured, and how tight judicial review could be exercised over the ECB without encroaching upon its autonomy in monetary policy matters – and thus upon the very essence of central bank independence.
Crowdfunding is a buzzword for a sub-set of the new forms of finance facilitated by advances in information technology, usually categorized as fintech. Concerns for financial stability, investor and consumer protection, and the prevention of money laundering or the funding of terrorism increasingly hinge on adequately including these new techniques for initiating financing relationships in the regulatory framework.
This paper analyzes the German regulation of crowdinvesting and finds that it does not fully live up to the regulatory challenges posed by this novel form of digitized matching of supply and demand on capital markets. It should better reflect the key importance of crowdinvesting platforms, which may become critical providers of market infrastructure in the not too distant future. Moreover, platforms can play an important role in investor protection that cannot be performed by traditional disclosure regimes geared towards more seasoned issuers. Against this background, the creation of an exemption from the traditional prospectus regime seems to be a plausible policy choice. However, it needs to be complemented by an adequate regulatory stimulation of platforms’ role as gatekeepers.
Fleckenstein et al. (2014) document that nominal Treasuries trade at higher prices than inflation-swapped indexed bonds, which exactly replicate the nominal cash flows. We study whether this mispricing arises from liquidity premiums in inflation-indexed bonds (TIPS) and inflation swaps. Using US data, we show that the level of liquidity affects TIPS, whereas swap yields include a liquidity risk premium. We also allow for liquidity effects in nominal bonds. These results are based on a model with a systematic liquidity risk factor and asset-specific liquidity characteristics. We show that these liquidity (risk) premiums explain a substantial part of the TIPS underpricing.
Coming early to the party
(2017)
We examine the strategic behavior of High Frequency Traders (HFTs) during the pre-opening phase and the opening auction of the NYSE-Euronext Paris exchange. HFTs actively participate and profitably extract information from the order flow. They also post "flash crash" orders to gain time priority. They make profits on their last-second orders; however, so do others, suggesting that there is no speed advantage. HFTs lead price discovery, and neither harm nor improve liquidity. They "come early to the party" and enjoy it (make profits); however, they also help others enjoy the party (improve market quality) and do not have privileges (their speed advantage is not crucial).