We assess the degree of market fragmentation in the euro-area corporate bond market by disentangling the determinants of the risk premium paid on bonds at origination. By looking at over 2,400 bonds, we are able to isolate the country-specific effects, which are a suitable indicator of market fragmentation. We find that, after peaking during the sovereign debt crisis, fragmentation shrank in 2013 and receded to pre-crisis levels only in 2014. However, the low level of estimated market fragmentation is coupled with still high heterogeneity in actual bond yields, challenging the consistency of the new equilibrium.
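The identification idea, regressing issuance spreads on bond characteristics plus country dummies and reading fragmentation off the dispersion of the estimated country effects, can be sketched on toy data. Everything below (the variables, the dispersion measure) is an illustrative assumption, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: spread = bond control + country effect + noise (all hypothetical).
n = 300
countries = rng.integers(0, 4, size=n)          # 4 euro-area countries
rating = rng.normal(0.0, 1.0, size=n)           # standardised credit-rating score
country_fx = np.array([0.0, 0.2, 0.8, 1.5])     # "true" country premia
spread = 1.0 + 0.5 * rating + country_fx[countries] + rng.normal(0, 0.1, n)

# Design matrix: intercept, rating control, country dummies (base = country 0).
D = np.zeros((n, 3))
for c in range(1, 4):
    D[:, c - 1] = (countries == c).astype(float)
X = np.column_stack([np.ones(n), rating, D])

beta, *_ = np.linalg.lstsq(X, spread, rcond=None)
country_effects = np.concatenate([[0.0], beta[2:]])

# A simple fragmentation proxy: dispersion of the estimated country effects.
fragmentation = country_effects.std()
print(country_effects.round(2), round(fragmentation, 2))
```

With country effects removed from the bond-level controls, their cross-country dispersion is one natural summary of how fragmented pricing is.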
Chen and Zadrozny (1998) developed the linear extended Yule-Walker (XYW) method for determining the parameters of a vector autoregressive (VAR) model from available covariances of mixed-frequency observations on the variables of the model. If the parameters are determined uniquely for available population covariances, then the VAR model is identified. The present paper extends the original XYW method to determine all ARMA parameters of a vector autoregressive moving-average (VARMA) model from available covariances of single- or mixed-frequency observations. The paper proves that under conditions of stationarity, regularity, miniphaseness, controllability, observability, and diagonalizability on the parameters of the model, the parameters are determined uniquely from available population covariances of single- or mixed-frequency observations, so that the VARMA model is identified from those covariances.
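The classical Yule-Walker logic that the XYW method builds on can be illustrated for a VAR(1): the model implies Gamma(1) = A Gamma(0), so A is recovered from covariances alone. A minimal sketch on simulated single-frequency data (illustrative only; the paper's extension handles mixed frequencies and the VARMA moving-average terms):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a stationary bivariate VAR(1): x_t = A x_{t-1} + e_t.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
T = 50_000
x = np.zeros((T, 2))
e = rng.normal(size=(T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + e[t]

# Sample autocovariances Gamma(0) and Gamma(1).
xc = x - x.mean(axis=0)
G0 = xc.T @ xc / T
G1 = xc[1:].T @ xc[:-1] / (T - 1)

# Yule-Walker: Gamma(1) = A Gamma(0)  =>  A_hat = Gamma(1) Gamma(0)^{-1}.
A_hat = G1 @ np.linalg.inv(G0)
print(A_hat.round(2))
```

The estimate converges to A as the sample grows, which is the population-covariance identification argument in miniature.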
We examine the dynamics of assets under management (AUM) and management fees at the portfolio manager level in the closed-end fund industry. We find that managers capitalize on good past performance and favorable investor perception about future performance, as reflected in fund premiums, through AUM expansions and fee increases. However, the penalties for poor performance or unfavorable investor perception are either insignificant, or substantially mitigated by manager tenure. Long tenure is generally associated with poor performance and high discounts. Our findings suggest substantial managerial power in capturing CEF rents. We also document significant diseconomies of scale at the manager level.
The global financial crisis and the ensuing criticism of macroeconomics have inspired researchers to explore new modeling approaches. There are many new models that deliver improved estimates of the transmission of macroeconomic policies and aim to better integrate the financial sector in business cycle analysis. Policy making institutions need to compare available models of policy transmission and evaluate the impact and interaction of policy instruments in order to design effective policy strategies. This paper reviews the literature on model comparison and presents a new approach for comparative analysis. Its computational implementation enables individual researchers to conduct systematic model comparisons and policy evaluations easily and at low cost. This approach also contributes to improving reproducibility of computational research in macroeconomic modeling. Several applications serve to illustrate the usefulness of model comparison and the new tools in the area of monetary and fiscal policy. They include an analysis of the impact of parameter shifts on the effects of fiscal policy, a comparison of monetary policy transmission across model generations and a cross-country comparison of the impact of changes in central bank rates in the United States and the euro area. Furthermore, the paper includes a large-scale comparison of the dynamics and policy implications of different macro-financial models. The models considered account for financial accelerator effects in investment financing, credit and house price booms and a role for bank capital. A final exercise illustrates how these models can be used to assess the benefits of leaning against credit growth in monetary policy.
The modern tontine: an innovative instrument for longevity risk management in an aging society
(2016)
The changing social, financial and regulatory frameworks, such as an increasingly aging society, the current low interest rate environment, as well as the implementation of Solvency II, lead to the search for new product forms for private pension provision. In order to address the various issues, these product forms should reduce or avoid investment guarantees and risks stemming from longevity, still provide reliable insurance benefits and simultaneously take account of the increasing financial resources required for very high ages. In this context, we examine whether a historical concept of insurance, the tontine, entails enough innovative potential to extend and improve the prevailing privately funded pension solutions in a modern way. The tontine basically generates an age-increasing cash flow, which can help to match the increasing financing needs at old ages. However, the tontine generates volatile cash flows, so that - especially in the context of an aging society - the insurance character of the tontine cannot be guaranteed in every situation. We show that partial tontinization of retirement wealth can serve as a reliable supplement to existing pension products.
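The age-increasing but volatile cash-flow property of the tontine can be seen in a stylised simulation: a fixed pool payout split among survivors grows per head as the pool shrinks, while binomial mortality makes the path random. All parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylised tontine: a fixed total payout is split among survivors each year,
# so the per-survivor cash flow rises with age (parameters are illustrative).
n0 = 1000          # initial pool members
payout = 100_000.0 # total paid to the pool each year
q = 0.04           # constant annual death probability, for illustration only

alive = n0
per_head = []
for year in range(30):
    per_head.append(payout / alive)
    deaths = rng.binomial(alive, q)   # mortality makes the cash flow volatile
    alive -= deaths

print(round(per_head[0], 2), round(per_head[-1], 2))
```

The first payment is deterministic; the later, larger payments depend on realised mortality, which is exactly the volatility the abstract flags as limiting the tontine's insurance character.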
Using two datasets containing demographically representative samples of the Dutch population, I study how lifetime experiences of aggregate labor market conditions affect personality. Three sets of findings are reported. First, experienced aggregate unemployment is negatively correlated with the levels of all Big Five personality traits, except for conscientiousness (no significant correlation). Second, in panel data models with individual fixed effects I find that changes in experienced aggregate unemployment cause changes in emotional stability and agreeableness for men, and conscientiousness for women. The correlation is positive, and effects are economically large. Third, I report suggestive evidence that the main driver is experienced aggregate unemployment, rather than other macroeconomic variables such as experienced GDP, stock market returns or inflation. Taken together, these findings suggest that changes in Big Five personality traits are systematically related to experienced aggregate labor market conditions.
This paper investigates the potential implications of say on pay for management remuneration in Germany. We try to shed light on some key aspects by presenting quantitative data that allow us to gauge the pertinent effects of the German natural experiment that originates with the 2009 amendments to the Stock Corporation Act of 1965. In order to do this, we deploy a hand-collected data set for Germany's major firms (i.e. the DAX 30) for the years 2006-2012. Rather than focusing exclusively on CEO remuneration, we collected data for all members of the management board for the whole period under investigation. We observe that the compensation packages of management board members of Germany's DAX 30 firms are quite closely linked to key performance measures. In addition, we find that salaries increase with the size of the company and that ownership concentration has no significant effect on compensation. Also, our findings suggest that the two-tier system seems to matter a lot when it comes to compensation. However, it would be misleading to state that we see no significant impact of the introduction of the German say-on-pay regime. Our findings suggest that supervisory boards anticipate shareholder behavior.
Highly-skilled labour migration in Switzerland: household strategies and professional careers
(2016)
The article investigates household strategies in the context of highly-skilled labour migration. It focuses on the ways highly-skilled migrants are taking up residence in Switzerland. The analysis shows different household strategies based on the perception of a further professional move. The perceived likeliness of a further move implies household strategies characterized by a high motility: the household remains ready to move and mobilises dedicated organisations (like outplacement agencies or international schools). When a further move is neither perceived nor wanted, the household develops more anchored strategies which are often cheaper. In order to cope with frequent mobilities, the analysis shows that household strategies are deeply gendered.
Understanding the shift from micro to macro-prudential thinking: a discursive network analysis
(2016)
While some economists argued for macro-prudential regulation pre-crisis, the macro-prudential approach and its emphasis on endogenously created systemic risk have only gained prominence post-crisis. Employing discourse and network analysis on samples of the most cited scholarly works on banking regulation as well as on systemic risk (60 sources each) from 1985 to 2014, we analyze the shift from micro- to macro-prudential thinking in the transition to the post-crisis period. Our analysis demonstrates that the predominance of formalism, particularly partial equilibrium analysis, along with the exclusion of historical and practitioners’ styles of reasoning from banking regulatory studies impeded economists from engaging seriously with the endogenous sources of systemic risk prior to the crisis. Post-crisis, these topics became important in this discourse, but the epistemological failures of pre-crisis banking regulatory studies were not sufficiently recognized. Recent attempts to conceptualize and price systemic risk as a negative externality point to the persistence of formalism and equilibrium thinking, with its attendant danger of merely incremental innovation: epistemological barriers constrain theoretical progress by excluding observed phenomena that cannot yet be accommodated in mathematical models.
Constitutionalization beyond the nation state can be observed as an evolutionary process that leads in two quite different directions: (1) constitutions evolve in transnational political processes outside the nation state; (2) simultaneously, constitutions evolve outside international politics in global society’s ‘private’ sectors. What, however, is the specifically societal element in societal constitutionalism? This is currently the object of a controversy regarding the subjects of non-state constitutions, their origin, their legitimization, their scope, and their internal structures. This article interprets the controversy as a theme with a number of variations. What is the distinctive ‘compositional principle’ in each particular variation? Which problems become evident in its ‘development’? What are its most valuable ‘motifs’? The article starts with David Sciulli’s theme of societal constitutionalism. Then it presents six variations on Sciulli. In a first group, constitutionalization is perceived as the expansion of a single rationality into all spheres of society. In a second group, the motif of the unity of the constitution can still be heard, despite the essential pluralism of societal constitutionalism. In the final movement, three further variations will then reprise and develop further the most important motifs, in a resumption of the original theme.
Recently there has been an explosion of research on whether the equilibrium real interest rate has declined, an issue with significant implications for monetary policy. A common finding is that the rate has declined. In this paper we provide evidence that contradicts this finding. We show that the perceived decline may well be due to shifts in regulatory policy and monetary policy that have been omitted from the research. In developing the monetary policy implications, it is promising that much of the research approaches the policy problem through the framework of monetary policy rules, as uncertainty in the equilibrium real rate is not a reason to abandon rules in favor of discretion. But the results are still inconclusive and too uncertain to incorporate into policy rules in the ways that have been suggested.
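The link between the equilibrium real rate and policy rules can be made concrete with the textbook Taylor rule, in which the assumed equilibrium real rate shifts the prescribed policy rate one for one. This is a generic illustration with the classic 1993 coefficients, not the specification used in the paper:

```python
# Textbook Taylor-type rule: i = r_star + pi + 0.5*(pi - pi_target) + 0.5*gap.
# The assumed equilibrium real rate r_star enters the prescription directly.
def taylor_rate(r_star, inflation, output_gap, pi_target=2.0):
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Same macro conditions, two views of r_star:
print(taylor_rate(2.0, 1.5, 0.0))  # traditional 2% equilibrium real rate -> 3.25
print(taylor_rate(0.5, 1.5, 0.0))  # a "declined" equilibrium real rate   -> 1.75
```

The 150-basis-point gap between the two prescriptions is why uncertainty about the equilibrium real rate matters so much for rule-based policy.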
The early acquisition of Greek compounds by two monolingual Greek girls aged between 1;8 and 3;0 years is studied in a usage-based theoretical framework. Special importance is attached to the morphological structure of Greek compound types occurring in child speech and child-directed speech. Greek nominal compound formation does not consist in the mere juxtaposition of words or roots, but involves stems as well as a compound marker. Major questions addressed are the transparency of compounds and productive nominal compound formation. Evidence for productivity of nominal compound formation has been found with only one of the two girls. In contrast to other languages, neoclassical nominal compounds by far exceed endocentric subordinative ones tokenwise in Greek child speech and child-directed speech, providing evidence of entrenchment rather than productivity.
In a cross-linguistic comparison it is shown that, in spite of the fact that both Standard Modern Greek and German are rich in nominal compounds, their number is much more limited in Greek than in German child speech. An explanation for this apparent paradox is provided by an onomasiological approach to lexical typology based on a sample list of nominal compounds occurring in German child language and their Greek translational equivalents. It has been found that while use of nominal compounds is common in colloquial German including child-centered situations, it is more typical of Greek formal than colloquial registers.
This study looks at the interrelationship between fiscal policy and safe assets as there is surprisingly little analysis about this beyond fleeting references. The study argues that from a certain point more public debt will not “buy” more safety: countries face a kind of “safe-assets Laffer curve” with a maximum amount of safe assets at some level of indebtedness. The position and “stability” of this curve depend on a number of national and international factors, including the international risk appetite and, as a more recent factor, QE policies by central banks. The study also finds evidence of declining safe assets as reflected in government debt ratings.
Low risk anomalies?
(2016)
This paper shows theoretically and empirically that beta- and volatility-based low risk anomalies are driven by return skewness. The empirical patterns concisely match the predictions of our model which generates skewness of stock returns via default risk. With increasing downside risk, the standard capital asset pricing model increasingly overestimates required equity returns relative to firms' true (skew-adjusted) market risk. Empirically, the profitability of betting against beta/volatility increases with firms' downside risk. Our results suggest that the returns to betting against beta/volatility do not necessarily pose asset pricing puzzles but rather that such strategies collect premia that compensate for skew risk.
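The mechanics of a betting-against-beta sort can be sketched as follows. The data are simulated and the construction is a simplified stand-in for the actual strategy studied, with no leverage adjustment:

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate each stock's CAPM beta against the market, then go long the
# low-beta quintile and short the high-beta quintile (illustrative data only).
T, N = 250, 100
mkt = rng.normal(0.0004, 0.01, T)                    # daily market returns
true_beta = rng.uniform(0.2, 1.8, N)
ret = np.outer(mkt, true_beta) + rng.normal(0, 0.02, (T, N))

beta_hat = np.array([np.cov(ret[:, i], mkt)[0, 1] / mkt.var(ddof=1)
                     for i in range(N)])

order = np.argsort(beta_hat)
low, high = order[:20], order[-20:]
bab = ret[:, low].mean(axis=1) - ret[:, high].mean(axis=1)
print(round(beta_hat[order[0]], 2), round(beta_hat[order[-1]], 2))
```

The paper's point is about interpreting the premium such a long-short portfolio collects: under return skewness it can compensate for downside risk rather than constitute a pricing puzzle.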
Amid increasing regulation, structural changes of the market and Quantitative Easing, as well as extremely low yields, concerns about the market liquidity of the Eurozone sovereign debt markets have been raised. We aim to quantify illiquidity risks, especially those related to liquidity dry-ups, and illiquidity spillover across maturities by examining the reaction to illiquidity shocks at high frequencies in two ways:
a) the regular response to shocks using a variance decomposition and,
b) the response to shocks in the extremes by detecting illiquidity shocks and modeling them as multivariate Hawkes processes.
We find that:
a) market liquidity is more fragile and less predictable when an asset is very illiquid and,
b) the response to shocks in the extremes is structurally different from the regular response.
In 2015, long-term bonds are less liquid while medium-term bonds are liquid, although we observe that in the extremes the medium-term bonds are increasingly driven by illiquidity spillover from the long-term bonds.
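The self-exciting dynamics that Hawkes processes capture, where each illiquidity shock raises the probability of further shocks, can be illustrated with a univariate simulation via Ogata's thinning algorithm. Parameters are purely illustrative, and the paper's multivariate setting adds cross-maturity excitation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Univariate Hawkes process by Ogata's thinning:
# intensity lam(t) = mu + sum_i alpha * exp(-beta * (t - t_i)),
# so past events temporarily raise the arrival rate of new ones.
mu, alpha, beta, horizon = 0.5, 0.6, 1.2, 100.0

events = []
t = 0.0
while t < horizon:
    # Upper bound on the intensity over the next interval (decays after t).
    lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events) + alpha
    t += rng.exponential(1.0 / lam_bar)
    if t >= horizon:
        break
    lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
    if rng.uniform() <= lam_t / lam_bar:   # accept with probability lam/lam_bar
        events.append(t)

print(len(events))
```

With branching ratio alpha/beta = 0.5, each shock spawns half a shock on average, producing the clustering in the extremes that a memoryless Poisson model would miss.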
The calculus LRP is a polymorphically typed call-by-need lambda calculus extended by data constructors, case-expressions, seq-expressions and type abstraction and type application. This report is devoted to the extension LRPw of LRP by scoped sharing decorations. The extension cannot be properly encoded into LRP if improvements are defined w.r.t. the number of lbeta, case, and seq-reductions, which makes it necessary to reconsider the claims and proofs of properties. We show correctness of improvement properties of reduction and transformation rules and also of computation rules for decorations in the extended calculus LRPw. We conjecture that conservativity of the embedding of LRP in LRPw holds.
An improvement is a correct program transformation that optimizes the program, where the criterion is that the number of computation steps until a value is obtained is decreased. This paper investigates improvements in both an untyped and a polymorphically typed call-by-need lambda-calculus with letrec, case, constructors and seq. Besides showing that several local optimizations are improvements, the main result of the paper is a proof that common subexpression elimination is correct and an improvement, which proves a conjecture and thus closes a gap in Moran and Sands' improvement theory. We also prove that several different length measures used for improvement in Moran and Sands' call-by-need calculus and our calculus are equivalent.
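The sharing that call-by-need evaluation provides, and why eliminating a common subexpression reduces the number of computation steps, can be emulated in a strict language with a memoised thunk. This is a loose analogy for intuition, not the calculus itself:

```python
# Call-by-need sharing emulated with a memoised thunk: the suspended
# computation runs at most once, and later forces reuse the cached value.
class Thunk:
    def __init__(self, compute):
        self._compute = compute
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._compute()
            self._done = True
        return self._value

evals = {"n": 0}

def expensive():
    evals["n"] += 1      # count how often the subexpression is evaluated
    return 21

# Without sharing: the common subexpression is computed twice.
unshared = expensive() + expensive()

# With sharing (common subexpression elimination): computed once, forced twice.
shared_thunk = Thunk(expensive)
shared = shared_thunk.force() + shared_thunk.force()

print(unshared, shared, evals["n"])  # both sums are 42; 3 evaluations in total
```

The shared version performs one evaluation where the unshared one performs two, which is the step-count criterion the improvement theory formalises.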
This paper studies the role of the Community Reinvestment Act (CRA) in the recent US housing boom-bust cycle. Using a difference-in-differences matching estimation, I find that the enhancement of CRA enforcement in 1998 caused a 7.7 percentage points increase in annual growth rate of mortgage lending by CRA-regulated banks to CRA-eligible census tracts relative to a group of similar-income CRA-ineligible census tracts within the same state. Financial institutions which are not subject to the CRA, however, do not show any change in their mortgage supply between these two types of census tracts after 1998. I take advantage of this exogenous shift in mortgage supply within an instrumental variable framework to identify the causal effect of mortgage supply on housing prices. I find that every 1 percentage point higher annual growth rate of mortgage supply leads to 0.3 percentage points higher annual growth rate of housing prices. Reduced form regressions show that CRA-eligible neighborhoods experienced higher house price growth during the boom and sharper decline during the bust period. I use placebo tests to confirm that this effect is in fact channeled through the shift in mortgage supply by CRA-regulated banks and not by unobserved demand factors. Furthermore, my results indicate that CRA-induced mortgages went to borrowers with lower FICO scores, carried higher interest rates, and encountered more frequent delinquencies.
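The difference-in-differences logic of the identification strategy reduces, in its simplest 2x2 form, to two subtractions. The numbers below are hypothetical and only illustrate the mechanics, they are not taken from the paper:

```python
import numpy as np

# 2x2 difference-in-differences:
# (treated post - treated pre) - (control post - control pre).
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (np.mean(treat_post) - np.mean(treat_pre)) - \
           (np.mean(ctrl_post) - np.mean(ctrl_pre))

# Hypothetical mortgage-growth rates (%) in eligible vs ineligible tracts.
eligible_pre, eligible_post = [5.0, 6.0, 5.5], [12.0, 13.0, 12.5]
ineligible_pre, ineligible_post = [5.0, 6.0, 5.5], [6.0, 7.0, 6.5]

print(did(eligible_pre, eligible_post, ineligible_pre, ineligible_post))
```

The control group's change nets out common trends, so the remaining difference is attributed to the treatment, here the enhanced CRA enforcement.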
Studies employing micro price data suggest that price dispersion is larger between regions in different countries than between regions in the same country. To investigate the strength of this border effect, deviations from the law of one price are used in most studies to provide statistical evidence on the effect of borders on price dispersion. I propose an alternative measure of the economic costs of borders which has an explicit welfare-theoretic foundation. Employing a unique micro price data set from households in Belgium, Germany and the Netherlands I provide evidence on the economic importance of price differences for households. I find that price dispersion within countries has only small economic importance, but that price dispersion between Belgium and Germany (and Belgium and the Netherlands) has considerable economic importance.
The grammar of global law
(2016)
Legal grammar is understood as the conceptual and linguistic foundation on which legal decisions rest – law’s meta-structure, its argumentative techniques and its systematicity. The essay distinguishes between two ways of thinking about this grammar. The first way of thinking appeals to a grammar as a stabilizing factor, maintaining the coherence of the law. The second way of thinking highlights the asymmetries of power within this structure and perceives legal grammar as the medium carrying the ideological commitments of the law. As the essay ultimately argues, both perspectives react differently to the challenges of globalization that the law is confronted with. While the debate on the grammar(s) of global law is one place where future political order is negotiated, the outcome of the debate is largely open.
Within the framework of the Transboundary Waters Assessment Programme (TWAP), initiated by the Global Environment Facility (GEF), we contributed to a comprehensive baseline assessment of transboundary aquifers (TBAs) by quantifying different groundwater indicators using the global water resources and water use model WaterGAP 2.2. All indicators were computed under current (2010) and projected conditions in 2030 and 2050 for 91 selected TBAs larger than 20,000 km2 and for each nation’s share of the TBAs (TBA-CU: country unit). TBA outlines were provided by the International Groundwater Resources Assessment Centre (IGRAC). The set of indicators comprises groundwater recharge, groundwater depletion, per-capita groundwater recharge, dependency on groundwater, population density, and groundwater development stress (groundwater withdrawals to groundwater recharge). Only the latter four indicators were projected to 2030 and 2050. Current-state indicators were quantified using the WATCH Forcing Data climate dataset, while projections were based on five climate scenarios that were computed by five global climate models for the high-emissions scenario RCP 8.5. Water use projections were based on the Shared Socio-economic Pathway SSP2 developed within ISI-MIP. Furthermore, two scenarios of future irrigated areas were explored. For individual water use sectors, the fraction of groundwater abstraction was assumed to remain at the current level.
According to our assessment, aquifers with the highest current groundwater depletion rates worldwide are not transboundary. Exceptions are the Neogene Aquifer System (Syria) with 53 mm/yr between 2000 and 2009 and the Indus River Plain aquifer (India) with 28 mm/yr. For current conditions, we identified 20 out of 258 TBA-CUs suffering from medium to very high groundwater development stress, which are located in the Middle East and North Africa region, in South Asia, China, and the USA. Considering projections, ensemble means of percent changes or percentage-point changes relative to current conditions were determined. Per-capita groundwater recharge is projected to decrease in 80-90% of all TBA-CUs until 2030/2050. Due to the strongly varying projections of the global climate models, we applied a worst-case scenario approach to define future hotspots of groundwater development stress, taking into account the strongest computed increase until either 2030 or 2050 among all scenarios and individual GCMs. Based on this approach, the number of TBA-CUs under at least medium groundwater development stress increases from 20 to 58, comprising all hotspots under current conditions. New hotspots are projected to develop mainly in Sub-Saharan Africa, China, and Mexico.
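The groundwater development stress indicator, withdrawals over recharge, lends itself to a small worked example. The class thresholds below are illustrative assumptions, not the TWAP classification itself:

```python
# Groundwater development stress: ratio of withdrawals to recharge.
def development_stress(withdrawals, recharge):
    return withdrawals / recharge

# Hypothetical stress classes (thresholds are illustrative assumptions).
def stress_class(ratio):
    if ratio < 0.2:
        return "low"
    if ratio < 0.5:
        return "medium"
    if ratio < 1.0:
        return "high"
    return "very high"

# Hypothetical TBA country units: (withdrawals, recharge) in km3/yr.
units = {"A": (0.5, 5.0), "B": (1.8, 4.0), "C": (6.0, 4.5)}
for name, (w, r) in units.items():
    print(name, stress_class(development_stress(w, r)))
```

A ratio above one, as for unit "C", means abstraction exceeds recharge, i.e. the aquifer is being depleted.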
We analyze global data about electricity generation and document that the risk exposure of a firm’s owners and its workers depends on competitors’ ability or willingness to change their output in response to productivity shocks. Competitor inflexibility appears to be a risk factor: the sales of firms with more inflexible competitors respond more strongly to aggregate sales shocks. As a consequence, competitor inflexibility also affects the stability of firms’ total wage- and dividend-payments. Firms with relatively flexible competitors appear to smoothen both wages and dividends, but an increase in competitor inflexibility is associated with less dividend-smoothing and more wage-smoothing. Our evidence supports the idea that labor productivity risk associated with competitor inflexibility should be borne by firms’ shareholders, rather than by their workers.
This paper uses recent legislation in Austria to establish a link between sovereign reputation and yield spreads. In 2009, Hypo Alpe Adria International, a bank previously co-owned by the regional government of Carinthia, had been nationalized by Austria’s central government in order to avoid a default triggering multi-billion Euro local government guarantees. In 2015, special legislation retroactively introduced collective action clauses allowing a haircut on both the bonds and the guarantees while avoiding formal default. We document that legislative and administrative action designed to partly abrogate the guarantees resulted in a loss of reputation, leading to higher yield spreads for sovereign debt. Our analysis of covered bonds uncovers an increase in yield spreads on the secondary market and a deterioration of primary market conditions.
The article, which summarizes key findings of my German book ‘Die Gemeinfreiheit. Begriff, Funktion, Dogmatik’ (‘The Public Domain: Theory, Function, Doctrine’), asks whether there are any provisions or principles under German and EU law that protect the public domain from interference by the legislature, courts and private parties. In order to answer this question, it is necessary to step out of the intellectual property (IP) system and to analyze this body of law from the outside, and – even more important – to develop a positive legal conception of the public domain as such. By giving the public domain a proper doctrinal place in the legal system, the structural asymmetry between heavily theorized and protected IP rights on the one hand and a neglected public domain on the other is countered. The overarching normative purpose is to develop a framework for a balanced IP system, which can only be achieved if the public domain forms an integral part of the overall regulation of information.
On 14 September 2016, the European Commission proposed a Directive on “copyright in the Digital Single Market”. This proposal includes an Article 11 on the “protection of press publications concerning digital uses”, according to which “Member States shall provide publishers of press publications with the rights provided for in Article 2 and Article 3(2) of Directive 2001/29/EC for the digital use of their press publications.” Relying on the experiences and debates surrounding the German and Spanish laws in this area, this study presents a legal analysis of the proposal for an EU related right for press publishers (RRPP). After a brief overview over the general limits of the EU competence to introduce such a new related right, the study critically examines the purpose of an RRPP. On this basis, the next section distinguishes three versions of an RRPP with regard to its subject-matter and scope, and considers the practical and legal implications of these alternatives, in particular having regard to fundamental rights.
Mobilizations in defence of ‘companion animals’ have become major sites of contestation in Chinese society in recent years. They often reject the existing ambiguity between the use of these animals as pets and as meat, demanding unambiguous respect for and protection of dogs. However, in a society where inequalities are as significant as in China, where the level of poverty, sickness, and environmental and industrial tragedies appears overwhelming, one may ask how pets’ destinies have become such a symbolic focus and source of occasional fury – for both Chinese and foreign audiences. Taking this question seriously, this article aims to examine such mobilizations in China – demanding the protection of dogs – as a starting point to theoretically unwrap the more general problem of how the perception of certain beings as ‘weak’ and as deserving the protection of society is socially constructed, and what the related choices imply. I argue that to better understand these mobilizations to protect dogs, we should not separate the focus of the calls for protection from the social web of relationships and oppositions in which they are entrenched.
This paper explores the impact of immigrants on the imports, exports and productivity of service-producing firms in the U.K. Immigrants may substitute for imported intermediate inputs (offshore production) and they may impact the productivity of the firm as well as its export behavior. The first effect can be understood as the re-assignment of offshore productive tasks to immigrant workers. The second can be seen as a productivity or cost cutting effect due to immigration, and the third as the effect of immigrants on specific bilateral trade costs. We test the predictions of our model using differences in immigrant inflows across U.K. labor markets, instrumented with an enclave-based instrument that distinguishes between aggregate and bilateral immigration, as well as immigrant diversity. We find that immigrants increase overall productivity in service-producing firms, revealing a cost cutting impact on these firms. Immigrants also reduce the extent of country-specific offshoring, consistent with a reallocation of tasks and, finally, they increase country-specific exports, implying an important role in reducing communication and trade costs for services.
Under ordinary circumstances, the fiscal implications of central bank policies tend to be seen as relatively minor and escape close scrutiny. The global financial crisis of 2008, however, demanded an extraordinary response by central banks which brought to light the immense power of central bank balance sheet policies as well as their major fiscal implications. Once the zero lower bound on interest rates is reached, expanding a central bank’s balance sheet becomes the central instrument for providing additional monetary policy accommodation. However, with interest rates near zero, the line separating fiscal and monetary policy is blurred. Furthermore, discretionary decisions associated with asset purchases and liquidity provision, as well as with lender-of-last-resort operations benefiting private entities, can have major distributional effects that are ordinarily associated with fiscal policy. In the euro area, discretionary central bank decisions can have immense distributional effects across member states. However, decisions of this nature are incompatible with the role of unelected officials in democratic societies. Drawing on the response to the crisis by the Federal Reserve and the ECB, this paper explores the tensions arising from central bank balance sheet policies and addresses pertinent questions about the governance and accountability of independent central banks in a democratic society.
Prestige and loan pricing
(2016)
We find that prestigious companies pay lower spreads and upfront fees on their loans despite the fact that prestige does not predict default risk over the life of the loan. Using survey data on firm-level prestige, we show that a one standard deviation increase in prestige reduces loan spreads by 6.18% per year and upfront fees by 22.86%. We identify causal effects (i) using fraud by industry peers as an instrument for borrower prestige and (ii) exploiting a regression discontinuity around rank 100 of the prestige survey. Banks that lend to prestigious firms attract more business afterwards compared to otherwise similar institutions. Moreover, the effect of prestige on upfront fees is particularly strong for new bank relationships. Our findings suggest that prestigious firms receive cheaper funding because the associated lending relationship helps banks establish valuable credentials they use to compete for future borrowers.
Literature, measured
(2016)
There comes a moment, in digital humanities talks, when someone raises a hand and says: "Ok. Interesting. But is it really new?" Good question... And let's leave aside the obvious lines of defense, such as "but the field is still only at its beginning!", or "and traditional literary criticism, is that always new?" All true, and all irrelevant; because the digital humanities have presented themselves as a radical break with the past, and must therefore produce evidence of such a break. And the evidence, let's be frank, is not strong. What is there, moreover, comes in a variety of forms, beginning with the slightly paradoxical fact that, in a new approach, not everything has to be new. When “Network Theory, Plot Analysis” pointed out, in passing, that a network of Hamlet had Hamlet at its center, the New York Times gleefully mentioned the passage as an unmistakable sign of stupidity. Maybe; but the point, of course, was not to present Hamlet’s centrality as a surprise; it was exactly the opposite: had the new approach not found Hamlet at the center of the play, its plausibility would have disintegrated. Before using network theory for dramatic analysis, I had to test it, and prove that it corroborated the main results of previous research.
In a field study with more than 1,500 customers of an online broker, we test what happens when investors receive repeated feedback on their investment success in a monthly securities account report. The reports show investors' returns over the previous year, their costs, their current level of risk, and their portfolio diversification. We find that receiving a report results in investors trading less, diversifying more, and earning higher risk-adjusted returns. The results are robust to controlling for potential play-money accounts and for changes in report design. We also find that investors who are less likely to subscribe benefit equally from the report.
…towards their best performing products, and also extend the range of products sold to that market. We develop a theoretical model of multi-product firms and derive the specific demand and cost conditions needed to generate these product-mix reallocations. Our theoretical model highlights how the increased competition from demand shocks in export markets - and the induced product-mix reallocations - induces productivity changes within the firm. We then empirically test for this connection between the demand shocks and the productivity of multi-product firms exporting to those destinations. We find that the effects of those demand shocks on productivity are substantial and explain an important share of aggregate productivity fluctuations for French manufacturing.
We designed and fielded an experimental module in the 2014 HRS which seeks to measure older persons’ willingness to voluntarily defer claiming of Social Security benefits. In addition, we evaluate the stated willingness of older individuals to work longer, depending on the Social Security incentives offered to delay claiming their benefits. Our project extends previous work by analyzing the results from our HRS module and comparing them with findings from other data sources, which drew on much smaller samples of older persons. We show that half of the respondents would delay claiming if no work requirement were in place under the status quo, and only slightly fewer, 46 percent, would do so with a work requirement. We also asked respondents how large a lump sum they would need with or without a work requirement. Without a work requirement, the average amount needed to induce delayed claiming was about $60,400; when part-time work was required, the average rose to $66,700. This implies a low utility value of the leisure foregone: only $6,300, or about 10 percent of older households’ income.
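The implied value of leisure reported in the abstract above is simply the difference between the two stated average lump sums; a back-of-envelope check:

```python
# Figures quoted in the abstract: average lump sums needed to induce delayed claiming
lump_no_work = 60_400    # no work requirement attached
lump_with_work = 66_700  # part-time work required

# The extra compensation demanded when work is required is the implied
# utility value of the leisure foregone
leisure_value = lump_with_work - lump_no_work
print(leisure_value)  # 6300
```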
„Corporate groups are a fact of life“.1 This was the starting point for a group of renowned European experts who delivered a report on a possible Directive on corporate group law in 2000.2 We all know that no such Directive has been issued.3 However, these days a fresh group of eminent experts has started, among other things, to develop an initiative „on groups of companies“.4 One reason why a European regulation is taking its time might be the enormous national differences in dealing with group situations. While some countries, notably the UK,5 rely on general company law to deal with corporate groups, others provide highly detailed rules specifically for groups of companies.6 German law provides an example of the latter. Do we need a law of corporate groups? Most countries regulate one or another aspect of group law.7
This is probably most common for tax and accounting law. Insolvency law will often take group situations into account, and the same is true for labour law. Regulatory oversight of financial institutions or insurance companies usually includes a group dimension. Competition law necessarily does so as well. However, in what follows, when we speak about „group law“ we will focus on regulation more specifically tuned to genuine questions of company law, such as the protection of minority shareholders or creditors, the standards for managerial behavior, and the „enabling“ function of legal structures.
The old boy network: the impact of professional networks on remuneration in top executive jobs
(2016)
We investigate the impact of social networks on earnings using a dataset of over 20,000 senior executives of European and US firms. The size of an individual's network of influential former colleagues has a large positive association with current remuneration. An individual at the 75th percentile in the distribution of connections could expect to have a salary nearly 20 per cent higher than an otherwise identical individual at the median. We use a placebo technique to show that our estimates reflect the causal impact of connections and not merely unobserved individual characteristics. Networks are more weakly associated with women's remuneration than with men's. This mainly reflects an interaction between unobserved individual characteristics and firm recruitment policies. The kinds of firm that best identify and advance talented women are less likely to give them access to influential networks than are firms that do the same for the most talented men.
Common systemic risk measures focus on the instantaneous occurrence of triggering and systemic events. However, systemic events may also occur with a time-lag to the triggering event. To study this contagion period and the resulting persistence of institutions' systemic risk, we develop and employ the Conditional Shortfall Probability (CoSP), which is the likelihood that a systemic market event occurs with a specific time-lag to the triggering event. Based on CoSP we propose two aggregate systemic risk measures, the Aggregate Excess CoSP and the CoSP-weighted time-lag, which reflect, respectively, an institution's systemic risk aggregated over time and the average time-lag of its triggering event. Our empirical results show that 15% of the financial companies in our sample are significantly systemically important with respect to the financial sector, while 27% are significantly systemically important with respect to the American non-financial sector. Still, the aggregate systemic risk of systemically important institutions is larger with respect to the financial market than with respect to non-financial markets. Moreover, the aggregate systemic risk of insurance companies is similar to that of banks, while insurers are also exposed to the largest aggregate systemic risk within the financial sector.
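In notation of our own choosing (the paper's exact definition may differ in detail), the CoSP at lag τ can be sketched as a lagged tail-conditional probability:

```latex
\mathrm{CoSP}^{\,i \to m}_{\tau}
  \;=\;
\Pr\!\left( r^{m}_{t+\tau} \le \operatorname{VaR}^{m}
  \,\middle|\,
  r^{i}_{t} \le \operatorname{VaR}^{i} \right)
```

i.e. the probability that the market return $r^{m}$ breaches its Value-at-Risk $\tau$ periods after institution $i$'s return breaches its own; aggregating over $\tau$, and weighting the lags by CoSP, then yields the two aggregate measures named above.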
Intrinsic motivation for honesty is perceived as an important determinant of the large and persistent variation in cheating behavior. However, little is known about its actual role, due to the challenges of obtaining precise measures of motivation for honesty as well as field outcomes on cheating. We fill these gaps using the unique setting of informal milk markets in India. A novel behavioral experiment, which combines a standard die-roll task with Bluetooth technology, is used to measure milkmen's motivation for honesty at both the extensive and the intensive margin. We then buy milk from the same milkmen and show that cheating in the field, measured by the amount of water added to milk, increases significantly with a milkman’s degree of dishonesty. Additional analyses show that the conventional binary measure of motivation for honesty suffers from measurement error, resulting in an underestimation of this association.
Based on a unique data set of driving behavior we find direct evidence that private information has significant effects on contract choice and risk in automobile insurance. The number of car rides and the relative distance driven on weekends are significant risk factors. While the number of car rides and average speeding are negatively related to the level of liability coverage, the number of car rides and the relative distance driven at night are positively related to the level of first-party coverage. These results indicate multiple and counteracting effects of private information based on risk preferences and driving behavior.
This note discusses the basic economics of central clearing for derivatives and the need for proper regulation, supervision, and resolution of central counterparty clearing houses (CCPs). New regulation in the U.S. and in Europe renders the involvement of a central counterparty mandatory for trading in standardized OTC derivatives and sets higher capital and collateral requirements for non-centrally cleared derivatives.
From a macrofinance perspective, CCPs embody a trade-off between reduced contagion risk in the financial industry and the creation of a significant systemic risk. So far, however, regulation and supervision of CCPs are fragmented and limited, and ignore two important aspects: the risk of consolidation of CCPs on the one hand and the competition among CCPs on the other. i) As the economies of scale of CCP operations in risk and cost reduction can be large, they provide an argument in favor of consolidation, leading at the extreme to a monopoly CCP that poses the ultimate default risk: a systemic risk for the entire financial sector. Since such a systemic risk event would require a government bailout, there is a public policy issue here. ii) As long as no monopoly CCP exists, there is competition for market share among existing CCPs. Such competition may undermine the stability of the entire financial system because it induces “predatory margining”: a reduction of margin requirements to increase market share.
The policy lesson from our analysis emphasizes the importance of a single authority supervising all competing CCPs, as well as of a specific regulation and resolution framework for CCPs. Our general recommendations can be applied to the current situation in Europe and to the proposed merger between Deutsche Börse and the London Stock Exchange.
In the wake of the recent financial crisis, significant regulatory actions have been taken aimed at limiting the risks emanating from trading activities in banks' business models. Prominent reform proposals are the Volcker Rule in the U.S., the Vickers Report in the UK, and, based on the Liikanen proposal, the Barnier proposal in the EU. A major element of these reforms is to separate “classical” commercial banking activities from securities trading activities, notably from proprietary trading. While the reforms are at different stages of implementation, there is a strong ongoing discussion about the economic consequences to be expected. The goal of this paper is to look at the alternative approaches of these reform proposals and to assess their likely consequences for bank business models, risk-taking, and financial stability. Our conclusions can be summarized as follows: First, the focus on a prohibition of only proprietary trading, as envisaged in the current EU proposal, is inadequate. It does not necessarily reduce risk-taking and it is likely to crowd out desired trading activities, thereby negatively affecting financial stability. Second, there is a potentially better solution, in terms of welfare consequences, for limiting excessive trading risk at banks: separating trading into legally distinct or ring-fenced entities within the existing banking organizations. This kind of separation limits cross-subsidies between banking and proprietary trading and diminishes contagion risk, while still allowing for synergies across banking, non-proprietary trading and proprietary trading.
In the wake of the recent financial crisis, significant regulatory actions have been taken aimed at limiting risks emanating from banks’ trading activities. The goal of this paper is to look at the alternative reforms in the US, the UK and the EU, specifically with respect to the role of proprietary trading. Our conclusions can be summarized as follows: First, the focus on a prohibition of proprietary trading, as reflected in the Volcker Rule in the US and in the current proposal of the European Commission (Barnier proposal), is inadequate. It does not necessarily reduce risk-taking and it is likely to crowd out desired trading activities, thereby possibly affecting financial stability negatively. Second, trading separation into legally distinct or ring-fenced entities within the existing banking organizations, as suggested under the Vickers Report for the UK and the Liikanen proposal for the EU, is a more effective solution. Separation limits cross-subsidies between banking and proprietary trading and diminishes contagion risk, while still allowing for synergies and risk management across banking, non-proprietary trading and proprietary trading.
We show that the net corporate payout yield predicts both the stock market index and house prices and that the log home rent-price ratio predicts both house prices and labor income growth. We incorporate the predictability in a rich life-cycle model of household decisions involving consumption of both perishable goods and housing services, stochastic and unspanned labor income, stochastic house prices, home renting and owning, stock investments, and portfolio constraints. We find that households can significantly improve their welfare by optimally conditioning decisions on the predictors. For a modestly risk-averse agent with a 35-year working period and a 15-year retirement period, the present value of the higher average life-time consumption amounts to roughly $179,000 (assuming both an initial wealth and an initial annual income of $20,000), and the certainty equivalent gain is around 5.5% of total wealth (financial wealth plus human capital). Furthermore, every cohort of agents in our model would have benefited from applying predictor-conditional strategies along the realized time series over our 1960-2010 data period.
This working paper is based on a lecture given at the Summer School “Multiple Inequalities in the Age of Transnationalization”, June 23-27, 2014, at Goethe University Frankfurt. In it, I explore the linkages between sexuality and migration and aim to show that, instead of deeming them a narrow subfield of migration studies, thinking through these linkages has much wider implications for different fields, including post- and decolonial queer studies, the study of race and sexuality, the study of citizenship and state projects of inclusion/exclusion, and work that attempts to de-center the predominant knowledge production focused on the Global North.
In this chapter, I examine the relationship between customary international law and general principles of law. Both are distinct sources of public international law (Art. 38(1)(b) and (c) of the Statute of the International Court of Justice). In a first step, I analyze the different meanings of principles as a “source” of international law. Second, I consider different approaches to principles as a norm type in legal theory. Third, I discuss attempts in international legal doctrine to address conceptual issues by either unifying general principles as a source with the source of customary international law or by equating general principles as a source and as a norm type. Finally, I propose that the delimitation between customary international law and general principles of law as sources of international law should follow the distinction between situations dominated by factual reciprocity (which justify customary norms) and situations where such factual reciprocity is absent (which justify general principles). The jurisgenerative processes leading to the emergence of general principles of international law are processes of changing identities and argumentative self-entrapment.
As the financial crisis gathered momentum in 2007, the United States Federal Reserve brought its policy interest rate aggressively down from 5¼ percent in September 2007 to virtually zero by December 2008. In contrast, although facing the same economic and financial stress, the European Central Bank’s first action was to raise its policy rate in July 2008. The ECB began lowering rates only in October 2008, once a near global financial meltdown left it with no choice. Thereafter, the ECB lowered rates slowly, interrupted by more hikes in April and July 2011. We use the “abnormal” increase in stock prices — the rise in the stock price index that was not predicted by the trend in the previous 20 days — to measure the market’s reaction to the announcement of the interest rate cuts. Stock markets responded favorably to the Fed interest rate cuts but, on average, they reacted negatively when the ECB cut its policy rate. The Fed’s early and aggressive rate cuts established its intention to provide significant monetary stimulus. That helped renew market optimism, consistent with the earlier economic recovery. In contrast, the ECB started building its shelter only after the storm had started. Markets interpreted even the stimulative ECB actions either as “too little, too late” or as signs of bad news. We conclude that by recognizing the extraordinary nature of the circumstances, the Fed’s response not only achieved better economic outcomes but also enhanced its credibility. The ECB could have acted similarly and stayed true to its mandate. The poorer economic outcomes will damage the ECB’s long-term credibility.
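The "abnormal" stock-price increase described above, the rise in the index not predicted by the trend of the previous 20 days, can be sketched as follows; the linear-trend extrapolation here is our own assumption, and the paper's exact event-study specification may differ:

```python
import numpy as np

def abnormal_move(prices, window=20):
    """Change in the index on the last day that is NOT explained by a
    linear trend fitted to the preceding `window` days."""
    history = np.asarray(prices[-(window + 1):-1], dtype=float)
    slope, intercept = np.polyfit(np.arange(window), history, 1)
    predicted = intercept + slope * window  # extrapolate the trend one day ahead
    return prices[-1] - predicted
```

With a 20-day window, an announcement-day jump above the extrapolated trend registers as a positive abnormal move, while a fall below it registers as negative.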
Shortcomings revealed by experimental and theoretical researchers such as Allais (1953), Rabin (2000), and Rabin and Thaler (2001), which called the classical expected utility paradigm of von Neumann and Morgenstern (1947) into question, led to the proposition of alternative and generalized utility functions intended to improve descriptive accuracy. Perhaps the best known of these alternative preference theories, and one that has attracted much popularity among economists, is so-called Prospect Theory (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992). Its distinctive features, governed by its set of risk parameters such as risk sensitivity, loss aversion and decision weights, stimulated a series of economic and financial models that build on the parameter values estimated by Tversky and Kahneman (1992) to analyze and explain various empirical phenomena for which expected utility doesn't seem to offer a satisfying rationale. In this paper, after providing a brief overview of the relevant literature, we take a closer look at one of those papers, the trading model of Vlcek and Hens (2011), and analyze its implications for Prospect Theory parameters using an adapted maximum likelihood approach on a dataset of 656 individual investors from a large German discount brokerage firm. We find evidence that investors in our dataset are moderately averse to large losses and display high risk sensitivity, supporting the main assumptions of Prospect Theory.
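For reference, the Tversky and Kahneman (1992) functional forms, with their published median parameter estimates, that many of these follow-up models build on can be sketched as follows (gain-side probability weighting only; the defaults are the 1992 estimates, not the values estimated in this paper):

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: concave over gains,
    convex and steeper over losses (loss aversion parameter lam)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def pt_weight(p, gamma=0.61):
    """Inverse-S probability weighting for gains: small probabilities
    are overweighted, moderate-to-large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
```

A simple prospect paying x with probability p is then valued as `pt_weight(p) * pt_value(x)`, rather than as `p * u(x)` under expected utility.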
This paper addresses whether and to what extent econometric methods used in experimental studies can be adapted and applied to financial data to detect the best-fitting preference model. To address the research question, we implement a frequently used nonlinear probit model in the style of Hey and Orme (1994) and base our analysis on a simulation study. In detail, we simulate trading sequences for a set of utility models and try to identify, by maximum likelihood, the underlying utility model and the parameterization used to generate these sequences. We find that for a very broad classification of utility models this method provides acceptable outcomes. Yet, a closer look at the preference parameters reveals several caveats that come along with typical issues attached to financial data, and some of these issues seem to drive our results. In particular, deviations are attributable to effects stemming from multicollinearity and coherent under-identification problems, where some of these detrimental effects can be captured up to a certain degree by adjusting the error term specification. Furthermore, additional uncertainty stemming from changing market parameter estimates affects the precision of our estimates for risk preferences and cannot simply be remedied by using a higher standard deviation of the error term or a different assumption regarding its stochastic process. In particular, if the variance of the error term becomes large, we detect a tendency to identify SPT as the utility model providing the best fit to simulated trading sequences. We also find that a frequent issue, namely serial correlation of the residuals, does not seem to be significant. However, we detect a tendency to prefer nesting models over nested utility models, which is particularly prevalent if RDU and EXPO utility models are estimated along with EUT and CRRA utility models.
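The estimation idea, in the style of Hey and Orme (1994), can be sketched in miniature: choices between a sure amount and a binary lottery are simulated under CRRA expected utility with additive Fechner noise, and the risk parameter is recovered by maximum likelihood (grid search here for simplicity). The menu of choice tasks, the noise level, and the restriction to a known error variance are illustrative assumptions, not the paper's actual setup:

```python
import math, random

def crra(x, r):
    """CRRA utility; log utility at r == 1."""
    return math.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(1)
SIGMA, R_TRUE = 0.3, 0.5

# hypothetical choice tasks: sure amount c vs lottery paying h w.p. p, else l
tasks = [(random.uniform(2, 8), random.random(),
          random.uniform(8, 12), random.uniform(0.5, 2)) for _ in range(2000)]

def eu_diff(task, r):
    c, p, h, l = task
    return p * crra(h, r) + (1 - p) * crra(l, r) - crra(c, r)

# simulate lottery choices with additive Fechner noise on the EU difference
choices = [eu_diff(t, R_TRUE) + random.gauss(0, SIGMA) > 0 for t in tasks]

def loglik(r):
    """Probit log-likelihood of the observed choices under CRRA parameter r."""
    ll = 0.0
    for t, chose_lottery in zip(tasks, choices):
        p_lot = min(max(phi(eu_diff(t, r) / SIGMA), 1e-12), 1 - 1e-12)
        ll += math.log(p_lot if chose_lottery else 1 - p_lot)
    return ll

# grid-search maximum likelihood over the risk-aversion parameter
r_hat = max((i * 0.02 for i in range(76)), key=loglik)
```

With the model correctly specified and 2,000 simulated choices, the grid maximizer lands close to the true parameter; the identification problems discussed in the abstract arise precisely when this clean setup is replaced by noisy field data and competing, partially nested utility specifications.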