Refine
Year of publication
- 2013 (109)
Document Type
- Working Paper (109)
Language
- English (109)
Has Fulltext
- yes (109)
Is part of the Bibliography
- no (109)
Keywords
- banking union (4)
- Contagion (3)
- European Banking Authority (EBA) (3)
- European Central Bank (ECB) (3)
- euro area (3)
- leverage (3)
- monetary policy (3)
- oil price (3)
- political economy of bureaucracy (3)
- prudential supervision (3)
Institute
- Center for Financial Studies (CFS) (68)
- Wirtschaftswissenschaften (60)
- House of Finance (HoF) (43)
- Sustainable Architecture for Finance in Europe (SAFE) (14)
- Institute for Monetary and Financial Stability (IMFS) (11)
- Rechtswissenschaft (8)
- Informatik (5)
- LOEWE-Schwerpunkt Außergerichtliche und gerichtliche Konfliktlösung (4)
- Exzellenzcluster Die Herausbildung normativer Ordnungen (2)
Insurance guarantee schemes aim to protect policyholders from the costs of insurer insolvencies. However, guarantee schemes can also reduce insurers’ incentives to conduct appropriate risk management. We investigate stock insurers’ risk-shifting behavior under two alternative ways of financing insurance guarantee schemes: a flat-rate premium assessment versus a risk-based premium assessment. We identify which guarantee scheme maximizes policyholders’ welfare, measured by their expected utility. We find that the risk-based insurance guarantee scheme can mitigate the insurer’s risk-shifting behavior only if a substantial premium loading is present. Furthermore, when this mitigating effect occurs, the risk-based guarantee scheme is superior to the flat-rate scheme in improving policyholders’ welfare.
“The concept of length”, “the concept is synonymous with”, “the concept is nothing more than”, “the proper definition of a concept” ... Forget programs and visions; the operational approach refers specifically to concepts, and in a very specific way: it describes the process whereby concepts are transformed into a series of operations—which, in their turn, allow us to measure all sorts of objects. Operationalizing means building a bridge from concepts to measurement, and then to the world. In our case: from the concepts of literary theory, through some form of quantification, to literary texts.
We would study not style as such, but style 'at the scale of the sentence': the lowest level, it seemed, at which style as a distinct phenomenon became visible. Implicitly, we were defining style as a combination of smaller linguistic units, which made it, in consequence, particularly sensitive to changes in scale—from words to clauses to whole sentences.
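The bridge from concept to measurement described above can be sketched in a few lines of code. The following toy operationalization treats "style at the scale of the sentence" as a distribution of sentence lengths; the naive splitting rule and the sample text are illustrative assumptions, not the authors' actual procedure.

```python
import re
from statistics import mean

def sentence_lengths(text):
    """Operationalize 'style at the scale of the sentence' as a
    measurable quantity: the word count of each sentence."""
    # Naive sentence splitter on terminal punctuation -- an assumption
    # for illustration only, not a linguistically robust segmenter.
    sentences = [s for s in re.split(r'[.!?]+\s*', text) if s]
    return [len(s.split()) for s in sentences]

sample = "Call me Ishmael. Some years ago I went to sea. It was cold."
lengths = sentence_lengths(sample)
print(lengths)        # word counts per sentence: [3, 7, 3]
print(mean(lengths))  # one crude stylistic measurement of the passage
```

Any such measurement is only as good as the chain of operations behind it, which is precisely the point of the operational approach.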
The amino acid content (alanine/arginine, glutamine, proline, taurine) of five different lichen species (Evernia prunastri, Hypogymnia physodes, Parmelia sulcata, Physcia adscendens, Xanthoria parietina) from different parts of Germany and NW France with different levels of atmospheric nitrogen deposition was determined.
The study revealed that the so-called nitrophytic lichen species (Physcia adscendens, Xanthoria parietina) did not have higher amino acid contents than the other species. The amino acid contents of five different lichen species from the same tree varied without regard to the nitrophily of the species. The amino acid contents of the lichen species studied from Bonn are four to twelve times higher than in the same species from the Vosges Mountains, France. The amount of amino acids in nitrophytic species (Xanthoria parietina, Physcia adscendens) from a region with a high load of atmospheric nitrogen (35 kg/y/ha) is on average five times higher than in the same species from a region with low nitrogen immission (16 kg/y/ha).
It can be concluded that the amino acid contents of lichens reflect the atmospheric nitrogen load and that the amino acid content of so-called nitrophytic lichen species is not higher than in other species; lichens act as passive samplers that take up the available nitrogen but make no use of it, storing it instead as amino acids. On the other hand, the conductivity of the cell liquid (as a measure of the osmotic pressure) of nitrophytic lichen species is higher than that of non-nitrophytic species. Thus the “nitrophily” of these species is presumably based not on a capacity for higher nitrogen uptake but on osmotic tolerance of the salt effects of nitrogen compounds. Within the nitrophytic species, the osmotic values of Phaeophyscia orbicularis are twice as high as those of Physcia adscendens, which is explained by the higher tolerance of Phaeophyscia to dry deposition. The higher osmotic values of nitrophilous lichen species lead to the conclusion that they are also drought-resistant species and occur in regions with low humidity, where they are more competitive than other lichen species.
The IMFS Interdisciplinary Study 2/2013 contains speeches by Michael Burda (Humboldt University), Benoît Coeuré (European Central Bank), Stefan Gerlach (Bank of Ireland and former IMFS Professor), Patrick Honohan (Bank of Ireland), Sabine Lautenschläger (Deutsche Bundesbank), Athanasios Orphanides (MIT) as well as Helmut Siekmann and Volker Wieland.
This study contains articles based on speeches and presentations at the 14th CFS-IMFS Conference "The ECB and its Watchers" on June 15, 2012 by Mario Draghi, John Vickers, Peter Praet, Lucrezia Reichlin, Vitor Gaspar, Lucio Pench and Stefan Gerlach and a post-conference outlook by Helmut Siekmann and Volker Wieland.
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda calculus with letrec extended by data constructors, case-expressions and Haskell's seq operator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations.
The proof is by a fully abstract and surjective transfer of the contextual approximation into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR.
The translation from the call-by-need letrec calculus into the extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy; its correctness is shown by exploiting infinite trees, which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions using multi-fixpoint combinators; its correctness is shown syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less complex definition of reduction than LR.
Motivated by our experience in analyzing properties of translations between programming languages with observational semantics, this paper clarifies the notions, the relevant questions, and the methods, constructs a general framework, and provides several tools for proving various correctness properties of translations like adequacy and full abstractness. The presented framework can directly be applied to the observational equivalences derived from the operational semantics of programming calculi, and also to other situations, and thus has a wide range of applications.
Our motivation is the question whether the lazy lambda calculus, a pure lambda calculus with the leftmost outermost rewriting strategy, considered under observational semantics, or extensions thereof, are an adequate model for semantic equivalences in real-world purely functional programming languages, in particular for a pure core language of Haskell. We explore several extensions of the lazy lambda calculus: addition of a seq-operator, addition of data constructors and case-expressions, and their combination, focusing on conservativity of these extensions. In addition to untyped calculi, we study their monomorphically and polymorphically typed versions. For most of the extensions we obtain non-conservativity which we prove by providing counterexamples. However, we prove conservativity of the extension by data constructors and case in the monomorphically typed scenario.
This note proposes a new set-up for the fund backing the Single Resolution Mechanism (SRM). The proposed fund is a Multi-Tier Resolution Fund (MTRF), restricting the joint and several supranational liability to a limited range of losses, bounded by national liability at the upper and the lower end. The layers are, in ascending order: a national fund (first losses), a European fund (second losses), the national budget (third losses), the ESM (fourth losses, as a backup for sovereigns). The system works like a reinsurance scheme, providing clear limits to European-level joint liability, and therefore confining moral hazard. At the same time, it allows for some degree of risk sharing, which is important for financial stability if shocks to the financial system are exogenous (e.g., of a supranational macroeconomic nature). The text has four parts. Section A describes the operation of the Multi-Tier Resolution Fund, assuming the fund capital to be fully paid-in (“Steady State”). Section B deals with the build-up phase of the fund capital (“Build up”). Section C discusses how the proposal deals with the apparent incentive conflicts. The final Section D summarizes open questions which need further thought (“Open Questions”).
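The layered liability described above is structurally a loss waterfall: each tier absorbs losses up to its capacity before the next tier is touched. A minimal sketch, with tier names taken from the note but capacities that are purely illustrative:

```python
def allocate_losses(loss, tiers):
    """Allocate a loss across ordered tiers, each with a finite capacity.
    Tiers are exhausted in sequence, like the ascending layers of the
    proposed Multi-Tier Resolution Fund."""
    allocation = {}
    for name, capacity in tiers:
        take = min(loss, capacity)
        allocation[name] = take
        loss -= take
    allocation["uncovered"] = loss  # losses exceeding all tiers combined
    return allocation

# Capacities are invented for illustration; the note does not specify sizes.
tiers = [("national fund", 10), ("European fund", 20),
         ("national budget", 30), ("ESM backstop", 40)]
print(allocate_losses(25, tiers))
# a 25-unit loss exhausts the national fund (10) and takes 15 from the European fund
```

The structure makes the moral-hazard containment visible: joint European liability is only ever the middle slice of the waterfall, capped on both sides by national layers.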
This policy letter provides an overview of the strengths, weaknesses, risks and opportunities of the upcoming comprehensive risk assessment, a euro area-wide evaluation of bank balance sheets and business models. If carried out properly, the 2014 comprehensive assessment will lead the euro area into a new era of banking supervision. Policy makers in euro area countries are now under severe pressure to define a credible backstop framework for banks. This framework, as the author argues, needs to be a broad, quasi-European system of mutually reinforcing backstops.
June 4th, 2013 marks the formal launch of the third generation of the Equator Principles (EP III) and the tenth anniversary of the EPs – enough reasons for evaluating the EPs initiative from an economic ethics and business ethics perspective. In particular, this essay deals with the following questions: What are the EPs and where are they going? What has been achieved so far by the EPs? What are the strengths and weaknesses of the EPs? Which necessary reform steps need to be adopted in order to further strengthen the EPs framework? Can the EPs be regarded as a role model in the field of sustainable finance and CSR? The paper is structured as follows: The first chapter defines the term EPs and introduces the keywords related to the EPs framework. The second chapter gives a brief overview of the history of the EPs. The third chapter discusses the Equator Principles Association, the governing, administering, and managing institution behind the EPs. The fourth chapter summarizes the main features and characteristics of the newly released third generation of the EPs. The fifth chapter critically evaluates the EP III from an economic ethics and business ethics perspective. The paper concludes with a summary of the main findings.
The financial services industry worldwide has undergone major transformation since the late 1970s. Technological advancements in information processing and communication facilitated financial innovation and narrowed traditional distinctions in financial products and services, allowing them to become close substitutes for one another. The deregulation process in many major economies prior to the recent financial crisis blurred the traditional lines of demarcation between the distinct types of financial institutions, exposing those firms to new competitors in their traditional business areas, while the increasing globalization of financial markets fostered the provision of financial services across national borders. Against this backdrop, a trend toward consolidation across financial sectors as well as across national borders increasingly manifested itself since the 1990s. The developments in the financial markets further intensified competition in the financial services industry and induced financial institutions to redefine their business strategies in search of higher profitability and growth opportunities. Consolidation across distinct financial sectors, i.e. financial conglomeration, in particular became a popular business strategy in light of the potential operational synergies and diversification benefits it can offer. This trend spurred the growth of diversified financial groups, the so-called financial conglomerates, which commingle banking, securities, and insurance activities under one corporate umbrella. Still today, large, complex financial conglomerates are represented among major players in the financial markets worldwide, whose activities span not only the traditional boundaries of the banking, securities, and insurance sectors but also national borders.
Notwithstanding the economic benefits that conglomeration may produce as a business strategy, the emergence of financial conglomerates also exacerbated existing and created new prudential risks in the financial system. The mixing of a variety of financial products and services under one corporate roof and the generally large and complex group structure of financial conglomerates expose such organizations to specific group risks such as contagion and arbitrage risk as well as systemic risk. When realized, these risks may not only cause the failure of an entire financial group but threaten the stability of the financial system as a whole, as evidenced by the events during the recent financial crisis of 2007-2009...
Following the experience of the global financial crisis, central banks have been asked to undertake unprecedented responsibilities. Governments and the public appear to have high expectations that monetary policy can provide solutions to problems that do not necessarily fit in the realm of traditional monetary policy. This paper examines three broad public policy goals that may overburden monetary policy: full employment; fiscal sustainability; and financial stability. While central banks have a crucial position in public policy, the appropriate policy mix also involves other institutions, and overreliance on monetary policy to achieve these goals is bound to disappoint. Central bank policies that facilitate postponement of needed policy actions by governments may also have longer-term adverse consequences that could outweigh more immediate benefits. Overburdening monetary policy may eventually diminish and compromise the independence and credibility of the central bank, thereby reducing its effectiveness to preserve price stability and contribute to crisis management.
This paper tests whether an increase in insured deposits causes banks to become more risky. We use variation introduced by the U.S. Emergency Economic Stabilization Act in October 2008, which increased the deposit insurance coverage from $100,000 to $250,000 per depositor and bank. For some banks, the amount of insured deposits increased significantly; for others, it was a minor change. Our analysis shows that the more affected banks increase their investments in risky commercial real estate loans and become more risky relative to unaffected banks following the change. This effect is most distinct for affected banks that are low capitalized.
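The identification strategy described above compares the change in risk-taking of strongly affected banks with that of barely affected banks around the October 2008 coverage increase. The paper's analysis is regression-based with controls; the core difference-in-differences logic can be sketched with toy numbers (all values invented for illustration):

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences estimate: the average change for affected
    (treated) banks minus the average change for unaffected (control) banks."""
    avg = lambda xs: sum(xs) / len(xs)
    return (avg(treated_post) - avg(treated_pre)) - \
           (avg(control_post) - avg(control_pre))

# Hypothetical risk measures (e.g., share of risky CRE loans) per bank,
# before and after the deposit insurance coverage increase.
treated_pre, treated_post = [0.10, 0.12, 0.11], [0.16, 0.18, 0.17]
control_pre, control_post = [0.10, 0.11, 0.12], [0.11, 0.12, 0.13]
print(diff_in_diff(treated_pre, treated_post, control_pre, control_post))
# positive estimate: affected banks increased risk by more than unaffected ones
```

A positive estimate is what the abstract's finding corresponds to; the real study additionally conditions on bank characteristics such as capitalization.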
We introduce a new measure of systemic risk, the change in the conditional joint probability of default, which assesses the effects of the interdependence in the financial system on the general default risk of sovereign debtors. We apply our measure to examine the fragility of the European financial system during the ongoing sovereign debt crisis. Our analysis documents an increase in systemic risk contributions in the euro area during the post-Lehman global recession and especially after the beginning of the euro area sovereign debt crisis. We also find a considerable potential for cascade effects from small to large euro area sovereigns. When we investigate the effect of sovereign default on the European Union banking system, we find that bigger banks, banks with riskier activities, with poor asset quality, and funding and liquidity constraints tend to be more vulnerable to a sovereign default. Surprisingly, an increase in leverage does not seem to influence systemic vulnerability.
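The measure introduced above compares a joint default probability with the same probability conditional on one sovereign being in distress. The paper's estimator is model-based; the following toy Monte Carlo version, with an invented common-factor default mechanism, only illustrates the arithmetic of the comparison:

```python
import random

random.seed(1)

# Simulate correlated distress indicators for three sovereigns.
# A common shock induces interdependence -- purely illustrative numbers.
def draw():
    common = random.random() < 0.10          # systemic shock hits everyone
    return [common or (random.random() < 0.05) for _ in range(3)]

draws = [draw() for _ in range(100_000)]

joint = sum(all(d) for d in draws) / len(draws)              # P(all in default)
cond = [d for d in draws if d[0]]                            # sovereign 0 distressed
joint_cond = sum(all(d) for d in cond) / len(cond)           # P(all | 0 distressed)

delta = joint_cond - joint   # change in the conditional joint probability of default
print(joint, joint_cond, delta)
```

A large positive `delta` signals that distress in one sovereign carries substantial information about system-wide default risk, which is the cascade potential the abstract refers to.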
We show that market discipline, defined as the extent to which firm specific risk characteristics are reflected in market prices, eroded during the recent financial crisis in 2008. We design a novel test of changes in market discipline based on the relation between firm specific risk characteristics and debt-to-equity hedge ratios. We find that market discipline already weakened after the rescue of Bear Stearns before disappearing almost entirely after the failure of Lehman Brothers. The effect is stronger for investment banks and large financial institutions, while there is no comparable effect for non-financial firms.
Sovereign bond risk premiums
(2013)
Credit risk has become an important factor driving government bond returns. We therefore introduce an asset pricing model which exploits information contained in both forward interest rates and forward CDS spreads. Our empirical analysis covers euro-zone countries with German government bonds as credit risk-free assets. We construct a market factor from the first three principal components of the German forward curve as well as a common and a country-specific credit factor from the principal components of the forward CDS curves. We find that predictability of risk premiums of sovereign euro-zone bonds improves substantially if the market factor is augmented by a common and an orthogonal country-specific credit factor. While the common credit factor is significant for most countries in the sample, the country-specific factor is significant mainly for peripheral euro-zone countries. Finally, we find that during the current crisis period, market and credit risk premiums of government bonds are negative over long subintervals, a finding that we attribute to the presence of financial repression in euro-zone countries.
This paper takes a novel approach to estimating bankruptcy costs by inference from market prices of equity and put options using a dynamic structural model of capital structure. This approach avoids the selection bias of looking at firms in or near default and therefore permits theories of ex ante capital structure determination to be tested. We identify significant cross sectional variation in bankruptcy costs across industries and relate these to specific firm characteristics. We find that asset volatility and growth options have significant positive impacts, while tangibility and size have negative impacts. Our bankruptcy cost variable estimate significantly negatively impacts leverage ratios. This negative impact is in addition to that of other firm characteristics such as asset intangibility and asset volatility. The results provide strong support for the tradeoff theory of capital structure.
We study to what extent firms spread out their debt maturity dates across time, which we call "granularity of corporate debt." We consider the role of debt granularity using a simple model in which a firm's inability to roll over expiring debt causes inefficiencies, such as costly asset sales or underinvestment. Since multiple small asset sales are less costly than a single large one, firms may diversify debt rollovers across maturity dates. We construct granularity measures using data on corporate bond issuers for the 1991-2011 period and establish a number of novel findings. First, there is substantial variation in granularity in that many firms have either very concentrated or highly dispersed maturity structures. Second, our model's predictions are consistent with observed variation in granularity. Corporate debt maturities are more dispersed for larger and more mature firms, for firms with better investment opportunities, with higher leverage ratios, and with lower levels of current cash flows. We also show that during the recent financial crisis especially firms with valuable investment opportunities implemented more dispersed maturity structures. Finally, granularity plays an important role for bond issuances, because we document that newly issued corporate bond maturities complement pre-existing bond maturity profiles.
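Granularity, as described above, captures how evenly debt is spread across maturity dates. The paper builds its measures from corporate bond data; as an illustrative stand-in (not necessarily the paper's exact definition), one natural dispersion measure is the inverse Herfindahl index of the amounts due at each maturity date:

```python
def granularity(amounts_by_maturity):
    """Inverse Herfindahl index of debt amounts across maturity dates:
    1.0 for fully concentrated debt, up to n for n equal-sized maturities."""
    total = sum(amounts_by_maturity)
    shares = [a / total for a in amounts_by_maturity]
    return 1.0 / sum(s * s for s in shares)

print(granularity([100]))             # 1.0 -- a single maturity, fully concentrated
print(granularity([25, 25, 25, 25]))  # 4.0 -- evenly spread over four dates
print(granularity([70, 10, 10, 10]))  # between the two extremes
```

Under this measure, a firm diversifying its rollover risk across many dates scores high, while a firm with one large bullet maturity scores close to one.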
We consider an economy where individuals privately choose effort and trade competitively priced securities that pay off with effort-determined probability. We show that if insurance against a negative shock is sufficiently incomplete, then standard functional form restrictions ensure that individual objective functions are optimized by an effort and insurance combination that is unique and satisfies first- and second-order conditions. Modeling insurance incompleteness in terms of costly production of private insurance services, we characterize the constrained inefficiency arising in general equilibrium from competitive pricing of nonexclusive financial contracts.
We propose a new classification of consumption goods into nondurable goods, durable goods and a new class which we call “memorable” goods. A good is memorable if a consumer can draw current utility from its past consumption experience through memory. We construct a novel consumption-savings model in which a consumer has a well-defined preference ordering over both nondurable goods and memorable goods. Memorable goods consumption differs from nondurable goods consumption in that current memorable goods consumption may also impact future utility through the accumulation process of the stock of memory. In our model, households optimally choose a lumpy profile of memorable goods consumption even in a frictionless world. Using Consumer Expenditure Survey data, we then document levels and volatilities of different groups of consumption goods expenditures, as well as their expenditure patterns, and show that the expenditure patterns on memorable goods indeed differ significantly from those on nondurable and durable goods. Finally, we empirically evaluate our model’s predictions with respect to the welfare cost of consumption fluctuations and conduct an excess-sensitivity test of the consumption response to predictable income changes. We find that (i) the welfare cost of household-level consumption fluctuations may be overstated by 1.7 percentage points (11.9 percent as opposed to 13.6 percent of permanent consumption) if memorable goods are not appropriately accounted for; (ii) the finding of excess sensitivity of consumption documented in important papers of the literature might be entirely due to the presence of memorable goods.
There is mounting evidence that retail investors make predictable, costly investment mistakes, including underinvestment, naïve diversification, and payment of excessive fund fees. Over the past thirty-five years, however, participant-directed 401(k) plans have largely replaced professionally managed pension plans, requiring unsophisticated retail investors to navigate the financial markets themselves. Policy-makers have struggled with regulatory interventions designed to improve the quality of investment decisions without a clear understanding of the reasons for investor mistakes. Absent such an understanding, it is difficult to design effective regulatory responses. This article offers a first step in understanding the investor decision-making process. We use an internet-based experiment to disentangle possible explanations for inefficient investment decisions. The experiment employs a simplified construct of an employee’s allocation among the options in a retirement plan coupled with technology that enables us to collect data on the specific information that investors choose to view. In addition to collecting general information about the process by which investors choose among mutual fund options, we employ an experimental manipulation to test the effect of an instruction on the importance of mutual fund fees. Pairing this instruction with simplified fee disclosure allows us to distinguish between motivation-limits and cognition-limits as explanations for the widespread findings that investors ignore fees in their investment decisions. Our results offer partial but limited grounds for optimism. On the one hand, within our simplified experimental construct, our subjects allocated more money, on average, to higher-value funds. Furthermore, subjects who received the fees instruction paid closer attention to mutual fund fees and allocated their investments into funds with lower fees. 
On the other hand, the effects of even a blunt fees instruction were limited, and investors were unable to identify and avoid clearly inferior fund options. In addition, our results suggest that excessive, naïve diversification strategies are driving many investment decisions. Although our findings are preliminary, they suggest valuable avenues for future research and important implications for regulation of retail investing.
The substantial variation in the real price of oil since 2003 has renewed interest in the question of how to forecast monthly and quarterly oil prices. There has also been increased interest in the link between financial markets and oil markets, including the question of whether financial market information helps forecast the real price of oil in physical markets. An obvious advantage of financial data in forecasting oil prices is their availability in real time on a daily or weekly basis. We investigate whether mixed-frequency models may be used to take advantage of these rich data sets. We show that, among a range of alternative high-frequency predictors, changes in U.S. crude oil inventories in particular produce substantial and statistically significant real-time improvements in forecast accuracy. The preferred MIDAS model reduces the MSPE by as much as 16 percent compared with the no-change forecast and has statistically significant directional accuracy as high as 82 percent. This MIDAS forecast is also more accurate than a mixed-frequency real-time VAR forecast, but not systematically more accurate than the corresponding forecast based on monthly inventories. We conclude that typically not much is lost by ignoring high-frequency financial data in forecasting the monthly real price of oil.
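The headline comparison above is a ratio of mean squared prediction errors against the no-change (random walk) benchmark. A minimal sketch of that accounting, with invented toy numbers rather than the paper's oil-price data:

```python
def mspe(forecasts, actuals):
    """Mean squared prediction error of a forecast series."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical oil prices and two competing forecasts, for illustration only.
actuals      = [100, 104, 103, 108, 110]
model_fc     = [ 99, 103, 104, 107, 109]  # a MIDAS-style model forecast
no_change_fc = [ 98, 100, 104, 103, 108]  # previous month's price as forecast

ratio = mspe(model_fc, actuals) / mspe(no_change_fc, actuals)
print(ratio)  # a ratio below 1 means the model beats the no-change benchmark
```

In the paper's terms, "reduces the MSPE by as much as 16 percent" corresponds to a ratio of about 0.84 on the real data; the toy numbers here merely show how the ratio is formed.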
Model case procedures have some fundamentals in common with collective redress in civil law countries. This is particularly true in the field of investor protection which is highly regulated and marked by resulting enforcement failures, which led the German legislator to the enactment of the KapMuG and its recent amendment which highlight exemplary elements of model case procedure. A survey of the ongoing activities of the European Union in the area of collective redress and of its repercussions on the member state level therefore forms a suitable basis for the following analysis of the 2012 amendment of the KapMuG. It clearly brings into focus a shift from sector-specific regulation with an emphasis on the cross-border aspect of protecting consumers towards a “coherent approach” strengthening the enforcement of EU law. As a result, regulatory policy and collective redress are two sides of the same coin today. With respect to the KapMuG such a development brings about some tension between its aim to aggregate small individual claims as efficiently as possible and the dominant role of individual procedural rights in German civil procedure. This conflict can be illustrated by some specific rules of the KapMuG: its scope of application, the three-tier procedure of a model case procedure, the newly introduced notification of claims and the new opt-out settlement under the amended §§ 17-19.
We propose the realized systemic risk beta as a measure for financial companies’ contribution to systemic risk given network interdependence between firms’ tail risk exposures. Conditional on statistically pre-identified network spillover effects and market as well as balance sheet information, we define the realized systemic risk beta as the total time-varying marginal effect of a firm’s Value-at-risk (VaR) on the system’s VaR. Statistical inference reveals a multitude of relevant risk spillover channels and determines companies’ systemic importance in the U.S. financial system. Our approach can be used to monitor companies’ systemic importance allowing for a transparent macroprudential supervision.
We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
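The key transformation above maps non-negative trading residuals into a Gaussian domain before modeling their dependence. As a rank-based stand-in for the parametric transform used with a Gaussian copula (an illustrative simplification, not the paper's estimator), one can apply an empirical probability integral transform followed by the inverse normal CDF:

```python
from statistics import NormalDist

def to_gaussian(residuals):
    """Map positive-valued residuals into the Gaussian domain via their
    empirical probability integral transform, then the inverse normal CDF."""
    n = len(residuals)
    ordered = sorted(residuals)
    nd = NormalDist()
    # Rank / (n + 1) keeps empirical CDF values strictly inside (0, 1);
    # assumes no ties among the residuals, as in this toy example.
    return [nd.inv_cdf((ordered.index(x) + 1) / (n + 1)) for x in residuals]

res = [0.4, 1.9, 0.9, 3.2, 1.1]   # toy multiplicative-error-model residuals
z = to_gaussian(res)
print(z)  # rank-preserving, roughly standard-normal scores
```

Once in the Gaussian domain, time-varying variances and correlations of the transformed series can be modeled with standard multivariate tools, which is the role the DCC-GARCH specification plays in the paper.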
Does it pay to invest in art? A selection-corrected returns perspective: [Draft October 15, 2013]
(2013)
This paper shows the importance of correcting for sample selection when investing in illiquid assets with endogenous trading. Using a large sample of 20,538 paintings that were sold repeatedly at auction between 1972 and 2010, we find that paintings with higher price appreciation are more likely to trade. This strongly biases estimates of returns. The selection-corrected average annual index return is 6.5 percent, down from 10 percent for traditional uncorrected repeat sales regressions, and Sharpe Ratios drop from 0.24 to 0.04. From a pure financial perspective, passive index investing in paintings is not a viable investment strategy once selection bias is accounted for. Our results have important implications for other illiquid asset classes that trade endogenously.
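The bias described above arises mechanically once the probability of a repeat sale rises with appreciation. The following toy simulation (the selection mechanism and all numbers are invented, and this is not the paper's correction method) reproduces the direction of the effect:

```python
import random

random.seed(42)

# Each painting has a true annual return; higher-appreciation works are
# more likely to come back to auction and thus enter the observed sample.
true_returns = [random.gauss(0.065, 0.10) for _ in range(50_000)]

def resold(r):
    # Toy mechanism: resale probability increases with the return, clipped.
    return random.random() < min(0.9, max(0.05, 0.5 + 2.0 * r))

observed = [r for r in true_returns if resold(r)]

true_mean = sum(true_returns) / len(true_returns)
naive_mean = sum(observed) / len(observed)
print(true_mean, naive_mean)  # the naive repeat-sales mean exceeds the true mean
```

Averaging only over observed resales overstates returns, which is the same direction as the paper's drop from 10 percent (uncorrected) to 6.5 percent (selection-corrected).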
The 2011 European short sale ban on financial stocks: a cure or a curse? : [version 31 July 2013]
(2013)
Did the August 2011 European short sale bans on financial stocks accomplish their goals? In order to answer this question, we use stock options’ implied volatility skews to proxy for investors’ risk aversion. We find that on ban announcement day, risk aversion levels rose for all stocks but more so for the banned financial stocks. The banned stocks’ volatility skews remained elevated during the ban but dropped for the other unbanned stocks. We show that it is the imposition of the ban itself that led to the increase in risk aversion rather than other causes such as information flow, options trading volumes, or stock-specific factors. Substitution effects were minimal, as banned stocks’ put trading volumes and put-call ratios declined during the ban. We argue that although the ban succeeded in curbing further selling pressure on financial stocks by redirecting trading activity towards index options, this result came at the cost of increased risk aversion and some degree of market failure.
We show that the presence of high frequency trading (HFT) has significantly mitigated the frequency and severity of end-of-day price dislocation, counter to recent concerns expressed in the media. The effect of HFT is more pronounced on days when end-of-day price dislocation is more likely to be the result of market manipulation, namely on option expiry dates and at the end of the month. Moreover, the effect of HFT is more pronounced than the role of trading rules, surveillance, enforcement and legal conditions in curtailing the frequency and severity of end-of-day price dislocation. We show our findings are robust to different proxies for the start of HFT, including trade size, cancellation of orders, and co-location.
We examine the impact of stock exchange trading rules and surveillance on the frequency and severity of suspected insider trading cases in 22 stock exchanges around the world over the period January 2003 through June 2011. Using new indices for market manipulation, insider trading, and broker-agency conflict based on the specific provisions of the trading rules of each stock exchange, along with surveillance to detect non-compliance with such rules, we show that more detailed exchange trading rules and surveillance over time and across markets significantly reduce the number of cases, but increase the profits per case.
We use responses to survey questions in the 2010 Italian Survey of Household Income and Wealth that ask consumers how much of an unexpected transitory income change they would consume. We find that the marginal propensity to consume (MPC) is 48 percent on average, and that there is substantial heterogeneity in the distribution. We find that households with low cash-on-hand exhibit a much higher MPC than affluent households, which is in agreement with models with precautionary savings where income risk plays an important role. The results have important implications for the evaluation of fiscal policy, and for predicting household responses to tax reforms and redistributive policies. In particular, we find that a debt-financed increase in transfers of 1 percent of national disposable income targeted to the bottom decile of the cash-on-hand distribution would increase aggregate consumption by 0.82 percent. Furthermore, we find that redistributing 1 percent of national disposable income from the top to the bottom decile of the income distribution would boost aggregate consumption by 0.33 percent.
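The fiscal-policy arithmetic behind such targeted-transfer experiments can be illustrated with a back-of-envelope sketch; the decile MPCs and the consumption-to-income ratio below are hypothetical stand-ins, not the survey's estimates:

```python
# Illustrative back-of-envelope calculation, not the paper's estimates:
# a transfer of 1% of disposable income Y to the bottom decile raises
# consumption by transfer * MPC_bottom; redistributing from the top to
# the bottom decile raises it by transfer * (MPC_bottom - MPC_top).
Y = 100.0                         # national disposable income (normalized)
C = 80.0                          # aggregate consumption (hypothetical ratio)
transfer = 0.01 * Y               # 1 percent of disposable income
mpc_bottom, mpc_top = 0.7, 0.3    # hypothetical decile MPCs

debt_financed = transfer * mpc_bottom / C * 100
redistribution = transfer * (mpc_bottom - mpc_top) / C * 100
print(f"debt-financed transfer: +{debt_financed:.2f}% of consumption")
print(f"redistribution:         +{redistribution:.2f}% of consumption")
```

The key mechanism is the MPC gap across deciles: a pure redistribution is neutral for aggregate income but expansionary for aggregate consumption whenever the recipients' MPC exceeds the payers'.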
Prior research suggests that those who rely on intuition rather than effortful reasoning when making decisions are less averse to risk and ambiguity. The evidence is largely correlational, however, leaving open the question of the direction of causality. In this paper, we present experimental evidence of causation running from reliance on intuition to risk and ambiguity preferences. We directly manipulate participants’ predilection to rely on intuition and find that enhancing reliance on intuition lowers the probability of being ambiguity averse by 30 percentage points and increases risk tolerance by about 30 percent in the experimental sub-population where we would a priori expect the manipulation to be successful (males).
Investment in financial literacy, social security and portfolio choice : [version May 21, 2013]
(2013)
We present an intertemporal portfolio choice model where individuals invest in financial literacy, save, allocate their wealth between a safe and a risky asset, and receive a pension when they retire. Financial literacy affects the excess return and the cost of stock market participation. Since literacy depreciates over time and has a cost in terms of current consumption, investors simultaneously choose how much to save, the portfolio allocation, and the optimal investment in literacy. The latter depends on the household's resources and preference parameters, on how much financial literacy affects the returns on risky assets and the stock market participation cost, and on the returns on social security wealth. The model implies one should observe a positive correlation between stock market participation (and risky asset share, conditional on participation) and financial literacy, and a negative correlation between the generosity of the social security system and financial literacy. The model also implies that the stock of financial literacy accumulated early in life is positively correlated with the individual's wealth and portfolio allocations later in life. Using microeconomic cross-country data, we find support for these predictions.
The U.S. Energy Information Administration (EIA) regularly publishes monthly and quarterly forecasts of the price of crude oil for horizons up to two years, which are widely used by practitioners. Traditionally, such out-of-sample forecasts have been largely judgmental, making them difficult to replicate and justify. An alternative is the use of real-time econometric oil price forecasting models. We investigate the merits of constructing combinations of six such models. Forecast combinations have received little attention in the oil price forecasting literature to date. We demonstrate that over the last 20 years suitably constructed real-time forecast combinations would have been systematically more accurate than the no-change forecast at horizons up to 6 quarters or 18 months. Reductions in the mean squared prediction error (MSPE) may be as high as 12% and directional accuracy as high as 72%. The gains in accuracy are robust over time. In contrast, the EIA oil price forecasts not only tend to be less accurate than no-change forecasts, but are much less accurate than our preferred forecast combination. Moreover, including EIA forecasts in the forecast combination systematically lowers the accuracy of the combination forecast. We conclude that suitably constructed forecast combinations should replace traditional judgmental forecasts of the price of oil.
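The accuracy comparison underlying such results can be sketched as follows, with two simulated stand-in forecasts replacing the six real-time econometric models and an equal-weight average as one simple combining rule (all series, parameters, and the combining weight are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
# Hypothetical real oil price: a random walk with drift
price = np.cumsum(rng.normal(0.5, 1.0, T)) + 50.0

# Two stylized model forecasts standing in for real econometric models
f1 = price[:-1] + 0.5                             # captures the drift
f2 = price[:-1] + rng.normal(0.5, 0.5, T - 1)     # noisier drift estimate
combo = 0.5 * (f1 + f2)                           # equal-weight combination
no_change = price[:-1]                            # random-walk benchmark

actual = price[1:]
mspe = lambda f: np.mean((actual - f) ** 2)
# A ratio below 1 means the combination beats the no-change forecast
ratio = mspe(combo) / mspe(no_change)
print(round(ratio, 3))
```

Averaging diversifies individual model errors, which is why combinations tend to be more robust than any single model, the same intuition the abstract exploits against both the no-change benchmark and the judgmental EIA forecasts.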
Are product spreads useful for forecasting? An empirical evaluation of the Verleger hypothesis
(2013)
Notwithstanding a resurgence in research on out-of-sample forecasts of the price of oil in recent years, there is one important approach to forecasting the real price of oil which has not been studied systematically to date. This approach is based on the premise that demand for crude oil derives from the demand for refined products such as gasoline or heating oil. Oil industry analysts such as Philip Verleger and financial analysts widely believe that there is predictive power in the product spread, defined as the difference between suitably weighted refined product market prices and the price of crude oil. Our objective is to evaluate this proposition. We derive from first principles a number of alternative forecasting model specifications involving product spreads and compare these models to the no-change forecast of the real price of oil. We show that not all product spread models are useful for out-of-sample forecasting, but some models are, even at horizons between one and two years. The most accurate model is a time-varying parameter model of gasoline and heating oil spot spreads that allows the marginal product market to change over time. We document MSPE reductions as high as 20% and directional accuracy as high as 63% at the two-year horizon, making product spread models a good complement to forecasting models based on economic fundamentals, which work best at short horizons.
U.S. retail food price increases in recent years may seem large in nominal terms, but after adjusting for inflation have been quite modest even after the change in U.S. biofuel policies in 2006. In contrast, increases in the real prices of corn, soybeans, wheat and rice received by U.S. farmers have been more substantial and can be linked in part to increases in the real price of oil. That link, however, appears largely driven by common macroeconomic determinants of the prices of oil and agricultural commodities rather than the pass-through from higher oil prices. We show that there is no evidence that corn ethanol mandates have created a tight link between oil and agricultural markets. Rather, increases in food commodity prices not associated with changes in global real activity appear to reflect a wide range of idiosyncratic shocks ranging from changes in biofuel policies to poor harvests. Increases in agricultural commodity prices in turn contribute little to U.S. retail food price increases, because of the small cost share of agricultural products in food prices. There is no evidence that oil price shocks have caused more than a negligible increase in retail food prices in recent years. Nor is there evidence for the prevailing wisdom that oil-price-driven increases in the cost of food processing, packaging, transportation and distribution are responsible for higher retail food prices. Finally, there is no evidence that oil-market specific events or for that matter U.S. biofuel policies help explain the evolution of the real price of rice, which is perhaps the single most important food commodity for many developing countries.
We investigate the theoretical impact of including two empirically grounded insights in a dynamic life cycle portfolio choice model. The first is to recognize that, when managing their own financial wealth, investors incur opportunity costs in terms of current and future human capital accumulation, particularly if human capital is acquired via learning by doing. The second is that we incorporate age-varying efficiency patterns in financial decision-making. Both enhancements produce inactivity in portfolio adjustment patterns consistent with empirical evidence. We also analyze individuals’ optimal choice between self-managing their wealth versus delegating the task to a financial advisor. Delegation proves most valuable to the young and the old. Our calibrated model quantifies welfare gains from including investment time and money costs, as well as delegation, in a life cycle setting.
Household decisions are profoundly shaped by a complex set of financial options due to Social Security rules determining retirement, spousal, and survivor benefits, along with benefit adjustments that vary with the age at which these are claimed. These rules influence optimal household asset allocation, insurance, and work decisions, given life cycle demographic shocks such as marriage, divorce, and children. Our model generates a wealth profile and a low and stable equity fraction consistent with empirical evidence. We also confirm predictions that wives will claim retirement benefits earlier than husbands, while life insurance is mainly purchased by younger men. Our policy simulations imply that eliminating survivor benefits would sharply reduce claiming differences by sex while dramatically increasing men’s life insurance purchases.
This paper employs stochastic simulations of the New Area-Wide Model—a microfounded open-economy model developed at the ECB—to investigate the consequences of the zero lower bound on nominal interest rates for the evolution of risks to price stability in the euro area during the recent financial crisis. Using a formal measure of the balance of risks, which is derived from policy-makers’ preferences about inflation outcomes, we first show that downside risks to price stability were considerably greater than upside risks during the first half of 2009, followed by a gradual rebalancing of these risks until mid-2011 and a renewed deterioration thereafter. We find that the lower bound has induced a noticeable downward bias in the risk balance throughout our evaluation period because of the implied amplification of deflation risks. We then illustrate that, with nominal interest rates close to zero, forward guidance in the form of a time-based conditional commitment to keep interest rates low for longer can be successful in mitigating downside risks to price stability. However, we find that the provision of time-based forward guidance may give rise to upside risks over the medium term if extended too far into the future. By contrast, time-based forward guidance complemented with a threshold condition concerning tolerable future inflation can provide insurance against the materialisation of such upside risks.
Empirical evidence suggests that asset returns correlate more strongly in bear markets than conventional correlation estimates imply. We propose a method for determining complete tail correlation matrices based on Value-at-Risk (VaR) estimates. We demonstrate how to obtain more efficient tail-correlation estimates by use of overidentification strategies and how to guarantee positive semidefiniteness, a property required for valid risk aggregation and Markowitz-type portfolio optimization. An empirical application to a 30-asset universe illustrates the practical applicability and relevance of the approach in portfolio management.
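To see why positive semidefiniteness must be imposed at all, note that pairwise tail-correlation estimates need not assemble into a valid correlation matrix. One generic repair, shown here on a hypothetical 3-asset matrix, is eigenvalue clipping followed by rescaling the diagonal; this is a standard projection technique, not necessarily the authors' own method:

```python
import numpy as np

# A hypothetical pairwise tail-correlation estimate that is not PSD:
# each entry is a valid correlation, but the matrix as a whole is not.
R = np.array([[1.0, 0.9, 0.2],
              [0.9, 1.0, 0.9],
              [0.2, 0.9, 1.0]])

def clip_to_psd(R):
    """Project onto the PSD cone by zeroing negative eigenvalues,
    then rescale so the diagonal is exactly 1 again."""
    w, V = np.linalg.eigh(R)
    R_psd = V @ np.diag(np.clip(w, 0.0, None)) @ V.T
    d = np.sqrt(np.diag(R_psd))
    return R_psd / np.outer(d, d)

R_fixed = clip_to_psd(R)
print(np.linalg.eigvalsh(R).min() < 0)            # original is indefinite
print(np.linalg.eigvalsh(R_fixed).min() >= -1e-9)  # repaired matrix is PSD
```

An indefinite "correlation" matrix would imply portfolios with negative variance, which is why PSD is a prerequisite for the risk aggregation and Markowitz-type optimization the abstract mentions.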
We analyze the equilibrium in a two-tree (sector) economy with two regimes. The output of each tree is driven by a jump-diffusion process, and a downward jump in one sector of the economy can (but need not) trigger a shift to a regime where the likelihood of future jumps is generally higher. Furthermore, the true regime is unobservable, so that the representative Epstein-Zin investor has to extract the probability of being in a certain regime from the data. These two channels help us to match the stylized facts of countercyclical and excessive return volatilities and correlations between sectors. Moreover, the model reproduces the predictability of stock returns in the data without generating consumption growth predictability. The uncertainty about the state also reduces the slope of the term structure of equity. We document that heterogeneity between the two sectors with respect to shock propagation risk can lead to highly persistent aggregate price-dividend ratios. Finally, the possibility of jumps in one sector triggering higher overall jump probabilities boosts jump risk premia while uncertainty about the regime is the reason for sizeable diffusive risk premia.
This study presents an empirical analysis of capital and liability management in eight cases of bank restructurings and resolutions from eight different European countries. It can be read as a companion piece to an earlier study by the author covering the specific bank restructuring programs of Greece, Spain and Cyprus during 2012/13.
The study portrays for each case the timeline between the initial credit event and the (last) restructuring. It then discusses the capital and liability management activity before the restructuring and the restructuring itself, attempts to calibrate the extent of creditor participation as well as the expected loss to the government, and engages in a counterfactual discussion of what a least-cost restructuring approach could have looked like.
Four of the eight cases are resolutions, i.e. the original bank is unwound (Anglo Irish Bank, Amagerbanken, Dexia, Laiki), while the four other banks have de facto or de jure become nationalized and are awaiting re-privatization after the restructuring (Deutsche Pfandbriefbank/Hypo Real Estate, Bankia, SNS Reaal, Alpha Bank). The case selection follows considerations of their model character for the European bank restructuring and resolution policy discussion while straddling both the U.S. (2007 - 2010) and the European (2010 - ) legs of the financial crisis, which each saw very different policy responses....
We provide an assessment of the determinants of the risk premia paid by non-financial corporations on long-term bonds. By looking at 5,500 issues over the period 2005-2012, we find that in recent years the sovereign debt market turbulence has been a major driver of corporate risk. Compared with the three-year period 2005-07 before the global financial crisis, in the years 2010-12 Italian, Spanish and Portuguese firms paid on average between 70 and 120 basis points of additional premium due to the negative spillovers from the sovereign debt crisis, while German firms got a discount of 40 basis points.
Advances in technology and several regulatory initiatives have led to the emergence of a competitive but fragmented equity trading landscape in the US and Europe. While these changes have brought about several benefits like reduced transaction costs, regulators and market participants have also raised concerns about the potential adverse effects associated with increased execution complexity and the impact on market quality of new types of venues like dark pools. In this article we review the theoretical and empirical literature examining the economic arguments and motivations underlying market fragmentation, as well as the resulting implications for investors' welfare. We start with the literature that views exchanges as natural monopolies due to the presence of network externalities, and then examine studies which challenge this view by focusing on trader heterogeneity and other aspects of the microstructure of equity markets.
This paper examines a practice that is nearly imperceptible to historians because the bulk of evidence for it is to be found in the interstices of the beaten paths of legal and social history and because it mixes economic and religious matters in a strikingly unfamiliar manner. From the thirteenth to the sixteenth century, excommunication for debt offered ordinary people an economical, efficacious enforcement mechanism for small-scale, daily, unwritten credit. At the same time, the practice offered holders of ecclesiastical jurisdiction an important opportunity to round out their incomes, particularly in the difficult fifteenth century. This transitional practice reveals a level of credit below that of the letters of change, annuities secured on real property, or written obligations beloved of economic historians and historians of banking. Studying the practice casts light on the transition from the face-to-face, local economies of the high Middle Ages to the regional economies of the early modern period, on how the Reformation shaped early modern regimes of credit, and on how the disappearance of ecclesiastical civil justice facilitated the emergence of early modern juridically sovereign territories.