Working Paper
Securities transaction tax in France: impact on market quality and inter-market price coordination
(2014)
The general concept of a Securities Transaction Tax (STT) is controversial among academics and politicians. While theoretical research is quite advanced, empirical guidance in a fragmented market context is still scarce. Negative effects on market liquidity and market efficiency are theoretically predicted, but have not yet been empirically tested. In light of the agreement of eleven European member states to implement an STT, this study aims to give a comprehensive overview of the effects of the STT introduced in France in 2012 on liquidity demand, liquidity supply, volatility, and inter-market information transmission. The results show that the STT has led to a decline in liquidity demand, has had a detrimental effect on liquidity supply, and has negatively influenced the efficiency of inter-market information transmission. However, no effect on volatility can be observed.
In the United States, on April 1, 2014, the set of rules commonly known as the "Volcker Rule", prohibiting proprietary trading activities in banks, became effective. The implementation of this rule took more than three years, as “proprietary trading” is an inherently vague concept, overlapping strongly with genuinely economically useful activities such as market-making. As a result, the final Rule is a complex and lengthy combination of prohibitions and exemptions.
In January 2014, the European Commission put forward its proposal on banking structural reform. The proposal includes a Volcker-like provision prohibiting large, systemically relevant financial institutions from engaging in proprietary trading or hedge fund-related business. This paper offers lessons from the US implementation process for the Volcker Rule for the European regulatory process.
Financial innovation is, as usual, faster than regulation. New forms of speculation and intermediation are rapidly emerging. Largely as a result of the evaporation of trust in financial intermediation, an exponentially increasing role is being played by so-called peer-to-peer intermediation. The most prominent example at the moment is Bitcoin.
If one expects that shocks in these markets could also destabilize traditional financial markets, then it will be necessary to extend regulatory measures to these innovations as well.
This policy letter provides an overview of the strengths, weaknesses, risks and opportunities of the upcoming comprehensive risk assessment, a euro area-wide evaluation of bank balance sheets and business models. If carried out properly, the 2014 comprehensive assessment will lead the euro area into a new era of banking supervision. Policy makers in euro area countries are now under severe pressure to define a credible backstop framework for banks. This framework, as the author argues, needs to be a broad, quasi-European system of mutually reinforcing backstops.
This article discusses the recent proposal for debt restructuring in the euro zone by Pierre Paris and Charles Wyplosz. It argues that the plan cannot realize the promised debt relief without producing moral hazard. Ester Faia revisits the Redemption Fund proposed in November 2011 by the German Council of Economic Experts and argues that this plan, to date, still remains the most promising path towards successful debt restructuring in Europe.
On November 8, 2013, several members of the British House of Lords’ Subcommittee A conducted a hearing at the ECB in Frankfurt, Germany, on “Genuine Economic and Monetary Union and its Implications for the UK”. Professors Otmar Issing and Jan Pieter Krahnen were called as expert witnesses.
The testimony began with a general discussion of the elements considered necessary for a functioning internal market. Do economic union and monetary union require a fiscal union, or even a political union, beyond the elements of the banking union currently being prepared? In this context, the critique of the German current account surplus and the international expectation that Germany stimulate internal demand to support growth in crisis countries were also discussed.
With regard to the monetary union, the members of the subcommittee asked for an assessment of how European nations and the banking industry would have fared in the banking crisis that followed the Lehman collapse had there not been a common currency. Given the important role that the ECB has played in the course of crisis management, the members further asked for an evaluation of the ECB's OMT program and whether the monetary union needs common debt instruments that would allow the ECB to buy EU liabilities, comparable to the Fed buying US Treasury bonds. Finally, the ECB's dual role in monetary policy and banking supervision was an issue touched on by several questions.
In many cases, the dire situation of public finances calls into question the very soundness of sovereigns and prompts corrective actions with far-reaching consequences. In this context, European authorities responded with several measures on different fronts, for instance by passing the "Fiscal Compact", which entered into force on January 1, 2013. Of critical importance in this framework is the assessment of a country's situation by way of statistical measures, in order to take corrective actions when called for according to the letter of the law. If these statistics are not correct, there is a risk of imposing draconian measures on countries that do not really need them.
Before the 2007–09 crisis, standard risk measurement methods substantially underestimated the threat to the financial system. One reason was that these methods did not account for how closely commercial banks, investment banks, hedge funds, and insurance companies were linked. As financial conditions worsened in one type of institution, the effects spread to others. A new method that more accurately accounts for these spillover effects suggests that hedge funds may have been central in generating systemic risk during the crisis.
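Spillover measures of this kind are often built from directional lead-lag relationships between institutions' returns. A minimal numpy sketch of one such diagnostic, a Granger-style lag regression, is shown below; the paper's exact estimator is not specified here, and the simulated series, spillover size, and thresholds are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated weekly returns: the hedge fund sector leads the bank sector.
n = 2000
hf = rng.standard_normal(n)          # hedge fund returns
bank = np.empty(n)
bank[0] = rng.standard_normal()
for t in range(1, n):
    # banks load on last week's hedge fund return (true spillover = 0.5)
    bank[t] = 0.5 * hf[t - 1] + 0.5 * rng.standard_normal()

# Granger-style regression: bank_t on its own lag and the hedge fund lag.
X = np.column_stack([np.ones(n - 1), bank[:-1], hf[:-1]])
y = bank[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated spillover from hedge funds to banks: {beta[2]:.2f}")
```

A nonzero coefficient on the lagged hedge-fund return flags a directional spillover link; repeating the regression for every ordered pair of institutions yields a network of such links whose density can serve as a systemic risk indicator.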
Social impact bonds are a special type of bond whose purpose is to provide long-term funds to projects with a social impact. In the UK and the US in particular, these bonds are increasingly being used to raise funds to finance government projects. Their return depends on the social improvements achieved. Especially in times of crisis, governments lack funds to prevent the social consequences of recessions. Faia argues that the European Union should develop an equivalent to the British Social Finance Ltd. to finance projects for social improvement.
Northerners are unwilling to invest in a South they perceive as unwilling to undertake necessary structural reforms, while Southerners are unwilling to invest in their own countries in a climate of austerity and policy uncertainty imposed, in their view, by the North. The result is a vicious cycle of mistrust. However, as the author argues, big steps in the direction of reforms may provide just enough thrust to break out of this vicious cycle, propel southern countries – and especially Greece – to a much happier future, and improve the chances of more balanced economic performance in North and South.
Social Security rules that determine retirement, spousal, and survivor benefits, along with benefit adjustments according to the age at which these are claimed, open up a complex set of financial options for household decisions. These rules influence optimal household asset allocation, insurance, and work decisions, subject to life cycle demographic shocks, such as marriage, divorce, and children. Our model-based research generates a wealth profile and a low and stable equity fraction consistent with empirical evidence. We confirm predictions that wives will claim retirement benefits earlier than husbands, while life insurance is mainly purchased by younger men. Our policy simulations imply that eliminating survivor benefits would sharply reduce claiming differences by sex while dramatically increasing men’s life insurance purchases.
One of the motivations for establishing a European banking union was the desire to break the ties between national regulators and domestic financial institutions in order to prevent regulatory capture. However, supervisory authority over the financial sector at the national level can also have valuable public benefits. The aim of this policy letter is to detail these public benefits in order to counter discussions that focus only on conflicts of interest. It is informed by an analysis of how financial institutions interacted with policy-makers in the design of national bank rescue schemes in response to the banking crisis of 2008. Using this information, it discusses the possible benefits of close cooperation between financial institutions and regulators and analyzes these in the wake of a European banking union.
This paper makes a conceptual contribution to the effect of monetary policy on financial stability. We develop a microfounded network model with endogenous network formation to analyze the impact of central banks' monetary policy interventions on systemic risk. Banks choose their portfolio, including their borrowing and lending decisions on the interbank market, to maximize profit subject to regulatory constraints in an asset-liability framework. Systemic risk arises in the form of multiple bank defaults driven by common shock exposure on asset markets, direct contagion via the interbank market, and firesale spirals. The central bank injects or withdraws liquidity on the interbank markets to achieve its desired interest rate target. A tension arises between the beneficial effects of stabilized interest rates and increased loan volume and the detrimental effects of higher risk taking incentives. We find that central bank supply of liquidity quite generally increases systemic risk.
This paper explores the consequences of consumer education on prices and welfare in retail financial markets when some consumers are naive about shrouded add-on prices and firms try to exploit this naivety. Allowing for different information and pricing strategies, we show that education is unlikely to push firms to disclose prices to all consumers, which would be socially efficient. Instead, price discrimination emerges as a new equilibrium. Further, due to a feedback effect on prices, education that is good for consumers who become sophisticated may be bad for consumers who stay naive, and even for all consumers as a whole.
This paper investigates the role of monetary policy in the collapse in the long-term real interest rates in the decade before the onset of the financial crisis using a sample of five advanced economies (United States, United Kingdom, the euro area, Sweden and Canada). The results from an estimated panel VAR with monthly data show that, while monetary policy shocks had negligible effects on long-term real interest rates, shocks to the long-term real interest rates had a one-to-one effect on the short nominal rate.
This paper empirically tests the role of bank lending tightening on non-financial corporate (NFC) bond issuance in the eurozone. By utilizing a unique data set provided by the ECB Bank Lending Survey, we capture the "pure" credit supply effect on corporate external financing. We find that tightened credit standards positively affect NFC bond issuance: a 1pp increase in banks reporting considerable tightening on loans leads to around a 7% increase in firms' bond issuance in the eurozone. Focusing on a spectrum of aspects contributing to bank credit tightening, we document that banks' balance sheet constraints, as well as the perception of risk, lead to significantly higher NFC bond issuance. In addition, we show that stricter lending conditions, such as wider margins, higher collateral requirements and covenants, also significantly increase NFC bond issuance volumes. Furthermore, the impact of bank credit tightening on firms' bond issuance is particularly observable in core eurozone countries and not in peripheral countries. This is partially due to the underdevelopment of debt capital markets in the peripheral countries.
This paper investigates the determinants of value and growth investing in a large administrative panel of Swedish residents over the 1999-2007 period. We document strong relationships between a household’s portfolio tilt and the household’s financial and demographic characteristics. Value investors have higher financial and real estate wealth, lower leverage, lower income risk, lower human capital, and are more likely to be female than the average growth investor. Households actively migrate to value stocks over the life-cycle and, at higher frequencies, dynamically offset the passive variations in the value tilt induced by market movements. We verify that these results are not driven by cohort effects, financial sophistication, biases toward popular or professionally close stocks, or unobserved heterogeneity in preferences. We relate these household-level results to some of the leading explanations of the value premium.
We analyze the risk premium on bank bonds at origination with a special focus on the role of implicit and explicit public guarantees and the systemic relevance of the issuing institutions. By looking at the asset swap spread on 5,500 bonds, we find that explicit guarantees and sovereign creditworthiness have a substantial effect on the risk premium. In addition, while large institutions still enjoy lower issuance costs linked to the too-big-to-fail (TBTF) framework, we find evidence of enhanced market discipline for systemically important banks, which have faced an increased premium on bond placements since the onset of the financial crisis.
We examine the impact of so-called "Crisis Contracts" on bank managers' risk-taking incentives and on the probability of banking crises. Under a Crisis Contract, managers are required to contribute a pre-specified share of their past earnings to finance public rescue funds when a crisis occurs. This can be viewed as a retroactive tax that is levied only when a crisis occurs and that leads to a form of collective liability for bank managers. We develop a game-theoretic model of a banking sector whose shareholders have limited liability, so that society at large will suffer losses if a crisis occurs. Without Crisis Contracts, the managers' and shareholders' interests are aligned, and managers take more than the socially optimal level of risk. We investigate how the introduction of Crisis Contracts changes the equilibrium level of risk-taking and the remuneration of bank managers. We establish conditions under which the introduction of Crisis Contracts will reduce the probability of a banking crisis and improve social welfare. We explore how Crisis Contracts and capital requirements can supplement each other and we show that the efficacy of Crisis Contracts is not undermined by attempts to hedge.
Banks can deal with their liquidity risk by holding liquid assets (self-insurance), by participating in interbank markets (coinsurance), or by using flexible financing instruments, such as bank capital (risk-sharing). We use a simple model to show that undiversifiable liquidity risk, i.e. the liquidity risk that banks are unable to coinsure on interbank markets, represents an important risk factor affecting their capital structures. Banks facing higher undiversifiable liquidity risk hold more capital. We posit that, empirically, banks that are more exposed to undiversifiable liquidity risk are less active on interbank markets. We therefore test for a negative relationship between bank capital and interbank market activity and find supporting evidence in a large sample of U.S. commercial banks.
From the late Middle Ages to early modern times (ca. 1200-1600), the Lübeck City Council was the most important courthouse in the Baltic. About 100 cities and towns on its shores lived according to the law of Lübeck. The paper deals with the old theory that Imperial law, i.e. mainly the learned Ius commune, was generally rejected by the council on the grounds of its foreign nature. The paper rejects this view with the help of eight case studies. Rather spectacular statements against Imperial law do exist, but a closer look reveals that they have to be seen in the light of a specific practical context. They must not be confounded with general statements, in which the council had no interest. Its attitude towards learned law was flexible and purely pragmatic.
I analyze critical illness insurance in a consumption-investment model over the life cycle. I numerically solve a model with stochastic mortality risk and health shock risk. These shocks are interpreted as critical illness and can negatively affect the expected remaining lifetime, health expenses, and income. In order to hedge the health expense effect of a shock, the agent has the option to contract a critical illness insurance. My results highlight that critical illness insurance is strongly desired by the agents. With an insurance profit of 20%, nearly all agents contract the insurance in the working stage of the life cycle and more than 50% of the agents contract the insurance during retirement. With an insurance profit of 200%, still nearly all working agents contract the insurance, whereas there is little demand in the retirement stage.
I numerically solve realistically calibrated life cycle consumption-investment problems in continuous time featuring stochastic mortality risk driven by jumps, unspanned labor income, short-sale and liquidity constraints, and a simple insurance. I compare models with a deterministic and a stochastic hazard rate of death to a model without mortality risk. Mortality risk has only minor effects on the optimal controls early in the life cycle, but it becomes crucial in later years. A diffusive component in the hazard rate of death has no significant impact, whereas a jump component is desired by the agent and influences the optimal controls and wealth evolution. The insurance is used to ensure an optimal bequest such that there is no accidental bequest. In the absence of the insurance, most of the bequest is accidental.
We explore the sources of household balance sheet adjustment following the collapse of the housing market in 2006. First, we use microdata from the Federal Reserve Board’s Senior Loan Officer Opinion Survey to document that banks cumulatively tightened consumer lending standards more in counties that experienced a house price boom in the mid-2000s than in non-boom counties. We then use the idea that renters, unlike homeowners, did not experience an adverse wealth shock when the housing market collapsed to examine the relative importance of two explanations for the observed deleveraging and the sluggish pickup in consumption after 2008. First, households may have optimally adjusted to lower wealth by reducing their demand for debt and implicitly, their demand for consumption. Alternatively, banks may have been more reluctant to lend in areas with pronounced real estate declines. Our evidence is consistent with the second explanation. Renters with low risk scores, compared to homeowners in the same markets, reduced their levels of nonmortgage debt and credit card debt more in counties where house prices fell more. The contrast suggests that the observed reductions in aggregate borrowing were more driven by cutbacks in the provision of credit than by a demand-based response to lower housing wealth.
This paper solves a dynamic model of households' mortgage decisions incorporating labor income, house price, inflation, and interest rate risk. It uses a zero-profit condition for mortgage lenders to solve for equilibrium mortgage rates given borrower characteristics and optimal decisions. The model quantifies the effects of adjustable vs. fixed mortgage rates, loan-to-value ratios, and mortgage affordability measures on mortgage premia and default. Heterogeneity in borrowers' labor income risk is important for explaining the higher default rates on adjustable-rate mortgages during the recent US housing downturn, and the variation in mortgage premia with the level of interest rates.
The paper analyses the relationship between deposit insurance, debt-holder monitoring, bank charter values, and risk taking for European banks. Utilising cross-sectional and time series variation in the existence of deposit insurance schemes in the EU, we find that the establishment of explicit deposit insurance significantly reduces the risk taking of banks. This finding stands in contrast to most of the previous empirical literature. It supports the hypothesis that in the absence of deposit insurance, European banking systems have been characterised by strong implicit insurance operating through the expectation of public intervention at times of distress. Hence the introduction of an explicit system may imply a de facto reduction in the scope of the safety net. This finding provides a new perspective on the effects of deposit insurance on risk taking. Unless the absence of any safety net is credible, the introduction of deposit insurance serves to explicitly limit the safety net and, hence, moral hazard. We also test further hypotheses regarding the interaction between deposit insurance and monitoring, charter values and "too-big-to-fail." We find that banks with lower charter values and more subordinated debt reduce risk taking more after the introduction of explicit deposit insurance, in support of the notion that charter values and subordinated debt may mitigate moral hazard. Finally, large banks (as measured in relation to the banking system as a whole) do not change their risk taking in response to the introduction of deposit insurance, which suggests that the introduction of explicit deposit insurance does not mitigate "too-big-to-fail" problems.
This paper uses the co-incidence of extreme shocks to banks' risk to examine within-country and cross-country contagion among large EU banks. Banks' risk is measured by the first difference of weekly distances to default and by abnormal returns. Using Monte Carlo simulations, the paper examines whether the observed frequency of large shocks experienced by two or more banks simultaneously is consistent with the assumption of a multivariate normal or a Student t distribution. Further, the paper proposes a simple metric, which is used to identify contagion from one bank to another and to identify "systemically important" banks in the EU.
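The intuition behind the Monte Carlo benchmark can be illustrated in a few lines: a multivariate normal has no tail dependence, so joint extreme shocks are much rarer under normality than under a Student t with the same correlation, and an excess of observed co-exceedances points beyond the normal benchmark. A hedged numpy sketch (correlation, degrees of freedom, tail cutoff, and sample size are illustrative choices, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, df = 200_000, 0.5, 3

# Correlated standard normal draws for two banks' weekly risk changes.
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Student t draws with the same correlation structure: divide the
# normals by an independent chi-square mixing variable.
chi = rng.chisquare(df, size=n) / df
t = z / np.sqrt(chi)[:, None]

def joint_tail_freq(x, q=0.05):
    """Share of draws where BOTH series fall below their own q-quantile."""
    lo = np.quantile(x, q, axis=0)
    return np.mean((x[:, 0] < lo[0]) & (x[:, 1] < lo[1]))

print(f"normal joint tail frequency: {joint_tail_freq(z):.4f}")
print(f"t({df})  joint tail frequency: {joint_tail_freq(t):.4f}")
```

Because each series is compared against its own 5% quantile, the marginal tail probabilities are identical by construction; the higher joint frequency under the Student t is driven entirely by its tail dependence.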
Using a normalized CES function with factor-augmenting technical progress, we estimate a supply-side system of the US economy from 1953 to 1998. Avoiding potential estimation biases that have occurred in earlier studies and placing a high emphasis on the consistency of the data set required by the estimated system, we obtain robust results not only for the aggregate elasticity of substitution but also for the parameters of labor- and capital-augmenting technical change. We find that the elasticity of substitution is significantly below unity and that the growth rates of technical progress show an asymmetrical pattern: the growth of labor-augmenting technical progress is exponential, while that of capital is hyperbolic or logarithmic.
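The normalized CES form fixes output, inputs, and the capital income share at a baseline point, which is what stabilizes estimation of the substitution elasticity. A small Python sketch of the function itself (the baseline values and the share parameter below are illustrative assumptions, not the paper's estimates); as the elasticity of substitution sigma approaches 1 it collapses to the Cobb-Douglas form:

```python
def normalized_ces(K, L, sigma, Y0=1.0, K0=1.0, L0=1.0, pi0=0.35):
    """Normalized CES output with capital share pi0 at the baseline (Y0, K0, L0).

    sigma is the elasticity of substitution between capital and labor.
    """
    if abs(sigma - 1.0) < 1e-9:
        # Cobb-Douglas limit as sigma -> 1
        return Y0 * (K / K0) ** pi0 * (L / L0) ** (1.0 - pi0)
    rho = (sigma - 1.0) / sigma
    return Y0 * (pi0 * (K / K0) ** rho + (1.0 - pi0) * (L / L0) ** rho) ** (1.0 / rho)

# With sigma below unity (as the paper estimates), capital and labor are
# harder to substitute: doubling capital raises output by less than in
# the Cobb-Douglas benchmark.
y_low = normalized_ces(K=2.0, L=1.0, sigma=0.6)
y_cd = normalized_ces(K=2.0, L=1.0, sigma=1.0)
print(y_low, y_cd)
```

At the baseline point the function returns Y0 for any sigma, which is precisely the normalization property exploited in estimation.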
Recent empirical studies on the inflation-growth relationship underline that inflation has negative growth effects even at relatively modest rates. Most contributions to monetary growth theory, however, have difficulties in explaining such a pattern. This paper shows that this problem can be overcome by establishing a link between monetary instability and the aggregate elasticity of factor substitution. Several microeconomic justifications can be found for a negative influence of inflation on factor substitution. It turns out that already in a simple neoclassical monetary growth model this effect is usually strong enough to question the superneutrality benchmark result in the steady state and to dominate all potential positive effects of inflation along the convergence path. From a more general perspective, the paper contributes to a better integration of institutional change into aggregate models of economic growth.
June 4th, 2013 marks the formal launch of the third generation of the Equator Principles (EP III) and the tenth anniversary of the EPs – reason enough to evaluate the EPs initiative from an economic ethics and business ethics perspective. In particular, this essay deals with the following questions: What are the EPs and where are they going? What has been achieved so far by the EPs? What are the strengths and weaknesses of the EPs? Which reform steps need to be adopted in order to further strengthen the EPs framework? Can the EPs be regarded as a role model in the field of sustainable finance and CSR? The paper is structured as follows: The first chapter defines the term EPs and introduces the keywords related to the EPs framework. The second chapter gives a brief overview of the history of the EPs. The third chapter discusses the Equator Principles Association, the governing, administering, and managing institution behind the EPs. The fourth chapter summarizes the main features and characteristics of the newly released third generation of the EPs. The fifth chapter critically evaluates EP III from an economic ethics and business ethics perspective. The paper concludes with a summary of the main findings.
In ‘Strafe für fremde Schuld’ Harald Maihold uncovered how a doctrine of surrogate punishment in the legal treatises of the Salamanca school gradually gave way to the principle of guilt. This meant that punishment eventually could only be inflicted upon a culprit and no longer upon an innocent. We will use René Girard’s philosophy of (the disruption of) scapegoat mechanisms and sacrifice to develop a coherent interpretation not only of how this institution of surrogate punishment functioned, how it selected its victims and the way it was legitimated, but also of the theology that formed its background. We argue that most of what surrogate punishment is about can be grasped in two words: sacrificial logic. The elimination of surrogation from criminal law would then correspond to the rejection of this logic, an evolution which could be interpreted as a desacralisation or secularisation of criminal law under the influence of the emerging principle of guilt.
Noumenal Power
(2014)
In political or social philosophy, we speak about power all the time. Yet the meaning of this important concept is rarely made explicit, especially in the context of normative discussions. But as with many other concepts, once one considers it more closely, fundamental problems arise, such as whether a power relation is necessarily a relation of subordination and domination. In the following, I suggest a novel understanding of what power is and what it means to exercise it.
Francisco Suárez (1548-1617) and Rodrigo Arriaga (1592-1667) on the state of innocence and community
(2014)
Recent scholarship on late-scholastic thought has stressed a Jesuit discontinuity from Thomism. While Aquinas’ Aristotelian thesis located the political sphere in the state of innocence, Jesuit thought on community formation is said to have referred to ‘fallen’ and ‘pure’ nature. In this piece, I trace one particular narrative: In the hypothetical, lasting state of innocence (if original sin had not occurred), Aquinas identified the political community, but not the institution of the sacraments. Two celebrated Jesuit scholastics, Francisco Suárez and Rodrigo Arriaga, challenged the latter claim and defended the naturalness of spiritual alongside temporal power. This effectively allowed them to connect ‘nature’ to ‘utility’ and ‘necessity’ without tying their claims to the supernatural teleology. To them, the state of innocence remained relevant for politics, albeit in a way that challenged the Thomist account.
In this paper, we study the effect of proportional transaction costs on consumption-portfolio decisions and asset prices in a dynamic general equilibrium economy with a financial market that has a single-period bond and two risky stocks, one of which incurs the transaction cost. Our model has multiple investors with stochastic labor income, heterogeneous beliefs, and heterogeneous Epstein-Zin-Weil utility functions. The transaction cost gives rise to endogenous variations in liquidity. We show how equilibrium in this incomplete-markets economy can be characterized and solved for in a recursive fashion. We have three main findings. One, costs for trading a stock lead to a substantial reduction in the trading volume of that stock, but have only a small effect on the trading volume of the other stock and the bond. Two, even in the presence of stochastic labor income and heterogeneous beliefs, transaction costs have only a small effect on the consumption decisions of investors, and hence, on equity risk premia and the liquidity premium. Three, the effects of transaction costs on quantities such as the liquidity premium are overestimated in partial equilibrium relative to general equilibrium.
This paper studies the life cycle consumption-investment-insurance problem of a family. The wage earner faces the risk of a health shock that significantly increases his probability of dying. The family can buy term life insurance with realistic features. In particular, the available contracts are long term, so that decisions are sticky and can only be revised at significant costs. Furthermore, a revision is only possible as long as the insured person is healthy. A second important and realistic feature of our model is that the labor income of the wage earner is unspanned. We document that the combination of unspanned labor income and the stickiness of insurance decisions reduces the insurance demand significantly. This is because an income shock induces the need to reduce the insurance coverage, since premia become less affordable. Since such a reduction is costly and families anticipate these potential costs, they buy less protection at all ages. In particular, young families stay away from life insurance markets altogether.
The financial services industry worldwide has undergone major transformation since the late 1970s. Technological advances in information processing and communication facilitated financial innovation and narrowed traditional distinctions between financial products and services, allowing them to become close substitutes for one another. The deregulation process in many major economies prior to the recent financial crisis blurred the traditional lines of demarcation between the distinct types of financial institutions, exposing those firms to new competitors in their traditional business areas, while the increasing globalization of financial markets fostered the provision of financial services across national borders. Against this backdrop, a trend toward consolidation across financial sectors as well as across national borders has increasingly manifested itself since the 1990s. These developments in the financial markets further intensified competition in the financial services industry and induced financial institutions to redefine their business strategies in search of higher profitability and growth opportunities. Consolidation across distinct financial sectors, i.e. financial conglomeration, became a particularly popular business strategy in light of the potential operational synergies and diversification benefits it can offer. This trend spurred the growth of diversified financial groups, the so-called financial conglomerates, which commingle banking, securities, and insurance activities under one corporate umbrella. Still today, large, complex financial conglomerates are among the major players in the financial markets worldwide, with activities that span not only the traditional boundaries of the banking, securities, and insurance sectors but also national borders.
Notwithstanding the economic benefits that conglomeration may produce as a business strategy, the emergence of financial conglomerates also exacerbated existing and created new prudential risks in the financial system. The mixing of a variety of financial products and services under one corporate roof and the generally large and complex group structure of financial conglomerates expose such organizations to specific group risks such as contagion and arbitrage risk as well as systemic risk. When realized, these risks may not only cause the failure of an entire financial group but also threaten the stability of the financial system as a whole, as evidenced by the events during the recent financial crisis of 2007-2009...
Following the experience of the global financial crisis, central banks have been asked to undertake unprecedented responsibilities. Governments and the public appear to have high expectations that monetary policy can provide solutions to problems that do not necessarily fit in the realm of traditional monetary policy. This paper examines three broad public policy goals that may overburden monetary policy: full employment; fiscal sustainability; and financial stability. While central banks have a crucial position in public policy, the appropriate policy mix also involves other institutions, and overreliance on monetary policy to achieve these goals is bound to disappoint. Central bank policies that facilitate postponement of needed policy actions by governments may also have longer-term adverse consequences that could outweigh more immediate benefits. Overburdening monetary policy may eventually diminish and compromise the independence and credibility of the central bank, thereby reducing its effectiveness in preserving price stability and contributing to crisis management.
Banks' financial distress, lending supply and consumption expenditure : [version december 2013]
(2014)
The paper employs a unique identification strategy that links survey data on household consumption expenditure to bank level data in order to estimate the effects of bank financial distress on consumer credit and consumption expenditures. Specifically, we show that households whose banks were more exposed to funding shocks report significantly lower levels of non-mortgage liabilities compared to a matched sample of households. The reduced access to credit, however, does not result in lower levels of consumption. Instead, we show that households compensate by drawing down liquid assets. Only households without the ability to draw on liquid assets reduce consumption. The results are consistent with consumption smoothing in the face of a temporary adverse lending supply shock. The results contrast with recent evidence on the real effects of finance on firms' investment, where even temporary adverse credit supply shocks are associated with significant real effects.
This paper tests whether an increase in insured deposits causes banks to become more risky. We use variation introduced by the U.S. Emergency Economic Stabilization Act in October 2008, which increased the deposit insurance coverage from $100,000 to $250,000 per depositor and bank. For some banks, the amount of insured deposits increased significantly; for others, it was a minor change. Our analysis shows that the more affected banks increase their investments in risky commercial real estate loans and become more risky relative to unaffected banks following the change. This effect is most pronounced for affected banks with low capitalization.
We introduce a new measure of systemic risk, the change in the conditional joint probability of default, which assesses the effects of the interdependence in the financial system on the general default risk of sovereign debtors. We apply our measure to examine the fragility of the European financial system during the ongoing sovereign debt crisis. Our analysis documents an increase in systemic risk contributions in the euro area during the post-Lehman global recession and especially after the beginning of the euro area sovereign debt crisis. We also find a considerable potential for cascade effects from small to large euro area sovereigns. When we investigate the effect of sovereign default on the European Union banking system, we find that bigger banks, banks with riskier activities, poorer asset quality, and funding and liquidity constraints tend to be more vulnerable to a sovereign default. Surprisingly, an increase in leverage does not seem to influence systemic vulnerability.
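The measure described above rests on elementary conditional probability: the default risk of one sovereign conditional on another's default, compared with its unconditional risk. A toy calculation (all probabilities below are invented, not the paper's estimates):

```python
# Toy illustration of a conditional joint default probability.
# All numbers are made up; the paper estimates these from market data.

p_a = 0.02          # unconditional default probability of sovereign A
p_b = 0.05          # unconditional default probability of sovereign B
p_joint = 0.008     # joint default probability (implies positive dependence)

# Probability that A defaults, conditional on B defaulting
p_a_given_b = p_joint / p_b

# The "systemic" effect: change in A's default risk induced by B's default
delta = p_a_given_b - p_a

print(f"P(A defaults | B defaults) = {p_a_given_b:.3f}")
print(f"Change vs. unconditional:   {delta:+.3f}")
```

With independence, `p_joint` would equal `p_a * p_b = 0.001` and the change would be near zero; the gap between the conditional and unconditional probability is what the measure picks up.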
We show that market discipline, defined as the extent to which firm specific risk characteristics are reflected in market prices, eroded during the recent financial crisis in 2008. We design a novel test of changes in market discipline based on the relation between firm specific risk characteristics and debt-to-equity hedge ratios. We find that market discipline already weakened after the rescue of Bear Stearns before disappearing almost entirely after the failure of Lehman Brothers. The effect is stronger for investment banks and large financial institutions, while there is no comparable effect for non-financial firms.
Inflation differentials in the euro area have been persistent since the adoption of the single currency. This paper analyzes the impact of product and labor market regulation on inflation in a sample of 11 countries. The results show that, after the adoption of the euro, product market deregulation has a relevant and significant effect on the level of inflation, while higher labor market regulation increases the responsiveness of inflation to the output gap.
We propose an iterative procedure to efficiently estimate models with complex log-likelihood functions and a potentially large number of parameters relative to the number of observations. Given consistent but inefficient estimates of sub-vectors of the parameter vector, the procedure yields computationally tractable, consistent and asymptotically efficient estimates of all parameters. We show the asymptotic normality and derive the estimator's asymptotic covariance as a function of the number of iteration steps. To mitigate the curse of dimensionality in highly parameterized models, we combine the procedure with a penalization approach that yields sparsity and reduces model complexity. Small sample properties of the estimator are illustrated for two time series models in a simulation study. In an empirical application, we use the proposed method to estimate the connectedness between companies by extending the approach of Diebold and Yilmaz (2014) to a high-dimensional non-Gaussian setting.
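The iterative logic of such block-wise estimation can be sketched with a toy "zig-zag" update: re-estimate one block of parameters holding the other fixed, then alternate. The quadratic objective below is a made-up stand-in for a log-likelihood, not the paper's model:

```python
# Toy zig-zag iteration on f(a, b) = -(a - 1)^2 - (b - 2)^2 - 0.5*a*b,
# a stand-in for a log-likelihood with two interacting parameter blocks.

def update_a(b):
    # argmax over a, holding b fixed: df/da = -2(a - 1) - 0.5*b = 0
    return 1.0 - 0.25 * b

def update_b(a):
    # argmax over b, holding a fixed: df/db = -2(b - 2) - 0.5*a = 0
    return 2.0 - 0.25 * a

a, b = 0.0, 0.0              # crude (inefficient) starting values
for _ in range(100):         # each pass is one "iteration step"
    a = update_a(b)
    b = update_b(a)

print(round(a, 6), round(b, 6))  # converges to the joint optimum
```

Because the cross term links the two blocks, each update shifts the other block's optimum; the iteration nevertheless contracts to the joint maximizer, which is the intuition behind tracking efficiency as a function of the number of iteration steps.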
Even though fiscal sovereignty still counts as a fundamental principle of government, global and regional economic integration as well as increasing levels of sovereign debt severely limit governments’ tax policy choices. In particular the redistributive function of taxation has suffered in the pursuit of economic competitiveness. As inequality rises and attention is directed again at taxation as a means for redistribution, international cooperation appears as an avenue to enable redistribution through taxation. Yet, one of the predominant international institutions dealing with tax matters – the OECD – with its focus on economic growth and competitiveness and resulting tax policy advice prevents rather than promotes national and international debates on taxation as a question of social justice. The paper argues that questions of taxation need to be perceived as questions of social justice and thus as questions of politics, and not merely of economics. Only if taxation is not considered a mere economic instrument can a ‘political economy’ be maintained. The paper addresses the three objectives of taxation – revenue generation, redistribution and regulation – and how they are affected as governments aim for fiscal consolidation, concluding that governments’ power to freely pursue and calibrate these objectives now appears to be a myth rather than the core of sovereignty. It then demonstrates how the OECD’s tax policy advice and cooperation in tax matters react to the constraints on governmental taxation powers; how they aim at economic growth and competitiveness to the detriment of (other) ideas of social justice. The paper concludes with a call for (re)integrating social and global justice concerns into debates on taxation.
The article introduces a research project financed by the Academy of Sciences and Literature Mainz that began in 2013 and will extend over an 18-year period. It aims at producing a historical-semantic dictionary elucidating central terms of the School of Salamanca's discourses and their significance for modern political theory and jurisprudence. The project's foundation will be a digital corpus of important texts from the School of Salamanca, which will be linked to the dictionary's online version. By making the source corpus accessible in searchable full text (as well as in high-quality digital images), the project is creating a new research tool with exciting possibilities for further investigations. The dictionary will be a valuable source of information for the interdisciplinary research carried out in this field.
Sovereign bond risk premiums
(2013)
Credit risk has become an important factor driving government bond returns. We therefore introduce an asset pricing model which exploits information contained in both forward interest rates and forward CDS spreads. Our empirical analysis covers euro-zone countries with German government bonds as credit risk-free assets. We construct a market factor from the first three principal components of the German forward curve as well as a common and a country-specific credit factor from the principal components of the forward CDS curves. We find that predictability of risk premiums of sovereign euro-zone bonds improves substantially if the market factor is augmented by a common and an orthogonal country-specific credit factor. While the common credit factor is significant for most countries in the sample, the country-specific factor is significant mainly for peripheral euro-zone countries. Finally, we find that during the current crisis period, market and credit risk premiums of government bonds are negative over long subintervals, a finding that we attribute to the presence of financial repression in euro-zone countries.
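The factor construction described above rests on principal components of a forward curve. A minimal pure-Python sketch of extracting the first component ("level factor") via power iteration, using invented forward-rate data (the paper uses German government forward curves):

```python
# Extract the first principal component of a panel of forward rates.
# Rows are dates, columns are maturities; all numbers are toy data.

rates = [
    [0.010, 0.015, 0.020, 0.024],
    [0.012, 0.016, 0.022, 0.026],
    [0.009, 0.014, 0.019, 0.023],
    [0.013, 0.018, 0.023, 0.027],
]

m = len(rates[0])
means = [sum(row[j] for row in rates) / len(rates) for j in range(m)]
centered = [[row[j] - means[j] for j in range(m)] for row in rates]

# Sample covariance matrix of the centered panel
cov = [[sum(r[i] * r[j] for r in centered) / (len(rates) - 1)
        for j in range(m)] for i in range(m)]

# Power iteration for the dominant eigenvector (first PC loadings)
v = [1.0] * m
for _ in range(200):
    w = [sum(cov[i][j] * v[j] for j in range(m)) for i in range(m)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# First principal component scores: the "market factor" time series
factor = [sum(r[j] * v[j] for j in range(m)) for r in centered]
print([round(f, 5) for f in factor])
```

In practice one would keep the first three components, as in the paper, and build the credit factors from forward CDS curves the same way; the mechanics are identical per curve.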
This paper takes a novel approach to estimating bankruptcy costs by inference from market prices of equity and put options using a dynamic structural model of capital structure. This approach avoids the selection bias of looking at firms in or near default and therefore permits theories of ex ante capital structure determination to be tested. We identify significant cross-sectional variation in bankruptcy costs across industries and relate these to specific firm characteristics. We find that asset volatility and growth options have significant positive impacts, while tangibility and size have negative impacts. Our estimated bankruptcy cost variable has a significant negative impact on leverage ratios. This negative impact is in addition to that of other firm characteristics such as asset intangibility and asset volatility. The results provide strong support for the tradeoff theory of capital structure.
We study to what extent firms spread out their debt maturity dates across time, which we call "granularity of corporate debt." We consider the role of debt granularity using a simple model in which a firm's inability to roll over expiring debt causes inefficiencies, such as costly asset sales or underinvestment. Since multiple small asset sales are less costly than a single large one, firms may diversify debt rollovers across maturity dates. We construct granularity measures using data on corporate bond issuers for the 1991-2011 period and establish a number of novel findings. First, there is substantial variation in granularity in that many firms have either very concentrated or highly dispersed maturity structures. Second, our model's predictions are consistent with observed variation in granularity. Corporate debt maturities are more dispersed for larger and more mature firms, for firms with better investment opportunities, with higher leverage ratios, and with lower levels of current cash flows. We also show that during the recent financial crisis, firms with valuable investment opportunities in particular implemented more dispersed maturity structures. Finally, granularity plays an important role in bond issuance: we document that newly issued corporate bond maturities complement pre-existing bond maturity profiles.
We consider an economy where individuals privately choose effort and trade competitively priced securities that pay off with effort-determined probability. We show that if insurance against a negative shock is sufficiently incomplete, then standard functional form restrictions ensure that individual objective functions are optimized by an effort and insurance combination that is unique and satisfies first- and second-order conditions. Modeling insurance incompleteness in terms of costly production of private insurance services, we characterize the constrained inefficiency arising in general equilibrium from competitive pricing of nonexclusive financial contracts.
We propose a new classification of consumption goods into nondurable goods, durable goods and a new class which we call “memorable” goods. A good is memorable if a consumer can draw current utility from its past consumption experience through memory. We construct a novel consumption-savings model in which a consumer has a well-defined preference ordering over both nondurable goods and memorable goods. Memorable goods consumption differs from nondurable goods consumption in that current memorable goods consumption may also impact future utility through the accumulation process of the stock of memory. In our model, households optimally choose a lumpy profile of memorable goods consumption even in a frictionless world. Using Consumer Expenditure Survey data, we then document levels and volatilities of different groups of consumption goods expenditures, as well as their expenditure patterns, and show that the expenditure patterns on memorable goods indeed differ significantly from those on nondurable and durable goods. Finally, we empirically evaluate our model’s predictions with respect to the welfare cost of consumption fluctuations and conduct an excess-sensitivity test of the consumption response to predictable income changes. We find that (i) the welfare cost of household-level consumption fluctuations may be overstated by 1.7 percentage points (11.9 percent as opposed to 13.6 percent of permanent consumption) if memorable goods are not appropriately accounted for; (ii) the finding of excess sensitivity of consumption documented in important papers of the literature might be entirely due to the presence of memorable goods.
There is mounting evidence that retail investors make predictable, costly investment mistakes, including underinvestment, naïve diversification, and payment of excessive fund fees. Over the past thirty-five years, however, participant-directed 401(k) plans have largely replaced professionally managed pension plans, requiring unsophisticated retail investors to navigate the financial markets themselves. Policy-makers have struggled with regulatory interventions designed to improve the quality of investment decisions without a clear understanding of the reasons for investor mistakes. Absent such an understanding, it is difficult to design effective regulatory responses. This article offers a first step in understanding the investor decision-making process. We use an internet-based experiment to disentangle possible explanations for inefficient investment decisions. The experiment employs a simplified construct of an employee’s allocation among the options in a retirement plan coupled with technology that enables us to collect data on the specific information that investors choose to view. In addition to collecting general information about the process by which investors choose among mutual fund options, we employ an experimental manipulation to test the effect of an instruction on the importance of mutual fund fees. Pairing this instruction with simplified fee disclosure allows us to distinguish between motivation-limits and cognition-limits as explanations for the widespread findings that investors ignore fees in their investment decisions. Our results offer partial but limited grounds for optimism. On the one hand, within our simplified experimental construct, our subjects allocated more money, on average, to higher-value funds. Furthermore, subjects who received the fees instruction paid closer attention to mutual fund fees and allocated their investments into funds with lower fees. 
On the other hand, the effects of even a blunt fees instruction were limited, and investors were unable to identify and avoid clearly inferior fund options. In addition, our results suggest that excessive, naïve diversification strategies are driving many investment decisions. Although our findings are preliminary, they suggest valuable avenues for future research and important implications for regulation of retail investing.
The substantial variation in the real price of oil since 2003 has renewed interest in the question of how to forecast monthly and quarterly oil prices. There also has been increased interest in the link between financial markets and oil markets, including the question of whether financial market information helps forecast the real price of oil in physical markets. An obvious advantage of financial data in forecasting oil prices is their availability in real time on a daily or weekly basis. We investigate whether mixed-frequency models may be used to take advantage of these rich data sets. We show that, among a range of alternative high-frequency predictors, especially changes in U.S. crude oil inventories produce substantial and statistically significant real-time improvements in forecast accuracy. The preferred MIDAS model reduces the MSPE by as much as 16 percent compared with the no-change forecast and has statistically significant directional accuracy as high as 82 percent. This MIDAS forecast is also more accurate than a mixed-frequency real-time VAR forecast, but not systematically more accurate than the corresponding forecast based on monthly inventories. We conclude that typically not much is lost by ignoring high-frequency financial data in forecasting the monthly real price of oil.
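The two evaluation metrics cited above, MSPE reduction relative to the no-change forecast and directional accuracy, are computed as follows; the price series and model forecasts below are invented for illustration:

```python
# Forecast evaluation against the no-change (random walk) benchmark.
# All numbers are toy data, not the paper's oil price series.

actual   = [100, 103, 101, 105, 108, 106]   # realized prices
forecast = [101, 102, 102, 106, 107, 107]   # hypothetical model forecasts

# No-change forecast: next period's price equals the last observed price
# (100 is the assumed pre-sample observation).
no_change = [100] + actual[:-1]

mspe_model = sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)
mspe_nc    = sum((a - f) ** 2 for a, f in zip(actual, no_change)) / len(actual)
reduction  = 1 - mspe_model / mspe_nc   # e.g. 0.16 means a 16% MSPE reduction

# Directional accuracy: share of periods where the forecast predicts the
# correct sign of the price change relative to the last observed price.
hits = sum((f - p) * (a - p) > 0
           for f, p, a in zip(forecast, no_change, actual))

print(f"MSPE reduction: {reduction:.1%}, directional hits: {hits}/{len(actual)}")
```

Statistical significance of these gains would additionally require tests such as Diebold-Mariano for MSPE and a sign test for directional accuracy, which the sketch omits.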
Model case procedures have some fundamentals in common with collective redress in civil law countries. This is particularly true in the field of investor protection, which is highly regulated and marked by enforcement failures that led the German legislator to enact the KapMuG and, recently, its amendment, both of which highlight exemplary elements of model case procedure. A survey of the ongoing activities of the European Union in the area of collective redress and of their repercussions at the member state level therefore forms a suitable basis for the following analysis of the 2012 amendment of the KapMuG. It clearly brings into focus a shift from sector-specific regulation with an emphasis on the cross-border aspect of protecting consumers towards a “coherent approach” strengthening the enforcement of EU law. As a result, regulatory policy and collective redress are two sides of the same coin today. With respect to the KapMuG, such a development brings about some tension between its aim of aggregating small individual claims as efficiently as possible and the dominant role of individual procedural rights in German civil procedure. This conflict can be illustrated by some specific rules of the KapMuG: its scope of application, the three-tier structure of a model case procedure, the newly introduced notification of claims and the new opt-out settlement under the amended §§ 17-19.
We propose the realized systemic risk beta as a measure for financial companies’ contribution to systemic risk given network interdependence between firms’ tail risk exposures. Conditional on statistically pre-identified network spillover effects and market as well as balance sheet information, we define the realized systemic risk beta as the total time-varying marginal effect of a firm’s Value-at-risk (VaR) on the system’s VaR. Statistical inference reveals a multitude of relevant risk spillover channels and determines companies’ systemic importance in the U.S. financial system. Our approach can be used to monitor companies’ systemic importance allowing for a transparent macroprudential supervision.
We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
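The copula-based transformation described above maps non-negative residuals into the Gaussian domain. A minimal sketch using the empirical CDF as a simplified stand-in for the estimated marginals (the residuals below are invented):

```python
# Map multiplicative-error residuals into the Gaussian domain:
# empirical CDF -> standard normal quantile function.
from statistics import NormalDist

residuals = [0.3, 1.2, 0.8, 2.5, 0.5, 1.7, 1.0, 0.9]  # toy residuals

n = len(residuals)
norm = NormalDist()

def to_gaussian(x, sample):
    # Empirical CDF with an n/(n+1) correction to keep u strictly in (0, 1)
    u = sum(s <= x for s in sample) / (n + 1)
    return norm.inv_cdf(u)

gaussianized = [to_gaussian(x, residuals) for x in residuals]
print([round(g, 3) for g in gaussianized])
```

The transformed series is rank-preserving and approximately standard normal, which is what allows a DCC-GARCH specification to model its time-varying conditional variances and correlations.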
Does it pay to invest in art? A selection-corrected returns perspective : [draft october 15, 2013]
(2013)
This paper shows the importance of correcting for sample selection when investing in illiquid assets with endogenous trading. Using a large sample of 20,538 paintings that were sold repeatedly at auction between 1972 and 2010, we find that paintings with higher price appreciation are more likely to trade. This strongly biases estimates of returns. The selection-corrected average annual index return is 6.5 percent, down from 10 percent for traditional uncorrected repeat sales regressions, and Sharpe Ratios drop from 0.24 to 0.04. From a pure financial perspective, passive index investing in paintings is not a viable investment strategy once selection bias is accounted for. Our results have important implications for other illiquid asset classes that trade endogenously.
The 2011 European short sale ban on financial stocks: a cure or a curse? : [version 31 july 2013]
(2013)
Did the August 2011 European short sale bans on financial stocks accomplish their goals? In order to answer this question, we use stock options’ implied volatility skews to proxy for investors’ risk aversion. We find that on ban announcement day, risk aversion levels rose for all stocks but more so for the banned financial stocks. The banned stocks’ volatility skews remained elevated during the ban but dropped for the other unbanned stocks. We show that it is the imposition of the ban itself that led to the increase in risk aversion rather than other causes such as information flow, options trading volumes, or stock specific factors. Substitution effects were minimal, as banned stocks’ put trading volumes and put-call ratios declined during the ban. We argue that although the ban succeeded in curbing further selling pressure on financial stocks by redirecting trading activity towards index options, this result came at the cost of increased risk aversion and some degree of market failure.
We show that the presence of high frequency trading (HFT) has significantly mitigated the frequency and severity of end-of-day price dislocation, counter to recent concerns expressed in the media. The effect of HFT is more pronounced on days when end-of-day price dislocation is more likely to be the result of market manipulation, such as option expiry dates and month-end days. Moreover, the effect of HFT is more pronounced than the role of trading rules, surveillance, enforcement and legal conditions in curtailing the frequency and severity of end-of-day price dislocation. We show our findings are robust to different proxies for the start of HFT, including trade size, cancellation of orders, and co-location.
We examine the impact of stock exchange trading rules and surveillance on the frequency and severity of suspected insider trading cases in 22 stock exchanges around the world over the period January 2003 through June 2011. Using new indices for market manipulation, insider trading, and broker-agency conflict based on the specific provisions of the trading rules of each stock exchange, along with surveillance to detect non-compliance with such rules, we show that more detailed exchange trading rules and surveillance over time and across markets significantly reduce the number of cases, but increase the profits per case.
We use responses to survey questions in the 2010 Italian Survey of Household Income and Wealth that ask consumers how much of an unexpected transitory income change they would consume. We find that the marginal propensity to consume (MPC) is 48 percent on average, and that there is substantial heterogeneity in the distribution. We find that households with low cash-on-hand exhibit a much higher MPC than affluent households, which is in agreement with models with precautionary savings where income risk plays an important role. The results have important implications for the evaluation of fiscal policy, and for predicting household responses to tax reforms and redistributive policies. In particular, we find that a debt-financed increase in transfers of 1 percent of national disposable income targeted to the bottom decile of the cash-on-hand distribution would increase aggregate consumption by 0.82 percent. Furthermore, we find that redistributing 1% of national disposable income from the top to the bottom decile of the income distribution would boost aggregate consumption by 0.33%.
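The arithmetic behind the two policy experiments above is a transfer size multiplied by group-specific MPCs. The decile MPCs below are chosen to reproduce the aggregate effects reported in the abstract and are otherwise illustrative:

```python
# Back-of-the-envelope aggregation of heterogeneous MPCs.
# Decile MPCs are picked to match the abstract's aggregate numbers.

transfer = 0.01            # transfer worth 1% of national disposable income
mpc_bottom_decile = 0.82   # MPC of low cash-on-hand households (illustrative)
mpc_top_decile = 0.49      # MPC of affluent households (illustrative)

# Debt-financed transfer to the bottom decile: only the recipients respond
debt_financed_effect = transfer * mpc_bottom_decile

# Revenue-neutral redistribution from top to bottom decile: the effect is
# the transfer times the difference in the two groups' MPCs
redistribution_effect = transfer * (mpc_bottom_decile - mpc_top_decile)

print(f"debt-financed transfer:  +{debt_financed_effect:.2%} of consumption")
print(f"top-to-bottom transfer:  +{redistribution_effect:.2%} of consumption")
```

This makes clear why MPC heterogeneity matters for fiscal policy: with a uniform MPC, the revenue-neutral redistribution would have no aggregate effect at all.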
Prior research suggests that those who rely on intuition rather than effortful reasoning when making decisions are less averse to risk and ambiguity. The evidence is largely correlational, however, leaving open the question of the direction of causality. In this paper, we present experimental evidence of causation running from reliance on intuition to risk and ambiguity preferences. We directly manipulate participants’ predilection to rely on intuition and find that enhancing reliance on intuition lowers the probability of being ambiguity averse by 30 percentage points and increases risk tolerance by about 30 percent in the experimental sub-population where we would a priori expect the manipulation to be successful (males).
Investment in financial literacy, social security and portfolio choice : [version may 21, 2013]
(2013)
We present an intertemporal portfolio choice model where individuals invest in financial literacy, save, allocate their wealth between a safe and a risky asset, and receive a pension when they retire. Financial literacy affects the excess return and the cost of stock market participation. Since literacy depreciates over time and has a cost related to current consumption, investors simultaneously choose how much to save, the portfolio allocation, and the optimal investment in literacy. The latter depends on the household's resources and preference parameters, on how much financial literacy affects the returns on risky assets and the stock market participation cost, and on the returns on social security wealth. The model implies one should observe a positive correlation between stock market participation (and risky asset share, conditional on participation) and financial literacy, and a negative correlation between the generosity of the social security system and financial literacy. The model also implies that the stock of financial literacy accumulated early in life is positively correlated with the individual's wealth and portfolio allocations later in life. Using microeconomic cross-country data, we find support for these predictions.
The U.S. Energy Information Administration (EIA) regularly publishes monthly and quarterly forecasts of the price of crude oil for horizons up to two years, which are widely used by practitioners. Traditionally, such out-of-sample forecasts have been largely judgmental, making them difficult to replicate and justify. An alternative is the use of real-time econometric oil price forecasting models. We investigate the merits of constructing combinations of six such models. Forecast combinations have received little attention in the oil price forecasting literature to date. We demonstrate that over the last 20 years suitably constructed real-time forecast combinations would have been systematically more accurate than the no-change forecast at horizons up to 6 quarters or 18 months. MSPE reduction may be as high as 12% and directional accuracy as high as 72%. The gains in accuracy are robust over time. In contrast, the EIA oil price forecasts not only tend to be less accurate than no-change forecasts, but are much less accurate than our preferred forecast combination. Moreover, including EIA forecasts in the forecast combination systematically lowers the accuracy of the combination forecast. We conclude that suitably constructed forecast combinations should replace traditional judgmental forecasts of the price of oil.
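An equal-weight forecast combination of the kind evaluated above can be sketched in a few lines; all series below are invented toy numbers, not EIA or oil-market data:

```python
# Equal-weight combination of several model forecasts, compared with the
# no-change benchmark by MSPE. All numbers are toy data.

actual = [100, 104, 102, 107]
model_forecasts = [            # rows: models, columns: periods
    [101, 103, 103, 106],
    [ 99, 105, 101, 108],
    [100, 104, 104, 105],
]

# Equal-weight combination: per-period mean across models
combo = [sum(col) / len(col) for col in zip(*model_forecasts)]

last_obs = 101                 # assumed price observed just before the sample
no_change = [last_obs] + actual[:-1]

def mspe(forecasts):
    return sum((a - f) ** 2 for a, f in zip(actual, forecasts)) / len(actual)

print(f"combination MSPE: {mspe(combo):.3f} vs no-change: {mspe(no_change):.3f}")
```

Equal weights are only the simplest choice; the paper's point is that even such simple combinations can be systematically more accurate than both the no-change benchmark and judgmental forecasts, because averaging diversifies across individual model errors.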
Are product spreads useful for forecasting? An empirical evaluation of the Verleger hypothesis
(2013)
Notwithstanding a resurgence in research on out-of-sample forecasts of the price of oil in recent years, there is one important approach to forecasting the real price of oil which has not been studied systematically to date. This approach is based on the premise that demand for crude oil derives from the demand for refined products such as gasoline or heating oil. Oil industry analysts such as Philip Verleger and financial analysts widely believe that there is predictive power in the product spread, defined as the difference between suitably weighted refined product market prices and the price of crude oil. Our objective is to evaluate this proposition. We derive from first principles a number of alternative forecasting model specifications involving product spreads and compare these models to the no-change forecast of the real price of oil. We show that not all product spread models are useful for out-of-sample forecasting, but some models are, even at horizons between one and two years. The most accurate model is a time-varying parameter model of gasoline and heating oil spot spreads that allows the marginal product market to change over time. We document MSPE reductions as high as 20% and directional accuracy as high as 63% at the two-year horizon, making product spread models a good complement to forecasting models based on economic fundamentals, which work best at short horizons.
U.S. retail food price increases in recent years may seem large in nominal terms, but after adjusting for inflation have been quite modest even after the change in U.S. biofuel policies in 2006. In contrast, increases in the real prices of corn, soybeans, wheat and rice received by U.S. farmers have been more substantial and can be linked in part to increases in the real price of oil. That link, however, appears largely driven by common macroeconomic determinants of the prices of oil and agricultural commodities rather than by the pass-through from higher oil prices. We show that there is no evidence that corn ethanol mandates have created a tight link between oil and agricultural markets. Rather, increases in food commodity prices not associated with changes in global real activity appear to reflect a wide range of idiosyncratic shocks, ranging from changes in biofuel policies to poor harvests. Increases in agricultural commodity prices in turn contribute little to U.S. retail food price increases, because of the small cost share of agricultural products in food prices. There is no evidence that oil price shocks have caused more than a negligible increase in retail food prices in recent years. Nor is there evidence for the prevailing wisdom that oil-price-driven increases in the cost of food processing, packaging, transportation and distribution are responsible for higher retail food prices. Finally, there is no evidence that oil-market-specific events or, for that matter, U.S. biofuel policies help explain the evolution of the real price of rice, which is perhaps the single most important food commodity for many developing countries.
We investigate the theoretical impact of including two empirically grounded insights in a dynamic life cycle portfolio choice model. The first is to recognize that, when managing their own financial wealth, investors incur opportunity costs in terms of current and future human capital accumulation, particularly if human capital is acquired via learning by doing. The second is to incorporate age-varying efficiency patterns in financial decision-making. Both enhancements produce inactivity in portfolio adjustment patterns consistent with empirical evidence. We also analyze individuals' optimal choice between self-managing their wealth and delegating the task to a financial advisor. Delegation proves most valuable to the young and the old. Our calibrated model quantifies the welfare gains from including investment time and money costs, as well as delegation, in a life cycle setting.
Household decisions are profoundly shaped by a complex set of financial options due to Social Security rules determining retirement, spousal, and survivor benefits, along with benefit adjustments that vary with the age at which these are claimed. These rules influence optimal household asset allocation, insurance, and work decisions, given life cycle demographic shocks such as marriage, divorce, and children. Our model generates a wealth profile and a low and stable equity fraction consistent with empirical evidence. We also confirm predictions that wives will claim retirement benefits earlier than husbands, while life insurance is mainly purchased by younger men. Our policy simulations imply that eliminating survivor benefits would sharply reduce claiming differences by sex while dramatically increasing men’s life insurance purchases.
This paper employs stochastic simulations of the New Area-Wide Model—a microfounded open-economy model developed at the ECB—to investigate the consequences of the zero lower bound on nominal interest rates for the evolution of risks to price stability in the euro area during the recent financial crisis. Using a formal measure of the balance of risks, which is derived from policy-makers’ preferences about inflation outcomes, we first show that downside risks to price stability were considerably greater than upside risks during the first half of 2009, followed by a gradual rebalancing of these risks until mid-2011 and a renewed deterioration thereafter. We find that the lower bound has induced a noticeable downward bias in the risk balance throughout our evaluation period because of the implied amplification of deflation risks. We then illustrate that, with nominal interest rates close to zero, forward guidance in the form of a time-based conditional commitment to keep interest rates low for longer can be successful in mitigating downside risks to price stability. However, we find that the provision of time-based forward guidance may give rise to upside risks over the medium term if extended too far into the future. By contrast, time-based forward guidance complemented with a threshold condition concerning tolerable future inflation can provide insurance against the materialisation of such upside risks.
Empirical evidence suggests that asset returns correlate more strongly in bear markets than conventional correlation estimates imply. We propose a method for determining complete tail correlation matrices based on Value-at-Risk (VaR) estimates. We demonstrate how to obtain more efficient tail-correlation estimates by use of overidentification strategies and how to guarantee positive semidefiniteness, a property required for valid risk aggregation and Markowitz-type portfolio optimization. An empirical application to a 30-asset universe illustrates the practical applicability and relevance of the approach in portfolio management.
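Why positive semidefiniteness matters, and one generic way to enforce it, can be illustrated with a simple eigenvalue-clipping projection. This is a standard textbook device, assumed here for illustration only; it is not the overidentification-based estimator developed in the paper:

```python
import numpy as np

def nearest_psd_correlation(C, eps=0.0):
    """Project a (possibly indefinite) pairwise tail-correlation estimate
    onto the positive semidefinite cone by clipping negative eigenvalues,
    then rescale so the result is again a correlation matrix."""
    C = (C + C.T) / 2                       # enforce symmetry
    w, V = np.linalg.eigh(C)                # symmetric eigendecomposition
    w = np.clip(w, eps, None)               # drop negative eigenvalues
    P = V @ np.diag(w) @ V.T
    d = np.sqrt(np.diag(P))
    P = P / np.outer(d, d)                  # restore unit diagonal
    np.fill_diagonal(P, 1.0)
    return P
```

Pairwise tail-correlation estimates assembled into a matrix need not be positive semidefinite, in which case a "portfolio variance" computed from them can come out negative; a projection of this kind restores a matrix that is usable for risk aggregation and portfolio optimization.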
We analyze the equilibrium in a two-tree (sector) economy with two regimes. The output of each tree is driven by a jump-diffusion process, and a downward jump in one sector of the economy can (but need not) trigger a shift to a regime where the likelihood of future jumps is generally higher. Furthermore, the true regime is unobservable, so that the representative Epstein-Zin investor has to extract the probability of being in a certain regime from the data. These two channels help us to match the stylized facts of countercyclical and excessive return volatilities and correlations between sectors. Moreover, the model reproduces the predictability of stock returns in the data without generating consumption growth predictability. The uncertainty about the state also reduces the slope of the term structure of equity. We document that heterogeneity between the two sectors with respect to shock propagation risk can lead to highly persistent aggregate price-dividend ratios. Finally, the possibility of jumps in one sector triggering higher overall jump probabilities boosts jump risk premia while uncertainty about the regime is the reason for sizeable diffusive risk premia.
This study presents an empirical analysis of capital and liability management in eight cases of bank restructurings and resolutions from eight different European countries. It can be read as a companion piece to an earlier study by the author covering the specific bank restructuring programs of Greece, Spain and Cyprus during 2012/13.
The study portrays for each case the timeline between the initial credit event and the (last) restructuring. It proceeds to discuss the capital and liability management activity before the restructuring and the restructuring itself, attempts to calibrate the extent of creditor participation as well as the expected loss to the government, and engages in a counterfactual discussion of what a least-cost restructuring approach could have looked like.
Four of the eight cases are resolutions, i.e. the original bank is unwound (Anglo Irish Bank, Amagerbanken, Dexia, Laiki), while the other four banks have de facto or de jure been nationalized and are awaiting re-privatization after the restructuring (Deutsche Pfandbriefbank/Hypo Real Estate, Bankia, SNS Reaal, Alpha Bank). The case selection follows considerations of their model character for the European bank restructuring and resolution policy discussion while straddling both the U.S. (2007 - 2010) and the European (2010 - ) legs of the financial crisis, each of which saw very different policy responses....
We provide an assessment of the determinants of the risk premia paid by non-financial corporations on long-term bonds. Looking at 5,500 issues over the period 2005-2012, we find that in recent years sovereign debt market turbulence has been a major driver of corporate risk. Compared with the three-year period 2005-07 before the global financial crisis, in the years 2010-12 Italian, Spanish and Portuguese firms paid on average between 70 and 120 basis points of additional premium due to the negative spillovers from the sovereign debt crisis, while German firms enjoyed a discount of 40 basis points.
Advances in technology and several regulatory initiatives have led to the emergence of a competitive but fragmented equity trading landscape in the US and Europe. While these changes have brought several benefits, such as reduced transaction costs, regulators and market participants have also raised concerns about the potential adverse effects associated with increased execution complexity and about the impact on market quality of new types of venues such as dark pools. In this article we review the theoretical and empirical literature examining the economic arguments and motivations underlying market fragmentation, as well as the resulting implications for investors' welfare. We start with the literature that views exchanges as natural monopolies due to the presence of network externalities, and then examine studies which challenge this view by focusing on trader heterogeneity and other aspects of the microstructure of equity markets.
This paper examines a practice that is nearly imperceptible to historians because the bulk of evidence for it is to be found in the interstices of the beaten paths of legal and social history and because it mixes economic and religious matters in a strikingly unfamiliar manner. From the thirteenth to the sixteenth century, excommunication for debt offered ordinary people an economical, efficacious enforcement mechanism for small-scale, daily, unwritten credit. At the same time, the practice offered holders of ecclesiastical jurisdiction an important opportunity to round out their incomes, particularly in the difficult fifteenth century. This transitional practice reveals a level of credit below that of the letters of change, annuities secured on real property, or written obligations beloved of economic historians and historians of banking. Studying the practice casts light on the transition from the face-to-face, local economies of the high Middle Ages to the regional economies of the early modern period, on how the Reformation shaped early modern regimes of credit, and on how the disappearance of ecclesiastical civil justice facilitated the emergence of early modern juridically sovereign territories.
German Expressionist cinema is a movement that began in 1919. Expressionist film is marked by distinct visual features and performance styles that rebel against prior realist art movements. More than 20 years before the Expressionist movement, Sigmund Freud published "The Interpretation of Dreams" in 1899, a groundbreaking study that links dreams to unconscious impulses. This thesis argues that the unexplained dream-like imagery found in two Expressionist films, The Cabinet of Dr. Caligari (Robert Wiene, 1920) and Dr. Mabuse, the Gambler (Fritz Lang, 1922), can be seen in terms of Freud's model of dreaming.
The paper explains the absence of resultative secondary predication in Russian as arising from a conflict of inferential interpretations. It formalises the framework necessary to express this proposal in terms of abductive reasoning with Poole systems in Gricean contexts. The conflict is shown to arise for default rules regulating alternative realisation of verb-internally specified consequent states. The paper thus indicates that typological variation may be due not only to different parameter values but to general inferential properties of the syntax-semantics mapping. The proposed theory also contradicts some widespread proposals that the absence of resultative secondary predication is due to the absence of some particular language feature.
Volume II of II
Volume I of II
The papers in this volume were presented at the eleventh meeting of the Austronesian Formal Linguistics Association (AFLA 11), held from April 23-25 at the Zentrum für Allgemeine Sprachwissenschaft, Berlin, Germany. The conference was organized by Hans-Martin Gärtner, Joachim Sabel, and myself, as part of the research project Clause Structure and Adjuncts in Austronesian Languages. We gratefully acknowledge the financial support by the German Research Foundation (Deutsche Forschungsgemeinschaft). We would like to thank Wayan Arka, Agibail Cohn, Laura Downing, Silke Hamann, S J Hannahs, Ray Harlow, Nikolaus Himmelmann, Yuchua E. Hsiao, Lillian Huang, Ed Keenan, Glyne Piggott, Charles Randriamasimanana, Joszef Szakos, Barbara Stiebels, Jane Tang, Lisa Travis, Noami Tsukido, Sam Wang, Elizabeth Zeitoun, Kie Ross Zuraw, and Marzena Zygis for reviewing the abstracts. We are thankful to Mechthild Bernhard, Jenny Ehrhardt, Fabienne Fritzsche, Theódóra Torfadóttir and Tue Trinh for their help during the conference. I would like to thank Theódóra for providing essential editorial assistance.
The paper makes two contributions to semantic typology of secondary predicates. It provides an explanation of the fact that Russian has no resultative secondary predicates, relating this explanation to the interpretation of secondary predicates in English. And it relates depictive secondary predicates in Russian, which usually occur in the instrumental case, to other uses of the instrumental case in Russian, establishing here, too, a difference to English concerning the scope of the secondary predication phenomenon.
We present a thought-provoking study of two monetary models: the cash-in-advance and the Lagos and Wright (2005) models. We report that the two approaches to modeling money (reduced-form vs. an explicit role) induce neither theoretical nor quantitative differences in results. Given identical preferences, technologies and shocks, both models reduce to one difference equation. The equations differ only if price distortions are imposed differentially across the models. To illustrate: when cash prices are equally distorted in both models, equally large welfare costs of inflation are obtained in each. Our insight is that if results differ, this is due to differential assumptions about the pricing mechanism that governs cash transactions, not to the explicit microfoundation of money.
This paper summarizes the key proposals of the report by the Liikanen Commission. It starts with an explanation of the crisis narrative underlying the Report and its proposals, which aim at a revitalization of market discipline in financial markets. The two main structural proposals of the Liikanen Report are: first, for large banks, the separation of the trading business from other parts of the banking business (the "Separation Proposal"); and second, the mandatory issuance of subordinated bank debt intended to absorb losses (the strict "Bail-in Proposal"). The credibility of this commitment to private liability is achieved by strict holding restrictions. A concluding part addresses the anticipated consequences of introducing these structural regulations for the financial industry and markets.
The financial crisis which started in 2007 has caused a tremendous challenge for monetary policy. The simple concept of inflation targeting has lost its position as state of the art. There is a debate on whether the mandate of a central bank should not be widened. And, indeed, monetary policy has been very accommodative in the last couple of years and central banks have modified their communication strategies by introducing forward guidance as a new policy tool. This paper addresses the consequences of these developments for the credibility, the reputation and the independence of central banks. It also comments on the recent debate among economists concerning the question whether the ECB's OMT program is compatible with its mandate.
These proceedings, also available online as No. 46 in the ZAS Papers in Linguistics series at http://www.zas.gwz-berlin.de/index.html?publications_zaspil, have resulted from the International Conference "Focus in African languages" held October 6-8, 2005 at the Zentrum für Allgemeine Sprachwissenschaft (ZAS) in Berlin. The conference was organized jointly by the ZAS and the Collaborative Research Center (Sonderforschungsbereich) 632, generously funded by the German Research Foundation (DFG). It was the first conference on this scale to bring together colleagues from all over the world working on this topic.
Even though this volume contains only ten contributions out of the 35 papers presented at the conference, it displays the wide range of approaches, subjects and languages studied in the field of information structure in African languages. The collection thus reflects the synergetic atmosphere of the conference.
In the name of all organizers (Laura Downing, Ines Fiedler, Katharina Hartmann, Brigitte Reineke, Anne Schwarz, Sabine Zerbian, Malte Zimmermann), we would like to take this opportunity to thank the participants, reviewers and student assistants for their contributions, which made the conference such a fruitful forum for inspiring and seminal studies in this field. Special thanks also go to our research assistants Lars Marstaller and Paul Starzmann for their effort in copy editing.
The main thesis of this dissertation is that Northern Sotho makes no obligatory use of grammatical means to mark focus, neither in syntax nor in prosody or morphology. Nevertheless, the language structures an utterance according to information-structural considerations. Constituents that are given in discourse are either deleted, pronominalized, or moved to the right or left edge of the clause. These (morpho-)syntactic processes interact in such a way that the focused constituent often appears finally in its clause. Although the final position is not a designated focus position, knowledge of this tendency is nonetheless crucial for understanding a morphological alternation that appears on the verb in Northern Sotho and has been discussed in the literature in connection with focus.
Although Northern Sotho thus lacks a direct grammatical expression of formal F(ocus) marking, F-marking is nevertheless crucial for the grammar of this language: focused logical subjects cannot appear in the canonical preverbal position. Instead, they appear either postverbally or in a cleft, depending on the valency of the verb. Although Northern Sotho shows a correspondence of complex form with complex meaning in the use of clefts with objects, this correspondence does not hold for logical subjects.
The present dissertation models the above findings within the theoretical framework of Optimality Theory (OT). Syntactic in-situ focus and the absence of prosodic focus marking can be captured with uncontroversial constraints. For the ungrammaticality of focused logical subjects in preverbal position, this work proposes a modification of a constraint from the literature that is of crucial importance in Northern Sotho. The form-meaning correspondence is treated, like other phenomena of pragmatic division of labor, within weakly bidirectional Optimality Theory.
This paper deals with the spelling normalization of historical texts with regard to further processing with modern part-of-speech taggers. Different methods for this task are presented and evaluated on a set of historical German texts from the 15th to the 18th century, and specific problems inherent in the processing of historical data are discussed. A chain combination of word-based and character-based techniques is shown to be best for normalization, while POS tagging of normalized data is shown to benefit from ignoring punctuation marks. Using these techniques, with 500 manually normalized tokens as training data for the normalization, the tagging accuracy on a manuscript from the 15th century can be raised from 28.65% to 76.27%.
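The chain idea, a word-based lookup with a character-based fallback, can be sketched as follows. The lexicon entries and rewrite rules below are hypothetical toy examples for Early New High German spellings, not the trained components evaluated in the paper:

```python
import re

def normalize(token, lexicon, char_rules):
    """Chain normalization sketch: try a word-based lexicon lookup first;
    if the token is unknown, fall back to character-based rewrite rules
    applied in order."""
    if token in lexicon:                    # word-based step
        return lexicon[token]
    norm = token
    for pattern, repl in char_rules:        # character-based step
        norm = re.sub(pattern, repl, norm)
    return norm

# Hypothetical illustrative resources:
lexicon = {"vnnd": "und"}                   # memorized full-word mapping
char_rules = [
    (r"^v(?=[nm])", "u"),                   # initial <v> before a nasal -> <u>
    (r"y", "i"),                            # <y> -> <i>
    (r"th", "t"),                           # <th> -> <t>
]
```

For example, "vnnd" is resolved by the lexicon, while an unseen token such as "thyer" is handled by the character rules alone; the trained word-based model catches frequent irregular mappings, and the character model generalizes to unseen spellings.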
All of the papers in the volume except one (Kaji) take up some aspect of relative clause construction in some Bantu language. Kaji’s paper aims to account for how Tooro (J12; western Uganda) lost phonological tone through a comparative study of the tone systems of other western Uganda Bantu languages. The other papers examine a range of ways of forming relative clauses, often including non-restrictive relatives and clefts, in a wide range of languages representing a variety of prosodic systems.
The purpose of this dissertation is to defend the idea that the empirical responsibilities of binding theory can be handled in a more psychologically and historically realistic way when assigned to the field of pragmatics. In particular, I wish to show that Optimality Theory (OT) (Prince & Smolensky, 1993), the stochastic OT and Gradual Learning Algorithm of Boersma (1998), the Recoverability OT of Wilson (2001) and Buchwald et al. (2002), and the bidirectional OT of Blutner (2000b) and Bidirectional Gradual Learning Algorithm of Jäger (2003a) can all participate in a formal framework in which one can spell out and justify the idea that the distributional behavior of bound pronouns and reflexives is a pragmatic phenomenon.
This volume represents a collection of papers that present some of the results of two projects on control: on the one hand, the project Typology of complement control directed by Barbara Stiebels and funded by the German Research Foundation (DFG STI 151/2-2), and on the other hand the project Variation in control structures directed by Maria Polinsky and Eric Potsdam and funded by the US National Science Foundation (NSF grants BCS-0131946, BCS-0131993; website http://accent.ucsd.edu/). Whereas the first project pursued a lexical approach to control with a semantic definition of obligatory control, the second project has mainly pursued a syntactic approach to control – with special emphasis on less studied control structures (such as adjunct control, backward control, finite control, etc.). Both projects have aimed at extending the research on complement control to structures that differ from the prototypical cases of infinitival complements with empty subjects found in many Indo-European languages; their common interest was to bring in new empirical data, both primary and experimental.
The 48th volume of the ZAS Papers in Linguistics presents selected papers from the conference on Intersentential pronominal reference in child and adult language held at the ZAS in December, 2006. The conference, organized by the project Acquisition and disambiguation of intersentential pronominal reference, brought together leading researchers dealing with anaphora resolution in diverse theoretical approaches and the acquisition perspective on pronominal reference taken by the ZAS project.
This 18th issue of ZAS Papers in Linguistics consists of papers on the development of verb acquisition in 9 languages, from the very early stages up to the onset of paradigm construction. Each of the 10 papers deals with first-language developmental processes in one or two children studied via longitudinal data. The languages involved are French, Spanish, Russian, Croatian, Lithuanian, Finnish, English and German. For German, two different varieties are examined, one from Berlin and one from Vienna. All papers are based on presentations at the workshop 'Early verbs: On the way to mini-paradigms' held at the ZAS (Berlin) on 30/31 September 2000. This workshop brought to a close the first phase of cooperation between two projects on language acquisition which started in October 1999:
a) the project on "Syntaktische Konsequenzen des Morphologieerwerbs" at the ZAS (Berlin) headed by Juergen Weissenborn and Ewald Lang, and financially supported by the Deutsche Forschungsgemeinschaft, and
b) the international "Crosslinguistic Project on Pre- and Protomorphology in Language Acquisition" coordinated by Wolfgang U. Dressler on behalf of the Austrian Academy of Sciences.
This volume comprises papers that were given at the workshop Information Structure and the Referential Status of Linguistic Expressions, which we organized during the Deutsche Gesellschaft für Sprachwissenschaft (DGfS) Conference in Leipzig in February 2001. At this workshop we discussed the connection between information structure and the referential interpretation of linguistic expressions, a topic mostly neglected in current linguistics research. One common aim of the papers is to find out to what extent the focus-background as well as the topic-comment structuring determine the referential interpretation of simple arguments like definite and indefinite NPs on the one hand and sentences on the other.
The papers in this volume were originally presented at the Workshop on Bantu Wh-questions, held at the Institut des Sciences de l’Homme, Université Lyon 2, on 25-26 March 2011, which was organized by the French-German cooperative project on the Phonology/Syntax Interface in Bantu Languages (BANTU PSYN). This project, which is funded by the ANR and the DFG, comprises three research teams, based in Berlin, Paris and Lyon. The Berlin team, at the ZAS, is: Laura Downing (project leader) and Kristina Riedel (post-doc). The Paris team, at the Laboratoire de phonétique et phonologie (LPP; UMR 7018), is: Annie Rialland (project leader), Cédric Patin (Maître de Conférences, STL, Université Lille 3), Jean-Marc Beltzung (post-doc), Martial Embanga Aborobongui (doctoral student), Fatima Hamlaoui (post-doc). The Lyon team, at the Dynamique du Langage (UMR 5596), is: Gérard Philippson (project leader) and Sophie Manus (Maître de Conférences, Université Lyon 2). These three research teams bring together the range of theoretical expertise necessary to investigate the phonology-syntax interface: intonation (Patin, Rialland), tonal phonology (Aborobongui, Downing, Manus, Patin, Philippson, Rialland), phonology-syntax interface (Downing, Patin) and formal syntax (Riedel, Hamlaoui). They also bring together a range of Bantu language expertise: Western Bantu (Aborobongui, Rialland), Eastern Bantu (Manus, Patin, Philippson, Riedel), and Southern Bantu (Downing).
To monitor one's speech means to check the speech plan for errors, both before and after talking. There are several theories of how this process works. We give a short overview of the most influential theories and then focus on the most widely received one, the Perceptual Loop Theory of monitoring (Levelt, 1983). One of the underlying assumptions of this theory is the existence of an Inner Loop, a monitoring device that checks for errors before speech is articulated. This paper collects evidence for the existence of such an internal monitoring device and asks how it might work. Levelt's theory argues that internal monitoring works by means of perception, but other empirical findings allow for the assumption that an Inner Loop could also use our speech production devices. Based on data from both experimental and aphasiological studies, we develop a model based on Levelt (1983) which shows that internal monitoring might in fact make use of both perception and production means.
Table of Contents:
T. A. Hall (Indiana University): English syllabification as the interaction of markedness constraints
Antony D. Green: Opacity in Tiberian Hebrew: Morphology, not phonology
Sabine Zerbian (ZAS Berlin): Phonological Phrases in Xhosa (Southern Bantu)
Laura J. Downing (ZAS Berlin): What African Languages Tell Us About Accent Typology
Marzena Zygis (ZAS Berlin): (Un)markedness of trills: the case of Slavic r-palatalisation
Laura J. Downing (ZAS Berlin), Al Mtenje (University of Malawi), Bernd Pompino-Marschall (Humboldt-Universität Berlin): Prosody and Information Structure in Chichewa
T. A. Hall (Indiana University), Silke Hamann (ZAS Berlin), Marzena Zygis (ZAS Berlin): The phonetics of stop assibilation
Christian Geng (ZAS Berlin), Christine Mooshammer (Universität Kiel): The Hungarian palatal stop: phonological considerations and phonetic data
The comprehension and production of single words involve a variety of processing stages. Which stages need to be accessed differs depending on whether objects (pictures in an experimental environment) or words are supposed to be named. Naming tasks are often employed in psycholinguistic studies in order to provide an insight into the function of mental processes during word production. Differences in naming latencies and naming accuracy between words suggest that the retrieval of some lexical items is easier or more difficult in contrast to others. The relative ease of word retrieval has been found to be strongly influenced by properties of these words, such as familiarity and written or spoken frequency.
Exploring which variables affect naming speed and accuracy will yield more information about the storage and processing of words in general. If a variable has a discernible effect on a specific experimental task, the localization of this effect is of interest for psycholinguistic research, because finding the locus of the effect can help specify models of speech production with respect to which processes occur at which stage of lexical retrieval. Additionally, identifying which variables influence language processing is essential in order to control for these variables when necessary. Otherwise, variance in naming latencies could not be attributed to the variable under test, because other, uncontrolled variables could have altered the results.