CFS working paper series
https://gfk-cfs.de/working-papers/
2001, 13
We use consumer price data for 205 cities/regions in 21 countries to study deviations from the law-of-one-price before, during and after the major currency crises of the 1990s. We combine data from industrialised nations in North America (United States, Canada, Mexico), Europe (Germany, Italy, Spain and Portugal) and the Asia-Pacific region (Japan, Korea, New Zealand, Australia) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). We confirm previous results that both distance and border explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects by significantly increasing these border effects, and by raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent since relative price volatility across emerging markets today is still significantly larger than a decade ago. JEL classification: F40, F41
2001, 07
We use consumer price data for 81 European cities (in Germany, Austria, Switzerland, Italy, Spain and Portugal) to study deviations from the law-of-one-price before and during the European Economic and Monetary Union (EMU). Analysing both aggregate and disaggregate CPI data for seven categories of goods, we find that the distance between cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of the relative price is much higher for two cities located in different countries than for two equidistant cities in the same country. Under EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and border still matter for intra-European relative price volatility. JEL classification: F40, F41
2009, 19
In the New-Keynesian model, optimal interest rate policy under uncertainty is formulated without reference to monetary aggregates as long as certain standard assumptions on the distributions of unobservables are satisfied. The model has been criticized for failing to explain common trends in money growth and inflation, and that therefore money should be used as a cross-check in policy formulation (see Lucas (2007)). We show that the New-Keynesian model can explain such trends if one allows for the possibility of persistent central bank misperceptions. Such misperceptions motivate the search for policies that include additional robustness checks. In earlier work, we proposed an interest rate rule that is near-optimal in normal times but includes a cross-check with monetary information. In case of unusual monetary trends, interest rates are adjusted. In this paper, we show in detail how to derive the appropriate magnitude of the interest rate adjustment following a significant cross-check with monetary information, when the New-Keynesian model is the central bank’s preferred model. The cross-check is shown to be effective in offsetting persistent deviations of inflation due to central bank misperceptions. Keywords: Monetary Policy, New-Keynesian Model, Money, Quantity Theory, European Central Bank, Policy Under Uncertainty
2008, 25
Research with Keynesian-style models has emphasized the importance of the output gap for policies aimed at controlling inflation while declaring monetary aggregates largely irrelevant. Critics, however, have argued that these models need to be modified to account for observed money growth and inflation trends, and that monetary trends may serve as a useful cross-check for monetary policy. We identify an important source of monetary trends in form of persistent central bank misperceptions regarding potential output. Simulations with historical output gap estimates indicate that such misperceptions may induce persistent errors in monetary policy and sustained trends in money growth and inflation. If interest rate prescriptions derived from Keynesian-style models are augmented with a cross-check against money-based estimates of trend inflation, inflation control is improved substantially.
2007, 17
The European Central Bank has assigned a special role to money in its two pillar strategy and has received much criticism for this decision. In this paper, we explore possible justifications. The case against including money in the central bank’s interest rate rule is based on a standard model of the monetary transmission process that underlies many contributions to research on monetary policy in the last two decades. Of course, if one allows for a direct effect of money on output or inflation as in the empirical “two-pillar” Phillips curves estimated in some recent contributions, it would be optimal to include a measure of (long-run) money growth in the rule. In this paper, we develop a justification for including money in the interest rate rule by allowing for imperfect knowledge regarding unobservables such as potential output and equilibrium interest rates. We formulate a novel characterization of ECB-style monetary cross-checking and show that it can generate substantial stabilization benefits in the event of persistent policy misperceptions regarding potential output. Such misperceptions cause a bias in policy setting. We find that cross-checking and changing interest rates in response to sustained deviations of long-run money growth helps the central bank to overcome this bias. Our argument in favor of ECB-style cross-checking does not require direct effects of money on output or inflation. JEL Classification: E32, E41, E43, E52, E58
2007, 18
The European Central Bank has assigned a special role to money in its two pillar strategy and has received much criticism for this decision. The case against including money in the central bank’s interest rate rule is based on a standard model of the monetary transmission process that underlies many contributions to research on monetary policy in the last two decades. In this paper, we develop a justification for including money in the interest rate rule by allowing for imperfect knowledge regarding unobservables such as potential output and equilibrium interest rates. We formulate a novel characterization of ECB-style monetary cross-checking and show that it can generate substantial stabilization benefits in the event of persistent policy misperceptions regarding potential output. JEL Classification: E32, E41, E43, E52, E58
2000, 11
This paper examines empirically the question whether the presence of foreign banks and a liberal trade regime with regard to financial services can contribute to a stabilization of capital flows to emerging markets. Since foreign banks, so the argument goes, provide better information to foreign investors and increase transparency, the danger of herding is reduced. Previous findings by Kono and Schuknecht (1998) confirmed empirically that such an effect does exist. This study expands their data set with respect to the length of the time period and the number of countries. Contrary to Kono and Schuknecht, it is found that foreign bank penetration tends to rather increase the volatility of capital flows. The trade regime variables are not significant in explaining cross-country variations in the volatility of capital flows. This result does not change significantly when alternative measures of volatility are considered. This paper was presented at the conference ''Financial crisis in transition countries: recent lessons and problems yet to solve'' on 13-14 July 2000 at the Institute for Economic Research (IWH) in Halle, Germany.
2001, 02
This paper shows that emerging market eurobond spreads after the Asian crisis can be almost completely explained by market expectations about macroeconomic fundamentals and international interest rates. Contrary to the claim that emerging market bond spreads are driven by market variables such as stock market volatility in the developed countries, it is found that this did not play a significant role after the Asian crisis. Using panel data techniques, it is shown that the determinants of bond spreads can be divided into long-term structural variables and medium-term variables which explain month-to-month changes in bond spreads. As relevant medium-term variables, ''consensus forecasts'' of real GDP growth and inflation, and international interest rates are identified. The long-term structural factors do not explicitly enter the model and show up as fixed or random country-specific effects. These intercepts are highly correlated with the countries' credit rating.
2011, 09
In the microstructure literature, information asymmetry is an important determinant of market liquidity. The classic setting is that uninformed dedicated liquidity suppliers charge price concessions when incoming market orders are likely to be informationally motivated. In limit order book markets, however, this relationship is less clear, as market participants can switch roles, and freely choose to immediately demand or patiently supply liquidity by submitting either market or limit orders. We study the importance of information asymmetry in limit order books based on a recent sample of thirty German DAX stocks. We find that Hasbrouck’s (1991) measure of trade informativeness Granger-causes book liquidity, in particular that required to fill large market orders. Picking-off risk due to public news induced volatility is more important for top-of-the-book liquidity supply. In our multivariate analysis we control for volatility, trading volume, trading intensity and order imbalance to isolate the effect of trade informativeness on book liquidity. JEL Classification: G14 Keywords: Price Impact of Trades, Trading Intensity, Dynamic Duration Models, Spread Decomposition Models, Adverse Selection Risk
2008, 52
Previous evidence suggests that less liquid stocks entail higher average returns. Using NYSE data, we present evidence that both the sensitivity of returns to liquidity and liquidity premia have significantly declined over the past four decades to levels that we cannot statistically distinguish from zero. Furthermore, the profitability of trading strategies based on buying illiquid stocks and selling liquid stocks has declined over the past four decades, rendering such strategies virtually unprofitable. Our results are robust to several conventional liquidity measures related to volume. When using a liquidity measure that is not related to volume, we find only weak evidence of a liquidity premium even in the early periods of our sample. The gradual introduction and proliferation of index funds and exchange traded funds is a possible explanation for these results.
2011, 17
Do firms buy their stock at bargain prices? Evidence from actual stock repurchase disclosures
(2011)
We use new data from SEC filings to investigate how S&P 500 firms execute their open market repurchase programs. We find that smaller S&P 500 firms repurchase less frequently than larger firms, and at a price which is significantly lower than the average market price. Their repurchase activity is followed by a positive and significant abnormal return which lasts up to three months after the repurchase. These findings do not hold for large S&P 500 firms. Our interpretation is that small firms repurchase strategically, whereas the repurchase activity of large firms is more focused on the disbursement of free cash. JEL Classification: G14, G30, G35 Keywords: Stock Repurchases, Stock Buybacks, Payout Policy, Timing, Bid-Ask Spread, Liquidity
1998, 18
We review arguments for and against reserve requirements and conclude that the main question is whether a distinction between money creation and intermediation can be made. We argue that such a distinction can be made in a money-in-advance economy and show that if the money-in-advance constraint is universally binding then reserve requirements on checkable accounts have no effect on intermediation. We then proceed to show that in a model in which trade is uncertain and sequential, a fractional reserve banking system gives rise to endogenous monetary shocks. These endogenous monetary shocks lead to fluctuations in capacity utilisation and waste. When the money-in-advance constraint is universally binding, a 100% reserve requirement on checkable accounts can eliminate this waste.
2006, 19
We use data from several waves of the Survey of Consumer Finances to document credit and debit card ownership and use across US demographic groups. We then present recent theoretical and empirical contributions to the study of credit and debit card behavior. Utilization rates of credit lines and portfolios of card holders present several puzzles. Credit line increases initiated by banks lead households to restore previous utilization rates. High-interest credit card debt co-exists with substantial holdings of low-interest liquid assets and with accumulation of retirement assets. Although available evidence disputes ignorance of credit card terms by card holders, credit card rates do not respond to competition. There is a rising trend in bankruptcy and delinquency, partly attributable to an increased tendency of households to declare bankruptcy associated with reduced social stigma, ease of procedures, and financial incentives. Co-existence of credit card debt with retirement assets can be explained through self-control hyperbolic discounting. Strategic default motives contribute partly to observed co-existence of credit card debt with low-interest liquid assets. A framework of “accountant-shopper” households, in which a rational accountant tries to control an impulsive shopper, seems consistent with both types of co-existence and with observed utilization of credit lines. JEL Classification: G11, E21
2007, 31
It is theoretically clear and may be verified empirically that efficient financial markets can make it less necessary for policy to try to offset the welfare effects of labour income risk and unequal consumption dynamics. The literature has also pointed out that, since international competition exposes workers to new sources of risk at the same time as it makes it easier for individual choices to undermine collective policies, international economic integration makes insurance-oriented government policies more beneficial as well as more difficult to implement. This paper reviews the economic mechanisms underlying these insights and assesses their empirical relevance in cross-country panel data sets. Interactions between indicators of international economic integration, of government economic involvement, and of financial development are consistent with the idea that financial market development can substitute for public schemes when economic integration calls for more effective household consumption smoothing. The paper’s theoretical perspective and empirical evidence suggest that to the extent that governments can foster financial market development by appropriate regulation and supervision, they should do so more urgently at times of intense and increasing internationalization of economic relationships. JEL Classification: G1, E21
2007, 32
We survey contributions to the analysis of household liabilities, highlighting relevant theoretical aspects and outlining how data sources may support empirical testing and measurement efforts. Specifically, we classify aspects of household debt, discussing the theoretical and policy relevance of heterogeneity across individual and country dimensions. Aiming to illustrate conceptual and measurement issues, we refer to the approaches and results of some recent relevant country-specific work on administrative and survey data, and we argue that research in this area would greatly benefit from availability of appropriately classified household liabilities data and of cross-country institutional information. JEL Classification: G1, E21
2013, 25
We consider an economy where individuals privately choose effort and trade competitively priced securities that pay off with effort-determined probability. We show that if insurance against a negative shock is sufficiently incomplete, then standard functional form restrictions ensure that individual objective functions are optimized by an effort and insurance combination that is unique and satisfies first- and second-order conditions. Modeling insurance incompleteness in terms of costly production of private insurance services, we characterize the constrained inefficiency arising in general equilibrium from competitive pricing of nonexclusive financial contracts.
2006, 34
We show theoretically that income redistribution benefits borrowing-constrained individuals more than is implied by standard relative-income and uninsurable-risk considerations. Empirically, we find in international opinion-survey data that younger and lower-income individuals express stronger support for government redistribution in countries where consumer credit is less easily available. This evidence supports our theoretical perspective if such individuals are more strongly affected by tighter credit supply, in that expectations of higher incomes in the future increase their propensity to borrow. JEL Classification: E21
2008, 55
We document significant and robust empirical relationships in cross-country panel data between government size or social expenditure on the one hand, and trade and financial development indicators on the other. Across countries, deeper economic integration is associated with more intense government redistribution, but more developed financial markets weaken that relationship. Over time, controlling for country-specific effects, public social expenditure appears to be eroded by globalization trends where financial market development can more easily substitute for it.
455
This paper investigates the determinants of value and growth investing in a large administrative panel of Swedish residents over the 1999-2007 period. We document strong relationships between a household’s portfolio tilt and the household’s financial and demographic characteristics. Value investors have higher financial and real estate wealth, lower leverage, lower income risk, lower human capital, and are more likely to be female than the average growth investor. Households actively migrate to value stocks over the life-cycle and, at higher frequencies, dynamically offset the passive variations in the value tilt induced by market movements. We verify that these results are not driven by cohort effects, financial sophistication, biases toward popular or professionally close stocks, or unobserved heterogeneity in preferences. We relate these household-level results to some of the leading explanations of the value premium.
467
We propose a framework for estimating network-driven time-varying systemic risk contributions that is applicable to a high-dimensional financial system. Tail risk dependencies and contributions are estimated based on a penalized two-stage fixed-effects quantile approach, which explicitly links bank interconnectedness to systemic risk contributions. The framework is applied to a system of 51 large European banks and 17 sovereigns through the period 2006 to 2013, utilizing both equity and CDS prices. We provide new evidence on how banking sector fragmentation and sovereign-bank linkages evolved over the European sovereign debt crisis and how these developments are reflected in network statistics and systemic risk measures. Illustrating the usefulness of the framework as a monitoring tool, we provide evidence that the fragmentation of the European financial system has peaked and that recovery has begun.
2011, 04
Regulations in the pre-Sarbanes–Oxley era allowed corporate insiders considerable flexibility in strategically timing their trades and SEC filings, for example, by executing several trades and reporting them jointly after the last trade. We document that even these lax reporting requirements were frequently violated and that the strategic timing of trades and reports was common. Event-study abnormal returns are larger after reports of strategic insider trades than after reports of otherwise similar nonstrategic trades. Our results also imply that delayed reporting is detrimental to market efficiency and lend strong support to the more stringent trade reporting requirements established by the Sarbanes–Oxley Act. JEL Classification: G14, G30, G32 Keywords: Insider Trading, Directors' Dealings, Corporate Governance, Market Efficiency
2009, 01
Opting out of the great inflation: German monetary policy after the breakdown of Bretton Woods
(2009)
During the turbulent 1970s and 1980s the Bundesbank established an outstanding reputation in the world of central banking. Germany achieved a high degree of domestic stability and provided a safe haven for investors in times of turmoil in the international financial system. Eventually the Bundesbank provided the role model for the European Central Bank. Hence, we examine an episode of lasting importance in European monetary history. The purpose of this paper is to highlight how the Bundesbank monetary policy strategy contributed to this success. We analyze the strategy as it was conceived, communicated and refined by the Bundesbank itself. We propose a theoretical framework (following Söderström, 2005) where monetary targeting is interpreted, first and foremost, as a commitment device. In our setting, a monetary target helps anchor inflation and inflation expectations. We derive an interest rate rule and show empirically that it approximates the way the Bundesbank conducted monetary policy over the period 1975-1998. We compare the Bundesbank's monetary policy rule with those of the Fed and of the Bank of England. We find that the Bundesbank's policy reaction function was characterized by strong persistence of policy rates as well as a strong response to deviations of inflation from target and to the activity growth gap. In contrast, the response to the level of the output gap was not significant. In our empirical analysis we use real-time data, as available to policy-makers at the time. JEL Classification: E31, E32, E41, E52, E58
1998, 12
Shares trading in the Bolsa Mexicana de Valores do not seem to react to company news. Using a sample of Mexican corporate news announcements from the period July 1994 through June 1996, this paper finds that there is nothing unusual about returns, volatility of returns, volume of trade or bid-ask spreads in the event window. This suggests one of five possibilities: our sample size is small; or markets are inefficient; or markets are efficient but the corporate news announcements are not value-relevant; or markets are efficient and corporate news announcements are value-relevant, but they have been fully anticipated; or markets are efficient and corporate news announcements are value-relevant, but unrestricted insider trading has caused prices to fully incorporate the information. The evidence supports the last hypothesis. The paper thus points towards a methodology for ranking emerging stock markets in terms of their market integrity, an approach that can be used with the limited data available in such markets.
477
We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semi-martingale log asset price process which is subject to noise and non-synchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM) which recently has been introduced by Bibinger et al. (2014). We extend the LMM estimator to allow for autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. We prove the consistency and asymptotic normality of the proposed spot covariance estimator. Based on extensive simulations we provide empirical guidance on the optimal implementation of the estimator and apply it to high-frequency data of a cross-section of NASDAQ blue chip stocks. Employing the estimator to estimate spot covariances, correlations and betas in normal but also extreme-event periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, (iii) are strongly serially correlated, and (iv) can increase strongly and nearly instantaneously if new information arrives.
2006, 11
We analyze the degree of contract completeness with respect to staging of venture capital investments using a hand-collected German data set of contract data from 464 rounds into 290 entrepreneurial firms. We distinguish three forms of staging (pure milestone financing, pure round financing and mixes). Contract completeness declines when moving from pure milestone financing via mixes to pure round financing. We show that the decision for a specific form of staging is determined by the expected distribution of bargaining power between the contracting parties when new funding becomes necessary and by the predictability of the development process. More precisely, the lower the entrepreneur's expected bargaining power, the more complete the contracts that parties choose, with the maximum level of completeness depending on the predictability of the development process. JEL Classification: G24, G32, D86, D80, G34
2009, 05
Venture capital exit rights
(2009)
Theorists argue that exit rights can mitigate hold-up problems in venture capital. Using a hand-collected data-set of venture capital contracts from Germany we show that exit rights are included more frequently in venture capital contracts when a hold-up problem associated with the venture capitalist's exit decision is likely. Examples include drag-along and tag-along rights. Additionally, we find that almost all exit rights are allocated to the venture capitalist rather than to the entrepreneur. In addition, we show that besides the basic hold-up mechanism there are other mechanisms such as ex-ante bargaining power and the degree of pledgeable income that drive the allocation of exit rights. JEL Classification: G24, G34, D80
496
Money is more than memory
(2014)
Impersonal exchange is the hallmark of an advanced society. One key institution for impersonal exchange is money, which economic theory considers just a primitive arrangement for monitoring past conduct in society. If so, then a public record of past actions — or memory — supersedes the function performed by money. This intriguing theoretical postulate remains untested. In an experiment, we show that the suggested functional equality between money and memory does not translate into an empirical equivalence. Monetary systems perform a richer set of functions than just revealing past behaviors, which proves to be crucial in promoting large-scale cooperation.
2005, 20
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
2006, 14
Several recent studies have addressed household participation in the stock market, but relatively few have focused on household stock trading behavior. Household trading is important for the stock market, as households own more than 40% of the NYSE capitalization directly and can also influence trading patterns of institutional investors by adjusting their indirect stock holdings. Existing studies based on administrative data offer conflicting results. Discount brokerage data show excessive trading to the detriment of stockholders, while data on retirement accounts indicate extreme inactivity. This paper uses data representative of the population to document the extent of household portfolio inertia and to link it to household characteristics and to stock market movements. We document considerable portfolio inertia, as regards both changing stockholding participation status and trading stocks, and find that specific household characteristics contribute to the tendency to exhibit such inertia. Although our findings suggest some dependence of trading directly-held equity through brokerage accounts on the performance of the stock market index, they do not indicate that the recent expansion in the stockholder base and the experience of the stock market downswing have significantly altered the overall propensity of households to trade in stocks or to switch participation status in a way that could contribute to stock market instability. JEL Classification: G110, E210
2005, 17
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Taking also into account potential downward nominal rigidities in the price-setting behavior of firms appears not to impose significant further distortions on the economy. JEL Classification: C63, E31, E52
2007, 23
In this paper we revisit medium- to long-run exchange rate determination, focusing on the role of international investment positions. To do so, we develop a new econometric framework accounting for conditional long-run homogeneity in heterogeneous dynamic panel data models. In particular, in our model the long-run relationship between effective exchange rates and domestic as well as weighted foreign prices is a homogeneous function of a country’s international investment position. We find rather strong support for purchasing power parity in environments of limited negative net foreign asset to GDP positions, but not outside such environments. We thus argue that the purchasing power parity hypothesis holds conditionally, but not unconditionally, and that international investment positions are an essential component to characterizing this conditionality. Finally, we adduce evidence that whether deterioration of a country’s net foreign asset to GDP position leads to a depreciation of that country’s effective exchange rate depends on its rate of inflation relative to the rate of inflation abroad as well as its exposure to global shocks. JEL Classification: F31, F37, C23
2007, 03
The European Central Bank
(2007)
The establishment of the ECB and with it the launch of the euro has arguably been a unique endeavor in economic history, representing an important experiment in central banking. This note aims to summarize some of the main lessons learned from this experiment and sketch some of the prospects for the ECB. It is written for "The New Palgrave Dictionary of Economics", 2nd edition. JEL Classification: E52, E58
622
This paper investigates what we can learn from the financial crisis about the link between accounting and financial stability. The picture that emerges ten years after the crisis is substantially different from the picture that dominated the accounting debate during and shortly after the crisis. Widespread claims about the role of fair-value (or mark-to-market) accounting in the crisis have been debunked. However, we identify several other core issues for the link between accounting and financial stability. Our analysis suggests that, going into the financial crisis, banks’ disclosures about relevant risk exposures were relatively sparse. Such disclosures came later after major concerns about banks’ exposures had arisen in markets. Similarly, banks delayed the recognition of loan losses. Banks’ incentives seem to drive this evidence, suggesting that reporting discretion and enforcement deserve careful consideration. In addition, bank regulation through its interlinkage with financial accounting may have dampened banks’ incentives for corrective actions. Our analysis illustrates that a number of serious challenges remain if accounting and financial reporting are to contribute to financial stability.
2007, 08
We develop a utility-based model of fluctuations with nominal rigidities and unemployment. In doing so, we combine two strands of research: the New Keynesian model, with its focus on nominal rigidities, and the Diamond-Mortensen-Pissarides model, with its focus on labor market frictions and unemployment. In developing this model, we proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. Next, we introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of real wage rigidities. We show the nature of the tradeoff between inflation and unemployment stabilization, and we draw the implications for optimal monetary policy. JEL Classification: E32, E50
2011, 20
This paper outlines a new method for using qualitative information to analyze the monetary policy strategy of central banks. Quantitative assessment indicators that are extracted from a central bank's public statements via the balance statistic approach are employed to estimate a Taylor-type rule. This procedure makes it possible to directly capture a policymaker's assessments of the macroeconomic variables that are relevant to its decision-making process. As an application of the proposed method, the monetary policy of the Bundesbank is re-investigated with a new dataset. One distinctive feature of the Bundesbank's strategy was the targeting of growth in monetary aggregates. The analysis using the proposed method provides evidence that, since 1975, the Bundesbank indeed took into consideration not only monetary aggregates but also real economic activity and inflation developments in its monetary policy strategy. JEL Classification: E52, E58, N14 Keywords: Monetary Policy Rule, Statement Indicators, Bundesbank, Monetary Targeting
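The two ingredients of the approach — a balance statistic condensing qualitative statements into a quantitative indicator, and a Taylor-type rule — can be sketched as follows. This is an illustrative reconstruction, not the paper's estimator; the neutral rate `r_star`, target `pi_star`, and coefficients `a` and `b` are hypothetical placeholders.

```python
def balance_statistic(n_positive, n_negative, n_total):
    """Share of positive minus share of negative assessments in a
    set of central bank statements (balance statistic approach)."""
    return (n_positive - n_negative) / n_total

def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0, a=0.5, b=0.5):
    """Prescribed policy rate (percent) from a standard Taylor-type rule:
    i = r* + pi + a * (pi - pi*) + b * y."""
    return r_star + inflation + a * (inflation - pi_star) + b * output_gap
```

In the paper's setting, the assessment indicators extracted from statements would stand in for the inflation and output-gap terms when the rule is estimated.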
2011, 19
This paper analyzes the emergence of systemic risk in a network model of interconnected bank balance sheets. Given a shock to asset values of one or several banks, systemic risk in the form of multiple bank defaults depends on the strength of balance sheets and asset market liquidity. The price of bank assets on the secondary market is endogenous in the model, thereby relating funding liquidity to expected solvency - an important stylized fact of banking crises. Based on the concept of a system value at risk, Shapley values are used to define the systemic risk charge levied upon individual banks. Using a parallelized simulated annealing algorithm the properties of an optimal charge are derived. Among other things we find that there is not necessarily a correspondence between a bank's contribution to systemic risk - which determines its risk charge - and the capital that is optimally injected into it to make the financial system more resilient to systemic risk. The analysis has policy implications for the design of optimal bank levies. JEL Classification: G01, G18, G33 Keywords: Systemic Risk, Systemic Risk Charge, Systemic Risk Fund, Macroprudential Supervision, Shapley Value, Financial Network
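The Shapley-value allocation of a system-wide risk measure across banks can be illustrated with a small exact computation. This is a generic sketch (the risk function and bank labels are hypothetical), not the paper's parallelized simulated-annealing implementation, which is needed once the number of banks makes enumerating orderings infeasible.

```python
from itertools import permutations

def shapley_values(players, risk):
    """Shapley value of each player for a set-valued risk function.

    `risk` maps a frozenset of players to the risk of that subgroup;
    each player's charge is its average marginal contribution to risk
    across all orderings of the players.
    """
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            new = coalition | {p}
            totals[p] += risk(new) - risk(coalition)
            coalition = new
    return {p: totals[p] / len(orders) for p in players}
```

By construction the charges sum to the risk of the whole system (the "efficiency" property), which is what makes the Shapley value attractive for levying a systemic risk charge.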
1998, 17
Derivatives usage in risk management by U.S. and German non-financial firms : a comparative survey
(1998)
This paper is a comparative study of the responses to the 1995 Wharton School survey of derivative usage among US non-financial firms and a 1997 companion survey on German non-financial firms. It is not a mere comparison of the results of both studies but a comparative study, drawing a comparable subsample of firms from the US study to match the sample of German firms on both size and industry composition. We find that German firms are more likely to use derivatives than US firms, with 78% of German firms using derivatives compared to 57% of US firms. Aside from this higher overall usage, the general pattern of usage across industry and size groupings is comparable across the two countries. In both countries, foreign currency derivative usage is most common, followed closely by interest rate derivatives, with commodity derivatives a distant third. Usage rates across all three classes of derivatives are higher for German firms than US firms. In contrast to the similarities, firms in the two countries differ notably on issues such as the primary goal of hedging, their choice of instruments, and the influence of their market view when taking derivative positions. These differences appear to be driven by the greater importance of financial accounting statements in Germany than the US and stricter German corporate policies of control over derivative activities within the firm. German firms also indicate significantly less concern about derivative related issues than US firms, which appears to arise from a more basic and simple strategy for using derivatives. Finally, among the derivative non-users, German firms tend to cite reasons suggesting derivatives were not needed whereas US firms tend to cite reasons suggesting a possible role for derivatives, but a hesitation to use them for some reason.
2013, 19
We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
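A minimal sketch of the Gaussian-copula step — mapping non-negative MEM residuals into the Gaussian domain via the probability integral transform — might look like this. It uses an empirical CDF followed by the standard normal quantile function; ties are handled crudely, and the DCC-GARCH stage and composite likelihood estimation are omitted.

```python
from statistics import NormalDist

def to_gaussian_domain(residuals):
    """Probability integral transform of residuals: empirical CDF rank,
    shifted off the boundaries 0 and 1, then the standard normal
    quantile function."""
    n = len(residuals)
    nd = NormalDist()
    out = []
    for x in residuals:
        rank = sum(1 for y in residuals if y <= x)  # n * empirical CDF at x
        out.append(nd.inv_cdf((rank - 0.5) / n))
    return out
```

The transformed series is (approximately) standard normal in each margin, so time-varying conditional variances and correlations can then be modeled on it with a multivariate GARCH-type specification.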
2007, 14
This paper uses factor-augmented vector autoregressions (FAVAR) estimated using a large data set to disentangle fluctuations in disaggregated consumer and producer prices which are due to macroeconomic factors from those due to sectoral conditions. This allows us to provide consistent estimates of the effects of US monetary policy on disaggregated prices. While sectoral prices respond quickly to sector-specific shocks, we find that for a large number of price series, there is a significant delay in the response of prices to monetary policy shocks. In addition, price responses display little evidence of a “price puzzle,” contrary to existing studies based on traditional VARs. The observed dispersion in the reaction of producer prices is relatively well explained by the degree of market power, as predicted by models with monopolistic competition. JEL Classification: E32, E52
2007, 05
In this paper, we examine three famous episodes of deliberate deflation (or disinflation) in U.S. history, including episodes following the Civil War, World War I, and the Volcker disinflation of the early 1980s. These episodes were associated with widely divergent effects on the real economy, which we attribute both to differences in the policy actions undertaken, and to the transparency and credibility of the monetary authorities. We attempt to account for the salient features of each episode within the context of a stylized DSGE model. Our model simulations indicate how a more predictable policy of gradual deflation could have helped avoid the sharp post-WWI depression. But our analysis also suggests that the strong argument for gradualism under a transparent monetary regime becomes less persuasive if the monetary authority lacks credibility; in this case, an aggressive policy stance (as under Volcker) can play a useful signalling role by making a policy shift more apparent to private agents. JEL Classification: E31, E32, E52
2012, 14
From its early post-war catch-up phase, Germany’s formidable export engine has been its consistent driver of growth. But Germany has almost equally consistently run current account surpluses. Exports have powered the dynamic phases and helped the economy emerge from stagnation. Volatile external demand, in turn, has elevated German GDP growth volatility by advanced countries’ standards, keeping domestic consumption growth at surprisingly low levels. As a consequence, despite the size of its economy and important labor market reforms, Germany’s ability to act as a global locomotive has been limited. With increasing competition in its traditional areas of manufacturing, a more domestically-driven growth dynamic, especially in the production and delivery of services, will be good for Germany and for the global economy. Absent such an effort, German growth will remain constrained, and Germany will play only a modest role in spurring growth elsewhere.
495
Emotions-at-risk: an experimental investigation into emotions, option prices and risk perception
(2014)
This paper experimentally investigates how emotions are associated with option prices and risk perception. Using a binary lottery, we find evidence that the emotion ‘surprise’ plays a significant role in the negative correlation between lottery returns and estimates of the price of a put option. Our findings shed new light on various existing theories on emotions and affect. We find gratitude, admiration, and joy to be positively associated with risk perception, although the affect heuristic predicts a negative association. In contrast with the predictions of the appraisal tendency framework (ATF), we document a negative correlation between option price and surprise for lottery winners. Finally, the results show that the option price is not associated with risk perception as commonly used in psychology.
522
We investigate the effect of the tone of news on investor stock price expectations and beliefs. In an experimental study we ask subjects to estimate a future stock price for twelve real listed companies. As additional information we provide them with historical stock prices and extracts from real newspaper articles. We propose a way to manipulate the tone of news extracts without distorting its content. Subjects in different treatment groups read news items that are written either in positive or negative tone for each stock. We find that subjects tend to predict a significantly higher (lower) return for stocks after reading positive (negative) tone news. The effect is especially pronounced for stocks with poor past performance. Subjects are more likely to be optimistic (pessimistic) about the economy and to buy (sell) stocks after reading positive (negative) than negative (positive) tone news. Our results show that the news media might affect not only how investors perceive information, but also what they do in response to it.
2010, 20
We test whether asymmetric preferences for losses versus gains as in Ang, Chen, and Xing (2006) also affect the pricing of cash flow versus discount rate news as in Campbell and Vuolteenaho (2004). We construct a new four-fold beta decomposition, distinguishing cash flow and discount rate betas in up and down markets. Using CRSP data over 1963–2008, we find that the downside cash flow beta and downside discount rate beta carry the largest premia. We subject our result to an extensive number of robustness checks. Overall, downside cash flow risk is priced most consistently across different samples, periods, and return decomposition methods, and is the only component of beta that has significant out-of-sample predictive ability. The downside cash flow risk premium is mainly attributable to small stocks. The risk premium for large stocks appears much more driven by a compensation for symmetric, cash flow related risk. Finally, we multiply our premia estimates by average betas to compute the contribution of the different risk components to realized average returns. We find that up and down discount rate components dominate the contribution to average returns of downside cash flow risk. Keywords: Asset Pricing, Beta, Downside Risk, Upside Risk, Cash Flow Risk, Discount Rate Risk JEL Classification: G11, G12, G14
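As an illustration of the conditional-beta building block behind such a decomposition, a downside beta in the spirit of Ang, Chen, and Xing (2006) — the asset's beta computed only over periods of below-average market returns — can be calculated as follows. This is a sketch on raw returns, not the paper's full four-fold cash-flow/discount-rate split.

```python
def downside_beta(asset, market):
    """Beta of `asset` computed only over periods in which the market
    return is below its sample mean (downside beta)."""
    mu = sum(market) / len(market)
    pairs = [(a, m) for a, m in zip(asset, market) if m < mu]
    ma = sum(a for a, _ in pairs) / len(pairs)
    mm = sum(m for _, m in pairs) / len(pairs)
    cov = sum((a - ma) * (m - mm) for a, m in pairs) / len(pairs)
    var = sum((m - mm) ** 2 for _, m in pairs) / len(pairs)
    return cov / var
```

The upside beta conditions on above-average market returns instead; the paper further splits each conditional beta into its cash-flow and discount-rate news components.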
2006, 10
We estimate the effect of pension reforms on households' expectations of retirement outcomes and private wealth accumulation decisions, exploiting a decade of intense Italian pension reforms as a source of exogenous variation in expected pension wealth. The Survey of Household Income and Wealth, a large random sample of the Italian population, elicits expectations of the age at which workers expect to retire and of the ratio of pension benefits to pre-retirement income between 1989 and 2002. We find that workers have revised expectations in the direction suggested by the reforms and that there is substantial offset between private wealth and perceived pension wealth, particularly for workers who are better informed about their pension wealth. JEL Classification: E21, H55
2003, 47
This paper analyses the effects of the Initial Public Offering (IPO) market on real investment decisions in emerging industries. We first propose a model of IPO timing based on divergence of opinion among investors and short-sale constraints. Using a real option approach, we show that firms are more likely to go public when the ratio of overvaluation to profits is high, that is, after stock market run-ups. Because initial returns increase with the demand from optimistic investors at the time of the offer, the model provides an explanation for the observed positive causality between average initial returns and IPO volume. Second, we discuss the possibility of real overinvestment in high-tech industries. We claim that investing in the industry gives agents an option to sell the project on the stock market at an overvalued price, thereby enabling the financing of positive-NPV projects which would not be undertaken otherwise. It is shown, however, that the IPO market can also lead to overinvestment in new industries. Finally, we present some econometric results supporting the idea that funds committed to the financing of high-tech industries may respond positively to optimistic stock market valuations.
2004, 07
We extend the important idea of range-based volatility estimation to the multivariate case. In particular, we propose a range-based covariance estimator that is motivated by financial economic considerations (the absence of arbitrage), in addition to statistical considerations. We show that, unlike other univariate and multivariate volatility estimators, the range-based estimator is highly efficient yet robust to market microstructure noise arising from bid-ask bounce and asynchronous trading. Finally, we provide an empirical example illustrating the value of the high-frequency sample path information contained in the range-based estimates in a multivariate GARCH framework.
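A minimal sketch of the range-based idea: the Parkinson estimator turns a period's high-low range of the log price into a variance estimate, and a covariance can then be backed out from the ranges of "sum" and "difference" portfolios via the polarization identity. The function names and the specific identity used here are illustrative, not necessarily the paper's exact construction.

```python
from math import log

def parkinson_variance(high, low):
    """Parkinson range-based variance estimate from a period's
    high and low log prices."""
    return (high - low) ** 2 / (4.0 * log(2.0))

def range_based_covariance(hi_sum, lo_sum, hi_diff, lo_diff):
    """Covariance of two assets via the polarization identity
    Cov(x, y) = (Var(x + y) - Var(x - y)) / 4, with each variance
    estimated from the range of the corresponding portfolio
    (the sum and difference of the two log-price series)."""
    return (parkinson_variance(hi_sum, lo_sum)
            - parkinson_variance(hi_diff, lo_diff)) / 4.0
```

Because each input is a high-low range rather than a tick-by-tick path, the estimate inherits the range's robustness to bid-ask bounce and asynchronous trading noted in the abstract.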
2005, 21
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection, as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model: they are useful, but not used in equilibrium because of basis risk. JEL Classification: D82, G22
675
We investigate the impact of reporting regulation on corporate innovation. Exploiting thresholds in Europe’s regulation and a major enforcement reform in Germany, we find that forcing firms to publicly disclose their financial statements discourages innovative activities. Our evidence suggests that reporting regulation has significant real effects by imposing proprietary costs on innovative firms, which in turn diminish their incentives to innovate. At the industry level, positive information spillovers (e.g., to competitors, suppliers, and customers) appear insufficient to compensate the negative direct effect on the prevalence of innovative activity. The spillovers instead appear to concentrate innovation among a few large firms in a given industry. Thus, financial reporting regulation has important aggregate and distributional effects on corporate innovation.
1999, 11
We analyze the role of different kinds of primary and secondary market interventions for the government's goal of maximizing its revenues from public bond issuances. Some of these interventions can be thought of as characteristics of a "primary dealer system". Overall, we find that a primary dealer system with a restricted number of participants may be useful in the case of only restricted competition among sufficiently heterogeneous market makers. We further show that minimum secondary market turnover requirements for primary dealers with respect to bond sales seem in general more adequate than the definition of maximum bid-ask spreads or minimum turnover requirements with respect to bond purchases. Moreover, official price management operations cannot completely substitute for a system of primary dealers. Finally, it should be noted that there is in general no reason for monetary compensation to primary dealers, since they already possess some privileges with respect to public bond auctions.