CFS working paper series
https://gfk-cfs.de/working-papers/
2012, 11
The complexity resulting from intertwined uncertainties regarding model misspecification and mismeasurement of the state of the economy defines the monetary policy landscape. Using the euro area as a laboratory, this paper explores the design of robust policy guides aiming to maintain stability in the economy while recognizing this complexity. We document substantial output gap mismeasurement and make use of a new model database to capture the evolution of model specification. A simple interest rate rule is employed to interpret ECB policy since 1999. An evaluation of alternative policy rules across 11 models of the euro area confirms the fragility of policy analysis optimized for any specific model and shows the merits of model averaging in policy design. Interestingly, a simple difference rule with the same coefficients on inflation and output growth as the one used to interpret ECB policy is quite robust as long as it responds to current outcomes of these variables.
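The difference rule described above can be sketched in a few lines; the coefficients, the 2% inflation target, and the example inputs below are illustrative placeholders, not the paper's estimates.

```python
# Illustrative sketch of a first-difference interest rate rule: the *change*
# in the policy rate responds to current inflation and output growth.
# All parameter values here are hypothetical, not taken from the paper.

def difference_rule(i_prev, inflation, output_growth,
                    pi_target=2.0, a_pi=0.5, a_dy=0.5):
    """i_t = i_{t-1} + a_pi * (pi_t - pi*) + a_dy * (output growth)."""
    return i_prev + a_pi * (inflation - pi_target) + a_dy * output_growth

# Example: rate at 1.0%, inflation one point above target, growth at 2.0%
rate = difference_rule(i_prev=1.0, inflation=3.0, output_growth=2.0)  # 2.5
```

Because the rule prescribes a rate change rather than a rate level, it needs no estimate of the equilibrium real rate or the output gap level, which is one reason such rules tend to be robust to mismeasurement.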
2011, 29
The lessons from QE and other 'unconventional' monetary policies - evidence from the Bank of England
(2011)
This paper investigates the effectiveness of the ‘quantitative easing’ policy, as implemented by the Bank of England in March 2009. Similar policies had been previously implemented in Japan, the U.S. and the Eurozone. The effectiveness is measured by the impact of Bank of England policies (including, but not limited to QE) on nominal GDP growth – the declared goal of the policy, according to the Bank of England. Unlike the majority of the literature on the topic, the general-to-specific econometric modeling methodology (a.k.a. the ‘Hendry’ or ‘LSE’ methodology) is employed for this purpose. The empirical analysis indicates that QE as defined and announced in March 2009 had no apparent effect on the UK economy. Meanwhile, it is found that a policy of ‘quantitative easing’ defined in the original sense of the term (Werner, 1994) is supported by empirical evidence: a stable relationship between a lending aggregate (disaggregated M4 lending, i.e. bank credit for GDP transactions) and nominal GDP is found. The findings imply that BoE policy should more directly target the growth of bank credit for GDP-transactions.
2011, 26
The unintended consequences of the debt ... will increased government expenditure hurt the economy?
(2011)
In 2008, governments in many countries embarked on large fiscal expenditure programmes, with the intention to support the economy and prevent a more serious recession. In this study, the overall impact of a substantial increase in fiscal expenditure is considered by providing a novel analysis of the most relevant recent experience in similar circumstances, namely that of Japan in the 1990s. Then a weak economy with risk-averse banks seemed to require some of the largest peacetime fiscal stimulation programmes on record, albeit with disappointing results. The explanations provided by the literature and their unsatisfactory empirical record are reviewed. An alternative explanation, derived from early Keynesian models on the ineffectiveness of fiscal policy, is presented in the form of a modified Fisher equation, which incorporates recent findings from the credit-view literature. The model postulates complete quantity crowding out. It is subjected to empirical tests, which prove supportive. Evidence is thus found that fiscal policy, if not supported by suitable monetary policy, is likely to crowd out private sector demand, even in an environment of falling or near-zero interest rates. As a policy conclusion, it is pointed out that by changing the funding strategy, complete crowding out can be avoided and a positive net effect produced. The proposed framework creates common ground between proponents of Keynesian views (as held, among others, by Blinder and Solow), monetarist views (as held in particular by Milton Friedman) and those of leading contemporary macroeconomists (such as Mankiw).
2011, 30
Central banks have recently introduced new policy initiatives, including a policy called ‘Quantitative Easing’ (QE). Since it has been argued by the Bank of England that “Standard economic models are of limited use in these unusual circumstances, and the empirical evidence is extremely limited” (Bank of England, 2009b), we have taken an entirely empirical approach and have focused on the QE experience on which substantial data is available, namely that of Japan (2001-2006). Recent literature on the effectiveness of QE has neglected any reference to final policy goals. In this paper, we adopt the view that effectiveness will ultimately be measured by whether the policy is able to “boost spending” (Bank of England, 2009b) and “will ultimately be judged by their impact on the wider macroeconomy” (Bank of England, 2010). In line with a widely held view among leading macroeconomists from various persuasions, while attempting to stay agnostic and open-minded on the distribution of demand changes between real output and inflation, we have thus identified nominal GDP growth as the key final policy goal of monetary policy. The empirical research finds that the policy conducted by the Bank of Japan between 2001 and 2006 made little empirical difference, while an alternative policy targeting credit creation (the original definition of QE) would likely have been more successful.
2010, 26
The recent financial crisis has highlighted the limits of the “originate to distribute” model of banking, but its nexus with the macroeconomy and monetary policy remains unexplored. I build a DSGE model with banks (along the lines of Holmström and Tirole [28] and Parlour and Plantin [39]) and examine its properties with and without active secondary markets for credit risk transfer. The possibility of transferring credit reduces the impact of liquidity shocks on bank balance sheets, but also reduces banks' incentive to monitor. As a result, secondary markets allow banks to release capital and exacerbate the effect of productivity and other macroeconomic shocks on output and inflation. By offering a possibility of capital recycling and by reducing bank monitoring, secondary credit markets in general equilibrium allow banks to take on more risk. Keywords: Credit Risk Transfer, Dual Moral Hazard, Monetary Policy, Liquidity, Welfare. JEL Classification: E3, E5, G3. First Draft: December 2009, This Draft: September 2010
2009, 19
In the New-Keynesian model, optimal interest rate policy under uncertainty is formulated without reference to monetary aggregates as long as certain standard assumptions on the distributions of unobservables are satisfied. The model has been criticized for failing to explain common trends in money growth and inflation, and that therefore money should be used as a cross-check in policy formulation (see Lucas (2007)). We show that the New-Keynesian model can explain such trends if one allows for the possibility of persistent central bank misperceptions. Such misperceptions motivate the search for policies that include additional robustness checks. In earlier work, we proposed an interest rate rule that is near-optimal in normal times but includes a cross-check with monetary information. In case of unusual monetary trends, interest rates are adjusted. In this paper, we show in detail how to derive the appropriate magnitude of the interest rate adjustment following a significant cross-check with monetary information, when the New-Keynesian model is the central bank’s preferred model. The cross-check is shown to be effective in offsetting persistent deviations of inflation due to central bank misperceptions. Keywords: Monetary Policy, New-Keynesian Model, Money, Quantity Theory, European Central Bank, Policy Under Uncertainty
2009, 30
This paper reviews the rationale for quantitative easing when central bank policy rates reach near zero levels in light of recent announcements regarding direct asset purchases by the Bank of England, the Bank of Japan, the U.S. Federal Reserve and the European Central Bank. Empirical evidence from the previous period of quantitative easing in Japan between 2001 and 2006 is presented. During this earlier period the Bank of Japan was able to expand the monetary base very quickly and significantly. Quantitative easing translated into a greater and more lasting expansion of M1 relative to nominal GDP. Deflation subsided by 2005. As soon as inflation appeared to stabilize near a rate of zero, the Bank of Japan rapidly reduced the monetary base as a share of nominal income as it had announced in 2001. The Bank was able to exit from extensive quantitative easing within less than a year. Some implications for the current situation in Europe and the United States are discussed.
2009, 01
Opting out of the great inflation: German monetary policy after the breakdown of Bretton Woods
(2009)
During the turbulent 1970s and 1980s the Bundesbank established an outstanding reputation in the world of central banking. Germany achieved a high degree of domestic stability and provided a safe haven for investors in times of turmoil in the international financial system. Eventually the Bundesbank provided the role model for the European Central Bank. Hence, we examine an episode of lasting importance in European monetary history. The purpose of this paper is to highlight how the Bundesbank monetary policy strategy contributed to this success. We analyze the strategy as it was conceived, communicated and refined by the Bundesbank itself. We propose a theoretical framework (following Söderström, 2005) where monetary targeting is interpreted, first and foremost, as a commitment device. In our setting, a monetary target helps anchor inflation and inflation expectations. We derive an interest rate rule and show empirically that it approximates the way the Bundesbank conducted monetary policy over the period 1975-1998. We compare the Bundesbank's monetary policy rule with those of the Fed and of the Bank of England. We find that the Bundesbank's policy reaction function was characterized by strong persistence of policy rates as well as a strong response to deviations of inflation from target and to the activity growth gap. In contrast, the response to the level of the output gap was not significant. In our empirical analysis we use real-time data, as available to policy-makers at the time. JEL Classification: E31, E32, E41, E52, E58
2008, 17
This paper introduces adaptive learning and endogenous indexation in the New-Keynesian Phillips curve and studies disinflation under inflation targeting policies. The analysis is motivated by the disinflation performance of many inflation-targeting countries, in particular the gradual Chilean disinflation with temporary annual targets. At the start of the disinflation episode, price-setting firms expect inflation to be highly persistent and opt for backward-looking indexation. As the central bank acts to bring inflation under control, price-setting firms revise their estimates of the degree of persistence. Such adaptive learning lowers the cost of disinflation. This reduction can be exploited by a gradual approach to disinflation. Firms that choose the rate for indexation also re-assess the likelihood that announced inflation targets determine steady-state inflation and adjust indexation of contracts accordingly. A strategy of announcing and pursuing short-term targets for inflation is found to influence the likelihood that firms switch from backward-looking indexation to the central bank's targets. As firms abandon backward-looking indexation, the costs of disinflation decline further. We show that an inflation targeting strategy that employs temporary targets can benefit from lower disinflation costs due to the reduction in backward-looking indexation.
2008, 16
Monetary policy analysts often rely on rules-of-thumb, such as the Taylor rule, to describe historical monetary policy decisions and to compare current policy to historical norms. Analysis along these lines also permits evaluation of episodes where policy may have deviated from a simple rule and examination of the reasons behind such deviations. One interesting question is whether such rules-of-thumb should draw on policymakers' forecasts of key variables such as inflation and unemployment or on observed outcomes. Importantly, deviations of the policy from the prescriptions of a Taylor rule that relies on outcomes may be due to systematic responses to information captured in policymakers' own projections. We investigate this proposition in the context of FOMC policy decisions over the past 20 years using publicly available FOMC projections from the biannual monetary policy reports to the Congress (Humphrey-Hawkins reports). Our results indicate that FOMC decisions can indeed be predominantly explained in terms of the FOMC's own projections rather than observed outcomes. Thus, a forecast-based rule-of-thumb better characterizes FOMC decision-making. We also confirm that many of the apparent deviations of the federal funds rate from an outcome-based Taylor-style rule may be considered systematic responses to information contained in FOMC projections.
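The outcome-based versus forecast-based distinction above can be illustrated with a minimal Taylor-rule sketch. The coefficients follow Taylor's (1993) original parameterization; the input numbers are made up for illustration and are not FOMC data.

```python
# Minimal Taylor-rule sketch: the same prescription formula fed either with
# observed outcomes or with the committee's projections. Inputs are invented.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0,
                a_pi=0.5, a_gap=0.5):
    """Taylor (1993): i = r* + pi + 0.5*(pi - pi*) + 0.5*gap (all in percent)."""
    return r_star + inflation + a_pi * (inflation - pi_target) + a_gap * output_gap

# Outcome-based rule: feed observed inflation and the observed output gap.
i_outcome = taylor_rule(inflation=3.0, output_gap=-1.0)   # 5.0
# Forecast-based rule: feed projections instead; a wedge between the two
# prescriptions can look like a "deviation" from the outcome-based rule.
i_forecast = taylor_rule(inflation=2.0, output_gap=0.0)   # 4.0
```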
2007, 14
This paper uses factor-augmented vector autoregressions (FAVAR) estimated using a large data set to disentangle fluctuations in disaggregated consumer and producer prices which are due to macroeconomic factors from those due to sectoral conditions. This allows us to provide consistent estimates of the effects of US monetary policy on disaggregated prices. While sectoral prices respond quickly to sector-specific shocks, we find that for a large number of price series, there is a significant delay in the response of prices to monetary policy shocks. In addition, price responses display little evidence of a “price puzzle,” contrary to existing studies based on traditional VARs. The observed dispersion in the reaction of producer prices is relatively well explained by the degree of market power, as predicted by models with monopolistic competition. JEL Classification: E32, E52
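The first step of a FAVAR, extracting a few common factors from a large panel of series by principal components, can be sketched as follows. The panel here is simulated, and the function name and dimensions are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch of principal-components factor extraction, the first FAVAR step.
# The simulated panel has one true common factor plus idiosyncratic noise.

def extract_factors(panel, k):
    """Return T x k principal-component factor estimates from a T x N panel."""
    z = (panel - panel.mean(axis=0)) / panel.std(axis=0)   # standardize series
    eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
    loadings = eigvecs[:, ::-1][:, :k]                     # top-k eigenvectors
    return z @ loadings

rng = np.random.default_rng(0)
true_factor = rng.normal(size=(200, 1))                    # one common factor
panel = true_factor @ rng.normal(size=(1, 20)) + 0.1 * rng.normal(size=(200, 20))
f_hat = extract_factors(panel, k=1)                        # recovers the factor
```

In a full FAVAR, the estimated factors would then be stacked with the policy rate in a VAR, so that impulse responses of every series in the panel to a monetary policy shock can be traced out through the factor loadings.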
2007, 10
Mortgage markets, collateral constraints, and monetary policy: do institutional factors matter?
(2006)
We study the role of institutional characteristics of mortgage markets in affecting the strength and timing of the effects of monetary policy shocks on house prices and consumption in a sample of OECD countries. We document three facts: (1) there is significant divergence in the structure of mortgage markets across the main industrialised countries; (2) at the business cycle frequency, the correlation between consumption and house prices increases with the degree of flexibility/development of mortgage markets; (3) the transmission of monetary policy shocks on consumption and house prices is stronger in countries with more flexible/developed mortgage markets. We then build a two-sector dynamic general equilibrium model with price stickiness and collateral constraints, where the ability to borrow is endogenously linked to the nominal value of a durable asset (housing). We study how the response of consumption to monetary policy shocks is affected by alternative values of three key institutional parameters: (i) down-payment rate; (ii) mortgage repayment rate; (iii) interest rate mortgage structure (variable vs. fixed interest rate). In line with our empirical evidence, the sensitivity of consumption to monetary policy shocks increases with lower values of (i) and (ii), and is larger under a variable-rate mortgage structure. JEL Classification: E21, E44, E52
2006, 30
The paper constructs a global monetary aggregate, namely the sum of the key monetary aggregates of the G5 economies (US, Euro area, Japan, UK, and Canada), and analyses its indicator properties for global output and inflation. Using a structural VAR approach we find that after a monetary policy shock output declines temporarily, with the downward effect reaching a peak within the second year, and the global monetary aggregate drops significantly. In addition, the price level rises permanently in response to a positive shock to the global liquidity aggregate. The similarity of our results with those found in country studies might support the use of a global monetary aggregate as a summary measure of worldwide monetary trends. JEL Classification: E52, F01
2004, 24
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in macroeconomic performance of the U.S. economy.
2004, 22
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely about real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL Classification: E5, E61, D78
2003, 40
This paper investigates the role that imperfect knowledge about the structure of the economy plays in the formation of expectations, macroeconomic dynamics, and the efficient formulation of monetary policy. Economic agents rely on an adaptive learning technology to form expectations and to update continuously their beliefs regarding the dynamic structure of the economy based on incoming data. The process of perpetual learning introduces an additional layer of dynamic interaction between monetary policy and economic outcomes. We find that policies that would be efficient under rational expectations can perform poorly when knowledge is imperfect. In particular, policies that fail to maintain tight control over inflation are prone to episodes in which the public's expectations of inflation become uncoupled from the policy objective and stagflation results, in a pattern similar to that experienced in the United States during the 1970s. Our results highlight the value of effective communication of a central bank's inflation objective and of continued vigilance against inflation in anchoring inflation expectations and fostering macroeconomic stability. July 2003.
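The perpetual-learning mechanism described above is commonly modeled as constant-gain recursive least squares, in which agents discount old data and so never stop updating. Below is a minimal sketch of one such update step; the gain value and the scalar example are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def constant_gain_update(beta, R, x, y, gain=0.02):
    """One constant-gain recursive least-squares step: beliefs beta about the
    regression y = x'beta are nudged toward the latest forecast error, so old
    observations are perpetually discounted and learning never switches off."""
    x = np.asarray(x, dtype=float)
    R = R + gain * (np.outer(x, x) - R)                    # second-moment matrix
    beta = beta + gain * np.linalg.solve(R, x) * (y - x @ beta)
    return beta, R

# One scalar step: prior belief 0, regressor 1, realized outcome 1.
beta, R = constant_gain_update(np.array([0.0]), np.eye(1), [1.0], 1.0)
```

Because the gain is constant rather than decreasing, beliefs remain sensitive to recent data forever, which is how a string of inflation surprises can uncouple the public's expectations from the policy objective.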
2003, 37
The development of tractable forward looking models of monetary policy has led to an explosion of research on the implications of adopting Taylor-type interest rate rules. Indeterminacies have been found to arise for some specifications of the interest rate rule, raising the possibility of inefficient fluctuations due to the dependence of expectations on extraneous "sunspots". Separately, recent work by a number of authors has shown that sunspot equilibria previously thought to be unstable under private agent learning can in some cases be stable when the observed sunspot has a suitable time series structure. In this paper we generalize the "common factor" technique used in this analysis to examine standard monetary models that combine forward looking expectations and predetermined variables. We consider a variety of specifications that incorporate both lagged and expected inflation in the Phillips Curve, and both expected inflation and inertial elements in the policy rule. We find that some policy rules can indeed lead to learnable sunspot solutions and we investigate the conditions under which this phenomenon arises.
510
No. And not only for the reason you think. In a world with multiple inefficiencies, the single policy tool the central bank controls will not undo all inefficiencies; this is well understood. We argue that the world is better characterized by multiple inefficiencies and multiple policymakers with various objectives. Asking the policy question only in terms of optimal monetary policy effectively turns the central bank into the residual claimant of all policy and gives the other policymakers a free hand in pursuing their own goals. This further worsens the tradeoffs faced by the central bank. The optimal monetary policy literature, and the optimal simple rules often labeled flexible inflation targeting, assign all of the cyclical policymaking duties to central banks. This distorts the policy discussion and narrows the policy choices to a suboptimal set. We highlight this issue and call for broader thinking about optimal policies.