Institute for Monetary and Financial Stability (IMFS)
I have assessed changes in the monetary policy stance in the euro area since its inception by applying a Bayesian time-varying parameter framework in conjunction with the Hamiltonian Monte Carlo algorithm. I find that the estimated policy response has varied considerably over time. Most of the results suggest that the response weakened after the onset of the financial crisis and while quantitative measures were still in place, although there are also indications that the weakening of the response to the expected inflation gap may have been less pronounced. I also find that the policy response has become more forceful over the course of the recent sharp rise in inflation. Furthermore, it is essential to model the stochastic volatility relating to deviations from the policy rule as it materially influences the results.
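As a rough illustration of the estimation problem described above, the sketch below writes down the log-posterior of a time-varying parameter Taylor rule with stochastic volatility in the rule deviations, in a form that a gradient-based sampler such as Hamiltonian Monte Carlo could target. The specification, variable names, and prior scales are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def log_posterior(theta, i_rate, infl_gap, out_gap):
    """Log-posterior of an illustrative time-varying Taylor rule with
    stochastic volatility, suitable for a gradient-based (HMC) sampler.

    i_t = rho_t*i_{t-1} + (1-rho_t)*(a_t + b_t*pi_gap_t + c_t*y_gap_t) + eps_t,
    eps_t ~ N(0, exp(h_t)); all parameter paths follow Gaussian random walks.
    """
    T = len(i_rate)
    a, b, c, rho_raw, h = np.split(theta, 5)          # theta stacks five paths of length T

    # Random-walk smoothness priors on the unconstrained paths (illustrative scales).
    lp = 0.0
    for path, scale in ((a, 0.1), (b, 0.1), (c, 0.1), (rho_raw, 0.1), (h, 0.2)):
        lp -= 0.5 * np.sum(np.diff(path) ** 2) / scale ** 2

    rho = 1.0 / (1.0 + np.exp(-rho_raw))              # interest-rate smoothing in (0, 1)
    target = a + b * infl_gap + c * out_gap           # time-varying notional target rate

    # Deviations from the rule carry stochastic volatility exp(h_t).
    resid = i_rate[1:] - (rho[1:] * i_rate[:-1] + (1.0 - rho[1:]) * target[1:])
    lp += np.sum(-0.5 * resid ** 2 / np.exp(h[1:]) - 0.5 * h[1:])
    return lp
```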
This paper presents and compares Bernoulli iterative approaches for solving linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). I find that Bernoulli methods compare favorably to the QZ decomposition in solving DSGE models, providing similar accuracy as measured by the forward error of the solution at a comparable computational burden. The methods can guarantee convergence to a particular, e.g. unique stable, solution and can be combined with other iterative methods, such as the Newton method, making them especially well suited to refining solutions.
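For concreteness, the following sketch shows a Bernoulli-type (linearized fixed-point) iteration on the matrix quadratic A X^2 + B X + C = 0 that underlies the recursive solution of a linear DSGE model. The matrix names, starting value, and stopping rule are illustrative, and the exact variant used in the paper may differ.

```python
import numpy as np

def bernoulli_solve(A, B, C, tol=1e-12, max_iter=10_000):
    """Fixed-point (Bernoulli-type) iteration for the matrix quadratic
    A @ X @ X + B @ X + C = 0 from the recursive solution of a linear DSGE model.

    Iterates X_{k+1} = -inv(B + A @ X_k) @ C starting from X_0 = 0; under the
    usual stability conditions this converges to the stable (minimal) solvent.
    """
    n = B.shape[0]
    X = np.zeros((n, n))
    for _ in range(max_iter):
        X_new = -np.linalg.solve(B + A @ X, C)
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    raise RuntimeError("Bernoulli iteration did not converge")

# Forward error check on the residual of the matrix quadratic:
# res = A @ X @ X + B @ X + C; np.max(np.abs(res)) should be near machine precision.
```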
Fabo, Jančoková, Kempf, and Pástor (2021) show that papers written by central bank researchers find quantitative easing (QE) to be more effective than papers written by academics. Weale and Wieladek (2022) show that a subset of these results lose statistical significance when OLS regressions are replaced by regressions that downweight outliers. We examine those outliers and find no reason to downweight them. Most of them represent estimates from influential central bank papers published in respectable academic journals. For example, among the five papers finding the largest peak effect of QE on output, all five are published in high-quality journals (Journal of Monetary Economics, Journal of Money, Credit and Banking, and Applied Economics Letters), and their average number of citations is well over 200. Moreover, we show that these papers have supported policy communication by the world’s leading central banks and shaped the public perception of the effectiveness of QE. New evidence based on quantile regressions further supports the results in Fabo et al. (2021).
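The contrast between OLS and estimators that downweight extreme observations can be sketched with a quantile (median) regression, as below. The data frame and column names are purely hypothetical and serve only to show the mechanics.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data set: one row per paper, with the estimated peak output
# effect of QE and a dummy for central-bank-affiliated authors.
df = pd.DataFrame({
    "peak_output_effect": [0.2, 0.4, 1.1, 0.3, 2.5, 0.6, 0.1, 1.8],
    "central_bank_author": [0, 0, 1, 0, 1, 1, 0, 1],
})

X = sm.add_constant(df["central_bank_author"])
y = df["peak_output_effect"]

ols = sm.OLS(y, X).fit()                 # sensitive to the largest estimates
median = sm.QuantReg(y, X).fit(q=0.5)    # median regression downweights the tails

print(ols.params["central_bank_author"], median.params["central_bank_author"])
```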
Optimal monetary policy studies typically rely on a single structural model and identify model-specific rules that minimize the unconditional volatilities of inflation and real activity. In their proposed approach, the authors take a large set of structural models and look for model-robust rules that minimize the volatilities at those frequencies that policymakers are most interested in stabilizing. Compared to the status quo approach, their results suggest that policymakers should be more restrained in their inflation responses when their aim is to stabilize inflation and output growth at specific frequencies. Additional caution is called for due to model uncertainty.
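One way to formalize a frequency-specific stabilization objective of the kind described above is a loss that integrates the model-implied spectral densities of inflation and output growth only over the frequency band of interest; the notation below is illustrative and not taken from the paper.

```latex
\mathcal{L}(\phi) \;=\; \int_{\underline{\omega}}^{\overline{\omega}}
  \left[\, f_{\pi}(\omega;\phi) \;+\; \lambda\, f_{\Delta y}(\omega;\phi) \,\right] d\omega
```

Here f_pi and f_Delta-y are the spectral densities implied by a given model under policy-rule coefficients phi, lambda weights output growth relative to inflation, and the band from omega-low to omega-high is the range of frequencies the policymaker targets; a model-robust rule would minimize, for example, the average or the worst case of this loss across the model set.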
Output gap revisions can be large even after many years. Real-time reliability tests might therefore be sensitive to the choice of the final output gap vintage that the real-time estimates are compared to. This is the case for the Federal Reserve’s output gap. When accounting for revisions in response to the global financial crisis in the final output gap, the improvement in real-time reliability since the mid-1990s is much smaller than found by Edge and Rudd (Review of Economics and Statistics, 2016, 98(4), 785-791). The negative bias of real-time estimates from the 1980s has disappeared, but the size of revisions continues to be as large as the output gap itself.
The authors systematically analyse how the real-time reliability assessment is affected by varying the final output gap vintage. They find that the largest changes are caused by output gap revisions after recessions. Economists revise their models in response to such events, leading to economically important revisions not only for the most recent years but reaching back up to two decades. This might improve the understanding of past business cycle dynamics, but it decreases the reliability of real-time output gaps ex post.
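The reliability statistics referred to above (bias, correlation with the final estimate, and the size of revisions relative to the gap itself) can be computed in a few lines; the sketch below is a generic illustration with made-up array names, not the authors' code.

```python
import numpy as np

def reliability_stats(real_time, final):
    """Standard real-time reliability statistics for an output gap series.

    `real_time` and `final` are aligned arrays of real-time and final-vintage
    estimates; the noise-to-signal ratio compares the revision's standard
    deviation with that of the final estimate itself.
    """
    revision = final - real_time
    return {
        "mean_revision": revision.mean(),                    # bias of real-time estimates
        "correlation": np.corrcoef(real_time, final)[0, 1],  # co-movement with the final gap
        "noise_to_signal": revision.std() / final.std(),     # ~1: revisions as large as the gap
    }

# Re-running reliability_stats(real_time, final_vintage) for several candidate
# final vintages shows how sensitive the assessment is to that choice.
```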
Highly interconnected global supply chains make countries vulnerable to supply chain disruptions. The authors estimate the macroeconomic effects of global supply chain shocks for the euro area. Their empirical model combines business cycle variables with data from international container trade.
Using a novel identification scheme, they augment conventional sign restrictions on the impulse responses by narrative information about three episodes: the Tohoku earthquake in 2011, the Suez Canal obstruction in 2021, and the Shanghai backlog in 2022. They show that a global supply chain shock causes a drop in euro area real economic activity and a strong increase in consumer prices. Over a horizon of one year, the global supply chain shock explains about 30% of inflation dynamics. They also use regional data on supply chain pressure to isolate shocks originating in China.
Their results show that supply chain disruptions originating in China are an important driver for unexpected movements in industrial production, while disruptions originating outside China are an especially important driver for the dynamics of consumer prices.
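A stylized sketch of the identification idea: a candidate supply chain shock is retained only if the impulse responses carry the required signs and the identified shock is positive at the narrative episodes (Tohoku 2011, Suez 2021, Shanghai 2022). The variable ordering, horizons, and function names below are illustrative assumptions.

```python
import numpy as np

def admissible(irf, shocks, narrative_dates):
    """Check a candidate structural shock against sign and narrative restrictions.

    irf             : array (horizons, variables) of impulse responses to the candidate
                      supply chain shock, ordered [activity, prices, supply_pressure].
    shocks          : array of the identified structural shock over the sample.
    narrative_dates : indices of episodes at which the shock must be positive.
    """
    signs_ok = (
        np.all(irf[:4, 0] <= 0)      # real activity falls for the first few periods
        and np.all(irf[:4, 1] >= 0)  # consumer prices rise
        and np.all(irf[:4, 2] >= 0)  # supply chain pressure rises
    )
    narrative_ok = np.all(shocks[narrative_dates] > 0)
    return signs_ok and narrative_ok

# In a full algorithm, random rotations of the reduced-form residuals are drawn
# and only rotations for which admissible(...) is True are retained.
```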
The author proposes a Differential-Independence Mixture Ensemble (DIME) sampler for the Bayesian estimation of macroeconomic models. It allows sampling from particularly challenging, high-dimensional black-box posterior distributions which may also be computationally expensive to evaluate. DIME is a “Swiss Army knife”, combining the advantages of a broad class of gradient-free global multi-start optimizers with the properties of a Markov chain Monte Carlo (MCMC) method. This includes fast burn-in and convergence absent any prior numerical optimization or initial guesses, good performance for multimodal distributions, a large number of chains (the “ensemble”) running in parallel, and an endogenous proposal density, generated from the state of the full ensemble, that respects the bounds of the prior distribution. The author shows that the number of parallel chains scales well with the number of necessary ensemble iterations.
DIME is used to estimate the medium-scale heterogeneous agent New Keynesian (“HANK”) model with liquid and illiquid assets, thereby, for the first time, also including the households’ preference parameters in the estimation. The results point mildly towards a less accentuated role of household heterogeneity for the empirical macroeconomic dynamics.
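For intuition, the sketch below implements the differential-evolution ensemble move (ter Braak's DE-MC) that samplers in this family build on: each chain proposes a jump along the difference between two other chains, so the proposal adapts to the posterior's shape without gradients. DIME additionally mixes in an independence proposal fitted to the ensemble, which is omitted here; the target density and tuning constants are illustrative.

```python
import numpy as np

def de_ensemble_step(ensemble, log_prob, rng, gamma=None, jitter=1e-7):
    """One differential-evolution MCMC sweep over an ensemble of chains.

    ensemble : array (n_chains, n_dim) of current chain positions.
    log_prob : callable mapping a parameter vector to its log posterior density.
    """
    n_chains, n_dim = ensemble.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2.0 * n_dim)              # standard DE-MC scaling
    new = ensemble.copy()
    logp = np.array([log_prob(x) for x in ensemble])
    for i in range(n_chains):
        j, k = rng.choice([c for c in range(n_chains) if c != i], size=2, replace=False)
        proposal = new[i] + gamma * (new[j] - new[k]) + jitter * rng.standard_normal(n_dim)
        logp_prop = log_prob(proposal)
        if np.log(rng.uniform()) < logp_prop - logp[i]:  # Metropolis acceptance
            new[i], logp[i] = proposal, logp_prop
    return new

# Example: sample a 10-dimensional standard normal with 40 parallel chains.
rng = np.random.default_rng(0)
chains = rng.standard_normal((40, 10))
for _ in range(500):
    chains = de_ensemble_step(chains, lambda x: -0.5 * x @ x, rng)
```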
The authors estimate perceptions about the Fed's monetary policy rule from panel data on professional forecasts of interest rates and macroeconomic conditions. The perceived dependence of the federal funds rate on economic conditions is time-varying and cyclical: high during tightening episodes but low during easings. Forecasters update their perceptions about the policy rule in response to monetary policy actions, measured by high-frequency interest rate surprises, suggesting that forecasters have imperfect information about the rule. The perceived rule impacts asset prices crucial for monetary policy transmission, driving how interest rates respond to macroeconomic news and explaining term premia in long-term interest rates.
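One simple way to back such a perceived rule out of survey data is to run rolling cross-forecaster regressions of the expected policy rate on expected macro conditions, as sketched below; the arrays, window length, and regressors are illustrative and do not reproduce the authors' specification.

```python
import numpy as np

def perceived_rule(ffr_forecast, infl_forecast, unemp_forecast, window=20):
    """Rolling cross-forecaster regressions of the expected policy rate on
    expected macro conditions; returns the time path of perceived coefficients.

    Each input is an array (surveys, forecasters) of individual forecasts.
    """
    T = ffr_forecast.shape[0]
    coefs = np.full((T, 3), np.nan)                   # [const, inflation, unemployment]
    for t in range(window, T):
        y = ffr_forecast[t - window:t].ravel()
        X = np.column_stack([
            np.ones_like(y),
            infl_forecast[t - window:t].ravel(),
            unemp_forecast[t - window:t].ravel(),
        ])
        coefs[t], *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs
```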
The authors propose a new method to forecast macroeconomic variables that combines two existing approaches to mixed-frequency data in DSGE models. The first existing approach estimates the DSGE model at a quarterly frequency and uses higher-frequency auxiliary data only for forecasting. The second transforms the quarterly state space into a monthly frequency. Their algorithm combines the advantages of these two existing approaches. They compare the new method with the existing methods using simulated data and real-world data. With simulated data, the new method outperforms all other methods, including forecasts from the standard quarterly model. With real-world data, incorporating auxiliary variables as in their method substantially decreases forecasting errors for recessions, but casting the model in a monthly frequency delivers better forecasts in normal times.
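Casting a quarterly model at a monthly frequency amounts to running the Kalman filter on a state space in which quarterly series are unobserved in two of every three months. The sketch below shows a filter that skips missing observations; the system matrices and dimensions are illustrative.

```python
import numpy as np

def kalman_filter_missing(y, T, Z, Q, H, x0, P0):
    """Kalman filter that handles missing (NaN) observations, as needed when a
    quarterly model is cast at a monthly frequency.

    y : array (periods, n_obs) with NaN where a series is unobserved that month.
    x_t = T x_{t-1} + e_t, e_t ~ N(0, Q);   y_t = Z x_t + u_t, u_t ~ N(0, H).
    """
    x, P = x0.copy(), P0.copy()
    loglik, filtered = 0.0, []
    for y_t in y:
        x, P = T @ x, T @ P @ T.T + Q                      # prediction step
        obs = ~np.isnan(y_t)                               # series observed this period
        if obs.any():
            Z_t, H_t = Z[obs], H[np.ix_(obs, obs)]
            v = y_t[obs] - Z_t @ x                         # prediction error
            F = Z_t @ P @ Z_t.T + H_t
            K = P @ Z_t.T @ np.linalg.inv(F)
            x, P = x + K @ v, P - K @ Z_t @ P              # update step
            sign, logdet = np.linalg.slogdet(F)
            loglik += -0.5 * (logdet + v @ np.linalg.solve(F, v) + obs.sum() * np.log(2 * np.pi))
        filtered.append(x.copy())
    return np.array(filtered), loglik
```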
The authors present and compare Newton-based methods from the applied mathematics literature for solving the matrix quadratic that underlies the recursive solution of linear DSGE models. The methods are compared using nearly 100 different models from the Macroeconomic Model Data Base (MMB) and different parameterizations of the monetary policy rule in the medium-scale New Keynesian model of Smets and Wouters (2007). They find that Newton-based methods compare favorably in solving DSGE models, providing higher accuracy as measured by the forward error of the solution at a comparable computational burden. The methods, however, cannot guarantee convergence to a particular, e.g. unique stable, solution, but their iterative procedures lend themselves to refining solutions obtained from different methods or parameterizations.
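For concreteness, one Newton step on the matrix quadratic A X^2 + B X + C = 0 solves a generalized Sylvester equation for the update, obtained from the Fréchet derivative. The sketch below solves that equation by brute-force Kronecker vectorization for clarity; a production implementation would use a structured solver, and the matrix names and stopping rule are illustrative.

```python
import numpy as np

def newton_solve(A, B, C, X0=None, tol=1e-12, max_iter=50):
    """Newton's method for the matrix quadratic A @ X @ X + B @ X + C = 0.

    Each step solves the linearized (generalized Sylvester) equation
        (A @ X + B) @ H + A @ H @ X = -(A @ X @ X + B @ X + C)
    for the update H via Kronecker vectorization.
    """
    n = B.shape[0]
    X = np.zeros((n, n)) if X0 is None else X0.copy()
    I = np.eye(n)
    for _ in range(max_iter):
        F = A @ X @ X + B @ X + C                       # residual (forward error proxy)
        if np.max(np.abs(F)) < tol:
            return X
        lhs = np.kron(I, A @ X + B) + np.kron(X.T, A)   # vec((AX+B)H + A H X) = lhs @ vec(H)
        H = np.linalg.solve(lhs, -F.ravel(order="F")).reshape((n, n), order="F")
        X = X + H
    return X
```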