## C53 Forecasting and Other Model Applications

This paper proposes tests for out-of-sample comparisons of interval forecasts based on parametric conditional quantile models. The tests rank the distance between actual and nominal conditional coverage with respect to the set of conditioning variables from all models, for a given loss function. We propose a pairwise test to compare two models for a single predictive interval. The set-up is then extended to a comparison across multiple models and/or intervals. The limiting distribution varies depending on whether models are strictly non-nested or overlapping. In the latter case, degeneracy may occur. We establish the asymptotic validity of wild bootstrap based critical values across all cases. An empirical application to Growth-at-Risk (GaR) uncovers situations in which a richer set of financial indicators is found to outperform a commonly-used benchmark model when predicting downside risk to economic activity.
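
The ranking object in such tests is the gap between actual and nominal coverage. A minimal pure-Python sketch of that gap under squared-error loss (function names and data are illustrative; the paper's tests additionally condition on covariates and use wild-bootstrap critical values):

```python
# Minimal sketch: empirical vs. nominal coverage of an interval forecast.
# All names and numbers are illustrative, not taken from the paper.

def empirical_coverage(actuals, lowers, uppers):
    """Share of outcomes falling inside their forecast intervals."""
    hits = [lo <= y <= up for y, lo, up in zip(actuals, lowers, uppers)]
    return sum(hits) / len(hits)

def coverage_distance(actuals, lowers, uppers, nominal):
    """Distance between actual and nominal coverage (squared-error loss)."""
    return (empirical_coverage(actuals, lowers, uppers) - nominal) ** 2

# Toy pairwise comparison at 90% nominal coverage
y = [0.1, -0.2, 0.4, 0.0, 0.3, -0.1, 0.2, 0.5, -0.3, 0.1]
model_a = ([-0.5] * 10, [0.6] * 10)   # wide intervals: covers all outcomes
model_b = ([0.0] * 10, [0.2] * 10)    # narrow intervals: covers few

d_a = coverage_distance(y, *model_a, nominal=0.9)
d_b = coverage_distance(y, *model_b, nominal=0.9)
print(d_a, d_b)   # the model with the smaller distance is preferred
```

A real test would compare these distances against bootstrap critical values rather than ranking the point estimates directly.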

Measuring and reducing energy consumption constitutes a crucial concern in public policies aimed at mitigating global warming. The real estate sector faces the challenge of enhancing building efficiency, where insights from experts play a pivotal role in the evaluation process. This research employs a machine learning approach to analyze expert opinions, seeking to extract the key determinants influencing potential residential building efficiency and establishing an efficient prediction framework. The study leverages open Energy Performance Certificate databases from two countries with distinct latitudes, namely the UK and Italy, to investigate whether enhancing energy efficiency necessitates different intervention approaches. The findings reveal the existence of non-linear relationships between efficiency and building characteristics, which cannot be captured by conventional linear modeling frameworks. By offering insights into the determinants of residential building efficiency, this study provides guidance to policymakers and stakeholders in formulating effective and sustainable strategies for energy efficiency improvement.
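
The core finding, that linear frameworks miss non-linear efficiency relationships, can be shown in miniature. The U-shaped relationship and data below are invented for illustration; the paper's actual pipeline applies machine-learning models to EPC data:

```python
# Toy illustration (not the paper's pipeline): a linear fit cannot capture
# a U-shaped relationship between a building characteristic and energy
# efficiency, while adding a non-linear feature recovers it.

def r_squared(x, y):
    """R^2 of a one-regressor OLS fit of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy ** 2) / (sxx * syy) if sxx * syy else 0.0

# Hypothetical characteristic with a purely non-linear (U-shaped) effect
x = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]
y = [v ** 2 for v in x]

r2_linear = r_squared(x, y)                      # linear fit explains ~nothing
r2_quadratic = r_squared([v ** 2 for v in x], y)  # non-linear feature fits
print(r2_linear, r2_quadratic)
```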

In the euro area, monetary policy is conducted by a single central bank for 20 member countries. However, countries are heterogeneous in their economic development, including their inflation rates. This paper combines a New Keynesian model and a neural network to assess whether the European Central Bank (ECB) conducted monetary policy between 2002 and 2022 according to the weighted average of the inflation rates within the European Monetary Union (EMU) or reacted more strongly to the inflation rate developments of certain EMU countries.
The New Keynesian model first generates data which is used to train and evaluate several machine learning algorithms. The authors find that a neural network performs best out-of-sample. They use this algorithm to classify historical EMU data and to determine the exact weight on the inflation rate of each EMU member in every quarter of the past two decades. Their findings suggest that the ECB placed disproportionate emphasis on the inflation rates of EMU members that exhibited high inflation rate volatility for the vast majority of the time frame considered (80%), with a median inflation weight of 67% on these countries. They show that these results stem from a tendency of the ECB to react more strongly to countries whose inflation rates exhibit greater deviations from their long-term trend.
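
The identification idea, recovering the weight a central bank places on each member's inflation from observed rates, can be sketched in a stylized two-country setting. The rule, the coefficient phi = 1.5 and the series below are illustrative assumptions; the paper instead trains a neural network on data simulated from a full New Keynesian model:

```python
# Stylized sketch (not the paper's model): if rates are set via
#   i_t = phi * (w * pi1_t + (1 - w) * pi2_t),
# the inflation weight w is recoverable by least squares.

phi, true_w = 1.5, 0.67   # assumed reaction coefficient and weight

pi1 = [2.0, 3.5, 5.0, 4.2, 1.1, 6.3, 2.8]   # volatile-inflation country
pi2 = [1.8, 2.0, 2.1, 1.9, 2.0, 2.2, 1.9]   # stable-inflation country
i = [phi * (true_w * a + (1 - true_w) * b) for a, b in zip(pi1, pi2)]

# Least-squares estimate of w from observed rates and inflation rates:
# i_t / phi - pi2_t = w * (pi1_t - pi2_t)
d = [a - b for a, b in zip(pi1, pi2)]        # inflation differential
z = [r / phi - b for r, b in zip(i, pi2)]    # rate-implied weighted gap
w_hat = sum(zi * di for zi, di in zip(z, d)) / sum(di * di for di in d)
print(round(w_hat, 2))   # recovers the weight on the volatile country
```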

Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic restrictions in terms of deep parameters. For given deep parameters, structural equations are solved for "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in a LREM. The sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate a LREM in terms of deep parameters. Forwardly-estimated deep parameters determine forwardly-estimated RES equations that Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called economic identifying restrictions on deep parameters of forwardly-estimated LREMs "incredible", because he considered in-sample fits of forwardly-estimated OVAR equations inadequate and out-of-sample policy predictions of forwardly-estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and directly using the directly-estimated OVAR equations to make policy predictions. However, if assumed or predicted out-of-sample policy variables in directly-made policy predictions differ significantly from in-sample values, then the out-of-sample policy predictions will not satisfy Lucas's critique.
If directly-estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then identification 2 derived in the paper can linearly "inversely" estimate the underlying RES equations from the directly-estimated OVAR equations, and the inversely-estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly-estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely-estimated RES equations to make accurate (credible) out-of-sample policy predictions, then he should consider the inversely-estimated RES equations to be credible. Thus, inversely-estimated RES equations by identification 2 can reconcile Lucas's advocacy for making policy predictions with RES equations and Sims's advocacy for directly estimating OVAR equations.
The paper also derives identification 1 of structural coefficients from RES coefficients, which contributes mainly by showing that directly-estimated reduced-form OVAR equations can have underlying LREM-structural equations.
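
The chain of mappings described in the abstract can be written compactly. The notation below is an illustrative sketch (the paper's exact specification may include more lags and expectation terms), with $\theta$ the deep parameters; forward estimation maps $\theta$ to $(P, Q)$ and hence to the OVAR coefficients, while identification 2 inverts the last step, recovering the RES coefficients $(P, Q)$ from directly-estimated OVAR coefficients:

```latex
\begin{align*}
&\text{Structural:} & A_0(\theta)\, y_t &= A_1(\theta)\, y_{t-1}
  + A_2(\theta)\, \mathrm{E}_t\, y_{t+1} + B(\theta)\, x_t \\
&\text{RES (endogenous VARX):} & y_t &= P\, y_{t-1} + Q\, x_t \\
&\text{Exogenous VAR:} & x_t &= R\, x_{t-1} + \varepsilon_t \\
&\text{OVAR:} &
\begin{pmatrix} y_t \\ x_t \end{pmatrix} &=
\begin{pmatrix} P & Q R \\ 0 & R \end{pmatrix}
\begin{pmatrix} y_{t-1} \\ x_{t-1} \end{pmatrix} +
\begin{pmatrix} Q \\ I \end{pmatrix} \varepsilon_t
\end{align*}
```

Substituting the exogenous VAR into the RES equations gives $y_t = P\, y_{t-1} + QR\, x_{t-1} + Q\, \varepsilon_t$, which is the top block of the OVAR.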

The authors examine the effectiveness of labor cost reductions as a means to stimulate economic activity and assess how the effects differ with the prevailing exchange rate regime. They develop a medium-scale three-region DSGE model and show that the impact of a cut in the employers' social security contributions rate does not vary significantly under different exchange rate regimes. They find that both the interest rate and the exchange rate channel matter. Furthermore, the measure appears to be effective even if it comes along with a consumption tax increase to preserve long-term fiscal sustainability.
Finally, they assess whether the obtained theoretical results hold up empirically by applying the local projection method. Regression results suggest that changes in employers' social security contribution rates have statistically significant real effects: a one percentage point reduction leads to an average cumulative rise in output of around 1.3 percent in the medium term. Moreover, the outcome does not differ significantly across exchange rate regimes.
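
A local projection regresses the cumulative change in the outcome at each horizon on the policy change at time t. The sketch below uses invented data and a single stylized policy shock, not the paper's estimates:

```python
# Minimal local-projection sketch (Jorda-style), illustrative data only:
# regress y_{t+h} - y_{t-1} on a dummy for an SSC cut at time t.

def ols_slope(x, y):
    """Slope coefficient of a one-regressor OLS fit with intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return num / sum((a - mx) ** 2 for a in x)

# Hypothetical data: one SSC cut at t = 3 with a rising output response
effect = [0.3, 0.6, 1.0, 1.3]                 # assumed response path (%)
ssc_cut = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
output = [100 + sum(effect[min(t - j, 3)] * ssc_cut[j] for j in range(t + 1))
          for t in range(len(ssc_cut))]

def local_projection(shock, y, h):
    """beta_h from regressing the cumulative change at horizon h on shock_t."""
    xs = [shock[t] for t in range(1, len(y) - h)]
    ys = [y[t + h] - y[t - 1] for t in range(1, len(y) - h)]
    return ols_slope(xs, ys)

irf = [local_projection(ssc_cut, output, h) for h in range(4)]
print([round(b, 3) for b in irf])   # impulse response builds with horizon
```

In practice the regression would include controls and compute robust standard errors at each horizon; the one-regressor version only shows the mechanics.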

This paper provides an overview of how to use "big data" for economic research. We investigate the performance and ease of use of different Spark applications running on a distributed file system to enable the handling and analysis of data sets which were previously not usable due to their size. More specifically, we explain how to use Spark to (i) explore big data sets which exceed the memory of retail-grade computers and (ii) run typical econometric tasks, including microeconometric, panel data and time series regression models, which are prohibitively expensive to evaluate on stand-alone machines. By bridging the gap between the abstract concept of Spark and ready-to-use examples which can easily be altered to suit the researcher's needs, we provide economists, and social scientists more generally, with the theory and practice to handle the ever-growing data sets available. The ease of reproducing the examples in this paper makes this guide a useful reference for researchers with a limited background in data handling and distributed computing.

It has been forty years since the oil crisis of 1973/74. This crisis was one of the defining economic events of the 1970s and has shaped how many economists think about oil price shocks. In recent years, a large literature on the economic determinants of oil price fluctuations has emerged. Drawing on this literature, we first provide an overview of the causes of all major oil price fluctuations between 1973 and 2014. We then discuss why oil price fluctuations remain difficult to predict, despite economists' improved understanding of oil markets. Unexpected oil price fluctuations are commonly referred to as oil price shocks. We document that, in practice, consumers, policymakers, financial market participants and economists may have different oil price expectations, and that what may be surprising to some need not be equally surprising to others.

Some observers have conjectured that oil supply shocks in the United States and in other countries are behind the plunge in the price of oil since June 2014. Others have suggested that a major shock to oil price expectations occurred when in late November 2014 OPEC announced that it would maintain current production levels despite the steady increase in non-OPEC oil production. Both conjectures are perfectly reasonable ex ante, yet we provide quantitative evidence that neither explanation appears supported by the data. We show that more than half of the decline in the price of oil was predictable in real time as of June 2014 and therefore must have reflected the cumulative effects of earlier oil demand and supply shocks. Among the shocks that occurred after June 2014, the most influential shock resembles a negative shock to the demand for oil associated with a weakening economy in December 2014. In contrast, there is no evidence of any large positive oil supply shocks between June and December. We conclude that the difference in the evolution of the price of oil, which declined by 44% over this period, compared with other commodity prices, which on average only declined by about 5%-15%, reflects oil-market specific developments that took place prior to June 2014.

Although there is much interest in the future retail price of gasoline among consumers, industry analysts, and policymakers, it is widely believed that changes in the price of gasoline are essentially unforecastable given publicly available information. We explore a range of new forecasting approaches for the retail price of gasoline and compare their accuracy with the no-change forecast. Our key finding is that substantial reductions in the mean-squared prediction error (MSPE) of gasoline price forecasts are feasible in real time at horizons up to two years, as are substantial increases in directional accuracy. The most accurate individual model is a VAR(1) model for real retail gasoline and Brent crude oil prices. Even greater reductions in MSPEs are possible by constructing a pooled forecast that assigns equal weight to five of the most successful forecasting models. Pooled forecasts have lower MSPE than the EIA gasoline price forecasts and the gasoline price expectations in the Michigan Survey of Consumers. We also show that as much as 39% of the decline in gas prices between June and December 2014 was predictable.
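
The pooling step is simple to state: average the forecasts of several models with equal weights and compare mean-squared prediction errors against the no-change benchmark. The numbers below are made up for illustration:

```python
# Toy illustration of equal-weight forecast pooling vs. the no-change
# benchmark (all prices are invented, not the paper's data).

def mspe(forecasts, actuals):
    """Mean-squared prediction error."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

actual    = [2.10, 2.25, 2.40, 2.30, 2.55, 2.70]
no_change = [2.00, 2.10, 2.25, 2.40, 2.30, 2.55]   # previous observation

# Hypothetical forecasts from three individual models
model_forecasts = [
    [2.05, 2.20, 2.45, 2.35, 2.50, 2.65],
    [2.15, 2.30, 2.30, 2.25, 2.60, 2.75],
    [2.00, 2.15, 2.40, 2.30, 2.45, 2.80],
]
# Equal-weight pooled forecast: average across models at each date
pooled = [sum(fs) / len(fs) for fs in zip(*model_forecasts)]

print(mspe(no_change, actual), mspe(pooled, actual))
```

Averaging tends to cancel offsetting model errors, which is why pooled forecasts can beat each individual model as well as the benchmark.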

The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes utilizing a Kalman filter that handles missing observations consistently in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models.
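
The key computational device, a Kalman filter that remains valid when some observations are missing, can be sketched in a scalar local-level model. This is a generic illustration under assumed variances, not the paper's multivariate implementation:

```python
# Sketch: Kalman filter for a scalar local-level model that skips the
# update step when an observation is missing (None), so the likelihood
# can be evaluated for any subset of the observables.
import math

def kalman_loglik(ys, q=0.1, r=0.5):
    """Log-likelihood under assumed state noise q and measurement noise r."""
    a, p = 0.0, 1.0            # prior mean and variance of the state
    loglik = 0.0
    for y in ys:
        p = p + q              # prediction step: state variance grows
        if y is None:
            continue           # missing: no update, no likelihood term
        f = p + r              # prediction-error variance
        v = y - a              # prediction error
        loglik += -0.5 * (math.log(2 * math.pi * f) + v * v / f)
        k = p / f              # Kalman gain
        a = a + k * v          # update state mean
        p = (1 - k) * p        # update state variance
    return loglik

full   = [0.2, 0.1, None, 0.4, 0.3]
subset = [0.2, None, None, 0.4, None]   # likelihood for fewer observables
ll_full, ll_subset = kalman_loglik(full), kalman_loglik(subset)
print(ll_full, ll_subset)
```

Restricting the forecast comparison to a subset of variables simply marks the others as missing; the filter's prediction step still propagates the state through those periods.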