C52 Model Evaluation and Selection
The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes to utilize a missing-observations-consistent Kalman filter in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models.
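For readers who want to see the mechanics, the sketch below shows how a Kalman filter can skip missing observations while accumulating the log predictive likelihood of whichever variables are observed each period. It is a minimal illustration of the general idea, not the paper's implementation; the state-space matrices and the function name are assumptions.

```python
# Minimal sketch (not the paper's code): Kalman filter over a linear Gaussian
# state space x' = A x + w, y = C x + v, with np.nan marking missing entries
# of y. The log predictive likelihood is summed over observed subsets only.
import numpy as np

def kalman_predictive_loglik(y, A, C, Q, R, x0, P0):
    """y: (T, n) array with np.nan marking missing observations."""
    x, P = x0.copy(), P0.copy()
    loglik = 0.0
    for t in range(y.shape[0]):
        # One-step-ahead prediction of the state
        x = A @ x
        P = A @ P @ A.T + Q
        obs = ~np.isnan(y[t])            # which variables are observed at t
        if obs.any():
            Co, Ro = C[obs], R[np.ix_(obs, obs)]
            e = y[t, obs] - Co @ x       # forecast error for observed subset
            S = Co @ P @ Co.T + Ro       # its predictive covariance
            loglik += -0.5 * (len(e) * np.log(2 * np.pi)
                              + np.linalg.slogdet(S)[1]
                              + e @ np.linalg.solve(S, e))
            K = P @ Co.T @ np.linalg.inv(S)   # Kalman gain
            x = x + K @ e
            P = P - K @ Co @ P
    return loglik
```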
In this paper we investigate the comparative properties of empirically-estimated monetary models of the U.S. economy. We make use of a new database of models designed for such investigations. We focus on three representative models: the Christiano, Eichenbaum, Evans (2005) model, the Smets and Wouters (2007) model, and the Taylor (1993a) model. Although the three models differ in terms of structure, estimation method, sample period, and data vintage, we find surprisingly similar economic impacts of unanticipated changes in the federal funds rate. However, the optimal monetary policy responses to other sources of economic fluctuations differ widely across the models. We show that simple optimal policy rules that respond to the growth rate of output and smooth the interest rate are not robust. In contrast, policy rules with no interest rate smoothing and no response to the growth rate, as distinct from the level, of output are more robust. Robustness can be improved further by optimizing rules with respect to the average loss across the three models.
In this paper we investigate the comparative properties of empirically-estimated monetary models of the U.S. economy using a new database of models designed for such investigations. We focus on three representative models due to Christiano, Eichenbaum, Evans (2005), Smets and Wouters (2007) and Taylor (1993a). Although these models differ in terms of structure, estimation method, sample period, and data vintage, we find surprisingly similar economic impacts of unanticipated changes in the federal funds rate. However, optimized monetary policy rules differ across models and lack robustness. Model averaging offers an effective strategy for improving the robustness of policy rules.
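The class of simple rules and the model-averaging objective discussed in these two abstracts can be summarized in stylized form as follows; the notation and loss weights are illustrative rather than taken from the paper:

```latex
% Stylized three-parameter rule and model-averaged loss; the exact
% specification and loss weights in the paper may differ.
i_t = \rho\, i_{t-1} + \alpha\, \pi_t + \beta\, \Delta y_t,
\qquad
L_m(\rho,\alpha,\beta) = \operatorname{Var}_m(\pi_t)
  + \lambda_y \operatorname{Var}_m(y_t)
  + \lambda_i \operatorname{Var}_m(\Delta i_t),
\qquad
\min_{\rho,\alpha,\beta}\ \frac{1}{M}\sum_{m=1}^{M} L_m .
```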
This chapter aims to provide a hands-on approach to New Keynesian models and their uses for macroeconomic policy analysis. It starts by reviewing the origins of the New Keynesian approach, the key model ingredients and representative models. Building blocks of current-generation dynamic stochastic general equilibrium (DSGE) models are discussed in detail. These models address the famous Lucas critique by deriving behavioral equations systematically from the optimizing and forward-looking decision-making of households and firms subject to well-defined constraints. State-of-the-art methods for solving and estimating such models are reviewed and presented in examples. The chapter goes beyond the mere presentation of the most popular benchmark model by providing a framework for model comparison along with a database that includes a wide variety of macroeconomic models. Thus, it offers a convenient approach for comparing new models to available benchmarks and for investigating whether particular policy recommendations are robust to model uncertainty. Such robustness analysis is illustrated by evaluating the performance of simple monetary policy rules across a range of recently-estimated models including some with financial market imperfections and by reviewing recent comparative findings regarding the magnitude of government spending multipliers. The chapter concludes with a discussion of important objectives for ongoing and future research using the New Keynesian framework.
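As a concrete illustration of the building blocks mentioned above, the sketch below solves the textbook three-equation New Keynesian model (NKPC, dynamic IS curve, Taylor-type rule) for an AR(1) cost-push shock by the method of undetermined coefficients. Parameter values are illustrative and the model is far simpler than the estimated DSGE models in the chapter's database.

```python
# Minimal sketch of the textbook three-equation New Keynesian model, solved
# by undetermined coefficients for an AR(1) cost-push shock u_t. All
# parameter values are illustrative, not taken from the chapter.
import numpy as np

beta, kappa, sigma = 0.99, 0.13, 1.0   # discount factor, NKPC slope, inverse IES
phi_pi, phi_x, rho = 1.5, 0.125, 0.8   # Taylor-rule coefficients, shock persistence

# Guess pi_t = a*u_t, x_t = b*u_t; substitute into
#   NKPC: pi = beta*E[pi'] + kappa*x + u
#   IS:   x  = E[x'] - (1/sigma)*(i - E[pi'])
#   Rule: i  = phi_pi*pi + phi_x*x
# and match coefficients on u_t to get a 2x2 linear system in (a, b).
M = np.array([[1 - beta * rho,              -kappa],
              [(phi_pi - rho) / sigma, 1 - rho + phi_x / sigma]])
a, b = np.linalg.solve(M, np.array([1.0, 0.0]))

# Impulse responses to a unit cost-push shock: inflation rises, output gap falls
horizons = np.arange(12)
u = rho ** horizons
print("inflation IRF: ", np.round(a * u, 3))
print("output gap IRF:", np.round(b * u, 3))
```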
Output gap revisions can be large even after many years. Real-time reliability tests might therefore be sensitive to the choice of the final output gap vintage that the real-time estimates are compared to. This is the case for the Federal Reserve’s output gap. When accounting for revisions in response to the global financial crisis in the final output gap, the improvement in real-time reliability since the mid-1990s is much smaller than found by Edge and Rudd (Review of Economics and Statistics, 2016, 98(4), 785-791). The negative bias of real-time estimates from the 1980s has disappeared, but the size of revisions continues to be as large as the output gap itself.
The authors systematically analyse how the real-time reliability assessment is affected by varying the final output gap vintage. They find that the largest changes are caused by output gap revisions after recessions. Economists revise their models in response to such events, leading to economically important revisions not only for the most recent years but reaching back up to two decades. This might improve the understanding of past business cycle dynamics, but it decreases the reliability of real-time output gaps ex post.
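A minimal sketch of the kind of real-time reliability statistics at issue here (bias, correlation, and a noise-to-signal ratio of revisions) is shown below; the series are synthetic placeholders, not the Federal Reserve vintages analyzed in the paper.

```python
# Minimal sketch with synthetic data: compare a real-time output gap estimate
# to a chosen "final" vintage via bias, correlation, and the RMS of revisions
# relative to the size of the final gap.
import numpy as np

rng = np.random.default_rng(0)
final = rng.normal(0, 2.0, 120)                  # "final" vintage gap (synthetic)
realtime = final + rng.normal(-0.5, 1.8, 120)    # real-time estimate: bias + noise

rev = final - realtime                            # total revision
print("mean revision (bias):       ", round(rev.mean(), 2))
print("correlation rt vs. final:   ", round(np.corrcoef(realtime, final)[0, 1], 2))
print("noise-to-signal (RMS ratio):", round(np.sqrt((rev**2).mean()) / final.std(), 2))
```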
We show that the use of correlations for modeling dependencies may lead to counterintuitive behavior of risk measures, such as Value-at-Risk (VaR) and Expected Shortfall (ES), when the risk of very rare events is assessed via Monte Carlo techniques. The phenomenon is demonstrated for mixture models adapted from credit risk analysis as well as for common Poisson-shock models used in reliability theory. An obvious implication of this finding pertains to the analysis of operational risk. The alleged incentive suggested by the New Basel Capital Accord (Basel II), namely decreasing minimum capital requirements by allowing for less than perfect correlation, may not necessarily be attainable.
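The sketch below illustrates the Monte Carlo setup in question for a one-factor Gaussian mixture (Vasicek-type) credit portfolio: defaults are correlated through a common factor, and VaR and ES are read off the simulated loss distribution. All parameters are illustrative.

```python
# Minimal sketch: Monte Carlo VaR and ES for a one-factor Gaussian mixture
# credit portfolio. Defaults are conditionally independent given a common
# factor Z; the asset correlation enters through corr. Illustrative parameters.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_obligors, pd_uncond, corr, n_sims = 1000, 0.01, 0.2, 200_000
threshold = norm.ppf(pd_uncond)

Z = rng.standard_normal(n_sims)                               # common factor draws
p_cond = norm.cdf((threshold - np.sqrt(corr) * Z) / np.sqrt(1 - corr))
defaults = rng.binomial(n_obligors, p_cond)                   # conditional defaults
losses = defaults / n_obligors                                # fractional portfolio loss

alpha = 0.999
var = np.quantile(losses, alpha)                              # Value-at-Risk
es = losses[losses >= var].mean()                             # Expected Shortfall
print(f"VaR(99.9%) = {var:.4f}, ES(99.9%) = {es:.4f}")
```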
The authors relax the standard assumption in the dynamic stochastic general equilibrium (DSGE) literature that exogenous processes are governed by AR(1) processes and estimate ARMA(p,q) orders and parameters of exogenous processes. Methodologically, they contribute to the Bayesian DSGE literature by using Reversible Jump Markov Chain Monte Carlo (RJMCMC) to sample from the unknown ARMA orders and their associated parameter spaces of varying dimensions.
Estimating the technology process in the neoclassical growth model on postwar U.S. GDP data, they cast considerable doubt on the standard AR(1) assumption in favor of higher-order processes. They find that the posterior concentrates density on hump-shaped impulse responses for all endogenous variables, consistent with alternative empirical estimates and with the rigidities behind many richer structural models. When sampling from noninvertible MA representations, the posterior credible set contains a negative response of hours to a positive technology shock. While the posterior contains significant uncertainty regarding the exact order, the results are insensitive to the choice of data filter; this contrasts with the authors' ARMA estimates of GDP itself, which vary significantly depending on the choice of HP or first-difference filter.
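Full RJMCMC requires transdimensional proposals for the ARMA parameters themselves; the sketch below instead substitutes a BIC-based Laplace-type approximation to the marginal likelihood inside a Metropolis walk over the order space, which conveys the flavor of sampling over (p, q) without reproducing the authors' algorithm. Data and tuning choices are placeholders, and the proposal's asymmetry at the zero boundary is ignored for simplicity.

```python
# Minimal sketch: Metropolis sampling over ARMA(p, q) orders using -BIC/2 as
# an approximate log marginal likelihood (a simplification, not RJMCMC).
import warnings
from collections import Counter
from functools import lru_cache

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

warnings.simplefilter("ignore")                       # suppress fit warnings
rng = np.random.default_rng(2)
y = arma_generate_sample(ar=[1, -0.7, 0.2], ma=[1, 0.4], nsample=300,
                         distrvs=rng.standard_normal)  # true model: ARMA(2,1)

@lru_cache(maxsize=None)
def approx_log_marglik(p, q):
    # -BIC/2 approximates log marginal likelihood under a unit-information prior
    return -ARIMA(y, order=(p, 0, q)).fit().bic / 2

order, draws = (1, 0), []
for _ in range(500):
    prop = (int(min(4, max(0, order[0] + rng.integers(-1, 2)))),
            int(min(4, max(0, order[1] + rng.integers(-1, 2)))))  # walk over orders
    if np.log(rng.uniform()) < approx_log_marglik(*prop) - approx_log_marglik(*order):
        order = prop
    draws.append(order)

print(Counter(draws).most_common(3))   # approximate posterior over (p, q)
```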
A series of recent articles has called into question the validity of VAR models of the global market for crude oil. These studies seek to replace existing oil market models with structural VAR models of their own based on different data, different identifying assumptions, and a different econometric approach. Their main aim has been to revise the consensus in the literature that oil demand shocks are a more important determinant of oil price fluctuations than oil supply shocks. Substantial progress has been made in recent years in sorting out the pros and cons of the underlying econometric methodologies and data in this debate, and in separating claims that are supported by empirical evidence from claims that are not. The purpose of this paper is to take stock of the VAR literature on global oil markets and to synthesize what we have learned. Combining this evidence with new data and analysis, I make the case that the concerns regarding the existing VAR oil market literature have been overstated and that the results from these models are quite robust to changes in the model specification.
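For orientation, the sketch below sets up a recursively identified three-variable oil market VAR of the kind common in this literature (ordering: oil supply, aggregate demand, oil-specific demand shocks), with simulated placeholder data; the lag length and variable choices are assumptions, not any particular paper's setup.

```python
# Minimal sketch: reduced-form VAR plus Cholesky (recursive) identification
# for a three-variable oil market system, on simulated placeholder data.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
levels = rng.standard_normal((400, 3)).cumsum(axis=0) * 0.1   # placeholder series
data = np.diff(levels, axis=0)                                # growth rates

res = VAR(data).fit(maxlags=12)             # reduced-form VAR(12), monthly convention
B0inv = np.linalg.cholesky(res.sigma_u)     # lower-triangular impact matrix
# Column j of B0inv is the impact response to structural shock j
# (1 = oil supply, 2 = aggregate demand, 3 = oil-specific demand).
print(np.round(B0inv, 3))

irf = res.irf(24)                           # orthogonalized impulse responses
print(np.round(irf.orth_irfs[12], 3))       # responses at the 12-month horizon
```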
This paper examines the advantages and drawbacks of alternative methods of estimating oil supply and oil demand elasticities and of incorporating this information into structural VAR models. I not only summarize the state of the literature, but also draw attention to a number of econometric problems that have been overlooked in this literature. Once these problems are recognized, seemingly conflicting conclusions in the recent literature can be resolved. My analysis reaffirms the conclusion that the one-month oil supply elasticity is close to zero, which implies that oil demand shocks are the dominant driver of the real price of oil. The focus of this paper is not only on correcting some misunderstandings in the recent literature, but also on the substantive and methodological insights generated by this exchange, which are of broader interest to applied researchers.
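The impact elasticity at issue can be written, in stylized notation that is not necessarily the paper's, as a ratio of impact responses to a structural oil demand shock; the finding that this ratio is close to zero at the one-month horizon is what makes demand shocks the dominant driver of the real price of oil:

```latex
% Stylized definition; notation is illustrative, not the paper's.
\eta_{\text{supply}} \;=\;
  \frac{\partial q_t / \partial \varepsilon^{d}_t}
       {\partial p_t / \partial \varepsilon^{d}_t},
\qquad \eta_{\text{supply}} \approx 0 \ \text{at the one-month horizon.}
```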
Shortcomings revealed by experimental and theoretical researchers such as Allais (1953), Rabin (2000) and Rabin and Thaler (2001), which put the classical expected utility paradigm of von Neumann and Morgenstern (1947) into question, led to the proposition of alternative and generalized utility functions intended to improve descriptive accuracy. Perhaps the best known of these alternative preference theories, and one that has attracted considerable popularity among economists, is Prospect Theory, due to Kahneman and Tversky (1979) and Tversky and Kahneman (1992). Its distinctive features, governed by a set of risk parameters such as risk sensitivity, loss aversion and decision weights, have stimulated a series of economic and financial models that build on the parameter values estimated by Tversky and Kahneman (1992) to analyze and explain various empirical phenomena for which expected utility does not seem to offer a satisfying rationale. In this paper, after providing a brief overview of the relevant literature, we take a closer look at one of those papers, the trading model of Vlcek and Hens (2011), and analyze its implications for Prospect Theory parameters using an adapted maximum likelihood approach on a dataset of 656 individual investors from a large German discount brokerage firm. We find evidence that investors in our dataset are moderately averse to large losses and display high risk sensitivity, supporting the main assumptions of Prospect Theory.
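The sketch below shows the shape of such an estimation in miniature: Tversky-Kahneman value and weighting functions embedded in a logit choice likelihood, fitted by maximum likelihood. The choice data are hypothetical placeholders, a single curvature and a single weighting parameter are shared across gains and losses for simplicity, and the paper's adapted likelihood over brokerage trading records is considerably more involved.

```python
# Minimal sketch: maximum likelihood estimation of Prospect Theory parameters
# from binary choices between a mixed two-outcome gamble and a sure amount.
# Data are random placeholders, so the fitted values are not meaningful.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def pt_value(x, alpha, lam):
    # TK92 value function: curvature alpha, loss aversion lam (shared alpha here)
    return np.where(x >= 0, np.abs(x)**alpha, -lam * np.abs(x)**alpha)

def weight(p, gamma):
    # TK92 inverse-S probability weighting
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def neg_loglik(theta, gains, losses, probs, sure, chose_gamble):
    alpha, lam, gamma, tau = theta
    pt_gamble = (weight(probs, gamma) * pt_value(gains, alpha, lam)
                 + weight(1 - probs, gamma) * pt_value(losses, alpha, lam))
    pt_sure = pt_value(sure, alpha, lam)
    p_choose = expit((pt_gamble - pt_sure) / tau)      # logit choice rule
    p_choose = np.clip(p_choose, 1e-10, 1 - 1e-10)
    return -np.sum(np.where(chose_gamble, np.log(p_choose), np.log(1 - p_choose)))

# Hypothetical data: each observation is (gain, loss, prob of gain, sure amount, choice)
rng = np.random.default_rng(4)
n = 500
gains, losses = rng.uniform(10, 100, n), -rng.uniform(10, 100, n)
probs, sure = rng.uniform(0.1, 0.9, n), rng.uniform(-20, 40, n)
chose = rng.integers(0, 2, n).astype(bool)             # placeholder choices

fit = minimize(neg_loglik, x0=[0.88, 2.25, 0.65, 5.0],
               args=(gains, losses, probs, sure, chose),
               bounds=[(0.1, 1.5), (0.5, 10), (0.2, 1.5), (0.1, 50)],
               method="L-BFGS-B")
print(dict(zip(["alpha", "lambda", "gamma", "tau"], np.round(fit.x, 3))))
```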