C51 Model Construction and Estimation
A novel spatial autoregressive model for panel data is introduced, which incorporates multilayer networks and accounts for time-varying relationships. Moreover, the proposed approach allows the structural variance to evolve smoothly over time and enables the analysis of shock propagation in terms of time-varying spillover effects.
The framework is applied to analyse the dynamics of international relationships among the G7 economies and their impact on stock market returns and volatilities. The findings underscore the substantial impact of cooperative interactions and highlight discernible disparities in network exposure across G7 nations, along with nuanced patterns in direct and indirect spillover effects.
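The mechanics of spillovers in such models can be previewed with a minimal sketch, in our own notation rather than the paper's full multilayer, time-varying specification: in a spatial autoregressive model y = ρWy + ε, the impact matrix (I − ρW)⁻¹ collects direct (diagonal) and indirect (off-diagonal) effects of a shock. The network W and the value of ρ below are purely illustrative.

```python
import numpy as np

# Hypothetical 3-country, row-normalized network (not the paper's data).
W = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
rho = 0.4   # illustrative spatial dependence parameter

# Impact matrix of the spatial AR model y = rho*W*y + eps:
# (I - rho*W)^{-1} = I + rho*W + rho^2*W^2 + ...
impact = np.linalg.inv(np.eye(3) - rho * W)

# Diagonal entries: direct effects (own-shock amplification through the
# network feedback); off-diagonal entries: indirect spillover effects.
direct = np.diag(impact).mean()
indirect = (impact.sum() - np.trace(impact)) / 3
print(direct, indirect)
```

The direct effect exceeds one because a country's shock echoes back through its neighbors; in the time-varying version, W, ρ, and hence these effects are re-evaluated each period.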
A common practice in empirical macroeconomics is to examine alternative recursive orderings of the variables in structural vector autoregressive (VAR) models. When the implied impulse responses look similar, the estimates are considered trustworthy. When they do not, the estimates are used to bound the true response without directly addressing the identification challenge. A leading example of this practice is the literature on the effects of uncertainty shocks on economic activity. We prove by counterexample that this practice is invalid in general, whether the data-generating process is a structural VAR model or a dynamic stochastic general equilibrium model.
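The practice being critiqued can be made concrete in a few lines, using a stylized bivariate reduced-form shock covariance with hypothetical numbers: each recursive ordering corresponds to a different Cholesky factorization of the same covariance matrix, and the two factorizations imply different impact responses.

```python
import numpy as np

# Illustrative reduced-form shock covariance of a bivariate VAR.
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])

# Ordering (1, 2): the impact matrix is the Cholesky factor of Sigma.
B_12 = np.linalg.cholesky(Sigma)

# Ordering (2, 1): permute the variables, factor, permute back.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B_21 = P @ np.linalg.cholesky(P @ Sigma @ P) @ P

# Both factorizations reproduce Sigma exactly...
print(np.allclose(B_12 @ B_12.T, Sigma), np.allclose(B_21 @ B_21.T, Sigma))
# ...yet they imply different impact responses to the first shock.
print(B_12[:, 0], B_21[:, 0])
```

Comparing the columns of `B_12` and `B_21` is exactly the "try both orderings" exercise; the paper's point is that agreement or disagreement between such columns carries no general identification content.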
I have assessed changes in the monetary policy stance in the euro area since its inception by applying a Bayesian time-varying parameter framework in conjunction with the Hamiltonian Monte Carlo algorithm. I find that the estimated policy response has varied considerably over time. Most of the results suggest that the response weakened after the onset of the financial crisis and while quantitative measures were still in place, although there are also indications that the weakening of the response to the expected inflation gap may have been less pronounced. I also find that the policy response has become more forceful over the course of the recent sharp rise in inflation. Furthermore, it is essential to model the stochastic volatility relating to deviations from the policy rule as it materially influences the results.
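A heavily stripped-down sketch of the time-varying-response idea: a scalar policy coefficient following a random walk, recovered here by a plain Kalman filter rather than the paper's full Bayesian setup with Hamiltonian Monte Carlo and stochastic volatility. All variances and the data-generating process are illustrative assumptions.

```python
import numpy as np

# Simulate a policy rule i_t = phi_t * pi_t + e_t with a slowly drifting
# response coefficient phi_t (random walk). Values are illustrative only.
rng = np.random.default_rng(0)
T = 500
phi = 1.5 + np.cumsum(0.02 * rng.standard_normal(T))   # true response path
pi = 1.0 + rng.standard_normal(T)                      # expected inflation gap
i = phi * pi + 0.1 * rng.standard_normal(T)            # observed policy rate

# Kalman filter for the state phi_t with known noise variances.
q, r = 0.02 ** 2, 0.1 ** 2      # state and observation noise variances
m, p = 1.5, 1.0                 # prior mean and variance for phi_0
est = np.empty(T)
for t in range(T):
    p = p + q                                 # predict (random-walk state)
    k = p * pi[t] / (pi[t] ** 2 * p + r)      # Kalman gain
    m = m + k * (i[t] - m * pi[t])            # update with observed rate
    p = (1.0 - k * pi[t]) * p                 # posterior variance
    est[t] = m

print(np.corrcoef(phi, est)[0, 1])   # filtered path tracks the true drift
```

The paper's framework replaces these fixed variances with estimated stochastic volatility, which is precisely the component the abstract flags as material for the results.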
The authors relax the standard assumption in the dynamic stochastic general equilibrium (DSGE) literature that exogenous processes are governed by AR(1) processes and instead estimate the ARMA(p,q) orders and parameters of exogenous processes. Methodologically, they contribute to the Bayesian DSGE literature by using Reversible Jump Markov Chain Monte Carlo (RJMCMC) to sample from the unknown ARMA orders and their associated parameter spaces of varying dimensions.
In estimating the technology process in the neoclassical growth model using post-war US GDP data, they cast considerable doubt on the standard AR(1) assumption in favor of higher-order processes. They find that the posterior concentrates density on hump-shaped impulse responses for all endogenous variables, consistent with alternative empirical estimates and the rigidities behind many richer structural models. By sampling from noninvertible MA representations, a negative response of hours to a positive technology shock is contained within the posterior credible set. While the posterior contains significant uncertainty regarding the exact order, the results are insensitive to the choice of data filter; this contrasts with the authors' ARMA estimates of GDP itself, which vary significantly depending on the choice of HP or first-difference filter.
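The order-selection question can be previewed without the RJMCMC machinery. As a rough stand-in, a BIC comparison of OLS-fitted AR(p) models on a simulated higher-order "technology" process (coefficients chosen for illustration, not taken from the paper) already favors the richer specification over AR(1).

```python
import numpy as np

# Simulate a stationary AR(2) process (roots 0.8 and 0.5; illustrative).
rng = np.random.default_rng(1)
T = 2000
z = np.zeros(T)
for t in range(2, T):
    z[t] = 1.3 * z[t - 1] - 0.4 * z[t - 2] + rng.standard_normal()

def bic_ar(z, p):
    """OLS fit of a zero-mean AR(p) and its BIC (smaller is better)."""
    Y = z[p:]
    X = np.column_stack([z[p - k:-k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    n = len(Y)
    return n * np.log(resid @ resid / n) + p * np.log(n)

print({p: round(bic_ar(z, p), 1) for p in (1, 2, 3)})   # p=2 wins
```

RJMCMC goes further by treating (p,q) as a parameter and jumping between model dimensions within one chain, so the posterior itself quantifies the order uncertainty that a single BIC pick hides.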
This paper analyses the contagion channels of the European financial system through the stochastic block model (SBM). The model groups homogeneous connectivity patterns among financial institutions and describes the shock transmission mechanisms of financial networks in a compact way. We analyse the global financial crisis and the European sovereign debt crisis and show that the network exhibits a strong community structure, with two main blocks acting as shock spreader and receiver, respectively. Moreover, we provide evidence of the prominent role played by insurers in the spread of systemic risk in both crises. Finally, we demonstrate that policy interventions focused on institutions with inter-community linkages (community bridges) are more effective than those based on classical connectedness measures and consequently represent a better early-warning indicator for predicting future financial losses.
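A toy version of the block-recovery step: draw a two-block SBM with hypothetical connection probabilities and split the nodes with a simple spectral step. This is a sketch of the community-structure idea only, not the paper's estimation procedure.

```python
import numpy as np

# Two equal blocks of 30 nodes: dense within (0.5), sparse across (0.05).
rng = np.random.default_rng(2)
n, half = 60, 30
labels = np.array([0] * half + [1] * half)
P = np.where(labels[:, None] == labels[None, :], 0.5, 0.05)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                      # undirected adjacency, no self-loops

# For a two-block SBM, the sign pattern of the eigenvector belonging to
# the second-largest adjacency eigenvalue separates the blocks.
vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v = vecs[:, -2]
guess = (v > 0).astype(int)
acc = max((guess == labels).mean(), ((1 - guess) == labels).mean())
print(acc)                       # near-perfect block recovery
```

In the paper's setting the blocks are estimated from the empirical financial network, and the inter-block ("bridge") nodes are the ones flagged for targeted intervention.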
Fleckenstein et al. (2014) document that nominal Treasuries trade at higher prices than inflation-swapped indexed bonds, which exactly replicate the nominal cash flows. We study whether this mispricing arises from liquidity premiums in inflation-indexed bonds (TIPS) and inflation swaps. Using US data, we show that the level of liquidity affects TIPS, whereas swap yields include a liquidity risk premium. We also allow for liquidity effects in nominal bonds. These results are based on a model with a systematic liquidity risk factor and asset-specific liquidity characteristics. We show that these liquidity (risk) premiums explain a substantial part of the TIPS underpricing.
This paper addresses whether and to what extent econometric methods used in experimental studies can be adapted and applied to financial data to detect the best-fitting preference model. To address the research question, we implement a frequently used nonlinear probit model in the style of Hey and Orme (1994) and base our analysis on a simulation study. In detail, we simulate trading sequences for a set of utility models and try to identify, by maximum likelihood, the underlying utility model and the parameterization used to generate these sequences. We find that for a very broad classification of utility models, this method provides acceptable outcomes. Yet, a closer look at the preference parameters reveals several caveats associated with typical issues attached to financial data, some of which seem to drive our results. In particular, deviations are attributable to effects stemming from multicollinearity and related under-identification problems, where some of these detrimental effects can be captured up to a certain degree by adjusting the error term specification. Furthermore, additional uncertainty stemming from changing market parameter estimates affects the precision of our estimates for risk preferences and cannot simply be remedied by using a higher standard deviation of the error term or a different assumption regarding its stochastic process. In particular, if the variance of the error term becomes large, we detect a tendency to identify SPT as the utility model providing the best fit to simulated trading sequences. We also find that a frequent issue, namely serial correlation of the residuals, does not seem to be significant. However, we detect a tendency to prefer nesting models over nested utility models, which is particularly prevalent if RDU and EXPO utility models are estimated along with EUT and CRRA utility models.
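A stylized sketch of the estimation idea in Hey and Orme (1994), simplified to a logit error instead of their probit and to a single CRRA parameter: simulate binary choices between a sure payoff and a 50/50 risky payoff, then recover the risk parameter by maximum likelihood. All payoff ranges and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho_true, sigma = 4000, 0.6, 0.2   # sample size, true CRRA, error scale

def crra(x, rho):
    return x ** (1 - rho) / (1 - rho)

safe = rng.uniform(1.0, 3.0, n)       # sure payoff of each choice problem
hi = rng.uniform(1.0, 6.0, n)         # risky lottery pays hi or 1, 50/50

def udiff(rho):
    eu_risky = 0.5 * crra(hi, rho) + 0.5 * crra(1.0, rho)
    return eu_risky - crra(safe, rho)

# Simulate noisy choices: logistic choice probabilities (Fechner-style).
p_true = 1.0 / (1.0 + np.exp(-udiff(rho_true) / sigma))
choose_risky = rng.random(n) < p_true

def loglik(rho):
    p = np.clip(1.0 / (1.0 + np.exp(-udiff(rho) / sigma)), 1e-12, 1 - 1e-12)
    return np.sum(np.where(choose_risky, np.log(p), np.log(1 - p)))

# Maximum likelihood by grid search over the risk parameter.
grid = np.linspace(0.1, 0.9, 81)
rho_hat = grid[np.argmax([loglik(r) for r in grid])]
print(rho_hat)   # close to rho_true = 0.6
```

The paper's simulation study runs this logic on trading sequences instead of lottery choices, which is exactly where the multicollinearity and under-identification problems described above enter.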
Microeconomic modeling of investor behavior in financial markets and its results depend crucially on assumptions about the mathematical shape of the underlying preference functions as well as their parameterizations. To shed some light on the question of which preferences towards risky financial outcomes prevail in stock markets, we adapted and applied a maximum likelihood approach from the field of experimental economics to a randomly selected dataset of 656 private investors of a large German discount brokerage firm. According to our analysis, we find evidence that the majority of these clients follow trading patterns in accordance with Prospect Theory (Kahneman and Tversky (1979)). We also find that observable sociodemographic and personal characteristics such as gender or age do not seem to correlate with specific preference types. With respect to the overall impact of preferences on trading behavior, we find a moderate impact of preferences on the trading decisions of individual investors. A classification of investors according to various utility types reveals that the strength of the impact of preferences on an investor's trading behavior is not connected to most personal characteristics, but seems to be related to round-trip length.
Shortcomings revealed by experimental and theoretical researchers such as Allais (1953), Rabin (2000) and Rabin and Thaler (2001), which put the classical expected utility paradigm of von Neumann and Morgenstern (1947) into question, led to the proposition of alternative and generalized utility functions intended to improve descriptive accuracy. Perhaps the best known of these alternative preference theories, and one that has attracted much popularity among economists, is the so-called Prospect Theory of Kahneman and Tversky (1979) and Tversky and Kahneman (1992). Its distinctive features, governed by a set of risk parameters such as risk sensitivity, loss aversion and decision weights, stimulated a series of economic and financial models that build on the parameter values previously estimated by Tversky and Kahneman (1992) to analyze and explain various empirical phenomena for which expected utility does not seem to offer a satisfying rationale. In this paper, after providing a brief overview of the relevant literature, we take a closer look at one of those papers, the trading model of Vlcek and Hens (2011), and analyze its implications for Prospect Theory parameters using an adapted maximum likelihood approach on a dataset of 656 individual investors from a large German discount brokerage firm. We find evidence that investors in our dataset are moderately averse to large losses and display high risk sensitivity, supporting the main assumptions of Prospect Theory.
We propose a framework for estimating network-driven, time-varying systemic risk contributions that is applicable to a high-dimensional financial system. Tail risk dependencies and contributions are estimated based on a penalized two-stage fixed-effects quantile approach, which explicitly links bank interconnectedness to systemic risk contributions. The framework is applied to a system of 51 large European banks and 17 sovereigns over the period 2006 to 2013, utilizing both equity and CDS prices. We provide new evidence on how banking sector fragmentation and sovereign-bank linkages evolved over the European sovereign debt crisis and how they are reflected in network statistics and systemic risk measures. Illustrating the usefulness of the framework as a monitoring tool, we provide indications that the fragmentation of the European financial system has peaked and that a recovery has started.
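An illustrative sketch of the core ingredient only, not the paper's penalized two-stage fixed-effects estimator: a tail quantile regression of a bank's return on a system return, fitted at the 5% quantile by minimizing the pinball (check) loss over a small coefficient grid. The data-generating process and grids are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n, tau = 4000, 0.05
x = rng.standard_normal(n)               # system return (illustrative)
y = 0.8 * x + rng.standard_normal(n)     # bank return with exposure 0.8

def pinball(u):
    """Check loss for quantile level tau: tau*u if u>=0, (tau-1)*u else."""
    return np.mean(np.where(u >= 0, tau * u, (tau - 1.0) * u))

a_grid = np.linspace(-3.0, 0.0, 61)      # intercept candidates
b_grid = np.linspace(0.0, 2.0, 81)       # slope (tail exposure) candidates
loss, a_hat, b_hat = min(
    (pinball(y - a - b * x), a, b) for a in a_grid for b in b_grid
)

# With Gaussian noise the true 5% conditional quantile is 0.8*x - 1.645.
print(a_hat, b_hat)
```

In the paper this regression is run jointly across many institutions, with fixed effects and a penalty that ties the estimated tail exposures to the network of interconnections.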