C32 Time-Series Models; Dynamic Quantile Regressions
I provide a solution method in the frequency domain for multivariate linear rational expectations models. The method works with the generalized Schur decomposition, providing a numerical implementation of the underlying analytic function solution methods suitable for standard DSGE estimation and analysis procedures. This approach generalizes the time-domain restriction of autoregressive-moving average exogenous driving forces to arbitrary covariance stationary processes. Applied to the standard New Keynesian model, I find that a Bayesian analysis favors a single parameter log harmonic function of the lag operator over the usual AR(1) assumption as it generates humped shaped autocorrelation patterns more consistent with the data.
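For readers who want to experiment, the generalized Schur step can be sketched in a few lines. This is a minimal toy in the Klein (2000) tradition, not the paper's frequency-domain method; the system matrices and the count of predetermined variables are hypothetical.

```python
# Minimal sketch of the QZ step for A E_t[x_{t+1}] = B x_t, with the
# predetermined variables ordered first in x. Toy matrices, not a DSGE model.
import numpy as np
from scipy.linalg import ordqz

A = np.array([[1.0, 0.0],
              [0.1, 1.0]])     # hypothetical system matrices
B = np.array([[0.5, 0.0],
              [0.2, 1.5]])
n_pre = 1                      # number of predetermined variables

# Pencil (B, A): x_t = mu**t * v solves the system iff  B v = mu * A v.
# Sort stable generalized eigenvalues (inside the unit circle) first.
S, T, alpha, beta, Q, Z = ordqz(B, A, sort="iuc", output="complex")
mu = alpha / beta
n_stable = int(np.sum(np.abs(mu) < 1))
assert n_stable == n_pre, "Blanchard-Kahn order condition fails"

# Klein-style policy function: jumps as a function of states,
# c_t = (Z21 @ inv(Z11)) k_t, built from the stable invariant subspace.
Z11 = Z[:n_pre, :n_stable]
Z21 = Z[n_pre:, :n_stable]
policy = (Z21 @ np.linalg.inv(Z11)).real
print("policy coefficients:", policy)
```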
The meme stock phenomenon has yet to be explored. In this note, we provide evidence that these stocks display common stylized facts for the dynamics of price, trading volume, and social media activity. Using a regime-switching cointegration model, we identify the meme stock “mementum” which exhibits a different characterization compared to other stocks with high volumes of activity (persistent and not) on social media. Finally, we show that mementum is significant and positively related to the stock’s returns. Understanding these properties helps investors and market authorities in their decisions.
A common practice in empirical macroeconomics is to examine alternative recursive orderings of the variables in structural vector autoregressive (VAR) models. When the implied impulse responses look similar, the estimates are considered trustworthy. When they do not, the estimates are used to bound the true response without directly addressing the identification challenge. A leading example of this practice is the literature on the effects of uncertainty shocks on economic activity. We prove by counterexample that this practice is invalid in general, whether the data generating process is a structural VAR model or a dynamic stochastic general equilibrium model.
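The practice under critique is easy to reproduce. A minimal sketch, assuming placeholder data and variable names (an uncertainty proxy and an activity measure), compares Cholesky-orthogonalized responses across the two recursive orderings:

```python
# Estimate a VAR and compare orthogonalized impulse responses across
# recursive orderings; the data here are random placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((200, 2)),
                    columns=["uncertainty", "activity"])  # placeholder data

for order in (["uncertainty", "activity"], ["activity", "uncertainty"]):
    res = VAR(data[order]).fit(maxlags=2)
    irf = res.irf(12)                     # Cholesky-orthogonalized IRFs
    resp = irf.orth_irfs[:, order.index("activity"), order.index("uncertainty")]
    print(order, "-> response of activity to uncertainty shock:", resp[:3].round(3))
```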
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years, due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics. Broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historic training data shape algorithms, biases distort results, and data as well as model quality are not always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of anti-discrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both disparate treatment and disparate impact doctrines. The paper concludes with a suggestion to reorient the discussion and with an attempt to outline the contours of fair lending law in the age of AI.
Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic restrictions in terms of deep parameters. For given deep parameters, structural equations are solved for "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in a LREM. The sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate a LREM in terms of deep parameters. Forwardly-estimated deep parameters determine forwardly-estimated RES equations that Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called economic identifying restrictions on deep parameters of forwardly-estimated LREMs "incredible", because he considered in-sample fits of forwardly-estimated OVAR equations inadequate and out-of-sample policy predictions of forwardly-estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and directly using the directly-estimated OVAR equations to make policy predictions. However, if assumed or predicted out-of-sample policy variables in directly-made policy predictions differ significantly from in-sample values, then the out-of-sample policy predictions will not satisfy Lucas's critique.
If directly-estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then identification 2, derived in the paper, can linearly "inversely" estimate the underlying RES equations from the directly-estimated OVAR equations, and the inversely-estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly-estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely-estimated RES equations to make accurate (credible) out-of-sample policy predictions, then he should consider the inversely-estimated RES equations to be credible. Thus, inversely-estimated RES equations by identification 2 can reconcile Lucas's advocacy for making policy predictions with RES equations and Sims's advocacy for directly estimating OVAR equations.
The paper also derives identification 1 of structural coefficients from RES coefficients, which contributes mainly by showing that directly estimated reduced-form OVAR equations can have underlying LREM-structural equations.
Several recent studies have expressed concern that the Haar prior typically imposed in estimating sign-identified VAR models may be unintentionally informative about the implied prior for the structural impulse responses. This question is indeed important, but we show that the tools that have been used in the literature to illustrate this potential problem are invalid. Specifically, we show that it does not make sense from a Bayesian point of view to characterize the impulse response prior based on the distribution of the impulse responses conditional on the maximum likelihood estimator of the reduced-form parameters, since the prior does not, in general, depend on the data. We illustrate that this approach tends to produce highly misleading estimates of the impulse response priors. We formally derive the correct impulse response prior distribution and show that there is no evidence that typical sign-identified VAR models estimated using conventional priors tend to imply unintentionally informative priors for the impulse response vector or that the corresponding posterior is dominated by the prior. Our evidence suggests that concerns about the Haar prior for the rotation matrix have been greatly overstated and that alternative estimation methods are not required in typical applications. Finally, we demonstrate that the alternative Bayesian approach to estimating sign-identified VAR models proposed by Baumeister and Hamilton (2015) suffers from exactly the same conceptual shortcoming as the conventional approach. We illustrate that this alternative approach may imply highly economically implausible impulse response priors.
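The correct prior simulation the authors call for can be sketched directly: draw the reduced-form covariance matrix from its prior rather than conditioning on an estimate. The inverse Wishart below is a stand-in prior, not the paper's specification:

```python
# Simulate the prior for impact impulse responses implied by a reduced-form
# prior combined with a Haar-distributed rotation matrix.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
n, draws = 2, 5000
impact = np.empty((draws, n, n))
for d in range(draws):
    sigma = invwishart.rvs(df=n + 2, scale=np.eye(n), random_state=rng)
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q @ np.diag(np.sign(np.diag(r)))   # sign fix -> Haar measure on O(n)
    impact[d] = np.linalg.cholesky(sigma) @ q
# These draws characterize the implied impulse response prior; they are
# not conditioned on any estimate of the reduced-form parameters.
print("prior std. dev. of impact responses:\n", impact.std(axis=0).round(2))
```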
Analysing causality among oil prices and, in general, among financial and economic variables is of central relevance in applied economics studies. The recent contribution of Lu et al. (2014) proposes a novel test for causality: the DCC-MGARCH Hong test. We show that the critical values of the test statistic must be evaluated through simulations, thereby challenging the evidence in papers adopting the DCC-MGARCH Hong test. We also note that rolling Hong tests represent a more viable solution in the presence of short-lived causality periods.
Empirical estimates of equilibrium real interest rates are so far mostly limited to advanced economies, since no statistical procedure suitable for a large set of countries is available. This is surprising, as equilibrium rates have strong policy implications in emerging markets and developing economies as well; current estimates of the global equilibrium rate rely on only a few countries; and estimates for a more diverse set of countries can improve understanding of the drivers. The authors propose a model and estimation strategy that decompose ex ante real interest rates into a permanent and transitory component even with short samples and high volatility. This is done with an unobserved component local level stochastic volatility model, which is used to estimate equilibrium rates for 50 countries with Bayesian methods.
Equilibrium rates were lower in emerging markets and developing economies than in advanced economies in the 1980s, similar in the 1990s, and have been higher since 2000. In line with economic integration and rising global capital markets, synchronization has been rising over time and is higher among advanced economies. Equilibrium rates of countries with stronger trade linkages and similar demographic and economic trends are more synchronized.
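A fixed-volatility sketch of the permanent/transitory decomposition, using statsmodels' local level model on a simulated series (the paper's stochastic volatility and Bayesian estimation are omitted):

```python
# Decompose an ex ante real rate into a random-walk component (the
# equilibrium rate) and stationary noise via a local level model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
trend = np.cumsum(0.1 * rng.standard_normal(300))  # simulated permanent part
r = trend + rng.standard_normal(300)               # plus transitory noise

mod = sm.tsa.UnobservedComponents(r, level="local level")
res = mod.fit(disp=False)
equilibrium_rate = res.level.smoothed              # smoothed permanent component
print("current equilibrium rate estimate:", round(equilibrium_rate[-1], 2))
```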
We derive the Bayes estimator of vectors of structural VAR impulse responses under a range of alternative loss functions. We also derive joint credible regions for vectors of impulse responses as the lowest posterior risk region under the same loss functions. We show that conventional impulse response estimators such as the posterior median response function or the posterior mean response function are not in general the Bayes estimator of the impulse response vector obtained by stacking the impulse responses of interest. We show that such pointwise estimators may imply response function shapes that are incompatible with any possible parameterization of the underlying model. Moreover, conventional pointwise quantile error bands are not a valid measure of the estimation uncertainty about the impulse response vector because they ignore the mutual dependence of the responses. In practice, they tend to understate substantially the estimation uncertainty about the impulse response vector.
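The distinction can be illustrated with posterior draws of a stacked response vector: under quadratic loss the Bayes estimator is the posterior mean, while restricting the search to actual draws yields a lowest-posterior-risk estimate that is attainable by the model. A sketch with placeholder draws:

```python
import numpy as np

rng = np.random.default_rng(3)
draws = rng.standard_normal((500, 24))         # placeholder posterior draws

post_mean = draws.mean(axis=0)                 # Bayes estimator, quadratic loss
# Restrict attention to actual draws so the estimate is attainable by the model:
exp_loss = np.abs(draws[:, None, :] - draws[None, :, :]).sum(-1).mean(1)
bayes_draw = draws[np.argmin(exp_loss)]        # lowest posterior expected absolute loss
print("max gap between the two estimators:", np.abs(post_mean - bayes_draw).max())
```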
We analyze cyclical co-movement in credit, house prices, equity prices, and longterm interest rates across 17 advanced economies. Using a time-varying multi-level dynamic factor model and more than 130 years of data, we analyze the dynamics of co-movement at different levels of aggregation and compare recent developments to earlier episodes such as the early era of financial globalization from 1880 to 1913 and the Great Depression. We find that joint global dynamics across various financial quantities and prices as well as variable-specific global co-movements are important to explain fluctuations in the data. From a historical perspective, global co-movement in financial variables is not a new phenomenon, but its importance has increased for some variables since the 1980s. For equity prices, global cycles play currently a historically unprecedented role, explaining more than half of the fluctuations in the data. Global cycles in credit and housing have become much more pronounced and longer, but their importance in explaining dynamics has only increased for some economies including the US, the UK and Nordic European countries. We also include GDP in the analysis and find an increasing role for a global business cycle.
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and the real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model we find that the nominal-to-real transformation is not rejected for the money relation whereas the wealth relation cannot be expressed in real terms.
The authors relax the standard assumption in the dynamic stochastic general equilibrium (DSGE) literature that exogenous processes are governed by AR(1) processes and estimate ARMA(p,q) orders and parameters of exogenous processes. Methodologically, they contribute to the Bayesian DSGE literature by using Reversible Jump Markov Chain Monte Carlo (RJMCMC) to sample from the unknown ARMA orders and their associated parameter spaces of varying dimensions.
In estimating the technology process in the neoclassical growth model using postwar US GDP data, they cast considerable doubt on the standard AR(1) assumption in favor of higher order processes. They find that the posterior concentrates density on hump-shaped impulse responses for all endogenous variables, consistent with alternative empirical estimates and the rigidities behind many richer structural models. When sampling from noninvertible MA representations, the posterior credible set contains a negative response of hours to a positive technology shock. While the posterior contains significant uncertainty regarding the exact order, the results are insensitive to the choice of data filter; this contrasts with the authors' ARMA estimates of GDP itself, which vary significantly depending on the choice of HP or first difference filter.
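RJMCMC is not available off the shelf; as a rough frequentist stand-in, comparing ARMA(p,q) orders by information criteria on a simulated series already illustrates how restrictive the AR(1) default can be:

```python
# Compare ARMA(p,q) orders by BIC on a simulated MA(2)-type series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
y = np.convolve(rng.standard_normal(500), [1.0, 0.6, 0.3])[:500]

for p in range(3):
    for q in range(3):
        res = ARIMA(y, order=(p, 0, q)).fit()
        print(f"ARMA({p},{q}): BIC = {res.bic:.1f}")
```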
We extend the classical "martingale-plus-noise" model for high-frequency prices by an error correction mechanism originating from prevailing mispricing. The speed of price reversal is a natural measure for informational efficiency. The strength of the price reversal relative to the signal-to-noise ratio determines the signs of the return serial correlation and the bias in standard realized variance estimates. We derive the model's properties and locally estimate it based on mid-quote returns of the NASDAQ 100 constituents. There is evidence of mildly persistent local regimes of positive and negative serial correlation, arising from lagged feedback effects and sluggish price adjustments. The model performance is decidedly superior to existing stylized microstructure models. Finally, we document intraday periodicities in the speed of price reversion and noise-to-signal ratios.
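The sign mechanism can be reproduced in a stylized simulation (my parameterization, not the paper's estimated model): quotes adjust sluggishly toward a random-walk efficient price, and the lag-1 return autocorrelation flips sign with the noise level:

```python
import numpy as np

rng = np.random.default_rng(5)
T, kappa = 100_000, 0.3
m = np.cumsum(rng.standard_normal(T))            # efficient log-price (random walk)
p = np.empty(T)
p[0] = m[0]
for t in range(1, T):
    p[t] = p[t - 1] + kappa * (m[t - 1] - p[t - 1])  # sluggish price adjustment

for sig_u in (0.1, 0.5):                         # low vs. high noise-to-signal
    r = np.diff(p + sig_u * rng.standard_normal(T))
    rho1 = np.corrcoef(r[:-1], r[1:])[0, 1]
    print(f"noise sd {sig_u}: lag-1 return autocorrelation = {rho1:+.2f}")
```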
Causality is a widely-used concept in theoretical and empirical economics. The recent financial economics literature has used Granger causality to detect the presence of contemporaneous links between financial institutions and, in turn, to obtain a network structure. Subsequent studies combined the estimated networks with traditional pricing or risk measurement models to improve their fit to empirical data. In this paper, we provide two contributions: we show how to use a linear factor model as a device for estimating a combination of several networks that monitor the links across variables from different viewpoints; and we demonstrate that Granger causality should be combined with quantile-based causality when the focus is on risk propagation. The empirical evidence supports the latter claim.
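The network-construction step can be sketched with pairwise Granger tests, keeping an edge whenever the test rejects at 5%; institution names and data are placeholders, and the quantile-causality step the paper advocates is not shown:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)
returns = pd.DataFrame(rng.standard_normal((500, 4)),
                       columns=list("ABCD"))      # placeholder institutions

edges = []
for src in returns:
    for dst in returns:
        if src == dst:
            continue
        # Second column is tested for Granger-causing the first.
        res = grangercausalitytests(returns[[dst, src]], maxlag=2, verbose=False)
        pval = res[2][0]["ssr_ftest"][1]          # p-value at lag 2
        if pval < 0.05:
            edges.append((src, dst))
print("Granger edges:", edges)
```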
Chen and Zadrozny (1998) developed the linear extended Yule-Walker (XYW) method for determining the parameters of a vector autoregressive (VAR) model with available covariances of mixed-frequency observations on the variables of the model. If the parameters are determined uniquely for available population covariances, then the VAR model is identified. The present paper extends the original XYW method to an extended XYW method for determining all ARMA parameters of a vector autoregressive moving-average (VARMA) model with available covariances of single- or mixed-frequency observations on the variables of the model. The paper proves that under conditions of stationarity, regularity, miniphaseness, controllability, observability, and diagonalizability on the parameters of the model, the parameters are determined uniquely with available population covariances of single- or mixed-frequency observations on the variables of the model, so that the VARMA model is identified with the single- or mixed-frequency covariances.
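The classical Yule-Walker building block that XYW extends is easy to state: for a VAR(1), Gamma(1) = A Gamma(0), so the coefficient matrix is recoverable from covariances alone. A simulated check (the mixed-frequency and VARMA extensions are the paper's contribution and are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[0.5, 0.1], [0.0, 0.3]])
x = np.zeros((10_000, 2))
for t in range(1, 10_000):
    x[t] = A @ x[t - 1] + rng.standard_normal(2)

xc = x - x.mean(0)
gamma0 = xc[:-1].T @ xc[:-1] / (len(x) - 1)       # Gamma(0)
gamma1 = xc[1:].T @ xc[:-1] / (len(x) - 1)        # Gamma(1) = E[x_t x_{t-1}']
A_hat = gamma1 @ np.linalg.inv(gamma0)            # Yule-Walker estimate of A
print(np.round(A_hat, 2))
```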
We examine the inter-linkages between financial factors and real economic activity. We review the main theoretical approaches that allow financial frictions to be embedded into general equilibrium models. We outline, from a policy perspective, the most recent empirical papers focusing on the propagation of exogenous shocks to the economy, with a particular emphasis on works dealing with time variation of parameters and other types of nonlinearities. We then present an application to the analysis of the changing transmission of financial shocks in the euro area. Results show that the effects of a financial shock are time-varying and contingent on the state of the economy. They are of negligible importance in normal times but they greatly matter in conditions of stress.
Does austerity pay off? (2014)
Policy makers often implement austerity measures when the sustainability of public finances is in doubt and, hence, sovereign yield spreads are high. Is austerity successful in bringing about a reduction in yield spreads? We employ a new panel data set which contains sovereign yield spreads for 31 emerging and advanced economies and estimate the effects of cuts of government consumption on yield spreads and economic activity. The conditions under which austerity takes place are crucial. During times of fiscal stress, spreads rise in response to the spending cuts, at least in the short run. In contrast, austerity pays off if conditions are more benign.
One of the leading methods of estimating the structural parameters of DSGE models is the VAR-based impulse response matching estimator. The existing asymptotic theory for this estimator does not cover situations in which the number of impulse response parameters exceeds the number of VAR model parameters. Situations in which this order condition is violated arise routinely in applied work. We establish the consistency of the impulse response matching estimator in this situation, we derive its asymptotic distribution, and we show how this distribution can be approximated by bootstrap methods. Our methods of inference remain asymptotically valid when the order condition is satisfied, regardless of whether the usual rank condition for the application of the delta method holds. Our analysis sheds new light on the choice of the weighting matrix and covers both weakly and strongly identified DSGE model parameters. We also show that under our assumptions special care is needed to ensure the asymptotic validity of Bayesian methods of inference. A simulation study suggests that the frequentist and Bayesian point and interval estimators we propose are reasonably accurate in finite samples. We also show that using these methods may affect the substantive conclusions in empirical work.
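The estimator's logic fits in a few lines for a toy scalar model whose impulse response is rho**h; the weighting matrix here is a placeholder identity:

```python
# Impulse response matching: choose the structural parameter minimizing a
# weighted distance between "estimated" and model-implied responses.
import numpy as np
from scipy.optimize import minimize

H = 10
irf_hat = 0.7 ** np.arange(H) + 0.05 * np.random.default_rng(8).standard_normal(H)
W = np.eye(H)                                   # placeholder weighting matrix

def distance(theta):
    gap = irf_hat - theta[0] ** np.arange(H)    # AR(1) model IRF: rho**h
    return gap @ W @ gap

res = minimize(distance, x0=[0.5], bounds=[(-0.99, 0.99)])
print("matched rho:", round(res.x[0], 3))
```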
The predictive likelihood is of particular relevance in a Bayesian setting when the purpose is to rank models in a forecast comparison exercise. This paper discusses how the predictive likelihood can be estimated for any subset of the observable variables in linear Gaussian state-space models with Bayesian methods, and proposes to utilize a missing observations consistent Kalman filter in the process of achieving this objective. As an empirical application, we analyze euro area data and compare the density forecast performance of a DSGE model to DSGE-VARs and reduced-form linear Gaussian models.
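A univariate sketch of the mechanism: running statsmodels' Kalman filter over the extended sample at the estimation-sample parameter values and summing the hold-out likelihood contributions gives a plug-in log predictive likelihood (the paper's Bayesian treatment integrates over the posterior and handles arbitrary subsets of variables):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
y = np.cumsum(rng.standard_normal(220)) + rng.standard_normal(220)
est = y[:200]                                    # estimation sample

res_in = sm.tsa.UnobservedComponents(est, level="local level").fit(disp=False)
res_all = sm.tsa.UnobservedComponents(y, level="local level").filter(res_in.params)
log_pred_lik = res_all.llf_obs[200:].sum()       # hold-out contributions
print(f"plug-in log predictive likelihood: {log_pred_lik:.2f}")
```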
We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semi-martingale log asset price process which is subject to noise and non-synchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM) which was recently introduced by Bibinger et al. (2014). We extend the LMM estimator to allow for autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. We prove the consistency and asymptotic normality of the proposed spot covariance estimator. Based on extensive simulations we provide empirical guidance on the optimal implementation of the estimator and apply it to high-frequency data of a cross-section of NASDAQ blue chip stocks. Employing the estimator to estimate spot covariances, correlations and betas in normal as well as extreme-event periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, (iii) are strongly serially correlated, and (iv) can increase strongly and nearly instantaneously if new information arrives.
We propose an iterative procedure to efficiently estimate models with complex log-likelihood functions and a potentially high number of parameters relative to the number of observations. Given consistent but inefficient estimates of sub-vectors of the parameter vector, the procedure yields computationally tractable, consistent and asymptotically efficient estimates of all parameters. We show the asymptotic normality and derive the estimator's asymptotic covariance as a function of the number of iteration steps. To mitigate the curse of dimensionality in highly parameterized models, we combine the procedure with a penalization approach yielding sparsity and reducing model complexity. Small sample properties of the estimator are illustrated for two time series models in a simulation study. In an empirical application, we use the proposed method to estimate the connectedness between companies by extending the approach by Diebold and Yilmaz (2014) to a high-dimensional non-Gaussian setting.
We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables' conditional mean processes using a multiplicative error model, we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
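The copula step alone is straightforward to sketch: map positive-valued residuals to the Gaussian domain via their empirical probability integral transform. The gamma-distributed residuals below are placeholders, and no DCC implementation is assumed:

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(10)
resid = rng.gamma(shape=2.0, scale=0.5, size=(1000, 3))  # placeholder MEM residuals

u = rankdata(resid, axis=0) / (len(resid) + 1)  # empirical PIT, values in (0,1)
z = norm.ppf(u)                                 # Gaussian-domain residuals
print("correlations in the Gaussian domain:\n", np.corrcoef(z.T).round(2))
```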
We examine both the degree and the structural stability of inflation persistence at different quantiles of the conditional inflation distribution. Previous research focused exclusively on persistence at the conditional mean of the inflation rate. Economic theory, however, provides various reasons (for example, downward wage rigidities or menu costs) to expect higher inflation persistence at the upper than at the lower tail of the conditional inflation distribution.
Based on post-war US data, we indeed find slower mean reversion in response to positive than to negative shocks. We find robust evidence for a structural break in persistence at all quantiles of the inflation process in the early 1980s. Inflation persistence has decreased and become more homogeneous across quantiles. Persistence at the conditional mean became more informative about the degree of persistence across the entire conditional inflation distribution. While prior to the 1980s inflation was not mean reverting in response to large positive shocks, our evidence strongly suggests that since the end of the Volcker disinflation the unit root can be rejected at every quantile, including the upper tail of the conditional inflation distribution.
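A sketch of persistence-by-quantile on a simulated asymmetric process, regressing inflation on its lag at several quantiles with statsmodels' QuantReg:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
pi = np.zeros(400)
for t in range(1, 400):                         # toy asymmetric AR process
    pi[t] = (0.9 if pi[t - 1] > 0 else 0.5) * pi[t - 1] + rng.standard_normal()

X = sm.add_constant(pi[:-1])
for q in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(pi[1:], X).fit(q=q)
    print(f"quantile {q}: persistence = {fit.params[1]:.2f}")
```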
Evaluations of the fiscal stimulus packages recently enacted in the United States and Europe, such as Cogan, Cwik, Taylor and Wieland (2009) and Cwik and Wieland (2009), suggest that the GDP effects will be modest due to crowding-out of private consumption and investment. Corsetti, Meier and Mueller (2009a,b) argue that spending shocks are typically followed by consolidations with substantive spending cuts, which enhance the short-run stimulus effect. This note investigates the implications of this argument for the estimated impact of recent stimulus packages and the case for discretionary fiscal policy.
Despite their importance in modern electronic trading, virtually no systematic empirical evidence on the market impact of incoming orders exists. We quantify the short-run and long-run price effect of posting a limit order by proposing a high-frequency cointegrated VAR model for ask and bid quotes and several levels of order book depth. Price impacts are estimated by means of appropriate impulse response functions. Analyzing order book data of 30 stocks traded at Euronext Amsterdam, we show that limit orders have significant market impacts and cause a dynamic (and typically asymmetric) rebalancing of the book. The strength and direction of quote and spread responses depend on the incoming orders' aggressiveness, their size and the state of the book. We show that the effects are qualitatively quite stable across the market. Cross-sectional variations in the magnitudes of price impacts are well explained by the underlying trading frequency and relative tick size.
We model the dynamics of ask and bid curves in a limit order book market using a dynamic semiparametric factor model. The shape of the curves is captured by a factor structure which is estimated nonparametrically. Corresponding factor loadings are assumed to follow multivariate dynamics and are modelled using a vector autoregressive model. Applying the framework to four stocks traded at the Australian Stock Exchange (ASX) in 2002, we show that the suggested model captures the spatial and temporal dependencies of the limit order book. Relating the shape of the curves to variables reflecting the current state of the market, we show that the recent liquidity demand has the strongest impact. In an extensive forecasting analysis we show that the model is successful in forecasting the liquidity supply over various time horizons during a trading day. Moreover, it is shown that the model’s forecasting power can be used to improve optimal order execution strategies.
In this paper we consider the dynamics of spot and futures prices in the presence of arbitrage. We propose a partially linear error correction model where the adjustment coefficient is allowed to depend non-linearly on the lagged price difference. We estimate our model using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear. The linear alternative is rejected. The speed of price adjustment is increasing almost monotonically with the magnitude of the price difference.
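The varying-coefficient idea can be sketched by kernel-weighted least squares on a grid of values of the lagged price difference; the data are simulated and the paper's partially linear specification is richer:

```python
import numpy as np

rng = np.random.default_rng(12)
z = rng.uniform(-2, 2, 5000)                     # lagged spot-futures difference
dy = -0.1 * np.abs(z) * z + 0.2 * rng.standard_normal(5000)  # nonlinear EC term

h = 0.3                                          # kernel bandwidth
for z0 in (-1.5, -0.5, 0.5, 1.5):
    w = np.exp(-0.5 * ((z - z0) / h) ** 2)       # Gaussian kernel weights
    alpha = np.sum(w * z * dy) / np.sum(w * z * z)  # local adjustment coefficient
    print(f"adjustment coefficient at z={z0:+.1f}: {alpha:+.3f}")
```

Consistent with the paper's finding, the magnitude of the estimated adjustment coefficient grows with the absolute price difference in this toy design.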
We develop a multivariate generalization of the Markov-switching GARCH model introduced by Haas, Mittnik, and Paolella (2004b) and derive its fourth-moment structure. An application to international stock markets illustrates the relevance of accounting for volatility regimes from both a statistical and economic perspective, including out-of-sample portfolio selection and computation of Value-at-Risk.
An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out-of-sample Value-at-Risk measures.
We develop an interregional version of the standard textbook input-output model, extended to include consumption expenditures and the income generation process in the endogenous part of the input-output table. We also introduce a new method for deriving a two-region version of an interregional input-output table from original input-output tables for an overall economy and one of its regions. In an empirical assessment of the economic effects of the Frankfurt Airport, the interregional model is successfully employed. It is shown that the model is capable of reducing the degree of overestimation of economic effects that results from the inappropriate use of national input-output tables in the assessment of regional impact effects.
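The textbook building block being extended is the Leontief quantity model, x = (I - A)^{-1} f. A two-sector toy with made-up coefficients, not Frankfurt Airport data:

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])                 # hypothetical input coefficients
f = np.array([100.0, 50.0])                # hypothetical final demand

x = np.linalg.solve(np.eye(2) - A, f)      # Leontief inverse applied to f
print("sectoral gross output:", x.round(1))
```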
Asset-backed securitisation (ABS) is an asset funding technique that involves the issuance of structured claims on the cash flow performance of a designated pool of underlying receivables. Efficient risk management and asset allocation in this growing segment of fixed income markets requires both investors and issuers to thoroughly understand the longitudinal properties of spread prices. We present a multi-factor GARCH process in order to model the heteroskedasticity of secondary market spreads for valuation and forecasting purposes. In particular, accounting for the variance of errors is instrumental in deriving more accurate estimators of time-varying forecast confidence intervals. On the basis of CDO, MBS and Pfandbrief transactions as the most important asset classes of off-balance sheet and on-balance sheet securitisation in Europe, we find that expected spread changes for these asset classes tend to be level stationary, with model estimates indicating asymmetric mean reversion. Furthermore, spread volatility (conditional variance) is found to follow an asymmetric stochastic process contingent on the value of past residuals. This ABS spread behaviour implies negative investor sentiment during cyclical downturns, which becomes increasingly likely to escape stationary approximation the longer such market conditions last.
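A sketch of the asymmetric volatility step using the arch package (assumed available) on placeholder spread changes; a GJR-GARCH(1,1,1) with an AR(1) mean is a simplified stand-in for the paper's multi-factor specification:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(13)
spread_changes = rng.standard_normal(1000)      # placeholder ABS spread changes

am = arch_model(spread_changes, mean="AR", lags=1, vol="GARCH", p=1, o=1, q=1)
res = am.fit(disp="off")
print(res.params)                                # gamma[1] captures the asymmetry
```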
Using the Johansen test for cointegration, we examine to what extent inflation rates in the euro area have converged after the introduction of a single currency. Since the assumption of non-stationary variables represents the pivotal point in cointegration analyses, we pay special attention to the appropriate identification of non-stationary inflation rates by applying six different unit root tests. We compare two periods, the first ranging from 1993 to 1998 and the second from 1993 to 2002, with monthly observations. The Johansen test only finds partial convergence for the former period and no convergence for the latter.
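The two-step procedure can be sketched with statsmodels: screen each series with a unit root test, then run the Johansen trace test; the simulated I(1) series stand in for the euro-area inflation rates:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(14)
infl = np.cumsum(rng.standard_normal((240, 3)), axis=0)  # placeholder I(1) rates

for i in range(infl.shape[1]):
    print(f"series {i}: ADF p-value = {adfuller(infl[:, i])[1]:.2f}")

jres = coint_johansen(infl, det_order=0, k_ar_diff=2)
print("trace statistics:   ", np.round(jres.lr1, 1))
print("95% critical values:", jres.cvt[:, 1])
```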
In this study, a regime switching approach is applied to estimate the chartist and fundamentalist (c&f) exchange rate model originally proposed by Frankel and Froot (1986). The c&f model is tested against alternative regime switching specifications applying likelihood ratio tests. Nested atheoretical models like the popular segmented trends model suggested by Engel and Hamilton (1990) are rejected in favour of the multi-agent model. Moreover, the c&f regime switching model seems to describe the data much better than a competing regime switching GARCH(1,1) model. Finally, our findings turn out to be relatively robust when estimating the model in subsamples. The empirical results suggest that the model is able to explain daily DM/Dollar forward exchange rate dynamics from 1982 to 1998.
Modeling short-term interest rates as following regime-switching processes has become increasingly popular. Theoretically, regime-switching models are able to capture rational expectations of infrequently occurring discrete events. Technically, they allow for potential time-varying stationarity. After discussing both aspects with reference to the recent literature, this paper provides estimations of various univariate regime-switching specifications for the German three-month money market rate and bivariate specifications additionally including the term spread. However, the main contribution is a multi-step out-of-sample forecasting competition. It turns out that forecasts are improved substantially when allowing for state-dependence. Particularly, the informational content of the term spread for future short rate changes can be exploited optimally within a multivariate regime-switching framework.
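A univariate sketch of the regime-switching setup: a two-regime Markov-switching AR(1) with switching variance on simulated rate changes (the paper's bivariate term-spread models and multi-step forecasting competition go well beyond this):

```python
import numpy as np
from statsmodels.tsa.regime_switching.markov_autoregression import (
    MarkovAutoregression,
)

rng = np.random.default_rng(15)
dr = 0.3 * rng.standard_normal(400)              # simulated short rate changes
dr[150:250] += 1.5 * rng.standard_normal(100)    # inject a volatile regime

mod = MarkovAutoregression(dr, k_regimes=2, order=1, switching_variance=True)
res = mod.fit()
print(res.summary())
```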