C32 Time-Series Models; Dynamic Quantile Regressions
I provide a solution method in the frequency domain for multivariate linear rational expectations models. The method works with the generalized Schur decomposition, providing a numerical implementation of the underlying analytic function solution methods suitable for standard DSGE estimation and analysis procedures. This approach generalizes the time-domain restriction of autoregressive-moving average exogenous driving forces to arbitrary covariance stationary processes. Applied to the standard New Keynesian model, I find that a Bayesian analysis favors a single-parameter log-harmonic function of the lag operator over the usual AR(1) assumption, as it generates hump-shaped autocorrelation patterns more consistent with the data.
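The comparison between the two driving-force specifications can be illustrated numerically. The sketch below contrasts the autocorrelation implied by an AR(1) lag polynomial with that of a single-parameter log-harmonic lag polynomial; the functional form -log(1 - θL) = Σ_{k≥1} (θ^k/k) L^k is an assumption on my part, since the abstract does not spell out the exact specification.

```python
import numpy as np

def ma_autocorr(c, max_lag):
    """Autocorrelation of x_t = sum_k c[k] * eps_{t-k} with unit-variance white noise."""
    gamma = np.array([np.sum(c[:len(c) - h] * c[h:]) for h in range(max_lag + 1)])
    return gamma / gamma[0]

theta, K = 0.8, 500  # single shape parameter; truncation order for the MA expansion

# AR(1): (1 - theta*L)^{-1} has MA coefficients theta^k, k = 0, 1, 2, ...
c_ar1 = theta ** np.arange(K)

# Hypothetical log-harmonic lag polynomial: -log(1 - theta*L) = sum_{k>=1} (theta^k / k) L^k
k = np.arange(1, K + 1)
c_log = theta ** k / k

rho_ar1 = ma_autocorr(c_ar1, 10)
rho_log = ma_autocorr(c_log, 10)
print(np.round(rho_ar1, 3))
print(np.round(rho_log, 3))
```

Both polynomials are governed by one parameter, so the comparison in the abstract is between equally parsimonious specifications that imply different autocorrelation shapes.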
The meme stock phenomenon has yet to be fully explored. In this note, we provide evidence that these stocks display common stylized facts in the dynamics of price, trading volume, and social media activity. Using a regime-switching cointegration model, we identify the meme stock “mementum,” which behaves differently from that of other stocks with high volumes of social media activity (whether persistent or not). Finally, we show that mementum is significant and positively related to the stock’s returns. Understanding these properties can help investors and market authorities in their decisions.
A common practice in empirical macroeconomics is to examine alternative recursive orderings of the variables in structural vector autoregressive (VAR) models. When the implied impulse responses look similar, the estimates are considered trustworthy. When they do not, the estimates are used to bound the true response without directly addressing the identification challenge. A leading example of this practice is the literature on the effects of uncertainty shocks on economic activity. We prove by counterexample that this practice is invalid in general, whether the data generating process is a structural VAR model or a dynamic stochastic general equilibrium model.
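The ordering dependence at issue is easy to demonstrate with made-up numbers. The sketch below (covariance values are illustrative, not taken from the paper) computes the impact responses implied by the two recursive orderings of a bivariate system: each ordering reproduces the same reduced-form covariance, yet the implied responses differ.

```python
import numpy as np

# Hypothetical reduced-form innovation covariance for a bivariate VAR
# (e.g., an uncertainty proxy and economic activity); values are illustrative only.
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])

def impact_matrix(Sigma, order):
    """Impact responses from a Cholesky decomposition under a given variable ordering."""
    P = np.linalg.cholesky(Sigma[np.ix_(order, order)])
    inv = np.argsort(order)          # map rows/columns back to the original order
    return P[np.ix_(inv, inv)]

B_12 = impact_matrix(Sigma, [0, 1])  # variable 1 ordered first
B_21 = impact_matrix(Sigma, [1, 0])  # variable 2 ordered first
print(B_12)
print(B_21)
```

Both matrices satisfy B B' = Sigma, so the data alone cannot distinguish them; that is exactly why comparing orderings does not resolve the identification problem.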
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change in recent years, due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics. Broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historical training data shape algorithms, biases distort results, and data as well as model quality are not always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of anti-discrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both the disparate treatment and the disparate impact doctrines. The paper concludes with a suggestion to reorient the discussion and with an attempt to outline the contours of fair lending law in the age of AI.
Linear rational-expectations models (LREMs) are conventionally "forwardly" estimated as follows. Structural coefficients are restricted by economic restrictions in terms of deep parameters. For given deep parameters, structural equations are solved for the "rational-expectations solution" (RES) equations that determine endogenous variables. For given vector autoregressive (VAR) equations that determine exogenous variables, RES equations reduce to reduced-form VAR equations for endogenous variables with exogenous variables (VARX). The combined endogenous-VARX and exogenous-VAR equations comprise the reduced-form overall VAR (OVAR) equations of all variables in an LREM. The sequence of specified, solved, and combined equations defines a mapping from deep parameters to OVAR coefficients that is used to forwardly estimate an LREM in terms of deep parameters. Forwardly estimated deep parameters determine forwardly estimated RES equations, which Lucas (1976) advocated for making policy predictions in his critique of policy predictions made with reduced-form equations.
Sims (1980) called economic identifying restrictions on deep parameters of forwardly estimated LREMs "incredible", because he considered in-sample fits of forwardly estimated OVAR equations inadequate and out-of-sample policy predictions of forwardly estimated RES equations inaccurate. Sims (1980, 1986) instead advocated directly estimating OVAR equations restricted by statistical shrinkage restrictions and using the directly estimated OVAR equations to make policy predictions. However, if assumed or predicted out-of-sample policy variables in directly made policy predictions differ significantly from in-sample values, then the out-of-sample policy predictions will not satisfy Lucas's critique.
If directly estimated OVAR equations are reduced-form equations of underlying RES and LREM-structural equations, then identification 2, derived in the paper, can linearly "inversely" estimate the underlying RES equations from the directly estimated OVAR equations, and the inversely estimated RES equations can be used to make policy predictions that satisfy Lucas's critique. If Sims considered directly estimated OVAR equations to fit in-sample data adequately (credibly) and their inversely estimated RES equations to make accurate (credible) out-of-sample policy predictions, then he should consider the inversely estimated RES equations to be credible. Thus, inversely estimated RES equations obtained by identification 2 can reconcile Lucas's advocacy for making policy predictions with RES equations and Sims's advocacy for directly estimating OVAR equations.
The paper also derives identification 1, of structural coefficients from RES coefficients, whose main contribution is to show that directly estimated reduced-form OVAR equations can have underlying LREM-structural equations.
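The reduction from RES plus exogenous-VAR equations to the OVAR equations described above can be checked with a small numerical sketch. The coefficient matrices below are hypothetical, chosen only to verify the algebra of the stacking step:

```python
import numpy as np

# RES equations:   y_t = G y_{t-1} + H x_t      (endogenous variables)
# exogenous VAR:   x_t = F x_{t-1} + e_t        (exogenous variables)
# Substituting the exogenous VAR into the RES equations gives the endogenous
# VARX block; stacking it with the exogenous VAR yields the overall VAR (OVAR)
# for z_t = (y_t, x_t):  z_t = A z_{t-1} + B e_t.
G = np.array([[0.5, 0.1], [0.0, 0.3]])   # hypothetical RES coefficients
H = np.array([[1.0, 0.0], [0.2, 1.0]])
F = np.array([[0.9, 0.0], [0.0, 0.7]])   # hypothetical exogenous-VAR coefficients

A = np.block([[G, H @ F], [np.zeros((2, 2)), F]])
B = np.vstack([H, np.eye(2)])

# Verify the reduction on one simulated step
rng = np.random.default_rng(0)
y_prev, x_prev = rng.normal(size=2), rng.normal(size=2)
e = rng.normal(size=2)
x = F @ x_prev + e
y = G @ y_prev + H @ x
z = A @ np.concatenate([y_prev, x_prev]) + B @ e
print(np.allclose(z, np.concatenate([y, x])))  # True
```

The zero block in the lower-left of A encodes the assumption that exogenous variables are not driven by the endogenous ones; this is the structure the OVAR inherits from the LREM.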
Several recent studies have expressed concern that the Haar prior typically imposed in estimating sign-identified VAR models may be unintentionally informative about the implied prior for the structural impulse responses. This question is indeed important, but we show that the tools that have been used in the literature to illustrate this potential problem are invalid. Specifically, we show that it does not make sense from a Bayesian point of view to characterize the impulse response prior based on the distribution of the impulse responses conditional on the maximum likelihood estimator of the reduced-form parameters, since the prior does not, in general, depend on the data. We illustrate that this approach tends to produce highly misleading estimates of the impulse response priors. We formally derive the correct impulse response prior distribution and show that there is no evidence that typical sign-identified VAR models estimated using conventional priors tend to imply unintentionally informative priors for the impulse response vector or that the corresponding posterior is dominated by the prior. Our evidence suggests that concerns about the Haar prior for the rotation matrix have been greatly overstated and that alternative estimation methods are not required in typical applications. Finally, we demonstrate that the alternative Bayesian approach to estimating sign-identified VAR models proposed by Baumeister and Hamilton (2015) suffers from exactly the same conceptual shortcoming as the conventional approach. We illustrate that this alternative approach may imply highly economically implausible impulse response priors.
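For context, the Haar prior over the rotation matrix that this debate concerns can be sampled with a standard QR construction. The sketch below is generic (the covariance values are illustrative, not from the paper) and shows the key mechanical fact: every rotation draw reproduces the same reduced-form covariance, so the data are silent about the rotation.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_rotation(n, rng):
    """Draw an orthogonal matrix uniformly (Haar distribution) via the QR decomposition."""
    X = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))  # sign normalization makes the draw exactly uniform

# Illustrative reduced-form innovation covariance (made-up numbers)
Sigma = np.array([[1.0, 0.4],
                  [0.4, 1.5]])
P = np.linalg.cholesky(Sigma)

# One draw of the structural impact matrix implied by the Haar prior
Q = haar_rotation(2, rng)
B = P @ Q
print(B)
```

Because B B' = P Q Q' P' = Sigma for any orthogonal Q, the prior over Q passes through to a prior over impact responses, which is what the studies discussed above try to characterize.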
Analysing causality among oil prices and, in general, among financial and economic variables is of central relevance in applied economics studies. The recent contribution of Lu et al. (2014) proposes a novel test for causality, the DCC-MGARCH Hong test. We show that the critical values of the test statistic must be evaluated through simulations, thereby challenging the evidence in papers adopting the DCC-MGARCH Hong test. We also note that rolling Hong tests represent a more viable solution in the presence of short-lived causality periods.
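The general idea of simulated critical values can be illustrated generically. The sketch below does not implement the Hong statistic (which is considerably more involved); it uses a toy cross-correlation statistic purely to show the Monte Carlo approach: simulate the statistic under the null many times and take the relevant quantile.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_critical_value(statistic, simulate_null, n_sims=2000, level=0.95):
    """Approximate a test's critical value by simulating its null distribution."""
    draws = np.array([statistic(simulate_null()) for _ in range(n_sims)])
    return np.quantile(draws, level)

# Toy stand-in for a causality statistic: the squared sample cross-correlation
# at lag 1, scaled by the sample size (NOT the actual DCC-MGARCH Hong statistic).
T = 200
def stat(data):
    x, y = data
    return T * np.corrcoef(x[:-1], y[1:])[0, 1] ** 2

cv = simulated_critical_value(stat, lambda: rng.normal(size=(2, T)))
print(cv)
```

For this toy statistic the simulated 95% critical value lands near the chi-squared(1) quantile of about 3.84; the point of the note above is that for the Hong test no such textbook quantile applies, so the simulation step is unavoidable.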
Empirical estimates of equilibrium real interest rates are so far mostly limited to advanced economies, since no statistical procedure suitable for a large set of countries is available. This is surprising, as equilibrium rates have strong policy implications in emerging markets and developing economies as well; current estimates of the global equilibrium rate rely on only a few countries; and estimates for a more diverse set of countries can improve understanding of the drivers. The authors propose a model and estimation strategy that decompose ex ante real interest rates into a permanent and transitory component even with short samples and high volatility. This is done with an unobserved component local level stochastic volatility model, which is used to estimate equilibrium rates for 50 countries with Bayesian methods.
Equilibrium rates were lower in emerging markets and developing economies than in advanced economies in the 1980s, similar in the 1990s, and have been higher since 2000. In line with economic integration and rising global capital markets, synchronization has been rising over time and is higher among advanced economies. Equilibrium rates of countries with stronger trade linkages and similar demographic and economic trends are more synchronized.
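The decomposition being estimated can be illustrated by simulation. The sketch below generates data from a simplified local-level model with a random-walk permanent component (the equilibrium rate) and an AR(1) transitory component; parameter values are illustrative, the paper's model additionally features stochastic volatility, and a crude moving-average trend extraction stands in here for the Bayesian unobserved-components filter.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 300

# r_t = tau_t + c_t: random-walk permanent component plus stationary transitory component
tau = np.cumsum(rng.normal(scale=0.05, size=T))     # permanent component (equilibrium rate)
c = np.zeros(T)
for t in range(1, T):
    c[t] = 0.8 * c[t - 1] + rng.normal(scale=0.5)   # AR(1) transitory component
r = tau + c                                          # observed ex ante real rate

# Crude moving-average trend extraction as a stand-in for the Bayesian filter
window = 40
trend = np.convolve(r, np.ones(window) / window, mode="same")
print(np.corrcoef(trend[window:-window], tau[window:-window])[0, 1])
```

The estimation challenge the authors address is visible even in this toy setup: with short samples and a volatile transitory component, separating tau from c requires more structure than simple smoothing, which motivates the Bayesian unobserved-components approach.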
We derive the Bayes estimator of vectors of structural VAR impulse responses under a range of alternative loss functions. We also derive joint credible regions for vectors of impulse responses as the lowest posterior risk region under the same loss functions. We show that conventional impulse response estimators such as the posterior median response function or the posterior mean response function are not in general the Bayes estimator of the impulse response vector obtained by stacking the impulse responses of interest. We show that such pointwise estimators may imply response function shapes that are incompatible with any possible parameterization of the underlying model. Moreover, conventional pointwise quantile error bands are not a valid measure of the estimation uncertainty about the impulse response vector because they ignore the mutual dependence of the responses. In practice, they tend to understate substantially the estimation uncertainty about the impulse response vector.
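The point about pointwise error bands understating joint uncertainty can be checked with simulated posterior draws. The sketch below (a made-up posterior, independent coordinates for simplicity) forms conventional pointwise 68% quantile bands and measures the joint coverage of the resulting box; it also computes the posterior mean, which is the Bayes estimator of the stacked response vector under quadratic loss.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior draws of a 5-horizon impulse-response vector;
# coordinates are independent here purely for illustration.
n, H = 5000, 5
draws = rng.normal(size=(n, H))

# Bayes estimator of the stacked response vector under joint quadratic loss
bayes_quadratic = draws.mean(axis=0)

# Conventional pointwise 68% quantile bands
lo, hi = np.quantile(draws, [0.16, 0.84], axis=0)

# Joint coverage of the box formed by the pointwise bands
inside = np.all((draws >= lo) & (draws <= hi), axis=1)
joint_coverage = inside.mean()
print(joint_coverage)
```

With independent coordinates the box's joint coverage is roughly 0.68 to the power of 5, i.e. well below the nominal pointwise level, which is one concrete sense in which pointwise bands understate the estimation uncertainty about the whole response vector.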