C32 Time-Series Models; Dynamic Quantile Regressions
Extending the data set used in Beyer (2009) to 2017, we estimate I(1) and I(2) money demand models for euro area M3. After including two broken trends and a few dummies to account for shifts in the variables following the global financial crisis and the ECB's non-standard monetary policy measures, we find that the money demand and real wealth relations identified in Beyer (2009) have remained remarkably stable throughout the extended sample period. Testing for price homogeneity in the I(2) model, we find that the nominal-to-real transformation is not rejected for the money relation, whereas the wealth relation cannot be expressed in real terms.
In this paper we consider the dynamics of spot and futures prices in the presence of arbitrage. We propose a partially linear error correction model in which the adjustment coefficient is allowed to depend non-linearly on the lagged price difference. We estimate the model using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear and that the linear alternative is rejected: the speed of price adjustment increases almost monotonically with the magnitude of the price difference.
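The mechanism can be illustrated with a minimal numpy simulation in which the error-correction adjustment speed grows with the lagged spot-futures spread. The smooth-transition form of the adjustment coefficient below is a hypothetical stand-in for the paper's partially linear specification, and all parameter values are made up.

```python
# Sketch: spot/futures dynamics where mean reversion of the spread is
# weak for small mispricings and strong for large ones (hypothetical
# functional form, not the paper's estimated model).
import numpy as np

rng = np.random.default_rng(1)
T = 2000
spot = np.zeros(T)
fut = np.zeros(T)
for t in range(1, T):
    spread = spot[t - 1] - fut[t - 1]
    # Adjustment coefficient grows with |spread|, bounded below 0.8.
    alpha = 0.8 * (1.0 - np.exp(-5.0 * spread**2))
    common = rng.normal(scale=0.1)   # shock common to both prices
    spot[t] = spot[t - 1] - alpha * spread + common + rng.normal(scale=0.05)
    fut[t] = fut[t - 1] + common + rng.normal(scale=0.05)

spreads = spot[:-1] - fut[:-1]
changes = np.diff(spot)
# Empirical check: reversion is stronger when the lagged spread is large.
big = np.abs(spreads) > np.quantile(np.abs(spreads), 0.8)
slope_big = np.polyfit(spreads[big], changes[big], 1)[0]
slope_small = np.polyfit(spreads[~big], changes[~big], 1)[0]
print("slope for large spreads:", slope_big, "| small spreads:", slope_small)
```

The regression slope on the large-spread subsample is distinctly more negative, which is the qualitative pattern the paper documents for the DAX.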
An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out-of-sample Value-at-Risk measures.
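The Value-at-Risk idea behind mixture models can be sketched with a static two-component normal mixture, a deliberate simplification of the mixture-GARCH setup (a calm and a turbulent regime for the conditional return distribution). The weights and standard deviations below are illustrative, not estimates from the paper.

```python
# Hedged sketch: one-day 99% VaR under a two-component normal mixture
# (static toy version of the mixture-GARCH conditional distribution).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

w = np.array([0.9, 0.1])        # mixture weights (calm, turbulent regime)
mu = np.array([0.0, 0.0])       # component means
sigma = np.array([0.01, 0.03])  # component standard deviations

def mixture_cdf(x):
    """CDF of the two-component normal mixture at x."""
    return float(np.sum(w * norm.cdf(x, loc=mu, scale=sigma)))

# 99% VaR: minus the 1% quantile of the return distribution,
# found by root search on the mixture CDF.
q01 = brentq(lambda x: mixture_cdf(x) - 0.01, -0.5, 0.5)
var_99 = -q01
print(f"99% VaR: {var_99:.4f}")
```

Because the turbulent component fattens the tail, the resulting VaR exceeds the Gaussian VaR implied by the same overall variance, which is precisely why mixture specifications tend to perform well in tail-risk exercises.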
Search costs for lenders when evaluating potential borrowers are driven by the quality of the underwriting model and by access to data. Both have undergone radical change over recent years, due to the advent of big data and machine learning. For some, this holds the promise of inclusion and better access to finance. Invisible prime applicants perform better under AI than under traditional metrics. Broader data and more refined models help to detect them without triggering prohibitive costs. However, not all applicants profit to the same extent. Historic training data shape algorithms, biases distort results, and data as well as model quality are not always assured. Against this background, an intense debate over algorithmic discrimination has developed. This paper takes a first step towards developing principles of fair lending in the age of AI. It submits that there are fundamental difficulties in fitting algorithmic discrimination into the traditional regime of anti-discrimination laws. Received doctrine, with its focus on causation, is in many cases ill-equipped to deal with algorithmic decision-making under both the disparate treatment and the disparate impact doctrines. The paper concludes with a suggestion to reorient the discussion and with an attempt to outline the contours of fair lending law in the age of AI.
We introduce a copula-based dynamic model for multivariate processes of (non-negative) high-frequency trading variables revealing time-varying conditional variances and correlations. Modeling the variables’ conditional mean processes using a multiplicative error model, we map the resulting residuals into a Gaussian domain using a Gaussian copula. Based on high-frequency volatility, cumulative trading volumes, trade counts and market depth of various stocks traded at the NYSE, we show that the proposed copula-based transformation is supported by the data and allows capturing (multivariate) dynamics in higher order moments. The latter are modeled using a DCC-GARCH specification. We suggest estimating the model by composite maximum likelihood, which is sufficiently flexible to be applicable in high dimensions. Strong empirical evidence for time-varying conditional (co-)variances in trading processes supports the usefulness of the approach. Taking these higher-order dynamics explicitly into account significantly improves the goodness-of-fit of the multiplicative error model and allows capturing time-varying liquidity risks.
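The copula-based transformation at the core of the approach can be sketched as a rank-based probability-integral transform followed by the standard-normal quantile function. Simulated exponential "residuals" stand in for the paper's actual multiplicative-error-model residuals for volumes, volatility, and depth.

```python
# Sketch of the Gaussian-copula step: map non-negative residuals into
# the Gaussian domain via their empirical CDF (rank-based, to avoid
# exact 0/1 values) and the standard-normal quantile function.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(2)
resid = rng.exponential(scale=1.0, size=(5000, 3))  # positive "residuals"

# Probability-integral transform per column, then Gaussian quantiles.
u = rankdata(resid, axis=0) / (resid.shape[0] + 1)
z = norm.ppf(u)

# The transformed series are approximately standard normal; their
# time-varying correlation is what a DCC-GARCH step would then model.
print("means:", z.mean(axis=0), "stds:", z.std(axis=0))
```

After the transform, each column is (close to) standard normal by construction, so all remaining cross-sectional and dynamic structure lives in the Gaussian dependence, which is exactly where the DCC-GARCH layer operates.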
Does austerity pay off?
(2014)
Policy makers often implement austerity measures when the sustainability of public finances is in doubt and, hence, sovereign yield spreads are high. Is austerity successful in bringing about a reduction in yield spreads? We employ a new panel data set which contains sovereign yield spreads for 31 emerging and advanced economies and estimate the effects of cuts in government consumption on yield spreads and economic activity. The conditions under which austerity takes place are crucial. During times of fiscal stress, spreads rise in response to the spending cuts, at least in the short run. In contrast, austerity pays off if conditions are more benign.
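The state-dependent panel logic can be sketched as a fixed-effects regression of spread changes on spending cuts interacted with a fiscal-stress dummy. The data below are simulated and the coefficients are hypothetical; this only illustrates the interaction design, not the paper's estimates.

```python
# Toy sketch of the state-dependent effect of spending cuts on
# sovereign spreads: country fixed effects removed by within-demeaning,
# cuts interacted with a fiscal-stress indicator. Simulated data.
import numpy as np

rng = np.random.default_rng(3)
N, T = 31, 40
cut = rng.normal(size=(N, T))                      # spending cuts
stress = (rng.random((N, T)) < 0.3).astype(float)  # fiscal-stress dummy
fe = rng.normal(size=(N, 1))                       # country fixed effects
# Hypothetical truth: cuts raise spreads under stress (+0.5) and lower
# them in benign times (-0.3), as in the paper's qualitative finding.
spread = (fe + 0.5 * cut * stress - 0.3 * cut * (1 - stress)
          + rng.normal(scale=0.5, size=(N, T)))

def demean(x):
    """Within transformation: remove each country's time average."""
    return x - x.mean(axis=1, keepdims=True)

X = np.column_stack([demean(cut * stress).ravel(),
                     demean(cut * (1 - stress)).ravel()])
y = demean(spread).ravel()
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("effect under stress:", beta[0], "| benign times:", beta[1])
```

The within-estimator recovers the two state-dependent coefficients, with opposite signs in the stress and benign regimes.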
Empirical estimates of equilibrium real interest rates are so far mostly limited to advanced economies, since no statistical procedure suitable for a large set of countries is available. This is surprising, as equilibrium rates have strong policy implications in emerging markets and developing economies as well; current estimates of the global equilibrium rate rely on only a few countries; and estimates for a more diverse set of countries can improve understanding of the drivers. The authors propose a model and estimation strategy that decompose ex ante real interest rates into a permanent and transitory component even with short samples and high volatility. This is done with an unobserved component local level stochastic volatility model, which is used to estimate equilibrium rates for 50 countries with Bayesian methods.
Equilibrium rates were lower in emerging markets and developing economies than in advanced economies in the 1980s, similar in the 1990s, and have been higher since 2000. In line with economic integration and rising global capital markets, synchronization has been rising over time and is higher among advanced economies. Equilibrium rates of countries with stronger trade linkages and similar demographic and economic trends are more synchronized.
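The permanent/transitory decomposition at the heart of the method can be sketched with a pure-numpy Kalman filter for a local level model. The stochastic-volatility layer of the paper is omitted here, the variances are fixed rather than estimated by Bayesian methods, and the series is simulated rather than an actual ex ante real rate.

```python
# Sketch: local level model y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t,
# filtered to split an observed "real rate" into a permanent component
# (equilibrium rate analogue) and a transitory one. Simplified: fixed
# variances, no stochastic volatility, simulated data.
import numpy as np

rng = np.random.default_rng(4)
T = 300
sig_eta, sig_eps = 0.1, 1.0   # permanent vs transitory shock std devs
mu_true = np.cumsum(rng.normal(scale=sig_eta, size=T))
y = mu_true + rng.normal(scale=sig_eps, size=T)

# Kalman filter for the local level model.
mu_filt = np.zeros(T)
P = 1e4                        # near-diffuse initial state variance
m = 0.0
for t in range(T):
    P_pred = P + sig_eta**2                 # predict
    K = P_pred / (P_pred + sig_eps**2)      # Kalman gain
    m = m + K * (y[t] - m)                  # update state estimate
    P = (1 - K) * P_pred
    mu_filt[t] = m

transitory = y - mu_filt
rmse = np.sqrt(np.mean((mu_filt - mu_true) ** 2))
print("RMSE of filtered permanent component:", rmse)
```

Even in this stripped-down version the filter tracks the slow-moving permanent component far more accurately than the noisy observations themselves, which is what makes the approach workable for short, volatile emerging-market samples.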
We propose an iterative procedure to efficiently estimate models with complex log-likelihood functions and a potentially large number of parameters relative to the number of observations. Given consistent but inefficient estimates of sub-vectors of the parameter vector, the procedure yields computationally tractable, consistent, and asymptotically efficient estimates of all parameters. We show asymptotic normality and derive the estimator's asymptotic covariance as a function of the number of iteration steps. To mitigate the curse of dimensionality in highly parameterized models, we combine the procedure with a penalization approach yielding sparsity and reducing model complexity. Small sample properties of the estimator are illustrated for two time series models in a simulation study. In an empirical application, we use the proposed method to estimate the connectedness between companies by extending the approach of Diebold and Yilmaz (2014) to a high-dimensional non-Gaussian setting.
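The blockwise ("zig-zag") idea can be sketched on a deliberately simple Gaussian likelihood: starting from consistent but inefficient estimates, iterate maximization of the log-likelihood over one parameter sub-vector at a time. The paper targets far more complex likelihoods; this toy only shows the iteration scheme.

```python
# Toy sketch of blockwise iterative ML: alternate maximization of the
# Gaussian log-likelihood over the mean block and the scale block,
# starting from inefficient (moment-style) initial estimates.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
x = rng.normal(loc=2.0, scale=3.0, size=1000)

def negloglik(mu, log_sigma):
    """Negative Gaussian log-likelihood in (mu, log_sigma)."""
    s2 = np.exp(2 * log_sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + (x - mu) ** 2 / s2)

# Consistent but inefficient starting values.
mu, log_sigma = np.median(x), 0.0
for _ in range(5):  # a few zig-zag iterations
    mu = minimize_scalar(lambda m: negloglik(m, log_sigma)).x
    log_sigma = minimize_scalar(lambda s: negloglik(mu, s)).x

print("mu:", mu, "sigma:", np.exp(log_sigma))
```

In this separable toy case the iteration reaches the full ML solution (sample mean and ML standard deviation) almost immediately; the paper's contribution is showing when and how fast this works for genuinely coupled, high-dimensional blocks.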
We propose a new estimator for the spot covariance matrix of a multi-dimensional continuous semi-martingale log asset price process which is subject to noise and non-synchronous observations. The estimator is constructed based on a local average of block-wise parametric spectral covariance estimates. The latter originate from a local method of moments (LMM) which was recently introduced by Bibinger et al. (2014). We extend the LMM estimator to allow for autocorrelated noise and propose a method to adaptively infer the autocorrelations from the data. We prove the consistency and asymptotic normality of the proposed spot covariance estimator. Based on extensive simulations, we provide empirical guidance on the optimal implementation of the estimator and apply it to high-frequency data of a cross-section of NASDAQ blue chip stocks. Employing the estimator to estimate spot covariances, correlations and betas in normal as well as extreme-event periods yields novel insights into intraday covariance and correlation dynamics. We show that intraday (co-)variations (i) follow underlying periodicity patterns, (ii) reveal substantial intraday variability associated with (co-)variation risk, (iii) are strongly serially correlated, and (iv) can increase strongly and nearly instantaneously if new information arrives.
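The "spot" (local-in-time) flavor of the estimator can be illustrated with a much simpler device: a rolling-window realized covariance on simulated synchronous, noise-free returns. The LMM estimator of the paper additionally handles microstructure noise and asynchronous observations, which this sketch does not attempt.

```python
# Simplified sketch of spot covariance/correlation estimation by local
# averaging: rolling realized covariance on simulated 1-second returns
# whose true correlation jumps mid-day (e.g., after a news arrival).
import numpy as np

rng = np.random.default_rng(6)
n = 23400                      # one trading day of 1-second returns
# True correlation: 0.2 in the first half, 0.8 in the second half.
rho = np.where(np.arange(n) < n // 2, 0.2, 0.8)
z1 = rng.normal(size=n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
r = np.column_stack([z1, z2]) * 1e-4   # returns at a realistic scale

def spot_corr(returns, t, window=1500):
    """Local realized correlation from a window ending at index t."""
    blk = returns[max(0, t - window):t]
    c = blk.T @ blk
    return c[0, 1] / np.sqrt(c[0, 0] * c[1, 1])

corr_am = spot_corr(r, n // 2)   # end of the calm first half
corr_pm = spot_corr(r, n)        # end of the high-correlation half
print("local correlation, first half:", corr_am, "| second half:", corr_pm)
```

The local estimate tracks the regime shift in correlation almost immediately, which is the kind of nearly instantaneous reaction to news arrival documented in the paper, point (iv) above.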