Bayesian learning provides the core concept of processing noisy information. In standard Bayesian frameworks, assessing the price impact of information requires perfect knowledge of the news’ precision. In practice, however, precision is rarely disclosed. Therefore, we extend standard Bayesian learning, suggesting that traders infer the precision of news from the magnitude of surprises and from external sources. We show that interactions of the different precision signals may result in highly nonlinear price responses. Empirical tests based on intra-day T-bond futures price reactions to employment releases confirm the model’s predictions and show that the effects are statistically and economically significant.
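A minimal illustration of the mechanism, using a standard Gaussian updating sketch; the notation and the precision-inference rule g(·) below are illustrative assumptions, not the paper’s exact specification:

```latex
% Prior belief about the fundamental v and a public signal s of unknown precision
v \sim \mathcal{N}\!\left(\mu_0,\ \tau_0^{-1}\right), \qquad
s = v + \varepsilon, \quad \varepsilon \sim \mathcal{N}\!\left(0,\ \tau_s^{-1}\right)

% With known precision, the price response is linear in the surprise
\Delta p \;\propto\; \mathbb{E}[v \mid s] - \mu_0
        \;=\; \frac{\tau_s}{\tau_0 + \tau_s}\,(s - \mu_0)

% If \tau_s is not observed and traders replace it by an estimate
% \hat{\tau}_s = g\!\left(|s - \mu_0|,\, z\right) based on the surprise magnitude
% and external precision signals z, the updating weight itself depends on s,
% so the mapping from surprise to price response becomes nonlinear.
```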
Analyzing interest rate risk: stochastic volatility in the term structure of government bond yields
(2009)
We propose a Nelson-Siegel type interest rate term structure model in which the underlying yield factors follow autoregressive processes with stochastic volatility. The factor volatilities parsimoniously capture risk inherent in the term structure and are associated with the time-varying uncertainty of the yield curve’s level, slope and curvature. Estimating the model for U.S. government bond yields using Markov chain Monte Carlo techniques, we find that the factor volatilities follow highly persistent processes. We show that slope and curvature risk have explanatory power for bond excess returns and illustrate that the yield and volatility factors are closely related to industrial capacity utilization, inflation, monetary policy and employment growth. JEL Classification: C5, E4, G1
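As a rough sketch of this model class: the Nelson-Siegel loadings below are standard, while the AR(1)-with-stochastic-volatility factor dynamics are a generic illustration and need not coincide with the paper’s exact specification:

```latex
% Nelson-Siegel yield curve with level, slope and curvature factors
y_t(\tau) = L_t
          + S_t\,\frac{1 - e^{-\lambda \tau}}{\lambda \tau}
          + C_t\!\left(\frac{1 - e^{-\lambda \tau}}{\lambda \tau} - e^{-\lambda \tau}\right)
          + \varepsilon_t(\tau)

% Each factor f_t \in \{L_t, S_t, C_t\} follows an AR(1) process whose
% log-variance h_t is itself a persistent stochastic process
f_t = \phi\, f_{t-1} + e^{h_t/2}\,\eta_t, \qquad
h_t = \mu + \rho\,(h_{t-1} - \mu) + \sigma\,\xi_t
```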
Despite their importance in modern electronic trading, there is virtually no systematic empirical evidence on the market impact of incoming orders. We quantify the short-run and long-run price effects of posting a limit order by proposing a high-frequency cointegrated VAR model for ask and bid quotes and several levels of order book depth. Price impacts are estimated by means of appropriate impulse response functions. Analyzing order book data for 30 stocks traded at Euronext Amsterdam, we show that limit orders have significant market impacts and cause a dynamic (and typically asymmetric) rebalancing of the book. The strength and direction of quote and spread responses depend on the incoming orders’ aggressiveness, their size and the state of the book. We show that the effects are qualitatively quite stable across the market. Cross-sectional variation in the magnitude of price impacts is well explained by the underlying trading frequency and relative tick size.
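A stylized version of such a cointegrated system for the quote and depth vector; this is a generic vector error correction form, with the actual variable set, rank and lag structure being the paper’s choice:

```latex
% y_t collects log ask and bid quotes and (log) depth at several book levels
\Delta y_t = \alpha \beta' y_{t-1}
           + \sum_{i=1}^{p-1} \Gamma_i\, \Delta y_{t-i}
           + \varepsilon_t

% The long-run (permanent) price impact of an incoming limit order is read off
% the impulse responses of ask and bid quotes to a shock in the corresponding
% order-book variable as the response horizon grows large.
```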
We introduce a regularization and blocking estimator for well-conditioned high-dimensional daily covariances using high-frequency data. Using the Barndorff-Nielsen, Hansen, Lunde, and Shephard (2008a) kernel estimator, we estimate the covariance matrix block-wise and regularize it. A data-driven grouping of assets of similar trading frequency ensures the reduction of data loss due to refresh time sampling. In an extensive simulation study mimicking the empirical features of the S&P 1500 universe we show that the ’RnB’ estimator yields efficiency gains and outperforms competing kernel estimators for varying liquidity settings, noise-to-signal ratios, and dimensions. An empirical application of forecasting daily covariances of the S&P 500 index confirms the simulation results.
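A minimal Python sketch of one common regularization device, eigenvalue cleaning, which maps an indefinite or ill-conditioned realized covariance estimate into a well-conditioned matrix. This is a generic illustration under assumed conventions, not the paper’s exact ’RnB’ procedure, and the function and parameter names are hypothetical:

```python
import numpy as np

def eigenvalue_clean(cov, floor_quantile=0.1):
    """Return a well-conditioned, positive-definite version of `cov`.

    Eigenvalues below a floor (a quantile of the spectrum) are lifted to that
    floor; the matrix is then rebuilt from the cleaned spectrum.
    """
    cov = 0.5 * (cov + cov.T)                 # enforce symmetry
    eigval, eigvec = np.linalg.eigh(cov)      # spectral decomposition
    floor = max(np.quantile(eigval, floor_quantile), 1e-8)
    eigval_clean = np.maximum(eigval, floor)  # lift small/negative eigenvalues
    return eigvec @ np.diag(eigval_clean) @ eigvec.T

# Example: regularize a noisy sample covariance of 100 assets
rng = np.random.default_rng(0)
returns = rng.standard_normal((50, 100))      # fewer observations than assets
raw_cov = np.cov(returns, rowvar=False)       # singular by construction
reg_cov = eigenvalue_clean(raw_cov)
print(np.linalg.cond(reg_cov))                # finite condition number
```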
We model the dynamics of ask and bid curves in a limit order book market using a dynamic semiparametric factor model. The shape of the curves is captured by a factor structure which is estimated nonparametrically. Corresponding factor loadings are assumed to follow multivariate dynamics and are modelled using a vector autoregressive model. Applying the framework to four stocks traded at the Australian Stock Exchange (ASX) in 2002, we show that the suggested model captures the spatial and temporal dependencies of the limit order book. Relating the shape of the curves to variables reflecting the current state of the market, we show that the recent liquidity demand has the strongest impact. In an extensive forecasting analysis we show that the model is successful in forecasting the liquidity supply over various time horizons during a trading day. Moreover, it is shown that the model’s forecasting power can be used to improve optimal order execution strategies.
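Schematically, a dynamic semiparametric factor model of this kind combines nonparametric factor functions with parametric loading dynamics; the notation below is generic and the VAR order is illustrative:

```latex
% Order-book (ask or bid) curve at time t, evaluated at relative price level x
Y_t(x) \;\approx\; \sum_{l=1}^{L} Z_{t,l}\, m_l(x) + \varepsilon_t(x)

% m_1,\dots,m_L: factor functions estimated nonparametrically (curve shapes)
% Z_t = (Z_{t,1},\dots,Z_{t,L})': factor loadings following a VAR, e.g.
Z_t = c + \sum_{i=1}^{p} A_i\, Z_{t-i} + u_t
```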
We examine intra-day market reactions to news in stock-specific sentiment disclosures. Using pre-processed data from an automated news analytics tool based on linguistic pattern recognition, we extract information on the relevance as well as the direction of company-specific news. Information-implied reactions in returns, volatility as well as liquidity demand and supply are quantified by a high-frequency VAR model using 20-second intervals. Analyzing a cross-section of stocks traded at the London Stock Exchange (LSE), we find market-wide robust news-dependent responses in volatility and trading volume. However, this is only true if news items are classified as highly relevant. Liquidity supply reacts less distinctly due to a stronger influence of idiosyncratic noise. Furthermore, we find evidence of abnormal high-frequency returns after sentiment news. JEL Classification: G14, C32
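The reduced-form structure underlying such an analysis can be sketched as a VAR in the market variables augmented with news indicators; this is a generic illustration of the setup, not the exact specification:

```latex
% y_t: vector of 20-second returns, volatility and liquidity measures
% d_t: indicators for (highly relevant) positive and negative sentiment news
y_t = c + \sum_{i=1}^{p} A_i\, y_{t-i} + B\, d_t + \varepsilon_t
```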
This paper provides theory as well as empirical results for pre-averaging estimators of the daily quadratic variation of asset prices. We derive jump-robust inference for pre-averaging estimators, corresponding feasible central limit theorems and an explicit test on serial dependence in microstructure noise. Using transaction data on different stocks traded at the NYSE, we analyze the estimators’ sensitivity to the choice of the pre-averaging bandwidth and suggest an optimal interval length. Moreover, we investigate the dependence of pre-averaging based inference on the sampling scheme, the sampling frequency, microstructure noise properties as well as the occurrence of jumps. As a result of a detailed empirical study we provide guidance for optimal implementation of pre-averaging estimators and discuss potential pitfalls in practice. Keywords: Quadratic Variation, Market Microstructure Noise, Pre-averaging, Sampling Schemes, Jumps
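For reference, one common formulation of a pre-averaging estimator of quadratic variation from the pre-averaging literature, using the weight g(x) = min(x, 1−x); constants and finite-sample corrections may differ from the exact estimators studied in the paper:

```latex
% Pre-averaged returns over a window of length k_n
\bar{Y}_i = \sum_{j=1}^{k_n - 1} g\!\left(\frac{j}{k_n}\right)
            \left(Y_{i+j} - Y_{i+j-1}\right),
\qquad g(x) = \min(x, 1-x)

% Noise-corrected estimator of quadratic variation
\widehat{QV} = \frac{1}{k_n\, \bar{\psi}_2}
               \sum_{i=0}^{n - k_n + 1} \bar{Y}_i^{\,2}
             \;-\; \frac{\bar{\psi}_1}{2\, k_n^{2}\, \bar{\psi}_2}
               \sum_{i=1}^{n} \left(Y_i - Y_{i-1}\right)^2

% where \bar{\psi}_1 \to \int_0^1 g'(s)^2\,ds = 1 and
% \bar{\psi}_2 \to \int_0^1 g(s)^2\,ds = 1/12 for this weight function.
% The pre-averaging bandwidth k_n (of order \theta \sqrt{n}) is the tuning
% parameter whose choice the paper investigates.
```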
We study the impact of the arrival of macroeconomic news on the informational and noise-driven components in high-frequency quote processes and their conditional variances. Bid and ask returns are decomposed into a common ("efficient return") factor and two market-side-specific components capturing market microstructure effects. The corresponding variance components reflect information-driven and noise-induced volatilities. We find that all volatility components reveal distinct dynamics and are positively influenced by news. The proportion of noise-induced variances is highest before announcements and significantly declines thereafter. Moreover, news-affected responses in all volatility components are influenced by order flow imbalances. JEL Classification: C32, G14, E44
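The decomposition underlying this analysis can be sketched as follows; the notation is generic and the dynamic specification of the variance components is the paper’s:

```latex
% Ask and bid returns share a common ("efficient return") component f_t and
% carry market-side-specific microstructure components u_t^a, u_t^b
r_t^{a} = f_t + u_t^{a}, \qquad r_t^{b} = f_t + u_t^{b}

% Conditional variances split into information-driven and noise-induced
% volatility, each allowed to respond to news arrival (indicator d_t)
\operatorname{Var}_{t-1}(f_t) = h_t^{f}(d_t), \qquad
\operatorname{Var}_{t-1}(u_t^{s}) = h_t^{s}(d_t), \quad s \in \{a, b\}
```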
Capturing the zero: a new class of zero-augmented distributions and multiplicative error processes
(2010)
We propose a novel approach to model serially dependent positive-valued variables which realize a non-trivial proportion of zero outcomes. This is a typical phenomenon in financial time series observed at high frequencies, such as cumulated trading volumes or the time between potentially simultaneously occurring market events. We introduce a flexible point-mass mixture distribution and develop a semiparametric specification test explicitly tailored for such distributions. Moreover, we propose a new type of multiplicative error model (MEM) based on a zero-augmented distribution, which incorporates an autoregressive binary choice component and thus captures the (potentially different) dynamics of both zero occurrences and strictly positive realizations. Applying the proposed model to high-frequency cumulated trading volumes of liquid NYSE stocks, we show that the model captures both the dynamic and distributional properties of the data very well and is able to correctly predict future distributions. Keywords: High-frequency Data, Point-mass Mixture, Multiplicative Error Model, Excess Zeros, Semiparametric Specification Test, Market Microstructure JEL Classification: C22, C25, C14, C16, C51
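A compact sketch of a zero-augmented MEM of this type; the notation and the particular dynamic equations are chosen for illustration rather than taken from the paper:

```latex
% Observed process (e.g. cumulated volume) with a non-trivial share of zeros:
% with probability 1 - \pi_t the outcome is exactly zero, otherwise x_t follows
% a MEM with a continuous, unit-mean innovation on (0,\infty)
P(x_t = 0 \mid \mathcal{F}_{t-1}) = 1 - \pi_t, \qquad
x_t \mid x_t > 0 \;=\; \mu_t\, \varepsilon_t, \quad
\mathbb{E}[\varepsilon_t] = 1

% MEM dynamics for the conditional mean of the positive part
\mu_t = \omega + \alpha\, x_{t-1} + \beta\, \mu_{t-1}

% Autoregressive binary-choice component for the zero/non-zero indicator
\pi_t = \Lambda(\lambda_t), \qquad
\lambda_t = \delta + \gamma\, \mathbf{1}\{x_{t-1} > 0\} + \rho\, \lambda_{t-1}
```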
We introduce a multivariate multiplicative error model which is driven by component-specific observation-driven dynamics as well as a common latent autoregressive factor. The model is designed to explicitly account for (information-driven) common factor dynamics as well as idiosyncratic effects in the processes of high-frequency return volatilities, trade sizes and trading intensities. The model is estimated by simulated maximum likelihood using efficient importance sampling. Analyzing five-minute data on four liquid stocks traded at the New York Stock Exchange, we find that volatilities, volumes and intensities are driven by idiosyncratic dynamics as well as by a highly persistent common factor capturing most causal relations and cross-dependencies between the individual variables. This confirms economic theory and suggests more parsimonious specifications of high-dimensional trading processes. It turns out that common shocks affect return volatility and trading volume rather than trading intensity. JEL Classification: C15, C32, C52
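Schematically, such a mixture of observation-driven and parameter-driven dynamics can be written as follows; the notation is generic and the functional forms are an illustrative sketch:

```latex
% For each variable i (return volatility, trade size, trading intensity):
x_{i,t} = \mu_{i,t}\, \lambda_t^{\delta_i}\, \varepsilon_{i,t}, \qquad
\mathbb{E}[\varepsilon_{i,t}] = 1

% Idiosyncratic, observation-driven MEM component
\mu_{i,t} = \omega_i + \alpha_i\, x_{i,t-1} + \beta_i\, \mu_{i,t-1}

% Common latent autoregressive factor, integrated out by simulated maximum
% likelihood with efficient importance sampling
\ln \lambda_t = a\, \ln \lambda_{t-1} + \nu_t, \qquad \nu_t \sim \mathcal{N}(0, 1)
```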