We develop a model of an order-driven exchange competing for order flow with off-exchange trading mechanisms. Liquidity suppliers face a trade-off between the benefits and costs of order exposure. If they display their trading intentions, they attract additional trade demand. We show that, in equilibrium, hiding trade intentions can induce mis-coordination between liquidity supply and demand, generate excess price fluctuations, and harm price efficiency. An econometric high-frequency analysis based on unique data on hidden orders from NASDAQ reveals strong empirical support for these predictions: we find abnormal reactions in prices and order flow after periods of high excess supply of hidden liquidity.
We extend the classical "martingale-plus-noise" model for high-frequency prices by an error-correction mechanism originating from prevailing mispricing. The speed of price reversal is a natural measure of informational efficiency. The strength of the price reversal relative to the signal-to-noise ratio determines the signs of the return serial correlation and of the bias in standard realized variance estimates. We derive the model's properties and estimate it locally based on mid-quote returns of the NASDAQ 100 constituents. There is evidence of mildly persistent local regimes of positive and negative serial correlation, arising from lagged feedback effects and sluggish price adjustments. The model's performance is decidedly superior to that of existing stylized microstructure models. Finally, we document intraday periodicities in the speed of price reversion and in noise-to-signal ratios.
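The core mechanism can be illustrated with a small simulation: the efficient price is a random walk, and the observed mid-quote carries a pricing error that reverts at speed kappa. All variable names and parameter values below are illustrative, not the paper's specification or estimates; in this simple sketch a fast reversal with sizeable noise produces negative return serial correlation, while a near-zero reversal speed leaves returns close to uncorrelated.

```python
import numpy as np

def simulate_mpn_ecm(n=100_000, kappa=0.3, sigma_u=1.0, sigma_e=0.5, seed=0):
    """Simulate a stylized martingale-plus-noise price with error correction.

    Hypothetical parameterization: the efficient price m_t is a random walk,
    and the observed price p_t = m_t + s_t carries a mispricing s_t that
    reverts at speed kappa (an AR(1) with coefficient 1 - kappa).
    """
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, n)   # efficient-price innovations
    e = rng.normal(0.0, sigma_e, n)   # pricing-error innovations
    m = np.cumsum(u)                  # latent efficient price
    s = np.zeros(n)
    for t in range(1, n):
        s[t] = (1.0 - kappa) * s[t - 1] + e[t]
    return m + s                      # observed price

def return_autocorr(p, lag=1):
    """First-order serial correlation of price changes."""
    r = np.diff(p)
    r = r - r.mean()
    return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

# Fast reversal + large noise vs. near-zero reversal + small noise.
rho_fast = return_autocorr(simulate_mpn_ecm(kappa=0.5, sigma_e=1.0))
rho_slow = return_autocorr(simulate_mpn_ecm(kappa=0.01, sigma_e=0.1))
```

In this sketch the sign and magnitude of `rho_fast` and `rho_slow` depend only on the reversal speed relative to the noise-to-signal ratio, mirroring the role these two quantities play in the model.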
We show an ambivalent role of high-frequency traders (HFTs) in the Eurex Bund Futures market around high-impact macroeconomic announcements and extreme events. Around macroeconomic announcements, HFTs serve as market makers, post competitive spreads, and earn most of their profits through liquidity supply. Right before the announcement, however, HFTs significantly widen spreads and cause a rapid but short-lived drying-out of liquidity. In turbulent periods, such as after the U.K. Brexit announcement, HFTs shift their focus from market making activities to aggressive (but not necessarily profitable) directional strategies. Then, HFT activity becomes dominant and market quality can degrade.
We theoretically and empirically study large-scale portfolio allocation problems when transaction costs are taken into account in the optimization problem. We show that transaction costs act on the one hand as a turnover penalization and on the other hand as a regularization, which shrinks the covariance matrix. As an empirical framework, we propose a flexible econometric setting for portfolio optimization under transaction costs, which incorporates parameter uncertainty and combines predictive distributions of individual models using optimal prediction pooling. We consider predictive distributions resulting from high-frequency-based covariance matrix estimates, daily stochastic volatility factor models, and regularized rolling-window covariance estimates, among others. Using data covering several hundred NASDAQ stocks over more than 10 years, we illustrate that transaction cost regularization (even to a small extent) is crucial for producing allocations with positive Sharpe ratios. We moreover show that performance differences between individual models decline when transaction costs are considered. Nevertheless, it turns out that adaptive mixtures based on high-frequency and low-frequency information yield the highest performance. A portfolio bootstrap reveals that naive 1/N allocations and global minimum variance allocations (with and without short-sale constraints) are significantly outperformed in terms of Sharpe ratios and utility gains.
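The regularization interpretation can be sketched in a few lines (this is an illustrative toy, not the paper's estimator, and all names and parameter values are hypothetical): minimizing portfolio variance plus a quadratic turnover penalty is equivalent to a minimum-variance problem with the covariance matrix shrunk toward the identity.

```python
import numpy as np

def gmv_with_turnover_penalty(Sigma, w_prev, beta):
    """Minimum-variance weights with a quadratic turnover penalty.

    Solves  min_w  w' Sigma w + beta * ||w - w_prev||^2   s.t.  sum(w) = 1,
    which is a GMV problem under the shrunk covariance Sigma + beta * I --
    the sense in which quadratic transaction costs act as regularization.
    """
    n = Sigma.shape[0]
    A_inv = np.linalg.inv(Sigma + beta * np.eye(n))
    ones = np.ones(n)
    base = A_inv @ (beta * w_prev)
    adj = A_inv @ ones
    lam = (1.0 - ones @ base) / (ones @ adj)   # enforce full investment
    return base + lam * adj

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
Sigma = np.cov(X, rowvar=False)
w_prev = np.full(5, 0.2)                        # pre-rebalancing weights
w = gmv_with_turnover_penalty(Sigma, w_prev, beta=10.0)
w_low = gmv_with_turnover_penalty(Sigma, w_prev, beta=1e-8)  # ~ plain GMV
```

A larger penalty `beta` pulls the allocation toward the current holdings `w_prev`, i.e. it reduces turnover, while `beta` near zero recovers the unregularized minimum-variance solution.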
A counterparty credit limit (CCL) is a limit imposed by a financial institution to cap its maximum possible exposure to a specified counterparty. Although CCLs are designed to help institutions mitigate counterparty risk by selective diversification of their exposures, their implementation restricts the liquidity that institutions can access in an otherwise centralized pool. We address the question of how this mechanism impacts trade prices and volatility, both empirically and via a new model of trading with CCLs. We find empirically that CCLs have little impact on trading. However, our model highlights that in extreme situations, CCLs could serve to destabilize prices and thereby influence systemic risk.
Exploiting NASDAQ order book data and a difference-in-differences methodology, we identify the distinct effects of trading pause mechanisms introduced on U.S. stock exchanges after May 2010. We show that the mere existence of such a regulation constitutes a safeguard which makes market participants behave differently in anticipation of a pause. Pauses tend to break local price trends, make liquidity suppliers revise positions, and enhance price discovery. In contrast, pauses do not have a "cool off" effect on markets, but rather amplify volatility and widen bid-ask spreads. This implies a regulatory trade-off between the protective role of trading pauses and their adverse effects on market quality.
Despite the impressive success of deep neural networks in many application areas, neural network models have so far not been widely adopted in the context of volatility forecasting. In this work, we aim to bridge the conceptual gap between established time series approaches, such as the Heterogeneous Autoregressive (HAR) model (Corsi, 2009), and state-of-the-art deep neural network models. The newly introduced HARNet is based on a hierarchy of dilated convolutional layers, which facilitates exponential growth of the model's receptive field in the number of model parameters. HARNets allow for an explicit initialization scheme such that, before optimization, a HARNet yields predictions identical to those of the respective baseline HAR model. Particularly when considering the QLIKE error as a loss function, we find that this approach significantly stabilizes the optimization of HARNets. We evaluate the performance of HARNets with respect to three different stock market indexes. Based on this evaluation, we formulate clear guidelines for the optimization of HARNets and show that HARNets can substantially improve upon the forecasting accuracy of their respective HAR baseline models. In a qualitative analysis of the filter weights learnt by a HARNet, we report clear patterns regarding the predictive power of past information. Among information from the previous week, yesterday's volatility contributes by far the most to today's realized volatility forecast. Moreover, within the previous month, the importance of individual weeks diminishes almost linearly when moving further into the past.
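The HAR baseline that a HARNet is initialized to reproduce is just an OLS regression of tomorrow's realized variance on its daily, weekly, and monthly averages (Corsi, 2009). A minimal sketch, with synthetic data and hypothetical coefficients rather than anything estimated in the paper:

```python
import numpy as np

def har_design(rv):
    """HAR regressors: RV_{t+1} on RV_t and its 5-day and 22-day averages."""
    X, y = [], []
    for t in range(21, len(rv) - 1):
        X.append([1.0,
                  rv[t],                      # daily lag
                  rv[t - 4 : t + 1].mean(),   # weekly average
                  rv[t - 21 : t + 1].mean()]) # monthly average
        y.append(rv[t + 1])
    return np.array(X), np.array(y)

def fit_har(rv):
    """OLS fit of the HAR model; returns (const, beta_d, beta_w, beta_m)."""
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X, y

# Synthetic realized-variance series generated by a known HAR recursion.
rng = np.random.default_rng(0)
rv = np.ones(22)
true_beta = np.array([0.1, 0.4, 0.3, 0.2])   # hypothetical coefficients
for _ in range(3000):
    x = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    rv = np.append(rv, true_beta @ x + 0.05 * rng.normal())

beta_hat, X, y = fit_har(rv)
resid = y - X @ beta_hat
r2 = 1.0 - resid.var() / y.var()             # in-sample fit of the recovered model
```

A HARNet generalizes exactly this structure: its dilated convolutions start out encoding the three fixed averaging windows above and are then free to reweight individual lags during training.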
Distributed ledger technologies rely on consensus protocols confronting traders with random waiting times until the transfer of ownership is accomplished. This time-consuming settlement process exposes arbitrageurs to price risk and imposes limits to arbitrage. We derive theoretical arbitrage boundaries under general assumptions and show that they increase with expected latency, latency uncertainty, spot volatility, and risk aversion. Using high-frequency data from the Bitcoin network, we estimate arbitrage boundaries due to settlement latency of on average 124 basis points, covering 88% of the observed cross-exchange price differences. Settlement through decentralized systems thus induces non-trivial frictions affecting market efficiency and price formation.
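The comparative statics can be sketched under simple textbook assumptions (CARA utility and Brownian price risk during the settlement window; this is an illustration of the mechanism, not the paper's derivation, and all parameter values are hypothetical):

```python
import numpy as np

def arbitrage_boundary(gamma, sigma, latency_draws):
    """Certainty-equivalent price gap below which arbitrage is unprofitable.

    Sketch under CARA utility (risk aversion gamma) and Brownian price risk
    over a random settlement latency tau:
        d* = (1/gamma) * log E[ exp(gamma^2 * sigma^2 * tau / 2) ],
    so the bound grows with risk aversion, spot volatility, expected latency,
    and (by Jensen's inequality) latency uncertainty.
    """
    mgf = np.exp(0.5 * gamma**2 * sigma**2 * latency_draws).mean()
    return np.log(mgf) / gamma

rng = np.random.default_rng(2)
gamma, sigma = 1.0, 0.5
tau_fixed = np.full(100_000, 1.0)                 # deterministic unit latency
tau_random = rng.exponential(1.0, size=100_000)   # same mean, uncertain latency
b_fixed = arbitrage_boundary(gamma, sigma, tau_fixed)
b_random = arbitrage_boundary(gamma, sigma, tau_random)
```

Holding the mean latency fixed, making the latency random strictly raises the boundary (`b_random > b_fixed`), which is the latency-uncertainty effect described above.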
We propose a multivariate dynamic intensity peaks-over-threshold model to capture extreme events in a multivariate time series of returns. The random occurrence of extreme events exceeding a threshold is modeled by means of a multivariate dynamic intensity model allowing for feedback effects between the individual processes. We propose alternative specifications of the multivariate intensity process using autoregressive conditional intensity and Hawkes-type specifications. Likewise, temporal clustering of the size of exceedances is captured by an autoregressive multiplicative error model based on a generalized Pareto distribution. We allow for spillovers between both the intensity processes and the process of marks. The model is applied to jointly model extreme events in the daily returns of three major stock indexes. We find strong empirical support for a temporal clustering of both the occurrence of extremes and the size of exceedances. Moreover, significant feedback effects between both types of processes are observed. Backtesting Value-at-Risk (VaR) and Expected Shortfall (ES) forecasts shows that the proposed model not only produces a good in-sample fit but also yields reliable out-of-sample predictions. We show that including the temporal clustering of exceedance sizes and their feedback with the intensity results in better forecasts of VaR and ES.
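The self-exciting building block of the Hawkes-type specification can be written down directly (a univariate sketch with exponential decay and hypothetical parameter values; the paper's model is multivariate with cross-process feedback):

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process at time t:

        lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)),

    so each past exceedance raises the instantaneous probability of a new
    one, with the excitation decaying at rate beta.
    """
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([1.0, 1.2, 1.3, 5.0])   # clustered exceedance times
lam_after_cluster = hawkes_intensity(1.4, events, mu=0.1, alpha=0.8, beta=2.0)
lam_quiet = hawkes_intensity(4.9, events, mu=0.1, alpha=0.8, beta=2.0)
```

Immediately after the burst of exceedances the intensity is far above the baseline `mu`, while in the quiet stretch it has decayed back toward it — exactly the temporal clustering of extremes the model is built to capture.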
Revisiting the stealth trading hypothesis: does time-varying liquidity explain the size-effect?
(2019)
Large trades have a smaller price impact per share than medium-sized trades. So far, the literature has attributed this effect to the informational content of trades. In this paper, we show that this effect can arise from strategic order placement. We introduce the concept of a liquidity elasticity, measuring the responsiveness of liquidity demand with respect to changes in liquidity supply, as a major driver for a declining price impact per share. Empirical evidence based on Nasdaq stocks strongly supports theoretical predictions and shows that the aspect of liquidity coordination is an important complement to rationales based on asymmetric information.