We use a unique data set from the Trade Reporting and Compliance Engine (TRACE) to study liquidity effects in the US structured product market. Our main contribution is the analysis of the relation between the accuracy of liquidity measurement and the degree of disclosure. Having access to all relevant trading information, we provide evidence that transaction cost measures that use dealer-specific information such as trader identity and trade direction can be efficiently proxied by measures that use less detailed information. This finding is important for all market participants in the context of OTC markets, as it deepens our understanding of the information contained in transaction data. Thus, our results provide guidance for improving transparency while maintaining trader confidentiality. In addition, we analyze liquidity in the structured product market in general and show that securities that are mainly institutionally traded, guaranteed by a federal authority, or have low credit risk tend to be more liquid.
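The core idea — that transaction cost measures requiring trade direction can be proxied by measures built from prices alone — can be illustrated with a toy simulation. This is not the paper's methodology or data; it is a minimal sketch in which the full-information effective spread (which needs the trade direction) is compared to Roll's (1984) covariance estimator, one well-known low-information proxy that uses only transaction prices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation (not the paper's data): a random-walk midquote
# with a fixed half-spread; trades arrive randomly as buys or sells.
n = 100_000
half_spread = 0.05
mid = np.cumsum(rng.normal(0, 0.01, n))      # efficient (mid) price path
direction = rng.choice([-1, 1], n)           # +1 buy, -1 sell (dealer info)
price = mid + direction * half_spread        # observed transaction prices

# Full-information measure: needs the trade direction for each trade.
effective_spread = np.mean(2 * direction * (price - mid))

# Low-information proxy: Roll (1984) estimator from prices alone,
# 2 * sqrt(-Cov(dp_t, dp_{t-1})); no trade direction required.
dp = np.diff(price)
cov = np.cov(dp[1:], dp[:-1])[0, 1]
roll_spread = 2 * np.sqrt(-cov) if cov < 0 else np.nan

print(effective_spread, roll_spread)
```

In this stylized setting both measures recover the full spread of 0.10, showing how a price-only proxy can track a measure that uses confidential dealer-level detail. Real OTC data are far noisier, which is why the accuracy-versus-disclosure trade-off the abstract describes is an empirical question.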
We study the many implications of the Eurosystem collateral framework for corporate bonds. Using data on the evolving collateral eligibility list, we identify the first inclusion dates of bonds and issuers and use these events to find that the increased supply and demand for pledgeable collateral following eligibility (a) increases activity in the corporate securities lending market, (b) lowers eligible bond yields, and (c) affects bond liquidity. Thus, corporate bond lending relaxes the constraint of limited collateral supply and thereby improves market functioning.
Non-standard errors
(2021)
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
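The DGP/EGP distinction can be made concrete with a hypothetical simulation (this is an illustration, not the paper's actual design): every "team" receives the same sample but makes a different defensible analysis choice — here, the winsorization cutoff applied before estimating a mean. The standard error captures sampling uncertainty; the dispersion of estimates across teams is the non-standard error.

```python
import numpy as np

rng = np.random.default_rng(42)

# One shared, skewed sample: all teams analyze exactly the same data,
# so any dispersion in estimates comes from analysis choices, not sampling.
sample = rng.lognormal(mean=0.0, sigma=1.0, size=5_000)

def team_estimate(data, cutoff):
    """A team's choice: winsorize the upper tail at (100 - cutoff)%, then
    estimate the mean. Different cutoffs are all defensible choices."""
    hi = np.percentile(data, 100 - cutoff)
    return np.clip(data, None, hi).mean()

# 164 teams (as in the study), each with a different upper-tail cutoff
# between 0% and 10% -- a stand-in for real EGP variation.
cutoffs = np.linspace(0.0, 10.0, 164)
estimates = np.array([team_estimate(sample, c) for c in cutoffs])

standard_error = sample.std(ddof=1) / np.sqrt(len(sample))  # DGP uncertainty
non_standard_error = estimates.std(ddof=1)                  # EGP uncertainty

print(standard_error, non_standard_error)
```

Even with identical data, the cross-team dispersion is of a similar order of magnitude to the conventional standard error in this toy setup, which mirrors the abstract's finding that non-standard errors can be on par with standard errors.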