Who knows what when? : The information content of pre-IPO market prices : [Version March/June 2002]
(2002)
To resolve the IPO underpricing puzzle it is essential to analyze who knows what when during the issuing process. In Germany, broker-dealers make a market in IPOs during the subscription period. We examine these pre-issue prices and find that they are highly informative. They are closer to the first price subsequently established on the exchange than both the midpoint of the bookbuilding range and the offer price. The pre-issue prices explain a large part of the underpricing left unexplained by other variables. The results imply that information asymmetries are much lower than the observed variance of underpricing suggests.
Rating agencies state that they take a rating action only when it is unlikely to be reversed shortly afterwards. Based on a formal representation of the rating process, I show that such a policy provides a good explanation for the puzzling empirical evidence: Rating changes occur relatively seldom, exhibit serial dependence, and lag changes in the issuers’ default risk. In terms of informational losses, avoiding rating reversals can be more harmful than monitoring credit quality only twice per year.
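A rating policy of this kind can be illustrated with a minimal simulation. The sketch below is not the paper's formal model: the threshold rule, the `confirm` parameter, and all names are illustrative assumptions. A grade implied by the issuer's default probability must persist for several consecutive periods before the agency moves, which mechanically produces rare rating changes that lag the underlying default risk.

```python
import numpy as np

def agency_ratings(pd_path, bounds, confirm=3):
    """Illustrative sketch of a reversal-avoiding rating policy: the grade
    implied by the issuer's default probability (via ascending PD cut-offs
    in `bounds`, grade 0 = best) must persist for `confirm` consecutive
    periods before the agency changes the rating."""
    def implied(pd):
        return int(np.searchsorted(bounds, pd))

    rating = implied(pd_path[0])
    cand, streak, history = rating, 0, []
    for pd in pd_path:
        r = implied(pd)
        if r == rating:
            cand, streak = rating, 0      # signal confirms current grade
        elif r == cand:
            streak += 1                   # same deviation persists
        else:
            cand, streak = r, 1           # new candidate grade appears
        if streak >= confirm:
            rating, streak = cand, 0      # move only after confirmation
        history.append(rating)
    return history
```

With a default probability that jumps permanently at some date, the assigned rating follows only after the confirmation window has elapsed, reproducing the lag described above.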
Evaluating the quality of credit portfolio risk models is an important issue for both banks and regulators. Lopez and Saidenberg (2000) suggest cross-sectional resampling techniques in order to make efficient use of available data. We show that their proposal disregards cross-sectional dependence in resampled portfolios, which renders standard statistical inference invalid. We proceed by suggesting the Berkowitz (1999) procedure, which relies on standard likelihood ratio tests performed on transformed default data. We simulate the power of this approach in various settings, including one in which the test is extended to incorporate cross-sectional information. To compare the predictive ability of alternative models, we propose to use either Bonferroni bounds or the likelihood ratio of the two models. Monte Carlo simulations show that a default history of ten years can be sufficient to resolve uncertainties currently present in credit risk modeling.
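The Berkowitz (1999) procedure mentioned above can be sketched as follows. This is an outline, not the paper's implementation: the function name, the clipping constant, and the assumption that the model supplies a forecast CDF for portfolio losses are illustrative choices. Observed losses are mapped through the model's CDF and then the inverse normal, so that a correct model yields i.i.d. standard normals, which a likelihood ratio test can check.

```python
import numpy as np
from scipy import stats

def berkowitz_lr_test(losses, model_cdf):
    """LR test in the spirit of Berkowitz (1999): transform observed losses
    with the model's forecast CDF, map to normals via the inverse normal
    CDF, and test whether the result is standard normal."""
    # Probability integral transform, then inverse-normal transform
    # (clipped to keep the inverse CDF finite).
    u = np.clip(model_cdf(np.asarray(losses)), 1e-10, 1 - 1e-10)
    z = stats.norm.ppf(u)
    # Maximum-likelihood fit of mean and variance under the alternative.
    mu_hat, sigma_hat = z.mean(), z.std(ddof=0)
    ll_alt = stats.norm.logpdf(z, mu_hat, sigma_hat).sum()
    ll_null = stats.norm.logpdf(z, 0.0, 1.0).sum()
    lr = 2.0 * (ll_alt - ll_null)
    # Asymptotically chi-squared with 2 degrees of freedom
    # (mean and variance restricted under the null).
    p_value = stats.chi2.sf(lr, df=2)
    return lr, p_value
```

A correctly specified model produces transformed data close to N(0, 1) and hence a small LR statistic; a model whose loss forecasts are systematically off produces a large statistic and a small p-value.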
Evaluating the quality of credit portfolio risk models is an important question for both banks and regulators. Lopez and Saidenberg (2000) suggest cross-sectional resampling techniques in order to make efficient use of available data and to produce measures of forecast accuracy. We first show that their proposal disregards cross-sectional dependence in simulated subportfolios, which renders standard statistical inference invalid. We proceed by suggesting another evaluation methodology which draws on the concept of likelihood ratio tests. Specifically, we compare the predictive quality of alternative models by comparing the probabilities that observed data have been generated by these models. The distribution of the test statistic can be derived through Monte Carlo simulation. To exploit differences in cross-sectional predictions of alternative models, the test can be based on a linear combination of subportfolio statistics. In the construction of the test, the weight of a subportfolio depends on the difference in the loss distributions which alternative models predict for this particular portfolio. This makes efficient use of the data, and reduces computational burden. Monte Carlo simulations suggest that the power of the tests is satisfactory.
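The likelihood-based comparison described above can be sketched in a deliberately simplified setting. The binomial single-PD models, the function names, and the omission of the subportfolio weighting scheme are all assumptions for illustration, not the paper's specification: the observed default history is scored under two candidate models, and the null distribution of the log likelihood ratio is obtained by Monte Carlo simulation under the first model.

```python
import numpy as np
from scipy import stats

def lr_model_comparison(defaults, n_obligors, pd_a, pd_b,
                        n_sim=10_000, seed=0):
    """Compare two candidate default-rate models by the log likelihood
    ratio of observed annual default counts; the statistic's distribution
    under model A is derived by Monte Carlo simulation. Each model is a
    simple binomial with a fixed PD (an illustrative simplification)."""
    rng = np.random.default_rng(seed)
    defaults = np.asarray(defaults)

    def log_lik(counts, pd):
        return stats.binom.logpmf(counts, n_obligors, pd).sum(axis=-1)

    # Observed evidence in favour of model B over model A.
    lr_obs = log_lik(defaults, pd_b) - log_lik(defaults, pd_a)
    # Simulate default histories under model A to obtain the null
    # distribution of the statistic.
    sims = rng.binomial(n_obligors, pd_a, size=(n_sim, len(defaults)))
    lr_sim = log_lik(sims, pd_b) - log_lik(sims, pd_a)
    # p-value: how often model A alone produces evidence for B this strong.
    p_value = float((lr_sim >= lr_obs).mean())
    return lr_obs, p_value
```

If the observed history is typical of model A, the statistic falls near the centre of the simulated null distribution; if the data were generated by a materially different model, the statistic lands far in the tail and model A is rejected.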
JEL classification: G2; G28; C52