Market discipline for financial institutions can be imposed not only from the liability side, as has often been stressed in the literature on the use of subordinated debt, but also from the asset side. This will be particularly true if good lending opportunities are in short supply, so that banks have to compete for projects. In such a setting, borrowers may demand that banks commit to monitoring by requiring that they use some of their own capital in lending, thus creating an asset market-based incentive for banks to hold capital. Borrowers can also provide banks with incentives to monitor by allowing them to reap some of the benefits from the loans, which accrue only if the loans are in fact paid off. Since borrowers do not fully internalize the cost of raising capital to the banks, the level of capital demanded by market participants may be above the one chosen by a regulator, even when capital is a relatively costly source of funds. This implies that capital requirements may not be binding, as recent evidence seems to indicate. JEL Classification: G21, G38
We explore the macro/finance interface in the context of equity markets. In particular, using half a century of Livingston expected business conditions data we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a statistically and economically significant counter-cyclical fashion: depressed expected business conditions are associated with high expected excess returns. Moreover, inclusion of expected business conditions in otherwise standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R2. Expected business conditions retain predictive power even after controlling for an important and recently introduced non-financial predictor, the generalized consumption/wealth ratio, which accords with the view that expected business conditions play a role in asset pricing different from and complementary to that of the consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, while time-varying consumption/wealth may capture time-varying risk aversion. JEL Classification: G12
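To make the regression design concrete, here is a minimal Python sketch of the kind of predictive regression described above; all column names (excess_ret, exp_bus_cond, div_yield, default_prem, term_prem) are hypothetical placeholders, not the paper's actual variables:

```python
# Sketch of a predictive return regression: next-period excess returns
# on lagged expected business conditions plus conventional predictors.
# All column names are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

def predictive_regression(df: pd.DataFrame):
    y = df["excess_ret"].shift(-1).dropna()            # return at t+1
    X = df.loc[y.index, ["exp_bus_cond", "div_yield",
                         "default_prem", "term_prem"]]  # predictors at t
    return sm.OLS(y, sm.add_constant(X)).fit()

# A significantly negative coefficient on exp_bus_cond would match the
# counter-cyclical pattern reported above: depressed expected business
# conditions go along with high expected excess returns.
```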
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model as they are useful, but not used in equilibrium because of basis risk. JEL Classification: D82, G22
Hackethal and Schmidt (2003) criticize a large body of literature on the financing of corporate sectors in different countries that questions some of the distinctions conventionally drawn between financial systems. Their criticism is directed against the use of net flows of finance and they propose alternative measures based on gross flows which they claim re-establish conventional distinctions. This paper argues that their criticism is invalid and that their alternative measures are misleading. There are real issues raised by the use of aggregate data but they are not the ones discussed in Hackethal and Schmidt’s paper. JEL Classification: G30
This chapter focuses on institutional investors in the German financial markets. Institutional investors are specialized financial intermediaries who collect and manage funds on behalf of small investors toward specific objectives in terms of risk, return and maturity. The major types of institutional investors in Germany are insurance companies and investment funds. We will examine the nature of their businesses, their size and role in the financial sector, the size and the composition of the assets under their management, aspects of financial regulation, and features of their asset-liability management.
US investors hold much less foreign stock than mean/variance analysis applied to historical data predicts. In this article, we investigate whether this home bias can be explained by Bayesian approaches to international asset allocation. In contrast to mean/variance analysis, Bayesian approaches employ different techniques for obtaining the set of expected returns. They shrink sample means towards a reference point that is inferred from economic theory. We also show that one of the Bayesian approaches leads to the same implications for asset allocation as the mean/variance tracking-error criterion. In both cases, the optimal portfolio is a combination of the market portfolio and the mean/variance efficient portfolio with the highest Sharpe ratio.
Applying the Bayesian approaches to the subject of international diversification, we find that substantial home bias can be explained when a US investor has a strong belief in the global mean/variance efficiency of the US market portfolio and when he has a high regret aversion to falling behind the US market portfolio. We also find that the current level of home bias can be justified whenever regret aversion is significantly higher than risk aversion.
Finally, we compare the Bayesian approaches to mean/variance analysis in an empirical out-of-sample study. The Bayesian approaches prove to be superior to mean/variance optimized portfolios in terms of higher risk-adjusted performance and lower turnover. However, they do not systematically outperform the US market portfolio or the minimum-variance portfolio.
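The shrinkage idea behind these Bayesian approaches can be illustrated with a short sketch. The following Python fragment implements one well-known variant, the Jorion-style Bayes/Stein estimator, which shrinks sample means toward the return of the minimum-variance portfolio; it illustrates the general technique, not the exact estimator used in the study:

```python
# Sketch of a Jorion-style Bayes/Stein estimator: shrink sample means
# toward the return of the minimum-variance portfolio. Illustration of
# the general shrinkage technique, not the study's exact estimator.
import numpy as np

def bayes_stein_means(returns: np.ndarray) -> np.ndarray:
    """returns: T x N matrix of asset returns (T periods, N assets)."""
    T, N = returns.shape
    mu = returns.mean(axis=0)
    sigma_inv = np.linalg.inv(np.cov(returns, rowvar=False))
    ones = np.ones(N)
    # Reference point: expected return of the minimum-variance portfolio.
    mu0 = (ones @ sigma_inv @ mu) / (ones @ sigma_inv @ ones)
    d = mu - mu0 * ones
    lam = (N + 2) / (d @ sigma_inv @ d)   # shrinkage intensity
    w = lam / (lam + T)                   # weight on the reference point
    return (1 - w) * mu + w * mu0 * ones
```

The weight on the reference point grows with the shrinkage intensity and falls with the sample length, so short, noisy samples are pulled more strongly toward the theory-implied reference.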
We analyze exchange rates along with equity quotes for 3 German firms from New York (NYSE) and Frankfurt (XETRA) during overlapping trading hours to see where price discovery occurs and how stock prices adjust to an exchange rate shock. Findings include: (a) the exchange rate is exogenous with respect to the stock prices; (b) exchange rate innovations are more important in understanding the evolution of NYSE prices than XETRA prices; and (c) most (but not all) of the fundamental or random walk component of firm value is determined in Frankfurt.
In contrast to the United States and the United Kingdom, little empirical work exists about the distributional characteristics of appraisal-based real estate returns outside these countries. The purpose of this study is to fill this gap by focusing on Germany. In line with other studies, this paper offers an extensive investigation into the distribution of German real estate returns and compares them with U.S. and U.K. data from the same period. Furthermore, the comovements with bonds and stocks are also examined. On the whole, the distributional characteristics of German real estate are comparable to those of the U.S. and U.K.
U.S. investors hold much less international stock than is optimal according to mean–variance portfolio theory applied to historical data. We investigated whether this home bias can be explained by Bayesian approaches to international asset allocation. In comparison with mean–variance analysis, Bayesian approaches use different techniques for obtaining the set of expected returns by shrinking the sample means toward a reference point that is inferred from economic theory. Applying the Bayesian approaches to the field of international diversification, we found that a substantial home bias can be explained when a U.S. investor has a strong belief in the global mean–variance efficiency of the U.S. market portfolio, and in this article, we show how to quantify the strength of this belief. We also found that one of the Bayesian approaches leads to the same implications for asset allocation as the mean–variance/tracking-error criterion. In both cases, the optimal portfolio is a combination of the U.S. market portfolio and the mean–variance-efficient portfolio with the highest Sharpe ratio.
Compared with its prior performance, 2001 is not considered one of the Neuer Markt's best years. Investors who once piled into the Neuer Markt have become wary of the exchange, which was launched in 1997 as Europe's leading growth market and answer to the U.S.'s Nasdaq Stock Market. The Neuer Markt's reputation has been marred by the misleading information policies of several of its companies, which published false annual and quarterly data. Some of these companies are responsible for having misinformed investors about their pending bankruptcies. Under these circumstances, it is time to find an explanation for the dramatic loss of credibility in Neuer Markt enterprises. In seeking an answer, two aspects come under consideration:
- What type of information (annual versus quarterly reports) was available to investors, and
- of what quality were the data provided?
Interim reports can be seen as an important instrument in the reporting system for informing all kinds of investors. For this reason we examine the quality of Neuer Markt quarterly reports by concentrating on the disclosure level of 52 Neuer Markt companies' reports for the third quarters of 1999 and 2000. To enable comparison, we establish four disclosure indexes that measure a report's compliance with the Neuer Markt Rules and Regulations as well as with IAS and US GAAP interim reporting standards. The results demonstrate that the level of disclosure has increased over time. We then aim to find typical attributes of Neuer Markt enterprises that provide a high or low level of accounting information in their quarterly reports. However, the study also shows that there is no correlation between market capitalization and the quality of interim reports. It can be suggested that an additional enforcement mechanism could improve quality and lure investors back. A step towards this aim is Deutsche Boerse AG's project to standardize quarterly reports.
Open source projects produce goods or standards that do not allow for the appropriation of private returns by those who contribute to their production. In this paper we analyze why programmers will nevertheless invest their time and effort to code open source software. We argue that the particular way in which open source projects are managed and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that more mediocre programmers cannot achieve. Through setting themselves apart they can turn this signal into monetary rewards that correspond to their superior capabilities. With this incentive they will forgo the immediate rewards they could earn in software companies producing proprietary software by restricting the access to the source code of their product. Whenever institutional arrangements are in place that enable the acquisition of such a signal and the subsequent substitution into monetary rewards, the contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
What constitutes a financial system in general and the German financial system in particular?
(2003)
This paper is one of the two introductory chapters of the book "The German Financial System". It first discusses two issues that have a general bearing on the entire book, and then provides a broad overview of the German financial system. The first general issue is that of clarifying what we mean by the key term "financial system" and, based on this definition, of showing why the financial system of a country is important and what it might be important for. Obviously, a definition of its subject matter and an explanation of its importance are required at the outset of any book. As we will explain in Section II, we use the term "financial system" in a broad sense which sets it clearly apart from the narrower concept of the "financial sector". The second general issue is that of how financial systems are described and analysed. Obviously, the definition of the object of analysis and the method by which the object is to be analysed are closely related to one another. The remainder of the paper provides a general overview of the German financial system. In addition, it is intended to provide a first indication of how the elements of the German financial system are related to each other, and thus to support our claim from Section II that there is indeed some merit in emphasising the systemic features of financial systems in general and of the German financial system in particular. The chapter concludes by briefly comparing the general characteristics of the German financial system with those of the financial systems of other advanced industrial countries, and taking a brief look at recent developments which might undermine the "systemic" character of the German financial system.
Portfolio choice and estimation risk : a comparison of Bayesian approaches to resampled efficiency
(2002)
Estimation risk is known to have a huge impact on mean/variance (MV) optimized portfolios, and is one of the primary reasons why standard Markowitz optimization is infeasible in practice. Several approaches to incorporating estimation risk into portfolio selection have been suggested in the earlier literature. These papers regularly discuss heuristic approaches (e.g., placing restrictions on portfolio weights) and Bayesian estimators. Among the Bayesian class of estimators, we focus in this paper on the Bayes/Stein estimator developed by Jorion (1985, 1986), which is probably the most popular estimator. We show that optimal portfolios based on the Bayes/Stein estimator correspond to portfolios on the original mean/variance efficient frontier with a higher risk aversion, and we quantify this increase in risk aversion. Furthermore, we review a relatively new approach introduced by Michaud (1998), resampling efficiency. Michaud argues that the limitations of MV efficiency in practice generally derive from a lack of statistical understanding of MV optimization. He advocates a statistical view of MV optimization that leads to new procedures that can reduce estimation risk. Resampling efficiency has so far been contrasted with standard Markowitz portfolios, but not with other approaches that explicitly incorporate estimation risk. This paper attempts to fill this gap. Optimal portfolios based on the Bayes/Stein estimator and resampling efficiency are compared in an empirical out-of-sample study in terms of their Sharpe ratio and in terms of stochastic dominance.
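As a rough illustration of Michaud's resampling procedure (not his proprietary implementation), the following sketch draws many return histories from the estimated moments, computes an unconstrained maximum-Sharpe portfolio for each draw, and averages the weights:

```python
# Rough sketch of resampled efficiency: draw return histories from the
# estimated moments, optimize each draw, average the weights. The
# unconstrained max-Sharpe rule is a simplification; Michaud averages
# portfolios along the whole frontier.
import numpy as np

def resampled_weights(mu, sigma, T, n_draws=500, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.zeros((n_draws, len(mu)))
    for i in range(n_draws):
        sample = rng.multivariate_normal(mu, sigma, size=T)
        m = sample.mean(axis=0)
        s = np.cov(sample, rowvar=False)
        w = np.linalg.solve(s, m)   # tangency direction (assumes positive sum)
        weights[i] = w / w.sum()    # normalize to full investment
    return weights.mean(axis=0)     # resampled-efficient weights
```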
Management participation in earnings measures plays an important role in aligning management decisions with the objectives of a firm's owners. This contribution shows under which income measurement rules an agent is motivated to make optimal investment decisions when he participates in residual income. In particular, it addresses the question of whether, for the purpose of optimal investment incentives, finished goods should be valued at full cost or at variable cost. Against this background, different valuation approaches for receivables are also examined with respect to their incentive effects.
Recent changes in accounting regulation for financial instruments (SFAS 133, IAS 39) have been heavily criticized by representatives from the banking industry. They argue for retaining a historical cost based "mixed model" where accounting for financial instruments depends on their designation to either trading or nontrading activities. In order to demonstrate the impact of different accounting models for financial instruments on the financial statements of banks, we develop a bank simulation model capturing the essential characteristics of a modern universal bank with investment banking and commercial banking activities. In our simulations we look at different scenarios with periods of increasing/decreasing interest rates using historical data and with different banking strategies (fully hedged; partially hedged). The financial statements of our model bank are prepared under different accounting rules ("Old" IAS before implementation of IAS 39; current IAS) with and without hedge accounting as offered by the respective sets of rules. The paper identifies critical issues of applying the different accounting rules for financial instruments to the activities of a universal bank. It demonstrates important shortcomings of the "Old" IAS rules (before IAS 39), and of the current IAS rules. Under the current IAS rules the results of a fully hedged bank may have to show volatility in income statements due to changes in market interest rates. Accounting results of a partially hedged bank in the same scenario may be less affected even though there are economic gains or losses.
As past research suggests, currency exposure risk is a main source of the overall risk of internationally diversified portfolios. Thus, managing currency risk is an important instrument for controlling and improving the investment performance of international investments. This study examines the effectiveness of controlling the currency risk of internationally diversified mixed-asset portfolios via different hedging tools. Several hedging strategies, using currency forwards and currency options, are evaluated and compared with each other. For this purpose, the stock and bond markets of the United Kingdom, Germany, Japan, Switzerland, and the U.S. are considered over the period from January 1985 to December 2002, from the point of view of a German investor. Due to the highly skewed return distributions of options, the application of the traditional mean-variance framework for portfolio optimization is doubtful when options are considered. To account for this problem, a mean-LPM model is employed. Currency trends are also taken into account to check for the dependence between time trends in currency movements and the relative potential gains of risk-controlling strategies.
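For readers unfamiliar with the risk measure, a lower partial moment only penalizes returns below a target, which is what makes the mean-LPM framework suitable for skewed, option-augmented portfolios. A minimal sketch, with illustrative target and order:

```python
# Minimal sketch of the lower partial moment (LPM): only returns below
# the target contribute, so right-skewed hedged payoffs are not
# penalized for upside. Target rate and order are illustrative.
import numpy as np

def lower_partial_moment(returns: np.ndarray, target: float = 0.0,
                         order: int = 2) -> float:
    """LPM_n(target) = E[max(target - R, 0)^n]."""
    shortfall = np.maximum(target - returns, 0.0)
    return float(np.mean(shortfall ** order))
```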
Rating agencies state that they take a rating action only when it is unlikely to be reversed shortly afterwards. Based on a formal representation of the rating process, I show that such a policy provides a good explanation for the empirical evidence: Rating changes occur relatively seldom, exhibit serial dependence, and lag changes in the issuers’ default risk. In terms of informational losses, avoiding rating reversals can be more harmful than monitoring credit quality only twice per year.
The purpose of this paper is to compare three different index construction methodologies for commercial property investments. We examine, for different European countries, (i) appraisal-based indices and methods of "unsmoothing" the corresponding return series, (ii) indices that trace average ex-post transaction prices over time, and (iii) indices based on Real Estate Investment Trust share prices.
Substantial research attention has been devoted to the pension accumulation process, whereby employees and those advising them work to accumulate funds for retirement. Until recently, less analysis has been devoted to the pension decumulation process – the process by which retirees finance their consumption during retirement. This gap has recently begun to be filled by an active group of researchers examining key aspects of the pension payout market. One of the areas of most interesting investigation has been in the area of annuities, which are financial products intended to cover the risk of retirees outliving their assets. This paper reviews and extends recent research examining the role of annuities in helping finance retirement consumption. We also examine key market and regulatory factors.
This paper examines the provision of managerial investment incentives by an accounting-based incentive scheme in a multiperiod agency setting in which an impatient manager has to choose between mutually exclusive investment projects. We study the properties of accounting rules that motivate an impatient manager to exert unobservable effort and to make optimal investment decisions. In this analysis, a realized cash flow constitutes a noisy signal that contains information about the unknown profitability of the investment project. By observing these signals a principal is able to revise his prior beliefs about the agent's investment decision. The revision of the principal's prior beliefs leads to a trade-off between the provision of efficient investment incentives and intertemporal sharing of output.
Under a new Basel capital accord, bank regulators might use quantitative measures when evaluating the eligibility of internal credit rating systems for the internal ratings-based approach. Based on data from Deutsche Bundesbank and using a simulation approach, we find that it is possible to identify strongly inferior rating systems out-of-time based on statistics that measure either the quality of ranking borrowers from good to bad, or the quality of individual default probability forecasts. Banks do not significantly improve system quality if they use credit scores instead of ratings, or logistic regression default probability estimates instead of historical data. Banks that are not able to discriminate between high- and low-risk borrowers increase their average capital requirements due to the concavity of the capital requirements function.
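One standard statistic for the quality of ranking borrowers from good to bad is the accuracy ratio from the cumulative accuracy profile. The sketch below computes it via the equivalent relation AR = 2*AUC - 1; the paper's exact statistics may differ:

```python
# Compact sketch of the accuracy ratio (AR), a generic measure of a
# rating system's discriminatory power, computed as 2*AUC - 1 from
# default indicators and risk scores. Illustrative, not necessarily
# the statistics used in the paper.
import numpy as np

def accuracy_ratio(scores: np.ndarray, defaulted: np.ndarray) -> float:
    """scores: higher = riskier; defaulted: 1 if the borrower defaulted."""
    bad = scores[defaulted == 1]
    good = scores[defaulted == 0]
    # Probability that a random defaulter scores above a random survivor,
    # counting ties as one half (Mann-Whitney formulation of the AUC).
    diff = bad[:, None] - good[None, :]
    auc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
    return 2.0 * auc - 1.0
```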
The theoretical derivation of credit market segmentation as the result of a free market process
(2003)
Information asymmetries make it difficult for banks to assess accurately whether specific entrepreneurs are able and/or willing to repay their loans. This leads to implicit interest rate ceilings, i.e. banks "refuse" to increase their interest rates beyond this ceiling as this would lower their net returns. Although the maximum interest rate increases as the size of enterprises decreases, such ceilings nonetheless constrain the banks’ ability to set interest rates at a level that would enable them to cover costs. If transaction costs are high, the total costs associated with granting small and medium-sized loans will exceed the maximum average return which the banks can earn by issuing such loans. For this reason, banks do not lend to small and medium-sized enterprises, and, as a consequence, these businesses have no access to formal sector loans. Because micro and small enterprises have a very high RoI, it is worthwhile for them to rely on expensive informal loans to finance their operations, at least until they reach a certain size. Once they have reached this size, however, it does not make economic sense for them to continue taking out informal credits, and thus they face a growth constraint imposed by the credit market. Medium-sized enterprises earn a lower RoI than small ones, which is why borrowing in the informal credit market is not a worthwhile option for them. Moreover, they do not have access to credit from formal financial institutions, and are thus excluded from obtaining any kind of financing in either of the two credit markets. As the result of free, unregulated market forces we get a stable equilibrium in which the credit market is segmented into an informal (small loan) segment, a formal (large loan) segment and, in between, a "non-market" (medium loan) segment.
This paper analyses the long-term effects of improved small-scale lending, often provided by microfinance institutions set up with the support of development aid. The analysis shows that some common assumptions about microfinance are not true at all: First, it shows that the impact on income will accrue not to the microenterprises themselves, but rather to the consumers of their products. Second, microfinance will have a significant positive effect on the wage levels of employees in the informal sector. Third, microfinance will cause high growth rates in the informal production sector, whereas the trade sector will either contract or at best grow very little.
In this paper, we calculate a transaction-based price index for apartments in Paris (France). The heterogeneous character of real estate is taken into account using a hedonic model. The functional form is specified using a general Box-Cox function. The data set covers 84,686 transactions of the housing market in 1990:01-1999:12, which is one of the largest samples ever used in comparable studies. Low correlations of the price index with stock and bond indices (first differences) indicate diversification benefits from the inclusion of real estate in a mixed-asset portfolio. JEL C43, C51, O18, R20.
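Schematically, such a hedonic index can be built by regressing a Box-Cox-transformed price on property characteristics and time dummies, with the time-dummy coefficients tracing the pure price movement. The variable names below (price, area, rooms, floor, quarter) are illustrative assumptions, not the paper's specification:

```python
# Schematic hedonic regression with a Box-Cox-transformed price and
# quarterly time dummies. Variable names are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def hedonic_boxcox(df: pd.DataFrame):
    y, lam = stats.boxcox(df["price"].to_numpy())   # ML estimate of lambda;
                                                    # prices must be positive
    X = sm.add_constant(df[["area", "rooms", "floor"]].astype(float))
    X = X.join(pd.get_dummies(df["quarter"], prefix="t",
                              drop_first=True, dtype=float))
    model = sm.OLS(y, X).fit()
    # The coefficients on the t_* dummies trace the pure price movement
    # over time and can be converted into the price index.
    return model, lam
```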
An economy in which deposit-taking banks of a Diamond/Dybvig style and an asset market coexist is modelled. First, within this framework we characterize distinct financial systems depending on the fraction of households with direct investment opportunities that are less efficient than those available to banks. When this fraction is comparatively low, the evolving financial system can be interpreted as market-oriented. In this system, banks only provide efficient investment opportunities to households with inferior investment alternatives. Banks are not active in the secondary financial market, nor do they provide any liquidity insurance to their depositors. Households participate to a large extent in the primary as well as in the secondary financial markets. In the other case of a relatively high fraction of households with inefficient direct investment opportunities, a bank-dominated financial system arises, in which banks provide liquidity transformation, are active in secondary financial markets and are the only players in primary markets, while households only participate in secondary financial markets. Second, we analyze the effect a run on a single bank has on the entire financial system. Interestingly, we can show that a bank run on a single bank causes contagion via the financial market neither in market-oriented nor in extremely bank-dominated financial systems. But in only moderately bank-dominated (or hybrid) financial systems, fire sales of long-term financial claims by a distressed bank cause a sudden drop in asset prices that precipitates other banks into crisis.
Capital rationing is an empirically well-documented phenomenon. This constraint requires managers to make investment decisions between mutually exclusive investment opportunities. In a multiperiod agency setting, this paper analyses accounting rules that provide managerial incentives for efficient project selection. In order to motivate a shortsighted manager to expend unobservable effort and to make efficient investment decisions, the principal sets up an incentive scheme based on residual income (e.g., EVA™). The paper shows that income smoothing generates a trade-off between agency costs resulting from differences in discount rates and the costs associated with the "congruity" of residual earnings.
Open-end real estate funds (so-called "Offene Immobilienfonds") play a major role in the German market for securitised real estate investments. Such funds are pools of money from many investors, which are invested in real estate by special investment management companies. This study seeks to identify the risk and return profile of this investment vehicle (before and after income taxes), to compare it with those of other major asset classes, and to provide implications for its appropriate role in a mixed-asset portfolio. Additionally, an overview of the institutional architecture and role of German open-end real estate funds is given. Empirical evidence suggests that the financial characteristics of open-end real estate funds are in many respects similar to those reported for direct real estate investments. Accordingly, German open-end real estate funds qualify for medium- and long-term investment horizons, rather than for shorter holding periods.
This paper investigates the magnitude and the main determinants of share price reactions to buy-back announcements by German corporations. Based on a sample of 224 announcements from the period May 1998 to April 2003, we find average cumulative abnormal returns of around -7.5% for the thirty days preceding the announcement and around +7.0% for the ten days following the announcement. We regress post-announcement abnormal returns on multiple firm characteristics and provide evidence which supports the undervaluation signaling hypothesis but not the excess cash hypothesis. Extending prior empirical work, we also analyze price effects from an initial statement by management that it intends to seek shareholder approval for a buy-back plan. Observed cumulative abnormal returns on this initial date are in excess of 5%, implying a total average price effect of between 12% and 15% from implementing a buy-back plan. We conjecture that the German regulatory environment is the main reason why market reactions to buy-back announcements are much stronger in Germany than in other countries, and conclude that initial statements by managers to seek shareholders' approval for a buy-back plan should also be subject to legal ad-hoc disclosure requirements. EFM classification: 330, 350
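The abnormal-return machinery behind such figures is standard. A bare-bones sketch of a market-model event study follows; the estimation and event windows are illustrative:

```python
# Bare-bones market-model event study: estimate alpha/beta in a window
# before the event, then sum abnormal returns over the event window.
# Window lengths are illustrative choices.
import numpy as np

def car(stock: np.ndarray, market: np.ndarray, event: int,
        est_len: int = 200, window: tuple = (-30, 10)) -> float:
    """Cumulative abnormal return of `stock` around index `event`."""
    est = slice(event + window[0] - est_len, event + window[0])
    beta, alpha = np.polyfit(market[est], stock[est], 1)  # market model
    ev = slice(event + window[0], event + window[1] + 1)
    abnormal = stock[ev] - (alpha + beta * market[ev])
    return float(abnormal.sum())
```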
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that therefore financial patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by demonstrating that the surprising empirical results found by Mayer et al. are due to a hidden assumption underlying their methodology. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. The empirical results are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way.
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that for some option pricing models under continuous hedging the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test to be either no longer able to detect additional priced risk factors or to be unable to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the model of BS, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the model of Heston, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios which are similar to those reported in many empirical studies the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore strongly depend on the location of current volatility relative to its long-term mean, and on the degree of moneyness of the option. As a consequence tests reported in the literature may suffer from the problem that in a time-series framework the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13 Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
This study contributes to the valuation of employee stock options (ESO) in two ways. First, a new pricing model is presented which allows a major part of the calculations to be solved in closed form. Designed with a focus on good replication of empirics, the model fits publicly observable exercise characteristics better than earlier models. In particular, it is able to account for the correlation between the time of exercise and the stock price at exercise, which is suspected of being crucial for the option value. The impact of correlation is weak, however, whereas cancellations play a central role. The second contribution of this paper is an examination of the extent to which the ESO pricing method of SFAS 123 is subject to the discretion of the accountant. If my model were true, the SFAS price would be a good proxy. Yet outside shareholders usually cannot observe one of the SFAS input parameters. By means of an example I show that there is wide latitude left to the accountant.
In a framework closely related to Diamond and Rajan (2001) we characterize different financial systems and analyze the welfare implications of different LOLR policies in these financial systems. We show that in a bank-dominated financial system it is less likely that a LOLR policy that follows the Bagehot rules is preferable. In financial systems with rather illiquid assets, discretionary individual liquidity assistance might be welfare improving, while in market-based financial systems, with rather liquid assets on the banks' balance sheets, emergency liquidity assistance provided freely to the market at a penalty rate is likely to be efficient. Thus, a "one size fits all" approach that does not take the differences between financial systems into account is misguided. JEL Classification: D52, E44, G21, E52, E58
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size (and not just "the" market price of jump risk). Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed-form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us a lot about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending and the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flows or high interim liquidation values of assets tend to finance without a relationship bank. JEL Classification: G21, G78, G33
This paper suggests a motive for bank mergers that goes beyond alleged and typically unverifiable scale economies: preemptive resolution of banks' financial distress. Such "distress mergers" can be a significant motivation for mergers because they can foster reorganizations, realize diversification gains, and avoid public attention. However, since none of these potential benefits comes without a cost, the overall assessment of distress mergers is unclear. We conduct an empirical analysis to provide evidence on the consequences of distress mergers. The analysis is based on comprehensive data from Germany's savings and cooperative banking sectors over the period 1993 to 2001. During this period both sectors faced significant structural problems, and superordinate institutions (associations) have presumably engaged in coordinated actions to manage distress mergers. The data comprise 3640 banks and 1484 mergers. Our results suggest that bank mergers as a means of preemptive distress resolution have moderate costs in terms of the economic impact on performance. We do find strong evidence consistent with diversification gains. Thus, distress mergers seem to have benefits without adversely affecting systemic stability.
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
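The discrete-trading problem is easy to reproduce by simulation. The following self-contained sketch (a plain Black-Scholes delta hedge of a short call, rebalanced at a finite number of dates and simulated under a physical drift mu above the risk-free rate; all parameters illustrative, and a generic setting rather than the paper's stochastic volatility model) lets one inspect the sign and size of the mean hedging error directly:

```python
# Simulation of a discretely rebalanced Black-Scholes delta hedge of a
# short call under a physical drift mu. Generic illustration of the
# discretization effect discussed above; parameters are illustrative.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, tau, r, vol):
    d1 = (np.log(S / K) + (r + 0.5 * vol**2) * tau) / (vol * np.sqrt(tau))
    d2 = d1 - vol * np.sqrt(tau)
    price = S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    return price, norm.cdf(d1)             # price and delta

def mean_hedging_error(S0=100.0, K=100.0, T=0.25, r=0.02, mu=0.08,
                       vol=0.2, steps=13, paths=20000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / steps
    S = np.full(paths, S0)
    price, delta = bs_call(S, K, T, r, vol)
    cash = price - delta * S               # premium minus initial hedge
    for i in range(1, steps + 1):
        z = rng.standard_normal(paths)
        S = S * np.exp((mu - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z)
        cash = cash * np.exp(r * dt)       # accrue interest
        if i < steps:
            _, new_delta = bs_call(S, K, T - i * dt, r, vol)
            cash -= (new_delta - delta) * S   # self-financing rebalance
            delta = new_delta
    error = cash + delta * S - np.maximum(S - K, 0.0)
    return float(error.mean())
```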
The question of whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, in particular given the European Union's decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high-quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms that pre-adopted such standards before 2005, and investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital, and therefore measurable economic benefits, for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against the claim that reporting under internationally accepted standards per se lowers the cost of equity capital of adopting firms.
The GPS recorder consists of a GPS receiver board, a logging facility, an antenna, a power supply, a DC-DC converter and a casing. Currently, it has a weight of 33 g. The recorder works reliably with a sampling rate of 1/s and with an operation time of about 3 h, providing time-indexed data on geographic positions and ground speed. The data are downloaded when the animal is recaptured. Prototypes were tested on homing pigeons. The records of complete flight paths with surprising details illustrate the potential of this new method that can be used on a variety of medium-sized and large vertebrates.
In this study, we develop a technique for estimating a firm's expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only information publicly available at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
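The core of such implied estimation methods is to solve a valuation identity for the discount rate. Below is a stylized sketch based on a finite-horizon residual income model with a flat terminal value; the horizon, payout assumption, and terminal treatment are simplifications, not the paper's customized method:

```python
# Stylized implied cost of equity: find the discount rate at which a
# finite-horizon residual income valuation matches the stock price.
# Horizon, payout ratio, and flat terminal value are simplifying
# assumptions for illustration only.
from scipy.optimize import brentq

def implied_coe(price, book, eps_forecasts, payout=0.5):
    def pricing_gap(r):
        b, value = book, book
        for t, eps in enumerate(eps_forecasts, start=1):
            value += (eps - r * b) / (1 + r) ** t   # discounted residual income
            b += eps * (1 - payout)                 # clean-surplus book value
        horizon = len(eps_forecasts)
        # Flat perpetuity of the final residual income as terminal value.
        value += (eps_forecasts[-1] - r * b) / (r * (1 + r) ** horizon)
        return value - price
    return brentq(pricing_gap, 1e-4, 1.0)           # root between 0.01% and 100%

# e.g. implied_coe(50.0, 20.0, [3.0, 3.4, 3.8]) returns the rate at
# which the model value equals the observed price of 50.0.
```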
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information, we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It will be argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures supported.
EU financial integration : is there a 'Core Europe'? ; evidence from a cluster-based approach
(2005)
Numerous recent studies, e.g. EU Commission (2004a), Baele et al. (2004), Adam et al. (2002), and the research pooled in ECB-CFS (2005), Gaspar, Hartmann, and Sleijpen (2003), have documented progress in EU financial integration from a micro-level view. This paper contributes to this research by identifying groups of financially integrated countries from a holistic, macro-level view. It calculates cross-sectional dispersions, and innovates by applying an inter-temporal cluster analysis to eight euro area countries for the period 1995-2002. The indicators employed represent the money, government bond and credit markets. Our results show that euro countries were divided into two stable groups of financially more closely integrated countries in the pre-EMU period. Back then, geographic proximity and country size might have played a role. This situation has changed remarkably with the euro's introduction. EMU has led to a shake-up both in the number and composition of groups. The evidence puts a question mark behind using Germany as a benchmark in the post-EMU period. The findings suggest as well that financial integration takes place in waves: stable periods and periods of intense transition alternate. Based on the notion of 'maximum similarity', the results suggest that there exist 'maximum similarity barriers'. It takes extraordinary events, such as EMU, to push the degree of financial integration beyond these barriers. The research encourages policymakers to move forward courageously in the post-FSAP era, and provides comfort that the substantial differences between the current and potentially new euro states can be overcome. The analysis could be extended to the new EU member countries, to the global level, and to additional indicators.
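A schematic sketch of the inter-temporal clustering step: for each year, countries are grouped by the similarity of their integration indicators, and group memberships are then compared across years. The data layout and grouping choices below are hypothetical:

```python
# Schematic inter-temporal cluster analysis: hierarchical clustering of
# countries by their integration indicators, year by year. Data layout
# and the two-group choice are illustrative assumptions.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

def yearly_clusters(panel: pd.DataFrame, n_groups: int = 2) -> dict:
    """panel: rows indexed by MultiIndex (year, country), columns = indicators."""
    out = {}
    for year, block in panel.groupby(level="year"):
        X = block.droplevel("year")
        Z = linkage(X.to_numpy(), method="ward")   # agglomerative tree
        labels = fcluster(Z, t=n_groups, criterion="maxclust")
        out[year] = dict(zip(X.index, labels))
    return out

# Comparing the membership dictionaries across years shows whether group
# composition is stable or reshuffled (e.g., around the start of EMU).
```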
Tractable hedging - an implementation of robust hedging strategies : [This Version: March 30, 2004]
(2004)
This paper provides a theoretical and numerical analysis of robust hedging strategies in diffusion-type models, including stochastic volatility models. A robust hedging strategy avoids any losses as long as the realised volatility stays within a given interval. We focus on the effects of restricting the set of admissible strategies to tractable strategies, which are defined as sums of Gaussian strategies. Although a trivial Gaussian hedge is either not robust or prohibitively expensive, this is not the case for the cheapest tractable robust hedge, which consists of two Gaussian hedges for one long and one short position in convex claims which have to be chosen optimally.
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that despite important reforms and substantial changes of individual elements of the German corporate governance system the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in the governance undermines the stability of the corporate governance system in Germany. Therefore a breakdown of the traditional system leading to a control vacuum or a fundamental change to a capital market-based system could be in the offing.
Small and medium-sized firms typically obtain capital via bank financing. They often rely on a mixture of relationship and arm’s-length banking. This paper explores the reasons for the dominance of heterogeneous multiple banking systems. We show that the incidence of inefficient credit termination and subsequent firm liquidation is contingent on the borrower’s quality and on the relationship bank’s information precision. Generally, heterogeneous multiple banking leads to fewer inefficient credit decisions than monopoly relationship lending or homogeneous multiple banking, provided that the relationship bank’s fraction of total firm debt is not too large.
This paper makes an attempt to present the economics of credit securitisation in a non-technical way, starting from the description and analysis of a typical securitisation transaction. The paper sketches a theoretical explanation for why tranching, or non-proportional risk sharing, which is at the heart of securitisation transactions, may allow commercial banks to maximize their shareholder value. However, the analysis also makes clear that the conditions under which credit securitisation enhances welfare are fairly restrictive, and require not only an active role for the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort.
We derive the effects of credit risk transfer (CRT) markets on real sector productivity and on the volume of financial intermediation in a model where banks choose their optimal degree of CRT and monitoring. We find that CRT increases productivity in the up-market real sector but decreases it in the low-end segment. If optimal, CRT unambiguously fosters financial deepening, i.e., it reduces credit-rationing in the economy. These effects rely upon the ability of banks to commit to the optimal CRT at the funding stage. The optimal degree of CRT depends on the combination of moral hazard, general riskiness, and the cost of monitoring in non-monotonic ways.
We provide insights into the determinants of the rating levels of 371 issuers which defaulted in the years 1999 to 2003, and into the leader-follower relationship between Moody's and S&P. The evidence for the rating level suggests that Moody's assigns lower ratings than S&P for all observed periods before the default event. Furthermore, we observe two-way Granger causality, which signifies information flow between the two rating agencies. Since lagged rating changes influence the magnitude of the agencies' own rating changes, it would appear that the two rating agencies apply a policy of taking a severe downgrade through several mild downgrades. Further, our analysis of rating changes shows that issuers with headquarters in the US are downgraded less sharply than non-US issuers. For rating changes by Moody's we also find that larger issuers seem to be downgraded less severely than smaller issuers.
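The two-way Granger-causality check can be sketched with standard tooling; the column names and lag order below are illustrative assumptions, not the paper's specification:

```python
# Minimal sketch of a two-way Granger-causality check on rating-change
# series using statsmodels. Column names and lag order are illustrative.
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def two_way_granger(df: pd.DataFrame, maxlag: int = 4):
    """df: columns 'moodys_chg' and 'sp_chg' (hypothetical names)."""
    # Test whether sp_chg helps predict moodys_chg (second column -> first)...
    sp_to_moodys = grangercausalitytests(df[["moodys_chg", "sp_chg"]], maxlag)
    # ...and the reverse direction.
    moodys_to_sp = grangercausalitytests(df[["sp_chg", "moodys_chg"]], maxlag)
    return sp_to_moodys, moodys_to_sp

# Significant F-statistics in both directions correspond to the two-way
# information flow reported above.
```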
This article presents an overview of the contemporary German insurance market, its structure, players, and development trends. First, brief information about the history of the insurance industry in Germany is provided. Second, the contemporary market is analyzed in terms of its legal and economic structure, with statistics on the number of companies, insurance density and penetration, the role of insurers in the capital markets, premiums split, and main market players and their market shares. Furthermore, the three biggest insurance lines—life, health, and property and casualty—are considered in more detail, such as product range, country specifics, and insurance and investment results. A section on regulation outlines its implementation in the insurance sector, offering information on the underlying legislative basis, supervisory body, technical procedures, expected developments, and sources of more detailed information.
Charged-particle exclusive data for Ar+Pb collisions at 0.772 GeV/u are analyzed in terms of collective variables for the event shapes in momentum space. Semicentral collisions lead to sidewards flow whereas nearly head-on collisions have spherical shapes in the c.m. frame, resulting from complete stopping of projectile motion. The hydrodynamical model predictions agree qualitatively with the data whereas the standard cascade model disagrees, lacking in stopping power and collective flow.
Nuclear resonance fluorescence measurements with linearly polarized bremsstrahlung were performed to determine parities of bound dipole transitions in 206Pb. A new 1+ level at 5800 keV was found, which has almost the same strength as the isoscalar M1 transition in 208Pb. Twenty-four further dipole states in 206Pb below 7.6 MeV possess negative parity.
Pion and proton production are measured to investigate thermal equilibrium in central collisions of 40Ar+KCl at 1.8 GeV/nucleon. The bulk of the pion yield is isotropic in the c.m. system, with an apparent temperature of 58±3 MeV, much lower than the 118±2 MeV of the protons. It is shown that the low pion "temperature" can be explained by the decay kinematics of delta resonances in thermal equilibrium. A (5±1)% component in the pion spectrum is, however, found to have a temperature of 110±10 MeV. The effect on the spectra of possible contributions from collective radial flow is discussed.
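For orientation: apparent temperatures such as those quoted in the preceding abstract are conventionally extracted by fitting the c.m. spectra to a Boltzmann form; a minimal version of such an ansatz, assuming a purely thermal, isotropic source, is

$$E \frac{d^3N}{dp^3} \propto e^{-E/T}, \qquad \text{i.e.} \qquad \frac{dN}{dE_{\mathrm{kin}}} \propto p\,E\,e^{-E/T},$$

where T is the fitted slope parameter. The abstract does not specify the exact fit ansatz used, and, as it notes, contributions from resonance decays and collective flow distort this simple form.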
An event by event analysis is carried out for all charged particles observed in central collisions of 40Ar + KCl and 40Ar + Pb at 1.808 and 0.772 GeV/nucleon, respectively. Total transverse energy is used for impact parameter selection within the central trigger condition. The central Ar + KCl reaction exhibits a forward-backward oriented momentum flux. The flux distribution of the most central Ar + Pb events is approximately isotropic in the fireball center of mass.
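The event-shape and momentum-flux analyses in the two preceding abstracts are typically based on a kinetic flow tensor; a standard choice (the abstracts do not spell out their exact estimator) is

$$F_{ij} = \sum_{\nu=1}^{N} \frac{p_i^{(\nu)}\,p_j^{(\nu)}}{2m_\nu}, \qquad i,j = x,y,z,$$

summed over the N observed charged particles of an event. The eigenvalues and eigenvectors of F characterize the event shape (isotropic, forward-backward elongated, or side-flowed) and the flow angle relative to the beam axis.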
Triple differential cross sections d³σ/dp³ for charged pions produced in symmetric heavy-ion collisions were measured with the KaoS magnetic spectrometer at the heavy-ion synchrotron facility SIS at GSI. The correlations between the momentum vectors of charged pions and the reaction plane in 197Au+197Au collisions at an incident energy of 1 GeV/nucleon were determined. We observe, for the first time, an azimuthally anisotropic distribution of pions, with enhanced emission perpendicular to the reaction plane. The anisotropy is most pronounced for pions of high transverse momentum in semicentral collisions.
Nuclear resonance fluorescence experiments with linearly polarized bremsstrahlung were performed to determine parities of strong dipole transitions in 40Ar. A total of 14 transitions—ten of them previously unknown—in the energy range from 4.7 to 10.2 MeV could be identified. From this experiment it is evident that the main dipole strength to bound states is due to E1 excitations. An upper limit of B(M1)↑ < 0.5 µN² was found for individual magnetic dipole excitations in 40Ar in the energy region below neutron threshold.
Electric charge correlations were studied for p+p, C+C, Si+Si, and centrality selected Pb+Pb collisions at √sNN = 17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudorapidity correlations of oppositely charged particles were measured using the balance function method. The width of the balance function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
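For reference, the balance function used above is, up to minor convention differences between papers, built from like-sign and unlike-sign pair counts at pseudorapidity separation Δη:

$$B(\Delta\eta) = \frac{1}{2}\left[\frac{\langle N_{+-}(\Delta\eta)\rangle - \langle N_{++}(\Delta\eta)\rangle}{\langle N_{+}\rangle} + \frac{\langle N_{-+}(\Delta\eta)\rangle - \langle N_{--}(\Delta\eta)\rangle}{\langle N_{-}\rangle}\right];$$

a narrower B(Δη) means that balancing charges are created closer together in pseudorapidity, which is the basis of the late-hadronization interpretation.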
The properties of two measures of charge fluctuations, D̃ and ΔΦq, are discussed within several toy models of nuclear collisions. In particular, their dependence on mean particle multiplicity, multiplicity fluctuations, and net electric charge is studied. It is shown that the measure ΔΦq is less sensitive to these trivial biasing effects than the originally proposed measure D̃. Furthermore, the influence of resonance decay kinematics is analyzed and it is shown that it is likely to obscure a possible reduction of fluctuations due to QGP creation.
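For orientation (not the paper's own definitions, which should be taken from the paper itself): D̃ is a corrected variant of the widely used charge-fluctuation measure of Jeon and Koch,

$$D = \frac{4\,\langle \delta Q^2 \rangle}{\langle N_{\mathrm{ch}} \rangle},$$

where Q is the net charge and N_ch the charged multiplicity in the acceptance. An uncorrelated pion gas gives D ≈ 4, while a quark-gluon plasma is predicted to give substantially smaller values, which is why biasing effects of the kind analyzed above matter.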
German version: Der Umgang mit Rechtsparadoxien: Derrida, Luhmann, Wiethölter [Dealing with paradoxes of law: Derrida, Luhmann, Wiethölter]. In: Christian Joerges and Gunther Teubner (eds.), Rechtsverfassungsrecht: Recht-Fertigungen zwischen Sozialtheorie und Privatrechtsdogmatik. Nomos, Baden-Baden 2003, pp. 249-272.
Analysis of Lambda and associative pion production in relativistic nucleus-nucleus collisions
(1984)
The transverse momentum and rapidity distributions of negative hadrons and participant protons have been measured for central 32S+32S collisions at plab = 200 GeV/c per nucleon. The proton mean rapidity shift ⟨Δy⟩ ≈ 1.6 and mean transverse momentum ⟨pT⟩ ≈ 0.6 GeV/c are much higher than in pp or peripheral AA collisions and indicate an increase in the nuclear stopping power. All pT spectra exhibit similar source temperatures. Including previous results for K0s, Lambda, and Lambda-bar, we account for all important contributions to particle production.
The NA35 experiment has collected a high statistics set of momentum analyzed negative hadrons near and forward of midrapidity for central collisions of 200A GeV/c 32S+S, Cu, Ag, and Au. Using momentum space correlations to study the size of the source of particle production, the transverse source radii are found to decrease by ~40% at midrapidity and ~20% at forward rapidity while the longitudinal radius RL is found to decrease by ~50% as pT increases over the interval 50<pT<600 MeV/c. Calculations using a microscopic phase space approach (relativistic quantum molecular dynamics) reproduce the observed trends of the data. PACS: 25.75.+r
Transverse momenta and rapidities of Lambda's produced in central nucleus-nucleus collisions at 4.5 GeV/c·u (C-C, ..., O-Pb) were studied and compared with those from inelastic He-Li interactions at the same incident momentum. Polarization of the Lambda hyperons was found to be consistent with zero (αP = -0.06 ± 0.11 for Lambda's from central collisions). An upper limit of the Lambda-bar/Lambda production ratio was estimated to be less than 4.5 × 10^-3. The experiment was performed in a triggered streamer chamber.
Difficulties of the thermodynamical model approach to pion production in relativistic ion collisions
(1983)
Thermodynamical models with various forms of partial transparency of nuclear matter are considered. It is shown that the introduction of transparency, while significantly improving agreement with pion data on multiplicities and transverse momenta, leads to a serious discrepancy with the average rapidities of pions. Qualitative arguments are given that the difficulties of the thermodynamical approach can be overcome if one assumes hydrodynamical expansion in the first stage of nuclear interactions.
A detailed study of pion production in inelastic and central nucleus-nucleus collisions was carried out using a 2 m streamer spectrometer. Nuclear targets mounted inside the streamer chamber were exposed to nuclear beams of 4.5 GeV/c/nucleon momentum. A systematic study of the influence of the central trigger on the observed data is performed. The data on multiplicities, rapidities, transverse momenta, and emission angles of negative pions are presented for various pairs of colliding nuclei. Intercorrelations between various characteristics are studied and discussed. The results are compared with predictions of some theoretical models. It is shown that the main features of pion production in nuclear collisions can be satisfactorily described by a model assuming independent nucleon-nucleon collisions with a subsequent cascading process. However, the observed correlation between Lambda and pion characteristics seems to be unexplained by this picture.
We argue that the recent analysis of strangeness production in nuclear collisions at 200 A GeV/c performed by Topor Pop et al. is flawed. The conclusions are based on an erroneous interpretation of the data and the numerical model results. The term "strangeness enhancement" is used in a misleading way.
Pion and strangeness puzzles
(1996)
Data on the mean multiplicity of strange hadrons produced in minimum bias proton-proton and central nucleus-nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon-nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus-nucleus collisions than for nucleon-nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon-nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus-nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
In the current globalization debate the law appears to be entangled in economic and political developments which move into a new dimension of depoliticization, de-centralization and de-individualization. For all the correct observations in detail, though, this debate is bringing about a drastic (polit)economic reduction of the role of law in the globalization process that I wish to challenge in this paper. Here one has to take on Wallerstein’s misconception of “worldwide economies” according to which the formation of the global society is seen as a basically economic process. Autonomous globalization processes in other social spheres running parallel to economic globalization need to be taken seriously. In protest against such (polit)economic reductionism several strands of the debate, among them the neo-institutionalist theory of “global culture”, post-modern concepts of global legal pluralism, systems theory studies of differentiated global society and various versions of “global civil society” have shaped a concept of a polycentric globalization. From these angles the remarkable multiplicity of the world society, in which tendencies to re-politicization, re-regionalization and re-individualization are becoming visible at the same time, becomes evident. I shall contrast two current theses on the globalization of law with two less current counter-theses: First thesis: globalization is relevant for law because the emergence of global markets undermines the control potential of national policy, and therefore also the chances of legal regulation. First counter-thesis: globalization produces a set of problems intrinsic to law itself, consisting in a change to the dominant lawmaking processes. Second thesis: globalization means that the law institutionalizes the worldwide shift in power from governmental actors to economic actors. Second counter-thesis: globalization means that the law has a chance of contributing to a dual constitution of autonomous sectors of world society.
The main results obtained within the energy scan program at the CERN SPS are presented. The anomalies in the energy dependence of hadron production indicate that the onset of the deconfinement phase transition is located at about 30 A GeV. For the first time we seem to have clear evidence for the existence of a deconfined state of matter in nature. PACS numbers: 24.85.+p
We present the measured correlation functions for pi+ pi-, pi- pi- and pi+ pi+ pairs in central S+Ag collisions at 200 GeV per nucleon. The Gamow function, which has traditionally been used to correct the correlation functions of charged pions for the Coulomb interaction, is found to be inconsistent with all measured correlation functions. Certain problems which have been dominating the systematic uncertainty of the correlation analysis are related to this inconsistency. It is demonstrated that a new Coulomb correction method, based exclusively on the measured correlation function for pi+ pi- pairs, may solve the problem.
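For context: the Gamow function referred to above is the point-source limit of the Coulomb correction; for a pion pair with relative momentum k* it reads

$$G(\eta) = \frac{2\pi\eta}{e^{2\pi\eta} - 1}, \qquad \eta = \pm\,\frac{\alpha\,m_\pi}{2k^*},$$

with the plus sign for like-sign and the minus sign for unlike-sign pairs. Because it ignores the finite size of the source, it overcorrects at small k*, which is consistent with the inconsistency reported above.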
The pion multiplicity per participating nucleon in central nucleus-nucleus collisions at the energies 2-15 A GeV is significantly smaller than in nucleon-nucleon interactions at the same collision energy. This effect of pion suppression is argued to appear due to the evolution of the system produced at the early stage of heavy-ion collisions towards a local thermodynamic equilibrium and further isentropic expansion.
It is shown that data on pion and strangeness production in central nucleus-nucleus collisions are consistent with the hypothesis of Quark Gluon Plasma formation between 15 A GeV/c (BNL AGS) and 160 A GeV/c (CERN SPS) collision energies. The experimental results interpreted in the framework of a statistical approach indicate that the effective number of degrees of freedom increases by a factor of about 3 in the course of the phase transition and that the plasma created at CERN SPS energy may have a temperature of about 280 MeV (energy density ≈ 10 GeV/fm^3). Experimental studies of central Pb+Pb collisions in the energy range 20-160 A GeV/c are urgently needed in order to localize the threshold energy and study the properties of the QCD phase transition.
Using the NA49 main TPC, the central production of hyperons has been measured in CERN SPS Pb-Pb collisions at 158 GeV/c. The preliminary ratio, studied at 2.0 < y < 2.6 and 1 < pT < 3 GeV/c, equals ~ (13 ± 4)% (systematic error only). It is compatible, within errors, with the previously obtained ratios for central S+S [1], S+W [2], and S+Au [3] collisions. The fit to the transverse momentum distribution resulted in an inverse slope parameter T of 297 MeV. At this level of statistics we do not see any noticeable enhancement of hyperon production with the increased volume (and, possibly, degree of equilibration) of the system from S+S to Pb+Pb. This result is unexpected and counterintuitive, and should be further investigated. If confirmed, it will have a significant impact on our understanding of mechanisms leading to the enhanced strangeness production in heavy-ion collisions.
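For reference, inverse slope parameters like the T = 297 MeV quoted above are conventionally obtained from an exponential fit to the transverse-mass spectrum,

$$\frac{1}{m_T}\frac{dN}{dm_T} \propto \exp\!\left(-\frac{m_T}{T}\right), \qquad m_T = \sqrt{p_T^2 + m^2},$$

where T absorbs both the thermal freeze-out temperature and the blue-shift from transverse expansion. The abstract does not state the exact fit form used, so this should be read as the standard convention rather than the paper's definition.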
The data on average hadron multiplicities in central A+A collisions measured at the CERN SPS are analysed with the ideal hadron gas model. It is shown that the full chemical equilibrium version of the model fails to describe the experimental results. The agreement of the data with the off-equilibrium version allowing for partial strangeness saturation is significantly better. The freeze-out temperature of about 180 MeV seems to be independent of the system size (from S+S to Pb+Pb) and in agreement with that extracted in e+e-, pp and p-pbar collisions. The strangeness suppression is discussed at both the hadron and valence quark level. It is found that the hadronic strangeness saturation factor gamma_S increases from about 0.45 for pp interactions to about 0.7 for central A+A collisions, with no significant change from S+S to Pb+Pb collisions. The quark strangeness suppression factor lambda_S is found to be about 0.2 for elementary collisions and about 0.4 for heavy ion collisions, independently of collision energy and type of colliding system.
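For orientation, the strangeness saturation factor gamma_S enters the off-equilibrium hadron gas model, in the usual convention, as a multiplicative suppression per strange valence quark or antiquark:

$$\langle N_i \rangle = \gamma_S^{\,n_i^{s}}\,\langle N_i \rangle_{\mathrm{eq}},$$

where n_i^s counts the strange quarks plus antiquarks of hadron species i and ⟨N_i⟩_eq is the fully equilibrated yield; gamma_S = 1 recovers complete chemical equilibrium.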
The transverse momentum and rapidity distributions of net protons and negatively charged hadrons have been measured for minimum bias proton-nucleus and deuteron-gold interactions, as well as central oxygen-gold and sulphur-nucleus collisions at 200 GeV per nucleon. The rapidity density of net protons at midrapidity in central nucleus-nucleus collisions increases both with target mass for sulphur projectiles and with the projectile mass for a gold target. The shape of the rapidity distributions of net protons forward of midrapidity for d+Au and central S+Au collisions is similar. The average rapidity loss is larger than 2 units of rapidity for reactions with the gold target. The transverse momentum spectra of net protons for all reactions can be described by a thermal distribution with temperatures between 145 ± 11 MeV (p+S interactions) and 244 ± 43 MeV (central S+Au collisions). The multiplicity of negatively charged hadrons increases with the mass of the colliding system. The shape of the transverse momentum spectra of negatively charged hadrons changes from minimum bias p+p and p+S interactions to p+Au and central nucleus-nucleus collisions. The mean transverse momentum is almost constant in the vicinity of midrapidity and shows little variation with the target and projectile masses. The average number of produced negatively charged hadrons per participant baryon increases slightly from p+p, p+A to central S+S,Ag collisions.
Preliminary inclusive spectra for K+, K-, Ks0, Λ, and Λ-bar are presented, which were measured in central Pb+Pb collisions at 158 GeV per nucleon by the NA49 experiment. A comparison with data from lighter collision systems shows a strong change of the shape of the Λ rapidity distribution. The strangeness enhancement observed in S+S compared to p+p and p+A is not further increased in Pb+Pb.
The directed and elliptic flow of protons and charged pions has been observed in semi-central collisions of a 158 GeV/nucleon Pb beam with a Pb target. The rapidity and transverse momentum dependence of the flow has been measured. The directed flow of the pions is opposite to that of the protons, but both exhibit negative flow at low pT. The elliptic flow of both is fairly independent of rapidity but rises with pT. PACS numbers: 25.75.-q, 25.75.Ld
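For reference, directed and elliptic flow are the first and second Fourier coefficients of the azimuthal distribution of particles relative to the reaction plane Ψ_R:

$$\frac{dN}{d(\phi - \Psi_R)} \propto 1 + 2v_1\cos(\phi - \Psi_R) + 2v_2\cos 2(\phi - \Psi_R),$$

with v_1 the directed and v_2 the elliptic flow coefficient; "negative flow at low pT" corresponds to a negative v_1 in this convention.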
Preliminary data on phi production in central Pb+Pb collisions at 158 GeV per nucleon are presented, measured by the NA49 experiment in the hadronic decay channel phi → K+K-. At mid-rapidity, the kaons were separated from pions and protons by combining dE/dx and time-of-flight information; in the forward rapidity range only dE/dx identification was used to obtain the rapidity distribution and a rapidity-integrated mT-spectrum. The mid-rapidity yield obtained was dN/dy = 1.85 ± 0.3 per event; the total phi multiplicity was estimated to be 5.0 ± 0.7 per event. Comparison with published pp data shows a slight, but not very significant, strangeness enhancement.
We demonstrate that a new type of analysis in heavy-ion collisions, based on an event-by-event analysis of the transverse momentum distribution, allows us to obtain information on secondary interactions and collective behaviour that is not available from the inclusive spectra. Using a random walk model as a simple phenomenological description of initial state scattering in collisions with heavy nuclei, we show that the event-by-event measurement allows a quantitative determination of this effect, well within the resolution achievable with the new generation of large acceptance hadron spectrometers. The preliminary data of the NA49 collaboration on transverse momentum fluctuations indicate qualitatively different behaviour than that obtained within the random walk model. The results are discussed in relation to the thermodynamic and hydrodynamic description of nuclear collisions.
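As a toy illustration of the event-by-event approach described above (not the NA49 measure itself, whose definition should be taken from the collaboration's papers), the following sketch compares the observed variance of the event-wise mean pT with the baseline expected for independent particle emission; all distributions and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def event_mean_pt(events):
    """Mean transverse momentum per event (events: list of pT arrays)."""
    return np.array([pts.mean() for pts in events])

# Toy events: multiplicity ~ Poisson(200), pT ~ exponential with 0.4 GeV mean.
events = [rng.exponential(0.4, rng.poisson(200)) for _ in range(5000)]

m = event_mean_pt(events)
# Baseline from purely statistical (independent-particle) fluctuations:
# Var(mean pT) ≈ Var(pT) / <N> for independent emission.
all_pt = np.concatenate(events)
n_mean = np.mean([len(e) for e in events])
print("measured Var(<pT>):          ", m.var())
print("independent-particle baseline:", all_pt.var() / n_mean)
```

A significant excess of the measured variance over the baseline would signal correlated emission, e.g. from initial-state scattering or collective behaviour.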
Two-particle correlation functions of negative hadrons over a wide phase space, and transverse mass spectra of negative hadrons and deuterons near mid-rapidity, have been measured in central Pb+Pb collisions at 158 GeV per nucleon by the NA49 experiment at the CERN SPS. A novel Coulomb correction procedure for the negative two-particle correlations is employed, making use of the measured oppositely charged particle correlation. Within an expanding source scenario these results are used to extract the dynamic characteristics of the hadronic source, resolving the ambiguities between the temperature and the transverse expansion velocity of the source that are unavoidable when single- and two-particle spectra are analysed separately. The source shape, the total duration of the source expansion, the duration of particle emission, the freeze-out temperature, and the longitudinal and transverse expansion velocities are deduced.
Lambda and Antilambda reconstruction in central Pb+Pb collisions using a time projection chamber
(1997)
The large acceptance time projection chambers of the NA49 experiment are used to record the trajectories of charged particles from Pb+Pb collisions at 158 GeV per nucleon. Neutral strange hadrons have been reconstructed from their charged decay products. To obtain distributions of Λ, Λ-bar, and Ks0 in discrete bins of rapidity, y, and transverse momentum, pT, calculations have been performed to determine the acceptance of the detector and the efficiency of the reconstruction software as a function of both variables. The lifetime distributions obtained give values of cτ = 7.8 ± 0.6 cm for Λ and cτ = 2.5 ± 0.3 cm for Ks0, consistent with data book values.
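A minimal sketch of how such cτ values can be extracted, assuming per-candidate decay lengths and momenta are available; the function and parameter names are hypothetical, and a real analysis would additionally correct the lifetime distribution for acceptance and reconstruction efficiency, as the abstract notes.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_ctau(L, p, m, bins=40):
    """Fit c*tau from lab decay lengths L (cm) and momenta p (GeV/c) of
    candidates of mass m (GeV/c^2), via proper decay length l = L*m/p."""
    l = L * m / p                                  # boost-corrected length
    counts, edges = np.histogram(l, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    expo = lambda x, n0, ctau: n0 * np.exp(-x / ctau)
    (n0, ctau), _ = curve_fit(expo, centers, counts, p0=(counts[0], 5.0))
    return ctau

# Toy self-check: generate Lambda-like candidates with true c*tau = 7.89 cm.
rng = np.random.default_rng(2)
p = rng.uniform(2.0, 10.0, 20000)
m_lambda = 1.1157
l_true = rng.exponential(7.89, p.size)
L = l_true * p / m_lambda
print(fit_ctau(L, p, m_lambda))                    # should recover ~7.9 cm
```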
A brief review of the history of data collection and interpretation of the results on high energy A+A collisions is presented. Basic assumptions and main results of a statistical model of the early stage of A+A collisions are discussed. It is concluded that a broad set of experimental data is in agreement with the hypothesis that a QGP is created in central A+A (S+S and Pb+Pb) collisions at the SPS. Careful experimental investigation of A+A collisions in the energy region between the top AGS and SPS energies is needed.
The large acceptance TPCs of the NA49 spectrometer allow for a systematic multidimensional study of two-particle correlations in different parts of phase space. Results from Bertsch-Pratt and Yano-Koonin-Podgoretskii parametrizations are presented differentially in transverse pair momentum and pair rapidity. These studies give an insight into the dynamical space-time evolution of relativistic Pb+Pb collisions, which is dominated by longitudinal expansion.
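For reference, the Bertsch-Pratt (out-side-long) parametrization mentioned above fits the two-particle correlation function as

$$C_2(\mathbf{q}) = 1 + \lambda\,\exp\!\left(-R_{\mathrm{out}}^2 q_{\mathrm{out}}^2 - R_{\mathrm{side}}^2 q_{\mathrm{side}}^2 - R_{\mathrm{long}}^2 q_{\mathrm{long}}^2 - 2R_{\mathrm{ol}}^2\,q_{\mathrm{out}} q_{\mathrm{long}}\right),$$

where the cross term R_ol vanishes at mid-rapidity in a longitudinally symmetric frame; the radii R_out, R_side, R_long measure homogeneity lengths of the source in the respective directions.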
A statistical model of the early stage of central nucleus-nucleus (A+A) collisions is developed. We suggest a description of the confined state with several free parameters fitted to a compilation of A+A data at the AGS. For the deconfined state a simple Bag model equation of state is assumed. The model leads to the conclusion that a Quark Gluon Plasma is created in central nucleus-nucleus collisions at the SPS. This result is in quantitative agreement with existing SPS data on pion and strangeness production and gives a natural explanation for their scaling behaviour. The localization and the properties of the transition region are discussed. It is shown that the deconfinement transition can be detected by observation of the characteristic energy dependence of pion and strangeness multiplicities, and by an increase of the event-by-event fluctuations. An attempt to understand the data on J/psi production in Pb+Pb collisions at the SPS within the same approach is presented.
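The simple Bag model equation of state assumed above for the deconfined state has, in its standard form, pressure and energy density

$$p = \frac{g\pi^2}{90}\,T^4 - B, \qquad \varepsilon = \frac{g\pi^2}{30}\,T^4 + B,$$

where g is the effective number of massless degrees of freedom and B the bag constant; the specific parameter values belong to the paper itself.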
Data on J/psi production in inelastic proton-proton, proton-nucleus and nucleus-nucleus interactions at 158 A GeV are analyzed and it is shown that the ratio of mean multiplicities of J/psi mesons and pions is the same for all these collisions. This observation is difficult to understand within current models of J/psi production in nuclear collisions based on the assumption of hard QCD creation of charm quarks.
This paper determines the cost of employee stock options (ESOs) to shareholders. I present a pricing method that seeks to replicate the empirics of exercise and cancellation as closely as possible. In a first step, an intensity-based pricing model of El Karoui and Martellini is adapted to the needs of ESOs. In a second step, I calibrate the model with a regression analysis of exercise rates from the empirical work of Heath, Huddart and Lang. The pricing model thus takes into account all effects captured in the regression. Separate regressions enable me to compare options for top executives with those for subordinates. I find no price differences. The model is also applied to test the precision of the fair value accounting method for ESOs, SFAS 123. Using my model as a reference, the SFAS method results in surprisingly accurate prices.
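A schematic Monte Carlo sketch of intensity-based ESO valuation of the general kind described above: exercise arrives at a stochastic intensity that rises with moneyness after vesting. The hazard specification, parameter values, and function names here are hypothetical illustrations, not the paper's calibrated model.

```python
import numpy as np

def eso_value(S0=100., K=100., r=0.05, sigma=0.3, T=10., vest=3.,
              a=0.1, b=0.4, steps=120, paths=20000, seed=3):
    """Schematic intensity-based ESO price: after vesting, exercise arrives
    with hazard rate a + b*max(S/K - 1, 0) per year (hypothetical spec)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    value = np.zeros(paths)
    alive = np.ones(paths, dtype=bool)
    S = np.full(paths, S0)
    for i in range(1, steps + 1):
        t = i * dt
        z = rng.standard_normal(paths)
        S *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        if t >= vest:
            lam = a + b * np.maximum(S / K - 1., 0.)
            ex = alive & (rng.random(paths) < 1. - np.exp(-lam * dt))
            value[ex] = np.exp(-r * t) * np.maximum(S[ex] - K, 0.)
            alive &= ~ex
    # Options still alive at maturity are exercised if in the money.
    value[alive] = np.exp(-r * T) * np.maximum(S[alive] - K, 0.)
    return value.mean()

print(eso_value())
```

Because early exercise forgoes option time value, such a price is below the corresponding Black-Scholes value, which is the intuition behind testing SFAS 123 against a behaviourally calibrated model.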
Intangible assets such as goodwill, licenses, research and development, or customer relations are becoming more and more important in high-technology and service-oriented economies. Yet a comparison of the book values of listed companies with their market capitalization suggests that financial reports fail to meet the information needs of market participants regarding the estimation of proper firm value. Moreover, with the introduction of Anglo-American accounting systems in Europe and Asia, we can observe diverging accounting practices for intangible assets even in the accounts of companies domiciled in the same jurisdiction, caused by different accounting standards. To assess the relevance of intangible assets in the accounts of listed Japanese and German companies, we therefore measure certain balance sheet and profit and loss relations relating to goodwill and self-developed software. We compare and analyze valuation rules for goodwill and software costs according to German GAAP, Japanese GAAP, US GAAP and IAS to determine the possible impact of diverging rules on the comparability of the accounts. Our results show that comparability is impaired because of different accounting practices. The recognition and valuation of goodwill and self-developed software vary significantly according to the accounting regime applied. However, for the recognition of self-developed software, the average impact on asset coefficients or profit is not that high. Moreover, an industry bias can only be found for the financial industry. In contrast, for goodwill accounting we find major differences, especially between German and Japanese blue chips. The introduction of the new goodwill impairment-only approach and the prohibition of the pooling method may have a major impact especially on Japanese companies’ accounts.