This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates, either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
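For intuition, a standard textbook illustration of a trivial static bound (our example, not the paper's formal setup): a European call payoff is dominated by one share of the underlying, so buying and holding the share is a static superhedge, and the spot price is the trivial nonparametric upper price bound:

$$ (S_T - K)^+ \;\le\; S_T \quad\Longrightarrow\quad \text{price}(\text{call}) \;\le\; S_0 . $$

Tightening this bound requires a dynamic strategy that is cheaper than the share yet still dominates the payoff; the paper characterizes when such a strategy exists.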
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, of which one may be special in the sense of relationship lending. However, theory does not tell us much about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending against the bargaining power of a pivotal relationship bank. We show that firms with low expected cash flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash flows or high interim liquidation values of assets tend to finance without a relationship bank. JEL Classification: G21, G78, G33
This paper suggests a motive for bank mergers that goes beyond alleged and typically unverifiable scale economies: preemptive resolution of banks' financial distress. Such "distress mergers" can be a significant motivation for mergers because they can foster reorganizations, realize diversification gains, and avoid public attention. However, since none of these potential benefits comes without a cost, the overall assessment of distress mergers is unclear. We conduct an empirical analysis to provide evidence on the consequences of distress mergers. The analysis is based on comprehensive data from Germany's savings banks and cooperative banking sectors over the period 1993 to 2001. During this period both sectors faced significant structural problems, and superordinate institutions (associations) have presumably engaged in coordinated actions to manage distress mergers. The data comprise 3,640 banks and 1,484 mergers. Our results suggest that bank mergers as a means of preemptive distress resolution have moderate costs in terms of the economic impact on performance. We do find strong evidence consistent with diversification gains. Thus, distress mergers seem to have benefits without adversely affecting systemic stability.
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, in particular given the European Union's decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms which adopted such standards before 2005, and investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital, and therefore measurable economic benefits, for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against concluding that reporting under internationally accepted standards per se lowers the cost of equity capital of adopting firms.
In this study, we develop a technique for estimating a firm’s expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only publicly available information at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
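To illustrate the flavour of implied cost-of-capital estimation (a hypothetical sketch of the Easton (2004) PEG variant, not the authors' exact customized estimator), the expected return can be backed out from the current price and two consecutive analyst EPS forecasts:

```python
import math

def peg_implied_cost_of_equity(price: float, eps1: float, eps2: float) -> float:
    """Easton (2004) PEG estimate: r = sqrt((eps2 - eps1) / price).

    price : current share price
    eps1  : consensus EPS forecast one year ahead
    eps2  : consensus EPS forecast two years ahead (must exceed eps1)
    """
    if eps2 <= eps1:
        raise ValueError("PEG estimate requires eps2 > eps1")
    return math.sqrt((eps2 - eps1) / price)

# Hypothetical inputs: price 50, EPS forecasts 3.00 and 3.45
print(f"{peg_implied_cost_of_equity(50.0, 3.00, 3.45):.2%}")  # ~9.49%
```

Gebhardt/Lee/Swaminathan-style estimates instead solve a residual income valuation numerically, but the principle of inverting prices and forecasts into an expected return is the same.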
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information, we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It is argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures it supports.
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists of creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development, and contrasting the new strategy with former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "public-private partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
EU financial integration: is there a 'Core Europe'? Evidence from a cluster-based approach
(2005)
Numerous recent studies, e.g. EU Commission (2004a), Baele et al. (2004), Adam et al. (2002), and the research pooled in ECB-CFS (2005) and Gaspar, Hartmann, and Sleijpen (2003), have documented progress in EU financial integration from a micro-level view. This paper contributes to this research by identifying groups of financially integrated countries from a holistic, macro-level view. It calculates cross-sectional dispersions, and innovates by applying an inter-temporal cluster analysis to eight euro area countries for the period 1995-2002. The indicators employed represent the money, government bond and credit markets. Our results show that euro countries were divided into two stable groups of financially more closely integrated countries in the pre-EMU period. Back then, geographic proximity and country size might have played a role. This situation has changed remarkably with the euro's introduction. EMU has led to a shake-up both in the number and composition of groups. The evidence puts a question mark behind using Germany as a benchmark in the post-EMU period. The findings suggest as well that financial integration takes place in waves. Stable periods and periods of intense transition alternate. Based on the notion of 'maximum similarity', the results suggest that there exist 'maximum similarity barriers'. It takes extraordinary events, such as EMU, to push the degree of financial integration beyond these barriers. The research encourages policymakers to move forward courageously in the post-FSAP era, and provides comfort that the substantial differences between the current and potentially new euro states can be overcome. The analysis could be extended to the new EU member countries, to the global level, and to additional indicators.
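To make the clustering step concrete, here is a minimal sketch with placeholder data (the country list, indicator matrix, number of clusters, and the choice of k-means are all illustrative assumptions; the paper's actual indicator set and clustering algorithm may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical panel: 8 euro-area countries x indicator deviations from the
# cross-sectional mean (money, government bond and credit market rates).
countries = ["AT", "BE", "DE", "ES", "FI", "FR", "IT", "NL"]
features = rng.normal(size=(8, 6))  # placeholder for real indicator data

# One clustering per sub-period gives the inter-temporal view: rerun on
# pre-EMU (1995-1998) and post-EMU (1999-2002) feature matrices and
# compare group memberships across the two periods.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
for country, label in zip(countries, kmeans.labels_):
    print(country, "-> group", label)
```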
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that, despite important reforms and substantial changes to individual elements of the German corporate governance system, the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in corporate governance undermines the stability of the corporate governance system in Germany. A breakdown of the traditional system, leading either to a control vacuum or to a fundamental change towards a capital market-based system, could therefore be in the offing.
Small and medium-sized firms typically obtain capital via bank financing. They often rely on a mixture of relationship and arm’s-length banking. This paper explores the reasons for the dominance of heterogeneous multiple banking systems. We show that the incidence of inefficient credit termination and subsequent firm liquidation is contingent on the borrower’s quality and on the relationship bank’s information precision. Generally, heterogeneous multiple banking leads to fewer inefficient credit decisions than monopoly relationship lending or homogeneous multiple banking, provided that the relationship bank’s fraction of total firm debt is not too large.
This paper makes an attempt to present the economics of credit securitisation in a non-technical way, starting from the description and analysis of a typical securitisation transaction. The paper sketches a theoretical explanation for why tranching, or non-proportional risk sharing, which is at the heart of securitisation transactions, may allow commercial banks to maximize their shareholder value. However, the analysis also makes clear that the conditions under which credit securitisation enhances welfare are fairly restrictive, and require not only an active role of the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort.
We derive the effects of credit risk transfer (CRT) markets on real sector productivity and on the volume of financial intermediation in a model where banks choose their optimal degree of CRT and monitoring. We find that CRT increases productivity in the up-market real sector but decreases it in the low-end segment. If optimal, CRT unambiguously fosters financial deepening, i.e., it reduces credit-rationing in the economy. These effects rely upon the ability of banks to commit to the optimal CRT at the funding stage. The optimal degree of CRT depends on the combination of moral hazard, general riskiness, and the cost of monitoring in non-monotonic ways.
We provide insights into the determinants of the rating level of 371 issuers which defaulted in the years 1999 to 2003, and into the leader-follower relationship between Moody's and S&P. The evidence on the rating level suggests that Moody's assigns lower ratings than S&P for all observed periods before the default event. Furthermore, we observe two-way Granger causality, which signifies information flow between the two rating agencies. Since lagged rating changes influence the magnitude of the agencies' own rating changes, it would appear that the two rating agencies apply a policy of taking a severe downgrade through several mild downgrades. Further, our analysis of rating changes shows that issuers with headquarters in the US are less sharply downgraded than non-US issuers. For rating changes by Moody's we also find that larger issuers seem to be downgraded less severely than smaller issuers.
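As an illustration of the two-way causality test (hypothetical series and lag order; the paper's data are actual rating changes), statsmodels runs the Granger test in both directions:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)

# Hypothetical monthly rating-change series; in the paper these would be
# Moody's and S&P rating changes for the same issuer pool.
n = 120
moodys = rng.normal(size=n)
noise = rng.normal(size=n)
sp = np.empty(n)
sp[0] = noise[0]
sp[1:] = 0.6 * moodys[:-1] + noise[1:]  # S&P follows Moody's with a lag

df = pd.DataFrame({"sp": sp, "moodys": moodys})

# Does the 2nd column (moodys) Granger-cause the 1st (sp)?
grangercausalitytests(df[["sp", "moodys"]], maxlag=3)
# Swap the columns to test the reverse direction (two-way causality).
grangercausalitytests(df[["moodys", "sp"]], maxlag=3)
```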
This article presents an overview of the contemporary German insurance market, its structure, players, and development trends. First, brief information about the history of the insurance industry in Germany is provided. Second, the contemporary market is analyzed in terms of its legal and economic structure, with statistics on the number of companies, insurance density and penetration, the role of insurers in the capital markets, premiums split, and main market players and their market shares. Furthermore, the three biggest insurance lines—life, health, and property and casualty—are considered in more detail, such as product range, country specifics, and insurance and investment results. A section on regulation outlines its implementation in the insurance sector, offering information on the underlying legislative basis, supervisory body, technical procedures, expected developments, and sources of more detailed information.
Analysis of Lambda and associative pion production in relativistic nucleus-nucleus collisions
(1984)
Pion and strangeness puzzles
(1996)
Data on the mean multiplicity of strange hadrons produced in minimum bias proton-proton and central nucleus-nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon-nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus-nucleus collisions than for nucleon-nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon-nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus-nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions in the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
It is shown that data on pion and strangeness production in central nucleus-nucleus collisions are consistent with the hypothesis of Quark Gluon Plasma formation between 15 A GeV/c (BNL AGS) and 160 A GeV/c (CERN SPS) collision energies. The experimental results, interpreted in the framework of a statistical approach, indicate that the effective number of degrees of freedom increases by a factor of about 3 in the course of the phase transition and that the plasma created at CERN SPS energy may have a temperature of about 280 MeV (energy density ≈ 10 GeV/fm^3). Experimental studies of central Pb+Pb collisions in the energy range 20-160 A GeV/c are urgently needed in order to localize the threshold energy and study the properties of the QCD phase transition.
The data on average hadron multiplicities in central A+A collisions measured at the CERN SPS are analysed with the ideal hadron gas model. It is shown that the full chemical equilibrium version of the model fails to describe the experimental results. The agreement of the data with the off-equilibrium version allowing for partial strangeness saturation is significantly better. The freeze-out temperature of about 180 MeV seems to be independent of the system size (from S+S to Pb+Pb) and in agreement with that extracted in e+e-, pp and p-pbar collisions. The strangeness suppression is discussed at both the hadron and the valence quark level. It is found that the hadronic strangeness saturation factor gamma_S increases from about 0.45 for pp interactions to about 0.7 for central A+A collisions, with no significant change from S+S to Pb+Pb collisions. The quark strangeness suppression factor lambda_S is found to be about 0.2 for elementary collisions and about 0.4 for heavy ion collisions, independently of collision energy and type of colliding system.
The transverse momentum and rapidity distributions of net protons and negatively charged hadrons have been measured for minimum bias proton-nucleus and deuteron-gold interactions, as well as central oxygen-gold and sulphur-nucleus collisions at 200 GeV per nucleon. The rapidity density of net protons at midrapidity in central nucleus-nucleus collisions increases both with target mass for sulphur projectiles and with the projectile mass for a gold target. The shape of the rapidity distributions of net protons forward of midrapidity for d+Au and central S+Au collisions is similar. The average rapidity loss is larger than 2 units of rapidity for reactions with the gold target. The transverse momentum spectra of net protons for all reactions can be described by a thermal distribution with temperatures between 145 ± 11 MeV (p+S interactions) and 244 ± 43 MeV (central S+Au collisions). The multiplicity of negatively charged hadrons increases with the mass of the colliding system. The shape of the transverse momentum spectra of negatively charged hadrons changes from minimum bias p+p and p+S interactions to p+Au and central nucleus-nucleus collisions. The mean transverse momentum is almost constant in the vicinity of midrapidity and shows little variation with the target and projectile masses. The average number of produced negatively charged hadrons per participant baryon increases slightly from p+p, p+A to central S+S,Ag collisions.
We demonstrate that a new type of analysis in heavy-ion collisions, based on an event-by-event analysis of the transverse momentum distribution, allows us to obtain information on secondary interactions and collective behaviour that is not available from the inclusive spectra. Using a random walk model as a simple phenomenological description of initial state scattering in collisions with heavy nuclei, we show that the event-by-event measurement allows a quantitative determination of this effect, well within the resolution achievable with the new generation of large acceptance hadron spectrometers. The preliminary data of the NA49 collaboration on transverse momentum fluctuations indicate qualitatively different behaviour than that obtained within the random walk model. The results are discussed in relation to the thermodynamic and hydrodynamic description of nuclear collisions.
A statistical model of the early stage of central nucleus-nucleus (A+A) collisions is developed. We suggest a description of the confined state with several free parameters fitted to a compilation of A+A data at the AGS. For the deconfined state a simple Bag model equation of state is assumed. The model leads to the conclusion that a Quark Gluon Plasma is created in central nucleus-nucleus collisions at the SPS. This result is in quantitative agreement with existing SPS data on pion and strangeness production and gives a natural explanation for their scaling behaviour. The localization and the properties of the transition region are discussed. It is shown that the deconfinement transition can be detected by observation of the characteristic energy dependence of pion and strangeness multiplicities, and by an increase of the event-by-event fluctuations. An attempt to understand the data on J/psi production in Pb+Pb collisions at the SPS within the same approach is presented.
Data on J/psi production in inelastic proton-proton, proton-nucleus and nucleus-nucleus interactions at 158 A GeV are analyzed and it is shown that the ratio of mean multiplicities of J/psi mesons and pions is the same for all these collisions. This observation is difficult to understand within current models of J/psi production in nuclear collisions based on the assumption of hard QCD creation of charm quarks.
This paper determines the cost of employee stock options (ESOs) to shareholders. I present a pricing method that seeks to replicate the empirics of exercise and cancellation as closely as possible. In a first step, an intensity-based pricing model of El Karoui and Martellini is adapted to the needs of ESOs. In a second step, I calibrate the model with a regression analysis of exercise rates from the empirical work of Heath, Huddart and Lang. The pricing model thus accounts for all effects captured in the regression. Separate regressions enable me to compare options for top executives with those for subordinates. I find no price differences. The model is also applied to test the precision of the fair value accounting method for ESOs, SFAS 123. Using my model as a reference, the SFAS method results in surprisingly accurate prices.
Intangible assets such as goodwill, licenses, research and development or customer relations are becoming more and more important in high-technology and service-oriented economies. Yet a comparison of the book values of listed companies with their market capitalization suggests that financial reports fail to meet the information needs of market participants regarding the estimate of the proper firm value. Moreover, with the introduction of Anglo-American accounting systems in Europe and Asia, we can observe diverging accounting practices for intangible assets, caused by different accounting standards, even in the accounts of companies sited in the same jurisdiction. To assess the relevance of intangible assets in the accounts of listed Japanese and German companies, we therefore measure certain balance sheet and profit and loss ratios relating to goodwill and self-developed software. We compare and analyze valuation rules for goodwill and software costs under German GAAP, Japanese GAAP, US GAAP and IAS to determine the possible impact of diverging rules on the comparability of the accounts. Our results show that the comparability of the accounts is impaired by different accounting practices: the recognition and valuation of goodwill and self-developed software vary significantly with the accounting regime applied. However, for the recognition of self-developed software, the average impact on asset ratios or profit is not that high, and an industry bias can only be found for the financial industry. In contrast, for goodwill accounting we find major differences, especially between German and Japanese blue chips. The introduction of the new goodwill impairment-only approach and the prohibition of the pooling method may have a major impact, especially on Japanese companies' accounts.
The hypothesis of statistical production of J/psi mesons at hadronization is formulated and checked against experimental data. It explains in a natural way the observed scaling behavior of the J/psi to pion ratio at CERN SPS energies. Using the multiplicities of J/psi and eta mesons, a hadronization temperature of T_H = 175 MeV is found, which agrees with previous estimates of the temperature parameter based on the analysis of hadron yield systematics.
The validity of a recent estimate of an upper limit on charm production in central Pb+Pb collisions at 158 AGeV is critically discussed. Within a simple model we study properties of the background subtraction procedure used to extract the charm signal from the analysis of dilepton spectra. We demonstrate that a production asymmetry between positively and negatively charged background muons, together with a large multiplicity of signal pairs, leads to biased results. The applicability of this procedure to the analysis of nucleus-nucleus data should therefore be reconsidered before final conclusions on the upper limit of charm production can be drawn.
At least in the past, banking in continental Europe has been characterised by a number of features that are quite specific to the region. They include the following: (1) banks play a strong role in their respective financial systems; (2) universal banking is prevalent; (3) banks that are not strictly profit-oriented play a significant role; and (4) there are considerable differences between national banking systems. It can be safely assumed that the future of banking in Europe will be shaped by three major external developments: deregulation and liberalisation; advances in information technology; and economic, financial and monetary integration. The overall consequences of these developments would be much too vast a topic to be addressed in one short paper. Therefore the present paper concentrates on the following question: Are the traditional peculiarities of the banking and financial systems of continental Europe likely to disappear as a consequence of the aforementioned external developments, or are they more likely to remain in spite of these developments? The external developments affect the features specific to banking in continental Europe only indirectly and only via the strategies selected and pursued by the various players in the financial systems, notably the banks themselves, and in ways which strongly depend on the structure of the banking industry and the level of competition between banks and other providers of financial services. The paper develops an informal model of the relationships between (1) external developments, (2) bank strategies and the structure of the banking industry, and (3) the peculiarities of banking in Europe, and derives a hypothesis predicting which of the traditional peculiarities are likely to disappear and which are likely to remain. It argues that, overall, the peculiarities are not likely to disappear in the short or the medium term. First version June 2000. This version March 2001.
The use of catastrophe bonds (cat bonds) implies the problem of the so called basis risk, resulting from the fact that, in contrast to traditional reinsurance, this kind of coverage cannot be a perfect hedge for the primary’s insured portfolio. On the other hand cat bonds offer some very attractive economic features: Besides their usefulness as a solution to the problems of moral hazard and default risk, an important advantage of cat bonds can be seen in the presumably lower transaction costs compared to (re)insurance products. Insurance coverage usually incurs costs of acquisition, monitoring and loss adjustment, all of which can be reduced by making use of the financial markets. Additionally, cat bonds are only weakly correlated with market risk, implying that in perfect financial markets these securities could be traded at a price including just small risk premiums. Although these aspects have been identified in economic literature, to our knowledge there has been no publication so far that formally addresses the trade-off between basis risk and transaction cost. In this paper, therefore, we introduce a simple model that enables us to analyze cat bonds and reinsurance as substitutional risk management tools in a standard insurance demand theory environment. We concentrate on the problem of basis risk versus transaction cost, and show that the availability of cat bonds affects the structure of optimal reinsurance contract design in an interesting way, as it leads to an increase of indemnity for small losses and a decrease of indemnity for large losses.
In the early 1990s, a consensus emerged among the leading experts in the field of small and micro business finance. It is based on three elements: The focus of projects should be on improving the entire financial sector of a given developing country; a commercial approach should be adopted, which implies covering costs and keeping costs as low as possible; and institutions should be created which are both able and willing to provide good financial services to the target group on a lasting basis. The starting point for this paper, which wholeheartedly endorses these three elements, is the proposition that putting these general principles into practice is much more difficult than some of their proponents seem to believe - and also more difficult than some of them have led donors to believe. The paper discusses the central issues of small and micro business financing in three areas: credit in general and the cost-effectiveness of lending methodologies in particular (Section II); savings in general and the role of deposit-taking in the growth of a target group-oriented financial institution in particular (Section III); and the process of creating viable target group-oriented financial institutions in developing countries (Section IV). We argue that donor institutions must be willing, and prepared, to play a role here which differs in important respects from their conventional role if they really wish to support sustainable financial sector development.
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that financing patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by comparing and analyzing various ways of measuring financial structure and financing patterns and by demonstrating that the surprising empirical results found by studies that relied on net flows are due to a hidden assumption. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. The empirical results, which use an estimation technique for determining gross flows of funds in those cases in which empirical data are not available, are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way, and moreover in a way which is largely in line with the general view of the differences between the financial systems of the countries covered in the present paper.
A financial system can only perform its function of channelling funds from savers to investors if it offers sufficient assurance to the providers of the funds that they will reap the rewards which have been promised to them. To the extent that this assurance is not provided by contracts alone, potential financiers will want to monitor and influence managerial decisions. This is why corporate governance is an essential part of any financial system. It is almost obvious that providers of equity have a genuine interest in the functioning of corporate governance. However, corporate governance encompasses more than investor protection. Similar considerations also apply to other stakeholders who invest their resources in a firm and whose expectations of later receiving an appropriate return on their investment also depend on decisions at the level of the individual firm which would be extremely difficult to anticipate and prescribe in a set of complete contingent contracts. Lenders, especially long-term lenders, are one such group of stakeholders who may also want to play a role in corporate governance; employees, especially those with high skill levels and firm-specific knowledge, are another. The German corporate governance system is different from that of the Anglo-Saxon countries because it foresees the possibility, and even the necessity, to integrate lenders and employees in the governance of large corporations. The German corporate governance system is generally regarded as the standard example of an insider-controlled and stakeholder-oriented system. Moreover, only a few years ago it was a consistent system in the sense of being composed of complementary elements which fit together well. The first objective of this paper is to show why and in which respect these characterisations were once appropriate. However, the past decade has seen a wave of developments in the German corporate governance system, which make it worthwhile and indeed necessary to investigate whether German corporate governance has recently changed in a fundamental way. More specifically one can ask which elements and features of German corporate governance have in fact changed, why they have changed and whether those changes which did occur constitute a structural change which would have converted the old insider-controlled system into an outsider-controlled and shareholder-oriented system and/or would have deprived it of its former consistency. It is the second purpose of this paper to answer these questions.
This paper starts out by pointing out the challenges and weaknesses which the German banking system faces according to the prevailing views among national and international observers. These challenges include a general problem of profitability and, possibly as its main reason, the strong role of public banks. These concerns raise the questions whether the facts support this assessment of a general profitability problem and whether there are reasons to expect a fundamental or structural transformation of the German banking system. The paper contains four sections. The first one presents the evidence concerning the profitability problem in a comparative, international perspective. The second section presents information about the so-called three-pillar system of German banking. What might be surprising in this context is that the group of public banks is not only the largest segment of the German banking system, but that the primary savings banks are also its financially most successful part. The German banking system is highly fragmented. This fact motivates a discussion of past, present and possible future consolidations in the banking system in the third section. The authors provide evidence to the effect that within-group consolidation has been going on at a rapid pace in the public and the cooperative banking groups in recent years and that this development has not yet come to an end, while within-group consolidation among the large private banks, consolidation across group boundaries at a national level, and cross-border or international consolidation have so far only happened on a limited scale and do not appear to be gaining momentum for the near future. In the last section, the authors develop their explanation for the fact that large-scale and cross-border consolidation has so far not materialized to any great extent. Drawing on the concept of complementarity, they argue that it would be difficult to expect these kinds of mergers and acquisitions to happen within a financial system which is itself surprisingly stable, or, as one can also call it, resistant to change.
In a series of recent papers, Mark Roe and Lucian Bebchuk have developed further the concept of path dependence, combined it with concepts of evolution and used it to challenge the widespread view that the corporate governance systems of the major advanced economies are likely to converge towards the economically best system at a rapid pace. The present paper shares this skepticism, but adds several aspects which strengthen the point made by Roe and Bebchuk. The present paper argues that it is important for the topic under discussion to distinguish clearly between two arguments which can explain path dependence. One of them is based on the role of adjustment costs, and the other one uses concepts borrowed from evolutionary biology. Making this distinction is important because the two concepts of path dependence have different implications for the issue of rapid convergence to the best system. In addition, we introduce a formal concept of complementarity and demonstrate that national corporate governance systems are usefully regarded as - possibly consistent - systems of complementary elements. Complementarity is a reason for path dependence which supports the socio-biological argument. The dynamic properties of systems composed of complementary elements are such that a rapid convergence towards a universally best corporate governance system is not likely to happen. We then proceed by showing, for the case of corporate governance systems shaped by complementarity, that there is even the possibility of a convergence towards a common system which is economically inferior. And in the specific case of European integration, "inefficient convergence" of corporate governance systems is a possible future course of events. First version December 1998, this version March 2000.
Major differences between national financial systems might make a common monetary policy difficult. As within Europe, Germany and the United Kingdom differ most with respect to their financial systems, the present paper addresses its topic under the assumption that the United Kingdom is already a part of EMU. Employing a comprehensive concept of a financial system, the author shows that there are indeed profound differences between the national financial systems of Germany and the United Kingdom. But he argues that these differences are not likely to create great problems for a common monetary policy. In the context of the present paper, one important difference between the two financial systems refers to the structure of the respective financial sector and, as a consequence, to the strength with which a given monetary policy impulse set by the central bank is passed on to the financial sector. The other important difference refers to the typical relationship between the banks and the business sector in each country which determines to what extent the financial sectors and especially the banks pass on pressure exerted on them by a monetary policy authority to their clients in their national business sector. In Germany, the central bank has a stronger influence on the financial sector than in England, while, for systemic reasons, German banks tend to soften monetary policy pressures on their customers more than British banks do. As far as the transmission of a restrictive monetary policy of the ECB to the real economy is concerned, these two differences tend to offset each other. This is good news for the advocates of a monetary union as it eases the task of the ECB when it comes to determining the strength of its monetary policy measures.
Paper presented at the Conference on Workable Corporate Governance: Cross-Border Perspectives, held in Paris, March 17-19, 1997. To appear in: A. Pezard/J.-M. Thiveaud: Workable Corporate Governance: Cross-Border Perspectives, Montchrestien, Paris 1997. The paper discusses the role of various constituencies in the corporate governance of a corporation from the perspective of incomplete contracts. A strict shareholder value orientation, in the sense of a rule that at any time firm decisions should be made strictly in the interest of the present shareholders, would make it difficult for the firm to establish long-term relationships, as potential partners would have to fear that, at a later stage of the co-operation, the shareholders or a management acting only on their behalf could exploit them because of the inevitable incompleteness of long-term contracts. One way of mitigating these problems is to put in place a corporate governance system which gives some active role to the other stakeholders or constituencies, or which makes their interests a well-defined element of the objective function of the firm. A commitment not to follow a policy of strict shareholder value maximization ex post can be efficient ex ante. Such a system would clearly differ from what is advocated by proponents of a "stakeholder approach", as it would limit the rights of the other constituencies to those which would have been agreed upon in a constitutional contract concluded between them and the founder of the firm at the time when long-term contracts are first established.
Asset-backed securitization (ABS) has become a viable and increasingly attractive risk management and refinancing method, either as a standalone form of structured finance or as securitized debt in Collateralized Debt Obligations (CDO). However, the absence of industry standardization has prevented rising investment demand from translating into market liquidity comparable to traditional fixed income instruments, in all but a few selected market segments. In particular, low financial transparency and complex security designs inhibit profound analysis of secondary market pricing and how it relates to established forms of external finance. This paper represents the first attempt to measure the intertemporal, bivariate causal relationship between matched price series of equity and ABS issued by the same entity. In a two-dimensional linear system of simultaneous equations we investigate the short-term dynamics and long-term consistency of daily secondary market data from the U.K. Sterling ABS/MBS market and exchange traded shares between 1998 and 2004, with and without the presence of cointegration. Our causality framework delivers compelling empirical support for a strong co-movement between matched price series of ABS-equity pairs, where ABS markets seem to contribute more to price discovery over the long run. Controlling for cointegration, risk-free interest and average market risk of corporate debt hardly alters our results. However, once we condition the magnitude and direction of price discovery on various security characteristics, such as the ABS asset class, we find that ABS-equity pairs with large-scale CMBS/RMBS and credit card/student loan ABS reveal stronger lead-lag relationships and joint price dynamics than whole business ABS. JEL Classification: G10, G12, G24
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to force the hot and dense nuclear matter into a deconfined phase.
Asset-backed securitisation (ABS) is an asset funding technique that involves the issuance of structured claims on the cash flow performance of a designated pool of underlying receivables. Efficient risk management and asset allocation in this growing segment of fixed income markets requires both investors and issuers to thoroughly understand the longitudinal properties of spread prices. We present a multi-factor GARCH process in order to model the heteroskedasticity of secondary market spreads for valuation and forecasting purposes. In particular, accounting for the variance of errors is instrumental in deriving more accurate estimators of time-varying forecast confidence intervals. On the basis of CDO, MBS and Pfandbrief transactions, the most important asset classes of off-balance sheet and on-balance sheet securitisation in Europe, we find that expected spread changes for these asset classes tend to be level stationary, with model estimates indicating asymmetric mean reversion. Furthermore, spread volatility (conditional variance) is found to follow an asymmetric stochastic process contingent on the value of past residuals. This ABS spread behaviour implies negative investor sentiment during cyclical downturns, which is likely to escape stationary approximation the longer this market situation lasts.
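As a simplified illustration of the modelling idea (a univariate GARCH(1,1) on simulated data via the `arch` package; the paper's multi-factor, asymmetric specification on actual spread series is richer):

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)

# Simulated daily spread changes (basis points) as a stand-in for
# secondary-market CDO/MBS/Pfandbrief spread data.
spread_changes = rng.standard_t(df=5, size=1000) * 2.0

# GARCH(1,1) with constant mean; a multi-factor version would add
# regressors and asymmetry terms to the mean and variance equations.
model = arch_model(spread_changes, mean="Constant", vol="GARCH", p=1, q=1)
res = model.fit(disp="off")
print(res.summary())

# Conditional-variance forecasts feed time-varying confidence intervals.
forecast = res.forecast(horizon=5)
print(forecast.variance.iloc[-1])
```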
Efficient systems for the securities transaction industry: a framework for the European Union
(2003)
This paper provides a framework for the securities transaction industry in the EU to understand the functions performed, the institutions involved and the parameters concerned that shape market and ownership structure. Of particular interest are microeconomic incentives of the industry players that can be in contradiction to social welfare. We evaluate the three functions and the strategic parameters - the boundary decision, the communication standard employed and the governance implemented - along the lines of three efficiency concepts. By structuring the main factors that influence these concepts and by describing the underlying trade-offs among them, we provide insight into a highly complex industry. Applying our framework, the paper describes and analyzes three consistent systems for the securities transaction industry. We point out that one of the systems, denoted as 'contestable monopolies', demonstrates a superior overall efficiency while it might be the most sensitive in terms of configuration accuracy and thus difficult to achieve and sustain.
Despite a lot of restructuring and many innovations in recent years, the securities transaction industry in the European Union is still a highly inefficient and inconsistently configured system for cross-border transactions. This paper analyzes the functions performed, the institutions involved and the parameters concerned that shape market and ownership structure in the industry. Of particular interest are microeconomic incentives of the main players that can be in contradiction to social welfare. We develop a framework and analyze three consistent systems for the securities transaction industry in the EU that offer superior efficiency to the current, inefficient arrangement. Some policy advice is given on selecting the 'best' system for the Single European Financial Market.
In recent years stock exchanges have been increasingly diversifying their operations into related business areas such as derivatives trading, post-trading services and software sales. This trend can be observed most notably among profit-oriented trading venues. While the pursuit of diversification is likely to be driven by the attractiveness of these investment opportunities, it is still an open question whether certain integration activities are also efficient, both from a social welfare and from the exchanges' perspective. Academic contributions so far have analyzed different business models primarily from the social welfare perspective, whereas there is only little literature considering their impact on the exchange itself. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the factor productivity of exchanges with different business models. Our findings suggest three conclusions: (1) Integration activity comes at the cost of increased operational complexity, which in some cases outweighs the potential synergies between related activities and therefore leads to technical inefficiencies and lower productivity growth. (2) We find no evidence that vertical integration is more efficient and productive than other business models. This finding could contribute to the ongoing discussion about the merits of vertical integration from a social welfare perspective. (3) A strong in-house IT competence seems to be beneficial in overcoming the operational complexity associated with integration.
Academic contributions on the demutualization of stock exchanges have so far been predominantly devoted to social welfare issues, whereas there is scarce empirical literature on the impact of a governance change on the exchange itself. While there is consensus that the case for demutualization is predominantly driven by the need to improve the exchange's competitiveness in a changing business environment, it remains unclear how different governance regimes actually affect stock exchange performance. Some authors propose that a public listing is the best suited governance arrangement to improve an exchange's competitiveness. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the efficiency and productivity of exchanges with differing governance arrangements. For this purpose we calculate in a first step individual efficiency and productivity values via DEA. In a second step we regress the derived values against variables that - amongst others - map the institutional arrangement of the exchanges, in order to determine efficiency and productivity differences between (1) mutuals, (2) demutualized but customer-owned exchanges, and (3) publicly listed and thus at least partly outsider-owned exchanges. We find evidence that demutualized exchanges exhibit higher technical efficiency than mutuals. However, they perform relatively poorly as far as productivity growth is concerned. Furthermore, we find no evidence that publicly listed exchanges possess higher efficiency and productivity values than demutualized exchanges with a customer-dominated structure. We conclude that the merits of outside ownership possibly lie in other areas, such as solving conflicts of interest between too heterogeneous members.
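For readers unfamiliar with the first-stage DEA computation, here is a minimal input-oriented, constant-returns (CCR) sketch using hypothetical exchange data (the paper's inputs, outputs and DEA variant may differ):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit o.

    X : (m, n) input matrix, Y : (s, n) output matrix, n DMUs (exchanges).
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]           # decision vars: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, [o]], X]           # X lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]   # -Y lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]                        # efficiency score in (0, 1]

# Hypothetical data: 5 exchanges, 2 inputs (staff, IT cost), 1 output (trades)
X = np.array([[4.0, 6.0, 8.0, 8.0, 5.0],
              [3.0, 2.0, 5.0, 4.0, 6.0]])
Y = np.array([[60.0, 70.0, 80.0, 90.0, 50.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(5)])
```

The second-stage regression would then use these scores as the dependent variable, with governance dummies among the regressors.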
This paper studies a setting in which a risk averse agent must be motivated to work on two tasks: he (1) evaluates a new project and, if it is adopted, (2) manages it. While a performance measure which is informative about an agent's action is typically valuable because it can be used to improve the risk sharing of the contract, this is not necessarily the case in this two-task setting. I provide a sufficient condition under which a performance measure that is informative about the second task is worthless for contracting, despite the agent being risk averse. This shows that information content is a necessary but not a sufficient condition for a performance measure to be valuable.
It is widely believed that the ideal board in corporations is composed almost entirely of independent (outside) directors. In contrast, this paper shows that some lack of board independence can be in the interest of shareholders. This follows because a lack of board independence serves as a substitute for commitment. Boards that are dependent on the incumbent CEO adopt a less aggressive CEO replacement rule than independent boards. While this behavior is inefficient ex post, it has positive ex ante incentive effects. The model suggests that independent boards (dependent boards) are most valuable to shareholders if the problem of providing appropriate incentives to the CEO is weak (severe).
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
Transverse momentum event-by-event fluctuations are studied within the string-hadronic model of high energy nuclear collisions, LUCIAE. Data on non-statistical pT fluctuations in p+p interactions are reproduced. Fluctuations of similar magnitude are predicted for nucleus-nucleus collisions, in contradiction to the preliminary NA49 results. The introduction of a string clustering mechanism (Firecracker Model) leads to a further, significant increase of pT fluctuations for nucleus-nucleus collisions. Secondary hadronic interactions, as implemented in LUCIAE, cause only a small reduction of pT fluctuations.
We argue that the measurement of open charm gives a unique opportunity to test the validity of pQCD-based and statistical models of nucleus-nucleus collisions at high energies. We show that various approaches used to estimate D-meson multiplicity in central Pb+Pb collisions at 158 A GeV give predictions which differ by more than a factor of 100. Finally we demonstrate that decisive experimental results concerning the open charm yield in A+A collisions can be obtained using data of the NA49 experiment at the CERN SPS.
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under "the opportunistic approach to disinflation" a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. JEL Classification: E31, E52, E58, E61. July 2005.
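To make the contrast concrete, here is a stylized sketch (illustrative coefficients and functional form, not the paper's estimated model) of a linear Taylor-type rule versus an opportunistic rule that mutes the inflation response inside a band around target:

```python
import numpy as np

# Stylized policy rules; all coefficients are illustrative.
R_STAR, PI_STAR, BAND = 2.0, 2.0, 1.0
PHI_PI, PHI_Y = 0.5, 0.5

def linear_rule(pi, ygap):
    """Conventional rule: linear in the inflation and output gaps."""
    return R_STAR + pi + PHI_PI * (pi - PI_STAR) + PHI_Y * ygap

def opportunistic_rule(pi, ygap):
    """Respond to inflation only outside the band around target;
    inside the band, concentrate on output stabilization."""
    gap = pi - PI_STAR
    inflation_response = PHI_PI * np.sign(gap) * max(abs(gap) - BAND, 0.0)
    return R_STAR + pi + inflation_response + PHI_Y * ygap

for pi in (1.5, 2.5, 4.0):
    print(pi, linear_rule(pi, 0.0), opportunistic_rule(pi, 0.0))
```

At inflation of 2.5 percent the opportunistic rule leaves the rate response to the inflation gap at zero, while at 4.0 percent both rules tighten.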
This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided. Klassifikation: C6, D9, E2 . July 28, 2005.
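The abstract does not spell out the method here, but one well-known way to avoid rootfinding in such problems is to invert the Euler equation analytically on an endogenous grid. The sketch below illustrates that idea for a CRRA consumption-savings problem; the utility specification, grids, and parameter values are assumptions for illustration, not the paper's software.

```python
import numpy as np

# Minimal sketch of a rootfinding-free backward step (the "endogenous
# grid" idea) for consumption-savings with CRRA utility. All numbers
# are illustrative placeholders.

rho, beta, R = 2.0, 0.96, 1.03          # risk aversion, discount, gross return
a_grid = np.linspace(0.01, 20, 100)     # end-of-period assets
y_shocks = np.array([0.7, 1.0, 1.3])    # income states
probs = np.array([0.25, 0.5, 0.25])

u_prime = lambda c: c ** (-rho)
u_prime_inv = lambda m: m ** (-1.0 / rho)

def egm_step(c_next, m_next):
    """One backward induction step; no numerical root finder anywhere."""
    # Next-period market resources for each (asset, income) pair
    m_prime = R * a_grid[:, None] + y_shocks[None, :]
    # Interpolate next period's consumption policy onto those resources
    c_prime = np.interp(m_prime, m_next, c_next)
    # Invert the Euler equation analytically instead of solving it
    expected = (u_prime(c_prime) * probs[None, :]).sum(axis=1)
    c_today = u_prime_inv(beta * R * expected)
    m_today = a_grid + c_today            # endogenous grid of resources
    return c_today, m_today

# Start from "consume everything" in the final period and iterate back
c, m = np.linspace(0.01, 25, 100), np.linspace(0.01, 25, 100)
for _ in range(50):
    c, m = egm_step(c, m)
```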
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate, in a spatially resolved way, global-scale groundwater recharge. In this report, improved model estimates of diffuse groundwater recharge at the global scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model) which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions as compared to the model results before tuning, and the new estimates are more similar to country-level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as inputs, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The different terms in the balance are compared to results of other global models and uncertainties are described. Total global surface N surplus increased from 161 Tg N yr-1 in 1961 to 230 Tg N yr-1 in 1995. Using assumptions for the scenario A1B of the Special Report on Emission Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC) as quantified by the IMAGE model, total global surface N surplus is estimated to be 229 Tg N yr-1 in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas on the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
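Written out in the abstract's own notation, the surface balance above is simply inputs minus outputs:

\[
N_{\text{surplus}} \;=\; \underbrace{N_{\text{anm}} + N_{\text{fert}} + N_{\text{dep}} + N_{\text{fix}}}_{\text{inputs}} \;-\; \underbrace{\left(N_{\text{exp}} + N_{\text{vol}}\right)}_{\text{outputs}},
\]

so the reported global totals (e.g., 161 Tg N yr-1 in 1961) are the spatial sums of this cell-level quantity.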
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4,428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation. Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
This paper has shown that some of the principal arguments against shareholder voice are unfounded. It has shown that shareholders do own corporations, and that the nature of their property interest is structured to meet the needs of the relationships found in stock corporations. The paper has explained that fiduciary and other duties restrain the actions of shareholders just as they do those of management, and that critics cannot reasonably expect court-imposed fiduciary duties to extend beyond the actual powers of shareholders. It has also illustrated how, although corporate statutes give shareholders complete power to structure governance as they will, the default governance structures of U.S. corporations leave shareholders almost powerless to initiate any sort of action, and the interaction between state and federal law makes it almost impossible for shareholders to elect directors of their choice. Lastly, the paper has recalled how the percentage of U.S. corporate equities owned by institutional investors has increased dramatically in recent decades, and it has outlined some of the major developments in shareholder rights that followed this increase. I hope that this paper deflated some of the strong rhetoric used against shareholder voice by contrasting rhetoric to law, and that it illustrated why the picture of weak owners painted in the early 20th century should be updated to new circumstances, which will help avoid projecting an old description as a current normative model that perpetuates the inevitability of "managerialism", perhaps better known as "dirigisme".
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective way to perform strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences; the main measure is the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and into strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
Syndicated loans and the number of lending relationships have attracted growing attention. All other terms being equal (e.g. seniority), syndicated loans provide larger payments (in basis points) to lenders funding larger amounts. The paper explores empirically the motivation for such price discrimination on sovereign syndicated loans in the period 1990-1997. First evidence suggests larger premia are associated with renegotiation prospects. This is consistent with the hypothesis that price discrimination is aimed at reducing the number of lenders and thus the expected renegotiation costs. However, larger payment discrimination is also associated with more targeted market segments and with larger loans, thus minimising borrowing costs and/or attempting to widen the circle of lending relationships in order to successfully raise the requested amount. JEL Classification: F34, G21, G33. This version: June 2002. Later version (October 2003) with the title "Why Borrowers Pay Premiums to Larger Lenders: Empirical Evidence from Sovereign Syndicated Loans": http://publikationen.ub.uni-frankfurt.de/volltexte/2005/992/
We use consumer price data for 205 cities/regions in 21 countries to study deviations from the law of one price before, during and after the major currency crises of the 1990s. We combine data from industrialised nations in North America (United States, Canada, Mexico), Europe (Germany, Italy, Spain and Portugal) and the Asia-Pacific region (Japan, Korea, New Zealand, Australia) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). We confirm previous results that both distance and border explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects by significantly increasing these border effects, and by raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent since relative price volatility across emerging markets today is still significantly larger than a decade ago. JEL classification: F40, F41
We use consumer price data for 81 European cities (in Germany, Austria, Switzerland, Italy, Spain and Portugal) to study deviations from the law of one price before and during the European Economic and Monetary Union (EMU). Analysing both aggregate and disaggregate CPI data for 7 categories of goods, we find that the distance between cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of the relative price is much higher for two cities located in different countries than for two equidistant cities in the same country. Under EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and border still matter for intra-European relative price volatility. JEL classification: F40, F41
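A canonical way to formalize the distance-and-border decompositions used in the two studies above (an illustrative specification in the spirit of this literature, not necessarily the authors' exact regression) is

\[
V\!\left(q_{ij}\right) \;=\; \alpha \;+\; \beta \,\ln(\mathrm{dist}_{ij}) \;+\; \gamma\, \mathrm{Border}_{ij} \;+\; \varepsilon_{ij},
\]

where \(q_{ij}\) is the log relative price of a good between locations \(i\) and \(j\), \(V(\cdot)\) is a volatility measure such as the standard deviation of its changes, and \(\mathrm{Border}_{ij}\) is an indicator equal to one when the two locations lie in different countries; the border effects discussed above correspond to estimates of \(\gamma\).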
This paper analyzes a comprehensive data set of 108 non venture-backed, 58 venture-backed and 33 bridge financed companies going public at Germany s Neuer Markt between March 1997 and March 2000. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary or rather competing role of venture capitalists and underwriters in certifying the quality of a company when going public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to a reduced ex-ante uncertainty. This study provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non VCbacked IPOs.
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Additionally taking into account potential downward nominal rigidities in the price-setting behavior of firms appears not to impose significant further distortions on the economy. JEL Klassifikation: C63, E31, E52.
Ignoring the existence of the zero lower bound on nominal interest rates, one considerably understates the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with a lower bound, calibrated to the U.S. economy, suggests that low values for the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and the deflation are much larger than in the case with policy commitment and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL Klassifikation: E31, E52
Using data from the Consumer Expenditure Survey we first document that the recent increase in income inequality in the US has not been accompanied by a corresponding rise in consumption inequality. Much of this divergence is due to different trends in within-group inequality, which has increased significantly for income but little for consumption. We then develop a simple framework that allows us to analytically characterize how within-group income inequality affects consumption inequality in a world in which agents can trade a full set of contingent consumption claims, subject to endogenous constraints emanating from the limited enforcement of intertemporal contracts (as in Kehoe and Levine, 1993). Finally, we quantitatively evaluate, in the context of a calibrated general equilibrium production economy, whether this set-up, or alternatively a standard incomplete markets model (as in Aiyagari, 1994), can account for the documented stylized consumption inequality facts from the US data. JEL Klassifikation: E21, D91, D63, D31, G22
In this paper, we examine the cost of insurance against model uncertainty for the Euro area considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range in terms of a Minimax policy comes at moderate costs in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity-aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL Klassifikation: E52, E58, E61
This paper studies an overlapping generations model with stochastic production and incomplete markets to assess whether the introduction of an unfunded social security system leads to a Pareto improvement. When returns to capital and wages are imperfectly correlated, a system that endows retired households with claims to labor income enhances the sharing of aggregate risk between generations. Our quantitative analysis shows that, abstracting from the capital crowding-out effect, the introduction of social security represents a Pareto-improving reform, even when the economy is dynamically efficient. However, the severity of the crowding-out effect in general equilibrium tends to overturn these gains. Klassifikation: E62, H55, H31, D91, D58. April 2005.
While much of classical statistical analysis is based on Gaussian distributional assumptions, statistical modeling with the Laplace distribution has gained importance in many applied fields. This phenomenon is rooted in the fact that, like the Gaussian, the Laplace distribution has many attractive properties. This paper investigates two methods of combining them and their use in modeling and predicting financial risk. Based on 25 daily stock return series, the empirical results indicate that the new models offer a plausible description of the data. They are also shown to be competitive with, or superior to, use of the hyperbolic distribution, which has gained some popularity in asset-return modeling and, in fact, also nests the Gaussian and Laplace. Klassifikation: C16, C50 . March 2005.
This paper computes the optimal progressivity of the income tax code in a dynamic general equilibrium model with household heterogeneity in which uninsurable labor productivity risk gives rise to a nontrivial income and wealth distribution. A progressive tax system serves as a partial substitute for missing insurance markets and promotes a more equal distribution of economic welfare. These beneficial effects of a progressive tax system have to be traded off against the efficiency loss arising from distorting endogenous labor supply and capital accumulation decisions. Using a utilitarian steady state social welfare criterion we find that the optimal US income tax is well approximated by a flat tax rate of 17.2% and a fixed deduction of about $9,400. The steady state welfare gains from a fundamental tax reform towards this tax system are equivalent to 1.7% higher consumption in each state of the world. An explicit computation of the transition path induced by a reform of the current towards the optimal tax system indicates that a majority of the population currently alive (roughly 62%) would experience welfare gains, suggesting that such fundamental income tax reform is not only desirable, but may also be politically feasible. JEL Klassifikation: E62, H21, H24.
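In formula form, the approximately optimal schedule reported above is a simple two-parameter function. A minimal sketch (the rate and deduction are the paper's reported numbers; the function name is ours):

```python
def flat_tax(income, rate=0.172, deduction=9_400):
    """Tax liability under a flat rate applied above a fixed deduction,
    matching the 17.2% rate and ~$9,400 deduction reported above."""
    return rate * max(0.0, income - deduction)

print(flat_tax(50_000))   # 0.172 * (50000 - 9400) = 6983.2
```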
Financial markets embed expectations of central bank policy into asset prices. This paper compares two approaches that extract a probability density of market beliefs. The first is a simulated-moments estimator for option volatilities described in Mizrach (2002); the second is a new approach developed by Haas, Mittnik and Paolella (2004a) for fat-tailed conditionally heteroskedastic time series. In an application to the 1992-93 European Exchange Rate Mechanism crises, we find that both the options and the underlying exchange rates provide useful information for policy makers. JEL Klassifikation: G12, G14, F31.
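As context for option-implied density extraction in general (and explicitly not as either of the two estimators compared in the paper above), the classic Breeden-Litzenberger relation recovers a risk-neutral density from the curvature of call prices in the strike:

```python
import numpy as np

# Context only: the Breeden-Litzenberger relation, a standard (and much
# simpler) way of backing a risk-neutral density out of option prices.
# It is not the simulated-moments or fat-tailed GARCH estimator above.

def risk_neutral_density(strikes, call_prices, r, T):
    """f(K) = exp(rT) * d2C/dK2, approximated by central differences
    on an evenly spaced strike grid."""
    K = np.asarray(strikes, dtype=float)
    C = np.asarray(call_prices, dtype=float)
    dK = K[1] - K[0]
    d2C = (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dK ** 2
    return K[1:-1], np.exp(r * T) * d2C
```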
Volatility forecasting
(2005)
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Klassifikation: C10, C53, G1.
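As a minimal illustration of the GARCH branch of this literature surveyed above, the following sketch produces multi-step variance forecasts from a GARCH(1,1); the parameter values are placeholders, not estimates from the chapter.

```python
# Minimal GARCH(1,1) forecasting sketch with illustrative parameters.

def garch_forecast(omega, alpha, beta, last_var, last_ret2, horizon):
    """h-step-ahead conditional variance forecasts from a GARCH(1,1):
    sigma2_{t+1} = omega + alpha * r_t^2 + beta * sigma2_t, and for
    h > 1 the forecast mean-reverts at rate (alpha + beta) toward the
    unconditional variance omega / (1 - alpha - beta)."""
    persistence = alpha + beta
    uncond = omega / (1.0 - persistence)
    s2 = omega + alpha * last_ret2 + beta * last_var
    out = [s2]
    for _ in range(horizon - 1):
        s2 = uncond + persistence * (s2 - uncond)
        out.append(s2)
    return out

print(garch_forecast(0.05, 0.08, 0.90, 1.2, 2.0, horizon=5))
```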
This paper analyzes dynamic equilibrium risk sharing contracts between profit-maximizing intermediaries and a large pool of ex-ante identical agents that face idiosyncratic income uncertainty that makes them heterogeneous ex-post. In any given period, after having observed her income, the agent can walk away from the contract, while the intermediary cannot, i.e. there is one-sided commitment. We consider the extreme scenario that the agents face no costs to walking away, and can sign up with any competing intermediary without any reputational losses. We demonstrate that not only autarky, but also partial and full insurance can obtain, depending on the relative patience of agents and financial intermediaries. Insurance can be provided because in an equilibrium contract an up-front payment effectively locks in the agent with an intermediary. We then show that our contract economy is equivalent to a consumption-savings economy with one-period Arrow securities and a short-sale constraint, similar to Bulow and Rogoff (1989). From this equivalence and our characterization of dynamic contracts it immediately follows that without costs of switching financial intermediaries, debt contracts are not sustainable, even though a risk allocation superior to autarky can be achieved. JEL Klassifikation: G22, E21, D11, D91.
Default risk sharing between banks and markets : the contribution of collateralized debt obligations
(2005)
This paper contributes to the economics of financial institutions' risk management by exploring how loan securitization affects their default risk, their systematic risk, and their stock prices. In a typical CDO transaction a bank retains through a first loss piece a very high proportion of the expected default losses, and transfers only the extreme losses to other market participants. The size of the first loss piece is largely driven by the average default probability of the securitized assets. If the bank sells loans in a true sale transaction, it may use the proceeds to expand its loan business, thereby incurring more systematic risk. We find an increase of the banks' betas, but no significant stock price effect around the announcement of a CDO issue. Our results suggest a role for supervisory requirements in stabilizing the financial system, related to transparency of tranche allocation, and to regulatory treatment of senior tranches. JEL Klassifikation: D82, G21, D74.
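A small Monte Carlo sketch can illustrate the tranching logic described above: in a one-factor Gaussian-copula pool (a textbook device, not the paper's empirical model, with all parameters chosen purely for exposition), even a thin first loss piece absorbs almost all expected default losses.

```python
import numpy as np
from scipy.stats import norm

# Illustration of first-loss retention in a one-factor Gaussian-copula
# pool. All parameters are placeholders chosen for exposition.

rng = np.random.default_rng(0)
n_loans, pd_, rho, attach = 100, 0.02, 0.2, 0.05   # attach = FLP size (5%)
n_sims = 100_000

z = rng.standard_normal((n_sims, 1))                # common factor
eps = rng.standard_normal((n_sims, n_loans))        # idiosyncratic risk
defaults = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < norm.ppf(pd_)
loss = defaults.mean(axis=1)                        # pool loss, LGD = 100%

flp_loss = np.minimum(loss, attach)                 # hits the first loss piece
print("share of expected loss retained:",
      round(flp_loss.mean() / loss.mean(), 3))
```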
We selectively survey, unify and extend the literature on realized volatility of financial asset returns. Rather than focusing exclusively on characterizing the properties of realized volatility, we progress by examining economically interesting functions of realized volatility, namely realized betas for equity portfolios, relating them both to their underlying realized variance and covariance parts and to underlying macroeconomic fundamentals.
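In its simplest form, the realized beta examined above is the ratio of a realized covariance to a realized variance computed from intraday returns. A minimal sketch consistent with that construction:

```python
import numpy as np

# Sketch: realized beta for one period, built as the sum of intraday
# return cross-products over the sum of squared market returns.

def realized_beta(asset_rets, market_rets):
    """Realized covariance with the market divided by realized market
    variance, from intraday return vectors for a single period."""
    a = np.asarray(asset_rets, dtype=float)
    m = np.asarray(market_rets, dtype=float)
    return (a * m).sum() / (m ** 2).sum()
```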
From a macroeconomic perspective, the short-term interest rate is a policy instrument under the direct control of the central bank. From a finance perspective, long rates are risk-adjusted averages of expected future short rates. Thus, as illustrated by much recent research, a joint macro-finance modeling strategy will provide the most comprehensive understanding of the term structure of interest rates. We discuss various questions that arise in this research, and we also present a new examination of the relationship between two prominent dynamic, latent factor models in this literature: the Nelson-Siegel and affine no-arbitrage term structure models. JEL Klassifikation: G1, E4, E5.
What do academics have to offer market risk management practitioners in financial institutions? Current industry practice largely follows one of two extremely restrictive approaches: historical simulation or RiskMetrics. In contrast, we favor flexible methods based on recent developments in financial econometrics, which are likely to produce more accurate assessments of market risk. Clearly, the demands of real-world risk management in financial institutions - in particular, real-time risk tracking in very high-dimensional situations - impose strict limits on model complexity. Hence we stress parsimonious models that are easily estimated, and we discuss a variety of practical approaches for high-dimensional covariance matrix modeling, along with what we see as some of the pitfalls and problems in current practice. In so doing we hope to encourage further dialog between the academic and practitioner communities, hopefully stimulating the development of improved market risk management technologies that draw on the best of both worlds.
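For concreteness, the RiskMetrics practice that the authors contrast with richer econometric models boils down to an exponentially weighted covariance recursion; the sketch below uses the standard 0.94 daily decay.

```python
import numpy as np

# The exponentially weighted update underlying RiskMetrics-style
# covariance tracking, shown as context for the "restrictive current
# practice" discussed above (lambda = 0.94 is the usual daily decay).

def ewma_cov_update(cov_prev, returns_today, lam=0.94):
    """Sigma_t = lam * Sigma_{t-1} + (1 - lam) * r_t r_t'."""
    r = np.asarray(returns_today, dtype=float).reshape(-1, 1)
    return lam * cov_prev + (1.0 - lam) * (r @ r.T)
```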
This study offers a historical review of the monetary policy reform of October 6, 1979, and discusses the influences behind it and its significance. We lay out the record from the start of 1979 through the spring of 1980, relying almost exclusively upon contemporaneous sources, including the recently released transcripts of Federal Open Market Committee (FOMC) meetings during 1979. We then present and discuss in detail the reasons for the FOMC's adoption of the reform and the communications challenge presented to the Committee during this period. Further, we examine whether the essential characteristics of the reform were consistent with monetarism, new, neo, or old-fashioned Keynesianism, nominal income targeting, and inflation targeting. The record suggests that the reform was adopted when the FOMC became convinced that its earlier gradualist strategy using finely tuned interest rate moves had proved inadequate for fighting inflation and reversing inflation expectations. The new plan had to break dramatically with established practice, allow for the possibility of substantial increases in short-term interest rates, yet be politically acceptable, and convince financial market participants that it would be effective. The new operating procedures were also adopted for the pragmatic reason that they would likely succeed. JEL Klassifikation: E52, E58, E61, E65.
The Basel Committee plans to differentiate risk-adjusted capital requirements between banks regulated under the internal ratings based (IRB) approach and banks under the standard approach. We investigate the consequences for the lending capacity and the failure risk of banks in a model with endogenous interest rates. The optimal regulatory response depends on the banks' inclination to increase their portfolio risk. If IRB banks are well-capitalized or gain little from taking risks, then they will increase their market share and hold safe portfolios. As risk-taking incentives become more important, the optimal portfolio size of banks adopting internal rating systems will be increasingly constrained, and ultimately they may lose market share relative to banks using the standard approach. The regulator has only limited options to avoid the excessive adoption of internal rating systems. JEL Klassifikation: K13, H41.
We develop an estimated model of the U.S. economy in which agents form expectations by continually updating their beliefs regarding the behavior of the economy and monetary policy. We explore the effects of policymakers' misperceptions of the natural rate of unemployment during the late 1960s and 1970s on the formation of expectations and macroeconomic outcomes. We find that the combination of monetary policy directed at tight stabilization of unemployment near its perceived natural rate and large real-time errors in estimates of the natural rate uprooted heretofore quiescent inflation expectations and destabilized the economy. Had monetary policy reacted less aggressively to perceived unemployment gaps, inflation expectations would have remained anchored and the stagflation of the 1970s would have been avoided. Indeed, we find that less activist policies would have been more effective at stabilizing both inflation and unemployment. We argue that policymakers, learning from the experience of the 1970s, eschewed activist policies in favor of policies that concentrated on the achievement of price stability, contributing to the subsequent improvements in macroeconomic performance of the U.S. economy.
Recent evidence on the effect of government spending shocks on consumption cannot be easily reconciled with existing optimizing business cycle models. We extend the standard New Keynesian model to allow for the presence of rule-of-thumb (non-Ricardian) consumers. We show how the interaction of the latter with sticky prices and deficit financing can account for the existing evidence on the effects of government spending. JEL Klassifikation: E32, E62.
In a plain-vanilla New Keynesian model with two-period staggered price-setting, discretionary monetary policy leads to multiple equilibria. Complementarity between the pricing decisions of forward-looking firms underlies the multiplicity, which is intrinsically dynamic in nature. At each point in time, the discretionary monetary authority optimally accommodates the level of predetermined prices when setting the money supply because it is concerned solely about real activity. Hence, if other firms set a high price in the current period, an individual firm will optimally choose a high price because it knows that the monetary authority next period will accommodate with a high money supply. Under commitment, the mechanism generating complementarity is absent: the monetary authority commits not to respond to future predetermined prices. Multiple equilibria also arise in other similar contexts where (i) a policymaker cannot commit, and (ii) forward-looking agents determine a state variable to which future policy responds. JEL Klassifikation: E5, E61, D78
The Basle securitisation framework explained: the regulatory treatment of asset securitisation
(2005)
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the pathology of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Klassifikation: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1 .
This paper analyzes the empirical relationship between credit default swap, bond and stock markets during the period 2000-2002. Focusing on the intertemporal comovement, we examine weekly and daily lead-lag relationships in a vector autoregressive model and the adjustment between markets caused by cointegration. First, we find that stock returns lead CDS and bond spread changes. Second, CDS spread changes Granger cause bond spread changes for a higher number of firms than vice versa. Third, the CDS market is significantly more sensitive to the stock market than the bond market and the magnitude of this sensitivity increases when credit quality becomes worse. Finally, the CDS market plays a more important role for price discovery than the corporate bond market. JEL Klassifikation: G10, G14, C32.
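A sketch of the lead-lag machinery described above, using statsmodels; the data frame and its column names are hypothetical placeholders, not the paper's dataset.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Sketch of the lead-lag testing described above: fit a VAR on weekly
# (or daily) spread changes and test Granger causality between markets.

def lead_lag_test(df: pd.DataFrame, caused: str = "bond_spread_chg",
                  causing: str = "cds_spread_chg", max_lags: int = 4):
    """Fit a bivariate VAR and test whether `causing` Granger-causes
    `caused`; returns the p-value of the F-test."""
    model = VAR(df[[caused, causing]].dropna())
    results = model.fit(maxlags=max_lags, ic="aic")
    test = results.test_causality(caused, [causing], kind="f")
    return test.pvalue
```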
We characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. Our analysis is based on a unique data set of high-frequency futures returns for each of the markets. We find that news surprises produce conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. The details of the linkages are particularly intriguing as regards equity markets. We show that equity markets react differently to the same news depending on the state of the economy, with bad news having a positive impact during expansions and the traditionally-expected negative impact during recessions. We rationalize this by temporal variation in the competing "cash flow" and "discount rate" effects for equity valuation. This finding helps explain the time-varying correlation between stock and bond returns, and the relatively small equity market news effect when averaged across expansions and recessions. Lastly, relying on the pronounced heteroskedasticity in the high-frequency data, we document important contemporaneous linkages across all markets and countries over-and-above the direct news announcement effects. JEL Klassifikation: F3, F4, G1, C5
This paper analyzes banks' choice between lending to firms individually and sharing lending with other banks, when firms and banks are subject to moral hazard and monitoring is essential. Multiple-bank lending is optimal whenever the benefit of greater diversification in terms of higher monitoring dominates the costs of free-riding and duplication of efforts. The model predicts a greater use of multiple-bank lending when banks are small relative to investment projects, firms are less profitable, and poor financial integration, regulation and inefficient judicial systems increase monitoring costs. These results are consistent with empirical observations concerning small business lending and loan syndication. JEL Klassifikation: D82; G21; G32.
We analyze governance with a dataset on investments of venture capitalists in 3848 portfolio firms in 39 countries from North and South America, Europe and Asia, spanning 1971-2003. We find that cross-country differences in Legality have a significant impact on the governance structure of investments in the VC industry: better laws facilitate faster deal screening and deal origination, a higher probability of syndication and a lower probability of potentially harmful co-investment, and facilitate board representation of the investor. We also show that better laws reduce the probability that the investor requires periodic cash flows prior to exit, which goes hand in hand with an increased probability of investment in high-tech companies. Klassifikation: G24, G31, G32.
A large literature over several decades reveals both extensive concern with the question of time-varying betas and an emerging consensus that betas are in fact time-varying, leading to the prominence of the conditional CAPM. Set against that background, we assess the dynamics in realized betas, vis-à-vis the dynamics in the underlying realized market variance and individual equity covariances with the market. Working in the recently-popularized framework of realized volatility, we are led to a framework of nonlinear fractional cointegration: although realized variances and covariances are very highly persistent and well approximated as fractionally-integrated, realized betas, which are simple nonlinear functions of those realized variances and covariances, are less persistent and arguably best modeled as stationary I(0) processes. We conclude by drawing implications for asset pricing and portfolio management. JEL Klassifikation: C1, G1
Earlier studies of the seigniorage inflation model have found that the high-inflation steady state is not stable under adaptive learning. We reconsider this issue and analyze the full set of solutions for the linearized model. Our main focus is on stationary hyperinflationary paths near the high-inflation steady state. The hyperinflationary paths are stable under learning if agents can utilize contemporaneous data. However, in an economy populated by a mixture of agents, some of whom only have access to lagged data, stable inflationary paths emerge only if the proportion of agents with access to contemporaneous data is sufficiently high. JEL Klassifikation: C62, D83, D84, E31
In this paper, we study the effectiveness of monetary policy in a severe recession and deflation when nominal interest rates are bounded at zero. We compare two alternative proposals for ameliorating the effect of the zero bound: an exchange-rate peg and price-level targeting. We conduct this quantitative comparison in an empirical macroeconometric model of Japan, the United States and the euro area. Furthermore, we use a stylized micro-founded two-country model to check our qualitative findings. We find that both proposals succeed in generating inflationary expectations and work almost equally well under full credibility of monetary policy. However, price-level targeting may be less effective under imperfect credibility, because the announced price-level target path is not directly observable. Klassifikation: E31, E52, E58, E61
We determine optimal monetary policy under commitment in a forward-looking New Keynesian model when nominal interest rates are bounded below by zero. The lower bound represents an occasionally binding constraint that causes the model and optimal policy to be nonlinear. A calibration to the U.S. economy suggests that policy should reduce nominal interest rates more aggressively than suggested by a model without lower bound. Rational agents anticipate the possibility of reaching the lower bound in the future and this amplifies the effects of adverse shocks well before the bound is reached. While the empirical magnitude of U.S. mark-up shocks seems too small to entail zero nominal interest rates, shocks affecting the natural real interest rate plausibly lead to a binding lower bound. Under optimal policy, however, this occurs quite infrequently and does not require targeting a positive average rate of inflation. Interestingly, the presence of binding real rate shocks alters the policy response to (non-binding) mark-up shocks. JEL Klassifikation: C63, E31, E52.
In this article, we investigate risk-return characteristics and diversification benefits when private equity is used as a portfolio component. We use a unique dataset describing 642 US-American portfolio companies with 3620 private equity investments. Information about precisely dated cash flows at the company level enables, for the first time, a cash-flow-equivalent and simultaneous investment simulation in stocks, as well as the construction of stock portfolios for benchmarking purposes. With respect to the methodology involved, we construct private equity, stock-benchmark and mixed-asset portfolios using bootstrap simulations. For the late 1990s we find a dramatic increase in the extent to which private equity outperforms stock investment; in earlier years private equity was underperforming its stock benchmarks. Within the overall class of private equity, returns on earlier private equity investment categories, like venture capital, show on average higher variation and even higher rates of failure. It is in this category in particular that high average portfolio returns are generated solely by the ability to select a few extremely well performing companies, thus compensating for lost investments. There is a high marginal diversifiable risk reduction of about 80% when the portfolio size is increased to include 15 investments. When the portfolio size is increased from 15 to 200 there are few marginal risk diversification effects on the one hand, but a large increase in managing expenditure on the other, so that an actual average portfolio size of between 20 and 28 investments seems to be well balanced. We provide empirical evidence that the non-diversifiable risk that a constrained investor, who is exclusively investing in private equity, has to hold exceeds that of constrained stock investors and also the market risk. From the viewpoint of unconstrained investors with complete investment freedom, risk can be optimally reduced by constructing mixed-asset portfolios. Across the various private equity subcategories analyzed, there are big differences in the optimal allocations to this asset class for minimizing mixed-asset portfolio variance or maximizing performance ratios. We observe optimal portfolio weightings to be between 3% and 65%.
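The bootstrap logic behind the diversification numbers reported above can be sketched in a few lines; the synthetic return distribution below is a purely illustrative placeholder, not the paper's data.

```python
import numpy as np

# Sketch of the bootstrap diversification exercise: resample portfolios
# of increasing size from a cross-section of investment returns and
# track the dispersion of portfolio returns as size grows.

rng = np.random.default_rng(1)
# Synthetic skewed returns: many losses, a few large winners (illustrative)
investment_returns = rng.lognormal(mean=0.0, sigma=1.0, size=642) - 1.0

def portfolio_return_std(n_investments, n_draws=10_000):
    """Std. dev. of equally weighted bootstrap portfolio returns."""
    draws = rng.choice(investment_returns, size=(n_draws, n_investments))
    return draws.mean(axis=1).std()

for n in (1, 5, 15, 28, 200):
    print(n, round(portfolio_return_std(n), 3))   # risk falls with size
```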
We take a simple time-series approach to modeling and forecasting daily average temperature in U.S. cities, and we inquire systematically as to whether it may prove useful from the vantage point of participants in the weather derivatives market. The answer is, perhaps surprisingly, yes. Time-series modeling reveals conditional mean dynamics, and crucially, strong conditional variance dynamics, in daily average temperature, and it reveals sharp differences between the distribution of temperature and the distribution of temperature surprises. As we argue, it also holds promise for producing the long-horizon predictive densities crucial for pricing weather derivatives, so that additional inquiry into time-series weather forecasting methods will likely prove useful in weather derivatives contexts.
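A minimal sketch of the first stage of such a model (a linear trend plus low-order Fourier seasonality in the conditional mean, fit by OLS, with residuals left over for the AR and seasonal-variance dynamics the abstract emphasizes); all data below are synthetic placeholders.

```python
import numpy as np

# Sketch: deterministic trend + annual Fourier seasonality for daily
# average temperature, estimated by OLS. The temperature series here
# is synthetic; a real application would use observed city data.

days = np.arange(3650)                         # ten years of daily data
temp = (15 + 0.001 * days
        + 10 * np.sin(2 * np.pi * days / 365.25 - 2)
        + np.random.default_rng(2).normal(0, 3, days.size))

omega = 2 * np.pi * days / 365.25
X = np.column_stack([np.ones_like(days, dtype=float), days,
                     np.sin(omega), np.cos(omega)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
fitted = X @ coef
residuals = temp - fitted       # input to the AR / seasonal-GARCH stage
```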