Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, particularly given the European Union’s decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms which pre-adopted such standards before 2005, but investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital and therefore measurable economic benefits for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against concluding that reporting under internationally accepted standards, per se, lowers the cost of equity capital of adopting firms.
The GPS recorder consists of a GPS receiver board, a logging facility, an antenna, a power supply, a DC-DC converter and a casing. Currently, it has a weight of 33 g. The recorder works reliably with a sampling rate of 1/s and with an operation time of about 3 h, providing time-indexed data on geographic positions and ground speed. The data are downloaded when the animal is recaptured. Prototypes were tested on homing pigeons. The records of complete flight paths with surprising details illustrate the potential of this new method that can be used on a variety of medium-sized and large vertebrates.
In this study, we develop a technique for estimating a firm’s expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/-Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only publicly available information at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
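The core of such an approach can be illustrated with a stripped-down sketch: treat the expected cost of equity capital as the internal rate of return that equates the current stock price to the present value of forecast payoffs, and solve for it numerically. This is only a minimal illustration with invented numbers and a simple dividend model; the methods cited above (Gebhardt/Lee/Swaminathan, Easton et al.) use richer residual-income and abnormal-earnings-growth formulations.

```python
# Minimal sketch of an implied cost-of-equity estimate: find the discount
# rate r that equates the current stock price to the present value of
# forecast dividends plus a terminal value. All numbers are hypothetical.

def implied_cost_of_equity(price, dividends, terminal_growth=0.02,
                           lo=0.001, hi=0.50, tol=1e-8):
    """Solve price = sum_t D_t/(1+r)^t + TV/(1+r)^T for r by bisection."""
    def pv(r):
        total = sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))
        # Gordon-growth terminal value based on the last forecast dividend.
        tv = dividends[-1] * (1 + terminal_growth) / (r - terminal_growth)
        return total + tv / (1 + r) ** len(dividends)

    while hi - lo > tol:
        mid = (lo + hi) / 2
        # pv(r) falls as r rises: if the PV still exceeds the price, r is too low.
        if pv(mid) > price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = implied_cost_of_equity(price=50.0, dividends=[2.0, 2.2, 2.4])
print(f"implied cost of equity: {r:.2%}")
```

Because only the current price and public forecasts enter, such an estimate can in principle be recomputed daily, which is the property the paper exploits.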
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us a lot about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending and the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flow or high interim liquidation values of assets tend to finance without a relationship bank.
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information, we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It is argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures it supports.
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, in building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
EU financial integration: is there a 'Core Europe'? Evidence from a cluster-based approach
(2005)
Numerous recent studies, e.g. EU Commission (2004a), Baele et al. (2004), Adam et al. (2002), and the research pooled in ECB-CFS (2005), Gaspar, Hartmann, and Sleijpen (2003), have documented progress in EU financial integration from a micro-level view. This paper contributes to this research by identifying groups of financially integrated countries from a holistic, macro-level view. It calculates cross-sectional dispersions, and innovates by applying an inter-temporal cluster analysis to eight euro area countries for the period 1995-2002. The indicators employed represent the money, government bond and credit markets. Our results show that euro countries were divided into two stable groups of financially more closely integrated countries in the pre-EMU period. Back then, geographic proximity and country size might have played a role. This situation has changed remarkably with the euro's introduction. EMU has led to a shake-up both in the number and composition of groups. The evidence puts a question mark behind using Germany as a benchmark in the post-EMU period. The findings suggest as well that financial integration takes place in waves. Stable periods and periods of intense transition alternate. Based on the notion of 'maximum similarity', the results suggest that there exist 'maximum similarity barriers'. It takes extraordinary events, such as EMU, to push the degree of financial integration beyond these barriers. The research encourages policymakers to move forward courageously in the post-FSAP era, and provides comfort that the substantial differences between the current and potentially new euro states can be overcome. The analysis could be extended to the new EU member countries, to the global level, and to additional indicators.
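The grouping step can be sketched with a plain k-means clustering of countries on financial-market indicators. The indicator values and country assignments below are invented for illustration; the paper uses an inter-temporal cluster analysis on actual money, government-bond and credit-market measures.

```python
# Toy sketch of grouping countries by financial-market indicators with
# a hand-rolled k-means. The two-dimensional indicator values are invented.

def kmeans(points, centroids, iters=100):
    """Plain k-means on lists of feature vectors; returns cluster labels."""
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean).
        labels = [min(range(len(centroids)),
                      key=lambda k: sum((p - c) ** 2
                                        for p, c in zip(pt, centroids[k])))
                  for pt in points]
        # Recompute each centroid as the mean of its assigned points.
        new = []
        for k in range(len(centroids)):
            members = [pt for pt, l in zip(points, labels) if l == k]
            new.append([sum(x) / len(members) for x in zip(*members)]
                       if members else centroids[k])
        if new == centroids:
            break
        centroids = new
    return labels

# Hypothetical (bond-yield spread, credit-rate spread) per country.
countries = ["DE", "FR", "NL", "AT", "IT", "ES", "PT", "FI"]
data = [[0.1, 0.2], [0.2, 0.3], [0.15, 0.25], [0.2, 0.2],
        [1.1, 0.9], [0.9, 1.0], [1.2, 1.1], [0.3, 0.3]]
labels = kmeans(data, centroids=[[0.0, 0.0], [1.0, 1.0]])
groups = {k: [c for c, l in zip(countries, labels) if l == k] for k in (0, 1)}
print(groups)
```

Running the clustering separately for sub-periods, as the paper does inter-temporally, would then reveal whether group membership is stable or shaken up by an event such as EMU.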
Tractable hedging - an implementation of robust hedging strategies [This Version: March 30, 2004]
(2004)
This paper provides a theoretical and numerical analysis of robust hedging strategies in diffusion-type models including stochastic volatility models. A robust hedging strategy avoids any losses as long as the realised volatility stays within a given interval. We focus on the effects of restricting the set of admissible strategies to tractable strategies which are defined as the sum over Gaussian strategies. Although a trivial Gaussian hedge is either not robust or prohibitively expensive, this is not the case for the cheapest tractable robust hedge which consists of two Gaussian hedges for one long and one short position in convex claims which have to be chosen optimally.
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that despite important reforms and substantial changes of individual elements of the German corporate governance system the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in the governance undermines the stability of the corporate governance system in Germany. Therefore a breakdown of the traditional system leading to a control vacuum or a fundamental change to a capital market-based system could be in the offing.
Small and medium-sized firms typically obtain capital via bank financing. They often rely on a mixture of relationship and arm’s-length banking. This paper explores the reasons for the dominance of heterogeneous multiple banking systems. We show that the incidence of inefficient credit termination and subsequent firm liquidation is contingent on the borrower’s quality and on the relationship bank’s information precision. Generally, heterogeneous multiple banking leads to fewer inefficient credit decisions than monopoly relationship lending or homogeneous multiple banking, provided that the relationship bank’s fraction of total firm debt is not too large.
This paper makes an attempt to present the economics of credit securitisation in a non-technical way, starting from the description and the analysis of a typical securitisation transaction. The paper sketches a theoretical explanation for why tranching, or nonproportional risk sharing, which is at the heart of securitisation transactions, may allow commercial banks to maximize their shareholder value. However, the analysis makes also clear that the conditions under which credit securitisation enhances welfare, are fairly restrictive, and require not only an active role of the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort.
We derive the effects of credit risk transfer (CRT) markets on real sector productivity and on the volume of financial intermediation in a model where banks choose their optimal degree of CRT and monitoring. We find that CRT increases productivity in the up-market real sector but decreases it in the low-end segment. If optimal, CRT unambiguously fosters financial deepening, i.e., it reduces credit-rationing in the economy. These effects rely upon the ability of banks to commit to the optimal CRT at the funding stage. The optimal degree of CRT depends on the combination of moral hazard, general riskiness, and the cost of monitoring in non-monotonic ways.
We provide insights into determinants of the rating level of 371 issuers which defaulted in the years 1999 to 2003, and into the leader-follower relationship between Moody’s and S&P. The evidence for the rating level suggests that Moody’s assigns lower ratings than S&P for all observed periods before the default event. Furthermore, we observe two-way Granger causality, which signifies information flow between the two rating agencies. Since lagged rating changes influence the magnitude of the agencies’ own rating changes, it would appear that the two rating agencies apply a policy of taking a severe downgrade through several mild downgrades. Further, our analysis of rating changes shows that issuers with headquarters in the US are less sharply downgraded than non-US issuers. For rating changes by Moody’s we also find that larger issuers seem to be downgraded less severely than smaller issuers.
This article presents an overview of the contemporary German insurance market, its structure, players, and development trends. First, brief information about the history of the insurance industry in Germany is provided. Second, the contemporary market is analyzed in terms of its legal and economic structure, with statistics on the number of companies, insurance density and penetration, the role of insurers in the capital markets, premiums split, and main market players and their market shares. Furthermore, the three biggest insurance lines—life, health, and property and casualty—are considered in more detail, such as product range, country specifics, and insurance and investment results. A section on regulation outlines its implementation in the insurance sector, offering information on the underlying legislative basis, supervisory body, technical procedures, expected developments, and sources of more detailed information.
Charged-particle exclusive data for Ar+Pb collisions at 0.772 GeV/u are analyzed in terms of collective variables for the event shapes in momentum space. Semicentral collisions lead to sidewards flow whereas nearly head-on collisions have spherical shapes in the c.m. frame, resulting from complete stopping of projectile motion. The hydrodynamical model predictions agree qualitatively with the data whereas the standard cascade model disagrees, lacking in stopping power and collective flow.
Nuclear resonance fluorescence measurements with linearly polarized bremsstrahlung were performed to determine parities of bound dipole transitions in 206Pb. A new 1+ level at 5800 keV was found, which has almost the same strength as the isoscalar M1 transition in 208Pb. Twenty-four further dipole states in 206Pb below 7.6 MeV possess negative parity.
Pion and proton production are measured to investigate thermal equilibrium in central collisions of 40Ar+KCl at 1.8 GeV/nucleon. The bulk of the pion yield is isotropic in the c.m. system, with an apparent temperature of 58±3 MeV, much lower than the 118±2 MeV of the protons. It is shown that the low pion "temperature" can be explained by the decay kinematics of delta resonances in thermal equilibrium. A (5±1)% component in the pion spectrum is, however, found to have a temperature of 110±10 MeV. The effect on the spectra of possible contributions from collective radial flow is discussed.
An event by event analysis is carried out for all charged particles observed in central collisions of 40Ar + KCl and 40Ar + Pb at 1.808 and 0.772 GeV/nucleon, respectively. Total transverse energy is used for impact parameter selection within the central trigger condition. The central Ar + KCl reaction exhibits a forward-backward oriented momentum flux. The flux distribution of the most central Ar + Pb events is approximately isotropic in the fireball center of mass.
Triple differential cross sections d³σ/dp³ for charged pions produced in symmetric heavy-ion collisions were measured with the KaoS magnetic spectrometer at the heavy-ion synchrotron facility SIS at GSI. The correlations between the momentum vectors of charged pions and the reaction plane in 197Au+197Au collisions at an incident energy of 1 GeV/nucleon were determined. We observe, for the first time, an azimuthally anisotropic distribution of pions, with enhanced emission perpendicular to the reaction plane. The anisotropy is most pronounced for pions of high transverse momentum in semicentral collisions.
Nuclear resonance fluorescence experiments with linearly polarized bremsstrahlung were performed to determine parities of strong dipole transitions in 40Ar. A total of 14 transitions (ten of them previously unknown) in the energy range from 4.7 to 10.2 MeV could be identified. From this experiment it is evident that the main dipole strength to bound states is due to E1 excitations. An upper limit of B(M1)↑ < 0.5 µ_N² was found for individual magnetic dipole excitations in 40Ar in the energy region below the neutron threshold.
Electric charge correlations were studied for p+p, C+C, Si+Si, and centrality selected Pb+Pb collisions at √s_NN = 17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudorapidity correlations of oppositely charged particles were measured using the balance function method. The width of the balance function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
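The balance function idea can be sketched for a single toy "event": histogram the pseudorapidity separation of all particle pairs, counting opposite-sign pairs positively and same-sign pairs negatively, normalised per charged particle. This follows the standard Bass/Danielewicz/Pratt definition; the particle list below is invented, and a real analysis averages over many events and corrects for acceptance.

```python
# Illustrative balance function for one toy event: tightly correlated
# +- pairs produce a narrow peak at small |delta eta|.

from itertools import combinations

def balance_function(particles, bins, width=0.5):
    """particles: list of (eta, charge); returns B per bin of |delta eta|."""
    n_plus = sum(1 for _, q in particles if q > 0)
    n_minus = sum(1 for _, q in particles if q < 0)
    b = [0.0] * bins
    for (eta1, q1), (eta2, q2) in combinations(particles, 2):
        k = int(abs(eta1 - eta2) / width)
        if k >= bins:
            continue
        if q1 * q2 < 0:                       # opposite-sign pair
            b[k] += 0.5 * (1.0 / n_plus + 1.0 / n_minus)
        elif q1 > 0:                          # ++ pair
            b[k] -= 1.0 / n_plus
        else:                                 # -- pair
            b[k] -= 1.0 / n_minus
    return b

# Three tightly correlated +- pairs at eta ~ 0, 1, 2.
event = [(0.0, +1), (0.1, -1), (1.0, +1), (1.05, -1), (2.0, +1), (2.1, -1)]
b = balance_function(event, bins=6)
print(b)
```

Because every charge is balanced by an opposite charge, the bin contents sum to one; a narrower peak (smaller weighted mean separation) is what the measured decrease of the width with centrality corresponds to.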
The properties of two measures of charge fluctuations D-tilde and DeltaPhiq are discussed within several toy models of nuclear collisions. In particular their dependence on mean particle multiplicity, multiplicity fluctuations, and net electric charge are studied. It is shown that the measure DeltaPhiq is less sensitive to these trivial biasing effects than the originally proposed measure D-tilde. Furthermore the influence of resonance decay kinematics is analyzed and it is shown that it is likely to shadow a possible reduction of fluctuations due to QGP creation.
German version: Der Umgang mit Rechtsparadoxien: Derrida, Luhmann, Wiethölter. In: Christian Joerges and Gunther Teubner (eds.), Rechtsverfassungsrecht: Recht-Fertigungen zwischen Sozialtheorie und Privatrechtsdogmatik. Nomos, Baden-Baden 2003, pp. 249-272.
Analysis of Lambda and associative pion production in relativistic nucleus-nucleus collisions
(1984)
The transverse momentum and rapidity distributions of negative hadrons and participant protons have been measured for central 32S+32S collisions at p_lab = 200 GeV/c per nucleon. The proton mean rapidity shift <Δy> ≈ 1.6 and mean transverse momentum <pT> ≈ 0.6 GeV/c are much higher than in pp or peripheral AA collisions and indicate an increase in the nuclear stopping power. All pT spectra exhibit similar source temperatures. Including previous results for K0s, Lambda, and Lambda-bar, we account for all important contributions to particle production.
The NA35 experiment has collected a high statistics set of momentum analyzed negative hadrons near and forward of midrapidity for central collisions of 200A GeV/c 32S+S, Cu, Ag, and Au. Using momentum space correlations to study the size of the source of particle production, the transverse source radii are found to decrease by ~40% at midrapidity and ~20% at forward rapidity while the longitudinal radius RL is found to decrease by ~50% as pT increases over the interval 50<pT<600 MeV/c. Calculations using a microscopic phase space approach (relativistic quantum molecular dynamics) reproduce the observed trends of the data. PACS: 25.75.+r
Transverse momenta and rapidities of Lambda's produced in central nucleus-nucleus collisions at 4.5 GeV/c·u (C-C,...,O-Pb) were studied and compared with those from inelastic He-Li interactions at the same incident momentum. Polarization of the Lambda hyperons was found to be consistent with zero (alpha P = -0.06 ± 0.11 for Lambda's from central collisions). An upper limit of the Lambda-bar/Lambda production ratio was estimated to be less than 4.5 × 10^-3. The experiment was performed in a triggered streamer chamber.
Difficulties of the thermodynamical model approach to pion production in relativistic ion collisions
(1983)
Thermodynamical models with various forms of partial transparency of nuclear matter are considered. It is shown that the introduction of transparency, while significantly improving agreement with pion data on multiplicities and transverse momenta, leads to a serious discrepancy with the average rapidities of pions. Qualitative arguments are given that the difficulties of the thermodynamical approach can be overcome if one assumes hydrodynamical expansion in the first stage of nuclear interactions.
A detailed study of pion production in inelastic and central nucleus-nucleus collisions was carried out using a 2 m streamer spectrometer. Nuclear targets mounted inside the streamer chamber were exposed to nuclear beams of 4.5 GeV/c/nucleon momentum. A systematic study of the influence of the central trigger on observed data is performed. The data on multiplicities, rapidities, transverse momenta, and emission angles of negative pions are presented for various pairs of colliding nuclei. Intercorrelations between various characteristics are studied and discussed. The results are compared with predictions of some theoretical models. It is shown that the main features of the pion production in nuclear collisions can be satisfactorily described by a model assuming independent nucleon-nucleon collisions with subsequent cascading process. However, the observed correlation between Lambda and pion characteristics seems to be unexplained by this picture.
We argue that the recent analysis of strangeness production in nuclear collisions at 200 A GeV/c performed by Topor Pop et al. is flawed. The conclusions are based on an erroneous interpretation of the data and the numerical model results. The term "strangeness enhancement" is used in a misleading way.
Pion and strangeness puzzles
(1996)
Data on the mean multiplicity of strange hadrons produced in minimum bias proton-proton and central nucleus-nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon-nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus-nucleus collisions than for nucleon-nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon-nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus-nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies.
In the current globalization debate the law appears to be entangled in economic and political developments which move into a new dimension of depoliticization, de-centralization and de-individualization. For all the correct observations in detail, though, this debate is bringing about a drastic (polit)economic reduction of the role of law in the globalization process that I wish to challenge in this paper. Here one has to take on Wallerstein’s misconception of “worldwide economies” according to which the formation of the global society is seen as a basically economic process. Autonomous globalization processes in other social spheres running parallel to economic globalization need to be taken seriously. In protest against such (polit)economic reductionism several strands of the debate, among them the neo-institutionalist theory of “global culture”, post-modern concepts of global legal pluralism, systems theory studies of differentiated global society and various versions of “global civil society” have shaped a concept of a polycentric globalization. From these angles the remarkable multiplicity of the world society, in which tendencies to re-politicization, re-regionalization and re-individualization are becoming visible at the same time, becomes evident. I shall contrast two current theses on the globalization of law with two less current counter-theses: First thesis: globalization is relevant for law because the emergence of global markets undermines the control potential of national policy, and therefore also the chances of legal regulation. First counter-thesis: globalization produces a set of problems intrinsic to law itself, consisting in a change to the dominant lawmaking processes. Second thesis: globalization means that the law institutionalizes the worldwide shift in power from governmental actors to economic actors. 
Second counter-thesis: globalization means that the law has a chance of contributing to a dual constitution of autonomous sectors of world society.
The main results obtained within the energy scan program at the CERN SPS are presented. The anomalies in the energy dependence of hadron production indicate that the onset of the deconfinement phase transition is located at about 30 A GeV. For the first time we seem to have clear evidence for the existence of a deconfined state of matter in nature. PACS numbers: 24.85.+p
We present the measured correlation functions for pi+ pi-, pi- pi- and pi+ pi+ pairs in central S+Ag collisions at 200 GeV per nucleon. The Gamow function, which has traditionally been used to correct the correlation functions of charged pions for the Coulomb interaction, is found to be inconsistent with all measured correlation functions. Problems which have dominated the systematic uncertainty of the correlation analysis are related to this inconsistency. It is demonstrated that a new Coulomb correction method, based exclusively on the measured correlation function for pi+ pi- pairs, may solve the problem.
The pion multiplicity per participating nucleon in central nucleus-nucleus collisions at the energies 2-15 A GeV is significantly smaller than in nucleon-nucleon interactions at the same collision energy. This effect of pion suppression is argued to appear due to the evolution of the system produced at the early stage of heavy-ion collisions towards a local thermodynamic equilibrium and further isentropic expansion.
It is shown that data on pion and strangeness production in central nucleus-nucleus collisions are consistent with the hypothesis of a Quark Gluon Plasma formation between 15 A GeV/c (BNL AGS) and 160 A GeV/c (CERN SPS) collision energies. The experimental results interpreted in the framework of a statistical approach indicate that the effective number of degrees of freedom increases by a factor of about 3 in the course of the phase transition and that the plasma created at CERN SPS energy may have a temperature of about 280 MeV (energy density ≈ 10 GeV/fm^3). Experimental studies of central Pb+Pb collisions in the energy range 20-160 A GeV/c are urgently needed in order to localize the threshold energy, and study the properties of the QCD phase transition.
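As a rough consistency check (not part of the abstract itself), the quoted temperature and energy density can be compared under the textbook assumption of an ideal gas of massless gluons and two light quark flavours:

```latex
\varepsilon = g\,\frac{\pi^2}{30}\,T^4,
\qquad
g_{\mathrm{QGP}} = \underbrace{2\cdot 8}_{\text{gluons}}
 + \underbrace{\tfrac{7}{8}\cdot 2\cdot 2\cdot 2\cdot 3}_{\text{quarks}} = 37 .
```

With T ≈ 280 MeV and (ħc)³ ≈ 7.68 × 10^6 MeV³ fm³ this gives

```latex
\varepsilon \approx 37\cdot\frac{\pi^2}{30}\cdot
\frac{(280\ \mathrm{MeV})^4}{(\hbar c)^3}
\approx 9.7\ \mathrm{GeV/fm^3},
```

in line with the quoted value of about 10 GeV/fm^3.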
Using the NA49 main TPC, the central production of hyperons has been measured in CERN SPS Pb + Pb collisions at 158 GeV/c. The preliminary ratio, studied at 2.0 < y < 2.6 and 1 < pT < 3 GeV/c, equals ~ (13 ± 4)% (systematic error only). It is compatible, within errors, with the previously obtained ratios for central S + S [1], S + W [2], and S + Au [3] collisions. The fit to the transverse momentum distribution resulted in an inverse slope parameter T of 297 MeV. At this level of statistics we do not see any noticeable enhancement of hyperon production with the increased volume (and, possibly, degree of equilibration) of the system from S + S to Pb + Pb. This result is unexpected and counterintuitive, and should be further investigated. If confirmed, it will have a significant impact on our understanding of mechanisms leading to the enhanced strangeness production in heavy-ion collisions.
The data on average hadron multiplicities in central A+A collisions measured at CERN SPS are analysed with the ideal hadron gas model. It is shown that the full chemical equilibrium version of the model fails to describe the experimental results. The agreement of the data with the off-equilibrium version allowing for partial strangeness saturation is significantly better. The freeze-out temperature of about 180 MeV seems to be independent of the system size (from S+S to Pb+Pb) and in agreement with that extracted in e+e-, pp and pp̄ collisions. The strangeness suppression is discussed at both the hadron and valence quark level. It is found that the hadronic strangeness saturation factor gamma_S increases from about 0.45 for pp interactions to about 0.7 for central A+A collisions, with no significant change from S+S to Pb+Pb collisions. The quark strangeness suppression factor lambda_S is found to be about 0.2 for elementary collisions and about 0.4 for heavy-ion collisions, independently of collision energy and type of colliding system.
The transverse momentum and rapidity distributions of net protons and negatively charged hadrons have been measured for minimum bias proton-nucleus and deuteron-gold interactions, as well as central oxygen-gold and sulphur-nucleus collisions at 200 GeV per nucleon. The rapidity density of net protons at midrapidity in central nucleus-nucleus collisions increases both with target mass for sulphur projectiles and with the projectile mass for a gold target. The shape of the rapidity distributions of net protons forward of midrapidity for d+Au and central S+Au collisions is similar. The average rapidity loss is larger than 2 units of rapidity for reactions with the gold target. The transverse momentum spectra of net protons for all reactions can be described by a thermal distribution with temperatures between 145 ± 11 MeV (p+S interactions) and 244 ± 43 MeV (central S+Au collisions). The multiplicity of negatively charged hadrons increases with the mass of the colliding system. The shape of the transverse momentum spectra of negatively charged hadrons changes from minimum bias p+p and p+S interactions to p+Au and central nucleus-nucleus collisions. The mean transverse momentum is almost constant in the vicinity of midrapidity and shows little variation with the target and projectile masses. The average number of produced negatively charged hadrons per participant baryon increases slightly from p+p, p+A to central S+S,Ag collisions.
Preliminary inclusive spectra for K+, K-, Ks0, Λ and Λ̄ are presented which were measured in central Pb + Pb collisions at 158 GeV per nucleon by the NA49 experiment. A comparison with data from lighter collision systems shows a strong change of the shape of the Λ rapidity distribution. The strangeness enhancement observed in S + S compared to p + p and p + A is not further increased in Pb + Pb.
The directed and elliptic flow of protons and charged pions has been observed from the semi-central collisions of a 158 GeV/nucleon Pb beam with a Pb target. The rapidity and transverse momentum dependence of the flow has been measured. The directed flow of the pions is opposite to that of the protons but both exhibit negative flow at low pt. The elliptic flow of both is fairly independent of rapidity but rises with pt.
Preliminary data on phi production in central Pb + Pb collisions at 158 GeV per nucleon are presented, measured by the NA49 experiment in the hadronic decay channel phi -> K+K-. At mid-rapidity, the kaons were separated from pions and protons by combining dE/dx and time-of-flight information; in the forward rapidity range only dE/dx identification was used to obtain the rapidity distribution and a rapidity-integrated mt-spectrum. The mid-rapidity yield obtained was dN/dy = 1.85 ± 0.3 per event; the total phi multiplicity was estimated to be 5.0 ± 0.7 per event. Comparison with published pp data shows a slight, but not very significant, strangeness enhancement.
We demonstrate that a new type of analysis in heavy-ion collisions, based on an event-by-event analysis of the transverse momentum distribution, allows us to obtain information on secondary interactions and collective behaviour that is not available from the inclusive spectra. Using a random walk model as a simple phenomenological description of initial state scattering in collisions with heavy nuclei, we show that the event-by-event measurement allows a quantitative determination of this effect, well within the resolution achievable with the new generation of large acceptance hadron spectrometers. The preliminary data of the NA49 collaboration on transverse momentum fluctuations indicate qualitatively different behaviour than that obtained within the random walk model. The results are discussed in relation to the thermodynamic and hydrodynamic description of nuclear collisions.
Two-particle correlation functions of negative hadrons over wide phase space, and transverse mass spectra of negative hadrons and deuterons near mid-rapidity have been measured in central Pb+Pb collisions at 158 GeV per nucleon by the NA49 experiment at the CERN SPS. A novel Coulomb correction procedure for the negative two-particle correlations is employed making use of the measured oppositely charged particle correlation. Within an expanding source scenario these results are used to extract the dynamic characteristics of the hadronic source, resolving the ambiguities between the temperature and transverse expansion velocity of the source, that are unavoidable when single and two particle spectra are analysed separately. The source shape, the total duration of the source expansion, the duration of particle emission, the freeze-out temperature and the longitudinal and transverse expansion velocities are deduced.
Lambda and Antilambda reconstruction in central Pb+Pb collisions using a time projection chamber
(1997)
The large acceptance time projection chambers of the NA49 experiment are used to record the trajectories of charged particles from Pb + Pb collisions at 158 GeV per nucleon. Neutral strange hadrons have been reconstructed from their charged decay products. To obtain distributions of Λ, Λ̄ and Ks0 in discrete bins of rapidity, y, and transverse momentum, pT, calculations have been performed to determine the acceptance of the detector and the efficiency of the reconstruction software as a function of both variables. The lifetime distributions obtained give values of cτ = 7.8 ± 0.6 cm for Λ and cτ = 2.5 ± 0.3 cm for Ks0, consistent with data book values.
A brief review of the history of data collection and of the interpretation of results on high energy A+A collisions is presented. Basic assumptions and main results of a statistical model of the early stage of A+A collisions are discussed. It is concluded that a broad set of experimental data is in agreement with the hypothesis that a QGP is created in central A+A (S+S and Pb+Pb) collisions at the SPS. Careful experimental investigation of A+A collisions in the energy region between the top AGS and SPS energies is needed.
The large acceptance TPCs of the NA49 spectrometer allow for a systematic multidimensional study of two-particle correlations in different parts of phase space. Results from Bertsch-Pratt and Yano-Koonin-Podgoretskii parametrizations are presented differentially in transverse pair momentum and pair rapidity. These studies give an insight into the dynamical space-time evolution of relativistic Pb+Pb collisions, which is dominated by longitudinal expansion.
A statistical model of the early stage of central nucleus-nucleus (A+A) collisions is developed. We suggest a description of the confined state with several free parameters fitted to a compilation of A+A data at the AGS. For the deconfined state a simple Bag model equation of state is assumed. The model leads to the conclusion that a Quark Gluon Plasma is created in central nucleus-nucleus collisions at the SPS. This result is in quantitative agreement with existing SPS data on pion and strangeness production and gives a natural explanation for their scaling behaviour. The localization and the properties of the transition region are discussed. It is shown that the deconfinement transition can be detected by observation of the characteristic energy dependence of pion and strangeness multiplicities, and by an increase of the event-by-event fluctuations. An attempt to understand the data on J/psi production in Pb+Pb collisions at the SPS within the same approach is presented.
Data on J/psi production in inelastic proton-proton, proton-nucleus and nucleus-nucleus interactions at 158 A GeV are analyzed and it is shown that the ratio of mean multiplicities of J/psi mesons and pions is the same for all these collisions. This observation is difficult to understand within current models of J/psi production in nuclear collisions based on the assumption of hard QCD creation of charm quarks.
This paper determines the cost of employee stock options (ESOs) to shareholders. I present a pricing method that seeks to replicate the empirics of exercise and cancellation as closely as possible. In a first step, an intensity-based pricing model of El Karoui and Martellini is adapted to the needs of ESOs. In a second step, I calibrate the model with a regression analysis of exercise rates from the empirical work of Heath, Huddart and Lang. The pricing model thus takes account of all effects captured in the regression. Separate regressions enable me to compare options for top executives with those for subordinates. I find no price differences. The model is also applied to test the precision of the fair value accounting method for ESOs, SFAS 123. Using my model as a reference, the SFAS method results in surprisingly accurate prices.
Intangible assets such as goodwill, licenses, research and development or customer relations are becoming more and more important in high-technology and service-oriented economies. But a comparison of the book values of listed companies with their market capitalization suggests that financial reports fail to meet the information needs of market participants regarding the estimation of proper firm value. Moreover, with the introduction of Anglo-American accounting systems in Europe and Asia, we can observe diverging accounting practices for intangible assets, caused by different accounting standards, even in the accounts of companies domiciled in the same jurisdiction. To assess the relevance of intangible assets in the accounts of listed Japanese and German companies, we therefore measure certain balance sheet and profit and loss relations pertaining to goodwill and self-developed software. We compare and analyze valuation rules for goodwill and software costs according to German GAAP, Japanese GAAP, US GAAP and IAS to determine the possible impact of diverging rules on the comparability of the accounts. Our results show that comparability is impaired by different accounting practices: the recognition and valuation of goodwill and self-developed software vary significantly with the accounting regime applied. For the recognition of self-developed software, however, the average impact on asset coefficients or profit is not that high, and an industry bias can only be found for the financial industry. In contrast, for goodwill accounting we found major differences, especially between German and Japanese blue chips. The introduction of the new goodwill impairment-only approach and the prohibition of the pooling method may therefore have a major impact, especially on Japanese companies' accounts.
We report measurements of Xi and Xi-bar hyperon absolute yields as a function of rapidity in 158 GeV/c Pb+Pb collisions. At midrapidity, dN/dy = 2.29 +/- 0.12 for Xi, and 0.52 +/- 0.05 for Xi-bar, leading to the ratio of Xi-bar/Xi = 0.23 +/- 0.03. Inverse slope parameters fitted to the measured transverse mass spectra are of the order of 300 MeV near mid-rapidity. The estimated total yield of Xi particles in Pb+Pb central interactions amounts to 7.4 +/- 1.0 per collision. Comparison to Xi production in properly scaled p+p reactions at the same energy reveals a dramatic enhancement (about one order of magnitude) of Xi production in Pb+Pb central collisions over elementary hadron interactions.
New data with a minimum bias trigger for 158 GeV/nucleon Pb + Pb have been analyzed. Directed and elliptic flow as a function of rapidity of the particles and centrality of the collision are presented. The centrality dependence of the ratio of elliptic flow to the initial space elliptic anisotropy is compared to models.
Net proton and negative hadron spectra for central Pb+Pb collisions at 158 GeV per nucleon at the CERN SPS were measured and compared to spectra from lighter systems. Net baryon distributions were derived from those of net protons, utilizing model calculations of isospin contributions as well as data and model calculations of strange baryon distributions. Stopping (rapidity shift with respect to the beam) and mean transverse momentum ⟨pT⟩ of net baryons increase with system size. The rapidity density of negative hadrons scales with the number of participant nucleons for nuclear collisions, whereas their ⟨pT⟩ is independent of system size. The ⟨pT⟩ dependence upon particle mass and system size is consistent with a larger transverse flow velocity at midrapidity for Pb+Pb compared to S+S central collisions.
We present first data on event-by-event fluctuations in the average transverse momentum of charged particles produced in Pb+Pb collisions at the CERN SPS. This measurement provides previously unavailable information, allowing sensitive tests of microscopic and thermodynamic collision models and a search for fluctuations expected to occur in the vicinity of the predicted QCD phase transition. We find that the observed variance of the event-by-event average transverse momentum is consistent with independent particle production modified by the known two-particle correlations due to quantum statistics and final state interactions and folded with the resolution of the NA49 apparatus. For two specific models of non-statistical fluctuations in transverse momentum, limits are derived on the fluctuation amplitude. We show that a significant part of the parameter space of a model of isospin fluctuations, predicted as a consequence of chiral symmetry restoration in a non-equilibrium scenario, is excluded by our measurement.
The two-proton correlation function at midrapidity from central Pb+Pb collisions at 158 AGeV has been measured by the NA49 experiment. The results are compared to model predictions from static thermal Gaussian proton source distributions and from the transport models RQMD and VENUS. An effective proton source size is determined by minimizing χ²/ndf between the correlation functions of the data and those calculated for the Gaussian sources, yielding 3.85 ± 0.15 (stat.) +0.60/−0.25 (syst.) fm. Both the RQMD and the VENUS model are consistent with the data within the errors in the correlation peak region.
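The fitting strategy described in this abstract can be illustrated with a minimal sketch. The code below is not the NA49 analysis: it uses a deliberately simplified Gaussian-source correlation shape (ignoring the Coulomb and Fermi-Dirac effects that shape real two-proton correlations), generates pseudo-data from an assumed input radius, and scans candidate radii for the one minimizing χ².

```python
import math

HBARC = 0.1973  # hbar*c in GeV*fm, converts momentum (GeV/c) times radius (fm)

def model_corr(q, R):
    # Simplified Gaussian-source correlation function (illustrative only);
    # the actual analysis includes Coulomb and quantum-statistics corrections.
    return 1.0 + math.exp(-(q * R / HBARC) ** 2)

def best_radius(qs, data, radii):
    """Pick the source radius minimizing chi^2 between data and model,
    mimicking the chi^2/ndf scan over Gaussian sources described above."""
    def chi2(R):
        return sum((model_corr(q, R) - d) ** 2 for q, d in zip(qs, data))
    return min(radii, key=chi2)

qs = [0.01 * i for i in range(1, 11)]        # relative momenta (GeV/c), assumed grid
data = [model_corr(q, 3.85) for q in qs]     # pseudo-data from an input radius of 3.85 fm
radii = [2.0 + 0.05 * k for k in range(80)]  # scan 2.0 ... 5.95 fm
print(best_radius(qs, data, radii))          # recovers ~3.85 fm, the input radius
```

The scan recovers the radius used to generate the pseudo-data; with real data the same loop would compare against correlation functions computed from full source models.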
The statistical production of antibaryons is considered within the canonical ensemble formulation. We demonstrate that the antibaryon suppression in small systems due to the exact baryon number conservation is rather different in the baryon-free (B=0) and baryon-rich (B>1) systems. At constant values of temperature and baryon density in the baryon-rich systems the density of the produced antibaryons is only weakly dependent on the size of the system. For realistic hadronization conditions this dependence appears to be close to B/(B+1) which is in agreement with the preliminary data of the NA49 Collaboration for the antiproton/pion ratio in nucleus-nucleus collisions at the CERN SPS energies. However, a consistent picture of antibaryon production within the statistical hadronization model has not yet been achieved. This is because the condition of constant hadronization temperature in the baryon-free systems leads to a contradiction with the data on the antiproton/pion ratio in e+e- interactions.
The experimental results on the pion, strangeness and J/psi production in high energy nuclear collisions are discussed. The anomalous energy dependence of pion and strangeness production is consistent with the hypothesis that a transition to a deconfined phase takes place between the top AGS (15 AGeV) and the SPS (200 AGeV) energies. The J/psi production systematics at the SPS can be understood assuming that the J/psi mesons are created at hadronization according to the available hadronic phase space. This new interpretation of the J/psi data allows one to establish a coherent picture of high energy nuclear collisions based on the statistical approaches of the collision early stage and hadronization. Surprisingly, the statistical model of strong interactions is successful even in the region reserved up to now for pQCD based models.
The hypothesis of statistical production of J/psi mesons at hadronization is formulated and checked against experimental data. It explains in the natural way the observed scaling behavior of the J/psi to pion ratio at the CERN SPS energies. Using the multiplicities of J/psi and eta mesons the hadronization temperature T_H = 175 MeV is found, which agrees with the previous estimates of the temperature parameter based on the analysis of the hadron yield systematics.
The validity of a recent estimate of the upper limit of charm production in central Pb+Pb collisions at 158 AGeV is critically discussed. Within a simple model we study the properties of the background subtraction procedure used to extract the charm signal from the analysis of dilepton spectra. We demonstrate that a production asymmetry between positively and negatively charged background muons, combined with a large multiplicity of signal pairs, leads to biased results. Therefore the applicability of this procedure to the analysis of nucleus-nucleus data should be reconsidered before final conclusions on the upper limit of charm production can be drawn.
A recent paper on the energy dependence of strangeness production in A+A and p+p interactions by Dunlop and Ogilvie (Phys. Rev. C 61, 031901(R) (2000)) indicates that there is a significant misunderstanding about the concept of strangeness enhancement and its role as a signal of Quark Gluon Plasma creation. In this comment we try to clarify some essential points.
Elliptic flow from nuclear collisions is a hadronic observable sensitive to the early stages of system evolution. We report first results on elliptic flow of charged particles at midrapidity in Au+Au collisions at sqrt(s_NN)=130 GeV using the STAR TPC at RHIC. The elliptic flow signal, v_2, averaged over transverse momentum, reaches values of about 6% for relatively peripheral collisions and decreases for the more central collisions. This can be interpreted as the observation of a higher degree of thermalization than at lower collision energies. Pseudorapidity and transverse momentum dependence of elliptic flow are also presented.
We present the first measurement of fluctuations from event to event in the production of strange particles in collisions of heavy nuclei. The ratio of charged kaons to charged pions is determined for individual central Pb+Pb collisions. After accounting for the fluctuations due to detector resolution and finite number statistics we derive an upper limit on genuine non-statistical fluctuations, perhaps related to a first or second order QCD phase transition. Such fluctuations are shown to be very small.
Charge fluctuations studied on an event-by-event basis have recently been suggested to provide a signal of the equilibrium quark-gluon plasma produced in heavy-ion collisions at high energies. It is argued that the fluctuations generated at the early collision stage, when the energy is released, can fake the signal.
In high energy p+p and p+p̄ interactions the mean multiplicity and transverse mass spectra of neutral mesons from eta to Upsilon (m = 0.5 - 10 GeV/c^2) and the transverse mass spectra of pions (m_T > 1 GeV/c^2) reveal a remarkable behaviour: they follow, over more than 10 orders of magnitude, a common power-law function. The parameters C and P are energy dependent, but similar for all mesons produced at the same collision energy. This scaling resembles that expected in the statistical description of hadron production: the parameter P plays the role of a temperature and the normalisation constant C is analogous to the system volume. The fundamental difference is, however, in the form of the distribution function. In order to reproduce the experimental results and preserve the basic structure of the statistical approach, the Boltzmann factor e^(-E/T) appearing in standard statistical mechanics has to be substituted by a power-law factor (E/Lambda)^(-P).
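The substitution proposed in the abstract can be made concrete with a small numerical sketch. The two factors below are exactly the ones named in the text; the parameter values T, Lambda and P are purely illustrative choices, not fitted values from the paper.

```python
import math

def boltzmann_factor(E, T):
    """Standard statistical-mechanics weight e^(-E/T)."""
    return math.exp(-E / T)

def power_law_factor(E, Lam, P):
    """Power-law weight (E/Lambda)^(-P) proposed as its replacement."""
    return (E / Lam) ** (-P)

# Illustrative parameters in GeV (assumptions for this sketch only).
T, Lam, P = 0.16, 0.3, 8.0
for E in (1.0, 3.0, 10.0):
    print(E, boltzmann_factor(E, T), power_law_factor(E, Lam, P))
```

The point of the comparison: the exponential weight dies off far faster than the power law, so only a heavy-tailed factor can track spectra spanning ten orders of magnitude in yield.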
At least in the past, banking in continental Europe has been characterised by a number of features that are quite specific to the region. They include the following: (1) banks play a strong role in their respective financial systems; (2) universal banking is prevalent; (3) not strictly profit-oriented banks play a significant role; and (4) there are considerable differences between national banking systems. It can be safely assumed that the future of banking in Europe will be shaped by three major external developments: deregulation and liberalisation; advances in information technology; and economic, financial and monetary integration. The overall consequences of these developments would be much too vast a topic to be addressed in one short paper. Therefore the present paper concentrates on the following question: Are the traditional peculiarities of the banking and financial systems of continental Europe likely to disappear as a consequence of the aforementioned external developments or are they more likely to remain in spite of these developments? The external developments affect the features specific to banking in continental Europe only indirectly and only via the strategies selected and pursued by the various players in the financial systems, notably the banks themselves, and in ways which strongly depend on the structure of the banking industry and the level of competition between banks and other providers of financial services. The paper develops an informal model of the relationships between (1) external developments, (2) bank strategies and the structure of the banking industry, and (3) the peculiarities of banking in Europe, and derives a hypothesis predicting which of the traditional peculiarities are likely to disappear and which are likely to remain. It argues that, overall, the peculiarities are not likely to disappear in the short or the medium term. First version June 2000. This version March 2001.
We study the approximability of the following NP-complete (in their feasibility recognition forms) number theoretic optimization problems: 1. Given n numbers a_1, ..., a_n ∈ Z, find a minimum gcd set for a_1, ..., a_n, i.e., a subset S ⊆ {a_1, ..., a_n} with minimum cardinality satisfying gcd(S) = gcd(a_1, ..., a_n). 2. Given n numbers a_1, ..., a_n ∈ Z, find a 1-minimum gcd multiplier for a_1, ..., a_n, i.e., a vector x ∈ Z^n with minimum max_{1≤i≤n} |x_i| satisfying ...
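Problem 1 above can be illustrated with a brute-force sketch (mine, not from the paper). It enumerates subsets in order of size and returns the first one whose gcd equals the gcd of the whole input; the exponential running time is exactly why the paper studies approximation rather than exact solutions.

```python
from math import gcd
from functools import reduce
from itertools import combinations

def minimum_gcd_set(nums):
    """Smallest subset S of nums with gcd(S) == gcd(nums).
    Brute force over subsets by increasing cardinality; exponential time,
    since the recognition form of the problem is NP-complete."""
    target = reduce(gcd, nums)
    for k in range(1, len(nums) + 1):
        for subset in combinations(nums, k):
            if reduce(gcd, subset) == target:
                return list(subset)

# gcd(6, 10, 15) = 1, but no pair of them is coprime,
# so the minimum gcd set is the full input.
print(minimum_gcd_set([6, 10, 15]))  # -> [6, 10, 15]
```

For [4, 6, 9] the answer is the pair [4, 9], since gcd(4, 9) = 1 already matches the overall gcd.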
The use of catastrophe bonds (cat bonds) implies the problem of the so called basis risk, resulting from the fact that, in contrast to traditional reinsurance, this kind of coverage cannot be a perfect hedge for the primary’s insured portfolio. On the other hand cat bonds offer some very attractive economic features: Besides their usefulness as a solution to the problems of moral hazard and default risk, an important advantage of cat bonds can be seen in the presumably lower transaction costs compared to (re)insurance products. Insurance coverage usually incurs costs of acquisition, monitoring and loss adjustment, all of which can be reduced by making use of the financial markets. Additionally, cat bonds are only weakly correlated with market risk, implying that in perfect financial markets these securities could be traded at a price including just small risk premiums. Although these aspects have been identified in economic literature, to our knowledge there has been no publication so far that formally addresses the trade-off between basis risk and transaction cost. In this paper, therefore, we introduce a simple model that enables us to analyze cat bonds and reinsurance as substitutional risk management tools in a standard insurance demand theory environment. We concentrate on the problem of basis risk versus transaction cost, and show that the availability of cat bonds affects the structure of optimal reinsurance contract design in an interesting way, as it leads to an increase of indemnity for small losses and a decrease of indemnity for large losses.
In the early 1990s, a consensus emerged among the leading experts in the field of small and micro business finance. It is based on three elements: The focus of projects should be on improving the entire financial sector of a given developing country; a commercial approach should be adopted, which implies covering costs and keeping costs as low as possible; and institutions should be created which are both able and willing to provide good financial services to the target group on a lasting basis. The starting point for this paper, which wholeheartedly endorses these three elements, is the proposition that putting these general principles into practice is much more difficult than some of their proponents seem to believe - and also more difficult than some of them have led donors to believe. The paper discusses the central issues of small and micro business financing in three areas: credit in general and the cost-effectiveness of lending methodologies in particular (Section II); savings in general and the role of deposit-taking in the growth of a target group-oriented financial institution in particular (Section III); and the process of creating viable target group-oriented financial institutions in developing countries (Section IV). We argue that donor institutions must be willing, and prepared, to play a role here which differs in important respects from their conventional role if they really wish to support sustainable financial sector development.
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that financing patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by comparing and analyzing various ways of measuring financial structure and financing patterns and by demonstrating that the surprising empirical results found by studies that relied on net flows are due to a hidden assumption. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. 
The empirical results, which use an estimation technique for determining gross flows of funds in those cases in which empirical data are not available, are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way, and moreover in a way which is largely in line with the general view of the differences between the financial systems of the countries covered in the present paper.
A financial system can only perform its function of channelling funds from savers to investors if it offers sufficient assurance to the providers of the funds that they will reap the rewards which have been promised to them. To the extent that this assurance is not provided by contracts alone, potential financiers will want to monitor and influence managerial decisions. This is why corporate governance is an essential part of any financial system. It is almost obvious that providers of equity have a genuine interest in the functioning of corporate governance. However, corporate governance encompasses more than investor protection. Similar considerations also apply to other stakeholders who invest their resources in a firm and whose expectations of later receiving an appropriate return on their investment also depend on decisions at the level of the individual firm which would be extremely difficult to anticipate and prescribe in a set of complete contingent contracts. Lenders, especially long-term lenders, are one such group of stakeholders who may also want to play a role in corporate governance; employees, especially those with high skill levels and firm-specific knowledge, are another. The German corporate governance system is different from that of the Anglo-Saxon countries because it foresees the possibility, and even the necessity, to integrate lenders and employees in the governance of large corporations. The German corporate governance system is generally regarded as the standard example of an insider-controlled and stakeholder-oriented system. Moreover, only a few years ago it was a consistent system in the sense of being composed of complementary elements which fit together well. The first objective of this paper is to show why and in which respect these characterisations were once appropriate. 
However, the past decade has seen a wave of developments in the German corporate governance system, which make it worthwhile and indeed necessary to investigate whether German corporate governance has recently changed in a fundamental way. More specifically one can ask which elements and features of German corporate governance have in fact changed, why they have changed and whether those changes which did occur constitute a structural change which would have converted the old insider-controlled system into an outsider-controlled and shareholder-oriented system and/or would have deprived it of its former consistency. It is the second purpose of this paper to answer these questions.
This paper starts out by pointing out the challenges and weaknesses which the German banking system faces according to the prevailing views among national and international observers. These challenges include a general problem of profitability and, possibly as its main reason, the strong role of public banks. These concerns raise the questions whether the facts support this assessment of a general profitability problem and whether there are reasons to expect a fundamental or structural transformation of the German banking system. The paper contains four sections. The first one presents the evidence concerning the profitability problem in a comparative, international perspective. The second section presents information about the so-called three-pillar system of German banking. What might be surprising in this context is that the group of public banks is not only the largest segment of the German banking system, but that the primary savings banks are also its financially most successful part. The German banking system is highly fragmented. This fact suggests discussing past, present and possible future consolidation in the banking system in the third section. The authors provide evidence to the effect that within-group consolidation has been going on at a rapid pace in the public and the cooperative banking groups in recent years and that this development has not yet come to an end, while within-group consolidation among the large private banks, consolidation across group boundaries at the national level, and cross-border or international consolidation have so far only happened on a limited scale and do not appear to be gaining momentum in the near future. In the last section, the authors develop their explanation for the fact that large-scale and cross-border consolidation has so far not materialized to any great extent.
Drawing on the concept of complementarity, they argue that it would be difficult to expect these kinds of mergers and acquisitions to happen within a financial system which is itself surprisingly stable, or, as one can also call it, resistant to change.
In a series of recent papers, Mark Roe and Lucian Bebchuk have developed further the concept of path dependence, combined it with concepts of evolution and used it to challenge the widespread view that the corporate governance systems of the major advanced economies are likely to converge towards the economically best system at a rapid pace. The present paper shares this skepticism, but adds several aspects which strengthen the point made by Roe and Bebchuk. The present paper argues that it is important for the topic under discussion to distinguish clearly between two arguments which can explain path dependence. One of them is based on the role of adjustment costs, and the other one uses concepts borrowed from evolutionary biology. Making this distinction is important because the two concepts of path dependence have different implications for the issue of rapid convergence to the best system. In addition, we introduce a formal concept of complementarity and demonstrate that national corporate governance systems are usefully regarded as – possibly consistent – systems of complementary elements. Complementarity is a reason for path dependence which supports the socio-biological argument. The dynamic properties of systems composed of complementary elements are such that a rapid convergence towards a universally best corporate governance system is not likely to happen. We then proceed by showing, for the case of corporate governance systems shaped by complementarity, that there is even the possibility of a convergence towards a common system which is economically inferior. And in the specific case of European integration, "inefficient convergence" of corporate governance systems is a possible future course of events. First version December 1998, this version March 2000.
Major differences between national financial systems might make a common monetary policy difficult. Since, within Europe, Germany and the United Kingdom differ most with respect to their financial systems, the present paper addresses its topic under the assumption that the United Kingdom is already part of EMU. Employing a comprehensive concept of a financial system, the author shows that there are indeed profound differences between the national financial systems of Germany and the United Kingdom, but argues that these differences are not likely to create great problems for a common monetary policy. One important difference between the two financial systems concerns the structure of the respective financial sector and, as a consequence, the strength with which a given monetary policy impulse set by the central bank is passed on to the financial sector. The other important difference concerns the typical relationship between the banks and the business sector in each country, which determines to what extent the financial sectors, and especially the banks, pass on pressure exerted on them by a monetary policy authority to their clients in the national business sector. In Germany, the central bank has a stronger influence on the financial sector than in the United Kingdom, while, for systemic reasons, German banks tend to soften monetary policy pressures on their customers more than British banks do. As far as the transmission of a restrictive monetary policy of the ECB to the real economy is concerned, these two differences tend to offset each other. This is good news for the advocates of a monetary union, as it eases the task of the ECB when it comes to determining the strength of its monetary policy measures.
Paper presented at the Conference on Workable Corporate Governance: Cross-Border Perspectives, held in Paris, March 17-19, 1997. To appear in: A. Pezard/J.-M. Thiveaud: Workable Corporate Governance: Cross-Border Perspectives, Montchrestien, Paris 1997. The paper discusses the role of various constituencies in the corporate governance of a corporation from the perspective of incomplete contracts. A strict shareholder value orientation, in the sense of a rule that firm decisions should at all times be made strictly in the interest of the present shareholders, would make it difficult for the firm to establish long-term relationships: potential partners would have to fear that, at a later stage of the co-operation, the shareholders, or a management acting only on their behalf, could exploit them because of the inevitable incompleteness of long-term contracts. One way of mitigating these problems is to put in place a corporate governance system which gives some active role to the other stakeholders or constituencies, or which makes their interests a well-defined element of the objective function of the firm. A commitment not to follow a policy of strict shareholder value maximization ex post can be efficient ex ante. Such a system would clearly differ from what is advocated by proponents of a "stakeholder approach", as it would limit the rights of the other constituencies to those which would have been agreed upon in a constitutional contract concluded between them and the founder of the firm at the time when long-term contracts are first established.
Recent results on transverse mass spectra of J/psi and psi prime mesons in central Pb+Pb collisions at 158 AGeV are considered. It is shown that these results support the hypothesis of statistical production of charmonia at hadronization and suggest early thermal freeze-out of J/psi and psi prime mesons. Based on this approach, the collective transverse velocity of the hadronizing quark gluon plasma is estimated to be <v^H_T> \approx 0.2. Predictions for transverse mass spectra of hidden and open charm mesons at SPS and RHIC are discussed.
The transverse mass spectra of Omega, J/psi and psi' in Pb+Pb collisions at 158 AGeV are studied within a hydrodynamical model of the quark gluon plasma expansion and hadronization. The model reproduces the existing data with the common hadronization parameters: temperature T=T_H = 170 MeV and average collective transverse velocity v_T = 0.2.
Results on the production of Xi and Xi-bar hyperons in central Pb+Pb interactions at 158 GeV/c per nucleon are presented. This analysis utilises a global reconstruction procedure, which allows a measurement of 4pi integrated yields to be made for the first time. Inverse slope parameters, which are determined from an exponential fit to the transverse mass spectra, are shown. Central rapidity densities are found to be 1.49 +- 0.08 and 0.33 +- 0.04 per event per unit of rapidity for Xi and Xi-bar respectively. Yields integrated to full phase space are 4.12 +- 0.02 and 0.77 +- 0.04 for Xi and Xi-bar. The ratio of Xi-bar/Xi at mid-rapidity is 0.22 +- 0.03.
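The exponential fit used above to extract an inverse slope parameter can be sketched as follows. All numbers below (the value of T, the normalisation, the 3% errors) are synthetic, chosen only to illustrate the procedure; they are not the NA49 Xi data.

```python
import numpy as np

# Thermal parametrization of the transverse mass spectrum,
#   dN/dm_T ~ m_T * exp(-m_T / T),
# so ln(y / m_T) is linear in m_T with slope -1/T.

rng = np.random.default_rng(0)
T_true = 0.250                          # GeV, illustrative inverse slope
m_xi = 1.321                            # GeV, Xi rest mass
m_t = np.linspace(m_xi, m_xi + 1.5, 30)
y = 1000.0 * m_t * np.exp(-m_t / T_true)
y_obs = y * (1 + rng.normal(0.0, 0.03, m_t.size))   # 3% toy errors

# Linearize and fit a straight line; slope = -1/T
slope, intercept = np.polyfit(m_t, np.log(y_obs / m_t), 1)
T_fit = -1.0 / slope
print(f"inverse slope T = {T_fit * 1000:.0f} MeV (input {T_true * 1000:.0f})")
```

In practice a maximum-likelihood or chi-square fit to the binned spectrum is used rather than this log-linearisation, but the extracted parameter has the same meaning.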
Measurements of charged pion and kaon production in central Pb+Pb collisions at 40, 80 and 158 AGeV are presented. These are compared with data at lower and higher energies as well as with results from p+p interactions. The mean pion multiplicity per wounded nucleon increases approximately linearly with s_NN^1/4 with a change of slope starting in the region 15-40 AGeV. The change from pion suppression with respect to p+p interactions, as observed at low collision energies, to pion enhancement at high energies occurs at about 40 AGeV. A non-monotonic energy dependence of the ratio of K^+ to pi^+ yields is observed, with a maximum close to 40 AGeV and an indication of a nearly constant value at higher energies. The measured dependences may be related to an increase of the entropy production and a decrease of the strangeness to entropy ratio in central Pb+Pb collisions in the low SPS energy range, which is consistent with the hypothesis that a transient state of deconfined matter is created above these energies. Other interpretations of the data are also discussed.
The transverse mass spectra of J/psi and psi' mesons and Omega hyperons produced in central Au+Au collisions at RHIC energies are discussed within a statistical model used successfully for the interpretation of the SPS results. The comparison of the presented model with the future RHIC data should serve as a further crucial test of the hypothesis of statistical production of charmonia at hadronization. Finally, if valid, the approach should allow an estimate of the mean transverse flow velocity at quark gluon plasma hadronization.
Experiment NA49 at the CERN SPS uses a large acceptance detector for a systematic study of particle yields and correlations in nucleus-nucleus, nucleon-nucleus and nucleon-nucleon collisions. Preliminary results for Pb+Pb collisions at 40, 80 and 158 A*GeV beam energy are shown and compared to measurements at lower and higher energies.
The energy dependence of hadron production in central Pb+Pb collisions is presented and discussed. In particular, midrapidity m_T-spectra for pi-, K-, K+, p, bar p, d, phi, Lambda and bar Lambda at 40, 80 and 158 AGeV are shown. In addition, Xi and Omega spectra are available at 158 AGeV. The spectra allow a determination of the thermal freeze-out temperature T and the transverse flow velocity beta_T at the three energies. We do not observe a significant energy dependence of these parameters; furthermore, there is no indication of early thermal freeze-out of Xi and Omega at 158 AGeV. Rapidity spectra for pi-, K-, K+ and phi at 40, 80 and 158 AGeV are shown, as well as first results on Omega rapidity distributions at 158 AGeV. The chemical freeze-out parameters T and mu_B at the three energies are determined from the total yields. The parameters are close to the expected phase boundary in the SPS energy range and above. Using the total yields of kaons and lambdas, the energy dependence of the strangeness to pion ratio is discussed. A maximum in this ratio is found at 40 AGeV. This maximum could indicate the formation of deconfined matter at energies above 40 AGeV. A search for open charm in a large sample of 158 AGeV events is presented. No signal is observed. This result is compared to several model predictions.
Directed and elliptic flow of charged pions and protons in Pb + Pb collisions at 40 and 158 A GeV
(2003)
Directed and elliptic flow measurements for charged pions and protons are reported as a function of transverse momentum, rapidity, and centrality for 40 and 158A GeV Pb + Pb collisions as recorded by the NA49 detector. Both the standard method of correlating particles with an event plane, and the cumulant method of studying multiparticle correlations are used. In the standard method the directed flow is corrected for conservation of momentum. In the cumulant method elliptic flow is reconstructed from genuine 4, 6, and 8-particle correlations, showing the first unequivocal evidence for collective motion in A+A collisions at SPS energies.
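The cumulant idea behind these flow measurements can be illustrated with a toy Monte Carlo. The flow value, multiplicity and event count below are invented for the demonstration, and the sketch stops at the second-order cumulant c2{2}; the genuine 4-, 6- and 8-particle cumulants used in the analysis follow the same logic at higher order, where they suppress non-flow correlations.

```python
import numpy as np

# Generate events whose particle azimuths follow
#   dN/dphi ~ 1 + 2 v2 cos(2(phi - Psi_RP)),
# then estimate v2 from the second-order cumulant
#   c2{2} = <<cos 2(phi_i - phi_j)>>  =  v2^2  (for pure flow).

rng = np.random.default_rng(1)
v2_true, n_events, mult = 0.05, 1000, 200

def sample_event():
    """Accept-reject sampling of one event with a random reaction plane."""
    psi = rng.uniform(0, 2 * np.pi)
    phis = np.empty(0)
    while phis.size < mult:
        cand = rng.uniform(0, 2 * np.pi, size=2 * mult)
        u = rng.uniform(0, 1 + 2 * v2_true, size=2 * mult)
        keep = cand[u < 1 + 2 * v2_true * np.cos(2 * (cand - psi))]
        phis = np.concatenate([phis, keep])
    return phis[:mult]

c2_sum = 0.0
for _ in range(n_events):
    phi = sample_event()
    q = np.sum(np.exp(2j * phi))                 # flow vector Q_2
    # average of cos 2(phi_i - phi_j) over all pairs i != j
    c2_sum += (np.abs(q) ** 2 - mult) / (mult * (mult - 1))

v2_est = np.sqrt(c2_sum / n_events)
print(f"v2{{2}} = {v2_est:.3f} (input {v2_true})")
```

Because the pair average is taken relative to particles, not to an estimated event plane, no event-plane resolution correction is needed, which is one motivation for the cumulant method.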
New results from the energy scan programme of NA49, in particular kaon production at 30 AGeV and phi production at 40 and 80 AGeV are presented. The K+/pi+ ratio shows a pronounced maximum at 30 AGeV; the kaon slope parameters are constant at SPS energies. Both findings support the scenario of a phase transition at about 30 AGeV beam energy. The phi/pi ratio increases smoothly with beam energy, showing an energy dependence similar to K-/pi-. The measured particle yields can be reproduced by a hadron gas model, with chemical freeze-out parameters on a smooth curve in the T-muB plane. The transverse spectra can be understood as resulting from a rapidly expanding, locally equilibrated source. No evidence for an earlier kinetic decoupling of heavy hyperons is found.
Strange particle production in A+A interactions at 158 AGeV is studied by the CERN experiment NA49 as a function of system size and collision geometry. Yields of charged kaons, phi and Lambda are measured and compared to those of pions in central C+C, Si+Si and centrality-selected Pb+Pb reactions. An overall increase of relative strangeness production with the size of the system is observed which does not scale with the number of participants. Arguing that rescattering of secondaries plays a minor role in small systems, the observed strangeness enhancement can be related to the space-time density of the primary nucleon-nucleon collisions.
The large acceptance and high momentum resolution as well as the significant particle identification capabilities of the NA49 experiment at the CERN SPS allow for a broad study of fluctuations and correlations in hadronic interactions. In the first part, recent results on event-by-event charge and p_t fluctuations are presented. Charge fluctuations in central Pb+Pb reactions are investigated at three different beam energies (40, 80, and 158 AGeV), while for the p_t fluctuations the focus is put on the system size dependence at 158 AGeV. In the second part, recent results on Bose-Einstein correlations of h-h- pairs in minimum bias Pb+Pb reactions at 40 and 158 AGeV, as well as of K+K+ and K-K- pairs in central Pb+Pb collisions at 158 AGeV, are shown. Additionally, other types of two-particle correlations, namely pi p, Lambda p, and Lambda Lambda correlations, have been measured by the NA49 experiment. Finally, results on the energy and system size dependence of deuteron coalescence are discussed.
Rapidity distributions for $\Lambda$ and $\bar{\Lambda}$ hyperons in central Pb-Pb collisions at 40, 80 and 158 A$\cdot$GeV and for ${\rm K}_{s}^{0}$ mesons at 158 A$\cdot$GeV are presented. The lambda multiplicities are studied as a function of collision energy together with AGS and RHIC measurements and compared to model predictions. A different energy dependence of the $\Lambda/\pi$ and $\bar{\Lambda}/\pi$ ratios is observed. The $\bar{\Lambda}/\Lambda$ ratio shows a steep increase with collision energy. Evidence for a $\bar{\Lambda}/\bar{\rm p}$ ratio greater than 1 is found at 40 A$\cdot$GeV.
To preserve the required beam quality in an e+/e- collider, very precise beam position control at each accelerating cavity is necessary. An elegant method to avoid additional length and beam disturbance is the use of signals from existing HOM dampers. The magnitude of the displacement is derived from the amplitude of a dipole mode, whereas the sign follows from the phase comparison of a dipole and a monopole HOM. To check the performance of the system, a measurement setup has been built with an antenna which can be moved with micrometer resolution to simulate the beam. Furthermore, we have developed a signal processing scheme to determine the absolute beam displacement. Measurements on the HOM-damper cell can be done in the frequency domain using a network analyser. Final measurements with the nonlinear, time-dependent signal processing circuit have to be done with very short electric pulses simulating electron bunches. Thus, we have designed a sub-nanosecond pulse generator using a clipping line and the step-recovery effect of a diode. The measurement can be done with a resolution of about 10 micrometers. Measurements and numerical calculations concerning the monitor design and the pulse generator are presented.
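The amplitude-and-phase logic of the monitor can be sketched in a few lines. The sensitivity constant, phases and offsets below are hypothetical values for the demonstration only; a real system would involve the down-conversion and filtering of actual HOM-damper signals.

```python
import numpy as np

# Near the cavity axis the dipole-mode voltage grows linearly with the
# beam offset x (and with bunch charge q), while the monopole mode
# depends only on q and serves as a phase reference. The offset
# magnitude follows from the amplitude ratio, the sign from the
# relative phase (0 or pi).

k = 0.8  # V per mm per unit charge: hypothetical dipole-mode sensitivity

def hom_signals(x_mm, q=1.0):
    """Complex phasors of the monopole and dipole HOM for offset x (mm)."""
    monopole = q * np.exp(1j * 0.3)              # fixed reference phase
    dipole = k * q * x_mm * np.exp(1j * 0.3)     # phase flips with sign(x)
    return monopole, dipole

def displacement(monopole, dipole):
    """Recover the signed offset; dividing by |monopole| cancels q."""
    magnitude = np.abs(dipole) / (k * np.abs(monopole))
    sign = np.sign(np.cos(np.angle(dipole) - np.angle(monopole)))
    return sign * magnitude

mono, dip = hom_signals(-0.042)                  # beam 42 um off axis
print(f"reconstructed offset: {displacement(mono, dip):+.3f} mm")
```

Normalising the dipole amplitude by the monopole amplitude is what makes the measurement independent of bunch charge, which is why both modes are needed.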