We study Mach shocks generated by fast partonic jets propagating through deconfined, strongly interacting matter. Our main goal is to take into account different types of collective motion during the formation and evolution of this matter. We predict a significant deformation of Mach shocks in central Au+Au collisions at RHIC and LHC energies as compared to the case of jet propagation through a static medium. The observed broadening of the near-side two-particle correlations in pseudorapidity space is explained by Bjorken-like longitudinal expansion. Three-particle correlation measurements are proposed for a more detailed study of the Mach shock waves.
We study the effects of the isovector-scalar delta meson on the equation of state (EOS) of neutron-star matter in strong magnetic fields. The EOS of neutron-star matter and the nucleon effective masses are calculated in the framework of Lagrangian field theory, solved within the mean-field approximation. The numerical results show that the delta field leads to a remarkable splitting of the proton and neutron effective masses. The strength of the delta field decreases with increasing magnetic field and becomes small at ultrastrong fields. The proton effective mass is strongly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. After including the delta field, the EOS turns out to be stiffer at B < 10^15 G but softer at stronger magnetic fields. The anomalous magnetic moment (AMM) terms affect the system only at ultrastrong magnetic fields (B > 10^19 G). In the range 10^15 G - 10^18 G the properties of neutron-star matter are found to be similar to those without magnetic fields.
The D-meson spectral density at finite temperature is obtained within a self-consistent coupled-channel approach. For the bare meson-baryon interaction, a separable potential is taken, whose parameters are fixed by the position and width of the Lambda_c (2593) resonance. The quasiparticle peak stays close to the free D-meson mass, indicating a small change in the effective mass for finite density and temperature. However, the considerable width of the spectral density implies physics beyond the quasiparticle approach. Our results indicate that the medium modifications for the D-mesons in nucleus-nucleus collisions at FAIR (GSI) will be dominantly on the width and not, as previously expected, on the mass.
Potential energy surfaces are calculated using the most advanced asymmetric two-center shell model, which allows one to obtain shell and pairing corrections that are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy ion emission. Results for 222Ra, 232U, 236Pu and 242Cm illustrate the main ideas and show, for the first time for a cluster emitter, a potential barrier obtained using the macroscopic-microscopic method.
In this increasingly complex world of learned information delivery and discovery - is it possible that the "free lunch" the Publishing world worries about could come true? Although Open Access and Institutional Repositories have not (yet) created the "scorched earth" effect many were predicting, they are slowly and inevitably gaining momentum. Broader access to top-level information via Google (and others) does indeed appear to be "good enough" for many in their search for content. But you rarely get food for free in a good quality restaurant. You pay for the selection, preparation, speed and expertise of the delivery. At the soup kitchen the food can often be filling - but the queue will be long, the wait even longer and there is no chance of silver service or à la carte. If you are unfortunate enough to have little choice then this may be a great solution. Others will be willing to pay for a more satisfactory meal. As in all aspects of life, diversification and specialisation are fundamental forces. The publishing community in the years to come will continue to develop its offerings for a variety of needs that require more than just broth. To stretch the analogy, the ongoing presence of tap water in our lives has done little to halt the extraordinary rise of bottled water as part of our staple diet. Business reality will continue to settle these types of debate; my bet is that the commercial publishers see a role as providing information that commands an intrinsic value proposition to enough customers to remain economically viable for some time to come. Inspired by the comments and ideas expounded by Dr. James O'Donnell of Georgetown University on the liblicense listserv on 20th July this year, this paper will look to expand on the analogy and identify the good, the bad - but importantly the difference in information quality and access that will result in the radically changed (but still co-existent) information landscape of tomorrow.
The economical and organizational debates about open access have mostly been concerned with journals. This is not surprising since the open access movement can be seen largely as a response to the serials crisis. Recently the open access debate has been extended to include access to government produced data in different forms. In this presentation I'll critically look at some economic and organizational issues pertaining to the open access provision of bibliographical data.
In keeping with the views of its guru, Stevan Harnad, the open access movement is only prepared to discuss the two models of the "green road" and the "golden road" as the sole alternatives for the future of scientific publishing. The "golden road" is put forward as the royal road for solving the journals crisis. However, no one has drawn attention to the fact that the golden road represents a purely socialist solution to a free-market problem and thus continues the "samizdat" tradition of underground literature in the former Eastern bloc. The present paper reveals the alarmingly low level at which the open access movement intends to publish top-class results from science and research, and the low degree of professionalism with which they are satisfied.
The lecture was given at the 5th Frankfurt Scientific Symposium (22-23 October 2005). Unfortunately, the video can only be viewed with Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac (see document 1.html). All conference contributions are available at http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1992/.
Within the scenario of large extra dimensions, the Planck scale is lowered to values soon accessible. Among the predicted effects, the production of TeV mass black holes at the LHC is one of the most exciting possibilities. Though the final phases of the black hole's evaporation are still unknown, the formation of a black hole remnant is a theoretically well motivated expectation. We analyze the observables emerging from a black hole evaporation with a remnant instead of a final decay. We show that the formation of a black hole remnant yields a signature which differs substantially from a final decay. We find the total transverse momentum of the black hole event to be significantly dominated by the presence of a remnant mass, providing a strong experimental signature for black hole remnant formation.
Probing the density dependence of the symmetry potential in intermediate energy heavy ion collisions
(2005)
Based on the ultrarelativistic quantum molecular dynamics (UrQMD) model, the effects of the density-dependent symmetry potential for baryons and of the Coulomb potential for produced mesons are investigated for neutron-rich heavy ion collisions at intermediate energies. The calculated Delta-/Delta++ and pi-/pi+ production ratios show a clear sensitivity to the density-dependent symmetry potential that varies with beam energy and is strongest for the pi-/pi+ ratio close to the pion production threshold. The Coulomb potential of the mesons changes the transverse momentum distribution of the pi-/pi+ ratio significantly, though it alters the pi- and pi+ total yields only slightly. The pi- yields, especially at midrapidity or at low transverse momenta, and the pi-/pi+ ratios at low transverse momenta are shown to be sensitive probes of the density-dependent symmetry potential in dense nuclear matter. The effect of the density-dependent symmetry potential on the production of both K0 and K+ mesons is also investigated.
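The isospin sensitivity of the pi-/pi+ ratio can be illustrated with the classic first-chance Delta-isobar counting estimate, pi-/pi+ ≈ (5N^2 + NZ)/(5Z^2 + NZ). This back-of-the-envelope formula is not the UrQMD calculation of the paper, just a sketch of why neutron-rich systems push the ratio above unity:

```python
def pion_ratio_isobar(N, Z):
    """First-chance Delta-isobar estimate of the pi-/pi+ yield ratio
    for a collision of two identical nuclei with N neutrons and Z protons.
    Counts n+n, n+p, p+p first collisions weighted by the Delta decay
    branching ratios into pi- and pi+."""
    return (5.0 * N**2 + N * Z) / (5.0 * Z**2 + N * Z)

# Au+Au (Z = 79, N = 118): the neutron excess pushes the ratio well above 1
print(round(pion_ratio_isobar(118, 79), 2))  # 1.95
```

For a symmetric system (N = Z) the estimate reduces to exactly unity, in line with the isospin-symmetric baseline discussed in the abstract.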
In this study, we analyze the recently proposed charge transfer fluctuations within a finite pseudo-rapidity space. As the charge transfer fluctuation is a measure of the local charge correlation length, it is capable of detecting inhomogeneity in the hot and dense matter created by heavy ion collisions. We predict that going from peripheral to central collisions, the charge transfer fluctuations at midrapidity should decrease substantially while the charge transfer fluctuations at the edges of the observation window should decrease by a small amount. These are consequences of having a strongly inhomogeneous matter where the QGP component is concentrated around midrapidity. We also show how to constrain the values of the charge correlation lengths in both the hadronic phase and the QGP phase using the charge transfer fluctuations.
The regeneration of hadronic resonances is discussed for heavy ion collisions at SPS and SIS-300 energies. The time evolution of the Delta, rho and phi resonances is investigated. Special emphasis is put on resonance regeneration after chemical freeze-out. The emission time spectra of experimentally detectable resonances are explored.
The influence of the isospin-independent, isospin- and momentum-dependent equation of state (EoS), as well as of the Coulomb interaction, on pion production in intermediate energy heavy ion collisions (HICs) is studied for both isospin-symmetric and neutron-rich systems. The Coulomb interaction plays an important role in the reaction dynamics and strongly influences the rapidity and transverse momentum distributions of charged pions. It even leads to a pi-/pi+ ratio deviating slightly from unity for isospin-symmetric systems. The Coulomb interaction between mesons and baryons is also crucial for reproducing the proper pion flow, since it visibly changes the behavior of the directed and elliptic flow components of the pions. The EoS can be investigated more thoroughly in neutron-rich systems if multiple probes are measured simultaneously: for example, the rapidity and transverse momentum distributions of the charged pions, the pi-/pi+ ratio, the various pion flow components, and the difference between the pi+ and pi- flows. A new sensitive observable is proposed to probe the symmetry potential energy at high densities, namely the transverse momentum distribution of the elliptic flow difference Delta v_2^(pi+ - pi-)(p_t^(c.m.)).
It is investigated whether the canonical suppression associated with the exact conservation of a U(1) charge can be reproduced correctly by current transport models. To this end, a pion gas with a volume-limited cross section for kaon production and annihilation is simulated within two different transport prescriptions for realizing the inelastic collisions. It is found that both models can indeed dynamically account for the canonical suppression of the yields of rare strange particles.
Longitudinal hadron spectra from proton-proton (pp) and nucleus-nucleus (AA) collisions from E_lab = 2 AGeV to sqrt s = 200 AGeV are investigated. The widths of the rapidity spectra for various particle species increase monotonically with energy. The present calculation indicates no sign of a step-like behaviour as expected from the kaon transverse mass systematics. For pions, the transport simulation is consistent with a Landau-type scaling of the rapidity widths, both in central AA reactions and in pp collisions. However, other hadron species do not follow the Landau scaling. The present model predicts a decreasing rapidity width with particle mass for newly produced particles, not supporting a Landau-type flow interpretation.
Transverse hadron spectra from proton-proton, proton-nucleus and nucleus-nucleus collisions from 2 AGeV to 21.3 ATeV are investigated within two independent transport approaches (HSD and UrQMD). For central Au+Au (Pb+Pb) collisions at energies above E_lab ~ 5 AGeV, the measured K^+- transverse mass spectra have a larger inverse slope parameter than expected from the default calculations. The additional pressure - as suggested by lattice QCD calculations at finite quark chemical potential mu_q and temperature T - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions. This is supported by a non-monotonic energy dependence of v2/pT in the present transport model.
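The inverse slope parameter quoted here is conventionally defined by fitting the transverse-mass spectrum to dN/(mT dmT) ~ exp(-mT/T); a minimal sketch of such an extraction on a synthetic spectrum (the temperature and grid values below are illustrative, not the measured data):

```python
import math

def inverse_slope(mt_values, spectrum):
    """Extract the inverse slope parameter T from a transverse-mass
    spectrum assumed to follow dN/(mT dmT) ~ exp(-mT/T): a straight-line
    least-squares fit of log(spectrum) vs mT has slope -1/T."""
    n = len(mt_values)
    xs, ys = mt_values, [math.log(s) for s in spectrum]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
    return -1.0 / slope

# synthetic kaon-like spectrum with T = 0.22 GeV (illustrative numbers)
T_true = 0.22
mt = [0.5 + 0.1 * i for i in range(15)]        # mT grid in GeV
spec = [math.exp(-m / T_true) for m in mt]
print(round(inverse_slope(mt, spec), 3))       # 0.22
```

In practice the fit is done over a restricted mT window and with statistical weights; the sketch only shows the defining relation between the spectrum and T.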
Within the ADD model, we elaborate on an idea by Vacavant and Hinchliffe and show quantitatively how to determine the fundamental scale of TeV-gravity and the number of compactified extra dimensions from data at the LHC. We demonstrate that the ADD model leads to strong correlations between the missing E_T carried away by gravitons at different center of mass energies. This correlation puts strong constraints on this model of extra dimensions, if probed at sqrt s = 5.5 TeV and sqrt s = 14 TeV at the LHC.
The cumulant method is applied to study elliptic flow (v_2) in Au+Au collisions at sqrt s = 200 AGeV with the UrQMD model. In this approach, the true event plane is known, and both non-flow effects and event-by-event spatial (epsilon) and v_2 fluctuations are present. Qualitatively, the hierarchy of v_2's from two-, four- and six-particle cumulants is consistent with the STAR data; however, the magnitude of v_2 in the UrQMD model is only 60% of the data. We find that the four- and six-particle cumulants are good measures of the real elliptic flow over a wide range of centralities, except for the most central and very peripheral events, where the cumulant method is affected by the v_2 fluctuations. In mid-central collisions, the four- and six-particle cumulants are shown to give a good estimate of the true differential v_2, especially at large transverse momentum, where the two-particle cumulant method is heavily affected by non-flow effects.
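The simplest member of the cumulant family mentioned above is the two-particle cumulant, v2{2} = sqrt(<<cos 2(phi_i - phi_j)>>) averaged over all distinct particle pairs. The toy Monte Carlo below (not UrQMD; pure flow with the event plane fixed at zero, no non-flow, all numbers illustrative) recovers the input v2:

```python
import math, random

def sample_phi(v2, rng):
    """Sample an azimuthal angle from dN/dphi ~ 1 + 2*v2*cos(2*phi)
    (event plane at 0) by rejection sampling."""
    while True:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2) <= 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            return phi

def v2_two_particle(events):
    """Two-particle cumulant estimate:
    v2{2} = sqrt(<<cos 2(phi_i - phi_j)>>), pooled over all pairs."""
    num, npairs = 0.0, 0
    for phis in events:
        for i in range(len(phis)):
            for j in range(i + 1, len(phis)):
                num += math.cos(2.0 * (phis[i] - phis[j]))
                npairs += 1
    return math.sqrt(num / npairs)

rng = random.Random(42)
events = [[sample_phi(0.1, rng) for _ in range(50)] for _ in range(400)]
print(v2_two_particle(events))  # close to the input v2 = 0.1
```

In real data non-flow correlations contaminate exactly this pair average, which is why the abstract relies on the four- and six-particle cumulants for the differential measurement.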
We predict transverse and longitudinal momentum spectra and yields of rho0 and omega mesons reconstructed from hadron correlations in C+C reactions at 2 AGeV. The rapidity and pT distributions of reconstructable rho0 mesons differ strongly from the primary distributions, while the omega distributions are only weakly modified. We discuss the temporal and spatial distributions of the particles emitted in the hadron channel. Finally, we report on the mass shift of the rho0 due to its coupling to the N*(1520), which is observable in both the di-lepton and the pi pi channel. Our calculations can be tested with the HADES experiment at GSI, Darmstadt.
Trapping black hole remnants
(2005)
Large extra dimensions lower the Planck scale to values soon accessible. The production of TeV mass black holes at the LHC is one of the most exciting predictions. However, the final phases of the black hole's evaporation are still unknown and there are strong indications that a black hole remnant can be left. Since a certain fraction of such objects would be electrically charged, we argue that they can be trapped. In this paper, we examine the occurrence of such charged black hole remnants. These trapped remnants are of high interest, as they could be used to closely investigate the evaporation characteristics. Due to the absence of background from the collision region and the controlled initial state, the signal would be very clear. This would allow one to extract information about the late stages of the evaporation process with high precision.
The recently proposed baryon-strangeness correlation (C_BS) is studied with a string-hadronic transport model (UrQMD) for various energies from E_lab=4 AGeV to \sqrt s=200 AGeV. It is shown that rescattering among secondaries cannot mimic the predicted correlation pattern expected for a Quark-Gluon Plasma. However, we find a strong increase of the C_BS correlation function with decreasing collision energy both for pp and Au+Au/Pb+Pb reactions. For Au+Au reactions at the top RHIC energy (\sqrt s=200 AGeV), the C_BS correlation is constant for all centralities and compatible with the pp result. With increasing width of the rapidity window, C_BS follows roughly the shape of the baryon rapidity distribution. We suggest studying the energy and centrality dependence of C_BS, which allows one to gain information on the onset of the deconfinement transition in temperature and volume.
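For reference, the baryon-strangeness correlation is defined as C_BS = -3(<BS> - <B><S>)/(<S^2> - <S>^2), with B and S the net baryon number and net strangeness per event inside the acceptance. A minimal sketch of its evaluation (the toy events are made up for illustration):

```python
def c_bs(events):
    """Baryon-strangeness correlation coefficient
    C_BS = -3 (<BS> - <B><S>) / (<S^2> - <S>^2),
    where each event contributes its net baryon number B and
    net strangeness S inside the acceptance window."""
    n = len(events)
    mb = sum(b for b, s in events) / n
    ms = sum(s for b, s in events) / n
    mbs = sum(b * s for b, s in events) / n
    ms2 = sum(s * s for b, s in events) / n
    return -3.0 * (mbs - mb * ms) / (ms2 - ms * ms)

# hand-made toy events: (net baryon number, net strangeness)
events = [(2, -1), (0, 1), (1, 0), (1, 0)]
print(c_bs(events))  # 3.0
```

In a quark-gluon plasma strangeness is carried by quarks, which also carry baryon number, so C_BS is expected near 1; in a hadron gas the carriers differ and the value deviates, which is what makes the observable interesting.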
We analyze longitudinal pion spectra from E_lab = 2 AGeV to sqrt s_NN = 200 GeV within Landau's hydrodynamical model. From the measured data on the widths of the pion rapidity spectra, we extract the sound velocity c_s in the early stage of the reactions. It is found that the sound velocity has a local minimum (indicating a softest point in the equation of state, EoS) at E_beam = 30 AGeV. This softening of the EoS is compatible with the assumption of the formation of a mixed phase at the onset of deconfinement.
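The extraction described here rests on Landau's relation between the rapidity width and the sound velocity, sigma_y^2 = (8/3) c_s^2/(1 - c_s^4) ln(sqrt(s_NN)/(2 m_p)). Assuming that form (treat the constants as an assumption of this sketch), the numerical inversion for c_s^2 is straightforward:

```python
import math

M_P = 0.938  # proton mass in GeV

def sigma_y2(cs2, sqrt_snn):
    """Landau-model rapidity variance for squared sound velocity cs2."""
    return (8.0 / 3.0) * cs2 / (1.0 - cs2 * cs2) * math.log(sqrt_snn / (2.0 * M_P))

def extract_cs2(sigma_measured, sqrt_snn):
    """Invert the Landau relation for cs^2 by bisection on (0, 1);
    sigma_y2 is monotonically increasing in cs2 on this interval."""
    lo, hi = 1e-6, 1.0 - 1e-6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if sigma_y2(mid, sqrt_snn) < sigma_measured ** 2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# consistency check: an ideal-gas EoS (cs^2 = 1/3) at sqrt(s_NN) = 17.3 GeV
sigma = math.sqrt(sigma_y2(1.0 / 3.0, 17.3))
print(round(extract_cs2(sigma, 17.3), 4))  # 0.3333
```

Applied to the measured widths, a dip of the extracted c_s^2 below the ideal-gas value 1/3 around E_beam = 30 AGeV is what the abstract identifies as the softest-point signal.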
The results from the STAR Collaboration on directed flow (v1), elliptic flow (v2), and the fourth harmonic (v4) in the anisotropic azimuthal distribution of particles from Au+Au collisions at sqrt[sNN]=200GeV are summarized and compared with results from other experiments and theoretical models. Results for identified particles are presented and fit with a blast-wave model. Different anisotropic flow analysis methods are compared and nonflow effects are extracted from the data. For v2, scaling with the number of constituent quarks and parton coalescence are discussed. For v4, scaling with v2^2 and quark coalescence are discussed.
Midrapidity open charm spectra from direct reconstruction of D0(D0-bar)-->K± pi ± in d+Au collisions and indirect electron-positron measurements via charm semileptonic decays in p+p and d+Au collisions at sqrt[sNN]=200 GeV are reported. The D0(D0-bar) spectrum covers a transverse momentum (pT) range of 0.1<pT<3 GeV/c, whereas the electron spectra cover a range of 1<pT<4 GeV/c. The electron spectra show approximate binary collision scaling between p+p and d+Au collisions. From these two independent analyses, the differential cross section per nucleon-nucleon binary interaction at midrapidity for open charm production from d+Au collisions at BNL RHIC is d sigma NNcc-bar/dy=0.30±0.04(stat)±0.09(syst) mb. The results are compared to theoretical calculations. Implications for charmonium results in A+A collisions are discussed.
We present the first large-acceptance measurement of event-wise mean transverse momentum <pt> fluctuations for Au-Au collisions at nucleon-nucleon center-of-momentum collision energy sqrt[sNN] = 130 GeV. The observed nonstatistical <pt> fluctuations substantially exceed in magnitude fluctuations expected from the finite number of particles produced in a typical collision. The r.m.s. fractional width excess of the event-wise <pt> distribution is 13.7±0.1(stat)±1.3(syst)% relative to a statistical reference, for the 15% most-central collisions and for charged hadrons within pseudorapidity |eta| < 1, full 2 pi azimuth, and 0.15 <= pt <= 2 GeV/c. The width excess varies smoothly but nonmonotonically with collision centrality and does not display rapid changes with centrality which might indicate the presence of critical fluctuations. The reported <pt> fluctuation excess is qualitatively larger than those observed at lower energies and differs markedly from theoretical expectations. Contributions to <pt> fluctuations from semihard parton scattering in the initial state and dissipation in the bulk colored medium are discussed.
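The "fractional width excess" compares the width of the event-wise <pt> distribution with the finite-multiplicity statistical reference sigma_stat^2 = sigma_incl^2/<N>. A toy sketch of that comparison (independent particles, so the excess comes out consistent with zero; all numbers illustrative):

```python
import math, random

def width_excess(events):
    """Fractional r.m.s. width excess of the event-wise <pt> distribution
    relative to the statistical reference sigma_stat^2 = sigma_incl^2/<N>."""
    means = [sum(ev) / len(ev) for ev in events]
    all_pt = [pt for ev in events for pt in ev]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    n_mean = sum(len(ev) for ev in events) / len(events)
    sigma_stat = math.sqrt(var(all_pt) / n_mean)
    sigma_meas = math.sqrt(var(means))
    return sigma_meas / sigma_stat - 1.0

rng = random.Random(7)
# independently sampled exponential pt spectrum -> no dynamical
# correlations, so the width excess should be close to zero
events = [[rng.expovariate(2.0) for _ in range(100)] for _ in range(2000)]
print(abs(width_excess(events)) < 0.1)  # True
```

A dynamical excess like the measured 13.7% would appear here as sigma_meas visibly exceeding sigma_stat, e.g. if particles within an event shared a common fluctuating temperature.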
The short-lived K(892)* resonance provides an efficient tool to probe properties of the hot and dense medium produced in relativistic heavy-ion collisions. We report measurements of K* in sqrt[sNN]=200GeV Au+Au and p+p collisions reconstructed via its hadronic decay channels K(892)*0-->K pi and K(892)*±-->K0S pi ± using the STAR detector at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. The K*0 mass has been studied as a function of pT in minimum bias p+p and central Au+Au collisions. The K* pT spectra for minimum bias p+p interactions and for Au+Au collisions in different centralities are presented. The K*/K yield ratios for all centralities in Au+Au collisions are found to be significantly lower than the ratio in minimum bias p+p collisions, indicating the importance of hadronic interactions between chemical and kinetic freeze-outs. A significant nonzero K*0 elliptic flow (v2) is observed in Au+Au collisions and is compared to the K0S and Lambda v2. The nuclear modification factor of K* at intermediate pT is similar to that of K0S but different from Lambda . This establishes a baryon-meson effect over a mass effect in the particle production at intermediate pT (2<pT <= 4GeV/c).
We present a systematic analysis of two-pion interferometry in Au+Au collisions at sqrt[sNN]=200GeV using the STAR detector at Relativistic Heavy Ion Collider. We extract the Hanbury-Brown and Twiss radii and study their multiplicity, transverse momentum, and azimuthal angle dependence. The Gaussianness of the correlation function is studied. Estimates of the geometrical and dynamical structure of the freeze-out source are extracted by fits with blast-wave parametrizations. The expansion of the source and its relation with the initial energy density distribution is studied.
Correlations in the hadron distributions produced in relativistic Au+Au collisions are studied in the discrete wavelet expansion method. The analysis is performed in the space of pseudorapidity (|eta| <= 1) and azimuth (full 2 pi) in bins of transverse momentum (pt) from 0.14 to 2.1 GeV/c. In peripheral Au+Au collisions a correlation structure ascribed to minijet fragmentation is observed. It evolves with collision centrality and pt in a way not seen before, which suggests strong dissipation of minijet fragmentation in the longitudinally expanding medium.
The challenging intricacies of strongly correlated electronic systems necessitate the use of a variety of complementary theoretical approaches. In this thesis, we analyze two distinct aspects of strong correlations and develop further or adapt suitable techniques. First, we discuss magnetization transport in insulating one-dimensional spin rings described by a Heisenberg model in an inhomogeneous magnetic field. Due to quantum mechanical interference of magnon wave functions, persistent magnetization currents are shown to exist in such a geometry in analogy to persistent charge currents in mesoscopic normal metal rings. The second, longer part is dedicated to a new aspect of the functional renormalization group technique for fermions. By decoupling the interaction via a Hubbard-Stratonovich transformation, we introduce collective bosonic variables from the beginning and analyze the hierarchy of flow equations for the coupled field theory. The possibility of a cutoff in the momentum transfer of the interaction leads to a new flow scheme, which we will refer to as the interaction cutoff scheme. Within this approach, Ward identities for forward scattering problems are conserved at every instant of the flow leading to an exact solution of a whole hierarchy of flow equations. This way the known exact result for the single-particle Green's function of the Tomonaga-Luttinger model is recovered.
Market discipline for financial institutions can be imposed not only from the liability side, as has often been stressed in the literature on the use of subordinated debt, but also from the asset side. This will be particularly true if good lending opportunities are in short supply, so that banks have to compete for projects. In such a setting, borrowers may demand that banks commit to monitoring by requiring that they use some of their own capital in lending, thus creating an asset market-based incentive for banks to hold capital. Borrowers can also provide banks with incentives to monitor by allowing them to reap some of the benefits from the loans, which accrue only if the loans are in fact paid off. Since borrowers do not fully internalize the cost of raising capital to the banks, the level of capital demanded by market participants may be above the one chosen by a regulator, even when capital is a relatively costly source of funds. This implies that capital requirements may not be binding, as recent evidence seems to indicate. JEL Classification: G21, G38
We explore the macro/finance interface in the context of equity markets. In particular, using half a century of Livingston expected business conditions data we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a statistically and economically significant counter-cyclical fashion: depressed expected business conditions are associated with high expected excess returns. Moreover, inclusion of expected business conditions in otherwise standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R2. Expected business conditions retain predictive power even after controlling for an important and recently introduced non-financial predictor, the generalized consumption/wealth ratio, which accords with the view that expected business conditions play a role in asset pricing different from and complementary to that of the consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, while time-varying consumption/wealth may capture time-varying risk aversion. JEL Classification: G12
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model as they are useful, but not used in equilibrium because of basis risk. JEL Classification: D82, G22
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
EU financial integration : is there a 'Core Europe'? ; evidence from a cluster-based approach
(2005)
Numerous recent studies, e.g. EU Commission (2004a), Baele et al. (2004), Adam et al. (2002), and the research pooled in ECB-CFS (2005) and Gaspar, Hartmann, and Sleijpen (2003), have documented progress in EU financial integration from a micro-level view. This paper contributes to this research by identifying groups of financially integrated countries from a holistic, macro-level view. It calculates cross-sectional dispersions, and innovates by applying an inter-temporal cluster analysis to eight euro area countries for the period 1995-2002. The indicators employed represent the money, government bond and credit markets. Our results show that euro countries were divided into two stable groups of financially more closely integrated countries in the pre-EMU period. Back then, geographic proximity and country size might have played a role. This situation has changed remarkably with the euro's introduction. EMU has led to a shake-up both in the number and composition of groups. The evidence puts a question mark behind using Germany as a benchmark in the post-EMU period. The findings suggest as well that financial integration takes place in waves: stable periods and periods of intense transition alternate. Based on the notion of 'maximum similarity', the results suggest that there exist 'maximum similarity barriers'. It takes extraordinary events, such as EMU, to push the degree of financial integration beyond these barriers. The research encourages policymakers to move forward courageously in the post-FSAP era, and provides comfort that the substantial differences between the current and potentially new euro states can be overcome. The analysis could be extended to the new EU member countries, to the global level, and to additional indicators.
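The cross-sectional dispersion indicator used in such studies is simply the standard deviation of a market indicator across countries in each period, with falling dispersion read as integration. A minimal sketch with made-up bond yields (not the paper's data; country codes and numbers are illustrative):

```python
def cross_sectional_dispersion(panel):
    """Cross-sectional standard deviation of an indicator (e.g. a 10-year
    government bond yield) across countries, one value per period.
    A fall in dispersion over time is read as financial integration."""
    out = []
    for period in panel:
        vals = list(period.values())
        m = sum(vals) / len(vals)
        out.append((sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5)
    return out

# illustrative (made-up) 10y bond yields, pre- and post-EMU
panel = [
    {"DE": 6.8, "FR": 7.5, "IT": 11.8, "ES": 10.0},   # 1995
    {"DE": 4.6, "FR": 4.7, "IT": 4.9, "ES": 4.8},     # 2002
]
disp = cross_sectional_dispersion(panel)
print(disp[1] < disp[0])  # True: yields converged
```

The paper's cluster analysis then groups countries by the similarity of such indicators over time, rather than looking only at the pooled dispersion.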
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that despite important reforms and substantial changes of individual elements of the German corporate governance system the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in the governance undermines the stability of the corporate governance system in Germany. Therefore a breakdown of the traditional system leading to a control vacuum or a fundamental change to a capital market-based system could be in the offing.
Small and medium-sized firms typically obtain capital via bank financing. They often rely on a mixture of relationship and arm’s-length banking. This paper explores the reasons for the dominance of heterogeneous multiple banking systems. We show that the incidence of inefficient credit termination and subsequent firm liquidation is contingent on the borrower’s quality and on the relationship bank’s information precision. Generally, heterogeneous multiple banking leads to fewer inefficient credit decisions than monopoly relationship lending or homogeneous multiple banking, provided that the relationship bank’s fraction of total firm debt is not too large.
This paper makes an attempt to present the economics of credit securitisation in a non-technical way, starting from the description and the analysis of a typical securitisation transaction. The paper sketches a theoretical explanation for why tranching, or nonproportional risk sharing, which is at the heart of securitisation transactions, may allow commercial banks to maximize their shareholder value. However, the analysis also makes clear that the conditions under which credit securitisation enhances welfare are fairly restrictive, and require not only an active role of the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort.
We derive the effects of credit risk transfer (CRT) markets on real sector productivity and on the volume of financial intermediation in a model where banks choose their optimal degree of CRT and monitoring. We find that CRT increases productivity in the up-market real sector but decreases it in the low-end segment. If optimal, CRT unambiguously fosters financial deepening, i.e., it reduces credit-rationing in the economy. These effects rely upon the ability of banks to commit to the optimal CRT at the funding stage. The optimal degree of CRT depends on the combination of moral hazard, general riskiness, and the cost of monitoring in non-monotonic ways.
We provide insights into determinants of the rating level of 371 issuers which defaulted in the years 1999 to 2003, and into the leader-follower relationship between Moody’s and S&P. The evidence for the rating level suggests that Moody’s assigns lower ratings than S&P for all observed periods before the default event. Furthermore, we observe two-way Granger causality, which signifies information flow between the two rating agencies. Since lagged rating changes influence the magnitude of the agencies’ own rating changes, it would appear that the two rating agencies apply a policy of taking a severe downgrade through several mild downgrades. Further, our analysis of rating changes shows that issuers with headquarters in the US are less sharply downgraded than non-US issuers. For rating changes by Moody’s we also find that larger issuers seem to be downgraded less severely than smaller issuers.
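Two-way Granger causality of this kind is tested with lagged regressions: x Granger-causes y if lagged x reduces the residual sum of squares of a regression of y on its own lags. A one-lag, pure-Python F-test sketch on synthetic series (not the rating data; the series and coefficients are hypothetical):

```python
import random

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit of y on the columns of X
    (rows are observations), via normal equations + Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
         for a in range(k)]
    c = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):                       # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for cc in range(col, k):
                A[r][cc] -= f * A[col][cc]
            c[r] -= f * c[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (c[r] - sum(A[r][cc] * beta[cc]
                              for cc in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(k))) ** 2
               for i in range(n))

def granger_f(x, y):
    """One-lag Granger test: does x_{t-1} help predict y_t beyond y_{t-1}?
    Returns the F statistic of the single restriction."""
    n = len(y) - 1
    Xr = [[1.0, y[t - 1]] for t in range(1, len(y))]             # restricted
    Xu = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]   # unrestricted
    yt = [y[t] for t in range(1, len(y))]
    rss_r, rss_u = ols_rss(Xr, yt), ols_rss(Xu, yt)
    return (rss_r - rss_u) / (rss_u / (n - 3))

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(300)]
y = [0.0]
for t in range(1, 300):
    y.append(0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.gauss(0, 0.5))
print(granger_f(x, y) > 10.0)  # True: x clearly Granger-causes y
```

Running the test in both directions, as done for the two agencies' rating changes, is what establishes the two-way causality reported in the abstract.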