The effects of vocational training programmes on the duration of unemployment in Eastern Germany
(2005)
Vocational training programmes have been the most important active labour market policy instrument in Germany in recent years. However, the still unsatisfactory situation of the labour market has raised doubts about the efficiency of these programmes. In this paper, we analyse the effects of participation in vocational training programmes on the duration of unemployment in Eastern Germany. Based on administrative data of the Federal Employment Administration covering October 1999 to December 2002, we apply a bivariate mixed proportional hazards model. This allows us to use information on the timing of treatment, as well as observable and unobservable influences, to identify the treatment effects. The results show that participation in vocational training prolongs the unemployment duration in Eastern Germany. Furthermore, the results suggest that locking-in effects are a serious problem of vocational training programmes. JEL Classification: J64, J24, I28, J68
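For orientation, the timing-of-events approach referred to here is usually cast as a mixed proportional hazards specification of roughly the following form (a generic sketch, not the paper's exact specification; all symbols are illustrative):

```latex
% Hazard of leaving unemployment at duration t, given covariates x,
% programme start time t_p, and unobserved heterogeneity v_u:
\theta_u(t \mid x, t_p, v_u) = \lambda_u(t)\,
  \exp\!\bigl(x'\beta_u + \delta\,\mathbf{1}\{t > t_p\}\bigr)\, v_u
```

Here \lambda_u(t) is the baseline hazard, \delta captures the effect of programme participation, and v_u is allowed to be correlated with the unobserved heterogeneity of the hazard into treatment; it is this joint (bivariate) structure, together with the random timing of treatment, that delivers identification of the treatment effect.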
Previous empirical studies of job creation schemes in Germany have shown that the average effects for the participating individuals are negative. However, we find that this is not true for all strata of the population. Identifying the individual characteristics responsible for this effect heterogeneity, and using this information for a better allocation of individuals, therefore offers some scope for improving programme efficiency. We present several stratification strategies and discuss the resulting effect heterogeneity. Our findings show that job creation schemes neither harm nor improve the labour market chances of most groups. Exceptions are long-term unemployed men in West Germany and long-term unemployed women in East and West Germany, who benefit from participation in terms of higher employment rates. JEL: C13, J68, H43
Innovations are a key factor in ensuring the competitiveness of establishments as well as in enhancing the growth and wealth of nations. But more than any other economic activity, decisions about innovations are plagued by failures of the market mechanism. As a response, public instruments have been implemented to stimulate private innovation activities. The effectiveness of these measures, however, is ambiguous and calls for an empirical evaluation. In this paper we make use of the IAB Establishment Panel and apply various microeconometric methods to estimate the effect of public measures on the innovation activities of German establishments. We find that neglecting sample selection due to observable as well as unobservable characteristics leads to an overestimation of the treatment effect, and that there are considerable differences with regard to size class and between West and East German establishments.
In recent methodological work, the well-known ACD approach, originally introduced by Engle and Russell (1998), has been supplemented by an unobservable stochastic process which accompanies the underlying duration process via a discrete mixture of distributions. The Mixture ACD model, emanating from the specialized proposal of De Luca and Gallo (2004), has proved to be a suitable tool for describing financial duration data. Until now it has been common practice to use one and the same family of ordinary distributions. We propose instead a richly parameterized, comprehensive family of distributions which allows different distributional idiosyncrasies to interact. JEL classification: C41, C22, C25, C51, G14.
We propose a new framework for modelling the time dependence of duration processes observed on financial markets. The pioneering ACD model introduced by Engle and Russell (1998) is extended such that the duration process is accompanied by an unobservable stochastic process. The Discrete Mixture ACD framework provides a general methodology which puts this idea into practice. It is established by introducing a discrete-valued latent regime variable, which can be justified in the light of recent market microstructure theories. The empirical application demonstrates its ability to capture specific characteristics of intraday transaction durations where alternative approaches fail. JEL classification: C41, C22, C25, C51, G14.
We point out that hadron-induced atmospheric air showers from ultra-high energy cosmic rays are sensitive to QCD interactions at very small momentum fractions x where nonlinear effects should become important. The leading partons from the projectile acquire large random transverse momenta as they pass through the strong field of the target nucleus, which breaks up their coherence. This leads to a steeper x_F-distribution of leading hadrons as compared to low energy collisions, which in turn reduces the position of the shower maximum Xmax. We argue that high-energy hadronic interaction models should account for this effect, caused by the approach to the black-body limit, which may shift fits of the composition of the cosmic ray spectrum near the GZK cutoff towards lighter elements. We further show that present data on Xmax(E) exclude that the rapid ~ 1/x^0.3 growth of the saturation boundary (which is compatible with RHIC and HERA data) persists up to GZK cutoff energies. Measurements of pA collisions at LHC could further test the small-x regime and advance our understanding of high density QCD significantly.
Sharing of substructures like subterms and subcontexts is a common method for the space-efficient representation of terms; it allows, for example, exponentially large terms to be represented in polynomial space, or terms with iterated substructures to be represented compactly. We present singleton tree grammars as a general formalism for the treatment of sharing in terms. Singleton tree grammars (STGs) are recursion-free context-free tree grammars without alternatives for non-terminals and with at most unary second-order nonterminals. STGs generalize Plandowski's singleton context-free grammars to terms (trees). We show that testing whether two different nonterminals in an STG generate the same term can be done in polynomial time. This implies that the equality test for terms with shared terms and contexts, where composition of contexts is permitted, can be done in polynomial time in the size of the representation, which enables polynomial-time algorithms for terms that exploit sharing. We hope that this technique will lead to improved upper complexity bounds for variants of second-order unification algorithms, in particular for variants of context unification and bounded second-order unification.
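The basic effect of subterm sharing can be illustrated with a minimal hash-consing sketch (illustrative code only, not the STG formalism itself): identical subterms are stored once, so equality of shared representations reduces to a constant-time identity check instead of a walk over the possibly exponential expanded term.

```python
# Minimal hash-consing of first-order terms: each distinct subterm is built
# exactly once, so structurally equal subterms are the same object in memory.
_table = {}

def mk(sym, *args):
    """Return the canonical node for sym(args); args must already be canonical."""
    key = (sym,) + tuple(id(a) for a in args)
    if key not in _table:
        _table[key] = (sym, args)
    return _table[key]

# Build t = f(g(a), g(a)); the two occurrences of g(a) are one shared node.
a = mk('a')
g1 = mk('g', a)
g2 = mk('g', a)
t = mk('f', g1, g2)

# Equality of shared subterms is an identity check, not a term traversal.
print(g1 is g2)  # True
```

Note that the STG result in the abstract is stronger: it additionally permits sharing of contexts (terms with a hole) and their composition, where a plain identity check no longer suffices and the polynomial-time equality test becomes the nontrivial contribution.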
Plenary lecture at the World Congress of Philosophy of Law and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch 'private' transnationale Akteure". Spanish version: "Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'", in: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 547-562, and in "Anales de la Cátedra Francisco Suárez 2005". See also Teubner, Gunther: Globalized Justice - Fragmented Justice. Human Rights Violations by "Private" Transnational Actors
Charmonium production and suppression in heavy-ion collisions at relativistic energies is investigated within different models, i.e. the comover absorption model, the threshold suppression model, the statistical coalescence model and the HSD transport approach. In HSD the charmonium dissociation cross sections with mesons are described by a simple phase-space parametrization including an effective coupling strength |M_i|^2 for the charmonium states i = chi_c, J/psi, psi'. This allows us to include the backward channels for charmonium reproduction by D Dbar channels, employing detailed balance without introducing any new parameters; these backward channels are missing in the comover absorption and threshold suppression models. It is found that all approaches yield a reasonable description of J/psi suppression in S+U and Pb+Pb collisions at SPS energies. However, they differ significantly in the psi'/J/psi ratio versus centrality at SPS and especially at RHIC energies. These pronounced differences can be exploited in future measurements at RHIC to distinguish the hadronic rescattering scenarios from quark coalescence close to the QGP phase boundary.
The quinol:fumarate reductase (QFR) is the terminal reductase of anaerobic fumarate respiration, the most commonly occurring type of anaerobic respiration. This membrane protein complex couples the oxidation of menaquinol to menaquinone to the reduction of fumarate to succinate. The three-dimensional crystal structure of the QFR from Wolinella succinogenes has previously been solved at 2.2 Å resolution. Although the diheme-containing QFR from W. succinogenes is known to catalyze an electroneutral process, structural and functional characterization of parental and variant enzymes has revealed active site locations which indicate electrogenic catalysis across the membrane. A solution to this apparent controversy was proposed with the so-called "E-pathway hypothesis". According to this, transmembrane electron transfer via the heme groups is strictly coupled to a parallel, compensatory transfer of protons via a transiently established pathway, which is inactive in the oxidized state of the enzyme. Proposed constituents of the E-pathway are the side chain of Glu C180 and the ring C propionate of the distal heme. Previous experimental evidence strongly supports such a role for the former constituent. One aim of this thesis is to investigate, by a combination of specific 13C-heme propionate labeling and FTIR difference spectroscopy, whether the ring C propionate of the distal heme is involved in redox-coupled proton transfer in the QFR from W. succinogenes. In addition to W. succinogenes, the primary structures of the QFR enzymes of two other ε-proteobacteria are known. These are Campylobacter jejuni and Helicobacter pylori, which unlike W. succinogenes are human pathogens. The QFR from H. pylori has previously been established to be a potential drug target, and the same is likely for the QFR from C. jejuni. The two pathogenic species colonize mucosal surfaces causing several diseases.
The possibility of studying the QFRs from these bacteria and of creating more efficient drugs specifically active against this enzyme depends substantially on the availability of large amounts of high-quality protein. Furthermore, biochemical and structural studies on QFR enzymes from ε-proteobacterial species other than W. succinogenes can be valuable to illuminate new aspects of, or corroborate, the current understanding of this class of membrane proteins.
We study the collective flow of open charm mesons and charmonia in Au + Au collisions at sqrt(s_NN) = 200 GeV within the hadron-string-dynamics (HSD) transport approach. The detailed studies show that the coupling of D, Dbar mesons to the light hadrons leads to directed and elliptic flow comparable to that of the light mesons. This also holds approximately for J/psi mesons, since more than 50% of the final charmonia for central and midcentral collisions stem from D + Dbar induced reactions in the transport calculations. The transverse momentum spectra of D, Dbar mesons and J/psi's are only very moderately changed by the (pre-)hadronic interactions in HSD, which can be traced back to the collective flow generated by elastic interactions with the light hadrons. PACS-Nr. 25.75.-q, 13.60.Le, 14.40.Lb, 14.65.Dw
The study of hidden charm production is an important part of the heavy ion program. The standard approach to this problem [1] assumes that c-cbar bound states are created only at the initial stage of the reaction and then partially destroyed at later stages due to interactions with the medium [2, 3, 4].
Nuclear collisions at intermediate, relativistic, and ultra-relativistic energies offer unique opportunities to study in detail manifold fragmentation and clustering phenomena in dense nuclear matter. At intermediate energies, the well-known processes of nuclear multifragmentation -- the disintegration of bulk nuclear matter into clusters of a wide range of sizes and masses -- allow the study of the critical point of the equation of state of nuclear matter. At very high energies, ultra-relativistic heavy-ion collisions offer a glimpse at the substructure of hadronic matter by crossing the phase boundary to the quark-gluon plasma. The hadronization of the quark-gluon plasma created in the fireball of an ultra-relativistic heavy-ion collision can be considered, again, as a clustering process. We will present two models which allow the simulation of nuclear multifragmentation and of hadronization via the formation of clusters in an interacting gas of quarks, and will discuss the importance of clustering for our understanding of hadronization in ultra-relativistic heavy-ion collisions.
We study Mach shocks generated by fast partonic jets propagating through a deconfined strongly-interacting matter. Our main goal is to take into account different types of collective motion during the formation and evolution of this matter. We predict a significant deformation of Mach shocks in central Au+Au collisions at RHIC and LHC energies as compared to the case of jet propagation in a static medium. The observed broadening of the near-side two-particle correlations in pseudorapidity space is explained by the Bjorken-like longitudinal expansion. Three-particle correlation measurements are proposed for a more detailed study of the Mach shock waves.
We study the effects of the isovector-scalar meson delta on the equation of state (EOS) of neutron-star matter in strong magnetic fields. The EOS of neutron-star matter and the nucleon effective masses are calculated in the framework of Lagrangian field theory, which is solved within the mean-field approximation. The numerical results show that the delta field leads to a remarkable splitting of the proton and neutron effective masses. The strength of the delta field decreases with increasing magnetic field and becomes small at ultrastrong fields. The proton effective mass is strongly influenced by magnetic fields, while the effect of magnetic fields on the neutron effective mass is negligible. After including the delta field, the EOS turns out to be stiffer at B < 10^15 G but becomes softer at stronger magnetic fields. The AMM terms affect the system only at ultrastrong magnetic fields (B > 10^19 G). In the range 10^15 G - 10^18 G the properties of neutron-star matter are found to be similar to those without magnetic fields.
The D-meson spectral density at finite temperature is obtained within a self-consistent coupled-channel approach. For the bare meson-baryon interaction, a separable potential is taken, whose parameters are fixed by the position and width of the Lambda_c (2593) resonance. The quasiparticle peak stays close to the free D-meson mass, indicating a small change in the effective mass for finite density and temperature. However, the considerable width of the spectral density implies physics beyond the quasiparticle approach. Our results indicate that the medium modifications for the D-mesons in nucleus-nucleus collisions at FAIR (GSI) will be dominantly on the width and not, as previously expected, on the mass.
Potential energy surfaces are calculated using the most advanced asymmetric two-center shell model, which allows obtaining shell and pairing corrections that are added to the Yukawa-plus-exponential model deformation energy. Shell effects are of crucial importance for the experimental observation of spontaneous disintegration by heavy-ion emission. Results for 222Ra, 232U, 236Pu and 242Cm illustrate the main ideas and show, for the first time for a cluster emitter, a potential barrier obtained using the macroscopic-microscopic method.
In this increasingly complex world of learned information delivery and discovery - is it possible that the "free lunch" the Publishing world worries about could come true? Although Open Access and Institutional Repositories have not (yet) created the "scorched earth" effect many were predicting, they are slowly and inevitably gaining momentum. Broader access to top-level information via Google (and others) does indeed appear to be "good enough" for many in their search for content. But you rarely get food for free in a good quality restaurant. You pay for the selection, preparation, speed and expertise of the delivery. At the soup kitchen the food can often be filling - but the queue will be long, the wait even longer and there is no chance of silver service or à la carte. If you are unfortunate enough to have little choice then this may be a great solution. Others will be willing to pay for a more satisfactory meal. As in all aspects of life, diversification and specialisation are fundamental forces. The publishing community in the years to come will continue to develop its offerings for a variety of needs that require more than just broth. To stretch the analogy, the ongoing presence of tap water in our lives has done little to halt the extraordinary rise of bottled water as part of our staple diet. Business reality will continue to settle these types of debate; my bet is that the commercial publishers see a role as providing information that commands an intrinsic value proposition to enough customers to remain economically viable for some time to come. Inspired by the comments and ideas expounded by Dr. James O'Donnell of Georgetown University on the liblicense listserv on 20th July this year, this paper will look to expand on the analogy and identify the good, the bad - but importantly the difference in information quality and access that will result in the radically changed (but still co-existent) information landscape of tomorrow.
The economical and organizational debates about open access have mostly been concerned with journals. This is not surprising since the open access movement can be seen largely as a response to the serials crisis. Recently the open access debate has been extended to include access to government produced data in different forms. In this presentation I'll critically look at some economic and organizational issues pertaining to the open access provision of bibliographical data.
In keeping with the views of its guru, Stevan Harnad, the open access movement is only prepared to discuss the two models of the "green road" and the "golden road" as sole alternatives for the future of scientific publishing. The "golden road" is put forward as the royal road for solving the journals crisis. However, no one has drawn attention to the fact that the golden road represents a purely socialist solution to a free-market problem and thus continues the "samizdat" tradition of underground literature in the former Eastern bloc. The present paper reveals the alarmingly low level at which the open access movement intends to publish top-class results from science and research, and the low degree of professionalism with which they are satisfied.
The lecture was given at the 5th Frankfurt Scientific Symposium (22-23 October 2005). Viewing the video is (unfortunately) only possible with Internet Explorer 5.0 or later, Netscape Navigator 7.0 or later, or Internet Explorer 5.2.2 or later for Mac (see document 1.html). All conference contributions are available at http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1992/.
Within the scenario of large extra dimensions, the Planck scale is lowered to values soon accessible. Among the predicted effects, the production of TeV mass black holes at the LHC is one of the most exciting possibilities. Though the final phases of the black hole’s evaporation are still unknown, the formation of a black hole remnant is a theoretically well motivated expectation. We analyze the observables emerging from a black hole evaporation with a remnant instead of a final decay. We show that the formation of a black hole remnant yields a signature which differs substantially from a final decay. We find the total transverse momentum of the black hole event to be significantly dominated by the presence of a remnant mass providing a strong experimental signature for black hole remnant formation.
Probing the density dependence of the symmetry potential in intermediate energy heavy ion collisions
(2005)
Based on the ultrarelativistic quantum molecular dynamics (UrQMD) model, the effects of the density-dependent symmetry potential for baryons and of the Coulomb potential for produced mesons are investigated for neutron-rich heavy ion collisions at intermediate energies. The calculated Delta-/Delta++ and pi-/pi+ production ratios show a clear, beam-energy-dependent sensitivity to the density-dependent symmetry potential, which is stronger for the pi-/pi+ ratio close to the pion production threshold. The Coulomb potential of the mesons changes the transverse momentum distribution of the pi-/pi+ ratio significantly, though it alters the pi- and pi+ total yields only slightly. The pi- yields, especially at midrapidity or at low transverse momenta, and the pi-/pi+ ratios at low transverse momenta, are shown to be sensitive probes of the density-dependent symmetry potential in dense nuclear matter. The effect of the density-dependent symmetry potential on the production of both K0 and K+ mesons is also investigated.
In this study, we analyze the recently proposed charge transfer fluctuations within a finite pseudo-rapidity space. As the charge transfer fluctuation is a measure of the local charge correlation length, it is capable of detecting inhomogeneity in the hot and dense matter created by heavy ion collisions. We predict that going from peripheral to central collisions, the charge transfer fluctuations at midrapidity should decrease substantially while the charge transfer fluctuations at the edges of the observation window should decrease by a small amount. These are consequences of having a strongly inhomogeneous matter where the QGP component is concentrated around midrapidity. We also show how to constrain the values of the charge correlations lengths in both the hadronic phase and the QGP phase using the charge transfer fluctuations.
The regeneration of hadronic resonances is discussed for heavy ion collisions at SPS and SIS-300 energies. The time evolutions of the Delta, rho and phi resonances are investigated. Special emphasis is put on resonance regeneration after chemical freeze-out. The emission time spectra of experimentally detectable resonances are explored.
The influence of the isospin-independent, isospin- and momentum-dependent equation of state (EoS), as well as of the Coulomb interaction, on pion production in intermediate energy heavy ion collisions (HICs) is studied for both isospin-symmetric and neutron-rich systems. The Coulomb interaction plays an important role in the reaction dynamics and strongly influences the rapidity and transverse momentum distributions of charged pions. It even leads to a pi-/pi+ ratio deviating slightly from unity for isospin-symmetric systems. The Coulomb interaction between mesons and baryons is also crucial for reproducing the proper pion flow, since it visibly changes the behavior of the directed and elliptic flow components of the pions. The EoS can be better investigated in neutron-rich systems if multiple probes are measured simultaneously, for example the rapidity and transverse momentum distributions of the charged pions, the pi-/pi+ ratio, the various pion flow components, and the difference of the pi+ and pi- flows. A new sensitive observable is proposed to probe the symmetry potential energy at high densities, namely the transverse momentum distribution of the elliptic flow difference Delta v_2^{pi+ - pi-}(p_t^{c.m.}).
It is investigated whether the canonical suppression associated with the exact conservation of a U(1) charge can be reproduced correctly by current transport models. To this end, a pion gas with a volume-limited cross section for kaon production and annihilation is simulated within two different transport prescriptions for realizing the inelastic collisions. It is found that both models can indeed dynamically account for the canonical suppression in the yields of rare strange particles.
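For reference, the canonical suppression that the transport models are tested against is the standard statistical-mechanics result for pair-produced particles carrying an exactly conserved U(1) charge in a neutral system (this formula is general background, not quoted from the paper; z_1 denotes the single-particle partition function of one kaon species and I_n are modified Bessel functions):

```latex
\langle N_K \rangle_C \;=\; z_1\,\frac{I_1(2 z_1)}{I_0(2 z_1)}
\;\xrightarrow[\,z_1 \ll 1\,]{}\; z_1^2
```

For rare particles (small z_1) the canonical yield is thus quadratically rather than linearly suppressed with the system volume, and this is the limit the two transport prescriptions must recover dynamically from microscopic production and annihilation reactions.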
Longitudinal hadron spectra from proton-proton (pp) and nucleus-nucleus (AA) collisions from E_lab = 2 AGeV to sqrt s = 200 AGeV are investigated. The widths of the rapidity spectra for the various particle species increase monotonically with energy. The present calculation indicates no sign of a step-like behaviour as expected from the kaon transverse mass systematics. For pions, the transport simulation is consistent with a Landau-type scaling of the rapidity widths, both in central AA reactions and in pp collisions. However, other hadron species do not follow the Landau scaling. The present model predicts a rapidity width that decreases with particle mass for newly produced particles, not supporting a Landau-type flow interpretation.
Transverse hadron spectra from proton-proton, proton-nucleus and nucleus-nucleus collisions from 2 AGeV to 21.3 ATeV are investigated within two independent transport approaches (HSD and UrQMD). For central Au+Au (Pb+Pb) collisions at energies above E_lab ~ 5 AGeV, the measured K± transverse mass spectra have a larger inverse slope parameter than expected from the default calculations. The additional pressure - as suggested by lattice QCD calculations at finite quark chemical potential mu_q and temperature T - might be generated by strong interactions in the early pre-hadronic/partonic phase of central Au+Au (Pb+Pb) collisions. This is supported by a non-monotonic energy dependence of v2/pT in the present transport model.
Within the ADD model, we elaborate on an idea by Vacavant and Hinchliffe and show quantitatively how to determine the fundamental scale of TeV-gravity and the number of compactified extra dimensions from data at the LHC. We demonstrate that the ADD model leads to strong correlations between the missing E_T carried away by gravitons at different center-of-mass energies. This correlation puts strong constraints on this model of extra dimensions if probed at sqrt s = 5.5 TeV and sqrt s = 14 TeV at the LHC.
The cumulant method is applied to study elliptic flow (v_2) in Au+Au collisions at sqrt s = 200 AGeV with the UrQMD model. In this approach the true event plane is known, and both non-flow effects and event-by-event spatial (epsilon) and v_2 fluctuations exist. Qualitatively, the hierarchy of v_2's from two-, four- and six-particle cumulants is consistent with the STAR data; however, the magnitude of v_2 in the UrQMD model is only 60% of the data. We find that the four- and six-particle cumulants are good measures of the real elliptic flow over a wide range of centralities, except for the most central and very peripheral events, where the cumulant method is affected by the v_2 fluctuations. In mid-central collisions, the four- and six-particle cumulants are shown to give a good estimate of the true differential v_2, especially at large transverse momentum, where the two-particle cumulant method is heavily affected by non-flow effects.
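For readers unfamiliar with the notation, the two- and four-particle cumulant estimates of v_2 are conventionally defined from azimuthal correlations as follows (standard definitions from the cumulant-flow literature, not specific to this analysis; double angle brackets denote averages over particle tuples and events):

```latex
v_2\{2\}^2 = \bigl\langle\!\bigl\langle e^{\,2i(\phi_1-\phi_2)} \bigr\rangle\!\bigr\rangle ,
\qquad
v_2\{4\}^4 = 2\,\bigl\langle\!\bigl\langle e^{\,2i(\phi_1-\phi_2)} \bigr\rangle\!\bigr\rangle^{2}
           - \bigl\langle\!\bigl\langle e^{\,2i(\phi_1+\phi_2-\phi_3-\phi_4)} \bigr\rangle\!\bigr\rangle
```

Non-flow two-particle correlations contribute directly to v_2{2}^2 but largely cancel in the four- and six-particle cumulants, which is why the higher cumulants track the true flow better except where v_2 fluctuations dominate.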
We predict transverse and longitudinal momentum spectra and yields of rho0 and omega mesons reconstructed from hadron correlations in C+C reactions at 2 AGeV. The rapidity and pT distributions of reconstructable rho0 mesons differ strongly from the primary distributions, while the omega distributions are only weakly modified. We discuss the temporal and spatial distributions of the particles emitted in the hadron channel. Finally, we report on the mass shift of the rho0 due to its coupling to the N*(1520), which is observable in both the di-lepton and the pi-pi channel. Our calculations can be tested with the HADES experiment at GSI, Darmstadt.
Trapping black hole remnants
(2005)
Large extra dimensions lower the Planck scale to values soon accessible. The production of TeV mass black holes at the LHC is one of the most exciting predictions. However, the final phases of the black hole's evaporation are still unknown and there are strong indications that a black hole remnant can be left. Since a certain fraction of such objects would be electrically charged, we argue that they can be trapped. In this paper, we examine the occurrence of such charged black hole remnants. These trapped remnants are of high interest, as they could be used to closely investigate the evaporation characteristics. Due to the absence of background from the collision region and the controlled initial state, the signal would be very clear. This would allow to extract information about the late stages of the evaporation process with high precision.
The recently proposed baryon-strangeness correlation (C_BS) is studied with a string-hadronic transport model (UrQMD) for various energies from E_lab = 4 AGeV to sqrt s = 200 AGeV. It is shown that rescattering among secondaries cannot mimic the predicted correlation pattern expected for a quark-gluon plasma. However, we find a strong increase of the C_BS correlation function with decreasing collision energy, both for pp and Au+Au/Pb+Pb reactions. For Au+Au reactions at the top RHIC energy (sqrt s = 200 AGeV), the C_BS correlation is constant for all centralities and compatible with the pp result. With increasing width of the rapidity window, C_BS roughly follows the shape of the baryon rapidity distribution. We suggest studying the energy and centrality dependence of C_BS, which allows one to gain information on the onset of the deconfinement transition in temperature and volume.
We analyze longitudinal pion spectra from E_lab = 2 AGeV to sqrt s_NN = 200 GeV within Landau's hydrodynamical model. From the measured data on the widths of the pion rapidity spectra, we extract the sound velocity c_s in the early stage of the reactions. It is found that the sound velocity has a local minimum (indicating a softest point in the equation of state, EoS) at E_beam = 30 AGeV. This softening of the EoS is compatible with the assumption of the formation of a mixed phase at the onset of deconfinement.
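The extraction described above conventionally relies on the Landau-model relation between the rapidity width of the pion spectrum and the speed of sound (quoted here in the form commonly used in the literature; the paper's exact expression may differ in detail, and m_p denotes the proton mass):

```latex
\sigma_y^2 \;=\; \frac{8}{3}\,\frac{c_s^2}{1-c_s^4}\,
\ln\!\left(\frac{\sqrt{s_{NN}}}{2\,m_p}\right)
```

Inverting this relation for c_s at each beam energy turns the measured rapidity widths into an excitation function of the sound velocity, whose local minimum signals the softest point of the EoS.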
The results from the STAR Collaboration on directed flow (v1), elliptic flow (v2), and the fourth harmonic (v4) in the anisotropic azimuthal distribution of particles from Au+Au collisions at sqrt[sNN] = 200 GeV are summarized and compared with results from other experiments and theoretical models. Results for identified particles are presented and fit with a blast-wave model. Different anisotropic flow analysis methods are compared and nonflow effects are extracted from the data. For v2, scaling with the number of constituent quarks and parton coalescence are discussed. For v4, scaling with v2^2 and quark coalescence are discussed.
Midrapidity open charm spectra from direct reconstruction of D0(D0-bar)-->K± pi ± in d+Au collisions and indirect electron-positron measurements via charm semileptonic decays in p+p and d+Au collisions at sqrt[sNN]=200 GeV are reported. The D0(D0-bar) spectrum covers a transverse momentum (pT) range of 0.1<pT<3 GeV/c, whereas the electron spectra cover a range of 1<pT<4 GeV/c. The electron spectra show approximate binary collision scaling between p+p and d+Au collisions. From these two independent analyses, the differential cross section per nucleon-nucleon binary interaction at midrapidity for open charm production from d+Au collisions at BNL RHIC is d sigma NNcc-bar/dy=0.30±0.04(stat)±0.09(syst) mb. The results are compared to theoretical calculations. Implications for charmonium results in A+A collisions are discussed.
We present the first large-acceptance measurement of event-wise mean transverse momentum <pt> fluctuations for Au-Au collisions at nucleon-nucleon center-of-momentum collision energy sqrt[sNN] = 130 GeV. The observed nonstatistical <pt> fluctuations substantially exceed in magnitude fluctuations expected from the finite number of particles produced in a typical collision. The r.m.s. fractional width excess of the event-wise <pt> distribution is 13.7±0.1(stat)±1.3(syst)% relative to a statistical reference, for the 15% most-central collisions and for charged hadrons within pseudorapidity range |eta| < 1, full 2 pi azimuth, and 0.15 <= pt <= 2 GeV/c. The width excess varies smoothly but nonmonotonically with collision centrality and does not display rapid changes with centrality which might indicate the presence of critical fluctuations. The reported <pt> fluctuation excess is qualitatively larger than those observed at lower energies and differs markedly from theoretical expectations. Contributions to <pt> fluctuations from semihard parton scattering in the initial state and dissipation in the bulk colored medium are discussed.
The short-lived K(892)* resonance provides an efficient tool to probe properties of the hot and dense medium produced in relativistic heavy-ion collisions. We report measurements of K* in sqrt[sNN]=200GeV Au+Au and p+p collisions reconstructed via its hadronic decay channels K(892)*0-->K pi and K(892)*±-->K0S pi ± using the STAR detector at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. The K*0 mass has been studied as a function of pT in minimum bias p+p and central Au+Au collisions. The K* pT spectra for minimum bias p+p interactions and for Au+Au collisions in different centralities are presented. The K*/K yield ratios for all centralities in Au+Au collisions are found to be significantly lower than the ratio in minimum bias p+p collisions, indicating the importance of hadronic interactions between chemical and kinetic freeze-outs. A significant nonzero K*0 elliptic flow (v2) is observed in Au+Au collisions and is compared to the K0S and Lambda v2. The nuclear modification factor of K* at intermediate pT is similar to that of K0S but different from Lambda. This establishes a baryon-meson effect over a mass effect in the particle production at intermediate pT (2<pT <= 4GeV/c).
We present a systematic analysis of two-pion interferometry in Au+Au collisions at sqrt[sNN]=200GeV using the STAR detector at the Relativistic Heavy Ion Collider. We extract the Hanbury-Brown and Twiss radii and study their multiplicity, transverse momentum, and azimuthal angle dependence. The Gaussianness of the correlation function is studied. Estimates of the geometrical and dynamical structure of the freeze-out source are extracted by fits with blast-wave parametrizations. The expansion of the source and its relation with the initial energy density distribution is studied.
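The HBT radii quoted here are conventionally obtained by fitting the measured two-pion correlation function with a Gaussian (Bertsch-Pratt) parametrization, schematically:

```latex
C(q_{\mathrm{out}}, q_{\mathrm{side}}, q_{\mathrm{long}}) = 1 + \lambda\,
\exp\!\left(-R_{\mathrm{out}}^2 q_{\mathrm{out}}^2
            - R_{\mathrm{side}}^2 q_{\mathrm{side}}^2
            - R_{\mathrm{long}}^2 q_{\mathrm{long}}^2\right)
```

Azimuthally sensitive analyses add cross-terms (e.g. an R_out-long component); deviations of the data from this Gaussian form are what the "Gaussianness" study quantifies.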
Correlations in the hadron distributions produced in relativistic Au+Au collisions are studied in the discrete wavelet expansion method. The analysis is performed in the space of pseudorapidity (|eta| <= 1) and azimuth (full 2 pi) in bins of transverse momentum (pt) over 0.14 <= pt <= 2.1 GeV/c. In peripheral Au+Au collisions a correlation structure ascribed to minijet fragmentation is observed. It evolves with collision centrality and pt in a way not seen before, which suggests strong dissipation of minijet fragmentation in the longitudinally expanding medium.
The challenging intricacies of strongly correlated electronic systems necessitate the use of a variety of complementary theoretical approaches. In this thesis, we analyze two distinct aspects of strong correlations and develop further or adapt suitable techniques. First, we discuss magnetization transport in insulating one-dimensional spin rings described by a Heisenberg model in an inhomogeneous magnetic field. Due to quantum mechanical interference of magnon wave functions, persistent magnetization currents are shown to exist in such a geometry in analogy to persistent charge currents in mesoscopic normal metal rings. The second, longer part is dedicated to a new aspect of the functional renormalization group technique for fermions. By decoupling the interaction via a Hubbard-Stratonovich transformation, we introduce collective bosonic variables from the beginning and analyze the hierarchy of flow equations for the coupled field theory. The possibility of a cutoff in the momentum transfer of the interaction leads to a new flow scheme, which we will refer to as the interaction cutoff scheme. Within this approach, Ward identities for forward scattering problems are conserved at every instant of the flow leading to an exact solution of a whole hierarchy of flow equations. This way the known exact result for the single-particle Green's function of the Tomonaga-Luttinger model is recovered.
Market discipline for financial institutions can be imposed not only from the liability side, as has often been stressed in the literature on the use of subordinated debt, but also from the asset side. This will be particularly true if good lending opportunities are in short supply, so that banks have to compete for projects. In such a setting, borrowers may demand that banks commit to monitoring by requiring that they use some of their own capital in lending, thus creating an asset market-based incentive for banks to hold capital. Borrowers can also provide banks with incentives to monitor by allowing them to reap some of the benefits from the loans, which accrue only if the loans are in fact paid off. Since borrowers do not fully internalize the cost of raising capital to the banks, the level of capital demanded by market participants may be above the one chosen by a regulator, even when capital is a relatively costly source of funds. This implies that capital requirements may not be binding, as recent evidence seems to indicate. JEL Classification: G21, G38
We explore the macro/finance interface in the context of equity markets. In particular, using half a century of Livingston expected business conditions data we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a statistically and economically significant counter-cyclical fashion: depressed expected business conditions are associated with high expected excess returns. Moreover, inclusion of expected business conditions in otherwise standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R^2. Expected business conditions retain predictive power even after controlling for an important and recently introduced non-financial predictor, the generalized consumption/wealth ratio, which accords with the view that expected business conditions play a role in asset pricing different from and complementary to that of the consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, while time-varying consumption/wealth may capture time-varying risk aversion. JEL Classification: G12
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model: they are useful, but not used in equilibrium because of basis risk. JEL Classification: D82, G22
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
EU financial integration : is there a 'Core Europe'? ; evidence from a cluster-based approach
(2005)
Numerous recent studies, e.g. EU Commission (2004a), Baele et al. (2004), Adam et al. (2002), and the research pooled in ECB-CFS (2005) and Gaspar, Hartmann, and Sleijpen (2003), have documented progress in EU financial integration from a micro-level view. This paper contributes to this research by identifying groups of financially integrated countries from a holistic, macro-level view. It calculates cross-sectional dispersions, and innovates by applying an inter-temporal cluster analysis to eight euro area countries for the period 1995-2002. The indicators employed represent the money, government bond and credit markets. Our results show that euro countries were divided into two stable groups of financially more closely integrated countries in the pre-EMU period. Back then, geographic proximity and country size might have played a role. This situation has changed remarkably with the euro's introduction. EMU has led to a shake-up both in the number and composition of groups. The evidence puts a question mark behind using Germany as a benchmark in the post-EMU period. The findings suggest as well that financial integration takes place in waves: stable periods and periods of intense transition alternate. Based on the notion of 'maximum similarity', the results suggest that there exist 'maximum similarity barriers'. It takes extraordinary events, such as EMU, to push the degree of financial integration beyond these barriers. The research encourages policymakers to move forward courageously in the post-FSAP era, and provides comfort that the substantial differences between the current and potentially new euro states can be overcome. The analysis could be extended to the new EU member countries, to the global level, and to additional indicators.
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that despite important reforms and substantial changes of individual elements of the German corporate governance system the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in the governance undermines the stability of the corporate governance system in Germany. Therefore a breakdown of the traditional system leading to a control vacuum or a fundamental change to a capital market-based system could be in the offing.
Small and medium-sized firms typically obtain capital via bank financing. They often rely on a mixture of relationship and arm’s-length banking. This paper explores the reasons for the dominance of heterogeneous multiple banking systems. We show that the incidence of inefficient credit termination and subsequent firm liquidation is contingent on the borrower’s quality and on the relationship bank’s information precision. Generally, heterogeneous multiple banking leads to fewer inefficient credit decisions than monopoly relationship lending or homogeneous multiple banking, provided that the relationship bank’s fraction of total firm debt is not too large.
This paper makes an attempt to present the economics of credit securitisation in a non-technical way, starting from the description and the analysis of a typical securitisation transaction. The paper sketches a theoretical explanation for why tranching, or nonproportional risk sharing, which is at the heart of securitisation transactions, may allow commercial banks to maximize their shareholder value. However, the analysis makes also clear that the conditions under which credit securitisation enhances welfare, are fairly restrictive, and require not only an active role of the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort.
We derive the effects of credit risk transfer (CRT) markets on real sector productivity and on the volume of financial intermediation in a model where banks choose their optimal degree of CRT and monitoring. We find that CRT increases productivity in the up-market real sector but decreases it in the low-end segment. If optimal, CRT unambiguously fosters financial deepening, i.e., it reduces credit-rationing in the economy. These effects rely upon the ability of banks to commit to the optimal CRT at the funding stage. The optimal degree of CRT depends on the combination of moral hazard, general riskiness, and the cost of monitoring in non-monotonic ways.
We provide insights into determinants of the rating level of 371 issuers which defaulted in the years 1999 to 2003, and into the leader-follower relationship between Moody's and S&P. The evidence for the rating level suggests that Moody's assigns lower ratings than S&P for all observed periods before the default event. Furthermore, we observe two-way Granger causality, which signifies information flow between the two rating agencies. Since lagged rating changes influence the magnitude of the agencies' own rating changes, it would appear that the two rating agencies apply a policy of implementing a severe downgrade through several mild downgrades. Further, our analysis of rating changes shows that issuers with headquarters in the US are less sharply downgraded than non-US issuers. For rating changes by Moody's we also find that larger issuers seem to be downgraded less severely than smaller issuers.
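The lead-lag analysis underlying the Moody's/S&P comparison rests on Granger-causality F-tests. A minimal numpy sketch of that logic, run here on simulated series rather than rating data (the function name and the toy process are illustrative, not the paper's specification):

```python
import numpy as np

def granger_f_stat(x, y, lag=1):
    """F-statistic for the null hypothesis 'x does not Granger-cause y'.

    Compares a restricted AR model (y on its own lags) with an
    unrestricted model that also includes lagged x.
    """
    n = len(y)
    target = y[lag:]
    y_lags = np.column_stack([y[lag - k - 1:n - k - 1] for k in range(lag)])
    x_lags = np.column_stack([x[lag - k - 1:n - k - 1] for k in range(lag)])
    ones = np.ones((n - lag, 1))
    R = np.hstack([ones, y_lags])           # restricted design matrix
    U = np.hstack([ones, y_lags, x_lags])   # unrestricted design matrix
    rss_r = np.sum((target - R @ np.linalg.lstsq(R, target, rcond=None)[0]) ** 2)
    rss_u = np.sum((target - U @ np.linalg.lstsq(U, target, rcond=None)[0]) ** 2)
    df_den = (n - lag) - U.shape[1]
    return ((rss_r - rss_u) / lag) / (rss_u / df_den)

# simulated example: y reacts to lagged x, so x should Granger-cause y
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_f_stat(x, y) > granger_f_stat(y, x))  # True: x leads y
```

Two-way causality, as reported in the paper, corresponds to both directional F-statistics being significant at once.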
This article presents an overview of the contemporary German insurance market, its structure, players, and development trends. First, brief information about the history of the insurance industry in Germany is provided. Second, the contemporary market is analyzed in terms of its legal and economic structure, with statistics on the number of companies, insurance density and penetration, the role of insurers in the capital markets, premiums split, and main market players and their market shares. Furthermore, the three biggest insurance lines—life, health, and property and casualty—are considered in more detail, such as product range, country specifics, and insurance and investment results. A section on regulation outlines its implementation in the insurance sector, offering information on the underlying legislative basis, supervisory body, technical procedures, expected developments, and sources of more detailed information.
Electric charge correlations were studied for p+p, C+C, Si+Si, and centrality selected Pb+Pb collisions at sqrt[sNN]=17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudorapidity correlations of oppositely charged particles were measured using the balance function method. The width of the balance function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
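The balance function referred to here is commonly defined from conditional charged-pair counts within a pseudorapidity separation Delta-eta:

```latex
B(\Delta\eta) = \frac{1}{2}\left[
\frac{N_{+-}(\Delta\eta) - N_{++}(\Delta\eta)}{N_{+}}
+ \frac{N_{-+}(\Delta\eta) - N_{--}(\Delta\eta)}{N_{-}}\right]
```

The intuition behind the measurement: balancing charge pairs created late in the evolution stay closer in rapidity, so a delayed hadronization narrows B(Delta-eta).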
German version: Der Umgang mit Rechtsparadoxien: Derrida, Luhmann, Wiethölter. In: Christian Joerges and Gunther Teubner (eds.), Rechtsverfassungsrecht: Recht-Fertigungen zwischen Sozialtheorie und Privatrechtsdogmatik. Nomos, Baden-Baden 2003, pp. 249-272.
This paper starts out by pointing out the challenges and weaknesses which the German banking system faces according to the prevailing views among national and international observers. These challenges include a general problem of profitability and, possibly as its main reason, the strong role of public banks. These concerns raise the questions whether the facts support this assessment of a general profitability problem and whether there are reasons to expect a fundamental or structural transformation of the German banking system. The paper contains four sections. The first one presents the evidence concerning the profitability problem in a comparative, international perspective. The second section presents information about the so-called three-pillar system of German banking. What might be surprising in this context is that the group of public banks is not only the largest segment of the German banking system, but that the primary savings banks are also its financially most successful part. The German banking system is highly fragmented. This suggests a discussion, in the third section, of past, present and possible future consolidation in the banking system. The authors provide evidence that within-group consolidation has been going on at a rapid pace in the public and the cooperative banking groups in recent years and that this development has not yet come to an end, while within-group consolidation among the large private banks, consolidation across group boundaries at a national level, and cross-border or international consolidation have so far happened only on a limited scale and do not appear to be gaining momentum in the near future. In the last section, the authors develop their explanation for the fact that large-scale and cross-border consolidation has so far not materialized to any great extent.
Drawing on the concept of complementarity, they argue that it would be difficult to expect these kinds of mergers and acquisitions to happen within a financial system which is itself surprisingly stable, or, as one could also call it, resistant to change.
Asset-backed securitization (ABS) has become a viable and increasingly attractive risk management and refinancing method, either as a standalone form of structured finance or as securitized debt in Collateralized Debt Obligations (CDO). However, the absence of industry standardization has prevented rising investment demand from translating into market liquidity comparable to traditional fixed income instruments, in all but a few selected market segments. In particular, low financial transparency and complex security designs inhibit profound analysis of secondary market pricing and how it relates to established forms of external finance. This paper represents the first attempt to measure the intertemporal, bivariate causal relationship between matched price series of equity and ABS issued by the same entity. In a two-dimensional linear system of simultaneous equations we investigate the short-term dynamics and long-term consistency of daily secondary market data from the U.K. Sterling ABS/MBS market and exchange traded shares between 1998 and 2004, with and without the presence of cointegration. Our causality framework delivers compelling empirical support for a strong co-movement between matched price series of ABS-equity pairs, where ABS markets seem to contribute more to price discovery over the long run. Controlling for cointegration, risk-free interest and average market risk of corporate debt hardly alters our results. However, once we qualify the magnitude and direction of price discovery on various security characteristics, such as the ABS asset class, we find that ABS-equity pairs with large-scale CMBS/RMBS and credit card/student loan ABS reveal stronger lead-lag relationships and joint price dynamics than whole business ABS. JEL Classifications: G10, G12, G24
Although the commoditisation of illiquid asset exposures through securitisation facilitates the disciplining effect of capital markets on risk management, private information about securitised debt as well as complex transaction structures could possibly impair fair market valuation. In a simple issue design model without intermediaries we maximise issuer proceeds over a positive measure of issue quality, where a direct revelation mechanism (DRM) by profitable informed investors engages endogenous price discovery through auction-style allocation preference as a continuous function of perceived issue quality. We derive an optimal allocation schedule for maximum issuer payoffs under different pricing regimes if asymmetric information requires underpricing. In particular, we study how the incidence of uninformed investors at varying levels of valuation uncertainty, together with their function of clearing the market, affects profitable informed investment. We find that the issuer optimises own payoffs at each valuation irrespective of the applicable pricing mechanism by awarding informed investors the lowest possible allocation (and attendant underpricing) that still guarantees profitable informed investment. Under uniform pricing the composition of the investor pool ensures that informed investors appropriate higher profit than uninformed types. Any reservation utility by issuers lowers the probability of information disclosure by informed investors and the scope of issuers to curtail profitable informed investment. JEL Classifications: D82, G12, G14, G23
Asset securitisation as a risk management and funding tool : what does it hold in store for SMES?
(2005)
The following chapter critically surveys the attendant benefits and drawbacks of asset securitisation on both financial institutions and firms. It also elicits salient lessons to be learned about the securitisation of SME-related obligations from a cursory review of SME securitisation in Germany as a foray of asset securitisation in a bank-centred financial system paired with a strong presence of SMEs in industrial production. JEL Classification: D81, G15, M20
As a sign of ambivalence in the regulatory definition of capital adequacy for credit risk and the quest for more efficient refinancing sources, collateral loan obligations (CLOs) have become a prominent securitisation mechanism. This paper presents a loss-based asset pricing model for the valuation of constituent tranches within a CLO-style security design. The model specifically examines how tranche subordination translates securitised credit risk into investment risk of issued tranches as beneficial interests on a designated loan pool typically underlying a CLO transaction. We obtain a tranche-specific term structure from an intensity-based simulation of defaults under both robust statistical analysis and extreme value theory (EVT). Loss sharing between issuers and investors according to a simplified subordination mechanism allows issuers to decompose securitised credit risk exposures into a collection of default sensitive debt securities with divergent risk profiles and expected investor returns. Our estimation results suggest a dichotomous effect of loss cascading, with the default term structure of the most junior tranche of CLO transactions ("first loss position") being distinctly different from that of the remaining, more senior "investor tranches". The first loss position carries large expected loss (with high investor return) and low leverage, whereas all other tranches mainly suffer from loss volatility (unexpected loss). These findings might explain why issuers retain the most junior tranche as credit enhancement to attenuate asymmetric information between issuers and investors. At the same time, the issuer discretion in the configuration of loss subordination within a particular security design might give rise to implicit investment risk in senior tranches in the event of systemic shocks. JEL Classifications: C15, C22, D82, F34, G13, G18, G20
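The simplified subordination mechanism described above can be sketched as a loss waterfall: pool losses fill tranches from the bottom up, each tranche defined by attachment and detachment points. The three-tranche structure below is illustrative only, not the paper's calibration:

```python
def tranche_loss(pool_loss, attachment, detachment):
    """Loss absorbed by one tranche; all quantities are fractions of pool notional."""
    return min(max(pool_loss - attachment, 0.0), detachment - attachment)

# illustrative structure: first-loss 0-5%, mezzanine 5-15%, senior 15-100%
tranches = [(0.00, 0.05), (0.05, 0.15), (0.15, 1.00)]
losses = [tranche_loss(0.08, a, d) for a, d in tranches]
# an 8% pool loss wipes out the first-loss piece, dents the mezzanine,
# and leaves the senior tranche untouched
print(losses)
```

This bottom-up allocation is why the first loss position carries almost all expected loss, while senior tranches are exposed mainly to loss volatility in the tail.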
System-size dependence of strangeness production in nucleus-nucleus collisions at √sNN = 17.3 GeV
(2005)
Emission of pi, K, phi and Lambda was measured in near-central C+C and Si+Si collisions at 158 AGeV beam energy. Together with earlier data for p+p, S+S and Pb+Pb, the system-size dependence of relative strangeness production in nucleus-nucleus collisions is obtained. Its fast rise and the saturation observed at about 60 participating nucleons can be understood as the onset of the formation of coherent partonic subsystems of increasing size. PACS numbers: 25.75.-q
Results are presented on Omega production in central Pb+Pb collisions at 40 and 158 AGeV beam energy. Given are transverse-mass spectra, rapidity distributions, and total yields for the sum Omega+Antiomega at 40 AGeV and for Omega and Antiomega separately at 158 AGeV. The yields are strongly under-predicted by the string-hadronic UrQMD model and are in better agreement with predictions from hadron gas models. PACS numbers: 25.75.Dw
The phase diagram of strongly interacting matter is discussed within the exactly solvable statistical model of the quark-gluon bags. The model predicts two phases of matter: the hadron gas at a low temperature T and baryonic chemical potential muB, and the quark-gluon gas at a high T and/or muB. The nature of the phase transition depends on the form of the bag mass-volume spectrum (its pre-exponential factor), which is expected to change with the muB/T ratio. It is therefore likely that the line of the first-order transition at a high muB/T ratio is followed by the line of the second-order phase transition at an intermediate muB/T, and then by the lines of "higher order transitions" at a low muB/T.
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloonborne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum of total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l'Adour, France. For both flights a ClO daylight and nighttime vertical profile could be derived over an altitude range of approximately 15-31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l'Adour in 1999 show a good agreement with the ClO measurements. For the flight launched in León in 1996, a similarly good agreement is found, except at around ~650 K potential temperature (~26 km altitude). However, a tendency is found that for solar zenith angles greater than 86°-87° the simulated ClO mixing ratios substantially overestimate measured ClO by approximately a factor of 2.5 or more for both flights. Therefore we conclude that no indication can be deduced from the presented ClO measurements that substantial uncertainties exist in midlatitude chlorine chemistry of the stratosphere. An exception is the situation at solar zenith angles greater than 86°-87°, where model simulations substantially overestimate ClO observations.
Results are presented from a search for the decays D0 -> K- pi+ and D0-bar -> K+ pi- in a sample of 3.8x10^6 central Pb-Pb events collected with a beam energy of 158A GeV by NA49 at the CERN SPS. No signal is observed. An upper limit on D0 production is derived and compared to predictions from several models.
Particle production in central Pb+Pb collisions was studied with the NA49 large acceptance spectrometer at the CERN SPS at beam energies of 20, 30, 40, 80, and 158 GeV per nucleon. A change of the energy dependence is observed around 30A GeV for the yields of pions and strange particles as well as for the shapes of the transverse mass spectra. At present only a reaction scenario with onset of deconfinement is able to reproduce the measurements.
Despite a lot of restructuring and many innovations in recent years, the securities transaction industry in the European Union is still a highly inefficient and inconsistently configured system for cross-border transactions. This paper analyzes the functions performed, the institutions involved and the parameters concerned that shape market and ownership structure in the industry. Of particular interest are microeconomic incentives of the main players that can be in contradiction to social welfare. We develop a framework and analyze three consistent systems for the securities transaction industry in the EU that offer greater efficiency than the current, inefficient arrangement. Some policy advice is given to select the 'best' system for the Single European Financial Market.
In recent years stock exchanges have been increasingly diversifying their operations into related business areas such as derivatives trading, post-trading services and software sales. This trend can be observed most notably among profit-oriented trading venues. While the pursuit of diversification is likely to be driven by the attractiveness of these investment opportunities, it is yet an open question whether certain integration activities are also efficient, both from a social welfare and from the exchanges' perspective. Academic contributions so far analyzed different business models primarily from the social welfare perspective, whereas there is only little literature considering their impact on the exchange itself. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the factor productivity of exchanges with different business models. Our findings suggest three conclusions: (1) Integration activity comes at the cost of increased operational complexity, which in some cases outweighs the potential synergies between related activities and therefore leads to technical inefficiencies and lower productivity growth. (2) We find no evidence that vertical integration is more efficient and productive than other business models. This finding could contribute to the ongoing discussion about the merits of vertical integration from a social welfare perspective. (3) The existence of a strong in-house IT competence seems to be beneficial to overcome.
Academic contributions on the demutualization of stock exchanges so far have been predominantly devoted to social welfare issues, whereas there is scarce empirical literature referring to the impact of a governance change on the exchange itself. While there is consensus that the case for demutualization is predominantly driven by the need to improve the exchange's competitiveness in a changing business environment, it remains unclear how different governance regimes actually affect stock exchange performance. Some authors propose that a public listing is the best suited governance arrangement to improve an exchange's competitiveness. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the efficiency and productivity of exchanges with differing governance arrangements. For this purpose we calculate in a first step individual efficiency and productivity values via DEA. In a second step we regress the derived values against variables that - amongst others - map the institutional arrangement of the exchanges in order to determine efficiency and productivity differences between (1) mutuals, (2) demutualized but customer-owned exchanges, and (3) publicly listed and thus at least partly outsider-owned exchanges. We find evidence that demutualized exchanges exhibit higher technical efficiency than mutuals. However, they perform relatively poorly as far as productivity growth is concerned. Furthermore, we find no evidence that publicly listed exchanges possess higher efficiency and productivity values than demutualized exchanges with a customer-dominated structure. We conclude that the merits of outside ownership lie possibly in other areas such as solving conflicts of interest between too heterogeneous members.
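The first step of the two-step approach, computing efficiency scores via DEA, can be sketched as an input-oriented CCR linear program solved once per exchange. The function and toy data below are an illustrative sketch (using scipy's linprog), not the study's actual panel or model variant:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR (constant returns to scale) DEA efficiency scores.

    X: (n_dmu, n_inputs) input matrix; Y: (n_dmu, n_outputs) output matrix.
    Returns one score per decision-making unit; 1.0 = on the efficient frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(1 + n)
        c[0] = 1.0  # minimise the input contraction factor theta
        # inputs:  sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[o:o + 1].T, X.T])
        # outputs: sum_j lambda_j * y_rj >= y_ro  (flipped to <= form)
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                      bounds=[(0, None)] * (1 + n))
        scores[o] = res.x[0]
    return scores

# toy data: exchange A produces 1 output from 1 input, exchange B from 2 inputs
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
scores = dea_ccr_efficiency(X, Y)
print(scores)  # A is efficient (score 1.0), B is not (score 0.5)
```

The second step of the study then regresses such scores on governance variables; that stage is a standard panel regression and is omitted here.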
It is widely believed that the ideal board in corporations is composed almost entirely of independent (outside) directors. In contrast, this paper shows that some lack of board independence can be in the interest of shareholders. This follows because a lack of board independence serves as a substitute for commitment. Boards that are dependent on the incumbent CEO adopt a less aggressive CEO replacement rule than independent boards. While this behavior is inefficient ex post, it has positive ex ante incentive effects. The model suggests that independent boards (dependent boards) are most valuable to shareholders if the problem of providing appropriate incentives to the CEO is weak (severe).
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
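The inequality measurement and decomposition described in this abstract rests on standard scalar inequality indices. A minimal, self-contained sketch of one such index (the Gini coefficient) on hypothetical household data — the function and all numbers below are illustrative, not taken from the paper:

```python
def gini(values):
    """Gini coefficient via the rank-weighted (mean-difference) formula.

    0 means perfect equality; values near 1 mean one household holds
    almost all of the wealth.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    rank_weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * rank_weighted / (n * total) - (n + 1.0) / n

# Hypothetical household data: equity wealth is far more concentrated
# than other net wealth, so it can matter for overall net wealth
# inequality despite its limited share of totals.
equity = [0, 0, 0, 5, 95]      # few households hold most equity
other = [20, 25, 30, 35, 40]   # housing etc., more evenly spread
net = [e + o for e, o in zip(equity, other)]
```

Here `gini(equity)` exceeds `gini(net)`, which in turn exceeds `gini(other)`, illustrating how a small but concentrated wealth component can pull up net wealth inequality.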
We argue that the shape of the system-size dependence of strangeness production in nucleus-nucleus collisions can be understood in a picture that is based on the formation of clusters of overlapping strings. A string percolation model combined with a statistical description of the hadronization yields a quantitative agreement with the data at √s_NN = 17.3 GeV. The model is also applied to RHIC energies.
We investigate the sensitivity of several observables to the density dependence of the symmetry potential within the microscopic transport model UrQMD (ultrarelativistic quantum molecular dynamics model). The same systems are used to probe the symmetry potential at both low and high densities. The influence of the symmetry potentials on the yields of pi-, pi+, the pi-/pi+ ratio, the n/p ratio of free nucleons and the t/3He ratio is studied for neutron-rich heavy ion collisions (208Pb+208Pb, 132Sn+124Sn, 96Zr+96Zr) at E_b = 0.4A GeV. We find that these multiple probes provide comprehensive information on the density dependence of the symmetry potential.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
(2005)
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alert the neighboring cells, as well as the whole plant, to activate defense responses that limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown, except for the recognition process of pathogens. We describe the NRP gene of soybean, which is specifically induced during this programmed cell death and contains a novel protein domain that is commonly found in different plant proteins.
Results: The sequence analysis of the protein encoded by the NRP gene from soybean led to the identification of a novel domain, which we named DCD, because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role for these proteins in phytohormone response, embryo development and programmed cell death induced by pathogens or ozone.
Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins to gain further molecular insights into these processes.
Background: Osteoarthritis (OA) has a high prevalence in primary care. Conservative, guideline-oriented approaches aiming at improving pain treatment and increasing physical activity have been proven effective in several contexts outside the primary care setting, for instance the Arthritis Self-Management Programs (ASMPs). But it remains unclear whether these comprehensive, evidence-based approaches can improve patients' quality of life if they are provided in a primary care setting. Methods/Design: PraxArt is a cluster-randomised controlled trial with GPs as the unit of randomisation. The aim of the study is to evaluate the impact of a comprehensive, evidence-based medical education of GPs on individual care and patients' quality of life. 75 GPs were randomised either to intervention group I or II or to a control group. Each GP will include 15 patients suffering from osteoarthritis according to the ACR criteria. In intervention group I, GPs will receive medical education and patient education leaflets including a physical exercise program. In intervention group II, the same is provided, but in addition a practice nurse will be trained to monitor, via monthly telephone calls, adherence to the GPs' prescriptions and advice, and to ask about increasing pain and possible side effects of medication. In the control group, no intervention will be applied at all. The main outcome measurement for patients' quality of life is the GERMAN-AIMS2-SF questionnaire. In addition, data about patients' satisfaction (using a modified EUROPEP tool), medication, health care utilization, comorbidity, physical activity and depression (using PHQ-9) will be retrieved. Measurements (pre-data collection) will take place in months I-III, starting in June 2005. Post-data collection will be performed after 6 months. Discussion: Despite the high prevalence and increasing incidence, comprehensive and evidence-based treatment approaches for OA in a primary care setting are neither established nor evaluated in Germany.
If the evaluation of the presented approach reveals a clear benefit, it is planned to provide these GP-centred interventions on a much larger scale.
Cancer has become one of the most fatal diseases. The Heidelberg Heavy Ion Cancer Therapy (HICAT) has the potential to become an important and efficient treatment method because of its excellent “Bragg peak” characteristics and on-line irradiation control by PET diagnostics. The dedicated Heidelberg Heavy Ion Cancer Therapy Project includes two ECR ion sources, an RF linear injector, a synchrotron and three treatment rooms. It will deliver 4×10^10 protons, 1×10^10 He ions, 1×10^9 carbon ions, or 5×10^8 oxygen ions per synchrotron cycle at beam energies of 50-430 AMeV for the treatments. The RF linear injector consists of a 400 AkeV RFQ and of a very compact 7 AMeV IH-DTL accelerator operated at 216.816 MHz. The development of the IH-DTL within the HICAT project is a great challenge with respect to the present state of the DTL art for the following reasons: • the highest operating frequency (216.816 MHz) of all IH-DTL cavities; • an extremely large cavity length-to-diameter ratio of about 11; • an IH-DTL with three internal triplets; • the highest effective voltage gain per meter (5.5 MV/m); • a very short MEBT design for the beam matching. The following achievements have been reached during the development of the IH-DTL injector for HICAT: The KONUS beam dynamics design with the LORASR code fulfills the beam requirements of the HICAT synchrotron at the injection point. The simulations for the IH-DTL injector have been performed not only with a homogeneous input beam, but also with the actual particle distribution from the exit of the HICAT RFQ accelerator as delivered by the PARMTEQ code. The output longitudinal normalized emittance for 95% of all particles is 2.00 AkeV ns, with an emittance growth of less than 24%, while the X-X’ and Y-Y’ normalized emittances are 0.77 mm mrad and 0.62 mm mrad, respectively. The emittance growth in X-X’ is less than 18%, and in Y-Y’ less than 5%.
Based on the transverse envelopes of the transported particles, the buncher drift tubes at the RFQ high-energy end were redesigned to obtain a higher transit time factor for this novel RFQ internal buncher. An optimized effective buncher gap voltage of 45.4 kV was calculated to deliver a minimized longitudinal beam emittance, while the influence of the effective buncher voltage on the transverse emittance can be neglected. Six different tuning concepts were investigated in detail while tuning the 1:2-scaled HICAT IH model cavity. ‘Volume Tuning’ by a variation of the cavity cross-sectional area can compensate the unbalanced capacitance distribution in case of an extreme beta-lambda variation along an IH cavity. ‘Additional Capacitance Plates’, i.e. copper sheets clamped on drift tube stems, are a fast way of checking the tuning sensitivity, but they will finally be replaced by massive copper blocks mounted on the drift tube girders. ‘Lens Coupling’ is an important tuning step to stabilize the operation mode and to increase or decrease the coupling between neighboring sections. ‘Tube Tuning’ is the fine-tuning concept and also the standard tuning method to reach the needed field distributions as well as the gap voltage distributions. ‘Undercut Tuning’ is a very sensitive tuning for the end sections and with respect to the voltage distribution balance along the structure. The different types of ‘plungers’ in the 3rd and 4th sections have different effects on the resonance frequency and on the field distribution. The different triplet stems and the geometry of the cavity end were also investigated to reach the design field and voltage distributions. Finally, the needed uniform field distribution along the IH-DTL cavity and the corresponding effective voltage distribution were realized; the remaining maximum gap voltage difference was less than 5% for the model cavity. Several important higher-order modes were also measured.
The RF tuning of the IH-DTL model cavity delivered the final geometry parameters of the IH-DTL power cavity. A rectangular cavity cross section was adopted for the first time for this IH-DTL cavity. This eases the realization of the volume tuning concept in the 1st and 2nd sections. Lens coupling determines the final distance between the triplet and the girder. The triplets are mounted on the lower cavity half shell. Microwave Studio simulations have been carried out not only for the HICAT model cavity, but also for the final geometry of the IH-DTL power cavity. The field distribution for the operation mode H110 fits the model cavity measurement, as do the higher-order modes. The simulations prove the IH-DTL geometrical design. On the other hand, a single simulation with 2.3 million mesh points for the full cross-sectional area required more than 15 hours of CPU time on a DELL PC (Intel Pentium 4, 2.4 GHz, 2.096 GB RAM), pushing precision and computing resources to their limit when calculating the real parameters for the two final machining iterations during production. The shunt impedance of the IH-DTL power cavity is estimated by comparison with existing tanks to be about 195.8 MΩ/m, which fits the simulation result of 200.3 MΩ/m when the conductivity is reduced to 5.0×10^7 Ω^-1 m^-1. The effective shunt impedance is 153 MΩ/m. The needed RF power is 755 kW. The expected quality factor of the IH-DTL cavity is about 15600. The IH-DTL power cavity tuning measurements before cavity copper plating have been performed. The results are within the specifications. There is no doubt that the needed accuracy of the voltage distribution will be reached with the foreseen fine-tuning concepts in the last steps.
Fluctuations and NA49
(2005)
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under "the opportunistic approach to disinflation" a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. Classification: E31, E52, E58, E61. July 2005.
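The contrast between the two rules described above can be made concrete. In this stylized sketch (the coefficients, neutral rate, and band width are hypothetical choices, not taken from the paper), the linear rule reacts to the inflation gap everywhere, while the opportunistic rule ignores inflation gaps inside a band around target and reacts only to the portion beyond it:

```python
def linear_rule(r_neutral, inflation, target, output_gap, a=1.5, b=0.5):
    """Conventional rule: respond linearly to both gaps everywhere."""
    return r_neutral + a * (inflation - target) + b * output_gap

def opportunistic_rule(r_neutral, inflation, target, output_gap,
                       band=1.0, a=1.5, b=0.5):
    """Inside the band around target: stabilize output only.
    Outside the band: react aggressively, but only to the part of
    the inflation gap that exceeds the band."""
    gap = inflation - target
    if abs(gap) <= band:
        return r_neutral + b * output_gap
    excess = gap - band if gap > 0 else gap + band
    return r_neutral + a * excess + b * output_gap
```

With a 1-point band, inflation of 2.5% against a 2% target moves the linear rate but leaves the opportunistic rate at neutral (given a zero output gap), capturing the "band of inaction" for inflation.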
This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided. Classification: C6, D9, E2. July 28, 2005.
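The kind of rootfinding-free solution step described here can be sketched on a two-period consumption-saving problem: instead of numerically solving the Euler equation for consumption at each point of a fixed cash-on-hand grid, one fixes a grid of end-of-period assets, inverts the Euler equation in closed form, and recovers the cash-on-hand grid endogenously. All parameter values below are hypothetical and this is an illustrative sketch, not the paper's software:

```python
# Two-period CRRA consumption-saving problem solved without rootfinding.
def solve_without_rootfinding(beta=0.96, R=1.03, rho=2.0, y=1.0,
                              a_grid=None):
    """Return (m, c) pairs: optimal consumption c at cash-on-hand m."""
    if a_grid is None:
        a_grid = [0.1 * k for k in range(1, 21)]  # end-of-period assets
    policy = []
    for a in a_grid:
        m_next = R * a + y       # cash-on-hand tomorrow
        c_next = m_next          # terminal period: consume everything
        # Euler equation c**(-rho) = beta * R * c_next**(-rho)
        # inverts analytically -- no numerical solver needed:
        c = (beta * R) ** (-1.0 / rho) * c_next
        policy.append((a + c, c))  # endogenous gridpoint m = a + c
    return policy
```

Each pass through the asset grid produces one point of the consumption function directly; the expensive inner rootfinding loop of conventional grid methods never occurs.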
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate, in a spatially resolved way, global-scale groundwater recharge. In this report, improved model estimates of diffuse groundwater recharge at the global-scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model) which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions as compared to the model results before tuning, and the new estimates are more similar to country level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
Prion diseases, also called transmissible spongiform encephalopathies, are a group of fatal neurodegenerative conditions that affect humans and a wide variety of animals. To date, no therapeutic or prophylactic approach against prion diseases is available. The causative infectious agent is the prion, also termed PrPSc, which is a pathological conformer of a cellular protein named prion protein PrPc. Prions are thought to multiply upon conversion of PrPc to PrPSc in a self-propagating manner. Immunotherapeutic strategies directed against PrPc represent a possible approach to preventing or curing prion diseases. Accordingly, it has already been shown in animal models that passive immunization delays the onset of prion diseases. The present thesis aimed at the development of a candidate vaccine for active immunization against prion diseases, an immune response which requires the circumvention of host tolerance to the self-antigen PrPc. The vaccine development was approached using virus-like particles (retroparticles) derived from either the murine leukemia virus (MLV) or the human immunodeficiency virus (HIV). The display of PrP on the surface of such particles was addressed for both the cellular and the pathogenic form of PrP. The display of PrPc was achieved by fusion either to the transmembrane domain of the platelet-derived growth factor receptor (PDGFR) or to the N-terminal part of the viral envelope protein (Env). In both cases, the corresponding PrPD- and PrPE-retroparticles were successfully produced and analyzed via immunofluorescence, Western blot analysis, immunogold electron microscopy as well as by ELISA methods. Both PrPD- and PrPE-retroparticles showed effective incorporation of N-terminally truncated forms of PrPc, but not of the complete protein. In this context, PrPc revealed the typical glycosylation pattern, which was specifically removed by a glycosidase enzyme.
Upon display of PrPc on retroparticles, the protein remained detectable by PrP-specific antibodies under native conditions. Electron microscopy analysis of PrPc variants revealed no alteration of the characteristic retroviral morphology of the generated particles. MLV-derived PrPD-retroparticles were successfully used in immunization studies. Contrary to approaches using bacterially expressed PrPc, the immunization of mice resulted in a specific antibody response. The display of the pathogenic isoform was attempted via two different strategies. The first was directed at the conversion of the proteinase K (PK) sensitive form of PrP on the surface of PrPD-retroparticles into the PK-resistant form. Despite specific adaptation of the PK digestion assay for detecting resistant PrP, no PrP conversion was observed for PrPD-retroparticles. The second approach utilized a replication-competent variant of the ecotropic MLV displaying PrPc on the viral Env protein. This MLV variant was stable in cell culture for six passages but did not replicate on scrapie-infected, PrPSc-propagating neuroblastoma cells. Thus, besides PrPc-displaying virus-like particles, a replication-competent MLV variant was obtained which stably incorporated PrPc at the N-terminus of the viral Env protein. The incorporation of the cell-surface-located PrPc into particles was expected from previously obtained data on protein display in the context of retrovirus-derived particles. Thus, the lack of incorporation observed for the complete PrPc sequence was rather unexpected, and incorporation was found to be inhibited both upon fusion to the PDGFR and to the viral Env. In contrast to N-terminally truncated PrPc, the complete PrPc was shown to exhibit increased cell-surface internalization rates and half-life times, possibly contributing to the observed results. The PrP vaccination approach described in this work represents the first successful system inducing PrP-specific antibody responses against the prion protein in wt mice.
Possible explanations are based on the induction of specific T cell help or on effects of the innate immunity, respectively. The MLV- and HIV-derived particles bearing the PrP-coding sequence, or the replication-competent variants generated during this thesis, might help to further improve the PrP-specific immune response.
Using CORSIKA for simulating extensive air showers, we study the relation between the shower characteristics and features of hadronic multiparticle production at low energies. We report on investigations of the typical energies and phase-space regions of secondary particles which are important for muon production in extensive air showers. Possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.
Globalized justice - fragmented justice. Human rights violations by "private" transnational actors
(2005)
Plenary lecture, World Congress of Legal and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch "private" transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 529-546.
In recent years, much effort has gone into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with the performance of its manually designed ancestor ROSANA.
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as inputs, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The different terms in the balance are compared to the results of other global models, and uncertainties are described. The total global surface N surplus increased from 161 Tg N yr-1 in 1961 to 230 Tg N yr-1 in 1995. Using assumptions for the scenario A1B of the Special Report on Emissions Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC) as quantified by the IMAGE model, the total global surface N surplus is estimated to be 229 Tg N yr-1 in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas of the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
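The surface balance itself is simple arithmetic over the six terms named in the abstract (Nanm, Nfert, Ndep, Nfix, Nexp, Nvol); a minimal sketch, in which the numeric values are hypothetical illustrations rather than the paper's estimates:

```python
def n_surplus(n_anm, n_fert, n_dep, n_fix, n_exp, n_vol):
    """Surface N surplus (e.g. in Tg N/yr): inputs (livestock
    excretion, synthetic fertilizer, atmospheric deposition,
    biological fixation) minus outputs (N export by plant uptake,
    ammonia volatilization)."""
    return (n_anm + n_fert + n_dep + n_fix) - (n_exp + n_vol)

# A grid cell where the assumed crop N export outstrips all inputs
# yields a negative balance -- the kind of inconsistency between
# fertilizer-use and crop-production assumptions flagged above.
cell_balance = n_surplus(n_anm=10, n_fert=20, n_dep=5, n_fix=5,
                         n_exp=50, n_vol=5)
```

Here `cell_balance` is -15: total inputs of 40 against outputs of 55, i.e. more N leaving in harvested crops than the scenario supplies.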
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4 428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation. Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one’s face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual’s private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. They lack efficiency, however, as they are hard to enforce in practice. In this paper, we discuss a supportive infrastructure aimed at the distribution channel; as soon as a picture is publicly available, the exposed individual has a chance to find it and take proper action.