The recently proposed baryon-strangeness correlation (C_BS) is studied with a string-hadronic transport model (UrQMD) for various energies from E_lab=4 AGeV to \sqrt s=200 AGeV. It is shown that rescattering among secondaries cannot mimic the predicted correlation pattern expected for a Quark-Gluon-Plasma. However, we find a strong increase of the C_BS correlation function with decreasing collision energy, both for pp and Au+Au/Pb+Pb reactions. For Au+Au reactions at the top RHIC energy (\sqrt s=200 AGeV), the C_BS correlation is constant for all centralities and compatible with the pp result. With increasing width of the rapidity window, C_BS roughly follows the shape of the baryon rapidity distribution. We suggest studying the energy and centrality dependence of C_BS, which allows one to gain information on the onset of the deconfinement transition in temperature and volume.
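As a minimal illustration (not the paper's own analysis code), the C_BS observable can be evaluated from event-wise net baryon number B and net strangeness S, assuming the standard definition C_BS = -3 (&lt;BS&gt; - &lt;B&gt;&lt;S&gt;) / (&lt;S^2&gt; - &lt;S&gt;^2):

```python
import numpy as np

def c_bs(B, S):
    """Baryon-strangeness correlation
    C_BS = -3 (<BS> - <B><S>) / (<S^2> - <S>^2),
    evaluated from event-wise net baryon number B and net strangeness S."""
    B, S = np.asarray(B, float), np.asarray(S, float)
    cov = (B * S).mean() - B.mean() * S.mean()
    return -3.0 * cov / ((S * S).mean() - S.mean() ** 2)

# Toy check: in an idealized QGP each strange quark carries baryon number
# +1/3 and strangeness -1, so S = -3B event by event and C_BS = 1.
B = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
print(round(c_bs(B, -3.0 * B), 6))  # -> 1.0
```

In a hadron gas, baryon number and strangeness are carried by different particle species (e.g. kaons carry S but no B), so C_BS deviates from the ideal-QGP value of 1; that contrast is what makes the observable a deconfinement probe.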
We analyze longitudinal pion spectra from E_lab= 2AGeV to sqrt s_NN=200GeV within Landau's hydrodynamical model. From the measured data on the widths of the pion rapidity spectra, we extract the sound velocity c_s in the early stage of the reactions. It is found that the sound velocity has a local minimum (indicating a softest point in the equation of state, EoS) at E_beam=30AGeV. This softening of the EoS is compatible with the assumption of the formation of a mixed phase at the onset of deconfinement.
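The extraction step can be sketched as follows, assuming one commonly used form of Landau's prediction for the rapidity width, sigma_y^2 = (8/3) c_s^2/(1 - c_s^4) * ln(sqrt(s_NN)/(2 m_p)); inverting it for c_s^2 amounts to solving a quadratic:

```python
import numpy as np

M_P = 0.938  # proton mass in GeV (natural units)

def cs2_from_width(sigma_y, sqrt_s_nn):
    """Invert sigma_y^2 = (8/3) c_s^2/(1 - c_s^4) * ln(sqrt(s)/2m_p)
    for the squared speed of sound; with x = c_s^2 this is the
    quadratic k x^2 + x - k = 0, k = sigma_y^2 / ((8/3) ln(sqrt(s)/2m_p))."""
    L = np.log(sqrt_s_nn / (2.0 * M_P))
    k = sigma_y ** 2 / ((8.0 / 3.0) * L)
    return (-1.0 + np.sqrt(1.0 + 4.0 * k * k)) / (2.0 * k)

# Consistency check: a width generated with the ideal-gas value
# c_s^2 = 1/3 should be inverted back to 1/3.
sqrt_s = 17.3
sigma_ideal = np.sqrt((8.0 / 3.0) * (1.0 / 3.0) / (1.0 - 1.0 / 9.0)
                      * np.log(sqrt_s / (2.0 * M_P)))
print(round(cs2_from_width(sigma_ideal, sqrt_s), 3))  # -> 0.333
```

A local minimum of c_s^2 extracted this way as a function of beam energy is what the abstract refers to as the softest point of the equation of state.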
The results from the STAR Collaboration on directed flow (v1), elliptic flow (v2), and the fourth harmonic (v4) in the anisotropic azimuthal distribution of particles from Au+Au collisions at sqrt[sNN]=200GeV are summarized and compared with results from other experiments and theoretical models. Results for identified particles are presented and fit with a blast-wave model. Different anisotropic flow analysis methods are compared and nonflow effects are extracted from the data. For v2, scaling with the number of constituent quarks and parton coalescence are discussed. For v4, scaling with v2^2 and quark coalescence are discussed.
Midrapidity open charm spectra from direct reconstruction of D0(D0-bar) --> K∓ pi± in d+Au collisions and indirect electron-positron measurements via charm semileptonic decays in p+p and d+Au collisions at sqrt[sNN]=200 GeV are reported. The D0(D0-bar) spectrum covers a transverse momentum (pT) range of 0.1<pT<3 GeV/c, whereas the electron spectra cover a range of 1<pT<4 GeV/c. The electron spectra show approximate binary collision scaling between p+p and d+Au collisions. From these two independent analyses, the differential cross section per nucleon-nucleon binary interaction at midrapidity for open charm production from d+Au collisions at BNL RHIC is d sigma NNcc-bar/dy=0.30±0.04(stat)±0.09(syst) mb. The results are compared to theoretical calculations. Implications for charmonium results in A+A collisions are discussed.
We present the first large-acceptance measurement of event-wise mean transverse momentum <pt> fluctuations for Au-Au collisions at nucleon-nucleon center-of-momentum collision energy sqrt[sNN] = 130 GeV. The observed nonstatistical <pt> fluctuations substantially exceed in magnitude fluctuations expected from the finite number of particles produced in a typical collision. The r.m.s. fractional width excess of the event-wise <pt> distribution is 13.7±0.1(stat)±1.3(syst)% relative to a statistical reference, for the 15% most-central collisions and for charged hadrons within pseudorapidity range |eta| < 1, full 2 pi azimuth, and 0.15 <= pt <= 2 GeV/c. The width excess varies smoothly but nonmonotonically with collision centrality and does not display rapid changes with centrality which might indicate the presence of critical fluctuations. The reported <pt> fluctuation excess is qualitatively larger than those observed at lower energies and differs markedly from theoretical expectations. Contributions to <pt> fluctuations from semihard parton scattering in the initial state and dissipation in the bulk colored medium are discussed.
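The idea of a width excess over a statistical reference can be illustrated with a simplified stand-in for the STAR measure (not the collaboration's exact estimator): compare the r.m.s. width of event-wise &lt;pt&gt; to the central-limit expectation sigma_hat/sqrt(n) from independent particle emission:

```python
import numpy as np

rng = np.random.default_rng(0)

def width_excess(events):
    """Fractional excess of the event-wise <pt> r.m.s. width over the
    finite-multiplicity statistical reference sigma_hat / sqrt(n).
    Simplified illustration; events = list of per-event pt arrays."""
    means = np.array([e.mean() for e in events])
    ns = np.array([len(e) for e in events])
    sigma_hat2 = np.concatenate(events).var()   # inclusive single-particle variance
    ref2 = np.mean(sigma_hat2 / ns)             # independent-emission reference
    return np.sqrt(means.var() / ref2) - 1.0

# Independent emission (no dynamical correlations): excess is consistent with zero.
events = [rng.exponential(0.5, size=rng.integers(200, 400)) for _ in range(2000)]
print(round(width_excess(events), 3))
```

A genuine dynamical correlation among particles in an event (e.g. from jets or flow) inflates the event-wise variance above the reference, producing a positive excess like the 13.7% reported above.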
The short-lived K(892)* resonance provides an efficient tool to probe properties of the hot and dense medium produced in relativistic heavy-ion collisions. We report measurements of K* in sqrt[sNN]=200GeV Au+Au and p+p collisions reconstructed via its hadronic decay channels K(892)*0-->K pi and K(892)*±-->K0S pi ± using the STAR detector at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. The K*0 mass has been studied as a function of pT in minimum bias p+p and central Au+Au collisions. The K*pT spectra for minimum bias p+p interactions and for Au+Au collisions in different centralities are presented. The K*/K yield ratios for all centralities in Au+Au collisions are found to be significantly lower than the ratio in minimum bias p+p collisions, indicating the importance of hadronic interactions between chemical and kinetic freeze-outs. A significant nonzero K*0 elliptic flow (v2) is observed in Au+Au collisions and is compared to the K0S and Lambda v2. The nuclear modification factor of K* at intermediate pT is similar to that of K0S but different from Lambda . This establishes a baryon-meson effect over a mass effect in the particle production at intermediate pT (2<pT <= 4GeV/c).
We present a systematic analysis of two-pion interferometry in Au+Au collisions at sqrt[sNN]=200GeV using the STAR detector at the Relativistic Heavy Ion Collider. We extract the Hanbury-Brown and Twiss radii and study their multiplicity, transverse momentum, and azimuthal angle dependence. The Gaussianness of the correlation function is studied. Estimates of the geometrical and dynamical structure of the freeze-out source are extracted by fits with blast-wave parametrizations. The expansion of the source and its relation with the initial energy density distribution is studied.
Correlations in the hadron distributions produced in relativistic Au+Au collisions are studied in the discrete wavelet expansion method. The analysis is performed in the space of pseudorapidity (|eta| <= 1) and azimuth (full 2 pi) in bins of transverse momentum (pt) from 0.14 <= pt <= 2.1 GeV/c. In peripheral Au+Au collisions a correlation structure ascribed to minijet fragmentation is observed. It evolves with collision centrality and pt in a way not seen before, which suggests strong dissipation of minijet fragmentation in the longitudinally expanding medium.
The challenging intricacies of strongly correlated electronic systems necessitate the use of a variety of complementary theoretical approaches. In this thesis, we analyze two distinct aspects of strong correlations and develop further or adapt suitable techniques. First, we discuss magnetization transport in insulating one-dimensional spin rings described by a Heisenberg model in an inhomogeneous magnetic field. Due to quantum mechanical interference of magnon wave functions, persistent magnetization currents are shown to exist in such a geometry in analogy to persistent charge currents in mesoscopic normal metal rings. The second, longer part is dedicated to a new aspect of the functional renormalization group technique for fermions. By decoupling the interaction via a Hubbard-Stratonovich transformation, we introduce collective bosonic variables from the beginning and analyze the hierarchy of flow equations for the coupled field theory. The possibility of a cutoff in the momentum transfer of the interaction leads to a new flow scheme, which we will refer to as the interaction cutoff scheme. Within this approach, Ward identities for forward scattering problems are conserved at every instant of the flow leading to an exact solution of a whole hierarchy of flow equations. This way the known exact result for the single-particle Green's function of the Tomonaga-Luttinger model is recovered.
Market discipline for financial institutions can be imposed not only from the liability side, as has often been stressed in the literature on the use of subordinated debt, but also from the asset side. This will be particularly true if good lending opportunities are in short supply, so that banks have to compete for projects. In such a setting, borrowers may demand that banks commit to monitoring by requiring that they use some of their own capital in lending, thus creating an asset market-based incentive for banks to hold capital. Borrowers can also provide banks with incentives to monitor by allowing them to reap some of the benefits from the loans, which accrue only if the loans are in fact paid off. Since borrowers do not fully internalize the cost of raising capital to the banks, the level of capital demanded by market participants may be above the one chosen by a regulator, even when capital is a relatively costly source of funds. This implies that capital requirements may not be binding, as recent evidence seems to indicate. JEL Classification: G21, G38
We explore the macro/finance interface in the context of equity markets. In particular, using half a century of Livingston expected business conditions data we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a statistically and economically significant counter-cyclical fashion: depressed expected business conditions are associated with high expected excess returns. Moreover, inclusion of expected business conditions in otherwise standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R2. Expected business conditions retain predictive power even after controlling for an important and recently introduced non-financial predictor, the generalized consumption/wealth ratio, which accords with the view that expected business conditions play a role in asset pricing different from and complementary to that of the consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, while time-varying consumption/wealth may capture time-varying risk aversion. JEL Classification: G12
We provide a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model as they are useful, but not used in equilibrium because of basis-risk. JEL Classification: D82, G22
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
EU financial integration : is there a 'Core Europe'? ; evidence from a cluster-based approach
(2005)
Numerous recent studies, e.g. EU Commission (2004a), Baele et al. (2004), Adam et al. (2002), and the research pooled in ECB-CFS (2005) and Gaspar, Hartmann, and Sleijpen (2003), have documented progress in EU financial integration from a micro-level view. This paper contributes to this research by identifying groups of financially integrated countries from a holistic, macro-level view. It calculates cross-sectional dispersions, and innovates by applying an inter-temporal cluster analysis to eight euro area countries for the period 1995-2002. The indicators employed represent the money, government bond and credit markets. Our results show that euro countries were divided into two stable groups of financially more closely integrated countries in the pre-EMU period. Back then, geographic proximity and country size might have played a role. This situation has changed remarkably with the euro's introduction. EMU has led to a shake-up both in the number and composition of groups. The evidence puts a question mark behind using Germany as a benchmark in the post-EMU period. The findings suggest as well that financial integration takes place in waves. Stable periods and periods of intense transition alternate. Based on the notion of 'maximum similarity', the results suggest that there exist 'maximum similarity barriers'. It takes extraordinary events, such as EMU, to push the degree of financial integration beyond these barriers. The research encourages policymakers to move forward courageously in the post-FSAP era, and provides comfort that the substantial differences between the current and potentially new euro states can be overcome. The analysis could be extended to the new EU member countries, to the global level, and to additional indicators.
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that despite important reforms and substantial changes of individual elements of the German corporate governance system the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in the governance undermines the stability of the corporate governance system in Germany. Therefore a breakdown of the traditional system leading to a control vacuum or a fundamental change to a capital market-based system could be in the offing.
Small and medium-sized firms typically obtain capital via bank financing. They often rely on a mixture of relationship and arm’s-length banking. This paper explores the reasons for the dominance of heterogeneous multiple banking systems. We show that the incidence of inefficient credit termination and subsequent firm liquidation is contingent on the borrower’s quality and on the relationship bank’s information precision. Generally, heterogeneous multiple banking leads to fewer inefficient credit decisions than monopoly relationship lending or homogeneous multiple banking, provided that the relationship bank’s fraction of total firm debt is not too large.
This paper makes an attempt to present the economics of credit securitisation in a non-technical way, starting from the description and the analysis of a typical securitisation transaction. The paper sketches a theoretical explanation for why tranching, or nonproportional risk sharing, which is at the heart of securitisation transactions, may allow commercial banks to maximize their shareholder value. However, the analysis also makes clear that the conditions under which credit securitisation enhances welfare are fairly restrictive, and require not only an active role of the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort.
We derive the effects of credit risk transfer (CRT) markets on real sector productivity and on the volume of financial intermediation in a model where banks choose their optimal degree of CRT and monitoring. We find that CRT increases productivity in the up-market real sector but decreases it in the low-end segment. If optimal, CRT unambiguously fosters financial deepening, i.e., it reduces credit-rationing in the economy. These effects rely upon the ability of banks to commit to the optimal CRT at the funding stage. The optimal degree of CRT depends on the combination of moral hazard, general riskiness, and the cost of monitoring in non-monotonic ways.
We provide insights into determinants of the rating level of 371 issuers which defaulted in the years 1999 to 2003, and into the leader-follower relationship between Moody’s and S&P. The evidence for the rating level suggests that Moody’s assigns lower ratings than S&P for all observed periods before the default event. Furthermore, we observe two-way Granger causality, which signifies information flow between the two rating agencies. Since lagged rating changes influence the magnitude of the agencies’ own rating changes, it would appear that the two rating agencies apply a policy of taking a severe downgrade through several mild downgrades. Further, our analysis of rating changes shows that issuers with headquarters in the US are less sharply downgraded than non-US issuers. For rating changes by Moody’s we also find that larger issuers seem to be downgraded less severely than smaller issuers.
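The two-way Granger causality test referred to above can be sketched as a pair of lag-augmented regressions; this is a minimal textbook version with a hypothetical simulated series, not the paper's exact specification:

```python
import numpy as np

def granger_F(x, y, p=2):
    """F-statistic for 'x Granger-causes y' with p lags: restricted OLS of
    y_t on its own lags vs. unrestricted OLS adding p lags of x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    xlags = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    ones = np.ones((T - p, 1))
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r = rss(np.hstack([ones, ylags]))            # restricted: own lags only
    rss_u = rss(np.hstack([ones, ylags, xlags]))     # unrestricted: add x lags
    df_denom = (T - p) - (1 + 2 * p)
    return ((rss_r - rss_u) / p) / (rss_u / df_denom)

# Simulated example: x drives y with one lag, so causality runs x -> y only.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_F(x, y) > granger_F(y, x))  # -> True
```

Two-way causality, as found for the rating series, would correspond to both F-statistics being significant, i.e. each agency's lagged rating changes helping to predict the other's.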
This article presents an overview of the contemporary German insurance market, its structure, players, and development trends. First, brief information about the history of the insurance industry in Germany is provided. Second, the contemporary market is analyzed in terms of its legal and economic structure, with statistics on the number of companies, insurance density and penetration, the role of insurers in the capital markets, premiums split, and main market players and their market shares. Furthermore, the three biggest insurance lines—life, health, and property and casualty—are considered in more detail, such as product range, country specifics, and insurance and investment results. A section on regulation outlines its implementation in the insurance sector, offering information on the underlying legislative basis, supervisory body, technical procedures, expected developments, and sources of more detailed information.
Electric charge correlations were studied for p+p, C+C, Si+Si, and centrality selected Pb+Pb collisions at sqrt[sNN]=17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudorapidity correlations of oppositely charged particles were measured using the balance function method. The width of the balance function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
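A single-event version of the balance function can be sketched as follows; the pair-counting convention varies between papers, so this implements one common unordered-pair form as an illustration only:

```python
import numpy as np

def balance_function(eta, q, bins):
    """Single-event balance function (one common convention):
    B(d) = 1/2 [ (N+-(d) - N++(d))/N+  +  (N-+(d) - N--(d))/N- ],
    with unordered pair counts binned in d = |eta_i - eta_j|."""
    eta, q = np.asarray(eta, float), np.asarray(q)
    i, j = np.triu_indices(len(eta), k=1)     # all unordered pairs
    d = np.abs(eta[i] - eta[j])
    like = q[i] == q[j]
    hist = lambda sel: np.histogram(d[sel], bins=bins)[0].astype(float)
    unlike = hist(~like)                      # N+- = N-+ (unordered)
    pp = hist(like & (q[i] > 0))              # N++
    mm = hist(like & (q[i] < 0))              # N--
    npos, nneg = (q > 0).sum(), (q < 0).sum()
    return 0.5 * ((unlike - pp) / npos + (unlike - mm) / nneg)

# Three +/- pairs created at identical eta: each particle's balancing partner
# sits at zero separation, so B peaks in the lowest bin (a narrow width).
b = balance_function([0, 0, 1, 1, 2, 2], [1, -1, 1, -1, 1, -1],
                     bins=[-0.5, 0.5, 1.5, 2.5])
print(b)
```

Late hadronization places balancing charge pairs close in rapidity, narrowing B; that is the logic behind reading the observed width decrease in central Pb+Pb as delayed hadronization.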
German version: Dealing with Paradoxes of Law: Derrida, Luhmann, Wiethölter. In: Christian Joerges and Gunther Teubner (eds.), Rechtsverfassungsrecht: Recht-Fertigungen zwischen Sozialtheorie und Privatrechtsdogmatik. Nomos, Baden-Baden 2003, 249-272.
This paper starts out by pointing out the challenges and weaknesses which the German banking system faces according to the prevailing views among national and international observers. These challenges include a general problem of profitability and, possibly as its main reason, the strong role of public banks. These concerns raise the questions whether the facts support this assessment of a general profitability problem and whether there are reasons to expect a fundamental or structural transformation of the German banking system. The paper contains four sections. The first one presents the evidence concerning the profitability problem in a comparative, international perspective. The second section presents information about the so-called three-pillar system of German banking. What might be surprising in this context is that the group of public banks is not only the largest segment of the German banking system, but that the primary savings banks are also its financially most successful part. The German banking system is highly fragmented. This fact suggests discussing past, present and possible future consolidations in the banking system in the third section. The authors provide evidence to the effect that within-group consolidation has been going on at a rapid pace in the public and the cooperative banking groups in recent years and that this development has not yet come to an end, while within-group consolidation among the large private banks, consolidation across group boundaries at a national level, and cross-border or international consolidation have so far happened only on a limited scale and do not appear to be gaining momentum in the near future. In the last section, the authors develop their explanation for the fact that large-scale and cross-border consolidation has so far not materialized to any great extent.
Drawing on the concept of complementarity, they argue that it would be difficult to expect these kinds of mergers and acquisitions to happen within a financial system which is itself surprisingly stable, or, as one could also call it, resistant to change.
Asset-backed securitization (ABS) has become a viable and increasingly attractive risk management and refinancing method either as a standalone form of structured finance or as securitized debt in Collateralized Debt Obligations (CDOs). However, the absence of industry standardization has prevented rising investment demand from translating into market liquidity comparable to traditional fixed income instruments, in all but a few selected market segments. Particularly low financial transparency and complex security designs inhibit profound analysis of secondary market pricing and how it relates to established forms of external finance. This paper represents the first attempt to measure the intertemporal, bivariate causal relationship between matched price series of equity and ABS issued by the same entity. In a two-dimensional linear system of simultaneous equations we investigate the short-term dynamics and long-term consistency of daily secondary market data from the U.K. Sterling ABS/MBS market and exchange traded shares between 1998 and 2004 with and without the presence of cointegration. Our causality framework delivers compelling empirical support for a strong co-movement between matched price series of ABS-equity pairs, where ABS markets seem to contribute more to price discovery over the long run. Controlling for cointegration, risk-free interest and average market risk of corporate debt hardly alters our results. However, once we qualify the magnitude and direction of price discovery on various security characteristics, such as the ABS asset class, we find that ABS-equity pairs with large-scale CMBS/RMBS and credit card/student loan ABS reveal stronger lead-lag relationships and joint price dynamics than whole business ABS. JEL Classifications: G10, G12, G24
Although the commoditisation of illiquid asset exposures through securitisation facilitates the disciplining effect of capital markets on the risk management, private information about securitised debt as well as complex transaction structures could possibly impair the fair market valuation. In a simple issue design model without intermediaries we maximise issuer proceeds over a positive measure of issue quality, where a direct revelation mechanism (DRM) by profitable informed investors engages endogenous price discovery through auction-style allocation preference as a continuous function of perceived issue quality. We derive an optimal allocation schedule for maximum issuer payoffs under different pricing regimes if asymmetric information requires underpricing. In particular, we study how the incidence of uninformed investors at varying levels of valuation uncertainty and their function of clearing the market affects profitable informed investment. We find that the issuer optimises own payoffs at each valuation irrespective of the applicable pricing mechanism by awarding informed investors the lowest possible allocation (and attendant underpricing) that still guarantees profitable informed investment. Under uniform pricing the composition of the investor pool ensures that informed investors appropriate higher profit than uninformed types. Any reservation utility by issuers lowers the probability of information disclosure by informed investors and the scope of issuers to curtail profitable informed investment. JEL Classifications: D82, G12, G14, G23
Asset securitisation as a risk management and funding tool : what does it hold in store for SMEs?
(2005)
The following chapter critically surveys the attendant benefits and drawbacks of asset securitisation on both financial institutions and firms. It also elicits salient lessons to be learned about the securitisation of SME-related obligations from a cursory review of SME securitisation in Germany as a foray of asset securitisation in a bank-centred financial system paired with a strong presence of SMEs in industrial production. JEL Classification: D81, G15, M20
As a sign of ambivalence in the regulatory definition of capital adequacy for credit risk and the quest for more efficient refinancing sources collateral loan obligations (CLOs) have become a prominent securitisation mechanism. This paper presents a loss-based asset pricing model for the valuation of constituent tranches within a CLO-style security design. The model specifically examines how tranche subordination translates securitised credit risk into investment risk of issued tranches as beneficial interests on a designated loan pool typically underlying a CLO transaction. We obtain a tranchespecific term structure from an intensity-based simulation of defaults under both robust statistical analysis and extreme value theory (EVT). Loss sharing between issuers and investors according to a simplified subordination mechanism allows issuers to decompose securitised credit risk exposures into a collection of default sensitive debt securities with divergent risk profiles and expected investor returns. Our estimation results suggest a dichotomous effect of loss cascading, with the default term structure of the most junior tranche of CLO transactions (“first loss position”) being distinctly different from that of the remaining, more senior “investor tranches”. The first loss position carries large expected loss (with high investor return) and low leverage, whereas all other tranches mainly suffer from loss volatility (unexpected loss). These findings might explain why issuers retain the most junior tranche as credit enhancement to attenuate asymmetric information between issuers and investors. At the same time, the issuer discretion in the configuration of loss subordination within particular security design might give rise to implicit investment risk in senior tranches in the event of systemic shocks. JEL Classifications: C15, C22, D82, F34, G13, G18, G20
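The loss-cascading mechanism described above can be sketched with a small Monte Carlo: i.i.d. exponential default times (a constant-hazard stand-in for the paper's intensity-based simulation) generate pool losses that are allocated bottom-up to tranches defined by attachment points. All parameter values below are illustrative, not calibrated:

```python
import numpy as np

rng = np.random.default_rng(42)

def tranche_losses(n_loans=100, notional=1.0, hazard=0.02, lgd=0.45,
                   horizon=5.0, attach=(0.0, 0.05, 0.15), n_sims=20000):
    """Monte Carlo sketch of loss cascading in a CLO-style structure.
    Returns the expected loss fraction of each tranche (first-loss,
    mezzanine, senior), allocated by simple subordination."""
    pool = n_loans * notional
    # exponential default times for each loan in each scenario
    tau = rng.exponential(1.0 / hazard, size=(n_sims, n_loans))
    pool_loss = lgd * notional * (tau <= horizon).sum(axis=1)
    edges = np.array(list(attach) + [1.0]) * pool      # tranche boundaries
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        loss = np.clip(pool_loss, lo, hi) - lo         # subordination waterfall
        out.append(loss.mean() / (hi - lo))            # expected loss fraction
    return out

fl, mezz, senior = tranche_losses()
print(fl > mezz > senior)  # -> True: the first-loss piece absorbs defaults first
```

The sketch reproduces the dichotomy the abstract describes: the first-loss position carries a large expected loss, while more senior tranches are exposed mainly to the tail of the pool-loss distribution (unexpected loss).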
System-size dependence of strangeness production in nucleus-nucleus collisions at √sNN = 17.3 GeV
(2005)
Emission of pi, K, phi and Lambda was measured in near-central C+C and Si+Si collisions at 158 AGeV beam energy. Together with earlier data for p+p, S+S and Pb+Pb, the system-size dependence of relative strangeness production in nucleus-nucleus collisions is obtained. Its fast rise and the saturation observed at about 60 participating nucleons can be understood as the onset of the formation of coherent partonic subsystems of increasing size. PACS numbers: 25.75.-q
Results are presented on Omega production in central Pb+Pb collisions at 40 and 158 AGeV beam energy. Given are transverse-mass spectra, rapidity distributions, and total yields for the sum Omega+Antiomega at 40 AGeV and for Omega and Antiomega separately at 158 AGeV. The yields are strongly under-predicted by the string-hadronic UrQMD model and are in better agreement with predictions from hadron gas models. PACS numbers: 25.75.Dw
The phase diagram of strongly interacting matter is discussed within the exactly solvable statistical model of quark-gluon bags. The model predicts two phases of matter: the hadron gas at low temperature T and baryonic chemical potential muB, and the quark-gluon gas at high T and/or muB. The nature of the phase transition depends on the form of the bag mass-volume spectrum (its pre-exponential factor), which is expected to change with the muB/T ratio. It is therefore likely that the line of the 1st order transition at a high muB/T ratio is followed by the line of the 2nd order phase transition at an intermediate muB/T, and then by the lines of "higher order transitions" at a low muB/T.
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloonborne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum of total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l’Adour, France. For both flights a ClO daylight and nighttime vertical profile could be derived over an altitude range of approximately 15–31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l’Adour 1999 show a good agreement with the ClO measurements. For the flight launched in León 1996, a similar good agreement is found, except at around ~650 K potential temperature (~26 km altitude). However, a tendency is found that for solar zenith angles greater than 86°–87° the simulated ClO mixing ratios substantially overestimate measured ClO by approximately a factor of 2.5 or more for both flights. Therefore we conclude that no indication can be deduced from the presented ClO measurements that substantial uncertainties exist in midlatitude chlorine chemistry of the stratosphere. An exception is the situation at solar zenith angles greater than 86°–87°, where model simulations substantially overestimate ClO observations.
Results are presented from a search for the decays D0 -> K- pi+ and anti-D0 -> K+ pi- in a sample of 3.8x10^6 central Pb-Pb events collected with a beam energy of 158A GeV by NA49 at the CERN SPS. No signal is observed. An upper limit on D0 production is derived and compared to predictions from several models.
Particle production in central Pb+Pb collisions was studied with the NA49 large acceptance spectrometer at the CERN SPS at beam energies of 20, 30, 40, 80, and 158 GeV per nucleon. A change of the energy dependence is observed around 30A GeV for the yields of pions and strange particles as well as for the shapes of the transverse mass spectra. At present only a reaction scenario with onset of deconfinement is able to reproduce the measurements.
Despite much restructuring and many innovations in recent years, the securities transaction industry in the European Union is still a highly inefficient and inconsistently configured system for cross-border transactions. This paper analyzes the functions performed, the institutions involved and the parameters that shape market and ownership structure in the industry. Of particular interest are microeconomic incentives of the main players that can be in contradiction to social welfare. We develop a framework and analyze three consistent systems for the securities transaction industry in the EU that offer superior efficiency to the current, inefficient arrangement. Some policy advice is given for selecting the 'best' system for the Single European Financial Market.
In recent years stock exchanges have been increasingly diversifying their operations into related business areas such as derivatives trading, post-trading services and software sales. This trend can be observed most notably among profit-oriented trading venues. While the pursuit of diversification is likely to be driven by the attractiveness of these investment opportunities, it is still an open question whether certain integration activities are also efficient, both from a social welfare and from the exchanges' perspective. Academic contributions have so far analyzed different business models primarily from the social welfare perspective, whereas there is only little literature considering their impact on the exchange itself. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the factor productivity of exchanges with different business models. Our findings suggest three conclusions: (1) Integration activity comes at the cost of increased operational complexity, which in some cases outweighs the potential synergies between related activities and therefore leads to technical inefficiencies and lower productivity growth. (2) We find no evidence that vertical integration is more efficient and productive than other business models. This finding could contribute to the ongoing discussion about the merits of vertical integration from a social welfare perspective. (3) The existence of strong in-house IT competence seems to be beneficial in overcoming these inefficiencies.
Academic contributions on the demutualization of stock exchanges have so far been predominantly devoted to social welfare issues, whereas there is scarce empirical literature on the impact of a governance change on the exchange itself. While there is consensus that the case for demutualization is predominantly driven by the need to improve the exchange's competitiveness in a changing business environment, it remains unclear how different governance regimes actually affect stock exchange performance. Some authors propose that a public listing is the governance arrangement best suited to improving an exchange's competitiveness. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the efficiency and productivity of exchanges with differing governance arrangements. For this purpose we calculate in a first step individual efficiency and productivity values via DEA. In a second step we regress the derived values against variables that, amongst others, map the institutional arrangement of the exchanges, in order to determine efficiency and productivity differences between (1) mutuals, (2) demutualized but customer-owned exchanges, and (3) publicly listed and thus at least partly outsider-owned exchanges. We find evidence that demutualized exchanges exhibit higher technical efficiency than mutuals. However, they perform relatively poorly as far as productivity growth is concerned. Furthermore, we find no evidence that publicly listed exchanges possess higher efficiency and productivity values than demutualized exchanges with a customer-dominated structure. We conclude that the merits of outside ownership possibly lie in other areas, such as resolving conflicts of interest among overly heterogeneous members.
It is widely believed that the ideal board in corporations is composed almost entirely of independent (outside) directors. In contrast, this paper shows that some lack of board independence can be in the interest of shareholders. This follows because a lack of board independence serves as a substitute for commitment. Boards that are dependent on the incumbent CEO adopt a less aggressive CEO replacement rule than independent boards. While this behavior is inefficient ex post, it has positive ex ante incentive effects. The model suggests that independent boards (dependent boards) are most valuable to shareholders if the problem of providing appropriate incentives to the CEO is weak (severe).
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
We argue that the shape of the system-size dependence of strangeness production in nucleus-nucleus collisions can be understood in a picture that is based on the formation of clusters of overlapping strings. A string percolation model combined with a statistical description of the hadronization yields a quantitative agreement with the data at sqrt s_NN = 17.3 GeV. The model is also applied to RHIC energies.
We investigate the sensitivity of several observables to the density dependence of the symmetry potential within the microscopic transport model UrQMD (ultrarelativistic quantum molecular dynamics model). The same systems are used to probe the symmetry potential at both low and high densities. The influence of the symmetry potentials on the yields of pi-, pi+, the pi-/pi+ ratio, the n/p ratio of free nucleons and the t/3He ratio is studied for neutron-rich heavy ion collisions (208Pb+208Pb, 132Sn+124Sn, 96Zr+96Zr) at E_b=0.4A GeV. We find that these multiple probes provide comprehensive information on the density dependence of the symmetry potential.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
(2005)
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alert the neighboring cells as well as the whole plant to activate defense responses to limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown except for the recognition process of pathogens. We delineate the NRP-gene in soybean, which is specifically induced during this programmed cell death and encodes a protein containing a novel domain that is commonly found in different plant proteins.
Results: The sequence analysis of the protein encoded by the NRP-gene from soybean led to the identification of a novel domain, which we named DCD because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and the rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role of these proteins in phytohormone response, embryo development and programmed cell death induced by pathogens or ozone.
Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins to gain further molecular insights into these processes.
Background: Osteoarthritis (OA) has a high prevalence in primary care. Conservative, guideline-oriented approaches aiming at improving pain treatment and increasing physical activity have been proven to be effective in several contexts outside the primary care setting, for instance the Arthritis Self-Management Programs (ASMPs). But it remains unclear whether these comprehensive evidence-based approaches can improve patients' quality of life if they are provided in a primary care setting. Methods/Design: PraxArt is a cluster randomised controlled trial with GPs as the unit of randomisation. The aim of the study is to evaluate the impact of a comprehensive evidence-based medical education of GPs on individual care and patients' quality of life. 75 GPs were randomised either to intervention group I or II or to a control group. Each GP will include 15 patients suffering from osteoarthritis according to the ACR criteria. In intervention group I, GPs will receive medical education and patient education leaflets including a physical exercise program. In intervention group II the same is provided, but in addition a practice nurse will be trained to monitor adherence to the GPs' prescriptions and advice via monthly telephone calls, and to ask about increasing pain and possible side effects of medication. In the control group no intervention will be applied at all. The main outcome measure for patients' QoL is the GERMAN-AIMS2-SF questionnaire. In addition, data about patients' satisfaction (using a modified EUROPEP tool), medication, health care utilization, comorbidity, physical activity and depression (using PHQ-9) will be retrieved. Measurements (pre data collection) will take place in months I-III, starting in June 2005. Post data collection will be performed after 6 months. Discussion: Despite the high prevalence and increasing incidence, comprehensive and evidence-based treatment approaches for OA in a primary care setting are neither established nor evaluated in Germany.
If the evaluation of the presented approach reveals a clear benefit, it is planned to provide these GP-centred interventions on a much larger scale.
Cancer has become one of the most fatal diseases. The Heidelberg Heavy Ion Cancer Therapy (HICAT) facility has the potential to become an important and efficient treatment method because of the excellent "Bragg peak" characteristics of ion beams and on-line irradiation control by PET diagnostics. The dedicated Heidelberg Heavy Ion Cancer Therapy Project includes two ECR ion sources, an RF linear injector, a synchrotron and three treatment rooms. It will deliver 4x10^10 protons, 1x10^10 He ions, 1x10^9 carbon ions, or 5x10^8 oxygen ions per synchrotron cycle at beam energies of 50-430 AMeV for the treatments. The RF linear injector consists of a 400 AkeV RFQ and a very compact 7 AMeV IH-DTL accelerator operated at 216.816 MHz. The development of the IH-DTL within the HICAT project is a great challenge with respect to the present state of the DTL art for the following reasons: the highest operating frequency (216.816 MHz) of all IH-DTL cavities; an extremely large cavity length-to-diameter ratio of about 11; an IH-DTL with three internal triplets; the highest effective voltage gain per meter (5.5 MV/m); and a very short MEBT design for the beam matching. The following achievements were reached during the development of the IH-DTL injector for HICAT: the KONUS beam dynamics design with the LORASR code fulfills the beam requirement of the HICAT synchrotron at the injection point. The simulations for the IH-DTL injector were performed not only with a homogeneous input beam, but also with the actual particle distribution from the exit of the HICAT RFQ accelerator as delivered by the PARMTEQ code. The output longitudinal normalized emittance for 95% of all particles is 2.00 AkeV ns, with an emittance growth of less than 24%, while the X-X' and Y-Y' normalized emittances are 0.77 mm mrad and 0.62 mm mrad, respectively. The emittance growth in X-X' is less than 18%, and in Y-Y' less than 5%.
Based on the transverse envelopes of the transported particles, the buncher drift tubes at the RFQ high-energy end were redesigned to obtain a higher transit time factor for this novel RFQ internal buncher. An optimized effective buncher gap voltage of 45.4 kV was calculated to deliver a minimized longitudinal beam emittance, while the influence of the effective buncher voltage on the transverse emittance can be neglected. Six different tuning concepts were investigated in detail while tuning the 1:2 scaled HICAT IH model cavity. 'Volume Tuning' by a variation of the cavity cross-sectional area can compensate for the unbalanced capacitance distribution in the case of an extreme beta-lambda variation along an IH cavity. 'Additional Capacitance Plates', copper sheets clamped on the drift-tube stems, are a fast way of checking the tuning sensitivity, but they will finally be replaced by massive copper blocks mounted on the drift-tube girders. 'Lens Coupling' is an important tuning to stabilize the operation mode and to increase or decrease the coupling between neighboring sections. 'Tube Tuning' is the fine-tuning concept and also the standard tuning method to reach the needed field distributions as well as the gap voltage distributions. 'Undercut Tuning' is a very sensitive tuning for the end sections and for the balance of the voltage distribution along the structure. The different types of 'plungers' in the 3rd and 4th sections have different effects on the resonance frequency and on the field distribution. The different triplet stems and the geometry of the cavity end were also investigated to reach the design field and voltage distributions. Finally, the needed uniform field distribution along the IH-DTL cavity and the corresponding effective voltage distribution were realized; the remaining maximum gap voltage difference was less than 5% for the model cavity. Several important higher-order modes were also measured.
The RF tuning of the IH-DTL model cavity delivers the final geometry parameters of the IH-DTL power cavity. A rectangular cavity cross section was adopted for the first time for this IH-DTL cavity; this eases the realization of the volume tuning concept in the 1st and 2nd sections. Lens coupling determines the final distance between the triplet and the girder. The triplets are mounted on the lower cavity half shell. Microwave Studio simulations have been carried out not only for the HICAT model cavity, but also for the final geometry of the IH-DTL power cavity. The field distribution for the operation mode H110 agrees with the model cavity measurement, as do the higher-order modes. The simulations thus confirm the geometrical design of the IH-DTL. On the other hand, the precision of a single simulation with 2.3 million mesh points for the full cross-sectional area, with a CPU time of more than 15 hours on a DELL PC with an Intel Pentium 4 at 2.4 GHz and 2.096 GB RAM, was exploited to its limit when calculating the real parameters for the two final machining iterations during production. The shunt impedance of the IH-DTL power cavity is estimated, by comparison with existing tanks, at about 195.8 MOhm/m, which agrees with the simulation result of 200.3 MOhm/m when the conductivity is reduced to 5.0x10^7 Ohm^-1 m^-1. The effective shunt impedance is 153 MOhm/m. The needed RF power is 755 kW. The expected quality factor of the IH-DTL cavity is about 15600. The tuning measurements of the IH-DTL power cavity before copper plating have been performed; the results are within the specifications. There is no doubt that the needed accuracy of the voltage distribution will be reached with the foreseen fine-tuning concepts in the last steps.
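As a quick plausibility check, the quoted RF power is consistent with the stated effective voltage gain and effective shunt impedance via P = E_eff^2 * L / z_eff. The cavity length L used below is an assumed value, not a figure given in the abstract:

```python
# Hedged sketch: cross-checking the quoted RF power from the effective
# voltage gain and the effective shunt impedance per unit length.
# The cavity length L is an assumption (not stated in the text).

E_eff = 5.5e6      # effective voltage gain, V/m (from the text)
z_eff = 153e6      # effective shunt impedance, Ohm/m (from the text)
L = 3.8            # assumed cavity length, m (hypothetical)

P = E_eff**2 * L / z_eff   # dissipated RF power, W
print(f"P = {P/1e3:.0f} kW")   # close to the quoted 755 kW
```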
Fluctuations and NA49
(2005)
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under "the opportunistic approach to disinflation", a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. JEL Classification: E31, E52, E58, E61. July 2005.
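The contrast between the two rule types can be sketched directly. The coefficients, neutral rate, and band width below are illustrative placeholders, not the paper's calibration:

```python
# Hedged sketch of the two policy-rule types contrasted above.
# All numerical values are illustrative assumptions.

def linear_rule(pi, pi_star, gap, r_star=2.0, a=1.5, b=0.5):
    """Conventional (linear) rule: react to both the inflation gap and the output gap."""
    return r_star + a * (pi - pi_star) + b * gap

def opportunistic_rule(pi, pi_star, gap, r_star=2.0, a=1.5, b=0.5, band=1.0):
    """Opportunistic rule: ignore the inflation gap inside a tolerance band,
    concentrating on output stabilization; react aggressively outside it."""
    infl_gap = pi - pi_star
    if abs(infl_gap) <= band:
        infl_gap = 0.0   # let shocks move inflation within the band
    return r_star + a * infl_gap + b * gap

# Inside the band the opportunistic rule does not react to inflation:
print(linear_rule(2.5, 2.0, 0.0))         # 2.75
print(opportunistic_rule(2.5, 2.0, 0.0))  # 2.0
```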
This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided. JEL Classification: C6, D9, E2. July 28, 2005.
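The abstract does not spell out the construction, but one well-known way to avoid rootfinding in such problems is to invert the Euler equation analytically on an exogenous grid of end-of-period assets, which yields an endogenous grid of market resources. A minimal deterministic consumption-savings sketch under assumed, illustrative parameters:

```python
import numpy as np

# Hedged sketch: one backward step of a consumption-savings problem
# solved without any nonlinear root finder, by inverting the Euler
# equation c_t^(-rho) = beta*R*c_{t+1}^(-rho) in closed form.
# All parameter values are illustrative assumptions.

beta, R, rho = 0.96, 1.03, 2.0          # discount factor, gross return, CRRA
a_grid = np.linspace(0.1, 10.0, 50)     # exogenous grid of end-of-period assets

# Terminal period: consume all resources, c_T(m) = m.
m_next, c_next = a_grid.copy(), a_grid.copy()

# Next-period consumption at resources m' = R*a (linear interpolation of the policy).
c_tp1 = np.interp(R * a_grid, m_next, c_next)

# Invert marginal utility in closed form -- no rootfinding call anywhere.
c_t = (beta * R * c_tp1 ** (-rho)) ** (-1.0 / rho)
m_t = a_grid + c_t                      # endogenous grid of current resources

# The resulting grid and policy are monotone, as expected.
assert np.all(np.diff(m_t) > 0)
assert np.all(np.diff(c_t) >= 0)
```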
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate global-scale groundwater recharge in a spatially resolved way. In this report, improved model estimates of diffuse groundwater recharge at the global scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model), which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This tuning has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions as compared to the model results before tuning, and the new estimates are more similar to country-level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
Prion diseases, also called transmissible spongiform encephalopathies, are a group of fatal neurodegenerative conditions that affect humans and a wide variety of animals. To date, no therapeutic or prophylactic approach against prion diseases is available. The causative infectious agent is the prion, also termed PrPSc, which is a pathological conformer of a cellular protein named prion protein, PrPc. Prions are thought to multiply upon conversion of PrPc to PrPSc in a self-propagating manner. Immunotherapeutic strategies directed against PrPc represent a possible approach to preventing or curing prion diseases. Accordingly, it has already been shown in animal models that passive immunization delays the onset of prion diseases. The present thesis aimed at the development of a candidate vaccine for active immunization against prion diseases, an immune response that requires circumventing host tolerance to the self-antigen PrPc. The vaccine development was approached using virus-like particles (retroparticles) derived from either the murine leukemia virus (MLV) or the human immunodeficiency virus (HIV). The display of PrP on the surface of such particles was addressed for both the cellular and the pathogenic form of PrP. The display of PrPc was achieved by fusion either to the transmembrane domain of the platelet-derived growth factor receptor (PDGFR) or to the N-terminal part of the viral envelope protein (Env). In both cases, the corresponding PrPD- and PrPE-retroparticles were successfully produced and analyzed via immunofluorescence, Western blot analysis and immunogold electron microscopy as well as by ELISA methods. Both PrPD- and PrPE-retroparticles showed effective incorporation of N-terminally truncated forms of PrPc, but not of the complete protein. The displayed PrPc showed the typical glycosylation pattern, which was specifically removed by a glycosidase enzyme.
Upon display of PrPc on retroparticles, the protein remained detectable by PrP-specific antibodies under native conditions. Electron microscopy analysis of the PrPc variants revealed no alteration of the characteristic retroviral morphology of the generated particles. MLV-derived PrPD-retroparticles were successfully used in immunization studies. In contrast to approaches using bacterially expressed PrPc, the immunization of mice resulted in a specific antibody response. The display of the pathogenic isoform was attempted by two different strategies. The first was directed at converting the proteinase K (PK) sensitive form of PrP on the surface of PrPD-retroparticles into the PK-resistant form. Despite specific adaptation of the PK digestion assay for detecting resistant PrP, no PrP conversion was observed for PrPD-retroparticles. The second approach utilized a replication-competent variant of the ecotropic MLV displaying PrPc on the viral Env protein. This MLV variant was stable in cell culture for six passages but did not replicate on scrapie-infected, PrPSc-propagating neuroblastoma cells. Thus, besides PrPc-displaying virus-like particles, a replication-competent MLV variant was obtained that stably incorporated PrPc at the N-terminus of the viral Env protein. The incorporation of cell-surface-located PrPc into particles was expected from previously obtained data on protein display in the context of retrovirus-derived particles. The lack of incorporation observed for the complete PrPc sequence was therefore rather unexpected, and incorporation was found to be inhibited both upon fusion to the PDGFR and upon fusion to the viral Env. In contrast to N-terminally truncated PrPc, the complete PrPc was shown to exhibit increased cell-surface internalization rates and half-lives, possibly contributing to the observed results. The PrP vaccination approach described in this work represents the first system that successfully induces PrP-specific antibody responses against the prion protein in wild-type mice.
Possible explanations are the induction of specific T-cell help or effects of innate immunity, respectively. The MLV- and HIV-derived particles generated during this thesis, bearing the PrP-coding sequence or constituting replication-competent variants, might help to further improve the PrP-specific immune response.
Using CORSIKA for simulating extensive air showers, we study the relation between shower characteristics and features of hadronic multiparticle production at low energies. We report on investigations of the typical energies and phase space regions of secondary particles that are important for muon production in extensive air showers. Possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.
Globalized justice - fragmented justice. Human rights violations by "private" transnational actors
(2005)
Plenary lecture at the World Congress of the Philosophy of Law and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch "private" transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 529-546.
In recent years, much effort has gone into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with the performance of its manually designed ancestor ROSANA.
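The hybrid division of labor described above, hand-coded filters followed by a learned selector, can be sketched as follows. The candidate representation, feature names, and scoring function are hypothetical, not ROSANA-ML's actual data structures:

```python
# Hedged sketch of the hybrid idea: universally valid, hand-coded
# antecedent *filters* (agreement constraints) combined with a learned
# antecedent *selector*. All names and features here are made up.

def agree(anaphor, cand):
    """Hand-coded filter: person/number/gender agreement.
    A feature value of None on the anaphor means 'unknown, do not filter'."""
    return all(anaphor[f] in (cand[f], None) for f in ("person", "number", "gender"))

def select(anaphor, candidates, score):
    """Keep candidates passing the hand-coded filter; let a scoring
    function (standing in for a trained classifier) pick among survivors."""
    survivors = [c for c in candidates if agree(anaphor, c)]
    return max(survivors, key=score) if survivors else None

anaphor = {"person": 3, "number": "sg", "gender": "f"}
cands = [
    {"id": "Mary",  "person": 3, "number": "sg", "gender": "f", "recency": 2},
    {"id": "books", "person": 3, "number": "pl", "gender": "n", "recency": 1},
]
# Stand-in for a machine-learned preference: plain recency.
best = select(anaphor, cands, score=lambda c: -c["recency"])
print(best["id"])   # "Mary" -- "books" is removed by the number-agreement filter
```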
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as inputs, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The different terms in the balance are compared to results of other global models and uncertainties are described. Total global surface N surplus increased from 161 Tg N yr-1 in 1961 to 230 Tg N yr-1 in 1995. Using assumptions for the scenario A1B of the Special Report on Emission Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC) as quantified by the IMAGE model, total global surface N surplus is estimated to be 229 Tg N yr-1 in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas on the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
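The balance defined above is simply inputs minus outputs, per grid cell or aggregated globally. A direct sketch, with made-up placeholder figures (Tg N/yr) rather than model output:

```python
# Surface N balance as defined in the abstract: surplus equals the sum
# of the four input terms minus the two output terms. The numbers below
# are illustrative placeholders, not WaterGAP-N results.

def n_surplus(Nanm, Nfert, Ndep, Nfix, Nexp, Nvol):
    """Surface N surplus = (excretion + fertilizer + deposition + fixation)
       - (plant-uptake export + ammonia volatilization)."""
    return (Nanm + Nfert + Ndep + Nfix) - (Nexp + Nvol)

print(n_surplus(Nanm=100, Nfert=80, Ndep=60, Nfix=120, Nexp=100, Nvol=30))  # 230
```

A negative return value corresponds to the inconsistent negative surface balances the abstract discusses.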
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled; the reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4,428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation. Contents: 1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia; 2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas; 3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas; 4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps.
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. In practice, however, they lack effectiveness, as they are hard to enforce. In this paper, we discuss a supportive infrastructure aimed at the distribution channel: as soon as the picture is publicly available, the exposed individual has a chance to find it and take proper action.
We consider Schwarz maps for triangles whose angles are rather general rational multiples of pi. Under which conditions can they have algebraic values at algebraic arguments? The answer is based mainly on considerations of complex multiplication of certain Prym varieties in Jacobians of hypergeometric curves. The paper can serve as an introduction to transcendence techniques for hypergeometric functions, but contains also new results and examples.
The main subject of this survey are Belyi functions and dessins d'enfants on Riemann surfaces. Dessins are certain bipartite graphs on 2-manifolds defining there a conformal and even an algebraic structure. In principle, all deeper properties of the resulting Riemann surfaces or algebraic curves should be encoded in these dessins, but the decoding turns out to be difficult and leads to many open problems. We emphasize arithmetical aspects like Galois actions, the relation to the ABC theorem in function fields and arithmetic questions in uniformization theory of algebraic curves defined over number fields.
Presentation at the AMS Southeastern Sectional Meeting, 14-16 March 2003, and the Workshop 'Asymptotic Analysis, Stability, and Generalized Functions', 17-19 March 2003, Louisiana State University, Baton Rouge, Louisiana. See the corresponding papers "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra".
Background: Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is performed mainly in patients with high-risk or advanced hematologic malignancies and congenital or acquired aplastic anemias. In the context of the significant risk of graft failure after allo-HSCT from alternative donors and the risk of relapse in recipients transplanted for malignancy, the precise monitoring of posttransplant hematopoietic chimerism is of utmost interest. Useful molecular methods for chimerism quantification after allogeneic transplantation, aimed at distinguishing precisely between donor's and recipient's cells, are PCR-based analyses of polymorphic DNA markers. Such analyses can be performed regardless of donor's and recipient's sex. Additionally, in patients after sex-mismatched allo-HSCT, fluorescent in situ hybridization (FISH) can be applied. Methods: We compared different techniques for analysis of posttransplant chimerism, namely FISH and PCR-based molecular methods with automated detection of fluorescent products in an ALFExpress DNA Sequencer (Pharmacia) or ABI 310 Genetic Analyzer (PE). Spearman's correlation test was used. Results: We found a high correlation between results obtained with the PCR/ALFExpress and PCR/ABI 310 Genetic Analyzer methods. Lower, but still positive, correlations were found between results of the FISH technique and results obtained using automated DNA sizing technology. Conclusions: All the methods applied enable a rapid and accurate detection of post-HSCT chimerism.
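PCR-based chimerism quantification with automated fragment sizing typically reduces to a peak-area ratio between donor- and recipient-specific alleles of an informative polymorphic marker. A sketch of that standard two-peak ratio, with made-up peak areas; the abstract does not state which formula the authors used:

```python
# Hedged sketch: percent donor chimerism from the fluorescence peak
# areas of donor- and recipient-specific alleles of one informative
# STR marker. The peak-area values below are illustrative.

def donor_chimerism(area_donor, area_recipient):
    """Percent donor cells from allele-specific peak areas."""
    return 100.0 * area_donor / (area_donor + area_recipient)

print(round(donor_chimerism(area_donor=9500, area_recipient=500), 1))  # 95.0
```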
Background: To investigate the occupational risk of tuberculosis (TB) infection in a low-incidence setting, data from a prospective study of patients with culture-confirmed TB conducted in Hamburg, Germany, from 1997 to 2002 were evaluated. Methods: M. tuberculosis isolates were genotyped by IS6110 RFLP analysis. Results of contact tracing and additional patient interviews were used for further epidemiological analyses. Results: Out of 848 cases included in the cluster analysis, 286 (33.7%) were classified into 76 clusters comprising 2 to 39 patients. In total, two patients in the non-cluster and eight patients in the cluster group were health-care workers. Logistic regression analysis confirmed work in the health-care sector as the strongest predictor for clustering (OR 17.9). However, only two of the eight transmission links among the eight clusters involving health-care workers had been detected previously. Overall, conventional contact tracing performed before genotyping had identified only 26 (25.2%) of the 103 contact persons with the disease among the clustered cases whose transmission links were epidemiologically verified. Conclusion: Recent transmission was found to be strongly associated with health-care work in a setting with low incidence of TB. Conventional contact tracing alone was shown to be insufficient to discover recent transmission chains. The data presented also indicate the need for establishing improved TB control strategies in health-care settings.
Introduction: ScFv(FRP5)-ETA is a recombinant antibody toxin with binding specificity for ErbB2 (HER2). It consists of an N-terminal single-chain antibody fragment (scFv), genetically linked to truncated Pseudomonas exotoxin A (ETA). Potent antitumoral activity of scFv(FRP5)-ETA against ErbB2-overexpressing tumor cells was previously demonstrated in vitro and in animal models. Here we report the first systemic application of scFv(FRP5)-ETA in human cancer patients.
Methods: We performed a phase I dose-finding study with the objective of assessing the maximum tolerated dose and the dose-limiting toxicity of intravenously injected scFv(FRP5)-ETA. Eighteen patients suffering from ErbB2-expressing metastatic breast cancer, prostate cancer, head and neck cancer, non-small-cell lung cancer, or transitional cell carcinoma were treated. Dose levels of 2, 4, 10, 12.5, and 20 μg/kg scFv(FRP5)-ETA were administered as five daily infusions each for two consecutive weeks.
Results: No hematologic, renal, or cardiovascular toxicities were noted in any of the patients treated. However, transient elevation of liver enzymes was observed, and considered dose limiting, in one of six patients at the maximum tolerated dose of 12.5 μg/kg, and in two of three patients at 20 μg/kg. Fifteen minutes after injection, peak concentrations of more than 100 ng/ml scFv(FRP5)-ETA were obtained at a dose of 10 μg/kg, indicating that predicted therapeutic levels of the recombinant protein can be applied without inducing toxic side effects. Induction of antibodies against scFv(FRP5)-ETA was observed 8 days after initiation of therapy in 13 patients investigated, but neutralizing activity could be detected in only five of these patients. Two patients showed stable disease, and in three patients clinical signs of activity were observed (all treated at doses ≥ 10 μg/kg). Disease progression occurred in 11 of the patients.
Conclusion: Our results demonstrate that systemic therapy with scFv(FRP5)-ETA can be safely administered up to a maximum tolerated dose of 12.5 μg/kg in patients with ErbB2-expressing tumors, justifying further clinical development.
First paragraph (this article has no abstract): Persistent stimulation of nociceptors results in sensitization of nociceptive sensory neurons, which is associated with hyperalgesia and allodynia. The release of NO and subsequent synthesis of cGMP in the spinal cord are involved in this process. cGMP-dependent protein kinase I (PKG-I) has been suggested to act as a downstream target of cGMP, but its exact role in nociception had not yet been characterized. To further evaluate the NO/cGMP/PKG-I pathway in nociception, we assessed the effects of PKG-I inhibition and activation in the rat formalin assay and analyzed the nociceptive behavior of PKG-I-/- mice. Open access article.
Background: In general, shell-less slugs are considered slimy animals with a rather dull appearance and a pest to garden plants. Marine slugs, however, are usually beautifully coloured animals belonging to the less well-known Opisthobranchia. They are characterized by a large array of interesting biological phenomena, usually related to foraging and/or defence. In this paper our knowledge of shell reduction, correlated with the evolution of different defensive and foraging strategies, is reviewed, and new results on the histology of different glandular systems are included. Results: Based on a phylogeny obtained from morphological and histological data, the parallel reduction of the shell within the different groups is outlined. Major food sources are given, and glandular structures are described as possible defensive structures in the external epithelia and as internal glands. Conclusion: According to the phylogenetic analyses, the reduction of the shell correlates with the evolution of defensive strategies. Many different kinds of defence structures, like cleptocnides, mantle dermal formations (MDFs), and acid glands, are only present in shell-less slugs. In several cases, it is not clear whether the defensive devices were a prerequisite for the reduction of the shell, or whether reduction occurred first. Reduction of the shell and acquisition of different defensive structures had implications for the exploration of new food sources and therefore likely enhanced the adaptive radiation of several groups. © 2005 Wägele and Klussmann-Kolb; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited: http://www.frontiersinzoology.com/content/2/1/3/
Background: Tumor development remains one of the major obstacles following organ transplantation. Immunosuppressive drugs such as cyclosporine and tacrolimus directly contribute to enhanced malignancy, whereas the influence of the novel compound mycophenolate mofetil (MMF) on tumor cell dissemination has not been explored. We therefore investigated the adhesion capacity of colon, pancreas, prostate and kidney carcinoma cell lines to endothelium, as well as their beta1 integrin expression profile before and after MMF treatment. Methods: Tumor cell adhesion to endothelial cell monolayers was evaluated in the presence of 0.1 and 1 μM MMF and compared to unstimulated controls. beta1 integrin analysis included alpha1beta1 (CD49a), alpha2beta1 (CD49b), alpha3beta1 (CD49c), alpha4beta1 (CD49d), alpha5beta1 (CD49e), and alpha6beta1 (CD49f) receptors, and was carried out by reverse transcriptase-polymerase chain reaction, confocal microscopy and flow cytometry. Results: Adhesion of the colon carcinoma cell line HT-29 was strongly reduced in the presence of 0.1 μM MMF. This effect was accompanied by down-regulation of alpha3beta1 and alpha6beta1 surface expression and of alpha3beta1 and alpha6beta1 coding mRNA. Adhesion of the prostate tumor cell line DU-145 was blocked dose-dependently by MMF. In contrast to MMF's effects on HT-29 cells, MMF dose-dependently up-regulated alpha1beta1, alpha2beta1, alpha3beta1, and alpha5beta1 on DU-145 tumor cell membranes. Conclusion: We conclude that MMF possesses distinct anti-tumoral properties, particularly in colon and prostate carcinoma cells. Adhesion blockage of HT-29 cells was due to the loss of alpha3beta1 and alpha6beta1 surface expression, which might contribute to a reduced invasive behaviour of this tumor entity. The enhancement of integrin beta1 subtypes observed in DU-145 cells possibly causes re-differentiation towards a low-invasive phenotype.
Apparent contradiction between negative effects of UV radiation and positive effects of sun exposure
(2005)
We would like to comment on the three contributions in the Journal of the National Cancer Institute, Vol. 97, No. 3, February 2, 2005: Kathleen M. Egan, Jeffrey A. Sosman, William J. Blot: Editorial: Sunlight and Reduced Risk of Cancer: Is the Real Story Vitamin D? (pp. 161-163) ; Marianne Berwick, Bruce K. Armstrong, Leah Ben-Porat, Judith Fine, Anne Kricker, Carey Eberle, Raymond Barnhill: Sun Exposure and Mortality From Melanoma. (pp. 195-199) ; Karin Ekström Smedby, Henrik Hjalgrim, Mads Melbye, Anna Torrång, Klaus Rostgaard, Lars Munksgaard, et al.: Ultraviolet Radiation Exposure and Risk of Malignant Lymphomas. (pp. 199-209).
Drug target 5-lipoxygenase : a link between cellular enzyme regulation and molecular pharmacology
(2005)
Leukotrienes (LT) are bioactive lipid mediators involved in a variety of inflammatory diseases such as asthma, psoriasis, arthritis, and allergic rhinitis. LT also play a role in the pathogenesis of diseases such as cancer, osteoarthritis, and atherosclerosis. 5-Lipoxygenase (5-LO) is the enzyme responsible for LT formation. Because of the physiological properties of LT, the development of potential drugs targeting 5-LO is of considerable interest. In vitro, 5-LO activity is determined by Ca2+, ATP, phosphatidylcholine, and lipid hydroperoxides (LOOH), and by the p38-dependent kinases MK-2/3. Inhibitor studies indicate that the MEK1/2 pathway is also involved in 5-LO activation in vivo. The main aim of this work was to investigate the role of the MEK1/2 pathway in 5-LO activation and the influence of the 5-LO activation route on the efficacy of potential inhibitors. In-gel kinase and in vitro kinase assays showed that 5-LO is a substrate for extracellular signal-regulated kinase (ERK) and MK-2/3. Addition of unsaturated fatty acids (UFA), such as AA or oleic acid, enhanced the degree of 5-LO phosphorylation by both ERK1/2 and MK-2/3. These kinases are therefore also responsible for 5-LO activation by natural stimuli that hardly affect cellular Ca2+ levels. It follows that phosphorylation of 5-LO by ERK1/2 and/or MK-2/3 represents an activation mechanism alternative to Ca2+. Nonredox-type 5-LO inhibitors were originally developed as competitive agents that compete with AA for binding to the catalytic domain of 5-LO. Representatives of this class, such as ZM230487 and L-739,010, potently inhibit LT biosynthesis in various test systems. However, they failed in clinical trials. In this work we showed that the efficacy of these inhibitors depends on the 5-LO activation pathway. Compared with 5-LO activity induced by the unphysiological stimulus Ca2+ ionophore, inhibition of cell stress-induced activity requires 10- to 100-fold higher concentrations of the nonredox-type inhibitors. The non-phosphorylatable 5-LO mutant (Ser271Ala/Ser663Ala) was considerably more sensitive to nonredox-type inhibitors than the wild type when the enzyme was activated by 5-LO kinases. These results thus show that, in contrast to Ca2+, 5-LO activation via phosphorylation markedly reduces the efficacy of nonredox-type inhibitors. Furthermore, the pharmacological profile of the novel 5-LO inhibitor CJ-13,610 was characterized in several in vitro test systems. In intact PMNL stimulated with Ca2+ ionophore, the compound inhibited 5-LO product formation with an IC50 of 70 nM. Addition of exogenous AA diminished its potency and raised the IC50, indicating a competitive mode of action. Like the established nonredox-type inhibitors, CJ-13,610 loses efficacy at elevated cellular peroxide levels; unlike them, however, it shows no dependence on the 5-LO activation pathway. In drug development it is therefore of fundamental importance to understand the cellular context, in particular the regulation of enzyme activity. As shown in this work, phosphorylation of 5-LO strongly influences the regulation of 5-LO activity and has a major effect on the inhibition of the enzyme by different drugs.
This paper has shown that some of the principal arguments against shareholder voice are unfounded. It has shown that shareholders do own corporations, and that the nature of their property interest is structured to meet the needs of the relationships found in stock corporations. The paper has explained that fiduciary and other duties restrain the actions of shareholders just as they do those of management, and that critics cannot reasonably expect court-imposed fiduciary duties to extend beyond the actual powers of shareholders. It has also illustrated how, although corporate statutes give shareholders complete power to structure governance as they will, the default governance structures of U.S. corporations leave shareholders almost powerless to initiate any sort of action, and how the interaction between state and federal law makes it almost impossible for shareholders to elect directors of their choice. Lastly, the paper has recalled how the percentage of U.S. corporate equities owned by institutional investors has increased dramatically in recent decades, and it has outlined some of the major developments in shareholder rights that followed this increase. I hope that this paper has deflated some of the strong rhetoric used against shareholder voice by contrasting rhetoric with law, and that it has illustrated why the picture of weak owners painted in the early 20th century should be updated to new circumstances. This will help avoid projecting an old description as a current normative model that perpetuates the inevitability of "managerialism", perhaps better known as "dirigisme".
This paper proves the correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences; the main measure is the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
This paper characterizes the optimal inflation buffer consistent with a zero lower bound on nominal interest rates in a New Keynesian sticky-price model. It is shown that a purely forward-looking version of the model that abstracts from inflation inertia would significantly underestimate the inflation buffer. If the central bank follows the prescriptions of a welfare-theoretic objective, a larger buffer appears optimal than would be the case employing a traditional loss function. Taking also into account potential downward nominal rigidities in the price-setting behavior of firms appears not to impose significant further distortions on the economy. JEL Classification: C63, E31, E52.
Ignoring the existence of the zero lower bound on nominal interest rates, one considerably understates the value of monetary commitment in New Keynesian models. A stochastic forward-looking model with lower bound, calibrated to the U.S. economy, suggests that low values for the natural rate of interest lead to sizeable output losses and deflation under discretionary monetary policy. The fall in output and deflation are much larger than in the case with policy commitment and do not show up at all if the model abstracts from the existence of the lower bound. The welfare losses of discretionary policy increase even further when inflation is partly determined by lagged inflation in the Phillips curve. These results emerge because private sector expectations and the discretionary policy response to these expectations reinforce each other and cause the lower bound to be reached much earlier than under commitment. JEL Classification: E31, E52
Using data from the Consumer Expenditure Survey, we first document that the recent increase in income inequality in the US has not been accompanied by a corresponding rise in consumption inequality. Much of this divergence is due to different trends in within-group inequality, which has increased significantly for income but little for consumption. We then develop a simple framework that allows us to analytically characterize how within-group income inequality affects consumption inequality in a world in which agents can trade a full set of contingent consumption claims, subject to endogenous constraints emanating from the limited enforcement of intertemporal contracts (as in Kehoe and Levine, 1993). Finally, we quantitatively evaluate, in the context of a calibrated general equilibrium production economy, whether this set-up, or alternatively a standard incomplete markets model (as in Aiyagari, 1994), can account for the documented stylized consumption inequality facts from the US data. JEL Classification: E21, D91, D63, D31, G22
In this paper, we examine the cost of insurance against model uncertainty for the Euro area considering four alternative reference models, all of which are used for policy analysis at the ECB. We find that maximal insurance across this model range in terms of a Minimax policy comes at moderate costs in terms of lower expected performance. We extract priors that would rationalize the Minimax policy from a Bayesian perspective. These priors indicate that full insurance is strongly oriented towards the model with highest baseline losses. Furthermore, this policy is not as tolerant towards small perturbations of policy parameters as the Bayesian policy rule. We propose to strike a compromise and use preferences for policy design that allow for intermediate degrees of ambiguity-aversion. These preferences allow the specification of priors but also give extra weight to the worst uncertain outcomes in a given context. JEL Classification: E52, E58, E61
This paper studies an overlapping generations model with stochastic production and incomplete markets to assess whether the introduction of an unfunded social security system leads to a Pareto improvement. When returns to capital and wages are imperfectly correlated, a system that endows retired households with claims to labor income enhances the sharing of aggregate risk between generations. Our quantitative analysis shows that, abstracting from the capital crowding-out effect, the introduction of social security represents a Pareto improving reform, even when the economy is dynamically efficient. However, the severity of the crowding-out effect in general equilibrium tends to overturn these gains. JEL Classification: E62, H55, H31, D91, D58. April 2005.
While much of classical statistical analysis is based on Gaussian distributional assumptions, statistical modeling with the Laplace distribution has gained importance in many applied fields. This phenomenon is rooted in the fact that, like the Gaussian, the Laplace distribution has many attractive properties. This paper investigates two methods of combining them and their use in modeling and predicting financial risk. Based on 25 daily stock return series, the empirical results indicate that the new models offer a plausible description of the data. They are also shown to be competitive with, or superior to, use of the hyperbolic distribution, which has gained some popularity in asset-return modeling and, in fact, also nests the Gaussian and Laplace. JEL Classification: C16, C50. March 2005.
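One of the attractive properties alluded to above is that the Laplace distribution has closed-form maximum-likelihood estimates: the location is the sample median and the scale is the mean absolute deviation about it. A minimal sketch with hypothetical toy data (the data and function names are illustrative, not the paper's 25 return series):

```python
import math

def fit_laplace(xs):
    """Closed-form ML estimates for the Laplace distribution:
    location = sample median, scale b = mean absolute deviation about it."""
    s = sorted(xs)
    n = len(s)
    mu = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    b = sum(abs(x - mu) for x in xs) / n
    return mu, b

def laplace_loglik(xs, mu, b):
    """Log-likelihood of the data under Laplace(mu, b)."""
    return -len(xs) * math.log(2.0 * b) - sum(abs(x - mu) for x in xs) / b

# hypothetical daily returns (in percent)
mu, b = fit_laplace([-2.0, -0.5, 0.0, 0.4, 3.1])
```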
This paper computes the optimal progressivity of the income tax code in a dynamic general equilibrium model with household heterogeneity in which uninsurable labor productivity risk gives rise to a nontrivial income and wealth distribution. A progressive tax system serves as a partial substitute for missing insurance markets and enhances an equal distribution of economic welfare. These beneficial effects of a progressive tax system have to be traded off against the efficiency loss arising from distorting endogenous labor supply and capital accumulation decisions. Using a utilitarian steady state social welfare criterion we find that the optimal US income tax is well approximated by a flat tax rate of 17.2% and a fixed deduction of about $9,400. The steady state welfare gains from a fundamental tax reform towards this tax system are equivalent to 1.7% higher consumption in each state of the world. An explicit computation of the transition path induced by a reform of the current towards the optimal tax system indicates that a majority of the population currently alive (roughly 62%) would experience welfare gains, suggesting that such fundamental income tax reform is not only desirable, but may also be politically feasible. JEL Classification: E62, H21, H24.
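The approximately optimal schedule found here (a flat rate applied to income above a fixed deduction) is simple enough to write as a one-line function; the sample income below is hypothetical:

```python
def flat_tax(income: float, rate: float = 0.172, deduction: float = 9400.0) -> float:
    """Tax liability under a flat-rate schedule with a fixed deduction:
    the rate applies only to income above the deduction, never negative."""
    return rate * max(0.0, income - deduction)

# a hypothetical household earning $50,000 pays 17.2% of the $40,600 above the deduction
print(round(flat_tax(50000.0), 2))  # 6983.2
```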
Financial markets embed expectations of central bank policy into asset prices. This paper compares two approaches that extract a probability density of market beliefs. The first is a simulated-moments estimator for option volatilities described in Mizrach (2002); the second is a new approach developed by Haas, Mittnik and Paolella (2004a) for fat-tailed conditionally heteroskedastic time series. In an application to the 1992-93 European Exchange Rate Mechanism crises, we find that both the options and the underlying exchange rates provide useful information for policy makers. JEL Classification: G12, G14, F31.
Volatility forecasting
(2005)
Volatility has been one of the most active and successful areas of research in time series econometrics and economic forecasting in recent decades. This chapter provides a selective survey of the most important theoretical developments and empirical insights to emerge from this burgeoning literature, with a distinct focus on forecasting applications. Volatility is inherently latent, and Section 1 begins with a brief intuitive account of various key volatility concepts. Section 2 then discusses a series of different economic situations in which volatility plays a crucial role, ranging from the use of volatility forecasts in portfolio allocation to density forecasting in risk management. Sections 3, 4 and 5 present a variety of alternative procedures for univariate volatility modeling and forecasting based on the GARCH, stochastic volatility and realized volatility paradigms, respectively. Section 6 extends the discussion to the multivariate problem of forecasting conditional covariances and correlations, and Section 7 discusses volatility forecast evaluation methods in both univariate and multivariate cases. Section 8 concludes briefly. JEL Classification: C10, C53, G1.
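The GARCH paradigm mentioned among the univariate approaches rests on a simple variance recursion. A minimal sketch of a GARCH(1,1) one-step-ahead forecast (the parameter values in the test below are illustrative, not estimates from the chapter):

```python
def garch11_forecast(returns, omega, alpha, beta, sigma2_0):
    """One-step-ahead conditional variance from a GARCH(1,1) recursion,
        sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t,
    filtered through the whole return series starting from sigma2_0."""
    sigma2 = sigma2_0
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2
```

With alpha + beta < 1 the recursion is covariance stationary and forecasts mean-revert to the unconditional variance omega / (1 - alpha - beta).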
This paper analyzes dynamic equilibrium risk sharing contracts between profit-maximizing intermediaries and a large pool of ex-ante identical agents that face idiosyncratic income uncertainty that makes them heterogeneous ex-post. In any given period, after having observed her income, the agent can walk away from the contract, while the intermediary cannot, i.e. there is one-sided commitment. We consider the extreme scenario that the agents face no costs to walking away, and can sign up with any competing intermediary without any reputational losses. We demonstrate that not only autarky, but also partial and full insurance can obtain, depending on the relative patience of agents and financial intermediaries. Insurance can be provided because in an equilibrium contract an up-front payment effectively locks in the agent with an intermediary. We then show that our contract economy is equivalent to a consumption-savings economy with one-period Arrow securities and a short-sale constraint, similar to Bulow and Rogoff (1989). From this equivalence and our characterization of dynamic contracts it immediately follows that without costs of switching financial intermediaries, debt contracts are not sustainable, even though a risk allocation superior to autarky can be achieved. JEL Classification: G22, E21, D11, D91.
Default risk sharing between banks and markets : the contribution of collateralized debt obligations
(2005)
This paper contributes to the economics of financial institutions' risk management by exploring how loan securitization affects their default risk, their systematic risk, and their stock prices. In a typical CDO transaction a bank retains, through a first loss piece, a very high proportion of the expected default losses, and transfers only the extreme losses to other market participants. The size of the first loss piece is largely driven by the average default probability of the securitized assets. If the bank sells loans in a true sale transaction, it may use the proceeds to expand its loan business, thereby incurring more systematic risk. We find an increase in the banks' betas, but no significant stock price effect around the announcement of a CDO issue. Our results suggest a role for supervisory requirements in stabilizing the financial system, related to the transparency of tranche allocation and to the regulatory treatment of senior tranches. JEL Classification: D82, G21, D74.
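The allocation of pool losses between the first loss piece and more senior tranches described above follows the standard attachment/detachment payoff; a minimal sketch with hypothetical loss figures (the 0-3% and 3-10% tranche boundaries are illustrative, not the paper's data):

```python
def tranche_loss(pool_loss: float, attach: float, detach: float) -> float:
    """Fraction of pool notional lost by a tranche that is protected up to
    `attach` and fully wiped out at `detach` (all as fractions of the pool)."""
    return min(max(pool_loss - attach, 0.0), detach - attach)

# a 5% pool loss hits a 0-3% first loss piece and a 3-10% mezzanine tranche
print(round(tranche_loss(0.05, 0.00, 0.03), 4))  # 0.03
print(round(tranche_loss(0.05, 0.03, 0.10), 4))  # 0.02
```

Because expected losses are concentrated below typical attachment points, the retained first loss piece absorbs most of the expected default losses while senior tranches bear only extreme outcomes.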
We selectively survey, unify and extend the literature on realized volatility of financial asset returns. Rather than focusing exclusively on characterizing the properties of realized volatility, we progress by examining economically interesting functions of realized volatility, namely realized betas for equity portfolios, relating them both to their underlying realized variance and covariance parts and to underlying macroeconomic fundamentals.
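The realized objects examined here reduce to simple sums of high-frequency return products; a minimal sketch (the return series in the test is hypothetical):

```python
def realized_variance(returns):
    """Realized variance: the sum of squared high-frequency returns."""
    return sum(r * r for r in returns)

def realized_beta(asset, market):
    """Realized beta: realized covariance of asset and market returns
    divided by the realized variance of the market."""
    cov = sum(a * m for a, m in zip(asset, market))
    return cov / realized_variance(market)
```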
From a macroeconomic perspective, the short-term interest rate is a policy instrument under the direct control of the central bank. From a finance perspective, long rates are risk-adjusted averages of expected future short rates. Thus, as illustrated by much recent research, a joint macro-finance modeling strategy will provide the most comprehensive understanding of the term structure of interest rates. We discuss various questions that arise in this research, and we also present a new examination of the relationship between two prominent dynamic, latent factor models in this literature: the Nelson-Siegel and affine no-arbitrage term structure models. JEL Classification: G1, E4, E5.
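For reference, the Nelson-Siegel model mentioned above fits the yield curve with level, slope, and curvature factors; a sketch using one common loading convention (conventions for the decay parameter vary across papers, and the parameter values in the test are illustrative):

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau:
        y(tau) = beta0
               + beta1 * (1 - exp(-lam*tau)) / (lam*tau)
               + beta2 * ((1 - exp(-lam*tau)) / (lam*tau) - exp(-lam*tau))
    beta0 is the level (long-end limit), beta0 + beta1 the short-end limit,
    beta2 the curvature, and lam governs how fast the loadings decay."""
    x = lam * tau
    g = (1.0 - math.exp(-x)) / x
    return beta0 + beta1 * g + beta2 * (g - math.exp(-x))
```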
What do academics have to offer market risk management practitioners in financial institutions? Current industry practice largely follows one of two extremely restrictive approaches: historical simulation or RiskMetrics. In contrast, we favor flexible methods based on recent developments in financial econometrics, which are likely to produce more accurate assessments of market risk. Clearly, the demands of real-world risk management in financial institutions - in particular, real-time risk tracking in very high-dimensional situations - impose strict limits on model complexity. Hence we stress parsimonious models that are easily estimated, and we discuss a variety of practical approaches for high-dimensional covariance matrix modeling, along with what we see as some of the pitfalls and problems in current practice. In so doing we hope to encourage further dialog between the academic and practitioner communities, hopefully stimulating the development of improved market risk management technologies that draw on the best of both worlds.
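The RiskMetrics approach characterized above as restrictive is essentially an exponentially weighted moving-average variance filter; a minimal sketch (lam = 0.94 is the decay factor conventionally used for daily data, and the test values are illustrative):

```python
def ewma_variance(returns, lam=0.94, sigma2_0=0.0):
    """RiskMetrics-style exponentially weighted moving-average variance:
        sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2
    filtered through the return series starting from sigma2_0."""
    sigma2 = sigma2_0
    for r in returns:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return sigma2
```

Its single fixed decay parameter, applied uniformly across assets, is part of what the flexible econometric methods advocated here aim to improve upon.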
This study offers a historical review of the monetary policy reform of October 6, 1979, and discusses the influences behind it and its significance. We lay out the record from the start of 1979 through the spring of 1980, relying almost exclusively upon contemporaneous sources, including the recently released transcripts of Federal Open Market Committee (FOMC) meetings during 1979. We then present and discuss in detail the reasons for the FOMC's adoption of the reform and the communications challenge presented to the Committee during this period. Further, we examine whether the essential characteristics of the reform were consistent with monetarism, new, neo, or old-fashioned Keynesianism, nominal income targeting, and inflation targeting. The record suggests that the reform was adopted when the FOMC became convinced that its earlier gradualist strategy using finely tuned interest rate moves had proved inadequate for fighting inflation and reversing inflation expectations. The new plan had to break dramatically with established practice, allow for the possibility of substantial increases in short-term interest rates, yet be politically acceptable, and convince financial market participants that it would be effective. The new operating procedures were also adopted for the pragmatic reason that they would likely succeed. JEL Classification: E52, E58, E61, E65.
The Basle securitisation framework explained: the regulatory treatment of asset securitisation
(2005)
The paper provides a comprehensive overview of the gradual evolution of the supervisory policy adopted by the Basle Committee for the regulatory treatment of asset securitisation. We carefully highlight the structure of the new “securitisation framework” to facilitate a general understanding of what constitutes the current state of computing adequate capital requirements for securitised credit exposures. Although we incorporate a simplified sensitivity analysis of the varying levels of capital charges depending on the security design of asset securitisation transactions, we do not engage in a profound analysis of the benefits and drawbacks implicated in the new securitisation framework. JEL Classification: E58, G21, G24, K23, L51. Forthcoming in Journal of Financial Regulation and Compliance, Vol. 13, No. 1.
The mammalian retina contains around 30 morphologically distinct types of amacrine cells. These interneurons receive excitatory glutamatergic input from bipolar cells and provide GABA- and glycinergic inhibition to other cells in the retina. Amacrine cells exhibit widely varying light-evoked responses, in large part defined by their presynaptic partners. We wondered whether amacrine functional diversity is based on a differential expression of glutamate receptors among cell populations and types. In whole-cell patch-clamp experiments on mouse retinal slices, we used selective agonists and antagonists to discriminate responses mediated by NMDA/non-NMDA (NBQX) and AMPA/KA receptors (cyclothiazide, GYKI 52466, GYKI 53655, SYM 2081). We sampled a large variety of individual cell types, which were classified by their dendritic field size into either narrow-field or wide-field cells after filling with Lucifer yellow or neurobiotin. In addition, we used transgenic GlyT2-EGFP mice, whose glycinergic neurons express EGFP. This allowed us to classify amacrines on the basis of their neurotransmitter into either glycinergic or GABAergic cells. All cells (n = 300) had good responses to non-NMDA agonists. Specific AMPA receptor responses could be obtained from almost all cells recorded: 94% of the AII (n = 17), 87% of the narrow-field (n = 45), 81% of the wide-field (n = 21), 85% of the glycinergic (n = 20) and 78% of the GABAergic cells (n = 9). KA receptor selective drugs were also effective on the majority of the AII (79%, n = 14), narrow-field (93%, n = 43), wide-field (85%, n = 26), glycinergic (94%, n = 16) and GABAergic amacrine cells (100%, n = 6). Among the cells tested for the two receptors (n = 65), we encountered both exclusive expression of AMPA or KA receptors and co-expression of the two types. Most narrow-field (70%, n = 27), glycinergic (81%, n = 16) and GABAergic cells (67%, n = 6) were found to have both AMPA and KA receptors.
In contrast, less than half of the wide-field cells (43%, n = 14) were found to co-express AMPA and KA receptors, with the remainder expressing exclusively AMPA (36%) or KA receptors (21%). We could elicit small NMDA responses from most of the wide-field (75%, n = 13) and GABAergic cells (67%, n = 3), whereas only 47% of the narrow-field (n = 15), 14% of the AII (n = 22) and no glycinergic cell (n = 2) reacted to NMDA. Our data suggest that AMPA, KA and NMDA receptors are differentially expressed among different types of amacrine cells rather than among populations with different neurotransmitters or different dendritic coverage of the retina. Selective expression of kinetically different glutamate receptors among amacrine types may be involved in generating transient and sustained inhibitory pathways in the retina. Since AMPA and KA receptors are not generally clustered at the same postsynaptic sites, a single amacrine cell expressing both AMPA and KA receptors may provide inhibition with different temporal characteristics to individual synaptic partners.
In this paper, I analyse the conduct of business rules included in the Directive on Markets in Financial Instruments (MiFID) which has replaced the Investment Services Directive (ISD). These rules, in addition to being part of the regulation of investment intermediaries, operate as contractual standards in the relationships between intermediaries and their clients. While the need to harmonise similar rules is generally acknowledged, in the present paper I ask whether the Lamfalussy regulatory architecture, which governs securities lawmaking in the EU, has in some way improved regulation in this area. In section II, I examine the general aspects of the Lamfalussy process. In section III, I critically analyse the MiFID's provisions on conduct of business obligations, best execution of transactions and client order handling, taking into account the new regime of trade internalisation by investment intermediaries and the ensuing competition between these intermediaries and market operators. In section IV, I draw some general conclusions on the re-regulation made under the Lamfalussy regulatory structure and its limits. In this section, I make a few preliminary comments on the relevance of conduct of business rules to contract law, the ISD rules of conduct and the role of harmonisation.
The wide-area deployment of WiFi hot spots challenges IP access providers. While providers seek new profit models, the profitability and logistics of large-scale deployment of 802.11 wireless technology are still to be proven. Expenditures for hardware, locations, maintenance, connectivity, marketing, billing and customer care must be considered. Even for large carriers with existing infrastructure, the deployment of a large-scale WiFi infrastructure may be risky. This paper proposes a multi-level scheme for hot spot distribution and customer acquisition that reduces the financial risk, cost of marketing and cost of maintenance for the large-scale deployment of WiFi hot spots.
It is commonplace in the debate on Germany's labor market problems to argue that high unemployment and low wage dispersion are related. This paper analyses the relationship between unemployment and residual wage dispersion for individuals with comparable attributes. In the conventional neoclassical point of view, wages are determined by the marginal product of the workers. Accordingly, increases in union minimum wages result in a decline of residual wage dispersion and higher unemployment. A competing view regards wage dispersion as the outcome of search frictions and the associated monopsony power of the firms. Accordingly, an increase in search frictions causes both higher unemployment and higher wage dispersion. The empirical analysis attempts to discriminate between the two hypotheses for West Germany by analyzing the relationship between wage dispersion and both the level of unemployment and the transition rates between different labor market states. The findings are not completely consistent with either theory. However, as predicted by search theory, one robust result is that unemployment by cells is not negatively correlated with the within-cell wage dispersion.
This paper shows that abnormal stock price returns around open market repurchase announcements are about four times higher in Germany than in the US (12% versus 3%). We hypothesize that this observation can be explained by country differences in repurchase regulation. Our empirical evidence indicates that German managers primarily buy back shares to signal an undervaluation of their firm. We demonstrate that the stringent repurchase process prescribed by German law attributes a higher credibility to such a signal than lax US regulations and thereby corroborate our hypothesis.
This paper examines intraday stock price effects and trading activity caused by ad hoc disclosures in Germany. The evidence suggests that stock prices react within 90 minutes of an ad hoc disclosure; trading volumes take even longer to adjust. We find no evidence of abnormal price reactions or abnormal trading volume before announcements. The bigger the company that makes an ad hoc disclosure, the less severe is the abnormal price effect following the announcement. The number of analysts is negatively correlated with the trading volume effect before the ad hoc disclosure. The higher the trading volume on the last trading day before the announcement, the greater the price effect after the ad hoc disclosure and the greater the trading volume effect. Keywords: ad hoc disclosure rules, intraday stock price adjustments, market efficiency.