We explore the macro/finance interface in the context of equity markets. In particular, using half a century of Livingston expected business conditions data we characterize directly the impact of expected business conditions on expected excess stock returns. Expected business conditions consistently affect expected excess returns in a statistically and economically significant counter-cyclical fashion: depressed expected business conditions are associated with high expected excess returns. Moreover, inclusion of expected business conditions in otherwise standard predictive return regressions substantially reduces the explanatory power of the conventional financial predictors, including the dividend yield, default premium, and term premium, while simultaneously increasing R2. Expected business conditions retain predictive power even after controlling for an important and recently introduced non-financial predictor, the generalized consumption/wealth ratio, which accords with the view that expected business conditions play a role in asset pricing different from and complementary to that of the consumption/wealth ratio. We argue that time-varying expected business conditions likely capture time-varying risk, while time-varying consumption/wealth may capture time-varying risk aversion. JEL Classification: G12
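The predictive regression the abstract refers to can be sketched as a simple univariate OLS of excess returns on a business-conditions proxy. This is a minimal illustration with invented numbers, not the Livingston data or the authors' specification:

```python
# Illustrative sketch (not the authors' code): OLS regression of excess
# stock returns on an expected-business-conditions proxy. The two series
# are synthetic; a negative slope mimics the counter-cyclical pattern.

def ols(x, y):
    """Return (intercept, slope) of y = a + b*x by least squares."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical series: depressed conditions (low x) -> high excess return (y)
conditions = [-2.0, -1.0, 0.0, 1.0, 2.0]
excess_ret = [0.09, 0.07, 0.05, 0.03, 0.01]

a, b = ols(conditions, excess_ret)
print(round(a, 4), round(b, 4))  # slope b < 0: counter-cyclical
```

With this synthetic series the fitted slope is negative, i.e. weaker expected business conditions go with higher expected excess returns, which is the counter-cyclical relation the abstract describes.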
We identify a novel benefit of "Alternative Risk Transfer" (ART) products with parametric or index triggers. When a reinsurer has private information about his client's risk, outside reinsurers will price their reinsurance offer less aggressively. Outsiders are subject to adverse selection as only a high-risk insurer might find it optimal to change reinsurers. This creates a hold-up problem that allows the incumbent to extract an information rent. An information-insensitive ART product with a parametric or index trigger is not subject to adverse selection. It can therefore be used to compete against an informed reinsurer, thereby reducing the premium that a low-risk insurer has to pay for the indemnity contract. However, ART products exhibit an interesting fate in our model as they are useful, but not used in equilibrium because of basis risk. JEL Classification: D82, G22
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
EU financial integration: is there a 'Core Europe'? Evidence from a cluster-based approach
(2005)
Numerous recent studies, e.g. EU Commission (2004a), Baele et al. (2004), Adam et al. (2002), and the research pooled in ECB-CFS (2005) and Gaspar, Hartmann, and Sleijpen (2003), have documented progress in EU financial integration from a micro-level view. This paper contributes to this research by identifying groups of financially integrated countries from a holistic, macro-level view. It calculates cross-sectional dispersions, and innovates by applying an inter-temporal cluster analysis to eight euro area countries for the period 1995-2002. The indicators employed represent the money, government bond and credit markets. Our results show that euro countries were divided into two stable groups of financially more closely integrated countries in the pre-EMU period. Back then, geographic proximity and country size might have played a role. This situation has changed remarkably with the euro's introduction. EMU has led to a shake-up both in the number and composition of groups. The evidence puts a question mark behind using Germany as a benchmark in the post-EMU period. The findings suggest as well that financial integration takes place in waves. Stable periods and periods of intense transition alternate. Based on the notion of 'maximum similarity', the results suggest that there exist 'maximum similarity barriers'. It takes extraordinary events, such as EMU, to push the degree of financial integration beyond these barriers. The research encourages policymakers to move forward courageously in the post-FSAP era, and provides comfort that the substantial differences between the current and potentially new euro states can be overcome. The analysis could be extended to the new EU member countries, to the global level, and to additional indicators.
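The grouping step behind a cluster analysis of this kind can be illustrated with a plain one-dimensional k-means pass. Everything below (country codes, indicator values, two clusters) is invented for illustration and is not the paper's data or algorithm:

```python
# Hypothetical sketch of a clustering step (1-D k-means), showing how
# countries could be grouped by a single financial-integration indicator.
# Country codes and indicator values are invented for illustration.

def kmeans_1d(values, centers, iters=20):
    """Simple 1-D k-means; returns final centers and cluster assignments."""
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to the nearest center
        labels = [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
                  for v in values]
        # recompute each center as the mean of its members
        for k in range(len(centers)):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, labels

indicator = {"DE": 0.10, "FR": 0.12, "NL": 0.11, "IT": 0.35, "ES": 0.33}
centers, labels = kmeans_1d(list(indicator.values()), centers=[0.1, 0.3])
groups = {k: [n for n, l in zip(indicator, labels) if l == k]
          for k in range(len(centers))}
print(groups)  # two groups of more closely integrated countries
```

On the made-up numbers above the pass separates a low-dispersion group (DE, FR, NL) from a high-dispersion one (IT, ES), mirroring the idea of stable groups of more closely integrated countries.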
The German corporate governance system has long been cited as the standard example of an insider-controlled and stakeholder-oriented system. We argue that despite important reforms and substantial changes of individual elements of the German corporate governance system the main characteristics of the traditional German system as a whole are still in place. However, in our opinion the changing role of the big universal banks in the governance undermines the stability of the corporate governance system in Germany. Therefore a breakdown of the traditional system leading to a control vacuum or a fundamental change to a capital market-based system could be in the offing.
Small and medium-sized firms typically obtain capital via bank financing. They often rely on a mixture of relationship and arm’s-length banking. This paper explores the reasons for the dominance of heterogeneous multiple banking systems. We show that the incidence of inefficient credit termination and subsequent firm liquidation is contingent on the borrower’s quality and on the relationship bank’s information precision. Generally, heterogeneous multiple banking leads to fewer inefficient credit decisions than monopoly relationship lending or homogeneous multiple banking, provided that the relationship bank’s fraction of total firm debt is not too large.
This paper makes an attempt to present the economics of credit securitisation in a non-technical way, starting from the description and the analysis of a typical securitisation transaction. The paper sketches a theoretical explanation for why tranching, or non-proportional risk sharing, which is at the heart of securitisation transactions, may allow commercial banks to maximize their shareholder value. However, the analysis also makes clear that the conditions under which credit securitisation enhances welfare are fairly restrictive, and require not only an active role of the banking supervisory authorities, but also a price tag on the implicit insurance currently provided by the lender of last resort.
We derive the effects of credit risk transfer (CRT) markets on real sector productivity and on the volume of financial intermediation in a model where banks choose their optimal degree of CRT and monitoring. We find that CRT increases productivity in the up-market real sector but decreases it in the low-end segment. If optimal, CRT unambiguously fosters financial deepening, i.e., it reduces credit-rationing in the economy. These effects rely upon the ability of banks to commit to the optimal CRT at the funding stage. The optimal degree of CRT depends on the combination of moral hazard, general riskiness, and the cost of monitoring in non-monotonic ways.
We provide insights into determinants of the rating level of 371 issuers which defaulted in the years 1999 to 2003, and into the leader-follower relationship between Moody’s and S&P. The evidence for the rating level suggests that Moody’s assigns lower ratings than S&P for all observed periods before the default event. Furthermore, we observe two-way Granger causality, which signifies information flow between the two rating agencies. Since lagged rating changes influence the magnitude of the agencies’ own rating changes, it would appear that the two rating agencies apply a policy of taking a severe downgrade through several mild downgrades. Further, our analysis of rating changes shows that issuers with headquarters in the US are less sharply downgraded than non-US issuers. For rating changes by Moody’s we also find that larger issuers seem to be downgraded less severely than smaller issuers.
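Two-way Granger causality of the kind reported here boils down to asking whether lagged values of one series reduce the prediction error of another. Below is a bare-bones one-lag sketch on synthetic series in which x leads y by construction; it is not the Moody's/S&P rating data, and a real test would add an F-statistic and more lags:

```python
# Hedged sketch of a one-lag Granger-causality check: does adding lagged x
# reduce the residual sum of squares (SSR) when predicting y?

def solve(A, b):
    """Gauss-Jordan elimination for a small linear system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ssr(y, X):
    """Residual sum of squares of y on the columns of X (plus intercept)."""
    X = [[1.0] + row for row in X]
    k = len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(len(y))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(len(y))) for i in range(k)]
    beta = solve(A, b)
    return sum((y[t] - sum(be * xv for be, xv in zip(beta, X[t]))) ** 2
               for t in range(len(y)))

# Synthetic series where y follows x with a one-period lag
x = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0, 3.5, 5.0]
y = [0.0] + x[:-1]
Y = y[1:]                                                     # regressand y_t
restricted = ssr(Y, [[y[t]] for t in range(len(Y))])          # y_{t-1} only
unrestricted = ssr(Y, [[y[t], x[t]] for t in range(len(Y))])  # plus x_{t-1}
print(unrestricted < restricted)  # lagged x improves the prediction of y
```

If the SSR drop is large relative to the added parameter, lagged x "Granger-causes" y; running the same comparison in both directions is what produces the two-way finding reported in the abstract.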
This article presents an overview of the contemporary German insurance market, its structure, players, and development trends. First, brief information about the history of the insurance industry in Germany is provided. Second, the contemporary market is analyzed in terms of its legal and economic structure, with statistics on the number of companies, insurance density and penetration, the role of insurers in the capital markets, premiums split, and main market players and their market shares. Furthermore, the three biggest insurance lines—life, health, and property and casualty—are considered in more detail, such as product range, country specifics, and insurance and investment results. A section on regulation outlines its implementation in the insurance sector, offering information on the underlying legislative basis, supervisory body, technical procedures, expected developments, and sources of more detailed information.
Electric charge correlations were studied for p+p, C+C, Si+Si, and centrality selected Pb+Pb collisions at sqrt[sNN]=17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudorapidity correlations of oppositely charged particles were measured using the balance function method. The width of the balance function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
German version: Der Umgang mit Rechtsparadoxien: Derrida, Luhmann, Wiethölter. In: Christian Joerges and Gunther Teubner (eds.), Rechtsverfassungsrecht: Recht-Fertigungen zwischen Sozialtheorie und Privatrechtsdogmatik. Nomos, Baden-Baden 2003, 249-272.
This paper starts out by pointing out the challenges and weaknesses which the German banking system faces according to the prevailing views among national and international observers. These challenges include a general problem of profitability and, possibly as its main reason, the strong role of public banks. These concerns raise the questions whether the facts support this assessment of a general profitability problem and whether there are reasons to expect a fundamental or structural transformation of the German banking system. The paper contains four sections. The first one presents the evidence concerning the profitability problem in a comparative, international perspective. The second section presents information about the so-called three-pillar system of German banking. What might be surprising in this context is that the group of public banks is not only the largest segment of the German banking system, but that the primary savings banks also are its financially most successful part. The German banking system is highly fragmented. This fact invites a discussion of past, present and possible future consolidations in the banking system, which the third section provides. The authors provide evidence to the effect that within-group consolidation has been going on at a rapid pace in the public and the cooperative banking groups in recent years and that this development has not yet come to an end, while within-group consolidation among the large private banks, consolidation across group boundaries at a national level, and cross-border or international consolidation have so far only happened on a limited scale and do not appear to gain momentum in the near future. In the last section, the authors develop their explanation for the fact that large-scale and cross-border consolidation has so far not materialized to any great extent.
Drawing on the concept of complementarity, they argue that it would be difficult to expect these kinds of mergers and acquisitions to happen within a financial system which is itself surprisingly stable, or, as one can also call it, resistant to change.
Asset-backed securitization (ABS) has become a viable and increasingly attractive risk management and refinancing method either as a standalone form of structured finance or as securitized debt in Collateralized Debt Obligations (CDO). However, the absence of industry standardization has prevented rising investment demand from translating into market liquidity comparable to traditional fixed income instruments, in all but a few selected market segments. In particular, low financial transparency and complex security designs inhibit profound analysis of secondary market pricing and how it relates to established forms of external finance. This paper represents the first attempt to measure the intertemporal, bivariate causal relationship between matched price series of equity and ABS issued by the same entity. In a two-dimensional linear system of simultaneous equations we investigate the short-term dynamics and long-term consistency of daily secondary market data from the U.K. Sterling ABS/MBS market and exchange traded shares between 1998 and 2004 with and without the presence of cointegration. Our causality framework delivers compelling empirical support for a strong co-movement between matched price series of ABS-equity pairs, where ABS markets seem to contribute more to price discovery over the long run. Controlling for cointegration, risk-free interest and average market risk of corporate debt hardly alters our results. However, once we qualify the magnitude and direction of price discovery on various security characteristics, such as the ABS asset class, we find that ABS-equity pairs with large-scale CMBS/RMBS and credit card/student loan ABS reveal stronger lead-lag relationships and joint price dynamics than whole business ABS. JEL Classifications: G10, G12, G24
Although the commoditisation of illiquid asset exposures through securitisation facilitates the disciplining effect of capital markets on the risk management, private information about securitised debt as well as complex transaction structures could possibly impair the fair market valuation. In a simple issue design model without intermediaries we maximise issuer proceeds over a positive measure of issue quality, where a direct revelation mechanism (DRM) by profitable informed investors engages endogenous price discovery through auction-style allocation preference as a continuous function of perceived issue quality. We derive an optimal allocation schedule for maximum issuer payoffs under different pricing regimes if asymmetric information requires underpricing. In particular, we study how the incidence of uninformed investors at varying levels of valuation uncertainty and their function of clearing the market affect profitable informed investment. We find that the issuer optimises own payoffs at each valuation irrespective of the applicable pricing mechanism by awarding informed investors the lowest possible allocation (and attendant underpricing) that still guarantees profitable informed investment. Under uniform pricing the composition of the investor pool ensures that informed investors appropriate higher profit than uninformed types. Any reservation utility by issuers lowers the probability of information disclosure by informed investors and the scope of issuers to curtail profitable informed investment. JEL Classifications: D82, G12, G14, G23
Asset securitisation as a risk management and funding tool: what does it hold in store for SMEs?
(2005)
The following chapter critically surveys the attendant benefits and drawbacks of asset securitisation on both financial institutions and firms. It also elicits salient lessons to be learned about the securitisation of SME-related obligations from a cursory review of SME securitisation in Germany as a foray of asset securitisation in a bank-centred financial system paired with a strong presence of SMEs in industrial production. JEL Classification: D81, G15, M20
As a sign of ambivalence in the regulatory definition of capital adequacy for credit risk and the quest for more efficient refinancing sources collateral loan obligations (CLOs) have become a prominent securitisation mechanism. This paper presents a loss-based asset pricing model for the valuation of constituent tranches within a CLO-style security design. The model specifically examines how tranche subordination translates securitised credit risk into investment risk of issued tranches as beneficial interests on a designated loan pool typically underlying a CLO transaction. We obtain a tranchespecific term structure from an intensity-based simulation of defaults under both robust statistical analysis and extreme value theory (EVT). Loss sharing between issuers and investors according to a simplified subordination mechanism allows issuers to decompose securitised credit risk exposures into a collection of default sensitive debt securities with divergent risk profiles and expected investor returns. Our estimation results suggest a dichotomous effect of loss cascading, with the default term structure of the most junior tranche of CLO transactions (“first loss position”) being distinctly different from that of the remaining, more senior “investor tranches”. The first loss position carries large expected loss (with high investor return) and low leverage, whereas all other tranches mainly suffer from loss volatility (unexpected loss). These findings might explain why issuers retain the most junior tranche as credit enhancement to attenuate asymmetric information between issuers and investors. At the same time, the issuer discretion in the configuration of loss subordination within particular security design might give rise to implicit investment risk in senior tranches in the event of systemic shocks. JEL Classifications: C15, C22, D82, F34, G13, G18, G20
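The loss-cascading effect of subordination can be illustrated with a toy Monte Carlo sketch. This is a deliberate simplification with invented pool parameters, not the paper's intensity-based model or its EVT analysis:

```python
import random

# Toy sketch of subordination in a CLO-style structure: realized pool losses
# hit the first-loss tranche until it is exhausted, then the senior tranche.
# Pool size, default probability and tranche sizes are invented.

def tranche_losses(pool_loss, first_loss_size):
    """Split a realized pool loss between first-loss and senior tranches."""
    junior = min(pool_loss, first_loss_size)
    senior = pool_loss - junior
    return junior, senior

random.seed(42)
n_loans, first_loss_size, trials = 100, 5.0, 10_000
junior_tot = senior_tot = 0.0
for _ in range(trials):
    # each loan of size 1 defaults with probability 2%, losing everything
    pool_loss = sum(1.0 for _ in range(n_loans) if random.random() < 0.02)
    j, s = tranche_losses(pool_loss, first_loss_size)
    junior_tot += j
    senior_tot += s

# average loss borne by each tranche per trial
print(round(junior_tot / trials, 2), round(senior_tot / trials, 2))
```

In this sketch the first-loss piece absorbs almost the entire expected pool loss, while the senior tranche is touched only in tail scenarios, mirroring the dichotomy the abstract describes between expected loss in the junior tranche and loss volatility in the more senior tranches.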
System-size dependence of strangeness production in nucleus-nucleus collisions at √sNN = 17.3 GeV
(2005)
Emission of pi, K, phi and Lambda was measured in near-central C+C and Si+Si collisions at 158 AGeV beam energy. Together with earlier data for p+p, S+S and Pb+Pb, the system-size dependence of relative strangeness production in nucleus-nucleus collisions is obtained. Its fast rise and the saturation observed at about 60 participating nucleons can be understood as onset of the formation of coherent partonic subsystems of increasing size. PACS numbers: 25.75.-q
Results are presented on Omega production in central Pb+Pb collisions at 40 and 158 AGeV beam energy. Given are transverse-mass spectra, rapidity distributions, and total yields for the sum Omega+Antiomega at 40 AGeV and for Omega and Antiomega separately at 158 AGeV. The yields are strongly under-predicted by the string-hadronic UrQMD model and are in better agreement with predictions from hadron gas models. PACS numbers: 25.75.Dw
The phase diagram of strongly interacting matter is discussed within the exactly solvable statistical model of the quark-gluon bags. The model predicts two phases of matter: the hadron gas at a low temperature T and baryonic chemical potential muB, and the quark-gluon gas at a high T and/or muB. The nature of the phase transition depends on a form of the bag mass-volume spectrum (its pre-exponential factor), which is expected to change with the muB/T ratio. It is therefore likely that the line of the 1st order transition at a high muB/T ratio is followed by the line of the 2nd order phase transition at an intermediate muB/T, and then by the lines of "higher order transitions" at a low muB/T.
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloonborne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum of total inorganic chlorine loading in the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l’Adour, France. For both flights a ClO daytime and nighttime vertical profile could be derived over an altitude range of approximately 15–31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l’Adour in 1999 show good agreement with the ClO measurements. For the flight launched in León in 1996, similarly good agreement is found, except at around 650 K potential temperature (~26 km altitude). However, for solar zenith angles greater than 86°–87° the simulated ClO mixing ratios tend to substantially overestimate measured ClO, by approximately a factor of 2.5 or more, for both flights. We therefore conclude that, apart from solar zenith angles greater than 86°–87°, where model simulations substantially overestimate ClO observations, the presented ClO measurements give no indication of substantial uncertainties in midlatitude chlorine chemistry of the stratosphere.
Results are presented from a search for the decays D0 -> K- pi+ and anti-D0 -> K+ pi- in a sample of 3.8x10^6 central Pb+Pb events collected with a beam energy of 158A GeV by NA49 at the CERN SPS. No signal is observed. An upper limit on D0 production is derived and compared to predictions from several models.
Particle production in central Pb+Pb collisions was studied with the NA49 large acceptance spectrometer at the CERN SPS at beam energies of 20, 30, 40, 80, and 158 GeV per nucleon. A change of the energy dependence is observed around 30A GeV for the yields of pions and strange particles as well as for the shapes of the transverse mass spectra. At present only a reaction scenario with onset of deconfinement is able to reproduce the measurements.
Despite a lot of re-structuring and many innovations in recent years, the securities transaction industry in the European Union is still a highly inefficient and inconsistently configured system for cross-border transactions. This paper analyzes the functions performed, the institutions involved and the parameters concerned that shape market and ownership structure in the industry. Of particular interest are microeconomic incentives of the main players that can be in contradiction to social welfare. We develop a framework and analyze three consistent systems for the securities transaction industry in the EU that offer superior efficiency than the current, inefficient arrangement. Some policy advice is given to select the 'best' system for the Single European Financial Market.
In recent years stock exchanges have been increasingly diversifying their operations into related business areas such as derivatives trading, post-trading services and software sales. This trend can be observed most notably among profit-oriented trading venues. While the pursuit of diversification is likely to be driven by the attractiveness of these investment opportunities, it is yet an open question whether certain integration activities are also efficient, both from a social welfare and from the exchanges' perspective. Academic contributions so far analyzed different business models primarily from the social welfare perspective, whereas there is only little literature considering their impact on the exchange itself. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the factor productivity of exchanges with different business models. Our findings suggest three conclusions: (1) Integration activity comes at the cost of increased operational complexity which in some cases outweighs the potential synergies between related activities and therefore leads to technical inefficiencies and lower productivity growth. (2) We find no evidence that vertical integration is more efficient and productive than other business models. This finding could contribute to the ongoing discussion about the merits of vertical integration from a social welfare perspective. (3) The existence of a strong in-house IT-competence seems to be beneficial in overcoming these inefficiencies.
Academic contributions on the demutualization of stock exchanges so far have been predominantly devoted to social welfare issues, whereas there is scarce empirical literature referring to the impact of a governance change on the exchange itself. While there is consensus that the case for demutualization is predominantly driven by the need to improve the exchange's competitiveness in a changing business environment, it remains unclear how different governance regimes actually affect stock exchange performance. Some authors propose that a public listing is the best suited governance arrangement to improve an exchange's competitiveness. By employing a panel data set of 28 stock exchanges for the years 1999-2003 we seek to shed light on this topic by comparing the efficiency and productivity of exchanges with differing governance arrangements. For this purpose we calculate in a first step individual efficiency and productivity values via DEA. In a second step we regress the derived values against variables that - amongst others - map the institutional arrangement of the exchanges in order to determine efficiency and productivity differences between (1) mutuals, (2) demutualized but customer-owned exchanges, and (3) publicly listed and thus at least partly outsider-owned exchanges. We find evidence that demutualized exchanges exhibit higher technical efficiency than mutuals. However, they perform relatively poorly as far as productivity growth is concerned. Furthermore, we find no evidence that publicly listed exchanges possess higher efficiency and productivity values than demutualized exchanges with a customer-dominated structure. We conclude that the merits of outside ownership lie possibly in other areas such as solving conflicts of interest between too heterogeneous members.
It is widely believed that the ideal board in corporations is composed almost entirely of independent (outside) directors. In contrast, this paper shows that some lack of board independence can be in the interest of shareholders. This follows because a lack of board independence serves as a substitute for commitment. Boards that are dependent on the incumbent CEO adopt a less aggressive CEO replacement rule than independent boards. While this behavior is inefficient ex post, it has positive ex ante incentive effects. The model suggests that independent boards (dependent boards) are most valuable to shareholders if the problem of providing appropriate incentives to the CEO is weak (severe).
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
We argue that the shape of the system-size dependence of strangeness production in nucleus-nucleus collisions can be understood in a picture that is based on the formation of clusters of overlapping strings. A string percolation model combined with a statistical description of the hadronization yields a quantitative agreement with the data at sqrt s_NN = 17.3 GeV. The model is also applied to RHIC energies.
We investigate the sensitivity of several observables to the density dependence of the symmetry potential within the microscopic transport model UrQMD (ultrarelativistic quantum molecular dynamics model). The same systems are used to probe the symmetry potential at both low and high densities. The influence of the symmetry potentials on the yields of pi-, pi+, the pi-/pi+ ratio, the n/p ratio of free nucleons and the t/3He ratio are studied for neutron-rich heavy ion collisions (208Pb+208Pb, 132Sn+124Sn, 96Zr+96Zr) at E_b=0.4A GeV. We find that these multiple probes provide comprehensive information on the density dependence of the symmetry potential.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
(2005)
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alarm the neighboring cells as well as the whole plant to activate defense responses to limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown except for the recognition process of pathogens. We delineate the NRP-gene in soybean, which is specifically induced during this programmed cell death and contains a novel protein domain, which is commonly found in different plant proteins.
Results: The sequence analysis of the protein encoded by the NRP-gene from soybean led to the identification of a novel domain, which we named DCD because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role of these proteins in phytohormone response, embryo development and programmed cell death induced by pathogens or ozone.
Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins to gain further molecular insights into these processes.
Background: Osteoarthritis (OA) has a high prevalence in primary care. Conservative, guideline-oriented approaches aiming at improving pain treatment and increasing physical activity have been proven effective in several contexts outside the primary care setting, for instance the Arthritis Self-Management Programs (ASMPs). But it remains unclear whether these comprehensive evidence-based approaches can improve patients' quality of life if they are provided in a primary care setting.
Methods/Design: PraxArt is a cluster-randomised controlled trial with GPs as the unit of randomisation. The aim of the study is to evaluate the impact of a comprehensive evidence-based medical education of GPs on individual care and patients' quality of life. 75 GPs were randomised either to intervention group I or II or to a control group. Each GP will include 15 patients suffering from osteoarthritis according to the ACR criteria. In intervention group I, GPs will receive medical education and patient education leaflets including a physical exercise program. In intervention group II the same is provided, but in addition a practice nurse will be trained to monitor, via monthly telephone calls, adherence to GPs' prescriptions and advice, and to ask about increasing pain and possible side effects of medication. In the control group no intervention will be applied at all. The main outcome measurement for patients' QoL is the GERMAN-AIMS2-SF questionnaire. In addition, data about patients' satisfaction (using a modified EUROPEP tool), medication, health care utilization, comorbidity, physical activity and depression (using the PHQ-9) will be collected. Measurements (pre data collection) will take place in months I-III, starting in June 2005. Post data collection will be performed after 6 months.
Discussion: Despite the high prevalence and increasing incidence, comprehensive and evidence-based treatment approaches for OA in a primary care setting are neither established nor evaluated in Germany. If the evaluation of the presented approach reveals a clear benefit, it is planned to provide these GP-centred interventions on a much larger scale.
Cancer has become one of the most fatal diseases. The Heidelberg Heavy Ion Cancer Therapy (HICAT) has the potential to become an important and efficient treatment method because of its excellent "Bragg peak" characteristics and on-line irradiation control by PET diagnostics. The dedicated Heidelberg Heavy Ion Cancer Therapy project includes two ECR ion sources, an RF linear injector, a synchrotron and three treatment rooms. It will deliver 4×10^10 protons, 1×10^10 He ions, 1×10^9 carbon ions, or 5×10^8 oxygen ions per synchrotron cycle at beam energies of 50-430 AMeV for the treatments. The RF linear injector consists of a 400 AkeV RFQ and a very compact 7 AMeV IH-DTL accelerator operated at 216.816 MHz. The development of the IH-DTL within the HICAT project is a great challenge with respect to the present state of the DTL art for the following reasons:
- the highest operating frequency (216.816 MHz) of all IH-DTL cavities;
- an extremely large cavity length-to-diameter ratio of about 11;
- an IH-DTL with three internal triplets;
- the highest effective voltage gain per meter (5.5 MV/m);
- a very short MEBT design for the beam matching.
The following achievements have been reached during the development of the IH-DTL injector for HICAT: The KONUS beam dynamics design with the LORASR code fulfills the beam requirement of the HICAT synchrotron at the injection point. The simulations for the IH-DTL injector have been performed not only with a homogeneous input beam, but also with the actual particle distribution from the exit of the HICAT RFQ accelerator as delivered by the PARMTEQ code. The output longitudinal normalized emittance for 95% of all particles is 2.00 AkeV ns, with an emittance growth of less than 24%, while the X-X' and Y-Y' normalized emittances are 0.77 mm mrad and 0.62 mm mrad, respectively. The emittance growth in X-X' is less than 18%, and the emittance growth in Y-Y' is less than 5%.
Based on the transverse envelopes of the transported particles, the buncher drift tubes at the RFQ high-energy end were redesigned to obtain a higher transit time factor for this novel RFQ internal buncher. An optimized effective buncher gap voltage of 45.4 kV has been calculated to deliver a minimized longitudinal beam emittance, while the influence of the effective buncher voltage on the transverse emittance can be neglected. Six different tuning concepts were investigated in detail while tuning the 1:2 scaled HICAT IH model cavity. 'Volume tuning' by a variation of the cavity cross-sectional area can compensate the unbalanced capacitance distribution in case of an extreme beta-lambda variation along an IH cavity. 'Additional capacitance plates', copper sheets clamped on drift tube stems, are a fast way of checking the tuning sensitivity, but they will finally be replaced by massive copper blocks mounted on the drift tube girders. 'Lens coupling' is an important tuning method to stabilize the operation mode and to increase or decrease the coupling between neighboring sections. 'Tube tuning' is the fine-tuning concept and also the standard tuning method to reach the needed field distributions as well as the gap voltage distributions. 'Undercut tuning' is a very sensitive tuning method for the end sections and for balancing the voltage distribution along the structure. The different types of 'plungers' in the 3rd and 4th sections have different effects on the resonance frequency and on the field distribution. The different triplet stems and the geometry of the cavity end have also been investigated to reach the design field and voltage distributions. Finally, the needed uniform field distribution along the IH-DTL cavity and the corresponding effective voltage distribution were realized; the remaining maximum gap voltage difference was less than 5% for the model cavity. Several important higher-order modes were also measured.
The RF tuning of the IH-DTL model cavity delivers the final geometry parameters of the IH-DTL power cavity. A rectangular cavity cross section was adopted for the first time for this IH-DTL cavity; this eases the realization of the volume tuning concept in the 1st and 2nd sections. Lens coupling determines the final distance between the triplet and the girder. The triplets are mounted on the lower cavity half shell. Microwave Studio simulations have been carried out not only for the HICAT model cavity, but also for the final geometry of the IH-DTL power cavity. The field distribution for the operation mode H110 fits the model cavity measurement, as do the higher-order modes. The simulations prove the IH-DTL geometrical design. On the other hand, the precision of one simulation with 2.3 million mesh points for the full cross-sectional area, and a CPU time of more than 15 hours on a DELL PC with an Intel Pentium 4 at 2.4 GHz and 2.096 GB RAM, were exploited to their limit when calculating the real parameters for the two final machining iterations during production. The shunt impedance of the IH-DTL power cavity is estimated by comparison with the existing tanks to be about 195.8 MΩ/m, which fits the simulation result of 200.3 MΩ/m when the conductivity is reduced to 5.0×10^7 Ω⁻¹m⁻¹. The effective shunt impedance is 153 MΩ/m. The needed RF power is 755 kW. The expected quality factor of the IH-DTL cavity is about 15600. The IH-DTL power cavity tuning measurements before cavity copper plating have been performed. The results are within the specifications. There is no doubt that the needed accuracy of the voltage distribution will be reached with the foreseen fine-tuning concepts in the last steps.
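The quoted figures for effective gradient, effective shunt impedance and RF power can be cross-checked with the standard standing-wave relation P/L = E_eff²/Z_eff. A minimal sketch, with the caveat that the cavity length is not stated in the abstract and is only inferred here from the quoted 755 kW:

```python
# Cross-check of the quoted IH-DTL figures via P/L = E_eff^2 / Z_eff.
# E_eff and Z_eff are taken from the abstract; the length is an inference,
# not a value given in the text.

E_eff = 5.5     # MV/m, effective voltage gain per meter (quoted)
Z_eff = 153.0   # MOhm/m, effective shunt impedance (quoted)

power_per_m = E_eff**2 / Z_eff   # MW/m of structure
length = 0.755 / power_per_m     # m, implied by the quoted 755 kW

print(round(power_per_m * 1000), "kW/m")  # ~198 kW/m
print(round(length, 1), "m")              # ~3.8 m
```

The implied structure length of roughly 3.8 m is plausible for a compact 7 AMeV IH-DTL, so the quoted power, gradient and shunt impedance are mutually consistent.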
Fluctuations and NA49
(2005)
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under "the opportunistic approach to disinflation" a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. Classification: E31, E52, E58, E61. July 2005.
This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided. Classification: C6, D9, E2. July 28, 2005.
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate, in a spatially resolved way, global-scale groundwater recharge. In this report, improved model estimates of diffuse groundwater recharge at the global scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model) which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions as compared to the model results before tuning, and the new estimates are more similar to country-level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
Prion diseases, also called transmissible spongiform encephalopathies, are a group of fatal neurodegenerative conditions that affect humans and a wide variety of animals. To date, no therapeutic or prophylactic approach against prion diseases is available. The causative infectious agent is the prion, also termed PrPSc, which is a pathological conformer of a cellular protein named prion protein, PrPc. Prions are thought to multiply upon conversion of PrPc to PrPSc in a self-propagating manner. Immunotherapeutic strategies directed against PrPc represent a possible approach to preventing or curing prion diseases. Accordingly, it has already been shown in animal models that passive immunization delays the onset of prion diseases. The present thesis aimed at the development of a candidate vaccine for active immunization against prion diseases; such an immune response requires circumventing host tolerance to the self-antigen PrPc. The vaccine development was approached using virus-like particles (retroparticles) derived from either the murine leukemia virus (MLV) or the human immunodeficiency virus (HIV). The display of PrP on the surface of such particles was addressed for both the cellular and the pathogenic form of PrP. The display of PrPc was achieved by fusion either to the transmembrane domain of the platelet-derived growth factor receptor (PDGFR) or to the N-terminal part of the viral envelope protein (Env). In both cases, the corresponding PrPD- and PrPE-retroparticles were successfully produced and analyzed via immunofluorescence, Western blot analysis, immunogold electron microscopy as well as by ELISA methods. Both PrPD- and PrPE-retroparticles showed effective incorporation of N-terminally truncated forms of PrPc, but not of the complete protein. The displayed PrPc revealed the typical glycosylation pattern, which was specifically removed by a glycosidase enzyme.
Upon display of PrPc on retroparticles, the protein remained detectable by PrP-specific antibodies under native conditions. Electron microscopy analysis of the PrPc variants revealed no alteration of the characteristic retroviral morphology of the generated particles. MLV-derived PrPD-retroparticles were successfully used in immunization studies. Contrary to approaches using bacterially expressed PrPc, the immunization of mice resulted in a specific antibody response. The display of the pathogenic isoform was pursued via two different strategies. The first was directed at the conversion of the proteinase K (PK) sensitive form of PrP on the surface of PrPD-retroparticles into the PK-resistant form. Despite specific adaptation of the PK digestion assay for detecting resistant PrP, no PrP conversion was observed for PrPD-retroparticles. The second approach utilized a replication-competent variant of the ecotropic MLV displaying PrPc on the viral Env protein. This MLV variant was stable in cell culture for six passages but did not replicate on scrapie-infected, PrPSc-propagating neuroblastoma cells. Thus, besides PrPc-displaying virus-like particles, a replication-competent MLV variant was obtained which stably incorporated PrPc at the N-terminus of the viral Env protein. The incorporation of the cell-surface-located PrPc into particles was expected from previously obtained data on protein display in the context of retrovirus-derived particles. Thus, the lack of incorporation observed for the complete PrPc sequence was rather unexpected; incorporation was found to be inhibited upon fusion both to the PDGFR and to the viral Env. In contrast to N-terminally truncated PrPc, the complete PrPc was shown to exhibit increased cell surface internalization rates and half-lives, possibly contributing to the observed results. The PrP vaccination approach described in this work represents the first successful system inducing PrP-specific antibody responses against the prion protein in wt mice.
Possible explanations are the induction of specific T cell help or effects of innate immunity. MLV- and HIV-derived particles bearing the PrP-coding sequence, or replication-competent variants generated during this thesis, might help to further improve the PrP-specific immune response.
Using CORSIKA for simulating extensive air showers, we study the relation between the shower characteristics and features of hadronic multiparticle production at low energies. We report about investigations of typical energies and phase space regions of secondary particles which are important for muon production in extensive air showers. Possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.
Globalized justice - fragmented justice. Human rights violations by "private" transnational actors
(2005)
Plenary lecture, World Congress of Legal and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch 'private' transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society, International Association for Philosophy of Law and Social Philosophy, Granada 2005, pp. 529-546.
In recent years, much effort has gone into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with the performance of its manually designed ancestor ROSANA.
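The hybrid paradigm described here can be sketched in a few lines: hand-coded, universally valid antecedent *filtering* followed by learned antecedent *selection*. This is a minimal illustrative sketch; the feature names and the stand-in scoring function are assumptions, not the actual ROSANA-ML components:

```python
# Hybrid anaphor resolution sketch: manual filters + learned preference.
# All feature names and the scoring heuristic are illustrative assumptions.

def agreement_filter(anaphor, candidates):
    """Manually designed, universally valid filters: number and gender agreement."""
    return [c for c in candidates
            if c["number"] == anaphor["number"]
            and c["gender"] == anaphor["gender"]]

def learned_score(anaphor, candidate):
    """Stand-in for a machine-learned, potentially genre-specific selection model."""
    # A real system learns such preferences; here: prefer the closer candidate.
    return -abs(anaphor["position"] - candidate["position"])

def resolve(anaphor, candidates):
    """Filter first (hand-coded), then select the best survivor (learned)."""
    survivors = agreement_filter(anaphor, candidates)
    if not survivors:
        return None
    return max(survivors, key=lambda c: learned_score(anaphor, c))

cands = [{"id": "Mary", "number": "sg", "gender": "fem", "position": 1},
         {"id": "the boys", "number": "pl", "gender": "masc", "position": 3}]
she = {"number": "sg", "gender": "fem", "position": 5}
print(resolve(she, cands)["id"])  # Mary
```

The division of labor mirrors the paper's argument: the filters are a once-for-all engineering effort, while only the selection step benefits from corpus-based learning.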
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as input, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The different terms in the balance are compared to results of other global models and uncertainties are described. Total global surface N surplus increased from 161 Tg N yr-1 in 1961 to 230 Tg N yr-1 in 1995. Using assumptions for the scenario A1B of the Special Report on Emission Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC) as quantified by the IMAGE model, total global surface N surplus is estimated to be 229 Tg N yr-1 in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas on the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4 428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation. Contents 1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia 2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas 3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas 4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia, Results Maps
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat from ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. In practice, however, they lack effectiveness, as they are hard to enforce. In this paper, we discuss a supportive infrastructure targeting the distribution channel: as soon as the picture is publicly available, the exposed individual has a chance to find it and take proper action.
We consider Schwarz maps for triangles whose angles are rather general rational multiples of pi. Under which conditions can they have algebraic values at algebraic arguments? The answer is based mainly on considerations of complex multiplication of certain Prym varieties in Jacobians of hypergeometric curves. The paper can serve as an introduction to transcendence techniques for hypergeometric functions, but contains also new results and examples.
The main subject of this survey are Belyi functions and dessins d'enfants on Riemann surfaces. Dessins are certain bipartite graphs on 2-manifolds that define there a conformal and even an algebraic structure. In principle, all deeper properties of the resulting Riemann surfaces or algebraic curves should be encoded in these dessins, but the decoding turns out to be difficult and leads to many open problems. We emphasize arithmetical aspects like Galois actions, the relation to the ABC theorem in function fields, and arithmetic questions in the uniformization theory of algebraic curves defined over number fields.
Presentation at the AMS Southeastern Sectional Meeting, 14-16 March 2003, and the workshop 'Asymptotic Analysis, Stability, and Generalized Functions', 17-19 March 2003, Louisiana State University, Baton Rouge, Louisiana. See the corresponding papers "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and "Infinite Infrared Regularization and a State Space for the Heisenberg Algebra".
Background: Allogeneic hematopoietic stem cell transplantation (allo-HSCT) is performed mainly in patients with high-risk or advanced hematologic malignancies and congenital or acquired aplastic anemias. In the context of the significant risk of graft failure after allo-HSCT from alternative donors and the risk of relapse in recipients transplanted for malignancy, the precise monitoring of posttransplant hematopoietic chimerism is of utmost interest. Useful molecular methods for chimerism quantification after allogeneic transplantation, aimed at distinguishing precisely between donor's and recipient's cells, are PCR-based analyses of polymorphic DNA markers. Such analyses can be performed regardless of donor's and recipient's sex. Additionally, in patients after sex-mismatched allo-HSCT, fluorescent in situ hybridization (FISH) can be applied. Methods: We compared different techniques for analysis of posttransplant chimerism, namely FISH and PCR-based molecular methods with automated detection of fluorescent products in an ALFExpress DNA Sequencer (Pharmacia) or ABI 310 Genetic Analyzer (PE). We used the Spearman correlation test. Results: We have found a high correlation between the results obtained from the PCR/ALFExpress and PCR/ABI 310 Genetic Analyzer. Lower, but still positive, correlations were found between the results of the FISH technique and the results obtained using automated DNA sizing technology. Conclusions: All the methods applied enable a rapid and accurate detection of post-HSCT chimerism.
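As a textbook-style illustration of how PCR-based chimerism is typically quantified from an informative polymorphic marker, the donor fraction can be estimated from the fluorescent peak areas of donor-specific versus recipient-specific alleles. This is a generic sketch, not the exact quantification scheme of the instruments named above:

```python
# Illustrative percent-donor-chimerism calculation from the peak areas of
# an informative polymorphic DNA marker. Generic formula, assumed values.

def donor_chimerism(donor_peak_area, recipient_peak_area):
    """Percent donor cells from donor- vs recipient-specific allele peak areas."""
    total = donor_peak_area + recipient_peak_area
    return 100.0 * donor_peak_area / total

# Example: 9000 units of donor-specific signal vs 1000 of recipient-specific.
print(donor_chimerism(donor_peak_area=9000, recipient_peak_area=1000))  # 90.0
```

In practice such ratios are averaged over several informative markers and corrected for amplification bias, which is one reason the automated sizing platforms compared here can differ from FISH counts.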