Strangeness enhancement is discussed as a feature specific to relativistic nuclear collisions which create a fireball of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter. The hadron gas at the instant of its formation captures conditions directly at the QCD phase boundary at top SPS and RHIC energy, chiefly the critical temperature and energy density.
Relativistic nucleus-nucleus collisions create a "fireball" of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic, perhaps coherent degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter.
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In the second step, time series components, such as trends, are removed to obtain a deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It turns out that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons, especially autumn, also show this increase, in Germany an insignificant decrease is found for the summer and autumn seasons. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and in the parameters of the Weibull probability density function.
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods fail to capture the nonlinearities of such a system, which makes it necessary to find alternative statistical techniques. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for several years now. Most authors use the standard Backpropagation Network (BPN) for their investigations, although this specific model architecture carries a certain risk of over- or underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented Fast Simulated Annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a given cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, the free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of a multiple linear regression model; the simulation quality on the global scale reaches up to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al. (1994), updated by Jones (1999a)) for the examined period 1856–1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
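As a concrete illustration of the optimization technique this abstract relies on, the following is a minimal sketch of fast simulated annealing with Cauchy-distributed jumps and the hyperbolic cooling schedule T_k = T_0/(1+k); the quadratic cost function, step counts and tuning constants are illustrative stand-ins, not the paper's climate application.

```python
# Minimal sketch of Fast Simulated Annealing (Szu-style Cauchy Machine):
# Cauchy-distributed jumps with a hyperbolic cooling schedule T_k = T0/(1+k).
# The bowl-shaped cost function is a stand-in, not the paper's climate model.
import numpy as np

def fsa_minimize(cost, x0, t0=1.0, n_steps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    best_x, best_f = x.copy(), fx
    for k in range(n_steps):
        t = t0 / (1.0 + k)                          # hyperbolic FSA schedule
        cand = x + t * rng.standard_cauchy(x.size)  # heavy-tailed Cauchy visit
        fc = cost(cand)
        # Metropolis acceptance: take improvements, sometimes uphill moves
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example: recover the minimum of a simple quadratic cost surface.
x_opt, f_opt = fsa_minimize(lambda v: float(np.sum((v - 3.0) ** 2)), np.zeros(2))
```

The heavy-tailed Cauchy proposals occasionally take very long jumps, which is what lets this schedule escape local minima faster than classical Boltzmann annealing.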
Observed global and European spatiotemporally related fields of surface air temperature, mean sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and to known internal climate mechanisms such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporal fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into the signals of the external forcing factors and internal climate mechanisms listed above as well as the residuals. Finally, a back-transformation leads to the spatiotemporally related patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance, including natural variability. This test answers the question whether a significant anthropogenic climate change is visible in the observed data. As a second test, the anthropogenic signal is tested against the climate noise component only. This test answers the question whether the anthropogenic signal is significant among the other signals in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed the climate (first test).
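The two-step procedure described above can be compressed into a short sketch: PCA of the space-time field, regression of each leading principal-component score on the forcing series, and back-transformation of the fitted part into a space-time signal field. Plain OLS stands in for the paper's stepwise selection, and the data are synthetic placeholders.

```python
# Sketch of the two-step separation: PCA of a space-time field, OLS of each
# principal-component score on external forcing series, back-transformation
# of the fitted part into a space-time "signal" field. Synthetic data; plain
# OLS stands in for the paper's stepwise regression.
import numpy as np

rng = np.random.default_rng(1)
n_time, n_grid = 120, 50
forcings = rng.normal(size=(n_time, 3))      # e.g. GHG, solar, ENSO proxies
field = forcings @ rng.normal(size=(3, n_grid)) + rng.normal(size=(n_time, n_grid))

# Step 1: PCA via SVD of the centred field -> scores (time) x patterns (space)
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
scores, patterns = u * s, vt

# Step 2: regress each leading score on the forcings, keep the fitted part
k = 5
beta, *_ = np.linalg.lstsq(forcings, scores[:, :k], rcond=None)
fitted_scores = forcings @ beta              # forced part of each PC score

# Step 3: back-transform the fitted scores into a space-time signal field
signal_field = fitted_scores @ patterns[:k]
residual_field = anom - signal_field         # internal variability + noise
```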
First results on the production of Xi- and Anti-xi hyperons in Pb+Pb interactions at 40 A GeV are presented. The Anti-xi/Xi- ratio at midrapidity is studied as a function of collision centrality. The ratio shows no significant centrality dependence within statistical errors; it ranges from 0.07 to 0.15. The Anti-xi/Xi- ratio for central Pb+Pb collisions increases strongly with the collision energy.
German version: Expertise als soziale Institution: Die Internalisierung Dritter in den Vertrag [Expertise as a social institution: the internalization of third parties into the contract]. In: Gert Brüggemeier (ed.), Liber Amicorum Eike Schmidt. Müller, Heidelberg, 2005, 303-334.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
This paper focuses on the coordination of order and production policy between buyers and suppliers in supply chains. When a buyer and a supplier of an item work independently, the buyer will place orders based on his economic order quantity (EOQ). However, the buyer's EOQ may not lead to an optimal policy for the supplier. It can be shown that a cooperative batching policy can reduce total cost significantly. Should the buyer have the more powerful position and be able to enforce his EOQ on the supplier, then no incentive exists for him to deviate from his EOQ in order to choose a cooperative batching policy. To provide an incentive to order in quantities suitable to the supplier, the supplier could offer a side payment. One critical assumption made throughout the literature dealing with incentive schemes to influence the buyer's ordering policy is that the supplier has complete information regarding the buyer's cost structure. However, this assumption is far from realistic. As a consequence, the buyer has no incentive to report truthfully on his cost structure. Moreover, there is an incentive to overstate the total relevant cost in order to obtain as high a side payment as possible. This paper provides a bargaining model with asymmetric information about the buyer's cost structure, assuming that the buyer has the bargaining power to enforce his EOQ on the supplier in case of a breakdown in negotiations. An algorithm is presented for the determination of an optimal set of contracts which are specifically designed for the different cost structures of the buyer assumed by the supplier. This algorithm was implemented in a software application that supports the supplier in determining the optimal set of contracts.
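A small numerical illustration of the underlying lot-sizing conflict (all cost figures hypothetical): the buyer's EOQ minimizes his own cost, but a jointly chosen batch size lowers the chain-wide cost, which is what leaves room for the side payment discussed above.

```python
# Numerical illustration (hypothetical cost figures): the buyer's EOQ
# minimises his own cost, but a jointly chosen lot size cuts the chain's
# total cost, leaving room for a side payment that keeps the buyer whole.
import math

D = 1200.0              # annual demand (units)
A_b, h_b = 50.0, 4.0    # buyer: ordering cost per order, holding cost/unit/yr
A_s, h_s = 300.0, 1.5   # supplier: setup cost per batch, holding cost/unit/yr

def buyer_cost(q):    return A_b * D / q + h_b * q / 2
def supplier_cost(q): return A_s * D / q + h_s * q / 2
def total_cost(q):    return buyer_cost(q) + supplier_cost(q)

q_buyer = math.sqrt(2 * A_b * D / h_b)                   # buyer's EOQ
q_joint = math.sqrt(2 * (A_b + A_s) * D / (h_b + h_s))   # joint lot size

saving = total_cost(q_buyer) - total_cost(q_joint)       # chain-wide saving
side_payment = buyer_cost(q_joint) - buyer_cost(q_buyer) # buyer's extra cost
# The supplier's gain after fully compensating the buyer equals `saving`,
# which is positive because q_joint minimises the chain-wide total cost.
```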
We present a novel practical algorithm that, given a lattice basis b1, ..., bn, finds in O(n^2 (k/6)^(k/4)) average time a shorter vector than b1, provided that b1 is (k/6)^(n/(2k)) times longer than the length of the shortest nonzero lattice vector. We assume that the given basis b1, ..., bn has an orthogonal basis that is typical for worst-case lattice bases. The new reduction method samples short lattice vectors in high-dimensional sublattices and advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.
We enhance the security of Schnorr blind signatures against the novel one-more forgery of Schnorr [Sc01] and Wagner [W02], which is possible even if the discrete logarithm is hard to compute. We show two limitations of this attack. Firstly, replacing the group G by the s-fold direct product G^(×s) increases the work of the attack, for a given number of signer interactions, to the s-th power, while increasing the work of the blind signature protocol merely by a factor of s. Secondly, we bound the number of additional signatures per signer interaction that can be forged effectively. That fraction of the additional forged signatures can be made arbitrarily small.
Presentation at the Università di Pisa, Pisa, Italy, 3 July 2002; the conference on 'Irreversible Quantum Dynamics', the Abdus Salam ICTP, Trieste, Italy, 29 July - 2 August 2002; and the University of Natal, Pietermaritzburg, South Africa, 14 May 2003. Version of 24 April 2003: examples added; 16 December 2002: revised; 12 September 2002. See the corresponding papers "Zeno Dynamics of von Neumann Algebras", "Zeno Dynamics in Quantum Statistical Mechanics" and "Mathematics of the Quantum Zeno Effect"
Introduction: This open label, multicentre study was conducted to assess the times to offset of the pharmacodynamic effects and the safety of remifentanil in patients with varying degrees of renal impairment requiring intensive care.
Methods: A total of 40 patients, who were aged 18 years or older and had normal/mildly impaired renal function (estimated creatinine clearance ≥ 50 ml/min; n = 10) or moderate/severe renal impairment (estimated creatinine clearance <50 ml/min; n = 30), were entered into the study. Remifentanil was infused for up to 72 hours (initial rate 6–9 μg/kg per hour), with propofol administered if required, to achieve a target Sedation–Agitation Scale score of 2–4, with no or mild pain.
Results: There was no evidence of increased offset time with increased duration of exposure to remifentanil in either group. The times to offset of the effects of remifentanil (at 8, 24, 48 and 72 hours during scheduled down-titrations of the infusion) were more variable and statistically significantly longer in the moderate/severe group than in the normal/mild group at 24 hours and 72 hours. These observed differences were not clinically significant (the difference in mean offset at 72 hours was only 16.5 min). Propofol consumption was lower with the remifentanil-based technique than with hypnotic-based sedative techniques. There were no statistically significant differences between the renal function groups in the incidence of adverse events, and no deaths were attributable to remifentanil use.
Conclusion: Remifentanil was well tolerated, and the offset of pharmacodynamic effects was not prolonged either as a result of renal dysfunction or prolonged infusion up to 72 hours.
The study of organisms with restricted dispersal abilities and a presence in the fossil record is particularly well suited to understanding the impact of climate changes on the distribution and genetic structure of species. Trochoidea geyeri (Soós 1926) is a land snail restricted to a patchy, insular distribution in Germany and France. Fossil evidence suggests that current populations of T. geyeri are relicts of a much more widespread distribution during more favourable climatic periods in the Pleistocene. Results: Phylogeographic analysis of the mitochondrial 16S rDNA and nuclear ITS-1 sequence variation was used to infer the history of the remnant populations of T. geyeri. Nested clade analysis for both loci suggested that the origin of the species is in the Provence, from where it expanded its range first to Southwest France and subsequently from there to Germany. Estimated divergence times predating the last glacial maximum (25–17 ka) implied that the colonization of the northern part of the current species range occurred during the Pleistocene. Conclusion: We conclude that T. geyeri could quite successfully persist in cryptic refugia during major climatic changes in the past, despite a restricted capacity of individuals to actively avoid unfavourable conditions.
We present a method for the construction of a Krein space completion for spaces of test functions, equipped with an indefinite inner product induced by a kernel which is more singular than a distribution of finite order. This generalizes a regularization method for infrared singularities in quantum field theory, introduced by G. Morchio and F. Strocchi, to the case of singularities of infinite order. We give conditions for the possibility of this procedure in terms of local differential operators and the Gelfand-Shilov test function spaces, as well as an abstract sufficient condition. As a model case we construct a maximally positive definite state space for the Heisenberg algebra in the presence of an infinite infrared singularity. See the corresponding paper: Schmidt, Andreas U.: "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and the presentation "Infinite Infrared Regularization in Krein Spaces"
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
This paper proposes an intertemporal model of venture capital investment with screening and advising, where the venture capitalist's time endowment is the scarce input factor. Screening improves the selection of firms receiving finance; advising allows firms to develop a marketable product; both have a variable intensity. In our setup, optimal linear contracts solve the moral hazard problem. Screening, however, requires an entrepreneur wage and does not allow for upfront payments, which would cause severe adverse selection. Project characteristics have implications for screening and advising intensity and the distribution of profits. Finally, we develop a formal version of the "venture capital cycle" by extending the basic setup to a simple model of venture capital supply and demand.
This paper analyses the effects of the Initial Public Offering (IPO) market on real investment decisions in emerging industries. We first propose a model of IPO timing based on divergence of opinion among investors and short-sale constraints. Using a real option approach, we show that firms are more likely to go public when the ratio of overvaluation to profits is high, that is, after stock market run-ups. Because initial returns increase with the demand from optimistic investors at the time of the offer, the model provides an explanation for the observed positive causality between average initial returns and IPO volume. Second, we discuss the possibility of real overinvestment in high-tech industries. We claim that investing in the industry gives agents an option to sell the project on the stock market at an overvalued price, thereby enabling the financing of positive-NPV projects which would not be undertaken otherwise. It is shown, however, that the IPO market can also lead to overinvestment in new industries. Finally, we present some econometric results supporting the idea that funds committed to the financing of high-tech industries may respond positively to optimistic stock market valuations.
Equal size, equal role? Interest rate interdependence between the Euro area and the United States
(2003)
This paper investigates whether the degree and the nature of economic and monetary policy interdependence between the United States and the euro area have changed with the advent of EMU. Using real-time data, it addresses this issue from the perspective of financial markets by analysing the effects of monetary policy announcements and macroeconomic news on daily interest rates in the United States and the euro area. First, the paper finds that the interdependence of money markets has increased strongly around EMU. Although spillover effects from the United States to the euro area remain stronger than in the opposite direction, we present evidence that US markets have started reacting also to euro area developments since the onset of EMU. Second, beyond these general linkages, the paper finds that certain macroeconomic news about the US economy have a large and significant effect on euro area money markets, and that these effects have become stronger in recent years. Finally, we show that US macroeconomic news have become good leading indicators for economic developments in the euro area. This indicates that the higher money market interdependence between the United States and the euro area is at least partly explained by the increased real integration of the two economies in recent years.
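The announcement regressions described above boil down to projecting daily interest-rate changes on standardized news surprises; a stylized sketch with simulated placeholder data follows (the variable names and coefficients are illustrative, not the paper's real-time dataset).

```python
# Stylised version of the announcement regressions described above: daily
# interest-rate changes regressed on macro-news "surprises" (actual minus
# survey expectation, standardised). Data are simulated placeholders.
import numpy as np

rng = np.random.default_rng(2)
n = 500
surprise_us = rng.normal(size=n)   # e.g. standardised US payrolls surprise
surprise_ea = rng.normal(size=n)   # e.g. standardised euro-area survey surprise
d_rate_ea = 0.8 * surprise_us + 0.3 * surprise_ea + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), surprise_us, surprise_ea])
beta, *_ = np.linalg.lstsq(X, d_rate_ea, rcond=None)
resid = d_rate_ea - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(np.linalg.inv(X.T @ X)) * sigma2)
t_stats = beta / se  # a large t on surprise_us: US news moves euro-area rates
```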
Based on a broad set of regional aggregated and disaggregated consumer price index (CPI) data from major industrialized countries in Asia, North America and Europe, we examine the role that national borders play for goods market integration. In line with the existing literature, we find that intra-national markets are better integrated than international markets. Additionally, our results show that there is a large "ocean" effect, i.e., inter-continental markets are significantly more segmented than intra-continental markets. To examine the impact of the establishment of the European Monetary Union (EMU) on integration, we split our sample into a pre-EMU and an EMU sample. We find that border effects across EMU countries have declined by about 80% to 90% after 1999, whereas border estimates across non-EMU countries have remained basically unchanged. Since global factors have affected all countries in our sample similarly and major integration efforts across EMU countries were made before 1999, we suggest that most of the reduction in EMU border estimates has been "nominal". Panel unit root evidence shows that the observed large differences in integration across intra- and inter-continental markets remain valid in the long run. This finding implies that real factors are responsible for the documented segmentation across our sample countries.
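A stylized border-effect regression of this kind can be sketched as follows; the city-pair data, dummies and coefficients are simulated placeholders, not the paper's CPI panel.

```python
# Stylised border-effect regression: volatility of relative prices between
# location pairs regressed on log distance plus border and "ocean" dummies.
# Pair-level data here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 400
log_dist = rng.uniform(4, 9, size=n_pairs)
border = rng.integers(0, 2, size=n_pairs)          # 1 = pair straddles a border
ocean = border * rng.integers(0, 2, size=n_pairs)  # 1 = pair straddles an ocean
vol = (0.02 * log_dist + 0.10 * border + 0.15 * ocean
       + rng.normal(scale=0.05, size=n_pairs))

X = np.column_stack([np.ones(n_pairs), log_dist, border, ocean])
beta, *_ = np.linalg.lstsq(X, vol, rcond=None)
# beta[2] is the within-continent border effect; beta[3] the extra "ocean"
# segmentation; a fall in beta[2] for EMU pairs after 1999 mirrors the paper.
```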
We estimate a Bayesian vector autoregression for the U.K. with drifting coefficients and stochastic volatilities. We use it to characterize posterior densities for several objects that are useful for designing and evaluating monetary policy, including local approximations to the mean, persistence, and volatility of inflation. We present diverse sources of uncertainty that impinge on the posterior predictive density for inflation, including model uncertainty, policy drift, structural shifts and other shocks. We use a recently developed minimum entropy method to bring outside information to bear on inflation forecasts. We compare our predictive densities with the Bank of England's fan charts.
We show that diverse beliefs are an important propagation mechanism of fluctuations, money non-neutrality and the efficacy of monetary policy. Since expectations affect demand, our theory shows that economic fluctuations are mostly driven by varying demand, not by supply shocks. Using a competitive model with flexible prices in which agents hold Rational Beliefs (see Kurz (1994)) we show that (i) our economy replicates well the empirical record of fluctuations in the U.S.; (ii) under monetary rules without discretion, monetary policy has a strong stabilization effect and an aggressive anti-inflationary policy can reduce inflation volatility to zero; (iii) the statistical Phillips Curve changes substantially with policy instruments, and activist policy rules render it vertical; (iv) although prices are flexible, money shocks result in less than proportional changes in inflation, hence the aggregate price level appears "sticky" with respect to money shocks; (v) discretion in monetary policy adds a random element to policy and increases volatility. The impact of discretion on the efficacy of policy depends upon the structure of market beliefs about future discretionary decisions. We study two rationalizable beliefs. In one case, market beliefs weaken the effect of policy; in the second, beliefs bolster policy outcomes, and discretion could be a desirable attribute of the policy rule. Since the central bank does not know any more than the private sector, real social gains from discretion arise only in extraordinary cases. Hence, the weight of the argument leads us to conclude that the bank's policy should be transparent and abandon discretion except for rare and unusual circumstances. (vi) An implication of our model suggests that the current effective policy is only mildly activist and aims mostly to target inflation.
Permanent and transitory policy shocks in an empirical macro model with asymmetric information
(2003)
Despite a large literature documenting that the efficacy of monetary policy depends on how inflation expectations are anchored, many monetary policy models assume: (1) the inflation target of monetary policy is constant; and, (2) the inflation target is known by all economic agents. This paper proposes an empirical specification with two policy shocks: permanent changes to the inflation target and transitory perturbations of the short-term real rate. The public sector cannot correctly distinguish between these two shocks and, under incomplete learning, private perceptions of the inflation target will not equal the true target. The paper shows how imperfect policy credibility can affect economic responses to structural shocks, including transition to a new inflation target - a question that cannot be addressed by many commonly used empirical and theoretical models. In contrast to models where all monetary policy actions are transient, the proposed specification implies that sizable movements in historical bond yields and inflation are attributable to perceptions of permanent shocks in target inflation.
This paper investigates the role that imperfect knowledge about the structure of the economy plays in the formation of expectations, macroeconomic dynamics, and the efficient formulation of monetary policy. Economic agents rely on an adaptive learning technology to form expectations and to update continuously their beliefs regarding the dynamic structure of the economy based on incoming data. The process of perpetual learning introduces an additional layer of dynamic interaction between monetary policy and economic outcomes. We find that policies that would be efficient under rational expectations can perform poorly when knowledge is imperfect. In particular, policies that fail to maintain tight control over inflation are prone to episodes in which the public's expectations of inflation become uncoupled from the policy objective and stagflation results, in a pattern similar to that experienced in the United States during the 1970s. Our results highlight the value of effective communication of a central bank's inflation objective and of continued vigilance against inflation in anchoring inflation expectations and fostering macroeconomic stability. July 2003.
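Perpetual learning of this kind is usually formalized as constant-gain recursive least squares, in which old data are discounted so beliefs keep drifting; a minimal sketch on an illustrative AR(1) inflation process follows (the gain, the process and all coefficients are placeholders, not the paper's model).

```python
# Minimal constant-gain recursive-least-squares learner: agents re-estimate
# a perceived law of motion each period, discounting old data, so beliefs
# can drift away from the policy objective. Illustrative AR(1) setup.
import numpy as np

def constant_gain_rls(y, x, gain=0.02):
    """Update beliefs b_t in y_t ~ x_t'b recursively with constant gain."""
    k = x.shape[1]
    b = np.zeros(k)
    r = np.eye(k)                        # running moment-matrix estimate
    beliefs = np.empty((len(y), k))
    for t in range(len(y)):
        r += gain * (np.outer(x[t], x[t]) - r)
        b += gain * np.linalg.solve(r, x[t] * (y[t] - x[t] @ b))
        beliefs[t] = b
    return beliefs

rng = np.random.default_rng(4)
T = 300
infl = np.zeros(T)
for t in range(1, T):                    # true process: persistent inflation
    infl[t] = 0.9 * infl[t - 1] + rng.normal(scale=0.3)
X = np.column_stack([np.ones(T - 1), infl[:-1]])
path = constant_gain_rls(infl[1:], X)    # drifting intercept/persistence beliefs
```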
Monetary policy is sometimes formulated in terms of a target level of inflation, a fixed time horizon and a constant interest rate that is anticipated to achieve the target at the specified horizon. These requirements lead to constant interest rate (CIR) instrument rules. Using the standard New Keynesian model, it is shown that some forms of CIR policy lead to both indeterminacy of equilibria and instability under adaptive learning. However, some other forms of CIR policy perform better. We also examine the properties of the different policy rules in the presence of inertial demand and price behaviour.
Escapist policy rules
(2003)
We study a simple, microfounded macroeconomic system in which the monetary authority employs a Taylor-type policy rule. We analyze situations in which the self-confirming equilibrium is unique and learnable according to Bullard and Mitra (2002). We explore the prospects for the use of 'large deviation' theory in this context, as employed by Sargent (1999) and Cho, Williams, and Sargent (2002). We show that our system can sometimes depart from the self-confirming equilibrium towards a non-equilibrium outcome characterized by persistently low nominal interest rates and persistently low inflation. Thus we generate events that have some of the properties of "liquidity traps" observed in the data, even though the policymaker remains committed to a Taylor-type policy rule which otherwise has desirable stabilization properties.
The development of tractable forward-looking models of monetary policy has led to an explosion of research on the implications of adopting Taylor-type interest rate rules. Indeterminacies have been found to arise for some specifications of the interest rate rule, raising the possibility of inefficient fluctuations due to the dependence of expectations on extraneous "sunspots". Separately, recent work by a number of authors has shown that sunspot equilibria previously thought to be unstable under private agent learning can in some cases be stable when the observed sunspot has a suitable time series structure. In this paper we generalize the "common factor" technique used in this analysis to examine standard monetary models that combine forward-looking expectations and predetermined variables. We consider a variety of specifications that incorporate both lagged and expected inflation in the Phillips Curve, and both expected inflation and inertial elements in the policy rule. We find that some policy rules can indeed lead to learnable sunspot solutions, and we investigate the conditions under which this phenomenon arises.
A financial system can only perform its function of channelling funds from savers to investors if it offers sufficient assurance to the providers of the funds that they will reap the rewards which have been promised to them. To the extent that this assurance is not provided by contracts alone, potential financiers will want to monitor and influence managerial decisions. This is why corporate governance is an essential part of any financial system. It is almost obvious that providers of equity have a genuine interest in the functioning of corporate governance. However, corporate governance encompasses more than investor protection. Similar considerations also apply to other stakeholders who invest their resources in a firm and whose expectations of later receiving an appropriate return on their investment also depend on decisions at the level of the individual firm which would be extremely difficult to anticipate and prescribe in a set of complete contingent contracts. Lenders, especially long-term lenders, are one such group of stakeholders who may also want to play a role in corporate governance; employees, especially those with high skill levels and firm-specific knowledge, are another. The German corporate governance system is different from that of the Anglo-Saxon countries because it foresees the possibility, and even the necessity, to integrate lenders and employees in the governance of large corporations. The German corporate governance system is generally regarded as the standard example of an insider-controlled and stakeholder-oriented system. Moreover, only a few years ago it was a consistent system in the sense of being composed of complementary elements which fit together well. The first objective of this paper is to show why and in which respect these characterisations were once appropriate. However, the past decade has seen a wave of developments in the German corporate governance system, which make it worthwhile and indeed necessary to investigate whether German corporate governance has recently changed in a fundamental way. More specifically one can ask which elements and features of German corporate governance have in fact changed, why they have changed and whether those changes which did occur constitute a structural change which would have converted the old insider-controlled system into an outsider-controlled and shareholder-oriented system and/or would have deprived it of its former consistency. It is the second purpose of this paper to answer these questions. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
A rapidly growing literature has documented important improvements in volatility measurement and forecasting performance through the use of realized volatilities constructed from high-frequency returns coupled with relatively simple reduced-form time series modeling procedures. Building on recent theoretical results from Barndorff-Nielsen and Shephard (2003c,d) for related bi-power variation measures involving the sum of high-frequency absolute returns, the present paper provides a practical framework for non-parametrically measuring the jump component in realized volatility measurements. Exploiting these ideas for a decade of high-frequency five-minute returns for the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find the jump component of the price process to be distinctly less persistent than the continuous sample path component. Explicitly including the jump measure as an additional explanatory variable in an easy-to-implement reduced form model for realized volatility results in highly significant jump coefficient estimates at the daily, weekly and quarterly forecast horizons. As such, our results hold promise for improved financial asset allocation, risk management, and derivatives pricing, by separate modeling, forecasting and pricing of the continuous and jump components of total return variability.
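The measurement idea is compact enough to state in a few lines: with intraday returns r_i, realized variance RV = Σ r_i² estimates total price variation, bipower variation BV = (π/2) Σ |r_i||r_{i-1}| is robust to jumps, and max(RV − BV, 0) isolates the jump component. A minimal sketch on simulated five-minute returns (the return process is a placeholder, not the paper's data):

```python
# Realized-variance / bipower-variation split: RV = sum r_i^2 picks up total
# variation, BV = (pi/2) * sum |r_i||r_{i-1}| is robust to jumps, so
# max(RV - BV, 0) estimates the daily jump component.
import numpy as np

def realized_measures(returns):
    rv = np.sum(returns ** 2)
    bv = (np.pi / 2) * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1]))
    jump = max(rv - bv, 0.0)
    return rv, bv, jump

rng = np.random.default_rng(5)
r = rng.normal(scale=0.0005, size=288)  # 288 five-minute returns in a day
r[100] += 0.01                          # inject one jump
rv, bv, jump = realized_measures(r)     # jump picks up most of the 0.01**2
```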
While focusing on the protection of distressed sovereigns, the current debate on reforming the International Financial Architecture has hardly addressed the protection of creditors' rights, which varies across governing laws. I suspect, however, that this constitutes an essential determinant of the success of the suggested solutions, especially under the contractual approach. Based on a sample of bonds issued by developing-country states in the period from January 1987 to December 1997, I find that, for given contract characteristics (e.g. listing markets and currency), the governing law is selected according to its ability to enforce repayment. However, although New York law seems looser and incurs larger enforcement costs than England & Wales law, the former permits equivalent yearly credit amounts. I interpret this as a consequence of the existence of a larger set of valuable assets (e.g. trade) in the US that constitute implicit securities. My findings yield important implications for the reforms. In particular, provided that there exists a seemingly equivalent enforcement credibility between England and New York laws, the prompt implementation of the contractual approach solution should constitute a valuable first step toward efficient sovereign debt markets. October 2003.
The paper offers an innovative contribution to the investigation of the pricing of banking liabilities contracted by sovereign agents. To address fundamental issues of banking, the study focuses on the determinants of up-front fees (the up-front fee is a charge paid out at the signature of the loan arrangement). The investigation is based on a uniquely extensive sample of bank loans contracted or guaranteed by 58 less-developed-country sovereigns in the period from 1983 to 1997. The well-detailed reports allow for the calculation of the equivalent yearly margin on the utilization period for each individual loan. The main findings suggest a significant impact of renegotiation and agency costs on front-end borrowing payments. Unlike the interest spread alone, the all-in interest margin better takes account of these costs. The model estimates, however, suggest that the non-linear pricing is hardly associated with an exogenous split-up intended by the borrower and his banker to cover up information. Instead, the up-front payment is a liquidity transfer, as described by Gorton and Kahn (2000), to compensate for renegotiation and monitoring costs. The second interesting result is that banks demand payment for all types of sovereign risk in the same manner as public debt holders do. The difference is that, unlike bond holders, bankers have the possibility to charge an up-front fee to compensate for renegotiation costs. Hence, beyond the information-related issues, the higher complexity of the pricing design makes bank loans optimal for lenders on sovereign capital markets, especially relative to public debt, thus motivating their presence. The paper contributes to the expanding literature on loan syndication and banking-related issues. The study also has relevance for the investigation of developing countries' debt pricing.
We present an analysis of VaR forecasts and P&L-series of all 13 German banks that used internal models for regulatory purposes in the year 2001. To this end, we introduce the notion of well-behaved forecast systems. Furthermore, we provide a series of statistical tools to perform our analyses. The results shed light on the forecast quality of VaR models of the individual banks, the regulator's portfolio as a whole, and the main ingredients of the computation of the regulatory capital required by the Basel rules.
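One standard ingredient of such backtests, though not necessarily among the exact tools used in the paper, is Kupiec's unconditional-coverage likelihood-ratio test on the number of days the loss exceeded the reported VaR; a minimal sketch with placeholder inputs:

```python
# Kupiec's proportion-of-failures (POF) test: an LR test that the observed
# VaR exception rate equals the nominal coverage. Inputs are placeholders.
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_days, n_exceptions, coverage=0.01):
    """LR test that the exception rate equals the nominal coverage."""
    p, x, n = coverage, n_exceptions, n_days
    phat = x / n
    # Log-likelihoods under the null rate (p) and the observed rate (phat)
    ll_null = x * np.log(p) + (n - x) * np.log(1 - p)
    ll_alt = x * np.log(phat) + (n - x) * np.log(1 - phat) if 0 < x < n else 0.0
    lr = -2 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)        # small p-value => reject the VaR model

lr, pval = kupiec_pof(n_days=250, n_exceptions=6, coverage=0.01)  # 99% VaR
```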
We estimate a model with latent factors that summarize the yield curve (namely, level, slope, and curvature) as well as observable macroeconomic variables (real activity, inflation, and the stance of monetary policy). Our goal is to provide a characterization of the dynamic interactions between the macroeconomy and the yield curve. We find strong evidence of the effects of macro variables on future movements in the yield curve and much weaker evidence for a reverse influence. We also relate our results to a traditional macroeconomic approach based on the expectations hypothesis.
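The three latent factors are commonly proxied directly from observed yields; a minimal sketch with placeholder yields (the maturities and empirical proxies are conventional choices, not necessarily the paper's estimation):

```python
# Common empirical proxies for the three latent yield-curve factors,
# computed from observed yields. The yield vector is a placeholder.
yields = {3: 1.0, 24: 1.8, 120: 3.0}  # maturity (months) -> yield (%)
level = yields[120]                   # long end proxies the level factor
slope = yields[120] - yields[3]       # long minus short: slope factor
curvature = 2 * yields[24] - yields[3] - yields[120]  # mid vs. the two ends
```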
Using the Johansen test for cointegration, we examine to what extent inflation rates in the Euro area have converged after the introduction of a single currency. Since the assumption of non-stationary variables represents the pivotal point in cointegration analyses, we pay special attention to the appropriate identification of non-stationary inflation rates by applying six different unit root tests. We compare two periods, the first ranging from 1993 to 1998 and the second from 1993 to 2002, with monthly observations. The Johansen test finds only partial convergence for the former period and no convergence for the latter.
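For reference, the Johansen trace test is available off the shelf in statsmodels; a minimal sketch with simulated stand-ins for the inflation series (the lag order and deterministic-term settings are illustrative):

```python
# Johansen procedure via statsmodels: compare the trace statistics with
# their critical values to count cointegrating relations among the series.
# Data below are simulated stand-ins for the inflation panel.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(6)
common = np.cumsum(rng.normal(size=240))    # one shared stochastic trend
infl = np.column_stack([common + rng.normal(size=240) for _ in range(3)])

res = coint_johansen(infl, det_order=0, k_ar_diff=2)
trace, crit = res.lr1, res.cvt              # stats vs 90/95/99% critical values
rank = int(np.sum(trace > crit[:, 1]))      # crude cointegration-rank count, 95%
```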
Financial markets are to a very large extent influenced by the advent of information. Such disclosures, however, do not only contain information about the fundamentals underlying the markets; they also serve as a focal point for the beliefs of market participants. This dual role of information gains further importance for explaining the development of asset valuations when taking into account that information may be perceived individually (private information) or may be commonly shared by all traders (public information). This study investigates the recently developed theoretical structures explaining the operating mechanism of the two types of information and emphasizes the empirical testability of, and differentiation between, the roles of private and public information. Concluding from a survey of experimental studies and our own econometric analyses, it is argued that public information most often dominates private information. This finding justifies central bankers' unease when disseminating news to the markets and argues against the recent trend of demanding full transparency both for financial institutions and for financial markets themselves.
The paper describes the legal and economic environment of mergers and acquisitions in Germany and explores barriers to obtaining and executing corporate control. Various cases are used to demonstrate that resistance by different stakeholders, including minority shareholders, organized labour and the government, may present powerful obstacles to takeovers in Germany. In spite of the overall convergence of European takeover and securities trading laws, Germany still shows many peculiarities that make its market for corporate control distinct from that of other countries. Concentrated share ownership, cross-shareholdings and pyramidal ownership structures are frequent barriers to acquiring majority stakes. Codetermination laws, the supervisory board structure and supermajority requirements for important corporate decisions limit the execution of control by majority shareholders. Bidders that disregard the German preference for consensual solutions and the specific balance of powers risk having their takeover attempt frustrated by opposing influence groups. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
This paper is a draft for the chapter "German banks and banking structure" of the forthcoming book "The German financial system" edited by J.P. Krahnen and R.H. Schmidt (Oxford University Press). As such, the paper starts out with a description of past and present structural features of the German banking industry. Given the presented empirical evidence it then argues that great care has to be taken when generalising structural trends from one financial system to another. Whilst conventional commercial banking is clearly in decline in the US, it is far from clear whether the dominance of banks in the German financial system has been significantly eroded over the last decades. We interpret the immense stability in intermediation ratios and financing patterns of firms between 1970 and 2000 as strong evidence for our view that the way in which and the extent to which German banks fulfil the central functions for the financial system are still consistent with the overall logic of the German financial system. In spite of the current dire business environment for financial intermediaries we do not expect the German financial system and its banking industry as an integral part of this system to converge to the institutional arrangements typical for a market-oriented financial system.
We present a survey on the role of initial public offerings (IPOs) and venture capital (VC) in Germany after the Second World War. Between 1945 and 1983 IPOs hardly played a role at all, and only a minor role thereafter. In addition, companies that chose an IPO were much older and larger than the average companies going public for the first time in the US or the UK. The level of IPO underpricing in Germany, in contrast, has not been fundamentally different from that in other countries. The picture for venture capital financing is not much different from that provided by IPOs in Germany. For a long time venture capital financing was hardly significant, particularly as a source of early-stage financing. The unprecedented boom on the Neuer Markt between 1997 and 2000, when many small venture-capital-financed firms entered the market, provides a striking contrast to the preceding era. However, by US standards, the levels of both IPO and venture capital activity remained rather low even in this boom phase. The extent to which recent developments will have a lasting impact on the financing of German firms, the level of IPO activity, and venture capital financing remains to be seen. At the time of writing, activity has come to a near standstill and the Neuer Markt has just been dissolved. The low number of IPOs and the fairly low volume of VC financing in Germany before the introduction of the Neuer Markt are a striking and much debated phenomenon. Understanding the reasons for these apparent peculiarities is vital to understanding the German financial system. The potential explanations that have been put forward range from differences in mentality to legal and institutional impediments and the availability of alternative sources of financing. Moreover, the recent literature discusses how interest groups may have benefited from and influenced the situation. These groups include politicians, unions/workers, managers/controlling owners of established firms as well as banks. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
We analyze the venture capitalist's decision on the timing of the IPO, the offer price and the fraction of shares he sells in the course of the IPO. A venture capitalist may decide to take a company public or to liquidate it after one or two financing periods. A longer participation by the venture capitalist in a firm (a later IPO) may increase its value while also increasing costs for the venture capitalist. Due to his active involvement, the venture capitalist knows the type of firm and the kind of project he finances before potential new investors do. This information asymmetry is resolved at the end of the second period. Under certain assumptions about the parameters and the structure of the model, we obtain a single equilibrium in which high-quality firms separate from low-quality firms. The latter are liquidated after the first period, while the former go public either after having been financed by the venture capitalist for two periods or after one financing period using a lock-up. Whether a strategy of one or two financing periods is chosen depends on the consulting intensity of the project and/or on the experience of the venture capitalist. In the separating equilibrium, the offer price corresponds to the true value of the firm. An earlier version of this paper appeared as: The Decision of Venture Capitalists on Timing and Extent of IPOs (ZEW Discussion Paper No. 03-12). This version July 2003.
Using a unique, hand-collected database of all venture-backed firms listed on Germany's Neuer Markt, we analyze the history of venture capital financing of these firms before the IPO and the behavior of venture capitalists at the IPO. We detect significant differences in the behavior and characteristics of German vs. foreign venture capital firms. The discrepancy in the investment and divestment strategies may be explained by the grandstanding phenomenon, the value-added hypothesis and certification issues. German venture capitalists are typically younger and smaller than their counterparts from abroad. They syndicate less. The sectoral structure of their portfolios differs from that of foreign venture capital firms. We also find that German venture capitalists typically take companies with lower offering volumes to the market. They usually finance firms at a later stage, carry through fewer investment rounds and take their portfolio firms public earlier. In companies where a German firm is the lead venture capitalist, the fraction of equity held by the group of venture capitalists is lower, their selling intensity at the IPO is higher and the committed lock-up period is longer.
This paper deals with the proposed use of sovereign credit ratings in the "Basel Accord on Capital Adequacy" (Basel II) and considers its potential effect on emerging markets financing. As a first attempt, it investigates the consequences of the planned revisions for the two central aspects of international bank credit flows: the impact on capital costs and the volatility of credit supply across the risk spectrum of borrowers. The empirical findings cast doubt on the usefulness of credit ratings in determining commercial banks' capital adequacy ratios, since the standardized approach to credit risk would lead to more divergence rather than convergence between investment-grade and speculative-grade borrowers. This conclusion is based on the lateness and cyclical determination of credit rating agencies' sovereign risk assessments and on the continuing incentives for short-term rather than long-term interbank lending ingrained in the proposed Basel II framework.
Do changes in sovereign credit ratings contribute to financial contagion in emerging market crises?
(2003)
Credit rating changes for long-term foreign currency debt may act as a wake-up call with upgrades and downgrades in one country affecting other financial markets within and across national borders. Such a potential (contagious) rating effect is likely to be stronger in emerging market economies, where institutional investors' problems of asymmetric information are more present. This empirical study complements earlier research by explicitly examining cross-security and cross-country contagious rating effects of credit rating agencies' sovereign risk assessments. In particular, the specific impact of sovereign rating changes during the financial turmoil in emerging markets in the latter half of the 1990s has been examined. The results indicate that sovereign rating changes in a ground-zero country have a (statistically) significant impact on the financial markets of other emerging market economies although the spillover effects tend to be regional.
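The cross-country analysis amounts to an event study around rating-change dates; a skeleton with simulated placeholder returns and event days (not the paper's actual data or test statistics):

```python
# Skeleton of a cross-country event study: average abnormal returns of other
# emerging markets in a window around a ground-zero country's rating change.
# Returns and event dates are simulated placeholders.
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(scale=0.01, size=(1000, 8))  # days x other-country markets
event_days = [200, 450, 700]                      # downgrade announcement days
window = np.arange(-5, 6)                         # t-5 .. t+5 around each event

aar = np.zeros(len(window))                       # average abnormal return
for d in event_days:
    aar += returns[d + window].mean(axis=1)
aar /= len(event_days)
caar = np.cumsum(aar)  # a drop after t=0 would indicate contagious spillover
```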
Accounting for financial instruments in the banking industry: conclusions from a simulation model
(2003)
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature.
Some of the most widely expressed myths about the German financial system concern the close ties and intensive interaction between banks and firms, often described as Hausbank relationships. Links between banks and firms include direct shareholdings, board representation, and proxy voting, and are particularly significant for corporate governance. Allegedly, these relationships promote investment and improve the performance of firms. Furthermore, German universal banks are believed to play a special role as large and informed monitoring investors (shareholders). However, for the very same reasons, German universal banks are frequently accused of abusing their influence on firms by extracting rents and sustaining the entrenchment of firms against efficient transfers of firm control. In this paper, we review recent empirical evidence regarding the special role of banks in the corporate governance of German firms. We differentiate between large exchange-listed firms and small and medium-sized companies throughout. With respect to the role of banks as monitoring investors, the evidence does not unanimously support a special role of banks for large firms. Only one study finds that banks' control of management goes beyond what non-bank shareholders achieve. Proxy-voting rights apparently do not provide a significant means for banks to exert management control. Most of the recent evidence regarding small firms suggests that a Hausbank relationship can indeed be beneficial. Hausbanks are more willing to sustain financing when borrower quality deteriorates, and they invest more often than arm's-length banks in workouts if borrowers face financial distress.
In Germany a public discussion on the "power of banks" has been going on for decades now, with power having at least two meanings. On the one hand, there is the power of banks to control public corporations through direct shareholdings or the exercise of proxy votes - this is the power of banks in corporate control. On the other hand, there is market power - due to imperfect competition in markets for financial services - that banks exercise vis-à-vis their loan and deposit customers. In the past, bank regulation has often been blamed for undermining competition and the working of market forces in the financial industry for the sake of the soundness and stability of financial services firms. This chapter tries to shed some light on the historical development and current state of bank regulation in Germany. In doing so, it tries to embed the analysis of bank regulation in a more general industrial organisation framework. For every regulated industry, competition and regulation are deeply interrelated, as most regulatory institutions - even if they do not explicitly address the competitiveness of the market - affect either market structure or conduct. This paper tries to uncover some of the specific relationships between monetary policy, government interference and bank regulation on the one hand and bank market structure and economic performance on the other. In doing so, we hope to point to several areas for fruitful research in the future. While our focus is on Germany, some of the questions that we raise and some of our insights might also be applicable to banking systems elsewhere. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
The experience in the period during and after the Asian crisis of 1997-98 has provoked an extensive debate about the credit rating agencies' evaluation of sovereign risk in emerging markets lending. This study analyzes the role of credit rating agencies in international financial markets, particularly whether sovereign credit ratings have an impact on financial stability in emerging market economies. The event study and panel regression results indicate that credit rating agencies have a substantial influence on the size and volatility of emerging markets lending. The empirical results are significantly stronger for governments' downgrades and imminent negative sovereign credit rating actions, such as credit watches and rating outlooks, than for positive adjustments by the credit rating agencies, while sovereign credit rating changes anticipated by market participants have a smaller impact on financial markets in emerging economies.
The German financial system is the archetype of a bank-dominated system. This implies that organized equity markets are, in some sense, underdeveloped. The purpose of this paper is, first, to describe the German equity market and, second, to analyze whether it is underdeveloped in any meaningful sense. In the descriptive part we provide a detailed account of the microstructure of the German equity market, putting special emphasis on recent developments. When comparing the German market with its peers, we find that it is indeed underdeveloped with respect to market capitalization. In terms of liquidity, on the other hand, the German equity market is not generally underdeveloped. It does, however, lack a liquid market for block trading. Classification: G51. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
This chapter analyzes the role of financial accounting in the German financial system. It starts from the common perception that German accounting is rather "uninformative". This characterization is appropriate from the perspective of an arm's-length or outside investor and when confined to the financial statements per se. But it is no longer accurate when a broader perspective is adopted. The German accounting system exhibits several arrangements that privately communicate information to insiders, notably the supervisory board. Due to these features, the key financing and contracting parties seem reasonably well informed. The same cannot be said about outside investors relying primarily on public disclosure. A descriptive analysis of the main elements of the German system and a survey of extant empirical accounting research generally support these arguments.
The paper explores factors that influence the design of financing contracts between venture capital investors and European venture capital funds. 122 Private Placement Memoranda and 46 Partnership Agreements are investigated with respect to the use of covenant restrictions and compensation schemes. The analysis focuses on the impact of two key factors: the reputation of VC funds and changes in the overall demand for venture capital services. We find that established funds are more severely restricted by contractual covenants. This contradicts the conventional wisdom which assumes that established market participants care more about their reputation, have less incentive to behave opportunistically and therefore need fewer covenant restrictions. We also find that managers of established funds are more often obliged to invest their own capital alongside investors' money. We interpret this as evidence that established funds actually have less reason to care about their reputation compared to young funds. One reason for this surprising result could be that managers of established VC funds are older and closer to retirement and therefore put less weight on the effects of their actions on future business opportunities. We also explore the effects of venture capital supply on contract design. Gompers and Lerner (1996) show that VC funds in the US are able to reduce the number of restrictive covenants in years with a high supply of venture capital and interpret this as a result of increased bargaining power of VC funds. We do not find similar evidence for Europe. Instead, we find that VC funds receive less base compensation and higher performance-related compensation in years with strong capital inflows into the VC industry. This may be interpreted as a signal of overconfidence: strong investor demand seems to coincide with overoptimistic expectations by fund managers, which make them willing to accept higher-powered incentive schemes.
Price stability and monetary policy effectiveness when nominal interest rates are bounded at zero
(2003)
This paper employs stochastic simulations of a small structural rational expectations model to investigate the consequences of the zero bound on nominal interest rates. We find that if the economy is subject to stochastic shocks similar in magnitude to those experienced in the U.S. over the 1980s and 1990s, the consequences of the zero bound are negligible for target inflation rates as low as 2 percent. However, the effects of the constraint are non-linear with respect to the inflation target and produce a quantitatively significant deterioration of the performance of the economy with targets between 0 and 1 percent. The variability of output increases significantly and that of inflation also rises somewhat. Also, we show that the asymmetry of the policy ineffectiveness induced by the zero bound generates a non-vertical long-run Phillips curve. Output falls increasingly short of potential with lower inflation targets.
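The mechanics of such an exercise can be pictured with a few lines of code. The following is a minimal sketch, not the authors' estimated model: a backward-looking toy economy with a Taylor-type rule truncated at zero, showing how output-gap variability tends to rise as the inflation target approaches zero. All equations and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, r_star = 20_000, 2.0            # simulation length; equilibrium real rate (%)

def output_gap_std(pi_target):
    y, pi = 0.0, pi_target
    gaps = np.empty(T)
    for t in range(T):
        # Taylor-type rule, truncated at the zero lower bound
        i = max(0.0, r_star + pi + 0.5 * (pi - pi_target) + 0.5 * y)
        # Backward-looking IS curve and anchored Phillips curve (assumed forms)
        y = 0.8 * y - 0.3 * ((i - pi) - r_star) + rng.normal(0, 0.8)
        pi = pi_target + 0.8 * (pi - pi_target) + 0.2 * y + rng.normal(0, 0.5)
        gaps[t] = y
    return gaps.std()

for target in [0, 1, 2, 4]:
    print(f"inflation target {target}%: output-gap std = {output_gap_std(target):.2f}")
```

With low targets the truncation binds more often, so stabilization becomes asymmetric and output variability increases, which is the qualitative pattern the paper reports.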
We study optimal nominal demand policy in an economy with monopolistic competition and flexible prices when firms have imperfect common knowledge about the shocks hitting the economy. Parametrizing firms' information imperfections by a (Shannon) capacity parameter that constrains the amount of information flowing to each firm, we study how policy that minimizes a quadratic objective in output and prices depends on this parameter. When price-setting decisions of firms are strategic complements, optimal policy nominally accommodates mark-up shocks in the short run for a large range of capacity values. This finding is robust to the policy maker observing shocks imperfectly or being uncertain about firms' capacity parameter. With persistent mark-up shocks, accommodation may increase in the medium term but decreases in the long run, thereby generating a hump-shaped price response and a slow reduction in output. When prices are strategic substitutes, by contrast, policy tends to react restrictively to mark-up shocks. However, rational expectations equilibria may then fail to exist with small amounts of imperfect common knowledge.
In this study a regime switching approach is applied to estimate the chartist and fundamentalist (c&f) exchange rate model originally proposed by Frankel and Froot (1986). The c&f model is tested against alternative regime switching specifications using likelihood ratio tests. Nested atheoretical models like the popular segmented trends model suggested by Engel and Hamilton (1990) are rejected in favour of the multi-agent model. Moreover, the c&f regime switching model seems to describe the data much better than a competing regime switching GARCH(1,1) model. Finally, our findings proved relatively robust when the model was estimated in subsamples. The empirical results suggest that the model is able to explain daily DM/Dollar forward exchange rate dynamics from 1982 to 1998.
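To make the estimation machinery concrete, here is a minimal sketch of the Hamilton filter that underlies maximum likelihood estimation of Markov regime-switching models and the likelihood ratio tests mentioned above. It evaluates the log-likelihood of a generic two-regime Gaussian model on a placeholder return series; the regime parameters and transition matrix are assumptions, not the paper's c&f specification.

```python
import numpy as np

def hamilton_loglik(returns, mu, sigma, P):
    """mu, sigma: length-2 arrays of regime means/vols; P: 2x2 transition matrix."""
    # Start from the stationary distribution of the two-state Markov chain
    xi = np.array([P[1, 0], P[0, 1]])
    xi = xi / xi.sum()
    loglik = 0.0
    for r in returns:
        dens = np.exp(-0.5 * ((r - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        joint = (P.T @ xi) * dens          # predict regimes, weight by densities
        lik = joint.sum()
        loglik += np.log(lik)
        xi = joint / lik                   # filtered regime probabilities
    return loglik

rng = np.random.default_rng(1)
r = rng.normal(0, 1, 500)                  # placeholder return series
P = np.array([[0.95, 0.05], [0.10, 0.90]])
print(hamilton_loglik(r, mu=np.array([0.05, -0.05]),
                      sigma=np.array([0.5, 1.5]), P=P))
```

Maximizing this log-likelihood over the parameters, once for the restricted and once for the unrestricted specification, yields the likelihood ratio statistics used to compare nested models.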
We develop a behavioral exchange rate model with chartists and fundamentalists to study cyclical behavior in foreign exchange markets. Within our model, the market impact of fundamentalists depends on the strength of their belief in fundamental analysis. Estimation of a STAR GARCH model shows that the more the exchange rate deviates from its fundamental value, the more fundamentalists leave the market. In contrast to previous findings, our paper indicates that due to the nonlinear presence of fundamentalists, market stability decreases with increasing misalignments. A stabilization policy such as central bank interventions may help to deflate bubbles.
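The paper's key mechanism, fundamentalists leaving the market as misalignments grow, can be pictured with a smooth-transition weight of the kind used in STAR models. The logistic form and the parameters gamma and c below are illustrative assumptions, not the estimated specification.

```python
import numpy as np

def fundamentalist_weight(misalignment, gamma=2.0, c=1.0):
    # Weight falls from ~1 toward 0 as |s - f|, the gap between the
    # exchange rate s and its fundamental value f, grows past c
    return 1.0 / (1.0 + np.exp(gamma * (np.abs(misalignment) - c)))

for m in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"|s - f| = {m}: weight = {fundamentalist_weight(m):.3f}")
```

A declining weight of this shape is what generates the destabilizing property discussed above: the larger the misalignment, the smaller the stabilizing force pulling the rate back to fundamentals.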
In this paper we study the role of the exchange rate in conducting monetary policy in an economy with near-zero nominal interest rates as experienced in Japan since the mid-1990s. Our analysis is based on an estimated model of Japan, the United States and the euro area with rational expectations and nominal rigidities. First, we provide a quantitative analysis of the impact of the zero bound on the effectiveness of interest rate policy in Japan in terms of stabilizing output and inflation. Then we evaluate three concrete proposals that focus on depreciation of the currency as a way to ameliorate the effect of the zero bound and evade a potential liquidity trap. Finally, we investigate the international consequences of these proposals.
In this paper we estimate a small model of the euro area to be used as a laboratory for evaluating the performance of alternative monetary policy strategies. We start with the relationship between output and inflation and investigate the fit of the nominal wage contracting model due to Taylor (1980) and three different versions of the relative real wage contracting model proposed by Buiter and Jewitt (1981) and estimated by Fuhrer and Moore (1995a) for the United States. While Fuhrer and Moore reject the nominal contracting model in favor of the relative contracting model, which induces more inflation persistence, we find that both models fit euro area data reasonably well. When considering France, Germany and Italy separately, however, we find that the nominal contracting model fits German data better, while the relative contracting model does quite well in countries which transitioned out of a high inflation regime such as France and Italy. We close the model by estimating an aggregate demand relationship and investigate the consequences of the different wage contracting specifications for the inflation-output variability tradeoff, when interest rates are set according to Taylor's rule.
In this study, we perform a quantitative assessment of the role of money as an indicator variable for monetary policy in the euro area. We document the magnitude of revisions to euro area-wide data on output, prices, and money, and find that monetary aggregates have a potentially significant role in providing information about current real output. We then proceed to analyze the information content of money in a forward-looking model in which monetary policy is optimally determined subject to incomplete information about the true state of the economy. We show that monetary aggregates may have substantial information content in an environment with high variability of output measurement errors, low variability of money demand shocks, and a strong contemporaneous linkage between money demand and real output. As a practical matter, however, we conclude that money has fairly limited information content as an indicator of contemporaneous aggregate demand in the euro area.
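The signal-extraction logic behind this finding can be sketched in a few lines: two noisy indicators of the current output gap, a direct output measurement and a money-based signal, are combined with precision weights, so money's marginal value depends on the relative size of output measurement errors and money demand noise, as the abstract argues. All variances below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 10_000
y = rng.normal(0, 1.0, T)                  # latent "true" output gap
output_meas = y + rng.normal(0, 0.8, T)    # output data with measurement error
money_signal = y + rng.normal(0, 0.6, T)   # money-based indicator (money demand noise)

# Precision (inverse-variance) weights for the combined estimate
w_out, w_mon = 1 / 0.8**2, 1 / 0.6**2
combined = (w_out * output_meas + w_mon * money_signal) / (w_out + w_mon)

for name, est in [("output data only", output_meas), ("with money", combined)]:
    print(f"{name}: RMSE = {np.sqrt(np.mean((est - y) ** 2)):.3f}")
```

Raising the money demand noise relative to the output measurement error shrinks the money weight, which mirrors the paper's conclusion that money's practical information content in the euro area is limited.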
We investigate the performance of forecast-based monetary policy rules using five macroeconomic models that reflect a wide range of views on aggregate dynamics. We identify the key characteristics of rules that are robust to model uncertainty: such rules respond to the one-year-ahead inflation forecast and to the current output gap and incorporate a substantial degree of policy inertia. In contrast, rules with longer forecast horizons are less robust and are prone to generating indeterminacy. Finally, we identify a robust benchmark rule that performs very well in all five models over a wide range of policy preferences.
Inflation-targeting central banks have only imperfect knowledge about the effect of policy decisions on inflation. An important source of uncertainty is the relationship between inflation and unemployment. This paper studies optimal monetary policy in the presence of uncertainty about the natural unemployment rate, the short-run inflation-unemployment tradeoff and the degree of inflation persistence in a simple macroeconomic model, which incorporates rational learning by the central bank as well as private sector agents. Two conflicting motives drive the optimal policy. In the static version of the model, uncertainty provides a motive for the policymaker to move more cautiously than she would if she knew the true parameters. In the dynamic version, uncertainty also motivates an element of experimentation in policy. I find that the optimal policy that balances the cautionary and activist motives typically exhibits gradualism, that is, it remains less aggressive than a policy that disregards parameter uncertainty. Exceptions occur when uncertainty is very high and when inflation is close to target.
The use of GARCH models with stable Paretian innovations in financial modeling has recently been suggested in the literature. This class of processes is attractive because it allows for conditional skewness and leptokurtosis of financial returns without ruling out normality. This contribution illustrates their usefulness in predicting the downside risk of financial assets in the context of modeling foreign exchange rates and demonstrates their superiority over normal or Student's t GARCH models.
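As a rough illustration of the downside-risk comparison, the sketch below runs a hand-rolled GARCH(1,1) volatility filter on placeholder returns and computes a one-day 1% Value-at-Risk under normal and (variance-standardized) Student's t innovations; quantiles of a fitted stable Paretian law would require a stable-distribution library and are omitted here. All parameters are assumptions, not estimates from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
r = rng.standard_t(df=5, size=1000) * 0.7          # placeholder FX returns (%)

omega, alpha, beta = 0.05, 0.08, 0.90              # assumed GARCH(1,1) parameters
h = np.empty_like(r)
h[0] = r.var()
for t in range(1, len(r)):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]

sigma_today = np.sqrt(omega + alpha * r[-1] ** 2 + beta * h[-1])
for name, q in [("normal", stats.norm.ppf(0.01)),
                ("Student t(5)", stats.t.ppf(0.01, df=5) / np.sqrt(5 / 3))]:
    # The t quantile is rescaled to unit variance so the two are comparable
    print(f"1% one-day VaR ({name}): {q * sigma_today:.2f}%")
```

The heavier-tailed innovation distribution yields a larger VaR at the same conditional volatility, which is the essence of the downside-risk argument.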
Learning and equilibrium selection in a monetary overlapping generations model with sticky prices
(2003)
We study adaptive learning in a monetary overlapping generations model with sticky prices and monopolistic competition for the case where learning agents observe current endogenous variables. Observability of current variables is essential for informational consistency of the learning setup with the model setup, but it generates multiple temporary equilibria when prices are flexible and prevents a straightforward construction of the learning dynamics. Sticky prices overcome this problem by avoiding simultaneity between prices and price expectations. Adaptive learning then robustly selects the determinate (monetary) steady state, independently of the degree of imperfect competition. The indeterminate (non-monetary) steady state and non-stationary equilibria are never stable. Stability in a deterministic version of the model may differ because perfect foresight equilibria can be the limit of restricted perceptions equilibria of the stochastic economy with vanishing noise and thereby inherit different stability properties. This discontinuity at the zero variance of shocks suggests analyzing learning in stochastic models.
This paper compares Bayesian decision theory with robust decision theory, where the decision maker optimizes with respect to the worst state realization. For a class of robust decision problems there exists a sequence of Bayesian decision problems whose solutions converge towards the robust solution. It is shown that the limiting Bayesian problem displays infinite risk aversion and that decisions are insensitive (robust) to the precise assignment of prior probabilities. This holds independently of whether the preference for robustness is global or restricted to local perturbations around some reference model.
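A tiny numerical illustration of the convergence result: in a two-state, two-action problem, the Bayes action under an exponential (risk-sensitive) loss approaches the minimax action as the risk-aversion parameter grows. The loss matrix and prior below are arbitrary assumptions chosen only to exhibit the switch.

```python
import numpy as np

losses = np.array([[1.0, 9.0],     # action 0: loss in state A, state B
                   [4.0, 5.0]])    # action 1 (the minimax action here)
prior = np.array([0.8, 0.2])       # assumed prior over the two states

for risk_aversion in [0.1, 0.3, 1.0, 5.0]:
    # Expected exponential loss; higher risk aversion weights the worst state more
    scores = (prior * np.exp(risk_aversion * losses)).sum(axis=1)
    print(f"risk aversion {risk_aversion:>4}: Bayes action = {scores.argmin()}")
print(f"minimax action = {losses.max(axis=1).argmin()}")
```

At low risk aversion the prior dominates and action 0 is chosen; as risk aversion grows, the worst-case loss dominates and the Bayes choice coincides with the minimax action, echoing the paper's limiting argument.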
This paper considers a sticky price model with a cash-in-advance constraint where agents forecast inflation rates with the help of econometric models. Agents use least squares learning to estimate two competing models of which one is consistent with rational expectations once learning is complete. When past performance governs the choice of forecast model, agents may prefer to use the inconsistent forecast model, which generates an equilibrium where forecasts are inefficient. While average output and inflation result the same as under rational expectations, higher moments differ substantially: output and inflation show persistence, inflation responds sluggishly to nominal disturbances, and the dynamic correlations of output and inflation match U.S. data surprisingly well.
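The learning scheme such papers build on can be sketched with the standard recursive least squares updating equations from the adaptive learning literature: an agent re-estimates the coefficient of a simple AR(1) forecast model as data arrive. The data-generating process and gain sequence below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
b_true, T = 0.7, 2000
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = b_true * pi[t - 1] + rng.normal(0, 0.5)

b_hat, R = 0.0, 1.0                    # initial belief and moment matrix
for t in range(1, T):
    x = pi[t - 1]
    gain = 1.0 / (t + 1)               # decreasing gain: classical LS learning
    R += gain * (x * x - R)            # update second-moment estimate
    b_hat += gain * (x / R) * (pi[t] - b_hat * x)   # update coefficient belief

print(f"learned coefficient {b_hat:.3f} vs. true {b_true}")
```

With a decreasing gain the belief converges to the true coefficient; model choice based on past forecast performance, as in the paper, operates on top of recursions of exactly this kind.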
Over-allotment arrangements are nowadays part of almost any initial public offering. The underwriting banks borrow stocks from the previous shareholders to issue more than the initially announced number of shares. This is combined with the option to cover this short position at the issue price. We present empirical evidence on the value of these arrangements to the underwriters of initial public offerings on the Neuer Markt. The over-allotment arrangement is regarded as a portfolio of a long call option and a short position in a forward contract on the stock, which is different from other approaches presented in the literature. Given the economically substantial values for these option-like claims we try to identify benefits to previous shareholders or new investors when the company is using this instrument in the process of going public. Although we carefully control for potential endogeneity problems, we find virtually no evidence for a reduction in underpricing for firms using over-allotment arrangements. Furthermore, we do not find evidence for more pronounced price stabilization activities or better aftermarket performance for firms granting an over-allotment arrangement to the underwriting banks.
Why borrowers pay premiums to larger lenders : empirical evidence from sovereign syndicated loans
(2003)
All other terms being equal (e.g. seniority), syndicated loan contracts provide larger lending compensations (in percentage points) to institutions funding larger amounts. This paper explores empirically the motivation for such a price design on a sample of sovereign syndicated loans in the period 1990-1997. I find strong evidence that a larger premium is associated with higher renegotiation probability and information asymmetries. It hardly has any impact on the number of lenders though. This is consistent with the hypothesis that larger lenders act as main lenders, namely help reduce information asymmetries and provide services in situations of liquidity shortage. This constitutes new evidence of the existence of compensations for such unique services. Moreover, larger payment discrepancies are also associated with larger syndicated loan amounts. This provides further new evidence that larger borrowers bear additional borrowing costs.
We use consumer price data for 205 cities/regions in 21 countries to study PPP deviations before, during and after the major currency crises of the 1990s. We combine data from industrialized nations in North America (United States, Canada and Mexico), Europe (Germany, Italy, Spain and Portugal), Asia (Japan and South Korea), and Oceania (Australia and New Zealand) with corresponding data from emerging market economies in South America (Argentina, Bolivia, Brazil, Colombia) and Asia (India, Indonesia, Malaysia, Philippines, Taiwan, Thailand). By doing so, we confirm previous results that both distance and border explain a significant amount of relative price variation across different locations. We also find that currency attacks had major disintegration effects by considerably increasing these border effects and by raising within-country relative price dispersion in emerging market economies. These effects are found to be quite persistent, since relative price volatility across emerging markets today is still significantly larger than a decade ago.
We use consumer price data for 81 European cities (in Germany, Austria, Finland, Italy, Spain, Portugal and Switzerland) to study the impact of the introduction of the euro on goods market integration. Employing both aggregated and disaggregated consumer price index (CPI) data we confirm previous results which showed that the distance between European cities explains a significant amount of the variation in the prices of similar goods in different locations. We also find that the variation of relative prices is much higher for two cities located in different countries than for two equidistant cities in the same country. Under the EMU, the elimination of nominal exchange rate volatility has largely reduced these border effects, but distance and border still matter for intra-European relative price volatility.
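Both studies rest on regressions of relative-price volatility on distance and a border dummy. The sketch below reproduces that design on simulated city-pair data; the "true" coefficients are arbitrary assumptions, chosen only so the regression has something to recover.

```python
import numpy as np

rng = np.random.default_rng(5)
n_pairs = 2000
log_dist = rng.uniform(3, 8, n_pairs)            # log km between city pairs
border = rng.integers(0, 2, n_pairs).astype(float)
# Simulated relationship: both distance and a border raise volatility
vol = 0.5 + 0.10 * log_dist + 0.40 * border + rng.normal(0, 0.2, n_pairs)

X = np.column_stack([np.ones(n_pairs), log_dist, border])
coef, *_ = np.linalg.lstsq(X, vol, rcond=None)
print(f"intercept {coef[0]:.3f}, log-distance {coef[1]:.3f}, border {coef[2]:.3f}")
```

In the papers' terms, the border coefficient is the quantity of interest: the crisis results above correspond to that coefficient rising, the EMU results to it shrinking once nominal exchange rate volatility is eliminated.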
This paper analyzes a comprehensive data set of 160 non venture-backed, 79 venture-backed and 61 bridge financed companies going public at Germany's Neuer Markt between March 1997 and March 2002. I examine whether these three types of issues differ with regard to issuer characteristics, balance sheet data or offering characteristics. Moreover, this empirical study contributes to the underpricing literature by focusing on the complementary or rather competing role of venture capitalists and underwriters in certifying the quality of a company when going public. Companies backed by a prestigious venture capitalist and/or underwritten by a top bank are expected to show less underpricing at the initial public offering (IPO) due to a reduced ex-ante uncertainty. This analysis provides evidence to the contrary: VC-backed IPOs appear to be more underpriced than non VC-backed IPOs.
In contrast to the class A heat stress transcription factors (Hsfs) of plants, a considerable number of Hsfs assigned to classes B and C have no evident function as transcription activators on their own. In the course of my PhD work I showed that tomato HsfB1, a heat-stress-induced member of the class B Hsf family, is a novel type of transcriptional coactivator in plants. Together with class A Hsfs, e.g. tomato HsfA1, it plays an important role in efficient transcription initiation during heat stress by forming a type of enhanceosome on fragments of Hsp promoters. Characterization of the architecture of hsp promoters led to the identification of novel, complex heat stress element (HSE) clusters, which are required for optimal synergistic interactions of HsfA1 and HsfB1. In addition, HsfB1 showed synergistic activation of the expression of a subset of viral and house-keeping promoters. The CaMV35S promoter, the most widely used constitutive promoter, turned out to be the most interesting candidate to study this effect in detail, because for most house-keeping promoters tested during this study the activators responsible for constitutive expression are not known, whereas in the case of the CaMV35S promoter they are quite well known (the bZip proteins TGA1/2). These proteins belong to the acidic activators, similar to class A Hsfs. On heat stress inducible promoters, HsfA1 or other class A Hsfs are the synergistic partners of HsfB1, whereas on house-keeping or viral promoters HsfB1 shows synergistic transcriptional activation in cooperation with the promoter-specific acidic activators, e.g. with TGA proteins on the 35S promoter. In agreement with this, binding sites for HsfB1 were identified in both house-keeping and 35S promoters. This study suggests that HsfB1 acts in the maintenance of transcription of a subset of house-keeping and viral genes during heat stress. The coactivator function of HsfB1 depends on a single lysine residue in the GRGK motif in its CTD. Since this motif is highly conserved among histones as the acetylation motif, especially in histones H2A and H4, it was suggested that the GRGK motif acts as a recruitment motif and, together with the other acidic activator, is responsible for the corecruitment of a histone acetyl transferase (HAT). Therefore, the effect of mammalian CBP (a well-known HAT) and its plant ortholog (HAC1) on the stimulation of synergistic reporter gene activation obtained with HsfA1 and HsfB1 was tested. Both in plant and mammalian cells, CBP/HAC1 further stimulated the HsfA1/B1 synergistic effect. Corecruitment of HAC1 was proven by in vitro pull-down assays, in which the NTD of HAC1 interacted specifically with both HsfA1 and HsfB1. Formation of a ternary complex between HsfA1, HsfB1 and CBP/HAC1 was shown via coimmunoprecipitation and electrophoretic mobility shift assays (EMSA). In conclusion, my thesis presents a new model for transcriptional regulation during ongoing heat stress.
In an attempt to search for potential candidate molecules involved in the pathogenesis of endometriosis, a novel 2910 bp cDNA encoding a putative 411 amino acid protein, shrew-1, was discovered. By computational analysis it was predicted to be an integral membrane protein with an outside-in transmembrane domain, but no homology with any known protein or domain could be identified. Antibodies raised against the putative open-reading-frame peptide of shrew-1 labelled a protein of ca. 48 kDa in extracts of shrew-1 mRNA-positive tissues and also detected ectopically expressed shrew-1. In the course of my PhD work, I confirmed the prediction that shrew-1 is indeed a transmembrane protein by expressing epitope-tagged shrew-1 in epithelial cells and analysing the transfected cells by surface biotinylation and immunoblots. Additionally, I could show that shrew-1 is able to target to E-cadherin-mediated adherens junctions and interacts with the E-cadherin-catenin complex in polarised MCF7 and MDCK cells, but not with the N-cadherin-catenin complex in non-polarised epithelial cells. A direct interaction of shrew-1 with beta-catenin could be shown in an in vitro pull-down assay. From these data, it can be assumed that shrew-1 might play a role in the function and/or regulation of the dynamics of E-cadherin-mediated junctional complexes. In the next part of my thesis, I showed that stable overexpression of shrew-1 in normal MDCK cells causes changes in the morphology of the cells and turns them invasive. Furthermore, transcription by beta-catenin was activated in these MDCK cells stably overexpressing shrew-1. It was probably the imbalance of shrew-1 protein at the adherens junctions that led to the misregulation of adherens-junction-associated proteins, i.e. E-cadherin and beta-catenin. Caveolin-1 is another integral membrane protein that forms complexes with E-cadherin-beta-catenin complexes and also plays a role in the endocytosis of E-cadherin during junctional disruption. By immunofluorescence and biochemical studies, caveolin-1 was identified as another interacting partner of shrew-1. However, the functional relevance of this interaction is still not clear. In conclusion, it can be said that shrew-1 interacts with key players of invasion and metastasis, E-cadherin and caveolin-1, suggesting a possible role in these processes and making it an interesting candidate for unravelling other unknown mechanisms involved in the complex process of invasion.
In this paper we demonstrate how to relate the semantics given by the non-deterministic call-by-need calculus FUNDIO [SS03] to Haskell. After introducing new correct program transformations for FUNDIO, we translate the core language used in the Glasgow Haskell Compiler into the FUNDIO language, where the IO construct of FUNDIO corresponds to direct-call IO actions in Haskell. We sketch the investigations of [Sab03b], in which many of the program transformations performed by the compiler have been shown to be correct w.r.t. the FUNDIO semantics. This enabled us to obtain a FUNDIO-compatible Haskell compiler by turning off the not yet investigated transformations and the small set of incompatible transformations. With this compiler, Haskell programs that use the extension unsafePerformIO in arbitrary contexts can be compiled in a "safe" manner.
This paper proposes a non-standard way to combine lazy functional languages with I/O. In order to demonstrate the usefulness of the approach, a tiny lazy functional core language FUNDIO, which is also a call-by-need lambda calculus, is investigated. The syntax of FUNDIO has case, letrec, constructors and an IO interface; its operational semantics is described by small-step reductions. A contextual approximation and equivalence depending on the input-output behavior of normal order reduction sequences are defined and a context lemma is proved. This makes it possible to study a semantics of FUNDIO and its semantic properties. The paper demonstrates that the technique of complete reduction diagrams makes it possible to show that a considerable set of program transformations is correct. Several optimizations of evaluation are given, including strictness optimizations and an abstract machine, and shown to be correct w.r.t. contextual equivalence. Correctness of strictness optimizations also justifies correctness of parallel evaluation. Thus this calculus has the potential to integrate non-strict functional programming with a non-deterministic approach to input-output and also to provide a useful semantics for this combination. It is argued that monadic IO and unsafePerformIO can be combined in Haskell, and that the result is reliable if all reductions and transformations are correct w.r.t. the FUNDIO semantics. Of course, we do not address the typing problems that are involved in the usage of Haskell's unsafePerformIO. The semantics can also be used as a novel semantics for strict functional languages with IO, where the sequence of IOs is not fixed.
Revised Draft: January 2005, First Draft: December 8, 2004. The picture of dispersed, isolated and uninterested shareholders so graphically drawn by Adolf Berle and Gardiner Means in 1932 is for the most part no longer accurate in today's market, although their famous observations on the separation of control and ownership of public corporations remain true.
Taking shareholder protection seriously? : Corporate governance in the United States and Germany
(2003)
The attitude expressed by Carl Fuerstenberg, a leading German banker of his time, succinctly embodies one of the principal issues facing the large enterprise – the divergence of interest between the management of the firm and outside equity shareholders. Why do, or should, investors put some of their savings in the hands of others, to expend as they see fit, with no commitment to repayment or a return? The answers are far from simple, and involve a complex interaction among a number of legal rules, economic institutions and market forces. Yet crafting a viable response is essential to the functioning of a modern economy based upon technology with scale economies whose attainment is dependent on the creation of large firms.
With Council Regulation (EC) No. 1346/2000 of 29 May 2000 on insolvency proceedings, which came into effect on 31 May 2002, the European Union has introduced a legal framework for dealing with cross-border insolvency proceedings. In order to achieve the aim of improving the efficiency and effectiveness of insolvency proceedings having cross-border effects within the European Community, the provisions on jurisdiction, recognition and applicable law in this area are contained in a Regulation, a Community law measure which is binding and directly applicable in Member States. The goals of the Regulation, with its 47 articles, are to enable cross-border insolvency proceedings to operate efficiently and effectively, to provide for co-ordination of the measures to be taken with regard to the debtor's assets, and to avoid forum shopping. The Insolvency Regulation therefore provides rules for the international jurisdiction of a court in a Member State for the opening of insolvency proceedings, the (automatic) recognition of these proceedings in other Member States and the powers of the 'liquidator' in the other Member States. The Regulation also deals with important choice-of-law (or: private international law) provisions. The Regulation is directly applicable in the Member States for all insolvency proceedings opened after 31 May 2002.
Increasingly, alternative investments via hedge funds are gaining importance in Germany. Only recently was this subject taken up in the legal literature, which resulted in higher product transparency. However, German investment law and, particularly, the special segment of hedge funds is still a field dominated by practitioners. First, the present situation is outlined. In addition, a description of the current development is given, drawing on the practical knowledge of the author. Finally, the hedge fund regulation intended by the legislator for the beginning of 2004 is legally evaluated against this background.
In response to recent developments in the financial markets and the stunning growth of the hedge fund industry in the United States, policy makers, most notably the Securities and Exchange Commission (“SEC”), are turning their attention to the regulation, or lack thereof, of hedge funds. U.S. regulators have scrutinized the hedge fund industry on several occasions in the recent past without imposing substantial regulatory constraints. Will this time be any different? The focus of the regulators’ interest has shifted. Traditionally, they approached the hedge fund industry by focusing on systemic risk to and integrity of the financial markets. The current inquiry is almost exclusively driven by investor protection concerns. What has changed? First, since 2000, new kinds of investors have poured capital into hedge funds in the United States, facilitated by the “retailization” of hedge funds through the development of funds of hedge funds and the dismal performance of the stock market. Second, in a post-Enron era, regulators and policy makers are increasingly sensitive to investor protection concerns. On May 14 and 15, 2003, the SEC held for the first time a public roundtable discussion on the single topic of hedge funds. Among the investor protection concerns highlighted were: an increase in incidents of fraud, inadequate suitability determinations by brokers who market hedge fund interests to individual investors, conflicts of interest of managers who manage mutual funds and hedge funds side-by-side, a lack of transparency that hinders investors from making informed investment decisions, layering of fees, and unbounded discretion by managers in pricing private hedge fund securities. Although there has been discussion about imposing wide-ranging restrictions on hedge funds, such as reining in short selling, requiring disclosure of long/short positions and limiting leverage, such a response would be heavy-handed and probably unnecessary. The existing regulatory regime is largely adequate to address the most flagrant abuses. Moreover, as the hedge fund market further matures, it is likely that institutional investors will continue to weed out weak performers and mediocre or dishonest hedge fund managers. What is likely to emerge from the newest regulatory focus on investor protection is a measured response that would enhance the SEC’s enforcement and inspection authority, while leaving hedge funds’ inherent investment flexibility largely unfettered. A likely scenario, for example, might be a requirement that some, or possibly all, hedge fund sponsors register with the SEC as investment advisers. Today, most are exempt from registration, although more and more are registering to provide advice to public hedge funds and attract institutions. Registration would make it easier for the SEC to ferret out potential fraudsters in advance by reviewing the professional history of hedge fund operators, allow the SEC to bring administrative proceedings against hedge fund advisers for statutory violations and give the agency access to books and records that it does not have today. Other possible initiatives, including additional disclosure requirements for publicly offered hedge funds, are discussed below. This article addresses the question whether U.S. regulation of hedge funds is really taking a new direction. It (i) provides a brief overview of the current U.S.
regulatory scheme, from which hedge funds are generally exempt, (ii) describes recent events in the United States that have contributed to regulators’ anxiety, (iii) examines the investor protection rationale for hedge fund regulation and considers whether these concerns do, in fact, merit increased regulation of hedge funds at this time, and (iv) considers the likelihood and possible scope of a potential regulatory response, principally by the SEC.
In an ideal world all investment products, including hedge funds, would be marketable to all investors. In this ideal world, all investors would fully understand the nature of the products and would be able to make an informed choice whether to invest. Of course the ideal world does not exist – the retail investment market is characterised by asymmetries of information. Product providers know most about the products on offer (or at least they should do). Investment advisers often know rather less than the provider but much more than their retail customers. Providers and intermediary advisers are understandably motivated by the desire to sell their products. There is therefore a risk that investment products will be mis-sold by investment advisers or mis-bought by ill-informed investors. This asymmetry of information is dealt with in most countries through regulation. However, the regulatory response in different countries is not necessarily the same. There are various ways in which protections can be applied and it is important to understand that the cultural background and regulatory histories of countries flavour the way regulation has developed. This means (as will be explained in greater detail later) that some countries are better able than others to admit hedge funds to the retail sector. Following this Introduction, Section II looks at some key background issues. Section III then looks at some important questions raised by the retail hedge fund issue. Many of these are questions of balance. Balance lies at the heart of regulation of course – regulation must always balance the needs of investors with market efficiency. Understanding the “retail hedge fund” question requires particular attention to balance. Section IV then looks at the UK regime and how the FSA has answered the balance question. Section V offers some international perspectives. Section VI concludes. It will be seen that there is no obviously right answer to the question whether hedge fund products should be marketed to retail investors. Each regulator in each jurisdiction needs to make up its own mind on how to deal with the various issues and balances. It is evident, however, that internationally there is a move towards a greater variety of retail funds. There is nothing wrong with that, provided the regulators, and the retail customers they protect, understand sufficiently what sort of protection is, or is not, being offered in the regulatory regime.
While hedge funds have been around at least since the 1940s, it has only been in the last decade or so that they have attracted the widespread attention of investors, academics and regulators. Investors, mainly wealthy individuals but also increasingly institutional investors, are attracted to hedge funds because they promise high “absolute” returns -- high returns even when returns on mainstream asset classes like stocks and bonds are low or negative. This prospect, not surprisingly, has increased interest in hedge funds in recent years as returns on stocks have plummeted around the world, and as investors have sought alternative investment strategies to insulate them in the future from the kind of bear markets we are now experiencing. Government regulators, too, have become increasingly attentive to hedge funds, especially since the notorious collapse of the hedge fund Long-Term Capital Management (LTCM) in September 1998. Over the course of only a few months during the summer of 1998 LTCM lost billions of dollars because of failed investment strategies that were not well understood even by its own investors, let alone by its bankers and derivatives counterparties. LTCM had built up huge leverage both on and off the balance sheet, so that when its investments soured it was unable to meet the demands of creditors and derivatives counterparties. Had LTCM’s counterparties terminated and liquidated their positions with LTCM, the result could have been a severe liquidity shortage and sharp changes in asset prices, which many feared could have impaired the solvency of other financial institutions and destabilized financial markets generally. The Federal Reserve did not wait to see if this would happen. It intervened to organize an immediate (September 1998) creditor-bailout by LTCM’s largest creditors and derivatives counterparties, preventing the wholesale liquidation of LTCM’s positions. Over the course of the year that followed the bailout, the creditor committee charged with managing LTCM’s positions effected an orderly work-out and liquidation of LTCM’s positions. We will never know what would have happened had the Federal Reserve not intervened. In defending the Federal Reserve’s unusual actions in coming to the assistance of an unregulated financial institution like a hedge fund, William McDonough, the president of the Federal Reserve Bank of New York, stated that it was the Federal Reserve’s judgement that the “...abrupt and disorderly close-out of LTCM’s positions would pose unacceptable risks to the American economy. ... there was a likelihood that a number of credit and interest rate markets would experience extreme price moves and possibly cease to function for a period of one or more days and maybe longer. This would have caused a vicious cycle: a loss of investor confidence, leading to further liquidations of positions, and so on.” The near-collapse of LTCM galvanized regulators throughout the world to examine the operations of hedge funds to determine if they posed a risk to investors and to financial stability more generally.
Studies were undertaken by nearly every major central bank, regulatory agency, and international “regulatory” committee (such as the Basle Committee and IOSCO), and reports were issued by, among others, the President’s Working Group on Financial Markets, the United States General Accounting Office (GAO), the Counterparty Risk Management Policy Group, the Basle Committee on Banking Supervision, and the International Organization of Securities Commissions (IOSCO). Many of these studies concluded that there was a need for greater disclosure by hedge funds in order to increase transparency and enhance market discipline by creditors, derivatives counterparties and investors. In the fall of 1999, two bills directed at increasing hedge fund disclosure were introduced before the U.S. Congress (the “Hedge Fund Disclosure Act” [the “Baker Bill”] and the “Markey/Dorgan Bill”). But when the legislative firestorm sparked by the LTCM episode finally quieted, there was no new regulation of hedge funds. This paper provides an overview of the regulation of hedge funds and examines the key regulatory issues that now confront regulators throughout the world. In particular, two major issues are examined. First, whether hedge funds pose a systemic threat to the stability of financial markets, and, if so, whether additional government regulation would be useful. And second, whether existing regulation provides sufficient protection for hedge fund investors, and, if not, what additional regulation is needed.
Invited talk at the XXXIII International Symposium on Multiparticle Dynamics, Krakow, Poland, 5-11 Sept, 2003. 5 pages, 1 figure. Journal-ref: Acta Phys.Polon. B35 (2004) 23-28. We review the recent developments in microscopic transport calculations for two-particle correlations at low relative momenta in ultrarelativistic heavy ion collisions at RHIC.
Invited talk at the 7th International Conference on Strangeness in Quark Matter, SQM 2003, Atlantic Beach, North Carolina, USA, 12-17 Mar, 2003. 11 pages, 12 figures. Journal-ref: J.Phys. G30 (2004) S139-S150. We review recent developments in the field of microscopic transport model calculations for ultrarelativistic heavy ion collisions. In particular, we focus on strangeness production, for example the phi-meson and its role as a messenger of the early phase of the system evolution. Moreover, we discuss the important effects of the (soft) field properties on the multiparticle system. We outline some current problems of the models as well as possible solutions to them.
Despite the apparent stability of the wage bargaining institutions in West Germany, aggregate union membership has been declining dramatically since the early 1990s. However, aggregate gross membership numbers do not distinguish by employment status and it is impossible to disaggregate them sufficiently. This paper uses four waves of the German Socioeconomic Panel in 1985, 1989, 1993, and 1998 to perform a panel analysis of net union membership among employees. We estimate a correlated random effects probit model suggested in Chamberlain (1984) to take proper account of individual-specific effects. Our results suggest that at the individual level the propensity to be a union member has not changed considerably over time. Thus, the aggregate decline in membership is due to composition effects. We also use the estimates to predict net union density at the industry level based on the IAB employment subsample for the period 1985 to 1997. JEL Classification: J5.
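A minimal sketch of the Chamberlain-style correlated random effects probit on a simulated panel: the pooled probit is augmented with individual time-averages of the covariates, so that correlation between the covariates and the individual effects is absorbed. The data-generating process below is an assumption chosen purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n, T = 500, 4
alpha = rng.normal(0, 1, n)                          # individual effects
x = rng.normal(0, 1, (n, T)) + 0.5 * alpha[:, None]  # x correlated with alpha
y = (0.8 * x + alpha[:, None] + rng.normal(0, 1, (n, T)) > 0).astype(int)

x_flat, y_flat = x.ravel(), y.ravel()
x_mean = np.repeat(x.mean(axis=1), T)                # Chamberlain-style means
X = sm.add_constant(np.column_stack([x_flat, x_mean]))
res = sm.Probit(y_flat, X).fit(disp=0)
print(res.params)    # second entry: (scaled) effect of x, purged of alpha bias
```

Omitting the `x_mean` column would load the correlated individual effect onto the coefficient of interest; including it is exactly the device the abstract refers to.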
This paper is a draft of the chapter "German banks and banking structure" of the forthcoming book "The German Financial System". As such, the paper starts out with a description of past and present structural features of the German banking industry. Given the presented empirical evidence, it then argues that great care has to be taken when generalising structural trends from one financial system to another. Whilst conventional commercial banking is clearly in decline in the US, it is far from clear whether the dominance of banks in the German financial system has been significantly eroded over the last decades. We interpret the immense stability in intermediation ratios and financing patterns of firms between 1970 and 2000 as strong evidence for our view that the way in which and the extent to which German banks fulfil the central functions of the financial system are still consistent with the overall logic of the German financial system. In spite of the current dire business environment for financial intermediaries, we do not expect the German financial system, and its banking industry as an integral part of this system, to converge to the institutional arrangements typical of a market-oriented financial system. This Version: March 25, 2003
Taking shareholder protection seriously? : Corporate governance in the United States and Germany
(2003)
The paper undertakes a comparative study of the set of laws affecting corporate governance in the United States and Germany, and an evaluation of their design if one assumes that their objective were the protection of the interests of minority outside shareholders. The rationale for such an objective is reviewed in terms of agency cost theory, and then the institutions that serve to bound agency costs are examined and critiqued. In particular, there is discussion of the applicable legal rules in each country, the role of the board of directors, the functioning of the market for corporate control, and (briefly) the use of incentive compensation. The paper concludes with the authors' views on what taking shareholder protection seriously would require in each country's legal system.
This memorandum describes the approach of the U.S. Securities and Exchange Commission (the "SEC") in monitoring and, where appropriate, regulating the use of research reports by investment banking firms in connection with securities transactions. The memorandum addresses the historical system of regulation, which continues in large measure to apply. It also examines the new initiatives taken, following a number of prominent corporate, accounting and banking scandals and a significant decline in U.S. and international capital markets, to supplement the current system in what some have dubbed the "post-Enron era".
Recent empirical work shows that a better legal environment leads to lower expected rates of return in an international cross-section of countries. This paper investigates whether differences in firm-specific corporate governance also help to explain expected returns in a cross-section of firms within a single jurisdiction. Constructing a corporate governance rating (CGR) for German firms, we document a positive relationship between the CGR and firm value. In addition, there is strong evidence that expected returns are negatively correlated with the CGR if dividend yields and price-earnings ratios are used as proxies for the cost of capital. Most results are robust to endogeneity, with causation running from corporate governance practices to firm fundamentals. Finally, an investment strategy that bought high-CGR firms and shorted low-CGR firms would have earned abnormal returns of around 12 percent on an annual basis during the sample period. We rationalize the empirical evidence with lower agency costs and/or the removal of certain governance malfunctions for the high-CGR firms.
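The trading-strategy result can be illustrated schematically: sort firms on a governance rating, go long the top tercile and short the bottom, and compute the return spread. The simulated scores and returns below are placeholders, not the paper's data; the built-in relationship between rating and return is an assumption made so the sort has something to find.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
cgr = rng.uniform(0, 100, n)                        # hypothetical CGR scores
# Placeholder returns in which better governance earns more on average
ret = 0.04 + 0.0012 * cgr + rng.normal(0, 0.15, n)

hi, lo = np.quantile(cgr, [2 / 3, 1 / 3])
spread = ret[cgr >= hi].mean() - ret[cgr <= lo].mean()
print(f"long high-CGR / short low-CGR annual spread: {spread:.1%}")
```

In the paper the analogous spread, after risk adjustment, is the roughly 12 percent annual abnormal return quoted above.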
Remodeling of extracellular matrix (ECM) is an important physiologic feature of normal growth and development. In addition to this critical function in physiology, many diseases have been associated with an imbalance of ECM synthesis and degradation. In the kidney, dysregulation of ECM turnover can lead to interstitial fibrosis and glomerulosclerosis. The major physiologic regulators of ECM degradation in the glomerulus are the large family of zinc-dependent proteases collectively referred to as matrix metalloproteinases (MMPs). The tight regulation of most of these proteases is accomplished by different mechanisms, including the regulation of MMP gene expression, the processing and conversion of the inactive zymogen by other proteases such as serine proteases, and finally the inhibition of active MMPs by endogenous inhibitors of MMPs, denoted tissue inhibitors of metalloproteinases (TIMPs). Notably, MMP-9 has been shown to be critically involved in the dysregulation of ECM turnover associated with severe pathologic conditions such as rheumatoid arthritis or fibrosis of lung, skin and kidney. In the present work I searched for a possible modulation of MMP-9 expression and/or activity in glomerular mesangial cells, which are regarded as key players in many inflammatory and non-inflammatory glomerular diseases. I found that various structurally different PPARalpha agonists such as WY-14,643, LY-171883 and fibrates potently suppress cytokine-induced MMP-9 expression in renal MC. Furthermore, I demonstrate that the inhibition of MMP-9 expression by PPARalpha agonists was paralleled by a strong increase of cytokine-induced iNOS expression and subsequent NO formation, suggesting that PPARalpha-dependent effects on the MMP-9 expression level primarily result from alterations in NO production, which in turn reduces the MMP-9 mRNA half-life. Searching for the detailed mechanism of NO-dependent effects on MMP-9 mRNA stability, I found that NO, whether given from exogenous sources or endogenously produced, increases MMP-9 mRNA degradation by decreasing the expression of the mRNA-stabilizing factor HuR. Furthermore, I demonstrate a reduction in the RNA-binding capacity of HuR-containing complexes to MMP-9 ARE motifs in cells treated with NO. Since the reduction of HuR expression can be mimicked by the cGMP analog 8-Bromo-cGMP, I suggest that NO reduces the expression of HuR in a cGMP-dependent manner. Finally, I elucidated the modulatory effect of extracellular nucleotides, mainly ATP, on cytokine-triggered MMP-9 expression. Interestingly, I found that, in contrast to NO, gamma-S-ATP, the stable analog of ATP, potently amplifies IL-1beta-mediated MMP-9 expression. The increase in mRNA stability was paralleled by an increase in the nuclear-cytosolic shuttling of the mRNA-stabilizing factor HuR. Furthermore, I demonstrate an increase in the RNA-binding capacity of HuR-containing complexes to the 3'-UTR of MMP-9 by ATP. In summary, the data presented here may help to find new targets (posttranscriptional regulation) that could be used to manipulate or modulate the expression not only of MMP-9 but also of other genes regulated at the level of mRNA stability.
In this thesis the anti-proton to proton ratio in 197Au + 197Au collisions, measured at mid-rapidity at a center-of-mass energy of √s_NN = 200 GeV, is reported. The value was measured to be p̄/p = 0.81 ± 0.002 (stat) ± 0.05 (syst) in the 5% most central collisions. The ratio shows no dependence on rapidity in the range |y| < 0.5. Furthermore, no dependence on transverse momentum is observed within 0.4 < pT < 1.0 GeV/c. At higher pT, a slight drop in the ratio is observed. In the present analysis, the highest momentum considered is pT = 4.5 GeV/c, yielding p̄/p = 0.645 ± 0.005 (stat) ± 0.10 (syst); however, the systematic error is higher in this momentum range. A slight centrality dependence was observed, with a decrease from p̄/p = 0.83 ± 0.002 (stat) ± 0.05 (syst) for the most peripheral collisions (less than 80% central) to p̄/p = 0.78 ± 0.002 (stat) ± 0.05 (syst) for the 5% most central collisions. An estimate of the feed-down contributions from the decay of heavier strange baryons results in p̄/p = 0.77 ± 0.05 (syst). The measured ratio is ~12.5 times higher than at the highest SPS energy of √s_NN = 17.3 GeV and indicates an "almost net-baryon free" region at mid-rapidity. The asymmetry of protons and anti-protons may be explained by the contribution of valence quarks in a nucleus break-up picture. In such a scenario, the absolute value of the ratio and the fact that the ratio does not depend on rapidity (at mid-rapidity) are well reproduced. Fragmentation of quarks and anti-quarks into protons and anti-protons is assumed. An estimate of the ratio, when the feed-down correction is taken into consideration, agrees well with the prediction of a statistical model analysis at a temperature of T = 177 ± 7 MeV and a baryon chemical potential of μ_B = 29 ± 8 MeV. The temperature achieved is only slightly higher than at the top SPS energy, while the baryochemical potential is a factor of ~10 lower. As in the case of the SPS results, these parameters are close to the phase boundary of Figure 1.6. The measurement of the ratio at high transverse momentum was of special interest in this analysis, since at RHIC energies the cross section for hadrons at high transverse momentum is increased with respect to SPS energies. The weak dependence of the ratio on transverse momentum is well described by the non-perturbative quenched and baryon junction scenario (i.e. the Soft+Quench model), where baryon creation is enhanced by baryon junctions. In comparison, the ratio does not decrease within the considered momentum range as predicted by pQCD.
The production of interleukin-8 (IL-8), heme oxygenase-1 (HO-1) and vascular endothelial growth factor (VEGF) is attributed increasing importance in the regulation of the immune response in inflammation, infection and tumor growth. The aim of this work was to investigate the regulation of these mediators in vitro using the human colon carcinoma cell line DLD-1. The substance pyrrolidine dithiocarbamate (PDTC) not only enhances the tumor necrosis factor-alpha (TNF-alpha)-mediated release of IL-8 but also induces IL-8 secretion as a sole stimulus. Mutation analyses of the IL-8 promoter and electrophoretic mobility shift assays (EMSA) showed that activation of the transcription factor AP-1 (activator protein-1) and the binding activity of constitutively activated NF-kB in DLD-1 cells were strictly required for PDTC-induced IL-8 expression. Furthermore, in addition to IL-8, PDTC was able to enhance the expression of HO-1 and VEGF in DLD-1 cells. The induction of IL-8 by PDTC was not restricted to DLD-1 cells but was also observed in Caco-2 cells (likewise colon cancer cells) and in human mononuclear blood cells. The use of PDTC has recently been proposed in combination with cytostatic drugs for the treatment of various malignant tumors, among them colon cancer. From our experiments it can be concluded that the induction of IL-8, HO-1 and VEGF could adversely affect the therapeutic application of this substance, since all three of these factors promote tumor growth through proangiogenic effects. The expression of inducible nitric oxide synthase and the production of nitric oxide (NO) correlate with angiogenesis in various cancers, including melanoma, head and neck tumors and colon cancer. Since tumor-promoting functions of NO have been linked to increased angiogenesis, the effects of NO on the production of selected chemokines involved in the control of tumor growth were investigated. These chemokines include the proangiogenic IL-8 as well as the tumor-suppressive interferon-inducible protein-10 (IP-10) and the monokine induced by interferon-gamma (MIG). These chemokines are released by DLD-1 cells after stimulation with IL-1beta and interferon-gamma (IFN-gamma). Under these conditions, IL-8 release is mediated by IL-1beta alone, but not by IFN-gamma. In contrast to IL-8, the secretion of IP-10 and MIG depends on activation by IFN-gamma. The effects of NO were analyzed by incubating DLD-1 cells with the NO donor DETA-NO, which has a half-life of 16.5 h and thus simulates the effects of the endogenous NO synthase. Synthesis and release of IL-8 were strongly increased by NO treatment. In addition, the basal secretion of VEGF was significantly enhanced in cells exposed to the NO donor. This contrasts with the IL-1beta/IFN-gamma-induced production of IP-10 and MIG, both of which were suppressed by co-incubation with NO. Likewise, the IFN-gamma-dependent regulation of inducible nitric oxide synthase in DLD-1 cells was suppressed by NO. The present data complement previous studies linking NO to tumor angiogenesis and enhanced tumor growth. The NO-mediated induction of IL-8 and VEGF, as well as the reduction of IP-10 and MIG expression, could contribute to this phenomenon. Our studies support the hypothesis that specific inhibitors of iNOS could be of therapeutic benefit in human neoplasias.
Cytochrome c oxidase is the terminal enzyme in the respiratory chain of mitochondria and aerobic bacteria. This enzyme ultimately couples electron transfer from cytochrome c to an oxygen molecule with proton translocation across the inner mitochondrial and bacterial membrane. This reaction requires complicated chemical processes to occur at the catalytic site of the enzyme in coordination with proton translocation, the exact mechanism of which is not known at present. The mechanisms underlying oxygen activation, electron transfer and coupling of electron transfer to proton translocation are the main questions in the field of bioenergetics. The major goal of this work was to investigate the coupling of electron transfer and proton translocation in cytochrome c oxidase from Paracoccus denitrificans. Different theoretical approaches have been used to investigate the coupling of electron and proton transfer. This thesis presents an internal water prediction scheme in the enzyme and a molecular dynamics study of cytochrome c oxidase from Paracoccus denitrificans in the fully oxidized state, embedded in a fully hydrated dimyristoylphosphatidylcholine lipid bilayer membrane. Two parallel molecular dynamics simulations with different levels of protein hydration, 1.125 ns each in length, were carried out under conditions of constant temperature and pressure using three-dimensional periodic boundary conditions and full electrostatics to investigate the distribution and dynamics of water molecules and their corresponding hydrogen-bonded networks inside cytochrome c oxidase. The average number of solvent sites in the proton conducting K- and D- pathways was determined. The highly fluctuating hydrogen-bonded networks, combined with the significant diffusion of individual water molecules provide a basis for the transfer of protons in cytochrome c oxidase, therefore leading to a better understanding of the mechanism of proton pumping. The importance of the hydrogen bonding network and the possible coupling of local structural changes to larger scale changes in the cytochrome c oxidase during the catalytic cycle have been shown.
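As an illustration of the kind of analysis used to characterize internal water networks, the sketch below applies a purely geometric hydrogen-bond criterion (an O...O distance cutoff) to placeholder coordinates; a real trajectory analysis would read simulation frames and typically add an angle criterion as well.

```python
import numpy as np

rng = np.random.default_rng(8)
oxygens = rng.uniform(0, 30.0, (200, 3))        # placeholder O positions (Angstrom)
CUTOFF = 3.5                                    # commonly used O...O distance cutoff

# All pairwise O-O distances via broadcasting
diff = oxygens[:, None, :] - oxygens[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))
pairs = (dist < CUTOFF) & (dist > 0.0)          # exclude self-pairs
print(f"candidate hydrogen-bonded O-O pairs: {int(pairs.sum() / 2)}")
```

Repeating such a count frame by frame along the trajectory is what reveals the fluctuating hydrogen-bonded networks in the proton-conducting pathways described above.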
Although soils are undoubtedly a significant pool of organic carbon, their importance as a potential long-term sink for atmospheric carbon is by no means clear. Despite considerable scientific progress in recent years toward clarifying carbon dynamics in soils, open questions remain, in particular regarding the specific geochemical mechanisms responsible for the stabilization of organic carbon in soils. Against this background, a central aim of this dissertation is to investigate the concentration of organic carbon and nitrogen as well as the mineralogical composition in different soil types, in order to identify a possible influence of clay mineralogy, specific surface area and oxide concentration on the stabilization of organic matter. The results are intended to contribute to a better understanding of the mechanisms fixing organic matter in soils and to extend existing knowledge. To this end, five different soil profiles from Hesse with different mineralogical compositions were examined. To determine the effects of various physical and geochemical factors on the organic matter content of the investigated soils, the following parameters were analyzed: clay mineralogy; organic carbon and nitrogen concentrations; percent cation saturation; specific surface area; and dithionite- and oxalate-soluble contents of Fe, Al and Mn. Based on these parameters, further statistical analyses were carried out using the statistics software SPSS for Windows in order to uncover possible statistical relationships responsible for the stabilization of organic carbon in the soils considered. The results obtained in this dissertation show that the clay content and clay mineralogy of the investigated soils have only a limited influence on the stabilization of organic matter. It is further shown that the relationship between specific surface area and organic carbon concentration propagated in the literature is not applicable to all soils. The results indicate that the presence of amorphous iron and aluminum oxides is the most important factor for the fixation of organic material in the investigated soils. The higher concentrations of organic carbon in the finest fractions (fine silt and clay) of the profiles are mainly attributable to the fact that oxides are also found in these fractions. Clay minerals are accordingly of secondary importance, forming complexes with the oxides that can lead to the stabilization of organic matter. Overall, the results suggest that soils are not a suitable sink for the long-term storage of organic carbon. Although mechanisms such as the adsorption of organic matter onto oxides support the stabilization of organic material, they do not appear to be strong enough to bring about permanent storage of organic carbon.
A gene trap strategy was used to identify genes induced in hematopoietic cells undergoing apoptosis after growth factor withdrawal. IL-3-dependent survival of hematopoietic cells relies on a delicate balance between proliferation and apoptosis that is controlled by the availability of cytokines (Thompson, 1995; Iijima et al., 2002). From our previous gene trap results, we postulated that transcriptionally activated genes antagonistic to apoptosis might block or delay cell death (Wempe et al., 2001), causing cells to acquire carcinogenic behavior. The analysis aimed to better understand the outcome of the death program following IL-3 deprivation and to identify survival genes whose expression changes in a time-dependent manner. As described in chapter 4, two major conclusions emerge from the three separate experiments (gene trap, Atlas cDNA array and Affymetrix chips). First, 56% of the trapped genes up-regulated by IL-3 withdrawal (28 of 50) are directly related to cell death or survival. Second, unlike most array technologies, gene trapping selects only for transiently induced genes, independently of pre-existing steady-state mRNA levels. With respect to correlating these genes with potential carcinogenesis, pre-existing mRNA makes it difficult to describe the unique characteristics of deregulated genes in tumor tissue. In a joint project with Schering (Schering AG, Berlin), the genes of our GTSTs were examined. A first screen with a custom array asked whether the survival genes among our GTSTs are deregulated in various cancer cell lines, while a second screen with a Matched Tumor/Normal Array was used to test whether seven selected genes (ERK3, Plekha2, KIAA1140, PI4P5Ka/g, KIAA0740, KIAA1036 and PEST domains) are transformation-related in different tumor tissues. Twenty-six genes were identified as either induced or repressed in one or more cell lines. Genetic information is expressed in complex and ever-changing patterns throughout the life span of cells, and a description of these patterns and of how they relate to tissue-specific cancer is crucial for understanding the network of genetic interactions underlying normal development, disease and evolution. The development and progression of cancer is clearly a complex, time-dependent phenotype involving dozens of primary genes and hundreds of secondary modifier genes. One major conclusion emerges from the three separate experiments (gene trap, Affymetrix mouse chip and Matched Tumor/Normal Array): ERK3 could play a significant role in breast, stomach and uterus carcinogenesis, with tissue-specific regulation, and is an obvious putative survival gene in these tumor tissues. In breast tumors in particular, the seven-fold up-regulation was considerable, and activation of ERK3 could be a feature of breast tumors. My results imply that the unique deregulation of ERK3 is perhaps a major consequence of the transformation of normal cells into malignant cancer cells, although further analysis is required to determine whether an altered activity of associated survival genes is primarily responsible for carcinogenesis. However, unlike for all other known MAP kinases, no stimuli and no nuclear substrates of ERK3 have been reported. It will therefore first be necessary to determine the spectrum of substrates and to identify the proximal effectors of ERK3 in breast carcinoma cells.
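The tumor/normal comparisons above ultimately reduce to a fold-change screen. A minimal sketch, with a hypothetical expression table and an arbitrary two-fold threshold (neither taken from the thesis):

    import pandas as pd

    # Hypothetical input: one row per gene, with paired tumor and normal
    # array signals; file and column names are invented for illustration.
    expr = pd.read_csv("tumor_vs_normal.csv", index_col="gene")
    expr["fold_change"] = expr["tumor_signal"] / expr["normal_signal"]

    # Keep genes induced at least two-fold in tumor tissue (assumed cutoff).
    upregulated = expr[expr["fold_change"] >= 2.0]
    print(upregulated.sort_values("fold_change", ascending=False).head(10))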
Toothed whales are the only group of mammals that is comprehensively adapted to life in the water while using an active sonar system for orientation. Probably all toothed whale species produce sonic or ultrasonic clicks, whose echoes the animals assemble into a three-dimensional "acoustic image". In contrast to most other mammals, toothed whales produce these sounds in the nasal complex by a pneumatically driven mechanism. The larynx nevertheless plays an important role by generating the air pressure required in the nose. The results are interpreted with respect to the physical requirements of a biosonar in an aquatic environment. To fully capture the morphological characteristics (structure, shape, topography) of the organs in the heads of various toothed whale species, the heads were scanned by computed tomography and magnetic resonance imaging. The heads were then dissected macroscopically, and histological sections of tissue samples were prepared. Finally, the results were complemented by digital three-dimensional reconstructions. This study is based largely on the examination of harbour porpoises (Phocoena phocoena) and sperm whales (Physeter macrocephalus). For comparison, fetal and postnatal individuals of other toothed whale species were included, such as delphinids (Delphinus delphis, Stenella attenuata, Tursiops truncatus), river dolphins (Pontoporia blainvillei, Inia geoffrensis) and the pygmy sperm whale (Kogia breviceps). In general, the morphological data of this study confirm the unifying "phonic lips" hypothesis of sound production in toothed whales as formulated by Cranford, Amundin and Norris [J. Morphol. 228 (1996): 223-285]. This hypothesis describes a valve-like structure in the nasal passage, the so-called "monkey lips/dorsal bursae complex" (MLDB), as the sound generator. The pneumatic mechanism makes the two halves of the MLDB slap against each other, thereby generating the initial sound vibration in the tissue ("phonic lips"). This vibration is focused via the melon, a large fat body in the anterior nasal region of toothed whales, and transmitted into the surrounding water. The accessory nasal sacs and specialized skull and connective tissue structures may contribute to this focusing. Although the echolocation signals of harbour porpoises appear to be highly specialized, the correspondences in the topography and shape of the nasal structures, compared with dolphins and the river dolphins (Pontoporia and Inia), point to a very similar function of the nose with respect to the production and emission of echolocation sounds. There are, however, some anatomical peculiarities in the nasal complex of the harbour porpoise that could explain the particular pulse structure of its sonar signals; these are discussed in the dissertation. When the nasal morphology of sperm whales is compared with that of non-physeteroid toothed whales, the degree of asymmetry is particularly striking. In contrast to the mechanism described above for dolphins and porpoises, sperm whales drive sound production at the "monkey lips" with air that is pressurized in the right nasal passage (rather than in the nasopharyngeal space). In addition, changes in the air volume in the right nasal passage could control sound transmission between the fat bodies, and thus sound emission.
In this theoretical scenario, the wide right nasal passage acts as a kind of "acoustic barrier" that switches between two different modes of click production: the first mode, with an air-filled nasal passage, leads to the production of communication clicks ("coda clicks"), while the second mode leads to the emission of echolocation clicks when the nasal passage is collapsed. The central position and the nearly horizontal orientation of the right nasal passage in the head of sperm whales, as an interface (barrier) between the two large fat bodies, thus appear to be correlated with the mechanism of sound production under changing air volumes. These and other results of this dissertation suggest that the shape and extent of nasal asymmetry do not correlate with the systematic position of the respective species, but are determined by the particular type of sonar system as the expression of a specific ecological adaptation. In toothed whales, the larynx is characterized by a rostral elongation of the epiglottis and the two arytenoid cartilages, which together form a goosebeak-like tube that is encircled and held in position by a strong sphincter muscle. In this way, the respiratory tube is completely separated from the digestive tract. From an anatomical perspective, it is likely that sound generation in toothed whales is driven by a piston-like movement of the larynx towards the choanae, which generates the air pressure in the nasal region. The contraction of the sphincter muscle, acting as a muscular tube, probably provides the greatest force for this piston movement. However, the muscle groups suspending the larynx and the hyoid apparatus from the lower jaw and the skull base are likely to contribute significantly to the pressure increase.
The endothelin B receptor belongs to the family of rhodopsin-like G-protein coupled receptors. It plays an important role in vasodilatation and is found in the membranes of the endothelial cells lining blood vessels. In the course of this work, the production of recombinant human ETB receptor in yeast, insect and mammalian cells was evaluated. A number of different receptor constructs for production in the yeast P. pastoris were prepared. Various affinity tags were appended to the receptor N- and C-termini to enable receptor detection and purification. The clone pPIC9KFlagHisETBBio, with an expression level of 60 pmol/mg, yielded the highest amount of active receptor (1.2 mg of receptor per liter of shaking culture). The expression level of the same clone in fermentor culture was 17 pmol/mg, and from a 10 L fermentor it was possible to obtain 3 kg of cells containing 20-39 mg of the receptor. For receptor production in insect cells, Sf9 (S. frugiperda) suspension cells were infected with the recombinant baculovirus pVlMelFlagHisETBBio. The peak of receptor production was reached at 66 h post infection, and radioligand binding assays on insect cell membranes showed 30 pmol of active receptor per mg of membrane protein. Subsequently, the efficiency of different detergents in solubilizing the active receptor was evaluated. N-dodecyl-beta-D-maltoside (LM), lauryl sucrose and digitonin/cholate performed best, and LM was chosen for further work. The ETB receptor was also produced in mammalian cells using the Semliki Forest Virus expression system. Radioligand binding assays on membranes from CHO cells infected with the recombinant virus pSFV3CAPETBHis showed 7 pmol of active receptor per mg of membrane protein. Since the receptor yield from mammalian cells was much lower than from yeast and insect cells, this system was not used for further large-scale receptor production. After production in yeast and insect cells, the ETB receptor was saturated with its ligand, endothelin-1, in order to stabilize its native form. The receptor was subsequently solubilized with n-dodecyl-beta-D-maltoside and subjected to purification on various affinity matrices. Two-step affinity purification via Ni2+-NTA and monomeric avidin proved the most efficient way to purify milligram amounts of the receptor. The purity of the receptor preparation after this procedure was over 95%, as judged from silver-stained gels. However, the tendency of the ETB receptor produced in yeast to form aggregates was a constant problem. Attempts were made to stabilize the active, monomeric form of the receptor by testing a variety of different buffer conditions, but further efforts in this direction will be necessary to solve the aggregation problem. In contrast to preparations from yeast, purification of the ETB receptor produced in insect cells yielded homogeneous receptor preparations, as shown by gel filtration analysis. This work has demonstrated that the amounts of receptor expressed in yeast and insect cells, and the final yield of receptor isolated by purification, represent a good basis for beginning 3D and continuing 2D crystallization trials.
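As a rough plausibility check on these binding-assay figures: converting pmol of receptor per mg of membrane protein into receptor mass requires only an assumed molecular weight. The ~50 kDa used below is a round illustrative figure, not a value from the thesis.

    # Back-of-envelope conversion from binding-assay units to receptor mass.
    # 1 pmol = 1e-12 mol, so mass [ug] = pmol * MW[kDa] * 1e-3.
    def receptor_ug_per_mg(pmol_per_mg: float, mw_kda: float = 50.0) -> float:
        """Micrograms of receptor per mg membrane protein (assumed MW)."""
        return pmol_per_mg * mw_kda * 1e-3

    for label, level in [("yeast shaking culture", 60), ("yeast fermentor", 17),
                         ("Sf9 insect cells", 30), ("CHO / SFV", 7)]:
        print(f"{label}: {receptor_ug_per_mg(level):.2f} ug per mg membrane protein")

At 60 pmol/mg and an assumed 50 kDa, this gives about 3 µg of receptor per mg of membrane protein, consistent in order of magnitude with milligram quantities per liter of culture.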
Hepatitis E virus (HEV) is a positive-stranded RNA virus with a capped and polyadenylated 7.2 kb genome. The virus is currently unclassified: the organisation of the genome resembles that of the Caliciviridae, but sequence analyses suggest that it is more closely related to the Togaviridae. HEV is an enterically transmitted virus that causes both epidemics and sporadic cases of acute hepatitis in many countries of Asia and Africa, but only rarely causes disease in more industrialised countries. Initially the virus was believed to have a limited geographical distribution; however, serological studies suggest that HEV may also be endemic in the United States and Europe, even though it infrequently causes overt disease in these countries. Many different animal species worldwide have recently been shown to carry antibodies to HEV, suggesting that hepatitis E may be zoonotic. Although two related strains have been experimentally transmitted between species, direct transmission from an animal to a human has not been documented. Our main objective in this study was to evaluate the suitability of currently available HEV antibody assays for use in low-endemicity areas such as Germany. Methods: We selected sera on the basis of at least borderline reactivity in the routinely used Abbott EIA. Most were tested as part of routine screening of long-term expatriates in endemic countries. The following assays (recombinant antigens: ORF2 and ORF3) were used: Abbott EIA, Genelabs ELISA, Mikrogen recomBlot and a prototype DSL ELISA. We observed a wide range of sensitivities (average 56.8%) and specificities (average 61.4%) among these assays. These results imply that the assays might be unreliable for the detection of HEV infection in areas where hepatitis E is not endemic. However, most anti-HEV assays have not been correlated with HEV RNA as determined by reverse transcription. Many of these unexpected results and discrepancies can be attributed to the following factors: (i) the choice and size of the HEV antigen; (ii) the duration of antibody persistence; (iii) cross-reactivity with different agents; (iv) geographically distinct virus strains; (v) the low sensitivity of the available assays; and (vi) infection with a non-pathogenic (possibly zoonotic) HEV strain. We therefore suggest that further studies will be required to improve the sensitivity and specificity of the commercial assays available on the market.
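For reference, the sensitivity and specificity quoted above follow the standard definitions. In the sketch below the true/false positive and negative counts are invented so as to reproduce the quoted averages; they are not the study's data.

    # Standard definitions behind the assay comparison; counts are invented.
    def sensitivity(tp: int, fn: int) -> float:
        return tp / (tp + fn)        # true positives / all truly infected

    def specificity(tn: int, fp: int) -> float:
        return tn / (tn + fp)        # true negatives / all truly uninfected

    print(f"sensitivity = {sensitivity(25, 19):.1%}")   # 56.8% with these counts
    print(f"specificity = {specificity(27, 17):.1%}")   # 61.4% with these counts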
The focus of this study was Celtic gold coins excavated from the Martberg, a Celtic oppidum and sanctuary occupied in the first century B.C. by a Celtic tribe known as the Treveri. These coins and a number of associated coinages were characterised in terms of their alloy compositions and their geochemical and isotopic signatures, so as to answer archaeological and numismatic questions about coinage development and metal sources. This required the development of analytical methods involving electron microprobe analysis (EPMA), laser ablation ICP-MS (LA-ICP-MS), solution multicollector ICP-MS (MC-ICP-MS) and LA-MC-ICP-MS. The alloy compositions (Au-Ag-Cu-Sn) were determined by EPMA on a small polished area on the edge of the coins. A large beam size, 50 µm in diameter, was used to overcome the extreme heterogeneity of these alloys; these analyses were shown to be representative of the bulk composition of the coins. The metallurgical development of the coinages was established: the earlier coinages followed a debasement trend, which was superseded by a trend of increasing copper at the expense of silver while gold contents remained stable. This change occurred with the appearance of the inscribed "POTTINA" coinage, Scheers 30/V. Two typologically different coinages, Scheers 16 and 18 ("Armorican types"), were found to have markedly different compositions that do not fit the trends described above. A flan for a gold coin, which may indicate the presence of a mint at the Martberg, was found to have a weight and composition identical to the Scheers 30/I coins, which preceded the majority of the coins found at the Martberg in the coin development chronology. The trace element analyses were made by LA-ICP-MS, using an Aridus desolvating nebuliser to introduce matrix-matched solution standards to calibrate the measurements, which were then normalised to 100%. Quantitative results were obtained for the following elements: Sc, Ti, Cr, Mn, Co, Ni, Cu, Zn, Se, Ru, Rh, Pd, Ag, Sb, Te, W, Ir, Pt, Pb and Bi. The remaining elements (V, Fe, Ga, Ge, As, Mo, Sn, Re, Os, Hg) remain problematic, as they produced incorrect standardisations, mainly owing to chemical effects in solution such as adsorption onto the beaker walls or oxidation. Changes in the sources of Au, Ag and Cu during the development of the coinages were observed through the variation of trace elements that correlate positively with the major components of the coin alloys. Changes in the Pt/Au ratios show that the Scheers 23 coins contain gold distinctly different from that of the later coinages, and that the Scheers 18 gold source was also different. Te/Ag ratios were used to show that the Sch. 23 coins also contained different silver, and some subgroups were observed among the Sch. 30/V coins. A major change in copper source is indicated by the sudden increase of Sb and Ni with the introduction of the Sch. 30/V (POTTINA) coins, which can be linked to a similar change in copper observed in the contemporary silver coinage, Sch. 55 (with a ring). Lead isotopic analyses were made by solution and laser ablation MC-ICP-MS. The laser technique proved to be in good agreement with the solution analyses, with precisions between 1 and 0.1‰ (per mil). The development of the laser method opens the way for easy and virtually non-destructive Pb isotopic determination on ancient gold coins. The results showed that Sch. 23 is very different from the following coinages; Sch. 16 and 18 are also different, forming their own group; and all the later "Eye" staters (Sch. 30/I-VI) lie on a mixing line controlled by the addition of copper from a Mediterranean source, probably Sardinia or Spain. An identification of the gold and silver sources should be possible with further analyses of the Sch. 23 and Rainbow Cup gold coins and the Sch. 54 and 55 silver coinages. Copper isotopic analyses were also made by solution and laser ablation MC-ICP-MS; both techniques require further development to produce more reproducible results. The results show an apparent trend towards more positive δ65Cu values for the later coinages, and the link between the copper used in the Sch. 30/V (POTTINA) coins and the silver Sch. 55 (with a ring) coins is likewise shown by similarly positive δ65Cu values. The full suite of analyses was also applied to samples of gold from the region, mostly "placer gold", i.e. alluvial gold found in rivers. It was found that when a study is restricted to a limited number of deposits or areas, it is possible to distinguish between deposits based on the concentrations of those elements least affected by transport-related alteration processes. These include the PGEs, owing to their refractory nature, and those elements usually present in high enough concentrations to remain relatively unaffected, e.g. Cu, Pb and Sb. Owing to the nature of the coin alloy, it is not possible to link the gold used in the coins studied here with specific gold deposits, as the large amounts of Ag and Cu added to the coin alloys have masked the Au signature. However, further Pb isotopic analyses of gold deposits should prove useful in determining the regions from which Celtic gold was derived.
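Two steps in the trace element work, the normalisation to 100% (closure) and the use of Pt/Au and Te/Ag as source tracers, can be made concrete with a minimal sketch; the concentrations below are invented examples, and the code illustrates the general procedure rather than the authors' own.

    # Sketch of the closure ("normalise to 100%") step and the provenance
    # ratios discussed above; the wt% values are invented for illustration.
    coin = {"Au": 45.0, "Ag": 38.0, "Cu": 14.0, "Sn": 1.0,
            "Pt": 0.012, "Te": 0.003}

    total = sum(coin.values())
    normalised = {el: 100.0 * c / total for el, c in coin.items()}

    # Ratios trace the metal sources independently of overall dilution:
    print("Pt/Au =", normalised["Pt"] / normalised["Au"])   # gold source tracer
    print("Te/Ag =", normalised["Te"] / normalised["Ag"])   # silver source tracer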