Strangeness enhancement is discussed as a feature specific to relativistic nuclear collisions which create a fireball of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter. The hadron gas at the instant of its formation captures conditions directly at the QCD phase boundary at top SPS and RHIC energy, chiefly the critical temperature and energy density.
Relativistic nucleus-nucleus collisions create a "fireball" of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic, perhaps coherent degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter. (v1: 19 Dec 2002; revised v2: 16 Jan 2003; revised v3: 14 May 2003.)
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In the second step, time series components, such as trends, are removed to obtain a deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It turns out that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons also show this increase, especially in autumn, in Germany an insignificant decrease in the summer and autumn seasons is found. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and the Weibull probability density function parameters, respectively.
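The first, threshold-counting approach can be sketched in a few lines; the monthly totals and the threshold below are made-up illustrations, not the gridded or station data used in the study:

```python
# Count threshold exceedances in the first and second halves of a
# monthly precipitation series (hypothetical data and threshold).

def count_exceedances(series, threshold):
    """Number of values strictly above `threshold` in each half of `series`."""
    half = len(series) // 2
    first = sum(1 for v in series[:half] if v > threshold)
    second = sum(1 for v in series[half:] if v > threshold)
    return first, second

# Toy example: 12 monthly totals (mm); threshold chosen arbitrarily.
monthly_totals = [45, 80, 120, 60, 95, 150, 55, 130, 160, 70, 140, 155]
first, second = count_exceedances(monthly_totals, threshold=125)
print(first, second)  # exceedances in each half of the record
```

The same counting could then be repeated after detrending the series, as in the study's second step.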
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods therefore fail to model the nonlinearities of such a system, making it necessary to find alternative statistical techniques. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for several years now. Most authors use the standard Backpropagation Network (BPN) for their investigations, although this specific model architecture carries a certain risk of over- or underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented Fast Simulated Annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a given cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño-Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of a multiple linear regression model; the simulation quality on the global scale amounts to up to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al. (1994), updated by Jones (1999a)) for the examined period 1856–1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
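As a rough illustration of the optimization engine described above, here is a minimal 1-D sketch of fast simulated annealing with Cauchy-distributed jumps and the fast cooling schedule T(k) = T0/(1+k); the cost function, parameters and step count are arbitrary stand-ins, not the network-training problem of the paper:

```python
import math
import random

def fast_simulated_annealing(cost, x0, t0=1.0, steps=5000, seed=0):
    """Minimize `cost` in 1-D with Cauchy-distributed jumps (FSA sketch).

    The temperature follows the fast schedule T(k) = t0 / (1 + k);
    jump sizes are drawn from a Cauchy distribution scaled by T(k),
    which allows occasional long jumps out of local minima.
    """
    rng = random.Random(seed)
    x, f = x0, cost(x0)
    best_x, best_f = x, f
    for k in range(steps):
        t = t0 / (1.0 + k)                                   # fast cooling
        step = t * math.tan(math.pi * (rng.random() - 0.5))  # Cauchy jump
        x_new = x + step
        f_new = cost(x_new)
        # Metropolis rule: always accept improvements, sometimes worse moves.
        if f_new < f or rng.random() < math.exp(-(f_new - f) / max(t, 1e-12)):
            x, f = x_new, f_new
            if f < best_f:
                best_x, best_f = x, f
    return best_x, best_f

# Illustrative multimodal cost; global minimum at x = 0.
cost = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_star, f_star = fast_simulated_annealing(cost, x0=4.3)
print(round(f_star, 3))
```

In the paper's setting, `cost` would be the (there unspecified) network training error rather than this toy function.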
Observed global and European spatiotemporally related fields of surface air temperature, mean-sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and known internal climate mechanisms such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporally related fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, as well as the residuals. Finally, a back-transformation leads to the spatiotemporally related patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance including natural variability. This test answers the question whether a significant anthropogenic climate change is visible in the observed data. As a second test, the anthropogenic signal is tested with respect to the climate noise component only. This test answers the question whether the anthropogenic signal is significant among others in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed climate (first test).
First results on the production of Xi- and Anti-xi hyperons in Pb+Pb interactions at 40 A GeV are presented. The Anti-xi/Xi- ratio at midrapidity is studied as a function of collision centrality. The ratio shows no significant centrality dependence within statistical errors; it ranges from 0.07 to 0.15. The Anti-xi/Xi- ratio for central Pb+Pb collisions increases strongly with the collision energy.
German version: "Expertise als soziale Institution: Die Internalisierung Dritter in den Vertrag" [Expertise as a Social Institution: The Internalization of Third Parties into the Contract]. In: Gert Brüggemeier (ed.), Liber Amicorum Eike Schmidt. Müller, Heidelberg, 2005, pp. 303-334.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
This paper focuses on the coordination of order and production policy between buyers and suppliers in supply chains. When a buyer and a supplier of an item work independently, the buyer will place orders based on his economic order quantity (EOQ). However, the buyer's EOQ may not lead to an optimal policy for the supplier. It can be shown that a cooperative batching policy can reduce total cost significantly. Should the buyer have the more powerful position to enforce his EOQ on the supplier, then no incentive exists for him to deviate from his EOQ in order to choose a cooperative batching policy. To provide an incentive to order in quantities suitable to the supplier, the supplier could offer a side payment. One critical assumption made throughout the literature dealing with incentive schemes to influence the buyer's ordering policy is that the supplier has complete information regarding the buyer's cost structure. However, this assumption is far from realistic. As a consequence, the buyer has no incentive to report truthfully on his cost structure. Moreover, there is an incentive to overstate the total relevant cost in order to obtain as high a side payment as possible. This paper provides a bargaining model with asymmetric information about the buyer's cost structure, assuming that the buyer has the bargaining power to enforce his EOQ on the supplier in case of a breakdown in negotiations. An algorithm for the determination of an optimal set of contracts, specifically designed for the different cost structures of the buyer assumed by the supplier, will be presented. This algorithm was implemented in a software application that supports the supplier in determining the optimal set of contracts.
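The gap between the buyer's stand-alone EOQ and a cooperative batching policy can be illustrated with the textbook formulas; all cost parameters below are hypothetical, and the sketch deliberately ignores the asymmetric-information bargaining that is the paper's actual subject:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classic economic order quantity: sqrt(2 * D * S / h)."""
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

def annual_cost(q, demand, setup_cost, holding_cost):
    """Ordering/setup cost plus average inventory holding cost at lot size q."""
    return demand / q * setup_cost + q / 2.0 * holding_cost

# Hypothetical parameters: annual demand D, buyer order cost S_b,
# supplier setup cost S_s, holding cost rates h_b and h_s.
D, S_b, S_s, h_b, h_s = 1000.0, 50.0, 400.0, 5.0, 2.0

q_buyer = eoq(D, S_b, h_b)              # buyer optimizes alone
q_joint = eoq(D, S_b + S_s, h_b + h_s)  # joint (lot-for-lot) optimum

# System-wide cost of each lot size: the joint policy is cheaper overall,
# which is the surplus a side payment can redistribute.
cost_independent = annual_cost(q_buyer, D, S_b + S_s, h_b + h_s)
cost_joint = annual_cost(q_joint, D, S_b + S_s, h_b + h_s)
print(round(q_buyer, 1), round(q_joint, 1),
      round(cost_independent - cost_joint, 1))
```

The printed difference is the total-cost saving a cooperative lot size achieves; in the paper, the supplier must elicit the buyer's true cost parameters before such a contract can be designed.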
We present a novel practical algorithm that, given a lattice basis b1, ..., bn, finds in O(n² (k/6)^(k/4)) average time a shorter vector than b1, provided that b1 is (k/6)^(n/(2k)) times longer than the length of the shortest nonzero lattice vector. We assume that the given basis b1, ..., bn has an orthogonal basis that is typical for worst-case lattice bases. The new reduction method samples short lattice vectors in high-dimensional sublattices and advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.
We enhance the security of Schnorr blind signatures against the novel one-more forgery of Schnorr [Sc01] and Wagner [W02], which is possible even if the discrete logarithm is hard to compute. We show two limitations of this attack. Firstly, replacing the group G by the s-fold direct product G^(×s) increases the work of the attack, for a given number of signer interactions, to the s-th power, while increasing the work of the blind signature protocol merely by a factor of s. Secondly, we bound the number of additional signatures per signer interaction that can be forged effectively. That fraction of the additional forged signatures can be made arbitrarily small.
Presentation at the Università di Pisa, Pisa, Italy, 3 July 2002; the conference 'Irreversible Quantum Dynamics', the Abdus Salam ICTP, Trieste, Italy, 29 July - 2 August 2002; and the University of Natal, Pietermaritzburg, South Africa, 14 May 2003. Version of 24 April 2003: examples added; 16 December 2002: revised; 12 September 2002. See the corresponding papers "Zeno Dynamics of von Neumann Algebras", "Zeno Dynamics in Quantum Statistical Mechanics" and "Mathematics of the Quantum Zeno Effect".
Introduction: This open label, multicentre study was conducted to assess the times to offset of the pharmacodynamic effects and the safety of remifentanil in patients with varying degrees of renal impairment requiring intensive care.
Methods: A total of 40 patients, who were aged 18 years or older and had normal/mildly impaired renal function (estimated creatinine clearance ≥ 50 ml/min; n = 10) or moderate/severe renal impairment (estimated creatinine clearance <50 ml/min; n = 30), were entered into the study. Remifentanil was infused for up to 72 hours (initial rate 6–9 μg/kg per hour), with propofol administered if required, to achieve a target Sedation–Agitation Scale score of 2–4, with no or mild pain.
Results: There was no evidence of increased offset time with increased duration of exposure to remifentanil in either group. The times to offset of the effects of remifentanil (at 8, 24, 48 and 72 hours during scheduled down-titrations of the infusion) were more variable and were statistically significantly longer in the moderate/severe group than in the normal/mild group at 24 hours and 72 hours. These observed differences were not clinically significant (the difference in mean offset at 72 hours was only 16.5 min). Propofol consumption was lower with the remifentanil-based technique than with hypnotic-based sedative techniques. There were no statistically significant differences between the renal function groups in the incidence of adverse events, and no deaths were attributable to remifentanil use.
Conclusion: Remifentanil was well tolerated, and the offset of pharmacodynamic effects was not prolonged either as a result of renal dysfunction or prolonged infusion up to 72 hours.
Organisms with restricted dispersal abilities and a presence in the fossil record are particularly well suited to studying the impact of climate changes on the distribution and genetic structure of species. Trochoidea geyeri (Soós 1926) is a land snail restricted to a patchy, insular distribution in Germany and France. Fossil evidence suggests that current populations of T. geyeri are relicts of a much more widespread distribution during more favourable climatic periods in the Pleistocene. Results: Phylogeographic analysis of mitochondrial 16S rDNA and nuclear ITS-1 sequence variation was used to infer the history of the remnant populations of T. geyeri. Nested clade analysis for both loci suggested that the origin of the species lies in the Provence, from where it expanded its range first to southwest France and subsequently from there to Germany. Estimated divergence times predating the last glacial maximum (25–17 ka) implied that the colonization of the northern part of the current species range occurred during the Pleistocene. Conclusion: We conclude that T. geyeri could quite successfully persist in cryptic refugia during major climatic changes in the past, despite a restricted capacity of individuals to actively avoid unfavourable conditions.
We present a method for the construction of a Krein space completion for spaces of test functions, equipped with an indefinite inner product induced by a kernel which is more singular than a distribution of finite order. This generalizes a regularization method for infrared singularities in quantum field theory, introduced by G. Morchio and F. Strocchi, to the case of singularities of infinite order. We give conditions for the possibility of this procedure in terms of local differential operators and the Gelfand-Shilov test function spaces, as well as an abstract sufficient condition. As a model case we construct a maximally positive definite state space for the Heisenberg algebra in the presence of an infinite infrared singularity. See the corresponding paper: Schmidt, Andreas U.: "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and the presentation "Infinite Infrared Regularization in Krein Spaces".
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
This paper proposes an intertemporal model of venture capital investment with screening and advising, where the venture capitalist's time endowment is the scarce input factor. Screening improves the selection of firms receiving finance; advising allows firms to develop a marketable product; both have a variable intensity. In our setup, optimal linear contracts solve the moral hazard problem. Screening, however, requires an entrepreneur wage and does not allow for upfront payments, which would cause severe adverse selection. Project characteristics have implications for screening and advising intensity and for the distribution of profits. Finally, we develop a formal version of the "venture capital cycle" by extending the basic setup to a simple model of venture capital supply and demand.
This paper analyses the effects of the Initial Public Offering (IPO) market on real investment decisions in emerging industries. We first propose a model of IPO timing based on divergence of opinion among investors and short-sale constraints. Using a real option approach, we show that firms are more likely to go public when the ratio of overvaluation over profits is high, that is, after stock market run-ups. Because initial returns increase with the demand from optimistic investors at the time of the offer, the model provides an explanation for the observed positive causality between average initial returns and IPO volume. Second, we discuss the possibility of real overinvestment in high-tech industries. We claim that investing in the industry gives agents an option to sell the project on the stock market at an overvalued price, thereby enabling the financing of positive-NPV projects which would not otherwise be undertaken. It is shown, however, that the IPO market can also lead to overinvestment in new industries. Finally, we present some econometric results supporting the idea that funds committed to the financing of high-tech industries may respond positively to optimistic stock market valuations.
Equal size, equal role? : interest rate interdependence between the Euro area and the United States
(2003)
This paper investigates whether the degree and the nature of economic and monetary policy interdependence between the United States and the euro area have changed with the advent of EMU. Using real-time data, it addresses this issue from the perspective of financial markets by analysing the effects of monetary policy announcements and macroeconomic news on daily interest rates in the United States and the euro area. First, the paper finds that the interdependence of money markets has increased strongly around EMU. Although spillover effects from the United States to the euro area remain stronger than in the opposite direction, we present evidence that US markets have started reacting also to euro area developments since the onset of EMU. Second, beyond these general linkages, the paper finds that certain macroeconomic news about the US economy have a large and significant effect on euro area money markets, and that these effects have become stronger in recent years. Finally, we show that US macroeconomic news have become good leading indicators for economic developments in the euro area. This indicates that the higher money market interdependence between the United States and the euro area is at least partly explained by the increased real integration of the two economies in recent years.
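Announcement-effect studies of this kind typically regress daily interest-rate changes on the surprise component of a release (actual value minus survey expectation). A bare-bones OLS sketch with invented numbers (the paper's real-time data set, controls and event windows are not reproduced):

```python
def ols_slope(x, y):
    """Slope and intercept of a one-regressor least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)
    return beta, my - beta * mx

# Hypothetical announcement days: surprise = actual release minus
# survey median; rate_change = same-day money-market rate move (bp).
surprise = [1.0, -0.5, 0.2, 2.0, -1.5, 0.8]
rate_change = [3.1, -1.4, 0.9, 6.2, -4.3, 2.6]

beta, alpha = ols_slope(surprise, rate_change)
print(round(beta, 2))  # estimated rate response per unit of surprise
```

Comparing such betas for US news in euro area markets before and after EMU is, in spirit, how the spillover asymmetry described above is measured.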
Based on a broad set of regional aggregated and disaggregated consumer price index (CPI) data from major industrialized countries in Asia, North America and Europe, we examine the role that national borders play for goods market integration. In line with the existing literature, we find that intra-national markets are better integrated than international markets. Additionally, our results show that there is a large "ocean" effect, i.e., inter-continental markets are significantly more segmented than intra-continental markets. To examine the impact of the establishment of the European Monetary Union (EMU) on integration, we split our sample into a pre-EMU and an EMU sample. We find that border effects across EMU countries have declined by about 80% to 90% after 1999, whereas border estimates across non-EMU countries have remained basically unchanged. Since global factors have affected all countries in our sample similarly and major integration efforts across EMU countries were made before 1999, we suggest that most of the reduction in EMU border estimates has been "nominal". Panel unit root evidence shows that the observed large differences in integration across intra- and inter-continental markets remain valid in the long run. This finding implies that real factors are responsible for the documented segmentations across our sample countries.
We estimate a Bayesian vector autoregression for the U.K. with drifting coefficients and stochastic volatilities. We use it to characterize posterior densities for several objects that are useful for designing and evaluating monetary policy, including local approximations to the mean, persistence, and volatility of inflation. We present diverse sources of uncertainty that impinge on the posterior predictive density for inflation, including model uncertainty, policy drift, structural shifts and other shocks. We use a recently developed minimum entropy method to bring outside information to bear on inflation forecasts. We compare our predictive densities with the Bank of England's fan charts.
We show that diverse beliefs are an important propagation mechanism of fluctuations, money non-neutrality and the efficacy of monetary policy. Since expectations affect demand, our theory shows that economic fluctuations are mostly driven by varying demand, not supply shocks. Using a competitive model with flexible prices in which agents hold Rational Beliefs (see Kurz (1994)) we show that (i) our economy replicates well the empirical record of fluctuations in the U.S.; (ii) under monetary rules without discretion, monetary policy has a strong stabilization effect and an aggressive anti-inflationary policy can reduce inflation volatility to zero; (iii) the statistical Phillips Curve changes substantially with policy instruments, and activist policy rules render it vertical; (iv) although prices are flexible, money shocks result in less than proportional changes in inflation, hence the aggregate price level appears "sticky" with respect to money shocks; (v) discretion in monetary policy adds a random element to policy and increases volatility. The impact of discretion on the efficacy of policy depends upon the structure of market beliefs about future discretionary decisions. We study two rationalizable beliefs. In one case, market beliefs weaken the effect of policy; in the second, beliefs bolster policy outcomes and discretion could be a desirable attribute of the policy rule. Since the central bank does not know any more than the private sector, real social gains from discretion arise only in extraordinary cases. Hence, the weight of the argument leads us to conclude that the bank's policy should be transparent, abandoning discretion except for rare and unusual circumstances. (vi) An implication of our model is that the current effective policy is only mildly activist and aims mostly to target inflation.
Permanent and transitory policy shocks in an empirical macro model with asymmetric information
(2003)
Despite a large literature documenting that the efficacy of monetary policy depends on how inflation expectations are anchored, many monetary policy models assume: (1) the inflation target of monetary policy is constant; and, (2) the inflation target is known by all economic agents. This paper proposes an empirical specification with two policy shocks: permanent changes to the inflation target and transitory perturbations of the short-term real rate. The public sector cannot correctly distinguish between these two shocks and, under incomplete learning, private perceptions of the inflation target will not equal the true target. The paper shows how imperfect policy credibility can affect economic responses to structural shocks, including transition to a new inflation target - a question that cannot be addressed by many commonly used empirical and theoretical models. In contrast to models where all monetary policy actions are transient, the proposed specification implies that sizable movements in historical bond yields and inflation are attributable to perceptions of permanent shocks in target inflation.
This paper investigates the role that imperfect knowledge about the structure of the economy plays in the formation of expectations, macroeconomic dynamics, and the efficient formulation of monetary policy. Economic agents rely on an adaptive learning technology to form expectations and to update continuously their beliefs regarding the dynamic structure of the economy based on incoming data. The process of perpetual learning introduces an additional layer of dynamic interaction between monetary policy and economic outcomes. We find that policies that would be efficient under rational expectations can perform poorly when knowledge is imperfect. In particular, policies that fail to maintain tight control over inflation are prone to episodes in which the public's expectations of inflation become uncoupled from the policy objective and stagflation results, in a pattern similar to that experienced in the United States during the 1970s. Our results highlight the value of effective communication of a central bank's inflation objective and of continued vigilance against inflation in anchoring inflation expectations and fostering macroeconomic stability. July 2003.
Monetary policy is sometimes formulated in terms of a target level of inflation, a fixed time horizon and a constant interest rate that is anticipated to achieve the target at the specified horizon. These requirements lead to constant interest rate (CIR) instrument rules. Using the standard New Keynesian model, it is shown that some forms of CIR policy lead to both indeterminacy of equilibria and instability under adaptive learning. However, some other forms of CIR policy perform better. We also examine the properties of the different policy rules in the presence of inertial demand and price behaviour.
Escapist policy rules
(2003)
We study a simple, microfounded macroeconomic system in which the monetary authority employs a Taylor-type policy rule. We analyze situations in which the self-confirming equilibrium is unique and learnable according to Bullard and Mitra (2002). We explore the prospects for the use of 'large deviation' theory in this context, as employed by Sargent (1999) and Cho, Williams, and Sargent (2002). We show that our system can sometimes depart from the self-confirming equilibrium towards a non-equilibrium outcome characterized by persistently low nominal interest rates and persistently low inflation. Thus we generate events that have some of the properties of "liquidity traps" observed in the data, even though the policymaker remains committed to a Taylor-type policy rule which otherwise has desirable stabilization properties.
The development of tractable forward-looking models of monetary policy has led to an explosion of research on the implications of adopting Taylor-type interest rate rules. Indeterminacies have been found to arise for some specifications of the interest rate rule, raising the possibility of inefficient fluctuations due to the dependence of expectations on extraneous "sunspots". Separately, recent work by a number of authors has shown that sunspot equilibria previously thought to be unstable under private agent learning can in some cases be stable when the observed sunspot has a suitable time series structure. In this paper we generalize the "common factor" technique used in this analysis to examine standard monetary models that combine forward-looking expectations and predetermined variables. We consider a variety of specifications that incorporate both lagged and expected inflation in the Phillips Curve, and both expected inflation and inertial elements in the policy rule. We find that some policy rules can indeed lead to learnable sunspot solutions, and we investigate the conditions under which this phenomenon arises.
A financial system can only perform its function of channelling funds from savers to investors if it offers sufficient assurance to the providers of the funds that they will reap the rewards which have been promised to them. To the extent that this assurance is not provided by contracts alone, potential financiers will want to monitor and influence managerial decisions. This is why corporate governance is an essential part of any financial system. It is almost obvious that providers of equity have a genuine interest in the functioning of corporate governance. However, corporate governance encompasses more than investor protection. Similar considerations also apply to other stakeholders who invest their resources in a firm and whose expectations of later receiving an appropriate return on their investment also depend on decisions at the level of the individual firm which would be extremely difficult to anticipate and prescribe in a set of complete contingent contracts. Lenders, especially long-term lenders, are one such group of stakeholders who may also want to play a role in corporate governance; employees, especially those with high skill levels and firm-specific knowledge, are another. The German corporate governance system is different from that of the Anglo-Saxon countries because it foresees the possibility, and even the necessity, to integrate lenders and employees in the governance of large corporations. The German corporate governance system is generally regarded as the standard example of an insider-controlled and stakeholder-oriented system. Moreover, only a few years ago it was a consistent system in the sense of being composed of complementary elements which fit together well. The first objective of this paper is to show why and in which respect these characterisations were once appropriate. 
However, the past decade has seen a wave of developments in the German corporate governance system, which make it worthwhile and indeed necessary to investigate whether German corporate governance has recently changed in a fundamental way. More specifically one can ask which elements and features of German corporate governance have in fact changed, why they have changed and whether those changes which did occur constitute a structural change which would have converted the old insider-controlled system into an outsider-controlled and shareholder-oriented system and/or would have deprived it of its former consistency. It is the second purpose of this paper to answer these questions. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
A rapidly growing literature has documented important improvements in volatility measurement and forecasting performance through the use of realized volatilities constructed from high-frequency returns coupled with relatively simple reduced-form time series modeling procedures. Building on recent theoretical results from Barndorff-Nielsen and Shephard (2003c,d) for related bi-power variation measures involving the sum of high-frequency absolute returns, the present paper provides a practical framework for non-parametrically measuring the jump component in realized volatility measurements. Exploiting these ideas for a decade of high-frequency five-minute returns for the DM/$ exchange rate, the S&P500 market index, and the 30-year U.S. Treasury bond yield, we find the jump component of the price process to be distinctly less persistent than the continuous sample path component. Explicitly including the jump measure as an additional explanatory variable in an easy-to-implement reduced form model for realized volatility results in highly significant jump coefficient estimates at the daily, weekly and quarterly forecast horizons. As such, our results hold promise for improved financial asset allocation, risk management, and derivatives pricing, by separate modeling, forecasting and pricing of the continuous and jump components of total return variability.
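The decomposition this abstract describes can be illustrated with a minimal sketch (not the authors' code; function and variable names are illustrative): realized variance sums squared high-frequency returns, bi-power variation scales the sum of products of adjacent absolute returns and is robust to jumps, and the truncated difference estimates the jump component.

```python
import math

def rv_decomposition(returns):
    """Split realized variance into continuous and jump parts using the
    Barndorff-Nielsen/Shephard bi-power variation idea (a sketch)."""
    # Realized variance: sum of squared intraday returns.
    rv = sum(r * r for r in returns)
    # Bi-power variation: sum of products of adjacent absolute returns,
    # scaled by pi/2 = 1/mu_1^2 with mu_1 = sqrt(2/pi); robust to jumps.
    bv = (math.pi / 2) * sum(abs(a) * abs(b)
                             for a, b in zip(returns[1:], returns[:-1]))
    # Jump component: truncated at zero so it is a valid variance share.
    jump = max(rv - bv, 0.0)
    return rv, bv, jump

# A calm day versus a day with one large price move: the jump
# measure is zero for the former and picks up the spike in the latter.
calm = [0.01] * 10
spiky = [0.0] * 5 + [0.1] + [0.0] * 5
print(rv_decomposition(calm)[2])   # 0.0
print(rv_decomposition(spiky)[2])  # 0.01
```

Because bi-power variation multiplies each return by its neighbour, a single large jump contributes to the realized variance but (almost) not to the bi-power term, which is what makes the difference a jump estimator.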
While focusing on the protection of distressed sovereigns, the current debate on reforming the International Financial Architecture has hardly addressed the protection of creditors' rights, which varies across governing laws. I suspect, however, that this constitutes an essential determinant of the success of the suggested solutions, especially under the contractual approach. Based on a sample of bonds issued by developing-country states in the period January 1987 to December 1997, I find that, for given contract characteristics (e.g. listing markets and currency), the governing law is selected according to its ability to enforce repayment. However, although New York law seems looser and incurs larger enforcement costs than the law of England & Wales, the former permits equivalent yearly credit amounts. I interpret this as a consequence of the existence of a larger set of valuable assets (e.g. trade) in the US that constitute implicit securities. My findings yield important implications for the reforms. In particular, provided that enforcement credibility is seemingly equivalent under English and New York law, the prompt implementation of the contractual-approach solution should constitute a valuable first step toward efficient sovereign debt markets. October 2003.
The paper makes an innovative contribution to the investigation of the pricing of banking liabilities contracted by sovereign agents. To address fundamental issues of banking, the study focuses on the determinants of up-front fees (the up-front fee is a charge paid out at the signature of the loan arrangement). The investigation is based on a uniquely extensive sample of bank loans contracted or guaranteed by 58 less-developed-country sovereigns in the period from 1983 to 1997. The well-detailed reports allow for the calculation of the equivalent yearly margin on the utilization period for each individual loan. The main findings suggest a significant impact of renegotiation and agency costs on front-end borrowing payments. Unlike the interest spread alone, the all-in interest margin better takes account of these costs. The model estimates, however, suggest that the non-linear pricing is hardly associated with an exogenous split-up intended by the borrower and his banker to conceal information. Instead, the up-front payment is a liquidity transfer, as described by Gorton and Kahn (2000), to compensate for renegotiation and monitoring costs. The second interesting result is that banks demand payment for all types of sovereign risk in the same manner as public debt holders do. The difference is that, unlike bond holders, bankers have the possibility to charge an up-front fee to compensate for renegotiation costs. Hence, beyond information-related issues, the higher complexity of the pricing design makes bank loans optimal for lenders on sovereign capital markets, especially relative to public debt, thus motivating their presence. The paper contributes to the expanding literature on loan syndication and banking-related issues. The study also has relevance for the investigation of developing-country debt pricing.
We present an analysis of VaR forecasts and P&L-series of all 13 German banks that used internal models for regulatory purposes in the year 2001. To this end, we introduce the notion of well-behaved forecast systems. Furthermore, we provide a series of statistical tools to perform our analyses. The results shed light on the forecast quality of VaR models of the individual banks, the regulator's portfolio as a whole, and the main ingredients of the computation of the regulatory capital required by the Basel rules.
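A standard way to check whether a VaR forecast system is well-behaved is to count exceptions (days on which the loss exceeds the forecast VaR) and test whether the exception frequency matches the target coverage. A minimal sketch using Kupiec's proportion-of-failures likelihood ratio — one assumed example of such a statistical tool, not the paper's actual test suite:

```python
import math

def count_exceptions(pnl, var):
    """An exception occurs when the realized loss exceeds the VaR forecast."""
    return sum(1 for profit, v in zip(pnl, var) if profit < -v)

def kupiec_lr(n, x, p):
    """Kupiec proportion-of-failures LR statistic: compares the observed
    exception rate x/n with the VaR coverage level p. Under correct
    coverage it is approximately chi-squared with one degree of freedom."""
    def loglik(q):
        # Convention 0*log(0) = 0 handles the edge cases x == 0 or x == n.
        return (((n - x) * math.log(1 - q)) if x < n else 0.0) + \
               ((x * math.log(q)) if x > 0 else 0.0)
    return -2.0 * (loglik(p) - loglik(x / n))

# 250 trading days at 99% VaR: about 2.5 exceptions are expected, so
# 2 observed exceptions yield a small statistic (no evidence against
# correct coverage at the 5% level, critical value 3.84), while 25
# exceptions yield a very large one.
print(kupiec_lr(250, 2, 0.01))
print(kupiec_lr(250, 25, 0.01))
```

The statistic only tests unconditional coverage; clustering of exceptions in time would call for additional independence tests.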
We estimate a model with latent factors that summarize the yield curve (namely, level, slope, and curvature) as well as observable macroeconomic variables (real activity, inflation, and the stance of monetary policy). Our goal is to provide a characterization of the dynamic interactions between the macroeconomy and the yield curve. We find strong evidence of the effects of macro variables on future movements in the yield curve and much weaker evidence for a reverse influence. We also relate our results to a traditional macroeconomic approach based on the expectations hypothesis.
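The level, slope, and curvature factors in such models have simple empirical counterparts: linear combinations of yields at short, medium, and long maturities. A minimal sketch of these common proxies (illustrative only; the paper itself estimates latent factors, not these fixed combinations):

```python
def yield_curve_factors(y_short, y_mid, y_long):
    """Common empirical proxies for the three yield-curve factors,
    given yields (in percent) at a short, medium, and long maturity."""
    level = (y_short + y_mid + y_long) / 3.0    # overall height of the curve
    slope = y_long - y_short                    # long rate minus short rate
    curvature = 2.0 * y_mid - y_short - y_long  # hump in the middle
    return level, slope, curvature

# Upward-sloping curve: e.g. 3-month at 1%, 2-year at 2.5%, 10-year at 4%.
print(yield_curve_factors(1.0, 2.5, 4.0))  # (2.5, 3.0, 0.0)
```

With these proxies the macro linkages the paper studies become concrete: the slope proxy, for instance, falls mechanically when the short rate rises with the stance of monetary policy.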
Using the Johansen test for cointegration, we examine to what extent inflation rates in the Euro area have converged after the introduction of a single currency. Since the assumption of non-stationary variables represents the pivotal point in cointegration analyses, we pay special attention to the appropriate identification of non-stationary inflation rates by applying six different unit root tests. We compare two periods with monthly observations, the first ranging from 1993 to 1998 and the second from 1993 to 2002. The Johansen test finds only partial convergence for the former period and no convergence for the latter.
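The unit-root step described above boils down, in its simplest form, to a Dickey-Fuller-type regression of the change in a series on its lagged level; a significantly negative coefficient speaks against a unit root. A minimal pure-Python sketch of that regression (illustrative only; the paper applies six different, more refined unit root tests):

```python
import math
import random

def dickey_fuller(y):
    """Regress dy_t = a + b * y_{t-1} + e by OLS and return (b, t_stat).
    A clearly negative t-statistic is evidence against a unit root."""
    x = y[:-1]
    dy = [b - a for a, b in zip(y[:-1], y[1:])]
    n = len(x)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, dy))
    b = sxy / sxx
    a = my - b * mx
    resid = [v - a - b * u for u, v in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)   # residual variance
    t_stat = b / math.sqrt(s2 / sxx)
    return b, t_stat

# A strongly mean-reverting AR(1) series (phi = 0.5): the regression
# coefficient on the lagged level comes out clearly negative.
random.seed(0)
y, level = [], 0.0
for _ in range(200):
    level = 0.5 * level + random.gauss(0.0, 1.0)
    y.append(level)
print(dickey_fuller(y))
```

Note that under the unit-root null the t-statistic does not follow the usual t-distribution; proper tests compare it against Dickey-Fuller critical values, which is part of what the six tests in the paper handle.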
Financial markets are to a very large extent influenced by the advent of information. Such disclosures, however, do not only contain information about the fundamentals underlying the markets; they also serve as a focal point for the beliefs of market participants. This dual role of information gains further importance for explaining the development of asset valuations when taking into account that information may be perceived individually (private information) or may be commonly shared by all traders (public information). This study investigates the recently developed theoretical structures explaining the operating mechanism of the two types of information and emphasizes the empirical testability of, and differentiation between, the role of private and public information. Concluding from a survey of experimental studies and our own econometric analyses, it is argued that public information most often dominates private information. This finding justifies central bankers' unease when disseminating news to the markets and argues against the recent trend of demanding full transparency both for financial institutions and for financial markets themselves.
The paper describes the legal and economic environment of mergers and acquisitions in Germany and explores barriers to obtaining and executing corporate control. Various cases are used to demonstrate that resistance by different stakeholders, including minority shareholders, organized labour and the government, may present powerful obstacles to takeovers in Germany. In spite of the overall convergence of European takeover and securities trading laws, Germany still shows many peculiarities that make its market for corporate control distinct from that of other countries. Concentrated share ownership, cross shareholdings and pyramidal ownership structures are frequent barriers to acquiring majority stakes. Codetermination laws, the supervisory board structure and supermajority requirements for important corporate decisions limit the execution of control by majority shareholders. Bidders that disregard the German preference for consensual solutions and the specific balance of powers risk having their takeover attempt frustrated by opposing influence groups. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
This paper is a draft for the chapter "German banks and banking structure" of the forthcoming book "The German financial system" edited by J.P. Krahnen and R.H. Schmidt (Oxford University Press). As such, the paper starts out with a description of past and present structural features of the German banking industry. Given the presented empirical evidence it then argues that great care has to be taken when generalising structural trends from one financial system to another. Whilst conventional commercial banking is clearly in decline in the US, it is far from clear whether the dominance of banks in the German financial system has been significantly eroded over the last decades. We interpret the immense stability in intermediation ratios and financing patterns of firms between 1970 and 2000 as strong evidence for our view that the way in which and the extent to which German banks fulfil the central functions for the financial system are still consistent with the overall logic of the German financial system. In spite of the current dire business environment for financial intermediaries we do not expect the German financial system and its banking industry as an integral part of this system to converge to the institutional arrangements typical for a market-oriented financial system.
We present a survey on the role of initial public offerings (IPOs) and venture capital (VC) in Germany after the Second World War. Between 1945 and 1983 IPOs hardly played a role at all and only a minor role thereafter. In addition, companies that chose an IPO were much older and larger than the average companies going public for the first time in the US or the UK. The level of IPO underpricing in Germany, in contrast, has not been fundamentally different from that in other countries. The picture for venture capital financing is not much different from that provided by IPOs in Germany. For a long time venture capital financing was hardly significant, particularly as a source of early-stage financing. The unprecedented boom on the Neuer Markt between 1997 and 2000, when many small venture-capital-financed firms entered the market, provides a striking contrast to the preceding era. However, by US standards, the levels of both IPO and venture capital activities remained rather low even in this boom phase. The extent to which recent developments will have a lasting impact on the financing of German firms, the level of IPO activity, and venture capital financing remains to be seen. At the time of writing, activity has come to a near standstill and the Neuer Markt has just been dissolved. The low number of IPOs and the fairly low volume of VC financing in Germany before the introduction of the Neuer Markt are a striking and much debated phenomenon. Understanding the reasons for these apparent peculiarities is vital to understanding the German financial system. The potential explanations that have been put forward range from differences in mentality to legal and institutional impediments and the availability of alternative sources of financing. Moreover, the recent literature discusses how interest groups may have benefited from and influenced the situation. These groups include politicians, unions/workers, managers/controlling owners of established firms as well as banks.
Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
We analyze the venture capitalist's decision on the timing of the IPO, the offer price and the fraction of shares he sells in the course of the IPO. A venture capitalist may decide to take a company public or to liquidate it after one or two financing periods. A longer participation by the venture capitalist in a firm (a later IPO) may increase its value while also increasing costs for the venture capitalist. Due to his active involvement, the venture capitalist knows the type of firm and the kind of project he finances before potential new investors do. This information asymmetry is resolved at the end of the second period. Under certain assumptions about the parameters and the structure of the model, we obtain a single equilibrium in which high-quality firms separate from low-quality firms. The latter are liquidated after the first period, while the former go public either after having been financed by the venture capitalist for two periods or after one financing period using a lock-up. Whether a strategy of one or two financing periods is chosen depends on the consulting intensity of the project and/or on the experience of the venture capitalist. In the separating equilibrium, the offer price corresponds to the true value of the firm. An earlier version of this paper appeared as: The Decision of Venture Capitalists on Timing and Extent of IPOs (ZEW Discussion Paper No. 03-12). This version July 2003.
Using a unique, hand-collected database of all venture-backed firms listed on Germany's Neuer Markt, we analyze the history of venture capital financing of these firms before the IPO and the behavior of venture capitalists at the IPO. We can detect significant differences in the behavior and characteristics of German vs. foreign venture capital firms. The discrepancy in the investment and divestment strategies may be explained by the grandstanding phenomenon, the value-added hypothesis and certification issues. German venture capitalists are typically younger and smaller than their counterparts from abroad. They syndicate less. The sectoral structure of their portfolios differs from that of foreign venture capital firms. We also find that German venture capitalists typically take companies with lower offering volumes on the market. They usually finance firms in a later stage, carry through fewer investment rounds and take their portfolio firms public earlier. In companies where a German firm is the lead venture capitalist, the fraction of equity held by the group of venture capitalists is lower, their selling intensity at the IPO is higher and the committed lock-up period is longer.
This paper deals with the proposed use of sovereign credit ratings in the "Basel Accord on Capital Adequacy" (Basel II) and considers its potential effect on emerging markets financing. As a first attempt, it investigates the consequences of the planned revisions for the two central aspects of international bank credit flows: the impact on capital costs and the volatility of credit supply across the risk spectrum of borrowers. The empirical findings cast doubt on the usefulness of credit ratings in determining commercial banks' capital adequacy ratios, since the standardized approach to credit risk would lead to more divergence rather than convergence between investment-grade and speculative-grade borrowers. This conclusion is based on the lateness and cyclical determination of credit rating agencies' sovereign risk assessments and the continuing incentives for short-term rather than long-term interbank lending ingrained in the proposed Basel II framework.
Do changes in sovereign credit ratings contribute to financial contagion in emerging market crises?
(2003)
Credit rating changes for long-term foreign currency debt may act as a wake-up call, with upgrades and downgrades in one country affecting other financial markets within and across national borders. Such a potential (contagious) rating effect is likely to be stronger in emerging market economies, where institutional investors' problems of asymmetric information are more pronounced. This empirical study complements earlier research by explicitly examining cross-security and cross-country contagious rating effects of credit rating agencies' sovereign risk assessments. In particular, the specific impact of sovereign rating changes during the financial turmoil in emerging markets in the latter half of the 1990s is examined. The results indicate that sovereign rating changes in a ground-zero country have a (statistically) significant impact on the financial markets of other emerging market economies, although the spillover effects tend to be regional.
Accounting for financial instruments in the banking industry: conclusions from a simulation model
(2003)
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature.
Some of the most widely expressed myths about the German financial system are concerned with the close ties and intensive interaction between banks and firms, often described as Hausbank relationships. Links between banks and firms include direct shareholdings, board representation, and proxy voting and are particularly significant for corporate governance. Allegedly, these relationships promote investment and improve the performance of firms. Furthermore, German universal banks are believed to play a special role as large and informed monitoring investors (shareholders). However, for the very same reasons, German universal banks are frequently accused of abusing their influence on firms by exploiting rents and sustaining the entrenchment of firms against efficient transfers of firm control. In this paper, we review recent empirical evidence regarding the special role of banks for the corporate governance of German firms. We differentiate between large exchange-listed firms and small and medium-sized companies throughout. With respect to the role of banks as monitoring investors, the evidence does not unanimously support a special role of banks for large firms. Only one study finds that banks' control of management goes beyond what non-bank shareholders achieve. Proxy-voting rights apparently do not provide a significant means for banks to exert management control. Most of the recent evidence regarding small firms suggests that a Hausbank relationship can indeed be beneficial. Hausbanks are more willing to sustain financing when borrower quality deteriorates, and they invest more often than arm's-length banks in workouts if borrowers face financial distress.
In Germany a public discussion on the "power of banks" has been going on for decades now, with power having at least two meanings. On the one hand it is the power of banks to control public corporations through direct shareholdings or the exercise of proxy votes - this is the power of banks in corporate control. On the other hand it is market power - due to imperfect competition in markets for financial services - that banks exercise vis-à-vis their loan and deposit customers. In the past, bank regulation has often been blamed for undermining competition and the working of market forces in the financial industry for the sake of soundness and stability of financial services firms. This chapter tries to shed some light on the historical development and current state of bank regulation in Germany. In so doing it tries to embed the analysis of bank regulation into a more general industrial organisation framework. For every regulated industry, competition and regulation are deeply interrelated, as most regulatory institutions - even if they do not explicitly address the competitiveness of the market - affect either market structure or conduct. This paper tries to uncover some of the specific relationships between monetary policy, government interference and bank regulation on the one hand and bank market structure and economic performance on the other. In so doing we hope to point to several areas for fruitful research in the future. While our focus is on Germany, some of the questions that we raise and some of our insights might also be applicable to banking systems elsewhere. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
The experience in the period during and after the Asian crisis of 1997-98 has provoked an extensive debate about the credit rating agencies' evaluation of sovereign risk in emerging markets lending. This study analyzes the role of credit rating agencies in international financial markets, particularly whether sovereign credit ratings have an impact on financial stability in emerging market economies. The event study and panel regression results indicate that credit rating agencies have substantial influence on the size and volatility of emerging markets lending. The empirical results are significantly stronger for government downgrades and negative imminent sovereign credit rating actions, such as credit watches and rating outlooks, than for positive adjustments by the credit rating agencies, while sovereign credit rating changes already anticipated by market participants have a smaller impact on financial markets in emerging economies.
The German financial system is the archetype of a bank-dominated system. This implies that organized equity markets are, in some sense, underdeveloped. The purpose of this paper is, first, to describe the German equity markets and, second, to analyze whether they are underdeveloped in any meaningful sense. In the descriptive part we provide a detailed account of the microstructure of the German equity markets, putting special emphasis on recent developments. When comparing the German market with its peers, we find that it is indeed underdeveloped with respect to market capitalization. In terms of liquidity, on the other hand, the German equity market is not generally underdeveloped. It does, however, lack a liquid market for block trading. Classification: G 51. Revised version forthcoming in "The German Financial System", edited by Jan P. Krahnen and Reinhard H. Schmidt, Oxford University Press.
This chapter analyzes the role of financial accounting in the German financial system. It starts from the common perception that German accounting is rather "uninformative". This characterization is appropriate from the perspective of an arm's-length or outside investor and when confined to the financial statements per se. But it is no longer accurate when a broader perspective is adopted. The German accounting system exhibits several arrangements that privately communicate information to insiders, notably the supervisory board. Due to these features, the key financing and contracting parties seem reasonably well informed. The same cannot be said about outside investors relying primarily on public disclosure. A descriptive analysis of the main elements of the German system and a survey of extant empirical accounting research generally support these arguments.