In contrast to the United States and the United Kingdom, little empirical work exists on the distributional characteristics of appraisal-based real estate returns in other countries. The purpose of this study is to fill this gap by focusing on Germany. In line with other studies, this paper offers an extensive investigation of the distribution of German real estate returns and compares them with U.S. and U.K. data over the same period. Furthermore, the comovements with bonds and stocks are also examined. At the core, the distributional characteristics of German real estate returns are comparable to those of the U.S. and the U.K.
Open source projects produce goods or standards that do not allow for the appropriation of private returns by those who contribute to their production. In this paper we analyze why programmers nevertheless invest their time and effort to code open source software. We argue that the particular way in which open source projects are managed, and especially how contributions are attributed to individual agents, allows the best programmers to create a signal that mediocre programmers cannot achieve. By setting themselves apart they can turn this signal into monetary rewards that correspond to their superior capabilities. For this incentive they will forgo the immediate rewards they could earn in software companies that produce proprietary software by restricting access to the source code of their products. Whenever institutional arrangements are in place that enable the acquisition of such a signal and its subsequent substitution into monetary rewards, the contribution to open source projects and the resulting public good is a feasible outcome that can be explained by standard economic theory.
In this paper, we calculate a transaction-based price index for apartments in Paris (France). The heterogeneous character of real estate is taken into account using a hedonic model. The functional form is specified using a general Box-Cox function. The dataset covers 84,686 transactions in the housing market from 1990:01 to 1999:12, one of the largest samples ever used in comparable studies. Low correlations of the price index with stock and bond indices (in first differences) indicate diversification benefits from the inclusion of real estate in a mixed-asset portfolio. JEL: C43, C51, O18, R20.
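The hedonic idea described in this abstract can be sketched in a few lines: regress (transformed) prices on property characteristics, so that the price index tracks what is left after quality adjustment. The sketch below uses the log-log case, which is the lambda -> 0 limit of the Box-Cox transform the paper uses; the coefficients, areas, and prices are made-up illustrative values, not data from the study.

```python
import math

# Hypothetical apartment transactions: (floor area in m^2 -> price in euros),
# generated exactly from ln(price) = a + b * ln(area) with a = 8.0, b = 1.1,
# so the regression below should recover these coefficients.
a_true, b_true = 8.0, 1.1
areas = [30.0, 45.0, 60.0, 75.0, 90.0, 120.0]
prices = [math.exp(a_true + b_true * math.log(s)) for s in areas]

# Least-squares fit of ln(price) on ln(area); a full hedonic index would add
# period dummies and more characteristics, and track the period coefficients.
x = [math.log(s) for s in areas]
y = [math.log(p) for p in prices]
n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
b_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
a_hat = ybar - b_hat * xbar
print(round(a_hat, 6), round(b_hat, 6))
```

Because the synthetic data contain no noise, the fit recovers the generating coefficients exactly (up to floating-point error).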
The paper is a follow-up to an article published in Technique Financière et Developpement in 2000 (see the appendix to the hardcopy version), which portrayed the first results of a new strategy in the field of development finance implemented in South-East Europe. This strategy consists in creating microfinance banks as greenfield investments, that is, of building up new banks which specialise in providing credit and other financial services to micro and small enterprises, instead of transforming existing credit-granting NGOs into formal banks, which had been the dominant approach in the 1990s. The present paper shows that this strategy has, in the course of the last five years, led to the emergence of a network of microfinance banks operating in several parts of the world. After discussing why financial sector development is a crucial determinant of general social and economic development and contrasting the new strategy to former approaches in the area of development finance, the paper provides information about the shareholder composition and the investment portfolio of what is at present the world's largest and most successful network of microfinance banks. This network is a good example of a well-functioning "private public partnership". The paper then provides performance figures and discusses why the creation of such a network seems to be a particularly promising approach to the creation of financially self-sustaining financial institutions with a clear developmental objective.
This paper provides an in-depth analysis of the properties of popular tests for the existence and the sign of the market price of volatility risk. These tests are frequently based on the fact that for some option pricing models under continuous hedging the sign of the market price of volatility risk coincides with the sign of the mean hedging error. Empirically, however, these tests suffer from both discretization error and model mis-specification. We show that these two problems may cause the test to be either no longer able to detect additional priced risk factors or to be unable to identify the sign of their market prices of risk correctly. Our analysis is performed for the model of Black and Scholes (1973) (BS) and the stochastic volatility (SV) model of Heston (1993). In the model of BS, the expected hedging error for a discrete hedge is positive, leading to the wrong conclusion that the stock is not the only priced risk factor. In the model of Heston, the expected hedging error for a hedge in discrete time is positive when the true market price of volatility risk is zero, leading to the wrong conclusion that the market price of volatility risk is positive. If we further introduce model mis-specification by using the BS delta in a Heston world we find that the mean hedging error also depends on the slope of the implied volatility curve and on the equity risk premium. Under parameter scenarios which are similar to those reported in many empirical studies the test statistics tend to be biased upwards. The test often does not detect negative volatility risk premia, or it signals a positive risk premium when it is truly zero. The properties of this test furthermore strongly depend on the location of current volatility relative to its long-term mean, and on the degree of moneyness of the option. 
As a consequence, tests reported in the literature may suffer from the problem that in a time-series framework the researcher cannot draw the hedging errors from the same distribution repeatedly. This implies that there is no guarantee that the empirically computed t-statistic has the assumed distribution. JEL: G12, G13. Keywords: Stochastic Volatility, Volatility Risk Premium, Discretization Error, Model Error
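The discretization error discussed in this abstract is easy to reproduce by simulation. The sketch below sells a call at the Black-Scholes price and delta-hedges at weekly dates under a positive drift; the resulting hedging errors are exactly the objects whose mean the tests examine. All parameter values (drift 8%, volatility 20%, 52 rebalancing dates) are illustrative assumptions, not values from the paper, and the sign and size of the mean error depend on them, which is the paper's point.

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, sigma, tau, r=0.0):
    # Black-Scholes price of a European call.
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def bs_delta(S, K, sigma, tau, r=0.0):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    return norm_cdf(d1)

def discrete_hedge_error(S0, K, sigma, T, mu, steps, rng):
    # Sell the call at the BS price, rebalance the delta hedge at `steps`
    # discrete dates under drift mu (r = 0), and return the terminal
    # hedging error: hedge portfolio value minus option payoff.
    dt = T / steps
    S = S0
    cash = bs_call(S0, K, sigma, T)
    pos = 0.0
    for i in range(steps):
        tau = T - i * dt
        new_pos = bs_delta(S, K, sigma, tau)
        cash -= (new_pos - pos) * S
        pos = new_pos
        z = rng.gauss(0.0, 1.0)
        S *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
    return cash + pos * S - max(S - K, 0.0)

rng = random.Random(1)
errors = [discrete_hedge_error(100.0, 100.0, 0.2, 1.0, 0.08, 52, rng)
          for _ in range(2000)]
print("mean hedging error:", round(sum(errors) / len(errors), 4))
```

With continuous rebalancing the error would vanish; with discrete rebalancing its distribution depends on drift, frequency, and moneyness, so the mean is not a clean signal of a priced risk factor.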
In a framework closely related to Diamond and Rajan (2001) we characterize different financial systems and analyze the welfare implications of different LOLR policies in these financial systems. We show that in a bank-dominated financial system it is less likely that an LOLR policy following the Bagehot rules is preferable. In financial systems with rather illiquid assets, discretionary individual liquidity assistance might be welfare improving, while in market-based financial systems, with rather liquid assets on the banks' balance sheets, emergency liquidity assistance provided freely to the market at a penalty rate is likely to be efficient. Thus, a "one size fits all" approach that does not take the differences between financial systems into account is misguided. JEL Classification: D52, E44, G21, E52, E58
When options are traded, one can use their prices and price changes to draw inference about the set of risk factors and their risk premia. We analyze tests for the existence and the sign of the market prices of jump risk that are based on option hedging errors. We derive a closed-form solution for the option hedging error and its expectation in a stochastic jump model under continuous trading and correct model specification. Jump risk is structurally different from, e.g., stochastic volatility: there is one market price of risk for each jump size (and not just \emph{the} market price of jump risk). Thus, the expected hedging error cannot identify the exact structure of the compensation for jump risk. Furthermore, we derive closed form solutions for the expected option hedging error under discrete trading and model mis-specification. Compared to the ideal case, the sign of the expected hedging error can change, so that empirical tests based on simplifying assumptions about trading frequency and the model may lead to incorrect conclusions.
This paper deals with the superhedging of derivatives and with the corresponding price bounds. A static superhedge results in trivial and fully nonparametric price bounds, which can be tightened if there exists a cheaper superhedge in the class of dynamic trading strategies. We focus on European path-independent claims and show under which conditions such an improvement is possible. For a stochastic volatility model with unbounded volatility, we show that a static superhedge is always optimal, and that, additionally, there may be infinitely many dynamic superhedges with the same initial capital. The trivial price bounds are thus the tightest ones. In a model with stochastic jumps or non-negative stochastic interest rates either a static or a dynamic superhedge is optimal. Finally, in a model with unbounded short rates, only a static superhedge is possible.
Empirical evidence suggests that even those firms presumably most in need of monitoring-intensive financing (young, small, and innovative firms) have a multitude of bank lenders, where one may be special in the sense of relationship lending. However, theory does not tell us a lot about the economic rationale for relationship lending in the context of multiple bank financing. To fill this gap, we analyze the optimal debt structure in a model that allows for multiple but asymmetric bank financing. The optimal debt structure balances the risk of lender coordination failure from multiple lending and the bargaining power of a pivotal relationship bank. We show that firms with low expected cash-flows or low interim liquidation values of assets prefer asymmetric financing, while firms with high expected cash-flows or high interim liquidation values of assets tend to finance without a relationship bank. JEL Classification: G21, G78, G33
This paper suggests a motive for bank mergers that goes beyond alleged and typically unverifiable scale economies: preemptive resolution of banks' financial distress. Such "distress mergers" can be a significant motivation for mergers because they can foster reorganizations, realize diversification gains, and avoid public attention. However, since none of these potential benefits comes without a cost, the overall assessment of distress mergers is unclear. We conduct an empirical analysis to provide evidence on the consequences of distress mergers. The analysis is based on comprehensive data from Germany's savings and cooperative banking sectors over the period 1993 to 2001. During this period both sectors faced significant structural problems, and superordinate institutions (associations) presumably engaged in coordinated actions to manage distress mergers. The data comprise 3,640 banks and 1,484 mergers. Our results suggest that bank mergers as a means of preemptive distress resolution have moderate costs in terms of the economic impact on performance. We do find strong evidence consistent with diversification gains. Thus, distress mergers seem to have benefits without adversely affecting systemic stability.
Tests for the existence and the sign of the volatility risk premium are often based on expected option hedging errors. When the hedge is performed under the ideal conditions of continuous trading and correct model specification, the sign of the premium is the same as the sign of the mean hedging error for a large class of stochastic volatility option pricing models. We show, however, that the problems of discrete trading and model mis-specification, which are necessarily present in any empirical study, may cause the standard test to yield unreliable results.
The question whether the adoption of International Financial Reporting Standards (IFRS) will result in measurable economic benefits is of special policy relevance, particularly given the European Union's decision to require the application of IFRS by listed companies from 2005/2007. In this paper, I investigate the common conjecture that internationally recognized high-quality reporting standards (IAS/IFRS or US-GAAP) reduce the cost of capital of adopting firms (e.g. Levitt 1998; IASB 2002). Building on Leuz/Verrecchia (2000), I use a set of German firms which pre-adopted such standards before 2005, and investigate the potential economic benefits by analyzing their expected cost of equity capital, utilizing and customizing available implied estimation methods (e.g. Gebhardt/Lee/Swaminathan 2001, Easton/Taylor/Shroff/Sougiannis 2002, Easton 2004). Evidence from a sample of about 13,000 HGB, 4,500 IAS/IFRS and 3,000 US-GAAP firm-month observations in the period 1993-2002 generally fails to document lower expected cost of equity capital, and therefore measurable economic benefits, for firms applying IAS/IFRS or US-GAAP. Accordingly, I caution against concluding that reporting under internationally accepted standards per se lowers the cost of equity capital of adopting firms.
In this study, we develop a technique for estimating a firm’s expected cost of equity capital derived from analyst consensus forecasts and stock prices. Building on the work of Gebhardt/Lee/Swaminathan (2001) and Easton/Taylor/Shroff/Sougiannis (2002), our approach allows daily estimation, using only publicly available information at that date. We then estimate the expected cost of equity capital at the market, industry and individual firm level using historical German data from 1989-2002 and examine firm characteristics which are systematically related to these estimates. Finally, we demonstrate the applicability of the concept in a contemporary case study for DaimlerChrysler and the European automobile industry.
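The core mechanic behind "implied" cost-of-equity estimates is an inversion: given a market price and a forecast of future payoffs, solve for the discount rate that equates the two. The sketch below shows that inversion for a deliberately simple constant-growth payoff stream; the cited papers use richer residual-income and abnormal-earnings-growth models, but the root-finding logic is the same. All numbers are illustrative assumptions.

```python
def implied_cost_of_equity(price, dividend, growth, hi=1.0, tol=1e-10):
    # Solve price = dividend / (r - growth) for r by bisection.
    # Stand-in valuation model: the implied-estimation papers cited in the
    # abstract replace value() with residual-income forecasts, but still
    # invert price -> implied r in exactly this way.
    def value(r):
        return dividend / (r - growth)

    lo = growth + 1e-6               # value() requires r > growth
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if value(mid) > price:       # value is decreasing in r
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Price 100, next dividend 5, perpetual growth 2% -> implied r = 7%.
r = implied_cost_of_equity(price=100.0, dividend=5.0, growth=0.02)
print(round(r, 6))  # → 0.07
```

Because the model value is monotone in the discount rate, bisection always converges; in practice one would run this inversion per firm per day against consensus forecasts.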
We investigate the connection between corporate governance system configurations and the role of intermediaries in the respective systems from an informational perspective. Building on the economics of information, we show that it is meaningful to distinguish between internalisation and externalisation as two fundamentally different ways of dealing with information in corporate governance systems. This lays the groundwork for a description of two types of corporate governance systems, i.e. insider control systems and outsider control systems, in which we focus on the distinctive role of intermediaries in the production and use of information. It will be argued that internalisation is the prevailing mode of information processing in insider control systems, while externalisation dominates in outsider control systems. We also briefly discuss the interrelations between the prevailing corporate governance system and the types of activities or industry structures it supports.
Tractable hedging - an implementation of robust hedging strategies [This Version: March 30, 2004]
(2004)
This paper provides a theoretical and numerical analysis of robust hedging strategies in diffusion–type models including stochastic volatility models. A robust hedging strategy avoids any losses as long as the realised volatility stays within a given interval. We focus on the effects of restricting the set of admissible strategies to tractable strategies which are defined as the sum over Gaussian strategies. Although a trivial Gaussian hedge is either not robust or prohibitively expensive, this is not the case for the cheapest tractable robust hedge which consists of two Gaussian hedges for one long and one short position in convex claims which have to be chosen optimally.
The main results obtained within the energy scan program at the CERN SPS are presented. The anomalies in the energy dependence of hadron production indicate that the onset of the deconfinement phase transition is located at about 30 A GeV. For the first time we seem to have clear evidence for the existence of a deconfined state of matter in nature. PACS numbers: 24.85.+p
A widely recognized paper by Colin Mayer (1988) has led to a profound revision of academic thinking about financing patterns of corporations in different countries. Using flow-of-funds data instead of balance sheet data, Mayer and others who followed his lead found that internal financing is the dominant mode of financing in all countries, that financing patterns do not differ very much between countries and that those differences which still seem to exist are not at all consistent with the common conviction that financial systems can be classified as being either bank-based or capital market-based. This leads to a puzzle insofar as it calls into question the empirical foundation of the widely held belief that there is a correspondence between the financing patterns of corporations on the one side, and the structure of the financial sector and the prevailing corporate governance system in a given country on the other side. The present paper addresses this puzzle on a methodological and an empirical basis. It starts by comparing and analyzing various ways of measuring financial structure and financing patterns and by demonstrating that the surprising empirical results found by studies that relied on net flows are due to a hidden assumption. It then derives an alternative method of measuring financing patterns, which also uses flow-of-funds data, but avoids the questionable assumption. This measurement concept is then applied to patterns of corporate financing in Germany, Japan and the United States. 
The empirical results, which use an estimation technique for determining gross flows of funds in those cases in which empirical data are not available, are very much in line with the commonly held belief prior to Mayer’s influential contribution and indicate that the financial systems of the three countries do indeed differ from one another in a substantial way, and moreover in a way which is largely in line with the general view of the differences between the financial systems of the countries covered in the present paper.
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the strangeness production as a function of centre of mass energy and of the parameters of the source. We have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed strangeness hadronic phase space under-saturation. We show that, in this energy range, the use of hadron yields at midrapidity instead of in full phase space artificially enhances strangeness production and could lead to incorrect conclusions as far as the occurrence of full chemical equilibrium is concerned. In addition to the basic model with an extra strange quark non-equilibrium parameter, we have tested three more schemes: a two-component model superimposing hadrons coming out of single nucleon-nucleon interactions to those emerging from large fireballs at equilibrium, a model with local strangeness neutrality and a model with strange and light quark non-equilibrium parameters. The behaviour of the source parameters as a function of colliding system and collision energy is studied. The description of strangeness production entails a non-monotonic energy dependence of strangeness saturation parameter gamma_S with a maximum around 30A GeV. We also present predictions of the production rates of still unmeasured hadrons including the newly discovered Theta^+(1540) pentaquark baryon.
We suggest that the fluctuations of strange hadron multiplicity could be sensitive to the equation of state and microscopic structure of strongly interacting matter created at the early stage of high energy nucleus-nucleus collisions. They may serve as an important tool in the study of the deconfinement phase transition. We predict, within the statistical model of the early stage, that the ratio of properly filtered fluctuations of strange to non-strange hadron multiplicities should have a non-monotonic energy dependence with a minimum in the mixed phase region.
The data on mT spectra of K0S, K+ and K- mesons produced in all inelastic p+p and p+pbar interactions in the energy range sqrt(s)_NN = 4.7-1800 GeV are compiled and analyzed. The spectra are parameterized by a single exponential function, dN/(mT dmT) = C exp(-mT/T), and the inverse slope parameter T is the main object of study. The T parameter is found to be similar for K0S, K+ and K- mesons. It increases monotonically with collision energy from T ~ 30 MeV at sqrt(s)_NN = 4.7 GeV to T ~ 220 MeV at sqrt(s)_NN = 1800 GeV. The T parameter measured in p+p and p+pbar interactions is significantly lower than the corresponding parameter obtained for central Pb+Pb collisions at all studied energies. Also the shape of the energy dependence of T differs between central Pb+Pb collisions and p+p(pbar) interactions.
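Extracting the inverse slope parameter from the exponential parameterization above amounts to a log-linear fit: since ln[dN/(mT dmT)] = ln C - mT/T, the slope of the log spectrum against mT is -1/T. The sketch below recovers T from a synthetic noise-free spectrum with T = 160 MeV; the grid and normalization are invented for illustration, not taken from the compiled data.

```python
import math

# Synthetic invariant spectrum dN/(mT dmT) = C * exp(-mT / T), with
# T = 0.160 GeV and arbitrary normalization, on an illustrative mT grid.
T_true, C_true = 0.160, 50.0
mT = [0.50 + 0.05 * i for i in range(20)]     # mT values in GeV
y = [C_true * math.exp(-m / T_true) for m in mT]

# ln y = ln C - mT / T, so a least-squares slope of ln y vs mT gives -1/T.
ly = [math.log(v) for v in y]
n = len(mT)
mbar = sum(mT) / n
lbar = sum(ly) / n
slope = sum((m - mbar) * (l - lbar) for m, l in zip(mT, ly)) / \
        sum((m - mbar) ** 2 for m in mT)
T_fit = -1.0 / slope
print(round(T_fit, 6))  # → 0.16
```

With real data one would fit the binned yields with statistical weights, but the mapping from slope to T is the same.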
We propose a method to experimentally study the equation of state of strongly interacting matter created at the early stage of nucleus-nucleus collisions. The method exploits the relation between relative entropy and energy fluctuations and the equation of state. As a measurable quantity, the ratio of properly filtered multiplicity and energy fluctuations is proposed. Within a statistical approach to the early stage of nucleus-nucleus collisions, this fluctuation ratio exhibits a non-monotonic collision energy dependence with a maximum in the domain where the onset of deconfinement occurs.
Production of Lambda and anti-Lambda hyperons was measured in central Pb-Pb collisions at 40, 80, and 158 A GeV beam energy on a fixed target. Transverse mass spectra and rapidity distributions are given for all three energies. The Lambda/pi ratio at mid-rapidity and in full phase space shows a pronounced maximum between the highest AGS and 40 A GeV SPS energies, whereas the anti-Lambda/pi ratio exhibits a monotonic increase. PACS numbers: 25.75.-q
Fluctuations of charged particle number are studied in the canonical ensemble. In the infinite volume limit the fluctuations in the canonical ensemble are different from the fluctuations in the grand canonical one. Thus, the well-known equivalence of both ensembles for average quantities does not extend to the fluctuations. In view of the possible relevance of these results for the analysis of fluctuations in nuclear collisions at high energies, the role of limited kinematical acceptance is also studied.
Report from NA49
(2004)
The most recent data of NA49 on hadron production in nuclear collisions at CERN SPS energies are presented. Anomalies in the energy dependence of pion and kaon production in central Pb+Pb collisions are observed. They suggest that the onset of deconfinement is located at about 30 AGeV. Large multiplicity and transverse momentum fluctuations are measured for collisions of intermediate mass systems at 158 AGeV. The need for a new experimental programme at the CERN SPS is underlined.
The transverse mass mt distributions for deuterons and protons are measured in Pb+Pb reactions near midrapidity in the range 0 < mt - m < 1.0 (1.5) GeV/c^2 for minimum bias collisions at 158A GeV and for central collisions at 40 and 80A GeV beam energies. The rapidity density dn/dy, the inverse slope parameter T and the mean transverse mass <mt> derived from the mt distributions, as well as the coalescence parameter B2, are studied as a function of the incident energy and the collision centrality. The deuteron mt spectra are significantly harder than those of protons, especially in central collisions. The coalescence factor B2 shows three systematic trends. First, it decreases strongly with increasing centrality, reflecting an enlargement of the deuteron coalescence volume in central Pb+Pb collisions. Second, it increases with mt. Finally, B2 increases with decreasing incident beam energy even within the SPS energy range. The results are discussed and compared to the predictions of models that include the collective expansion of the source created in Pb+Pb collisions.
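The coalescence parameter mentioned above is defined through the relation that the invariant deuteron yield at momentum p_d = 2*p_p equals B2 times the square of the invariant proton yield at p_p, so extracting it from measured spectra is a one-line computation. The yield values below are invented for illustration and are not NA49 measurements.

```python
def coalescence_B2(inv_yield_d, inv_yield_p):
    # B2 (in GeV^2/c^3 for invariant yields E d^3N/dp^3) from the relation
    #   E_d d^3N_d/dp_d^3 (at p_d = 2*p_p) = B2 * [E_p d^3N_p/dp_p^3]^2.
    return inv_yield_d / inv_yield_p ** 2

# Illustrative (made-up) invariant yields at matched momenta:
proton_yield = 2.0e-1     # E d^3N/dp^3 at p_p
deuteron_yield = 4.0e-4   # E d^3N/dp^3 at p_d = 2*p_p
B2 = coalescence_B2(deuteron_yield, proton_yield)
print(B2)  # → 0.01
```

A smaller B2 at fixed yields corresponds to a larger coalescence volume, which is why B2 falls with increasing centrality in the abstract's first trend.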
Preliminary results on pion-pion Bose-Einstein correlations in central Pb+Pb collisions measured by the NA49 experiment are presented. The rapidity and transverse momentum dependence of the HBT radii is shown for collisions at 20, 30, 40, 80, and 158 AGeV beam energy. Including results from AGS and RHIC experiments, only a weak energy dependence of the radii is observed. Based on hydrodynamical models, parameters such as the lifetime and geometrical radius of the source are derived from the dependence of the radii on transverse momentum.
Event-by-event fluctuations of particle ratios in central Pb + Pb collisions at 20 to 158 AGeV
(2004)
In the vicinity of the QCD phase transition, critical fluctuations have been predicted to lead to non-statistical fluctuations of particle ratios, depending on the nature of the phase transition. Recent results of the NA49 energy scan program show a sharp maximum of the ratio of K+ to Pi+ yields in central Pb+Pb collisions at beam energies of 20-30 AGeV. This observation has been interpreted as an indication of a phase transition at low SPS energies. We present first results on event-by-event fluctuations of the kaon to pion and proton to pion ratios at beam energies close to this maximum.
Results are presented on event-by-event electric charge fluctuations in central Pb+Pb collisions at 20, 30, 40, 80 and 158 AGeV. The observed fluctuations are close to those expected for a gas of pions correlated by global charge conservation only. These fluctuations are considerably larger than those calculated for an ideal gas of deconfined quarks and gluons. The present measurements do not necessarily exclude reduced fluctuations from a quark-gluon plasma because these might be masked by contributions from resonance decays.
System size and centrality dependence of the balance function in A + A collisions at √sNN = 17.2 GeV
(2004)
Electric charge correlations were studied for p+p, C+C, Si+Si and centrality-selected Pb+Pb collisions at sqrt(s)_NN = 17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudo-rapidity correlations of oppositely charged particles were measured using the Balance Function method. The width of the Balance Function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
The hadronic final state of central Pb+Pb collisions at 20, 30, 40, 80, and 158 AGeV has been measured by the CERN NA49 collaboration. The mean transverse mass of pions and kaons at midrapidity stays nearly constant in this energy range, whereas at lower energies, at the AGS, a steep increase with beam energy was measured. Compared to p+p collisions as well as to model calculations, anomalies in the energy dependence of pion and kaon production at lower SPS energies are observed. These findings can be explained, assuming that the energy density reached in central A+A collisions at lower SPS energies is sufficient to force the hot and dense nuclear matter into a deconfined phase.
System size dependence of multiplicity fluctuations of charged particles produced in nuclear collisions at 158 A GeV was studied in the NA49 CERN experiment. Results indicate a non-monotonic dependence of the scaled variance of the multiplicity distribution with a maximum for semi-peripheral Pb+Pb interactions with number of projectile participants of about 35. This effect is not observed in a string-hadronic model of nuclear collision HIJING.
In the early Nineties the Hague Conference on Private International Law, on the initiative of the United States, started negotiations on a Convention on the Recognition and Enforcement of Foreign Judgments in Civil and Commercial Matters (the "Hague Convention"). In October 1999 the Special Commission on duty presented a preliminary text, which was drafted quite closely along the lines of the European Convention on Jurisdiction and Enforcement of Judgments in Civil and Commercial Matters (the "Brussels Convention"). The latter was concluded between the then six Member States of the EEC in Brussels in 1968 and amended several times on the occasion of the entry of new Member States. In 2000, after the Treaty of Amsterdam altered the legal basis for judicial co-operation in civil matters in Europe, it was transformed into an EC Regulation (the "Brussels I Regulation"). The 1999 draft of the Hague Convention was heavily criticized by the USA and other states for its European approach of a double convention, regulating not only the recognition and enforcement of judgments but at the same time the extent of and the limits to jurisdiction to adjudicate in international cases. During a diplomatic conference in June 2001 a second draft was presented which contained alternative versions of several articles and thus documented the existing dissent more than it resembled a draft convention. Difficulties in reaching a consensus remained, especially with regard to activity-based jurisdiction, intellectual property, consumer rights and employee rights. In addition, the appropriateness of the whole draft was questioned in light of the problems posed by the de-territorialization of relevant conduct through the advent of the Internet. In April 2002 it was decided to continue negotiations on an informal level on the basis of a nucleus approach. The core consensus as identified by a working group, however, was not very broad.
The experts involved came to the conclusion that the project should be limited to choice of court agreements. In March 2004 a draft was presented which sets out its aims as follows: "The objective of the Convention is to make exclusive choice of court agreements as effective as possible in the context of international business. The hope is that the Convention will do for choice of court agreements what the New York Convention of 1958 has done for arbitration agreements." In April 2004 the Special Commission of the Hague Conference adopted a draft "Convention on Exclusive Choice of Court Agreements", which according to its Art. 2 No. 1 a) is not applicable to choice of court agreements "to which a natural person acting primarily for personal, family or household purposes (a consumer) is a party". The broader project of a global judgments convention thus seems to have been abandoned, or at least postponed for an unlimited period of time. There are, of course, several reasons why the Hague Judgments project failed. Samuel Baumgartner has described an important one as the "Justizkonflikt" between the United States and Europe or, more specifically, Germany. Within the context of the general topic of this conference, that is, (international) jurisdiction for human rights, in the remainder of this presentation I shall elaborate on the socio-cultural aspects of the impartiality of judgments and their enforcement on a global scale.
In April 2003 I commented on the European Commission’s Action Plan on a More Coherent European Contract Law [COM(2003) 68 final] and the Green Paper on the Modernisation of the 1980 Rome Convention [COM(2002) 654 final].1 The main argument of that paper, i.e. the common neglect of the inherent interrelation between the further harmonisation of substantive contract law (by directives or through an optional European Civil Code) on the one hand and the modernisation of the conflict rules for consumer contracts in Art. 5 Rome Convention on the other hand, remains a pressing issue. As the German Law Journal continues its efforts in offering timely and critical analysis of consumer law issues,2 there is a variety of recent developments worth noting.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From these, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in-situ observations from the high-altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4·10^-6 cm^-3 h^-1 in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, denitrification has only a moderate impact (<=10%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
Configuration, simulation and visualization of simple biochemical reaction-diffusion systems in 3D
(2004)
Background: In biological systems, molecules of different species diffuse within the reaction compartments and interact with each other, ultimately giving rise to structures as complex as living cells. In order to investigate the formation of subcellular structures and patterns (e.g. in signal transduction) or spatial effects in metabolic processes, it would be helpful to use simulations of such reaction-diffusion systems. Pattern formation has been extensively studied in two dimensions. However, the extension to three-dimensional reaction-diffusion systems poses some challenges to the visualization of the processes being simulated. Scope of the Thesis: The aim of this thesis is the specification and development of algorithms and methods for the three-dimensional configuration, simulation and visualization of biochemical reaction-diffusion systems consisting of a small number of molecules and reactions. After an initial review of the existing literature on 2D/3D reaction-diffusion systems, a 3D simulation algorithm (PDE solver), based on an existing 2D simulation algorithm for reaction-diffusion systems written by Prof. Herbert Sauro, has to be developed. In a subsequent step, this algorithm has to be optimized for high performance. A prototypic 3D configuration tool for the initial state of the system has to be developed. This basic tool should enable the user to define and store the location of molecules, membranes and channels within a reaction space of user-defined size. A suitable data structure has to be defined for the representation of the reaction space. The main focus of this thesis is the specification and prototypic implementation of a suitable reaction space visualization component for the display of the simulation results. In particular, the possibility of 3D visualization during the course of the simulation has to be investigated. During the development phase, the quality and usability of the visualizations have to be evaluated in user tests.
The simulation, configuration and visualization prototypes should be compliant with the Systems Biology Workbench to ensure compatibility with software from other authors. The thesis is carried out in close cooperation with Prof. Herbert Sauro at the Keck Graduate Institute, Claremont, CA, USA. Due to this international cooperation the thesis will be written in English.
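The thesis's PDE solver itself is not reproduced here; purely as an illustration of the kind of computation such a 3D reaction-diffusion simulator performs, the following is a minimal sketch (not Prof. Sauro's algorithm) of one explicit finite-difference diffusion step on a cubic grid with zero-flux boundaries, with a toy first-order reaction applied via operator splitting:

```python
def laplacian(u, x, y, z, n):
    # 7-point Laplacian with zero-flux (reflective) boundaries:
    # out-of-range neighbours are skipped, so no mass leaves the
    # reaction space.
    total = 0.0
    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < n and 0 <= ny < n and 0 <= nz < n:
            total += u[nx][ny][nz] - u[x][y][z]
    return total

def diffusion_step(u, D, dt, dx):
    # One explicit Euler step of du/dt = D * Laplacian(u).
    # Stable only if 6 * D * dt / dx**2 <= 1.
    n = len(u)
    return [[[u[x][y][z] + D * dt / dx**2 * laplacian(u, x, y, z, n)
              for z in range(n)]
             for y in range(n)]
            for x in range(n)]

def decay_step(u, k, dt):
    # Toy reaction term (first-order decay), applied between
    # diffusion steps via operator splitting.
    return [[[c * (1.0 - k * dt) for c in row] for row in plane]
            for plane in u]
```

Because each neighbour exchange in the Laplacian is symmetric, the diffusion step conserves total concentration exactly (up to floating-point error), which is a useful sanity check for any such solver.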
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the chemical equilibration of the system as a function of the center-of-mass energy and of the parameters of the source. Additionally, we have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed under-saturation of the strangeness hadronic phase space.
New results on the production of Xi and Omega hyperons in Pb+Pb interactions at 40 A GeV and of Lambda at 30 A GeV are presented. Transverse mass spectra as well as rapidity spectra of these hyperons are shown and compared to previously measured data at different beam energies. The energy dependence of hyperon production (4 Pi yields) is discussed. Additionally, the centrality dependence of Xi- production at 40 A GeV is presented.
In the last decade, much effort has gone into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with, and hence recognize, the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high-precision, robust, knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to a corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
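The trade-off behind a salience threshold approach can be illustrated with a toy sketch (all candidate names, scores, and cases below are invented for illustration; this is not the evaluated system): a resolver that refuses to commit when the most salient candidate falls below a threshold abstains on the hard cases, gaining precision at the cost of recall.

```python
def resolve(candidates, threshold):
    # candidates: list of (antecedent, salience) pairs for one pronoun.
    # Commit to the most salient candidate only if it clears the
    # threshold; otherwise leave the anaphor unresolved.
    antecedent, salience = max(candidates, key=lambda c: c[1])
    return antecedent if salience >= threshold else None

def precision_recall(cases, threshold):
    # cases: list of (candidates, gold_antecedent) pairs.
    # Precision counts only attempted resolutions; recall counts all.
    attempted = correct = 0
    for candidates, gold in cases:
        answer = resolve(candidates, threshold)
        if answer is not None:
            attempted += 1
            correct += (answer == gold)
    precision = correct / attempted if attempted else 1.0
    recall = correct / len(cases)
    return precision, recall
```

On any test set where the low-salience cases are the ambiguous ones, raising the threshold makes the resolver abstain exactly on those cases, so precision rises while recall drops, which is the sense in which "focusing on the competence cases" gives precision precedence over recall.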
Assessing enhanced knowledge discovery systems (eKDSs) constitutes an intricate issue that is as yet only partially understood. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute for, the user in the exploration of the data. A qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions of future research are outlined.
This study analyses the labour market effects of fixed-term contracts (FTCs) in West Germany by microeconometric methods using individual and establishment level data. In the first part of the study, the role of FTCs in firms’ labour demand is analysed. An econometric investigation of the firms’ reasons for using FTCs is presented, focussing on the identification of the link between dismissal protection for permanent contract workers and the firms’ use of FTCs. Furthermore, a descriptive analysis of the role of FTCs in worker and job flows at the firm level is provided. The second part of the study evaluates the short-run effects of being employed on an FTC on working conditions and wages using a large cross-sectional dataset of employees. The final part of the study analyses whether taking up an FTC increases the (permanent contract) employment opportunities in the long run (stepping-stone effect) and whether FTCs affect the job-finding behaviour of unemployed job searchers. Firstly, an econometric unemployment duration analysis distinguishing between both types of contracts as destination states is performed. Secondly, the effects of entering into FTCs from unemployment on future (permanent contract) employment opportunities are evaluated, attempting to account for the sequential decision problem of job searchers.
We modify the concept of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82] towards a faster reduction algorithm. We organize LLL-reduction in segments of the basis. Our SLLL-bases approximate the successive minima of the lattice in nearly the same way as LLL-bases. For integer lattices of dimension n given by a basis of length 2^(O(n)), SLLL-reduction runs in O(n^(5+epsilon)) bit operations for every epsilon > 0, compared to O(n^(7+epsilon)) for the original LLL algorithm and O(n^(6+epsilon)) for the LLL algorithms of Schnorr (1988) and Storjohann (1996). We present an even faster algorithm for SLLL-reduction via iterated subsegments, running in O(n^3 log n) arithmetic steps.
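SLLL-reduction itself is not reproduced here, but the LLL baseline it accelerates can be sketched. The following is a minimal textbook LLL implementation with exact rational arithmetic (a didactic sketch, not the paper's algorithm; the repeated Gram-Schmidt recomputation is deliberately naive). SLLL organizes the same size-reduction and swap steps segment-wise to reduce the bit-operation count.

```python
from fractions import Fraction

def gram_schmidt(basis):
    # Gram-Schmidt orthogonalization; returns the orthogonal vectors
    # b*_j and the coefficients mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>.
    n = len(basis)
    ortho, mu = [], [[Fraction(0)] * n for _ in range(n)]
    for i, b in enumerate(basis):
        v = [Fraction(x) for x in b]
        for j in range(i):
            mu[i][j] = (sum(Fraction(b[k]) * ortho[j][k] for k in range(len(b)))
                        / sum(x * x for x in ortho[j]))
            v = [v[k] - mu[i][j] * ortho[j][k] for k in range(len(v))]
        ortho.append(v)
    return ortho, mu

def lll(basis, delta=Fraction(3, 4)):
    # Classic LLL: size-reduce row k, then either accept it (Lovasz
    # condition holds) or swap with row k-1 and step back.
    basis = [list(b) for b in basis]
    n, k = len(basis), 1
    sq = lambda v: sum(x * x for x in v)
    while k < n:
        for j in range(k - 1, -1, -1):
            _, mu = gram_schmidt(basis)   # recomputed for clarity, not speed
            q = round(mu[k][j])
            if q:
                basis[k] = [bk - q * bj for bk, bj in zip(basis[k], basis[j])]
        ortho, mu = gram_schmidt(basis)
        if sq(ortho[k]) >= (delta - mu[k][k - 1] ** 2) * sq(ortho[k - 1]):
            k += 1                         # Lovasz condition satisfied
        else:
            basis[k - 1], basis[k] = basis[k], basis[k - 1]
            k = max(k - 1, 1)
    return basis
```

The output basis is size-reduced (all |mu[i][j]| <= 1/2) and satisfies the Lovász condition for every consecutive pair, which is exactly the invariant that segment-wise variants like SLLL maintain per segment.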
Let G be a Fuchsian group containing two torsion-free subgroups defining isomorphic Riemann surfaces. Then these surface subgroups K and alpha K alpha^(-1) are conjugate in PSL(2,R), but in general the conjugating element alpha cannot be taken in G or in a finite index Fuchsian extension of G. We will show that in the case of a normal inclusion in a triangle group G these alpha can be chosen in some triangle group extending G. It turns out that the method leading to this result also allows us to answer the question of how many different regular dessins of the same type can exist on a given quasiplatonic Riemann surface.
The large-conductance voltage- and Ca2+-activated potassium (BK) channel has been suggested to play an important role in the signal transduction process of cochlear inner hair cells. BK channels have been shown to be composed of the pore-forming alpha-subunit coexpressed with the auxiliary beta-1-subunit. Analyzing the hearing function and cochlear phenotype of BK channel alpha-subunit (BKalpha–/–) and beta-1-subunit (BKbeta-1–/–) knockout mice, we demonstrate normal hearing function and cochlear structure of BKbeta-1–/– mice. Most surprisingly, BKalpha–/– mice also did not show any obvious hearing deficits during the first 4 postnatal weeks. High-frequency hearing loss developed in BKalpha–/– mice only from ca. 8 weeks postnatally onward and was accompanied by a lack of distortion product otoacoustic emissions, suggesting outer hair cell (OHC) dysfunction. Hearing loss was linked to a loss of the KCNQ4 potassium channel in the membranes of OHCs in the basal and midbasal cochlear turn, preceding hair cell degeneration and leading to a phenotype similar to that elicited by pharmacologic blockade of KCNQ4 channels. Although the actual link between BK gene deletion, loss of KCNQ4 in OHCs, and OHC degeneration requires further investigation, the data already suggest mutation of the human BK-coding slo1 gene as a susceptibility factor for progressive deafness, similar to KCNQ4 potassium channel mutations. © 2004, The National Academy of Sciences. Freely available online through the PNAS open access option.
Dendritic cells (DC) are known to present exogenous protein antigens (Ag) effectively to T cells. In this study we sought to identify the proteases that DC employ during antigen processing. The murine epidermal-derived DC line XS52, when pulsed with PPD, optimally activated the PPD-reactive Th1 clone LNC.2F1 as well as the Th2 clone LNC.4k1, and this activation was completely blocked by chloroquine pretreatment. These results validate the capacity of XS52 DC to digest PPD into immunogenic peptides inducing antigen-specific T cell immune responses. XS52 DC, as well as splenic DC and bone marrow-derived DC, degraded standard substrates for cathepsins B, C, D/E, H, J, and L, tryptase, and chymases, indicating that DC express a variety of protease activities. Treatment of XS52 DC with pepstatin A, an inhibitor of aspartic acid proteases, completely abrogated their capacity to present native PPD, but not trypsin-digested PPD fragments, to the Th1 and Th2 cell clones. Pepstatin A also selectively inhibited cathepsin D/E activity among the XS52 DC-associated protease activities. On the other hand, inhibitors of serine proteases (dichloroisocoumarin, DCI) or of cysteine proteases (E-64) neither impaired XS52 DC presentation of PPD nor inhibited cathepsin D/E activity. Finally, all tested DC populations (XS52 DC, splenic DC, and bone marrow-derived DC) constitutively expressed cathepsin D mRNA. These results suggest that DC primarily employ cathepsin D (and perhaps E) to digest PPD into antigenic peptides.
Background: The neurophysiological and neuroanatomical foundations of persistent developmental stuttering (PDS) are still a matter of dispute. A main argument is that stutterers show atypical anatomical asymmetries of speech-relevant brain areas, which possibly affect speech fluency. The major aim of this study was to determine whether adults with PDS have anomalous anatomy in cortical speech-language areas. Methods: Adults with PDS (n = 10) and controls (n = 10) matched for age, sex, hand preference, and education were studied using high-resolution MRI scans. Using a new variant of the voxel-based morphometry technique (augmented VBM), the brains of stutterers and non-stutterers were compared with respect to white matter (WM) and grey matter (GM) differences. Results: We found increased WM volumes in a right-hemispheric network comprising the superior temporal gyrus (including the planum temporale), the inferior frontal gyrus (including the pars triangularis), the precentral gyrus in the vicinity of the face and mouth representation, and the anterior middle frontal gyrus. In addition, we detected a leftward WM asymmetry in the auditory cortex of non-stutterers, while stutterers showed symmetric WM volumes. Conclusions: These results provide strong evidence that adults with PDS have anomalous anatomy not only in perisylvian speech and language areas but also in prefrontal and sensorimotor areas. Whether this atypical asymmetry of WM is the cause or the consequence of stuttering is still an unanswered question. This article is available from: http://www.biomedcentral.com/1471-2377/4/23 © 2004 Jäncke et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.