Strangeness enhancement is discussed as a feature specific to relativistic nuclear collisions which create a fireball of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter. The hadron gas at the instant of its formation captures conditions directly at the QCD phase boundary at top SPS and RHIC energy, chiefly the critical temperature and energy density.
Relativistic nucleus-nucleus collisions create a "fireball" of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic, perhaps coherent degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter.
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In the second step, time series components, such as trends, are removed to obtain a deeper insight into the causes of the observed changes. As an example, this technique is applied to the time series of the German station Eppenrod. It emerges that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons also show this increase, especially in autumn, in Germany an insignificant decrease in the summer and autumn seasons is found. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and in the Weibull probability density function parameters.
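The counting step lends itself to a compact illustration. The following is a minimal sketch, not taken from the paper: it counts monthly totals above a fixed percentile threshold in each half of a record, with synthetic data standing in for the gridded and station series and the 95th percentile as a placeholder threshold choice.

```python
import numpy as np

def count_exceedances(series, threshold):
    """Count values above a threshold in the first and second
    halves of the observation period."""
    series = np.asarray(series, dtype=float)
    half = len(series) // 2
    first = int((series[:half] > threshold).sum())
    second = int((series[half:] > threshold).sum())
    return first, second

# Synthetic stand-in for 100 years of monthly precipitation totals (mm)
rng = np.random.default_rng(0)
monthly_totals = rng.weibull(1.2, size=1200) * 60.0
threshold = np.percentile(monthly_totals, 95)  # placeholder threshold
print(count_exceedances(monthly_totals, threshold))
```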
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods therefore fail to model the nonlinearities of such a system, which makes it necessary to find alternative statistical techniques. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for several years now. Most authors use the standard Backpropagation Network (BPN) for their investigations, although this specific model architecture carries a certain risk of over-/underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented Fast Simulated Annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a given cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño–Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of a multiple linear regression model; the simulation quality on the global scale reaches up to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994, updated by Jones, 1999a) for the examined period 1856–1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
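For readers unfamiliar with the optimization scheme, here is a minimal sketch of a fast-simulated-annealing loop, not the authors' implementation: it uses Cauchy-distributed jumps with the rapid cooling schedule T(t) = T0/(1+t) associated with Szu's method, and a placeholder quadratic cost stands in for a network's error function.

```python
import numpy as np

def fsa_minimize(cost, x0, t0=1.0, steps=20000, seed=0):
    """Fast simulated annealing: Cauchy-distributed jumps with the
    rapid cooling schedule T(t) = T0 / (1 + t)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    best_x, best_f = x.copy(), fx
    for t in range(steps):
        temp = t0 / (1.0 + t)
        # The heavy-tailed Cauchy visiting distribution permits
        # occasional long jumps out of local minima.
        candidate = x + temp * rng.standard_cauchy(size=x.shape)
        fc = cost(candidate)
        # Metropolis acceptance at the current temperature
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(temp, 1e-12)):
            x, fx = candidate, fc
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Hypothetical usage: anneal four weights toward the cost minimum
x_opt, f_opt = fsa_minimize(lambda w: np.sum((w - 3.0) ** 2), np.zeros(4))
```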
Observed global and European spatiotemporally related fields of surface air temperature, mean-sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and known internal climate mechanisms such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporally related fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, as well as the residuals. Finally, a back-transformation leads to the spatiotemporally related patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance including natural variability. This test answers the question of whether a significant anthropogenic climate change is visible in the observed data. As a second test, the anthropogenic signal is tested with respect to the climate noise component only. This test answers the question of whether the anthropogenic signal is detectable among the other signals in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed climate (first test).
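The two-step separation can be illustrated compactly. The sketch below is not the paper's code: it applies PCA via an SVD of a synthetic anomaly field, regresses each principal-component time series onto placeholder forcing indices with ordinary least squares (where the paper uses stepwise regression), and back-transforms the fitted part into a spatiotemporal signal field.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_grid, n_forcings = 120, 50, 3

# Placeholder data: anomaly field (time x grid), forcing indices (time x forcings)
field = rng.normal(size=(n_time, n_grid))
forcings = rng.normal(size=(n_time, n_forcings))

# Step 1: PCA of the centered field via SVD
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
k = 10                      # retain the leading k patterns
scores = u[:, :k] * s[:k]   # PC time series (time x k)
patterns = vt[:k]           # spatial patterns (k x grid)

# Step 2: regress each PC time series onto the forcing indices
x = np.column_stack([np.ones(n_time), forcings])
coef, *_ = np.linalg.lstsq(x, scores, rcond=None)
signal_scores = x @ coef    # forced part of each PC series

# Step 3: back-transform to the spatiotemporal signal and residual fields
signal_field = signal_scores @ patterns
residual_field = anom - signal_field
```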
First results on the production of Xi- and Anti-xi hyperons in Pb+Pb interactions at 40 A GeV are presented. The Anti-xi/Xi- ratio at midrapidity is studied as a function of collision centrality. The ratio shows no significant centrality dependence within statistical errors; it ranges from 0.07 to 0.15. The Anti-xi/Xi- ratio for central Pb+Pb collisions increases strongly with the collision energy.
German version: Expertise als soziale Institution: Die Internalisierung Dritter in den Vertrag. In: Gert Brüggemeier (ed.): Liber Amicorum Eike Schmidt. Müller, Heidelberg, 2005, 303-334.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
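As a toy illustration of the high-precision stance, and not any of the three approaches evaluated in the paper, consider a resolver that commits to an antecedent only when exactly one candidate survives strict agreement filters, abstaining otherwise and thereby trading recall for precision:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    text: str
    gender: str  # 'm', 'f', 'n'
    number: str  # 'sg', 'pl'

def resolve(pronoun: Mention, candidates: list[Mention]) -> Mention | None:
    """Return the unique agreeing antecedent, or None (abstain)."""
    agreeing = [c for c in candidates
                if c.gender == pronoun.gender and c.number == pronoun.number]
    return agreeing[0] if len(agreeing) == 1 else None

candidates = [Mention("the report", "n", "sg"), Mention("Dr. Smith", "f", "sg")]
print(resolve(Mention("she", "f", "sg"), candidates))   # -> Dr. Smith
print(resolve(Mention("it", "n", "sg"),
              [Mention("the files", "n", "pl")]))       # -> None (abstains)
```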
This paper is focused on the coordination of order and production policy between buyers and suppliers in supply chains. When a buyer and a supplier of an item work independently, the buyer will place orders based on his economic order quantity (EOQ). However, the buyer's EOQ may not lead to an optimal policy for the supplier. It can be shown that a cooperative batching policy can reduce total cost significantly. Should the buyer have the more powerful position to enforce his EOQ on the supplier, then no incentive exists for him to deviate from his EOQ in order to choose a cooperative batching policy. To provide an incentive to order in quantities suitable to the supplier, the supplier could offer a side payment. One critical assumption made throughout the literature dealing with incentive schemes to influence the buyer's ordering policy is that the supplier has complete information regarding the buyer's cost structure. However, this assumption is far from realistic. As a consequence, the buyer has no incentive to report truthfully on his cost structure. Moreover, there is an incentive to overstate the total relevant cost in order to obtain as high a side payment as possible. This paper provides a bargaining model with asymmetric information about the buyer's cost structure, assuming that the buyer has the bargaining power to enforce his EOQ on the supplier in case of a break-down in negotiations. An algorithm for the determination of an optimal set of contracts, each designed for a different cost structure of the buyer as assumed by the supplier, will be presented. This algorithm was implemented in a software application that supports the supplier in determining the optimal set of contracts.
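The gain from cooperation is easy to see numerically. Below is a minimal sketch under textbook assumptions, not the paper's bargaining algorithm: it computes the buyer's EOQ, the jointly optimal lot size, and the total-cost saving that a side payment could split; all parameter values are placeholders.

```python
from math import sqrt

def eoq(demand, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2 * D * S / h)."""
    return sqrt(2.0 * demand * order_cost / holding_cost)

def period_cost(q, demand, order_cost, holding_cost):
    """Ordering plus holding cost per period for lot size q."""
    return demand / q * order_cost + holding_cost * q / 2.0

# Placeholder parameters for buyer and supplier
D = 1000.0                          # demand per period
s_buyer, h_buyer = 50.0, 4.0        # buyer's ordering / holding cost
s_supplier, h_supplier = 400.0, 1.0 # supplier's setup / holding cost

q_buyer = eoq(D, s_buyer, h_buyer)  # buyer's independently optimal lot size
q_joint = eoq(D, s_buyer + s_supplier, h_buyer + h_supplier)  # joint optimum

total = lambda q: (period_cost(q, D, s_buyer, h_buyer)
                   + period_cost(q, D, s_supplier, h_supplier))
saving = total(q_buyer) - total(q_joint)  # what cooperation can save
print(q_buyer, q_joint, saving)
```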
We present a novel practical algorithm that, given a lattice basis b1, ..., bn, finds in O(n^2 (k/6)^(k/4)) average time a shorter vector than b1, provided that b1 is (k/6)^(n/(2k)) times longer than the length of the shortest nonzero lattice vector. We assume that the given basis b1, ..., bn has an orthogonal basis that is typical for worst case lattice bases. The new reduction method samples short lattice vectors in high dimensional sublattices, and it advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.
We enhance the security of Schnorr blind signatures against the novel one-more forgery of Schnorr [Sc01] and Wagner [W02], which is possible even if the discrete logarithm is hard to compute. We show two limitations of this attack. Firstly, replacing the group G by the s-fold direct product G^(×s) increases the work of the attack, for a given number of signer interactions, to the s-th power, while increasing the work of the blind signature protocol merely by a factor s. Secondly, we bound the number of additional signatures per signer interaction that can be forged effectively. This fraction of additional forged signatures can be made arbitrarily small.
Presentation at the Università di Pisa, Pisa, Italy, 3 July 2002; at the conference 'Irreversible Quantum Dynamics', the Abdus Salam ICTP, Trieste, Italy, 29 July - 2 August 2002; and at the University of Natal, Pietermaritzburg, South Africa, 14 May 2003. Version of 24 April 2003: examples added; 16 December 2002: revised; 12 September 2002. See the corresponding papers "Zeno Dynamics of von Neumann Algebras", "Zeno Dynamics in Quantum Statistical Mechanics" and "Mathematics of the Quantum Zeno Effect".
Introduction: This open label, multicentre study was conducted to assess the times to offset of the pharmacodynamic effects and the safety of remifentanil in patients with varying degrees of renal impairment requiring intensive care.
Methods: A total of 40 patients, who were aged 18 years or older and had normal/mildly impaired renal function (estimated creatinine clearance ≥ 50 ml/min; n = 10) or moderate/severe renal impairment (estimated creatinine clearance <50 ml/min; n = 30), were entered into the study. Remifentanil was infused for up to 72 hours (initial rate 6–9 μg/kg per hour), with propofol administered if required, to achieve a target Sedation–Agitation Scale score of 2–4, with no or mild pain.
Results: There was no evidence of increased offset time with increased duration of exposure to remifentanil in either group. The times to offset of the effects of remifentanil (at 8, 24, 48 and 72 hours during scheduled down-titrations of the infusion) were more variable and statistically significantly longer in the moderate/severe group than in the normal/mild group at 24 hours and 72 hours. These observed differences were not clinically significant (the difference in mean offset at 72 hours was only 16.5 min). Propofol consumption was lower with the remifentanil-based technique than with hypnotic-based sedative techniques. There were no statistically significant differences between the renal function groups in the incidence of adverse events, and no deaths were attributable to remifentanil use.
Conclusion: Remifentanil was well tolerated, and the offset of its pharmacodynamic effects was prolonged neither by renal dysfunction nor by infusion for up to 72 hours.
The study of organisms with restricted dispersal abilities and a presence in the fossil record is particularly well suited to understanding the impact of climate changes on the distribution and genetic structure of species. Trochoidea geyeri (Soós 1926) is a land snail restricted to a patchy, insular distribution in Germany and France. Fossil evidence suggests that current populations of T. geyeri are relicts of a much more widespread distribution during more favourable climatic periods in the Pleistocene. Results: Phylogeographic analysis of the mitochondrial 16S rDNA and nuclear ITS-1 sequence variation was used to infer the history of the remnant populations of T. geyeri. Nested clade analysis for both loci suggested that the origin of the species lies in the Provence, from where it expanded its range first to Southwest France and subsequently from there to Germany. Estimated divergence times predating the last glacial maximum (25–17 ka) implied that the colonization of the northern part of the current species range occurred during the Pleistocene. Conclusion: We conclude that T. geyeri could quite successfully persist in cryptic refugia during major climatic changes in the past, despite a restricted capacity of individuals to actively avoid unfavourable conditions.
We present a method for the construction of a Krein space completion for spaces of test functions, equipped with an indefinite inner product induced by a kernel which is more singular than a distribution of finite order. This generalizes a regularization method for infrared singularities in quantum field theory, introduced by G. Morchio and F. Strocchi, to the case of singularities of infinite order. We give conditions for the possibility of this procedure in terms of local differential operators and the Gelfand-Shilov test function spaces, as well as an abstract sufficient condition. As a model case we construct a maximally positive definite state space for the Heisenberg algebra in the presence of an infinite infrared singularity. See the corresponding paper: Schmidt, Andreas U.: "Mathematical Problems of Gauge Quantum Field Theory: A Survey of the Schwinger Model" and the presentation "Infinite Infrared Regularization in Krein Spaces".
The paper analyses the effects of three sets of accounting rules for financial instruments - Old IAS before IAS 39 became effective, Current IAS or US GAAP, and the Full Fair Value (FFV) model proposed by the Joint Working Group (JWG) - on the financial statements of banks. We develop a simulation model that captures the essential characteristics of a modern universal bank with investment banking and commercial banking activities. We run simulations for different strategies (fully hedged, partially hedged) using historical data from periods with rising and falling interest rates. We show that under Old IAS a fully hedged bank can portray its zero economic earnings in its financial statements. As Old IAS offer much discretion, this bank may also present income that is either positive or negative. We further show that because of the restrictive hedge accounting rules, banks cannot adequately portray their best practice risk management activities under Current IAS or US GAAP. We demonstrate that - contrary to assertions from the banking industry - mandatory FFV accounting adequately reflects the economics of banking activities. Our detailed analysis identifies, in addition, several critical issues of the accounting models that have not been covered in previous literature. December 2002. Revised: June 2003. Later version: http://publikationen.ub.uni-frankfurt.de/volltexte/2005/1026/ with the title: "Accounting for financial instruments in the banking industry : conclusions from a simulation model"
This paper proposes an intertemporal model of venture capital investment with screening and advising, where the venture capitalist's time endowment is the scarce input factor. Screening improves the selection of firms receiving finance, advising allows firms to develop a marketable product, and both have a variable intensity. In our setup, optimal linear contracts solve the moral hazard problem. Screening, however, calls for an entrepreneur wage and does not allow for upfront payments, which would cause severe adverse selection. Project characteristics have implications for screening and advising intensity and for the distribution of profits. Finally, we develop a formal version of the "venture capital cycle" by extending the basic setup to a simple model of venture capital supply and demand.
This paper analyses the effects of the Initial Public Offering (IPO) market on real investment decisions in emerging industries. We first propose a model of IPO timing based on divergence of opinion among investors and short-sale constraints. Using a real option approach, we show that firms are more likely to go public when the ratio of overvaluation to profits is high, that is, after stock market run-ups. Because initial returns increase with the demand from optimistic investors at the time of the offer, the model provides an explanation for the observed positive causality between average initial returns and IPO volume. Second, we discuss the possibility of real overinvestment in high-tech industries. We claim that investing in the industry gives agents an option to sell the project on the stock market at an overvalued price, thereby enabling the financing of positive-NPV projects which would not be undertaken otherwise. It is shown, however, that the IPO market can also lead to overinvestment in new industries. Finally, we present some econometric results supporting the idea that funds committed to the financing of high-tech industries may respond positively to optimistic stock market valuations.