This paper studies a setting in which a risk averse agent must be motivated to work on two tasks: he (1) evaluates a new project and, if it is adopted, (2) manages it. While a performance measure that is informative of an agent's action is typically valuable because it can be used to improve the risk sharing of the contract, this is not necessarily the case in this two-task setting. I provide a sufficient condition under which a performance measure that is informative of the second task is worthless for contracting despite the agent being risk averse. This shows that information content is a necessary but not a sufficient condition for a performance measure to be valuable.
It is widely believed that the ideal board in corporations is composed almost entirely of independent (outside) directors. In contrast, this paper shows that some lack of board independence can be in the interest of shareholders. This follows because a lack of board independence serves as a substitute for commitment. Boards that are dependent on the incumbent CEO adopt a less aggressive CEO replacement rule than independent boards. While this behavior is inefficient ex post, it has positive ex ante incentive effects. The model suggests that independent boards (dependent boards) are most valuable to shareholders if the problem of providing appropriate incentives to the CEO is weak (severe).
Reflexive transnational law: the privatisation of civil law and the civilisation of private law
(2002)
The author examines the emergence of a transnational private law in alternative dispute resolution bodies and private norm-formulating agencies from a reflexive law perspective. After introducing the concept of reflexive law, he applies the idea of law as a communicative system to the ongoing debate on the existence of a New Law Merchant or lex mercatoria. He then discusses some features of international commercial arbitration (e.g. the lack of transparency) which hinder self-reference (autopoiesis) and thus the production of legal certainty in lex mercatoria as an autonomous legal system. He contrasts these findings with the Domain Name Dispute Resolution System, which, as opposed to lex mercatoria, was rationally planned and highly formally organised by WIPO and ICANN, allows for self-reference, and is thus designed as an autopoietic legal system, albeit with a very limited scope, namely the interference of abusive domain name registrations with trademarks (cybersquatting). From the comparison of both examples the author derives some preliminary ideas towards a theory of reflexive transnational law, suggesting that the established general trend of privatisation of civil law needs to be accompanied by a civilisation of private law, i.e. the constitutionalisation of transnational private regimes by embedding them into a procedural constitution of freedom.
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
We argue that the shape of the system-size dependence of strangeness production in nucleus-nucleus collisions can be understood in a picture that is based on the formation of clusters of overlapping strings. A string percolation model combined with a statistical description of the hadronization yields quantitative agreement with the data at $\sqrt{s_{NN}} = 17.3$ GeV. The model is also applied to RHIC energies.
A steep maximum occurs in the Wroblewski ratio between strange and non-strange quarks created in central nucleus-nucleus collisions of mass number about A = 200 at the lower SPS energy $\sqrt{s} \approx 7$ GeV. By analyzing hadronic multiplicities within the grand canonical statistical hadronization model, this maximum is shown to occur at a baryochemical potential of about 450 MeV. In comparison, recent lattice QCD calculations at finite baryochemical potential suggest a steep maximum of the light quark susceptibility at a similar $\mu_B$, indicative of the "critical fluctuations" expected to occur at or near the QCD critical endpoint. This endpoint has not been firmly pinned down but should occur in the interval $300~\mathrm{MeV} < \mu_B^c < 700~\mathrm{MeV}$. It is argued that central collisions in the low SPS energy range should exhibit a turning point between compression/heating and expansion/cooling at an energy density, temperature and $\mu_B$ close to the suspected critical point, whereas from top SPS to RHIC energy the primordial dynamics create a turning point far above in $\epsilon$ and $T$ and far below in $\mu_B$, and at lower AGS energies the dynamical trajectory stays below the phase boundary. Thus, the observed sharp strangeness maximum might coincide with the critical $\sqrt{s}$ at which the dynamics settle at, or near, the QCD endpoint.
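For reference (a standard definition, not quoted from the abstract above), the Wroblewski factor measuring relative strangeness production counts newly created quark-antiquark pairs:

$$\lambda_s = \frac{2\,\langle s\bar{s}\rangle}{\langle u\bar{u}\rangle + \langle d\bar{d}\rangle}$$

so a maximum in $\lambda_s$ signals a maximum in the strange-to-light quark production ratio.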
Strangeness enhancement is discussed as a feature specific to relativistic nuclear collisions which create a fireball of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter. The hadron gas at the instant of its formation captures conditions directly at the QCD phase boundary at top SPS and RHIC energy, chiefly the critical temperature and energy density.
Relativistic nucleus-nucleus collisions create a "fireball" of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic, perhaps coherent degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter.
With new data available from the SPS at 40 and 80 GeV/A, I review the systematics of bulk hadron multiplicities, with a prime focus on strangeness production. The classical concept of strangeness enhancement in central AA collisions is reviewed in view of the statistical hadronization model, which suggests that strangeness enhancement arises chiefly in the transition from the canonical to the grand canonical version of that model, i.e. enhancement results from the fading away of canonical suppression. The model also captures the striking strangeness maximum observed in the vicinity of $\sqrt{s} \approx 8$ GeV. A puzzle remains in the understanding of the apparent grand canonical order at the lower SPS and at AGS energies.
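As a schematic illustration of the canonical-to-grand-canonical transition invoked above (a textbook-style approximation, not the paper's own formula): exact strangeness conservation in the canonical ensemble suppresses the yield of a hadron carrying strangeness $s$ roughly by

$$F_s \approx \frac{I_s(x)}{I_0(x)}, \qquad x \propto V,$$

where the $I_s$ are modified Bessel functions and $x$ grows with the fireball volume $V$. Since $F_s \to 1$ as $V \to \infty$, small systems are canonically suppressed while large central AA systems recover the grand canonical yields, so the apparent "enhancement" is the fading of this suppression.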
Transverse momentum event-by-event fluctuations are studied within the string-hadronic model of high energy nuclear collisions, LUCIAE. Data on non-statistical pT fluctuations in p+p interactions are reproduced. Fluctuations of similar magnitude are predicted for nucleus-nucleus collisions, in contradiction to the preliminary NA49 results. The introduction of a string clustering mechanism (Firecracker Model) leads to a further, significant increase of pT fluctuations for nucleus-nucleus collisions. Secondary hadronic interactions, as implemented in LUCIAE, cause only a small reduction of pT fluctuations.
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods are therefore not suited to describing the nonlinearities of this system, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Extending an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear Backpropagation Neural Network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent concentrations of greenhouse gases, GHG, and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, the free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN far exceeds that of a linear model; on the global scale it amounts to 84% explained variance. The results of the nonlinear algorithm are also physically plausible in amplitude and time structure. Nevertheless, they cover a broad range; e.g. the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the period 1856-1998. The simulated amplitudes nonetheless lie within the range discussed in the literature (HOUGHTON et al., 2001), and the combined anthropogenic effect corresponds to the observed increase in temperature for the examined period. Moreover, the BPN succeeds in detecting anthropogenically induced climate change at a high significance level. The concept of neural networks can therefore be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
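To make the approach concrete, here is a minimal sketch of fitting a small backpropagation network to a temperature series from forcing inputs and reading off one forcing's contribution by toggling its input. All series, the network size and the signal-extraction step are illustrative assumptions, not the study's actual data or configuration:

```python
# Sketch: backpropagation regression of temperature anomalies on forcing
# series, with a crude single-forcing signal extraction. Synthetic data only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_years = 143                                       # e.g. 1856-1998
ghg   = np.linspace(0.0, 1.0, n_years)              # hypothetical GHG index
so2   = np.linspace(0.0, 0.5, n_years)              # hypothetical sulfate index
solar = 0.1 * np.sin(np.linspace(0, 26, n_years))   # ~11-year cycle stand-in
enso  = rng.normal(0, 0.1, n_years)                 # ENSO stand-in
X = np.column_stack([ghg, so2, solar, enso])
temp = 0.9 * ghg - 0.3 * so2 + solar + enso + rng.normal(0, 0.05, n_years)

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, temp)
print("explained variance (R^2):", net.score(X, temp))

# Crude GHG "signal": network response with GHG varying, other inputs at zero
X_ghg_only = np.column_stack([ghg, np.zeros(n_years),
                              np.zeros(n_years), np.zeros(n_years)])
ghg_signal = net.predict(X_ghg_only) - net.predict(np.zeros((n_years, 4)))
print("GHG warming over the period: %.2f K" % (ghg_signal[-1] - ghg_signal[0]))
```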
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899–1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period (see the sketch below). In the second step, time series components such as trends are removed to obtain a deeper insight into the causes of the observed changes; as an example, this technique is applied to the time series of the German station Eppenrod. It emerges that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons, especially autumn, also show this increase, in Germany an insignificant decrease is found in the summer and autumn seasons. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase both in the variance and in the parameters of the Weibull probability density function.
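A minimal illustration of the threshold-counting step, on synthetic monthly totals (the record length, distribution and threshold choice are placeholders, not the paper's data):

```python
# Sketch: count monthly precipitation totals above a fixed threshold in the
# first and second halves of a record. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
monthly = rng.gamma(shape=2.0, scale=30.0, size=1200)  # 100 years, mm/month
threshold = np.percentile(monthly, 95)                 # "extreme wet month" cut

first, second = monthly[:600], monthly[600:]
print("exceedances, first half :", int((first  > threshold).sum()))
print("exceedances, second half:", int((second > threshold).sum()))
```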
Hadronic yields and yield ratios observed in Pb+Pb collisions at the SPS energy of 158 GeV per nucleon are known to resemble a thermal equilibrium population at $T = 180 \pm 10$ MeV, also observed in elementary $e^+e^- \to$ hadrons data at LEP. We argue that this is the universal consequence of the QCD parton-to-hadron phase transition populating the maximum entropy state. This state is shown to survive the hadronic rescattering and expansion phase, freezing in right after hadronization due to the very rapid longitudinal and transverse expansion that is inferred from Bose-Einstein pion correlation analysis of central Pb+Pb collisions.
Simulation of global temperature variations and signal detection studies using neural networks
(1998)
Neural network models (NNM) provide a statistical strategy that can be used whenever a superposition of forcing mechanisms leads to observable effects and a sufficient observational data base is available. In comparison to multiple regression analysis (MRA), the main advantages are that NNM remain an appropriate tool in the case of non-linear cause-effect relations and that interactions of the forcing mechanisms are allowed for. In comparison to more sophisticated methods like general circulation models (GCM), the main advantage is that details of the physical background, such as feedbacks, may remain unknown: neural networks learn from observations, which reflect feedbacks implicitly. The disadvantage, of course, is that the physical background is neglected. In addition, the results prove to be sensitively dependent on the network architecture, e.g. the number of hidden neurons or the initialisation of learning parameters. We used a supervised backpropagation network (BPN) with three neuron layers, an unsupervised Kohonen network (KHN) and a combination of both called a counterpropagation network (CPN). These concepts are tested with respect to their ability to simulate the observed global as well as hemispheric mean surface air temperature annual variations 1874-1993 when parameter time series of the following forcing mechanisms are incorporated: equivalent CO2 concentrations, tropospheric sulfate aerosol concentrations (both anthropogenic), volcanism, solar activity, and ENSO (all natural). It emerges that in this way up to 83% of the observed temperature variance can be explained, significantly more than by MRA. Including the North Atlantic Oscillation does not improve these results. On a global average, the greenhouse gas (GHG) signal so far is assessed to be 0.9-1.3 K (warming) and the sulfate signal 0.2-0.4 K (cooling), results closely similar to the GCM findings published in the recent IPCC Report. The related signals of the natural forcing mechanisms considered cover amplitudes of 0.1-0.3 K. Our best NNM estimate of the GHG doubling signal amounts to 2.1 K (equilibrium) or 1.7 K (transient).
The climate system can be regarded as a dynamic nonlinear system. Thus, traditional linear statistical methods fail to model the nonlinearities of such a system, and these nonlinearities render it necessary to find alternative statistical techniques. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for some years now. Most authors use the standard Backpropagation Network (BPN) for their investigations, although this specific model architecture carries a certain risk of over- or underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented Fast Simulated Annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of the cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG) and sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, the free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds that of a multiple linear regression model; the simulation quality on the global scale amounts to up to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994, updated by Jones, 1999a) for the examined period 1856-1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
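For orientation, a toy sketch of fast simulated annealing with Cauchy-distributed jumps and the $T_k = T_0/(1+k)$ cooling schedule usually attributed to Szu; the cost function and all parameters below are arbitrary stand-ins, not the network cost function used in the study:

```python
# Sketch: fast (Cauchy) simulated annealing on a toy multimodal cost.
import numpy as np

def cost(w):
    return np.sum(w**2) + np.sin(5 * w).sum()       # arbitrary stand-in

rng = np.random.default_rng(2)
w = rng.normal(size=4)                              # "weights" being optimised
best_w, best_c = w.copy(), cost(w)
T0 = 1.0
for k in range(1, 5001):
    T = T0 / (1 + k)                                # fast annealing schedule
    w_new = w + T * rng.standard_cauchy(size=w.size)  # Cauchy-distributed jump
    dc = cost(w_new) - cost(w)
    if dc < 0 or rng.random() < np.exp(-dc / T):    # Metropolis acceptance
        w = w_new
        if cost(w) < best_c:
            best_w, best_c = w.copy(), cost(w)
print("best cost found:", best_c)
```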
Observed global and European spatiotemporal fields of surface air temperature, mean sea-level pressure and precipitation are analyzed statistically with respect to their response to external forcing factors such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations and explosive volcanism, and to known internal climate mechanisms such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporal fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, plus residuals. Finally, a back-transformation yields the spatiotemporal patterns of all these signals, which are then intercompared (see the sketch below). Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance including natural variability; this test answers the question whether a significant anthropogenic climate change is visible in the observed data. Second, the anthropogenic signal is tested with respect to the climate noise component only; this test answers the question whether the anthropogenic signal is detectable among the other signals in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed the climate (first test).
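To make the two-step procedure concrete, a compact sketch on synthetic data: PCA via SVD, ordinary least squares standing in for the paper's stepwise regression, and a back-transformation of one signal onto the grid. All names, sizes and forcings are illustrative assumptions:

```python
# Sketch: PCA of a space-time field, regression of PC time series on
# forcings, back-transformation of the GHG signal. Synthetic data only.
import numpy as np

rng = np.random.default_rng(3)
n_t, n_grid = 120, 50                                # years x grid points
forcings = np.column_stack([np.linspace(0, 1, n_t),  # GHG stand-in
                            rng.normal(0, 0.2, n_t)])  # ENSO stand-in
field = np.outer(forcings[:, 0], rng.normal(size=n_grid)) \
      + rng.normal(0, 0.5, (n_t, n_grid))

# Step 1: PCA via SVD of the anomaly field
anom = field - field.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U[:, :5] * S[:5]                               # leading PC time series
eofs = Vt[:5]                                        # spatial patterns

# Step 2: regress each PC on the forcings (plus intercept)
A = np.column_stack([forcings, np.ones(n_t)])
coef, *_ = np.linalg.lstsq(A, pcs, rcond=None)       # shape (3, 5)

# Back-transformation: GHG signal pattern in the original field space
ghg_signal = np.outer(forcings[:, 0], coef[0] @ eofs)
print("GHG signal field shape:", ghg_signal.shape)
```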
A selection of recent data on Pb+Pb collisions at the CERN SPS energy of 158 GeV per nucleon is presented which might describe the state of highly excited strongly interacting matter both above and below the deconfinement-to-hadronization (phase) transition predicted by lattice QCD. A tentative picture emerges in which a partonic state is indeed formed in central Pb+Pb collisions, which hadronizes at about T = 185 MeV and expands its volume more than tenfold, cooling to about 120 MeV before hadronic collisions cease. We suggest further that all SPS collisions, from central S+S onward, reach that partonic phase, the maximum energy density increasing with more massive collision systems.