In April 2003 I commented on the European Commission's Action Plan on a More Coherent European Contract Law [COM(2003) 68 final] and the Green Paper on the Modernisation of the 1980 Rome Convention [COM(2002) 654 final].1 The main argument of that paper, namely the common neglect of the inherent interrelation between the further harmonisation of substantive contract law (by directives or through an optional European Civil Code) and the modernisation of the conflict rules for consumer contracts in Art. 5 Rome Convention, remains a pressing issue. As the German Law Journal continues its efforts to offer timely and critical analysis of consumer law issues,2 there is a variety of recent developments worth noting.
The negative-pion multiplicity is measured for central collisions of 40Ar with KCl at eight energies from 0.36 to 1.8 GeV/nucleon and for 4He on KCl and 40Ar on BaI2 at 977 and 772 MeV/nucleon, respectively. A systematic discrepancy with a cascade-model calculation, which fits proton- and pion-nucleus cross sections but omits potential-energy effects, is used to derive the energy going into bulk compression of the system. Assuming a parabolic form of the nuclear-matter equation of state, a value of the incompressibility constant of K = 240 MeV is extracted.
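For orientation, the parabolic form referred to here is conventionally written as follows; this is a standard schematic parametrization, not necessarily the exact expression used in the analysis:

```latex
% Compression energy per nucleon for a parabolic nuclear-matter equation
% of state, relative to the saturation density \rho_0 (schematic form):
E_C(\rho) = \frac{K}{18}\left(\frac{\rho - \rho_0}{\rho_0}\right)^{2},
\qquad
K = 9\,\rho_0^{2}\,\left.\frac{\partial^{2}(E/A)}{\partial\rho^{2}}\right|_{\rho=\rho_0}
```

With K = 240 MeV, doubling the density (ρ = 2ρ₀) stores roughly K/18 ≈ 13 MeV of compression energy per nucleon.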
The parities of eleven J=1 levels in 208Pb were determined by nuclear resonance fluorescence scattering of linearly polarized photons. A new 1+ level at Ex = 5.846 MeV with Γ₀²/Γ = 1.2 ± 0.4 eV was found. This level can probably be identified with the theoretically predicted isoscalar 1+ state in 208Pb. All other bound dipole states below 7 MeV with Γ₀²/Γ > 1.5 eV have negative parity. The 1− assignment to the 4.842-MeV level is of special significance because of previous conflicting results about its parity.
The 16O(γ,p0) reaction has been studied with linearly polarized bremsstrahlung photons in and below the giant E1 resonance. The parity of the absorbed radiation was determined from the observed azimuthal asymmetry of the emitted protons. Combined with unpolarized measurements, the polarized results determine the proton decay amplitudes of the M1 resonance at Ex = 16.2 MeV in 16O. The shape of the unpolarized 16O(γ,p3) angular distribution in the giant E1 resonance was derived from the measured analyzing power. NUCLEAR REACTIONS 16O(γ,p), E = 15-25 MeV; measured analyzing power at θ = 90° with linearly polarized bremsstrahlung; 16O dipole levels deduced π; 16.2 MeV 1+ resonance deduced p0 decay amplitudes; 16O GEDR deduced p3 angular distribution.
The ultrarelativistic quantum molecular dynamics model (UrQMD) is used to study global observables in central reactions of Au+Au at √s = 200A GeV at the Relativistic Heavy Ion Collider (RHIC). Strong stopping governed by massive particle production is predicted if secondary interactions are taken into account. The underlying string dynamics and the early hadronic decoupling imply only small transverse expansion rates. However, rescattering with mesons is found to act as a source of pressure, leading to additional flow of baryons and kaons while cooling down pions.
11 262 keV 1+ state in 20Ne
(1983)
The excitation energy of the lowest 1+, T=1 state in 20Ne, which is important for parity nonconservation studies, has been determined in a photon scattering experiment to be 11 262.3 ± 1.9 keV. Values for the γ-ray branching of this level to the ground state and to the first 2+ level in 20Ne are 84 ± 5% and 16 ± 5%, respectively. NUCLEAR REACTIONS 20Ne(γ,γ), Eγ < 18 MeV, bremsstrahlung; measured Eγ, γ branching. Natural Ne targets.
Proton emission in relativistic nuclear collisions is examined for events of low and high multiplicity, corresponding to large and small impact parameters. Peripheral reactions exhibit distributions of protons in agreement with spectator-participant decay modes. Central collisions of equal-size nuclei are dominated by the formation and decay of a fireball system. Central collisions of light projectiles with heavy targets exhibit an enhancement in sideward emission which is predicted by recent hydrodynamical calculations.
Angular distributions for elastic and inelastic transitions in 20Ne + 16O scattering have been measured at E(20Ne) = 50 MeV. For the 0+, 2+, and 4+ members of the 20Ne ground-state rotational band, the angular distributions exhibit pronounced backward peaking characteristic of an α-cluster exchange mechanism. The analysis of the ground-state transition in the first-order elastic transfer model yields no satisfactory fit although microscopic cluster form factors and full recoil corrections are employed. A coupled channels calculation for the 0+, 2+, and 4+ transitions reveals very strong coupling effects, indicating that the coherent superposition of first-order optical model and distorted-wave Born-approximation amplitudes may not be an adequate model for these reactions. NUCLEAR REACTIONS 16O(20Ne, 16O) and 16O(20Ne, 20Ne), elastic and inelastic transfer; E = 50 MeV; measured σ(Ef, θ); optical model + DWBA, and CCBA analyses.
The elastic α scattering to backward angles has been studied for 40,42,44,48Ca between 40.7 and 72.3 MeV. The cross sections for 40Ca are larger than those for the heavier isotopes up to the highest energies. They show backward increases that disappear above 50 MeV. The enhancement factor for 40Ca over 42,44Ca varies smoothly with energy. 48Ca also shows a backward cross-section enhancement over 42,44Ca. α-cluster rotational bands in the 44Ti compound state, four-nucleon correlations in 40Ca, and the l-dependent optical model are discussed as approaches to understanding the anomaly. The rotator model appears to agree qualitatively with the experimental data. It involves rotational bands extending at least up to J=16 in 44Ti.
We present simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS) for the Arctic winter 2002/2003. We integrated a Lagrangian denitrification scheme into the three-dimensional version of CLaMS that calculates the growth and sedimentation of nitric acid trihydrate (NAT) particles along individual particle trajectories. From those, we derive the HNO3 downward flux resulting from different particle nucleation assumptions. The simulation results show a clear vertical redistribution of total inorganic nitrogen (NOy), with a maximum vortex-average permanent NOy removal of over 5 ppb in late December between 500 and 550 K and a corresponding increase of NOy of over 2 ppb below about 450 K. The simulated vertical redistribution of NOy is compared with balloon observations by MkIV and in-situ observations from the high-altitude aircraft Geophysica. Assuming a globally uniform NAT particle nucleation rate of 3.4·10⁻⁶ cm⁻³ h⁻¹ in the model, the observed denitrification is well reproduced. In the investigated winter 2002/2003, denitrification has only a moderate impact (≤10%) on the simulated vortex-average ozone loss of about 1.1 ppm near the 460 K level. At higher altitudes, above 600 K potential temperature, the simulations show significant ozone depletion through NOx-catalytic cycles due to the unusually early exposure of vortex air to sunlight.
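The HNO3 downward flux in such a scheme is set by the fall speed of the grown NAT particles. As a rough illustration only, a Stokes-law settling estimate can be sketched as below; the NAT density and air viscosity are assumed values, and the actual CLaMS microphysics is more elaborate:

```python
import numpy as np

def stokes_settling_velocity(radius_m, rho_particle=1620.0, eta_air=1.5e-5):
    """Terminal fall speed (m/s) of a small sphere in air from Stokes' law:
    v = 2 r^2 g rho_p / (9 eta). rho_particle ~1620 kg/m^3 is an assumed
    NAT density; the buoyancy of air is neglected."""
    g = 9.81
    return 2.0 * radius_m**2 * g * rho_particle / (9.0 * eta_air)

# Example: a 10-micron NAT particle falls on the order of 2 km per day
v = stokes_settling_velocity(10e-6)
print(f"settling velocity: {v:.2e} m/s  (~{v * 86400:.0f} m/day)")
```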
Chlorine monoxide (ClO) plays a key role in stratospheric ozone loss processes at midlatitudes. We present two balloonborne in situ measurements of ClO conducted in northern hemisphere midlatitudes during the period of the maximum total inorganic chlorine loading of the atmosphere. Both ClO measurements were conducted on board the TRIPLE balloon payload, launched in November 1996 in León, Spain, and in May 1999 in Aire sur l'Adour, France. For both flights a ClO daytime and nighttime vertical profile could be derived over an altitude range of approximately 15-31 km. ClO mixing ratios are compared to model simulations performed with the photochemical box model version of the Chemical Lagrangian Model of the Stratosphere (CLaMS). Simulations along 24-h backward trajectories were performed to study the diurnal variation of ClO in the midlatitude lower stratosphere. Model simulations for the flight launched in Aire sur l'Adour in 1999 show good agreement with the ClO measurements. For the flight launched in León in 1996, similarly good agreement is found, except around ~650 K potential temperature (~26 km altitude). However, for solar zenith angles greater than 86°-87° the simulated ClO mixing ratios substantially overestimate measured ClO, by approximately a factor of 2.5 or more, for both flights. We therefore conclude that, apart from the situation at solar zenith angles greater than 86°-87°, the presented ClO measurements give no indication of substantial uncertainties in midlatitude stratospheric chlorine chemistry.
Back-angle enhancements of elastic α-scattering cross sections have been observed for nuclei at the ends of the 1p, 2s-1d, and f₇/₂ shells. Strong reduction of this enhancement occurs if excess neutrons enter the next open major shell. The results are discussed in terms of intermediate α structure.
Pion-production cross sections have been measured for the reaction 40Ar + 40Ca → π⁺ + X at a laboratory energy of 1.05 GeV/nucleon. A maximum in the π⁺ cross section occurs at mid-rapidity, which is anomalous relative to p+p and p+nucleus reactions and compared to many other heavy-ion reactions. Calculations based on cascade and thermal models fail to fit the data.
Inclusive energy spectra of protons, deuterons, and tritons were measured with a telescope of silicon and germanium detectors with a detection range for proton energies up to 200 MeV. Fifteen sets of data were taken using projectiles ranging from protons to 40Ar on targets from 27Al to 238U at bombarding energies from 240 MeV/nucleon to 2.1 GeV/nucleon. Particular attention was paid to the absolute normalization of the cross sections. For three previously reported reactions, He fragment cross sections have been corrected and are presented. To facilitate a comparison with theory, the sum of nucleonic charges emitted as protons plus composite particles was estimated and is presented as a function of fragment energy per nucleon in the interval from 15 to 200 MeV/nucleon. For low-energy fragments at forward angles the protons account for only 25% of the nucleonic charges. The equal-mass 40Ar plus Ca system was examined in the center of mass; at 0.4 GeV/nucleon the proton spectra appear to be nearly isotropic in the center of mass over the region measured. Comparisons of some data with firestreak, cascade, and fluid dynamics models indicate a failure of the first and fair agreement with the latter two. In addition, associated fast charged-particle multiplicities (for particles with energies larger than 25 MeV/nucleon) and azimuthal correlations were measured with an 80-counter array of plastic scintillators. The associated multiplicities were found to be a smooth function of the total kinetic energy of the projectile. NUCLEAR REACTIONS U(20Ne,X), E/A = 240 MeV; U(40Ar,X), Ca(40Ar,X), U(20Ne,X), Au(20Ne,X), Ag(20Ne,X), Al(20Ne,X), U(4He,X), Al(4He,X), E/A = 390 MeV; U(40Ar,X), Ca(40Ar,X), U(20Ne,X), U(4He,X), U(p,X), E/A = 1.04 GeV; U(20Ne,X), E/A = 2.1 GeV; measured σ(E, θ), X = p, d, t.
Exclusive π⁻ and charged-particle production in collisions of Ar+KCl is studied at incident energies from 0.4 to 1.8 GeV/u. Complete disintegration of both nuclei is observed. The correlation between π⁻ and total charge multiplicity shows no islands of anomalous pion production. For constant numbers of proton participants the π⁻ multiplicity distributions are Poissonian. For central collisions ⟨nπ⁻⟩ increases smoothly, and to first order linearly, with the c.m. energy. Disagreement with the firestreak model is found. PACS numbers: 25.70.Hi, 24.10.Dp
Λ's produced in central collisions of 40Ar+KCl at 1.8-GeV/u incident energy were detected in a streamer chamber by their charged-particle decay. For central collisions with impact parameters b < 2.4 fm the Λ production cross section is 7.6 ± 2.2 mb. A calculation in which Λ production occurs in the early stage of the collision qualitatively reproduces the results but underestimates the transverse momenta. An average Λ polarization of -0.10 ± 0.05 is observed. PACS numbers: 25.70.Bc
Pion production and charged-particle multiplicity selection in relativistic nuclear collisions
(1982)
Spectra of positive pions with energies of 15-95 MeV were measured for high-energy proton, 4He, 20Ne, and 40Ar bombardments of targets of 27Al, 40Ca, 107,109Ag, 197Au, and 238U. A Si-Ge telescope was used to identify charged pions by dE/dx-E and, in addition, stopped π⁺ were tagged by the subsequent muon decay. In all, results for 14 target-projectile combinations are presented to study the dependence of pion emission patterns on the bombarding energy (from E/A = 0.25 to 2.1 GeV) and on the target and projectile masses. In addition, associated charged-particle multiplicities were measured in an 80-paddle array of plastic scintillators and used to make impact-parameter selections on the pion-inclusive data. NUCLEAR REACTIONS U(20Ne,π⁺), E/A = 250 MeV; U(40Ar,π⁺), Ca(40Ar,π⁺), U(20Ne,π⁺), Au(20Ne,π⁺), Ag(20Ne,π⁺), Al(20Ne,π⁺), U(4He,π⁺), Al(4He,π⁺), E/A = 400 MeV; Ca(40Ar,π⁺), U(20Ne,π⁺), U(4He,π⁺), U(p,π⁺), E/A = 1.05 GeV; U(20Ne,π⁺), E/A = 2.1 GeV; measured σ(E, θ), inclusive and selected on associated charged-particle multiplicity.
Energy spectra and angular distributions of 3He and 4He fragments emitted from Ag and U targets, bombarded with 2.7-GeV protons and with 1.05-GeV/nucleon α particles and 16O ions, have been measured. All cross sections increase dramatically with projectile mass. No narrow peaks are found in the angular distributions or in the energy spectra.
Double-differential cross sections have been measured for high-energy p, d, t, 3He, and 4He particles emitted from uranium targets irradiated with 20Ne ions at energies of 250, 400, and 2100 MeV/nucleon and with 4He ions at 400 MeV/nucleon. Using the shape and yield of the proton energy spectra, the shape and yield of the d, t, 3He, and 4He energy spectra can be deduced at all measured angles and all incident projectile energies by assuming that these fragments are formed by coalescence of cascade nucleons, using a model analogous to those of Butler and Pearson, and of Schwarzschild and Zupančič.
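In coalescence models of this type, the invariant composite-particle spectrum is tied to a power of the proton spectrum; schematically (an illustrative textbook form, not necessarily the paper's exact fitted expression):

```latex
% Coalescence relation: a composite of mass number A carries A times the
% proton momentum, and its invariant yield scales as the A-th power of
% the proton yield, with coalescence parameter B_A
E_A \frac{d^{3}N_A}{dp_A^{3}} = B_A \left( E_p \frac{d^{3}N_p}{dp_p^{3}} \right)^{\!A},
\qquad p_A = A\,p_p
```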
A simple model is proposed for the emission of nucleons with velocities intermediate between those of the target and projectile. In this model, the nucleons which are mutually swept out from the target and projectile form a hot quasiequilibrated fireball which decays as an ideal gas. The overall features of the proton-inclusive spectra from 250- and 400-MeV/nucleon 20Ne ions and 400-MeV/nucleon 4He ions interacting with uranium are fitted without any adjustable parameters.
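The structure of this model can be summarized in one line; schematically (an illustrative form), the nucleon spectrum is thermal in the rest frame of the fireball, which moves with an intermediate velocity fixed by kinematics:

```latex
% Fireball model (schematic): N_part swept-out nucleons decay as an ideal
% gas of temperature T in a frame moving with the fireball velocity;
% E' is the nucleon energy evaluated in that rest frame
\frac{d^{3}N}{dp^{3}} \;\propto\; N_{\mathrm{part}}\,\exp\!\left(-\frac{E'}{T}\right)
```

The collision geometry fixes the number of participants and their total energy, which in turn fixes T, which is why the fit involves no adjustable parameters.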
The energy spectra of protons and light nuclei produced by the interaction of 4He and 20Ne projectiles with Al and U targets have been investigated at incident energies ranging from 0.25 to 2.1 GeV per nucleon. Single fragment inclusive spectra have been obtained at angles between 25° and 150°, in the energy range from 30 to 150 MeV/nucleon. The multiplicity of intermediate and high energy charged particles was determined in coincidence with the measured fragments. In a separate study, fragment spectra were obtained in the evaporation energy range from 12C and 20Ne bombardment of uranium. We observe structureless, exponentially decaying spectra throughout the range of studied fragment masses. There is evidence for two major classes of fragments; one with emission at intermediate temperature from a system moving slowly in the lab frame, and the other with high temperature emission from a system propagating at a velocity intermediate between target and projectile. The high energy proton spectra are fairly well reproduced by a nuclear fireball model based on simple geometrical, kinematical, and statistical assumptions. Light cluster emission is also discussed in the framework of statistical models. NUCLEAR REACTIONS U(20Ne,X), E=250 MeV/nucl.; U(20Ne,X), U(α,X) E=400 MeV/nucl.; U(20Ne,X), Al(20Ne,X), E=2.1 GeV/nucl.; measured σ(E,θ), X=p, d, t, 3He,4He. U(20Ne,X), U(α,X), E=400 MeV/nucl.; U(20Ne,X), E=2.1 GeV/nucl.; measured σ(E, θ), Li to O. U(20Ne,X), U(12C,X), E=2.1 GeV/nucl.; measured σ(E, 90°), 4He to B. Nuclear fireballs, coalescence, thermodynamics of light nuclei production.
Results are presented from a search for the decays D0 → K⁻π⁺ and D̄0 → K⁺π⁻ in a sample of 3.8×10⁶ central Pb-Pb events collected with a beam energy of 158A GeV by NA49 at the CERN SPS. No signal is observed. An upper limit on D0 production is derived and compared to predictions from several models.
Particle production in central Pb+Pb collisions was studied with the NA49 large acceptance spectrometer at the CERN SPS at beam energies of 20, 30, 40, 80, and 158 GeV per nucleon. A change of the energy dependence is observed around 30A GeV for the yields of pions and strange particles as well as for the shapes of the transverse mass spectra. At present only a reaction scenario with onset of deconfinement is able to reproduce the measurements.
The transverse mass spectra of Ω hyperons and φ mesons measured recently by the STAR Collaboration in Au+Au collisions at √s_NN = 130 GeV are described within a hydrodynamic model of the quark-gluon plasma expansion and hadronization. The flow parameters at plasma hadronization extracted by fitting these data are used to predict the transverse mass spectra of J/ψ and ψ' mesons.
Efficient systems for the securities transaction industry : a framework for the European Union
(2003)
This paper provides a framework for the securities transaction industry in the EU to understand the functions performed, the institutions involved, and the parameters concerned that shape market and ownership structure. Of particular interest are microeconomic incentives of the industry players that can be in contradiction with social welfare. We evaluate the three functions and the strategic parameters (the boundary decision, the communication standard employed, and the governance implemented) along the lines of three efficiency concepts. By structuring the main factors that influence these concepts and by describing the underlying trade-offs among them, we provide insight into a highly complex industry. Applying our framework, the paper describes and analyzes three consistent systems for the securities transaction industry. We point out that one of the systems, denoted 'contestable monopolies', demonstrates superior overall efficiency, although it may be the most sensitive in terms of configuration accuracy and thus the most difficult to achieve and sustain.
Despite a lot of restructuring and many innovations in recent years, the securities transaction industry in the European Union is still a highly inefficient and inconsistently configured system for cross-border transactions. This paper analyzes the functions performed, the institutions involved, and the parameters concerned that shape market and ownership structure in the industry. Of particular interest are microeconomic incentives of the main players that can be in contradiction with social welfare. We develop a framework and analyze three consistent systems for the securities transaction industry in the EU that offer superior efficiency compared with the current, inefficient arrangement. Some policy advice is given on selecting the 'best' system for the Single European Financial Market.
In recent years stock exchanges have been increasingly diversifying their operations into related business areas such as derivatives trading, post-trading services, and software sales. This trend can be observed most notably among profit-oriented trading venues. While the pursuit of diversification is likely driven by the attractiveness of these investment opportunities, it remains an open question whether certain integration activities are also efficient, both from a social welfare perspective and from the exchanges' perspective. Academic contributions so far have analyzed different business models primarily from the social welfare perspective, whereas there is only little literature considering their impact on the exchange itself. By employing a panel data set of 28 stock exchanges for the years 1999-2003, we seek to shed light on this topic by comparing the factor productivity of exchanges with different business models. Our findings suggest three conclusions: (1) Integration activity comes at the cost of increased operational complexity, which in some cases outweighs the potential synergies between related activities and therefore leads to technical inefficiencies and lower productivity growth. (2) We find no evidence that vertical integration is more efficient and productive than other business models; this finding could contribute to the ongoing discussion about the merits of vertical integration from a social welfare perspective. (3) A strong in-house IT competence appears beneficial in overcoming the operational complexity associated with integration.
Academic contributions on the demutualization of stock exchanges have so far been predominantly devoted to social welfare issues, whereas there is scarce empirical literature on the impact of a governance change on the exchange itself. While there is consensus that the case for demutualization is predominantly driven by the need to improve the exchange's competitiveness in a changing business environment, it remains unclear how different governance regimes actually affect stock exchange performance. Some authors propose that a public listing is the governance arrangement best suited to improve an exchange's competitiveness. By employing a panel data set of 28 stock exchanges for the years 1999-2003, we seek to shed light on this topic by comparing the efficiency and productivity of exchanges with differing governance arrangements. For this purpose we first calculate individual efficiency and productivity values via DEA. In a second step we regress the derived values against variables that, amongst others, map the institutional arrangement of the exchanges, in order to determine efficiency and productivity differences between (1) mutuals, (2) demutualized but customer-owned exchanges, and (3) publicly listed and thus at least partly outsider-owned exchanges. We find evidence that demutualized exchanges exhibit higher technical efficiency than mutuals; however, they perform relatively poorly in terms of productivity growth. Furthermore, we find no evidence that publicly listed exchanges possess higher efficiency and productivity values than demutualized exchanges with a customer-dominated structure. We conclude that the merits of outside ownership possibly lie in other areas, such as resolving conflicts of interest between overly heterogeneous members.
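The first step of this two-step procedure can be sketched in code. Below is a minimal input-oriented CCR envelopment model solved by linear programming; the inputs and outputs in the toy example are placeholders, not the variables used in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    for o in range(n):
        # decision vector z = [theta, lambda_1 .. lambda_n]; minimize theta
        c = np.zeros(1 + n)
        c[0] = 1.0
        A_ub, b_ub = [], []
        for i in range(m):   # sum_j lam_j * x_ij <= theta * x_io
            A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
            b_ub.append(0.0)
        for r in range(s):   # sum_j lam_j * y_rj >= y_ro
            A_ub.append(np.concatenate(([0.0], -Y[:, r])))
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (1 + n), method="highs")
        scores[o] = res.x[0]
    return scores

# toy example: 4 exchanges, inputs = [staff, IT cost], output = [trades]
X = np.array([[20, 5], [40, 8], [30, 4], [50, 10]], dtype=float)
Y = np.array([[100], [150], [160], [180]], dtype=float)
print(dea_ccr_input(X, Y))
```

The resulting scores would then feed the second-stage regression on governance dummies.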
This paper studies a setting in which a risk averse agent must be motivated to work on two tasks: he (1) evaluates a new project and, if adopted, (2) manages it. While a performance measure which is informative of an agent's action is typically valuable because it can be used to improve the risk sharing of the contract, this is not necessarily the case in this two-task setting. I provide a sufficient condition under which a performance measure that is informative of the second task is worthless for contracting despite the agent being risk averse. This shows that information content is a necessary but not a sufficient condition for a performance measure to be valuable.
It is widely believed that the ideal board in corporations is composed almost entirely of independent (outside) directors. In contrast, this paper shows that some lack of board independence can be in the interest of shareholders. This follows because a lack of board independence serves as a substitute for commitment. Boards that are dependent on the incumbent CEO adopt a less aggressive CEO replacement rule than independent boards. While this behavior is inefficient ex post, it has positive ex ante incentive effects. The model suggests that independent boards (dependent boards) are most valuable to shareholders if the problem of providing appropriate incentives to the CEO is weak (severe).
Reflexive transnational law : the privatisation of civil law and the civilisation of private law
(2002)
The author examines the emergence of a transnational private law in alternative dispute resolution bodies and private norm-formulating agencies from a reflexive law perspective. After introducing the concept of reflexive law, he applies the idea of law as a communicative system to the ongoing debate on the existence of a New Law Merchant or lex mercatoria. He then discusses some features of international commercial arbitration (e.g. the lack of transparency) which hinder self-reference (autopoiesis) and thus the production of legal certainty in lex mercatoria as an autonomous legal system. He then contrasts these findings with the Domain Name Dispute Resolution System, which, as opposed to lex mercatoria, was rationally planned and highly formally organised by WIPO and ICANN, allows for self-reference, and is thus designed as an autopoietic legal system, albeit with a very limited scope, i.e. the interference of abusive domain name registrations with trademarks (cybersquatting). From the comparison of both examples the author derives some preliminary ideas towards a theory of reflexive transnational law, suggesting that the established general trend of privatisation of civil law needs to be accompanied by a civilisation of private law, i.e. the constitutionalisation of transnational private regimes by embedding them into a procedural constitution of freedom.
Wider participation in stockholding is often presumed to reduce wealth inequality. We measure and decompose changes in US wealth inequality between 1989 and 2001, a period of considerable spread of equity culture. Inequality in equity wealth is found to be important for net wealth inequality, despite equity's limited share. Our findings show that reduced wealth inequality is not a necessary outcome of the spread of equity culture. We estimate contributions of stockholder characteristics to levels and inequality in equity holdings, and we distinguish changes in configuration of the stockholder pool from changes in the influence of given characteristics. Our estimates imply that both the 1989 and the 2001 stockholder pools would have produced higher equity holdings in 1998 than were actually observed for 1998 stockholders. This arises from differences both in optimal holdings and in financial attitudes and practices, suggesting a dilution effect of the boom followed by a cleansing effect of the downturn. Cumulative gains and losses in stockholding are shown to be significantly influenced by length of household investment horizon and portfolio breadth but, controlling for those, use of professional advice is either insignificant or counterproductive. JEL Classification: E21, G11
We argue that the shape of the system-size dependence of strangeness production in nucleus-nucleus collisions can be understood in a picture that is based on the formation of clusters of overlapping strings. A string percolation model combined with a statistical description of the hadronization yields a quantitative agreement with the data at √s_NN = 17.3 GeV. The model is also applied to RHIC energies.
A steep maximum occurs in the Wroblewski ratio between strange and non-strange quarks created in central nucleus-nucleus collisions of mass number about A = 200 at the lower SPS energy √s ≈ 7 GeV. By analyzing hadronic multiplicities within the grand canonical statistical hadronization model, this maximum is shown to occur at a baryochemical potential of about 450 MeV. In comparison, recent QCD lattice calculations at finite baryochemical potential suggest a steep maximum of the light quark susceptibility at similar μB, indicative of the "critical fluctuations" expected to occur at or near the QCD critical endpoint. This endpoint has not been firmly pinned down but should occur in the interval 300 MeV < μB^c < 700 MeV. It is argued that central collisions within the low SPS energy range should exhibit a turning point between compression/heating and expansion/cooling at an energy density, temperature, and μB close to the suspected critical point, whereas from top SPS to RHIC energy the primordial dynamics create a turning point far above in ε and T and far below in μB, and at lower AGS energies the dynamical trajectory stays below the phase boundary. Thus, the observed sharp strangeness maximum might coincide with the critical √s at which the dynamics settles at, or near, the QCD endpoint.
Strangeness enhancement is discussed as a feature specific to relativistic nuclear collisions which create a fireball of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter. The hadron gas at the instant of its formation captures conditions directly at the QCD phase boundary at top SPS and RHIC energy, chiefly the critical temperature and energy density.
Relativistic nucleus-nucleus collisions create a "fireball" of strongly interacting matter at high energy density. At very high energy this is suggested to be partonic matter, but at lower energy it should consist of yet unknown hadronic, perhaps coherent degrees of freedom. The freeze-out of this high density state to a hadron gas can tell us about properties of fireball matter.
With new data available from the SPS at 40 and 80 GeV/A, I review the systematics of bulk hadron multiplicities, with prime focus on strangeness production. The classical concept of strangeness enhancement in central AA collisions is reviewed in view of the statistical hadronization model, which suggests that strangeness enhancement arises chiefly in the transition from the canonical to the grand canonical version of that model, i.e. enhancement results from the fading away of canonical suppression. The model also captures the striking strangeness maximum observed in the vicinity of √s ≈ 8 GeV. A puzzle remains in the understanding of the apparent grand canonical order at the lower SPS and at AGS energies.
Transverse momentum event-by-event fluctuations are studied within the string-hadronic model of high energy nuclear collisions, LUCIAE. Data on non-statistical pT fluctuations in p+p interactions are reproduced. Fluctuations of similar magnitude are predicted for nucleus-nucleus collisions, in contradiction to the preliminary NA49 results. The introduction of a string clustering mechanism (Firecracker Model) leads to a further, significant increase of pT fluctuations for nucleus-nucleus collisions. Secondary hadronic interactions, as implemented in LUCIAE, cause only a small reduction of pT fluctuations.
Attribution and detection of anthropogenic climate change using a backpropagation neural network
(2002)
The climate system can be regarded as a dynamic nonlinear system. Traditional linear statistical methods are thus not suited to describe the nonlinearities of this system, which makes it necessary to find alternative statistical techniques to model those nonlinear properties. Following up on an earlier paper on this subject (WALTER et al., 1998), the problem of attribution and detection of the observed climate change is addressed here using a nonlinear backpropagation neural network (BPN). In addition to potential anthropogenic influences on climate (CO2-equivalent greenhouse-gas concentrations, GHG, and SO2 emissions), natural influences on surface air temperature (variations of solar activity, volcanism, and the El Niño/Southern Oscillation phenomenon) are integrated into the simulations as well. It is shown that the adaptive BPN algorithm captures the dynamics of the climate system, i.e. global and area-weighted mean temperature anomalies, to a great extent. However, free parameters of this network architecture have to be optimized in a time-consuming trial-and-error process. The simulation quality obtained by the BPN far exceeds that of a linear model; on the global scale it amounts to 84% explained variance. The results of the nonlinear algorithm are also physically plausible in amplitude and time structure. Nevertheless they cover a broad range; e.g. the GHG signal on the global scale ranges from 0.37 K to 1.65 K warming for the period 1856-1998. The simulated amplitudes nonetheless lie within the range discussed in the literature (HOUGHTON et al., 2001), and the combined anthropogenic effect corresponds to the observed increase in temperature over the examined period. Moreover, the BPN succeeds in detecting anthropogenically induced climate change at a high significance level. The concept of neural networks can therefore be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
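For readers unfamiliar with the method, a three-layer backpropagation network of the kind described here can be sketched in a few lines of Python; the layer size, learning rate, and forcing inputs below are illustrative placeholders, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_bpn(X, y, n_hidden=8, lr=0.01, epochs=5000):
    """Minimal 3-layer backpropagation network (tanh hidden layer, linear
    output), trained by gradient descent on the mean squared error.
    X: (n_samples, n_forcings), y: (n_samples,) temperature anomalies."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, n_hidden);         b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        pred = h @ W2 + b2                    # linear output
        err = pred - y
        # backpropagate the error to all weights
        gW2 = h.T @ err / len(y); gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h**2)   # tanh derivative
        gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return W1, b1, W2, b2

# X could hold standardized series of GHG, SO2, solar, volcanism, and ENSO
```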
Temporal changes in the occurrence of extreme events in time series of observed precipitation are investigated. The analysis is based on a European gridded data set and a German station-based data set of recent monthly totals (1896/1899-1995/1998). Two approaches are used. First, values above certain defined thresholds are counted for the first and second halves of the observation period. In a second step, time series components such as trends are removed to obtain deeper insight into the causes of the observed changes; as an example, this technique is applied to the time series of the German station Eppenrod. It emerges that most of the events concern extremely wet months, whose frequency has significantly increased in winter. Whereas on the European scale the other seasons, especially autumn, also show this increase, in Germany an insignificant decrease is found in the summer and autumn seasons. Moreover, it is demonstrated that the increase of extremely wet months is reflected in a systematic increase in the variance and in the Weibull probability density function parameters, respectively.
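The first, threshold-counting approach is simple to state in code; a minimal sketch, with an assumed percentile threshold and synthetic data standing in for the actual station series:

```python
import numpy as np

def exceedance_change(series, pct=95):
    """Count months above the pct-th percentile threshold in the first
    and second halves of a monthly precipitation series (1-D array)."""
    thresh = np.percentile(series, pct)        # threshold from full record
    half = len(series) // 2
    n_first = int(np.sum(series[:half] > thresh))
    n_second = int(np.sum(series[half:] > thresh))
    return thresh, n_first, n_second

# synthetic 100-year monthly record in place of, e.g., the Eppenrod data
rng = np.random.default_rng(1)
monthly = rng.gamma(shape=2.0, scale=30.0, size=1200)
print(exceedance_change(monthly))
```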
Hadronic yields and yield ratios observed in Pb+Pb collisions at the SPS energy of 158 GeV per nucleon are known to resemble a thermal equilibrium population at T = 180 ± 10 MeV, also observed in elementary e⁺e⁻ → hadron data at LEP. We argue that this is the universal consequence of the QCD parton-to-hadron phase transition populating the maximum entropy state. This state is shown to survive the hadronic rescattering and expansion phase, freezing in right after hadronization due to the very rapid longitudinal and transverse expansion that is inferred from Bose-Einstein pion correlation analysis of central Pb+Pb collisions.
Simulation of global temperature variations and signal detection studies using neural networks
(1998)
The concept of neural network models (NNM) is a statistical strategy which can be used if a superposition of forcing mechanisms leads to observable effects and if a sufficient related observational data base is available. In comparison to multiple regression analysis (MRA), the main advantages are that NNM remain an appropriate tool in the case of non-linear cause-effect relations and that interactions of the forcing mechanisms are allowed for. In comparison to more sophisticated methods like general circulation models (GCM), the main advantage is that details of the physical background, such as feedbacks, may be unknown: neural networks learn from observations, which reflect feedbacks implicitly. The disadvantage, of course, is that the physical background is neglected. In addition, the results prove to be sensitively dependent on the network architecture, e.g. the number of hidden neurons or the initialisation of learning parameters. We used a supervised backpropagation network (BPN) with three neuron layers, an unsupervised Kohonen network (KHN), and a combination of both called a counterpropagation network (CPN). These concepts are tested with respect to their ability to simulate the observed global as well as hemispheric mean surface air temperature annual variations 1874-1993 when parameter time series of the following forcing mechanisms are incorporated: equivalent CO2 concentrations, tropospheric sulfate aerosol concentrations (both anthropogenic), volcanism, solar activity, and ENSO (all natural). In this way up to 83% of the observed temperature variance can be explained, significantly more than by MRA. Including the North Atlantic Oscillation does not improve these results. On a global average, the greenhouse gas (GHG) signal so far is assessed to be 0.9-1.3 K (warming) and the sulfate signal 0.2-0.4 K (cooling), results which are closely similar to the GCM findings published in the recent IPCC Report. The related signals of the natural forcing mechanisms considered cover amplitudes of 0.1-0.3 K. Our best NNM estimate of the GHG doubling signal amounts to 2.1 K (equilibrium) or 1.7 K (transient), respectively.
The climate system can be regarded as a dynamic nonlinear system. Thus, traditional linear statistical methods fail to model the nonlinearities of such a system, and these nonlinearities render it necessary to find alternative statistical techniques. Since artificial neural network models (NNM) represent such a nonlinear statistical method, their use in analyzing the climate system has been studied for a couple of years now. Most authors use the standard backpropagation network (BPN) for their investigations, although this specific model architecture carries a certain risk of over-/underfitting. Here we instead use the so-called Cauchy Machine (CM) with an implemented fast simulated annealing schedule (FSA) (Szu, 1986) for the purpose of attributing and detecting anthropogenic climate change. Under certain conditions the CM-FSA is guaranteed to find the global minimum of a yet undefined cost function (Geman and Geman, 1986). In addition to potential anthropogenic influences on climate (greenhouse gases (GHG), sulphur dioxide (SO2)), natural influences on near-surface air temperature (variations of solar activity, explosive volcanism, and the El Niño/Southern Oscillation phenomenon) serve as model inputs. The simulations are carried out on different spatial scales: global and area-weighted averages. In addition, a multiple linear regression analysis serves as a linear reference. It is shown that the adaptive nonlinear CM-FSA algorithm captures the dynamics of the climate system to a great extent. However, free parameters of this specific network architecture have to be optimized subjectively. The quality of the simulations obtained by the CM-FSA algorithm exceeds the results of a multiple linear regression model; the simulation quality on the global scale amounts to 81% explained variance. Furthermore, the combined anthropogenic effect corresponds to the observed increase in temperature (Jones et al., 1994; updated by Jones, 1999a) for the examined period 1856-1998 on all investigated scales. In accordance with recent findings of physical climate models, the CM-FSA succeeds in detecting anthropogenically induced climate change at a high significance level. Thus, the CM-FSA algorithm can be regarded as a suitable nonlinear statistical tool for modeling and diagnosing the climate system.
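The distinguishing ingredient of the CM-FSA is Cauchy-distributed parameter jumps under a fast 1/(1+k) cooling schedule (Szu, 1986). A generic sketch of such an optimizer follows; the cost function and schedule constants are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def fast_simulated_annealing(cost, w0, T0=1.0, steps=20000):
    """Fast simulated annealing with a Cauchy visiting distribution:
    temperature decays as T(k) = T0 / (1 + k), and candidate moves are
    Cauchy-distributed, allowing occasional long jumps out of local minima."""
    w, c = w0.copy(), cost(w0)
    best_w, best_c = w.copy(), c
    for k in range(steps):
        T = T0 / (1.0 + k)
        cand = w + T * rng.standard_cauchy(size=w.shape)  # Cauchy jump
        cc = cost(cand)
        # Metropolis acceptance rule
        if cc < c or rng.random() < np.exp(-(cc - c) / max(T, 1e-12)):
            w, c = cand, cc
            if c < best_c:
                best_w, best_c = w.copy(), c
    return best_w, best_c

# toy quadratic cost standing in for the network's error surface
print(fast_simulated_annealing(lambda w: np.sum((w - 3.0) ** 2),
                               np.zeros(4))[1])
```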
Observed global and European spatiotemporally resolved fields of surface air temperature, mean-sea-level pressure, and precipitation are analyzed statistically with respect to their response to external forcing factors, such as anthropogenic greenhouse gases, anthropogenic sulfate aerosol, solar variations, and explosive volcanism, and to known internal climate mechanisms, such as the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). As a first step, a principal component analysis (PCA) is applied to the observed spatiotemporal fields to obtain spatial patterns with linearly independent temporal structure. In a second step, the time series of each of the spatial patterns is subjected to a stepwise regression analysis in order to separate it into signals of the external forcing factors and internal climate mechanisms listed above, plus residuals. Finally, a back-transformation yields the spatiotemporal patterns of all these signals, which are then intercompared. Two kinds of significance tests are applied to the anthropogenic signals. First, it is tested whether the anthropogenic signal is significant compared with the complete residual variance including natural variability; this test answers the question whether a significant anthropogenic climate change is visible in the observed data. Second, the anthropogenic signal is tested with respect to the climate noise component only; this test answers the question whether the anthropogenic signal is significant among others in the observed data. Using both tests, regions can be specified where the anthropogenic influence is visible (second test) and regions where the anthropogenic influence has already significantly changed climate (first test).
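The two-step procedure can be sketched as follows; a greedy forward selection stands in here for the full stepwise regression, and the component count and forcing series are placeholders:

```python
import numpy as np

def pca_scores(field, n_comp):
    """field: (n_time, n_gridpoints). Returns the time scores of the
    leading principal components via SVD of the centered data."""
    anom = field - field.mean(axis=0)
    U, S, Vt = np.linalg.svd(anom, full_matrices=False)
    return U[:, :n_comp] * S[:n_comp]          # PC time series

def residual_var(y, keys, predictors):
    """Residual variance of y regressed on the chosen forcing series."""
    X = np.column_stack([predictors[k] for k in keys] + [np.ones_like(y)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def forward_select(y, predictors, max_terms=3):
    """Greedy forward selection: repeatedly add the forcing that most
    reduces residual variance (a simplified stand-in for stepwise regression)."""
    chosen = []
    for _ in range(max_terms):
        remaining = set(predictors) - set(chosen)
        best = min(remaining,
                   key=lambda k: residual_var(y, chosen + [k], predictors))
        chosen.append(best)
    return chosen
```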
A selection of recent data referring to Pb+Pb collisions at the CERN SPS energy of 158 GeV per nucleon is presented which might describe the state of highly excited strongly interacting matter both above and below the deconfinement-to-hadronization (phase) transition predicted by lattice QCD. A tentative picture emerges in which a partonic state is indeed formed in central Pb+Pb collisions, which hadronizes at about T = 185 MeV and expands its volume more than tenfold, cooling to about 120 MeV before hadronic collisions cease. We suggest further that all SPS collisions, from central S+S onward, reach that partonic phase, the maximum energy density increasing with more massive collision systems.
Configuration, simulation and visualization of simple biochemical reaction-diffusion systems in 3D
(2004)
Background: In biological systems, molecules of different species diffuse within the reaction compartments and interact with each other, ultimately giving rise to structures as complex as living cells. In order to investigate the formation of subcellular structures and patterns (e.g. signal transduction) or spatial effects in metabolic processes, it would be helpful to use simulations of such reaction-diffusion systems. Pattern formation has been extensively studied in two dimensions; however, the extension to three-dimensional reaction-diffusion systems poses some challenges for the visualization of the processes being simulated. Scope of the Thesis: The aim of this thesis is the specification and development of algorithms and methods for the three-dimensional configuration, simulation, and visualization of biochemical reaction-diffusion systems consisting of a small number of molecules and reactions. After an initial review of the existing literature on 2D/3D reaction-diffusion systems, a 3D simulation algorithm (PDE solver), based on an existing 2D simulation algorithm for reaction-diffusion systems written by Prof. Herbert Sauro, has to be developed and subsequently optimized for high performance. A prototypic 3D configuration tool for the initial state of the system has to be developed; this basic tool should enable the user to define and store the location of molecules, membranes, and channels within a reaction space of user-defined size, and a suitable data structure has to be defined for the representation of the reaction space. The main focus of this thesis is the specification and prototypic implementation of a suitable reaction-space visualization component for the display of the simulation results; in particular, the possibility of 3D visualization during the course of the simulation has to be investigated. During the development phase, the quality and usability of the visualizations have to be evaluated in user tests. The simulation, configuration, and visualization prototypes should be compliant with the Systems Biology Workbench to ensure compatibility with software from other authors. The thesis is carried out in close cooperation with Prof. Herbert Sauro at the Keck Graduate Institute, Claremont, CA, USA; due to this international cooperation the thesis will be written in English.
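The core of such a PDE solver is an explicit update of diffusion plus reaction on a 3D grid. A minimal sketch for one species with a logistic-type reaction term follows; the parameters and reaction term are illustrative, not the thesis implementation:

```python
import numpy as np

def laplacian3d(u, dx):
    """Second-order 7-point Laplacian with zero-flux (replicated-edge)
    boundary conditions."""
    up = np.pad(u, 1, mode="edge")
    return (up[2:, 1:-1, 1:-1] + up[:-2, 1:-1, 1:-1]
          + up[1:-1, 2:, 1:-1] + up[1:-1, :-2, 1:-1]
          + up[1:-1, 1:-1, 2:] + up[1:-1, 1:-1, :-2]
          - 6.0 * u) / dx**2

def step(u, D, dx, dt, k):
    """One explicit Euler step of du/dt = D*lap(u) + k*u*(1-u).
    Stability requires dt <= dx**2 / (6*D)."""
    return u + dt * (D * laplacian3d(u, dx) + k * u * (1.0 - u))

# 32^3 grid with a seeded blob of concentration diffusing outward
u = np.zeros((32, 32, 32)); u[14:18, 14:18, 14:18] = 1.0
for _ in range(100):
    u = step(u, D=1.0, dx=1.0, dt=0.1, k=0.5)
```

The explicit scheme is stable only for dt ≤ dx²/(6D), which is one reason production solvers often prefer implicit or operator-splitting schemes.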
We investigate the sensitivity of several observables to the density dependence of the symmetry potential within the microscopic transport model UrQMD (ultrarelativistic quantum molecular dynamics). The same systems are used to probe the symmetry potential at both low and high densities. The influence of the symmetry potentials on the yields of π⁻ and π⁺, the π⁻/π⁺ ratio, the n/p ratio of free nucleons, and the t/3He ratio is studied for neutron-rich heavy ion collisions (208Pb+208Pb, 132Sn+124Sn, 96Zr+96Zr) at E_b = 0.4A GeV. We find that these multiple probes provide comprehensive information on the density dependence of the symmetry potential.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
(2005)
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alarm the neighboring cells as well as the whole plant to activate defense responses to limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown except for the recognition process of pathogens. We delineate the NRP-gene in soybean, which is specifically induced during this programmed cell death and contains a novel protein domain, which is commonly found in different plant proteins.
Results: The sequence analysis of the protein encoded by the NRP-gene from soybean led to the identification of a novel domain, which we named DCD because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role for these proteins in phytohormone response, embryo development, and programmed cell death induced by pathogens or ozone.
Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins, to gain further molecular insights into these processes.
Background: Osteoarthritis (OA) has a high prevalence in primary care. Conservative, guideline-oriented approaches aiming at improving pain treatment and increasing physical activity have been proven effective in several contexts outside the primary care setting, for instance the Arthritis Self-Management Programs (ASMPs). But it remains unclear whether these comprehensive evidence-based approaches can improve patients' quality of life when they are provided in a primary care setting. Methods/Design: PraxArt is a cluster-randomised controlled trial with GPs as the unit of randomisation. The aim of the study is to evaluate the impact of a comprehensive evidence-based medical education of GPs on individual care and patients' quality of life. 75 GPs were randomised either to intervention group I or II or to a control group. Each GP will include 15 patients suffering from osteoarthritis according to the ACR criteria. In intervention group I, GPs will receive medical education and patient education leaflets including a physical exercise program. In intervention group II the same is provided, but in addition a practice nurse will be trained to monitor adherence to the GPs' prescriptions and advice via monthly telephone calls and to ask about increasing pain and possible side effects of medication. In the control group no intervention will be applied at all. The main outcome measurement for patients' QoL is the GERMAN-AIMS2-SF questionnaire. In addition, data about patients' satisfaction (using a modified EUROPEP tool), medication, health care utilization, comorbidity, physical activity, and depression (using PHQ-9) will be retrieved. Measurements (pre data collection) will take place in months I-III, starting in June 2005; post data collection will be performed after 6 months. Discussion: Despite the high prevalence and increasing incidence, comprehensive and evidence-based treatment approaches for OA in a primary care setting are neither established nor evaluated in Germany. If the evaluation of the presented approach reveals a clear benefit, it is planned to provide these GP-centred interventions on a much larger scale.
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80, and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the chemical equilibration of the system as a function of center-of-mass energy and of the parameters of the source. Additionally, we have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed under-saturation of the strange hadronic phase space.
Cancer has become one of the most fatal diseases. The Heidelberg Heavy Ion Cancer Therapy (HICAT) facility has the potential to become an important and efficient treatment method because of the excellent "Bragg peak" characteristics of ion beams and on-line irradiation control by PET diagnostics. The dedicated Heidelberg Heavy Ion Cancer Therapy Project includes two ECR ion sources, an RF linear injector, a synchrotron, and three treatment rooms. It will deliver 4×10¹⁰ protons, 1×10¹⁰ He ions, 1×10⁹ carbon ions, or 5×10⁸ oxygen ions per synchrotron cycle at beam energies of 50-430 AMeV for the treatments. The RF linear injector consists of a 400 AkeV RFQ and a very compact 7 AMeV IH-DTL accelerator operated at 216.816 MHz. The development of the IH-DTL within the HICAT project is a great challenge with respect to the present state of the DTL art for the following reasons:
• the highest operating frequency (216.816 MHz) of all IH-DTL cavities;
• an extremely large cavity length-to-diameter ratio of about 11;
• an IH-DTL with three internal triplets;
• the highest effective voltage gain per meter (5.5 MV/m);
• a very short MEBT design for the beam matching.
The following achievements have been reached during the development of the IH-DTL injector for HICAT: The KONUS beam dynamics design with the LORASR code fulfills the beam requirement of the HICAT synchrotron at the injection point. The simulations for the IH-DTL injector have been performed not only with a homogeneous input beam, but also with the actual particle distribution from the exit of the HICAT RFQ accelerator as delivered by the PARMTEQ code. The output longitudinal normalized emittance for 95% of all particles is 2.00 AkeV·ns with an emittance growth of less than 24%, while the X-X' and Y-Y' normalized emittances are 0.77 mm·mrad and 0.62 mm·mrad, respectively; the emittance growth in X-X' is less than 18%, and in Y-Y' less than 5%. Based on the transverse envelopes of the transported particles, the buncher drift tubes at the RFQ high-energy end were redesigned to obtain a higher transit time factor for this novel RFQ internal buncher. An optimized effective buncher gap voltage of 45.4 kV was calculated to deliver a minimized longitudinal beam emittance, while the influence of the effective buncher voltage on the transverse emittance can be neglected. Six different tuning concepts were investigated in detail while tuning the 1:2 scaled HICAT IH model cavity. 'Volume tuning' by a variation of the cavity cross-sectional area can compensate the unbalanced capacitance distribution in case of an extreme βλ-variation along an IH cavity. 'Additional capacitance plates', i.e. copper sheets clamped on drift tube stems, are a fast way of checking the tuning sensitivity, but they will finally be replaced by massive copper blocks mounted on the drift tube girders. 'Lens coupling' is an important tuning to stabilize the operation mode and to increase or decrease the coupling between neighboring sections. 'Tube tuning' is the fine-tuning concept and also the standard tuning method to reach the needed field distributions as well as the gap voltage distributions. 'Undercut tuning' is a very sensitive tuning for the end sections and with respect to the voltage distribution balance along the structure. The different types of 'plungers' in the 3rd and 4th sections have different effects on the resonance frequency and on the field distribution.
The different triplet stems and the geometry of the cavity end have also been investigated to reach the design field and voltage distributions. Finally, the needed uniform field distribution along the IH-DTL cavity and the corresponding effective voltage distribution were realized; the remaining maximum gap voltage difference was less than 5% for the model cavity. Several important higher order modes were also measured. The RF tuning of the IH-DTL model cavity delivers the final geometry parameters of the IH-DTL power cavity. A rectangular cavity cross section was adopted for the first time for this IH-DTL cavity; this eases the realization of the volume tuning concept in the 1st and 2nd sections. Lens coupling determines the final distance between the triplet and the girder. The triplets are mounted on the lower cavity half shell. Microwave Studio simulations have been carried out not only for the HICAT model cavity, but also for the final geometry of the IH-DTL power cavity. The simulated field distribution of the H110 operating mode agrees with the model cavity measurements, as do the higher order modes; the simulations thus confirm the geometrical design of the IH-DTL. On the other hand, the precision of a simulation with 2.3 million mesh points for the full cross-sectional area, and a CPU time of more than 15 hours on a Dell PC with a 2.4 GHz Intel Pentium 4 and 2.096 GB of RAM, were exploited to their limit when calculating the real parameters for the two final machining iterations during production. The shunt impedance of the IH-DTL power cavity is estimated by comparison with existing tanks to be about 195.8 MΩ/m, which fits the simulation result of 200.3 MΩ/m obtained by reducing the conductivity to 5.0×10⁷ Ω⁻¹m⁻¹. The effective shunt impedance is 153 MΩ/m. The needed RF power is 755 kW. The expected quality factor of the IH-DTL cavity is about 15600. The IH-DTL power cavity tuning measurements before cavity copper plating have been performed; the results are within the specifications. There is no doubt that the needed accuracy of the voltage distribution will be reached with the foreseen fine-tuning concepts in the last steps.
Fluctuations and NA49
(2005)
A systematic analysis of data on strangeness and pion production in nucleon-nucleon and central nucleus-nucleus collisions is presented. It is shown that at all collision energies the pion/baryon and strangeness/pion ratios indicate saturation with the size of the colliding nuclei. The energy dependence of the saturation level suggests that the transition to the Quark Gluon Plasma occurs between 15 A·GeV/c (BNL AGS) and 160 A·GeV/c (CERN SPS) collision energies. The experimental results interpreted in the framework of a statistical approach show that the effective number of degrees of freedom increases in the course of the phase transition and that the plasma created at CERN SPS energies may have a temperature of about 280 MeV (energy density ~10 GeV/fm³). The presence of the phase transition can lead to a non-monotonic collision energy dependence of the strangeness/pion ratio: after an initial increase the ratio should drop to the value characteristic of the QGP, and above the transition region the ratio is expected to be collision energy independent. Experimental studies of central Pb+Pb collisions in the energy range 20-160 A·GeV/c are urgently needed in order to localize the threshold energy and study the properties of the QCD phase transition.
We argue that the measurement of open charm gives a unique opportunity to test the validity of pQCD-based and statistical models of nucleus-nucleus collisions at high energies. We show that various approaches used to estimate D-meson multiplicity in central Pb+Pb collisions at 158 A GeV give predictions which differ by more than a factor of 100. Finally we demonstrate that decisive experimental results concerning the open charm yield in A+A collisions can be obtained using data of the NA49 experiment at the CERN SPS.
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target and the gap between output and its potential. Under "the opportunistic approach to disinflation", a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. Classification: E31, E52, E58, E61. July 2005.
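The contrast between the two rules is easy to make concrete in code. The following sketch is illustrative only: the coefficients and the piecewise form of the opportunistic rule are assumptions, not the paper's exact specification:

```python
# Linear (Taylor-type) rule vs. an opportunistic rule; all coefficients
# and the band width are illustrative values, not the paper's model.
def linear_rule(pi, y, pi_star=2.0, r_star=2.0, a=1.5, b=0.5):
    """Adjust the policy rate linearly in the inflation and output gaps."""
    return r_star + a * (pi - pi_star) + b * y

def opportunistic_rule(pi, y, pi_star=2.0, r_star=2.0, a=1.5, b=0.5, band=1.0):
    """Inside a band around the inflation target, concentrate on output
    stabilization only; outside it, react aggressively to inflation."""
    gap = pi - pi_star
    if abs(gap) <= band:
        return r_star + b * y          # inflation close to target
    return r_star + a * gap + b * y    # inflation far from target

for pi in (2.3, 4.0):                  # inside vs. outside the band
    print(pi, linear_rule(pi, 0.0), opportunistic_rule(pi, 0.0))
```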
This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided. Classification: C6, D9, E2. July 28, 2005.
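The abstract does not spell the method out, but a well-known way to avoid rootfinding in such problems is to invert the Euler equation analytically on a grid of end-of-period states (the endogenous gridpoints idea). Below is a minimal sketch of one backward step for a buffer-stock-style consumption problem, with illustrative parameters and no claim to match the paper's software:

```python
import numpy as np

# One backward step of an endogenous-gridpoints scheme for a buffer-stock
# consumption problem (all parameter values are illustrative).
rho, beta, R = 2.0, 0.96, 1.03           # CRRA, discount factor, gross return
a_grid = np.linspace(0.01, 20, 200)      # end-of-period assets
shocks = np.exp(np.random.default_rng(0).normal(0.0, 0.1, 50))  # income draws

def egm_step(c_next, m_next):
    """Given next period's policy c(m) on grid m_next, return this period's
    consumption and its endogenous cash-on-hand grid without rootfinding."""
    m_prime = R * a_grid[:, None] + shocks[None, :]   # next-period resources
    c_prime = np.interp(m_prime, m_next, c_next)      # next-period consumption
    Emu = (beta * R * c_prime ** (-rho)).mean(axis=1) # expected marginal utility
    c = Emu ** (-1.0 / rho)   # invert the Euler equation analytically
    m = a_grid + c            # endogenous grid: resources that imply (a, c)
    return c, m

# terminal period: consume everything; then iterate backward
c, m = a_grid.copy(), a_grid.copy()
for _ in range(50):
    c, m = egm_step(c, m)
```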
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate global-scale groundwater recharge in a spatially resolved way. In this report, improved model estimates of diffuse groundwater recharge at the global scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model) which, for the semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This tuning has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions compared to the model results before tuning, and the new estimates are more similar to country-level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
Prion diseases, also called transmissible spongiform encephalopathies, are a group of fatal neurodegenerative conditions that affect humans and a wide variety of animals. To date, no therapeutic or prophylactic approach against prion diseases is available. The causative infectious agent is the prion, also termed PrPSc, which is a pathological conformer of a cellular protein named prion protein, PrPc. Prions are thought to multiply upon conversion of PrPc to PrPSc in a self-propagating manner. Immunotherapeutic strategies directed against PrPc represent a possible approach to preventing or curing prion diseases. Accordingly, it has already been shown in animal models that passive immunization delays the onset of prion disease. The present thesis aimed at the development of a candidate vaccine for active immunization against prion diseases, which requires circumventing host tolerance to the self-antigen PrPc. The vaccine development was approached using virus-like particles (retroparticles) derived from either the murine leukemia virus (MLV) or the human immunodeficiency virus (HIV). The display of PrP on the surface of such particles was addressed for both the cellular and the pathogenic form of PrP. The display of PrPc was achieved by fusion either to the transmembrane domain of the platelet-derived growth factor receptor (PDGFR) or to the N-terminal part of the viral envelope protein (Env). In both cases, the corresponding PrPD- and PrPE-retroparticles were successfully produced and analyzed by immunofluorescence, Western blot analysis, immunogold electron microscopy and ELISA. Both PrPD- and PrPE-retroparticles showed effective incorporation of N-terminally truncated forms of PrPc, but not of the complete protein. The displayed PrPc showed the typical glycosylation pattern, which was specifically removed by a glycosidase enzyme. Upon display on retroparticles, PrPc remained detectable by PrP-specific antibodies under native conditions. Electron microscopy of the PrPc variants revealed no alteration of the characteristic retroviral morphology of the generated particles. MLV-derived PrPD-retroparticles were successfully used in immunization studies: in contrast to approaches using bacterially expressed PrPc, the immunization of mice resulted in a specific antibody response. The display of the pathogenic isoform was attempted by two different strategies. The first was directed at the conversion of the proteinase K (PK)-sensitive form of PrP on the surface of PrPD-retroparticles into the PK-resistant form. Despite specific adaptation of the PK digestion assay for detecting resistant PrP, no PrP conversion was observed for PrPD-retroparticles. The second approach utilized a replication-competent variant of the ecotropic MLV displaying PrPc on the viral Env protein. This MLV variant was stable in cell culture for six passages but did not replicate on scrapie-infected, PrPSc-propagating neuroblastoma cells. Thus, besides PrPc-displaying virus-like particles, a replication-competent MLV variant was obtained which stably incorporated PrPc at the N-terminus of the viral Env protein. The incorporation of the cell-surface-located PrPc into particles was expected from previously obtained data on protein display in the context of retrovirus-derived particles. The lack of incorporation observed for the complete PrPc sequence was therefore rather unexpected; incorporation was found to be inhibited both for the fusion to the PDGFR transmembrane domain and for the fusion to the viral Env.
In contrast to N-terminally truncated PrPc, the complete PrPc was shown to exhibit increased cell-surface internalization rates and half-life times, possibly contributing to the observed results. The PrP-vaccination approach described in this work represents the first successful system inducing PrP-specific antibody responses against the prion protein in wild-type mice. Possible explanations are based on the induction of specific T-cell help or on effects of innate immunity. The MLV- and HIV-derived particles bearing the PrP-coding sequence, or the replication-competent variants, generated during this thesis might help to further improve the PrP-specific immune response.
Using CORSIKA for simulating extensive air showers, we study the relation between shower characteristics and features of hadronic multiparticle production at low energies. We report on investigations of the typical energies and phase space regions of secondary particles that are important for muon production in extensive air showers. Possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.
The knowledge of the build-up time of space charge compensation (SCC) and the investigation of the compensation process are of central interest for the low energy transport of pulsed high-perveance ion beams under space-charge-compensated conditions. To investigate the rise of compensation experimentally, an LEBT system consisting of a pulsed ion source, two solenoids and a drift tube serving as diagnostic section has been set up. The beam potential has been measured time-resolved by a residual gas ion energy analyser (RGA). A numerical simulation for the calculation of self-consistent equilibrium states of the beam plasma has been developed to determine plasma parameters which are difficult to measure directly. The results of the simulation have been compared with the measured data to investigate the behaviour of the compensation electrons as a function of time. The acquired data show that the theoretical rise time of space charge compensation is shorter by a factor of two than the build-up time determined experimentally. With a view to describing the process of SCC, an interpretation of the results is given.
High-perveance negative ion beams with low emittance are essential for several next-generation particle accelerators (e.g. spallation sources like ESS [1] and SNS [2]). The extraction and transport of these beams involve intrinsic difficulties different from those of positive ion beams: limitations of the beam current and emittance growth have to be avoided. To fulfil the requirements of those projects, a detailed knowledge of the physics of beam formation, of the interaction of the H- ions with the residual gas, and of beam transport is essential. A compact cesium-free H- volume source delivering a low energy, high-perveance beam (6.5 keV, 2.3 mA, perveance K = 0.0034) has been built to study the fundamental physics of beam transport and will be integrated into the existing LEBT section in the near future. First measurements of the interaction between the ion beam and the residual gas will be presented together with the experimental set-up and preliminary results.
To investigate the space charge compensation process due to residual gas ionization and to study the rise of compensation experimentally, a Low Energy Beam Transport (LEBT) system consisting of an ion source, two solenoids, a decompensation electrode generating a pulsed decompensated ion beam, and a diagnostic section was set up. The potentials on the beam axis and at the beam edge were determined from time-resolved measurements with a residual gas ion energy analyzer. A numerical simulation of self-consistent equilibrium states of the beam plasma has been developed to determine plasma parameters which are difficult to measure directly. The temporal development of the kinetic and potential energy of the compensation electrons has been analyzed using the numerical results of the simulation. To investigate the compensation process, the distribution and the losses of the compensation electrons were studied as functions of time. The acquired data show that the theoretically estimated rise time of space charge compensation, neglecting electron losses, is shorter than the build-up time determined experimentally. To describe the process of space charge compensation, an interpretation of the results is given.
Low energy beam transport (LEBT) for a future heavy ion driven inertial fusion (HIDIF [1]) facility is a crucial point, using a Bi+ beam of 40 mA at 156 keV. High space charge forces (generalized perveance K = 3.6*10^-3) restrict the use of electrostatic focusing systems. On the other hand, magnetic lenses relying on space charge compensation suffer from the low particle velocity. Additionally, the emittance requirements are very demanding in order to avoid particle losses in the linac and at ring injection [2]. Furthermore, source noise and the rise time of space charge compensation [3] might enhance particle losses and emittance. Gabor lenses [4], which use a continuous space charge cloud for focusing, could be a serious alternative to conventional LEBT systems. They combine strong, cylindrically symmetric focusing with partial space charge compensation and low emittance growth due to weaker nonlinear fields. High tolerance against source noise and current fluctuations and reduced investment costs are further possible advantages. The proof of principle has already been given [5, 6]. To broaden the experience, an experimental programme was started. The first experimental results obtained with a double Gabor lens (DGPL, see fig. 1) LEBT system transporting a high-perveance Xe+ beam will be presented, together with the results of numerical simulations.
The determination of the beam emittance using conventional destructive methods suffers from two main disadvantages. First, the interaction between the ion beam and the measurement device produces a large number of secondary particles; these particles interact with the beam and can change the transport properties of the accelerator. Second, particularly in the low energy sections of high current accelerators such as those proposed for IFMIF, heavy ion inertial fusion devices (HIDIF) and spallation sources (ESS, SNS), the power deposited on the emittance measurement device can lead to excessive heating of the detector and can destroy, or at least misalign, the device (a slit or grid, for example). CCD camera measurements of the light emitted in the interaction of beam ions with the residual gas are commonly used for the determination of the beam emittance; fast data acquisition and high time resolution are additional features of this method. So far, a matrix formalism has been used to derive the emittance from the measured beam profiles [1,2], which does not take space charge effects and emittance growth into account. A new method to derive the phase space distribution of the beam from a single CCD camera image using statistical numerical methods will be presented together with measurements. The results will be compared with measurements obtained with a conventional Allison-type (slit-slit) emittance measurement device.
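One standard instance of such a matrix formalism relates the rms beam sizes measured for several transport settings to the beam sigma matrix at a reference plane via the transfer matrix elements, and solves for the sigma matrix by least squares. A minimal sketch, assuming simple drifts and made-up measured sizes (and, as the text notes, no space charge):

```python
import numpy as np

def transfer_drift(L):
    """2x2 transfer matrix of a drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

# Hypothetical settings (drift lengths) and measured squared rms sizes.
settings = [0.5, 1.0, 1.5, 2.0]               # [m]
sizes2 = [4.25e-6, 5.0e-6, 6.25e-6, 8.0e-6]   # sigma_x^2 at the screen [m^2]

# sigma11 at the screen obeys: R11^2*s11 + 2*R11*R12*s12 + R12^2*s22
A = []
for L in settings:
    R11, R12 = transfer_drift(L)[0]
    A.append([R11**2, 2 * R11 * R12, R12**2])
s11, s12, s22 = np.linalg.lstsq(np.array(A), np.array(sizes2), rcond=None)[0]

emit_rms = np.sqrt(s11 * s22 - s12**2)        # rms emittance [m rad]
print(f"rms emittance: {emit_rms:.2e} m rad") # 2e-6 for this toy data
```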
Investigation of the focus shift due to compensation process for low energy ion beam transport
(2000)
In magnetic Low Energy Beam Transport (LEBT) sections, space charge compensation helps to enhance the transportable beam current and to reduce emittance growth due to space charge forces. For pulsed beams, the time necessary to establish space charge compensation is of great interest for beam transport. Particularly with regard to beam injection into the first accelerator section (e.g. an RFQ), the investigation of the shift of the beam focus due to space charge compensation is very important; the results help to avoid a mismatch at injection into the first RFQ. To investigate the space charge compensation due to residual gas ionization, time-resolved measurements using pulsed ion beams were performed at the LEBT system at the IAP and at the CEA-Saclay injection line. A residual gas ion energy analyser (RGIA) equipped with a channeltron was used to measure the potential distribution as a function of time in order to estimate the rise time of compensation. For time-resolved measurements (delta t min = 50 ns) of the radial density profile of the ion beam, a CCD camera was applied. The measured data were used in a numerical simulation of self-consistent equilibrium states of the beam plasma [1] to determine plasma parameters such as the density, the temperature, and the kinetic and potential energy of the compensation electrons as functions of time. Measurements were performed with focused proton beams (10 keV, 2 mA at the IAP and 92 keV, 62 mA at CEA-Saclay) to gain a better understanding of the compensation process. An interpretation of the acquired data and the achieved results will be presented.
Influence of space charge fluctuations on the low energy beam transport of high current ion beams
(2000)
For future high current ion accelerators like SNS, ESS or IFMIF, the beam behaviour in low energy beam transport sections is dominated by space charge forces. Space charge fluctuations (e.g. source noise) can therefore drastically influence the transport properties of the low energy beam transport section; losses of beam ions and emittance growth are the most severe problems. For electrostatic transport systems, either a LEBT design has to be found which is insensitive to variations of the space charge, or the origin of the fluctuations has to be eliminated. For space-charge-compensated transport, as proposed for ESS and IFMIF, the situation is different: no major influence on beam transport is expected for fluctuations below a cut-off frequency given by the production rate of the compensation particles. Above this frequency the fluctuations cannot be compensated by particle production alone, but redistribution of the compensation particles helps to compensate their influence. Above a second cut-off frequency, given by the density and the temperature of the compensation particles, their redistribution is too slow to reduce the influence of the space charge fluctuations. Transport simulations for the IFMIF injector including space charge fluctuations will be presented together with a determination of the cut-off frequencies. The results will be compared with measurements of the rise time of space charge compensation.
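The two cut-off frequencies can be estimated from standard expressions: the lower one from the production rate of compensation electrons by residual gas ionization, the upper one from the plasma frequency of the accumulated electron cloud. A rough sketch with assumed beam and vacuum parameters (all values illustrative):

```python
import numpy as np

e, m_e, eps0, kB = 1.602e-19, 9.109e-31, 8.854e-12, 1.381e-23

# --- lower cut-off: production rate of compensation electrons ---------
p, T_gas = 1e-3, 300.0                  # residual gas pressure [Pa], temp [K]
n_gas = p / (kB * T_gas)                # gas density [1/m^3]
sigma_ion = 1.5e-20                     # ionization cross section [m^2] (assumed)
v_beam = 1.4e6                          # beam ion velocity [m/s] (assumed)
rate_prod = n_gas * sigma_ion * v_beam  # electron production rate [1/s];
                                        # its inverse sets the compensation
                                        # build-up time scale

# --- upper cut-off: plasma frequency of the electron cloud ------------
n_e = 1e14                              # compensation electron density (assumed)
f_plasma = np.sqrt(n_e * e**2 / (eps0 * m_e)) / (2 * np.pi)

print(f"production rate ~ {rate_prod:.1e} 1/s (build-up ~ {1/rate_prod:.1e} s)")
print(f"plasma frequency ~ {f_plasma:.1e} Hz")
```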
New results on the production of Xi and Omega hyperons in Pb+Pb interactions at 40 A GeV and Lambda at 30 A GeV are presented. Transverse mass spectra as well as rapidity spectra of these hyperons are shown and compared to previously measured data at different beam energies. The energy dependence of hyperon production (4 pi yields) is discussed. Additionally, the centrality dependence of Xi- production at 40 A GeV is presented.
First results on the production of Xi- and Anti-xi hyperons in Pb+Pb interactions at 40 A GeV are presented. The Anti-xi/Xi- ratio at midrapidity is studied as a function of collision centrality. The ratio shows no significant centrality dependence within statistical errors; it ranges from 0.07 to 0.15. The Anti-xi/Xi- ratio for central Pb+Pb collisions increases strongly with the collision energy.
A LEBT system consisting of an ion source, two solenoids, and a diagnostic section has been set up to investigate the space charge compensation process due to residual gas ionization [1] and to study the rise of compensation experimentally. To obtain the radial beam potential distribution, time-resolved measurements of the residual gas ion energy distribution were carried out using a Hughes-Rojanski analyzer [2,3]. The radial density profile of the ion beam was measured time-resolved with a CCD camera, which allows an estimation of the rise time of compensation. Furthermore, the dynamic effect of the space charge compensation on the beam transport was shown. A numerical simulation under the assumption of self-consistent states [4] of the beam plasma has been used to determine plasma parameters such as the radial density profile and the temperature of the electrons. The acquired data show that the theoretically estimated rise time of space charge compensation, neglecting electron losses, is shorter than the build-up time determined experimentally. An interpretation of the results is given.
To fulfil the requirements of ESS on beam transmission and emittance growth, a detailed knowledge of the physics of beam formation as well as of the interaction of the H- ions with the residual gas is essential. Space-charge-compensated beam transport using solenoids for the ion optics is favoured for the Low Energy Beam Transport (LEBT) between the ion source and the first RFQ. Space charge compensation reduces the electrical self-fields and beam radii and therefore the emittance growth due to aberrations and redistribution. The transport of H- near the ion source is negatively influenced by the dipole fields required for beam extraction and electron dumping, and by the high gas pressure. The destruction of the rotational symmetry together with the space charge forces causes emittance growth and particle losses within the extraction system. The high residual gas pressure near the extractor, together with the high cross section for stripping, influences the transmission as well as the space charge compensation. Therefore a detailed knowledge of the interaction of the residual gas with the beam, and of the influence of the external fields on the distribution of the compensation particles, is necessary to reduce particle losses and emittance growth. Preliminary experiments using positive hydrogen ions for reference already show the influence of dipole fields on the beam emittance. First measurements with H- confirm these results. Additional information on the interactions of the residual gas with the beam ions has been gained from measurements using the momentum and energy analyser.
Talk given at the XVI International Conference on Ultrarelativistic Nucleus-Nucleus Collisions, organized by the SUBATECH Laboratory, Nantes, France, 18-24 July 2002.
Rapidity distributions for Lambda and anti-Lambda hyperons in central Pb-Pb collisions at 40, 80 and 158 AGeV and for K0s mesons at 158 AGeV are presented. The Lambda multiplicities are studied as a function of collision energy together with AGS and RHIC measurements and compared to model predictions. A different energy dependence of the Lambda/pi and anti-Lambda/pi ratios is observed. The anti-Lambda/Lambda ratio shows a steep increase with collision energy. Evidence for an anti-Lambda/anti-p ratio greater than 1 is found at 40 AGeV.
The NA49 experiment at the CERN SPS is a large acceptance detector for charged hadrons. The identification of the neutral strange hadrons Lambda and Anti-Lambda is based on the measurement of their charged decay particles and the reconstruction of the decay vertex. The charged particles are measured with four time projection chambers (TPCs), two of which are situated inside two large dipole magnets, while the other two are downstream of the magnets. Lambda and Anti-Lambda baryons have been measured in central Pb+Pb collisions at 40, 80 and 160 GeV/nucleon over a wide range in rapidity (1-5) and transverse momentum (0-3 GeV/c). Particle yields and spectra will be shown for the different energies. The results will be put into the existing systematics of Lambda production as a function of beam energy.
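The V0 identification sketched above amounts to computing the invariant mass of oppositely charged track pairs under the proton/pion mass hypotheses and selecting candidates near the Lambda mass. A minimal sketch with toy momenta (not NA49 code):

```python
import numpy as np

# Invariant-mass identification of Lambda -> p + pi- candidates from the
# momenta of the two charged decay tracks (GeV units; toy numbers).
M_P, M_PI, M_LAMBDA = 0.93827, 0.13957, 1.11568

def inv_mass(p_proton, p_pion):
    """Invariant mass of a two-track V0 candidate, assigning the proton
    and pion mass hypotheses to the two tracks."""
    p1, p2 = np.asarray(p_proton), np.asarray(p_pion)
    e1 = np.sqrt(M_P**2 + p1 @ p1)
    e2 = np.sqrt(M_PI**2 + p2 @ p2)
    p_tot = p1 + p2
    return np.sqrt((e1 + e2)**2 - p_tot @ p_tot)

# a toy candidate: mostly longitudinal momenta, small opening angle
m = inv_mass([0.1, 0.0, 3.0], [-0.05, 0.02, 0.9])
print(f"candidate mass: {m:.4f} GeV (Lambda at {M_LAMBDA} GeV)")
```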
In this paper we present recent results from the NA49 experiment for Lambda and Anti-Lambda hyperons produced in central Pb+Pb collisions at 40, 80 and 158 A GeV. Transverse mass spectra and rapidity distributions for Lambda are shown for all three energies. The shape of the rapidity distribution becomes flatter with increasing beam energy. The multiplicities at mid-rapidity as well as the total yields are studied as a function of collision energy, including AGS measurements. The Lambda/pi ratio at mid-rapidity and in 4 pi has a maximum around 40 A GeV. In addition, Anti-Lambda rapidity distributions have been measured at 40 and 80 A GeV, which makes it possible to study the Anti-Lambda/Lambda ratio.
Excitation functions for quasi-elastic scattering have been measured at backward angles for the systems 32,34S+197Au and 32,34S+208Pb at energies spanning the Coulomb barrier. Representative distributions, sensitive to the low energy part of the fusion barrier distribution, have been extracted from the data. For the fusion reactions of 32,34S with 197Au, couplings related to the nuclear structure of 197Au appear to be dominant in shaping the low energy part of the barrier distribution. For the system 32S+208Pb the barrier distribution is broader and extends further to lower energies than in the case of 34S+208Pb. This is consistent with the interpretation that the neutron pick-up channels are energetically more favoured in the 32S-induced reaction and therefore couple more strongly to the relative motion. It may also be due to the increased collectivity of 32S compared with 34S.
See also the German version: Ökonomie der Gabe - Positivität der Gerechtigkeit: Gegenseitige Heimsuchungen von System und différance. In: Albrecht Koschorke and Cornelia Vismann (eds.), System - Macht - Kultur: Probleme der Systemtheorie. Akademie, Berlin 1999, 199-212; also available on our server. Italian version: Economia del dono, positività della giustizia: la reciproca paranoia di Jacques Derrida e Niklas Luhmann. Sociologia e politiche sociali 6, 2003, 113-130. Portuguese version: Economia da dádiva - positividade da justiça: 'assombração' mútua entre sistema e différance. In: Gunther Teubner, Direito, Sistema, Policontexturalidade. Editora Unimep, Piracicaba, São Paulo, Brazil 2005, 55-78.
Globalized justice - fragmented justice. Human rights violations by "private" transnational actors
(2005)
Plenary lecture at the World Congress of Legal and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch 'private' transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society. International Association for Philosophy of Law and Social Philosophy, Granada 2005, 529-546.
"Eurocomprehension" is the term used to describe European intercomprehension in Europe’s three major language families, the Romance, the Slavic and the Germanic. The aim of eurocomprehension is to achieve multilingualism conforming to EU language policy goals through the entry-point of receptive competence in a modular structure. Linguistic intercomprehension research forms the transfer bases for the cognitive use of relations between the language groups which didactics of multilingualism implement. ...
German version: Expertise als soziale Institution: Die Internalisierung Dritter in den Vertrag. In: Gert Brüggemeier (ed.), Liber Amicorum Eike Schmidt. Müller, Heidelberg 2005, 303-334.
German version: Vertragswelten: Das Recht in der Fragmentierung von private governance regimes. Rechtshistorisches Journal 17, 1998, 234-265. Italian version: Mondi contrattuali. Discourse rights nel diritto privato. In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Naples 1999, 113-142. Portuguese version: Mundos contratuais: o direito na fragmentação de regimes de private governance. In: Gunther Teubner, Direito, Sistema, Policontexturalidade. Editora Unimep, Piracicaba, São Paulo, Brazil 2005, 269-298.
See also the German version: Rechtshistorisches Journal 15, 1996, 255-290, and in: Eric Schwarz (ed.), La théorie des systèmes: une approche inter- et transdisciplinaire. Bösch, Sion 1996, 101-119. Italian version: La Bukowina globale: il pluralismo giuridico nella società mondiale. Sociologia e politiche sociali 2, 1999, 49-80. Portuguese version: Bukowina global: sobre a emergência de um pluralismo jurídico transnacional. Impulso: Direito e Globalização 14, 2003. Georgian version: Globaluri bukovina: samarTlebrivi pluralizmi msoflio sazogadoebaSi. Journal of the Institute of State and Law of the Georgian Academy of Sciences 2005 (in press).
See also the German version: Archiv für Rechts- und Sozialphilosophie, Beiheft 65, 1996, 199-220. Italian version: Altera pars audiatur: Il diritto nella collisione dei discorsi. In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Naples 1999, 27-70. French version: Altera pars audiatur: le droit dans la collision des discours. Droit et Société 35, 1997, 99-123. Portuguese version: Altera pars audiatur: o direito na colisão de discursos. In: J.A. Lindgren Alves, Gunther Teubner, Joaquim Leonel de Rezende Alvim and Dorothe Susanne Rüdiger, Direito e Cidadania na Pós-Modernidade. Editora Unimep, Piracicaba, Brazil 2002, 93-129.
Reflexives Recht: Entwicklungsmodelle des Rechts in vergleichender Perspektive (EUI Working Paper 1982/13). Archiv für Rechts- und Sozialphilosophie 68, 1982, 13-59, and in: Werner Maihofer (ed.), Noi si Mura. Schriftenreihe des Europäischen Hochschulinstituts, Florence 1986, 290-340. English version: Substantive and Reflexive Elements in Modern Law (EUI Working Paper 1982/14). Law and Society Review 17, 1983, 239-285, and in: Kahei Rokumoto (ed.), Sociological Theories of Law. Dartmouth, Aldershot 1994, 415-462. Reprinted in: Carroll Seron, The Law and Society Canon. Ashgate, Aldershot 2005 (in press). French version: Eléments 'substantifs' et 'réflexifs' dans le droit moderne. L'Interdit. Revue de Psychanalyse Institutionelle, 1984, 129-132, and: Droit et réflexivité: une perspective comparative sur des modèles d'évolution juridique. In: Gunther Teubner, Droit et réflexivité. Librairie générale de droit et de jurisprudence, Paris 1994, 3-50. Danish version: Refleksiv Ret: Udviklingsmodeller i sammenlignende perspektiv. In: Asmund Born, Nils Bredsdorff, Leif Hansen and Finn Hansson (eds.), Refleksiv Ret. Publication Series of the Institut for Organisation og Arbeidssociologi. Nytfrasamfundsvidenskaberne, Copenhagen 1988, 21-79.
We introduce a new method for representing and solving a general class of non-preemptive resource-constrained project scheduling problems. The new approach is to represent scheduling problems as descriptions (activity terms) in a language called RSV, which allows nested expressions using pll, seq, and xor. The activity terms of RSV are similar to concepts in a description logic. The language RSV generalizes previous approaches to scheduling with variants insofar as it permits xor's not only of atomic activities but also of arbitrary activity terms. A semantics that assigns to each activity term its set of active schedules establishes the correctness of a calculus that normalizes RSV activity terms, similar to propositional DNF computation. Based on RSV, this paper describes a diagram-based algorithm for the RSV problem which uses a scan-line principle. The scan-line principle is used for determining and resolving the occurring resource conflicts and leads to a nonredundant generation of all active schedules and thus to a computation of the optimal schedule.
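The normalization that the semantics justifies behaves like a propositional DNF computation: xor is distributed outward through seq and pll until every alternative is xor-free. A minimal sketch of activity terms and this normalization step; the constructor names follow the pll/seq/xor of the text, everything else is illustrative:

```python
from dataclasses import dataclass
from itertools import product

# Activity terms of an RSV-like language: atoms combined with seq, pll, xor.
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Seq:
    parts: tuple

@dataclass(frozen=True)
class Pll:
    parts: tuple

@dataclass(frozen=True)
class Xor:
    parts: tuple

def normalize(term):
    """Return the xor-free alternatives of a term, analogous to DNF:
    xor is pushed outward by distributing over seq and pll."""
    if isinstance(term, Atom):
        return [term]
    if isinstance(term, Xor):
        return [alt for part in term.parts for alt in normalize(part)]
    ctor = Seq if isinstance(term, Seq) else Pll
    alts_per_child = [normalize(part) for part in term.parts]
    return [ctor(tuple(combo)) for combo in product(*alts_per_child)]

# seq(a, xor(b, pll(c, d))) normalizes to two xor-free variants:
term = Seq((Atom("a"), Xor((Atom("b"), Pll((Atom("c"), Atom("d")))))))
for alt in normalize(term):
    print(alt)
```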
In the last years, much effort went into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with the performance of its manually designed ancestor ROSANA.
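The division of labour described here, hand-coded universally valid filters followed by a learned selection step, can be sketched compactly. The features, weights and candidate representation below are illustrative, not ROSANA-ML's actual design:

```python
# Hand-coded antecedent *filtering* plus learned antecedent *selection*
# (all names and the scoring weights are illustrative).
def agreement_filter(anaphor, candidate):
    """Once-for-all filter: morphological agreement in number and gender
    (syntactic disjoint reference would be checked here as well)."""
    return (anaphor["number"] == candidate["number"]
            and anaphor["gender"] == candidate["gender"])

def learned_score(anaphor, candidate, weights=(-0.8, 1.2)):
    """Stand-in for a machine-learned ranker over simple features:
    sentence distance and grammatical-role parallelism."""
    w_dist, w_par = weights
    dist = anaphor["sent"] - candidate["sent"]
    parallel = 1.0 if anaphor["role"] == candidate["role"] else 0.0
    return w_dist * dist + w_par * parallel

def resolve(anaphor, candidates):
    admissible = [c for c in candidates if agreement_filter(anaphor, c)]
    return max(admissible, key=lambda c: learned_score(anaphor, c), default=None)

ana = {"number": "sg", "gender": "f", "sent": 3, "role": "subj"}
cands = [{"id": "Mary", "number": "sg", "gender": "f", "sent": 2, "role": "subj"},
         {"id": "men", "number": "pl", "gender": "m", "sent": 2, "role": "obj"}]
print(resolve(ana, cands)["id"])   # -> Mary
```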
In the last decade, much effort went into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied. The individual modules should only deal with, and hence recognize, the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high precision robust knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach. Which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
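The precision-over-recall idea behind the salience threshold approach can be stated in a few lines: abstain whenever even the most salient candidate does not clear a confidence threshold. A minimal sketch with illustrative scores:

```python
# High-precision anaphor resolution via a salience threshold: resolve a
# pronoun only when the system is competent, i.e. when the best
# candidate's salience clears a threshold (values illustrative).
def resolve_high_precision(candidates, threshold=0.7):
    """candidates: list of (antecedent, salience in [0, 1]).
    Returns the best antecedent, or None (abstain) below the threshold;
    abstaining trades recall for precision."""
    if not candidates:
        return None
    best, score = max(candidates, key=lambda c: c[1])
    return best if score >= threshold else None

print(resolve_high_precision([("the report", 0.9), ("the minister", 0.4)]))
print(resolve_high_precision([("it", 0.5)]))  # abstains -> None
```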
Assessing enhanced knowledge discovery systems (eKDSs) is an intricate issue that is as yet understood only to a certain extent. Based upon an analysis of why it is difficult to formally evaluate eKDSs, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute for, the user in the exploration of the data. A qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. To deal with this problem, the construction of a best-practice model for eKDSs is advocated. After a brief recapitulation of similar work on spoken language dialogue systems, first steps towards this goal are taken, and directions of future research are outlined.
Syntactic coindexing restrictions are by now known to be of central importance for practical anaphor resolution approaches. Since, particularly due to structural ambiguity, the assumption that a unique syntactic reading is available proves unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency. In this paper, two approaches are presented which generalize the verification of coindexing constraints to deficient descriptions. First, a partly heuristic method is described, which has been implemented. Second, a provably complete method is specified; it provides the means to exploit the results of anaphor resolution for further structural disambiguation. By rendering a parallel processing model possible, this method exhibits, in a general sense, a higher degree of robustness. As a practically optimal solution, a combination of the two approaches is suggested.