Configuration, simulation and visualization of simple biochemical reaction-diffusion systems in 3D
(2004)
Background: In biological systems, molecules of different species diffuse within the reaction compartments and interact with each other, ultimately giving rise to complex structures such as living cells. In order to investigate the formation of subcellular structures and patterns (e.g. in signal transduction) or spatial effects in metabolic processes, it is helpful to simulate such reaction-diffusion systems. Pattern formation has been studied extensively in two dimensions; the extension to three-dimensional reaction-diffusion systems, however, poses challenges for the visualization of the processes being simulated.
Scope of the Thesis: The aim of this thesis is the specification and development of algorithms and methods for the three-dimensional configuration, simulation and visualization of biochemical reaction-diffusion systems consisting of a small number of molecules and reactions. After an initial review of the existing literature on 2D/3D reaction-diffusion systems, a 3D simulation algorithm (PDE solver) has to be developed, based on an existing 2D simulation algorithm for reaction-diffusion systems written by Prof. Herbert Sauro. In a succeeding step, this algorithm has to be optimized for high performance. A prototypical 3D configuration tool for the initial state of the system has to be developed; this basic tool should enable the user to define and store the locations of molecules, membranes and channels within a reaction space of user-defined size. A suitable data structure has to be defined for the representation of the reaction space. The main focus of this thesis is the specification and prototypical implementation of a suitable reaction space visualization component for displaying the simulation results. In particular, the possibility of 3D visualization during the course of the simulation has to be investigated. During the development phase, the quality and usability of the visualizations have to be evaluated in user tests. The simulation, configuration and visualization prototypes should be compliant with the Systems Biology Workbench to ensure compatibility with software from other authors. The thesis is carried out in close cooperation with Prof. Herbert Sauro at the Keck Graduate Institute, Claremont, CA, USA. Due to this international cooperation the thesis is written in English.
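As an illustration of the kind of 3D solver described above, here is a minimal sketch of an explicit finite-difference reaction-diffusion step on a cubic grid, using Gray-Scott kinetics as a stand-in for the thesis' reaction system (the actual algorithm, based on Prof. Sauro's 2D solver, is not reproduced here; all parameter values are illustrative):

```python
import numpy as np

def laplacian(a, h):
    # 7-point stencil with periodic boundaries via np.roll
    lap = -6.0 * a
    for axis in range(3):
        lap += np.roll(a, 1, axis=axis) + np.roll(a, -1, axis=axis)
    return lap / h**2

def step(u, v, Du, Dv, f, k, dt, h):
    # Gray-Scott kinetics: U + 2V -> 3V, with feed rate f and kill rate k
    uvv = u * v * v
    u += dt * (Du * laplacian(u, h) - uvv + f * (1.0 - u))
    v += dt * (Dv * laplacian(v, h) + uvv - (f + k) * v)
    return u, v

# 64^3 reaction space, seeded with a small perturbed cube of species V
n, h, dt = 64, 1.0, 1.0
u = np.ones((n, n, n)); v = np.zeros((n, n, n))
c = slice(n // 2 - 4, n // 2 + 4)
u[c, c, c], v[c, c, c] = 0.5, 0.25
for _ in range(1000):
    u, v = step(u, v, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=dt, h=h)
print("u range:", u.min(), u.max())
```

The explicit scheme is only stable for dt below roughly h^2/(6 Du); the values above respect that bound.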
We investigate the sensitivity of several observables to the density dependence of the symmetry potential within the microscopic transport model UrQMD (ultrarelativistic quantum molecular dynamics model). The same systems are used to probe the symmetry potential at both low and high densities. The influence of the symmetry potentials on the yields of pi-, pi+, the pi-/pi+ ratio, the n/p ratio of free nucleons and the t/3He ratio is studied for neutron-rich heavy ion collisions (208Pb+208Pb, 132Sn+124Sn, 96Zr+96Zr) at E_b = 0.4A GeV. We find that these multiple probes provide comprehensive information on the density dependence of the symmetry potential.
DCD – a novel plant specific domain in proteins involved in development and programmed cell death
(2005)
Background: Recognition of microbial pathogens by plants triggers the hypersensitive reaction, a common form of programmed cell death in plants. These dying cells generate signals that activate the plant immune system and alarm the neighboring cells as well as the whole plant to activate defense responses to limit the spread of the pathogen. The molecular mechanisms behind the hypersensitive reaction are largely unknown except for the recognition process of pathogens. We delineate the NRP-gene in soybean, which is specifically induced during this programmed cell death and contains a novel protein domain, which is commonly found in different plant proteins.
Results: The sequence analysis of the protein encoded by the NRP-gene from soybean led to the identification of a novel domain, which we named DCD because it is found in plant proteins involved in development and cell death. The domain is shared by several proteins in the Arabidopsis and the rice genomes, which otherwise show a different protein architecture. Biological studies indicate a role of these proteins in phytohormone response, embryo development and programmed cell death induced by pathogens or ozone.
Conclusion: It is tempting to speculate that the DCD domain mediates signaling in plant development and programmed cell death and could thus be used to identify interacting proteins to gain further molecular insights into these processes.
Background: Osteoarthritis (OA) has a high prevalence in primary care. Conservative, guideline-orientated approaches aiming at improving pain treatment and increasing physical activity have been proven to be effective in several contexts outside the primary care setting, for instance the Arthritis Self-Management Programs (ASMPs). It remains unclear, however, whether these comprehensive evidence-based approaches can improve patients' quality of life when they are provided in a primary care setting.
Methods/Design: PraxArt is a cluster randomised controlled trial with GPs as the unit of randomisation. The aim of the study is to evaluate the impact of a comprehensive evidence-based medical education of GPs on individual care and patients' quality of life. 75 GPs were randomised either to intervention group I or II or to a control group. Each GP will include 15 patients suffering from osteoarthritis according to the criteria of the ACR. In intervention group I, GPs will receive medical education and patient education leaflets including a physical exercise program. In intervention group II the same is provided, but in addition a practice nurse will be trained to monitor, via monthly telephone calls, adherence to the GP's prescriptions and advice, and to ask about increasing pain and possible side effects of medication. In the control group no intervention will be applied at all. The main outcome measure for patients' QoL is the GERMAN-AIMS2-SF questionnaire. In addition, data about patients' satisfaction (using a modified EUROPEP tool), medication, health care utilization, comorbidity, physical activity and depression (using the PHQ-9) will be retrieved. Measurements (pre-data collection) will take place in months I-III, starting in June 2005. Post-data collection will be performed after 6 months.
Discussion: Despite the high prevalence and increasing incidence, comprehensive and evidence-based treatment approaches for OA in a primary care setting are neither established nor evaluated in Germany. If the evaluation of the presented approach reveals a clear benefit, it is planned to provide these GP-centred interventions on a much larger scale.
We present a detailed study of chemical freeze-out in nucleus-nucleus collisions at beam energies of 11.6, 30, 40, 80 and 158A GeV. By analyzing hadronic multiplicities within the statistical hadronization approach, we have studied the chemical equilibration of the system as a function of center of mass energy and of the parameters of the source. Additionally, we have tested and compared different versions of the statistical model, with special emphasis on possible explanations of the observed under-saturation of the strangeness hadronic phase space.
Cancer has become one of the most fatal diseases. The Heidelberg Heavy Ion Cancer Therapy (HICAT) facility has the potential to become an important and efficient treatment method because of its excellent "Bragg peak" characteristics and on-line irradiation control by PET diagnostics. The dedicated Heidelberg Heavy Ion Cancer Therapy Project includes two ECR ion sources, an RF linear injector, a synchrotron and three treatment rooms. It will deliver 4×10^10 protons, 1×10^10 He ions, 1×10^9 carbon ions, or 5×10^8 oxygen ions per synchrotron cycle at beam energies of 50-430 AMeV for the treatments. The RF linear injector consists of a 400 AkeV RFQ and a very compact 7 AMeV IH-DTL accelerator operated at 216.816 MHz. The development of the IH-DTL within the HICAT project is a great challenge with respect to the present state of the DTL art for the following reasons:
• the highest operating frequency (216.816 MHz) of all IH-DTL cavities;
• an extremely large cavity length-to-diameter ratio of about 11;
• an IH-DTL with three internal triplets;
• the highest effective voltage gain per meter (5.5 MV/m);
• a very short MEBT design for the beam matching.
The following achievements were reached during the development of the IH-DTL injector for HICAT: The KONUS beam dynamics design with the LORASR code fulfills the beam requirements of the HICAT synchrotron at the injection point. The simulations for the IH-DTL injector have been performed not only with a homogeneous input beam, but also with the actual particle distribution from the exit of the HICAT RFQ accelerator as delivered by the PARMTEQ code. The output longitudinal normalized emittance for 95% of all particles is 2.00 AkeV ns, with an emittance growth of less than 24%, while the X-X' and Y-Y' normalized emittances are 0.77 mm mrad and 0.62 mm mrad, respectively. The emittance growth in X-X' is less than 18%, and in Y-Y' less than 5%. Based on the transverse envelopes of the transported particles, the buncher drift tubes at the RFQ high energy end were redesigned to obtain a higher transit time factor for this novel RFQ-internal buncher. An optimized effective buncher gap voltage of 45.4 kV has been calculated to deliver a minimized longitudinal beam emittance, while the influence of the effective buncher voltage on the transverse emittance can be neglected. Six different tuning concepts were investigated in detail while tuning the 1:2 scaled HICAT IH model cavity. 'Volume tuning' by a variation of the cavity cross-sectional area can compensate the unbalanced capacitance distribution in case of an extreme beta-lambda variation along an IH cavity. 'Additional capacitance plates', i.e. copper sheets clamped on drift tube stems, are a fast way of checking the tuning sensitivity, but they will finally be replaced by massive copper blocks mounted on the drift tube girders. 'Lens coupling' is an important tuning to stabilize the operating mode and to increase or decrease the coupling between neighboring sections. 'Tube tuning' is the fine tuning concept and also the standard tuning method to reach the needed field distributions as well as the gap voltage distributions. 'Undercut tuning' is a very sensitive tuning for the end sections and for the voltage distribution balance along the structure. The different types of 'plungers' in the 3rd and 4th sections have different effects on the resonance frequency and on the field distribution.
The different triplet stems and the geometry of the cavity ends have also been investigated to reach the design field and voltage distributions. Finally, the needed uniform field distribution along the IH-DTL cavity and the corresponding effective voltage distribution were realized; the remaining maximum gap voltage difference was less than 5% for the model cavity. Several important higher order modes were also measured. The RF tuning of the IH-DTL model cavity delivers the final geometry parameters of the IH-DTL power cavity. A rectangular cavity cross section was adopted for the first time for this IH-DTL cavity; this eases the realization of the volume tuning concept in the 1st and 2nd sections. Lens coupling determines the final distance between the triplet and the girder. The triplets are mounted on the lower cavity half shell. Microwave Studio simulations have been carried out not only for the HICAT model cavity, but also for the final geometry of the IH-DTL power cavity. The field distribution of the H110 operating mode, as well as the higher order modes, fits the model cavity measurements. The simulations confirm the IH-DTL geometrical design. On the other hand, the precision of a simulation with 2.3 million mesh points for the full cross-sectional area, and a CPU time of more than 15 hours on a DELL PC with a 2.4 GHz Intel Pentium 4 and 2.096 GB RAM, were exploited to their limits when calculating the real parameters for the two final machining iterations during production. The shunt impedance of the IH-DTL power cavity is estimated by comparison with existing tanks to be about 195.8 MΩ/m, which fits the simulation result of 200.3 MΩ/m when the conductivity is reduced to 5.0×10^7 Ω^-1 m^-1. The effective shunt impedance is 153 MΩ/m. The needed RF power is 755 kW. The expected quality factor of the IH-DTL cavity is about 15600. The tuning measurements of the IH-DTL power cavity before copper plating have been performed; the results are within the specifications. There is no doubt that the needed accuracy of the voltage distribution will be reached with the foreseen fine tuning concepts in the last steps.
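As a plausibility check (not part of the abstract itself): under the standard definition of effective shunt impedance, the quoted RF power, voltage gain and effective shunt impedance are mutually consistent, implying a cavity length of roughly 3.8 m:

```latex
P \;=\; \frac{(E_{\mathrm{eff}}\,L)^2}{Z_{\mathrm{eff}}\,L}
\quad\Longrightarrow\quad
L \;=\; \frac{P\,Z_{\mathrm{eff}}}{E_{\mathrm{eff}}^{2}}
\;=\; \frac{0.755\;\mathrm{MW}\times 153\;\mathrm{M\Omega/m}}{(5.5\;\mathrm{MV/m})^{2}}
\;\approx\; 3.8\;\mathrm{m}.
```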
Fluctuations and NA49
(2005)
A systematic analysis of data on strangeness and pion production in nucleon-nucleon and central nucleus-nucleus collisions is presented. It is shown that at all collision energies the pion/baryon and strangeness/pion ratios indicate saturation with the size of the colliding nuclei. The energy dependence of the saturation level suggests that the transition to the Quark Gluon Plasma occurs between 15 A·GeV/c (BNL AGS) and 160 A·GeV/c (CERN SPS) collision energies. The experimental results interpreted in the framework of a statistical approach show that the effective number of degrees of freedom increases in the course of the phase transition and that the plasma created at CERN SPS energies may have a temperature of about 280 MeV (energy density ~ 10 GeV/fm^3). The presence of the phase transition can lead to a non-monotonic collision energy dependence of the strangeness/pion ratio: after an initial increase the ratio should drop to the value characteristic of the QGP. Above the transition region the ratio is expected to be collision energy independent. Experimental studies of central Pb+Pb collisions in the energy range 20-160 A·GeV/c are urgently needed in order to localize the threshold energy and study the properties of the QCD phase transition.
We argue that the measurement of open charm gives a unique opportunity to test the validity of pQCD-based and statistical models of nucleus-nucleus collisions at high energies. We show that various approaches used to estimate D-meson multiplicity in central Pb+Pb collisions at 158 A GeV give predictions which differ by more than a factor of 100. Finally we demonstrate that decisive experimental results concerning the open charm yield in A+A collisions can be obtained using data of the NA49 experiment at the CERN SPS.
Under a conventional policy rule, a central bank adjusts its policy rate linearly according to the gap between inflation and its target, and the gap between output and its potential. Under "the opportunistic approach to disinflation" a central bank controls inflation aggressively when inflation is far from its target, but concentrates more on output stabilization when inflation is close to its target, allowing supply shocks and unforeseen fluctuations in aggregate demand to move inflation within a certain band. We use stochastic simulations of a small-scale rational expectations model to contrast the behavior of output and inflation under opportunistic and linear rules. Classification: E31, E52, E58, E61. July 2005.
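A stylized sketch of the contrast between the two rules, with invented coefficients and band width (the paper's small-scale rational expectations model is not reproduced here):

```python
def linear_rule(pi, pi_star, y_gap, r_star=2.0, a=1.5, b=0.5):
    # Conventional (Taylor-type) rule: linear in both gaps
    return r_star + pi + a * (pi - pi_star) + b * y_gap

def opportunistic_rule(pi, pi_star, y_gap, r_star=2.0, a=1.5, b=0.5, band=1.0):
    # React to inflation only outside a band around the target;
    # inside the band, concentrate on output stabilization.
    gap = pi - pi_star
    excess = max(abs(gap) - band, 0.0) * (1 if gap > 0 else -1)
    return r_star + pi + a * excess + b * y_gap

for pi in (1.5, 2.5, 4.0):  # inflation inside, at edge of, and outside the band
    print(pi, linear_rule(pi, 2.0, 0.5), opportunistic_rule(pi, 2.0, 0.5))
```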
This paper introduces a method for solving numerical dynamic stochastic optimization problems that avoids rootfinding operations. The idea is applicable to many microeconomic and macroeconomic problems, including life cycle, buffer-stock, and stochastic growth problems. Software is provided. Classification: C6, D9, E2. July 28, 2005.
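The rootfinding-free idea can be sketched for a standard buffer-stock consumption problem: on an exogenous grid of end-of-period assets, the Euler equation is inverted analytically, yielding consumption and an endogenous grid of beginning-of-period resources without any numerical root search. A minimal sketch under assumed CRRA utility and log-normal income shocks (all parameter values are illustrative, and the borrowing-constraint segment is ignored for brevity):

```python
import numpy as np

rho, beta, R = 2.0, 0.96, 1.03          # CRRA coefficient, discount factor, gross return
a_grid = np.linspace(0.01, 20.0, 200)   # end-of-period assets (exogenous grid)
shocks = np.exp(np.random.default_rng(0).normal(-0.005, 0.1, 500))  # income draws

def egm_step(c_next, m_next):
    """One backward step: invert the Euler equation instead of rootfinding."""
    m_implied = R * a_grid[:, None] + shocks[None, :]   # next-period resources
    c_interp = np.interp(m_implied, m_next, c_next)     # next-period policy
    emu = (beta * R * (c_interp ** (-rho))).mean(axis=1)
    c_now = emu ** (-1.0 / rho)   # invert u'(c) = c^(-rho) analytically
    m_now = a_grid + c_now        # endogenous beginning-of-period grid
    return c_now, m_now

# iterate to (approximate) convergence from the initial guess c(m) = m
c, m = a_grid.copy(), a_grid.copy()
for _ in range(200):
    c, m = egm_step(c, m)
print(np.interp(2.0, m, c))  # consumption at market resources m = 2
```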
Groundwater recharge is the major limiting factor for the sustainable use of groundwater. To support water management in a globalized world, it is necessary to estimate, in a spatially resolved way, global-scale groundwater recharge. In this report, improved model estimates of diffuse groundwater recharge at the global-scale, with a spatial resolution of 0.5° by 0.5°, are presented. They are based on calculations of the global hydrological model WGHM (WaterGAP Global Hydrology Model) which, for semi-arid and arid areas of the globe, was tuned against independent point estimates of diffuse groundwater recharge. This has led to a decrease of estimated groundwater recharge under semi-arid and arid conditions as compared to the model results before tuning, and the new estimates are more similar to country level data on groundwater recharge. Using the improved model, the impact of climate change on groundwater recharge was simulated, applying two greenhouse gas emissions scenarios as interpreted by two different climate models.
Prion diseases, also called transmissible spongiform encephalopathies, are a group of fatal neurodegenerative conditions that affect humans and a wide variety of animals. To date, no therapeutic or prophylactic approach against prion diseases is available. The causative infectious agent is the prion, also termed PrPSc, which is a pathological conformer of a cellular protein named prion protein PrPc. Prions are thought to multiply upon conversion of PrPc to PrPSc in a self-propagating manner. Immunotherapeutic strategies directed against PrPc represent a possible approach to preventing or curing prion diseases. Accordingly, it has already been shown in animal models that passive immunization delays the onset of prion diseases. The present thesis aimed at the development of a candidate vaccine for the active immunization against prion diseases, an immune response which requires the circumvention of host tolerance to the self-antigen PrPc. The vaccine development was approached using virus-like particles (retroparticles) derived from either the murine leukemia virus (MLV) or the human immunodeficiency virus (HIV). The display of PrP on the surface of such particles was addressed for both the cellular and the pathogenic form of PrP. The display of PrPc was achieved by fusion either to the transmembrane domain of the platelet derived growth factor receptor (PDGFR) or to the N-terminal part of the viral envelope protein (Env). In both cases, the corresponding PrPD- and PrPE-retroparticles were successfully produced and analyzed via immunofluorescence, Western blot analysis, immunogold electron microscopy as well as by ELISA methods. Both PrPD- and PrPE-retroparticles showed effective incorporation of N-terminally truncated forms of PrPc, but not of the complete protein. The displayed PrPc exhibited the typical glycosylation pattern, which could be specifically removed by a glycosidase enzyme. Upon display on retroparticles, PrPc remained detectable by PrP-specific antibodies under native conditions. Electron microscopy analysis of the PrPc variants revealed no alteration of the characteristic retroviral morphology of the generated particles. MLV-derived PrPD-retroparticles were successfully used in immunization studies: contrary to approaches using bacterially expressed PrPc, the immunization of mice resulted in a specific antibody response. The display of the pathogenic isoform was attempted by two different strategies. The first was directed at the conversion of the proteinase K (PK) sensitive form of PrP on the surface of PrPD-retroparticles into the PK-resistant form; despite specific adaptation of the PK digestion assay for detecting resistant PrP, no PrP conversion was observed for PrPD-retroparticles. The second approach utilized a replication-competent variant of the ecotropic MLV displaying PrPc on the viral Env protein. This MLV variant was stable in cell culture for six passages but did not replicate on scrapie-infected, PrPSc-propagating neuroblastoma cells. Thus, besides PrPc-displaying virus-like particles, a replication-competent MLV variant was obtained which stably incorporated PrPc at the N-terminus of the viral Env protein. The incorporation of the cell-surface located PrPc into particles was expected from previously obtained data on protein display in the context of retrovirus-derived particles. The lack of incorporation observed for the complete PrPc sequence was therefore rather unexpected; incorporation was found to be inhibited both for the fusion to the PDGFR transmembrane domain and for the fusion to the viral Env.
In contrast to N-terminally truncated PrPc, the complete PrPc was shown to exhibit increased cell-surface internalization rates and half-life times, which may have contributed to the observed results. The PrP vaccination approach described in this work represents the first system to successfully induce PrP-specific antibody responses against the prion protein in wild-type mice. Possible explanations are based on the induction of specific T cell help or on effects of the innate immunity, respectively. The MLV- and HIV-derived particles generated during this thesis, bearing the PrP-coding sequence or being replication-competent variants, might help to further improve the PrP-specific immune response.
Using CORSIKA for simulating extensive air showers, we study the relation between the shower characteristics and features of hadronic multiparticle production at low energies. We report on investigations of the typical energies and phase space regions of secondary particles which are important for muon production in extensive air showers. Possibilities to measure relevant quantities of hadron production in existing and planned accelerator experiments are discussed.
The knowledge of the build-up time of space charge compensation (SCC) and the investigation of the compensation process are of central interest for the low energy beam transport of pulsed high perveance ion beams under space charge compensated conditions. To investigate the rise of compensation experimentally, an LEBT system consisting of a pulsed ion source, two solenoids and a drift tube as diagnostic section has been set up. The beam potential has been measured time-resolved by a residual gas ion energy analyser (RGA). A numerical simulation for the calculation of self-consistent equilibrium states of the beam plasma has been developed to determine plasma parameters which are difficult to measure directly. The results of the simulation have been compared with the measured data to investigate the behavior of the compensation electrons as a function of time. The acquired data show that the theoretical rise time of space charge compensation is shorter by a factor of two than the build-up time determined experimentally. With a view to describing the process of SCC, an interpretation of the results is given.
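The theoretical rise time referred to above is commonly estimated from the rate at which beam ions ionize the residual gas; a back-of-the-envelope sketch with assumed values (not the paper's numbers):

```python
import numpy as np

# Rough estimate of the SCC build-up time: tau ~ 1 / (n_gas * sigma_ion * v_beam),
# i.e. the mean time for each beam ion to ionize one residual gas molecule.
kB = 1.380649e-23                     # Boltzmann constant [J/K]
p, T = 1e-3, 300.0                    # assumed residual gas pressure [Pa], temperature [K]
n_gas = p / (kB * T)                  # gas density [1/m^3]
sigma_ion = 2e-20                     # assumed ionization cross section [m^2]
E, m_p = 10e3 * 1.602e-19, 1.67e-27   # assumed 10 keV proton beam
v_beam = np.sqrt(2 * E / m_p)         # non-relativistic beam velocity [m/s]
tau = 1.0 / (n_gas * sigma_ion * v_beam)
print(f"estimated rise time: {tau * 1e6:.0f} us")   # ~150 us for these values
```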
High perveance negative ion beams with low emittance are essential for several next generation particle accelerators (e.g. spallation sources like ESS [1] and SNS [2]). The extraction and transport of these beams pose intrinsic difficulties different from those of positive ion beams; limitations of the beam current and emittance growth have to be avoided. To fulfill the requirements of those projects, a detailed knowledge of the physics of beam formation, of the interaction of the H- ions with the residual gas, and of beam transport is essential. A compact cesium-free H- volume source delivering a low energy high perveance beam (6.5 keV, 2.3 mA, perveance K = 0.0034) has been built to study the fundamental physics of beam transport and will be integrated into the existing LEBT section in the near future. First measurements of the interaction between the ion beam and the residual gas will be presented together with the experimental set-up and preliminary results.
For the investigation of the space charge compensation process due to residual gas ionization and the experimental study of the rise of compensation, a Low Energy Beam Transport (LEBT) system consisting of an ion source, two solenoids, a decompensation electrode to generate a pulsed decompensated ion beam, and a diagnostic section was set up. The potentials at the beam axis and at the beam edge were determined from time-resolved measurements with a residual gas ion energy analyzer. A numerical simulation of self-consistent equilibrium states of the beam plasma has been developed to determine plasma parameters which are difficult to measure directly. The temporal development of the kinetic and potential energy of the compensation electrons has been analyzed using the numerically obtained results of the simulation. To investigate the compensation process, the distribution and the losses of the compensation electrons were studied as a function of time. The acquired data show that the theoretically estimated rise time of space charge compensation, neglecting electron losses, is shorter than the build-up time determined experimentally. To describe the process of space charge compensation, an interpretation of the achieved results is given.
Low energy beam transport (LEBT) for a future heavy ion driven inertial fusion (HIDIF [1]) facility is a crucial point when using a Bi+ beam of 40 mA at 156 keV. High space charge forces (generalised perveance K = 3.6×10^-3) restrict the use of electrostatic focussing systems. On the other hand, magnetic lenses using space charge compensation suffer from the low particle velocity. Additionally, the emittance requirements are very demanding in order to avoid particle losses in the linac and at ring injection [2]. Furthermore, source noise and the rise time of space charge compensation [3] might enhance particle losses and emittance. Gabor lenses [4], which use a continuous space charge cloud for focussing, could be a serious alternative to conventional LEBT systems. They combine strong cylinder-symmetric focussing with partial space charge compensation and low emittance growth due to weaker nonlinear fields. A high tolerance against source noise and current fluctuations and reduced investment costs are other possible advantages. The proof of principle has already been given [5, 6]. To broaden the experience, an experimental program was started. The first experimental results using a double Gabor lens (DGPL, see fig. 1) LEBT system for transporting a high perveance Xe+ beam will be presented, and the results of numerical simulations will be shown.
The determination of the beam emittance using conventional destructive methods suffers from two main disadvantages. The interaction between the ion beam and the measurement device produces a large number of secondary particles; these particles interact with the beam and can change the transport properties of the accelerator. Particularly in the low energy sections of high current accelerators, as proposed for IFMIF, heavy ion inertial fusion devices (HIDIF) and spallation sources (ESS, SNS), the power deposited on the emittance measurement device can lead to excessive heating of the detector itself and can destroy or at least misalign the device (a slit or grid, for example). CCD camera measurements of the light emitted from the interaction of beam ions with the residual gas are commonly used for the determination of the beam emittance; fast data acquisition and high time resolution are additional features of such a method. Usually, a matrix formalism is used to derive the emittance from the measured profiles of the beam [1,2], which does not take space charge effects and emittance growth into account. A new method to derive the phase space distribution of the beam from a single CCD camera image using statistical numerical methods will be presented together with measurements. The results will be compared with measurements obtained from a conventional Allison-type (slit-slit) emittance measurement device.
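The matrix formalism mentioned above can be illustrated for the simplest case of a drift space: the squared rms beam width is linear in the three beam-matrix elements, so three or more measured profiles determine them by least squares, and the rms emittance follows. A sketch with invented measurement values:

```python
import numpy as np

# rms beam size along a drift: sigma_x^2(L) = s11 + 2*L*s12 + L^2*s22,
# so three or more profile measurements determine the beam matrix.
L = np.array([0.0, 0.2, 0.4, 0.6])               # profile positions [m]
width = np.array([4.0, 4.6, 5.5, 6.6]) * 1e-3    # measured rms widths [m] (made up)

A = np.column_stack([np.ones_like(L), 2 * L, L**2])
s11, s12, s22 = np.linalg.lstsq(A, width**2, rcond=None)[0]
eps_rms = np.sqrt(s11 * s22 - s12**2)            # rms emittance [m rad]
print(f"rms emittance: {eps_rms * 1e6:.1f} mm mrad")
```

For focusing elements between the profiles, the same linear system holds with the drift matrix replaced by the known transfer matrices.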
Investigation of the focus shift due to compensation process for low energy ion beam transport
(2000)
In magnetic Low Energy Beam Transport (LEBT) sections, space charge compensation helps to enhance the transportable beam current and to reduce emittance growth due to space charge forces. For pulsed beams, the time necessary to establish space charge compensation is of great interest for beam transport. Particularly with regard to beam injection into the first accelerator section (e.g. an RFQ), the investigation of the shift of the beam focus due to space charge compensation is very important; the achieved results help to avoid a mismatch into the first RFQ. To investigate the space charge compensation due to residual gas ionization, time-resolved measurements using pulsed ion beams were performed at the LEBT system at the IAP and at the CEA-Saclay injection line. A residual gas ion energy analyser (RGIA) equipped with a channeltron was used to measure the potential distribution as a function of time in order to estimate the rise time of compensation. For time-resolved measurements (Δt_min = 50 ns) of the radial density profile of the ion beam, a CCD camera was applied. The measured data were used in a numerical simulation of self-consistent equilibrium states of the beam plasma [1] to determine plasma parameters such as the density, the temperature, and the kinetic and potential energy of the compensation electrons as a function of time. Measurements were done using focused proton beams (10 keV, 2 mA at the IAP and 92 keV, 62 mA at CEA-Saclay) to get a better understanding of the influence of the compensation process. An interpretation of the acquired data and the achieved results will be presented.
Influence of space charge fluctuations on the low energy beam transport of high current ion beams
(2000)
For future high current ion accelerators like SNS, ESS or IFMIF, the beam behaviour in low energy beam transport sections is dominated by space charge forces. Therefore, space charge fluctuations (e.g. source noise) can drastically influence the transport properties of the low energy beam transport section; losses of beam ions and emittance growth are the most severe problems. For electrostatic transport systems, either an LEBT design has to be found which is insensitive to variations of the space charge, or the origin of the fluctuations has to be eliminated. For space charge compensated transport, as proposed for ESS and IFMIF, the situation is different: no major influence on beam transport is expected for fluctuations below a cut-off frequency given by the production rate of the compensation particles. Above this frequency the fluctuations cannot be compensated by particle production alone, but redistribution of the compensation particles helps to compensate their influence. Above a second cut-off frequency, given by the density and the temperature of the compensation particles, their redistribution is too slow to reduce the influence of the space charge fluctuations. Transport simulations for the IFMIF injector including space charge fluctuations will be presented together with a determination of the cut-off frequencies. The results will be compared with measurements of the rise time of space charge compensation.
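Rough estimates of the two cut-offs described above, sketched with assumed beam and plasma parameters: the lower one from the production rate of the compensation particles, the upper one estimated here (one common choice) via the plasma frequency of the compensation electrons:

```python
import numpy as np

eps0, e, m_e, kB = 8.854e-12, 1.602e-19, 9.109e-31, 1.381e-23

# Lower cut-off: inverse production (build-up) rate of compensation electrons
p, T = 1e-3, 300.0                     # assumed residual gas pressure [Pa], temperature [K]
n_gas = p / (kB * T)
sigma_ion, v_beam = 2e-20, 1.4e6       # assumed cross section [m^2], beam velocity [m/s]
f_low = n_gas * sigma_ion * v_beam     # [Hz]

# Upper cut-off: plasma frequency of the compensation electron cloud
n_e = 1e14                             # assumed electron density [1/m^3]
f_high = np.sqrt(n_e * e**2 / (eps0 * m_e)) / (2 * np.pi)

print(f"f_low ~ {f_low:.2e} Hz, f_high ~ {f_high:.2e} Hz")
```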
New results on the production of Xi and Omega hyperons in Pb+Pb interactions at 40 A GeV and of Lambda at 30 A GeV are presented. Transverse mass spectra as well as rapidity spectra of these hyperons are shown and compared to previously measured data at different beam energies. The energy dependence of hyperon production (4π yields) is discussed. Additionally, the centrality dependence of Xi- production at 40 A GeV is presented.
First results on the production of Xi- and Anti-xi hyperons in Pb+Pb interactions at 40 A GeV are presented. The Anti-xi/Xi- ratio at midrapidity is studied as a function of collision centrality. The ratio shows no significant centrality dependence within statistical errors; it ranges from 0.07 to 0.15. The Anti-xi/Xi- ratio for central Pb+Pb collisions increases strongly with the collision energy.
A LEBT system consisting of an ion source, two solenoids, and a diagnostic section has been set up to investigate the space charge compensation process due to residual gas ionization [1] and to study the rise of compensation experimentally. To obtain the radial beam potential distribution, time-resolved measurements of the residual gas ion energy distribution were carried out using a Hughes-Rojansky analyzer [2,3]. To measure the radial density profile of the ion beam, time-resolved CCD camera measurements were performed, which allow an estimation of the rise time of compensation. Furthermore, the dynamic effect of the space charge compensation on the beam transport is shown. A numerical simulation under the assumption of self-consistent states [4] of the beam plasma has been used to determine plasma parameters such as the radial density profile and the temperature of the electrons. The acquired data show that the theoretically estimated rise time of space charge compensation, neglecting electron losses, is shorter than the build-up time determined experimentally. An interpretation of the achieved results is given.
To fulfil the requirements of ESS on beam transmission and emittance growth, a detailed knowledge of the physics of beam formation as well as of the interaction of the H- ions with the residual gas is essential. Space charge compensated beam transport using solenoids for the ion optics is favoured for the Low Energy Beam Transport (LEBT) between the ion source and the first RFQ. Space charge compensation reduces the electrical self-fields and beam radii and therefore the emittance growth due to aberrations and redistribution. The transport of H- near the ion source is negatively influenced by the dipole fields required for beam extraction and electron dumping, and by the high gas pressure. The destruction of the rotational symmetry together with the space charge forces causes emittance growth and particle losses within the extraction system. The high residual gas pressure near the extractor, together with the high cross section for stripping, will influence the transmission as well as the space charge compensation. Therefore, a detailed knowledge of the interaction of the residual gas with the beam and of the influence of the external fields on the distribution of the compensation particles is necessary to reduce particle losses and emittance growth. Preliminary experiments using positive hydrogen ions for reference already show the influence of dipole fields on the beam emittance. First measurements with H- confirm these results. Additional information on the interactions of the residual gas with the beam ions has been gained from measurements using the momentum and energy analyser.
Talk given at "The XVI International Conference on Ultrarelativistic Nucleus-Nucleus Collisions", organized by the SUBATECH Laboratory, Nantes, France, 18-24 July 2002.
Rapidity distributions for Lambda and anti-Lambda hyperons in central Pb-Pb collisions at 40, 80 and 158 AGeV and for K0s mesons at 158 AGeV are presented. The Lambda multiplicities are studied as a function of collision energy together with AGS and RHIC measurements and compared to model predictions. A different energy dependence of the Lambda/pi and anti-Lambda/pi ratios is observed. The anti-Lambda/Lambda ratio shows a steep increase with collision energy. Evidence for an anti-Lambda/anti-p ratio greater than 1 is found at 40 AGeV.
The experiment NA49 at the CERN SPS is a large acceptance detector for charged hadrons. The identification of the neutral strange hadrons Lambda and Anti-Lambda is based on the measurement of their charged decay particles and the reconstruction of the decay vertex. The charged particles were measured with four time projection chambers (TPCs), two of which are situated inside two large dipole magnets, while the other two are downstream of the magnets. Lambda and Anti-Lambda baryons have been measured in central Pb+Pb collisions at 40, 80 and 160 GeV/nucleon over a wide range in rapidity (1 - 5) and transverse momentum (0 - 3 GeV/c). Particle yields and spectra will be shown for the different energies. The results will be put into the existing systematics of Lambda production as a function of beam energy.
In this paper we present recent results from the NA49 experiment for Lambda and Anti-Lambda hyperons produced in central Pb+Pb collisions at 40, 80 and 158 A GeV. Transverse mass spectra and rapidity distributions for Lambda are shown for all three energies. The shape of the rapidity distribution becomes flatter with increasing beam energy. The multiplicities at mid-rapidity as well as the total yields are studied as a function of collision energy including AGS measurements. The Lambda/pi ratio at mid-rapidity and in 4π has a maximum around 40 A GeV. In addition, Anti-Lambda rapidity distributions have been measured at 40 and 80 A GeV, which allows us to study the Anti-Lambda/Lambda ratio.
Excitation functions for quasi-elastic scattering have been measured at backward angles for the systems 32,34S+197Au and 32,34S+208Pb at energies spanning the Coulomb barrier. Representative distributions, sensitive to the low energy part of the fusion barrier distribution, have been extracted from the data. For the fusion reactions of 32,34S with 197Au, couplings related to the nuclear structure of 197Au appear to be dominant in shaping the low energy part of the barrier distribution. For the system 32S+208Pb the barrier distribution is broader and extends further to lower energies than in the case of 34S+208Pb. This is consistent with the interpretation that the neutron pick-up channels are energetically more favoured in the 32S induced reaction and therefore couple more strongly to the relative motion. It may also be due to the increased collectivity of 32S compared with 34S.
See also the German version: Ökonomie der Gabe - Positivität der Gerechtigkeit: Gegenseitige Heimsuchungen von System und différance. In: Albrecht Koschorke and Cornelia Vismann (eds.), System - Macht - Kultur: Probleme der Systemtheorie. Akademie, Berlin 1999, 199-212; also available on our server. Italian version: Economia del dono, positività della giustizia: la reciproca paranoia di Jacques Derrida e Niklas Luhmann. Sociologia e politiche sociali 6, 2003, 113-130. Portuguese version: Economia da dádiva - positividade da justiça: 'assombrações' mútuas entre sistema e différance. In: Gunther Teubner, Direito, Sistema, Policontexturalidade. Editora Unimep, Piracicaba, Sao Paulo, Brasil 2005, 55-78.
Globalized justice - fragmented justice. Human rights violations by "private" transnational actors
(2005)
Plenary lecture at the World Congress of Philosophy of Law and Social Philosophy, 24-29 May 2005, Granada. See also the German version: "Die anonyme Matrix: Menschenrechtsverletzungen durch 'private' transnationale Akteure". Spanish version: Sociedad global, justicia fragmentada: sobre la violación de los derechos humanos por actores transnacionales 'privados'. In: Manuel Escamilla and Modesto Saavedra (eds.), Law and Justice in a Global Society. International Association for Philosophy of Law and Social Philosophy, Granada 2005, 529-546.
"Eurocomprehension" is the term used to describe European intercomprehension in Europe’s three major language families, the Romance, the Slavic and the Germanic. The aim of eurocomprehension is to achieve multilingualism conforming to EU language policy goals through the entry-point of receptive competence in a modular structure. Linguistic intercomprehension research forms the transfer bases for the cognitive use of relations between the language groups which didactics of multilingualism implement. ...
German version: Expertise als soziale Institution: Die Internalisierung Dritter in den Vertrag. In: Gert Brüggemeier (ed.), Liber Amicorum Eike Schmidt. Müller, Heidelberg 2005, 303-334.
German version: Vertragswelten: Das Recht in der Fragmentierung von private governance regimes. Rechtshistorisches Journal 17, 1998, 234-265. Italian version: Mondi contrattuali. Discourse rights nel diritto privato. In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Naples 1999, 113-142. Portuguese version: Mundos contratuais: o direito na fragmentacao de regimes de private governance. In: Gunther Teubner, Direito, Sistema, Policontexturalidade. Editora Unimep, Piracicaba, Sao Paulo, Brasil 2005, 269-298.
See also the German version: Rechtshistorisches Journal 15, 1996, 255-290, and in: Eric Schwarz (ed.), La théorie des systèmes: une approche inter- et transdisciplinaire. Bösch, Sion 1996, 101-119. Italian version: La Bukowina globale: il pluralismo giuridico nella società mondiale. Sociologia e politiche sociali 2, 1999, 49-80. Portuguese version: Bukowina global: sobre a emergência de um pluralismo jurídico transnacional. Impulso: Direito e Globalização 14, 2003. Georgian version: Globaluri bukovina: samarTlebrivi pluralizmi msoflio sazogadoebaSi. Journal of the Institute of State and Law of the Georgian Academy of Sciences 2005 (in press).
See also the German version: Archiv für Rechts- und Sozialphilosophie, Beiheft 65, 1996, 199-220. Italian version: Altera pars audiatur: Il diritto nella collisione dei discorsi. In: Gunther Teubner, Diritto policontesturale: Prospettive giuridiche della pluralizzazione dei mondi sociali. La città del sole, Naples 1999, 27-70. French version: Altera pars audiatur: le droit dans la collision des discours. Droit et Société 35, 1997, 99-123. Portuguese version: Altera pars audiatur: o direito na colisao de discursos. In: J.A. Lindgren Alves, Gunther Teubner, Joaquim Leonel de Rezende Alvim, Dorothe Susanne Rüdiger, Direito e Cidadania na Pos-Modernidade. Editora Unimep, Piracicaba, Brasilia 2002, 93-129.
Reflexives Recht: Entwicklungsmodelle des Rechts in vergleichender Perspektive (EUI Working Paper 1982/13). Archiv für Rechts- und Sozialphilosophie 68, 1982, 13-59, and in: Werner Maihofer (ed.), Noi si Mura. Schriftenreihe des Europäischen Hochschulinstituts, Florence 1986, 290-340. English version: Substantive and Reflexive Elements in Modern Law (EUI Working Paper 1982/14). Law and Society Review 17, 1983, 239-285, and in: Kahei Rokumoto (ed.), Sociological Theories of Law. Dartmouth, Aldershot 1994, 415-462. Reprinted in: Carroll Seron, The Law and Society Canon. Ashgate, Aldershot 2005 (in press). French version: Eléments 'substantifs' et 'réflexifs' dans le droit moderne. L'Interdit. Revue de Psychanalyse Institutionelle, 1984, 129-132, and: Droit et réflexivité: une perspective comparative sur des modèles d'évolution juridique. In: Gunther Teubner, Droit et réflexivité. Librairie générale de droit et de jurisprudence, Paris 1994, 3-50. Danish version: Refleksiv Ret: Udviklingsmodeller i sammenlignende perspektiv. In: Asmund Born, Nils Bredsdorff, Leif Hansen and Finn Hansson (eds.), Refleksiv Ret. Publication Series of the Institut for Organisation og Arbeidssociologi. Nytfrasamfundsvidenskaberne, Copenhagen 1988, 21-79.
We introduce a new method for representing and solving a general class of non-preemptive resource-constrained project scheduling problems. The new approach is to represent scheduling problems as descriptions (activity terms) in a language called RSV, which allows nested expressions using pll, seq, and xor. The activity terms of RSV are similar to concepts in a description logic. The language RSV generalizes previous approaches to scheduling with variants insofar as it permits xor's not only of atomic activities but also of arbitrary activity terms. A specific semantics that assigns to activity terms their set of active schedules shows the correctness of a calculus that normalizes RSV activity terms, similar to propositional DNF computation. Based on RSV, this paper describes a diagram-based algorithm for the RSV problem which uses a scan-line principle. The scan-line principle is used for determining and resolving the occurring resource conflicts and leads to a nonredundant generation of all active schedules and thus to a computation of the optimal schedule.
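The normalization behaves like DNF computation: xor is distributed outward over seq and pll until the term is an xor of xor-free activity terms. A minimal sketch of that distribution step (the paper's full calculus and its active-schedule semantics are richer):

```python
from itertools import product

# Activity terms: atomic activities are strings; composite terms are
# ("seq", t1, t2, ...), ("pll", ...), or ("xor", ...).
def normalize(term):
    """Return a list of xor-free terms whose xor is equivalent to `term`."""
    if isinstance(term, str):                     # atomic activity
        return [term]
    op, *args = term
    branches = [normalize(a) for a in args]
    if op == "xor":                               # union of the alternatives
        return [t for branch in branches for t in branch]
    # seq and pll distribute over xor, like AND over OR in DNF
    return [(op, *combo) for combo in product(*branches)]

t = ("seq", "a", ("xor", "b", ("pll", "c", ("xor", "d", "e"))))
for variant in normalize(t):
    print(variant)   # three xor-free variants of the schedule description
```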
In recent years, much effort has gone into the design of robust anaphor resolution algorithms. Many algorithms are based on antecedent filtering and preference strategies that are manually designed. Along a different line of research, corpus-based approaches have been investigated that employ machine-learning techniques for deriving strategies automatically. Since the knowledge-engineering effort for designing and optimizing the strategies is reduced, the latter approaches are considered particularly attractive. Since, however, the hand-coding of robust antecedent filtering strategies such as syntactic disjoint reference and agreement in person, number, and gender constitutes a once-for-all effort, the question arises whether they should be derived automatically at all. In this paper, it is investigated what might be gained by combining the best of two worlds: designing the universally valid antecedent filtering strategies manually, in a once-for-all fashion, and deriving the (potentially genre-specific) antecedent selection strategies automatically by applying machine-learning techniques. An anaphor resolution system, ROSANA-ML, which follows this paradigm, is designed and implemented. Through a series of formal evaluations, it is shown that, while exhibiting additional advantages, ROSANA-ML reaches a performance level that compares with the performance of its manually designed ancestor ROSANA.
In the last decade, much effort has gone into the design of robust third-person pronominal anaphor resolution algorithms. Typical approaches are reported to achieve an accuracy of 60-85%. Recent research addresses the question of how to deal with the remaining difficult-to-resolve anaphors. Lappin (2004) proposes a sequenced model of anaphor resolution according to which a cascade of processing modules employing knowledge and inferencing techniques of increasing complexity should be applied; the individual modules should only deal with, and hence recognize, the subset of anaphors for which they are competent. It will be shown that the problem of focusing on the competence cases is equivalent to the problem of giving precision precedence over recall. Three systems for high precision robust knowledge-poor anaphor resolution will be designed and compared: a ruleset-based approach, a salience threshold approach, and a machine-learning-based approach. According to corpus-based evaluation, there is no unique best approach; which approach scores highest depends upon the type of pronominal anaphor as well as upon the text genre.
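The salience threshold idea can be sketched as follows: resolve a pronoun only if the top-ranked candidate clears an absolute salience threshold and leads the runner-up by a margin, and abstain otherwise, so that precision takes precedence over recall. The scoring and threshold values below are invented for illustration:

```python
def resolve(candidates, threshold=0.6, margin=0.2):
    """candidates: list of (antecedent, salience) pairs after filtering.
    Returns the chosen antecedent, or None to abstain (precision over recall)."""
    if not candidates:
        return None
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    if best[1] >= threshold and best[1] - runner_up >= margin:
        return best[0]
    return None   # a competence module stays silent on unclear cases

print(resolve([("the parser", 0.9), ("the corpus", 0.4)]))   # -> "the parser"
print(resolve([("the parser", 0.55), ("the corpus", 0.5)]))  # -> None (abstain)
```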
Assessing enhanced knowledge discovery systems (eKDSs) is an intricate issue that is as yet only partially understood. Based upon an analysis of why it is difficult to evaluate eKDSs formally, a change of perspective is argued for: eKDSs should be understood as intelligent tools for qualitative analysis that support, rather than substitute for, the user in the exploration of the data. A qualitative gap is identified as the main reason why the evaluation of enhanced knowledge discovery systems is difficult. In order to deal with this problem, the construction of a best practice model for eKDSs is advocated. Based on a brief recapitulation of similar work on spoken language dialogue systems, first steps towards achieving this goal are taken, and directions of future research are outlined.
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency. In this paper, two approaches are presented which generalize the verification of coindexing constraints to deficient descriptions. First, a partly heuristic method is described, which has been implemented. Second, a provably complete method is specified; it provides the means to exploit the results of anaphor resolution for further structural disambiguation. By rendering a parallel processing model possible, this method exhibits, in a general sense, a higher degree of robustness. As a practically optimal solution, a combination of the two approaches is suggested.
An anaphor resolution algorithm is presented which relies on a combination of strategies for narrowing down and selecting from antecedent sets for reflexive pronouns, nonreflexive pronouns, and common nouns. The work focuses on syntactic restrictions which are derived from Chomsky's Binding Theory. It is discussed how these constraints can be incorporated adequately into an anaphor resolution algorithm. Moreover, by showing that pragmatic inferences may be necessary, the limits of syntactic restrictions are elucidated.
Coreference-Based Summarization and Question Answering: a Case for High Precision Anaphor Resolution
(2003)
Approaches to Text Summarization and Question Answering are known to benefit from the availability of coreference information. Based on an analysis of its contributions, a more detailed look at coreference processing for these applications will be proposed: it should be considered as a task of anaphor resolution rather than coreference resolution. It will be further argued that high precision approaches to anaphor resolution optimally match the specific requirements. Three such approaches will be described and empirically evaluated, and the implications for Text Summarization and Question Answering will be discussed.
Syntactic coindexing restrictions are by now known to be of central importance to practical anaphor resolution approaches. Since, in particular due to structural ambiguity, the assumption of the availability of a unique syntactic reading proves to be unrealistic, robust anaphor resolution relies on techniques to overcome this deficiency.
This paper describes the ROSANA approach, which generalizes the verification of coindexing restrictions in order to make it applicable to the deficient syntactic descriptions that are provided by a robust state-of-the-art parser. By a formal evaluation on two corpora that differ with respect to text genre and domain, it is shown that ROSANA achieves high-quality robust coreference resolution. Moreover, by an in-depth analysis, it is proven that the robust implementation of syntactic disjoint reference is nearly optimal. The study reveals that, compared with approaches that rely on shallow preprocessing, the largely nonheuristic disjoint reference algorithmization opens up the possibility for a slight improvement. Furthermore, it is shown that more significant gains are to be expected elsewhere, particularly from a text-genre-specific choice of preference strategies.
The performance study of the ROSANA system crucially rests on an enhanced evaluation methodology for coreference resolution systems, the development of which constitutes the second major contribution of the paper. As a supplement to the model-theoretic scoring scheme that was developed for the Message Understanding Conference (MUC) evaluations, additional evaluation measures are defined that, on the one hand, support the developer of anaphor resolution systems and, on the other hand, shed light on application aspects of pronoun interpretation.
This paper focuses on the coordination of order and production policy between buyers and suppliers in supply chains. When a buyer and a supplier of an item work independently, the buyer will place orders based on his economic order quantity (EOQ). However, the buyer's EOQ may not lead to an optimal policy for the supplier. It can be shown that a cooperative batching policy can reduce total cost significantly. Should the buyer be in the more powerful position and able to enforce his EOQ on the supplier, then no incentive exists for him to deviate from his EOQ in order to choose a cooperative batching policy. To provide an incentive to order in quantities suitable to the supplier, the supplier could offer a side payment. One critical assumption made throughout the literature dealing with incentive schemes to influence the buyer's ordering policy is that the supplier has complete information regarding the buyer's cost structure. However, this assumption is far from realistic; as a consequence, the buyer has no incentive to report truthfully on his cost structure. Moreover, there is an incentive to overstate the total relevant cost in order to obtain as high a side payment as possible. This paper provides a bargaining model with asymmetric information about the buyer's cost structure, assuming that the buyer has the bargaining power to enforce his EOQ on the supplier in case of a breakdown in negotiations. An algorithm for the determination of an optimal set of contracts, specifically designed for the different cost structures of the buyer assumed by the supplier, is presented. This algorithm was implemented in a software application that supports the supplier in determining the optimal set of contracts.
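For concreteness, a textbook version of the underlying lot-sizing trade-off (the paper's bargaining model under asymmetric information is richer): the buyer's EOQ, the joint economic lot size, and the minimal side payment that keeps the buyer whole at the joint optimum. All parameter values are invented:

```python
from math import sqrt

D = 1000.0              # annual demand
S_b, h_b = 50.0, 5.0    # buyer's ordering cost and holding cost per unit/year
S_s = 400.0             # supplier's setup cost per batch (lot-for-lot production)

buyer_cost = lambda Q: D / Q * S_b + Q / 2 * h_b
supplier_cost = lambda Q: D / Q * S_s
total = lambda Q: buyer_cost(Q) + supplier_cost(Q)

q_eoq = sqrt(2 * D * S_b / h_b)              # buyer's EOQ
q_joint = sqrt(2 * D * (S_b + S_s) / h_b)    # joint economic lot size

# smallest side payment that compensates the buyer for deviating from his EOQ
side_payment = buyer_cost(q_joint) - buyer_cost(q_eoq)

print(f"EOQ = {q_eoq:.0f}, joint lot size = {q_joint:.0f}")
print(f"total cost: {total(q_eoq):.0f} -> {total(q_joint):.0f}")
print(f"minimum side payment: {side_payment:.0f}")
```

With these numbers the supplier's savings exceed the required side payment, which is exactly why coordination pays; the paper's point is that the supplier cannot observe the buyer's true costs when computing such a payment.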
This paper provides global terrestrial surface balances of nitrogen (N) at a resolution of 0.5 by 0.5 degree for the years 1961, 1995 and 2050 as simulated by the model WaterGAP-N. The terms livestock N excretion (Nanm), synthetic N fertilizer (Nfert), atmospheric N deposition (Ndep) and biological N fixation (Nfix) are considered as inputs, while N export by plant uptake (Nexp) and ammonia volatilization (Nvol) are taken into account as output terms. The different terms of the balance are compared to the results of other global models, and uncertainties are described. The total global surface N surplus increased from 161 Tg N yr-1 in 1961 to 230 Tg N yr-1 in 1995. Using assumptions for the scenario A1B of the Special Report on Emission Scenarios (SRES) of the Intergovernmental Panel on Climate Change (IPCC), as quantified by the IMAGE model, the total global surface N surplus is estimated to be 229 Tg N yr-1 in 2050. However, the implementation of these scenario assumptions leads to negative surface balances in many agricultural areas of the globe, which indicates that the assumptions about N fertilizer use and crop production changes are not consistent. Recommendations are made on how to change the assumptions about N fertilizer use to obtain a more consistent scenario, which would lead to higher N surpluses in 2050 as compared to 1995.
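In balance form, the surplus implied by the listed input and output terms is:

```latex
N_{\mathrm{surplus}} \;=\; N_{\mathrm{anm}} + N_{\mathrm{fert}} + N_{\mathrm{dep}} + N_{\mathrm{fix}} \;-\; N_{\mathrm{exp}} \;-\; N_{\mathrm{vol}}
```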
The Land and Water Development Division of the Food and Agriculture Organization of the United Nations and the Johann Wolfgang Goethe University, Frankfurt am Main, Germany, are cooperating in the development of a global irrigation-mapping facility. This report describes an update of the Digital Global Map of Irrigated Areas for the continent of Asia. For this update, an inventory of subnational irrigation statistics for the continent was compiled. The reference year for the statistics is 2000. Adding up the irrigated areas per country as documented in the report gives a total of 188.5 million ha for the entire continent. The total number of subnational units used in the inventory is 4 428. In order to distribute the irrigation statistics per subnational unit, digital spatial data layers and printed maps were used. Irrigation maps were derived from project reports, irrigation subsector studies, and books related to irrigation and drainage. These maps were digitized and compared with satellite images of many regions. In areas without spatial information on irrigated areas, additional information was used to locate areas where irrigation is likely, such as land-cover and land-use maps that indicate agricultural areas or areas with crops that are usually grown under irrigation. Contents:
1. Working Report I: Generation of a map of administrative units compatible with statistics used to update the Digital Global Map of Irrigated Areas in Asia
2. Working Report II: The inventory of subnational irrigation statistics for the Asian part of the Digital Global Map of Irrigated Areas
3. Working Report III: Geospatial information used to locate irrigated areas within the subnational units in the Asian part of the Digital Global Map of Irrigated Areas
4. Working Report IV: Update of the Digital Global Map of Irrigated Areas in Asia: Results Maps
Pseudorandom function tribe ensembles based on one-way permutations: improvements and applications
(1999)
Pseudorandom function tribe ensembles are pseudorandom function ensembles that have an additional collision resistance property: almost all functions have disjoint ranges. We present an alternative to the construction of pseudorandom function tribe ensembles based on one-way permutations given by Canetti, Micciancio and Reingold [CMR98]. Our approach yields two different but related solutions: One construction is somewhat theoretical, but conceptually simple, and therefore gives an easier proof that one-way permutations suffice to construct pseudorandom function tribe ensembles. The other, slightly more complicated solution provides a practical construction; it starts with an arbitrary pseudorandom function ensemble and assimilates the one-way permutation to this ensemble. Therefore, the second solution inherits important characteristics of the underlying pseudorandom function ensemble: it is almost as efficient, and if the starting pseudorandom function ensemble is efficiently invertible (given the secret key) then so is the derived tribe ensemble. We also show that the latter solution yields so-called committing private-key encryption schemes, i.e., where each ciphertext corresponds to exactly one plaintext independently of the choice of the secret key or the random bits used in the encryption process.
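To make the disjoint-range property concrete, here is a deliberately naive toy (not the construction of the paper or of [CMR98]): tagging each output with the public tribe key trivially forces functions under different tribe keys to have disjoint ranges, at the price of revealing the tribe key.

    import hmac, hashlib

    def tribe_f(tribe_key: bytes, secret_key: bytes, x: bytes) -> bytes:
        # Toy tribe function: a PRF output tagged with the public tribe key.
        # Outputs under different tribe keys can never collide because the
        # trailing tag differs; this only illustrates the syntactic property.
        prf_out = hmac.new(secret_key, x, hashlib.sha256).digest()
        return prf_out + tribe_key

    y1 = tribe_f(b"tribe-A", b"k1", b"input")
    y2 = tribe_f(b"tribe-B", b"k2", b"input")
    assert y1[-7:] != y2[-7:]   # ranges are disjoint by construction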
We introduce the relationship between incremental cryptography and memory checkers. We present an incremental message authentication scheme based on the XOR MACs which supports insertion, deletion and other single block operations. Our scheme takes only a constant number of pseudorandom function evaluations for each update step and produces smaller authentication codes than the tree scheme presented in [BGG95]. Furthermore, it is secure against message substitution attacks, where the adversary is allowed to tamper with messages before update steps, making it applicable to virus protection. From this scheme we derive memory checkers for data structures based on lists. Conversely, we use a lower bound for memory checkers to show that so-called message substitution detecting schemes produce signatures or authentication codes with size proportional to the message length.
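A minimal sketch of the XOR-MAC idea that makes single-block replacement incremental (this simplification handles replacement only; the insertion and deletion support of the actual scheme needs additional machinery, and HMAC merely stands in for the pseudorandom function):

    import hmac, hashlib

    def prf(key: bytes, data: bytes) -> int:
        return int.from_bytes(hmac.new(key, data, hashlib.sha256).digest(), "big")

    def tag(key: bytes, blocks: list) -> int:
        # XOR of per-block PRF values bound to block index and content.
        t = 0
        for i, b in enumerate(blocks):
            t ^= prf(key, i.to_bytes(8, "big") + b)
        return t

    def replace(key: bytes, t: int, i: int, old: bytes, new: bytes) -> int:
        # Incremental update: XOR out the old block's contribution and
        # XOR in the new one; two PRF evaluations, independent of length.
        idx = i.to_bytes(8, "big")
        return t ^ prf(key, idx + old) ^ prf(key, idx + new)

    k = b"secret"
    blocks = [b"aaa", b"bbb", b"ccc"]
    t = tag(k, blocks)
    t2 = replace(k, t, 1, b"bbb", b"BBB")
    blocks[1] = b"BBB"
    assert t2 == tag(k, blocks)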
A memory checker for a data structure provides a method to check that the output of the data structure operations is consistent with the input, even if the data is stored on some insecure medium. In [8] we present a general solution for all data structures that are based on insert(i,v) and delete(j) commands. In particular this includes stacks, queues, deques (double-ended queues) and lists. Here, we describe more time- and space-efficient solutions for stacks, queues and deques. Each algorithm takes only a single function evaluation of a pseudorandom-like function such as DES or a collision-free hash function such as MD5 or SHA for each push/pop resp. enqueue/dequeue command, making our methods applicable to smart cards.
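A hedged sketch of how such a stack checker could look (an assumption-laden reconstruction, not necessarily the paper's construction): the checker keeps only a counter and one running tag in trusted memory, chains a tag through every pushed value, and spends one PRF evaluation per push or pop.

    import hmac, hashlib

    class CheckedStack:
        def __init__(self, key: bytes):
            self.key = key
            self.count = 0          # trusted state: height of the stack
            self.tag = b"\x00" * 32 # trusted state: tag chained over all pushes
            self.untrusted = []     # insecure medium: (value, previous_tag) pairs

        def _f(self, count: int, value: bytes, prev: bytes) -> bytes:
            return hmac.new(self.key, count.to_bytes(8, "big") + value + prev,
                            hashlib.sha256).digest()

        def push(self, value: bytes):
            self.untrusted.append((value, self.tag))
            self.tag = self._f(self.count, value, self.tag)
            self.count += 1

        def pop(self) -> bytes:
            value, prev = self.untrusted.pop()
            # One PRF evaluation verifies that the insecure medium returned
            # exactly what was pushed at this position.
            if self._f(self.count - 1, value, prev) != self.tag:
                raise ValueError("memory was tampered with")
            self.count -= 1
            self.tag = prev
            return value

    s = CheckedStack(b"key")
    s.push(b"a"); s.push(b"b")
    assert s.pop() == b"b" and s.pop() == b"a"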
We present efficient non-malleable commitment schemes based on standard assumptions such as RSA and Discrete-Log, and under the condition that the network provides publicly available RSA or Discrete-Log parameters generated by a trusted party. Our protocols require only three rounds and a few modular exponentiations. We also discuss the difference between the notion of non-malleable commitment schemes used by Dolev, Dwork and Naor [DDN00] and the one given by Di Crescenzo, Ishai and Ostrovsky [DIO98].
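For orientation, a minimal sketch of the kind of trusted-parameter discrete-log commitment such protocols build on (Pedersen's scheme; the non-malleability of the paper's protocols comes from the surrounding three-round interaction, which is not shown, and the parameters below are illustrative small numbers):

    import secrets

    # A trusted party would publish a large prime p, a prime q dividing p-1,
    # and generators g, h of the order-q subgroup with log_g(h) unknown.
    p, q = 2579, 1289          # toy parameters: p = 2q + 1
    g = 4                      # generates the subgroup of order q
    h = pow(g, 743, p)         # in practice log_g(h) must stay unknown

    def commit(m: int):
        r = secrets.randbelow(q)
        return pow(g, m, p) * pow(h, r, p) % p, r

    def verify(c: int, m: int, r: int) -> bool:
        return c == pow(g, m, p) * pow(h, r, p) % p

    c, r = commit(42)
    assert verify(c, 42, r)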
We address the problem of factoring a large composite number by lattice reduction algorithms. Schnorr has shown that under reasonable number-theoretic assumptions this problem can be reduced to a simultaneous Diophantine approximation problem. The latter in turn can be solved by finding sufficiently many l_1-short vectors in a suitably defined lattice. Using lattice basis reduction algorithms, Schnorr and Euchner applied Schnorr's reduction technique to 40-bit long integers. Their implementation needed several hours to compute a 5% fraction of the solution, i.e., 6 out of the 125 congruences which are necessary to factorize the composite. In this report we describe a more efficient implementation using stronger lattice basis reduction techniques incorporating ideas of Schnorr, Hoerner and Ritter. For 60-bit long integers our algorithm yields a complete factorization in less than 3 hours.
Based on the quadratic residuosity assumption we present a non-interactive crypto-computing protocol for the greater-than function, i.e., a non-interactive procedure between two parties such that only the relation of the parties' inputs is revealed. In comparison to previous solutions our protocol reduces the number of modular multiplications significantly. We also discuss applications to conditional oblivious transfer, private bidding and the millionaires' problem.
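As background, a toy sketch of the quadratic-residuosity-based (Goldwasser-Micali) encryption whose homomorphism such crypto-computing protocols exploit: multiplying ciphertexts XORs the underlying bits. The parameters are toy values, and this is not the paper's greater-than protocol itself.

    import math, secrets

    # Toy modulus; real deployments need large random primes p, q = 3 mod 4.
    p_, q_ = 499, 547
    N = p_ * q_
    y = N - 1   # -1 is a quadratic non-residue mod both primes here,
                # hence a valid "pseudosquare" with Jacobi symbol +1

    def enc(bit: int) -> int:
        while True:
            r = secrets.randbelow(N - 2) + 1
            if math.gcd(r, N) == 1:
                break
        return pow(y, bit, N) * pow(r, 2, N) % N

    def dec(c: int) -> int:
        # c is a square mod p_ iff the encrypted bit is 0 (Euler's criterion).
        return 0 if pow(c, (p_ - 1) // 2, p_) == 1 else 1

    a, b = 1, 1
    assert dec(enc(a) * enc(b) % N) == a ^ b   # ciphertext product = XOR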
We propose a new security measure for commitment protocols, called Universally Composable (UC) Commitment. The measure guarantees that commitment protocols behave like an "ideal commitment service," even when concurrently composed with an arbitrary set of protocols. This is a strong guarantee: it implies that security is maintained even when an unbounded number of copies of the scheme are running concurrently, it implies non-malleability (not only with respect to other copies of the same protocol but even with respect to other protocols), it provides resilience to selective decommitment, and more. Unfortunately, two-party UC commitment protocols do not exist in the plain model. However, we construct two-party UC commitment protocols, based on general complexity assumptions, in the common reference string model, where all parties have access to a common string taken from a predetermined distribution. The protocols are non-interactive, in the sense that both the commitment and the opening phases consist of a single message from the committer to the receiver.
We review the representation problem based on factoring and show that this problem gives rise to alternative solutions for many cryptographic protocols in the literature. While the solutions so far usually rely either on the RSA problem or on the intractability of factoring integers of a special form (e.g., Blum integers), the solutions here work with the most general factoring assumption. Protocols we discuss include identification schemes secure against parallel attacks, secure signatures, blind signatures and (non-malleable) commitments.
We show that non-interactive statistically-secret bit commitment cannot be constructed from arbitrary black-box one-to-one trapdoor functions and thus from general public-key cryptosystems. Reducing the problems of non-interactive crypto-computing, rerandomizable encryption, non-interactive statistically-sender-private oblivious transfer and low-communication private information retrieval to such commitment schemes, it follows that these primitives are not constructible from one-to-one trapdoor functions and public-key encryption in general either. Furthermore, our separation sheds some light on statistical zero-knowledge proofs. There is an oracle relative to which one-to-one trapdoor functions and one-way permutations exist, while the class of promise problems with statistical zero-knowledge proofs collapses to P. This indicates that nontrivial problems with statistical zero-knowledge proofs require more than (trapdoor) one-wayness.
We show lower bounds for the signature size of incremental schemes which are secure against substitution attacks and support single block replacement. We prove that for documents of n blocks such schemes produce signatures of Ω(n^{1/(2+c)}) bits for any constant c > 0. For schemes accessing only a single block resp. a constant number of blocks for each replacement this bound can be raised to Ω(n) resp. Ω(√n). Additionally, we show that our technique yields a new lower bound for memory checkers.
Given a real vector α = (α_1, ..., α_d) and a real number ε > 0, a good Diophantine approximation to α is a number Q such that ‖Qα mod Z‖_∞ ≤ ε, where ‖·‖_∞ denotes the maximum norm ‖x‖_∞ := max_{1≤i≤d} |x_i| for x = (x_1, ..., x_d). Lagarias [12] proved the NP-completeness of the corresponding decision problem, i.e., given a vector α ∈ Q^d, a rational number ε > 0 and a number N ∈ N_+, decide whether there exists a number Q with 1 ≤ Q ≤ N and ‖Qα mod Z‖_∞ ≤ ε. We prove that, unless ...
Given x ∈ R^n, an integer relation for x is a non-trivial vector m ∈ Z^n with inner product <m,x> = 0. In this paper we prove the following: Unless every NP language is recognizable in deterministic quasi-polynomial time, i.e., in time O(n^{poly(log n)}), the ℓ∞-shortest integer relation for a given vector x ∈ Q^n cannot be approximated in polynomial time within a factor of 2^{log^{0.5-γ} n}, where γ is an arbitrarily small positive constant. This result is quasi-complementary to positive results derived from lattice basis reduction. A variant of the well-known L3-algorithm approximates, for a vector x ∈ Q^n, the ℓ2-shortest integer relation within a factor of 2^{n/2} in polynomial time. Our proof relies on recent advances in the theory of probabilistically checkable proofs, in particular on a reduction from 2-prover 1-round interactive proof systems. The same inapproximability result is valid for finding the ℓ∞-shortest integer solution of a homogeneous linear system of equations over Q.
We analyse a continued fraction algorithm (abbreviated CFA) for arbitrary dimension n, showing that it produces simultaneous Diophantine approximations which are, up to the factor 2^{(n+2)/4}, best possible. Given a real vector x = (x_1, ..., x_{n-1}, 1) ∈ R^n, this CFA generates a sequence of vectors (p_1^{(k)}, ..., p_{n-1}^{(k)}, q^{(k)}) ∈ Z^n, k = 1, 2, ..., with increasing integers |q^{(k)}| satisfying, for i = 1, ..., n-1,

  | x_i - p_i^{(k)}/q^{(k)} | ≤ 2^{(n+2)/4} √(1+x_i²) / |q^{(k)}|^{1+1/(n-1)}.

By a theorem of Dirichlet this bound is best possible in the sense that the exponent 1+1/(n-1) can in general not be increased.
In discussing final status issues, Palestinians and Israelis approach the question of the refugees and the right of return from radically different perspectives. The Palestinian narrative maintains that the Zionists forcibly expelled the Arab refugees in 1948. The Palestinians insist on the right of the refugees to return to their homes or, for those who choose not to do so, to accept compensation. And they demand that Israel unilaterally acknowledge its complete moral responsibility for the injustice of the refugees’ expulsion. In contrast, the Israeli narrative rejects the refugees’ right of return. Israel argues that it was the Arabs who caused the Palestinian refugee problem, by rejecting the creation of the State of Israel and declaring war upon it—a war which, like most wars, created refugee problems, including a Jewish one. Israel sees the return of Palestinian refugees as an existential threat, insofar as it would undermine the Jewish character and the viability of the state. The two sides’ traditional solutions make no attempt to reconcile these opposing narratives. Yet such an attempt is vital if the issue is to be engaged. Hence the Joint Working Group on Israeli–Palestinian Relations developed two compromise solutions. They narrow the gap between the positions, but do not fully reconcile them. The compromise solution espoused by the Palestinian members of the Joint Working Group would insist that Israel acknowledge both its responsibility for creating the refugee problem and the individual moral right of Palestinian refugees to return. But it recognizes that, in view of the changed situation of the refugees over 50 years, and taking into account Israel’s constraints, the return of only a limited number would be feasible. Israel would pay both individual and collective compensation. The Palestinians’ case for an Israeli withdrawal to the 1967 borders would be strengthened as a result of their willingness to absorb the refugees in the Palestinian state. Under the compromise solution proposed by the Israeli members of the Joint Working Group, Israel would acknowledge that it shares, with the other parties to the 1948 war, practical, but not moral, responsibility for the suffering of the refugees, and that rectification of their plight is a central goal of the peace process. Israel would accept repatriation of tens of thousands of refugees under its family reunification program. Israel would pay collective compensation to the Palestinian state, paralleled by Arab State compensation for Jewish refugees from 1948. In seeking to further reconcile these two compromise solutions, we note that they reflect a large measure of agreement between Palestinians and Israelis: that Israel had a historic role in the events that created the refugee issue; that a massive exercise of the right of return is unrealizable, and “return”/family reunification will be limited; that a larger number of Palestinians will “return” to the Palestinian state; that some resettlement will take place in host states, primarily Jordan; that Israel will pay some form of compensation; and that closing the file on the refugee issue means the dismantling of the entire international apparatus that has sustained the refugees—camps, UNRWA, etc. But there remain significant gaps between the two sides’ compromise proposals as well. 
These concern the nature of Israeli acknowledgement of Palestinian suffering and the responsibility for it; the nature and number of “return”/family reunification; the nature and size of compensation, and its linkage to compensation for Jewish refugees from 1948; and the size of “return” to the Palestinian state. In order to negotiate an agreed solution that bridges these remaining gaps, Israelis and Palestinians will have to develop the mutual trust required to further accommodate each other’s narratives. They will also, inevitably, have to factor the refugee/right of return issue into the broader fabric of tradeoffs and compromises that will characterize a comprehensive solution to the conflict. This will involve additional parties—primarily the refugee host countries—as well as related substantive issues, such as borders.
We generalize the concept of block reduction for lattice bases from the l2-norm to arbitrary norms. This extends results of Schnorr. We give algorithms for block reduction and apply the resulting enumeration concept to solve subset sum problems. The deterministic algorithm solves all subset sum problems. For up to 66 weights it needs on average less than two hours on a HP 715/50 under HP-UX 9.05.
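To illustrate how subset sum problems are handed to a lattice reduction algorithm, here is a sketch of one standard lattice embedding (a common textbook variant, stated as an assumption rather than the exact basis used in the paper); a short vector of the reduced basis encodes the 0/1 solution.

    def subset_sum_basis(weights, target, scale=None):
        """Build a lattice basis whose short vectors encode subset sum
        solutions: rows (2*e_i, scale*a_i) plus a last row (1,...,1,
        scale*target). The matrix would then be fed to a reduction
        algorithm (LLL / block reduction)."""
        n = len(weights)
        scale = scale or n + 1
        rows = []
        for i, a in enumerate(weights):
            row = [0] * (n + 1)
            row[i] = 2
            row[n] = scale * a
            rows.append(row)
        rows.append([1] * n + [scale * target])
        return rows

    # Example: does some subset of (3, 5, 9, 14) sum to 17? (3 + 5 + 9)
    basis = subset_sum_basis([3, 5, 9, 14], 17)
    # A solution x in {0,1}^n corresponds to the lattice vector
    # sum_i x_i * rows[i] - rows[n], whose entries all lie in {-1, 0, +1}.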
We propose a fast variant of the Gaussian algorithm for the reduction of two-dimensional lattices for the l1-, l2- and l∞-norm. The algorithm runs in at most O(n M(B) log B) bit operations for the l∞-norm and in O(n log n M(B) log B) bit operations for the l1- and l2-norm on input vectors a, b ∈ Z^n with norm at most 2^B, where M(B) is a time bound for B-bit integer multiplication. This generalizes Schönhage's monotone algorithm [Sch91] to the centered case and to various norms.
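For reference, a compact sketch of the classical (centered) Gaussian reduction for the l2-norm that the paper accelerates; this is the textbook algorithm, not the paper's fast variant.

    def gauss_reduce(a, b):
        """Lagrange/Gauss reduction of a 2D lattice basis for the l2-norm:
        repeatedly size-reduce the longer vector against the shorter one
        and swap until no further shortening is possible."""
        def dot(u, v):
            return sum(x * y for x, y in zip(u, v))
        if dot(a, a) > dot(b, b):
            a, b = b, a
        while True:
            # Centered size reduction of b against the shorter vector a.
            mu = round(dot(a, b) / dot(a, a))
            b = [x - mu * y for x, y in zip(b, a)]
            if dot(b, b) >= dot(a, a):
                return a, b
            a, b = b, a

    print(gauss_reduce([90, 123], [56, 76]))   # -> reduced basis of the lattice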
This study analyses the labour market effects of fixed-term contracts (FTCs) in West Germany by microeconometric methods using individual and establishment level data. In the first part of the study the role of FTCs in firms' labour demand is analysed. An econometric investigation of the firms' reasons for using FTCs, focussing on the identification of the link between dismissal protection for permanent contract workers and the firms' use of FTCs, is presented. Furthermore, a descriptive analysis of the role of FTCs in worker and job flows at the firm level is provided. The second part of the study evaluates the short-run effects of being employed on an FTC on working conditions and wages using a large cross-sectional dataset of employees. The final part of the study analyses whether taking up an FTC increases the (permanent contract) employment opportunities in the long run (stepping stone effect) and whether FTCs affect the job finding behaviour of unemployed job searchers. Firstly, an econometric unemployment duration analysis distinguishing between both types of contracts as destination states is performed. Secondly, the effects of entering into FTCs from unemployment on future (permanent contract) employment opportunities are evaluated, attempting to account for the sequential decision problem of job searchers.
We present an efficient variant of LLL-reduction of lattice bases in the sense of Lenstra, Lenstra, Lovász [LLL82]. We organize LLL-reduction in segments of size k. Local LLL-reduction of segments is done using local coordinates of dimension 2k. Strong segment LLL-reduction yields bases of the same quality as LLL-reduction, but the reduction is n times faster for lattices of dimension n. We extend segment LLL-reduction to iterated subsegments. The resulting reduction algorithm runs in O(n^3 log n) arithmetic steps for integer lattices of dimension n with basis vectors of length 2^{O(n)}, compared to O(n^5) steps for LLL-reduction.
We introduce algorithms for lattice basis reduction that are improvements of the famous L3-algorithm. If a random L3-reduced lattice basis b_1, b_2, ..., b_n is given such that the vector of reduced Gram-Schmidt coefficients (μ_{i,j})_{1≤j<i≤n} is uniformly distributed in [0,1)^{n(n-1)/2}, then pruned enumeration finds with positive probability a shortest lattice vector. We demonstrate the power of these algorithms by solving random subset sum problems of arbitrary density with 74 and 82 weights, by breaking the Chor-Rivest cryptoscheme in dimensions 103 and 151 and by breaking Damgård's hash function.
We call a vector x ∈ R^n highly regular if it satisfies <m,x> = 0 for some short, non-zero integer vector m, where <.,.> is the inner product. We present an algorithm which, given x ∈ R^n and α ∈ N, finds a highly regular nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no short relation m̃ of length less than α/2 exists for points x̃ within half the x'-distance from x. The integer relation m for x' is, for random x, up to an average factor 2^{α/2} a shortest integer relation for x'. Our algorithm uses, for arbitrary real input x, at most O(n^4(n + log α)) many arithmetical operations on real numbers. If x is rational the algorithm operates on integers having at most O(n^5 + n^3 (log α)^2 + log(‖qx‖^2)) many bits, where q is the common denominator of x.
We study the following problem: given x ∈ R^n, either find a short integer relation m ∈ Z^n, so that <m,x> = 0 holds for the inner product <.,.>, or prove that no short integer relation exists for x. Håstad, Just, Lagarias and Schnorr (1989) give a polynomial time algorithm for the problem. We present a stable variation of the HJLS-algorithm that preserves lower bounds on λ(x) for infinitesimal changes of x. Given x ∈ R^n and α ∈ N, this algorithm finds a nearby point x' and a short integer relation m for x'. The nearby point x' is 'good' in the sense that no very short relation exists for points x̄ within half the x'-distance from x. On the other hand, if x' = x then m is, up to a factor 2^{n/2}, a shortest integer relation for x. Our algorithm uses, for arbitrary real input x, at most O(n^4(n + log α)) many arithmetical operations on real numbers. If x is rational the algorithm operates on integers having at most O(n^5 + n^3 (log α)^2 + log(‖qx‖^2)) many bits, where q is the common denominator of x.
Black box cryptanalysis applies to hash algorithms consisting of many small boxes, connected by a known graph structure, so that the boxes can be evaluated forward and backward by given oracles. We study attacks that work for any choice of the black boxes, i.e., we scrutinize the given graph structure. For example, we analyze the graph of the fast Fourier transform (FFT). We present optimal black box inversions of FFT-compression functions and black box constructions of collisions. This determines the minimal depth of FFT-compression networks for collision-resistant hashing. We propose the concept of multipermutation, which is a pair of orthogonal latin squares, as a new cryptographic primitive that generalizes the boxes of the FFT. Our examples of multipermutations are based on the operations circular rotation, bitwise xor, addition and multiplication.
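To make the definition tangible, the following sketch checks the multipermutation property for a simple linear map over a prime field (a hypothetical example chosen for clarity; the paper's own examples use rotation, xor, addition and multiplication):

    # A map r(x, y) = (f1(x, y), f2(x, y)) is a multipermutation if f1 and
    # f2 form a pair of orthogonal latin squares: fixing either input makes
    # each output coordinate a permutation, and (f1, f2) jointly is a bijection.
    p = 11

    def r(x, y):
        return ((x + y) % p, (x + 2 * y) % p)   # invertible: det = 1 mod p

    def is_multipermutation(f, n):
        for fixed in range(n):
            for coord in (0, 1):
                if len({f(fixed, y)[coord] for y in range(n)}) != n:
                    return False
                if len({f(x, fixed)[coord] for x in range(n)}) != n:
                    return False
        # Orthogonality: the joint map hits every output pair exactly once.
        return len({f(x, y) for x in range(n) for y in range(n)}) == n * n

    assert is_multipermutation(r, p)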
Parallel FFT-hashing
(1994)
We propose two families of scalable hash functions for collision resistant hashing that are highly parallel and based on the generalized fast Fourier transform (FFT). FFT hashing is based on multipermutations. This is a basic cryptographic primitive for perfect generation of diffusion and confusion which generalizes the boxes of the classic FFT. The slower FFT hash functions iterate a compression function. For the faster FFT hash functions all rounds are alike with the same number of message words entering each round.
We report on improved practical algorithms for lattice basis reduction. We propose a practical floating point version of the L3-algorithm of Lenstra, Lenstra, Lovász (1982). We present a variant of the L3-algorithm with "deep insertions" and a practical algorithm for block Korkin-Zolotarev reduction, a concept introduced by Schnorr (1987). Empirical tests show that the strongest of these algorithms solves almost all subset sum problems with up to 66 random weights of arbitrary bit length within at most a few hours on a UNISYS 6000/70 or within a couple of minutes on a SPARC1+ computer.
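A minimal exact-arithmetic sketch of the underlying L3-reduction (textbook form with the standard δ = 3/4 Lovász condition; the paper's floating point and deep insertion refinements are not reproduced here):

    from fractions import Fraction

    def lll(basis, delta=Fraction(3, 4)):
        """Textbook L3-reduction over exact rationals (slow but simple)."""
        b = [[Fraction(x) for x in v] for v in basis]
        n = len(b)

        def dot(u, v):
            return sum(x * y for x, y in zip(u, v))

        def gram_schmidt():
            bstar = []
            mu = [[Fraction(0)] * n for _ in range(n)]
            for i in range(n):
                v = b[i][:]
                for j in range(i):
                    mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                    v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
                bstar.append(v)
            return bstar, mu

        k = 1
        while k < n:
            bstar, mu = gram_schmidt()
            for j in range(k - 1, -1, -1):            # size reduction
                q = round(mu[k][j])
                if q:
                    b[k] = [x - q * y for x, y in zip(b[k], b[j])]
            bstar, mu = gram_schmidt()
            if dot(bstar[k], bstar[k]) >= (delta - mu[k][k-1] ** 2) * dot(bstar[k-1], bstar[k-1]):
                k += 1                                 # Lovász condition holds
            else:
                b[k], b[k - 1] = b[k - 1], b[k]        # swap and step back
                k = max(k - 1, 1)
        return b

    print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))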
We call a distribution on n-bit strings (ε, e)-locally random if for every choice of e ≤ n positions the induced distribution on e-bit strings is, in the L1-norm, at most ε away from the uniform distribution on e-bit strings. We establish local randomness in polynomial random number generators (RNG) that are candidate one-way functions. Let N be a squarefree integer and let f_1, ..., f_ℓ be polynomials with coefficients in Z_N = Z/NZ. We study the RNG that stretches a random x ∈ Z_N into the sequence of least significant bits of f_1(x), ..., f_ℓ(x). We show that this RNG provides local randomness if for every prime divisor p of N the polynomials f_1, ..., f_ℓ are linearly independent modulo the subspace of polynomials of degree ≤ 1 in Z_p[x]. We also establish local randomness in polynomial random function generators. This yields candidates for cryptographic hash functions. The concept of local randomness in families of functions extends the concept of universal families of hash functions by Carter and Wegman (1979). The proofs of our results rely on upper bounds for exponential sums.
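A small sketch of the generator pattern described above (toy parameters; the modulus and polynomials are hypothetical stand-ins chosen only to show the stretching step, not instances verified to meet the theorem's conditions):

    import secrets

    N = 15   # toy squarefree modulus (3 * 5); real parameters are far larger

    def stretch(x: int) -> list:
        # Least significant bit of f_i(x) mod N for f_i(x) = x^i, i = 2, 3, 4;
        # as formal polynomials these are linearly independent modulo the
        # subspace of polynomials of degree <= 1.
        return [pow(x, e, N) & 1 for e in (2, 3, 4)]

    print(stretch(secrets.randbelow(N)))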
We propose two improvements to the Fiat-Shamir authentication and signature scheme. We reduce the communication of the Fiat-Shamir authentication scheme to a single round while preserving the efficiency of the scheme. This also reduces the length of Fiat-Shamir signatures. Using secret keys consisting of small integers, we reduce the time for signature generation by a factor of 3 to 4. We propose a variation of our scheme using class groups that may be secure even if factoring large integers becomes easy.
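For context, a toy round of the basic Fiat-Shamir identification protocol that these improvements start from (toy modulus, and the challenge is supplied up front for brevity; this is not the paper's single-round variant):

    import math, secrets

    # Toy modulus; in practice N is a large composite with unknown factors.
    N = 499 * 547

    def keygen(k: int):
        secret = []
        while len(secret) < k:
            s = secrets.randbelow(N - 2) + 2
            if math.gcd(s, N) == 1:
                secret.append(s)
        public = [pow(s, 2, N) for s in secret]    # v_j = s_j^2 mod N
        return secret, public

    def prove(secret, challenge):
        r = secrets.randbelow(N - 2) + 2
        x = pow(r, 2, N)                           # commitment r^2
        y = r
        for s_j, e_j in zip(secret, challenge):    # response r * prod s_j^{e_j}
            if e_j:
                y = y * s_j % N
        return x, y

    def verify(public, challenge, x, y) -> bool:
        rhs = x
        for v_j, e_j in zip(public, challenge):
            if e_j:
                rhs = rhs * v_j % N
        return pow(y, 2, N) == rhs                 # y^2 == x * prod v_j^{e_j}

    sk, pk = keygen(4)
    e = [secrets.randbelow(2) for _ in range(4)]
    x, y = prove(sk, e)
    assert verify(pk, e, x, y)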
We introduce novel security proofs that use combinatorial counting arguments rather than reductions to the discrete logarithm or to the Diffie-Hellman problem. Our security results are sharp and clean, with no polynomial reduction times involved. We consider a combination of the random oracle model and the generic model. This corresponds to assuming an ideal hash function H given by an oracle and an ideal group of prime order q, where the binary encoding of the group elements is useless for cryptographic attacks. In this model, we first show that Schnorr signatures are secure against the one-more signature forgery: A generic adversary performing t generic steps including l sequential interactions with the signer cannot produce l+1 signatures with a better probability than \binom{t}{2}/q. We also characterize the different power of sequential and of parallel attacks. Secondly, we prove that signed ElGamal encryption is secure against the adaptive chosen ciphertext attack, in which an attacker can arbitrarily use a decryption oracle except for the challenge ciphertext. Moreover, signed ElGamal encryption is secure against the one-more decryption attack: A generic adversary performing t generic steps including l interactions with the decryption oracle cannot distinguish the plaintexts of l+1 ciphertexts from random strings with a probability exceeding \binom{t}{2}/q.
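As a reference point, a toy implementation of the Schnorr signature scheme analyzed here (schoolbook version over a small prime-order subgroup; parameters are illustrative only):

    import hashlib, secrets

    # Toy group: order-q subgroup of Z_p^*, q | p-1. Real use needs large params.
    p, q = 2579, 1289
    g = 4                              # generates the subgroup of order q
    x = secrets.randbelow(q - 1) + 1   # secret key
    y = pow(g, x, p)                   # public key

    def H(*parts) -> int:
        h = hashlib.sha256()
        for part in parts:
            h.update(str(part).encode())
        return int(h.hexdigest(), 16) % q

    def sign(m: str):
        k = secrets.randbelow(q - 1) + 1
        r = pow(g, k, p)               # commitment g^k
        c = H(r, m)                    # challenge from the random oracle
        s = (k + c * x) % q            # response
        return c, s

    def verify(m: str, c: int, s: int) -> bool:
        r = pow(g, s, p) * pow(y, -c, p) % p   # recover g^k = g^s * y^{-c}
        return c == H(r, m)

    c, s = sign("hello")
    assert verify("hello", c, s)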
Assuming a cryptographically strong cyclic group G of prime order q and a random hash function H, we show that ElGamal encryption with an added Schnorr signature is secure against the adaptive chosen ciphertext attack, in which an attacker can freely use a decryption oracle except for the target ciphertext. We also prove security against the novel one-more-decryption attack. Our security proofs are in a new model, corresponding to a combination of two previously introduced models, the Random Oracle model and the Generic model. The security extends to the distributed threshold version of the scheme. Moreover, we propose a very practical scheme for private information retrieval that is based on blind decryption of ElGamal ciphertexts.
Let b_1, ..., b_m ∈ R^n be an arbitrary basis of a lattice L that is a block Korkin-Zolotarev basis with block size β, and let λ_i(L) denote the successive minima of lattice L. We prove that for i = 1, ..., m

  (4/(i+3)) γ_β^{-2(i-1)/(β-1)} ≤ ‖b_i‖²/λ_i(L)² ≤ γ_β^{2(m-i)/(β-1)} (i+3)/4,

where γ_β is the Hermite constant. For β = 3 we establish the optimal upper bound ‖b_1‖²/λ_1(L)² ≤ (3/2)^{(m-1)/2}, and we present block Korkin-Zolotarev lattice bases for which this bound is tight. We improve the Nearest Plane Algorithm of Babai (1986) using block Korkin-Zolotarev bases. Given a block Korkin-Zolotarev basis b_1, ..., b_m with block size β and x ∈ span(b_1, ..., b_m), a lattice point v can be found in time β^{O(β)} satisfying ‖x - v‖² ≤ m γ_β^{2m/(β-1)} min_{u∈L} ‖x - u‖².
With the ubiquitous use of digital camera devices, especially in mobile phones, privacy is no longer threatened by governments and companies only. The new technology creates a new threat by ordinary people, who now have the means to take and distribute pictures of one's face at no risk and little cost in any situation in public and private spaces. Fast distribution via web-based photo albums, online communities and web pages exposes an individual's private life to the public in unprecedented ways. Social and legal measures are increasingly taken to deal with this problem. In practice, however, they lack efficiency, as they are hard to enforce. In this paper, we discuss a supportive infrastructure aimed at the distribution channel; as soon as the picture is publicly available, the exposed individual has a chance to find it and take proper action.
Correction to: C.P. Schnorr: Security of 2^t-Root Identification and Signatures, Proceedings CRYPTO '96, Springer LNCS 1109, (1996), pp. 143-156, page 148, section 3, line 5 of the proof of Theorem 3. The correction was presented as "Factoring N via proper 2^t-Roots of 1 mod N" at the Eurocrypt '97 rump session.
Let G be a finite cyclic group with generator α and with an encoding so that multiplication is computable in polynomial time. We study the security of bits of the discrete log x when given exp_α(x), assuming that the exponentiation function exp_α(x) = α^x is one-way. We reduce the general problem to the case that G has odd order q. If G has odd order q, the security of the least-significant bits of x and of the most-significant bits of the rational number x/q ∈ [0,1) follows from the work of Peralta [P85] and Long and Wigderson [LW88]. We generalize these bits and study the security of consecutive shift bits lsb(2^{-i} x mod q) for i = k+1, ..., k+j. When we restrict exp_α to arguments x such that some sequence of j consecutive shift bits of x is constant (i.e., not depending on x), we call it a 2^{-j}-fraction of exp_α. For groups of odd order q we show that every two 2^{-j}-fractions of exp_α are equally one-way by a polynomial time transformation: Either they are all one-way or none of them. Our key theorem shows that arbitrary j consecutive shift bits of x are simultaneously secure when given exp_α(x) iff the 2^{-j}-fractions of exp_α are one-way. In particular this applies to the j least-significant bits of x and to the j most-significant bits of x/q ∈ [0,1). For one-way exp_α the individual bits of x are secure when given exp_α(x) by the method of Håstad and Näslund [HN98]. For groups of even order 2^s q we show that the j least-significant bits of ⌊x/2^s⌋, as well as the j most-significant bits of x/q ∈ [0,1), are simultaneously secure iff the 2^{-j}-fractions of exp_{α'} are one-way for α' := α^{2^s}. We use and extend the models of generic algorithms of Nechaev (1994) and Shoup (1997). We determine the generic complexity of inverting fractions of exp_α for the case that α has prime order q. As a consequence, arbitrary segments of (1-ε) lg q consecutive shift bits of random x are, for constant ε > 0, simultaneously secure against generic attacks. Every generic algorithm using t generic steps (group operations) for distinguishing bit strings of j consecutive shift bits of x from random bit strings has advantage at most O((lg q) j √t (2^j/q)^{1/4}).
Let G be a group of prime order q with generator g. We study hardcore subsets H ⊆ G of the discrete logarithm (DL) log_g in the model of generic algorithms. In this model we count group operations such as multiplication and division, while computations with non-group data are free. It is known from Nechaev (1994) and Shoup (1997) that generic DL-algorithms for the entire group G must perform √(2q) generic steps. We show that DL-algorithms for small subsets H ⊆ G require m/2 + o(m) generic steps for almost all H of size #H = m with m ≤ √q. Conversely, m/2 + 1 generic steps are sufficient for all H ⊆ G of even size m. Our main result justifies generating secret DL-keys from seeds that are only (1/2) log_2 q bits long.
We present a novel practical algorithm that, given a lattice basis b_1, ..., b_n, finds in O(n² (k/6)^{k/4}) average time a shorter vector than b_1, provided that b_1 is (k/6)^{n/(2k)} times longer than the length of the shortest nonzero lattice vector. We assume that the given basis b_1, ..., b_n has an orthogonal basis that is typical for worst case lattice bases. The new reduction method samples short lattice vectors in high-dimensional sublattices; it advances in sporadic big jumps. It decreases the approximation factor achievable in a given time by known methods to less than its fourth root. We further speed up the new method by the simple and the general birthday method.
We enhance the security of Schnorr blind signatures against the novel one-more-forgery of Schnorr [Sc01] and Wagner [W02], which is possible even if the discrete logarithm is hard to compute. We show two limitations of this attack. Firstly, replacing the group G by the s-fold direct product G^{×s} increases the work of the attack, for a given number of signer interactions, to the s-th power, while increasing the work of the blind signature protocol merely by a factor s. Secondly, we bound the number of additional signatures per signer interaction that can be forged effectively. That fraction of the additional forged signatures can be made arbitrarily small.