We develop a 1+1 dimensional hydrodynamical model for central heavy-ion collisions at ultrarelativistic energies. Deviations from Bjorken's scaling are taken into account by implementing finite-size profiles for the initial energy density. The calculated rapidity distributions of pions, kaons and antiprotons in central Au+Au collisions at the c.m. energy of 200 AGeV are compared with experimental data from the BRAHMS Collaboration. The sensitivity of the results to the choice of the equation of state, the parameters of the initial state and the freeze-out conditions is investigated. The best fit to the experimental data is obtained for a soft equation of state and Gaussian-like initial profiles of the energy density.
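For reference, the Bjorken-scaling baseline that these finite-size profiles deviate from is the standard boost-invariant solution of 1+1 dimensional ideal hydrodynamics (a textbook result, not this model's specific parametrization):

```latex
% Bjorken scaling solution: energy density depends only on proper time tau
\frac{d\epsilon}{d\tau} = -\frac{\epsilon + p}{\tau}
\quad\Rightarrow\quad
\epsilon(\tau) = \epsilon_0 \left(\frac{\tau_0}{\tau}\right)^{1+c_s^2},
% e.g. an ultrarelativistic ideal gas with p = \epsilon/3 (c_s^2 = 1/3)
% gives \epsilon \propto \tau^{-4/3}.
```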
The concept of Large Extra Dimensions (LED) provides a way of solving the Hierarchy Problem, which concerns the weakness of gravity compared with the strong and electroweak forces. A consequence of LED is that miniature Black Holes (mini-BHs) may be produced at the Large Hadron Collider in p+p collisions. The present work uses the CHARYBDIS mini-BH generator code to simulate the hadronic signal which might be expected in a mid-rapidity particle tracking detector from the decay of these exotic objects, if indeed they are produced. An estimate is also given for Pb+Pb collisions.
The experimental signatures of TeV-mass black hole (BH) formation in heavy ion collisions at the LHC are examined. We find that black hole production results in a complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs and those with complete black hole decay. Due to the rather moderate luminosity in the first year of LHC running, the least chance for the observation of BHs or BHRs at this early stage will be by ionizing tracks in the ALICE TPC. Finally, we point out that stable BHRs would be interesting candidates for energy production by conversion of mass to Hawking radiation.
We examine experimental signatures of TeV-mass black hole formation in heavy ion collisions at the LHC. We find that the black hole production results in a complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking-decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs and those with complete black hole decay. Finally we point out that a Heckler-Kapusta-Hawking plasma may form from the emitted mono-jets. In this context we present new simulation data of Mach shocks and of the evolution of initial conditions until the freeze-out.
The production of Large Extra Dimension (LXD) Black Holes (BHs), with a new fundamental mass scale of M_f = 1 TeV, has been predicted to occur at the Large Hadron Collider, LHC, at the formidable rate of 10^8 per year in p-p collisions at full energy, 14 TeV, and at full luminosity. We show that such LXD-BH formation will be experimentally observable at the LHC through the complete disappearance of all very high p_t (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f = 1 TeV. We suggest to complement this clear cut-off signal at M > 2*500 GeV in the di-jet correlation function by detecting the subsequent Hawking-decay products of the LXD-BHs: either multiple high energy (> 100 GeV) SM mono-jets (i.e. with the away-side jet missing), sprayed off the evaporating BHs isentropically into all directions, or the thermalization of the multiple overlapping Hawking radiation in a Heckler-Kapusta plasma. Microcanonical quantum statistical calculations of the Hawking evaporation process for these LXD-BHs show that cold black hole remnants (BHRs) of mass ~ M_f remain as the ashes of these spectacular di-jet-suppressed events. Strong di-jet suppression is also expected with heavy ion beams at the LHC, due to quark-gluon-plasma-induced jet attenuation at medium to low jet energies, p_t < 200 GeV. The (mono-)jets in these events can be used to trigger on Tsunami-emission of secondary compressed QCD matter at well-defined Mach angles, both on the trigger side and at the away-side (missing) jet. The Mach-shock angles allow for a direct measurement of both the equation of state (EoS) and the speed of sound c_s via the supersonic bang in the "big bang" matter. We discuss the importance of the underlying strong collective flow - the gluon storm - of the QCD matter for the formation and evolution of these Mach-shock cones.
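For orientation, the temperature scale behind the hard (> 100 GeV) Hawking-decay quanta follows from the standard Schwarzschild-Tangherlini result in 4+n dimensions (a general LXD expectation, not the paper's microcanonical calculation itself):

```latex
% Hawking temperature of a (4+n)-dimensional Schwarzschild black hole:
T_H = \frac{n+1}{4\pi r_H},
% so TeV-scale horizons (r_H \sim 1/M_f \sim 10^{-4}\,\mathrm{fm})
% imply temperatures of order 100 GeV, i.e. hard decay quanta.
```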
We predict a significant deformation of Mach shocks by the gluon storm in central Au+Au collisions at RHIC and LHC energies, as compared to the case of weakly coupled jets propagating through a static medium. A possible complete stopping of p_t > 50 GeV jets at the LHC within 2-3 fm yields nonlinear high-density Mach shocks in the quark-gluon plasma, which can be studied in the complex emission and disintegration pattern of the possibly supercooled matter. We report on the first full 3-dimensional fluid dynamical studies of the strong effects of a first-order phase transition on the evolution and the Tsunami-like Mach shock emission of the QCD matter.
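The weakly coupled, static-medium limit these deformed shocks are compared against is governed by the elementary Mach-cone relation (standard kinematics, not specific to this paper):

```latex
% Mach angle of a supersonic jet (velocity v) in a medium
% with speed of sound c_s:
\cos\theta_M = \frac{c_s}{v},
% e.g. an ultrarelativistic jet (v \to 1) in an ideal QGP with
% c_s^2 = 1/3 gives \theta_M = \arccos(1/\sqrt{3}) \approx 55^\circ.
```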
We have calculated the D-meson spectral density at finite temperature within a self-consistent coupled-channel approach that dynamically generates the Lambda_c(2593) resonance. We find a small mass shift for the D-meson in this hot and dense medium, while the spectral density develops a sizeable width. The reduced attraction felt by the D-meson in hot and dense matter, together with the large width observed, has important consequences for D-meson production in the future CBM experiment at FAIR.
The rapidity dependence of the single and double neutron-to-proton ratios of nucleon emission from the isospin-asymmetric but mass-symmetric reactions Zr+Ru and Ru+Zr in the energy range 100-800 A MeV and the impact-parameter range 0-8 fm is investigated. A reaction system with isospin asymmetry and mass symmetry has the advantage of simultaneously exposing the dependence on the symmetry energy and on the degree of isospin equilibrium. We find that the beam-energy and impact-parameter dependence of the slope parameter of the double neutron-to-proton ratio (F_D) as a function of rapidity is quite sensitive to the density dependence of the symmetry energy, especially at energies E_b ~ 400 A MeV and reduced impact parameters around 0.5. Here the symmetry energy effect on F_D is enhanced, as compared to the single neutron-to-proton ratio. The degree of equilibrium with respect to isospin (isospin mixing) in terms of F_D is addressed, and its dependence on the symmetry energy is also discussed.
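The "density dependence of the symmetry energy" probed here is conventionally written as a power law (a standard transport-model parametrization; the exponent values quoted are illustrative, not taken from this paper):

```latex
% Power-law parametrization of the nuclear symmetry energy:
E_{\mathrm{sym}}(\rho) \approx E_{\mathrm{sym}}(\rho_0)
\left(\frac{\rho}{\rho_0}\right)^{\gamma},
% with \rho_0 the saturation density; \gamma < 1 is called "soft",
% \gamma > 1 "stiff", and n/p observables discriminate between them.
```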
Several observables of unbound nucleons which are, to some extent, sensitive to the medium modifications of nucleon-nucleon elastic cross sections in neutron-rich intermediate-energy heavy ion collisions are investigated. The splitting effect of the neutron and proton effective masses on the cross sections is discussed. It is found that the transverse flow as a function of rapidity, Q_zz as a function of momentum, and the ratio of the half-widths of the transverse and longitudinal rapidity distributions, R_t/l, are very sensitive to the medium modifications of the cross sections. The transverse-momentum distribution of two-nucleon correlation functions does not yield information on the in-medium cross section.
No black holes at IceCube
(2006)
Gravitational radiation from ultra high energy cosmic rays in models with large extra dimensions
(2006)
The effects of classical gravitational radiation in models with large extra dimensions are investigated for ultra-high energy cosmic rays (CRs). The cross sections are implemented into a simulation package (SENECA) for high-energy hadron-induced CR air showers. We predict that gravitational radiation from quasi-elastic scattering could be observed at incident CR energies above 10^9 GeV for a setting with more than two extra dimensions. It is further shown that this gravitational energy loss can alter the energy reconstruction for CR energies E_CR > 5x10^9 GeV.
The pion source as seen through HBT correlations at RHIC energies is investigated within the UrQMD approach. We find that the calculated transverse-momentum, centrality, and system-size dependence of the Pratt-HBT radii R_L and R_S is reasonably well in line with experimental data. The predicted R_O values in central heavy ion collisions are larger than the experimental data. The corresponding quantity sqrt(R_O^2 - R_S^2) of the pion emission source is somewhat larger than experimental estimates.
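In the standard Gaussian-source interpretation (a textbook relation, neglecting space-time correlations in the source), this out-side difference measures the emission duration:

```latex
% Out-side difference of the Pratt-Bertsch HBT radii:
R_O^2 - R_S^2 \approx \beta_T^2\,\Delta\tau^2,
% where \beta_T is the pair transverse velocity; a larger
% \sqrt{R_O^2 - R_S^2} thus indicates a longer pion emission time.
```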
We demonstrate the occurrence of canonical suppression associated with the conservation of a U(1) charge in current transport models. For this study, a pion gas is simulated within two different transport approaches by incorporating inelastic, volume-limited collisions pi pi <-> K anti-K for the production of kaon pairs. Both descriptions can dynamically account for the suppression of the yields of rare strange particles in a limited box, in full accordance with a canonical statistical description.
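The canonical statistical benchmark the transport runs reproduce has a standard closed form (a textbook result for exact U(1) conservation with zero net charge; z is the single-particle partition function):

```latex
% Canonical (CE) vs. grand-canonical (GC) kaon yield for exactly
% conserved net strangeness S = 0 (pair production only):
\langle N_K \rangle_{\mathrm{CE}} = z\,\frac{I_1(2z)}{I_0(2z)},
\qquad
\langle N_K \rangle_{\mathrm{GC}} = z,
% so in a small volume (z \ll 1) the Bessel-function factor
% I_1(2z)/I_0(2z) \approx z suppresses rare pair-produced particles.
```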
We propose to use hadron number fluctuations in limited momentum regions to study the evolution of the initial flows in high energy nuclear collisions. In this method, by a proper preparation of the collision sample, the projectile and target initial flows are marked by fluctuations in the number of colliding nucleons. We discuss three limiting cases of the evolution of the flows - transparency, mixing and reflection - and present quantitative predictions for them obtained within several models. Finally, we apply the method to the NA49 results on fluctuations of the negatively charged hadron multiplicity in Pb+Pb interactions at 158A GeV and conclude that the data favor a hydrodynamical model with a significant degree of mixing of the initial flows at the early stage of the collision.
Language universals are statements that are true of all languages, for example: “all languages have stop consonants”. But beneath this simple definition lurks deep ambiguity, and this triggers misunderstanding in both interdisciplinary discourse and within linguistics itself. A core dimension of the ambiguity is captured by the opposition “absolute vs. statistical universal”, although the literature uses these terms in varied ways. Many textbooks draw the boundary between absolute and statistical according to whether a sample of languages contains exceptions to a universal. But the notion of an exception-free sample is not very revealing even if the sample contained all known languages: there is always a chance that an as yet undescribed language, or an unknown language from the past or future, will provide an exception.
Recent approaches to Word Sense Disambiguation (WSD) generally fall into two classes: (1) information-intensive approaches and (2) information-poor approaches. Our hypothesis is that for memory-based learning (MBL), a reduced amount of data is more beneficial than the full range of features used in the past. Our experiments show that MBL combined with a restricted set of features and a feature selection method that minimizes the feature set leads to competitive results, outperforming all systems that participated in the SENSEVAL-3 competition on the Romanian data. Thus, with this specific method, a tightly controlled feature set improves the accuracy of the classifier, reaching 74.0% in the fine-grained and 78.7% in the coarse-grained evaluation.
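The memory-based learning setup can be sketched in a few lines of Python; the features, senses, and stored instances below are invented toy data for illustration and do not reproduce the SENSEVAL-3 feature set or the paper's feature selection method:

```python
# Minimal sketch of memory-based learning (k-NN) for word sense
# disambiguation. All training data here is hypothetical.
from collections import Counter

def overlap(a, b):
    """Count matching feature values (the basic MBL overlap metric)."""
    return sum(1 for f in a if f in b and a[f] == b[f])

def classify(instance, memory, k=3):
    """Return the majority sense among the k most similar stored examples."""
    ranked = sorted(memory, key=lambda ex: overlap(instance, ex[0]), reverse=True)
    votes = Counter(sense for _, sense in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy "memory" for the ambiguous word "bank", with a deliberately
# reduced feature set (previous word, next word, part-of-speech):
memory = [
    ({"prev": "river", "next": "of", "pos": "NN"}, "shore"),
    ({"prev": "the", "next": "account", "pos": "NN"}, "finance"),
    ({"prev": "central", "next": "raised", "pos": "NN"}, "finance"),
    ({"prev": "muddy", "next": "eroded", "pos": "NN"}, "shore"),
]

print(classify({"prev": "river", "next": "eroded", "pos": "NN"}, memory))  # prints "shore"
```

Restricting the feature dictionary, as in the experiments above, directly shrinks the space over which the overlap metric is computed, which is the sense in which a reduced feature set can help a memory-based learner.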
Prepositional phrase (PP) attachment is one of the major sources of errors in traditional statistical parsers. The reason lies in the type of information necessary for resolving structural ambiguities. For parsing, it is assumed that distributional information about parts-of-speech and phrases is sufficient for disambiguation. For PP attachment, in contrast, lexical information is needed. The problem of PP attachment has sparked much interest ever since Hindle and Rooth (1993) formulated it in a way that can be easily handled by machine learning approaches: in their approach, PP attachment is reduced to the decision between noun and verb attachment, and the relevant information is reduced to the two possible attachment sites (the noun and the verb) and the preposition of the PP. Brill and Resnik (1994) extended the feature set to the now standard 4-tuple, which also contains the noun inside the PP. Among the many publications on the problem of PP attachment, Volk (2001; 2002) describes the only system for German. He uses a combination of supervised and unsupervised methods. The supervised method is based on the back-off model by Collins and Brooks (1995); the unsupervised part consists of heuristics such as "If there is a support verb construction present, choose verb attachment". Volk trains his back-off model on the Negra treebank (Skut et al., 1998) and extracts frequencies for the heuristics from the "Computerzeitung", which also serves as the test data set. Consequently, it is difficult to compare Volk's results to other results for German, including the results presented here, since he not only uses a combination of supervised and unsupervised learning but also performs domain adaptation. Most researchers working on PP attachment seem to be satisfied with a PP attachment system; we have found hardly any work on integrating the results of such approaches into actual parsers. The only exceptions are Mehl et al. (1998) and Foth and Menzel (2006), both working with German data. Mehl et al. report a slight improvement in PP attachment, from 475 correct PPs out of 681 for the original parser to 481 PPs. Foth and Menzel report an improvement in overall accuracy from 90.7% to 92.2%. Both integrate statistical attachment preferences into a parser.

First, we will investigate whether dependency parsing, which generally uses lexical information, shows the same performance on PP attachment as an independent PP attachment classifier does. Then we will investigate an approach that allows the integration of PP attachment information into the output of a parser without having to modify the parser: the results of an independent PP attachment classifier are integrated into the parse of a dependency parser for German in a postprocessing step.
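As an illustration of the back-off idea behind Collins and Brooks (1995), here is a minimal sketch over the standard 4-tuple; the back-off chain is simplified (first non-empty count decides, rather than summing counts per level) and the training examples are toy data, not treebank counts:

```python
# Hedged sketch of a Collins-and-Brooks-style back-off model for PP
# attachment over the 4-tuple (verb, n1, preposition, n2).
from collections import defaultdict

class BackoffPPAttacher:
    def __init__(self):
        # counts[subtuple] = [noun_attachments, verb_attachments]
        self.counts = defaultdict(lambda: [0, 0])

    def _keys(self, v, n1, p, n2):
        # Back off from the full 4-tuple to ever sparser sub-tuples,
        # always keeping the preposition (the most informative feature).
        return [(v, n1, p, n2), (v, n1, p), (v, p, n2), (n1, p, n2),
                (v, p), (n1, p), (p, n2), (p,)]

    def train(self, v, n1, p, n2, label):
        i = 0 if label == "N" else 1  # "N" = noun attachment, "V" = verb
        for key in self._keys(v, n1, p, n2):
            self.counts[key][i] += 1

    def predict(self, v, n1, p, n2):
        for key in self._keys(v, n1, p, n2):
            n_cnt, v_cnt = self.counts.get(key, (0, 0))
            if n_cnt + v_cnt > 0:
                return "N" if n_cnt >= v_cnt else "V"
        return "N"  # default for unseen prepositions: noun attachment

m = BackoffPPAttacher()
m.train("eat", "pizza", "with", "anchovies", "N")
m.train("eat", "pizza", "with", "fork", "V")
print(m.predict("see", "pizza", "with", "fork"))  # backs off to (n1, p, n2): prints "V"
```

A classifier of this shape could supply the attachment preferences that are merged into the dependency parser's output in the postprocessing step described above.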
The renowned Grimm Dictionary (1854-1961) makes the statement that the German copula sein (to be) is “the most general and colourless of all verbal concepts” (der allgemeinste und farbloseste aller verbalbegriffe). A more concise summary of the linguistic issues surrounding the copula is hardly possible. These two properties (and the latent tension between them!) make copulas a particularly interesting and vexing subject of linguistic research. Copulas appear to be almost colourless, i.e., devoid of any concrete meaning, thus leading to the question of why such expressions exist at all, not only in German but in the majority of the world’s languages. And at the same time copulas presumably provide the best window into the core of verbal concepts thereby telling us what it actually means to be a verb – at least in a language like German or English. While there is a rather rich body of research on copulas in philosophical and formal semantics including several in-depth studies on the copular systems of individual languages, copulas have received comparably little attention from a typological perspective. The monograph of Regina Pustet sets out to fill this gap. She presents an extensive cross-linguistic study of copula usage based on a sample of 154 languages drawn from the language families of the world. The analysis is embedded in the theoretical framework of functional typology. The study aims at uncovering universal principles that govern the distribution of copulas in nominal, adjectival, and verbal predications. Its major objective is the development of a “semantically-based model of copula distribution” (p.62) by means of which the presence vs. absence of copulas can be motivated through the inherent meaning of the lexical items they potentially combine with. 
Drawing mainly on the work by Givón (1979, 1984) and Croft (1991, 2001), who provide a functional foundation for the traditional parts of speech, Pustet identifies four semantic parameters which, taken together, are claimed to support substantial generalisations about copula distribution – within a given language as well as cross-linguistically. These parameters are DYNAMICITY, TRANSIENCE, TRANSITIVITY, and DEPENDENCY. Pustet goes on to argue – and this is in fact the driving force behind the overall monograph – that the distributional behaviour of copulas, in turn, yields a useful methodology for developing a general approach to lexical categorization. Thus, in the long run Pustet aims at contributing to a better understanding of the traditional parts of speech, noun, adjective, and verb, by defining them in terms of "semantic feature bundles, which can be arranged in [a] coherent semantic similarity space" (p.193).
This paper presents an LTAG analysis of reflexives like himself and reciprocals like each other. These items need to find a c-commanding antecedent from which they retrieve (part of) their own denotation and with which they syntactically agree. The relation between the anaphoric item and its antecedent must satisfy the following important locality conditions (Chomsky (1981)).
The goal of this paper is to re-examine the status of the condition in (1) proposed in Alexiadou and Anagnostopoulou (2001; henceforth A&A 2001), in view of recent developments in syntactic theory. (1) The subject-in-situ generalization (SSG) By Spell-Out, vP can contain only one argument with a structural Case feature. We argue that (1) is a more general condition than previously recognized, and that the domain of its application is parametrized. More specifically, based on a comparison between Indo-European (IE) and Khoisan languages, we argue that (1) supports an interpretation of the EPP as a general principle, and not as a property of T. Viewed this way, the SSG is a condition that forces dislocation of arguments as a consequence of a constraint on Case checking.
Presupposition
(2007)