The concept of Large Extra Dimensions (LED) provides a way of solving the Hierarchy Problem, which concerns the weakness of gravity compared with the strong and electroweak forces. A consequence of LED is that miniature black holes (mini-BHs) may be produced at the Large Hadron Collider in p+p collisions. The present work uses the CHARYBDIS mini-BH generator code to simulate the hadronic signal that might be expected in a mid-rapidity particle tracking detector from the decay of these exotic objects, if indeed they are produced. An estimate is also given for Pb+Pb collisions.
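For orientation (this relation is standard in the LED literature, not part of the abstract itself): the weakness of four-dimensional gravity is explained by diluting it in the volume of the n compact extra dimensions of common size R,

\[ M_{\mathrm{Pl}}^{2} \sim M_f^{\,n+2}\, R^{\,n}, \]

so a fundamental scale M_f near 1 TeV can reproduce the observed Planck scale M_Pl ~ 10^19 GeV, removing the hierarchy between the gravitational and electroweak scales.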
In this paper, we investigate a wide range of features for their usefulness in the resolution of nominal coreference, both as hard constraints (i.e. completely removing elements from the list of possible candidates) and as soft constraints (where an accumulation of violations of soft constraints makes it less likely that a candidate is chosen as the antecedent). We present a state-of-the-art system based on such constraints, with weights estimated by a maximum entropy model, using lexical information to resolve cases of coreferent bridging.
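As a rough illustration of this constraint combination (a minimal sketch with invented names, not the authors' implementation): hard constraints prune the candidate list outright, while soft-constraint violations lower a log-linear (maximum entropy) score.

import math

def resolve_antecedent(anaphor, candidates, hard, soft_weights):
    """Sketch: 'hard' is a list of predicates that must all hold;
    'soft_weights' pairs feature functions with maxent-estimated
    weights. All names here are illustrative, not from the paper."""
    viable = [c for c in candidates if all(ok(anaphor, c) for ok in hard)]
    if not viable:
        return None
    def score(c):
        # Log-linear score: violated soft constraints carry negative
        # weights, making the candidate less likely to be chosen.
        return math.exp(sum(w * f(anaphor, c) for f, w in soft_weights))
    return max(viable, key=score)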
The work presented here addresses the question of how to determine whether a grammar formalism is powerful enough to describe natural languages. The expressive power of a formalism can be characterized in terms of i) the string languages it generates (weak generative capacity, WGC) or ii) the tree languages it generates (strong generative capacity, SGC). The notion of WGC is not enough to determine whether a formalism is adequate for natural languages. We argue that even SGC is problematic, since the set of trees a grammar formalism for natural languages should be able to generate is difficult to determine. The concrete syntactic structures assumed for natural languages depend very much on theoretical stipulations, and empirical evidence for syntactic structures is rather hard to obtain. Therefore, for lexicalized formalisms, we propose to consider the ability to generate certain strings together with specific predicate-argument dependencies as a criterion of adequacy for natural languages.
The transverse momentum dependence of the anisotropic flow v_2 for pi, K, nucleon, Lambda, Xi and Omega is studied for Au+Au collisions at sqrt s_NN = 200 GeV within two independent string-hadron transport approaches (RQMD and UrQMD). Although both models reach only 60% of the absolute magnitude of the measured v_2, they both predict the particle type dependence of v_2, as observed by the RHIC experiments: v_2 exhibits a hadron-mass hierarchy (HMH) in the low p_T region and a number-of-constituent-quark (NCQ) dependence in the intermediate p_T region. The failure of the hadronic models to reproduce the absolute magnitude of the observed v_2 indicates that transport calculations of heavy ion collisions at RHIC must incorporate interactions among quarks and gluons in the early, hot and dense phase. The presence of an NCQ scaling in the string-hadron model results suggests that the particle-type dependencies observed in heavy-ion collisions at intermediate p_T are related to the hadronic cross sections in vacuum rather than to the hadronization process itself, as suggested by quark recombination models.
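The NCQ dependence referred to here is usually stated as the scaling relation (standard in the literature, quoted for reference)

\[ v_2^{h}(p_T) \approx n_q \, v_2^{q}\!\left(p_T / n_q\right), \]

with n_q = 2 for mesons and n_q = 3 for baryons, while the low-p_T hadron-mass hierarchy means that, at fixed p_T, heavier hadrons carry a smaller v_2.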
The causative/anticausative alternation has been the topic of much typological and theoretical discussion in the linguistic literature. This alternation is characterized by verbs with transitive and intransitive uses, such that the transitive use of a verb V means roughly "cause to V-intransitive" (see Levin 1993). The discussion revolves around two issues: the first concerns the similarities and differences between the anticausative and the passive, and the second concerns the derivational relationship, if any, between the transitive and the intransitive variant. With respect to the second issue, a number of approaches have been developed. The approach according to which each variant is assigned an independent lexical entry has been judged conceptually unsatisfactory, leading to the conclusion that the two variants must be derivationally related. The question then is which of the two is basic and where this derivation takes place in the grammar. Our contribution to this discussion is to argue against derivational approaches to the causative/anticausative alternation. We focus on the distribution of PPs related to external arguments (agent, causer, instrument, causing event) in passives and anticausatives of English, German and Greek, and on the set of verbs undergoing the causative/anticausative alternation in these languages. We argue that the crosslinguistic differences in these two domains provide evidence against both causativization and detransitivization analyses of the causative/anticausative alternation. We offer an approach to this alternation that builds on a syntactic decomposition of change-of-state verbs into a Voice and a CAUS component. Crosslinguistic variation in passives and anticausatives depends on properties of Voice and its combinations with CAUS and various types of roots.
Gravitational radiation from ultra high energy cosmic rays in models with large extra dimensions
(2006)
The effects of classical gravitational radiation in models with large extra dimensions are investigated for ultra high energy cosmic rays (CRs). The cross sections are implemented into a simulation package (SENECA) for high energy hadron-induced CR air showers. We predict that gravitational radiation from quasi-elastic scattering could be observed at incident CR energies above 10^9 GeV for a setting with more than two extra dimensions. It is further shown that this gravitational energy loss can alter the energy reconstruction for CR energies E_CR > 5×10^9 GeV.
Elliptic flow analysis at RHIC with the Lee-Yang Zeroes method in a relativistic transport approach
(2006)
The Lee-Yang zeroes method is applied to study elliptic flow (v_2) in Au+Au collisions at sqrt s = 200 A GeV with the UrQMD model. In this transport approach, the true event plane is known, and both nonflow effects and event-by-event v_2 fluctuations are present. Although the low resolution prohibits applying the method to the most central and most peripheral collisions, the integral and differential elliptic flow from the Lee-Yang zeroes method agree very well with the exact v_2 values for semi-central collisions.
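For reference, in the standard formulation of the method (Bhalerao, Borghini and Ollitrault), the integrated flow is read off from the first zero of a generating function of the projected flow vector:

\[ G^{\theta}(ir) = \left| \left\langle e^{\, i r Q^{\theta}} \right\rangle \right|, \qquad Q^{\theta} = \sum_{j} \cos 2(\phi_j - \theta), \qquad V_2 = \frac{j_{01}}{r_0^{\theta}}, \]

where r_0^θ is the position of the first minimum of G^θ and j_01 ≈ 2.405 is the first root of the Bessel function J_0. Low event-plane resolution pushes r_0^θ into the statistical noise, which is why the method fails for the most central and most peripheral bins.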
The experimental signatures of TeV-mass black hole (BH) formation in heavy-ion collisions at the LHC are examined. We find that black hole production results in a complete disappearance of all very high p_T (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f ~ 1 TeV. We show that the subsequent Hawking decay produces multiple hard mono-jets and discuss their detection. We study the possibility of cold black hole remnant (BHR) formation of mass ~ M_f and the experimental distinguishability of scenarios with BHRs from those with complete black hole decay. Due to the rather moderate luminosity in the first year of LHC running, the best chance for the observation of BHs or BHRs at this early stage will be via ionizing tracks in the ALICE TPC. Finally, we point out that stable BHRs would be interesting candidates for energy production by conversion of mass to Hawking radiation.
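Production estimates behind such signatures typically use the geometric parton-level cross section (a standard approximation, quoted here for context):

\[ \sigma(M_{BH}) \approx \pi r_H^2, \qquad r_H \sim \frac{1}{M_f} \left( \frac{M_{BH}}{M_f} \right)^{\frac{1}{n+1}}, \]

up to n-dependent numerical factors, where r_H is the (4+n)-dimensional Schwarzschild radius and n the number of extra dimensions.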
The production of Large Extra Dimension (LXD) black holes (BHs), with a new fundamental mass scale of M_f = 1 TeV, has been predicted to occur at the Large Hadron Collider (LHC) at the formidable rate of 10^8 per year in p+p collisions at full energy, 14 TeV, and at full luminosity. We show that such LXD-BH formation will be experimentally observable at the LHC through the complete disappearance of all very high p_t (> 500 GeV) back-to-back correlated di-jets of total mass M > M_f = 1 TeV. We suggest complementing this clear cut-off signal at M > 2*500 GeV in the di-jet correlation function by detecting the subsequent Hawking-decay products of the LXD-BHs: either multiple high energy (> 100 GeV) SM mono-jets (i.e. with the away-side jet missing), sprayed off the evaporating BHs isentropically in all directions, or the thermalization of the multiple overlapping Hawking radiation in a Heckler-Kapusta plasma. Microcanonical quantum statistical calculations of the Hawking evaporation process for these LXD-BHs show that cold black hole remnants (BHRs) of mass ~ M_f remain as the ashes of these spectacular di-jet-suppressed events. Strong di-jet suppression is also expected with heavy-ion beams at the LHC, due to quark-gluon-plasma induced jet attenuation at medium to low jet energies, p_t < 200 GeV. The (mono-)jets in these events can be used to trigger on tsunami-like emission of secondary compressed QCD matter at well defined Mach angles, both on the trigger side and on the away-side (missing) jet. The Mach shock angles allow for a direct measurement of both the equation of state (EoS) and the speed of sound c_s via the supersonic bang in the "big bang" matter. We discuss the importance of the underlying strong collective flow - the gluon storm - of the QCD matter for the formation and evolution of these Mach shock cones. We predict a significant deformation of Mach shocks by the gluon storm in central Au+Au collisions at RHIC and LHC energies, as compared to the case of weakly coupled jets propagating through a static medium. A possible complete stopping of p_t > 50 GeV jets at the LHC within 2-3 fm yields nonlinear high density Mach shocks in the quark gluon plasma, which can be studied via the complex emission and disintegration pattern of the possibly supercooled matter. We report on first fully 3-dimensional fluid dynamical studies of the strong effects of a first order phase transition on the evolution and the tsunami-like Mach shock emission of the QCD matter.
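The Mach-angle measurement mentioned above rests on the textbook relation between shock angle and sound speed,

\[ \cos \theta_M = \frac{c_s}{v_{\mathrm{jet}}}, \]

so for an ultrarelativistic jet (v_jet ≈ 1) a measured cone angle gives c_s directly and thereby constrains the EoS; e.g. c_s^2 = 1/3 for an ideal massless parton gas would give θ_M ≈ 55°.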
We have calculated the D-meson spectral density at finite temperature within a self-consistent coupled-channel approach that generates dynamically the Lambda_c (2593) resonance. We find a small mass shift for the D-meson in this hot and dense medium while the spectral density develops a sizeable width. The reduced attraction felt by the D-meson in hot and dense matter together with the large width observed have important consequences for the D-meson production in the future CBM experiment at FAIR.
We obtain the D-meson spectral density at finite temperature for the conditions of density and temperature expected at FAIR. We perform a self-consistent coupled-channel calculation taking, as a bare interaction, a separable potential model. The Lambda_c (2593) resonance is generated dynamically. We observe that the D-meson spectral density develops a sizeable width while the quasiparticle peak stays close to the free position. The consequences for the D-meson production at FAIR are discussed.
Event-by-event fluctuations of the net baryon number and electric charge in nucleus-nucleus collisions are studied for Pb+Pb at SPS energies within the HSD transport model. We reveal an important role of the fluctuations in the number of target nucleon participants: they strongly influence all measured fluctuations, even in samples of events with a rather rigid centrality trigger. This fact can be used to check different scenarios of nucleus-nucleus collisions by measuring the multiplicity fluctuations as a function of collision centrality in fixed kinematical regions of the projectile and target hemispheres. The HSD results for the event-by-event fluctuations of electric charge in central Pb+Pb collisions at 20, 30, 40, 80 and 158 A GeV are in good agreement with the NA49 experimental data and considerably larger than expected in a quark-gluon plasma. This demonstrates that the distortions of the initial fluctuations by the hadronization phase and, in particular, by the final resonance decays dominate the observable fluctuations.
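A common quantitative measure in such event-by-event analyses (not defined in the abstract, added for orientation) is the scaled variance of the multiplicity distribution,

\[ \omega = \frac{\langle N^2 \rangle - \langle N \rangle^2}{\langle N \rangle}, \]

which equals 1 for a Poisson distribution; deviations of ω from unity signal correlations beyond independent particle emission, e.g. from participant-number fluctuations or resonance decays.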
We propose to use hadron number fluctuations in limited momentum regions to study the evolution of initial flows in high energy nuclear collisions. In this method, by a proper preparation of the collision sample, the projectile and target initial flows are marked by fluctuations in the number of colliding nucleons. We discuss three limiting cases of the evolution of flows - transparency, mixing and reflection - and present quantitative predictions for them obtained within several models. Finally, we apply the method to the NA49 results on fluctuations of the negatively charged hadron multiplicity in Pb+Pb interactions at 158 A GeV and conclude that the data favor a hydrodynamical model with a significant degree of mixing of the initial flows at the early stage of the collision.
This paper presents an approach to the question of whether it is possible to construct a parser based on ideas from case-based reasoning. Such a parser would employ a partial analysis of the input sentence to select a (nearly) complete syntax tree and then adapt this tree to the input sentence. The experiments, performed on German data from the TüBa-D/Z treebank and the KaRoPars partial parser, show that a wide range of levels of generality can be reached, depending on which types of information are used to determine the similarity between the input sentence and the training sentences. The results show that it is possible to construct a case-based parser; the optimal setting among those presented here needs to be determined empirically.
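Schematically, the retrieve-and-adapt loop described above might look as follows (a minimal sketch; the similarity measure and the adaptation step are placeholders, not the paper's actual components):

from dataclasses import dataclass

@dataclass
class Case:
    partial: frozenset  # features of the partial analysis (e.g. POS/chunk tags)
    tree: object        # the stored (nearly) complete syntax tree

def jaccard(a, b):
    # One plausible similarity measure over partial-analysis features.
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cbr_parse(partial, case_base, adapt):
    """Case-based parsing: retrieve the stored case whose partial
    analysis is most similar, then adapt its tree to the input."""
    best = max(case_base, key=lambda case: jaccard(partial, case.partial))
    return adapt(best.tree)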
In recent years, research in parsing has extended in several new directions. One of these directions is concerned with parsing languages other than English. Treebanks have become available for many European languages, but also for Arabic, Chinese, and Japanese. However, it was shown that parsing results on these treebanks depend on the types of treebank annotation used. Another direction in parsing research is the development of dependency parsers. Dependency parsing profits from the non-hierarchical nature of dependency relations, so lexical information can be included in the parsing process in a much more natural way. Machine-learning-based approaches in particular are very successful. The results achieved by these dependency parsers are very competitive, although comparisons are difficult because of the differences in annotation. For English, the Penn Treebank has been converted to dependencies. For this version, Nivre et al. report an accuracy rate of 86.3%, as compared to an F-score of 92.1 for Charniak's parser. The Penn Chinese Treebank is also available in both a constituent and a dependency representation. The best results reported for parsing experiments with this treebank give an F-score of 81.8 for the constituent version and an accuracy of 79.8% for the dependency version. The general trend in comparisons between constituent and dependency parsers is that the dependency parser performs slightly worse than the constituent parser. The only exception occurs for German, where F-scores for constituent-plus-grammatical-function parses range between 51.4 and 75.3, depending on the treebank, NEGRA or TüBa-D/Z. The dependency parser based on a converted version of TüBa-D/Z, in contrast, reached an accuracy of 83.4%, i.e. 12 percentage points better than the best constituent analysis including grammatical functions.
This paper presents a comparative study of probabilistic treebank parsing of German, using the Negra and TüBa-D/Z treebanks. Experiments with the Stanford parser, which uses a factored PCFG and dependency model, show that, contrary to previous claims for other parsers, lexicalization of PCFG models boosts parsing performance for both treebanks. The experiments also show that there is a big difference in parsing performance between models trained on the Negra and on the TüBa-D/Z treebanks. Parser performance for the models trained on TüBa-D/Z is comparable to parsing results for English with the Stanford parser trained on the Penn treebank. This comparison at least suggests that German is not harder to parse than its West Germanic neighbor language English.
Using a qualitative analysis of disagreements from a referentially annotated newspaper corpus, we show that, in coreference annotation, vague referents are prone to greater disagreement. We show how potentially problematic cases can be dealt with in a way that is practical even for larger-scale annotation, considering a real-world example from newspaper text.
In the past, a divide could be seen between 'deep' parsers on the one hand, which construct a semantic representation from their input but usually have significant coverage problems, and more robust parsers on the other hand, which are usually based on a (statistical) model derived from a treebank and have larger coverage, but leave the problem of semantic interpretation to the user. More recently, approaches have emerged that combine the robustness of data-driven (statistical) models with more detailed linguistic interpretation, such that the output can be used for deeper semantic analysis. Cahill et al. (2002) use a PCFG-based parsing model in combination with a set of principles and heuristics to derive functional (f-)structures of Lexical-Functional Grammar (LFG). They show that the derived functional structures have a better quality than those generated by a parser based on a state-of-the-art hand-crafted LFG grammar. Advocates of Dependency Grammar usually point out that dependencies already are a semantically meaningful representation (cf. Menzel, 2003). However, parsers based on dependency grammar normally create representations that are underspecified with respect to certain phenomena such as coordination, apposition and control structures. In these areas they are too "shallow" to be directly used for semantic interpretation. In this paper, we adopt an approach similar to that of Cahill et al. (2002), using a dependency-based analysis to derive functional structure, and demonstrate the feasibility of this approach using German data. A major focus of our discussion is on the treatment of coordination and other potentially underspecified structures in the dependency input.

F-structure is one of the two core levels of syntactic representation in LFG (Bresnan, 2001). Independently of surface order, it encodes abstract syntactic functions that constitute predicate-argument structure and other dependency relations such as subject, predicate and adjunct, as well as further semantic information such as the semantic type of an adjunct (e.g. directional). Normally f-structure is represented as a recursive attribute-value matrix, which is isomorphic to a directed graph representation. Figure 5 depicts an example target f-structure. As mentioned earlier, these deeper-level dependency relations can be used to construct logical forms, as in the approaches of van Genabith and Crouch (1996), who construct underspecified discourse representations (UDRSs), and Spreyer and Frank (2005), who have robust minimal recursion semantics (RMRS) as their target representation. We therefore think that f-structures are a suitable target representation for automatic syntactic analysis in a larger pipeline mapping text to interpretation.

In this paper, we report on the conversion from dependency structures to f-structure. Firstly, we evaluate the f-structure conversion in isolation, starting from hand-corrected dependencies based on the TüBa-D/Z treebank and the conversion of Versley (2005). Secondly, we start from tokenized text to evaluate the combined process of automatic parsing (using the parser of Foth and Menzel (2006)) and f-structure conversion. As a test set, we randomly selected 100 sentences from TüBa-D/Z, which we annotated using a scheme very close to that of the TiGer Dependency Bank (Forst et al., 2004). In the next section, we sketch dependency analysis, the underlying theory of our input representations, and introduce four different representations of coordination.
We also describe Weighted Constraint Dependency Grammar (WCDG), the dependency parsing formalism that we use in our experiments. Section 3 characterises the conversion of dependencies to f-structures. Our evaluation is presented in section 4, and finally, section 5 summarises our results and gives an overview of problems remaining to be solved.
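To make the conversion idea concrete, a toy version of the dependency-to-f-structure mapping might look like this (the labels and the dict-based representation are invented for illustration; the real conversion crucially also handles coordination, apposition and control, which this sketch ignores):

def deps_to_fstructure(tokens, deps, label_map=None):
    """Build nested attribute-value matrices (as dicts) from dependency
    triples (head_index, label, dependent_index). Toy sketch only; the
    label mapping below is hypothetical, not the paper's scheme."""
    if label_map is None:
        label_map = {"SUBJ": "SUBJ", "OBJA": "OBJ", "ADV": "ADJUNCT"}
    fs = {i: {"PRED": tok} for i, tok in enumerate(tokens)}
    roots = set(range(len(tokens)))
    for head, label, dep in deps:
        attr = label_map.get(label, label)
        fs[head].setdefault(attr, []).append(fs[dep])  # nesting via shared dicts
        roots.discard(dep)
    return [fs[i] for i in sorted(roots)]

# e.g. deps_to_fstructure(["Peter", "schläft"], [(1, "SUBJ", 0)])
# -> [{'PRED': 'schläft', 'SUBJ': [{'PRED': 'Peter'}]}]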
This paper compares two approaches to computational semantics, namely semantic unification in Lexicalized Tree Adjoining Grammars (LTAG) and Lexical Resource Semantics (LRS) in HPSG. There are striking similarities between the frameworks that make them comparable in many respects. We will exemplify the differences and similarities by looking at several phenomena. We will show, first of all, that many intuitions about the mechanisms of semantic computations can be implemented in similar ways in both frameworks. Secondly, we will identify some aspects in which the frameworks intrinsically differ due to more general differences between the approaches to formal grammar adopted by LTAG and HPSG.
Relative quantifier scope in German depends, in contrast to English, very much on word order. The scope possibilities of a quantifier are determined by its surface position, its base position and the type of the quantifier. In this paper we propose a multicomponent analysis for German quantifiers that computes the scope of a quantifier, in particular its minimal nuclear scope, depending on the syntactic configuration in which it occurs.