In particle collider experiments, elementary particle interactions with large momentum transfer produce quarks and gluons (known as partons) whose evolution is governed by the strong force, as described by the theory of quantum chromodynamics (QCD) [1]. These partons subsequently emit further partons in a process that can be described as a parton shower [2], which culminates in the formation of detectable hadrons. Studying the pattern of the parton shower is one of the key experimental tools for testing QCD. This pattern is expected to depend on the mass of the initiating parton, through a phenomenon known as the dead-cone effect, which predicts a suppression of the gluon spectrum emitted by a heavy quark of mass mQ and energy E, within a cone of angular size mQ/E around the emitter [3]. Previously, a direct observation of the dead-cone effect in QCD had not been possible, owing to the challenge of reconstructing the cascading quarks and gluons from the experimentally accessible hadrons. We report the direct observation of the QCD dead cone by using new iterative declustering techniques [4,5] to reconstruct the parton shower of charm quarks. This result confirms a fundamental feature of QCD. Furthermore, the measurement of a dead-cone angle constitutes a direct experimental observation of the non-zero mass of the charm quark, which is a fundamental constant in the standard model of particle physics.
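The angular size of the dead cone, mQ/E, can be estimated directly from the abstract's formula. A minimal sketch, using the approximate PDG charm-quark mass of about 1.27 GeV (the energies chosen below are illustrative, not values from the measurement):

```python
# Back-of-the-envelope estimate of the dead-cone angle theta ~ m_Q / E
# for a charm quark. Illustrative only; not part of the measurement itself.

M_CHARM = 1.27  # charm-quark mass in GeV (approximate PDG value)

def dead_cone_angle(mass_gev: float, energy_gev: float) -> float:
    """Angular size m_Q/E (in radians) of the suppressed gluon-emission cone."""
    return mass_gev / energy_gev

for energy in (5.0, 10.0, 20.0):
    theta = dead_cone_angle(M_CHARM, energy)
    print(f"E = {energy:5.1f} GeV  ->  theta_dead ~ {theta:.3f} rad")
```

The cone shrinks as the quark's energy grows, which is why the suppression is most visible for low-energy emitters.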
Various static analyses of functional programming languages that permit infinite data structures make use of set constants like Top, Inf, and Bot, denoting all terms, all lists not eventually ending in Nil, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest fixpoint semantics over the set of all (possibly infinite) computable trees, where all term constructors are non-strict. This internal report proves decidability, in particular DEXPTIME-completeness, of inclusion of co-inductively defined sets by using algorithms and results from tree automata and set constraints, and contains detailed proofs. The test for set inclusion is required by certain strictness analysis algorithms in lazy functional programming languages and could also be the basis for further set-based analyses.
Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf, and Bot denoting all expressions, all lists without a last Nil as tail, and all non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest fixpoint semantics. This paper proves decidability, in particular EXPTIME-completeness, of subset relationship of co-inductively defined sets by using algorithms and results from tree automata. This shows decidability of the test for set inclusion, which is required by certain strictness analysis algorithms in lazy functional programming languages.
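The coinductive flavour of the inclusion test can be illustrated with a toy check. The sketch below is far simpler than the papers' actual decision procedure (which reduces inclusion to tree-automata problems and handles unions in full generality); the constant definitions and the hypothesis-assuming check are invented for the example, but they show how a greatest-fixpoint reading lets recursive inclusions like Inf ⊆ Top succeed:

```python
# Toy coinductive inclusion check for recursively defined set constants.
# Each constant maps to a list of alternatives (constructor, argument constants).
# Much simplified and invented for illustration; the real algorithm works on
# tree automata and treats union and Bot properly.

DEFS = {
    "Top": [("Nil", []), ("Cons", ["Top", "Top"])],  # all (possibly infinite) lists
    "Inf": [("Cons", ["Top", "Inf"])],               # lists never ending in Nil
}

def included(a: str, b: str, assumed=None) -> bool:
    """Greatest-fixpoint check that every tree in `a` is also in `b`."""
    if assumed is None:
        assumed = set()
    if (a, b) in assumed:  # coinductive hypothesis: assume the pair holds
        return True
    assumed = assumed | {(a, b)}
    for ctor, args_a in DEFS[a]:
        # find an alternative of b with the same constructor and included arguments
        matches = [args_b for c, args_b in DEFS[b] if c == ctor]
        if not any(all(included(x, y, assumed) for x, y in zip(args_a, args_b))
                   for args_b in matches):
            return False
    return True

print(included("Inf", "Top"))  # every Nil-free infinite list is a list -> True
print(included("Top", "Inf"))  # Nil is in Top but not in Inf -> False
```

The key point is the treatment of already-visited pairs: under a least-fixpoint reading the recursive call on (Inf, Top) would never bottom out, whereas the greatest-fixpoint reading admits it as a valid coinductive hypothesis.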
The implementation of the strictness analysis carried out in the course of this work constitutes an efficient approximation of abstract reduction with path analysis. The method used is presented systematically by means of the G#-machine, a new machine model based on the G-machine. The close similarity to the G-machine, which we were able to preserve in our implementation, shows how naturally the method mirrors reduction in functional programming languages. Although the implementation emphasizes traceability over efficiency, it demonstrates that abstract reduction with path analysis is entirely practical even in a functional implementation, and that it finds strictness information that implementations of other methods do not. There is room for optimization, among other places in the program parts executed for every simulated G#-machine instruction. By a conservative estimate, the runtime could be halved with reasonable effort.
This paper proves correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective method for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case, constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a non-termination checker. The proof of its correctness, and hence of Nöcker's strictness analysis, is based mainly on an exact analysis of the lengths of normal order reduction sequences; the main measure is the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in functional programming languages, and into strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
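The core idea that SAL checks for non-termination can be illustrated with the standard characterization of strictness: a function f is strict in an argument if applying f to a non-terminating expression (Bot) is itself non-terminating. The mini-language below is invented for the example and is vastly simpler than abstract reduction in SAL, but it shows Bot propagating through forcing operations:

```python
# Toy illustration of strictness via an abstract Bot constant:
# f is strict in its argument iff f(Bot) reduces to Bot.
# The abstract operators below are invented for this sketch.

BOT = "Bot"  # abstract constant: definitely non-terminating

def abs_if(cond, then_val, else_val):
    """Abstract conditional: forcing a Bot condition cannot terminate."""
    if cond == BOT:
        return BOT
    return then_val if cond else else_val

def abs_add(x, y):
    """Addition forces both operands, so Bot propagates through it."""
    if x == BOT or y == BOT:
        return BOT
    return x + y

# f x = x + 1            -- strict: always forces x
def f(x):
    return abs_add(x, 1)

# g x = if True then 0 else x  -- not strict: never forces x
def g(x):
    return abs_if(True, 0, x)

print("f strict:", f(BOT) == BOT)  # True
print("g strict:", g(BOT) == BOT)  # False
```

A compiler can use such a verdict to evaluate strict arguments eagerly without changing observable behaviour; the paper's contribution is proving that the full analysis, including cycle detection over the set constants, is sound with respect to the contextual semantics.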
This paper proves correctness of Nöcker's method of strictness analysis, implemented for Clean, which is an effective way for strictness analysis in lazy functional languages based on their operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which are the main strength of Nöcker's strictness analysis. We reformulate Nöcker's strictness analysis algorithm in a higher-order lambda-calculus with case, constructors, letrec, and a nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a small-step semantics and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: Currently, the proof for correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the conjecture, if no abstract constants are used in the analysis.
We present a study of the inclusive charged-particle transverse momentum (pT) spectra as a function of charged-particle multiplicity density at mid-pseudorapidity, dNch/dη, in pp collisions at √s = 5.02 and 13 TeV covering the kinematic range |η| < 0.8 and 0.15 < pT < 20 GeV/c. The results are presented for events with at least one charged particle in |η| < 1 (INEL>0). The pT spectra are reported for two multiplicity estimators covering different pseudorapidity regions. The pT spectra normalized to that for INEL>0 show little energy dependence. Moreover, the high-pT yields of charged particles increase faster than the charged-particle multiplicity density. The average pT as a function of multiplicity and transverse spherocity is reported for pp collisions at √s = 13 TeV. For low- (high-) spherocity events, corresponding to jet-like (isotropic) events, the average pT is higher (smaller) than that measured in INEL>0 pp collisions. Within uncertainties, the functional form of ⟨pT⟩(Nch) is not affected by the spherocity selection. While EPOS LHC gives a good description of many features of data, PYTHIA overestimates the average pT in jet-like events.
J/ψ production as a function of charged-particle multiplicity in p-Pb collisions at √sNN = 8.16 TeV
(2020)
Inclusive J/ψ yields and average transverse momenta in p-Pb collisions at a center-of-mass energy per nucleon pair √sNN = 8.16 TeV are measured as a function of the charged-particle pseudorapidity density with ALICE. The J/ψ mesons are reconstructed at forward (2.03 < ycms < 3.53) and backward (−4.46 < ycms < −2.96) center-of-mass rapidity in their dimuon decay channel while the charged-particle pseudorapidity density is measured around midrapidity. The J/ψ yields at forward and backward rapidity normalized to their respective average values increase with the normalized charged-particle pseudorapidity density, the former showing a weaker increase than the latter. The normalized average transverse momenta at forward and backward rapidity manifest a steady increase from low to high charged-particle pseudorapidity density with a saturation beyond the average value.
The energy deposited at very forward rapidities (very forward energy) is a powerful tool for characterising proton fragmentation in pp and p-Pb collisions. The correlation of very forward energy with particle production at midrapidity provides direct insights into the initial stages and the subsequent evolution of the collision. Furthermore, the correlation with the production of particles with large transverse momenta at midrapidity provides information complementary to the measurements of the underlying event, which are usually interpreted in the framework of models implementing centrality-dependent multiple parton interactions.
Results about very forward energy, measured by the ALICE zero degree calorimeters (ZDCs), and its dependence on the activity measured at midrapidity in pp collisions at √s = 13 TeV and in p-Pb collisions at √sNN = 8.16 TeV are discussed. The measurements performed in pp collisions are compared with the expectations of three hadronic interaction event generators: PYTHIA 6 (Perugia 2011 tune), PYTHIA 8 (Monash tune), and EPOS LHC. These results provide new constraints on the validity of models in describing the beam remnants at very forward rapidities, where perturbative QCD cannot be used.