530 Physics
We calculate ratios of higher-order susceptibilities quantifying fluctuations in the number of net-protons and in the net-electric charge using the Hadron Resonance Gas (HRG) model. We take into account the effect of resonance decays, the kinematic acceptance cuts in rapidity, pseudo-rapidity and transverse momentum used in the experimental analysis, as well as a randomization of the isospin of nucleons in the hadronic phase. By comparing these results to the latest experimental data from the STAR Collaboration, we determine the freeze-out conditions from net-electric charge and net-proton distributions and discuss their consistency.
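For orientation, a minimal sketch of the standard definitions behind these observables (assumed here; normalizations may differ from the paper): the susceptibilities are derivatives of the scaled pressure with respect to the corresponding chemical potential, and their ratios cancel the unknown freeze-out volume.

```latex
% Generalized susceptibilities of a conserved charge (here baryon number B)
% and the volume-independent ratios matched to measured moments:
\chi_n^{B} = \frac{\partial^n \,(p/T^4)}{\partial\,(\mu_B/T)^n}\,, \qquad
\frac{\chi_2^{B}}{\chi_1^{B}} = \frac{\sigma^2}{M}\,, \qquad
\frac{\chi_3^{B}}{\chi_2^{B}} = S\sigma\,, \qquad
\frac{\chi_4^{B}}{\chi_2^{B}} = \kappa\sigma^2
```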
PURPOSE: The purpose of this work is to analyze whether the Monte Carlo codes PENH, FLUKA, and GEANT4/TOPAS are suitable for calculating absorbed doses and fQ/fQ0 ratios in therapeutic high-energy photon and proton beams.
METHODS: We used PENH, FLUKA, GEANT4/TOPAS, and EGSnrc to calculate the absorbed dose to water in a reference water cavity and the absorbed dose to air in two air cavities representative of a plane-parallel and a cylindrical ionization chamber in a 1.25 MeV photon beam and a 150 MeV proton beam; EGSnrc was used only for the photon-beam calculations. The physics and transport settings in each code were adjusted to simulate the particle transport in as much detail as reasonably possible. From these absorbed doses, fQ0 factors, fQ factors, and fQ/fQ0 ratios (the basis of Monte Carlo calculated beam quality correction factors kQ,Q0) were calculated and compared between the codes. Additionally, we calculated the spectra of primary particles and secondary electrons in the reference water cavity, as well as the integrated depth-dose curve of 150 MeV protons in water.
RESULTS: The absorbed doses agreed within 1.4% or better between the individual codes for both the photon and the proton simulations. The fQ0 and fQ factors agreed within 0.5% or better between the individual codes for both beam qualities. The resulting fQ/fQ0 ratios for 150 MeV protons agreed within 0.7% or better. For the 1.25 MeV photon beam, the spectra of photons and secondary electrons agreed almost perfectly. For the 150 MeV proton simulation, we observed differences in the spectra of secondary protons, whereas the spectra of primary protons and low-energy delta electrons agreed almost perfectly. The first 2 mm of the entrance channel of the 150 MeV proton Bragg curve agreed almost perfectly, while at greater depths the differences in the integrated dose were up to 1.5%.
CONCLUSION: PENH, FLUKA, and GEANT4/TOPAS are capable of calculating beam quality correction factors in proton beams. The differences in the fQ0 and fQ factors between the codes are at most 0.5%; the differences in the fQ/fQ0 ratios are at most 0.7%.
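As a reading aid, the relation between the reported quantities in standard Monte Carlo dosimetry notation (a sketch; the paper's exact conventions may differ):

```latex
% f_Q is the ratio of the absorbed dose to water and the absorbed dose to
% the air cavity in beam quality Q; the beam quality correction factor
% follows from the double ratio, up to W_air values assumed constant here.
f_Q = \frac{D_\mathrm{w}(Q)}{D_\mathrm{air}(Q)}\,, \qquad
k_{Q,Q_0} \simeq \frac{f_Q}{f_{Q_0}}
```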
The differences between contemporary Monte Carlo generators of high-energy hadronic interactions are discussed, and their impact on the interpretation of experimental data on ultra-high-energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospects for a coherent interpretation of the data in terms of the UHECR composition are investigated.
We present a model for the autonomous and simultaneous learning of active binocular and motion vision. The model is based on the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. Through sparse coding, the model learns to efficiently encode the incoming visual signals generated by an object moving in 3-D. Simultaneously, it learns to produce eye movements that further improve the efficiency of the sensory coding. This learning is driven by an intrinsic motivation to maximize the system's coding efficiency. We test our approach in simulations of the humanoid robot iCub. The model demonstrates self-calibration of accurate object fixation and tracking of moving objects. Our results show that the model keeps improving until it hits physical constraints, such as camera or motor resolution, or limits on its internal coding capacity. Furthermore, we show that the emerging sensory tuning properties are in line with results on disparity, motion, and motion-in-depth tuning in the visual cortex of mammals. The model suggests that vergence and tracking eye movements can be viewed as fundamentally sharing the same objective, maximizing the coding efficiency of the visual system, and that they can be learned and calibrated jointly through AEC.
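A minimal, hypothetical sketch of the coding-efficiency signal described above (the function names, the matching-pursuit encoder, and the reward shaping are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sparse_code(patch, dictionary, k=10):
    """Greedy matching-pursuit encoding of an image patch.

    Assumes the dictionary columns (basis functions) are unit-norm.
    Returns the sparse coefficients and the squared reconstruction
    error, which measures how well the current code captures the input.
    """
    residual = patch.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(k):
        scores = dictionary.T @ residual      # correlation with each basis
        j = int(np.argmax(np.abs(scores)))    # best-matching basis function
        coeffs[j] += scores[j]
        residual -= scores[j] * dictionary[:, j]
    return coeffs, float(np.sum(residual ** 2))

def intrinsic_reward(patch, dictionary):
    """Hypothetical intrinsic-motivation signal: eye movements that make
    the binocular/motion input easier to encode earn higher reward."""
    _, reconstruction_error = sparse_code(patch, dictionary)
    return -reconstruction_error
```

In an AEC-style loop, the same reconstruction error would also drive the dictionary update, so perception and action are trained against a single objective.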
The steep rise of parton densities in the limit of small parton momentum fraction x poses a challenge for describing the observed energy dependence of the total and inelastic proton-proton cross sections σ_pp^tot/inel: for a realistic parton spatial distribution, one obtains too strong an increase of σ_pp^tot/inel in the limit of very high energies. We discuss various mechanisms that allow one to tame such a rise, paying special attention to the role of parton-parton correlations. In addition, we investigate the potential impact on model predictions for σ_pp^tot of dynamical higher-twist corrections to the parton-production process.
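The textbook mechanism implied by taming such a rise is eikonal unitarization (a standard formula, not quoted from the paper): however fast the parton-level opacity grows, the integrand stays bounded by one.

```latex
% Eikonal form of the inelastic cross section; \Omega(s,b) is the
% opacity at impact parameter b, built from the parton densities.
\sigma_{pp}^{\mathrm{inel}}(s) = \int d^2 b \,\Big[\, 1 - e^{-\Omega(s,b)} \Big]
```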
We consider a simple model of modified gravity interacting with a single scalar field ϕ with a weakly coupled exponential potential within the framework of the non-Riemannian spacetime volume-form formalism. The specific form of the action is fixed by the requirement of invariance under global Weyl-scale symmetry. Upon passing to the physical Einstein frame, we show how the non-Riemannian volume elements create a second canonical scalar field u and dynamically generate a non-trivial two-scalar-field potential Ueff(u,ϕ) with two remarkable features: (i) it possesses a large flat region for large u, describing slow-roll inflation; (ii) it has a stable low-lying minimum with respect to (u,ϕ), representing the dark energy density in the "late universe". We study the corresponding two-field slow-roll inflation and show that the pertinent slow-roll inflationary curve ϕ = ϕ(u) in the two-field space (u,ϕ) has a very small curvature, i.e., ϕ changes very little during the inflationary evolution of u on the flat region of Ueff(u,ϕ). Explicit expressions are found for the slow-roll parameters, which differ from those of the single-field inflationary counterpart. Numerical solutions for the scalar spectral index and the tensor-to-scalar ratio are derived and found to agree with the observational data.
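For orientation, the familiar single-field slow-roll parameters and observables are sketched below (the paper derives modified two-field analogues; these standard forms are given only as a reference point):

```latex
% Single-field slow-roll parameters for a potential U and the
% leading-order observables (M_Pl is the reduced Planck mass):
\epsilon = \frac{M_{\mathrm{Pl}}^2}{2}\left(\frac{U'}{U}\right)^{\!2}, \qquad
\eta = M_{\mathrm{Pl}}^2\,\frac{U''}{U}, \qquad
n_s \simeq 1 - 6\epsilon + 2\eta, \qquad
r \simeq 16\epsilon
```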
We present a study of the elliptic flow and R_AA of D and D̄ mesons in Au+Au collisions at FAIR energies. We propagate the charm quarks and the D mesons following previously applied Langevin dynamics. The evolution of the background medium is modeled in two different ways: (I) we use the UrQMD hydrodynamics + Boltzmann transport hybrid approach including a phase transition to a QGP, and (II) the coarse-graining approach, also employing an equation of state with a QGP. The latter approach has previously been used very successfully to describe di-lepton data at various energies. This comparison allows us to explore the effects of partial thermalization and viscous effects on the charm propagation. We explore the centrality dependence of the collisions, the variation of the decoupling temperature, and various hadronization parameters. We find that the initial partonic phase is responsible for the creation of most of the D/D̄ meson elliptic flow and that the subsequent hadronic interactions seem to play only a minor role. This indicates that the D/D̄ meson elliptic flow is a smoking gun for a partonic phase at FAIR energies. However, the results suggest that the magnitude and the details of the elliptic flow depend strongly on the dynamics of the medium and on the hadronization procedure, which is related to the medium properties as well. Therefore, even at FAIR energies the charm quark may constitute a very useful tool to probe the quark–gluon plasma and investigate its physics.
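The Langevin propagation referred to above has the generic form sketched here (schematic; the drag and diffusion coefficients are model inputs, not values from the paper):

```latex
% Momentum update of a heavy quark over a time step \Delta t in the
% medium rest frame; \rho_i are unit-variance Gaussian random numbers.
% Near equilibrium, drag and diffusion obey D \approx \Gamma\, E\, T.
\Delta p_i = -\Gamma(p)\, p_i\, \Delta t
           + \sqrt{2 D(p)\, \Delta t}\;\rho_i, \qquad
\langle \rho_i \rho_j \rangle = \delta_{ij}
```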
The coordinate and momentum space configurations of the net baryon number in heavy ion collisions that undergo spinodal decomposition, due to a first-order phase transition, are investigated using state-of-the-art machine learning methods. Coordinate space clumping, which appears in the spinodal decomposition, leaves strong characteristic imprints on the spatial net-density distribution in nearly every event, which can be detected by modern machine learning techniques. On the other hand, the corresponding features in the momentum distributions cannot clearly be detected, by the same machine learning methods, in individual events. Only a small subset of events can be systematically differentiated if only the momentum space information is available. This is due to the strong similarity of the two event classes, with and without spinodal decomposition. In such scenarios, conventional event-averaged observables like the baryon number cumulants signal a spinodal non-equilibrium phase transition. Indeed, the third-order cumulant, the skewness, does exhibit a peak at the beam energy (E_lab = 3–4 A GeV) where the transient hot and dense system created in the heavy ion collision reaches the first-order phase transition.
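For reference, the standard cumulant-based definition of the skewness mentioned above (assumed here, not quoted from the paper):

```latex
% Cumulants \kappa_n of the event-by-event net-baryon distribution;
% the skewness peaks where the system crosses the first-order transition.
S = \frac{\kappa_3}{\kappa_2^{3/2}}, \qquad \sigma^2 = \kappa_2
```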
The effect of a non-zero strangeness chemical potential on the strong interaction phase diagram has been studied within the framework of the SU(3) quark-hadron chiral parity-doublet model. Both the nuclear liquid-gas and the chiral/deconfinement phase transitions are modified. The first-order line of the chiral phase transition is observed to vanish completely, with the entire phase boundary becoming a crossover. These changes in the nature of the phase transitions are expected to modify various susceptibilities, the effects of which might be detectable in particle-number distributions resulting from moderate-temperature and high-density heavy-ion collision experiments.
Measurements of the π±, K±, and proton double-differential yields emitted from the surface of the 90-cm-long carbon target (T2K replica) were performed for incoming 31 GeV/c protons with the NA61/SHINE spectrometer at the CERN SPS, using data collected during the 2010 run. The double-differential π± yields were measured with increased precision compared to the previously published NA61/SHINE results, while the K± and proton yields were obtained for the first time. A strategy for dealing with the dependence of the results on the incoming proton beam profile is proposed. The purpose of these measurements is to significantly reduce the (anti)neutrino flux uncertainty in the T2K long-baseline neutrino experiment by constraining the production of the (anti)neutrino ancestors coming from the T2K target.