In this work, an improved buncher system for radio-frequency accelerators operating at low and medium ion currents was developed. The methodology developed here made it possible to design an effective, simplified buncher system for injection into RF accelerators such as RFQs, cyclotrons, DTLs, etc., which achieves small output emittances and substantial beam transmission. To match a mono-energetic, continuous beam from an ion source for injection into a radio-frequency accelerator structure, an energy modulation is required, which subsequently (along a drift section) leads to longitudinal focusing of the beam. A sawtooth waveform provides the ideal energy modulation, owing to the linear dependence between the particle energy and the relative phase. This is, however, not technologically feasible, since particle accelerators require voltage levels in the kV to 100 kV range. Instead, the same goal can be pursued with spatially separated sinusoidal excitations at the fundamental frequency and at higher harmonics.
Therefore, an improved harmonic buncher, the so-called "Double Drift Harmonic Buncher" (DDHB), was developed in this work, which offers numerous advantages. A small longitudinal emittance as well as financial considerations favor this approach. The main elements of a DDHB system are two cavities separated by a drift length L1, where the first resonator is operated at the fundamental frequency with a synchronous phase of −90° and an applied voltage V1, and the second resonator at the second harmonic frequency with a synchronous phase of +90° and an applied voltage V2. Finally, a second drift L2 at the end of the array is required for longitudinal beam focusing at the entrance of the main accelerator. Such a setup thus achieves the desired goal of high capture efficiency and small longitudinal emittance by tuning the four design parameters V1, L1, V2 and L2.
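The role of the second harmonic can be illustrated with a few lines of Python (a self-contained sketch, not a calculation from this thesis; the amplitudes are simply the first two Fourier coefficients of the sawtooth, not the DDHB design voltages V1 and V2):

```python
import numpy as np

# Illustrative sketch: a fundamental plus a second-harmonic term approximates
# the ideal sawtooth energy modulation better than the fundamental alone.
# Amplitudes are the first two Fourier coefficients of the sawtooth, not
# design values from the thesis.

phase = np.linspace(-np.pi, np.pi, 2001)
sawtooth = phase / np.pi                                        # ideal linear modulation

one_harmonic = (2 / np.pi) * np.sin(phase)                      # fundamental only
two_harmonics = one_harmonic - (1 / np.pi) * np.sin(2 * phase)  # + 2nd harmonic

err1 = np.sqrt(np.mean((one_harmonic - sawtooth) ** 2))         # RMS deviation
err2 = np.sqrt(np.mean((two_harmonics - sawtooth) ** 2))
print(err1, err2)  # the two-harmonic sum tracks the sawtooth more closely
```

In an actual DDHB the second-harmonic kick is applied in a separate cavity after the drift L1, with the relative phasing fixed by the ±90° synchronous phases quoted above rather than by adding the waveforms in a single gap.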
Understanding the focusing of an initially DC beam, including space-charge forces, is one of the essential topics of beam physics. Many commercial codes offer simulation capabilities in this field. However, their approaches usually remain hidden from the user, or important details needed to model the concept at hand accurately are missing. Therefore, one main task of this work was to develop a dedicated multi-particle tracking beam-dynamics code (BCDC), in which the space-charge effect during the bunching process, starting from a DC beam, is computed. The BCDC code contains elementary routines such as drift, accelerating gap, and magnetic lens for transverse beam focusing, as well as space-charge calculations that take into account the effects of the nearest-neighbor bunches (NNB). The space-charge algorithm in BCDC is based on a direct Coulomb grid–grid interaction and on electric-field calculations obtained by localizing the charge density on a Cartesian grid. To achieve accuracy, the field calculations are extended longitudinally and symmetrically around the central bucket (of size βλ), so that the simulation domain is three times as large. The central particle distribution is then copied into the neighboring buckets after each step. Subsequently, the resulting fields in the main grid are recomputed by superimposing the electric fields in the main grid with those from the neighboring regions. Without this method, a continuous beam that is defined in the simulation only within a single cell of length βλ would, for example, lead to a net space-charge field component Ez at both edges of the cell. Such an unphysical result could be largely eliminated by applying the NNB technique.
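The edge-field argument behind the NNB technique can be illustrated with a toy 1D model (a hypothetical sketch, not the actual BCDC grid solver; geometry, units, and charge numbers are arbitrary):

```python
import numpy as np

# Toy 1D illustration of the NNB idea: for a uniform "DC" beam represented
# only inside one cell of length L, the longitudinal Coulomb field Ez at the
# cell edge is large and unphysical; copying the distribution into the two
# neighboring cells largely cancels it.  Units are dropped throughout.

L = 1.0                                 # cell length (stand-in for beta*lambda)
z = np.linspace(0.05, 0.95, 19)         # uniform line of unit charges in the cell

def ez_at(point, charges):
    """Sum of signed 1/r^2 longitudinal fields from point charges."""
    d = point - charges
    return np.sum(np.sign(d) / d**2)

edge = 1.0                              # right edge of the central cell
ez_single = ez_at(edge, z)              # central cell only: large edge field
neighbors = np.concatenate([z - L, z, z + L])   # copy bunch into adjacent cells
ez_nnb = ez_at(edge, neighbors)         # neighbors nearly cancel the edge field

print(abs(ez_nnb), abs(ez_single))
```

The right-hand neighbor mirrors the central charges across the edge, so their contributions cancel pairwise; what remains is only the small field of the far left-hand copy, which is the physically expected near-zero Ez for a continuous beam.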
In addition to the NNB feature, BCDC offers another special capability, namely the so-called space-charge compensation (SCC). Due to ionization of the residual gas, partial space-charge compensation occurs along the low-energy beam transport, with different percentages at and behind the buncher system. One of the main goals of the DDHB concept is to develop it for high-current beam applications. Here, partial space-charge compensation allows the design to reach higher current levels in practice. This makes the BCDC program a powerful tool for simulations in future high-current projects. Proof-of-principle designs were developed in this work.
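How partial compensation enters such estimates can be sketched as follows (an illustrative convention, not necessarily the one implemented in BCDC: the generalized perveance is scaled by (1 − f), with f the assumed compensation degree; all numbers are made up):

```python
import math

# Hedged sketch: partial space-charge compensation (SCC) by residual-gas
# ionization is commonly folded into estimates by scaling the generalized
# perveance K = 2*I / (I0 * beta^3 * gamma^3) with (1 - f).
# All parameter values below are illustrative placeholders.

I0_proton = 3.13e7           # characteristic current 4*pi*eps0*m*c^3/q for protons [A]
beam_current = 10e-3         # 10 mA beam, illustrative
beta, gamma = 0.02, 1.0      # low-energy transport, illustrative
f_scc = 0.8                  # assumed 80% compensation behind the buncher

K_bare = 2 * beam_current / (I0_proton * beta**3 * gamma**3)
K_eff = (1.0 - f_scc) * K_bare
print(K_bare, K_eff)         # effective space-charge defocusing is 5x weaker
```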
We study vacuum masses of charmonia and the charm-quark diffusion coefficient in the quark-gluon plasma based on the spectral representation for meson correlators. To calculate the correlators, we solve the quark gap equation and the inhomogeneous Bethe–Salpeter equation in the rainbow-ladder approximation. It is found that the ground-state masses of charmonia in the pseudoscalar, scalar, and vector channels can be well described. For 1.5Tc<T<3.0Tc, the value of the diffusion coefficient D is comparable with that obtained by lattice QCD and experiments: 3.4<2πTD<5.9. Relating the diffusion coefficient with the ratio of shear viscosity to entropy density η/s of the quark-gluon plasma, we obtain values in the range 0.09<η/s<0.16.
We study anisotropic fluid dynamics derived from the Boltzmann equation based on a particular choice for the anisotropic distribution function within a boost-invariant expansion of the fluid in one spatial dimension. In order to close the conservation equations we need to choose an additional moment of the Boltzmann equation. We discuss the influence of this choice of closure on the time evolution of fluid-dynamical variables and search for the best agreement to the solution of the Boltzmann equation in the relaxation-time approximation.
A deep convolutional neural network (CNN) is developed to study symmetry-energy (Esym(ρ)) effects by learning the mapping between the symmetry energy and the two-dimensional (transverse momentum and rapidity) distributions of protons and neutrons in heavy-ion collisions. Supervised training is performed with a labeled data set from ultrarelativistic quantum molecular dynamics (UrQMD) model simulations. It is found that, using proton spectra on an event-by-event basis as input, the accuracy for classifying the soft and stiff Esym(ρ) is about 60% due to large event-by-event fluctuations, while using event-summed proton spectra as input, the classification accuracy increases to 98%. The accuracies for the 5-label (5 different Esym(ρ)) classification task are about 58% and 72% using proton and neutron spectra, respectively. For the regression task, the mean absolute errors (MAE), which measure the average magnitude of the absolute differences between the predicted and actual L (the slope parameter of Esym(ρ)), are about 20.4 and 14.8 MeV using proton and neutron spectra, respectively. Fingerprints of the density-dependent nuclear symmetry energy on the transverse momentum and rapidity distributions of protons and neutrons can thus be identified by the convolutional neural network.
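The basic ingredient of such a network, a convolution sweeping over a two-dimensional (pT, rapidity) histogram, can be sketched in a few lines (a minimal illustration, not the architecture of the paper; the input histogram and kernel are random placeholders):

```python
import numpy as np

# Minimal sketch of a CNN building block: one 'valid' 2D convolution plus a
# ReLU applied to a toy (pT, rapidity) histogram.  Input and kernel values
# are random placeholders, not data or weights from the paper.

rng = np.random.default_rng(0)
spectrum = rng.random((24, 24))        # toy proton (pT, rapidity) histogram

def conv2d_valid(x, k):
    """Plain 'valid' 2D cross-correlation, as in a CNN forward pass."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

kernel = rng.standard_normal((3, 3))   # one untrained 3x3 filter
feature_map = np.maximum(conv2d_valid(spectrum, kernel), 0.0)  # ReLU
print(feature_map.shape)               # (22, 22)
```

Stacking several such filter layers, followed by fully connected layers, yields the classifier/regressor described in the abstract; in practice a framework such as PyTorch or TensorFlow would replace this hand-rolled loop.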
The production of strange pentaquark states (e.g., Θ baryons and Ξ−− states) in hadronic interactions within a Gribov–Regge approach is explored. In this approach the Θ+(1540) and the Ξ are produced by disintegration of remnants formed by the exchange of pomerons between the two protons. We predict the rapidity and transverse momentum distributions as well as the 4π multiplicity of the Θ+, Ξ−−, Ξ−, Ξ0 and Ξ+ for √s = 17 GeV (SPS) and 200 GeV (RHIC). For both energies more than 10⁻³ Θ+ and more than 10⁻⁵ Ξ per pp event should be observed by the present experiments.
Within a dynamical quark recombination model, we explore various proposed event-by-event observables sensitive to the microscopic structure of the QCD matter created at RHIC energies. Charge ratio fluctuations, charge transfer fluctuations and baryon-strangeness correlations are computed from a sample of central Au + Au events at the highest available RHIC energy (√sNN = 200 GeV). We find that for all explored observables, the calculations yield the values predicted for a quark–gluon plasma only at early times of the evolution, whereas the final state approaches the values expected for a hadronic gas. We argue that the recombination-like hadronization process itself is responsible for the disappearance of the predicted deconfinement signals. This might explain why no fluctuation signatures of the transition between quark and hadronic matter have ever been observed in the experimental data.
The ALICE detector is ideally suited to study the production of anti- and hyper-matter due to its excellent particle identification capabilities. The measurement of the anti-⁴He nucleus in Pb–Pb collisions at √sNN = 2.76 TeV is presented. We further show the performance of the reconstruction of the (anti-)hypertriton in its decay to ³He + π− (anti-³He + π+). In addition, two searches have been performed, one for the H-dibaryon → Λpπ− and one for the Λn bound state (anti-Λn → d̄ π+). No signals are observed for these exotic states and upper limits have been determined.
The production of light neutral mesons in AA collisions probes the physics of the Quark-Gluon Plasma (QGP), which is formed in heavy-ion collisions at the LHC. More specifically, the centrality-dependent neutral meson spectra in AA collisions, compared to the corresponding spectra in minimum-bias pp collisions scaled by the number of hard collisions, provide information on the energy loss of partons traversing the QGP. The measurement allows testing the predictions of theoretical model calculations with high precision. In addition, the decays of the π0 and η mesons are the dominant backgrounds for all direct photon measurements. Therefore, pushing the limits of the precision of neutral meson production measurements is key to learning about the temperature and space-time evolution of the QGP.
In the ALICE experiment neutral mesons can be detected via their decay into two photons. The latter can be reconstructed using the two calorimeters EMCal and PHOS or via conversions in the detector material. The excellent momentum resolution of the conversion photons down to very low pT and the high reconstruction efficiency and triggering capability of calorimeters at high pT, allow us to measure the pT dependent invariant yield of light neutral mesons over a wide kinematic range.
Combining state-of-the-art reconstruction techniques with the high statistics delivered by the LHC in Run 2 gives us the opportunity to enhance the precision of our measurements. In these proceedings, new preliminary ALICE Run 2 results for neutral meson production in pp and Pb–Pb collisions at LHC energies are presented.
The state-of-the-art pattern recognition method in machine learning (a deep convolutional neural network) is used to identify the equation of state (EoS) employed in relativistic hydrodynamic simulations of heavy-ion collisions. High-level correlations of particle spectra in transverse momentum and azimuthal angle learned by the network act as an effective EoS-meter in deciphering the nature of the phase transition in QCD. The EoS-meter is model independent and insensitive to other simulation inputs, including the initial conditions and shear viscosity of the hydrodynamic simulations. Through this study we demonstrate that there is a traceable encoding of the dynamical information from the phase structure that survives the evolution and exists in the final snapshot of heavy-ion collisions, and that one can exclusively and effectively decode this information from the highly complex final output with machine learning when traditional methods fail. Besides the deep neural network, the performance of traditional machine learning classifiers is also provided.
In these proceedings, we review our recent work using a deep convolutional neural network (CNN) to identify the nature of the QCD transition in a hybrid modeling of heavy-ion collisions. Within this hybrid model, a viscous hydrodynamic model is coupled with a hadronic cascade "after-burner". As a binary classification setup, we employ two different types of equations of state (EoS) of the hot medium in the hydrodynamic evolution. The resulting final-state pion spectra in the transverse momentum and azimuthal angle plane are fed to the neural network as input data in order to distinguish the different EoS. To probe the effects of fluctuations in the event-by-event spectra, we explore different scenarios for the input data and compare them systematically. We observe a clear hierarchy in the predictive power when the network is fed with event-by-event, cascade-coarse-grained, and event-fine-averaged spectra. The carefully trained neural network can extract high-level features from pion spectra to identify the nature of the QCD transition in a realistic simulation scenario.
The upcoming high energy experiments at the LHC are one of the most outstanding efforts toward a better understanding of nature, and they are associated with great hopes in the physics community. But there is also some public fear that the conjectured production of mini black holes might lead to a dangerous chain reaction. In this Letter we summarize the most straightforward arguments necessary to rule out such doomsday scenarios.
Within the ADD model, we elaborate on an idea by Vacavant and Hinchliffe [J. Phys. G 27 (2001) 1839] and show quantitatively how to determine the fundamental scale of TeV-gravity and the number of compactified extra dimensions from data at the LHC. We demonstrate that the ADD model leads to strong correlations between the missing ET carried by gravitons at different center-of-mass energies. This correlation puts strong constraints on this model of extra dimensions if probed at √s = 5.5 TeV and √s = 14 TeV at the LHC.
In high-energy nuclear collisions, the heavy-quark potential at finite temperature controls quarkonium suppression. Including the relaxation of the medium induced by the relative velocity between the quarkonia and the deconfined expanding matter, the Debye screening is reduced and quarkonium dissociation takes place at a higher temperature. As a consequence of the velocity-dependent dissociation temperature, quarkonium suppression at high transverse momentum is significantly weakened in high-energy nuclear collisions at RHIC and the LHC.
We apply a coupled transport-hydrodynamics model to discuss the production of multi-strange meta-stable objects in Pb + Pb reactions at the FAIR facility. In addition to making predictions for the yields of these particles, we are able to calculate particle-dependent rapidity and momentum distributions. We argue that the FAIR energy regime is the optimal place to search for multi-strange baryonic objects (due to the high baryon density, favoring a distillation of strangeness). Additionally, we show results for strangeness and baryon density fluctuations. Using the UrQMD model we calculate the strangeness separation in phase space, which might lead to an enhanced production of MEMOs compared to models that assume global thermalization.
We present first data on centrality dependent K+, K− and ϕ production in Au+Au collisions at a kinetic beam energy of 1.23A GeV measured with HADES. We observe no significant increase of the K+/K− and ϕ/K− multiplicity ratios with centrality of the collision. The measured ϕ/K− ratio is found to be larger than results at higher energies. The significant ϕ feed-down contribution to the K− yield substantially softens the measured transverse mass spectrum of K−, explaining its lower observed effective temperature in comparison to the one of K+.
We investigate the modification of the pion self-energy at finite temperature due to its interaction with a low-density, isospin-symmetric nuclear medium embedded in a constant magnetic background. To one loop, for fixed temperature and density, we find that the pion effective mass increases with the magnetic field. For the π−, interestingly, this happens solely due to the trivial Landau quantization shift ∼|eB|, since the real part of the self-energy is negative in this case. In a scenario in which other charged particle species are present and undergo an analogous trivial shift, the relevant behavior of the effective mass might be determined essentially by the real part of the self-energy. In this case, we find that the pion mass decreases by ∼10% for a magnetic field |eB|∼mπ2, which favors pion condensation at high density and low temperatures.
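The "trivial" shift quoted above follows directly from the Landau-level spectrum of a charged scalar, E² = pz² + m² + (2n+1)|eB|, whose lowest level (n = 0) gives an effective mass √(m² + |eB|). A quick numeric check with illustrative values:

```python
import math

# Lowest-Landau-level effective mass of a charged scalar (e.g. the pi-):
# E^2 = pz^2 + m^2 + (2n+1)|eB|  ->  m_eff = sqrt(m^2 + |eB|) for n = 0.
# Values are illustrative, not fitted to the paper.

m_pi = 139.57                       # charged pion mass [MeV]
eB = m_pi**2                        # field strength |eB| ~ m_pi^2, as in the text

m_eff_trivial = math.sqrt(m_pi**2 + eB)
print(m_eff_trivial / m_pi)         # sqrt(2) ~ 1.414: the trivial shift alone
                                    # raises the mass by ~41%
```

This makes the abstract's point concrete: the trivial |eB| shift is large, so whether the effective mass rises or falls relative to other charged species hinges on the sign and size of the real part of the self-energy on top of it.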
We present a simultaneous calculation of heavy single-Λ hypernuclei and of compact stars containing a hypernuclear core within a relativistic density functional theory based on a Lagrangian that includes the hyperon octet and the lightest isoscalar and isovector mesons, which couple to baryons with density-dependent couplings. The corresponding density functional allows for SU(6) symmetry breaking and mixing in the isoscalar sector, whereby the departures of the σ–Λ and σ–Σ couplings from the values implied by the SU(3) symmetric model are used to adjust the theory to laboratory and astronomical data. We fix the σ–Λ coupling using data on single-Λ hypernuclei and derive an upper bound on the σ–Σ coupling from the requirement that the lower bound on the maximum mass of a compact star is 2M⊙.
The Δ-isobar degrees of freedom are included in the covariant density functional (CDF) theory to study the equation of state (EoS) and composition of dense matter in compact stars. In addition to Δ's we include the full octet of baryons, which allows us to study the interplay between the onset of delta isobars and hyperonic degrees of freedom. Using both the Hartree and Hartree–Fock approximation we find that Δ's appear already at densities slightly above the saturation density of nuclear matter for a wide range of the meson–Δ coupling constants. This delays the appearance of hyperons and significantly affects the gross properties of compact stars. Specifically, Δ's soften the EoS at low densities but stiffen it at high densities. This softening reduces the radius of a canonical 1.4M⊙ star by up to 2 km for a reasonably attractive Δ potential in matter, while the stiffening results in larger maximum masses of compact stars. We conclude that the hypernuclear CDF parametrizations that satisfy the 2M⊙ maximum mass constraint remain valid when Δ isobars are included, with the important consequence that the resulting stellar radii are shifted toward lower values, which is in agreement with the analysis of neutron star radii.
We review the results from the event-by-event next-to-leading order perturbative QCD + saturation + viscous hydrodynamics (EbyE NLO EKRT) model. With a simultaneous analysis of LHC and RHIC bulk observables we systematically constrain the QCD matter shear viscosity-to-entropy ratio η/s(T) and test the initial-state computation. In particular, we study the centrality dependences of hadronic multiplicities, pT spectra, flow coefficients, relative elliptic-flow fluctuations, and various flow correlations in 2.76 and 5.02 TeV Pb+Pb collisions at the LHC and 200 GeV Au+Au collisions at RHIC. Overall, our results match the LHC and RHIC measurements remarkably well, and the predictions for the 5.02 TeV LHC run are in excellent agreement with the data. We probe the applicability of hydrodynamics via the average Knudsen numbers in the space-time evolution of the system and the viscous corrections on the freeze-out surface.