The Large Hadron Collider (LHC) is the biggest and most powerful particle accelerator in the world, designed to collide two proton beams with a particle momentum of 7 TeV/c each. The stored energy of 362 MJ in each beam is sufficient to melt 500 kg of copper or to evaporate about 300 litres of water. An accidental release of even a small fraction of the beam energy can cause severe damage to accelerator equipment. Reliable machine protection systems are necessary to safely operate the accelerator complex. To design a machine protection system, it is essential to know the damage potential of the stored beam and the consequences in case of a failure. One catastrophic failure would be the loss of the entire beam in the aperture due to a problem with the beam dumping system.
This thesis presents the simulation studies, the results of a benchmarking experiment, and a detailed target investigation for this failure case. In the experiment, solid copper cylinders were irradiated with the 440 GeV proton beam delivered by the Super Proton Synchrotron (SPS) at the High Radiation to Materials (HiRadMat) facility at CERN. The experiment confirmed the existence of the so-called hydrodynamic tunneling phenomenon for the first time. Detailed numerical simulations of particle-matter interaction with FLUKA, and with the two-dimensional hydrodynamic code BIG2, were carried out. Excellent agreement was found between the experimental and the simulation results, which validates the predictions for the 7 TeV beam of the LHC. The hydrodynamic tunneling effect is of considerable importance for the design of machine protection systems for accelerators with high stored beam energy. In addition, this thesis presents the first studies of the damage potential with beam parameters of the Future Circular Collider (FCC).
To detect beam losses due to fast failures, it is essential to have fast beam instrumentation. Diamond-based particle detectors are able to detect beam losses on a nanosecond time scale. Specially designed diamond detectors were used in the experiment mentioned above. Their efficiency and response have been studied for the first time over five orders of magnitude in bunch intensity with electrons at the Beam Test Facility (BTF) at INFN Frascati, Italy. The results of these measurements are discussed in this thesis. Furthermore, an overview of the applications of diamond-based particle detectors in damage experiments and for LHC operation is presented.
The elliptic flow of heavy-flavour decay electrons is measured at midrapidity |eta| < 0.8 in three centrality classes (0-10%, 10-20% and 20-40%) of Pb-Pb collisions at sqrt(sNN) = 2.76 TeV with ALICE at the LHC. The collective motion of the particles inside the medium created in heavy-ion collisions can be analyzed by a Fourier decomposition of the azimuthally anisotropic particle distribution with respect to the event plane. Elliptic flow is the component of the collective motion characterized by the second harmonic moment of this decomposition. It is a direct consequence of the initial geometry of the collision, which is translated into a particle-number anisotropy by the strong interactions inside the medium. The amount of elliptic flow of low-momentum heavy quarks is related to their thermalization with the medium, while high-momentum heavy quarks provide a way to assess the path-length dependence of the energy loss induced by the interaction with the medium.
The heavy-quark elliptic flow is measured using a three-step procedure.
First, the v2 coefficient of the inclusive electrons is measured using the event-plane and scalar-product methods. The electron background from light flavours and direct photons is then simulated by calculating the decay kinematics of the electron sources, which are initialised with their respective measured spectra. The final result of this work emerges by subtracting the background from the inclusive measurement. A significant elliptic flow is observed after this subtraction. Its value decreases from low to intermediate pT and from semi-central to central collisions.
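The subtraction step can be illustrated with a short numeric sketch: the inclusive v2 is the yield-weighted average of the signal and background components, which can be inverted for the signal flow. All numbers below are invented for illustration, not measured values.

```python
# Toy illustration of extracting a signal v2 by background subtraction.

def subtract_v2(v2_incl, n_incl, v2_bkg, n_bkg):
    """Invert v2_incl = (n_sig*v2_sig + n_bkg*v2_bkg) / n_incl."""
    n_sig = n_incl - n_bkg
    return (n_incl * v2_incl - n_bkg * v2_bkg) / n_sig

# Build an inclusive v2 from assumed signal and background components ...
v2_sig_true, n_sig = 0.08, 700.0
v2_bkg, n_bkg = 0.12, 300.0
n_incl = n_sig + n_bkg
v2_incl = (n_sig * v2_sig_true + n_bkg * v2_bkg) / n_incl

# ... and check that the subtraction recovers the signal flow.
v2_sig = subtract_v2(v2_incl, n_incl, v2_bkg, n_bkg)
print(round(v2_sig, 6))  # 0.08
```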
The results are described by model calculations with significant elastic interactions of the heavy quarks with the expanding strongly-interacting medium.
Study of hard-core repulsive interactions in a hadronic gas from a comparison with lattice QCD
(2016)
We study the influence of hard-core repulsive interactions within the Hadron-Resonance Gas model in comparison to first-principles calculations performed on the lattice. We check the effect of a bag-like parametrization of the particle eigenvolume on flavor correlators, looking for an extension of the agreement with lattice simulations up to higher temperatures, as already pointed out in an analysis of hadron yields measured by the ALICE experiment. Hints for a flavor-dependent eigenvolume are found.
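The qualitative effect of a hard-core eigenvolume can be sketched with a one-species van der Waals-type excluded-volume correction, where the pressure follows self-consistently from p = p_id(T, mu - v*p) in Boltzmann approximation. The numerical values below (temperature, ideal pressure, hard-core radius) are illustrative and not the parameters of the analysis:

```python
import math

def ev_pressure(p_ideal, v, T, tol=1e-12):
    """Fixed-point solution of p = p_id * exp(-v * p / T), the Boltzmann
    excluded-volume correction for a single species at mu = 0.
    p in GeV/fm^3, T in GeV, v in fm^3."""
    p = p_ideal
    for _ in range(1000):
        p_new = p_ideal * math.exp(-v * p / T)
        if abs(p_new - p) < tol:
            break
        p = p_new
    return p

T = 0.150                        # temperature, GeV (assumed)
p_id = 0.05                      # ideal-gas pressure, GeV/fm^3 (assumed)
r = 0.3                          # hard-core radius, fm (assumed)
v = 16.0 * math.pi * r**3 / 3.0  # bag-like eigenvolume, fm^3

p_ev = ev_pressure(p_id, v, T)
print(p_ev < p_id)  # True: the repulsion suppresses the pressure
```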
Modelling glueballs
(2016)
Glueballs are predicted in various theoretical approaches to QCD (most notably lattice QCD), but their experimental verification is still missing. In the low-energy sector some promising candidates for the scalar glueball exist, and some (less clear) candidates for the tensor and pseudoscalar glueballs have also been proposed. Yet, for heavier gluonic states there is much work to be done from both the experimental and the theoretical points of view. In these proceedings, we briefly review the current status of glueball research and discuss future developments.
In this thesis, the production of charged kaons and Φ mesons in Au+Au collisions at sqrt(sNN) = 2.4 GeV is studied. At this energy, all particles carrying open or hidden strangeness are produced below their respective free nucleon-nucleon thresholds, with the corresponding so-called excess energies: sqrt(s)_exc(K+) = -0.15 GeV, sqrt(s)_exc(K-) = -0.46 GeV and sqrt(s)_exc(Φ) = -0.49 GeV. As a consequence, the production cross sections are very sensitive to medium effects like momentum distributions, two- or multi-step collisions, and modifications of the in-medium spectral distribution of the produced states [1]. K+ and K- mesons exhibit different properties in baryon-dominated matter, since only K- can be resonantly absorbed by nucleons. Although strangeness-exchange reactions have been proposed to be the dominant channel for K- production in the analyzed energy regime, the production yield and kinematic distributions in smaller systems could also be explained by statistical hadronization model fits to the measured particle yields, including a canonical strangeness suppression radius RC and taking the Φ feed-down to kaons into account [2, 3]. For the first time in central Au+Au collisions at such low energies, it is possible to reconstruct K- and Φ mesons and to perform a multi-differential analysis. In principle, this should be the ideal environment for strangeness-exchange reactions to occur, as the particles are produced deeply sub-threshold in a large and long-lived system. Therefore, it is the ultimate test to differentiate between the different sources of K- production in heavy-ion collisions.
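The quoted excess energies follow directly from the NN centre-of-mass energy of a fixed-target beam of 1.23 GeV kinetic energy per nucleon and the free NN production thresholds. A rough cross-check with rounded PDG masses (small deviations from the quoted values come from rounding):

```python
import math

m_N, m_L, m_K, m_phi = 0.938, 1.116, 0.494, 1.019  # masses in GeV (rounded)
E_kin = 1.23  # beam kinetic energy per nucleon, GeV

# Fixed-target NN centre-of-mass energy: s = 2 m_N (E_kin + 2 m_N)
sqrt_s = math.sqrt(2.0 * m_N * (E_kin + 2.0 * m_N))
print(round(sqrt_s, 2))  # 2.41 (GeV)

# Excess energy = sqrt(s) minus the free NN production threshold
exc_Kp  = sqrt_s - (m_N + m_L + m_K)     # NN -> N Lambda K+
exc_Km  = sqrt_s - (2 * m_N + 2 * m_K)   # NN -> NN K+ K-
exc_phi = sqrt_s - (2 * m_N + m_phi)     # NN -> NN phi
print([round(x, 2) for x in (exc_Kp, exc_Km, exc_phi)])  # [-0.13, -0.45, -0.48]
```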
In total, 7.3×10^9 of the 40% most central Au(1.23 GeV per nucleon)+Au collisions are analyzed. The data were recorded with the High Acceptance DiElectron Spectrometer (HADES), located at the Helmholtzzentrum für Schwerionenforschung (GSI), in April/May 2012. A substantially improved reconstruction method has been employed to reconstruct the hadrons with high purity in a wide phase-space region.
The estimated particle multiplicities follow a clear hierarchy in the excess energy: 41.5 ± 2.1|sys protons per unit of rapidity at mid-rapidity, 11.1 ± 0.6|sys ± 0.4|extrapol π-, (3.01 ± 0.03|stat ± 0.15|sys ± 0.30|extrapol)×10^-2 K+, (1.94 ± 0.09|stat ± 0.10|sys ± 0.10|extrapol)×10^-4 K- and (0.99 ± 0.24|stat ± 0.10|sys ± 0.05|extrapol)×10^-4 Φ per event. The multiplicities of the strange hadrons increase more than linearly with the mean number of participating nucleons ⟨Apart⟩, supporting the assumption that the energy necessary to overcome the elementary production threshold is accumulated in multi-particle interactions. Transport models predict such an increase, but overestimate the measured particle yields and are not able to describe the kinematic distributions of K+ mesons perfectly. The best description is given by the IQMD model with a density-dependent kaon-nucleon potential of 40 MeV at nuclear ground-state density.
The K-/K+ multiplicity ratio is constant as a function of centrality and, with a value of (6.45 ± 0.77)×10^-3, follows the trend of increasing with beam energy indicated by previous experiments [4]. The effective temperature of K-, Teff(K-) = (84 ± 6) MeV, is found to be systematically lower than that of K+, Teff(K+) = (104 ± 1) MeV, which has also been observed by other experiments.
The Φ/K- ratio, with a value of 0.52 ± 0.16, is higher than the values obtained at higher center-of-mass energies and in smaller systems. This behavior is predicted by a tuned version of the UrQMD transport model [5] when including higher-mass baryonic resonances which can decay into Φ mesons, and by statistical hadronization models when suppressing open strangeness canonically. The measured ratio is constant as a function of centrality and implies, with a branching ratio of 48.9%, that ~25% of all measured K- originate from Φ feed-down decays. A two-component PLUTO simulation, consisting of a purely thermal contribution and a K- contribution originating from Φ decays, can fully explain the observed lower effective temperature in comparison to K+ and the shape of the measured K- rapidity distribution. As a result, we find no indication for strangeness-exchange reactions being the dominant mechanism for K- production in the SIS18 energy regime, once the contribution from Φ feed-down decays is taken into account.
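The quoted ~25% feed-down fraction follows directly from the measured Φ/K- ratio and the Φ → K+K- branching ratio:

```python
phi_over_Km = 0.52   # measured phi/K- multiplicity ratio
br_KK = 0.489        # branching ratio phi -> K+ K-

# Each phi -> K+K- decay contributes exactly one K-, so the fraction of
# all measured K- that stem from phi decays is the product of the two.
feeddown = phi_over_Km * br_KK
print(round(feeddown, 2))  # 0.25
```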
The hadron yields for the 20% most central collisions can be described by a statistical hadronization model fit with a chemical freeze-out temperature of Tchem = (68 ± 2) MeV and a baryochemical potential of μB = (883 ± 25) MeV, which is higher than expected from previous parameterizations. The analysis of the transverse-mass spectra of protons indicates a kinetic freeze-out temperature of Tkin = (70 ± 4) MeV and a radial flow velocity of βr = 0.43 ± 0.01, which is in agreement with the parameters obtained from the linear dependence of the effective temperatures on the particle mass, Tkin = (71.5 ± 4.2) MeV and βr = 0.28 ± 0.09.
The CBM experiment (FAIR/GSI, Darmstadt, Germany) will focus on the measurement of rare probes at interaction rates up to 10 MHz with a data flow of up to 1 TB/s. It requires a novel read-out and data-acquisition concept with self-triggered electronics and free-streaming data. In this case, resolving different collisions is a non-trivial task, and event building must be performed online in software. That requires full online event reconstruction and selection not only in space but also in time, so-called 4D event building and selection. This is the task of the First-Level Event Selection (FLES).
The FLES reconstruction and selection package consists of several modules: track finding, track fitting, short-lived particle finding, event building and event selection. The Cellular Automaton (CA) track finder algorithm was adapted towards time-based reconstruction. In this article, we describe in detail the modifications made to the algorithm, as well as the performance of the developed time-based CA approach.
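In a free-streaming readout the detector delivers one time-ordered hit stream, and "events" must be recovered by clustering in time. A deliberately simplified sketch of such time-based event building (gap clustering with an assumed time resolution; this is an illustration of the idea, not the actual FLES implementation):

```python
def build_events(hit_times_ns, max_gap_ns=50.0):
    """Group a hit stream into event candidates: a new event candidate
    starts whenever the gap to the previous hit exceeds max_gap_ns."""
    events = []
    current = []
    for t in sorted(hit_times_ns):
        if current and t - current[-1] > max_gap_ns:
            events.append(current)
            current = []
        current.append(t)
    if current:
        events.append(current)
    return events

# Hits from three collisions at roughly 0, 1000 and 1080 ns; the last two
# lie close together but are still separated by more than the allowed gap.
stream = [2.0, 5.0, 11.0, 1000.0, 1004.0, 1009.0, 1080.0, 1083.0]
events = build_events(stream)
print(len(events))  # 3
```

In the real 4D case the clustering runs on reconstructed track segments rather than raw hits, since collisions can overlap in time at 10 MHz interaction rates.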
For the transport of high-intensity hadron beams in low-energy beam lines of linear accelerators, the compensation of space charge forces by the accumulation of particles of opposite charge is an important effect, reducing the required focusing strength and potentially the emittance growth due to space charge forces. In this thesis, space charge compensation was studied by including the secondary particles in particle-in-cell simulations.
For this purpose, a new electrostatic particle-in-cell code named bender was developed. The software was tested using known self-consistent solutions for an electron plasma confined in an external potential as well as for a KV-distributed beam in a periodic focusing lattice. For the simulation of compensation, models for residual-gas ionisation by proton and electron impact were implemented.
The compensation process was studied for a 120 keV, 100 mA proton beam transported through a short drift section. Various features in the particle distributions were identified which cannot be explained by a uniform reduction of the electric field of the beam. These were tied to the presence of thermal electrons confined within the beam potential. Using the Poisson-Boltzmann equation, their distribution could be reproduced and their influence on the beam studied for a wider range of parameters. However, the observed temperatures show a significant numerical influence. The hypothesis was formed that stochastic heating, present in particle-in-cell simulations, is the mechanism leading to the formation of the observed (partial) thermal equilibrium.
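The scale of the effect can be illustrated with back-of-the-envelope numbers: for a 120 keV, 100 mA proton beam, the unneutralized space-charge potential well between axis and beam edge is of order a hundred volts, and thermal electrons at an assumed temperature of a few eV arrange themselves in it according to the Boltzmann relation. These numbers are illustrative, not results of the thesis:

```python
import math

e = 1.602e-19      # elementary charge, C
m_p = 1.673e-27    # proton mass, kg
eps0 = 8.854e-12   # vacuum permittivity, F/m

E_kin = 120e3 * e  # 120 keV beam energy, J
I = 0.1            # 100 mA beam current, A

v_beam = math.sqrt(2.0 * E_kin / m_p)  # non-relativistic beam velocity
lam = I / v_beam                       # line charge density, C/m

# Axis-to-edge potential difference of a uniform cylindrical beam:
# delta_phi = lambda / (4 pi eps0)
delta_phi = lam / (4.0 * math.pi * eps0)
print(round(delta_phi))  # ~187 V potential well depth

# Boltzmann-distributed electrons at an assumed temperature T_e = 5 eV:
T_e = 5.0
for frac in (0.0, 0.5, 1.0):  # potential drop as a fraction of the depth
    n_ratio = math.exp(-frac * delta_phi / T_e)
    print(frac, n_ratio)      # density falls steeply away from the axis
```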
For the low-energy beam transport line of the Frankfurt neutron source FRANZ, bender was used to predict the pulse shaping in the novel ExB chopper system. The code was also used for the design and the study of an electron lens for the Integrable Optics Test Accelerator at Fermi National Accelerator Laboratory. Aberrations due to guiding center drifts and the strong electric field of the electron beam as well as the current limits in such a system were investigated.
A realistic estimate of radiation damage is of crucial importance in radiation protection and for radiation therapy. The primary radiation damage to DNA is nowadays calculated with Monte Carlo codes. As input parameters, these codes require fragmentation cross sections of a wide variety of biomolecular systems that are as accurate as possible. Within the scope of the present work, an experiment was set up that enables the determination of fragmentation cross sections of biomolecules. Before the start of the experiment, the individual components of the set-up were characterized with respect to the properties that can influence the accuracy of the measurement results. The results of these experiments are used as input data for the calculation of primary radiation-induced damage to DNA by means of Monte Carlo codes.
A particular challenge was the preparation of a supersonic gas jet for biomolecular substances. For the preparation, the target substances first have to be transferred into the gas phase. In the case of biomolecules, this transfer involves technical problems owing to their low vapour pressures at room temperature and their chemical reactivity. These problems were solved by a special design of the preparation device, which allows the sample substances to be introduced directly into the mixing chamber through which the carrier gas flows. Several factors determine the accuracy of the measured fragmentation cross sections. Besides the motion profile of the supersonic gas jet, the kinetic energies of the fragment ions and the ion-optical properties of the time-of-flight spectrometer, the geometry of the detection zone decisively influences the accuracy of the experiment. The position and extent of the visible volume are determined not only by the overlap region between the electron beam and the supersonic gas jet, but also depend on the kinetic energy of the fragments. To determine it, the trajectories of the fragments were therefore also simulated. In the experiments at the PTB apparatus, the freely selectable time difference between triggering an electron pulse and extracting the fragment ions is an important measurement parameter. Its influence on the measurement results was also investigated, along with the detection efficiency of the ion detector used. The calibration of the time-of-flight spectra, i.e. the conversion of the time-of-flight spectra into mass spectra, was performed using the known time-of-flight spectra of noble gases and hydrogen.
After the characterization of the influencing factors and the calibration of the time-of-flight spectra, the energy-dependent fragmentation cross sections for electron impact were measured for several organic molecules, among them model molecules for the DNA building blocks. The time-of-flight spectra of THF were recorded with the PTB apparatus for several electron kinetic energies as a function of the time difference between triggering the electron pulse and starting the analysis. Measurements of pyrimidine were carried out both at the PTB apparatus and with COLTRIMS. The results obtained with COLTRIMS provide important additional information about the fragmentation processes. COLTRIMS enables the measurement of temporal correlations between the fragment ions and thus deeper insight into the reaction channels involved in the formation of the fragments. The advantage of the PTB apparatus is that the relative occurrence probabilities of all fragment ions can be determined more accurately.
The Standard Model is one of the greatest successes of modern theoretical physics. It describes the physics of elementary particles by means of three forces: the electromagnetic, the weak and the strong interactions. The electromagnetic and the weak interactions are rather well understood in comparison to the strong interaction.
The latter is as fundamental as the others; it is responsible for the formation of all hadrons, which are classified into mesons and baryons. A well-known example of the former is the pion, and of the latter the proton and the neutron, which form the nucleus of every atom. This fundamental force is believed to be described by the theory of Quantum Chromodynamics (QCD). According to this theory, hadrons are not elementary particles but are composed of quarks and gluons. The latter are the vector particles of the force and thus bosons of spin 1, while the former constitute the matter and are fermions of spin 1/2. To describe the interaction, a new quantum number had to be introduced: the color charge, which exists in three different types (blue, green and red). The name has not been chosen arbitrarily, as states built from three quarks of different colors are colorless, in the same way that mixing the three primary colors leads to white. However, no colored structure has ever been observed experimentally. The quarks and the gluons seem to be confined in colorless hadrons. This property of QCD is called confinement and results from a large coupling constant at low energy (or large distance). At high energy (or small distance), the perturbative analysis of QCD shows that the coupling constant is small and quarks and gluons are almost free. This property is called asymptotic freedom. The possibility for QCD to describe both behaviors is one of its amazing characteristics. However, both phenomena are not fully understood, and one needs a method to study both the perturbative and the confining regime.
The only known method which fulfills the above criteria is Lattice QCD and, more generally, Lattice Quantum Field Theory (LQFT). It consists of a discretization of spacetime and a formulation of QCD on a four-dimensional Euclidean spacetime grid of spacing a. In this way, the theory is naturally regularized and mathematically well-defined. On the other hand, the path-integral formalism allows the theory to be treated as a statistical-mechanics system, which can be evaluated via a Markov chain Monte-Carlo algorithm. This method was first suggested by Wilson in 1974 [1], and shortly after, Creutz performed the first numerical simulations of Yang-Mills theory [2] using a heat-bath Monte-Carlo algorithm. The method is extremely demanding in computational power. In its early days it was criticized, as the only feasible simulations involved non-physical settings such as extremely large quark masses, large lattice spacings a and no dynamical quarks. With the progress of computers and the advent of supercomputers, the studies have come close to the physical point. But one still needs to deal with discrete spacetime and finite volume. Several techniques have been developed to estimate the infinite-volume and continuum limits. The smaller the lattice spacing and the larger the volume, the better the extrapolation to the continuum and infinite-volume limits. The simulations are still very expensive, and at the moment a typical box length is L ≈ 4 fm with a ≈ 0.08 fm. However, it has been realized in simulations of pure Yang-Mills theory and other lower-dimensional models that the topology freezes at small a [3]. This was also observed recently in full QCD simulations [4,5].
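The Monte-Carlo evaluation of a Euclidean path integral can be illustrated in its simplest setting: a one-dimensional harmonic oscillator on a periodic time lattice, updated with a Metropolis algorithm. This is a toy analogue of the gauge-theory simulations discussed above, not lattice QCD itself; all parameters are chosen for illustration (m = omega = 1 in lattice units):

```python
import math, random

random.seed(7)

N, a = 20, 0.5        # lattice sites and lattice spacing
x = [0.0] * N         # the path, with periodic boundary conditions
step = 1.0            # proposal width

def action_local(i, xi):
    """Part of the Euclidean action S = sum (x_{i+1}-x_i)^2/(2a) + a x_i^2/2
    that involves site i (both neighbouring kinetic terms plus potential)."""
    xp, xm = x[(i + 1) % N], x[(i - 1) % N]
    return (xi - xp) ** 2 / (2 * a) + (xi - xm) ** 2 / (2 * a) + a * xi ** 2 / 2

accepted, trials = 0, 0
x2_sum, n_meas = 0.0, 0
for sweep in range(2000):
    for i in range(N):
        old, new = x[i], x[i] + random.uniform(-step, step)
        dS = action_local(i, new) - action_local(i, old)
        trials += 1
        if dS < 0 or random.random() < math.exp(-dS):  # Metropolis test
            x[i] = new
            accepted += 1
    if sweep >= 500:  # discard thermalization sweeps
        x2_sum += sum(xi * xi for xi in x) / N
        n_meas += 1

acceptance = accepted / trials
x2 = x2_sum / n_meas
print(acceptance)  # typically around 0.5-0.7 for this step size
print(x2)          # <x^2>, close to the continuum value 0.5
```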
The typical lattice spacing for which this problem appears in QCD is a ≈ 0.05 fm, but this value depends on the quark mass used and on the algorithm. The freezing of topology leads to results which differ from physical results. Solving this issue is important for the future of LQCD [6]. Recently, several methods to overcome the problem have been suggested; one of the most popular is the use of open boundary conditions [7], but this promising method still has its own issues, mainly the breaking of translation invariance.
In this thesis, we study some features of the quantum chromodynamics (QCD) phase diagram at purely imaginary chemical potential using lattice techniques. This is one of the possible methodologies to get insights about the situation at finite density, where the sign problem prevents direct investigations from first principles.
We focus, in particular, on the Roberge-Weiss plane, where the phase structure with two degenerate flavours is studied both in the light and in the heavy quark mass limit. On the lattice, any result is affected by cut-off effects, and so are the positions of the two tricritical points m_{tric}^{1,2} separating the second-order intermediate-mass region from the first-order triple regions at light and heavy masses. Therefore, changing the lattice spacing 'a', the values of m_{tric}^1 and m_{tric}^2 will change. In order to find their position in the continuum limit – i.e. for 'a' going to 0 – they have to be located on finer and finer lattices. Typically, in lattice QCD (LQCD) simulations, the temperature T is tuned through the bare coupling β, on which 'a' depends, while keeping Nt fixed. Hence, it is common to refer to how fine the lattice is implicitly, by just mentioning its temporal extent.
Using both Wilson and staggered fermions, we simulate Nf=2 QCD on Nt=6 lattices, varying the bare quark mass from the chiral (m_{u,d} going to 0) to the quenched (m_{u,d} going to infinity) limit. For each quark mass, a thorough finite-size scaling analysis is carried out, taking advantage of two different but consistent methods. In this way we identify the order of the phase transition and then locate the position of the tricritical points. In order to convert our measurements to physical units, we set the scale by measuring the lattice spacing as well as the pion mass corresponding to the bare quark mass used. This allows a comparison between the different discretisations, giving a first idea of how serious cut-off effects are.
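A standard diagnostic in such finite-size scaling analyses is the Binder cumulant B4 = ⟨m⁴⟩/⟨m²⟩² of the order parameter: a single-phase (Gaussian) distribution gives B4 → 3, while a double-peaked distribution at a first-order transition gives B4 → 1. A minimal sketch with synthetic samples (not simulation data):

```python
import random

random.seed(3)

def binder(samples):
    """Binder cumulant B4 = <m^4> / <m^2>^2 of an order-parameter sample."""
    m2 = sum(m * m for m in samples) / len(samples)
    m4 = sum(m ** 4 for m in samples) / len(samples)
    return m4 / (m2 * m2)

n = 50000
# Crossover-like: the order parameter fluctuates around a single value.
gaussian = [random.gauss(0.0, 1.0) for _ in range(n)]
# First-order-like: two coexisting phases, double-peaked distribution.
two_peak = [random.choice((-1.0, 1.0)) + 0.1 * random.gauss(0.0, 1.0)
            for _ in range(n)]

print(binder(gaussian))  # ~3 for a Gaussian distribution
print(binder(two_peak))  # ~1 for a double-peaked distribution
```

Locating where B4 crosses the critical value on increasing volumes is one way to identify the order of the transition mentioned in the text.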
To be able to compare two different discretisations, we added an RHMC algorithm with staggered fermions to CL2QCD, a GPU code based on OpenCL which we released in 2014. A considerable part of our work has been invested in improving and optimising CL2QCD, as well as in developing new analysis tools regularly used alongside it. To mention just one, the multiple-histogram method has been implemented in a completely general way, and we took advantage of it to obtain more precise results. Finally, in order to efficiently handle and monitor the hundreds of simulations that are typically run concurrently in finite-temperature LQCD, a completely new Bash library of tools has been developed. We plan to release it as a byproduct of CL2QCD in the near future.
In the 1960s, theoretical concepts prepared the path to nuclear matter with proton and neutron numbers far beyond the nuclei known at that time. The new laboratory GSI was founded for research on reactions with heavy ions, in particular those for the production of the predicted super-heavy nuclei. This contribution presents how the interplay between experiment and theory resulted in a continuous improvement of the experimental set-ups on the one hand, and of the knowledge of the processes during the nuclear reaction and of the properties of the produced nuclei on the other hand. In the course of this work, six new elements, from 107 to 112, were produced and identified. An overview of the present status of experimental results and a comparison with theoretical interpretations is given.
When we published the call for applications for the 2016 academy on the homepages of BURG FÜRSTENECK and the Schülerakademie in autumn 2015, we did not yet suspect that we could (almost) have spared ourselves the additional advertising with the annual flyer that we send out at the turn of the year to the Hessian Gymnasien and comprehensive schools with a Gymnasium branch. To our surprise and great delight, we already counted 58 registrations from pupils by February 2016. The advertising subsequently brought us more than 20 further applications, and put us in the unpleasant situation of having to turn down (too) many pupils or ask them to wait until next year.
Recently, the LIGO and Virgo Collaborations reported the observation of a gravitational-wave signal corresponding to the inspiral and merger of two black holes, resulting in the formation of a final black hole. It was shown that the observations are consistent with the Einstein theory of gravity to high accuracy, limited mainly by the statistical error. The angular momentum and mass of the final black hole were determined with a rather large uncertainty of tens of percent. Here we show that this indeterminacy in the range of the black-hole parameters allows for some non-negligible deformations of the Kerr spacetime leading to the same frequencies of the black-hole ringing. This means that at the current precision of the experiment there remains some room for alternative theories of gravity.
At sufficiently high temperatures and baryon densities, nuclear matter is expected to undergo a transition into the Quark-Gluon Plasma (QGP), consisting of deconfined quarks and gluons and accompanied by chiral symmetry restoration. Signals of these two fundamental characteristics of Quantum Chromodynamics (QCD) can be studied in ultra-relativistic heavy-ion collisions, which produce a relatively large volume of high energy and nucleon densities as existed in the early universe. Dileptons are unique bulk-penetrating probes for this purpose, since they traverse the surrounding medium with negligible interaction and are created throughout the entire evolution of the initially created fireball. A multitude of experiments at SIS18, SPS and RHIC have taken on the challenging task of measuring these rare probes in a heavy-ion environment. NA60's high-quality dimuon measurements have identified the broadened ρ spectral function as the favored scenario to explain the low-mass dilepton excess, and partonic sources as dominant at intermediate dilepton masses.
Enabled by the addition of a TOF detector system in 2010, the first phase of the Beam Energy Scan (BES-I) at RHIC allows STAR to conduct an unprecedented energy-dependent study of dielectron production within a homogeneous experimental environment, and hence close the wide gap in the QCD phase diagram between SPS and top RHIC energies. This thesis concentrates on the understanding of the LMR enhancement regarding its invariant mass, transverse momentum and energy dependence. It studies dielectron production in Au+Au collisions at beam energies of 19.6, 27, 39, and 62.4 GeV with sufficient statistics. In conjunction with the published STAR results at top RHIC energy, this thesis presents results on the first comprehensive energy-dependent study of dielectron production.
This includes invariant-mass and transverse-momentum spectra for the four beam energies, measured in 0-80% minimum-bias Au+Au collisions with high statistics up to 3.5 GeV/c² and 2.2 GeV/c, respectively. Their comparison with cocktail simulations of hadronic sources reveals a sizeable and steadily increasing excess yield in the LMR at all beam energies. The scenario of broadened in-medium ρ spectral functions proves not only to serve well as the dominant underlying source but also to be universal in nature, since it quantitatively and qualitatively explains the LMR enhancements measured over the wide range from SPS to top RHIC energies. It shows that most of the enhancement is governed by interactions of the ρ meson with thermal resonance excitations in the late(r)-stage hot and dense hadronic phase. This conclusion is supported by the energy-dependent measurement of integrated LMR excess yields and enhancement factors. The former do not exhibit a strong dependence on beam energy, as expected from the approximately constant total baryon density above 20 GeV, and the latter show agreement with the CERES measurement at SPS energy. The consistency in excess yields and the agreement with model calculations over the wide RHIC energy regime make a strong case for LMR enhancements on the order of a factor 2-3.
The extent of the results presented here enables a more solid discussion of their relation to chiral symmetry restoration from a theoretical point of view. High-statistics measurements at BES-II hold the promise to confirm these conclusions, along with the LMR enhancement's relation to the total baryon density with decreasing beam energy.
Different approaches are possible when it comes to modeling the brain. Given its biological nature, models can be constructed out of the chemical and biological building blocks known to be at play in the brain, formulating a given mechanism in terms of the basic interactions underlying it. On the other hand, the functions of the brain can be described in a more general or macroscopic way, in terms of desirable goals. These goals may include reducing metabolic costs, being stable or robust, or being computationally efficient. Synaptic plasticity, that is, the study of how the connections between neurons evolve in time, is no exception to this. In the following work we formulate (and study the properties of) synaptic plasticity models, employing two complementary approaches: a top-down approach, deriving a learning rule from a guiding principle for rate-encoding neurons, and a bottom-up approach, where a simple yet biophysical rule for spike-timing-dependent plasticity is constructed.
We begin this thesis with a general overview, in Chapter 1, of the properties of neurons and their connections, clarifying the notation and the jargon of the field. These will be our building blocks and will also determine the constraints we need to respect when formulating our models. We will discuss the present challenges of computational neuroscience, as well as the role of physicists in this line of research.
In Chapters 2 and 3, we develop and study a local online Hebbian self-limiting synaptic plasticity rule, employing the aforementioned top-down approach. First, in Chapter 2 we formulate the stationarity principle of statistical learning in terms of the Fisher information of the output probability distribution with respect to the synaptic weights. To ensure that the learning rules are formulated in terms of information locally available to a synapse, we employ the local synapse extension of the one-dimensional Fisher information. Once the objective function has been defined, we derive an online synaptic plasticity rule via stochastic gradient descent.
In order to test the computational capabilities of a neuron evolving according to this rule (combined with a preexisting intrinsic plasticity rule), we perform a series of numerical experiments, training the neuron with different input distributions.
We observe that, for input distributions closely resembling a multivariate normal distribution, the neuron robustly selects the first principal component of the distribution, while otherwise showing a strong preference for directions of large negative excess kurtosis.
In Chapter 3 we study the robustness of the learning rule derived in Chapter 2 with respect to variations in the neural model's transfer function. In particular, we find an equivalent cubic form of the rule which, given its functional simplicity, permits analytic computation of the attractors (stationary solutions) of the learning procedure as a function of the statistical moments of the input distribution. In this way, we manage to explain the numerical findings of Chapter 2 analytically, and formulate a prediction: if the neuron is selective to non-Gaussian input directions, it should be suitable for applications to independent component analysis. We close this chapter by showing how, indeed, a neuron operating under these rules can learn the independent components in the non-linear bars problem.
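The principal-component selection described here is a hallmark of online, self-limiting Hebbian updates. As a generic illustration only, not the thesis's Fisher-information-derived rule, the following sketch uses Oja's classic rule, which for Gaussian input likewise converges to the first principal component; all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D Gaussian input; the leading eigenvector of C is the
# first principal component the neuron should find.
C = np.array([[3.0, 1.0], [1.0, 1.0]])
L = np.linalg.cholesky(C)

w = rng.normal(size=2)       # synaptic weight vector
eta = 0.01                   # learning rate
for _ in range(20000):
    x = L @ rng.normal(size=2)   # draw an input sample with covariance C
    y = w @ x                    # linear rate neuron
    w += eta * y * (x - y * w)   # Hebbian term y*x plus self-limiting decay

w_hat = w / np.linalg.norm(w)
pc1 = np.linalg.eigh(C)[1][:, -1]    # leading eigenvector of C
print(abs(w_hat @ pc1))              # close to 1: aligned with PC1
```

The self-limiting term -y²w keeps the weight vector bounded (its norm settles near 1) without any explicit normalization step, which is the qualitative property shared with the rule derived in the thesis.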
A simple biophysical model for spike-timing-dependent plasticity (STDP) is developed in Chapter 4. The model is formulated in terms of two decaying traces present in the synapse, namely the fraction of activated NMDA receptors and the calcium concentration, which serve as clocks measuring the times of pre- and postsynaptic spikes. While constructed in terms of the key biological elements thought to be involved in the process, we have kept the functional dependencies of the variables as simple as possible to allow for analytic tractability. Despite its simplicity, the model is able to reproduce several experimental results, including the typical pairwise STDP curve and triplet results, in both hippocampal culture and layer 2/3 cortical neurons. Thanks to the model's functional simplicity, we are able to compute these results analytically, establishing a direct and transparent connection between the model's internal parameters and the qualitative features of the results.
Finally, in order to make a connection to synaptic plasticity for rate-encoding neural models, we train the synapse with uncorrelated Poisson pre- and postsynaptic spike trains and compute the expected synaptic weight change as a function of the frequencies of these spike trains. Interestingly, a Hebbian (in the rate-encoding sense of the word) BCM-like behavior is recovered in this setup for hippocampal neurons, while dominant depression seems unavoidable for parameter configurations reproducing the experimentally observed triplet nonlinearities in layer 2/3 cortical neurons. Potentiation can, however, be recovered in these neurons when correlations between pre- and postsynaptic spikes are present. We end this chapter by discussing the relation to existing experimental results, leaving open questions and predictions for future experiments.
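The thesis's model tracks NMDA-receptor activation and calcium as decaying traces. The pairwise STDP window such trace-based models reproduce has a standard exponential-kernel form; the following sketch illustrates that generic kernel, not the thesis's calcium/NMDA model, with purely hypothetical time constants and amplitudes:

```python
import numpy as np

# Hypothetical parameters: decay times of the two traces and the
# potentiation / depression amplitudes they gate.
tau_pre, tau_post = 20.0, 20.0   # ms
A_plus, A_minus = 0.01, 0.012

def weight_change(dt):
    """Pair-based STDP window; dt = t_post - t_pre in ms."""
    if dt >= 0:   # pre before post: the decayed pre trace gates potentiation
        return A_plus * np.exp(-dt / tau_pre)
    else:         # post before pre: the decayed post trace gates depression
        return -A_minus * np.exp(dt / tau_post)

for dt in (-40, -10, 10, 40):
    print(dt, weight_change(dt))   # sign flips at dt = 0, magnitude decays with |dt|
```

Averaging such a kernel over uncorrelated Poisson pre- and postsynaptic trains is what yields the frequency-dependent expected weight change discussed above.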
A set of summary cards of the models employed, together with listings of the relevant variables and parameters, is presented at the end of the thesis, for easy access and permanent reference for the reader.
Lepton pairs emerging from decays of virtual photons are promising probes of nuclear matter under extreme conditions of temperature and density. These extreme conditions can be reached in heavy-ion collisions at various facilities around the world, with collision energies in the center-of-mass system (√sNN) ranging from a few GeV (SIS) to the TeV scale (LHC). In the energy domain of 1-2 GeV per nucleon (GeV/u), the HADES experiment at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt studies dielectron and strangeness production.
Various reactions, for example collisions of pions, protons, deuterons and heavy ions with nuclei, have been studied since its installation in the year 2001. In this context the so-called DLS puzzle was solved experimentally, by remeasuring C+C at 1 and 2 GeV/u and by careful studies of inclusive pp and pn reactions at 1.25 GeV. With these measurements the so-called reference spectrum was established. Measurements of e+e− production in Ar+KCl showed an enhancement of the dilepton spectrum above the trivial NN background. Theory predicts a strong enhancement of medium radiation with the system size, due to the copious production of fast-decaying baryonic resonances such as ∆ and N∗. The heaviest system measured so far is Au+Au at a kinetic beam energy of 1.23 GeV/u. The precise determination of the medium radiation depends on precise knowledge of the underlying hadronic cocktail, composed of the various sources contributing to the measured dilepton spectrum. In general, the medium radiation needs to be separated from contributions of long-lived particles that decay after the freeze-out of the system. For a more model-independent understanding of the dilepton cocktail, the production cross sections of these particles need to be measured independently. In the relevant energy regime the main contributors are π0 and η Dalitz decays. Both mesons have a dominant decay into two real photons and have been reconstructed successfully in this channel. Since HADES has no electromagnetic calorimeter, the mesons cannot be identified in this decay channel directly. In this thesis the capability of HADES to detect e+e− pairs from conversions of real photons is demonstrated.
To this end, not only the conversion probability but also the resulting efficiencies are presented. Furthermore, the reconstruction method for neutral mesons is explained and the resulting spectra are interpreted. The measurement of neutral pions is compared to the independently measured charged-pion distribution and extrapolated to full phase space. An integrated approach is used to determine the η yield. Both measurements are compared to the world data and to theoretical model calculations. Finally, the measurements are used together with the reconstructed dilepton spectra to determine the amount and the properties of in-medium radiation in the Au+Au system.
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict detailed neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to predict activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also predicts differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
In this thesis we explore the characteristics of strongly interacting matter, described by Quantum Chromodynamics (QCD). In particular, we investigate the properties of QCD at extreme densities, a region yet to be explored by first principle methods. We base the study on lattice gauge theory with Wilson fermions in the strong coupling, heavy quark regime. We expand the lattice action around this limit, and carry out analytic integrals over the gauge links to obtain an effective, dimensionally reduced, theory of Polyakov loop interactions.
The 3D effective theory suffers only from a mild sign problem, and we briefly outline how it can be simulated using either Monte Carlo techniques with reweighting, or the Complex Langevin flow. We then continue to the main topic of the thesis, namely the analytic treatment of the effective theory. We introduce the linked cluster expansion, a method ideally suited to thermodynamic expansions. The complex nature of the effective theory action requires the development of a generalisation of the linked cluster expansion. We find a mapping between the generalised linked cluster expansion and our effective theory, and use this to compute the thermodynamic quantities.
Lastly, various resummation techniques are explored, and a chain resummation is implemented at the level of the effective theory itself. The resummed effective theory describes not only nearest-neighbour, next-to-nearest-neighbour, and higher-order interactions, but couplings at all distances, making it well suited for describing macroscopic effects. We compute the equation of state for cold and dense heavy QCD, and find a correspondence with that of non-relativistic free fermions, indicating a shift of the dynamics in the continuum.
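For reference, the non-relativistic free-fermion benchmark mentioned here is the standard zero-temperature degenerate Fermi gas, whose pressure scales with the number density as (textbook relation, not the thesis's derived equation of state):

```latex
P \;=\; \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m}\, n^{5/3}
\qquad\Longrightarrow\qquad P \propto n^{5/3}.
```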
We conclude this thesis by presenting two possible extensions to new physics using the techniques outlined within. The first is the application of the effective theory in the large-$N_c$ limit, of particular interest to the study of conformal field theory. The second is the computation of analytic Yang–Lee zeros, which can be applied in the search for real phase transitions.
Exotic nuclear matter
(2016)
Recent developments in nuclear structure theory for exotic nuclei are addressed, including the treatment of hyperons and nucleon resonances. Nuclear multipole response functions, hyperon interactions in infinite matter and in neutron stars, and theoretical aspects of excitations of nucleon resonances in nuclei are discussed.
We discuss different models for the spin structure of the nonperturbative pomeron: scalar, vector, and rank-2 symmetric tensor. The ratio of single-helicity-flip to helicity-conserving amplitudes in polarised high-energy proton–proton elastic scattering, known as the complex r5 parameter, is calculated for these models. We compare our results to experimental data from the STAR experiment. We show that the spin-0 (scalar) pomeron model is clearly excluded by the data, while the vector pomeron is inconsistent with the rules of quantum field theory. The tensor pomeron is found to be perfectly consistent with the STAR data.
This letter reports on how the Wilson flow technique can efficiently suppress the short-distance quantum fluctuations of 2- and 3-gluon Green functions, remove the ΛQCD scale and wash out the transition from the confining non-perturbative to the asymptotically free perturbative sector. After the Wilson flow, the momentum behavior of the Green functions can be described in terms of a quasi-classical instanton background. The same behavior also occurs, before the Wilson flow, at low momenta. This last result permits applications such as the detection of phenomenological instanton properties or a determination of the lattice spacing from the gauge sector of the theory alone.
We report on new results on the infrared behavior of the three-gluon vertex in quenched Quantum Chromodynamics, obtained from large-volume lattice simulations. The main focus of our study is the appearance of the characteristic infrared feature known as ‘zero crossing’, the origin of which is intimately connected with the nonperturbative masslessness of the Faddeev–Popov ghost. The appearance of this effect is clearly visible in one of the two kinematic configurations analyzed, and its theoretical origin is discussed in the framework of Schwinger–Dyson equations. The effective coupling in the momentum subtraction scheme that corresponds to the three-gluon vertex is constructed, revealing the vanishing of the effective interaction at the exact location of the zero crossing.
A generalized teleparallel cosmological model, f(TG,T), containing the torsion scalar T and the teleparallel counterpart of the Gauss–Bonnet topological invariant TG, is studied in the framework of the Noether symmetry approach. As f(G,R) gravity, where G is the Gauss–Bonnet topological invariant and R is the Ricci curvature scalar, exhausts all the curvature information that one can construct from the Riemann tensor, in the same way, f(TG,T) contains all the possible information directly related to the torsion tensor. In this paper, we discuss how the Noether symmetry approach allows one to fix the form of the function f(TG,T) and to derive exact cosmological solutions.
We study the effect of thermal charm production on charmonium regeneration in high energy nuclear collisions. By solving the kinetic equations for charm quark and charmonium distributions in Pb+Pb collisions, we calculate the global and differential nuclear modification factors RAA(Npart) and RAA(pt) for J/ψ mesons. Due to thermal charm production in the hot medium, the charmonium production source changes from the initially created charm quarks at SPS, RHIC and LHC to the thermally produced charm quarks at the Future Circular Collider (FCC), and the J/ψ suppression (RAA<1) observed so far will be replaced by a strong enhancement (RAA>1) at FCC at low transverse momentum.
The decay properties of the Pygmy Dipole Resonance (PDR) have been investigated in the semi-magic N=82 nucleus 140Ce using a novel combination of nuclear resonance fluorescence and γ–γ coincidence techniques. Branching ratios for transitions to low-lying excited states are determined in a direct and model-independent way, both for individual excited states and for excitation-energy intervals. Comparison of the experimental results to microscopic calculations in the quasi-particle phonon model exhibits excellent agreement, supporting the observation that the Pygmy Dipole Resonance couples to the ground state as well as to low-lying excited states. A mixing of 10% between the PDR and the [2+1 x PDR] configuration is extracted.
For the complete characterisation of the high-current proton source within the FRANZ project it was necessary to determine its emittance. The present work deals with the development of two different emittance measurement systems that are able to determine the emittance in the critical region directly behind the ion source.
The fundamental difficulty of emittance measurements at high-current ion sources lies in the special demands placed on such measurement systems. On the one hand, they must be able to handle extremely high beam power densities and beam currents without being damaged. On the other hand, and this is the particular challenge, they must be insensitive to high-voltage breakdowns, since breakdowns naturally occur at an ion source and can damage the sensitive and expensive measurement electronics.
For this reason, a pepper-pot emittance measurement system was further developed that operates entirely without high-voltage-sensitive electronics. It consists of an efficiently water-cooled measuring head with an aperture plate made of a tungsten alloy, whose hole geometry was adapted to the ion source under investigation. Instead of a multichannel plate and/or a scintillating screen, an aluminium plate pre-treated with oil serves as the screen. Owing to the interaction of the beamlets drifting through the aperture plate with the surface of the screen, carbon imprints visible to the naked eye form on it. From the position in real space and the intensity distribution of the individual imprints, the phase-space distribution can be calculated. The proof that the intensity distribution of the carbon imprints is proportional to the beam current density distribution of each imprint was provided as part of the fundamental investigations. In parallel, a second, conventional slit-grid emittance measurement system was developed and built.
For the evaluation of the raw data, analysis software compatible with both measurement systems was developed. It calculates the phase-space distribution and the emittances (orientation and area) from the raw data and displays them graphically in various cutting planes. A main aspect was the necessary background reduction: in particular, when the pepper-pot screens are digitised, a non-physical change of the intensity distribution of the carbon imprints occurs, so the successful separation of the imprints from the background was of decisive importance.
With both emittance measurement systems, the emittance of the FRANZ high-current proton source was determined within this work, and its dependence on various beam parameters was investigated. The results of the two measurement systems agree very well, which confirms the capability of the pepper-pot system in this field of application.
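From either system's raw data, the emittance follows from second moments of the reconstructed phase-space distribution. A minimal sketch of the standard statistical rms-emittance definition, assuming sampled (x, x') particle coordinates (the function name and the synthetic test beam are illustrative only):

```python
import numpy as np

def rms_emittance(x, xp):
    """Statistical rms emittance from coordinates x (mm) and x' (mrad):
    eps = sqrt(<x^2><x'^2> - <x x'>^2), moments taken about the centroid."""
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt(x.var() * xp.var() - (x * xp).mean() ** 2)

# Synthetic uncorrelated Gaussian beam: eps should be close to
# sigma_x * sigma_x' = 2.0 mm * 0.5 mrad = 1.0 mm mrad.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 2.0, 100000)
xp = rng.normal(0.0, 0.5, 100000)
eps = rms_emittance(x, xp)
print(eps)
```

The cross term <x x'> accounts for the tilt of the phase-space ellipse, so the same formula applies to the converging or diverging beams measured behind the solenoid.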
To generate the plasma densities required for the various emittance measurements, the injected arc power was increased by 265%, from 2.85 kW to 7.56 kW. The small variance of the measured emittances leads to the conclusion that, within the measurement accuracy, the ion temperature does not change noticeably over the investigated range, which is remarkable given the strong increase of the power deposited in the plasma.
In the course of the fundamental investigations of the pepper-pot system it was found that, under certain conditions, two carbon imprints can form per aperture hole. With the help of beam simulations using the code IGUN, as well as comparative emittance measurements, it could be shown that in the so-called matched case two beamlets are extracted. By slightly increasing the perveance, these two beamlets can be merged into a single laminar ion beam.
With a view to the conditioning of the FRANZ LEBT, the transport of a high-current ion beam through a solenoid, and its effect on the beam emittance, was investigated for the first time at the institute. Because of the design proton current of Ip = 50 mA, these investigations were carried out with a comparable proton current and a beam energy of E = 55 keV.
In addition, the temporal evolution of the emittance within a beam pulse (80 Hz, 1 ms, Ip = 56 mA, It = 70 mA) behind the solenoid was investigated. The analysis shows that, within the measurement accuracy, the beam emittance remains nearly constant along the pulse plateau. However, the divergence of the beam core changes during the pulse rise, owing to space-charge compensation and the rising current.
The accelerator facility FRANZ (Frankfurter Neutronenquelle am Stern-Gerlach-Zentrum, the Frankfurt neutron source at the Stern-Gerlach-Zentrum) is currently being built in the experimental hall of the physics department on the Riedberg campus of Goethe University. The facility offers a wide range of experimental possibilities for the study of intense, pulsed proton beams. One research focus at the secondary neutron beams is measurements for nuclear astrophysics. The neutrons are produced by a 2 MeV proton beam via the reaction 7Li(p,n)7Be. The planned experiments require both a pulse repetition rate of up to 250 kHz at pulse currents in the 100 mA range, realised here for the first time worldwide, and an extreme pulse compression to one nanosecond, at which the pulse currents reach the ampere range. In addition, continuous-wave beam operation in the mA current range is also possible. Many individual accelerator components, such as the ion source, the chopper for pulse shaping, the RF-coupled RFQ-IH combination, the rebuncher in the form of a CH structure, and the bunch compressor, are new developments. Average beam powers of up to 24 kW occur in the low-energy beam transport section, since the ion source is always operated in continuous-wave mode, even at high current with high pulse repetition rates. Personnel and equipment protection therefore also plays an essential role in the design of the control system for FRANZ. The layout of FRANZ and its main components are described in Chapter 2. The many different components, such as the high-voltage section, magnets, radio-frequency components and cavities, vacuum components, beam diagnostics and detectors, make it plausible that the control system for such a facility must also be specially designed. For comparison, Chapter 4 presents the control concepts of current large accelerator projects, namely the European Spallation Source ESS and the Facility for Antiproton and Ion Research FAIR. In the present work, the ion source was chosen as a complex accelerator component for developing and testing control procedures. For starting up and operating the ion source, a flow chart (Fig. 5.15) was developed and implemented. In detail, the dependence of the heated-cathode parameters on the operating time was investigated.
From this, an algorithm for predicting a timely filament exchange could be derived. Furthermore, the readjustment of the cathode heating current was automated in order to stabilise the arc discharge voltage within an interval of ±0.5 V. The ramp-up of the filament current was automated as well: the change of the vacuum pressure as a function of the filament current increase is measured and evaluated, and the next permissible current increment is derived from it. In this way, the operating state is reached faster and in a more controlled manner than with a manual ramp-up, bringing the goal of unattended ion source operation closer. In a first test of component control and data acquisition, an ion beam was extracted and transported through the first focusing magnet, a solenoid. The excitation current of the solenoid and the beam energy were scanned automatically, the data were stored, and a contour plot of the measured beam current behind the focusing lens was produced (Fig. 5). The present work deals only with the "slow" control processes, while the fast processes are regulated independently in the radio-frequency control system. In addition to monitoring the operating state of all components, all data required for service and personnel safety are also logged. The system is based on MNDACS (Mesh Networked Data Acquisition and Control System) and is written in JAVA. MNDACS consists of a kernel that runs the component driver software as well as the network server and the graphical network interface (GUI). It further comprises the Driver Abstraction Layer (DAL), which provides access to other computers or to local drivers. CORBA serves as the middleware for network communication. It regulates the communication with external software and defines how communication is rerouted in the event of line interruptions or a local computer crash. FRANZ has two control levels: the "high level control" and the data processing run over Ethernet, while the interlock and safety system runs over the "low level control". The network connections run over 1 Gb Ethernet links, so that fast data exchange remains possible even during local network disturbances. To keep the computer system running during power failures, an uninterruptible power supply (UPS) was procured within this work and successfully tested at the high-voltage terminal.
The process of electron loss to the continuum (ELC) has been studied for the collision systems U28+ + H2 at a collision energy of 50 MeV/u, U28+ + N2 at 30 MeV/u, and U28+ + Xe at 50 MeV/u. The energy distributions of cusp electrons emitted at an angle of 0° with respect to the projectile beam were measured using a magnetic forward-angle electron spectrometer. For these collision systems far from the equilibrium charge state, a significantly asymmetric cusp shape is observed. The experimental results are compared to calculations based on first-order perturbation theory, which predict an almost symmetric cusp shape. Some possible reasons for this discrepancy are discussed.
Using an advanced version of the hadron resonance gas model we have found several remarkable irregularities at chemical freeze-out. The most prominent of them are two sets of highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon, the total pion number per baryon, and the thermal pion number per baryon, which we found at center-of-mass energies of 3.6-4.9 GeV and 7.6-10 GeV. The low-energy set of quasi-plateaus was predicted a long time ago. On the basis of the generalized shock adiabat model we demonstrate that the low-energy correlated quasi-plateaus give evidence for the anomalous thermodynamic properties of the mixed phase at its boundary to the quark-gluon plasma. The question is whether the high-energy correlated quasi-plateaus are also related to some kind of mixed phase. In order to answer this question we employ the results of a systematic meta-analysis of the quality of data description of 10 existing event generators of nucleus-nucleus collisions in the range of center-of-mass collision energies from 3.1 GeV to 17.3 GeV. These generators are divided into two groups: the first group includes the generators which account for quark-gluon plasma formation during nuclear collisions, while the second group includes the generators which do not assume quark-gluon plasma formation in such collisions. Comparing the quality of data description of more than a hundred different data sets of strange hadrons by these two groups of generators, we find two regions of equal quality of data description, located at center-of-mass collision energies of 4.3-4.9 GeV and 10-13.5 GeV. We interpret these two regions of equal quality of data description as regions of hadron-quark-gluon mixed phase formation. This conclusion is strongly supported by irregularities in the collision energy dependence of the experimental ratios of the Lambda hyperon number per proton and the positive kaon number per Lambda hyperon.
Although at the moment it is unclear whether these regions belong to the same mixed phase or not, there are arguments that the most probable collision energy range to probe the (tri)critical endpoint of the QCD phase diagram is 12-14 GeV.
The centrality dependence of the charged-particle pseudorapidity density measured with ALICE in Pb–Pb collisions at √sNN=2.76 TeV over a broad pseudorapidity range is presented. This Letter extends the previous results reported by ALICE to more peripheral collisions. No strong change of the overall shape of charged-particle pseudorapidity density distributions with centrality is observed, and when normalised to the number of participating nucleons in the collisions, the evolution over pseudorapidity with centrality is likewise small. The broad pseudorapidity range (−3.5<η<5) allows precise estimates of the total number of produced charged particles which we find to range from 162±22(syst.) to 17170±770(syst.) in 80–90% and 0–5% central collisions, respectively. The total charged-particle multiplicity is seen to approximately scale with the number of participating nucleons in the collision. This suggests that hard contributions to the charged-particle multiplicity are limited. The results are compared to models which describe dNch/dη at mid-rapidity in the most central Pb–Pb collisions and it is found that these models do not capture all features of the distributions.
The production of K∗(892)0 and ϕ(1020) mesons has been measured in p–Pb collisions at √sNN = 5.02 TeV. K∗0 and ϕ are reconstructed via their decay into charged hadrons with the ALICE detector in the rapidity range - 0.5 < y < 0. The transverse momentum spectra, measured as a function of the multiplicity, have a pT range from 0 to 15 GeV/c for K∗0 and from 0.3 to 21 GeV/c for ϕ. Integrated yields, mean transverse momenta and particle ratios are reported and compared with results in pp collisions at √s = 7 TeV and Pb–Pb collisions at √sNN = 2.76 TeV. In Pb–Pb and p–Pb collisions, K∗0 and ϕ probe the hadronic phase of the system and contribute to the study of particle formation mechanisms by comparison with other identified hadrons. For this purpose, the mean transverse momenta and the differential proton-to-ϕ ratio are discussed as a function of the multiplicity of the event. The short-lived K∗0 is measured to investigate re-scattering effects, believed to be related to the size of the system and to the lifetime of the hadronic phase.
Nanomaterials, i.e., materials that are manufactured at a very small spatial scale, can possess unique physical and chemical properties and exhibit novel characteristics as compared to the same material without nanoscale features. The reduction of size down to the nanometer scale leads to the abundance of potential applications in different fields of technology. For instance, tailoring the physicochemical properties of nanomaterials for modification of their interaction with a biological environment has been reflected in a number of biomedical applications.
Strategies to choose the size and the composition of nanoscale systems are often hindered by a limited understanding of interactions that are difficult to study experimentally. However, this goal can be achieved by means of advanced computer simulations. This thesis explores, from theoretical and computational viewpoints, the stability, electronic, and thermo-mechanical properties of nanoscale systems and materials related to biomedical applications.
We examine the ability of existing classical interatomic potentials to reproduce stability and thermo-mechanical properties of metal systems, assuming that these potentials have been fitted to describe ground-state properties of the perfect bulk materials.
It is found that existing classical interatomic potentials poorly describe highly-excited vibrational states when the system is far from the potential energy minimum. On the other hand, construction of a reliable computational model is essential for further development of nanomaterials for applications. A new interatomic potential that is able to correctly reproduce both the melting temperature and the ground-state properties of different metals, such as gold, platinum, titanium, and magnesium, by means of classical molecular dynamics simulations is proposed in this work. The suggested modification of a many-body potential has a general nature and can be utilized for similar numerical exploration of thermo-mechanical properties of a broad range of molecular and solid state systems experiencing phase transitions.
The applicability of the classical interatomic potentials to the description of nanoscale systems, consisting of several tens to hundreds of atoms, is also explored in this study. This issue is important, for instance, in the case of nanostructured materials, where grains or nanocrystals have a typical size of about a few nanometers. We validate classical potentials through comparison with density-functional theory calculations of small atomic clusters made of titanium and nickel. By this analysis, we demonstrate that classical potentials fitted to describe ground-state properties of a bulk material can describe the energetics of nanoscale systems with reasonable accuracy.
In this work, we also analyze the electronic properties of nanometer-sized nanoparticles made of gold, platinum, silver, and gadolinium; nanoparticles composed of these materials are of current interest for radiation therapy applications. We focus on the production of low-energy electrons with kinetic energies from a few electronvolts to several tens of electronvolts. It is now established that secondary electrons of such energies play an important role in the nanoscale mechanisms of biological damage resulting from ionizing radiation. We provide a methodology for analyzing the dynamic response of nanoparticles of experimentally relevant sizes, namely of about several nanometers, exposed to ionizing radiation. Because of the large number of constituent atoms (about 1000–10000) and the consequently high computational cost, the electronic properties of such systems can hardly be described by ab initio methods based on a quantum-mechanical treatment of electrons, so the analysis must rely on model approaches. By comparing the response of smaller systems (of about 1 nm in size) calculated within the ab initio and model frameworks, we validate this methodology and make predictions for electron production in larger systems.
We reveal that a significant increase in the number of low-energy electrons emitted from nanometer-sized noble metal nanoparticles arises from collective electron excitations formed in these systems. The dominant mechanisms of electron yield enhancement are the formation of plasmons excited in the system as a whole and of atomic giant resonances formed by the excitation of valence d electrons in individual atoms of a nanoparticle. Embedded in a biological medium, noble metal nanoparticles thus represent an important source of low-energy electrons, able to produce significant irreparable damage in biological systems.
A general methodology for studying the electronic properties of nanosystems is used to make quantitative predictions for electron production by non-metal nanoparticles. The analysis illustrates that, owing to a prominent collective response to an external electric field, carbon nanoparticles embedded in a biological medium also enhance the production of low-energy electrons. The number of low-energy electrons emitted from carbon nanoparticles is shown to be several times higher than that emitted from liquid water.
Hadronic polarization and the related anisotropy of the dilepton angular distribution are studied for the reaction πN→Ne+e−. We employ consistent effective interactions for baryon resonances up to spin-5/2, in which non-physical degrees of freedom are eliminated, to compute the anisotropy coefficients for isolated intermediate baryon resonances. It is shown that the spin and parity of the intermediate baryon resonance are reflected in the angular dependence of the anisotropy coefficient. We then compute the anisotropy coefficient including the N(1520) and N(1440) resonances, which are essential at the collision energy of the recent data obtained by the HADES Collaboration on this reaction. We conclude that the anisotropy coefficient provides useful constraints for unraveling the resonance contributions to this process.
Collective flow phenomena are a sensitive probe of the properties of extreme QCD matter. Their interpretation, however, relies on an understanding of the initial conditions, e.g., the eccentricity of the nuclear overlap region. HADES [1] provides a large acceptance combined with a high mass resolution and therefore makes it possible to study di-electron and hadron production in heavy-ion collisions with unprecedented precision. In this contribution, the capability of HADES to study flow harmonics utilizing multi-particle azimuthal correlation techniques is discussed. Thanks to the high statistics of seven billion Au+Au collisions at 1.23 AGeV collected in 2012, a systematic study of higher-order flow harmonics, the differentiation between collective and non-flow effects, as well as a multi-differential (pt, rapidity, centrality) analysis are possible.
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(2016)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta 8<pTtrig<16 GeV/c and associated charged particles of 0.5<pTassoc<10 GeV/c versus the azimuthal angle difference Δφ at midrapidity in pp and central Pb–Pb collisions at √sNN=2.76 TeV with ALICE. The new measurements exploit associated charged hadrons down to 0.5 GeV/c, which significantly extends our previous measurement that only used charged hadrons above 3 GeV/c. After subtracting the contributions of the flow background, v2 to v5, the per-trigger yields are extracted for |Δφ|<0.7 on the near and for |Δφ−π|<1.1 on the away side. The ratio of per-trigger yields in Pb–Pb to those in pp collisions, IAA, is measured on the near and away side for the 0–10% most central Pb–Pb collisions. On the away side, the per-trigger yields in Pb–Pb are strongly suppressed to the level of IAA≈0.6 for pTassoc>3 GeV/c, while with decreasing momenta an enhancement develops reaching about 5 at low pTassoc. On the near side, an enhancement of IAA between 1.2 at the highest to 1.8 at the lowest pTassoc is observed. The data are compared to parton-energy-loss predictions of the JEWEL and AMPT event generators, as well as to a perturbative QCD calculation with medium-modified fragmentation functions. All calculations qualitatively describe the away-side suppression at high pTassoc. Only AMPT captures the enhancement at low pTassoc, both on the near and away side. However, it also underpredicts IAA above 5 GeV/c, in particular on the near-side.
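The flow-background subtraction described above removes a pedestal modulated by the harmonics v2 to v5 from the Δφ correlation before the per-trigger yields are extracted. A minimal sketch of that step (illustrative only, not the ALICE analysis code; baseline and vn values are hypothetical inputs):

```python
import numpy as np

def subtract_flow(dphi_centers, correlation, baseline, vn_trig, vn_assoc):
    """Subtract a flow-modulated background
    B * (1 + sum_n 2 v_n^trig v_n^assoc cos(n * dphi)), n = 2, 3, ...
    from a two-particle Delta-phi correlation histogram."""
    background = baseline * (1.0 + sum(
        2.0 * vt * va * np.cos(n * dphi_centers)
        for n, (vt, va) in enumerate(zip(vn_trig, vn_assoc), start=2)))
    return correlation - background
```

Applied to a correlation that contains only such a modulated pedestal, the subtraction returns zero everywhere, which is a convenient closure test.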
The production of charged pions, kaons, and (anti)protons has been measured at mid-rapidity (−0.5<y<0) in p–Pb collisions at √sNN = 5.02 TeV using the ALICE detector at the LHC. Exploiting the particle identification capabilities at high transverse momentum (pT), the previously published pT spectra have been extended to include measurements up to 20 GeV/c for seven event multiplicity classes. The pT spectra for pp collisions at √s = 7 TeV, needed to interpolate a pp reference spectrum, have also been extended up to 20 GeV/c in order to measure the nuclear modification factor (RpPb) in non-single-diffractive p–Pb collisions. At intermediate transverse momentum (2<pT<10 GeV/c) the proton-to-pion ratio increases with multiplicity in p–Pb collisions; a similar effect is not present in the kaon-to-pion ratio. The pT-dependent structure of this increase is qualitatively similar to that observed in pp and heavy-ion collisions. At high pT (>10 GeV/c), the particle ratios are consistent with those reported for pp and Pb–Pb collisions at LHC energies. At intermediate pT the (anti)proton RpPb shows a Cronin-like enhancement, while pions and kaons show little or no nuclear modification. At high pT the charged pion, kaon, and (anti)proton RpPb are consistent with unity within statistical and systematic uncertainties.
Electronic states with non-trivial topology host a number of novel phenomena with potential for revolutionizing information technology. The quantum anomalous Hall effect provides spin-polarized dissipation-free transport of electrons, while the quantum spin Hall effect in combination with superconductivity has been proposed as the basis for realizing decoherence-free quantum computing. We introduce a new strategy for realizing these effects, namely by hole and electron doping kagome lattice Mott insulators through, for instance, chemical substitution. As an example, we apply this new approach to the natural mineral herbertsmithite. We prove the feasibility of the proposed modifications by performing ab-initio density functional theory calculations and demonstrate the occurrence of the predicted effects using realistic models. Our results herald a new family of quantum anomalous Hall and quantum spin Hall insulators at affordable energy/temperature scales based on kagome lattices of transition metal ions.
The term superconductivity describes the phenomenon of vanishing electrical resistivity in a certain material, then called a superconductor, below a critical, typically very low, temperature. Since the discovery of superconductivity in mercury in 1911, many other superconductors have been found, and the critical temperature below which superconductivity occurs has recently been raised to temperatures encountered in a cold Antarctic winter.
Superconductors are promising materials for applications. They can serve as nearly loss-free cables for energy transmission, in coils for the generation of high magnetic fields, or in various electronic devices such as detectors for magnetic fields. Despite these obvious advantages, the cost of using superconductors depends strongly on the cooling effort needed to reach the superconducting state. Therefore, the search for a superconductor with a critical temperature above room temperature, which would remove the need for any specialized cooling system, is one of the main projects of contemporary research in condensed matter physics.
While a theory of superconductivity in simple metals was already developed in the 1950s, it has since been recognized that many superconductors are unconventional in the sense that their behavior does not follow that theory. Unconventional superconductors differ from conventional ones mainly in the momentum- and real-space symmetry of the order parameter associated with the superconducting state: while conventional superconductors have a uniform order parameter, unconventional superconductors can have an order parameter with structure. Alternative theoretical descriptions have, of course, been suggested, but the discussion on the right theory of unconventional superconductivity has not yet been settled. Ultimately, this lack of a general theory of superconductivity prevents a targeted search for the room-temperature superconductor. Any new theoretical approach must, however, prove its value by correctly predicting the structure of the superconducting order parameter and further material properties.
In this work we participate in the search for a theory of unconventional superconductivity. We discuss the theory of superconductivity mediated by electron-electron interactions, which has been popular in recent decades due to its success in explaining various properties of the copper-based superconductors discovered in the 1980s. We give a detailed derivation of the so-called random phase approximation for the Hubbard model in terms of diagrammatic many-body theory and apply it in conjunction with low-energy kinetic Hamiltonians, which we construct from first-principles calculations in the framework of density functional theory. Density functional theory is an established technique for calculating the electronic and magnetic properties of materials solely based on their crystal structure. Its practical implementations in computer codes, however, do not describe complicated many-electron phenomena such as the superconducting state that we are interested in here. Nevertheless, density functional theory provides important information about the normal state of the material, from which superconductivity emerges. In our theory we use this information and approach the superconducting state from the normal state.
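The random phase approximation mentioned above resums the bubble diagrams into a geometric series, which in matrix form reads χ_RPA = (1 − χ0 U)⁻¹ χ0. A minimal numerical sketch of that resummation (a schematic single-matrix version, not the multi-orbital formalism developed in the thesis):

```python
import numpy as np

def rpa_susceptibility(chi0, u):
    """RPA-resummed susceptibility chi = (1 - chi0 U)^{-1} chi0,
    i.e. the geometric series chi0 + chi0 U chi0 + ... in matrix form.
    chi0: bare susceptibility matrix; u: interaction matrix."""
    identity = np.eye(chi0.shape[0])
    # Solve the linear system instead of forming the inverse explicitly.
    return np.linalg.solve(identity - chi0 @ u, chi0)
```

In the scalar limit this reduces to χ0/(1 − U χ0), which diverges as U χ0 → 1, the familiar Stoner-type instability criterion.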
Such an interfacing of different calculational techniques requires a lot of implementation work in the form of computer code. Including the computer code in this work would take up far too much space, but since some of the decisions on approximations in the calculational formalism are guided by the feasibility of the associated computer calculations, we discuss the numerical implementation in great detail.
We apply the developed methods to quasi-two-dimensional organic charge transfer salts and iron-based superconductors. Finally, we discuss implications of our findings for the interpretation of various experiments.
The production of the hypertriton (HΛ3) and the antihypertriton (anti-HΛ3) has been measured for the first time in Pb–Pb collisions at √sNN = 2.76 TeV with the ALICE experiment at the LHC. The pT-integrated HΛ3 yield in one unit of rapidity, dN/dy × B.R.(HΛ3→3He, π−) = (3.86±0.77(stat.)±0.68(syst.))×10−5 in the 0–10% most central collisions, is consistent with predictions from a statistical thermal model using the same temperature as for the light hadrons. The coalescence parameter B3 shows a dependence on the transverse momentum, similar to the B2 of deuterons and the B3 of 3He nuclei. The ratio of yields S3 = HΛ3/(3He × Λ/p) was measured to be S3 = 0.60±0.13(stat.)±0.21(syst.) in 0–10% centrality events; this value is compared to different theoretical models and is compatible with thermal model predictions. The measured HΛ3 lifetime, τ = 181 +54 −39 (stat.) ± 33 (syst.) ps, is in agreement within 1σ with the world average value.
Direct photon production at mid-rapidity in Pb–Pb collisions at √sNN=2.76 TeV was studied in the transverse momentum range 0.9<pT<14 GeV/c. Photons were detected with the highly segmented electromagnetic calorimeter PHOS and via conversions in the ALICE detector material with the e+e− pair reconstructed in the central tracking system. The results of the two methods were combined and direct photon spectra were measured for the 0–20%, 20–40%, and 40–80% centrality classes. For all three classes, agreement was found with perturbative QCD calculations for pT≳5 GeV/c. Direct photon spectra down to pT≈1 GeV/c could be extracted for the 20–40% and 0–20% centrality classes. The significance of the direct photon signal for 0.9<pT<2.1 GeV/c is 2.6σ for the 0–20% class. The spectrum in this pT range and centrality class can be described by an exponential with an inverse slope parameter of (297±12stat±41syst) MeV. State-of-the-art models for photon production in heavy-ion collisions agree with the data within uncertainties.
We present measurements of the elliptic (v2), triangular (v3) and quadrangular (v4) anisotropic azimuthal flow over a wide range of pseudorapidities (−3.5<η<5). The measurements are performed with Pb–Pb collisions at √sNN = 2.76 TeV using the ALICE detector at the Large Hadron Collider (LHC). The flow harmonics are obtained using two- and four-particle correlations from nine centrality intervals covering central to peripheral collisions. We find that the shape of vn(η) is largely independent of centrality for the flow harmonics n = 2–4; however, the higher harmonics fall off more steeply with increasing |η|. We assess the validity of extended longitudinal scaling of v2 by comparing to lower-energy measurements, and find that the higher-harmonic flow coefficients are proportional to the charged-particle densities at larger pseudorapidities. Finally, we compare our measurements to both hydrodynamical and transport models, and find that both have difficulties describing our data.
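The two-particle correlation method used for such flow measurements estimates vn{2}² = ⟨cos n(φi − φj)⟩ over all distinct particle pairs, which is conveniently computed with Q-vectors. A toy single-event sketch for n = 2 (illustrative only; a real analysis averages over many events and corrects for non-uniform detector acceptance):

```python
import numpy as np

def v2_two_particle(phis):
    """v2 from the two-particle Q-cumulant:
    <cos 2(phi_i - phi_j)> = (|Q2|^2 - M) / (M (M - 1)),
    with Q2 = sum_k exp(2i phi_k); the -M term removes self-correlations."""
    phis = np.asarray(phis, dtype=float)
    m = len(phis)
    q2 = np.sum(np.exp(2j * phis))
    corr = (np.abs(q2) ** 2 - m) / (m * (m - 1))
    return np.sqrt(max(corr, 0.0))  # clip negative estimates to zero
```

Two limiting cases make useful checks: perfectly aligned angles give v2 = 1, while angles spread uniformly around the circle give v2 = 0.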
"A lot of things just went right for me," says Hannah Petersen when asked about her impressive career: she was just 30 years old when she came to Goethe University as a junior research group leader in October 2012, making her one of the youngest physics professors in Germany. Now she is being honored for her work with the Heinz Maier-Leibnitz Prize of the German Research Foundation (DFG). The prize, endowed with 20,000 euros, is the most important award for early-career researchers in Germany.
Neutron-induced fission cross sections of 238U and 235U are used as standards in the fast neutron region up to 200 MeV. High accuracy of these standards is essential for the experimental determination of other neutron reaction cross sections. The detection efficiency should therefore be corrected using the angular distribution of the fission fragments (FFAD), which is barely known above 20 MeV. In addition, the angular distribution of the fragments produced in the fission of highly excited and deformed nuclei is an important observable for investigating the nuclear fission process.
In order to measure the FFAD of neutron-induced reactions, a fission detection setup based on parallel-plate avalanche counters (PPACs) has been developed and successfully used at the CERN-n_TOF facility. In this work, we present the preliminary results on the analysis of new 235U(n,f) and 238U(n,f) data in the extended energy range up to 200 MeV compared to the existing experimental data.
The study of the resonant structures in neutron-nucleus cross sections, and therefore of the compound-nucleus reaction mechanism, requires spectroscopic measurements to determine with high accuracy the energy of the neutron interacting with the material under study.
For this purpose, the neutron time-of-flight facility n_TOF has been operating at CERN since 2001. Its characteristics, such as the high-intensity instantaneous neutron flux, the wide energy range from thermal to a few GeV, and the very good energy resolution, are perfectly suited to performing high-quality measurements of neutron-induced reaction cross sections. Precise and accurate knowledge of these cross sections plays a fundamental role in nuclear technologies, nuclear astrophysics, and nuclear physics.
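The time-of-flight principle underlying the facility converts a measured flight time over a known path length into a neutron kinetic energy. A minimal relativistic sketch (the roughly 185 m path length used below is indicative of the long n_TOF flight path, not an exact specification):

```python
import math

def neutron_energy_from_tof(flight_path_m, tof_s):
    """Relativistic neutron kinetic energy (eV) from time of flight:
    beta = L / (c t),  E_kin = m_n c^2 (gamma - 1)."""
    c = 299_792_458.0        # speed of light, m/s
    mn_c2 = 939.56542e6      # neutron rest energy, eV
    beta = flight_path_m / (c * tof_s)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return mn_c2 * (gamma - 1.0)
```

As a check, a thermal neutron travelling at 2200 m/s over a 185 m path comes out near the textbook thermal energy of 0.0253 eV.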
Two measuring stations are available at the n_TOF facility, called EAR1 and EAR2, with different neutron flux intensities and energy resolutions. These experimental areas, combined with advanced detection systems, provide great flexibility in performing challenging measurements of high precision and accuracy, and allow the investigation of isotopes with very low cross sections, available only in small quantities, or with very high specific activity.
The characteristics and performance of the two experimental areas of the n_TOF facility will be presented, together with the most important measurements performed to date and their physics cases. In addition, significant upcoming measurements will be introduced.
We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian PID approach for charged pions, kaons and protons in the central barrel of ALICE is studied. PID is performed via measurements of specific energy loss (dE/dx) and time-of-flight. PID efficiencies and misidentification probabilities are extracted and compared with Monte Carlo simulations using high-purity samples of identified particles in the decay channels K0S→π−π+, ϕ→K−K+, and Λ→pπ− in p-Pb collisions at √sNN = 5.02 TeV. In order to thoroughly assess the validity of the Bayesian approach, this methodology was used to obtain corrected pT spectra of pions, kaons, protons, and D0 mesons in pp collisions at √s = 7 TeV. In all cases, the results using Bayesian PID were found to be consistent with previous measurements performed by ALICE using a standard PID approach. For the measurement of D0→K−π+, it was found that a Bayesian PID approach gave a higher signal-to-background ratio and a similar or larger statistical significance when compared with standard PID selections, despite a reduced identification efficiency. Finally, we present an exploratory study of the measurement of Λ+c→pK−π+ in pp collisions at √s = 7 TeV, using the Bayesian approach for the identification of its decay products.
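The core of any Bayesian combination of this kind is Bayes' theorem applied per species: the posterior probability is the prior abundance times the detector likelihood, normalized over all candidate species. A toy sketch with hypothetical numbers (the actual ALICE priors are obtained iteratively from the measured abundances):

```python
def bayes_pid(likelihoods, priors):
    """Posterior species probabilities:
    P(s | signal) ~ prior(s) * likelihood(signal | s),
    normalized so the posteriors over all species sum to one."""
    unnormalized = {s: priors[s] * likelihoods[s] for s in likelihoods}
    norm = sum(unnormalized.values())
    return {s: w / norm for s, w in unnormalized.items()}
```

For example, a detector signal that mildly favors a pion, combined with a pion-dominated prior, yields a strongly pion-favoring posterior; this is how the priors sharpen (or suppress) ambiguous detector responses.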
The multi-strange baryon yields in PbPb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, Ξ and Ω production rates have been measured with the ALICE experiment as a function of transverse momentum, pT, in pPb collisions at a centre-of-mass energy of √sNN = 5.02 TeV. The results cover the kinematic ranges 0.6 GeV/c<pT<7.2 GeV/c and 0.8 GeV/c<pT<5 GeV/c, for Ξ and Ω respectively, in the common rapidity interval −0.5<yCMS<0. Multi-strange baryons have been identified by reconstructing their weak decays into charged particles. The pT spectra are analysed as a function of event charged-particle multiplicity, which in pPb collisions ranges over one order of magnitude and lies between those observed in pp and PbPb collisions. The measured pT distributions are compared to the expectations from a Blast-Wave model. The parameters which describe the production of lighter hadron species also describe the hyperon spectra in high multiplicity pPb collisions. The yield of hyperons relative to charged pions is studied and compared with results from pp and PbPb collisions. A continuous increase in the yield ratios as a function of multiplicity is observed in pPb data, the values of which range from those measured in minimum bias pp to the ones in PbPb collisions. A statistical model qualitatively describes this multiplicity dependence using a canonical suppression mechanism, in which the small volume causes a relative reduction of hadron production dependent on the strangeness content of the hyperon.
Measurement of an excess in the yield of J/ψ at very low pT in Pb–Pb collisions at √sNN = 2.76 TeV
(2016)
We report on the first measurement of an excess in the yield of J/ψ at very low transverse momentum (pT<0.3 GeV/c) in peripheral hadronic Pb-Pb collisions at √sNN = 2.76 TeV, performed by ALICE at the CERN LHC. Remarkably, the measured nuclear modification factor of J/ψ in the rapidity range 2.5<y<4 reaches about 7 (2) in the pT range 0-0.3 GeV/c in the 70-90% (50-70%) centrality class. The J/ψ production cross section associated with the observed excess is obtained under the hypothesis that coherent photoproduction of J/ψ is the underlying physics mechanism. If confirmed, the observation of J/ψ coherent photoproduction in Pb-Pb collisions at impact parameters smaller than twice the nuclear radius opens new theoretical and experimental challenges and opportunities. In particular, coherent photoproduction accompanying hadronic collisions may provide insight into the dynamics of photoproduction and nuclear reactions, as well as become a novel probe of the Quark-Gluon Plasma.
The centrality dependence of the charged-particle pseudorapidity density measured with ALICE in Pb-Pb collisions at √sNN over a broad pseudorapidity range is presented. This Letter extends the previous results reported by ALICE to more peripheral collisions. No strong change of the charged-particle pseudorapidity density distributions with centrality is observed, and when normalised to the number of participating nucleons in the collisions, the evolution over pseudorapidity with centrality is likewise small. The broad pseudorapidity range allows precise estimates of the total number of produced charged particles, which we find to range from 162±22 (syst.) to 17170±770 (syst.) in 80-90% and 0-5% central collisions, respectively. The total charged-particle multiplicity is seen to scale approximately with the number of participating nucleons in the collision. This suggests that hard contributions to the charged-particle multiplicity are limited. The results are compared to models which describe dNch/dη at mid-rapidity in the most central Pb-Pb collisions, and it is found that these models do not capture all features of the distributions.
The production of J/ψ and ψ(2S) was measured with the ALICE detector in Pb-Pb collisions at the LHC. The measurement was performed at forward rapidity (2.5<y<4) down to zero transverse momentum (pT) in the dimuon decay channel. Inclusive J/ψ yields were extracted in different centrality classes and the centrality dependence of the average pT is presented. The J/ψ suppression, quantified with the nuclear modification factor (RAA), was studied as a function of centrality, transverse momentum and rapidity. Comparisons with similar measurements at lower collision energy and theoretical models indicate that the J/ψ production is the result of an interplay between color screening and recombination mechanisms in a deconfined partonic medium, or at its hadronization. Results on the ψ(2S) suppression are provided via the ratio of ψ(2S) over J/ψ measured in pp and Pb-Pb collisions.
The production of K∗(892)0 and ϕ(1020) mesons has been measured in p-Pb collisions at √sNN = 5.02 TeV. K∗0 and ϕ are reconstructed via their decay into charged hadrons with the ALICE detector in the rapidity range −0.5<y<0. The transverse momentum spectra, measured as a function of multiplicity, cover the pT range from 0 to 15 GeV/c for K∗0 and from 0.3 to 21 GeV/c for ϕ. Integrated yields, mean transverse momenta, and particle ratios are reported and compared with results in pp collisions at √s = 7 TeV and Pb-Pb collisions at √sNN = 2.76 TeV. In Pb-Pb and p-Pb collisions, K∗0 and ϕ probe the hadronic phase of the system and contribute to the study of particle formation mechanisms by comparison with other identified hadrons. For this purpose, the mean transverse momenta and the differential proton-to-ϕ ratio are discussed as a function of the event multiplicity. The short-lived K∗0 is measured to investigate re-scattering effects, believed to be related to the size of the system and to the lifetime of the hadronic phase.
We present a measurement of inclusive J/ψ production in p-Pb collisions at √sNN = 5.02 TeV as a function of the centrality of the collision, as estimated from the energy deposited in the Zero Degree Calorimeters. The measurement is performed with the ALICE detector down to zero transverse momentum, pT, in the backward (−4.46<ycms<−2.96) and forward (2.03<ycms<3.53) rapidity intervals in the dimuon decay channel and in the mid-rapidity region (−1.37<ycms<0.43) in the dielectron decay channel. The backward and forward rapidity intervals correspond to the Pb-going and p-going direction, respectively. The pT-differential J/ψ production cross section at backward and forward rapidity is measured for several centrality classes, together with the corresponding average pT and pT² values. The nuclear modification factor, QpPb, is presented as a function of centrality for the three rapidity intervals, and, additionally, at backward and forward rapidity, as a function of pT for several centrality classes. At mid- and forward rapidity, the J/ψ yield is suppressed up to 40% compared to that in pp interactions scaled by the number of binary collisions. The degree of suppression increases towards central p-Pb collisions at forward rapidity, and with decreasing pT of the J/ψ. At backward rapidity, the QpPb is compatible with unity within the total uncertainties, with an increasing trend from peripheral to central p-Pb collisions.
The interaction between the heat shock proteins Hsp70 and Hsp40 is at the core of the ATPase regulation of the chaperone machinery that maintains protein homeostasis. However, the structural details of this fundamental interaction are still elusive, and contrasting models have been proposed for the transient Hsp70/Hsp40 complexes. Here we combine molecular simulations based on both coarse-grained and atomistic models with co-evolutionary sequence analysis to shed light on this problem, focusing on the bacterial DnaK/DnaJ system. The integration of these complementary approaches resulted in a novel structural model that rationalizes previous experimental observations. We identify an evolutionarily conserved interaction surface formed by helix II of the DnaJ J-domain and a groove on lobe IIA of the DnaK nucleotide-binding domain, involving the inter-domain linker.
Great interest has emerged recently in the search for Kitaev spin liquid states in real materials. Such states rely on strongly anisotropic magnetic interactions, which have been suggested to exist in a number of candidate materials based on Ir and Ru. This thesis concentrates on two main goals. The first is the investigation of the electronic and magnetic properties of the candidate materials Na2IrO3, α-Li2IrO3, α-RuCl3, γ-Li2IrO3, and Ba3YIr2O9 for Kitaev physics, where both spin-orbit coupling and correlation effects are important. The second is the development of methods for the microscopic description of correlated materials, combining many-body methods and density functional theory (DFT). ...
Magnetism is a beautiful example of a macroscopic quantum phenomenon. While known at least since the ancient Greeks, a microscopic theoretical explanation of magnetism could only be achieved with the advent of quantum mechanics at the beginning of the 20th century. Then it was understood that in a certain class of solids the famous Pauli exclusion principle leads to an effective interaction between the microscopic magnetic moments, i.e., the spins, which favors an ordered, and hence macroscopically magnetic, state. Nowadays, magnetic phenomena are used in a host of applications, and are especially relevant for information storage and processing technologies.
Despite the long history of the field, magnetic phenomena are still an active research topic. In particular, the fields of spintronics and spin-caloritronics emerged in the last decade; they manipulate the microscopic spins via charge and heat currents, respectively. This opens new avenues to potential applications, including the possibility of using the magnetic spin degrees of freedom instead of charges as carriers of information, which could provide advantages such as reduced losses and further miniaturization.
In this thesis we do not delve any further into the realm of possible applications. Instead, we use sophisticated theories to explore the microscopic spin dynamics that underlies all such applications. We focus on a particular compound: yttrium iron garnet (YIG), a ferrimagnetic insulator. This material has been widely used in experiments on magnetism over the last decades and is a popular candidate for spintronic devices. Microscopically, the low-energy magnetic properties of YIG can be described by a ferromagnetic Heisenberg model. For spintronics and spin-caloritronics applications, however, it is insufficient to consider only the magnetic degrees of freedom; one should also include the coupling of the spins to the elastic lattice vibrations, i.e., the phonons. Besides giving an overview of the techniques used throughout the thesis, the introductory Ch. 1 provides a discussion of the microscopic Hamiltonian used to model the coupled spin-phonon system in the subsequent chapters.
The topic of Ch. 2 is the consequences of the magnetoelastic coupling for the low-energy magnon excitations in YIG. Starting from the microscopic spin-phonon Hamiltonian, we rigorously derive the magnon-phonon hybridization and scattering vertices in a controlled spin-wave expansion. For the experimentally relevant case of thin YIG films at room temperature, these vertices are then used to compute the magnetoelastic modes as well as the magnon damping. In the course of this work, the damping of magnons in this system was also investigated experimentally using Brillouin light scattering spectroscopy. While comparison to the experimental data shows that the magnetoelastic interactions do not dominate the total magnon relaxation in the experimentally accessible regime, we are able to show that the spin-lattice relaxation time is strongly momentum dependent, thereby providing a microscopic explanation of a recent experiment.
In the final Ch. 3, we investigate a different phenomenon occurring in thin YIG films: Room temperature condensation of magnons. Prior work attributed this condensation process to quantum mechanics, i.e., it was interpreted as Bose-Einstein condensation. However, this is not satisfactory because at room temperature, the magnons in YIG behave as purely classical waves. In particular, the quantum Bose-Einstein distribution reduces to the classical Rayleigh-Jeans distribution in this case. In addition, the effective spin in YIG is very large. Therefore we start from the hypothesis that the room temperature magnon condensation is actually a new example of the kinetic condensation of classical waves, which has so far only been observed by imaging classical light in a photorefractive crystal. To distinguish this classical condensation from the quantum mechanical Bose-Einstein one, we refer to it as Rayleigh-Jeans condensation. To prove our claim, we consider the classical equations of motion of the coupled spin-phonon system. By eliminating the phonon degrees of freedom, we microscopically derive a non-Markovian stochastic Landau-Lifshitz-Gilbert equation (LLG) for the classical spin vectors. We then use this LLG to perform numerical simulations of the magnon dynamics, with all parameters fixed by experiments. These simulations accurately reproduce all stages of the magnon time evolution observed in experiments, including the appearance of the magnon condensate at the bottom of the magnon spectrum. In this way we confirm our initial hypothesis that the magnon condensation is a classical Rayleigh-Jeans condensation, which is unrelated to quantum mechanics.
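The classical limit invoked above can be made explicit: for mode energies small compared to the thermal energy, the Bose-Einstein occupation reduces to the Rayleigh-Jeans form,

```latex
n_{\mathrm{BE}}(\varepsilon_{\mathbf{k}})
  = \frac{1}{e^{\varepsilon_{\mathbf{k}}/k_{B}T} - 1}
  \;\approx\; \frac{k_{B}T}{\varepsilon_{\mathbf{k}}}
  \qquad (\varepsilon_{\mathbf{k}} \ll k_{B}T),
```

i.e., each mode carries the equipartition energy k_B T, with no explicit factor of ħ once ε_k = ħω_k is expressed through the classical mode frequency. At room temperature the relevant magnon energies in YIG satisfy this condition, which is why the condensation can be analysed with purely classical equations of motion.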
The phenomenon of magnetism has been known to humankind for at least 2500 years, and many useful applications of magnetism have been developed since then, from the compass to modern information storage and processing devices. While technological applications are an important part of the continuing interest in magnetic materials, their fundamental properties are still being studied, leading to new physical insights at the forefront of physics. The magnetism of magnetic materials is a pure quantum effect due to the electrons, which carry an intrinsic spin of 1/2. The physics of interacting quantum spins in magnetic insulators is the main subject of this thesis. We focus here on a theoretical description of the antiferromagnetic insulator Cs2CuCl4. This material is highly interesting because it is a nearly ideal realization of the two-dimensional antiferromagnetic spin-1/2 Heisenberg model on an anisotropic triangular lattice, where the Cu(2+) ions carry a spin of 1/2 and the spins interact via exchange couplings. Due to the geometric frustration of the triangular lattice, there exists a spin-liquid phase with fractional excitations (spinons) at finite temperatures in Cs2CuCl4. This spin-liquid phase is characterized by strong short-range spin correlations without long-range order. From an experimental point of view, Cs2CuCl4 is also very interesting because the exchange couplings are relatively weak, leading to a saturation field of only B_c = 8.5 T. All relevant parts of the phase diagram are therefore experimentally accessible. A recurring theme in this thesis is the use of bosonic or fermionic representations of the spin operators, each of which offers, in different situations, a suitable starting point for an approximate treatment of the spin interactions.
The methods which we develop in this thesis are not restricted to Cs2CuCl4 but can also be applied to other materials that can be described by the spin-1/2 Heisenberg model on a triangular lattice; one important example is the material class Cs2CuCl4-xBrx, where chlorine is partially substituted by bromine, which changes the strength of the exchange couplings and the degree of frustration.
Our first topic is the finite-temperature spin-liquid phase in Cs2CuCl4. We study this regime by using a Majorana fermion representation of the spin-1/2 operators motivated by theoretical and experimental evidence for fermionic excitations in this spin-liquid phase. Within a mean-field theory for the Majorana fermions, we determine the magnetic field dependence of the critical temperature for the crossover from spin-liquid to paramagnetic behavior and we calculate the specific heat and magnetic susceptibility in zero magnetic field. We find that the Majorana fermions can only propagate in one dimension along the direction of the strongest exchange coupling; this reduction of the effective dimensionality of excitations is known as dimensional reduction.
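The Majorana representation used here can be written, in one common convention with three Majorana flavours η^x, η^y, η^z per site obeying {η_i^a, η_j^b} = 2δ^{ab}δ_{ij}, as

```latex
S_{i}^{a} = -\frac{i}{4}\,\epsilon^{abc}\,\eta_{i}^{b}\,\eta_{i}^{c},
```

which reproduces the spin-1/2 algebra [S^a, S^b] = iε^{abc} S^c and the constraint (S^a)² = 1/4 exactly, making it a convenient starting point for the mean-field treatment described above; conventions for the Majorana normalization vary between works.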
The second topic is the behavior of ultrasound propagation and attenuation in the spin-liquid phase of Cs2CuCl4, where we consider longitudinal sound waves along the direction of the strongest exchange coupling. Due to the dimensional reduction of the excitations in the spin-liquid phase, we expect that we can describe the ultrasound physics by a one-dimensional Heisenberg model coupled to the lattice degrees of freedom via the exchange-striction mechanism. For this one-dimensional problem we use the Jordan-Wigner transformation to map the spin-1/2 operators to spinless fermions. We treat the fermions within the self-consistent Hartree-Fock approximation and we calculate the change of the sound velocity and attenuation as a function of magnetic field using a perturbative expansion in the spin-phonon couplings. We compare our theoretical results with experimental data from ultrasound experiments, where we find good agreement between theory and experiment.
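The Jordan-Wigner mapping referred to above takes, in the standard convention, the form

```latex
S_{j}^{z} = c_{j}^{\dagger} c_{j} - \tfrac{1}{2}, \qquad
S_{j}^{+} = c_{j}^{\dagger}\, e^{\, i\pi \sum_{l<j} c_{l}^{\dagger} c_{l}},
```

so that the one-dimensional spin-1/2 chain becomes a model of interacting spinless fermions c_j, to which the self-consistent Hartree-Fock approximation can then be applied.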
Our final topic is the behavior of Cs2CuCl4 in high magnetic fields larger than the saturation field B_c=8.5 T. At zero temperature, Cs2CuCl4 is then fully magnetized and the ground state is therefore a ferromagnet where the excitations have an energy gap. The elementary excitations of this ferromagnetic state are spin-flips (magnons) which behave as hard-core bosons. At finite temperatures there will be thermally excited magnons that interact via the hard-core interaction and via additional exchange interactions. We describe the thermodynamic properties of Cs2CuCl4 at finite temperatures and calculate experimentally observable quantities, e.g., magnetic susceptibility and specific heat. Our approach is based on a mapping of the spin-1/2 operators to hard-core bosons, where we treat the hard-core interaction by the self-consistent ladder approximation and the exchange interactions by the self-consistent Hartree-Fock approximation. We find that our theoretical results for the specific heat are in good agreement with the available experimental data.
High shares of intermittent renewable power generation in a European electricity system will require flexible backup power generation on the dominant diurnal, synoptic, and seasonal weather timescales. The same three timescales are already covered by today's dispatchable electricity generation facilities, which are able to follow the typical load variations on the intra-day, intra-week, and seasonal timescales. This work aims to quantify the changing demand for these three backup flexibility classes in emerging large-scale electricity systems as they transform from low to high shares of variable renewable power generation. A weather-driven model is used that aggregates eight years of wind and solar power generation data as well as load data over Germany and Europe, and splits the backup system required to cover the residual load into three flexibility classes distinguished by their respective maximum rates of change of power output. This modelling shows that the slowly flexible backup system is dominant at low renewable shares, but its optimized capacity decreases and drops close to zero once the average renewable power generation exceeds 50% of the mean load. The medium flexible backup capacities increase for modest renewable shares, peak at around a 40% renewable share, and then continuously decrease to almost zero once the average renewable power generation becomes larger than 100% of the mean load. The dispatch capacity of the highly flexible backup system becomes dominant for renewable shares beyond 50% and reaches its maximum around a 70% renewable share. For renewable shares above 70%, the highly flexible backup capacity in Germany remains at its maximum, whereas it decreases again for Europe. This indicates that for highly renewable large-scale electricity systems the total required backup capacity can only be reduced if countries share their excess generation and backup power.
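The timescale decomposition described above can be illustrated with a minimal sketch. This is not the study's actual model: the synthetic time series, the renewable scaling, and the rolling-mean windows used as a stand-in for the maximum-rate-of-change criterion are all assumptions chosen only to mimic the diurnal, synoptic, and seasonal timescales.

```python
import numpy as np

# Illustrative sketch only -- not the study's actual model. The synthetic
# series, the renewable scaling, and the rolling-mean windows are assumptions.
hours = 8 * 365 * 24              # eight years of hourly data, as in the study
t = np.arange(hours)

load = 70 + 10 * np.sin(2 * np.pi * t / 24)                # diurnal load cycle (GW)
wind = 30 + 15 * np.sin(2 * np.pi * t / (24 * 4))          # synoptic wind variability
solar = np.clip(40 * np.sin(2 * np.pi * t / 24), 0, None)  # daytime-only solar

renewables = 0.5 * (wind + solar)               # toy scaling to a modest share
residual = np.clip(load - renewables, 0, None)  # backup must cover this

def rolling_mean(x, window):
    """Centered moving average as a crude timescale filter."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Slow class: seasonal trend (30-day mean); medium class: synoptic band
# (3-day mean minus the slow part); fast class: the intra-day remainder.
slow = rolling_mean(residual, 30 * 24)
medium = rolling_mean(residual, 3 * 24) - slow
fast = residual - slow - medium

for name, part in [("slow", slow), ("medium", medium), ("fast", fast)]:
    print(f"{name:>6s} backup capacity ~ {part.max():.1f} GW")
```

By construction the three components sum back to the residual load, so the split conserves the total backup requirement while attributing it to different flexibility classes.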
The PANDA experiment will be one of the flagship experiments at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. It is a versatile detector dedicated to topics in hadron physics such as charmonium spectroscopy and nucleon structure. A DIRC counter will deliver hadronic particle identification in the barrel part of the PANDA target spectrometer and will cleanly separate kaons with momenta up to 3.5 GeV/c from a large pion background. An alternative DIRC design option, using wide Cherenkov radiator plates instead of narrow bars, would significantly reduce the cost of the system. Compact fused silica photon prisms have many advantages over the traditional stand-off boxes filled with liquid. This work describes the study of these design options, which are important advancements of the DIRC technology in terms of cost and performance. Several new reconstruction methods were developed and are presented. Prototypes of the DIRC components were built and tested in particle beams, applying the new concepts and approaches. An evaluation of the performance of the designs, feasibility studies with simulations, and a comparison of simulation and prototype tests are presented.
The pseudorapidity (η) and transverse-momentum (pT) distributions of charged particles produced in proton-proton collisions are measured at the centre-of-mass energy √s = 13 TeV. The pseudorapidity distribution in |η| < 1.8 is reported for inelastic events and for events with at least one charged particle in |η| < 1. The pseudorapidity density of charged particles produced in the pseudorapidity region |η| < 0.5 is 5.31 ± 0.18 and 6.46 ± 0.19 for the two event classes, respectively. The transverse-momentum distribution of charged particles is measured in the range 0.15 < pT < 20 GeV/c and |η| < 0.8 for events with at least one charged particle in |η| < 1. The correlation between transverse momentum and particle multiplicity is also investigated by studying the evolution of the spectra with event multiplicity. The results are compared with calculations from PYTHIA and EPOS Monte Carlo generators.
The production of prompt charmed mesons D0, D+ and D∗+, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the centre-of-mass energy per nucleon pair, √sNN, of 2.76 TeV. The production yields for rapidity |y| < 0.5 are presented as a function of transverse momentum, pT, in the interval 1-36 GeV/c for the centrality class 0-10% and in the interval 1-16 GeV/c for the centrality class 30-50%. The nuclear modification factor RAA was computed using a proton-proton reference at √s = 2.76 TeV, based on measurements at √s = 7 TeV and on theoretical calculations. A maximum suppression by a factor of 5-6 with respect to binary-scaled pp yields is observed for the most central collisions at pT of about 10 GeV/c. A suppression by a factor of about 2-3 persists at the highest pT covered by the measurements. At low pT (1-3 GeV/c), the RAA has large uncertainties that span the range 0.35 (factor of about 3 suppression) to 1 (no suppression). In all pT intervals, the RAA is larger in the 30-50% centrality class compared to central collisions. The D-meson RAA is also compared with that of charged pions and, at large pT, charged hadrons, and with model calculations.
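The nuclear modification factor used in these measurements follows the standard definition: the per-event yield in nucleus-nucleus collisions divided by the binary-collision-scaled pp cross section,

```latex
R_{AA}(p_{T}) = \frac{\mathrm{d}N_{AA}/\mathrm{d}p_{T}}
                     {\langle T_{AA} \rangle \,
                      \mathrm{d}\sigma_{pp}/\mathrm{d}p_{T}},
```

where ⟨T_AA⟩ is the average nuclear overlap function for the given centrality class. R_AA = 1 corresponds to the absence of nuclear effects; values below unity indicate suppression, e.g. from in-medium parton energy loss.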
Measurement of D+s production and nuclear modification factor in Pb–Pb collisions at √sNN = 2.76 TeV
(2016)
The production of prompt D+s mesons was measured for the first time in collisions of heavy nuclei with the ALICE detector at the LHC. The analysis was performed on a data sample of Pb-Pb collisions at a centre-of-mass energy per nucleon pair, √sNN, of 2.76 TeV in two different centrality classes, namely 0-10% and 20-50%. D+s mesons and their antiparticles were reconstructed at mid-rapidity from their hadronic decay channel D+s → φπ+, with φ → K−K+, in the transverse momentum intervals 4 < pT < 12 GeV/c and 6 < pT < 12 GeV/c for the 0-10% and 20-50% centrality classes, respectively. The nuclear modification factor RAA was computed by comparing the pT-differential production yields in Pb-Pb collisions to those in proton-proton (pp) collisions at the same energy. This pp reference was obtained using the cross section measured at √s = 7 TeV and scaled to √s = 2.76 TeV. The RAA of D+s mesons was compared to that of non-strange D mesons in the 10% most central Pb-Pb collisions. At high pT (8 < pT < 12 GeV/c) a suppression of the D+s-meson yield by a factor of about three, compatible within uncertainties with that of non-strange D mesons, is observed. At lower pT (4 < pT < 8 GeV/c) the values of the D+s-meson RAA are larger than those of non-strange D mesons, although compatible within uncertainties. The production ratios D+s/D0 and D+s/D+ were also measured in Pb-Pb collisions and compared to their values in proton-proton collisions.
Two-particle angular correlations between trigger particles in the forward pseudorapidity range (2.5 < |η| < 4.0) and associated particles in the central range (|η| < 1.0) are measured with the ALICE detector in p-Pb collisions at a nucleon-nucleon centre-of-mass energy of 5.02 TeV. The trigger particles are reconstructed using the muon spectrometer, and the associated particles by the central barrel tracking detectors. In high-multiplicity events, the double-ridge structure, previously discovered in two-particle angular correlations at midrapidity, is found to persist to the pseudorapidity ranges studied in this Letter. The second-order Fourier coefficients for muons in high-multiplicity events are extracted after jet-like correlations from low-multiplicity events have been subtracted. The coefficients are found to have a similar transverse momentum (pT) dependence in p-going (p-Pb) and Pb-going (Pb-p) configurations, with the Pb-going coefficients larger by about 16 ± 6%, rather independent of pT within the uncertainties of the measurement. The data are compared with calculations using the AMPT model, which predicts a different pT and η dependence than observed in the data. The results are sensitive to the parent particle v2 and composition of reconstructed muon tracks, where the contribution from heavy-flavour decays is expected to dominate at pT > 2 GeV/c.
The production of J/ψ and ψ(2S) was measured with the ALICE detector in Pb-Pb collisions at the LHC. The measurement was performed at forward rapidity (2.5<y<4) down to zero transverse momentum (pT) in the dimuon decay channel. Inclusive J/ψ yields were extracted in different centrality classes and the centrality dependence of the average pT is presented. The J/ψ suppression, quantified with the nuclear modification factor (RAA), was studied as a function of centrality, transverse momentum and rapidity. Comparisons with similar measurements at lower collision energy and theoretical models indicate that the J/ψ production is the result of an interplay between color screening and recombination mechanisms in a deconfined partonic medium, or at its hadronization. Results on the ψ(2S) suppression are provided via the ratio of ψ(2S) over J/ψ measured in pp and Pb-Pb collisions.
Transverse momentum (pT) spectra of pions, kaons, and protons up to pT = 20 GeV/c have been measured in Pb-Pb collisions at √sNN = 2.76 TeV using the ALICE detector for six different centrality classes covering 0-80%. The proton-to-pion and the kaon-to-pion ratios both show a distinct peak at pT ≈ 3 GeV/c in central Pb-Pb collisions that decreases towards more peripheral collisions. For pT > 10 GeV/c, the nuclear modification factor is found to be the same for all three particle species in each centrality interval within systematic uncertainties of 10-20%. This suggests there is no direct interplay between the energy loss in the medium and the particle species composition in the hard core of the quenched jet. For pT < 10 GeV/c, the data provide important constraints for models aimed at describing the transition from soft to hard physics.
We report on results obtained with the Event Shape Engineering technique applied to Pb-Pb collisions at √sNN = 2.76 TeV. By selecting events in the same centrality interval, but with very different average flow, different initial state conditions can be studied. We find the effect of the event-shape selection on the elliptic flow coefficient v2 to be almost independent of transverse momentum pT, as expected if this effect is due to fluctuations in the initial geometry of the system. Charged hadron, pion, kaon, and proton transverse momentum distributions are found to be harder in events with higher-than-average elliptic flow, indicating an interplay between radial and elliptic flow.
Measurement of an excess in the yield of J/ψ at very low pT in Pb–Pb collisions at √sNN = 2.76 TeV
(2016)
We report on the first measurement of an excess in the yield of J/ψ at very low transverse momentum (pT < 0.3 GeV/c) in peripheral hadronic Pb-Pb collisions at √sNN = 2.76 TeV, performed by ALICE at the CERN LHC. Remarkably, the measured nuclear modification factor of J/ψ in the rapidity range 2.5 < y < 4 reaches about 7 (2) in the pT range 0-0.3 GeV/c in the 70-90% (50-70%) centrality class. The J/ψ production cross section associated with the observed excess is obtained under the hypothesis that coherent photoproduction of J/ψ is the underlying physics mechanism. If confirmed, the observation of J/ψ coherent photoproduction in Pb-Pb collisions at impact parameters smaller than twice the nuclear radius opens new theoretical and experimental challenges and opportunities. In particular, coherent photoproduction accompanying hadronic collisions may provide insight into the dynamics of photoproduction and nuclear reactions, as well as become a novel probe of the Quark-Gluon Plasma.
We report on measurements of a charge-dependent flow using a novel three-particle correlator with ALICE in Pb-Pb collisions at the LHC, and discuss the implications for observation of local parity violation and the Chiral Magnetic Wave (CMW) in heavy-ion collisions. Charge-dependent flow is reported for different collision centralities as a function of the event charge asymmetry. While our results are in qualitative agreement with expectations based on the CMW, the nonzero signal observed in higher harmonics correlations indicates a possible significant background contribution. We also present results on a differential correlator, where the flow of positive and negative charges is reported as a function of the mean charge of the particles and their pseudorapidity separation. We argue that this differential correlator is better suited to distinguish the differences in positive and negative charges expected due to the CMW and the background effects, such as local charge conservation coupled with strong radial and anisotropic flow.
Three- and four-pion Bose-Einstein correlations are presented in pp, p-Pb, and Pb-Pb collisions at the LHC. We compare our measured four-pion correlations to the expectation derived from two- and three-pion measurements. Such a comparison provides a method to search for coherent pion emission. We also present mixed-charge correlations in order to demonstrate the effectiveness of several analysis procedures such as Coulomb corrections. Same-charge four-pion correlations in pp and p-Pb appear consistent with the expectations from three-pion measurements. However, the presence of non-negligible background correlations in both systems prevents a conclusive statement. In Pb-Pb collisions, we observe a significant suppression of three- and four-pion Bose-Einstein correlations compared to expectations from two-pion measurements. There appears to be no centrality dependence of the suppression within the 0-50% centrality interval. The origin of the suppression is not clear. However, by postulating either coherent pion emission or large multibody Coulomb effects, the suppression may be explained.
We have performed the first measurement of the coherent ψ(2S) photoproduction cross section in ultra-peripheral Pb-Pb collisions at the LHC. This charmonium excited state is reconstructed via the ψ(2S) → l+l− and ψ(2S) → J/ψ π+π− decays, where the J/ψ decays into two leptons. The analysis is based on an event sample corresponding to an integrated luminosity of about 22 μb−1. The cross section for coherent ψ(2S) production in the rapidity interval −0.9 < y < 0.9 is dσcohψ(2S)/dy = 0.83 ± 0.19 (stat+syst) mb. The ψ(2S) to J/ψ coherent cross-section ratio is 0.34 +0.08 −0.07 (stat+syst). The obtained results are compared to predictions from theoretical models.
We report on two-particle charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions as a function of the pseudorapidity and azimuthal angle difference, Δη and Δφ respectively. These correlations are studied using the balance function that probes the charge creation time and the development of collectivity in the produced system. The dependence of the balance function on the event multiplicity as well as on the trigger and associated particle transverse momentum (pT) in pp, p-Pb, and Pb-Pb collisions at √sNN = 7, 5.02, and 2.76 TeV, respectively, is presented. In the low transverse momentum region, for 0.2 < pT < 2.0 GeV/c, the balance function becomes narrower in both Δη and Δφ directions in all three systems for events with higher multiplicity. The experimental findings favor models that either incorporate some collective behavior (e.g. AMPT) or different mechanisms that lead to effects that resemble collective behavior (e.g. PYTHIA8 with color reconnection). For higher values of transverse momenta the balance function becomes even narrower but exhibits no multiplicity dependence, indicating that the observed narrowing with increasing multiplicity at low pT is a feature of bulk particle production.
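In its commonly used form, the balance function correlates each trigger charge with the excess of opposite-sign over same-sign associated particles, normalized per trigger particle:

```latex
B(\Delta\eta, \Delta\varphi) = \frac{1}{2}\left[
  \frac{N_{+-} - N_{++}}{N_{+}} +
  \frac{N_{-+} - N_{--}}{N_{-}}
\right],
```

where N_{+-}(Δη, Δφ) counts pairs with a positive trigger and a negative associated particle (and analogously for the other charge combinations), and N_+ and N_- are the numbers of positive and negative trigger particles. A narrower balance function indicates that balancing charge pairs are created closer together in phase space, or are focused by collective flow.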
The ALICE Collaboration is collecting data with both Minimum Bias and Muon triggers in pp collisions at √s = 13 TeV in the ongoing LHC Run II. Excellent tracking and PID performance has been obtained in the central barrel and in the muon spectrometer. First results on the charged-particle pseudorapidity density and on identified-particle transverse momentum spectra at √s = 13 TeV are presented.
Observations show that, at the beginning of their existence, neutron stars are accelerated briskly to velocities of up to a thousand kilometers per second. We argue that this remarkable effect can be explained as a manifestation of quantum anomalies on astrophysical scales. To theoretically describe the early stage in the life of neutron stars we use hydrodynamics as a systematic effective-field-theory framework. Within this framework, anomalies of the Standard Model of particle physics as underlying microscopic theory imply the presence of a particular set of transport terms, whose form is completely fixed by theoretical consistency. The resulting chiral transport effects in proto-neutron stars enhance neutrino emission along the internal magnetic field, and the recoil can explain the order of magnitude of the observed kick velocities.
The presented work describes the measurement of neutral pions in pp collisions at √s = 8 TeV. The measurement can serve as a reference for Pb-Pb collisions and thus contribute to the investigation of the properties of the QGP. The analysis uses data from the ALICE EMCal detector recorded in 2012. The EMCal measures the deposited energy and the position of photons, grouping the deposited energy into so-called clusters. By combining clusters from the same collision, π0 candidates are reconstructed. The primary vertex is determined with the ITS in order to express the distribution of cluster pairs as a function of minv and pT. The π0 candidates are then divided into pT intervals. The uncorrelated background is subtracted using the mixed-event method. The extracted π0 signal is parametrized to determine the peak position. Based on this parametrization, the correlated background is subtracted and the signal is integrated in a defined window around the peak position, yielding a pT-dependent spectrum. The spectrum is computed both for the measured data and for simulated data. The simulation makes it possible to correct the spectrum for the detector acceptance and the efficiency of the analysis methods. The corrected spectrum is computed for the standard analysis as well as for systematic variations; from the resulting differences, a systematic uncertainty of the result can be estimated.
Ergebnis dieser Arbeit ist der Lorentz-invariante Yield (vgl. Abbildung 23) als Funktion von pT. Das Raw Yield wurde dazu mithilfe von Simulationen korrigiert und systematische Fehler wurden abgeschätzt.
Die Messung kann mit anderen π0 Analysen verglichen werden. Für π0 Analysen können neben dem EMCal auch weitere Detektoren verwendet werden. Eine dieser Analysen verwendet eine Rekonstruktion der π0 durch konvertierte Photonen, die sogenannte Photon-Conversion-Method (PCM). Außerdem sind Analysen mit dem PHOS Kalorimeter und hybride Methoden möglich, beispielsweise PCM-EMCal.
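As a minimal sketch of the kinematics behind the cluster-pair reconstruction described above (the photon energies and opening angle below are illustrative values, not analysis data):

```python
import math

def invariant_mass(e1, e2, opening_angle):
    """Invariant mass of a photon (cluster) pair, treating photons as
    massless: m = sqrt(2 * E1 * E2 * (1 - cos(theta_12)))."""
    return math.sqrt(2.0 * e1 * e2 * (1.0 - math.cos(opening_angle)))

# Round trip for a symmetric decay of two 1 GeV photons: an opening
# angle of 2*asin(m / 2E) should reproduce m = 0.135 GeV (~ pi0 mass).
theta = 2.0 * math.asin(0.135 / 2.0)
m = invariant_mass(1.0, 1.0, theta)
```

Filling a histogram of this quantity for all cluster pairs in an event, and subtracting the combinatorial background estimated from pairs built across different events, is the essence of the mixed-event method mentioned above.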
Light scalar mesons can be understood as dynamically generated resonances. They arise as 'companion poles' in the propagators of quark-antiquark seed states when hadronic loop contributions to the self-energies of the latter are taken into account. Such a mechanism may explain the overpopulation of the scalar sector: more resonances with total spin J=0 exist than can be described within a quark model.
Along this line, we study an effective Lagrangian approach in which the isovector state a_{0}(1450) couples to pseudoscalar mesons via both non-derivative and derivative interactions. It is demonstrated that the propagator has two poles: a companion pole corresponding to a_{0}(980) and a pole of the seed state a_{0}(1450). The positions of these poles are in quantitative agreement with experimental data. In addition, we investigate similar models for the isodoublet state K_{0}^{*}(1430) by performing a fit to pion-kaon phase-shift data in the I=1/2, J=0 channel. We show that, in order to fit the data accurately, a companion pole for the K_{0}^{*}(800), that is, the light kappa resonance, is required. A large-N_{c} study confirms that both resonances below 1 GeV are predominantly four-quark states, while the heavy states are quarkonia.
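As a purely illustrative toy (not the model used in the work, which dresses quark-antiquark seed states with derivative and non-derivative couplings), the following sketch shows how a propagator pole can be located numerically once a self-energy is specified; the bare mass and coupling are invented numbers:

```python
import cmath

M0 = 1.45  # GeV, toy bare seed mass (hypothetical value)
g2 = 0.5   # GeV, toy coupling strength (hypothetical value)

def d_inv(s):
    """Toy inverse propagator with a schematic self-energy
    Pi(s) = -i * g2 * sqrt(s) mimicking an open decay channel."""
    return M0**2 - s - 1j * g2 * cmath.sqrt(s)

# Solve d_inv(s) = 0 by fixed-point iteration s -> M0^2 - i g2 sqrt(s);
# the map is contracting for these parameters, so it converges.
s = complex(M0**2, 0.0)
for _ in range(200):
    s = M0**2 - 1j * g2 * cmath.sqrt(s)

pole = cmath.sqrt(s)  # Re(pole) ~ mass, -2*Im(pole) ~ width (GeV)
```

In the actual analysis the self-energy generates a second, dynamically produced pole in addition to the dressed seed pole; this toy only demonstrates the numerical pole search itself.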
Using hadronic multiplicities, we compare the reconstructed hadronization conditions in relativistic nuclear collisions, in terms of temperature and baryon chemical potential, with lattice QCD calculations over the nucleon-nucleon centre-of-mass energy range 4.7-2760 GeV. We obtain hadronization temperatures and baryon chemical potentials from fits to the measured multiplicities, correcting for the effect of post-hadronization rescattering. The post-hadronization modification factors are calculated by means of a coupled hydrodynamic-transport model simulation under the same conditions of approximately isothermal and isochemical decoupling as assumed in the statistical hadronization model fits to the data. The fit quality is considerably better than without rescattering corrections, as already found in previous work. The curvature κ of the obtained "true" hadronization pseudo-critical line is found to be 0.0048 ± 0.0026, in agreement with lattice QCD estimates; the pseudo-critical temperature at vanishing baryon chemical potential is found to be 164.3 ± 1.8 MeV.
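The quoted curvature refers to the usual quadratic parametrization of the pseudo-critical line; a sketch with the fitted values (the parametrization convention is assumed here, as the abstract does not spell it out):

```python
def pseudo_critical_temperature(mu_b, t0=164.3, kappa=0.0048):
    """Quadratic parametrization of the pseudo-critical line,
    T(mu_B) = T0 * (1 - kappa * (mu_B / T0)**2), with T and mu_B
    in MeV and T0, kappa set to the values quoted in the abstract."""
    return t0 * (1.0 - kappa * (mu_b / t0) ** 2)
```

Evaluating at, say, mu_B = 300 MeV gives a temperature only a few MeV below T0, illustrating how small the fitted curvature is in the energy range covered.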
ALICE is the dedicated heavy-ion experiment at the Large Hadron Collider at CERN. After a two-year-long shutdown, the LHC restarted its physics programme in June 2015 with proton-proton collisions at √s = 13 TeV and Pb-Pb collisions at √sNN = 5.02 TeV, the highest centre-of-mass energies ever reached in the laboratory. Recent results and future perspectives for ALICE are presented.
Cellular informational and metabolic processes are propagated by specific membrane fusions governed by soluble N-ethylmaleimide-sensitive factor attachment protein receptors (SNAREs). The SNARE protein Ykt6 is highly expressed in brain neurons and plays a critical role in membrane trafficking. Studies have suggested that Ykt6 undergoes a conformational change at the interface between its longin domain and its SNARE core. In this work, we study the conformational state distributions and dynamics of rat Ykt6 by means of single-molecule Förster resonance energy transfer (smFRET) and fluorescence cross-correlation spectroscopy (FCCS). We observed that the intramolecular conformational dynamics between the longin domain and the SNARE core occur on a timescale of ~200 μs. Furthermore, these dynamics can be regulated, and even eliminated, by the presence of the lipid dodecylphosphocholine (DPC). Our molecular dynamics (MD) simulations show that the SNARE core exhibits a flexible structure while the longin domain remains relatively stable in the apo state. Combining single-molecule experiments and theoretical MD simulations, we provide the first quantitative account of Ykt6 dynamics together with a qualitative explanation of its functional conformational change.
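smFRET reports donor-acceptor distances through the standard Förster relation; a minimal sketch (the Förster radius R0 below is a typical illustrative value, not one measured in this work):

```python
def fret_efficiency(r, r0):
    """Foerster relation: transfer efficiency E = 1 / (1 + (r/R0)^6),
    which maps measured smFRET efficiencies to dye-dye distances."""
    return 1.0 / (1.0 + (r / r0) ** 6)

# At r = R0 the efficiency is 0.5 by construction; R0 depends on the
# dye pair and is typically around 5 nm (illustrative value only).
e_half = fret_efficiency(5.0, 5.0)
```

The steep sixth-power dependence is what makes smFRET sensitive to conformational changes of a few nanometres, such as the longin-domain/SNARE-core rearrangement studied here.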
One of the important consequences of the Hagedorn statistical bootstrap model is the prediction of a limiting temperature Tcrit for hadronic systems, colloquially known as the Hagedorn temperature. According to Hagedorn, this effect should be observed in hadron spectra obtained in infinite equilibrated nuclear matter rather than in relativistic heavy-ion collisions. We present results of microscopic model calculations for infinite nuclear matter, simulated by a box with periodic boundary conditions. The limiting temperature indeed appears in the model calculations; its origin is traced to strings and the many-body decays of resonances.
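The origin of the limiting temperature can be made concrete with a toy calculation (illustrative numbers, not from the work itself): for an exponential Hagedorn mass spectrum ρ(m) ~ exp(m/T_H), the Boltzmann-weighted mass integral stays finite only for T < T_H.

```python
import math

T_H = 0.160  # GeV, illustrative Hagedorn temperature

def boltzmann_weighted_spectrum(m, temp):
    """Schematic integrand rho(m) * exp(-m/temp) for an exponential
    Hagedorn mass spectrum rho(m) ~ exp(m/T_H)."""
    return math.exp(m / T_H) * math.exp(-m / temp)

def tail_sum(temp, m_max=50.0, dm=0.01):
    """Crude numerical integral of the integrand up to m_max; it stays
    finite for temp < T_H but grows without bound for temp > T_H."""
    return sum(boltzmann_weighted_spectrum(i * dm, temp) * dm
               for i in range(1, int(m_max / dm)))
```

Raising the cutoff m_max barely changes the result below T_H but lets it blow up above T_H, which is the divergence that caps the temperature of the hadronic system.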
The first observation of the competitive double-γ decay process, a second-order electromagnetic decay mode, is presented. The 662-keV decay transition from the 11/2− isomer of 137Ba to its ground state proceeds at a fraction of 2 × 10−6 by the simultaneous emission of two γ quanta instead of one. The observed angular correlation and energy distribution of the coincident γ quanta are well described by a dominant M2-E2 and a minor E3-M1 contribution to the double-γ decay branch. The data are also well reproduced by a calculation using the Quasiparticle Phonon Model.
The aim of this work was the investigation of a new prototype for the transition radiation detector (TRD) of the future CBM experiment. Since the TRD must be particularly fast to study the quark-gluon plasma at high baryon densities and high collision rates, a prototype with a small gas volume and no drift region was developed. This geometry, however, reduces the stability of the gas gain, because at such small distances the electric field in the chamber depends on deformations of the thin cathode window. A promising modified wire geometry was therefore introduced: additional field wires were placed between the anode wires to stabilize the electric field in the gas-amplification region.
The new prototype with alternating high voltage, a thickness of 8 mm and an active area of 15 x 15 cm2 was tested in the laboratory with a 55Fe source.
For this purpose, current measurements and a spectral analysis were performed for 25 different positions of the source in front of the chamber, both with the new chamber and with a standard chamber as a reference. The positive expectations associated with the new chamber were consistently confirmed. For both the current and the energy measurements, a significant improvement in the stability of the gas gain was observed: variations of more than 60% across the measurement points for the standard chamber were reduced to below 15% with the alternating-high-voltage chamber. The field wires also stabilize the electric field under variations of the differential pressure, which is associated with the deflection of the foil window. An analysis of the energy resolution of the spectra recorded with the prototypes likewise confirms the stabilizing effect. However, no additional improvement was observed when a negative voltage was applied to the field wires. Measurements with a second chamber with an asymmetric geometry, i.e. with the wire plane shifted towards the rear cathode, showed no further stabilization either. Measurements of the currents induced on the field wires show that they amount to about one third of the anode currents, increasing slightly both with higher field-wire voltage and in the measurement with the asymmetric chamber. The field-wire currents are connected with the motion of the ions in the chamber, which can disturb the electric field. With the introduction of the field wires, part of the ions drift towards them instead of traversing the chamber all the way to the cathodes.
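The quoted variation figures can be read as the peak-to-peak spread of the measured currents relative to their mean; that definition is an assumption here, as the abstract does not spell out the exact metric:

```python
def gain_variation_percent(currents):
    """Peak-to-peak spread of the measured currents (a proxy for the
    gas gain) over the source positions, relative to the mean, in
    percent. NOTE: the exact metric behind the quoted 60% -> 15%
    improvement is assumed, not stated in the abstract."""
    mean = sum(currents) / len(currents)
    return 100.0 * (max(currents) - min(currents)) / mean

# Hypothetical example: currents spanning 0.9..1.1 (arbitrary units)
# over the source positions correspond to a 20% variation.
spread = gain_variation_percent([0.9, 1.0, 1.05, 1.1, 0.95])
```

With such a metric, the measurement campaign reduces to computing one number per chamber from the 25 source positions and comparing the standard and field-wire geometries.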
The positive results for the chamber with alternating wires are now the starting point for further steps. Larger chambers with an area of 60 x 60 cm2, as they will also be used in the final experiment, have already been built and tested in a mixed electron-pion beam at the PS (Proton Synchrotron) and with a lead target at the SPS (Super Proton Synchrotron) at CERN. For these, the thickness of the gas volume was reduced further, to 7 mm, which increases the speed of the detector but again challenges the stability of the gas gain. The data are currently being analysed. A further analysis based on the pad readout in the laboratory is planned. Here the distribution of a signal over the pads (pad response function) is of particular importance; it is influenced by the motion of the ions and thus by the geometry of the electric field. The introduction of the field wires plays an essential role here; in particular, the spacing between the anode wires is now 5 mm, whereas it was 2-3 mm in the previous generations.
The signal shape is also of interest. The readout electronics and the data-processing algorithms, which are likewise under development, are tailored to the known signal shape of a standard prototype; a changed shape would have to be taken into account accordingly to obtain meaningful results. The analyses in this work show that the signal shape does not fundamentally differ from that of the standard prototype. The drift times of the electrons from the avalanche are also important, as they play a decisive role for the speed of the detector. With the introduction of the field wires they largely remain, as before, within the range of a standard prototype with a corresponding gas-volume thickness of 8 mm, up to 150 ns, but are followed by a very slow decay with electron drift times of up to 450 ns [47]. An improvement is possible with a smaller gas volume: for an anode-cathode distance of 3 mm, the maximum drift times decrease to 300 ns. Another alternative is to apply a negative voltage to the entrance window.
Development and Test of a Superconducting 217 MHz CH Cavity for the Demonstrator Project at GSI
(2016)
In recent decades, the range of applications of linear accelerators for protons and heavy ions has grown steadily, particularly in the low- and medium-energy regime. Most of these now well-established applications have been in synchrotron injection or the post-acceleration of radioactive ion beams. In addition, the development of novel superconducting high-power linear-accelerator cavities has been pushed forward strongly for some time; these are intended above all for research at spallation neutron sources, for isotope production, and for the transmutation of long-lived waste from fission reactors. The CH cavity developed at the Institute for Applied Physics (IAP) of Goethe University Frankfurt is ideally suited for such high-power applications. It is the first multi-cell structure for the low- and medium-energy regime and can be operated both normal- and superconducting. So far, a superconducting 360 MHz CH prototype as well as a superconducting 325 MHz CH structure optimized for high power have been tested successfully at cryogenic temperatures without beam. To continue research in nuclear physics, nuclear chemistry and, above all, in the field of superheavy elements (SHE), the construction of a new superconducting, continuous-wave linear accelerator is planned at GSI. The core of the future cw LINAC is based on superconducting 217 MHz CH cavities, which are to provide an adequate particle beam of up to 7.5 MeV/u for SHE synthesis. On the way to the realization of the planned cw LINAC, the implementation of the first section of the full accelerator was decided within the Demonstrator project. The project focuses on demonstrating operational readiness in a realistic accelerator environment and, in particular, on the first operation of a superconducting CH cavity with beam. In the present work, the first superconducting 217 MHz CH cavity for the Demonstrator project was developed and produced, and its high-power performance was tested in a vertical cryostat at 4.2 K. The main emphasis lay on the RF design of the cavity, the accompanying tuning measures during production, and the first power tests under cryogenic conditions. Further focal points were the compact design, effective tuning, surface preparation, and beam operation of the cavity with a cw-capable 5 kW high-power coupler. The implementation of the cavity was based on the geometric concept of the superconducting seven-cell 325 MHz CH structure. Its electromagnetic and structural-mechanical design was carried out with the simulation programs ANSYS Multiphysics and CST Studio Suite. To feed the RF power into the cavity with the required coupling strength during test and beam operation, different coupler antennas were designed for each case. To reach the required target frequency, a procedure was developed comprising the necessary measurement and work steps during the individual production phases. Accordingly, a series of intermediate measurements was carried out at the manufacturer during production, in order to steer the frequency evolution within the respective fabrication steps and to validate previous simulation values. All investigated parameters were reproduced in good agreement with the simulations, and the target frequency of the cavity was finally reached. After the final surface preparations, the cavity was prepared for a vertical cold test in a new cryogenic test environment inside the experimental hall of the IAP.
The cavity was then evacuated, cooled down to 4.2 K and conditioned, after which its intrinsic quality factor was determined. It amounted to 1.44 x 10^9, the highest quality factor ever reached with a superconducting CH structure. A maximum accelerating gradient of 7 MV/m was reached in continuous-wave operation, corresponding to an effective voltage of 4.2 MV. The associated peak magnetic and electric surface fields were 39.3 mT and 43.5 MV/m, respectively. No thermal breakdown occurred during the entire power test, indicating good thermal properties of the cavity. However, the measured curve showed an early drop of the quality factor above 2.5 MV/m, caused by anomalous power losses due to field emission. This was to be expected given the incomplete surface treatment of the cavity, since for technical reasons the high-pressure rinsing could only be performed along the beam axis. Nevertheless, the design specification of the planned cw LINAC regarding the quality factor at 5.5 MV/m was exceeded by a factor of 2.
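The quoted gradient and effective voltage are linked by V_eff = E_acc · L_eff; the implied effective length of 0.6 m is inferred here from the two numbers, not stated in the abstract:

```python
# Relation between accelerating gradient and effective voltage for a
# cavity of effective length L_eff: V_eff = E_acc * L_eff.
e_acc = 7.0            # MV/m, maximum cw accelerating gradient (quoted)
v_eff = 4.2            # MV, corresponding effective voltage (quoted)
l_eff = v_eff / e_acc  # m, implied effective length (inferred: 0.6 m)
```

The same relation gives the design working point: at the cw-LINAC specification gradient of 5.5 MV/m, this cavity would provide an effective voltage of about 3.3 MV.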
The positive results of the simulations and measurements show that the requirements of the Demonstrator project, in particular regarding the required accelerating gradient, are met by the developed superconducting 217 MHz CH cavity. This work thus contributed significantly to the implementation of the Demonstrator project and the realization of the planned cw LINAC, and paved the way for beam operation of the cavity.
Within the FAIR project, a novel prototype of a non-destructive bunch shape monitor (BSM) was developed at the GSI UNILAC. The goal is a reliable diagnostic device that can investigate the longitudinal structure of the ion bunches inside the LINAC. An effective time resolution well below 100 ps is required, with as few macropulse averages as possible. After successful commissioning, the BSM prototype is to serve as a feasibility study for a further non-invasive device for the planned proton LINAC at FAIR, which requires a time resolution of 10 ps.
Numerical simulations of materials exposed to the high-current ion beam showed very high thermal stress; a non-destructive diagnostic approach was therefore pursued. The design is based on the production of secondary electrons through collisions of the beam with the residual gas in the beam pipe. By applying a homogeneous high-voltage potential of up to -31 kV, an electron beam is created that carries the temporal structure of the ion bunch. As the electrons pass through an RF deflector, resonantly coupled to the 36 MHz of the accelerator, their temporal information is converted into a spatial intensity distribution. The electron distribution is then detected on an imaging MCP-phosphor detector by a CCD camera and converted into the bunch shape.
Extensive studies of the BSM characteristics yielded a best resolution of 37 ± 6.3 ps at a simultaneously acceptable intensity on the MCP detector. Among other things, stable single-shot measurements were performed that required only a single macropulse for the profile measurement, instead of averaging over typically 8-32 pulses.
By systematically manipulating the bunch length with a rebuncher, non-Gaussian profiles from 280 ps to 650 ps were detected and used as a study for an emittance determination. Depending on the analysis method, emittance values from εGauss = 1.42 ± 0.14 keV/u ns to εSD = 3.03 ± 0.33 keV/u ns were obtained.
Furthermore, a finite-element model was created to determine the time structure of the secondary electrons within the electron-optical system. For the setup with the highest resolution of 37 ps, an additional time broadening of 5.6 ps was found, which only marginally degrades the experimentally determined resolution.
The non-destructive BSM provides a sufficiently high time resolution for detailed investigations of the longitudinal bunch structure without adversely affecting the ion beam. Advanced measurements, such as longitudinal emittance determination and macropulse analyses, are possible and will help to better understand and further optimize the LINAC structures.
Although the changed beam parameters must be taken into account when transferring the working principle to the planned proton LINAC, the results, such as the time-structure studies and the achieved phase resolution of 0.5° at 36 MHz, show that time resolutions down to 10 ps, while maintaining the phase resolution, are possible for a new BSM prototype.
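The link between the quoted phase resolution and the time resolution is a simple conversion at the RF frequency:

```python
def phase_to_time(delta_phi_deg, f_hz):
    """Convert a phase resolution at RF frequency f into a time
    resolution: dt = (delta_phi / 360 deg) / f."""
    return (delta_phi_deg / 360.0) / f_hz

# 0.5 deg at 36 MHz corresponds to roughly 39 ps, consistent with
# the measured best resolution of 37 +- 6.3 ps.
dt = phase_to_time(0.5, 36e6)
```

The same relation shows why a 10 ps target at a higher RF frequency is plausible: maintaining the 0.5° phase resolution at a frequency a few times higher directly shrinks the time resolution by the same factor.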