Refine
Year of publication
- 2016 (161)
Document Type
- Preprint (71)
- Article (67)
- Doctoral Thesis (20)
- Conference Proceeding (3)
Language
- English (161)
Has Fulltext
- yes (161)
Is part of the Bibliography
- no (161)
Keywords
- 140Ce (1)
- Atoms (1)
- Biological physics (1)
- Centrality Class (1)
- Centrality Selection (1)
- Coincidence measurement (1)
- D-wave (1)
- Energy system design (1)
- Flexible backup power (1)
- Hydrogen ground state (1)
Institute
- Physik (161)
The Large Hadron Collider (LHC) is the biggest and most powerful particle accelerator in the world, designed to collide two proton beams with a particle momentum of 7 TeV/c each. The energy of 362 MJ stored in each beam is sufficient to melt 500 kg of copper or to evaporate about 300 litres of water. An accidental release of even a small fraction of the beam energy can cause severe damage to accelerator equipment, so reliable machine protection systems are necessary to operate the accelerator complex safely. To design a machine protection system, it is essential to know the damage potential of the stored beam and the consequences of a failure. One catastrophic failure scenario is the loss of the entire beam in the aperture due to a problem with the beam dumping system.
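The copper figure can be sanity-checked with a few lines of arithmetic; the material constants below are textbook values assumed here (not taken from the thesis), and all losses are ignored:

```python
# rough order-of-magnitude check: how much copper can 362 MJ melt?
E_beam = 362e6            # stored beam energy, J
c_cu = 385.0              # specific heat of copper, J/(kg*K), assumed value
dT = 1358.0 - 293.0       # room temperature to melting point, K
L_fus = 205e3             # latent heat of fusion of copper, J/kg, assumed value
mass = E_beam / (c_cu * dT + L_fus)  # kg of copper melted, ignoring losses
# comes out at a few hundred kg, consistent with the ~500 kg quoted above
```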
This thesis presents simulation studies, the results of a benchmarking experiment, and a detailed target investigation for this failure case. In the experiment, solid copper cylinders were irradiated with the 440 GeV proton beam delivered by the Super Proton Synchrotron (SPS) at the High Radiation to Materials (HiRadMat) facility at CERN. The experiment confirmed the existence of the so-called hydrodynamic tunneling phenomenon for the first time. Detailed numerical simulations of particle-matter interactions were carried out with FLUKA and with the two-dimensional hydrodynamic code BIG2. Excellent agreement was found between the experimental and simulation results, which validates the predictions for the 7 TeV beam of the LHC. The hydrodynamic tunneling effect is of considerable importance for the design of machine protection systems for accelerators with high stored beam energy. In addition, this thesis presents the first studies of the damage potential with beam parameters of the Future Circular Collider (FCC).
To detect beam losses due to fast failures, fast beam instrumentation is essential. Diamond-based particle detectors are able to detect beam losses on a nanosecond time scale. Specially designed diamond detectors were used in the experiment mentioned above. Their efficiency and response have been studied for the first time over five orders of magnitude in bunch intensity with electrons at the Beam Test Facility (BTF) at INFN Frascati, Italy. The results of these measurements are discussed in this thesis. Furthermore, an overview of the applications of diamond-based particle detectors in damage experiments and in LHC operation is presented.
The elliptic flow of heavy-flavour decay electrons is measured at midrapidity (|η| < 0.8) in three centrality classes (0-10%, 10-20% and 20-40%) of Pb-Pb collisions at √sNN = 2.76 TeV with ALICE at the LHC. The collective motion of the particles inside the medium created in heavy-ion collisions can be analyzed via a Fourier decomposition of the azimuthally anisotropic particle distribution with respect to the event plane. Elliptic flow is the component of the collective motion characterized by the second harmonic of this decomposition. It is a direct consequence of the initial geometry of the collision, which is translated into a particle-number anisotropy by the strong interactions inside the medium. The amount of elliptic flow of low-momentum heavy quarks is related to their thermalization with the medium, while high-momentum heavy quarks provide a way to assess the path-length dependence of the energy loss induced by the interaction with the medium.
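The Fourier decomposition referred to here is the standard one; writing φ for the particle azimuth and Ψn for the n-th harmonic symmetry-plane (event-plane) angle:

```latex
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[ n\left( \varphi - \Psi_n \right) \right],
\qquad
v_2 = \left\langle \cos\!\left[ 2\left( \varphi - \Psi_2 \right) \right] \right\rangle ,
```

so the elliptic-flow coefficient v2 is the second Fourier coefficient of the azimuthal distribution relative to the event plane.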
The heavy-quark elliptic flow is measured using a three-step procedure.
First, the v2 coefficient of the inclusive electrons is measured using the event-plane and scalar-product methods. The electron background from light flavours and direct photons is then simulated by calculating the decay kinematics of the electron sources, which are initialised with their respective measured spectra. The final result of this work emerges by subtracting the background from the inclusive measurement. A significant elliptic flow is observed after this subtraction. Its value decreases from low to intermediate pT and from semi-central to central collisions.
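The subtraction in the last step follows from the additivity of v2 under yield weighting, N_incl·v2_incl = N_sig·v2_sig + N_bkg·v2_bkg; a minimal sketch, with purely illustrative numbers:

```python
# v2 of the signal after subtracting a simulated background component;
# f_bkg = N_bkg / N_incl is the simulated background fraction
def v2_signal(v2_incl, v2_bkg, f_bkg):
    return (v2_incl - f_bkg * v2_bkg) / (1.0 - f_bkg)
```

For instance, with v2_incl = 0.08, v2_bkg = 0.05 and a 40% background fraction, the signal v2 evaluates to 0.10.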
The results are described by model calculations with significant elastic interactions of the heavy quarks with the expanding strongly-interacting medium.
Study of hard-core repulsive interactions in a hadronic gas from a comparison with lattice QCD
(2016)
We study the influence of hard-core repulsive interactions within the Hadron-Resonance Gas model in comparison to first-principles calculations performed on the lattice. We check the effect of a bag-like parametrization of the particle eigenvolume on flavor correlators, looking for an extension of the agreement with lattice simulations up to higher temperatures, as was already pointed out in an analysis of hadron yields measured by the ALICE experiment. Hints of a flavor-dependent eigenvolume are found.
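In the bag-model picture the hadron mass scales with its volume, m_i = 4B v_i, so a bag-like parametrization assigns each species the eigenvolume (generic form; the exact normalization used in the paper is an assumption here):

```latex
v_i \;=\; \frac{m_i}{4B} ,
```

with B the bag constant, i.e. heavier hadrons are assigned proportionally larger excluded volumes.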
Modelling glueballs
(2016)
Glueballs are predicted in various theoretical approaches to QCD (most notably lattice QCD), but their experimental verification is still missing. In the low-energy sector some promising candidates for the scalar glueball exist, and some (less clear) candidates for the tensor and pseudoscalar glueballs have also been proposed. Yet, for heavier gluonic states there is much work to be done from both the experimental and theoretical points of view. In these proceedings, we briefly review the current status of glueball research and discuss future developments.
In this thesis, the production of charged kaons and Φ mesons in Au+Au collisions at √sAuAu = 2.4 GeV is studied. At this energy, all particles carrying open or hidden strangeness are produced below their respective free nucleon-nucleon thresholds, with the corresponding so-called excess energies: -0.15 GeV for K+, -0.46 GeV for K-, and -0.49 GeV for Φ. As a consequence, the production cross sections are very sensitive to medium effects such as momentum distributions, two- or multi-step collisions, and modifications of the in-medium spectral distribution of the produced states [1]. K+ and K- mesons exhibit different properties in baryon-dominated matter, since only K- can be resonantly absorbed by nucleons. Although strangeness-exchange reactions have been proposed to be the dominant channel for K- production in the analyzed energy regime, the production yield and kinematic distributions in smaller systems could also be explained by statistical hadronization model fits to the measured particle yields, including a canonical strangeness suppression radius RC and taking the Φ feed-down to kaons into account [2, 3]. For the first time in central Au+Au collisions at such low energies, it is possible to reconstruct K- and Φ mesons and to perform a multi-differential analysis. In principle, this should be the ideal environment for strangeness-exchange reactions to occur, as the particles are produced deeply sub-threshold in a large and long-lived system. It is therefore the ultimate test to differentiate between the different sources of K- production in heavy-ion collisions.
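The quoted excess energies can be checked with a few lines of kinematics. PDG masses and the elementary reference channels (NN → NΛK+, NN → NNK+K-, NN → NNΦ) are assumptions here, so the results come out close to, though not exactly at, the quoted values:

```python
import math

# PDG masses in GeV (assumed inputs; the thesis may use slightly different ones)
m_N, m_Lam, m_K, m_phi = 0.9383, 1.1157, 0.4937, 1.0195
T = 1.23  # kinetic beam energy per nucleon, GeV, fixed-target kinematics

# nucleon-nucleon centre-of-mass energy: s = 2*m_N^2 + 2*m_N*(T + m_N)
sqrt_s = math.sqrt(2 * m_N**2 + 2 * m_N * (T + m_N))   # ~2.41 GeV

# excess energies relative to the assumed elementary thresholds
eps_Kp = sqrt_s - (m_N + m_Lam + m_K)   # NN -> N Lambda K+
eps_Km = sqrt_s - (2 * m_N + 2 * m_K)   # NN -> N N K+ K-
eps_phi = sqrt_s - (2 * m_N + m_phi)    # NN -> N N phi
# close to the quoted -0.15, -0.46 and -0.49 GeV
```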
In total, 7.3×10^9 of the 40% most central Au(1.23 GeV per nucleon)+Au collisions are analyzed. The data were recorded with the High Acceptance DiElectron Spectrometer (HADES) located at the GSI Helmholtzzentrum für Schwerionenforschung in April/May 2012. A substantially improved reconstruction method has been employed to reconstruct the hadrons with high purity in a wide phase-space region.
The estimated particle multiplicities follow a clear hierarchy in the excess energy: 41.5 ± 2.1|sys protons at mid-rapidity per unit of rapidity, 11.1 ± 0.6|sys ± 0.4|extrapol π-, (3.01 ± 0.03|stat ± 0.15|sys ± 0.30|extrapol)×10^-2 K+, (1.94 ± 0.09|stat ± 0.10|sys ± 0.10|extrapol)×10^-4 K- and (0.99 ± 0.24|stat ± 0.10|sys ± 0.05|extrapol)×10^-4 Φ per event. The multiplicities of the strange hadrons increase more than linearly with the mean number of participating nucleons ⟨Apart⟩, supporting the assumption that the energy necessary to overcome the elementary production threshold is accumulated in multi-particle interactions. Transport models predict such an increase, but overestimate the measured particle yields and are not able to describe the kinematic distributions of K+ mesons perfectly. The best description is given by the IQMD model with a density-dependent kaon-nucleon potential of 40 MeV at nuclear ground-state density.
The K-/K+ multiplicity ratio is constant as a function of centrality and, at (6.45 ± 0.77)×10^-3, follows the trend of increasing with beam energy indicated by previous experiments [4]. The effective temperature of K-, Teff(K-) = (84 ± 6) MeV, is found to be systematically lower than that of K+, Teff(K+) = (104 ± 1) MeV, which has also been observed by other experiments.
The Φ/K- ratio, with a value of 0.52 ± 0.16, is higher than those obtained at higher center-of-mass energies and in smaller systems. This behavior is predicted by a tuned version of the UrQMD transport model [5] when including higher-mass baryonic resonances which can decay into Φ mesons, and by statistical hadronization models when suppressing open strangeness canonically. The measured ratio is constant as a function of centrality and implies, with a branching ratio of 48.9% for Φ → K+K-, that ~25% of all measured K- originate from Φ feed-down decays. A two-component PLUTO simulation, consisting of a purely thermal contribution and a K- contribution originating from Φ decays, can fully explain the observed lower effective temperature in comparison to K+ as well as the shape of the measured K- rapidity distribution. As a result, we find no indication of strangeness-exchange reactions being the dominant mechanism for K- production in the SIS18 energy regime, once the contribution from Φ feed-down decays is taken into account.
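The quoted ratios and the feed-down fraction are consistent with the central values of the multiplicities given above; a quick cross-check:

```python
# central multiplicities per event, as quoted in the abstract
K_plus, K_minus, phi = 3.01e-2, 1.94e-4, 0.99e-4
br_phi_KK = 0.489   # quoted branching ratio for Phi -> K+ K-

ratio_KmKp = K_minus / K_plus          # ~6.4e-3, matching (6.45 +- 0.77) x 10^-3
ratio_phiKm = phi / K_minus            # ~0.51, matching 0.52 +- 0.16
feeddown = phi * br_phi_KK / K_minus   # ~0.25: about 25% of K- stem from Phi decays
```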
The hadron yields for the 20% most central collisions can be described by a statistical hadronization model fit with a chemical freeze-out temperature of Tchem = (68 ± 2) MeV and a baryochemical potential of μB = (883 ± 25) MeV, which is higher than expected from previous parameterizations. The analysis of the transverse-mass spectra of protons indicates a kinetic freeze-out temperature of Tkin = (70 ± 4) MeV and a radial flow velocity of βr = 0.43 ± 0.01, which is in agreement with the parameters obtained from the linear dependence of the effective temperatures on the particle mass, Tkin = (71.5 ± 4.2) MeV and βr = 0.28 ± 0.09.
The CBM experiment (FAIR/GSI, Darmstadt, Germany) will focus on the measurement of rare probes at interaction rates up to 10 MHz, with a data flow of up to 1 TB/s. This requires a novel read-out and data-acquisition concept with self-triggered electronics and free-streaming data. In this case, resolving different collisions is a non-trivial task, and event building must be performed online in software. That requires full online event reconstruction and selection not only in space but also in time, so-called 4D event building and selection. This is the task of the First-Level Event Selection (FLES).
The FLES reconstruction and selection package consists of several modules: track finding, track fitting, short-lived particle finding, event building and event selection. The Cellular Automaton (CA) track finder algorithm was adapted for time-based reconstruction. In this article, we describe in detail the modifications made to the algorithm, as well as the performance of the developed time-based CA approach.
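The time dimension of 4D event building can be illustrated with a toy sketch (not the actual FLES code, and the gap threshold is a hypothetical parameter): in free-streaming data there are no hardware event boundaries, so hit timestamps must be grouped into event candidates in software.

```python
# Toy time-based event building: group free-streaming hit timestamps into
# event candidates wherever a gap larger than a threshold separates them.
def build_events(timestamps, gap_ns=10.0):
    events, current = [], []
    for t in sorted(timestamps):
        if current and t - current[-1] > gap_ns:
            events.append(current)   # gap found: close the current event
            current = []
        current.append(t)
    if current:
        events.append(current)
    return events
```

The real task is harder, since collisions can overlap in time at 10 MHz; there, track finding itself must disentangle hits, which is why the CA track finder works in space and time simultaneously.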
For the transport of high-intensity hadron beams in the low-energy beam lines of linear accelerators, the compensation of space-charge forces by the accumulation of particles of opposite charge is an important effect, reducing the required focusing strength and potentially the emittance growth due to space-charge forces. In this thesis, space-charge compensation was studied by including the secondary particles in particle-in-cell simulations.
For this purpose, a new electrostatic particle-in-cell code named bender was developed. The software was tested against known self-consistent solutions for an electron plasma confined in an external potential as well as for a KV-distributed beam in a periodic focusing lattice. For the simulation of compensation, models for residual-gas ionisation by proton and electron impact were implemented.
The compensation process was studied for a 120 keV, 100 mA proton beam transported through a short drift section. Various features in the particle distributions were identified which cannot be explained by a uniform reduction of the electric field of the beam. These were tied to the presence of thermal electrons confined within the beam potential. Using the Poisson-Boltzmann equation, their distribution could be reproduced and their influence on the beam studied for a wider range of parameters. However, the observed temperatures show a significant numerical influence. The hypothesis was formed that the stochastic heating present in particle-in-cell simulations is the mechanism leading to the formation of the observed (partial) thermal equilibrium.
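The Poisson-Boltzmann description combines the Boltzmann relation for electrons of temperature Te in the beam potential Φ with Poisson's equation (standard form, written here for a positive beam attracting the electrons; sign conventions are an assumption):

```latex
n_e(\mathbf{r}) = n_0 \exp\!\left( \frac{e\,\Phi(\mathbf{r})}{k_B T_e} \right),
\qquad
\nabla^2 \Phi = -\frac{1}{\varepsilon_0}\left( \rho_{\mathrm{beam}} - e\, n_e \right),
```

so the electron density piles up where the beam potential is deepest, flattening the net field seen by the beam.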
For the low-energy beam transport line of the Frankfurt neutron source FRANZ, bender was used to predict the pulse shaping in the novel ExB chopper system. The code was also used for the design and study of an electron lens for the Integrable Optics Test Accelerator at Fermi National Accelerator Laboratory. Aberrations due to guiding-center drifts and the strong electric field of the electron beam, as well as the current limits in such a system, were investigated.
The Standard Model is one of the greatest successes of modern theoretical physics. It describes the physics of elementary particles by means of three forces: the electromagnetic, the weak and the strong interactions. The electromagnetic and the weak interactions are rather well understood in comparison to the strong interaction.
The latter is as fundamental as the others; it is responsible for the formation of all hadrons, which are classified into mesons and baryons. A well-known example of the former is the pion; of the latter, the proton and the neutron, which form the nucleus of every atom. This fundamental force is believed to be described by Quantum Chromodynamics (QCD). According to this theory, hadrons are not elementary particles but are composed of quarks and gluons. The latter are the vector particles of the force and thus bosons of spin 1, while the former constitute the matter and are fermions of spin 1/2. To describe the interaction, a new quantum number had to be introduced: the color charge, which exists in three different types (blue, green and red). The name was not chosen arbitrarily, as states formed from three quarks of different colors are colorless, in the same way that mixing the three primary colors yields white. However, no colored structure has ever been observed experimentally. The quarks and gluons appear to be confined in colorless hadrons. This property of QCD is called confinement and results from a large coupling constant at low energy (or large distance). At high energy (or small distance), the perturbative analysis of QCD shows the coupling constant to be small, and quarks and gluons are almost free. This property is called asymptotic freedom. The possibility for QCD to describe both behaviors is one of its amazing characteristics. However, both phenomena are not fully understood, and one needs a method to study both the perturbative and the confining regime.
The only known method which fulfills the above criteria is lattice QCD and, more generally, Lattice Quantum Field Theory (LQFT). It consists of a discretization of spacetime and a formulation of QCD on a four-dimensional Euclidean spacetime grid of spacing a. In this way, the theory is naturally regularized and mathematically well-defined. On the other hand, the path-integral formalism allows the theory to be treated as a statistical-mechanics system which can be evaluated via a Markov chain Monte Carlo algorithm. This method was first suggested by Wilson in 1974 [1], and shortly after, Creutz performed the first numerical simulations of Yang-Mills theory [2] using a heat-bath Monte Carlo algorithm. The method is extremely demanding in computational power. In its early days it was criticized because the only feasible simulations involved non-physical values such as extremely large quark masses, large lattice spacings a and no dynamical quarks. With the progress of computers and the advent of supercomputers, the studies have come close to the physical point. But one still needs to deal with discrete spacetime and finite volume. Several techniques have been developed to estimate the infinite-volume limit and the continuum limit. The smaller the lattice spacing and the larger the volume, the better the extrapolation to the continuum and infinite-volume limits. The simulations are still very expensive, and at the moment a typical box length is L ≈ 4 fm with a ≈ 0.08 fm. However, it has been realized in simulations of pure Yang-Mills theory and other lower-dimensional models that the topology freezes at small a [3]. This was also observed recently in full QCD simulations [4,5].
The typical lattice spacing at which this problem appears in QCD is a ≈ 0.05 fm, but this value depends on the quark mass used and on the algorithm. The freezing of topology leads to results which differ from physical results. Solving this issue is important for the future of LQCD [6]. Recently, several methods to overcome the problem have been suggested; one of the most popular is the use of open boundary conditions [7], but this promising method still has its own issues, mainly the breaking of translation invariance.
In this thesis, we study some features of the quantum chromodynamics (QCD) phase diagram at purely imaginary chemical potential using lattice techniques. This is one of the possible methodologies to gain insight into the situation at finite density, where the sign problem prevents direct investigations from first principles.
We focus, in particular, on the Roberge-Weiss plane, where the phase structure with two degenerate flavours is studied both in the light and in the heavy quark mass limit. On the lattice, any result is affected by cut-off effects, and so are the positions of the two tricritical points m_{tric}^{1,2} separating the second-order intermediate-mass region from the first-order triple regions at light and heavy masses. Therefore, changing the lattice spacing a, the values of m_{tric}^1 and m_{tric}^2 will change. In order to find their position in the continuum limit – i.e. for a going to 0 – they have to be located on finer and finer lattices. Typically, in lattice QCD (LQCD) simulations, the temperature T is tuned through the bare coupling β, on which a depends, while keeping Nt fixed. Hence, it is common to refer implicitly to how fine the lattice is by just mentioning its temporal extent.
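The relation behind this convention is the standard lattice one:

```latex
T \;=\; \frac{1}{N_t \, a(\beta)} ,
```

so at fixed temporal extent Nt, increasing β decreases a(β) and thus raises the temperature; a larger Nt at a given physical T means a finer lattice.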
Using both Wilson and staggered fermions, we simulate Nf=2 QCD on Nt=6 lattices, varying the bare quark mass from the chiral (m_{u,d} going to 0) to the quenched (m_{u,d} going to infinity) limit. For each quark mass, a thorough finite-size scaling analysis is carried out, taking advantage of two different but consistent methods. In this way we identify the order of the phase transition and locate the position of the tricritical points. In order to convert our measurements to physical units, we set the scale by measuring the lattice spacing as well as the pion mass corresponding to the bare quark mass used. This allows a comparison between the different discretisations, giving a first idea of how serious the cut-off effects are.
To be able to compare two different discretisations, we added an RHMC algorithm with staggered fermions to the CL2QCD software, a GPU code based on OpenCL, which we released in 2014. A considerable part of our work has been invested in improving and optimising CL2QCD, as well as in developing new analysis tools regularly used alongside it. To mention one, the multiple-histogram method has been implemented in a completely general way, and we took advantage of it to obtain more precise results. Finally, in order to efficiently handle and monitor the hundreds of simulations that are typically run concurrently in finite-temperature LQCD, a completely new Bash library of tools has been developed. We plan to release it as a byproduct of CL2QCD in the near future.
In the 1960s, theoretical concepts prepared the path to nuclear matter with proton and neutron numbers far beyond the nuclei known at that time. The new laboratory GSI was founded for research on reactions with heavy ions, in particular those for the production of the predicted super-heavy nuclei. In this contribution we present how the interplay between experiment and theory resulted in a continuous improvement of the experimental set-ups on the one hand, and of the knowledge of the processes during the nuclear reaction and of the properties of the produced nuclei on the other. In the course of this work, six new elements, from 107 to 112, were produced and identified. An overview of the present status of experimental results and a comparison with theoretical interpretations is given.
Recently the LIGO and VIRGO Collaborations reported the observation of a gravitational-wave signal corresponding to the inspiral and merger of two black holes, resulting in the formation of a final black hole. It was shown that the observations are consistent with Einstein's theory of gravity to high accuracy, limited mainly by the statistical error. The angular momentum and mass of the final black hole were determined with a rather large uncertainty of tens of percent. Here we show that this indeterminacy in the range of black-hole parameters allows for some non-negligible deformations of the Kerr spacetime leading to the same frequencies of the black-hole ringing. This means that at the current precision of the experiment there remains some room for alternative theories of gravity.
At sufficiently high temperatures and baryon densities, nuclear matter is expected to undergo a transition into the Quark-Gluon Plasma (QGP), consisting of deconfined quarks and gluons and accompanied by chiral symmetry restoration. Signals of these two fundamental characteristics of Quantum Chromodynamics (QCD) can be studied in ultra-relativistic heavy-ion collisions, which produce a relatively large volume of energy and nucleon densities as high as those in the early universe. Dileptons are unique penetrating probes for this purpose, since they traverse the surrounding medium with negligible interaction and are created throughout the entire evolution of the initially created fireball. A multitude of experiments at SIS18, SPS and RHIC have taken on the challenging task of measuring these rare probes in a heavy-ion environment. NA60's high-quality dimuon measurements have identified the broadened ρ spectral function as the favored scenario to explain the low-mass dilepton excess, and partonic sources as dominant at intermediate dilepton masses.
Enabled by the addition of a TOF detector system in 2010, the first phase of the Beam Energy Scan (BES-I) at RHIC allows STAR to conduct an unprecedented energy-dependent study of dielectron production within a homogeneous experimental environment, and hence to close the wide gap in the QCD phase diagram between SPS and top RHIC energies. This thesis concentrates on the understanding of the low-mass region (LMR) enhancement regarding its invariant-mass, transverse-momentum and energy dependence. It studies dielectron production in Au+Au collisions at beam energies of 19.6, 27, 39, and 62.4 GeV with sufficient statistics. In conjunction with the published STAR results at top RHIC energy, this thesis presents the first comprehensive energy-dependent study of dielectron production.
This includes invariant-mass and transverse-momentum spectra for the four beam energies measured in 0-80% minimum-bias Au+Au collisions with high statistics up to 3.5 GeV/c² and 2.2 GeV/c, respectively. Their comparison with cocktail simulations of hadronic sources reveals a sizeable and steadily increasing excess yield in the LMR at all beam energies. The scenario of broadened in-medium ρ spectral functions proves not only to serve well as the dominant underlying source but also to be universal in nature, since it quantitatively and qualitatively explains the LMR enhancements measured over the wide range from SPS to top RHIC energies. It shows that most of the enhancement is governed by interactions of the ρ meson with thermal resonance excitations in the late(r)-stage hot and dense hadronic phase. This conclusion is supported by the energy-dependent measurement of integrated LMR excess yields and enhancement factors. The former do not exhibit a strong dependence on beam energy, as expected from the approximately constant total baryon density above 20 GeV, and the latter agree with the CERES measurement at SPS energy. The consistency of the excess yields and the agreement with model calculations over the wide RHIC energy regime make a strong case for LMR enhancements on the order of a factor of 2-3.
The extent of the results presented here enables a more solid discussion of their relation to chiral symmetry restoration from a theoretical point of view. High-statistics measurements at BES-II hold the promise of confirming these conclusions, along with the LMR enhancement's relation to the total baryon density with decreasing beam energy.
Different approaches are possible when it comes to modeling the brain. Given its biological nature, models can be constructed out of the chemical and biological building blocks known to be at play in the brain, formulating a given mechanism in terms of the basic interactions underlying it. On the other hand, the functions of the brain can be described in a more general or macroscopic way, in terms of desirable goals. These goals may include reducing metabolic costs, being stable or robust, or being efficient in computational terms. Synaptic plasticity, that is, the study of how the connections between neurons evolve in time, is no exception to this. In the following work we formulate (and study the properties of) synaptic plasticity models, employing two complementary approaches: a top-down approach, deriving a learning rule from a guiding principle for rate-encoding neurons, and a bottom-up approach, in which a simple yet biophysical rule for time-dependent plasticity is constructed.
We begin this thesis with a general overview, in Chapter 1, of the properties of neurons and their connections, clarifying the notation and the jargon of the field. These will be our building blocks and will also determine the constraints we need to respect when formulating our models. We discuss the present challenges of computational neuroscience, as well as the role of physicists in this line of research.
In Chapters 2 and 3, we develop and study a local online Hebbian self-limiting synaptic plasticity rule, employing the aforementioned top-down approach. First, in Chapter 2, we formulate the stationarity principle of statistical learning in terms of the Fisher information of the output probability distribution with respect to the synaptic weights. To ensure that the learning rules are formulated in terms of information locally available to a synapse, we employ the local-synapse extension of the one-dimensional Fisher information. Once the objective function has been defined, we derive an online synaptic plasticity rule via stochastic gradient descent.
In order to test the computational capabilities of a neuron evolving according to this rule (combined with a preexisting intrinsic plasticity rule), we perform a series of numerical experiments, training the neuron with different input distributions.
We observe that, for input distributions closely resembling a multivariate normal distribution, the neuron robustly selects the first principal component of the distribution, showing otherwise a strong preference for directions of large negative excess kurtosis.
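The thesis's Fisher-information rule is not reproduced here; as a generic stand-in illustrating the same qualitative behavior (an online, self-limiting Hebbian rule whose weight vector converges to the first principal component of the input), Oja's rule can be sketched:

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated 2-D Gaussian input; its leading principal component lies along (1, 1)
C = np.array([[2.0, 1.5], [1.5, 2.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=20000)

w = rng.normal(size=2)           # random initial synaptic weight vector
eta = 0.01                       # learning rate
for x in X:
    y = w @ x                    # neural activity of a linear rate neuron
    w += eta * y * (x - y * w)   # Hebbian term y*x with a self-limiting decay y^2*w
w /= np.linalg.norm(w)           # direction of the learned weight vector
```

The decay term keeps the weight norm bounded without any explicit normalization step, which is the sense in which such rules are called self-limiting.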
In Chapter 3 we study the robustness of the learning rule derived in Chapter 2 with respect to variations in the neural model's transfer function. In particular, we find an equivalent cubic form of the rule which, given its functional simplicity, permits the analytic computation of the attractors (stationary solutions) of the learning procedure as a function of the statistical moments of the input distribution. In this way, we manage to explain the numerical findings of Chapter 2 analytically and formulate a prediction: if the neuron is selective to non-Gaussian input directions, it should be suitable for applications to independent component analysis. We close this section by showing how, indeed, a neuron operating under these rules can learn the independent components in the non-linear bars problem.
A simple biophysical model for spike-timing-dependent plasticity (STDP) is developed in Chapter 4. The model is formulated in terms of two decaying traces present in the synapse, namely the fraction of activated NMDA receptors and the calcium concentration, which serve as clocks measuring the times of pre- and postsynaptic spikes. While constructed in terms of the key biological elements thought to be involved in the process, we have kept the functional dependencies of the variables as simple as possible to allow for analytic tractability. Despite its simplicity, the model is able to reproduce several experimental results, including the typical pairwise STDP curve and triplet results, in both hippocampal culture and layer 2/3 cortical neurons. Thanks to the model's functional simplicity, we are able to compute these results analytically, establishing a direct and transparent connection between the model's internal parameters and the qualitative features of the results.
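The trace idea can be illustrated with the textbook pair-based STDP curve (a generic sketch, not the thesis's NMDA/calcium model; amplitudes and time constants are illustrative assumptions): each spike leaves an exponentially decaying trace, and a spike of the opposite type reads it out.

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.010, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single spike pair, with dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        # pre before post: potentiation, read from the decayed presynaptic trace
        return a_plus * math.exp(-dt_ms / tau_plus)
    # post before pre: depression, read from the decayed postsynaptic trace
    return -a_minus * math.exp(dt_ms / tau_minus)
```

In the biophysical model of Chapter 4 the two clocks are physical quantities (NMDA receptor activation and calcium concentration) rather than abstract traces, but the readout logic is of this general kind.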
Finally, in order to make a connection to synaptic plasticity for rate-encoding neural models, we train the synapse with uncorrelated Poisson pre- and postsynaptic spike trains and compute the expected synaptic weight change as a function of the frequencies of these spike trains. Interestingly, a Hebbian (in the rate-encoding sense of the word) BCM-like behavior is recovered in this setup for hippocampal neurons, while dominating depression seems unavoidable for parameter configurations reproducing experimentally observed triplet nonlinearities in layer 2/3 cortical neurons. Potentiation can, however, be recovered in these neurons when correlations between pre- and postsynaptic spikes are present. We end this chapter by discussing the relation to existing experimental results, leaving open questions and predictions for future experiments.
A set of summary cards of the models employed, together with listings of the relevant variables and parameters, is presented at the end of the thesis, for easier access and permanent reference for the reader.
Lepton pairs emerging from decays of virtual photons represent promising probes of nuclear matter under extreme conditions of temperature and density. These extreme conditions can be reached in heavy-ion collisions at various facilities around the world, with collision energies in the center-of-mass system (√sNN) ranging from a few GeV (SIS) to the TeV scale (LHC). In the energy domain of 1-2 GeV per nucleon (GeV/u), the HADES experiment at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt studies dielectron and strangeness production.
Various reactions, for example collisions of pions, protons, deuterons and heavy ions with nuclei, have been studied since its installation in the year 2001. In particular, the so-called DLS puzzle was solved experimentally by remeasuring C+C at 1 and 2 GeV/u and by careful studies of inclusive pp and pn reactions at 1.25 GeV. With these measurements the so-called reference spectrum was established. Measurements of e+e− production in Ar+KCl showed an enhancement of the dilepton spectrum above the trivial NN background. Theory predicts a strong enhancement of medium radiation with the system size, due to copious production of fast-decaying baryonic resonances such as ∆ and N∗. The heaviest system measured so far is Au+Au at a kinetic beam energy of 1.23 GeV/u. The precise determination of the medium radiation depends on precise knowledge of the underlying hadronic cocktail, composed of the various sources contributing to the measured dilepton spectrum. In general, the medium radiation needs to be separated from contributions of long-lived particles that decay after the freeze-out of the system. For a more model-independent understanding of the dilepton cocktail, the production cross sections of these particles need to be measured independently. In the relevant energy regime the main contributors are π0 and η Dalitz decays. Both mesons have a dominant decay into two real photons and have been reconstructed successfully in this channel. Since HADES has no electromagnetic calorimeter, the mesons cannot be identified in this decay channel directly. In this thesis the capability of HADES to detect e+e− pairs from conversions of real photons is demonstrated. To this end, not only the conversion probability but also the resulting efficiencies are shown. Furthermore, the reconstruction method for neutral mesons is explained and the resulting spectra are interpreted. The measurement of neutral pions is compared to the independently measured charged-pion distribution and extrapolated to full phase space. An integrated approach is used to determine the η yield. Both measurements are compared to the world data and to theoretical model calculations. Finally, the measurements are used together with the reconstructed dilepton spectra to determine the amount and the properties of in-medium radiation in the Au+Au system.
The detailed biophysical mechanisms through which transcranial magnetic stimulation (TMS) activates cortical circuits are still not fully understood. Here we present a multi-scale computational model to describe and explain the activation of different cell types in motor cortex due to TMS. Our model determines precise electric fields based on an individual head model derived from magnetic resonance imaging and calculates how these electric fields activate morphologically detailed models of different neuron types. We predict detailed neural activation patterns for different coil orientations consistent with experimental findings. Beyond this, our model allows us to predict activation thresholds for individual neurons and precise initiation sites of individual action potentials on the neurons’ complex morphologies. Specifically, our model predicts that cortical layer 3 pyramidal neurons are generally easier to stimulate than layer 5 pyramidal neurons, thereby explaining the lower stimulation thresholds observed for I-waves compared to D-waves. It also predicts differences in the regions of activated cortical layer 5 and layer 3 pyramidal cells depending on coil orientation. Finally, it predicts that under standard stimulation conditions, action potentials are mostly generated at the axon initial segment of cortical pyramidal cells, with a much less important activation site being the part of a layer 5 pyramidal cell axon where it crosses the boundary between grey matter and white matter. In conclusion, our computational model offers a detailed account of the mechanisms through which TMS activates different cortical cell types, paving the way for more targeted application of TMS based on individual brain morphology in clinical and basic research settings.
In this thesis we explore the characteristics of strongly interacting matter, described by Quantum Chromodynamics (QCD). In particular, we investigate the properties of QCD at extreme densities, a region yet to be explored by first principle methods. We base the study on lattice gauge theory with Wilson fermions in the strong coupling, heavy quark regime. We expand the lattice action around this limit, and carry out analytic integrals over the gauge links to obtain an effective, dimensionally reduced, theory of Polyakov loop interactions.
The 3D effective theory suffers only from a mild sign problem, and we briefly outline how it can be simulated using either Monte Carlo techniques with reweighting, or the Complex Langevin flow. We then continue to the main topic of the thesis, namely the analytic treatment of the effective theory. We introduce the linked cluster expansion, a method ideal for studying thermodynamic expansions. The complex nature of the effective theory action requires the development of a generalisation of the linked cluster expansion. We find a mapping between the generalised linked cluster expansion and our effective theory, and use this to compute the thermodynamic quantities.
Lastly, various resummation techniques are explored, and a chain resummation is implemented on the level of the effective theory itself. The resummed effective theory describes not only nearest-neighbour and next-to-nearest-neighbour interactions, but couplings at all distances, making it well suited for describing macroscopic effects. We compute the equation of state for cold and dense heavy QCD, and find a correspondence with that of non-relativistic free fermions, indicating a shift of the dynamics in the continuum.
We conclude this thesis by presenting two possible extensions to new physics using the techniques outlined within. The first is the application of the effective theory in the large-$N_c$ limit, of particular interest for the study of conformal field theory. The second is the computation of analytic Yang-Lee zeros, which can be applied in the search for real phase transitions.
Exotic nuclear matter (2016)
Recent developments of nuclear structure theory for exotic nuclei are addressed, including the role of hyperons and nucleon resonances. Nuclear multipole response functions, hyperon interactions in infinite matter and in neutron stars, and theoretical aspects of excitations of nucleon resonances in nuclei are discussed.
We discuss different models for the spin structure of the nonperturbative pomeron: scalar, vector, and rank-2 symmetric tensor. The ratio of single-helicity-flip to helicity-conserving amplitudes in polarised high-energy proton–proton elastic scattering, known as the complex r5 parameter, is calculated for these models. We compare our results to experimental data from the STAR experiment. We show that the spin-0 (scalar) pomeron model is clearly excluded by the data, while the vector pomeron is inconsistent with the rules of quantum field theory. The tensor pomeron is found to be perfectly consistent with the STAR data.
This letter reports on how the Wilson flow technique can effectively suppress the short-distance quantum fluctuations of 2- and 3-gluon Green functions, remove the ΛQCD scale and eliminate the transition from the confining non-perturbative sector to the asymptotically free perturbative one. After the Wilson flow, the behavior of the Green functions with momenta can be described in terms of a quasi-classical instanton background. The same behavior also occurs, before the Wilson flow, at low momenta. This last result permits applications such as the detection of instanton phenomenological properties or a determination of the lattice spacing from the gauge sector of the theory alone.
We report on new results on the infrared behavior of the three-gluon vertex in quenched Quantum Chromodynamics, obtained from large-volume lattice simulations. The main focus of our study is the appearance of the characteristic infrared feature known as ‘zero crossing’, the origin of which is intimately connected with the nonperturbative masslessness of the Faddeev–Popov ghost. The appearance of this effect is clearly visible in one of the two kinematic configurations analyzed, and its theoretical origin is discussed in the framework of Schwinger–Dyson equations. The effective coupling in the momentum subtraction scheme that corresponds to the three-gluon vertex is constructed, revealing the vanishing of the effective interaction at the exact location of the zero crossing.
A generalized teleparallel cosmological model, f(TG,T), containing the torsion scalar T and the teleparallel counterpart of the Gauss–Bonnet topological invariant TG, is studied in the framework of the Noether symmetry approach. As f(G,R) gravity, where G is the Gauss–Bonnet topological invariant and R is the Ricci curvature scalar, exhausts all the curvature information that one can construct from the Riemann tensor, in the same way, f(TG,T) contains all the possible information directly related to the torsion tensor. In this paper, we discuss how the Noether symmetry approach allows one to fix the form of the function f(TG,T) and to derive exact cosmological solutions.
We study the effect of thermal charm production on charmonium regeneration in high energy nuclear collisions. By solving the kinetic equations for charm quark and charmonium distributions in Pb+Pb collisions, we calculate the global and differential nuclear modification factors RAA(Npart) and RAA(pt) for J/ψ mesons. Due to thermal charm production in the hot medium, the charmonium production source changes from the initially created charm quarks at SPS, RHIC and LHC to the thermally produced charm quarks at the Future Circular Collider (FCC), and the J/ψ suppression (RAA<1) observed so far will be replaced by a strong enhancement (RAA>1) at FCC at low transverse momentum.
The decay properties of the Pygmy Dipole Resonance (PDR) have been investigated in the semi-magic N=82 nucleus 140Ce using a novel combination of nuclear resonance fluorescence and γ–γ coincidence techniques. Branching ratios for transitions to low-lying excited states are determined in a direct and model-independent way, both for individual excited states and for excitation energy intervals. Comparison of the experimental results to microscopic calculations in the quasi-particle phonon model yields excellent agreement, supporting the observation that the Pygmy Dipole Resonance couples to the ground state as well as to low-lying excited states. A 10% mixing of the PDR and the [2₁⁺ ⊗ PDR] configuration is extracted.
The FRANZ accelerator facility is currently being built in the experimental hall of the physics department at the Riedberg campus of Goethe University. FRANZ stands for "Frankfurter Neutronenquelle am Stern-Gerlach-Zentrum" (Frankfurt Neutron Source at the Stern-Gerlach-Zentrum). The facility offers a wide range of experimental possibilities for the investigation of intense, pulsed proton beams. One research focus at the secondary neutron beams is measurements for nuclear astrophysics. The neutrons are produced by a 2 MeV proton beam via the reaction 7Li(p,n)7Be. The planned experiments require both a pulse repetition rate of up to 250 kHz, realized here for the first time worldwide, at pulse currents in the 100 mA range, and an extreme pulse compression to one nanosecond, with pulse currents then reaching the ampere range. In addition, continuous-wave beam operation at currents in the mA range is possible. Many individual accelerator components are also new developments, such as the ion source, the chopper for pulse shaping, the radio-frequency-coupled RFQ-IH combination, the rebuncher in the form of a CH structure, and the bunch compressor. Average beam powers of up to 24 kW occur in the low-energy beam transport section, since the ion source must in principle be operated in continuous-wave mode, even at high current with high pulse repetition rates. Personnel and equipment protection therefore also plays an essential role in the design of the FRANZ control system. The layout of FRANZ and its main components are explained in Chapter 2. The many different subsystems, such as the high-voltage section, magnets, radio-frequency components and cavities, vacuum components, beam diagnostics, and detectors, make it plausible that the control system for such a facility must be specially designed as well. For comparison, Chapter 4 presents the control concepts of current large accelerator projects, namely the European Spallation Source (ESS) and the Facility for Antiproton and Ion Research (FAIR). In the present work, the ion source was chosen as a complex accelerator component on which to develop and test control procedures. A flow chart (Fig. 5.15) for starting up and operating the ion source was developed and implemented. In particular, the dependence of the hot-cathode parameters on the operating time was investigated.
From this, an algorithm for predicting a timely filament replacement could be derived. Furthermore, the readjustment of the cathode heating current was automated, stabilizing the arc discharge voltage within an interval of ±0.5 V. The ramp-up of the filament current was also automated: the change in vacuum pressure as a function of the filament current increase is measured and evaluated, and the next permissible current increment is derived from it. In this way, the operating state is reached faster and in a more controlled manner than with manual ramp-up, bringing the goal of unattended ion source operation closer. In a first test of component control and data acquisition, an ion beam was extracted and transported through the first focusing magnet, a solenoid. The excitation current of the solenoid and the beam energy were scanned automatically, the data were stored, and from them a contour plot of the measured beam current behind the focusing lens was created (Fig. 5). The present work deals only with the "slow" control and regulation processes, while the fast processes in the radio-frequency control system are regulated independently. In addition to monitoring the operating state of all components, all data required for service and personnel safety are also logged. The system is based on MNDACS (Mesh Networked Data Acquisition and Control System) and is written in Java. MNDACS consists of a kernel that runs the component-driver software as well as the network server and the graphical network interface (GUI). It also includes the Driver Abstraction Layer (DAL), which provides access to further computers or to local drivers. CORBA serves as the middleware for network communication.
It handles communication with external software and defines the rerouting of communication in the event of line interruptions or a local computer crash. FRANZ has two control levels: high-level control and data processing run over Ethernet, while the interlock and safety system runs over the low-level control. The network connections use 1 Gb Ethernet links, so that fast data exchange remains possible even during local network disturbances. To keep the computer system running during power outages, an uninterruptible power supply (UPS) was procured as part of this work and successfully tested at the high-voltage terminal.
The process of electron loss to the continuum (ELC) has been studied for the collision systems U28+ + H2 at a collision energy of 50 MeV/u, U28+ + N2 at 30 MeV/u, and U28+ + Xe at 50 MeV/u. The energy distributions of cusp electrons emitted at an angle of 0° with respect to the projectile beam were measured using a magnetic forward-angle electron spectrometer. For these collision systems far from the equilibrium charge state, a significantly asymmetric cusp shape is observed. The experimental results are compared to calculations based on first-order perturbation theory, which predict an almost symmetric cusp shape. Possible reasons for this discrepancy are discussed.
Using an advanced version of the hadron resonance gas model we have found several remarkable irregularities at chemical freeze-out. The most prominent of them are two sets of highly correlated quasi-plateaus in the collision energy dependence of the entropy per baryon, the total pion number per baryon, and the thermal pion number per baryon, which we found at center-of-mass energies of 3.6-4.9 GeV and 7.6-10 GeV. The low-energy set of quasi-plateaus was predicted a long time ago. On the basis of the generalized shock-adiabat model we demonstrate that the low-energy correlated quasi-plateaus give evidence for the anomalous thermodynamic properties of the mixed phase at its boundary to the quark-gluon plasma. The question is whether the high-energy correlated quasi-plateaus are also related to some kind of mixed phase. To answer this question we employ the results of a systematic meta-analysis of the quality of data description of 10 existing event generators of nucleus-nucleus collisions in the range of center-of-mass collision energies from 3.1 GeV to 17.3 GeV. These generators are divided into two groups: the first group includes the generators which account for quark-gluon plasma formation during nuclear collisions, while the second group includes the generators which do not assume quark-gluon plasma formation in such collisions. Comparing the quality of description of more than a hundred different data sets of strange hadrons by these two groups of generators, we find two regions of equal quality of data description, located at center-of-mass collision energies of 4.3-4.9 GeV and 10.0-13.5 GeV. We interpret these two regions of equal quality of data description as regions of hadron-quark-gluon mixed phase formation. Such a conclusion is strongly supported by the irregularities in the collision energy dependence of the experimental ratios of the Lambda hyperon number per proton and the positive kaon number per Lambda hyperon.
Although it is at the moment unclear whether these regions belong to the same mixed phase or not, there are arguments that the most probable collision energy range to probe the (tri)critical endpoint of the QCD phase diagram is 12-14 GeV.
The production of K∗(892)0 and ϕ(1020) mesons has been measured in p–Pb collisions at √sNN = 5.02 TeV. K∗0 and ϕ are reconstructed via their decay into charged hadrons with the ALICE detector in the rapidity range - 0.5 < y < 0. The transverse momentum spectra, measured as a function of the multiplicity, have a pT range from 0 to 15 GeV/c for K∗0 and from 0.3 to 21 GeV/c for ϕ. Integrated yields, mean transverse momenta and particle ratios are reported and compared with results in pp collisions at √s = 7 TeV and Pb–Pb collisions at √sNN = 2.76 TeV. In Pb–Pb and p–Pb collisions, K∗0 and ϕ probe the hadronic phase of the system and contribute to the study of particle formation mechanisms by comparison with other identified hadrons. For this purpose, the mean transverse momenta and the differential proton-to-ϕ ratio are discussed as a function of the multiplicity of the event. The short-lived K∗0 is measured to investigate re-scattering effects, believed to be related to the size of the system and to the lifetime of the hadronic phase.
Nanomaterials, i.e., materials that are manufactured at a very small spatial scale, can possess unique physical and chemical properties and exhibit novel characteristics as compared to the same material without nanoscale features. The reduction of size down to the nanometer scale leads to the abundance of potential applications in different fields of technology. For instance, tailoring the physicochemical properties of nanomaterials for modification of their interaction with a biological environment has been reflected in a number of biomedical applications.
Strategies to choose the size and the composition of nanoscale systems are often hindered by a limited understanding of interactions that are difficult to study experimentally. However, this goal can be achieved by means of advanced computer simulations. This thesis explores, from theoretical and computational viewpoints, the stability, electronic, and thermo-mechanical properties of nanoscale systems and materials related to biomedical applications.
We examine the ability of existing classical interatomic potentials to reproduce stability and thermo-mechanical properties of metal systems, assuming that these potentials have been fitted to describe ground-state properties of the perfect bulk materials.
It is found that existing classical interatomic potentials poorly describe highly-excited vibrational states when the system is far from the potential energy minimum. On the other hand, construction of a reliable computational model is essential for further development of nanomaterials for applications. A new interatomic potential that is able to correctly reproduce both the melting temperature and the ground-state properties of different metals, such as gold, platinum, titanium, and magnesium, by means of classical molecular dynamics simulations is proposed in this work. The suggested modification of a many-body potential has a general nature and can be utilized for similar numerical exploration of thermo-mechanical properties of a broad range of molecular and solid state systems experiencing phase transitions.
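For illustration, many-body metal potentials of the kind discussed above are often written in the Gupta/second-moment form: a pairwise repulsion plus a square-root many-body attraction per atom. The sketch below uses that generic textbook form with made-up parameters (A, XI, P, Q, R0 are placeholders), not the modified potential proposed in the thesis.

```python
import math

# Generic Gupta-style many-body potential for a metal cluster.
# All parameters are hypothetical placeholders, not fitted values.
A, XI, P, Q, R0 = 0.10, 1.3, 10.0, 3.0, 2.9  # eV, eV, -, -, angstrom

def gupta_energy(coords):
    """Total energy: Born-Mayer pair repulsion plus a square-root
    ("second-moment") many-body attraction summed per atom."""
    n = len(coords)
    e = 0.0
    for i in range(n):
        rep, dens = 0.0, 0.0
        for j in range(n):
            if i == j:
                continue
            r = math.dist(coords[i], coords[j])
            rep += A * math.exp(-P * (r / R0 - 1.0))
            dens += XI ** 2 * math.exp(-2.0 * Q * (r / R0 - 1.0))
        e += rep - math.sqrt(dens)  # non-pairwise: sqrt of summed density
    return e

# A dimer at the reference distance is bound more strongly than a
# stretched one, as the attractive density term decays with r.
dimer = [(0.0, 0.0, 0.0), (R0, 0.0, 0.0)]
stretched = [(0.0, 0.0, 0.0), (2.0 * R0, 0.0, 0.0)]
print(gupta_energy(dimer) < gupta_energy(stretched))  # True
```

The square root is what makes the potential many-body rather than pairwise: the binding per neighbour weakens as the local density grows, which is the feature such potentials must balance against melting behaviour when refitted as described above.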
The applicability of the classical interatomic potentials to the description of nanoscale systems, consisting of several tens to hundreds of atoms, is also explored in this study. This issue is important, for instance, in the case of nanostructured materials, where grains or nanocrystals have a typical size of a few nanometers. We validate classical potentials through comparison with density-functional theory calculations of small atomic clusters made of titanium and nickel. This analysis demonstrates that classical potentials fitted to describe ground-state properties of a bulk material can describe the energetics of nanoscale systems with reasonable accuracy.
In this work, we also analyze electronic properties of nanometer-size nanoparticles made of gold, platinum, silver, and gadolinium; nanoparticles composed of these materials are of current interest for radiation therapy applications. We focus on the production of low-energy electrons with kinetic energies from a few electronvolts to several tens of electronvolts. It is now established that low-energy secondary electrons of such energies play an important role in the nanoscale mechanisms of biological damage resulting from ionizing radiation. We provide a methodology for analyzing the dynamic response of nanoparticles of experimentally relevant sizes, namely about several nanometers, exposed to ionizing radiation. Because of the large number of constituent atoms (about 1000-10000) and the consequently high computational cost, the electronic properties of such systems can hardly be described by means of ab initio methods based on a quantum-mechanical treatment of electrons, and this analysis must rely on model approaches. By comparing the response of smaller systems (of about 1 nm size) calculated within the ab initio and the model frameworks, we validate this methodology and make predictions for the electron production in larger systems.
We have revealed that a significant increase in the number of low-energy electrons emitted from nanometer-size noble metal nanoparticles arises from collective electron excitations formed in these systems. It is demonstrated that the dominant mechanisms of electron yield enhancement are related to the formation of plasmons excited in the whole system and of atomic giant resonances formed due to excitation of valence d electrons in individual atoms of a nanoparticle. When embedded in a biological medium, noble metal nanoparticles thus represent an important source of low-energy electrons, able to produce significant irreparable damage in biological systems.
A general methodology for studying electronic properties of nanosystems is used to make quantitative predictions for electron production by non-metal nanoparticles. The analysis illustrates that due to a prominent collective response to an external electric field, carbon nanoparticles embedded in a biological medium also enhance the production of low-energy electrons. The number of low-energy electrons emitted from carbon nanoparticles is demonstrated to be several times higher as compared to the case of liquid water.
Collective flow phenomena are a sensitive probe of the properties of extreme QCD matter. However, their interpretation relies on an understanding of the initial conditions, e.g. the eccentricity of the nuclear overlap region. HADES [1] provides a large acceptance combined with a high mass resolution and therefore allows di-electron and hadron production in heavy-ion collisions to be studied with unprecedented precision. In this contribution, the capability of HADES to study flow harmonics using multi-particle azimuthal correlation techniques is discussed. Thanks to the high statistics of seven billion Au+Au collisions at 1.23 AGeV collected in 2012, a systematic study of higher-order flow harmonics, the differentiation between collective and non-flow effects, as well as multi-differential (pt, rapidity, centrality) analyses are possible.
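For orientation, the simplest member of the multi-particle azimuthal correlation family is the two-particle Q-cumulant estimate of a flow harmonic. The sketch below implements that standard textbook formula on toy events; it is not the HADES analysis code, and the event sample is synthetic.

```python
import cmath
import math
import random

def v_n_two_particle(events, n=2):
    """Two-particle Q-cumulant flow estimate v_n{2}.

    events: list of per-event lists of azimuthal angles (radians).
    Per event, <2> = (|Q_n|^2 - M) / (M(M-1)) with
    Q_n = sum_j exp(i n phi_j); then v_n{2} = sqrt(<<2>>).
    """
    num, den = 0.0, 0.0
    for phis in events:
        m = len(phis)
        if m < 2:
            continue
        qn = sum(cmath.exp(1j * n * phi) for phi in phis)
        num += abs(qn) ** 2 - m      # subtracting m removes self-pairs
        den += m * (m - 1)
    c2 = num / den                   # <<2>>, the two-particle cumulant
    return max(c2, 0.0) ** 0.5

def toy_event(mult=300, v2=0.1):
    """Sample angles from dN/dphi proportional to 1 + 2 v2 cos(2 phi)."""
    phis = []
    while len(phis) < mult:
        phi = random.uniform(0.0, 2.0 * math.pi)
        if random.random() < (1 + 2 * v2 * math.cos(2 * phi)) / (1 + 2 * v2):
            phis.append(phi)
    return phis

random.seed(0)
events = [toy_event() for _ in range(200)]
print(round(v_n_two_particle(events), 2))  # recovers the input v2 of ~0.1
```

With only flow correlations present, v_n{2} recovers the input harmonic; the higher-order cumulants mentioned in the abstract (v_n{4}, ...) exist precisely to suppress the non-flow contributions that this two-particle estimate cannot distinguish.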
Electronic states with non-trivial topology host a number of novel phenomena with potential for revolutionizing information technology. The quantum anomalous Hall effect provides spin-polarized dissipation-free transport of electrons, while the quantum spin Hall effect in combination with superconductivity has been proposed as the basis for realizing decoherence-free quantum computing. We introduce a new strategy for realizing these effects, namely by hole and electron doping kagome lattice Mott insulators through, for instance, chemical substitution. As an example, we apply this new approach to the natural mineral herbertsmithite. We prove the feasibility of the proposed modifications by performing ab-initio density functional theory calculations and demonstrate the occurrence of the predicted effects using realistic models. Our results herald a new family of quantum anomalous Hall and quantum spin Hall insulators at affordable energy/temperature scales based on kagome lattices of transition metal ions.
The term superconductivity describes the phenomenon of vanishing electrical resistivity in a certain material, then called a superconductor, below a critical, typically very low, temperature. Since the discovery of superconductivity in mercury in 1911, many other superconductors have been found, and the critical temperature below which superconductivity occurs could recently be raised to temperatures encountered in a cold Antarctic winter.
Superconductors are promising materials for applications. They can serve as nearly loss-free cables for energy transmission, in coils for the generation of high magnetic fields, or in various electronic devices, such as detectors for magnetic fields. Despite their obvious advantages, however, the cost of using superconductors depends strongly on the cooling effort needed to realize the superconducting state. Therefore, the search for a superconductor with a critical temperature above room temperature, which would avoid the need for any specialized cooling system, is one of the main projects of contemporary research in condensed matter physics.
While a theory of superconductivity in simple metals has already been developed in the 1950s, it has meanwhile been recognized that many superconductors are unconventional in the sense that their behavior does not follow the aforementioned theory. Unconventional superconductors differ from conventional superconductors mainly by the momentum- and real-space symmetry of the order parameter, which is associated with the superconducting state. While conventional superconductors have a uniform order parameter, unconventional superconductors can have an order parameter that bears structure. Of course, alternative theoretical descriptions have been suggested, but the discussion on the right theory for unconventional superconductivity has not yet been settled. Ultimately, this lack of a general theory of superconductivity prevents a targeted search for the room-temperature superconductor. Any new theoretical approach must, however, prove its value by correctly predicting the structure of the superconducting order parameter and further material properties.
In this work we participate in the search for a theory of unconventional superconductivity. We discuss the theory of superconductivity mediated by electron-electron interactions, which has been popular in the last few decades due to its success in explaining various properties of the copper-based superconductors that emerged in the 1980s. We give a detailed derivation of the so-called random phase approximation for the Hubbard model in terms of a diagrammatic many-body theory and apply it in conjunction with low-energy kinetic Hamiltonians, which we construct from first-principles calculations in the framework of density functional theory. Density functional theory is an established technique for calculating the electronic and magnetic properties of materials solely based on their crystal structure. Its practical implementations in computer codes do not, for example, describe complicated many-electron phenomena like the superconducting state that we are interested in here. Nevertheless, it can provide important information about the properties of the normal state of the material, from which superconductivity emerges. In our theory we use this information and approach the superconducting state from the normal state.
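For context, in the simplest single-band case the random phase approximation resums the bubble diagrams into the closed form χ_RPA = χ0 / (1 - U χ0), which diverges when the Stoner criterion U χ0 = 1 is met. A minimal sketch of that textbook formula follows (it is not the multi-orbital formalism developed in this work):

```python
def chi_rpa(chi0, u):
    """Single-band RPA susceptibility chi0 / (1 - U * chi0).

    chi0 is the bare (Lindhard) susceptibility at some momentum
    and frequency, u the local Hubbard repulsion. The result
    diverges as u * chi0 -> 1 (Stoner instability), signalling
    the strong magnetic fluctuations that mediate pairing in
    the spin-fluctuation picture.
    """
    denom = 1.0 - u * chi0
    if denom <= 0.0:
        raise ValueError("Stoner criterion reached: U * chi0 >= 1")
    return chi0 / denom

print(chi_rpa(0.5, 0.0))  # 0.5: with U = 0 the bare value is returned
print(chi_rpa(0.5, 1.0))  # 1.0: interactions enhance the susceptibility
```

In the multi-orbital case χ0 and U become matrices and the division becomes a matrix inversion, but the enhancement mechanism is the same.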
Such an interfacing of different calculational techniques requires a lot of implementation work in the form of computer code. Including the computer code in this work would take up far too much space, but since some of the decisions on approximations in the calculational formalism are guided by the feasibility of the associated computer calculations, we discuss the numerical implementation in great detail.
We apply the developed methods to quasi-two-dimensional organic charge transfer salts and iron-based superconductors. Finally, we discuss implications of our findings for the interpretation of various experiments.
Neutron-induced fission cross sections of 238U and 235U are used as standards in the fast neutron region up to 200 MeV. A high accuracy of the standards is relevant for the experimental determination of other neutron reaction cross sections. Therefore, the detection efficiency should be corrected using the angular distribution of the fission fragments (FFAD), which is barely known above 20 MeV. In addition, the angular distribution of the fragments produced in the fission of highly excited and deformed nuclei is an important observable for investigating the nuclear fission process.
In order to measure the FFAD of neutron-induced reactions, a fission detection setup based on parallel-plate avalanche counters (PPACs) has been developed and successfully used at the CERN-n_TOF facility. In this work, we present the preliminary results on the analysis of new 235U(n,f) and 238U(n,f) data in the extended energy range up to 200 MeV compared to the existing experimental data.
The study of resonant structures in neutron-nucleus cross sections, and therefore of the compound-nucleus reaction mechanism, requires spectroscopic measurements that determine with high accuracy the energy of the neutron interacting with the material under study.
To this purpose, the neutron time-of-flight facility n_TOF has been operating since 2001 at CERN. Its characteristics, such as the high-intensity instantaneous neutron flux, the wide energy range from thermal energies to a few GeV, and the very good energy resolution, are perfectly suited to high-quality measurements of neutron-induced reaction cross sections. Precise and accurate knowledge of these cross sections plays a fundamental role in nuclear technologies, nuclear astrophysics and nuclear physics.
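The time-of-flight principle underlying such a facility can be illustrated with the relativistic relation between flight time and neutron kinetic energy (a minimal generic sketch; the flight-path length in the example is illustrative, not a facility parameter):

```python
import math

M_N_MEV = 939.565        # neutron rest mass [MeV/c^2]
C = 299792458.0          # speed of light [m/s]

def neutron_energy_mev(flight_path_m, tof_s):
    """Relativistic neutron kinetic energy from the measured time of flight:
    beta = L / (c * t),  E_kin = (gamma - 1) * m_n c^2."""
    beta = flight_path_m / (C * tof_s)
    if not 0.0 < beta < 1.0:
        raise ValueError("unphysical time of flight")
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return (gamma - 1.0) * M_N_MEV
```

As a sanity check, a thermal neutron (about 2200 m/s) over a 185 m path yields roughly 0.025 eV, while shorter flight times map to higher energies.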
Two measuring stations are available at the n_TOF facility, called EAR1 and EAR2, which differ in neutron flux intensity and energy resolution. These experimental areas, combined with advanced detection systems, provide great flexibility for challenging measurements of high precision and accuracy, and allow the investigation of isotopes with very low cross sections, available only in small quantities, or with very high specific activity.
The characteristics and performance of the two experimental areas of the n_TOF facility will be presented, together with the most important measurements performed to date and their physics cases. In addition, significant upcoming measurements will be introduced.
We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation of the adopted methodology and formalism, the performance of the Bayesian PID approach for charged pions, kaons and protons in the central barrel of ALICE is studied. PID is performed via measurements of specific energy loss (dE/dx) and time-of-flight. PID efficiencies and misidentification probabilities are extracted and compared with Monte Carlo simulations using high-purity samples of identified particles in the decay channels K0S→π−π+, ϕ→K−K+, and Λ→pπ− in p-Pb collisions at √sNN = 5.02 TeV. In order to thoroughly assess the validity of the Bayesian approach, this methodology was used to obtain corrected pT spectra of pions, kaons, protons, and D0 mesons in pp collisions at √s = 7 TeV. In all cases, the results using Bayesian PID were found to be consistent with previous measurements performed by ALICE using a standard PID approach. For the measurement of D0→K−π+, it was found that a Bayesian PID approach gave a higher signal-to-background ratio and a similar or larger statistical significance when compared with standard PID selections, despite a reduced identification efficiency. Finally, we present an exploratory study of the measurement of Λ+c→pK−π+ in pp collisions at √s = 7 TeV, using the Bayesian approach for the identification of its decay products.
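The combination step of a Bayesian PID approach can be sketched as follows. This is a schematic illustration only: the Gaussian detector-response model and all numbers in the test example are hypothetical, not ALICE calibration values.

```python
import math

SPECIES = ("pion", "kaon", "proton")

def gaussian(x, mu, sigma):
    """Normal probability density, used here as a simple detector-response model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def bayesian_pid(signals, expected, priors):
    """Combine detector responses via Bayes' theorem.

    signals:  {detector: measured value}
    expected: {detector: {species: (expected mean, resolution)}}
    priors:   {species: prior probability}
    Returns the posterior probability for each species, normalized to 1.
    """
    posterior = {}
    for s in SPECIES:
        likelihood = 1.0
        for det, x in signals.items():
            mu, sigma = expected[det][s]
            likelihood *= gaussian(x, mu, sigma)   # detectors assumed independent
        posterior[s] = likelihood * priors[s]
    norm = sum(posterior.values())
    return {s: p / norm for s, p in posterior.items()}
```

With flat priors this reduces to a pure likelihood combination; in practice the priors would be iterated from measured particle abundances.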
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, Ξ and Ω production rates have been measured with the ALICE experiment as a function of transverse momentum, pT, in p-Pb collisions at a centre-of-mass energy of √sNN = 5.02 TeV. The results cover the kinematic ranges 0.6 GeV/c < pT < 7.2 GeV/c and 0.8 GeV/c < pT < 5 GeV/c, for Ξ and Ω respectively, in the common rapidity interval −0.5 < yCMS < 0. Multi-strange baryons have been identified by reconstructing their weak decays into charged particles. The pT spectra are analysed as a function of event charged-particle multiplicity, which in p-Pb collisions ranges over one order of magnitude and lies between those observed in pp and Pb-Pb collisions. The measured pT distributions are compared to the expectations from a Blast-Wave model. The parameters which describe the production of lighter hadron species also describe the hyperon spectra in high-multiplicity p-Pb collisions. The yield of hyperons relative to charged pions is studied and compared with results from pp and Pb-Pb collisions. A continuous increase in the yield ratios as a function of multiplicity is observed in p-Pb data, the values of which range from those measured in minimum bias pp to the ones in Pb-Pb collisions. A statistical model qualitatively describes this multiplicity dependence using a canonical suppression mechanism, in which the small volume causes a relative reduction of hadron production dependent on the strangeness content of the hyperon.
The multi-strange baryon yields in Pb-Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, Ξ and Ω production rates have been measured with the ALICE experiment as a function of transverse momentum, pT, in p-Pb collisions at a centre-of-mass energy of √sNN = 5.02 TeV. The results cover the kinematic ranges 0.6 GeV/c < pT < 7.2 GeV/c and 0.8 GeV/c < pT < 5 GeV/c, for Ξ and Ω respectively, in the common rapidity interval −0.5 < yCMS < 0. Multi-strange baryons have been identified by reconstructing their weak decays into charged particles. The pT spectra are analysed as a function of event charged-particle multiplicity, which in p-Pb collisions ranges over one order of magnitude and lies between those observed in pp and Pb-Pb collisions. The measured pT distributions are compared to the expectations from a Blast-Wave model. The parameters which describe the production of lighter hadron species also describe the hyperon spectra in high-multiplicity p-Pb collisions. The yield of hyperons relative to charged pions is studied and compared with results from pp and Pb-Pb collisions. A statistical model is employed, which describes the change in the ratios with volume using a canonical suppression mechanism, in which the small volume causes a species-dependent relative reduction of hadron production. The calculations, in which the magnitude of the effect depends on the strangeness content, show good qualitative agreement with the data.
The production of K∗(892)0 and ϕ(1020) mesons has been measured in p-Pb collisions at √sNN = 5.02 TeV. K∗0 and ϕ are reconstructed via their decay into charged hadrons with the ALICE detector in the rapidity range −0.5<y<0. The transverse momentum spectra, measured as a function of the multiplicity, cover the pT range from 0 to 15 GeV/c for K∗0 and from 0.3 to 21 GeV/c for ϕ. Integrated yields, mean transverse momenta and particle ratios are reported and compared with results in pp collisions at √s = 7 TeV and Pb-Pb collisions at √sNN = 2.76 TeV. In Pb-Pb and p-Pb collisions, K∗0 and ϕ probe the hadronic phase of the system and contribute to the study of particle formation mechanisms by comparison with other identified hadrons. For this purpose, the mean transverse momenta and the differential proton-to-ϕ ratio are discussed as a function of the multiplicity of the event. The short-lived K∗0 is measured to investigate re-scattering effects, believed to be related to the size of the system and to the lifetime of the hadronic phase.
The interaction between the Heat Shock Proteins 70 and 40 is at the core of the ATPase regulation of the chaperone machinery that maintains protein homeostasis. However, the structural details of this fundamental interaction are still elusive, and contrasting models have been proposed for the transient Hsp70/Hsp40 complexes. Here we combine molecular simulations based on both coarse-grained and atomistic models with co-evolutionary sequence analysis to shed light on this problem by focusing on the bacterial DnaK/DnaJ system. The integration of these complementary approaches resulted in a novel structural model that rationalizes previous experimental observations. We identify an evolutionarily conserved interaction surface formed by helix II of the DnaJ J-domain and a groove on lobe IIA of the DnaK nucleotide binding domain, involving the inter-domain linker.
Great interest has emerged recently in the search for Kitaev spin-liquid states in real materials. Such states rely on strongly anisotropic magnetic interactions, which have been suggested to exist in a number of candidate materials based on Ir and Ru. This thesis has two main aims. The first is the investigation of the electronic and magnetic properties of the Kitaev candidate materials Na2IrO3, α-Li2IrO3, α-RuCl3, γ-Li2IrO3, and Ba3YIr2O9, in which both spin-orbit coupling and correlation effects are important. The second is the development of methods for the microscopic description of correlated materials, combining many-body methods and density functional theory (DFT). ...
Magnetism is a beautiful example of a macroscopic quantum phenomenon. While known at least since the ancient Greeks, a microscopic theoretical explanation of magnetism could only be achieved with the advent of quantum mechanics at the beginning of the 20th century. Then it was understood that in a certain class of solids the famous Pauli exclusion principle leads to an effective interaction between the microscopic magnetic moments, i.e., the spins, which favors an ordered, and hence macroscopically magnetic, state. Nowadays, magnetic phenomena are used in a host of applications, and are especially relevant for information storage and processing technologies.
Despite the long history of the field, magnetic phenomena are still an active research topic. In particular, in the last decade the fields of spintronics and spin-caloritronics emerged, which manipulate the microscopic spins via charge and heat currents, respectively. This opens new avenues to potential applications, including the possibility of using the magnetic spin degrees of freedom instead of charges as carriers of information, which could provide advantages such as reduced losses and further miniaturization.
In this thesis we do not delve any further into the realm of possible applications. Instead we use sophisticated theories to explore the microscopic spin dynamics which is the basis of all such applications. We also focus on a particular compound: Yttrium-iron garnet (YIG), which is a ferrimagnetic insulator. This material has been widely used in experiments on magnetism over the last decades, and is a popular candidate for spintronic devices. Microscopically, the low-energy magnetic properties of YIG can be described by a ferromagnetic Heisenberg model. For spintronics and spin-caloritronics applications, it is however insufficient to only consider the magnetic degrees of freedom; one should also include the coupling of the spins to the elastic lattice vibrations, i.e., the phonons. Besides giving an overview on techniques used throughout the thesis, the introductory Ch. 1 provides a discussion of the microscopic Hamiltonian used to model the coupled spin-phonon system in the subsequent chapters.
The topic of Ch. 2 is the consequences of the magnetoelastic coupling for the low-energy magnon excitations in YIG. Starting from the microscopic spin-phonon Hamiltonian, we rigorously derive the magnon-phonon hybridization and scattering vertices in a controlled spin wave expansion. For the experimentally relevant case of thin YIG films at room temperature, these vertices are then used to compute the magnetoelastic modes as well as the magnon damping. In the course of this work, the damping of magnons in this system was also investigated experimentally using Brillouin light scattering spectroscopy. While comparison to the experimental data shows that the magnetoelastic interactions do not dominate the total magnon relaxation in the experimentally accessible regime, we are able to show that the spin-lattice relaxation time is strongly momentum dependent, thereby providing a microscopic explanation of a recent experiment.
In the final Ch. 3, we investigate a different phenomenon occurring in thin YIG films: Room temperature condensation of magnons. Prior work attributed this condensation process to quantum mechanics, i.e., it was interpreted as Bose-Einstein condensation. However, this is not satisfactory because at room temperature, the magnons in YIG behave as purely classical waves. In particular, the quantum Bose-Einstein distribution reduces to the classical Rayleigh-Jeans distribution in this case. In addition, the effective spin in YIG is very large. Therefore we start from the hypothesis that the room temperature magnon condensation is actually a new example of the kinetic condensation of classical waves, which has so far only been observed by imaging classical light in a photorefractive crystal. To distinguish this classical condensation from the quantum mechanical Bose-Einstein one, we refer to it as Rayleigh-Jeans condensation. To prove our claim, we consider the classical equations of motion of the coupled spin-phonon system. By eliminating the phonon degrees of freedom, we microscopically derive a non-Markovian stochastic Landau-Lifshitz-Gilbert equation (LLG) for the classical spin vectors. We then use this LLG to perform numerical simulations of the magnon dynamics, with all parameters fixed by experiments. These simulations accurately reproduce all stages of the magnon time evolution observed in experiments, including the appearance of the magnon condensate at the bottom of the magnon spectrum. In this way we confirm our initial hypothesis that the magnon condensation is a classical Rayleigh-Jeans condensation, which is unrelated to quantum mechanics.
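The stated reduction of the quantum Bose-Einstein distribution to the classical Rayleigh-Jeans form for ε ≪ kBT is easy to verify numerically (a generic illustration; the energy scale in the comment is schematic, not a YIG magnon dispersion):

```python
import math

def bose_einstein(x):
    """Quantum occupation number n(x) = 1/(exp(x) - 1), with x = eps/(kB*T).
    math.expm1 keeps the result accurate for small x."""
    return 1.0 / math.expm1(x)

def rayleigh_jeans(x):
    """Classical limit n(x) ~ kB*T/eps = 1/x, valid for x << 1."""
    return 1.0 / x

# For GHz-frequency magnons at room temperature, x = hbar*omega/(kB*T) is of
# order 1e-3, so the two distributions agree to about 0.1 percent, and the
# magnon gas behaves classically.
```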
The phenomenon of magnetism has been known to humankind for at least 2500 years, and many useful applications of magnetism have been developed since then, from the compass to modern information storage and processing devices. While technological applications are an important part of the continuing interest in magnetic materials, their fundamental properties are still being studied, leading to new physical insights at the forefront of physics. Magnetism in materials is a purely quantum effect, arising from the electrons, which carry an intrinsic spin of 1/2. The physics of interacting quantum spins in magnetic insulators is the main subject of this thesis. We focus here on a theoretical description of the antiferromagnetic insulator Cs2CuCl4. This material is highly interesting because it is a nearly ideal realization of the two-dimensional antiferromagnetic spin-1/2 Heisenberg model on an anisotropic triangular lattice, where the Cu(2+) ions carry a spin of 1/2 and the spins interact via exchange couplings. Due to the geometric frustration of the triangular lattice, there exists a spin-liquid phase with fractional excitations (spinons) at finite temperatures in Cs2CuCl4. This spin-liquid phase is characterized by strong short-range spin correlations without long-range order. From an experimental point of view, Cs2CuCl4 is also very interesting because the exchange couplings are relatively weak, leading to a saturation field of only B_c = 8.5 T. All relevant parts of the phase diagram are therefore experimentally accessible. A recurring theme in this thesis will be the use of bosonic or fermionic representations of the spin operators, each of which offers, in different situations, a suitable starting point for an approximate treatment of the spin interactions.
The methods which we develop in this thesis are not restricted to Cs2CuCl4 but can also be applied to other materials that can be described by the spin-1/2 Heisenberg model on a triangular lattice; one important example is the material class Cs2Cu(Cl4-xBrx), where chlorine is partially substituted by bromine, which changes the strength of the exchange couplings and the degree of frustration.
Our first topic is the finite-temperature spin-liquid phase in Cs2CuCl4. We study this regime by using a Majorana fermion representation of the spin-1/2 operators motivated by theoretical and experimental evidence for fermionic excitations in this spin-liquid phase. Within a mean-field theory for the Majorana fermions, we determine the magnetic field dependence of the critical temperature for the crossover from spin-liquid to paramagnetic behavior and we calculate the specific heat and magnetic susceptibility in zero magnetic field. We find that the Majorana fermions can only propagate in one dimension along the direction of the strongest exchange coupling; this reduction of the effective dimensionality of excitations is known as dimensional reduction.
The second topic is the behavior of ultrasound propagation and attenuation in the spin-liquid phase of Cs2CuCl4, where we consider longitudinal sound waves along the direction of the strongest exchange coupling. Due to the dimensional reduction of the excitations in the spin-liquid phase, we expect that we can describe the ultrasound physics by a one-dimensional Heisenberg model coupled to the lattice degrees of freedom via the exchange-striction mechanism. For this one-dimensional problem we use the Jordan-Wigner transformation to map the spin-1/2 operators to spinless fermions. We treat the fermions within the self-consistent Hartree-Fock approximation and we calculate the change of the sound velocity and attenuation as a function of magnetic field using a perturbative expansion in the spin-phonon couplings. We compare our theoretical results with experimental data from ultrasound experiments, where we find good agreement between theory and experiment.
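For reference, the Jordan-Wigner transformation invoked above maps the spin-1/2 operators of a chain onto spinless fermions c_j in the standard way:

```latex
\begin{align}
S_j^z &= c_j^\dagger c_j - \tfrac{1}{2}, \\
S_j^+ &= c_j^\dagger \exp\!\Bigl( i\pi \sum_{l<j} c_l^\dagger c_l \Bigr),
\qquad S_j^- = \bigl(S_j^+\bigr)^\dagger .
\end{align}
```

Under this mapping the transverse part of the exchange becomes fermion hopping, while the S^z S^z part becomes a density-density interaction, which is the term treated here in the self-consistent Hartree-Fock approximation.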
Our final topic is the behavior of Cs2CuCl4 in high magnetic fields larger than the saturation field B_c=8.5 T. At zero temperature, Cs2CuCl4 is then fully magnetized and the ground state is therefore a ferromagnet where the excitations have an energy gap. The elementary excitations of this ferromagnetic state are spin-flips (magnons) which behave as hard-core bosons. At finite temperatures there will be thermally excited magnons that interact via the hard-core interaction and via additional exchange interactions. We describe the thermodynamic properties of Cs2CuCl4 at finite temperatures and calculate experimentally observable quantities, e.g., magnetic susceptibility and specific heat. Our approach is based on a mapping of the spin-1/2 operators to hard-core bosons, where we treat the hard-core interaction by the self-consistent ladder approximation and the exchange interactions by the self-consistent Hartree-Fock approximation. We find that our theoretical results for the specific heat are in good agreement with the available experimental data.
The PANDA experiment will be one of the flagship experiments at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. It is a versatile detector dedicated to topics in hadron physics such as charmonium spectroscopy and nucleon structure. A DIRC counter will deliver hadronic particle identification in the barrel part of the PANDA target spectrometer and will cleanly separate kaons with momenta up to 3.5 GeV/c from a large pion background. An alternative DIRC design option, using wide Cherenkov radiator plates instead of narrow bars, would significantly reduce the cost of the system. Compact fused silica photon prisms have many advantages over the traditional stand-off boxes filled with liquid. This work describes the study of these design options, which are important advancements of the DIRC technology in terms of cost and performance. Several new reconstruction methods were developed and will be presented. Prototypes of the DIRC components have been built and tested in particle beams, where the new concepts and approaches were applied. An evaluation of the performance of the designs, feasibility studies with simulations, and a comparison of simulation and prototype tests will be presented.
Three- and four-pion Bose-Einstein correlations are presented in pp, p-Pb, and Pb-Pb collisions at the LHC. We compare our measured four-pion correlations to the expectation derived from two- and three-pion measurements. Such a comparison provides a method to search for coherent pion emission. We also present mixed-charge correlations in order to demonstrate the effectiveness of several analysis procedures such as Coulomb corrections. Same-charge four-pion correlations in pp and p-Pb appear consistent with the expectations from three-pion measurements. However, the presence of non-negligible background correlations in both systems prevents a conclusive statement. In Pb-Pb collisions, we observe a significant suppression of three- and four-pion Bose-Einstein correlations compared to expectations from two-pion measurements. There appears to be no centrality dependence of the suppression within the 0-50% centrality interval. The origin of the suppression is not clear. However, by postulating either coherent pion emission or large multibody Coulomb effects, the suppression may be explained.
We report on measurements of charge-dependent flow using a novel three-particle correlator with ALICE in Pb-Pb collisions at the LHC, and discuss the implications for the observation of local parity violation and the Chiral Magnetic Wave (CMW) in heavy-ion collisions. Charge-dependent flow is reported for different collision centralities as a function of the event charge asymmetry. While our results are in qualitative agreement with expectations based on the CMW, the nonzero signal observed in higher-harmonic correlations indicates a possible significant background contribution. We also present results on a differential correlator, where the flow of positive and negative charges is reported as a function of the mean charge of the particles and their pseudorapidity separation. We argue that this differential correlator is better suited to distinguish the differences in positive and negative charges expected from the CMW from background effects, such as local charge conservation coupled with strong radial and anisotropic flow.
The ALICE Collaboration is collecting data with both Minimum Bias and Muon triggers in pp collisions at √s = 13 TeV in the ongoing LHC Run II. Excellent tracking and PID performance has been obtained in the central barrel and in the muon spectrometer. First results on the charged-particle pseudorapidity density and on identified-particle transverse momentum spectra at √s = 13 TeV are presented.