To gain a better understanding of complex mechanisms in biological systems, simultaneous control over multiple processes is key. For this purpose, selective photo-uncaging has been developed. Photo-uncaging is an experimental scheme in which a molecule of interest is synthetically inactivated and then activated by light. Usually a bond is cleaved and a leaving group is set free. The molecule which inactivates the molecule of interest and releases the leaving group is called a (photo-)cage. In a selective photo-uncaging scheme, a number of leaving groups can be released independently, usually by irradiation with light of different wavelengths. This approach is, however, seriously limited in its applicability by the properties of the involved cages and irradiation schemes. A major drawback is the usually quite broad UV-Vis absorption of the cages, which makes selective activation by light difficult and severely limits the maximum number of independent cages.
Therefore, the aim of this thesis is to introduce the Vibrationally Promoted Electronic Resonance (VIPER) 2D-IR pulse sequence into an alternative selective uncaging scheme.
The VIPER 2D-IR pulse sequence is a spectroscopic tool that generates 2D-IR signals whose lifetimes are independent of the vibrational relaxation lifetime. It was first used to monitor chemical exchange. It consists of a narrowband infrared pump pulse, a subsequent UV-Vis pump pulse and a broadband infrared probe pulse. The UV-Vis pump pulse is off-resonant with regard to the UV-Vis absorption band. Electronic excitation becomes possible only if the infrared pump pulse modulates the UV-Vis transition of the IR-excited molecule, bringing it into resonance with the UV-Vis pump pulse. Thereby, only the molecules that were pre-excited with the infrared pulse can be promoted to the electronically excited state. A computational prediction of the modulation was carried out by Jan von Cosel in the Burghardt group.
The narrowband infrared pump pulse can be used to selectively excite a sub-ensemble of molecules in a mixture into an electronically excited state even if the UV-Vis spectra of all molecules are virtually identical. For this, the sub-ensemble needs to exhibit an identifiable infrared spectrum. Combined with the introduction of isotope labels, which lead to changes in the infrared absorption spectra, the larger selectivity in the infrared region can be exploited for an alternative selective uncaging approach. In VIPER uncaging, the infrared pump pulse selects the species and the subsequent UV-Vis pulse provides the energy needed for electronic excitation, upon which the photo-cleavage can occur.
After an introduction to the basic idea of uncaging and to VIPER spectroscopy, the concept of VIPER uncaging is introduced and its limits and requirements are discussed. Some examples of possible VIPER cages are reviewed.
A coumarin molecule (7-diethylamino coumarin) which can release an azide group was chosen as a first test molecule for VIPER uncaging. Its isotopomers were characterized to determine suitable spectroscopic markers for successful uncaging and to find suitable experimental conditions. The chosen coumarin cage has a UV-Vis absorption band at approximately 380 nm with a steep flank on the long-wavelength side of the band. The quantum yield for the azide compound is between 10 and 20 %, depending on the solvent's water content. The release was found to occur on a picosecond timescale, which is among the fastest known photoreactions, but the photoreaction mechanism proved not to be straightforward. For the VIPER experiment on the mixture, two isotopomers were chosen with a 13C atom at different positions. In one species, a ring mode of the coumarin is changed by the 13C atom; in the other isotopomer, the carbonyl stretching mode is affected. The change in the ring-mode region makes it possible to select one species or the other with the infrared pre-excitation. Because of experimental difficulties, only isotopomers with the same leaving group could be used. The successful selective electronic excitation of the individual isotopomers in a mixture was monitored by probing the carbonyl region.
As a second VIPER cage, para-hydroxyphenacyl (pHP) was chosen, with a thiocyanate group as the leaving group. pHP cages have their electronic transition in the UV, with an absorption maximum at 290 nm. The shape of the spectrum is suitable and the quantum yield is very high, with literature values of up to 90 %. The photoreaction is also well studied and the expected byproducts are well characterized. The chosen isotopologues were characterized spectroscopically. The resulting data on the photoreaction were in agreement with the mechanism proposed in the literature. The mixture for the VIPER experiment consisted of two isotopologues: in one species all the C atoms in the ring were labelled, in the other the C atom in the thiocyanate leaving group was labelled. Here, the release of the different leaving groups, labelled and unlabelled thiocyanate, could be monitored selectively. This shows that it is possible to selectively release a molecule in a mixture of caged molecules by applying the VIPER pulse sequence.
The samples were synthesized by Matiss Reinfelds from the Heckel group, and the VIPER experiments were done together with Carsten Neumann and with the support of the Bredenbeck group.
The leaving groups were chosen because of their infrared absorption, which made it possible to directly monitor successful cleavage by spectroscopy. This was needed for the proof-of-concept experiment and for direct optimization of the experimental parameters, but is not necessarily a requirement for VIPER uncaging.
Concerning selectivity, VIPER uncaging is at the moment limited mainly by the infrared pulse energy. The selective VIPER excitation competes with unselective excitation directly by the UV-Vis pulse alone. A more intense infrared pump pulse would increase only the selective VIPER excitation and thereby improve the contrast to the unspecific background.
To address this issue, first steps towards an alternative infrared light generation were undertaken. In this alternative approach, the infrared light for pre-excitation is generated directly by difference frequency generation of the laser output, i.e. the high-energy 800 nm fundamental, and the output of a non-collinear optical parametric amplifier (NOPA). To achieve a narrowband pump pulse, the pulses are chirped before mixing. In the scope of this thesis, a NOPA was installed and the mixing was tested with an available test crystal. While the infrared wavelength region and power achieved with this crystal were not yet in the aspired range, the feasibility of mixing a NOPA output with the fundamental could be shown.
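The arithmetic of the difference frequency generation step can be sketched in a few lines: the idler frequency is the difference of the two input frequencies, so the generated infrared wavelength follows from the NOPA and fundamental wavelengths. The numerical values below are illustrative assumptions, not the settings used in the thesis.

```python
# Difference frequency generation (DFG): the idler obeys
# 1/lambda_DFG = 1/lambda_NOPA - 1/lambda_pump.
def dfg_wavelength_nm(lambda_nopa_nm, lambda_pump_nm=800.0):
    """Idler wavelength (nm) from mixing a NOPA output with the 800 nm fundamental."""
    return 1.0 / (1.0 / lambda_nopa_nm - 1.0 / lambda_pump_nm)

def wavenumber_cm1(lambda_nm):
    """Convert a wavelength in nm to a wavenumber in cm^-1."""
    return 1.0e7 / lambda_nm

# Illustrative example: a 700 nm NOPA output mixed with the 800 nm
# fundamental yields mid-IR light near the carbonyl-stretch region.
lam_ir = dfg_wavelength_nm(700.0)      # 5600 nm
nu_ir = wavenumber_cm1(lam_ir)         # ~1786 cm^-1
```

Tuning the NOPA wavelength thus shifts the generated infrared light across the fingerprint region probed in the VIPER experiments.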
Other possibilities to increase the contrast to the unspecific background excitation by the UV-Vis pump pulse are discussed. For most applications of selective VIPER uncaging, detection by fs laser spectroscopy will not be needed and could be replaced by other methods, e.g. chromatography. This would allow the experimental parameters of the VIPER pulse sequence to be changed in ways that reduce unspecific excitation, i.e. by reducing the UV-Vis pump energy, resulting in much better contrast.
In conclusion, the experimental data in this thesis show the VIPER pulse sequence to be applicable to selective uncaging schemes and indicate measures to arrive at the specificity necessary for uncaging applications. This thesis focused on uncaging photoreactions with isotopomers and isotopologues, but other types of photoreactions could in principle be controlled in the same way. It should be possible to selectively address different isomers in mixtures or different ground states of proteins. The discussed experiments are a significant step towards control over photoreactions in mixtures.
The Big Bang, roughly 13.8 billion years ago, marks the origin of the universe. All energy and matter was concentrated in a single point and has been expanding continuously ever since. Fractions of a second after the Big Bang, the temperature and density of this matter were extremely high, and the elementary particles created, in particular quarks and gluons, passed through a state called the quark-gluon plasma (QGP), within which the strong interaction dominates. Inside this plasma, quarks and gluons, which are otherwise bound in hadrons, can move freely. Direct observation of the primordial QGP is not possible with today's means. However, it is possible to study the dynamics and kinematics inside an artificially produced QGP and thereby draw conclusions about the processes during the Big Bang.
To create artificial QGPs under controlled conditions, ultra-relativistic heavy ions are brought to collision. The most powerful heavy-ion accelerator ever built, the LHC, is located at the CERN research center near Geneva. The ALICE experiment, one of the four large experiments at the LHC, was built specifically to study the QGP in detail. Fully ionized lead nuclei are collided at nearly the speed of light. The deposited energy raises the temperature of the quarks and gluons within the colliding nucleons until a critical temperature is exceeded and a phase transition into the QGP occurs. In the course of the collision, the medium cools below the critical temperature and the formerly free quarks form hadrons. These hadrons, or their decay products, can then fly into the detectors of the experiment, where they are measured.
Several possible observables of the QGP are measurable with the ALICE experiment. The observables studied in detail in this work are the invariant mass and the pair transverse momentum of dielectrons. A dielectron consists of a correlated electron and positron. Dielectrons are ideal probes of the QGP. They are produced by various processes during all phases of the collision, for example in the initial hard scatterings of the colliding nucleons or through the electromagnetic decays of various hadrons such as π0 and J/ψ. In addition, the QGP radiates dielectrons depending on its temperature, which in principle allows a direct temperature measurement of the QGP. A further advantage of measuring dielectrons rather than hadrons is that electrons and positrons carry no color charge and therefore do not interact via the strong interaction dominating inside the QGP; they can thus deliver undistorted information about its dynamics.
In this work, dielectron spectra are measured as a function of invariant mass and pair transverse momentum in lead-lead collisions at a center-of-mass energy of √sNN = 5.02 TeV. For the first time in heavy-ion collisions at one of the large LHC experiments, the minimum transverse momentum of the measured electrons and positrons could be lowered to peT > 0.2 GeV/c. Compared to the published measurement with peT > 0.4 GeV/c, this makes so-called soft processes accessible, but also increases the complexity of the measurement through a massively increased background. In addition, the measurement is performed as a function of centrality. Centrality is a measure of the distance between the two lead nuclei at the moment of the collision: the more central a collision, the larger the deposited energy and the larger and hotter the produced QGP and the resulting effects.
The measured dielectron distributions are compared with the expected contributions from hadronic decays. The measurement shows that the contribution from semileptonic decays of charm quarks measured in vacuum, scaled up with the number of binary nucleon-nucleon collisions in lead-lead events, does not describe the dielectron spectrum. Modifying this contribution according to the independently measured nuclear modification factor for single electrons from charm and beauty quarks improves the description of the dielectron spectrum. In addition, the contribution of virtual direct photons was estimated; the measured values are comparable with previous measurements at a lower center-of-mass energy. It is also possible to measure, in peripheral collisions, a contribution from a source that emits dielectrons at low transverse momentum pT,ee < 0.15 GeV/c.
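The two observables of the analysis, the invariant mass and the pair transverse momentum, follow directly from the reconstructed electron and positron momenta. A minimal sketch (the helper function and example momenta are illustrative, not part of the analysis code):

```python
import math

M_E = 0.000511  # electron mass in GeV/c^2

def pair_kinematics(p1, p2, m=M_E):
    """Invariant mass and pair transverse momentum of an e+e- pair.
    p1, p2: three-momenta (px, py, pz) in GeV/c."""
    e1 = math.sqrt(sum(c * c for c in p1) + m * m)
    e2 = math.sqrt(sum(c * c for c in p2) + m * m)
    px, py, pz = (p1[i] + p2[i] for i in range(3))
    m_ee = math.sqrt(max((e1 + e2) ** 2 - (px * px + py * py + pz * pz), 0.0))
    pt_ee = math.hypot(px, py)
    return m_ee, pt_ee

# Back-to-back 0.5 GeV/c electron and positron: m_ee ~ 1 GeV/c^2, pair pT ~ 0.
m_ee, pt_ee = pair_kinematics((0.5, 0.0, 0.0), (-0.5, 0.0, 0.0))
```

Lowering the single-track cut from peT > 0.4 GeV/c to peT > 0.2 GeV/c admits far more track pairs into exactly this calculation, which is why the combinatorial background grows so strongly.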
Focused electron and ion beam-induced deposition (FEBID/FIBID) are direct-write techniques with particular advantages in three-dimensional (3D) fabrication of ferromagnetic or superconducting nanostructures. Recently, two novel precursors, HCo3Fe(CO)12 and Nb(NMe3)2(N-t-Bu), were introduced, resulting in fully metallic CoFe ferromagnetic alloys by FEBID and superconducting NbC by FIBID, respectively. In order to properly define the writing strategy for the fabrication of 3D structures using these precursors, their temperature-dependent average residence time on the substrate and growing deposit needs to be known. This is a prerequisite for employing the simulation-guided 3D computer-aided design (CAD) approach to FEBID/FIBID, which was introduced recently. We fabricated a series of rectangular-shaped deposits by FEBID at different substrate temperatures between 5 °C and 24 °C using the precursors and extracted the activation energy for precursor desorption and the pre-exponential factor from the measured heights of the deposits using the continuum growth model of FEBID based on the reaction-diffusion equation for the adsorbed precursor.
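The extraction of the desorption activation energy and pre-exponential factor rests on the Arrhenius form of the precursor residence time, τ = ν⁻¹ exp(Ea/kB·T), so that ln τ is linear in 1/T. A hedged sketch of such a fit on synthetic data (the numerical values are made up for illustration, not the measured ones):

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def fit_arrhenius(temps_K, tau_s):
    """Fit ln(tau) = -ln(nu) + Ea/(kB*T); return (Ea in eV, nu in Hz)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(tau_s), 1)
    return slope * K_B, np.exp(-intercept)

# Synthetic check: generate residence times for assumed Ea = 0.5 eV and
# nu = 1e13 Hz over the 5..24 degC range and recover both parameters.
temps = np.array([278.15, 283.15, 288.15, 293.15, 297.15])  # K
ea_true, nu_true = 0.5, 1.0e13
tau = (1.0 / nu_true) * np.exp(ea_true / (K_B * temps))
ea_fit, nu_fit = fit_arrhenius(temps, tau)
```

In the actual experiment the residence times are inferred indirectly from deposit heights via the continuum growth model rather than measured directly, but the Arrhenius regression step is the same.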
Suppression of light nuclei production in collisions of small systems at the Large Hadron Collider
(2019)
We show that the recently observed suppression of the yield ratio of deuteron to proton and of helium-3 to proton in p+p collisions compared to those in p+Pb or Pb+Pb collisions by the ALICE Collaboration at the Large Hadron Collider (LHC) can be explained if light nuclei are produced from the coalescence of nucleons at the kinetic freeze-out of these collisions. This suppression is attributed to the non-negligible sizes of deuteron and helium-3 compared to the size of the nucleon emission source in collisions of small systems, which reduces the overlap of their internal wave functions with those of nucleons. The same model is also used to study the production of triton and hypertriton in heavy-ion collisions at the LHC. Compared to helium-3 in events of low charged particle multiplicity, the triton is less suppressed due to its smaller size and the hypertriton is even more suppressed as a result of its much larger size.
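The size argument can be made concrete with a schematic Gaussian-overlap suppression factor: a bound state of size d formed in a source of size r is suppressed roughly as a power of 1 + (d/2r)². The functional form and the sizes below are illustrative assumptions; the exact prefactors depend on the coalescence model.

```python
def suppression_factor(r_source_fm, d_nucleus_fm):
    """Schematic Gaussian-overlap suppression for forming a bound state of
    size d in an emission source of size r: (1 + (d/(2r))^2)^(-3/2).
    Illustrates the trend only; model-dependent prefactors are omitted."""
    return (1.0 + (d_nucleus_fm / (2.0 * r_source_fm)) ** 2) ** -1.5

# Illustrative sizes (assumptions): a deuteron-like object (~2 fm) in a
# small p+p source (~1.2 fm) versus a large Pb+Pb source (~5 fm).
small_system = suppression_factor(1.2, 2.0)   # strongly suppressed
large_system = suppression_factor(5.0, 2.0)   # nearly unsuppressed
```

The same formula makes the triton/hypertriton ordering plausible: at fixed source size, a smaller d gives a factor closer to one and a larger d a stronger suppression.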
We study how the mass and magnetic moment of the quarks are dynamically generated in nonequilibrium quark matter. We derive the equal-time transport and constraint equations for the quark Wigner function in a magnetized quark model and solve them in the semi-classical expansion. The quark mass and magnetic moment are self-consistently coupled to the Wigner function and controlled by the kinetic equations. While the quark mass is dynamically generated at the classical level, the quark magnetic moment is a pure quantum effect, induced by the quark spin interaction with the external magnetic field.
The plasma membrane (PM) is composed of a complex lipid mixture that forms heterogeneous membrane environments. Yet, how small-scale lipid organization controls physiological events at the PM remains largely unknown. Here, we show that ORP-related Osh lipid exchange proteins are critical for the synthesis of phosphatidylinositol (4,5)-bisphosphate [PI(4,5)P2], a key regulator of dynamic events at the PM. In real-time assays, we find that unsaturated phosphatidylserine (PS) and sterols, both Osh protein ligands, synergistically stimulate phosphatidylinositol 4-phosphate 5-kinase (PIP5K) activity. Biophysical FRET analyses suggest an unconventional co-distribution of unsaturated PS and phosphatidylinositol 4-phosphate (PI4P) species in sterol-containing membrane bilayers. Moreover, using in vivo imaging approaches and molecular dynamics simulations, we show that Osh protein-mediated unsaturated PI4P and PS membrane lipid organization is sensed by the PIP5K specificity loop. Thus, ORP family members create a nanoscale membrane lipid environment that drives PIP5K activity and PI(4,5)P2 synthesis that ultimately controls global PM organization and dynamics.
HADES (High Acceptance DiElectron Spectrometer), located at GSI, is a versatile detector for precise spectroscopy of e+e- pairs and charged hadrons produced on a fixed target in the 1 to 3.5 AGeV kinetic beam energy region. The main experimental goal is to investigate the properties of the dense nuclear matter created in heavy-ion collisions and to learn about in-medium hadron properties.
In the HADES set-up, 24 Mini Drift Chambers (MDCs) allow for track reconstruction and determination of the particle momentum by exploiting charged-particle deflection in a magnetic field. In addition, the drift chambers contribute to particle identification by measuring the energy loss. The read-out concept foresees each sensing wire being equipped with a preamplifier, analog pulse shaper and discriminator. In the current front-end electronics, the ASD-8 ASIC comprises these modules. Due to limitations of the current on-board time-to-digital converters (TDCs), especially regarding the higher reaction rates expected at the future FAIR facility (HADES at SIS-100), the electronics need to be replaced by new boards featuring multi-hit TDCs. Since ASD-8 chips cannot be procured anymore, a promising replacement candidate is the PASTTREC ASIC, developed by JU Krakow, which was tested with respect to its suitability for MDC read-out in a variety of set-ups and, where possible, in direct comparison to ASD-8.
The timing precision, being the most crucial performance parameter of the joint system of detector and read-out electronics, was assessed in two different set-ups, i.e. a cosmic muon tracking set-up and a beam test at the COSY accelerator at Juelich using a minimum ionizing proton beam.
The beam test results were reproduced, and can thus be quantitatively explained, in a three-dimensional GARFIELD simulation of a HADES MDC drift cell. In particular, the simulation is able to describe the characteristic dependence of the timing precision on the track position within the cell.
A circuit simulation (SPICE) was used to closely model the time development of a raw drift chamber pulse, measured as a response to X-rays from a 55Fe source. The insights gained from this model were used to attribute realistic charge values to the time-over-threshold values measured with the read-out ASICs in a charge calibration set-up. Furthermore, a high-level circuit simulation of the PASTTREC shaper was implemented to demonstrate the effect of the individual shaping and tail cancellation stages present in both ASICs.
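The charge calibration relies on the fact that, for a shaped pulse, the time over threshold grows monotonically with the input charge. A toy sketch with an idealized CR-RC shaper (a textbook stand-in, not the PASTTREC transfer function; all numbers are illustrative):

```python
import numpy as np

def cr_rc_response(q, tau_ns, t_ns):
    """Idealized CR-RC shaper output for an impulse of charge q
    (arbitrary units): peak-normalized form q * (t/tau) * exp(1 - t/tau)."""
    t = np.asarray(t_ns, dtype=float)
    return q * (t / tau_ns) * np.exp(1.0 - t / tau_ns)

def time_over_threshold(q, tau_ns, threshold, dt=0.01, t_max=400.0):
    """Numerical time over threshold (ns) of the shaped pulse."""
    t = np.arange(0.0, t_max, dt)
    return dt * np.count_nonzero(cr_rc_response(q, tau_ns, t) > threshold)

# Larger input charge -> longer time over threshold, the monotonic
# relation that the charge calibration set-up inverts.
tot_small = time_over_threshold(q=1.0, tau_ns=20.0, threshold=0.2)
tot_large = time_over_threshold(q=4.0, tau_ns=20.0, threshold=0.2)
```

A real calibration would additionally include the tail-cancellation stages, which shorten the fall time and hence compress the ToT-vs-charge curve.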
The Compressed Baryonic Matter experiment (CBM) at FAIR and the NA61/SHINE experiment at the CERN SPS aim to study the region of the QCD phase diagram at high net baryon densities and moderate temperatures using heavy-ion collisions. The FAIR and SPS accelerators cover energy ranges of 2-11 and 13-150 GeV per nucleon, respectively, in the laboratory frame for heavy ions up to Au and Pb. One of the key observables for studying the properties of the matter created in such collisions is the anisotropic transverse flow of particles.
In this work, the performance of the CBM experiment for anisotropic flow measurements is studied with Monte Carlo simulations of gold ions at SIS-100 energies employing different heavy-ion event generators. In addition, procedures for centrality estimation and charged-hadron identification are described, and the corresponding frameworks are developed.
The measurement of the reaction plane angle is performed with the Projectile Spectator Detector (PSD), a hadron calorimeter located at very forward angles. To prevent radiation damage by the high-intensity ion beam, the PSD has a hole in the center to let the beam pass through. Various combinations of CBM detector subsystems are used to investigate possible systematic biases in the flow and centrality measurements. The effects of detector azimuthal non-uniformity and of the PSD beam-hole size on the physics performance are studied. The resulting performance of CBM for flow measurements is demonstrated for identified charged-hadron anisotropic flow as a function of rapidity and transverse momentum in different centrality classes.
The measurement techniques developed for CBM were also validated with experimental data recently collected by the NA61/SHINE experiment at the CERN SPS for Pb+Pb collisions at a beam momentum of 30A GeV/c. Compared to the existing data from the NA49 experiment at the CERN SPS, the new data allow a more precise measurement of the anisotropic flow harmonics. The fixed-target setup of NA61/SHINE also allows extending the flow measurements available from STAR at the RHIC beam energy scan (BES) program to a wide rapidity range, up to the forward region where the projectile nucleon spectators appear. In this thesis, an analysis of the anisotropic flow harmonics in Pb+Pb collisions at a beam momentum of 30A GeV/c, collected by the NA61/SHINE experiment in 2016, is presented. Flow coefficients are measured relative to the spectator plane estimated with the Projectile Spectator Detector (PSD). The flow coefficients are obtained as a function of rapidity and transverse momentum in different classes of collision centrality. The results are compared with the corresponding NA49 data and with the measurements from the RHIC BES program.
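The core of such a flow measurement is the event-plane average v_n = ⟨cos n(φ − Ψ)⟩, with Ψ estimated here from the spectators. A toy sketch that samples particles from an azimuthal distribution with a known v₂ and recovers it (a perfectly known plane angle and no resolution correction are simplifying assumptions):

```python
import numpy as np

def flow_coefficient(phi, psi_sp, n=2):
    """Observed n-th flow harmonic relative to an estimated (spectator)
    plane angle psi_sp: v_n_obs = <cos n(phi - psi_sp)>.  A real analysis
    divides this by the event-plane resolution, omitted in this toy."""
    return float(np.mean(np.cos(n * (np.asarray(phi) - psi_sp))))

# Toy check: sample azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2*phi)
# by accept-reject (plane angle fixed at 0) and recover v2 ~ 0.1.
rng = np.random.default_rng(0)
v2_in, samples = 0.1, []
while len(samples) < 20000:
    phi = rng.uniform(0.0, 2.0 * np.pi)
    if rng.uniform(0.0, 1.0 + 2.0 * v2_in) < 1.0 + 2.0 * v2_in * np.cos(2.0 * phi):
        samples.append(phi)
v2_obs = flow_coefficient(samples, psi_sp=0.0, n=2)
```

In the real analysis the PSD only estimates the spectator plane with finite resolution, so the observed coefficient must be corrected upward by the resolution factor.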
Cortical circuits exhibit highly dynamic and complex neural activity. Intriguingly, cortical activity consistently exhibits two key features across observed species and brain areas. First, individual neurons tend to be co-active in spatially localized domains, forming orderly arranged, modular layouts with a typical spatial scale. Second, cortical elements are correlated in their activity over large distances, reflecting long-range network interactions distributed over several millimeters. Currently, it is unclear how these two fundamental properties emerge in early developing cortical activity.
Here, I aim to fill this gap by combining analyses of chronic imaging data and network models of developing cortical activity. Neural recordings of spontaneous and visually evoked activity in primary visual cortex of ferrets during their early cortical development were obtained using in vivo 2-photon and widefield epi-fluorescence calcium imaging. Spontaneous activity was used to probe the early state of cortical networks as its spatiotemporal organization is independent of a stimulus-imposed structure, and it is already present early in cortical development prior to reliably evoked responses. To assess the mature functional organization of distributed networks in cortex, the tuning of neural responses to stimulus features, in particular to the orientation of an edge-like stimulus, was assessed. Cortical responses to moving gratings of varying orientations form an orderly arranged layout of orientation domains extending over several millimeters.
To begin with, I showed that spontaneous activity correlations extend over several millimeters, supporting the assumption of using spontaneous activity to assess distributed networks in cortex.
Next, I asked how distributed networks in the mature visual cortex - assessed by spontaneous activity correlations - are related to its fine-scale functional organization. I found that the spatially extended and modular spontaneous correlation patterns accurately predict the fine spatial structure of visually evoked orientation domains several millimeters away. These results suggest a close relation between spontaneous correlations and visually evoked responses on a fine spatial scale and across large spatial distances.
As the principles governing the functional organization and development of distributed network interactions in the neocortex remain poorly understood, I next asked how long-range correlated activity arises early in development. I found that key features of mature spontaneous activity introduced in this work, including long-range spontaneous correlations, were present already early in cortical development, prior to the maturation of long-range horizontal connections and prior to the predicted mature orientation preference layout. Even after silencing the feed-forward input drive by inactivating retina or thalamus, long-range correlated and modular activity robustly emerged in early cortex. These results suggest that local recurrent connections in early cortical circuits can generate structured long-range network correlations that guide the formation of visually evoked distributed functional networks.
To investigate how these large-scale cortical networks emerge prior to the maturation and elaboration of long-range horizontal connectivity, I examined a statistical network model describing an ensemble of spatially extended spontaneous activity patterns. I found a direct relationship between the dimensionality of this ensemble of activity patterns and the decay of its correlation structure. Specifically, reducing the dimensionality of the ensemble leads to an increase in the spatial range of the correlation structure.
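The relationship between dimensionality and correlation range can be illustrated with a small numerical toy: if each activity pattern is a random combination of only a few fixed smooth basis maps, site-to-site correlations stay large even at long distances, whereas a high-dimensional ensemble decorrelates. This is a schematic 1D sketch, not the thesis model; all parameters are illustrative.

```python
import numpy as np

def long_range_correlation(n_components, n_sites=200, n_patterns=2000,
                           length_scale=15.0, seed=1):
    """Toy 1D ensemble: each pattern is a random combination of
    n_components fixed smooth basis maps (Gaussian-smoothed white noise).
    Returns the mean |correlation| between sites separated by > 100 units."""
    rng = np.random.default_rng(seed)
    x = np.arange(n_sites)
    # Fixed smooth random basis maps.
    kernel = np.exp(-0.5 * ((x - n_sites / 2) / length_scale) ** 2)
    basis = np.array([np.convolve(rng.standard_normal(n_sites), kernel,
                                  mode="same") for _ in range(n_components)])
    # Ensemble of patterns spanned by the basis -> rank <= n_components.
    patterns = rng.standard_normal((n_patterns, n_components)) @ basis
    corr = np.corrcoef(patterns.T)                  # site-by-site correlations
    far = np.abs(x[:, None] - x[None, :]) > 100     # long-range site pairs
    return float(np.abs(corr[far]).mean())

# Fewer components (lower dimensionality) -> larger long-range correlations.
low_dim = long_range_correlation(n_components=4)
high_dim = long_range_correlation(n_components=64)
```

The toy reproduces the qualitative statement of the model: shrinking the dimensionality of the pattern ensemble stretches the spatial range over which correlations persist.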
To test whether this mechanism could generate a long-range correlation structure in cortical circuits, I studied a dynamical network model implementing a dimensionality reduction mechanism. Based on previous work demonstrating that network heterogeneity reduces the dimensionality of activity patterns, I showed that by increasing the degree of heterogeneity in the network, the dimensionality of the ensemble of activity patterns decreases and in turn their correlations extend over a greater range. A comparison to experimental data revealed a quantitative match between the network model and the observations in vivo in several of the key features of the early cortex including the spatial scale of correlations. Low dimensionality of spontaneous activity thus might provide an organizational principle explaining the observed long-range correlation structure in the early cortex.
Finally, I asked whether a network with a biologically plausible architecture can generate modular activity. Several classical models showed that modular activity patterns can emerge via an intracortical mechanism involving lateral inhibition. However, this assumption appears to be in conflict with current experimental evidence. Moreover, these network models have not been experimentally tested so far. Here, I showed by using linear stability analysis that spatially localized self-inhibition relaxes the constraints on the connectivity structure in a network model, such that biologically more plausible network motifs, with shorter-ranging inhibition than excitation, can robustly generate modular activity.
Importantly, I also provided several model predictions to make the class of network models experimentally testable in view of recent technological advancements in imaging and manipulation of cortical circuits. A critical prediction of the model is the decrease in spacing of active domains when the total amount of inhibition increases. These results provide a novel mechanism of how cortical circuits with short-range inhibition can form modular activity.
Taken together, this thesis provides evidence that the two described fundamental features of neural activity are already present in the early cortex and shows that activity with those features can be generated in network models with an architecture consistent with the early cortex using basic principles.
We calculate ratios of higher-order susceptibilities quantifying fluctuations in the number of net-protons and in the net-electric charge using the Hadron Resonance Gas (HRG) model. We take into account the effect of resonance decays, the kinematic acceptance cuts in rapidity, pseudo-rapidity and transverse momentum used in the experimental analysis, as well as a randomization of the isospin of nucleons in the hadronic phase. By comparing these results to the latest experimental data from the STAR Collaboration, we determine the freeze-out conditions from net-electric charge and net-proton distributions and discuss their consistency.
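A useful baseline behind such comparisons: in an HRG with Boltzmann statistics and no cuts or decay effects, net-particle fluctuations follow a Skellam distribution (the difference of two independent Poissons), whose cumulants are κ_n = μ₊ + (−1)ⁿ μ₋. A minimal sketch with illustrative multiplicities (not fitted freeze-out values):

```python
def skellam_cumulant(n, mu_plus, mu_minus):
    """n-th cumulant of a Skellam distribution, the Boltzmann-HRG baseline
    for net-particle fluctuations: kappa_n = mu_plus + (-1)^n * mu_minus."""
    return mu_plus + (-1) ** n * mu_minus

def cumulant_ratio(n, m, mu_plus, mu_minus):
    """Ratio kappa_n / kappa_m; volume factors cancel in such ratios."""
    return skellam_cumulant(n, mu_plus, mu_minus) / skellam_cumulant(m, mu_plus, mu_minus)

# Illustrative mean proton/antiproton multiplicities:
mu_p, mu_pbar = 20.0, 5.0
c3_over_c2 = cumulant_ratio(3, 2, mu_p, mu_pbar)  # (mu_p - mu_pbar)/(mu_p + mu_pbar)
c4_over_c2 = cumulant_ratio(4, 2, mu_p, mu_pbar)  # identically 1 for any Skellam
```

Resonance decays, acceptance cuts and isospin randomization, as treated in the paper, modify these baseline ratios, which is precisely what makes them sensitive to the freeze-out conditions.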
The changing shape of the rapidity spectrum of net protons over the SPS energy range still lacks a theoretical understanding. In this work, a model for string excitation and string fragmentation is implemented for the description of high-energy collisions within a hadronic transport approach. The free parameters of the string model are tuned to reproduce the experimentally measured particle production in proton-proton collisions. With these parameters fixed, we advance to calculations for heavy-ion collisions, where in experiment the shape of the proton rapidity spectrum changes from a single-peak to a double-peak structure with increasing beam energy. We present calculations of proton rapidity spectra at different SPS energies in heavy-ion collisions. Qualitatively, good agreement with the experimental findings is obtained. In future work, the formation process of string fragments will be studied in detail, aiming to quantitatively reproduce the measurement.
We report on the successful implementation and characterization of a cryogenic solid hydrogen target in experiments on high-power laser-driven proton acceleration. When irradiating a solid hydrogen filament of 10 μm diameter with 10-Terawatt laser pulses of 2.5 J energy, protons with kinetic energies in excess of 20 MeV exhibiting non-thermal features in their spectrum were observed. The protons were emitted into a large solid angle, reaching a total conversion efficiency of several percent. Two-dimensional particle-in-cell simulations confirm our results, indicating that the spectral modulations are caused by collisionless shocks launched from the surface of the high-density filament into a low-density corona surrounding the target. The use of solid hydrogen targets may significantly improve the prospects of laser-accelerated proton pulses for future applications.
The particle-in-cell (PIC) method was developed to investigate microscopic phenomena and, with the advances in computing power, newly developed codes have been applied in several fields, such as astrophysical, magnetospheric, and solar plasmas. PIC applications have grown extensively with the large computing power available on supercomputers such as Pleiades and Blue Waters in the US. For astrophysical plasma research, PIC methods have been utilized for several topics, such as reconnection, pulsar dynamics, non-relativistic shocks, relativistic shocks, and relativistic jets. Here, PIC simulations of relativistic jets are reviewed, with emphasis placed on the physics involved in the simulations. The review summarizes PIC simulations starting with the Weibel instability in slab models of jets, and then focuses on global jet evolution in helical magnetic field geometry. In particular, we address kinetic Kelvin-Helmholtz instabilities and mushroom instabilities.
The differences between contemporary Monte Carlo generators of high energy hadronic interactions are discussed and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs) is studied. Key directions for further model improvements are outlined. The prospect for a coherent interpretation of the data in terms of the UHECR composition is investigated.
Early, non-invasive sensing of sustained hyperglycemia in mice using millimeter-wave spectroscopy
(2019)
Diabetes is a very complex condition affecting millions of people around the world. Its occurrence, always accompanied by sustained hyperglycemia, leads to many medical complications that can be greatly mitigated when the disease is treated in its earliest stage. In this paper, a novel sensing approach for the early non-invasive detection and monitoring of sustained hyperglycemia is presented. The sensing principle is based on millimeter-wave transmission spectroscopy through the skin and subsequent statistical analysis of the amplitude data. A classifier based on functional principal components for sustained hyperglycemia prediction was validated on a sample of twelve mice, correctly classifying the condition in diabetic mice. Using the same classifier, sixteen mice with drug-induced diabetes were studied for two weeks. The proposed sensing approach was capable of assessing the glycemic states at different stages of induced diabetes, providing a clear transition from normoglycemia to hyperglycemia typically associated with diabetes. This is believed to be the first presentation of such evolution studies using non-invasive sensing. The results obtained indicate that gradual glycemic changes associated with diabetes can be accurately detected by non-invasively sensing the metabolism using a millimeter-wave spectral sensor, with an observed temporal resolution of around four days. This unprecedented detection speed and its non-invasive character could open new opportunities for the continuous control and monitoring of diabetics and the evaluation of response to treatments (including new therapies), enabling a much more appropriate control of the condition.
We present a model for the autonomous and simultaneous learning of active binocular and motion vision. The model is based on the Active Efficient Coding (AEC) framework, a recent generalization of classic efficient coding theories to active perception. The model learns how to efficiently encode the incoming visual signals generated by an object moving in 3-D through sparse coding. Simultaneously, it learns how to produce eye movements that further improve the efficiency of the sensory coding. This learning is driven by an intrinsic motivation to maximize the system's coding efficiency. We test our approach on the humanoid robot iCub using simulations. The model demonstrates self-calibration of accurate object fixation and tracking of moving objects. Our results show that the model keeps improving until it hits physical constraints such as camera or motor resolution, or limits on its internal coding capacity. Furthermore, we show that the emerging sensory tuning properties are in line with results on disparity, motion, and motion-in-depth tuning in the visual cortex of mammals. The model suggests that vergence and tracking eye movements can be viewed as fundamentally having the same objective of maximizing the coding efficiency of the visual system and that they can be learned and calibrated jointly through AEC.
We study the well-known resonance ψ(4040), corresponding to a 3³S₁ charm-anticharm vector state ψ(3S), within a QFT approach in which the decay channels into D D̄, D* D̄, D* D̄*, Ds D̄s and Ds* D̄s are considered. The spectral function shows sizable deviations from a Breit-Wigner shape (an enhancement, mostly generated by D D̄* loops, occurs); moreover, besides the c̄c pole of ψ(4040), a second dynamically generated broad pole at 4 GeV emerges. Naively, it is tempting to identify this new pole with the unconfirmed state Y(4008). Yet, this state was not seen in the reaction e⁺e⁻ → ψ(4040) → D D̄*, but in processes with π⁺π⁻J/ψ in the final state. A detailed study shows a related but different mechanism: a broad peak at 4 GeV in the process e⁺e⁻ → ψ(4040) → D D̄* → π⁺π⁻J/ψ appears when D D̄* loops are considered. Its existence in this reaction is not necessarily connected to the existence of a dynamically generated pole, but the underlying mechanism, the strong coupling of c̄c to D D̄* loops, can generate both of them. Thus, the controversial state Y(4008) may not be a genuine resonance, but a peak generated by the ψ(4040) and D* D̄ loops with π⁺π⁻J/ψ in the final state.
The Projectile Spectator Detector (PSD) of the CBM experiment at the future FAIR facility is a compensating lead-scintillator calorimeter designed to measure the energy distribution of forward-going projectile nucleons and nuclear fragments (reaction spectators) produced close to beam rapidity. The detector performance for centrality and reaction-plane determination is reviewed based on Monte Carlo simulations of gold-gold collisions by means of four different heavy-ion event generators. The PSD energy resolution and the linearity of the response, measured at the CERN PS for a PSD supermodule consisting of 9 modules, are presented. Predictions of the calorimeter radiation conditions at CBM and response measurements of one PSD module equipped with neutron-irradiated MPPCs used for the light readout are discussed.
From the colour glass condensate to filamentation: systematics of classical Yang–Mills theory
(2019)
The non-equilibrium early time evolution of an ultra-relativistic heavy ion collision is often described by classical lattice Yang–Mills theory, starting from the colour glass condensate (CGC) effective theory with an anisotropic energy momentum tensor as initial condition. In this work we investigate the systematics associated with such studies and their dependence on various model parameters (IR, UV cutoffs and the amplitude of quantum fluctuations) which are not yet fixed by experiment. We perform calculations for SU() and SU(), both in a static box and in an expanding geometry. Generally, the dependence on model parameters is found to be much larger than that on technical parameters like the number of colours, boundary conditions or the lattice spacing. In a static box, all setups lead to isotropisation through chromo-Weibel instabilities, which is illustrated by the accompanying filamentation of the energy density. However, the associated time scale depends strongly on the model parameters and in all cases is longer than the phenomenologically expected one. In the expanding system, no isotropisation is observed for any parameter choice. We show how investigations at fixed initial energy density can be used to better constrain some of the model parameters.
The main phospholipid (MPL) of Thermoplasma acidophilum DSM 1728 was isolated, purified and physico-chemically characterized by differential scanning calorimetry (DSC)/differential thermal analysis (DTA) for its thermotropic behavior, alone and in mixtures with other lipids, cholesterol, hydrophobic peptides and pore-forming ionophores. Model membranes from MPL were investigated: black lipid membranes, Langmuir-Blodgett monolayers, and liposomes. Laboratory results were compared to computer simulations. MPL forms stable and resistant liposomes with a highly proton-impermeable membrane and mixes to a certain degree with common bilayer-forming lipids. Monomeric bacteriorhodopsin and ATP synthase from Micrococcus luteus were co-reconstituted and light-driven ATP synthesis was measured. This review reports on almost four decades of research on the Thermoplasma membrane and its MPL, as well as on the transfer of this research to Thermoplasma species recently isolated from Indonesian volcanoes.
The steep rise of parton densities in the limit of small parton momentum fraction x poses a challenge for describing the observed energy dependence of the total and inelastic proton-proton cross sections σ_pp^tot/inel: considering a realistic parton spatial distribution, one obtains a too-strong increase of σ_pp^tot/inel in the limit of very high energies. We discuss various mechanisms which allow one to tame such a rise, paying special attention to the role of parton-parton correlations. In addition, we investigate a potential impact on model predictions for σ_pp^tot related to dynamical higher-twist corrections to the parton-production process.
This thesis is a summary of existing and upcoming publications, with a focus on high-order methods in numerical relativity and general-relativistic flows. The text is structured in five chapters. In the first three, the ADER-DG technique and its application to the Einstein-Euler equations is introduced. Novel formulations of both the Einstein equations in the 3+1 split and general relativistic magnetohydrodynamics (GRMHD) had to be derived. The first-order conformal and covariant Z4 formulation of the Einstein equations (FO-CCZ4) is proposed and proven to be strongly hyperbolic. Together with the GRMHD fluid equations, a number of benchmark scenarios are presented to show both the correctness of the PDEs and the applicability of the numerical scheme.
As an application in astrophysics, a general-relativistic study of the threshold mass for prompt collapse of a binary neutron star merger with realistic nuclear equations of state has been carried out. A nonlinear universal relation between the threshold mass and the maximum compactness is found. Furthermore, by taking recent measurements of GW170817 into account, lower limits on the stellar radii for any mass can be given.
Furthermore, an (unpaired) work on quantum-mechanical black hole engineering is presented. Higher-dimensional extensions of the generalized Heisenberg uncertainty principle (GUP) are studied. A number of new phenomenological features are found, such as the existence of a conical singularity which mimics the effect of a gravitational monopole at short scales and that of a Schwarzschild black hole at large scales, as well as oscillating Hawking temperatures, which we call the "lighthouse effect". All results are consistent with the self-complete paradigm and a cold evaporation endpoint remnant.
For the direct image of the black hole, astronomers needed a telescope of unprecedented precision and sensitivity. The Event Horizon Telescope is not a single telescope but a network of eight radio telescopes across the world, at sites with partly challenging climatic conditions: on the summit of Mauna Kea in Hawaii, in the Atacama Desert in Chile, in Antarctica, in Mexico, in Arizona, and in the Sierra Nevada in southern Spain. ...
The brain is a large complex system which is remarkably good at maintaining stability under a wide range of input patterns and intensities. In addition, such a stable dynamical state is able to sustain essential functions, including the encoding of information about the external environment and storing memories. In order to succeed in these challenging tasks, neural circuits rely on a variety of plasticity mechanisms that act as self-organizational rules and regulate their dynamics. Based on toy models of self-organized criticality, this stable state has been proposed to be a phase transition point, poised between distinct types of unhealthy dynamics, in what has become known as the critical brain hypothesis. It is not yet known, however, if and how self-organization could drive biological neural networks towards a critical state while maintaining or improving their learning and memory functions.
Here, we investigate the emergence of criticality signatures in the form of neuronal avalanches due to self-organizational plasticity rules in a recurrent neural network. We show that power-law distributions of events, widely observed in experiments, arise from a combination of biologically inspired synaptic and homeostatic plasticity but are highly dependent on the external drive. Additionally, we describe how learning abilities and fading memory emerge and are improved by the same self-organizational processes. We finally propose an application of these enhanced functions, focusing on sequence and simple language learning tasks.
Taken together, our results suggest that the same self-organizational processes can be responsible for improving the brain’s spatio-temporal learning abilities and memory capacity while also giving rise to criticality signatures under particular input conditions, thus proposing a novel link between such abilities and neuronal avalanches. Although criticality was not verified, the detailed study of self-organization towards critical dynamics further elucidates its potential emergence and functions in the brain.
The diffusive behavior of macromolecules in solution is a key factor in the kinetics of macromolecular binding and assembly, and in the theoretical description of many experiments. Experiments on high-density protein solutions have found a slow-down of the diffusion dynamics that is larger than expected from colloidal theory for non-interacting hard spheres. It has also been shown that the rotational diffusion anisotropy in high-density protein solutions is larger than in dilute ones. A high-density protein solution is a complex fluid that differs from the neat-fluid assumption used in hydrodynamic theory. It is therefore important to have methods to accurately calculate the translational and rotational diffusion tensors from simulations, as well as simulation algorithms to explore high-density solutions.
Simulations provide a powerful tool to study diffusion in complex fluids. They can be used to study the macroscopic and microscopic effects of complex fluids on diffusive behavior. Much work has already been done to accurately simulate diffusion and to determine diffusion coefficients from simulations.
The translational diffusion of molecules in simple and complex liquids can be determined with high accuracy from simulations. This is not yet the case for rotational diffusion. Existing algorithms to calculate rotational diffusion coefficients from simulations make assumptions about the shape of the protein or only work at short times. For the simulation of the diffusive behavior of macromolecules, two options exist today: an all-atom integrator with explicit solvent molecules, or coarse-grained (CG) simulations with an implicit solvent. CG simulations of dynamic behavior with implicit solvent are also called Brownian dynamics (BD) simulations. For CG simulations, the Ermak-McCammon algorithm is often used to solve the underlying Langevin equation. The algorithm is an extension of the Euler-Maruyama integrator that includes translation and rotation in three dimensions. It only reproduces the equilibrium probability correctly for short time-steps, and the error depends linearly on the time-step. It has been shown that Monte Carlo based algorithms can produce BD for translational dynamics when appropriately parametrized. The advantage of Monte Carlo based algorithms is that they reproduce the correct equilibrium distribution independent of the chosen time-step. This in turn allows choosing larger time-steps in simulations. The aim of this thesis is to develop novel methods to accurately determine rotational diffusion coefficients from simulations and to extend existing Monte Carlo algorithms to include rotational dynamics.
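The basic Brownian-dynamics step referred to above can be sketched as follows. This is a minimal translational Euler-Maruyama integrator for a single particle in a harmonic well, not the full Ermak-McCammon scheme with rotations and hydrodynamic coupling; all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama step for overdamped (Brownian) dynamics of one particle
# in a 3-D harmonic well -- a minimal translational sketch, not the full
# Ermak-McCammon scheme.  All parameters are illustrative.
rng = np.random.default_rng(1)
D, kT, k, dt = 1.0, 1.0, 2.0, 1e-3   # diffusion coeff., temperature, spring, time-step
n_steps = 400_000

def force(x):
    return -k * x                    # harmonic restoring force

x = np.zeros(3)
samples = []
for i in range(n_steps):
    # deterministic drift (D/kT) F dt plus Gaussian noise of variance 2 D dt
    x = x + (D / kT) * force(x) * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), 3)
    if i > n_steps // 10:            # discard the equilibration phase
        samples.append(x.copy())
samples = np.array(samples)

# equipartition check: each <x_i^2> should approach kT/k = 0.5
print(samples.var(axis=0))
```

The sketch also illustrates the time-step sensitivity mentioned above: the discrete-time stationary variance deviates from kT/k by a term linear in dt, which is exactly the error a Metropolis acceptance step would remove.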
The first project addresses the question of how to accurately determine the rotational diffusion coefficients from simulations. We develop a quaternion based method to calculate the rotational diffusion tensor from simulations and a theory for the effects of periodic boundary conditions (PBC) on the rotational diffusion coefficient in simulations.
Our method for calculating rotational diffusion coefficients is based on the quaternion covariances derived by Favro for a freely rotating rigid molecule. The covariances as formulated by Favro are only valid in the principal coordinate system (PCS) of the rotational diffusion tensor. They can be generalized to an arbitrary reference coordinate system (RCS), i.e., a simulation, given the principal axes of the rotational diffusion tensor in the RCS. We show that no prior knowledge of the diffusion tensor and its principal axes is required to calculate the generalized covariances from simulations using common root-mean-square-distance (RMSD) procedures. We develop two methods to fit the covariances calculated from simulations to our generalized equations and thereby obtain the rotational diffusion tensor. In the first method, we minimize the sum of the squared deviations between model and simulation data; for this six-dimensional optimization we use a simulated annealing algorithm. Alternatively, the rotational diffusion tensor can be determined from an eigenvalue decomposition of the covariances after integration. To minimize the effects of sampling noise in the integration, we first apply a Laplace transformation, which smooths the covariances at large times. For ideal sampling, the resulting rotational diffusion coefficient should be independent of the value of the Laplace variable; in practice, the best results are achieved using a value close to the inverse autocorrelation time of the rotational motion.
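The short-time behaviour that such quaternion covariances encode can be illustrated with a toy simulation (a sketch of the general idea only, not the thesis's actual fitting procedure; D_true, dt and the lag are arbitrary illustrative values). For an isotropic rotor, each vector component of the relative orientation quaternion initially grows as ⟨q_i²⟩ ≈ D t / 2, so the short-time slope recovers the diffusion coefficient:

```python
import numpy as np

# Isotropic rotational diffusion toy model: the orientation evolves by small
# random rotations; the diffusion coefficient is then recovered from the
# short-time growth of the quaternion covariances, <q_i^2> ~ D t / 2.
rng = np.random.default_rng(0)
D_true, dt, n_steps = 0.1, 1e-3, 20_000

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

q = np.array([1.0, 0.0, 0.0, 0.0])     # initial orientation
traj = [q]
for _ in range(n_steps):
    dphi = rng.normal(0.0, np.sqrt(2.0 * D_true * dt), 3)  # random rotation vector
    angle = np.linalg.norm(dphi)
    dq = np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * dphi / angle))
    q = quat_mul(dq, q)
    q /= np.linalg.norm(q)             # guard against drift off the unit sphere
    traj.append(q)
traj = np.array(traj)

# covariance of the vector part of the relative orientation at a fixed lag
lag = 100                               # lag time t = lag * dt
rel = np.array([quat_mul(traj[i + lag], quat_conj(traj[i]))
                for i in range(0, len(traj) - lag, 10)])
msd = np.mean(rel[:, 1:] ** 2)          # per-component <q_i^2>
D_est = 2.0 * msd / (lag * dt)          # invert <q_i^2> = D t / 2
print(D_est)                            # close to D_true = 0.1
```

For an anisotropic tensor the full Favro covariances replace the single short-time slope, which is where the fitting machinery described above comes in.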
...
High-energetic heavy-ion collisions offer the unique opportunity to produce and to study dense nuclear matter in the laboratory. The future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, will provide beams of heavy nuclei up to kinetic energies of 11 GeV/nucleon. At these energies, the nuclear matter in the collision zone of two nuclei will be compressed to densities of up to 5 − 10 times the saturation density of atomic nuclei, similar to matter densities existing in the core of massive neutron stars. Under those conditions, nucleons are expected to melt and form a new state of matter, which consists of quarks and gluons, the so called Quark-Gluon Plasma (QGP). The search for such a phase transition from hadronic to partonic matter, and the exploration of the nuclear matter equation-of-state at high densities are the major goals of heavy ion experiments worldwide.
The observables, which are proposed to probe the properties of dense nuclear matter and possible phase transitions, include multi-strange hyperons, antibaryons, lepton pairs, collective flow of identified particles, fluctuations and correlations of various particles, particles containing charm quarks, and hypernuclei. These observables have to be measured in multi-dimensions, i.e. as function of collision centrality, rapidity, transverse momentum, energy, emission angle, etc., which requires extremely high statistics. Moreover, some of these particles are produced very rarely.
Therefore, the Compressed Baryonic Matter (CBM) experiment at FAIR is designed to run at collision rates of up to 10 MHz, in order to perform measurements with unprecedented precision. Due to the complicated decay topology of many observables, no hardware trigger can be applied, and the data have to be analysed online in order to filter out the interesting events.
This strategy requires free-streaming read-out electronics, which provides time stamps for all detector signals, a high-performance computer centre, and high-speed reconstruction algorithms, which provide online track and event reconstruction based on the time and position information of the detector hits ("4-D" reconstruction).
The core detector of the CBM experiment is the Silicon Tracking System (STS). The main task of the STS is to provide track reconstruction and momentum determination of charged particles originating from beam-target interactions. To fulfil these tasks, the STS is located in the large gap of a superconducting dipole magnet with a bending power of 1 Tm, providing momentum measurements for charged particles. The STS comprises 8 detector stations, positioned from 30 cm to 100 cm downstream of the target. The active area of the stations grows from 40×50 cm² to 100×100 cm², with a total area of 4 m². The double-sided silicon sensors carry 1024 strips on each side, with a stereo angle of 7.5° on the p-side and a strip pitch of 58 μm. The strip length ranges from 2 cm for sensors located in close vicinity to the beam axis up to 12 cm for sensors where the flux of the reaction products drops substantially. In total, the STS consists of 896 sensors mounted on 106 detector ladders. The detector readout electronics dissipates 40 kW and will be equipped with a bi-phase CO₂ cooling system. The detector, including its electronics, will be mounted in a thermal enclosure to allow for sensor operation below −5 °C, which minimizes radiation-induced leakage currents.
The task of the STS is to measure the trajectories of up to 800 charged particles per collision with an efficiency of more than 95% and a momentum resolution of 1−2%. In order to guarantee the required performance over the full lifetime of the CBM experiment, the detector system has to have a low material budget, a high granularity, a high signal-to-noise ratio (SNR), and a high radiation tolerance. As a result of optimisation studies, the STS consists of double-sided silicon microstrip sensors, about 300 μm thick, which have to provide an SNR of more than 10 even after irradiation with the expected equivalent lifetime fluence of 10¹⁴ 1-MeV n_eq cm⁻².
This thesis is devoted to the characterization of double-sided silicon microstrip sensors, with an emphasis on the investigation of their radiation hardness. Different prototypes of double-sided silicon sensors produced by two vendors have been irradiated with 23 MeV protons up to twice the lifetime fluence of the CBM experiment (2 × 10¹⁴ 1-MeV n_eq cm⁻²).
The sensor properties have been characterised before and after irradiation. It was found that after irradiation with twice the lifetime fluence the leakage current increased 1000-fold, which results in an increased shot noise. Moreover, the relative charge collection efficiency of irradiated with respect to non-irradiated sensors drops to 85% for the lifetime equivalent fluence, and to 73% for twice the lifetime fluence, both for the p-side and the n-side. For non-irradiated sensors the SNR was found to be in the range of 20−25, whereas for irradiated sensors it dropped to 12−17.
In addition to the sensor characterization, a part of this thesis was devoted to the optimisation of the sensor readout scheme. In order to investigate the possible increase of SNR, and to reduce the number of readout channels in the outer aperture of STS, three versions of routing lines have been realized for the p-side readout of the sensor prototype, and have been tested in the laboratory and under beam conditions.
The tests have been performed with different inclination angles between the beam direction and the sensor surface, corresponding to the polar angle acceptance of the CBM experiment, which ranges from 2.5° to 25°.
As a result of the studies carried out in this thesis work, the radiation hardness of the double-sided silicon microstrip sensors developed for the CBM STS detector was confirmed, and the advantage of individual read-out of sensor channels in the lateral regions of the detector was verified. This allowed the tendering process for sensor series production in industry to start, an important step towards the construction of the detector in the coming years.
Five decades of US, UK, German and Dutch music charts show that cultural processes are accelerating
(2019)
Analysing the timelines of the US, UK, German and Dutch music charts, we find that the evolution of album lifetimes and of the size of weekly rank changes provides evidence for an acceleration of cultural processes. For most of the past five decades, number-one albums needed more than a month to climb to the top; nowadays, in contrast, an album is either top-ranked from the start or not at all. As a consequence, over the last three decades the number of top-listed albums increased from roughly a dozen per year to about 40. The distribution of album lifetimes evolved during the last decades from a log-normal distribution to a power law, a profound change. Presenting an information-theoretic approach to human activities, we suggest that the fading relevance of personal time horizons may be causing this phenomenon. Furthermore, we find that sales- and airplay-based charts differ statistically and that the inclusion of streaming affects chart diversity adversely. We point out, in addition, that opinion dynamics may accelerate not only in cultural domains, as found here, but also in other settings, in particular in politics, where it could have far-reaching consequences.
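A standard way to quantify such a power-law lifetime distribution P(t) ∝ t^(−α) is the maximum-likelihood (Hill) estimator α̂ = 1 + n / Σ ln(t_i / t_min). A self-contained sketch on synthetic data; α and t_min are illustrative, not values extracted from the charts:

```python
import numpy as np

# Maximum-likelihood (Hill) estimate of a power-law exponent for
# lifetimes t >= t_min with P(t) ~ t^(-alpha).  Synthetic data only.
rng = np.random.default_rng(2)
alpha_true, t_min, n = 2.5, 1.0, 50_000

# inverse-transform sampling of a pure power law:
# P(T > t) = (t / t_min)^(-(alpha - 1))
u = rng.uniform(size=n)
t = t_min * u ** (-1.0 / (alpha_true - 1.0))

# closed-form MLE: alpha_hat = 1 + n / sum(log(t_i / t_min))
alpha_hat = 1.0 + n / np.sum(np.log(t / t_min))
print(alpha_hat)    # close to alpha_true = 2.5
```

Comparing the likelihood of this fit against a log-normal fit on the same data is the usual way to decide between the two distribution families named above.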
Behavior is characterized by sequences of goal-oriented conducts, such as food uptake, socializing and resting. Classically, one would define for each task a corresponding satisfaction level, with the agent engaging, at a given time, in the activity having the lowest satisfaction level. Alternatively, one may consider that the agent follows the overarching objective of generating sequences of distinct activities. Achieving a balanced distribution of activities, and not mastering a specific task, would then be the primary goal. In this setting the agent shows two types of behavior, task-oriented and task-searching phases, with the latter interleaved with the former. We study the emergence of autonomous task switching for the case of a simulated robot arm, where grasping one of several moving objects corresponds to a specific activity. Overall, the arm should follow a given object temporarily and then move away in order to search for a new target and re-engage. We show that this behavior can be generated robustly when modeling the arm as an adaptive dynamical system with a time-dependent dissipation function. The arm is in a dissipative state when searching for a nearby object, dissipating energy on approach. Once close, the dissipation function starts to increase, with the eventual sign change implying that the arm will take up energy and wander off. The resulting explorative state ends when the dissipation function becomes negative again and the arm selects a new target. We believe that our approach may be generalized to generate self-organized sequences of activities in general.
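The sign-changing dissipation mechanism can be caricatured in one dimension (a toy sketch under our own assumptions, not the authors' robot-arm model; all parameters are arbitrary): a particle is pulled harmonically towards a target at x = 0, the damping coefficient grows while the particle is far away and shrinks, eventually turning negative (energy uptake), while it lingers near the target, so approach and escape alternate autonomously:

```python
import numpy as np

# 1-D caricature of task switching via a sign-changing dissipation.
x, v, gamma = 2.0, 0.0, 1.0
dt, rate, near_radius = 1e-2, 0.2, 0.5
trace = []
for _ in range(20_000):
    near = abs(x) < near_radius
    gamma += dt * (-rate if near else rate)   # adapt the dissipation
    gamma = float(np.clip(gamma, -1.0, 1.0))
    a = -x - gamma * v                        # harmonic pull + (anti-)damping
    v += a * dt                               # semi-implicit Euler step
    x += v * dt
    trace.append(x)
trace = np.array(trace)
# the particle keeps approaching the target and wandering off again
print(np.abs(trace[2000:]).min(), np.abs(trace[10_000:]).max())
```

The trajectory neither settles at the target nor diverges; it self-organizes into alternating approach and escape episodes, which is the qualitative behaviour the paper exploits.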
Charmonia with different transverse momenta pT usually come from different mechanisms in relativistic heavy-ion collisions. This work reviews theoretical studies of quarkonium evolution in the deconfined medium produced in p-Pb and Pb-Pb collisions. Charmonia with high pT mainly come from the initial hadronic collisions and are therefore sensitive to the initial energy density of the bulk medium. Charmonia with 0.1 < pT < 5 GeV/c at Large Hadron Collider (LHC) energies are mainly produced by the recombination of charm and anti-charm quarks in the medium. At extremely low pT ∼ 1/RA (where RA is the nuclear radius), an additional contribution from the coherent interaction between the electromagnetic fields generated by one nucleus and the target nucleus plays a non-negligible role in J/ψ production, even in semi-central Pb-Pb collisions.
As a first step, a simple and pedagogical recall of the η-η′ system is presented, in which the role of the axial anomaly, related to the heterochiral nature of the multiplet of (pseudo)scalar states, is underlined. As a consequence, η is close to the octet and η′ to the singlet configuration. On the contrary, for vector and tensor states, which belong to homochiral multiplets, no anomalous contribution to masses and mixing is present. Then, the isoscalar physical states are to a very good approximation nonstrange and strange, respectively. Finally, for pseudotensor states, which are part of a heterochiral multiplet (just as the pseudoscalar ones), a sizable anomalous term is expected: η2(1645) roughly corresponds to the octet and η2(1870) to the singlet.
We investigate the well-known vector state ψ(4040) in the framework of a quantum field theoretical model. In particular, we study its spectral function and search for the pole(s) in the complex plane. Quite interestingly, the spectral function has a non-standard shape and two poles are present. The role of the meson-meson quantum loops (in particular DD* ones) is crucial and could also explain the not yet confirmed "state" Y(4008).
In this review, a summary is given of recent theoretical work on understanding accreting supermassive black hole binaries in the gravitational-wave (GW)-driven regime. A particular focus lies on theoretical predictions of the properties of disks and jets in these systems during the gravitational-wave-driven phase. Since a previous review by Schnittman (2013), which focussed on Newtonian aspects of the problem, various relativistic aspects have been studied, and we provide an update on them here. Further, a perspective is given on recent observational developments, which have seen a surge in the number of proposed supermassive black hole binary candidates. The prospect of bringing theoretical and observational efforts closer together makes this an exciting field of research for years to come.
Wilhelm H. Kegel: Obituary (Nachruf)
(2019)
We present a study of the elliptic flow and RAA of D and D̄ mesons in Au+Au collisions at FAIR energies. We propagate the charm quarks and the D mesons following a previously applied Langevin dynamics. The evolution of the background medium is modeled in two different ways: (I) we use the UrQMD hydrodynamics + Boltzmann transport hybrid approach including a phase transition to QGP, and (II) the coarse-graining approach employing an equation of state that also includes QGP. The latter approach has previously been used very successfully to describe di-lepton data at various energies. This comparison allows us to explore the effects of partial thermalization and viscous effects on the charm propagation. We explore the centrality dependence of the collisions, the variation of the decoupling temperature, and various hadronization parameters. We find that the initial partonic phase is responsible for the creation of most of the D/D̄ meson elliptic flow and that the subsequent hadronic interactions seem to play only a minor role. This indicates that the D/D̄ meson elliptic flow is a smoking gun for a partonic phase at FAIR energies. However, the results suggest that the magnitude and the details of the elliptic flow depend strongly on the dynamics of the medium and on the hadronization procedure, which is related to the medium properties as well. Therefore, even at FAIR energies the charm quark may constitute a very useful tool to probe the quark-gluon plasma and investigate its physics.
We determine the gluon and ghost spectral functions along with the analytic structure of the associated propagators from numerical data describing gauge correlators at space-like momenta obtained by either solving the Dyson-Schwinger equations or through lattice simulations. Our novel reconstruction technique shows the expected branch cut for the gluon and the ghost propagator, which, in the gluon case, is supplemented with a pair of complex conjugate poles. Possible implications of the existence of these poles are briefly addressed.
The coordinate- and momentum-space configurations of the net baryon number in heavy-ion collisions that undergo spinodal decomposition, due to a first-order phase transition, are investigated using state-of-the-art machine-learning methods. Coordinate-space clumping, which appears in the spinodal decomposition, leaves strong characteristic imprints on the spatial net-density distribution in nearly every event, which can be detected by modern machine-learning techniques. On the other hand, the corresponding features in the momentum distributions cannot clearly be detected, by the same machine-learning methods, in individual events. Only a small subset of events can be systematically differentiated if only the momentum-space information is available. This is due to the strong similarity of the two event classes, with and without spinodal decomposition. In such scenarios, conventional event-averaged observables like the baryon number cumulants signal a spinodal non-equilibrium phase transition. Indeed, the third-order cumulant, the skewness, does exhibit a peak at the beam energy (Elab = 3−4 A GeV) where the transient hot and dense system created in the heavy-ion collision reaches the first-order phase transition.
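The event-averaged cumulant analysis mentioned above can be sketched on a synthetic stand-in for event-by-event net-baryon numbers (a Skellam-like toy, not simulation data; all means are illustrative). For the difference of two Poisson variables with means μ₊ and μ₋ the cumulants are c₁ = μ₊ − μ₋, c₂ = μ₊ + μ₋ and c₃ = μ₊ − μ₋, so the skewness c₃/c₂^(3/2) is known in closed form:

```python
import numpy as np

# Event-averaged cumulants of a synthetic net-baryon distribution.
rng = np.random.default_rng(3)
mu_plus, mu_minus, n_events = 20.0, 15.0, 100_000
n = rng.poisson(mu_plus, n_events) - rng.poisson(mu_minus, n_events)

c1 = n.mean()                 # first cumulant: mean net-baryon number
d = n - c1
c2 = np.mean(d ** 2)          # second cumulant: variance
c3 = np.mean(d ** 3)          # third cumulant
skewness = c3 / c2 ** 1.5     # the signal quantity discussed above
print(c1, c2, skewness)
```

A spinodal transition would show up as a deviation of these sample cumulants, most visibly the skewness, from such a baseline expectation.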
The effect of a non-zero strangeness chemical potential on the strong interaction phase diagram has been studied within the framework of the SU(3) quark-hadron chiral parity-doublet model. Both the nuclear liquid-gas and the chiral/deconfinement phase transitions are modified. The first-order line in the chiral phase transition is observed to vanish completely, with the entire phase boundary becoming a crossover. These changes in the nature of the phase transitions are expected to modify various susceptibilities, the effects of which might be detectable in particle-number distributions resulting from moderate-temperature and high-density heavy-ion collision experiments.
Charge states and energy loss of heavy ions after passing an inductively coupled plasma target
(2019)
In various fields such as accelerator physics, warm dense matter, high-energy-density physics, and inertial confinement fusion, the interaction of heavy-ion beams with plasmas plays an important role, and numerous investigations have been and are being carried out. Taking advantage of a good level of understanding of the interaction between a swift heavy-ion beam and a hydrogen gas-discharge plasma, an engineering application of a spherical theta-pinch device as a plasma stripper for FAIR (Facility for Antiproton and Ion Research) and a scientific application of a swift heavy-ion beam as a novel plasma diagnostic tool are proposed and investigated.
The spherical theta-pinch device was manufactured, improved, and comprehensively tested for its application as a plasma stripper. The device consists mainly of an evacuated glass vessel that can be filled with gas (for example, hydrogen) and an LRC circuit comprising a capacitor bank and a set of coils. When the device is discharged at a given initial hydrogen pressure in the glass vessel and a given operating voltage of the capacitor bank, a current oscillates in the LRC circuit. The oscillating current in the set of coils induces a corresponding alternating magnetic field inside the glass vessel, which ignites and maintains a hydrogen plasma.
Based on the circuit and plasma diagnostics built for this purpose, measurements of the circuit current, the plasma light emission, the plasma shape, and the hydrogen Balmer series are carried out. The recorded signals of the circuit current and the plasma light emission of many consecutive discharges overlap perfectly, which indicates very good reproducibility of the LRC circuit parameters during discharge and of the generated plasma. From the measured circuit current, the real energy transfer efficiency is calculated with our newly proposed model, which shows its overall dependence on the hydrogen pressure and the operating voltage, with a maximum value of 25% occurring at an initial hydrogen pressure of around 25 Pa and a maximum operating voltage of 14 kV. Thus, the discharge at an initial hydrogen pressure of 20 Pa and an operating voltage of 14 ...
A plasma window is a setup that separates two regions of different pressure from each other while letting particle beams pass almost without loss.
This application of a cascaded arc discharge was proposed by A. Hershcovitch.
Within this work, such a plasma window with channel diameters of 3.3 mm and 5.0 mm was built, and the achievable pressure differences were investigated.
The focus of this work lies on determining the influence of the plasma parameters, and of their dependence on external parameters, on the achievable separation of the pressure regions.
A sophisticated optical system enables the simultaneous recording of several spectra along the discharge axis, allowing the electron density and temperature to be determined at the same time.
Self-developed software is used to analyze the plasma parameters from more than 6700 spectra.
The measured electron density ranges from 8e14 cm^-3 up to 4.2e16 cm^-3.
It scales with both the discharge current and the particle flux.
The electron temperature takes values between 1 eV and 1.3 eV and varies only slightly with the current and the particle flux.
As shown later, the data presented here agree well with simulation results and experiments of other groups.
A 98% Ar / 2% H2 mixture was used as the operating gas, since the Stark broadening of the H-beta line as well as the physical properties of argon are well described, enabling an accurate determination of the electron density and temperature.
While the pressures on the low-pressure side amount to a few mbar, pressures of up to 750 mbar are reached on the high-pressure side at particle fluxes between 4.5e20 s^-1 and 18e20 s^-1 and currents from 45 A to 60 A.
The achieved pressure ratios correspond to values between 40 and 150, which amounts to an increase by a factor of up to 12 over the pressure ratio of a simple differential pumping stage.
In addition to separating the pressure regions, the presented experiment allows the Stark broadening of emission lines to be studied.
An advantage over other setups is the possibility of recording spectra at different electron densities simultaneously.
The developed software is able to determine accurate line widths and is therefore well suited for such an application.
Unique features of this setup include the aforementioned possibility of simultaneously determining plasma parameters and line broadenings, as well as the omission of ceramic insulators between the cooling plates of the setup.
Optical inspection revealed no significant damage to the components of the setup after an operating time of more than 10 h; only the cathode tips need to be replaced every 5 h.
Within the scope of the work presented here, a Master's thesis and a Bachelor's thesis were supervised and brought to successful completion.
As shown in this work, the developed plasma window is able to separate two regions of different pressure and to maintain this separation reliably.
The underlying plasma parameters have been investigated and their influence on the separation capability of the plasma window has been described.
A natural next step is to explore technical applications of the plasma window; it could, for example, serve as a plasma stripper or protect an accelerator structure from radioactive isotopes or secondary particles produced in collision experiments.
The present study focuses on the optimization of the beam line from the heavy-ion synchrotron SIS18 to the HADES experiment. BOBYQA (Bound Optimization BY Quadratic Approximation) solves bound-constrained optimization problems without using derivatives of the objective function. Bayesian optimization is another strategy for the global optimization of costly, noisy functions without using derivatives. A Python programming interface to MADX allows the use of the Python implementations of BOBYQA and of the Bayesian method. This made it possible to use tracking simulations with MADX to determine the loss budget for each lattice setting during the optimization and to compare both optimization methods.
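The kind of derivative-free, bound-constrained search that BOBYQA performs can be illustrated with a simple compass (pattern) search on a toy objective. The quadratic "loss budget" and the bounds below are stand-ins for the actual MADX lattice optimization, and BOBYQA itself additionally builds quadratic interpolation models rather than probing coordinates:

```python
import numpy as np

def compass_search(f, x0, lower, upper, step=0.5, tol=1e-6, max_iter=10000):
    """Minimal derivative-free pattern search with box constraints.

    Probes +/- step along each coordinate, clipped to the bounds, and
    halves the step whenever no probe improves the objective.
    """
    x = np.array(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                cand = x.copy()
                cand[i] = np.clip(cand[i] + s, lower[i], upper[i])
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

# toy objective standing in for a beam-loss budget vs. two quadrupole strengths
loss = lambda k: (k[0] - 1.2) ** 2 + 2.0 * (k[1] + 0.4) ** 2
best_k, best_loss = compass_search(loss, x0=[0.0, 0.0],
                                   lower=[-2.0, -2.0], upper=[2.0, 2.0])
```

In the study, each objective evaluation would correspond to a full MADX tracking run for one lattice setting, which is exactly why derivative-free methods with few evaluations are attractive.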
With increasingly complex experiments, the demands on the detectors grow, and this work is a new contribution toward an advanced technological solution. In the present dissertation, a non-invasive optical beam diagnostic for intense ion beams in strong magnetic fields was developed. The optical system consists of miniaturized single-board CMOS cameras. Both the hardware development and the software implementation of the algorithms for camera calibration, network control, and beam reconstruction were carried out in this work. The performance of this novel diagnostic system was then demonstrated experimentally at a test stand, with the optical system embedded in the vacuum beam pipe. A hydrogen ion beam with an energy of 7 keV to 10 keV and a beam current of up to 1 mA was investigated in a nitrogen atmosphere of up to 1e-5 mbar. The ion beam was observed along the beam pipe of the toroidal segment magnet, with an arc length of 680 mm, using an xy camera system.
The beam centroid and the width of the beam profile were reconstructed in coordinate space. The gyration motion calculated analytically and simulated in other works, as well as the RxB drift of the beam centroid, could be confirmed experimentally.
Chirality is omnipresent in living nature. On the single molecule level, the response of a chiral species to a chiral probe depends on their respective handedness. A prominent example is the difference in the interaction of a chiral molecule with left or right circularly polarized light. In the present study, we show by Coulomb explosion imaging that circularly polarized light can also induce a chiral fragmentation of a planar and thus achiral molecule. The observed enantiomer strongly depends on the orientation of the molecule with respect to the light propagation direction and the helicity of the ionizing light. This finding might trigger new approaches to improve laser-driven enantioselective chemical synthesis.
Development and commissioning of two superconducting 217 MHz CH structures for the HELIAC project
(2019)
Within the work presented here, two identical CH structures were developed for the HELIAC project (HELmholtz LInear ACcelerator), currently under construction, and were accompanied throughout production up to the final cold tests at 4.2 K. Together with the CH structure of the demonstrator project, they enable the full commissioning and the first beam test of the first cryomodule of the HELIAC, which will consist of four cryomodules with a total of 12 CH structures. Compared to previous CH structures, the cavity design was fundamentally revised and optimized within this dissertation. By removing the girders and using conically shaped end caps, the stability of the new CH structures was significantly increased, so that the pressure sensitivity could be reduced by about 80% compared to the first CH cavity of the demonstrator project. The outward-drawn lamellae of the dynamic tuners reduced the mechanical stress as well as the required number of lamellae and thus the risk of multipacting. The reduced multipacting risk resulting from these cavity optimizations was verified in the later measurements by permanently overcoming all multipacting barriers. Both cavities were optimized using the simulation programs CST Studio Suite and Ansys Workbench.
Both cavities were manufactured by the company Research Instruments (RI) and monitored by various intermediate measurements throughout construction. After each individual production step, all influences on the resonance frequency were determined so precisely that the target frequency at 4.2 K was met to better than 1‰. Both during the intermediate measurements and during the final measurements at 4.2 K, automated recording routines were used, which enabled read-out of the measurement data with one-second resolution and thus a high measurement accuracy. Given the complexity of the CH structures, the small deviations from the target frequency are direct proof of how successful and precise the evaluations of the individual intermediate measurements, and the estimates derived from them, were. Overall, with the exception of the mechanical eigenmodes, all simulation results could be verified to good approximation by corresponding measurements. Two dynamic tuners were installed in each cavity, which can compensate static and dynamic frequency deviations in later operation. The dynamic tuners were extensively investigated and optimized with respect to their mechanical stability, the achievable frequency shift, and their mechanical eigenfrequencies using the simulation programs CST Studio Suite and Ansys Workbench. To verify the simulation results, a dedicated measurement setup, designed for this purpose and manufactured in the workshop of the Institute for Applied Physics, was used, which made it possible to precisely measure all decisive properties of the dynamic tuners. Overall, the extensive measurements with this setup represent the most comprehensive measurements of dynamic bellows tuners within superconducting CH structures to date and show what deviations between simulations and measurements are to be expected for future cavities.
The field distribution along the beam axis was also checked during cavity production using the bead-pull measurement method. The values determined in this way agreed very well with the simulations, with a maximum discrepancy of 9%.
To guarantee the best possible surface quality, at least 200 µm were removed from the inner surface of both structures in several steps using a mixture of hydrofluoric, nitric, and phosphoric acid. Splitting the treatment into individual steps allowed the influence of the surface treatment on the resonance frequency to be estimated and predicted more reliably. Together with the measurements to determine the pressure sensitivity and the thermal contraction of the cavity during cool-down, this led to the close agreement of the measured final resonance frequency with the target frequency.
The final cold tests of the two cavities, without helium jacket, were carried out in a vertical bath cryostat at the Institute for Applied Physics of the Johann Wolfgang Goethe University. The first CH structure was successfully tested up to a maximum field gradient of 9.2 MV/m, corresponding to an effective voltage of 3.37 MV. The unloaded quality factor dropped from an initial 1.08 × 10^9 to 2.6 × 10^8. The HELIAC project specifications call for an accelerating gradient of 5.5 MV/m with an unloaded quality factor of at least 3 × 10^8. These values were clearly exceeded by the first cavity, so that it can be used without restriction for operation within the first cryomodule.
In the second cavity, a vacuum leak occurred during cool-down to 4.2 K which was not detectable at room temperature. Due to the poor vacuum conditions inside the cavity, no performance measurements could be carried out as long as the cold leak was present. A renewed cold test of the cavity after elimination of the leak could no longer be carried out within the time frame of this work and is therefore the subject of subsequent investigations.
Overall, the developments, investigations, and measurements within this dissertation represent a decisive step toward the commissioning of the first cryomodule of the HELIAC as well as the development of further CH cavities. The revised design of the CH structures has proven successful and will therefore serve as the starting point for the development of all subsequent CH structures of the HELIAC, up to the completion of the entire accelerator.
The GSI High Energy Beam Transfer lines (HEST) link the SIS18 synchrotron with two storage rings (the Experimental Storage Ring and Cryring) and six experimental caves. The recent upgrades to the HEST beam instrumentation enable precise measurements of beam properties along the lines and allow for faster and more precise beam setup on targets. Preliminary results of some of the measurements performed during the runs in 2018 and 2019 are presented here. The focus is on response matrix measurements and quadrupole scans performed on the HADES beam line. The errors and future improvements are discussed.
Measurements of the π±, K±, and proton double differential yields emitted from the surface of the 90-cm-long carbon target (T2K replica) were performed for incoming 31 GeV/c protons with the NA61/SHINE spectrometer at the CERN SPS, using data collected during the 2010 run. The double differential π± yields were measured with increased precision compared to the previously published NA61/SHINE results, while the K± and proton yields were obtained for the first time. A strategy for dealing with the dependence of the results on the incoming proton beam profile is proposed. The purpose of these measurements is to significantly reduce the (anti)neutrino flux uncertainty in the T2K long-baseline neutrino experiment by constraining the production of (anti)neutrino ancestors coming from the T2K target.
We study the Wigner function for massive spin-1/2 fermions in electromagnetic fields. The Wigner function is solved analytically in five cases in which the electromagnetic fields are constant. For a general space-time dependent field configuration, we use the method of semi-classical expansion and solve the Wigner function at linear order in Planck's constant. At the same order, we obtain a generalized Boltzmann equation for the particle distribution and a generalized BMT equation for the spin polarization. Using the Wigner function, we calculate several physical quantities of a system in thermal equilibrium.
As its fundamental function, the brain processes and transmits information using populations of interconnected nerve cells, or neurons. The communication between these neurons occurs via discrete electric impulses called spikes. A core challenge in neuroscience has been to quantify how much information about relevant stimuli or signals a neuron transports in its spike sequences, or spike trains. The recently introduced correlation method allows one to determine this so-called mutual information in terms of a neuron's temporal spike correlations under certain stationarity assumptions. Based on the correlation method, I address several open questions regarding neural information encoding in the cortex.
In the first part (chapter 2), I investigate the role of temporal spike correlations for neural information transmission. Temporal correlations in neuronal spike trains diminish independence in the information that is transmitted by the different spikes and hence introduce redundancy to stimulus encoding. However, exact methods to describe how such spike correlations impact information transmission quantitatively have been lacking. Here, I provide a general measure for the information carried by spike trains of neurons with correlated rate modulations only, neglecting other spike correlations, and use it to investigate the effect of rate correlations on encoding redundancy. I derive it analytically by calculating the mutual information between a time correlated, rate-modulating signal and the resulting spikes of Poisson neurons. Whereas this information is determined by spike autocorrelations only, the redundancy in information encoding due to rate correlations depends on both the distribution and the autocorrelation of the rate histogram. I further demonstrate that, at very small signal strengths, the information carried by rate correlated spikes becomes identical to that of independent spikes, in effect measuring the rate modulation depth. In contrast, a vanishing signal correlation time maximizes information transmission but does not generally yield the information of independent spikes.
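The mutual information between a rate-modulating signal and Poisson spikes can be illustrated numerically for the simplest case of a discrete (binary) rate signal and a single counting window. This toy calculation is not the correlation method of the thesis, and the rate values are hypothetical:

```python
from math import exp, factorial, log2

def poisson_pmf(n, lam):
    """Probability of observing n spikes for a Poisson rate lam per window."""
    return exp(-lam) * lam ** n / factorial(n)

def mutual_information(rates, p_signal, n_max=100):
    """I(S;N) in bits between a discrete rate signal S (the spike count N
    is Poisson with mean rates[s]) and the observed count N."""
    p_n = [sum(p_signal[s] * poisson_pmf(n, rates[s]) for s in range(len(rates)))
           for n in range(n_max)]
    mi = 0.0
    for s, lam in enumerate(rates):
        for n in range(n_max):
            p_ns = poisson_pmf(n, lam)
            if p_ns > 0 and p_n[n] > 0:
                mi += p_signal[s] * p_ns * log2(p_ns / p_n[n])
    return mi

# two equiprobable rate states per window: weak vs. strong modulation
weak   = mutual_information(rates=[4.8, 5.2], p_signal=[0.5, 0.5])
strong = mutual_information(rates=[2.0, 8.0], p_signal=[0.5, 0.5])
```

The weakly modulated case carries far less information per counting window than the strongly modulated one, illustrating how the modulation depth governs the transmitted information.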
In the second part (chapter 3), I analyze the information transmission capabilities of two particular schemes of encoding stimuli in the synaptic inputs using integrate-and-fire neuron models. Specifically, I calculate the exact information contained in spike trains about signals which modulate either the mean or the variance of the somatic currents in neurons, as is observed experimentally. I show that the information content about mean modulating signals is generally substantially larger than about variance modulating signals for biological parameters. This result provides evidence, by means of exact calculations of the mutual information, against the potential benefit of variance encoding that had been suggested previously.
Another analysis reveals that higher information transmission is generally associated with a larger proportion of nonlinear signal encoding. Moreover, I show that a combination of signal-dependent mean and variance modulations of the input current can synergistically benefit information transmission through a nonlinear coupling of both channels. On a more general level, I identify what was previously considered an upper bound as the exact, full mutual information. Furthermore, by analyzing the statistics of the spike train Fourier coefficients, I identify the means of the Fourier coefficients as information-carrying features.
Overall, this work contributes answers to central questions of theoretical neuroscience concerning the neural code and neural information transmission. It sheds light on the role of signal-induced temporal correlations for neural coding by providing insight into how signal features shape redundancy and by establishing mathematical links between existing methods and providing new insights into the spike train statistics in stationary situations. Moreover, I determine what fraction of the mutual information is linearly decodable for two specific signal encoding schemes.
Bardeen black hole chemistry
(2019)
In the present paper we connect the Bardeen black hole with the concept of the recently proposed black hole chemistry. We study the thermodynamic properties of the regular black hole with an anti-de Sitter background. The negative cosmological constant Λ plays the role of the positive thermodynamic pressure of the system. After studying the thermodynamic variables, we derive the corresponding equation of state and show that a neutral Bardeen-anti-de Sitter black hole exhibits phenomenology similar to that of the chemical van der Waals fluid. This is equivalent to saying that the system exhibits criticality and a first-order small/large black hole phase transition reminiscent of the liquid/gas coexistence.
We extend the parton-hadron-string dynamics (PHSD) transport approach in the partonic sector by explicitly calculating the total and differential partonic scattering cross sections as a function of temperature T and baryon chemical potential μB on the basis of the effective propagators and couplings from the dynamical quasiparticle model (DQPM) that is matched to reproduce the equation of state of the partonic system above the deconfinement temperature Tc from lattice quantum chromodynamics (QCD). We calculate the collisional widths for the partonic degrees of freedom at finite T and μB in the time-like sector and conclude that the quasiparticle limit holds sufficiently well. Furthermore, the ratio of shear viscosity η over entropy density s, that is, η/s, is evaluated using the collisional widths and compared to lattice QCD (lQCD) calculations for μB = 0 as well. We find that the ratio η/s does not differ very much from that calculated within the original DQPM on the basis of the Kubo formalism. Furthermore, there is only a very modest change of η/s with the baryon chemical potential μB as a function of the scaled temperature T/Tc(μB). This also holds for a variety of hadronic observables from central A + A collisions in the energy range 5 GeV ≤ √sNN ≤ 200 GeV when implementing the differential cross sections into the PHSD approach. Accordingly, it will be difficult to extract finite μB signals from the partonic dynamics based on "bulk" observables.
The properties of the open strange meson K1± in nuclear matter are estimated in the QCD sum rule approach. We obtain a relation between the in-medium mass and width of the K1− (K1+) in nuclear matter and show that the upper limit of the mass shift is as large as −249 (−35) MeV. The spectral modification of the K1 meson can be probed using kaon beams at J-PARC. Such a measurement, together with that of the K⁎, will shed light on how chiral symmetry is partially restored in nuclear matter.
The present dissertation investigates the non-equilibrium dynamics of relativistic heavy-ion collisions, starting from the initial production of particles through the decay of strings, the formation of a quark-gluon plasma (QGP), its kinetic and chemical equilibration as a function of time, as well as its equilibrium transport properties at finite temperature and finite chemical potential. Understanding the early phase of heavy-ion collisions is of particular interest, since it connects the first nucleon-nucleon collisions with the quark-gluon plasma phase, which shows a certain degree of thermalization at later times. However, only non-equilibrium theories can establish a connection between the initial QGP and its, at least partial, thermalization. To describe the dynamics of a strongly interacting medium such as the quark-gluon plasma, standard transport equations (based on the Boltzmann equation) are not sufficient, and more complex theories suited for strongly correlated media must be applied. Here, hydrodynamic simulations or transport calculations based on generalized transport equations come into play. Such generalized transport equations, like the Kadanoff-Baym equations, follow from non-equilibrium quantum many-body theory, in which Green's functions in Minkowski space-time are the quantities of interest for describing the dynamics of the medium under consideration. With suitable approximations, one can thus obtain kinetic transport equations that allow a unified treatment of stable and unstable particles, also out of equilibrium.
These ingredients form the basis of the Parton-Hadron-String Dynamics (PHSD) transport model, which is therefore a suitable 'instrument' for analyzing the various phases of a heavy-ion collision, regardless of whether the different forms of matter are in equilibrium or not.
In this work, quantum chromodynamics (QCD) is first introduced, explaining how this theory was developed over the years to become a central part of the Standard Model of particle physics. We will further present the remaining challenges in our understanding of QCD, which focus primarily on the phase diagram of strongly interacting matter.
In the second chapter, we examine non-equilibrium field theory and the associated techniques, such as the Keldysh contour, for describing the Green's functions as the essential degrees of freedom. We derive the evolution equations for the Green's functions, i.e. the Kadanoff-Baym equations, using the example of a scalar field theory.
In the next chapter, the Parton-Hadron-String Dynamics (PHSD) transport model, which represents the application of the generalized transport equations to the description of relativistic heavy-ion collisions, is presented.
In chapter 4, we begin with the study of the non-equilibrium properties of the quark-gluon plasma created in relativistic heavy-ion collisions. To this end, we compare the quark-gluon plasma evolution from PHSD with a viscous hydrodynamic model that assumes local kinetic and chemical equilibrium.
In chapter 5, we focus on the early pre-equilibrium stage of ultra-relativistic heavy-ion collisions and in particular on the degrees of freedom of the QGP phase in this stage. We investigate the effects of a QGP initially consisting either of a system of massive gluons (scenario I) or, alternatively, of quarks and antiquarks (scenario II). The following chapter also deals with particle production in the early stage of heavy-ion collisions, but at lower collision energies. Here the focus is on a microscopic description of the K+/π+ ratio, i.e. the explanation of the maximum of this ratio at about 30 A GeV (the "horn") in central Au+Au (or Pb+Pb) collisions. In particular, we will study the modification of the string fragmentation process (via the Schwinger mechanism) in an environment of high hadronic density due to the partial restoration of chiral symmetry.
In chapter 7, we extend the Parton-Hadron-String Dynamics (PHSD) transport model in the partonic sector by explicitly calculating the total and differential partonic scattering cross sections as functions of the temperature T and the baryon chemical potential μB, based on the effective propagators and couplings of the Dynamical QuasiParticle Model (DQPM), which also describes the general time evolution of the partonic degrees of freedom. We find only a very modest change of η/s with the baryon chemical potential μB as a function of the scaled temperature T/Tc(μB). This also holds for a variety of hadronic observables from central A+A collisions in the energy range 5 GeV < √sNN < 200 GeV when implementing the differential cross sections into the PHSD model. Since we find only small traces of a μB dependence in heavy-ion observables, although the effective parton masses and collision widths as well as the parton cross sections clearly depend on μB, this implies that a considerable parton density and a large space-time QGP volume are required to probe the dynamics in the partonic phase. These conditions are only fulfilled at high collision energies, where, however, μB is rather low. If, on the other hand, the collision energy is reduced and thus μB increased, the hadronic phase becomes dominant and accordingly it becomes increasingly difficult to extract signals from the parton dynamics on the basis of "bulk" observables.
48Si: An atypical nucleus?
(2019)
Using the relativistic Hartree–Fock Lagrangian PKA1, we investigate the properties of the exotic nucleus 48Si, which is predicted to be an atypical nucleus characterized by i) the onset of double magicity, ii) its location at the drip line, iii) the presence of a dual semi-bubble structure (a distinct central depletion in both the neutron and proton density profiles) in the ground state, and iv) the occurrence of pairing re-entrance at finite temperature. While none of these phenomena is new by itself, they are found to occur simultaneously in 48Si. For instance, the dual semi-bubble structure reduces the spin–orbit splitting of low-ℓ orbitals and raises the s orbitals, thereby leading to distinct N=34 and Z=14 magic shells in 48Si. Consequently, the double magicity provides extra stability for such an extremely neutron-rich system at the drip line. In association with the N=34 neutron shell and the continuum above it, the pairing correlations are, interestingly, re-engaged at finite temperature. Theoretical nuclear modelings are known to be poorly predictive in general; we base our confidence in this prediction on the fact that the predictions of PKA1 in various regions of the nuclear chart have systematically been found correct, most specifically in the region of the pf shell. Whether our predictions are confirmed or not, 48Si provides a concrete benchmark for understanding the nature of the nuclear force.
We discuss the diffusion currents occurring in a dilute system and show that the charge currents do not only depend on gradients in the corresponding charge density, but also on the other conserved charges in the system; the diffusion currents are therefore coupled. Gradients in one charge thus generate dissipative currents in a different charge. In this approach, we model the Navier-Stokes term of the generated currents by a diffusion coefficient matrix, in which the diagonal entries are the usual diffusion coefficients and the off-diagonal entries correspond to the coupling of the different diffusion currents. We evaluate the complete diffusion matrix for a specific hadron gas and for a simplified quark-gluon gas, including the baryon, electric, and strangeness charges. We find that the off-diagonal entries can be of the same order of magnitude as the diagonal ones.
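In matrix form, the coupled Navier-Stokes ansatz described above can be written as follows; the notation is generic, with the thermodynamic forces taken as gradients of μq/T and the symbols κqq′ for the matrix entries chosen for illustration:

```latex
\begin{equation}
  \begin{pmatrix} \vec{j}_B \\ \vec{j}_Q \\ \vec{j}_S \end{pmatrix}
  =
  \begin{pmatrix}
    \kappa_{BB} & \kappa_{BQ} & \kappa_{BS} \\
    \kappa_{QB} & \kappa_{QQ} & \kappa_{QS} \\
    \kappa_{SB} & \kappa_{SQ} & \kappa_{SS}
  \end{pmatrix}
  \nabla \begin{pmatrix} \mu_B/T \\ \mu_Q/T \\ \mu_S/T \end{pmatrix}
\end{equation}
```

The diagonal entries are the usual diffusion coefficients, while a non-zero off-diagonal entry, e.g. κBS, means that a strangeness gradient drives a baryon current.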
We study the production of entropy in the context of a nonequilibrium chiral phase transition. The dynamical symmetry breaking is modeled by a Langevin equation for the order parameter coupled to the Bjorken dynamics of a quark plasma. We investigate the impact of dissipation and noise on the entropy and explore the possibility of reheating for crossover and first-order phase transitions, depending on the expansion rate of the fluid. The relative increase in entropy is estimated to range from 10% for a crossover to 100% for a first-order phase transition at low beam energies, which could be detected in the pion-to-proton ratio as a function of beam energy.
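A Langevin equation of the kind described, for a chiral order parameter σ coupled to a heat bath, takes the schematic form (the damping coefficient η and the effective potential V_eff are model-dependent choices, not taken from this particular work):

```latex
\[
  \partial_t^2 \sigma + \eta\,\partial_t \sigma - \nabla^2 \sigma
  + \frac{\partial V_{\mathrm{eff}}(\sigma)}{\partial \sigma} = \xi(\mathbf{x},t),
  \qquad
  \langle \xi(\mathbf{x},t)\,\xi(\mathbf{x}',t') \rangle
  = 2\,\eta\,T\,\delta^{(3)}(\mathbf{x}-\mathbf{x}')\,\delta(t-t'),
\]
```

where the noise correlator follows from the fluctuation–dissipation relation, so that dissipation and noise are consistently linked at temperature T.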
We present a study of the inclusive charged-particle transverse momentum (pT) spectra as a function of charged-particle multiplicity density at mid-pseudorapidity, dNch/dη, in pp collisions at √s = 5.02 and 13 TeV covering the kinematic range |η|<0.8 and 0.15<pT<20 GeV/c. The results are presented for events with at least one charged particle in |η|<1 (INEL>0). The pT spectra are reported for two multiplicity estimators covering different pseudorapidity regions. The pT spectra normalized to that for INEL>0 show little energy dependence. Moreover, the high-pT yields of charged particles increase faster than the charged-particle multiplicity density. The average pT as a function of multiplicity and transverse spherocity is reported for pp collisions at √s = 13 TeV. For low- (high-) spherocity events, corresponding to jet-like (isotropic) events, the average pT is higher (smaller) than that measured in INEL>0 pp collisions. Within uncertainties, the functional form of ⟨pT⟩(Nch) is not affected by the spherocity selection. While EPOS LHC gives a good description of many features of the data, PYTHIA overestimates the average pT in jet-like events.
Measurement of the production of charm jets tagged with D0 mesons in pp collisions at √s = 7 TeV
(2019)
The production of charm jets in proton-proton collisions at a center-of-mass energy of √s = 7 TeV was measured with the ALICE detector at the CERN Large Hadron Collider. The measurement is based on a data sample corresponding to a total integrated luminosity of 6.23 nb−1, collected using a minimum-bias trigger. Charm jets are identified by the presence of a D0 meson among their constituents. The D0 mesons are reconstructed from their hadronic decay D0 → K−π+. The D0-meson tagged jets are reconstructed using tracks of charged particles (track-based jets) with the anti-kT algorithm in the jet transverse momentum range 5 < pT,jetch < 30 GeV/c and pseudorapidity |ηjet| < 0.5. The fraction of charged jets containing a D0 meson increases with pT,jetch from 0.042 ± 0.004 (stat) ± 0.006 (syst) to 0.080 ± 0.009 (stat) ± 0.008 (syst). The distribution of D0-meson tagged jets as a function of the jet momentum fraction carried by the D0 meson in the direction of the jet axis (z∥ch) is reported for two ranges of jet transverse momenta, 5 < pT,jetch < 15 GeV/c and 15 < pT,jetch < 30 GeV/c, in the intervals 0.2 < z∥ch < 1.0 and 0.4 < z∥ch < 1.0, respectively. The data are compared with results from Monte Carlo event generators (PYTHIA 6, PYTHIA 8 and Herwig 7) and with a Next-to-Leading-Order perturbative Quantum Chromodynamics calculation, obtained with the POWHEG method and interfaced with PYTHIA 6 for the generation of the parton shower, fragmentation, hadronisation and underlying event.
Charged-particle pseudorapidity density at mid-rapidity in p-Pb collisions at √sNN = 8.16 TeV
(2019)
The pseudorapidity density of charged particles, dNch/dη, in p–Pb collisions has been measured at a centre-of-mass energy per nucleon–nucleon pair of √sNN = 8.16 TeV at mid-pseudorapidity for non-single-diffractive events. The results cover 3.6 units of pseudorapidity, |η|<1.8. The dNch/dη value is 19.1±0.7 at |η|<0.5. This quantity divided by ⟨Npart⟩/2, where ⟨Npart⟩ is the average number of participating nucleons, is 4.73±0.20, which is 9.5% higher than the corresponding value for p–Pb collisions at √sNN = 5.02 TeV. Measurements are compared with models based on different mechanisms for particle production. All models agree with the data within uncertainties on the Pb-going side, while on the p-going side of the dNch/dη distribution HIJING overestimates the data, showing a symmetric behaviour, and EPOS underestimates them. Saturation-based models reproduce the distributions well for η>−1.3. The dNch/dη is also measured for different centrality estimators, based both on the charged-particle multiplicity and on the energy deposited in the Zero-Degree Calorimeters. The implications of the large multiplicity fluctuations, due to the small number of participants in systems like p–Pb, for the centrality calculation with multiplicity-based estimators are discussed, demonstrating the advantages of determining the centrality from the energy deposited near beam rapidity.
Measurement of the inclusive isolated photon production cross section in pp collisions at √s = 7 TeV
(2019)
The production cross section of inclusive isolated photons has been measured by the ALICE experiment at the CERN LHC in pp collisions at a centre-of-momentum energy of √s = 7 TeV. The measurement is performed with the electromagnetic calorimeter EMCal and the central tracking detectors, covering a range of |η|<0.27 in pseudorapidity and a transverse momentum range of 10<pTγ<60 GeV/c. The result extends the pT coverage of previously published results of the ATLAS and CMS experiments at the same collision energy to smaller pT. The measurement is compared to next-to-leading order perturbative QCD calculations and to the results from the ATLAS and CMS experiments. All measurements and theory predictions are in agreement with each other.
A measurement of the production of prompt Λ+c baryons in Pb–Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC is reported. The Λ+c and Λ−c were reconstructed at midrapidity (|y| < 0.5) via the hadronic decay channel Λ+c → pK0S (and charge conjugate) in the transverse momentum and centrality intervals 6 < pT < 12 GeV/c and 0–80%. The Λ+c/D0 ratio, which is sensitive to the charm-quark hadronisation mechanisms in the medium, is measured and found to be larger than the ratio measured in minimum-bias pp collisions at √s = 7 TeV and in p–Pb collisions at √sNN = 5.02 TeV. In particular, the values in p–Pb and Pb–Pb collisions differ by about two standard deviations of the combined statistical and systematic uncertainties in the common pT interval covered by the measurements in the two collision systems. The Λ+c/D0 ratio is also compared with model calculations including different implementations of charm-quark hadronisation. The measured ratio is reproduced by models implementing a pure coalescence scenario, while adding a fragmentation contribution leads to an underestimation. The Λ+c nuclear modification factor, RAA, is also presented. The measured values of the RAA of Λ+c, D+s and non-strange D mesons are compatible within the combined statistical and systematic uncertainties. They show, however, a hint of a hierarchy (RAA(D0) < RAA(D+s) < RAA(Λ+c)), compatible with a contribution from coalescence mechanisms to charm-hadron formation in the medium.
Measurement of ϒ(1S) elliptic flow at forward rapidity in Pb-Pb collisions at √sNN = 5.02 TeV
(2019)
The first measurement of the Υ(1S) elliptic flow coefficient (v2) is performed at forward rapidity (2.5 < y < 4) in Pb–Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC. The results are obtained with the scalar product method and are reported as a function of transverse momentum (pT) up to 15 GeV/c in the 5%–60% centrality interval. The measured Υ(1S) v2 is consistent with 0 and with the small positive values predicted by transport models within uncertainties. The v2 coefficient in 2 < pT < 15 GeV/c is lower than that of inclusive J/ψ mesons in the same pT interval by 2.6 standard deviations. These results, combined with earlier suppression measurements, are in agreement with a scenario in which the Υ(1S) production in Pb–Pb collisions at LHC energies is dominated by dissociation limited to the early stage of the collision, whereas in the J/ψ case there is substantial experimental evidence of an additional regeneration component.
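In one common three-sub-event form (conventions differ between analyses), the scalar product method referred to above estimates the flow coefficient as

```latex
\[
  v_n\{\mathrm{SP}\} =
  \frac{\langle \mathbf{u}_n \cdot \mathbf{Q}_n^{A*} \rangle}
       {\sqrt{\dfrac{\langle \mathbf{Q}_n^{A} \cdot \mathbf{Q}_n^{B*} \rangle
                     \langle \mathbf{Q}_n^{A} \cdot \mathbf{Q}_n^{C*} \rangle}
                    {\langle \mathbf{Q}_n^{B} \cdot \mathbf{Q}_n^{C*} \rangle}}},
\]
```

where u_n = exp(inφ) is the unit flow vector of the particle of interest and Q_n^{A,B,C} are flow vectors built from sub-events separated in pseudorapidity, whose gaps suppress non-flow correlations.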
The production yield of prompt D mesons and their elliptic flow coefficient v2 were measured with the Event-Shape Engineering (ESE) technique applied to mid-central (10–30% and 30–50% centrality classes) Pb-Pb collisions at the centre-of-mass energy per nucleon pair √sNN = 5.02 TeV, with the ALICE detector at the LHC. The ESE technique allows the classification of events, belonging to the same centrality, according to the azimuthal anisotropy of soft particle production in the collision. The reported measurements give the opportunity to investigate the dynamics of charm quarks in the Quark-Gluon Plasma and provide information on their participation in the collective expansion of the medium. D mesons were reconstructed via their hadronic decays at mid-rapidity, |η| < 0.8, in the transverse momentum interval 1 < pT < 24 GeV/c. The v2 coefficient is found to be sensitive to the event-shape selection, confirming a correlation between the D-meson azimuthal anisotropy and the collective expansion of the bulk matter, while the per-event D-meson yields do not show any significant modification within the current uncertainties.
The ALICE Collaboration has measured the energy dependence of exclusive photoproduction of J/ψ vector mesons off proton targets in ultra-peripheral p–Pb collisions at a centre-of-mass energy per nucleon pair √sNN = 5.02 TeV. The e+e− and μ+μ− decay channels are used to measure the cross section as a function of the rapidity of the J/ψ in the range −2.5<y<2.7, corresponding to an energy in the γp centre-of-mass in the interval 40<Wγp<550 GeV. The measurements, which are consistent with a power law dependence of the exclusive J/ψ photoproduction cross section, are compared to previous results from HERA and the LHC and to several theoretical models. They are found to be compatible with previous measurements.
The jet radial structure and particle transverse momentum (pT) composition within jets are presented in centrality-selected Pb–Pb collisions at √sNN = 2.76 TeV. Track-based jets, which are also called charged jets, were reconstructed with a resolution parameter of R = 0.3 at midrapidity |ηch jet| < 0.6 for transverse momenta pT,ch jet = 30–120 GeV/c. Jet–hadron correlations in relative azimuth and pseudorapidity space (Δϕ, Δη) are measured to study the distribution of the associated particles around the jet axis for different pT,assoc ranges between 1 and 20 GeV/c. The data in Pb–Pb collisions are compared to reference distributions for pp collisions, obtained using embedded PYTHIA simulations. The number of high-pT associated particles (4 < pT,assoc < 20 GeV/c) in Pb–Pb collisions is found to be suppressed compared to the reference by 30 to 10%, depending on centrality. The radial particle distribution relative to the jet axis shows a moderate modification in Pb–Pb collisions with respect to PYTHIA. High-pT associated particles are slightly more collimated in Pb–Pb collisions compared to the reference, while low-pT associated particles tend to be broadened. The results, which are presented for the first time down to pT,ch jet = 30 GeV/c in Pb–Pb collisions, are compatible with both previous jet–hadron-related measurements from the CMS Collaboration and jet shape measurements from the ALICE Collaboration at higher pT, and add further support to the established picture of in-medium parton energy loss.
Study of the Λ–Λ interaction with femtoscopy correlations in pp and p–Pb collisions at the LHC
(2019)
This work presents new constraints on the existence and the binding energy of a possible ΛΛ bound state, the H-dibaryon, derived from Λ–Λ femtoscopic measurements by the ALICE collaboration. The results are obtained from a new measurement using the femtoscopy technique in pp collisions at √s = 13 TeV and p–Pb collisions at √sNN = 5.02 TeV, combined with previously published results from pp collisions at √s = 7 TeV. The Λ–Λ scattering parameter space, spanned by the inverse scattering length f0−1 and the effective range d0, is constrained by comparing the measured Λ–Λ correlation function with calculations obtained within the Lednický model. The data are compatible with hypernuclei results and lattice computations, both predicting a shallow attractive interaction, and permit testing different theoretical approaches describing the Λ–Λ interaction. The region in the (f0−1, d0) plane which would accommodate a ΛΛ bound state is substantially restricted compared to previous studies. The binding energy of the possible ΛΛ bound state is estimated within an effective-range expansion approach and is found to be BΛΛ = 3.2 +1.6 −2.4 (stat) +1.8 −1.0 (syst) MeV.
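The effective-range expansion underlying the quoted (f0−1, d0) parametrization reads, in the low-momentum limit,

```latex
\[
  k \cot \delta_0(k) \;=\; \frac{1}{f_0} + \frac{1}{2}\, d_0\, k^2 + \mathcal{O}(k^4),
\]
```

and a bound state corresponds to a pole of the scattering amplitude at imaginary momentum k = iκ, with binding energy B = κ²/(2μ) for reduced mass μ (here μ = mΛ/2). This is how the quoted BΛΛ can be extracted from the fitted scattering parameters.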
The measurements of the production of prompt D0, D+, D∗+, and D+s mesons in proton–proton (pp) collisions at √s = 5.02 TeV with the ALICE detector at the Large Hadron Collider (LHC) are reported. D mesons were reconstructed at mid-rapidity (|y|<0.5) via their hadronic decay channels D0→K−π+, D+→K−π+π+, D∗+→D0π+→K−π+π+, D+s→ϕπ+→K+K−π+, and their charge conjugates. The production cross sections were measured in the transverse momentum intervals 0<pT<36 GeV/c for D0, 1<pT<36 GeV/c for D+ and D∗+, and 2<pT<24 GeV/c for D+s mesons. Thanks to the higher integrated luminosity, an analysis in finer pT bins with respect to the previous measurements at √s = 7 TeV was performed, allowing for a more detailed description of the pT shape of the cross section. The measured pT-differential production cross sections are compared to the results at √s = 7 TeV and to four different perturbative QCD calculations. The rapidity dependence is also tested by combining the ALICE and LHCb measurements in pp collisions at √s = 5.02 TeV. This measurement will allow for a more accurate determination of the nuclear modification factor in p–Pb and Pb–Pb collisions performed at the same nucleon–nucleon centre-of-mass energy.
Two-particle correlations in high-energy collision experiments enable the extraction of particle source radii by using the Bose-Einstein enhancement of pion production at low relative momentum q ∝ 1/R. It was previously observed that in pp collisions at √s = 7 TeV the average pair transverse momentum kT range of such analyses is limited due to large background correlations which were attributed to mini-jet phenomena. To investigate this further, an event-shape dependent analysis of Bose-Einstein correlations for pion pairs is performed in this work. By categorizing the events by their transverse sphericity ST into spherical (ST > 0.7) and jet-like (ST < 0.3) events, a method was developed that allows for the determination of source radii for much larger values of kT for the first time. Spherical events demonstrate little or no background correlations while jet-like events are dominated by them. This observation agrees with the hypothesis of a mini-jet origin of the non-femtoscopic background correlations and gives new insight into the physics interpretation of the kT dependence of the radii. The emission source size in spherical events shows a substantially diminished kT dependence, while jet-like events show indications of a negative trend with respect to kT in the highest multiplicity events. Regarding the emission source shape, the correlation functions for both event sphericity classes show good agreement with an exponential shape, rather than a Gaussian one.
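The transverse sphericity ST used for the event classification can be sketched as follows. This is a minimal illustration assuming the common ALICE-style linearized definition, ST = 2λ2/(λ1+λ2) from the eigenvalues of a 1/pT-weighted transverse momentum matrix; the exact weighting of a given analysis may differ.

```python
import numpy as np

def transverse_sphericity(px, py):
    """S_T = 2*lambda_min / (lambda_min + lambda_max) of the linearized
    (1/pT-weighted) transverse momentum matrix. S_T -> 0 for back-to-back
    jet-like events, S_T -> 1 for isotropic events."""
    px, py = np.asarray(px, float), np.asarray(py, float)
    pt = np.hypot(px, py)
    # build the 2x2 sphericity matrix, each track weighted by 1/pT
    s = np.array([[np.sum(px * px / pt), np.sum(px * py / pt)],
                  [np.sum(px * py / pt), np.sum(py * py / pt)]]) / np.sum(pt)
    lam = np.linalg.eigvalsh(s)          # eigenvalues in ascending order
    return 2.0 * lam[0] / (lam[0] + lam[1])

# back-to-back ("jet-like") two-track event -> S_T = 0
print(transverse_sphericity([2.0, -2.0], [0.0, 0.0]))
# isotropic four-track event -> S_T = 1
print(transverse_sphericity([1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0]))
```

The two toy events show why the ST > 0.7 / ST < 0.3 cuts separate isotropic from jet-dominated topologies.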
The transverse structure of jets was studied via jet fragmentation transverse momentum (jT) distributions, obtained using two-particle correlations in proton-proton and proton-lead collisions, measured with the ALICE experiment at the LHC. The highest transverse momentum particle in each event is used as the trigger particle and the region 3 < pTt < 15 GeV/c is explored in this study. The measured distributions show a clear narrow Gaussian component and a wide non-Gaussian one. Based on PYTHIA simulations, the narrow component can be related to non-perturbative hadronization and the wide component to quantum chromodynamical splitting. The width of the narrow component shows a weak dependence on the transverse momentum of the trigger particle, in agreement with the expectation of universality of the hadronization process. On the other hand, the width of the wide component shows a rising trend, suggesting increased branching for higher transverse momentum. The results obtained in pp collisions at √s = 7 TeV and in p–Pb collisions at √sNN = 5.02 TeV are compatible within uncertainties and hence no significant cold nuclear matter effects are observed. The results are compared to previous measurements from CCOR and PHENIX as well as to PYTHIA 8 and Herwig 7 simulations.
Over the last 20 years, modern heuristic algorithms and machine learning have been used increasingly for several purposes in accelerator technology and physics. Since computing power has become less and less of a limiting factor, these tools have become part of the physics community's standard toolkit [1–5]. This paper describes the construction of an algorithm that generates an optimised lattice design for transfer lines under the restrictions that usually limit design options in practice. The developed algorithm has been applied to the existing SIS18-to-HADES transfer line at GSI.
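The kind of heuristic loop described can be sketched as a simple (1+1) evolution strategy. Everything below is hypothetical: the `mismatch` objective, the four-parameter quadrupole setting, and the strength limits are stand-ins for illustration only, not taken from the paper, where the objective would come from tracking beam envelopes through the actual line.

```python
import numpy as np

rng = np.random.default_rng(0)

def mismatch(k):
    """Stand-in objective: squared distance to an arbitrary 'matched'
    optics setting (hypothetical). A real implementation would evaluate
    beam-optics figures of merit for the transfer line."""
    target = np.array([1.2, -0.8, 0.5, -0.3])
    return float(np.sum((k - target) ** 2))

k = np.zeros(4)                 # initial quadrupole strengths (toy units)
best = mismatch(k)
step = 0.5
for _ in range(2000):
    # propose a mutated setting, respecting hard magnet-strength limits
    trial = np.clip(k + rng.normal(scale=step, size=k.size), -2.0, 2.0)
    f = mismatch(trial)
    if f < best:                # greedy acceptance
        k, best = trial, f
        step *= 1.2             # crude 1/5-rule-style step adaptation
    else:
        step *= 0.98
print(f"best mismatch after search: {best:.6f}")
```

The appeal of such schemes in the accelerator context is precisely what the paragraph notes: they need only an objective evaluation per candidate, so design restrictions enter naturally as hard clips or penalty terms.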
The second (v2) and third (v3) flow harmonic coefficients of J/ψ mesons are measured at forward rapidity (2.5 < y < 4.0) in Pb-Pb collisions at √sNN = 5.02 TeV with the ALICE detector at the LHC. Results are obtained with the scalar product method and reported as a function of transverse momentum, pT, for various collision centralities. A positive value of J/ψ v3 is observed with 3.7σ significance. The measurements, compared to those of prompt D0 mesons and charged particles at mid-rapidity, indicate an ordering with vn(J/ψ) < vn(D0) < vn(h±) (n = 2, 3) at low and intermediate pT up to 6 GeV/c and a convergence with v2(J/ψ) ≈ v2(D0) ≈ v2(h±) at high pT above 6–8 GeV/c. In semi-central collisions (5–40% and 10–50% centrality intervals) at intermediate pT between 2 and 6 GeV/c, the ratio v3/v2 of J/ψ mesons is found to be significantly lower (4.6σ) with respect to that of charged particles. In addition, the comparison to the prompt D0-meson ratio in the same pT interval suggests an ordering similar to that of the v2 and v3 coefficients. The J/ψ v2 coefficient is further studied using the Event Shape Engineering technique. The obtained results are found to be compatible with the expected variations of the eccentricity of the initial-state geometry.
Gravitational waves, electromagnetic radiation, and the emission of high-energy particles probe the phase structure of the equation of state of dense matter produced at the crossroad of the closely related relativistic collisions of heavy ions and binary neutron star mergers. 3+1 dimensional special- and general-relativistic hydrodynamic simulation studies reveal a unique window of opportunity to observe phase transitions in compressed baryon matter by laboratory-based experiments and by astrophysical multimessenger observations. The astrophysical consequences of a hadron-quark phase transition in the interior of a compact star are the focus of this article. Especially with a future detection of the post-merger gravitational wave emission emanating from a binary neutron star merger event, it would be possible to explore the phase structure of quantum chromodynamics. The astrophysical observables of a hadron-quark phase transition in a single compact-star system and in a binary hybrid-star merger scenario are summarized within this article. The FAIR facility at GSI Helmholtzzentrum allows one to study the universe in the laboratory; several astrophysical signatures of the quark-gluon plasma have been found in relativistic collisions of heavy ions and will be explored in future experiments.
The long-awaited detection of a gravitational wave from the merger of a binary neutron star in August 2017 (GW170817) marks the beginning of the new field of multi-messenger gravitational wave astronomy. By exploiting the extracted tidal deformations of the two neutron stars from the late inspiral phase of GW170817, it is now possible to constrain several global properties of the equation of state of neutron star matter. However, the most interesting part of the high density and temperature regime of the equation of state is solely imprinted in the post-merger gravitational wave emission from the remnant hypermassive/supramassive neutron star. This regime was not observed in GW170817, but will possibly be detected in forthcoming events within the current observing run of the LIGO/VIRGO collaboration. Numerous numerical-relativity simulations of merging neutron star binaries have been performed during the last decades, and the emitted gravitational wave profiles and the interior structure of the generated remnants have been analysed in detail. The consequences of a potential appearance of a hadron-quark phase transition in the interior region of the produced hypermassive neutron star, and the evolution of its underlying matter in the phase diagram of quantum chromodynamics, are the focus of this article. It will be shown that the different density/temperature regions of the equation of state can be severely constrained by a measurement of the spectral properties of the emitted post-merger gravitational wave signal from a future binary compact star merger event.
In this work, we discuss the dense matter equation of state (EOS) for the extreme range of conditions encountered in neutron stars and their mergers. The calculation of the properties of such an EOS involves modeling different degrees of freedom (such as nuclei, nucleons, hyperons, and quarks), taking into account different symmetries, and including finite density and temperature effects in a thermodynamically consistent manner. We begin by addressing subnuclear matter consisting of nucleons and a small admixture of light nuclei in the context of the excluded volume approach. We then turn our attention to supranuclear homogeneous matter as described by the Chiral Mean Field (CMF) formalism. Finally, we present results from realistic neutron-star-merger simulations performed using the CMF model that predict signatures for deconfinement to quark matter in gravitational wave signals.
An improved value for the lifetime of the (anti-)hypertriton has been obtained using the data sample of Pb–Pb collisions at √sNN = 5.02 TeV collected by the ALICE experiment at the LHC. The (anti-)hypertriton has been reconstructed via its charged two-body mesonic decay channel and the lifetime has been determined from an exponential fit to the dN/d(ct) spectrum. The measured value, τ = 242+34 −38 (stat.) ± 17 (syst.) ps, is compatible with representative theoretical predictions, thus contributing to the solution of the longstanding hypertriton lifetime puzzle.
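The lifetime extraction described, an exponential fit to the dN/d(ct) spectrum, can be illustrated with a toy example. This is not the ALICE analysis: the sample below is generated from an assumed cτ of 7.25 cm (roughly c × 242 ps) purely to show how the fit recovers the decay constant.

```python
import numpy as np
from scipy.optimize import curve_fit

# toy sample: ct values drawn from an exponential with assumed c*tau
rng = np.random.default_rng(42)
ctau_true = 7.25                               # cm, hypothetical input
ct = rng.exponential(scale=ctau_true, size=200_000)

# histogram the decay lengths -> dN/d(ct) spectrum
counts, edges = np.histogram(ct, bins=40, range=(0.0, 30.0))
centers = 0.5 * (edges[:-1] + edges[1:])

def expo(x, n0, ctau):
    """Exponential decay law N(ct) = N0 * exp(-ct / ctau)."""
    return n0 * np.exp(-x / ctau)

popt, _ = curve_fit(expo, centers, counts, p0=(counts[0], 5.0))
print(f"fitted c*tau = {popt[1]:.2f} cm")
```

The fitted cτ converts back to a lifetime via τ = cτ/c; in the real measurement the spectrum is additionally corrected for acceptance and efficiency before fitting.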
Inclusive J/ψ production is studied in minimum-bias proton-proton collisions at a centre-of-mass energy of √s = 5.02 TeV by ALICE at the CERN LHC. The measurement is performed at mid-rapidity (|y| < 0.9) in the dielectron decay channel down to zero transverse momentum pT, using a data sample corresponding to an integrated luminosity of Lint = 19.4 ± 0.4 nb−1. The measured pT-integrated inclusive J/ψ production cross section is dσ/dy = 5.64 ± 0.22 (stat.) ± 0.33 (syst.) ± 0.12 (lumi.) μb. The pT-differential cross section d2σ/dpTdy is measured in the pT range 0–10 GeV/c and compared with state-of-the-art QCD calculations. The J/ψ ⟨pT⟩ and ⟨p2T⟩ are extracted and compared with results obtained at other collision energies.
Charged-particle spectra at midrapidity are measured in Pb–Pb collisions at the centre-of-mass energy per nucleon–nucleon pair √sNN = 5.02 TeV and presented in centrality classes ranging from most central (0–5%) to most peripheral (95–100%) collisions. Possible medium effects are quantified using the nuclear modification factor (RAA) by comparing the measured spectra with those from proton–proton collisions, scaled by the number of independent nucleon–nucleon collisions obtained from a Glauber model. At large transverse momenta (8 < pT < 20 GeV/c), the average RAA is found to increase from about 0.15 in 0–5% central to a maximum value of about 0.8 in 75–85% peripheral collisions, beyond which it falls off strongly to below 0.2 for the most peripheral collisions. Furthermore, RAA initially exhibits a positive slope as a function of pT in the 8–20 GeV/c interval, while for collisions beyond the 80% class the slope is negative. To reduce uncertainties related to event selection and normalization, we also provide the ratio of RAA in adjacent centrality intervals. Our results in peripheral collisions are consistent with a PYTHIA-based model without nuclear modification, demonstrating that biases caused by the event selection and collision geometry can lead to the apparent suppression in peripheral collisions. This explains the unintuitive observation that RAA is below unity in peripheral Pb–Pb, but equal to unity in minimum-bias p–Pb collisions despite similar charged-particle multiplicities.
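The nuclear modification factor used here is defined, as the text describes, by scaling the pp reference with the Glauber-model number of binary collisions:

```latex
\[
  R_{\mathrm{AA}}(p_{\mathrm{T}}) \;=\;
  \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,
  \frac{\mathrm{d}N_{\mathrm{AA}}/\mathrm{d}p_{\mathrm{T}}}
       {\mathrm{d}N_{pp}/\mathrm{d}p_{\mathrm{T}}},
\]
```

so that R_AA = 1 corresponds to the absence of nuclear effects, and values well below unity at high pT signal suppression relative to an incoherent superposition of nucleon–nucleon collisions.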
In this letter, the production of deuterons and anti-deuterons in pp collisions at √s = 7 TeV is studied as a function of the charged-particle multiplicity density at mid-rapidity with the ALICE detector at the LHC. Production yields are measured at mid-rapidity in five multiplicity classes and as a function of the deuteron transverse momentum (pT). The measurements are discussed in the context of hadron–coalescence models. The coalescence parameter B2, extracted from the measured spectra of (anti-)deuterons and primary (anti-)protons, exhibits no significant pT-dependence for pT < 3 GeV/c, in agreement with the expectations of a simple coalescence picture. At fixed transverse momentum per nucleon, the B2 parameter is found to decrease smoothly from low multiplicity pp to Pb–Pb collisions, in qualitative agreement with more elaborate coalescence models. The measured mean transverse momentum of (anti-)deuterons in pp is not reproduced by the Blast-Wave model calculations that simultaneously describe pion, kaon and proton spectra, in contrast to central Pb–Pb collisions. The ratio between the pT-integrated yield of deuterons to protons, d/p, is found to increase with the charged-particle multiplicity, as observed in inelastic pp collisions at different centre-of-mass energies. The d/p ratios are reported in a wide range, from the lowest to the highest multiplicity values measured in pp collisions at the LHC.
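The coalescence parameter B2 follows the conventional definition, relating the invariant deuteron yield to the square of the proton yield evaluated at half the deuteron momentum:

```latex
\[
  E_{d}\,\frac{\mathrm{d}^3 N_d}{\mathrm{d}p_d^3}
  \;=\; B_2 \left( E_{p}\,\frac{\mathrm{d}^3 N_p}{\mathrm{d}p_p^3} \right)^{\!2}
  \Bigg|_{\vec{p}_p = \vec{p}_d/2},
\]
```

which is why the text compares B2 "at fixed transverse momentum per nucleon": in a simple coalescence picture B2 is independent of pT and inversely related to the source volume.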
Measurements of the production of prompt D0, D+, D*+, and D+s mesons in proton–lead (p–Pb) collisions at the centre-of-mass energy per nucleon pair of √sNN = 5.02 TeV, with an integrated luminosity of 292 ± 11 μb−1, are reported. Differential production cross sections are measured at mid-rapidity (−0.96 < ycms < 0.04) as a function of transverse momentum (pT) in the intervals 0 < pT < 36 GeV/c for D0, 1 < pT < 36 GeV/c for D+ and D*+, and 2 < pT < 24 GeV/c for D+s mesons. For each species, the nuclear modification factor RpPb is calculated as a function of pT using a proton-proton (pp) reference measured at the same collision energy. The results are compatible with unity in the whole pT range. The average of the non-strange D-meson RpPb is compared with theoretical model predictions that include initial-state effects and with parton transport model predictions. The pT dependence of the D0, D+, and D*+ nuclear modification factors is also reported in the interval 1 < pT < 36 GeV/c as a function of the collision centrality, and the central-to-peripheral ratios are computed from the D-meson yields measured in different centrality classes. The results are further compared with charged-particle measurements and a similar trend is observed in all the centrality classes. The ratios of the pT-differential cross sections of D0, D+, D*+, and D+s mesons are also reported. The D+s and D+ yields are compared as a function of the charged-particle multiplicity for several pT intervals. No modification in the relative abundances of the four species is observed with respect to pp collisions within the statistical and systematic uncertainties.
First results on K/π, p/π and K/p fluctuations are obtained with the ALICE detector at the CERN LHC as a function of centrality in Pb–Pb collisions at √sNN = 2.76 TeV. The observable νdyn, which is defined in terms of the moments of particle multiplicity distributions, is used to quantify the magnitude of dynamical fluctuations of relative particle yields and also provides insight into the correlation between particle pairs. This study is based on a novel experimental technique, called the Identity Method, which allows one to measure the moments of multiplicity distributions in the case of incomplete particle identification. The results for p/π show a change of sign in νdyn from positive to negative towards more peripheral collisions. For central collisions, the results follow the smooth trend of the data at lower energies, and νdyn exhibits a change in sign for p/π and K/p.
Production cross sections of muons from semi-leptonic decays of charm and beauty hadrons were measured at forward rapidity (2.5 < y < 4) in proton-proton (pp) collisions at a centre-of-mass energy √s = 5.02 TeV with the ALICE detector at the CERN LHC. The results were obtained in an extended transverse momentum interval, 2 < pT < 20 GeV/c, and with an improved precision compared to previous measurements performed in the same rapidity interval at centre-of-mass energies √s = 2.76 and 7 TeV. The pT- and y-differential production cross sections as well as the pT-differential production cross section ratios between different centre-of-mass energies and different rapidity intervals are described, within experimental and theoretical uncertainties, by predictions based on perturbative QCD.
The single crystal growth of 19 different intermetallic compounds within the LnT2X2 family (with Ln = lanthanides, T = Co, Ru, Rh, Ir, and X = Si, P) is presented, by employing a high-temperature metal-flux technique. The habitus of the obtained crystals is platelet-like with the crystallographic c direction perpendicular to the surface and with individual masses between 1 and 100 mg. The magnetic properties of these crystals are characterized by magnetization, heat-capacity, and resistivity measurements. These crystals form the materials basis for a thorough study of exciting surface properties by angle-resolved photoemission spectroscopy.
CMOS Monolithic Active Pixel Sensors for charged-particle tracking (CPS) are ultra-light and highly granular silicon pixel detectors suited for highly sensitive charged-particle tracking. Unlike most other silicon radiation detectors, they rely on standard CMOS technology. This cost-efficient approach allows for building particularly small and thin pixels, but until recently it also imposed substantial constraints on the design of the sensors. The most important among them is the missing compatibility with the use of PMOS transistors and depleted charge-collection diodes in the pixel. Traditional CPS were thus primarily suited for vertex detectors of relativistic heavy-ion and particle-physics experiments, which require highest tracking accuracy in combination with moderate time resolution and radiation tolerance.
This work reviews the R&D on understanding and improving the radiation tolerance of traditional CPS with non- and partially depleted active medium, as pioneered by the MIMOSA series developed by the IPHC Strasbourg. It introduces the specific measurement methods used to assess the radiation tolerance of those non-standard pixels. Moreover, it discusses the major mechanisms of radiation damage and procedures for radiation hardening, which allowed the radiation tolerance of the devices to be extended by more than an order of magnitude.
73Ge(n,γ) cross sections were measured at the neutron time-of-flight facility n_TOF at CERN up to neutron energies of 300 keV, providing for the first time experimental data above 8 keV. The results indicate that the stellar cross section at kT = 30 keV is 1.5 to 1.7 times higher than most theoretical predictions. The new cross sections result in a substantial decrease of the 73Ge produced in stars, which would explain the low isotopic abundance of 73Ge in the solar system.
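The stellar cross section quoted at kT = 30 keV is a Maxwellian-averaged cross section (MACS); as a reminder of the quantity involved, its conventional definition is (standard form, not specific to this measurement):

```latex
\langle \sigma \rangle_{kT}
  = \frac{2}{\sqrt{\pi}} \, \frac{1}{(kT)^{2}}
    \int_{0}^{\infty} \sigma(E)\, E \, e^{-E/kT} \, \mathrm{d}E
```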
Application of the Luttinger theorem to the Kondo lattice YbRh2Si2 suggests that its large 4f-derived Fermi surface (FS) in the paramagnetic (PM) regime should be similar in shape and volume to that of the divalent local-moment antiferromagnet (AFM) EuRh2Si2 in its PM regime. Here we show by angle-resolved photoemission spectroscopy that paramagnetic EuRh2Si2 has a large FS essentially similar to the one seen in YbRh2Si2 down to 1 K. In EuRh2Si2 the onset of AFM order below 24.5 K induces an extensive fragmentation of the FS due to Brillouin zone folding, intersection and resulting hybridization of the Fermi-surface sheets. Our results on EuRh2Si2 indicate that the formation of the AFM state in YbRh2Si2 is very likely also connected with similar changes in the FS, which have to be taken into account in the controversial analysis and discussion of anomalies observed at the quantum critical point in this system.
The present thesis is primarily concerned with the application of the functional renormalization group (FRG) to spin systems. In the first part, we study the critical regime close to the Berezinskii-Kosterlitz-Thouless (BKT) transition in several systems. Our starting point is the dual-vortex representation of the two-dimensional XY model, which is obtained by applying a dual transformation to the Villain model. In order to deal with the integer-valued field corresponding to the dual vortices, we apply the lattice FRG formalism developed by Machado and Dupuis [Phys. Rev. E 82, 041128 (2010)]. Using a Litim regulator in momentum space with the initial condition of isolated lattice sites, we then recover the Kosterlitz-Thouless renormalization group equations for the rescaled vortex fugacity and the dimensionless temperature. In addition to our previously published approach based on the vertex expansion [Phys. Rev. E 96, 042107 (2017)], we also present an alternative derivation within the derivative expansion. We then generalize our approach to the O(2) model and to the strongly anisotropic XXZ model, which enables us to show that weak amplitude fluctuations as well as weak out-of-plane fluctuations do not change the universal properties of the BKT transition.
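For orientation, the Kosterlitz-Thouless renormalization group equations recovered here for the rescaled vortex fugacity y and the dimensionless stiffness K are conventionally written as (the numerical prefactors depend on the chosen conventions):

```latex
\frac{\mathrm{d}y}{\mathrm{d}\ell} = \left(2 - \pi K\right) y ,
\qquad
\frac{\mathrm{d}K^{-1}}{\mathrm{d}\ell} = 4\pi^{3} y^{2}
```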
In the second part of this thesis, we develop a new FRG approach to quantum spin systems. In contrast to previous works, our spin functional renormalization group (SFRG) does not rely on a mapping to bosonic or fermionic fields, but instead deals directly with the spin operators. Most importantly, we show that the generating functional of the irreducible vertices obeys an exact renormalization group equation, which resembles the Wetterich equation of a bosonic system. As a consequence, the non-trivial structure of the su(2) algebra is fully taken into account by the initial condition of the renormalization group flow. Our method is motivated by the spin-diagrammatic approach to quantum spin systems that was developed more than half a century ago in a seminal work by Vaks, Larkin, and Pikin (VLP) [Sov. Phys. JETP 26, 188 (1968)]. By embedding their ideas in the language of the modern renormalization group, we avoid the complicated diagrammatic rules while at the same time allowing for novel approximation schemes. As a demonstration, we explicitly show how VLP's results for the leading corrections to the free energy and to the longitudinal polarization function of a ferromagnetic Heisenberg model can be recovered within the SFRG. Furthermore, we apply our method to the spin-S Ising model as well as to the spin-S quantum Heisenberg model, which allows us to calculate the critical temperature for both a ferromagnetic and an antiferromagnetic exchange interaction. Finally, we present a new hybrid formulation of the SFRG, which combines features of both the pure and the Hubbard-Stratonovich SFRG that were published recently [Phys. Rev. B 99, 060403(R) (2019)].
In this thesis, we presented the theoretical description of the magnetic properties of various frustrated spin systems. Especially in the search for exotic states, such as quantum spin liquids, magnetically frustrated systems have been the subject of intense research over the last four decades. Relating experimental observations in real materials to theoretical models that capture those exotic magnetic phenomena has been one of the great challenges in the field of magnetism in condensed matter.
In order to build such a bridge between experimental observations and theoretical models, we followed two complementary strategies in this thesis. One strategy was based on first-principles methods that enable the theoretical prediction of electronic properties of real materials with no experimental input other than the crystal structure. Based on these predictions, low-energy models that describe the magnetic interactions can be extracted and, through further theoretical modelling, compared to experimental observations. The second strategy was to establish low-energy models through comparison of experimental data, such as inelastic neutron scattering intensities, with predictions calculated for a variety of plausible magnetic models guided by microscopic insights. Both approaches make it possible to relate theoretical magnetic models to real materials and may provide guidance for the design of new frustrated materials or the investigation of promising models related to exotic magnetic states.
Mechanism of the electroneutral sodium/proton antiporter PaNhaP from transition-path shooting
(2019)
Na+/H+ antiporters exchange sodium ions and protons on opposite sides of lipid membranes. The electroneutral Na+/H+ antiporter NhaP from the archaeon Pyrococcus abyssi (PaNhaP) is a functional homolog of the human Na+/H+ exchanger NHE1, which is an important drug target. Here we resolve the Na+ and H+ transport cycle of PaNhaP by transition-path sampling. The resulting molecular dynamics trajectories of repeated ion transport events proceed without bias force, and overcome the enormous time-scale gap between seconds-scale ion exchange and microsecond-scale simulations. The simulations reveal a hydrophobic gate to the extracellular side that opens and closes in response to the transporter domain motion. Weakening the gate by mutagenesis makes the transporter faster, suggesting that the gate balances competing demands of fidelity and efficiency. Transition-path sampling and a committor-based reaction coordinate optimization identify the essential motions and interactions that realize conformational alternation between the two access states in transporter function.
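The shooting move at the heart of transition-path sampling can be illustrated on a toy model. The sketch below runs two-way shooting on a 1D double-well potential with overdamped Langevin dynamics; it is a minimal illustration of the algorithm, not the actual PaNhaP simulation protocol, and all parameters are made-up values for the example.

```python
# Schematic transition-path shooting on a 1D double well (toy model).
# All parameters are illustrative; this is NOT the PaNhaP protocol.
import numpy as np

rng = np.random.default_rng(0)
beta, dt, gamma = 4.0, 1e-3, 1.0   # inverse temperature, time step, friction

def force(x):
    # V(x) = (x^2 - 1)^2 has minima (basins) at x = -1 and x = +1
    return -4.0 * x * (x**2 - 1.0)

def propagate(x0, n_steps):
    """Overdamped Langevin trajectory starting from x0."""
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    noise = np.sqrt(2.0 * dt / (beta * gamma))
    for i in range(n_steps):
        xs[i + 1] = xs[i] + force(xs[i]) * dt / gamma + noise * rng.normal()
    return xs

def in_A(x): return x < -0.8   # reactant basin
def in_B(x): return x > 0.8    # product basin

def shooting_move(path):
    """Pick a random frame, re-shoot forward and backward from it, and
    accept the new path only if it still connects basin A to basin B."""
    i = rng.integers(1, len(path) - 1)
    fwd = propagate(path[i], len(path) - 1 - i)
    # Backward segment generated as a forward shot and time-reversed,
    # which is a valid sketch for overdamped (time-reversible) dynamics.
    bwd = propagate(path[i], i)[::-1]
    new = np.concatenate([bwd[:-1], fwd])
    return new if in_A(new[0]) and in_B(new[-1]) else path

# Start from a straight-line "guess" path connecting the two basins.
path = np.linspace(-1.0, 1.0, 201)
for _ in range(50):
    path = shooting_move(path)
print(in_A(path[0]) and in_B(path[-1]))  # prints True: reactivity preserved
```

The acceptance rule guarantees that every retained path still connects the two basins, so the ensemble of reactive trajectories is sampled without applying any bias force.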
The last decades have brought tremendous progress in understanding the phase structure of strongly interacting matter. This has been driven by studies of heavy-ion collisions on the experimental side and by Lattice QCD, functional approaches to QCD, perturbation theory and effective theories on the theoretical side. Of particular interest is the transition from hadronic to partonic degrees of freedom, which is expected to occur at high temperatures or high baryon densities. These phases play an important role in the early universe and in the cores of neutron stars. Nowadays, the existence of a deconfined phase, the Quark Gluon Plasma (QGP), and its phase transition at vanishing and small net-baryon densities are well established. However, the situation at larger densities is less clear.
Complementary to the studies of matter at high temperatures and low net-baryon densities performed at RHIC and the LHC, the proposed Compressed Baryonic Matter (CBM) experiment at the future FAIR facility aims to explore the QCD phase diagram at very high net-baryon densities and moderate temperatures. The CBM research program includes the search for the deconfinement phase transition, the study of chiral symmetry restoration in super-dense baryonic matter, the search for the critical endpoint, and the study of the nuclear equation of state at high densities. While other experiments (STAR-BES at BNL, BM@N at NICA) are suited to measuring bulk observables, CBM is explicitly designed to access rare observables such as multi-strange hadrons, dileptons, hypernuclei and charmonium. A key feature of CBM is therefore its very high interaction rate, exceeding those of contemporary and proposed nuclear collision experiments by several orders of magnitude. However, some of the rare probes have a complex signature, hidden in a background of several hundred charged tracks. This forbids a conventional, hardware-triggered readout; instead, the experiment combines self-triggered front-end electronics, fast and free-streaming data transport, online event reconstruction and online event selection.
The central detector for tracking and momentum determination of charged particles in the CBM experiment is the Silicon Tracking System (STS). It is designed to measure up to 700 charged particles per nucleus-nucleus collision at interaction rates between 0.1 and 10 MHz, to achieve a momentum resolution better than 2% in a 1 Tm dipole magnetic field, and to be capable of identifying complex particle decay topologies, e.g., those with strangeness content. The STS comprises 8 tracking stations equipped with double-sided silicon microstrip sensors. Two million channels are read out with self-triggering electronics, matching the data streaming and online event analysis concept applied throughout the experiment. The detector’s functional building block is a module consisting of a silicon sensor, aluminum-kapton microcables and two front-end electronics boards. The custom-designed ASIC (STS-XYTER) implements the analog front-end, the digitizer and the generation of individual hit data for each signal.
The design of the front-end chip requires finding an optimal solution for time and input-charge measurements under tight constraints: small area (58 μm channel pitch), low noise (below 1500 ENC(e−)), low power consumption (610 mW/channel), a radiation-hard architecture and speed requirements. As part of the first processing stage in the full readout and data acquisition chain, the characterization of the chip and its integration with the detector components is a crucial task. In this work, various methods and tools are established for testing and qualifying the ASIC analog front-end. A procedure for amplitude and timing calibration is developed using different functionalities of the chip. The procedure is optimized for our prototype system in order to achieve the best accuracy in the shortest amount of time. The results were verified using a gamma source and an external pulse generator, showing discrepancies below 5%.
Among the multiple operation requirements of the ASIC, the noise performance is of essential importance. The characterization of the chip noise is carried out as a function of a large number of parameters: low-voltage power regulators, input capacitance, shaping time, temperature and the bonds' protective glue (glob-top). These studies made it possible to optimize the ASIC configuration settings, to identify possible malfunctions in the low-voltage powering scheme, and to select suitable glob-top materials for the module assembly. Moreover, significant differences were found between odd and even channels, whose main cause was traced to the bias scheme of the amplifiers of the two channel groups. This effect has been corrected in the new version (v2.1) of the ASIC.
Although the STS front-end electronics are located outside of the physics acceptance, they will be exposed to high fluxes of charged particles. Considering a possible SIS100 running scenario, the lifetime dose at the location of the electronics is not expected to exceed 800 krad. Consequently, the STS-XYTERv2 ASIC implements a radiation-hard design based on dual-interlocked cells (DICE) and triple modular redundancy (TMR).
Multiple dedicated beam campaigns were carried out to evaluate the ASIC’s design in terms of immunity to single-event upset (SEU) errors and overall performance after lifetime doses. The DICE cell SEU cross section was measured in a high-intensity proton beam. The results show a significant improvement of the SEU immunity in the STS-XYTERv2 compared to its predecessor, and allow the upset rate in the CBM running scenario to be estimated at less than one SEU/ASIC/day.
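The step from a measured per-bit SEU cross section to an upset rate per ASIC is a simple flux-times-cross-section estimate. The sketch below shows the arithmetic with placeholder numbers; the cross section, flux and bit count are illustrative assumptions, not the measured values from this work.

```python
# Back-of-the-envelope SEU rate estimate of the kind described above.
# All input numbers are illustrative placeholders, NOT measured values.

def seu_rate_per_day(cross_section_cm2, flux_cm2_s, n_bits):
    """Expected single-event upsets per day for one ASIC.

    cross_section_cm2 : per-bit SEU cross section (cm^2/bit)
    flux_cm2_s        : charged-particle flux at the chip (cm^-2 s^-1)
    n_bits            : number of susceptible configuration bits
    """
    return cross_section_cm2 * flux_cm2_s * n_bits * 86400.0  # seconds/day

# Illustrative inputs (placeholders):
rate = seu_rate_per_day(cross_section_cm2=1e-16,
                        flux_cm2_s=1e5,
                        n_bits=2e4)
print(f"{rate:.3f} SEU/ASIC/day")  # prints 0.017 SEU/ASIC/day
```

With these placeholder inputs the estimate lands well below one upset per ASIC per day, which is the kind of figure quoted above.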
The studies on the total ionizing dose (TID) show that the overall noise levels of the ASIC at the end of the experiment's lifetime are expected to increase by approximately 40–60%. Moreover, they demonstrate that short periods of annealing at room temperature can favorably influence the noise performance of the chip.
The assembly and test of the STS modules, a complex process with multiple stages and a long learning curve, is illustrated in different parts of this work. The first prototype modules were built with the front-end board type B (FEB-B), capable of reading out 128 channels on the p and n side, respectively. The studies were conducted with a relativistic proton beam of 1.7 GeV/c momentum at the COSY accelerator facility, Research Center Juelich, in March 2018. The campaign brought valuable insights for the development of an effective grounding and powering scheme for reading out the detectors. The signal-to-noise ratio was measured for one of the prototype modules, resulting in values larger than 15 for both polarities. A deeper analysis of the collected data allowed the identification of a logic error in the ASIC that affected the readout rate and the quality of the data. This issue was corrected in the new version of the chip.
A precursor of the STS detector, named mini-STS (mSTS), has been built within the mCBM project carried out in FAIR Phase 0. The mSTS was built from 4 fully assembled detector modules. To ensure the proper operation of the ASICs used in the module assembly, a rigorous quality-assurance procedure had to be developed. A dedicated setup was built around a custom-designed pogo-pin station, and a total of 339 chips were tested; more than 90% of them proved to be of good quality and operational. In the mCBM beam campaign of March 2019, four detector modules were successfully operated in a close-to-final readout chain and valuable data were collected. The mSTS detector was exposed to the products of Ag+Au collisions at energies above 1.58 AGeV and overall interaction rates up to 10^6, which resembles the real conditions of the CBM experiment.
In the course of this work, significant progress in the development of the STS detector modules was achieved. Techniques for the characterization of the front-end electronics and the complete detector system were developed and refined. They will be applied for the quality assurance of the components during series production.
Holographic imaging techniques, which exploit the coherence properties of light, enable the reconstruction of the 3D scenery being viewed. While the standard approaches for the recording of holographic images require the superposition of scattered light with a reference field, heterodyne detection techniques enable direct measurement of the amplitude and relative phase of the electric light field. Here, we explore heterodyne Fourier imaging and its capabilities using active illumination with continuous-wave radiation at 300 GHz and a raster-scanned antenna-coupled field-effect transistor (TeraFET) for phase-sensitive detection. We demonstrate that the numerical reconstruction of the scenery provides access to depth resolution together with the capability to numerically refocus the image and the capability to detect an object obscured by another object in the beam path. In addition, the digital refocusing capability allows us to employ Fourier imaging also in the case of small lens-object distances (virtual imaging regime), thus allowing high spatial frequencies to pass through the lens, which results in enhanced lateral resolution.
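Numerical refocusing of a measured complex field is typically done by propagating its angular spectrum. The following sketch shows the basic operation for a scalar field on a regular grid; the ~1 mm wavelength (roughly 300 GHz), grid size and propagation distance are illustrative assumptions rather than the parameters of the actual setup.

```python
# Minimal angular-spectrum propagation sketch for numerical refocusing.
# Wavelength, grid and distances below are illustrative assumptions.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex 2D field by distance z (all lengths in metres)."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies (1/m)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    kz_sq = k**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))       # real part of k_z
    transfer = np.exp(1j * kz * z) * (kz_sq > 0)   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: a smooth field propagated forward and then backward by the
# same distance is recovered (negative z refocuses onto the source plane).
wavelength = 1e-3            # ~300 GHz in free space
dx = 0.5e-3                  # grid spacing
x = (np.arange(64) - 32) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * (4 * dx) ** 2)).astype(complex)
fwd = angular_spectrum_propagate(field, wavelength, dx, 0.02)
back = angular_spectrum_propagate(fwd, wavelength, dx, -0.02)
print(np.allclose(field, back, atol=1e-9))  # prints True
```

In a refocusing application one would apply the same transfer function with a scanned z to a measured amplitude-and-phase image until the object plane of interest comes into focus.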
The aim of the simulation studies in this thesis was to investigate the performance of the Transition Radiation Detector (TRD) for the identification of light nuclei and hypernuclei in the CBM experiment. The separation of helium and deuterium by means of their specific energy loss in the TRD is central to reconstructing the rare hypernucleus 6ΛΛHe with a high signal-to-background ratio. To meet the requirements arising from the CBM research program, an energy-loss (dE/dx) resolution for helium of at most 30% is required...
We have identified a mistake in how Fig. 1 is referenced in the text of the article Eur. Phys. J. C 77 (2017) no. 8, 569 which affected three paragraphs of the results section. The corrected three paragraphs as well as the unmodified accompanying figure are reproduced in this document with the correct labeling.
In addition, an editing issue led to a missing acknowledgements section. The missing section is reproduced at the end of this document in the manner in which it should have appeared in the published article.
In this work we provided additional insights into our understanding of bulk QCD matter through the study of the transport coefficients which govern the non-equilibrium microscopic processes of statistical ensembles. Specifically, we focused on the low-energy regime corresponding to the hadron gas, as the properties of this region of the phase diagram are still relatively unknown, and existing calculations of the transport coefficients are either scarce, contradictory, or somewhat limited in scope; this thesis' main goal was thus to shed some light on this by providing new independent calculations of these quantities.
We subsequently presented two formalisms which can be used to calculate transport coefficients. The first one (which was also the main tool used in the following chapters to produce our results) relies on so-called Green-Kubo formulas, which relate non-equilibrium dissipative fluctuations to transport coefficients; notably, the off-diagonal components of the energy-momentum tensor are related to the shear viscosity, its diagonal components to the bulk viscosity, and fluctuations in the electric current to the electric conductivity. We additionally introduced two new conductivities, namely the baryon-electric and strange-electric conductivities, which, together with the already known electric conductivity, we dubbed the "cross-conductivity"; it encodes information about how electric fluctuations are correlated with changes in electric, baryonic or strange currents, and vice versa. The second way of calculating transport coefficients which we discussed consists of linearizing the collision term of the Boltzmann equation through the Chapman-Enskog formalism. While in principle providing direct semi-analytical results for the transport coefficients, this approach is complicated to implement when more than a few species are considered, and as such it was mostly used as a tool to calibrate our Green-Kubo calculations.
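Schematically, the Green-Kubo formulas referred to here relate equilibrium correlation functions of volume-averaged currents to transport coefficients, e.g. for the shear viscosity and the electric conductivity (natural units; prefactor and normalization conventions vary between references):

```latex
\eta = \frac{V}{T} \int_{0}^{\infty} \mathrm{d}t \,
       \left\langle \pi^{xy}(t)\, \pi^{xy}(0) \right\rangle_{\mathrm{eq}} ,
\qquad
\sigma_{\mathrm{el}} = \frac{V}{T} \int_{0}^{\infty} \mathrm{d}t \,
       \left\langle j^{x}_{\mathrm{el}}(t)\, j^{x}_{\mathrm{el}}(0) \right\rangle_{\mathrm{eq}}
```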
The hadron gas model that we used for all calculations, namely the transport approach SMASH, was then presented. The main features of the model were explained, such as the collision criterion, the considered degrees of freedom and the specific way in which they interact microscopically with each other. It was verified that SMASH reproduces analytical results of the Boltzmann equation in an expanding-universe scenario, thus showing the equivalence of this transport approach and the associated kinetic-theory results. Special care was taken to detail the ways in which a state of thermal and chemical equilibrium (which is necessary for the Green-Kubo relations to be valid) can be reached and described using SMASH.
...
Lattice QCD with heavy quarks reduces to a three-dimensional effective theory of Polyakov loops, which is amenable to series expansion methods. We analyse the effective theory in the cold and dense regime for a general number of colours, Nc. In particular, we investigate the transition from a hadron gas to baryon condensation. For any finite lattice spacing, we find the transition to become stronger, i.e. ultimately first-order, as Nc is made large. Moreover, in the baryon condensed regime, we find the pressure to scale as p ∼ Nc through three orders in the hopping expansion. Such a phase differs from a hadron gas with p ∼ Nc^0, or a quark gluon plasma, p ∼ Nc^2, and was termed quarkyonic in the literature, since it shows both baryon-like and quark-like aspects. A lattice filling with baryon number shows a rapid and smooth transition from condensing baryons to a crystal of saturated quark matter, due to the Pauli principle, and is consistent with this picture. For continuum physics, the continuum limit needs to be taken before the large Nc limit, which is not yet possible in practice. However, in the controlled range of lattice spacings and Nc-values, our results are stable when the limits are approached in this order. We discuss possible implications for physical QCD.
We report on the observation of coherent terahertz (THz) emission from the quasi-one-dimensional charge-density wave (CDW) system, blue bronze (K0.3MoO3), upon photo-excitation with ultrashort near-infrared optical pulses. The emission contains a broadband, low-frequency component due to the photo-Dember effect, which is present over the whole temperature range studied (30–300 K), as well as a narrow-band doublet centered at 1.5 THz, which is only observed in the CDW state and results from the generation of coherent transverse-optical phonons polarized perpendicular to the incommensurate CDW b-axis. As K0.3MoO3 is centrosymmetric, the lowest-order generation mechanism which can account for the polarization dependence of the phonon emission involves either a static surface field or quadrupolar terms due to the optical field gradients at the surface. This phonon signature is also present in the ground-state conductivity, and decays in strength with increasing temperature to vanish above T ∼ 100 K, i.e. significantly below the CDW transition temperature. The temporal behavior of the phonon emission can be well described by a simple model with two coupled modes, which initially oscillate with opposite polarity.
Light-matter interaction in the strong coupling regime is of profound interest for fundamental quantum optics, information processing and the realization of ultrahigh-resolution sensors. Here, we report a new way to realize strong light-matter interaction, by coupling metamaterial plasmonic "quasi-particles" with photons in a photonic cavity, in the terahertz frequency range. The resultant cavity polaritons exhibit a splitting which can reach the ultra-strong coupling regime, even with the comparatively low density of quasi-particles, and inherit the high Q-factor of the cavity despite the relatively broad resonances of the Swiss-cross and split-ring-resonator metamaterials used. We also demonstrate nonlocal collective interaction of spatially separated metamaterial layers mediated by the cavity photons. By applying the quantum electrodynamic formalism to the density dependence of the polariton splitting, we can deduce the intrinsic transition dipole moment for single-quantum excitation of the metamaterial quasi-particles, which is orders of magnitude larger than those of natural atoms. These findings are of interest for the investigation of fundamental strong-coupling phenomena, but also for applications such as ultra-low-threshold terahertz polariton lasing, voltage-controlled modulators and frequency filters, and ultra-sensitive chemical and biological sensing.