The core of this work is the investigation of the chiral phase transition, using Monte Carlo simulations and unimproved staggered fermions, in both the weak- and strong-coupling regimes of Quantum Chromodynamics. Based on recent results from Monte Carlo simulations, using both unimproved staggered fermions and Wilson fermions, the chiral phase transition in the continuum and chiral limit is compatible with a second-order phase transition for Nf (the number of flavours) in the range [2:7], at zero baryon chemical potential. This result relies on the analytic continuation of Nf to non-integer values on the lattice, which allows the use of extrapolation techniques towards the chiral limit, where simulations are not possible. Furthermore, these results resolve the ambiguous scenario for Nf = 2 in the chiral limit. The first part of this thesis is devoted to the investigation of the chiral phase transition at a non-zero imaginary baryon chemical potential, whose value corresponds to 81% of the Roberge-Weiss one. Using the same extrapolation techniques, the order of the chiral phase transition in the continuum and chiral limit is compatible with a second-order phase transition for Nf in the range [2:6], indicating that the order of the chiral phase transition does not depend on the value of the imaginary baryon chemical potential. The second part of this thesis studies the extension of the first-order chiral region in the strong-coupling regime, at zero baryon chemical potential. Using Monte Carlo techniques, this is done by investigating the Z2 boundary on a coarse lattice with temporal extent Nt = 2; simulations are realised for Nf = 4, 8. The results in the weak-coupling regime show, for Nt = 8, 6, 4 and fixed Nf, an inflating first-order chiral region.
As a second-order chiral phase transition is expected in the strong coupling limit, the first-order chiral region has to shrink as the strong-coupling regime is approached, resulting in a non-monotonic behaviour of the Z2 boundary. For Nf = 8, a critical mass on the Z2 boundary has been obtained, confirming the expected non-monotonic behaviour. For Nf = 4 the results do not provide a unique conclusion: either a Z2 boundary at extremely low bare quark mass or a second-order chiral phase transition in the O(2) universality class in the chiral limit can occur. In addition to the two main topics, the performance of the second-order minimum-norm integrator (2MN) and the fourth-order minimum-norm integrator (4MN) has been compared, after implementing the 4MN integrator in the CL2QCD code used to realise our simulations; the 2MN integrator had already been implemented since the first release of the code. The two integrators belong to the class of symplectic integrators and represent an essential component of the RHMC algorithm used in our investigation. This step is important to guarantee the best data quality from the simulations, and the comparison favoured the 2MN integrator for both topics.
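A second-order minimum-norm (Omelyan-type) step can be sketched for a generic Hamiltonian system; the following is a minimal illustration under assumed conventions, not the CL2QCD implementation, and the harmonic-oscillator force is chosen purely for demonstration.

```python
import math

# Common second-order minimum-norm coefficient (illustrative value,
# as often quoted for lattice molecular dynamics).
LAMBDA = 0.1931833275037836

def step_2mn(q, p, force, dt):
    """One 2MN update: T(l*dt) V(dt/2) T((1-2l)*dt) V(dt/2) T(l*dt),
    where T shifts the coordinate q and V kicks the momentum p."""
    q += LAMBDA * dt * p
    p += 0.5 * dt * force(q)
    q += (1.0 - 2.0 * LAMBDA) * dt * p
    p += 0.5 * dt * force(q)
    q += LAMBDA * dt * p
    return q, p

# Toy check on a harmonic oscillator H = (p^2 + q^2)/2:
# a symplectic integrator keeps the energy error bounded at O(dt^2).
force = lambda q: -q
q, p, dt = 1.0, 0.0, 0.05
e0 = 0.5 * (p * p + q * q)
for _ in range(1000):
    q, p = step_2mn(q, p, force, dt)
e1 = 0.5 * (p * p + q * q)
print(abs(e1 - e0))  # remains far below the energy scale itself
```

In an RHMC setting the "force" would be the gauge and fermionic force, but the symmetric T-V-T structure and the bounded energy error, which controls the Metropolis acceptance rate, are the same.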
A powerful technique for distinguishing the enantiomers of a chiral molecule is Coulomb Explosion Imaging (CEI), which allows the handedness of a single molecule to be determined. In CEI, the molecule becomes highly charged by losing many electrons within a very short period of time through its interaction with light. The repulsive forces between the positively charged constituents then break the molecule into fragments. By measuring the momentum vectors of (at least) four fragments, the handedness observable can be determined. In this thesis, CEI is induced by the absorption of a single high-energy photon, which creates an inner-shell (K-shell) hole in the molecule; the subsequent cascade of Auger decays leads to fragmentation. The formic acid molecule was chosen for this work. Two different experiments were conducted: the first focused on exciting electrons to different energy states, while the second focused on extracting a photoelectron directly into the continuum and measuring its angular distribution in the molecular frame. The primary goal was to search for a chiral signal in a purely achiral planar molecule under these electronic processes. The resulting findings were then applied to two further molecules.
In the framework of the LHC Injectors Upgrade Project (LIU), the CERN Proton Synchrotron Booster (PSB) went through major upgrades resulting in new effects to study, challenges to overcome and new parameter regimes to explore. To assess the achievable beam brightness limit of the machine, a series of experimental and computational studies in the transverse planes were performed. In particular, the new injection scheme induces optics perturbations that are strongly enhanced near the half-integer resonance. In this thesis, methods for dynamically measuring and correcting these perturbations and their impact on the beam performance will be presented. Additionally, the quality of the transverse beam distributions and strategies for improvement will be addressed. Finally, the space charge effects when dynamically crossing the half-integer resonance will be characterized. The results of these studies and their broader significance beyond the PSB will be discussed.
In this thesis, the flow coefficients vn of orders n = 1-6 are studied for protons and light nuclei in Au+Au collisions at Ebeam = 1.23 AGeV, equivalent to a centre-of-mass energy in the nucleon-nucleon system of √sNN = 2.4 GeV. The detailed multi-differential measurement is performed with the HADES experiment at SIS18/GSI. HADES, with its large acceptance covering almost the full azimuthal angle, combined with its high mass resolution and good particle-identification capability, is well equipped to study the azimuthal flow pattern not only for protons, deuterons, and tritons but also for charged pions, kaons, φ-mesons, electrons/positrons, as well as light nuclei like helions and alphas. The high statistics of more than seven billion Au+Au collisions recorded in April/May 2012 with HADES enables for the first time the measurement of higher-order flow coefficients up to the 6th harmonic. Since the Fourier coefficients of 7th and 8th order are beyond statistical significance, only upper bounds are given. The Au+Au collision system is the largest reaction system, with the highest particle multiplicities, measured so far with HADES. A dedicated correction method for the flow measurement had to be developed to cope with reconstruction inefficiencies caused by the occupancy of the detector system. The systematic bias of the flow measurement is studied and several sources of uncertainty are identified, which mainly arise from the quality selection criteria applied to the analysed tracks, the correction procedure for reconstruction inefficiencies, the procedures for particle identification (PID), and the effects of an azimuthally non-uniform detector acceptance. The systematic point-to-point uncertainties are determined separately for each particle type (proton, deuteron, and triton), each order of the flow harmonics vn, and each centrality class.
Further, the validity of the results is inspected with several consistency checks within the range of the evaluated systematic uncertainties. To enable meaningful comparisons between experimental observations and the predictions of theoretical models, the classification of events should be well defined and lie in sufficiently narrow intervals of impact parameter. Part of this work was the implementation of the procedure to determine the centrality and the orientation of the reaction plane.
In the conclusion, the experimental results are discussed, including various scaling properties of the flow harmonics. It is found that the ratio v4/v2 for protons and light nuclei (deuterons and tritons) at mid-rapidity approaches values close to 0.5 at high transverse momenta for all centrality classes, which has been suggested to be indicative of ideal hydrodynamic behaviour. A remarkable scaling is observed in the pt dependence of v2 (v4) at mid-rapidity for the three hydrogen isotopes when dividing v2 (v4) by the nuclear mass number A (A^2) and pt by A. This is consistent with naive expectations from nucleon coalescence, but raises the question whether this mass ordering can also be explained by a hydrodynamically inspired approach like the blast-wave model. The relation of v2 and v4 to the initial eccentricity of the collision system is studied. It is found that v2 is independent of centrality for all three particle species after dividing it by the average second-order participant eccentricity, v2/⟨ε2⟩. A similar scaling is shown for v4 after division by ⟨ε2⟩^2.
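The flow coefficients vn are Fourier coefficients of the azimuthal particle distribution relative to the reaction plane, vn = ⟨cos n(φ − ΨRP)⟩. A minimal toy estimator, assuming a known reaction-plane angle and an invented input value of v2 (real analyses must reconstruct the event plane and correct its resolution), could look like:

```python
import math
import random

def sample_phi(v2, psi_rp, rng):
    """Accept-reject sampling from dN/dphi ~ 1 + 2*v2*cos(2*(phi - psi))."""
    while True:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        weight = 1.0 + 2.0 * v2 * math.cos(2.0 * (phi - psi_rp))
        if rng.uniform(0.0, 1.0 + 2.0 * v2) < weight:
            return phi

def flow_coefficient(phis, n, psi_rp):
    """Estimator v_n = <cos(n*(phi - psi_RP))> over all tracks."""
    return sum(math.cos(n * (p - psi_rp)) for p in phis) / len(phis)

rng = random.Random(42)
psi = 0.3                      # assumed (known) reaction-plane angle
phis = [sample_phi(0.10, psi, rng) for _ in range(100_000)]
print(flow_coefficient(phis, 2, psi))  # statistically close to the input 0.10
```

With a generated v2 = 0.10 and no other harmonics, the n = 2 estimate recovers the input within the statistical error, while odd and higher harmonics are consistent with zero.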
This thesis investigates exotic phases within effective models for strongly interacting matter.
The focus lies on the chiral inhomogeneous phase (IP) that is characterized by a spontaneous breaking of translational symmetry and the moat regime, which is a precursor phenomenon exhibiting a non-trivial mesonic dispersion relation.
These phenomena are expected to occur at non-zero baryon densities, a parameter region that is largely inaccessible to first-principles investigations of Quantum Chromodynamics (QCD).
As an alternative approach, we consider the Gross-Neveu (GN) and Nambu-Jona-Lasinio (NJL) models in the mean-field approximation, which can be regarded as effective models for QCD.
We focus on two aspects of the moat regime and the IP in these models.
First, we investigate the influence of the employed regularization scheme in the (3+1)-dimensional NJL model, which is nonrenormalizable, i.e., the regulator cannot be removed.
We find that the moat regime is a robust feature under change of regularization scheme, while the IP is sensitive to the specific choice of scheme.
This suggests that the moat regime is a universal feature of the phase diagram of the NJL model, while the IP might only be an artifact of the employed regulator.
Second, we study the influence of the number of spatial dimensions on the emergence of the IP.
To this end, we investigate the GN model in noninteger spatial dimensions d.
We find that the IP and the moat regime are present for d < 2, while they are absent for d > 2.
This demonstrates the central role of the dimensionality of spacetime and connects previously obtained results in this model for integer numbers of spatial dimensions.
Moreover, this suggests that the occurrence of these phenomena in three spatial dimensions is solely caused by the finite regulator.
In summary, this thesis contributes to advancing our understanding of the phase structure of QCD, particularly regarding the existence and characteristics of inhomogeneous phases and the moat regime.
Even though the investigations are performed within effective models, they provide valuable insight into the aspects that are crucial for the formation of an inhomogeneous chiral condensate in fermionic theories.
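The defining feature of the moat regime discussed above, a mesonic dispersion whose minimum sits at non-zero momentum, can be illustrated with a simple model dispersion ω²(p) = m² + Z p² + p⁴/Λ² with a negative wave-function renormalization Z. The numbers below are purely illustrative, not fitted GN/NJL parameters.

```python
import math

def omega_sq(p, m=1.0, Z=-1.0, Lam=1.0):
    """Toy moat dispersion: for Z < 0 the minimum sits at p != 0."""
    return m * m + Z * p * p + (p ** 4) / (Lam ** 2)

# Analytic minimum: d(omega^2)/d(p^2) = Z + 2*p^2/Lam^2 = 0
# gives p_min^2 = -Z*Lam^2/2 (= 1/2 for Z=-1, Lam=1).
p_min_analytic = math.sqrt(0.5)

# A numeric scan over the momentum grid reproduces it.
grid = [i * 1e-4 for i in range(20000)]
p_min_numeric = min(grid, key=omega_sq)
print(p_min_numeric, p_min_analytic)  # both ~0.7071
```

For Z ≥ 0 the same expression is minimized at p = 0, i.e. the ordinary dispersion; the moat regime is precisely the region of the phase diagram where the effective Z turns negative.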
By combining two unique facilities at the Gesellschaft fuer Schwerionenforschung (GSI), the Fragment Separator (FRS) and the Experimental Storage Ring (ESR), the first direct measurement of a proton-capture reaction on stored radioactive isotopes was accomplished. The combination of a well-defined ion energy, an ultra-thin internal gas target, and the ability to adjust the beam energy in the storage ring enables precise, energy-differentiated measurements of (p,gamma) cross sections. The new setup provides a sensitive method for measuring (p,gamma) reactions relevant for nucleosynthesis processes in supernovae, which are among the most violent explosions in the universe and are not yet well understood. The cross sections of the 118Te(p,gamma) and 124Xe(p,gamma) reactions were measured at energies of astrophysical interest. The heavy ions were stored at energies of 6 MeV/nucleon and 7 MeV/nucleon and interacted with a hydrogen gas-jet target.
The proton-capture products were detected with a double-sided silicon strip detector. The radiative recombination of the fully stripped ions with electrons from the hydrogen target was used as a luminosity monitor.
Additionally, post-processing nucleosynthesis simulations within the NuGrid [1] research platform have been performed. The impact of the new experimental results on p-process nucleosynthesis around 124Xe and 118Te in a core-collapse supernova was investigated. The successful measurement of proton-capture cross sections of radioactive isotopes raises the motivation to proceed with experiments at lower energies.
[1] M. Pignatari and F. Herwig, “The NuGrid research platform: A comprehensive simulation approach for nuclear astrophysics,” Nuclear Physics News, vol. 22, no. 4, pp. 18–23, 2012.
In this thesis, we present a detailed consideration of both qualitative and quantitative properties of static, spherically symmetric solutions of the Einstein equations with self-interacting scalar fields, focusing on solutions with naked singularities. We study the qualitative properties of the solutions of the Einstein equations with $N$ real static self-interacting scalar fields, under certain assumptions on the self-interaction, and provide a rigorous proof that the corresponding solutions remain regular up to $r=0$. Furthermore, we derive the rigorous asymptotic form of the solutions near the singularity and at spatial infinity. We construct examples of sphere-like naked singularities at $r=r_s\neq0$ in curvature coordinates.
We analyze the stability of the previously considered solutions against odd-parity gravitational perturbations and also examine the spectra of the fundamental quasi-normal modes. For a general class of self-interaction potentials, we demonstrate well-posedness of the initial value problem and stability for positive-definite potentials. As an example, we numerically study the case of a scalar field with a power-law self-interaction potential and find the fundamental quasi-normal mode frequencies, demonstrating that they differ from those of the standard Schwarzschild black hole.
We study in detail the motion of particles in the vicinity of the previously considered solutions. We are mainly interested in the properties of the distribution of stable circular orbits around the corresponding configurations and in images of the accretion disk as seen by a distant observer. For all cases, we find the possible types of stable circular orbit distributions and the parameter domains in which they are realized.
We also demonstrate that the presence of self-interaction can lead to a new type of circular orbit distribution, absent in the case of a linear massless scalar field. We construct Keplerian disk images in the plane of a distant observer and demonstrate the possibility of mimicking the shadows of black holes.
In this thesis, the early-time dynamics of heavy-ion collisions of Pb nuclei at LHC centre-of-mass energies of 5 TeV is studied. Right after the collision the system is out of equilibrium and essentially gluon dominated, with the gluon density saturating at a specific momentum scale Q_s. Based on a separation of scales between the soft and hard gluonic degrees of freedom, the initial state is given by an effective model known as the Color Glass Condensate. Within this model, the soft gluons behave classically to leading order, making it possible to study their dynamics in a gauge-invariant fashion on a three-dimensional lattice by solving the Hamiltonian field equations of motion in real time. Quark-antiquark pairs are produced in the gluonic medium, known as the Glasma, and manifest themselves as a source of quantum fluctuations.
They enter the dynamics of the gluons as a current, making the system semi-classical. In lattice simulations, the non-equilibrium system is tested for pressure isotropization, a necessary ingredient for reaching local thermal equilibrium (LTE) and thus for a hydrodynamical description at a later stage. In addition, the occupation of energy modes is studied, together with its implications for thermalization and classicality.
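Solving Hamiltonian field equations on a lattice in real time can be sketched in a drastically simplified setting: a 1D scalar field with a leapfrog integrator, rather than the 3D gauge-field system actually simulated here. All parameters below are illustrative.

```python
import random

N, dt, m2 = 32, 0.01, 1.0          # sites, time step, mass squared (toy values)
rng = random.Random(1)
phi = [0.1 * rng.uniform(-1, 1) for _ in range(N)]   # field
pi = [0.0] * N                                       # conjugate momentum

def force(phi):
    """Discrete Laplacian minus mass term (periodic boundary)."""
    return [phi[(i + 1) % N] + phi[(i - 1) % N] - 2 * phi[i] - m2 * phi[i]
            for i in range(N)]

def energy(phi, pi):
    """H = sum pi^2/2 + (grad phi)^2/2 + m^2 phi^2/2 on the lattice."""
    kin = sum(p * p for p in pi) / 2
    grad = sum((phi[(i + 1) % N] - phi[i]) ** 2 for i in range(N)) / 2
    pot = m2 * sum(f * f for f in phi) / 2
    return kin + grad + pot

e0 = energy(phi, pi)
for _ in range(1000):               # leapfrog: kick-drift-kick
    f = force(phi)
    pi = [p + 0.5 * dt * fi for p, fi in zip(pi, f)]
    phi = [x + dt * p for x, p in zip(phi, pi)]
    f = force(phi)
    pi = [p + 0.5 * dt * fi for p, fi in zip(pi, f)]
print(abs(energy(phi, pi) - e0) / e0)  # small: conserved to O(dt^2)
```

The classical-statistical approach used for the Glasma follows the same pattern, with link variables replacing phi, chromo-electric fields replacing pi, and the quark current entering as an additional source term.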
The ALICE experiment (A Large Ion Collider Experiment) at the CERN (Conseil Européen pour la Recherche Nucléaire) LHC (Large Hadron Collider) focuses on the study of strongly interacting matter under extreme conditions. Such conditions existed a few microseconds after the Big Bang, when temperatures were so high that partons (quarks and gluons) were not bound into color-neutral hadrons. In such a quark-gluon plasma the partons can move freely, although they interact strongly with other partons of the medium. At the LHC, lead nuclei are accelerated to ultra-relativistic energies of up to 2.68 TeV and brought to collision, creating a QGP that exists for less than 10 fm/c and expands rapidly. The partons hadronize once the QGP cools below the phase-transition temperature of ≈155 MeV. The final particle and momentum distributions are measured by the ALICE detector and provide insight into elementary processes in the QGP.
The TPC (Time Projection Chamber) is one of the most important detector systems of ALICE. It contributes substantially to the reconstruction of particle tracks and to particle identification at mid-rapidity. The TPC is a large cylindrical drift chamber consisting of an 88 m^3 gas volume divided into two halves by a central high-voltage electrode. When a particle traverses the gas volume, it ionizes a specific amount of gas atoms along its track. The ionization electrons drift along the highly homogeneous electric field to the readout chambers at the end caps on both sides of the TPC. Measuring the position and amount of the ionization electrons allows the reconstruction of the particle track and, in combination with the momentum measurement via the curvature of the track in the magnetic field, the identification of the particle species via the specific energy loss per path length in the gas. In LHC Run 1 (2010–2013) the gas volume of the TPC was filled with Ne-CO_2 (90-10). The gas mixture was changed to Ar-CO_2 (88-12) for Run 2 (2015–2018). Multi-wire proportional chambers were used as readout chambers, consisting of a segmented readout plane, an anode-wire plane, a cathode-wire plane, and a gating grid (GG). The GG is an additional wire plane that can be switched, via two different voltage settings, to be transparent or opaque to electrons and positive ions.
In the first Run 2 data at high interaction rates, large distortions of the measured track points were observed, which are caused by distortions of the drift field and were not known from Run 1 data. These distortions occur only very locally at the boundaries of some of the inner readout chambers (IROCs). In addition, large distortions were found in one (C06) of the outer readout chambers (OROCs), extending over the full width of the chamber at a certain radius. The results of this work address the investigation of these distortions and their origin, as well as the development of strategies to minimize them.
Measurements of the distortions in the IROCs and comparisons with simulations indicate that the distortions are caused by positive space charge, which is produced by gas amplification in very confined regions of the readout chambers and moves through the drift volume. Characteristic dependencies on the interaction rate as well as systematic changes upon reversal of the magnetic-field orientation are measured. A re-analysis of Run 1 data with the Run 2 methods shows that the distortions were already present in Run 1, but were an order of magnitude smaller due to the Ne gas mixture and the lower interaction rates. New Run 2 data, for which the gas mixture was temporarily changed back from Ar-CO_2 to Ne-CO_2-N_2, confirm the results of the Run 1 data analysis. The origin of the space charge is systematically narrowed down. Individual IROCs are identified at whose anode wires the space charge is produced. Physical models make it possible to trace the production of the space charge back to the volume between two IROCs. This suggests that individual tips of anode wires at the outer edge of these IROCs protrude into the gas volume and thus create high electric fields at which gas amplification takes place. The positive ions can then reach the drift volume unhindered. To suppress this effect, the potential of the cover electrodes, which are located on the mounting fixtures of the wire planes at the chamber edges, is adjusted. This limits the amount of ionization electrons that drift into the volume between two IROCs and are multiplied there. Via electrostatic simulations and measurements, a setting for the cover-electrode potential is found with which the distortions can be reduced to 30%.
The distortions in OROC C06 are caused by positive ions escaping from the amplification region into the drift volume, because at this particular location two consecutive GG wires have lost contact. The distortions are reduced by more than a factor of 3 by lowering the anode-wire high voltage by 50 V, and thus the gas gain by a factor of 2, and by raising the potential of the still-functioning GG wires.
In summary, the local space-charge distortions could be reduced to less than 1 cm at the highest interaction rates for the last Pb-Pb beam time of Run 2. In addition, the fraction of the TPC volume affected by space-charge distortions was reduced significantly, so that the original track-reconstruction resolution could be recovered.
Experiments on Vibrational Energy Transfer (VET) in proteins contribute to our understanding of fundamental biological processes such as allostery, dissipation of excess energy, and possibly enzymatic catalysis. While these processes have been studied for a long time, many questions remain unanswered. The aim of this work was to expand the application of existing spectroscopic techniques to investigate VET, seeking tailored solutions for the diversity of proteins and amino acid environments. Additionally, new target proteins were to be established to broaden the spectrum of VET experiments towards the role of VET and low-frequency protein modes (LFMs).
To test their suitability as VET sensors, the non-canonical amino acids (ncAAs) Azidoalanine (N3Ala), azido-L-Homoalanine (Aha), p-azido-Phenylalanine (N3Phe), p-cyano-Phenylalanine (CNPhe), and 4-cyano-Tryptophan (CNTrp) were coupled to the VET donor β-(1-azulenyl)-L-Alanine (AzAla) in dipeptides. Their spectral properties were compared using FTIR and VET spectra in H2O, dimethyl sulfoxide, and tetrahydrofuran.
The solvent strongly influences the measured VET signals, which can be explained by the direct interaction of the solvent with the dipeptides. Additionally, the peak time within the subgroups of azide and nitrile sensors increased with the size of the side chain, indicating a dependence of the peak time on the distance between VET donor and sensor. When incorporated into a protein, solvent interactions are less dominant. Therefore, Aha, N3Phe, and CNPhe were additionally incorporated at two different positions in the PDZ protein domain and investigated. Due to Fermi resonances, signals from azide sensors are challenging to predict, unlike those of the nitrile sensors.
Overall, the experiments showed that nitrile groups can serve well as VET sensors, as their lower extinction coefficient is compensated for by a narrower bandwidth. This expands the number of potential target proteins, and sensor incorporation can be less disruptive at various protein locations.
Since the VET donor AzAla can inject the energy of a photon into a protein as vibrational energy at a specific location, it can also be used for the targeted excitation of LFMs. If these modes are involved in an enzymatic reaction, a direct influence on activity is expected. This hypothesis has long existed but has not been definitively verified. Some studies have found evidence for the involvement of LFMs in formate dehydrogenase (FDH) catalysis. Therefore, FDH was chosen for the investigation of LFMs in enzymes. This specific system additionally allows the use of a natural VET sensor: it forms a stable complex with NAD+ and N3-, an excellent IR marker. Thus, it provided the opportunity to test low-molecular-weight non-covalent ligands as VET sensors.
After ensuring a sufficient AzAla supply through the in-house establishment of an enzymatic synthesis, AzAla could be incorporated at various positions in FDH. Despite the spectral overlap between free and bound N3-, the latter could be identified by its narrower FWHM. For some variants, no binding could be observed; circular dichroism spectra showed that these variants deviate slightly in structure from the other variants and the wild type (WT). VET could be observed over 22 Å from two regions of the protein to the N3- bound in the active center, at protein concentrations below 2 mM. Unbound N3- did not generate signals, allowing it to be added in excess to ensure saturation of the protein in the VET experiments.
The activity of FDH WT and four AzAla mutants was investigated under substrate saturation with and without AzAla excitation. In these experiments, a slight reduction in activity under illumination was observed, even for the WT, which is not expected to interact with the excitation light. So far, a difference in sample temperature cannot be excluded as the cause of this decline.
The presented experiments with FDH illustrate the potential of low-molecular-weight ligands as VET sensors, with N3- being particularly attractive due to its simple structure (preventing Fermi resonances) and its high extinction coefficient. Its use can add many metalloproteins as potential targets for VET experiments and allows investigation without a VET sensor ncAA. Additionally, initial experiments were conducted to measure light-dependent FDH activity. By specifically exciting protein LFMs, this project could contribute in the future to answering longstanding questions about the extraordinary catalytic efficiency of enzymes.
Binary neutron star mergers represent unique observational phenomena because all four fundamental interactions play an important role at various stages of their evolution by leaving imprints in astronomical observables. This makes their accurate numerical modeling a challenging multiphysics problem that promises to increase our understanding of the high-energy astrophysics at play, thereby providing constraints for the underlying fundamental theories such as the gravitational interaction or the strong interaction of dense matter. For example, the first and so far only multi-messenger observation of the binary neutron star merger GW170817 resulted in numerous bounds on the parameters of isolated non-rotating neutron stars, e.g., their maximum mass or their distribution in radii, which can be directly used to constrain the equation of state of cold nuclear matter. While many of these results stem from the observation of the inspiral gravitational-wave signal, the postmerger phase of binary neutron star mergers encodes even more details about the extreme physics of hot and dense neutron star matter. In this Thesis we focus on the exploration of dissipative and shearing effects in binary neutron star mergers in order to identify novel approaches to constrain hot and dense neutron star matter.
The first effect is the well-motivated dissipation of energy due to bulk viscosity, which arises from violations of weak chemical equilibrium. We start by exploring the impact of bulk viscosity on black-hole accretion. This simplified problem gives us the opportunity to develop a test case for future codes that take into account the effects of dissipation in a fully general-relativistic setup, and to build intuition for the physics of relativistic dissipation. Next, we move on to isolated neutron stars and binary neutron star mergers by developing a robust implementation of bulk-viscous dissipation for numerical-relativity simulations. We test our implementation by calculating the damping of eigenmodes of isolated neutron stars and the violent migration scenario. Finally, we present the first results on the impact of bulk viscosity on binary neutron star mergers. We identify a number of ways in which bulk viscosity affects the postmerger phase, most notably the suppression of gravitational-wave emission and of dynamical mass ejection.
In the last part of this Thesis we investigate how the shearing dynamics at the beginning of the merger affects the amplification of different initial magnetic-field topologies. We explore the hypothesis that magnetic fields confined to a small region near the stellar surface prior to merger lead to a weaker magnetic-field amplification. We show first evidence confirming this hypothesis and discuss possible implications for constraining the physics of superconductivity in cold neutron stars.
This work focuses on the investigation of K+, K-, and ϕ-meson production in Ag(1.58 A GeV)+Ag collisions. The energetically cheapest channel for direct K+ production in binary NN collisions, NN→NΛK+, lies at exactly this energy. For K- and ϕ-mesons, an excess energy of 0.31 GeV and 0.34 GeV, respectively, has to be provided by the system in the centre-of-mass frame. This makes these particles excellent probes of in-medium effects.
K+ and K- mesons can be reconstructed directly, as they possess a cτ of approximately 3.7 m. Using the approximately 3 billion recorded Ag(1.58 A GeV)+Ag 0-30% most-central collision events, all reconstructed K+ and K- within the detector acceptance are investigated with respect to their kinematic properties, and their production rates are compared to a selection of existing models.
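The quoted cτ ≈ 3.7 m is what makes direct reconstruction feasible: the lab-frame decay length is βγcτ = (p/m)·cτ, so most charged kaons reach the tracking detectors before decaying. A back-of-the-envelope estimate, with an assumed flight path and momentum rather than HADES-specific numbers:

```python
import math

def survival_probability(p_gev, m_gev, ctau_m, path_m):
    """Fraction of particles surviving a flight path L:
    exp(-L / (beta*gamma*c*tau)), with beta*gamma = p/m."""
    decay_length = (p_gev / m_gev) * ctau_m
    return math.exp(-path_m / decay_length)

# Charged kaon: m ~ 0.494 GeV, c*tau ~ 3.71 m; assume a 2 m flight path
# and 0.5 GeV/c momentum (illustrative values).
P = survival_probability(0.5, 0.494, 3.71, 2.0)
print(round(P, 3))  # ~0.59: the majority of kaons survive
```

The exponential dependence also shows why the surviving fraction, and hence the reconstruction efficiency, varies strongly with momentum, which any acceptance correction has to account for.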
The Compressed Baryonic Matter (CBM) experiment is one of the core experiments at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany. Its goal is to investigate the characteristics of nuclear matter at high net-baryon densities and moderate temperatures. The Silicon Tracking System (STS) is a central detector system of CBM.
It is placed inside a 1 Tm magnet and operated at a temperature of about −10 °C to keep the radiation-induced bulk current in the 300 μm double-sided microstrip silicon sensors low. The design of the STS aims to minimize the material budget in the detector acceptance (2.5° < θ < 25°). To this end, the readout electronics is placed outside the active area, and the analog signals are transported via ultra-thin micro-cables. The STS comprises eight tracking stations with 876 modules. Each module is assembled on a carbon-fiber ladder, which is subsequently mounted in a C-shaped aluminum frame.
The scope of the thesis focused on developing a modular control system framework that can be implemented for different sizes of experimental setups. The developed framework was used for setups that required a remote operation, like the irradiation of the powering modules for the front-end electronics (FEE), but also in laboratory-based setups where the automation and archiving were needed (thermal cycling of the STS electronics).
The low-voltage powering modules will be placed in the vicinity of the experiment and will therefore receive a total dose of up to 40 mGy over the 10-year STS lifetime.
To estimate the effects of radiation on the low-voltage module performance, a dedicated irradiation campaign took place. It aimed at estimating the rate of radiation-induced soft errors that lead to the switch-off of the FEE.
Regular power cycles of multiple front-end boards (FEBs) pose a risk to the operation of the experiment: such behavior could degrade the physics performance and also deteriorate the hardware. The limitations of the FEBs with respect to thermal cycling and mechanical stress were further assessed. The results served as an indication of possible failure modes of the FEBs at the end of the STS lifetime. Failure modes after repeated cycles and their potential causes were determined (e.g., the difference in the coefficient of thermal expansion (CTE) between the materials).
Due to the conditions inside the STS, efficient temperature and humidity monitoring and control are required to avoid icing or water condensation on the electronics or silicon sensors. The most important properties of a suitable sensor candidate are resilience to the magnetic field, tolerance to ionizing radiation, and fairly small size.
A general strategy for monitoring the ambient parameters inside the STS was developed, and potential sensor candidates were chosen. The developed control framework was used to characterize the chosen relative-humidity sensors. A sampling system with a ceramic sensor and Fiber Optic Sensors (FOS) were identified as reliable solutions for the distributed sensing system. Additionally, industrial capacitive sensors will be used as a reference during commissioning.
Two different designs of FOS were tested: a hygrometer and an array of 5 multiplexed sensors. The FOS hygrometer turned out to be the more reliable solution. Possible reasons for the worse performance of the array are the relatively small distance between subsequent sensors (15 cm) and a thicker coating. The results of the time-response study indicated that a thinner coating of about 15 μm should be a good compromise between humidity sensitivity and time response.
The implementation of the containerized control-system framework for the mSTS is described in detail. The deployed EPICS-based framework proved to be a reliable solution and ensured the safety of the detector for almost 1.5 years. Moreover, the data related to the performance of the detector modules were analyzed, and significant progress in module quality was noted. The obtained data were also used to estimate the total fluence, based on the changes in leakage current.
The developed framework provided a unique opportunity to automate and control different experimental setups that delivered crucial data for the STS. Furthermore, the work underlines the importance of such a system and outlines the next steps toward the realization of a reliable Detector Control System for the STS.
In the last twenty years, a variety of unexpected resonances have been observed in the charmonium mass region. Although the existence of unconventional states is predicted by quantum chromodynamics (QCD), the quantum field theory describing the strong force, clear evidence was missing. The Y(4260) is such an unexpected and supernumerary state. First observed by BaBar in 2005, it aroused great interest because it couples much more strongly to hidden-charm decays (charm-anticharm states like J/Psi or h_c) than to open-charm decays (D-meson pairs). This is unusual for states with masses above the D anti-D threshold. Furthermore, it decays into a charged exotic state via Y(4260)->Z_c(3900)^+- pi^-+. The charge of the Z_c(3900)^+- indicates that it comprises two more quarks than a charm-anticharm pair and could therefore be a four-quark state. Due to these still not understood properties of these QCD-allowed states, they are referred to as exotic XYZ states to emphasize their particularity.
In 2017, the collaboration of the Beijing Spectrometer III (BESIII) investigated the production reaction of the Y(4260) resonance based on a high-luminosity data set. The significantly improved precision of the measurement of the cross section sigma(e+e- -> J/Psi pi^+ pi^-) permitted its resolution into two resonances, the Y(4230) and the Y(4360). The Z_c(3900)^+- had been discovered by the BESIII collaboration in 2013; this experiment at the Beijing Electron-Positron Collider II (BEPCII) is thus a top-performing facility for studying exotic charmonium-like states.
In this work, an inclusive reconstruction of the strange hyperon Lambda in the charmonium mass region is performed to study possible decays of Y states in order to provide further insight into their nature. Finding more states or new decay channels may provide crucial hints to understand the strong interaction beyond nonperturbative approaches.
Three resonances are observed in the energy-dependent cross section: the first with a mass of (4222.01 ± 5.68) MeV and a width of (154.26 ± 28.16) MeV, the second with a mass of (4358.88 ± 4.97) MeV and a width of (49.58 ± 13.54) MeV, and the third with a mass of (4416.41 ± 2.37) MeV and a width of (23.88 ± 7.18) MeV. These resonances, each with a statistical significance Z > 5σ, can be interpreted as the states Y(4230), Y(4360) and psi(4415).
Additionally, a proton-momentum-dependent analysis strategy was employed to maintain the inclusiveness of the reconstruction and to address the momentum discrepancies between generic MC and measured data.
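The resonance structure of the energy-dependent cross section described above can be illustrated with a minimal sketch: an incoherent sum of Breit-Wigner shapes using the quoted central masses and widths. The amplitudes, the exact line shape and any interference or background terms are placeholders, not those of the actual analysis.

```python
import numpy as np

def breit_wigner(E, M, Gamma):
    """Relativistic Breit-Wigner line shape, normalised to 1 at E = M
    (E, M, Gamma in MeV)."""
    return (M * Gamma) ** 2 / ((E**2 - M**2) ** 2 + (M * Gamma) ** 2)

def cross_section(E, resonances):
    """Incoherent sum of resonances; a real fit would include
    interference terms and a continuum background."""
    return sum(A * breit_wigner(E, M, G) for A, M, G in resonances)

# Central values quoted above; the amplitudes A are placeholders.
resonances = [(1.0, 4222.01, 154.26),   # Y(4230)
              (1.0, 4358.88, 49.58),    # Y(4360)
              (1.0, 4416.41, 23.88)]    # psi(4415)
E = np.linspace(4000.0, 4600.0, 601)
sigma = cross_section(E, resonances)
```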
This Ph.D. thesis, entitled "Characterisation of laser-driven radiation beams: Gamma-ray dosimetry and Monte Carlo simulations of optimised target geometry for record-breaking efficiency of MeV gamma-sources", is dedicated to the study of the acceleration of electrons by intense sub-picosecond laser pulses propagating in a sub-millimeter plasma of near-critical electron density (NCD) and the resulting generation of gamma bremsstrahlung and positrons in targets of different materials and thicknesses.
Laser-driven particle acceleration is an area of increasing scientific interest since the recent development of short-pulse, high-intensity laser systems. The interaction of intense, high-energy, short-pulse lasers with solid targets leads to the production of high-energy electrons in the relativistic laser intensity regime above 10^18 W/cm^2. These electrons play the leading role in the first stage of the laser-matter interaction, which leads to the creation of laser-driven sources of particles and radiation. Therefore, optimising the electron beam parameters towards higher effective temperature and beam charge, together with a small divergence, plays a decisive role, especially for the subsequent detection and characterisation of laser-driven photon and positron beams.
In the context of this work, experiments were carried out at the PHELIX laser system (Petawatt High-Energy Laser for Heavy Ion eXperiments) at the GSI Helmholtz Center for Heavy-Ion Research GmbH in Darmstadt, Germany. This thesis presents a thermoluminescence dosimetry (TLD) based method for the measurement of bremsstrahlung spectra in the energy range from 30 keV to 100 MeV. The results of the TLD measurements reinforced the observed tendency towards a strong increase of the mean electron energy and of the number of super-ponderomotive electrons. In the case of laser interaction with long-scale NCD plasmas, the dose caused by the gamma radiation measured in the direction of the laser pulse propagation showed a 1000-fold increase compared to high-contrast shots onto plane foils and to doses measured perpendicular to the laser propagation direction, for all used combinations of target and laser parameters.
In this thesis I present a novel characterisation method, applicable to laser-driven beams, based on a combination of TLD measurements and Monte Carlo FLUKA simulations. The thermoluminescence-detector-based spectrometry method for simultaneous detection of electrons and photons from relativistic laser-induced plasmas, initially developed by Behrens et al. (Behrens et al., 2003) and later applied in experiments at the PHELIX laser (Horst et al., 2015), delivered good spectral information from keV energies up to a few MeV. As shown in (Horst et al., 2015), however, this method was not suitable for resolving the content of photon spectra above 10 MeV because of the dominant presence of electrons. Therefore, I developed a new method for evaluating the incident electron spectra from the TLD readings. For this purpose, an unfolding algorithm was written in MATLAB. It is based on a sequential enumeration of matching data series between the dose values measured by the dosimeters and those calculated with FLUKA simulations. A significant advantage of this method is the ability to obtain the spectrum of incident electrons down to the low-energy range of 1 keV, which is very difficult to measure reliably using traditional electron spectrometers.
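The unfolding idea, sequentially enumerating candidate spectra and keeping the one whose predicted doses best match the TLD readings, can be sketched as follows. This is a toy illustration in Python rather than the MATLAB implementation; the response matrix here is random stand-in data for the FLUKA-computed layer responses, and the grid of candidates is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response matrix R[i, j]: dose deposited in TLD layer i
# per unit fluence in electron-energy bin j (in reality obtained from
# FLUKA simulations of the detector stack).
n_layers, n_bins = 8, 5
R = np.abs(rng.normal(1.0, 0.3, size=(n_layers, n_bins)))

true_spectrum = np.array([5.0, 3.0, 2.0, 1.0, 0.5])  # toy electron spectrum
doses = R @ true_spectrum                            # noise-free layer doses

# Sequential enumeration over a grid of candidate spectra, keeping the
# one whose predicted doses best match the measured ones (chi-square).
grid = np.linspace(0.0, 6.0, 13)
trials = np.stack(np.meshgrid(*[grid] * n_bins), -1).reshape(-1, n_bins)
chi2 = np.sum((trials @ R.T - doses) ** 2, axis=1)
best = trials[np.argmin(chi2)]                       # recovers true_spectrum
```

With noise-free toy doses the enumeration recovers the input spectrum exactly; real data would add measurement uncertainties and a correspondingly weighted chi-square.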
The evaluation of the effective temperature of super-ponderomotive electrons, retrieved from the measured TLD doses by means of Monte Carlo simulations, demonstrated that low-density polymer foam layers irradiated by the relativistic sub-ps laser pulse provide a strong increase of the effective electron temperature, from 1.5-2 MeV for the relativistic laser interaction with a metallic foil up to 13 MeV for laser shots onto the pre-ionized foam, together with a more than 10 times higher charge carried by relativistic electrons.
A progressive method for simulating whole electron spectra described by a two-temperature Maxwellian distribution function has been developed, and the resulting dose simulations were compared with the acquired experimental data. The feature that distinguishes this method from simulations of the photon spectrum using mono-energetic electron beams interacting with the target (Nilgün Demir, 2013; Nilgün Demir, 2019), or an initial electron spectrum expressed in terms of a single electron temperature (Fiorini, 2012), is the ability to simulate an initial electron spectrum described by a Maxwellian distribution function with two temperatures.
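A minimal sketch of the two-temperature ansatz, written simply as a sum of two exponential slopes; the weights N1, N2 and the exact relativistic form are illustrative, while the temperatures are of the order reported above.

```python
import numpy as np

def two_temp_spectrum(E, N1, T1, N2, T2):
    """dN/dE as a sum of two exponential (Maxwellian-like) slopes;
    E and T in MeV, N1 and N2 are relative weights."""
    return N1 * np.exp(-E / T1) + N2 * np.exp(-E / T2)

E = np.linspace(0.1, 100.0, 1000)  # MeV
# Weights are illustrative; temperatures follow the values above:
# a colder bulk (~2 MeV) and a hot super-ponderomotive tail (~13 MeV).
dNdE = two_temp_spectrum(E, N1=1e10, T1=2.0, N2=1e8, T2=13.0)
```

At high energies the hot component dominates the spectrum, which is exactly the part a single-temperature fit misses.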
An important objective of this thesis was the study and characterisation of laser-driven photon beams; in addition, the positron beams were evaluated. The investigation of bremsstrahlung photon and positron spectra from high-Z targets, varying the target thickness from 10 µm to 4 mm in simulated interactions of electron spectra with Maxwellian distribution functions, made it possible to determine the optimal thickness at which the fluences of photons and positrons are maximal. Furthermore, based on the results of the FLUKA simulations, gold was found to be the most suitable material for the e−γ target in future experiments because of its highest bremsstrahlung yield.
Additionally, Monte Carlo simulations were performed using the electron beam parameters obtained from the electron acceleration process in laser-plasma interactions simulated with a particle-in-cell (PIC) code for two laser energies of 20 J and 200 J. The corresponding electron spectra were imported into the Monte Carlo code FLUKA to simulate the production of bremsstrahlung photons and positrons in a Au converter. The FLUKA simulations showed that the conversion efficiency into MeV gammas can reach a record 10%, which reinforces the generation of positrons. The obtained results demonstrate the advantages of long-scale plasmas of near-critical density (NCD) for increasing the parameters of MeV particle and photon beams generated in relativistic laser-plasma interaction. The efficiency of the laser-driven generation of MeV electrons and photons is substantially enhanced by the application of low-density polymer foams.
Artificial intelligence in heavy-ion collisions : bridging the gap between theory and experiments
(2023)
Artificial Intelligence (AI) methods are employed to study heavy-ion collisions at intermediate collision energies, where QCD matter at high baryon density and moderate temperature is produced. The experimental measurements of various conventional observables, such as collective flow and particle-number fluctuations, are usually compared with expensive model calculations to infer the physics governing the evolution of the matter produced in the collisions. Various experimental effects and processing algorithms can greatly affect the sensitivity of these observables. AI methods are used to bridge this gap between theory and experiments in heavy-ion collisions. The problems with conventional methods of analyzing experimental data are illustrated in a comparative study of the Glauber MC model and the UrQMD transport model. It is found that the centrality determination and the estimated fluctuations of the number of participant nucleons suffer from strong model dependencies for Au-Au collisions at 1.23 AGeV. This can bias the results of the experimental analysis if the number of participant nucleons is not used consistently throughout the analysis and in the final model-to-data comparison. The measurable consequences of this model dependence are also discussed. In this context, PointNet-based AI models are developed to accurately reconstruct the impact parameter or the number of participant nucleons in a collision event from the hits and/or reconstructed tracks of particles in 10 AGeV Au-Au collisions at the CBM experiment. In the last part of the thesis, different AI methods to study the equation of state (EoS) at high baryon densities are discussed. First, a Bayesian inference is performed to constrain the density dependence of the EoS from the available experimental measurements of elliptic flow and mean transverse kinetic energy of mid-rapidity protons in intermediate-energy collisions.
The UrQMD model was augmented to include arbitrary potentials (or equivalently the EoSs) in the QMD part to provide a consistent treatment of the EoS throughout the evolution of the system. The experimental data constrain the posterior constructed for the EoS for densities up to four times saturation density. However, beyond three times saturation density, the shape of the posterior depends on the choice of observables used. There is a tension in the measurements at a collision energy of about 4 GeV. This could indicate large uncertainties in the measurements, or alternatively the inability of the underlying model to describe the observables with a given input EoS. Tighter constraints and fully conclusive statements on the EoS require accurate, high statistics data in the whole beam energy range of 2-10 GeV, which will hopefully be provided by the beam energy scan programme of STAR-FXT at RHIC, the upcoming CBM experiment at FAIR, and future experiments at HIAF and NICA. Finally, it is shown that the PointNet-based models can also be used to identify the equation of state in the CBM experiment. Despite the uncertainties due to limited detector acceptance and biases in the reconstruction algorithms, the PointNet-based models are able to learn the features that can accurately identify the underlying physics of the collision. The PointNet-based models are an ideal AI tool to study heavy-ion collisions, not only to identify the geometric event features, such as the impact parameter or the number of participant nucleons, but also to extract abstract physical features, such as the EoS, directly from the detector outputs.
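The PointNet idea referred to above, a network applied with shared weights to every particle followed by a symmetric pooling so that the prediction is invariant to the ordering of tracks or hits, can be sketched in a few lines. Sizes, input features and the untrained regression head below are illustrative, not the actual CBM models.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

class TinyPointNet:
    """Shared per-point MLP + max-pooling + linear head.
    Input: (n_points, n_features), e.g. one track = (px, py, pz, mass)."""
    def __init__(self, n_features=4, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, hidden)

    def __call__(self, points):
        h = relu(points @ self.W1 + self.b1)  # same weights for every point
        g = h.max(axis=0)                     # permutation-invariant pooling
        return g @ self.w2                    # scalar, e.g. impact parameter

net = TinyPointNet()
event = rng.normal(size=(100, 4))             # one toy event (100 tracks)
b_hat = net(event)
# Shuffling the track order does not change the prediction.
shuffled = event[rng.permutation(len(event))]
```

The max-pooling is what makes the output independent of how the reconstruction algorithm happens to order the tracks, which is the key property exploited for event-by-event observables.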
A synchrotron is a particular type of cyclic particle accelerator and the first accelerator concept to enable the construction of large-scale facilities [10], such as the largest particle accelerator in the world, the 27-kilometre-circumference Large Hadron Collider (LHC) at CERN near Geneva, Switzerland; the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, for synchrotron radiation; and the superconducting heavy-ion synchrotron SIS100 under construction for the FAIR facility at GSI, Darmstadt, Germany, among others. Unlike a cyclotron, which can accelerate particles starting at low kinetic energy, a synchrotron needs a pre-acceleration facility to bring particles to an appropriate initial energy before injection. Pre-acceleration can be realized by a chain of other accelerator structures, such as a linac or, in the case of electrons, a microtron; examples are the proton and ion injectors Linac 4 and Linac 3 for the LHC, the UNILAC as injector for the SIS18 at GSI and, in the future, the SIS18 as injector for the SIS100. The linac is a commonly used injector for ion synchrotrons and consists of three main parts: an ion source creating the particles, a buncher system or an RFQ, followed by the main drift-tube accelerator (DTL). In order to meet the energy and beam-current requirements of a synchrotron injector linac, its cost is a considerable fraction of the total facility costs.
However, normal-conducting linac operation at cryogenic temperatures can be a promising solution for improving the efficiency and reducing the costs of a linac. Synchrotron injectors operate at a very low duty factor with beam pulse lengths in the 1 µs to 100 µs range, as most of the time is needed to perform the synchrotron cycle. Superconducting linacs are not convenient, as they cannot efficiently operate at low duty factor and high beam currents.
The cryogenic operation of ion linacs has been discussed and investigated at IAP in Frankfurt since around 2012 [1, 37]. The motivation was to develop very compact synchrotron injectors at reduced overall linac costs per MV of acceleration voltage. As the beam currents needed for new facilities are increasing as well, the new technology will also allow an efficient realization of the higher injector linac energies needed in that case. Operating normal-conducting structures at cryogenic temperature exploits the significantly higher conductivity of copper at temperatures of liquid nitrogen and below. On the other hand, the anomalous skin effect reduces the gain in shunt impedance considerably [25, 31, 9]. Some intense studies and experiments were performed recently, which are encouraging with respect to increased field levels at linac operation temperatures between 30 K and 70 K [17, 24, 4, 23, 5, 8]. While these studies are motivated by applications in electron acceleration at GHz frequencies, the aim of this work is to find applications in the 100 to 700 MHz range, typical for proton and ion acceleration. At these frequencies, a higher impact in saving RF power is expected due to the larger skin depth, which for the normal skin effect is proportional to f^(-1/2). On the other hand, it is assumed that the improvement in maximum surface field levels will be similar to what has already been demonstrated for electron accelerator cavities. This should allow finding a good compromise between the reduced RF power needed to achieve a given accelerating voltage and a reduced total linac length to save building costs.
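The f^(-1/2) scaling of the normal skin depth mentioned above can be checked directly with the textbook formula δ = sqrt(ρ/(π f μ0)). The sketch uses room-temperature copper resistivity; the anomalous skin effect relevant at cryogenic temperatures is deliberately not included in this simple estimate.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [H/m]

def skin_depth(rho, f):
    """Normal skin depth in metres: delta = sqrt(rho / (pi * f * mu0))."""
    return math.sqrt(rho / (math.pi * f * MU0))

rho_cu_300K = 1.7e-8   # copper resistivity at room temperature [Ohm*m]

# delta ~ f^(-1/2): ion-linac frequencies (100-700 MHz) have a much
# larger skin depth than GHz electron-linac structures.
d_216MHz = skin_depth(rho_cu_300K, 216e6)   # ~4.5 micrometres
d_3GHz = skin_depth(rho_cu_300K, 3e9)
ratio = d_216MHz / d_3GHz                   # = sqrt(3e9 / 216e6) ~ 3.7
```

The larger skin depth at the lower frequencies is why a bigger RF-power saving from cryogenic operation is expected there.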
A very important point is the temperature stability of the cavity surface during the RF pulse. This is of increasing importance the lower the operating temperature is chosen: the temperature dependence of the electric conductivity of copper becomes rather strong below 80 K, as long as the RRR value of the copper is adequate. It is very clear that this technology is suited only for cavities operated at low duty cycle, with RF pulse lengths below one millisecond. At longer pulses the cavity surface is heated within the pulse to temperatures where the conductivity advantage is reduced substantially. These conditions fit very well to synchrotron injectors or to pulsed beam-power applications.
H-mode structures of the IH and CH type are well known to have rather small cavity diameters at a given operating frequency. Moreover, they can achieve effective acceleration voltage gains above 10 MV/m even at low beam energies, already at room-temperature operation [29]. With the new techniques of 3D printing of stainless-steel and copper components, cavity sizes can be reduced even further, making the realization of complex cooling channels much easier.
Another topic is copper components in superconducting cavities, such as power couplers. It is of great importance to know exactly the thermal losses at these surfaces, which cannot be cooled efficiently in an easy way.
As part of this work, an improved buncher system for radio-frequency accelerators with low and medium ion currents was developed. The developed methodology made it possible to design an effective, simplified buncher system for injection into RF accelerators such as RFQs, cyclotrons, DTLs, etc., which achieves small output emittances and considerable beam transmission. To match a mono-energetic, continuous beam from an ion source for injection into a radio-frequency accelerator structure, an energy modulation is required, which further downstream (along a drift section) leads to longitudinal focusing of the beam. A sawtooth waveform provides the ideal energy modulation because of the linear dependence between the energy of the particles and their relative phases. However, this is not technologically feasible, since particle accelerators require voltage levels in the kV to 100 kV range. Instead, such a goal can be approached by spatially separating sinusoidal excitations at the fundamental frequency and at higher harmonics.
Therefore, an improved harmonic buncher, the so-called "Double Drift Harmonic Buncher" (DDHB), was developed in this work, which offers numerous advantages. A small longitudinal emittance as well as financial aspects speak for this approach. The main elements of a DDHB system are two cavities separated by a drift length L1, where the first resonator is operated at the fundamental frequency at -90° synchronous phase with applied voltage V1, and the second resonator at the second harmonic frequency at +90° synchronous phase with applied voltage V2. Finally, a second drift L2 at the end of the array is required for longitudinal focusing of the beam at the entrance of the main accelerator. Such a setup thus achieves the intended goal of high capture efficiency and small longitudinal emittance by matching the four design parameters V1, L1, V2 and L2.
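Why a fundamental plus a second harmonic can stand in for the ideal sawtooth modulation is visible in the sawtooth's Fourier series. A minimal numerical sketch, using a unit sawtooth and the textbook Fourier amplitudes rather than the optimised design values of V1 and V2:

```python
import numpy as np

phi = np.linspace(-np.pi, np.pi, 2001)

# Ideal (technologically unfeasible) sawtooth energy modulation.
saw = -phi / np.pi

# Two-term Fourier approximation: fundamental plus second harmonic.
# Fourier series: -phi/pi = -(2/pi) sin(phi) + (1/pi) sin(2 phi) - ...
V1 = 2.0 / np.pi
fundamental = -V1 * np.sin(phi)
two_harmonics = fundamental + (V1 / 2.0) * np.sin(2.0 * phi)

# Adding the second harmonic reduces the rms deviation from the sawtooth.
rms_one = np.sqrt(np.mean((fundamental - saw) ** 2))
rms_two = np.sqrt(np.mean((two_harmonics - saw) ** 2))
```

The sign pattern of the two terms corresponds to the ∓90° synchronous phases of the two cavities; the real amplitudes and drift lengths come from the beam-dynamics optimisation described above.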
Understanding the focusing of a beam starting from a DC beam, including space-charge forces, is one of the essential parts of beam physics. Many commercial codes offer simulation capabilities in this field of application. However, their approaches usually remain hidden from the user, or important details for an exact representation of the concept at hand are missing. Therefore, one main task of this work was to develop a dedicated multi-particle tracking beam dynamics code (BCDC), in which the space-charge effect during the bunching process is calculated starting from a DC beam. The BCDC code contains elementary routines such as drift and accelerating gap, or a magnetic lens for transverse beam focusing, as well as space-charge calculations taking into account the effects of the nearest neighbouring bunches (NNB). The space-charge algorithm in BCDC is based on a direct Coulomb grid-grid interaction and on electric-field calculations obtained by localizing the charge density on a Cartesian grid. To achieve accuracy, the field calculations are extended longitudinally, symmetrically around the central bucket (of size βλ), so that the simulation region is three times as large. The central particle distribution is copied into the neighbouring buckets after each step. Subsequently, the resulting fields in the main grid region are recalculated by superimposing the electric fields of the main grid with those from the neighbouring regions. Without this method, a continuous beam that is defined in the simulation only within one cell of length βλ would, for example, lead to a resulting space-charge field component Ez at both edges of the cell. Such an unphysical result could already be largely eliminated by applying the NNB technique.
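The effect of the NNB technique can be illustrated with a 1D toy model: for a DC beam defined only within one cell of length βλ, the on-axis field at the cell edge is large and unphysical, while copying the distribution into the neighbouring buckets largely cancels it. The point-charge Coulomb kernel and units below are illustrative; this is not the actual grid-grid solver of BCDC.

```python
import numpy as np

beta_lambda = 1.0                        # bucket (cell) length, arbitrary units
n = 200                                  # macro-particles in the central cell
z = (np.arange(n) + 0.5) / n * beta_lambda   # uniform DC beam in one cell

def Ez(z_obs, z_src):
    """On-axis longitudinal field at z_obs from point charges at z_src
    (3D Coulomb kernel, units with q/(4 pi eps0) = 1, softened at d = 0)."""
    d = z_obs - z_src
    return float(np.sum(np.sign(d) / (d**2 + 1e-6)))

# Field at the right cell edge: large for the isolated cell ...
E_edge_single = Ez(beta_lambda, z)

# ... but almost cancelled once the distribution is copied into the
# nearest neighbouring buckets (the NNB technique).
z_nnb = np.concatenate([z - beta_lambda, z, z + beta_lambda])
E_edge_nnb = Ez(beta_lambda, z_nnb)
```

The right-hand neighbour exactly mirrors the central cell about the edge, so their contributions cancel and only a small residual from the far cell remains.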
In addition to the NNB feature, BCDC has another special capability, namely so-called space-charge compensation (SCC). Due to ionization of the residual gas, partial space-charge compensation occurs along the low-energy beam transport, at and behind the buncher system, with varying percentages. One of the main goals of the DDHB concept is to develop it for high-current beam applications. Here, partial space-charge compensation allows the design to reach higher current levels in practice. This makes the BCDC program a powerful tool for simulations in future high-current projects. Proof-of-principle designs were developed in this work.
In this thesis, we use lattice QCD to study a part of the QCD phase diagram, specifically the QCD phase transition at mu=0, where the QCD matter changes from hadron gas to quark-gluon plasma (QGP) with increasing temperature.
This phase transition takes place as a crossover, but when theoretically changing the masses of the quarks, the order of the phase transition changes as well.
We focus on the region of heavy quark masses with Nf=2 flavours, where we investigate the critical quark mass at the second order phase transition in the form of a Z2 point between the first-order and the crossover region.
The first-order region is positioned at infinitely heavy quarks. As the quark masses decrease, the associated Z3 centre symmetry breaks explicitly, causing the first-order phase transition to weaken until it turns into the Z2 point and finally into a crossover.
We study this Z2 point using simulations at Nf=2 and lattices of the sizes Nt = {6, 8, 10, 12}, partially building on previous work, in which the simulations for Nt = {6, 8, 10} were started.
The simulations for Nt=12 are not yet finished, but we were able to draw some preliminary conclusions. These simulations are run on GPUs and CPUs, using the codes Cl2QCD and open-QCD-FASTSUM, respectively. Afterwards, the data go through a first analysis step in the Python program PLASMA, which prepares them for the two techniques we use to analyse the nature of the phase transition.
As a first, reliable analysis method, we perform a finite size scaling analysis of the data to find the location of the Z2 point. Since we are using lattice QCD, performing a continuum extrapolation is necessary to reach the continuum result.
In this regard, the finite-size scaling analysis is hampered by the excessive amount of simulated data needed in terms of statistics and the total number of simulations, which is why this thesis is only an intermediate step towards the continuum limit.
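A finite-size scaling analysis of this kind typically monitors the kurtosis (Binder cumulant) B4 of the order parameter, whose infinite-volume values distinguish a crossover (B4 = 3) from a first-order transition (B4 = 1), with the 3D Z2 (Ising) value B4 ≈ 1.604 at the critical point. A toy sketch with synthetic distributions, not actual lattice data:

```python
import numpy as np

rng = np.random.default_rng(2)

def binder_b4(x):
    """Kurtosis B4 = <(dx)^4> / <(dx)^2>^2 of order-parameter samples."""
    dx = x - x.mean()
    return np.mean(dx**4) / np.mean(dx**2) ** 2

# Crossover side: a single Gaussian peak gives B4 ~ 3.
gauss = rng.normal(0.0, 1.0, 200_000)

# First-order side: a symmetric double-peak distribution gives B4 -> 1.
two_peak = np.concatenate([rng.normal(-1.0, 0.05, 100_000),
                           rng.normal(+1.0, 0.05, 100_000)])
```

In the actual analysis, B4 is measured as a function of the coupling for several spatial volumes, and the crossing point of the curves locates the Z2 point.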
This also leads to the second analysis technique we explore in this thesis.
We start to design a Landau theory which describes the phase boundary for heavy masses at Nf=2 based on the simulated data.
We develop a Landau functional for every Nt we have simulation data for.
Although the results are not at the same precision as those from the finite-size scaling analysis, we are able to reproduce the position of the Z2 point for every Nt.
Even though we cannot take the continuum extrapolation right now, after further development in future works this approach might, in the long run, lead to a continuum result that does not require as many simulations as the finite-size scaling analysis.
Precise intensity monitoring at CRYRING@ESR: on designing a Cryogenic Current Comparator for FAIR
(2023)
In the field of today’s beam intensity diagnostics there is a significant gap in the non-interceptive, calibrated measurement of the absolute intensity of continuous (unbunched) DC beams with current amplitudes below 1 μA. At the Facility for Antiproton and Ion Research (FAIR), low-intensity DC beams will occur during slow extraction from the synchrotrons as well as for coasting beams of highly charged or exotic nuclei in the storage rings. The lack of adequate beam instrumentation limits the experimental program as well as the accuracy of experimental results.
The Cryogenic Current Comparator (CCC) can close the diagnostic gap with a high-precision DC current reading independent of ion species and beam parameters. However, the established detector design, based on a core with high magnetic permeability and a radial shield geometry, has well-known weaknesses concerning magnetic shielding efficiency and intrinsic current noise. To eliminate these weaknesses, a novel coreless CCC with a coaxial shield was constructed and combined with a high-performance SQUID contributed by the Leibniz Institute of Photonic Technology (Leibniz-IPHT Jena). The new axial CCC model was compared to a radial CCC of the established design provided by the Friedrich Schiller University Jena. According to numerical simulations prepared at TU Darmstadt and test measurements of the detectors in the laboratory, the new design offered a significant improvement of the shielding factor – from 75 dB to 207 dB at the required dimensions – and eliminated all noise contributions from the core material, promising an improved current resolution. Although the lower inductance of the pickup coil reduced the coupling to the beam significantly, the noise properties of the new CCC type were comparable to the classical version with a high-permeability core. However, the expected decrease of the low-frequency noise, and thus an increase of the current resolution, could not be observed at this stage of development.
Consequently, the classical CCC based on the radial shielding and high-permeability core had to be installed in CRYRING@ESR to provide the best possible intensity measurements for the upcoming experimental campaign. In CRYRING the CCC was operated with beam currents between 1 nA and 20 μA and with different ion species (H, Ne, O, Pb, U). It was shown that the CCC provides a noise-limited current resolution of better than 3.2 nA (rms) at a bandwidth of 200 kHz as well as a noise level below 40 pA/√Hz above 1 kHz. During operation, the main noise sources of the accelerator environment had to be identified and suitable mitigation strategies were developed. Temperature and pressure fluctuations were suppressed with a newly designed cryogenic support system based on a 70 l helium bath cryostat, developed and built in collaboration with the Institut für Luft- und Kältetechnik Dresden, in combination with a helium re-liquefier. The cryogenic operating time was restricted to around 7 days, which must be extended significantly in the future. Digital filters were developed to remove the perturbations caused by the helium liquefier and the neighboring dipole magnets. Given the promising results, the CCC system can be considered a prototype for future CCCs at FAIR.
This thesis deals with several aspects of non-perturbative calculations in low-dimensional quantum field theories. It is split into two main parts:
The first part focuses on method development and testing. Using exactly integrable QFTs in zero spacetime dimensions as toy models, the need for non-perturbative methods in QFT is demonstrated. In particular, we focus on the functional renormalization group (FRG) as a non-perturbative exact method and present a novel fluid-dynamic reformulation of certain FRG flow equations. This framework and the application of numerical schemes from the field of computational fluid dynamics (CFD) to the FRG is tested and benchmarked against exact results for correlation functions. We also draw several conclusions for the qualitative understanding and interpretation of renormalization group (RG) flows from this fluid-dynamic reformulation and discuss the generalization of our findings to realistic higher-dimensional QFTs.
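The CFD-style treatment mentioned above can be illustrated with the standard building block of such schemes: a first-order Godunov/upwind finite-volume step for a 1D conservation law. Burgers' flux serves here as a generic stand-in; the actual FRG flow equations and numerical schemes used in the thesis differ.

```python
import numpy as np

# Toy conservation law u_t + f(u)_x = 0 with f(u) = u^2/2 (Burgers),
# solved with a first-order finite-volume Godunov (upwind) scheme.
nx, L = 200, 2.0
dx = L / nx
x = -L / 2 + (np.arange(nx) + 0.5) * dx
u = np.exp(-10 * x**2)          # smooth initial condition
dt = 0.4 * dx                   # CFL-limited time step (max |u| <= 1)
mass0 = u.sum() * dx            # conserved "mass" of the profile

def flux(u):
    return 0.5 * u**2

for _ in range(200):
    # Godunov flux at each interior cell interface (convex flux).
    ul, ur = u[:-1], u[1:]
    F = np.where(ul > ur,
                 np.maximum(flux(ul), flux(ur)),            # shock
                 np.where(ul > 0, flux(ul),
                          np.where(ur < 0, flux(ur), 0.0))) # rarefaction
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])
```

Conservative schemes of this type keep the integral of u fixed and handle the shock-like non-analytic structures that, in the fluid-dynamic picture, correspond to phase boundaries in RG flows.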
The second part of this thesis deals with the Gross-Neveu (GN) model, a prototype of a relativistic QFT. Although it is a model in two spacetime dimensions, it shares many features with realistic models and theories of high-energy particle physics, and it also emerges as a limiting case of systems in solid-state physics. It is especially interesting to study the model at non-vanishing temperatures and densities, i.e., its thermodynamic properties and phase structure.
First, we use this model to test and apply the findings of the first part of this thesis in a realistic environment. We analyze how the fluid-dynamic aspects of the FRG manifest themselves in the RG flow of a full-fledged QFT and how this numerical framework pays off in actual calculations. In doing so, we also aim to answer a long-standing question: is there still symmetry breaking and condensation at non-zero temperatures in the GN model if one relaxes the commonly used approximation of an infinite number of fermion species and works with a finite number of fermions? In short: is matter (in the GN model) in a single spatial dimension at non-zero temperature always gas-like?
More generally, we use the GN model to learn about the correct description of QFTs at non-zero temperatures and densities. This is of utmost relevance for model calculations in low-energy quantum chromodynamics (QCD) and other QFTs in medium, and we draw several conclusions about the requirements for stable calculations at non-zero chemical potential.
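For reference, the GN model is conventionally defined by the Lagrangian (one common convention; N is the number of fermion species):

```latex
\mathcal{L} \;=\; \bar{\psi}_j\, i \gamma^\mu \partial_\mu \psi_j \;+\; \frac{g^2}{2}\left(\bar{\psi}_j \psi_j\right)^2, \qquad j = 1,\ldots,N,
```

in 1+1 spacetime dimensions. Its discrete chiral symmetry ψ → γ⁵ψ can be broken by a condensate ⟨ψ̄ψ⟩ ≠ 0; whether this breaking survives at non-zero temperature for finite N is precisely the condensation question raised above.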
Investigation of the kinematics involved in Compton scattering and hard X-ray photoabsorption
(2023)
The present work investigates the kinematics of Compton scattering on gaseous, internally cold helium and molecular-nitrogen targets in the high- and low-energy regimes. Additionally, photoionization of molecular nitrogen with high-energy photons is investigated. These experimental regimes were previously inaccessible due to the extremely small cross sections involved. Nowadays, third- and fourth-generation synchrotron machines produce sufficient photon flux, enabling the investigation of the above processes. The cold-target recoil-ion momentum spectroscopy (COLTRIMS) technique further increases the detection efficiency of the observed processes, since it enables full-solid-angle detection by exploiting momentum conservation.
Compton scattering is investigated at both high (helium and N2) and low (helium) photon energies. The impulse approximation assumes that Compton scattering takes place on a free electron carrying the momentum distribution of the bound state, thus ignoring the binding energy of the system. This approximation is largely valid in the high-energy regime, but breaks down in the low-energy regime.
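Within the impulse approximation sketched above, the double-differential cross section is commonly written in a factorized form (a standard textbook expression, quoted here for orientation):

```latex
\frac{d^2\sigma}{d\Omega\, d\omega'} \;\approx\; \left(\frac{d\sigma}{d\Omega}\right)_{\!\mathrm{KN}} \frac{m}{|\mathbf{q}|}\, J(p_z),
\qquad
J(p_z) = \iint \rho(\mathbf{p})\, dp_x\, dp_y,
```

where (dσ/dΩ)_KN is the Klein-Nishina cross section for a free electron, **q** the momentum transfer, and J(p_z) the Compton profile, i.e. the projection of the bound-electron momentum density ρ(**p**). The binding energy enters nowhere in this expression, which is exactly what fails at low photon energies.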
Photoionization is investigated at high photon energies, where the linear momentum of the photon can no longer be neglected, as is done in the commonly used dipole approximation.
Magnetic quadrupoles and solenoids are elementary components of an accelerator facility and limit the transverse extent of a particle beam by deflecting the particles back toward the accelerator axis. The conventional electromagnet design consists of an iron yoke wound with coils. In this work, such magnet structures are designed on the basis of permanent magnets and optimized with respect to their beam-transport quality, and field measurements on permanent-magnet quadrupoles are carried out. These were fabricated with 3D-printed plastic holders, which allows a large variety of shape variations. Building on this, an in-vacuum setup was developed with which the beam envelope inside a permanent-magnet quadrupole triplet can be diagnosed. It draws on a system for non-invasive beam diagnostics in strong magnetic fields, developed at the Institut für Angewandte Physik, based on Raspberry Pi single-board computers and cameras.
The permanent-magnet quadrupole (PMQ) configuration presented in this work is a further development of the design used at CERN in Linac4, an Alvarez drift-tube linac for the acceleration of H–. There, eight cuboid samarium-cobalt (SmCo) permanent magnets are integrated into each drift tube of the accelerator.
Building on this, the geometric design parameters were examined with respect to their influence on the magnetic-field quality. In a magnetic quadrupole used for beam focusing, this quality is characterized by a linear increase of the magnetic field from the quadrupole axis to the pole faces. In the course of this, the design was extended to use industrial standard geometries of cuboid magnets and to increase the magnetic flux density. To this end, it was investigated how adding further magnets affects the field and whether other magnet shapes yield better field quality.
Combining several PMQs at small spacing (<10 mm) leads, depending on the geometry of the PMQ singlets, to a considerable degradation of the field linearity, which in turn increases the phase-space volume occupied by the particles.
Using PMQ triplets as an example, the relevant design parameters are analyzed and possible solutions are presented. The resulting effects are illustrated by beam-dynamics simulations. For an application of the presented designs, a magnet shell with a honeycomb structure to hold the individual magnets was developed. It consists of two half-shells, each of which guarantees complete enclosure of all magnets and allows simple assembly around a beam pipe. These were 3D-printed from plastic in the institute workshop. Because of the higher achievable magnetization, neodymium-iron-boron magnets (Nd2Fe14B, Br = 1.36 T) were used for the constructed structures. For magnetic-field measurements to confirm the magnetostatic simulations and to assess the print quality, a motorized xyz stage for moving a Hall probe was set up. The measurements show good centering of the magnetic field, so that PMQs with a plastic holder are a fast and inexpensive way to set up a quadrupole configuration at short notice. The cost of a single PMQ is 50 € to 100 €, depending on its length.
Based on the PMQ structure, a PMQ triplet was placed in vacuum and equipped with Raspberry Pi cameras in the gaps between the singlets. This made it possible to record the beam envelope inside the triplet via the fluorescence induced by a helium beam, and first insights for necessary further developments were gathered. The exact technical setup is discussed in detail in the final chapter of this thesis.
In its simplest form, a permanent-magnet solenoid is realized as a single axially magnetized hollow cylinder and approximately produces the field distribution of a cylindrical coil. Through the radial magnetic-field components at the edges of the solenoid, particles acquire a tangential velocity component and perform a gyration along the solenoid axis. This reduces the beam radius, and the particles retain a velocity component pointing toward the solenoid axis. To maximize this focusing, the magnetic field must be concentrated on the cylinder axis. In particular, when the hollow cylinder is lengthened, the coupling of the pole faces across the inner volume is weakened. For this reason, a design consisting of three hollow-cylinder segments was developed. It is composed of two radially and one axially magnetized hollow cylinder and, for selected geometries, increases the mean magnetic flux density by a factor of two compared with a single hollow cylinder of the same geometry. This corresponds to a fourfold increase of the focusing strength, which scales quadratically with the mean magnetic flux density. The beam-dynamics consequences are illustrated by simulations with generated magnetic-field distributions. For a cost-effective construction, a design based on cuboid magnets was developed.
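The quadratic scaling of the focusing strength with the mean flux density follows from the thin-lens focal length of a solenoid (a standard beam-optics result, quoted for orientation):

```latex
\frac{1}{f} \;=\; \left(\frac{q}{2p}\right)^{2} \int B_z^{2}(z)\, dz,
```

where q and p are the particle charge and momentum and B_z the on-axis field. Doubling the mean flux density therefore quadruples 1/f, as exploited by the three-segment design.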
The conductivity behavior of pure, aerated water under continuous and pulsed X-ray irradiation (60 kV) was investigated. Two superimposed effects were found: (1) an irreversible increase in conductivity proportional to the X-ray dose rate, presumably due to a radiation reaction of the dissolved CO2; (2) a reversible increase in conductivity during irradiation, which can be explained by the formation of an ion species with a mean lifetime of about 0.15 s. It is assumed that these are O2⊖ radical ions, formed by the reaction of radiolytically produced H radicals with the dissolved oxygen. A possible chemical reaction mechanism is given, which leads to satisfactory quantitative agreement of the experimental results with yield values and reaction constants from the literature.
Neutron stars are unique laboratories for the investigation of the high density properties of bulk matter. In this work, the astrophysical constraints for a phase transition from hadronic matter to deconfined quark matter are examined thoroughly. A scheme for relating known astrophysical observables such as mass, radius and tidal deformability to the parameter space of such a transition is devised and applied to the set of data currently available.
In order to span a wide parameter space, a highly parameterizable relativistic mean-field equation of state in compliance with chiral effective field theory results is used, in which the stiffness of the equation of state can be varied via the effective mass at saturation density. The phase transitions are assumed to be of first order and modelled using a Maxwell construction with a constant-speed-of-sound quark matter model. The resulting equations of state are analyzed and divided into four categories, which can be used to constrain the parameter space that allows a phase transition. It is highlighted that a subset of this parameter space would even be detectable without the need for higher-precision measurements. A phase transition at high densities is shown to be particularly promising in this regard. Finally, the groundwork is laid for applying the equation of state used in this work to supernova or merger simulations by extending it to non-zero temperatures.
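A first-order transition to quark matter with a constant speed of sound is commonly written in the constant-speed-of-sound (CSS) parametrization; the symbols here are generic, not necessarily the thesis notation:

```latex
p(\varepsilon) =
\begin{cases}
p_{\mathrm{had}}(\varepsilon), & \varepsilon \le \varepsilon_{\mathrm{trans}}, \\[4pt]
p_{\mathrm{trans}} + c_s^{2}\left[\varepsilon - \left(\varepsilon_{\mathrm{trans}} + \Delta\varepsilon\right)\right], & \varepsilon \ge \varepsilon_{\mathrm{trans}} + \Delta\varepsilon,
\end{cases}
```

with the Maxwell construction enforcing constant pressure p_trans across the energy-density jump Δε. The transition parameters (ε_trans, Δε, c_s²) then span the space that observables such as mass, radius and tidal deformability constrain.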
In order to understand the origin of the elements in the universe, one must understand the nuclear reactions by which atomic nuclei are transformed. There are many different astrophysical environments that fulfill the conditions of different nucleosynthesis processes. Even though great progress has been made in recent decades in understanding the origin of the elements in the universe, some questions remain unanswered. In order to understand the processes, it is necessary to measure cross sections of the involved reactions and constrain theoretical model predictions. A variety of methods have been developed to measure nuclear reaction cross sections relevant for nuclear astrophysics. In this thesis, two different experiments and their results, both using the well-established activation method, are presented.
A measurement of the proton-capture cross section on the p-nuclide 96Ru was performed at the Institute for Structure and Nuclear Astrophysics (ISNAP), Notre Dame, USA. The main goal of this experiment was to compare the results with those obtained by Mei et al. in a pioneering experiment using the method of inverse kinematics at the GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany. Accordingly, the activations were carried out at the same center-of-mass energies of 9 MeV, 10 MeV and 11 MeV. Another activation was carried out at an energy of 3.2 MeV to compare the result with a measurement by Bork et al., who also used the activation method. While the results at 3.2 MeV agree well with those of Bork et al., the results at higher energies show significantly smaller cross sections than those measured by Mei et al. Experimental details, the data analysis and sources of uncertainty are discussed.
The second part of this thesis describes a neutron-capture cross-section experiment. At the Institut für Kernphysik of Goethe-Universität Frankfurt, an experimental setup allows the production of quasi-Maxwellian neutron fields to measure Maxwellian-averaged cross sections (MACS) relevant for s-process nucleosynthesis. The setup was upgraded with a fast electric linear guide to transport samples from the activation to the detection site. Cyclic activation of the sample increases the signal-to-noise ratio and makes it possible to measure neutron captures that lead to nuclei with half-lives on the order of seconds. In a first campaign, the MACS of the reactions 51V(n,γ), 107,109Ag(n,γ) and 103Rh(n,γ) were measured. The new components of the setup as well as the data analysis framework are described, and the results of the measurements are discussed.
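The gain from cyclic activation can be seen from the standard activation equation (general textbook form, not the specific numbers of this work):

```latex
\frac{dN}{dt} = \sigma \, \Phi \, N_{\mathrm{s}} - \lambda N
\;\;\Longrightarrow\;\;
N(t_{\mathrm{a}}) = \frac{\sigma \Phi N_{\mathrm{s}}}{\lambda}\left(1 - e^{-\lambda t_{\mathrm{a}}}\right),
```

where σ is the capture cross section, Φ the neutron flux, N_s the number of sample atoms and λ the decay constant of the product. For half-lives on the order of seconds, N(t_a) saturates almost immediately, so many short activation-counting cycles with fast sample transport accumulate far more counts than a single long activation.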
We study the polarization of relativistic fluids using the relativistic density operator at global and local equilibrium. In global equilibrium, a new technique to compute exact expectation values is introduced, which is used to obtain the exact polarization vector for fields of any spin. The same result has been extended to the case of massless fields. Furthermore, it is demonstrated that at local equilibrium not only the thermal vorticity but also the thermal shear contribute to the polarization vector. It is shown that assuming an isothermal local equilibrium, the new term can solve the polarization sign puzzle in heavy ion collisions.
Terahertz (THz) radiation lies between the microwave and far-infrared ranges of the electromagnetic spectrum. Compared with microwave and millimeter waves, it offers a larger signal bandwidth and an extremely narrow antenna beam, making it easier to achieve high resolution in imaging and detection applications. Its unique properties, such as penetration of most non-polar materials, its non-ionizing character, and the spectral fingerprints of materials, make THz imaging an appealing technique in military, biomedical, astronomical, communications, and other areas. However, the currently low power levels and detection sensitivities of THz radiation prevent THz imaging systems from using as few optical elements as systems in the visible or infrared range. This degrades imaging resolution, contrast, and field of view and makes aberrations more severe. In this thesis, THz imaging based on detection of the spatial Fourier spectrum is developed to achieve high-quality imaging. The main concept of Fourier imaging is that the information of the target is obtained by recording the field distribution in the Fourier plane (focal plane) of the imaging system. Numerical processing is then needed to extract the amplitude and phase information of the imaged target. With additional processing, three-dimensional (3D) information can be obtained from the phase. The novel recording and reconstruction scheme of the Fourier imaging system enables a higher resolution, better contrast, and broader field of view than conventional imaging systems such as microscopy and plane-to-plane telescopic imaging systems.
The work presented in this thesis comprises two imaging systems: one operating at 300 GHz based on fundamental heterodyne detection of the THz radiation, the other at 600 GHz using sub-harmonic heterodyne detection. The realization and test of the heterodyne detection are based on the THz antenna-coupled field-effect transistor (TeraFET) detector developed by Dr. Alvydas Lisauskas. Both systems use two synchronized electronic multiplier chains to radiate the THz waves. One serves as the local oscillator (LO); the other, with a slight frequency shift, illuminates the target. The two radiations are mixed on the detector, which scans the Fourier plane to record the complex Fourier spectrum of the imaged target. For fundamental heterodyne detection the LO covers the same frequency range as the illuminating radiation, but only half that range for sub-harmonic heterodyne detection. A 2-mm resolution, 60-dB contrast, and 5.5-cm-diameter imaging area are achieved at 300 GHz, and a 500-μm resolution, 40-dB contrast, and 3.5-cm-diameter imaging area at 600 GHz (the 300-GHz illuminating radiation has an approximate power of 600 μW, the 600-GHz illuminating radiation of 60 μW).
The thesis consists of six parts. After the introduction, the second chapter expands on Fourier optics from a theoretical point of view and presents simulations of the Fourier imaging system. First, the theory of electromagnetic-field propagation in free space and through an optical system is examined to derive the Fourier-transform function of the imaging system. The simulation serves the theoretical considerations and the implementation of a Fourier-optics script that allows numerical investigations of the reconstruction. The preliminary imaging field of view and resolution are also demonstrated. The third chapter describes the Fourier imaging system at 300 GHz based on fundamental heterodyne detection, including the experimental setup and the 2D and 3D imaging results. The fourth chapter reports the integration of the TeraFET detector with two substrate lenses (a Si lens on the back-side Si substrate and a wax/PTFE lens on the front side containing the bonding wires) for sub-harmonic heterodyne detection at 600 GHz. The characteristics of the wax/PTFE lens in the THz range are presented, followed by a comparison of imaging results obtained with and without the wax/PTFE lens. The fifth chapter analyzes the lateral and depth resolution of the Fourier imaging system in detail and uses the experimental results at 600 GHz to validate the analytical predictions. A comparison of the resolution between the Fourier imaging system and a conventional microscopy system shows that the Fourier imaging system delivers better imaging quality under the same system configuration. The last chapter summarizes the findings of THz Fourier imaging and gives an outlook on enhancements of the Fourier imaging system in the THz range.
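The core reconstruction step described above can be sketched numerically: in the idealized picture, the focal-plane field is (up to scaling) the spatial Fourier transform of the object field, so once heterodyne detection provides the complex spectrum, a single inverse FFT recovers the amplitude and phase of the target. A minimal sketch with a hypothetical test object (illustrative only, not code from the thesis):

```python
import numpy as np

# Hypothetical object field: a square amplitude mask with a phase step
n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
amplitude = (np.abs(X) < 0.4) & (np.abs(Y) < 0.4)
obj = amplitude * np.exp(1j * np.pi * (X > 0))  # complex object field

# Forward model: the focal-plane field is the spatial Fourier
# transform of the object field (lens Fourier-transform property)
fourier_plane = np.fft.fftshift(np.fft.fft2(obj))

# Heterodyne detection yields the complex spectrum (amplitude and
# phase), so the object is recovered by the inverse transform
recon = np.fft.ifft2(np.fft.ifftshift(fourier_plane))
```

In practice the recorded spectrum is band-limited by the scan area, which is what sets the achievable lateral resolution.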
The present work deals with the transport properties, including the charge-carrier dynamics, of quasi-two-dimensional organic charge-transfer salts. These materials have a layered structure and exhibit a high anisotropy of the electrical conductivity. Owing to their small bandwidth and low charge-carrier concentration, they belong to the strongly correlated electron systems, and their electronic properties can easily be tuned by chemical modifications or external parameters. The strong correlations result in metal-insulator transitions, which manifest themselves in a homogeneous distribution of the localized charge carriers in the Mott-insulating state and in a periodic arrangement in the charge-ordered state.
Using fluctuation spectroscopy, which analyzes time-dependent resistance fluctuations, new insights into the charge-carrier dynamics in the various electronic states were obtained in this work. The metal-insulator transitions in the investigated systems, which are based on the molecules BEDT-TTF (ET) and BEDT-TSF (BETS), depend on the strength of the structural dimerization and were tuned via the cooling rate, tensile strain, and the field effect.
In the systems κ-(BETS)₂Mn[N(CN)₂]₃, κ-(ET)₂Hg(SCN)₂Cl and κ-(ET)₂Cu[N(CN)₂]Br, the donor molecules are arranged as dimers, so that, owing to the effectively half-filled band, a Mott transition frequently occurs at sufficient correlation strength. In κ-(ET)₂Hg(SCN)₂Cl, however, a weaker dimerization leads to a charge-order transition accompanied by electronic ferroelectricity, with the polar order caused by a charge disproportionation within the dimers. At the ferroelectric transition, the resistance fluctuations show a strong increase of the spectral power density, a dependence on the applied electric field, and time dependencies that indicate spatial correlations of the fluctuating processes. These properties were also observed for the system κ-(BETS)₂Mn[N(CN)₂]₃, where dielectric spectroscopy likewise provided evidence of ferroelectricity, while the analysis of the current-dependent resistance fluctuations allowed the size of the polar regions to be estimated. The system κ-(ET)₂Cu[N(CN)₂]Br, prepared in a field-effect transistor structure, allows not only the investigation of the bandwidth-driven Mott transition via tensile strain of a substrate but also the tuning of the electronic properties by changing the band filling through electrostatic doping. Strong dependencies of the resistance on the gate voltage were observed, and similarities of the charge-carrier dynamics to conventional bulk samples were found.
In the systems θ-(ET)₂MM'(SCN)₄ with MM' = CsCo, RbZn, TlZn, a charge-order transition occurs that depends strongly on the cooling rate. By rapid cooling, the first-order phase transition can be kinetically avoided, realizing a charge-glass state. This metastable state shows novel physical properties with similarities to conventional glasses and has been discussed as a consequence of the geometric frustration of charge on a triangular lattice. In this work, the charge-carrier dynamics in the different charge states of systems with different degrees of frustration could be compared. To realize very fast cooling rates, a heat-pulse method was used and further developed. For several systems, the charge-glass state showed a markedly lower noise level than the charge-ordered state. In combination with thermal-expansion measurements and cooling-rate-dependent transport measurements, the existence of a structural glass transition, accompanied by a strong slowing down of the charge-carrier dynamics, was demonstrated in the systems with the strongest frustration. These findings shed new light on the previous purely electronic interpretation of the charge-glass state and highlight the influence of the structural degrees of freedom.
This work ties in with the investigation of intermediate-valence states and valence fluctuations in certain europium-based intermetallic systems. Valence fluctuations are a property of the electronic system of a compound that may be accompanied by structural effects, which in some cases are quite noticeable. Given the assumed connection between changes in the electronic system and in the crystal lattice, valence fluctuations of europium are believed to be a possible probe for the theory of quantum critical elasticity, which is investigated by the SFB TRR 288 (Frankfurt, Mainz, Karlsruhe, Bochum, Dresden).
Here, the progress in growing single crystals of different compounds related to this field of research is reported. This includes the ThCr2Si2 (122)-type compound EuPd2Si2 as well as the doping series EuPd2(Si1-xGex)2, the europium-based ternary phosphides EuFe2P2, EuCo2P2, EuNi2P2 and EuRu2P2, and attempts to grow compounds of a derived 1144 structure by ordered substitution of half the europium, EuKRu4P4.
The largest part of this work focuses on the EuPd2Si2 system, which exhibits intermediate-valent europium and a temperature-dependent transition between two different intermediate-valence states. Crystals of this system were grown using the Czochralski method with a levitating melt and a europium-excess flux after a two-step prereaction process. Explorations of a PdSi-rich flux and external flux methods are also reported. Ten Czochralski growth experiments were prepared in six generations, each iteratively seeded by the previous generation.
Thermodynamic and structural analyses of the crystals located the transition between the different intermediate-valence states of europium between 140 K and 165 K, going from a high-temperature Eu2.3+ state to a low-temperature Eu2.7+ state, and classified it as a second-order transition. The transition is accompanied by a lattice anomaly in which the a-parameter collapses by about 2%, while the c-parameter remains largely unaffected. Large differences between individual samples can be explained by combining thermodynamic and structural analyses with compositional analysis, revealing that the valence-transition temperature depends strongly on the sample composition and on Pd-Si site interchanges.
In an attempt to change the character of the valence transition to first order, silicon was substituted by germanium to introduce negative pressure. Germanium-substituted samples of EuPd2(Si1-xGex)2 were grown using the Czochralski method with the optimized parameters from the growth experiments for the undoped compound. Samples were prepared with nominal substitutions of x = 0.05, 0.10, 0.15, 0.20 (twice) and 0.30. For the EuPd2(Si1-xGex)2 system, a phase diagram for the europium valence states is derived from chemical and thermodynamic characterizations.
In the ternary europium phosphides EuT2P2, the position of the compounds in the generalized phase diagram and the question of long-range magnetic order versus valence transition appear connected to an isostructural transition of the tetragonal crystal structure that drastically decreases the c-parameter while establishing covalent bonds between phosphorus atoms of adjacent layers of the structure, the so-called 'collapse'. While EuFe2P2, EuRu2P2 and EuCo2P2 display both long-range magnetic order and a non-collapsed crystal structure, EuNi2P2 shows both a valence transition between two intermediate-valence states at a characteristic temperature of 36 K, accompanied by a small lattice anomaly in which the a-parameter shrinks by about 0.2%, and a collapsed crystal structure. Samples of EuFe2P2, EuCo2P2 and EuNi2P2 were grown in tin flux and by solid-solid sintering approaches.
Single crystals of EuFe2P2, EuCo2P2 and EuRu2P2 were investigated at the ESRF in Grenoble with single-crystal X-ray diffractometry over a pressure range up to 15 GPa and at temperatures down to 15 K to investigate the nature of the structural transitions in these compounds. While in EuCo2P2 the structural transition occurs as a first-order transition at all temperatures (e.g. at 2 GPa for 15 K), in EuFe2P2 and EuRu2P2 the structural collapse evolves over a broad pressure range up to 8 GPa as a second-order transition throughout the temperature range, albeit seeming to sharpen at lower temperatures. From the crystallographic data, elastic constants of the compounds could be derived, revealing EuFe2P2 and EuRu2P2 as unexpectedly elastic materials.
In order to probe the structural collapse at more accessible pressures, crystals with a structure derived from the 122 structure, but with ordered 50% substitution of europium, thereby altering the symmetry from I4/mmm to P4/mmm in a 1144 structure, were exploratively pursued. Various experiments to obtain EuAT4P4 (with A = K, Rb, Cs and T = Fe, Ru) from binary or ternary prereactants or directly from the elements remained largely unsuccessful.
High resolution, compactness, scalability, efficiency – these are the critical requirements that imaging radar systems have to fulfil in applications such as environmental monitoring, cloud mapping, body sensing or autonomous driving. This thesis presents a modular millimetre-wave frequency-modulated continuous-wave (FMCW) radar front-end solution intended for such applications. High resolution is achieved by enlarging the operating frequency band of the radar system, which is possible at millimetre-wave frequencies due to the large spectrum availability. Furthermore, since component size decreases with increasing frequency, millimetre-wave systems are a good candidate for compactness. However, the full integration of radar front-ends is a challenge at millimetre-wave frequencies due to poor signal integrity and spectral purity, which are essential for imaging applications. The proposed radar uses an alternative technique and tackles this limitation by featuring highly integrable architectures, specifically the Hartley architecture for signal conversion and an enhanced push-pull amplifier for harmonic suppression. The resolution of imaging radars can be further improved by increasing the number of transmitters and receivers. This has spurred the investigation of spectrum-, time- and energy-efficient multiplexing techniques for multiple-input multiple-output (MIMO) radar systems. The FMCW radar architecture proposed in this thesis is based on a code-division technique using intra-pulse, also called intra-chirp, modulation. This scalable and non-complex solution, made possible by the latest achievements in direct digital synthesis for signal generation, guarantees signal integrity and compact implementation. The proposed architecture is investigated by a thorough system analysis.
A transmitter module and a receiver module for a 35 GHz imaging radar prototype are designed, fabricated and fully characterized to validate the feasibility of our novel approach for high-resolution highly-integrated MIMO front-ends.
Simulations of conformational changes and enzyme-substrate interactions in protein drug targets
(2022)
Finding new drugs is a difficult, time-consuming, and costly challenge, with a success rate along the drug discovery pipeline of far less than 10%. The high failure rate of drug discovery projects motivates the integration of computational tools throughout the whole pipeline, from target identification to clinical trials. Target identification is the first step in the process: a biological target, e.g., a protein that plays a role in a disease, is identified and its molecular mechanism in the disease is studied. Further, a potential binding site on the target, where therapeutic molecules can bind and modulate the target's activity, needs to be characterized. Computational tools can contribute to improving the initial molecular target elucidation and assessment.
In this thesis, I use computational, physics-based approaches to characterize binding sites of drug targets and to decipher enzyme-substrate interactions, which play a role in disease mechanisms. Molecular dynamics (MD) simulations were applied to study the dynamics of molecules in solution at high temporal and spatial resolution. The method generates time-resolved trajectories of the particles in a system of interest by integrating Newton’s equations of motion numerically, starting from a set of coordinates and velocities. In MD simulations, all atoms of a chosen system, including solvent, are represented explicitly. Atomistic simulations are especially well-suited to study detailed interactions that depend on intermolecular interactions, such as hydration effects, hydrogen bonding, hydrophobic interactions, or subtle chemical differences. System properties are inferred from the trajectories, provided that the force fields, describing the interactions between the particles in the system, have a high accuracy. The bonded and non-bonded interactions are parametrized on experimental and quantum chemical data. The purpose of MD simulations can be to gain insight into the behavior of complex biological systems at molecular level, which often cannot be observed in experiments at the same resolution. With recent advances in computer hardware and simulation software, molecular systems of increasing size and simulation length can be investigated.
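The numerical integration of Newton's equations mentioned above is typically done with a symplectic scheme such as velocity Verlet. A minimal illustration on a 1D harmonic oscillator, a toy stand-in for a molecular force field (illustrative only, not code from the thesis):

```python
import numpy as np

def velocity_verlet(x0, v0, force, mass, dt, n_steps):
    """Integrate Newton's equations with the velocity-Verlet scheme.

    x0, v0 : initial position and velocity
    force  : callable returning the force at a given position
    """
    x, v = np.asarray(x0, dtype=float), np.asarray(v0, dtype=float)
    traj = [x.copy()]
    f = force(x)
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt**2  # position update
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt      # velocity update
        f = f_new
        traj.append(x.copy())
    return np.array(traj), v

# Toy system: harmonic oscillator with k = m = 1, x(0) = 1, v(0) = 0,
# whose exact solution is x(t) = cos(t)
k, m = 1.0, 1.0
traj, v_end = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt=0.01, n_steps=1000)
```

The symplectic structure of the update is what keeps the total energy bounded over long trajectories, a property production MD engines rely on.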
In the first part of the thesis, I investigated the conformational ensembles of several protein drug targets. Proteins are dynamic biomacromolecules that can adopt diverse and nearly isoenergetic conformational states. Ligand binding can shift the equilibrium of this conformational ensemble and can uncover so-called cryptic sites. Cryptic sites only emerge upon small-molecule binding and are often flat and featureless, and thus not easily recognized in crystal structures without bound ligands. Newly detected binding sites, including cryptic sites, can potentially be exploited for ligand binding and render a target druggable. Druggability is the ability of a protein to bind small, drug-like molecules, which is the basis for rational drug design. In this thesis, I used state-of-the-art physics-based computational approaches to investigate the conformational ensembles of binding sites. In all studied systems, it is known from experiment that a specific group of ligands can induce conformational changes. The aim is to sample the conformational space made accessible upon ligand binding, yet without using the specific ligand structures or details about their interactions. We are interested in sampling the pocket conformational states and identifying the respective pocket-opening mechanism. For some cases, I additionally assessed whether the observed flexibility is a feature of the protein family or specific to the protein under consideration.
The first system studied is factor VIIa (FVIIa). FVIIa is an essential part of the coagulation cascade and hence a potential drug target for thrombotic diseases. In addition, I investigated various other trypsin-like serine proteases from the same protein family. The binding pocket of trypsin-like serine proteases is called the S1 pocket. An X-ray crystal structure solved by our collaborators reveals that a β-sheet structure in the S1 pocket is distorted by a bound ligand. I resolved the conformational change with MD simulations, starting from the unbound protein structure solvated in water and ions, and observed multiple spontaneous transition events. In 7 out of 22 simulations starting from the β-sheet structure, the S1 pocket eventually rearranged into a distorted loop structure. These transitions occurred spontaneously and were mediated by water molecules probing the backbone hydrogen bonds. The conformational change studied here controls the onset of substrate binding and catalysis. Furthermore, I used metadynamics simulations, an enhanced-sampling method, to estimate the free-energy barrier of this conformational change.
This thesis has two main parts.
The first part is based on our publication [1], where we use perturbation theory to calculate decay rates of magnons in the Kitaev-Heisenberg-Γ (KHΓ) model. This model describes the magnetic properties of the material α-RuCl3, which is a candidate for a Kitaev spin liquid. Our motivation is to validate a previous calculation from Ref. [2]. In this thesis, we map out the classical phase diagram of the KHΓ model. We use the Holstein-Primakoff transformation and the 1/S expansion to describe the low-temperature dynamics of the Kitaev-Heisenberg-Γ model in the experimentally relevant zigzag phase in terms of spin waves. By parametrizing the spin waves in terms of Hermitian fields, we find a special parameter region within the KHΓ model where the analytical expressions simplify. This enables us to construct the Bogoliubov transformation analytically. For a representative point in the special parameter region, we use these results to numerically calculate the magnon damping, which at leading order is caused by the decay of a single magnon into two magnons. We also calculate the dynamical structure factor of the magnons.
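The linear spin-wave step can be summarized by the standard Holstein-Primakoff transformation, written here in its textbook form for the local quantization axis (the conventions of the thesis itself may differ):

```latex
S_i^{z} = S - a_i^{\dagger} a_i, \qquad
S_i^{+} = \sqrt{2S - a_i^{\dagger} a_i}\;\, a_i \approx \sqrt{2S}\, a_i, \qquad
S_i^{-} = \left( S_i^{+} \right)^{\dagger} .
```

Expanding the square root in powers of 1/S generates, beyond the quadratic magnon Hamiltonian, cubic boson terms; in anisotropic models such as the KHΓ model these provide the one-to-two magnon decay vertex responsible for the damping computed in the thesis.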
The second part of this thesis is based on our publication [3], where we use the functional renormalization group to analyze a discontinuous quantum phase transition towards a non-Fermi liquid phase in the Sachdev-Ye-Kitaev (SYK) model. In this thesis, we perform a disorder average over the random interactions in the SYK model. We argue that in the thermodynamic limit, the average renormalization group (RG) flow of the SYK model is identical to the RG flow of an effective disorder averaged model. Using the functional RG, we find a fixed point describing the discontinuous phase transition to the non-Fermi liquid phase at zero temperature. Surprisingly, we find a finite anomalous dimension of the fermions, which indicates critical fluctuations and is unusual for a discontinuous transition. We also determine the RG flow at zero temperature, and relate it to the phase diagram known from the literature.
The stellar nucleosynthesis of elements heavier than iron can primarily be attributed to neutron capture reactions in the s and r process. While the s process is considered to be well understood with regard to the stellar sites, phases, and conditions where it occurs, nucleosynthesis networks still need accurate neutron capture cross sections with low uncertainties as input parameters. Their quantitative outputs for the isotopic abundances produced in the s process, coupled with the observable solar abundances, can be used to indirectly infer the expected r-process abundances. The two stable gallium isotopes, 69Ga and 71Ga, have been shown in sensitivity studies to have considerable impact on the weak s process in massive stars. The available experimental data, mostly derived from neutron activation measurements for quasi-stellar neutron spectra at kBT = 25 keV, show disagreements of up to a factor of three.
Determining the differential neutron capture cross section can provide input data for the whole range of astrophysically relevant energies. To that end, a three-month neutron time-of-flight experimental campaign was performed at the n_TOF facility at CERN, using isotopically enriched samples of both isotopes. The data taken at the EAR1 experimental area cover a wide neutron energy range from thermal to several hundred keV. The respective differential and spectrum-averaged neutron capture cross sections for 69Ga and 71Ga were determined in this thesis. They show good agreement with the evaluated cross sections for 71Ga, but reproduce the deviations from the evaluated data that other, more recent activation measurements showed for 69Ga.
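The step from a differential cross section to a stellar one amounts to folding σ(E) with a Maxwell-Boltzmann flux. A minimal sketch, using an illustrative 1/v cross section rather than the measured gallium data:

```python
import numpy as np

def _integrate(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def macs(energy, sigma, kT):
    """Maxwellian-averaged cross section (MACS) from a differential cross section:
    MACS(kT) = (2/sqrt(pi)) * Int sigma(E) E e^{-E/kT} dE / Int E e^{-E/kT} dE
    """
    w = energy * np.exp(-energy / kT)        # Maxwell-Boltzmann flux weight
    return 2.0 / np.sqrt(np.pi) * _integrate(sigma * w, energy) / _integrate(w, energy)

# Illustrative 1/v capture cross section (sigma ~ 1/sqrt(E)), not gallium data.
E = np.linspace(1e-3, 500.0, 20000)          # neutron energy in keV
sigma = 100.0 / np.sqrt(E)                   # in mb
macs_25 = macs(E, sigma, kT=25.0)            # at kBT = 25 keV
```

For a pure 1/v cross section the integrals can be done analytically, giving MACS = 100/sqrt(kT) = 20 mb at kT = 25 keV, which the numerical quadrature reproduces.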
This thesis presents research based on nanoscopic surface measurements on plasmonic metasurfaces and two-dimensional materials, in particular the semiconducting transition-metal dichalcogenide (TMDC) WS_2. The thesis is divided into seven chapters. The introduction gives an overview of the driving forces behind research in nanophotonics on two-dimensional material systems. The investigation of light-matter interaction at thin material interfaces runs as a common thread through the entire work.
The second chapter describes the experimental setup implemented to perform the nanoscopic measurements in this work. The theoretical foundations, the measurement principle, and the implementation of the scanning near-field optical microscope (s-SNOM) are outlined. In addition, a conductive atomic force microscope (c-AFM) operated in contact mode is used to measure electrical currents on microscopic two-dimensional TMDC terraces. The following four chapters present this work's contributions to the study of light-matter interaction at the nanoscale from different perspectives. Each chapter contains a short introduction, a theory section, measurement data or simulation results, and an analysis, completed by a concluding section.
The central work on a metallic metasurface made of elliptical gold disks is presented in Chapter 3. The associated theory section introduces the concept of surface plasmon polaritons (SPPs), which is essential for the field of plasmonics in general. Various methods for calculating the dispersion relation of these surface modes at single- and multilayer interfaces are applied to the investigated metasurface sample. The model predicts three different modes propagating at the interface: a partially bound surface mode radiating into the substrate, as well as two buried, strongly bound anisotropic modes. A silicon nanosphere placed on the sample serves as a radial excitation source.
Comparison with s-SNOM near-field images shows that only the weakly bound guided mode resonance was excited sufficiently to be detected by s-SNOM imaging. The weak surface confinement explains the apparently isotropic propagation on the anisotropic surface. Observing the remaining strongly confined anisotropic buried modes would require an improved depth-sensitive resolution of the system, which should in principle be possible for layer thicknesses of 20 nm. Furthermore, the observation raises the question of whether the excitation efficiency set by the momentum and mode-volume matching of the nanosphere provides a sufficient excitation cross section to generate detectable buried SPP modes.
Chapter 4 continues the idea of visualizing buried electric fields with s-SNOM. Here it is applied to the investigation of WS_2, a two-dimensional TMDC material that exhibits photoluminescence. By structuring the gallium phosphide substrate beneath the suspended monolayer, which is supported by a thin layer of hBN, the photoluminescence yield is increased by a factor of 10. This is achieved by designing a lateral DBR microcavity, etched into the substrate, with an additionally optimized vertical depth.
High-resolution imaging of the electric field distribution in the resonator is enabled by using s-SNOM to assess the improvement in in-coupling achieved by these two approaches. It was found that the lateral structure contributes predominantly to the enhanced photoluminescence yield, whereas no obvious enhancement of the in-coupling could be attributed to the vertical structure optimization.
The two-dimensional material WS_2 is investigated again in Chapter 5, this time using c-AFM. Multilayers of different thicknesses on graphene and gold serve as tunneling barriers for vertical currents between the substrate and the conductive c-AFM tip. The data can be explained by a Fowler-Nordheim model with parameters for the tunneling width and the Schottky barrier heights of the two interfaces. However, the measurements show poor reproducibility, which calls for a more detailed account of the relevant error sources. The conclusion of the chapter proposes several key aspects to be considered in future measurements. Crucially, c-AFM is highly sensitive to the adsorption of water films on the sample surface, from which WS_2 surfaces suffer under ambient conditions.
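The Fowler-Nordheim analysis referred to above can be sketched as follows; the barrier height and field range are illustrative stand-ins, not the fitted WS_2 parameters:

```python
import numpy as np

# Simplified Fowler-Nordheim tunneling current density
#   J = (A * F**2 / phi) * exp(-B * phi**1.5 / F)
# with field F (V/nm) and barrier height phi (eV); A and B are the
# standard first and second Fowler-Nordheim constants.
A = 1.541434e-6   # first FN constant
B = 6.830890      # second FN constant, eV^-3/2 V nm^-1

def fn_current(F, phi):
    return A * F**2 / phi * np.exp(-B * phi**1.5 / F)

F = np.linspace(1.0, 5.0, 50)        # electric field in V/nm
J = fn_current(F, phi=1.2)           # hypothetical 1.2 eV barrier

# In a Fowler-Nordheim plot, ln(J/F^2) is linear in 1/F
# with slope -B * phi**1.5, which is how phi is extracted from data.
slope = np.polyfit(1.0 / F, np.log(J / F**2), 1)[0]
```

The linearity of the FN plot is the usual diagnostic that tunneling (rather than, e.g., thermionic emission) dominates the measured current.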
Navigating a complex environment is assumed to require stable cortical representations of environmental stimuli. Previous experimental studies, however, show substantial ongoing remodeling at the level of synaptic connections, even under behaviorally and environmentally stable conditions. It remains unclear how these changes affect sensory representations at the level of neuronal populations under basal conditions and how learning influences these dynamics.
Our approach is a joint effort between the analysis of experimental data and theory. We analyze chronic neuronal population activity data, acquired by our collaborators in Mainz, to describe population activity dynamics under basal conditions and during learning (fear conditioning). The data analysis is complemented by the analysis of a circuit model investigating the link between a neural network's activity and changes in its underlying structure.
Using chronic two-photon imaging data recorded in awake mouse auditory cortex, we reproduce previous findings that responses of neuronal populations to short complex sounds typically cluster into a near-discrete set of possible responses. This means that different stimuli evoke basically the same response and are thus grouped together into one of a small set of possible response modes. The near-discrete set of response modes can be utilized as a sensitive and robust means to detect and track changes in population activity over time. Doing so, we find that sound representations are subject to significant ongoing remodeling across the time span of days under basal conditions. Auditory cued fear conditioning introduces a bias into these ongoing dynamics, resulting in a differential generalization both on the level of neuronal populations and on the behavioral level. This means that sounds that are perceived as similar to the conditioned stimulus (CS+) show an increased co-mapping to the same response mode the CS+ is mapped to. This differential generalization is also observed in animal behavior, where sounds similar to the CS+ result in the same freezing behavior as the CS+, whereas dissimilar sounds do not. These observations could provide a potential mechanism of stimulus generalization, which is one of the most common phenomena associated with post-traumatic stress disorder, on the level of neuronal populations.
To investigate how the aforementioned changes in neuronal population activity are linked to changes in the underlying synaptic connectivity, we devised a circuit model of excitatory and inhibitory neurons. We studied this firing rate model to investigate the effect of gradual changes in the network’s connectivity on its activity. Apart from an input dominated uni-stable regime (one response per stimulus independent of the network) and a network dominated uni-stable regime (one response per network independent of the stimulus), we also find a multi-stable regime for strong recurrent connectivity and a high ratio of inhibition to excitation. In this regime the model reproduces properties of neural population activity in mouse auditory cortex, including sparse activity, a broad distribution of firing rates, and clustering of stimuli into a near discrete set of response modes. This clustering in the multi-stable regime means that, not only can identical stimuli evoke different responses, depending on the network’s initial condition, but different stimuli can also evoke the same response.
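The type of excitatory-inhibitory firing-rate model described here can be sketched as follows; connectivity statistics, gain function, and parameters are illustrative choices, not the exact model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

N_E, N_I = 80, 20                       # excitatory and inhibitory neurons
N = N_E + N_I
g = 5.0                                 # recurrent coupling strength (illustrative)

# Random connectivity obeying Dale's law: E columns positive, I columns negative.
W = np.abs(rng.normal(0.0, g / np.sqrt(N), (N, N)))
W[:, N_E:] *= -4.0                      # strong inhibition relative to excitation

def simulate(stimulus, r0, dt=0.1, steps=2000):
    """Euler-integrate the rate dynamics dr/dt = -r + f(W r + stimulus),
    with a rectified-tanh gain f, and return the final rates."""
    r = r0.copy()
    for _ in range(steps):
        drive = W @ r + stimulus
        r += dt * (-r + np.maximum(np.tanh(drive), 0.0))
    return r

stimulus = rng.normal(0.0, 1.0, N)
r_a = simulate(stimulus, r0=np.zeros(N))
r_b = simulate(stimulus, r0=rng.uniform(0.0, 1.0, N))
```

With strong recurrence and strong inhibition the two initial conditions may settle into different response modes (the multi-stable regime of the thesis), whereas in an input-dominated regime they converge to the same response.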
Applying gradual drift to the network connectivity we find periods of stable responses, interrupted by abrupt transitions altering the stimulus response mapping. We study the mechanism underlying these transitions by analyzing changes in the fixed points of this network model, employing a method to numerically find all the fixed points of the system. We find that such abrupt transitions typically cannot be explained by the mere displacement of existing fixed points, but involve qualitative changes in the fixed point structure in the vicinity of the response trajectory. We conclude that gradual synaptic drift can lead to abrupt transitions in stimulus responses and that qualitative changes in the network’s fixed point topology underlie such transitions.
In summary, we find that cortical networks display ongoing representational drift under basal conditions, which is biased towards a differential generalization during fear conditioning. A circuit model is able to reproduce key characteristics of auditory cortex, including a clustering of stimulus responses into a near-discrete set of response modes. Implementing synaptic drift into this model leads to periods of stable responses interrupted by abrupt transitions towards new responses.
The focus of this work is the diagnostics of a hydrogen theta-pinch plasma with respect to the line-integrated electron and neutral gas densities by means of two-color interferometry. The line-integrated electron and neutral gas densities are essential quantities from which the rate coefficients of ionization and recombination in a plasma-ion-beam interaction can be determined.
A theta-pinch plasma is an inductively ignited plasma in which the electric field required for ignition is generated by an alternating magnetic field. The induced azimuthal electric field accelerates free electrons in the working gas, which is brought into the plasma state by impact ionization. The azimuthal plasma current generates a radial magnetic pressure gradient that compresses the plasma. Since no compression force acts in the axial direction, the plasma evades further compression, leading to an axial expansion. This expansion drives an ionization wave into the cold residual gas, and a long, highly ionized plasma column is formed.
This highly dynamic process has been studied with temporal resolution using a Mach-Zehnder interferometer and two different versions of the theta pinch. The versions differ in the geometry and inductance of the coils: one uses a cylindrical coil and the other a spherical one. The basic measurement principle relies on the fact that the plasma has a refractive index that depends on the densities of the particle species it contains. In a hydrogen plasma these are the contributions of the free electrons and of the neutral gas, which is why a two-color interferometer is employed. To enable a measurement independent of the laser intensities, the heterodyne technique is used, in which the reference beams of both wavelengths are each frequency-shifted by an acousto-optic modulator. By comparison with a stationary reference signal using an I/Q demodulator, the interferometric phase shift is extracted from the measurement signal.
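The separation of the electron and neutral-gas contributions exploits their opposite wavelength scaling of the refractive index. A minimal sketch of the two-wavelength inversion, with illustrative constants and densities rather than the values of the experiment:

```python
import numpy as np

r_e = 2.8179403e-15          # classical electron radius in m

def invert_two_color(phi1, phi2, lam1, lam2, K):
    """Recover line-integrated electron and neutral densities from the
    interferometric phase shifts measured at two wavelengths.

    Model: phi(lam) = -r_e * lam * Ne_int + (2*pi*K/lam) * Nn_int,
    i.e. free electrons lower and neutrals raise the refractive index,
    with opposite wavelength scaling -- which is what makes the
    two-color measurement separable as a 2x2 linear system.
    """
    A = np.array([[-r_e * lam1, 2 * np.pi * K / lam1],
                  [-r_e * lam2, 2 * np.pi * K / lam2]])
    return np.linalg.solve(A, np.array([phi1, phi2]))

# Round trip with made-up densities (illustrative, not thesis data):
lam1, lam2 = 532e-9, 1064e-9           # two laser wavelengths in m
K = 4.9e-30                            # approx. refractivity per molecule, m^3
Ne_true, Nn_true = 1.45e22, 5.0e22     # line-integrated densities in m^-2
phase = lambda lam: -r_e * lam * Ne_true + 2 * np.pi * K / lam * Nn_true
Ne, Nn = invert_two_color(phase(lam1), phase(lam2), lam1, lam2, K)
```

Because the system is linear, the round trip recovers the assumed densities exactly; in practice the accuracy is limited by the phase noise of the two interferometer channels.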
With this diagnostic technique, the line-integrated electron and neutral gas densities of the theta-pinch plasma were investigated while varying the working pressure and the charging voltage of the capacitor bank. With the cylindrical version of the experiment, an optimal combination of line-integrated electron density and effective degree of ionization η of (1.45 ± 0.04) × 10^18 cm^-2 at η = 0.826 ± 0.022 was found at a working pressure of 20 Pa and a charging voltage of 16 kV. In contrast, with the spherical version, the optimal combination at a working pressure of 20 Pa and a charging voltage of 18 kV amounts to only (1.23 ± 0.03) × 10^18 cm^-2 at η = 0.699 ± 0.019.
Furthermore, for both versions of the experiment it was shown that the line-integrated electron density, following the oscillating current, exhibits periodic local maxima that coincide in time with significant dips in the line-integrated neutral gas density. These dips are produced by the axial expansion of the plasma and the associated ionization wave in the residual gas. In addition to this central part of the work, a laser-based polarimetric diagnostic was carried out, with which the longitudinal component of the magnetic flux density of the theta-pinch coils was determined with temporal and spatial resolution. The Faraday effect of a magneto-optic TGG crystal served as the measurement principle.
Prior to the polarimetric diagnostics, the TGG crystal was calibrated with respect to its Verdet constant, yielding a value of V = (-149.7 ± 6.4) rad/(T·m). The spatially resolved polarimetric diagnostics were enabled by a cable-pull system with which the TGG crystal could be moved on a sled to different positions along the coil axis. At each measurement point, the magnetic flux density was determined with temporal resolution for both versions of the experiment and for different charging voltages. The Δ/Σ method was used as the measurement technique, allowing an intensity-independent measurement.
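The intensity-independent Δ/Σ evaluation can be sketched as follows; the crystal length and field value are illustrative assumptions, while the Verdet constant is the calibrated value quoted in this abstract:

```python
import numpy as np

V = -149.7            # Verdet constant of the TGG crystal in rad/(T*m)
L = 0.01              # crystal length in m (illustrative, not the thesis value)

def flux_density(delta, sigma):
    """Magnetic flux density from the Delta/Sigma polarimeter signals.

    For a polarizing beam splitter at 45 degrees, the two detector
    intensities are I_pm ~ (I0/2) * (1 pm sin(2*theta)), so
    Delta/Sigma = sin(2*theta) independently of the laser intensity I0.
    The Faraday rotation angle is theta = V * B * L.
    """
    theta = 0.5 * np.arcsin(delta / sigma)
    return theta / (V * L)

# Round trip: a field of 50 mT rotates the polarization by theta = V*B*L.
B_true = 0.05
theta = V * B_true * L
i_plus = 0.5 * (1 + np.sin(2 * theta))
i_minus = 0.5 * (1 - np.sin(2 * theta))
B = flux_density(i_plus - i_minus, i_plus + i_minus)
```

Dividing the difference by the sum cancels the overall intensity I0, which is the point of the Δ/Σ scheme.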
However, the spatially resolved measurement results are too low compared to simulations. For the cylindrical coil the deviations at the coil center amount to about 14-16%, and for the spherical coil to roughly 16-18%. When the measured and simulated values are normalized to their respective values at the center, complete agreement within the errors between measurement and simulation is obtained for the cylindrical coil. The hysteresis of the TGG crystal is discussed as the cause of the negative deviations. In particular, at the beginning of the discharge the measured magnetic flux density lags the current, an effect that is especially pronounced around the current zero crossing.
The aim of this thesis is to provide a complete and consistent derivation of second-order dissipative relativistic spin hydrodynamics from quantum field theory. We will proceed in two main steps. The first one is the formulation of spin kinetic theory from quantum field theory using the Wigner-function formalism and performing an expansion in powers of the Planck constant. The essential ingredient here is the nonlocal collision term. We will find that the nonlocality of the collision term arises at first order in the Planck constant and is responsible for the spin alignment with vorticity, as it allows for conversion between spin and orbital angular momentum.
In the second step, this kinetic theory is used as the starting point to derive hydrodynamics including spin degrees of freedom. The so-called canonical form of the conserved currents follows from Noether’s theorem.
Applying an HW pseudo-gauge transformation, we obtain a spin tensor and an energy-momentum tensor with an obvious physical interpretation. Promoting all components of the HW tensors to dynamical variables, we derive second-order dissipative spin hydrodynamics. The additional equations of motion for the dissipative currents are obtained from kinetic theory by generalizing the method of moments to include spin degrees of freedom.
This dissertation addresses the influence of homeostatic adaptation on information processing and learning in neural systems. The term homeostasis denotes the ability of a dynamical system to keep certain internal variables in a dynamic equilibrium through regulatory mechanisms. A classic example of neural homeostasis is the dynamic scaling of synaptic weights, which keeps the activity, i.e., the firing rate, of individual neurons constant on temporal average. The models considered here implement a dual form of neural homeostasis: for each neuron, two internal parameters are coupled to an intrinsic variable such as the aforementioned mean activity or the membrane potential. A special feature of this dual adaptation is that it can control not only the temporal mean of a dynamical variable but also its temporal variance, i.e., the magnitude of the fluctuations around the mean. This work considers two neural systems in which this aspect comes into play.
The first system considered is a so-called echo state network, which falls into the category of recurrent networks. Recurrent neural networks generally have the property that a population of neurons has synaptic connections projecting back onto the population itself. Recurrent networks can thus be viewed as autonomous (if no additional external synaptic connections exist) or non-autonomous dynamical systems that possess complex dynamical properties due to this feedback. Depending on the structure of the recurrent synaptic connections, information from external input can, for example, be stored over extended periods of time. Likewise, dynamical fixed points as well as periodic or chaotic activity patterns can emerge. This dynamical versatility is also found in the recurrent networks omnipresent in the brain, where it serves, e.g., the processing of sensory information or the execution of motor patterns. The echo state network considered here is characterized by randomly generated recurrent synaptic connections that are not subject to synaptic plasticity. In the course of a learning process, only the connections projecting from this so-called dynamic reservoir onto output neurons are modified. Although this greatly simplifies learning, the reservoir's ability to process time-dependent inputs depends strongly on the statistical distribution used to generate the recurrent connections. The variance, i.e., the scaling of the weights, is of particular importance here. A measure of this scaling is the spectral radius of the recurrent weight matrix.
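The role of the spectral radius can be illustrated with the standard offline recipe for rescaling a random reservoir; note that this uses global eigenvalue information, unlike the local adaptation rule studied in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 500
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random reservoir weights

def scale_to_spectral_radius(W, rho_target):
    """Rescale a recurrent weight matrix to a desired spectral radius.

    Scaling W by a constant scales all eigenvalues by the same constant,
    so a single global factor sets the spectral radius exactly.
    """
    rho = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (rho_target / rho)

W_scaled = scale_to_spectral_radius(W, rho_target=0.95)
rho = np.max(np.abs(np.linalg.eigvals(W_scaled)))
```

A target slightly below 1, as here, places the reservoir near the edge of chaos, the regime the abstract identifies as optimal for information processing.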
Previous theoretical work has shown that, for the system considered, a spectral radius slightly below the critical value of 1 leads to good performance. Above this value, chaotic dynamics arise in the autonomous case, which degrades information processing. The dual adaptation mechanism we introduce, termed flow control, aims to regulate the spectral radius to the desired target value by scaling the synaptic weights. It is essential that, in the interest of biological plausibility, the adaptation dynamics rely only on local quantities. In the case of flow control, this is achieved by regulating the fluctuations occurring in the membrane potential of the cell. In evaluating the effectiveness of flow control, it was found that the spectral radius can be controlled very precisely if the activities of the neurons in the recurrent population are only weakly correlated. Correlations can be induced, for example, by external input that is strongly synchronized across neurons, which accordingly degrades the precision of the adaptation mechanism.
When the network was tested in a learning scenario, however, this effect did not degrade performance: optimal performance was achieved, independently of the strength of the correlated input, for a spectral radius slightly below the critical value of 1. This leads us to conclude that flow control is able to regulate recurrent networks into a working regime optimal for information processing, independently of the strength of the external stimulation.
The second model considered is a two-compartment neuron model inspired by the specific anatomy of pyramidal neurons in the cortex. While a basal compartment aggregates synaptic input arriving at dendrites close to the soma, the second, apical compartment represents the complex dendritic tree structure found in the cortex. Earlier experiments showed that temporally correlated stimulation of both the basal and the apical compartment can evoke markedly higher neural activity than stimulation of either compartment alone. In our model we show that this coincidence-detection effect allows the input to the apical compartment to be used as a teaching signal for synaptic plasticity in the basal compartment. Dual homeostasis again comes into play, as it ensures in both compartments that the synaptic input stays, with respect to its temporal mean and variance, within the range required for the learning process. Using a learning scenario consisting of a linear binary classification, we show that the described framework is suitable for biologically plausible supervised learning.
The two models exemplify the relevance of dual homeostasis with regard to two aspects. The first is the regulation of recurrent neural networks into a dynamical state optimal for information processing; here the effect of the adaptation manifests in the behavior of the network as a whole. Second, as shown in the second model, dual homeostasis can also matter for plasticity and learning processes at the level of individual neurons. While neural homeostasis in the classical sense is limited to regulating parts of the system as precisely as possible towards a desired mean value, the models discussed demonstrate that controlling the magnitude of fluctuations can likewise affect the functionality of neural systems.
In this work, the ionization of atoms and molecules in strong laser fields was investigated experimentally. The COLTRIMS technique was used for the coincident measurement of the momenta of all ions and electrons originating from an ionization event. With the author's involvement, a COLTRIMS reaction microscope was rebuilt and equipped with a new spectrometer and an atomic hydrogen source. In addition, an interferometric setup for generating two-color fields was built. Each of the presented experiments yielded information about the electronic wave function at the boundary to the classically forbidden region, with respect to both the amplitude and the phase of the wave function. Three systems of different complexity were chosen: the hydrogen atom (Chapter 9), the hydrogen molecule (Chapter 10), and the argon dimer (Chapter 11).
Following the first realizations of Bose-Einstein condensates, further innovative experiments appeared that were devoted to quantum gases trapped in optical lattices. These numerous scientific investigations led to a better understanding of the properties of Bose-Einstein condensates. The principle of many-body systems trapped in a periodic potential offered a platform for studying further quantum phases.
A conceptually simple modification of such systems is obtained by coupling the ground states of the trapped particles to highly excited states by means of an external light source. If these states lie close to the ionization threshold of the atom, they are called Rydberg states, and atoms excited to these states are called Rydberg atoms. One of the many characteristic properties of Rydberg atoms is their ability to interact over distances far beyond atomic length scales. Accordingly, in the context of many-body systems, crystal structures of trapped Rydberg atoms have been observed experimentally.
The question then arises as to what happens to a trapped Bose-Einstein condensate whose particles are coupled to long-range interacting states. Is there a parameter regime in which both crystalline structure and superfluidity can coexist in such systems? This is the central question of this work, which deals with the theory of trapped quantum gases coupled to Rydberg states.
Most elements heavier than iron are synthesized in stars during neutron capture reactions in the r- and s-process. The s-process nucleosynthesis is composed of a main and a weak component. While the s-process is considered to be well understood, further investigations using nucleosynthesis simulations rely on measured neutron capture cross sections as crucial input parameters. Neutron capture cross sections relevant for the s-process can be measured using various experimental methods. A prominent example is the activation method relying on the 7Li(p,n)7Be reaction as a neutron source, which has the advantage of high neutron intensities and is able to create a quasi-stellar neutron spectrum at kBT = 25 keV. Other neutron sources able to provide quasi-stellar spectra at different energies suffer from lower neutron intensities. Simulations using the PINO tool suggest activating samples with several different neutron spectra provided by the 7Li(p,n)7Be reaction and subsequently forming a linear combination of the obtained spectrum-averaged cross sections to determine the Maxwellian-averaged cross section (MACS) at various energies of astrophysical relevance. To investigate the accuracy of the PINO tool at proton energies between the neutron emission threshold at Ep = 1880.4 keV and 2800 keV,
measurements of the 7Li(p,n)7Be neutron fields are presented, which were carried out at the PTB Ion Accelerator Facility at the Physikalisch-Technische Bundesanstalt in Braunschweig. The neutron fields of ten different proton energies were measured.
The presented neutron fields show good agreement with the simulations at proton energies Ep = 1887, 1897, 1907, 1912, and 2100 keV. For the other proton energies, Ep = 2000, 2200, 2300, 2500, and 2800 keV, differences between measurement and simulation were found and are discussed. The obtained results can be used to benchmark and adapt the PINO tool, and they provide crucial information for further improvement of the neutron activation method for astrophysics.
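The spectrum averaging at the heart of this method can be made concrete: the MACS at thermal energy kBT is the cross section folded with a Maxwell-Boltzmann flux. A minimal numerical sketch (using a hypothetical 1/v-shaped cross section in arbitrary units — not the PINO tool, which additionally models the realistic 7Li(p,n)7Be spectra) might read:

```python
import numpy as np

def macs(sigma, kT, n=200_000):
    """Maxwellian-averaged cross section at thermal energy kT (keV):
    MACS(kT) = (2/sqrt(pi)) * int_0^inf sigma(E) * E * exp(-E/kT) dE / kT**2,
    evaluated with the trapezoidal rule on [0, 40*kT].
    """
    E = np.linspace(1e-6, 40.0 * kT, n)               # energy grid in keV
    f = sigma(E) * E * np.exp(-E / kT)                # Maxwellian-weighted integrand
    dE = E[1] - E[0]
    integral = np.sum(0.5 * (f[1:] + f[:-1])) * dE    # trapezoidal rule
    return (2.0 / np.sqrt(np.pi)) * integral / kT**2

# Sanity check: for a 1/v cross section, sigma(E) = c / sqrt(E),
# the analytic result is MACS(kT) = sigma(E = kT).
sigma_1v = lambda E: 1.0 / np.sqrt(E)
print(macs(sigma_1v, 25.0))    # close to sigma(25 keV) = 0.2
```

The 1/v shape is chosen only because its MACS is known in closed form, which makes the numerics easy to verify.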
As an application of the 7Li(p,n)7Be neutron fields, an activation experiment campaign on gallium is presented, an element that is mostly produced during the weak s-process in massive stars. The available cross section data for the 69,71Ga(n,γ)
reactions, mostly determined by activation measurements, show differences of up to a factor of three. To improve the data situation, activation measurements were carried out using the 7Li(p,n)7Be reaction. The neutron capture cross sections for
a quasi-stellar neutron spectrum at kBT = 25 keV were determined for 69Ga and 71Ga.
In the long term, the minor actinides dominate the radioactivity of the entire spent nuclear fuel and can therefore, although they make up only about 0.2 % of it, be regarded as the main cause of the final-storage problem.
Besides direct disposal and the problems associated with it, transmutation offers an alternative for handling this type of radioactive waste. Here, the minor actinides are driven to fission by neutron capture, which is intended to significantly reduce both their half-life and their radiotoxicity.
The MYRRHA project presented in this thesis, which is to be realized in Mol, Belgium, is intended to demonstrate that transmutation is possible on an industrial scale. MYRRHA is a so-called ADS (Accelerator Driven System), in which a 4 mA proton beam at 600 MeV generates neutrons via spallation in an LBE (Lead-Bismuth Eutectic) target; these neutrons are required for transmutation in an otherwise subcritical reactor. Since such a facility places enormous demands on the reliability of the particle beam in order to keep the thermal stress inside the reactor as low as possible, high demands are also placed on the cavities used in the accelerator.
Particular attention must be paid to the injector, in which the proton beam is accelerated to 16.6 MeV; in its current design, only normal-conducting cavities are used.
The first accelerating component after the ion source is a 4-rod RFQ built as part of this work, whose RF design is based on the MAX prototype already tested at the IAP.
For the MYRRHA RFQ, a new type of dipole compensation for 4-rod RFQs was developed, which has already been applied in other accelerators such as the new HLI RFQ prototype. In this scheme, the stems on which the electrodes are mounted are alternately widened in order to lengthen the current path to the lower electrode pair, thereby increasing the voltage there. In the course of this development, simulation and measurement methods were devised to investigate the dipole component both in already built and in future 4-rod RFQs. The success of this novel dipole compensation was validated in the low-level measurements that followed the assembly of the MYRRHA RFQ.
The CH section, which follows the RFQ and the MEBT in the MYRRHA injector, consists of a total of 16 normal-conducting cavities. It is divided into seven accelerating CH structures, followed by a CH rebuncher and another eight accelerating CH structures.
In this work, building on existing drafts, the design of the first seven CH structures of the MYRRHA injector was created and optimized with respect to its RF properties.
The problem of a parasitic tuner mode that appeared during the simulations of CH1 was circumvented with the aid of numerous further simulations.
Furthermore, the cooling concept known from the FRANZ CH was revised in order to guarantee high thermal stability; several different concepts were developed, simulated, and evaluated.
The RF and cooling design of the first seven MYRRHA CH structures developed in this way serves as a template for the remaining MYRRHA CH structures as well as for future accelerator projects such as HBS at Forschungszentrum Jülich.
Following the design phase, the first two CH structures of the injector and an additional thick-film copper-plated lid for CH1 were manufactured by the companies NTG and PINK and then subjected to low-level measurements, which confirmed the simulation results and additionally served as preparation for the conditioning.
Both the MYRRHA RFQ and the CH structures were prepared for later beam operation by conditioning after their respective low-level measurements.
The conditioning of the MYRRHA RFQ took place in two phases. It was first pre-conditioned in cw operation in the experimental hall of the IAP before being transported to Louvain-la-Neuve. In the conditioning continued there, performed both pulsed and in cw operation, 120 kW cw could be stably coupled in within the scope of this work; this transmitted power was later increased by SCK to up to 145 kW cw. After completion of the conditioning, X-ray spectra were recorded by both the IAP and SCK in order to determine the shunt impedance. The results of these measurements, together with the alternative determination of the shunt impedance via the R/Q value, are also discussed in this thesis.
The CH cavities were conditioned in the bunker of the IAP experimental hall, during which new conditioning methods were additionally developed and tested. The concluding investigations after each of the three conditioning runs yielded insights into the thermal behaviour of the CH structures and into the influence of different cooling-circuit configurations on it, which will also be useful for the installation of future CH structures.
Single-electron transport in focused electron beam induced deposition (FEBID)-based nanostructures
(2022)
With the increasing complexity of integrated circuits at the nanometre scale, ever more innovative fabrication techniques are required. This demands a strong focus on the control of accurate structure fabrication and on material purity, in the context of scalable production. Against this background, focused electron beam induced deposition (FEBID) has gained growing attention in the field of nanostructuring. The FEBID process is based on the local deposition of material on a substrate: the deposit forms through the dissociation of precursor molecules upon interaction with an electron beam. The precursor Me3PtCpMe may serve as an example. The material deposited on the substrate consists of platinum crystallites a few nanometres in size, embedded in a matrix of amorphous carbon. Pt-C FEBID deposits are nano-granular metals whose electrical transport properties result from the interplay of diffusive charge transport within the Pt crystallites and temperature-dependent tunnelling effects. The greatest interest in these materials lies in the possibility of fabricating structures for technical applications at the nanometre scale.
In this work, applications based on single-electron effects were selected to test FEBID-based sample preparation. To enable single-electron transport, which relies on the tunnelling of individual electrons, all parameters such as the sizes and spacings of the structures must be defined with high precision. Within this thesis, single-electron devices were developed based on two different applications of the Pt-C FEBID process: 1) arrays of gold nanoparticles (Au-NPs), contacted by Pt structures that were prepared with FEBID and subsequently purified; 2) single-electron transistors (SETs), whose islands consist of electron-post-irradiated Pt-C FEBID deposits. The electrical properties of the prepared nanostructures were characterized and related to the achieved resolution and material quality. Optimizations of the preparation method were carried out that directly increase the conductivity of the Pt-C FEBID material. This can be achieved by modifying the
carbon matrix or by increasing the metal content of the structure. In this work, a catalytic purification method for Pt-C FEBID structures was used for two applications: first, the purified structures served as seed layers for subsequent area-selective atomic layer
deposition (AS-ALD) of Pt thin films. Second, this technique was used to create metal bridges between NP groups, randomly deposited on the substrate by drop-casting, and Cr-Au contacts previously prepared by UV lithography (UVL). An NP group is a periodic, granular array of particles that are uniform in size and shape and exhibit varying degrees of order. The deposition method allows the arrangement of the nanoparticles to be influenced by breaking and forming connections. These systems behave like tunnel junctions with Coulomb blockade and exhibit a distribution of threshold voltages. The results of the electrical measurements confirm single-electron transport through the nanoparticles in a typical weak-coupling electron transport regime. Despite these results, the application of this technique for SET nanostructuring was not successful. The cause
could be traced back to the presence of Pt particles near the contacts to the Au-NP arrays. These Pt particles formed near the intended structure
during the FEBID fabrication process. For this reason, the FEBID co-deposit was removed in the subsequent SET nanofabrication.
An SET is based on a nano-island connected to source and drain electrodes via tunnel junctions. In addition, it is capacitively coupled to one
or more gate electrodes. The island contains a fixed number of electrons.
In this work, the source, drain, and gate contacts were created by etching with a focused gallium beam, which enabled gaps of 50 nm, whereas the SET island was fabricated from Pt-C FEBID material. The conductivity of the Pt-C island was increased by subsequent electron irradiation. As a final preparation step, a novel argon etching procedure was used to remove the FEBID co-deposits in the immediate vicinity of the island. Electron post-irradiation allows the coupling of the individual metallic crystallites to be tuned. The effects of tunnel junctions of different strengths on the electronic properties of the island and the resulting performance of the SET were observed in this work ...
Classical light microscopy is one of the main tools of science for studying small things. Microscopes, their technology, and their optics have been developed and improved over centuries; however, their resolution is ultimately limited by the diffraction of light, a consequence of its wave nature as described by Maxwell’s equations. Hence, the nanoworld – often characterized by sub-100-nm structural sizes – is not accessible with classical far-field optics (apart from special x-ray laser concepts), since the lateral resolution scales with the wavelength.
It was not until the 20th century that various technologies emerged to circumvent the diffraction limit, including so-called near-field microscopy. Although conceptually based on Maxwell’s long-known equations, it took a long time for the scientific community to recognize its powerful opportunities and for the first embodiments of near-field microscopes to be developed. One representative of them is the scattering-type Scanning Near-field Optical Microscope (s-SNOM). It is a Scanning Probe Microscope (SPM) that enables imaging and spectroscopy from visible light frequencies down to even radio waves with a sub-100-nm resolution regardless of the wavelength used. This work also reflects this wide spectral range, as it contains applications from near-infrared light down to deep THz/GHz radiation.
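The scale gap referred to above follows directly from the Abbe diffraction limit d = λ/(2·NA); a small illustration (the numerical-aperture values are illustrative, not taken from this work):

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Lateral Abbe diffraction limit d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Even green light with a high-NA objective cannot resolve sub-100-nm features:
print(abbe_limit_nm(550.0, 0.9))      # ~306 nm
# Mid-infrared light is worse by more than an order of magnitude, whereas
# s-SNOM resolution is set by the tip radius (tens of nm), independent of
# the wavelength used:
print(abbe_limit_nm(10_000.0, 0.5))   # 10000 nm = 10 um
```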
This thesis is subdivided into two parts. First, new experimental capabilities for the s-SNOM are demonstrated and evaluated in a more technical manner. Second, among other things, these capabilities are used to study various transport phenomena in solids, as already indicated in the title.
On the technical side, preliminary studies on the suitability of the qPlus sensor – a novel scanning probe technology – for near-field microscopy are presented.
The scanning head incorporating the qPlus sensor – named TRIBUS – was originally intended and built for ultra-high vacuum, low temperature, and high resolution applications. These are desirable environments and properties for sensitive near-field measurements as well. However, since its design was not planned for near-field measurements, several special technical and optical aspects have to be taken into account, among others the scanning tip design and a spring-suspended measurement head.
In addition, in this thesis field-effect transistors are used as THz detectors in an s-SNOM for the first time. Although THz s-SNOM is already an emerging technology, it still suffers from the requirements of sophisticated and specialized infrastructure on both the detector and laser side. Field-effect transistors offer an alternative that is flexible, cost-efficient, room-temperature operating, and easy to handle. Here, their suitability for s-SNOM measurements, which in general require very sensitive and fast detectors, is evaluated.
In the scientific part of this thesis, electromagnetic surface waves on silver nanowires and the conductivity/charge carrier density in silicon are investigated. These are two completely different transport phenomena, which already demonstrates the general versatility of the s-SNOM, as it can enter both fields. Silver nanowires are analysed by means of near-infrared radiation. Their plasmonic behaviour in this spectral region is studied, complementing simulations and studies in the literature performed on them using, for example, far-field optics.
Furthermore, the surface wave imaging ability of the s-SNOM in the near-infrared regime is thoroughly investigated in this thesis. Mapping surface waves in the mid-infrared regime is widespread in the community; for much smaller wavelengths, however, several additional aspects have to be considered, such as the smaller focal spot size.
After that, doped and photo-excited silicon substrates are investigated. As the characteristic frequencies of charge carriers in semiconductors – described by the plasma frequency and the Drude model – lie within the THz range, the THz s-SNOM is very well suited to probe their behaviour and to reveal contrasts, as has already been shown qualitatively by numerous literature reports. Here, photo-excitation makes it possible to set and tune the charge carrier density continuously.
Furthermore, the analysis of all silicon samples focuses on a quantitative extraction of the charge carrier densities and doping levels ...
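The link between carrier density and THz response invoked above follows from the Drude plasma frequency ω_p = sqrt(n·e²/(ε₀·m*)); a minimal sketch (the density and effective-mass values are illustrative, not the thesis' measured ones):

```python
import math

E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0     = 8.8541878128e-12     # vacuum permittivity, F/m
M_E      = 9.1093837015e-31     # electron mass, kg

def plasma_frequency_hz(n_per_cm3: float, m_eff_ratio: float) -> float:
    """Drude plasma frequency nu_p = omega_p / (2 pi), with
    omega_p = sqrt(n e^2 / (eps0 m*)) for carrier density n."""
    n = n_per_cm3 * 1e6                          # cm^-3 -> m^-3
    omega_p = math.sqrt(n * E_CHARGE**2 / (EPS0 * m_eff_ratio * M_E))
    return omega_p / (2.0 * math.pi)

# Moderately doped silicon (conductivity effective mass ~0.26 m_e):
nu = plasma_frequency_hz(1e17, 0.26)
print(f"{nu / 1e12:.1f} THz")   # ~5.6 THz — well matched to THz s-SNOM
```

Raising the density by photo-excitation shifts ν_p ∝ √n, which is what makes the contrast tunable.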
We discuss aspects of the phase structure of a three-dimensional effective lattice theory of Polyakov loops derived from QCD by strong coupling and hopping parameter expansions. The theory is valid for the thermodynamics of heavy quarks, where it shows all qualitative features of nuclear physics emerging from QCD. In particular, the SU(3) pure gauge effective theory also exhibits a first-order thermal deconfinement transition due to spontaneous breaking of its global Z₃ center symmetry. The presence of heavy dynamical quarks breaks this symmetry explicitly and consequently, the transition weakens with decreasing quark mass until it disappears at a critical endpoint. At non-zero baryon density, the effective theory can be evaluated either analytically by the so-called high-temperature expansion, which does not suffer from the sign problem, or numerically by standard Monte-Carlo methods, owing to its mild sign problem. The first part of this work is devoted to a systematic derivation of the effective theory up to 6th order in the hopping parameter κ. This method, combined with the SU(3) link update algorithm, provides a way to simulate the O(κ⁶) effective theory. The second part involves a study of the deconfinement transition of the pure gauge effective theory, with and without static quarks, at all chemical potentials with the help of the high-temperature expansion. Our estimate of the deconfinement transition and its critical endpoint as a function of quark mass and all chemical potentials agrees well with recent Monte-Carlo simulations. In the third part, we investigate the Nf ∈ {1,2} effective theory with zero chemical potential up to O(κ⁴). We determine the location of the critical hopping parameter at which the first-order deconfinement phase transition terminates and changes to a crossover.
Our results for the critical endpoint of the O(κ²) effective theory are in excellent agreement with the determinations from simulations of four-dimensional QCD with a hopping-expanded determinant by the WHOT-QCD collaboration. For the O(κ⁴) effective theory, our estimate suggests that the critical quark mass increases as the order of the κ-contributions increases. We also compare with full lattice QCD with Nf = 2 degenerate standard Wilson fermions and thus obtain a measure for the validity of both the strong coupling and the hopping expansion in this regime.
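For orientation, the leading-order structure of such a Polyakov-loop effective theory, as it appears in the strong-coupling/hopping-expansion literature (conventions and higher-order terms vary, so this is a schematic sketch rather than the form derived in this work), is:

```latex
% Schematic leading-order effective partition function: a nearest-neighbour
% Polyakov-loop interaction from the strong-coupling expansion of the gauge
% action, times the static quark determinant from the leading hopping expansion.
\begin{equation}
  Z \propto \int \prod_x \mathrm{d}W_x
  \prod_{\langle xy \rangle}
    \bigl[\, 1 + \lambda \,( L_x L_y^{*} + L_x^{*} L_y ) \,\bigr]
  \prod_x
    \bigl( 1 + h\, L_x + h^2 L_x^{*} + h^3 \bigr)^{2N_f}
    \bigl( 1 + \bar{h}\, L_x^{*} + \bar{h}^2 L_x + \bar{h}^3 \bigr)^{2N_f}
\end{equation}
% Here L_x = \mathrm{Tr}\, W_x is the Polyakov loop, \lambda(\beta, N_\tau)
% the effective gauge coupling, and h = (2\kappa e^{a\mu})^{N_\tau},
% \bar{h} = (2\kappa e^{-a\mu})^{N_\tau} parametrize quarks and antiquarks.
```

The κ⁴ and κ⁶ corrections studied in this work add further (non-static) interaction terms on top of this leading structure.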
In this thesis, the emission of protons as well as the production of Λ hyperons, Κ0S mesons and 3ΛH hypernuclei are analyzed multi-differentially as a function of transverse momentum, rapidity and centrality. To this end, the 3.03 billion 30 % most central Ag(1.58A GeV)+Ag events recorded by HADES are used. Furthermore, the lifetimes of Λ hyperons, Κ0S mesons and 3ΛH hypernuclei are measured. The obtained 3ΛH lifetime of (253 ± 24 ± 42) ps is compatible with the lifetime of free Λ hyperons, as predicted by theoretical calculations due to its low binding energy. Finally, the doubly strange Ξ– hyperons are also reconstructed. Unfortunately, the fully optimized signals lie below the significance threshold of 5σ, which is why both a production rate and an upper production limit are estimated using averaged acceptance and efficiency corrections. Never before have 3ΛH or Ξ– been successfully reconstructed and analyzed in heavy-ion collisions at such low energies. The obtained results are compared to previous measurements and put in context with world data from different energies and collision systems.
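Lifetime measurements of this kind amount to fitting the proper-decay-time distribution dN/dt ∝ exp(−t/τ). A toy sketch of the principle on synthetic data (not the HADES analysis, which in reality must also correct for acceptance and efficiency):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Synthetic sample of proper decay times for a hypothetical true
# lifetime of 253 ps, ignoring detector effects.
TAU_TRUE_PS = 253.0
t = rng.exponential(TAU_TRUE_PS, size=100_000)

# For a pure exponential, the maximum-likelihood estimator of tau is
# simply the sample mean; its statistical error is tau / sqrt(N).
tau_hat = t.mean()
tau_err = tau_hat / np.sqrt(t.size)
print(f"tau = {tau_hat:.1f} +/- {tau_err:.1f} ps")
```

With 10^5 decays the statistical error is below 1 ps; in the real measurement the quoted uncertainties are dominated by the far smaller signal yield and by systematics.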
Present-day research in high energy physics as well as in nuclear physics requires ever more powerful and complex particle accelerators to provide high-luminosity, high-intensity, and high-brightness beams to experiments. With the increased technological complexity of accelerators, meeting the demands of experimenters necessitates a blend of accelerator physics and technology. The problem becomes severe when the beam quality has to be optimized in accelerator systems with thousands of free parameters, including the strengths of quadrupoles and sextupoles, RF voltages, etc. Machine learning methods and concepts of artificial intelligence are being adopted in various industrial and scientific branches, and recently these methods have been used in high energy physics, mainly for the analysis of experimental data.
In accelerator physics the machine learning approach has not yet found wide application, and in general these methods are used without a deep understanding of their effectiveness with respect to more traditional schemes or other alternative approaches. The purpose of this PhD research is to investigate machine learning methods applied to accelerator optimization and accelerator control, and in particular to optics measurements and corrections. Optics correction, maximization of acceptance, and simultaneous control of various accelerator components such as focusing magnets constitute a typical accelerator scenario. The effectiveness of machine learning methods in a complex system such as the Large Hadron Collider, whose beam dynamics exhibits a nonlinear response to machine settings, is the core of the study. This work presents successful applications of several machine learning techniques such as clustering, decision trees, linear multivariate models, and neural networks to beam optics measurements and corrections at the LHC, providing guidelines for the incorporation of machine learning techniques into accelerator operation and discussing future opportunities and potential work in this field.
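The "linear multivariate models" mentioned above can be illustrated by the classical response-matrix picture: measured optics deviations Δ are related to corrector settings δ by Δ ≈ R·δ, and a regularized least-squares inversion yields a correction. A toy sketch with a synthetic response matrix (not LHC data or the actual analysis code):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n_obs, n_corr = 40, 10                    # observables vs. corrector knobs
R = rng.normal(size=(n_obs, n_corr))      # synthetic response matrix

delta_true = rng.normal(scale=0.1, size=n_corr)                  # hidden machine errors
measured = R @ delta_true + rng.normal(scale=0.01, size=n_obs)   # noisy measurement

# Ridge-regularized least squares:
# delta_fit = argmin_d ||R d - measured||^2 + alpha ||d||^2
alpha = 1e-3
delta_fit = np.linalg.solve(R.T @ R + alpha * np.eye(n_corr), R.T @ measured)

# Applying the correction -delta_fit should push the residual deviation
# down to the measurement noise floor.
residual = measured - R @ delta_fit
print(np.linalg.norm(residual) / np.linalg.norm(measured))
```

The regularization term keeps the inversion stable when R is ill-conditioned, which is the typical situation for large machines with many correlated knobs.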
The main subject of this thesis is the study of hadron and photon production in relativistic heavy-ion collisions by means of hydrodynamics+transport approaches. Two different kinds of such hybrid approaches are employed in this work, the SMASH-vHLLE-hybrid and a MUSIC+SMASH hybrid. While the former is capable of simulating heavy-ion collisions covering a wide range of collision energies down to √s = 4.3 GeV, reproducing the correct baryon stopping powers, the latter provides a framework to consistently model photon production in the hadronic stage of high-energy heavy-ion collisions.
The SMASH-vHLLE-hybrid is a novel state-of-the-art hybrid approach whose development constitutes a major contribution to this thesis. It couples the hadronic transport SMASH to the 3+1D viscous hydrodynamics approach vHLLE. Therein, SMASH is employed to provide the fluctuating 3D initial conditions and to model the late hadronic rescattering stage, and vHLLE for the fluid dynamical evolution of the hot and dense fireball. The initial conditions are provided on a hypersurface of constant proper time, and the macroscopic evolution of the fireball is carried out down to an energy density of ecrit = 0.5 GeV/fm3, where particlization occurs. Consistency at the interfaces is verified in view of global, on-average quantum number conservation and the SMASH-vHLLE-hybrid is validated by comparison to SMASH+CLVisc as well as UrQMD+vHLLE hybrid approaches. The establishment of the SMASH-vHLLE-hybrid to theoretically describe heavy-ion collisions at intermediate and high collision energies forms a basis for a range of extensions and future research projects. It is further made available to the heavy-ion community by virtue of being published on Github.
The SMASH-vHLLE-hybrid is applied to simulate Au+Au/Pb+Pb collisions between √s = 4.3 GeV and √s = 200.0 GeV. Good agreement with the experimentally measured rapidity and transverse mass spectra is obtained. In particular, the baryon stopping dynamics are well reproduced at low, intermediate, and high collision energies. Excitation functions for the mid-rapidity yield and mean transverse momentum of pions, protons and kaons are demonstrated to agree well with their experimentally measured counterparts. These results further validate the approach and provide a solid baseline for potential future studies. The importance of annihilations and regenerations of protons and anti-protons is additionally investigated in Au+Au/Pb+Pb collisions between √s = 17.3 GeV and √s = 5.02 TeV with the SMASH-vHLLE-hybrid. It is found that, regarding the p + p̄ ↔ 5π reaction, 20-50% (depending on the rapidity range) of the (anti-)proton yield lost to annihilations in the hadronic rescattering stage is restored owing to the back reaction. The back reaction thus constitutes a non-negligible contribution to the final (anti-)proton yield and should not be neglected when modelling the late rescattering stage of heavy-ion collisions.
The MUSIC+SMASH hybrid is a hybrid approach ideally suited to model the production of photons in relativistic heavy-ion collisions. Therein, the macroscopic production of photons in the hadronic stage in MUSIC relies on the identical effective field theories as the photon cross sections implemented in SMASH for the microscopic production. The MUSIC+SMASH hybrid thus provides the first consistent framework for hadronic photon production. It accounts for 2 → 2 scattering processes of the kind π + ρ → π + γ and pion bremsstrahlung processes π + π → π + π + γ. The MUSIC+SMASH hybrid is employed in an ideal 2D setup to systematically assess the importance of non-equilibrium dynamics in the hadronic rescattering stage for the mid-rapidity transverse momentum spectra and elliptic flow of photons at RHIC/LHC energies. This is achieved by comparing the outcome of the MUSIC+SMASH hybrid, involving an out-of-equilibrium late rescattering stage, to a macroscopic approximation of late-stage photon production by means of MUSIC, employed down to temperatures well below the switching temperature. It is found that non-equilibrium dynamics have only minor implications for the photon transverse momentum spectra, but significantly enhance the photon elliptic flow. At RHIC energies an enhancement of up to 70%, and at the LHC of up to 65%, is observed in the non-equilibrium afterburner as compared to its hydrodynamical counterpart. In combination with the large number of photons produced above the particlization temperature, these differences remain modest in the transverse momentum spectra, but a significant enhancement of the elliptic flow is observed at low transverse momenta. Below pT ≈ 1.4 GeV, the combined v2 is enhanced by up to 30% at RHIC, and up to 20% at the LHC, within the non-equilibrium setup as compared to its approximation via hydrodynamics.
Non-equilibrium dynamics in the hadronic rescattering stage are hence important, especially in view of momentum anisotropies at low transverse momenta. These findings thus contribute to the understanding of low-pT photons produced in heavy-ion collisions at RHIC/LHC energies and the MUSIC+SMASH hybrid employed for this study provides a baseline for additional studies regarding photon production in the future.
To summarize, the approaches and frameworks presented in this thesis provide a good baseline for further extensions and studies aimed at improving the understanding of hadron and photon production in relativistic heavy-ion collisions across a wide range of collision energies. More broadly, such future studies of hadrons and photons may contribute to a better understanding of the properties of the fundamental building blocks of matter, of which everything around us is made.